Senior Modeling Engineer
Neurophos
Other Engineering
United States · California, USA · San Jose, CA, USA · San Mateo, CA, USA
Posted on Jan 25, 2026
About Neurophos
We are developing an ultra-high-performance, energy-efficient photonic AI inference system. We’re transforming AI computation with the first-ever metamaterial-based optical processing unit (OPU).
As AI adoption accelerates, data centers face significant power and scalability challenges. Traditional solutions are struggling to keep up, leading to rapidly rising energy consumption and costs. We’re solving both problems with an OPU that integrates over one million micron-scale optical processing components on a single chip. This architecture will deliver up to 100 times the energy efficiency of existing solutions while significantly improving large-scale AI inference performance.
We’ve assembled a world-class team of industry veterans and recently raised a $110M Series A led by Gates Frontier. Participating investors include M12 (Microsoft’s Venture Fund), Carbon Direct Capital, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital, and others. We have also been recognized on the EE Times Silicon 100 list for several consecutive years.
Join us and shape the future of optical computing!
Location: San Francisco Bay Area or Austin, TX. Full-time onsite position.
Position Overview
We are seeking experienced hardware modeling engineers to develop sophisticated functional and performance models that define the next generation of Neurophos chips. You will implement models of novel compute blocks, including optical GEMM engines, SRAM vector processors, and dataflow architectures within our YinYang event-driven framework. This role offers the opportunity to work on cutting-edge hardware that doesn't exist anywhere else while shaping modeling methodology from the ground up.
Key Responsibilities
- Implement functional models (fmod) of optical compute engines, vector processors, and memory systems
- Develop performance models (pmod) with discrete-event timing and power estimation (an illustrative sketch follows this list)
- Work within the YinYang (libyy) event-driven framework to build modular, reusable components
- Design clean abstractions and interfaces between hardware blocks
- Integrate with Verilator/SystemVerilog for RTL co-simulation and validation
- Build trace infrastructure for both coupled and independent simulation modes
- Validate models against RTL and contribute to architectural validation efforts
- Collaborate with architects, RTL designers, and software engineers
- Optimize simulation performance while maintaining modeling fidelity
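To give a flavor of the event-driven modeling work described above, here is a minimal, self-contained C++ sketch of a discrete-event simulator kernel driving a hypothetical performance model of a matrix-multiply engine. The YinYang (libyy) framework is proprietary and not shown here; every type and name in the sketch (Simulator, Event, GemmEnginePmod, Tick) is an illustrative stand-in and not part of libyy or any Neurophos codebase.

```cpp
// Minimal sketch of a discrete-event performance model (pmod).
// NOTE: the YinYang (libyy) framework is not public; the types below are
// hypothetical stand-ins that only illustrate the modeling style.
#include <cstdint>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

using Tick = std::uint64_t;  // simulation time in arbitrary ticks

// An event is a callback scheduled to run at a given tick.
struct Event {
    Tick when;
    std::function<void()> action;
    bool operator>(const Event& other) const { return when > other.when; }
};

// A tiny event-driven simulator kernel: pops events in time order.
class Simulator {
public:
    void schedule(Tick delay, std::function<void()> action) {
        queue_.push(Event{now_ + delay, std::move(action)});
    }
    void run() {
        while (!queue_.empty()) {
            Event ev = queue_.top();
            queue_.pop();
            now_ = ev.when;
            ev.action();
        }
    }
    Tick now() const { return now_; }

private:
    Tick now_ = 0;
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
};

// Hypothetical pmod of a matrix-multiply engine: it performs no real math,
// only models latency and accumulates a crude energy proxy.
class GemmEnginePmod {
public:
    GemmEnginePmod(Simulator& sim, Tick latency_per_tile)
        : sim_(sim), latency_per_tile_(latency_per_tile) {}

    void issue(int tiles) {
        Tick latency = latency_per_tile_ * static_cast<Tick>(tiles);
        energy_proxy_ += tiles;  // placeholder: one energy unit per tile
        sim_.schedule(latency, [this, tiles] {
            std::cout << "t=" << sim_.now() << ": GEMM of " << tiles
                      << " tiles retired (energy proxy=" << energy_proxy_
                      << ")\n";
        });
    }

private:
    Simulator& sim_;
    Tick latency_per_tile_;
    std::uint64_t energy_proxy_ = 0;
};

int main() {
    Simulator sim;
    GemmEnginePmod gemm(sim, /*latency_per_tile=*/8);
    gemm.issue(4);   // completes at t=32
    gemm.issue(16);  // also issued at t=0; completes at t=128
    sim.run();
}
```

In a real pmod, latency and energy figures would come from calibrated block-level models and events would carry transaction payloads; this sketch only shows the scheduling pattern behind discrete-event timing models.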
Required Qualifications
- BS, MS, or PhD in Computer Engineering, Electrical Engineering, or Computer Science
- 5-7+ years of experience in hardware modeling, functional simulation, or performance modeling
- Strong C++ programming skills (modern C++17/20/23 preferred)
- Experience with hardware modeling frameworks, transaction-level modeling, or event-driven simulation
- Understanding of computer architecture fundamentals (pipelines, memory systems, accelerators)
- Ability to balance modeling fidelity with simulation speed based on analysis objectives
- Strong debugging and validation skills for complex hardware models
- Effective communication and collaboration across hardware/software teams
- Python proficiency for scripting, analysis, and automation
Preferred Qualifications
- Experience with SystemC, TLM 2.x, or custom event-driven simulation frameworks
- Background in accelerator modeling (GPU, TPU, NPU, DSP)
- Familiarity with Verilator, SystemVerilog, or RTL co-simulation
- Knowledge of memory system modeling (HBM, DRAM, caches)
- Understanding of ML workloads and framework internals (PyTorch, TensorFlow)
- Experience with performance analysis, profiling, and bottleneck identification
- Exposure to power modeling frameworks (McPAT, CACTI)
- Background in optical computing, photonics, or analog computing
- Experience with trace-driven simulation methodologies
What We Offer
- A pivotal role in an innovative startup redefining the future of AI hardware.
- A collaborative and intellectually stimulating work environment.
- Competitive compensation, including salary and equity options.
- Opportunities for career growth and future team leadership.
- Access to cutting-edge technology and state-of-the-art facilities.
- Opportunity to publish research and contribute to the field of efficient AI inference.