Egra

Hardware Engineer

Posted 3 Days Ago
In-Office
New York City, NY, USA
150K-200K Annually
Mid level

Hi, I'm Brian, Co-Founder of Egra. We just raised $5.5M to build foundation models for brain signals, and we're looking for a hardware engineer to build the devices that make it all possible.

You'll have complete ownership over your work from day one. No lengthy onboarding, no waiting for permission, no navigating layers of approval. A small founding team, hard physics problems, and the resources to solve them. You'll define the hardware direction, make design decisions, and build the physical foundation of what becomes our core technology. If you thrive with high agency and want your work to directly shape the company's trajectory, this is that opportunity.

What you'd be doing

EEG — electrical brain activity recorded from the scalp — is one of the hardest real-world signal modalities in ML: low signal-to-noise ratio, massive subject variability, and device inconsistencies. Our ML models are only as good as the data they train on, and the data is only as good as the device that captures it. We need to build our own.

As a hardware engineer, you'd work directly with us to design and build the hardware that collects the data our models learn from. To make that concrete, these are the kinds of projects you'd own:

  • Designing EEG acquisition hardware — electrode arrays (dry and semi-dry), analog front-end circuits (ADS1299 or similar), signal conditioning, and noise management.

  • Building wearable form factors — designing devices people forget they're wearing. Think baseball caps with hidden dry electrodes, behind-the-ear rigs, or earbuds with neural sensing. Rapid iteration with 3D printing, flexible PCBs, and off-the-shelf components.

  • Writing firmware and streaming infrastructure — embedded code that captures synchronized, timestamped EEG data and streams it reliably to our software stack. Timing precision matters — we're pairing brain signals with screen content, keystrokes, and user actions at millisecond resolution.

  • Benchmarking existing devices — systematically evaluating commercial hardware (Muse, Emotiv, OpenBCI) against our signal quality and comfort requirements. Understanding exactly where they fall short and why.

  • Integrating the data collection rig end-to-end — device → firmware → streaming → synchronization with action/context capture → storage. You'll be the bridge between the physical signal and the training data our models consume.
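The synchronization step in the list above is the subtle part: EEG samples arrive at a fixed device rate while events (keystrokes, screen changes) are timestamped on the host clock. A minimal sketch of nearest-sample alignment, with invented names (`EEGBlock`, `SAMPLE_RATE_HZ` are illustrative, not our actual stack):

```python
import bisect
from dataclasses import dataclass

SAMPLE_RATE_HZ = 250  # a common EEG rate; the ADS1299 family supports 250 SPS and up

@dataclass
class EEGBlock:
    start_sample: int  # index of the first sample in this block
    t0_ns: int         # host-clock timestamp of the first sample, nanoseconds
    samples: list      # one list of per-channel values per sample

def sample_timestamp_ns(block: EEGBlock, i: int) -> int:
    """Timestamp of the i-th sample in a block, assuming a stable sample clock."""
    return block.t0_ns + round(i * 1e9 / SAMPLE_RATE_HZ)

def align_event(event_t_ns: int, block: EEGBlock) -> int:
    """Return the index of the sample nearest in time to an event (e.g. a keystroke)."""
    timestamps = [sample_timestamp_ns(block, i) for i in range(len(block.samples))]
    i = bisect.bisect_left(timestamps, event_t_ns)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # pick whichever neighboring sample is closer in time
    return i if timestamps[i] - event_t_ns < event_t_ns - timestamps[i - 1] else i - 1
```

At 250 Hz each sample spans 4 ms, so nearest-sample alignment already gives millisecond-scale pairing; in practice you also have to estimate and correct drift between the device and host clocks, which this sketch ignores.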

This isn't a role where you design something and throw it over the wall. You'll be in the room when we look at t-SNE plots and figure out whether the signal you're capturing is actually good enough for the models to learn from. Hardware and ML are tightly coupled here.

Where this is going

We're building toward a world where thought is an interface.

You silently compose a message and it types itself. You navigate an AR display without lifting a finger. Software adapts to your cognitive state in real time. A universal interface between human thought and digital action.

The product we're building to get there has three layers:

  1. A Neural Encoder: a foundation model that maps raw EEG into robust, reusable embeddings that work across devices, subjects, and contexts

  2. A Neural API: a stable interface that any app can call to ask: "What is the user's state?" "What intent is most likely?" "What changed?"

  3. Reference applications: proving utility and driving our data collection flywheel
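To illustrate the API layer above, here is a hypothetical sketch of what that surface could look like. Every name and shape here (`NeuralAPI`, `StateEstimate`, the encoder methods) is invented for illustration, not the actual product:

```python
from dataclasses import dataclass

@dataclass
class StateEstimate:
    label: str         # e.g. "focused", "drowsy", "high_load"
    confidence: float  # 0.0 - 1.0

class NeuralAPI:
    """Illustrative client interface; method names and payloads are invented."""

    def __init__(self, encoder):
        self.encoder = encoder  # the Neural Encoder: raw EEG windows -> embeddings

    def user_state(self, eeg_window) -> StateEstimate:
        """'What is the user's state?' — classify the latest embedding."""
        embedding = self.encoder.embed(eeg_window)
        return self.encoder.classify_state(embedding)

    def likely_intent(self, eeg_window) -> str:
        """'What intent is most likely?' — limited-vocabulary command decoding."""
        embedding = self.encoder.embed(eeg_window)
        return self.encoder.decode_intent(embedding)
```

The design point is that apps talk to stable embeddings and queries, never to raw EEG, which is what lets the hardware underneath keep evolving.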

Near-term, the use cases are already real. A limited vocabulary of thought-to-action commands (volume, select, activate, navigate) would feel like magic to consumers. Sleep staging, stress detection, cognitive load monitoring, and engagement measurement are all feasible with today's signal quality. On the clinical side, we're pursuing avenues like epilepsy monitoring and migraine pre-emption as a wedge for high-quality data, credibility, and early revenue.

Hardware matters too. No comfortable, discreet consumer device today covers the brain regions needed for language decoding. We'll eventually design our own. Think a normal-looking baseball cap with dry electrodes hidden in the brim, or something that looks more like AirPods than a medical device. The model needs to be hardware-agnostic, because the form factors will keep evolving.

Research culture

We have a few strong opinions about how we work:

Speed over perfection. The first version of everything will be ugly. That's fine. We'd rather have a working prototype collecting real data this month than a beautiful design that ships in six months.

Hardware serves the data. Every design decision is evaluated by one question: does this produce better training data for our models? Signal quality, comfort, reliability, synchronization — all in service of the ML.

Internal criticism is encouraged. The fastest way to build real knowledge is to kill bad ideas early. We want people who are comfortable saying "this design won't work because..."

Failed experiments are documentation, not waste. We write up what doesn't work with the same care as what does.

Who we're looking for

Ideally, you have direct experience designing and building biosignal acquisition hardware — EEG, EMG, ECG, or other electrophysiology. You've already learned what works and what doesn't with wearable sensing, and you won't need to rediscover those lessons. That said, if you come from a closely related hardware domain (e.g., consumer wearables, medical devices, sensor systems) and have genuine curiosity about neurotech, we're open to that too.

You should have:

  • Experience designing analog front-end circuits for biosignal acquisition (EEG, EMG, ECG, or similar)

  • Proficiency with PCB design tools (KiCad, Altium, or Eagle) and rapid prototyping (3D printing, laser cutting)

  • Embedded firmware development (C/C++ on ARM Cortex or similar microcontrollers)

  • Understanding of electrode-skin interfaces, impedance, noise sources, and signal conditioning for wearable devices

  • Ability to work fast and scrappy — modifying off-the-shelf hardware, improvising with available components, iterating daily
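One concrete number behind the "noise sources" bullet above: the thermal (Johnson) noise floor set by the electrode-skin impedance, v_rms = sqrt(4·k·T·R·B). This is standard physics; the resistance and bandwidth values below are illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(r_ohms: float, bandwidth_hz: float, temp_k: float = 310.0) -> float:
    """RMS Johnson noise of a resistance over a bandwidth: sqrt(4 k T R B).

    Default temperature is ~body temperature (310 K).
    """
    return math.sqrt(4 * K_B * temp_k * r_ohms * bandwidth_hz)

# A dry electrode might present ~100 kOhm; the EEG band is roughly 0.5-100 Hz.
noise_uv = thermal_noise_vrms(100e3, 99.5) * 1e6  # ~0.4 uV RMS
```

At roughly 0.4 µV RMS that floor sits well below typical scalp EEG amplitudes (tens of µV), but real dry electrodes add 1/f and motion artifacts far above pure thermal noise, which is why contact impedance and mechanical stability dominate the design.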

You should NOT apply if:

  • You need a fully equipped hardware lab and a long timeline to ship anything

  • You're uncomfortable with ambiguity or making design decisions without complete information

  • You see hardware and software as separate worlds — here they're deeply intertwined

Interview process

Our process is three conversations:

  1. 30-minute intro call. We'll tell you what we're building, you'll tell us what you've built. Casual, honest, no prep needed.

  2. 30-minute technical conversation. We'll walk through a real hardware design problem together — electrode placement tradeoffs, noise management strategies, or form factor constraints. No trick questions — we want to see how you think about physical systems.

  3. 30-minute deep dive. You'll meet both founders. We'll go deeper on your past builds, talk about design taste, and figure out if we'd enjoy working together every day.

Benefits
  • Competitive salary and meaningful equity

  • Platinum-tier health insurance

  • Equipment and prototyping budget — get the tools and components you need

  • Full design autonomy: own the problem, not just a task list

  • No bureaucracy, no review committees

  • Conference budget

  • Relocation and visa support (flexible on remote)

Top Skills

3D Printing
Altium
Analog Front-End Circuits
ARM Cortex Microcontrollers
ECG
EEG
EMG
Eagle
Embedded Firmware Development
KiCad
Laser Cutting
PCB Design


