Hi, I'm Brian, Co-Founder of Egra. We just raised $5.5M to build foundation models for brain signals, and we're looking for research scientists to join our founding team.
You'll have complete ownership over your work from day one. No lengthy onboarding, no waiting for permission, no navigating layers of approval. A small founding team, deep technical problems, and the resources to solve them. You'll define the research direction, make architectural decisions, and build the foundations of what becomes our core technology. If you thrive with high agency and want your work to directly shape the company's trajectory, this is that opportunity.
What you'd be doing
EEG — electrical brain activity recorded from the scalp — is one of the hardest real-world signal modalities in ML: low signal-to-noise ratio, massive subject variability, and device inconsistencies. Most people avoid it for these reasons.
As a founding research scientist, you'd be working directly with us to figure out what actually works. To make that concrete, here are the kinds of projects you'd own:
Designing self-supervised pretraining objectives that force generalization across subjects, devices, and recording conditions
Stress-testing existing approaches (like recent EEG foundation model papers) to understand exactly where and why they break: cross-dataset, cross-montage, under distribution shift
Building evaluation protocols that distinguish real progress from noise, so we're not fooling ourselves with leaky benchmarks
Writing internal research memos that become the shared knowledge base of the lab — "why model X fails on dataset Y," "what Z dataset teaches us incorrectly," "what we tried and why it didn't work"
This is about building genuine understanding of a modality that very few people have studied with modern ML tools and turning that understanding into something that compounds.
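For a flavor of the pretraining side, here is a minimal sketch of an InfoNCE-style contrastive objective — one common family of self-supervised objectives, offered purely as an illustration rather than our actual method. Two views of the same EEG window are pulled together in embedding space while other windows act as negatives; all names, shapes, and the toy data are invented for the example:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor's positive is the same-index row of
    `positives`; every other row serves as a negative."""
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # diagonal entries are positives

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))                 # 8 EEG-window embeddings, 32-dim
views = z + 0.01 * rng.normal(size=z.shape)  # lightly perturbed second views
loss_matched = info_nce(z, views)            # matched views: low loss
loss_random = info_nce(z, rng.normal(size=z.shape))  # unrelated views: high loss
```

In a real EEG setup, the interesting design decisions live in what counts as a "view" (time shift, channel dropout, montage resampling) and how negatives are drawn across subjects and devices — which is exactly the kind of question this role owns.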
Where this is going
We're building toward a world where thought is an interface.
You silently compose a message and it types itself. You navigate an AR display without lifting a finger. Software adapts to your cognitive state in real time. A universal interface between human thought and digital action.
The product we're building to get there has three layers:
A Neural Encoder: a foundation model that maps raw EEG into robust, reusable embeddings that work across devices, subjects, and contexts
A Neural API: a stable interface that any app can call: "What is the user's state?" "What intent is most likely?" "What changed?"
Reference applications: proving utility and driving our data collection flywheel
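To make the API layer concrete, here is a hypothetical sketch of what such an interface could look like. Every name here (NeuralState, NeuralAPI, the fields and methods) is invented for illustration, not a real spec:

```python
from dataclasses import dataclass

@dataclass
class NeuralState:
    # Hypothetical response shape for a "what is the user's state?" query
    cognitive_load: float  # 0.0 (idle) to 1.0 (overloaded)
    attention: float       # 0.0 to 1.0
    confidence: float      # model confidence in this estimate

@dataclass
class IntentPrediction:
    intent: str            # e.g. "select", "navigate", "activate"
    probability: float

class NeuralAPI:
    """Illustrative interface an app might call; the embeddings behind
    it would come from the Neural Encoder layer."""
    def current_state(self) -> NeuralState: ...
    def most_likely_intent(self, vocabulary: list[str]) -> IntentPrediction: ...
    def state_delta(self, since_ms: int) -> NeuralState: ...

state = NeuralState(cognitive_load=0.3, attention=0.7, confidence=0.9)
```

The design point is that apps depend on a stable, typed contract, while the encoder and hardware underneath can keep changing.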
Near-term, the use cases are already real. A limited vocabulary of thought-to-action commands (volume, select, activate, navigate) would feel like magic to consumers. Sleep staging, stress detection, cognitive load monitoring, and engagement measurement are all feasible with today's signal quality. On the clinical side, we're pursuing avenues like epilepsy monitoring and migraine pre-emption as a wedge for high-quality data, credibility, and early revenue.
Hardware matters too. No comfortable, discreet consumer device today covers the brain regions needed for language decoding. We'll eventually design our own. Think a normal-looking baseball cap with dry electrodes hidden in the brim, or something that looks more like AirPods than a medical device. The model needs to be hardware-agnostic, because the form factors will keep evolving.
Research culture
We have a few strong opinions about how research should work:
Minimal hand-engineering, maximal learning pressure. We're skeptical of approaches that hard-code domain heuristics into the model. We'd rather let models discover structure than force-feed it. If you've read Sutton's Bitter Lesson and felt something click, we're on the same page.
Reproducibility over vibes. If we can't answer "which preprocessing version produced this result," we don't trust the result. Every experiment is tracked, every pipeline is versioned, every claim is stress-tested.
Internal criticism is encouraged. The fastest way to build real knowledge is to kill bad ideas early. We want people who are comfortable saying "I think this is wrong."
Failed experiments are documentation, not waste. We write up what doesn't work with the same care as what does.
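One lightweight way to make "which preprocessing version produced this result" answerable — a sketch under our own assumptions, not our actual tooling — is to fingerprint every experiment's config before it runs and store the hash alongside every metric. The config fields and dataset names below are just examples:

```python
import hashlib
import json

def experiment_fingerprint(config: dict) -> str:
    """Deterministic short hash of an experiment config, so any reported
    metric can be traced back to the exact pipeline that produced it."""
    canonical = json.dumps(config, sort_keys=True)  # key order must not matter
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

cfg = {
    "preprocessing": {"bandpass_hz": [0.5, 40.0], "notch_hz": 60},
    "model": {"arch": "transformer", "dim": 256},
    "data": {"datasets": ["example_dataset_a", "example_dataset_b"], "split_seed": 7},
}
fp = experiment_fingerprint(cfg)
# Rebuilding the dict in a different key order yields the same fingerprint
same = experiment_fingerprint({k: cfg[k] for k in reversed(list(cfg))})
```

Any change to preprocessing, model, or data split changes the fingerprint, so "we don't know which version produced this" simply can't happen.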
Who we're looking for
Ideally, you have direct experience with EEG or neural signal decoding — you've already learned what works and what doesn't with this modality, and you won't need to rediscover those lessons. Experience competing in EEG/BCI competitions is a strong signal. That said, if you come from a closely related domain (e.g., other biosignals, brain imaging) and have genuine curiosity about EEG, we're open to that too.
You should have:
Deep experience with self-supervised learning, ideally on EEG or neural signals specifically
Strong opinions about what makes representations actually generalize
The ability to run your own experiments end-to-end (design, implement, train, evaluate, write up)
Comfort with ugly, heterogeneous data and strategies for making it useful
Familiarity with the EEG landscape: you know the public datasets, the benchmarks, and where current approaches fall short
You should NOT apply if:
You need a clear roadmap or structured team to do your best work
You're more interested in neuroscience theory than building systems that work
You rely heavily on hand-crafted features or domain-specific engineering
Our process is three conversations:
30-minute intro call. We'll tell you what we're working on, you'll tell us what you've worked on. Casual, honest, no prep needed.
30-minute technical conversation. We'll talk through a real research design problem together. No whiteboard tricks — we want to see how you think about signal problems, failure modes, and tradeoffs. Think of it as a research jam session.
30-minute deep dive. You'll meet both founders. We'll go deeper on your past work, talk about research taste, and figure out if we'd enjoy working together every day.
What we offer:
Competitive salary and meaningful equity
Platinum-tier health insurance
Uncapped compute access
Full research autonomy: own the problem, not just a task list
No bureaucracy, no review committees
Conference budget + co-author publication support
Relocation and visa support (flexible on remote)