
Egra

Research Scientist

Posted 3 Days Ago
In-Office
New York City, NY, USA
175K-275K Annually
Junior

Hi, I'm Brian, Co-Founder of Egra. We just raised $5.5M to build foundation models for brain signals, and we're looking for research scientists to join our founding team.

You'll have complete ownership over your work from day one. No lengthy onboarding, no waiting for permission, no navigating layers of approval. A small founding team, deep technical problems, and the resources to solve them. You'll define the research direction, make architectural decisions, and build the foundations of what becomes our core technology. If you thrive with high agency and want your work to directly shape the company's trajectory, this is that opportunity.

What you'd be doing

EEG — electrical brain activity recorded from the scalp — is one of the hardest real-world signal modalities in ML: low signal-to-noise ratio, massive subject variability, and device inconsistencies. Most people avoid it for these reasons.

As a founding research scientist, you'd be working directly with us to figure out what actually works. To ground that in examples, here are the kinds of projects you'd own:

  • Designing self-supervised pretraining objectives that force generalization across subjects, devices, and recording conditions

  • Stress-testing existing approaches (like recent EEG foundation model papers) to understand exactly where and why they break: cross-dataset, cross-montage, under distribution shift

  • Building evaluation protocols that distinguish real progress from noise, so we're not fooling ourselves with leaky benchmarks

  • Writing internal research memos that become the shared knowledge base of the lab — "why model X fails on dataset Y," "what Z dataset teaches us incorrectly," "what we tried and why it didn't work"

This is about building genuine understanding of a modality that very few people have studied with modern ML tools and turning that understanding into something that compounds.
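For illustration, here is a generic InfoNCE-style contrastive objective of the kind the first bullet describes (a minimal sketch, not our actual training code; the function name, toy embeddings, and temperature are all invented for the example):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Minimal InfoNCE loss: row i of z1 and row i of z2 are embeddings of
    two augmented views of the same EEG window; all other rows in the batch
    act as negatives."""
    # L2-normalize so the similarity is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # cross-entropy with the diagonal (matched views) as the positive pair
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# toy example: 8 windows, 16-dim embeddings
rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
noise = 0.05 * rng.standard_normal((8, 16))
loss_aligned = info_nce(z, z + noise)                     # views agree: low loss
loss_random = info_nce(z, rng.standard_normal((8, 16)))   # no agreement: high loss
print(loss_aligned < loss_random)
```

The interesting design work is entirely in what counts as a "view": augmentations that span subjects, devices, and montages are what turn a loss like this into a pressure toward generalization.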

Where this is going

We're building toward a world where thought is an interface.

You silently compose a message and it types itself. You navigate an AR display without lifting a finger. Software adapts to your cognitive state in real time. A universal interface between human thought and digital action.

The product we're building to get there has three layers:

  1. A Neural Encoder: a foundation model that maps raw EEG into robust, reusable embeddings that work across devices, subjects, and contexts

  2. A Neural API: a stable interface that any app can call: "What is the user's state?" "What intent is most likely?" "What changed?"

  3. Reference applications: proving utility and driving our data collection flywheel
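To make layer 2 concrete, here is a toy sketch of what a Neural API surface could look like (illustrative only: the class names, fields, and the prototype-matching heuristic are invented for this example, not our actual API):

```python
from dataclasses import dataclass

@dataclass
class NeuralState:
    """Hypothetical answer to 'what is the user's state?'."""
    cognitive_load: float   # 0.0 (idle) .. 1.0 (saturated)

@dataclass
class IntentPrediction:
    """Hypothetical answer to 'what intent is most likely?'."""
    intent: str             # e.g. "select", "navigate"
    confidence: float

class NeuralAPI:
    """Toy in-memory stand-in for the stable interface: apps query state
    and intent; they never touch raw EEG or the encoder internals."""
    def __init__(self, embedding):
        self._z = embedding  # latest encoder embedding for this user

    def current_state(self) -> NeuralState:
        # placeholder heuristic: embedding norm as a load proxy
        load = min(1.0, sum(x * x for x in self._z) ** 0.5 / 10.0)
        return NeuralState(cognitive_load=load)

    def most_likely_intent(self, prototypes: dict) -> IntentPrediction:
        # score each candidate command by cosine similarity to its prototype
        def cos(a, b):
            na = sum(x * x for x in a) ** 0.5
            nb = sum(x * x for x in b) ** 0.5
            return sum(x * y for x, y in zip(a, b)) / (na * nb)
        scores = {name: cos(self._z, p) for name, p in prototypes.items()}
        best = max(scores, key=scores.get)
        return IntentPrediction(intent=best, confidence=scores[best])

# usage: a limited thought-to-action vocabulary, as in the near-term use cases
api = NeuralAPI(embedding=[0.9, 0.1, 0.0])
pred = api.most_likely_intent({"select": [1.0, 0.0, 0.0],
                               "navigate": [0.0, 1.0, 0.0]})
print(pred.intent)
```

The point of the layering is that applications program against this surface while the encoder underneath keeps changing.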

Near-term, the use cases are already real. A limited vocabulary of thought-to-action commands (volume, select, activate, navigate) would feel like magic to consumers. Sleep staging, stress detection, cognitive load monitoring, and engagement measurement are all feasible with today's signal quality. On the clinical side, we're pursuing avenues like epilepsy monitoring and migraine pre-emption as a wedge for high-quality data, credibility, and early revenue.

Hardware matters too. No comfortable, discreet consumer device today covers the brain regions needed for language decoding. We'll eventually design our own. Think a normal-looking baseball cap with dry electrodes hidden in the brim, or something that looks more like AirPods than a medical device. The model needs to be hardware-agnostic, because the form factors will keep evolving.

Research culture

We have a few strong opinions about how research should work:

Minimal hand-engineering, maximal learning pressure. We're skeptical of approaches that hard-code domain heuristics into the model. We'd rather let models discover structure than force-feed it. If you've read Sutton's Bitter Lesson and felt something click, we're on the same page.

Reproducibility over vibes. If we can't answer "which preprocessing version produced this result," we don't trust the result. Every experiment is tracked, every pipeline is versioned, every claim is stress-tested.
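As a minimal illustration of what "every pipeline is versioned" can mean in practice (a generic sketch, not our actual tooling; the config fields are invented):

```python
import hashlib
import json

def pipeline_fingerprint(config: dict) -> str:
    """Deterministic hash of a preprocessing config, so every logged result
    can be traced back to the exact pipeline that produced it."""
    # canonical JSON: sorted keys, no whitespace, so equal configs hash equally
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

preproc = {
    "bandpass_hz": [0.5, 40.0],
    "notch_hz": 60,
    "resample_hz": 256,
    "reference": "average",
}
run_id = pipeline_fingerprint(preproc)
print(run_id)  # same config -> same id; any change -> a different id
```

Stamping every metric with an id like this is what makes "which preprocessing version produced this result" answerable months later.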

Internal criticism is encouraged. The fastest way to build real knowledge is to kill bad ideas early. We want people who are comfortable saying "I think this is wrong."

Failed experiments are documentation, not waste. We write up what doesn't work with the same care as what does.

Who we're looking for

Ideally, you have direct experience with EEG or neural signal decoding — you've already learned what works and what doesn't with this modality, and you won't need to rediscover those lessons. Experience competing in EEG/BCI competitions is a strong signal. That said, if you come from a closely related domain (e.g., other biosignals, brain imaging) and have genuine curiosity about EEG, we're open to that too.

You should have:

  • Deep experience with self-supervised learning, ideally on EEG or neural signals specifically

  • Strong opinions about what makes representations actually generalize

  • The ability to run your own experiments end-to-end (design, implement, train, evaluate, write up)

  • Comfort with ugly, heterogeneous data and strategies for making it useful

  • Familiarity with the EEG landscape: you know the public datasets, the benchmarks, and where current approaches fall short

You should NOT apply if:

  • You need a clear roadmap or structured team to do your best work

  • You're more interested in neuroscience theory than building systems that work

  • You rely heavily on hand-crafted features or domain-specific engineering

Interview process

Our process is three conversations:

  1. 30-minute intro call. We'll tell you what we're working on, you'll tell us what you've worked on. Casual, honest, no prep needed.

  2. 30-minute technical conversation. We'll talk through a real research design problem together. No whiteboard tricks — we want to see how you think about signal problems, failure modes, and tradeoffs. Think of it as a research jam session.

  3. 30-minute deep dive. You'll meet both founders. We'll go deeper on your past work, talk about research taste, and figure out if we'd enjoy working together every day.

Benefits
  • Competitive salary and meaningful equity

  • Platinum-tier health insurance

  • Uncapped compute access

  • Full research autonomy: own the problem, not just a task list

  • No bureaucracy, no review committees

  • Conference budget + co-author publication support

  • Relocation and visa support (flexible on remote)

Top Skills

EEG
Machine Learning
Neural Signals
Self-Supervised Learning
