Everstar Inc.

Founding AI Engineer

In-Office · New York City, NY, USA · Mid level
Founding AI Engineer (AI + Production)

New York City (5 days on-site) · Top of market + equity + benefits

TL;DR: Build AI that accelerates nuclear deployment. Own AI production from evals to fine-tuning. Push the frontier on physics models, world models, and AI-accelerated simulations. High-leverage IC role with founding-level impact.

The Mission

Everstar builds the intelligence layer that makes nuclear power actually deployable—collapsing regulatory and manufacturing timelines from years to months. Our product, Gordian, already powers engineering and compliance work for utilities, advanced reactor companies, and hyperscalers. We pair deep nuclear domain expertise with frontier AI and move with startup speed.

Now we need a Founding AI Engineer to turn research breakthroughs into production systems that ship—and push beyond LLMs into physics-informed AI, world models, and simulation acceleration.

You'll be joining the Apollo team of nuclear. You'll build alongside engineers from Tesla, SpaceX, Lockheed Martin, Google, and Microsoft. You'll learn from nuclear and national security experts who cut their teeth at the Nuclear Regulatory Commission, CIA, and NuScale.

The Role (reporting to the CEO)

Not a researcher. Not a prompt engineer. This is a production-first role.

You'll own the AI stack end-to-end—from eval frameworks to fine-tuning pipelines to agent orchestration. But you'll also push the boundaries of what AI can do for nuclear: AI-accelerated weather simulations, design safety analyses powered by physics models, and world model applications that transform nuclear operations.

Think 70% building production systems, 30% frontier R&D. Through our first-party partnerships you'll have access to Microsoft's and NVIDIA's latest tools, plus a large AI research budget to experiment aggressively.

Most weeks you'll be shipping new model capabilities, debugging eval failures, and scaling inference—then immediately applying what you learned to the next sprint. Some weeks you'll be prototyping physics-informed models, running GPU-accelerated simulations, or collaborating directly with NVIDIA and Microsoft researchers.

You will:

  • Build production AI agents: power Gordian Search, Research, and Compose with outputs that are truthful, complete, and auditable—because in nuclear, "mostly right" isn't good enough.

  • Design eval infrastructure: create benchmarking suites that catch regressions before customers do; instrument quality metrics that actually matter.

  • Own fine-tuning pipelines: generate synthetic data, run ablations, and ship domain-adapted models that outperform off-the-shelf LLMs on nuclear regulatory tasks.

  • Push the frontier (R&D):

    • AI-accelerated weather simulations for site qualification and environmental impact assessments—replacing months of modeling with hours

    • Physics-informed design safety analyses using world models that reason about thermal hydraulics, neutronics, and structural integrity

    • Vision + physics models for automated document analysis, construction monitoring, and operational anomaly detection

    • Agentic workflows that compound over time, learning from each regulatory submission to improve the next

  • Leverage NVIDIA partnership: work directly with NVIDIA's research team to access cutting-edge tools (NeMo, Modulus, Omniverse) and contribute to the future of AI for critical infrastructure

  • Set technical direction: you're early enough to shape how we think about model selection, prompt design, guardrails, physics-AI integration, and the entire ML ops stack

  • Mentor and lead: as the team scales, you'll hire and guide other AI engineers—but first, you'll prove the playbook yourself

A sample week: debug why Research citations dropped 8%; ship a new fine-tuned model for compliance drafting; design an eval suite for multi-document reasoning; prototype a physics-informed model for thermal analysis; pair with a fullstack engineer to optimize inference latency; attend an NVIDIA collaboration session on world models; read three ML papers and implement one idea.

What You've Done
  • 3–8 years building production ML/LLM systems—RAG, fine-tuning, evals, agent orchestration. You've shipped models that users depend on daily.

  • Mastery of the stack: Hugging Face, LangChain, vector databases, prompt engineering, and modern LLM ops. You know when to use off-the-shelf and when to build custom.

  • Rigor with evals: you've designed benchmark suites, tracked model quality over time, and know how to measure what matters (not just what's easy).

  • Leadership DNA: you've owned outcomes, not just tasks. You've set technical direction, mentored teammates, or led cross-functional projects.

  • Bonus points:

    • Experience with physics-informed neural networks, scientific computing, or simulation acceleration

    • Published research in ML/AI, contributions to open-source ML frameworks

    • Deep familiarity with NVIDIA tools (NeMo, Modulus, CUDA optimization)

    • You're the person who reads arXiv papers on weekends and immediately wants to implement them

    • Background in physics, engineering, or computational science

No nuclear background required—only the hunger to build AI that matters and push the boundaries of what AI can do for physical systems.

Who You're Building For

This isn't benchmarks for benchmarks' sake. Your models will directly help:

  • Nuclear operators keeping 20% of U.S. electricity safe and reliable

  • Advanced reactor developers navigating regulatory approval for next-gen designs—and using AI-accelerated simulations to optimize designs in days, not months

  • Licensing teams drafting safety analyses that take months today, hours tomorrow—powered by physics models that understand first principles

  • Site qualification teams running environmental and weather analyses that currently require expensive consultants and 6+ month timelines

And the second-order effects matter even more:

  • Nuclear unlocks the energy needed for AGI/ASI—advanced AI requires unprecedented power.

  • AI accelerates nuclear deployment—breaking the regulatory bottleneck that's held back clean energy for decades.

  • The tokens you generate translate into safer infrastructure and a livable planet.

What’s at Stake
  • 🔥 If we succeed: We unlock nuclear at scale, power the AI revolution with clean energy, and collapse licensing timelines from years to months. The models you build help humanity leap toward AGI on a sustainable foundation. Your physics-informed AI becomes the standard for how critical infrastructure is designed and operated.

  • ❄️ If we fail: Nuclear stays bottlenecked in decades-old processes, AI's energy demand outpaces clean supply, and we miss the window to align technological progress with climate survival. The frontier AI capabilities remain academic curiosities instead of deployment accelerators.

What Success Looks Like (90 days)
  • Shipped ≥3 major model improvements to production (better evals, new fine-tuned model, or agent capability).

  • Eval framework is instrumented and running continuously; you catch quality regressions before customers report them.

  • Inference latency reduced ≥30% or accuracy improved ≥15% on key benchmarks.

  • Prototype ≥1 frontier capability (physics model for safety analysis, weather simulation acceleration, or world model application) that shows clear customer value.

  • You've set the technical roadmap for AI engineering and the team trusts your judgment.

  • At least one system you built (eval suite, fine-tuning pipeline, or agent orchestration) is now core infrastructure the company depends on.

Resources at Your Disposal
  • NVIDIA & Microsoft first-party partnerships: Direct access to Microsoft's and NVIDIA's research teams, early access to new tools (NeMo, Modulus, Omniverse), and collaboration on frontier AI applications

  • Large AI research budget: Aggressive compute allocation for training runs, experiments, and frontier R&D—no need to beg for GPU credits

  • Latest NVIDIA hardware: Access to H100s, GH200s, and future architectures as they become available

  • World-class team: Work alongside nuclear domain experts, AI researchers, and engineers who've shipped at SpaceX and top startups

Growth Path

Strong founding AI engineers typically grow into Head of AI/ML, AI Research Lead, or CTO-track roles as the company scales. The frontier R&D component opens paths toward Chief Scientist or VP of Applied Research as we expand into physics-AI and world models.

First, you'll prove you can own the entire LLM stack and ship production systems that matter.

Why Everstar
  • Work with the best: a high-caliber, wartime team that builds things that scale.

  • Build shit that matters, accelerating nuclear energy and shaping the AI future.

  • Large AI research budget for compute, conferences, and experimentation.

  • Top of market base + meaningful equity in a fast-growing company; standard benefits (health/dental/vision, FSA, wellness stipend).

  • IRL in NYC (midtown/Bryant Park). Occasional travel to client sites, Microsoft & NVIDIA offices, or ML conferences.

How to Apply (show, don't tell)

Submit application with:

  1. Resume AND LinkedIn profile

  2. GitHub or portfolio: show us something you built (open-source contributions, side projects, or production work you're proud of)

  3. 200 words: "What excites you most about building AI for nuclear deployment?"

  4. 150 words: "Describe a production ML system you owned. What were the hardest technical tradeoffs and how did you resolve them?"

  5. Bonus (optional): If you have experience with physics-informed AI, simulation acceleration, or scientific computing, share a brief example of work in this domain.

We respond to strong submissions within one week.

Let’s build.
