
CoreWeave

Director of Engineering, Inference Services

Reposted 19 Days Ago
In-Office
2 Locations
$206K-$303K Annually
Expert/Leader
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
About this Role:

CoreWeave is looking for a Director of Engineering to own and scale our next-generation Inference Platform. In this highly technical, strategic role, you will lead a world-class engineering organization to design, build, and operate the fastest, most cost-efficient, and most reliable GPU inference services in the industry. Your charter spans everything from model-serving runtimes (e.g., Triton, vLLM, TensorRT-LLM) and autoscaling micro-batch schedulers to developer-friendly SDKs and airtight, multi-tenant security, all delivered on CoreWeave’s unique accelerated-compute infrastructure.

What You'll Do:
  • Vision & Roadmap - Define and continuously refine the end-to-end Inference Platform roadmap, prioritizing low-latency, high-throughput model serving and world-class developer UX. Set technical standards for runtime selection, GPU/CPU heterogeneity, quantization, and model-optimization techniques.
  • Platform Architecture - Design and implement a global, Kubernetes-native inference control plane that delivers <50 ms P99 latencies at scale. Build adaptive micro-batching, request-routing, and autoscaling mechanisms that maximize GPU utilization while meeting strict SLAs. Integrate model-optimization pipelines (TensorRT, ONNX Runtime, BetterTransformer, AWQ, etc.) for frictionless deployment.
  • Runtime Optimization - Implement state-of-the-art runtime optimizations, including speculative decoding, KV-cache reuse across batches, early-exit heuristics, and tensor-parallel streaming, to squeeze every microsecond out of LLM inference while retaining accuracy.
  • Operational Excellence - Establish SLO/SLA dashboards, real-time observability, and self-healing mechanisms for thousands of models across multiple regions. Drive cost-performance trade-off tooling that makes it trivial for customers to choose the best hardware tier for each workload.
  • Leadership - Hire, mentor, and grow a diverse team of engineers and managers passionate about large-scale AI inference. Foster a customer-obsessed, metrics-driven engineering culture with crisp design reviews and blameless post-mortems.
  • Collaboration - Partner closely with Product, Orchestration, Networking, and Security teams to deliver a unified CoreWeave experience. Engage directly with flagship customers (internal and external) to gather feedback and shape the roadmap.
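The adaptive micro-batching mentioned in the responsibilities above is the core latency/throughput knob in GPU serving: the batcher holds incoming requests briefly so the GPU processes several at once, but flushes early when a deadline expires so tail latency stays bounded. A minimal, framework-free sketch of the idea (the `MicroBatcher` class, its parameters, and defaults are all illustrative assumptions, not CoreWeave’s actual implementation):

```python
import queue
import time


class MicroBatcher:
    """Collect requests into batches, flushing when the batch is full
    or a wait deadline expires -- trading a small latency budget for
    higher GPU utilization."""

    def __init__(self, max_batch_size=8, max_wait_ms=5.0):
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_ms / 1000.0
        self._queue = queue.Queue()

    def submit(self, request):
        """Enqueue one inference request (any object)."""
        self._queue.put(request)

    def next_batch(self):
        """Block for the first request, then gather more until either
        max_batch_size is reached or max_wait_s elapses."""
        batch = [self._queue.get()]
        deadline = time.monotonic() + self.max_wait_s
        while len(batch) < self.max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self._queue.get(timeout=remaining))
            except queue.Empty:
                break
        return batch


if __name__ == "__main__":
    batcher = MicroBatcher(max_batch_size=8, max_wait_ms=5.0)
    for i in range(10):
        batcher.submit(i)
    print(batcher.next_batch())  # first 8 requests
    print(batcher.next_batch())  # remaining 2 after the wait window
```

Production systems layer continuous batching, priority classes, and per-model sizing on top of this skeleton, but the full-or-deadline flush rule is the common core.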
Who You Are: 
  • 10+ years building large-scale distributed systems or cloud services, with 5+ years leading multiple engineering teams.
  • Proven success delivering mission-critical model-serving or real-time data-plane services (e.g., Triton, TorchServe, vLLM, Ray Serve, SageMaker Inference, GCP Vertex Prediction).
  • Deep understanding of GPU/CPU resource isolation, NUMA-aware scheduling, micro-batching, and low-latency networking (gRPC, QUIC, RDMA).
  • Track record of optimizing cost-per-token / cost-per-request and hitting sub-100 ms global P99 latencies.
  • Expertise in Kubernetes, service meshes, and CI/CD for ML workloads; familiarity with Slurm, Kueue, or other schedulers a plus.
  • Hands-on experience with LLM optimization (quantization, compilation, tensor parallelism, speculative decoding) and hardware-aware model compression.
  • Excellent communicator who can translate deep technical concepts into clear business value for C-suite and engineering audiences.
  • Bachelor’s or Master’s in CS, EE, or related field (or equivalent practical experience).
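Two of the metrics the qualifications above call out, P99 latency and cost-per-token, reduce to simple arithmetic once you have measurements. A hedged sketch using the nearest-rank percentile definition (function names and the sample numbers are hypothetical, for illustration only):

```python
import math


def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of observations are <= it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]


def cost_per_million_tokens(gpu_hourly_usd, tokens_per_second):
    """Convert a GPU's hourly price and sustained throughput into the
    $-per-1M-tokens figure used to compare hardware tiers."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000


if __name__ == "__main__":
    latencies_ms = [12, 18, 22, 30, 41, 47, 53, 60, 75, 120]
    print(percentile(latencies_ms, 99))          # -> 120
    print(cost_per_million_tokens(2.0, 1000))    # ~$0.56 per 1M tokens
```

Note that production P99 tracking is usually done with streaming histograms (e.g., Prometheus) rather than sorting raw samples, since request volumes make exact sorting impractical.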
Nice-to-have:
  • Experience operating multi-region inference fleets at a cloud provider or hyperscaler.
  • Contributions to open-source inference or MLOps projects.
  • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) for AI workloads.
  • Background in edge inference, streaming inference, or real-time personalization systems.
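The SLO dashboards and observability stacks referenced above typically revolve around an error budget: an SLO of 99.9% availability permits 0.1% of requests to fail, and operations teams track how much of that allowance has been spent. A minimal sketch of the arithmetic (the function name and thresholds are illustrative assumptions):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the SLO error budget still unspent, clamped at 0.

    slo_target: success-rate objective, e.g. 0.999 for "three nines".
    """
    allowed_failures = total_requests * (1.0 - slo_target)
    if allowed_failures <= 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)


if __name__ == "__main__":
    # 1M requests under a 99.9% SLO allow ~1,000 failures;
    # 500 failures means half the budget is left.
    print(error_budget_remaining(0.999, 1_000_000, 500))
```

Alerting is then commonly defined on burn rate (how fast the budget is being consumed) rather than on raw error counts, so that a brief spike and a slow leak are treated differently.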

The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility). 

What We Offer

The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can reflect a variety of factors, including qualifications, experience, interview performance, and location.

In addition to a competitive salary, we offer a variety of benefits to support your needs, including:

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance 
  • Voluntary supplemental life insurance 
  • Short and long-term disability insurance 
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement 
  • Ability to Participate in Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health 
  • Family-Forming support provided by Carrot
  • Paid Parental Leave 
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption

Our Workplace

While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.

California Consumer Privacy Act - California applicants only

CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.

As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: [email protected].


Export Control Compliance

This position requires access to export controlled information.  To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency.  CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.

Top Skills

AWQ
BetterTransformer
CI/CD
gRPC
Kubernetes
ONNX Runtime
QUIC
RDMA
TensorRT-LLM
Triton
HQ

CoreWeave Livingston, New Jersey, USA Office

Livingston, NJ, United States

CoreWeave New York, New York, USA Office

New York, NY, United States


