Red Hat

Forward Deployed Engineer, AI Inference (vLLM and Kubernetes)

In-Office or Remote
2 Locations
$193K-$319K Annually
Senior level

The vLLM and LLM-D Engineering team at Red Hat is looking for a customer-obsessed developer to join our team as a Forward Deployed Engineer. In this role, you will not just build software; you will be the bridge between our cutting-edge inference platform (LLM-D and vLLM) and our customers' most critical production environments.

You will interface directly with the engineering teams at our customers to deploy, optimize, and scale distributed Large Language Model (LLM) inference systems. You will solve "last mile" infrastructure challenges that defy off-the-shelf solutions, ensuring that massive models run with low latency and high throughput on complex Kubernetes clusters. This is not a sales engineering role; you will be part of the core vLLM and LLM-D engineering team.

What You Will Do
  • Orchestrate Distributed Inference: Deploy and configure LLM-D and vLLM on Kubernetes clusters. You will set up and configure advanced deployment patterns such as disaggregated serving, KV-cache-aware routing, and KV-cache offloading to maximize hardware utilization.

  • Optimize for Production: Go beyond standard deployments by running performance benchmarks, tuning vLLM parameters, and configuring intelligent inference routing policies to meet SLOs for latency and throughput. You care about Time Per Output Token (TPOT), GPU utilization, GPU networking optimizations, and Kubernetes scheduler efficiency (a rough TPOT measurement is sketched after this list).

  • Code Side-by-Side: Work directly with customer engineers to write production-quality code (Python/Go/YAML) that integrates our inference engine into their existing Kubernetes ecosystem.

  • Solve the "Unsolvable": Debug complex interaction effects between specific model architectures (e.g., MoE, large context windows), hardware accelerators (NVIDIA GPUs, AMD GPUs, TPUs), and Kubernetes networking (Envoy/Istio).

  • Feedback Loop: Act as the "Customer Zero" for our core engineering teams. You will channel field learnings back to product development, influencing the roadmap for LLM-D and vLLM features.

  • Travel to customers only as needed to present, demo, or help execute proofs of concept.
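
For context on the latency SLOs above, here is a minimal sketch of how one might measure Time To First Token (TTFT) and Time Per Output Token (TPOT) against a vLLM OpenAI-compatible endpoint. The endpoint URL and model name are placeholders, not values from this posting, and each streamed chunk is counted as roughly one token, so treat this as an illustration rather than a production benchmark.

```python
# Rough TTFT/TPOT measurement against an OpenAI-compatible vLLM server.
# Assumes a server such as `vllm serve <model>` is already running locally;
# the URL and model name below are placeholders for illustration only.
import json
import time

import requests

URL = "http://localhost:8000/v1/completions"  # assumed local vLLM endpoint
payload = {
    "model": "my-model",                      # placeholder model name
    "prompt": "Explain KV caching in one paragraph.",
    "max_tokens": 128,
    "stream": True,
}

start = time.perf_counter()
first_token_at = None
chunks = 0

with requests.post(URL, json=payload, stream=True, timeout=120) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        json.loads(data)                      # one streamed completion chunk
        chunks += 1
        if first_token_at is None:
            first_token_at = time.perf_counter()

ttft = first_token_at - start
tpot = (time.perf_counter() - first_token_at) / max(chunks - 1, 1)  # ~1 token per chunk
print(f"TTFT: {ttft * 1000:.1f} ms, approx TPOT: {tpot * 1000:.1f} ms/token")
```

In practice, probes like this feed the benchmarking and routing-policy tuning described above.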

What You Will Bring
  • 8+ Years of Engineering Experience: You have a long track record in Backend Systems, SRE, or Infrastructure Engineering.

  • Customer Fluency: You speak both "Systems Engineering" and "Business Value".

  • Bias for Action: You prefer rapid prototyping and iteration over theoretical perfection. You are comfortable operating in ambiguity and taking ownership of the outcome.

  • Deep Kubernetes Expertise: You are fluent in K8s primitives, from defining custom resources (CRDs, Operators, Controllers) to configuring modern ingress via the Gateway API. You have deep experience with stateful workloads and high-performance networking, including the ability to tune scheduler logic (affinity/tolerations) for GPU workloads and troubleshoot complex CNI failures.

  • AI Inference Proficiency: You understand how an LLM forward pass works. You know what KV caching is, why prefill/decode disaggregation matters, why context length impacts performance, and how continuous batching works in vLLM (see the sketch after this list).

  • Systems Programming: Proficiency in Python (for model interfaces) and Go (for Kubernetes controllers/scheduler logic).

  • Infrastructure as Code: Experience with Helm, Terraform, or similar tools for reproducible deployments.

  • Cloud & GPU Hardware Fluency: You are comfortable spinning up clusters and deploying LLMs on bare-metal and hyperscaler Kubernetes clusters.
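
As a small illustration of the inference concepts listed above, the sketch below uses vLLM's offline Python API to submit several prompts at once, letting the engine interleave prefill and decode with continuous batching. The model name is just a small placeholder checkpoint; any compatible model path would do.

```python
# Minimal vLLM offline-inference sketch: batching several prompts so the engine
# can apply continuous batching and share KV-cache memory across requests.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")          # small placeholder model
sampling = SamplingParams(temperature=0.8, max_tokens=64)

prompts = [
    "What is KV caching?",
    "Why does context length affect latency?",
    "Explain prefill vs. decode in LLM serving.",
]

# generate() accepts the whole batch; vLLM schedules prefill and decode steps
# across requests internally rather than processing prompts one at a time.
for out in llm.generate(prompts, sampling):
    print(out.prompt, "->", out.outputs[0].text.strip()[:80])
```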

The Following Are Considered a Plus
  • Experience contributing to open-source AI infrastructure projects (e.g., KServe, vLLM, Kubernetes).

  • Knowledge of Envoy Proxy or Inference Gateway (IGW).

  • Familiarity with model optimization techniques like Quantization (AWQ, GPTQ) and Speculative Decoding (a brief loading sketch follows this list).
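
As a small, hedged example of the quantization point above: vLLM can load AWQ-quantized checkpoints directly. The model name below is a placeholder for any AWQ checkpoint, not a specific recommendation.

```python
# Loading an AWQ-quantized checkpoint in vLLM (illustrative placeholder model).
from vllm import LLM, SamplingParams

llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")
out = llm.generate(["Hello from a quantized model."], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```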


The salary range for this position is $193,390.00 - $318,980.00. The actual offer will be based on your qualifications.

Pay Transparency

Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat’s compensation package. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote-US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience. 

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Benefits
●    Comprehensive medical, dental, and vision coverage
●    Flexible Spending Account - healthcare and dependent care
●    Health Savings Account - high deductible medical plan
●    Retirement 401(k) with employer match
●    Paid time off and holidays
●    Paid parental leave plans for all new parents
●    Leave benefits including disability, paid family medical leave, and paid military leave
●    Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more! 

Note: These benefits are only applicable to full time, permanent associates at Red Hat located in the United States. 

Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.


Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email [email protected]. General inquiries, such as those regarding the status of a job application, will not receive a reply. 

Top Skills

Go
Helm
Kubernetes
Python
Terraform

Red Hat New York, New York, USA Office

140 Broadway, New York, NY, United States, 10005
