We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them.
It takes a certain kind of person to join this team: people who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together.
One Confluent. One Team. One Data Streaming Platform.
About the Role: The Stream Processing & Analytics (SPA) team is building an elastic, reliable, durable, cost-effective, and performant stream processing engine for Confluent Cloud, based on Apache Flink.
This role is critical for enabling our customers to build custom functions and apps in Confluent Cloud, extending the stream processing engine to meet the specific needs of their use cases, no matter how complex. You will champion a best-in-class SDLC for users who prefer Java or Python over SQL and want to leverage the power of managed stream processing on Confluent Cloud without sacrificing the best practices from their typical workflows.
- Work on Flink user-defined functions and the Table API to provide a great experience for customers with sophisticated use cases (see the illustrative sketch after this list)
- Play a crucial role in designing, developing, and operationalizing critical user-facing interfaces as well as the backing cloud infrastructure
- Collaborate with the Apache Flink community to establish standards and improve open source foundations
- Produce clean, well-documented, and maintainable code that adheres to established team standards and security best practices
- Deliver value for customers by taking on their most challenging problems
- As a vital member of our team, take responsibility for developing, managing, and maintaining a mission-critical service with a 99.99% SLA running in 88+ AWS, GCP, and Azure regions
- Enhance stability, performance, scalability, and operational excellence across multiple critical systems
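To give a flavor of the work: below is a minimal sketch, using the open-source Apache Flink Table API, of the kind of Java program with a user-defined function that this role helps customers run. The class and function names (UdfSketch, UpperCase, UPPER_CASE) are illustrative only and not part of any Confluent API.

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.call;
import static org.apache.flink.table.api.Expressions.row;

public class UdfSketch {

    // Illustrative user-defined scalar function: upper-cases a string.
    public static class UpperCase extends ScalarFunction {
        public String eval(String s) {
            return s == null ? null : s.toUpperCase();
        }
    }

    public static void main(String[] args) {
        // Local streaming TableEnvironment; on Confluent Cloud the engine itself is managed.
        TableEnvironment env = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Register the UDF so it can be called from both the Table API and SQL.
        env.createTemporarySystemFunction("UPPER_CASE", UpperCase.class);

        // Small in-memory table standing in for a real streaming source.
        Table items = env.fromValues(
                DataTypes.ROW(DataTypes.FIELD("item", DataTypes.STRING())),
                row("laptop"),
                row("keyboard"));

        // Apply the UDF through the Table API and print the result.
        items.select(call("UPPER_CASE", $("item")).as("item_upper"))
             .execute()
             .print();
    }
}
```

ScalarFunction is the simplest Flink UDF type; table and aggregate functions follow the same registration pattern.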
- BS, MS, or PhD in computer science or a related field, or equivalent work experience
- 2-4 years of relevant stream processing experience
- Strong fundamentals in distributed systems design and development
- Experience running production services in the cloud and participating in an on-call rotation
- A self-starter with the ability to work effectively in teams
- Proficiency in Java and comfort working with Go and Python
- A strong background in distributed storage systems or databases
- Experience with or knowledge of public clouds (AWS, Azure, or GCP) and Kubernetes operators
- Contributions to open-source projects, especially Apache Flink or other stream processing projects
Belonging isn’t a perk here. It’s the baseline. We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible.
We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.