Senior Data Engineer (E-commerce) at Peloton
Peloton is looking for a Data Engineer to build our e-commerce data pipelines and improve the integrity of our e-commerce data models. You will work with multiple teams of passionate and skilled data engineers, architects, and analysts responsible for building batch and streaming data pipelines that process data daily and support all of the e-commerce reporting and ERP integration needs across the organization.
Peloton is a cloud-first engineering organization with all of our data infrastructure in AWS, leveraging EMR, AWS Glue, Redshift, S3, and Spark. You will interact with many business teams, including finance, analytics, and enterprise systems, and partner with them to scale Peloton’s e-commerce data infrastructure for future strategic needs.
RESPONSIBILITIES
Help build a culture of quality
- Assume technical responsibility for new services and functionality, lookout for opportunities for platform improvement, and work with engineers to scale our production systems.
- Identify and lead technical initiatives to build clean, robust, and performant data applications.
- Contribute to the adoption of software architecture and new technologies.
- Lead, coach, pair with, and mentor e-commerce data software engineers.
- Mentor data engineers from diverse backgrounds to nurture a culture of ownership, learning, automation, re-use, and engineering efficiency through the use of software design patterns and industry best practices.
- Engage in code reviews helping maintain our coding standards.
- Be a leader within your team and the organization.
Facilitate the on-time completion of large projects
- Understand the data needs of different stakeholders across multiple business verticals, including Business Intelligence, Finance, and Enterprise Systems.
- Develop the vision and map strategy to provide proactive solutions and enable stakeholders to extract insights and value from data.
- Understand end-to-end data interactions and dependencies across complex data pipelines and transformations, and how they impact business decisions.
- Design best practices for big data processing and data modeling.
- Lead architecture meetings and technical discussions with the focus of reaching consensus and best practice solutions.
- Break down tasks for other engineers and offer guidance to engineers on the team when they are blocked.
- Achieve on-time delivery without compromising quality.
QUALIFICATIONS
- 8+ years of relevant experience, including e-commerce and data engineering.
- Good active listening skills and the ability to empathize with stakeholders and other engineers.
- Experience in a fast-paced, high-growth environment working with deadlines and milestones.
- Comfortable with ambiguity; you enjoy figuring out what needs to be done.
- Senior-level proficiency in at least one modern programming language, and the ability to learn anything you don't already know to get the job done.
- Excellent time management skills.
- Solid understanding of clean data design principles.
- Experience mentoring engineers with a team-focused mentality for success.
- Excellent knowledge of databases such as PostgreSQL and Redshift.
- Experience with Git, GitHub, JIRA, and Scrum.
- 2+ years building a data warehouse and data pipelines, or 3+ years in data-intensive engineering roles.
- Experience with big data architectures and data modeling to efficiently process large volumes of data.
- Background in ETL and data processing; know how to transform data to meet business goals.
- Experience developing large data processing pipelines on Apache Spark.
- Strong understanding of SQL and working knowledge of using SQL (preferably PostgreSQL and Redshift) for various reporting and transformation needs.
- Experience with distributed systems, CI/CD (ex: Jenkins) tools, and containerizing applications (ex: Kubernetes).
- Familiar with at least one of these programming languages: Python or Java.
- Comfortable with the Linux operating system and command-line tools such as Bash.
- Familiar with REST for accessing cloud-based services.
- Excellent communication, adaptability, and collaboration skills.
- Experience running an Agile process and applying Agile practices to data engineering.
- Familiar with the AWS ecosystem, including RDS, Redshift, Glue, Athena, etc.
- Experience with Apache Hadoop, Hive, Spark, and PySpark.
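To illustrate the SQL transformation work described above, here is a minimal sketch of aggregating raw order events into a daily reporting model. It uses SQLite for portability, but the same pattern applies in PostgreSQL or Redshift; all table and column names are hypothetical, not Peloton's actual schema.

```python
import sqlite3

# In-memory database standing in for a warehouse like Redshift.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Load a small batch of raw e-commerce order events (hypothetical schema).
cur.execute("CREATE TABLE orders (order_id INTEGER, order_date TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        (1, "2024-01-01", 100.0),
        (2, "2024-01-01", 50.0),
        (3, "2024-01-02", 75.0),
    ],
)

# Transform: roll raw orders up into a daily revenue reporting model.
cur.execute(
    """
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
    """
)
daily_revenue = cur.fetchall()
print(daily_revenue)  # [('2024-01-01', 2, 150.0), ('2024-01-02', 1, 75.0)]
conn.close()
```

In a production pipeline this aggregation would typically run as a scheduled batch job writing to a reporting table, rather than printing results.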
ABOUT PELOTON
Peloton is the largest interactive fitness platform in the world with a loyal community of more than 2.6 million Members. The company pioneered connected, technology-enabled fitness, and the streaming of immersive, instructor-led boutique classes for its Members anytime, anywhere. Peloton makes fitness entertaining, approachable, effective, and convenient, while fostering social connections that encourage its Members to be the best versions of themselves. An innovator at the nexus of fitness, technology, and media, Peloton has reinvented the fitness industry by developing a first-of-its-kind subscription platform that seamlessly combines the best equipment, proprietary networked software, and world-class streaming digital fitness and wellness content, creating a product that its Members love. The brand's immersive content is accessible through the Peloton Bike, Peloton Tread, and Peloton App, which allows access to a full slate of fitness classes across disciplines, on any iOS or Android device, Fire TV, Roku, Chromecast and Android TV. Founded in 2012 and headquartered in New York City, Peloton has a growing number of retail showrooms across the US, UK, Canada and Germany. For more information, visit www.onepeloton.com.