Foursquare is the leading independent location technology company, powered by our deep understanding of how people move throughout the world. Our solutions help businesses make smarter decisions, developers create more engaging experiences, and brands build more effective marketing strategies.
Foursquare’s platform includes Attribution, Audience, Pinpoint, Proximity, Places, Pilgrim SDK, and Visits. Foursquare is the industry’s first and only company accredited by the Media Rating Council (MRC) for location data; this foundation powers all our solutions, both those that exist today and those we have yet to build. Over 14 billion consumer-verified place visit confirmations keep our map and models fresh and up to date, building a phone’s-eye view of the world with 105 million unique places of interest worldwide.
About the Team
The Delivery team is responsible for sending data to Foursquare’s partners. We work with data platform teams across the company to understand what information we have, how it can be accessed and extracted, and where it needs to be sent. The Delivery team writes and operates the software that accomplishes these goals, ensuring that data deliveries are prompt, correctly formatted, and confirmed as received by our partners. In this role, you will ship a highly visible platform product of strategic importance to Foursquare, and contribute directly to revenue.
Our Tech Stack
- Languages: Java, Python, Clojure, Ruby, SQL
- Frameworks: Airflow, Spark, Hadoop MapReduce, Spring Boot
- Infrastructure: AWS (S3, EMR, EC2, etc.), Kubernetes, Docker, Mesos
- CI/CD: Jenkins, TeamCity
- Other technologies: Postgres, Hive, Athena
Responsibilities of the role:
- Influence key decisions on the architecture and implementation of our scalable data processing and analytics infrastructure.
- Build Hadoop MapReduce and Spark processing pipelines.
- Focus on performance, throughput, and latency, and drive these throughout our architecture.
- Write test automation, conduct code reviews, and take end-to-end ownership of deployments to production.
- Write, deploy, and monitor services for data access by systems across our infrastructure.
- Participate in on-call rotation duties.
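As an illustrative sketch only (not Foursquare code), the MapReduce pattern behind the pipelines mentioned above can be shown with a minimal pure-Python word count; the record contents and function names here are hypothetical:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map step: emit a (key, 1) pair for each token in an input record.
    return [(word.lower(), 1) for word in record.split()]

def reduce_phase(pairs):
    # Shuffle + reduce step: group pairs by key and sum their counts.
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

# Toy input standing in for records read from storage such as S3.
records = ["coffee shop", "coffee bar", "shop"]
counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(counts)  # {'coffee': 2, 'shop': 2, 'bar': 1}
```

In frameworks like Hadoop MapReduce or Spark, the same map and reduce functions run distributed across a cluster, with the framework handling the shuffle between the two phases.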
Qualifications:
- BS/BA in a technical field such as computer science, or equivalent experience.
- 0-3 years of experience in software development working with production-level code.
- Proficiency in one or more of the programming languages we use.
- Excellent communication skills, including the ability to identify and communicate data-driven insights.
- Experience with relational or document-oriented database systems.
- Strong algorithms and data structures knowledge.
- Comfort with Unix/Linux and the command line.
Nice to have:
- Experience with Hadoop MapReduce and/or Spark data processing pipelines.
- Prior software internship experience.