Data Engineer, Social Integrations at DoubleVerify
Who we are
DoubleVerify is the leader in digital performance solutions, improving the impression quality and audience impact of digital advertising. Built on best practices, DoubleVerify solutions create value for media buyers and sellers by bringing transparency and accountability to the market, ensuring ad viewability, brand safety, fraud protection, accurate impression delivery and audience quality across campaigns to drive performance.
Since 2008, DoubleVerify has helped hundreds of Fortune 500 companies get the most value from their media spend by delivering best-in-class solutions across the digital ecosystem that help build a better industry.
As a Data Engineer on the Social Integrations Engineering Team, you will work with high-volume datasets of hundreds of data points and research many aspects of this social integrations data for monitoring, anomaly detection, and various business use cases, helping our clients make smarter decisions that continuously improve their ad-impression quality.
What you’ll do
- Design, develop, and test data-driven products and features
- Explore new ways of producing, processing, and analyzing data in order to gain insights into our product features
- Work with state-of-the-art data processing frameworks, technologies, and platforms
- Analyze data and build large-scale batch and real-time data pipelines using technologies such as Spark, Kafka, Kubernetes, and Google Cloud Platform
- Help drive optimizations and tooling to improve data quality
- Collaborate with other engineers, data analysts, and decision-makers, such as product owners, to build solutions and gain novel insights
- Work in multi-functional agile teams to continuously experiment, iterate, and deliver on new product objectives
- Act as the bridge between our backend and product teams, working on data management and building and maintaining crucial data pipelines
Who you are
- You have a BS/MS in Computer Science, Engineering, or a related field
- 1-2 years of experience in data modeling, data access, and data storage techniques
- Proven experience building Big Data pipelines using Spark and Kafka
- Interested in being the glue between engineering and product
- You don’t like leaving questions unanswered and love exploring and understanding data
- Have a passion for data and for transforming numbers into key business insights
- Excellent SQL skills, preferably in Hive and/or Spark SQL
- You love visualizing your data findings in a clear, easy-to-understand way and capturing corner cases of implementations
- Care about agile software processes, data-driven development, and responsible experimentation
- Passionate about crafting clean code, with a solid foundation in coding and building data pipelines