Who we are
DoubleVerify is the leading independent provider of marketing measurement software, data and analytics that authenticates the quality and effectiveness of digital media for the world's largest brands and media platforms. DV provides media transparency and accountability to deliver the highest level of impression quality for maximum advertising performance. Since 2008, DV has helped hundreds of Fortune 500 companies get the most from their media spend by delivering best-in-class solutions across the digital ecosystem, helping to build a better industry. Learn more at www.doubleverify.com.
The DevOps team is made up of engineers working cross-functionally to provide all of our product infrastructure and automation. You will be part of a talented team that sits at the center of multiple software engineering teams, providing standardized tools and streamlined processes.
What you will do:
- Work with peers to improve and expand our Kubernetes infrastructure
- Contribute to the design and implementation of new products and features, ensuring they fit cleanly into our Continuous Delivery framework and processes
- Design, deploy, and manage streaming services such as Kafka, Spark, and HDFS
- Create and support cloud (GCP) and local cluster environments and infrastructure
- Work alongside development teams to provision, automate, and tune multiple environments across datacenter and cloud platforms
- Identify bottlenecks, sniff packets, and create dashboards on the fly
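To give a flavor of the "identify bottlenecks on the fly" work above, here is a minimal Python sketch that summarizes p95 latency per endpoint from log events. The event list, endpoint names, and latency values are all invented for illustration; in practice the data would come from a log pipeline or packet capture rather than a hard-coded list.

```python
import math

# Hypothetical log events as (endpoint, latency_ms) pairs; in practice these
# would be pulled from a log pipeline or wire data rather than hard-coded.
events = [
    ("/ads", 120), ("/ads", 95), ("/verify", 480),
    ("/verify", 510), ("/ads", 110), ("/verify", 505),
]

def p95(samples):
    """95th-percentile latency using the nearest-rank method."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Group latencies by endpoint and print a quick summary,
# the kind of ad hoc dashboard data the role calls for.
by_endpoint = {}
for endpoint, ms in events:
    by_endpoint.setdefault(endpoint, []).append(ms)

for endpoint, samples in sorted(by_endpoint.items()):
    print(f"{endpoint}: p95={p95(samples)}ms over {len(samples)} events")
```

The same grouping-and-percentile pattern scales from a one-off script to a scheduled job feeding a real dashboard.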
Who you are:
- 5+ years of experience as a DevOps engineer
- 2+ years of leadership experience, including managing direct reports
- 2+ years of experience in Linux environments, including system administration
- 2+ years of hands-on experience with Python/Bash, with scripting skills that support working with APIs and harvesting custom metrics (Python or Go)
- 2+ years of hands-on experience with Docker and container orchestration platforms (Kubernetes preferred)
- 1+ year of experience with a configuration management framework: Ansible (preferred), Chef, or Puppet
- Experience with GCP, AWS, or another public cloud, and with cloud architectures
- Experience with large-scale production systems, containerized environments, microservices, and distributed systems
- Hands-on experience with distributed data stores and data streaming services such as Spark or Kafka
- Ability to leverage application and system metrics, log events, and wire data to analyze performance
- Experience with CI/CD and DevOps tools: Git/GitHub, TeamCity/Jenkins, Ansible, Artifactory, the Atlassian suite, Maven, or NuGet
- Experience implementing package management
- Excellent verbal and written communication skills, a love of working collaboratively, and the ability to communicate effectively with technical and non-technical stakeholders across all levels of the organization
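To make the scripting expectation concrete, here is a minimal Python sketch of harvesting custom metrics from a service's JSON status payload. The endpoint shape, the field names (`queue_depth`, `workers`), and the derived `worker_utilization` metric are all hypothetical; a real script would fetch the payload over HTTP (e.g. with urllib or requests) instead of stubbing it.

```python
import json

# Stubbed response from a hypothetical /status endpoint; in a real script
# this JSON would be fetched over HTTP from the service being monitored.
raw = json.dumps({
    "queue_depth": 42,
    "workers": {"busy": 7, "idle": 3},
})

def harvest(payload):
    """Flatten a JSON status payload into metric-name/value pairs."""
    status = json.loads(payload)
    metrics = {"queue_depth": status["queue_depth"]}
    busy, idle = status["workers"]["busy"], status["workers"]["idle"]
    # Derive a custom gauge not exposed directly by the service.
    metrics["worker_utilization"] = busy / (busy + idle)
    return metrics

for name, value in harvest(raw).items():
    print(f"{name} {value}")
```

The `name value` output lines mirror the simple exposition style that metrics collectors such as Prometheus can scrape, so a script like this slots naturally into an existing monitoring stack.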
Nice to have:
- Experience with Rancher
- Experience with Kubernetes Operators
- Experience with Helm charts
- Experience with monitoring and metrics collection/processing (Prometheus/Grafana stack)
- Experience with logging and log analysis systems (ELK, Splunk)