Sr. DevOps Engineer at DoubleVerify
The DevOps team is made up of engineers working cross-functionally to provide all of our product infrastructure and automation. You will be part of a small, talented team that sits at the center of multiple software engineering teams, providing standardized tools and streamlined processes.
We at DoubleVerify believe that giving our people a broad range of responsibilities results in the highest satisfaction for the engineers and a strong return on investment for the company. We want people who love the idea of working on tools and system testing one week and fine-tuning Kubernetes internals the next. The ideal candidate embraces continuously evolving architecture and wants to play a role in designing how things are done here.
You will be working with our team to help architect, build, and support a high volume/low latency platform that processes several terabytes of data each day. You will have the freedom to innovate and take your projects from test to production in a short time.
- Work with peers to improve and expand our Kubernetes infrastructure
- Contribute to the design and implementation of new products and features, ensuring they fit cleanly into our Continuous Delivery framework and processes
- Design, deploy, and manage streaming services such as Kafka, Spark, and HDFS
- Create and support cloud (GCP) and local cluster environments and infrastructure
- Work alongside the development teams to provision, automate, and tune multiple environments across datacenter and cloud platforms
- Identify bottlenecks, sniff packets, and create dashboards on the fly
Who you are:
- Previous experience in a DevOps role working closely with Software Engineers
- Experience with Docker and container orchestration platforms (Kubernetes preferred)
- Ability to leverage application and system metrics, log events, and wire data to analyze performance
- Mastery of one or more configuration management frameworks (Ansible preferred)
- Hands-on experience with distributed data stores and data streaming services such as Spark and Kafka
- Scripting skills that support working with APIs and harvesting custom metrics (Python or Go is great!)
- Good communication skills, a great personality, and a love for working collaboratively
- Experience working with GCP or other public cloud
- 2+ years’ experience in a Linux environment (4+ years an advantage)
- 3+ years’ experience as a DevOps engineer (4+ years an advantage)
- Experience with large-scale production systems
- Hands-on experience with Python/Bash
- Experience with CI/CD and DevOps tooling: Git/GitHub, TeamCity/Jenkins, Ansible, Artifactory, Atlassian Suite, Maven, NuGet
- Experience with one of the following infrastructure automation tools: Ansible, Chef, Puppet (we use Ansible)
- Experience with containerized environments and microservices
- Hands-on experience with Kubernetes (at least 2 years)
- Fluency in English
- Experience with Linux system administration
- Experience implementing package management
Nice to have:
- Experience with Rancher
- Experience with K8S Operators
- Experience with Helm charts
- Experience with monitoring and metrics collection/processing (Prometheus/Grafana stack)
- Experience with logging and log analysis systems (ELK, Splunk)