Staff Engineer

You Will...

-Serve as an architect for proprietary systems, lead development process improvement initiatives that require architectural changes and expertise, and recommend required architectural, design, and implementation changes.
-Define the knowledge management process, establish design and coding guidelines and best practices, and mentor and coach junior developers.
-Conduct technical screenings, interview candidates, and hire engineering talent.
-Drive technical code reviews, ensure technical cohesion, write technical and functional design documents, lead technical discussions, and provide guidance and peer review of deliverables.
-Oversee integration and implementation with strategic partners and work closely with internal and external engineering teams to prioritize and scope the smooth rollout of proprietary products.
-Coordinate with product and engineering teams and represent the team in planning sessions.
-Develop and deploy microservices with a focus on APIs for data exchange and externalization.
-Use technologies including Kafka, Spark, Hadoop, Hive, Scala, and Kubernetes to implement highly scalable, fault-tolerant data pipelines that process, enhance, and distribute large volumes of data; perform data analysis and write SQL queries (an illustrative pipeline sketch follows this list).
-Enhance and optimize data pipelines for various internal and external use cases, providing customers with insights in near real time.
-Manage one to three direct reports.
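
For illustration only (not part of the role description): a minimal sketch of the kind of pipeline referenced above, assuming a Spark Structured Streaming job in Scala that reads events from a Kafka topic, aggregates them in event-time windows, and writes results to storage. The broker address, topic name, schema, and output paths are hypothetical placeholders.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ImpressionPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("impression-pipeline-sketch").getOrCreate()

    // Hypothetical event schema; a real pipeline would use its own data contract.
    val schema = new StructType()
      .add("campaignId", StringType)
      .add("eventTime", TimestampType)
      .add("impressions", LongType)

    // Read the raw event stream from Kafka (broker and topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "impressions")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Event-time windowed aggregation; the watermark bounds late data, and
    // checkpointing plus Kafka offsets provide fault tolerance on restart.
    val perCampaign = events
      .withWatermark("eventTime", "10 minutes")
      .groupBy(window(col("eventTime"), "5 minutes"), col("campaignId"))
      .agg(sum(col("impressions")).as("impressions"))

    // Append completed windows to a placeholder Parquet location.
    perCampaign.writeStream
      .outputMode("append")
      .format("parquet")
      .option("path", "/tmp/out/impressions")
      .option("checkpointLocation", "/tmp/chk/impressions")
      .start()
      .awaitTermination()
  }
}
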
Requirements:
-Bachelor’s degree in computer science, information technology, or engineering; alternatively, a master’s degree in computer science, information technology, or engineering plus three years of experience in data engineering and distributed systems.
-5 years of overall progressive experience in data engineering and distributed systems.
-This experience must include 3 years of experience in the following:
 (1) Kafka;
 (2) Hadoop;
 (3) Spark;
 (4) Kubernetes;
 (5) implementing highly scalable and fault-tolerant data pipelines to process large streams of data;
 (6) data analysis, including SQL query writing;
 (7) optimizing and enhancing existing data pipelines;
 (8) collaborating with engineers, architects, and IT team members to build existing data platforms;
 (9) maintaining and managing existing data pipelines, including resolving production issues; and
 (10) automation to monitor data pipelines.

Location

DoubleVerify is located in the SoHo neighborhood of New York. This neighborhood is full of great places to grab lunch or shop.
