Senior Data Analytics Engineer
Electric is a Series B startup backed by Bessemer Venture Partners, Bowery Capital, GGV Capital, and Primary Venture Partners, and led by a team of seasoned entrepreneurs, operators, and technologists. With software forming the foundation of every office, Electric is the world’s first all-in-one, modern IT support solution that can truly meet the needs of growing businesses. Through a chat interface, personalized service, and flat-rate pricing, we keep our clients’ email, computers, Wi-Fi, and software running smoothly at a fraction of the cost, and without the headaches, normally associated with traditional managed service providers.
Our company is a fun, fast-paced environment with enormous opportunities for career advancement.
Responsibilities
- Champion software engineering best practices such as code modularity, testing, version control, code review, observability, and CI/CD within our data analytics team.
- Become the go-to expert and owner of tools up and down our data stack, including Redshift, Airflow, DBT, Stitchdata, Segment, and Sisense (formerly Periscope).
- Guide the evolution of our data platform architecture as requirements and use cases change over time. Partner with the broader Engineering team on larger data infrastructure initiatives.
- Build tools and processes to scale the delivery of high-quality analytical artifacts and insights in a rapidly changing environment.
- Perform code reviews and mentor data analysts and junior team members on software engineering practices.
Who You Are
- 5+ years of work experience in a highly technical or analytical environment, including at least 3 years in a software engineering role with a back-end or data focus.
- Degree in CS, math, physics, or a hard science is a plus, although we care more about motivation and intellectual curiosity.
- Strong knowledge of Python, including experience with the PyData stack (e.g. Pandas for data manipulation and processing).
- First-hand experience operating and/or building solutions using job execution frameworks (e.g. Airflow, Luigi, Dagster) and MPP databases (e.g. Redshift, BigQuery, Snowflake).
- Capable of writing highly performant SQL in your sleep, and has a favorite (and least favorite) business intelligence tool.
- Familiar with data engineering and distributed systems principles. Can explain why one might choose a column-oriented database, when it might be appropriate to sacrifice consistency guarantees, and how to guarantee “at least once” message delivery.
- Habitually pursues high test coverage and automation of rote tasks, with an eye toward driving up overall quality. Excited to explore the relatively nascent world of data testing.
- Motivated to make the most of off-the-shelf tools and avoid writing custom ETL code, but willing to do so in a pinch.
- Well organized and analytical by nature, with excellent collaboration and communication skills.
- Has at some point strongly considered reading a book by Ralph Kimball.
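As a flavor of the distributed-systems thinking mentioned above: “at least once” delivery means a message may be redelivered after a dropped acknowledgment, so consumers must be idempotent. A minimal stdlib Python sketch (all names here are illustrative, not part of our stack):

```python
# At-least-once delivery sketch: the broker may redeliver a message,
# so the consumer deduplicates on a message ID before applying side
# effects. Message/consume are hypothetical names for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Message:
    id: str
    amount: int


def consume(messages, ledger, seen):
    """Apply each message's effect exactly once, even if duplicates arrive."""
    for msg in messages:
        if msg.id in seen:          # duplicate redelivery: skip it
            continue
        ledger.append(msg.amount)   # the side effect
        seen.add(msg.id)            # record only after the effect succeeds


# Simulate a redelivery of message "a" after a lost ack.
ledger, seen = [], set()
consume([Message("a", 10), Message("b", 5), Message("a", 10)], ledger, seen)
print(sum(ledger))  # 15, not 25
```

The same reasoning explains why retries plus idempotent writes are usually preferable to chasing exactly-once guarantees in a pipeline.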
What to Expect
- By day 15: Get familiar with our production environment, data schema, and analytics tooling. Put your first batch job into production.
- By day 30: Decompose our entire production schema into clean, reusable DBT models for use by our product and analytics teams.
- By day 90: Build out an automated data testing suite for our key data models and reports. Prototype new tooling for user segmentation and A/B testing.
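To make the data-testing goal concrete, here is a minimal sketch of the kind of check such a suite might run: asserting that a model’s primary key is unique and non-null. The row shape and `client_id` column are hypothetical, not our actual schema:

```python
# Minimal data-quality check: a model's primary key column must be
# unique and non-null. Rows are plain dicts for illustration.
def check_unique_not_null(rows, key):
    values = [r.get(key) for r in rows]
    assert all(v is not None for v in values), f"null value in {key}"
    assert len(values) == len(set(values)), f"duplicate value in {key}"
    return True


rows = [{"client_id": 1}, {"client_id": 2}, {"client_id": 3}]
print(check_unique_not_null(rows, "client_id"))  # True
```

In practice the same assertions are typically expressed declaratively (e.g. as DBT schema tests) rather than hand-rolled, which is exactly the off-the-shelf-first mindset described above.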