Software Engineer - Platform
Background
Before new medical treatments can be administered to the public, they must demonstrate safety and efficacy in a clinical trial. These trials protect consumers from ineffective and dangerous products, but the clinical trial process also presents a tremendous bottleneck in delivering life-saving treatments to patients. A typical trial involves coordinating among numerous parties and data formats to gather, store, analyze, and audit clinical data. Mistakes and delays are common, and fewer than 10% of trials finish on time.
At TrialSpark, we are looking for talented software engineers to help us reimagine the clinical trial process from first principles and build the backbone for our technology platform.
Description
This is a foundational software engineering role at TrialSpark, both literally and figuratively. You will build out core infrastructure that helps our engineering and data teams scale, iterate, test, and release in a timely way. There is a great deal of greenfield work to be done, with significant potential for impact. Your purview will include our backend API servers, our database systems, and our cloud and data infrastructure.
Responsibilities
- Develop infrastructure, frameworks, APIs, and libraries to support and enable our engineering and data teams.
- Own monitoring, logging, alerting, tracing, deployment systems, and various environments.
- Build and manage a robust data pipeline that can integrate a variety of data sources.
- Inform major design, architecture, and capacity planning decisions for the engineering team by applying a seasoned systems perspective.
- Help enforce best practices and promote testability and maintainability throughout our systems and codebase.
- If it can be automated, you will automate it.
Qualifications
- 2+ years of software development experience in a platform, data, or backend engineering role.
- Experience with Linux, cloud technologies (AWS), and databases (Postgres, Redshift) required.
- Strong working knowledge of SQL and Python.
- Experience with a data pipeline scheduling platform like Airflow or Luigi is a plus.
- Understanding of networking and security concepts.
- Excellent problem solving and debugging skills.
- Exceptional communication skills, with the ability to explain complicated systems to both technical and non-technical audiences.
- B.S. in Computer Science or a related field, or equivalent experience.
Bonus points
- If you’ve worked with React, GraphQL, Flask, SQLAlchemy, Go, or a data pipeline platform such as Airflow or Luigi.
- If you really enjoy making things better and/or faster - whether that’s optimizing API endpoints or SQL queries, ironing out ETL pipelines, improving overall engineering velocity and developer experience, adding instrumentation and tracing, or even cleaning up error messages!