Who We Are
Yieldmo is an advertising technology company that operates a smart exchange that differentiates and enhances the value of ad inventory for buyers and sellers. As a leader in contextual analytics, real-time technology, and digital formats, we create, measure, model, and optimize campaigns for unmatched scale and performance. By understanding how each unique impression behaves and looking for patterns and performance in real time, we can drive real performance gains without relying on audience data.
What We Need
We are looking for a Senior Data Scientist to develop optimization models that enhance our marketplace. You will use statistical analysis, machine learning, and optimization techniques to answer questions such as: What is the propensity for a buyer to purchase an ad impression? What price will the buyer pay for it? How valuable is this ad opportunity? You will be part of a team of data scientists and engineers focused on increasing yield for sellers on our marketplace while ensuring that buyers’ performance objectives are met. You will use a cutting-edge cloud technology stack to work with a dataset comprising billions of daily records and petabytes of storage, and make hundreds of millions of predictions deployed to environments requiring millisecond response times.
Responsibilities
- Develop models that help predict the value of an ad impression on our marketplace.
- Perform large-scale data analysis and develop optimization algorithms.
- Apply predictive analytics and modeling techniques.
- Apply machine learning to high-dimensional data.
- Perform statistical data modeling and analysis.
Requirements
- 5+ years of experience working as a Data Scientist.
- A passion for innovating with data science at scale – applying modern algorithms to massive datasets and creating measurable business value.
- Ability to develop working prototypes, prove that they add meaningful value, and ensure that they are implemented properly in a production environment.
- Excellent understanding of algorithms, scalability, and various tradeoffs in a big data setting.
- Strong verbal and written communication skills.
- Comfortable working with onshore and offshore distributed teams.
- Strong expertise in at least one language for data manipulation, analysis, and machine learning, such as Python, R, or MATLAB, including experience interacting with distributed computing resources.
- Experience in querying and manipulating data with SQL or another data query language in a big data environment.
- MS or equivalent combination of education and experience.
Nice to Haves
- Ad tech experience (SSPs, DSPs, Analytics, DMPs, CDPs).
- Intermediate-level programming experience with Python.
- Microservices architecture experience.
- Exposure to Snowflake and Looker or similar platforms.
- Exposure to high-volume low-latency environments.
- Exposure to AWS or Google Cloud.
What We Offer
- White-glove service to help you upgrade your home office.
- 1 Mental Escape (ME) day each month to fully unplug and recharge.
- Work-life balance, flexible PTO, and competitive compensation packages.
- A generous learning stipend and other opportunities for professional development.