As a data scientist, you’ll be responsible for diving into our data lake, uncovering interesting insights, and visualizing them in dazzling ways. You’ll work with the Peloton App team to understand behavior across our user journey and develop data-driven recommendations with a focus on measurable results. As part of the Data Science team, you’ll work with data scientists and engineers who interface with the entire company, cross-pollinating analytic and business insights to the benefit of all Peloton members and employees.
Leverage cutting-edge data science capabilities and technology to design products used to help healthcare clients plan, target, measure and optimize their marketing campaigns.
Contribute to the development of Crossix’s rapidly growing, market-leading programmatic modeling solutions.
Help discover and deliver innovative, advanced-analytics-powered features and product offerings, from prototype to massive scale.
Apply machine learning and advanced analytics techniques to large data sets of health and consumer data.
Explore and find meaning in extremely high volumes of data to extract actionable insights that help drive business decisions; perform data querying, data cleansing, and experiment design.
As a Data Scientist at Ohi, you will build the technical systems and analytics the business needs to scale and grow quickly. You will be part of a team that prioritizes collaboration, personal development, and getting things done. You will be part of a company that aims to give consumers a faster, better, and more sustainable experience by providing e-commerce companies with an optimized platform.
Cherre is looking for a Data Scientist to build ML algorithms and AI systems. This role requires a combination of quantitative and software engineering skills to build scalable solutions. Because of the complexity and variety of the datasets we use, it's critical that you have an interest in learning about real estate and derived datasets. Previous real estate experience is not required. You will not only do analysis but will also be expected to deliver production code.
* Our Senior Data Scientists are end-to-end owners. You will participate actively in all aspects of designing, researching, building, and delivering data-focused products for our clients.
* Our Data Science team is embedded into nearly a dozen individual, mission-focused engineering teams working across a wide spectrum of technological and scientific challenges. Depending on your interests and background, you will have the opportunity to work on problems related to natural language processing and machine learning over hundreds of millions of web pages; graph mining and algorithmic optimization on petabytes of data; differential privacy across billions of user IDs; and causal inference with missing data, to name a few.
* Senior Data Scientists contribute to more than our product: they build up our team. Through a combination of mentoring and technical leadership, they make others better and raise the bar for those around them.
We're looking for a Senior Data Scientist to join our Data Insights Engineering team and help us accomplish our mission of improving lives by learning from the experience of every cancer patient. Here's what you need to know about the role, our team and why Flatiron Health is the right next step in your career.
Summer is hiring a Data Analyst to power insights across the entire company. Data lies at the core of our loan algorithms as well as our business decisions, and this position will involve cross-team collaboration to transform data into actionable strategies.
We are looking for smart and ambitious people to help us understand our data better. We are creating a centralised team of business analysts who will work with all teams across the company. This is a unique opportunity to learn about all aspects of our business. Your main tools will be SQL, dashboards, and a thorough understanding of our business. You will also act as a bridge between product owners and engineers.
We are looking for an enthusiastic data engineer who is interested in working with a fast-growing team building industry-leading real estate data services. You will help design and implement server-side services to ingest, organize, analyze, and display real estate data and insights. You will work in a small team and be a real partner in the design and implementation of all aspects of our product.
Analyze and interpret data acquired from internal and external sources; develop ETL processes and mapping systems, and optimize efficiency and data quality using SAS/R and SQL;
Design and maintain databases to support new business models and reporting requirements using R, SQL, and categorical data analysis;
Analyze complex datasets and build models to interpret and predict trends and patterns in the customer base using time series analysis and forecasting, and linear regression (simple, multiple, and multivariate);
Develop analytical frameworks and set KPI metrics to monitor business and operational performance.
Manage competing priorities across the company.
Maintain and automate reporting infrastructure.
Manage the design and architecture of our Data Warehouse.
Create scripts to automate and manage ETL processes and dependencies.
Advise on the design of our application DB, machine learning components, and our data infrastructure.
Clean and restructure datasets.
Manage and optimize reporting systems.
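The multivariate regression modeling named in the responsibilities above can be sketched with a minimal example. This is purely an illustration, not part of any posting: the data and coefficients are made up, and Python/NumPy stand in for the SAS/R and SQL tooling the roles actually mention.

```python
import numpy as np

# Made-up data for a multiple linear regression: predict a customer
# metric y from two synthetic features (e.g. tenure, monthly sessions).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 + rng.normal(scale=0.1, size=100)

# Ordinary least squares: append a column of ones for the intercept
# and solve y ≈ A @ beta.
A = np.column_stack([X, np.ones(len(X))])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)  # close to the true coefficients [3.0, -1.5, 0.5]
```

The same fit could be expressed in R as `lm(y ~ x1 + x2)`, which is closer to the toolset these postings list.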
As a Data Engineer at Reonomy, you will tackle hard challenges every day! We are building a data infrastructure that can manage the complexities of commercial real estate and scale to support the disparate datasets required to build a first-of-its-kind data product. Using a mix of the latest tech and proven tools, we are pioneering data discovery, pipelining, extraction, importation, sanitization, and visualization of massive datasets for an industry eager to use an Enterprise CRE SaaS solution like ours in its daily workflow. We take on tough problems in machine learning and scalable data processing using technologies like Scala, Spark, Postgres, ElasticSearch, and Docker.