
Sr. Data Scientist (Azure / Databricks / Python)

Job in Deerfield, Lake County, Illinois, 60063, USA
Listing for: Chamberlain Advisors
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Scientist, Machine Learning / ML Engineer, Data Analyst, Data Engineer
Salary/Wage Range or Industry Benchmark: $125,000 – $150,000 USD yearly
Job Description & How to Apply Below
Title: Sr. Data Scientist (Azure / Databricks / Python)

Location

Deerfield, IL (Hybrid - 4 days onsite)

Duration & Type

7‑Month Contract with potential to extend

Compensation

Competitive W2 hourly rate ($52.35 – $52.70), with access to the healthcare and dental insurance plan of your choice. (Benefit plan details can be requested at time of submission to the client.)

Summary

Chamberlain Advisors is seeking a Sr. Data Scientist to support advanced analytics and machine learning initiatives for our client's team. This role is responsible for designing, developing, and operationalizing scalable data science solutions on the Azure cloud platform, with a strong emphasis on Azure Databricks and Python-based analytics. The ideal candidate brings strong statistical foundations, hands‑on machine learning expertise, and an ownership mindset across the full lifecycle, from data ingestion through deployment and monitoring.

Click Apply Now to join the Chamberlain experience!

What You Will Be Accountable For
  • Apply statistical analysis and machine learning techniques to solve complex business problems using large, high‑dimensional datasets.
  • Perform feature engineering and select appropriate modeling approaches based on data characteristics and business context.
  • Design, train, validate, and evaluate machine learning models using metrics such as AUC, precision/recall, RMSE, and related measures.
  • Build and maintain time‑series forecasting models using approaches such as ARIMA, Prophet, or machine learning‑based forecasting methods.
  • Conduct hyperparameter tuning, cross‑validation, and model performance optimization.
  • Provide model interpretability using techniques such as SHAP and feature importance.
  • Develop data science solutions using Azure Databricks as the primary analytics and modeling platform.
  • Leverage Spark and Spark SQL to process and analyze large‑scale datasets efficiently.
  • Design, build, and maintain scalable ETL and ELT pipelines in Databricks.
  • Write performant SQL to support analytics, data validation, reconciliation, and quality checks.
  • Implement data quality monitoring, anomaly detection, and validation across datasets ranging from millions to billions of records.
  • Support and maintain machine learning models in production environments, including monitoring and troubleshooting.
  • Collaborate with engineering, platform, and product teams to deploy and operationalize data science solutions.
  • Refactor and modernize legacy pipelines and models to improve performance, scalability, and maintainability.
  • Document models, pipelines, assumptions, and known limitations.
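To give a flavor of the model‑development work described above (hyperparameter tuning, cross‑validation, AUC evaluation, and feature importance), here is a minimal illustrative sketch using scikit‑learn. The dataset, parameter grid, and model choice are placeholders for illustration only, not the client's actual setup.

```python
# Illustrative sketch: tune, evaluate, and interpret a tree-based classifier.
# Synthetic data stands in for the client's large-scale datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Hyperparameter tuning via cross-validated grid search, scored on AUC
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [5, 10]},
    scoring="roc_auc",
    cv=5,
)
grid.fit(X_train, y_train)

# Evaluate the best model on held-out data
probs = grid.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, probs):.3f}")

# Global feature importances for interpretability
importances = grid.best_estimator_.feature_importances_
top = np.argsort(importances)[::-1][:5]
print("Top feature indices:", top.tolist())
```

In practice this kind of workflow would run on Azure Databricks against Spark‑backed data, with SHAP values layered on top of raw feature importances for richer interpretability.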
What Qualifications You Need
  • Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, Engineering, or a related quantitative field, or equivalent practical experience.
  • Typically 5 to 7 years of relevant experience in data science, advanced analytics, or machine learning roles.
  • Strong proficiency in Python for data analysis, machine learning, and production‑grade code.
  • Solid understanding of statistics, probability, feature engineering, and model evaluation techniques.
  • Hands‑on experience with Azure for data science and analytics workloads.
  • Strong familiarity with Azure Databricks for data processing, model development, and production pipelines.
  • Experience developing machine learning models, including regression models, tree‑based models such as Random Forest, XGBoost, and LightGBM, and time‑series models such as ARIMA or Prophet.
  • Strong SQL skills for analytical queries, data validation, reconciliation, and data quality checks.
  • Demonstrated experience working with large‑scale datasets in distributed computing environments.
  • Strong analytical thinking and problem‑solving skills.
  • Ability to clearly communicate technical concepts and findings to both technical and non‑technical stakeholders.
  • Self‑directed, proactive, and comfortable operating in ambiguous problem spaces.
  • Experience with MLOps practices, including model versioning, retraining strategies, and monitoring. (Preferred)
  • Experience with performance tuning and optimization in distributed data environments. (Preferred)
  • Experience working in regulated or enterprise environments. (Preferred)
  • Domain experience in retail, supply chain,…
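As a small illustration of the data‑validation and quality‑check skills called for above, the following pandas sketch computes a few common checks (null rates, duplicate keys, out‑of‑range values). Table and column names are hypothetical; on the job this would typically run in Spark/Databricks at far larger scale.

```python
# Illustrative sketch: simple data-quality checks on a toy orders table.
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 2, 4, 5],          # note the duplicate key
    "amount": [10.0, None, 25.5, -3.0, 99.0],  # note the null and negative value
})

checks = {
    "null_rate_amount": df["amount"].isna().mean(),
    "duplicate_order_ids": int(df["order_id"].duplicated().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
}

# Surface only the checks that flagged a problem
failed = {name: value for name, value in checks.items() if value > 0}
print("Failed checks:", failed)
```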