AI/ML Data Engineer; Databricks
Listed on 2026-01-23
IT/Tech
Data Engineer, AI Engineer, Machine Learning/ML Engineer, Data Science Manager
The Opportunity
QuidelOrtho unites the strengths of Quidel Corporation and Ortho Clinical Diagnostics, creating a world-leading in vitro diagnostics company with award-winning expertise in immunoassay and molecular testing, clinical chemistry and transfusion medicine. We are more than 6,000 strong and do business in over 130 countries, providing answers with fast, accurate and consistent testing where and when they are needed most - home to hospital, lab to clinic.
Our culture puts our team members first and prioritizes actions that support happiness, inspiration and engagement. We strive to build meaningful connections with each other as we believe that employee happiness and business success are linked. Join us in our mission to transform the power of diagnostics into a healthier future for all.
The Role
We are seeking an AI/ML Data Engineer to support our Global Data and Analytics team. The AI/ML Data Engineer will be responsible for designing, building, and optimizing data pipelines and infrastructure using Databricks to support AI and machine learning (ML) initiatives. This role will involve working closely with business stakeholders to identify high-value AI/ML use cases and translating business requirements into technical solutions.
The engineer will work to ensure the successful deployment of AI/ML solutions at scale, leveraging Azure services and Databricks tools.
This position is remote eligible, with a strong preference for candidates based in San Diego who can maintain an office presence.
The Responsibilities
- Work directly with business stakeholders to identify and define AI/ML use cases, translating business needs into technical requirements.
- Design, develop, and optimize scalable data pipelines in Databricks for AI/ML applications, ensuring efficient data ingestion, transformation, and storage.
- Build and manage Apache Spark-based data processing jobs in Databricks, ensuring performance optimization and resource efficiency.
- Implement ETL/ELT processes and orchestrate workflows using Azure Data Factory, integrating various data sources such as Azure Data Lake, Blob Storage, and Microsoft Fabric.
- Collaborate with Data Engineering teams to meet data infrastructure needs for model training, tuning, and deployment within Databricks and Azure Machine Learning.
- Monitor, troubleshoot, and resolve issues within Databricks workflows, ensuring smooth operation and minimal downtime.
- Implement best practices for data security, governance, and compliance within Databricks and Azure environments.
- Automate data and machine learning workflows using CI/CD pipelines through Azure DevOps.
- Maintain documentation of workflows, processes, and best practices to ensure knowledge sharing across teams.
- Perform other work-related duties as assigned.
Required:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Minimum 3 years of experience in data engineering, with a strong focus on Databricks and AI/ML applications.
- Proven experience working directly with business stakeholders to identify and implement AI/ML use cases.
- Expertise in Apache Spark and hands-on experience with Databricks for building and optimizing data pipelines.
- Strong programming skills in Python and Scala for data engineering and machine learning workflows in Databricks.
- Experience with Azure Data Factory, Azure Data Lake, Azure Blob Storage, and Azure Synapse Analytics.
- Proficiency with Databricks Delta Lake for data reliability and performance optimization.
- Familiarity with MLflow and Databricks Runtime for Machine Learning for model management and deployment.
- Knowledge of Azure DevOps for implementing CI/CD pipelines in Databricks-based projects.
- Strong understanding of data governance, security practices, and compliance requirements in cloud environments.
- Familiarity with emerging Databricks features such as Delta Live Tables and Unity Catalog.
- Ability to travel up to 5-10%.
- This position is not currently eligible for visa sponsorship.
Preferred:
- Experience with real-time data processing using Apache Kafka or Azure Event Hubs.
- Master's degree in Computer Science or a related technical field.
Internal Partners:
- Regular collaboration with business stakeholders to identify and implement AI/ML solutions that drive business value.
- Close interaction with data engineers and cross-functional teams to ensure the successful integration of data pipelines and AI/ML models.
- Work alongside IT teams to optimize cloud infrastructure and resource allocation within Databricks and Azure.
External Partners:
- Customers and Vendors.
No strenuous physical activity is required, though occasional light lifting of files and related materials is needed. Approximately 30% of time is spent in meetings, working with the team, or on the phone, and 70% at a desk on a computer doing analytical work. Minimal travel is required (5-10%), including airplane and automobile travel and overnight hotel stays.
Physical Demands
Typically, 40% of…