Data Engineer - Enterprise AI
Listed on 2026-01-12
IT/Tech
Data Engineer, AI Engineer
Overview
Chevron is accepting online applications for the position of Data Engineer - Enterprise AI through January 20, 2026 at 11:59 p.m. (Central Time).
Join Chevron’s Enterprise AI team to build the next generation of intelligent data solutions that power advanced analytics and AI-driven decision-making. As a Data Engineer, you will apply software engineering principles to design and implement scalable, high-performance data and AI solutions that enable agentic AI systems, machine learning, predictive modeling, and real-time insights across global operations. This role is ideal for engineers passionate about modern data architectures, cloud-native technologies, and applying AI principles to enterprise-scale challenges.
You will deploy and maintain fully automated data transformation pipelines that integrate diverse storage and computation technologies to handle a wide range of data types and volumes. A successful Data Engineer designs data products and pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective, ensuring our data ecosystem is future-proof for the rapidly changing world of AI.
Key Responsibilities
- Architect and Optimize Data Pipelines: Design, develop, and maintain robust ETL/ELT pipelines leveraging Databricks (including Databricks Genie for AI-assisted development), Azure Data Factory, Azure Synapse, and Microsoft Fabric. Architect solutions on a holistic AI foundation, ensuring pipelines and frameworks are built to support agentic AI systems, machine learning, and generative AI at scale.
- Enable AI-Ready Data: Build modular, reusable data assets and products optimized for AI workloads, ensuring data quality, lineage, governance, and interoperability across multiple AI applications.
- Collaborate Across Disciplines: Partner with AI delivery teams, including software engineers, AI engineers, and applied scientists, to deliver AI-ready datasets and features that accelerate model development and deployment.
- Performance and Scalability: Optimize pipelines for big data processing using Spark, Delta Lake, and Databricks-native capabilities, ensuring scalability and reliability for enterprise-scale AI workloads.
- Cloud-Native Engineering: Implement best practices for CI/CD, infrastructure-as-code, and DevOps using Azure DevOps, Git, and Ansible, while integrating with Databricks workflows for seamless deployment and reuse.
- Innovation and Continuous Learning: Stay ahead of emerging technologies in data engineering, AI/ML, and cloud ecosystems, leveraging AI tools like Databricks Genie and Agent Bricks, along with emerging tools and services within Azure AI Foundry and Fabric, to accelerate development and maintain cutting-edge, reusable solutions.
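The responsibilities above emphasize modular, reusable pipeline stages. As an illustrative sketch only (plain Python standing in for a Databricks/Spark job; all function and field names are hypothetical, not Chevron code), writing each stage as an independent function keeps it testable and composable:

```python
def clean_nulls(rows):
    """Simple data-quality gate: drop rows missing required fields."""
    required = {"id", "value"}
    return [r for r in rows if required <= r.keys() and r["value"] is not None]

def to_celsius(rows):
    """Example transformation: derive a Celsius reading from Fahrenheit."""
    return [{**r, "value_c": (r["value"] - 32) * 5 / 9} for r in rows]

def run_pipeline(rows, stages):
    """Compose independent stages; each stage remains reusable on its own."""
    for stage in stages:
        rows = stage(rows)
    return rows

raw = [
    {"id": 1, "value": 212.0},   # valid row
    {"id": 2, "value": None},    # dropped: null value
    {"value": 50.0},             # dropped: missing id
]
out = run_pipeline(raw, [clean_nulls, to_celsius])  # → one clean, enriched row
```

In a real Databricks pipeline each stage would operate on Spark DataFrames with schemas enforced by Delta Lake and Unity Catalog, but the same stage-composition idea applies.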
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrated high proficiency in programming fundamentals.
- At least 5 years of proven experience as a Data Engineer or in a similar role working with data and ETL processes on cloud-based data platforms (Azure preferred).
- Strong hands-on experience with Databricks (Lakehouse, Delta Lake, Unity Catalog) and Microsoft Azure services (Azure Data Factory, Azure Synapse, Azure Blob Storage, and Azure Data Lake Storage Gen2).
- Strong understanding of data modeling, data governance, and software engineering principles as they apply to data engineering (e.g., CI/CD, version control, testing).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Qualifications
- Demonstrated learning agility in emerging data and AI tools and services.
- Experience integrating AI/ML pipelines or feature engineering workflows into data platforms.
- Strong experience in Python is preferred, but experience in other languages such as Scala, Java, or C# is acceptable.
- Experience building Spark applications using PySpark.
- Experience with file formats such as Parquet, Delta, Avro.
- Experience efficiently querying API endpoints as a data source.
- Understanding of the Azure environment and related services such as subscriptions, resource groups, etc.
- Understanding of Git workflows in software development.
- Experience using Azure DevOps Pipelines and Repos to deploy and maintain solutions.
- Understanding of Ansible and how to use it in Azure DevOps pipelines.
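One of the qualifications above mentions efficiently querying API endpoints as a data source. A common pattern is cursor-based pagination; the sketch below is illustrative only (the `{"items": ..., "next_cursor": ...}` payload shape is a hypothetical convention, and the HTTP call is abstracted behind a callable so the pagination logic itself stays unit-testable without a network):

```python
from typing import Callable, Iterator, Optional

def iter_pages(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every record from a cursor-paginated API.

    `fetch_page(cursor)` is assumed to return a payload like
    {"items": [...], "next_cursor": "abc" or None}. Real APIs vary
    (offset/limit parameters, Link headers, etc.), so adapt the shape.
    """
    cursor = None
    while True:
        payload = fetch_page(cursor)        # one request per page
        yield from payload["items"]         # stream records to the caller
        cursor = payload.get("next_cursor")
        if cursor is None:                  # no cursor means last page
            break
```

In production the `fetch_page` callable would wrap an HTTP client call with retries and backoff; keeping it injectable is what makes the pagination loop easy to test.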
Relocation and International Considerations
- Relocation may be considered.
- Expatriate assignments will not be considered.
Chevron regrets that it is unable to sponsor employment visas or consider individuals on time-limited visa status for this position.
Details
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Oil and Gas
- Location: Houston, TX