Senior Databricks Data Engineer
Job in Dallas, Dallas County, Texas, 75215, USA
Listed on 2026-03-01
Listing for:
InnoCore Solutions, Inc.
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing, Big Data
Job Description & How to Apply Below
Responsibilities
- Responsible for leading the design, development, optimization, and deployment of large-scale, robust data solutions and data pipelines on the Databricks Lakehouse platform. This role requires technical leadership, advanced performance-tuning expertise, and a strong understanding of cloud architecture and DevOps practices.
- Lead the end-to-end design and implementation of scalable data pipelines and a medallion architecture (bronze, silver, gold layers) for structured and unstructured data.
- Architect and implement complex ETL/ELT processes to process and transform large-scale datasets from diverse sources using Databricks notebooks, Delta Lake, and Apache Spark (PySpark, Scala, or SQL).
- Optimize Spark jobs for performance and cost-efficiency using advanced techniques such as partitioning, caching, cluster configuration tuning, and troubleshooting bottlenecks.
- Implement data security strategies, compliance standards, and governance frameworks like Unity Catalog, including managing access controls, encryption, and data lineage.
- Champion code quality and implement DevOps (CI/CD) pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins to automate deployment and management of data assets and infrastructure.
- Develop and implement robust monitoring, logging, and alerting systems for production-grade pipelines to proactively identify and resolve issues.
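The medallion architecture named in the responsibilities above can be sketched as a three-stage flow. This is a minimal illustration using plain Python structures in place of Spark DataFrames; on Databricks the same steps would typically be PySpark transformations writing Delta tables, and all record and field names here are invented for the example.

```python
# Bronze: raw events ingested as-is, including duplicates and bad records.
bronze = [
    {"order_id": 1, "amount": "120.50", "region": "TX"},
    {"order_id": 1, "amount": "120.50", "region": "TX"},  # duplicate
    {"order_id": 2, "amount": "bad", "region": "TX"},     # malformed
    {"order_id": 3, "amount": "75.00", "region": "CA"},
]

def to_silver(rows):
    """Silver: deduplicate on the business key and cast/validate fields."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine malformed records
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"], "amount": amount,
                    "region": r["region"]})
    return out

def to_gold(rows):
    """Gold: business-level aggregate (revenue per region)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'TX': 120.5, 'CA': 75.0}
```

The point of the layering is that each stage has one job: bronze preserves raw history, silver enforces quality and uniqueness, gold serves analytics-ready aggregates.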
- 10+ years of experience in data engineering with 5+ years specifically in Databricks development and leadership roles.
- 5 years of experience in optimizing Spark jobs for performance and cost efficiency using advanced techniques such as partitioning, caching, cluster configuration tuning, and troubleshooting bottlenecks.
- Deep knowledge of Scala, Python, and SQL for optimizing code and building high-performance data pipelines on Databricks.
- 5 years of experience in Scala, Spark, and Databricks programming.
- 5-10 years of experience implementing complex ETL/ELT processes to process and transform large-scale datasets from diverse sources using Databricks notebooks, Delta Lake, and Apache Spark (PySpark, Scala, or SQL).
- Must be able to design and develop CI/CD pipelines that can promote code to different development and production environments, preferably on Azure using GitHub Actions.
- Deep knowledge of the Apache Spark ecosystem, Delta Lake and Lakehouse architecture.
- Hands-on experience with Azure and its relevant data services (Azure Data Lake Storage, Azure Data Factory).
- Databricks Certified Data Engineer Professional certification is preferred.
- Bachelor’s degree in Computer Science/Engineering or a related field
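The CI/CD requirement above can be illustrated with a GitHub Actions workflow that deploys a Databricks Asset Bundle. This is a hedged sketch, not a definitive setup: it assumes a `databricks.yml` bundle with a `prod` target and repository secrets named `DATABRICKS_HOST` and `DATABRICKS_TOKEN`, and it uses the `databricks/setup-cli` action to install the Databricks CLI.

```yaml
name: deploy-databricks-bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      # Workspace URL and token supplied via repository secrets (assumed names).
      DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
      DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      # Installs the Databricks CLI on the runner.
      - uses: databricks/setup-cli@main
      # Validate the bundle definition, then deploy to the "prod" target
      # declared in databricks.yml.
      - run: databricks bundle validate -t prod
      - run: databricks bundle deploy -t prod
```

Promotion to other environments (dev, staging) would typically reuse the same steps with a different `-t` target per branch or trigger.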
Position Requirements
10+ years of work experience