Azure Databricks Lead
Listed on 2026-02-28
IT/Tech
Data Engineer, Data Analyst, Cloud Computing, Data Science Manager
We are looking for a Data Replication Specialist with expertise in Microsoft Fabric and Databricks Lakeflow Connect to design, implement, and manage data replication and synchronization solutions across multiple data environments. The ideal candidate will ensure that business-critical data is consistently available, accurate, and up to date across systems for analytics, reporting, and operational needs.
Key Responsibilities
Design and implement data replication strategies using Microsoft Fabric Copy Data and Mirroring features
Configure and maintain replication pipelines between on-premises and cloud-based data sources (e.g., SQL Server, Azure SQL, Synapse, Data Lake, Power BI)
Monitor and optimize replication performance, latency, and reliability
Troubleshoot replication failures, data drift, and synchronization issues
Collaborate with data engineers, architects, and analytics teams to align replication processes with business and data governance requirements
Develop and maintain automation scripts for replication monitoring and alerting using PowerShell, Python, or REST APIs
Document replication architectures, data flow diagrams, and operational procedures
Implement security, compliance, and data governance standards within replicated environments
Stay updated on Microsoft Fabric enhancements and best practices for data replication and integration
Design, develop, and maintain data ingestion pipelines using Databricks and Lakeflow Connect
Integrate data from various structured and unstructured sources into Delta Lake and other data storage systems (see the ingestion sketch after this list)
Implement real-time and batch ingestion workflows to support analytics and reporting needs
Optimize data ingestion performance, ensuring scalability, reliability, and cost efficiency
Collaborate with data architects, analysts, and business stakeholders to define data requirements and ingestion strategies
Ensure data quality, lineage, and governance compliance across the ingestion process
Stay up to date with emerging Databricks Lakehouse and data integration technologies and best practices
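For context, a minimal illustrative sketch of the kind of batch ingestion job this role would build, assuming a Databricks runtime where SparkSession and Delta Lake support are preconfigured; the source path, columns, and table name are hypothetical examples, not part of this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw landing-zone files (hypothetical path and format).
raw = spark.read.format("json").load("/mnt/landing/orders/")

# Add ingest metadata before persisting, so downstream jobs can audit loads.
cleaned = (
    raw
    .withColumn("ingest_ts", F.current_timestamp())
    .withColumn("source_file", F.input_file_name())
)

# Append into a Delta table that analytics and reporting workloads read from.
cleaned.write.format("delta").mode("append").saveAsTable("bronze.orders")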
Required Qualifications
Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field
4 years of experience in data engineering or ETL development
Hands-on experience with Databricks (SQL, PySpark, Delta Lake)
Proficiency with Lakeflow Connect for building and managing data ingestion workflows
Strong understanding of data integration patterns, data modeling, and data lakehouse architectures
Experience with cloud platforms (Azure, AWS, or GCP) and associated data services
Knowledge of CI/CD, version control (Git), and infrastructure-as-code practices
Familiarity with data governance, security, and compliance standards
Required Skills & Qualifications
Bachelor's degree in Computer Science, Information Systems, or a related field
3-5 years of experience in data replication, ETL/ELT, or data integration
Hands-on experience with Microsoft Fabric, Azure Data Factory, or Synapse Pipelines
Strong understanding of Copy Data and Mirroring capabilities in Microsoft Fabric
Proficiency in SQL and scripting languages (PowerShell, Python)
Experience working with Azure Data Lake, OneLake, and Power BI
Knowledge of change data capture (CDC), incremental loads, and real-time replication (see the incremental-load sketch after this list)
Familiarity with data governance, security, and compliance standards (e.g., GDPR, HIPAA)
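For context, a minimal illustrative sketch of a CDC-style incremental load using a Delta Lake MERGE, assuming a Databricks environment with Delta table APIs available; the change-feed path, key column, and table names are hypothetical, not part of this posting.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical batch of change records captured from the source system.
changes = spark.read.format("delta").load("/mnt/cdc/customers_changes")

# Upsert into the target table: update matched keys, insert new ones.
# Deletes flagged in the feed could be handled with whenMatchedDelete(...).
target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(changes.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)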
Preferred Qualifications
Experience with hybrid data environments (on-premises and cloud)
Knowledge of data mesh or lakehouse architectures
Exposure to DevOps or CI/CD practices for data pipelines
Soft Skills
Strong analytical and problem-solving abilities
Excellent communication and documentation skills
Ability to work collaboratively in cross-functional teams
Detail-oriented with a focus on data quality and reliability