Job Description
Key Responsibilities
- Design, build, and maintain end-to-end data pipelines using Microsoft Fabric, including Lakehouse, Data Factory, Dataflows, and notebooks.
- Develop and optimize SQL-based transformations, data models, and curated datasets for enterprise reporting and analytics.
- Build and maintain Python-based data engineering logic for ingestion, transformation, validation, and automation.
- Implement and operate data quality controls, including validation rules, reconciliation checks, and exception handling.
- Monitor data pipelines, investigate failures or data quality issues, and implement fixes with minimal escalation.
- Integrate data from multiple enterprise systems, including CRM, ticketing systems, telephony platforms, and operational databases.
- Maintain technical documentation, data lineage, and operational runbooks for owned pipelines and datasets.
- Work closely with the Manager and Data Analysts to ensure data assets meet defined standards and reporting requirements.
Required Qualifications and Experience
- Minimum 7 years of hands-on experience in data engineering or analytics engineering roles.
- Strong, practical experience with Microsoft Fabric in a production environment.
- Advanced proficiency in SQL, including complex transformations and performance optimization.
- Strong hands-on experience using Python for data engineering and automation.
- Demonstrated experience implementing data quality controls and operating reliable data pipelines.
- Experience working in cloud-based data platforms and modern data architectures.
- Ability to deliver independently while aligning to technical direction and priorities set by management.
- Microsoft certifications preferred, such as Microsoft Certified: Fabric Analytics Engineer Associate, Azure Data Engineer Associate, or another Microsoft data platform certification.
Preferred Qualifications
- Degree in Computer Science, Engineering, or a related technical field.
- Experience supporting Power BI semantic models and enterprise reporting environments.
- Familiarity with Git, version control, and basic CI/CD practices for data engineering.
- Experience working in regulated, operational, or public sector environments.
Must-Haves
- Minimum 7 years of demonstrated hands-on experience building and maintaining end-to-end data pipelines in production.
- Minimum 7 years of strong, engineering-level expertise in SQL and Python.
- Minimum 7 years of experience implementing data quality controls, monitoring, and troubleshooting with minimal escalation.
- Minimum 7 years of practical experience with modern data platforms (e.g., Microsoft Fabric or similar enterprise environments).
Position Requirements
- 10+ years of work experience.