
Databricks Engineer

Job in Mississauga, Ontario, Canada
Listing for: Compunnel, Inc.
Full Time position
Listed on 2026-02-23
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager, Big Data, Cloud Computing
Job Description
The Databricks Engineer will design, develop, and optimize large-scale data pipelines using Databricks and Azure cloud services.

This role requires advanced expertise in Spark/PySpark, Delta Lake technologies, data governance, and cloud-based data engineering.

The engineer will collaborate with cross-functional teams to deliver scalable, reliable, and high-performance data solutions that support analytics, BI, and business initiatives.

Key Responsibilities

Build and maintain scalable ETL/ELT data pipelines using Databricks.

Use PySpark/Spark and SQL to process, cleanse, and transform large datasets.

Integrate data from diverse sources such as Azure Blob Storage, ADLS, and relational/non-relational databases.

Work with teams across the organization to prepare data for BI dashboards and reporting tools.

Partner with business stakeholders to understand requirements and deliver tailored data solutions.

Performance & Optimization

Optimize Databricks workloads for performance, scalability, and cost efficiency.

Monitor, troubleshoot, and resolve issues in data pipelines to ensure accuracy and reliability.

Governance & Security

Implement data governance, access controls, and security frameworks using Unity Catalog.

Ensure compliance with enterprise data policies and regulatory standards.

Deployment

Use Databricks Asset Bundles for deploying jobs, notebooks, and configurations across environments.

Maintain version control for Databricks artifacts and collaborate with the team on development best practices.

Required Qualifications

10+ years of overall experience in data engineering.

Recent experience in financial services, banking, or capital markets.

Strong expertise in Databricks, including:
  • Unity Catalog
  • Lakehouse Architecture
  • Table Triggers
  • Databricks Runtime

Proficiency with Azure cloud services.

Deep understanding of Spark and PySpark for big data processing.

Strong programming skills in Python.

Experience with relational databases.

Knowledge of Databricks Asset Bundles and GitLab for version control.

Preferred Qualifications

Familiarity with advanced Databricks Runtime configurations.

Experience with streaming frameworks such as Spark Streaming.
