
RQ08100: Software Developer - ETL

Job in Toronto, Ontario, Canada
Listing for: Rubicon Path
Part Time position
Listed on 2026-01-10
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Warehousing
Job Description
Position: RQ08100: Software Developer - ETL

About the job RQ08100: Software Developer - ETL

General Responsibilities

This role is responsible for designing, developing, maintaining, and optimizing ETL (Extract, Transform, Load) processes in Databricks for data warehousing, data lakes, and analytics. The developer will work closely with data architects and business teams to ensure the efficient transformation and movement of data to meet business needs, including handling Change Data Capture (CDC) and streaming data.

  • Azure Databricks, Delta Lake, Delta Live Tables, and Spark to process structured and unstructured data.
  • Azure Databricks/PySpark (good Python/PySpark knowledge required) to build transformations of raw data into the curated zone in the data lake (see the sketch after this list).
  • Azure Databricks/PySpark/SQL (good SQL knowledge required) to develop and/or troubleshoot transformations of curated data into FHIR.
  • Understand the requirements. Recommend changes to models to support ETL design.
  • Define primary keys, indexing strategies, and relationships that enhance data integrity and performance across layers.
  • Define the initial schemas for each data layer.
  • Assist with data modelling and updates of source-to-target mapping documentation.
  • Document and implement schema validation rules to ensure incoming data conforms to expected formats and standards.
  • Design data quality checks within the pipeline to catch inconsistencies, missing values, or errors early in the process.
  • Proactively communicate with business and IT experts on any changes required to conceptual, logical, and physical models; communicate and review timelines, dependencies, and risks.
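
A minimal PySpark sketch of the raw-to-curated transformation with a schema validation step and a basic data quality check, as referenced in the list above. The paths, column names, and expected schema are hypothetical placeholders only; an actual implementation would follow the project's source-to-target mapping documentation.

```python
# Sketch: raw-to-curated (Bronze-to-Silver) transformation with schema
# validation and simple data quality checks. All paths and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("raw_to_curated").getOrCreate()

# Expected schema for the incoming raw data (hypothetical).
expected_schema = StructType([
    StructField("patient_id", StringType(), nullable=False),
    StructField("encounter_id", StringType(), nullable=True),
    StructField("updated_at", TimestampType(), nullable=True),
])

raw_df = spark.read.format("delta").load("/mnt/lake/raw/encounters")  # hypothetical path

# Schema validation: fail fast if a required column is missing.
missing = {f.name for f in expected_schema} - set(raw_df.columns)
if missing:
    raise ValueError(f"Raw data is missing expected columns: {missing}")

# Data quality checks: drop records without a key and remove duplicates early.
curated_df = (
    raw_df
    .filter(F.col("patient_id").isNotNull())
    .dropDuplicates(["patient_id", "encounter_id"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Write to the curated zone as a Delta table (hypothetical path).
curated_df.write.format("delta").mode("overwrite").save("/mnt/lake/curated/encounters")
```
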
Development of ETL strategy and solution for different sets of data modules
  • Understand the Tables and Relationships in the data model.
  • Create low-level design documents and test cases for ETL development.
  • Implement error catching, logging, retry mechanisms, and handling of data anomalies (a minimal sketch of this pattern follows this list).
  • Create the workflows and pipeline design.
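
A minimal sketch of the error-catching, logging, and retry pattern referenced above. The step name and callable are hypothetical; a real pipeline might instead rely on the retry and alerting features built into Databricks Workflows or Azure Data Factory.

```python
# Sketch: run an ETL step with logging, error capture, and a fixed-backoff retry.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("etl_pipeline")

def run_with_retry(step_name, run_step, max_attempts=3, backoff_seconds=30):
    """Run an ETL step, logging failures and retrying up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            logger.info("Starting %s (attempt %d/%d)", step_name, attempt, max_attempts)
            result = run_step()
            logger.info("Completed %s", step_name)
            return result
        except Exception:
            logger.exception("Step %s failed on attempt %d", step_name, attempt)
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds)

# Example usage with a placeholder step:
# run_with_retry("load_encounters", lambda: spark.sql("SELECT 1").collect())
```
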
Development and testing of data pipelines with Incremental and Full Load.
  • Develop high quality ETL mappings/scripts/notebooks.
  • Develop and maintain the pipeline from the Oracle data source to Azure Delta Lake and FHIR (see the incremental-load sketch after this list).
  • Perform unit testing.
  • Ensure performance monitoring and improvement.
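
A short sketch contrasting a full load with an incremental (upsert) load into a Delta table, using the Delta Lake MERGE API. Paths, key columns, and the watermark value are hypothetical placeholders; in practice the change set or watermark would come from CDC (e.g., GoldenGate) or pipeline metadata.

```python
# Sketch: full load vs. incremental load into a curated Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("encounter_loads").getOrCreate()

source_path = "/mnt/lake/raw/encounters"      # hypothetical staging path
target_path = "/mnt/lake/curated/encounters"  # hypothetical curated path

source_df = spark.read.format("delta").load(source_path)

# Strategy 1 - Full load: rebuild the target table from scratch.
source_df.write.format("delta").mode("overwrite").save(target_path)

# Strategy 2 - Incremental load: upsert only rows changed since the last run.
last_watermark = "2024-01-01 00:00:00"  # would normally be read from pipeline metadata
changed_df = source_df.filter(F.col("updated_at") > F.lit(last_watermark))

target = DeltaTable.forPath(spark, target_path)
(
    target.alias("t")
    .merge(
        changed_df.alias("s"),
        "t.patient_id = s.patient_id AND t.encounter_id = s.encounter_id",
    )
    .whenMatchedUpdateAll()     # update rows that already exist in the target
    .whenNotMatchedInsertAll()  # insert rows that are new
    .execute()
)
```
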
Performance review, data consistency checks
  • Troubleshoot performance and ETL issues; log activity for each pipeline and transformation.
  • Review and optimize overall ETL performance.
End-to-end integrated testing for Full Load and Incremental Load

No specific tasks listed.

Plan for Go Live, Production Deployment.
  • Configure parameters and scripts for go-live; test and review the instructions.
  • Create release documents and help build and deploy code across servers.
Go Live Support and Review after Go Live.
  • Review the existing ETL process and tools and provide recommendations on improving performance and reducing ETL timelines.
  • Review infrastructure and remediate issues for overall process improvement.
Knowledge Transfer to Ministry staff, development of documentation on the work completed.
  • Document work and share the ETL end-to-end design, troubleshooting steps, configuration and scripts review.
  • Transfer documents and scripts to the Ministry and review the documentation with Ministry staff.
Experience and Skill Set Requirements

Please note this role is part of a Hybrid Work Arrangement and resource(s) will be required to work at a minimum of 3 days per week at the Office Location.

Must Have Skills
  • 7+ years using ETL tools such as Microsoft SSIS, stored procedures, T-SQL.
  • 2+ years with Delta Lake, Databricks, and Azure Databricks pipelines.
  • 2+ years Python and PySpark.
  • Solid understanding of the Medallion Architecture (Bronze, Silver, Gold) and experience implementing it in production environments (a brief layering sketch follows this list).
  • Hands‑on experience with CDC tools (e.g., Golden Gate) for managing real‑time data.
  • SQL Server, Oracle.
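
A brief sketch of the Medallion Architecture layering (Bronze, Silver, Gold) named above, expressed as Delta tables. The table names, columns, and aggregation are hypothetical and only illustrate the layer-to-layer flow.

```python
# Sketch: Bronze (raw) -> Silver (cleansed) -> Gold (business aggregates).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: source data landed as-is (hypothetical path).
bronze_df = spark.read.format("delta").load("/mnt/lake/bronze/visits")

# Silver: cleansed, de-duplicated, conformed records.
silver_df = (
    bronze_df
    .filter(F.col("visit_id").isNotNull())
    .dropDuplicates(["visit_id"])
)
silver_df.write.format("delta").mode("overwrite").save("/mnt/lake/silver/visits")

# Gold: business-level aggregates ready for analytics and reporting.
gold_df = silver_df.groupBy("facility_id").agg(F.count("visit_id").alias("visit_count"))
gold_df.write.format("delta").mode("overwrite").save("/mnt/lake/gold/visit_counts")
```
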
Experience:
  • 7+ years of experience working with SQL Server, T-SQL, Oracle, PL/SQL development, or similar relational databases.
  • 2+ years of experience working with Azure Data Factory, Databricks, and Python development.
  • Experience building data ingestion and change data capture using Oracle Golden Gate.
  • Experience in designing, developing, and implementing ETL pipelines using Databricks and related tools to ingest,…