
Data Architect - Government of Alberta Pipeline

Job in Toronto, Ontario, C6A, Canada
Listing for: Source Code
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Security, Data Analyst, Cloud Computing
Salary/Wage Range or Industry Benchmark: 60,000 - 80,000 CAD yearly
Job Description & How to Apply Below

Overview

About the job:
Sr. Data Architect - GOAPRDJP

Location:

Primarily remote, with on-site meetings as required (up to 3-4 times per fiscal month, frequency determined as needed).

Contract:

4+ months

Openings: 1

Project Name:
Data Management Platform Projects

Scope:
The Government of Alberta is modernizing legacy systems to a cloud-native Azure Data Management Platform, alongside on-premises geospatial systems. This transformation requires a Data Architect to design, implement, and manage scalable, secure, and integrated data solutions.

Ministries involved include Environment and Protected Areas, Transportation and Economic Corridors, and Service Alberta, which rely on complex data from systems such as ServiceNow, ERP platforms, and geospatial tools. The Data Architect will enable ingestion, transformation, and integration of this data using Azure services including Data Factory, Synapse Analytics, Data Lake Storage, and Purview. Azure Databricks will support advanced data engineering, analytics, and machine learning workflows.

The Data Architect will ensure data pipelines are optimized for both batch and real-time processing to support operational reporting, predictive modeling, and automation. Downstream systems will consume data via APIs and data services; the Data Architect will design and manage these interfaces using Azure API Management to ensure secure, governed, and scalable access. Security, governance, and compliance are critical. The Data Architect will implement role-based access controls, encryption, data masking, and metadata management to meet FOIP and other regulatory requirements.

As data volumes and complexity grow, the platform should remain extensible, reliable, and future-ready, supporting new data sources, ministries, and analytical capabilities.

Responsibilities
  • Design and implement scalable, secure, and high-performance data architecture on Microsoft Azure, supporting both cloud-native and hybrid environments.
  • Lead the development of data ingestion, transformation, and integration pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
  • Architect and manage data lakes and structured storage solutions using Azure Data Lake Storage Gen2, ensuring efficient access and governance.
  • Integrate data from diverse source systems, including ServiceNow and geospatial systems, using APIs, connectors, and custom scripts.
  • Develop and maintain robust data models and semantic layers to support operational reporting, analytics, and machine learning use cases.
  • Build and optimize data workflows using Python and SQL for data cleansing, enrichment, and advanced analytics within Azure Databricks.
  • Design and expose secure data services and APIs using Azure API Management for downstream systems.
  • Implement data governance practices, including metadata management, data classification, and lineage tracking.
  • Ensure compliance with privacy and regulatory standards (e.g., FOIP, GDPR) through role-based access controls, encryption, and data masking.
  • Collaborate with cross-functional teams to align data architecture with business requirements, program timelines, and modernization goals.
  • Monitor and troubleshoot data pipelines and integrations, ensuring reliability, scalability, and performance across the platform.
  • Provide technical leadership and mentorship to data engineers and analysts, promoting best practices in cloud data architecture and development.
  • Other duties as needed.
Must-Haves
  • A college or Bachelor's degree in Computer Science or a related field.
  • Hands-on experience managing Databricks workspaces, including cluster configuration, user roles, permissions, cluster policies, and monitoring/cost optimization for efficient, governed Spark workloads – 3 years.
  • Experience as a Data Architect in a large enterprise, designing and implementing data architecture strategies and models that align data, technology, and business goals with strategic objectives – 8 years.
  • Experience designing data solutions for analytics-ready, trusted datasets using tools like Power BI and Synapse, including semantic layers, data marts, and data products for self-service, data science, and reporting – 4…