
Senior Data Engineer

Job in Vancouver, BC, Canada
Listing for: Spring Financial
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    Data Engineer, AI Engineer, Cloud Computing
Salary/Wage Range: CAD 120,000 – 140,000 per year
Job Description & How to Apply Below

About us:

Spring Financial is a Canadian financial technology company focused on making everyday financial services simpler, faster, and more accessible. We build technology that helps Canadians build credit, save money, and access lending products without unnecessary friction. Our platforms allow customers to apply for and manage their finances online, by text, or over the phone, making the experience convenient and flexible. Since launching in 2014, Spring has grown into one of Canada’s largest fintechs, with more than 250,000 product originations across credit‑building products, personal lending, and mortgage solutions.

We’re a fast‑growing, product‑driven team that values practical solutions, strong execution, and thoughtful collaboration. We give people ownership, trust them to make decisions, and focus on building systems that scale reliably. If you’re interested in working on real‑world fintech platforms used by hundreds of thousands of Canadians, Spring offers the opportunity to make a tangible impact through well‑built technology.

NOTE:

This is a full‑time, permanent, hybrid position in downtown Vancouver: three set days in the office and two days working from home.

Job Overview:

As a Senior Data Engineer at Spring, you are a technical leader and strategic problem‑solver who architects high‑impact, resilient, and scalable data infrastructure. You work across a diverse stack—AWS‑native services, Snowflake, Kafka, Spark/Flink, and orchestration frameworks—to deliver pipelines and data platforms that are secure, observable, and aligned with business needs.

You own the design and execution of complex data initiatives, such as real‑time event platforms, secure data sharing mechanisms, and the migration of ML feature pipelines into modern infrastructure. You’re skilled at navigating ambiguity, balancing trade‑offs between speed and sustainability, and ensuring that our data architecture scales with the business.

You are an advocate for AI‑augmented engineering—both in how you and your team build (e.g., using AI for testing, schema discovery, and code generation), and in what the platform enables (e.g., semantic tagging, automated QA, natural language interfaces). You model responsible AI usage and help others incorporate it thoughtfully into development workflows.

You work across teams to frame problems, define SLAs, manage risk, and align data architecture with business goals. You are a trusted partner to senior stakeholders across Finance, Risk, Product, Analytics, and Engineering, and you influence both platform strategy and organizational standards. You also coach other engineers, lead architectural design reviews, and help set the technical direction for the team.

What you’ll do:
  • Architect and lead delivery of scalable, secure, and reliable data pipelines and platform components across AWS, Snowflake, and streaming systems (Kafka, Flink, etc.)
  • Scope and drive strategic initiatives to modernize legacy data flows, unify batch and streaming sources, and enable cross‑functional self‑serve analytics
  • Champion the integration of AI capabilities—within data pipelines (e.g., anomaly detection, tagging) and within development practices (e.g., assisted testing, documentation)
  • Collaborate with technical and business leaders to clarify ambiguous problems, assess risks, and propose scalable, extensible solutions
  • Define and enforce engineering standards around testing, observability, security, and CI/CD within data systems
  • Lead technical design and code reviews, mentor engineers across levels, and support the ongoing evolution of our data architecture
  • Partner with ML, Analytics, and Finance teams to deliver data with appropriate latency, accuracy, and governance for their use cases
Requirements:
  • Expertise in designing and scaling data pipelines using Snowflake and AWS‑native tools (e.g., Glue, Lambda, Redshift, Step Functions)
  • Strong experience with real‑time data systems, including Kafka, Kinesis, Flink, or Spark Streaming
  • Deep knowledge of data modeling, schema evolution, privacy‑preserving design, and secure data architectures
  • Fluency in Python and SQL; proficiency with infrastructure‑as‑code (e.g., Terraform or CDK)
  • Track record of embedding AI into both…
Position Requirements
10+ Years work experience