
Senior Database Engineer

Job in Toronto, Ontario, C6A, Canada
Listing for: G Adventures
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Database Administrator, Data Analyst
Salary/Wage Range or Industry Benchmark: CAD 150,000 to 200,000 per year
Job Description & How to Apply Below

About Us

G Adventures is the world’s largest small-group adventure travel company and we’ve been making epic travel memories happen on all seven continents for over 30 years.

Our mission is simple: to change lives through travel. And not just our travelers’ either. Since day one, our tours have been built to establish meaningful relationships with local communities, directly benefiting the people and places we visit at every step of our tours.

With the demand for travel coming back strong, we are set up and excited for this next chapter in our company’s story — and we’d love for you to be a part of it.

Our DNA revolves around building, nurturing, and developing a diverse culture of people and a true sense of belonging, where everyone is encouraged to bring their authentic self to work each and every day. You’ll have the opportunity to grow your career, and yourself, alongside a passionate, talented, and welcoming community that works hard to spread goodness around the world.

If all that sounds like your kind of thing, well, we can’t wait for you to join us.

Role Overview

As a Data Engineer / DBA, you will play a critical role in our data infrastructure, focusing on the seamless integration, transformation, and management of large-scale datasets. Your responsibilities will include designing and optimizing PostgreSQL databases in Amazon RDS and Aurora environments, building robust ETL/ELT pipelines, and ensuring data quality and accessibility. You will collaborate closely with software engineers, data scientists, and analysts to support data-driven decision-making across the organization.

Key Responsibilities

  • Database Design & Maintenance: Design, implement, and maintain high-performance PostgreSQL databases in Amazon RDS and Aurora environments.

  • Data Ingestion & Transformation: Manage data ingestion from diverse sources into cloud-based data platforms and build optimized ETL/ELT pipelines for large-scale data processing.

  • Data Lake Integration: Export structured and semi-structured data to modern data lakes (e.g., Amazon S3, AWS Lake Formation, Redshift).

  • Performance Monitoring: Monitor and improve database performance, security, and availability using tools like Datadog and CloudWatch.

  • Collaboration: Work closely with software engineers, data scientists, and analysts to ensure data quality, reliability, and accessibility.

  • Automation: Automate data workflows using tools such as AWS Lambda, Glue, Step Functions, Fivetran, SnapLogic, and Airflow.

  • Disaster Recovery: Implement and manage backup, disaster recovery, and failover strategies.

  • Best Practices: Establish and enforce best practices for data architecture, schema design, and database operations.

  • Monitoring & Alerting: Implement and maintain Datadog monitoring for AWS Aurora PostgreSQL and RDS, ensuring real-time visibility into performance, slow queries, and resource utilization. Define custom Datadog dashboards, alerts, and metrics to support proactive database performance management and incident response.
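As a rough illustration of the ETL/ELT and data-lake export work described in the responsibilities above, the sketch below extracts rows from a relational source, applies a light transformation, and writes newline-delimited JSON of the kind typically landed in Amazon S3. Python's bundled sqlite3 stands in for PostgreSQL here; a real pipeline for this role would query RDS/Aurora (e.g., via psycopg2 or a Glue job) and upload with boto3. The table and column names are purely illustrative, not part of this posting.

```python
import io
import json
import sqlite3

def export_bookings_ndjson(conn, out):
    """Extract rows, apply a light transform, and write NDJSON,
    the shape commonly loaded into an S3-backed data lake."""
    cur = conn.execute(
        "SELECT id, destination, price_cad FROM bookings WHERE price_cad > 0"
    )
    for row_id, destination, price in cur:
        record = {
            "id": row_id,
            "destination": destination.strip().title(),  # normalize text
            "price_cad": round(price, 2),
        }
        out.write(json.dumps(record) + "\n")

# Demo with an in-memory database standing in for PostgreSQL on RDS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER, destination TEXT, price_cad REAL)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?, ?)",
    [(1, "  peru  ", 2499.9), (2, "iceland", 3150.0), (3, "refund", -100.0)],
)

buf = io.StringIO()
export_bookings_ndjson(conn, buf)
lines = buf.getvalue().splitlines()
print(len(lines))                           # 2 (the refund row is filtered out)
print(json.loads(lines[0])["destination"])  # Peru
```

Pushing the filter and projection into the SQL, rather than into Python, is the usual cost and performance choice at the hundreds-of-GB scale this role describes.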

Qualifications

Required:

  • 5+ years of experience in a Data Engineering or DBA role.

  • Deep expertise in PostgreSQL administration, performance tuning, and SQL optimization.

  • Proven experience with Amazon RDS/Aurora, including provisioning, scaling, monitoring, and securing databases.

  • Strong hands-on experience with AWS services, especially S3, Glue, IAM, Lambda, and CloudWatch.

  • Familiarity with data lake architectures and best practices for exporting structured and semi-structured data.

  • Solid understanding of data modeling, ETL processes, and schema design.

  • Proficiency in scripting languages (Python, Bash, etc.) for automation and data wrangling.

  • Experience managing large datasets (hundreds of GB to TB scale) with attention to performance and cost-efficiency.

  • Experience using Datadog for monitoring cloud infrastructure and database health, including metric collection, alerting, and log analysis.

  • Ability to define custom monitors and dashboards in Datadog for RDS, Aurora, Lambda, and Redshift.

  • Experience using GitHub for version control, collaboration, and code management across data pipelines, infrastructure scripts, and database configurations.

  • Comfortable working…
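The qualifications above call for scripting for automation and for log analysis alongside Datadog monitoring. A minimal sketch of that kind of work: scanning PostgreSQL log lines for slow statements, the signal one might then forward to a custom Datadog monitor. The log format assumes PostgreSQL's `log_min_duration_statement` output; the pattern and threshold are illustrative assumptions, not tooling named in the posting.

```python
import re

# Matches PostgreSQL's slow-statement log output, e.g.
# "duration: 1532.004 ms  statement: SELECT ..."
SLOW_QUERY = re.compile(r"duration: (?P<ms>\d+(?:\.\d+)?) ms\s+statement: (?P<sql>.+)")

def slow_queries(log_lines, threshold_ms=1000.0):
    """Yield (duration_ms, statement) for queries over the threshold --
    the kind of signal one might emit as a custom Datadog metric."""
    for line in log_lines:
        m = SLOW_QUERY.search(line)
        if m and float(m.group("ms")) >= threshold_ms:
            yield float(m.group("ms")), m.group("sql").strip()

log = [
    "2026-02-28 12:00:01 UTC LOG:  duration: 1532.004 ms  statement: SELECT * FROM tours",
    "2026-02-28 12:00:02 UTC LOG:  duration: 12.300 ms  statement: SELECT 1",
]
hits = list(slow_queries(log))
print(hits)  # [(1532.004, 'SELECT * FROM tours')]
```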

Position Requirements
10+ Years work experience