
Data Engineer; Snowflake

Job in Snowflake, Navajo County, Arizona, 85937, USA
Listing for: Valtech
Full Time position
Listed on 2026-02-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Salary/Wage Range or Industry Benchmark: 60,000–80,000 USD yearly
Job Description & How to Apply Below
Position: Data Engineer (Snowflake)

Why Valtech? We’re advisors, visionaries, creatives and techies - a global experience innovation company and trusted partner to the world’s most recognized brands. We embrace all things digital, we talk to each other, we have fun, we love our clients, and we’re looking ahead. To our people we offer growth opportunities, a values-driven culture, international careers and the chance to shape the future of experience.

The Opportunity

At Valtech, you’ll find an environment designed for continuous learning, meaningful impact, and professional growth. Whether you're pioneering new digital solutions, challenging conventional thinking or building the next generation of customer experiences, your work will help transform industries.

As a Data Engineer you are passionate about experience innovation and eager to push the boundaries of what’s possible. You bring 4+ years of experience, a growth mindset and a drive to make a lasting impact.

You will thrive in this role if you are:

  • A curious problem solver who challenges the status quo
  • A collaborator who values teamwork and knowledge-sharing
  • Excited by the intersection of technology, creativity and data
  • Experienced in Agile methodologies and consulting (a plus)
Role & Responsibilities

Data Architecture & Engineering
  • Design and implement scalable, secure Snowflake data warehouses (multi-environment, role-based access, RBAC/ABAC, masking/row access).
  • Architect ELT pipelines using Fivetran/Stitch/Singer; extend with custom connectors when required.
  • Build and maintain dbt-based transformation layers (staging → marts → semantic models) with modular, test-driven designs.
  • Implement orchestration using Airflow/Prefect/Dagster for scheduling, dependencies, and retries.
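The scheduling, dependency, and retry concerns listed above can be sketched in miniature. This is a hedged illustration in plain Python, not an Airflow/Prefect/Dagster API; the task names are hypothetical:

```python
# Minimal sketch of dependency-aware task execution with retries -
# the concerns an orchestrator (Airflow/Prefect/Dagster) handles for you.
def run_pipeline(tasks, deps, max_retries=2):
    """tasks: name -> callable; deps: name -> list of upstream task names."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):   # resolve dependencies first
            run(upstream)
        for attempt in range(max_retries + 1):
            try:
                tasks[name]()                 # execute the task itself
                break
            except Exception:
                if attempt == max_retries:    # exhausted retries: re-raise
                    raise
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

# Toy ELT run: extract -> transform -> load, declared out of order.
log = []
order = run_pipeline(
    tasks={"load": lambda: log.append("load"),
           "extract": lambda: log.append("extract"),
           "transform": lambda: log.append("transform")},
    deps={"transform": ["extract"], "load": ["transform"]},
)
```

Real orchestrators add backfills, sensors, and observability on top of this core dependency-resolution loop.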
Data Modeling & Optimization
  • Apply dimensional modeling, Data Vault, and star schema best practices for analytics at scale.
  • Optimize Snowflake performance: warehouse sizing, clustering/partitioning strategies, query tuning, caching, and cost controls (Resource Monitors, Query Acceleration).
  • Leverage advanced Snowflake features (Tasks, Streams, Time Travel, Data Sharing, External Tables/Iceberg, Snowpipe, object tagging).
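The star-schema idea above reduces to a fact table holding surrogate keys that resolve against dimension tables. A toy sketch with hypothetical column names:

```python
# Toy star schema: fact rows reference dimension rows by surrogate key.
dim_customer = {1: {"name": "Acme", "region": "West"},
                2: {"name": "Beta", "region": "East"}}

fact_orders = [
    {"customer_sk": 1, "amount": 120.0},
    {"customer_sk": 2, "amount": 75.0},
    {"customer_sk": 1, "amount": 30.0},
]

# Analytics query: revenue by region, resolved via the dimension join.
revenue_by_region = {}
for row in fact_orders:
    region = dim_customer[row["customer_sk"]]["region"]
    revenue_by_region[region] = revenue_by_region.get(region, 0.0) + row["amount"]
```

In Snowflake the same join would be a SQL query, with clustering and warehouse sizing chosen around how the fact table is filtered.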
Quality, Governance & Documentation
  • Implement robust testing (dbt tests, Great Expectations) and proactive data validation frameworks.
  • Establish and enforce governance: data lineage, cataloging, PII classification, policies, RBAC, auditability.
  • Maintain documentation and lineage in tools like Alation/Collibra/Data Hub; automate as part of CI/CD.
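The validation checks above (not-null, uniqueness, accepted values) are what dbt tests and Great Expectations express declaratively. A minimal procedural sketch, with hypothetical rows and column names:

```python
# Minimal data-validation sketch: uniqueness, not-null, and accepted-values
# checks of the kind dbt tests / Great Expectations declare per column.
def validate(rows, key, not_null=(), accepted=None):
    failures, seen = [], set()
    for i, row in enumerate(rows):
        if row[key] in seen:                      # uniqueness on the key
            failures.append((i, f"duplicate {key}"))
        seen.add(row[key])
        for col in not_null:                      # not-null columns
            if row.get(col) is None:
                failures.append((i, f"{col} is null"))
        for col, allowed in (accepted or {}).items():  # accepted values
            if row.get(col) not in allowed:
                failures.append((i, f"{col} not in accepted values"))
    return failures

rows = [{"id": 1, "status": "open"},
        {"id": 1, "status": None},        # duplicate id, null status
        {"id": 2, "status": "weird"}]     # unexpected status value
failures = validate(rows, key="id", not_null=("status",),
                    accepted={"status": {"open", "closed"}})
```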
DevOps & Automation
  • Set up CI/CD for dbt and data artifacts (Git-based workflows, environments, automated tests, PR checks).
  • Build Python utilities for ELT automation, data processing, backfills, and metadata operations.
  • Implement observability using Monte Carlo/Datafold (freshness, volume, schema change, anomaly monitoring).
  • Partner with analytics/data science teams to enable governed self-service (semantic/metric layers, certified datasets).
  • Translate stakeholder requirements into scalable data models and SLAs.
  • Champion Data Ops and best practices across teams; conduct knowledge sessions.
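The freshness and volume monitoring mentioned above boils down to comparing table metadata against thresholds. A hedged sketch (thresholds and table metadata here are hypothetical; tools like Monte Carlo learn them from history):

```python
# Sketch of the freshness/volume checks observability tools automate.
from datetime import datetime, timedelta, timezone

def check_table(last_loaded_at, row_count, expected_rows,
                max_staleness=timedelta(hours=6), volume_tolerance=0.2,
                now=None):
    now = now or datetime.now(timezone.utc)
    alerts = []
    if now - last_loaded_at > max_staleness:          # freshness check
        alerts.append("freshness: table is stale")
    if abs(row_count - expected_rows) > volume_tolerance * expected_rows:
        alerts.append("volume: row count outside tolerance")
    return alerts

# Example: a table last loaded 11 hours ago with half the expected rows.
now = datetime(2026, 2, 1, 12, 0, tzinfo=timezone.utc)
alerts = check_table(
    last_loaded_at=datetime(2026, 2, 1, 1, 0, tzinfo=timezone.utc),
    row_count=500, expected_rows=1000, now=now)
```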
Must Have

To be considered for this role, you must meet the following essential qualifications:

  • Education: A Bachelor's or Master’s degree in Computer Science, Engineering, or a related technical field is typically required.
  • 4–7 years in data engineering/analytics engineering with cloud data warehousing focus.
  • Deep expertise in Snowflake: architecture, security (RBAC, masking, row access), performance tuning, cost optimization, and workload management.
  • Advanced dbt: macros, packages, refactoring layered models, exposures, tests (unique/not_null/relationships/custom), and best practices.
  • Strong ELT experience with Fivetran/Stitch/Singer; ability to build custom connectors if needed.
  • Excellent SQL and data modeling (dimensional modeling, star schema, Data Vault).
  • Proficient in Python for data processing, scripting, and automation.
  • Experience with Airflow/Prefect/Dagster for orchestration.
  • Solid CI/CD knowledge: Git workflows, environment promotion, automated testing.
  • Experience on at least one cloud (AWS/GCP/Azure) and their native data services (e.g., S3/GCS/ADLS, IAM, secrets).
  • D…