
Software Engineer, Data

Job in Snowflake, Navajo County, Arizona, 85937, USA
Listing for: TechBrains
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager
Salary/Wage Range or Industry Benchmark: 80,000 to 100,000 USD per year
Job Description & How to Apply Below
Position: Staff Software Engineer, Data
Location: Snowflake

About The Role

As a Staff Software Engineer, Data, you will play a pivotal role in shaping Harry’s & Flamingo’s data engineering strategy within our broader Data Team. This position requires deep technical expertise, a passion for data-driven solutions, and a proven ability to deliver impactful data engineering outcomes. You will be responsible for designing and optimizing scalable data pipelines, enabling robust data ingestion and analytics, and ensuring data is readily accessible and actionable for stakeholders.

In this role, you will drive initiatives that maximize the value of data across the organization, empower cross‑functional teams with timely and reliable insights, and collaborate with leadership and partners to align data engineering efforts with business priorities. Your work will accelerate data‑driven decision‑making and deliver measurable value across Harry’s & Flamingo.

About the Team

Our team combines expertise in data engineering, product management, platform development, and site reliability engineering (SRE) to build scalable, data‑driven solutions that support key stakeholders across various business functions, including CIA, DTC, Retail, Engagement, Marketing, and Manufacturing at Harry’s Inc. We are committed to delivering actionable insights, cost‑efficient platforms, and resilient infrastructure that ensure data accuracy, process optimization, and continuous innovation.

Our team fosters a culture of collaboration, ownership, and innovation, empowering the organization to make data‑driven decisions confidently, adapt to emerging technologies, and maintain agility in a competitive environment. We aim to empower teams with strategic insights and operational excellence, fueling sustainable growth and long‑term success.

What You Will Accomplish
  • Design and implement robust pipelines to ingest, process, and transform data, ensuring it is optimized for analytics and reporting.
  • Build scalable ELT pipelines and integrate data using open‑source tools, internal tooling, and AWS technologies, while applying DevOps practices like CI/CD, IaC, and automation to enhance workflow efficiency and scalability (a minimal pipeline sketch follows this list).
  • Collaborate with stakeholders to understand reporting needs and translate business objectives into actionable technical and data engineering requirements.
  • Develop and maintain data pipelines to ingest, process, and transform data from diverse sources, including REST and GraphQL APIs, Google Drive and Sheets, S3, and data warehouses (see the API ingestion sketch after this list).
  • Collaborate with cross‑functional teams to create innovative, scalable data solutions that address complex business challenges and support informed decision‑making.
  • Apply proven experience with AWS services, including AWS Glue, AWS Lambda, Amazon Kinesis, Amazon EMR, Amazon Athena, Amazon CloudWatch, Amazon SNS, and AWS Step Functions.
  • Develop and apply advanced knowledge of optimal data structures and transformation techniques to design solutions for complex business challenges.
  • Design and implement processes to extract, ingest, and integrate data through REST and GraphQL APIs.
  • Leverage extensive experience with data warehouses, database technologies, and big data ecosystems, including AWS Redshift and AWS S3‑based data lakes, for managing and processing structured and unstructured datasets.
  • Develop and implement strategies to measure, monitor, and improve data quality (see the quality‑check sketch after this list).
  • Data Architecture & Modeling:
    Design and manage large‑scale data warehouses and marts, creating clean, accessible fact and dimension tables to drive insights.
  • Data Transformation:
    Build and optimize data transformation pipelines using tools like dbt, supporting data flow from ingestion through analytics.
  • Develop and enforce data governance practices and quality standards, ensuring accurate data lineage, comprehensive documentation, and robust metadata management.
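
The ELT bullet above is broad, so here is a minimal, non‑authoritative PySpark sketch of the extract‑transform‑load pattern it describes. The bucket names, paths, and columns are hypothetical and not taken from this posting.

    # Hypothetical ELT job: buckets, paths, and columns are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-elt").getOrCreate()

    # Extract: load raw JSON events landed in an S3 data lake.
    raw = spark.read.json("s3://example-raw-bucket/orders/2026/01/")

    # Transform: deduplicate and type the rows for downstream models.
    orders = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("created_at"))
           .withColumn("amount_usd", F.col("amount_cents") / 100.0)
    )

    # Load: write analytics-ready Parquet, partitioned for cheap Athena scans.
    (orders.write
           .mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://example-curated-bucket/orders/"))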
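
For the REST and GraphQL ingestion mentioned above, a minimal sketch using Python's requests library. The endpoints, field names, and pagination shape are assumptions; authentication setup and retry logic are omitted for brevity.

    # Hypothetical endpoints and payload shapes; auth and retries omitted.
    import requests

    def fetch_rest_orders(base_url: str, token: str):
        # Page through a cursor-paginated REST endpoint.
        url = f"{base_url}/v1/orders"
        params = {"limit": 100}
        while url:
            resp = requests.get(url, params=params,
                                headers={"Authorization": f"Bearer {token}"},
                                timeout=30)
            resp.raise_for_status()
            payload = resp.json()
            yield from payload["data"]
            url = payload.get("next_page_url")   # None ends the loop
            params = None                        # cursor is baked into the URL

    def fetch_graphql_products(endpoint: str, token: str):
        # GraphQL is a single POST with the query in the JSON body.
        query = """
        query {
          products(first: 100) {
            nodes { id title updatedAt }
          }
        }
        """
        resp = requests.post(endpoint, json={"query": query},
                             headers={"Authorization": f"Bearer {token}"},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()["data"]["products"]["nodes"]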
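
For the data quality bullet, one common approach, assumed here rather than specified by the posting, is a set of threshold checks run after each load. The table location and thresholds are hypothetical; a production job would publish failures to SNS or CloudWatch rather than only raising an error.

    # Hypothetical dataset and thresholds; alerting is only sketched in comments.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-dq").getOrCreate()
    df = spark.read.parquet("s3://example-curated-bucket/orders/")

    total = df.count()
    null_ids = df.filter(F.col("order_id").isNull()).count()
    dupes = total - df.dropDuplicates(["order_id"]).count()

    checks = {
        "row_count_nonzero": total > 0,
        "order_id_null_rate_under_1pct": (null_ids / max(total, 1)) < 0.01,
        "order_id_unique": dupes == 0,
    }

    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        # In production: publish to Amazon SNS / emit a CloudWatch metric.
        raise RuntimeError(f"Data quality checks failed: {failed}")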
This should describe you
  • Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or a comparable field of study, and/or equivalent work experience.
  • Minimum 7 years of related data and analytics engineering experience modeling and developing data pipelines with hands‑on proficiency in PySpark,…