DATA ENGINEER
Job Code: DATAENG
Reports To: Manager, Data Engineering
Base Location: AZ or CO
Work Status: Virtual Office
Minimum Starting Monthly Range: $7,599
Hiring Range (Monthly Pay): $7,599 - $8,750
Full-time / Part-time: Full-time
Exempt / Non-Exempt: Exempt
Risk Designation: Extremely High
The Data Engineer will work as part of the Data Operations & Engineering, Transformation and Analysis team to derive business value from enterprise data by implementing technical specifications provided by the Data Architect and Director of Data Operations & Engineering, including data storage, processing, transformation, ingestion, consumption, and automation. This role will work with multiple healthcare data sources and cross-functional teams to establish integrated datasets across legacy and greenfield data systems/platforms.
You will work closely with data producers, consumers, and subject matter experts to enhance data service capabilities by implementing data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
This position is based in Phoenix, Arizona; Denver, Colorado; or Grand Junction, Colorado, and requires local residency in one of these base locations. Our strategic flexibility allows for local work-from-home opportunities.
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skills, and/or abilities required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Essential Duties and Responsibilities
- Develop, implement, and maintain enterprise data management solutions to enable organizational business intelligence, reporting, visualization, and analysis.
- Assist with the development, implementation, and maintenance of an overall organizational data strategy aligned with business processes.
- Design and build data processing flows to extract data from various sources, such as databases, API endpoints, and flat files.
- Load data into data storage systems, specifically Microsoft SQL Server, MongoDB, and Snowflake.
- Transform data using industry-standard techniques such as standardization, normalization, de-duplication, filtering, projection, and aggregation.
- Build and maintain data processing environments, including hardware and software infrastructure. Working knowledge of data flow orchestration tools such as Prefect and Airflow is preferred.
- Collaborate with data producers, consumers, and subject matter experts to ensure smooth dissemination and flow of data within the organization.
- Perform other related duties as assigned.
- Demonstrated experience with relational and non-relational data storage models, schemas, and structures used in data lakes and warehouses for big data, business intelligence, reporting, visualization, and analytics.
- Hands‑on experience with extract, transform, load (ETL) process design, data lifecycle management, metadata management, and data visualization/report generation.
- Practical experience with industry‑accepted standards, best practices, and principles for implementing a well‑designed enterprise data architecture.
Technical Skills and Experience:
- Data processing experience in a production environment with terabyte-sized datasets, both structured and unstructured:
- Required Languages: Python and SQL
- Required Libraries: PyData stack, Dask, and Prefect
Required:
- Data storage experience with Microsoft SQL Server, MongoDB, and Snowflake
- Leveraging data partitioning, parallelization, and data flow optimization principles
- Implementing data sanitization, normalization, and standardization processes
- Orchestrating fault-tolerant/redundant data processing flows
- SQL query planning and optimization for incremental and historical workloads
- File manipulation across varied file types, encodings, formats, and standards
- Secure Software Development Lifecycle (SSDLC), version control, and release management
- Knowledge of healthcare interoperability standards such…
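For illustration, a minimal sketch of the kind of extract, transform, and load flow described above, assuming the required Prefect, Dask, and PyData tooling; the source file, connection string, and table name are hypothetical placeholders, not part of the posting.

import dask.dataframe as dd
import pandas as pd
from prefect import flow, task
from sqlalchemy import create_engine


@task
def extract(path: str) -> pd.DataFrame:
    # Read a potentially large CSV in parallel with Dask, then materialize to pandas.
    return dd.read_csv(path).compute()


@task
def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Standardize column names and drop duplicate rows.
    df.columns = [c.strip().lower() for c in df.columns]
    return df.drop_duplicates()


@task
def load(df: pd.DataFrame, conn_str: str, table: str) -> None:
    # Append the cleaned records to the target table.
    engine = create_engine(conn_str)
    df.to_sql(table, engine, if_exists="append", index=False)


@flow
def sample_etl_flow(path: str, conn_str: str, table: str) -> None:
    # Orchestrate extract -> transform -> load as a Prefect flow.
    load(transform(extract(path)), conn_str, table)


if __name__ == "__main__":
    # Hypothetical source file and local SQLite target, for illustration only.
    sample_etl_flow("records.csv", "sqlite:///warehouse.db", "records")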