Sr. Data Engineer
Cincinnati, Hamilton County, Ohio, 45208, USA
Listed on 2026-01-12
Listing for: Cypress HCM
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing, Data Analyst
Job Description & How to Apply Below
We are seeking an experienced and strategic Sr. Data Engineer to architect, implement, and optimize reliable data pipelines and models that power both our analytics and operational systems. This role owns the flow of data from ingestion through transformation and modeling in dbt and Snowflake, ensuring data is accurate, governed, accessible, and trusted across the organization. You will lead the effort across technical and business teams to deliver scalable, well‑documented solutions that fuel insights, automation, and system performance.
Responsibilities:
- Architect, implement, and optimize data pipelines using Fivetran, Airflow (via Astronomer), and AWS to support both analytical and operational workloads.
- Design complex dbt transformations and Snowflake models for performance, reliability, and scalability.
- Integrate structured and semi‑structured source data from multiple systems.
- Define and promote internal data governance standards, emphasizing rigorous testing and adherence to data quality and engineering best practices.
- Develop and maintain documentation, including data dictionaries, data flow diagrams, best practices, and data recovery processes to provide clear visibility into the data ecosystem.
- Lead the collaboration with analytics and visualization engineers to ensure data models align with business logic and reporting needs.
- Participate in code reviews, testing, and CI/CD workflows using Git and containerized environments such as Docker, and drive best practices in these processes.
- Continuously identify opportunities to improve data quality, automation, and observability.
- Mentor junior engineers and provide guidance on technical design, coding standards, and adoption of emerging technologies.
- Participate in cross‑functional and business discussions to understand how cleaned and transformed data is used by the business, ensuring alignment with business goals.
Qualifications:
- 5+ years of experience in data engineering, building and scaling modern data pipelines and models.
- Advanced proficiency in SQL and Python for data transformation, automation, and performance tuning.
- Deep hands‑on experience with Snowflake, dbt, and orchestration tools like Airflow.
- Strong working knowledge of Fivetran, AWS, and Docker.
- Strong understanding of data modeling, warehousing, and CI/CD workflows.
- Clear written and verbal communication skills, including the ability to document and present technical work effectively to engineering teams and business stakeholders.
- Experience with semi-structured data formats (JSON, Parquet, Avro).
- Exposure to containerization and infrastructure‑as‑code concepts.
- Healthcare or regulated industry experience preferred.
- Commitment to data governance, documentation, and data quality.
- Ownership mindset and accountability for delivering reliable solutions.
- Strong analytical and problem‑solving abilities with attention to detail.
- Adaptable and proactive learner in a fast‑evolving technical environment.
- Effective communicator with both technical and non‑technical stakeholders.
Compensation: $60 - $80 per hour