
Consultant, Data Engineer; Tech Lead

Job in Columbus, Franklin County, Ohio, 43224, USA
Listing for: Nationwide Mutual Insurance Company
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager
Salary/Wage Range or Industry Benchmark: 80,000 – 100,000 USD yearly
Job Description & How to Apply Below
Position: Consultant, Data Engineer (Tech Lead)
** This role will work a hybrid schedule, coming into the Columbus, Ohio office two days (Wednesday & Thursday) per week.
** This role does not qualify for employer-sponsored work authorization. Nationwide does not participate in the STEM OPT Extension program.
** Be at the forefront of building our new cloud-native Annuity Data Platform. If you’re passionate about modern data platforms and leading strong data engineers, we invite you to apply for this role in the Annuity Modernization program.
* Lead the design and delivery of scalable, production-grade data products across cloud and hybrid environments, with deep hands-on expertise in Snowflake, Python, and modern data engineering practices.
* Provide technical leadership to a global team of 5–6 data engineers, collaborating with analysts, QA, and business partners to translate requirements into robust, high-quality data solutions.
* Develop and optimize complex ETL/ELT workflows and SQL workloads across Postgres, Oracle, SQL Server, and Snowflake, ensuring reliability, performance, and maintainability at scale.
* Engineer and deploy containerized data applications using Kubernetes and cloud platforms (AWS preferred), and design/manage orchestration workflows using UAC or similar tools.
* Drive data quality, lineage, observability, and engineering best practices, contributing to and enforcing standards, frameworks, and repeatable patterns across the broader engineering team.
* Deep experience writing, tuning, and optimizing complex SQL across multiple RDBMS systems.
* Strong troubleshooting skills with the ability to solve complex data pipeline and performance issues.
* Excellent communication, collaboration, and problem‑solving skills.
* Experience in financial services or insurance.
* Consults on complex data product projects by analyzing moderate to complex end-to-end data product requirements and existing business processes to lead the design, development, and implementation of data products.
* Responsible for producing data building blocks, data models, and data flows for varying client demands such as dimensional data, standard and ad hoc reporting, data feeds, dashboard reporting, and data science research & exploration.
* Translates business data stories into a technical story breakdown structure and work estimate so that value and fit can be assessed for a schedule or sprint.
* Responsible for applying secure software and systems engineering practices throughout the delivery lifecycle to ensure our data and technology solutions are protected from threats and vulnerabilities.
* Creates business user access methods to structured and unstructured data using techniques such as mapping data to a common data model, NLP, AI, statistical computations, transforming data as necessary to satisfy business rules, and validating data content.
* Builds data cleansing, imputation, and common data meaning and standardization routines for source systems by understanding business and source-system data practices and by using data profiling, source data change monitoring, and extraction, ingestion, and curation data flows.
* Facilitates medium- to large-scale data processing using cloud technologies on Azure and AWS (e.g., Redshift, S3, EC2, Data Pipeline, and other big data technologies).
* Collaborates with the enterprise DevSecOps team and other internal organizations on CI/CD best practices, using tools such as JIRA, Jenkins, and Confluence.
* Implements production processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
* Develops and maintains scalable data pipelines for both streaming and batch requirements, and builds out new API integrations to support continuing increases in data volume and complexity.
* Writes and performs data unit/integration tests for data quality. With input from business requirements/stories, creates and executes test data and scripts to validate that quality and completeness criteria are satisfied. Can create automated testing programs and data that are reusable for future code changes.
* Practices code management and integration with engineering Git principle…