Data Engineer - Global Master Data
Eli Lilly and Company
Indianapolis, Indiana, USA
Full-time position, listed 2026-01-14
Job specializations: IT/Tech - Data Engineer, Data Analyst, Data Science Manager, Cloud Computing
Job requisition: R-98241

**Job Description**
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first.
We’re looking for people who are determined to make life better for people around the world.
**About the Tech at Lilly Organization**
Tech at Lilly builds and maintains capabilities using cutting-edge technologies, like the most prominent tech companies. What differentiates Tech at Lilly is that we create new possibilities through tech to advance our purpose of creating medicines that make life better for people around the world, such as data-driven drug discovery and connected clinical trials. We hire the best technology professionals from a variety of backgrounds, so they can bring an assortment of knowledge, skills, and diverse thinking to deliver innovative solutions in every area of the enterprise.
**What You Will Do**
A Data Engineer is responsible for designing, developing, and maintaining the data solutions that ensure the availability and quality of data for analysis and/or business transactions. They design and implement efficient data storage, processing, and retrieval solutions for datasets; build data pipelines; optimize database designs; and work closely with data scientists, architects, and analysts to ensure data quality and accessibility.
Data engineers require strong skills in data integration, acquisition, cleansing, harmonization, and transformation. They play a crucial role in turning raw data into analysis-ready datasets that enable organizations to unlock valuable insights for decision making.
* Design, build, and maintain scalable and reliable data pipelines for batch and real-time processing.
* Own **incident response and resolution**, including root cause analysis and post-mortem reporting for data failures and performance issues.
* Develop and optimize data models, ETL/ELT workflows, and data integration across multiple systems and platforms.
* Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
* Implement data governance, security, and quality standards across data assets.
* Lead end-to-end data engineering projects and contribute to architectural decisions.
* Design and implement cloud-native solutions on AWS (preferred) using tools such as AWS Glue, EMR, and Databricks. Experience with Azure or GCP is a plus.
* Promote best practices in coding, testing, and deployment.
* Monitor, troubleshoot, and improve performance and reliability of data infrastructure.
* Automate manual processes and identify opportunities to optimize data workflows and reduce costs.
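As an illustration of the batch-pipeline and data-quality work described above (this sketch is not from the posting; the table name, field names, and cleansing rules are all hypothetical), a minimal extract-transform-load step in plain Python might look like:

```python
import csv
import io
import sqlite3

# Hypothetical raw input, standing in for a file landed in object storage.
RAW_CSV = """record_id,amount,currency
1, 19.99 ,usd
2,5.00,EUR
3,,USD
"""

def extract(raw: str) -> list[dict]:
    """Read raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Cleanse and harmonize: trim whitespace, uppercase the currency
    code, and drop rows with a missing amount."""
    out = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # data-quality rule: reject incomplete records
        out.append(
            (int(row["record_id"]), float(amount), row["currency"].strip().upper())
        )
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load the cleaned rows into an analysis-ready table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transactions "
        "(record_id INTEGER PRIMARY KEY, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT record_id, amount, currency FROM transactions").fetchall())
# → [(1, 19.99, 'USD'), (2, 5.0, 'EUR')]
```

In production the same extract/transform/load shape would typically run on PySpark or AWS Glue against S3 and Redshift rather than in-memory SQLite, but the separation of stages and the explicit data-quality rules carry over.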
**How You Will Succeed**
* **Deliver scalable solutions** by designing robust data pipelines and architectures that meet performance and reliability standards.
* **Collaborate effectively** with cross-functional teams to turn business needs into technical outcomes.
* **Lead with expertise**, mentoring peers and driving adoption of best practices in data engineering and cloud technologies.
* **Continuously improve systems** through automation, performance tuning, and proactive issue resolution.
* **Communicate with clarity** to ensure alignment across technical and non-technical stakeholders.
**Your Basic Qualifications**
* Bachelor’s degree in Computer Science, Information Technology, Management Information Systems, or similar STEM fields
* At least 2 years of experience in data engineering using core technologies such as SQL, Python, PySpark, and AWS services including Lambda, Glue, S3, Redshift, Athena, and IAM roles/policies.
* 1+ years of experience working in Agile environments, with hands-on experience…