
Data Engineer

Job in Framingham, Middlesex County, Massachusetts, 01704, USA
Listing for: Definitive Healthcare
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 60,000 - 80,000 USD yearly
Job Description & How to Apply Below

About Definitive Healthcare

At Definitive Healthcare (NASDAQ: DH), we’re passionate about turning data, analytics, and expertise into meaningful intelligence that helps our customers achieve success and shape the future of healthcare. We empower them to uncover the right markets, opportunities, and people—paving the way for smarter decisions and greater impact. Headquartered just outside of Boston, Massachusetts, Definitive Healthcare operates across North America, Europe, and India, supporting a growing global client base of more than 2,400 customers since our founding in 2011.

In 2024 and 2025, we earned multiple workplace honors, including Built In’s 100 Best Places to Work in Boston (both years), a Stevie Bronze Award for Great Employers, and recognition as a Great Place to Work in India. We foster a collaborative, inclusive culture where diverse perspectives drive innovation. Through programs like Definitive Cares and our employee‑led affinity groups, we strive to promote connection, education, and inclusion.

Data Engineer

We are looking for a Data Engineer who is passionate about building scalable data pipelines, working with complex healthcare datasets, and contributing to a modern, cloud‑native data architecture. If you thrive in a fast‑paced, data‑driven environment and have strong experience with Python, Spark, Databricks, AWS, SQL, and related technologies, we’d love to hear from you.

What You'll Do

Design and Develop Data Pipelines
  • Develop and maintain robust data pipelines using Python, Spark, Databricks, SQL, and SSIS
  • Implement and orchestrate ETL/ELT workflows using SSIS
  • Build reliable, repeatable processes that support the ingestion and transformation of large healthcare datasets
Data Integration and Management
  • Integrate data from diverse sources (AWS, on‑prem, third‑party vendors) into our enterprise data platform
  • Work with a wide range of file formats including CSV, XML, Parquet, Delta, and more
  • Apply strong data quality, cleansing, and curation practices to ensure accuracy and consistency
  • Optimize storage and compute resources for performance, cost, and scalability
  • Automate observability and monitoring across data pipelines and workloads
Metadata Management and Governance
  • Implement and manage Unity Catalog for metadata, lineage, and access control
  • Ensure adherence to data governance, security, and privacy standards
  • Maintain clear documentation, data dictionaries, and lineage tracking
  • Contribute to automation of data observability and governance workflows
Performance Tuning and Troubleshooting
  • Tune and optimize Spark jobs for speed, reliability, and cost efficiency
  • Diagnose and resolve performance bottlenecks across distributed systems
  • Apply JVM tuning and Spark optimization techniques to improve throughput
Data Maturity Lifecycle
  • Support and enhance our Medallion architecture (bronze/silver/gold) to improve data quality and usability
  • Ensure data is processed, enriched, and validated at each stage of the lifecycle
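The Medallion lifecycle above moves data through progressively refined stages: raw as ingested (bronze), cleansed and typed (silver), and aggregated for business use (gold). A minimal conceptual sketch in plain Python, purely for illustration: in practice these stages would be Delta tables processed with Spark on Databricks, and the record fields, cleaning rules, and metric here are hypothetical examples, not Definitive Healthcare's actual schema.

```python
# Bronze: data exactly as ingested, duplicates and gaps included.
raw_records = [
    {"npi": "1234567890", "name": " Framingham Clinic ", "beds": "120"},
    {"npi": "1234567890", "name": " Framingham Clinic ", "beds": "120"},
    {"npi": "9876543210", "name": "Boston General", "beds": None},
]

def to_silver(records):
    """Silver: deduplicate on the key, trim strings, cast types."""
    seen, silver = set(), []
    for r in records:
        if r["npi"] in seen:
            continue  # drop duplicate ingestions of the same facility
        seen.add(r["npi"])
        silver.append({
            "npi": r["npi"],
            "name": r["name"].strip(),
            "beds": int(r["beds"]) if r["beds"] is not None else None,
        })
    return silver

def to_gold(records):
    """Gold: validate and aggregate into a business-ready metric."""
    valid = [r for r in records if r["beds"] is not None]
    return {
        "facility_count": len(valid),
        "total_beds": sum(r["beds"] for r in valid),
    }

silver = to_silver(raw_records)
gold = to_gold(silver)
```

Each stage only reads from the one before it, so a bad record can be traced back to its bronze form and reprocessed without re-ingesting the source.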
Collaboration and Continuous Improvement
  • Partner with data scientists, analysts, product teams, and business stakeholders to understand data needs
  • Implement CI/CD pipelines to streamline deployment and testing of data assets
  • Stay current with emerging technologies and bring forward recommendations to evolve our data platform
What You Bring

Technical Skills
  • Strong programming experience in SQL and Python or Scala
  • Hands‑on experience with Apache Spark and Databricks
  • Experience with Apache Airflow or similar orchestration tools
  • Knowledge of data cleansing, curation, and quality frameworks
  • Familiarity with Unity Catalog or other metadata management tools
  • Understanding of data governance, security, and compliance best practices
  • Experience working with AWS cloud services
  • Proficiency with CI/CD tools (Jenkins, GitLab CI, etc.)
  • Experience tuning Spark jobs and JVM‑based applications
  • Experience implementing or working within a Medallion architecture
Soft Skills
  • Strong analytical and problem‑solving abilities
  • Excellent communication and cross‑functional collaboration skills
  • Ability to work independently and within a team environment
  • High attention to detail and commitment to quality
Preferred Qualifications
  • AWS…