
Scientific Data Infrastructure Engineer

Job in Westbrook, Cumberland County, Maine, 04098, USA
Listing for: IDEXX Laboratories, Inc
Full Time position
Listed on 2026-01-15
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 80,000 – 100,000 USD yearly
Job Description & How to Apply Below

We’re proud to be a global leader in pet healthcare innovation. Our diagnostic instruments, software, tests, and services help veterinarians around the world advance medical care, improve staff efficiency, and build more economically successful practices. At IDEXX, you’ll be part of a team that’s passionate about making a difference in the lives of pets, people, and our planet.

We are seeking a Scientific Data Infrastructure Engineer to join our R&D Discovery and Technology Futures team. In this role, you’ll be the technical architect enabling rapid development and deployment of data pipelines and scientific computing infrastructure that support our biomarker discovery and diagnostic development programs. You’ll work embedded within our LCMS research team, bridging cloud infrastructure, database architecture, and scientific computing – helping transform raw analytical data into production‑ready diagnostic solutions.

This role is onsite in Westbrook, Maine.

What You Will Do

Infrastructure & Automation Leadership
  • Design and implement CI/CD pipelines using GitHub Actions, GitLab CI/CD, AWS CodePipeline, and Google Cloud Build to streamline deployment of mass spectrometry‑based data processing systems and proteomic computing workloads
  • Develop and maintain infrastructure‑as‑code solutions using Terraform for AWS and Google Cloud environments
  • Build automated deployment systems for serverless functions using AWS Lambda and Google Cloud Run
  • Orchestrate large‑scale batch processing jobs using AWS Batch and Google Cloud Batch
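For illustration only (this sketch is not part of the posting, and the queue and job-definition names are hypothetical placeholders): orchestrating an instrument-processing step on AWS Batch typically means assembling a job request per sample and handing it to the Batch API.

```python
# Sketch: build an AWS Batch submit_job payload for one mass-spec
# processing run. All resource names here are illustrative assumptions.
def build_batch_job(sample_id: str, queue: str = "lcms-processing-queue") -> dict:
    """Assemble the keyword arguments for a single instrument run."""
    return {
        "jobName": f"lcms-{sample_id}",
        "jobQueue": queue,
        "jobDefinition": "lcms-pipeline:1",
        "containerOverrides": {
            "environment": [{"name": "SAMPLE_ID", "value": sample_id}]
        },
    }

# In a real deployment the payload would be passed to boto3, e.g.:
#   import boto3
#   boto3.client("batch").submit_job(**build_batch_job("S0042"))
job = build_batch_job("S0042")
```

Keeping payload construction in a plain function like this lets the request shape be unit-tested without touching cloud credentials.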
Database Architecture & Data Pipeline Development
  • Design and implement scalable database solutions for proteomic, metabolomic and genomic data storage and retrieval
  • Architect and optimize Snowflake data warehouses for large‑scale multi‑omic datasets
  • Build ETL/ELT workflows for instrument data ingestion, including metadata capture and provenance tracking
  • Manage both SQL and NoSQL database systems supporting research applications
  • Implement data governance, backup, disaster recovery, and audit trail strategies
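As a hedged illustration of the provenance-tracking idea above (not IDEXX’s actual schema – table and column names are invented for the example): ingestion can record a content checksum and timestamp per instrument file, giving an audit trail for every artifact entering the pipeline.

```python
import hashlib
import sqlite3
import time

# Sketch: capture file-level provenance during instrument data ingestion.
# sqlite3 stands in for the production database; the schema is illustrative.
def ingest(conn: sqlite3.Connection, name: str, payload: bytes) -> str:
    """Store one file's provenance record and return its SHA-256 digest."""
    checksum = hashlib.sha256(payload).hexdigest()
    conn.execute(
        "CREATE TABLE IF NOT EXISTS provenance "
        "(file TEXT, sha256 TEXT, ingested_at REAL)"
    )
    conn.execute(
        "INSERT INTO provenance VALUES (?, ?, ?)",
        (name, checksum, time.time()),
    )
    conn.commit()
    return checksum

conn = sqlite3.connect(":memory:")
digest = ingest(conn, "run_001.mzML", b"spectrum data")
```

Re-hashing a file later and comparing against the stored digest is then enough to detect silent corruption or unrecorded reprocessing.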
Scientific Computing Operations
  • Create and manage computing infrastructure for mass spectrometry‑based data processing
  • Implement scalable solutions for high‑throughput multi‑omic data pipelines from analytical instruments
  • Deploy and maintain data annotation platforms and curation systems
  • Build monitoring and alerting systems that track pipeline health, processing backlogs, and system performance
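Purely as an illustrative sketch of the backlog-alerting idea (the threshold and queue semantics are assumptions, not the team’s actual design): a monitor can track the number of jobs awaiting processing and raise an alert flag once the backlog crosses a limit.

```python
from collections import deque

# Sketch: a toy pipeline-health monitor that flags a processing backlog.
class BacklogMonitor:
    def __init__(self, alert_threshold: int = 100):
        self.queue = deque()          # job IDs awaiting processing
        self.alert_threshold = alert_threshold

    def enqueue(self, job_id: str) -> None:
        """Register a newly arrived job."""
        self.queue.append(job_id)

    def complete(self) -> str:
        """Mark the oldest pending job as processed."""
        return self.queue.popleft()

    @property
    def alert(self) -> bool:
        """True when the backlog meets or exceeds the threshold."""
        return len(self.queue) >= self.alert_threshold

mon = BacklogMonitor(alert_threshold=3)
for job in ("run_a", "run_b", "run_c"):
    mon.enqueue(job)
```

In practice the same check would feed a paging or dashboard system rather than a boolean property.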
Cross-functional Collaboration
  • Partner with research scientists, bioinformaticians, and software engineers to understand computational requirements and translate scientific needs into technical solutions
  • Provide technical leadership to implement modern DevOps practices across research workflows
  • Develop documentation, playbooks, and training materials to enable self‑service capabilities for research teams
  • Mentor team members and drive adoption of DevOps best practices
What You Need to Succeed
  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience)
  • 7‑10+ years of experience in DevOps, Database Architecture, or related fields
  • Proven track record of leading complex infrastructure projects, preferably in research or data‑intensive environments
  • Strong experience with CI/CD tools (GitHub Actions, GitLab CI/CD, AWS CodePipeline, Google Cloud Build, Jenkins, ArgoCD)
  • Proficiency in infrastructure‑as‑code (Terraform, CloudFormation)
  • Advanced Python programming and scripting capabilities (Bash, PowerShell)
  • Experience with container orchestration (Kubernetes, Docker)
  • Cloud platform expertise (AWS, Google Cloud) with focus on serverless computing and batch processing systems
  • Strong database administration and architecture skills including:
    • Snowflake data warehouse design, optimization, and administration
    • SQL databases (PostgreSQL, MySQL, SQL Server)
    • NoSQL databases (MongoDB, DynamoDB, Cassandra)
    • Database performance tuning and ETL/ELT pipeline development
Preferred Qualifications
  • Experience in life sciences, biotechnology, diagnostics, or other research‑intensive industries
  • Familiarity with scientific data workflows, laboratory informatics, or…