
Sr. Data Engineer

Job in Greenfield, Milwaukee County, Wisconsin, USA
Listing for: Damco Spain SL
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 80,000 - 100,000 USD yearly
Job Description & How to Apply Below

What We’re Building – and Why You Should Care

GDA is Maersk’s bold leap into the future of data and AI. It’s not just a platform; it’s a transformation of how the world’s largest integrated logistics company turns its operational data into strategic intelligence. Think: real-time insights on vessel ETAs and carbon emissions, metadata-driven supply chain automation, and retrieval-augmented copilots that advise planners and operators. Our data engineers don’t just build pipelines; they shape the very foundation that powers AI-native logistics.

As a data engineer in the GDA-FBM team, you’ll help modernize and operationalize Maersk’s global data estate. You’ll craft reusable, observable, and intelligent pipelines that enable ML, GenAI, and domain-specific data products across a multi-cloud environment. Your code won’t just move data; it’ll move trade.

What You'll Be Doing

  • Ingest the world: Design and maintain ingestion frameworks for high-volume structured and unstructured data from operational systems, APIs, file drops, and events. Support streaming and batch use cases across latency windows.
  • Transform at scale: Develop transformation logic using SQL, Python, Spark, and modern declarative tools like dbt or sqlmesh. You’ll handle deduplication, windowing, watermarking, late-arriving data, and more.
  • Curate for trust: Collaborate with domain teams to annotate datasets with metadata, ownership, PII classification, and usage lineage. Enforce naming standards, partitioning schemes, and schema evolution policies.
  • Optimize for the lakehouse: Work within a modern lakehouse architecture, leveraging Delta Lake, S3, Glue, and EMR, to ensure scalable performance and queryability across real-time and historical views.
  • Model with rigor: Bring strong expertise in data modelling and dimensional modelling; experience with SSAS/AAS.
  • Work across clouds: Experience with the Azure stack, such as Azure Data Factory (ADF) and Azure Databricks (ADB).
  • Build for observability: Instrument your pipelines with quality checks, cost visibility, and lineage hooks. Integrate with OpenMetadata, Prometheus, or OpenLineage to ensure platform reliability and traceability.
  • Enable production-readiness: Support deployment workflows via GitHub Actions, Terraform, and IaC patterns. Your code will be versioned, testable, and safe for multi-tenant deployments.
  • Think platform-first: Everything you build should be reusable. You’ll help codify data engineering standards, create scaffolding for onboarding new datasets, and drive automation over repetition.
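
The transformation concerns named above (deduplication, watermarking, late-arriving data) can be sketched in plain Python, independent of any engine. This is a minimal illustration of the concepts, not Maersk's actual pipeline code; the `Event` record and field names are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical event record: a business key, an event time
# (epoch seconds), and an opaque payload.
@dataclass
class Event:
    key: str
    event_time: int
    payload: str

def dedupe_with_watermark(events, watermark_lag=60):
    """Keep the latest record per key, and drop events that arrive more
    than `watermark_lag` seconds behind the max event time seen so far."""
    max_seen = 0
    latest = {}   # key -> Event with the greatest event_time (dedup)
    dropped = []  # late arrivals, routed to a side output for inspection
    for ev in events:
        max_seen = max(max_seen, ev.event_time)
        watermark = max_seen - watermark_lag
        if ev.event_time < watermark:
            dropped.append(ev)  # behind the watermark: too late to process
            continue
        cur = latest.get(ev.key)
        if cur is None or ev.event_time > cur.event_time:
            latest[ev.key] = ev  # newer version of the same key wins
    return list(latest.values()), dropped
```

In an engine like Spark Structured Streaming the same idea is expressed declaratively (e.g. via `withWatermark`); the sketch just makes the bookkeeping explicit.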

What We’re Looking For

  • Strong foundation in data engineering: You know your way around distributed systems, columnar storage formats (Parquet, Avro), data lake performance tuning, and schema evolution.
  • Hands-on cloud experience: You’ve worked with AWS-native services like Glue, EMR, Athena, Lambda, and object storage (S3). Bonus if you’ve used Databricks, Snowflake, or Trino.
  • Modern engineering practices: Familiarity with GitOps, containerized workflows (Docker, K8s), and CI/CD pipelines for data workflows. Experience with Terraform and IaC is highly valued.
  • Programming proficiency: Fluency in Python and SQL is a must. Bonus if you’ve worked with Scala, Jinja-templated SQL, or DSL-based modeling frameworks like dbt/sqlmesh.
  • Curiosity and systems thinking: You understand the tradeoffs between batch and streaming, structured and unstructured, cost and latency, and you ask why before you build.
  • Collaboration skills: You’ll work closely with ML engineers, platform architects, security teams, and domain data owners. Ability to communicate clearly and write clean, documented code is key.
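
Schema evolution, named above, is the kind of concern this role polices daily: which changes can be auto-applied and which break downstream consumers. A minimal sketch in plain Python, with hypothetical field names, assuming schemas are given as simple `{column: type}` mappings:

```python
def diff_schemas(old_fields, new_fields):
    """Classify a schema change: new columns are additive (generally safe
    to auto-apply); removed or retyped columns are breaking changes."""
    added = {k: v for k, v in new_fields.items() if k not in old_fields}
    removed = sorted(k for k in old_fields if k not in new_fields)
    retyped = sorted(k for k in old_fields
                     if k in new_fields and old_fields[k] != new_fields[k])
    return {"added": added, "removed": removed, "retyped": retyped,
            "breaking": bool(removed or retyped)}
```

Real lakehouse formats (e.g. Delta Lake's schema enforcement and `mergeSchema`) apply the same additive-vs-breaking distinction at write time.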

What Makes This Role Special

  • Impact at global scale: Your work will influence container journeys, terminal operations, vessel routing, and sustainability metrics across 130+ countries and $4T+ in global trade.
  • Platform-level thinking: You’re not just solving one use case; you’re building primitives for others to reuse. This is your chance to shape a high-leverage internal data platform.
  • Freedom to experiment: We don’t believe in checkbox engineering. You’ll have space to challenge the status quo, propose better tooling, and refine the foundations of our platform stack.
  • Career-defining scope: Greenfield. Executive visibility.…