Data Engineer; Retail
Job in Egg Harbor Township, Atlantic County, New Jersey, 08234, USA
Listed on 2026-03-05
Listing for: Mondo
Contract position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description & How to Apply Below
Location-Type: Hybrid (Egg Harbor, NJ)
Start Date: ASAP (Beginning of March)
Duration: 6 Month Contract (option to extend)
Compensation Range: $65-70/hour W2
Responsibilities
As an Operational Data Engineer, you will play a key role in designing, building, and operating highly reliable operational data platforms that support business-critical systems and near-real-time workflows. This role emphasizes data availability, resiliency, observability, and cross-system integration, while providing technical leadership within the Operational Data team.
Key responsibilities include:
- Lead the design, development, and support of operational data pipelines serving systems such as eCommerce, OMS, WMS, 3PL, integrations, and other operational platforms
- Architect and implement incremental, CDC-based, event-driven, and batch data pipelines to support downstream consumers including Databricks, operational dashboards, analytics platforms, and business applications
- Partner closely with Integration teams, BI/Data Engineering, QA, Platform, and business stakeholders to define operational data contracts, SLAs, and reliability expectations
- Own and evolve the data consumption layer that enables multiple applications and platforms to consume consistent and trusted operational data
- Proactively monitor pipelines and jobs, perform deep root cause analysis for failures, performance degradation, and data quality issues, and drive long-term remediation
- Define and implement data quality, reconciliation, validation, and control frameworks for operational datasets
- Lead CI/CD strategy and implementation for operational data workflows, including environment promotion, rollback, and deployment automation
- Create and maintain comprehensive operational documentation, including architecture diagrams, data lineage, runbooks, and support playbooks
- Establish and promote best practices around performance tuning, scalability, resiliency, observability, and error handling
- Mentor and guide junior engineers, providing technical direction, design reviews, and best-practice coaching
- Participate in or lead production support and on-call rotations for business-critical operational data workloads
- Collaborate on initiatives involving seasonal readiness, peak-load preparation, and operational hardening
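The incremental, CDC-based loading the responsibilities above describe can be sketched minimally in plain Python with SQLite. This is a hypothetical, simplified illustration only: the `src_orders`/`tgt_orders` tables and single-row `watermark` table are invented stand-ins for the Azure SQL / Databricks sources and sinks this role actually works with.

```python
import sqlite3

def incremental_load(conn):
    """Copy only rows changed since the stored high-water mark (CDC-lite)."""
    cur = conn.cursor()
    # Watermark recorded by the previous run (epoch seconds).
    last_ts = cur.execute("SELECT last_ts FROM watermark").fetchone()[0]
    # Pull only rows modified after the watermark.
    rows = cur.execute(
        "SELECT id, amount, updated_at FROM src_orders WHERE updated_at > ?",
        (last_ts,),
    ).fetchall()
    # Upsert into the operational target so re-delivered rows stay consistent.
    cur.executemany(
        "INSERT INTO tgt_orders (id, amount, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount, "
        "updated_at = excluded.updated_at",
        rows,
    )
    # Advance the watermark so the next run skips already-loaded rows.
    if rows:
        cur.execute("UPDATE watermark SET last_ts = ?",
                    (max(r[2] for r in rows),))
    conn.commit()
    return len(rows)

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at INTEGER);
    CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at INTEGER);
    CREATE TABLE watermark (last_ts INTEGER);
    INSERT INTO watermark VALUES (0);
    INSERT INTO src_orders VALUES (1, 10.0, 100), (2, 20.0, 200);
""")
print(incremental_load(conn))  # first run copies both rows -> 2
conn.execute("UPDATE src_orders SET amount = 25.0, updated_at = 300 WHERE id = 2")
print(incremental_load(conn))  # second run picks up only the changed row -> 1
```

The watermark-plus-upsert shape is what makes the pipeline safely re-runnable: a failed run can simply be retried, since rows are keyed upserts and the watermark only advances after a successful load.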
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field; or equivalent practical experience
- 5+ years of experience in data engineering, operational data platforms, integrations, or production data systems
- Strong hands-on experience with Azure Databricks, Azure Data Factory, Azure Data Lake Storage (ADLS), Azure SQL / SQL Database, and Azure Key Vault
- Advanced proficiency in SQL and Python, including performance tuning and troubleshooting in production environments
- Strong experience with SQL Database platforms in cloud environments (Azure SQL, managed SQL services, performance optimization)
- Strong experience with incremental loading, CDC patterns, operational data modeling, and large-scale data ingestion
- Solid understanding of cloud-native data architectures and distributed systems
- Experience with DevOps and CI/CD practices, including source control, automated deployments, and environment management
- Hands-on experience with job orchestration and scheduling tools (e.g., Airflow, VisualCron, or equivalent)
- Strong communication skills with the ability to collaborate across technical and business teams
- Proven ability to operate effectively in a high-availability, operationally critical environment
- Experience building or supporting Power Automate workflows for operational automation and integrations
- Experience defining and maintaining data lineage across operational and analytical data platforms
- Familiarity with event-driven architectures, messaging systems, or streaming platforms
- Experience with monitoring, logging, and alerting frameworks for data platforms
- Exposure to Power BI or other visualization tools for operational insights
- Experience using AI-assisted development tools such as Microsoft Copilot, GitHub Copilot, Databricks Assistant, or ChatGPT
- Basic understanding of AI/ML concepts and how operational data supports ML workflows
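As a rough illustration of the monitoring, logging, and error-handling experience the list above asks for, here is a minimal stdlib-only sketch of a retrying job wrapper; the `flaky_step` job, attempt counts, and backoff values are invented for the example, and a real platform would hand the final failure to a proper alerting framework rather than just logging it.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def run_with_retries(job, attempts=3, backoff_s=0.01):
    """Run a pipeline step, logging failures and retrying with backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return job()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                # Surface the failure so an alerting layer can page on it.
                log.error("job failed permanently after %d attempts", attempts)
                raise
            time.sleep(backoff_s * 2 ** (attempt - 1))

# Demo: a flaky step that succeeds on the third try.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source timeout")
    return "loaded 42 rows"

print(run_with_retries(flaky_step))  # -> loaded 42 rows
```

Keeping retries, structured log lines, and a hard failure path in one wrapper is one simple way to make on-call triage tractable: the logs show every attempt, and only exhausted retries escalate.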
This role is eligible to enroll in both Mondo's health insurance plan and retirement plan. Mondo defers to the applicable State or local law for paid sick leave eligibility.