
Senior Data Engineer, Business Operations

Job in Paramus, Bergen County, New Jersey, 07653, USA
Listing for: SK Life Science
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager, AI Engineer, Data Analyst
Salary/Wage Range: USD 80,000 - 100,000 per year
Job Description & How to Apply Below

Job Location:

US-NJ-Paramus

Overview

The Senior Data Engineer, Biz Ops will play a critical role in architecting the data infrastructure powering our AI-driven business operations platform. This role is responsible for analyzing the current database environment, redesigning it for AI-native use cases, and establishing the foundational data architecture for our end-to-end decision-intelligence platform. You will design scalable data ecosystems, including Data Lakes, Data Pipelines, and semantic modeling layers, using modern engineering standards (dbt, orchestration frameworks, CI/CD).

Working closely with commercial and business operations experts, you will dissect existing workflows and reimagine them as AI-ready, streamlined processes in collaboration with AI Scientists and AI Engineers. You will translate ambiguous operational and business challenges into clean, reliable, ontology-aligned data models that enable forecasting, planning, and optimization across the value chain, beginning with supply chain operations. This is a high-impact senior role for someone who thrives in owning a data ecosystem end-to-end and building AI-centric data infrastructure from the ground up.

Responsibilities
  • Analyze existing databases and redesign them for AI/ML readiness, including ontology-driven and semantic data modeling.
  • Architect and implement a centralized Data Lake and scalable, robust data pipelines supporting operational workflows and AI-driven decision processes.
  • Build and maintain high-quality data transformations using dbt and enforce software engineering best practices across the data stack.
  • Design feature-ready data models to support AI/ML use cases such as forecasting, classification, and optimization.
  • Develop secure and reliable data ingestion frameworks (batch and streaming) with strong observability and performance controls.
  • Partner with Commercial, Marketing, and AI teams to translate business problems into data requirements, semantic models, and scalable pipelines.
  • Implement data quality, lineage, and governance practices aligned with enterprise standards.
  • Lead technical direction on modern data stack architecture and continuously improve scalability, efficiency, and maintainability.
  • Contribute to an agile, experimentation-driven culture, balancing rapid PoC execution with long-term architectural integrity.
Qualifications
  • Education: Bachelor's degree or higher in Computer Science, Engineering, or a related field.
  • Experience: 5+ years of hands-on experience in Data Engineering or technical Analytics Engineering, with deep experience building data lakes and orchestrating complex pipelines.
  • Skills:
    • Strong programming proficiency in Python and PySpark for large-scale distributed data processing, data manipulation, automation, and pipeline development.
    • Expert-level SQL for data modeling, complex transformations, and performance optimization.
    • Experience with modern data lake table formats such as Apache Iceberg.
    • Familiarity with Medallion Architecture (Bronze/Silver/Gold layers) for scalable and governed data processing.
    • Hands-on experience with modern transformation frameworks (e.g., dbt) and orchestration tools (e.g., Airflow or Python-based schedulers).
    • Knowledge of core AWS or Azure data services and data observability practices.
    • Experience optimizing data models for BI and visualization tools (e.g., Tableau).
    • Ability to define business metrics and derive semantic meaning from operational KPIs.
Strongly Preferred
  • Master's degree or higher in a quantitative or technical field.
  • Experience working with ML pipelines (e.g., MLflow, Feature Stores) and collaborating with AI Scientists/Engineers.
  • Knowledge of ontology-based modeling, semantic layers, and modern data architectures (e.g., Data Mesh, Data Fabric).
  • Experience with Graph Databases (e.g., Neo4j) for semantic modeling, ontology alignment, or operational knowledge graphs.
  • Domain experience in Supply Chain Management (SCM), Biz Ops, Rev Ops, or Commercial Operations.
  • Experience in regulated industries (e.g., biopharma, healthcare, finance).
  • Experience in a Biz Ops or highly cross-functional technical role.
  • Hands-on experience with Snowflake architecture.
Who Thrives in This Role
  • Someone who enjoys owning a data ecosystem end-to-end and building from zero to one.
  • A strategic thinker who balances strong technical depth with understanding of real business context.
  • An engineer who thrives in close collaboration with Commercial and AI teams to define how data powers decisions.
  • A builder comfortable operating in a fast-paced, start-up-like environment where innovation and speed matter.
  • An "Agile Operator" who can rapidly prototype for PoCs while architecting for long-term scalability and reliability.