
Senior Data Engineer

Job in Urbana, Champaign County, Illinois, 61803, USA
Listing for: RAPP
Full Time position
Listed on 2026-03-07
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager
Salary/Wage Range or Industry Benchmark: 60,000 – 80,000 USD yearly
Job Description

RAPP Chicago is looking for a Senior Data Engineer to join our award-winning Technology team.

Who We Are

We are RAPP – world leaders in activating growth with precision and empathy. As a global, next-generation precision marketing agency, we leverage data, creativity, technology, and empathy to foster client growth. We champion individuality in the marketing solutions we create, and in our workplace. We fight for solutions that adapt to the individual’s needs, beliefs, behaviors, and aspirations. We foster an inclusive workplace that emphasizes personal well-being.

How We Do It

At RAPP, our fearless superconnectors help to create value from personal brand experiences by focusing on three key areas: connected data, connected content and connected decisioning. Our data analysts identify who that person is, our strategists understand what they want, and our award-winning technologists and creatives know how to deliver it – ensuring we’re able to activate authentic customer connections for our clients.

Part of Omnicom’s Precision Marketing Group, RAPP is comprised of 2,000+ creatives, technologists, strategists, and data and marketing scientists across 15+ global markets.

Your Role

We are looking for a Senior Data Engineer with deep expertise in building scalable, cloud-native data pipelines and platforms. The ideal candidate is highly skilled in Python, Apache Airflow, AWS Lambda, DynamoDB, and dbt, and has experience designing reliable data workflows that enable advanced analytics, reporting, and machine learning use cases. They will also have strong attention to detail, a passion for information management, and the ability to work collaboratively with creative teams to enhance the efficiency and scalability of our asset workflows.

Your Responsibilities
  • Data Pipeline Development
    • Design, build, and maintain robust ETL/ELT pipelines using Python and Airflow.
    • Develop serverless workflows leveraging AWS Lambda for scalable event-driven data processing.
    • Implement and optimize dbt models for analytics and transformations.
  • Data Architecture & Storage
    • Design schemas and manage data in DynamoDB and other cloud-native storage solutions.
    • Ensure high availability, scalability, and performance of data systems.
    • Integrate structured, semi-structured, and unstructured data sources.
  • Automation & Orchestration
    • Build workflow orchestration strategies using Airflow for scheduling and monitoring pipelines.
    • Automate infrastructure deployment and CI/CD pipelines for data services.
  • Quality & Governance
    • Implement data validation, testing, and monitoring frameworks.
    • Ensure compliance with security, privacy, and governance standards.
  • Collaboration & Leadership
    • Partner with analytics, product, and engineering teams to deliver reliable datasets.
    • Mentor junior engineers and enforce best practices in data engineering.
    • Actively contribute to improving team efficiency, scalability, and standards.
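As an illustration of the validation-and-monitoring responsibilities above, here is a minimal sketch of a record-validation step in plain Python. The record schema and field names are invented for this example and are not part of the role description.

```python
# Hypothetical record schema for illustration only.
REQUIRED_FIELDS = {"user_id": str, "event": str, "amount": float}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    for name, expected_type in REQUIRED_FIELDS.items():
        if name not in record:
            errors.append(f"missing field: {name}")
        elif not isinstance(record[name], expected_type):
            errors.append(f"bad type for {name}: expected {expected_type.__name__}")
    return errors

def partition_records(records: list[dict]):
    """Split a batch into (valid, rejected) so bad rows can be quarantined
    rather than silently dropped or allowed to break downstream models."""
    valid, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejected.append((rec, errs))
        else:
            valid.append(rec)
    return valid, rejected
```

In a pipeline, a step like this would typically run between extraction and loading, with rejected rows written to a quarantine table and surfaced in monitoring.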
Required Skills
  • 5–8+ years of experience in data engineering, software engineering, or a related role.
  • Strong expertise in Python for data engineering and automation.
  • Hands‑on experience with Apache Airflow for orchestration.
  • Proficiency with AWS Lambda and serverless design patterns.
  • Solid experience with DynamoDB (schema design, performance tuning, scaling).
  • Strong knowledge of dbt for transformation and analytics modeling.
  • Experience with cloud environments (AWS preferred).
  • Familiarity with CI/CD workflows, Git, and DevOps practices.
  • Strong problem‑solving and communication skills.
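The serverless skills listed above can be sketched with a hypothetical AWS-Lambda-style handler written in plain Python, so it can be unit-tested without any AWS infrastructure. The event shape (an SQS-like `Records` batch) and the field names are assumptions made for this illustration.

```python
import json

def handler(event: dict, context=None) -> dict:
    """Hypothetical Lambda-style handler: parses JSON payloads from an
    event batch and returns normalized records. Keeping the logic pure
    (no AWS SDK calls) makes it straightforward to test locally."""
    records = []
    for item in event.get("Records", []):
        body = json.loads(item["body"])  # each message body is a JSON string
        records.append({
            "user_id": body["user_id"],
            "event_type": body.get("type", "unknown"),
        })
    return {"statusCode": 200, "body": json.dumps(records)}
```

Separating parsing/transformation logic from the entry point in this way is a common serverless design pattern, since the core function can be exercised by tests and reused outside the Lambda runtime.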
Preferred Qualifications
  • Experience with other AWS services (S3, Glue, Redshift, Kinesis).
  • Familiarity with data warehouse and data lake architectures.
  • Exposure to real‑time streaming and event‑driven data pipelines.
  • Knowledge of containerization (Docker, Kubernetes).
Soft Skills
  • Exceptional attention to detail and organizational skills.
  • Strong written and verbal communication skills, with the ability to explain complex metadata systems to non‑technical users.
  • Ability to work collaboratively and cross‑functionally with creative, marketing, and IT teams.
  • Proactive problem‑solver who can identify issues and suggest improvements.
  • Time management skills with the ability to prioritize and…
Position Requirements
10+ years of work experience