Analytics Engineer
Listed on 2026-01-12
IT/Tech
Data Engineer, Data Analyst, Data Science Manager, Data Scientist
Company description
A division of Publicis Groupe, Publicis Digital Experience is a network of top-tier agencies designed to develop capabilities and solutions to enable growth and provide scaled access to the digital capabilities of Publicis Groupe in service of our clients. Together, the Publicis Digital Experience portfolio endeavors to create value at the intersection of technology and experiences to connect brands and people.
Our model to transform every brand experience will help clients navigate, develop, and activate commerce in a way that will provide them with a future-proof model for modern marketing. With our unique expertise in consumer engagement, CRM, and commerce, Publicis Digital Experience powers brands and empowers people in a new era of creativity. An ever-changing landscape and the need for fluid thinking are just part of our problem-solving nature, which means we're untethered from any specific medium or method: we go where ideas will work best.
We are an expanding network of more than 7,000 employees across global offices, unified under the Publicis Digital Experience umbrella. Our portfolio includes agency brands such as Razorfish, Digitas, Mars United Commerce, Arc Worldwide, Saatchi & Saatchi X, Plowshare, and 3Share. Our capabilities span the full customer journey, from creative and experience to commerce and CRM, through specialized practices like Connected CRM and Publicis Commerce.
Part of the overall Analytics Group, the Analytics Engineering team is responsible for automating data modeling and preparing the data needed by the Data Science team for modeling and by the Business Analytics team for analysis. This involves documenting the data cleaning and transformation processes, automating those processes, ensuring data governance practices are followed, and maintaining a data catalog. The team is also responsible for developing all Power BI dashboards based on client needs or Data Science outputs.
- Core Responsibilities: Data Integration, Data Cleaning, Data Transformation, Data Warehousing, Dashboarding, MLOps, Charts & Visualizations, Reporting Automation, etc.
- Primary Tools: Databricks, Azure Synapse, Power BI
- Develop and maintain data pipelines and ETL processes (see the sketch after this list).
- Optimize data infrastructure for efficient data processing.
- Ensure data quality and accessibility for data scientists and analysts.
- Collaborate with cross-functional teams to address data needs and challenges.
- Implement data governance and security best practices.
- Support annual planning initiatives with clients.
- Work closely with cross-functional teams, including analysts, product managers, and domain experts, to understand business requirements, formulate problem statements, and deliver relevant data science solutions.
- Develop and optimize machine learning models by processing, analyzing, and extracting data from various internal and external data sources.
- Develop supervised, unsupervised, and semi-supervised machine learning models using state-of-the-art techniques to solve client problems.
- Show up - be accountable, take responsibility, and get back up when you are down.
- Make stuff.
- Share so others can see what’s happening.
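The pipeline and ETL work above centers on the Primary Tools listed earlier. As a rough illustration only, here is a minimal sketch of what such a job could look like in PySpark on Databricks; the table names (raw_orders, analytics.orders_clean) and columns are hypothetical placeholders, not details of the actual role.

```python
# Minimal ETL sketch for a Databricks-style pipeline (hypothetical tables).
# Reads raw data, cleans and transforms it, and writes a modeled table
# for downstream Data Science and Business Analytics use.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw_orders is a placeholder source table.
raw = spark.table("raw_orders")

# Transform: basic cleaning (dedupe, drop null keys, normalize types).
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("revenue", F.col("revenue").cast("double"))
)

# Load: write as a managed table for dashboards and modeling.
clean.write.mode("overwrite").saveAsTable("analytics.orders_clean")
```

In practice a job like this would be scheduled and paired with data-quality checks and documentation, in line with the governance and cataloging responsibilities above.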
- A bachelor’s/master’s degree in data analytics, computer science, or a directly related field.
- 3-5 years of industry experience in a data analytics or related role.
- Proficiency in SQL for data querying and manipulation.
- Experience with data warehousing solutions.
- Design, implement, and manage ETL workflows to ensure data is accurately and efficiently collected, transformed, and loaded into our data warehouse.
- Proficiency in programming languages such as Python and R.
- Experience with cloud platforms such as AWS, Azure, and Google Cloud.
- Experience in developing and deploying machine learning models.
- Knowledge of machine learning engineering practices, including model versioning, deployment, and monitoring.
- Familiarity with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn); a minimal example follows this list.
- Ability to design and develop scalable data pipelines for batch and real-time data processing.
- Experience with big data technologies such as Apache Spark,…
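As referenced in the frameworks item above, the following is a minimal supervised-learning example in scikit-learn. The dataset and model choice are stand-ins picked for illustration, not a prescribed stack for the role.

```python
# Minimal supervised-learning sketch with scikit-learn (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a bundled dataset as a stand-in for client data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a baseline classifier and evaluate on the held-out split.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```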