
Senior Data Engineer

Job in Bengaluru (Bangalore), 560001, Karnataka, India
Listing for: Confidential
Full Time position
Listed on 2026-02-04
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Big Data
Job Description & How to Apply Below
Location: Bengaluru

Hello, Truecaller is calling you from Bangalore, India! Ready to pick up?

Our goal is to make communication smarter, safer, and more efficient, while building trust across the world. With our roots in Sweden and a global reach, we deliver smart services that create meaningful social impact. We are committed to protecting you from fraud, harassment, scam calls, and unwanted messages, so you can focus on the conversations that matter.

Among the top 20 most downloaded apps globally, and the world's #1 caller ID and spam-blocking service for Android and iOS, with extensive AI capabilities and more than 450 million monthly active users.
Founded in 2009, Truecaller is listed on Nasdaq OMX Stockholm and categorized as a Large Cap company. Our focus on innovation, operational excellence, sustainable growth, and collaboration has resulted in consistently high profitability and strong EBITDA margins.
A team of 400 people of 45 different nationalities, with high ambitions, spread across our headquarters in Stockholm and our offices in Bangalore, Mumbai, Gurgaon, and Tel Aviv.

We in the Messaging team own everything under the messaging domain: we build, transform, and optimize existing and new features to make messaging more reliable and secure for our users. With over 450 million users, we operate one of the world's largest Instant Messaging (IM) services.

As a Senior Data Engineer, you will play an important role in developing the data pipelines, frameworks, and models that support our understanding of users and drive better product decisions. You will help empower the product teams with a complete self-serve analytics platform by working on scalable and robust solutions, collaborating with data engineers, data scientists, and data analysts across the company.

What you bring:

6+ years of experience as a Data Engineer
Hands-on experience with Airflow for managing workflows and building complex data pipelines in a production environment (see the DAG sketch after this list).
Experience working with big data and ETL development.
Strong proficiency in SQL and experience working with relational databases.
Programming skills in Apache Spark (PySpark or Scala), Kafka, or Flink.
Experience working with cloud computing services (e.g., GCP, AWS, Azure).

Experience with Data Science workflows.
Experience in data modeling and creating data lakes using GCP services like BigQuery and Cloud Storage.
Expertise in containerization and orchestration using Docker and Kubernetes (GKE) for scaling applications and services on GCP.
Experience building data models and transformations with dbt, following software engineering best practices (modularity, testing).
Version control experience with Git and familiarity with CI/CD pipelines (e.g., GitHub Actions).
Strong understanding of data security, encryption, and GCP IAM roles to ensure privacy and compliance (especially in relation to GDPR and other regulations).
Experience in ML model lifecycle management (model deployment, versioning, and retraining) using GCP tools such as Vertex AI (formerly AI Platform), TensorFlow Extended (TFX), or Kubeflow (see the deployment sketch after this list).
Experience working with data analysts and data scientists to build production systems.
Excellent problem-solving and communication skills, both with peers and with experts from other areas.
Self-motivated, with a proven ability to take the initiative to solve problems.
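
To make the Airflow expectation above concrete, here is a minimal sketch of a daily ETL DAG. The DAG id, task names, and callables are hypothetical placeholders, not Truecaller's actual pipelines; a real production pipeline would add alerting, backfills, and data-quality checks:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_events():
        ...  # placeholder: pull the day's raw events from source systems

    def load_to_warehouse():
        ...  # placeholder: load transformed data into the warehouse

    with DAG(
        dag_id="daily_messaging_etl",  # hypothetical DAG name
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_events)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
        extract >> load  # extract must finish before load starts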
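
For the ML lifecycle point, a minimal Vertex AI deployment sketch, assuming a hypothetical project, bucket, and scikit-learn model artifact. Vertex AI's Model Registry keeps earlier uploads listed, which is what enables versioned rollbacks and retraining loops:

    from google.cloud import aiplatform

    # Hypothetical project and region.
    aiplatform.init(project="example-project", location="us-central1")

    # Register a new model version from a hypothetical GCS artifact,
    # served with a prebuilt scikit-learn prediction container.
    model = aiplatform.Model.upload(
        display_name="spam-classifier",
        artifact_uri="gs://example-bucket/models/spam-classifier/",
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
    )

    # Deploy the version behind an online-prediction endpoint.
    endpoint = model.deploy(machine_type="n1-standard-2")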

The impact you will create:

Design, develop, and maintain scalable data pipelines to process and analyze large data sets in real-time and batch environments (see the batch-processing sketch after this list).
Play a crucial role in the team and own ETL pipelines.
Collaborate with data scientists, analysts, and stakeholders to gather data requirements, translate them into robust ETL solutions, and optimize the data flows.
Implement best practices for data ingestion, transformation, and data quality to ensure data consistency and accuracy.
Develop, test, and deploy complex data models and ensure the performance, reliability, and security of the infrastructure.
Own the architecture and design of data pipelines and systems, ensuring they are aligned with business needs and capable of handling growing volumes of data.
Make data-driven decisions, informed by past experience.
Monitor data pipeline performance and troubleshoot any issues related to data ingestion, processing, or extraction.
Work with big data technologies to enable storage, processing, and analysis of massive datasets.
Ensure compliance with data protection and privacy regulations, particularly in regions like the EU where GDPR compliance is essential.
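
As one illustration of the batch side of that first responsibility, a minimal PySpark sketch that aggregates one day of message events. The bucket paths and column names are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_message_stats").getOrCreate()

    # Hypothetical input: one Parquet row per message event, partitioned by date.
    events = spark.read.parquet("gs://example-bucket/events/date=2026-02-04/")

    daily_stats = (
        events
        .filter(F.col("event_type") == "message_sent")  # hypothetical column
        .groupBy("country_code")                        # hypothetical column
        .agg(
            F.countDistinct("user_id").alias("active_users"),
            F.count("*").alias("messages"),
        )
    )

    # Hypothetical output location for downstream analytics.
    daily_stats.write.mode("overwrite").parquet("gs://example-bucket/stats/")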

It would be great if you also have:

Familiarity with event-driven architecture and microservices using Cloud Pub/Sub, Cloud Run, or GKE to build highly scalable, resilient, and loosely coupled systems (see the subscriber sketch after this list).
Proficiency in backend programming languages like Go, Python, Java, or Scala specifically for building highly scalable, low-latency data services and APIs.
Hands-on experience in designing and implementing RESTful APIs or gRPC services for seamless integration with data pipelines and external systems.
Hands-on experience with…
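
For the event-driven point above, a minimal Cloud Pub/Sub streaming-pull subscriber sketch in Python; the project and subscription names are hypothetical:

    from concurrent.futures import TimeoutError

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    # Hypothetical project and subscription names.
    subscription_path = subscriber.subscription_path(
        "example-project", "message-events-sub"
    )

    def callback(message):
        # Process the event payload, then ack so Pub/Sub stops redelivering it.
        print(message.data)
        message.ack()

    streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
    try:
        streaming_pull.result(timeout=60)  # block for up to a minute in this sketch
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()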
Position Requirements
10+ Years work experience