Location - Hyderabad/Ahmedabad
About the Role
We’re hiring a hands-on Senior Data Engineer to build and operate the data platforms and services that power multiple customer-facing products. This is an individual contributor role focused on designing pragmatic, cost-efficient data architectures and exposing data through reliable, well-designed Python services.
You’ll work in a matrixed data engineering organization, partnering with multiple product teams simultaneously and balancing competing requirements, evolving schemas, and differing performance and latency needs.
What You’ll Do
Design and build AWS-based data lake and analytics architectures using S3, Apache Iceberg, AWS Glue, Redshift, and DMS
Own end-to-end data flows, from ingestion through storage to product-facing access layers
Develop and maintain Python-based backend services and APIs that serve data to applications, internal tools, and downstream systems
Partner with multiple product teams to understand requirements and deliver scalable, reusable data solutions
Make deliberate tooling and architecture decisions with a strong focus on cost efficiency, performance, and operational simplicity
Optimize table design, partitioning, indexing, and query patterns for real-world product workloads
Ensure high availability, data correctness, and predictable performance for production systems
Operate as a senior IC: writing code, reviewing PRs, debugging production issues, and improving platform reliability
Document architectural decisions and trade-offs to support long-term maintainability
Required Qualifications
5+ years of hands-on experience in data engineering or backend engineering roles
Strong experience designing and operating AWS data platforms, including:
S3 with Apache Iceberg and Parquet
AWS Glue (ETL and Data Catalog)
Amazon Redshift
AWS DMS
Advanced Python skills, including building production-grade web services (e.g., FastAPI, Flask)
Strong SQL skills and experience supporting low-latency and analytical workloads
Proven experience designing systems with cost as a first-class constraint
Experience supporting multiple products or domains simultaneously in a matrixed team environment
Comfort owning systems in production, including monitoring, incident response, and performance tuning
Nice to Have
Experience building data-backed APIs with authentication, authorization, and rate limiting
Familiarity with infrastructure-as-code (Terraform, CDK, or CloudFormation)
Exposure to ML-adjacent workloads (feature generation, offline training data)
Experience with schema evolution and data versioning in product environments
What Success Looks Like
Product teams can reliably and efficiently access the data they need via well-designed services and tables
Data architectures scale across multiple products without unnecessary duplication
Platform and product costs are continuously optimized as usage grows
Data systems are observable, debuggable, and resilient in production
Technical decisions are grounded in trade-offs and communicated clearly across teams
Position Requirements
10+ years of work experience