Backend Data Developer
Listed on 2026-02-28
Software Development
Backend Developer
We are seeking a highly skilled Backend Data Developer to join our Data Core Team, responsible for building the foundational infrastructure that powers our high-scale distributed product.
This role combines backend engineering excellence with deep expertise in modern data systems.
You will design and implement infrastructure components, data services, and internal frameworks in Java, working with technologies such as OpenSearch, ClickHouse, and Kafka. You will solve complex distributed-systems challenges, address real-world production issues, and ensure our platform remains fast, resilient, and scalable.
If you enjoy building core systems that handle massive data volumes and high throughput—and you want to own critical parts of the platform—this role is for you.
What will you do?
- Build and maintain core data infrastructure and backend services used across R&D.
- Develop scalable, efficient, and maintainable code in Java for high-throughput distributed systems.
- Design and implement robust ingestion, indexing, and storage workflows.
- Create internal tooling and frameworks that simplify data access, indexing, and processing for other teams.
- Work with and optimize large-scale distributed technologies such as OpenSearch, ClickHouse, Kafka, and Cassandra.
- Investigate and resolve challenging production issues involving data pipelines, storage layers, indexing, and distributed services.
- Proactively identify bottlenecks and scalability issues, and lead efforts to resolve them.
- Contribute to production readiness, incident response, and system hardening initiatives.
- Develop solutions that ensure durability, efficiency, and elasticity under high load.
- Mentor teams across R&D as part of the core team.
Requirements:
- Strong development skills in Java, including deep knowledge of JVM fundamentals.
- Experience in distributed systems and enterprise-scale backend development.
- Hands-on experience with data technologies such as OpenSearch, ClickHouse, Kafka, Cassandra, Spark, Iceberg, PostgreSQL, or similar high-scale data technologies.
- Proven ability to write efficient, optimized, production-grade code.
- Strong troubleshooting and debugging skills.
- Experience with high-scale, high-throughput systems that perform real-time processing.
- Experience with performance profiling and memory optimization.