
Senior Kafka Engineer

Job in Tempe, Maricopa County, Arizona, 85285, USA
Listing for: Enormous Enterprise LLC
Full Time position
Listed on 2026-01-14
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Big Data, Data Analyst
Job Description & How to Apply Below

Senior Kafka Engineer

Tempe, AZ - On site - W2 Only

Skills:

Kafka, Java or Scala, Kafka pipelines, AWS, Azure, GCP, Cloud, DevOps

Overview

Experienced Kafka Engineer with expertise in Confluent Kafka, Java/Scala, and distributed systems. Skilled in designing scalable, fault-tolerant Kafka-based data pipelines, troubleshooting messaging issues, and optimizing performance. Strong background in cloud deployments, microservices, and Agile development with an automate-first approach.

Responsibilities
  • Identify and resolve Kafka messaging issues within agreed timeframes.
  • Work with business and IT teams to understand business problems, then design, implement, and deliver appropriate solutions using Agile methodology across the larger program.
  • Work independently to implement solutions on multiple platforms (DEV, QA, UAT, PROD).
  • Provide technical direction, guidance, and reviews to other engineers working on the same project.
  • Administer distributed Kafka clusters in Dev, QA, UAT, and PROD environments and troubleshoot performance issues.
  • Implement and debug subsystems/microservices and components.
  • Follow an automate-first/automate-everything philosophy.
  • Remain hands-on in the programming languages used on the project.
Key Skills & Expertise
  • Deep understanding of Confluent Kafka:
    Thorough knowledge of Kafka concepts like producers, consumers, topics, partitions, brokers, and replication mechanisms.
  • Programming language proficiency:
    Primarily Java or Scala, with potential for Python depending on the project.
  • System design and architecture:
    Ability to design robust and scalable Kafka-based data pipelines, considering factors like data throughput, fault tolerance, and latency.
  • Data management skills:
    Understanding of data serialization formats like JSON, Avro, and Protobuf, and how to manage data schema evolution.
  • Kafka Streams API (optional):
    Knowledge of Kafka Streams for real-time data processing within the Kafka ecosystem.
  • Monitoring and troubleshooting:
    Familiarity with tools to monitor Kafka cluster health, identify performance bottlenecks, and troubleshoot issues.
  • Cloud integration:
    Experience deploying and managing Kafka on cloud platforms like AWS, Azure, or GCP.
  • Distributed systems concepts.
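The fault-tolerance and throughput concerns above largely come down to a handful of broker, topic, and producer settings. As an illustration only (the values are assumptions, not this employer's standards), a durability-focused setup might look like:

```properties
# Broker-side defaults for fault tolerance (example values)
default.replication.factor=3   # each partition survives the loss of two brokers
min.insync.replicas=2          # require a replica quorum before acknowledging writes

# Producer settings favouring durability over raw latency
acks=all                       # wait for all in-sync replicas
enable.idempotence=true        # avoid duplicates when retrying
retries=2147483647             # retry transient failures indefinitely
```

Accepting the extra acknowledgement latency of `acks=all` with `min.insync.replicas=2` in exchange for durability is the classic trade-off the system-design bullet above alludes to.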
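Schema evolution, mentioned under data management above, means adding fields without breaking consumers that read older records. A minimal sketch using plain JSON and application-side defaults (real pipelines would typically use Avro with a schema registry; the field names here are hypothetical):

```python
import json

# A v1 producer wrote records without "region"; the v2 schema adds it.
# Defaults applied when a field is absent keep old records readable.
V2_DEFAULTS = {"region": "us-east-1"}

def read_event(raw: str) -> dict:
    """Parse an event, filling in defaults for fields added in later schema versions."""
    event = json.loads(raw)
    return {**V2_DEFAULTS, **event}  # producer-supplied values win over defaults

old_record = '{"user_id": 42, "action": "click"}'                        # v1 producer
new_record = '{"user_id": 7, "action": "view", "region": "eu-west-1"}'   # v2 producer

print(read_event(old_record)["region"])  # → us-east-1 (default applied)
print(read_event(new_record)["region"])  # → eu-west-1 (value preserved)
```

This is the same backward-compatibility discipline Avro enforces formally: new fields must carry defaults so that consumers can decode data written under the old schema.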
Position Requirements
10+ years' work experience