Qualifications:
Proven experience in developing Kafka producers and consumers for real-time data ingestion pipelines.
Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
Knowledge of Kafka security protocols, including TLS/SSL, Kerberos, and access control via Apache Ranger.
Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).