Role and Responsibilities:
Strong proficiency in Python for data processing and ETL
Advanced SQL skills, including query optimization, indexing, joins, and analytical functions
Hands-on experience with ClickHouse, MongoDB, Redis, and Elasticsearch
Proficiency in Apache Spark/PySpark and working with data lakes
Experience with ETL and data ingestion tools such as Apache NiFi
Familiarity with messaging and streaming platforms like Kafka, RabbitMQ, and ActiveMQ
Experience with workflow orchestration frameworks such as Apache Airflow
Exposure to cloud platforms (AWS, GCP, or Azure) and their data services (S3, Redshift, BigQuery, Dataproc, etc.)
Understanding of data warehousing, data modeling, and performance optimization techniques
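For illustration, a minimal PySpark sketch of the kind of ETL pipeline these requirements describe: reading raw events from an assumed S3 data lake path, cleansing them, running an analytical aggregation, and writing partitioned Parquet back to the lake. All bucket names, paths, and column names below are hypothetical.

# Minimal PySpark ETL sketch; bucket, paths, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-etl")
    .getOrCreate()
)

# Read raw JSON events from the data lake (assumed S3 location).
raw = spark.read.json("s3a://example-data-lake/raw/orders/")

# Basic cleansing and typing into an analytics-friendly schema.
orders = (
    raw
    .filter(F.col("status").isNotNull())
    .withColumn("order_date", F.to_date("created_at"))
    .withColumn("amount", F.col("amount").cast("double"))
)

# Aggregate comparable to an analytical SQL query (GROUP BY + SUM).
daily_revenue = (
    orders
    .groupBy("order_date", "status")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count(F.lit(1)).alias("order_count"),
    )
)

# Write partitioned Parquet to the curated zone of the lake.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-data-lake/curated/daily_revenue/")
)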