Job Description
We are seeking an experienced Technical Lead to design, develop, and lead the delivery of Big Data and cloud-based solutions. The ideal candidate will have strong expertise in distributed data processing, cloud-native architectures, and modern data technologies, along with a proven record of guiding teams and delivering enterprise-grade solutions.
Key Responsibilities:
Lead end-to-end design and development of Big Data solutions using Spark, Kafka, and Python.
Architect and implement scalable data pipelines and real-time streaming applications.
Design and optimize data storage solutions using NoSQL and SQL databases.
Drive cloud-native development and deployment on GCP, including Cloud Storage and related services.
Implement workflow orchestration using Airflow for data pipelines.
Collaborate with cross-functional teams including DevOps, QA, and product stakeholders.
Ensure data security, scalability, and performance across platforms.
Mentor and guide development teams on best practices and coding standards.
Participate in architecture reviews and technical decision-making.
Implement CI/CD pipelines and ensure smooth deployments.
Deliver high-quality Big Data solutions within agreed timelines.
Maintain low-latency data processing and high system availability.
Optimize cloud resource utilization and cost efficiency.
Ensure compliance with security and governance standards.
Required Skills & Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
8 to 12 years of experience in Big Data and cloud-based solution development, with at least 3 years in a technical lead role.
Strong proficiency in Big Data technologies: Spark, Kafka, Hadoop.
Expertise in Python programming.
Hands-on experience with GCP cloud services including Cloud Storage.
Experience with Airflow for workflow orchestration.
Strong knowledge of SQL and NoSQL database design and optimization.
Familiarity with CI/CD pipelines, containerization (Docker), and Kubernetes.
Strong problem-solving, leadership, and communication skills.
Preferred Skills & Certifications:
GCP Professional Data Engineer or Cloud Architect certification.
Experience with data modeling and lakehouse architectures.
Knowledge of other cloud platforms (AWS, Azure).