Job Description
If interested, please share your resume to VS.
Location: TechM-Mumbai-Chandivali
Years of Experience: 2-4 years
Responsibilities:
• Design, develop, and maintain scalable data pipelines using GCP BigQuery.
• Collaborate with data scientists and analysts to understand data requirements and deliver high-quality datasets.
• Implement data quality checks and monitoring processes to ensure data integrity.
• Optimize existing data workflows for performance and cost efficiency.
• Work with cross-functional teams to gather and document data requirements.
• Participate in code reviews and contribute to best practices in data engineering.
• Stay updated with the latest trends and technologies in data engineering and GCP.
Mandatory Skills:
• Strong expertise in GCP BigQuery and PySpark.
• Proficient in SQL and data modeling.
• Experience with ETL processes and tools.
• Familiarity with data warehousing concepts and architecture.
• Knowledge of Python or Java for data processing.
Preferred Skills:
• Experience with other GCP services such as Dataflow, Dataproc, or Cloud Storage.
• Understanding of data governance and security best practices.
• Familiarity with machine learning concepts and tools.
• Experience with version control systems like Git.
• Strong analytical and problem-solving skills.