We are seeking an exceptional Data Engineer to design, develop, and optimise intelligent, large-scale data systems that power advanced analytics, AI models, and cybersecurity intelligence within vQsystems’ innovation ecosystem.
This role demands a deep understanding of distributed data architectures, high-performance data pipelines, and scalable data storage strategies. You will work with cutting-edge technologies in a multi-cloud environment, collaborating with AI/ML, backend, and platform teams to deliver a real-time, insight-driven data infrastructure that fuels innovation across our enterprise.
If you are passionate about solving complex data problems, building resilient data architectures, and enabling AI-powered decision systems, this role offers a unique opportunity to shape the foundation of next-generation intelligent platforms.
Responsibilities
- Design, develop, and maintain scalable, real-time and batch data pipelines using tools like Apache Spark, Kafka, and Airflow.
- Build and optimise data lake and warehouse architectures on AWS, Azure, or GCP (Redshift, BigQuery, Snowflake, or Synapse).
- Implement ETL/ELT workflows that ensure high-quality, consistent, and secure data ingestion from multiple structured and unstructured sources.
- Collaborate with AI/ML engineers to design data pipelines optimised for machine learning models and continuous training.
- Develop and enforce data governance, lineage, and quality frameworks for enterprise-grade compliance and traceability.
- Implement monitoring, observability, and automation for all data flows to ensure reliability and minimal downtime.
- Work closely with software engineers and product teams to integrate real-time analytics and predictive insights into production systems.
- Continuously evaluate and integrate emerging data technologies to improve scalability, performance, and automation.
Requirements
- 4+ years of professional experience as a Data Engineer, preferably in complex, data-intensive environments.
- Strong proficiency in Python and SQL for data manipulation, transformation, and automation.
- Hands‑on experience with big data technologies such as Apache Spark, Kafka, Hadoop, and Airflow.
- Proven expertise with cloud data platforms (AWS Glue, Redshift, GCP BigQuery, or Azure Synapse).
- Deep understanding of data modelling, warehousing, and lakehouse architectures.
- Experience in ETL/ELT design, data partitioning, and performance tuning.
- Familiarity with containerised and microservice‑based architectures (Docker, Kubernetes).
- Exposure to AI/ML data workflows, including feature stores and model‑serving pipelines, is a plus.
- Strong knowledge of data governance, compliance (GDPR), and security best practices.
- Excellent problem‑solving skills, attention to detail, and ability to work collaboratively in a fast‑paced, innovation‑driven environment.
Benefits
- Competitive compensation and equity package.
- Hybrid or fully remote work flexibility.
- Opportunity to work on AI‑integrated data infrastructure projects at enterprise scale.
- Access to modern data technologies and high‑performance computing environments.
- Continuous professional learning.
- Inclusive, forward‑thinking engineering culture that rewards innovation, precision, and long‑term impact.