
Data Engineer (Real Time), Remote

Remote / Online - candidates ideally based in
Cambourne, Cambridgeshire, CB23, England, UK
Listing for: Remotestar
Remote/Work from Home position
Listed on 2026-02-28
Job specializations:
  • Software Development
    Data Engineer, Software Engineer
Job Description & How to Apply Below
Position: Data Engineer (Real Time) (Remote)

About the client:

At Remotestar, we are hiring on behalf of a client: a world-class iGaming operator offering a range of online gaming products across multiple markets through proprietary gaming sites and partner brands.

Their iGaming platform supports over 25 online brands and is used by hundreds of thousands of users worldwide. The company operates a hybrid model, with three days in the office and two days working from home.

About the Data Engineer role:

You will contribute to designing and developing Real-Time Data Processing applications to meet business needs. This environment offers an excellent opportunity for technical data professionals to build a consolidated Data Platform with innovative features while working with a talented and fun team.

Responsibilities include:

  • Development and maintenance of Real-Time Data Processing applications using frameworks like Spark Streaming, Spark Structured Streaming, Kafka Streams, and Kafka Connect.
  • Manipulation of streaming data, including ingestion, transformation, and aggregation.
  • Researching and developing new technologies and techniques to enhance applications.
  • Collaborating with the Data DevOps and Data Streams teams, as well as other disciplines.
  • Working in an Agile environment following SDLC processes.
  • Managing change and release processes.
  • Troubleshooting and incident management with an investigative mindset.
  • Owning projects and tasks, and working effectively within a team.
  • Documenting processes and sharing knowledge with the team.
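To illustrate the kind of work the second bullet describes, here is a minimal Scala sketch of the ingest-transform-aggregate pattern. It uses plain Scala collections to simulate a bounded slice of a stream rather than a real Kafka Streams or Spark Structured Streaming pipeline; the `BetEvent` type, field names, and 10-second window size are invented for this example, not taken from the posting.

```scala
// Hypothetical event type for this sketch; in a real pipeline this would be
// deserialized from a Kafka topic (e.g. via Kafka Connect or a Spark source).
final case class BetEvent(userId: String, amountCents: Long, tsMillis: Long)

object StreamingSketch {
  // Fixed (tumbling) window length: every event falls into exactly one window.
  val windowMillis = 10000L

  // Tumbling-window aggregation: assign each event to a window by integer
  // division of its timestamp, key by (window, user), and sum the stakes.
  // This mirrors what windowedBy(...).aggregate(...) does in Kafka Streams,
  // or groupBy(window(...)).sum(...) in Spark Structured Streaming.
  def aggregate(events: Seq[BetEvent]): Map[(Long, String), Long] =
    events
      .groupBy(e => (e.tsMillis / windowMillis, e.userId)) // window + key
      .view
      .mapValues(_.map(_.amountCents).sum)                 // sum per group
      .toMap

  def main(args: Array[String]): Unit = {
    val events = Seq(
      BetEvent("u1", 500, 1000L),
      BetEvent("u1", 300, 4000L),  // same 10s window as the event above
      BetEvent("u2", 200, 2000L),
      BetEvent("u1", 700, 12000L)  // falls into the next window
    )
    val out = aggregate(events)
    println(out((0L, "u1"))) // 800
    println(out((1L, "u1"))) // 700
  }
}
```

A real implementation would differ mainly in plumbing (event-time watermarks, late-data handling, state stores), but the core grouping-and-folding logic is the same shape.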

Preferred skills:

  • Strong knowledge of Scala.
  • Familiarity with distributed computing frameworks such as Spark, and with streaming technologies such as Kafka and Kafka Streams.
  • Understanding of monolithic vs. microservice architectures.
  • Familiarity with Apache ecosystem including Hadoop modules (HDFS, YARN, HBase, Hive, Spark) and Apache NiFi.
  • Experience with containerization and orchestration tools like Docker and Kubernetes.
  • Knowledge of time-series or analytics databases such as Elasticsearch.
  • Experience with AWS services like S3, EC2, EMR, Redshift.
  • Familiarity with data monitoring and visualization tools such as Prometheus and Grafana.
  • Experience with version control tools like Git.
  • Understanding of Data Warehouse and ETL concepts; familiarity with Snowflake is a plus.
  • Strong analytical and problem-solving skills.
  • Good learning mindset and ability to prioritize tasks effectively.