
Data Engineer specializing in AWS Bedrock

Job in Chicago, Cook County, Illinois, 60290, USA
Listing for: Vipany Global
Full Time position
Listed on 2026-01-16
Job specializations:
  • Software Development
    Data Engineer
Job Description

We are seeking a highly skilled Data Engineer specializing in AWS Bedrock and modern data platforms to design, build, and optimize scalable data solutions and pipelines for advanced analytics and AI-driven applications.

Responsibilities
  • Design, develop, and maintain robust data pipelines and architecture for large-scale data processing.
  • Implement and optimize data workflows using AWS services (Glue, Lambda, EMR, Kinesis) and Bedrock (see the sketch after this list).
  • Collaborate with data scientists and ML engineers to integrate machine learning models into production environments.
  • Ensure data quality, security, and compliance across all stages of the data lifecycle.
  • Develop CI/CD pipelines for data engineering projects using Git, Terraform, and containerization tools.
  • Work with streaming and batch processing frameworks (Spark, Kafka/Kinesis, Spark Streaming).
  • Manage and optimize relational (PostgreSQL) and NoSQL databases (Redis, Elasticsearch).
  • Monitor and troubleshoot data systems for performance and reliability.
  • Stay updated on emerging technologies in big data, AI, and cloud platforms.
  • Collaborate closely with teams in an Agile/Scrum environment.
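
For context on the Bedrock integration work above, the following is a minimal sketch of calling a Bedrock-hosted model from Python with boto3; the region, model ID, and prompt are illustrative assumptions rather than details from this posting.

    import json

    import boto3

    # Bedrock inference goes through the "bedrock-runtime" service client.
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumption: any text model enabled in the account
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Summarize yesterday's pipeline failures."}],
        }),
    )

    # The response body is a JSON byte stream; Claude-style models return a "content" list.
    print(json.loads(response["body"].read())["content"][0]["text"])
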
Educational Qualifications
  • Engineering Degree BE/ME/BTech/MTech/BSc/MSc.
  • Technical certification in multiple technologies is desirable.
Skills
Mandatory Skills
  • Programming: Strong proficiency in Python; ability to learn other languages quickly.
  • AWS Expertise: Hands‑on experience with AWS Bedrock, Lambda, Glue, Athena, Kinesis, IAM, EMR/PySpark.
  • Big Data Technologies: EMR, Spark, Kafka/Kinesis, Airflow.
  • Databases: Advanced SQL (complex queries), PostgreSQL, Redis, Elasticsearch.
  • CI/CD & Infrastructure: Git, Terraform, Docker; experience with agile methodologies.
  • Stream Processing: Spark Streaming or similar frameworks (see the sketch after this list).
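
As a rough illustration of the stream-processing stack listed above, here is a minimal Spark Structured Streaming sketch that reads JSON events from Kafka and writes them as Parquet to S3; the broker address, topic, schema, and S3 paths are assumptions, and the job needs the spark-sql-kafka connector package at submit time.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Assumed event schema; a real pipeline would derive this from a schema registry.
    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("occurred_at", TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # assumption
        .option("subscribe", "app-events")                  # assumption
        .load()
        .select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    query = (
        events.writeStream.format("parquet")
        .option("path", "s3://example-bucket/events/")                    # assumption
        .option("checkpointLocation", "s3://example-bucket/checkpoints/") # required for recovery
        .outputMode("append")
        .start()
    )
    query.awaitTermination()
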
Good to Have Skills
  • Knowledge Graph Technologies: Graph DB, OWL, SPARQL.
  • Machine Learning Frameworks: TensorFlow, PyTorch, scikit-learn, XGBoost.
  • Model Deployment: Flask, FastAPI, Docker, Kubernetes, TensorFlow Serving, TorchServe (see the sketch after this list).
  • Exposure to Databricks and workflow orchestration tools.
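
For the model-deployment item above, a minimal FastAPI serving sketch could look like the following; the model file, feature shape, and endpoint path are illustrative assumptions.

    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = joblib.load("model.joblib")  # assumption: a scikit-learn model saved with joblib


    class Features(BaseModel):
        values: list[float]


    @app.post("/predict")
    def predict(features: Features) -> dict:
        # scikit-learn estimators expect a 2D array: one row per sample.
        prediction = model.predict([features.values])
        return {"prediction": prediction.tolist()}

Run locally with uvicorn (for example, uvicorn app:app, assuming the file is named app.py) and package the service in a Docker image for Kubernetes deployment.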