
Data Engineer Spark and Kafka

Job in Charlotte, Mecklenburg County, North Carolina, 28245, USA
Listing for: SANS
Full Time position
Listed on 2026-01-16
Job specializations:
  • Software Development
    Data Engineer, Software Engineer
Salary/Wage: USD 63.00 per hour
Job Description & How to Apply Below
Position: Data Engineer with Spark and Kafka

Please send your resume ONLY if you can interview in person in Charlotte, NC.
Only direct candidates are accepted; the rate is $63/hour on W2 only.

Job Title:
Data Engineer with Python, Spark and Kafka

Location:
Phoenix, Arizona

Work Schedule: Hybrid (3 days in office, 2 days remote)

Summary

The Software Engineer, Data will be responsible for designing, developing, and maintaining robust data solutions that enable efficient storage, processing, and analysis of large-scale datasets. This role focuses on building scalable data pipelines, optimizing data workflows, and ensuring data integrity across systems. The engineer collaborates closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to translate requirements into technical solutions that support strategic decision-making. A successful candidate will have strong programming skills, deep knowledge of data architecture, and experience with modern cloud and big data technologies, while adhering to best practices in security, governance, and performance optimization.

Principal Duties and Responsibilities
  • Design and implement scalable, reliable, and secure data solutions.
  • Develop data models and schemas optimized for performance and maintainability, drawing on strong SQL and data-modeling skills.
  • Build and maintain ETL/ELT pipelines to ingest, transform, and load data from multiple sources; optimize workflows for efficiency and cost‑effectiveness.
  • Collaborate closely with data analysts and business teams to understand their requirements.
  • Build frameworks for data ingestion pipelines that handle various data sources, including batch and real‑time data (a minimal sketch follows this list).
  • Participate in technical decision‑making.
  • Design, develop, test, deploy, and maintain data processing pipelines.
  • Design and build scalable, reliable infrastructure with a strong focus on quality, security, and privacy techniques.
  • Communicate complex technical concepts effectively to diverse audiences.
  • Understand business KPIs and translate them into technical solutions.
  • Create detailed technical documentation covering processes, frameworks, best practices, and operational support.
  • Provide constructive feedback during code reviews.
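To make the real-time ingestion bullet concrete: this is the kind of pipeline Spark Structured Streaming's Kafka source covers. Below is a minimal Python sketch, assuming a local broker at localhost:9092, a hypothetical topic named "events", and a simple JSON payload; the topic, paths, and schema are illustrative and not part of this posting (the spark-sql-kafka connector package must also be on the Spark classpath).

# Minimal sketch: stream JSON events from Kafka into Parquet with Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Assumed shape of the incoming JSON messages; adjust to the real payload.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", LongType()),
])

# Read the Kafka topic as a streaming DataFrame ("localhost:9092" and
# "events" are placeholders for the real brokers and topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; decode the value column and parse the JSON payload.
events = (
    raw.select(F.col("value").cast("string").alias("json"))
       .select(F.from_json("json", schema).alias("e"))
       .select("e.*")
)

# Write the parsed stream to Parquet; the checkpoint directory lets Spark
# recover and avoid duplicate output files. Both paths are placeholders.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/tmp/events")
    .option("checkpointLocation", "/tmp/events_chk")
    .outputMode("append")
    .start()
)
query.awaitTermination()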
Position Specifications
  • Bachelor’s degree in Computer Science, Computer Engineering, or Information Systems Technology.
  • 5+ years of overall experience in software development using Python.
  • Strong experience with SQL and database technologies (relational and NoSQL).
  • Hands‑on experience with data pipeline frameworks (Apache Spark, Airflow, Kafka); see the orchestration sketch after this list.
  • Familiarity with cloud platforms (Google Cloud Platform) and data services.
  • Knowledge of data modeling, ETL/ELT processes, and performance optimization.
  • Strong analytical skills and strong communication skills, both verbal and written.
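For the orchestration framework mentioned above, here is a minimal Airflow sketch, assuming Airflow 2.4+; the DAG id, schedule, and extract/load callables are hypothetical placeholders, not details from this posting.

# Minimal sketch: a daily extract-then-load DAG in Airflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull rows from a source system.
    print("extracting")


def load():
    # Placeholder: write transformed rows to the warehouse.
    print("loading")


with DAG(
    dag_id="daily_etl",            # hypothetical DAG id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",             # run once per day
    catchup=False,                 # do not backfill past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task      # load runs only after extract succeeds

In practice each callable would be replaced by real ingestion and load logic (or by provider operators), but the dependency arrow shows the core idea: Airflow schedules the pipeline and enforces task ordering.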