
Data Engineer (Op AI)

Job in San Diego, San Diego County, California, 92189, USA
Listing for: SAIC
Full Time position
Listed on 2026-03-03
Job specializations:
  • Engineering
    Data Engineer, Data Science Manager
  • IT/Tech
    Data Engineer, Data Science Manager, Data Analyst
Job Description & How to Apply Below
Position: Data Engineer (Op AI)
Job Location: SAN DIEGO, CA, US

Date Posted:

Category: Information Technology

Subcategory: Database Engr

Schedule: Full-time

Shift: Day Job

Travel: No

Minimum Clearance Required: Top Secret

Clearance Level Must Be Able to Obtain: TS/SCI

Potential for Remote Work: No

Description

SAIC is seeking a Data Engineer in support of NAVWAR's Naval Operational Architecture (NOA) program in San Diego, CA. The Data Engineer will support a transformational, Navy-led, network-centric initiative that converges existing systems, automating the correlation of multiple ISR data streams and dramatically reducing the "sensor-to-shooter" timeline. This is a recently awarded contract, funded for five years.

Work is performed on site in San Diego, CA.

Please view our program page for more information and to view all open positions available.

Job Duties:
  • Conduct data pre-processing, exploratory data analysis, and data pipeline engineering to ensure performant and high-quality data output.
  • Conduct thorough testing and validation of data pipelines and analytics to ensure accuracy, reliability, and robustness.
  • Design or normalize data to common standards to support interoperability and analytical workflows.
  • Develop and deploy data pipelines and analytics in real-world applications.
  • Work with multiple data formats, including CSV, JSON, XML, Parquet, and ORC.
  • Perform exploratory data analysis, algorithm development, and testing.
  • Deploy, monitor, and improve data pipelines for operational environments.
  • Implement event streaming pipelines using Apache Kafka, RabbitMQ, or ZeroMQ.
  • Collaborate with analytics, engineering, and mission teams to ensure effective data integration and output quality.
  • Stay current with emerging trends in data engineering, distributed systems, and modern data architecture.
  • Document data processes, pipeline structures, and engineering best practices.
Qualifications

Education and Years of Experience Requirements:
  • Bachelor's degree in Computer Science, Data Science, Engineering, Mathematics, Statistics, or a related STEM field and eight (8) years of AI/ML experience in defense or intelligence environments.
Citizenship and Clearance requirements:
  • No dual citizenship.
  • Active Top Secret clearance required; TS/SCI clearance preferred.
Required Skills and Experience:
  • Experience as a business analyst, data analyst, data scientist, data engineer, database administrator, geospatial analyst/engineer, machine learning engineer, or software engineer.
  • Strong programming skills in Python.
  • Experience designing or normalizing data to common standards.
  • Experience with data pipeline development and real-world deployment.
  • Experience with multiple data formats: CSV, JSON, XML, Parquet, ORC.
  • Familiarity with event streaming platforms (Kafka, RabbitMQ, ZeroMQ).
  • Experience with exploratory data analysis, algorithm development, and testing.
  • Experience deploying, monitoring, and improving data pipelines.
  • Strong problem-solving and analytical skills.
  • Excellent communication skills and ability to work effectively in a collaborative team environment.
  • Familiarity with data pipeline frameworks and libraries (Airbyte, Apache Airflow, dbt, Apache Iceberg, Snowflake).
  • Experience retrieving and managing GIS data (ArcGIS, PostGIS).
  • Programming skills in Go or Rust.
  • Expertise with Elasticsearch, Redis, S3, PostgreSQL, or similar data stores.
  • Experience with AWS native data services: EFS, RDS, S3, SNS, SQS.
  • Experience with distributed computing and parallel processing (AWS Lambda, DASK, Spark).
  • Familiarity with cloud platforms (AWS, Azure) and containerization (Docker, Kubernetes).
  • Understanding of cybersecurity principles in the context of data applications.
  • Ability to safely carry tools, equipment, and materials aboard ship,…