
Forward Deployed Engineer - AI & Supply Chain

Job in Houston, Harris County, Texas, 77246, USA
Listing for: Bechtel Global Corporation
Part Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Data Engineer, Systems Engineer, Cloud Computing
Job Description & How to Apply Below

Requisition

  • Relocation Authorized: National - Family
  • Telework Type: Part-Time Telework
  • Work Location: Houston, TX
Extraordinary teams building inspiring projects:

Since 1898, we have helped customers complete more than 25,000 projects in 160 countries on all seven continents that have created jobs, grown economies, improved the resiliency of the world's infrastructure, increased access to energy, resources, and vital services, and made the world a safer, cleaner place.

Differentiated by the quality of our people and our relentless drive to deliver the most successful outcomes, we align our capabilities to our customers' objectives to create a lasting positive impact. We serve the Infrastructure; Nuclear, Security & Environmental; Energy; Mining & Metals; and Manufacturing and Technology markets. Our services span from initial planning and investment through start-up and operations.

Core to Bechtel is our Vision, Values and Commitments. They are what we believe, what customers can expect, and how we deliver. Learn more about our extraordinary teams building inspiring projects in our Impact Report.

Job Summary:

The Forward Deployed Engineer (FDE) operates at the intersection of advanced data engineering and real-world business execution. Embedded with operational teams, the FDE uncovers gaps, translates ambiguous needs into clear technical plans, and delivers production-grade solutions that measurably improve throughput, reliability, and efficiency. This is not a "back-office" engineering role; the FDE works on the front lines with stakeholders in operations, supply chain, finance, customer success, or project delivery, side by side, to understand how work happens, why bottlenecks occur, and which decisions need better data and automation.

Major Responsibilities:

Designs and implements end-to-end data products that connect disparate systems (ERP, MES, CRM, ticketing tools, IoT/telemetry, logistics platforms, and third-party APIs) into governed, trusted datasets and real-time event streams.

Builds scalable pipelines using modern data stack patterns (batch + streaming), applying strong engineering practices (testing, CI/CD, observability, performance tuning), and modeling business entities in a way that enables reliable analytics and operational applications. Where appropriate, layers decision support and intelligent workflows on top, such as anomaly detection, recommendations, or AI-assisted triage, always grounded in security, auditability, and business outcomes.
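The decision-support layer described above can start very simply. As one illustration (not a Bechtel tool, and all names are hypothetical), a rolling z-score check over a throughput metric is a minimal form of anomaly detection:

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_anomalies(values, window=5, threshold=2.0):
    """Flag indices whose value deviates strongly from the trailing window."""
    history = deque(maxlen=window)  # last `window` readings
    anomalies = []
    for i, v in enumerate(values):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(v - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(v)
    return anomalies

# Steady throughput readings with one spike at index 6
readings = [100, 101, 99, 100, 102, 100, 180, 101, 100]
print(rolling_zscore_anomalies(readings))  # → [6]
```

A production version would add persistence, alert routing, and tuning, but the core logic is this small.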

Writes production code, whiteboards with business owners at all levels of the company, runs discovery sessions, defines measurable success metrics, prototypes quickly, and hardens solutions into durable systems with documentation and enablement so teams can sustain and extend what is built. Manages trade-offs across time, scope, and complexity, prioritizing pragmatic delivery while still building the right foundations for scale.

Embeds with business teams to identify operational pain points, define success metrics, and prioritize use cases. Designs and builds scalable batch and streaming pipelines (ingestion → transformation → modeling → serving). Implements real-time/event-driven architectures using streaming tools (e.g., Kafka/Event Hubs/Kinesis) with SLAs. Creates strong data models / semantic layers representing core business entities and workflows (orders, assets, shipments, etc.). Delivers operational applications and decision workflows (dashboards, workflows, write-backs, audits) that drive action.
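The entity-modeling work above typically means mapping messy source rows onto a canonical, typed record. A minimal sketch for a shipment entity, with illustrative field names (any real source schema would differ):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Shipment:
    """Canonical shipment entity assembled from heterogeneous source records."""
    shipment_id: str
    order_id: str
    status: str
    promised_date: date

def normalize(raw: dict) -> Shipment:
    """Map a raw source row (e.g., an ERP extract) onto the canonical model."""
    return Shipment(
        shipment_id=str(raw["ship_no"]).strip(),
        order_id=str(raw["ord_no"]).strip(),
        status=raw.get("status", "UNKNOWN").upper(),
        promised_date=date.fromisoformat(raw["promise_dt"]),
    )

row = {"ship_no": " S-1001 ", "ord_no": "O-7",
       "status": "in_transit", "promise_dt": "2026-03-15"}
print(normalize(row))
```

Downstream dashboards and write-backs then depend on this one trusted shape rather than on each source system's quirks.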

Ensures reliability and governance: testing, lineage, access controls, data quality checks, monitoring, and incident response. Collaborates with security/compliance/IT to meet enterprise requirements for privacy, auditability, and controls. Documents solutions and enables users/engineers through playbooks, training, and repeatable patterns.

Education and Experience Requirements:

Bachelor's degree in CS/Engineering/STEM (or equivalent practical experience) plus at least 5 years in software engineering, data engineering, or platform engineering delivering production systems.

Required Knowledge and Skills:

Strong coding ability in Python and SQL; experience with Spark/PySpark or equivalent distributed processing. Proven experience building streaming and near-real-time data solutions (Kafka/Event Hubs/Kinesis or similar). Experience integrating data from APIs, databases, files, and operational systems; handling messy/heterogeneous data. Solid understanding of data modeling, schema evolution, incremental processing, and performance optimization.
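Incremental processing, one of the skills listed above, usually means handling only records newer than a stored watermark on each run. A hedged pure-Python sketch of the pattern (record shapes are illustrative; a real pipeline would persist the watermark in state storage):

```python
def process_increment(records, watermark):
    """Return records strictly newer than `watermark`, plus the new watermark.

    `records` is a list of (event_time, payload) tuples. Records at or
    before the watermark were handled by a previous run and are skipped.
    """
    fresh = [r for r in records if r[0] > watermark]
    new_watermark = max((r[0] for r in fresh), default=watermark)
    return fresh, new_watermark

batch = [(1, "a"), (3, "b"), (5, "c")]
fresh, wm = process_increment(batch, watermark=3)
print(fresh, wm)  # → [(5, 'c')] 5 : only the newest record is reprocessed
```

The same idea underlies incremental models in warehouse tooling and offset tracking in streaming consumers.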

Experience with cloud data ecosystems (Azure/AWS/GCP) and modern storage/compute patterns (lakehouse/warehouse). Strong engineering discipline: version control, testing, CI/CD, observability/monitoring, and production support. Experience delivering solutions on Palantir Foundry or similar enterprise data platforms (e.g., Databricks, Snowflake), including production pipelines, governance, and operational applications. Familiarity with Foundry…
