AI Runtime Engineer
Listed on 2026-02-28
Software Development
AI Engineer, Machine Learning / ML Engineer
EnCharge AI is a leader in advanced AI hardware and software systems for edge-to-cloud computing. EnCharge’s robust and scalable next-generation in-memory computing technology provides orders‑of‑magnitude higher compute efficiency and density compared to today’s best‑in‑class solutions. The high-performance architecture is coupled with seamless software integration and will enable the immense potential of AI to be accessible in power, energy, and space‑constrained applications.
EnCharge AI launched in 2022 and is led by veteran technologists with backgrounds in semiconductor design and AI systems.
EnCharge AI is seeking an AI Runtime Engineer to develop and optimize the execution stack for our next-generation AI accelerator. In this role, you will work on low‑latency, high-performance runtime software that enables efficient execution of deep learning models on specialized hardware. You will collaborate with hardware, compiler, and AI framework teams to deliver optimized AI inference and training performance across cloud and edge environments.
Responsibilities
- Develop and optimize the AI runtime software stack for executing deep learning workloads on AI accelerators.
- Implement task scheduling, memory management, and kernel execution strategies for efficient computation.
- Optimize data movement between host and device using PCIe, DMA, and shared memory.
- Design and implement high-performance APIs for AI inference frameworks such as OpenVINO, ONNX Runtime, and vLLM.
- Work on graph execution optimizations, including kernel fusion, pipelining, tensor tiling, and caching.
- Integrate runtime components with AI compilers (LLVM, MLIR, XLA, TVM) for optimized execution.
- Ensure scalability and reliability of the AI runtime for cloud-based and edge AI deployments.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.
- 3+ years of experience in developing low-level runtime software for AI accelerators, GPUs, or HPC systems.
- Strong proficiency in C/C++ and low-level systems programming.
- Deep understanding of task scheduling, concurrency, and memory hierarchy.
- Experience with hardware-aware optimizations and dataflow architectures.
- Familiarity with deep learning execution frameworks (ONNX Runtime, TensorRT, TVM, OpenVINO).
- Experience with low-latency, high-throughput workload execution for AI models.
- Strong debugging and profiling skills for optimizing AI execution performance.
- Exposure to AI model deployment pipelines (Triton, TensorFlow Serving).
EnCharge AI is an equal employment opportunity employer in the United States.