
Senior AI Engineer (Agentic Systems & Inference) - Onsite

Remote / Online - Candidates ideally in Saudi Arabia
Listing for: COGNNA
Remote/Work from Home position
Listed on 2026-01-11
Job specializations:
  • Software Development
    AI Engineer, Software Engineer
Salary/Wage Range or Industry Benchmark: 200,000 – 300,000 SAR yearly
Job Description
Position: Senior AI Engineer (Agentic Systems & Inference) - Onsite

As a Senior AI Engineer, you will be the primary architect of Cognna’s autonomous agent reasoning engine and the high‑scale inference infrastructure that powers it. You will build production‑grade reasoning systems that proactively plan, use tools, and collaborate. You will own the full lifecycle of our specialized security models, from domain‑specific fine‑tuning to architecting distributed, high‑throughput inference services that serve as the core security intelligence of our platform.

Key Responsibilities
    Agentic Architecture & Multi‑Agent Coordination
    • Autonomous Orchestration:
      Design stateful, multi‑agent systems using frameworks like Google ADK.
    • Protocol‑First Integration:
      Architect and scale MCP servers and A2A interfaces, ensuring a decoupled and extensible agent ecosystem.
    • Cognitive Optimization:
      Develop lean, high‑reasoning microservices for deep reasoning, optimizing context token usage to maintain high planning accuracy with minimal latency.
    Model Adaptation & Performance Engineering
    • Specialized Fine‑Tuning:
      Lead the architectural strategy for fine‑tuning open‑source and proprietary models on massive cybersecurity‑specific telemetry.
    • Advanced Training Regimes:
      Implement Quantization‑Aware Training (QAT) and manage Adapter‑based architectures to enable the dynamic loading of task‑specific specialists without the overhead of full‑model swaps.
    • Evaluation Frameworks:
      Engineer rigorous, automated evaluation harnesses (including Human annotations and AI‑judge patterns) to measure agent groundedness and resilience against the Security Engineer’s adversarial attack trees.
    Production Inference & MLOps at Scale
    • Distributed Inference Systems:
      Architect and maintain high‑concurrency inference services using engines like vLLM, TGI, or TensorRT‑LLM (a minimal serving sketch follows this list).
    • Infrastructure Orchestration:
      Own the GPU/TPU resource management strategy.
    • Observability & Debugging:
      Implement deep‑trace observability for non‑deterministic agentic workflows, providing the visibility needed to debug complex multi‑step reasoning failures in production.
    Advanced RAG & Semantic Intelligence
    • Hybrid Retrieval Architectures:
      Design and optimize RAG pipelines involving graph‑like data structures, agent‑based knowledge retrieval, and semantic search.
    • Memory Management:
      Architect episodic and persistent memory systems for agents, allowing for long‑running security investigations that persist context across sessions.
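
To make the Distributed Inference Systems responsibility concrete, here is a minimal serving sketch, assuming vLLM's offline LLM/SamplingParams API; the model name, quantization setting, parallelism degree, and example prompts are illustrative assumptions, not Cognna's actual configuration.

# Minimal batched-inference sketch, assuming vLLM's offline API.
# "example-org/security-llm-8b-awq" is a hypothetical AWQ-quantized checkpoint;
# quantization="awq" is only valid when the weights are actually AWQ-quantized.
from vllm import LLM, SamplingParams

llm = LLM(
    model="example-org/security-llm-8b-awq",  # hypothetical model name
    quantization="awq",                        # weight-only quantization format
    tensor_parallel_size=2,                    # shard the model across 2 GPUs
    gpu_memory_utilization=0.90,               # leave headroom for the KV cache
)

params = SamplingParams(temperature=0.2, top_p=0.9, max_tokens=256)
prompts = [
    "Summarize this alert: repeated failed SSH logins from a single external IP.",
    "Classify severity: outbound beaconing to a newly registered domain.",
]

for request_output in llm.generate(prompts, params):
    print(request_output.outputs[0].text.strip())

In production the same engine is typically run behind an HTTP serving endpoint and scaled per GPU pool rather than invoked offline as above.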
    What We Offer
    • 💰 Competitive Package – Salary + equity options + performance incentives
    • 🧘 Flexible & Remote – Work from anywhere with an outcomes‑first culture
    • 🤝 Team of Experts – Work with designers, engineers, and security pros solving real‑world problems
    • 🚀 Growth‑Focused – Your ideas ship, your voice counts, your growth matters
    • 🌍 Global Impact – Build products that protect critical systems and data

    Experience & Education
    • 5-7+ years in AI/ML Engineering or Backend Systems.
    • Must have contributed to a large‑scale AI/ML inference service in production.
    • M.S. or B.S. in Computer Science/AI is preferred.
    • Inference Orchestration: KV‑cache management, quantization formats like AWQ/FP8, and distributed serving across multi‑node GPU clusters.
    • Agentic Development:
      Expert in building autonomous systems using Google ADK, LangGraph, or LangChain, and experienced with AI observability frameworks like LangSmith or Langfuse (a minimal agent‑graph sketch follows this list).
    • Hands‑on experience building AI applications with MCP and A2A protocols.
    • Cloud AI Native:
      Proficiency in Google Cloud (Vertex AI), including custom training pipelines, high‑performance prediction endpoints, and the broader MLOps suite.
    • Programming:
      Strong Python, plus experience with high‑performance backends (Go/C++) for inference optimization.
    • CI/CD:
      You are comfortable working in a Kubernetes‑native environment.
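
As a rough illustration of the agentic‑development requirement, a stateful investigation loop can be expressed as a small graph. This is a minimal sketch assuming LangGraph's StateGraph/END API; the AgentState fields and the investigate node are placeholders, not a real reasoning or tool‑calling step.

# Minimal stateful agent-graph sketch, assuming LangGraph's StateGraph API.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    question: str
    findings: list
    done: bool

def investigate(state: AgentState) -> AgentState:
    # Placeholder step: record one "finding" per pass and stop after three passes.
    # A real agent node would call a model and security tools here.
    state["findings"].append(f"checked: {state['question']}")
    state["done"] = len(state["findings"]) >= 3
    return state

def should_continue(state: AgentState):
    # Route back into the node until the investigation is marked done.
    return END if state["done"] else "investigate"

graph = StateGraph(AgentState)
graph.add_node("investigate", investigate)
graph.set_entry_point("investigate")
graph.add_conditional_edges("investigate", should_continue)
app = graph.compile()

result = app.invoke({"question": "suspicious login burst", "findings": [], "done": False})
print(result["findings"])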