
Artificial Intelligence Engineer

Job in Charlotte, Mecklenburg County, North Carolina, 28245, USA
Listing for: Rivago Infotech Inc
Full Time position
Listed on 2026-03-11
Job specializations:
  • Software Development
    AI Engineer
Salary/Wage Range: USD 60,000 - 80,000 per year
Job Description & How to Apply Below

We are seeking an AI Platform Engineer to design and build the foundational components that power enterprise-scale GenAI applications.

This includes data guardrails, model safety tooling, observability pipelines, evaluation harnesses, and standardized logging/monitoring frameworks. This role is critical for enabling safe, reliable, and compliant AI development across multiple use cases, teams, and business units. The goal is to create the common platform services that the AI team will build upon.

Key Responsibilities
  • Guardrails, Safety & Governance
    • Design and implement data guardrail frameworks (pre-processing, redaction, PII/PHI filtering, DLP)
    • Build "Model Armor" safety components: policy engines, classifiers, and DLP APIs/safety models
    • Collaborate with Security, Compliance, and Data Privacy teams to ensure frameworks meet enterprise governance requirements
  • Observability Frameworks
    • Build and maintain observability pipelines using tools like Arize AI (tracing, quality metrics, dataset drift/hallucination tracking, embedding monitoring)
    • Define and enforce platform-wide standards for:
      • Token usage and cost monitoring
      • Latency and reliability metrics
    • Provide reusable SDKs or middleware for engineering teams to adopt observability with minimal friction
    • Provide safety flags, user context and attribution, and monitoring dashboards for performance, cost, anomalies, errors, and safety events
    • Implement alerting and SLOs/SLIs for LLM inference systems
  • Evaluation Infrastructure
    • Architect and maintain evaluation harnesses for GenAI systems, including:
      • RAG evaluation (faithfulness, relevance, hallucination risk)
      • Human-in-the-loop review workflows
      • Automated eval pipelines integrated into CI/CD
    • Support frameworks such as RAGAS, G‑Eval, rubric scoring, pairwise comparisons, and test case generation
    • Build reusable tooling for teams to write, run, and track model evaluations
    • Develop shared libraries, APIs, and services for embedding pipelines, model wrappers, common data loaders, and document preprocessing
    • Drive consistency across teams with standards, reference architectures, and best practices; review system designs across use cases to ensure alignment to platform patterns
    • Partner with AI engineers, product teams, and data scientists to understand cross-cutting needs and convert into platform solutions
    • Create documentation, onboarding guides, examples, and developer tooling; provide internal training (brown bags, workshops) on guardrails, observability, and evaluation frameworks
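As a minimal sketch of the kind of pre-processing data guardrail described above (the pattern set, placeholder format, and `redact_pii` function are illustrative assumptions, not the actual platform components):

```python
import re

# Illustrative guardrail: redact common PII patterns (emails, US SSNs,
# phone numbers) before text reaches a model, and emit safety flags for
# downstream observability. Pattern names are hypothetical examples.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace matched PII with typed placeholders; return text and flags."""
    flags = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            flags.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, flags

redacted, flags = redact_pii("Reach me at jane.doe@example.com or 704-555-0123.")
```

A production version would typically swap the regexes for a DLP API or safety-model call, but the shape (transform plus flags) is the same.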
Required Qualifications
Technical Skills
  • 5-10+ years of software engineering or ML infrastructure experience
  • Strong Python engineering fundamentals (FastAPI, async, typing/Pydantic, testing)
  • Experience with model safety/guardrails approaches (prompt injection defense, PII redaction, toxicity filters, policy enforcement)
  • Hands-on experience with Arize AI, LangSmith, or similar LLM observability platforms
  • Experience creating evaluation frameworks using RAGAS, G‑Eval, or custom rubric systems
  • Strong familiarity with vector databases (Pinecone, Weaviate, Milvus), embeddings, and retrieval pipelines
  • Solid understanding of LLM architectures, tokenization, embeddings, context limits, and RAG patterns
  • Experience with cloud platforms (GCP preferred), Kubernetes/GKE, containers, and CI/CD
  • Strong understanding of security, governance, DLP, data privacy, RBAC, and enterprise compliance requirements
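A custom rubric-style evaluation harness of the kind mentioned above might be sketched as follows (the `Criterion`/`evaluate` names and the toy criteria are assumptions for illustration, not part of RAGAS or G-Eval):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical rubric harness: each criterion scores a (question, answer,
# context) triple in [0.0, 1.0], and evaluate() collects per-criterion scores.

@dataclass
class Criterion:
    name: str
    score_fn: Callable[[str, str, str], float]

def evaluate(question: str, answer: str, context: str,
             rubric: list[Criterion]) -> dict[str, float]:
    """Run every rubric criterion and return a name -> score mapping."""
    return {c.name: c.score_fn(question, answer, context) for c in rubric}

# Toy criteria: keyword grounding as a crude stand-in for faithfulness,
# and a non-empty check as a trivial sanity criterion.
rubric = [
    Criterion("faithfulness",
              lambda q, a, ctx: 1.0 if all(w in ctx for w in a.split()) else 0.0),
    Criterion("non_empty", lambda q, a, ctx: 1.0 if a.strip() else 0.0),
]

scores = evaluate("Capital of France?", "Paris",
                  "Paris is the capital of France.", rubric)
```

Real harnesses replace the lambdas with LLM-as-judge or statistical scorers and log results into CI/CD, but the rubric-of-criteria structure carries over.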
Soft Skills
  • Strong documentation and communication skills
  • Ability to influence engineering teams and standardize best practices
  • Comfortable working across multiple stakeholders—platform, security, ML engineering, product
Nice to Have
  • Experience with LangChain/LangGraph or LlamaIndex orchestration
  • Experience with Guardrails.ai, Rebuff, Protect AI, or similar LLM security tooling
  • Experience with GCP Vertex AI pipelines, Model Monitoring, and Vector Search
  • Familiarity with knowledge graphs, grounding models, fact‑checking models
  • Building SDKs or developer frameworks adopted across multiple teams
  • On‑prem or hybrid AI deployment experience