
Senior Data Engineer

Job in San Jose, Santa Clara County, California, 95199, USA
Listing for: Avaya
Full Time position
Listed on 2025-12-07
Job specializations:
  • IT/Tech
    Data Engineer, Data Security
Job Description & How to Apply Below

About Avaya

Avaya is an enterprise software leader that helps the world’s largest organizations and government agencies forge unbreakable connections.

The Avaya Infinity™ platform unifies fragmented customer experiences, connecting the channels, insights, technologies, and workflows that together create enduring customer and employee relationships.

We believe success is built through strong connections – with each other, with our work, and with our mission. At Avaya, you'll find a community that values your contributions and supports your growth every step of the way.

Overview

You’ll build and scale the real-time and batch data platform that powers a large enterprise contact center solution. Our products demand ultra-low-latency decisioning for live interactions and cost-efficient big-data analytics for historical insights. We’re primarily on Azure today and expanding to GCP and AWS. Data is the backbone for our AI features and product intelligence.

Primary charter: complex contact center analytics and operational intelligence for an AI-enabled enterprise contact center. Our vision is a flexible, AI-enabled data platform that unifies contact center KPIs, customer/business outcomes, and AI quality/performance metrics, and applies AI pervasively to deliver advanced features that help users easily combine rich contact center data with business data and AI performance monitoring to drive decisions end-to-end.

What you’ll do
  • Design, build, and operate low-latency streaming pipelines (Kafka, Spark Structured Streaming) and robust batch ETL/ELT on Databricks Lakehouse (see the streaming sketch after this list).
  • Establish reliable orchestration and dependency management (Airflow), with strong SLAs and on-call readiness for business-critical data flows (an orchestration sketch follows this list).
  • Model, optimize, and document curated datasets and interfaces that serve analytics, product features, and AI workloads.
  • Implement data quality checks, observability, and backfills; drive root-cause analysis and incident prevention.
  • Partner with application teams (Go/Java), analytics, and ML/AI to ship data products into production.
  • Build and maintain datasets and services that power RAG pipelines and agentic AI workflows (tool-use/function calling); see the chunking sketch after this list.
  • When Spark/Databricks isn’t optimal, design and operate custom processors/services in Go to meet strict latency or specialized transformation requirements.
  • Instrument prompt/response and token usage telemetry to support LLMOps evaluation and cost optimization; provide datasets for labeling and golden sets (a telemetry-record sketch follows this list).
  • Improve performance and cost (storage/compute), review code, and raise engineering standards.
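
To give a flavor of the streaming work, here is a minimal PySpark Structured Streaming sketch that reads interaction events from Kafka and appends them to a Delta table. The broker address, topic, checkpoint path, and table name are placeholders for illustration, not details of Avaya's actual platform.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Requires the spark-sql-kafka connector on the cluster (bundled on Databricks).
    spark = SparkSession.builder.appName("interactions-stream").getOrCreate()

    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")    # placeholder address
        .option("subscribe", "contact-center-interactions")  # placeholder topic
        .option("startingOffsets", "latest")
        .load()
    )

    # Kafka delivers key/value as binary; decode the payload for downstream parsing.
    events = raw.select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_ts"),
    )

    # Append to a Delta table; the checkpoint makes the stream restartable.
    query = (
        events.writeStream
        .format("delta")
        .option("checkpointLocation", "/chk/interactions")   # placeholder path
        .outputMode("append")
        .toTable("bronze.interactions")                      # placeholder table
    )
    query.awaitTermination()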
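
For the orchestration side, a minimal Airflow 2 DAG sketch (2.4+ style) with retries and an SLA, illustrating the kind of dependency management and on-call readiness described above. The DAG id, schedule, and task body are hypothetical.

    from datetime import datetime, timedelta
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_batch_etl():
        # Placeholder for the real work, e.g. triggering a Databricks job.
        print("running nightly ETL")

    default_args = {
        "owner": "data-eng",
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "sla": timedelta(hours=1),  # alert if the task runs longer than expected
    }

    with DAG(
        dag_id="contact_center_nightly_etl",
        start_date=datetime(2025, 1, 1),
        schedule="0 2 * * *",  # nightly at 02:00
        catchup=False,
        default_args=default_args,
    ) as dag:
        etl = PythonOperator(task_id="nightly_etl", python_callable=run_batch_etl)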
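
For the RAG datasets, a small sketch of one common preparation step: splitting transcripts into overlapping chunks for embedding and retrieval. The chunk sizes are illustrative defaults, not the platform's settings.

    def chunk_transcript(text: str, max_chars: int = 1000, overlap: int = 100) -> list[str]:
        """Split a transcript into overlapping chunks for embedding/retrieval."""
        chunks = []
        start = 0
        while start < len(text):
            end = min(start + max_chars, len(text))
            chunks.append(text[start:end])
            if end == len(text):
                break
            start = end - overlap  # overlap preserves context across chunk boundaries
        return chunks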
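
And for LLMOps telemetry, a sketch of a per-call usage record supporting evaluation and cost tracking. The field names and cost formula are assumptions for illustration; in production the record would be emitted to the event pipeline rather than printed.

    import json
    import time
    import uuid

    def log_llm_call(prompt: str, response: str, prompt_tokens: int,
                     completion_tokens: int, model: str,
                     unit_cost_per_1k: float) -> dict:
        """Build one telemetry record per LLM call (hypothetical schema)."""
        record = {
            "call_id": str(uuid.uuid4()),
            "ts": time.time(),
            "model": model,
            "prompt": prompt,
            "response": response,
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "est_cost_usd": (prompt_tokens + completion_tokens) / 1000 * unit_cost_per_1k,
        }
        print(json.dumps(record))  # stand-in for publishing to the telemetry stream
        return record
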
Security & Compliance
  • Design data solutions aligned to enterprise security, privacy, and compliance requirements (e.g., SOC 2, ISO 27001, GDPR/CCPA as applicable), partnering with Security/Legal.
  • Implement RBAC/ABAC and least-privilege access; manage service principals, secrets, and key rotation; enforce encryption in transit and at rest.
  • Govern sensitive data: classification, PII handling, masking/tokenization, retention/archival, lineage, and audit logging across pipelines and storage (see the tokenization sketch after this list).
  • Build observability for data security and quality; support incident response, access reviews, and audit readiness.
  • Embed controls in CI/CD (policy checks, dependency vulnerability scanning) and ensure infra‑as‑code adheres to guardrails.
  • Partner with security engineering on penetration tests, threat modeling, and red‑team exercises; remediate findings and document controls.
  • Contribute to compliance audits (e.g., SOC 2/ISO 27001) with evidence collection and continuous control monitoring; support DPIAs/PIAs where required.
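
As one illustration of masking/tokenization, a Python sketch that deterministically tokenizes a PII value with an HMAC (so joins on the token remain possible without exposing the raw value) and partially masks an email. The key handling here is a placeholder; a real deployment would fetch the key from a secrets manager and rotate it.

    import hashlib
    import hmac

    SECRET_KEY = b"rotate-me"  # placeholder; load from a secrets manager in practice

    def tokenize_pii(value: str) -> str:
        """Deterministic token: same input yields the same token, enabling joins."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    def mask_email(email: str) -> str:
        """Keep the domain for analytics; hide the local part."""
        local, _, domain = email.partition("@")
        return f"{local[:1]}***@{domain}" if domain else "***"

    # tokenize_pii("+1-408-555-0100") -> stable 64-char hex token
    # mask_email("jane.doe@example.com") -> "j***@example.com"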
Qualifications
  • 6+ years building production‑grade data pipelines at scale (streaming and batch).
  • Deep proficiency in Python and SQL; strong Spark experience on Databricks (or similar).
  • Advanced SQL: window functions, CTEs, partitioning/Z-ordering, query planning and tuning in lakehouse environments (a worked example follows this list).
  • Hands‑on with Kafka (or equivalent) and an orchestrator (Airflow preferred).
  • Strong data modeling skills and performance tuning for low latency and high throughput.
  • Production mindset: SLAs, monitoring, alerting, CI/CD, and on‑call participation.
  • Proficient using AI…
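
To make the advanced-SQL bar concrete, a short PySpark example combining a CTE with a window function to rank agents by average handle time per day. The table and column names are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical curated table: one row per handled interaction.
    top_agents = spark.sql("""
        WITH daily AS (
            SELECT
                agent_id,
                DATE(ended_at)           AS day,
                AVG(handle_time_seconds) AS avg_handle_time
            FROM silver.interactions
            GROUP BY agent_id, DATE(ended_at)
        )
        SELECT *
        FROM (
            SELECT
                daily.*,
                RANK() OVER (PARTITION BY day ORDER BY avg_handle_time) AS rnk
            FROM daily
        )
        WHERE rnk <= 10
    """)
    top_agents.show()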
Position Requirements
10+ Years work experience