
Data Architect

Job in Charlotte, Mecklenburg County, North Carolina, 28245, USA
Listing for: NearU
Full Time position
Listed on 2026-01-24
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager
Salary/Wage Range: 125,000 – 150,000 USD yearly
Job Description

NearU is a people‑centric, process‑driven, and technology‑enabled customer service platform dedicated to revolutionizing the home services industry by vastly improving the customer and employee experience.

Location: Charlotte Metro (On‑site)

Employment Type: Full‑time

Objective

Build and maintain a governed Snowflake + Databricks data platform where data quality, cleansing, lineage, KPI consistency, and observability are enforced by design, enabling reliable analytics and scalable AI adoption.

Key Responsibilities
  • Architect and maintain Snowflake as the enterprise analytical and AI data backbone
  • Design schemas and data models optimized for financial and operational KPIs, executive and self‑service analytics, and AI context datasets, features, and embeddings
  • Build and maintain Snowflake views that centralize KPI logic and support downstream BI, reporting, and AI workloads (see the first sketch following this list)
  • Implement and govern Streams, Tasks, Dynamic Tables, Snowpark, and secure data sharing
  • Optimize warehouse sizing, performance, and cost across BI, transformation, and AI workloads
  • Reverse‑engineer and document Databricks as the data processing, cleansing, and enrichment layer
  • Understand and document current medallion architecture (Bronze: raw, immutable ingestion; Silver: cleansed, standardized, validated datasets; Gold: analytics‑ and AI‑ready datasets)
  • Understand Spark‑based cleansing logic (deduplication, record survivorship, data standardization, normalization, schema enforcement, drift handling, null handling, outlier detection, and anomaly flags); see the second sketch following this list
  • Ensure reliable and governed handoff of curated data from Databricks into Snowflake
  • Define enterprise standards for data quality, completeness, and consistency
  • Design reusable cleansing and validation frameworks for structured and unstructured data
  • Implement automated data quality checks and scoring at each pipeline stage
  • Partner with analytics and business teams to resolve source system inconsistencies
  • Ensure analytics and AI datasets are accurate, explainable, and auditable
  • Partner with the Manager of Data Analytics and IT to define, document, and operationalize enterprise KPIs
  • Translate KPI definitions into governed, reusable Snowflake views for reporting, dashboards, and AI context
  • Ensure KPI logic is consistent, traceable to source data, and performant at scale
  • Design and govern ingestion and ELT pipelines from ERP, SaaS, APIs, and operational systems
  • Oversee pipelines built with Fivetran, dbt, and Python
  • Embed data cleansing, validation, and reconciliation into ingestion workflows
  • Ensure pipelines meet SLAs for analytics, finance, marketing, and AI workloads
  • Build monitoring and alerting for Snowflake and Databricks pipelines
  • Implement alerts for pipeline failures, SLA breaches, data freshness issues, and volume and schema anomalies (see the third sketch following this list)
  • Implement anomaly detection for KPI and dataset behavior that may impact reporting or AI outputs
  • Partner with IT and Analytics teams on incident response and escalation
  • Design data architectures that support RAG, prompt context datasets, embedding generation and lifecycle management, and AI inference feedback loops
  • Prepare cleansed, curated datasets that reduce hallucinations and improve AI reliability
  • Support integrations with Snowflake Cortex, Azure OpenAI, and related LLM platforms
  • Enforce data governance, access controls, and masking aligned with enterprise and AI usage policies
  • Ensure lineage and transparency from source systems to analytics and AI outputs
  • Collaborate with data engineers, analysts, and IT teams to deliver production‑ready data assets
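
To make the KPI‑view responsibility concrete, here is a minimal sketch of deploying a governed KPI view to Snowflake from Python. The view name, source table, and margin definition are illustrative assumptions, not taken from this posting; only the `snowflake-connector-python` calls (`connect`, `cursor`, `execute`) are real library APIs.

```python
# Hedged sketch: publishing a governed KPI view to Snowflake.
# View name, source table, and KPI formula are hypothetical placeholders.
import os

import snowflake.connector

KPI_VIEW_SQL = """
CREATE OR REPLACE VIEW analytics.kpi.gross_margin AS
SELECT
    entity_id,
    DATE_TRUNC('month', invoice_date)             AS month,
    SUM(revenue)                                  AS revenue,
    SUM(revenue - cost)                           AS gross_margin,
    SUM(revenue - cost) / NULLIF(SUM(revenue), 0) AS gross_margin_pct
FROM silver.finance.invoices
GROUP BY entity_id, DATE_TRUNC('month', invoice_date)
"""

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
)
conn.cursor().execute(KPI_VIEW_SQL)
```

Centralizing the formula in one view keeps the KPI logic consistent and traceable across dashboards, reports, and AI context datasets that query it.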
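Next, a minimal PySpark sketch of the Bronze‑to‑Silver cleansing logic described above, assuming hypothetical `bronze.customers` and `silver.customers` tables and column names; it covers deduplication with record survivorship, standardization, null handling, and a simple anomaly flag.

```python
# Hedged sketch: Bronze -> Silver cleansing in Spark.
# All table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("silver-cleansing").getOrCreate()

bronze = spark.table("bronze.customers")  # raw, immutable ingestion

# Record survivorship: keep the most recent record per business key.
latest = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
deduped = (
    bronze
    .withColumn("_rn", F.row_number().over(latest))
    .filter(F.col("_rn") == 1)
    .drop("_rn")
)

# Standardization, null handling, and a simple anomaly flag.
silver = (
    deduped
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .withColumn("state", F.upper(F.col("state")))
    .na.fill({"segment": "UNKNOWN"})
    .withColumn(
        "revenue_anomaly",
        (F.col("annual_revenue") < 0) | (F.col("annual_revenue") > 1e9),
    )
)

silver.write.mode("overwrite").saveAsTable("silver.customers")
```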
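Finally, a sketch of one automated freshness check of the kind the monitoring bullets describe, again with assumed connection parameters, table name, and SLA threshold; a production version would publish to an alerting channel rather than raising an exception.

```python
# Hedged sketch: data freshness check against a Snowflake table.
# Connection details, table name, and the 4-hour SLA are assumptions.
import os
from datetime import timedelta

import snowflake.connector

SLA = timedelta(hours=4)  # hypothetical freshness SLA

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="MONITORING_WH",
)

cur = conn.cursor()
cur.execute(
    "SELECT TIMEDIFF('minute', MAX(loaded_at), CURRENT_TIMESTAMP()) "
    "FROM silver.customers"
)
staleness_minutes = cur.fetchone()[0]

# Treat an empty table (NULL staleness) as a breach too.
if staleness_minutes is None or staleness_minutes > SLA.total_seconds() / 60:
    raise RuntimeError(
        f"Freshness SLA breached: silver.customers is {staleness_minutes} min stale"
    )
```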
Required Qualifications
  • 6+ years of experience in data architecture, data engineering, or analytics engineering
  • Strong hands‑on experience with Snowflake
  • Experience with Databricks / Apache Spark
  • Demonstrated experience designing data cleansing, standardization, and quality frameworks
  • Experience supporting AI or LLM‑driven data workflows
  • Advanced SQL and strong Python skills
  • Experience designing enterprise data platforms in a cloud environment (Azure preferred)
Preferred Qualifications
  • Experience with Snowflake Cortex, Snowpark, and AI SQL functions
  • Experience enabling RAG pipelines, embeddings, or vector search
  • Familiarity with Azure OpenAI or similar LLM platforms
  • Experience with data quality or observability tools (e.g., Experian)
  • Experience supporting finance, ERP, or multi‑entity data environments

NearU is an Equal Opportunity Employer AA/EOE/M/F/V/D. In compliance with the Americans with Disabilities Act, NearU may provide reasonable accommodations to qualified individuals with disabilities and encourages both prospective and current employees to discuss potential accommodations with the employer.
