Local to WA - Azure Data Analyst/Engineer, Fabric
Listed on 2026-02-18
IT/Tech
Data Engineer
Azure Data Analyst/Engineer with Fabric (Local to WA)
Bellevue, Washington (Hybrid)
Mandatory Skills: Microsoft Fabric - Warehousing; Microsoft Fabric - Data Engineering
Requirements Gathering & Solution Design
Engage with business stakeholders to understand analytical, operational and compliance needs
Convert business requirements into functional designs, source‑to‑target mappings, transformation logic and technical specifications
Validate requirements against enterprise data models and recommend architecture patterns:
Lakehouse, Warehouse, Real-Time Hub
Data Modeling & Fabric Semantic Layer
Design, build and govern Fabric semantic models across Direct Lake, Import, DirectQuery (DQ) and Hybrid storage modes
Define enterprise‑wide canonical models: shared dimensions, hierarchies, KPIs and reusable DAX measures
Optimize semantic models for performance using aggregations, incremental refresh and partitioning strategies
Enable certified datasets, semantic governance and role‑level security within Fabric
ETL / ELT Engineering with Fabric Pipelines, Dataflows and Notebooks
Build ingestion and transformation processes using Data Factory pipelines, Dataflows (Gen2), Warehouse pipelines and PySpark notebooks
Maintain metadata‑driven ETL patterns and reusable frameworks for ingestion, harmonization and transformation
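To make the "metadata‑driven ETL patterns" responsibility concrete, here is a minimal plain‑Python sketch of the idea: transformations are described as metadata (a source‑to‑target mapping table) rather than hard‑coded, so new feeds are onboarded by adding rows of metadata. All column names and rules here are hypothetical illustrations, not part of the role description; in Fabric this pattern would typically drive Data Factory pipelines or PySpark notebooks.

```python
# Sketch of a metadata-driven transformation step.
# Source/target column names and transforms are hypothetical examples.

# Each entry maps a source column to a target column plus an optional
# transformation; the engine below stays generic for every feed.
SOURCE_TO_TARGET = [
    {"source": "cust_nm", "target": "customer_name", "transform": str.strip},
    {"source": "ord_amt", "target": "order_amount", "transform": float},
    {"source": "region_cd", "target": "region_code", "transform": None},
]

def apply_mapping(row: dict) -> dict:
    """Apply the source-to-target mapping rules to one input record."""
    out = {}
    for rule in SOURCE_TO_TARGET:
        value = row.get(rule["source"])
        if rule["transform"] is not None and value is not None:
            value = rule["transform"](value)
        out[rule["target"]] = value
    return out
```

The design choice is that harmonization logic lives in data (the mapping list), which is what makes the framework reusable across ingestion sources.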
Fabric Notebook Engineering
Use Fabric Notebooks to perform:
- PySpark transformations
- Data validation and data quality checks
- ML feature engineering and lightweight model operations
Automate notebook execution via pipelines, triggers and Fabric scheduling
Integrate notebooks with Lakehouse tables, Warehouse tables and ML model outputs
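As a sketch of the "data validation and data quality checks" item above, the following plain‑Python example shows the usual shape of such checks: each rule returns the offending rows so they can be logged or quarantined. Column names and thresholds are hypothetical; in a Fabric notebook the same checks would normally be written against PySpark DataFrames.

```python
# Hypothetical notebook-style data-quality checks: each check returns
# the rows that violate a rule. Column names are illustrative only.

def check_not_null(rows, column):
    """Rows where a required column is missing or null."""
    return [r for r in rows if r.get(column) is None]

def check_in_range(rows, column, lo, hi):
    """Rows where a numeric column falls outside [lo, hi]."""
    return [r for r in rows
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

def run_quality_checks(rows):
    """Collect violations per named check; an empty dict means all passed."""
    violations = {
        "order_amount_null": check_not_null(rows, "order_amount"),
        "order_amount_range": check_in_range(rows, "order_amount", 0, 1_000_000),
    }
    return {name: bad for name, bad in violations.items() if bad}
```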
Near Real Time (NRT) Data Processing in Fabric
Design and implement near‑real‑time data ingestion pipelines using:
- Fabric Real-Time Hub
- Event Streams
- KQL Databases
- Streaming Dataflows
Build streaming transformations and real‑time analytical models leveraging Kusto Query Language (KQL) and PySpark Structured Streaming
Ensure low‑latency ingestion to Lakehouse/Warehouse for downstream consumption
Optimize real‑time workloads for durability, recovery and performance under high throughput
Build dashboards and semantic models that support near‑real‑time refresh scenarios
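The "streaming transformations" called out above usually reduce to windowed aggregations over an event stream. Here is a plain‑Python sketch of a tumbling‑window count per key so the logic is visible without infrastructure; in Fabric this would typically be expressed as a KQL summarize query or PySpark Structured Streaming, and the event fields are hypothetical.

```python
# Illustrative tumbling-window aggregation over an event stream.
# Each event is a dict with a 'ts' epoch-seconds timestamp and a 'key';
# both fields are hypothetical placeholders for real stream attributes.
from collections import defaultdict

WINDOW_SECONDS = 60  # tumbling window size

def tumbling_window_counts(events):
    """Count events per (key, window_start) bucket.

    The window start is the event timestamp floored to the window size,
    which is how tumbling windows partition time into fixed buckets.
    """
    counts = defaultdict(int)
    for e in events:
        window_start = (e["ts"] // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[(e["key"], window_start)] += 1
    return dict(counts)
```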
Performance Optimization
Optimize SQL queries, Lakehouse Delta tables, semantic models, DAX expressions and Power BI datasets
Review and tune pipeline throughput, notebook execution performance and refresh schedules
Improve Direct Lake performance by optimizing storage layouts, file size distributions and column layouts
Perform workload monitoring using Fabric capacity metrics and logs
Build high‑quality Power BI dashboards and enterprise reports integrated with centralized semantic models
Develop DAX calculations, KPIs, UX/UI standards, drillthroughs and row‑level security
Drive semantic model reuse and promote governed gold datasets
Governance, Security & Compliance (Purview Integration)
Implement data governance and cataloging using Microsoft Purview for Fabric assets
Manage lineage tracking, glossary management, classification and metadata enrichment
Define enterprise security controls: RBAC, masking, PII handling, encryption, retention policies
Ensure compliance with GDPR, CCPA, HIPAA, SOX and internal audit controls
Govern Fabric workspace structure, capacity usage, data certification processes and lifecycle management
Technical Leadership & Program Delivery
Lead data engineers, BI developers and analysts across multiple initiatives
Review designs, source‑to‑target mappings (STTMs), code, semantic models and performance benchmarks
Own sprint planning, estimation, milestone tracking and stakeholder communication
Promote documentation, technical standards, reusable design frameworks and automation
Required Skills & Experience
Technical Expertise
8 years of data engineering or BI experience, including 2 years in Microsoft Fabric
Strong hands‑on experience with:
- Fabric Lakehouses and Warehouses
- Fabric Semantic Models (Direct Lake, Import, Hybrid)
- Real-Time Hub, Event Streams, KQL Databases
- Notebooks, PySpark data transformations and optimization
- Data Factory pipelines, Dataflows (Gen2)
Solid understanding of Delta Lake, Spark performance tuning and workload optimization
Expertise in implementing source‑to‑target mappings, transformation logic and validation rules
Preferred Skills
Knowledge of DataOps, CI/CD, Git, DevOps pipelines and unit testing
Familiarity with Fabric AI Copilot for Power BI and AI‑driven engineering accelerators
Soft Skills
Excellent communication, articulation and stakeholder management
Strong leadership skills and the ability to mentor others
Problem‑solving mindset with a focus on scalability, efficiency and accuracy