
Power BI Engineer

Job in Glasgow, Glasgow City Area, G1, Scotland, UK
Listing for: Head Resourcing Ltd
Full Time position
Listed on 2026-01-14
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager
Salary/Wage Range or Industry Benchmark: GBP 80,000 – 100,000 per year
Job Description

Power BI Report Engineer (Azure / Databricks)
Glasgow | 3-4 days onsite | Exclusive Opportunity with a Leading UK Consumer Brand

Are you a Power BI specialist who loves clean, governed data and high-performance semantic models? Do you want to work with a business that’s rebuilding its entire BI estate the right way: proper Lakehouse architecture, curated Gold tables, PBIP, Git, and end-to-end governance? If so, this is one of the most modern, forward-thinking Power BI engineering roles in Scotland. Our Glasgow-based client is transforming its reporting platform using Azure + Databricks, with Power BI sitting on top of a fully curated Gold Layer.

They develop everything using PBIP + Git + Tabular Editor 3, and semantic modelling is treated as a first-class engineering discipline. This is your chance to own the creation of high-quality datasets and dashboards used across Operations, Finance, Sales, Logistics and Customer Care, turning trusted Lakehouse data into insights the business relies on every day.

Why This Role Exists

To turn clean, curated Gold Lakehouse data into trusted, enterprise‑grade Power BI insights. You’ll own semantic modelling, dataset optimisation, governance and best‑practice delivery across a modern BI ecosystem.

What You’ll Do
Semantic Modelling with PBIP + Git
  • Build and maintain enterprise PBIP datasets fully version‑controlled in Git.
  • Use Tabular Editor 3 for DAX, metadata modelling, calc groups and object governance (see the calculation-group sketch after this list).
  • Manage branching, pull requests and releases via Azure DevOps.
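
For flavour, here is a minimal sketch of a time-intelligence calculation group as it might be authored in Tabular Editor 3. The 'Date' table, its marked date column and the item names are illustrative assumptions, not the client’s actual model:

    -- Calculation group: Time Intelligence (illustrative)
    -- Calculation item: Current
    SELECTEDMEASURE ()

    -- Calculation item: YTD (assumes a marked date table 'Date')
    CALCULATE ( SELECTEDMEASURE (), DATESYTD ( 'Date'[Date] ) )

    -- Calculation item: Prior Year
    CALCULATE ( SELECTEDMEASURE (), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

Because every calculation item wraps SELECTEDMEASURE (), the same time-intelligence logic applies uniformly to every measure in the model, which is what keeps KPI definitions consistent at scale.
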
Lakehouse‑Aligned Reporting (Gold Layer Only)
  • Develop semantic models exclusively on top of curated Gold Databricks tables.
  • Work closely with Data Engineering on schema design and contract‑first modelling.
  • Maintain consistent dimensional modelling aligned to the enterprise Bus Matrix.
High‑Performance Power BI Engineering
  • Optimise performance: aggregations, composite models, incremental refresh, DQ/Import strategy.
  • Tune Databricks SQL Warehouse queries for speed and cost efficiency.
  • Monitor PPU capacity performance, refresh reliability and dataset health.
Governance, Security & Standards
  • Implement RLS/OLS, naming conventions, KPI definitions and calc groups (a dynamic RLS sketch follows this list).
  • Apply dataset certification, endorsements and governance metadata.
  • Align semantic models with lineage and security policies across the Azure/Databricks estate.
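
As an illustration of the dynamic RLS pattern this typically involves (the 'User Security' mapping table and all column names below are hypothetical):

    -- Row filter on the table being secured: the signed-in user only sees
    -- territories mapped to them in a hypothetical 'User Security' table
    'Sales Territory'[Territory Key]
        IN CALCULATETABLE (
            VALUES ( 'User Security'[Territory Key] ),
            'User Security'[UPN] = USERPRINCIPALNAME ()
        )
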
Lifecycle, Release & Best Practice Delivery
  • Use Power BI Deployment Pipelines for Dev → UAT → Prod releases.
  • Enforce semantic CI/CD patterns with PBIP + Git + Tabular.
  • Build reusable, certified datasets and dataflows enabling scalable self‑service BI.
Adoption, UX & Collaboration
  • Design intuitive dashboards with consistent UX across multiple business functions.
  • Support BI adoption through training, documentation and best‑practice guidance.
  • Use telemetry to track usage, performance and improve user experience.
What We’re Looking For
Required Certifications

To meet BI engineering standards, candidates must hold:

  • PL-300: Power BI Data Analyst Associate
  • DP-600: Fabric Analytics Engineer Associate
Skills & Experience
  • 3‑5+ years building enterprise Power BI datasets and dashboards.
  • Strong DAX and semantic modelling expertise (calc groups, conformed dimensions, role‑playing dimensions; see the sketch after this list).
  • Strong SQL skills; comfortable working with Databricks Gold‑layer tables.
  • Proven ability to optimise dataset performance (aggregations, incremental refresh, DQ/Import).
  • Experience working with Git‑based modelling workflows and PR reviews via Tabular Editor.
  • Excellent design intuition: clean layouts, drill paths, and KPI logic.
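
Picking up the role‑playing dimensions point above: a single date table is typically reused across order, ship and delivery dates by activating an inactive relationship per measure. The table, column and measure names here are illustrative only:

    -- Re-point 'Sales' at the ship date instead of the active order-date relationship
    Sales by Ship Date :=
        CALCULATE (
            [Total Sales],
            USERELATIONSHIP ( 'Sales'[ShipDateKey], 'Date'[DateKey] )
        )
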
Nice to Have
  • Python for automation or ad‑hoc prep; PySpark familiarity.
  • Understanding of Lakehouse patterns, Delta Lake, metadata‑driven pipelines.
  • Unity Catalog / Purview experience for lineage and governance.
  • RLS/OLS implementation experience.