Remote Cloud Data Warehouse Architect
Remote / Online - Candidates ideally in Camp Hill, Cumberland County, Pennsylvania, 17001, USA
Listed on 2026-03-04
Listing for:
EDI Staffing
Remote/Work from Home position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description & How to Apply Below
SUMMARY
The Cloud Data Warehouse Architect will design and deliver the next-generation enterprise analytics platform. This position is highly technical and will focus on building a cloud-native, SAP-integrated, AI-ready architecture that supports analytics, reporting, and advanced machine learning at scale.
The architect will modernize the current BI and data warehouse environment (anchored today in IBM Netezza, Cognos, and Tableau) into a cloud-based architecture.
This role will require deep technical expertise in data modeling, cloud-native design, and hybrid architectures that bridge legacy on-prem systems with cloud-first capabilities.
The Data Science & Insights group is at the center of analytics transformation. Our mission is to:
- Consolidate legacy BI systems (Netezza, Cognos) into a modern cloud architecture.
- Support the SAP S/4HANA migration with tight integration into the future state.
- Deliver governed, high-performance datasets for self-service analytics in Tableau, Power BI, and SAC.
- Enable AI/ML use cases through Databricks and Azure ML.
- Extend analytics capabilities to our partners and vendors via embedded reporting.
ESSENTIAL DUTIES AND RESPONSIBILITIES
Architectural Design & Modernization
- Lead the design of a cloud data warehouse and data lakehouse architecture capable of ingesting large-scale transactional and operational data.
- Define integration strategies for core systems.
- Develop a reference architecture that leverages Azure Data Lake Storage (ADLS) and Databricks Delta Lake as core components.
- Implement semantic modeling to unify reporting across Tableau, Power BI, and SAP Analytics Cloud (SAC).
- Oversee ingestion pipelines for batch (Netezza extracts, flat files, nightly jobs) and near real-time (APIs, streaming) data sources.
- Optimize query performance through partitioning, clustering, caching, and Delta Lake / warehouse design.
- Establish reusable ETL/ELT patterns across Databricks notebooks, SQL-based orchestration, and integration with Active Batch scheduling.
- Define and enforce data governance standards (naming conventions, metadata, lineage, data quality).
- Partner with Info Sec on identity management (Azure AD), encryption, and RBAC/ABAC models.
- Implement governance tooling such as Azure Purview, SAP metadata catalogs, Databricks Unity Catalog, and Glasswing.
- Partner with data engineers and visualization teams to deliver governed, high-performance datasets consumable in Tableau, Power BI, SAC, and SAP Fiori.
- Serve as the technical SME for architects, engineers, and analysts, ensuring alignment to best practices in cloud-native data warehouse design.
- Drive knowledge transfer from legacy platforms (Netezza, Cognos) into the new ecosystem.
EDUCATION AND EXPERIENCE
- Bachelor's degree in Computer Science, Engineering, or related field.
- 7+ years in data engineering, data warehouse architecture, or cloud data architecture.
- Expertise in Azure (ADLS, Synapse, Purview, Databricks, networking, security).
- Strong proficiency in Databricks (Delta Lake, PySpark, SQL) and/or Snowflake (warehouse design, scaling, security).
- Proven experience in data modeling (3NF, star schema, semantic layers).
- Deep SQL expertise across both cloud and traditional RDBMS (Netezza, SQL Server, Progress OpenEdge).
- Understanding of SAP S/4HANA integration and familiarity with SAP Datasphere.
- Prior experience migrating from on-prem Netezza or other MPP systems to cloud-native platforms.
- Familiarity with Cognos to Tableau/Power BI migrations and dashboard optimization.
- Hands-on experience with SAP Analytics Cloud (SAC) and embedded analytics.
- Knowledge of machine learning workflows and integration with Databricks MLflow or Azure ML.
- Strong knowledge of data governance frameworks and tooling (Purview, Unity Catalog, SAC governance).