Database Administrator (USA)
Location: San Jose, Santa Clara County, California, 95199, USA
Listed on: 2026-01-12
Listing for: Varite
Full Time position
Job specializations:
- IT/Tech: Cloud Computing, Data Engineer
Job Description & How to Apply Below
Job Description:
The SAP HANA Package Implementation Specialist will be responsible for the implementation, support, and optimization of SAP HANA solutions within the enterprise data and analytics platform. This role requires expertise in platform governance, automation, security, application support, and collaboration with various data platforms and application teams to ensure high performance and reliability.
Platform Governance and Optimization
- Define, implement, and enforce platform governance, standards, and best practices for SAP HANA environments.
- Develop and maintain SLA and audit frameworks to ensure compliance, stability, and operational excellence.
- Drive capacity planning and performance optimization initiatives through detailed workload analysis and quarterly performance reviews.
- Lead the design and automation of platform processes including ingestion, extraction, performance testing, alerting, auditing, backup, and cleanup activities.
- Oversee security governance for SAP HANA, ensuring compliance with enterprise security and regulatory requirements.
- Conduct application load testing, performance benchmarking, and code reviews prior to production deployments.
- Partner with development and business teams to optimize SAP HANA application performance and resource utilization.
- Manage code migration and version control processes, leveraging tools such as CHARM, Git, or similar change management frameworks.
- Provide expert-level support for the Enterprise Data & Analytics platform with a primary focus on SAP HANA operations and performance.
- Lead incident management and root cause analysis for production issues, ensuring minimal downtime and business impact.
- Collaborate with downstream data platforms (Snowflake, GCP, SFDC, Teradata) for data extraction and transformation processes.
- Support upstream ingestion pipelines leveraging tools such as BODS, Kafka (Python consumers), and other ETL frameworks.
- Manage and execute month-end and quarter-end snapshot activities, ensuring data integrity and timeliness.
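To illustrate the month-end and quarter-end snapshot scheduling mentioned in the last responsibility, here is a minimal stdlib-only Python sketch; the function names and labeling scheme are illustrative assumptions, not part of the listing:

```python
import calendar
from datetime import date

def month_end(d: date) -> date:
    """Return the last calendar day of d's month."""
    last_day = calendar.monthrange(d.year, d.month)[1]
    return date(d.year, d.month, last_day)

def snapshot_label(d: date) -> str:
    """Label the snapshot for d's period as monthly or quarterly.

    A quarter-end snapshot falls in March, June, September, or December;
    the label embeds the period-end date for audit traceability.
    """
    end = month_end(d)
    kind = "QUARTER_END" if end.month in (3, 6, 9, 12) else "MONTH_END"
    return f"{kind}_{end.isoformat()}"
```

For example, a run dated 2026-03-15 would produce the label `QUARTER_END_2026-03-31`, while 2026-01-12 would produce `MONTH_END_2026-01-31`; in practice such labels would feed the alerting and audit frameworks described above.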