Full Stack Integration & Data Migration Engineer
Listed on 2026-02-28
Software Development
Data Engineer, Software Engineer
Job Overview
Possible 3 Month CTH | No Fees | Do Not Re-Post | Confidential
Role:
Full Stack Integration & Data Migration Engineer
Work location:
Dallas, Texas or Remote
Background check and meet-and-greet: MANDATORY
US citizenship is required
We are looking for a highly skilled Full Stack Integration & Data Migration Engineer with strong expertise in Java, Spring, Neo4j, API integrations, and data migration from relational databases to graph databases. The ideal candidate has experience working directly with customers, handling end-to-end solution implementation, and troubleshooting complex issues across the application, data, and integration layers. This role requires close collaboration with architects, customers, product teams, and deployment engineers to deliver high‑quality, production‑ready solutions.
- Participate in customer discussions to understand functional and technical requirements; provide clarifications during solution implementation; support and conduct User Acceptance Testing (UAT) with customers.
- Triage UAT defects and classify them as bugs, enhancements, or non-issues, ensuring quick resolution.
- Work with the Delivery Architect to design scalable technical architectures; define integration, ingestion, and data exchange points across systems.
- Implement application components using Java, Spring, Angular, JSON, REST, and JUnit; customize and extend Drools rules and business logic; develop and customize TMF‑compliant APIs for external system integrations.
- Integrate with external systems using their APIs for data ingestion; develop outbound APIs for write operations and downstream system updates; customize and enhance ingestion pipelines to meet Neo4j ingestion requirements.
- Write efficient Cypher queries, SQL queries, stored procedures, and database functions; design and execute complex data migration strategies from relational databases (Oracle, PostgreSQL) to Neo4j graph databases; reverse‑engineer legacy data models, build ETL frameworks, and transform relational data to graph structures.
- Develop and maintain bulk‑loading scripts using APOC JDBC, Cypher scripting, and PostgreSQL‑based transformation logic; perform data validation, referential integrity checks, and resolve data relationship issues involving hierarchical datasets.
- Perform device modeling and deploy discovery resource adapters for network discovery operations.
- Develop and support configuration management scripts for Blue Planet CI/CD processes; support production deployment, including code packaging, system checks, and staging environment dry runs.
- Raise and track issues with Product Line teams for product defects or enhancements; provide one‑month post‑production warranty support to resolve production bugs.
- Create user guides, operational documentation, and conduct end‑user training.
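As an illustration of the relational-to-graph transformation work described above, here is a minimal sketch in Python. It assumes a hypothetical `orders` table (with `customer_id` as a foreign key) and shows one common pattern: converting relational rows into a single parameterized Cypher `UNWIND`/`MERGE` batch statement for Neo4j ingestion. The schema, node labels, and relationship type are illustrative, not taken from any actual customer data model.

```python
# Minimal sketch of a relational-to-graph transformation (hypothetical schema).
# Rows from a relational "orders" table are converted into one batched Cypher
# statement plus a parameter map, ready to execute via the Neo4j driver,
# cypher-shell, or an APOC-based ingestion pipeline.

def rows_to_cypher_batch(order_rows):
    """Build a parameterized Cypher statement that MERGEs Customer and Order
    nodes and the PLACED relationship between them from relational rows of
    the form (customer_id, order_id, total)."""
    cypher = (
        "UNWIND $rows AS row "
        "MERGE (c:Customer {id: row.customer_id}) "
        "MERGE (o:Order {id: row.order_id}) "
        "SET o.total = row.total "
        "MERGE (c)-[:PLACED]->(o)"
    )
    params = {"rows": [
        {"customer_id": r[0], "order_id": r[1], "total": r[2]}
        for r in order_rows
    ]}
    return cypher, params

# Rows as they might come back from a JDBC or psycopg2 cursor.
rows = [(101, 5001, 250.0), (101, 5002, 75.5), (102, 5003, 12.0)]
stmt, params = rows_to_cypher_batch(rows)
print(stmt)
print(len(params["rows"]))  # 3 row maps in one batch
```

Batching rows through `UNWIND` with `MERGE` (rather than emitting one statement per row) keeps ingestion idempotent and avoids duplicate nodes when foreign keys repeat across rows, which is the usual concern when flattening relational joins into graph relationships.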
- Strong expertise in Java, Spring Boot, REST APIs, JSON, JUnit.
- Hands‑on experience with Neo4j, Cypher, and graph‑based modeling.
- Proficiency with PostgreSQL / Oracle, including stored procedures and SQL optimization.
- Experience in ETL frameworks, bulk loading, data transformation, and APOC‑based ingestion.
- Frontend skills in Angular (preferred).
- Familiarity with Drools for business rule implementation.
- Experience in API integration, microservices, and system interconnectivity.
- Knowledge of CI/CD pipelines, preferably Blue Planet.
- Strong debugging, analytical, and problem‑solving skills.
- Excellent communication and customer‑facing abilities.
- Experience with telecom network inventory systems or number management systems.
- Prior exposure to TMF Open APIs.
- Knowledge of network resource modeling and discovery adapters.
- Experience working in Agile delivery environments.