We are looking for a Microsoft Fabric Data Engineer to accelerate delivery of a backlog of data engineering initiatives. The role involves building ingestion pipelines, implementing transformation logic, and ensuring data readiness for financial reporting workflows. This position is critical for enabling application modernization efforts to integrate cleanly with Fabric.
Key Responsibilities
• Design and build ingestion pipelines in Microsoft Fabric
• Develop data transformations using Spark / PySpark
• Build and manage Lakehouse / Warehouse structures
• Implement Dataflows Gen2
• Design data quality and reconciliation frameworks (a minimal sketch follows this list)
• Optimize Fabric capacity usage and performance
• Integrate ACA applications with Fabric pipelines
• Collaborate with application teams to define data contracts
• Monitor Fabric workloads using the Monitoring Hub and capacity metrics
• Ensure data readiness SLAs are met
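By way of illustration, here is a minimal PySpark sketch of the transform-validate-publish pattern these responsibilities describe, written the way it might appear in a Fabric notebook. All table and column names (raw_invoices, fin_invoices_clean, invoice_id, amount, posted_date) are hypothetical placeholders, not references to any actual workload.

# A minimal sketch of a Fabric notebook cell: read a raw Lakehouse table,
# normalize it, gate it with a simple data quality check, and publish it
# as a curated Delta table. Names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Fabric notebooks

# Ingest: read a raw Lakehouse table (Delta is the default Fabric table format).
raw = spark.read.table("raw_invoices")

# Transform: normalize types and remove duplicate records.
clean = (
    raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("posted_date", F.to_date("posted_date"))
       .dropDuplicates(["invoice_id"])
)

# Validate: a simple reconciliation gate before publishing to reporting.
bad_rows = clean.filter(F.col("amount").isNull() | F.col("invoice_id").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"Data quality gate failed: {bad_rows} invalid rows")

# Publish: overwrite the curated Delta table consumed by financial reporting.
clean.write.mode("overwrite").saveAsTable("fin_invoices_clean")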
Key Competencies
Required Skills
Fabric Expertise
• Lakehouse architecture
• Pipelines
• Notebooks (Spark)
• Semantic models
• Delta format
• Dataflows Gen2
• Capacity management
Data Engineering
• PySpark
• SQL
• Data modeling
• ETL/ELT design
• Data validation frameworks
Azure Integration
• Event Hub / Service Bus
• Managed Identity
• Secure integration patterns (see the sketch following this list)
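For illustration, a minimal sketch of the Managed Identity pattern listed above, assuming the Azure SDK for Python: an application sends an event to Event Hubs without any connection strings or stored secrets. The namespace and hub name are hypothetical placeholders.

# A minimal sketch: publish to Event Hubs using Managed Identity instead of
# connection strings. Namespace and hub name are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.eventhub import EventHubProducerClient, EventData

# DefaultAzureCredential picks up the managed identity when running in Azure
# and falls back to developer credentials when running locally.
credential = DefaultAzureCredential()

producer = EventHubProducerClient(
    fully_qualified_namespace="contoso-ns.servicebus.windows.net",
    eventhub_name="fabric-ingest",
    credential=credential,
)

with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"invoice_id": "INV-1001", "status": "ready"}'))
    producer.send_batch(batch)

The same credential-based pattern applies to Service Bus and other Azure SDK clients, which is what makes it a reusable secure integration approach.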
Nice to Have
• Experience integrating Salesforce data
• Exposure to Oracle / accounting system integrations
Keywords: Microsoft Fabric, Lakehouse architecture, Pipelines, Notebooks (Spark), Semantic models, Delta format, Dataflows Gen2, Capacity management, PySpark, SQL, Data modeling (dimensional & transactional), ETL/ELT pipeline design, Data validation & data quality frameworks, Azure Integration, Event Hub, Service Bus, Managed Identity, Secure integration patterns.