Senior Data Engineer (Azure & Databricks)
**This is a 6+ month contract with our client based in Bloomington, MN. This is a hybrid role, working in the office on Tuesdays and Thursdays. Candidates must be able to work in the US without sponsorship.**
We’re looking for a Senior Data Engineer with strong Azure experience, especially in Azure Databricks, Delta Lake, and SQL, to build and scale a medallion-based data platform. The role focuses on designing high-performance, governed data pipelines using PySpark, SQL, and Databricks tooling to integrate data from Azure systems, SQL Server Managed Instance, and third-party sources, while partnering closely with analytics teams and business stakeholders. Experience with, or strong interest in, supporting AI/ML use cases is highly valued; financial-services experience is a plus but not required.
Responsibilities
Design, develop, and optimize data pipelines in Azure Databricks using PySpark and SQL, applying Delta Lake and Unity Catalog best practices.
Build modular, reusable libraries and utilities within Databricks to accelerate development and standardize workflows.
Implement Medallion architecture (Bronze, Silver, Gold layers) for scalable, governed data zones.
Integrate external data sources via REST APIs, SFTP file delivery, and SQL Server Managed Instance, implementing validation, logging, and schema enforcement.
Build parameter-driven jobs and manage compute using Spark clusters and Databricks serverless compute.
Collaborate with data analytics teams and business stakeholders to understand requirements and deliver analytics-ready datasets.
Monitor and troubleshoot Azure Data Factory (ADF) pipelines (jobs, triggers, activities, data flows) to identify and resolve job failures and data issues.
Automate deployments and manage code using Azure DevOps for CI/CD, version control, and environment management.
Contribute to documentation, architectural design, and continuous improvement of data engineering best practices.
Support the design and readiness of the data platform for AI and machine learning initiatives.
Requirements
Strong expertise with Azure Databricks, including PySpark, Delta Lake, Unity Catalog, and the ability to build reusable libraries, utility notebooks, and parameterized jobs.
Advanced SQL skills with experience working in Azure SQL Database and/or SQL Server Managed Instance.
Experience designing, troubleshooting, and supporting data pipelines using Azure Data Factory.
Proven ability to integrate external data sources, including REST APIs and SFTP.
Working knowledge of Azure DevOps for CI/CD, version control, and parameterized deployments.
Demonstrated experience partnering closely with data analytics teams and business stakeholders, supported by strong communication, problem-solving, and collaboration skills.
Interest or experience in preparing data platforms to support AI and machine learning initiatives.
Nice to Haves
Experience implementing Medallion architecture within governed Azure data environments, including data governance and RBAC.
Familiarity with data warehousing concepts, dimensional modeling, and preparing datasets for BI tools such as Power BI.
Understanding of Spark performance optimization, cluster or serverless compute management, and advanced Delta Lake features.
Hands-on experience preparing datasets to support AI/ML use cases.
Prior experience in the financial-services industry.
Our Vetting Process
At Emergent Staffing, we work hard to find Data Engineers who are the right fit for our clients. Here are the steps of our vetting process for this position:
Application (5 minutes)
Online Assessment (40 minutes)
Initial Phone Interview (30-45 minutes)
Virtual Interview with the Hiring Team
Onsite Interview
Job Offer!
#EmergentStaffing
#IND3