Job Description
Job Summary
- Design, develop, and maintain robust ETL pipelines using Databricks and Azure Data Factory.
- Transform raw data into actionable insights to support business decision‑making.
- Manage and optimize data storage solutions using SQL Server.
- Collaborate with cross‑functional teams and stakeholders to gather requirements and ensure seamless data integration.
- Develop and maintain scalable data architectures and data workflows.
- Automate data operations and enhance deployment processes using DevOps practices.
- Write and optimize Python scripts for efficient data processing and analysis.
- Monitor and troubleshoot data pipelines to ensure their accuracy and availability.
- Stay current with industry trends and adopt new data engineering technologies and best practices.
- Ensure high data quality across platforms and facilitate ongoing improvements in data engineering processes.