Job Description
• Hands-on experience building automated data pipelines using modern technology stacks for batch ETL, data streaming, or change data capture, and processing data to load advanced analytics repositories
• Experience in designing data lake storage structures, data acquisition, transformation, and distribution processing
• Proficient in designing and implementing data integration processes in a large distributed environment using cloud services (e.g., Azure Data Factory, Data Catalog, Databricks, Stream Analytics)
• Advanced experience in SQL programming
• Proficient in programming languages (e.g., Python, Java) and REST APIs (e.g., Azure API Management, MuleSoft) to process data