Job Description
Job Summary
We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.
Key Responsibilities
ETL Pipeline Development & Optimization
Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
Optimize data pipelines for performance, scalability, fault tolerance, and reliability.
Big Data Processing
Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka.
Ensure fault-tolerant, scalable, and high-performance data processing systems.