Job Description
Job Role: AWS Data Engineer
Work Mode: Remote
Experience: 7+ Years
Job Roles & Responsibilities:
- Design, build, and optimize data pipelines and scalable data assets.
- Develop and maintain high-performance code using PySpark/Python with best practices.
- Optimize Spark SQL and PySpark code for performance and resource efficiency.
- Refactor legacy codebases to improve readability and maintainability.
- Write and maintain unit tests following TDD practices to ensure code reliability and reduce regressions.
- Debug and resolve complex issues including performance, concurrency, and logic flaws.
- Work with AWS services (S3, EC2, Lambda, Redshift, CloudFormation) to architect and deliver solutions.
- Manage version control and artifact repositories with Git and JFrog Artifactory.
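To illustrate the PySpark and TDD responsibilities above, here is a minimal, hypothetical sketch of a unit-tested pipeline step. The function and test names are illustrative only; it is written in plain Python (no Spark session required) but mirrors a common PySpark pattern, deduplicating records to keep the latest row per key, as a window/rank step would:

```python
def dedupe_latest(records):
    """Keep the most recent record per key.

    `records` is a list of (key, timestamp, value) tuples; the result
    maps each key to the value with the highest timestamp. In PySpark
    this would typically be a Window.partitionBy(...).orderBy(...) step.
    """
    latest = {}
    for key, ts, value in records:
        # Overwrite only when we see a newer timestamp for this key.
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, value)
    return {key: value for key, (ts, value) in latest.items()}


def test_dedupe_latest():
    # TDD-style unit test: written against the expected behavior,
    # then the implementation is made to satisfy it.
    rows = [("a", 1, "old"), ("a", 2, "new"), ("b", 5, "only")]
    assert dedupe_latest(rows) == {"a": "new", "b": "only"}
    assert dedupe_latest([]) == {}


test_dedupe_latest()
```

In a real pipeline the same logic would operate on a Spark DataFrame, but keeping the transformation in a small, pure function like this is what makes the TDD bullet practical.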