Is this role right for you?
In this role, you will:
- Design, build, and maintain scalable data pipelines using Apache Spark, PySpark, Apache Beam, or Flink.
- Develop and enhance ETL/ELT workflows using Airflow and ETL tools such as Talend, Spring Batch, or NiFi.
- Work with structured and semi-structured data (CSV, JSON, XML, Parquet, ORC, Iceberg) and apply data wrangling techniques using Python and pandas.
- Contribute to Test Automation Framework enhancements, ensuring reusable and scalable automation components.
- Support Group Intake Automation by translating business and operational requirements into backend workflows and data processes.
- Contribute to BotData Enhancement, developing synthetic data generation and validation logic using PySpark.
- Develop, test, and optimize Python scripts for automation, data transformations, and validation.
- Participate in Agile ceremonies (stand-ups, planning, retrospectives).