• Design, develop, and maintain end-to-end big data pipelines for enterprise applications

• Work on data processing frameworks based on Apache Hadoop and Apache Spark

• Manage and optimize data pipelines, scripts, and production systems for performance and stability

• Perform unit testing of data pipelines and support User Acceptance Testing (UAT) activities

• Debug and resolve issues in existing data pipelines, scripts, and production workflows

• Modify and enhance existing data processing systems based on business requirements

• Review designs, code, and deliverables to ensure high-quality output

• Build and maintain scalable, secure big data solutions on cloud platforms

• Implement and enforce data governance, security, and compliance standards

• Mentor junior team members and contribute to knowledge-sharing initiatives

• Participate in Agile development practices and cross-team collaboration

• Deliver Proofs of Concept (PoCs) and production solutions within agreed timelines