Software Engineer

Onsite
Posted 10 days ago

Job Details

Location(s):

  • Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN

Line of Business: Data Estate (DE)

Job Category:

  • Engineering & Technology

Experience Level: Experienced Hire

Skills and Competencies

  • 5+ years of hands-on experience in data engineering roles
  • Expert-level proficiency with Apache Spark (Python/PySpark and Scala)
  • Deep experience building and managing real-time data streaming pipelines using Apache Kafka
  • Strong understanding of big data technologies including HDFS, Hive, distributed computing, and file formats like Parquet/Avro
  • Demonstrated ability to work independently, manage priorities, and deliver high-quality, tested solutions
  • Strong problem-solving, debugging, and analytical skills
  • Excellent communication and interpersonal skills
  • Preferred: Experience with cloud-based platforms (AWS, Azure), orchestration tools (Airflow, Dagster), containerization (Docker/Kubernetes), data warehouses (Snowflake, Redshift), IaC tools (Terraform), CI/CD for data pipelines, and database design (SQL/NoSQL)

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field
  • Equivalent practical experience also considered

Responsibilities

Design and build scalable data pipelines and infrastructure that support large-scale data processing in both real-time and batch modes.

  • Design, implement, and maintain scalable data pipelines using Apache Spark (PySpark/Scala) and Apache Kafka
  • Work with large datasets across data lakes, warehouses, and distributed systems
  • Build low-latency streaming solutions to ingest, process, and serve data
  • Optimize processing jobs and storage for cost-efficiency and scalability
  • Develop comprehensive automated tests for pipeline reliability and data quality
  • Collaborate with data scientists, analysts, engineers, and product teams to deliver data solutions
  • Actively contribute to Agile ceremonies and deliver on sprint commitments
  • Take ownership of tasks and drive projects to completion independently
  • Troubleshoot and resolve issues with data performance and reliability
  • Mentor junior engineers and share knowledge across the team
  • Document technical processes, architecture, and pipeline workflows
  • Contribute to ongoing improvements in data architecture and engineering practices
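
To give a flavor of the kind of pipeline these responsibilities describe, below is a minimal, hypothetical PySpark Structured Streaming sketch that reads JSON events from a Kafka topic, parses them against a schema, and appends them to Parquet. The topic name, broker address, schema fields, and paths are illustrative assumptions only (not a description of Moody's actual systems), and running it requires the spark-sql-kafka connector package on the Spark classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = (
        SparkSession.builder
        .appName("events-stream")  # hypothetical application name
        .getOrCreate()
    )

    # Illustrative event schema; a real pipeline would derive this from the data contract.
    event_schema = StructType([
        StructField("user_id", StringType()),
        StructField("action", StringType()),
        StructField("event_time", TimestampType()),
    ])

    # Read raw events from Kafka (broker address and topic are placeholders).
    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    # Kafka delivers the payload as bytes; cast to string and parse the JSON body.
    events = (
        raw.selectExpr("CAST(value AS STRING) AS json")
        .select(from_json(col("json"), event_schema).alias("e"))
        .select("e.*")
    )

    # Append the parsed stream to Parquet, with checkpointing for fault tolerance.
    query = (
        events.writeStream
        .format("parquet")
        .option("path", "/data/lake/events")               # placeholder output path
        .option("checkpointLocation", "/data/chk/events")  # placeholder checkpoint path
        .outputMode("append")
        .start()
    )

    query.awaitTermination()

In practice, jobs like this are tuned for partitioning, file sizes, and checkpoint placement to meet the cost-efficiency and scalability goals listed above.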

About the Team

Our Data Engineering team is responsible for designing and maintaining the infrastructure that powers our company’s data-driven decision-making.

By joining our team, you will:

  • Help scale real-time and batch data processing platforms
  • Support mission-critical analytics and data science initiatives
  • Influence architectural decisions that shape the future of data at Moody’s

