Job Title: Data Engineer – C13/VP
The Role
We are looking for a hands-on Data Engineer who is passionate about solving
business problems through innovation and engineering practices. As a Data
Engineer, you will leverage your deep technical knowledge to drive the creation of
high-quality software products. You will also be expected to share your technical
expertise and promote a culture of technical excellence within the team. The Data
Engineer will work with a Team Lead and will be a code-contributing member of the
team that will deliver solutions against the sprint-level commitments.
Responsibilities
• Contribute code as a member of an Agile team, working to deliver sprint goals.
• Demonstrate technical knowledge and expertise in software development,
including programming languages, frameworks, and best practices.
• Actively contribute to the implementation of features and technical
solutions, writing clean, efficient, and maintainable code that meets the
highest standards of quality.
• Collaborate with other Engineers to define and evolve the overall system
architecture and design.
• Provide guidance on scalable, robust, and efficient solutions that align with
business requirements and industry best practices.
• Offer expert engineering guidance and support to multiple teams, helping
them overcome technical challenges, make informed decisions, and deliver
high-quality software solutions. Foster a culture of technical excellence and
continuous improvement.
• Stay up to date with emerging technologies, tools, and industry trends.
Evaluate their potential impact on the organization and provide
recommendations for technology adoption and innovation.
Required Qualifications
• 5+ years' experience implementing data-intensive solutions using agile
methodologies.
• Proficient in one or more programming languages commonly used in data
engineering, such as Python, Java, or Scala.
• Multiple years of experience with software engineering best practices (unit
testing, automation, design patterns, peer review, etc.).
• Multiple years of experience with Hadoop for data storage and processing is
valuable, as is exposure to modern data platforms such as Snowflake and
Databricks.
• Multiple years of experience with cloud-native development and container
orchestration tools (serverless, Docker, Kubernetes, OpenShift, etc.).
• Multiple years of experience with open-source data engineering tools and
frameworks (e.g. Spark, Kafka, Beam, Flink, Trino, Airflow, dbt).
• Exposure to a range of table and file formats, including Iceberg, Hive, Avro,
Parquet, and JSON.
• Multiple years of experience architecting and building horizontally scalable,
highly available, highly resilient, low-latency applications.