We are seeking a Senior Data Engineer with 5–6 years of experience in building and optimizing data pipelines and architectures. The ideal candidate will have strong expertise in Hadoop and Spark, with programming skills in Python, Java, or Scala. Experience with Databricks and AWS is a plus, though not mandatory.
Design, develop, and maintain scalable data pipelines using Hadoop and Spark (an illustrative sketch appears at the end of this posting).
Write efficient code in Python, Java, or Scala for data processing and transformation.
Optimize data workflows for performance, reliability, and scalability.
Collaborate with cross-functional teams to deliver data solutions that meet business needs.
Ensure data quality, consistency, and governance across systems.
Hadoop & Spark, Python/Java/Scala, Databricks, and AWS.
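For illustration only, below is a minimal PySpark sketch of the kind of pipeline work described in the responsibilities above: ingest raw data, transform it, run a basic quality check, and publish partitioned output. The paths, dataset name (orders), and columns (order_ts, quantity, unit_price) are hypothetical placeholders, not part of this role's actual stack.

```python
# A minimal, illustrative PySpark pipeline. All paths, table names, and
# columns below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_pipeline").getOrCreate()

# Ingest raw data (path is illustrative).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: standardize types, derive revenue and a partition date,
# and drop invalid or duplicate records.
cleaned = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

# Basic data-quality check before publishing.
bad_rows = cleaned.filter(F.col("revenue") < 0).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows have negative revenue; aborting write")

# Write partitioned output for downstream consumers (path is illustrative).
(
    cleaned
    .repartition("order_date")
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")
)
```

A comparable pipeline could equally be written against the Scala or Java Spark APIs, or scheduled as a Databricks or AWS EMR job, depending on the team's platform.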