AWS Data Engineer

Overview

With the Cloud serving as an enabler, Data as a business driver, and AI as a core differentiator, we offer comprehensive, cost-effective, and value-added services across a variety of industries, such as Energy, Utilities, Healthcare, and Real Estate.

Job Description

About the Role:

We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:

Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena

Process and integrate structured and unstructured data, including sensor/IoT and real-time streams

Optimize pipeline performance and ensure reliability and fault tolerance

Collaborate with cross-functional teams including data scientists and analysts

Perform data transformations using Python, Pandas, and SQL

Maintain data integrity, quality, and security across the platform

Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation

Support and monitor pipeline workflows, troubleshoot issues, and implement fixes

Contribute to the adoption of emerging tools such as Amazon Bedrock, Textract, Rekognition, and other generative AI (GenAI) solutions

Required Skills and Qualifications:

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field

3+ years of experience in data engineering using AWS

Strong skills in:

AWS Glue, Redshift, S3, Lambda, EMR, Athena

Python, Pandas, SQL

Amazon RDS, PostgreSQL, SAP HANA

Solid understanding of data modeling, warehousing, and pipeline orchestration

Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:

Experience working with energy sector data or IoT/sensor-based data

Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)

Familiarity with big data technologies such as Apache Spark and Kafka

Experience with data visualization tools (Tableau, Power BI, Amazon QuickSight)

Awareness of data governance and catalog tools such as AWS Glue Data Quality, Collibra, and AWS Glue DataBrew

AWS certifications (e.g., Data Analytics Specialty, Solutions Architect)

Skills & Requirements

AWS Glue, Redshift, S3, Lambda, EMR, Athena, Python, Pandas, SQL, Amazon RDS, PostgreSQL, SAP HANA, Data Modeling, Data Warehousing, Pipeline Orchestration, Git, Terraform, Energy Sector Data, IoT Data, Sensor Data, SageMaker, TensorFlow, Scikit-learn, Apache Spark, Kafka, Tableau, Power BI, Amazon QuickSight, AWS Glue Data Quality, Collibra, AWS Glue DataBrew, AWS Certifications (Data Analytics, Solutions Architect)

