Data Engineer

Overview

Acuity Knowledge Partners is a leading provider of high-value research, analytics and business intelligence to the financial services sector. The company supports over 500 financial institutions and consulting companies through a team of over 2,500 subject-matter experts who work as an extension of clients’ teams from its global delivery centers.

We EMPOWER our clients to drive revenues higher. We INNOVATE using our proprietary technology and automation solutions. We enable our clients to TRANSFORM their operating model and cost base.

Job Description

You will primarily collaborate closely with a leading global hedge fund on data engagements, partnering with its data strategy and sourcing team on data requirements to design data pipelines and delivery structures.

Desired Skills and Experience

Essential skills

5+ years of experience with data modeling, data warehousing, and building data pipelines

Expert in Python and in back-end data ingestion and manipulation with Snowflake (a minimal sketch follows this list)

Expert in working with different data formats, e.g. Parquet, Avro, JSON and XLSX

Experience with web scraping, including reviewing and editing web-scraping code

Experience working with FTP, SFTP, APIs, S3 and other distribution channels to source data

Experience working with PySpark, Docker and the AWS cloud

Understanding of financial modeling
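
To ground the Python-and-Snowflake item above, here is a minimal sketch of the kind of ingestion work involved. It is illustrative only: the file name, table name and connection parameters are placeholders, not details of the actual engagement.

```python
# Minimal sketch: ingest a vendor Parquet delivery into Snowflake.
# All names and credentials below are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# pandas reads the common delivery formats directly
# (read_parquet / read_json / read_excel for XLSX).
df = pd.read_parquet("vendor_delivery.parquet")

# Light back-end manipulation: normalize column names for Snowflake.
df.columns = [c.strip().upper() for c in df.columns]

conn = snowflake.connector.connect(
    user="<user>",
    password="<password>",
    account="<account>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)
try:
    # write_pandas bulk-loads the DataFrame through an internal stage.
    success, _, nrows, _ = write_pandas(
        conn, df, "VENDOR_DELIVERY", auto_create_table=True
    )
    print(f"loaded={success} rows={nrows}")
finally:
    conn.close()
```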

Education:

B.E./B.Tech in Computer Science or related field.

Key Responsibilities

Partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures

Engage with vendors and technical teams to systematically ingest, evaluate, and create valuable data assets

Collaborate with the core engineering team to create central capabilities to process, manage, monitor and distribute datasets at scale

Apply robust data quality rules to systematically qualify data deliveries and guarantee the integrity of datasets (a minimal sketch follows this list)

Engage with technical and non-technical clients as an SME on data asset offerings
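
As one illustration of the rule-based quality checks mentioned above (not the team's actual rule set), here is a minimal PySpark sketch. The S3 path, key columns and row-count threshold are all hypothetical.

```python
# Minimal sketch: rule-based data quality checks gating a delivery.
# Paths, column names and thresholds are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://bucket/deliveries/2024-06-01/")  # placeholder path

failures = []

# Rule 1: the delivery must not be empty (or suspiciously small).
row_count = df.count()
if row_count < 1_000:  # illustrative threshold
    failures.append(f"row count too low: {row_count}")

# Rule 2: key columns must be fully populated.
for column in ["security_id", "as_of_date"]:  # placeholder keys
    nulls = df.filter(F.col(column).isNull()).count()
    if nulls:
        failures.append(f"{column} has {nulls} null values")

# Rule 3: the natural key must be unique.
dupes = (
    df.groupBy("security_id", "as_of_date")
    .count()
    .filter(F.col("count") > 1)
    .count()
)
if dupes:
    failures.append(f"{dupes} duplicate keys found")

# Only a clean delivery is promoted downstream.
if failures:
    raise ValueError("delivery rejected: " + "; ".join(failures))
```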

Key Metrics

Python, SQL, Snowflake, PySpark, Docker

Data Engineering and pipelines

Behavioral Competencies

Good communication skills (verbal and written)

Experience in managing client stakeholders

Skills & Requirements

Python, SQL, AWS, Snowflake, PySpark, Docker, Data Modeling, Data Warehousing, Data Pipelines, Web Scraping, Parquet, Avro, JSON, XLSX, FTP, SFTP, API, S3, Financial Modeling, Data Ingestion, Data Manipulation, Data Quality, Client Stakeholder Management
