Narendra Reddy Yediginjala, Developer in Bengaluru, Karnataka, India

Narendra Reddy Yediginjala

Verified Expert in Engineering

Big Data Developer

Bengaluru, Karnataka, India

Toptal member since June 24, 2020

Bio

Narendra has 18 years of experience in data engineering, data science, business intelligence, and data warehousing. He has handled multimillion-dollar projects for clients and worked with numerous big data tools, including Databricks, AWS, Google Cloud, Hadoop, Hive, Spark, Scala, Python, and SQL. Narendra's most significant projects have been for financial services and healthcare clients, including HelloFresh, Change Healthcare, Aetna, TIAA-CREF, and ADP.

Portfolio

Change Healthcare
Amazon Web Services (AWS), Amazon S3 (AWS S3), EMR, SQL, IntelliJ IDEA, Spark...
Cognizant Technology Solutions US Corp
Oracle, Datastage, SQL, Hue, Cloudera, Impala, MapReduce, Apache Hive, HDFS...
Infosys Limited
IBM Db2, SQL, Datastage, Data Warehouse Design, Data Warehousing, ETL, Bash...

Experience

  • SQL - 14 years
  • IBM InfoSphere (DataStage) - 9 years
  • Apache Hive - 5 years
  • Hadoop - 5 years
  • Big Data - 5 years
  • Spark - 4 years
  • Amazon Elastic MapReduce (EMR) - 1 year
  • Amazon S3 (AWS S3) - 1 year

Availability

Full-time

Preferred Environment

SQL, Python, Scala, Spark, Hadoop, Amazon Web Services (AWS), Data Build Tool (dbt), Databricks, EMR, Snowflake

The most amazing...

...project I've deployed was a distributed-system reporting platform. I redesigned BI systems with reusable components, reducing development time to six months.

Work Experience

Senior Big Data Engineer

2019 - PRESENT
Change Healthcare
  • Created Spark programs with Scala to generate demographic relative scores based on different algorithms.
  • Developed and enhanced a demographics-based identity platform.
  • Created, administered, and managed EMR clusters on the AWS platform to run the Spark programs.
  • Worked extensively on AWS S3, AWS Glue, and EMR for data processing.
  • Extracted data from sources using Fivetran and dbt tools.
  • Wrote data lake pipelines using tools like dbt, AWS, and Airflow.
Technologies: Amazon Web Services (AWS), Amazon S3 (AWS S3), EMR, SQL, IntelliJ IDEA, Spark, Hadoop, Big Data, Data Engineering, Python 3, Bash, Agile Software Development, Linux, Data Modeling, Amazon Elastic MapReduce (EMR), PySpark, Data Lakes, Data Architecture, Data Pipelines, ELT, Databases
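The demographic scoring mentioned above can be illustrated with a minimal plain-Python sketch (the production version used Spark with Scala at scale); the fields and weights here are hypothetical stand-ins, not the actual matching algorithm:

```python
from dataclasses import dataclass

@dataclass
class Demographics:
    first_name: str
    last_name: str
    birth_year: int
    zip_code: str

# Hypothetical field weights; a real identity platform would tune these
# from its matching algorithms rather than hard-code them.
WEIGHTS = {"first_name": 0.3, "last_name": 0.3, "birth_year": 0.25, "zip_code": 0.15}

def relative_score(a: Demographics, b: Demographics) -> float:
    """Weighted agreement score between two demographic records, in [0, 1]."""
    score = 0.0
    if a.first_name.lower() == b.first_name.lower():
        score += WEIGHTS["first_name"]
    if a.last_name.lower() == b.last_name.lower():
        score += WEIGHTS["last_name"]
    if a.birth_year == b.birth_year:
        score += WEIGHTS["birth_year"]
    if a.zip_code == b.zip_code:
        score += WEIGHTS["zip_code"]
    return round(score, 2)
```

In a Spark setting, a function like this would be applied across candidate record pairs generated by the index-based linking step.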

Senior Associate

2014 - 2018
Cognizant Technology Solutions US Corp
  • Worked on multiple multimillion-dollar projects with different clients.
  • Migrated a slow-performing data reporting platform to a distributed computing-based data platform using Oracle BDA.
  • Enhanced and maintained TIAA PlanFocus, an application for a data reporting platform.
  • Worked on a data integration platform to gather, transform, and report data from multiple sources.
  • Rewrote ETL jobs from Talend to IBM DataStage with enhanced reusability and performance.
  • Migrated a reporting platform's data delivery from a T+2 ETA to a T+1 8 AM ETA.
Technologies: Oracle, Datastage, SQL, Hue, Cloudera, Impala, MapReduce, Apache Hive, HDFS, Hadoop, Big Data, Bash, Agile Software Development, Data Modeling, PySpark, Data Lakes, Data Architecture, Databases, Data Pipelines

Technology Lead

2011 - 2013
Infosys Limited
  • Rewrote and enhanced the functionalities for a new business-related reporting system.
  • Modified parallel jobs to include recent changes, migrating the old data from DB2 history tables to Oracle tables.
  • Developed data model changes for functional enhancements, coordinating with the DBA team to gather the necessary requirements.
  • Migrated from a legacy platform to a modern eCommerce platform with minimal issues.
  • Kept pace with the frequently changing user requirements and delivered high-quality parallel jobs on time.
Technologies: IBM Db2, SQL, Datastage, Data Warehousing, Data Warehouse Design, ETL, Bash, Agile Software Development, IBM InfoSphere (DataStage), Data Modeling, PySpark, Databases, Stored Procedure, Data Pipelines

Senior Member–Technical

2006 - 2010
ADP Private Limited
  • Developed ETL jobs using data rules defined by the business.
  • Wrote and executed unit test scripts using internally developed frameworks.
  • Supported ETL jobs in the production environment and resolved issues as they arose.
Technologies: Oracle, SQL, Datastage, ETL, Agile Software Development, PL/SQL, Databases, Stored Procedure

Projects

UPI – IHDP

IHDP is a data platform under development by Change Healthcare. It is designed to gather data from multiple external sources, create indexes on core demographics, link all the records based on those indexes, and serve different data consumers.

JAD (Joint Application Development) – Data Science

After CVS and Aetna merged, Aetna launched an analytics initiative to utilize data from both companies to improve services and overall business performance. Understanding data from the different sources and building new analytic datasets allows the merged company to run multiple machine learning algorithms.

Finance Data Repository (FDR)

Finance Data Repository (FDR) is a long-term program that Aetna initiated to transform its operational and analytical reporting from legacy systems to a Hadoop-based infrastructure. The FDR is a data lake with multiple layers, offering the flexibility to consume data from different layers for different purposes.

Consultant Book of Business (CBOB)

CBOB is a new application built by TIAA for third-party consultants who evaluate and make suggestions on plan performance and outcomes. This application provides metrics and reports to the consultants on their clients' plans.

PlanFocus T1

PlanFocus is TIAA-CREF’s website for retirement plan sponsors. This site helps plan sponsors and consultants manage and optimize plans, drive outcomes, and engage employees.

TIAA-CREF has invested in PlanFocus to improve its functionalities and services. PlanFocus T1 is a project to improve data availability for plan sponsors. Before this project, data was refreshed within two days. This project aimed to provide data to plan sponsors within one day by 8:00 AM.

PlanFocus Data Integration

PlanFocus is a client-facing, front-end platform for plan sponsors who enroll employees in their plans.

The platform consists of several self-service tools that gather many data attributes from multiple systems and databases. The data integration system collects data from discrete systems; cleanses, analyzes, and transforms it; and sends report-ready data to a memory-stored data presentation system (Endeca) that creates PlanFocus reports.
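As a rough sketch of that collect, cleanse, and transform flow (plain Python with in-memory stand-ins; the source names and fields are hypothetical, and the real system used DataStage feeding Endeca):

```python
def extract(sources):
    """Gather raw rows from multiple discrete systems (here, in-memory dicts)."""
    for name, rows in sources.items():
        for row in rows:
            yield {**row, "source": name}

def cleanse(rows):
    """Drop rows missing a plan ID and normalize whitespace in string fields."""
    for row in rows:
        if not row.get("plan_id"):
            continue
        yield {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}

def transform(rows):
    """Aggregate assets per plan into a report-ready structure."""
    totals = {}
    for row in rows:
        totals[row["plan_id"]] = totals.get(row["plan_id"], 0) + row.get("assets", 0)
    return totals

# Hypothetical upstream systems and records, for illustration only.
sources = {
    "recordkeeping": [{"plan_id": "P1 ", "assets": 100}, {"plan_id": "P1", "assets": 50}],
    "crm": [{"plan_id": "P2", "assets": 70}, {"plan_id": None, "assets": 10}],
}

report = transform(cleanse(extract(sources)))
```

Each stage is a small, composable step, which mirrors how ETL jobs chain extraction, cleansing, and transformation before handing data to the presentation layer.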

I executed multiple functional enhancements and InfoSphere server upgrades.
Education

2021 - 2023

Executive MBA in Management

Quantic School Of Technology and Management - Washington, D.C., USA

2002 - 2006

Bachelor of Technology Degree in Electrical and Electronics Engineering

Jawaharlal Nehru Technological University - Hyderabad, India

Certifications

MAY 2020 - PRESENT

Google Cloud Platform Big Data and Machine Learning Fundamentals

Coursera

MAY 2020 - PRESENT

Smart Analytics, Machine Learning, and AI on GCP

Coursera

MAY 2020 - PRESENT

Modernizing Data Lakes and Data Warehouses with GCP

Coursera

MAY 2020 - PRESENT

Building Resilient Streaming Analytics Systems on GCP

Coursera

MAY 2020 - PRESENT

Building Batch Data Pipelines on GCP

Coursera

MAY 2020 - MAY 2022

CCA Spark and Hadoop Developer

Cloudera

APRIL 2020 - PRESENT

Problem Solving (Basic)

HackerRank

MARCH 2020 - PRESENT

Python (Basic)

HackerRank

DECEMBER 2018 - PRESENT

Machine Learning

Coursera


Skills

Libraries/APIs

PySpark, Pandas, NumPy

Tools

IBM InfoSphere (DataStage), Impala, BigQuery, Amazon QuickSight, IntelliJ IDEA, Cloudera, Hue, Amazon Elastic MapReduce (EMR), Spark SQL, Google Sheets, Stitch Data, Geocoding, Microsoft Power BI

Languages

SQL, Python 3, Python, Scala, Bash, Stored Procedure, Snowflake

Frameworks

Hadoop, Spark

Paradigms

ETL, Dimensional Modeling, MapReduce, Agile Software Development, Management

Platforms

Oracle, Amazon Web Services (AWS), Linux, Azure, Spark Core, Google Cloud Platform (GCP), Databricks

Storage

Apache Hive, Amazon S3 (AWS S3), Databases, Google Cloud, PL/SQL, HDFS, Datastage, IBM Db2, MySQL, Distributed Databases, Data Pipelines, Data Lakes

Other

Data Warehousing, Data Warehouse Design, Big Data, Data Engineering, Data Modeling, Data Marts, ELT, Fivetran, EMR, Engineering, Development, Data Science, Machine Learning, Data, Team Leadership, Cloud Infrastructure, Marketing Analytics, Streaming Data, Cloud Architecture, Data Build Tool (dbt), Data Architecture, Data Visualization, Electronic Medical Records (EMR), Strategy, Accounts
