
Surendra Medisetty

Verified Expert in Engineering

Data Engineer and Developer

Farmington Hills, MI, United States

Toptal member since November 22, 2023

Bio

Surendra has over seven years of experience in IT, data engineering, and big data. His focus is building ETL pipelines: collecting data from multiple sources and loading it into target tables with the appropriate transformations and mappings. He is also well-versed in Python, APIs, PySpark, and Spark.

Portfolio

Credibly
PySpark, Apache Kafka, Snowflake, Tableau, SQL Server 2016, Jenkins, Python...
Clearcover
Azure, Azure Data Factory (ADF), Blob Storage, SQL Server 2016, Python 3...
CoreLogic
Python 3, Django, Git, Jenkins, Docker, MySQL, Kubernetes, Amazon S3 (AWS S3)...

Experience

  • ETL - 7 years
  • SQL - 7 years
  • Git - 7 years
  • Python 3 - 6 years
  • Pytest - 6 years
  • PySpark - 5 years
  • Azure Data Factory (ADF) - 4 years
  • Snowflake - 3 years

Availability

Part-time

Preferred Environment

Python 3, AWS IoT, SQL, Azure, PySpark, ETL, Snowflake, Apache Airflow, Git, Docker

The most amazing...

...projects I've worked on involved building ETL pipelines in cloud environments.

Work Experience

Senior Data Engineer

2021 - PRESENT
Credibly
  • Wrote and executed various MySQL queries from Python using database connector packages (a minimal sketch follows this list).
  • Combined views and reports into interactive Tableau dashboards presented to business users, program managers, and end users.
  • Fine-tuned Spark applications extensively and provided support for various production pipelines.
Technologies: PySpark, Apache Kafka, Snowflake, Tableau, SQL Server 2016, Jenkins, Python, MySQL, Spark
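
The sketch below illustrates the first bullet: running a parameterized MySQL query from Python. It assumes the mysql-connector-python package; the host, credentials, and "funding_deals" table are hypothetical placeholders, not details from the actual project.

    # Minimal sketch: run a parameterized MySQL query from Python.
    # Assumes mysql-connector-python; the connection details and the
    # "funding_deals" table are hypothetical placeholders.
    import mysql.connector

    conn = mysql.connector.connect(
        host="db.example.com",   # hypothetical host
        user="etl_user",
        password="...",          # load from a secrets manager in practice
        database="analytics",
    )
    try:
        cursor = conn.cursor(dictionary=True)  # return rows as dicts
        cursor.execute(
            "SELECT deal_id, amount FROM funding_deals WHERE created_at >= %s",
            ("2021-01-01",),
        )
        for row in cursor.fetchall():
            print(row["deal_id"], row["amount"])
    finally:
        conn.close()

Parameterized queries (the %s placeholder above) keep such scripts safe from SQL injection when query values arrive from upstream systems.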

Data Engineer

2018 - 2020
Clearcover
  • Created Azure Data Factory pipelines using linked services and datasets to extract, transform, and load data from sources including Azure SQL, Blob Storage, Azure SQL Data Warehouse (SQL DW), and write-back tools.
  • Built Spark apps with Azure Data Factory and Spark SQL to extract, transform, and aggregate data from various file formats, revealing insights into consumer usage patterns (a PySpark sketch follows this list).
  • Prepared highly interactive Tableau reports and dashboards for data visualization using data blending, calculations, filters, actions, parameters, maps, extracts, context filters, sets, and aggregate measures.
Technologies: Azure, Azure Data Factory (ADF), Blob Storage, SQL Server 2016, Python 3, Azure Databricks, Tableau, Kubernetes, Azure SQL, Azure SQL Data Warehouse, Dedicated SQL Pool (formerly SQL DW), Spark, Spark SQL
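
As a rough illustration of the second bullet, here is a minimal PySpark job that extracts two file formats, registers them as temp views, and aggregates with Spark SQL. The storage paths, view names, and columns are hypothetical placeholders.

    # Minimal PySpark sketch: extract from Parquet and CSV, then aggregate
    # with Spark SQL. Paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("usage-patterns").getOrCreate()

    # Extract: read two source formats into DataFrames.
    policies = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/policies/")
    events = spark.read.option("header", True).csv("abfss://raw@account.dfs.core.windows.net/events/")

    # Transform and aggregate with Spark SQL via temp views.
    policies.createOrReplaceTempView("policies")
    events.createOrReplaceTempView("events")
    usage = spark.sql("""
        SELECT p.state, COUNT(e.event_id) AS events, AVG(p.premium) AS avg_premium
        FROM policies p
        JOIN events e ON e.policy_id = p.policy_id
        GROUP BY p.state
    """)

    # Load: write the aggregate back out for reporting.
    usage.write.mode("overwrite").parquet("abfss://curated@account.dfs.core.windows.net/usage_by_state/")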

Python Developer

2016 - 2018
CoreLogic
  • Created Apache Airflow DAGs in Python to run data pipelines, monitoring and maintaining them for production readiness (a minimal DAG sketch follows this list).
  • Wrote Python scripts using Boto 3 to programmatically access AWS resources like Amazon S3, EC2, Athena, and Lambda functions.
  • Prepared Python test cases with the Pytest and PyUnit packages to cover the functionality of the Python scripts.
Technologies: Python 3, Django, Git, Jenkins, Docker, MySQL, Kubernetes, Amazon S3 (AWS S3), Amazon EC2, AWS CloudTrail, Pytest, Apache Airflow, Python, Boto 3
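
The sketch below combines the first two bullets: an Airflow DAG (written here in Airflow 2 style) whose single PythonOperator task uses Boto 3 to list objects in an S3 bucket. The DAG ID, schedule, and bucket name are hypothetical placeholders.

    # Minimal Apache Airflow sketch (Airflow 2 style): a daily DAG whose one
    # task uses Boto 3 to list S3 objects. Names are hypothetical placeholders.
    from datetime import datetime

    import boto3
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def list_new_files():
        # List objects under a raw-data prefix so downstream tasks can pick them up.
        s3 = boto3.client("s3")
        resp = s3.list_objects_v2(Bucket="example-raw-data", Prefix="incoming/")
        for obj in resp.get("Contents", []):
            print(obj["Key"], obj["Size"])

    with DAG(
        dag_id="s3_ingest_example",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ):
        PythonOperator(task_id="list_new_files", python_callable=list_new_files)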

Project Experience

ETL Pipelines for Credibly

https://www.credibly.com/
This project involved building ETL pipelines to move data from on-premises systems to Azure Data Lake using Azure Data Factory and Databricks. I used Python as the primary programming language and Spark as the ETL framework, drawing on my experience with SQL, databases, and Tableau reporting.
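
A simplified sketch of that pattern, assuming a Databricks cluster with the SQL Server JDBC driver available: read an on-premises table over JDBC, apply a light transformation, and land the result in Azure Data Lake as Parquet. The host, table, and storage paths are hypothetical placeholders.

    # Minimal sketch of the on-premises-to-data-lake ETL pattern above.
    # Assumes the SQL Server JDBC driver is on the Spark classpath; hosts,
    # tables, and paths are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("onprem-to-adls").getOrCreate()

    source = (
        spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
        .option("dbtable", "dbo.orders")
        .option("user", "etl_user")
        .option("password", "...")  # use a Databricks secret scope in practice
        .load()
    )

    # Light transformation before landing: drop soft-deleted rows.
    cleaned = source.filter("is_deleted = 0")

    cleaned.write.mode("overwrite").parquet(
        "abfss://curated@datalake.dfs.core.windows.net/orders/"
    )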

Education

2014 - 2016

Master's Degree in Computer Science

Campbellsville University - Campbellsville, Kentucky, USA

2009 - 2013

Bachelor's Degree in Computer Engineering

Jawaharlal Nehru Technological University - Hyderabad, India

Skills

Libraries/APIs

PySpark

Tools

Git, Pytest, Apache Airflow, Jenkins, AWS CloudTrail, Tableau, Spark SQL, Boto 3

Languages

Python 3, SQL, Snowflake, Java, C++, Python

Paradigms

ETL, Agile

Platforms

AWS IoT, Azure, Docker, Kubernetes, Amazon EC2, Apache Kafka, Azure SQL Data Warehouse, Databricks, Dedicated SQL Pool (formerly SQL DW)

Frameworks

Django, Spark

Storage

Databases, MySQL, Amazon S3 (AWS S3), SQL Server 2016, Azure SQL

Other

Azure Data Factory (ADF), Software, Blob Storage, Azure Databricks, Azure Data Lake
