Adam Ivansky, Developer in Buffalo, NY, United States

Adam Ivansky

Verified Expert  in Engineering

Machine Learning Developer

Buffalo, NY, United States
Toptal Member Since
November 6, 2018

Adam has nine years of experience as an engineer and two years of experience as a tech lead. His tools of choice include Python 3, Snowflake, Spark, and SQL. His main focus areas are ETLs and machine learning marketing pipelines. Adam communicates effectively with both highly technical and non-technical specialists.






Preferred Environment

Amazon Web Services (AWS), Python, Terraform, Snowflake, PySpark, Amazon Elastic Container Service (Amazon ECS), ETL, Django, FastAPI, Streaming Data

The most amazing...

...project I've worked on is the development of a Spark metastore data warehouse.

Work Experience

Data Engineering Tech Lead

2020 - 2021
  • Served as a data engineer in charge of two end-to-end projects that involved collecting data from third-party cloud vendors.
  • Developed scheduled ETLs based on Python and Spark that collected data from various APIs and loaded the data to Amazon S3 and PostgreSQL databases. The ETLs were deployed to Airflow and Kubernetes.
  • Built several APIs that exposed data from the data warehouse to downstream consumers.
  • Created and modified ETLs based on AWS Glue. Created a serverless ETL based on Amazon SQS and AWS Lambda.
Technologies: Python 3, Python API, Amazon EKS, Docker, Kubernetes, Amazon S3 (AWS S3), Amazon Simple Queue Service (SQS), Amazon Elastic MapReduce (EMR), Redshift, PostgreSQL, SQL, Spark, Data Engineering

Data Engineer

2019 - 2020
BJ's Wholesale Club
  • Developed an ETL pipeline based on PySpark running on AWS EMR for the extraction of data from Redshift to S3.
  • Contributed to a product recommendation engine based on Spark machine learning.
  • Developed a data quality assessment tool in PySpark.
  • Owned cloud cost reporting. Managed EMR cluster creation/termination in AWS CLI and AWS console.
  • Fully automated the ETL/marketing pipeline in Jenkins.
  • Contributed to the algorithm for identifying new prospective members based on third-party data.
Technologies: Jenkins, AWS CLI, Amazon S3 (AWS S3), Redshift, Python 3, Spark, Amazon Elastic MapReduce (EMR), SQL, Data Engineering

Senior Database Marketing Analyst

2017 - 2018
  • Developed targeting scripts for flagship marketing campaigns with an emphasis on email, mobile push notification, social, and on-site channels. The campaigns often targeted over 50 million users and sometimes resulted in over $100,000 in iGMB annually.
  • Designed, developed, implemented, and maintained multi-armed bandit algorithms written in Python while adhering to eBay's marketing standards and processes. The algorithms were measured to generate $5 million annually.
  • Trained an algorithm for send-time optimization, resulting in a 15% increase in click-through rate in the campaigns where it was implemented.
  • Assessed existing email, social, and mobile marketing campaigns in terms of KPIs such as iGMB, OR, and CTR.
  • Created dashboards in Tableau reporting on the performance of the marketing algorithms I created.
  • Created scripts that moved data between HIVE and Teradata servers.
  • Worked with the largest Teradata DWH in the world and often queried tables with 100+ billion rows.
  • Communicated with stakeholders across multiple timezones.
Technologies: SQL, TensorFlow, Scikit-learn, Tableau, PySpark, Apache Hive, Python, Teradata, Python 3, Spark, Data Engineering

Machine Learning Software Developer

2016 - 2017
  • Developed and trained a machine vision algorithm for recognizing pedestrians in front of a vehicle. The algorithm has since been implemented in a number of vehicle models, including the GM 2019 Chevy.
  • Trained an algorithm to detect dirt on the camera lens. This algorithm played a crucial role in supporting other, more complex self-driving functionalities.
  • Assessed the quality of unstructured annotated video data used for algorithm training.
  • Created a script for synchronizing both structured and unstructured data between the multiple teams that participated in the project.
  • Attended computer science conferences and studied scientific literature to keep up to date with new trends in machine learning and computer science; exchanged knowledge with other team members.
  • Communicated and networked with teammates and stakeholders from France and Ireland.
Technologies: Protocol Buffers, Intel TBB, C++, OpenCV, SQL, MATLAB, Python, Python 3, Data Engineering

Credit Risk Analyst

2014 - 2015
Erste Group
  • Calculated the risk parameters CCF, LGD, and PD according to Basel II.
  • Reduced the overall reserve requirements of Erste Bank subsidiaries by over 7% through improvements I introduced to the statistical engine for calculating the risk parameters CCF, LGD, and PD.
  • Designed and trained a mathematical model in SAS for prediction of the overall loss in the event of a client default. This helped Erste improve the repossession process and reduce expenses.
  • Performed ad-hoc stress-tests for Erste subsidiaries. The results were later submitted directly to the European National Bank.
  • Assessed risk portfolio stability via bootstrapping and Monte Carlo methods.
  • Created interactive dashboards for risk parameter reporting in MS SQL and Excel.
  • Developed a data quality testing system.
Technologies: Microsoft Excel, MATLAB, Microsoft SQL Server, SAS, SQL

Teaching and Research Assistant

2012 - 2014
University of Rochester
  • Led lab lectures for undergraduate students.
  • Developed software for automation of experiments and analyzed data produced by the experiments.
  • Wrote several scientific papers that are available online.
Technologies: MATLAB

Projects

eBay App Push Notification Send Time Optimization Project

The project aimed to improve the click-through rates of mobile push notifications. The introduction of the algorithm resulted in a 15% improvement in the mobile push notification click-through rate.

I decided to achieve this by developing an ML algorithm that predicted the optimum contact time for every user. The algorithm was developed in Python and was trained using scikit-learn. Obtaining training data required the use of Hive and PySpark. I successfully implemented the algorithm into the marketing production environment and instructed marketing analysts on how to use it.
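As a rough illustration of the idea (not the production model, which was trained with scikit-learn on richer features), a frequency-based baseline simply picks each user's historically best-performing send hour. The event schema here is hypothetical:

```python
from collections import defaultdict

def best_send_hours(events):
    """Pick the send hour with the highest historical click rate per user.

    events: iterable of (user_id, hour, clicked) tuples -- a made-up schema.
    Returns {user_id: best_hour}. A trained classifier (as in the actual
    project) would generalize across users and features instead of relying
    on raw per-user frequencies.
    """
    sends = defaultdict(lambda: defaultdict(int))
    clicks = defaultdict(lambda: defaultdict(int))
    for user, hour, clicked in events:
        sends[user][hour] += 1
        clicks[user][hour] += int(clicked)

    best = {}
    for user, hours in sends.items():
        # Highest click-through rate per hour; ties broken by the earlier hour.
        best[user] = max(hours, key=lambda h: (clicks[user][h] / hours[h], -h))
    return best

events = [
    ("u1", 9, True), ("u1", 9, False), ("u1", 20, True), ("u1", 20, True),
    ("u2", 8, False), ("u2", 12, True),
]
print(best_send_hours(events))  # {'u1': 20, 'u2': 12}
```

In practice the predicted hour is written back to the marketing platform so each campaign send is scheduled per user.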

Model for Dynamic Content Optimization and Customization

The aim of the project was to increase the click-through rate of eBay coupon campaigns via machine learning. The algorithm was successfully developed and was measured to generate a 20% lift in click-through rate and iGMB.

The early version of the algorithm was based on a multi-armed bandit. Later versions used a contextual, NLP-based multi-armed bandit. The algorithm was developed using a combination of Teradata SQL and Python. I also developed an interactive Tableau dashboard to monitor the algorithm and measure the KPI lift it delivered.
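A minimal sketch of the early, non-contextual stage — Beta-Bernoulli Thompson sampling over content variants. The variant names and click-through rates below are invented for illustration:

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson sampling over content variants.

    Each arm is a coupon/content variant; the reward is a click (0/1).
    This is a simplified sketch -- the later production versions were
    contextual and NLP-based.
    """
    def __init__(self, arms):
        self.alpha = {a: 1.0 for a in arms}  # prior successes + 1
        self.beta = {a: 1.0 for a in arms}   # prior failures + 1

    def choose(self):
        # Sample a plausible CTR per arm from its posterior; play the best sample.
        samples = {a: random.betavariate(self.alpha[a], self.beta[a])
                   for a in self.alpha}
        return max(samples, key=samples.get)

    def update(self, arm, clicked):
        if clicked:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1

random.seed(0)
bandit = ThompsonBandit(["variant_a", "variant_b"])
true_ctr = {"variant_a": 0.05, "variant_b": 0.12}  # hypothetical ground truth
for _ in range(5000):
    arm = bandit.choose()
    bandit.update(arm, random.random() < true_ctr[arm])
# The better variant accumulates far more traffic as the posterior sharpens.
```

The appeal of the bandit over a fixed A/B split is that traffic shifts toward the winning variant while the test is still running.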

Model for Pedestrian Detection Intended for Self-driving Vehicles

The project aimed to develop a machine vision algorithm capable of detecting pedestrians in front of a vehicle by analyzing the input from the vehicle camera. The algorithm is now fully functional and embedded into several newer vehicle models, including the GM 2019 Chevy.

The machine learning algorithm we decided to use was the AdaBoost cascade classifier combined with a deep neural network. We wrote the training application from scratch in C++. Training had to be multithreaded in order to be efficient. Testing and validation were done in Python. A large database of annotated video data was used for algorithm training.
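To show the boosting principle in miniature: the real detector was a multithreaded C++ cascade over image features, but the same AdaBoost mechanics can be sketched with 1-D threshold stumps on toy data:

```python
import math

def train_adaboost(X, y, rounds=10):
    """Minimal AdaBoost with threshold stumps on 1-D features.

    Illustrative only. X: list of floats, y: list of +1/-1 labels.
    Each round picks the stump with the lowest weighted error, then
    up-weights the samples that stump misclassified.
    """
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in sorted(set(X)):
            for pol in (1, -1):
                # Stump predicts +1 if pol * (x - t) > 0, else -1.
                preds = [1 if pol * (x - t) > 0 else -1 for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = max(err, 1e-10)  # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Re-weight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (1 if pol * (x - t) > 0 else -1) for a, t, pol in ensemble)
    return 1 if score > 0 else -1

X = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
y = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(X, y, rounds=5)
print([predict(model, x) for x in [0.15, 0.85]])  # [-1, 1]
```

A cascade extends this by chaining such ensembles so that obvious non-pedestrian windows are rejected cheaply in the first stages.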

Default Loss Prediction Model

Precise prediction of the total final loss after a client's default is key to reducing the risk associated with different loan products.

I developed a model that relied on the loan-to-value ratio and the value of the collateral. It was done using a combination of SAS and Microsoft SQL Server. The development of the model required extensive data cleaning and data quality testing.
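The structural idea can be sketched as follows. This is not the production SAS model — all parameter values here are hypothetical; it only shows the shape of the relationship between exposure, collateral, and loss:

```python
def predicted_loss(exposure, collateral_value, recovery_rate=0.7, costs=0.05):
    """Toy loss-given-default estimate from collateral (illustrative).

    Loss is the exposure left over after the discounted collateral
    recovery, floored at zero. The recovery rate and cost haircut are
    made-up parameters; the real model estimated these statistically.
    """
    recovery = collateral_value * recovery_rate * (1 - costs)
    return max(0.0, exposure - recovery)

# A loan of 100k against collateral worth 90k:
print(round(predicted_loss(100_000, 90_000), 2))  # 40150.0
```

The loan-to-value ratio enters implicitly: the higher the exposure relative to the collateral, the larger the predicted residual loss.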

Product Recommendation Algorithm

Involved in the development of a recommendation engine based on a collaborative filtering model. The engine could recommend even products that a given customer had not necessarily bought in the past. The solution was implemented in PySpark and based on MLlib, Spark's machine learning (ML) library.
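The project used Spark MLlib's distributed ALS; a single-machine sketch of the same alternating least squares idea, on a tiny made-up ratings matrix, looks like this:

```python
import numpy as np

def als(R, mask, k=2, reg=0.1, iters=20, seed=0):
    """Tiny alternating least squares factorization (illustrative).

    Alternates between fixing item factors and solving a ridge regression
    per user, then swapping roles -- the same update MLlib's ALS runs in
    a distributed fashion. R: ratings matrix, mask: 1 where observed.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    ridge = reg * np.eye(k)
    for _ in range(iters):
        for u in range(n_users):
            idx = mask[u] == 1
            A = V[idx].T @ V[idx] + ridge
            U[u] = np.linalg.solve(A, V[idx].T @ R[u, idx])
        for i in range(n_items):
            idx = mask[:, i] == 1
            A = U[idx].T @ U[idx] + ridge
            V[i] = np.linalg.solve(A, U[idx].T @ R[idx, i])
    return U @ V.T

# Two taste groups: users 0-1 like items 0-1, users 2-3 like items 2-3.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
mask = (R > 0).astype(int)  # treat 0 as "unrated"
pred = als(R, mask)
# Unrated cells get scores consistent with similar users' tastes.
```

Because the model predicts a score for every user-item pair, it can surface products the customer never bought, which was the point of the engine.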

ETL for Recommendation Algorithm

Developed an ETL in PySpark to transfer data from Amazon Redshift into an Amazon S3 data lake. I also developed code for customer-level data aggregation and historicization. Finally, I assessed data quality and investigated and remediated data quality issues.
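The real pipeline ran in PySpark against Redshift and S3; a minimal sketch of the customer-level aggregation and historicization step, with a hypothetical record schema, is:

```python
from collections import defaultdict
from datetime import date

def aggregate_customers(transactions, snapshot):
    """Customer-level aggregation stamped with a snapshot date (illustrative).

    Each load is tagged with the run date so successive snapshots are
    preserved historically rather than overwritten. The field names are
    made up; the production job expressed the same transform in PySpark.
    """
    totals = defaultdict(lambda: {"orders": 0, "spend": 0.0})
    for t in transactions:
        agg = totals[t["customer_id"]]
        agg["orders"] += 1
        agg["spend"] += t["amount"]
    return [
        {"customer_id": c, "snapshot_date": snapshot, **agg}
        for c, agg in sorted(totals.items())
    ]

rows = aggregate_customers(
    [{"customer_id": 1, "amount": 20.0},
     {"customer_id": 1, "amount": 5.0},
     {"customer_id": 2, "amount": 7.5}],
    snapshot=date(2020, 1, 31),
)
print(rows)
```

In the data lake, appending each snapshot as a dated partition makes point-in-time reporting and data quality comparisons between loads straightforward.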
Education

2012 - 2014

Master of Science Degree in Physics

University of Rochester - New York, USA

2008 - 2012

Bachelor's Degree in Physics

National University of Ireland, Galway - Galway, Ireland


Certifications

AWS Certified Developer



AWS Certified Cloud Practitioner



Libraries/APIs

PySpark, Scikit-learn, TensorFlow, OpenCV, Intel TBB, Amazon EC2 API, Python API


Tools

Amazon Elastic MapReduce (EMR), Apache Airflow, Git, Spark SQL, AWS Glue, Bitbucket, Tableau, MATLAB, Microsoft Excel, Jenkins, AWS CLI, Amazon EKS, Amazon Simple Queue Service (SQS), Terraform, Amazon Elastic Container Service (Amazon ECS), GitHub


Frameworks

Spark, Hadoop, Django


Languages

SQL, Python 3, Python 2, C++14, Python, C++, SAS, Snowflake


Paradigms

Unit Testing, Agile, Continuous Integration (CI), ETL


Storage

Amazon S3 (AWS S3), Teradata, Redshift, Microsoft SQL Server, Apache Hive, PostgreSQL, Data Lakes

Platforms

iOS, Windows, Linux, Amazon EC2, Spark Core, Docker, Kubernetes, Amazon Web Services (AWS), Visual Studio Code (VS Code)


Other

Data Analytics, Data Engineering, Recommendation Systems, Machine Learning, Data Quality Analysis, Deep Learning, Protocol Buffers, ETL Tools, Physics, FastAPI, Streaming Data
