Senior Data Analytics Engineer
2021 - 2022
Enterprise Client (via Toptal)
- Architected and delivered the client's entire production logic lift (over 200 SQL workflows) and legacy data migration (over 10 petabytes) from AWS Redshift to Snowflake.
- Automated the daily ingestion jobs via dbt and created a self-updating data catalog via dbt Cloud.
- Established end-to-end CI/CD with dbt Cloud and GitLab pipelines.
- Lifted all SQL logic from Redshift SQL to dbt SQL, using macros and Jinja to gain visibility into very complex SQL logic and visualize it via the catalog.
- Achieved an 80% reduction in the time and cost of materializing all client-facing BI reports, and made them effectively real-time, by using dbt's cascading, dependency-aware triggers instead of Redshift's approach of refreshing all the sequential tables at once (see the sketch after this list).
- Linked all Periscope charts and dashboards to a Git repository indexed in an IDE, enabling large bulk updates to be made and pushed at once. This replaced Periscope's manual logic-update process and significantly increased efficiency.
- Trained newly hired data engineers to manage and extend this entire big data infrastructure.
- Compared Snowpipe, Fivetran, and Stitch on performance, cost, and ease of maintenance.
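For illustration, a minimal Scala sketch of the cascading-refresh idea behind the dbt setup above: when one model changes, only its downstream models are rebuilt, in dependency order, rather than everything at once. The model names and DAG here are hypothetical, and dbt handles this natively; this is just the concept in code.

```scala
// Dependency-aware (cascading) refresh: rebuild only the models downstream
// of a change, instead of refreshing every table on each run.
object CascadingRefresh {
  // Hypothetical model DAG: edges point from a model to its dependents.
  val downstream: Map[String, List[String]] = Map(
    "stg_orders"   -> List("fct_orders"),
    "stg_payments" -> List("fct_orders"),
    "fct_orders"   -> List("rpt_revenue", "rpt_churn"),
    "rpt_revenue"  -> Nil,
    "rpt_churn"    -> Nil
  )

  // Collect the changed model and everything reachable from it, breadth-first
  // (a valid rebuild order for this layered DAG).
  def affected(changed: String): List[String] = {
    val seen  = scala.collection.mutable.LinkedHashSet.empty[String]
    val queue = scala.collection.mutable.Queue(changed)
    while (queue.nonEmpty) {
      val m = queue.dequeue()
      if (seen.add(m)) downstream.getOrElse(m, Nil).foreach(d => queue.enqueue(d))
    }
    seen.toList
  }

  def main(args: Array[String]): Unit =
    // Only the changed model and its descendants get refreshed.
    affected("stg_orders").foreach(m => println(s"refreshing $m"))
}
```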
Technologies: Snowflake, dbt (data build tool), Spark, GitLab, Data Migration, Data Warehouse Design, Big Data, Data Pipelines, ELT, Big Data Architecture, Data Architecture

Cloud Solutions Architect
2020 - 2021
Enterprise Client (via Toptal)
- Worked on the orchestration and automation of the workflows via Azure Data Factory.
- Optimized and partitioned storage in Azure Data Lake Storage (ADLS) Gen2.
- Implemented complex, strongly typed Scala Spark workloads in Azure Databricks, along with dependency management and Git integration.
- Implemented real-time, low-cost, low-latency streaming workflows that at their peak processed more than 2 million raw JSON blobs per second, integrating Azure Blob Storage, Azure Event Hubs, and Azure Queue Storage via the ABS-AQS connector.
- Created a multi-layered ELT platform consisting of raw/bronze (Azure Blob Storage), current/silver (Delta Lake), and mapped/gold (Delta Lake) layers (see the sketch after this list).
- Balanced the cost of computing by spinning up clusters on demand versus persisting them.
- Made big data available for efficient, real-time analysis across the client organization via Delta tables, which provided indexed and optimized stores, ACID transaction guarantees, and table- and row-level access controls.
- Tied everything together in end-to-end workflows that could be refreshed with a few clicks or automated as jobs.
- Led a team of five (four developers and one solutions architect) to productionize big data workflows in Azure Cloud, enabling the client to sunset its legacy applications and move to far more reliable and scalable production workflows.
- Enabled a wide diversity of use cases and future-proofed them by relying on open-source software and open standards.
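A minimal sketch of the bronze-to-silver hop described above, assuming a hypothetical event schema and storage paths. The production pipeline consumed blobs via the Databricks ABS-AQS source; a plain JSON file stream stands in here so the sketch stays vendor-neutral.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Bronze -> silver: stream raw JSON blobs from the landing (bronze) location
// into a typed, deduplicated, date-partitioned silver Delta table.
object BronzeToSilver {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

    // Hypothetical event schema; the real blobs and fields will differ.
    val schema = StructType(Seq(
      StructField("eventId", StringType),
      StructField("eventTime", TimestampType),
      StructField("payload", StringType)
    ))

    val bronze = spark.readStream
      .schema(schema)
      .json("abfss://bronze@<account>.dfs.core.windows.net/events/") // hypothetical path

    val silver = bronze
      .withWatermark("eventTime", "1 hour")            // bound deduplication state
      .dropDuplicates("eventId", "eventTime")          // idempotent replay of blobs
      .withColumn("ingestDate", to_date(col("eventTime")))

    silver.writeStream
      .format("delta")
      .option("checkpointLocation", "abfss://silver@<account>.dfs.core.windows.net/_chk/events/")
      .partitionBy("ingestDate")
      .start("abfss://silver@<account>.dfs.core.windows.net/events/")
      .awaitTermination()
  }
}
```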
Technologies: Scala, Spark, Azure, Azure Data Factory, Azure Data Lake, Azure Databricks, Delta Lake, Data Engineering, ETL, Data Migration, Databricks, Big Data, Data Pipelines, ELT, Big Data Architecture, Azure Cloud Services, Azure Event Hubs, Data Architecture, Azure Data Lake Analytics, Data Lakes

Lead Data Engineer
2019 - 2020
Stealth-mode AI startup (Series A, $20 million)
- Architected and implemented a distributed machine learning platform.
- Productionized 20+ machine learning models via Spark MLlib.
- Built products and tools to reduce time to market (TTM) for machine learning projects, cutting the startup's TTM from design to production by 50%.
- Productionized eight Scala Spark applications for the ETL layer feeding the downstream machine learning models.
- Used Spark SQL for ETL, and Spark Structured Streaming and Spark MLlib for analytics (see the sketch after this list).
- Led a team of six comprising three data scientists, two back-end engineers, and one front-end engineer. Delivered a solution whose back-end layer talked to the front end via a REST API and launched and managed Spark jobs on demand.
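A minimal sketch of productionizing one MLlib model as described above: assemble features, fit a pipeline, and persist it for on-demand scoring. The feature columns, label, and paths are hypothetical.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

// Train one MLlib model and persist it so downstream jobs (e.g. the
// REST-driven back end) can reload and score with it on demand.
object TrainAndPersist {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("train-and-persist").getOrCreate()

    // Hypothetical training table produced by the upstream ETL layer.
    val training = spark.read.parquet("hdfs:///ml/features/training")

    val assembler = new VectorAssembler()
      .setInputCols(Array("f1", "f2", "f3")) // hypothetical feature columns
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setLabelCol("label")
      .setFeaturesCol("features")

    // Fit feature assembly and model as one reusable pipeline.
    val model = new Pipeline().setStages(Array(assembler, lr)).fit(training)

    model.write.overwrite().save("hdfs:///ml/models/churn-lr") // hypothetical path
  }
}
```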
Technologies: Data Engineering, Apache Hive, Apache Impala, SQL, Apache Spark, Scala, Bash, Linux, Spark Structured Streaming, Machine Learning, MLlib, Spark, Spark SQL, ETL, Big Data, Data Pipelines, ELT, Big Data Architecture, Data Architecture, Data Lakes

Senior Data Engineer
2018 - 2019
Dow Chemical (Fortune 62)
- Productionized five Scala Spark apps for ETL and wrote multiple Bash scripts to automate these jobs.
- Architected and productionized a Scala Spark app for validating Oracle source tables against their ingested counterparts in HDFS. Users could dynamically choose either a high-level or a data-level validation; on any discrepancy, the app output the exact columns and rows that mismatched between source and destination (see the sketch after this list).
- Reduced engineers' manual debugging workload by over 99%, shrinking it to running the app and reading its human-readable output file.
- Delivered the entire ETL and validation project ahead of schedule.
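A minimal sketch of the data-level validation described above, assuming hypothetical Oracle connection details, table names, and paths: rows are diffed in both directions so the report shows exactly what mismatched and where.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Diff an Oracle source table against its ingested HDFS counterpart and
// report the exact rows present on only one side.
object ValidateIngestion {
  // Rows missing from the destination, and rows extra in the destination.
  def diff(source: DataFrame, dest: DataFrame): (DataFrame, DataFrame) =
    (source.exceptAll(dest), dest.exceptAll(source))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("validate-ingestion").getOrCreate()

    // Hypothetical connection details and table names.
    val source = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "SALES.ORDERS")
      .option("user", "reader")
      .option("password", sys.env("ORACLE_PW"))
      .load()

    val dest = spark.read.parquet("hdfs:///ingested/sales/orders")

    // Align destination column order to the source before comparing.
    val (missing, extra) = diff(source, dest.select(source.columns.map(dest(_)): _*))

    // Human-readable output: exactly which rows mismatch, and in which direction.
    if (missing.isEmpty && extra.isEmpty) println("VALIDATION PASSED")
    else {
      println(s"Rows in source but not destination: ${missing.count()}")
      missing.show(20, truncate = false)
      println(s"Rows in destination but not source: ${extra.count()}")
      extra.show(20, truncate = false)
    }
  }
}
```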
Technologies: Data Engineering, Apache Hive, Apache Impala, SQL, Apache Spark, Scala, Hadoop, Bash, Linux, Oracle Database, Spark SQL, ETL, Big Data, Data Pipelines, ELT, Big Data Architecture, Data Architecture

Senior Data Engineer
2018 - 2019
Boston Scientific (Fortune 319)
- Designed and implemented a Scala Spark application to build Apache Solr indices from Hive tables. The app rolled back on any failure, reducing downtime for downstream consumers from ~3 hours to ~10 seconds.
- Implemented a Spark Structured Streaming application to ingest data from Kafka streams and upsert it into Kudu tables in a Kerberized cluster (see the sketch after this list).
- Implemented multiple shell scripts to automate Spark jobs, Apache Sqoop jobs, Impala commands, and more.
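A minimal sketch of the Kafka-to-Kudu upsert path described above, assuming hypothetical brokers, topic, record schema, and table names; Kerberos configuration is omitted.

```scala
import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Parse JSON records off a Kafka topic and upsert each micro-batch into a
// Kudu table. Upserts are idempotent, so replayed micro-batches are safe.
object KafkaToKudu {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("kafka-to-kudu").getOrCreate()

    // Hypothetical record schema.
    val schema = StructType(Seq(
      StructField("id", StringType),
      StructField("reading", DoubleType),
      StructField("updatedAt", TimestampType)
    ))

    val records = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // hypothetical brokers
      .option("subscribe", "device-readings")            // hypothetical topic
      .load()
      .select(from_json(col("value").cast("string"), schema).as("r"))
      .select("r.*")

    val kuduContext = new KuduContext("kudu-master:7051", spark.sparkContext)

    // Explicit function type sidesteps the Scala/Java foreachBatch overload ambiguity.
    val upsertBatch: (DataFrame, Long) => Unit = (batch, _) =>
      kuduContext.upsertRows(batch, "impala::default.device_readings") // hypothetical table

    records.writeStream
      .option("checkpointLocation", "hdfs:///chk/kafka-to-kudu")
      .foreachBatch(upsertBatch)
      .start()
      .awaitTermination()
  }
}
```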
Technologies: Data Engineering, Apache Hive, Apache Impala, SQL, Apache Spark, Scala, Hadoop, Bash, Linux, Kudu, Spark Structured Streaming, Apache Solr, Spark SQL, ETL, Big Data, Data Pipelines, ELT, Big Data Architecture, Data Architecture

Senior Data Engineer
2017 - 2018
General Mills (Fortune 200)
- Consumed social marketing data from various sources, namely the Google Analytics API, Oracle databases, and multiple streaming sources.
- Productionized a Scala Spark application to ingest >100 GB of data as a daily batch job, partition it, and store it as Parquet in HDFS with corresponding Hive partitions at the query layer (see the sketch after this list). The app replaced a legacy Oracle solution and reduced runtime by 90%.
- Used Spark SQL and Spark Structured Streaming for ETL.
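A minimal sketch of the daily batch described above, assuming hypothetical input and warehouse paths, partition column, and Hive table name.

```scala
import org.apache.spark.sql.SparkSession

// Load one day of raw data, write it to HDFS as date-partitioned Parquet,
// and register the new partitions with the Hive/Impala query layer.
object DailyBatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("daily-batch")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical landing location for the day's extracts.
    val raw = spark.read.json("hdfs:///landing/social/2018-01-15/")

    raw.write
      .mode("append")
      .partitionBy("event_date")            // hypothetical partition column
      .parquet("hdfs:///warehouse/social/") // assumed to be the external table's location

    // Pick up the newly written partitions in the Hive metastore.
    spark.sql("MSCK REPAIR TABLE analytics.social_events") // hypothetical table
  }
}
```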
Technologies: Data Engineering, Apache Hive, Apache Impala, SQL, Apache Spark, Scala, Hadoop, Spark Structured Streaming, Spark SQL, ETL, Big Data, Data Pipelines, ELT, Big Data Architecture, Data Architecture

Software Engineer
2015 - 2016
MetLife Insurance (Fortune 44)
- Acted as the product manager for a motorcycle insurance web app that grew to become the primary landing site for motorcycle insurance leads.
- Owned the master build through to production: deployed all builds and held primary responsibility for build stability.
- Led Scrum development for client teams of 30+ developers, testers, and analysts.
- Architected and supported the solution within the client organization.
Technologies: Model View Controller (MVC), Agile