Blake Byerly, Developer in Seattle, WA, United States

Blake Byerly

Verified Expert in Engineering

Bio

Blake has both startup and large-enterprise experience leveraging machine learning (deep learning and classical techniques) to drive value. He has applied machine learning to a variety of problems, including network event correlation, incident forecasting, resource-constrained scheduling, and eCommerce applications at scale.

Portfolio

Knock
Amazon Web Services (AWS), Terraform, Kubernetes, Python, Go, Scala
Amazon Web Services (AWS)
AWS Lambda, Java, Docker, Python 3, Coral Services Framework...
Zulily
Kubernetes, H2O, Spark, Java, Amazon Web Services (AWS)...

Experience

Availability

Part-time

Preferred Environment

macOS

The most amazing...

...enterprise applications (terabytes of data) I've migrated were from Dataproc (Google Cloud-managed Spark) to an in-house GKE solution leveraging Spark on Kubernetes.
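
As an illustration only, the sketch below shows how a PySpark job can be pointed at a Kubernetes cluster instead of Dataproc; the API server URL, namespace, container image, and data path are placeholders, not the actual production configuration.

```python
from pyspark.sql import SparkSession

# Hypothetical Spark-on-Kubernetes driver configuration; every value below
# (API server, namespace, image, bucket) is a placeholder for illustration.
spark = (
    SparkSession.builder
    .appName("example-aggregation-job")
    .master("k8s://https://<gke-api-server>:443")
    .config("spark.kubernetes.namespace", "spark-jobs")
    .config("spark.kubernetes.container.image", "example.registry/spark-py:3.3.0")
    .config("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
    .config("spark.executor.instances", "10")
    .getOrCreate()
)

# The job logic itself is unchanged by the migration; only the cluster manager
# moves from Dataproc (YARN) to Kubernetes.
events = spark.read.parquet("gs://example-bucket/events/")
events.groupBy("event_type").count().show()
```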

Work Experience

Senior Software Engineer

2022 - 2023
Knock
  • Designed a secure and cost-efficient infrastructure supporting various data engineering initiatives.
  • Migrated Airflow, data pipelines, and supporting services from AWS-managed solutions to self-managed Kubernetes (Amazon EKS) infrastructure (see the sketch below).
  • Designed and implemented an internal VPN solution that gives engineering teams access to sensitive data.
Technologies: Amazon Web Services (AWS), Terraform, Kubernetes, Python, Go, Scala
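
The sketch below is a minimal, hypothetical illustration of what a pipeline task looks like once Airflow schedules work as pods on EKS. The DAG name, namespace, and image are made up, and the exact KubernetesPodOperator import path depends on the version of the cncf.kubernetes provider package.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

# Illustrative DAG: after the migration, each pipeline step runs as a pod on
# the EKS cluster rather than on an AWS-managed service.
with DAG(
    dag_id="daily_ingest",              # hypothetical DAG name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = KubernetesPodOperator(
        task_id="ingest",
        name="ingest",
        namespace="data-eng",                      # assumed namespace
        image="example.registry/ingest:latest",    # placeholder image
        cmds=["python", "-m", "pipelines.ingest"],
        get_logs=True,
    )
```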

Software Engineer

2021 - 2022
Amazon Web Services (AWS)
  • Was a founding member of a new AWS AI services team supporting low-code development initiatives.
  • Designed and implemented the business logic for new services.
  • Built the API supporting the science team's model training and development.
  • Designed and implemented the benchmarking platform (see the infrastructure sketch below).
Technologies: AWS Lambda, Java, Docker, Python 3, Coral Services Framework, AWS Cloud Development Kit (CDK), AWS CloudFormation, AWS Fargate, JUnit, Mockito, Jackson, Amazon CloudWatch, Cloud Architecture, Design
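
As a hedged illustration of the infrastructure side, the sketch below shows how a small Lambda-backed benchmarking endpoint might be declared with AWS CDK v2 in Python. The stack, function, handler, and asset names are hypothetical; the profile lists Java and the Coral Services Framework for the services themselves.

```python
from aws_cdk import App, Duration, Stack, aws_apigateway as apigw, aws_lambda as _lambda
from constructs import Construct


class BenchmarkingStack(Stack):
    """Hypothetical CDK stack: a Lambda handler fronted by API Gateway."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        handler = _lambda.Function(
            self, "BenchmarkHandler",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="app.handler",                   # placeholder handler
            code=_lambda.Code.from_asset("lambda"),  # placeholder asset dir
            timeout=Duration.seconds(30),
        )
        apigw.LambdaRestApi(self, "BenchmarkApi", handler=handler)


app = App()
BenchmarkingStack(app, "BenchmarkingStack")
app.synth()
```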

Machine Learning Engineer

2018 - 2021
Zulily
  • Adapted an API for deploying a scalable, cloud-based machine learning model using Go, Kubernetes, and Docker, and scheduled its execution with Apache Airflow.
  • Worked on the back-end process (Java) for writing data to an in-memory Redis cache.
  • Developed exploratory data analysis (EDA) and validation metrics (Java/H2O) for a machine learning model used by a daily email job.
  • Migrated Spark jobs from Dataproc to GKE, moving from GCP-managed Spark to a customized environment managed by the team.
  • Integrated Kubecost, Prometheus, and Istio into a GKE cluster to support the data science team's Kubeflow capabilities.
  • Worked on NLP-based SKU similarity matching to support a best-price promise relative to competitors (see the sketch below).
  • Optimized the collection of SKUs with respect to aggregate demand.
  • Participated in on-call rotations and DevOps monitoring of GCP and AWS cloud infrastructure.
Technologies: Kubernetes, H2O, Spark, Java, Amazon Web Services (AWS), Google Cloud Platform (GCP), Apache Airflow, Bash, Java 8, Python 3, Docker, Natural Language Processing (NLP)
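
The SKU-matching work can be made concrete with a small, purely illustrative sketch: character n-gram TF-IDF vectors plus cosine similarity to pair catalog titles with competitor titles. The titles below are made up, and the production system was more involved than this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog and competitor SKU titles (illustrative data only).
our_skus = [
    "acme stainless steel water bottle 32 oz",
    "contoso kids rain boots size 10 yellow",
]
competitor_skus = [
    "water bottle 32oz stainless steel by acme",
    "yellow rain boots for kids sz 10 contoso",
    "generic ceramic coffee mug 12 oz",
]

# Character n-grams are robust to word order and small spelling differences.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
our_vecs = vectorizer.fit_transform(our_skus)
comp_vecs = vectorizer.transform(competitor_skus)

scores = cosine_similarity(our_vecs, comp_vecs)
for i, row in enumerate(scores):
    j = row.argmax()
    print(f"{our_skus[i]!r} -> {competitor_skus[j]!r} (similarity {row[j]:.2f})")
```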

Machine Learning Engineer

2018 - 2018
Boldiq
  • Developed an AI-based optimization engine that uses deep reinforcement learning to learn strategies for resource-constrained scheduling (see the toy sketch below).
  • Maintained and debugged Solver, the company's proprietary real-time optimization engine for private aviation scheduling.
Technologies: Dlib, C++, C
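
To make the reinforcement learning angle concrete, here is a toy tabular Q-learning sketch for a simplified scheduling problem: assign N jobs to N one-job slots while minimizing priority-weighted delay. It is purely illustrative; the job weights and rewards are invented, and the actual work used deep reinforcement learning on a far richer aviation-scheduling problem.

```python
import random
from collections import defaultdict

# Toy problem: place N jobs into N one-job slots; a job scheduled in a later
# slot incurs a delay cost proportional to its (invented) priority weight.
N_JOBS = 4
WEIGHTS = [3, 1, 4, 2]                   # hypothetical job priorities
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2

Q = defaultdict(float)                   # Q[(state, action)] -> value


def free_slots(used):
    return [s for s in range(N_JOBS) if s not in used]


def run_episode():
    used = frozenset()
    for job in range(N_JOBS):
        state = (job, used)
        legal = free_slots(used)
        if random.random() < EPSILON:
            slot = random.choice(legal)
        else:
            slot = max(legal, key=lambda a: Q[(state, a)])
        reward = -WEIGHTS[job] * slot    # later slot = more weighted delay
        next_used = used | {slot}
        if job + 1 < N_JOBS:
            next_best = max(Q[((job + 1, next_used), a)] for a in free_slots(next_used))
        else:
            next_best = 0.0
        Q[(state, slot)] += ALPHA * (reward + GAMMA * next_best - Q[(state, slot)])
        used = next_used


for _ in range(20000):
    run_episode()

# Greedy rollout of the learned policy: higher-weight jobs tend to end up in earlier slots.
used = frozenset()
for job in range(N_JOBS):
    slot = max(free_slots(used), key=lambda a: Q[((job, used), a)])
    print(f"job {job} (weight {WEIGHTS[job]}) -> slot {slot}")
    used |= {slot}
```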

Senior Data Scientist

2016 - 2017
Cisco Systems
  • Oversaw AI for optimizing network monitoring. Developed a machine learning pipeline for analyzing Cisco's unstructured data (pulled through Splunk's REST API) using ensemble techniques from the scikit-learn library; the initiative improved event correlation on Cisco CMS's network management platform (a sketch of the approach follows this section).
  • Extended the initiative to network incident forecasting using deep learning on a customized architecture (NLP and semantic analysis via CNNs) built with Keras on a TensorFlow backend.
  • Architected "Splunk to Excel," an automated reporting mechanism.
Technologies: Splunk, TensorFlow, Keras, CVXOPT, Scikit-learn, Matplotlib, SciPy, Pandas, NumPy, Python
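
As a rough sketch of the ensemble side of that pipeline, the snippet below trains a soft-voting ensemble of random forest and gradient boosting classifiers. The random features and labels stand in for event attributes that, in the real project, were derived from data pulled through Splunk's REST API.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder data: random features and a synthetic "events are correlated" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Soft-voting ensemble combining two classical learners from scikit-learn.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(classification_report(y_test, ensemble.predict(X_test)))
```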

Intern/Research Assistant

2015 - 2016
Ecole Polytechnique Federale de Lausanne
  • Developed an embedded database and SDC-constrained scheduling software in Java for high-level synthesis and data-flow programming, with applications to embedded systems. Was accepted into EPFL's doctoral program.
Technologies: MySQL, Spring, Java

Intern/Research Assistant

2014 - 2015
Swisscom
  • Developed insertion loss and crosstalk cable models for the fourth-generation DSL standard, G.fast. Using vectoring to achieve higher data rates requires an accurate understanding of how the cable alters the intended signaling, and the models were used in the Broadband Forum for standardization purposes.
Technologies: Oscilloscopes & Tester Equipment, Processing, Optimization, MATLAB

Machine Learning Class Definition

This Python script was one of several from a project in which I analyzed the event correlation of network incidents. It defines a class encapsulating the machine learning functionality needed elsewhere in the project.
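
The original script isn't reproduced here; the class below is a plausible reconstruction, assuming scikit-learn, of a wrapper bundling the estimator, training, cross-validation, and scoring helpers that other parts of the project would import. The class name and default estimator are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_score


class CorrelationModel:
    """Hypothetical reconstruction of the project's shared ML helper class."""

    def __init__(self, estimator=None):
        # Default estimator is an assumption; the real project may have differed.
        self.estimator = estimator or RandomForestClassifier(n_estimators=200)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "CorrelationModel":
        self.estimator.fit(X, y)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return self.estimator.predict(X)

    def cross_validate(self, X: np.ndarray, y: np.ndarray, folds: int = 5) -> float:
        # Mean cross-validated F1 as a quick pre-deployment sanity check.
        return cross_val_score(self.estimator, X, y, cv=folds, scoring="f1").mean()

    def score(self, X: np.ndarray, y: np.ndarray) -> float:
        return f1_score(y, self.predict(X))
```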

Education

2012 - 2015

Master's Degree in Electrical Engineering and Information Technology

ETH-Zurich - Zurich, Switzerland

2009 - 2012

Bachelor's Degree in Electrical Engineering

University of Texas at Austin - Austin, Texas

Libraries/APIs

NumPy, Pandas, SciPy, Matplotlib, Scikit-learn, Dlib, Keras, TensorFlow, Jackson

Tools

IntelliJ IDEA, BigQuery, Apache Airflow, Splunk, MATLAB, Google Kubernetes Engine (GKE), AWS Cloud Development Kit (CDK), AWS CloudFormation, AWS Fargate, Amazon CloudWatch, Terraform

Languages

Python, Java, Bash, C, Processing, Java 8, Python 3, C++, SQL, Go, Scala

Frameworks

Apache Spark, Spring, Coral Services Framework, JUnit, Mockito

Platforms

Amazon Web Services (AWS), Kubernetes, Docker, H2O, Google Cloud Platform (GCP), Ubuntu, Jupyter Notebook, Ubuntu 14.04, Visual Studio 2017, Windows, AWS Lambda, macOS

Paradigms

Scrum, ITIL, Agile

Storage

MySQL, SQL CE, Redis

Other

Google BigQuery, Machine Learning, Optimization Algorithms, Natural Language Processing (NLP), Generative Pre-trained Transformers (GPT), Windows 10, Optimization, Oscilloscopes & Tester Equipment, CVXOPT, Cloud Architecture, Design
