Matthew Alhonte, Data Scientist and Developer in New York, NY, United States

Member since August 21, 2018
Matt has officially worked as a Python-based data scientist for the past six years, and he has spent the last ten at the intersection of statistics and programming (since before the term "data scientist" caught on). He combines strong technical skills with a rigorous background in experiment design and statistical inference. More recently, he's been focusing on machine learning, including natural language processing and computer vision.
Preferred Environment

PyCharm, Git, Spacemacs, VS Code, Jupyter

The most amazing...

...thing I've done is to reverse-engineer an undocumented file format containing electrophysiology readings.


  • Data Science Consultant

    2013 - PRESENT
    Ophidian Scientific
    • Assisted numerous small clients with data-related work, ranging from data science and analysis to data engineering and machine learning engineering.
    • Designed and built ETL pipelines in Python, Dask, and Prefect.
    • Oversaw migrations between Google Sheets and Airtable; Airtable automation was executed in Python.
    • Used operations research libraries in Python to optimize teams for the sports betting website FanDuel.
    • Built a natural language processing (NLP) classifier for an article archive of a finance-based publication.
    Technologies: Amazon Web Services (AWS), PostgreSQL, Keras, XGBoost, Random Forests, Spark, Database Design, Experimental Design, Clojure, Docker, Jupyter, Time Series, Pandas, SQL, Machine Learning, Natural Language Processing (NLP), Operations Research, Data Visualization, ETL, Scientific Data Analysis, Data Engineering, Data Science, Python
  • Data Scientist & Data Architect

    2021 - 2021
    Birch Infrastructure
    • Assisted with architecting data infrastructure for a utility-scale renewable energy and data center company.
    • Created data pipelines with Prefect, mostly stitching together Google Cloud Functions and Cloud Run jobs.
    • Managed a BigQuery data warehouse with dbt, creating table schemas and transformations.
    • Set up data infrastructure (including Prefect and dbt).
    Technologies: Google Cloud Platform (GCP), BigQuery, Data Build Tool (dbt), Prefect, Python, Serverless
  • Senior Data Scientist

    2018 - 2019
    The University of Colorado — Office of Data Analytics
    • Performed statistical analyses and modeling to support student success and helped establish practices during a restructuring of the university’s office of data analytics.
    • Created and presented findings and visualizations to high-level administrators with Jupyter and Zeppelin.
    • Developed a Monte Carlo simulation-based model to predict semester-by-semester student retention.
    • Built a Bayesian model of re-offense after student misconduct.
    • Modeled the effects of different kinds of financial aid with XGBoost.
    • Created a model to predict student GPAs with scikit-learn and Keras.
    Technologies: Amazon Web Services (AWS), XGBoost, Random Forests, Experimental Design, Data Visualization, Time Series, SQL, Data Science, Machine Learning, Oracle Database, Zeppelin, Jupyter, Keras, PySpark, Scikit-learn, Pandas, Python
  • Data Engineer

    2017 - 2018
    NOMI Beauty
    • Designed and built the data infrastructure for a startup that made it easier to book hair and makeup appointments in hotel rooms.
    • Architected a big data pipeline with Spark, Kafka, and Cassandra.
    • Built data dashboards in Tableau for the operations team.
    • Designed an ETL for survey data from Typeform's API into MySQL.
    • Created reports in Jupyter notebooks with data visualizations in Python with Altair and Seaborn.
    • Designed and implemented a database schema in MySQL.
    • Designed and supported ETL from Couchbase to MySQL using Python.
    Technologies: Amazon Web Services (AWS), Spark, Database Design, Data Visualization, SQL, Jupyter, Simulations, Cassandra, Apache Kafka, PySpark, MySQL, Pandas, Python
  • Data Science and Blockchain Integration Consultant

    2017 - 2017
    Tanktwo, Inc.
    • Architected a blockchain-based solution for managing IoT devices and the data they generate.
    • Created a demo of a potential network using Hyperledger.
    • Simulated a private blockchain network in action, using Python.
    • Helped present a demo to the venture capitalists who were looking to invest.
    • Conducted research on optimal blockchain implementation to suit business needs.
    Technologies: Amazon Web Services (AWS), Jupyter, Data Visualization, Time Series, Hyperledger, Pandas, Python
  • Data Science Consultant

    2014 - 2017
    Hospital for Special Surgery
    • Worked on data science topics in a neurology lab that investigated intraoperative neurophysiological monitoring (IONM)—monitoring muscles and nerves during surgery to prevent damage.
    • Reverse-engineered an undocumented file format containing biosignal data.
    • Attempted to classify nerve conduction readings as indicating injury or anesthesia response using Scikit-learn.
    • Visualized biosignal data with Plotly and presented findings.
    • Investigated using Higuchi Fractal Dimension of nerve conduction readings taken during surgery as a means of assessing potential damage.
    • Analyzed biosignal data with a Python data suite (NumPy, Pandas, and SciPy).
    Technologies: Experimental Design, Data Visualization, Time Series, Data Science, Machine Learning, Scikit-learn, PyEEG, Jupyter, Plotly, SciPy, Pandas, NumPy, Python
  • Natural Language Processing Consultant

    2015 - 2015
    New York City Department of Administrative Services
    • Scraped PDFs with Python to help digitize the back catalog for a publication, The City Record.
    • Helped design a schema for entries (such as extracting addresses).
    • Created data cleaning regimens to standardize entries from over a hundred city agencies reported in different formats.
    • Used Python and NLTK to perform exploratory natural language processing (NLP) on a century-long corpus of publications.
    • Worked to integrate a pipeline into their Microsoft Access database.
    Technologies: Jupyter, Data Visualization, Data Science, Machine Learning, Python, NLTK
  • Integration and Development Consultant

    2013 - 2014
    Broadband Technologies Group
    • Provided computer vision-based assistance for digitizing video archives.
    • Used OpenCV and Python to tag damaged video areas.
    • Used Python to automatically fix certain types of damaged videos.
    • Helped architect an Android application to deliver simultaneous subtitles for live performances.
    • Prepared presentations with Jupyter.
    Technologies: Jupyter, Data Visualization, OpenCV, Python
  • Research Assistant

    2008 - 2013
    Hunter College
    • Designed and validated a novel psychometric scale.
    • Analyzed survey data in SPSS.
    • Presented findings at research conferences.
    • Maintained a relationship with the lab after graduation, eventually moving from SPSS-based data analysis to Python.
    • Worked on the publication of older data.
    Technologies: Experimental Design, Data Visualization, Data Science, SciPy, Python, SPSS
  • Summer Research Assistant

    2009 - 2010
    Yale School of Medicine
    • Designed and piloted a small study investigating psychopathic traits and behavior during an ultimatum game.
    • Analyzed galvanic skin response (GSR) data.
    • Ran research participants through computer-based tasks in Presentation and DMDX.
    • Analyzed data from surveys and computer-based tasks.
    • Built and maintained a database of participants.
    Technologies: Experimental Design, Data Visualization, Data Science, DMDX, SPSS



  • Languages

    Python, SQL, Snowflake, Clojure, Rust
  • Libraries/APIs

    Pandas, Scikit-learn, TensorFlow Deep Learning Library (TFLearn), XGBoost, NumPy, Keras, Dask, SciPy, OpenCV, NLTK, PySpark, TensorFlow
  • Tools

    DataViz, Jupyter, VS Code, Spacemacs, PyCharm, SPSS, Plotly, DMDX, Git, Amazon SageMaker, BigQuery
  • Paradigms

    Data Science, Database Design, Agile, Functional Programming, ETL
  • Platforms

    Jupyter Notebook, Amazon Web Services (AWS), Docker, Hyperledger, Oracle Database, Linux, Zeppelin, Apache Kafka, Google Cloud Platform (GCP)
  • Other

    Data, Statistical Data Analysis, Exploratory Data Analysis, Unstructured Data Analysis, Complex Data Analysis, Statistical Methods, Statistical Modeling, Statistical Forecasting, Statistical Analysis, Statistical Significance, Random Forests, Random Forest Regression, Experimental Design, Time Series, Machine Learning, Predictive Modeling, Data Visualization, Data Analysis, Data Analytics, Statistics, Computational Statistics, Bayesian Statistics, Statistical Programming, Amazon Machine Learning, Tf-idf, Convolutional Neural Networks, Analysis of Variance (ANOVA), Dashboards, Analytical Dashboards, Data Build Tool (dbt), Deep Learning, Natural Language Processing (NLP), Mathematical Modeling, Data Engineering, Operations Research, Simulations, PyEEG, Scientific Data Analysis, Prefect, Serverless
  • Storage

    Databases, NoSQL, Cassandra, PostgreSQL, MySQL

  • Bachelor of Arts Degree in Psychology
    2006 - 2012
    Hunter College - New York City, NY, USA


  • Machine Learning Engineer Nanodegree
