Felipe Batista

Computer Vision Developer in Belo Horizonte - State of Minas Gerais, Brazil

Member since August 7, 2015
Felipe has 8+ years of experience in machine learning and full-stack software development. He's currently focused on cutting-edge technologies such as TensorFlow, Keras, PyTorch, OpenCV, and most of the Python data science stack. He is an AWS Certified Solutions Architect skilled in implementing deep learning models from research papers, with a focus on computer vision and reinforcement learning.


  • Totem AI
    Google Cloud, AWS, Sklearn, PyTorch, Keras, TensorFlow, Python
  • ShopYak
    Express.js, Grunt, SCSS, PostgreSQL, Node.js, Angular, Google Cloud, AWS...






Preferred Environment

GitHub, PyCharm, Linux, macOS

The most amazing...

...project I've implemented was a deep learning-based digital signal processing pipeline for freezing-of-gait detection in patients with Parkinson's disease.


Experience

  • Founder and Software Engineer

    2012 - PRESENT
    Totem AI
    • Developed image classification pipelines using convolutional neural networks (CNNs), data augmentation, and transfer learning for real-time object detection, face classification/recognition, and semantic segmentation.
    • Developed digital signal processing pipelines for healthcare (freezing of gait detection for Parkinson's disease patients using sensor data).
    • Developed real-time machine learning models for fantasy football, including analysis of optimal lineup selection.
    • Implemented scientific papers containing state-of-the-art research related to computer vision, digital signal processing, time series modeling for financial markets, and NLP for data enrichment.
    Technologies: Google Cloud, AWS, Sklearn, PyTorch, Keras, TensorFlow, Python
  • Data Scientist and Full Stack Software Engineer

    2015 - 2018
    ShopYak
    • Developed a neural network with Q-learning (reinforcement learning) to perform automated A/B testing of different website layouts for eCommerce stores. The goal was to adjust each store's layout, fonts, and colors to maximize conversions.
    • Designed and developed portions of the front-end using AngularJS and SCSS. Set up the build process using Grunt.
    • Developed a significant portion of the back-end using Node.js, Express, and PostgreSQL.
    • Integrated Stripe (standard and Connect) in both the front-end and the back-end.
    • Deployed on AWS with HTTPS, CloudFront, and ELB.
    Technologies: Express.js, Grunt, SCSS, PostgreSQL, Node.js, Angular, Google Cloud, AWS, Sklearn, PyTorch, Keras, TensorFlow, Python
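The layout-testing idea above can be sketched as a small epsilon-greedy bandit. This is an illustrative stand-in, not the production system: the layout names and conversion rates are invented, and a plain tabular value estimate replaces the neural network.

```python
import random

# Hypothetical layout variants and their true (unknown-to-the-agent)
# conversion rates; both are invented for this sketch.
LAYOUTS = ["light-serif", "dark-sans", "minimal"]
TRUE_RATES = {"light-serif": 0.03, "dark-sans": 0.05, "minimal": 0.04}

def run_bandit(n_visits=50000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    q = {layout: 0.0 for layout in LAYOUTS}  # estimated conversion rate per layout
    n = {layout: 0 for layout in LAYOUTS}    # times each layout was shown
    for _ in range(n_visits):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if rng.random() < epsilon:
            layout = rng.choice(LAYOUTS)
        else:
            layout = max(q, key=q.get)
        # Simulate whether this visit converted.
        reward = 1.0 if rng.random() < TRUE_RATES[layout] else 0.0
        n[layout] += 1
        q[layout] += (reward - q[layout]) / n[layout]  # incremental mean update
    return q, n

q, n = run_bandit()
best = max(q, key=q.get)
```

In the production system, a neural network would generalize the value estimate across layout features rather than keeping one number per variant.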


Projects

  • Deep Learning-Based Video Deduplication Using TensorFlow, OpenCV, and FFmpeg (Development)

    Architected and implemented a video processing pipeline that deduplicates videos at scale, using deep learning (TensorFlow) to generate video signatures.

    Developed a video augmentation pipeline to validate and test the deduplication models using OpenCV, MoviePy, and FFmpeg.

    Implemented several evaluation routines (both visual and quantitative) to evaluate model results.
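    The signature-matching step can be sketched as follows. This is a hedged illustration: random vectors stand in for the TensorFlow-generated video signatures, and the clip names and 0.95 threshold are assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two signature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_duplicates(signatures, threshold=0.95):
    """Return ID pairs whose signatures exceed the similarity threshold."""
    pairs = []
    ids = list(signatures)
    for i, vid_a in enumerate(ids):
        for vid_b in ids[i + 1:]:
            if cosine_similarity(signatures[vid_a], signatures[vid_b]) >= threshold:
                pairs.append((vid_a, vid_b))
    return pairs

# Stand-in signatures: in the real pipeline these come from a deep model.
rng = np.random.default_rng(0)
base = rng.normal(size=128)
signatures = {
    "clip_a": base,
    "clip_a_reencoded": base + rng.normal(scale=0.01, size=128),  # near-duplicate
    "clip_b": rng.normal(size=128),                               # unrelated video
}
dups = find_duplicates(signatures)
```

A good signature model maps re-encoded or lightly edited copies close together, so a simple similarity threshold is enough to flag duplicates.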

  • DSP Using Multiple Deep Learning Architectures (CNNs, LSTM, GRU) (Development)

    The project involved creating a digital signal processing pipeline for non-intrusive load monitoring (NILM): analyzing changes in the voltage and current entering a house to deduce which appliances are in use, as well as their individual energy consumption.

    Over the course of the project, I:

    1. Reviewed several papers covering state-of-the-art methods for NILM

    2. Optimized available models to build an initial proof of concept (POC)

    3. Explored different model architectures in a specific setting defined by the client, including:
    a. Parallel CNNs
    b. Parallel CNNs with LSTMs
    c. CNNs with LSTM (bidirectional)
    d. CNN with GRU

    The models were optimized, and the best one was ultimately chosen based on the results of a cross-validation routine.

    Deliverables included both Jupyter notebooks and Python scripts.
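    The model-selection step can be sketched with scikit-learn's cross-validation utilities. In this hedged illustration, simple regressors stand in for the CNN/LSTM/GRU candidates, and synthetic data stands in for windowed mains-signal features; all names are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in data: features from windowed mains measurements,
# target = one appliance's consumption (here an invented linear relation).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)

# Candidate "architectures" scored with one shared CV routine.
candidates = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=0),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = {name: cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)  # best mean CV score wins
```

The point is the routine, not the models: every candidate is evaluated on identical folds, so the comparison is apples-to-apples.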

  • Image Classification Pipeline with TensorFlow/Keras (Development)

    Implemented a state-of-the-art image classification pipeline using TensorFlow/Keras.

    Tested several different model architectures (including multi-input models with both images and bounding boxes).

    Settled on a fine-tuned VGG16 network.

    The pipeline included data augmentation, cross-validation, and visualization of accuracy and loss across epochs, along with sample results.
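    Two augmentation transforms such a pipeline typically uses can be sketched in plain NumPy. The actual pipeline used Keras utilities; the image shapes and crop sizes here are assumptions.

```python
import numpy as np

def horizontal_flip(img):
    # Mirror a (height, width, channels) image along the width axis.
    return img[:, ::-1, :]

def random_crop(img, out_h, out_w, rng):
    # Take a randomly positioned out_h x out_w crop of the image.
    h, w, _ = img.shape
    top = rng.integers(0, h - out_h + 1)
    left = rng.integers(0, w - out_w + 1)
    return img[top:top + out_h, left:left + out_w, :]

# Stand-in image at the 224x224 input size VGG16 expects.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
flipped = horizontal_flip(img)
cropped = random_crop(img, 200, 200, rng)
```

Applying such label-preserving transforms at training time effectively enlarges the dataset and reduces overfitting, which matters when fine-tuning a large network like VGG16 on limited data.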

  • DSP/DL for Freezing of Gait Detection (with POC Mobile App) (Development)

    Implemented a digital signal processing pipeline for freezing-of-gait detection in patients with Parkinson's disease.

    Replicated state-of-the-art medical research papers using Python, Sklearn, TensorFlow, Keras, and Jupyter to establish well-defined baselines.

    After in-depth research into DSP techniques for anomaly detection, I implemented a few deep learning model architectures never before applied to this specific domain.

    Performed cross-validation to assess model performance, focusing on the model's generalization potential: the model was trained on data from 80% of the patients and tested on data from the remaining patients.

    Examples of models implemented in this project:

    • DeepConvLSTM
    • Parallel CNNs with LSTM

    To test the model, I implemented a simple Android application that used the trained model to make live freezing-of-gait inferences from the phone's accelerometer data. The model was also converted to Core ML for future iOS app development.

    Other relevant work included handling class imbalance by adjusting the deep learning models' loss functions, hyperparameter optimization using grid search, and simulating the effect of different window sizes on model performance.
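    Two of those pieces, window slicing and inverse-frequency class weights, can be sketched as follows; the shapes, window size, and label counts are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Slice a (time, channels) signal into overlapping fixed-size windows."""
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

def class_weights(labels):
    """Inverse-frequency weights; a balanced label set yields 1.0 per class."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Stand-in 3-axis accelerometer stream and imbalanced window labels,
# where freezing episodes (label 1) are the rare class.
stream = np.zeros((1000, 3))
windows = sliding_windows(stream, window=128, step=64)
labels = np.array([0] * 90 + [1] * 10)
weights = class_weights(labels)
```

Passing such weights into the loss makes each rare freezing window count more during training, so the model cannot minimize the loss by simply predicting the majority class.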


Skills

  • Languages

    Python, JavaScript, SCSS, R, C++
  • Libraries/APIs

NumPy, Pandas, Keras, TensorFlow, Scikit-learn, OpenCV, PyTorch, Node.js, spaCy
  • Paradigms

    Data Science, Agile Software Development
  • Platforms

Jupyter Notebook, Amazon Web Services (AWS), Google Cloud Platform (GCP), macOS, Linux
  • Other

    Computer Vision, Deep Learning, Artificial Intelligence (AI), AWS
  • Frameworks

    Express.js, Flask, Angular
  • Tools

    Amazon SageMaker, Git, PyCharm, GitHub, Grunt, AWS Rekognition
  • Storage

    Google Cloud, MongoDB, PostgreSQL


Education

  • Bachelor of Science in Economics with a focus on Econometrics and Computational Methods
    2008 - 2012
    UFMG - Belo Horizonte, Brazil


Certifications

  • AWS Certified Solutions Architect - Associate
    AUGUST 2018 - AUGUST 2018
  • Deep Learning Specialization
    MAY 2018 - PRESENT
  • Image and Video Processing
    APRIL 2018 - PRESENT
