
Narayan Nandeda

Verified Expert in Engineering

Machine Learning Developer

Location
Bengaluru, Karnataka, India
Toptal Member Since
July 31, 2019

Narayan is a data scientist with 10+ years of experience in statistical modeling, NLP, ML, deep learning, AI, and GenAI. He's experienced in deploying ML solutions across the healthcare, retail, supply chain, telecom, and eCommerce domains. Narayan has been designing, developing, and deploying data science solutions using Python and R since 2011. He's an expert in supervised and unsupervised ML, regression, classification, forecasting, reinforcement learning, GANs, and Keras.

Portfolio

Blue Yonder
Artificial Intelligence (AI), BERT, Classification Algorithms, Computer Vision...
Blue Yonder
R Programming, Python, Deep Learning, Statistical Modeling, Data Analysis...
Verizon Data Services
R Programming, Python, Statistical Modeling, Random Forests...

Experience

Availability

Part-time

Preferred Environment

Jupyter, R, Python, Windows, Artificial Intelligence (AI), Classification Algorithms

The most amazing...

...project I've implemented is "Probabilistic Graphical Models (PGM)." I've also built an "Insights Engine" for a supply chain company.

Work Experience

Lead Data Scientist

2020 - PRESENT
Blue Yonder
  • Developed and deployed an ML model to predict patients' length of stay (LOS) in a hospital's emergency department, utilizing BERT, ALBERT, and RoBERTa models along with supervised ML models (see the classification sketch after this role's technology list).
  • Developed generative adversarial network models to generate synthetic tabular healthcare records, such as lab test values, vital signs, age, and gender, utilizing GAN architectures like CTGAN.
  • Built a COVID-19 mortality prediction model to predict mortality in positive patients from day 10.
  • Developed ML/DL models to predict the discharge/admit status of patients arriving at the hospital's emergency department.
  • Built and deployed GCP-based (CPU/GPU) ML models and set up GCP infrastructure to use AI Notebooks.
Technologies: Artificial Intelligence (AI), BERT, Classification Algorithms, Computer Vision, Machine Learning, Forecasting, Regression, Random Forest Regression, Long Short-term Memory (LSTM), LSTM, Autoregressive Integrated Moving Average (ARIMA), Bayesian Inference & Modeling, Healthcare, Health IT, Data Analytics, Data Analysis, Exploratory Data Analysis, Natural Language Toolkit (NLTK), Natural Language Processing (NLP), Generative Pre-trained Transformers (GPT), GPT, Statistical Analysis, Data Science, NumPy, Pandas, Keras, OpenCV, Google Cloud, Google Cloud Platform (GCP), Big Data, SQL, Flask, Docker, TensorFlow, PyTorch, Jupyter, Mathematics, Data Mining, Data Visualization, Data Cleaning, Statistical Modeling, Statistics, Neural Networks, Logistic Regression, Modeling, Clustering, Plotly, TensorFlow Deep Learning Library (TFLearn), Matplotlib, Dash, Probability Theory, Generative Adversarial Networks (GANs), Google AI Platform, XGBoost, Data Validation, Scikit-learn, Jupyter Notebook, GPT-2, Hugging Face Transformers, Hugging Face, Image Processing, Principal Component Analysis (PCA), Predictive Analytics, Data Architecture, Computer Vision Algorithms, LSTM Networks, Recurrent Neural Networks (RNNs), Supply Chain, Supply Chain Optimization, Generalized Linear Model (GLM), Google Cloud Machine Learning, Naive Bayes, Google BigQuery, R-CNN
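
A minimal sketch of the transformer-based classification setup this role describes, using the Hugging Face transformers and datasets APIs; the model name, file name, column names, and hyperparameters below are illustrative assumptions, not the actual project artifacts.

    # Hypothetical sketch: fine-tuning a BERT-family model to classify emergency
    # department notes into length-of-stay buckets. File, column names, and labels
    # are placeholders; labels are assumed to be integer-coded classes.
    import pandas as pd
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    df = pd.read_csv("ed_notes.csv")  # assumed columns: "note", "los_bucket"
    dataset = Dataset.from_pandas(df)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=df["los_bucket"].nunique())

    def tokenize(batch):
        return tokenizer(batch["note"], truncation=True, padding="max_length",
                         max_length=256)

    dataset = dataset.map(tokenize, batched=True).rename_column("los_bucket", "labels")

    args = TrainingArguments(output_dir="los_model", num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=dataset).train()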

Senior Data Scientist

2018 - 2020
Blue Yonder
  • Converted LP formulations into images and used computer vision techniques to decompose large LP problems into smaller subproblems.
  • Created a deep learning autoencoder pipeline to convert the supply chain to a fixed-dimension vector.
  • Used page ranking to rank supply chain exceptions relative to one another, using historical click-stream data along with exception properties to learn patterns and importance.
  • Built probabilistic graphical models (PGMs) to create an insights engine for a supply chain client; the PGMs were designed to help planners automate planning and decisions.
  • Developed time series forecasting models to forecast the daily inbound and outbound volume of 100+ distribution centers for a retail client, utilizing ARIMA, SARIMA, ETS, Prophet, and supervised ML models (see the forecasting sketch after this role's technology list).
  • Developed a forecasting model for a retail client to forecast the journey time of trailers to arrive at stores.
  • Created a sales forecasting model for a retail client to forecast daily sales at different granularities like daily, weekly, store-level, store-department-level, and more.
Technologies: R Programming, Python, Deep Learning, Statistical Modeling, Data Analysis, Exploratory Data Analysis, Machine Learning, Artificial Intelligence (AI), Probabilistic Graphical Models, Data Analytics, Statistical Analysis, Computer Vision, Data Science, NumPy, Pandas, Keras, OpenCV, Google Cloud, Google Cloud Platform (GCP), Big Data, SQL, Flask, Docker, TensorFlow, RStudio Shiny, Linux, PyTorch, Jupyter, Mathematics, Data Mining, Data Visualization, Data Cleaning, Random Forest Regression, Long Short-term Memory (LSTM), Regression, Forecasting, Statistics, Neural Networks, Logistic Regression, Modeling, Clustering, Plotly, TensorFlow Deep Learning Library (TFLearn), Matplotlib, Deep Reinforcement Learning, Dash, Linear Optimization, Probability Theory, OCR, RoBERTa, XLNet, Containers, Linear Algebra, Linear Programming, XGBoost, SciPy, Data Validation, Scikit-learn, Jupyter Notebook, Classification Algorithms, Clustering Algorithms, Principal Component Analysis (PCA), Predictive Analytics, Data Architecture, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNNs), Supply Chain, Supply Chain Optimization, Generalized Linear Model (GLM), Google Cloud Machine Learning, Naive Bayes
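
A minimal sketch of the classical seasonal forecasting approach mentioned above, using statsmodels' SARIMAX; the input series, weekly seasonality, and (S)ARIMA orders are illustrative assumptions rather than tuned production values.

    # Hypothetical sketch: fitting a seasonal ARIMA model to one distribution
    # center's daily volume and forecasting the next 14 days. The file name and
    # (S)ARIMA orders are placeholders, not tuned values.
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    volume = pd.read_csv("dc_inbound.csv", parse_dates=["date"],
                         index_col="date")["volume"]
    volume = volume.asfreq("D").interpolate()  # regular daily index, gaps filled

    model = SARIMAX(volume, order=(1, 1, 1), seasonal_order=(1, 1, 1, 7))
    result = model.fit(disp=False)

    forecast = result.get_forecast(steps=14)
    print(forecast.predicted_mean)   # point forecasts
    print(forecast.conf_int())       # prediction intervals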

Data Scientist

2015 - 2018
Verizon Data Services
  • Implemented a real-time cancel propensity model to predict a customer's order cancellation propensity score before they submit the order.
  • Implemented a real-time churn model to predict customer churn propensity in real time, deployed on the IBM InfoSphere Streams platform.
  • Worked on happy-path scoring, which takes the path a customer traverses through the website and scores it as a happy or unhappy path, with a lift score to completion.
  • Implemented session categorization and customer segmentation. The goal was to categorize customers' sessions based on their activity in self-serve channels and to use those sessions to segment customers into clusters and generate complex insights.
  • Implemented a next-day call prediction model to predict a customer's propensity to call.
  • Trained and deployed call and chat classification models to classify transcripts into various categories, working end to end from data collection to deployment.
  • Used NLTK to clean and preprocess text data, the Stanford NLP library for dependency parsing, a BERT model for embeddings, and LDA for topic identification, among other techniques (call summary model); a minimal pipeline sketch follows this role's technology list.
  • Implemented a sales forecasting model to forecast weekly sales for each store, utilizing time series algorithms like ARIMA, SARIMA, and exponential smoothing, supervised ML algorithms, and deep learning models like LSTMs and RNNs (store-level sales forecasting).
Technologies: R Programming, Python, Statistical Modeling, Random Forests, Logistic Regression, Data Analytics, Data Analysis, Exploratory Data Analysis, Machine Learning, Natural Language Processing (NLP), GPT, Generative Pre-trained Transformers (GPT), Statistical Analysis, Artificial Intelligence (AI), Data Science, R, NumPy, Pandas, SQL, TensorFlow, Tableau, Jupyter, Mathematics, Data Mining, Data Visualization, Data Cleaning, Random Forest Regression, Long Short-term Memory (LSTM), Regression, Statistics, Neural Networks, Modeling, Clustering, Matplotlib, Probability Theory, Telecommunications, XGBoost, SciPy, Data Validation, Scikit-learn, Jupyter Notebook, Classification Algorithms, Clustering Algorithms, Principal Component Analysis (PCA), Predictive Analytics, Data Architecture, LSTM Networks, Generalized Linear Model (GLM), Naive Bayes
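
A minimal sketch of the NLTK-plus-classifier transcript pipeline described above, using scikit-learn for TF-IDF features and logistic regression; the file name, column names, and model choice are illustrative assumptions.

    # Hypothetical sketch: NLTK cleaning feeding a TF-IDF + logistic regression
    # transcript classifier. The file and column names are placeholders.
    import nltk
    import pandas as pd
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer
    from nltk.tokenize import word_tokenize
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    for pkg in ("punkt", "stopwords", "wordnet"):  # one-time corpus downloads
        nltk.download(pkg, quiet=True)

    lemmatizer = WordNetLemmatizer()
    stop_words = set(stopwords.words("english"))

    def clean(text):
        tokens = word_tokenize(text.lower())
        return " ".join(lemmatizer.lemmatize(t) for t in tokens
                        if t.isalpha() and t not in stop_words)

    df = pd.read_csv("transcripts.csv")  # assumed columns: "text", "category"
    df["clean_text"] = df["text"].map(clean)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=5),
                        LogisticRegression(max_iter=1000))
    clf.fit(df["clean_text"], df["category"])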

Business Analyst

2011 - 2013
Verizon Data Services
  • Developed a model to calculate customer lifetime values based on the profile and transactional data.
  • Generated insights using state-of-the-art ML algorithms to help representatives better serve customers.
  • Implemented customer segmentation using online transaction details.
  • Implemented a next best offer (NBO) model, which identifies, from all existing offers, the next offer to present to a customer so as to increase the conversion rate.
  • Implemented an agent-customer mapping model to improve call center agents' performance.
Technologies: SQL, R Programming, Python, Modeling, Demand Sizing & Segmentation, Clustering, Data Analytics, Data Analysis, Exploratory Data Analysis, Machine Learning, Natural Language Toolkit (NLTK), Statistical Analysis, Data Science, R, NumPy, Pandas, Jupyter, Mathematics, Data Mining, Data Visualization, Data Cleaning, Random Forest Regression, Regression, Forecasting, Statistics, Neural Networks, Logistic Regression, TensorFlow Deep Learning Library (TFLearn), Matplotlib, Telecommunications, XGBoost, Data Validation, Scikit-learn, Jupyter Notebook, Classification Algorithms, Principal Component Analysis (PCA), Predictive Analytics, Data Architecture, Generalized Linear Model (GLM), Naive Bayes

Projects

Generative Adversarial Networks (GANs)

Worked on various GAN architectures, such as TGAN and CTGAN, to generate synthetic tabular data and synthetic images, among other outputs.
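
A minimal sketch of CTGAN-based synthetic tabular data generation, assuming the open source ctgan package; the input table and its column names are placeholders, not actual project data.

    # Hypothetical sketch: training CTGAN on a real tabular dataset and sampling
    # synthetic rows. The file and column names are placeholders; CTGAN ships with
    # the open source "ctgan" package (older releases call it CTGANSynthesizer).
    import pandas as pd
    from ctgan import CTGAN

    real = pd.read_csv("records.csv")
    discrete_columns = ["gender", "category"]  # assumed categorical fields

    model = CTGAN(epochs=300)
    model.fit(real, discrete_columns)

    synthetic = model.sample(1000)  # 1,000 synthetic rows with the same schema
    print(synthetic.head())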

Computer Vision to Identify Scratches on the Surface of Items in Manufacturing

Used computer vision to identify scratches on the surface of items (flagging defective units) on the manufacturing production line.

The objective was an automated quality check using computer vision.
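
A simplified classical-vision sketch of this kind of scratch check, using OpenCV edge detection and contour filtering; the thresholds and file name are illustrative assumptions, not the production pipeline.

    # Hypothetical sketch: flag an item image as defective when elongated
    # edge contours (scratch-like marks) are found. Thresholds are illustrative.
    import cv2

    def has_scratch(image_path, min_length=80):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            x, y, w, h = cv2.boundingRect(contour)
            if max(w, h) > min_length and max(w, h) > 4 * max(min(w, h), 1):
                return True  # long, thin mark -> likely scratch
        return False

    print(has_scratch("item_001.png"))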

ALBERT, BERT, and GPT-2 Language Models

Implemented ALBERT, BERT, and GPT-2 language models using Hugging Face Transformers for healthcare data. I utilized the Google AI Platform to train the models using GPU computation.
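
A minimal sketch of domain-adapting a BERT-family masked language model with Hugging Face Transformers, roughly along the lines described above; the model name, text file, and hyperparameters are assumptions (GPT-2 would use the causal-LM variant instead).

    # Hypothetical sketch: continuing masked-language-model training of an ALBERT
    # checkpoint on domain text. The text file and hyperparameters are placeholders;
    # GPT-2 would instead use AutoModelForCausalLM with an mlm=False collator.
    from datasets import load_dataset
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "albert-base-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    dataset = load_dataset("text", data_files={"train": "notes.txt"})["train"]
    dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True,
                                              max_length=128),
                          batched=True, remove_columns=["text"])

    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                               mlm_probability=0.15)
    args = TrainingArguments(output_dir="domain_mlm", num_train_epochs=1,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=dataset,
            data_collator=collator).train()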

Computer Vision to Identify Physical Defects on Phone Bodies

Used computer vision to identify scratches on mobile phone bodies. The objective was to automate the replacement decision for defective phones under warranty and to identify screen damage from phone images uploaded by the customer care center or the end user.
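
A minimal transfer-learning sketch for this kind of image classifier, using Keras with a pretrained MobileNetV2 backbone; the directory layout, backbone choice, and hyperparameters are assumptions, not the actual solution.

    # Hypothetical sketch: binary "damaged vs. intact" phone image classifier built
    # on a frozen MobileNetV2 backbone. The directory layout (one subfolder per
    # class) and the hyperparameters are placeholders.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "phone_images/train", image_size=(224, 224), batch_size=32)

    base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3))
    base.trainable = False  # keep pretrained weights frozen

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)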

Optical Character Recognition

I have used Google Tesseract to run OCR experiments.

I am also aware of AWS Textract.
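
A minimal sketch of a Tesseract OCR experiment via the pytesseract wrapper; the input file is a placeholder, and the Tesseract binary must be installed separately.

    # Hypothetical sketch: extracting text from a scanned image with Google Tesseract
    # through the pytesseract wrapper. The input file is a placeholder, and the
    # tesseract binary must be installed and on the PATH.
    import pytesseract
    from PIL import Image

    image = Image.open("scanned_page.png")
    text = pytesseract.image_to_string(image, lang="eng")
    print(text)
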
Education

2013 - 2015

Master of Business Administration (MBA) Degree in Business Analytics

Indian Institute of Management Indore (IIM - Indore) - Indore, India

2007 - 2011

Bachelor of Engineering Degree in Computer Science and Engineering

Rajiv Gandhi Technical University, (RGPV- Bhopal) - Bhopal, India

Certifications

SEPTEMBER 2019 - PRESENT

Machine Learning (Statistics and Machine Learning Micro Master)

MITx

DECEMBER 2018 - PRESENT

Statistical Learning

Stanford University

DECEMBER 2018 - PRESENT

Reinforcement Learning

National Research University — Higher School of Economics

DECEMBER 2018 - PRESENT

Bayesian Methods for Machine Learning

National Research University — Higher School of Economics

MAY 2018 - PRESENT

Deep Learning Specialization

deeplearning.ai

DECEMBER 2017 - PRESENT

Data Science Associate

Dell EMC

MAY 2015 - PRESENT

Machine Learning

Stanford University School of Engineering

FEBRUARY 2011 - PRESENT

Oracle Certified Associate (OCA)

Oracle

Languages

Python, R, SQL

Libraries/APIs

Natural Language Toolkit (NLTK), XGBoost, Keras, OpenCV, NumPy, Pandas, Scikit-learn, SciPy, TensorFlow, PyTorch, TensorFlow Deep Learning Library (TFLearn), Matplotlib, LSTM

Tools

Jupyter, Google AI Platform, ChatGPT, Tableau, Plotly

Paradigms

Linear Programming, Data Science

Platforms

Google Cloud Platform (GCP), Jupyter Notebook, Docker, Linux, Windows

Industry Expertise

Healthcare, Telecommunications

Storage

Google Cloud, Data Validation

Other

Data Mining, Neural Networks, Statistics, R Programming, Statistical Modeling, Deep Learning, Linear Regression, Machine Learning, Decision Trees, Random Forests, Natural Language Processing (NLP), Mathematics, Clustering Algorithms, Classification Algorithms, Principal Component Analysis (PCA), BERT, Google BigQuery, Convolutional Neural Networks (CNN), LSTM Networks, Recurrent Neural Networks (RNNs), R-CNN, Hugging Face Transformers, GPT-2, Computer Vision, Image Processing, Predictive Analytics, Statistical Analysis, Data Analysis, Data Analytics, Data Visualization, Data Architecture, Artificial Intelligence (AI), Supply Chain, Supply Chain Optimization, Data Cleaning, Probabilistic Graphical Models, Generalized Linear Model (GLM), Computer Vision Algorithms, Google Cloud Machine Learning, Exploratory Data Analysis, Naive Bayes, Bayesian Inference & Modeling, Hugging Face, GPT, Generative Pre-trained Transformers (GPT), Language Models, OpenAI GPT-3 API, OpenAI GPT-4 API, Large Language Models (LLMs), XLNet, RoBERTa, Generative Adversarial Networks (GANs), OCR, Big Data, Deep Reinforcement Learning, Probability Theory, Linear Algebra, Linear Optimization, Containers, EMC Certified Data Scientist, Dash, Clustering, Demand Sizing & Segmentation, Modeling, Logistic Regression, Forecasting, Regression, Random Forest Regression, Long Short-term Memory (LSTM), Autoregressive Integrated Moving Average (ARIMA), Health IT

Frameworks

Flask, RStudio Shiny
