
Rudolf Eremyan

Verified Expert in Engineering

Data Science Developer

Tbilisi, Georgia
Toptal Member Since
August 2, 2018

Rudolf is a data scientist with six years of experience in the field. He developed the first chatbot framework for the Georgian language, which was adopted by the largest bank in Georgia, and has designed cloud-based big data processing pipelines for Fortune 500 companies. He has been invited to speak and judge at international hackathons and conferences, including PyData, Google DevFest, and NASA's International Space Apps Challenge.






Preferred Environment

Amazon Web Services (AWS), Python, Big Data, Apache Airflow, PostgreSQL, SQL, PySpark, Data Modeling, Data Pipelines, Pandas

The most amazing...

...framework I've developed is a chatbot framework for the Georgian language.

Work Experience

Data Engineer for a Cloud Solution

2021 - PRESENT
Staude Capital
  • Designed a data model based on customer-provided requirements and business needs.
  • Developed an investor CRM system for managing hedge fund trades, orders, and other operations.
  • Created automated reporting tools and deployed them on AWS.
Technologies: Data Engineering, Excel VBA, SQL, Data Science, Amazon Web Services (AWS), Hedge Funds, Python, Pandas, Data Modeling, Docker

Data Scientist

2020 - 2021
ATH Digital LLC
  • Created data ingestion scripts for pulling data from ad platforms such as Google AdWords and Facebook Ads.
  • Automated uploading of CSV and Excel file data into the database using AWS services.
  • Set up the cloud streaming infrastructure for the marketing data processing pipeline.
  • Designed a database model based on the data science team's requirements.
  • Created a model for forecasting and visualizing the balance burn rate metric.
Technologies: Docker, Plotly, PostgreSQL, Jupyter Notebook, Pandas, AdWords API, Facebook API, Cron, Python, Amazon Kinesis, Amazon EC2, Docker Compose, Jupyter, Google Analytics API, Apache Airflow, Big Data, Amazon Web Services (AWS)

Senior Data Scientist

2019 - 2020
  • Processed and analyzed over 100 million athletic performance records with PySpark running on AWS EMR.
  • Designed a data model based on the company's business requirements.
  • Built a batch data processing pipeline orchestrated by Airflow.
  • Created a data scraping tool for parsing dynamic and static web pages using Scrapy, Selenium, and lxml.
  • Developed Monte Carlo simulations of athletics competitions.
Technologies: Amazon Elastic MapReduce (EMR), PySpark, Jupyter, Amazon Web Services (AWS), Statistics, Data Science, Amazon DynamoDB, Amazon EC2, lxml, Data Modeling, Database Modeling, Code Architecture, Markov Model, Markov Chain Monte Carlo (MCMC) Algorithms, Scrapy, DB, Data Scraping, Selenium, Data Engineering, Machine Learning, GPT, Generative Pre-trained Transformers (GPT), Natural Language Processing (NLP), ETL, Docker, Python, Apache Airflow, Pandas, Big Data

Data Scientist

2018 - 2019
  • Optimized existing SQL queries, reducing their complexity and improving performance.
  • Used SQL to gain insights and detect anomalies and problems in the collected data.
  • Created a workflow for data migration between different database management systems.
  • Developed scripts for ingesting data from different online advertising platforms.
  • Designed new database tables according to the analytics team's requirements.
Technologies: Jupyter, DB, Marketing, Google Analytics, PostgreSQL, SQL, Statistics, R, Pandas, Python, Docker, Facebook API, AdWords API, Big Data, Amazon Web Services (AWS)

Data Scientist

2018 - 2019
Frontier Data Corporation
  • Developed models for trend detection in the Twitter stream.
  • Designed the architecture of an AI-based application.
  • Integrated in-house ML models with cloud services such as IBM Bluemix and Google Cloud NLP.
  • Worked with large datasets using Google BigQuery.
  • Created customized modules for evaluating new ML models.
  • Trained machine learning models for text classification.
  • Created tests for existing applications.
Technologies: Jupyter, DB, Time Series Analysis, R, Generative Pre-trained Transformers (GPT), GPT, Natural Language Processing (NLP), Big Data, Python, Pandas, Docker, PostgreSQL, Amazon Web Services (AWS)

Data Scientist

2016 - 2018
Pulsar AI
  • Developed a chatbot framework for the Georgian language applying machine learning and natural language processing (NLP) techniques.
  • Trained and deployed a machine learning model for an automated grouping of the news and articles from Georgian media websites.
  • Designed a tool for sentiment classification on texts from social networks.
  • Analyzed large volumes of user conversation data using NLP and statistics, and presented precise results.
  • Applied time series analysis to analyze and predict cryptocurrency prices.
  • Managed a team of linguists who worked on the data collection and labeling.
Technologies: Jupyter, DB, MongoDB, Git, Docker, NumPy, Pandas, SpaCy, fastText, Natural Language Toolkit (NLTK), Gensim, Scikit-learn, Python, PostgreSQL, Amazon Web Services (AWS)

Software Developer Internship

2016 - 2016
Virtuace Inc.
  • Fixed bugs.
  • Expanded functionality of the existing application.
  • Tested new modules.
Technologies: XML, Java, Git, Linux, Docker

Full-stack Software Engineer

2014 - 2016
Georgian Technical University
  • Developed the front-end for managing and working with linguistic corpora.
  • Created web services for operating with linguistic corpus data.
  • Designed the database structure for storing and manipulating the linguistic corpora.
  • Analyzed documents using NLP tools and presented the results clearly.
Technologies: DB, Python, Natural Language Toolkit (NLTK), Linguistics, MySQL, REST, JavaScript, CSS, HTML, PostgreSQL

Projects

Trend Detection in Twitter Stream

Developed a model for early trend detection in the Twitter stream by combining natural language processing algorithms with time series analysis.
Developed scripts for pulling and analyzing the Twitter stream using the Twitter API.

Visualized the results of the analysis with different plots for easier interpretation.

Attribution Modeling for Marketing Optimization

Attribution modeling is a method for measuring the monetary impact a piece of communication has on real business goals, for example, sales, customer retention, revenue, and profit.

While working on this project, I used SQL extensively for data manipulation and analysis, along with Python and R libraries. I developed data migration and client notification scripts and implemented data integrity tests to check the completeness and correctness of existing data. I also worked with an internationally distributed team.
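To illustrate the concept only (a minimal sketch, not this project's actual implementation), two common attribution models can be written in a few lines: last-touch assigns the full conversion value to the final channel, while linear attribution splits it evenly across the journey.

```python
# Hypothetical sketch of two simple attribution models; channel names
# and values are illustrative, not taken from the project.

def last_touch(touchpoints, conversion_value):
    """Assign the full conversion value to the final channel."""
    credit = {channel: 0.0 for channel in touchpoints}
    credit[touchpoints[-1]] += conversion_value
    return credit

def linear(touchpoints, conversion_value):
    """Split the conversion value evenly across all touchpoints."""
    share = conversion_value / len(touchpoints)
    credit = {}
    for channel in touchpoints:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

journey = ["google_ads", "facebook", "email"]
print(last_touch(journey, 100.0))  # all credit goes to "email"
print(linear(journey, 100.0))      # credit is split three ways
```

More elaborate models (time-decay, data-driven) follow the same shape: a rule that maps a customer journey to a per-channel split of the conversion value.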

Advanced News Filter

Analyzed a large news dataset using Google BigQuery.

Trained machine learning models for text classification, which were used in the text filtering mechanism. Integrated cloud ML services such as IBM Bluemix and Google Cloud NLP with an existing application.

Chatbot Framework for Georgian Language
Ti-Bot, the first chatbot to speak Georgian.

Automated News Article Grouping Tool

The news article grouping tool combines word vectorization with clustering algorithms to automatically group similar articles parsed from news websites.
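A minimal sketch of the general approach, assuming TF-IDF vectors and k-means from scikit-learn (the production pipeline may differ, and the sample articles are invented):

```python
# Group similar articles: vectorize with TF-IDF, then cluster with
# k-means. Articles about the same story share vocabulary and land in
# the same cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "Central bank raises interest rates to fight inflation",
    "Interest rates climb as the central bank battles inflation",
    "Local football team wins the national championship",
    "Championship title goes to the local football club",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(labels, articles):
    print(label, text)
```

In practice the number of clusters is not known in advance, so algorithms that infer it (e.g., DBSCAN or agglomerative clustering with a distance threshold) are a common substitute for fixed-k k-means.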

Social Media Sentiment Analysis Tool

The social media sentiment analysis tool combines natural language processing technologies and machine learning algorithms to predict the sentiment of comments and posts collected from social networks such as Facebook and Instagram.

Spell Checker for Georgian Language

The spell checker combines classical algorithms with powerful machine learning and natural language processing methods to detect and correct mistakes in sentences. It is used by the largest companies in Georgia for detecting and correcting mistakes in documents.

Cryptocurrency Prices Monitoring Tool

The cryptocurrency price monitoring tool uses time series analysis algorithms and the Twitter API, combined with NLP tools such as sentiment analysis, to monitor and predict price movements of Bitcoin and other cryptocurrencies.

NLP Tool for Automatic Identification of Georgian Dialects

A tool for automatically identifying Georgian dialects in documents from different sources such as forums and social networks. It is based on machine learning classification methods and NLP approaches. During development, I worked with a group of linguists who prepared training and evaluation data for the classification model.

This project was awarded "Best Scientific Research" at the Tbilisi State University 76th Student Conference.

Linguistic Corpus Management System

Developed a web application for storing, manipulating, and analyzing linguistic data.

ETL Pipeline for Pharmaceutical Industry Data

Worked with the client's team to build a new database for the pharmaceutical industry, collecting, cleaning, and managing data from different sources. Used AWS services for implementing the ETL process, storing logs, and more.

Simulation of the Tokyo 2020 Olympic Games

Parsed and analyzed a large volume of athletes' performance data. Applied the Monte Carlo statistical approach to athletes' performance data to simulate track and field competitions. Used AWS cloud services for running the computations and storing the generated results.
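The Monte Carlo idea can be sketched with hypothetical athletes and numbers (not the project's actual model): sample each athlete's race time from a distribution fitted to past performances, then count wins over many simulated races.

```python
# Estimate win probabilities by repeatedly sampling race times.
# Names, means, and standard deviations below are invented examples.
import random

random.seed(42)

# (name, mean 100m time in seconds, standard deviation)
athletes = [("A", 9.85, 0.08), ("B", 9.90, 0.10), ("C", 9.95, 0.12)]

def simulate_race():
    """Sample one time per athlete and return the winner's name."""
    times = {name: random.gauss(mu, sigma) for name, mu, sigma in athletes}
    return min(times, key=times.get)

N = 100_000
wins = {name: 0 for name, _, _ in athletes}
for _ in range(N):
    wins[simulate_race()] += 1

for name, count in wins.items():
    print(f"{name}: {count / N:.1%} win probability")
```

The same structure scales to full competitions: simulate heats, semifinals, and finals per trial, and aggregate medal counts instead of single-race wins.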


Skills

Languages

Python, SQL, XML, JavaScript, Java, HTML, CSS, R, Bash, Excel VBA


Libraries/APIs

Pandas, Beautiful Soup, REST APIs, XGBoost, SciPy, NumPy, SpaCy, Scikit-learn, Natural Language Toolkit (NLTK), Twitter API, PySpark, Google AdWords, Matplotlib, Google Cloud API, AdWords API, Facebook API, Google Analytics API


Tools

Trello, Jupyter, GitHub, Gensim, Apache Airflow, pgAdmin, Bitbucket, Git, Cron, Plotly, Amazon Elastic MapReduce (EMR), Google Analytics, Docker Compose, Spark SQL


Paradigms

Data Science, ETL, Scrum, REST, Database Design


Platforms

Jupyter Notebook, Docker, Amazon Web Services (AWS), Linux, Amazon EC2


Storage

PostgreSQL, MySQL, DB, MongoDB, Database Modeling, Amazon DynamoDB, Redshift, Data Lakes, Data Pipelines


Other

Data Scraping, Big Data, Data Engineering, Machine Learning, Text Classification, Text Mining, Data Analysis, Data Analytics, Batch File Processing, Predictive Analytics, Apache Superset, Regular Expressions, Web Scraping, Clustering Algorithms, Topic Modeling, Web Services, Data Mining, Attribution Modeling, Data Visualization, Reporting, Trading, Natural Language Processing (NLP), Markov Chain Monte Carlo (MCMC) Algorithms, Markov Model, Code Architecture, Data Modeling, lxml, fastText, Linguistics, Time Series Analysis, SSH, Computational Linguistics, Statistics, Data Structures, Algorithms, IBM Cloud, Amazon Kinesis, Hedge Funds, GPT, Generative Pre-trained Transformers (GPT)


Frameworks

Selenium, Flask, Scrapy, Spark

Industry Expertise

Marketing, Healthcare

Education

2013 - 2017

Bachelor's Degree in Computer Science

Tbilisi State University of Ivane Javakhishvili - Tbilisi, Georgia


Certifications

Data Analysis Nanodegree



AWS Certified Solutions Architect Associate 2020



Marketing Analytics with R


Google Analytics Individual Qualification

Digital Academy for Ads


Deep Learning Summer School

University of Deusto


Deep Learning Nanodegree



Machine Learning Online Course

Stanford University


Language and Modern Technologies

Goethe University Frankfurt/Main