Rudolf Eremyan

Data Science Developer in Tbilisi, Georgia

Member since July 3, 2018
Rudolf is a data scientist with seven years of experience in the field. He developed the first chatbot framework for the Georgian language, which was adopted by the largest bank in Georgia. Rudolf has designed cloud-based big data processing pipelines for Fortune 500 companies, and in 2021 he was invited as a speaker and judge for NASA's International Space Apps Challenge.

Portfolio

  • ATH Digital LLC
    Docker, Plotly, PostgreSQL, AWS S3, AWS Lambda, Jupyter Notebook, Pandas...
  • Zelos.AI
    AWS EMR, PySpark, Jupyter, Amazon Web Services (AWS), Statistics...
  • Windsor.AI
    Jupyter, DB, Marketing, Google Analytics, PostgreSQL, SQL, Statistics, R...

Experience

Location

Tbilisi, Georgia

Availability

Part-time

Preferred Environment

AWS, Python, Big Data, Apache Airflow, PostgreSQL, Data Engineering, ETL, SQL, PySpark, Database Design

The most amazing...

...framework I've developed is a chatbot framework for the Georgian language.

Employment

  • Data Scientist

    2020 - 2021
    ATH Digital LLC
    • Created data ingestion scripts for pulling data from ad platforms such as Google AdWords and Facebook Ads.
    • Developed automatic uploading of CSV and Excel file data into the database using AWS services.
    • Set up the cloud streaming infrastructure for the marketing data processing pipeline.
    • Designed a database model based on the data science team requirements.
    • Created a model for forecasting and visualizing the balance burn rate metric.
    Technologies: Docker, Plotly, PostgreSQL, AWS S3, AWS Lambda, Jupyter Notebook, Pandas, AdWords API, Facebook API, Cron, Python, AWS Kinesis, AWS EC2, Docker Compose, Jupyter, Google Analytics API, Apache Airflow, Big Data, AWS
  • Senior Data Scientist

    2019 - 2020
    Zelos.AI
    • Processed and analyzed over 100 million athletic performance records with PySpark running on AWS EMR.
    • Designed a data model based on the company's business requirements.
    • Built a batch data processing pipeline orchestrated by Airflow.
    • Created a data scraping tool for parsing dynamic and static web pages using Scrapy, Selenium, and lxml.
    • Developed athletics competitions simulations based on the Monte Carlo approach.
    Technologies: AWS EMR, PySpark, Jupyter, Amazon Web Services (AWS), Statistics, Data Science, AWS DynamoDB, AWS Lambda, AWS EC2, AWS S3, lxml, Data Modeling, Database Modeling, Code Architecture, Markov Model, Markov Chain Monte Carlo (MCMC) Algorithms, Batch, Scrapy, DB, Data Scraping, Selenium, Data Engineering, Machine Learning, Natural Language Processing (NLP), ETL, Docker, AWS, Python, Apache Airflow, Pandas, Big Data
  • Data Scientist

    2018 - 2019
    Windsor.AI
    • Optimized existing SQL queries, reducing their complexity and improving performance.
    • Used SQL for gaining insights, detecting anomalies and problems in the collected data.
    • Created a workflow for the data migration between different database management systems.
    • Developed scripts for ingesting data from different online advertising platforms.
    • Designed new database tables according to the analytics team requirements.
    Technologies: Jupyter, DB, Marketing, Google Analytics, PostgreSQL, SQL, Statistics, R, Pandas, Python, Docker, Facebook API, AdWords API, Big Data, AWS
  • Data Scientist

    2018 - 2019
    Frontier Data Corporation
    • Developed models for trend detection in the Twitter stream.
    • Designed the architecture of an AI-based application.
    • Integrated in-house ML models with cloud services such as IBM Bluemix and Google Cloud NLP.
    • Worked with big datasets using Google BigQuery.
    • Created custom modules for evaluating new ML models.
    • Trained machine learning models for text classification.
    • Created tests for existing applications.
    Technologies: Jupyter, DB, Time Series Analysis, R, Natural Language Processing (NLP), Big Data, Python, Pandas, Docker, PostgreSQL, AWS
  • Data Scientist

    2016 - 2018
    Pulsar AI
    • Developed a chatbot framework for the Georgian language applying machine learning and natural language processing (NLP) techniques.
    • Trained and deployed a machine learning model for automated grouping of news and articles from Georgian media websites.
    • Designed a tool for sentiment classification on texts from social networks.
    • Analyzed large volumes of user conversation data using NLP and statistics, and presented precise results.
    • Worked with time series for analyzing and predicting cryptocurrency prices.
    • Managed a team of linguists who worked on the data collection and labeling.
    Technologies: Jupyter, DB, MongoDB, Git, Docker, NumPy, Pandas, SpaCy, fastText, Keras, NLTK, Gensim, Scikit-learn, Python, PostgreSQL, AWS, AWS Lambda
  • Software Developer Internship

    2016 - 2016
    Virtuace Inc.
    • Fixed bugs.
    • Expanded functionality of the existing application.
    • Tested new modules.
    Technologies: XML, Apache Tomcat, Java, Git, Linux, Docker
  • Full-stack Software Engineer

    2014 - 2016
    Georgian Technical University
    • Developed the front-end for managing and working with linguistic corpora.
    • Created web services for operating with linguistic corpus data.
    • Organized database structure for storing and manipulating the linguistic corpora.
    • Analyzed documents using NLP tools and presented results in a clear manner.
    Technologies: DB, Python, NLTK, Linguistics, MySQL, REST, JavaScript, CSS, HTML, PostgreSQL

Experience

  • Trend Detection in Twitter Stream

    Developed a model for early trend detection in the Twitter stream using natural language processing algorithms combined with time series analysis approaches.
    Developed scripts for pulling and analyzing the Twitter stream using the Twitter API.

    Visualized the analysis results with various plots for easier interpretation.
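
    A minimal sketch of the frequency-spike idea behind this kind of early trend detection; the batching, thresholds, and tokenization are illustrative, not the production pipeline:

    from collections import Counter
    import re

    def token_counts(tweets):
        """Count lowercase word tokens across a batch of tweet texts."""
        counts = Counter()
        for text in tweets:
            counts.update(re.findall(r"[a-z']+", text.lower()))
        return counts

    def trending_terms(current_window, history_windows, min_count=20, spike_ratio=3.0):
        """Flag terms whose frequency in the current window spikes well above
        their average frequency across previous windows."""
        current = token_counts(current_window)
        baseline = Counter()
        for window in history_windows:
            baseline.update(token_counts(window))
        n = max(len(history_windows), 1)
        trends = []
        for term, count in current.items():
            avg_past = baseline[term] / n
            if count >= min_count and count > spike_ratio * max(avg_past, 1.0):
                trends.append((term, count, avg_past))
        return sorted(trends, key=lambda t: t[1], reverse=True)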

  • Attribution Modeling for Marketing Optimization

    Attribution modeling is a method used to measure the monetary impact a piece of communication has on real business goals such as sales, customer retention, revenue, and profit.

    While working on this project, I used SQL extensively for data manipulation and analysis, along with Python and R libraries. I developed data migration and client notification scripts and implemented data integrity tests to check the completeness and correctness of existing data. I worked with an international team distributed around the world.
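
    As a rough illustration of the attribution logic (not the client's actual model), here is a position-based credit split over conversion paths, assuming each path is an ordered list of marketing channels:

    from collections import defaultdict

    def position_based_attribution(paths, first=0.4, last=0.4):
        """Split each conversion's credit 40/20/40 between the first touch,
        the middle touches, and the last touch (U-shaped attribution)."""
        credit = defaultdict(float)
        for path in paths:
            if not path:
                continue
            if len(path) == 1:
                credit[path[0]] += 1.0
                continue
            credit[path[0]] += first
            credit[path[-1]] += last
            middle = path[1:-1]
            if middle:
                share = (1.0 - first - last) / len(middle)
                for channel in middle:
                    credit[channel] += share
            else:
                # Two-touch path: split the middle share between first and last.
                credit[path[0]] += (1.0 - first - last) / 2
                credit[path[-1]] += (1.0 - first - last) / 2
        return dict(credit)

    # Example: three conversion paths observed in analytics data.
    paths = [["adwords", "facebook", "email"], ["facebook"], ["adwords", "email"]]
    print(position_based_attribution(paths))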

  • Advanced News Filter

    Analyzed a large news dataset using Google BigQuery.

    Trained machine learning models for text classification that were used in the text filtering mechanism. Integrated cloud ML services such as IBM Bluemix and Google Cloud NLP with an existing application.

  • Chatbot Framework for Georgian Language
    https://www.facebook.com/TBCTIbot/

    Ti-Bot, the first-ever chatbot to speak Georgian.

  • Automated News Article Grouping Tool

    The news article grouping tool combines word vectorization with clustering algorithms to automatically group similar articles parsed from news websites.
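
    A minimal sketch of the vectorize-then-cluster approach using scikit-learn; the vectorizer settings and cluster count are illustrative defaults, not the production configuration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def group_articles(texts, n_groups=5):
        """Vectorize article texts with TF-IDF and cluster them with k-means,
        returning a cluster label per article."""
        vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
        X = vectorizer.fit_transform(texts)
        model = KMeans(n_clusters=n_groups, n_init=10, random_state=42)
        return model.fit_predict(X)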

  • Social Media Sentiment Analysis Tool

    The social media sentiment analysis tool combines natural language processing technologies and machine learning algorithms to predict the sentiment of comments and posts collected from social networks such as Facebook and Instagram.
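
    A minimal sketch of such a classifier, assuming a labeled set of posts and using scikit-learn; the actual tool was more involved:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_sentiment_model(posts, labels):
        """Fit a TF-IDF + logistic regression pipeline on labeled posts
        (labels such as 'positive' / 'negative' / 'neutral')."""
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),
            LogisticRegression(max_iter=1000),
        )
        return model.fit(posts, labels)

    # model = train_sentiment_model(train_texts, train_labels)
    # predictions = model.predict(new_comments)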

  • Spell Checker for Georgian Language

    The spell checker combines classical algorithms with machine learning and natural language processing methods to detect and correct mistakes in sentences. The product is used by the largest companies in Georgia to detect and correct mistakes in documents.
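
    The classical side of such a checker typically generates edit-distance-1 candidates and ranks them by corpus frequency; a minimal Norvig-style sketch (the alphabet and word list are placeholders, not the Georgian production data):

    from collections import Counter

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"  # placeholder; the real tool uses the Georgian alphabet
    WORD_FREQ = Counter()                     # placeholder; filled from a large text corpus

    def edits1(word):
        """All strings one edit (delete, swap, replace, insert) away from word."""
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
        inserts = [a + c + b for a, b in splits for c in ALPHABET]
        return set(deletes + swaps + replaces + inserts)

    def correct(word):
        """Return the most frequent known candidate, or the word itself."""
        candidates = [w for w in edits1(word) if w in WORD_FREQ] or [word]
        return max(candidates, key=lambda w: WORD_FREQ[w])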

  • Cryptocurrency Prices Monitoring Tool

    The cryptocurrency price monitoring tool uses time series analysis algorithms and the Twitter API, combined with NLP tools such as sentiment analysis, to monitor and predict price movements of Bitcoin and other cryptocurrencies.

  • NLP Tool for Automatic Identification of Georgian Dialects

    A tool for automatic identification of Georgian dialects in documents from different sources such as forums and social networks. It's based on machine learning classification methods and NLP approaches. During development, I worked with a group of linguists who prepared the training and evaluation data for the classification model.

    This project was awarded "Best Scientific Research" at the Tbilisi State University 76th Student Conference.

  • Linguistic Corpus Management System

    Developed a web application for storing, manipulating, and analyzing linguistic data.

  • ETL Pipeline for Pharmaceutical Industry Data

    Worked with the client's team to build a new database for the pharmaceutical industry by collecting, cleaning, and managing data from different sources. Used AWS services for implementing the ETL, storing logs, and more.
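
    A minimal sketch of how a batch ETL like this can be orchestrated with Apache Airflow; the DAG name, schedule, and the extract/transform/load helpers are hypothetical placeholders:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(**_):
        # Hypothetical step: pull raw source files from S3 / vendor APIs.
        pass

    def transform(**_):
        # Hypothetical step: clean and normalize records with pandas.
        pass

    def load(**_):
        # Hypothetical step: upsert cleaned records into PostgreSQL.
        pass

    with DAG(
        dag_id="pharma_etl",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load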

  • Simulation of the Tokyo 2020 Olympic Games

    Parsed and analyzed a large volume of athletes' performance data. Applied a Monte Carlo statistical approach to this data to simulate track and field competitions. Used AWS cloud services for running computations and storing the generated results.
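
    A minimal sketch of the Monte Carlo idea: sample each athlete's result from a distribution fitted to past performances and tally wins across many simulated races (the athlete figures here are illustrative):

    import numpy as np

    def simulate_race(athletes, n_sims=100_000, rng=None):
        """athletes: mapping of name -> (mean_time, std_dev) fitted from past results.
        Returns the estimated probability of winning for each athlete."""
        rng = rng or np.random.default_rng(0)
        names = list(athletes)
        means = np.array([athletes[n][0] for n in names])
        stds = np.array([athletes[n][1] for n in names])
        # Draw one finishing time per athlete per simulated race (lower is better).
        times = rng.normal(means, stds, size=(n_sims, len(names)))
        winners = times.argmin(axis=1)
        counts = np.bincount(winners, minlength=len(names))
        return {name: count / n_sims for name, count in zip(names, counts)}

    print(simulate_race({"Athlete A": (9.85, 0.08), "Athlete B": (9.90, 0.06)}))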

  • Four Pitfalls of Sentiment Analysis Accuracy (Publication)
    Manually gathering information about user-generated data is time-consuming, to say the least. That's why more organizations are turning to automatic sentiment analysis methods—but basic models don't always cut it. In this article, Toptal Freelance Data Scientist Rudolf Eremyan gives an overview of some sentiment analysis gotchas and what can be done to address them.

Skills

  • Languages

    Python, SQL, XML, JavaScript, Java, HTML, CSS, R, Bash, Excel VBA
  • Libraries/APIs

    Pandas, Beautiful Soup, REST APIs, XGBoost, SciPy, NumPy, SpaCy, Scikit-learn, NLTK, Twitter API, PySpark, Google AdWords, Matplotlib, Google Cloud API, AdWords API, Facebook API, Google Analytics API
  • Tools

    Trello, Jupyter, GitHub, Gensim, Apache Airflow, pgAdmin, Bitbucket, Git, Cron, Plotly, Google Analytics, Docker Compose
  • Paradigms

    Data Science, ETL, Scrum, REST, Database Design
  • Platforms

    Jupyter Notebook, Docker, Linux, AWS EC2, AWS Kinesis
  • Storage

    PostgreSQL, MySQL, DB, MongoDB, Database Modeling, AWS DynamoDB
  • Other

    Data Scraping, Big Data, Data Engineering, Machine Learning, Text Classification, Text Mining, Data Analysis, Data Analytics, Batch File Processing, AWS, Predictive Analytics, Apache Superset, Regular Expressions, Web Scraping, Clustering Algorithms, Topic Modeling, Web Services, Data Mining, Attribution Modeling, Trading, Natural Language Processing (NLP), Markov Chain Monte Carlo (MCMC) Algorithms, Markov Model, Code Architecture, Data Modeling, lxml, fastText, Linguistics, Time Series Analysis, SSH, Computational Linguistics, Statistics, Data Structures, Algorithms, IBM Cloud
  • Frameworks

    Selenium, Flask, Scrapy, AWS EMR
  • Industry Expertise

    Marketing, Healthcare

Education

  • Master's degree in Computer Science
    2017 - 2019
    Tbilisi State University of Ivane Javakhishvili - Tbilisi, Georgia
  • Bachelor's degree in Computer Science
    2013 - 2017
    Tbilisi State University of Ivane Javakhishvili - Tbilisi, Georgia

Certifications

  • AWS Certified Solutions Architect Associate 2020
    MAY 2020 - PRESENT
    CloudGuru
  • Marketing Analytics with R
    AUGUST 2019 - PRESENT
    Datacamp.com
  • Google Analytics Individual Qualification
    DECEMBER 2018 - DECEMBER 2019
    Digital Academy for Ads
  • Deep Learning Summer School
    JULY 2017 - PRESENT
    University of Deusto
  • Deep Learning Nanodegree
    JANUARY 2017 - PRESENT
    Udacity
  • Machine Learning Online Course
    FEBRUARY 2016 - PRESENT
    Stanford University
  • Language and Modern Technologies
    FEBRUARY 2016 - PRESENT
    Goethe University Frankfurt/Main
