Enes Gokce, Developer in State College, PA, United States

Verified Expert in Engineering

Bio

Enes is a data scientist with seven years of experience in machine learning and natural language processing (NLP). He has a demonstrated history of working with deep learning and extensive experience programming in Python and R. His areas of professional interest include generative AI, large language models (LLMs), and hybrid AI solutions with retrieval-augmented generation (RAG) systems. Enes is a US permanent resident.

Portfolio

Native AI
SQL, Open-source LLMs, Large Language Model Operations (LLMOps)...

Experience

  • Statistics - 8 years
  • Machine Learning - 8 years
  • Artificial Intelligence (AI) - 8 years
  • Amazon SageMaker - 6 years
  • Python - 5 years
  • SQL - 5 years
  • Large Language Model Operations (LLMOps) - 5 years
  • Vector Databases - 3 years

Availability

Part-time

Preferred Environment

Git, Visual Studio Code (VS Code)

The most amazing...

...application I've built is a RAG system with a vector database.

Work Experience

NLP Data Scientist

2021 - 2025
Native AI
  • Developed novel and accurate NLP systems using generative AI and large language models (LLMs).
  • Contributed to named entity recognition (NER), text summarization, and emotion classification systems.
  • Prepared and presented reports on the NLP AI engine for investors and client onboarding.
  • Developed the R&D part of the Pinecone vector database solution for the RAG conversational AI chatbot system (a minimal retrieval sketch follows this entry).
  • Monitored NLP repositories' logs on Amazon CloudWatch to ensure optimal performance of AI algorithms.
  • Collaborated closely with the product team, kept them updated, and prepared technical documentation.
  • Led a team to build a chatbot system using the retrieval-augmented generation (RAG) framework.
Technologies: SQL, Open-source LLMs, Large Language Model Operations (LLMOps), Retrieval-augmented Generation (RAG), Scalable Vector Databases, PyTorch, GitHub, Amazon SageMaker, Amazon Bedrock, Artificial Intelligence (AI), Python, Docker Compose, PostgreSQL, Large Language Models (LLMs), Test-driven Development (TDD), Generative Artificial Intelligence (GenAI), Generative Pre-trained Transformers (GPT), Prompt Engineering, Anthropic, ChatGPT, Claude, APIs, Neural Networks, Amazon Web Services (AWS), AI Chatbots, Chatbots, Conversational AI, Fine-tuning
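
The retrieval step of that Pinecone-backed chatbot can be illustrated with a minimal sketch. It assumes the current Pinecone Python client, an OpenAI embedding model, and a metadata field named "text"; none of these specifics come from the production system.

```python
# Hypothetical sketch: querying a Pinecone index for the RAG retrieval step.
# The index name, embedding model, and metadata schema are assumptions.
from pinecone import Pinecone
from openai import OpenAI

pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("chatbot-knowledge-base")  # hypothetical index name
openai_client = OpenAI()                    # embedding backend is an assumption

def retrieve_context(question: str, top_k: int = 5) -> list[str]:
    """Embed the user question and return the top-k matching text chunks."""
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=question,
    ).data[0].embedding
    results = index.query(vector=embedding, top_k=top_k, include_metadata=True)
    return [match.metadata["text"] for match in results.matches]

print(retrieve_context("How do I reset my account password?"))
```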

Experience

Market Research Survey Data Analysis with Large Language Models (LLMs)

The project was for an enterprise market research company client. I was responsible for developing a retrieval-augmented generation (RAG) solution on the R&D side to analyze their data. Once the solution was validated, I worked with the SWE team to put it into production.

Some steps I completed in this project:
• Converted tabular data to textual data with text augmentation.
• Created a RAG system using the LangChain and LlamaIndex frameworks, as sketched below.
• Tested different implementation ideas to find the one that gave the best results for this specific client.
• Evaluated LLM outputs against the LLM Evaluation Benchmark Rubric developed by the in-house data science team (human evaluation).
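
The first two steps can be illustrated with a minimal sketch, assuming a small survey CSV and an OpenAI embedding backend behind a LangChain FAISS retriever; the file name, column names, and embedding model are illustrative, and the LlamaIndex side of the system is not shown.

```python
# Hypothetical sketch: verbalize tabular survey rows, then index and retrieve them.
import pandas as pd
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings  # embedding backend is an assumption

def row_to_text(row: pd.Series) -> str:
    """Verbalize one tabular survey response as a natural-language passage."""
    return (
        f"Respondent {row['respondent_id']} ({row['age_group']}, {row['region']}) "
        f"rated the product {row['satisfaction']}/10 and commented: {row['comment']}"
    )

df = pd.read_csv("survey_responses.csv")  # hypothetical input file and columns
texts = [row_to_text(row) for _, row in df.iterrows()]

# Index the verbalized rows and retrieve the most relevant ones for a question.
vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

for doc in retriever.invoke("What do dissatisfied respondents complain about most?"):
    print(doc.page_content)
```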

Building a RAG System for an Enterprise Client

A hybrid, LLM-based retrieval-augmented generation (RAG) system that lets users extract insights from large survey datasets. With this system, users can interact with and explore survey data more effectively.

Tools: PostgreSQL vector database, Bedrock API, Claude 3 model, Mistral 7B model, word embeddings
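
A minimal sketch of the retrieval step, assuming a pgvector-enabled PostgreSQL table and an Amazon Titan embedding model on Bedrock; the table schema, connection string, and embedding model are assumptions, since the project only specifies a PostgreSQL vector database, the Bedrock API, and word embeddings. The retrieved context would then be passed to the Claude 3 or Mistral 7B model for answer generation.

```python
# Hypothetical sketch: nearest-neighbor retrieval from a pgvector table,
# using an Amazon Bedrock embedding model for the query vector.
import json

import boto3
import psycopg2

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    # Titan embeddings are an assumption; the project only specifies "word embeddings".
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

question = "Which respondent segments report the lowest satisfaction?"
query_vector = embed(question)

conn = psycopg2.connect("dbname=surveys")  # hypothetical connection string
with conn.cursor() as cur:
    # `chunks(content text, embedding vector)` is a hypothetical pgvector table;
    # `<=>` is pgvector's cosine-distance operator.
    cur.execute(
        "SELECT content FROM chunks ORDER BY embedding <=> %s::vector LIMIT 5",
        (json.dumps(query_vector),),
    )
    context = "\n".join(row[0] for row in cur.fetchall())

print(context)  # context handed to the Claude 3 or Mistral 7B model for generation
```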

Interview Question Generation System

This project generates interview questions from job descriptions posted on a job board website. For each job role category, my AI solution generated five interview questions, each from a different question category.

Tools: Amazon Bedrock, Amazon SageMaker, Claude 3, prompt engineering
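
A minimal sketch of the generation step, assuming the Bedrock runtime API and a Claude 3 Sonnet model; the prompt wording, question categories, and model ID are illustrative assumptions rather than the production prompt.

```python
# Hypothetical sketch: prompting a Claude 3 model on Amazon Bedrock to draft
# five interview questions from a job description.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

PROMPT = """You are an experienced technical recruiter.
Read the job description below and write five interview questions,
one from each category: technical depth, problem solving, collaboration,
domain knowledge, and culture fit. Return them as a numbered list.

Job description:
{job_description}"""

def generate_questions(job_description: str) -> str:
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed Claude 3 variant
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 800,
            "messages": [{
                "role": "user",
                "content": PROMPT.format(job_description=job_description),
            }],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]

print(generate_questions("Senior NLP data scientist: build RAG pipelines on AWS..."))
```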

Topic Understanding

Worked on topic generation for each document.
• Conducted a literature review.
• Created an R&D roadmap based on the literature review.
• Created a demo output.
• Communicated with stakeholders about the feature development process.
• Delivered the R&D part of the solution for topic extraction and topic classification (a minimal classification sketch follows this list).
• Worked with the SWE team on the project deployment.
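
A minimal sketch of prompt-based topic classification, assuming the Anthropic API and a fixed topic list; the model, taxonomy, and prompt are illustrative assumptions, since the project's actual tooling is not specified here.

```python
# Hypothetical sketch: zero-shot topic classification of a document against a
# fixed topic list, using the Anthropic API.
import anthropic

TOPICS = ["pricing", "product quality", "customer support", "shipping", "other"]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def classify_topic(document: str) -> str:
    """Ask the model to pick exactly one topic label for the document."""
    message = client.messages.create(
        model="claude-3-haiku-20240307",  # assumed model
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": (
                f"Classify the document into exactly one of these topics: "
                f"{', '.join(TOPICS)}.\nAnswer with the topic name only.\n\n"
                f"Document:\n{document}"
            ),
        }],
    )
    return message.content[0].text.strip().lower()

print(classify_topic("The package arrived two weeks late and the box was damaged."))
```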

Education

2015 - 2016

Master of Education Degree in Adult Education

University of Minnesota - Saint Paul, Minnesota, USA

2006 - 2013

Bachelor of Science Degree in Mathematics Education

Bogazici University - Istanbul, Turkey

Skills

Libraries/APIs

PyTorch

Tools

Amazon SageMaker, ChatGPT, Claude, Git, GitHub, Docker Compose

Languages

Python, SQL

Platforms

Visual Studio Code (VS Code), Amazon Web Services (AWS), AWS IoT

Storage

PostgreSQL

Paradigms

Test-driven Development (TDD)

Frameworks

Bedrock, LlamaIndex

Other

Mathematics, Statistics, Machine Learning, Deep Learning, Natural Language Processing (NLP), Vector Databases, Large Language Model Operations (LLMOps), Amazon Bedrock, Prompt Engineering, Vectorization, Open-source LLMs, Literature Review, Retrieval-augmented Generation (RAG), Scalable Vector Databases, Artificial Intelligence (AI), Large Language Models (LLMs), Generative Artificial Intelligence (GenAI), Generative Pre-trained Transformers (GPT), Anthropic, Neural Networks, AI Chatbots, Chatbots, Conversational AI, Fine-tuning, Data Visualization, APIs, LangChain, ChatGPT API, Cloud Computing, SaaS, Startups
