Lukas Sirsinaitis

Verified Expert in Engineering

Artificial Intelligence Developer

Vilnius, Vilnius County, Lithuania

Toptal member since July 24, 2020

Bio

With an academic background in finance and healthcare, Lukas excels at solving business problems with machine learning. His most commonly used tools are Python, SQL, and Spark, and he has 5+ years of experience in NLP and recommender systems. He holds multiple certifications, including Google Data Engineer and Azure AI Engineer, and is comfortable building data pipelines in the cloud. His previous experience includes IBM Global Business Services and IBM Research.

Portfolio

Legal Tech Stealth Mode Startup
Python, Artificial Intelligence (AI), Machine Learning, ETL, Data Engineering...
Benable
Elasticsearch, Artificial Intelligence (AI), Machine Learning, Search Engines...
University of North Carolina at Chapel Hill
Artificial Intelligence (AI), Generative Artificial Intelligence (GenAI)...

Experience

  • Python - 5 years
  • Pandas - 5 years
  • SQL - 4 years
  • Natural Language Processing (NLP) - 3 years
  • Artificial Intelligence (AI) - 3 years
  • Generative Pre-trained Transformers (GPT) - 3 years
  • Microsoft Azure - 1 year
  • Google Cloud Platform (GCP) - 1 year

Availability

Part-time

Preferred Environment

Python, MacOS, Anaconda, Jupyter Notebook, PyCharm

The most amazing...

...thing I've created is a neural network-based system that handles thousands of complex emails every month and significantly reduces the manual workload.

Work Experience

Senior Machine Learning Engineer

2024 - 2024
Legal Tech Stealth Mode Startup
  • Developed a legal tech tool leveraging Google Cloud Vision's OCR technology and OpenAI's GPT models to interpret and present data extracted from legal documents up to 60 years old.
  • Engineered a tiered chain of prompts, combining regex with appropriately sized large language models (LLMs) to balance cost and extraction accuracy (see the sketch after the technology list).
  • Transitioned a web scraper prototype to a fully operational headless scraping solution, employing Selenium within a Docker container for enhanced scalability and maintenance.
  • Designed and deployed a scalable cloud architecture on AWS, using services such as S3, Lambda, OpenAI's API, and ECR, along with serverless scraping tasks using repurposed SageMaker training jobs with custom Docker containers.
  • Prepared automated deployment scripts using AWS Cloud Development Kit (CDK) and TypeScript in an infrastructure as code (IaC) format.
  • Conducted extensive testing and continuous refinements to improve the reliability and accuracy of the tool, collaborating closely with legal professionals.
Technologies: Python, Artificial Intelligence (AI), Machine Learning, ETL, Data Engineering, Cloud Deployment, Amazon Web Services (AWS), Infrastructure as Code (IaC), Google Cloud, Optical Character Recognition (OCR), AWS Cloud Development Kit (CDK), TypeScript, Large Language Models (LLMs), OpenAI, Large Language Model Operations (LLMOps), Docker, Selenium, Amazon S3 (AWS S3), AWS Lambda, Amazon Elastic Container Registry (ECR), Amazon SageMaker, Minimum Viable Product (MVP), OpenAI GPT-4 API, OpenAI GPT-3 API, Generative Pre-trained Transformers (GPT), Generative Pre-trained Transformer 4 (GPT-4), Generative Pre-trained Transformer 3 (GPT-3), Image Processing, DevOps, AWS DevOps, Serverless
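
The prompt chain above escalates from cheap regex checks to progressively larger models. Below is a minimal Python sketch of that tiering idea using the openai package; the extracted field, regex pattern, and model names are illustrative assumptions rather than the startup's actual configuration.

```python
import os
import re

from openai import OpenAI  # assumes openai>=1.0

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Hypothetical pattern for a recording date stamped on older deeds.
DATE_RE = re.compile(r"\b(\d{1,2}/\d{1,2}/\d{2,4})\b")


def extract_recording_date(ocr_text: str) -> str | None:
    """Tiered extraction: regex first, smaller LLM next, larger LLM last."""
    # Tier 1: free -- a regex hit on clean OCR text needs no model call.
    match = DATE_RE.search(ocr_text)
    if match:
        return match.group(1)

    # Tiers 2 and 3: escalate to progressively larger (and costlier) models.
    for model in ("gpt-3.5-turbo", "gpt-4"):  # illustrative model choices
        response = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[
                {"role": "system", "content": "Return only the recording date, or UNKNOWN."},
                {"role": "user", "content": ocr_text[:8000]},
            ],
        )
        answer = response.choices[0].message.content.strip()
        if answer != "UNKNOWN":
            return answer
    return None
```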

Advisor (via Toptal)

2024 - 2024
Benable
  • Provided advisory services on recommender system planning based on business needs and helped with data preparation, feature engineering, and overall system design.
  • Explored various recommender system approaches, including hybrid matrix factorization and closed-source solutions, and recommended Amazon Personalize, leading to improved user experience based on qualitative user feedback.
  • Helped the company substantially improve content personalization in just a month with minimal development resources.
Technologies: Elasticsearch, Artificial Intelligence (AI), Machine Learning, Search Engines, Recommendation Systems, Amazon Personalize, Matrix Factorization, Data Preprocessing
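
Once an Amazon Personalize campaign like the one recommended above is trained, serving recommendations comes down to a single runtime call. Below is a minimal boto3 sketch; the region, campaign ARN, and user ID are placeholders, not Benable's actual resources.

```python
import boto3

personalize_rt = boto3.client("personalize-runtime", region_name="us-east-1")


def recommend_items(user_id: str, campaign_arn: str, k: int = 25) -> list[str]:
    """Fetch the top-k item recommendations for a user from a trained Personalize campaign."""
    response = personalize_rt.get_recommendations(
        campaignArn=campaign_arn,
        userId=user_id,
        numResults=k,
    )
    return [item["itemId"] for item in response["itemList"]]


# Example usage with placeholder identifiers.
# items = recommend_items("user-123", "arn:aws:personalize:us-east-1:123456789012:campaign/example-recs")
```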

Senior AI Engineer (via Toptal)

2024 - 2024
University of North Carolina at Chapel Hill
  • Led, planned, and implemented a generative AI pilot project from initial concept to a rigorously tested solution ready for public testing, all under tight deadlines.
  • Utilized generative AI (GPT-4 Turbo and GPT-4o) to interact with users and enable automated decision-making using adapted medical documentation and advanced prompt engineering. Implemented numerous guardrails to ensure safety and reliability.
  • Built a tool that mimics consultations with a health practitioner, advising personal health strategies based on medical literature (side effects, contraindications), the user's health history, preferred method of administration, and other factors.
  • Ensured every consultation produces a comprehensive PDF profile for healthcare provider visits, improving the efficiency of medical consultations.
  • Planned and implemented serverless infrastructure for conversation initiation and history storage using AWS Lambda, Amazon S3, Amazon API Gateway, and Amazon EventBridge, resulting in low operating costs and seamless scalability during peak usage.
  • Planned and implemented a chat history storage and retrieval system, enabling subject-matter experts to efficiently review interactions and prepare high-quality training data.
Technologies: Artificial Intelligence (AI), Generative Artificial Intelligence (GenAI), AI Chatbots, OpenAI, Cloud, OpenAI GPT-4 API, AWS Lambda, Amazon S3 (AWS S3), Amazon EventBridge, Amazon API Gateway, JSON Web Tokens (JWT), Generative Pre-trained Transformer 4 (GPT-4), Generative Pre-trained Transformers (GPT), Generative Pre-trained Transformer 3 (GPT-3), AI Agents
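
The serverless pieces above (Lambda for conversation handling, S3 for chat history) can be combined in a single handler. The sketch below shows the general shape, assuming an API Gateway proxy event and placeholder bucket and model names; the real guardrails and prompt logic are omitted.

```python
import json
import os
from datetime import datetime, timezone

import boto3
from openai import OpenAI

s3 = boto3.client("s3")
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
BUCKET = os.environ.get("CHAT_HISTORY_BUCKET", "example-chat-history")  # placeholder bucket name


def handler(event, context):
    """Answer one chat turn and persist the exchange to S3 for later expert review."""
    body = json.loads(event["body"])
    conversation_id, user_message = body["conversation_id"], body["message"]

    completion = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a cautious health-information assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    answer = completion.choices[0].message.content

    # One JSON object per turn keeps review and training-data preparation simple.
    key = f"conversations/{conversation_id}/{datetime.now(timezone.utc).isoformat()}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps({"user": user_message, "assistant": answer}).encode("utf-8"),
    )
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```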

Senior Machine Learning Engineer (via Toptal)

2023 - 2024
CultureX Inc.
  • Developed an end-to-end MLOps pipeline, which included a fine-tuned LLM (780M and 3B Flan-T5 options). The parallel pipeline facilitated inference on millions of data points using GPUs, AWS Step Functions, a SageMaker training job, and AWS Lambda.
  • Refactored an XGBoost and SHAP values algorithm from a GPU-based to an efficient CPU- and EFS-based solution with massively parallel AWS Lambda invocations, achieving a more than 20x speedup and reducing the average runtime from 10 minutes to 30 seconds.
  • Developed an LLM-based classifier as a copilot to the internal human evaluation of models.
  • Utilized Hugging Face's Optimum library and ONNX Runtime to prepare a quantized open-source large language model (Flan-T5) for deployment to AWS Lambda, enabling massively scalable inference requests.
  • Fine-tuned OpenAI's GPT models with custom datasets and incorporated models into the main application using OpenAI API, AWS Step Functions, AWS Lambda, and the AWS Cloud Development Kit (TypeScript).
  • Conducted numerous experiments in summarization and retrieval-augmented generation tasks. Utilized models at Amazon Bedrock and used a second-generation AWS Inferentia accelerator for experiments with the LLaMA-2 model.
  • Developed a scalable information retrieval system for million-row datasets. It included an embarrassingly parallel pipeline with GPU-based embedding generation and upload to PostgreSQL DB using AWS Step Functions, Amazon SageMaker, and Amazon S3.
  • Built the information retrieval system in IaC format (TypeScript and CDK), enabling rapid deployment in minutes.
  • Developed a hybrid, low-latency system designed for querying large datasets. The solution efficiently caches results by leveraging a combination of Amazon DynamoDB, DuckDB, Amazon Elastic File System (EFS), and Amazon Athena.
Technologies: Machine Learning, Language Models, Deep Neural Networks (DNNs), AWS Cloud Architecture, TensorFlow, Python, Amazon SageMaker, PyTorch, Natural Language Processing (NLP), AWS Cloud Development Kit (CDK), AWS Trainium, Hugging Face, Generative Pre-trained Transformers (GPT), HPCC Systems, Amazon Web Services (AWS), AWS Lambda, Lambda Functions, AWS Step Functions, Amazon S3 (AWS S3), XGBoost, SHAP, Machine Learning Operations (MLOps), Large Language Models (LLMs), Optimum, Open Neural Network Exchange (ONNX), Flan-T5, Llama 2, TypeScript, Amazon Athena, DuckDB, Amazon DynamoDB, OpenAI, AWS Inferentia, Infrastructure as Code (IaC), PostgreSQL, Pgvector, Amazon RDS, Relational Database Services (RDS), Llama, Retrieval-augmented Generation (RAG), Generative Pre-trained Transformer 4 (GPT-4), Generative Pre-trained Transformer 3 (GPT-3), Software as a Service (SaaS), Open-source LLMs, Large Language Model Operations (LLMOps), DevOps, AWS DevOps, Model Tuning, Serverless, Amazon Bedrock, Bedrock, Vector Search, Vector Databases, Scalable Vector Databases
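
The information retrieval system above stores GPU-generated embeddings in PostgreSQL with pgvector. Below is a minimal query-side sketch; the connection string, table schema, and embedding model are illustrative assumptions, not the project's actual setup.

```python
import psycopg2
from sentence_transformers import SentenceTransformer

# Placeholder connection string and a small open-source embedding model.
conn = psycopg2.connect("postgresql://user:password@localhost:5432/retrieval")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings


def top_k_passages(query: str, k: int = 10) -> list[tuple[str, float]]:
    """Return the k nearest passages by cosine distance using the pgvector <=> operator."""
    embedding = encoder.encode(query).tolist()
    vector_literal = "[" + ",".join(str(x) for x in embedding) + "]"
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT text, embedding <=> %s::vector AS distance
            FROM passages  -- assumed table: passages(text TEXT, embedding VECTOR(384))
            ORDER BY distance
            LIMIT %s
            """,
            (vector_literal, k),
        )
        return cur.fetchall()
```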

Machine Learning Engineer

2022 - 2023
A Leading Publisher of English Language Reference Material
  • Spearheaded a project as the primary machine learning engineer, working alongside an intern, and implemented two language models that generated novel dictionary entries and ranked existing dictionary data.
  • Used PyTorch, fastText, NLTK, spaCy, and other Python libraries to develop generative and ranking algorithms that employed large language models, word vectors, pre-trained models for toxicity filtering, spell-checking tools, and rule-based filtering.
  • Increased the speed of the final algorithm using Redis cache, accessed terabytes of public and private data stored in MongoDB and Amazon S3, and preprocessed using powerful AWS EC2 instances.
  • Established a comprehensive MLOps pipeline hosted on an EC2 instance, which incorporated data retrieval from MongoDB, algorithmic data transformations using Python, and extensive data validation of the model output.
  • Iteratively refined the algorithm based on close collaboration with subject matter experts and on metrics scored against a sample dataset. Led biweekly meetings with non-technical SMEs, presenting slides with diagrams and algorithm explanations.
  • Delivered the solution under tight deadlines and received excellent feedback after an extensive review by dictionary editors. The outcome is used by tens of millions of individuals worldwide.
Technologies: Python, Generative Pre-trained Transformers (GPT), Natural Language Processing (NLP), JSON, CSV, fastText, BERT, Word2Vec, PyTorch, Generative Systems, SpaCy, Generative Artificial Intelligence (GenAI), OpenAI, Jupyter, Jupyter Notebook, Databases, Natural Language Toolkit (NLTK), ChatGPT, Team Leadership, CI/CD Pipelines, Distributed Systems, Hugging Face
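
Redis was used above to avoid recomputing expensive generation and ranking results. A minimal caching wrapper is sketched below; the key scheme and TTL are illustrative, and score_entry is a stand-in for the real LLM- and rule-based ranking logic.

```python
import hashlib
import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)


def score_entry(headword: str, definition: str) -> float:
    """Placeholder for the real ranking algorithm (LLM scoring, filters, rules)."""
    return float(len(definition)) / 100.0


def cached_score(headword: str, definition: str, ttl: int = 86_400) -> float:
    """Look up a ranking score in Redis before recomputing it."""
    key = "score:" + hashlib.sha256(f"{headword}|{definition}".encode()).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)

    score = score_entry(headword, definition)
    cache.setex(key, ttl, json.dumps(score))
    return score
```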

Machine Learning Engineer

2021 - 2022
Visibly Works LLC, a subsidiary of Channel Bakers, Inc.
  • Drove iterative planning, grounded in user feedback and data, with the CEO of a large eCommerce analytics company based in California. The long-term goal was to optimize over $250 million of client spend using data science and machine learning.
  • Researched terabytes of eCommerce data using Elasticsearch, MongoDB, and Amazon Athena, and prepared dashboards and charts for stakeholder decision-making using Google Data Studio, Tableau, Plotly, or Matplotlib.
  • Unlocked better spending opportunities by building proprietary automated insights, with algorithms developed in Python and data preprocessed using Amazon Athena or Elasticsearch.
  • Investigated an early version of Amazon Marketing Cloud containing 300+ features with interaction-level data on millions of users. Contributed to improving data infrastructure by identifying issues in data aggregation from high-traffic sources.
  • Extracted insights from Amazon Marketing Cloud by developing complex SQL queries with multiple interrelated subquery components in the context of privacy restrictions and limited SQL functionality.
Technologies: Amazon Web Services (AWS), Elasticsearch, Data Science, Python 3, Microsoft Visual Studio, Jupyter Notebook, Anaconda, Documentation, NoSQL, eCommerce, Predictive Analytics, SQL, Plotly, Matplotlib, Tableau, Google Data Studio, Amazon Athena, MongoDB, NumPy, Pandas, Data Analytics, Microsoft Excel, Data Queries, Time Series, JSON, CSV, Cloud, ARIMA, Forecasting, SARIMA, Jupyter, Statistical Analysis, Databases, ETL, eCommerce Analysis, Distributed Systems, Algorithms, Statistical Modeling, Marketing Mix Modeling, Customer Segmentation, Data-driven Marketing, Data Modeling, Software as a Service (SaaS)
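
Much of the Amazon Marketing Cloud work above came down to running layered SQL from Python against Athena-style engines. Below is a minimal sketch using awswrangler; the database, table, and column names are placeholders, and the aggregation is only a simplified stand-in for the multi-subquery analyses described above.

```python
import awswrangler as wr

# Hypothetical aggregation over interaction-level data, with a subquery feeding a rollup.
SQL = """
WITH user_spend AS (
    SELECT user_id, SUM(ad_spend) AS spend, COUNT(*) AS impressions
    FROM impressions_table
    GROUP BY user_id
)
SELECT CASE WHEN spend > 100 THEN 'high' ELSE 'low' END AS spend_tier,
       COUNT(*)         AS users,
       AVG(impressions) AS avg_impressions
FROM user_spend
GROUP BY 1
"""

df = wr.athena.read_sql_query(SQL, database="marketing_analytics")  # placeholder database
print(df.head())
```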

Machine Learning Engineer

2020 - 2021
Jumprope (acquired by LinkedIn)
  • Developed a video and image content recommendation engine, as the sole machine learning engineer, for a social platform similar to Pinterest.
  • Developed a recommendation engine consisting of a hybrid matrix factorization model, a custom algorithm based on user activity data distribution, and rule-based filters.
  • Built a custom UDF-based ETL pipeline in Redshift. The pipeline aggregated user behavior data (time spent, views, progress, likes, bookmarks, user polls, impressions) and data on user and item features.
  • Employed online A/B testing, continuously training multiple ML models to gradually refine the production model toward the optimum. The platform eventually grew to 2 million monthly users and was later acquired by LinkedIn.
  • Implemented a multi-armed bandit testing system that optimized push notification timing for every user.
  • Developed a proof of concept for the summarization of textual data by using state-of-the-art transformer models.
Technologies: Amazon Web Services (AWS), Anaconda, Artificial Intelligence (AI), Python 3, Machine Learning, Recommendation Systems, ETL, Redshift, User-defined Functions (UDF), A/B Testing, Neural Networks, Data Science, Amazon SageMaker, Deep Learning, Data Engineering, SQL, NumPy, Pandas, APIs, Tableau, Product Analytics, Microsoft Excel, Data Queries, Machine Learning Operations (MLOps), JSON, CSV, Word2Vec, fastText, Cloud, Machine Vision, PyTorch, Jupyter, Jupyter Notebook, Databases, CI/CD Pipelines, Algorithms, Customer Segmentation, Data Modeling, Computer Vision Algorithms
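
The push-notification bandit mentioned above can be reduced to a per-user Thompson sampling loop over candidate send hours. Below is a minimal NumPy sketch of that idea; the time slots and reward definition (notification opened or not) are assumptions, not the platform's actual configuration.

```python
import numpy as np

SLOTS = [9, 12, 18, 21]  # candidate send hours, illustrative


class NotificationBandit:
    """Beta-Bernoulli Thompson sampling over candidate push-notification hours."""

    def __init__(self, n_arms: int):
        self.successes = np.ones(n_arms)  # Beta(1, 1) prior per arm
        self.failures = np.ones(n_arms)

    def choose_slot(self) -> int:
        samples = np.random.beta(self.successes, self.failures)
        return int(np.argmax(samples))

    def update(self, arm: int, opened: bool) -> None:
        if opened:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1


# Example usage: one bandit per user, updated as open/ignore feedback arrives.
bandit = NotificationBandit(len(SLOTS))
arm = bandit.choose_slot()
print(f"send next notification at {SLOTS[arm]}:00")
bandit.update(arm, opened=True)
```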

Data Scientist

2018 - 2020
IBM
  • Used data science to solve various business problems, including human resource department transformation, M&A process transformation, fraud detection, and IT asset commercialization, all supporting revenue and profitability growth.
  • Made significant contributions to various projects and was chosen as a member of IBM's highly selective special equity program designed to reward IBM's highest contributors.
  • Led workshops at IBM events with up to 350 participants. The workshops covered Watson Health, natural language processing, the latest cloud advancements for data scientists (AutoAI, petabyte-scale databases, etc.), and cloud certifications.
  • Collaborated with remote global teams at IBM Global Business Services and IBM Research.
  • Mentored five interns who then went on to be successful full-time employees at IBM.
Technologies: Data Visualization, Predictive Modeling, Data Analysis, Predictive Analytics, Analytics, MacOS, Linux, Docker, Python, Deep Neural Networks (DNNs), Machine Learning, PostgreSQL, IBM Cloud, Spark, Keras, TensorFlow, Natural Language Toolkit (NLTK), NumPy, SpaCy, Scikit-learn, Pandas, Data Science, Deep Learning, Computer Vision, BERT, Data Engineering, SQL, APIs, Data Analytics, Microsoft Excel, Data Queries, Kubernetes, Team Leadership, Machine Learning Operations (MLOps), JSON, CSV, Cloud, Neural Networks, Machine Vision, PyTorch, Flask, XGBoost, Distributed Systems, Jupyter Notebook, Jupyter, Databases, NoSQL, fastText, CI/CD Pipelines, Recurrent Neural Networks (RNNs), REST APIs, Algorithms, Statistical Modeling, Data Modeling, Hugging Face

Projects

Complex Email Answering System

An email answering system based on a transformer (DistilBERT) neural network trained on GPUs using PyTorch. It handles thousands of emails monthly, significantly reducing the manual workload.

My Contributions:
• Enabled the system to reach precision levels of over 90% on multiple topics.
• Worked closely with the team from multiple continents to achieve the final result.
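
A DistilBERT email-topic classifier of the kind described above can be fine-tuned with the Hugging Face Trainer in a few dozen lines. The sketch below assumes a small labeled dataset of (email text, topic id) pairs; the label set and hyperparameters are illustrative, not those of the production system.

```python
import torch
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Tiny illustrative dataset; the real system used thousands of labeled emails.
data = Dataset.from_dict({
    "text": ["Please update my billing address.", "The attachment will not open."],
    "label": [0, 1],  # e.g., 0 = account change, 1 = technical issue
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

tokenized = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="email-classifier", num_train_epochs=3, per_device_train_batch_size=8),
    train_dataset=tokenized,
)
trainer.train()

# Inference on a new email (keep tensors on the same device as the trained model).
inputs = tokenizer("My invoice seems wrong.", return_tensors="pt", truncation=True)
inputs = {name: tensor.to(model.device) for name, tensor in inputs.items()}
with torch.no_grad():
    predicted_topic = int(torch.argmax(model(**inputs).logits, dim=-1))
print(predicted_topic)
```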

Investigative Crime Analysis Tool

An investigative analysis (counter-terrorism, cyber-crime, counter-narcotics) tool for law enforcement agencies.

My Contributions:
• Implemented a custom machine learning model (NER, decision trees, and rules) to automate a data import process (file content recognition within XLSX, CSV, TXT) and mapping to a custom schema.
• Used Kafka Event Streams and RabbitMQ for time-sensitive decoupled messaging and cloud-object storage for data retrieval.
• Packaged the application into a Docker container for deployment to Kubernetes.
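
The automated import step above maps raw file columns onto a custom schema, with NER as one of the signals. Below is a minimal sketch of using spaCy entity counts to guess a column's type; the label-to-schema mapping and threshold are illustrative, and the real system also combined decision trees and rules.

```python
from collections import Counter

import spacy

# Assumes the small English model is installed: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Illustrative mapping from spaCy entity labels to schema fields.
LABEL_TO_FIELD = {"PERSON": "person_name", "GPE": "location", "ORG": "organization", "DATE": "date"}


def guess_column_field(values: list[str], min_share: float = 0.6) -> str | None:
    """Guess which schema field a column maps to by majority entity label."""
    labels = Counter()
    for value in values:
        for ent in nlp(value).ents:
            labels[ent.label_] += 1
    if not labels:
        return None
    label, count = labels.most_common(1)[0]
    if count / max(len(values), 1) >= min_share:
        return LABEL_TO_FIELD.get(label)
    return None


# Example usage on a sample of one column from an imported file.
print(guess_column_field(["John Smith", "Maria Garcia", "Ahmed Khan"]))  # likely "person_name"
```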

Custom Recommender System

I built a custom video and image content recommendation engine for a social platform similar to Pinterest.

My Contributions:
• Built an engine that consisted of an ML model (hybrid matrix factorization), a custom algorithm based on users' activity data distribution, and rule-based filters.
• Developed a custom UDF-based ETL pipeline in Redshift that ingested and preprocessed user behavior data (time spent, views, progress, likes, bookmarks, user polls, impressions) and data on user and item features.
• Gradually refined the hyperparameters of the production ML model toward the optimum using continuous online A/B testing.

Commercial Project Classification

Project:
• Our goal was to assist senior management with project investigation by estimating the probability that a project belongs to one of the following domains: technology and IT; central support and facilities management; customer interaction and sales; finance and risk; general management; human capital; marketing and experience management; supply; and make and delivery.

My Contributions:
• Took over the project midway through its development.
• Iterated through different machine learning algorithms, augmented and preprocessed the data, and implemented a Flask API.
• Used the award-winning XGBoost algorithm to classify commercial projects and increased prediction accuracy on the test set (a minimal sketch follows this list).
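
A domain classifier like the one above reduces to a multiclass XGBoost model behind a small Flask endpoint. The sketch below shows that shape with made-up features and a subset of the domains; the real feature engineering and training data were project-specific.

```python
import numpy as np
import xgboost as xgb
from flask import Flask, jsonify, request

DOMAINS = ["technology_and_it", "finance_and_risk", "human_capital"]  # illustrative subset

# Train on placeholder features (e.g., TF-IDF vectors or engineered project attributes).
X_train = np.random.rand(300, 20)
y_train = np.random.randint(0, len(DOMAINS), size=300)

model = xgb.XGBClassifier(objective="multi:softprob")
model.fit(X_train, y_train)

app = Flask(__name__)


@app.route("/classify", methods=["POST"])
def classify():
    """Return per-domain probabilities for one project's feature vector."""
    features = np.array(request.get_json()["features"]).reshape(1, -1)
    probabilities = model.predict_proba(features)[0]
    return jsonify(dict(zip(DOMAINS, probabilities.round(3).tolist())))


if __name__ == "__main__":
    app.run(port=5000)
```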

Creating Customer Segments

My Contributions:
• Applied unsupervised learning techniques on product spending data of customers of a wholesale distributor in Lisbon, Portugal, to identify customer segments hidden in the data.
• Explored correlations between product categories, applied PCA transformations, and implemented clustering algorithms to segment the transformed customer data.
• Provided insights and ways this information could assist the wholesale distributor with future service changes.
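
The segmentation above follows a standard scikit-learn flow: scale, reduce with PCA, then cluster. A minimal sketch on wholesale spending data is shown below; the file path, number of components, and cluster count are illustrative assumptions.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Spending per product category; "customers.csv" is a placeholder for the wholesale dataset.
spending = pd.read_csv("customers.csv")[
    ["Fresh", "Milk", "Grocery", "Frozen", "Detergents_Paper", "Delicassen"]
]

scaled = StandardScaler().fit_transform(spending)

# Project onto the first two principal components for clustering and plotting.
pca = PCA(n_components=2)
components = pca.fit_transform(scaled)
print("explained variance:", pca.explained_variance_ratio_.round(2))

# Illustrative choice of two segments (e.g., retailers vs. hotels/restaurants/cafes).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
spending["segment"] = kmeans.fit_predict(components)
print(spending.groupby("segment").mean().round(0))
```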

Blended ChatGPT with Warren Buffett's Investment Wisdom

Developed a prototype of a LangChain-based application that integrates ChatGPT with Warren Buffett's Letters to Shareholders (1978 to 2022), allowing users to get answers to investment-related questions grounded in information from these letters.
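
A retrieval setup of this kind can be assembled from standard LangChain building blocks. The sketch below assumes a classic (pre-1.0) LangChain API, an OpenAI key in the environment, and the letters compiled into a local PDF; the file path, chunking parameters, and model choice are placeholders.

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Placeholder path to a PDF compilation of the shareholder letters.
pages = PyPDFLoader("buffett_letters_1978_2022.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(pages)

vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)

print(qa.run("How does Buffett think about share buybacks?"))
```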

Image Caption Generation Model

Implemented a generative model based on the CNN-RNN architecture outlined in a research paper by Google scientists. The model integrated concepts from computer vision and machine translation to produce coherent sentences describing images and was trained to maximize the likelihood of the target description sentence given the training image. Preliminary results indicated that even after a relatively small amount of training, the model produced reasonably coherent captions.
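
The encoder-decoder structure described above (a CNN image encoder feeding an RNN language decoder) can be sketched compactly in PyTorch. The module below shows a training-time forward pass with teacher forcing; the vocabulary size, embedding size, and ResNet backbone are illustrative choices rather than the original configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class CaptionModel(nn.Module):
    """CNN encoder + LSTM decoder in the spirit of the Show and Tell architecture."""

    def __init__(self, vocab_size: int, embed_size: int = 256, hidden_size: int = 512):
        super().__init__()
        resnet = models.resnet50(weights=None)  # load pretrained weights in practice
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # drop the final FC layer
        self.img_proj = nn.Linear(resnet.fc.in_features, embed_size)
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, images: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        # Encode the image and use it as the first "token" of the decoder sequence.
        features = self.img_proj(self.encoder(images).flatten(1)).unsqueeze(1)
        tokens = self.embed(captions[:, :-1])  # teacher forcing: shift captions right
        hidden, _ = self.lstm(torch.cat([features, tokens], dim=1))
        return self.out(hidden)  # logits over the vocabulary at every step


# Shape check with random data: a batch of 2 images and 10-token captions.
model = CaptionModel(vocab_size=5000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 5000])
```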

Education

2017 - 2018

MSc Double Degree in Finance (Thesis in Machine Learning)

BI Norwegian Business School - Oslo, Norway

2016 - 2017

MSc Double Degree in Finance

ISM - Vilnius, Lithuania

2010 - 2016

MD Degree in Medical Science

Vilnius University - Vilnius, Lithuania

Certifications

FEBRUARY 2024 - PRESENT

Foundations of Business Strategy

University of Virginia | via Coursera

FEBRUARY 2024 - PRESENT

Advanced Business Strategy

University of Virginia | via Coursera

FEBRUARY 2024 - PRESENT

Business Growth Strategy

University of Virginia | via Coursera

FEBRUARY 2024 - PRESENT

Strategic Planning and Execution

University of Virginia | via Coursera

FEBRUARY 2024 - PRESENT

Connected Leadership

Yale University | via Coursera

JANUARY 2024 - PRESENT

International Leadership and Organizational Behavior

Università Bocconi | via Coursera

DECEMBER 2022 - DECEMBER 2024

Machine Learning Engineer

Google Cloud

AUGUST 2022 - PRESENT

Computer Vision Nanodegree

Udacity

JANUARY 2020 - PRESENT

Microsoft Certified: Azure AI Engineer Associate

Microsoft

DECEMBER 2019 - DECEMBER 2021

Professional Data Engineer

Google

SEPTEMBER 2019 - PRESENT

Building Resilient Streaming Systems on Google Cloud Platform

Coursera

SEPTEMBER 2019 - PRESENT

Google Cloud Platform Big Data and Machine Learning Fundamentals

Coursera

SEPTEMBER 2019 - PRESENT

Serverless Data Analysis with Google BigQuery and Cloud Dataflow

Coursera

SEPTEMBER 2019 - PRESENT

Serverless Machine Learning with TensorFlow on the Google Cloud Platform

Coursera

JULY 2019 - PRESENT

Artificial Intelligence Nanodegree

Udacity

JUNE 2019 - PRESENT

Natural Language Processing Nanodegree

Udacity

OCTOBER 2018 - PRESENT

Machine Learning Engineer Nanodegree

Udacity

JUNE 2018 - PRESENT

Big Data Applications: Machine Learning at Scale

Coursera

JUNE 2018 - PRESENT

Big Data Essentials: HDFS, MapReduce and Spark RDD

Coursera

MAY 2018 - PRESENT

Data Scientist with Python Career Track

DataCamp

DECEMBER 2015 - PRESENT

CFA Level 1

CFA Institute

Skills

Libraries/APIs

Scikit-learn, Keras, TensorFlow, Natural Language Toolkit (NLTK), SpaCy, Pandas, NumPy, PyTorch, Matplotlib, XGBoost, LSTM, REST APIs, OpenCV

Tools

BigQuery, PyCharm, Microsoft Visual Studio, Plotly, Amazon Athena, Amazon SageMaker, Microsoft Excel, ARIMA, SARIMA, Jupyter, ChatGPT, AWS Cloud Development Kit (CDK), AWS Step Functions, Amazon Elastic Container Registry (ECR), Gensim, Hidden Markov Model, Tableau, Spark SQL, You Only Look Once (YOLO), AWS Trainium, Open Neural Network Exchange (ONNX), AWS Inferentia, OpenAI Gym

Languages

R, Python, SQL, Python 3, TypeScript

Frameworks

Apache Spark, Spark, Flask, JSON Web Tokens (JWT), Selenium, LlamaIndex, Bedrock

Paradigms

ETL, MapReduce, Management, DevOps

Platforms

Jupyter Notebook, Linux, Windows, Docker, Anaconda, MacOS, Amazon Web Services (AWS), AWS Lambda, Google Cloud Platform (GCP), RStudio, Kubernetes, Azure

Industry Expertise

Healthcare, Bioinformatics

Storage

Databases, Data Pipelines, PostgreSQL, Elasticsearch, NoSQL, JSON, Amazon S3 (AWS S3), Cloud Deployment, Redshift, MongoDB, Google Cloud, Amazon DynamoDB

Other

Machine Learning, Statistics, Natural Language Processing (NLP), Deep Neural Networks (DNNs), Word2Vec, Artificial Intelligence (AI), Finance, Medicine, Data Science, Data Visualization, Data Analytics, Predictive Analytics, Analytics, Statistical Analysis, Predictive Modeling, Recommendation Systems, A/B Testing, Neural Networks, Documentation, Google Data Studio, Deep Learning, BERT, Data Queries, Machine Learning Operations (MLOps), Time Series, CSV, fastText, Convolutional Neural Networks (CNNs), CI/CD Pipelines, Pharmaceuticals, General Medicine, Generative Systems, Generative Artificial Intelligence (GenAI), Forecasting, OpenAI, Generative Pre-trained Transformers (GPT), LangChain, OpenAI GPT-3 API, Language Models, Image to Text, Recurrent Neural Networks (RNNs), Algorithms, Statistical Modeling, Data Modeling, Hugging Face, Lambda Functions, SHAP, Large Language Models (LLMs), Flan-T5, Llama 2, DuckDB, Pgvector, Amazon RDS, Relational Database Services (RDS), AI Content Creation, AI Chatbots, OpenAI GPT-4 API, Amazon API Gateway, Amazon Personalize, Matrix Factorization, Data Preprocessing, Optical Character Recognition (OCR), Large Language Model Operations (LLMOps), Llama, Retrieval-augmented Generation (RAG), Minimum Viable Product (MVP), Generative Pre-trained Transformer 4 (GPT-4), Generative Pre-trained Transformer 3 (GPT-3), Software as a Service (SaaS), Open-source LLMs, Model Tuning, Serverless, Amazon Bedrock, Vector Search, AI Agents, Vector Databases, Scalable Vector Databases, Microsoft Azure, IBM Cloud, Data Analysis, User-defined Functions (UDF), eCommerce Analysis, eCommerce, Computer Vision, Data Engineering, APIs, Product Analytics, Team Leadership, Azure Data Factory (ADF), Cloud, Machine Vision, Object Recognition, Distributed Systems, Generative Adversarial Networks (GANs), Image Recognition, Marketing Mix Modeling, Customer Segmentation, Data-driven Marketing, Computer Vision Algorithms, Mathematics, AWS Cloud Architecture, HPCC Systems, Optimum, Infrastructure as Code (IaC), Amazon EventBridge, Search Engines, AWS DevOps, Leadership, Kalman Filtering, Image Processing, Genomics, Medical Imaging, Strategic Planning & Execution, Strategic Planning, Business Strategy, International Leadership, Entrepreneurship, Sales
