Akshay Aradhya, Developer in Bengaluru, Karnataka, India

Akshay Aradhya

Verified Expert in Engineering

Software Developer

Location
Bengaluru, Karnataka, India
Toptal Member Since
August 25, 2022

Akshay is a former startup cofounder who is extremely passionate about programming. He is keen to get involved in any technology-related project and is a seasoned problem solver. Akshay is an experienced AWS and GCP DevOps cloud architect, a data engineer managing scraping, visualization, ETL, and pipelines, and a proficient full-stack JavaScript and TypeScript developer.

Portfolio

The Cresston Company LLC (dba Compass Languages)
JavaScript, TypeScript, Python, React, Material UI, Google Material Design...
Freelance Client
Python, Web Scraping, Amazon S3 (AWS S3), Amazon Web Services (AWS)...
Nalyze
Python, Web Scraping, Scraping, Amazon Web Services (AWS), REST, REST APIs...

Experience

Availability

Part-time

Preferred Environment

Python, JavaScript, TypeScript, Node.js, Jupyter Notebook, Amazon Web Services (AWS), Google Cloud Platform (GCP), React

The most amazing...

...thing I've built is my Monte Carlo tree search algorithm for a bot that plays Ultimate Tic-Tac-Toe, which is now ranked among the top bots on CodinGame.
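For context, the selection step at the heart of Monte Carlo tree search balances exploitation and exploration via the UCB1 formula. A minimal sketch — the tuple-based node representation is illustrative, not the actual bot's:

```python
from math import sqrt, log

def ucb1(wins, visits, parent_visits, c=1.41):
    """UCB1 score used in MCTS selection: average win rate plus an
    exploration bonus that shrinks as a child is visited more often."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return wins / visits + c * sqrt(log(parent_visits) / visits)

def select_move(children):
    """children: list of (move, wins, visits) tuples for one tree node.
    Returns the move of the UCB1-best child."""
    parent_visits = sum(visits for _, _, visits in children)
    return max(children, key=lambda ch: ucb1(ch[1], ch[2], parent_visits))[0]
```

A full MCTS bot repeats this selection down the tree, then expands, simulates a random playout, and backpropagates the result.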

Work Experience

Lead Full-stack Developer | DevOps

2022 - PRESENT
The Cresston Company LLC (dba Compass Languages)
  • Worked on a web app that allowed product managers to look at incoming client translation requests, assign translation tasks to linguists, and send the translated files back to the client.
  • Designed and built the entire front end using React, MUI 5, and Redux. It supported features such as request filtering, pagination, direct S3 uploads, and user authentication and signup backed by AWS Amplify.
  • Wrote an entirely serverless REST API back end hosted on AWS Lambda, powered by Node.js and Express.js and wrapped with the serverless-http module. It was connected to Amazon API Gateway, and all requests were authenticated via Amazon Cognito.
  • Wrote two additional microservices. One copied new DynamoDB records to another table with a different schema; the other kept the DynamoDB tables synced with Amazon OpenSearch (Elasticsearch), enabling complex querying and pagination.
  • Handled a wide range of DevOps tasks, including setting up Amazon Cognito, API Gateway, Amazon OpenSearch, Lambda, and AWS IAM roles. Also maintained separate development and production environments and wrote various automation scripts.
Technologies: JavaScript, TypeScript, Python, React, Material UI, Google Material Design, Axios, AWS Amplify, React Redux, Redux, Node.js, Express.js, Serverless, Serverless Architecture, AWS Lambda, Amazon S3 (AWS S3), Amazon Cognito, Amazon API Gateway, Amazon OpenSearch, Elasticsearch, Amazon DynamoDB, JSON Web Tokens (JWT), REST APIs, WebApp, User Interface (UI), Web UI, Amazon CloudFront CDN, Microservices, Full-stack, Full-stack Development, DevOps, AWS DevOps, DevOps Engineer, Data Pipelines, ETL, Amazon Simple Queue Service (SQS), SaaS, Front-end, Back-end, Interactive UI, Architecture
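The DynamoDB-to-OpenSearch sync follows a common pattern: a Lambda function triggered by DynamoDB Streams translates each stream record into a bulk index or delete action. A minimal sketch of that translation only — the index name, key layout, and attribute types here are assumptions, not the actual service:

```python
def from_dynamodb(attr):
    """Convert a DynamoDB-typed attribute ({'S': 'x'}, {'N': '3'}, ...)
    into a plain Python value."""
    (dtype, value), = attr.items()
    if dtype == "N":
        return float(value) if "." in value else int(value)
    if dtype == "M":
        return {k: from_dynamodb(v) for k, v in value.items()}
    if dtype == "L":
        return [from_dynamodb(v) for v in value]
    return value  # S, BOOL, etc. are already plain values

def record_to_action(record, index="requests"):
    """Map one DynamoDB stream record to an OpenSearch bulk action.
    Returns (action, document); document is None for deletes."""
    keys = {k: from_dynamodb(v) for k, v in record["dynamodb"]["Keys"].items()}
    doc_id = "|".join(str(v) for v in keys.values())
    if record["eventName"] == "REMOVE":
        return {"delete": {"_index": index, "_id": doc_id}}, None
    doc = {k: from_dynamodb(v) for k, v in record["dynamodb"]["NewImage"].items()}
    return {"index": {"_index": index, "_id": doc_id}}, doc
```

In the real handler, the resulting actions would be submitted through OpenSearch's bulk API; only the record translation is shown here.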

Lead Web Scraping Developer

2022 - 2023
Freelance Client
  • Developed a web scraper to scrape over 100k high-quality art prints from various websites, along with rich metadata about the artwork.
  • Wrote the scraper in Python using the following libraries: Selenium, Beautiful Soup, and requests. The images were downloaded locally and analyzed before being uploaded to AWS S3. The image metadata was pre-processed and uploaded to DynamoDB.
  • Designed and set up the entire AWS architecture. Also designed a simple schema for the database.
Technologies: Python, Web Scraping, Amazon S3 (AWS S3), Amazon Web Services (AWS), Amazon DynamoDB, Python 3, REST APIs, REST, Scraping, Data Scraping, Image Analysis, Back-end, Back-end Architecture, Back-end Development, Scripting, Microservices, Beautiful Soup, Selenium, Requests, APIs, Third-party APIs

Python Engineer

2022 - 2022
Nalyze
  • Built a web scraper capable of fetching four million Twitter users' details per day per account with a single Lambda function. The scraper also fetched the details of follower and following accounts.
  • Designed a serverless architecture with a Lambda function consuming events from an SQS queue, so there was no upper bound on scaling; with ten Twitter accounts, one could scrape 40 million users a day.
  • Built the scraper with Python Selenium but later switched to Requests, an HTTP library for Python, for better performance. The scraper was containerized and pushed to Amazon ECR, and the container image was run directly on AWS Lambda.
  • Set up CloudWatch Logs, metrics, and alarms to track and monitor the scraper's status. Integrated it with AWS Systems Manager Parameter Store to fetch the credentials of the accounts used for scraping.
  • Wrote simple unit tests for the scraper using pytest.
  • Surpassed the client's initial goal of 100,000 users a day with ease. Received top-scoring feedback from the client (10/10).
Technologies: Python, Web Scraping, Scraping, Amazon Web Services (AWS), REST, REST APIs, Twitter API, Selenium, AWS Lambda, Amazon CloudWatch, Amazon Simple Queue Service (SQS), Amazon Elastic Container Registry (ECR), Docker, Containerization, Containers, Pytest, Python Asyncio, Microservices, Serverless, Serverless Architecture, Back-end, Architecture, Back-end Development, Back-end Architecture, Back-end Performance

Founding Engineer (Contract)

2021 - 2022
Synth AI Labs Inc
  • Worked on the front- and back-end of the search feature. Built data pipelines for a continuous data transfer from DynamoDB to Algolia for faster search queries.
  • Built various front-end features in Vue and Electron.
  • Built a custom front end for predictive text suggestions, similar to GitHub Copilot, to test our own GPT-3 model.
  • Wrote E2E tests for a desktop app using Selenium, saving time wasted on manual testing.
Technologies: Electron, Vue, Algolia, Amazon Web Services (AWS), Amazon DynamoDB, Amazon API Gateway, AWS Lambda, JavaScript, Python, ETL, Data Pipelines, APIs, REST APIs, AWS Amplify, Front-end, Back-end, Interactive UI, Front-end Development, Front-end Architecture, Back-end Development, Back-end Architecture, Back-end Performance

Chief Technical Officer | Co-founder

2021 - 2021
Gokion
  • Set up and managed the entire AWS infrastructure from scratch. This included setting up EC2 instances and load balancers; configuring SSL/HTTPS, DynamoDB, Elasticsearch, and the Lambda functions connecting various microservices; SNS SMS; and IAM management.
  • Took care of all deployment at Gokion, including the website, web app, back-end API, and Android app. Wrote custom automation scripts to make my life easier.
  • Set up the Google Play Store console and app listing page from scratch. Took care of the entire SMS setup, DLT registration, and configuration.
  • Wrote the entire back end of the API using TypeScript, Node.js, and Express.js with a few validation modules. Wrote the whole payment integration module for Razorpay.
  • Designed and coded four big websites and web apps using TypeScript, React, and MUI. Some of them had PWA support and were fully responsive. Used JWT authentication along with a mobile number login.
Technologies: JavaScript, Python, TypeScript, Node.js, Express.js, REST, Data Scraping, React, Redux, User Interface (UI), JSON Web Tokens (JWT), Amazon Web Services (AWS), Amazon EC2, Load Balancers, Amazon DynamoDB, Elasticsearch, Kibana, Amazon S3 (AWS S3), Amazon Kinesis, Data Engineering, Data Pipelines, ETL, AWS Lambda, APIs, NoSQL, REST APIs, Full-stack, Full-stack Development, Microservices, React Redux, DevOps, AWS DevOps, DevOps Engineer, Material UI, Google Material Design, Scraping, Web Scraping, Axios, Amazon CloudWatch, SaaS, Canvas, Charts, Front-end, Back-end, Interactive UI, Architecture, API Integration, Back-end Development, Front-end Development, Back-end Architecture, Back-end Performance

Data Scientist

2018 - 2019
ShareChat
  • Worked extensively with BigQuery and Jupyter Notebooks.
  • Set up various data pipelines with Dataflow and BigQuery on GCP.
  • Wrote a geolocation-based post recommendation system using DynamoDB and Redis GEORADIUS.
  • Worked on a custom variant of collaborative filtering, using Implicit, which showed promising results as a recommendation system.
Technologies: JavaScript, Python, Jupyter Notebook, Data Science, Machine Learning, Collaborative Filtering, Google BigQuery, Amazon Web Services (AWS), Amazon EC2, Amazon Kinesis, AWS Lambda, Amazon S3 (AWS S3), Redis, Node.js, Recommendation Systems, Google Cloud Platform (GCP), Data Engineering, ETL, Data Pipelines, APIs, NoSQL, SQL, REST APIs, Back-end Development, Back-end Performance, Front-end, Front-end Development, Front-end Architecture, Back-end Architecture
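The GEORADIUS-based lookup ranks candidate posts by great-circle distance from the user. A pure-Python sketch of the same semantics — the key name and the redis-py calls in the docstring are illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometers between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def nearby_posts(posts, lon, lat, radius_km):
    """posts: {post_id: (lon, lat)}. Returns ids within the radius, nearest
    first — the semantics Redis provides natively with roughly:
        r.geoadd("posts", [lon, lat, post_id])
        r.georadius("posts", lon, lat, radius_km, unit="km", sort="ASC")
    """
    hits = sorted((haversine_km(lon, lat, plon, plat), pid)
                  for pid, (plon, plat) in posts.items())
    return [pid for dist, pid in hits if dist <= radius_km]
```

In production, Redis does this index lookup in-memory, which is what makes it suitable for per-request recommendation queries.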

Associate Software Developer

2017 - 2018
BETSOL
  • Developed a web chat application for a client. In a team of four, my role was mainly designing the front end using AngularJS and AngularJS Material.
  • Worked on modern browser features, such as desktop notifications, AngularJS Material theming, and user session management. My team received a lot of positive feedback from the client and the management team.
  • Won first place at an internal hackathon competition for building a chatbot service using Dialogflow.
  • Managed all the interns, oversaw their projects, monitored their progress, and helped them along the way.
  • Built some impressive landing pages with HTML canvas, Three.js, and p5.js. One of them, at websiteawards.com/innovations-betsol-lab, won a Web Award.
Technologies: JavaScript, Python, Docker, AngularJS, Node.js, Three.js, D3.js, Data Scraping, Web Scraping, HTML, CSS, HTML5, CSS3, NoSQL, APIs, REST APIs, Google Material Design, Full-stack, Full-stack Development, Canvas, Charts, Front-end, Back-end, Interactive UI, Web UI

Internal Web App for Video Translation Service

http://atlas.compasslanguages.com
Worked on a web app that allowed product managers to look at incoming client translation requests, assign translation tasks to linguists, and send the translated files back to the client. This was primarily an internal tool that improved efficiency and helped them deliver the videos on time.

As the only developer on this project, I built and designed the entire front end and API back end.

I built the front end using React, MUI 5, and Redux. It supports many neat features, such as search filters, pagination, and direct S3 uploads. The user login and all other user auth flows were taken care of by AWS Amplify and AWS Cognito.

The REST API back-end architecture was entirely serverless. It was written using Node.js and Express.js, directly hosted on AWS Lambda, and connected to API Gateway, where all requests were authenticated via AWS Cognito.

I also wrote many additional microservices and did a lot of DevOps for this project. There were many firsts for me here, which helped me learn and explore many AWS services I hadn't used before. It was an excellent learning experience, and I loved the autonomy I had.

Image Web Scraper for Textual Inversion

I developed a web scraper to scrape over 100k high-quality art prints from various websites, along with rich metadata about the artwork.

The scraper was written in Python using the following libraries: Selenium, Beautiful Soup, and Requests.

The images were downloaded locally and analyzed before being uploaded to AWS S3. The image metadata was pre-processed and uploaded to DynamoDB.
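The per-site extraction step can be sketched as below. The CSS selectors and field names are illustrative — each real site needed its own — and the S3/DynamoDB upload is only indicated in comments:

```python
from bs4 import BeautifulSoup

def extract_prints(html, base_url=""):
    """Parse one listing page into (image_url, metadata) pairs."""
    soup = BeautifulSoup(html, "html.parser")
    items = []
    for card in soup.select(".art-card"):  # selector is an assumption
        img = card.select_one("img")
        if not (img and img.get("src")):
            continue
        title = card.select_one(".title")
        artist = card.select_one(".artist")
        items.append((base_url + img["src"], {
            "title": title.get_text(strip=True) if title else None,
            "artist": artist.get_text(strip=True) if artist else None,
        }))
    return items

# Downstream, each image would be fetched with requests.get(url).content,
# analyzed locally, uploaded to S3 with boto3, and its metadata written
# to DynamoDB — omitted here to keep the sketch self-contained.
```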

Large Scale Twitter Web Scraper

Built a large-scale web scraper capable of fetching four million Twitter users' details per day per account with a single Lambda function.

The architecture was entirely serverless, with a Lambda function consuming events from an SQS queue, so there was no upper bound on scaling. With ten Twitter accounts, one could scrape 40 million users a day.

The scraper also fetched user, follower, and following accounts' details. This was done via an unofficial Twitter API using Requests, an HTTP library for Python. The scraper was containerized and pushed to Amazon ECR. The containerized image was directly run on AWS Lambda. Also, the scraper had CloudWatch Logs, metrics, and alarms to track and monitor its status.
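The SQS-to-Lambda fan-out described above boils down to a handler that turns each batch of queue messages into scrape jobs. A minimal sketch — the message body shape and field names are assumptions:

```python
import json

def jobs_from_sqs_event(event):
    """Turn an SQS-triggered Lambda event into a list of scrape jobs."""
    jobs = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])  # body shape is an assumption
        jobs.append({"user_id": body["user_id"],
                     "fetch_graph": body.get("fetch_graph", False)})
    return jobs

def handler(event, context=None):
    # SQS scales concurrent Lambda invocations with queue depth, which is
    # why there is no upper bound on total throughput.
    for job in jobs_from_sqs_event(event):
        pass  # the real worker fetched the user (and optionally the
              # follower/following graph) here via the Requests library
```

Because each message is an independent job, adding accounts or raising Lambda concurrency scales throughput linearly, which matches the 4M-per-account figure above.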

The client's initial goal of 100,000 users a day was easily surpassed, and I received top-scoring feedback from the client (10/10).

SaaS Platform | Catering

Developed a beautiful SaaS platform for caterers. It was a feature-rich product, with over 200 local business users from Bengaluru signed up and using it. This was the primary SaaS product of our startup, and I was the CTO and co-founder and wrote most of the codebase powering this platform.

The front end was written in TypeScript and used React, Redux, and MUI. The back end was written in TypeScript, Node.js, and Express.js and used DynamoDB as our primary database and Elasticsearch as our querying database.

Our infrastructure was hosted on AWS, and we used various services to power this SaaS platform.

CF Recommendation System

Worked on the primary post recommendation system at ShareChat, one of India's largest social media companies. The system was based on collaborative filtering, using Implicit, a GPU-optimized collaborative filtering library for Python.

I worked on the entire system end to end, including the data pipeline, which ran SQL queries on BigQuery and saved post recommendations to a DynamoDB table. I made weekly tweaks based on A/B testing results. In the end, we saw a minor increase in user likes and a 15-20% increase in user retention.
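A key preprocessing step for implicit-feedback CF is converting raw interaction logs into confidence weights before fitting an ALS-style model. A sketch of that weighting, assuming a simple (user, post) event shape and an illustrative alpha:

```python
from collections import defaultdict

def build_confidence(events, alpha=40):
    """events: iterable of (user_id, post_id) interactions.
    Returns {user: {post: 1 + alpha * count}} — the standard
    implicit-feedback confidence weighting (Hu, Koren & Volinsky)
    that ALS-style models consume as a sparse matrix."""
    counts = defaultdict(lambda: defaultdict(int))
    for user, post in events:
        counts[user][post] += 1
    return {u: {p: 1 + alpha * c for p, c in posts.items()}
            for u, posts in counts.items()}

# In the real pipeline, this structure would be converted to a SciPy CSR
# matrix and fed to Implicit's AlternatingLeastSquares model, with the
# top-N recommendations per user written back to DynamoDB.
```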

Algolia - Full Text Search

https://www.usesynth.com/
Built a quick and powerful search tool using Algolia to perform full-text search queries on user notes. I also created the data pipeline to transfer records from DynamoDB to Algolia using DynamoDB streams and AWS Lambda functions.
The front end for the search feature was built using CommandBar.

Arise Chat

Developed a web-based chat application for a US client. It was a platform for support executives and customers to communicate with their team and customers. The app was very similar to what Slack is today, with some custom features.

In a team of four, I led the front-end team. My role mainly focused on designing mockups and building the app using AngularJS and AngularJS Material. I also worked with WebSockets to send messages between the server and client.

I worked on all modern browser features, such as browser desktop notifications, AngularJS Material theming, and user session management. My team received much positive feedback from the client and the management team.

Web Scraper for Google Maps

A Selenium script that scanned geographically across a city and stored the results for a particular search keyword directly in our DynamoDB database. It gathered all the basic details provided by Google Maps and is highly customizable, working for any city and search keyword.

As my startup's CTO and lead developer, I built this tool because we needed to contact caterers and snack shops in Bangalore who would be willing to sign up for our SaaS platform. It helped us onboard another 100 business clients.

Grafana Dashboard

A private Grafana dashboard using Google BigQuery as the back end. I wrote custom SQL queries for most of the data displayed here.

Data was collected and uploaded by a background agent running on a GCP compute VM instance. Data was batched and uploaded using the load job functionality of BigQuery to keep the costs low and stay within BigQuery limits.
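The batching-plus-load-job approach can be sketched as follows. The table ID and batch size are assumptions, and the BigQuery import is done lazily so the batching helper stands alone:

```python
def batches(rows, size=500):
    """Split buffered rows into load-job-sized batches; the size here is an
    illustrative choice, tuned in practice against BigQuery's load quotas."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def upload_batch(rows, table_id="project.dataset.metrics"):
    """Append one batch of dicts via a load job rather than streaming
    inserts — load jobs carry no streaming-insert cost, which suits a
    periodic background agent."""
    from google.cloud import bigquery  # lazy: requires GCP credentials
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        autodetect=True,
    )
    job = client.load_table_from_json(rows, table_id, job_config=job_config)
    return job.result()  # blocks until the load completes
```

The agent on the VM would buffer rows, then call `upload_batch` for each chunk produced by `batches`.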
2013 - 2017

Bachelor's Degree in Computer Science

B. N. M. Institute of Technology - Bengaluru

Languages

Python, JavaScript, TypeScript, SQL, HTML, CSS, HTML5, Python 3, CSS3

Frameworks

Express.js, Redux, Selenium, Material UI, WebApp, JSON Web Tokens (JWT), Electron, AngularJS

Libraries/APIs

Node.js, React, Selenium WebDriver, REST APIs, React Redux, P5.js, Beautiful Soup, Requests, D3.js, AWS Amplify, Three.js, Twitter API, Python Asyncio, Vue

Tools

BigQuery, Amazon CloudWatch, Amazon Simple Queue Service (SQS), Amazon OpenSearch, Grafana, Amazon Elastic Container Registry (ECR), Amazon Cognito, Amazon CloudFront CDN, Canvas, Kibana, Pytest

Paradigms

ETL, Microservices, DevOps, Data Science, Serverless Architecture, Back-end Architecture, REST

Platforms

Jupyter Notebook, Amazon Web Services (AWS), AWS Lambda, Amazon EC2, Google Cloud Platform (GCP), Docker, Algolia

Storage

Amazon DynamoDB, Amazon S3 (AWS S3), NoSQL, Elasticsearch, Redis, Data Pipelines, MySQL

Other

Data Scraping, Google BigQuery, Web Scraping, Data Visualization, APIs, Scraping, Axios, Web UI, Full-stack, Full-stack Development, Charts, Front-end, Back-end, Interactive UI, Architecture, API Integration, Back-end Development, Front-end Development, Back-end Performance, Front-end Architecture, Algorithms, Amazon API Gateway, User Interface (UI), Amazon Kinesis, Machine Learning, Collaborative Filtering, Recommendation Systems, Data Engineering, Google Material Design, Serverless, SaaS, Load Balancers, Containers, Containerization, AWS DevOps, DevOps Engineer, Image Analysis, Scripting, Third-party APIs
