Abhishek Gupta

Verified Expert in Engineering

DevOps Engineer

Pune, Maharashtra, India

Toptal member since March 4, 2020

Bio

Abhishek is a Certified Kubernetes Administrator and AWS Certified Solutions Architect with 14+ years of experience. He excels in cloud orchestration, configuration management, build and release, deployment, and system administration. He has worked as a senior DevOps engineer for Fortune 100 companies (FIS) as well as startups (Axim Global and FP Complete), with exposure to the banking, finance, telecommunications, product development, crypto, blockchain, gaming, and consulting services domains.

Portfolio

ShibaInu
AWS IoT, Amazon EKS, Kubernetes, Helm, Pixel, Streaming, Grafana...
Varda
Amazon EKS, Helm, Amazon Web Services (AWS), Instana, AWS DevOps, APIs, Node.js...
FP Complete
Amazon Web Services (AWS), Python 3, OpenID Connect (OIDC), Python, Bash Script...

Experience

  • Jenkins - 10 years
  • Python - 7 years
  • Docker - 6 years
  • Microservices - 6 years
  • Amazon Web Services (AWS) - 6 years
  • Kubernetes - 5 years
  • Terraform - 5 years
  • Amazon EKS - 1 year

Availability

Part-time

Preferred Environment

Automation, SaaS Monitoring, Microservices, Amazon Web Services (AWS), DevOps, Kubernetes, Python, Jenkins, Docker, ELK (Elastic Stack)

The most amazing...

...project I've built is a Kubernetes-managed product with SSO login and RBAC-enabled roles for Grafana, Prometheus, and a Kubernetes dashboard.

Work Experience

Senior DevSecOps Engineer

2024 - PRESENT
ShibaInu
  • Architected and deployed game servers on EKS across multi-region clusters using matchmaking, Agones, and AWS Global Accelerator.
  • Set up the Pixel gaming servers on high-end GPU-based EKS instances, with GPU operators installed and ingresses to manage session-based games.
  • Established Jenkins builds and pipelines to allow game servers to be built on Windows and Linux instances.
  • Set up a new L2 chain on both Testnet and Mainnet to allow DApps to be built on top of the chain.
  • Established the complete infrastructure for the blockchain, including backups, disaster recovery, and more.
  • Optimized costs on AWS for blockchain infrastructure.
Technologies: AWS IoT, Amazon EKS, Kubernetes, Helm, Pixel, Streaming, Grafana, Amazon CloudWatch, Databases, CI/CD Pipelines, GitHub Actions, Games

Senior DevOps Engineer

2020 - PRESENT
Varda
  • Set up the project from scratch using Terraform for infrastructure deployment and Helm charts for application deployment.
  • Migrated Kubernetes clusters from 1.16 to 1.18, along with a migration from Helm 2 to Helm 3.
  • Set up Instana for monitoring and Elasticsearch for logging, with proper HA and indices tuned for faster query times.
  • Improved the release pipeline with a Clair scanner to check for vulnerabilities, proper Slack alerting for build failures, and more.
  • Built the AWS network from scratch using Terraform, including VPCs, subnets, VPN connections, DHCP options, route tables, and more.
  • Hosted applications behind Tyk and Amazon API Gateway to rate limit, control access using policies, and restrict access to certain APIs based on user-specific scenarios.
  • Improved the security of the AWS environments by enabling DevSecOps best practices with AWS CloudTrail, AWS Config, Amazon CloudWatch, AWS Security Hub, and other tools.
Technologies: Amazon EKS, Helm, Amazon Web Services (AWS), Instana, AWS DevOps, APIs, Node.js, Security, Cloud Monitoring, Prometheus, ELK (Elastic Stack), Grafana, Argo CD, CI/CD Pipelines, Crossplane, GitOps, Streaming, API Gateways, DevSecOps

Senior DevOps Engineer

2020 - 2021
FP Complete
  • Automated and integrated the entire EKS setup, including Istio, federated Prometheus, Grafana, Loki, and the Prometheus Operator, using Terraform.
  • Developed an SSO solution integrating all internal services, such as Kiali, the Kubernetes dashboard, MinIO, and Argo CD, under one dashboard with Dex and SAML Google ID integration.
  • Secured an entire Kubernetes cluster with an Istio service mesh, ensured recommended practices were followed, and hosted applications on the Istio gateway.
  • Set up an HA, scalable Kubernetes cluster and helped customers build robust clusters with monitoring and logging enabled.
  • Integrated Loki, Promtail, Grafana, Slack, Cassandra, and Prometheus on both Windows and Linux node groups.
Technologies: Amazon Web Services (AWS), Python 3, OpenID Connect (OIDC), Python, Bash Script, GitLab CI/CD, Jenkins, Istio, Terraform, Amazon EKS, Kubernetes, AWS DevOps, CI/CD Pipelines, APIs, Node.js, Security, Cloud Monitoring, Prometheus, ELK (Elastic Stack), Grafana, Bash

Senior Cloud DevOps Engineer

2017 - 2019
Axim Global
  • Took ownership of migrating all on-premises applications from Docker to Kubernetes using kOps, as well as to Amazon EKS and Amazon ECS.
  • Served as an SRE for Axim, maintaining and deploying all customers' products to the Axim environment, from development to production, using an automated pipeline.
  • Set up logging, alerting, and monitoring using Prometheus, Grafana, Alertmanager, Promtail, and Elasticsearch across all environments, with alerts for disk, CPU, and memory usage delivered via Slack and Microsoft Teams.
  • Worked intensively with Python to build crawlers for data scraping for one of the customer products and used Boto3 with AWS Lambda for project reporting.
  • Set up the entire AWS infrastructure from scratch for development, staging, production, and DR with Terraform, following security best practices: all services were hosted in private subnets, and only the ports the products required were opened in security groups.
  • Utilized Amazon ECS with both EC2 and Fargate launch types, including task definitions, autoscaling, load balancers, and service namespaces, so services could communicate in a microservice architecture.
  • Tuned Amazon RDS, comparing it with PostgreSQL running on-premises and in Docker, and used database migration tools to move large amounts of data to and from AWS.
  • Set up AWS CodeBuild, AWS CodePipeline, and GitLab CI/CD using YAML build spec files.
  • Set up highly available Kubernetes clusters, deploying open-source tooling such as Cluster Autoscaler and Metrics Server.
Technologies: Site Reliability Engineering (SRE), AWS Data Pipeline Service, Amazon Web Services (AWS), Amazon Elastic Container Service (ECS), AWS Lambda, Python, Microservices, Kubernetes, Docker, Terraform, AWS DevOps, Node.js, Security, Cloud Monitoring, Prometheus, Grafana, CI/CD Pipelines, Bash

Senior DevOps Engineer

2015 - 2017
Avaya India Pvt. Ltd.
  • Migrated VMware applications to AWS and deployed the product as a service.
  • Set up a CI/CD pipeline and integrated automated test cases into the build and deploy stages.
  • Set up an alerting system to send out alerts using Nagios.
  • Established a release management process with Jira and Python to create and update tickets at runtime.
  • Worked with Jenkins and Bamboo to set up multiple builds using Groovy scripting.
Technologies: Jenkins Pipeline, Jenkins, Amazon Web Services (AWS), VMware, AWS Lambda, Python, Microservices, Docker, AWS DevOps, Node.js, Grafana, CI/CD Pipelines, Bash

Automation Engineer

2012 - 2015
SunGard (FIS)
  • Designed and developed modules and scripts using Unix shell scripting.
  • Automated manual Unix tasks by scheduling jobs via AutoSys.
  • Provided L3 support for automation failures and delivered root cause analysis (RCA) to customers.
Technologies: Amazon Web Services (AWS), Python, Bash, CI/CD Pipelines

Build Release Engineer

2009 - 2012
Tech Mahindra
  • Participated in the Unix platform migration (HP-UX to Linux) and RAC implementation for WebLogic.
  • Resolved issues related to development, testing, and E2E.
  • Oversaw the configuration and maintenance of the WebLogic application server for deployment.
Technologies: Jenkins, CI/CD Pipelines, Bash

Experience

Migrating Snackr App to Amazon ECS

http://www.snackr.com
Deployed a food delivery app used during live events. As the DevOps engineer, I set up the complete infrastructure using Terraform, which involved migrating Amazon EC2 apps to Docker images and deploying the pipeline via AWS CodeBuild, AWS CodeDeploy, and AWS CodeCommit.

With every change committed to GitHub, the release pipeline is triggered, pushing the updated change to development, staging, and, via manual approval, production.

Amazon ECS runs two containers via blue-green deployment, with a load balancer for each app, deployed securely in private subnets with properly scoped security group rules.

Overall, the Terraform code deployed VPCs, subnets, Amazon ECS, task definitions, Amazon Route 53, AWS CodePipeline, and more.

AWS Compliance Reporting

Set up a reporting workflow using AWS Lambda and Amazon CloudWatch that runs daily on all AWS accounts in AWS Organizations, checking every resource for missing tags.

The main objective of this project was to replace the costly AWS Config. The setup benefited the company: it proved 100% reliable with zero downtime, and the execution cost dropped to a few dollars a day.

Terraform was used to automate deployment and create all the internal AWS resources, simplifying the upgrade process for software development.
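
The daily tag check described above can be sketched in Python with Boto3. This is a minimal illustration under assumptions, not the project's actual code: the required-tag set, the report shape, and the helper names (`missing_tags`, `audit_account`) are all hypothetical.

```python
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}  # assumed tag policy


def missing_tags(tags, required=REQUIRED_TAGS):
    """Return the required tag keys absent from a resource's tag dict."""
    return set(required) - set(tags)


def audit_account(region="us-east-1"):
    """Report every ARN in the account missing one or more required tags."""
    import boto3  # imported here so missing_tags stays testable offline
    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    report = {}
    for page in client.get_paginator("get_resources").paginate():
        for mapping in page["ResourceTagMappingList"]:
            tags = {t["Key"]: t["Value"] for t in mapping.get("Tags", [])}
            gaps = missing_tags(tags)
            if gaps:
                report[mapping["ResourceARN"]] = sorted(gaps)
    return report


def lambda_handler(event, context):
    # Triggered once a day by a scheduled CloudWatch Events/EventBridge rule.
    return audit_account()
```

In a multi-account organization such as the one described, the same function would run once per account, for example by assuming a cross-account IAM role before creating the Boto3 client.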

AWS Security Group Recovery During Disaster Recovery

Currently, no tool on the market ensures that a security group is replicated across regions or AWS accounts. During migration or disaster recovery, security groups must be recreated manually by comparing them with existing Amazon EC2 instances or AWS services.

I worked as a freelancer on this project, which involved planning, architectural design, implementation, execution, and code deployment to AWS infrastructure.

The code was written in Python 3 using the Boto3 module, with proper error-handling checks. Each day, a Lambda function runs to collect information on the existing production security groups and saves it to an Amazon S3 bucket. Because the data is stored in S3, it can be used to restore the security groups at any point during disaster recovery.

Using Amazon CloudWatch and AWS CloudFormation, I configured the execution and deployment of all related resources, such as IAM roles and S3 bucket access policies.

I delivered all the components from development to production.
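
The daily export at the heart of this project can be sketched roughly as follows. The bucket name, key layout, and helper names are illustrative assumptions, and error handling is trimmed compared with the production code described above.

```python
import json
from datetime import date


def serialize_group(sg):
    """Keep only the fields needed to recreate a security group elsewhere;
    account-specific identifiers such as GroupId are deliberately dropped."""
    return {
        "GroupName": sg["GroupName"],
        "Description": sg["Description"],
        "Ingress": sg.get("IpPermissions", []),
        "Egress": sg.get("IpPermissionsEgress", []),
        "Tags": sg.get("Tags", []),
    }


def export_security_groups(bucket, region="us-east-1"):
    """Snapshot every security group in a region to a dated S3 object."""
    import boto3  # imported here so serialize_group stays testable offline
    ec2 = boto3.client("ec2", region_name=region)
    groups = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        groups.extend(serialize_group(sg) for sg in page["SecurityGroups"])
    key = f"sg-backups/{date.today().isoformat()}.json"
    boto3.client("s3", region_name=region).put_object(
        Bucket=bucket, Key=key, Body=json.dumps(groups).encode()
    )
    return key


def lambda_handler(event, context):
    # Scheduled daily; the bucket name is a placeholder for illustration.
    return export_security_groups("example-sg-backup-bucket")
```

Restoration is then a matter of recreating each saved group with `create_security_group` and replaying its rules via `authorize_security_group_ingress` and `authorize_security_group_egress` in the target account or region.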

Education

2005 - 2009

Bachelor of Technology Degree in Electronics

Sardar Vallabhbhai National Institute of Technology - Surat, Gujarat, India

Certifications

MARCH 2020 - MARCH 2023

AWS Certified Solutions Architect

Amazon

AUGUST 2019 - AUGUST 2022

Certified Kubernetes Administrator

CNCF

Skills

Libraries/APIs

Node.js, Jenkins Pipeline

Tools

Istio, Grafana, Amazon CloudWatch, AWS IAM, GitLab, Terraform, Jenkins, AWS Fargate, Amazon Elastic Container Service (ECS), Makefile, Make, Amazon Simple Queue Service (SQS), GitLab CI/CD, Amazon EKS, ELK (Elastic Stack), VMware, AWS CloudFormation, Helm, Instana, Git

Languages

Bash, Bash Script, Python, Python 3

Paradigms

DevOps, Automation, Continuous Integration (CI), Continuous Development (CD), Continuous Delivery (CD), Serverless Architecture, Microservices, DevSecOps

Platforms

Amazon Web Services (AWS), Linux, CentOS, AWS Lambda, Kubernetes, Docker, AWS IoT

Storage

Amazon S3 (AWS S3), PostgreSQL, Elasticsearch, AWS Data Pipeline Service, Databases

Frameworks

Crossplane

Other

Prometheus, AWS Certified Solutions Architect, SaaS Monitoring, Site Reliability Engineering (SRE), Serverless, Architecture, Autoscaling, Networking, AWS DevOps, CI/CD Pipelines, Security, Cloud Monitoring, Argo CD, APIs, GitOps, Streaming, OpenID Connect (OIDC), Deployment, AWS CodePipeline, Amazon Route 53, Pixel, GitHub Actions, Games, API Gateways
