Meet Dave, Developer in Ahmedabad, Gujarat, India

Meet Dave

Verified Expert in Engineering

DevOps Engineer and Software Developer

Ahmedabad, Gujarat, India

Toptal member since April 20, 2022

Bio

Meet is a seasoned DevOps professional with six years of experience across diverse clients, projects, and tech stacks. He has a proven track record of designing scalable, resilient, and cost-effective architectures and of building CI/CD pipelines, infrastructure as code, configuration management, and automation. Adept at troubleshooting and debugging complex issues, Meet's eagerness to tackle challenging tasks and continually expand his knowledge makes him an invaluable asset to any team.

Portfolio

Levl Sub Israel LTD
Amazon Web Services (AWS), Kubernetes, Apache Kafka, Terraform, CI/CD Pipelines...
Velotio
Kubernetes, Amazon EKS, Azure Kubernetes Service (AKS)...
Blake Regalia
Amazon Web Services (AWS), Azure, Traefik, Infrastructure as Code (IaC)...

Experience

  • Python - 6 years
  • Docker - 5 years
  • ELK (Elastic Stack) - 5 years
  • Terraform - 5 years
  • Elasticsearch - 5 years
  • Chef - 2 years
  • Azure DevOps - 2 years
  • Puppet - 1 year

Availability

Part-time

Preferred Environment

Amazon Web Services (AWS), Terraform, Azure DevOps, Chef, Puppet, Docker, Kubernetes, ELK (Elastic Stack), Python, Shell Scripting

The most amazing...

...thing I've done is migrate the Elastic Stack from AWS to Elastic Cloud without downtime. The cluster held over 100TB of data and ingested 1.5TB more each day.

Work Experience

AWS DevOps Engineer

2022 - PRESENT
Levl Sub Israel LTD
  • Managed AWS infrastructure using Terraform for an application that monitors network activity, detects threats, and blocks them across millions of devices.
  • Achieved a 30–40% reduction in cloud costs by applying various optimization strategies and architectural improvements.
  • Streamlined CI/CD pipelines, cutting pipeline execution time by 40% through caching, restructuring, and parallel job implementations.
  • Adopted a GitOps approach for infrastructure management.
  • Designed and implemented an active-passive multi-region disaster recovery strategy.
Technologies: Amazon Web Services (AWS), Kubernetes, Apache Kafka, Terraform, CI/CD Pipelines, Bitbucket, Amazon DynamoDB, Terragrunt, Bitbucket Pipelines, Agile, Data Structures, Docker, Python, Shell Scripting, Helm, Linux, DevOps, APIs, AWS Cloud Architecture, Infrastructure as Code (IaC), Bash Script, Continuous Delivery (CD), Continuous Integration (CI), Bash, GitHub

Lead DevOps Engineer

2022 - PRESENT
Velotio
  • Designed and enhanced the architecture for multi-region, multi-cluster OpenSearch setups with a hot-warm logging solution.
  • Developed an autoscaling architecture for OpenSearch to minimize manual scaling activities and improve cost-effectiveness.
  • Implemented various indexing strategies, shard and mapping tunings, and search/indexing optimizations, resulting in a 50% improvement in cluster performance and a 50% reduction in search latencies and costs.
  • Created a hybrid solution for Kinesis streams, combining provisioned mode with a custom autoscaling solution to align with on-demand design, reducing Kinesis costs by 40%.
  • Established a customized blue/green deployment strategy to prevent impacts on critical cluster configurations.
  • Created Helm charts to automate operational cluster configuration tasks, deploying them through FluxCD pipelines.
  • Assisted the DevOps team in developing and troubleshooting internal tools for spinning up Kubernetes infrastructure on platforms such as GKE, AKS, EKS, and KOPS.
  • Automated the Kubernetes product release process for the AWS Marketplace.
  • Created CI/CD pipelines to streamline the build and release process.
  • Assisted in developing the product Helm chart to simplify the deployment and management of the package for users.
Technologies: Kubernetes, Amazon EKS, Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Google Cloud Platform (GCP), Go, Shell, Linux, Containerization, Docker, Helm, Amazon Web Services (AWS), Architecture, Networking, CI/CD Pipelines, Infrastructure as Code (IaC), APIs, DevOps, AWS DevOps, Elasticsearch, Continuous Delivery (CD), Continuous Integration (CI), Amazon OpenSearch, GitHub Actions, Flux CD, Terraform, Terragrunt, Python, Agile, Data Structures, ELK (Elastic Stack), Shell Scripting, AWS Cloud Architecture, Bash Script, Bash, Fluentd, GitHub, Logstash, Kibana

Cloud Architect

2022 - 2022
Blake Regalia
  • Designed a scalable, secure, self-healing, highly available, and resilient architecture for both cloud and on-premises environments, enabling the client's infrastructure to integrate with other blockchain networks.
  • Created infrastructure as code (IaC) using Terraform and Terragrunt to automate infrastructure provisioning on AWS, Azure, and on-premises platforms.
  • Developed CI/CD pipelines to build and deploy custom machine images for on-premises deployment.
Technologies: Amazon Web Services (AWS), Azure, Traefik, Infrastructure as Code (IaC), Cloud Architecture, Google Cloud Platform (GCP), Containerization, Shell, Linux, Terraform, Terragrunt, Blockchain, GitHub Actions, CircleCI, Continuous Delivery (CD), Continuous Integration (CI), Agile, Data Structures, Shell Scripting, DevOps, APIs, AWS Cloud Architecture, Bash Script, Bash, GitHub

DevOps Technical Lead

2018 - 2022
Crest Data Systems
  • Successfully migrated an Elastic cluster with over 100TB of data and a daily intake of 1.5TB from AWS to Elastic Cloud without any downtime.
  • Optimized and scaled an Elastic cluster that originally processed 400–500GB per day, enhancing its capacity to handle 1.5TB daily after improvements.
  • Designed scalable, highly available, self-healing, resilient, and cost-effective architectures for various applications on AWS.
  • Developed Terraform modules, CloudFormation templates, and ARM templates to automate infrastructure management on AWS and Azure, utilizing Terragrunt as a wrapper for Terraform.
  • Enhanced Puppet modules and Chef cookbooks to manage configurations across thousands of servers.
  • Created diverse CI/CD architectures for multiple projects to automate application build and deployment on AWS and Azure, using tools such as Jenkins, Bitbucket Pipelines, and Azure Pipelines.
  • Collaborated with the security team to build automation and applications that improve monitoring and compliance of Azure resources.
  • Developed Azure policies and automation processes to ensure compliance and enforce organizational standards.
  • Migrated applications from Splunk to Elastic to achieve cost reductions.
Technologies: ELK (Elastic Stack), Chef, Puppet, Python, Shell Scripting, Terraform, Docker, Azure, Azure DevOps, Jenkins, Bitbucket Pipelines, Linux, Splunk, Terragrunt, Agile, Beats, Ruby, Elastic Cloud, Cribl, AWS DevOps, APIs, AWS Cloud Architecture, Infrastructure as Code (IaC), CI/CD Pipelines, Networking, Architecture, Amazon Web Services (AWS), Amazon Virtual Private Cloud (VPC), Monitoring, AWS CLI, AWS Lambda, Amazon S3 (AWS S3), Bitbucket, Amazon Elastic Container Service (ECS), Containerization, Infrastructure, GitLab CI/CD, Ansible, Elasticsearch, Data Structures, Continuous Delivery (CD), Continuous Integration (CI), DevOps, Bash Script, Bash, GitHub, Logstash, Kibana

Experience

Logging as Service for Leading Multimedia Streaming Platform

The client operated a high-traffic multimedia streaming platform, managing a logging service that ingested over 300 TB of data daily across multiple multi-region OpenSearch clusters. They sought a DevOps engineer to optimize their architecture, enhance performance, and reduce operational costs while ensuring scalability and stability.

As a DevOps engineer, I designed and improved the architecture for a hot-warm logging solution, implementing autoscaling to minimize manual interventions and enhance cost-effectiveness. My efforts led to a 50% boost in cluster performance and a significant reduction in search latencies and costs through advanced indexing strategies and mapping optimizations.
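A hot-warm setup like the one described hinges on routing older indices off the fast (hot) nodes. The minimal sketch below illustrates that tiering decision; the `temp` node attribute, the two-day threshold, and the index names are illustrative assumptions, not the client's actual configuration.

```python
# Sketch of a hot-warm tiering decision for log indices. The node
# attribute name ("temp") and the HOT_DAYS threshold are assumptions
# for illustration only.

HOT_DAYS = 2  # keep indices on fast (hot) nodes for this many days


def allocation_settings(index_age_days: int) -> dict:
    """Index-level routing settings pinning an index to a tier.

    OpenSearch/Elasticsearch route shards via
    `index.routing.allocation.require.<attr>`, matched against a node
    attribute set on hot and warm nodes.
    """
    tier = "hot" if index_age_days < HOT_DAYS else "warm"
    return {"index.routing.allocation.require.temp": tier}


def plan_moves(index_ages: dict[str, int]) -> list[str]:
    """List indices that are due to migrate from hot to warm nodes."""
    return [name for name, age in index_ages.items() if age >= HOT_DAYS]


if __name__ == "__main__":
    ages = {"logs-2024.01.01": 5, "logs-2024.01.05": 1}
    print(plan_moves(ages))        # indices due for warm nodes
    print(allocation_settings(5))  # settings payload for a warm index
```

In practice, a state-management policy (e.g., OpenSearch ISM) applies this kind of transition automatically rather than via ad hoc scripts.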

I developed a hybrid solution for Kinesis streams, merging provisioned and custom autoscaling modes, cutting Kinesis costs by 40%. Additionally, I established a blue/green deployment strategy to safeguard critical configurations during updates and created Helm charts to automate cluster management via FluxCD.
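The core of a custom autoscaler for provisioned Kinesis streams is deciding the target shard count from observed throughput. This sketch shows one way to do that; the headroom factor is an illustrative assumption, while the per-shard limits and the 2x/0.5x step constraint reflect documented Kinesis behavior.

```python
# Minimal sketch of the shard-count calculation behind a custom Kinesis
# autoscaler for provisioned streams. Per-shard ingest limits are
# 1 MB/s and 1,000 records/s; the headroom factor is an assumption.
import math

SHARD_MB_PER_SEC = 1.0
SHARD_RECORDS_PER_SEC = 1000.0


def target_shards(mb_per_sec: float, records_per_sec: float,
                  current: int, headroom: float = 1.5) -> int:
    """Shards needed to absorb observed throughput plus headroom."""
    need = max(mb_per_sec / SHARD_MB_PER_SEC,
               records_per_sec / SHARD_RECORDS_PER_SEC) * headroom
    target = max(1, math.ceil(need))
    # UpdateShardCount can only double or halve the count per call,
    # so clamp the step to that range.
    return min(max(target, math.ceil(current / 2)), current * 2)


if __name__ == "__main__":
    # The result would be applied with boto3, e.g.:
    #   boto3.client("kinesis").update_shard_count(
    #       StreamName=..., TargetShardCount=n,
    #       ScalingType="UNIFORM_SCALING")
    print(target_shards(mb_per_sec=12.0, records_per_sec=4000.0, current=8))
```

Running the calculation on a CloudWatch schedule and resharding only when the target differs meaningfully from the current count keeps the stream provisioned close to demand, which is where the cost savings come from.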

Through troubleshooting, cloud cost optimization, and maintaining IaC with CI/CD pipelines, I ensured the system's reliability and improved service level agreements (SLAs), allowing the client to focus on their core objectives.

ELK Managed Services

The client ran an Elastic cluster on AWS that served 400–500GB of data per day. They wanted an engineer to support and manage the infrastructure and the Stack, as well as optimize and scale the cluster to provide monitoring solutions to other teams. As data volumes grew, automation also needed to be developed.

As a DevOps engineer, I managed the customer's Elastic Stack and its entire supporting infrastructure. I helped the client with capacity planning, optimization, architecture improvements, and upgrades across several major versions of the Stack, all without downtime. I also drove stability improvements that strengthened their SLAs and migrated Splunk applications onto the Elastic Stack, reducing Splunk licensing costs. Over this period, daily ingestion into the cluster grew from 400–500GB to 1.5TB. To cut operational toil, I built automation that let the client focus on their milestones. Beyond managing the Elastic Stack, my responsibilities included CI/CD development, Chef cookbook development for host configuration management, and developing and enhancing CloudFormation templates to manage the infrastructure.
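Scaling a cluster from ~500GB/day to 1.5TB/day starts with capacity arithmetic of roughly this shape. The retention window, replica count, and indexing-expansion factor below are illustrative assumptions, not the client's actual figures.

```python
# Back-of-the-envelope storage check of the kind used when planning
# cluster growth. Retention, replicas, and the expansion factor
# (indexing overhead) are illustrative assumptions.

def required_storage_tb(daily_ingest_gb: float, retention_days: int,
                        replicas: int = 1, expansion: float = 1.1) -> float:
    """Total cluster storage (TB) for a given ingest rate and retention."""
    copies = 1 + replicas  # primary plus replica shards
    return daily_ingest_gb * retention_days * copies * expansion / 1000.0


if __name__ == "__main__":
    # 1.5TB/day, 30-day retention, one replica, 10% overhead
    print(round(required_storage_tb(1500, retention_days=30), 1))
```

The same formula run per node tier also tells you how many data nodes of a given disk size the cluster needs, which is the practical output of capacity planning.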

Elastic Cloud Migration

The client wanted to migrate their Elastic cluster, which was running on AWS, to Elastic Cloud to reduce operational costs. They also wanted to migrate their ingestion layer from one AWS account to another.

As a DevOps engineer, I helped the client migrate the Elastic cluster from AWS to Elastic Cloud without downtime. The cluster contained over 100TB of data and received 1.5TB of data per day. I also helped migrate the Stack's ingestion layer from one AWS account to another, assisting with planning, architecture design, Chef cookbook development, infrastructure as code (IaC), scripts to automate the migration, CI/CD for the new stack, and the migration itself.
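One standard building block for zero-downtime Elasticsearch cutovers is an atomic alias swap: the `_aliases` endpoint applies all actions in a single step, so readers never observe a window with no backing index. The sketch below builds that payload; the index and alias names are illustrative, and this is one common mechanism rather than necessarily the exact one used in this migration.

```python
# Sketch of an atomic alias swap for zero-downtime cutover in
# Elasticsearch. POST /_aliases applies all actions atomically.
# Names are illustrative.

def alias_swap_actions(alias: str, old_index: str, new_index: str) -> dict:
    """Payload for POST /_aliases moving `alias` from old to new index."""
    return {
        "actions": [
            {"remove": {"index": old_index, "alias": alias}},
            {"add": {"index": new_index, "alias": alias}},
        ]
    }


if __name__ == "__main__":
    print(alias_swap_actions("logs", "logs-v1", "logs-v2"))
```

Because applications query the alias rather than concrete index names, traffic follows the swap instantly and the old index can be retired at leisure.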

SaaS Product on AWS for Automated Deployment

The organization wanted to build a software-as-a-service (SaaS) product on AWS to automate the deployment and management of databases for their customers.

As a DevOps engineer, I helped the client develop Terraform modules to manage the entire infrastructure for the SaaS product. I helped design a scalable architecture with self-healing and high availability (HA) capabilities, and I developed CI/CD pipelines to build, test, and deploy applications on Amazon ECS, along with integration test pipelines for end-to-end testing.

SaaS | Puppet Module Development

As a member of the Puppet module development team, I contributed to the successful management of thousands of servers for a prominent SaaS product. My primary responsibility was to assist clients in enhancing their existing Puppet modules to efficiently manage configurations, ensuring optimal performance and reliability. In addition, I leveraged my expertise to design and implement advanced functionalities that enabled the SRE team to apply configurations on hosts effortlessly.

Cloud Security

As a DevSecOps engineer, I worked with clients to build automation around Azure policies to improve the compliance and monitoring of Azure resources. I also developed CI/CD pipelines using Azure Pipelines to build, test, and deploy applications in Azure; these pipelines covered Azure policy deployment, assignment, remediation, exemption, and data collection. In addition, I enforced policies to achieve compliance and designed and managed infrastructure using ARM templates.
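Azure Policy rules follow an "if"/"then" shape: a condition over resource fields and an effect such as deny. The sketch below shows that structure with a toy evaluator; the allowed-locations list and the resource dict are assumptions for illustration, and real evaluation happens inside Azure Policy, not in client code.

```python
# Hedged sketch of the Azure Policy rule shape ("if"/"then" with an
# effect) plus a toy evaluator for the simple field/notIn condition.
# The allowed locations and sample resources are assumptions.

ALLOWED_LOCATIONS = ["eastus", "westeurope"]

policy_rule = {
    "if": {"field": "location", "notIn": ALLOWED_LOCATIONS},
    "then": {"effect": "deny"},
}


def is_denied(resource: dict, rule: dict = policy_rule) -> bool:
    """True if the toy rule's condition matches, triggering the effect."""
    cond = rule["if"]
    value = resource.get(cond["field"])
    matched = value not in cond["notIn"]
    return matched and rule["then"]["effect"] == "deny"


if __name__ == "__main__":
    print(is_denied({"location": "brazilsouth"}))  # non-compliant region
    print(is_denied({"location": "eastus"}))       # compliant region
```

Pipelines around policies then automate the lifecycle this rule participates in: deploying definitions, assigning them to scopes, remediating or exempting non-compliant resources, and collecting compliance data.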

Backup and Restore App for Kubernetes

As a DevOps engineer, I collaborated with a development team to aid in the implementation and improvement of their continuous integration/continuous deployment (CI/CD) workflows for a Kubernetes application. Specifically, the client required a product that would enable the efficient backup and restoration of the application state and data. I contributed to creating, repairing, and advancing internal tools that allow teams to construct and configure infrastructure and Kubernetes environments on various platforms such as GKE, EKS, AKS, RKE, OCP, KOPS, Kind, and others. Moreover, I devoted efforts to optimizing the Helm chart and streamlining the deployment and management process for end-users.

Cloud Architect for Cryptocurrency Startup

The client had a requirement to host their application and API on a blockchain network and sought to implement monitoring solutions for the entire system. Additionally, they needed an infrastructure that could join the existing blockchain network. I designed a scalable architecture for the client in Azure and AWS to meet these needs.

The architecture I developed allowed the client's infrastructure to be easily deployed across multiple regions, with an active-active DR (disaster recovery) strategy in place for added resilience. To enable this scalability and resilience, I implemented complex infrastructure-as-code (IaC) modules using Terraform and Terragrunt. These modules automated the infrastructure deployment across all environments, streamlining the process and minimizing the risk of errors.

In addition, I also implemented continuous integration/continuous deployment (CI/CD) pipelines for the client's infrastructure, ensuring that any changes or updates were deployed seamlessly and with minimal disruption. Overall, my design and implementation of this infrastructure provided the client with a highly scalable, resilient, and automated solution to meet their blockchain hosting requirements.

AWS DevOps Engineer for a Cybersecurity Project

The client operated a cybersecurity application that monitored network activity and detected threats across millions of devices. They needed an AWS DevOps engineer to enhance their infrastructure management, optimize costs, and improve deployment processes.

As an AWS DevOps engineer, I managed the AWS infrastructure using Terraform, implementing best practices for scalability and reliability. My optimization strategies and architectural improvements led to a 30-40% reduction in cloud costs, significantly enhancing the project's efficiency.

I streamlined CI/CD pipelines, achieving a 40% reduction in execution time through caching, restructuring, and parallel job implementations, which accelerated deployment cycles. Embracing a GitOps approach, I improved infrastructure management and collaboration among development teams.
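The wall-clock gain from parallelizing independent pipeline stages is easy to model: serial time is the sum of stage durations, while parallel time is bounded by the longest stage. The stage names and durations below are illustrative assumptions.

```python
# Toy model of the wall-clock effect of running independent pipeline
# stages in parallel instead of serially. Stage durations (minutes)
# are illustrative.

def serial_time(stages: dict[str, float]) -> float:
    """Total wall time when stages run one after another."""
    return sum(stages.values())


def parallel_time(stages: dict[str, float]) -> float:
    """Independent stages run concurrently; wall time = longest stage."""
    return max(stages.values())


if __name__ == "__main__":
    stages = {"lint": 2.0, "unit_tests": 6.0, "build": 5.0}
    s, p = serial_time(stages), parallel_time(stages)
    print(f"{s} -> {p} min ({1 - p / s:.0%} faster)")
```

Caching shrinks the individual stage durations themselves, so the two techniques compound.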

To further ensure resilience, I designed and implemented an active-passive multi-region disaster recovery strategy, enhancing the system's reliability and minimizing downtime in case of failures. This comprehensive approach not only improved performance and cost-effectiveness but also bolstered the client's ability to respond to security threats effectively.
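The heart of an active-passive failover is the promotion decision: fail over only after the primary fails several consecutive health checks, so transient blips don't cause flapping. The threshold below is an illustrative assumption; the DNS or traffic change that follows (e.g., via Route 53) is left as a comment.

```python
# Sketch of the failover decision in an active-passive DR setup.
# FAIL_THRESHOLD is an illustrative assumption; the actual promotion
# step (e.g., a Route 53 record change) is out of scope here.

FAIL_THRESHOLD = 3  # consecutive failed checks before failing over


def should_failover(recent_checks: list[bool]) -> bool:
    """True when the last FAIL_THRESHOLD health checks all failed."""
    tail = recent_checks[-FAIL_THRESHOLD:]
    return len(tail) == FAIL_THRESHOLD and not any(tail)


if __name__ == "__main__":
    print(should_failover([True, False, False, False]))  # promote passive
    print(should_failover([False, False, True]))         # primary healthy
```

Requiring a full window of failures trades a little recovery latency for stability, which is usually the right call when failover itself is expensive.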

Education

2014 - 2018

Bachelor's Degree in Computer Engineering

Dharmsinh Desai University - Nadiad, Gujarat, India

Certifications

JANUARY 2022 - JANUARY 2024

HashiCorp Certified Terraform Associate

HashiCorp

Skills

Libraries/APIs

Terragrunt

Tools

Terraform, ELK (Elastic Stack), Amazon Virtual Private Cloud (VPC), AWS CLI, Bitbucket, Amazon OpenSearch, GitHub, Logstash, Kibana, Puppet, Chef, Jenkins, Helm, Amazon Elastic Container Service (ECS), Shell, Fluentd, GitLab, GitLab CI/CD, Ansible, Splunk, Amazon CloudFront CDN, Amazon EKS, Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Traefik, CircleCI

Paradigms

Azure DevOps, DevOps, Continuous Delivery (CD), Continuous Integration (CI), Agile

Platforms

Docker, AWS Lambda, Azure, Kubernetes, Linux, Amazon Web Services (AWS), Cribl, Google Cloud Platform (GCP), Blockchain, Red Hat OpenShift, Proxmox, Apache Kafka

Storage

Amazon S3 (AWS S3), Elasticsearch, Amazon DynamoDB

Languages

Python, Bash Script, Bash, Ruby, Go

Other

Bitbucket Pipelines, APIs, Infrastructure as Code (IaC), CI/CD Pipelines, Architecture, Monitoring, GitHub Actions, Data Structures, Algorithms, Shell Scripting, Beats, AWS DevOps, AWS Cloud Architecture, Networking, Containerization, Infrastructure, Elastic Cloud, Cloud Architecture, Kubernetes Operations (kOps), Flux CD
