Mamta Yadlpalli
Verified Expert in Engineering
AWS DevOps Developer
Princeton, NJ, United States
Toptal member since July 9, 2020
Mamta is a DevOps and cloud engineer with seven years of experience in on-premises infrastructure and cloud computing. She has expertise in working with AWS and has developed web applications using various JavaScript libraries, HTML, XML, Express.js, Node.js, React, and CSS, deploying them to the cloud with Kubernetes, CloudFormation templates, and Terraform. She has a basic understanding of the Google Cloud Platform (GCP) and has deployed resources to GCP using Terraform.
Experience
- Linux - 7 years
- Amazon Web Services (AWS) - 6 years
- Node.js - 5 years
- AWS DevOps - 5 years
- Kubernetes - 4 years
- Docker - 3 years
- Kubernetes Operations (kOps) - 2 years
- Google Cloud Platform (GCP) - 2 years
Preferred Environment
GitLab, GitHub, Visual Studio Code (VS Code), Windows, Linux, Amazon Web Services (AWS), Google Cloud Platform (GCP), Terraform, Azure DevOps
The most amazing...
...project I've developed and deployed is a data lake in AWS, built with Terraform, that supports infrastructure solutions to meet the business requirements.
Work Experience
Cloud Automation Engineer
Premier Global Consulting Firm
- Pulled the standard Terraform image from Docker Hub and uploaded it to Artifactory. Created a CircleCI pipeline to pull the Terraform image from Artifactory.
- Provisioned an EKS cluster using Terraform templates through the CircleCI pipeline. Created an AWS S3 bucket to store the Terraform state files.
- Created security groups, the ALB Ingress Controller, and IAM roles, groups, and policies for the EKS cluster. Also deployed manifests, Kubernetes RoleBindings, and node groups using Terraform templates.
- Used the 1Password Secrets manager to store secrets in a vault as a file and share it with a group of users. Secrets can be moved from one vault to another.
- Applied Python and 1Password CLI to create vaults and groups, add users to the groups, and share the vault with specific users or groups.
- Fetched the list of resources from all AWS and Azure cloud accounts using Boto3 and the Azure SDK for Python and stored the inventory in a DynamoDB table. The script runs daily to keep the inventory up to date (see the sketch after this list).
- Developed a front-end application to display the inventory, which helps the end user easily track resources. If a resource has been deleted from the cloud, the record from DynamoDB is deleted after 30 days.
- Ensured tags were added to all cloud resources so users could search the inventory by tag, resource name, and account ID. Users are notified if tags are missing.
- Enabled users to download data or send it as an email.
- Used React, Node.js, Express.js, and Python for development. Configured email notifications for system upgrades.
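Below is a minimal sketch of the kind of daily inventory job described above, using Boto3 to list EC2 instances and upsert them into DynamoDB. The table name, attribute names, and retention convention are illustrative assumptions, not the actual implementation.

```python
"""Minimal sketch of a daily inventory job: list EC2 instances with Boto3
and upsert them into a DynamoDB table. Table and attribute names are
illustrative assumptions."""
import time
import boto3

DYNAMO_TABLE = "cloud-inventory"   # hypothetical table name
RETENTION_DAYS = 30                # records expire 30 days after a resource disappears

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(DYNAMO_TABLE)


def collect_ec2_inventory(region="us-east-1"):
    """Yield one inventory record per EC2 instance in the region."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                yield {
                    "resource_id": instance["InstanceId"],
                    "resource_type": "ec2-instance",
                    "region": region,
                    "tags": tags,
                    "last_seen": int(time.time()),
                }


def store_inventory(records):
    """Upsert the records; a separate cleanup pass can drop records whose
    last_seen is older than RETENTION_DAYS once the resource is gone."""
    with table.batch_writer() as batch:
        for record in records:
            batch.put_item(Item=record)


if __name__ == "__main__":
    store_inventory(collect_ec2_inventory())
```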
DevOps Engineer
ArrAy
- Worked on data and application migration using Google Compute Engine. Grouped VMs based on requirements to enable group migration.
- Migrated the infrastructure from AWS to GCP via Terraform templates. Worked with Terragrunt to manage the Terraform templates and created staging and production environments (see the sketch after this list).
- Created Compute Engine instance templates, VPC networks, database servers, and more using Terraform templates.
- Architected Terraform templates to provision Google Cloud Platform (GCP) resources, including Cloud Storage, Cloud SQL, GKE, Compute Engine, and log metrics.
- Set up alerting and monitoring in GCP with Google Stackdriver via Terraform templates. Created custom log metrics with Stackdriver Logging and built charts and alerts on top of them.
- Used GitLab and GitHub as source code repositories and created the CI/CD pipeline for the deployments in AWS and GCP. Developed a Google-managed SSL certificate using Terraform templates.
- Set up Terraform to automate the creation of the GKE Kubernetes cluster in Google Cloud with the best security standards.
- Set up GCP Firewall rules to allow or deny traffic to and from VM instances based on specified configurations.
- Used Terraform to create projects, VPCs, subnetworks, and GKE clusters for each environment. Created database instances for the production and development environments and updated their configuration for migration using Terraform.
- Created a dynamic routing and load balancing capability that enabled large application scaling; used Ingress rules and controllers.
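A small sketch of the per-environment Terragrunt wrapper mentioned in this role, assuming a hypothetical live/staging and live/production directory layout; the structure and script are illustrative, not the actual repository.

```python
"""Sketch of a per-environment Terragrunt wrapper of the kind used during the
GCP migration. The live/staging and live/production directory layout is an
assumption for illustration."""
import subprocess
import sys

ENVIRONMENTS = {
    "staging": "live/staging",
    "production": "live/production",
}


def terragrunt(environment: str, action: str = "plan") -> None:
    """Run `terragrunt <action>` inside the directory for the environment."""
    workdir = ENVIRONMENTS[environment]
    cmd = ["terragrunt", action]
    print(f"Running {' '.join(cmd)} in {workdir}")
    subprocess.run(cmd, cwd=workdir, check=True)


if __name__ == "__main__":
    env = sys.argv[1] if len(sys.argv) > 1 else "staging"
    terragrunt(env, "plan")       # review the changes first
    # terragrunt(env, "apply")    # apply once the plan looks correct
```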
DevOps Expert
Sun Nuclear
- Worked with GitHub Actions for continuous integration and continuous deployment; created the actions and workflows for the Git repositories.
- Created CloudFormation templates to provision AWS resources through the workflows. Stacks are created via GitHub Actions and automatically destroyed when a branch is deleted in GitHub (see the sketch after this list).
- Managed CI/CD pipelines using GitHub Actions and workflows. Triggered the workflows on events such as push, pull request, delete, schedule, and workflow_dispatch, which lets the user provide input values.
- Worked with a workflow distributor that runs on a schedule to distribute and update workflows across repositories.
- Used third-party actions from the GitHub marketplace in the workflow. The workflow triggers different repository events to process a new build of our application.
- Performed continuous build, continuous deploy, and test jobs using GitHub Actions.
- Used Docker images and deployed the application in Docker containers through CI/CD workflows in GitHub Actions.
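A hedged sketch of the cleanup step a GitHub Actions delete workflow could call to tear down the branch's CloudFormation stack. The branch-to-stack naming convention and the DELETED_BRANCH environment variable are assumptions for illustration.

```python
"""Sketch of a cleanup script a GitHub Actions delete workflow might invoke
to destroy the CloudFormation stack created for a feature branch."""
import os
import re
import boto3

cloudformation = boto3.client("cloudformation")


def stack_name_for_branch(branch: str) -> str:
    """Map a branch name to a stack name (hypothetical convention)."""
    safe = re.sub(r"[^a-zA-Z0-9-]", "-", branch)
    return f"app-{safe}"


def delete_branch_stack(branch: str) -> None:
    name = stack_name_for_branch(branch)
    print(f"Deleting CloudFormation stack {name}")
    cloudformation.delete_stack(StackName=name)
    # Block until the stack is gone so the workflow fails loudly on errors.
    waiter = cloudformation.get_waiter("stack_delete_complete")
    waiter.wait(StackName=name)


if __name__ == "__main__":
    # The workflow passes the deleted branch name in; read it from an
    # environment variable here for simplicity.
    delete_branch_stack(os.environ["DELETED_BRANCH"])
```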
AWS CloudFormation
Syngenta
- Developed CloudFormation scripts to provision all the services through CloudFactory, an application the client is developing that is similar to the Amazon console.
- Migrated all the services to CloudFactory UI using CloudFormation templates, after which the customer can provision the cloud services from there.
- Worked in the AWS multi-account landing zone (MALZ) environment: centralized shared services to reduce costs and a preconfigured network that interconnects multiple Amazon virtual private clouds, on-premises networks, AMS operators, and the internet.
- Automated workflows to provision new accounts, known as an account vending machine. Worked on cross-account logging and monitoring to facilitate audits, diagnostics, and analytics. Handled governance rules for security, operations, and compliance.
- Provisioned more than 30 services using CloudFormation and integrated those scripts into the CloudFactory UI (see the sketch after this list). Used GitHub to push the code to CodeCommit and run the build process.
- Developed the UI using JavaScript; the UI is updated by a Glue job that runs on a 24-hour cycle and stores the data in the test database. The Glue job integrates the test and production environments.
- Created product files in AWS Service Catalog using CloudFormation. Service Catalog maintains code versions, so we can roll back if there is a code failure.
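The sketch below shows the kind of call the CloudFactory integration could issue to provision one service from a CloudFormation template stored in S3. The stack name, template URL, and parameters are illustrative assumptions, not the actual CloudFactory code.

```python
"""Sketch of provisioning one service from a CloudFormation template,
as done for each of the 30+ services. Names and URLs are assumptions."""
import boto3

cloudformation = boto3.client("cloudformation")


def provision_service(stack_name: str, template_url: str, parameters: dict) -> None:
    """Create a stack from a template stored in S3 and wait for completion."""
    cloudformation.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,
        Parameters=[
            {"ParameterKey": key, "ParameterValue": value}
            for key, value in parameters.items()
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    waiter = cloudformation.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)


if __name__ == "__main__":
    provision_service(
        stack_name="demo-s3-bucket",
        # Hypothetical template location, not the actual CloudFactory bucket.
        template_url="https://s3.amazonaws.com/example-templates/s3-bucket.yaml",
        parameters={"BucketName": "example-demo-bucket"},
    )
```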
DevOps and Back-end Developer
Client (via Toptal)
- Created an application that handles financial data and stores purchase receipts in DynamoDB and Amazon RDS; the requested data is returned to the client on demand (see the Lambda sketch after this list).
- Triggered a Lambda function with API Gateway, DynamoDB, S3, SQS, and SNS. Wrote Lambda functions in Node.js and Python.
- Created CloudFormation templates for different environments, including development, stage, and production, to automate infrastructure for ELB, CloudWatch alarms, ASGs, SNS, RDS, etc., with the click of a button.
- Provided the security for API Gateway with AWS Cognito. Configured and managed AWS Simple Notification Service (SNS) and Simple Queue Service (SQS).
- Managed cryptographic keys and controlled user access to the various platforms with AWS KMS.
- Created a Lambda deployment function and configured it to receive and store events from the S3 bucket.
- Installed, configured, and managed RDBMS and NoSQL tools such as DynamoDB.
- Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB and deployed AWS Lambda code from Amazon S3 buckets.
- Worked on Amazon RDS Multi-AZ for automatic failover and high availability at the database tier for MySQL workloads.
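A minimal sketch of a Python Lambda handler behind API Gateway that stores a purchase receipt in DynamoDB, as described in the first bullet. The table name and payload fields are assumptions for illustration, not the client's actual schema.

```python
"""Sketch of a Python Lambda handler (API Gateway proxy integration) that
stores a purchase receipt in DynamoDB. Table and field names are assumptions."""
import json
import uuid
import boto3

TABLE_NAME = "purchase-receipts"   # hypothetical table name
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def handler(event, context):
    """The request body is a JSON purchase receipt."""
    receipt = json.loads(event["body"])
    item = {
        "receipt_id": str(uuid.uuid4()),
        "customer_id": receipt["customer_id"],
        "amount": str(receipt["amount"]),   # stored as a string to avoid float issues
        "purchased_at": receipt["purchased_at"],
    }
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"receipt_id": item["receipt_id"]}),
    }
```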
Senior Cloud Engineer
Citibank
- Managed AWS EC2 instances utilizing autoscaling, elastic load balancing, and Glacier for our QA and UAT environments.
- Built AWS infrastructure resources in CloudFormation templates, including ELBs, VPCs, EC2, S3, IAM, imported volumes, EBS, security groups, auto scaling, and RDS.
- Migrated an existing on-premises application to AWS. Used AWS services like EC2 and S3 for small data set processing and storage. Maintained the Hadoop cluster on AWS EMR.
- Set up a continuous integration environment using Jenkins for building jobs and to push the artifacts into an Artifactory repository on successful builds.
- Added multi-factor authentication (MFA) to protect user identities and validate sign-in details. Created user pools to maintain the user directory using Amazon Cognito. Customized workflows and user migration through AWS Lambda triggers.
- Automated the download process from AWS S3 buckets with shell scripting. Worked with EMR and set up the Hadoop environment on AWS EC2 instances.
- Created AWS CloudWatch alarms to monitor instances in the performance environment for operational and performance metrics during load testing (see the sketch after this list).
- Provided 24x7 on-call support to all other engineering, administration, development, and application support teams.
- Created automated scripts that build, configure, deploy, and test applications in different environments; maintained, supported, and enhanced the continuous integration environment.
- Assisted with the design and development of an automation scripting and execution framework using Selenium WebDriver. Analyzed test requirements and automation feasibility. Used JUnit and TestNG for data extraction and generation of reports.
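Below is a hedged sketch of creating one CloudWatch CPU alarm for a QA/UAT instance during load testing. The instance ID, SNS topic ARN, and thresholds are illustrative assumptions.

```python
"""Sketch of creating a CloudWatch CPU alarm for a load-test instance.
All identifiers and thresholds are illustrative assumptions."""
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="qa-web-high-cpu",                       # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # 5-minute datapoints
    EvaluationPeriods=2,       # two consecutive breaches trigger the alarm
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
)
```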
DevOps Engineer
T. Rowe Price
- Implemented and supported monitoring and alerting of production and corporate servers/storage via AWS CloudWatch.
- Maintained and expanded the AWS cloud infrastructure using the AWS stack.
- Automated provisioning and maintained a large number of servers on AWS instances. Performed cloud migrations to AWS and was involved in the planning, implementation, and growth of our infrastructure on Amazon Web Services (AWS).
- Created complete CI/CD pipelines using Jenkins.
- Configured networking services such as DNS, NFS, and DHCP and troubleshot network problems such as TCP/IP issues.
- Maintained, updated, and configured all Windows and Linux servers to ensure 24/7 uptime.
- Built and maintained Docker container clusters managed by Kubernetes on AWS, using Linux, Bash, Git, and Docker.
- Built, maintained, and scaled the infrastructure for production, QA, and development environments.
- Implemented a cloud infrastructure with full automation and created the first regulated exchange production disaster recovery in the cloud on Amazon’s AWS platform.
- Secured an EMR launcher with custom spark-submit steps using S3 events, SNS, KMS, and Lambda functions (see the sketch below). Executed Hadoop/Spark jobs on AWS EMR using programs and data stored in S3 buckets.
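A hedged sketch of the EMR launcher pattern in the last bullet: a Lambda function, triggered by an S3 event, submits a spark-submit step to a running EMR cluster. The cluster ID and script location are illustrative assumptions.

```python
"""Sketch of a Lambda function triggered by an S3 event that submits a
spark-submit step to an EMR cluster. Identifiers are assumptions."""
import boto3

emr = boto3.client("emr")
CLUSTER_ID = "j-EXAMPLECLUSTER"   # hypothetical EMR cluster ID


def handler(event, context):
    """Add one spark-submit step per object that landed in the bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        emr.add_job_flow_steps(
            JobFlowId=CLUSTER_ID,
            Steps=[
                {
                    "Name": f"process-{key}",
                    "ActionOnFailure": "CONTINUE",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": [
                            "spark-submit",
                            "s3://example-scripts/process.py",  # hypothetical job script
                            f"s3://{bucket}/{key}",
                        ],
                    },
                }
            ],
        )
```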
Software Developer
Continental Hospital
- Worked with the Serverless framework. Involved in gathering the requirements from the stakeholders.
- Developed ad-hoc reports and worked with RESTful APIs.
- Handled the deployment, scaling, and performance of our applications through their entire lifecycle from development to production.
- Created a migration road map and CI/CD delivery processes to convert the application from a monolithic to microservices architecture.
- Worked with the SMTP protocol and developed servlets to secure the application.
- Created and managed various development, build platforms, and deployment strategies.
- Implemented Autosys for scheduling the ETL, Java, WebLogic, and PL/SQL jobs.
- Performed regular updates and installation of patches using RPM and YUM.
- Wrote PowerShell scripts to pull the data from the APIs.
- Delivered scalable, resilient, and automated builds in a cloud environment using CloudFormation, Ansible, and Jenkins for high-quality data pipelines.
Experience
Trans Automation
Each piece of functionality is split into a separate microservice. The front end is developed with Vue.js, JavaScript, HTML5, and CSS3, and the back end with Node.js. I used EMR to securely handle a large amount of data, and the application is stored in an S3 bucket. Once the clusters are created, EMR integrates with Amazon CloudWatch to monitor them and supports automatic troubleshooting through the debugging GUI. EMR destroys the cluster automatically when the work is done.
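A minimal sketch of launching a transient EMR cluster of the kind described above, which terminates itself once its steps finish. The cluster name, instance types, and S3 paths are illustrative assumptions, not the project's actual configuration.

```python
"""Sketch of launching a transient EMR cluster that auto-terminates after
its steps complete. Names, instance types, and S3 paths are assumptions."""
import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="trans-automation-batch",          # hypothetical cluster name
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,   # auto-terminate after the steps
    },
    Steps=[
        {
            "Name": "process-transactions",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/app/process.py"],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://example-bucket/emr-logs/",    # logs for debugging and monitoring
)
print("Started cluster:", response["JobFlowId"])
```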
Education
Master's Degree in Computer Science
New York Institute of Technology - New York
Certifications
Certified Solutions Architect - Associate
AWS
Skills
Libraries/APIs
Node.js, Vue, REST APIs, jQuery, Terragrunt, React
Tools
Terraform, Jenkins, Amazon Elastic MapReduce (EMR), AWS IAM, Amazon Cognito, Amazon Simple Queue Service (SQS), GitHub, GitLab, AWS CloudFormation, Amazon CloudWatch, AWS SDK, AWS Key Management Service (KMS), Amazon Simple Notification Service (SNS), Apache Tomcat, AWS CloudTrail, NGINX, Bitbucket, Puppet, Chef, Ansible, Servlet, Amazon EKS, AWS CodeDeploy, AWS CodeCommit, AWS CodeBuild, Amazon SageMaker, AWS Service Catalog, Google Kubernetes Engine (GKE), Boto 3, Artifactory, CircleCI, Windows Azure SDK
Platforms
Amazon Web Services (AWS), Kubernetes, Docker, Linux, Amazon EC2, AWS Lambda, Windows, AWS NLB, Google Cloud Platform (GCP), 1Password, Visual Studio Code (VS Code)
Languages
HTML, JavaScript, Python, Bash Script, Java, YAML, Python 3
Frameworks
Express.js, Selenium, TestNG, JUnit
Paradigms
Testing, Continuous Integration (CI), Continuous Delivery (CD), Azure DevOps
Storage
MongoDB, SQL Server 2012, Amazon S3 (AWS S3), Amazon DynamoDB, JSON, Microsoft SQL Server, MySQL, Google Cloud
Other
Kubernetes Operations (kOps), AWS DevOps, Machine Learning, Data Science, API Gateways, Shell Scripting, Amazon Route 53, AWS CodePipeline, GitHub Actions, Site Reliability Engineering (SRE), CI/CD Pipelines, Infrastructure as Code (IaC)