Alejandro Medina
Verified Expert in Engineering
DevOps Engineer and Developer
Calgary, AB, Canada
Toptal member since December 19, 2022
Alejandro has over a decade of professional experience in DevOps, automation, and the development of custom software lifecycles. He started his career as a developer but later began working with Linux and became an enthusiast. Skilled in software development, operating systems, and infrastructure, Alejandro knows how to troubleshoot issues and find optimal solutions to problems. He graduated as an information systems engineer in 2002.
Preferred Environment
Linux, Ansible, Jenkins, Terraform, Ansible Tower, Scripting, IT Project Management, Agile DevOps, IT Infrastructure, Amazon Web Services (AWS)
The most amazing...
...project I've done is creating a custom P2V process to clone and restore disks.
Work Experience
Senior DevOps and Linux Engineer
Watsco
- Created a Jenkins pipeline to deploy Docker containers via a tooling-provided API. The build form fetches image versions (tags) on the fly from JFrog Artifactory, the Docker repository, along with the related branch commit information from Bitbucket.
- Automated infrastructure provisioning across multiple AWS accounts via Terraform and Terraform Cloud for services such as EC2, RabbitMQ, API Gateway, load balancers, Amazon ECS, Amazon RDS, VPCs, subnets, AWS Cloud Map, and Amazon Route 53.
- Configured microservice applications on ECS integrated with an API gateway for access. Deployments use rolling updates for seamless releases: a set of containers keeps serving traffic while the others are replaced (see the sketch after this list).
- Set up a series of Bitbucket Pipelines to build Docker images from branch and tag commits. Depending on the environment and/or the pushed tag, these pipelines also deploy the images to AWS ECS.
- Improved existing CI/CD pipelines by adding secret and error management to stages and custom scripts.
- Hardened Linux servers by tuning kernel parameters and system- and user-level services to follow industry-standard procedures.
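The rolling-update behavior mentioned above comes down to the ECS deployment configuration. Below is a minimal Python sketch using boto3; the cluster, service, and region names are placeholders, not the actual project values.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # hypothetical region

def deploy_new_version(task_definition_arn: str) -> None:
    """Point the service at a new task definition; ECS then replaces
    containers gradually so a healthy set keeps serving traffic."""
    ecs.update_service(
        cluster="app-cluster",      # placeholder cluster name
        service="app-service",      # placeholder service name
        taskDefinition=task_definition_arn,
        deploymentConfiguration={
            # Keep 100% of the desired count healthy while up to 200%
            # may run temporarily, i.e., old and new tasks overlap.
            "minimumHealthyPercent": 100,
            "maximumPercent": 200,
        },
    )
```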
Senior DevOps and Linux Engineer
Suncor
- Configured Ansible Tower workflows that orchestrated a series of Ansible Playbooks to build and deploy Docker images to Red Hat OpenShift, and run containers for JSON data processing.
- Set up Red Hat OpenShift cron jobs that cache data from the Cisco Secure Firewall Management Center via a REST API. This process lets a network team know about unused and outdated firewall rules across dozens of offices and locations.
- Wrote and set up Ansible Playbooks to keep Cisco and Juniper network switches updated via the Netmiko and Nornir Python libraries, enabling the client to improve security standards by learning which switches need updates to patch known vulnerabilities (as sketched below).
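A minimal sketch of the kind of check Netmiko enables, assuming a hypothetical Cisco IOS switch; the real runs were orchestrated across many devices via Ansible and Nornir.

```python
from netmiko import ConnectHandler

# Hypothetical inventory entry; credentials would come from a vault.
switch = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",
    "username": "netops",
    "password": "secret",
}

def current_ios_version() -> str:
    """Read the running IOS version so it can be compared against a
    patched baseline to decide whether the switch needs an update."""
    with ConnectHandler(**switch) as conn:
        return conn.send_command("show version | include Version")
```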
Senior DevOps and Linux Engineer
Shaw
- Designed and implemented an infrastructure on VMware vSphere, including virtual machines (VMs), virtual networks, and virtual storage for over a hundred different development and test environments.
- Used APIs from multiple vendors to integrate VM provisioning on vSphere with Active Directory, Men&Mice, and Red Hat Satellite via a modular Ruby script, helping the client cut costs by automating manual processes.
- Provisioned AWS resources, including VPC, subnets, EC2, RDS, ELB, and security groups via AWS CloudFormation and manual deployments for a couple of custom components that make up the core business application.
DevOps Engineer
Walmart
- Participated in infrastructure troubleshooting, including VMs, LBs, and networks, along with developers and a quality assurance team to diagnose issues in the build process and deployment testing.
- Set up the right combination of HTTP parameters, logical conditions, and general configuration on Akamai to route requests optimally to resources within a web application.
- Contributed to the CI/CD automation, primarily in Bash and Perl, that allowed a team to accomplish a tight timeline for production deployment.
Unix and Linux Administrator
Cenovus Energy
- Managed and troubleshot SAN LUN storage on Linux and AIX systems.
- Reduced operational costs by cutting down the time spent on the physical-to-virtual (P2V) process for Linux servers. With a custom P2V tool built on top of Clonezilla, a Linux bare-metal server can be processed in less than 30 minutes.
- Contributed to improvements in building Linux servers in both physical and virtual environments by tuning Red Hat Satellite and Spacewalk profiles according to new corporate standards.
Linux Administrator and DevOps Engineer
Canadian Pacific
- Set up a high-availability environment at the software level based on open-source components, such as HAProxy and Keepalived, minimizing service downtime. This allowed the team to meet the uptime metrics stated in the original project specification.
- Used configuration managers, such as Puppet and Ansible, to deploy software components and configuration across servers.
- Dramatically reduced the time the team spent searching for information by keeping the project's documentation up to date in the wiki.
Experience
Custom P2V Conversion
Given my long-term experience with open-source software, I chose Clonezilla for this task. Because of the old hardware the Linux instances were running on, I had to customize Clonezilla by adding specific storage and network kernel modules so it could read data from the disks and transfer it over the LAN. After this, the P2V became a simple two-step process:
• Creating an image of all physical disks on the Sun Fire V240 bare-metal server and copying it to network-attached storage (NAS) over the network. Clonezilla was loaded on the Sun Fire V240 as an image via ALOM.
• Restoring the image on an empty VM with a disk layout similar to the bare-metal server's. Clonezilla was loaded on VMware as a media drive.
The same custom P2V process was then repeated for 10 more Sun Fire V240 servers.
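The tool itself isn't shown here, but both steps map onto Clonezilla's ocs-sr utility. Below is a minimal Python sketch of how a wrapper could drive them; the image name, device names, and flags are illustrative assumptions, not the original tool's code.

```python
import subprocess

IMAGE_NAME = "v240-rootdisk"  # hypothetical image name stored on the NAS
SOURCE_DISK = "sda"           # disk device as seen by the Clonezilla live image

def save_disk_image() -> None:
    """Step 1: image the physical disk to the NAS mounted at /home/partimag."""
    # -q2: use partclone, -j2: clone hidden data after the MBR,
    # -z1p: parallel gzip compression, -p true: do nothing when finished
    subprocess.run(
        ["ocs-sr", "-q2", "-j2", "-z1p", "-p", "true",
         "savedisk", IMAGE_NAME, SOURCE_DISK],
        check=True,
    )

def restore_disk_image(target_disk: str = "sda") -> None:
    """Step 2: restore the saved image onto the empty VM's similar disk."""
    # -g auto: reinstall the bootloader, -e1 auto / -e2: adjust disk geometry
    subprocess.run(
        ["ocs-sr", "-g", "auto", "-e1", "auto", "-e2", "-p", "true",
         "restoredisk", IMAGE_NAME, target_disk],
        check=True,
    )
```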
Automated Cisco Firewall Data Processing
I developed this custom solution using the vendor's firewall API and Python to process the data. The scripts ran as Docker containers on Red Hat OpenShift. The custom software consisted of the following:
• One Python process that downloaded the data from the vendor's management console via a REST API in JSON format.
• Another Python process that filtered the data according to the provided IP address parameters.
• A separate Python script built daily reports.
Because the amount of data downloaded via the API was massive (around 1.5 TB) and the download took a long time, it had to run as a scheduled job once a day. The expected outcome was a daily JSON report listing network devices and their associated firewall rules, given the IP address parameters. This report lets the client know which firewall rules are outdated, which devices change IP addresses, and how often.
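A minimal sketch of the download step against the Cisco Secure Firewall Management Center REST API; the host, domain UUID, and pagination details are assumptions for illustration.

```python
import requests

FMC_HOST = "https://fmc.example.internal"  # hypothetical management console

def get_auth_token(user: str, password: str) -> str:
    """FMC issues a session token in a response header."""
    resp = requests.post(
        f"{FMC_HOST}/api/fmc_platform/v1/auth/generatetoken",
        auth=(user, password),
        verify=False,  # the console often uses a self-signed certificate
    )
    resp.raise_for_status()
    return resp.headers["X-auth-access-token"]

def list_access_policies(token: str, domain_uuid: str) -> list:
    """Page through one domain's access policies and return them as JSON."""
    policies, offset = [], 0
    while True:
        resp = requests.get(
            f"{FMC_HOST}/api/fmc_config/v1/domain/{domain_uuid}"
            "/policy/accesspolicies",
            headers={"X-auth-access-token": token},
            params={"offset": offset, "limit": 100, "expanded": "true"},
            verify=False,
        )
        resp.raise_for_status()
        page = resp.json().get("items", [])
        if not page:
            return policies
        policies.extend(page)
        offset += len(page)
```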
Ephemeral Environments Provisioning for an On-premises Application
The target environments consisted of the following components:
• Virtual machines running on VMware vSphere.
• Each virtual machine had to be set up on Red Hat Satellite for package management and operating system patching.
• Subnets and static IP addresses were managed by Men&Mice as IP Address Management (IPAM) software.
• Active Directory (AD) as identity management software.
Ansible and Ansible Tower were chosen to automate and orchestrate all of this in line with industry standards. To that end, I wrote modular Ansible playbooks to connect to:
• Men&Mice via a REST API to create new subnets if necessary and get the available IP address(es) for virtual machines to be built.
• vSphere via a REST API to create virtual machines with CPU, memory, storage, and network specifications from YAML files.
• Red Hat Satellite via a REST API to create a profile for the new virtual machine, so it receives operating system patches and software packages.
• AD via an LDAP client to add the virtual machine to the proper domain.
Ansible Tower orchestrated the process through a pipeline with multiple stages and access control.
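A minimal sketch of how such an orchestration could be kicked off through the Ansible Tower REST API; the host, workflow template ID, and extra_vars are hypothetical, and the template would need to prompt for variables on launch.

```python
import requests

TOWER_HOST = "https://tower.example.internal"  # hypothetical Tower host
WORKFLOW_ID = 42                               # hypothetical workflow template ID

def launch_provisioning_workflow(token: str, vm_name: str, subnet: str) -> int:
    """Launch the workflow that chains the Men&Mice, vSphere, Satellite,
    and AD playbooks, passing the VM parameters as extra_vars."""
    resp = requests.post(
        f"{TOWER_HOST}/api/v2/workflow_job_templates/{WORKFLOW_ID}/launch/",
        headers={"Authorization": f"Bearer {token}"},
        json={"extra_vars": {"vm_name": vm_name, "subnet": subnet}},
    )
    resp.raise_for_status()
    return resp.json()["id"]  # ID of the launched workflow job
```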
AWS ECS | Build and Deployment via Bitbucket and Jenkins
• Used a Bitbucket Pipeline to build the application and deploy it straight to non-production environments, such as testing and staging. The build stage builds a Docker image, tags it with a semantic version tag and the commit ID, and pushes it to JFrog Artifactory, the default Docker repository. The second stage deploys the image to AWS ECS by registering a new task definition, but only if the image version has changed.
• Handled deployments to production through a Jenkins job that connects to AWS ECS and follows the same process, i.e., it creates a new task definition only if a different image version is deployed. The job displays a drop-down of the image versions available in JFrog Artifactory, fetched on the fly via the JFrog REST API; the versions shown are the ones promoted after the image was tested and approved for production.
This Jenkins job is also handy for manual rollbacks and routine manual redeployments (see the sketch below).
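The version drop-down boils down to one Artifactory call. A minimal sketch, assuming hypothetical repository and image names; a Jenkins Active Choices parameter (or similar) could call this to populate the list on the fly.

```python
import requests

ARTIFACTORY = "https://artifactory.example.internal/artifactory"  # hypothetical
REPO_KEY = "docker-local"  # hypothetical Docker repository key
IMAGE = "my-app"           # hypothetical image name

def list_image_tags(api_key: str) -> list:
    """Fetch the Docker tags available for an image via the Artifactory
    Docker registry API, e.g., to populate a Jenkins version drop-down."""
    resp = requests.get(
        f"{ARTIFACTORY}/api/docker/{REPO_KEY}/v2/{IMAGE}/tags/list",
        headers={"X-JFrog-Art-Api": api_key},
    )
    resp.raise_for_status()
    return resp.json()["tags"]
```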
Education
Engineer's Degree in Information Systems
Universidad Católica Andrés Bello - Caracas, Venezuela
Certifications
AWS Solutions Architect – Associate
Amazon Web Services
LPIC-2
Linux Professional Institute
LPIC-1
Linux Professional Institute
Skills
Libraries/APIs
Jenkins Job DSL
Tools
Ansible, Jenkins, Terraform, Ansible Tower, Artifactory, VMware vSphere, Amazon Elastic Container Service (ECS), Amazon Virtual Private Cloud (VPC), Amazon Elastic Container Registry (ECR), AWS CLI, AWS IAM, Git, Bitbucket, AWS ELB, AWS Deployment, Cron, Red Hat Satellite, Puppet, NGINX, RabbitMQ, AWS CodeBuild, Amazon CloudWatch, Splunk
Paradigms
DevOps, Continuous Integration (CI), Continuous Deployment, Continuous Delivery (CD), ETL, DevSecOps, Automation
Platforms
Linux, Red Hat Enterprise Linux, Ubuntu, CentOS, AWS IoT, Docker, Amazon EC2, Amazon Web Services (AWS), AWS ALB, Windows, Red Hat OpenShift, OpenStack, AWS Lambda
Storage
NAS Servers, JSON, Amazon S3 (AWS S3), Cloud Deployment, MySQL, Datadog
Languages
Python 3, Ruby, HTML5, Bash Script, YAML, Python, SQL, JavaScript, PHP
Other
Scripting, Operating Systems, Troubleshooting, IT Automation, Unix/Linux Virtualization, CI/CD Pipelines, DevOps, System Architecture, Documentation, System Administration, Shell Scripting, Infrastructure as Code (IaC), Configuration Management, GitOps, File Systems, AWS Certified Solution Architect, IT Project Management, Agile DevOps, IT Infrastructure, Analytical Thinking, Problem Management, Software Development, Teamwork, Process Flows, Capacity Planning, Out of Box Experience (OOBE), APIs, Cloud, Content Delivery Networks (CDN), Storage, Networks, Virtualization, AWS DevOps, Amazon API Gateway, Amazon Route 53, Elastic Load Balancers, Business Continuity & Disaster Recovery (BCDR), Amazon RDS, LDAP, IPAM (IP Address Management), Cloud Architecture, DNS, SSL, Transport Layer Security (TLS), Scaling, AWS Auto Scaling, Back-end, Load Balancers, Autoscaling, Cloud Services, Networking, Monitoring, eCommerce, ECS, File Servers, Cloudflare, Linux Kernel Modules, Physical-to-virtual (P2V), Cisco, ASA Firewalls, HAProxy, Microsoft Servers, Firewalls