Alejandro Medina, Developer in Calgary, AB, Canada

Alejandro Medina

Verified Expert in Engineering

DevOps Engineer and Developer

Location
Calgary, AB, Canada
Toptal Member Since
December 19, 2022

Alejandro has over a decade of professional experience in DevOps, automation, and the development of custom software lifecycles. He started his career as a developer but eventually gravitated toward Linux and became an enthusiast. Skilled in software development, operating systems, and infrastructure, Alejandro knows how to troubleshoot issues and find optimal solutions to problems. He graduated as an information systems engineer in 2002.

Portfolio

Watsco
Terraform, Linux, Artifactory, IT Automation, Jenkins, Scripting...
Suncor
Red Hat OpenShift, Ansible, Ansible Tower, Python 3, Cisco, Scripting...
Shaw
Ansible, Ansible Tower, VMware vSphere, Ruby, Red Hat Satellite...

Experience

Availability

Part-time

Preferred Environment

Linux, Ansible, Jenkins, Terraform, Ansible Tower, Scripting, IT Project Management, Agile DevOps, IT Infrastructure, Amazon Web Services (AWS)

The most amazing...

...project I've done is creating a custom P2V process to clone and restore disks.

Work Experience

Senior DevOps and Linux Engineer

2021 - PRESENT
Watsco
  • Created a Jenkins pipeline to deploy Docker containers via a tooling-provided API. The build form gets image versions (tags) on the fly from JFrog Artifactory, a Docker repository, and the version-related branch commit information from Bitbucket (see the sketch after this list).
  • Automated infrastructure provisioning on AWS across multiple accounts via Terraform and Terraform Cloud for services such as EC2, RabbitMQ, API Gateway, load balancers, Amazon ECS, Amazon RDS, VPCs, subnets, AWS Cloud Map, and Amazon Route 53.
  • Configured microservice applications on ECS, integrated with an API gateway for access. Deployments use rolling updates, so a set of containers keeps serving traffic while the others are being updated.
  • Set up a series of Bitbucket Pipelines to build Docker images on branch pushes and tag commits. Depending on the environment and/or the pushed tag, these pipelines also deploy the image to AWS ECS.
  • Improved existing CI/CD pipelines by adding secret and error management to stages and custom scripts.
  • Hardened Linux servers by tuning kernel parameters and system- and user-level services to follow industry-standard procedures.
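
A minimal sketch of how a build form can fetch Docker image tags on the fly from Artifactory's REST API; the host, repository key, and image name below are hypothetical placeholders:

```python
# Sketch: list Docker image tags from JFrog Artifactory, e.g., to feed a
# Jenkins build parameter. Host, repo key, and image name are hypothetical.
import os
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # placeholder
REPO_KEY = "docker-local"                                        # placeholder
IMAGE = "my-service"                                             # placeholder

def list_image_tags(api_token: str) -> list[str]:
    """Return IMAGE's tags via Artifactory's Docker Registry v2 endpoint."""
    resp = requests.get(
        f"{ARTIFACTORY_URL}/api/docker/{REPO_KEY}/v2/{IMAGE}/tags/list",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("tags", [])

if __name__ == "__main__":
    print("\n".join(sorted(list_image_tags(os.environ["ARTIFACTORY_TOKEN"]))))
```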
Technologies: Terraform, Linux, Artifactory, IT Automation, Jenkins, Scripting, IT Project Management, Agile DevOps, IT Infrastructure, Analytical Thinking, Problem Management, Software Development, Operating Systems, Teamwork, Process Flows, Troubleshooting, Python 3, Red Hat Enterprise Linux, Capacity Planning, HTML5, JavaScript, DevOps, AWS DevOps, Continuous Integration (CI), Continuous Deployment, Docker, CI/CD Pipelines, Amazon API Gateway, Amazon Elastic Container Service (Amazon ECS), Amazon Route 53, Elastic Load Balancers, RabbitMQ, Amazon S3 (AWS S3), Business Continuity & Disaster Recovery (BCDR), Amazon Virtual Private Cloud (VPC), Amazon RDS, Amazon Elastic Container Registry (ECR), AWS CodeBuild, Amazon EC2, AWS CLI, Amazon CloudWatch, AWS Lambda, AWS IAM, Cloud Deployment, Amazon Web Services (AWS), DevOps Engineer, Git, Bitbucket, YAML, Cloud Architecture, DNS, Python, Ubuntu, SSL, Transport Layer Security (TLS), CentOS, AWS ALB, AWS ELB, Scaling, AWS Auto Scaling, System Architecture, Back-end, Jenkins Job DSL, AWS Deployment, Load Balancers, Autoscaling, Documentation, System Administration, Cloud Services, Networking, MySQL, Cron, Monitoring, Shell Scripting, SQL, Infrastructure as Code (IaC), DevSecOps, Configuration Management, Continuous Delivery (CD), eCommerce, PHP, ECS, GitOps, Datadog, File Servers, Cloudflare, File Systems, AWS Certified Solution Architect, Automation, AWS IoT

Senior DevOps and Linux Engineer

2021 - 2021
Suncor
  • Configured Ansible Tower workflows that orchestrated a series of Ansible Playbooks to build and deploy Docker images to Red Hat OpenShift, and run containers for JSON data processing.
  • Set up Red Hat OpenShift cron jobs that cache data from the Cisco Secure Firewall Management Center via a REST API. This process lets a network team know about unused and outdated firewall rules across dozens of offices and locations.
  • Wrote and set up Ansible Playbooks to keep Cisco and Juniper network switches updated via the Netmiko and Nornir Python libraries, enabling the client to improve security standards by identifying which switches need patches for known vulnerabilities (see the sketch after this list).
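
As a rough illustration of the Netmiko side of that automation, a check like the following can flag a switch for an update; the host, credentials, and target version are placeholders, and the real policy was driven by known-vulnerability data:

```python
# Sketch: use Netmiko to check a Cisco switch's software version and flag
# it for patching. Host, credentials, and target version are placeholders.
from netmiko import ConnectHandler

def needs_update(host: str, username: str, password: str, target_version: str) -> bool:
    """Return True if the switch does not report the target IOS version."""
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username=username,
        password=password,
    )
    try:
        output = conn.send_command("show version")
    finally:
        conn.disconnect()
    return target_version not in output
```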
Technologies: Red Hat OpenShift, Ansible, Ansible Tower, Python 3, Cisco, Scripting, IT Project Management, Agile DevOps, Analytical Thinking, Problem Management, Software Development, Operating Systems, Teamwork, Troubleshooting, IT Automation, Red Hat Enterprise Linux, Capacity Planning, DevOps, Continuous Integration (CI), Continuous Deployment, Docker, CI/CD Pipelines, DevOps Engineer, Git, Bitbucket, YAML, Cloud Architecture, DNS, Python, Ubuntu, CentOS, Scaling, System Architecture, Back-end, Documentation, Cron, Shell Scripting, Infrastructure as Code (IaC), DevSecOps, Configuration Management, Continuous Delivery (CD), GitOps, Firewalls, File Servers, File Systems, AWS Certified Solution Architect, Amazon Web Services (AWS), Automation, AWS IoT

Senior DevOps and Linux Engineer

2017 - 2020
Shaw
  • Designed and implemented an infrastructure on VMware vSphere, including virtual machines (VMs), virtual networks, and virtual storage for over a hundred different development and test environments.
  • Used APIs from multiple vendors to integrate VM provisioning on vSphere with Active Directory, Men&Mice, and Red Hat Satellite via a modular Ruby script, helping the client cut costs by automating manual processes.
  • Provisioned AWS resources, including VPC, subnets, EC2, RDS, ELB, and security groups via AWS CloudFormation and manual deployments for a couple of custom components that make up the core business application.
Technologies: Ansible, Ansible Tower, VMware vSphere, Ruby, Red Hat Satellite, Red Hat Enterprise Linux, Agile DevOps, Capacity Planning, Jenkins, Scripting, IT Project Management, IT Infrastructure, Analytical Thinking, Problem Management, Software Development, Operating Systems, Teamwork, Process Flows, Troubleshooting, Artifactory, IT Automation, DevOps, AWS DevOps, Continuous Integration (CI), Continuous Deployment, Docker, CI/CD Pipelines, Amazon API Gateway, Amazon Elastic Container Service (Amazon ECS), Amazon Route 53, Elastic Load Balancers, Amazon S3 (AWS S3), Business Continuity & Disaster Recovery (BCDR), Amazon Virtual Private Cloud (VPC), Amazon RDS, Amazon Elastic Container Registry (ECR), Amazon EC2, AWS CLI, Amazon CloudWatch, AWS Lambda, AWS IAM, Cloud Deployment, Amazon Web Services (AWS), DevOps Engineer, Git, Bitbucket, YAML, IPAM (IP Address Management), LDAP, Cloud Architecture, DNS, Python, Ubuntu, SSL, Transport Layer Security (TLS), CentOS, AWS ALB, AWS ELB, Scaling, AWS Auto Scaling, System Architecture, Back-end, Jenkins Job DSL, AWS Deployment, Load Balancers, Autoscaling, Documentation, System Administration, Cloud Services, Networking, Cron, Monitoring, Shell Scripting, SQL, Splunk, Infrastructure as Code (IaC), DevSecOps, Configuration Management, Windows, Continuous Delivery (CD), Microsoft Servers, ECS, GitOps, Firewalls, File Servers, Cloudflare, NAS Servers, File Systems, AWS Certified Solution Architect, Automation, AWS IoT

DevOps Engineer

2016 - 2017
Walmart
  • Participated in infrastructure troubleshooting, including VMs, LBs, and networks, along with developers and a quality assurance team to diagnose issues in the build process and deployment testing.
  • Set up the right combination of HTTP parameters, logical conditions, and a general configuration on Akamai to get the optimal route to a resource within a web application.
  • Contributed to the CI/CD automation, primarily in Bash and Perl, that allowed a team to accomplish a tight timeline for production deployment.
Technologies: Agile DevOps, Cloud, OpenStack, Linux, Bash Script, Content Delivery Networks (CDN), Jenkins, HTML5, JavaScript, DevOps, Continuous Integration (CI), Continuous Deployment, Docker, CI/CD Pipelines, DevOps Engineer, Git, YAML, IPAM (IP Address Management), LDAP, DNS, SSL, Transport Layer Security (TLS), Scaling, System Architecture, Back-end, Load Balancers, Autoscaling, Documentation, System Administration, Cloud Services, Networking, MySQL, Cron, Monitoring, Shell Scripting, SQL, Infrastructure as Code (IaC), DevSecOps, Configuration Management, Amazon EC2, Continuous Delivery (CD), eCommerce, GitOps, File Systems

Unix and Linux Administrator

2014 - 2015
Cenovus Energy
  • Managed and troubleshot SAN LUN storage on Linux and AIX systems.
  • Reduced operational costs by cutting the time spent on the physical-to-virtual (P2V) process for Linux servers: with a custom P2V tool I built on top of Clonezilla, a Linux bare-metal server can be processed in less than 30 minutes.
  • Contributed to improvements in building Linux servers in both physical and virtual environments by tuning Red Hat Satellite and Spacewalk profiles according to new corporate standards.
Technologies: Linux, Scripting, Storage, Networks, VMware vSphere, Unix/Linux Virtualization, Red Hat Satellite, Docker, Business Continuity & Disaster Recovery (BCDR), LDAP, Python, Ubuntu, SSL, Transport Layer Security (TLS), CentOS, Scaling, System Architecture, Back-end, Documentation, System Administration, Networking, Cron, Monitoring, Shell Scripting, SQL, Infrastructure as Code (IaC), Configuration Management, Windows, Firewalls, File Servers, NAS Servers, File Systems

Linux Administrator and DevOps Engineer

2011 - 2013
Canadian Pacific
  • Set up a high-availability environment at the software level based on open-source components, such as HAProxy and Keepalived, minimizing service downtime. This allowed the team to meet the uptime targets stated in the original project specification.
  • Used configuration managers, such as Puppet and Ansible, to deploy software components and configuration across servers.
  • Dramatically reduced the time spent searching for information by keeping the project's documentation up to date in the wiki.
Technologies: Amazon Web Services (AWS), Linux, Virtualization, VMware vSphere, Scripting, HAProxy, Puppet, Ansible, NGINX, DevOps, Continuous Integration (CI), Continuous Deployment, CI/CD Pipelines, DevOps Engineer, Git, LDAP, DNS, Ubuntu, SSL, Transport Layer Security (TLS), CentOS, AWS ALB, AWS ELB, Scaling, AWS Auto Scaling, System Architecture, Back-end, Autoscaling, Documentation, System Administration, Cloud Services, Networking, MySQL, Cron, Monitoring, Shell Scripting, SQL, Splunk, Infrastructure as Code (IaC), DevSecOps, Configuration Management, Amazon EC2, Continuous Delivery (CD), eCommerce, File Servers, NAS Servers, File Systems

Custom P2V Conversion

We ran into issues using a proprietary P2V tool on outdated Linux versions running on old Sun Fire V240 hardware; it turned out the tool didn't support this hardware. However, the P2V project had a tight timeline, and my client needed to finish it as planned.

Given my long-term experience with open-source software, I chose Clonezilla for this task. Because of the old hardware the Linux instances were running on, I had to customize Clonezilla by adding specific storage and network kernel modules so it could read data from the disks and transfer it over the LAN. After this, the P2V became a simple two-step process:

• Creating an image of all physical disks on the Sun Fire V240 bare-metal server and copying it to network-attached storage (NAS) over the network. Clonezilla was loaded onto the Sun Fire V240 as an image via ALOM.
• Restoring the image onto an empty VM with a disk layout similar to the bare-metal server's. Clonezilla was loaded into VMware as a media drive.

The custom P2V process was then repeated for 10 more Sun Fire V240 servers.

Automated Cisco Firewall Data Processing

A client struggled with network devices whose IP addresses changed over time, breaking firewall rules. The firewall vendor (Cisco) didn't have a specific tool for this scenario, and since the client had thousands of devices attached to the network across different locations, they needed a custom solution.

I developed the solution using the vendor's firewall API, with Python to process the data. The scripts ran as Docker containers on Red Hat OpenShift. The custom software consisted of the following:

• One Python process that downloaded the data from the vendor's management console via a REST API in JSON format.
• Another Python process that filtered the data by the given IP address parameters.
• A separate Python script that built daily reports.

As the amount of data downloaded via the API was massive (around 1.5 TB) and the download took a long time, the process had to run as a scheduled job once a day. The expected outcome was a daily JSON report listing network devices and their associated firewall rules for the given IP address parameters. This report lets the client know which firewall rules are outdated, which devices change IP addresses, and how often.
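
The download step boils down to authenticating against the FMC REST API and then fetching policy objects. A minimal sketch of that interaction, with a placeholder host and credentials, and pagination and error handling omitted:

```python
# Sketch: pull access-control policy data from the Cisco Secure Firewall
# Management Center (FMC) REST API. The host is a placeholder; endpoints
# follow FMC's fmc_platform/fmc_config API layout.
import requests

FMC_HOST = "https://fmc.example.com"  # placeholder

def get_token(user: str, password: str) -> tuple[str, str]:
    """Authenticate and return (access token, domain UUID) from the headers."""
    resp = requests.post(
        f"{FMC_HOST}/api/fmc_platform/v1/auth/generatetoken",
        auth=(user, password),
        verify=False,  # many FMC appliances use self-signed certificates
        timeout=30,
    )
    resp.raise_for_status()
    return resp.headers["X-auth-access-token"], resp.headers["DOMAIN_UUID"]

def list_access_policies(token: str, domain: str) -> list[dict]:
    """Fetch the first page of access-control policies for the domain."""
    resp = requests.get(
        f"{FMC_HOST}/api/fmc_config/v1/domain/{domain}/policy/accesspolicies",
        headers={"X-auth-access-token": token},
        verify=False,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])
```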

Ephemeral Environments Provisioning for an On-premise Application

Automated the provisioning of over 100 ephemeral development and testing environments for an on-premises app, a process previously handled largely by hand. Setting up an environment required the following:

• Virtual machines running on VMware vSphere.
• Each virtual machine had to be set up on Red Hat Satellite for package management and operating system patching.
• Subnets and static IP addresses were managed by Men&Mice as IP Address Management (IPAM) software.
• Active Directory (AD) as identity management software.

Ansible and Ansible Tower were chosen to automate and orchestrate all of this in line with industry standards. I wrote modular Ansible playbooks to connect to:

• Men&Mice via a REST API to create new subnets if necessary and get the available IP address(es) for virtual machines to be built.
• vSphere via a REST API to create virtual machines with CPU, memory, storage, and network specifications from YAML files.
• Red Hat Satellite via a REST API to create a profile for the new virtual machine, so it receives operating system patches and software packages.
• AD via an LDAP client to add the virtual machine to the proper domain.

Ansible Tower orchestrated the process through a pipeline with multiple stages and access control.
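
To give a feel for the pattern those playbooks wrap, here is a sketch of requesting the next free IP from the IPAM and folding it into a VM spec; the endpoint, payload, and field names are purely illustrative and do not reflect the actual Men&Mice API:

```python
# Sketch of the REST pattern the playbooks wrapped: reserve an IP in the
# IPAM, then hand it to VM provisioning. Endpoint and fields are
# illustrative placeholders, not the real Men&Mice API.
import requests

IPAM_URL = "https://ipam.example.com/api"  # placeholder

def next_free_ip(session: requests.Session, subnet_id: str) -> str:
    """Reserve and return the next available IP address in the subnet."""
    resp = session.post(f"{IPAM_URL}/subnets/{subnet_id}/next-free-ip")  # illustrative
    resp.raise_for_status()
    return resp.json()["address"]

def build_vm_spec(hostname: str, ip: str, cpus: int, memory_mb: int) -> dict:
    """Assemble the VM definition later consumed by the vSphere playbook."""
    return {
        "name": hostname,
        "networks": [{"ip": ip, "netmask": "255.255.255.0"}],
        "hardware": {"num_cpus": cpus, "memory_mb": memory_mb},
    }
```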

AWS ECS | Build and Deployment via Bitbucket and Jenkins

The client needed CI/CD pipelines to build and deploy Node.js apps running as Docker containers. I did the following to achieve this automation:

• Used a Bitbucket Pipeline to build and deploy straight to non-production environments, such as testing and staging. The build stage builds a Docker image, tags it with a semantic versioning tag and the commit ID, and pushes it to JFrog Artifactory, a Docker repository. The second stage deploys this image to AWS ECS by setting up a new task definition, but only if the image version has changed.
• Handled deployments to production through a Jenkins job that connects to AWS ECS and follows the same process, i.e., it creates a new task definition only if a different image version is deployed. The Jenkins job displays a drop-down list of the available image versions on JFrog Artifactory, fetched on the fly via the JFrog REST API. The versions shown are those promoted after the image was tested and approved for production.

This Jenkins job is also helpful for manual app rollbacks and regular manual redeployments.
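
Conceptually, the "new task definition only on version change" step reduces to something like this boto3 sketch; the cluster and service names are placeholders, and error handling is omitted:

```python
# Sketch: register a new ECS task definition only when the image tag
# changes, then point the service at it (rolling update).
import boto3

ecs = boto3.client("ecs")

def deploy(cluster: str, service: str, new_image: str) -> str:
    """Deploy new_image to the service; return the task definition ARN used."""
    svc = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
    current = ecs.describe_task_definition(
        taskDefinition=svc["taskDefinition"]
    )["taskDefinition"]

    if current["containerDefinitions"][0]["image"] == new_image:
        return current["taskDefinitionArn"]  # same version: nothing to do

    # Copy the current definition, swap the image, drop read-only fields.
    spec = {
        k: v
        for k, v in current.items()
        if k not in (
            "taskDefinitionArn", "revision", "status", "requiresAttributes",
            "compatibilities", "registeredAt", "registeredBy", "deregisteredAt",
        )
    }
    spec["containerDefinitions"][0]["image"] = new_image
    new_arn = ecs.register_task_definition(**spec)["taskDefinition"]["taskDefinitionArn"]

    ecs.update_service(cluster=cluster, service=service, taskDefinition=new_arn)
    return new_arn
```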

Paradigms

DevOps, Continuous Integration (CI), Continuous Deployment, Continuous Delivery (CD), ETL, DevSecOps, Automation

Platforms

Linux, Red Hat Enterprise Linux, Ubuntu, CentOS, AWS IoT, Docker, Amazon EC2, Amazon Web Services (AWS), AWS ALB, Windows, Red Hat OpenShift, OpenStack, AWS Lambda

Storage

NAS Servers, JSON, Amazon S3 (AWS S3), Cloud Deployment, MySQL, Datadog

Other

Scripting, Operating Systems, Troubleshooting, IT Automation, Unix/Linux Virtualization, CI/CD Pipelines, DevOps Engineer, System Architecture, Documentation, System Administration, Shell Scripting, Infrastructure as Code (IaC), Configuration Management, GitOps, File Systems, AWS Certified Solution Architect, IT Project Management, Agile DevOps, IT Infrastructure, Analytical Thinking, Problem Management, Software Development, Teamwork, Process Flows, Capacity Planning, Out of Box Experience (OOBE), APIs, Cloud, Content Delivery Networks (CDN), Storage, Networks, Virtualization, AWS DevOps, Amazon API Gateway, Amazon Route 53, Elastic Load Balancers, Business Continuity & Disaster Recovery (BCDR), Amazon RDS, LDAP, IPAM (IP Address Management), Cloud Architecture, DNS, SSL, Transport Layer Security (TLS), Scaling, AWS Auto Scaling, Back-end, Load Balancers, Autoscaling, Cloud Services, Networking, Monitoring, eCommerce, ECS, File Servers, Cloudflare, Linux Kernel Modules, Physical-to-virtual (P2V), Cisco, ASA Firewalls, HAProxy, Microsoft Servers, Firewalls

Languages

Python 3, Ruby, HTML5, Bash Script, YAML, Python, SQL, JavaScript, PHP

Libraries/APIs

Jenkins Job DSL

Tools

Ansible, Jenkins, Terraform, Ansible Tower, Artifactory, VMware vSphere, Amazon Elastic Container Service (Amazon ECS), Amazon Virtual Private Cloud (VPC), Amazon Elastic Container Registry (ECR), AWS CLI, AWS IAM, Git, Bitbucket, AWS ELB, AWS Deployment, Cron, Red Hat Satellite, Puppet, NGINX, RabbitMQ, AWS CodeBuild, Amazon CloudWatch, Splunk

Education

1996 - 2002

Engineer's Degree in Information Systems

Universidad Católica Andrés Bello - Caracas, Venezuela

Certifications

FEBRUARY 2024 - FEBRUARY 2027

AWS Solutions Architect – Associate

Amazon Web Services

NOVEMBER 2013 - NOVEMBER 2018

LPIC-2

Linux Professional Institute

DECEMBER 2008 - NOVEMBER 2018

LPIC-1

Linux Professional Institute
