Andres Gonzalez, Developer in Sydney, New South Wales, Australia


Verified Expert in Engineering

Bio

Andres is a skilled engineer with over 15 years of experience across many facets of technology, including DevOps, web development, and system administration. At Amazon Web Services, he led the team that added new capacity to the AWS Edge network throughout Asia-Pacific. He is an avid believer in cloud-native solutions that are appropriately sized, cost-effective, and deliver outstanding performance. Andres enjoys working with organizations of all sizes, from startups to global enterprises.

Portfolio

Nutrino - Main
DevOps, Terraform, Continuous Integration (CI), Continuous Delivery (CD)...
Westpac
Azure, Azure Data Lake, Azure Synapse, Dedicated SQL Pool (formerly SQL DW)...
Nano Home Loans (Startup)
Amazon Web Services (AWS), Architecture, Security, Gatling Load Testing...

Experience

  • Linux - 10 years
  • CI/CD Pipelines - 10 years
  • Cost Reduction & Optimization (Cost-down) - 5 years
  • Data Analytics - 5 years
  • Python - 5 years
  • Containers - 5 years
  • Terraform - 4 years
  • ETL - 3 years

Availability

Part-time

Preferred Environment

Terraform, Python, Linux, Go, PyCharm

The most amazing...

...project I've worked on was a new public cloud region for a large public cloud provider. I was able to look behind the curtain and see how it all works.

Work Experience

DevOps Engineer

2021 - PRESENT
Nutrino - Main
  • Refactored existing infrastructure deployments from CloudFormation, Atmos, and manual provisioning to Terraform modules, including modules for EKS, EKS add-ons, Cognito, API Gateway, and networking (VPCs, subnets, peering, etc.).
  • Implemented a policy-as-code framework (Cloud Custodian) to identify and remediate insecurely deployed resources (e.g., publicly accessible S3 buckets) and to manage costs (e.g., deleting unattached EBS volumes); see the sketch below.
  • Set up CI/CD infrastructure required for running GitHub Actions with private runners hosted on EKS.
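
Cloud Custodian policies are written as YAML rules, so the code below is not the framework itself; it is a minimal boto3 sketch, assuming a hypothetical region and a dry-run default, of the same unattached-EBS-volume cleanup that the policy performed:

```python
import boto3

def delete_unattached_volumes(region="ap-southeast-2", dry_run=True):
    """Find EBS volumes in the 'available' state (attached to no instance)
    and delete them -- the cost cleanup described in the bullet above."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        for volume in page["Volumes"]:
            print(f"Unattached volume found: {volume['VolumeId']}")
            if not dry_run:  # flip the hypothetical flag to actually delete
                ec2.delete_volume(VolumeId=volume["VolumeId"])

if __name__ == "__main__":
    delete_unattached_volumes()  # dry run by default
```
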
Technologies: DevOps, Terraform, Continuous Integration (CI), Continuous Delivery (CD), Kubernetes, Amazon Web Services (AWS), Cloud Custodian, GitHub Actions, Python, Amazon EKS, Networking, Amazon EC2, AWS Key Management Service (KMS), API Gateways, AWS DevOps, PKI, AWS VPN, DevSecOps, Infrastructure as Code (IaC), Amazon Elastic Block Store (EBS), Amazon S3 (AWS S3), Helm, Cloud Engineering, AWS Auto Scaling, Docker Compose, GitHub Workflows

Principal Technologist – Data

2021 - 2023
Westpac
  • Designed patterns to provision and operate cloud data infrastructure services in Azure. These services include Azure Data Lake Storage Gen2, the Azure Synapse suite, Azure Machine Learning, Azure SQL, Azure Cognitive Services, and Microsoft Purview.
  • Led infrastructure engineering squads of 4 to 10 engineers working on endorsed pattern automation and building reusable Terraform modules for cloud infrastructure. All modules developed met security, risk, and operational requirements.
  • Presented technical designs in various design forums to gain the endorsements that allowed them to be consumed across the bank.
Technologies: Azure, Azure Data Lake, Azure Synapse, Azure SQL Data Warehouse, Dedicated SQL Pool (formerly SQL DW), Azure SQL, Terraform, Azure Machine Learning, Azure Landing Zones, Bitbucket, Azure PaaS, Machine Learning Operations (MLOps), PKI, DevSecOps, Infrastructure as Code (IaC), Amazon S3 (AWS S3), Azure DevOps, Helm, Cloud Engineering, Cloud FinOps, Financial Operations & Processes

Head of Tech Ops

2021 - 2021
Nano Home Loans (Startup)
  • Established CI/CD pipelines for deploying applications onto Heroku (development, staging, and production) using GitHub Actions and Heroku Pipelines.
  • Designed and implemented monitoring and alerting across the whole application stack, leveraging tools such as New Relic, CloudWatch alarms, Sentry, and Logtail. This was a critical capability required for the public launch; see the sketch below.
  • Developed load-testing suites using Gatling that simulated end-to-end user interactions, letting us identify bottlenecks in our infrastructure and remediate them before the public launch.
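
As a rough illustration of the CloudWatch piece of that monitoring stack, a minimal boto3 sketch of one such alarm might look like the following; the alarm name, threshold, and SNS topic are hypothetical:

```python
import boto3

def create_cpu_alarm(instance_id: str, sns_topic_arn: str, threshold: float = 80.0):
    """Alarm when an EC2 instance's average CPU stays above the threshold
    for two consecutive 5-minute periods, notifying an SNS topic."""
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",  # hypothetical naming scheme
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],  # e.g., a topic wired to the on-call channel
    )
```
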
Technologies: Amazon Web Services (AWS), Architecture, Security, Gatling Load Testing, Compliance, Heroku, Amazon Elastic Container Service (ECS), Azure, Startups, Amazon EC2, Amazon RDS, AWS Key Management Service (KMS), API Gateways, AWS DevOps, PKI, AWS VPN, DevSecOps, Infrastructure as Code (IaC), Amazon Elastic Block Store (EBS), Amazon Redshift, Amazon S3 (AWS S3), APIs, Cloud Engineering, Cloud FinOps, Financial Operations & Processes, Financial Options, Operational Finance, AWS Auto Scaling, Docker Compose, GitHub Workflows

Chief Engineer

2021 - 2021
Wakehub (Startup)
  • Designed the cloud architecture for a prototype mobile app that lets users register, create a profile, and track their progress across different categories. The GCP services used were Firebase, Cloud Functions, and Cloud Storage.
  • Performed all the back-end development in Go, which primarily consisted of writing Cloud Functions that would perform actions based on user activity. The tech stack included React Native, Redux, GitHub, and GitHub Actions.
  • Tracked and managed the development of the MVP for the project through GitHub and GitHub Issues.
Technologies: Firebase, Google Cloud, Google Cloud Platform (GCP), React Native, Go, Google Cloud Functions, Firebase Cloud Functions, Startups, Infrastructure as Code (IaC), Cloud Engineering, Docker Compose, GitHub Workflows

Senior DevOps Engineer – Cloud and Data Center Optimization

2017 - 2021
nbn
  • Developed dashboards using Tableau and Plotly Dash to visualize cloud spend. These dashboards were used to identify optimization opportunities, which then turned into automation jobs (Lambda and ECS tasks) to fix or clean up resources.
  • Created ETL pipelines to extract data from diverse sources (APIs, SQL databases, and flat files) that fed a Redshift data warehouse used for analytics jobs and other automation. The pipelines were written in both Python and Go.
  • Designed and built a Django-based data center asset management system. This system was used to track all physical infrastructure across many data centers.
  • Assisted in the development of a serverless data-archiving solution, moving terabytes of data from on-prem Oracle databases to cloud-native storage and dramatically lowering operating costs. Used technologies such as S3, Python, Glue, and Athena.
  • Consulted with engineering teams during the solution design phase to deliver cost-effective solutions to cloud infrastructure requirements.
  • Built various REST API endpoints to publish aggregated views of the data collected through the ETL pipelines, leveraging Django, Flask, and Go (Gorilla Mux). This data was used for financial budgeting, forecasting, and cost optimization; a minimal Flask sketch follows below.
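
To make the last bullet concrete, here is a minimal Flask sketch of such an endpoint; the warehouse host, table, and column names are all hypothetical (Redshift is reachable over the PostgreSQL wire protocol, hence psycopg2):

```python
import os

from flask import Flask, jsonify
import psycopg2

app = Flask(__name__)

@app.route("/api/costs/<account_id>")
def monthly_costs(account_id):
    """Return aggregated monthly spend for one account from the Redshift
    warehouse. Connection details and schema here are illustrative."""
    conn = psycopg2.connect(
        host="warehouse.example.internal",  # hypothetical endpoint
        dbname="costs",
        user="reporting",
        password=os.environ["WAREHOUSE_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        cur.execute(
            """SELECT date_trunc('month', usage_date) AS month,
                      SUM(cost_usd)                   AS total
               FROM billing_line_items
               WHERE account_id = %s
               GROUP BY 1
               ORDER BY 1""",
            (account_id,),
        )
        rows = cur.fetchall()
    return jsonify([{"month": str(m), "total_usd": float(t)} for m, t in rows])
```
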
Technologies: Python, Jenkins, Data Analytics, CI/CD Pipelines, ETL, Cost Reduction & Optimization (Cost-down), Cloud Native, Continuous Delivery (CD), Continuous Integration (CI), DevOps, Redshift, Shell Scripting, Go, Terraform, Linux, AWS CloudFormation, Containers, Amazon Web Services (AWS), PostgreSQL, SQL, Flask, Flask-RESTful, Windows, Data Migration, GitHub, GitLab, Tableau, Dashboards, MongoDB, Kubernetes, Microservices Architecture, Pandas, Scikit-learn, Plotly, NGINX, NumPy, Docker, Virtualization, Linux Administration, IT Infrastructure, Ubuntu, Bash Script, System Administration, Agile, IT Projects, Amazon Elastic Container Registry (ECR), Amazon Elastic Container Service (ECS), Amazon Route 53, AWS IAM, Redis, Amazon ElastiCache, Cloud Infrastructure, PyCharm, Scrum, JavaScript, Django, AWS Fargate, Amazon EC2, Amazon RDS, AWS Key Management Service (KMS), API Gateways, AWS DevOps, PKI, GitLab CI/CD, DevSecOps, Infrastructure as Code (IaC), Analytics, Amazon Elastic Block Store (EBS), Amazon Redshift, Amazon S3 (AWS S3), Helm, APIs, Cloud Engineering, Cloud FinOps, Costs, Financial Operations & Processes, Financial Options, Operational Finance, AWS Auto Scaling, Docker Compose

Technical Operations Manager

2016 - 2017
RedBalloon
  • Led the DevOps/infrastructure team, working closely with the engineering and product management teams to deliver customer features in a rapid, tested, and secure manner.
  • Automated the creation and teardown of the cloud infrastructure used by our engineering teams, ensuring the infrastructure was maintained as code, repeatable, and torn down when not required.
  • Reduced cloud OPEX by 15% per month by automating environment creation and teardown, right-sizing and decommissioning legacy systems, and purchasing reserved capacity at lower rates.
  • Created CI/CD pipelines containerizing applications across multiple frameworks (Node.js, Scala, Java, and .NET). These services were deployed to a self-hosted Docker server and Elastic Container Service in AWS.
Technologies: Amazon Web Services (AWS), Amazon Elastic Container Registry (ECR), Amazon Elastic Container Service (ECS), AWS CloudFormation, Python, Containers, Jenkins, Network Security, Amazon Route 53, Amazon Virtual Private Cloud (VPC), AWS IAM, Amazon ElastiCache, Redis, AWS Lambda, Content Delivery Networks (CDN), Cost Reduction & Optimization (Cost-down), DevOps, Continuous Integration (CI), Continuous Delivery (CD), Docker, Linux, CI/CD Pipelines, Data Analytics, ETL, PostgreSQL, Project Management, SQL, Windows, Data Migration, GitHub, GitLab, MongoDB, Microservices Architecture, NGINX, Serverless, Virtualization, Linux Administration, IT Infrastructure, Shell Scripting, Ubuntu, Bash Script, System Administration, Agile, IT Projects, Cloud Infrastructure, PyCharm, Scrum, JavaScript, DevOps Engineer, Bitbucket, Amazon EC2, Amazon RDS, AWS Key Management Service (KMS), API Gateways, AWS DevOps, PKI, Infrastructure as Code (IaC), Amazon Elastic Block Store (EBS), Amazon S3 (AWS S3), APIs, Cloud Engineering, Operational Finance, AWS Auto Scaling, Docker Compose, GitHub Workflows

Technical Project Lead (APAC)

2013 - 2016
Amazon Web Services (AWS)
  • Led a team based out of Sydney responsible for delivering projects that added new capacity or increased existing capacity of the AWS Edge network throughout Asia–Pacific (APAC).
  • Collaborated with other regional counterparts to create a streamlined project delivery process, which reduced delivery times for these projects.
  • Designed a custom high-density cabling solution with a fiber manufacturer, allowing much denser fiber-optic cabling to be provisioned at lower cost and with faster installation.
Technologies: Linux, Cloud Infrastructure, IT Projects, Data Analytics, Amazon Web Services (AWS), PostgreSQL, Project Management, SQL, Windows, Virtualization, Linux Administration, IT Infrastructure, Shell Scripting, Ubuntu, Bash Script, System Administration, Amazon Route 53, JavaScript, Operational Finance

Infrastructure Manager

2007 - 2012
Axe Group
  • Managed all internal IT infrastructure for the company, including physical end-user devices, network infrastructure, telephony systems, and virtualized hosting environments.
  • Managed data center relocation projects for infrastructure supporting internal systems as well as customer-hosted applications.
  • Migrated all applications and services running on bare-metal servers to virtualized environments running on VMware vSphere.
Technologies: Virtualization, Azure Active Directory, BIND DNS, DHCP, Bash, Perl, IT Infrastructure, Linux, CI/CD Pipelines, Data Analytics, Amazon Web Services (AWS), PostgreSQL, SQL, Windows, Windows Server, DevOps, Continuous Integration (CI), Continuous Delivery (CD), Shell Scripting, Bash Script, System Administration, Agile, IT Projects, Jenkins, Scrum, JavaScript, Linux Administration, Git

Cloud Consumption Data Warehouse

A cloud consumption data warehouse that provides aggregated public and private cloud consumption data to help identify waste and cost-optimization opportunities.

I was responsible for acquiring data from source systems and storing the data in a data warehouse. The complete process was automated and required no manual intervention.

Source systems:
• AWS/Azure billing APIs
• Oracle databases
• Utilization metrics: CPU, disk, memory, and network (to identify underutilized resources).

Technology:
• Go/Python to extract data from APIs and Oracle databases (the AWS extraction step is sketched below).
• Jenkins/Lambda/AWS Glue to orchestrate various jobs.
• AWS Redshift as the data warehouse.
• Tableau to create visualizations and reports.
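
As a sketch of the extraction step against the AWS billing APIs (the Azure and Oracle sources, and the Redshift load, are omitted), a Cost Explorer pull along these lines could feed the warehouse; the service-level grouping is an assumption:

```python
import boto3

def fetch_monthly_costs(start: str, end: str):
    """Pull month-by-service cost totals from the AWS Cost Explorer API.
    Dates are ISO strings, e.g., '2024-01-01'."""
    ce = boto3.client("ce")
    results, token = [], None
    while True:  # follow pagination until the API stops returning a token
        kwargs = dict(
            TimePeriod={"Start": start, "End": end},
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
        )
        if token:
            kwargs["NextPageToken"] = token
        response = ce.get_cost_and_usage(**kwargs)
        results.extend(response["ResultsByTime"])
        token = response.get("NextPageToken")
        if not token:
            return results
```
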

Serverless Event-driven Inventory

I wrote this blog article about creating a serverless inventory solution in AWS. The solution I discuss is built using the AWS Cloud Development Kit (AWS CDK). It's an entirely serverless solution that uses events to trigger actions.
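
The article covers the full build; as a taste of the pattern, a minimal CDK stack in Python that wires an S3 upload event to a Lambda handler might look like this (construct IDs and the asset path are hypothetical):

```python
from aws_cdk import Stack, aws_lambda as _lambda, aws_s3 as s3
from aws_cdk import aws_s3_notifications as s3n
from constructs import Construct

class InventoryStack(Stack):
    """Event-driven skeleton: objects landing in the bucket trigger a
    Lambda that records them in an inventory (handler code omitted)."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "InventoryBucket")
        handler = _lambda.Function(
            self, "InventoryHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # directory containing index.py
        )
        # Fire the Lambda on every new object -- the event-driven part.
        bucket.add_event_notification(
            s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(handler)
        )
```
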

Calculating WCU and RCU for Amazon DynamoDB

I wrote this blog article to explain how DynamoDB capacity units work and how to calculate capacity requirements. DynamoDB capacity sizing is not well understood, and this article aimed to provide a clear explanation of the topic; the core arithmetic is sketched below.
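
The arithmetic at the heart of the article follows the standard DynamoDB definitions: one RCU covers one strongly consistent read per second of up to 4 KB (eventually consistent reads cost half), and one WCU covers one write per second of up to 1 KB. A small worked sketch:

```python
import math

def rcus(reads_per_sec: int, item_size_kb: float,
         eventually_consistent: bool = False) -> int:
    """One RCU = one strongly consistent read/sec of up to 4 KB;
    eventually consistent reads need half the capacity."""
    per_read = math.ceil(item_size_kb / 4)
    total = reads_per_sec * per_read
    return math.ceil(total / 2) if eventually_consistent else total

def wcus(writes_per_sec: int, item_size_kb: float) -> int:
    """One WCU = one write/sec of up to 1 KB."""
    return writes_per_sec * math.ceil(item_size_kb)

# 100 strongly consistent reads/sec of 6 KB items -> 200 RCUs;
# 50 writes/sec of 2.5 KB items -> 150 WCUs.
print(rcus(100, 6), wcus(50, 2.5))
```
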

Education

2012 - 2016

Master's Degree in Internetworking

University of Technology - Sydney, Australia

Certifications

JANUARY 2020 - PRESENT

Certified Developer Associate

Amazon Web Services

MAY 2017 - PRESENT

PRINCE2 Agile Practitioner

Axelos

Libraries/APIs

Pandas, NumPy, Flask-RESTful, Scikit-learn

Tools

Terraform, Jenkins, GitHub, Git, Amazon Elastic Block Store (EBS), AWS CloudFormation, Amazon Elastic Container Registry (ECR), Amazon Elastic Container Service (ECS), AWS IAM, Amazon EKS, AWS Key Management Service (KMS), GitLab CI/CD, Helm, Docker Compose, AWS CodeBuild, AWS CodeCommit, Amazon Virtual Private Cloud (VPC), Amazon ElastiCache, PyCharm, GitLab, Tableau, Plotly, NGINX, AWS Cloud Development Kit (CDK), AWS Fargate, Bitbucket, Gatling Load Testing, Azure Machine Learning

Paradigms

DevOps, ETL, Microservices Architecture, Continuous Integration (CI), Continuous Delivery (CD), DevSecOps, Agile, Scrum, Test-driven Development (TDD), Azure DevOps

Platforms

Linux, Amazon Web Services (AWS), Amazon EC2, Azure, Kubernetes, Docker, Ubuntu, Google Cloud Platform (GCP), AWS Lambda, Cloud Native, Windows, Windows Server, Firebase, Heroku, Azure Synapse, Azure SQL Data Warehouse, Azure PaaS, Dedicated SQL Pool (formerly SQL DW)

Storage

Amazon S3 (AWS S3), PostgreSQL, Azure Active Directory, Azure SQL, Redis, MongoDB, Amazon DynamoDB, Redshift, Google Cloud

Languages

Python, SQL, Go, JavaScript, Bash, Perl, Bash Script

Frameworks

Django, Flask, React Native, Apache Spark

Industry Expertise

Project Management, Network Security

Other

System Administration, Cost Reduction & Optimization (Cost-down), Cloud Infrastructure, Linux Administration, DevOps Engineer, Amazon RDS, AWS DevOps, Infrastructure as Code (IaC), Cloud Engineering, Containers, CI/CD Pipelines, GitHub Actions, Startups, API Gateways, PKI, Analytics, Amazon Redshift, APIs, Cloud FinOps, Costs, Financial Operations & Processes, Financial Options, Operational Finance, AWS Auto Scaling, GitHub Workflows, Data Analytics, System Design, Network Design, Amazon API Gateway, AWS CodePipeline, IT Projects, Amazon Route 53, Content Delivery Networks (CDN), Data Migration, Dashboards, Serverless, Virtualization, BIND DNS, DHCP, IT Infrastructure, Shell Scripting, Google Cloud Functions, Firebase Cloud Functions, Architecture, Security, Compliance, Cloud Custodian, Networking, Azure Data Lake, Azure Landing Zones, Machine Learning Operations (MLOps), Policy as code (PaC), AWS VPN
