
Nikos Tzelvenzis

Verified Expert in Engineering

DevOps Engineer and Developer

Thessaloniki, Greece

Toptal member since July 15, 2020

Bio

Nikos is a DevOps engineer and infrastructure designer with several years of hands-on experience, including building a high-traffic platform serving at least 7 million users. On that project, the whole infrastructure scaled horizontally with load and traffic, and the main challenge was auditing and logging at that volume, since the logs were much larger than usual. It's safe to say that whatever your DevOps needs are, Nikos can handle them.

Portfolio

Schoox Inc
AWS, PHP, Kubernetes, Python, Amazon Elastic Block Store (EBS), Amazon Route 53...
Datajolt Limited
Python, AWS, DevOps, Flask, Kubernetes, Scaling, AWS ALB, Gunicorn, Terraform...
NebulOS, Inc.
Linux, Ubuntu, OpenStack, Data Centers, Ceph, Bash

Experience

Availability

Part-time

Preferred Environment

Firefox, Visual Studio Code (VS Code), Bash, Linux

The most amazing...

...project was building a scalable way to restore all files within two hours, without service interruption, after an accidental deletion of 20 million files in S3 storage.

Work Experience

Director of DevOps

2018 - PRESENT
Schoox Inc
  • Created a new secure network within the cloud infrastructure and helped the company achieve security certifications to take on new and more significant customers.
  • Designed and enabled a monitoring-and-audit platform that helped the support department answer customers' questions more confidently.
  • Transformed the build-and-deploy procedure. Previously, the company used git pull as its deployment method, which caused HTTP errors; now, nodes are drained of traffic before upgrades, so deployments produce zero HTTP errors (a simplified sketch follows this entry).
Technologies: AWS, PHP, Kubernetes, Python, Amazon Elastic Block Store (EBS), Amazon Route 53, Amazon EC2, Bash, Ansible, Docker, Jenkins, Agile Development, MongoDB, Linux, Firefox Development, SAML, AWS ELB, AWS CLI, Terraform, OpenVPN, Amazon EKS, Amazon Simple Queue Service (SQS), Continuous Integration (CI), Helm, Cloud Engineering, System Administration
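
The zero-error deployments described above depend on draining each node before it is upgraded. The profile doesn't describe the exact tooling, so the following is only a rough Python/boto3 sketch of a drain-deploy-restore cycle against an ALB target group; the target group ARN and instance ID are placeholders.

    import boto3

    # Hypothetical identifiers; the real target group and instances are not
    # part of this profile.
    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/app/abc123"
    INSTANCE_ID = "i-0123456789abcdef0"

    elbv2 = boto3.client("elbv2")

    def drain_node(target_group_arn: str, instance_id: str) -> None:
        """Deregister a node from the ALB target group and wait until it has
        finished draining, so no new HTTP requests reach it during the upgrade."""
        elbv2.deregister_targets(
            TargetGroupArn=target_group_arn,
            Targets=[{"Id": instance_id}],
        )
        elbv2.get_waiter("target_deregistered").wait(
            TargetGroupArn=target_group_arn, Targets=[{"Id": instance_id}]
        )

    def restore_node(target_group_arn: str, instance_id: str) -> None:
        """Put the upgraded node back into rotation and wait until it is healthy."""
        elbv2.register_targets(
            TargetGroupArn=target_group_arn,
            Targets=[{"Id": instance_id}],
        )
        elbv2.get_waiter("target_in_service").wait(
            TargetGroupArn=target_group_arn, Targets=[{"Id": instance_id}]
        )

    if __name__ == "__main__":
        drain_node(TARGET_GROUP_ARN, INSTANCE_ID)
        # ... run the build and deploy steps on the drained node here ...
        restore_node(TARGET_GROUP_ARN, INSTANCE_ID)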

DevOps Engineer

2023 - 2023
Datajolt Limited
  • Solved network issues in EKS deployments that used an AWS Application Load Balancer as the ingress resource.
  • Proposed a better approach to log retention for the application distributed across the EKS cluster, listening to the company's needs and suggesting two to three solutions based on budget.
  • Reviewed the existing security state of the infrastructure and evaluated data access based on the level of access granted to the AWS account.
Technologies: Python, AWS, DevOps, Flask, Kubernetes, Scaling, AWS ALB, Gunicorn, Terraform, Bash, Docker, AWS ELB, AWS CLI, Helm, Cloud Engineering

Senior DevOps Engineer

2021 - 2022
NebulOS, Inc.
  • Resolved node outages in a Ceph cluster; the cluster was unstable and all writes were disabled, so it was rebalanced.
  • Added four new nodes to the existing cluster and upgraded OpenStack to a newer version. Set up new virtual networks and began onboarding new clients on this cluster. Created and provisioned new OpenStack images for Windows.
  • Supported the existing Oracle VM cluster with other technical team members during US off-hours.
Technologies: Linux, Ubuntu, OpenStack, Data Centers, Ceph, Bash

DevOps Engineer

2021 - 2021
Piggy, LLC
  • Working as a security-focused developer, changed the application's access to AWS from static keys to IAM roles.
  • Designed and wrote a Lambda function to rotate the credentials of RDS databases automatically (a simplified sketch follows this entry).
  • Reviewed and fixed the security groups and access to the infrastructure.
Technologies: AWS, Terraform, CI/CD Pipelines, Automation, AWS RDS, Bash, Kubernetes, Linux, AWS CLI, Continuous Integration (CI), Cloud Engineering
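
The rotation code itself isn't included in the profile. As a simplified sketch only (a production setup would more likely follow the multi-step Secrets Manager rotation flow), a Lambda handler that rotates an RDS master password and stores it in Secrets Manager could look roughly like this; the database identifier and secret name are placeholders.

    import json
    import secrets
    import string

    import boto3

    # Placeholder names; not taken from the profile.
    DB_INSTANCE_ID = "app-database"
    SECRET_ID = "prod/app-database/master"

    rds = boto3.client("rds")
    secretsmanager = boto3.client("secretsmanager")

    def handler(event, context):
        """Generate a new password, apply it to the RDS instance, and store it."""
        alphabet = string.ascii_letters + string.digits
        new_password = "".join(secrets.choice(alphabet) for _ in range(32))

        # Update the master password on the RDS instance.
        rds.modify_db_instance(
            DBInstanceIdentifier=DB_INSTANCE_ID,
            MasterUserPassword=new_password,
            ApplyImmediately=True,
        )

        # Store the new credentials so applications can fetch them at runtime
        # through their IAM roles instead of static keys.
        secretsmanager.put_secret_value(
            SecretId=SECRET_ID,
            SecretString=json.dumps({"username": "admin", "password": new_password}),
        )
        return {"rotated": DB_INSTANCE_ID}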

DevOps AWS Engineer

2021 - 2021
ZYP.ONE GmbH
  • Designed and created the entire environment (production, staging, and testing) with Terraform and created networking with OpenVPN to access the environment without the need to enable public access.
  • Migrated the database to RDS, configured database backups, and unique dump exports to S3 with an ECS service.
  • Installed and configured the Jenkins CI/CD system with a master node and dynamic slave nodes.
  • Improved the automated pipeline to work better with CI and prepared the CD portion for testing and staging environments.
Technologies: AWS, Terraform, Docker, VPN, Networks, Jenkins, GitHub, Bash, Linux, AWS CLI, Continuous Integration (CI), Cloud Engineering

Software Developer | Oracle Database Specialist | Infrastructure Architect

2005 - 2018
Logismos SA
  • Deployed OpenStack as a private cloud solution for the company.
  • Centralized storage with Ceph, with the added benefit that maintenance could happen without interrupting services.
  • Designed and implemented a near-real-time integration service between two systems across multiple sites in different countries.
  • Installed and configured an Oracle database with a standby instance and uninterrupted automatic backups across two different hosts.
Technologies: Ceph, Oracle Development, OpenStack, Bash, Ansible, Python, Linux, Java, SAML, OpenVPN, System Administration

Recovery of 20 Million Deleted Files in Amazon S3 Buckets

After an accident, a process rapidly began deleting files from two buckets. The buckets had versioning enabled, but we needed a workflow to recover the data. To solve this, I created a mechanism with a queue and Lambda functions that found the affected objects and restored them by deleting their delete markers.
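
The core restore step is removing the delete markers left behind by the accidental deletion. The sketch below shows only that step in a single process, using boto3 with a placeholder bucket name; the actual solution fanned the work out through a queue and Lambda functions.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "affected-bucket"  # placeholder name

    def remove_delete_markers(bucket: str) -> int:
        """Undelete every object in a versioned bucket by removing the latest
        delete markers; the previous object versions then become current again."""
        restored = 0
        paginator = s3.get_paginator("list_object_versions")
        for page in paginator.paginate(Bucket=bucket):
            # Only markers that are currently "latest" are hiding an object.
            markers = [
                {"Key": m["Key"], "VersionId": m["VersionId"]}
                for m in page.get("DeleteMarkers", [])
                if m.get("IsLatest")
            ]
            if markers:
                # delete_objects accepts up to 1,000 keys per call, which matches
                # the default page size of list_object_versions.
                s3.delete_objects(Bucket=bucket, Delete={"Objects": markers})
                restored += len(markers)
        return restored

    if __name__ == "__main__":
        print(f"Restored {remove_delete_markers(BUCKET)} objects")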

Worker Daemon in Python

A Python application that runs as a Linux daemon, consuming messages from Amazon SQS queues and executing them in a PHP environment. The daemon posts metrics to Amazon CloudWatch and keeps audit logs in Amazon S3.
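
A stripped-down sketch of the consume-execute-report loop is shown below; the queue URL, audit bucket, metric namespace, and PHP entry point are placeholders rather than details from the project.

    import subprocess

    import boto3

    # Placeholder resources; not taken from the project.
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/worker-jobs"
    AUDIT_BUCKET = "worker-audit-logs"
    PHP_WORKER = "/var/www/worker.php"

    sqs = boto3.client("sqs")
    cloudwatch = boto3.client("cloudwatch")
    s3 = boto3.client("s3")

    def run_forever() -> None:
        """Long-poll SQS, hand each message body to a PHP worker, publish a
        CloudWatch metric, and keep the worker's output as an audit log in S3."""
        while True:
            response = sqs.receive_message(
                QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
            )
            for message in response.get("Messages", []):
                result = subprocess.run(
                    ["php", PHP_WORKER, message["Body"]],
                    capture_output=True, text=True,
                )
                cloudwatch.put_metric_data(
                    Namespace="WorkerDaemon",
                    MetricData=[
                        {"MetricName": "JobsProcessed", "Value": 1, "Unit": "Count"}
                    ],
                )
                s3.put_object(
                    Bucket=AUDIT_BUCKET,
                    Key=f"audit/{message['MessageId']}.log",
                    Body=(result.stdout or "(no output)").encode(),
                )
                sqs.delete_message(
                    QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
                )

    if __name__ == "__main__":
        run_forever()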

Scalable Logging Parser

Using the ELK stack, I built a scalable, queue-based way to parse files in Amazon S3 and populate an Elasticsearch cluster for auditing. The parsing runs in a Kubernetes cluster and uses S3 events and queues to make it asynchronous.
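
A condensed sketch of one parsing worker follows, assuming the S3 event notifications arrive via an SQS queue; the queue URL, Elasticsearch endpoint, index name, and per-line document shape are placeholders.

    import json
    from urllib.parse import unquote_plus

    import boto3
    from elasticsearch import Elasticsearch, helpers

    # Placeholder endpoints; not taken from the project.
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/log-objects"
    ES_HOST = "http://elasticsearch.logging.svc:9200"
    INDEX = "audit-logs"

    sqs = boto3.client("sqs")
    s3 = boto3.client("s3")
    es = Elasticsearch(ES_HOST)

    def process_one_batch() -> None:
        """Read S3 event notifications from SQS, fetch each new log object,
        and bulk-index its lines into Elasticsearch."""
        response = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
        for message in response.get("Messages", []):
            event = json.loads(message["Body"])
            for record in event.get("Records", []):
                bucket = record["s3"]["bucket"]["name"]
                # Keys in S3 event notifications are URL-encoded.
                key = unquote_plus(record["s3"]["object"]["key"])
                body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
                actions = (
                    {"_index": INDEX, "_source": {"line": line, "s3_key": key}}
                    for line in body.decode().splitlines()
                    if line.strip()
                )
                helpers.bulk(es, actions)
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
            )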

SSO Authentication with All Internal Services

Due to growth in headcount, a startup decided to set up a central user repository with Okta. I designed and implemented authentication, authorization, and internal portals with Okta (SAML and OpenID). The internal services included the AWS console, the AWS CLI, and all Dockerized/Kubernetes applications, such as schedulers and logging tools like Elasticsearch.

SOC2 Certification

I was a member of the team that took a company through its SOC 2 audit and passed it. The team put together the reports and procedures needed to cover the relevant security requirements and keep them tracked in audit trails. Finally, we applied any changes requested by the audit team.

Education

1998 - 2004

Bachelor's Degree in Electronics

Technical University - Thessaloniki, Greece

Certifications

SEPTEMBER 2019 - SEPTEMBER 2022

AWS Certified SysOps Administrator — Associate

AWS

Skills

Libraries/APIs

OpenID

Tools

Amazon Elastic Block Store (EBS), AWS ELB, AWS CLI, Terraform, AWS, Ansible, Helm, Amazon EKS, OpenVPN, Amazon Simple Queue Service (SQS), Jenkins, VPN, GitHub, ELK Stack

Frameworks

AWS, Flask

Paradigms

DevOps, Agile Development, Continuous Integration (CI), DevSecOps, Automation

Platforms

Amazon EC2, OpenStack, Kubernetes, Docker, AWS, Azure Design, Linux, Firefox Development, Oracle Development, Visual Studio Development, AWS ALB, Ubuntu, AWS Lambda

Storage

Amazon S3, Ceph, MongoDB, Data Centers

Languages

Python, Bash, PHP, Java, SAML

Other

Amazon Route 53, System Administration, Electronics, Microcontrollers, Circuit Design, OKTA, SSO Engineering, Security, Cloud Engineering, Networks, CI/CD Pipelines, AWS RDS, Scaling, Gunicorn
