Júlio César Batista, Developer in Blumenau - State of Santa Catarina, Brazil

Júlio César Batista

Verified Expert in Engineering

Back-end Developer

Blumenau - State of Santa Catarina, Brazil

Toptal member since March 15, 2022

Bio

Júlio is a software developer with 6+ years of experience delivering high-quality, medium to large-scale distributed systems. He follows well-established industry practices such as test-driven development (TDD), Agile management, and CI/CD. Júlio is eager to embrace new challenges, especially building distributed back-end apps that handle large amounts of data.

Portfolio

Shippo
Python, Go, APIs, Apache Kafka, NSQ.io, PostgreSQL, CI/CD Pipelines, FastAPI...
Terrastruct
Go, Graphs, PostgreSQL, React, WebSockets, Redis
Zyte
Python, Scrapy, Docker, Django, SQL, Mesos, RabbitMQ, Apache Kafka

Experience

Availability

Part-time

Preferred Environment

Visual Studio Code (VS Code), Bash, Python, SQL, Slack, Git, CI/CD Pipelines, Docker, Unit Testing, Agile

The most amazing...

...project I've worked on was a configurable broad crawler, driven by a configuration file in Amazon S3, that used a discovery and extraction strategy to scrape thousands of seed URLs daily.

Work Experience

Senior Software Engineer

2023 - PRESENT
Shippo
  • Built integrations with 3rd-party eCommerce platforms to import orders into the web app.
  • Built a platform to charge customers for API usage capable of processing hundreds of thousands of records per hour.
  • Improved internal tools, making it easier and faster for other engineers to build new services.
Technologies: Python, Go, APIs, Apache Kafka, NSQ.io, PostgreSQL, CI/CD Pipelines, FastAPI, Django, Stripe, Stripe API, React

Senior Software Engineer

2022 - 2023
Terrastruct
  • Optimized the graph layout engine, improving memory usage and speed by 2x.
  • Researched and built the engine for hierarchical graphs and sequence diagrams.
  • Improved internal tools, making debugging and profiling easier.
Technologies: Go, Graphs, PostgreSQL, React, WebSockets, Redis

Python Developer | Technical Team Leader | Senior Back-end Engineer

2018 - 2022
Zyte
  • Built various web crawling projects for customers worldwide, collecting thousands to millions of records daily or monthly.
  • Managed a global team of three people working on different projects.
  • Maintained cloud products that run millions of web crawling jobs (Docker containers) in our cluster every day.
Technologies: Python, Scrapy, Docker, Django, SQL, Mesos, RabbitMQ, Apache Kafka

Back-end Engineer

2017 - 2018
Inventti
  • Refactored an ERP's financial module to simplify the user workflow while handling the underlying complexity in UI and back-end routines.
  • Rebuilt the integration with a third-party invoicing service to keep states consistent during network outages, ensuring the system never processed duplicate requests.
  • Mentored an intern who was still at university and just entering the industry.
Technologies: C#, SQL, RabbitMQ, CI/CD Pipelines, Agile
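The duplicate-request guarantee described above boils down to deriving an idempotency key per request and skipping keys that have already completed. A minimal Python sketch of the pattern, with all names hypothetical (the actual Inventti system was in C# and persisted state in a database):

```python
# Hedged sketch of idempotent integration with an external invoicing API.
# InvoiceClient, issue, and the in-memory store are illustrative stand-ins.
import hashlib


class InvoiceClient:
    """Wraps a third-party invoicing call so that retries after a network
    outage never produce duplicate invoices: each request gets a
    deterministic idempotency key, and completed keys are replayed from
    the stored response instead of hitting the API again."""

    def __init__(self, send):
        self._send = send        # the actual network call (injected)
        self._completed = {}     # key -> stored response (a DB table in practice)

    def issue(self, order_id, amount):
        key = hashlib.sha256(f"{order_id}:{amount}".encode()).hexdigest()
        if key in self._completed:       # replay after a timeout or crash
            return self._completed[key]
        response = self._send(order_id, amount)
        self._completed[key] = response  # persist before acknowledging
        return response
```

Calling `issue` twice with the same order performs only one network call and returns the same response both times.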

Researcher | Graduate School Scholarship

2016 - 2018
IMAGO-UFPR Research Group
  • Published several research papers in relevant computer vision conferences in Brazil and abroad.
  • Built computer vision prototypes and POCs for face analysis.
  • Contributed to machine and deep learning research for facial expression analysis.
Technologies: Python, Scikit-learn, Scikit-image, OpenCV, PyTorch, Caffe, TensorFlow, Deep Learning, Machine Learning, Image Processing, Computer Vision

CS Undergraduate Capstone Project

https://github.com/ejulio/signa
This was my undergraduate capstone project: an experiment using Leap Motion and machine learning to classify Brazilian Sign Language signs. Within the project's scope, I worked directly with education professionals to validate the idea and identify potentially required refinements.

Open-source Contributions to Scrapy

I contributed to Scrapy, an open-source project maintained by Zyte, with code changes and reviews of pull requests and issues.

This project required adding support for exporting data to Google Cloud Storage (GCS), github.com/scrapy/scrapy/pull/3608. In this pull request, I followed the framework's conventions so that scraped data can be exported to GCS just as it is to Amazon S3. This lets developers export to GCS off the shelf, requiring only configuration and no extra code.
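In current Scrapy versions, that configuration-only export goes through the feed settings. A minimal sketch, with a placeholder bucket and project ID (not values from the original pull request):

```python
# settings.py sketch — hypothetical bucket/project names.
# GCS feed storage also requires the google-cloud-storage package.
FEEDS = {
    "gs://my-example-bucket/exports/items.json": {"format": "json"},
}
GCS_PROJECT_ID = "my-example-project"  # billing project for the GCS client
```

With these settings in place, Scrapy writes the scraped items to the GCS object at the end of the crawl, with no export code in the spider itself.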

It also involved moving the ItemLoader to a standalone library, github.com/scrapy/scrapy/pull/4516. I extracted the ItemLoader code from Scrapy into a new library, allowing developers to use it without depending on Scrapy.

Customizable Broad Crawler

https://www.zyte.com/blog/extract-articles-at-scale-designing-a-web-scraping-solution/
I worked on a broad crawler project to make it customizable by the customer. Through a configuration file in Amazon S3, customers could add or remove seed URLs, set the scraping depth (how many links to follow), and choose between rendering pages with JavaScript or fetching raw HTML. The project scraped thousands of seed URLs daily using a discovery and extraction strategy: one set of crawlers discovered links starting from the seed URLs, while other processes collected data from the discovered pages.
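The discovery/extraction split can be sketched in a few lines. This is a toy, single-process illustration of the strategy, not the production system: `fetch_links` and `extract_record` are hypothetical stand-ins for the real crawler callbacks, and `max_depth` mirrors the configurable depth option:

```python
# Toy sketch of a discovery-then-extraction broad crawl.
from collections import deque


def broad_crawl(seed_urls, fetch_links, extract_record, max_depth=1):
    """Discovery pass: breadth-first walk of links out from each seed,
    up to max_depth hops. Extraction pass: collect one record per
    discovered page. In production these passes ran as separate
    processes feeding each other through queues."""
    discovered = set(seed_urls)
    queue = deque((url, 0) for url in seed_urls)
    while queue:                                   # discovery
        url, depth = queue.popleft()
        if depth < max_depth:
            for link in fetch_links(url):
                if link not in discovered:
                    discovered.add(link)
                    queue.append((link, depth + 1))
    return [extract_record(url) for url in sorted(discovered)]  # extraction
```

Raising `max_depth` widens the crawl frontier without touching the extraction side, which is what made the depth customer-configurable.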
Education

2016 - 2018

Master's Degree in Computer Science

Federal University of Paraná (UFPR) - Curitiba, Paraná, Brazil

2010 - 2015

Bachelor's Degree in Computer Science

Regional University of Blumenau (FURB) - Blumenau, Santa Catarina, Brazil

Libraries/APIs

Scikit-learn, OpenCV, PyTorch, TensorFlow, React, NSQ.io, Stripe, Stripe API

Tools

Slack, Git, Mesos, RabbitMQ, Scikit-image

Languages

Python, Bash, SQL, C#, JavaScript, Go

Frameworks

Scrapy, Django, Caffe

Paradigms

Unit Testing, Agile, Parallel Programming

Platforms

Docker, Apache Kafka, Visual Studio Code (VS Code)

Storage

PostgreSQL, Redis

Other

CI/CD Pipelines, Deep Learning, Machine Learning, Image Processing, Computer Vision, Leap Motion, Algorithms, Computer Graphics, Software Development, Networking, Open Source, Web Scraping, Distributed Systems, Graphs, WebSockets, APIs, FastAPI
