Júlio César Batista
Verified Expert in Engineering
Back-end Developer
Júlio is a software developer with 6+ years of experience delivering high-quality, medium to large-scale distributed systems. He is a practitioner of well-known industry best practices, including test-driven development (TDD), Agile management, and CI/CD. Júlio is eager to embrace new challenges, especially building distributed back-end apps that handle large amounts of data.
Portfolio
Experience
Availability
Preferred Environment
Visual Studio Code (VS Code), Bash, Python, SQL, Slack, Git, CI/CD Pipelines, Docker, Unit Testing, Agile
The most amazing...
...project I've worked on was a configurable broad crawler that used Amazon S3 and a discovery-and-extraction strategy to scrape thousands of seed URLs daily.
Work Experience
Python Developer | Technical Team Leader | Senior Back-end Engineer
Zyte
- Built web crawling projects for customers worldwide, collecting thousands to millions of records daily or monthly.
- Managed a global team of three people working on different projects.
- Maintained cloud products that run millions of web crawling jobs (Docker containers) every day in our cluster.
Back-end Engineer
Inventti
- Refactored an ERP's financial module to simplify the workflow for users while handling the underlying complexity in UI and back-end routines.
- Rebuilt the integration with a third-party invoicing service to ensure consistent states during network outages, guaranteeing the system never processed duplicate requests that could leave data inconsistent.
- Mentored an intern who was still at university and just entering the industry.
Researcher | Grad School Scholarship
IMAGO-UFPR Research Group
- Published several research papers in relevant computer vision conferences in Brazil and abroad.
- Built computer vision prototypes and POCs for face analysis.
- Contributed to machine and deep learning research for facial expression analysis.
Experience
CS Undergraduate Capstone Project
https://github.com/ejulio/signa
Open Source Contributions to Scrapy
This project required adding support for exporting data to Google Cloud Storage (GCS), github.com/scrapy/scrapy/pull/3608. In this pull request, I followed the framework's practices so that scraped data could be exported to GCS as easily as to AWS S3. This allowed future developers to use GCS out of the box, requiring only some configuration and no explicit code.
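For illustration, using this feature is a matter of Scrapy project settings alone (a minimal sketch; the bucket and project names below are placeholders, not values from the actual project):

```python
# settings.py of a Scrapy project -- hypothetical bucket and project IDs.
# The gs:// scheme routes the feed through the GCS storage backend
# added in the pull request above.
FEEDS = {
    "gs://example-bucket/items.json": {"format": "json"},
}
GCS_PROJECT_ID = "example-gcp-project"
```

With these settings in place, running a spider uploads the exported items to the bucket without any additional code.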
It also involved moving the ItemLoader into a standalone library, github.com/scrapy/scrapy/pull/4516. I extracted the ItemLoader code from Scrapy into a new library, allowing developers to use it without depending on Scrapy.
Customizable Broad Crawler
https://www.zyte.com/blog/extract-articles-at-scale-designing-a-web-scraping-solution/
Skillset
Languages
Python, Bash, SQL, C#, JavaScript
Frameworks
Scrapy, Django, Caffe
Libraries/APIs
Scikit-learn, OpenCV, PyTorch, TensorFlow
Tools
Slack, Git, Mesos, RabbitMQ, Scikit-image
Paradigms
Unit Testing, Agile, Parallel Programming
Platforms
Docker, Apache Kafka, Visual Studio Code (VS Code)
Other
CI/CD Pipelines, Deep Learning, Machine Learning, Image Processing, Computer Vision, Leap Motion, Algorithms, Computer Graphics, Software Development, Networking, Open Source, Web Scraping, Distributed Systems
Education
Master's Degree in Computer Science
Federal University of Paraná (UFPR) - Curitiba, Paraná, Brazil
Bachelor's Degree in Computer Science
Regional University of Blumenau (FURB) - Blumenau, Santa Catarina, Brazil