Branko Fulurija, Developer in Višegrad, Republika Srpska, Bosnia and Herzegovina

Branko Fulurija

Verified Expert in Engineering

Data Engineer and Developer

Location
Višegrad, Republika Srpska, Bosnia and Herzegovina
Toptal Member Since
September 28, 2021

Branko is a data engineer who specializes in building big data platforms in the cloud. In addition to extensive experience in cloud architecture, data analytics, serverless solutions, and cost optimization, Branko has five AWS and two GCP certifications and is an award-winning competitive programmer and hackathon winner.

Portfolio

Microsoft
C++, C#, React, TypeScript, JavaScript, Algorithms
Educational Center Belgrade
Algorithms, Data Structures, Competitive Programming, Computer Science
Microsoft
C#, C++, Graphs, Algorithms, Monitoring, Big Data, Big Data Architecture

Experience

Availability

Part-time

Preferred Environment

Amazon Web Services (AWS), Google Cloud Platform (GCP), Apache Airflow, Docker, Git, Terminal, Serverless, Agile, Infrastructure as Code (IaC), JetBrains

The most amazing...

...thing I've developed is a petabyte-scale big data analytics platform.

Work Experience

Software Engineering Intern

2018 - 2019
Microsoft
  • Built a Microsoft Office add-in to help students practice math skills.
  • Combined and integrated services from multiple Microsoft Office products.
  • Assisted in shipping a feature to production that has thousands of users worldwide.
  • Implemented the front end and back end using C++, C#, TypeScript, and React.
Technologies: C++, C#, React, TypeScript, JavaScript, Algorithms

Programming Tutor

2016 - 2018
Educational Center Belgrade
  • Prepared high school students for national programming competitions.
  • Taught advanced computer science topics, such as dynamic programming, graph theory, and data structures.
  • Helped students win medals at international programming olympiads.
Technologies: Algorithms, Data Structures, Competitive Programming, Computer Science

Software Engineering Intern

2017 - 2017
Microsoft
  • Created an internal big data tool that provides insights about system health and performance.
  • Enabled users to discover performance bottlenecks, recover quickly from failures, and understand the system state through the tool's visualizations.
  • Used a tech stack that included C# and proprietary big data engines.
Technologies: C#, C++, Graphs, Algorithms, Monitoring, Big Data, Big Data Architecture

Coding Interview Jumpstart Online Course Creator for Udemy

Independently created and published a Udemy online course taken by 23,000+ students. The course covers foundational computer science concepts and explains the principles behind the algorithms most commonly asked about in interviews at top tech companies.

MTS Assistant | Hackathon Winner

A distributed, cloud-based analysis system that consumed and processed telecom data to create interactive data visualizations. The system provided personalized package recommendations based on clustering and detected network irregularities in real time. The telecom data was stored in Elasticsearch, with Kibana used for data visualizations and for machine learning jobs that detected anomalies.
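As a rough illustration of the ingestion side, the sketch below shows how a telecom measurement might be indexed into Elasticsearch for Kibana dashboards and anomaly-detection jobs to consume; the index name, document fields, and use of the Python elasticsearch client (8.x) are assumptions for illustration, not details from the project.

from datetime import datetime, timezone
from elasticsearch import Elasticsearch

# Connect to an Elasticsearch node (the address is an assumed placeholder).
es = Elasticsearch("http://localhost:9200")

# A single, hypothetical telecom measurement; field names are illustrative.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "cell_id": "BG-0042",
    "dropped_calls": 3,
    "throughput_mbps": 87.5,
}

# Index the document; Kibana visualizations and ML anomaly-detection jobs
# would then read from this index to surface irregularities in near real time.
es.index(index="telecom-events", document=event)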

US Accidents | Udacity Nanodegree Project

https://github.com/brfulu/us-accidents-data-engineering
This was the capstone project of the Udacity Data Engineering Nanodegree program. The idea was to create an optimized data lake that would enable users to analyze US accident data and determine the root causes of accidents. The main goal was to build an end-to-end data pipeline capable of processing large volumes of data: cleaning, transforming, and loading the data into an optimized data lake on AWS S3. The data lake consisted of logical tables partitioned by selected columns to reduce query latency.
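A minimal PySpark sketch of the partitioned data lake write described above; the S3 paths and column names are illustrative assumptions and are not taken from the repository.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("us-accidents-datalake").getOrCreate()

# Extract: read the raw accident records from S3 (path is a placeholder).
raw = spark.read.csv(
    "s3a://example-bucket/raw/us_accidents.csv", header=True, inferSchema=True
)

# Transform: drop rows without a timestamp and derive partition columns.
accidents = (
    raw.dropna(subset=["Start_Time"])
       .withColumn("year", F.year("Start_Time"))
       .withColumn("month", F.month("Start_Time"))
)

# Load: write the logical table partitioned by year and month to keep
# typical time-range queries fast.
(accidents.write
    .mode("overwrite")
    .partitionBy("year", "month")
    .parquet("s3a://example-bucket/datalake/accidents/"))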

Redshift Data Modeling | Udacity Nanodegree Project

https://github.com/brfulu/redshift-data-modeling
This project was part of the Udacity Data Engineering Nanodegree program. The task was to build an ETL pipeline that extracted data from AWS S3, staged it in Redshift, and transformed it into a set of dimensional tables ready for analytics consumption, enabling users to find insights into which songs certain users listened to. The output was a relational star schema in Redshift.
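A minimal sketch of the stage-then-transform flow, assuming psycopg2 and illustrative cluster, bucket, IAM role, and table names rather than the project's actual ones.

import psycopg2

# Connect to the Redshift cluster (all connection details are placeholders).
conn = psycopg2.connect(
    host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="...",
)
cur = conn.cursor()

# Stage raw JSON event logs from S3 into a Redshift staging table.
cur.execute("""
    COPY staging_events
    FROM 's3://example-bucket/log_data/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
    FORMAT AS JSON 'auto';
""")

# Transform staged rows into one dimensional table of the star schema.
cur.execute("""
    INSERT INTO users (user_id, first_name, last_name, gender, level)
    SELECT DISTINCT user_id, first_name, last_name, gender, level
    FROM staging_events
    WHERE user_id IS NOT NULL;
""")

conn.commit()
cur.close()
conn.close()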

Data Lake ETL with Spark | Udacity Nanodegree

https://github.com/brfulu/datalake-spark-etl
This project was part of the Udacity Data Engineering Nanodegree. The task was to move the data from a data warehouse to a data lake. The source data resided in AWS S3: a directory of JSON logs of user activity in the app and a directory of JSON metadata about the songs users listened to in the fictional app.

The requirements included building an ETL pipeline that extracted user data from AWS S3, processed it using Spark, and loaded the data back into AWS S3 as a set of dimensional tables, allowing the analytics team to continue finding insights into which songs their users were listening to. A sketch of these steps follows.
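A brief PySpark sketch of the extract-transform-load steps described above; the S3 paths and column names are assumptions for illustration, not the repository's actual layout.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("datalake-spark-etl").getOrCreate()

# Extract: read the raw song metadata JSON from S3 (path is a placeholder).
song_data = spark.read.json("s3a://example-bucket/song_data/*/*/*/*.json")

# Transform: build a songs dimensional table with one row per song.
songs_table = song_data.select(
    "song_id", "title", "artist_id", "year", "duration"
).dropDuplicates(["song_id"])

# Load: write the table back to S3 as Parquet, partitioned for analytics queries.
(songs_table.write
    .mode("overwrite")
    .partitionBy("year", "artist_id")
    .parquet("s3a://example-bucket/analytics/songs/"))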

Airflow Data Pipeline | Udacity Nanodegree Project

https://github.com/brfulu/airflow-data-pipeline
This project was part of the Udacity Data Engineering Nanodegree program. The task was to implement an automated and monitored ETL pipeline for loading data into a data warehouse. The pipeline was implemented with Apache Airflow. The source data resided in AWS S3 and needed to be loaded and processed in the target data warehouse in Amazon Redshift. The source datasets consisted of JSON logs of user activity in the application and JSON metadata about the songs the users listened to.
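A minimal Airflow DAG sketch of such a pipeline, assuming Airflow 2.x with the Amazon provider package; the connection IDs, bucket, and table names are illustrative assumptions rather than the project's actual configuration.

from datetime import datetime
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="s3_to_redshift_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    start = EmptyOperator(task_id="begin_execution")

    # Stage the raw JSON event logs from S3 into a Redshift staging table.
    stage_events = S3ToRedshiftOperator(
        task_id="stage_events",
        s3_bucket="example-bucket",
        s3_key="log_data/",
        schema="public",
        table="staging_events",
        copy_options=["FORMAT AS JSON 'auto'"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )

    end = EmptyOperator(task_id="end_execution")

    start >> stage_events >> end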

Languages

Python, Java, SQL, C++, JavaScript, C#, TypeScript

Frameworks

Hadoop, Presto, Spark

Tools

Apache Airflow, Google Cloud Dataproc, IntelliJ IDEA, PyCharm, Terraform, Ansible, Google Compute Engine (GCE), Google Kubernetes Engine (GKE), BigQuery, AWS Glue, Amazon Elastic MapReduce (EMR), Amazon Athena, AWS Step Functions, Git, Terminal, JetBrains, TeamCity, Grafana, AWS CloudFormation, Google Cloud Composer, Apache Beam, Cloud Dataflow, Tableau, Looker, Jenkins, Amazon CloudWatch, Kibana, Amazon QuickSight

Paradigms

ETL, Object-oriented Programming (OOP), Testing, Microservices, Business Intelligence (BI), DevOps, Agile, Data Science

Platforms

Amazon Web Services (AWS), Google Cloud Platform (GCP), AWS Lambda, Apache Kafka, Amazon EC2, Jupyter Notebook, Docker, Kubernetes

Storage

Data Pipelines, Databases, Amazon S3 (AWS S3), Data Lakes, Google Cloud Storage, Apache Hive, Redshift, PostgreSQL, Google Cloud SQL, Amazon DynamoDB, Elasticsearch, Amazon Aurora, Google Bigtable, JSON

Other

Data Warehousing, Software Engineering, Algorithms, Data Structures, Cloud Computing, Serverless, Data Engineering, Data Analytics, Data Processing, Data Modeling, Dataproc, Big Data, Competitive Programming, ELT, Graphs, Networking, Identity & Access Management (IAM), Google BigQuery, EMR, Amazon RDS, Streaming, Internet of Things (IoT), GraphDB, Amazon Neptune, Google Data Studio, Infrastructure as Code (IaC), CI/CD Pipelines, AWS DevOps, Monitoring, Prometheus, Amazon API Gateway, Cloud Architecture, AWS Cloud Architecture, Big Data Architecture, Data Quality, API Gateways, Computer Science, Amazon Kinesis, Machine Learning, Recommendation Systems

Libraries/APIs

PySpark, React

2016 - 2020

Bachelor's Degree in Computer Science

Faculty of Computing - Belgrade, Serbia

JUNE 2021 - JUNE 2024

GCP Associate Cloud Engineer

GCP

OCTOBER 2020 - OCTOBER 2022

GCP Professional Data Engineer

GCP

DECEMBER 2019 - PRESENT

Data Engineering Nanodegree

Udacity

NOVEMBER 2019 - SEPTEMBER 2022

AWS Certified Big Data - Specialty

AWS

MARCH 2019 - MARCH 2022

AWS Certified SysOps Administrator - Associate

AWS

JANUARY 2019 - JANUARY 2022

AWS Certified Developer - Associate

AWS

SEPTEMBER 2018 - MARCH 2022

AWS Certified Cloud Practitioner

AWS
