Burak Ercan, Developer in Ankara, Turkey

Burak Ercan

Verified Expert in Engineering

Bio

Burak is a senior computer vision engineer with a PhD and 13 years of industrial experience. As an expert in computer vision, deep learning, and software development with C++ and Python, he has worked on various projects at large defense companies and AI startups, demonstrating a broad range of technical and soft skills. Burak describes himself as a lifelong learner. His goal is to be part of impactful applied research and engineering where he can fully demonstrate his skills.

Portfolio

HAVELSAN
C++, Python, PyTorch, OpenCV, Robot Operating System (ROS), GStreamer, PCL...
ArgosAI
Computer Vision, Deep Learning, C++, OpenCV, Boost, Keras, TensorFlow, Python...
ASELSAN
C++, Embedded C, Embedded C++, Embedded Linux...

Experience

  • Software Engineering - 11 years
  • C++ - 11 years
  • Machine Learning - 7 years
  • PyTorch - 5 years
  • Computer Vision - 5 years
  • Python - 5 years
  • Deep Learning - 4 years
  • Event-based Vision - 3 years

Availability

Part-time

Preferred Environment

Ubuntu, PyCharm, PyTorch, Git

The most amazing...

...project I've worked on involved vision-based terrain mapping algorithms on a robotic ground vehicle, assisting autonomous navigation in complex outdoor environments.

Work Experience

Senior Computer Vision Engineer

2020 - 2024
HAVELSAN
  • Designed and developed computer vision-based solutions using Python and C++ for various projects, such as autonomous unmanned ground and aerial vehicles, a maritime patrol aircraft, and video analytics applications for smart cities.
  • Researched and created terrain traversability estimation algorithms that use multiple sensor modalities to navigate autonomous ground robots in unstructured outdoor environments.
  • Developed 3D surround view systems for automotive applications.
  • Investigated deep learning-based computer vision algorithms for autonomous driving.
  • Worked on face detection and recognition algorithms.
  • Contributed to object detection and tracking algorithms for unmanned aerial vehicles.
  • Explored and developed color correction methods for image and video stitching.
  • Designed real-time georeferencing solutions that use GNSS/INS and digital elevation model (DEM) data for various platforms.
  • Gained experience in algebraic multigrid methods for solving the large sparse linear systems that arise from discretized partial differential equations.
Technologies: C++, Python, PyTorch, OpenCV, Robot Operating System (ROS), GStreamer, PCL, Boost, Eigen, NumPy, SciPy, Git, Data Versioning, PyCharm, Visual Studio Code (VS Code), CMake, Computer Vision, Deep Learning, Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), Object Detection, Software Engineering, Embedded Linux, Embedded C++, Machine Learning, Image Processing, Conda, Object-oriented Programming (OOP), Object-oriented Design (OOD), Design Patterns, Linux, Docker, Open Neural Network Exchange (ONNX), LaTeX, NVIDIA Triton, NVIDIA TensorRT, Computer Vision Algorithms, Object Tracking, Artificial Neural Networks (ANN), Artificial Intelligence (AI), Stereoscopic Video, REST APIs, Pandas, Simultaneous Localization & Mapping (SLAM), Facial Recognition, Embedded Systems, Development, DeepStream SDK, NVIDIA Jetson, Image Recognition, C, Self-driving Cars

Senior Computer Vision Engineer and Product Owner

2019 - 2020
ArgosAI
  • Designed and developed deep learning-based computer vision algorithms for various applications, such as airfield safety and management, industrial inspection, and surveillance.
  • Worked extensively on the problem of detecting and classifying foreign object debris (FOD): any unwanted objects, such as loose hardware, tools, pavement fragments, and rocks, lying on the runway and apron areas of airports.
  • Served as the product owner of A-FOD, an automated remote foreign object debris (FOD) detection system that became ArgosAI's flagship product.
  • Designed and developed computer vision and AI algorithms for detecting specific vehicles on the apron areas of airfields, such as luggage carts, pushback tugs, and fuel and catering trucks, to gather analytics for airside operations.
  • Led proof-of-concept (PoC) projects for two of the largest airports in the world. Worked closely with the clients during the installation and test phases, addressing their needs and guiding them through a roadmap that best fit their operations.
  • Created, trained, and tested deep learning-based neural networks for classification, segmentation, and detection.
  • Developed an image processing pipeline with classical computer vision techniques for tasks such as image registration and foreground/background detection, using various technologies, including C++ STL, Boost, and OpenCV.
  • Wrote Python scripts to create, manage, process, and synthesize large image datasets.
  • Automated various tasks with Python scripts, such as end-to-end system testing.
Technologies: Computer Vision, Deep Learning, C++, OpenCV, Boost, Keras, TensorFlow, Python, Image Processing, Deep Neural Networks (DNNs), Object Detection, Software Engineering, CMake, Visual Studio Code (VS Code), Databases, Machine Learning, Conda, Object-oriented Programming (OOP), Object-oriented Design (OOD), Computer Vision Algorithms, Artificial Neural Networks (ANN), Artificial Intelligence (AI), Django, Windows Desktop Software, SQLite, Development, Image Recognition, C

Senior Software Engineer

2011 - 2018
ASELSAN
  • Designed, developed, and maintained real-time embedded software using C and C++ for various military electro-optic systems.
  • Developed software for various product families, including handheld thermal cameras, electro-optic sensor systems, and laser warning receiver systems for airborne, naval, and land platforms.
  • Led teams of three to five junior software developers and subcontractors; analyzed requirements, decomposed software projects into basic tasks and activities, and integrated software components.
  • Improved the quality and reusability of the software by utilizing object-oriented design principles, design patterns, modular programming, and unit testing.
  • Contributed to non-uniformity correction, bad pixel replacement, and image enhancement algorithms for thermal images.
  • Designed and developed reusable software modules for various components and hardware architectures.
  • Managed software through the entire lifecycle in CMMI level 3-compliant processes using requirements management, source control, and issue tracking tools.
Technologies: C++, Embedded C, Embedded C++, Embedded Linux, Real-time Operating System (RTOS), Image Processing, Microcontrollers, Algorithms, Software Engineering, CMake, Computer Vision, MATLAB, Electronics, Subversion (SVN), Object-oriented Programming (OOP), Object-oriented Design (OOD), Design Patterns, Linux, C#, DOORS, .NET, Optics, LaTeX, Windows Desktop Software, C#.NET, SQLite, Embedded Systems, Development, C

R&D Engineer (Part-time)

2010 - 2011
SDT Space & Defence Technologies
  • Implemented an FPGA-based SD card controller for recording and playing streaming videos that can be managed from a graphical user interface.
  • Programmed FPGAs in VHDL using Xilinx ISE, writing both behavioral and register-transfer-level (RTL) code.
  • Gained hands-on experience with serial peripheral interface (SPI), Secure Digital, and JPEG 2000 standards.
  • Developed graphical user interfaces using Borland C++.
  • Built applications with TI MSP430 microcontrollers utilizing communication and analog-to-digital converter modules.
Technologies: FPGA, VHDL, Microcontrollers, Software Engineering, Electronics, Image Processing, C++, Windows Desktop Software, Embedded Systems, Development, C

Projects

HyperE2VID: Improving Event-based Video Reconstruction via Hypernetworks

https://ercanburak.github.io/HyperE2VID.html
A research manuscript that has been submitted to the scientific journal "IEEE Transactions on Image Processing."

Abstract: Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, in this study, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks and dynamic convolutions to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Experimental results demonstrate that HyperE2VID achieves better reconstruction quality with fewer parameters and faster inference time than the state-of-the-art methods.
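The per-pixel adaptive filtering described in the abstract can be sketched in PyTorch as below. This is an illustrative toy, not the paper's actual architecture: the layer sizes, the single 1x1 hypernetwork head, and the generic "context" input are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelDynamicConv(nn.Module):
    """Sketch of the dynamic-convolution idea: a tiny hypernetwork predicts
    a KxK filter for every pixel from a context map, and that filter is
    applied to the feature map at the corresponding pixel."""

    def __init__(self, context_channels, k=3):
        super().__init__()
        self.k = k
        # Hypernetwork head: context map -> K*K filter logits per pixel.
        self.hyper = nn.Conv2d(context_channels, k * k, kernel_size=1)

    def forward(self, feat, context):
        b, c, h, w = feat.shape
        logits = self.hyper(context)                          # (B, K*K, H, W)
        weights = F.softmax(logits, dim=1)                    # normalize each filter
        patches = F.unfold(feat, self.k, padding=self.k // 2)  # (B, C*K*K, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        weights = weights.view(b, 1, self.k * self.k, h * w)
        # Weighted sum over the KxK neighborhood, with per-pixel weights.
        return (patches * weights).sum(dim=2).view(b, c, h, w)
```

Because the softmax makes each per-pixel filter sum to one, a constant feature map passes through unchanged away from the zero-padded border, which is a quick sanity check for this kind of layer.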

EVREAL: Towards a Comprehensive Benchmark and Analysis Suite for Event-based Video Reconstruction

https://ercanburak.github.io/evreal.html
A research paper that has been accepted to the 4th International Workshop on Event-based Vision at CVPR 2023.

Abstract: Event cameras are a new type of vision sensor that incorporates asynchronous and independent pixels, offering advantages over traditional frame-based cameras, such as high dynamic range and minimal motion blur. However, their output is not easily understandable by humans, making the reconstruction of intensity images from event streams a fundamental task in event-based vision. While recent deep learning-based methods have shown promise in video reconstruction from events, this problem is not completely solved yet. To facilitate comparison between different approaches, standardized evaluation protocols and diverse test datasets are essential. This paper proposes a unified evaluation methodology and introduces an open-source framework called EVREAL to comprehensively benchmark and analyze various event-based video reconstruction methods from the literature. Using EVREAL, we give a detailed analysis of the state-of-the-art methods for event-based video reconstruction and provide valuable insights into the performance of these methods under varying settings, challenging scenarios, and downstream tasks.
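As a small illustration of the kind of full-reference metric such a benchmark computes when comparing reconstructed frames against ground truth, here is a plain-NumPy PSNR; EVREAL's actual evaluation suite covers many more metrics and settings.

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reconstructed frame and
    its ground-truth frame, a standard full-reference quality metric."""
    mse = np.mean((pred.astype(float) - target.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.1 on images in [0, 1] yields 20 dB, since 10 * log10(1 / 0.01) = 20.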

ArgosAI A-FOD

https://argosai.com/foreign-object-detection/
An AI and computer vision-based automated remote FOD detection system.

I designed, trained, and tested deep learning-based neural networks for computer vision tasks, such as classification, segmentation, and detection, using Python, Keras, and TensorFlow. I also developed an image processing pipeline with classical computer vision techniques for image registration and foreground/background detection utilizing C++ STL, Boost, and OpenCV. Finally, I wrote Python scripts to create, manage, process, and synthesize large image datasets and automate various tasks, including end-to-end system testing.
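The classical foreground/background step mentioned above can be illustrated with a minimal running-average background model. This is a NumPy sketch with hypothetical threshold and update-rate values, not the production C++/OpenCV pipeline.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: slowly absorb the current frame into
    the background model (alpha is an assumed learning rate)."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Flag pixels that deviate from the background model by more than an
    assumed intensity threshold as foreground (e.g., debris candidates)."""
    return np.abs(frame.astype(float) - bg) > thresh
```

In practice such a mask is only a first stage; candidate regions would then be passed to a classifier to separate real FODs from shadows or illumination changes.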

HAVELSAN BARKAN Autonomous Medium-class UGV System

An autonomous unmanned ground vehicle.

I researched, designed, developed, and optimized terrain traversability estimation algorithms for this system. I also implemented C++ software, based on the Robot Operating System (ROS), that fuses information from multiple sensor modalities, such as cameras and LiDAR. The software runs in real time on NVIDIA Jetson AGX Xavier devices, enabling the navigation of an autonomous ground robot in unstructured outdoor environments. Finally, I implemented a ROS-based software package to visualize the terrain traversability map on the video coming from an onboard camera.
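A much-simplified sketch of the traversability idea is to bin 3D points into a 2D grid and threshold the height spread in each cell. All parameters below are hypothetical, and the fielded system fuses multiple sensor modalities rather than a single point cloud.

```python
import numpy as np

def traversability_grid(points, cell=0.5, extent=10.0, max_step=0.3):
    """Bin (x, y, z) points into a cell x cell grid and mark a cell
    traversable when its height spread (max z - min z) stays below an
    assumed maximum step height the vehicle can climb."""
    n = int(extent / cell)
    zmin = np.full((n, n), np.inf)
    zmax = np.full((n, n), -np.inf)
    for x, y, z in points:
        i, j = int(x / cell), int(y / cell)
        if 0 <= i < n and 0 <= j < n:
            zmin[i, j] = min(zmin[i, j], z)
            zmax[i, j] = max(zmax[i, j], z)
    observed = np.isfinite(zmin)           # cells with at least one return
    traversable = observed & ((zmax - zmin) < max_step)
    return traversable, observed
```

Flat ground produces small per-cell spreads and traversable cells, while an obstacle such as a rock or wall produces a large spread and a blocked cell.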

HAVELSAN ADVENT MARTI Air Command and Control System

A high-technology air command and control system.

I worked on ship detection algorithms and real-time georeferencing software that uses GNSS/INS and DEM data to accurately calculate the coordinates of detected objects.

HAVELSAN BAHA Autonomous Aircraft

A fixed-wing sub-cloud autonomous aircraft that can take off and land vertically.

I developed real-time georeferencing software that uses GNSS/INS and DEM data to accurately calculate the coordinates of targets that are tracked with a camera.
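The core georeferencing computation can be sketched as a ray march against a terrain model: starting from the camera position, step along the viewing ray until it drops below the terrain surface. This flat-earth toy with a callable height lookup only illustrates the idea; the real systems work with GNSS/INS-derived pose and gridded DEM data.

```python
import numpy as np

def georeference(cam_pos, ray_dir, dem_height, step=1.0, max_range=5000.0):
    """March a camera ray through a local frame until it falls below the
    terrain height returned by dem_height(x, y); return the hit point.
    `step` and `max_range` are assumed tuning parameters."""
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)              # unit viewing direction
    p = np.asarray(cam_pos, dtype=float)
    t = 0.0
    while t < max_range:
        q = p + t * d
        if q[2] <= dem_height(q[0], q[1]):  # ray dipped below the terrain
            return q
        t += step
    return None                             # no intersection within range
```

With a camera at 100 m altitude looking down at 45 degrees over flat terrain, the hit point lands roughly 100 m ahead of the camera, as expected from simple geometry.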

ASELSAN Laser Warning Receiver System

An advanced military electro-optic system that detects, classifies, and identifies hostile laser threats aimed at airborne, naval, or land platforms, and issues warnings quickly and with high sensitivity.

I was the leading software developer of the system for five years while the system was rigorously evaluated with qualification tests. During this time, I developed real-time embedded software in C to process sensory data and handle all the communication and control system requirements. I optimized laser threat detection and recognition algorithms and contributed to many software packages that support the main project, such as simulation and test applications.

Synthetic18K | Published Paper on the Use of Synthetic Person Images for Representation Learning

An article published in the scientific journal Signal Processing: Image Communication.

Here we first introduce a large-scale synthetic dataset called Synthetic18K. Then, we demonstrate that pretraining simple deep architectures on Synthetic18K for person re-identification and attribute recognition, followed by fine-tuning on real data, leads to significant improvements in prediction performance. I mainly worked on the training and validation of deep neural networks for the person re-identification and attribute recognition tasks and contributed to writing the original draft of the article.

A Real-time 3D Surround View Pipeline for Embedded Devices

https://www.scitepress.org/Papers/2022/107665/pdf/index.html
A paper published at the 17th International Conference on Computer Vision Theory and Applications (VISAPP 2022).

Here we propose an end-to-end algorithm pipeline for 3D surround view systems that work on embedded devices in real time. I mainly worked on the color correction part of the pipeline, designing and implementing a local color correction method based on Poisson image editing. I used an algebraic multigrid solver from AmgX (a GPU-accelerated library) to solve the resulting large, sparse system of linear equations. Then, I optimized this algorithm to work on resource-constrained embedded devices in real time. Finally, I contributed to the experimental setup and the writing of the paper.
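The gradient-domain step behind Poisson image editing reduces to solving a sparse linear system with the 5-point Laplacian. The sketch below uses SciPy's direct sparse solver on a toy 2D grid in place of the GPU AMG solver; the grid construction and boundary handling are a textbook formulation, not the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson_reconstruct(lap_target, boundary):
    """Recover interior pixel values whose 5-point Laplacian matches
    lap_target, with Dirichlet boundary values taken from `boundary`."""
    h, w = boundary.shape
    hi, wi = h - 2, w - 2  # interior grid size

    def lap1d(n):
        # 1D second-difference operator [1, -2, 1].
        return sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))

    # 2D 5-point Laplacian on the interior via Kronecker products.
    A = sp.kron(sp.eye(hi), lap1d(wi)) + sp.kron(lap1d(hi), sp.eye(wi))
    b = lap_target[1:-1, 1:-1].flatten().astype(float)
    # Move the known boundary neighbors to the right-hand side.
    b[:wi] -= boundary[0, 1:-1]          # top row
    b[-wi:] -= boundary[-1, 1:-1]        # bottom row
    b[::wi] -= boundary[1:-1, 0]         # left column
    b[wi - 1::wi] -= boundary[1:-1, -1]  # right column
    x = spsolve(A.tocsr(), b)
    out = boundary.astype(float).copy()
    out[1:-1, 1:-1] = x.reshape(hi, wi)
    return out
```

Because the system is exact, feeding in an image's own discrete Laplacian and boundary recovers the image; in a real pipeline the target Laplacian is instead blended from source gradients, and an iterative or multigrid solver replaces the direct factorization for large grids.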

ASELSAN Mini-TWS Thermal Weapon Sight

A light thermal weapon sight with excellent image quality and high range performance.

I was the system's leading software developer for four years. During this time, the project evolved from prototypes to thousands of devices produced and sold. I designed and developed embedded software in C++ covering a wide range of product features, including thermal image acquisition and enhancement, communication and control, built-in tests, on-screen displays, and button controls.

ASELSAN FALCONEYE | Electro-optical Sensor System

An integrated military electro-optic sensor system that incorporates a thermal and daylight camera, laser range finder, digital magnetic compass, and GPS-based locator.

I was the system's leading software developer for four years, creating embedded software while the system was continuously updated with new features and changing components. I designed the system's software with many features, including thermal image acquisition, enhancement, and target coordinate calculation.

ASELSAN YAMGOZ Enhanced 360 Degree Close Range Surveillance System

A compact, high-performance vision system designed for tanks and other armored vehicles.

The system provides enhanced maneuvering capability and situational awareness under severe conditions. It uses thermal and daylight sensors, covering a 360° field of view. I was the leading developer of the system's embedded software for two years. During this time, the system was designed and prototyped from scratch.

Education

2017 - 2024

PhD in Computer Science

Hacettepe University - Ankara, Turkey

2012 - 2015

Master's Degree in Engineering Management

Middle East Technical University - Ankara, Turkey

2006 - 2011

Bachelor's Degree in Electrical and Electronics Engineering

Middle East Technical University - Ankara, Turkey

Certifications

JULY 2022 - PRESENT

Deep Learning Engineer

Workera

OCTOBER 2021 - PRESENT

Master CMake for Cross-Platform C++ Project Building

Udemy

SEPTEMBER 2021 - PRESENT

CUDA Programming Masterclass with C++

Udemy

AUGUST 2021 - PRESENT

Complete Modern C++ (C++11/14/17)

Udemy

JULY 2020 - PRESENT

Certificate of Participation in the EEML2020 Summer School

EEML – Eastern European Machine Learning Summer School

JUNE 2018 - PRESENT

GPU Programming

Middle East Technical University (METU) Informatics Institute

FEBRUARY 2018 - PRESENT

Linux System Calls

UCanLinux

JULY 2017 - PRESENT

Embedded Linux

UCanLinux

APRIL 2017 - PRESENT

Embedded C and C++ Unit Testing

First Technology Transfer (FTT)

MARCH 2017 - PRESENT

Altera SoC Training

Doulos

SEPTEMBER 2016 - PRESENT

Programming in C#

Infopark

JULY 2015 - PRESENT

Using DOORS for Requirements Management

PROYA

APRIL 2014 - PRESENT

C6000 Embedded Design Workshop Using SYS/BIOS

Texas Instruments

FEBRUARY 2012 - PRESENT

Optical Design

ASELSAN

Libraries/APIs

PyTorch, NumPy, OpenCV, TensorFlow, Keras, Pandas, Scikit-learn, PCL, Eigen, SciPy, ZeroMQ, Matplotlib, REST APIs

Tools

PyCharm, Git, MATLAB, Slack, Subversion (SVN), LaTeX, DOORS, NVIDIA Jetson, CMake, You Only Look Once (YOLO), Open Neural Network Exchange (ONNX)

Languages

C++, Embedded C, Python, C, C++11, Embedded C++, C++14, C#, VHDL, C++17, SQL, C#.NET

Platforms

Visual Studio Code (VS Code), Ubuntu, Linux, Embedded Linux, NVIDIA CUDA, BeagleBone Black, Docker

Paradigms

Siamese Neural Networks, Object-oriented Programming (OOP), Object-oriented Design (OOD), Design Patterns, Unit Testing

Frameworks

Boost, GStreamer, Qt, Google Test, .NET, Django, Django REST Framework

Storage

Databases, SQLite

Other

Image Processing, Machine Learning, Computer Vision, Deep Learning, Research, Neural Networks, Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Microcontrollers, Software Engineering, Event-based Vision, Artificial Intelligence (AI), Software Development, Computer Vision Algorithms, Artificial Neural Networks (ANN), Facial Recognition, Embedded Systems, Development, Image Recognition, Operating Systems, Data Structures, Microprocessors, Object Detection, Real-time Operating System (RTOS), Robot Operating System (ROS), Mathematics, Conda, Embedded Software, NVIDIA Triton, Writing & Editing, Windows Desktop Software, Self-driving Cars, Digital Signal Processing, Electronics, Data Mining, Statistics, Monte Carlo Simulations, Economics, Strategic Planning, Decision Modeling, Operations Management, Optimization, Text Mining, Generative Adversarial Networks (GANs), Variational Autoencoders, Discrete Mathematics, Algorithms, Programming Languages, FPGA, Data Versioning, Data Science, Autonomous Robots, Robotics, Differential Equations, Partial Differential Equations, Autonomous Navigation, Linux System Calls, System-on-a-Chip (SoC), Optics, NVIDIA TensorRT, Grant Proposals, Object Tracking, Stereoscopic Video, Simultaneous Localization & Mapping (SLAM), Natural Language Processing (NLP), Generative Pre-trained Transformers (GPT), DeepStream SDK, Voxel
