Burak Ercan
Verified Expert in Engineering
Computer Vision Developer
Ankara, Turkey
Toptal member since September 27, 2022
Burak is a senior computer vision engineer with a PhD and 13 years of industrial experience. As an expert in computer vision, deep learning, and software development with C++ and Python, he's worked on various projects in large defense companies and AI startups, demonstrating a broad range of technical and soft skills. Burak describes himself as a lifelong learner. His goal is to be a part of impactful applied research and engineering where he can fully demonstrate his skills.
Portfolio
Experience
- Software Engineering - 11 years
- C++ - 11 years
- Machine Learning - 7 years
- PyTorch - 5 years
- Computer Vision - 5 years
- Python - 5 years
- Deep Learning - 4 years
- Event-based Vision - 3 years
Availability
Preferred Environment
Ubuntu, PyCharm, PyTorch, Git
The most amazing...
...project I've worked on involved vision-based terrain mapping algorithms running on a robotic ground vehicle, assisting autonomous navigation in complex outdoor environments.
Work Experience
Senior Computer Vision Engineer
HAVELSAN
- Designed and developed computer vision-based solutions using Python and C++ for various projects, such as autonomous unmanned ground and aerial vehicles, a maritime patrol aircraft, and video analytics applications for smart cities.
- Researched and created terrain traversability estimation algorithms that use multiple sensor modalities to navigate autonomous ground robots in unstructured outdoor environments.
- Developed 3D surround view systems for automotive applications.
- Investigated deep learning-based computer vision algorithms for autonomous driving.
- Worked on face detection and recognition algorithms.
- Contributed to object detection and tracking algorithms for unmanned aerial vehicles (UAVs).
- Explored and developed color correction methods for image and video stitching.
- Designed real-time georeferencing solutions that use GNSS/INS and digital elevation model (DEM) data for various platforms.
- Gained experience in algebraic multigrid methods for solving the large sparse linear systems that arise from discretizing partial differential equations.
Senior Computer Vision Engineer and Product Owner
ArgosAI
- Designed and developed deep learning-based computer vision algorithms for various applications, such as airfield safety and management, industrial inspection, and surveillance.
- Worked extensively on detecting and classifying foreign object debris (FOD): any unwanted objects lying on airport runways and aprons, such as loose hardware, tools, pavement fragments, and rocks.
- Contributed as the product owner of A-FOD, an automated remote foreign object debris (FOD) detection system. The system became the flagship product of ArgosAI.
- Designed and developed computer vision and AI algorithms for detecting specific vehicles in the apron areas of airfields, such as baggage carts, pushback tugs, and fuel and catering trucks, to gather analytics for airside operations.
- Led proof-of-concept (PoC) projects for two of the largest airports in the world. Worked closely with the clients during the installation and test phases, addressing their needs and guiding them through a roadmap that best fit their operations.
- Created, trained, and tested deep learning-based neural networks for classification, segmentation, and detection.
- Developed an image processing pipeline with classical computer vision techniques for tasks such as image registration and foreground/background detection, using technologies including the C++ STL, Boost, and OpenCV.
- Wrote Python scripts to create, manage, process, and synthesize large image datasets.
- Automated various tasks with Python scripts, such as end-to-end system testing.
Senior Software Engineer
ASELSAN
- Designed, developed, and maintained real-time embedded software using C and C++ for various military electro-optic systems.
- Developed software for various product families, including handheld thermal cameras, electro-optic sensor systems, and laser warning receiver systems for airborne, naval, and land platforms.
- Led teams of three to five junior software developers and subcontractors, analyzing requirements, decomposing software projects into basic tasks and activities, and integrating software components.
- Improved the quality and reusability of the software by utilizing object-oriented design principles, design patterns, modular programming, and unit testing.
- Contributed to non-uniformity correction, bad pixel replacement, and image enhancement algorithms for thermal images.
- Designed and developed reusable software modules for various components and hardware architectures.
- Managed software through the entire lifecycle in CMMI level 3-compliant processes using requirements management, source control, and issue tracking tools.
R&D Engineer (Part-time)
SDT Space & Defence Technologies
- Implemented an FPGA-based SD card controller for recording and playing streaming videos that can be managed from a graphical user interface.
- Programmed FPGAs in VHDL using Xilinx ISE, coding at both the behavioral and register-transfer levels (RTL).
- Gained hands-on experience with the Serial Peripheral Interface (SPI), Secure Digital (SD), and JPEG 2000 standards.
- Developed graphical user interfaces using Borland C++.
- Built applications with TI MSP430 microcontrollers utilizing communication and analog-to-digital converter modules.
Experience
HyperE2VID: Improving Event-based Video Reconstruction via Hypernetworks
https://ercanburak.github.io/HyperE2VID.html
Abstract: Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, in this study, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks and dynamic convolutions to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Experimental results demonstrate that HyperE2VID achieves better reconstruction quality with fewer parameters and faster inference time than the state-of-the-art methods.
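To illustrate the core idea of per-pixel adaptive filtering, the sketch below shows a minimal dynamic convolution in PyTorch: a tiny hypernetwork (here, a single 1x1 convolution, a simplification rather than the HyperE2VID architecture) predicts a KxK filter for every pixel from a context tensor, and the filters are applied to the feature map via unfolding. All layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 3  # dynamic filter size

class DynamicConv(nn.Module):
    def __init__(self, channels, ctx_channels):
        super().__init__()
        # Hypernetwork: maps context features to K*K filter weights per pixel.
        self.hyper = nn.Conv2d(ctx_channels, K * K, kernel_size=1)

    def forward(self, x, ctx):
        b, c, h, w = x.shape
        filters = F.softmax(self.hyper(ctx), dim=1)   # (B, K*K, H, W), normalized
        patches = F.unfold(x, K, padding=K // 2)      # (B, C*K*K, H*W)
        patches = patches.view(b, c, K * K, h * w)
        filters = filters.view(b, 1, K * K, h * w)
        out = (patches * filters).sum(dim=2)          # apply each pixel's own filter
        return out.view(b, c, h, w)

# Example: 16-channel features modulated by an 8-channel context map.
layer = DynamicConv(channels=16, ctx_channels=8)
y = layer(torch.randn(2, 16, 32, 32), torch.randn(2, 8, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```

Unlike a standard convolution, whose weights are fixed after training, the weights here are regenerated at every pixel and every forward pass from the context, which is what lets the network adapt to sparse, varying event data.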
EVREAL: Towards a Comprehensive Benchmark and Analysis Suite for Event-based Video Reconstruction
https://ercanburak.github.io/evreal.html
Abstract: Event cameras are a new type of vision sensor that incorporates asynchronous and independent pixels, offering advantages over traditional frame-based cameras, such as high dynamic range and minimal motion blur. However, their output is not easily understandable by humans, making the reconstruction of intensity images from event streams a fundamental task in event-based vision. While recent deep learning-based methods have shown promise in video reconstruction from events, this problem is not completely solved yet. To facilitate comparison between different approaches, standardized evaluation protocols and diverse test datasets are essential. This paper proposes a unified evaluation methodology and introduces an open-source framework called EVREAL to comprehensively benchmark and analyze various event-based video reconstruction methods from the literature. Using EVREAL, we give a detailed analysis of the state-of-the-art methods for event-based video reconstruction and provide valuable insights into the performance of these methods under varying settings, challenging scenarios, and downstream tasks.
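As a hedged sketch of what a reconstruction benchmark measures, the snippet below computes one standard full-reference metric, PSNR, between a reconstructed frame and its ground-truth reference. EVREAL itself covers many more metrics, datasets, and downstream tasks; this is only a minimal stand-in.

```python
import numpy as np

def psnr(reconstruction: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reconstruction - reference) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
rec = np.full((4, 4), 0.1)        # uniform error of 0.1, so MSE = 0.01
print(round(psnr(rec, ref), 2))   # 20.0
```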
ArgosAI A-FOD
https://argosai.com/foreign-object-detection/
I designed, trained, and tested deep learning-based neural networks for computer vision tasks, such as classification, segmentation, and detection, using Python, Keras, and TensorFlow. I also developed an image processing pipeline with classical computer vision techniques for image registration and foreground/background detection utilizing C++ STL, Boost, and OpenCV. Finally, I wrote Python scripts to create, manage, process, and synthesize large image datasets and automate various tasks, including end-to-end system testing.
HAVELSAN BARKAN Autonomous Medium-class UGV System
I researched, designed, developed, and optimized terrain traversability estimation algorithms for this system. I also implemented C++ software, based on the Robot Operating System (ROS), that fuses information from multiple sensor modalities, such as cameras and LiDAR. The software runs in real time on NVIDIA Jetson AGX Xavier devices, enabling an autonomous ground robot to navigate unstructured outdoor environments. Finally, I implemented a ROS-based software package to overlay the terrain traversability map on video from an onboard camera.
HAVELSAN ADVENT MARTI Air Command and Control System
I worked on ship detection algorithms and real-time georeferencing software that uses GNSS/INS and DEM data to accurately calculate the coordinates of detected objects.
HAVELSAN BAHA Autonomous Aircraft
I developed real-time georeferencing software that uses GNSS/INS and DEM data to accurately calculate the coordinates of targets that are tracked with a camera.
ASELSAN Laser Warning Receiver System
I was the leading software developer of the system for five years while the system was rigorously evaluated with qualification tests. During this time, I developed real-time embedded software in C to process sensory data and handle all the communication and control system requirements. I optimized laser threat detection and recognition algorithms and contributed to many software packages that support the main project, such as simulation and test applications.
Synthetic18k | Published Paper on the Use of Synthetic Person Images for Representation Learning
Here we first introduce a large-scale synthetic dataset called Synthetic18K. Then, we demonstrate that pretraining simple deep architectures on Synthetic18K for person re-identification and attribute recognition, followed by fine-tuning on real data, leads to significant improvements in prediction performance. I mainly worked on the training and validation of deep neural networks for the person re-identification and attribute recognition tasks, and I also contributed to writing the original draft of the article.
A Real-time 3D Surround View Pipeline for Embedded Devices
https://www.scitepress.org/Papers/2022/107665/pdf/index.html
Here we propose an end-to-end algorithm pipeline for 3D surround view systems that run on embedded devices in real time. I mainly worked on the color correction part of the pipeline, designing and implementing a local color correction method based on Poisson image editing. I used an algebraic multigrid solver from AmgX (a GPU-accelerated library) to solve the resulting large, sparse system of linear equations, then optimized the algorithm to run in real time on resource-constrained embedded devices. Finally, I contributed to the experimental setup and the paper writing.
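Poisson image editing reduces local color correction to solving a discrete Poisson equation, i.e., a large sparse linear system A x = b over the image grid. The sketch below solves a small such system with SciPy's direct sparse solver as an illustrative stand-in for the GPU-accelerated AmgX solver used in the actual pipeline; the grid size and right-hand side are toy values, not the real pipeline's data.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 16  # grid side length (the real system spans full-resolution images)
# Standard 5-point Laplacian on an n x n grid with Dirichlet boundary conditions.
I = sp.identity(n)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(n, n)), I)).tocsr()
b = np.ones(n * n)  # stands in for the guidance-field divergence in Poisson editing
x = spsolve(A, b)   # solve A x = b; AmgX would do this iteratively on the GPU
print(x.shape)      # (256,)
```

For full-resolution images, the matrix has millions of rows, which is why an algebraic multigrid solver (whose cost scales roughly linearly with the number of unknowns) is preferred over a direct factorization on embedded hardware.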
ASELSAN Mini-TWS Thermal Weapon Sight
I was the system's leading software developer for four years. During this time, the project evolved from prototypes to thousands of devices produced and sold. I designed and developed embedded software in C++ to cover a wide range of product features, including thermal image acquisition and enhancement, communication and control, built-in tests, on-screen displays, and button controls.
ASELSAN FALCONEYE | Electro-optical Sensor System
I was the system's leading software developer for four years, creating embedded software while the system was continuously updated with new features and changing components. I designed and developed the system's software, covering many features, including thermal image acquisition and enhancement and target coordinate calculation.
ASELSAN YAMGOZ Enhanced 360 Degree Close Range Surveillance System
The system provides enhanced maneuvering capability and situational awareness under severe conditions. It uses thermal and daylight sensors, covering a 360° field of view. I was the leading developer of the system's embedded software for two years. During this time, the system was designed and prototyped from scratch.
Education
PhD in Computer Science
Hacettepe University - Ankara, Turkey
Master's Degree in Engineering Management
Middle East Technical University - Ankara, Turkey
Bachelor's Degree in Electrical and Electronics Engineering
Middle East Technical University - Ankara, Turkey
Certifications
Deep Learning Engineer
Workera
Master CMake for Cross-Platform C++ Project Building
Udemy
CUDA Programming Masterclass with C++
Udemy
Complete Modern C++ (C++11/14/17)
Udemy
Certificate of Participation in the EEML2020 Summer School
EEML – Eastern European Machine Learning Summer School
GPU Programming
Middle East Technical University (METU) Informatics Institute
Linux System Calls
UCanLinux
Embedded Linux
UCanLinux
Embedded C and C++ Unit Testing
First Technology Transfer (FTT)
Altera SoC Training
Doulos
Programming in C#
Infopark
Using DOORS for Requirements Management
PROYA
C6000 Embedded Design Workshop Using SYS/BIOS
Texas Instruments
Optical Design
ASELSAN
Skills
Libraries/APIs
PyTorch, NumPy, OpenCV, TensorFlow, Keras, Pandas, Scikit-learn, PCL, Eigen, SciPy, ZeroMQ, Matplotlib, REST APIs
Tools
PyCharm, Git, MATLAB, Slack, Subversion (SVN), LaTeX, DOORS, NVIDIA Jetson, CMake, You Only Look Once (YOLO), Open Neural Network Exchange (ONNX)
Languages
C++, Embedded C, Python, C, C++11, Embedded C++, C++14, C#, VHDL, C++17, SQL, C#.NET
Platforms
Visual Studio Code (VS Code), Ubuntu, Linux, Embedded Linux, NVIDIA CUDA, BeagleBone Black, Docker
Paradigms
Siamese Neural Networks, Object-oriented Programming (OOP), Object-oriented Design (OOD), Design Patterns, Unit Testing
Frameworks
Boost, GStreamer, Qt, Google Test, .NET, Django, Django REST Framework
Storage
Databases, SQLite
Other
Image Processing, Machine Learning, Computer Vision, Deep Learning, Research, Neural Networks, Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Microcontrollers, Software Engineering, Event-based Vision, Artificial Intelligence (AI), Software Development, Computer Vision Algorithms, Artificial Neural Networks (ANN), Facial Recognition, Embedded Systems, Development, Image Recognition, Operating Systems, Data Structures, Microprocessors, Object Detection, Real-time Operating System (RTOS), Robot Operating System (ROS), Mathematics, Conda, Embedded Software, NVIDIA Triton, Writing & Editing, Windows Desktop Software, Self-driving Cars, Digital Signal Processing, Electronics, Data Mining, Statistics, Monte Carlo Simulations, Economics, Strategic Planning, Decision Modeling, Operations Management, Optimization, Text Mining, Generative Adversarial Networks (GANs), Variational Autoencoders, Discrete Mathematics, Algorithms, Programming Languages, FPGA, Data Versioning, Data Science, Autonomous Robots, Robotics, Differential Equations, Partial Differential Equations, Autonomous Navigation, Linux System Calls, System-on-a-Chip (SoC), Optics, NVIDIA TensorRT, Grant Proposals, Object Tracking, Stereoscopic Video, Simultaneous Localization & Mapping (SLAM), Natural Language Processing (NLP), Generative Pre-trained Transformers (GPT), DeepStream SDK, Voxel