Osman Furkan Kınlı, Developer in Istanbul, Turkey

Osman Furkan Kınlı

Verified Expert in Engineering

Bio

Furkan is a computer vision researcher and Ph.D. candidate in computer science at Özyeğin University. He has worked in data and ML-related roles at Türk Telekom and eBay Turkey (GittiGidiyor). He co-founded a startup called T-Fashion, where he developed numerous state-of-the-art computer vision and time-series analysis algorithms. Furkan also worked for Fishency, an international startup that monitors health conditions in aquatic ecosystems, and developed several recognition algorithms for them.

Portfolio

T-Fashion
Computer Vision Algorithms, Linux, Python, OpenCV, Torch, Kornia, Detectron2...
Özyeğin University
Computer Vision Algorithms, Deep Learning, Torch, Kornia, OpenCV, Python, C++...
Alexander de Cadenet
Image Processing, Computer Vision, Image Generation, Python

Experience

  • Python - 7 years
  • Linux - 7 years
  • Image Processing - 6 years
  • Torch - 6 years
  • Computer Vision Algorithms - 6 years
  • Deep Learning - 6 years
  • Artificial Intelligence (AI) - 5 years
  • Kornia - 3 years

Availability

Part-time

Preferred Environment

Linux, Python, Torch, Kornia, Detectron2, Stable Diffusion, Deep Learning, Image Processing

The most amazing...

...thing I've developed is the whole AI pipeline for T-Fashion, which is now the heart of the T-Fashion business.

Work Experience

Co-founder | AI Lead

2019 - PRESENT
T-Fashion
  • Developed different computer vision algorithms to understand the categories, attributes, and color histograms of clothing items seen in social media images.
  • Conducted a research project that aimed to improve the performance of state-of-the-art deep learning architectures on mainstream computer vision tasks by removing social media filters from the images as a pre-processing step.
  • Employed numerous image processing algorithms to enhance the quality of social media images and to eliminate very low-quality images or those with visual artifacts.
Technologies: Computer Vision Algorithms, Linux, Python, OpenCV, Torch, Kornia, Detectron2, Computer Vision, Image Processing, Stable Diffusion, Generative Adversarial Networks (GANs)

Research Assistant

2019 - PRESENT
Özyeğin University
  • Assisted in several different computer science courses such as data structures and algorithms, programming languages, and data science in Python.
  • Conducted and contributed to research projects in areas including computer vision, natural language processing, data science, and negotiation.
  • Supported the computer science department with various administrative tasks.
Technologies: Computer Vision Algorithms, Deep Learning, Torch, Kornia, OpenCV, Python, C++, Linux, Computer Vision, Image Processing, Machine Learning, Scikit-learn, Google Cloud

Computer Vision/Image Generation Engineer

2024 - 2024
Alexander de Cadenet
  • Prepared and manipulated the raw asset files to make them ready for blending.
  • Developed software that blends and customizes the asset files to generate final images that meet the client's expectations.
  • Delivered 10,000 high-quality NFT images to the client.
Technologies: Image Processing, Computer Vision, Image Generation, Python

Computer Vision Research Engineer

2020 - 2021
Fishency Innovation
  • Developed a program that automatically generates training and validation datasets from real-time videos for different vision tasks like detection, segmentation, tracking, and keypoint estimation.
  • Conducted different research projects for solving various vision tasks in the aquatic ecosystem by applying state-of-the-art deep learning strategies to that domain.
  • Developed the whole AI pipeline that processes the videos from the aquatic ecosystem and infers some statistics from the visual data.
Technologies: Computer Vision Algorithms, Deep Learning, Torch, Kornia, OpenCV, Detectron2, Linux, Python, Computer Vision, Image Processing

Machine Learning Engineer

2018 - 2019
GittiGidiyor (eBay Türkiye)
  • Developed a program that crawls relevant user data to generate training and validation data for machine learning models.
  • Conducted several experiments on product ranking models using different machine learning algorithms.
  • Deployed the best-performing ML models to A/B test against the traditional product ranking algorithms.
Technologies: Python, Spark, Scikit-learn, Spark ML, Google Cloud, Machine Learning

Software Engineer

2018 - 2018
Türk Telekom
  • Helped create SAP BI reports of millions of customers and thousands of customer groups.
  • Developed a web application for the business intelligence team to display, search, and filter SAP BI reports.
  • Completed an internal certification program called Türk Telekom Akademi.
Technologies: Java, Spring, SAP Business Intelligence (BI), SAP BusinessObjects (BO)

Experience

Deterministic Neural Illuminant Mapping for Efficient Auto-white Balance Correction

https://github.com/birdortyedi/DeNIM
Auto-white balance (AWB) correction is a critical operation in image signal processors (ISPs) for accurate and consistent color correction across various illumination scenarios. This paper presents a novel and efficient AWB correction method that processes high-resolution images at least 35 times faster than the current state-of-the-art methods, with equivalent or superior performance. Inspired by deterministic color style transfer, our approach introduces deterministic illumination color mapping, leveraging learnable projection matrices for both the canonical illumination form and the AWB-corrected output. It involves feeding high-resolution images and their corresponding latent representations into a mapping module to derive a canonical form, followed by another mapping module that maps the pixel values to those of the corrected version. The strategy is resolution-agnostic and enables seamless integration of any pre-trained AWB network as the backbone. Our method provides an efficient deep learning-based AWB correction solution, promising real-time, high-quality color correction for digital imaging applications.
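
To make the mapping idea concrete, here is a minimal PyTorch sketch (not the actual DeNIM implementation; the module name, latent size, and the way the 3x3 matrix is regressed are illustrative assumptions). A small head predicts a per-image color projection matrix that is applied to every pixel, and two such mappings are chained to reach the canonical form and then the AWB-corrected output.

import torch
import torch.nn as nn

class DeterministicColorMapping(nn.Module):
    """Illustrative mapping module: regresses a 3x3 color projection matrix
    from a latent code and applies it to every pixel of the image."""

    def __init__(self, latent_dim=256):
        super().__init__()
        self.to_matrix = nn.Linear(latent_dim, 9)  # 9 entries of a 3x3 matrix

    def forward(self, image, latent):
        # image: (B, 3, H, W), latent: (B, latent_dim)
        b, c, h, w = image.shape
        m = self.to_matrix(latent).view(b, 3, 3)        # (B, 3, 3) projection
        pixels = image.flatten(2)                       # (B, 3, H*W)
        mapped = torch.bmm(m, pixels).view(b, c, h, w)  # per-pixel color mapping
        return mapped.clamp(0.0, 1.0)

# Two mappings chained: input -> canonical illumination form -> AWB-corrected output.
to_canonical = DeterministicColorMapping()
to_corrected = DeterministicColorMapping()
image = torch.rand(1, 3, 1024, 1024)  # high-resolution input
latent = torch.rand(1, 256)           # placeholder for the latent code of a pre-trained AWB backbone
corrected = to_corrected(to_canonical(image, latent), latent)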

Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization

https://github.com/birdortyedi/efdm-pytorch
In this reproducibility study, we present our results and experience while replicating the paper titled Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization. In real-world scenarios, feature distributions are usually far more complicated than Gaussian, so matching only their mean and standard deviation may not represent them fully. The paper introduces a novel strategy to exactly match the histograms of image features via the sort-matching algorithm in a computationally feasible way. We were able to reproduce most of the results presented in the original paper, both qualitatively and quantitatively.
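
For intuition, a minimal PyTorch sketch of the sort-matching step is shown below (a simplified rendering of the idea, not the released code; the tensor shapes and the straight-through trick follow common practice rather than the exact implementation).

import torch

def exact_feature_distribution_matching(content, style):
    """Sort-matching sketch: give the content feature map the exact empirical
    distribution (histogram) of the style feature map, channel by channel."""
    # content, style: (B, C, H, W) feature maps of identical shape
    b, c, h, w = content.shape
    content_flat = content.reshape(b, c, -1)
    style_flat = style.reshape(b, c, -1)

    # Rank of every content value, and the style values in sorted order.
    ranks = content_flat.argsort(dim=-1).argsort(dim=-1)
    style_sorted, _ = style_flat.sort(dim=-1)

    # Replace each content value with the style value of the same rank.
    matched = style_sorted.gather(-1, ranks)

    # Straight-through estimator so gradients still flow through the content path.
    output = content_flat + (matched - content_flat).detach()
    return output.reshape(b, c, h, w)

stylized = exact_feature_distribution_matching(torch.randn(2, 64, 16, 16),
                                               torch.randn(2, 64, 16, 16))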

Modeling the Lighting in Scenes as Style for Auto White-balance Correction

https://github.com/birdortyedi/lighting-as-style-awb-correction
Style may refer to different concepts (e.g., painting style, hairstyle, texture, color, or filter) depending on how the feature space is formed. In this work, we propose the novel idea of interpreting the lighting in single- and multi-illuminant scenes as a form of style. To verify this idea, we introduce an enhanced auto white-balance (AWB) method that models the lighting in single- and mixed-illuminant scenes as the style factor. Our AWB method does not require an illumination estimation step; instead, it contains a network that learns to generate the weighting maps of the images with different WB settings. The proposed network utilizes the style information extracted from the scene by a multi-head style extraction module. AWB correction is completed after blending these weighting maps with the scene. Experiments on single- and mixed-illuminant datasets demonstrate that our proposed method achieves promising correction results compared to recent work, showing that the lighting in scenes with multiple illuminations can be modeled by the concept of style.
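
The final blending step can be sketched in a few lines of PyTorch (an illustrative simplification; the function name, the number of WB presets, and the source of the weighting maps are assumptions, since in the actual method a style-aware network predicts them).

import torch

def blend_wb_renders(renders, weight_logits):
    """Blend renderings of the same scene under different WB settings using
    per-pixel weighting maps predicted by a network."""
    # renders: (B, K, 3, H, W) - the scene rendered with K different WB settings
    # weight_logits: (B, K, H, W) - raw weighting maps from the network
    weights = torch.softmax(weight_logits, dim=1).unsqueeze(2)  # (B, K, 1, H, W)
    return (weights * renders).sum(dim=1)                       # (B, 3, H, W)

renders = torch.rand(2, 3, 3, 256, 256)     # e.g., tungsten / daylight / shade renderings
weight_logits = torch.rand(2, 3, 256, 256)  # placeholder for the network's output
corrected = blend_wb_renders(renders, weight_logits)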

Patch-wise Contrastive Style Learning for Instagram Filter Removal

https://github.com/birdortyedi/cifr-pytorch
Image-level corruptions and perturbations degrade the performance of CNNs on different downstream vision tasks. Social media filters are one of the most common sources of such corruptions and perturbations in real-world visual analysis applications. The negative effects of these distracting factors can be alleviated by recovering the original images, with their pure style, for inference on the downstream vision tasks. Assuming these filters substantially inject additional style information into social media images, we can formulate recovering the original versions as a reverse style transfer problem. We introduce the Contrastive Instagram Filter Removal Network (CIFR), which enhances this idea for Instagram filter removal by employing a novel multi-layer, patch-wise contrastive style learning mechanism. Experiments show that our proposed strategy produces better qualitative and quantitative results than previous studies. Finally, we present the inference outputs and a quantitative comparison of filtered and recovered images on localization and segmentation tasks to reinforce the main motivation for this problem.
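
A patch-wise contrastive (InfoNCE-style) objective captures the core mechanism; the sketch below is an illustration rather than the exact CIFR loss, and the function name, patch count, and temperature are assumptions. Each output-patch embedding is pulled toward the clean patch at the same location and pushed away from patches at other locations.

import torch
import torch.nn.functional as F

def patchwise_contrastive_loss(query_patches, positive_patches, temperature=0.07):
    """InfoNCE over patch embeddings: positives share a location, all other
    locations act as negatives."""
    # query_patches, positive_patches: (N, D) patch embeddings
    q = F.normalize(query_patches, dim=-1)
    p = F.normalize(positive_patches, dim=-1)
    logits = q @ p.t() / temperature                    # (N, N) similarities
    targets = torch.arange(q.size(0), device=q.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)

loss = patchwise_contrastive_loss(torch.randn(64, 128), torch.randn(64, 128))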

Instagram Filter Removal from Fashionable Images

https://github.com/birdortyedi/instagram-filter-removal-pytorch
Social media images are generally transformed by filtering to obtain a more aesthetically pleasing appearance; however, convolutional neural networks (CNNs) generally fail to interpret an image and its filtered version as the same in the visual analysis of social media images.

We introduced the Instagram Filter Removal Network (IFRNet) to mitigate the effects of image filters for social media analysis applications. To achieve this, we assumed that any filter applied to an image substantially injects additional style information into it, and we considered this a reverse style transfer problem.

The visual effects of filtering can be directly removed by adaptively normalizing external style information at each encoder level. Experiments demonstrate that IFRNet outperforms all compared methods in quantitative and qualitative comparisons and removes the visual effects to a great extent. Additionally, we present the filter classification performance of our proposed model and analyze dominant color estimation on the images unfiltered by each of the compared methods.
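
The adaptive normalization idea can be sketched with a generic AdaIN-style block (for illustration only, not the exact IFRNet module; the class name and style-code dimension are assumptions): encoder features are instance-normalized and then re-modulated with scale and shift parameters predicted from an external style code.

import torch
import torch.nn as nn

class AdaptiveNormalization(nn.Module):
    """Normalize encoder features, then re-scale and re-shift them with
    parameters predicted from an external style code."""

    def __init__(self, channels, style_dim=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(style_dim, channels * 2)

    def forward(self, features, style_code):
        # features: (B, C, H, W), style_code: (B, style_dim)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        normalized = self.norm(features)
        return normalized * (1 + scale[..., None, None]) + shift[..., None, None]

block = AdaptiveNormalization(channels=64)
out = block(torch.randn(1, 64, 32, 32), torch.randn(1, 128))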

A Benchmark for Inpainting of Clothing Images with Irregular Holes

https://github.com/birdortyedi/fashion-image-inpainting
Fashion image understanding is an active research field with a large number of practical applications in industry. Despite its potential impact on intelligent fashion analysis systems, clothing image inpainting has not yet been extensively examined; to address this, we presented an extensive benchmark of clothing image inpainting on well-known fashion datasets.

Furthermore, we introduced a dilated version of partial convolutions, which efficiently derives the mask update step, and empirically showed that the proposed method reduces the number of layers required to form fully transparent masks. Experiments show that dilated partial convolutions (DPConv) improve quantitative inpainting performance compared to the other inpainting strategies; they perform better when the mask covers 20% or more of the image.

You can find more information in the following paper: https://arxiv.org/pdf/2007.05080.pdf.
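
The mechanism behind DPConv can be illustrated with a single simplified layer in PyTorch (a sketch under the stated assumptions, not the benchmark code; the class name and default dilation are placeholders): the convolution only sees valid pixels, its output is renormalized by how many valid pixels fall into each window, and the mask update marks a location as valid once any valid pixel lies within the dilated receptive field.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedPartialConv2d(nn.Module):
    """Partial convolution with dilation: convolve over valid pixels only,
    renormalize by the number of valid pixels per window, update the mask."""

    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__()
        padding = dilation * (kernel_size // 2)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, dilation=dilation, bias=False)
        # Fixed all-ones kernel used to count valid pixels in each window.
        self.register_buffer("ones_kernel",
                             torch.ones(1, 1, kernel_size, kernel_size))
        self.padding = padding
        self.dilation = dilation

    def forward(self, x, mask):
        # x: (B, C, H, W); mask: (B, 1, H, W), 1 = valid pixel, 0 = hole
        valid_count = F.conv2d(mask, self.ones_kernel,
                               padding=self.padding, dilation=self.dilation)
        out = self.conv(x * mask)
        out = out * (self.ones_kernel.numel() / valid_count.clamp(min=1.0))
        new_mask = (valid_count > 0).float()  # mask update step
        return out * new_mask, new_mask

layer = DilatedPartialConv2d(3, 32)
out, updated_mask = layer(torch.rand(1, 3, 64, 64), torch.ones(1, 1, 64, 64))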

Description-aware Fashion Image Inpainting with CNNs in a Coarse-to-fine Manner

https://github.com/birdortyedi/description-aware-fashion-inpainting
Inpainting a particular missing region in an image is a challenging vision task, and promising improvements on this task have been achieved with the help of recent developments in vision-related deep learning studies. Although it may directly impact the decisions of AI-based fashion analysis systems, only a limited number of image inpainting studies have been conducted in the fashion domain.

This study proposes a multi-modal, generative deep learning approach for filling in the missing parts of fashion images by constraining visual features with textual features extracted from image descriptions. Our model comprises four main blocks: a textual feature extractor, a coarse image generator guided by the textual features, a fine image generator that enhances the coarse output, and global and local discriminators that improve the refined outputs.

Several experiments conducted on the FashionGen dataset with different combinations of neural network components show that our multi-modal approach can generate visually plausible patches to fill the missing parts in the images.
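
A compact skeleton shows how the four blocks fit together; everything below (layer sizes, the GRU text encoder, the tiny convolutional stacks) is a placeholder for illustration rather than the actual architecture, and the discriminators are omitted since they are only used during training.

import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Placeholder textual feature extractor for the image description."""
    def __init__(self, vocab_size=10000, embed_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, tokens):                  # tokens: (B, T)
        _, hidden = self.gru(self.embed(tokens))
        return hidden[-1]                       # (B, embed_dim) description feature

class CoarseGenerator(nn.Module):
    """Fills the hole coarsely, guided by the textual feature."""
    def __init__(self, text_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4 + text_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, masked_image, mask, text_feat):
        b, _, h, w = masked_image.shape
        text_map = text_feat[..., None, None].expand(b, -1, h, w)
        return self.net(torch.cat([masked_image, mask, text_map], dim=1))

class FineGenerator(nn.Module):
    """Refines the coarse output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, coarse):
        return self.net(coarse)

# Coarse-to-fine forward pass.
text_feat = TextEncoder()(torch.randint(0, 10000, (1, 16)))
coarse = CoarseGenerator()(torch.rand(1, 3, 128, 128),
                           torch.ones(1, 1, 128, 128), text_feat)
refined = FineGenerator()(coarse)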

Fashion Image Retrieval with Capsule Networks

https://github.com/birdortyedi/image-retrieval-with-capsules
In this study, we investigate the in-shop clothing retrieval performance of densely-connected capsule networks with dynamic routing. To achieve this, we propose a triplet-based design of capsule network architecture with two different feature extraction methods. In our design, stacked convolutional (SC) and residual-connected (RC) blocks are used to form the input of capsule layers.

Experimental results show that both of our designs outperform all variants of the baseline study, FashionNet, without relying on landmark information. Moreover, compared to the SOTA architectures for clothing retrieval, our proposed triplet capsule networks achieve comparable recall rates with only half the parameters.
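
The triplet objective that drives retrieval can be sketched as follows (the embedding network here is a trivial placeholder standing in for the capsule-based SC/RC extractors, and the margin value is an assumption): two images of the same clothing item form the anchor-positive pair, and an image of a different item serves as the negative.

import torch
import torch.nn as nn
import torch.nn.functional as F

triplet_loss = nn.TripletMarginLoss(margin=0.3)

def retrieval_step(embed, anchor_img, positive_img, negative_img):
    """Pull same-item embeddings together, push different-item embeddings apart."""
    anchor = F.normalize(embed(anchor_img), dim=-1)
    positive = F.normalize(embed(positive_img), dim=-1)
    negative = F.normalize(embed(negative_img), dim=-1)
    return triplet_loss(anchor, positive, negative)

# Placeholder embedding network standing in for the capsule feature extractor.
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
loss = retrieval_step(embed,
                      torch.rand(8, 3, 64, 64),
                      torch.rand(8, 3, 64, 64),
                      torch.rand(8, 3, 64, 64))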

Education

2019 - 2021

Doctorate Degree in Computer Science

Özyeğin University - İstanbul, Turkey

2018 - 2019

Master's Degree in Computer Science

Özyeğin University - İstanbul, Turkey

2013 - 2018

Bachelor's Degree in Computer Science

Özyeğin University - İstanbul, Turkey

Certifications

FEBRUARY 2018 - PRESENT

Deep Learning Specialization

Coursera

FEBRUARY 2018 - PRESENT

Convolutional Neural Networks

Coursera

JANUARY 2018 - PRESENT

Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

Coursera

JANUARY 2018 - PRESENT

Structuring Machine Learning Projects

Coursera

Skills

Libraries/APIs

PyTorch, OpenCV, Kornia, Scikit-learn, Spark ML

Languages

Python, C++

Platforms

Linux, macOS

Frameworks

Spark

Storage

Google Cloud

Other

Torch, Computer Vision, Image Processing, Computer Vision Algorithms, Deep Learning, Generative Adversarial Networks (GANs), Artificial Intelligence (AI), Generative Artificial Intelligence (GenAI), Image Generation, Detectron2, Machine Learning, Stable Diffusion, Statistics, Digital Filters, Color Science
