Jiri Pokorny

Concurrency Developer in Prague, Czech Republic

Member since April 9, 2021
Jiri started programming at a young age and is a programmer at heart. Over time, he has grown into an experienced senior developer currently focused on big data processing. He strongly believes in writing clean and maintainable code, and while he enjoys solving complex problems, Jiri strives to keep solutions as simple as possible.





Prague, Czech Republic



Preferred Environment

Linux, IntelliJ IDEA, Scala, JVM, Big Data

The most amazing...

...thing I've discovered was functional programming, which has opened my eyes and made my code much more reliable, readable, and concise.


  • Senior Software Developer

    2021 - 2021
    Open Bean
• Developed raw data ingestion using Scala, AWS S3, and AWS Lambda.
    • Designed a raw data format versatile enough for the required use cases.
    • Architected an index data structure and library to speed up relevant queries and support downstream distributed computations.
    Technologies: Scala, Amazon S3 (AWS S3), AWS Lambda, Akka Streams, Akka, Kubernetes, Amazon Simple Queue Service (SQS), AWS EMR, SBT, Apache Spark, Spark SQL
  • Senior Software Developer

    2020 - 2020
    Second Foundation
• Developed a real-time market trading adapter that normalized market operations for multiple use cases.
    • Participated in internal system design and in designing protocols between distributed components.
    • Developed a system for scraping published data from various web pages.
    Technologies: Kotlin, Coroutines, PostgreSQL, RabbitMQ, Solace, JVM, Trading Systems, Distributed Systems
  • Senior Software Developer

2017 - 2020
    Jumpshot
    • Served as the lead developer of the processing pipeline for internal platforms that were used throughout the company.
    • Implemented a custom Spark job scheduling service for continuous application of patterns to data. This allowed for scaling out the computation to keep strict delivery deadlines.
    • Implemented a safe mechanism of mutable data publication on HDFS using snapshotting. This prevented consumer errors and allowed for safe synchronization between clusters.
    • Optimized recalculation algorithms to speed up the computation by an order of magnitude.
    • Vastly improved service reliability, resiliency, and monitoring.
    • Completely redesigned the pipeline for processing search engine results, improved source code, and participated in improving data quality.
    • Occasionally led a few other developers on associated tasks.
    Technologies: Apache Spark, Scala, Akka Actors, Concurrency, MongoDB, HDFS, Python
  • Technical Lead

    2012 - 2016
    ZOOM International
    • Added live screen monitoring to existing screen monitoring solutions.
    • Developed integrations with an external system for call recording.
    • Participated in complex bug fixing of business-critical applications.
    Technologies: Java, JVM, Multithreading, Linux, Python, Bash
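
    A recurring theme in the roles above is normalizing heterogeneous inputs into a single shape, as in the real-time market trading adapter. The original adapter was written in Kotlin; the following is a minimal Scala sketch under invented assumptions (the venue message shapes `VenueATrade` and `VenueBFill` and the normalized `Trade` type are made up for illustration):

    ```scala
    object MarketNormalizer {
      // Hypothetical raw messages as they might arrive from two different venues.
      sealed trait RawMessage
      final case class VenueATrade(sym: String, px: Double, qty: Long) extends RawMessage
      final case class VenueBFill(instrument: String, priceCents: Long, size: Long) extends RawMessage

      // The single normalized shape that downstream consumers work with.
      final case class Trade(symbol: String, price: BigDecimal, quantity: Long)

      // Exhaustive pattern match: adding a new venue message type without
      // handling it here becomes a compiler warning.
      def normalize(msg: RawMessage): Trade = msg match {
        case VenueATrade(sym, px, qty)    => Trade(sym, BigDecimal(px), qty)
        case VenueBFill(ins, cents, size) => Trade(ins, BigDecimal(cents) / 100, size)
      }
    }
    ```

    A sealed trait keeps the set of raw formats closed, so the normalizer can be checked for exhaustiveness at compile time.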


  • Pattern Application System

    This project was responsible for applying patterns to large-scale data; I was the main developer. The patterns themselves were a moving target, updated several times a day, and each pattern had to be applied to all the historical data from the time of its publication by a strict deadline.

    Because updates mutated the resulting data, a safe publishing mechanism on HDFS and safe synchronization to other clusters were needed. A service exposed the current status of patterns to consumers. Pattern application was costly, so an algorithm limited recalculation to just the affected parts of the data. Another interesting aspect of this service was its resiliency and the effort to minimize the chance of a single bad batch or pattern disrupting the computation.
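
    The safe-publication idea can be illustrated with local files standing in for HDFS. This is a minimal sketch, assuming consumers resolve data through a `LATEST` pointer file that is swapped atomically, so they never observe a half-written snapshot (the directory layout and file names here are invented):

    ```scala
    import java.nio.file._
    import java.nio.charset.StandardCharsets.UTF_8

    object SnapshotPublisher {
      // Write the new snapshot into its own directory, then atomically
      // repoint LATEST at it. Readers following LATEST never see partial data.
      def publish(root: Path, version: String, data: String): Path = {
        val snapshotDir = Files.createDirectories(root.resolve(s"snapshot-$version"))
        Files.write(snapshotDir.resolve("data.txt"), data.getBytes(UTF_8))

        val pointerTmp = root.resolve("LATEST.tmp")
        Files.write(pointerTmp, snapshotDir.getFileName.toString.getBytes(UTF_8))
        // Atomic rename: the pointer either still names the old snapshot
        // or already names the new one, never anything in between.
        Files.move(pointerTmp, root.resolve("LATEST"), StandardCopyOption.ATOMIC_MOVE)
        snapshotDir
      }

      // Consumers resolve the current snapshot via the pointer file.
      def current(root: Path): Path =
        root.resolve(new String(Files.readAllBytes(root.resolve("LATEST")), UTF_8))

      def readCurrent(root: Path): String =
        new String(Files.readAllBytes(current(root).resolve("data.txt")), UTF_8)
    }
    ```

    On HDFS the same pattern applies with a rename of a pointer file or directory; old snapshots can be garbage-collected once no consumer references them.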

    The service itself was a Scala-based application with multiple concurrent Spark jobs spawned on the Cloudera cluster. It used Scala futures and Akka Actors for handling concurrency, and MongoDB to persist the global computation state and track job states. There was an HTTP API and a CLI tool to control various aspects of the running system.
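
    The concurrency and state-tracking model can be sketched with plain Scala futures. This is a minimal illustration only: `runSparkJob` is a hypothetical stand-in for spawning a real Spark job, and an in-memory concurrent map stands in for the MongoDB-backed job state collection; note how a single failing pattern is recorded but does not abort the whole run:

    ```scala
    import scala.collection.concurrent.TrieMap
    import scala.concurrent.{ExecutionContext, Future}

    object PatternRunner {
      sealed trait JobState
      case object Running extends JobState
      case object Succeeded extends JobState
      case object Failed extends JobState

      // In-memory stand-in for the MongoDB-backed job state collection.
      val states = TrieMap.empty[String, JobState]

      // Hypothetical stand-in for spawning a Spark job on the cluster.
      def runSparkJob(patternId: String): Unit =
        if (patternId.isEmpty) throw new IllegalArgumentException("bad pattern")

      def apply(patternIds: Seq[String])(implicit ec: ExecutionContext): Future[Unit] = {
        val jobs = patternIds.map { id =>
          states.put(id, Running)
          Future(runSparkJob(id))
            .map(_ => states.put(id, Succeeded))
            // One bad pattern must not disrupt the rest of the computation.
            .recover { case _ => states.put(id, Failed) }
        }
        Future.sequence(jobs).map(_ => ())
      }
    }
    ```

    In the real service the per-job state transitions were persisted, so the scheduler could resume or skip work after a restart instead of recomputing everything.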

  • Metrostation Prague

    A small application that notifies users of the current station on the Prague metro via the notification bar on Android phones. The Prague metro was unusual in that it had cell signal only in stations; this application exploited that, using a list of GSM cell IDs to track and predict the user's movement. The application is now outdated, as the metro tunnels are being covered with signal and I did not continue development, but it was a nice little hobby project.
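
    The core trick is simple to illustrate. A minimal Scala sketch follows, with made-up GSM cell IDs and station names; the real app read the currently connected cell from the Android telephony API and kept a per-station list of IDs collected by hand:

    ```scala
    object MetroLocator {
      // Made-up cell IDs for illustration; the real list was collected per station.
      // A station can be served by more than one cell.
      val cellToStation: Map[Int, String] = Map(
        21001 -> "Muzeum",
        21002 -> "Muzeum",
        21010 -> "Mustek",
        21020 -> "Staromestska"
      )

      // The metro only had signal in stations, so observing a known cell ID
      // means the user is at (or pulling into) that station.
      def stationFor(cellId: Int): Option[String] = cellToStation.get(cellId)
    }
    ```

    An unknown cell ID simply yields no station, which in the app meant the train was between stations (no signal) or outside the metro entirely.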


  • Languages

    Scala, Kotlin, Python, Java, Bash
  • Frameworks

    Apache Spark, Akka, AWS EMR
  • Platforms

    Linux, JVM, Android, AWS Lambda, Kubernetes
  • Other

    Big Data, Concurrency, Coroutines, Multithreading, Computer Science, Akka Actors, Solace, Distributed Systems
  • Libraries/APIs

    Akka Streams
  • Tools

    IntelliJ IDEA, RabbitMQ, Amazon Simple Queue Service (SQS), SBT, Spark SQL
  • Storage

    MongoDB, HDFS, PostgreSQL, Amazon S3 (AWS S3)
  • Industry Expertise

    Trading Systems


  • Master's Degree in Computer Science
    2001 - 2008
    Czech Technical University in Prague - Prague, Czech Republic
