
Charles-Edouard Ladari

Verified Expert in Engineering

Data Engineer and Developer

Location
Paris, France
Toptal Member Since
September 27, 2023

Charles is a highly experienced data engineer with a background in software engineering and data science dating back to 2015. He excels at handling all aspects of data processing, whether in research or industry settings. Passionate about exploring new technologies and working with big data, Charles thrives on solving complex problems through innovative, data-driven solutions that deliver tangible value to businesses.

Portfolio

My Money Group
Amazon Web Services (AWS), Scala, Terraform, AWS Glue, Apache Kafka, Amazon RDS...
Groupe SEB
Scala, Apache Spark, Amazon Web Services (AWS), Docker, PostgreSQL, Python...
Invenis
Scala, Apache Spark, Hadoop, Redis, Docker, Playlib, Akka, SBT, GRAPH...

Experience

Availability

Part-time

Preferred Environment

Apache Spark, Scala, Amazon Web Services (AWS), Terraform, Apache Kafka, Python

The most amazing...

...thing I've done was designing a new Scala engine for ETL purposes, entirely bypassing compilation and cutting processing times from 45 seconds to 1 second.

Work Experience

Cloud and Data Engineer

2022 - 2023
My Money Group
  • Established a secure environment for handling sensitive, non-anonymized data during the dynamic phase of the CD3 certification.
  • Implemented strict access policies following banking guidelines and principles to ensure the secure handling of sensitive data.
  • Developed cloud-agnostic code and designed a comprehensive data pipeline to handle large volumes of data from end to end.
Technologies: Amazon Web Services (AWS), Scala, Terraform, AWS Glue, Apache Kafka, Amazon RDS, PostgreSQL, Amazon CloudWatch, Amazon Simple Queue Service (SQS), Java, Apache Airflow, Python, CI/CD Pipelines, Docker, Amazon Athena, Bash Script, Data Engineering

Data Engineer

2021 - 2022
Groupe SEB
  • Ingested, cleansed, and aggregated data from Amazon, delivering crucial business insights to our global team and board. These reports showcased annual sales in the billions on the Amazon platform.
  • Tackled substantial technical debt and redesigned the infrastructure and pipelines to improve scalability, reliability, and clarity, resolving a five-month project delay caused by shortcomings in the data pipeline and processes.
  • Implemented a structured backup system for complex raw data and automated loading of the complete dataset, ensuring swift restoration in case of issues in the production environment.
Technologies: Scala, Apache Spark, Amazon Web Services (AWS), Docker, PostgreSQL, Python, AWS Fargate, AWS Glue, Ansible, Amazon CloudWatch, Amazon Athena, Java, AWS Step Functions, Data Engineering

Software and Data Engineer

2018 - 2020
Invenis
  • Architected and implemented the software's core computational engine in Scala. By eliminating the compilation step, we reduced the phase one execution time from 45 seconds to under 10 seconds and phase two to 1 second for small and medium datasets.
  • Redesigned the development framework for data processing modules. Migrated the software's distributed data structure from RDD to DataFrame, reducing the pipeline computation on small and medium data volumes from 2 minutes to less than 50 seconds.
  • Excelled in a pivotal assignment for a key governmental client, leveraging our proprietary software to audit and process data and derive valuable insights. Over 100 regional administrators used our dashboard reports daily.
Technologies: Scala, Apache Spark, Hadoop, Redis, Docker, Playlib, Akka, SBT, GRAPH, Apache Kafka, RabbitMQ, Apache Superset, Python, Algorithms, PostgreSQL, Amazon Web Services (AWS), Dask, Data Engineering
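The engine described above is proprietary, but the core idea behind "bypassing compilation" can be sketched: instead of generating and compiling Scala source for each pipeline run, the engine evaluates a rule tree directly through pattern matching, so no compiler ever runs. All names below are illustrative, not taken from the actual engine:

```scala
// Toy expression AST. A compile-based engine would emit Scala source for
// each pipeline and invoke the compiler; interpreting the tree directly
// removes that per-run compilation cost.
sealed trait Expr
case class Num(value: Double)    extends Expr
case class Col(name: String)     extends Expr // column lookup in a row
case class Add(l: Expr, r: Expr) extends Expr
case class Mul(l: Expr, r: Expr) extends Expr

object Interpreter {
  // Evaluate an expression against one row (column name -> value).
  def eval(e: Expr, row: Map[String, Double]): Double = e match {
    case Num(v)    => v
    case Col(n)    => row(n)
    case Add(l, r) => eval(l, row) + eval(r, row)
    case Mul(l, r) => eval(l, row) * eval(r, row)
  }
}

object Demo extends App {
  // price * quantity + shipping, evaluated with no compilation step
  val total = Add(Mul(Col("price"), Col("qty")), Col("shipping"))
  val row   = Map("price" -> 10.0, "qty" -> 3.0, "shipping" -> 5.0)
  println(Interpreter.eval(total, row)) // 35.0
}
```

The trade-off is classic: an interpreter skips the expensive compile phase at the cost of some per-operation overhead, which pays off when pipelines change often and datasets are small to medium.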

Research Intern

2017 - 2017
ETH Zurich
  • Developed a new algorithm for tensor decomposition of orders four and six in the overcomplete case.
  • Provided theoretical guarantees for the tensor-decomposition algorithms.
  • Conducted experiments using Cython and a non-naive approach to handle high-dimensional data with more than 2,000 dimensions.
Technologies: Python

Part-time Data Scientist

2016 - 2017
Nickel
  • Segmented customers according to their banking activity into ten different relevant profiles.
  • Proposed a machine learning (ML) model trained on 80% of the customer database to predict churn within each profile.
  • Proposed a comprehensive strategy to address customer churn across multiple segments.
Technologies: Algorithms, Python

Open-source Scala Library: JSON Logic Scala

https://www.jsonlogicscala.com/
This project involved building complex rules, serializing them as JSON, and executing them in Scala, making it possible to share logic between the frontend and backend through Scala code.

Key features of this system include:
• Serialization and deserialization of complex rules to and from JSON
• Evaluation of these complex rules in Scala using evaluators
• Creation of custom serializer and deserializer
• Development of custom rule evaluators in Scala
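The pattern behind these features can be sketched in plain Scala. This is an illustrative toy, not the actual API of JSON Logic Scala: rules form a tree that serializes to a JSON-Logic-style document for sharing with a frontend, and the same tree evaluates against a data context on the backend (all type and method names here are invented for the example):

```scala
// Minimal rule tree with JSON-Logic-style serialization and evaluation.
sealed trait Rule
case class Value(v: Double)       extends Rule
case class Var(name: String)      extends Rule // variable lookup in data
case class Less(l: Rule, r: Rule) extends Rule
case class And(l: Rule, r: Rule)  extends Rule

object JsonLogic {
  // Serialize a rule to a JSON-Logic-style string, e.g. {"<":[{"var":"age"},65.0]}.
  def toJson(r: Rule): String = r match {
    case Value(v)   => v.toString
    case Var(n)     => s"""{"var":"$n"}"""
    case Less(l, x) => s"""{"<":[${toJson(l)},${toJson(x)}]}"""
    case And(l, x)  => s"""{"and":[${toJson(l)},${toJson(x)}]}"""
  }

  // Evaluate a boolean rule against a data context.
  def eval(r: Rule, data: Map[String, Double]): Boolean = r match {
    case Less(l, x) => num(l, data) < num(x, data)
    case And(l, x)  => eval(l, data) && eval(x, data)
    case _          => sys.error("not a boolean rule")
  }

  private def num(r: Rule, data: Map[String, Double]): Double = r match {
    case Value(v) => v
    case Var(n)   => data(n)
    case _        => sys.error("not a numeric rule")
  }
}

object RuleDemo extends App {
  // 18 < age < 65, built once, serializable for the frontend,
  // evaluated on the backend.
  val rule = And(Less(Var("age"), Value(65)), Less(Value(18), Var("age")))
  println(JsonLogic.toJson(rule))
  println(JsonLogic.eval(rule, Map("age" -> 30.0))) // true
}
```

Deserialization is the inverse walk over a parsed JSON tree; the real library also lets users plug in custom serializers, deserializers, and evaluators for their own operators.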
Education

2016 - 2017

Master of Research Degree in Artificial Intelligence

ENS Paris-Saclay - Paris, France

2013 - 2017

Master of Engineering Degree in Mathematics and General Engineering

CentraleSupélec - Paris, France

2010 - 2013

CPGE in Mathematics and Physics

Lycée Michel Montaigne - Bordeaux, Nouvelle-Aquitaine

Libraries/APIs

Dask

Tools

SBT, AWS Glue, Terraform, RabbitMQ, AWS Step Functions, Amazon Simple Queue Service (SQS), AWS Fargate, Ansible, Amazon CloudWatch, Amazon Athena, Apache Airflow

Frameworks

Apache Spark, Hadoop, Akka, Presto

Platforms

Amazon Web Services (AWS), Apache Kafka, Azure, Docker, Azure Event Hubs, Azure Synapse

Languages

Scala, Python, Java, Bash Script

Paradigms

Unit Testing

Storage

Redis, PostgreSQL, Apache Hive, Azure Cosmos DB

Other

High Code Quality, GRAPH, Data Engineering, Algorithms, Machine Learning, CI/CD Pipelines, Documentation, Playlib, Amazon RDS, Azure Databricks, Azure Data Factory, Azure Data Lake, Apache Superset, Data Build Tool (dbt)
