Olajire Atose

Verified Expert in Engineering

Database Engineer and Developer

Kigali, Kigali City, Rwanda

Toptal member since April 27, 2022

Bio

Olajire is an enthusiastic and knowledgeable data engineer who makes data available for everyone's use. He has over eight years of experience designing, developing, testing, and supporting data solutions, with proficiency in building and maintaining ETL processes, data warehouses, and databases and in creating data pipeline infrastructure for greater scalability and availability.

Portfolio

Thynk Software
Azure Data Factory (ADF), Azure, Azure SQL Databases, Databricks, SQL, Python...
One Acre Fund
SQL, SQL Server BI, SQL Server DBA, PostgreSQL, Azure DevOps, Kubernetes...
Zenith Bank
Python, SQL, SQL Server BI, ETL, SQL Server Integration Services (SSIS)...

Experience

  • SQL - 9 years
  • Data Engineering - 7 years
  • ETL - 5 years
  • PySpark - 4 years
  • Big Data - 4 years
  • Python - 4 years
  • Azure - 3 years
  • Databricks - 2 years

Availability

Full-time

Preferred Environment

Azure, Azure Data Factory (ADF), Big Data, Data Analysis, Databricks, Data Warehousing, Geospatial Analytics, PySpark, Python, SQL

The most amazing...

...thing I've worked on is designing and implementing solutions to process large geospatial datasets, which built the foundation for robust and insightful analytics.

Work Experience

Senior Data Engineer

2022 - 2022
Thynk Software
  • Designed solutions to process extensive geospatial data using Databricks, Azure Data Factory, and Azure Blob storage.
  • Transformed logical information models into database designs and defined data rules and schema designs aligned with the business model.
  • Developed high-level data flow diagrams and data standards, enforced naming conventions, and evaluated the consistency and integrity of data warehouse models and designs.
  • Performed tuning for ETL code, stored procedures, functions, and SQL queries.
Technologies: Azure Data Factory (ADF), Azure, Azure SQL Databases, Databricks, SQL, Python, PySpark, Spark SQL, SQL Server BI, Blob Storage, ETL, Geospatial Analytics, Data Integration, Data Processing, Infrastructure, Data-level Security, Azure Databricks, Data Modeling, ETL Implementation & Design, Data Pipelines, Query Optimization

Senior Database Engineer

2020 - 2022
One Acre Fund
  • Optimized indexes and materialized views that significantly improved the data warehouse performance.
  • Optimized previously existing jobs and stored procedures to improve performance and speed.
  • Built the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using Azure Data Factory, Azure Blob storage, and PySpark.
Technologies: SQL, SQL Server BI, SQL Server DBA, PostgreSQL, Azure DevOps, Kubernetes, Google Cloud Dataproc, Google Cloud Composer, Azure, Python, PySpark, Azure Data Factory (ADF), Azure SQL Databases, Data Integration, Data Processing, Infrastructure, Data-level Security, Azure Databricks, Data Modeling, ETL Implementation & Design, Data Pipelines, Query Optimization

Data Engineer

2018 - 2020
Zenith Bank
  • Gathered and analyzed data requirements, and designed data warehouse solutions that powered several reports for different teams, including product, marketing, and financial control.
  • Identified, designed, and implemented internal process improvements, including automating manual processes, optimizing data delivery, and re-designing ETL infrastructure for greater scalability and availability.
  • Rewrote several stored procedures and views that reduced ETL run times from eight hours to less than an hour.
  • Designed a data pipeline used to migrate data from legacy systems into a new core banking application.
Technologies: Python, SQL, SQL Server BI, ETL, SQL Server Integration Services (SSIS), Spark SQL, Spark, Apache Airflow, Data Integration, Data Processing, Infrastructure, Data-level Security, Azure Databricks, Data Modeling, ETL Implementation & Design, Data Pipelines, Query Optimization

Database Engineer

2014 - 2018
Zenith Bank
  • Scaled out our payment service by setting up replication and high availability on multiple replicas, which improved performance by up to 40%.
  • Optimized previously existing jobs, stored procedures, and views to improve readability and performance.
  • Carried out performance tuning and monitoring to reduce database downtime.
Technologies: SQL Server DBA, Sybase, SQL Server BI, Oracle, PostgreSQL, Data Integration, Data Processing, Infrastructure, Data-level Security, Azure Databricks, Data Modeling, ETL Implementation & Design, Data Pipelines, Query Optimization

Experience

Geospatial Data Processing

This project is a data pipeline that collects and processes geospatial data from thousands of vessels moving around the world.

The pipeline processes both historical vessel movement data and near real-time vessel movement data consumed from a GraphQL API.

The longitude and latitude of each vessel, among other information, were recorded every three minutes, producing a dataset of over 400 billion records.

This data was crunched and processed to build the foundation for the following analytics:
1. The ports visited by each vessel.
2. When each vessel arrived at a port.
3. The distance traveled by each vessel per voyage.
4. How long each vessel stayed at a port.
5. Classes of vessels clustered by port region, season of the year, and other attributes.

This implementation ensures adequate processing of both batch and real-time streaming datasets.

The following tools were used to implement this:

ETL: Python, SQL, and Azure Databricks
Orchestration: Azure Data Factory
File Storage: Azure Blob storage
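
Below is a minimal PySpark sketch of one step in this pipeline: computing the distance traveled per voyage from the three-minute position pings using the haversine formula. The storage paths and column names (vessel_id, voyage_id, event_time, lat, lon) are illustrative assumptions rather than the project's actual schema.

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vessel-voyage-distance").getOrCreate()

# Assumed landing zone for raw vessel pings in Azure Blob storage.
pings = spark.read.parquet("wasbs://vessel-pings@examplestorage.blob.core.windows.net/raw/")

# Pair each ping with the previous ping of the same vessel and voyage.
w = Window.partitionBy("vessel_id", "voyage_id").orderBy("event_time")
with_prev = (
    pings
    .withColumn("prev_lat", F.lag("lat").over(w))
    .withColumn("prev_lon", F.lag("lon").over(w))
)

# Haversine distance in kilometers between consecutive pings; the first ping of a
# voyage has no predecessor, so its segment is null and is ignored by the sum.
earth_radius_km = 6371.0
segments = (
    with_prev
    .withColumn("dlat", F.radians(F.col("lat") - F.col("prev_lat")))
    .withColumn("dlon", F.radians(F.col("lon") - F.col("prev_lon")))
    .withColumn(
        "a",
        F.sin(F.col("dlat") / 2) ** 2
        + F.cos(F.radians("prev_lat")) * F.cos(F.radians("lat")) * F.sin(F.col("dlon") / 2) ** 2,
    )
    .withColumn("segment_km", 2 * earth_radius_km * F.asin(F.sqrt("a")))
)

# Sum the segments to get the distance traveled per vessel per voyage.
voyage_distance = segments.groupBy("vessel_id", "voyage_id").agg(
    F.sum("segment_km").alias("distance_km")
)
voyage_distance.write.mode("overwrite").parquet(
    "wasbs://analytics@examplestorage.blob.core.windows.net/voyage_distance/"
)

Port-visit and dwell-time analytics can follow the same pattern, joining the pings against port geometries before aggregating.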

Data Vault Implementation

This project was undertaken primarily to keep up with constantly changing regulatory compliance requirements in the financial services sector. By making it possible to analyze the vast amounts of data available across multiple source systems, it also generated valuable insights into customer behavior that were used to create new personalized products.

Responsibilities
1. Designed the data model: hub tables and satellite tables.
2. Designed the ETL flow.
3. Implemented the ETL, sourcing data from CSV files, XML, JSON, third-party APIs, and databases.
4. Developed data marts.

This implementation built the foundation for regulatory reporting and customer behavior insights analytics.
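
As an illustration of the hub-and-satellite design, here is a hedged PySpark sketch of how a customer hub and its satellite could be loaded. The staging path, business key, and attribute columns are hypothetical and stand in for the bank's actual schema.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-vault-load").getOrCreate()

# Assumed staging area holding customer records pulled from the source systems.
staged = spark.read.parquet("/staging/customers/")

# Hub: one row per distinct business key, with a hash key and load metadata.
hub_customer = (
    staged
    .select("customer_number")
    .dropDuplicates(["customer_number"])
    .withColumn("hub_customer_hk", F.sha2(F.col("customer_number").cast("string"), 256))
    .withColumn("load_dts", F.current_timestamp())
    .withColumn("record_source", F.lit("core_banking"))
)

# Satellite: descriptive attributes keyed by the hub hash key, with a hashdiff
# so that only changed records need to be appended on later loads.
sat_customer = (
    staged
    .withColumn("hub_customer_hk", F.sha2(F.col("customer_number").cast("string"), 256))
    .withColumn("hashdiff", F.sha2(F.concat_ws("||", "full_name", "address", "segment"), 256))
    .withColumn("load_dts", F.current_timestamp())
    .select("hub_customer_hk", "hashdiff", "full_name", "address", "segment", "load_dts")
)

# In practice each load would anti-join against the existing tables on the hash key
# and hashdiff before appending; that step is omitted here for brevity.
hub_customer.write.mode("append").saveAsTable("raw_vault.hub_customer")
sat_customer.write.mode("append").saveAsTable("raw_vault.sat_customer_details")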

Core Banking Data Migration

This project involved migrating data from the Phoenix core banking application to the Misys Essence core banking application.

Responsibilities
1. Planned and profiled the source data.
2. Managed the complete audit of source data.
3. Cleansed and transformed data based on business rules.
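
A small PySpark sketch of the profiling and cleansing steps is shown below. The connection details, table name, and rules are examples only, not the actual Phoenix schema or the business rules applied on the project.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("migration-profiling").getOrCreate()

# Assumed JDBC connection to the legacy source; all details are placeholders.
accounts = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-host;databaseName=phoenix")
    .option("dbtable", "dbo.accounts")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

# Per-column null and distinct counts used to plan cleansing and mapping rules.
profile = accounts.select(
    [F.count(F.when(F.col(c).isNull(), 1)).alias(f"{c}_nulls") for c in accounts.columns]
    + [F.countDistinct(c).alias(f"{c}_distinct") for c in accounts.columns]
)
profile.show(truncate=False)

# Example cleansing rule: trim identifiers and keep only rows that pass a basic audit check.
cleansed = (
    accounts
    .withColumn("account_number", F.trim("account_number"))
    .filter(F.col("account_number").isNotNull() & (F.col("balance") >= 0))
)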

Education

2006 - 2011

Bachelor's Degree in Information Technology

Covenant University - Ota, Ogun, Nigeria

Certifications

JANUARY 2020 - PRESENT

Business Analytics

Harvard Business School Online

Skills

Libraries/APIs

PySpark

Tools

SQL Server BI, Spark SQL, Apache Airflow, Google Cloud Dataproc, Google Cloud Composer, Tableau, Excel 2013, Microsoft Power BI

Languages

SQL, Python

Frameworks

Apache Spark

Paradigms

ETL, ETL Implementation & Design, Azure DevOps

Platforms

Databricks, Amazon Web Services (AWS), Azure, Kubernetes, Oracle

Storage

SQL Server DBA, Microsoft SQL Server, Data Pipelines, PostgreSQL, Sybase, SQL Server Integration Services (SSIS), Azure SQL Databases, SQL Loader, Data Integration

Other

Azure Data Factory (ADF), Big Data, Data Analysis, Geospatial Analytics, Azure Data Lake, Data Engineering, ETL Development, Azure Databricks, Query Optimization, Data Warehousing, Data Processing, Infrastructure, Data-level Security, Data Modeling, CCNA, Digital Electronics, Computational Finance, Business Analysis, Blob Storage, Tableau Server
