
Olajire Atose
Verified Expert in Engineering
Database Engineer and Developer
Kigali, Kigali City, Rwanda
Toptal member since April 27, 2022
Olajire is an enthusiastic and knowledgeable data engineer who makes data available for everyone's use. He has over eight years of experience designing, developing, testing, and supporting data solutions, with proficiency in building and maintaining ETLs, data warehouses, and databases, and in creating data pipeline infrastructure for greater scalability and availability.
Experience
- SQL - 9 years
- Data Engineering - 7 years
- ETL - 5 years
- PySpark - 4 years
- Big Data - 4 years
- Python - 4 years
- Azure - 3 years
- Databricks - 2 years
Preferred Environment
Azure, Azure Data Factory (ADF), Big Data, Data Analysis, Databricks, Data Warehousing, Geospatial Analytics, PySpark, Python, SQL
The most amazing...
...thing I've worked on is designing and implementing solutions to process large-scale geospatial data, building the foundation for robust and insightful analytics.
Work Experience
Senior Data Engineer
Thynk Software
- Designed solutions to process extensive geospatial data using Databricks, Azure Data Factory, and Azure Blob storage.
- Transformed logical information models into database designs and defined data rules and schema designs aligned with the business model.
- Developed high-level data flow diagrams and data standards, enforced naming conventions, and evaluated the consistency and integrity of data warehouse models and designs.
- Performed tuning for ETL code, stored procedures, functions, and SQL queries.
Senior Database Engineer
One Acre Fund
- Optimized indexes and materialized views that significantly improved the data warehouse performance.
- Optimized existing jobs and stored procedures to improve performance.
- Built the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using Azure Data Factory, Azure Blob storage, and PySpark, as sketched below.
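
A minimal PySpark sketch of the extract-transform-load pattern described above; the storage account, container, paths, and column names are hypothetical placeholders rather than details from the actual project:

```python
# Hedged ETL sketch: extract CSVs from Azure Blob storage, transform, load Parquet.
# "myaccount", the container names, and the columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("blob-etl").getOrCreate()

# Authenticate to Azure Blob storage (wasbs) with an account key.
spark.conf.set(
    "fs.azure.account.key.myaccount.blob.core.windows.net",
    "<storage-account-key>",
)

# Extract: read raw CSV drops from a landing container.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("wasbs://landing@myaccount.blob.core.windows.net/orders/*.csv")
)

# Transform: standardize types and drop rows missing required fields.
clean = (
    raw.withColumn("event_date", F.to_date("event_date"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropna(subset=["event_date", "amount"])
)

# Load: write date-partitioned Parquet for the warehouse layer.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("wasbs://warehouse@myaccount.blob.core.windows.net/orders/"))
```

In practice, a job like this would run as a Databricks notebook or job triggered by Azure Data Factory as the orchestrator.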
Data Engineer
Zenith Bank
- Gathered and analyzed data requirements, and designed data warehouse solutions that powered several reports for different teams, including product, marketing, and financial control.
- Identified, designed, and implemented internal process improvements, including automating manual processes, optimizing data delivery, and redesigning the ETL infrastructure for greater scalability and availability (see the incremental-extraction sketch after this list).
- Rewrote several stored procedures and views that reduced ETL run times from eight hours to less than an hour.
- Designed a data pipeline used to migrate data from legacy systems into a new core banking application.
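
One common way to achieve runtime reductions like the one above is to replace full refreshes with watermark-based incremental extraction, pulling only rows changed since the last successful run. A hedged sketch, with hypothetical connection, table, and column names:

```python
# Incremental extraction sketch: read only rows newer than the last watermark.
# The JDBC URL, credentials, table, and updated_at column are illustrative;
# a SQL Server JDBC driver must be on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-extract").getOrCreate()

# High-water mark from the previous run, e.g. persisted in a control table.
last_watermark = "2024-01-01 00:00:00"  # placeholder value

jdbc_url = "jdbc:sqlserver://legacy-db:1433;databaseName=core"
query = f"(SELECT * FROM dbo.transactions WHERE updated_at > '{last_watermark}') AS src"

incremental = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", query)
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)

# Append only new/changed rows, then advance the watermark for the next run.
incremental.write.mode("append").parquet("/warehouse/transactions/")
new_watermark = incremental.agg(F.max("updated_at")).first()[0]
```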
Database Engineer
Zenith Bank
- Scaled out the payment service by setting up replication and high availability across multiple replicas, which improved performance by up to 40%.
- Optimized existing jobs, stored procedures, and views to improve readability and performance.
- Performed performance tuning and monitoring to reduce database downtime.
Project Experience
Geospatial Data Processing
The project processed historical vessel movement data alongside near real-time movement data consumed from a GraphQL API.
Each vessel's longitude and latitude, among other attributes, were recorded every three minutes, producing over 400 billion records.
This data was processed to build the foundation for the following analytics:
1. The ports visited by each vessel.
2. When each vessel arrived at a port.
3. The distance traveled by each vessel per voyage.
4. How long each vessel stayed at each port.
5. Vessel classes clustered by port region, season of the year, and other dimensions.
This implementation handles both batch and real-time streaming datasets; the core port-visit logic is sketched after the tool list below.
The following tools were used to implement this:
- ETL: Python, SQL, and Azure Databricks
- Orchestration: Azure Data Factory
- File storage: Azure Blob storage
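
As a rough sketch of how analytics 1, 2, and 4 can be derived from raw pings in PySpark: flag every ping that falls within a radius of a known port, then detect arrivals as transitions from "at sea" to "in port" with a window function. The port reference data, the 5 km radius, and the column names are illustrative assumptions, not details from the project:

```python
# Port-visit detection sketch over vessel position pings.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("port-visits").getOrCreate()

pings = spark.read.parquet("/data/vessel_pings/")  # vessel_id, ts, lat, lon
ports = spark.read.parquet("/data/ports/")         # port_id, port_lat, port_lon

# Haversine distance (km) between each ping and each port.
R = 6371.0
joined = pings.crossJoin(ports).withColumn(
    "dist_km",
    2 * R * F.asin(F.sqrt(
        F.pow(F.sin(F.radians(F.col("port_lat") - F.col("lat")) / 2), 2)
        + F.cos(F.radians("lat")) * F.cos(F.radians("port_lat"))
        * F.pow(F.sin(F.radians(F.col("port_lon") - F.col("lon")) / 2), 2)
    )),
)

# Keep the nearest port per ping; flag the ping as "in port" within 5 km.
nearest_port = Window.partitionBy("vessel_id", "ts").orderBy("dist_km")
flagged = (
    joined.withColumn("rn", F.row_number().over(nearest_port))
          .filter("rn = 1")
          .withColumn("in_port", (F.col("dist_km") <= 5.0).cast("int"))
)

# An arrival is a 0 -> 1 transition of the in_port flag in time order;
# dwell time falls out of pairing each arrival with the next departure.
by_time = Window.partitionBy("vessel_id").orderBy("ts")
arrivals = (
    flagged.withColumn("prev", F.lag("in_port", 1, 0).over(by_time))
           .filter((F.col("in_port") == 1) & (F.col("prev") == 0))
           .select("vessel_id", "port_id", "ts")
)
```

At the 400-billion-record scale described above, the naive cross join would be replaced with a geohash- or partition-pruned spatial join, but the windowing logic for arrivals and dwell time stays the same.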
Data Vault Implementation
Responsibilities
1. Designed the data model: hub and satellite tables (a hub-load sketch follows this section).
2. Designed the ETL flow.
3. Implemented the ETL: sourced data from CSV, XML, and JSON files, third-party APIs, and databases.
4. Developed data marts.
This implementation built the foundation for regulatory reporting and customer behavior insights analytics.
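
To make the hub-and-satellite structure concrete, here is a hedged PySpark sketch of a typical data vault hub load: hash the business key, keep only keys not already present in the hub, and append them with load metadata. The source, business key, and table names are hypothetical:

```python
# Data vault hub load sketch: dedupe business keys and append new hub rows.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dv-hub-load").getOrCreate()

customers = spark.read.json("/landing/customers/")  # one of several sources

# Candidate hub rows: hashed business key plus standard load metadata.
hub_incoming = (
    customers.select("customer_no")
             .dropDuplicates(["customer_no"])
             .withColumn("hub_customer_hk",
                         F.sha2(F.col("customer_no").cast("string"), 256))
             .withColumn("load_ts", F.current_timestamp())
             .withColumn("record_source", F.lit("crm_json"))
)

# Insert only business keys not already present in the hub.
hub_existing = spark.table("dv.hub_customer").select("hub_customer_hk")
new_keys = hub_incoming.join(hub_existing, "hub_customer_hk", "left_anti")
new_keys.write.mode("append").saveAsTable("dv.hub_customer")
```

Satellite loads follow the same pattern, with a hash diff over the descriptive attributes deciding whether a new satellite row is written.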
Core Banking Data Migration
Responsibilities
1. Planned and profiled source data (a profiling sketch follows below).
2. Managed the complete audit of source data.
3. Cleansed and transformed data based on business rules.
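
A minimal example of the profiling in step 1, assuming tabular source extracts; the file path and the metrics chosen are illustrative:

```python
# Source-profiling sketch: per-column null and distinct counts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("source-profiling").getOrCreate()
src = spark.read.option("header", "true").csv("/landing/legacy_accounts.csv")

total = src.count()
rows = []
for c in src.columns:
    stats = src.agg(
        F.count(F.col(c)).alias("non_null"),
        F.countDistinct(F.col(c)).alias("distinct"),
    ).first()
    rows.append((c, total, stats["non_null"], stats["distinct"]))

# Null rates and low-cardinality columns guide the cleansing rules in step 3.
profile = spark.createDataFrame(rows, ["column", "total", "non_null", "distinct"])
profile.show(truncate=False)
```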
Education
Bachelor's Degree in Information Technology
Covenant University - Ota, Ogun, Nigeria
Certifications
Business Analytics
Harvard Business School Online
Skills
Libraries/APIs
PySpark
Tools
SQL Server BI, Spark SQL, Apache Airflow, Google Cloud Dataproc, Google Cloud Composer, Tableau, Excel 2013, Microsoft Power BI
Languages
SQL, Python
Frameworks
Apache Spark
Paradigms
ETL, ETL Implementation & Design, Azure DevOps
Platforms
Databricks, Amazon Web Services (AWS), Azure, Kubernetes, Oracle
Storage
SQL Server DBA, Microsoft SQL Server, Data Pipelines, PostgreSQL, Sybase, SQL Server Integration Services (SSIS), Azure SQL Databases, SQL Loader, Data Integration
Other
Azure Data Factory (ADF), Big Data, Data Analysis, Geospatial Analytics, Azure Data Lake, Data Engineering, ETL Development, Azure Databricks, Query Optimization, Data Warehousing, Data Processing, Infrastructure, Data-level Security, Data Modeling, CCNA, Digital Electronics, Computational Finance, Business Analysis, Blob Storage, Tableau Server