Syed Muneeb Hussain

Verified Expert in Engineering

SQL and Data Developer

Karachi, Sindh, Pakistan

Toptal member since July 20, 2022

Bio

Muneeb is an experienced data engineer proficient in Python and SQL, specializing in big data technologies. He has worked on cloud platforms such as Alibaba Cloud, Azure, and GCP, as well as transformation tools like dbt and ETL tools including Airflow, Prefect, ADF, Talend, and SSIS. He also excels in data visualization using Power BI, Grafana, and Apache Superset, and is well-versed in DataOps and CI/CD pipelining using Docker and GitHub.

Portfolio

Dataquartz
Python, SQL, DuckDB, Grafana, Flask, REST, Docker, GitHub, Airbyte, Meltano...
Seeloz
Azure Logic Apps, Hadoop, Google BigQuery, PySpark, Spark, SQL, Database Design...
Daraz | Alibaba Group
Hadoop, SQL, Blink SQL, Alibaba Cloud, Data, Data Engineering, Data Warehouse...

Experience

Availability

Part-time

Preferred Environment

SQL, Python 3, PyCharm, Talend, Grafana, Data Build Tool (dbt), Apache Airflow, Meltano, Docker, Azure

The most amazing...

...project I've developed is a user-friendly ETL platform that makes data orchestration accessible, bridging the gap for non-technical users.

Work Experience

Lead Data Engineer

2022 - PRESENT
Dataquartz
  • Led the development of an in-house data ingestion product with Python, Flask, DuckDB, PostgreSQL, Grafana for dynamic visualization, and Prefect for ETL workflow management.
  • Orchestrated end-to-end ETL pipelines, incorporating audit logging and data integrity checks using Airflow and Prefect.
  • Implemented Prometheus with the Node Exporter for application monitoring and metrics collection.
  • Drove bug tracking, bug resolution, and new feature development in the data model.
  • Containerized the entire application using Docker for enhanced scalability and manageability in the data engineering workflow.
Technologies: Python, SQL, DuckDB, Grafana, Flask, REST, Docker, GitHub, Airbyte, Meltano, Apache Airflow, Prefect, Prometheus, Node Exporter, Data Engineering, Data Modeling, Data Warehouse, PostgreSQL, JSON, MinIO, S3 Buckets, Cloud Engineering, Data Build Tool (dbt), Pandas, NumPy, Apache Superset, Database Design, Google Cloud Development, Analytics Development, English, Agile Development, Data Science, Data Structures

Data Engineering Manager

2022 - 2022
Seeloz
  • Worked on the development of a data ingestion product using Prefect, SQL, Python, Flask, DuckDB, and PostgreSQL for the back end, and Grafana for the data visualization.
  • Built various ETL projects using SQL, PySpark, Scala, Azure Logic Apps, and more to pull data from multiple ERPs and various source systems.
  • Developed Azure Logic Apps to pull data from Microsoft Dynamics 365 data entities. Wrote ETL in Scala and PySpark to load them into the supply chain meta-model.
  • Implemented a monitoring framework using PySpark, PostgreSQL, and Grafana to ensure data correctness and integrity.
  • Worked on the development and optimizations of various data models and ETL pipelines for fast data processing.
  • Monitored daily data pipelines and ETL data load processes to ensure all the required data was loaded correctly in the supply chain data model.
  • Developed various dashboards using Grafana and Power BI to gauge important business metrics.
Technologies: Azure Logic Apps, Hadoop, Google BigQuery, PySpark, Spark, SQL, Database Design, Scala, Data Warehouse, Azure Blobs, Data Analysis, Data Engineering, Databricks, Slowly Changing Dimensions (SCD), Query Optimization, Big Data Architecture, Database, Data Quality Analysis, IntelliJ IDEA, Shell Development, Data Integration, Analysis, ETL, Business Intelligence Development, Python, PyCharm, Azure Design, Cloud Infrastructure, ETL Tools, CI/CD Pipelines, GitHub, Data Science, Database Analytics, RDBMS, Data Processing, Business Intelligence (BI) Platforms, Azure, Dedicated SQL Pool (formerly SQL DW), Azure SQL Data Warehouse, API Integration, SQL DML, Performance Tuning, T-SQL, Reports, Relational Databases, Data Modeling, Database Modeling, MariaDB, Business Logic, APIs, Data Architecture, Logical Database Design, Database Schema Design, Relational Database Design, REST API, Azure Service Bus, JSON, Quality Management, MySQL, Dimensional Modeling, ELT, Pandas, Microsoft Azure, Schemas, Jupyter Notebook, Relational Data Mapping, BigQuery, Reporting, BI Reporting, Windows PowerShell, XML, AnyDesk, Apache Airflow, NoSQL, PostgreSQL, Database Optimization, DuckDB, Apache, Data Extraction, CSV Export, CSV, Scripting, MongoDB, .NET, HTML, CSS, Cloud Engineering, Azure Data Factory, Data Management, Data Build Tool (dbt), Azure Databricks, Warehouses, Meltano, Flask, REST, NumPy, Google Cloud Development, Analytics Development, English, Agile Development, Data Structures

Big Data Engineering and Governance Lead

2019 - 2022
Daraz | Alibaba Group
  • Built and managed a DWH architecture, and wrote automated ETL scripts using HiveQL, HDFS, HBase, Python, and Shell on a cloud platform for data ingestions.
  • Developed BI dashboards on Power BI, vShow, and FBI to gauge important metrics related to domains like customer funnel, marketing, and logistics.
  • Developed and maintained an enterprise data warehouse and monitored data ingestion pipelines on a daily basis using SQL, Python, Flink, ODPS, and ETL flows.
  • Optimized dozens of ETL pipelines and SQL queries for fast data processing to finish the execution in minutes instead of hours.
  • Worked closely with department heads to maintain clear communication and complete projects effectively and efficiently.
  • Managed incoming data analysis requests and efficiently distributed results to support decision strategies.
Technologies: Hadoop, SQL, Blink SQL, Alibaba Cloud, Data, Data Engineering, Data Warehouse, Data Science, Big Data Architecture, Python, Shell Development, Data Visualization, Business Intelligence Development, Query Optimization, Data Integration, PostgreSQL, MySQL, Slowly Changing Dimensions (SCD), Data Analysis, Database, Data Quality Analysis, Analysis, ETL, Database Design, Azure Design, Cloud Infrastructure, ETL Tools, CI/CD Pipelines, GitHub, Database Analytics, RDBMS, Data Processing, Business Intelligence (BI) Platforms, Azure, Azure SQL Data Warehouse, Dedicated SQL Pool (formerly SQL DW), SQL Server, API Integration, SQL DML, Performance Tuning, T-SQL, Reports, PySpark, Relational Databases, Data Modeling, Database Modeling, Stored Procedure, Tableau Development, Dashboard, Dashboard Development, MariaDB, Business Logic, APIs, Data Architecture, Logical Database Design, Database Schema Design, Relational Database Design, REST API, JSON, Quality Management, IntelliJ IDEA, Dimensional Modeling, ELT, Pandas, Spark, Schemas, Jupyter Notebook, Relational Data Mapping, BigQuery, Reporting, BI Reporting, Windows PowerShell, XML, AnyDesk, Apache Airflow, NoSQL, Database Optimization, HDFS, Docker, Google BigQuery, Apache, Data Extraction, CSV Export, CSV, Scripting, MongoDB, HTML, CSS, Cloud Engineering, Azure Data Factory, Data Management, Data Build Tool (dbt), Warehouses, REST, NumPy, Analytics Development, English, Agile Development, Data Structures

Technical Consultant

2019 - 2019
Qordata
  • Designed and developed end-to-end data ingestion pipelines to ensure daily data flows.
  • Implemented and managed data flow jobs for data modeling solutions relevant to the health and life science industry, using tools like SQL Server Integration Services (SSIS) and Microsoft SQL Server.
  • Developed SQL queries, stored procedures, and dynamic SQL and optimized existing complex SQL queries to speed up day-to-day processes.
  • Created ad hoc data reports for clients according to their requirements.
Technologies: SQL, SSIS, SQL Server, Data Analysis, Data Quality Analysis, Database, Salesforce Development, Query Optimization, Slowly Changing Dimensions (SCD), Data Engineering, Shell Development, Data Integration, Analysis, ETL, Data Warehouse, Business Intelligence Development, Database Design, ETL Tools, Data Science, Database Analytics, RDBMS, Data Processing, Business Intelligence (BI) Platforms, SQL DML, Performance Tuning, T-SQL, Relational Databases, Data Modeling, Database Modeling, Business Logic, Data Architecture, Logical Database Design, Database Schema Design, Relational Database Design, Visual Studio Development, Quality Management, MySQL, Dimensional Modeling, ELT, Schemas, Jupyter Notebook, Relational Data Mapping, Reporting, BI Reporting, Windows PowerShell, NoSQL, PostgreSQL, Database Optimization, Data Extraction, CSV Export, CSV, Scripting, MongoDB, .NET, Cloud Engineering, Data Management, Warehouses, NumPy, Analytics Development, English, Data Structures

Data Engineer

2017 - 2019
Afiniti
  • Designed and developed a database architecture and data model for a business flow using Talend Open Studio, SSIS, and MySQL Workbench.
  • Performed large-scale data conversions, migrations, and optimization to reduce resource and time costs while maintaining data integrity.
  • Wrote SQL stored procedures and Python scripts for data quality checks and ad-hoc analyses.
  • Implemented complex data processing jobs, including integrating customer relationship management (CRM) and third-party data into daily processes.
  • Established automated emails to have more visibility on the progress of regular data processing tasks.
  • Analyzed clients' business processes to propose optimal solutions for data requirements.
Technologies: SQL, MySQL, SSIS, SQL Server, Talend, Talend ETL, Data Engineering, Database, Data Analysis, Analysis, Data Visualization, Business Intelligence Development, Slowly Changing Dimensions (SCD), Query Optimization, Data Quality Analysis, Shell Development, Data Integration, ETL, Data Warehouse, Database Design, Python, ETL Tools, Data Science, Database Analytics, RDBMS, Data Processing, Business Intelligence (BI) Platforms, SQL DML, Performance Tuning, T-SQL, Relational Databases, Data Modeling, Database Modeling, Stored Procedure, Business Logic, MariaDB, Data Architecture, Logical Database Design, Database Schema Design, Relational Database Design, Visual Studio Development, Quality Management, Dimensional Modeling, ELT, Pandas, Schemas, Jupyter Notebook, Relational Data Mapping, Reporting, BI Reporting, Windows PowerShell, AnyDesk, NoSQL, Database Optimization, Data Extraction, CSV Export, CSV, Scripting, .NET, HTML, CSS, Cloud Engineering, Data Management, Warehouses, NumPy, Analytics Development, English, Data Structures

Projects

Automated ETL Tool

As the driving force behind the project, I contributed to the development of an in-house data ingestion system, applying best practices in data engineering, development, and testing. The stack included Python, Flask, DuckDB, PostgreSQL, Grafana, Prefect, and Prometheus, and the approach prioritized scalability, efficiency, and maintainability.

The result is a centralized application with an intuitive drag-and-drop interface that gives users a smooth experience. It serves as a one-stop shop where both technical and non-technical users can design and deploy ETL pipelines with little effort. The architecture follows established industry practices, optimizing for performance, reliability, and data integrity.

Adhering to agile methodologies, we shipped iterative improvements and new features quickly. Rigorous unit, integration, and end-to-end testing kept the product reliable and stable, and automated testing frameworks streamlined the process, giving us thorough coverage and fast issue identification.
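
The product itself is proprietary, but the core ingestion pattern can be sketched with Prefect and DuckDB; the file, table, and flow names below are illustrative and not taken from the actual codebase:

```python
# Minimal sketch of the Prefect + DuckDB ingestion pattern (illustrative names).
import duckdb
import pandas as pd
from prefect import flow, task


@task(retries=2)
def extract(source_path: str) -> pd.DataFrame:
    # In the real product the source is configured through the UI;
    # here we simply read a CSV file.
    return pd.read_csv(source_path)


@task
def load(df: pd.DataFrame, db_path: str, table: str) -> int:
    con = duckdb.connect(db_path)
    # Full load: recreate the target table from the extracted frame.
    con.execute(f"CREATE OR REPLACE TABLE {table} AS SELECT * FROM df")
    # Simple audit entry: table name, row count, load timestamp.
    con.execute(
        "CREATE TABLE IF NOT EXISTS audit_log(table_name VARCHAR, row_count BIGINT, loaded_at TIMESTAMP)"
    )
    con.execute("INSERT INTO audit_log VALUES (?, ?, now())", [table, len(df)])
    con.close()
    return len(df)


@flow(name="csv-to-duckdb")
def ingest(source_path: str = "orders.csv", db_path: str = "warehouse.duckdb") -> int:
    df = extract(source_path)
    return load(df, db_path, table="raw_orders")


if __name__ == "__main__":
    ingest()
```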

Meltano Custom Extractor

https://github.com/muneebsmh/meltano_custom_extractor
This project is a Python-based ELT (extract, load, transform) application built with Meltano around the SpaceX API. The pipeline first extracts raw data on rocket launches, missions, and related details from the API. The extracted data is then transformed with dbt (data build tool), using its data modeling, versioning, and testing features to refine, enrich, and structure it for specific business and analytical needs. The transformed data is finally loaded into a PostgreSQL database, ready for analysis and exploration.

The project also includes a custom Meltano extractor that fetches data from the SpaceX API at regular intervals. It supports both incremental and full-load runs, keeping the dataset up to date and complete.
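
The repository contains the full tap; as a rough sketch of how such a custom extractor is structured with the Meltano Singer SDK, an incremental launches stream might look like the following (the schema is trimmed and the field selection is illustrative, not the repo's exact code):

```python
# Illustrative Singer SDK sketch of a custom SpaceX tap (not the full repo code).
from singer_sdk import Tap
from singer_sdk import typing as th
from singer_sdk.streams import RESTStream


class LaunchesStream(RESTStream):
    """Incremental stream over SpaceX launches."""

    name = "launches"
    url_base = "https://api.spacexdata.com/v4"
    path = "/launches"
    primary_keys = ["id"]
    replication_key = "date_utc"  # enables incremental extraction

    schema = th.PropertiesList(
        th.Property("id", th.StringType),
        th.Property("name", th.StringType),
        th.Property("date_utc", th.DateTimeType),
        th.Property("success", th.BooleanType),
        th.Property("rocket", th.StringType),
    ).to_dict()


class TapSpaceX(Tap):
    """Meltano-compatible tap exposing the SpaceX streams."""

    name = "tap-spacex"

    def discover_streams(self):
        return [LaunchesStream(tap=self)]


if __name__ == "__main__":
    TapSpaceX.cli()
```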

Hopsworks Feature Store Python Integration

https://github.com/muneebsmh/hopsworks-integrations
Addressing the challenges of limited documentation and global developer support for Hopsworks, I took on the task of developing Python APIs for the feature store. Despite tight timelines and minimal community assistance, I created all necessary APIs successfully. Moreover, I published the "Hopsworks Integration" library on PyPi.org (pypi.org/project/Hopsworks-Integration/). This library has gained traction and is now recognized by the Hopsworks development community.
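
The library's own API is documented on PyPI; as a general illustration of the kind of feature-store round trip such an integration covers, a minimal example with the official hopsworks client could look like this (the API key, feature group, and feature names are placeholders):

```python
# Illustrative round trip against the Hopsworks feature store using the
# official `hopsworks` client; key, group, and feature names are examples.
import hopsworks
import pandas as pd

project = hopsworks.login(api_key_value="YOUR_API_KEY")  # placeholder key
fs = project.get_feature_store()

# Register (or fetch) a feature group and insert engineered features.
orders_fg = fs.get_or_create_feature_group(
    name="order_features",
    version=1,
    primary_key=["customer_id"],
    description="Per-customer order aggregates",
)
features = pd.DataFrame(
    {"customer_id": [101, 102], "order_count_30d": [4, 1], "return_rate": [0.25, 0.0]}
)
orders_fg.insert(features)

# Read the features back for training or analysis.
print(orders_fg.read().head())
```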

Payment Risk Engine | COD Blocking

A system that identifies customers with poor buying histories and blocks the cash-on-delivery (COD) option for them. Previously, we had no way of tracking customer performance, which led to many customers rejecting delivered orders at their doorsteps and left Daraz bearing the failed logistics cost. This system let us block COD for those customers and require them to pay for their orders in advance. It rests on a delicate trade-off: it improves gross-to-net revenue but can also shrink the customer base, since some customers lose the COD option for parcel deliveries.

I first conducted a thorough data analysis to quantify the impact on the business, then built data pipelines and a performance dashboard to gauge the system's effect on Daraz's overall business.
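
The production logic is Daraz-internal; a simplified sketch of the kind of rule that flags customers for COD blocking, with hypothetical column names and thresholds, might look like this:

```python
# Sketch of the kind of rule used to flag customers for COD blocking.
# Column names and threshold values are hypothetical examples.
import pandas as pd

orders = pd.DataFrame(
    {
        "customer_id": [1, 1, 1, 2, 2, 3],
        "delivery_status": ["rejected", "delivered", "rejected", "delivered", "delivered", "rejected"],
    }
)

stats = (
    orders.assign(rejected=orders["delivery_status"].eq("rejected"))
    .groupby("customer_id")
    .agg(total_orders=("delivery_status", "size"), rejected_orders=("rejected", "sum"))
)
stats["rejection_rate"] = stats["rejected_orders"] / stats["total_orders"]

# Block COD for customers with enough history and a high rejection rate.
stats["block_cod"] = (stats["total_orders"] >= 2) & (stats["rejection_rate"] >= 0.5)
print(stats)
```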

Delayed Order Notification System

An automated alert system that notifies customers about delayed orders based on specific logistics metrics in order to enhance the customer experience. I worked on developing the system's end-to-end data pipelines, designed the business flow, and made a BI dashboard to gauge the performance.

This project not only enhanced the customer experience but also helped in gauging Daraz's logistics performance and highlighted key metrics that needed to be fixed.
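
A simplified sketch of the delay check such a pipeline performs, with hypothetical column names and dates, could look like this:

```python
# Sketch of the delay check behind the notification pipeline; column names
# and the delivery dates are hypothetical.
import pandas as pd

shipments = pd.DataFrame(
    {
        "order_id": ["A1", "A2", "A3"],
        "promised_delivery": pd.to_datetime(["2021-06-01", "2021-06-02", "2021-06-03"]),
        "delivered_at": pd.to_datetime(["2021-06-01", None, None]),
    }
)

today = pd.Timestamp("2021-06-04")
undelivered = shipments["delivered_at"].isna()
overdue = shipments["promised_delivery"] < today

# Orders past their promised date and still undelivered trigger a notification.
delayed = shipments[undelivered & overdue]
for order_id in delayed["order_id"]:
    print(f"notify customer: order {order_id} is delayed")
```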

Dashboard Usage Analysis

Every data visualization dashboard consumes a certain amount of computing and memory resources, and knowing how much of the assigned cloud quota the dashboards consume is imperative in the eCommerce industry. At the time, Daraz had more than 700 dashboards; refreshing them daily consumed significant resources and slowed down other processes. I therefore needed to identify which dashboards were used most frequently and which were not, so the latter could be decommissioned to save resources.

I created a meta dashboard that would rank the dashboards by tracking the daily, weekly, and monthly active users and their visits. Also, this meta dashboard tracked individual user history on multiple dashboards, i.e., the number of dashboards that a particular user regularly visits, which helped us filter out the executives' dashboards.
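
A simplified sketch of the ranking logic behind such a meta dashboard, assuming a hypothetical dashboard-visit log, might look like this:

```python
# Sketch of the meta-dashboard ranking logic; the usage log schema is hypothetical.
import pandas as pd

visits = pd.DataFrame(
    {
        "dashboard_id": ["funnel", "funnel", "logistics", "marketing"],
        "user_id": ["u1", "u2", "u1", "u1"],
        "visit_date": pd.to_datetime(["2021-09-01", "2021-09-01", "2021-09-02", "2021-08-15"]),
    }
)

as_of = pd.Timestamp("2021-09-02")
windows = {"daily_users": 1, "weekly_users": 7, "monthly_users": 30}

ranking = pd.DataFrame(index=visits["dashboard_id"].unique())
for column, days in windows.items():
    recent = visits[visits["visit_date"] > as_of - pd.Timedelta(days=days)]
    ranking[column] = recent.groupby("dashboard_id")["user_id"].nunique()

# Dashboards with no recent users are candidates for decommissioning.
ranking = ranking.fillna(0).sort_values("monthly_users", ascending=False)
print(ranking)
```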

Enterprise Data Warehouse

At my previous company, Afiniti, multiple clients used the Afiniti engine to optimize their call center performance through data-driven customer-agent pairing. The legacy enterprise data portal Afiniti used to gauge clients' performance had limitations: there was no change data capture or historical analysis of clients, and the optimization metrics used to calculate a client's performance, such as handle time and wait time, were not recorded historically.

The enterprise data warehouse (EDW) addresses all the limitations of the enterprise portal and adds features such as a standardized model that fits different business requirements without any change to the architecture. It let us track historical changes in clients' performance and provided a holistic view of all clients in a single portal at any point in time.

I worked on creating the whole data warehouse from scratch, including developing all data pipelines and dimensional modeling.
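
The warehouse design itself is Afiniti-internal; a minimal sketch of the slowly changing dimension (type 2) handling that makes this kind of historical tracking possible, with hypothetical table and column names, could look like this:

```python
# Sketch of SCD type 2 handling behind the EDW's historical view;
# table layout and column names are hypothetical.
import pandas as pd

HIGH_DATE = pd.Timestamp("9999-12-31")

dim_client = pd.DataFrame(
    {
        "client_id": [10],
        "handle_time_target": [300],
        "valid_from": [pd.Timestamp("2018-01-01")],
        "valid_to": [HIGH_DATE],
        "is_current": [True],
    }
)

incoming = pd.DataFrame({"client_id": [10], "handle_time_target": [280]})
load_date = pd.Timestamp("2018-06-01")

current = dim_client[dim_client["is_current"]]
merged = incoming.merge(current, on="client_id", suffixes=("", "_old"))
changed = merged[merged["handle_time_target"] != merged["handle_time_target_old"]]

# Close the old version of each changed row ...
expire = dim_client["client_id"].isin(changed["client_id"]) & dim_client["is_current"]
dim_client.loc[expire, ["valid_to", "is_current"]] = [load_date, False]

# ... and append the new version, keeping the full change history.
new_rows = changed[["client_id", "handle_time_target"]].assign(
    valid_from=load_date, valid_to=HIGH_DATE, is_current=True
)
dim_client = pd.concat([dim_client, new_rows], ignore_index=True)
print(dim_client)
```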

Data Pull from Dynamics 365 Using Azure Logic Apps

A data integration pipeline that pulls data from selected Microsoft Dynamics 365 data entities into our supply chain meta-model at Seeloz. I built the pipeline in Azure Logic Apps to fetch data from the entities and load it into Azure Blob Storage, where our downstream ETL jobs picked it up. Communication was handled through Azure Service Bus, the app was triggered by an HTTP POST request with the required arguments passed as a JSON payload, and error handling and logging were implemented at each step.
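
The Logic App definition is not reproduced here; as an illustration of how such a run is triggered, a minimal POST call with a JSON payload might look like this (the callback URL and payload fields are hypothetical placeholders):

```python
# Illustration of kicking off the Logic App run; the callback URL and
# payload fields below are hypothetical placeholders.
import requests

LOGIC_APP_URL = "https://prod-00.westeurope.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke"

payload = {
    "entity": "SalesOrderHeadersV2",   # Dynamics 365 data entity to pull
    "load_type": "incremental",        # or "full"
    "target_container": "supply-chain-raw",
}

response = requests.post(LOGIC_APP_URL, json=payload, timeout=60)
response.raise_for_status()
print("Logic App run accepted:", response.status_code)
```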

Education

2018 - 2021

Master's Degree in Computer Science

National University of Computer and Emerging Sciences - Karachi, Pakistan

2013 - 2017

Bachelor's Degree in Computer Science

National University of Computer and Emerging Sciences - Karachi, Pakistan

Libraries/APIs

NumPy, REST API, Pandas, PySpark, Flask-RESTful, SQL

Tools

Salesforce Development, MySQL, Talend ETL, Visual Studio Development, GitHub, Tableau Development, Business Intelligence Development, BigQuery, Apache Airflow, Grafana, Prefect, Spark, Azure Logic Apps, PyCharm, IntelliJ IDEA, Shell Development, GitLab CI/CD, Celery

Languages

SQL, Python, SQL DML, T-SQL, Stored Procedure, XML, HTML, CSS, Scala

Paradigms

ETL, Database Design, Dimensional Modeling, REST, Business Intelligence Development, Agile Development

Storage

MySQL, Database, Data Integration, SQL, RDBMS, SQL Server, Relational Databases, Database Modeling, MariaDB, NoSQL, SSIS, Hadoop, PostgreSQL, Azure, HDFS, MongoDB, Google Cloud Development, Alibaba Cloud, Azure Blobs, JSON, Redis

Frameworks

Windows PowerShell, Hadoop, .NET, Flask, Big Data Architecture, Spark

Platforms

Talend, Apache, Azure Design, Azure SQL Data Warehouse, Jupyter Notebook, Docker, Dedicated SQL Pool (formerly SQL DW), Meltano, Databricks, AWS, Airbyte

Other

Data Warehouse, Quality Management, Slowly Changing Dimensions (SCD), Data Engineering, Query Optimization, Data Quality Analysis, Database, ETL Tools, Data Science, Database Analytics, Data Processing, Business Intelligence (BI) Platforms, Performance Tuning, Data Modeling, Business Logic, Data Architecture, Logical Database Design, Database Schema Design, Relational Database Design, ELT, Schemas, Relational Data Mapping, Reporting, BI Reporting, AnyDesk, Data Migration, Database Optimization, Data Extraction, CSV Export, CSV, Data Management, Warehouses, Analytics Development, English, Data Visualization, Google BigQuery, Big Data Architecture, Data Analysis, Analysis, Cloud Infrastructure, CI/CD Pipelines, API Integration, Reports, Business Intelligence Development, Dashboard, Dashboard Development, APIs, DuckDB, Scripting, Cloud Engineering, Azure Data Factory, Data Build Tool (dbt), Azure Databricks, Prometheus, Node Exporter, MinIO, S3 Buckets, Data Structures, OOP Designs, Blink SQL, Data, Azure Service Bus, Microsoft Azure, Apache Superset, Workflow Automation, Hopsworks, Feature Engineering
