
Muhammad Naeem Ahmed
Verified Expert in Engineering
Unix Shell Scripting Developer
Muhammad brings nearly 15 years of IT experience implementing data warehousing solutions. He delivers reliable, maintainable, and efficient code using SQL, Python, Perl, Unix, C/C++, and Java. His work helped eBay increase revenue and Walmart improve its processes. Muhammad has a strong focus on big data technologies, automating redundant tasks to improve workflows, and delivering exciting, efficient, and profitable client solutions.
Preferred Environment
Snowflake, Teradata SQL Assistant, DBeaver, Presto DB, PyCharm
The most amazing...
...project I've developed was converting buyers into sellers at eBay as part of a hackathon project. This effort boosted overall revenue by 0.1%.
Work Experience
Senior Data Engineer
Stout Technologies
- Managed the Facebook video pipeline, containing attribute data such as genre, PG rating, and trending status, using Python and SQL.
- Optimized production SQL for throughput and quality.
- Developed queries and built dashboards for business-critical video attributes.
Senior Data Engineer
Walmart Labs
- Architected, developed, and supported new features in the project’s data flow that calculated cumulative/daily metrics such as converted visitors and first-time buyers on the home and search pages.
- Performed ad hoc analysis of user behavior on sensor and beacon data parsed in Hive.
- Automated the existing ETL pipeline with Python to build SQL on the fly against Hive map columns, reducing the 2-3 week development cycle for each new feature.
- Wrote a Hive UDF to replace the use of R for calculating p-values in the Hive pipeline. Supported existing processes and tools, mentored fellow engineers, and triaged data issues for timely resolution.
- Participated in the effort to migrate on-premises jobs to the GCP cloud.
Senior Software Engineer
eBay
- Converted Teradata SQL to Spark SQL for a migration project. Developed regex-based string-processing UDFs for Spark.
- Wrote Pig, Hive, and MapReduce jobs on user behavior clickstream data. Automated Unix scripts through crontabs to run analyses, such as first-time buyer counts and conversion metrics, on listings data.
- Prepared data for predictive and prescriptive modeling.
- Built tools and custom wrapper scripts in Python to automate DistCp Hadoop commands and log processing.
- Developed and supported ETL jobs into production. The jobs entailed both Teradata and Hadoop scripts.
Database Analyst
PeakPoint Technologies
- Performed data modeling and mapping; developed and deployed ETL code. Wrote advanced Teradata SQL.
- Developed extended stored procedures, DB-link, packages, and parameterized dynamic PL/SQL to migrate the schema objects per business requirements.
- Designed a logical data model and implemented it as a physical data model.
- Developed and deployed to production automated ETL jobs scheduled in the UC4 tool.
Experience
Teradata SQL to Spark SQL Migration Project
Experimentation ETL Code Refactor
Converting Buyers Into Sellers Through Purchase History
Python Wrapper for Hadoop Administrative Commands
Senior Data Engineer
Facebook Watch Data Pipeline Engineer
Senior Developer
- Researching various crypto bots available in the market and their technical features and trading strategies.
- Developing a Python codebase to implement effective crypto bot strategies, taking into account fear-and-greed signals, on-chain analysis of whale activity, etc. Writing auto-buy, auto-sell, and portfolio-rebalancing code.
- Performing deep dives into crawled web data on crypto news.
Skills
Languages
Python, SQL, T-SQL (Transact-SQL), SQL DML, Snowflake, Bash Script, JavaScript, GraphQL, C++, Java, R
Frameworks
Apache Spark, Presto DB, Hadoop, Windows PowerShell
Libraries/APIs
PySpark
Tools
Microsoft Power BI, PyCharm, Teradata SQL Assistant, Erwin, Sqoop, Flume, BigQuery, Microsoft Access, Apache Airflow, Oozie, Tableau, Google Cloud Composer, Looker, GitHub
Paradigms
ETL, Database Design, ETL Implementation & Design, MapReduce, DevOps, Azure DevOps, Automation, Dimensional Modeling, Data Science, Business Intelligence (BI)
Platforms
Azure, Unix, Hortonworks Data Platform (HDP), Apache Pig, Apache Kafka, Amazon Web Services (AWS), Docker, Kubernetes, MapR, Google Cloud Platform (GCP)
Storage
MySQL, Databases, NoSQL, DBeaver, PL/SQL, Data Pipelines, Amazon DynamoDB, Database Architecture, Database Modeling, Apache Hive, Elasticsearch, Teradata, SQL Server 2014, Oracle PL/SQL, Azure SQL, Microsoft SQL Server, SQL Performance, Amazon S3 (AWS S3), PostgreSQL, Oracle 11g, Relational Databases
Other
Data Modeling, Data Warehousing, Data Analysis, Data Architecture, ETL Tools, Data Engineering, APIs, Machine Learning, Big Data, ETL Development, Data Warehouse Design, Unix Shell Scripting, Customer Data, Data, Web Scraping, Azure Databricks, Data Cleansing, Azure Data Factory, Unstructured Data Analysis, Data Visualization, Data Analytics, Image Processing, Data Queries, Performance Tuning, Analytics, Reports, Scripting, Inventory Management, Microsoft Azure, Unidash
Education
Bachelor's Degree in Computer Science
FAST National University - Islamabad, Pakistan
Certifications
Teradata Certified Master V2R5
Teradata