Saad Ur Rahman, Developer in Waterloo, ON, Canada

Saad Ur Rahman

Verified Expert in Engineering

Back-end Developer

Location
Waterloo, ON, Canada
Toptal Member Since
September 13, 2022

Saad focuses on distributed and data systems, particularly those that scale and provide consistency, reliability, and high availability. He enjoys developing and architecting systems that provide these guarantees and operate in disparate environments at scale. He is extremely security-conscious and believes security should be foundational during the design phase. Saad is keen on building, developing, debugging, monitoring, and maintaining microservices deployed in the cloud.

Portfolio

Gartner - IT Research & Consulting Delivery
Amazon Web Services (AWS), Java, REST, Amazon Simple Queue Service (SQS)...
Apache Software Foundation
Java, JUnit, Mockito, Bazel, GitHub, Travis CI, Jenkins, Kubernetes, Docker...
Freelance
Amazon Web Services (AWS), Go, Java, Git, SQL, NoSQL, Kubernetes, Docker...

Experience

Availability

Full-time

Preferred Environment

Linux, Amazon Web Services (AWS), Java, Go, JetBrains, Git, GitHub, PostgreSQL, Cassandra

The most amazing...

...project I've worked on is Apache Heron. It's an excellent way to learn and sharpen one's open-source skill set, and it was a thrill working with the team.

Work Experience

Senior Software Engineer, Technical Lead

2023 - PRESENT
Gartner - IT Research & Consulting Delivery
  • Provided technical leadership and guidance to the team. Suggested appropriate technologies and technical patterns for use as well as advised on their benefits and caveats.
  • Mentored engineers and guided development efforts. Provided guidance for the use of AWS services and helped with debugging efforts.
  • Worked on systems design, development, and documentation shared with stakeholders and presented to the ARB. Helped select appropriate architectures and oversaw the design, leveraging requirements and stakeholder feedback.
  • Worked with the business and other development team stakeholders to understand and develop system requirements for integrations across verticals.
  • Helped to coordinate and support development and user efforts across disparate time zones.
  • Developed and maintained well-tested, documented, secure, concise, and efficient code that other developers could maintain. Helped to promote practices to improve code quality and raise awareness of standards.
  • Developed loosely coupled and highly reusable integration middleware designed to be deployed as AWS Lambdas. The solutions are highly robust, secure, and easy to debug and monitor.
  • Designed, deployed, tested, and validated infrastructure based on system requirements. The infrastructure was deployed across multiple environments and was serverless, robust, and highly observable.
  • Monitored and debugged services across their complete lifecycle, supporting engineers, stakeholders, and business users across multiple environments.
  • Helped guide improvements to infrastructure deployment pipelines. These changes drastically improved user experience when deploying infrastructure by speeding up the pipelines and simplifying Lambda build scripts.
Technologies: Amazon Web Services (AWS), Java, REST, Amazon Simple Queue Service (SQS), Amazon API Gateway, Amazon Simple Notification Service (Amazon SNS), Jenkins, Amazon CloudWatch, Salesforce, Git, Jira, Lucidchart, Continuous Delivery (CD), Continuous Integration (CI), Apache Maven

Software Engineer | Committer

2021 - PRESENT
Apache Software Foundation
  • Created and documented features for Apache Heron, extending the ability to customize the Kubernetes execution environment. More information can be found in the project "Apache Heron" below.
  • Split the executor and manager processes for each topology to reduce resource consumption.
  • Added pod template support to completely customize a topology's execution environment.
  • Added support for shared and per-pod persistent volumes via the CLI.
  • Added the ability to restart topologies by implementing a Kubernetes container termination handler. The replica controller would then restart the containers.
  • Created a comprehensive test suite for the Kubernetes scheduler.
  • Isolated and fixed bugs causing intermittent failures in Travis CI.
  • Developed functionality highlighted in the user documentation I wrote, available at https://heron.apache.org/docs/schedulers-k8s-execution-environment.
Technologies: Java, JUnit, Mockito, Bazel, GitHub, Travis CI, Jenkins, Kubernetes, Docker, Testing, Concurrency, Sockets, Continuous Integration (CI), Design Patterns, Linux, gRPC, Debugging, GitOps, Git, Architecture, Software Design

Software Engineer

2015 - PRESENT
Freelance
  • Developed distributed systems using Go and Java with a number of different data storage technologies.
  • Worked with large, complex codebases and contributed to production-grade code.
  • Engaged in feature discussions with fellow engineers and researched technical details for the feature sets. I subsequently wrote design documentation for the new features and produced a plan to develop them incrementally.
  • Designed and developed performant RESTful APIs following the OpenAPI specification. The APIs provide telemetry data, some leveraging a caching layer with eager writes and lazy reads from a database (see the sketch at the end of this entry).
  • Deployed and managed microservices using Kubernetes with Docker as the container engine, keeping configurations idempotent. I used Puppet to manage and update configurations when only configuration changes were needed.
  • Selected and configured monitoring and health metrics with Grafana and Prometheus as the data source.
  • Tracked, monitored, and patched bugs in distributed systems using distributed tracing and logs. I worked with logging and log monitoring through the Logstash, Elasticsearch, and Kibana (ELK) stack.
  • Ran and analyzed performance benchmarks to find bottlenecks and improve performance in the back-end services.
  • Participated in schema design for NoSQL databases, most notably Apache Cassandra, focusing on table design with a keen eye on performance.
  • Engaged in schema design for SQL databases (PostgreSQL). I focused on schema data normalization, key and index selections, and query development (transactions and performance).
Technologies: Amazon Web Services (AWS), Go, Java, Git, SQL, NoSQL, Kubernetes, Docker, Prometheus, Distributed Tracing, Debugging, Grafana, PostgreSQL, Apache Cassandra, Spring Boot, Spring, Redis, Apache Kafka, gRPC, REST, Testing, Concurrency, Sockets, Databases, Continuous Integration (CI), Continuous Delivery (CD), Design Patterns, System Design, Linux, GitOps, GitFlow, Continuous Development (CD), ELK (Elastic Stack), Architecture, Software Design, Leadership, Microservices, HTML, CSS, JavaScript, PHP, Back-end, GraphQL, Python, Back-end Development, REST APIs, APIs, GitHub, GitHub Actions, MySQL, Puppet
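
The caching layer mentioned above follows an eager-write, lazy-read pattern: writes populate the cache immediately, while reads hit the database only on a cache miss and then back-fill the cache. Below is a minimal Go sketch of the idea, assuming the go-redis client and a hypothetical telemetryStore persistence interface; it illustrates the pattern rather than the client's actual code.

```go
package telemetry

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// telemetryStore is a hypothetical persistence interface standing in for the
// real database layer.
type telemetryStore interface {
	Save(ctx context.Context, key, payload string) error
	Load(ctx context.Context, key string) (string, error)
}

type telemetryCache struct {
	rdb *redis.Client
	db  telemetryStore
	ttl time.Duration
}

// Write persists the record and eagerly populates the cache so subsequent
// reads are served without touching the database.
func (c *telemetryCache) Write(ctx context.Context, key, payload string) error {
	if err := c.db.Save(ctx, key, payload); err != nil {
		return err
	}
	return c.rdb.Set(ctx, key, payload, c.ttl).Err()
}

// Read is lazy: it falls back to the database only on a cache miss and then
// back-fills the cache.
func (c *telemetryCache) Read(ctx context.Context, key string) (string, error) {
	val, err := c.rdb.Get(ctx, key).Result()
	if err == nil {
		return val, nil
	}
	if err != redis.Nil {
		return "", err
	}
	val, err = c.db.Load(ctx, key)
	if err != nil {
		return "", err
	}
	return val, c.rdb.Set(ctx, key, val, c.ttl).Err()
}
```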

Software Engineer

2022 - 2023
Teachable
  • Helped with the transition from a Ruby on Rails (RoR) monolith to microservices written in Go.
  • Worked on the Asset Management Service (AMS), which serves all course content. This microservice was an MVP carved out of the RoR monolith and is the most heavily used microservice in the company.
  • Helped develop a plan to break down the components of the RoR monolith into the AMS service. This involved handling the indexing of all assets and their storage and transcoding in the case of video content.
  • Built the service's PostgreSQL database schema. Provided insights to optimize storage and access for demanding query workloads. This involved selecting indices and efficient data types and creating custom data types.
  • Deployed and configured the ElastiCache Redis infrastructure and developed the module interfacing with the cache. This cache is central to much of the functionality that the service provides.
  • Worked on the video transcoding module that transmits data to the parent company for transcoding and stores the transcoded videos in S3. This is an asynchronous operation that can fail, so those failure scenarios needed to be handled.
  • Created a plan to migrate the data from the monolith to the AMS service's database. This involved an ETL job to transform the data to the new schema, plus a change data capture (CDC) cron job to migrate new data arriving in flight during the migration.
  • Developed and deployed the GitHub (GH) Actions CI pipeline with deployments for feature development, staging, and production environments. The GH Actions would perform schema migrations, linting, tests, code coverage checks, and connectivity tests.
  • Facilitated weekly sessions for engineers transitioning over to Go from Ruby. I was equally involved in providing feedback for microservice architecture designs. I was consulted on the effective use of Go and third-party libraries.
  • Took part in cross-team code reviews for projects within the company. Provided feedback on performance and the effective use of various technologies and libraries.
Technologies: Go, Microservices, Amazon Web Services (AWS), PostgreSQL, Redis, Kubernetes, GitHub, GitHub Actions, Continuous Integration (CI)

Software Engineer

2005 - 2015
Oil & Gas Industry Clients
  • Developed and maintained software written in C++ that would scrape oil well data endpoints. The system would then serve raw and processed data and any generated reports.
  • Engaged in feature discussions with engineers and researched technical details for the feature sets. I subsequently wrote design documentation for the new features and produced a plan to develop the features incrementally.
  • Tracked bugs and developed fixes. Each bug would be followed by an update to the test suite and a short written report on the cause and remedy.
  • Ran performance benchmarks to find bottlenecks and improve performance in the back-end services.
Technologies: C++, Networking, Boost, Design Patterns, System Design, Debugging, Testing, Concurrency, Sockets, Databases, Linux, Architecture, Software Design, Leadership, Back-end, Back-end Development

Apache Heron

https://heron.apache.org/docs/schedulers-k8s-execution-environment
I extended the ability to fully customize the Kubernetes execution environment for Apache Heron. I achieved this by expanding and redesigning elements of the existing Kubernetes scheduler to allow more of the analytics container's environments to be modified. This was a challenge because I had to exercise meticulous care to ensure configurations Heron requires to function would remain unmodified. I developed the entire test suite for the Kubernetes scheduler module.
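
Heron's scheduler is written in Java, but the general idea of layering required settings onto a user-supplied pod template while leaving all other customizations intact can be sketched in Go with the Kubernetes API types. The container name and environment variables below are hypothetical and are not Heron's actual configuration keys.

```go
package podtemplate

import (
	corev1 "k8s.io/api/core/v1"
)

// applyRequiredEnv layers environment variables the platform needs onto the
// named container of a user-supplied pod template, leaving every other user
// customization (labels, volumes, resources, and so on) untouched. Required
// values overwrite conflicting user-supplied ones so the platform keeps the
// configuration it needs to function.
func applyRequiredEnv(tpl *corev1.PodTemplateSpec, container string, required []corev1.EnvVar) {
	for i := range tpl.Spec.Containers {
		c := &tpl.Spec.Containers[i]
		if c.Name != container {
			continue
		}
		// Index the user-supplied variables so required values always win.
		index := make(map[string]int, len(c.Env))
		for j, e := range c.Env {
			index[e.Name] = j
		}
		for _, r := range required {
			if j, ok := index[r.Name]; ok {
				c.Env[j] = r
			} else {
				c.Env = append(c.Env, r)
			}
		}
	}
}
```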

FTeX (Crypto-Bro's Bank) – Demonstrator Fiat and Cryptocurrency Bank

https://github.com/surahman/FTeX
FTeX provides core banking functionality: depositing funds, exchanging, purchasing, and selling currencies, checking account balances, and reviewing transactions over a given period.

It leverages real-time currency quotes, and a Redis cache is deployed as a session store for exchange rate offers. Each rate offer is unique and only valid for the requestor for an allotted period.
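
A minimal Go sketch of how such a requestor-scoped, time-limited offer could be stored and redeemed with go-redis. The key scheme, one-time redemption via GETDEL, and function names are illustrative assumptions, not necessarily FTeX's implementation.

```go
package offers

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/google/uuid"
	"github.com/redis/go-redis/v9"
)

// StoreOffer caches a priced exchange-rate offer under a key scoped to the
// requesting client. The TTL enforces the offer's validity window.
func StoreOffer(ctx context.Context, rdb *redis.Client, clientID, quote string, ttl time.Duration) (string, error) {
	offerID := uuid.NewString()
	key := fmt.Sprintf("offer:%s:%s", clientID, offerID)
	if err := rdb.Set(ctx, key, quote, ttl).Err(); err != nil {
		return "", err
	}
	return offerID, nil
}

// RedeemOffer atomically retrieves and deletes the offer so it can only be
// used once; redis.Nil indicates an expired or already-redeemed offer.
func RedeemOffer(ctx context.Context, rdb *redis.Client, clientID, offerID string) (string, error) {
	key := fmt.Sprintf("offer:%s:%s", clientID, offerID)
	quote, err := rdb.GetDel(ctx, key).Result()
	if err == redis.Nil {
		return "", errors.New("offer expired or already redeemed")
	}
	return quote, err
}
```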

PostgreSQL is essential to the service, as all financial transactions require ACID semantics. For demonstration purposes, database transactions have been implemented in both the second tier (service) and the third tier (database): the service executes multiple queries within a transaction block it controls, while the database uses user-defined (stored) procedures. The latter approach improves performance by reducing round trips.
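
A minimal Go sketch contrasting the two approaches using pgx. The accounts table, the transfer_funds procedure, and the minor-unit amounts are hypothetical simplifications and do not reflect FTeX's actual schema.

```go
package ledger

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

// transferServiceTier runs both ledger updates inside a transaction block that
// the service opens, controls, and commits: the second-tier (service) approach.
// Amounts are in minor units for this sketch.
func transferServiceTier(ctx context.Context, pool *pgxpool.Pool, from, to, amount int64) error {
	tx, err := pool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // no-op once the transaction has committed

	if _, err := tx.Exec(ctx,
		"UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
		return err
	}
	if _, err := tx.Exec(ctx,
		"UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
		return err
	}
	return tx.Commit(ctx)
}

// transferDatabaseTier delegates the entire unit of work to a stored procedure,
// reducing round trips between the service and the database: the third-tier approach.
func transferDatabaseTier(ctx context.Context, pool *pgxpool.Pool, from, to, amount int64) error {
	_, err := pool.Exec(ctx, "CALL transfer_funds($1, $2, $3)", from, to, amount)
	return err
}
```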

Meticulous care has been exercised over numeric precision through careful data type selection. Fiat currency values are limited to two decimal places, while cryptocurrency values are limited to eight decimal places. All financial calculations use round-half-to-even (banker's) rounding. A general journal is maintained, and every financial transaction is recorded in it.
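
A minimal Go sketch of these rounding rules using the shopspring/decimal library (the repository's actual decimal library is not stated here); RoundBank rounds half to even.

```go
package main

import (
	"fmt"

	"github.com/shopspring/decimal"
)

func main() {
	// Fiat amounts: two decimal places, round half to even (banker's rounding).
	fiat, _ := decimal.NewFromString("101.4450")
	fmt.Println(fiat.RoundBank(2).StringFixed(2)) // 101.44 (the tie rounds to the even digit)

	// Cryptocurrency amounts: eight decimal places, same rounding rule.
	crypto, _ := decimal.NewFromString("0.123456785")
	fmt.Println(crypto.RoundBank(8).StringFixed(8)) // 0.12345678
}
```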

Multiple Choice Questionnaire Platform

https://github.com/surahman/MCQ-Platform
This is a multiple-choice question API platform written in Go and backed by Apache Cassandra.

This project aims to demonstrate my skill set in designing, architecting, developing, and delivering a service built for containerization. Mock and integration tests are employed; please see the coverage reports in the project repository.

RPC Library

https://gitlab.com/surahman/rpclib
An RPC library written in C++ that leverages BSD sockets. This was an exercise in systems and efficient protocol design. The complete design document is available for review and addresses all design aspects, from the protocol to data structures and architecture.

File Router

https://gitlab.com/surahman/file-router
A project comprising a central hub that acts as a broker for clients and servers willing to engage in a file exchange. This project uses the Java threading and TCP/IP sockets libraries while leveraging advanced TCP/IP protocol features.

Brick Breaker

https://gitlab.com/surahman/brick-breaker
Based on the classic game by Nintendo, Brick Breaker showcases the use of MVC and the Java Swing framework to develop a 2D game. Game elements are drawn using primitive shapes, with images used for UI elements.

Painter

https://gitlab.com/surahman/painter
Painter is a vector drawing application written in Java that utilizes the Java Swing framework to draw the user interface and canvas. It uses the MVC design paradigm to split responsibilities between the various components of the application. The observer pattern is employed to redraw and update the canvas when needed, thereby delivering better performance.

Rad Photo

https://gitlab.com/surahman/radfoto
Rad Photo is an Android application written in Java that scrapes a remote server's directories for images and displays them. Users can then assign each image a rating of zero to five stars.

Twitter Analytics

https://gitlab.com/surahman/twitter-analytics
This project scrapes the Twitter live public data stream to power an analytics dashboard. It leverages Spark Streaming to collect data on user locations, mentions, and hashtags. It then calculates the top five items in each dimension and displays the trend change in percentages.

Edgar Log Analytics

https://gitlab.com/surahman/edgar-logs
This application utilizes the Apache Spark distributed framework and is written in Scala. It was designed as a batch processing job to be executed shortly after the day's Apache server access log files are fully collected. Its purpose is to process the daily access logs from the SEC's EDGAR system to develop insights into filing accesses.

Languages

C++, Java, Go, C, SQL, PHP, Scala, HTML, CSS, JavaScript, GraphQL, Python

Frameworks

JUnit, Mockito, Boost, gRPC, Spark, Spring Boot, Spring

Libraries/APIs

REST APIs, Spark Streaming, Sockets

Tools

Git, GitHub, Grafana, Bazel, Travis CI, Jenkins, ELK (Elastic Stack), Puppet, Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (Amazon SNS), Amazon CloudWatch, Jira, Lucidchart, Apache Maven

Paradigms

REST, Testing, Continuous Integration (CI), Continuous Delivery (CD), Design Patterns, Continuous Development (CD), Microservices

Platforms

Amazon Web Services (AWS), Kubernetes, Docker, Apache Kafka, Android, Linux, Salesforce

Other

Distributed Systems, APIs, Architecture, Back-end, Back-end Development, GitFlow, GitOps, Debugging, Distributed Tracing, RPC, Concurrency, Apache Cassandra, System Design, Software Design, Leadership, Prometheus, Networking, Artificial Intelligence (AI), GitHub Actions, Amazon API Gateway

Storage

NoSQL, PostgreSQL, MySQL, Databases, Redis, Cassandra

Education

2015 - 2020

Bachelor's Degree with Honors in Computer Science

University of Waterloo - Waterloo, ON, Canada

Certifications

SEPTEMBER 2020 - PRESENT

AWS Certified Machine Learning – Specialty

Amazon Web Services

AUGUST 2020 - PRESENT

AWS Certified Data Analytics – Specialty

Amazon Web Services

JULY 2020 - PRESENT

AWS Certified Developer – Associate

Amazon Web Services

JUNE 2020 - JUNE 2023

AWS Certified Solutions Architect – Associate

Amazon Web Services
