
Siva Karanam

Verified Expert in Engineering

Software Developer

Kakinada, Andhra Pradesh, India

Toptal member since January 19, 2022

Bio

Siva has over seven years of professional experience in software development, with solid knowledge of algorithms, data structures, and object-oriented programming (OOP). He is experienced in web development, web crawlers/scraping, data cleaning, and data munging and has worked through all phases of the software development lifecycle using Agile methodologies. In addition, Siva has experience with MySQL, Microsoft SQL Server, and Consul and is familiar with data science and machine learning concepts.

Portfolio

OnCorps, Inc.
Python, Angular, JavaScript, Open Source, Web Development, Databricks, Jupyter...
Silq
Python 3, Django, GraphQL, React, APIs, Architecture, REST, Django ORM...
Rakuten
Python 3, React, Consul, Jenkins, Confluence, Jira, REST APIs, Flask, FastAPI...

Experience

  • Python - 6 years
  • Software - 5 years
  • Large-scale Web Crawlers - 4 years
  • Agile Sprints - 4 years
  • Web Development - 4 years
  • REST APIs - 4 years
  • Data Engineering - 2 years
  • MySQL - 2 years

Availability

Full-time

Preferred Environment

PyCharm, Slack, Jira, Confluence, Agile Sprints, MacOS, Python, Software, Linux, Databases

The most amazing...

...recognition I've received is the prestigious Rakuten Excellence Award.

Work Experience

Back-end Developer

2022 - 2023
OnCorps, Inc.
  • Developed a pipeline to process financial statements and generate meaningful insights.
  • Built client-specific Python scripts to find errors in financial statements (a sketch follows this entry).
  • Automated many manual tasks, reducing the effort needed to identify negative cases.
Technologies: Python, Angular, JavaScript, Open Source, Web Development, Databricks, Jupyter, Databases, Containerization, JSON, Back-end Development, Git, Web Scraping, Interactive Brokers API, pylint, Microsoft Excel, Auth0, Website Data Scraping, Information Retrieval, GitHub, ChatGPT, ETL, Data Integration, ETL Implementation & Design, Apache Airflow, Mypy, Cloud, ETL Tools, Large Language Models (LLMs)
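Below is a minimal sketch of the kind of rule-based validation such scripts can perform, assuming a simple statement layout; the column names and the mismatch rule are illustrative assumptions, not OnCorps specifics.

```python
# Hypothetical sketch: flag rows whose running balance disagrees with the
# debits/credits, one example of a "negative case" check. Column names
# ("debit", "credit", "balance") are assumptions for illustration.
import pandas as pd

def find_negative_cases(statement: pd.DataFrame) -> pd.DataFrame:
    """Return rows where the implied running balance does not match."""
    expected = (statement["credit"] - statement["debit"]).cumsum()
    return statement[~expected.eq(statement["balance"])]

if __name__ == "__main__":
    df = pd.DataFrame(
        {"debit": [0.0, 25.0], "credit": [100.0, 0.0], "balance": [100.0, 80.0]}
    )
    print(find_negative_cases(df))  # flags the second row: 100 - 25 != 80
```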

Full-stack Engineer

2022 - 2022
Silq
  • Integrated shipment visibility tracking with third-party carrier APIs, including DHL and UPS (a sketch follows this entry).
  • Tracked and fixed many bugs using Jira. Worked with Retool to design new pages using React components and modified the existing forms.
  • Researched and identified a rich data source for seaports and airports with valid identifiers, then ingested data for 10,000+ seaports and airports.
Technologies: Python 3, Django, GraphQL, React, APIs, Architecture, REST, Django ORM, User Experience (UX), JSON, Swagger, Back-end Development, Git, API Integration, Django REST Framework, eCommerce, Unit Testing, pylint, Microsoft Excel, API Development, Back-end, Website Data Scraping, CSS, HTML, SQLAlchemy, Amazon EC2, Minimum Viable Product (MVP), OAuth, B2B, Java
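A minimal sketch of how such a carrier-agnostic integration can be shaped, assuming JSON-over-HTTP tracking APIs; the endpoint URLs, response shape, and bearer-token auth are placeholders, not the actual DHL or UPS contracts.

```python
# Hypothetical sketch: one entry point that dispatches to per-carrier
# tracking endpoints. URLs and auth scheme are illustrative placeholders.
import requests

CARRIER_ENDPOINTS = {
    "dhl": "https://api.dhl.example/track/{number}",
    "ups": "https://api.ups.example/track/{number}",
}

def fetch_tracking(carrier: str, tracking_number: str, token: str) -> dict:
    """Fetch a shipment's tracking events from the configured carrier API."""
    url = CARRIER_ENDPOINTS[carrier].format(number=tracking_number)
    resp = requests.get(
        url, headers={"Authorization": f"Bearer {token}"}, timeout=10
    )
    resp.raise_for_status()
    return resp.json()
```

Keeping the per-carrier differences in a config mapping keeps the call site identical regardless of which carrier handles the shipment.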

Senior Software Full-stack Engineer

2019 - 2022
Rakuten
  • Built an integrated tool to manage all systems and services under a single platform.
  • Automated manual work using Jenkins and Python, saving roughly 130 manual hours per month.
  • Created a web application that interacts with servers via SSH so commands can be run from a browser (a sketch follows this entry).
Technologies: Python 3, React, Consul, Jenkins, Confluence, Jira, REST APIs, Flask, FastAPI, Pytest, MacOS, Back-end, Python, Docker, SQL, Full-stack, APIs, Linux, Microservices, Google Cloud Platform (GCP), Containerization, Architecture, REST, User Experience (UX), JSON, Swagger, Back-end Development, Elasticsearch, DevOps, Git, System Design, Automation, API Integration, Unit Testing, pylint, ELK (Elastic Stack), API Development, Auth0, CSS, HTML, SQLAlchemy, CI/CD Pipelines, Information Retrieval, Redis Cache, GitHub, Containers, Minimum Viable Product (MVP), OAuth, Mypy, Cloud, Clean Architecture, Next.js, TypeScript, Java
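The browser-to-SSH idea can be sketched as a small Flask endpoint that executes a command over Paramiko; this is a simplified assumption of the design (a production version needs authentication, authorization, and auditing), not the actual Rakuten implementation.

```python
# Hypothetical sketch: run a command on a remote host over SSH from an HTTP
# request. Host, user, and key path come from the request body for brevity.
import paramiko
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/run")
def run_command():
    payload = request.get_json()
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        payload["host"],
        username=payload["user"],
        key_filename=payload["key_path"],
    )
    _, stdout, stderr = client.exec_command(payload["command"])
    result = {"stdout": stdout.read().decode(), "stderr": stderr.read().decode()}
    client.close()
    return jsonify(result)
```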

Data and Back-end Engineer

2018 - 2019
Myntra
  • Designed and developed a config-based crawler framework for large-scale web crawling, processing 20 million records per day (a sketch follows this entry).
  • Wrote post-processors to convert unstructured data crawled from different sources into structured data and automated the scripts to process millions of records every day.
  • Structured and stored the data in databases as needed, then ran models on top of it to derive insights.
  • Developed REST APIs to serve data for fashion insights.
Technologies: Python 3, REST APIs, Large-scale Web Crawlers, Data Engineering, MacOS, Apache Hive, Presto, Azkaban, Apache Kafka, Redis, Back-end, Python, SQL, Linux, Django ORM, JSON, Back-end Development, Data Scraping, Scraping, Git, Selenium, Automation, Proxies, Data Extraction, Web Scraping, Scrapy, eCommerce, pylint, ELK (Elastic Stack), API Development, Website Data Scraping, GitHub, Azure, Minimum Viable Product (MVP), ETL, Data Integration, ETL Implementation & Design, Cloud, B2B, ETL Tools, UI Automator, CAPTCHA
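A config-based crawler can be sketched as a Scrapy spider that reads its start URLs and field selectors from a per-site config; the config schema and selectors below are illustrative assumptions, not the original framework's.

```python
# Hypothetical sketch: a spider driven entirely by a per-site config, so new
# sites need only a new config entry rather than new spider code.
import scrapy

SITE_CONFIG = {
    "start_urls": ["https://example.com/products"],
    "item_selector": "div.product",
    "fields": {"title": "h2::text", "price": "span.price::text"},
}

class ConfigSpider(scrapy.Spider):
    name = "config_spider"
    start_urls = SITE_CONFIG["start_urls"]

    def parse(self, response):
        # Apply the configured CSS selectors to each item node on the page.
        for node in response.css(SITE_CONFIG["item_selector"]):
            yield {
                field: node.css(selector).get()
                for field, selector in SITE_CONFIG["fields"].items()
            }
```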

Software Engineer

2015 - 2018
Headrun Technologies Pvt Ltd
  • Developed site-specific web crawlers, both with and without the Scrapy framework, that extract metadata from websites, deliver it in JSON and MySQL formats, and feed metadata to search databases (a sketch follows this entry).
  • Worked as a full-stack developer and created a health-tracking tool for web crawlers, with JavaScript on the front end and Django, Tastypie, and Nginx on the back end.
  • Pulled data from different sources, such as websites, email, PDFs, and cloud storage, and stored it in different storage back ends using models.
  • Developed bots using Python and Selenium, incorporating CAPTCHA-solving techniques (e.g., text input and drag-and-drop) to automate large-scale booking processes.
  • Implemented a Django-based stats tool covering 2,000+ web crawlers' health, information, and rich metadata; it monitored the crawlers and sent notifications on a color-coded heat map.
Technologies: Python 2, Python 3, JavaScript, Selenium, Beautiful Soup, Requests, MySQL, Web Development, Django, REST APIs, Back-end, Python, Testing, SQL, HTML, CSS, Linux, APIs, Django ORM, JSON, XML, Swagger, Back-end Development, Data Scraping, Scraping, DevOps, Git, Automation, API Integration, Proxies, Data Extraction, Web Scraping, Scrapy, Django REST Framework, Unit Testing, pylint, Microsoft Excel, API Development, Website Data Scraping, Amazon EC2, CI/CD Pipelines, Information Retrieval, GitHub, Minimum Viable Product (MVP), Cloud, ETL Tools, Full-stack, UI Automator, Bots, CAPTCHA
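In the same spirit, a minimal site-specific metadata extractor with Requests and Beautiful Soup might look like the sketch below; the URL and the JSON shape are assumptions for illustration.

```python
# Hypothetical sketch: fetch a page, pull its title and description meta tag,
# and emit the result as JSON, the same shape a downstream MySQL loader or
# search indexer could consume.
import json
import requests
from bs4 import BeautifulSoup

def extract_metadata(url: str) -> str:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    description = soup.find("meta", attrs={"name": "description"})
    meta = {
        "url": url,
        "title": soup.title.string if soup.title else None,
        "description": description.get("content") if description else None,
    }
    return json.dumps(meta)

if __name__ == "__main__":
    print(extract_metadata("https://example.com"))
```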

Experience

Omnia

In this project, I built an integrated tool to manage all systems and services under a single platform. It provides on-demand cloud services and APIs to individuals and teams and was the initial step toward building a cloud platform.

Private Cloud Application for an MNC

I built an internal cloud platform, similar in spirit to AWS, GCP, and Azure, to manage all clusters and services in one place. I worked on both the back end and front end of the product.

Market Intelligence Product for eCommerce Company

I designed and implemented a market intelligence product for a leading eCommerce company, utilizing large-scale web scraping to gather insights from diverse online sources.

KEY CONTRIBUTIONS

Data Architecture and Planning:
• Conducted in-depth analysis of market intelligence requirements and developed a strategic plan for efficient web data extraction.

Scalable Web Scraping Framework:
• Implemented a robust web scraping framework using Python and Scrapy to extract data from eCommerce platforms, competitor websites, and industry news sites.

Configurable Crawling System:
• Engineered a configurable crawling system to adapt to dynamic changes in website structures.
• Implemented regular updates for accurate data retrieval.

Data Cleaning and Transformation:
• Applied advanced data cleaning techniques to handle diverse, unstructured data formats.
• Transformed raw data into structured datasets for integration into the company's analytics pipeline.

Alerting Mechanism:
• Implemented an alerting mechanism to notify stakeholders of significant market changes (a sketch follows the results below).

RESULTS
The market intelligence product significantly improved strategic decision-making.
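One way to sketch the alerting idea is a daily comparison of scraped prices against the previous run, notifying a webhook on large moves; the threshold, data shape, and webhook URL are illustrative assumptions, not the product's actual design.

```python
# Hypothetical sketch: post an alert for every SKU whose price moved more
# than `threshold` between two scrape runs. The webhook URL is a placeholder.
import requests

ALERT_WEBHOOK = "https://hooks.example.test/market-alerts"

def alert_on_price_moves(previous: dict, current: dict, threshold: float = 0.10) -> None:
    for sku, old_price in previous.items():
        new_price = current.get(sku)
        if new_price is None or old_price == 0:
            continue  # skip SKUs missing from the new run or with no baseline
        change = (new_price - old_price) / old_price
        if abs(change) >= threshold:
            requests.post(
                ALERT_WEBHOOK,
                json={"sku": sku, "change": round(change, 4)},
                timeout=10,
            )
```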

Auction Scraper for Predictions

An auction scraper that collects auction details daily.

I was the back-end and data engineer who developed the scraper, collected the data, processed it (converting unstructured data into structured data), and provided it to the data science team to run models on. The scraper helped our clients consume the data through rich insights and predictions.

Skills

Libraries/APIs

React, REST APIs, API Development, Beautiful Soup, Requests, Mypy, Django ORM, Pandas, Interactive Brokers API, SQLAlchemy

Tools

Jenkins, Git, pylint, GitHub, Confluence, Jira, Pytest, Microsoft Excel, Auth0, Crawlera, ELK (Elastic Stack), ChatGPT, Boto 3, Apache Airflow

Languages

Python 3, Python 2, Python, HTML, JavaScript, SQL, CSS, TypeScript, GraphQL, XML, Java

Frameworks

Swagger, Flask, Django, Selenium, Scrapy, Django REST Framework, Presto, Angular, Spark, Next.js

Paradigms

Unit Testing, REST, DevOps, Automation, ETL, ETL Implementation & Design, Testing, Microservices, Clean Architecture, B2B

Storage

JSON, MySQL, Databases, PostgreSQL, Data Integration, Apache Hive, Azkaban, Redis, Elasticsearch, Redis Cache, Amazon S3 (AWS S3), MongoDB

Platforms

Linux, MacOS, Docker, Amazon Web Services (AWS), Amazon EC2, Apache Kafka, Databricks, Google Cloud Platform (GCP), Azure

Other

Large-scale Web Crawlers, Back-end, APIs, Back-end Development, Data Scraping, Scraping, Data Extraction, Web Scraping, Web Crawlers, Website Data Scraping, Information Retrieval, Agile Sprints, Data Engineering, Web Development, Consul, FastAPI, Software, Full-stack, Architecture, System Design, API Integration, Proxies, eCommerce, CI/CD Pipelines, Minimum Viable Product (MVP), OAuth, Cloud, ETL Tools, UI Automator, Bots, CAPTCHA, Open Source, Jupyter, Containerization, User Experience (UX), Full-stack Development, Web App Development, Containers, Large Language Models (LLMs)
