Selahattin Hancer

Data Engineer and Developer in Austin, United States

Member since December 8, 2022
Selahattin is a data engineer, database administrator, and software developer with over 15 years of industry experience. Throughout his career, he has managed mission-critical databases and built data platforms. He is passionate about modern technology stacks and uses them to deliver products that help businesses.

Portfolio

  • Wenzel Spine
    Docker, Kubernetes, Python, SQL Server DBA, Shell Scripting, REST APIs, SQL...
  • Ercot
    Oracle, Linux, Shell Scripting, Upgrades, SQL Server DBA
  • Statera Spina
    Apache Airflow, Data Build Tool (dbt), Data Pipelines, Amazon S3 (AWS S3)...

Experience

Location

Austin, United States

Availability

Part-time

Preferred Environment

Data Build Tool (dbt), PostgreSQL, SQL, Python, Apache Airflow, Snowflake, Shell Scripting, Amazon Web Services (AWS), Data Pipelines, Database Administration (DBA)

The most amazing...

...projects I've worked on involved building data and DevOps pipelines on AWS, managing large databases, and writing tools for monitoring and patching.

Employment

  • Data Engineer

    2021 - PRESENT
    Wenzel Spine
    • Created and maintained ETL processes and ELT data pipelines from various sources.
    • Utilized Python decorators, the data build tool (dbt), and APIs, and deployed them to Docker hosts and Kubernetes.
    • Developed REST APIs and deployed them using Kubernetes.
    Technologies: Docker, Kubernetes, Python, SQL Server DBA, Shell Scripting, REST APIs, SQL, ETL, Apache Airflow, Data Build Tool (dbt), Airbyte, MySQL
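The decorator-based approach mentioned above can be sketched in a few lines of Python; the `etl_step` name and retry behavior here are illustrative, not from an actual codebase:

```python
import functools
import time

def etl_step(retries=3, delay=0.1):
    """Hypothetical decorator: retry an ETL step a few times before failing."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == retries:
                        raise
                    time.sleep(delay)  # back off briefly before retrying
        return wrapper
    return decorator

@etl_step(retries=2)
def extract_orders():
    # A real step would call a source API or query a database.
    return [{"order_id": 1, "amount": 42.0}]

rows = extract_orders()  # → [{"order_id": 1, "amount": 42.0}]
```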
  • Database Administrator

    2020 - 2021
    Ercot
    • Upgraded and managed an extensive data warehouse database.
    • Provided SQL tuning support to a developer during the upgrade process.
    • Conducted a development and test database refresh and wrote a shell script to automate the refresh process.
    • Performed periodic Grid Infrastructure (GI) patching for over 200 databases during each patching cycle.
    • Monitored databases and eliminated downtime by creating Linux and Unix shell and PL/SQL scripts.
    • Planned and completed disaster recovery and switchover testing for databases.
    • Used Oracle Real Application Testing before critical database changes, including migrations, upgrades, major patches, and deployments.
    • Developed a database and system monitoring tool using PowerShell and Transact-SQL.
    Technologies: Oracle, Linux, Shell Scripting, Upgrades, SQL Server DBA
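The monitoring scripts described above come down to threshold checks and alerts; a minimal Python sketch of the idea (the actual scripts were shell and PL/SQL, and the sample data and threshold here are invented):

```python
THRESHOLD_PCT = 90

def over_threshold(tablespaces, threshold=THRESHOLD_PCT):
    """Return names of tablespaces whose used percentage exceeds the threshold."""
    return [ts["name"] for ts in tablespaces
            if ts["used_mb"] / ts["size_mb"] * 100 > threshold]

# Illustrative sample; a real script would query the database views instead.
sample = [
    {"name": "USERS",  "size_mb": 1000, "used_mb": 950},
    {"name": "SYSTEM", "size_mb": 2000, "used_mb": 1200},
]

alerts = over_threshold(sample)  # → ["USERS"]
```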
  • Data Engineer

    2018 - 2019
    Statera Spina
    • Built and orchestrated data pipelines using Apache Airflow.
    • Introduced dbt to the ETL system to transform data and built dbt models.
    • Developed and maintained data pipelines that ingested data from various sources, such as Amazon S3 buckets, third-party REST APIs, and SQL and NoSQL databases.
    Technologies: Apache Airflow, Data Build Tool (dbt), Data Pipelines, Amazon S3 (AWS S3), Docker, REST APIs, SQL, ETL, Amazon Web Services (AWS), Python, Fivetran
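An Airflow DAG like the ones described is, at its core, a set of task dependencies that must run in topological order; a pure standard-library Python sketch of that idea (task names are invented; requires Python 3.9+ for `graphlib`):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline in the spirit of an Airflow DAG:
# two independent extracts feed a transform, which feeds a load.
deps = {
    "transform": {"extract_s3", "extract_api"},
    "load": {"transform"},
}

order = list(TopologicalSorter(deps).static_order())
# both extracts come first (in either order), then transform, then load
```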
  • Data Engineer

    2017 - 2018
    Kredya
    • Utilized AWS Lambda, Amazon DynamoDB, Snowflake, Amazon RDS, Amazon S3, pandas, and Python.
    • Built a DevOps pipeline on AWS using GitHub, Jenkins, Ansible, Docker, and Kubernetes.
    • Designed and implemented data models for a data warehouse and data marts.
    • Developed, maintained, and monitored ETL processes and ELT data pipelines.
    • Provided SQL tuning support for the production database.
    Technologies: Snowflake, Amazon Web Services (AWS), Amazon S3 (AWS S3), Pandas, Python, Jenkins, Git, Ansible, Docker, Kubernetes, AWS Lambda, SQL, PostgreSQL, Data Pipelines, ETL, ELT
  • Database Administrator

    2011 - 2017
    TEB
    • Supported and maintained over 200 Oracle databases in different environments, including production, test, development, and Oracle Active Data Guard.
    • Implemented the migration of databases from Linux to Unix and vice versa.
    • Cloned, refreshed, and restored Oracle databases for development and testing.
    • Created multiple Linux and Unix shell and PL/SQL scripts to proactively monitor Oracle databases and eliminate downtime.
    • Installed and configured integrated and classic GoldenGate for data replication.
    • Configured, installed, and implemented PostgreSQL, SQL Server, and Cassandra databases.
    • Set up PostgreSQL for high availability and replication with a hot standby.
    • Planned and implemented Oracle and PostgreSQL backup strategies.
    • Conducted various implementation tasks concerning alert monitoring for CPU usage, disk space, contention, and high events.
    Technologies: Oracle, PostgreSQL, SQL Server DBA, Linux, Oracle GoldenGate, RMAN, ASM, Exadata, Shell Scripting, Jira
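A hot-standby setup like the one mentioned typically comes down to a few settings on the standby; a hedged sketch for PostgreSQL 12 and later (host and user names are placeholders):

```
# standby's postgresql.conf (illustrative values)
hot_standby = on
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# plus an empty standby.signal file in the standby's data directory
```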
  • Data Engineer

    2008 - 2011
    TEB
    • Wrote advanced queries, PL/SQL packages, functions, triggers, SQL scripts, and views.
    • Developed ETL scripts to load data from multiple sources into the data warehouse and analyzed, cleaned, transformed, and loaded data using ODI.
    • Utilized erwin for the warehouse's logical and physical database modeling.
    Technologies: Python, PL/SQL, SQL, Oracle ODI, ETL
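The analyze/clean/transform part of an ETL load like the one described can be illustrated with a toy Python function (the column names and cleaning rules are invented, not from the actual warehouse):

```python
def clean_row(row):
    """Normalize one raw source record before loading it into the warehouse."""
    return {
        "customer_id": int(row["customer_id"]),   # cast text IDs to integers
        "name": row["name"].strip().title(),      # trim and normalize casing
        "balance": round(float(row["balance"]), 2),  # enforce 2-decimal amounts
    }

raw = {"customer_id": "7", "name": "  jane doe ", "balance": "19.994"}
clean = clean_row(raw)
# → {"customer_id": 7, "name": "Jane Doe", "balance": 19.99}
```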
  • Software Developer

    2005 - 2008
    Obase
    • Created internal tools using Python, SQL, and Oracle.
    • Analyzed, designed, and developed client applications using Oracle Forms and Reports.
    • Used Python to extract data from comma-separated values (CSV) files, JSON, and Microsoft Excel.
    Technologies: Python, APIs, Oracle
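Extracting records from CSV and JSON sources, as described above, needs only the Python standard library; a minimal sketch with invented sample data:

```python
import csv
import io
import json

# Two illustrative in-memory sources; real code would read from files.
csv_data = "id,name\n1,Alice\n2,Bob\n"
json_data = '[{"id": 3, "name": "Carol"}]'

rows = list(csv.DictReader(io.StringIO(csv_data)))  # CSV rows become dicts
rows += json.loads(json_data)                       # JSON records append directly

# rows now holds three records drawn from two different source formats
```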

Experience

  • Modern Data Platform and CI/CD

    Built a modern data platform and DevOps pipeline from scratch for a client using popular tech stacks. For data storage and processing, I used Snowflake for cloud data warehousing and Amazon S3 to build a data lake for data science.

    For data ingestion, I wrote custom Python code with pandas to pull data from various sources, used Airbyte and Fivetran as data source connectors, and used Apache Airflow for orchestration. For data transformation and modeling, I used dbt to transform the data after it was loaded into the warehouse.

    The DevOps pipeline was built on AWS using Ansible, Jenkins, Docker, and Kubernetes and ran on Amazon EC2 instances. Finally, I deployed REST APIs on Kubernetes and containerized dbt with Docker.
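A data lake like the one described usually relies on a consistent, partitioned key layout in S3; a hypothetical helper showing the idea (the prefix, source, and table names are illustrative):

```python
from datetime import date

def lake_key(source, table, day, filename):
    """Build a partitioned object key: raw/<source>/<table>/dt=YYYY-MM-DD/<file>."""
    return f"raw/{source}/{table}/dt={day.isoformat()}/{filename}"

key = lake_key("crm", "orders", date(2022, 1, 15), "part-000.parquet")
# → "raw/crm/orders/dt=2022-01-15/part-000.parquet"
```

Date-based partitions like `dt=` keep daily loads idempotent and make downstream queries easy to prune by day.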

Skills

  • Languages

    SQL, Python, Bash, T-SQL (Transact-SQL), SQL DML, Snowflake
  • Frameworks

    ASM, Windows PowerShell
  • Libraries/APIs

    REST APIs, Pandas
  • Tools

    RMAN, Git, PyCharm, Apache Airflow, Oracle GoldenGate, Jenkins, Ansible, Tableau
  • Paradigms

    ETL, DevOps
  • Platforms

    Linux, Docker, Oracle, AWS Lambda, Amazon Web Services (AWS), Kubernetes, Amazon EC2
  • Storage

    PostgreSQL, Database Management, PL/SQL, SQL Server DBA, Data Pipelines, Database Administration (DBA), SQL Performance, Microsoft SQL Server, Amazon S3 (AWS S3), MySQL
  • Other

    Data Build Tool (dbt), Airbyte, Data Warehousing, Shell Scripting, ELT, Upgrades, Data Engineering, CI/CD Pipelines, Performance Tuning, Data Queries, Software Deployment, System Design, APIs, Cloud, ETL Tools, Fivetran

Education

  • Bachelor's Degree in Computer Engineering
    1999 - 2004
    Sakarya University - Serdivan, Turkey

Certifications

  • Oracle Certified Professional (Cloud)
    OCTOBER 2020 - PRESENT
    Oracle
