Sergey Dmitriev, Data Modeling Developer in Seattle, WA, United States

Member since July 11, 2018
Sergey is a senior data management professional, solution architect, and cloud architect with over 17 years of experience developing data-intensive applications and building and leading technical teams to deliver challenging software development and migration projects. He is skilled in all aspects of software design and development, with demonstrated expertise in planning, designing, and delivering applications.


  • Amazon Web Services
    Python, SQL, Amazon Web Services (AWS), Relational Databases
  • Deutsche Bank
    Shell Scripting, Databases, PL/SQL, Java, SQL, Exadata, Oracle
  • INIT-S
    SQL, Erwin, C#, Sybase, Microsoft SQL Server, Oracle






Preferred Environment

Git, Subversion (SVN), Oracle, Erwin, Toad, Linux, macOS

The most amazing...

...project I've completed consolidated the database of a core banking application from six legacy Oracle databases (100 TB) into a single database on Oracle Exadata.


Work Experience

  • Senior Database Consultant

    2017 - 2018
    Amazon Web Services
    • Planned and implemented relational database migrations to AWS.
    • Designed and implemented data warehouses, data lakes, and operational data stores in AWS.
    • Designed and implemented data pipelines on AWS using SQL and Python.
    • Created data models and database components for databases (Oracle) hosted on AWS RDS and EC2.
    • Optimized performance of reports and SQL queries for databases (Oracle) hosted in AWS.
    • Created labs, sales demos, and conference activities for database migrations to AWS.
    Technologies: Python, SQL, Amazon Web Services (AWS), Relational Databases
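The SQL-and-Python pipeline work described above can be sketched in miniature. This is an illustrative stand-in, not the actual AWS code: the table and column names are hypothetical, and SQLite stands in for an RDS-hosted Oracle database.

```python
import sqlite3

# Hypothetical staging table; in the AWS pipelines this would be an
# Oracle database on RDS or EC2 rather than an in-memory SQLite file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("ACME", 100, 10.0), ("ACME", 50, 12.0), ("XYZ", 200, 5.0)],
)

# The transformation step is expressed in SQL and orchestrated from Python.
rows = conn.execute(
    """
    SELECT symbol, SUM(qty) AS total_qty, SUM(qty * price) AS notional
    FROM trades
    GROUP BY symbol
    ORDER BY symbol
    """
).fetchall()

for symbol, total_qty, notional in rows:
    print(symbol, total_qty, notional)  # → ACME 150 1600.0 / XYZ 200 1000.0
```

Keeping the heavy aggregation in SQL and using Python only for orchestration is what lets the same pattern scale from a local sketch to a managed database.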
  • Lead Data Architect

    2005 - 2017
    Deutsche Bank
• Consolidated legacy Oracle databases onto an Exadata cluster (merging models and data, migrating data, and modifying PL/SQL, shell, and Java code). Database size: 100 TB.
    • Optimized the performance of reporting components on Exadata, reducing running times from hours to seconds.
    • Created the data model and database components (Oracle) for a high-throughput application managing the lifecycle of listed derivatives transactions (2,000 transactions/sec).
    • Designed and implemented the data model and database code for a risk management platform that captures the risk model parameters of every risk calculation for compliance reporting.
    • Designed and implemented a dynamically configured reporting engine (in PL/SQL) for processing a 30 TB dataset.
    • Designed and implemented the data model and database code for a real-time warehouse for the Sales IT department, receiving information from 150+ feeds and applying complex logic to calculate sales commissions.
    Technologies: Shell Scripting, Databases, PL/SQL, Java, SQL, Exadata, Oracle
  • Senior Database Developer, DBA

    2000 - 2005
• Designed and developed document management and resource management systems for nuclear power plants, covering each plant's entire documentation management process. Enhanced and automated resource management, migrated databases between platforms (Sybase ASE, MS SQL Server, Oracle), administered database servers, created deployment packages, and consulted customers.
    Technologies: SQL, Erwin, C#, Sybase, Microsoft SQL Server, Oracle


Projects

  • Transformation Program for Core Banking Equity Settlement Application

Rebuilt a platform composed of COBOL, C++, and EJB 1.0 components with six Oracle 10g databases (100 TB in total) into a Java application hosted on an on-premises cloud platform, with the databases consolidated onto an Oracle Exadata cluster.

• Data Pipelines on Google Cloud Platform

I built data pipelines loading data from JSON-formatted files in Google Cloud Storage into BigQuery, applying complex transformation logic in SQL, then aggregating and loading the data into a data mart in Postgres (Google Cloud SQL) for use by the application UI.
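A minimal sketch of the aggregation step in this pipeline, using plain Python in place of the GCS and BigQuery services (the record shape and field names are hypothetical):

```python
import json
from collections import defaultdict

# Newline-delimited JSON, as it might arrive from Cloud Storage
# (hypothetical records standing in for the real feed).
raw = """\
{"region": "us", "amount": 10.5}
{"region": "us", "amount": 4.5}
{"region": "eu", "amount": 7.0}
"""

# Aggregation logic that the real pipeline expressed in BigQuery SQL
# before loading the result into the Postgres data mart.
totals = defaultdict(float)
for line in raw.splitlines():
    record = json.loads(line)
    totals[record["region"]] += record["amount"]

print(dict(totals))  # → {'us': 15.0, 'eu': 7.0}
```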

  • Data Pipelines on AWS

I used Hadoop as a source for data pipelines as well as an execution platform to run Pig, Hive, and Presto. I used Spark in data pipelines to do ETL in batch mode.

I have Python experience building data pipelines for data warehouses and data science projects, and I have built both on-premises and cloud data pipelines as well as backend serverless cloud APIs on AWS. I used Spark for compute-intensive data pipelines in batch mode, along with Spark SQL. I'm very comfortable working with Spark and will learn new use cases quickly.
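The batch-ETL shape described above can be sketched without a cluster. This stand-in uses Python built-ins in place of Spark's DataFrame API (the records and filter condition are hypothetical), mirroring a typical filter-then-aggregate chain:

```python
from functools import reduce

# Hypothetical input batch; with Spark this would be a DataFrame read
# from Hadoop or S3 rather than an in-memory list.
records = [
    {"user": "a", "clicks": 3},
    {"user": "b", "clicks": 0},
    {"user": "a", "clicks": 7},
]

# Transform: keep only active users, mirroring a Spark filter().
active = [r for r in records if r["clicks"] > 0]

# Aggregate: sum clicks across the batch, mirroring a Spark agg(sum()).
total_clicks = reduce(lambda acc, r: acc + r["clicks"], active, 0)

print(len(active), total_clicks)  # → 2 10
```

The same filter/aggregate structure translates line for line to Spark SQL or the DataFrame API; Spark's contribution is distributing these steps across a cluster.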


Skills

  • Languages

    SQL, Python, C#, Java
  • Tools

    Erwin, Oracle Exadata, AWS CloudFormation, AWS Athena, Toad, Subversion (SVN), Git, Apache Airflow
  • Paradigms

    Database Development, ETL, Database Design
  • Storage

    PL/SQL, Oracle PL/SQL, Exadata, Oracle RDBMS, Oracle SQL, AWS S3, Data Integration, Amazon Aurora, Redshift, Relational Databases, Databases, Microsoft SQL Server, Sybase
  • Other

    Data Modeling, Software Design, Software Architecture, Shell Scripting, Google BigQuery
  • Frameworks

    AWS EMR, Spark, Hadoop
  • Platforms

AWS Lambda, AWS EC2, Linux, macOS, Oracle, Amazon Web Services (AWS), Google Cloud Platform (GCP)
  • Libraries/APIs

    Pandas, NumPy


Education

  • Master's Degree in Computer Science
    1997 - 2003
    Moscow Power Engineering Institute - Moscow, Russia


Certifications

  • AWS Certified Solutions Architect - Associate
    July 2017 - July 2019
  • Oracle Certified Professional (DBA)
    March 2005 - Present
