Oracle to SQL Server and SQL Server to Oracle Migration Guide

Migrating in either direction between Microsoft SQL Server and Oracle Database? There are some significant pitfalls to be aware of…


“Vendor dependency” is a scary phrase for many business executives. On the other hand, it’s widely understood in the industry that complete “vendor independence” is not achievable. This is especially true in the case of databases.

Two of the most widespread enterprise RDBMS platforms are Oracle Database and Microsoft SQL Server (for brevity, for the rest of this article, I’ll call them “Oracle” and “SQL Server” respectively). Sure, IBM Db2 competes with Oracle on ever-shrinking—but still critical in many areas—mainframe platforms. And rapidly progressing open-source alternatives, such as PostgreSQL, are gaining a firm footing in dynamic environments on low-to-mid-level commodity hardware and on the web.

But Oracle vs. SQL Server is the choice that many business executives are facing when their organizations need a new RDBMS. The final selection is based on multiple factors: license cost, available internal expertise and past experience, compatibility with existing environments, partner relations, future business plans, etc. But even with the most thorough upfront evaluation and best-educated decision making, sometimes factors change and then the platform needs to change, too. I know this because in the course of my career, I’ve implemented such migrations twice, prepared the evaluation of transition feasibility once, and I’m working on cross-platform functionality migration at this very time.

Both Oracle and SQL Server are “old-school,” partially ANSI-compliant RDBMS implementations. Leaving aside procedural extensions—PL/SQL and Transact-SQL have different syntaxes but are generally simple to translate between—and newer object-oriented features, SQL code can look deceptively similar. And this is a dangerous honey trap.

Two of the most critical points for any migration project between Oracle and SQL Server (in either direction) are transactions and, closely related, temporary tables, which are a key tool in resolving transaction scope. We’ll also cover nested transactions—those that exist within the scope of another transaction—because they’re a key part of implementing user security auditing in Oracle. But in SQL Server, user security auditing needs a different approach because of its COMMIT behavior in that context.

Understanding Transaction Structure: Observing Oracle vs. SQL Server from Ten Thousand Feet

Oracle transactions are implicit. This means that you don’t need to start a transaction—you’re always in a transaction. And this transaction is open until you issue a commit or rollback statement. Yes, you can start a transaction explicitly, define rollback safe points, and set inner/nested transactions; but what’s important is that you’re never “not in a transaction” and you must always issue a commit or rollback. Also note that issuing a data definition language (DDL) statement (CREATE, ALTER, etc.; within a transaction, this can be done through dynamic SQL) commits the transaction in which it was issued.
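A minimal sketch of this behavior in Oracle SQL (the table and column names here are hypothetical):

```sql
-- Oracle: no explicit "begin transaction" is needed; the first DML statement
-- opens a transaction automatically.
UPDATE accounts
   SET balance = balance - 100
 WHERE account_id = 42;    -- the change is visible only to this session until COMMIT

COMMIT;                    -- or ROLLBACK; either way, this ends the current transaction

-- Any DDL statement implicitly commits the transaction in which it is issued:
CREATE TABLE audit_log (id NUMBER, note VARCHAR2(200));
```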

Unlike Oracle, SQL Server has explicit transactions. This means that unless you explicitly start a transaction, all your changes are committed “automatically”—immediately, as each statement is processed—because every DML statement (INSERT, UPDATE, DELETE) creates a transaction on its own and commits it unless it errors out.
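The equivalent sketch in Transact-SQL, again with hypothetical table names:

```sql
-- SQL Server: by default, each DML statement autocommits on its own.
UPDATE dbo.Accounts
   SET Balance = Balance - 100
 WHERE AccountId = 42;     -- committed immediately; there is nothing left to roll back

-- To group several statements into one unit of work, the transaction must be explicit:
BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 42;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 77;
COMMIT TRANSACTION;        -- or ROLLBACK TRANSACTION to undo both updates
```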

This is the result of the difference in data storage implementations—how data is written to a database and how the database engine reads it.

In Oracle, DML statements change records directly in the data file. The old copy of the record (or empty record substitution, in case of INSERT) is written to the current rollback file, and the exact time of the change is marked on the record.

When a SELECT statement is issued, it’s processed against the data as it existed when the statement started; that is, it sees only changes made before it was issued. If any records are modified after the SELECT starts, Oracle reads the older version of those records from the rollback file.

This is how Oracle implemented read consistency and non-blocking read/write. It’s also why long-running queries on very active transactional databases would sometimes run into the infamous error ORA-01555, snapshot too old: rollback segment ... too small. (This means that the rollback file needed by the query for an older version of the record has already been reused.) This is why the correct answer to the question “How long should my Oracle transaction be?” is “As long as needed and no longer.”

SQL Server’s implementation is different: The database engine writes and reads directly to/from data files only. Every SQL statement (SELECT/INSERT/UPDATE/DELETE) is a transaction unless it’s a part of an explicit transaction grouping multiple statements together, allowing changes to be rolled back.

Every transaction locks the resources that it needs. Current releases of Microsoft SQL Server are highly optimized to lock only the resources that are needed, but what is needed is defined by the SQL code, so optimizing your queries is critical. That is to say, unlike in Oracle, transactions in SQL Server should be as short as possible, and this is why automatic commits are the default behavior.

And which SQL construct in Oracle and SQL Server is affected by the difference in their transaction implementations? Temp tables.

Temporary Tables in Oracle and SQL Server

The ANSI SQL standard defines local and global temporary tables, but it does not explicitly state how they should be implemented. Both Oracle and SQL Server implement global temporary tables. SQL Server also implements local temporary tables. Oracle 18c introduced “true” local temporary tables as well (which it calls “private temporary tables”). This makes the translation of SQL Server code to Oracle 18c visibly simpler than it is for older versions—rounding out Oracle’s earlier addition of a few related features like auto-incrementing identity columns.

But from a pure functional analysis perspective, the introduction of private temporary tables can be a mixed blessing, as it makes SQL Server to Oracle migration issues seem smaller than they are. This is another honey trap, since it may introduce some new challenges of its own. For instance, design-time code validation can’t be done on private temporary tables, so any code using them will invariably be more error-prone. If you’ve worked with dynamic SQL, let’s put it this way: Private temporary tables are just as complex to debug, but without an apparent unique use case to justify it. That may well be why Oracle added local (private) temporary tables only in 18c and not before.

In short, I don’t see a use case for private temporary tables in Oracle that can’t be implemented just as well or better using global temporary tables. So for any serious conversion, we need to understand the difference between Oracle and SQL Server global temporary tables.

Global Temporary Tables in Oracle and SQL Server

An Oracle global temporary table is a permanent data dictionary object explicitly created at design time by a DDL statement. It’s “global” only in the sense that it’s a database-level object that can be accessed by any database session with the required permissions. However, while its structure is global, all data in a global temporary table is scoped to the session that inserted it and is never, under any circumstances, visible outside that session. In other words, other sessions can have their own data in their own copy of the same global temporary table. So, in Oracle, a global temporary table holds session-local data—used mostly in PL/SQL for code simplification and performance optimization.
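A minimal sketch (with hypothetical table and column names) of how such a table is created and used in Oracle:

```sql
-- Oracle: the structure is a permanent dictionary object,
-- but each session sees only the rows it inserted itself.
CREATE GLOBAL TEMPORARY TABLE gtt_order_staging (
    order_id   NUMBER,
    order_amt  NUMBER
) ON COMMIT PRESERVE ROWS;  -- keep rows for the whole session
                            -- (ON COMMIT DELETE ROWS would scope them to one transaction)

INSERT INTO gtt_order_staging (order_id, order_amt)
SELECT order_id, amount FROM orders WHERE status = 'NEW';

-- Another session querying gtt_order_staging at the same time sees none of these rows.
```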

In SQL Server, a global temporary table is a temporary object created in a block of Transact-SQL code. It exists as long as its creating session is open, and it’s visible—both in structure and data—to other sessions in the database. So, it’s a global temporary object for sharing data across sessions.

A local temporary table in SQL Server differs from global ones by being accessible only in the session that creates it. And the usage of local temporary tables in SQL Server is much more widespread (and, I would say, more critical to database performance) than the use of global temporary tables.
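In Transact-SQL terms (again with hypothetical names), the two look like this:

```sql
-- SQL Server: a global temporary table (## prefix) is visible to every session,
-- in both structure and data, until the creating session ends and all references close.
CREATE TABLE ##SharedStaging (OrderId INT, OrderAmt MONEY);

-- A local temporary table (# prefix) is visible only to the session that created it.
CREATE TABLE #MyStaging (OrderId INT, OrderAmt MONEY);

INSERT INTO #MyStaging (OrderId, OrderAmt)
SELECT OrderId, Amount FROM dbo.Orders WHERE Status = 'NEW';
-- Other sessions can see neither #MyStaging nor its data.
```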

So, how are local temporary tables used in SQL Server, and how should they be translated into Oracle?

The critical (and correct) use of local temporary tables in SQL Server is to shorten or remove transaction resource locks, especially:

  • When a set of records needs to be processed by some aggregation
  • When a set of data needs to be analyzed and modified
  • When the same set of data needs to be used multiple times in the same scope

In these cases, it is very often a better solution to select this set of records into a local temporary table, removing the lock from the source table.
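A sketch of the pattern (hypothetical table names):

```sql
-- SQL Server: copy the working set into a local temp table so that the read
-- against the busy source table finishes quickly, then do the heavy processing
-- against the copy without holding locks on dbo.Orders.
SELECT OrderId, CustomerId, Amount
INTO #OrdersToProcess
FROM dbo.Orders
WHERE Status = 'NEW';

-- Aggregate, analyze, and modify the copy as many times as needed:
SELECT CustomerId, SUM(Amount) AS TotalAmount
FROM #OrdersToProcess
GROUP BY CustomerId;

DROP TABLE #OrdersToProcess;
```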

It’s worth noting that common table expressions (CTEs, i.e., WITH <alias> AS (SELECT...) statements) in SQL Server are merely “syntactic sugar”: They’re expanded into inline subqueries before execution. Oracle CTEs (with a /*+ materialize */ hint) are performance-optimized and create a temporary result set, much like a temporary materialized view. In Oracle’s execution path, such a CTE accesses the source data only once, no matter how many times it is referenced. Based on this difference, SQL Server might perform better using local temporary tables instead of multiple references to the same CTE, as could be done in an Oracle query.
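To illustrate the contrast, a sketch with hypothetical tables:

```sql
-- Oracle: with the materialize hint, the CTE result is built once
-- and both references reuse it.
WITH big_orders AS (
    SELECT /*+ materialize */ customer_id, amount
    FROM   orders
    WHERE  amount > 1000
)
SELECT customer_id, SUM(amount) AS total FROM big_orders GROUP BY customer_id
UNION ALL
SELECT NULL, SUM(amount) FROM big_orders;   -- second reference, no second scan of orders

-- SQL Server: each reference to a CTE is expanded into the query separately,
-- so the local-temp-table equivalent often performs better:
SELECT CustomerId, Amount INTO #BigOrders FROM dbo.Orders WHERE Amount > 1000;

SELECT CustomerId, SUM(Amount) AS Total FROM #BigOrders GROUP BY CustomerId
UNION ALL
SELECT NULL, SUM(Amount) FROM #BigOrders;

DROP TABLE #BigOrders;
```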

Because of the difference between transaction implementations, temporary tables also serve a different function. As a result, moving SQL Server temporary tables to Oracle “as-is” (even with Oracle 18c’s implementation of private temporary tables) can be not only detrimental to performance, but functionally wrong.

On the other hand—when moving from Oracle to SQL Server—attention needs to be paid to transaction length, visibility scope of global temporary tables, and performance of CTE blocks with the “materialized” hint.

In both cases, as soon as migrated code includes temporary tables, we should talk not about code translation, but about system re-implementation.

Enter Table Variables

Developers will probably wonder: What about table variables? Do we need to make any changes or can we move table variables “as-is” in our Oracle-to-SQL-Server migration steps? Well, this depends on why and how they are used in the code.

Let’s look at how both temporary tables and table variables can be used. I’ll start with Microsoft SQL Server.

The implementation of table variables in Transact-SQL somewhat matches temporary tables but adds some functionality of its own. The key difference is the ability to pass table variables as parameters to functions and stored procedures.

This is the theory, but practical usage considerations are a bit more involved.

When I was first tasked with serious Transact-SQL optimization, coming from a deeply entrenched Oracle background, I expected it to work this way: Table variables are in-memory while temporary tables are on-disk. But I found that Microsoft SQL Server versions through 2014 didn’t store table variables in memory. Thus a full table scan on a table variable is indeed a full table scan on disk. Thankfully, SQL Server 2017 and later versions support declarative memory optimization for both temporary tables and table variables.

So, what is the use case for table variables in Transact-SQL if everything can be done as well or better using temporary tables? The key property of a table variable is that it’s a variable: As such, it is not affected by transaction rollback and can be passed as a parameter.

Transact-SQL functions are very restrictive: as the task of a function is to return some singular return value, it—by design—cannot have side-effects. Transact-SQL sees even SELECT as a side-effect, because in SQL Server any access to a table creates an implicit transaction and associated transaction lock. This means that inside a function, we cannot access data in an existing temporary table, nor create a temporary table. As a result, if we need to pass any set of records into a function, we have to use table variables.
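A minimal sketch of passing a row set into a function: SQL Server does this through a user-defined table type and a READONLY table-valued parameter (the type, function, and column names below are hypothetical):

```sql
-- SQL Server: declare a table type, then accept it as a READONLY parameter.
CREATE TYPE dbo.OrderAmountList AS TABLE (OrderId INT PRIMARY KEY, Amount MONEY);
GO
CREATE FUNCTION dbo.TotalAmount (@Orders dbo.OrderAmountList READONLY)
RETURNS MONEY
AS
BEGIN
    RETURN (SELECT SUM(Amount) FROM @Orders);
END;
GO
-- Caller: fill a table variable of that type and pass it in.
DECLARE @Orders dbo.OrderAmountList;
INSERT INTO @Orders (OrderId, Amount) VALUES (1, 100.00), (2, 250.00);
SELECT dbo.TotalAmount(@Orders) AS Total;
```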

Oracle’s considerations for using (global) temporary tables and collection variables (the Oracle PL/SQL equivalent of Transact-SQL table variables) are different. Oracle collection variables are in-memory, whereas temporary tables are located in temporary tablespaces. Oracle functions allow read-only access to tables, permanent or temporary; a simple SELECT in Oracle never places a lock on resources.

In Oracle, the choice of using collection variables vs. temporary tables is based on the expected amount of data, the duration for which this data needs to be preserved, and memory vs. disk allocation and availability. Also, collection variables are the standard way to take a row set as output back to a host program.
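For comparison, a minimal PL/SQL sketch of a collection variable holding a row set in memory (the table names are hypothetical):

```sql
-- Oracle PL/SQL: a collection variable keeps the row set in memory
-- and can be passed between program units or returned to the host program.
DECLARE
    TYPE t_amount_tab IS TABLE OF orders.amount%TYPE;  -- nested-table collection type
    v_amounts t_amount_tab;
    v_total   NUMBER := 0;
BEGIN
    SELECT amount
    BULK COLLECT INTO v_amounts
    FROM orders
    WHERE status = 'NEW';

    FOR i IN 1 .. v_amounts.COUNT LOOP
        v_total := v_total + v_amounts(i);
    END LOOP;

    DBMS_OUTPUT.PUT_LINE('Total: ' || v_total);
END;
/
```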

As most of the SQL syntax elements look very similar between SQL Server and Oracle, conversion of code blocks with table variables from SQL Server Transact-SQL into Oracle PL/SQL is a simpler and more syntactically forgiving process. It could pass a basic validation test, but won’t be functionally correct unless the temporary table reimplementation steps are taken, as outlined above. On the other hand, code moved from Oracle to SQL Server involves more modification steps just to be syntactically valid. To also be functionally correct, it will need to address the in-depth cases of using temporary tables and CTEs.

Inner Transactions (“Nested Transactions”)

In terms of Oracle to SQL Server migration challenges, the next major area to look at is nested transactions.

Just as with temporary tables, if Transact-SQL code includes any transaction, nested or not, or Oracle code includes any nested transactions, we’re talking about not just a plain code migration, but functional reimplementation.

First, let’s look at how Oracle nested transactions behave and how we tend to use them.

Nested Transactions in Oracle

Oracle nested transactions are completely atomic and independent of the outer scope. There is no real use for nested transactions in plain interactive Oracle SQL: When you work with Oracle interactively, you simply commit your changes manually once you reach a consistent state. If you’ve made changes that you can’t commit yet, because a final and still-uncertain step may need to be rolled back, but you want to preserve the work already done, you create a savepoint so you can roll back to it without committing or rolling back the whole transaction.
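A quick interactive sketch of that savepoint pattern (hypothetical tables):

```sql
-- Oracle: preserve completed work without committing it, so that only the
-- uncertain last step can be undone.
UPDATE invoices SET status = 'VERIFIED' WHERE batch_id = 7;

SAVEPOINT after_verification;

DELETE FROM invoice_errors WHERE batch_id = 7;   -- the "uncertain" last step

-- If that last step turns out to be wrong:
ROLLBACK TO SAVEPOINT after_verification;        -- the verification work is still pending

COMMIT;                                          -- now everything kept so far becomes permanent
```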

So, where are nested transactions used? In PL/SQL code. More specifically, in autonomous procedures—those declared with PRAGMA AUTONOMOUS_TRANSACTION. This means that when such code is called (as a named stored procedure or anonymously), its transaction is committed or rolled back independently of the transaction that called it.

The goal of using nested transactions is to have a self-contained unit of work committed or rolled back regardless of what happens to the calling code. An inner transaction that may end in either a commit or a rollback is typically used to check the availability of (or reserve) shared resources—e.g., in implementing a room reservation system. The primary use for commit-only inner transactions is activity monitoring, code tracing, and secure access auditing (i.e., recording that a user attempted changes they weren’t allowed to make).
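A minimal sketch of the auditing case: an autonomous procedure whose commit survives even if the caller rolls back (the procedure and table names are hypothetical):

```sql
-- Oracle PL/SQL: the audit row is committed independently of the caller's transaction.
CREATE OR REPLACE PROCEDURE log_access_attempt (
    p_username IN VARCHAR2,
    p_action   IN VARCHAR2
) AS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO access_audit (username, action, attempted_at)
    VALUES (p_username, p_action, SYSTIMESTAMP);
    COMMIT;  -- this commit is not undone if the calling transaction rolls back
END;
/
```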

Nested transactions in SQL Server Transact-SQL code are completely different.

Nested Transactions in SQL Server

In Transact-SQL, whether an inner transaction is committed depends entirely on the outermost transaction. If an inner transaction has been rolled back, it’s simply rolled back. But if an inner transaction has been committed, it still isn’t fully committed: It can be rolled back if any level of its outer-scope transaction gets rolled back.

So, what is the use of inner transactions if their commits can be undone by rolling back the outer transaction? The answer is the same as in the use case for local temporary tables: releasing the lock on resources. The difference is that it is not a global lock release, but a release within the scope of the immediate outer (direct “parent”) transaction. It is used in complex Transact-SQL code to release inner resources for the outer transaction. It’s a performance optimization and resource management tool.
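A short Transact-SQL sketch of that nesting behavior (hypothetical table):

```sql
-- SQL Server: an inner COMMIT only decrements @@TRANCOUNT; nothing becomes durable
-- until the outermost transaction commits, and an outer ROLLBACK undoes everything.
BEGIN TRANSACTION;                      -- @@TRANCOUNT = 1
    UPDATE dbo.Orders SET Status = 'HOLD' WHERE OrderId = 1;

    BEGIN TRANSACTION;                  -- @@TRANCOUNT = 2 (nested)
        UPDATE dbo.Orders SET Status = 'HOLD' WHERE OrderId = 2;
    COMMIT TRANSACTION;                 -- @@TRANCOUNT back to 1; the inner work is not yet durable

ROLLBACK TRANSACTION;                   -- rolls back both updates, including the "committed" inner one
```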

As Oracle and SQL Server inner/nested transactions have different (perhaps even opposite) behavior and completely different use cases, migration from one platform to the other requires not just a rewrite, but a complete re-architecture of any scope that contains nested transaction blocks.

Other Factors

Are these temp-table- and transaction-centered considerations the only things that need to be addressed in an Oracle to SQL Server migration? While they may be the most important, there are definitely others, each with their own quirks that are worth covering. Below is the remainder of what I’ve found to be the most misunderstood topics:

  1. Identity columns in SQL Server
  2. Sequences in Oracle
  3. Synonyms in Oracle
  4. Filtered indexes
  5. Read consistency (Oracle to SQL Server only)
  6. Use of migration tools

The next part of this series continues by exploring these, particularly the first three.

Temp Tables, Table/Collection Variables, and Nested Transactions: The Top 3 Migration Pain Points

I started with temporary tables, table variables/collections, and nested transactions because these are the most common and obvious points of failure in conversion projects. Any non-trivial system in Oracle Database or Microsoft SQL Server will doubtless use some of them, and using these elements is very tightly coupled with the specific design of transaction support by the respective RDBMS implementations.


Understanding the basics

  • What is the difference between Oracle and SQL Server?

    Both Oracle and SQL Server are partially ANSI-compliant, but their implementations of the standard are different, most notably in transaction structure. Migration of data and applications between Oracle and Microsoft SQL Server is a complex process that requires a deep understanding of both platforms.

  • While creating a database link from one Oracle instance to another is standard functionality, creating a link from Oracle Database to Microsoft SQL Server requires Heterogeneous Services and an extra license—it’s not part of the Oracle Enterprise license.

  • An Oracle transaction is some amount of work (data modifications via INSERT, UPDATE, DELETE) that can be permanently applied (via a COMMIT command) to the database.

  • An SQL transaction is a bundle of data operations followed by a COMMIT or a ROLLBACK statement. Specific database implementations have differences in how transactions are supported and what counts as “work.” For example, Microsoft SQL Server counts SELECT as a part of a transaction, whereas Oracle doesn’t.

  • In Oracle, COMMIT is (much) faster than ROLLBACK.

  • COMMIT can’t be “undone,” but Oracle does support “flashback” queries, which can fetch data as it has been at some point in the past. Using flashback requires a database configuration that enables it. Flashback is normally used by DBAs to recover from human error and rarely as a part of application functionality.

  • A temporary table in Oracle is a permanent table object in a temporary tablespace, holding temporary data visible only within the session or transaction that inserted it (depending on the table’s ON COMMIT setting). Different sessions can use the same temporary table, but cannot see each other’s data.

  • SQL Server’s temporary tables are created in the “tempdb” database, whereas table variables are variable objects (also backed by “tempdb”) that can be passed as parameters, just like normal variables. In SQL Server 2017, table variables gained more features of temporary tables, such as the ability to have indexes.
