Mythics Blog

Oracle Responds to SAP Hana with Oracle Database 12c In-Memory

Posted on June 18, 2014 by Ted Nanayakkara

Tags: Oracle, Oracle Software, Mythics Consulting

Oracle introduced Oracle Database In-Memory, a new add-on option for Oracle Database 12c Enterprise Edition, in response to the industry push toward fully memory-resident databases offered by other vendors, such as IBM and, most notably, SAP with Hana. In contrast to row-organized relational databases stored on disk, the concept of column-organized, memory-resident databases had been spreading through academic circles. Capitalizing on that momentum, and with the help of a few acquisitions, Oracle's primary traditional enterprise applications competitor, SAP, began pushing its own in-memory database, Hana, in an attempt to displace Oracle as the dominant database platform for SAP's suite of applications. While SAP has long been the market leader in the ERP segment, SAP customers have often chosen Oracle as the backend database for their SAP applications. As the Hana database matured alongside SAP's middleware product NetWeaver, SAP planned to own the entire software stack beneath its ERP systems. After announcing its in-memory roadmap and strategy at Oracle OpenWorld last year, Oracle responded on June 10th, 2014 by introducing an impressive, revolutionary capability: a memory-resident column cache that works in concert with its traditional row cache.

Databases are typically stored on rotating disks in row format; the active datasets are read into memory-based row caches, where the data is protected by transaction logging while business users interact with it via enterprise applications. Database performance has improved with hardware advancements such as flash technologies and increases in DRAM capacity. While this architecture works well for data manipulation in OLTP systems, and performs well for queries that touch a small number of rows across a large number of columns, it is not up to par for data warehousing in OLAP systems, which perform complex multidimensional analysis over a large number of rows and only a few columns. Because of this, it became customary to separate analytical systems from transactional systems, rendering real-time analytics and decision-making virtually impossible.

While Oracle dominated the RDBMS market, other vendors seized on this opportunity to provide specialized analytical databases for data warehousing, including special-purpose appliances such as Teradata and Netezza. As the cost of DRAM dropped, some vendors, notably SAP, introduced new database technologies designed to run entire databases in memory with column-organized data structures. While this approach increased query performance by multiple orders of magnitude, transactional performance suffered. SAP proposed using Sybase for transactional processing and loading column-organized datasets into Hana for high-performance analytical processing. IBM and a few other niche players also joined the quest to design in-memory databases.

With its future relevance in jeopardy, Oracle took its time to make an aggressive, innovative, and impressive move to protect its flagship product and render the competition irrelevant. Leaving the complete existing Oracle architecture intact, including the row cache and the transaction logging mechanisms, Oracle designed a new column cache that exists only in memory. As row-organized data is read from disk, Oracle simultaneously populates both caches, row and column, thereby maintaining two copies of the data in memory: one row-organized and the other column-organized. The query optimizer then automatically uses the appropriate cache for each type of query, accommodating both workloads within a single database while maintaining transactional integrity across both caches. When scanning the column cache, Oracle exploits SIMD vector instructions in modern CPUs to process many column values with a single instruction. In RAC environments, column caches are mirrored across nodes for high availability. Because of this architecture, all prior database features continue to work, and no application changes are required to utilize the in-memory column cache.
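Because the optimizer chooses the column store transparently, the effect shows up in the execution plan rather than in the query text. A minimal sketch, using a hypothetical `sales` table on an Oracle 12c instance with the In-Memory option enabled:

```sql
-- Hypothetical table and columns; no hints or query changes are needed.
-- The optimizer routes this analytic scan to the in-memory column store.
EXPLAIN PLAN FOR
  SELECT region, SUM(amount)
  FROM   sales
  GROUP  BY region;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

When the table has been populated into the column store, the plan shows a `TABLE ACCESS INMEMORY FULL` operation in place of a conventional full table scan.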

Using the in-memory option takes just two simple steps:

  • First, set the column cache size using a single initialization parameter.
  • Second, mark the tables and partitions to be cached using a DDL flag. At this point, analytic queries can run 100 to 1,000 times faster.
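The two steps above can be sketched as follows; the sizes, table, and partition names are hypothetical, and the In-Memory option (Oracle Database 12.1.0.2 or later) must be licensed:

```sql
-- Step 1: reserve space for the in-memory column store.
-- The parameter is static in 12c, so a restart is required.
ALTER SYSTEM SET INMEMORY_SIZE = 16G SCOPE=SPFILE;

-- Step 2: flag objects for population into the column store.
ALTER TABLE sales INMEMORY;
ALTER TABLE orders MODIFY PARTITION p_2014 INMEMORY;

-- Optionally tune compression level and population priority.
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;
```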

You can further improve DML performance by dropping all analytic indexes, keeping only the indexes that enforce primary key, foreign key, and other referential constraints, and thereby potentially double your transactional performance.
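A sketch of that cleanup, with hypothetical index names, followed by a check that the flagged objects actually populated into the column store (using the `V$IM_SEGMENTS` view provided with the In-Memory option):

```sql
-- Drop reporting-only indexes (hypothetical names); the column store
-- now serves those scans. Keep key-enforcing and OLTP-lookup indexes.
DROP INDEX sales_region_ix;
DROP INDEX sales_channel_ix;

-- Verify population status of in-memory segments.
SELECT segment_name, populate_status, inmemory_size, bytes_not_populated
FROM   v$im_segments;
```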

At an event held at Oracle headquarters in Redwood Shores, California, CEO Larry Ellison walked the audience through Oracle's history primarily to make a point: Oracle has always adapted to changes in the industry by innovating within its core engine, and in-memory databases are no exception. Since 1977, Oracle has successfully redesigned and rearchitected its database engine to move its customer base from mainframe to client/server to internet to cloud, and now to in-memory, in the least disruptive manner, thereby aggressively protecting its market share and positioning itself to dominate the next-generation computing model.

