Frequently Asked Questions – DATABASE IN-MEMORY – Part4

Q: Is there any required ratio or percentage, for example that inmemory_area cannot be more than 15% of the SGA?

A: No, normal sizing still applies to the other areas of the SGA. The IM column store is in addition to those values and can be as large as your available memory and the Oracle Database will support.

Q: Does data always get compressed when put into memory?
A: No, there is a NO MEMCOMPRESS option, but the default is MEMCOMPRESS FOR QUERY LOW.
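For illustration, the compression level is chosen per object when it is marked in-memory; in this sketch the table name sales is hypothetical:

-- take the default (MEMCOMPRESS FOR QUERY LOW)
ALTER TABLE sales INMEMORY;
-- explicitly disable in-memory compression
ALTER TABLE sales INMEMORY NO MEMCOMPRESS;
-- favor space savings over scan speed
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;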

Q: What happens if the static in-memory size is insufficient for all the data being populated?
A: If there is not enough memory in the IM column store for the entire object being populated, then only part of it will be populated and the rest will be accessed from the row store. A message will be written to the alert log indicating that the IM column store is full.
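To see whether an object was only partially populated, you can query V$IM_SEGMENTS; a minimal sketch (the BYTES_NOT_POPULATED column shows how much of the object remains only in the row store):

SELECT segment_name, populate_status, bytes_not_populated
FROM v$im_segments;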

Q: The data buffer cache uses a block size that matches the block size in the data file. What is the unit of storage in a columnar array?
A: The unit of storage for columnar data is 1MB. This can be seen in the view v$inmemory_area.
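For example, a query such as the following shows the 1MB pool (used for column-formatted data) and the 64KB pool (used for metadata):

SELECT pool, alloc_bytes, used_bytes, populate_status
FROM v$inmemory_area;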

Q: Can the in-memory extent size be adjusted?
A: No, there is no control over the in-memory extent size.

Q: Do we have to explicitly create storage indexes?
A: No. In-Memory storage indexes are created and maintained automatically; you cannot control their creation.

Q: Is the in-memory feature available in all license editions?
A: No, the Database In-Memory option is only available in Enterprise Edition.

Q: Does the optimizer pick a hybrid of row and column access methods?
A: Yes, if the entire object does not fit into the IM column store then part of it can be accessed from the IM column store and the rest from the buffer cache.

Q: In-memory may maintain a list of all unique values to determine which parts of the in-memory column store to access. But what if there are minimal duplicate values (as in a unique column, or a column of client phone numbers)? Is this list of values still maintained?
A: Compression is performed at the IMCU level and the algorithms adapt based on the data values. The compression won't waste space if there is no compression benefit from the values found.

Q: How does this help with ‘join cardinality’ issues?
A: Join cardinalities are the same; the optimizer uses statistics on the object as well as statistics on the in-memory object to determine the best execution plan.

Q: Once I’m done with a partition (or other object) in memory, how do I take it out of memory?
A: alter <object> no inmemory
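For example, assuming a table named sales with a partition named sales_q1 (both names are illustrative):

ALTER TABLE sales NO INMEMORY;
ALTER TABLE sales MODIFY PARTITION sales_q1 NO INMEMORY;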

Q: How is duplicate different from mirroring?
A: The duplicate subclause is essentially the same as mirroring at the IMCU level, but it only occurs in memory.
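On engineered systems the DUPLICATE subclause controls how many copies of each IMCU are kept across RAC instances; a sketch with an illustrative table name:

-- keep a mirrored copy of each IMCU on one other instance
ALTER TABLE sales INMEMORY DUPLICATE;
-- keep a complete copy of the table in every instance's IM column store
ALTER TABLE sales INMEMORY DUPLICATE ALL;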

Q: How does the database deal with a situation in which you have specified in-memory for a specific table, dropped all of the analytic indexes but the entire table doesn’t fit into the in-memory column store? Will analytic queries suffer in this situation?
A: Yes, but only because the data will have to be fetched from the row store and won’t take advantage of the IM column store’s features. Indexes won’t be used in the case where some data is accessed from the IM column store and some from the buffer cache. In general you would not want this to happen. The best practice is to populate entire objects into the IM column store.

Frequently Asked Questions – DATABASE IN-MEMORY – Part3

Q: What is the downside of removing the indexes?
A: The only downside would be if you remove an index that is needed for referential integrity or for OLTP support. Indexes are not used by the IM column store.

Q: Would Oracle Database In-Memory be as beneficial for a traditional data warehouse as for reporting off an OLTP system?
A: Yes, DBIM is perfect for these use cases.

Q: For tables that are marked to be in-memory, are they actually loaded into memory upon database startup or when a query requests data from the tables?
A: This is configurable. Data may be loaded into memory on startup, or on demand.
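Population timing is controlled with the PRIORITY subclause; a minimal sketch with illustrative table names:

-- populate shortly after the database opens, without waiting for a scan
ALTER TABLE sales INMEMORY PRIORITY CRITICAL;
-- default behavior: populate on first access
ALTER TABLE orders INMEMORY PRIORITY NONE;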

Q: Fact or fiction? Does the Oracle Database 12c capability for column-based data retrieval vs. traditional row-based data retrieval now allow one single database for OLTP and data warehousing? Is there any implementation of this in the real world yet where companies are doing this?
A: Fact. Oracle Database In-Memory adds columnar data format in addition to Oracle’s traditional row based format. We have many companies using Database In-Memory.

Q: Can a table data reside simultaneously in both InMemory and SGA?
A: Yes, but let’s clarify. The In-Memory column store is part of the SGA and so is the traditional buffer cache. If a table is loaded into the IM column store it can be accessed in either the buffer cache or the column store. This is an optimizer decision based on the cost of the query.

Q: Are companies using Oracle Database 12c to support OLTP and data warehouse / Decision Support Systems in the same database, without pruning data from the OLTP database to a separate data warehouse database? (with reference to column retrieval vs. row retrieval)
A: There are two very significant use cases for Oracle Database In-Memory. One is where an application can run queries against an OLTP system in real time, such as with E-Business Suite, JDE, Siebel, etc. The second is where multiple OLTP sources (data from multiple apps) funnel into a data warehouse for DSS. The in-memory capability accelerates query speeds dramatically in that environment as well. Both use cases are equally important and in use by our customers.

Q: Is the database startup going to be slow if data has to be represented in both row and column store?
A: No, the database will start up normally. The population of the IM column store happens in the background after the database has started.

Q: Does this mean we have two instances of tables? One in memory (column store) and another on disk (row)?
A: Yes, Oracle operates as it always has with the buffer cache caching blocks from on-disk tables in row format. When a table is added to the IM column store then it is read from disk and populated into the IM column store in columnar format.

Q: What are best practices for monitoring and adjusting the memory allocation for the in-memory column store?
A: The best practice is to leave enough room in the IM column store for any growth in the objects that have been marked for in-memory. That will depend on your data growth.

Q: Are there any scalability issues with the size of the SGA? Should I avoid 2 TB, 4 TB, or 8 TB of SGA?
A: No, the SGA can be as large as is supportable by the Oracle Database on your server.

Frequently Asked Questions – DATABASE IN-MEMORY – Part2

Q: Is there any way to analyze data usage to recommend tables/columns that would benefit from the In-Memory option?
A: Yes, there is the In-Memory Advisor, available on OTN, which can analyze your system to determine which objects would benefit from Database In-Memory.

Q: How easy is it to move from a row to columnar format in Oracle Database In-Memory?
A: Once you’ve allocated the In-Memory column store then it is a simple alter table command to populate the table into the IM column store.
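For example, assuming the IM column store has already been sized, marking a table for population is a single statement (the table name is illustrative):

ALTER TABLE sales INMEMORY;
-- population then proceeds in the background; progress can be monitored in v$im_segments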

Q: How is Oracle Database In-memory different from caching a table?
A: The biggest difference is the format. In the buffer cache the table is in row format and in the column store it is in columnar format.

Q: How much memory will be used for a table in memory? Is it taken from the SGA? Please explain.
A: For the table itself, that will depend on the compression level. The IM column store is allocated from memory in the SGA and once allocated a simple alter table command can be used to populate the table into the IM column store.

Q: Before purchasing, how can we determine whether or not Oracle Database In-Memory will give us an ROI, given our data structures and reporting needs?
A: There is a business calculator which allows you to see the economic impact of Oracle Database In-Memory. The calculator is intended to provide an approximate savings calculation for your consideration. It is not an exact ROI calculation.

On a technical level, there is a tool, the Database In-Memory Advisor, which helps you size and use In-Memory; see the ODBIM Advisor Best Practices white paper for guidance.

Q: I need to understand the difference between the row format and the column format. Where can I get information about these formats, please?
A: To get started you can look at the Database In-Memory White Paper on OTN. This explains Database In-Memory and the difference between the row store and the column store.

Q: Are In-Memory tables protected by the same ACLs as normal tables?
A: Yes because it is still one table. The optimizer controls when a query will access the object in the column store or the row store. The actual table is still protected just as it is without Oracle Database In-Memory.

Q: Does EBS support Oracle Database In-Memory?
A: Yes, EBS does support Database In-Memory.

Frequently Asked Questions – DATABASE IN-MEMORY – Part1

Q: Is column storage in Oracle Database 12c an enterprise edition feature?
A: Database In-Memory is an Enterprise Edition Option which must be licensed so there is an extra cost in addition to the EE license.

Q: TimesTen is currently being used with Exalytics and 11g. If we move to 12c, do we still need TimesTen? What is the best recommendation?
A: Both databases work in memory, but do so in very different ways. They are complementary technologies that address very different business requirements. TimesTen is an in-memory row store, designed for extremely high-frequency processing. Typically configured tightly coupled with the application server, TimesTen can deliver sub-millisecond response times for individual transactions. Database In-Memory is an in-memory column store that is designed to accelerate scans of very large numbers of rows. It is therefore particularly suitable for OLAP: scans of a few columns in large data sets with filters can be sped up by factors such as 100x or even 1000x.

Q: How is In-Memory enabled?
A: Database In-Memory is enabled by setting the initialization parameter inmemory_size > 0.

Q: Does Oracle Database 12c use in-memory capabilities by default, or does the DBA need to configure them?
A: Oracle Database 12c does not enable Database In-Memory by default. To enable Oracle Database In-Memory, the inmemory_size initialization parameter must be set to a value greater than zero. You then need to alter one or more tables, partitions, sub-partitions, or materialized views to be populated into the IM column store. The Oracle optimizer can then transparently make use of the IM column store.
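A minimal end-to-end sketch, assuming a 16G column store and an illustrative table name:

-- 1. size the IM column store (requires an instance restart in 12.1)
ALTER SYSTEM SET inmemory_size = 16G SCOPE=SPFILE;
-- 2. restart the instance, then mark objects for population
ALTER TABLE sales INMEMORY;
-- 3. the optimizer now transparently considers the IM column store for queries on this table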

Q: Does Database In-memory have a hardware dependency like some specific quality of RAM?
A: No, Database In-Memory can run on any hardware platform supported for Oracle Database 12c. You do have to have enough memory to support the In-Memory column store in addition to the other components in the SGA though.

Q: Does the DBA control which tables participate in In-Memory columnar technology?
A: Yes, the DBA, or any privileged user who can alter the object, can mark a table, partition, subpartition, or materialized view as in-memory.
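For example, individual partitions or materialized views can be marked independently (object names are illustrative):

ALTER TABLE sales MODIFY PARTITION sales_2015 INMEMORY;
ALTER MATERIALIZED VIEW sales_summary_mv INMEMORY;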

Q: How is Database In-Memory priced? Is this an add-on?
A: Oracle Database In-Memory is an option priced at 23K per processor unit. Refer to our price list for more details.

Q: Can Oracle Database In-Memory columnar format store a subset of rows from a table? We have a SaaS application that does OLTP analytics, but strictly within a tenant. We use a single schema with VPD.
A: Oracle Database In-Memory supports tables, partitions, and sub-partitions, so if you partition your table then yes, it can store a subset of the rows of a table.

Q: Would Oracle Database In-Memory Columnar technology be leveraged in the PeopleSoft Campus Solutions application if the memory allocations are sized properly?
A: PeopleSoft does support Database In-Memory. You should check their documentation or MOS for specific objects that can be populated into the In-Memory column store.

Frequently Asked Questions – MULTITENANT – Part2

Q: It’s mentioned that cloning pluggable databases (PDBs) is “rapid provisioning”… my experience in testing is that the raw datafiles still have to be copied, so if you have a large database, it’s dependent on I/O capacity. Really no faster than non-PDB if you ask me. Am I wrong?

A: It is much faster. Instead of creating a new database by running through the catalog scripts – slow-by-slow, as Tom describes it – we create a new database by simply cloning one. Snapshot clones are incredibly fast: a snapshot clone of a 1TB PDB takes less than five minutes!

Q: Why is multitenant not the default architecture? In what scenarios should multitenant architecture be avoided?
A: Multitenant is the default option in Database Configuration Assistant (dbca). In general we recommend using the Multitenant architecture. Features not yet compatible with Multitenant are listed in the database readme doc. Over time all features will be compatible with Multitenant, giving all users the option of transitioning to a single architecture.

Q: Is it possible to copy data across container databases (CDBs)?
A: Yes, we now have a remote clone capability: create pluggable database a from b@dblink.
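A sketch of the remote clone flow with illustrative names (the link user needs the appropriate clone privileges on the source):

-- in the target CDB, create a link to the source CDB
CREATE DATABASE LINK src_link
  CONNECT TO c##clone_admin IDENTIFIED BY password
  USING 'source_cdb';
-- clone the remote PDB b as a new local PDB a
CREATE PLUGGABLE DATABASE a FROM b@src_link;
ALTER PLUGGABLE DATABASE a OPEN;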

Q: Can we clone a live PDB?
A: We currently require a PDB to be read-only in order to be cloned. One option is to take clones from a Data Guard standby.
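A minimal sketch of cloning from a read-only source PDB (names are illustrative):

ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;
CREATE PLUGGABLE DATABASE pdb1_clone FROM pdb1;
ALTER PLUGGABLE DATABASE pdb1_clone OPEN;
-- return the source to read write once the clone completes
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;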

Q: How different can the schema be for each PDB?
A: There are no limitations in this regard. You can consolidate a disparate collection of applications if you like. The application schema is defined within the self-contained PDB.

Q: Can you run CDB in RAC or some type of HA environment?
A: Multitenant is fully compatible with HA options such as RAC and Data Guard. PDBs may be affinitized to specific nodes, or configured as uniform.

Q: We are already using OVM for consolidation – what does Multitenant give us over that?
A: OVM is good for infrastructure consolidation. Multitenant is specifically designed for database consolidation: much higher consolidation density, and simpler management because you can now manage many databases as one. You could still use Multitenant for further consolidation benefits on top of OVM.

Q: With an enterprise license, I can have one CDB and one PDB?
A: Correct. The single tenant configuration – one PDB per CDB – does not require or trigger the licensed option.

Q: Can flashback be used to recover dropped container database?
A: No. Use RMAN to recover a CDB that has been dropped.

Q: Can GoldenGate replication be set up between pluggable databases on the same server? Does anything need to be done to the container database for that to happen?
A: Yes, we can replicate between PDBs on the same server. We would capture from the CDB (root container), which allows us to capture from any PDB in the environment, and then we would set up a Replicat to apply directly into the target PDB. The activity performed by the Replicat will be ignored by the Extract process running against the CDB; by default, Extract ignores any activity done by an OGG Replicat process, so you don’t even need to add a special parameter. Nothing needs to change in the container database – just treat the two PDBs as if they were two different systems. If you later want to move the PDB, it’s as easy as changing the connection for the Replicat: you could initially set it up with a connect string (TNSNAMES.ORA) and then, when you move it, just change the TNSNAMES.ORA entry to point to the new location.

Q: Can RAC instances work in the multitenant architecture?
A: Yes, RAC is supported in a multitenant environment.

Frequently Asked Questions – MULTITENANT – Part1

Q: Is there a performance gain with database links in the Multitenant architecture (where the links are between multitenant databases)?
A: Yes, dblinks between Pluggable Databases (PDBs) in the same Container Database (CDB) are more efficient than between standalone databases.
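For illustration, a link between two PDBs in the same CDB is created like any other database link, using the target PDB’s service name (all names here are hypothetical):

-- connected to PDB1, create a link that resolves to PDB2's service
CREATE DATABASE LINK pdb2_link
  CONNECT TO app_user IDENTIFIED BY password
  USING 'pdb2_service';
SELECT COUNT(*) FROM orders@pdb2_link;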

Q: What if a main database instance crashes? Will it have impact on all multitenant databases?
A: The nature of a consolidated system is such that a CDB-wide outage will by definition affect all PDBs. For this reason we recommend following our MAA (Maximum Availability Architecture) best practices for setting up databases – in general, and particularly Multitenant databases – for high availability. The Oracle Multitenant option will allow you to make very significant cost reductions. We recommend reinvesting some of those savings in HA options such as RAC and Active Data Guard. You’ll still have plenty of savings to show, and you’ll have improved SLAs.

Q: How does the multitenant architecture differ from traditional SQLSERVER instance based architecture?
A: There is a superficial similarity, in that Microsoft SQL Server supports multiple “databases” within a single server, while Oracle Multitenant supports multiple Pluggable Databases (PDBs) within a single Container Database (CDB). There are some important differences, however. Microsoft SQL Server users are defined at the server level, so there is imperfect namespace isolation, which makes it less suitable for Database as a Service (DBaaS). SQL Server databases are also difficult to move between servers. By contrast, the pluggable portability of PDBs allows for a great range of flexibility.

Q: Does Oracle Database 12c support non-container databases?
A: Yes, customers have the choice of using the existing architecture or the new multitenant architecture (or a combination of both).

Q: Do you have any guidance regarding capacity/resource sizing of multitenant database?
A: This is highly dependent on the loads being consolidated. The savings are proportional to the number of databases consolidated. Rough rules of thumb for minimal savings are about 1.5 GB per database and maybe 1/3 of a core for databases under load. On top of that, you might expect greater savings because of the ability to share “peaks and troughs” in load. In terms of storage, snapshot clones facilitate huge savings in storage and IOPS. Don’t forget the operational savings from the ability to manage many databases as one.

Q: Do we need an additional license for the multitenant architecture?
A: Multitenant is an Oracle Database 12c option that must be licensed.

Q: Is Pluggable Database (PDB) / Container Database (CDB) always a cost option?
A: No. The single tenant configuration – a single PDB within the CDB – does not require or trigger the licensed option. This configuration delivers some of the advantages of multitenant – including pluggable portability and upgrade/patch via unplug/plug. Two or more PDBs per CDB gives you full functionality and requires the licensed option.

Q: Can we configure undo and redo at the pluggable database (PDB) level with the Multitenant option? If not, why?
A: No. Consolidation advantages are delivered by sharing resources such as a single set of background processes, as Tom is describing. A single log writer writes a single consolidated redo stream.


Frequently Asked Questions – ORACLE DATABASE 12c


Q: Is edition-based redefinition an enterprise edition feature?
A: No, it is not exclusive to Enterprise Edition.

Q: Are all the features in this presentation available for Standard Edition and Enterprise Edition? If not, is there a doc that shows what feature is available?
A: This ‘family’ white paper is a good place to start.

Q: Does Oracle 12c require RAC?
A: RAC is not required, but it is an option for Oracle Database 12c Enterprise Edition. It is also included with Oracle Database 12c Standard Edition (on clusters with a maximum of 4 sockets)


The 2015 award categories are:

Oracle REST Data Services
Oracle Application Express
Database Design

Nominations open until Aug 15th – submit yours today!

Cloud Summer Camp!

Check out the new features of Oracle Database 12c.
Live examples at our Cloud Summer Camp!

All About Oracle 12c Performance

All references below are to the Oracle® Database SQL Tuning Guide, 12c Release 1 (12.1).

Real-Time Database Operations Monitoring – Part Number E15858-15, Chapter 16 Monitoring Database Operations, Section 16.1 About Monitoring Database Operations

Real-Time ADDM Analysis – Part Number E49106-05, Chapter 7 Automatic Performance Diagnostics, Section 7.1.3 Real-Time ADDM Analysis

Adaptive Query Optimization – Part Number E15858-15, Chapter 4 Query Optimizer Concepts, Section 4.4 About Adaptive Query Optimization

Enhanced Column Histogram Statistics – Part Number E49106-05, Chapter 11 Histograms, Section 11.4.3 Generating a Top Frequency Histogram

Online Statistics Gathering for Bulk Loads – Part Number E49106-05, Chapter 10 Optimizer Statistics Concepts, Section 10.3.3 Online Statistics Gathering for Bulk Loads

Session-Private Statistics for Global Temporary Tables – Part Number E49106-05, Chapter 10 Optimizer Statistics Concepts, Section 10.2.4 Session-Specific Statistics for Global Temporary Tables

SQL Plan Directives – Part Number E49106-05, Chapter 10 Optimizer Statistics Concepts, Section 10.4.1 SQL Plan Directives

Adaptive SQL Plan Management – Part Number E15858-15, Chapter 23 Managing SQL Plan Baselines, Section 23.1 About SQL Plan Management

Automatic Column Group Detection – Part Number E15858-15, Chapter 13 Managing Optimizer Statistics: Advanced Topics, Section 13.3.1 Managing Column Group Statistics