How To Cache Oracle Database

In today’s world, everything is about speed; what’s not instant is not good enough. If you have ever experienced slow responses from your database, this article is for you.

Oracle offers two options for you to use:

  • Caching
  • Pooling

What is caching?

Caching is a way of storing data in memory for faster retrieval. The data is loaded from storage only once and then served from memory for all subsequent requests with the same result set. What if the data in the database changes? The cache will, of course, be invalidated and refreshed accordingly 🙂

Why caching?

As mentioned above, caching dramatically increases the speed of retrieving information because the data is stored in memory and doesn’t have to be read from the hard drive. If you are unsure what the difference is, please review my article about “Computing Architecture“.

How does it work?

Now we are getting to the interesting part 🙂 Oracle offers a great hint, result_cache, that stores query results in a dedicated area of the SGA (the shared pool), alongside structures such as the buffer cache. When you execute a query with this hint, Oracle retrieves the data as usual, with one exception: the results are stored in the SQL Result Cache. Whenever a user executes the same query again, Oracle serves the result from the Result Cache instead of going down to the tables again and again. This makes retrieval very fast.
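A quick side note: whether the hint is needed at all depends on the RESULT_CACHE_MODE initialization parameter. It defaults to MANUAL, which means only hinted queries are cached; with FORCE, Oracle tries to cache every eligible result. A quick way to check your setting:

-- Check the result cache mode (MANUAL is the default)
SHOW PARAMETER result_cache_mode

-- Alternatively, query it directly:
SELECT value
FROM   v$parameter
WHERE  name = 'result_cache_mode';

The examples below assume the default MANUAL mode, which is why the hint is required.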

You can see the details about the result cache with this command:

SELECT name, value
FROM v$result_cache_statistics;

/* output
NAME                          | VALUE
---------------------------------------------
Block Size (Bytes)            | 1024
Block Count Maximum           | 524288
Block Count Current           | 32 
Result Size Maximum (Blocks)  | 13107
Create Count Success          | 1
Create Count Failure          | 0
Find Count                    | 2
Invalidation Count            | 0
Delete Count Invalid          | 0
Delete Count Valid            | 0
Hash Chain Length             | 1
Find Copy Count               | 2
Global Hit Count              | 0
Global Miss Count             | 0
Latch (Share)                 | 0
*/

Here Block Count Maximum is the maximum size of the result cache (524,288 blocks × 1,024 bytes = 512 MB in this case) and Block Count Current is the size currently in use (32 blocks = 32 KB).
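The maximum size itself is driven by the result_cache_max_size initialization parameter (and can be changed with ALTER SYSTEM). You can inspect it like this; the value you see will of course depend on your instance:

-- Show the configured maximum size of the server result cache
SHOW PARAMETER result_cache_max_size

-- Or query it directly:
SELECT name, value
FROM   v$parameter
WHERE  name = 'result_cache_max_size';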

Now let’s explore how the result cache works in this example.

First of all, we need to enable AUTOTRACE (this works only in the SQL*Plus console!). If you are not familiar with SQL*Plus and AUTOTRACE, please visit my article How To Trace Queries in Oracle with AUTOTRACE.

SQL> set linesize 32000; 
SQL> set autotrace traceonly;

Now all is set, so let’s start tracing the queries 🙂

SELECT id_department
, count(*)
FROM employee
WHERE 1=1
GROUP BY id_department;

The output:

10 rows selected.

Elapsed: 00:00:01.13

Execution plan
----------------------------------------------------------
Plan hash value: 2370953298

---------------------------------------------------------------------------------------
| Id | Operation                 | Name     | Rows   | Bytes | Cost (%CPU) | Time     |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT          |          |      1 |    10 | 6077    (1) | 00:00:01 |
|  1 |    HASH GROUP BY          |          |      1 |    10 | 6077    (1) | 00:00:01 |
|* 2 |      TABLE ACCESS FULL    | EMPLOYEE |  93432 |  912K | 6074    (1) | 00:00:01 |
---------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      22099  consistent gets
      22099  physical reads
          0  redo size
        617  bytes sent via SQL*Net to client
        371  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         10  rows processed

Now let’s see how the query will perform with the result cache hint:

SELECT /*+ result_cache */
  id_department
, count(*)
FROM employee
WHERE 1=1
GROUP BY id_department;

Output:

10 rows selected.

Elapsed: 00:00:00.01

Execution plan
----------------------------------------------------------
Plan hash value: 2370953298

---------------------------------------------------------------------------------------------------------
| Id | Operation                 | Name                       | Rows   | Bytes | Cost (%CPU) | Time     |
---------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT          |                            |      1 |    10 | 6077    (1) | 00:00:01 |
|  1 |  RESULT CACHE             | 76ptckqdv00tp0mrd0ydmhyy71 |        |       |             |          |
|  2 |    HASH GROUP BY          |                            |      1 |    10 | 6077    (1) | 00:00:01 |
|* 3 |      TABLE ACCESS FULL    | EMPLOYEE                   |  93432 |  912K | 6074    (1) | 00:00:01 |
---------------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------
   1 - column-count=2; dependencies=(ORACLE_WORLD.EMPLOYEE); parameters=(nls); name="SELECT /*+ result_cache */
id_department
, count(*)
FROM employee
WHERE 1=1
GROUP BY id_department

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        617  bytes sent via SQL*Net to client
        371  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         10  rows processed

As you can see, the second query executed practically in no time (00:00:00.01 vs 00:00:01.13). Another significant difference is that consistent gets dropped to zero (vs 22,099), which means no I/O was performed by the second query. Why? All results were served straight from memory instead of being read from disk storage.

Let’s update some data in the database to demonstrate how the result cache gets invalidated.

DELETE FROM employee WHERE id_department = 1001;
commit;

The very moment you commit, the result cache gets invalidated, so Oracle will rebuild the cached result the next time you issue the query; of course, the cost will grow again, because Oracle has to reach down to disk storage once more.
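You can verify that the invalidation actually happened by checking the statistics view again; the invalidation counters should have increased (a sketch; the exact numbers depend on what your session has done so far):

-- After the DELETE + COMMIT, the cached result that depends on
-- EMPLOYEE is marked invalid and will be rebuilt on the next run.
SELECT name, value
FROM   v$result_cache_statistics
WHERE  name IN ('Invalidation Count', 'Delete Count Invalid');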

If you want to delete your cache there are several ways to do so, see the following examples:

-- delete all cache
BEGIN
  dbms_result_cache.flush;
END;
/

-- delete cache for a specific table only
BEGIN
  dbms_result_cache.invalidate('ORACLE_WORLD','EMPLOYEE'); -- owner, table_name
END;
/
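The dbms_result_cache package can also tell you about the overall state of the cache: status reports whether it is enabled, and memory_report prints a breakdown of its memory consumption (the report goes to DBMS_OUTPUT, so enable serveroutput in SQL*Plus first):

-- Is the result cache enabled?
SELECT dbms_result_cache.status FROM dual;

-- Detailed memory usage report
SET SERVEROUTPUT ON
EXECUTE dbms_result_cache.memory_report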

To monitor the cache usage, please dive into these views:

-- settings and memory consumption stats
SELECT * FROM v$result_cache_statistics;

-- memory in the cache
SELECT * FROM v$result_cache_memory;

-- objects in the cache
SELECT * FROM v$result_cache_objects;

-- dependencies between various objects in the cache
SELECT * FROM v$result_cache_dependency;
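For example, to see which results are currently cached for a given table and whether they are still usable, you can filter v$result_cache_objects by name (a sketch; adjust the table name to your schema):

-- List cached result sets whose text mentions EMPLOYEE,
-- with their validity status and how often they were served.
SELECT id,
       status,       -- Published = usable, Invalid = needs a refresh
       scan_count,   -- number of times this result was served
       name
FROM   v$result_cache_objects
WHERE  type = 'Result'
AND    name LIKE '%EMPLOYEE%';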

What is pooling?

Connection pooling keeps database connections open so they can be reused by other sessions.

Why pooling?

The most time-consuming operation when querying a database is usually establishing the connection to the server (especially when the server is remote). To avoid paying this expensive cost every time the user or application needs to retrieve data, you simply reuse an existing connection from the pool.

How to open a pooled connection? It’s pretty simple: just add (SERVER = POOLED) to your connection string.

ORACLE_WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle-world.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = POOLED)
      (SID = OW)
    )
  )
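Note that (SERVER = POOLED) relies on Database Resident Connection Pooling (DRCP), and the pool must be started on the server side first; by default it is stopped. A DBA can start and tune it with the dbms_connection_pool package (a sketch; the sizing values below are purely illustrative):

-- Start the default connection pool (run as a privileged user)
EXECUTE dbms_connection_pool.start_pool

-- Optionally tune the pool; these parameter values are examples only
BEGIN
  dbms_connection_pool.configure_pool(
    pool_name          => 'SYS_DEFAULT_CONNECTION_POOL',
    minsize            => 4,
    maxsize            => 40,
    inactivity_timeout => 300);
END;
/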

I strongly recommend using pooling in any production application to mitigate performance issues, and please consider using the result_cache hint in queries over data that does not change frequently. If you do, there will be no negative impact on the performance of your application; you can only benefit from it!

One thing to make sure you won’t be negatively hit by using result_cache is to have enough cache memory allocated and properly configured!