Data Concurrency, Consistency, And Locks

Multi-user databases require a mechanism to handle data concurrency, consistency, and integrity. Data has to be consistent throughout any transaction executed in the database system and therefore Oracle implemented locks.

Even though two processes can’t technically request exclusive access to a resource at exactly the same instant, it happens all the time that while one transaction is processing data, another wants to work with the same resource. This would be a very sad story (full of deadlocks and clashes) but luckily, we have Oracle; and of course Oracle comes with a strategy for how to mitigate this 🙂

First of all, let’s get familiar with these terms (just to be clear right from the beginning):

  • Data consistency
    • this means a user sees a consistent view of the data throughout a transaction, from its start until it is finished
    • example: when I query a table, and it is modified (and committed) right after by another user, I should not see the changes until I query it again
  • Data concurrency
    • this means data are accessible for multiple users at the same time
    • example: I can access and/or modify data like other users can “at the same time” … the changes triggered by users (application) will be serialized.
  • Data integrity
    • this assures data are accurate, valid, complete, and consistent.
    • three integrity levels are:
      • REFERENTIAL – child record cannot exist (be created or left) without its reference to a parent
      • ENTITY – two records cannot have the same primary key; the key must be unique and NOT NULL
      • DOMAIN – data types must not mismatch (you cannot store DATE as NUMBER)
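The three integrity levels above map directly onto table constraints. A minimal sketch (the table and column names are made up for illustration):

-- hypothetical tables illustrating the three integrity levels
CREATE TABLE parent_t (
  parent_id  NUMBER PRIMARY KEY,  -- ENTITY: key must be unique and NOT NULL
  created_at DATE NOT NULL        -- DOMAIN: only valid DATE values are accepted
);

CREATE TABLE child_t (
  child_id  NUMBER PRIMARY KEY,
  parent_id NUMBER NOT NULL
            REFERENCES parent_t (parent_id)  -- REFERENTIAL: child cannot exist without its parent
);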

Now that you are familiar with these terms, let’s dive into the fun stuff 🙂

Transaction isolation levels

There must be a certain compromise between performance and data consistency; the possible compromises are called “transaction isolation levels”. The SQL standard defines four of them, and each is characterized by which of three read phenomena it allows. Oracle lets you choose from three (“read uncommitted” is not supported by Oracle). See the following table to examine the combinations.

Isolation Level    Dirty Read   Non-repeatable Read   Phantom Read
Read uncommitted   Yes          Yes                   Yes
Read committed     No           Yes                   Yes
Repeatable read    No           No                    Yes
Serializable       No           No                    No

The default isolation level for Oracle is “read committed”. Let me explain quickly what those phenomena mean.

  • Dirty read
    • if you are a regular subscriber of my posts, you might remember my article DML processing where I mentioned DIRTY BUFFERs – yes 🙂 it is related
    • this means reading data modified by another transaction/user while not yet committed
  • Non-repeatable read
    • when an application/user re-executes a query and for the second time, data are different because they were modified and committed (after the first read) by another transaction
    • this is related to UPDATE
  • Phantom read
    • when an application/user re-executes a query and for the second time, there are additional data inserted/deleted and committed (after the first read) by another transaction
    • this is related to INSERT / DELETE

To sum up, in layman’s terms, dirty read is about reading data while other transactions (insert, delete, update) have not committed their changes yet. Non-repeatable read and phantom read, on the other hand, are about re-reading data changed by committed transactions (non-repeatable read is for updates, phantom read for inserts and deletes).
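A non-repeatable read is easy to reproduce in two sessions under “read committed”; a sketch (the table, column, and values are made up for illustration):

-- Session 1
SELECT price FROM product WHERE product_id = 10001;  -- returns e.g. 100

-- Session 2
UPDATE product SET price = 200 WHERE product_id = 10001;
COMMIT;

-- Session 1 (same transaction as before)
SELECT price FROM product WHERE product_id = 10001;  -- now returns 200: non-repeatable read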

Since I already mentioned the types of reads, let me describe the isolation levels as well.

  • Read committed
    • as I mentioned already, this is a default level in Oracle
    • a query can see only data already committed before the query began (=> cannot read dirty buffers)
    • you can experience non-repeatable reads and phantom reads (as described in the table above), because Oracle allows other transactions to modify the data while your transaction is running
  • Read-only
    • a query can see only data committed at the beginning of its transaction
    • the transaction itself cannot issue INSERT, UPDATE, or DELETE; other transactions may still modify the data, but those changes stay invisible to the read-only transaction
  • Serializable
    • a query can see only data committed at the beginning of its transaction
    • the only INSERT, UPDATE, and DELETE changes visible during the transaction are those made by the transaction itself
    • non-repeatable reads and phantom reads are not possible here
    • if the transaction tries to modify rows that another transaction changed and committed after it began, Oracle raises ORA-08177 (can’t serialize access for this transaction)

You can, of course, set isolation levels according to your needs 🙂

-- per transaction
SET TRANSACTION ISOLATION LEVEL READ COMMITTED; 
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; 
SET TRANSACTION READ ONLY;

-- for the whole session
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE; 
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED;

Concurrency control

Oracle automatically assures read consistency, so the data you query are based on a “snapshot” as of a certain point in time. Readers never block writers: instead of locking the records, Oracle uses undo data to reconstruct how the rows looked when the query started (statement-level). It does not only apply to a single query but also to a set of queries – a transaction (transaction-level).

Statement-level read consistency

Whenever you issue a query in a database instance, Oracle will enforce “statement-level read consistency”, which means you will get the data as of the time the query started; you will not be able to see dirty data. Changes made by other transactions during the execution are not visible, and therefore you will see only data committed before your query started.

Transaction-level read consistency

Whenever you start a transaction (set of queries) in a serializable mode, Oracle will enforce the “Transaction-level read consistency”. This means the transaction will work with a “snapshot” of data related to the start of the transaction itself. Changes made by other transactions during the execution won’t be visible. The only changes visible to the transaction are those done by the transaction itself.
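A minimal sketch of transaction-level read consistency (the ORDERS table and the row counts are made up for illustration):

-- Session 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT COUNT(*) FROM orders;  -- e.g. 1000

-- Session 2 inserts 50 new rows and commits in the meantime

-- Session 1 (same transaction)
SELECT COUNT(*) FROM orders;  -- still 1000: snapshot as of transaction start
COMMIT;
SELECT COUNT(*) FROM orders;  -- 1050: a new transaction sees a new snapshot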

Deadlock

Deadlock occurs when two transactions/processes each wait for data locked by the other. This means neither transaction can proceed, because each waits for the other one to release the data.

Examine the example below to see step-by-step how deadlock can occur.

  1. Transaction 1 acquires a lock on a table PRODUCT for a product id 10001
  2. Transaction 2 acquires a lock on a table PRODUCT for a product id 10002
  3. Transaction 1 requests a lock for product id 10002 and has to wait (it is held by Transaction 2)
  4. Transaction 2 requests a lock for product id 10001 and has to wait (it is held by Transaction 1) – deadlock

At step 4, Oracle detects the deadlock and raises the ORA-00060 error (“deadlock detected while waiting for resource”).
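The four steps above can be sketched as SQL (the QTY column and the values are made up for illustration):

-- Transaction 1
UPDATE product SET qty = 1 WHERE product_id = 10001;  -- step 1: row lock on 10001

-- Transaction 2
UPDATE product SET qty = 1 WHERE product_id = 10002;  -- step 2: row lock on 10002

-- Transaction 1
UPDATE product SET qty = 2 WHERE product_id = 10002;  -- step 3: waits for Transaction 2

-- Transaction 2
UPDATE product SET qty = 2 WHERE product_id = 10001;  -- step 4: ORA-00060, deadlock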

Detection

Oracle has a smart mechanism to detect deadlocks (as you can see above, it throws an error -> detection), and whenever a deadlock is detected, Oracle rolls back the statement that detected it (step 4 in this case); the rest of the transaction stays intact and the error must be handled by the application.

How to Avoid

There is no magic pill for avoiding deadlocks. If you are experiencing them, you probably either modified the isolation levels of your transactions, or your application has some severe flaws in its architecture and transaction processing management. Thus, I strongly suggest reviewing your application design and sticking to Oracle’s default isolation level.

Deadlocks usually occur when people mess with transaction isolation levels by changing them manually. With the default setup, it’s very unusual to experience a deadlock. Oracle is that smart 🙂

Locking

Locks are implemented in Oracle to provide data consistency and integrity for all users. Without locks, in a multi-user environment, you will quickly face data inconsistency issues.

Oracle provides two basic types of locks:

  • Shared lock
    • Oracle can allocate multiple shared locks for multiple users/applications
  • Exclusive lock
    • only one exclusive lock (row or table) can be allocated per resource

The lock is connected to a row and/or a table. When you issue the following statements:

  • INSERT
  • UPDATE
  • MERGE
  • DELETE
  • SELECT … FOR UPDATE

both locks (table and row) are acquired. The row lock is in exclusive mode (so no one else can mess around with those rows), and the table lock is in a shared mode (row exclusive, SX), so others can still read and eventually manipulate the rest of the rows. This assures the best possible concurrency.
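For instance, SELECT … FOR UPDATE acquires those locks explicitly before any change is made (the QTY column is made up for illustration):

-- lock the rows we intend to change; readers are still not blocked
SELECT product_id, qty
FROM   product
WHERE  product_id = 10001
FOR UPDATE;          -- exclusive row lock + row exclusive (SX) table lock

UPDATE product
SET    qty = qty - 1
WHERE  product_id = 10001;

COMMIT;              -- locks are released at commit/rollback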

Table locks are divided into several categories:

  • Row Share Lock (RS)
    • sometimes referred to as subshare table lock (SS)
    • indicates a lock on the table for specific rows with the intention of updating them
    • best concurrency option (the least restrictive)
  • Row Exclusive Lock (RX)
    • sometimes referred to as subexclusive table lock (SX)
    • indicates that the holding transaction has changed (or locked for update) one or more rows in the table
    • allows other transactions to simultaneously lock rows in the same table for any operation (insert, delete, update, merge, select, ..) for other rows
    • multiple operations can be done within a table at the same time (for different rows)
  • Share Table Lock (S)
    • allows multiple transactions to query the table
    • a single transaction holding this lock can update the table; but if multiple transactions hold it concurrently, none of them can update (they can only query)
  • Share Row Exclusive Table Lock (SRX)
    • sometimes referred to as share-subexclusive table lock (SSX)
    • more restrictive than Share Table Lock
    • allows multiple transactions to query the table
    • but only one transaction at a time can update the table
  • Exclusive Table Lock (X)
    • the most stringent lock
    • only one transaction can have this lock at a time and therefore only one transaction can write to the table at a time
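All of these modes can also be requested explicitly with LOCK TABLE (the table name is made up for illustration):

-- explicit table locks, from least to most restrictive
LOCK TABLE product IN ROW SHARE MODE;            -- RS / SS
LOCK TABLE product IN ROW EXCLUSIVE MODE;        -- RX / SX
LOCK TABLE product IN SHARE MODE;                -- S
LOCK TABLE product IN SHARE ROW EXCLUSIVE MODE;  -- SRX / SSX
LOCK TABLE product IN EXCLUSIVE MODE;            -- X

-- add NOWAIT to fail immediately (ORA-00054) instead of waiting
LOCK TABLE product IN EXCLUSIVE MODE NOWAIT;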

Examine the table below, to see how different commands acquire different locks.

SQL Statement                                  Row Locks   Table Lock   RS   RX   S    SRX   X
SELECT ... FROM table ...                      -           none         Y    Y    Y    Y     Y
INSERT INTO table ...                          Yes         SX           Y    Y    N    N     N
UPDATE table ...                               Yes         SX           Y    Y    N    N     N
MERGE INTO table ...                           Yes         SX           Y    Y    N    N     N
DELETE FROM table ...                          Yes         SX           Y    Y    N    N     N
SELECT ... FROM table ... FOR UPDATE OF ...    Yes         SX           Y    Y    N    N     N
LOCK TABLE table IN ROW SHARE MODE             -           SS           Y    Y    Y    Y     N
LOCK TABLE table IN ROW EXCLUSIVE MODE         -           SX           Y    Y    N    N     N
LOCK TABLE table IN SHARE MODE                 -           S            Y    N    Y    N     N
LOCK TABLE table IN SHARE ROW EXCLUSIVE MODE   -           SSX          Y    N    N    N     N
LOCK TABLE table IN EXCLUSIVE MODE             -           X            N    N    N    N     N

(The RS/RX/S/SRX/X columns show whether another transaction can still acquire that table lock mode concurrently.)

To query all locked objects and corresponding lock modes, please use this query:

SELECT dba_obj.owner
, dba_obj.object_name
, DECODE(lock_obj.locked_mode,
         0, 'NONE',
         1, 'NULL',
         2, '(SS): Row Share Lock',
         3, '(SX): Row Exclusive Table Lock',
         4, '(S): Share Table Lock',
         5, '(SSX): Share Row Exclusive Table Lock',
         6, '(X): Exclusive Table Lock',
         'N/A') as lock_mode_type
, sess.sid
, sess.serial#
, proc.spid
, sess.program
, sess.status
, sess.username
, sess.machine
, lock_obj.os_user_name as os_username
, sess.port
, sess.logon_time
, run_t.time_elapsed
, run_t.time_remaining
, sql_q.sql_fulltext
FROM v$locked_object lock_obj
--
JOIN dba_objects dba_obj
ON lock_obj.object_id = dba_obj.object_id
--
JOIN v$session sess
ON lock_obj.session_id = sess.sid
--
JOIN v$process proc
ON sess.paddr = proc.addr
--
JOIN v$sql sql_q
ON sess.sql_address = sql_q.address
--
LEFT JOIN (
           SELECT sid
           , serial#
           , sum(time_remaining) as time_remaining
           , sum(elapsed_seconds) as time_elapsed
           FROM v$session_longops
           GROUP BY sid
           , serial#
          ) run_t
ON run_t.sid = sess.sid
AND run_t.serial# = sess.serial#
--
WHERE 1=1
--AND sess.status = 'ACTIVE'
AND dba_obj.owner = 'ORACLE_WORLD';

I hope I helped you understand how important locks are and how Oracle treats different types of queries. This should give you a better understanding of what is going on behind the scenes 🙂

Now, you probably realize why all the maintenance and massive DMLs (typically daily loads into a Data Warehouse) happen during the night or in the wee hours (to avoid unnecessary locking and therefore slowdowns during the load).