Chained Rows – Problems And How To Fix Them

Whenever you insert a record into a table, the record is stored in a data block and is assigned a ROWID. If you later update the record, the data inside that block is updated based on the ROWID (which does not change, except in a few rare cases such as ALTER TABLE ... MOVE). However, if the record is longer than the data block size (generally 4 or 8 KB), it has to be chained across more than one block.
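
If you are curious which data block (and datafile) a given row currently sits in, you can decode its ROWID with the built-in DBMS_ROWID package. A minimal sketch; owner.some_table is just a placeholder name:

-- show the datafile and data block each row lives in
SELECT rowid
, dbms_rowid.rowid_relative_fno(rowid) AS file_no
, dbms_rowid.rowid_block_number(rowid) AS block_no
FROM owner.some_table
WHERE rownum <= 5;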

As I explain in the article about Logical storage structures, data blocks are the smallest storage structures in Oracle. I strongly suggest reading that article before you proceed with this one, so you get a grasp of the structures and how they work together.

Let’s examine the Logical storage structures together in the picture below.

This picture should give you a better understanding of how a table is composed, down to the data block level. As you will see further on, chained rows are stored across multiple data blocks.

This is a simple example to help you understand what is happening. You should now know what a “chained row” means, so let’s talk about why chained rows happen, why they are bad, and how to deal with them.

Why do chained rows happen

Rows get chained whenever you insert data that cannot fit into a single data block, or when you update existing data and the new data exceeds the block capacity. The row then has to be stored in multiple blocks instead of one.

Another cause can be the use of LONG or LONG RAW columns in the table, or a table with more than 255 columns.
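
To see chaining happen on purpose, here is a small, made-up demo (the table chain_demo and its column layout are my own; it assumes your block size is 8 KB or smaller, so a roughly 12 KB row cannot fit into one block):

-- a minimal demo: ~12,000 bytes of data cannot fit into an 8 KB block
CREATE TABLE chain_demo (
  c1 VARCHAR2(4000),
  c2 VARCHAR2(4000),
  c3 VARCHAR2(4000)
);

INSERT INTO chain_demo
VALUES (rpad('a',4000,'a'), rpad('b',4000,'b'), rpad('c',4000,'c'));
COMMIT;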

Why is that wrong

Because one record is stored in two (or more) blocks, you need to read all of them instead of just one. That, of course, increases the IO and makes the process more resource-expensive and therefore slower.
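
You can also observe the effect at the instance level: Oracle counts every time a fetch had to follow a row piece into another block in the statistic below (cumulative since instance startup).

SELECT name
, value
FROM v$sysstat
WHERE 1=1
AND name = 'table fetch continued row';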

How to identify chained rows

Right off the bat, I would like to mention that Oracle seemingly has a bug (or at least that is as far as I got) and you will not get the chained rows data from the dba_tables data dictionary view (the chain_cnt column is only populated by the legacy ANALYZE ... COMPUTE STATISTICS command, not by DBMS_STATS).

SELECT owner
, table_name
, chain_cnt
FROM dba_tables 
WHERE 1=1
AND chain_cnt > 0;

You can give it a try, but you will most likely not succeed. I will show you a better and more reliable way to get that information. Buckle up 🙂

Before we start, we have to create a table where we will store the information about tables and their chained rows. Feel free to index it if you plan to analyze a huge number of tables.

-- this is a standard table (do not change the structure)
CREATE TABLE chained_rows (
  owner_name VARCHAR2(30),
  table_name VARCHAR2(30),
  cluster_name VARCHAR2(30),
  partition_name VARCHAR2(30),
  subpartition_name VARCHAR2(30),
  head_rowid ROWID,
  analyze_timestamp DATE
);

As a next step, we will populate this table by gathering data either for a single table or for all tables matching your selection.

-- single table analysis
ANALYZE TABLE owner.table_name LIST CHAINED ROWS INTO chained_rows;

-- multiple table analysis to find chained rows
BEGIN
  FOR i IN (SELECT owner, table_name 
            FROM dba_tables 
            WHERE 1=1 
            AND owner IN ('SALES','CRM','RISK','ORACLE_WORLD')
           )
  LOOP
    EXECUTE IMMEDIATE 'ANALYZE TABLE '||i.owner||'.'||i.table_name||' LIST CHAINED ROWS INTO chained_rows';
  END LOOP;
END;
/

Now that we know which tables contain chained rows, let’s take a closer look at how serious the situation is. To do so, run the following command.

SELECT cr.owner_name
, cr.table_name
, count(*) as chained_rows
, dt.num_rows
, round(count(*) / dt.num_rows,2)*100 as chained_ratio
FROM chained_rows cr
--
LEFT JOIN (SELECT owner
          , table_name
          , sum(num_rows) as num_rows
          FROM dba_tables
          GROUP BY owner
          , table_name
          ) dt
ON dt.owner = cr.owner_name
AND dt.table_name = cr.table_name
--
GROUP BY cr.owner_name
, cr.table_name
, dt.num_rows
--
ORDER BY 3 DESC;

Regardless of the ratio (low or high), you should fix all chained rows in your tables; ideally, there should be none. Aim for the best 🙂

As I mentioned above, the usual block size is 4 or 8 KB. You can check what the value is in your database (note that db_block_size is reported in bytes).

SELECT value as size_bytes
, value/1024 as size_kb
FROM v$parameter
WHERE 1=1
AND name = 'db_block_size';

The last step before you can start fixing the issue is to check whether it makes sense to de-chain the rows or not.

If, as a matter of fact, the row size of your table is bigger than the block size, there is nothing you can do about it and your rows will stay chained forever.

To check the size of your rows, use the following command.

-- list down all columns in the table and sum them all up together
SELECT nvl(vsize(col_1),0)
        +nvl(vsize(col_2),0)
        +nvl(vsize(col_3),0)
        +nvl(vsize(col_4),0)
        +nvl(vsize(col_5),0) as row_size
      , count(*) -- group them together
FROM owner.table_name
WHERE rowid IN (SELECT head_rowid FROM chained_rows WHERE owner_name = 'ORACLE_WORLD' AND table_name = 'AFFECTED_TABLE')
GROUP BY nvl(vsize(col_1),0)
        +nvl(vsize(col_2),0)
        +nvl(vsize(col_3),0)
        +nvl(vsize(col_4),0)
        +nvl(vsize(col_5),0)
ORDER BY 1 DESC;

Now you know whether you have to do something, or whether you can go back to your work 🙂

How to fix chained rows

All rows whose size is below your block size (4 KB = 4096 bytes; 8 KB = 8192 bytes) can and should be un-chained. There are several options to do so.

  • Recreate the table with CTAS and drop the existing one (rename appropriately)
    • use with a high density of chained rows
  • Save chained rows in a TEMP table, delete them from the affected table and re-insert them again
    • use with a low density of chained rows
  • MOVE the table
    • highly recommended

Recreate the table

-- if the chained rows ratio is higher than 30-40% then it is better to rebuild the table
CREATE TABLE owner.affected_table_fixed AS
SELECT *
FROM owner.affected_table
-- don't forget to define partitions, indexes, and other constraints here
;


-- control check: both queries must return no rows
SELECT * FROM affected_table
MINUS
SELECT * FROM affected_table_fixed;

SELECT * FROM affected_table_fixed
MINUS
SELECT * FROM affected_table;

RENAME affected_table TO affected_table_bak;
RENAME affected_table_fixed TO affected_table;

DROP TABLE affected_table_bak;

DELETE and re-insert

-- if the chained rows ratio is less than 30% then it's ok to backup rows, delete and reinsert again
CREATE TABLE tmp AS
SELECT * FROM affected_table 
WHERE rowid IN (SELECT head_rowid 
                 FROM chained_rows 
                 WHERE owner_name = 'ORACLE_WORLD' 
                 AND table_name = 'AFFECTED_TABLE');
-- if foreign keys reference the affected rows, disable them first (fk_column is a placeholder constraint name)
ALTER TABLE affected_table DISABLE CONSTRAINT fk_column;

DELETE FROM affected_table 
WHERE rowid IN (SELECT head_rowid 
               FROM chained_rows 
               WHERE owner_name = 'ORACLE_WORLD' 
               AND table_name = 'AFFECTED_TABLE');

INSERT INTO affected_table 
SELECT * FROM tmp;

COMMIT;

-- if you disabled a foreign key above, enable it again
ALTER TABLE affected_table ENABLE CONSTRAINT fk_column;

MOVE table

-- PCTFREE + PCTUSED must be less than 100
ALTER TABLE affected_table MOVE 
PCTFREE 15  -- increase this according to your needs
PCTUSED 50  -- increase this according to your needs
; 

-- after the move, all indexes on the table become UNUSABLE and have to be rebuilt
ALTER INDEX idx_name REBUILD; 
ALTER INDEX idx_another REBUILD; 

-- statistics should be re-gathered as well (ANALYZE works, but DBMS_STATS is the preferred tool)
ANALYZE TABLE affected_table COMPUTE STATISTICS;
-- or: EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'OWNER', tabname => 'AFFECTED_TABLE');
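
To verify that the move really helped, you can re-run the analysis from earlier against the helper table (owner and table names are placeholders, as before):

TRUNCATE TABLE chained_rows;

ANALYZE TABLE owner.affected_table LIST CHAINED ROWS INTO chained_rows;

-- ideally no rows; rows genuinely larger than the block will still show up
SELECT count(*) AS still_chained
FROM chained_rows
WHERE 1=1
AND owner_name = 'ORACLE_WORLD'
AND table_name = 'AFFECTED_TABLE';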

Congratulations, you have fixed your problems 🙂 However, there is one more thing you need to do to make sure you won’t get them anymore. Read on!

How to prevent chained rows

If you used the ALTER TABLE ... MOVE with an adjusted PCTFREE, you should be safe by now. If you did not, let’s work on it and set a higher PCTFREE on your table.

ALTER TABLE schema.table PCTFREE 15;
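-- note: the new PCTFREE only applies to blocks used from now on; existing blocks keep their old layout until the table is reorganized (e.g. with MOVE)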

I have already briefly touched on PCTFREE in the article about How to save some space, where I, ironically, recommended setting PCTFREE to 0. Here, on the contrary, you need to increase it 🙂

Let me now elaborate a little on PCTFREE and PCTUSED; both are critical when it comes to performance optimization, and a thorough understanding of them will help you decide where the problem behind your high IO might be.

PCTFREE

This parameter indicates how much space in each block is reserved for future updates. If, for example, PCTFREE is set to 20(%), Oracle will stop inserting new records into a block once it is 80% full, leaving the remaining 20% (PCTFREE=20) for future updates.

In the article I referred to above, you can read that decreasing PCTFREE will save some space. However, if you then update those records, the updates will create either migrated or chained rows. That is a clear indication that PCTFREE is too low and needs to be increased.
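
If you want to check what your tables currently use, both settings are exposed in the data dictionary (in ASSM tablespaces PCT_USED is ignored and shows as NULL); the owner below is just an example:

SELECT table_name
, pct_free
, pct_used
FROM dba_tables
WHERE 1=1
AND owner = 'ORACLE_WORLD';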

PCTUSED

This parameter indicates whether a block can be reused or is already full enough, so that new records get allocated to a new data block. If, for example, PCTUSED is set to 50(%), then whenever the block is less than 50% full it goes onto a “freelist”, which lists the blocks available for writing. Newly inserted records will go to that block, because Oracle considers it “empty enough” for inserting. Please note that Oracle only inserts data into data blocks from the “freelist”. If the block is more than 50% full, Oracle will insert new data into a new block.

This has a direct impact on INSERT statements, and a wrongly configured PCTUSED will significantly deteriorate performance by increasing the IO. The higher the PCTUSED is, the sooner partially filled blocks go back onto the freelist and get their free space reused, but also the more often blocks move on and off the freelist during inserts. That means higher IO.

Increase your PCTUSED to use your disk space efficiently. General practice says not to set it below 40 (anything lower would be a pure waste of space). PCTUSED should be based on the avg_row_len of your data (from dba_tables) compared to the data block size.
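
A quick way to compare your rows to the block size is the avg_row_len column in dba_tables; the sketch below assumes an 8 KB block and an example owner:

SELECT table_name
, avg_row_len
, round(avg_row_len / 8192 * 100, 1) AS row_pct_of_block
FROM dba_tables
WHERE 1=1
AND owner = 'ORACLE_WORLD'
ORDER BY avg_row_len DESC;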

Conclusion

Chained rows are always bad and are a consequence of inappropriate design. Always keep your rows unchained and set your PCTFREE and PCTUSED wisely.

Some hints to get the right values:

  • PCTFREE = round(100-((SPARE_ROWS * avg_row_len)/(BLOCK_SIZE/10)))
  • PCTUSED = round((SPARE_ROWS * avg_row_len)/(BLOCK_SIZE/10))

General rules:

  • When you expect many updates on your table, set PCTFREE to a higher value (40-50)
  • When you do not expect updates (archive tables, log tables, ...), set PCTFREE to a low value (0-5)
  • The lower the PCTUSED is, the less IO you will have and therefore the faster your system will be