Suppose you have a table with several non-unique secondary indexes (NUSIs). When TPump loads the table, should you expect that each row’s INSERT will cause a table-level lock on each of the secondary index sub-tables? And if so, couldn’t this create a lot of blocking across sessions? To answer that, let’s take a closer look at what happens when NUSI sub-tables are updated.

First, it’s natural to assume that because read access via a NUSI always sets a table-level lock, updating a NUSI would do the same. But that is not the case. When loading with TPump you can really cross NUSI update contention off your checklist, and here’s why.
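As a concrete sketch (the table and column names below are made up for illustration and are not from the original post), picture a table carrying two NUSIs; every single-row INSERT that a TPump session applies also has to maintain both NUSI sub-tables:

    CREATE TABLE sales_txn
        ( txn_id     INTEGER NOT NULL
        , store_id   INTEGER
        , zip_code   CHAR(5)
        , txn_amount DECIMAL(10,2) )
    PRIMARY INDEX (txn_id)
    , INDEX (store_id)     -- NUSI #1: sub-table rows live on the same AMP as the base rows
    , INDEX (zip_code);    -- NUSI #2

    /* The style of single-row DML a TPump session applies;
       the :fields are host variables filled from the input records. */
    INSERT INTO sales_txn (txn_id, store_id, zip_code, txn_amount)
    VALUES (:txn_id, :store_id, :zip_code, :txn_amount);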

You may have had experience loading with TPump against a table that has a join index. Under some circumstances, updating the join index structure can lead to blocking when different sessions of a TPump job update the same join index primary index value. But NUSI updates are structurally less contentious, and won’t block.

For one thing, NUSIs are AMP-local, with each AMP having a dedicated NUSI sub-table for its rows. You can insert a row on your AMP with a NUSI value of ‘90230’ and I can insert a different row on my AMP with the same NUSI value, without stepping on each other's toes.
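To make that concrete with the hypothetical table above, two TPump sessions can insert rows that share a NUSI value without ever touching the same sub-table:

    /* Session 1: the row hashes to one AMP on its primary index value */
    INSERT INTO sales_txn VALUES (1001, 17, '90230', 25.00);

    /* Session 2: a different primary index value, so (in general) a different AMP */
    INSERT INTO sales_txn VALUES (2002, 42, '90230', 19.95);

    /* Both rows carry the NUSI value '90230' for zip_code, but each NUSI entry
       is written into the sub-table on the AMP that owns its base row, so the
       two sessions do not collide. */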

In addition, a NUSI doesn’t undergo traditional locking the way a join index does. When you insert a row using TPump, the row ID of the row being inserted needs to be added to one of the NUSI sub-table rows on that AMP. When the TPump base-table row INSERT takes place, the file system places an internal, temporary lock on the data block within the index sub-table where the NUSI maintenance will take place. These internal locks are not transaction-type locks that can be explicitly specified in a LOCKING ROW modifier, and they don’t show up in the explain text. Rather, they are one level lower and are held very briefly, just for the moment in time when the NUSI sub-table is actually updated, not for the entire step, and most definitely not until End Transaction.
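To illustrate the distinction (still using the made-up table), a LOCKING ROW modifier can only request transaction-type locks for primary-index access to the base table; there is nothing comparable you can write for the NUSI sub-table’s data block:

    /* A transaction-type row-hash lock that can be requested explicitly,
       and that EXPLAIN will show, applies to base-table access: */
    LOCKING ROW FOR WRITE
    SELECT txn_amount FROM sales_txn WHERE txn_id = 1001;

    /* There is no LOCKING syntax for the NUSI sub-table; its data-block lock
       is taken internally by the file system, held only while the index row
       is maintained, and never appears in the EXPLAIN text. */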

If more than one TPump INSERT needs to update the same NUSI data block on the same AMP at the same point in time, these updates are handled serially, with only the briefest of delays to any one of them.

The other really great thing about NUSI updates, if you’ve got time for one more point, is that there is no transient journal overhead, as there is with unique secondary index updates. The inserts to the NUSI sub-table are reversed as part of the rollback of the base-table insert, since they are conveniently on the same AMP.
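As a tiny illustration (again with the hypothetical table, in a Teradata-mode session), aborting the transaction that inserted a row undoes the base-table row and its NUSI sub-table entries together, since both sit on the same AMP:

    BT;      /* begin explicit transaction */
    INSERT INTO sales_txn VALUES (3003, 17, '90230', 12.50);
    ABORT;   /* rolls back the base-table insert; the NUSI sub-table
                entries for this row are reversed along with it */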
 

Discussion
carrie
12 Jul 2010

Please replace the last paragraph in the above text (which is outdated information) with the following text:

Enhancements made in V2R6.2 improved the performance of rollbacks on a table with a NUSI by replacing row-at-a-time NUSI maintenance with block-at-a-time processing, wherever possible.

Prior to those enhancements, a NUSI update did not incur transient journal overhead, but the rollback was always performed row-at-a-time. In order to benefit from block-optimized NUSI rollbacks, which are quicker, every row ID insert or delete that triggers NUSI maintenance is now recorded in the transient journal as a NUSI change row.

These NUSI change rows are used, if required, for a MERGE INTO or MERGE DELETE during rollback processing.

vasudev
22 Jul 2013

Hi Carrie,
I am using TD 12 and have a table with many SIs that is loaded using TPump. If the number of SIs on the table is reduced, can this improve TPump performance? Please advise.
