All Forums UDA
RGlass 35 posts Joined 09/04
11 Mar 2008
Exclusive Lock

An existing ETL job at my site is coded with an exclusive lock applied to delete rows from a table. I presume it was coded this way to ensure no one accessed an empty table between the delete and the insert. There are 1,200 rows in the table. The delete and re-insert of the current rows (again 1,200) is a subsecond operation; however, occasionally another user or application runs a long-running query with an access lock on this table. Since this blocks the job trying to get the exclusive lock, I get paged in the middle of the night to resolve it.

Has anyone heard of a legitimate reason for coding this way? Our ETL lead is reluctant to change it because the world may come to an end. Any ideas on how to convince him would be appreciated.

TBob
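A minimal sketch of the two locking choices, with hypothetical table names (`ref_codes`, `stg_ref_codes`): in Teradata an EXCLUSIVE table lock conflicts with everything, including ACCESS-lock readers, while a WRITE lock is compatible with ACCESS locks, so dirty-read queries would not block the job.

```sql
-- Current pattern (names are illustrative): the EXCLUSIVE lock
-- waits behind even ACCESS-lock readers.
LOCKING TABLE ref_codes FOR EXCLUSIVE
DELETE FROM ref_codes;

INSERT INTO ref_codes
SELECT * FROM stg_ref_codes;

-- Possible alternative: a WRITE lock still serializes against READ
-- locks, but is compatible with ACCESS locks, so a long-running
-- ACCESS-lock query no longer blocks the delete.
LOCKING TABLE ref_codes FOR WRITE
DELETE FROM ref_codes;
```

Whether a plain WRITE lock is acceptable depends on whether readers can tolerate seeing the table empty between the delete and the insert, which is exactly the concern raised below.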

Someshnr 53 posts Joined 06/07
18 Mar 2008

For a delete/insert task I do not think they should put an exclusive lock. If they are concerned about a dirty read between the delete and the insert, they can put the code inside a BT/ET block, something like:

    BT;
    DELETE FROM Table...;
    .IF ERRORCODE <> 0 THEN .GOTO Continue;
    INSERT INTO Table ....;
    .IF ERRORCODE <> 0 THEN .GOTO Continue;
    ET;
    .LABEL Continue

Otherwise, if your company uses an external scheduler for jobs, you may consider changing the dependencies (making one job dependent on the other).

tanova535 7 posts Joined 01/06
28 Apr 2008

If you can have readers use views with LOCKING ROW FOR ACCESS, that is best; otherwise have the long-running queries use LOCKING ROW FOR READ on the table. Queries that do not take row-hash-level locks (and so escalate to table-level locks) may also be contributing to the blocking.
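The view approach above can be sketched as follows; the database and view names are illustrative only. Readers who query through the view get a dirty-read ACCESS lock, so they neither block nor are blocked by the ETL job's write:

```sql
-- Hypothetical access-lock view for reporting users.
-- LOCKING ROW FOR ACCESS permits dirty reads, so this view's
-- queries coexist with concurrent WRITE locks on the base table.
REPLACE VIEW rpt.ref_codes_v AS
LOCKING ROW FOR ACCESS
SELECT * FROM etl.ref_codes;
```

The trade-off is that an ACCESS-lock reader may see uncommitted or in-flight data, which is usually acceptable for reporting but not for financial reconciliation.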

Bodhi.Sattva 17 posts Joined 07/10
08 Jul 2010

Hello,
I have been looking for answers to a basic question on table partitioning in Teradata. Can someone please shed some light? Here it is:

If we do range partitioning on a table based on numeric values, and different jobs then load data corresponding to different partition ranges in parallel, do we get into a resource-contention scenario, or does each parallel job hit its specific partition and acquire a lock only on that partition? In short: while loading data into a partitioned table (in this case, range-partitioned), is the lock acquired at the partition level, so that multiple programs can run in parallel each locking an individual partition, or is the lock acquired on the whole table, so that the parallel jobs contend with each other?

Ex: An error table where each program loads the specific error information it encounters, based on error code. If the table is range-partitioned on error codes in ranges of 1,000, will the individual programs, when run in parallel, hit their specific partitions and acquire locks only on those, or will they acquire a lock on the complete table and release it for subsequent jobs when their error load is done?
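The scenario described can be sketched with illustrative DDL (table name, columns, and ranges are assumptions, not from the original post). Note that in Teradata, DML locks are taken at the row-hash or table level, not per partition, so partitioning by itself does not give the per-partition locking asked about; single-row inserts still take row-hash write locks, which do allow the parallel loaders to coexist.

```sql
-- Illustrative error table, range-partitioned on error_code
-- in bands of 1,000 (all names and ranges hypothetical).
CREATE TABLE err_log
( error_code INTEGER NOT NULL
, error_ts   TIMESTAMP(0)
, error_txt  VARCHAR(1000)
)
PRIMARY INDEX (error_code)
PARTITION BY RANGE_N (error_code BETWEEN 1 AND 10000 EACH 1000);
```

With this layout, parallel jobs doing row-at-a-time or small-batch inserts contend only on row hashes, while utility loads or all-rows DML take table-level write locks regardless of which partition the rows land in.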

Please advise.

"The Day You Cease to Learn, is the Day You Cease to Exist..."

callnishanth 4 posts Joined 05/09
15 Feb 2011

Bringing the exclusive lock back into the picture, the problem I'm facing is that the ETL job uses the MultiLoad utility. This requires an exclusive lock, which I believe we cannot control.

The ETL job just waits in the queue until the user has completed his query, which affects our batch significantly. Any help will be appreciated.

emilwu 72 posts Joined 12/07
22 Feb 2011

You can use this approach: bring your data into an empty staging table using FastLoad, then:

    MERGE INTO TARGETTABLE USING FASTLOADSTAGING ON ()
    WHEN MATCHED THEN UPDATE ...
    WHEN NOT MATCHED THEN INSERT ...;

The user SQL should go through a view with a row-level access lock applied (LOCKING ROW FOR ACCESS SELECT * FROM BASETABLE).
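A fuller sketch of this flow, with illustrative table, column, and view names (none are from the original post): FastLoad requires an empty target, so the data lands in a staging table first, and the MERGE then takes a write lock rather than the exclusive lock MultiLoad needs, so ACCESS-lock readers are not blocked.

```sql
-- Step 2 of the suggested flow: merge the FastLoad staging table
-- into the target. MATCHED rows are updated, unmatched rows inserted.
MERGE INTO tgt_table AS t
USING stg_table AS s
  ON t.pk_col = s.pk_col
WHEN MATCHED THEN UPDATE
  SET col1 = s.col1
WHEN NOT MATCHED THEN INSERT (pk_col, col1)
  VALUES (s.pk_col, s.col1);

-- Step 3: readers query through an access-lock view, so the
-- MERGE's write lock never queues behind their long-running queries.
REPLACE VIEW rpt.tgt_table_v AS
LOCKING ROW FOR ACCESS
SELECT * FROM tgt_table;
```

The key point for the original MultiLoad problem is that this pattern replaces the utility's exclusive/utility locks with an ordinary SQL write lock, which coexists with ACCESS-lock readers.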
