DEADLOCK forever ...

Enthusiast

Hi all,

Trying to explain a deadlock issue is definitely not easy.

A session S1 (TMODE = Teradata), having started a transaction with BT, is sequentially sending a lot of UPDATE statements on table T1,

and a session S2 comes in right in the middle with a "SELECT * FROM T1" (no locking modifier).

The S2 query is blocked while trying to acquire its READ lock, but it is aborted after 4 minutes because of a deadlock.

Viewpoint shows that S1 is blocked by S2 and blocking S2,

and that S2 is blocked by S1 and blocking S1. Why is S2 blocking S1? The DBQL log shows that the UPDATEs keep going (until an abort is issued by the user).

Is the open transaction (BT) of S1 the only reason a deadlock is detected, in spite of the UPDATEs still running?

Thanks for your comments,

Pierre 

 

5 REPLIES
Senior Apprentice

Re: DEADLOCK forever ...

Was this job a single session using a high PACK factor?

You'd better apply an explicit table-level WRITE lock (LOCKING TABLE T1 FOR WRITE) instead of the default rowhash locks, which happily cause deadlocks :-)
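
A minimal sketch of that, assuming Teradata (BTET) session mode; pk_col and col1 are placeholder column names:

BT;
LOCKING TABLE T1 FOR WRITE
UPDATE T1 SET col1 = col1 + 1 WHERE pk_col = 123;   -- first request acquires the table-level WRITE lock
UPDATE T1 SET col1 = col1 + 1 WHERE pk_col = 456;   -- the lock is held until ET, so the SELECT simply waits
ET;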

Teradata Employee

Re: DEADLOCK forever ...

Locks are complicated in a massively parallel system...

 

When a transaction is doing multiple single-row operations, each of those operations acquires a write lock on the specific rowhash as that individual row operation is executed. Further rowhash locks are acquired if USIs or JIs need to be updated as a result of the row update. Each of those rowhash locks may be on a different AMP. 

 

When the SELECT requests a full table read lock, that request is sent to every AMP. On AMPs which have not yet seen an update request, the request is granted. On AMPs which already have done an update operation and have a rowhash lock in place, the read lock has to wait. 

 

When another update in the transaction then happens and is routed to an AMP that has not done an update yet, the requested rowhash lock is blocked by the already-granted read lock and a deadlock has occurred. Teradata has a deadlock resolution process that runs on a periodic basis, looks for these situations and chooses a transaction to abort to resolve the deadlock.
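
To make the sequence concrete, here is a rough sketch of such an interleaving (AMP numbers, row values and column names are purely illustrative):

  t1  S1: BT; UPDATE T1 SET col1 = 1 WHERE pk_col = 100;   -- rowhash WRITE lock granted on AMP 3
  t2  S2: SELECT * FROM T1;                                -- table READ lock granted on the other AMPs,
                                                           --   but queued behind S1's rowhash lock on AMP 3
  t3  S1: UPDATE T1 SET col1 = 1 WHERE pk_col = 200;       -- rowhash WRITE lock requested on AMP 2,
                                                           --   queued behind S2's granted READ lock
  t4  deadlock: S1 waits on S2 (AMP 2) while S2 waits on S1 (AMP 3);
      the periodic deadlock detector eventually aborts one of them.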

 

Dieter mentions one solution. Placing a LOCKING table FOR WRITE prior to the update operations will lock the table on all AMPs and the select will simply wait until the update is done. 

Another solution would be to use a LOCKING FOR ACCESS modifier on the SELECT. In this case, the select will read the current state of the data without waiting for the write locks from the transaction.
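
For the S2 side that would look like this; note that an access lock reads through the write locks, so it may return uncommitted (dirty) data:

LOCKING TABLE T1 FOR ACCESS
SELECT * FROM T1;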

Or the updates can be done in smaller transactions that are committed more quickly, to narrow the window in which the deadlock can occur.
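
A sketch of that, again with placeholder columns; each ET releases the rowhash locks, so the window in which the SELECT and the updates can cross is much smaller:

BT;
UPDATE T1 SET col1 = col1 + 1 WHERE pk_col BETWEEN 1 AND 1000;
ET;
BT;
UPDATE T1 SET col1 = col1 + 1 WHERE pk_col BETWEEN 1001 AND 2000;
ET;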

 

Also, there is a DBS Control parameter that adjusts the deadlock timeout if resolving the deadlock in 4 minutes is not quick enough.

Enthusiast

Re: DEADLOCK forever ...

Hi,

 

I have to deal with the following situation: My client developed a system consisting of a web frontend and Teradata as the database. Performance is terrible. Hundreds of people can use the tool at the same time; each time they change a field in the web frontend and navigate to another screen, a stored procedure is called which contains about ten update, delete and insert steps. 

The client expects stable run times, not longer than a couple of seconds. As many full table scans are done, the only chance I saw was working with indexes.

I started to optimize by adding a lot of join indexes (I have to say that all the tables are tiny) to give direct access to the data. As the tool needs different paths to the data, each of the join indexes contains the same data but a different primary index. 
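
Roughly what I mean, with made-up table and column names: single-table join indexes that repeat the base table's columns under a different primary index, e.g.

CREATE JOIN INDEX order_ji_item AS
SELECT cust_id, item_id, qty
FROM order_tbl
PRIMARY INDEX (item_id);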

The tool does a lot of single-item updates, deletes and inserts. While this improved performance a lot, I ran into another problem: I suddenly get a huge number of deadlocks in the stored procedure, especially when several people use the tool at the same time.

 

Is there any chance that I can avoid the deadlocks? If I put WRITE LOCKS on the update statements, could this be a solution?

 

Thanks in Advance,

Roland

Roland Wenzlofsky
Teradata Employee

Re: DEADLOCK forever ...

It really is best if you start another topic for a new conversation; this is related but not really the same as the original...

 

The deadlocks could be eliminated by placing table-level write locks on all tables that will be accessed in the transaction. That will need to be done in a single request containing all of the requested locks, e.g.:

LOCKING TABLE T1 FOR WRITE LOCKING TABLE T2 FOR WRITE ...
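
Inside the transaction that could look roughly like this (T1, T2 and the column names are just placeholders); the LOCKING modifiers are chained onto the first request so all of the locks are requested together and then held until ET:

BT;
LOCKING TABLE T1 FOR WRITE
LOCKING TABLE T2 FOR WRITE
UPDATE T1 SET col1 = col1 + 1 WHERE pk_col = 123;
UPDATE T2 SET status_col = 'DONE' WHERE pk_col = 123;
DELETE FROM T2 WHERE status_col = 'OBSOLETE';
ET;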

 

Alternatively, a single table could be selected as the sequencing table for the entire application and a single LOCKING sequencing_table FOR WRITE could precede any operations in any of the transactions. This would simplify the locking statements significantly and make it so that every execution would simply queue on that lock.
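
As a sketch (seq_tbl and the other names are placeholders), every transaction in the application would then simply start with the same modifier:

BT;
LOCKING TABLE seq_tbl FOR WRITE
UPDATE order_tbl SET qty = qty - 1 WHERE cust_id = 42 AND item_id = 7;
DELETE FROM basket_tbl WHERE cust_id = 42;
ET;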

 

The tradeoff for doing one of these is that the transactions will be serialized. If the transactions are fast enough and the arrival rate is not too high and there is no waiting for user input or something before the transaction is committed, then it may be sufficient to run them sequentially. Of course if they are deadlocking, then they must be accessing some of the same records for each transaction and thus they have to be sequential anyway.

 

How tiny is tiny? There are some tricks, like forcing a table onto a single AMP, that can be quite effective for very tiny tables.
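
One common variant of that trick (table and column names are made up, and it only makes sense for genuinely tiny tables): give the table a primary index on a column that always holds the same value, so the whole table ends up on a single AMP:

CREATE TABLE tiny_lookup
( one_amp_key  BYTEINT NOT NULL DEFAULT 1,   -- always 1, so all rows hash to the same AMP
  code         INTEGER NOT NULL,
  descr        VARCHAR(100)
) PRIMARY INDEX (one_amp_key);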

Enthusiast

Re: DEADLOCK forever ...

Thanks Todd, 

thanks Dieter,

really appreciate your wise and clear comments.

 

Pierre