An existing ETL job at my site is coded with an exclusive lock applied while deleting rows from a table. I presume it was coded this way to ensure no one accessed an empty table between the delete and the insert. There are 1,200 rows in the table, and the delete and re-insert of the current rows (again 1,200) is a sub-second operation. Occasionally, however, another user or application runs a long query holding an access lock on this table. Since that blocks the job trying to get the exclusive lock, I get paged in the middle of the night to resolve it. Has anyone heard of a legitimate reason for coding this way? Our ETL lead is reluctant to change it because the world may come to an end.
Any ideas on how to convince him would be appreciated.
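For context, a minimal sketch of the two patterns (table and database names here are hypothetical). An explicit exclusive lock blocks even access-lock readers; a plain delete/insert inside one transaction takes only a write lock, which access-lock readers can coexist with, and the transaction keeps readers from ever seeing a committed empty table:

```sql
-- Current pattern (as I understand it): explicit exclusive lock,
-- which blocks everyone, including LOCKING ROW FOR ACCESS readers.
LOCKING TABLE etl.ref_table FOR EXCLUSIVE
DELETE FROM etl.ref_table ALL;

-- Possible alternative (BTEQ-style, Teradata session mode):
-- BT/ET makes the delete + insert atomic, so no one sees an
-- empty table after commit.
BT;
DELETE FROM etl.ref_table ALL;            -- default table-level WRITE lock
INSERT INTO etl.ref_table
SELECT * FROM stage.ref_table_new;        -- hypothetical staging table
ET;
-- WRITE locks are compatible with ACCESS locks, so access-lock
-- readers are not blocked (at the cost of a possible dirty read
-- while the transaction is in flight).
```

This is only a sketch under those locking assumptions, not a drop-in replacement; it would need testing against the actual job.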
If you can, use views with LOCKING ROW FOR ACCESS for reading; that's best. Otherwise, have the long-running queries use LOCKING ROW FOR READ on the table. Queries that don't take rowhash-level locks (and so escalate to table-level locks) may also be contributing to the blocking.
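The view approach above might look like this (names are made up for illustration): readers go through a view that carries the access-lock modifier, so they never queue behind the ETL job's write activity.

```sql
-- Reporting view with a built-in access lock; readers accept
-- a possible dirty read in exchange for never blocking (or
-- being blocked by) the delete/insert job.
REPLACE VIEW rpt.v_ref_table AS
LOCKING ROW FOR ACCESS
SELECT *
FROM etl.ref_table;
```

Pointing the long-running queries at `rpt.v_ref_table` instead of the base table is the usual way this gets rolled out, since it doesn't require changing the queries themselves.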
Hello, I have been looking for answers to a basic question about table partitioning in Teradata. Can someone please shed more light? Here it is:
If we range-partition a table on a numeric column and different jobs then load data corresponding to different partition ranges in parallel, do we run into resource contention, or does each job hit only its specific partition and acquire a lock on that partition alone? In short: while loading data into a range-partitioned table, is the lock acquired at the partition level, so multiple programs can run in parallel, each locking its own partition, or is the lock acquired at the whole-table level, putting the parallel jobs into contention?
Example: an error table where each program loads the specific error information it encounters, keyed by error code. If the table is range-partitioned on error codes in ranges of 1,000, will the individual programs, when run in parallel, each hit their specific partition and acquire a lock only on it, or will they acquire a full table lock and release it for subsequent jobs once their error load is done?
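To make the scenario concrete, a sketch of the kind of table I mean (all names and ranges are hypothetical, just to anchor the question):

```sql
-- Range-partitioned error table: each job writes rows whose
-- error_code falls in its own 1,000-wide partition.
CREATE TABLE etl_err.error_log (
    error_code INTEGER NOT NULL,
    error_ts   TIMESTAMP(0) NOT NULL,
    error_msg  VARCHAR(500)
)
PRIMARY INDEX (error_code)
PARTITION BY RANGE_N (error_code BETWEEN 1 AND 10000 EACH 1000);

-- Job A inserts only codes 1-1000, Job B only 1001-2000, etc.
INSERT INTO etl_err.error_log (error_code, error_ts, error_msg)
VALUES (1042, CURRENT_TIMESTAMP(0), 'sample row for partition 2');
```

The question is whether two such inserts running concurrently, each targeting a different partition, contend on locks or proceed independently.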