I have created a view:

LOCKING base_table FOR ACCESS
CREATE VIEW base_table_v AS SEL * FROM base_table;

There are 2 scenarios:

1) I am updating through the view. Which locks will be placed on the view and on the base table?
2) I am renaming the base table while somebody is reading it through the view, i.e.:

1st) SEL * FROM BASE_TABLE_V is running; 2nd) RENAME TABLE BASE_TABLE TO BASE_DUP comes in...
1st) RENAME TABLE BASE_TABLE TO BASE_DUP is running; 2nd) SEL * FROM BASE_TABLE_V comes in.

What should happen in the above 2 cases?
Also, please shed some light on downgrading and upgrading of locks using the LOCKING modifier.
I am planning to change the view definition as:

LOCKING base_table FOR EXCLUSIVE
REPLACE VIEW base_table_v AS SEL * FROM base_table;

What extra steps do I need to take?
Note that in your examples the LOCKING modifier is NOT part of the view definition; rather, it is being applied prior to executing the DDL statement (CREATE or REPLACE).
While updating "through" a view, there is no lock on the view itself. There will be a write lock on the table being updated; if the rows are being accessed via RowHash (e.g. PI access) then it will be a RowHash write lock, otherwise a full table write lock.
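As a hedged sketch of that distinction (the table, view, and column names here are hypothetical, assuming base_table has a primary index on id):

```sql
-- Assumed setup: base_table has a primary index on id
LOCKING base_table FOR ACCESS
CREATE VIEW base_table_v AS SEL * FROM base_table;

-- PI value supplied: only a row-hash WRITE lock on base_table,
-- no lock on the view object itself
UPDATE base_table_v SET status = 'done' WHERE id = 42;

-- No PI predicate: a full-table WRITE lock on base_table
UPDATE base_table_v SET status = 'done' WHERE created_dt < DATE '2020-01-01';
```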
Renaming a table requires an EXCLUSIVE lock. In the first scenario the RENAME waits for the SELECT to finish and release its READ (or ACCESS) lock. In the second scenario, the RENAME completes but the SELECT fails (because the table named in the view will no longer exist).
It's not clear to me why you'd want to lock the base table before creating a view of that table. What are you really trying to accomplish?
REPLACE VIEW base_table_v AS LOCKING base_table FOR ACCESS SEL * FROM base_table;
My main point relates to one of my jobs.

The first step renames tables, rotating through 3 names:

RENAME TABLE BASE_TABLE TO BASE_TEMP1;
RENAME TABLE BASE_TEMP TO BASE_TABLE;  /* BASE_TEMP contains the updated data */
RENAME TABLE BASE_TEMP1 TO BASE_TEMP;
Generally, 2 scenarios happen:

1) Another job selecting data from BASE_TABLE_V runs for a long time, and we have to kill either the RENAME or the SELECT job, as neither will move for hours. I don't know how that happens, given that Teradata has such good deadlock resistance.

2) Sometimes the second job fails with a message (strangely, the message says BASE_TEMP1 has been restructured, instead of BASE_TABLE!). It then resubmits automatically, and on the second attempt it succeeds and the data comes back. I don't know how that happens, since the table has been renamed, nor which table it selected from on the second attempt (BASE_TABLE, BASE_TEMP1, or BASE_TEMP?).

Can you please explain how these 2 scenarios can exist, and also suggest a solution to this problem? Your help will be very much appreciated.
The RENAME will be blocked by the SELECT. I can't say why the SELECT is not releasing the lock. For example, you could be running in ANSI mode and not have issued an explicit COMMIT. Or you could have a transaction that issues multiple requests prior to the commit, with the RENAME requesting its lock in between.
The view name is resolved first to the table name and then to the table ID when the query is parsed, then the optimizer builds a query plan using the ID. DDL statements (including rename) increment an internal "version number". If the rename happens during query planning (after resolving the ID and before the PE dispatches a request to the AMP to acquire the lock), the AMP detects that the version number is different and triggers the "table has been restructured" error. It doesn't know that only the name has changed. By the time the query has been resubmitted, the second RENAME is likely done also and your view in effect references the new BASE_TABLE (formerly BASE_TEMP). If not, you would get an "object does not exist" error.
One possible way around this is not to rename the tables at all, but to use REPLACE VIEW to redirect queries to the other table.
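A minimal sketch of that approach, assuming two hypothetical copies of the table (base_table_a holding current data, base_table_b holding the freshly loaded data). REPLACE VIEW is a single short DDL statement, so there is no multi-statement rename window during which the underlying table ID changes under a running query:

```sql
-- Load the new data into base_table_b, then repoint the view:
REPLACE VIEW base_table_v AS
LOCKING base_table_b FOR ACCESS
SEL * FROM base_table_b;
-- Next cycle, load into base_table_a and REPLACE VIEW back to it.
```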
Hello, I have been looking for an answer to a basic question on table partitioning in Teradata. Can someone please shed more light? Here it is:
If we do range partitioning on a table based on numeric values, and different jobs then load data for different partition ranges in parallel, do we get into resource contention, or does each job hit only its specific partition and acquire a lock on that partition alone? In short: while loading data into a range-partitioned table, is the lock acquired at the partition level (so multiple programs can run in parallel, each locking its own partition), or at the whole-table level (so the parallel jobs contend with each other)?

Example: an error table where each program loads its specific error information based on error code. If the table is range-partitioned on error codes in ranges of 1000, will individual programs running in parallel each hit their own partition and lock only that, or will each acquire a complete table lock and release it for subsequent jobs when its error load is done?
Hi Bodhi - kind advice: for any new question, start a new thread. It's beneficial for you and for anyone seeking similar answers. On your question, here is a brief overview:
There are four types of locks:
Exclusive - prevents any other type of concurrent access
Write - prevents other reads, writes, and exclusive locks
Read - prevents writes and exclusive locks
Access - prevents exclusive locks only
Locks may be applied at three levels:
Database - applies to all tables/views in the database
Table/View - applies to all rows in the table/view
Row Hash - applies to all rows with the same row hash
Now, if you are doing a DML operation that specifies the table's PI value, a row-hash WRITE lock will be taken, which still allows similar DML operations on other PI values. So a partial lock is possible, allowing multiple updates on different PI values at the same time. Mind you, if the same PI value is used in two different UPDATE statements, the second will queue behind the first's lock.
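A sketch of the behaviour described above, using a hypothetical error table (the names and partition ranges are made up for illustration):

```sql
CREATE TABLE error_log
( error_code INTEGER
, error_msg  VARCHAR(255)
)
PRIMARY INDEX (error_code)
PARTITION BY RANGE_N (error_code BETWEEN 1 AND 10000 EACH 1000);

-- Session 1: row-hash WRITE lock on the hash of error_code 1001
INSERT INTO error_log VALUES (1001, 'bad record');

-- Session 2: different error_code, hence a different row hash;
-- this does not block session 1
INSERT INTO error_log VALUES (2001, 'missing key');

-- Two sessions writing the SAME error_code contend for the same
-- row-hash lock and queue behind each other.
```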