Not being familiar with Teradata, I have the following issue.
An ETL system requires that tables can be read and written at the same time by the same user over different connections. Deadlocks occur, seemingly at random. Is there a way to minimize them using isolation levels such as dirty reads, shadow reads, and so on?
Any information is appreciated in my search for an answer.
One option is to modify all your SELECT statements to include a LOCKING FOR ACCESS modifier.
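A minimal sketch of what that looks like, assuming a hypothetical SalesDB.Orders table (LOCKING can be abbreviated to LOCK, and the modifier precedes the SELECT):

```sql
-- Read SalesDB.Orders (hypothetical name) without blocking on,
-- or being blocked by, concurrent writers.
LOCKING ROW FOR ACCESS
SELECT order_id, order_status
FROM SalesDB.Orders;
```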
An easier method is to create a separate database containing only views that reference the base tables, then alter the view DDL to include the locking modifier. For example:

REPLACE VIEW DBName.TableName AS LOCKING ROW FOR ACCESS SELECT ... ;
Locking for access allows your reads to proceed even while the table is being written, and allows writes to proceed even while it is being read. Be aware that an access lock can return uncommitted (dirty) data, so it is only appropriate where that is acceptable. There is plenty more information on the different locking alternatives in the Teradata manuals.
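For completeness, here is a sketch of the view approach, using hypothetical ViewDB and SalesDB names; the application then selects from the view instead of the base table:

```sql
-- The view carries the access lock, so every read through it is non-blocking.
REPLACE VIEW ViewDB.Orders AS
LOCKING ROW FOR ACCESS
SELECT *
FROM SalesDB.Orders;

-- Application code queries the view; no lock qualifier is needed here.
SELECT order_id, order_status
FROM ViewDB.Orders;
```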
If your application queries a table and then updates it within the same transaction, it is vulnerable to a classic deadlock scenario. The query will acquire a read lock and then attempt to upgrade it to a write lock. A deadlock occurs if a competing transaction tries to do the same thing. In this case, locking for access will not solve the problem. Instead, consider using locking for write on the initial query to eliminate the need for a lock upgrade. Alternatively, consider breaking the transaction into multiple transactions.
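A sketch of that write-lock approach, again with hypothetical names, in Teradata (BTET) session mode where BT and ET delimit the transaction:

```sql
BT;  -- begin explicit transaction

-- Take the write lock up front so no read-to-write upgrade is needed later.
LOCKING TABLE SalesDB.Orders FOR WRITE
SELECT order_id
FROM SalesDB.Orders
WHERE order_status = 'NEW';

UPDATE SalesDB.Orders
SET order_status = 'IN_PROGRESS'
WHERE order_status = 'NEW';

ET;  -- end transaction, releasing the write lock
```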