May I re-open this discussion?
I'm wondering if there are specific strategies for getting small, OLTP type SQL transactions to run faster within TD.
I can't use FASTLOAD or CSV LOAD, since I have small transactions: 20-100 rows across multiple tables. My requirement is to deliver these updates/inserts in as near real time as I can.
My transactions run extremely slowly even with indexes. I worked with a Teradata DBA for a month to make sure every detail was set properly, and each transaction was still averaging 4 seconds.
I asked the Teradata DBAs for more connections, so I could at least process in more threads. They did not like that idea at all.
So is it ever feasible to use TD as an OLTP target? Am I chasing a ghost?
The Teradata Database's locking behavior is the primary challenge for an OLTP-style application. Be wary of adding threads, because that may increase lock contention and make things worse. Try to do as much work as possible with the fewest number of connections, which minimizes write lock contention. Also keep the amount of work in each transaction as small as possible, for the same reason.
Indexes improve query performance, but require additional work during inserts, updates, and deletes ("index maintenance"). Be wary of adding indexes to tables for which inserts/updates/deletes are a significant part of the workload.
Teradata does not support read-committed transaction isolation like Oracle does. With Oracle, writers do not block readers, but with Teradata, writers will block readers. You can use access locking with Teradata, so that writers don't block readers, but that will permit dirty reads, which typically are not acceptable for an OLTP application.
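As a concrete sketch of the access-locking option mentioned above: Teradata's LOCKING request modifier can downgrade the read lock so a SELECT is not blocked by concurrent writers. The table and column names here are made up for illustration:

```sql
-- Request an ACCESS lock instead of the default READ lock.
-- Writers no longer block this query, but it may see
-- uncommitted ("dirty") data - usually unacceptable for OLTP.
LOCKING TABLE orders FOR ACCESS
SELECT order_id, order_status
FROM orders
WHERE customer_id = 12345;
```

The trade-off is exactly the one described above: you get non-blocking reads at the cost of read consistency.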
Teradata does not support row-level locking like Oracle does. Teradata has row hash locking, so each lock covers a collection of rows, leading to increased lock contention and blocking.
I agree with the cautions about locking; however, running a transactional workload through a single session is very unlikely to deliver the required throughput or response time unless the volume is extremely small.
The good thing about a transactional workload is that it takes very little system resource per insert/update/delete. Thus a moderate number of sessions doing this work concurrently will not have an impact on system resources.
If multiple updates are being applied to a single table, then you should also be using the array update feature to group them together into a single operation.
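One way to get array (iterated) requests without writing application code is BTEQ's PACK option on REPEAT, which bundles multiple data records into one request. This is a hedged sketch; the file, table, and column names are hypothetical:

```sql
.IMPORT DATA FILE = balance_updates.dat;

-- PACK 100 asks BTEQ to send up to 100 records per request,
-- so the UPDATE below is applied as one array operation
-- instead of 100 separate round trips.
.REPEAT * PACK 100
USING (acct_id INTEGER, amt DECIMAL(10,2))
UPDATE accounts
SET balance = balance + :amt
WHERE account_id = :acct_id;
```

Client APIs expose the same capability, e.g. JDBC parameter-array batching, if the updates originate in an application rather than a file.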
If there are single operations against multiple tables that can't take advantage of array updates, then those should be grouped together in multi-statement requests to avoid round-trip times.
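For example, in BTEQ a leading semicolon continues the request, so several statements against different tables travel to the database as one multi-statement request. The tables and values here are invented for illustration:

```sql
-- All three statements are submitted and committed as a
-- single request: one network round trip, one unit of work.
UPDATE customers SET last_seen = CURRENT_TIMESTAMP WHERE customer_id = 12345
;UPDATE orders SET order_status = 'SHIPPED' WHERE order_id = 999
;INSERT INTO audit_log VALUES (12345, 999, CURRENT_TIMESTAMP);
```

Besides saving round trips, the statements are treated as one implicit transaction, which also keeps the locks' hold time short.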
You don't say what indexes you have in place. Secondary indexes can definitely make the updates take longer and can increase locking contention.
Finally, make sure that workload management is configured so these requests get through without delays: high weight/priority, and possibly expedited AWTs if there is a lot of concurrency from other work. See Carrie Ballinger's tips on WLM for tactical query workloads.