Using Teradata in OLTP application

Enthusiast

Using Teradata in OLTP application

Hi,

Recently we migrated our database from Oracle to Teradata, and it is handling our OLAP workload well. But we now need to expose a few Teradata tables to an OLTP application that receives about 250 thousand online hits per day. Below are the table details and the online request details.

CREATE MULTISET TABLE TRANSACTION_DETAILS ,NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      REQ_ID CHAR(10) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
      REQ_STRT_TS TIMESTAMP(6),
      REQ_END_TS TIMESTAMP(6),
      COMP_TS TIMESTAMP(6),
      RQ_TYP_CODE INTEGER,
      STS_CODE INTEGER
     )
PRIMARY INDEX TRPK (REQ_ID);

Online request types issued to Teradata from the OLTP application:

Type 1 :

========

SELECT REQ_ID,REQ_STRT_TS,REQ_END_TS,COMP_TS,RQ_TYP_CODE,STS_CODE FROM TRANSACTION_DETAILS WHERE REQ_ID ='R1';

Returns output in 30-50 milliseconds.

Type 2 :

========

SELECT REQ_ID,REQ_STRT_TS,REQ_END_TS,COMP_TS,RQ_TYP_CODE,STS_CODE FROM TRANSACTION_DETAILS WHERE REQ_ID IN ('R1','R2','R3','R4','R5');

Returns output rows based on the number of request IDs provided in the IN clause; the response time oscillates between 1 and 3 seconds.

But we need sub-second response for both types of input queries.

Please suggest a performance tuning approach, or any other possibility, to achieve sub-second response in both scenarios (Type 1 and Type 2) so we can serve all the OLTP requests.

Thanks,

Sri

5 REPLIES
Senior Apprentice

Re: Using Teradata in OLTP application

Hi Sri,

How many rows are returned by the 2nd query?

Are there collected stats on that column?

Can you share the Explain?

I would expect 5 sequential "single-AMP retrieve" steps "by way of the primary index".
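For reference, collecting stats and capturing the plan is just a couple of statements; a minimal sketch against the table from your post (adjust the database qualifier as needed):

COLLECT STATISTICS COLUMN (REQ_ID) ON TRANSACTION_DETAILS;   /* collect/refresh stats on the PI column */

HELP STATISTICS TRANSACTION_DETAILS;                         /* list the collected statistics */

EXPLAIN
SELECT REQ_ID,REQ_STRT_TS,REQ_END_TS,COMP_TS,RQ_TYP_CODE,STS_CODE
FROM TRANSACTION_DETAILS
WHERE REQ_ID IN ('R1','R2','R3','R4','R5');                  /* capture the plan for the IN-list query */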

Enthusiast

Re: Using Teradata in OLTP application

Hi Dieter,

Thanks for your response.

The number of output rows will be equal to the number of REQ_IDs in the IN clause.

The stats are already collected for the REQ_ID column.

Yes, you are correct; it does 5 sequential "single-AMP retrieve" steps "by way of the primary index".

I have attached the explain plan below.

This query is optimized using type 2 profile T2_Linux64, profileid 21.
  1) First, we do a single-AMP RETRIEVE step from all partitions of
     TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS by way of the primary index
     "TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS.REQ_ID = 'R1'" with no
     residual conditions locking row for access into Spool 1
     (group_amps), which is built locally on that AMP.  The size of
     Spool 1 is estimated with high confidence to be 1 row (135 bytes).
     The estimated time for this step is 0.00 seconds.
  2) Next, we do a single-AMP RETRIEVE step from all partitions of
     TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS by way of the primary index
     "TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS.REQ_ID = 'R2'" with no
     residual conditions locking row for access into Spool 1
     (group_amps), which is built locally on that AMP.  The size of
     Spool 1 is estimated with high confidence to be 2 rows (270 bytes).
     The estimated time for this step is 0.00 seconds.
  3) We do a single-AMP RETRIEVE step from all partitions of
     TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS by way of the primary index
     "TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS.REQ_ID = 'R3'" with no
     residual conditions locking row for access into Spool 1
     (group_amps), which is built locally on that AMP.  The size of
     Spool 1 is estimated with high confidence to be 3 rows (405 bytes).
     The estimated time for this step is 0.00 seconds.
  4) We do a single-AMP RETRIEVE step from all partitions of
     TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS by way of the primary index
     "TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS.REQ_ID = 'R4'" with no
     residual conditions locking row for access into Spool 1
     (group_amps), which is built locally on that AMP.  The size of
     Spool 1 is estimated with high confidence to be 4 rows (540 bytes).
     The estimated time for this step is 0.00 seconds.
  5) We do a single-AMP RETRIEVE step from all partitions of
     TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS by way of the primary index
     "TRANSACTION_DETAILS in view
     VW_TRANSACTION_DETAILS.REQ_ID = 'R5'" with no
     residual conditions locking row for access into Spool 1
     (group_amps), which is built locally on that AMP.  The size of
     Spool 1 is estimated with high confidence to be 5 rows (675 bytes).
     The estimated time for this step is 0.00 seconds.
  6) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1.  The total estimated time is 0.00 seconds.

Though it shows the estimated time as 0.00 seconds, the retrieval time is not constant; it oscillates between 1 and 3 seconds.

Thanks,

Sri

Enthusiast

Re: Using Teradata in OLTP application

I tried this on an Oracle DB and it shows 0.02 seconds. This is from the EMP table, which has only 14 rows.

SELECT * FROM EMP WHERE DEPTNO IN(10,20,30,40)

In Oracle too, we see 0.00 seconds reported when we run a few queries, even with an IN clause, but the actual response time is not really zero.

Enthusiast

Re: Using Teradata in OLTP application

Hi Sri,

What does the DBQL log look like for the oscillating query? And how are you measuring your response times?

With an explain plan that clean, I'm wondering if it might be a network issue, with the oscillation due to traffic...
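DBQL would separate the two cases: if the database-side response time is small while the client-measured time is 1-3 seconds, the delay is outside Teradata. A rough sketch, assuming query logging is enabled and reading the DBC.DBQLogTbl detail table (the user name is only a placeholder, and column availability can vary slightly by release):

SELECT StartTime,
       FirstStepTime,
       FirstRespTime,          /* FirstRespTime - StartTime ~ database-side response time */
       AMPCPUTime,
       TotalIOCount,
       QueryText
FROM DBC.DBQLogTbl
WHERE UserName = 'OLTP_USER'   /* hypothetical application user */
  AND QueryText LIKE '%TRANSACTION_DETAILS%'
ORDER BY StartTime DESC;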

Fan

Re: Using Teradata in OLTP application

OK, the question is really old, so I don't think I'll be of any help, but let me share a couple of points.

1 - From your explanations above, it would seem that REQ_ID is indeed a unique identifier. In that case you should define it as a UNIQUE PRIMARY INDEX to let Teradata know (see the sketch after point 2).

2 - REQ_ID is a CHAR(10): I had some problems in the past with skewed tables (even when the key was unique) when the key was a CHAR/VARCHAR. Can you check whether your table is skewed, and, if so, would it be possible to use a numeric field for REQ_ID? The sketch below shows one way to check.
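A minimal sketch of both points, assuming REQ_ID really is unique (table and column names taken from the original post; the _NEW table name is only illustrative):

-- Point 1: declare the primary index as unique so the optimizer knows
-- each REQ_ID maps to at most one row
CREATE MULTISET TABLE TRANSACTION_DETAILS_NEW ,NO FALLBACK
(
 REQ_ID CHAR(10) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
 REQ_STRT_TS TIMESTAMP(6),
 REQ_END_TS TIMESTAMP(6),
 COMP_TS TIMESTAMP(6),
 RQ_TYP_CODE INTEGER,
 STS_CODE INTEGER
)
UNIQUE PRIMARY INDEX TRPK (REQ_ID);

-- Point 2: check how evenly the rows hash across the AMPs;
-- a big spread between the smallest and largest row counts means skew
SELECT HASHAMP(HASHBUCKET(HASHROW(REQ_ID))) AS amp_no,
       COUNT(*) AS row_cnt
FROM TRANSACTION_DETAILS
GROUP BY 1
ORDER BY 2 DESC;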

BR,

Fabio