Enthusiast

need help

Hi all,

The condition END_DT = '2999-12-31' is repeated everywhere in the query below and is causing a lot of processing. What can be done to make this query run quickly?

select T1.*, T2.TABLE6_ID, T3.TABLE6_ARNGM_ID, T4.TABLE9_ID, T3.PRMY_PTCPT_ID
from (
        select inst.PLN_no, PTCPT.PTCPT_no, PTCPT.SUBPLN_no, lnd.LN_no,
               LNFCT.TABLE2_ID, LNFCT.TABLE3_ID, LNFCT.TABLE5_ID,
               LNFCT.TABLE4_ID, LNFCT.EFF_DT, LNFCT.DMD_OFFST_AMT, LNFCT.LST_UPDT_RUN_ID,
               LNFCT.CREAT_RUN_ID, LNFCT.TOT_LN_IN_ARR_CT, LNFCT.TOT_EXPCT_INT_AMT,
               LNFCT.CRNT_BAL_AMT, LNFCT.INT_IN_ARR_AMT, LNFCT.END_DT, LNFCT.DMD_OFFST_COMB_AMT,
               LNFCT.BEGNG_LN_FUND_BAL_AMT, LNFCT.LN_ISS_AMT, LNFCT.ACTL_PRIN_AMT,
               LNFCT.PMT_MADE_CT, LNFCT.INT_PD_AMT, LNFCT.PRIN_IN_ARR_AMT, LNFCT.DYS_IN_ARR_CT,
               LNFCT.DMD_DISTRB_AMT, LNFCT.SKED_AMT, LNFCT.YTD_ACCUM_INT_AMT, LNFCT.LN_ISS_PRIN_AMT
               --, LNFCT.END_BAL_AMT
        from TABLE1 LNFCT, TABLE2 inst, TABLE3 PTCPT, TABLE4 lnd, TABLE5 P2D
        where LNFCT.TABLE3_ID = PTCPT.TABLE3_ID
          and LNFCT.TABLE2_ID = inst.TABLE2_ID
          and lnd.TABLE4_ID = LNFCT.TABLE4_ID
          and LNFCT.TABLE5_ID = P2D.TABLE5_ID
          and LNFCT.END_DT = '2999-12-31'
     ) T1
inner join (
        select TABLE6_ID, PLN_no
        from TABLE6
        where TABLE6_ID = PARNT_TABLE6_ID
          and END_DT = '2999-12-31'
     ) T2
   on T1.PLN_no = T2.PLN_no
inner join (
        select Table6.pln_no, Table6.subpln_no, Table8.PTCPT_no, ppa.prmy_PTCPT_id,
               max(Table6_arngm_id) Table6_arngm_id
        from pln_Table6_arngm ppa, Table8, Table6
        where ppa.end_dt = '2999-12-31'
          and Table8.end_dt = '2999-12-31'
          and Table6.end_dt = '2999-12-31'
          and Table6.Table6_id = ppa.Table6_id
          and ppa.prmy_PTCPT_id = Table8.Table8_id
        group by Table6.pln_no, Table6.subpln_no, Table8.PTCPT_no, ppa.prmy_PTCPT_id
     ) T3
   on T3.pln_no = T1.pln_no
  and T3.subpln_no = T1.subpln_no
  and T3.PTCPT_no = T1.PTCPT_no
inner join (
        select Table6.pln_no, Table6.subpln_no, Table8.PTCPT_no, lnr.LN_no,
               max(Table9_id) Table9_id
        from Table9 lnr, PLN_TABLE6_ARNGM PPA, Table8, Table6
        where lnr.end_dt = '2999-12-31'
          and Table8.end_dt = '2999-12-31'
          and Table6.end_dt = '2999-12-31'
          and PPA.end_dt = '2999-12-31'
          and Table6.Table6_id = lnr.Table6_id
          and ppa.prmy_PTCPT_id = Table8.Table8_id
          and ppa.TABLE6_ARNGM_ID = lnr.ARNGM_BSNS_no
        group by Table6.pln_no, Table6.subpln_no, Table8.PTCPT_no, lnr.LN_no
     ) T4
   on T4.pln_no = T1.pln_no
  and T4.subpln_no = T1.subpln_no
  and T4.PTCPT_no = T1.PTCPT_no
  and T4.ln_no = T1.ln_no;

1) First, we lock JI_235_TABLE6_1 for access, we lock

     TABLE3 for access, we lock

     TABLE2 for access, we lock TABLE9 for

     access, we lock TABLE6 for access, we lock TABLE4

     for access, we lock LNFCT for access, we lock TABLE8 for

     access, and we lock TABLE5 for access.

  2) Next, we do an all-AMPs RETRIEVE step from TABLE8 by way of an

     all-rows scan with a condition of ("TABLE8.END_DT = DATE

     '2999-12-31'") into Spool 5 (all_amps) fanned out into 5 hash join

     partitions, which is built locally on the AMPs.  The input table

     will not be cached in memory, but it is eligible for synchronized

     scanning.  The size of Spool 5 is estimated with high confidence

     to be 4,326,702 rows.  The estimated time for this step is 5.59

     seconds.

  3) We do a single-AMP RETRIEVE step from JI_235_PLN_TABLE6_ARNGM_2

     by way of the primary index "END_DT = DATE '2999-12-31'" with a

     residual condition of ("(NOT

     (JI_235_PLN_TABLE6_ARNGM_2.PRMY_PTCPT_ID IS NULL )) AND

     (NOT (JI_235_PLN_TABLE6_ARNGM_2.TABLE6_ID IS NULL ))")

     locking row for access into Spool 6 (all_amps) fanned out into 5

     hash join partitions, which is redistributed by hash code to all

     AMPs.  The size of Spool 6 is estimated with high confidence to be

     9,964,649 rows.  The estimated time for this step is 5.78 seconds.

  4) We do an all-AMPs RETRIEVE step from JI_235_TABLE6_1 by way of

     an all-rows scan with a condition of ("JI_235_TABLE6_1.END_DT =

     DATE '2999-12-31'") into Spool 7 (all_amps), which is

     redistributed by hash code to all AMPs.  Then we do a SORT to

     order Spool 7 by the sort key in spool field1.  The size of Spool

     7 is estimated with high confidence to be 274,252 rows.  The

     estimated time for this step is 0.21 seconds.

  5) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an

     all-rows scan, which is joined to Spool 6 (Last Use) by way of an

     all-rows scan. Spool 5 and Spool 6 are joined using a hash join

     of 5 partitions, with a join condition of ("(PRMY_PTCPT_ID =

     TABLE8_ID) AND (END_DT = END_DT)").  The result goes into Spool

     8 (all_amps), which is built locally on the AMPs into 12 hash join

     partitions.  The size of Spool 8 is estimated with low confidence

     to be 9,964,649 rows.  The estimated time for this step is 3.09

     seconds.

  6) We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of an

     all-rows scan, which is joined to TABLE6 by way of an all-rows

     scan with no residual conditions.  Spool 7 and TABLE6 are

     joined using a row id join, with a join condition of ("Field_1 =

     TABLE6.RowID").  The result goes into Spool 9 (all_amps), which

     is duplicated on all AMPs into 12 hash join partitions.  The size

     of Spool 9 is estimated with low confidence to be 20,568,900 rows.

     The estimated time for this step is 4.57 seconds.

  7) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of an

     all-rows scan, which is joined to Spool 9 (Last Use) by way of an

     all-rows scan. Spool 8 and Spool 9 are joined using a hash join

     of 12 partitions, with a join condition of ("(TABLE6_ID =

     TABLE6_ID) AND ((END_DT = END_DT) AND (END_DT = END_DT ))").

     The result goes into Spool 4 (all_amps), which is built locally on

     the AMPs.  The size of Spool 4 is estimated with low confidence to

     be 9,978,293 rows.  The estimated time for this step is 1.40

     seconds.

  8) We do an all-AMPs SUM step to aggregate from Spool 4 (Last Use) by

     way of an all-rows scan, and the grouping identifier in field 1.

     Aggregate Intermediate Results are computed globally, then placed

     in Spool 10.  The size of Spool 10 is estimated with low

     confidence to be 7,483,720 rows.  The estimated time for this step

     is 10.53 seconds.

  9) We do an all-AMPs RETRIEVE step from Spool 10 (Last Use) by way of

     an all-rows scan into Spool 2 (all_amps), which is built locally

     on the AMPs.  The size of Spool 2 is estimated with low confidence

     to be 7,483,720 rows.  The estimated time for this step is 1.03

     seconds.

 10) We do an all-AMPs RETRIEVE step from TABLE9 by way of an

     all-rows scan with a condition of ("(NOT

     (TABLE9.TABLE6_ID IS NULL )) AND (TABLE9.END_DT

     = DATE '2999-12-31')") into Spool 14 (all_amps) fanned out into 2

     hash join partitions, which is built locally on the AMPs.  The

     size of Spool 14 is estimated with high confidence to be 688,144

     rows.  The estimated time for this step is 1.94 seconds.

 11) We do a single-AMP RETRIEVE step from JI_235_PLN_TABLE6_ARNGM_2

     by way of the primary index "END_DT = DATE '2999-12-31'" with a

     residual condition of ("NOT

     (JI_235_PLN_TABLE6_ARNGM_2.PRMY_PTCPT_ID IS NULL)")

     locking row for access into Spool 15 (all_amps) fanned out into 2

     hash join partitions, which is redistributed by hash code to all

     AMPs.  The size of Spool 15 is estimated with high confidence to

     be 9,964,649 rows.  The estimated time for this step is 5.50

     seconds.

 12) We do an all-AMPs JOIN step from Spool 14 (Last Use) by way of an

     all-rows scan, which is joined to Spool 15 (Last Use) by way of an

     all-rows scan. Spool 14 and Spool 15 are joined using a hash join

     of 2 partitions, with a join condition of ("(END_DT = END_DT) AND

     (TABLE6_ARNGM_ID = ARNGM_BSNS_no)").  The result goes into

     Spool 16 (all_amps), which is redistributed by hash code to all

     AMPs.  Then we do a SORT to order Spool 16 by row hash.  The size

     of Spool 16 is estimated with low confidence to be 689,795 rows.

     The estimated time for this step is 0.43 seconds.

 13) We execute the following steps in parallel.

      1) We do an all-AMPs RETRIEVE step from TABLE6 by way of an

         all-rows scan with a condition of ("TABLE6.END_DT = DATE

          '2999-12-31'") locking for access into Spool 17 (all_amps),

         which is built locally on the AMPs.  The size of Spool 17 is

         estimated with high confidence to be 274,089 rows.  The

         estimated time for this step is 0.84 seconds.

      2) We do an all-AMPs JOIN step from Spool 16 (Last Use) by way of

         a RowHash match scan, which is joined to TABLE8 by way of

         a RowHash match scan with a condition of ("TABLE8.END_DT =

          DATE '2999-12-31'") locking TABLE8 for access. Spool 16

         and TABLE8 are joined using a merge join, with a join

         condition of ("(PRMY_PTCPT_ID = TABLE8.TABLE8_ID)

         AND ((END_DT = TABLE8.END_DT) AND (TABLE8.END_DT =

         END_DT ))").  The input table TABLE8 will not be cached in

         memory.  The result goes into Spool 18 (all_amps), which is

         redistributed by hash code to all AMPs.  The size of Spool 18

         is estimated with low confidence to be 689,795 rows.  The

         estimated time for this step is 5.41 seconds.

 14) We do an all-AMPs JOIN step from Spool 17 (Last Use) by way of an

     all-rows scan, which is joined to Spool 18 (Last Use) by way of an

     all-rows scan. Spool 17 and Spool 18 are joined using a single

     partition hash join, with a join condition of ("(END_DT = END_DT)

     AND ((END_DT = END_DT) AND ((END_DT = END_DT) AND (TABLE6_ID =

     TABLE6_ID )))").  The result goes into Spool 13 (all_amps),

     which is built locally on the AMPs.  The size of Spool 13 is

     estimated with low confidence to be 689,795 rows.  The estimated

     time for this step is 0.10 seconds.

 15) We do an all-AMPs SUM step to aggregate from Spool 13 (Last Use)

     by way of an all-rows scan, and the grouping identifier in field 1.

     Aggregate Intermediate Results are computed globally, then placed

     in Spool 19.  The size of Spool 19 is estimated with low

     confidence to be 689,795 rows.  The estimated time for this step

     is 0.49 seconds.

 16) We execute the following steps in parallel.

      1) We do an all-AMPs RETRIEVE step from Spool 19 (Last Use) by

         way of an all-rows scan into Spool 1 (all_amps), which is

         built locally on the AMPs.  The size of Spool 1 is estimated

         with low confidence to be 689,795 rows.  The estimated time

         for this step is 0.11 seconds.

      2) We do an all-AMPs RETRIEVE step from JI_235_TABLE6_1 by way

         of an all-rows scan with a condition of ("(NOT

         (JI_235_TABLE6_1.PLN_no IS NULL )) AND

          (JI_235_TABLE6_1.END_DT = DATE '2999-12-31')") into Spool

         22 (all_amps), which is redistributed by hash code to all AMPs.

         Then we do a SORT to order Spool 22 by the sort key in spool

         field1.  The size of Spool 22 is estimated with high

         confidence to be 274,252 rows.  The estimated time for this

         step is 0.21 seconds.

      3) We do an all-AMPs RETRIEVE step from LNFCT by way of an

         all-rows scan with a condition of ("LNFCT.END_DT = DATE

          '2999-12-31'") into Spool 23 (all_amps), which is

         redistributed by hash code to all AMPs.  Then we do a SORT to

         order Spool 23 by row hash.  The size of Spool 23 is estimated

         with high confidence to be 674,661 rows.  The estimated time

         for this step is 5.80 seconds.

 17) We do an all-AMPs JOIN step from TABLE2 by way of

     a RowHash match scan with no residual conditions, which is joined

     to Spool 23 (Last Use) by way of a RowHash match scan.

     TABLE2 and Spool 23 are joined using a merge join,

     with a join condition of ("TABLE2_ID =

     TABLE2.TABLE2_ID").  The result goes

     into Spool 24 (all_amps), which is redistributed by hash code to

     all AMPs.  Then we do a SORT to order Spool 24 by row hash.  The

     size of Spool 24 is estimated with low confidence to be 674,661

     rows.  The estimated time for this step is 1.40 seconds.

 18) We do an all-AMPs JOIN step from TABLE5 by way of a

     RowHash match scan with no residual conditions, which is joined to

     Spool 24 (Last Use) by way of a RowHash match scan.

     TABLE5 and Spool 24 are joined using a merge join,

     with a join condition of ("TABLE5_ID =

     TABLE5.TABLE5_ID").  The input table

     TABLE5 will not be cached in memory, but it is

     eligible for synchronized scanning.  The result goes into Spool 25

     (all_amps), which is redistributed by hash code to all AMPs.  Then

     we do a SORT to order Spool 25 by row hash.  The size of Spool 25

     is estimated with low confidence to be 674,661 rows.  The

     estimated time for this step is 19.45 seconds.

 19) We execute the following steps in parallel.

      1) We do an all-AMPs JOIN step from TABLE4 by way of

         a RowHash match scan with no residual conditions, which is

         joined to Spool 25 (Last Use) by way of a RowHash match scan.

         TABLE4 and Spool 25 are joined using a merge join,

         with a join condition of ("TABLE4.TABLE4_ID

         = TABLE4_ID").  The result goes into Spool 26 (all_amps),

         which is redistributed by hash code to all AMPs.  Then we do a

         SORT to order Spool 26 by row hash.  The size of Spool 26 is

         estimated with low confidence to be 674,661 rows.  The

         estimated time for this step is 2.84 seconds.

      2) We do an all-AMPs JOIN step from Spool 22 (Last Use) by way of

         an all-rows scan, which is joined to TABLE6 by way of an

         all-rows scan with a condition of ("TABLE6.TABLE6_ID =

         TABLE6.PARNT_TABLE6_ID") locking TABLE6 for access.

         Spool 22 and TABLE6 are joined using a row id join, with a

         join condition of ("Field_1 = TABLE6.RowID").  The result

         goes into Spool 27 (all_amps), which is redistributed by hash

         code to all AMPs.  The size of Spool 27 is estimated with no

         confidence to be 89,132 rows.  The estimated time for this

         step is 0.83 seconds.

 20) We do an all-AMPs JOIN step from TABLE3 by way

     of a RowHash match scan with no residual conditions, which is

     joined to Spool 26 (Last Use) by way of a RowHash match scan.

     TABLE3 and Spool 26 are joined using a merge

     join, with a join condition of ("TABLE3_ID =

     TABLE3.TABLE3_ID").  The input table

     TABLE3 will not be cached in memory, but it is

     eligible for synchronized scanning.  The result goes into Spool 28

     (all_amps), which is redistributed by hash code to all AMPs.  The

     size of Spool 28 is estimated with low confidence to be 674,661

     rows.  The estimated time for this step is 19.71 seconds.

 21) We do an all-AMPs RETRIEVE step from Spool 1 (Last Use) by way of

     an all-rows scan with a condition of ("(NOT (LN_no IS NULL )) AND

     ((NOT (PLN_no IS NULL )) AND ((NOT (PTCPT_no IS NULL )) AND

     (NOT (SUBPLN_no IS NULL ))))") into Spool 29 (all_amps) fanned

     out into 2 hash join partitions, which is redistributed by hash

     code to all AMPs.  The size of Spool 29 is estimated with low

     confidence to be 689,795 rows.  The estimated time for this step

     is 0.43 seconds.

 22) We do an all-AMPs JOIN step from Spool 27 (Last Use) by way of an

     all-rows scan, which is joined to Spool 28 (Last Use) by way of an

     all-rows scan. Spool 27 and Spool 28 are joined using a single

     partition hash join, with a join condition of ("(PLN_no = PLN_no)

     AND (END_DT = END_DT)").  The result goes into Spool 30 (all_amps),

     which is built locally on the AMPs into 2 hash join partitions.

     The size of Spool 30 is estimated with no confidence to be

     2,031,619 rows.  The estimated time for this step is 1.53 seconds.

 23) We do an all-AMPs JOIN step from Spool 29 (Last Use) by way of an

     all-rows scan, which is joined to Spool 30 (Last Use) by way of an

     all-rows scan. Spool 29 and Spool 30 are joined using a hash join

     of 2 partitions, with a join condition of ("(PTCPT_no =

     PTCPT_no) AND ((SUBPLN_no = SUBPLN_no) AND ((PLN_no = PLN_no)

     AND ((LN_no = LN_no) AND (PLN_no = PLN_no ))))").  The result

     goes into Spool 31 (all_amps), which is built locally on the AMPs

     into 6 hash join partitions.  The size of Spool 31 is estimated

     with no confidence to be 689,795 rows.  The estimated time for

     this step is 0.83 seconds.

 24) We do an all-AMPs RETRIEVE step from Spool 2 (Last Use) by way of

     an all-rows scan with a condition of ("(NOT (PLN_no IS NULL ))

     AND ((NOT (PTCPT_no IS NULL )) AND (NOT (SUBPLN_no IS NULL )))")

     into Spool 32 (all_amps) fanned out into 6 hash join partitions,

     which is redistributed by hash code to all AMPs.  The size of

     Spool 32 is estimated with low confidence to be 7,483,720 rows.

     The estimated time for this step is 4.49 seconds.

 25) We do an all-AMPs JOIN step from Spool 31 (Last Use) by way of an

     all-rows scan, which is joined to Spool 32 (Last Use) by way of an

     all-rows scan. Spool 31 and Spool 32 are joined using a hash join

     of 6 partitions, with a join condition of ("(PLN_no = PLN_no)

     AND ((SUBPLN_no = SUBPLN_no) AND ((PTCPT_no = PTCPT_no) AND

     ((PLN_no = PLN_no) AND ((PLN_no = PLN_no) AND ((PTCPT_no =

     PTCPT_no) AND (SUBPLN_no = SUBPLN_no ))))))").  The result

     goes into Spool 21 (group_amps), which is built locally on the

     AMPs.  The size of Spool 21 is estimated with no confidence to be

     7,483,720 rows.  The estimated time for this step is 3.06 seconds.

 26) Finally, we send out an END TRANSACTION step to all AMPs involved

     in processing the request.

  -> The contents of Spool 21 are sent back to the user as the result

     of statement 1.  The total estimated time is 1 minute and 46

     seconds.

5 REPLIES
Enthusiast

Re: need help

Any suggestions?

Senior Supporter

Re: need help

So what do you expect from us?

Just dumping the SQL and the Explain doesn't work; you need to give much more qualified information.

Why do you think the date is the problem?

The SQL is reasonably complex simply because of its size, and it is not nicely formatted, so people may not want to spend time reformatting it for you.

The Explain is very long, so where is the problem? Are the numbers shown in the Explain correct, or does it under- or over-estimate by a factor of more than 10?
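For example, one quick sanity check is to count the actual rows behind a single explain step and compare the result with the optimizer's estimate. A minimal sketch, using the masked table names from the post (step 2 above estimated 4,326,702 rows for this filter):

-- Actual row count behind explain step 2; compare it with the
-- estimated 4,326,702 rows to see whether the estimates are far off.
SELECT COUNT(*)
FROM TABLE8
WHERE END_DT = DATE '2999-12-31';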

DDLs are missing - it seems the tables contain history, given those interestingly formatted end dates. But what are the PIs? Some can be guessed from the Explain, but I have limited willingness to do so. And what is the JI definition?

If you have a complex problem you need to share more info, and prepare it better - at least from my point of view.
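For reference, that kind of detail can be pulled with standard Teradata statements. A minimal sketch, using the masked names from this thread (JI_235_TABLE6_1 is one of the join indexes visible in the explain):

SHOW TABLE TABLE1;                  -- DDL, including the primary index definition
SHOW JOIN INDEX JI_235_TABLE6_1;    -- definition of a join index referenced in the explain
HELP STATISTICS TABLE1;             -- which columns have collected statistics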

Teradata Employee

Re: need help

What's the current execution time for this query?

What's the data volume we are talking about?

Why are you writing joins in WHERE clauses? Your data is being redistributed 10+ times because of that. I guess you can reduce the execution time just by writing the SQL the way it's supposed to be written.

As a start, replace all this kind of SQL:

from TABLE1 LNFCT, TABLE2 inst, TABLE3 PTCPT, TABLE4 lnd, TABLE5 P2D
where LNFCT.TABLE3_ID = PTCPT.TABLE3_ID
  and LNFCT.TABLE2_ID = inst.TABLE2_ID
  and lnd.TABLE4_ID = LNFCT.TABLE4_ID
  and LNFCT.TABLE5_ID = P2D.TABLE5_ID
  and LNFCT.END_DT = '2999-12-31'

with proper JOIN clauses, and get back to us with answers to all the questions we have asked.
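For illustration, that fragment rewritten with explicit JOIN ... ON syntax (same tables and predicates, so the logic is unchanged) would look roughly like this:

from TABLE1 LNFCT
inner join TABLE3 PTCPT
        on LNFCT.TABLE3_ID = PTCPT.TABLE3_ID
inner join TABLE2 inst
        on LNFCT.TABLE2_ID = inst.TABLE2_ID
inner join TABLE4 lnd
        on lnd.TABLE4_ID = LNFCT.TABLE4_ID
inner join TABLE5 P2D
        on LNFCT.TABLE5_ID = P2D.TABLE5_ID
where LNFCT.END_DT = DATE '2999-12-31'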

Teradata Employee

Re: need help

Please use the "Code" icon to post SQL next time ... it's there for a reason. :)

Teradata Employee

Re: need help

The Teradata optimizer does not care whether joins are specified in the WHERE clause, in JOIN syntax in the FROM clause, or in a combination of the two; it will generate the same plan. The query as written will not cause redistributions just because of the syntax.
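If you want to confirm that on your own system, EXPLAIN both forms of the same (simplified) join and compare the plans; the step lists should match. A minimal sketch using the masked table names from this thread:

-- Old-style join expressed in the WHERE clause
EXPLAIN
SELECT LNFCT.TABLE2_ID
FROM TABLE1 LNFCT, TABLE2 inst
WHERE LNFCT.TABLE2_ID = inst.TABLE2_ID
  AND LNFCT.END_DT = DATE '2999-12-31';

-- Same join expressed with ANSI JOIN ... ON syntax
EXPLAIN
SELECT LNFCT.TABLE2_ID
FROM TABLE1 LNFCT
INNER JOIN TABLE2 inst
        ON LNFCT.TABLE2_ID = inst.TABLE2_ID
WHERE LNFCT.END_DT = DATE '2999-12-31';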