Qry runs long

Hi!

This query runs for a long time. I can see that there is a left outer join and a full outer join.
I tried filtering the rows out of the tables before the join, e.g. using 'select * from table where extract(year from C.M_DT) = 2011
and extract(month from C.M_DT) = 3' in place of the table name, and commenting that condition out of the where clause. But this just increases the run time.
Can someone help me improve the performance?
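In other words, I swapped the D_B_FCT reference for a pre-filtered derived table, roughly like this:

from I_FND_DIM A /* FUT_I_FND_DIM */
left outer join CLASS B /* FUT_CLASS_FUND_ORDER */
on A.S_CD = B.CLASS_CD
and B.C_S_ID = 1616,
( select * from D_B_FCT
where extract(year from M_DT) = 2011
and extract(month from M_DT) = 3 ) C /* pre-filtered FUT_D_B_FCT */ ,
P_P_D D /* FUT_P_P_D */
where A.I_FND_DIM_ID = C.I_FND_DIM_ID
and C.P_P_D_ID = D.P_P_D_ID
and D.P_NBR = '100214'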

select X1.c1 as c1, X2.c1 as c2, X1.c2 as c3, X2.c2 as c4,
X1.c3 as c5, X2.c3 as c6, X1.c4 as c7, X2.c4 as c8,
X1.c5 as c9, X2.c5 as c10, X1.c6 as c11, X1.c9 as c12,
X2.c9 as c13, X1.c10 as c14, X2.c10 as c15, X1.c11 as c16,
X2.c11 as c17, X2.c6 as c18, X1.c7 as c19, X1.c8 as c20
from (
select distinct X1.c4 as c1, X1.c5 as c2, X1.c6 as c3,
X1.c7 as c4, X1.c8 as c5, sum(X1.c3) over (partition by X1.c5,
X1.c7, X1.c6, X1.c8) as c6, max(X1.c2) over (partition by X1.c5,
X1.c7, X1.c6, X1.c8) as c7,
case
when X1.c2 - X1.c1 > 0 then -1
else X1.c2
end as c8, X1.c11 as c9, X1.c9 as c10,
X1.c10 as c11
from (
select min(C.V_PCT) as c1, max(C.V_PCT) as c2,
sum(C.C_AMT) as c3,
D.P_NBR as c4, D.PR_NBR as c5,

case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'BR'
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'NBR'
else A.S_CD
end as c6, A.I_SRC_CD as c7,
A.I_SRC_CD_D_TXT as c8,
A.I_S_R_G_CD as c9,
A.I_S_O_NBR as c10,

case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 1
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 2
else B.C_S_NBR
end as c11
from I_FND_DIM A /* FUT_I_FND_DIM */
left outer join CLASS B /* FUT_CLASS_FUND_ORDER */
on A.S_CD = B.CLASS_CD
and B.C_S_ID = 1616, D_B_FCT C /* FUT_D_B_FCT */ ,
P_P_D D /* FUT_P_P_D */
where ( A.I_FND_DIM_ID = C.I_FND_DIM_ID
and C.P_P_D_ID = D.P_P_D_ID
and D.P_NBR = '100214'
and (extract(year from C.M_DT)) = '2011'
and (extract(month from C.M_DT)) = '6' )
group by A.I_SRC_CD, A.I_SRC_CD_D_TXT,
A.I_S_R_G_CD, A.I_S_O_NBR,
D.P_NBR, D.PR_NBR ,
case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'BR'
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'NBR'
else A.S_CD
end ,
case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 1
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 2
else B.C_S_NBR
end ) X1 ) X1 full outer join (
select distinct X1.c4 as c1, X1.c5 as c2, X1.c6 as c3,
X1.c7 as c4, X1.c8 as c5, sum(X1.c3) over (partition by X1.c5,
X1.c7, X1.c6, X1.c8) as c6, X1.c11 as c9, X1.c9 as c10,
X1.c10 as c11
from (
select sum(C.C_AMT) as c3, D.P_NBR as c4,
D.PR_NBR as c5,
case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'BR'
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'NBR'
else A.S_CD
end as c6, A.I_SRC_CD as c7,
A.I_SRC_CD_D_TXT as c8,
A.I_S_R_G_CD as c9,
A.I_S_O_NBR as c10,

case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 1
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 2
else B.C_S_NBR
end as c11
from I_FND_DIM A /* FUT_I_FND_DIM */ left outer join CLASS B /* FUT_CLASS_FUND_ORDER */
on A.S_CD = B.CLASS_CD
and B.C_S_ID = 1616, D_B_FCT C /* FUT_D_B_FCT */ ,
P_P_D D /* FUT_P_P_D */
where ( A.I_FND_DIM_ID = C.I_FND_DIM_ID
and C.P_P_D_ID = D.P_P_D_ID
and D.P_NBR = '100214'
and (extract(year from C.M_DT)) = '2011'
and (extract(month from C.M_DT)) = '3' )
group by A.I_SRC_CD, A.I_SRC_CD_D_TXT,
A.I_S_R_G_CD, A.I_S_O_NBR,
D.P_NBR, D.PR_NBR ,
case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'BR'
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 'NBR'
else A.S_CD
end ,
case
when B.C_TXT = 'TBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 1
when B.C_TXT = 'TNBR'
and A.S_CD <> 'A'
and A.S_CD <> 'V'
and A.IN_CD in ('2', '9') then 2
else B.C_S_NBR
end ) X1 ) X2
on X1.c4 = X2.c4
and X1.c5 = X2.c5
and X1.c2 = X2.c2
and X1.c3 = X2.c3

Here is the EXPLAIN plan:

1) First, we lock TP002.P_P_D for access, we lock
TP002.D_B_FCT for access, we lock TP002.I_FND_DIM for
access, and we lock TP002.CLASS for access.
2) Next, we do an all-AMPs RETRIEVE step from TP002.P_P_D by
way of index # 4 TP002.P_P_D.P_NBR = '100214' with no
residual conditions into Spool 7 (all_amps), which is duplicated
on all AMPs. Then we do a SORT to order Spool 7 by row hash. The
size of Spool 7 is estimated with high confidence to be 9,578,475
rows. The estimated time for this step is 6.45 seconds.
3) We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of an
all-rows scan, which is joined to TP002.D_B_FCT by way of a
traversal of index # 8 without accessing the base table extracting
row ids only. Spool 7 and TP002.D_B_FCT are joined using a
nested join, with a join condition of (
TP002.D_B_FCT.P_P_D_ID = P_P_D_ID). The
input table TP002.D_B_FCT will not be cached in memory. The
result goes into Spool 8 (all_amps), which is built locally on the
AMPs. Then we do a SORT to order Spool 8 by field Id 1. The size
of Spool 8 is estimated with no confidence to be 433,583 rows.
The estimated time for this step is 4 minutes and 7 seconds.
4) We do an all-AMPs JOIN step from Spool 8 by way of an all-rows
scan, which is joined to TP002.D_B_FCT by way of an all-rows
scan with a condition of (((EXTRACT(MONTH FROM
(TP002.D_B_FCT.M_DT )))= 6) AND ((EXTRACT(YEAR FROM
(TP002.D_B_FCT.M_DT )))= 2011)). Spool 8 and
TP002.D_B_FCT are joined using a row id join, with a join
condition of ((1=1)). The input table TP002.D_B_FCT will
not be cached in memory. The result goes into Spool 9 (all_amps),
which is redistributed by hash code to all AMPs. Then we do a
SORT to order Spool 9 by row hash. The size of Spool 9 is
estimated with no confidence to be 433,583 rows. The estimated
time for this step is 4 minutes and 7 seconds.
5) We execute the following steps in parallel.
1) We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of
a RowHash match scan, which is joined to
TP002.I_FND_DIM by way of a RowHash match scan with
no residual conditions. Spool 9 and TP002.I_FND_DIM
are joined using a merge join, with a join condition of (
TP002.I_FND_DIM.I_FND_DIM_ID =
I_FND_DIM_ID). The result goes into Spool 10
(all_amps), which is built locally on the AMPs. Then we do a
SORT to order Spool 10 by row hash. The size of Spool 10 is
estimated with no confidence to be 433,583 rows. The
estimated time for this step is 2.01 seconds.
2) We do an all-AMPs RETRIEVE step from TP002.CLASS by way of an
all-rows scan with a condition of (TP002.CLASS.C_S_ID =
1616) into Spool 11 (all_amps), which is duplicated on all
AMPs. Then we do a SORT to order Spool 11 by row hash. The
size of Spool 11 is estimated with high confidence to be
62,175 rows. The estimated time for this step is 0.03
seconds.
6) We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of a
RowHash match scan, which is joined to Spool 11 by way of a
RowHash match scan. Spool 10 and Spool 11 are left outer joined
using a merge join, with a join condition of (S_CD =
CLASS_CD). The result goes into Spool 6 (all_amps), which is
built locally on the AMPs. The size of Spool 6 is estimated with
low confidence to be 449,867 rows. The estimated time for this
step is 0.47 seconds.
7) We do an all-AMPs SUM step to aggregate from Spool 6 (Last Use) by
way of an all-rows scan, and the grouping identifier in field 1.
Aggregate Intermediate Results are computed globally, then placed
in Spool 12. The size of Spool 12 is estimated with no confidence
to be 337,401 rows. The estimated time for this step is 0.73
seconds.
8) We do an all-AMPs RETRIEVE step from Spool 12 (Last Use) by way of
an all-rows scan into Spool 1 (all_amps), which is built locally
on the AMPs. The size of Spool 1 is estimated with no confidence
to be 337,401 rows. The estimated time for this step is 0.08
seconds.
9) We do an all-AMPs STAT FUNCTION step from Spool 1 (Last Use) by
way of an all-rows scan into Spool 17 (Last Use), which is assumed
to be redistributed by value to all AMPs. The result rows are put
into Spool 4 (all_amps), which is redistributed by hash code to
all AMPs. Then we do a SORT to order Spool 4 by the sort key in
spool field1 eliminating duplicate rows. The estimated time for
this step is 0.08 seconds.
10) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of an
all-rows scan, which is joined to TP002.D_B_FCT by way of an
all-rows scan with a condition of (((EXTRACT(MONTH FROM
(TP002.D_B_FCT.M_DT )))= 3) AND ((EXTRACT(YEAR FROM
(TP002.D_B_FCT.M_DT )))= 2011)) locking
TP002.D_B_FCT for access. Spool 8 and TP002.D_B_FCT are
joined using a row id join, with a join condition of ((1=1)).
The input table TP002.D_B_FCT will not be cached in memory.
The result goes into Spool 24 (all_amps), which is redistributed
by hash code to all AMPs. Then we do a SORT to order Spool 24 by
row hash. The size of Spool 24 is estimated with no confidence to
be 433,583 rows. The estimated time for this step is 4 minutes
and 7 seconds.
11) We do an all-AMPs JOIN step from Spool 24 (Last Use) by way of a
RowHash match scan, which is joined to TP002.I_FND_DIM by
way of a RowHash match scan with no residual conditions locking
TP002.I_FND_DIM for access. Spool 24 and
TP002.I_FND_DIM are joined using a merge join, with a join
condition of (TP002.I_FND_DIM.I_FND_DIM_ID =
I_FND_DIM_ID). The result goes into Spool 25 (all_amps),
which is built locally on the AMPs. Then we do a SORT to order
Spool 25 by row hash. The size of Spool 25 is estimated with no
confidence to be 433,583 rows. The estimated time for this step
is 2.01 seconds.
12) We do an all-AMPs JOIN step from Spool 25 (Last Use) by way of a
RowHash match scan, which is joined to Spool 11 (Last Use) by way
of a RowHash match scan. Spool 25 and Spool 11 are left outer
joined using a merge join, with a join condition of (
S_CD = CLASS_CD). The result goes into Spool 21
(all_amps), which is built locally on the AMPs. The size of Spool
21 is estimated with low confidence to be 449,867 rows. The
estimated time for this step is 0.47 seconds.
13) We do an all-AMPs SUM step to aggregate from Spool 21 (Last Use)
by way of an all-rows scan, and the grouping identifier in field 1.
Aggregate Intermediate Results are computed globally, then placed
in Spool 27. The size of Spool 27 is estimated with no confidence
to be 337,401 rows. The estimated time for this step is 0.70
seconds.
14) We do an all-AMPs RETRIEVE step from Spool 27 (Last Use) by way of
an all-rows scan into Spool 2 (all_amps), which is built locally
on the AMPs. The size of Spool 2 is estimated with no confidence
to be 337,401 rows. The estimated time for this step is 0.07
seconds.
15) We do an all-AMPs STAT FUNCTION step from Spool 2 (Last Use) by
way of an all-rows scan into Spool 32 (Last Use), which is assumed
to be redistributed by value to all AMPs. The result rows are put
into Spool 3 (all_amps), which is redistributed by hash code to
all AMPs. Then we do a SORT to order Spool 3 by the sort key in
spool field1 eliminating duplicate rows. The estimated time for
this step is 0.07 seconds.
16) We execute the following steps in parallel.
1) We do an all-AMPs RETRIEVE step from Spool 4 (Last Use) by way
of an all-rows scan into Spool 36 (all_amps), which is
redistributed by hash code to all AMPs. Then we do a SORT to
order Spool 36 by row hash. The size of Spool 36 is estimated
with no confidence to be 337,401 rows. The estimated time for
this step is 0.36 seconds.
2) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by way
of an all-rows scan into Spool 37 (all_amps), which is
redistributed by hash code to all AMPs. Then we do a SORT to
order Spool 37 by row hash. The size of Spool 37 is estimated
with no confidence to be 337,401 rows. The estimated time for
this step is 0.34 seconds.
17) We do an all-AMPs JOIN step from Spool 36 (Last Use) by way of a
RowHash match scan, which is joined to Spool 37 (Last Use) by way
of a RowHash match scan. Spool 36 and Spool 37 are full outer
joined using a merge join, with a join condition of ((C3 = C3)
AND ((C2 = C2) AND ((C5 = C5) AND (C4 = C4 )))). The result goes
into Spool 35 (group_amps), which is built locally on the AMPs.
The size of Spool 35 is estimated with no confidence to be
22,632,642 rows. The estimated time for this step is 6.99 seconds.
18) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 35 are sent back to the user as the result
of statement 1. The total estimated time is 12 minutes and 41
seconds.


Re: Qry runs long

Have you checked DBQL to see which individual steps are taking the longest? As for the filtering: if you can put a PPI on the M_DT column, use a BETWEEN filter instead of the EXTRACT:

SELECT * FROM D_B_FCT C WHERE C.M_DT BETWEEN DATE '2011-03-01' AND DATE '2011-03-31';

This should give you partition elimination and cut down the amount of data scanned.
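If D_B_FCT is not partitioned yet, one way to get there is to rebuild it with a PPI. A rough, untested sketch; the PI column and the date range below are placeholders, so keep the table's real primary index and cover your actual date span:

CREATE TABLE TP002.D_B_FCT_P AS
( SELECT * FROM TP002.D_B_FCT ) WITH NO DATA
PRIMARY INDEX ( P_P_D_ID ) /* placeholder: use the table's real PI */
PARTITION BY RANGE_N( M_DT BETWEEN DATE '2005-01-01' AND DATE '2015-12-31'
EACH INTERVAL '1' MONTH );

INSERT INTO TP002.D_B_FCT_P SELECT * FROM TP002.D_B_FCT;

On the DBQL side, assuming step-level logging is enabled (BEGIN QUERY LOGGING WITH STEPINFO ...), something along these lines should show where the time goes; view and column names vary a little by release (DBC.QryLogSteps on older ones):

SELECT s.QueryID, s.StepLev1Num, s.StepLev2Num, s.StepName,
s.CPUTime, s.RowCount
FROM DBC.QryLogStepsV s
JOIN DBC.QryLogV q
ON q.ProcID = s.ProcID AND q.QueryID = s.QueryID
WHERE q.UserName = 'YOUR_USER' /* placeholder: filter to your session */
ORDER BY s.CPUTime DESC;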

Also, the frequent use of DISTINCT can be an issue. In pre-TD13 versions of Teradata, DISTINCT and GROUP BY are implemented differently and can perform very differently in some situations. Ordered analytic functions also have a cost, so if it is at all possible to do the SUM() steps with traditional aggregate functions instead, that would be better.
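For example, the window functions in your first derived table all partition by the same columns, so if the extra detail columns (c9, c10, c11) are not actually needed at that level, a plain aggregate along these lines (a sketch, untested) would drop both the DISTINCT and the STAT FUNCTION steps:

select X1.c4, X1.c5, X1.c6, X1.c7, X1.c8,
sum(X1.c3) as c6, /* was sum(X1.c3) over (partition by c5, c7, c6, c8) */
max(X1.c2) as c7 /* was max(X1.c2) over (partition by c5, c7, c6, c8) */
from ( /* your existing inner select against I_FND_DIM, CLASS, D_B_FCT, P_P_D */ ) X1
group by X1.c4, X1.c5, X1.c6, X1.c7, X1.c8;

Adding c4 to the GROUP BY is harmless here because P_NBR is fixed to '100214', so it is constant within every partition anyway.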