Performance tuning on merge into query

Hi Experts,

Could anybody please help me tune this long-running MERGE statement? The relevant steps from the EXPLAIN plan are below.

10) We do a group-AMPs JOIN step from rtl_table.s_SERVICE_STJI by
way of a RowHash match scan with a condition of (
"rtl_table.s_SERVICE_STJI.srv_type_cd = 'FV'"), which is
joined to Spool 21 (Last Use) by way of a RowHash match scan.
rtl_table.s_SERVICE_STJI and Spool 21 are joined using a merge
join, with a join condition of ("srv_rtl_service_id =
rtl_table.s_SERVICE_STJI.srv_service_id"). The result goes
into Spool 22 (all_amps) (compressed columns allowed), which is
duplicated on all AMPs. The size of Spool 22 is estimated with
low confidence to be 144 rows (9,936 bytes). The estimated time
for this step is 0.02 seconds.


11) We do an all-AMPs JOIN step from Spool 22 (Last Use) by way of an
all-rows scan, which is joined to rtl_table.a by way of an
all-rows scan with a condition of ("(rtl_table.a.srv_cease_dt IS
NULL) AND (((rtl_table.a.srv_source_cd = 'TIS') OR
(rtl_table.a.srv_source_cd = 'R6')) AND
((rtl_table.a.srv_type_cd = 'FV') AND
(rtl_table.a.srv_service_id >= 1 )))") locking rtl_table.a for
access. Spool 22 and rtl_table.a are joined using a product
join, with a join condition of ("(srv_type_cd =
rtl_table.a.srv_type_cd) AND (usg_terminating_number =
rtl_table.a.srv_business_code)"). The result goes into Spool 23
(all_amps) (compressed columns allowed), which is redistributed by
the hash code of (stg_table.f.usg_start_dt) to all AMPs. Then
we do a SORT to order Spool 23 by row hash. The size of Spool 23
is estimated with low confidence to be 1 row (29 bytes). The
estimated time for this step is 31.56 seconds.

I have been stuck on step 11 for a long time.
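
Both estimates are flagged "low confidence", which often just means statistics are missing or stale. For reference, a minimal sketch of the statistics I could collect first (column names are copied from the EXPLAIN above; "a" as printed in the plan may be an alias, so the real table name would go there):

COLLECT STATISTICS
    COLUMN (srv_type_cd),
    COLUMN (srv_service_id)
ON rtl_table.s_SERVICE_STJI;

-- "a" is probably an alias in the plan output; substitute the
-- actual table name from the MERGE statement here.
COLLECT STATISTICS
    COLUMN (srv_type_cd),
    COLUMN (srv_source_cd),
    COLUMN (srv_cease_dt),
    COLUMN (srv_business_code)
ON rtl_table.a;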


1 REPLY
Teradata Employee

Re: Performance tuning on merge into query

Knowing nothing but these two steps, the statement in step 11, "Spool 22 and rtl_table.a are joined using a product join", is suspicious. Even though the optimizer thinks the result might be just one row, Spool 22 supposedly contains 144 rows, and rtl_table.a doesn't sound like a look-up table. So you might be processing <the-number-of-rows-in-rtl_table.a> * 144 rows to build Spool 23, which might be a lot. Do you know the real size of Spool 22? Can you explain what rtl_table.a is? Does the product join indicate something missing from the predicate?
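
Spool 22 is an intermediate result you can't query directly, but a rough sanity check (conditions copied verbatim from the plan; substitute the real table name if "a" is an alias) would be to count the qualifying rows on each side of the join:

-- Upper bound on the step 10 side that feeds Spool 22 (the join
-- with Spool 21 will filter this further).
SELECT COUNT(*)
FROM rtl_table.s_SERVICE_STJI
WHERE srv_type_cd = 'FV';

-- Rows of rtl_table.a surviving the single-table conditions in
-- step 11; the product join compares up to this count * 144 row pairs.
SELECT COUNT(*)
FROM rtl_table.a
WHERE srv_cease_dt IS NULL
  AND srv_source_cd IN ('TIS', 'R6')
  AND srv_type_cd = 'FV'
  AND srv_service_id >= 1;

If that second count is large, the 31.56-second estimate is no surprise, and the answer is more likely a tighter join predicate than more spool space.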