The LD table is merged into the final target table, and this single MERGE INTO statement is taking more than 1.5 hours.
Target Table: MULTISET with UPI, USI and PPI (Size 8.5 TB)
LD Table: SET with PI (3.4 million rows)
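For reference, the statement being tuned presumably has roughly this shape; the ON columns are taken from the merge condition in the EXPLAIN, while the non-key column lists are unknown here and left as placeholders:

```sql
-- Assumed shape of the slow MERGE; "..." marks columns not shown in the post.
MERGE INTO Dd_TAB.PREPD_NON_USG AS tgt
USING DP_STG.PREPD_NON_USG_HXC_LD AS src
  ON  tgt.ACCESS_METHOD_ID = src.ACCESS_METHOD_ID
  AND tgt.NON_USG_START_DT = src.NON_USG_START_DT
  AND tgt.NON_USG_CLASS_CD = src.NON_USG_CLASS_CD
  AND tgt.SUB_OFFRNG_ID    = src.SUB_OFFRNG_ID
  AND tgt.NON_USG_START_TM = src.NON_USG_START_TM
WHEN NOT MATCHED THEN INSERT        -- the plan shows "unmatched inserts"
  (...)                             -- full column list not shown in the post
  VALUES (...);
```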
1) First, we lock a distinct DP_STG."pseudo table" for read on a
RowHash to prevent global deadlock for DP_STG.PREPD_NON_USG_HXC_LD.
2) Next, we lock a distinct DD_UTL."pseudo table" for write on a
RowHash to prevent global deadlock for DD_UTL.PREPD_NON_USG_ERR.
3) We lock a distinct Dd_TAB."pseudo table" for write on a RowHash to
prevent global deadlock for Dd_TAB.PREPD_NON_USG.
4) We lock DP_STG.PREPD_NON_USG_HXC_LD for read, we lock
DD_UTL.PREPD_NON_USG_ERR for write, and we lock
Dd_TAB.PREPD_NON_USG for write.
5) We do an all-AMPs RETRIEVE step from DP_STG.PREPD_NON_USG_HXC_LD
by way of an all-rows scan with no residual conditions into Spool
1 (used to materialize view, derived table or table function
PNU_LD) (all_amps) (compressed columns allowed), which is
redistributed by the rowkey of (
DP_STG.PREPD_NON_USG_HXC_LD.ACCESS_METHOD_ID) to all AMPs. Then
we do a SORT to partition Spool 1 by rowkey. The size of Spool 1
is estimated with high confidence to be 34,320,156 rows (
9,266,442,120 bytes). The estimated time for this step is 3.01
6) We do an all-AMPs merge with unmatched inserts into
Dd_TAB.PREPD_NON_USG from Spool 1 (Last Use) with a condition of (
"(Dd_TAB.PREPD_NON_USG.ACCESS_METHOD_ID = ACCESS_METHOD_ID) AND
((Dd_TAB.PREPD_NON_USG.NON_USG_START_DT = NON_USG_START_DT) AND
((Dd_TAB.PREPD_NON_USG.NON_USG_CLASS_CD = NON_USG_CLASS_CD) AND
((Dd_TAB.PREPD_NON_USG.SUB_OFFRNG_ID = SUB_OFFRNG_ID) AND
(Dd_TAB.PREPD_NON_USG.NON_USG_START_TM = NON_USG_START_TM ))))").
The number of rows merged is estimated with high confidence to be
7) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> No rows are returned to the user as the result of statement 1.
Target Table DDL
As per my analysis, the secondary indexes (SI) and the 8.5 TB table size are causing the issue, but any recommendations would still be appreciated.
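One quick check on the "table size is the issue" theory is per-AMP space distribution of the target table. A sketch using the standard DBC.TableSizeV view (a max/avg ratio well above 1 indicates skew):

```sql
-- Per-AMP permanent space for the target table; skew_ratio near 1 is even.
SELECT MAX(CurrentPerm) AS max_amp_bytes,
       AVG(CurrentPerm) AS avg_amp_bytes,
       MAX(CurrentPerm) / AVG(CurrentPerm) AS skew_ratio
FROM   DBC.TableSizeV
WHERE  DatabaseName = 'Dd_TAB'
AND    TableName    = 'PREPD_NON_USG';
```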
How skewed is the distribution of NUSI values? How critical is the NUSI for queries?
Could a single-table JI be used instead? (Maybe a sparse JI, omitting the highly skewed values?)
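To make these two questions concrete, here is a sketch: first a value-distribution check, then a sparse single-table join index that could replace the NUSI. `NUSI_COL`, the selected columns, and the excluded value `'UNKNOWN'` are all placeholders, since the actual DDL is not shown:

```sql
-- 1) How skewed are the NUSI values? (NUSI_COL is a placeholder for the
--    actual indexed column.)
SELECT NUSI_COL, COUNT(*) AS row_cnt
FROM   Dd_TAB.PREPD_NON_USG
GROUP  BY NUSI_COL
ORDER  BY row_cnt DESC;

-- 2) Hypothetical sparse single-table JI replacing the NUSI, omitting the
--    dominant skewed value so the index stays small and evenly distributed.
CREATE JOIN INDEX Dd_TAB.PREPD_NON_USG_JI AS
SELECT NUSI_COL, ACCESS_METHOD_ID, NON_USG_START_DT
FROM   Dd_TAB.PREPD_NON_USG
WHERE  NUSI_COL <> 'UNKNOWN'
PRIMARY INDEX (NUSI_COL);
```

The trade-off: a sparse JI avoids the per-row NUSI maintenance cost during the MERGE for the excluded values, at the price of the optimizer only using the index for predicates that provably fall inside the WHERE clause.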