CPU and I/O skew -99%

Explain select add_months(a.period_dt, 1) - 1 as DataDate, 'DSL' as Rskdmn,
  a.org_cd as NCS, a.rec_typ_nm as ProdDesc, c.dmcl_st_cd as USState,
  c.cor as Country, sum(Amt)/1000000.000000 as sumOS,
  sum(cmtmt_am)/1000000.000000 as sumCmt, count(*) as numRecords
from VWDB.VW1 as a
left outer join (
     select a.Cid,
     case when a.eff_dt = '2014-06-30' then cast('2011-01-01' as date format 'YYYY-MM-DD')
     else a.eff_dt
     end as eff_dt,
     a.exp_dt, a.dmcl_st_cd, a.cor
     from VWDB.VW2 as a
     inner join VWDB.VW1 as b
      on b.Uid = a.Cid) as c
on a.Uid = c.Cid
where add_months(a.period_dt, 1) - 1 >= c.eff_dt
 and add_months(a.period_dt, 1) - 1 < c.exp_dt
 and a.period_dt = '2013-06-01'
group by 1, 2, 3, 4, 5, 6;
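
A -99% skew figure usually means almost all of the CPU/IO landed on a single AMP. Since the plan below redistributes spools by `Spool_4.CID` and by the derived `UID` (with the `'WH%'` prefix stripped), a first check is how evenly those join values hash across AMPs. A minimal diagnostic sketch, using the view and column names from the query above (substitute the base table the view reads if spool space is a concern):

```sql
-- Hypothetical diagnostic: count rows per AMP for the join column.
-- VWDB.VW1 / Uid are taken from the query above; adjust to the
-- real base table and the column actually used for redistribution.
select hashamp(hashbucket(hashrow(Uid))) as amp_no,
       count(*)                          as row_cnt
from VWDB.VW1
group by 1
order by row_cnt desc;
```

If one hot value dominates (NULLs, a dummy key, or many rows sharing the same `UID` after the `'WH%'` prefix is stripped), every row with that value hashes to the same AMP, and the redistributions in the later steps will concentrate the work there no matter how the rest of the plan is tuned.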

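Separately, note that the WHERE clause filters on `c.eff_dt` and `c.exp_dt` after the left outer join. Non-matching rows carry NULL in those columns, fail the comparisons, and are discarded, so the query behaves as an inner join anyway. Stating that intent directly (or moving the range predicates into the ON clause, if the outer join is really wanted) gives the optimizer more freedom. A sketch of the equivalent inner-join form, names unchanged from the original:

```sql
select add_months(a.period_dt, 1) - 1 as DataDate, 'DSL' as Rskdmn,
       a.org_cd as NCS, a.rec_typ_nm as ProdDesc, c.dmcl_st_cd as USState,
       c.cor as Country, sum(Amt)/1000000.000000 as sumOS,
       sum(cmtmt_am)/1000000.000000 as sumCmt, count(*) as numRecords
from VWDB.VW1 as a
inner join (
     select a.Cid,
            case when a.eff_dt = '2014-06-30'
                 then date '2011-01-01'
                 else a.eff_dt end as eff_dt,
            a.exp_dt, a.dmcl_st_cd, a.cor
     from VWDB.VW2 as a
     inner join VWDB.VW1 as b
        on b.Uid = a.Cid) as c
  on a.Uid = c.Cid
 and add_months(a.period_dt, 1) - 1 >= c.eff_dt
 and add_months(a.period_dt, 1) - 1 <  c.exp_dt
where a.period_dt = date '2013-06-01'
group by 1, 2, 3, 4, 5, 6;
```

Whether this changes the plan depends on collected statistics; the non-equality date-range predicates will still rule out a simple hash join on their own, but the optimizer no longer has to preserve outer-join semantics it ultimately throws away.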
  1) First, we lock TBLDB.IP_ID for access, we lock
     TBLDB.IP_AD_CFY for access, we lock
     TBLDB.IP for access, we lock
     TBLDB.TBL1 for access, we lock TBLDB.GRPHY_ZIP
     for access, we lock TBLDB.CDR_DT for access, we lock
     SRCTBLDB.VW1 for access, and we lock
     DMRTTBLDB.TBLXREF for access.
  2) Next, we do an all-AMPs SUM step to aggregate from
     TBLDB.TBL1 by way of an all-rows scan with a
     condition of ("(TBLDB.TBL1.SC_CD = '10') AND
     ((TBLDB.TBL1.CFY_CD = 'NCS_DS') OR
     ((TBLDB.TBL1.CFY_CD = 'NCS_CD') OR
     ((TBLDB.TBL1.CFY_CD = 'SIC_CD') OR
     (TBLDB.TBL1.CFY_CD = 'SIC_DS'))))"), and the
     grouping identifier in field 1030.  Aggregate Intermediate Results
     are computed globally, then placed in Spool 7.  The size of Spool
     7 is estimated with low confidence to be 26 rows (546 bytes).  The
     estimated time for this step is 0.92 seconds.
  3) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from Spool 7 (Last Use) by
          way of an all-rows scan into Spool 1 (all_amps), which is
          redistributed by hash code to all AMPs with hash fields (
          "Spool_7.EFF_DT") and Field1 ("Spool_7.EFF_DT").  The size
          of Spool 1 is estimated with low confidence to be 26 rows (
          650 bytes).  Spool AsgnList: "Spool_7.EFF_DT",
          "EFF_DT" = "EFF_DT".
          The estimated time for this step is 0.04 seconds.
       2) We do an all-AMPs SUM step to aggregate from
          TBLDB.IP_AD_CFY by way of an all-rows scan
          with a condition of (
          "(TBLDB.IP_AD_CFY.SC_CD = '10') AND
          ((TBLDB.IP_AD_CFY.CFY_CD =
          'CNTRY_OF_DMCL') OR
          (TBLDB.IP_AD_CFY.CFY_CD =
          'CNTRY_OF_RISK'))"), and the grouping identifier in field
          1032.  Aggregate Intermediate Results are computed globally,
          then placed in Spool 10.  The size of Spool 10 is estimated
          with low confidence to be 30 rows (630 bytes).  The estimated
          time for this step is 4.60 seconds.
  4) We do an all-AMPs RETRIEVE step from Spool 10 (Last Use) by way of
     an all-rows scan into Spool 1 (all_amps), which is redistributed
     by hash code to all AMPs with hash fields ("Spool_10.EFF_DT") and
     Field1 ("Spool_10.EFF_DT").  Then we do a SORT to order Spool 1
     by the sort key in spool field1 eliminating duplicate rows.  The
     size of Spool 1 is estimated with low confidence to be 43 rows (
     1,075 bytes).  Spool AsgnList: "Spool_10.EFF_DT",
     "EFF_DT" = "EFF_DT".
     The estimated time for this step is 0.04 seconds.
  5) We do an all-AMPs SUM step to aggregate from Spool 1 (Last Use) by
     way of an all-rows scan, and the grouping identifier in field 2.
     Aggregate Intermediate Results are computed globally, then placed
     in Spool 13.  The size of Spool 13 is estimated with low
     confidence to be 33 rows (693 bytes).  The estimated time for this
     step is 0.05 seconds.
  6) We do an all-AMPs RETRIEVE step from Spool 13 (Last Use) by way of
     an all-rows scan into Spool 2 (used to materialize view, derived
     table or table function dy_01) (all_amps) (compressed columns
     allowed), which is built locally on the AMPs with Field1 (
     "UniqueId").  The size of Spool 2 is estimated with low confidence
     to be 33 rows (825 bytes).  Spool AsgnList: "UniqueId",
     "EFF_DT".
     The estimated time for this step is 0.04 seconds.
  7) We do an all-AMPs SUM step to aggregate from Spool 2 (Last Use) by
     way of an all-rows scan.  Aggregate Intermediate Results are
     computed globally, then placed in Spool 16.  The size of Spool 16
     is estimated with high confidence to be 1 row (23 bytes).  The
     estimated time for this step is 0.05 seconds.
  8) We do an all-AMPs RETRIEVE step from Spool 16 (Last Use) by way of
     an all-rows scan into Spool 3 (used to materialize view, derived
     table or table function dy_02) (all_amps) (compressed columns
     allowed), which is built locally on the AMPs with Field1 (
     "UniqueId").  The size of Spool 3 is estimated with high
     confidence to be 1 row (29 bytes).  Spool AsgnList: "UniqueId",
     "Field_2",
     "Field_3".
     The estimated time for this step is 0.01 seconds.
  9) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by
          way of an all-rows scan into Spool 20 (all_amps) (compressed
          columns allowed), which is duplicated on all AMPs.  The size
          of Spool 20 is estimated with high confidence to be 960 rows
          (20,160 bytes).  Spool AsgnList:
          "LB_AS_OF_DT" = "LB_AS_OF_DT",
          "UB_AS_OF_DT" = "UB_AS_OF_DT".
          The estimated time for this step is 0.03 seconds.
       2) We do an all-AMPs RETRIEVE step from TBLDB.GRPHY_ZIP by
          way of an all-rows scan with no residual conditions into
          Spool 21 (all_amps) (compressed columns allowed), which is
          duplicated on all AMPs with hash fields (
          "TBLDB.GRPHY_ZIP.ZIP_CD").  The size of Spool 21 is
          estimated with high confidence to be 40,537,920 rows (
          810,758,400 bytes).  Spool AsgnList:
          "ZIP_CD" = "TBLDB.GRPHY_ZIP.ZIP_CD",
          "ST_CD" = "ST_CD".
          The estimated time for this step is 0.39 seconds.
 10) We execute the following steps in parallel.
      1) We do an all-AMPs JOIN step (No Sum) from Spool 20 (Last Use)
         by way of an all-rows scan, which is joined to
          TBLDB.CDR_DT by way of an all-rows scan with no
          residual conditions.  Spool 20 and TBLDB.CDR_DT are
          joined using a product join, with a join condition of (
          "(TBLDB.CDR_DT.CLNDR_DT <= Spool_20.UB_AS_OF_DT) AND
          (TBLDB.CDR_DT.CLNDR_DT >= Spool_20.LB_AS_OF_DT)").
         The result goes into Spool 22 (all_amps) (compressed columns
         allowed), which is duplicated on all AMPs.  The size of Spool
         22 is estimated with no confidence to be 935,040 rows (
         15,895,680 bytes).  Spool AsgnList:
         "CLNDR_DT" = "{RightTable}.CLNDR_DT".
         The estimated time for this step is 0.16 seconds.
      2) We do an all-AMPs RETRIEVE step from TBLDB.TBL1
         by way of an all-rows scan with a condition of (
         "(TBLDB.TBL1.CFY_CD = 'SIC_CD') AND
         (TBLDB.TBL1.SC_CD = '10')") locking for access
         into Spool 23 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "TBLDB.TBL1.IP_ID").  Then we do a SORT
         to order Spool 23 by row hash.  The size of Spool 23 is
         estimated with low confidence to be 3,445,872 rows (
         165,401,856 bytes).  Spool AsgnList:
         "IP_ID" = "TBLDB.TBL1.IP_ID",
         "SC_CD" = "SC_CD",
         "EFF_DT" = "EFF_DT",
         "EXP_DT" = "EXP_DT",
         "CFY_VL_CD" = "CFY_VL_CD".
         The estimated time for this step is 0.90 seconds.
 11) We execute the following steps in parallel.
      1) We do an all-AMPs JOIN step (No Sum) from Spool 22 (Last Use)
         by way of an all-rows scan, which is joined to
         TBLDB.IP by way of an all-rows scan with a
         condition of ("(TBLDB.IP.SC_CD = '10') AND
         (TBLDB.IP.ID_TYP_CD = 'CID')").  Spool 22
         and TBLDB.IP are joined using a product join,
         with a join condition of ("TBLDB.IP.EXP_DT >=
         Spool_22.CLNDR_DT").  The result goes into Spool 24 (all_amps)
         (compressed columns allowed), which is built locally on the
         AMPs with hash fields (
         "TBLDB.IP.IP_ID").  Then we do a SORT
         to order Spool 24 by row hash.  The size of Spool 24 is
         estimated with no confidence to be 91,297,420 rows (
         33,323,558,300 bytes).  Spool AsgnList:
         "CLNDR_DT" = "{LeftTable}.CLNDR_DT",
         "IP_ID" = "TBLDB.IP.IP_ID",
         "SC_CD" = "{RightTable}.SC_CD",
         "ID_TX" = "{RightTable}.ID_TX",
         "EXP_DT" = "{RightTable}.EXP_DT".
         The estimated time for this step is 19.04 seconds.
      2) We do an all-AMPs RETRIEVE step from TBLDB.TBL1
         by way of an all-rows scan with a condition of (
         "(TBLDB.TBL1.CFY_CD = 'NCS_CD') AND
         (TBLDB.TBL1.SC_CD = '10')") locking for access
         into Spool 25 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "TBLDB.TBL1.IP_ID").  Then we do a SORT
         to order Spool 25 by row hash.  The size of Spool 25 is
         estimated with low confidence to be 3,529,951 rows (
         169,437,648 bytes).  Spool AsgnList:
         "IP_ID" = "TBLDB.TBL1.IP_ID",
         "SC_CD" = "SC_CD",
         "EFF_DT" = "EFF_DT",
         "EXP_DT" = "EXP_DT",
         "CFY_VL_CD" = "CFY_VL_CD".
         The estimated time for this step is 0.90 seconds.
 12) We do an all-AMPs JOIN step (No Sum) from Spool 23 (Last Use) by
     way of a RowHash match scan, which is joined to Spool 24 (Last
     Use) by way of a RowHash match scan.  Spool 23 and Spool 24 are
     right outer joined using a merge join, with condition(s) used for
     non-matching on right table ("Spool_24.SC_CD = '10'"), with a
     join condition of ("(Spool_24.CLNDR_DT <= Spool_23.EXP_DT) AND
     ((Spool_24.CLNDR_DT >= Spool_23.EFF_DT) AND
     ((Spool_24.IP_ID = Spool_23.IP_ID) AND
     (Spool_23.SC_CD = Spool_24.SC_CD )))").  The result goes into
     Spool 26 (all_amps) (compressed columns allowed), which is built
     locally on the AMPs with hash fields ("Spool_24.IP_ID").
     Then we do a SORT to order Spool 26 by row hash.  The size of
     Spool 26 is estimated with no confidence to be 91,297,420 rows (
     35,332,101,540 bytes).  Spool AsgnList:
     "IP_ID" = "{LeftTable}.IP_ID",
     "CLNDR_DT" = "{RightTable}.CLNDR_DT",
     "IP_ID" = "Spool_24.IP_ID",
     "SC_CD" = "{RightTable}.SC_CD",
     "ID_TX" = "{RightTable}.ID_TX",
     "EXP_DT" = "{RightTable}.EXP_DT",
     "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD".
     The estimated time for this step is 19.49 seconds.
 13) We do an all-AMPs JOIN step (No Sum) from Spool 21 (Last Use) by
     way of an all-rows scan, which is joined to
     TBLDB.IP_AD_CFY by way of an all-rows scan with a
     condition of ("(TBLDB.IP_AD_CFY.SC_CD = '10')
     AND (TBLDB.IP_AD_CFY.CFY_CD =
     'CNTRY_OF_DMCL')") locking TBLDB.IP_AD_CFY for
     access.  Spool 21 is used as the hash table and
     TBLDB.IP_AD_CFY is used as the probe table in a
     right outer joined using a dynamic hash join (on cross term), with
     a join condition of ("Spool_21.ZIP_CD =
     (SUBSTR(TBLDB.IP_AD_CFY.ZIP_CD ,1 ,5 ))").  The
     result goes into Spool 29 (all_amps) (compressed columns allowed),
     which is built locally on the AMPs with hash fields (
     "TBLDB.IP_AD_CFY.IP_ID").  Then we do a
     SORT to order Spool 29 by row hash.  The size of Spool 29 is
     estimated with index join confidence to be 3,010,386 rows (
     180,623,160 bytes).  Spool AsgnList:
     "IP_ID" =
     "TBLDB.IP_AD_CFY.IP_ID",
     "ZIP_CD" = "{LeftTable}.ZIP_CD",
     "EXP_DT" = "{RightTable}.EXP_DT",
     "EFF_DT" = "{RightTable}.EFF_DT",
     "SC_CD" = "{RightTable}.SC_CD",
     "ST_CD" = "{LeftTable}.ST_CD",
     "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
     "ZIP_CD" = "{RightTable}.ZIP_CD".
     The estimated time for this step is 4.74 seconds.
 14) We execute the following steps in parallel.
      1) We do an all-AMPs JOIN step (No Sum) from Spool 25 (Last Use)
         by way of a RowHash match scan, which is joined to Spool 26
         (Last Use) by way of a RowHash match scan.  Spool 25 and Spool
         26 are right outer joined using a merge join, with
         condition(s) used for non-matching on right table (
         "Spool_26.SC_CD = '10'"), with a join condition of (
         "(Spool_26.CLNDR_DT <= Spool_25.EXP_DT) AND
         ((Spool_26.CLNDR_DT >= Spool_25.EFF_DT) AND
         ((Spool_26.IP_ID = Spool_25.IP_ID) AND
         (Spool_25.SC_CD = Spool_26.SC_CD )))").  The result goes
         into Spool 31 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "Spool_26.IP_ID").  Then we do a SORT to order Spool
         31 by row hash.  The size of Spool 31 is estimated with no
         confidence to be 91,297,420 rows (37,340,644,780 bytes).
         Spool AsgnList:
         "CLNDR_DT" = "{RightTable}.CLNDR_DT",
         "IP_ID" = "Spool_26.IP_ID",
         "SC_CD" = "{RightTable}.SC_CD",
         "IP_ID" = "{LeftTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "ID_TX" = "{RightTable}.ID_TX",
         "EXP_DT" = "{RightTable}.EXP_DT",
         "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD".
         The estimated time for this step is 20.70 seconds.
      2) We do an all-AMPs RETRIEVE step from
         TBLDB.IP_AD_CFY by way of an all-rows scan
         with a condition of (
         "(TBLDB.IP_AD_CFY.SC_CD = '10') AND
         (TBLDB.IP_AD_CFY.CFY_CD =
         'CNTRY_OF_RISK')") locking for access into Spool 34 (all_amps)
         (compressed columns allowed), which is built locally on the
         AMPs with hash fields (
         "TBLDB.IP_AD_CFY.IP_ID").  Then we
         do a SORT to order Spool 34 by row hash.  The size of Spool 34
         is estimated with low confidence to be 3,056,453 rows (
         146,709,744 bytes).  Spool AsgnList:
         "IP_ID" =
         "TBLDB.IP_AD_CFY.IP_ID",
         "SC_CD" = "SC_CD",
         "CFY_VL_CD" = "CFY_VL_CD",
         "EFF_DT" = "EFF_DT",
         "EXP_DT" = "EXP_DT".
         The estimated time for this step is 4.60 seconds.
 15) We execute the following steps in parallel.
      1) We do an all-AMPs JOIN step (No Sum) from Spool 29 (Last Use)
         by way of a RowHash match scan, which is joined to Spool 31
         (Last Use) by way of a RowHash match scan.  Spool 29 and Spool
         31 are right outer joined using a merge join, with
         condition(s) used for non-matching on right table (
         "Spool_31.SC_CD = '10'"), with a join condition of (
         "(Spool_31.CLNDR_DT <= Spool_29.EXP_DT) AND
         ((Spool_31.CLNDR_DT >= Spool_29.EFF_DT) AND
         ((Spool_31.IP_ID = Spool_29.IP_ID) AND
         (Spool_31.SC_CD = Spool_29.SC_CD )))").  The result goes
         into Spool 35 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "Spool_31.IP_ID").  Then we do a SORT to order Spool
         35 by row hash.  The size of Spool 35 is estimated with no
         confidence to be 91,297,420 rows (40,444,757,060 bytes).
         Spool AsgnList:
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{LeftTable}.IP_ID",
         "ZIP_CD" = "{LeftTable}.ZIP_CD",
         "CLNDR_DT" = "{RightTable}.CLNDR_DT",
         "IP_ID" = "Spool_31.IP_ID",
         "SC_CD" = "{RightTable}.SC_CD",
         "ID_TX" = "{RightTable}.ID_TX",
         "ST_CD" = "{LeftTable}.ST_CD",
         "EXP_DT" = "{RightTable}.EXP_DT",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD",
         "ZIP_CD" = "{LeftTable}.ZIP_CD".
         The estimated time for this step is 22.50 seconds.
      2) We do an all-AMPs RETRIEVE step from TBLDB.TBL1
         by way of an all-rows scan with a condition of (
         "(TBLDB.TBL1.CFY_CD = 'SIC_DS') AND
         (TBLDB.TBL1.SC_CD = '10')") locking for access
         into Spool 38 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "TBLDB.TBL1.IP_ID").  Then we do a SORT
         to order Spool 38 by row hash.  The size of Spool 38 is
         estimated with low confidence to be 3,445,872 rows (
         1,257,743,280 bytes).  Spool AsgnList:
         "IP_ID" = "TBLDB.TBL1.IP_ID",
         "SC_CD" = "SC_CD",
         "EFF_DT" = "EFF_DT",
         "EXP_DT" = "EXP_DT",
         "CFY_VL_TX" = "CFY_VL_TX".
         The estimated time for this step is 1.17 seconds.
 16) We execute the following steps in parallel.
      1) We do an all-AMPs JOIN step (No Sum) from Spool 34 (Last Use)
         by way of a RowHash match scan, which is joined to Spool 35
         (Last Use) by way of a RowHash match scan.  Spool 34 and Spool
         35 are right outer joined using a merge join, with
         condition(s) used for non-matching on right table (
         "Spool_35.SC_CD = '10'"), with a join condition of (
         "(Spool_35.CLNDR_DT <= Spool_34.EXP_DT) AND
         ((Spool_35.CLNDR_DT >= Spool_34.EFF_DT) AND
         ((Spool_35.IP_ID = Spool_34.IP_ID) AND
         (Spool_35.SC_CD = Spool_34.SC_CD )))").  The result goes
         into Spool 39 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "Spool_35.IP_ID").  Then we do a SORT to order Spool
         39 by row hash.  The size of Spool 39 is estimated with no
         confidence to be 91,297,420 rows (42,453,300,300 bytes).
         Spool AsgnList:
         "CLNDR_DT" = "{RightTable}.CLNDR_DT",
         "IP_ID" = "Spool_35.IP_ID",
         "SC_CD" = "{RightTable}.SC_CD",
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{LeftTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "ZIP_CD" = "{RightTable}.ZIP_CD",
         "ID_TX" = "{RightTable}.ID_TX",
         "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD",
         "ST_CD" = "{RightTable}.ST_CD",
         "EXP_DT" = "{RightTable}.EXP_DT",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "ZIP_CD" = "{RightTable}.ZIP_CD".
         The estimated time for this step is 23.80 seconds.
      2) We do an all-AMPs RETRIEVE step from
         TBLDB.IP_ID by way of an all-rows scan with a
         condition of ("(TBLDB.IP_ID.SC_CD = '10') AND
         (TBLDB.IP_ID.CFY_CD = 'CLIENT_NM')") into
         Spool 42 (all_amps) (compressed columns allowed), which is
         built locally on the AMPs with hash fields (
         "TBLDB.IP_ID.IP_ID").  Then we do a
         SORT to order Spool 42 by row hash.  The size of Spool 42 is
         estimated with low confidence to be 2,981,321 rows (
         1,088,182,165 bytes).  Spool AsgnList:
         "IP_ID" = "TBLDB.IP_ID.IP_ID",
         "SC_CD" = "SC_CD",
         "CFY_VL_TX" = "CFY_VL_TX",
         "EFF_DT" = "EFF_DT",
         "EXP_DT" = "EXP_DT".
         The estimated time for this step is 3.46 seconds.
 17) We do an all-AMPs JOIN step (No Sum) from Spool 38 (Last Use) by
     way of a RowHash match scan, which is joined to Spool 39 (Last
     Use) by way of a RowHash match scan.  Spool 38 and Spool 39 are
     right outer joined using a merge join, with condition(s) used for
     non-matching on right table ("Spool_39.SC_CD = '10'"), with a
     join condition of ("(Spool_39.CLNDR_DT <= Spool_38.EXP_DT) AND
     ((Spool_39.CLNDR_DT >= Spool_38.EFF_DT) AND
     ((Spool_39.IP_ID = Spool_38.IP_ID) AND
     (Spool_38.SC_CD = Spool_39.SC_CD )))").  The result goes into
     Spool 43 (all_amps) (compressed columns allowed), which is built
     locally on the AMPs with hash fields ("Spool_39.IP_ID").
     Then we do a SORT to order Spool 43 by row hash.  The size of
     Spool 43 is estimated with no confidence to be 91,297,420 rows (
     73,403,125,680 bytes).  Spool AsgnList:
     "IP_ID" = "{RightTable}.IP_ID",
     "IP_ID" = "{RightTable}.IP_ID",
     "IP_ID" = "{LeftTable}.IP_ID",
     "IP_ID" = "{RightTable}.IP_ID",
     "IP_ID" = "{RightTable}.IP_ID",
     "ZIP_CD" = "{RightTable}.ZIP_CD",
     "CLNDR_DT" = "{RightTable}.CLNDR_DT",
     "IP_ID" = "Spool_39.IP_ID",
     "SC_CD" = "{RightTable}.SC_CD",
     "ID_TX" = "{RightTable}.ID_TX",
     "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
     "ST_CD" = "{RightTable}.ST_CD",
     "EXP_DT" = "{RightTable}.EXP_DT",
     "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
     "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
     "CFY_VL_TX" = "{LeftTable}.CFY_VL_TX",
     "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
     "ZIP_CD" = "{RightTable}.ZIP_CD".
     The estimated time for this step is 38.64 seconds.
 18) We execute the following steps in parallel.
      1) We do an all-AMPs JOIN step (No Sum) from Spool 42 (Last Use)
         by way of a RowHash match scan, which is joined to Spool 43
         (Last Use) by way of a RowHash match scan.  Spool 42 and Spool
         43 are right outer joined using a merge join, with
         condition(s) used for non-matching on right table (
         "Spool_43.SC_CD = '10'"), with a join condition of (
         "(Spool_43.CLNDR_DT <= Spool_42.EXP_DT) AND
         ((Spool_43.CLNDR_DT >= Spool_42.EFF_DT) AND
         ((Spool_42.IP_ID = Spool_43.IP_ID) AND
         (Spool_43.SC_CD = Spool_42.SC_CD )))").  The result goes
         into Spool 46 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "Spool_43.IP_ID").  Then we do a SORT to order Spool
         46 by row hash.  The size of Spool 46 is estimated with no
         confidence to be 91,297,420 rows (103,987,761,380 bytes).
         Spool AsgnList:
         "CLNDR_DT" = "{RightTable}.CLNDR_DT",
         "IP_ID" = "Spool_43.IP_ID",
         "SC_CD" = "{RightTable}.SC_CD",
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "IP_ID" = "{RightTable}.IP_ID",
         "ZIP_CD" = "{RightTable}.ZIP_CD",
         "ID_TX" = "{RightTable}.ID_TX",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "ST_CD" = "{RightTable}.ST_CD",
         "CFY_VL_TX" = "{LeftTable}.CFY_VL_TX",
         "EXP_DT" = "{RightTable}.EXP_DT",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "CFY_VL_TX" = "{RightTable}.CFY_VL_TX",
         "CFY_VL_CD" = "{RightTable}.CFY_VL_CD",
         "ZIP_CD" = "{RightTable}.ZIP_CD".
         The estimated time for this step is 55.56 seconds.
      2) We do an all-AMPs RETRIEVE step from TBLDB.TBL1
         by way of an all-rows scan with a condition of (
         "(TBLDB.TBL1.CFY_CD = 'NCS_DS') AND
         (TBLDB.TBL1.SC_CD = '10')") locking for access
         into Spool 49 (all_amps) (compressed columns allowed), which
         is built locally on the AMPs with hash fields (
         "TBLDB.TBL1.IP_ID").  Then we do a SORT
         to order Spool 49 by row hash.  The size of Spool 49 is
         estimated with low confidence to be 5,401,427 rows (
         1,971,520,855 bytes).  Spool AsgnList:
         "IP_ID" = "TBLDB.TBL1.IP_ID",
         "SC_CD" = "SC_CD",
         "EFF_DT" = "EFF_DT",
         "EXP_DT" = "EXP_DT",
         "CFY_VL_TX" = "CFY_VL_TX".
         The estimated time for this step is 1.37 seconds.
 19) We do an all-AMPs JOIN step (No Sum) from Spool 46 (Last Use) by
     way of a RowHash match scan, which is joined to Spool 49 (Last
     Use) by way of a RowHash match scan.  Spool 46 and Spool 49 are
     left outer joined using a merge join, with condition(s) used for
     non-matching on left table ("Spool_46.SC_CD = '10'"), with a join
     condition of ("(Spool_46.CLNDR_DT <= Spool_49.EXP_DT) AND
     ((Spool_46.CLNDR_DT >= Spool_49.EFF_DT) AND
     ((Spool_46.IP_ID = Spool_49.IP_ID) AND
     (Spool_49.SC_CD = Spool_46.SC_CD )))").  The result goes into
     Spool 50 (all_amps) (compressed columns allowed), which is built
     locally on the AMPs.  The size of Spool 50 is estimated with no
     confidence to be 91,297,420 rows (134,937,586,760 bytes).  Spool
     AsgnList:
     "IP_ID" = "{LeftTable}.IP_ID",
     "IP_ID" = "{RightTable}.IP_ID",
     "IP_ID" = "{LeftTable}.IP_ID",
     "IP_ID" = "{LeftTable}.IP_ID",
     "IP_ID" = "{LeftTable}.IP_ID",
     "IP_ID" = "{LeftTable}.IP_ID",
     "ZIP_CD" = "{LeftTable}.ZIP_CD",
     "ID_TX" = "{LeftTable}.ID_TX",
     "CLNDR_DT" = "{LeftTable}.CLNDR_DT",
     "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD",
     "ST_CD" = "{LeftTable}.ST_CD",
     "CFY_VL_TX" = "{LeftTable}.CFY_VL_TX",
     "IP_ID" = "{LeftTable}.IP_ID",
     "SC_CD" = "{LeftTable}.SC_CD",
     "EXP_DT" = "{LeftTable}.EXP_DT",
     "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD",
     "CFY_VL_TX" = "{RightTable}.CFY_VL_TX",
     "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD",
     "CFY_VL_TX" = "{LeftTable}.CFY_VL_TX",
     "CFY_VL_CD" = "{LeftTable}.CFY_VL_CD",
     "ZIP_CD" = "{LeftTable}.ZIP_CD".
     The estimated time for this step is 8.59 seconds.
 20) We do an all-AMPs RETRIEVE step from Spool 50 (Last Use) by way of
     an all-rows scan with a condition of ("(NOT (Spool_50.ZIP_CD IS
     NULL )) OR ((NOT (Spool_50.IP_ID IS NULL )) OR ((NOT
     (Spool_50.IP_ID IS NULL )) OR ((NOT
     (Spool_50.IP_ID IS NULL )) OR ((NOT
     (Spool_50.IP_ID IS NULL )) OR ((NOT
     (Spool_50.IP_ID IS NULL )) OR (NOT
     (Spool_50.IP_ID IS NULL )))))))") into Spool 19
     (all_amps) (compressed columns allowed), which is built locally on
     the AMPs with Field1 ("17224").  The size of Spool 19 is estimated
     with no confidence to be 91,297,420 rows (132,472,556,420 bytes).
     Spool AsgnList: "17224",
     "Spool_19.ID_TX" = "{ Copy }ID_TX",
     "Spool_19.CLNDR_DT" = "{ Copy }CLNDR_DT",
     "Spool_19.CFY_VL_CD" = "{ Copy }CFY_VL_CD",
     "Spool_19.ST_CD" = "{ Copy }ST_CD",
     "Spool_19.CFY_VL_TX" = "{ Copy }CFY_VL_TX",
     "Spool_19.IP_ID" = "{ Copy }IP_ID",
     "Spool_19.SC_CD" = "{ Copy }SC_CD",
     "Spool_19.EXP_DT" = "{ Copy }EXP_DT",
     "Spool_19.CFY_VL_CD" = "{ Copy }CFY_VL_CD",
     "Spool_19.CFY_VL_TX" = "{ Copy }CFY_VL_TX",
     "Spool_19.CFY_VL_CD" = "{ Copy }CFY_VL_CD",
     "Spool_19.CFY_VL_TX" = "{ Copy }CFY_VL_TX",
     "Spool_19.CFY_VL_CD" = "{ Copy }CFY_VL_CD",
     "Spool_19.ZIP_CD" = "{ Copy }ZIP_CD".
     The estimated time for this step is 30.63 seconds.
 21) We do an all-AMPs SUM step to aggregate from Spool 19 (Last Use)
     by way of an all-rows scan, and the grouping identifier in field 1.
     Aggregate Intermediate Results are computed globally, then placed
     in Spool 53.  The size of Spool 53 is estimated with no confidence
     to be 68,473,065 rows (316,756,398,690 bytes).  The estimated time
     for this step is 12 minutes and 35 seconds.
 22) We execute the following steps in parallel.
      1) We do an all-AMPs RETRIEVE step from Spool 53 (Last Use) by
         way of an all-rows scan into Spool 4 (used to materialize
         view, derived table or table function a) (all_amps)
         (compressed columns allowed), which is built locally on the
         AMPs with Field1 ("UniqueId").  The size of Spool 4 is
         estimated with no confidence to be 68,473,065 rows (
         26,293,656,960 bytes).  Spool AsgnList: "UniqueId",
         "ID_TX",
         "Field_3",
         "Field_4",
         "CFY_VL_CD",
         "ST_CD".
         The estimated time for this step is 6.01 seconds.
      2) We do an all-AMPs RETRIEVE step from SRCTBLDB.DC_RISK_DTL
         by way of an all-rows scan with no residual conditions into
         Spool 55 (all_amps) (compressed columns allowed), which is
         redistributed by hash code to all AMPs with hash fields ("(
         CASE WHEN (NOT (SRCTBLDB.DC_RISK_DTL.UID LIKE
         'WH%')) THEN (SRCTBLDB.DC_RISK_DTL.UID) ELSE
         (SUBSTR(SRCTBLDB.DC_RISK_DTL.UID ,3 ,30)) END
         )(VARCHAR(50), CHARACTER SET LATIN, NOT
         CASESPECIFIC),
          SRCTBLDB.DC_RISK_DTL.PERIOD_DT").  The size
         of Spool 55 is estimated with high confidence to be 27,446,492
         rows (988,073,712 bytes).  Spool AsgnList:
         "UID" = "UID",
         "PERIOD_DT" = "SRCTBLDB.DC_RISK_DTL.PERIOD_DT",
         "NULL ,( CASE WHEN (NOT (SRCTBLDB.DC_RISK_DTL.UID
         LIKE 'WH%')) THEN (SRCTBLDB.DC_RISK_DTL.UID) ELSE
         (SUBSTR(SRCTBLDB.DC_RISK_DTL.UID ,3 ,30)) END
         )(VARCHAR(50), CHARACTER SET LATIN, NOT CASESPECIFIC)".
         The estimated time for this step is 3.02 seconds.
 23) We do an all-AMPs JOIN step (No Sum) from Spool 55 (Last Use) by
     way of a RowHash match scan, which is joined to
     DMRTTBLDB.TBLXREF by way of a RowHash match scan with no
     residual conditions.  Spool 55 is used as the hash table and
     DMRTTBLDB.TBLXREF is used as the probe table in a left outer
     joined using a single partition direct hash join, with a join
     condition of ("(DMRTTBLDB.TBLXREF.UID = (( CASE WHEN
     (NOT (Spool_55.UID LIKE 'WH%')) THEN
     (Spool_55.UID) ELSE (SUBSTR(Spool_55.UID ,3
     ,30)) END ))) AND (DMRTTBLDB.TBLXREF.PERIOD_DT =
     Spool_55.PERIOD_DT)").  The result goes into Spool 56 (all_amps)
     (compressed columns allowed), which is redistributed by hash code
     to all AMPs with hash fields ("( CASE WHEN (NOT
     (Spool_55.UID LIKE 'WH%')) THEN (Spool_55.UID)
     ELSE (SUBSTR(Spool_55.UID ,3 ,30)) END )(VARCHAR(1000),
     CHARACTER SET LATIN, NOT CASESPECIFIC)") into 6 hash join
     partitions.  The size of Spool 56 is estimated with low confidence
     to be 27,628,438 rows (884,110,016 bytes).  Spool AsgnList:
     "UID" = "{LeftTable}.UID",
     "NULL ,( CASE WHEN (NOT (Spool_55.UID LIKE 'WH%')) THEN
     (Spool_55.UID) ELSE (SUBSTR(Spool_55.UID ,3
     ,30)) END )(VARCHAR(1000), CHARACTER SET LATIN, NOT CASESPECIFIC)".
     The estimated time for this step is 2.62 seconds.
 24) We do an all-AMPs RETRIEVE step from Spool 4 (Last Use) by way of
     an all-rows scan into Spool 57 (all_amps) (compressed columns
     allowed) fanned out into 6 hash join partitions, which is
     redistributed by hash code to all AMPs with hash fields (
     "Spool_4.CID").  The size of Spool 57 is estimated with no
     confidence to be 68,473,065 rows (25,745,872,440 bytes).  Spool
     AsgnList:
     "CID" = "Spool_4.CID",
     "EFF_DT" = "EFF_DT",
     "EXP_DT" = "EXP_DT",
     "CNTRY_OF_RISK" = "CNTRY_OF_RISK",
     "DMCL_ST_CD" = "DMCL_ST_CD".
     The estimated time for this step is 13.46 seconds.
 25) We do an all-AMPs JOIN step (No Sum) from Spool 56 (Last Use) by
     way of an all-rows scan, which is joined to Spool 57 (Last Use) by
     way of an all-rows scan.  Spool 56 is used as the hash table and
     Spool 57 is used as the probe table in a joined using a classical
     hash join of 6 partitions, with a join condition of ("(( CASE WHEN
     (NOT (Spool_56.UID LIKE 'WH%')) THEN
     (Spool_56.UID) ELSE (SUBSTR(Spool_56.UID ,3
     ,30)) END ))= Spool_57.CID").  The result goes into Spool 5
     (used to materialize view, derived table or table function c)
     (all_amps) (compressed columns allowed), which is built locally on
     the AMPs with Field1 ("UniqueId").  The size of Spool 5 is
     estimated with no confidence to be 646,691,390 rows (
     248,329,493,760 bytes).  Spool AsgnList: "UniqueId",
     "{RightTable}.CID",
     "( CASE WHEN ({RightTable}.EFF_DT = DATE '2014-06-30') THEN (DATE
     '2011-01-01') ELSE ({RightTable}.EFF_DT) END)",
     "{RightTable}.EXP_DT",
     "{RightTable}.DMCL_ST_CD",
     "{RightTable}.CNTRY_OF_RISK".
     The estimated time for this step is 39.81 seconds.
 26) We execute the following steps in parallel.
      1) We do an all-AMPs RETRIEVE step from a single partition of
          SRCTBLDB.VW1 in view VWDB.VW1 with
          a condition of ("SRCTBLDB.VW1 in view
          VWDB.DC_RISK_DTL.PERIOD_DT = DATE '2013-06-01'") with
          a residual condition of ("SRCTBLDB.VW1 in view
          VWDB.DC_RISK_DTL.PERIOD_DT = DATE '2013-06-01'")
         locking for access into Spool 61 (all_amps) (compressed
         columns allowed), which is redistributed by rowkey to all AMPs
         with hash fields ("SRCTBLDB.DC_RISK_DTL.PERIOD_DT ,( CASE
         WHEN (NOT (SRCTBLDB.DC_RISK_DTL.UID LIKE 'WH%'))
         THEN (SRCTBLDB.DC_RISK_DTL.UID) ELSE
         (SUBSTR(SRCTBLDB.DC_RISK_DTL.UID ,3 ,30)) END
         )(VARCHAR(50), CHARACTER SET LATIN, NOT CASESPECIFIC)") and
         partitioning fields ("SRCTBLDB.DC_RISK_DTL.PERIOD_DT").
         Then we do a SORT to partition Spool 61 by rowkey.  The size
         of Spool 61 is estimated with high confidence to be 1,667,994
         rows (156,791,436 bytes).  Spool AsgnList:
         "UID" = "UID",
         "PERIOD_DT" = "SRCTBLDB.DC_RISK_DTL.PERIOD_DT",
         "REC_TYP_NM" = "REC_TYP_NM",
         "AMT" = "AMT",
         "CMTMT_AM" = "CMTMT_AM",
         "WF_NCS_CD" = "WF_NCS_CD",
         "NULL ,( CASE WHEN (NOT (SRCTBLDB.DC_RISK_DTL.UID
         LIKE 'WH%')) THEN (SRCTBLDB.DC_RISK_DTL.UID) ELSE
         (SUBSTR(SRCTBLDB.DC_RISK_DTL.UID ,3 ,30)) END
         )(VARCHAR(50), CHARACTER SET LATIN, NOT CASESPECIFIC)".
         The estimated time for this step is 0.43 seconds.
      2) We do an all-AMPs RETRIEVE step from Spool 5 (Last Use) by way
         of an all-rows scan into Spool 62 (all_amps) (compressed
         columns allowed), which is built locally on the AMPs with hash
         fields ("Spool_5.CID").  Then we do a SORT to order
         Spool 62 by row hash.  The size of Spool 62 is estimated with
         no confidence to be 646,691,390 rows (243,155,962,640 bytes).
         Spool AsgnList:
         "CID" = "Spool_5.CID",
         "EFF_DT" = "EFF_DT",
         "EXP_DT" = "EXP_DT",
         "DMCL_ST_CD" = "DMCL_ST_CD",
         "CNTRY_OF_RISK" = "CNTRY_OF_RISK".
         The estimated time for this step is 2 minutes and 45 seconds.
 27) We do an all-AMPs JOIN step (No Sum) from Spool 61 (Last Use) by
     way of a RowHash match scan, which is joined to a single partition
     of DMRTTBLDB.TBLXREF in view VWDB.VW1 with a
     condition of ("DMRTTBLDB.TBLXREF in view
     VWDB.DC_RISK_DTL.PERIOD_DT = DATE '2013-06-01'") with a
     residual condition of ("DMRTTBLDB.TBLXREF in view
     VWDB.DC_RISK_DTL.PERIOD_DT = DATE '2013-06-01'") locking
     DMRTTBLDB.TBLXREF for access.  Spool 61 and
     DMRTTBLDB.TBLXREF are left outer joined using a rowkey-based
     merge join, with condition(s) used for non-matching on left table
     ("Spool_61.PERIOD_DT = DATE '2013-06-01'"), with a join condition
     of ("(DMRTTBLDB.TBLXREF.PERIOD_DT = Spool_61.PERIOD_DT) AND
     (DMRTTBLDB.TBLXREF.UID = (( CASE WHEN (NOT
     (Spool_61.UID LIKE 'WH%')) THEN (Spool_61.UID)
     ELSE (SUBSTR(Spool_61.UID ,3 ,30)) END )))").  The
     result goes into Spool 63 (all_amps) (compressed columns allowed),
     which is duplicated on all AMPs with hash fields ("( CASE WHEN
     (NOT (Spool_61.UID LIKE 'WH%')) THEN
     (Spool_61.UID) ELSE (SUBSTR(Spool_61.UID ,3
     ,30)) END )(VARCHAR(1000), CHARACTER SET LATIN, NOT CASESPECIFIC)").
     Then we do a SORT to order Spool 63 by row hash.  The size of
     Spool 63 is estimated with low confidence to be 1,611,888,960 rows
     (151,517,562,240 bytes).  Spool AsgnList:
     "UID" = "{LeftTable}.UID",
     "PERIOD_DT" = "{LeftTable}.PERIOD_DT",
     "WF_NCS_CD" = "{LeftTable}.WF_NCS_CD",
     "REC_TYP_NM" = "{LeftTable}.REC_TYP_NM",
     "AMT" = "{LeftTable}.AMT",
     "CMTMT_AM" = "{LeftTable}.CMTMT_AM",
     "NULL ,( CASE WHEN (NOT (Spool_61.UID LIKE 'WH%')) THEN
     (Spool_61.UID) ELSE (SUBSTR(Spool_61.UID ,3
     ,30)) END )(VARCHAR(1000), CHARACTER SET LATIN, NOT CASESPECIFIC)".
     The estimated time for this step is 1 minute and 45 seconds.
 28) We do an all-AMPs JOIN step (No Sum) from Spool 62 (Last Use) by
     way of a RowHash match scan, which is joined to Spool 63 (Last
     Use) by way of a RowHash match scan.  Spool 62 and Spool 63 are
     joined using a merge join, with a join condition of ("((( CASE
     WHEN (NOT (Spool_63.UID LIKE 'WH%')) THEN
     (Spool_63.UID) ELSE (SUBSTR(Spool_63.UID ,3
     ,30)) END ))= Spool_62.CID) AND
     ((((ADD_MONTHS(({RightTable}.PERIOD_DT ),(1 )))- 1 )<
     Spool_62.EXP_DT) AND (((ADD_MONTHS(({RightTable}.PERIOD_DT ),(1
     )))- 1 )>= Spool_62.EFF_DT ))").  The result goes into Spool 60
     (all_amps) (compressed columns allowed), which is built locally on
     the AMPs with Field1 ("17224").  The size of Spool 60 is estimated
     with no confidence to be 510,111,691 rows (49,480,834,027 bytes).
     Spool AsgnList: "17224",
     "Spool_60.PERIOD_DT" = "{ Copy }{RightTable}.PERIOD_DT",
     "Spool_60.WF_NCS_CD" = "{ Copy }{RightTable}.WF_NCS_CD",
     "Spool_60.REC_TYP_NM" = "{ Copy }{RightTable}.REC_TYP_NM",
     "Spool_60.DMCL_ST_CD" = "{ Copy }{LeftTable}.DMCL_ST_CD",
     "Spool_60.CNTRY_OF_RISK" = "{ Copy }{LeftTable}.CNTRY_OF_RISK",
     "Spool_60.AMT" = "{ Copy }{RightTable}.AMT",
     "Spool_60.CMTMT_AM" = "{ Copy }{RightTable}.CMTMT_AM".
     The estimated time for this step is 41.21 seconds.
 29) We do an all-AMPs SUM step to aggregate from Spool 60 (Last Use)
     by way of an all-rows scan, and the grouping identifier in field 1.
     Aggregate Intermediate Results are computed globally, then placed
     in Spool 66.  The size of Spool 66 is estimated with no confidence
     to be 288,947,703 rows (64,435,337,769 bytes).  The estimated time
     for this step is 5.92 seconds.
 30) We do an all-AMPs RETRIEVE step from Spool 66 (Last Use) by way of
     an all-rows scan into Spool 58 (group_amps), which is built
     locally on the AMPs with Field1 ("UniqueId").  The size of Spool
     58 is estimated with no confidence to be 288,947,703 rows (
     32,940,038,142 bytes).  Spool AsgnList: "UniqueId",
     "Field_2 ,'DSL',ORG_CD ,REC_TYP_NM ,DMCL_ST_CD
     ,COR ,Field_7 / 1000000.000000 ,Field_8 / 1000000.000000
     ,Field_9 (INTEGER),".
     The estimated time for this step is 7.37 seconds.
 31) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 58 are sent back to the user as the result
     of statement 1.  The total estimated time is 23 minutes and 12
     seconds.

Any suggestion to reduce CPU and IO skew?

4 REPLIES
Enthusiast

Re: Cpu and io skew -99%

Initial thought:
Without much info, just the explain, a quick glance suggests it is your OR clause. How about replacing it with IN? Modeling is one part; I see that should have been taken care of so you could probably avoid the full-table scan (FTS). Some code is repetitive. Maybe you need to revisit your joins.
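
For illustration, the OR chain visible in step 2 of the explain could be collapsed into an IN list like this (a sketch only; as noted below, the predicate may live inside a view rather than the query itself):

```sql
-- Sketch: equivalent IN-list form of the OR chain from step 2 of the explain.
SELECT *
FROM TBLDB.TBL1
WHERE SC_CD = '10'
  AND CFY_CD IN ('NCS_DS', 'NCS_CD', 'SIC_CD', 'SIC_DS');
```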
Enthusiast

Re: Cpu and io skew -99%

The OR is not in the actual query, so I cannot change that. What do you mean by revisiting the joins?

Enthusiast

Re: Cpu and io skew -99%

Maybe you can share the query? The table joins might be tweaked while still meeting the end requirements.

Also, see whether the logic can be broken up using volatile tables (VT) or global temporary tables (GTT). How about temporal features too? I see date comparisons.
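
As a sketch of the volatile-table idea, the derived table "c" from the posted query could be materialized once, with stats collected on its join column so the optimizer no longer has to work from a no-confidence estimate (names taken from the original query; adjust to your schema):

```sql
-- Sketch: pre-materialize derived table "c" as a volatile table.
CREATE VOLATILE TABLE vt_c AS (
    SELECT a.Cid,
           CASE WHEN a.eff_dt = DATE '2014-06-30'
                THEN DATE '2011-01-01'
                ELSE a.eff_dt
           END AS eff_dt,
           a.exp_dt, a.dmcl_st_cd, a.cor
    FROM VWDB.VW2 AS a
    INNER JOIN VWDB.VW1 AS b
      ON b.Uid = a.Cid
) WITH DATA
PRIMARY INDEX (Cid)
ON COMMIT PRESERVE ROWS;

COLLECT STATISTICS COLUMN (Cid) ON vt_c;
```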

Senior Supporter

Re: Cpu and io skew -99%

The SQL is much more complex than your code shows, since you seem to access views which themselves contain complex logic; are you aware of this?

The high skew is most likely related to missing or stale stats.

Check the explain and look for steps where the estimated row counts are not on the same level as the actuals. Check whether you can remove this discrepancy by collecting stats.
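
For example, stats on join and filter columns that appear in the low/no-confidence steps could be collected like this (table and column names taken from the explain; verify against your actual DDL):

```sql
-- Sketch: refresh stats on columns the optimizer estimates with no confidence.
COLLECT STATISTICS COLUMN (PERIOD_DT) ON SRCTBLDB.DC_RISK_DTL;
COLLECT STATISTICS COLUMN (UID) ON DMRTTBLDB.TBLXREF;
COLLECT STATISTICS COLUMN (PERIOD_DT, UID) ON DMRTTBLDB.TBLXREF;
```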