Rewrite a query

Enthusiast


Hi,

Can you please help me rewrite the query below? It is highly skewed and has a very high impact CPU.

DELETE FROM xyz.#VAU_DETAIL a
WHERE EXISTS
  (SELECT 1
     FROM xyz.VAU_DETAIL b
    WHERE b.O_TRAN_ID = a.O_TRAN_ID
      AND b.ER_ID NOT = a.ER_ID
      AND a.AC_DT >= b.AC_DT);

Enthusiast

Re: Rewrite a query

Can you post the EXPLAIN output please? 

Can you also give the top result rows of:

SELECT O_TRAN_ID, COUNT(*) FROM xyz.#VAU_DETAIL GROUP BY 1 ORDER BY 2 DESC;
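
If the redistribution on O_TRAN_ID is the skewed part, something like this might also help (just a sketch using the standard hash functions) to see how the rows would spread across the AMPs:

SELECT HASHAMP(HASHBUCKET(HASHROW(O_TRAN_ID))) AS Amp_No
      ,COUNT(*) AS Row_Cnt   -- rows landing on each AMP when redistributed by O_TRAN_ID
FROM xyz.#VAU_DETAIL
GROUP BY 1
ORDER BY 2 DESC;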

Thanks

Roland

Roland Wenzlofsky
Enthusiast

Re: Rewrite a query

Explain:

1) First, we lock a distinct xyz."pseudo table" for write on

     a RowHash to prevent global deadlock for

     xyz.#VAU_DETAIL.

  2) Next, we lock xyz.#VAU_DETAIL in view

     #V_AOC_USAGE_DETAIL for write.

  3) We execute the following steps in parallel.

       1) We do an all-AMPs RETRIEVE step from

          xyz.#VAU_DETAIL in view #V_AU_DETAIL

          by way of an all-rows scan with no residual conditions into

          Spool 2 (all_amps), which is redistributed by the hash code

          of (xyz.#VAU_DETAIL.O_TRAN_ID) to

          all AMPs.  The size of Spool 2 is estimated with high

          confidence to be 1,922,203 rows (84,576,932 bytes).  The

          estimated time for this step is 0.96 seconds.

       2) We do an all-AMPs RETRIEVE step from

          xyz.#VAU_DETAIL by way of an all-rows scan

          with no residual conditions locking for access into Spool 4

          (all_amps), which is redistributed by the hash code of (

          xyz.#VAU_DETAIL.O_TRAN_ID) to all

          AMPs.  Then we do a SORT to order Spool 4 by the sort key in

          spool field1.  The size of Spool 4 is estimated with high

          confidence to be 1,881,422 rows (152,395,182 bytes).  The

          estimated time for this step is 0.23 seconds.

  4) We do an all-AMPs RETRIEVE step from Spool 4 (Last Use) by way of

     an all-rows scan into Spool 3 (all_amps), which is built locally

     on the AMPs.  The size of Spool 3 is estimated with high

     confidence to be 1,881,422 rows (152,395,182 bytes).

  5) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of an

     all-rows scan, which is joined to Spool 3 (Last Use) by way of an

     all-rows scan.  Spool 2 and Spool 3 are joined using a single

     partition inclusion hash join, with a join condition of (

     "(AC_DT >= AC_DT) AND ((ER_ID <> ER_ID) AND

     ((O_TRAN_ID = O_TRAN_ID) AND (NOT

     (O_TRAN_ID IS NULL ))))").  The result goes into Spool 1

     (all_amps), which is redistributed by the hash code of (

     xyz.#VAU_DETAIL.ROWID) to all AMPs.  Then we do a

     SORT to order Spool 1 by row hash and the sort key in spool field1

     eliminating duplicate rows.  The size of Spool 1 is estimated with

     index join confidence to be 1,922,203 rows (34,599,654 bytes).

     The estimated time for this step is 27.94 seconds.

  6) We do an all-AMPs MERGE DELETE to xyz.#VAU_DETAIL

     in view #V_AU_DETAIL from Spool 1 (Last Use) via the row id.

     The size is estimated with index join confidence to be 1,922,203

     rows.  The estimated time for this step is 3 minutes and 20

     seconds.

  7) We spoil the parser's dictionary cache for the table.

  -> No rows are returned to the user as the result of statement 1.

2: The O_TRAN_ID column has 1,851,017 unique values, and the table row count is nearly the same: 1,890,444 rows.

3: O_TRAN_ID is VARCHAR(20).

4: ER_ID is the UPI (unique primary index).

5: Stats are up to date on all the columns in the WHERE clause.
Enthusiast

Re: Rewrite a query

"and b.ER_ID not = a.ER_ID" is what's killing your performance.. rewrite it as a loj on a.tran_id = b.tran_id where a.ac_dt > b.ac_dt and b.er_id is null (assuming "a" is the left table)

Enthusiast

My query is taking more than 6 TB of spool to execute. Can you please help me out?

Here are the query and the explain plan:

SELECT
   CAST(C.VENDOR_DEPT_NBR AS INTEGER) AS DEPT
  ,CAST(C.DEPT_CATEGORY_NBR AS INTEGER) AS DEPT_CATEGORY_NBR
  ,CAST(T.CHANNEL_MTHD_DESC AS VARCHAR(120)) AS CHANNEL_MTHD_DESC
  ,CAST(COUNT(F.ITEM_NBR) AS BIGINT) AS C_ITEM_NBR
  ,CAST(SUBSTR(F.DEMAND_SEGMENTATION_NM,4,2) AS INTEGER) AS FCSTRANGE /* casting a VARCHAR(40) column on the source Teradata table to INTEGER because the target column on the Oracle table is NUMBER(10,0) */
FROM
   CA_WM_REPL_VM.GRS_FULFILLMENT_PARM F
  ,CA_WM_VM.ITEM_CUR C
  ,CA_WM_REPL_VM.CHANNEL_MTHD_TXT T
  ,WM_UTIL.JWCAC29U_TEMP_TBL TMP
WHERE C.MDS_FAM_ID = F.ITEM_NBR
  AND F.CHANNEL_MTHD_CD = T.CHANNEL_MTHD_CD
  AND C.VENDOR_DEPT_NBR = TMP.DEPT
  AND C.DEPT_CATEGORY_NBR = TMP.DEPT_CATEGORY_NBR
  AND T.CHANNEL_MTHD_DESC = TMP.CHANNEL_MTHD_DESC
  AND TMP.DEPT_SUBCATG_NBR = -1
  AND TMP.FINELINE_NBR = -1
GROUP BY
   C.VENDOR_DEPT_NBR
  ,C.DEPT_CATEGORY_NBR
  ,T.CHANNEL_MTHD_DESC
  ,SUBSTR(F.DEMAND_SEGMENTATION_NM,4,2);

Explain plan:

  1) First, we lock CA_WM_REPL_TABLES.GRS_FULFILLMENT_PARM in view

     CA_WM_REPL_VM.GRS_FULFILLMENT_PARM for access, we lock

     CA_WM_REPL_TABLES.CHANNEL_MTHD_TXT in view

     CA_WM_REPL_VM.CHANNEL_MTHD_TXT for access, we lock

     WW_CORE_DIM_TABLES.ITEM_DIM in view CA_WM_VM.ITEM_CUR for access

     on a single partition, and we lock WM_UTIL.TMP for access.

  2) Next, we execute the following steps in parallel.

       1) We do an all-AMPs RETRIEVE step from

          CA_WM_REPL_TABLES.GRS_FULFILLMENT_PARM in view

          CA_WM_REPL_VM.GRS_FULFILLMENT_PARM by way of an all-rows scan

          with no residual conditions into Spool 4 (all_amps)

          (compressed columns allowed), which is redistributed by the

          rowkey of (CA_WM_REPL_TABLES.GRS_FULFILLMENT_PARM.ITEM_NBR, 4,

          4) to all AMPs.  Then we do a SORT to partition Spool 4 by

          rowkey.  The size of Spool 4 is estimated with high

          confidence to be 20,430,590 rows (1,021,529,500 bytes).  The

          estimated time for this step is 1.01 seconds.

       2) We do an all-AMPs RETRIEVE step from

          CA_WM_REPL_TABLES.CHANNEL_MTHD_TXT in view

          CA_WM_REPL_VM.CHANNEL_MTHD_TXT by way of an all-rows scan

          with no residual conditions into Spool 5 (all_amps)

          (compressed columns allowed), which is duplicated on all AMPs.

          The size of Spool 5 is estimated with high confidence to be

          7,560 rows (740,880 bytes).  The estimated time for this step

          is 0.04 seconds.

  3) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an

     all-rows scan, which is joined to WM_UTIL.TMP by way of an

     all-rows scan with a condition of ("(NOT (WM_UTIL.TMP.DEPT IS NULL

     )) AND ((WM_UTIL.TMP.DEPT_SUBCATG_NBR = -1) AND

     (WM_UTIL.TMP.FINELINE_NBR = -1 ))").  Spool 5 and WM_UTIL.TMP are

     joined using a dynamic hash join, with a join condition of (

     "(CHANNEL_MTHD_DESC )= (TRANSLATE((WM_UTIL.TMP.CHANNEL_MTHD_DESC

     )USING LATIN_TO_UNICODE))").  The result goes into Spool 6

     (all_amps) (compressed columns allowed), which is redistributed by

     the hash code of (WM_UTIL.TMP.DEPT_CATEGORY_NBR, WM_UTIL.TMP.DEPT,

     CA_WM_REPL_TABLES.CHANNEL_MTHD_TXT.CHANNEL_MTHD_CD) to all AMPs.

     The size of Spool 6 is estimated with low confidence to be 24,374

     rows (2,559,270 bytes).  The estimated time for this step is 0.07

     seconds.

  4) We do an all-AMPs JOIN step from a single partition of

     WW_CORE_DIM_TABLES.ITEM_DIM in view CA_WM_VM.ITEM_CUR with a

     condition of ("WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.DW_COUNTRY_DIV_PART_NBR = 4") with a residual

     condition of ("(NOT (WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.VENDOR_DEPT_NBR IS NULL )) AND ((NOT

     (WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.DEPT_CATEGORY_NBR IS NULL )) AND

     ((WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.DW_COUNTRY_DIV_PART_NBR = 4) AND

     ((WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.SNAPSHOT_END_DATE = DATE '2099-12-31') AND

     ((WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.DW_CURR_PARTITION_NBR = 1) AND

     ((WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.BASE_DIV_NBR = 1) AND

     ((WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.CURRENT_IND = 'Y') AND

     (WW_CORE_DIM_TABLES.ITEM_DIM in view

     CA_WM_VM.ITEM_CUR.COUNTRY_CODE = 'CA')))))))"), which is joined to

     Spool 4 (Last Use) by way of a RowHash match scan.

     WW_CORE_DIM_TABLES.ITEM_DIM and Spool 4 are joined using a

     rowkey-based merge join, with a join condition of (

     "WW_CORE_DIM_TABLES.ITEM_DIM.MDS_FAM_ID = ITEM_NBR").  The result

     goes into Spool 7 (all_amps) (compressed columns allowed), which

     is redistributed by the hash code of (

     CA_WM_REPL_TABLES.GRS_FULFILLMENT_PARM.CHANNEL_MTHD_CD,

     WW_CORE_DIM_TABLES.ITEM_DIM.VENDOR_DEPT_NBR,

     WW_CORE_DIM_TABLES.ITEM_DIM.DEPT_CATEGORY_NBR) to all AMPs.  The

     size of Spool 7 is estimated with low confidence to be 6,545,313

     rows (346,901,589 bytes).  The estimated time for this step is

     1.56 seconds.

  5) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of an

     all-rows scan, which is joined to Spool 7 (Last Use) by way of an

     all-rows scan.  Spool 6 and Spool 7 are joined using a single

     partition hash join, with a join condition of (

     "(DEPT_CATEGORY_NBR = DEPT_CATEGORY_NBR) AND ((VENDOR_DEPT_NBR =

     DEPT) AND (CHANNEL_MTHD_CD = CHANNEL_MTHD_CD ))").  The result

     goes into Spool 3 (all_amps) (compressed columns allowed), which

     is built locally on the AMPs.  The size of Spool 3 is estimated

     with low confidence to be 50,518 rows (6,819,930 bytes).  The

     estimated time for this step is 0.10 seconds.

  6) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by

     way of an all-rows scan , grouping by field1 (

     WW_CORE_DIM_TABLES.ITEM_DIM.VENDOR_DEPT_NBR

     ,WW_CORE_DIM_TABLES.ITEM_DIM.DEPT_CATEGORY_NBR

     ,CA_WM_REPL_TABLES.CHANNEL_MTHD_TXT.CHANNEL_MTHD_DESC

     ,SUBSTR(CA_WM_REPL_TABLES.GRS_FULFILLMENT_PARM.DEMAND_SEGMENTATION_NM

     ,4 ,2)).  Aggregate Intermediate Results are computed globally,

     then placed in Spool 8.  The size of Spool 8 is estimated with no

     confidence to be 37,889 rows (13,943,152 bytes).  The estimated

     time for this step is 0.07 seconds.

  7) We do an all-AMPs RETRIEVE step from Spool 8 (Last Use) by way of

     an all-rows scan into Spool 1 (group_amps), which is built locally

     on the AMPs.  The size of Spool 1 is estimated with no confidence

     to be 37,889 rows (4,660,347 bytes).  The estimated time for this

     step is 0.04 seconds.

  8) Finally, we send out an END TRANSACTION step to all AMPs involved

     in processing the request.

  -> The contents of Spool 1 are sent back to the user as the result of

     statement 1.  The total estimated time is 2.86 seconds.

 

Junior Contributor

Re: My query is taking more than 6 TB of spool to execute. Can you please help me out?

The plan looks OK. Which step is using those 6 TB?

 

Can you share estimated vs. actual CPU/rows/spool from QryLogStepsV?

Enthusiast

Re: My query is taking more than 6 TB of spool to execute. Can you please help me out?

Can you please provide a query, if you have one, to pull the information from QryLogStepsV?

Junior Contributor

Re: My query is taking more than 6 TB of spool to execute. Can you please help me out?

This is a basic query for a single QueryID:

SELECT
   QueryID
  ,StepLev1Num
  ,StepLev2Num
  ,StepName
  ,StepStartTime
  ,StepStopTime
  ,ElapsedTime
  ,EstProcTime
  ,EstCPUCost
  ,CPUTime
  ,IOcount
  ,EstRowCount
  ,RowCount
  ,MaxAmpCPUTime
  ,MinAmpCPUTime
  ,SpoolUsage
  ,PhysIO
FROM dbc.qrylogstepsv
WHERE QueryID = ????
-- AND LogDate = DATE '2017-??-??' -- for history table to match the PPI
ORDER BY StepLev1Num, StepLev2Num

Of course, the query might already have been moved to the DBQL history tables, and then you need to query against those instead.
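
If you don't know the QueryID yet, you can look it up in dbc.QryLogV first, roughly like this (the user name and time window are just placeholders):

SELECT QueryID
      ,StartTime
      ,UserName
      ,QueryText
FROM dbc.QryLogV
WHERE UserName = '????'                             -- placeholder: the submitting user
  AND StartTime >= TIMESTAMP '2017-??-?? 00:00:00'  -- placeholder: narrow the time range
ORDER BY StartTime DESC;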

 

If you don't have access, you can also post the data from Viewpoint's Query Monitor.

 

Enthusiast

Re: My query is taking more than 6 TB of spool to execute. Can you please help me out?

Thank you... I will pull the information and let you know the details.

Enthusiast

Re: My query is taking more than 6 TB of spool to execute. Can you please help me out?

Please find the details from the step table below and let me know how to proceed:

 

 

LogDate, ProcID, CollectTimeStamp, QueryID, ZoneID, StepLev1Num, StepLev2Num, StepName, StepStartTime, StepStopTime, EstProcTime, EstCPUCost, CPUTime, IOcount, EstRowCount, RowCount, RowCount2, RowCount3, NumOfActiveAMPs, MaxAmpCPUTime, MaxCPUAmpNumber, MinAmpCPUTime, MaxAmpIO, MaxIOAmpNumber, MinAmpIO, SpoolUsage, MaxAMPSpool, MaxSpoolAmpNumber, MinAMPSpool, StepWD, LSN, UtilityTableID, RowsWComprColumns, EstIOCost, EstNetCost, EstHRCost, CPUtimeNorm, MaxAmpCPUTimeNorm, MaxCPUAmpNumberNorm, MinAmpCPUTimeNorm, NumCombinedPartitions, NumContexts, NumCPReferences, EstRowCountSkew, EstRowCountSkewMatch, FragmentNum, StepInstance, StepStatus, DispatchSeq, LockDelay, IOKB, StatementNum, TriggerNestLevel, TriggerKind, SSRReceiverCount, VHLogicalIO, VHPhysIO, VHLogicalIOKB, VHPhysIOKB, PhysIO, PhysIOKB, ServerByteCount, Stream_Metrics, DMLLoadID, ExtraField1 through ExtraField48
15-10-201730,69958:21.43069908390174,90,00000-00-00-0070RET   ??????????????0?000?0?151,82,302????????????????????10?????????????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0030JIN   57:52.857:52.80.070.161.7622,636.0024,374.0013,785.00001,0800.011,079040254139077,67,808.0012,65,664.009674,75,136.00456151,82,302?066.757.98075.630.341,0790???00?????3,89,240.0010??????131,436.00?????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0010MLK   57:52.657:52.6000.21001,080.000?1,08001,07400?000?0456151,82,302?00008.890.171,0740???00??????10?????????????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0021RET   57:52.657:52.81.0138.8959.7376,607.00204,14,386.00204,14,386.00001,0800.085300.0286460539116,13,952.0012,73,856.009674,75,136.00456151,82,302?0791.74183.902,566.433.275301.03???00?????49,15,732.0010??????10,118.0025,82,656.00?????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0040JIN   57:52.857:53.11.5619.7631.391,25,466.0065,40,680.00204,12,716.00001,0800.09540.01170570739057,28,000.00220,81,536.00540456151,82,302?01,391.45147.2501,349.503.95540.34???00?????14,16,048.0010??????46,967.0062,48,632.00?????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0060SUM   ??????????????0?000?0?151,82,302????????????????????10?????????????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0050JIN   57:53.1?????????????0?000?0?151,82,302????????????????????10?????????????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0022RET   57:52.657:52.60.040007,560.007,560.00001,0800.011,0220281,0041688,47,360.008,192.001,0798,192.00456151,82,302?039.940.02000.341,0220???00?????6,91,504.0010?????????????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0032,7670RESP  57:55.257:55.2??????????????????????????????????????????????????????????????????????????????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0090ExpHR ??????????????0?000?0?151,82,302????????????????????10?????????????????????????????0?????????????????????????????
15-10-201730,69958:21.43069908390174,90,00000-00-00-0080ExpRL ??????????????0?000?0?151,82,302????????????????????10?????????????????????????????0?????????????????????????????