SQL Query Tuning


Hi,

I am new to Teradata and I want to start with SQL query tuning.

Can anyone tell me how I should proceed?

Thanks.
Junior Supporter

Re: SQL Query Tuning

>>"Can anyone tell me how should I proceed further ?"

1.- EXPLAIN (learn how to use it; see the sketch below)
2.- "SQL Request and Transaction Processing" (read it from top to bottom)
3.- Test, Test, Test
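
For example (a minimal sketch; the table and predicate are placeholders), prefixing any request with EXPLAIN returns the optimizer's plan instead of running the query:

EXPLAIN
SELECT *
FROM MyDB.MyTable   -- hypothetical table
WHERE SomeCol = 10;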

HTH.

Cheers.

Carlos.
Enthusiast

Re: SQL Query Tuning

Dear all,

Please look at the issue below. This query was running for a very long time, nearly 545 hours. I did proper tuning like collecting statistics and creating secondary indexes on the required columns; now it takes 2 min 43 sec. This query should run within 2-3 seconds (NOT minutes), as it does in Oracle (2-3 sec), because this is the login query for the RPD.
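
For reference, the statistics were collected with statements along these lines (the column choices match the HELP STATS output posted later in the thread):

COLLECT STATISTICS ON W_PARTY_PER_D COLUMN (VIS_PR_BU_ID);
COLLECT STATISTICS ON W_PARTY_PER_D COLUMN (INTEGRATION_ID);
COLLECT STATISTICS ON WC_USER_D COLUMN (LOGIN);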

1) The VIS_PR_BU_ID column has 20,677,318 NULL values, but we are using it in a join condition. Anyhow, we eliminated them by using IS NOT NULL.

2) Created a join index for WHERE A.INTEGRATION_ID = B.PAR_INTEGRATION_ID.

Still it takes 2 to 3 minutes. Please tell me if I'm doing something wrong.

SELECT /*+index(a W_PARTY_PER_D_U2) index(c S SHAI2)*/
DISTINCT C.NAME, CASE WHEN C.NAME = 'TMCV' THEN 'COMMERCIAL VEHICLES'
WHEN C.NAME = 'TMPC' THEN 'Passenger Vehicles' ELSE C.NAME END
FROM W_PARTY_PER_D A, WC_USER_D B, W_PARTY_ORG_D C
WHERE A.INTEGRATION_ID = B.PAR_INTEGRATION_ID
AND A.VIS_PR_BU_ID = C.INTEGRATION_ID
AND B.LOGIN = 'MP_3005700'

SELECT COUNT(1) FROM W_PARTY_PER_D;  --39,370,920
SELECT COUNT(1) FROM WC_USER_D;      --174,648
SELECT COUNT(1) FROM W_PARTY_ORG_D;  --5,620,010

The total estimated time is 3 minutes and 13 seconds.

Regards,

Mahesh

Senior Apprentice

Re: SQL Query Tuning

Hi Mahesh,

Could you show the exact DDL (or at least the index definitions), the EXPLAIN, and the statistics?

And some more info about usage: is the value passed to B.LOGIN a parameter?
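
For example, for one of the tables:

SHOW TABLE W_PARTY_PER_D;       -- DDL, including index definitions
HELP STATISTICS W_PARTY_PER_D;  -- collected statistics
-- plus the query itself prefixed with EXPLAIN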

Dieter

Enthusiast

Re: SQL Query Tuning

Dear Dieter,

Please find the details below. Kindly let me know if you require any further info.

--SHOW TABLE W_PARTY_PER_D
UNIQUE PRIMARY INDEX ( ROW_WID )
UNIQUE INDEX W_PARTY_PER_D_U1 ( DATASOURCE_NUM_ID, INTEGRATION_ID, CONTACT_TYPE_I );

--SHOW TABLE WC_USER_D
UNIQUE PRIMARY INDEX ( ROW_WID )
UNIQUE INDEX WC_USER_D_U1 ( INTEGRATION_ID );
LOGIN VARCHAR(50)

--SHOW TABLE W_PARTY_ORG_D
UNIQUE PRIMARY INDEX ( ROW_WID )
UNIQUE INDEX W_PARTY_ORG_D_U1 ( DATASOURCE_NUM_ID, INTEGRATION_ID );

--HELP STATS W_PARTY_PER_D
Date      Time      Unique Values  Column Names
12/10/29  12:07:12          2,357  VIS_PR_BU_ID
12/10/29  12:11:07     39,370,920  INTEGRATION_ID
12/10/29  12:16:47      8,298,093  X_CONCAT_FULL_NAME
12/10/29  12:18:54     39,370,920  VIS_PR_BU_ID,INTEGRATION_ID

--HELP STATS WC_USER_D
12/10/29  11:06:33        174,648  PAR_INTEGRATION_ID
12/10/29  11:06:30        174,636  LOGIN
12/10/29  10:55:19        174,648  PAR_INTEGRATION_ID,LOGIN

--HELP STATS W_PARTY_ORG_D
12/10/29  10:55:18      5,620,010  INTEGRATION_ID
12/10/29  11:07:08      4,003,285  NAME

--EXPLAIN (the same SELECT as above, prefixed with EXPLAIN)

1) First, we lock a distinct TD_DEV."pseudo table" for read on a RowHash to prevent global deadlock for TD_DEV.A.
2) Next, we lock a distinct TD_DEV."pseudo table" for read on a RowHash to prevent global deadlock for TD_DEV.C.
3) We lock a distinct TD_DEV."pseudo table" for read on a RowHash to prevent global deadlock for TD_DEV.B.
4) We lock TD_DEV.A for read, we lock TD_DEV.C for read, and we lock TD_DEV.B for read.
5) We do an all-AMPs RETRIEVE step from TD_DEV.B by way of an all-rows scan with a condition of ("TD_DEV.B.LOGIN = 'MP_3005700'") into Spool 4 (all_amps), which is duplicated on all AMPs. The size of Spool 4 is estimated with low confidence to be 24 rows (480 bytes). The estimated time for this step is 0.07 seconds.
6) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of an all-rows scan, which is joined to TD_DEV.A by way of an all-rows scan with a condition of ("NOT (TD_DEV.A.VIS_PR_BU_ID IS NULL)"). Spool 4 and TD_DEV.A are joined using a single partition hash join, with a join condition of ("TD_DEV.A.INTEGRATION_ID = PAR_INTEGRATION_ID"). The input table TD_DEV.A will not be cached in memory. The result goes into Spool 5 (all_amps), which is duplicated on all AMPs. The size of Spool 5 is estimated with low confidence to be 24 rows (480 bytes). The estimated time for this step is 5 minutes and 23 seconds.
7) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an all-rows scan, which is joined to TD_DEV.C by way of an all-rows scan with no residual conditions. Spool 5 and TD_DEV.C are joined using a single partition hash join, with a join condition of ("VIS_PR_BU_ID = TD_DEV.C.INTEGRATION_ID"). The input table TD_DEV.C will not be cached in memory. The result goes into Spool 3 (all_amps), which is built locally on the AMPs. The size of Spool 3 is estimated with low confidence to be 1 row (50 bytes). The estimated time for this step is 1 minute and 16 seconds.
8) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by way of an all-rows scan, grouping by field1 (TD_DEV.C.NAME, (CASE WHEN (TD_DEV.C.NAME = 'TMCV') THEN ('COMMERCIAL VEHICLES') WHEN (TD_DEV.C.NAME = 'TMPC') THEN ('Passenger Vehicles') ELSE (TD_DEV.C.NAME) END)). Aggregate Intermediate Results are computed globally, then placed in Spool 1. The size of Spool 1 is estimated with low confidence to be 1 row (287 bytes). The estimated time for this step is 0.04 seconds.
9) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 6 minutes and 39 seconds.

Senior Apprentice

Re: SQL Query Tuning

Hi Mahesh,

>>"I did proper tuning like collecting statistics and creating secondary indexes on the required columns"

Some remarks:

- None of the existing indexes will be useful for this query.

- Looks like a data model where the logical PK/UNIQUE constraints are implemented as-is. This will result in perfectly distributed tables, but probably bad performance.

- The ROW_WID columns look like a leftover from the Oracle habit of creating a numeric PK on every table in addition to the logical PK.

- A PI should be chosen mainly based on JOINs

An index on B.LOGIN would help a bit, but the main problem is the PIs. Changing the PI of all tables to a PI (probably a UPI) on INTEGRATION_ID will probably help other queries, too.
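
A sketch of both options (WC_USER_D_new is a placeholder name; the PI of a populated table can't simply be altered, so changing it means rebuilding the table):

CREATE INDEX (LOGIN) ON WC_USER_D;  -- secondary index on the filter column

CREATE TABLE WC_USER_D_new AS (SELECT * FROM WC_USER_D) WITH DATA
UNIQUE PRIMARY INDEX (INTEGRATION_ID);  -- then drop the old table and rename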

If you can't change the PIs, this join index should fully cover your query and run sub-second:

CREATE JOIN INDEX xxxx AS
SELECT
(B.LOGIN), (C.NAME, A.VIS_PR_BU_ID)
FROM W_PARTY_PER_D A,WC_USER_D B, W_PARTY_ORG_D C
WHERE A.INTEGRATION_ID = B.PAR_INTEGRATION_ID
AND A.VIS_PR_BU_ID = C.INTEGRATION_ID
PRIMARY INDEX (LOGIN);
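
Once created (and with statistics collected on it as well), an EXPLAIN of the original SELECT should reference the join index instead of the three base tables; if the plan still touches the base tables, the index does not cover the query.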

Dieter

Enthusiast

Re: SQL Query Tuning

Dear Dieter,

Thank you so much, simply superb. Now it is running within 2 seconds.

Anyhow, we are using the TPump utility for the daily ETL loads on these tables (W_PARTY_PER_D, WC_USER_D, W_PARTY_ORG_D).

ETL loading utilities:

W_PARTY_ORG_D ---> TPump (upsert)
WC_USER_D ---> TPump (delete/upsert)
W_PARTY_PER_D ---> relational connection

I hope this will not cause any issue, because TPump doesn't have a limitation on tables with join indexes.

Enthusiast

Re: SQL Query Tuning

Hi Dieter,

I have a query as follows:

SELECT c.unit_num
      ,c.trans_end_dt
      ,c.Coupon_cd AS Coupon_cd
      ,COUNT(DISTINCT Loyalty_Member_Id) AS "Customer count"
      ,SUM(Net_Sales_Amt) AS Sales
      ,COUNT(Wgt_Item_Cnt) AS Units
      ,COUNT(DISTINCT c.rtl_trans_id) AS Transactions
      ,SUM(Discount_Amt) AS Disc_Amt
      ,SUM(coupon_qty) AS "Coupon quantity"
FROM
    (SELECT a.rtl_trans_id AS rtl_trans_id
           ,a.Trans_End_Dt AS Trans_End_Dt
           ,a.Mfr_Cpn_Scan_Cd AS Coupon_cd
           ,b.unit_num
           ,SUM(a.Mfr_Cpn_Amt) AS Discount_Amt
           ,COUNT(DISTINCT rtl_trans_id) AS coupon_qty
     FROM Merch_AV.RtlSaleMfrCpn a
     JOIN merch_av.FlatOrgHierarchyAsIs b
       ON a.org_id = b.store_id
      AND Mfr_Cpn_Scan_Cd = 110100360000
      AND trans_end_dt BETWEEN 1120717 AND 1120811
      AND trans_type_cd = 0
     WHERE org_level_03_num = 84 AND org_hierarchy_id = 203  /* farm fresh */
     GROUP BY 1,2,3,4) c
INNER JOIN merch_av.Rtlsale d
   ON c.Rtl_Trans_Id = d.Rtl_Trans_Id
  AND c.Trans_End_Dt = d.Trans_End_Dt
  AND trans_type_cd = 0
GROUP BY 1,2,3
ORDER BY 1;

and the explain plan for this is as follows:

/---------------------------
Explain (the same SELECT as above)

1) First, we lock SVUEDW_T.RtlSaleMfrCpn for access, we lock SVUEDW_T.RtlSale in view merch_av.Rtlsale for access, and we lock SVUEDW_A.FlatOrgHierarchyAsIs for access.
2) Next, we do an all-AMPs JOIN step from SVUEDW_A.FlatOrgHierarchyAsIs by way of a RowHash match scan with a condition of ("(SVUEDW_A.FlatOrgHierarchyAsIs.Org_Level_03_Num = 84) AND ((SVUEDW_A.FlatOrgHierarchyAsIs.Org_Hierarchy_Id = 203) AND ((NOT (SVUEDW_A.FlatOrgHierarchyAsIs.Org_Level_03_Desc LIKE '%SAVE-A-LOT%')) AND (NOT (SVUEDW_A.FlatOrgHierarchyAsIs.Org_Level_03_Desc LIKE '%SAVE A LOT%'))))"), which is joined to SVUEDW_A.FlatOrgHierarchyAsIs by way of a RowHash match scan with a condition of ("(SVUEDW_A.FlatOrgHierarchyAsIs.Org_Hierarchy_Id = 203) AND ((NOT (SVUEDW_A.FlatOrgHierarchyAsIs.Org_Level_03_Desc LIKE '%SAVE-A-LOT%')) AND (NOT (SVUEDW_A.FlatOrgHierarchyAsIs.Org_Level_03_Desc LIKE '%SAVE A LOT%')))") locking SVUEDW_A.FlatOrgHierarchyAsIs for access. SVUEDW_A.FlatOrgHierarchyAsIs and SVUEDW_A.FlatOrgHierarchyAsIs are joined using a merge join, with a join condition of ("(SVUEDW_A.FlatOrgHierarchyAsIs.Org_Hierarchy_Id = SVUEDW_A.FlatOrgHierarchyAsIs.Org_Hierarchy_Id) AND (SVUEDW_A.FlatOrgHierarchyAsIs.Store_Id = SVUEDW_A.FlatOrgHierarchyAsIs.Store_Id)"). The result goes into Spool 4 (all_amps) (compressed columns allowed), which is duplicated on all AMPs. The size of Spool 4 is estimated with low confidence to be 34,125 rows (989,625 bytes). The estimated time for this step is 0.02 seconds.
3) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of an all-rows scan, which is joined to 26 partitions of SVUEDW_T.RtlSaleMfrCpn with a condition of ("(SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt >= DATE '2012-07-17') AND ((SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt <= DATE '2012-08-11') AND ((SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd = 110100360000.) AND (SVUEDW_T.RtlSaleMfrCpn.Trans_Type_Cd = 0)))"). Spool 4 and SVUEDW_T.RtlSaleMfrCpn are joined using a dynamic hash join, with a join condition of ("(SVUEDW_T.RtlSaleMfrCpn.Org_Id = Store_Id) AND (SVUEDW_T.RtlSaleMfrCpn.Org_Id = Store_Id)"). The input table SVUEDW_T.RtlSaleMfrCpn will not be cached in memory. The result goes into Spool 3 (all_amps) (compressed columns allowed), which is built locally on the AMPs. The size of Spool 3 is estimated with low confidence to be 16,240 rows (893,200 bytes). The estimated time for this step is 0.09 seconds.
4) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by way of an all-rows scan, grouping by field1 (SVUEDW_T.RtlSaleMfrCpn.Rtl_Trans_Id, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd, SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num, SVUEDW_T.RtlSaleMfrCpn.Rtl_Trans_Id). Aggregate Intermediate Results are computed locally, then placed in Spool 6. The size of Spool 6 is estimated with low confidence to be 16,240 rows (1,770,160 bytes). The estimated time for this step is 0.01 seconds.
5) We do an all-AMPs SUM step to aggregate from Spool 6 (Last Use) by way of an all-rows scan, grouping by field1 (SVUEDW_T.RtlSaleMfrCpn.Rtl_Trans_Id, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd, SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num). Aggregate Intermediate Results are computed globally, then placed in Spool 8. The size of Spool 8 is estimated with low confidence to be 16,240 rows (1,705,200 bytes). The estimated time for this step is 0.02 seconds.
6) We do an all-AMPs RETRIEVE step from Spool 8 (Last Use) by way of an all-rows scan into Spool 1 (used to materialize view, derived table or table function c) (all_amps) (compressed columns allowed), which is built locally on the AMPs. The size of Spool 1 is estimated with low confidence to be 16,240 rows (1,250,480 bytes). The estimated time for this step is 0.01 seconds.
7) We do an all-AMPs RETRIEVE step from Spool 1 (Last Use) by way of an all-rows scan with a condition of ("NOT (c.RTL_TRANS_ID IS NULL)") into Spool 13 (all_amps) (compressed columns allowed), which is redistributed by the rowkey of (SVUEDW_T.RtlSaleMfrCpn.Rtl_Trans_Id, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt) to all AMPs. Then we do a SORT to partition Spool 13 by rowkey. The size of Spool 13 is estimated with low confidence to be 16,240 rows (1,120,560 bytes). The estimated time for this step is 0.01 seconds.
8) We do an all-AMPs JOIN step from SVUEDW_T.RtlSale in view merch_av.Rtlsale by way of a RowHash match scan with a condition of ("SVUEDW_T.RtlSale in view merch_av.Rtlsale.Trans_Type_Cd = 0"), which is joined to Spool 13 (Last Use) by way of a RowHash match scan. SVUEDW_T.RtlSale and Spool 13 are joined using a rowkey-based merge join, with a join condition of ("(TRANS_END_DT = SVUEDW_T.RtlSale.Trans_End_Dt) AND (RTL_TRANS_ID = SVUEDW_T.RtlSale.Rtl_Trans_Id)"). The input table SVUEDW_T.RtlSale will not be cached in memory, but it is eligible for synchronized scanning. The result goes into Spool 14 (all_amps) (compressed columns allowed), which is redistributed by the hash code of (SVUEDW_T.RtlSale.Org_Id, 203) to all AMPs. Then we do a SORT to order Spool 14 by row hash. The size of Spool 14 is estimated with low confidence to be 16,081 rows (1,350,804 bytes). The estimated time for this step is 0.20 seconds.
9) We do an all-AMPs JOIN step from Spool 14 (Last Use) by way of a RowHash match scan, which is joined to SVUEDW_A.FlatOrgHierarchyAsIs in view merch_av.Rtlsale by way of a RowHash match scan with a condition of ("(SVUEDW_A.FlatOrgHierarchyAsIs in view merch_av.Rtlsale.Org_Hierarchy_Id = 203) AND ((NOT (SVUEDW_A.FlatOrgHierarchyAsIs in view merch_av.Rtlsale.Org_Level_03_Desc LIKE '%SAVE-A-LOT%')) AND (NOT (SVUEDW_A.FlatOrgHierarchyAsIs in view merch_av.Rtlsale.Org_Level_03_Desc LIKE '%SAVE A LOT%')))") locking SVUEDW_A.FlatOrgHierarchyAsIs for access. Spool 14 and SVUEDW_A.FlatOrgHierarchyAsIs are joined using a merge join, with a join condition of ("Org_Id = SVUEDW_A.FlatOrgHierarchyAsIs.Store_Id"). The result goes into Spool 12 (all_amps) (compressed columns allowed), which is built locally on the AMPs. The size of Spool 12 is estimated with low confidence to be 15,370 rows (1,244,970 bytes). The estimated time for this step is 0.01 seconds.
10) We do an all-AMPs SUM step to aggregate from Spool 12 by way of an all-rows scan, grouping by field1 (SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd, SVUEDW_T.RtlSaleMfrCpn.Rtl_Trans_Id). Aggregate Intermediate Results are computed globally, then placed in Spool 19. The size of Spool 19 is estimated with low confidence to be 15,370 rows (1,767,550 bytes). The estimated time for this step is 0.02 seconds.
11) We do an all-AMPs SUM step to aggregate from Spool 19 (Last Use) by way of an all-rows scan, grouping by field1 (SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd). Aggregate Intermediate Results are computed globally, then placed in Spool 21. The size of Spool 21 is estimated with low confidence to be 2,157 rows (243,741 bytes). The estimated time for this step is 0.02 seconds.
12) We execute the following steps in parallel.
  1) We do an all-AMPs RETRIEVE step from Spool 21 (Last Use) by way of an all-rows scan into Spool 16 (all_amps) (compressed columns allowed), which is built locally on the AMPs. The size of Spool 16 is estimated with low confidence to be 2,157 rows (183,345 bytes). The estimated time for this step is 0.01 seconds.
  2) We do an all-AMPs SUM step to aggregate from Spool 12 (Last Use) by way of an all-rows scan, grouping by field1 (SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd, SVUEDW_T.RtlSale.Loyalty_Member_Id). Aggregate Intermediate Results are computed globally, then placed in Spool 25. The size of Spool 25 is estimated with low confidence to be 15,370 rows (1,060,530 bytes). The estimated time for this step is 0.02 seconds.
13) We do an all-AMPs SUM step to aggregate from Spool 25 (Last Use) by way of an all-rows scan, grouping by field1 (SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd). Aggregate Intermediate Results are computed globally, then placed in Spool 27. The size of Spool 27 is estimated with low confidence to be 2,157 rows (131,577 bytes). The estimated time for this step is 0.02 seconds.
14) We execute the following steps in parallel.
  1) We do an all-AMPs RETRIEVE step from Spool 27 (Last Use) by way of an all-rows scan into Spool 15 (all_amps) (compressed columns allowed), which is built locally on the AMPs. The size of Spool 15 is estimated with low confidence to be 2,157 rows (97,065 bytes). The estimated time for this step is 0.01 seconds.
  2) We do an all-AMPs RETRIEVE step from Spool 16 (Last Use) by way of an all-rows scan into Spool 29 (all_amps) (compressed columns allowed), which is redistributed by the hash code of (SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd) to all AMPs. Then we do a SORT to order Spool 29 by row hash. The size of Spool 29 is estimated with low confidence to be 2,157 rows (166,089 bytes). The estimated time for this step is 0.01 seconds.
15) We do an all-AMPs RETRIEVE step from Spool 15 (Last Use) by way of an all-rows scan into Spool 30 (all_amps) (compressed columns allowed), which is redistributed by the hash code of (SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num, SVUEDW_T.RtlSaleMfrCpn.Trans_End_Dt, SVUEDW_T.RtlSaleMfrCpn.Mfr_Cpn_Scan_Cd) to all AMPs. Then we do a SORT to order Spool 30 by row hash. The size of Spool 30 is estimated with low confidence to be 2,157 rows (79,809 bytes). The estimated time for this step is 0.01 seconds.
16) We do an all-AMPs JOIN step from Spool 29 (Last Use) by way of a RowHash match scan, which is joined to Spool 30 (Last Use) by way of a RowHash match scan. Spool 29 and Spool 30 are joined using a merge join, with a join condition of ("(COUPON_CD = COUPON_CD) AND ((TRANS_END_DT = TRANS_END_DT) AND (UNIT_NUM = UNIT_NUM))"). The result goes into Spool 10 (group_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 10 by the sort key in spool field1 (SVUEDW_A.FlatOrgHierarchyAsIs.Unit_Num). The size of Spool 10 is estimated with low confidence to be 1,967 rows (175,063 bytes). The estimated time for this step is 0.01 seconds.
17) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 10 are sent back to the user as the result of statement 1. The total estimated time is 0.47 seconds.
-----------/

In this plan, step 2 duplicates the spool on all AMPs. I want to know why this happens, and how can I turn these low-confidence estimates into high confidence?

Thanks in advance
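
A general note on both questions (only the EXPLAIN is visible here, so this is an observation, not a diagnosis): duplication on all AMPs in step 2 is the optimizer's normal strategy for joining a small spool to a much larger table, since copying the small side to every AMP lets the join run locally. Low confidence usually indicates missing statistics on the join and filter columns; collecting them is the standard first step, along these lines:

COLLECT STATISTICS ON SVUEDW_T.RtlSaleMfrCpn COLUMN (Org_Id);
COLLECT STATISTICS ON SVUEDW_T.RtlSaleMfrCpn COLUMN (Trans_End_Dt);
COLLECT STATISTICS ON SVUEDW_A.FlatOrgHierarchyAsIs COLUMN (Store_Id);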

Enthusiast

Re: SQL Query Tuning

Dear All,

If we fire the queries below in SQL Assistant 13.10 / BTEQ, they return the results shown:

SELECT 10/2;  --5
SELECT 2/10;  --0 (I hoped this would give 0.2 instead of 0)

If we do casting, it gives the proper value:

SELECT CAST(2 AS DECIMAL(10,2)) / CAST(10 AS DECIMAL(10,2));  --0.2

Please let me know the difference? When I was working with TD 12.00 I hadn't faced this kind of issue.

Enthusiast

Re: SQL Query Tuning

TD 12 will give the same results.

When you say

SELECT 2/10;

Teradata interprets the data types and provides the answer in the same data type, so the integer division truncates to 0.

If you write

SELECT 2.0/10; or SELECT 2/10.0;

then it goes to a DECIMAL/FLOAT data type and the answer will be 0.2.
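
Putting the rules side by side (the results are the ones quoted above):

SELECT 10/2;    -- 5   : INTEGER / INTEGER stays INTEGER
SELECT 2/10;    -- 0   : still INTEGER, the fraction is truncated
SELECT 2.0/10;  -- 0.2 : one DECIMAL operand makes the result DECIMAL
SELECT 2/10.0;  -- 0.2 : same rule
SELECT CAST(2 AS DECIMAL(10,2)) / CAST(10 AS DECIMAL(10,2));  -- 0.2 : explicit CAST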