Alternatives to increase the performance of this query?

Database
Enthusiast

Alternatives to increase the performance of this query?

Hello,

I have several questions about the slow execution of this query.

How can I increase the performance of my query? Right now I have a query that joins multiple tables in the database. Everything possible has been done to improve performance, such as using primary indexes for the searches and collecting statistics on the tables. I have heard that a join index can improve performance a lot.

This is the query I am trying to run. It does run; it just takes far too long, about 30 minutes.

INSERT INTO EDW_ODS.AGENDA_HIST_CLINICA_PROGRAMA
(
Programa_PP_Id
,Cliente_Paciente_Id
,CLIENTE_PACIENTE_OP
,CODIGOTIPOIDENTPAC
,Cliente_Ips_Id
,CLIENTE_IPS_OP
,CODIGOTIPOIDENTIPS
,Plan_Agrupador_Cita_Id
,Usuario_Red_Txt
,Numero_Identif_Nov_Registro
,EDW_Audit_Trail_User_Name
,EDW_Audit_Trail_Dttm
,EDW_Audit_Trail_Process_Cd
)

SELECT
CAST(b3.PROGRAMA_PP_ID AS INTEGER) Programa_PP_Id
,CAST(c3.Cliente_Id AS INTEGER) Cliente_Paciente_Id
,CLIENTE_PACIENTE_OP
,CODIGOTIPOIDENTPAC
,CAST(d1.Cliente_Id AS INTEGER) Cliente_Ips_Id
,CLIENTE_IPS_OP
,'CSIPS'
,CAST(g9.PLAN_AGRUPADOR_CITA_ID AS INTEGER) Plan_Agrupador_Cita_Id
,USUARIO_RED_TXT
,NUMERO_IDENTIF_NOV_REGISTRO
,User
,CURRENT_TIMESTAMP(0)
,1600020
FROM BD_STAGING.PE719_AGHPRO TEMP

LEFT OUTER JOIN MDB_CONSULTAS.VC_PROGRAMA_PP b3
ON (b3.CODIGO_PROGRAMA_PP_OP = TEMP.PROGRAMA_PP_OP )

LEFT OUTER JOIN MDB_CONSULTAS.VC_PLAN_AGRUPADOR_CITA g9
ON (g9.CODIGO_PLAN_AGRUPADOR_CITA_OP = TEMP.PLAN_AGRUPADOR_CITA_OP )

LEFT OUTER JOIN EDW_VIEW.V1_IDENTIFICATION_TYPE h10
ON (h10.EXT_IDENTIFICATION_TYPE_OP = TEMP.CODIGOTIPOIDENTPAC )

/* This LEFT JOIN is what makes the query take so long to run:
the table EDW_METADATA.PE719_AGHPRO
has about 3 million records,
and I think matching against this column is what takes so long */
LEFT OUTER JOIN EDW_METADATA.PE719_AGHPRO c3
ON (c3.NUMERO_IDENTIFICACION = TEMP.CLIENTE_PACIENTE_OP
AND h10.Ext_Identification_Type_Cd = c3.Tipo_Identificacion_Cd )

LEFT OUTER JOIN EDW_VIEW.V1_IDENTIFICATION_TYPE i1
ON (i1.EXT_IDENTIFICATION_TYPE_OP = 'CSIPS' )

/* The same issue applies as in the LEFT JOIN commented above. */
LEFT OUTER JOIN EDW_METADATA.PE719_AGHPRO d1
ON (d1.NUMERO_IDENTIFICACION = TEMP.CLIENTE_IPS_OP
AND i1.Ext_Identification_Type_Cd = d1.Tipo_Identificacion_Cd )

WHERE b3.PROGRAMA_PP_ID is not null
AND c3.Cliente_Id is not null
AND d1.Cliente_Id is not null
AND g9.PLAN_AGRUPADOR_CITA_ID is not null;

Several things were analyzed in the previous query. If you look at the /* */ comments, those two LEFT JOINs are what make the INSERT-SELECT slow during execution. As explained in the comments, the tables they join against have more than 3 million records.

As I said above, the table was created with a (non-unique) primary index, and statistics have been collected.

I really do not know what else can be done to solve this problem. I thought a join index might solve this slowness, because the number of records that should actually be inserted is only around 5,000, yet the JOIN takes too long to bring them back.

If the solution is a JOIN INDEX, could you show me how to create it, or tell me what other procedure should be done to improve performance?
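For example, I imagine the join index would look something like this (just a sketch based on what I have read; the column choices are my own guesses, not tested):

```sql
-- Hypothetical single-table join index on the large lookup table,
-- pre-hashed on the column used in the slow join conditions.
CREATE JOIN INDEX EDW_METADATA.JI_PE719_AGHPRO AS
SELECT Numero_Identificacion
     , Tipo_Identificacion_Cd
     , Cliente_Id
FROM EDW_METADATA.PE719_AGHPRO
PRIMARY INDEX (Numero_Identificacion);
```

Would something like this help, or is it the wrong approach here?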

Thanks for your help, Greetings.

9 REPLIES
Senior Apprentice

Re: Alternatives to increase the performance of this query?

This query shows several flaws; to me it does not look like it has been analyzed yet.

A good data model/physical implementation will be rendered useless by bad queries:

- Those Left Joins with a WHERE ... IS NOT NULL condition are actually Inner Joins; Explain should show that

- The slow joins you identified are caused by a strange join condition: h10 is joined to both temp and c3 in the same join. This is probably a logical error, because otherwise it would indicate a really strange data model.

This should return the same result:

LEFT OUTER JOIN EDW_METADATA.PE719_AGHPRO c3

ON (c3.NUMERO_IDENTIFICACION = TEMP.CLIENTE_PACIENTE_OP  

AND TEMP.CODIGOTIPOIDENTPAC  = c3.Tipo_Identificacion_Cd )

- The same applies to the join of i1 to d1.

- Another strange join is

LEFT OUTER JOIN  EDW_VIEW.V1_IDENTIFICATION_TYPE i1

ON (i1.EXT_IDENTIFICATION_TYPE_OP = 'CSIPS' )

No additional condition? This will result in a Cross Join.

You should check Explain for unnecessary PRODUCT JOINs on large tables.
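Since the WHERE clause already filters out the NULLs, the same logic can be stated with explicit Inner Joins. A sketch of the reshaped FROM clause, untested and based only on the query as posted:

```sql
FROM BD_STAGING.PE719_AGHPRO TEMP
INNER JOIN MDB_CONSULTAS.VC_PROGRAMA_PP b3
   ON b3.CODIGO_PROGRAMA_PP_OP = TEMP.PROGRAMA_PP_OP
INNER JOIN MDB_CONSULTAS.VC_PLAN_AGRUPADOR_CITA g9
   ON g9.CODIGO_PLAN_AGRUPADOR_CITA_OP = TEMP.PLAN_AGRUPADOR_CITA_OP
INNER JOIN EDW_VIEW.V1_IDENTIFICATION_TYPE h10
   ON h10.EXT_IDENTIFICATION_TYPE_OP = TEMP.CODIGOTIPOIDENTPAC
INNER JOIN EDW_METADATA.PE719_AGHPRO c3
   ON c3.NUMERO_IDENTIFICACION = TEMP.CLIENTE_PACIENTE_OP
  AND c3.Tipo_Identificacion_Cd = h10.Ext_Identification_Type_Cd
INNER JOIN EDW_VIEW.V1_IDENTIFICATION_TYPE i1
   ON i1.EXT_IDENTIFICATION_TYPE_OP = 'CSIPS'
INNER JOIN EDW_METADATA.PE719_AGHPRO d1
   ON d1.NUMERO_IDENTIFICACION = TEMP.CLIENTE_IPS_OP
  AND d1.Tipo_Identificacion_Cd = i1.Ext_Identification_Type_Cd
```

This should produce the same rows as your version, but it states the intent explicitly and makes it easier to read the Explain.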

Dieter

Enthusiast

Re: Alternatives to increase the performance of this query?

Excuse me, maybe I was not very clear about what I wanted to explain and what the query does.

This first LEFT JOIN

What it does is check whether the patient's document type in the temporary table corresponds to the one in the view; the result is then used in the next join.

LEFT OUTER JOIN EDW_VIEW.V1_IDENTIFICATION_TYPE h10

      ON (h10.EXT_IDENTIFICATION_TYPE_OP = TEMP.CODIGOTIPOIDENTPAC)


What this JOIN does is compare whether the customer identification equals the one in the temporary table, and whether the document type coming from the JOIN above equals the document type of the joined table.

In the JOIN above there is no problem; the system performs the comparison in an instant. But in this one, the comparison of c3.NUMERO_IDENTIFICACION with TEMP.CLIENTE_PACIENTE_OP and of h10.Ext_Identification_Type_Cd with c3.Tipo_Identificacion_Cd runs against EDW_METADATA.PE719_AGHPRO, a table of three million records, and touching every record makes it very slow. Is there a way this comparison could be improved to reduce the query time?

LEFT OUTER JOIN EDW_METADATA.PE719_AGHPRO c3

      ON (c3.NUMERO_IDENTIFICACION = TEMP.CLIENTE_PACIENTE_OP

      AND h10.Ext_Identification_Type_Cd = c3.Tipo_Identificacion_Cd)


Thanks again for your comments.

Senior Apprentice

Re: Alternatives to increase the performance of this query?

Could you post the Explain?

As I said, those Outer Joins should be automatically converted to Inner Joins by the optimizer. If you actually need an Outer Join result, your logic is wrong.

Did you check whether my modified join condition returns a different answer set?

Dieter

Enthusiast

Re: Alternatives to increase the performance of this query?

Thanks Dieter, this is the result Explain shows me.

1) First, we lock EDW_ODS.IPS_CARACTERISTICA in view

     MDB_CONSULTAS.VC_PROGRAMA_PP for access, we lock EDW_METADATA.d1

     for access, we lock EDW_DATA.IDENTIFICATION_TYPE in view

     EDW_VIEW.V1_IDENTIFICATION_TYPE for access, and we lock

     BD_STAGING.TEMP for access.

  2) Next, we do an all-AMPs RETRIEVE step from EDW_ODS.CON in view

     MDB_CONSULTAS.VC_PROGRAMA_PP by way of an all-rows scan with a

     condition of ("((EDW_ODS.CON.Grupo_Caracteristica_Id = -2) OR

     ((EDW_ODS.CON.Grupo_Caracteristica_Id = -1) OR

     (EDW_ODS.CON.Grupo_Caracteristica_Id = 25 ))) AND ((NOT

     (EDW_ODS.CON.Caracteristica_Id IS NULL )) AND (NOT

     (EDW_ODS.CON.Codigo_Caracteristica_Op IS NULL )))") locking for

     access into Spool 6 (all_amps) (compressed columns allowed), which

     is redistributed by the hash code of (

     TRANSLATE((EDW_ODS.CON.Codigo_Caracteristica_Op )USING

     LATIN_TO_UNICODE)(VARCHAR(16), CHARACTER SET UNICODE, NOT

     CASESPECIFIC)) to all AMPs.  Then we do a SORT to order Spool 6 by

     row hash.  The size of Spool 6 is estimated with low confidence to

     be 25 rows (625 bytes).  The estimated time for this step is 0.01

     seconds.

  3) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of a

     RowHash match scan, which is joined to BD_STAGING.TEMP by way of a

     RowHash match scan with a condition of ("(NOT

     (BD_STAGING.TEMP.PLAN_AGRUPADOR_CITA_OP IS NULL )) AND ((NOT

     (BD_STAGING.TEMP.CODIGOTIPOIDENTPAC IS NULL )) AND ((NOT

     (BD_STAGING.TEMP.CLIENTE_PACIENTE_OP IS NULL )) AND (NOT

     (BD_STAGING.TEMP.CLIENTE_IPS_OP IS NULL ))))").  Spool 6 and

     BD_STAGING.TEMP are joined using a merge join, with a join

     condition of ("(TRANSLATE((Codigo_Caracteristica_Op )USING

     LATIN_TO_UNICODE))= BD_STAGING.TEMP.PROGRAMA_PP_OP").  The result

     goes into Spool 7 (all_amps) (compressed columns allowed), which

     is duplicated on all AMPs.  The size of Spool 7 is estimated with

     low confidence to be 80 rows (5,040 bytes).  The estimated time

     for this step is 0.01 seconds.

  4) We do an all-AMPs JOIN step from EDW_ODS.CON in view

     MDB_CONSULTAS.VC_PLAN_AGRUPADOR_CITA by way of an all-rows scan

     with a condition of ("((EDW_ODS.CON.Grupo_Caracteristica_Id = -2)

     OR ((EDW_ODS.CON.Grupo_Caracteristica_Id = -1) OR

     (EDW_ODS.CON.Grupo_Caracteristica_Id = 19 ))) AND (NOT

     (EDW_ODS.CON.Caracteristica_Id IS NULL ))"), which is joined to

     Spool 7 (Last Use) by way of an all-rows scan locking EDW_ODS.CON

     for access.  EDW_ODS.CON and Spool 7 are joined using a product

     join, with a join condition of (

     "(TRANSLATE((EDW_ODS.CON.Codigo_Caracteristica_Op )USING

     LATIN_TO_UNICODE))= PLAN_AGRUPADOR_CITA_OP").  The result goes

     into Spool 8 (all_amps) (compressed columns allowed), which is

     duplicated on all AMPs.  The size of Spool 8 is estimated with low

     confidence to be 32 rows (1,856 bytes).  The estimated time for

     this step is 0.02 seconds.

  5) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of an

     all-rows scan, which is joined to EDW_METADATA.d1 by way of an

     all-rows scan with a condition of ("(NOT

     (EDW_METADATA.d1.Cliente_Id IS NULL )) AND (NOT

     (EDW_METADATA.d1.Tipo_Identificacion_Cd IS NULL ))").  Spool 8 and

     EDW_METADATA.d1 are joined using a product join, with a join

     condition of ("(TRANSLATE((EDW_METADATA.d1.Numero_Identificacion

     )USING LATIN_TO_UNICODE))= CLIENTE_IPS_OP").  The result goes into

     Spool 9 (all_amps) (compressed columns allowed), which is

     redistributed by the hash code of (

     EDW_METADATA.d1.Tipo_Identificacion_Cd) to all AMPs.  Then we do a

     SORT to order Spool 9 by row hash.  The size of Spool 9 is

     estimated with low confidence to be 3 rows (198 bytes).  The

     estimated time for this step is 0.67 seconds.

  6) We do an all-AMPs JOIN step from EDW_DATA.IDENTIFICATION_TYPE in

     view EDW_VIEW.V1_IDENTIFICATION_TYPE by way of a RowHash match

     scan with a condition of (

     "EDW_DATA.IDENTIFICATION_TYPE.Ext_Identification_Type_Op = 'CSIPS'"),

     which is joined to Spool 9 (Last Use) by way of a RowHash match

     scan.  EDW_DATA.IDENTIFICATION_TYPE and Spool 9 are joined using a

     merge join, with a join condition of (

     "EDW_DATA.IDENTIFICATION_TYPE.Ext_Identification_Type_Cd =

     Tipo_Identificacion_Cd").  The result goes into Spool 10

     (all_amps) (compressed columns allowed), which is duplicated on

     all AMPs.  The size of Spool 10 is estimated with low confidence

     to be 16 rows (992 bytes).  The estimated time for this step is

     0.01 seconds.

  7) We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of an

     all-rows scan, which is joined to EDW_METADATA.c3 by way of an

     all-rows scan with a condition of ("(NOT

     (EDW_METADATA.c3.Cliente_Id IS NULL )) AND (NOT

     (EDW_METADATA.c3.Tipo_Identificacion_Cd IS NULL ))") locking

     EDW_METADATA.c3 for access.  Spool 10 and EDW_METADATA.c3 are

     joined using a product join, with a join condition of (

     "(TRANSLATE((EDW_METADATA.c3.Numero_Identificacion )USING

     LATIN_TO_UNICODE))= CLIENTE_PACIENTE_OP").  The result goes into

     Spool 11 (all_amps) (compressed columns allowed), which is

     redistributed by the hash code of (

     EDW_METADATA.c3.Tipo_Identificacion_Cd) to all AMPs.  Then we do a

     SORT to order Spool 11 by row hash.  The size of Spool 11 is

     estimated with low confidence to be 2 rows (140 bytes).  The

     estimated time for this step is 0.58 seconds.

  8) We do an all-AMPs JOIN step from EDW_DATA.IDENTIFICATION_TYPE in

     view EDW_VIEW.V1_IDENTIFICATION_TYPE by way of a RowHash match

     scan, which is joined to Spool 11 (Last Use) by way of a RowHash

     match scan locking EDW_DATA.IDENTIFICATION_TYPE for access.

     EDW_DATA.IDENTIFICATION_TYPE and Spool 11 are joined using a merge

     join, with a join condition of (

     "((TRANSLATE((EDW_DATA.IDENTIFICATION_TYPE.Ext_Identification_Type_Op

     )USING LATIN_TO_UNICODE))= CODIGOTIPOIDENTPAC) AND

     (EDW_DATA.IDENTIFICATION_TYPE.Ext_Identification_Type_Cd =

     Tipo_Identificacion_Cd)").  The result goes into Spool 5

     (group_amps), which is built locally on the AMPs.  The size of

     Spool 5 is estimated with low confidence to be 2 rows (204 bytes).

     The estimated time for this step is 0.01 seconds.

  9) Finally, we send out an END TRANSACTION step to all AMPs involved

     in processing the request.

  -> The contents of Spool 5 are sent back to the user as the result of

     statement 1.  The total estimated time is 1.31 seconds.

Senior Apprentice

Re: Alternatives to increase the performance of this query?

There's no Outer Join, and the Product Joins are chosen by the optimizer because of the low estimated number of rows.

The optimizer seems to have replaced those strange joins with one similar to my proposal.

According to this Explain the query should run quite fast; there are no real Product Joins.

Is this the Explain on the actual tables? The estimated number of rows is quite low.

Did you check for missing/outdated statistics?

Submit a "DIAGNOSTIC HELPSTATS ON FOR SESSION;" and run Explain again. Any missing stats will be indicated at the end of the explain.
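In other words, something like this (a sketch; the sample SELECT just exercises the slow join from your query):

```sql
-- Turn on statistics recommendations; session-scoped, no DDL change:
DIAGNOSTIC HELPSTATS ON FOR SESSION;

EXPLAIN
SELECT COUNT(*)
FROM EDW_METADATA.PE719_AGHPRO c3
JOIN BD_STAGING.PE719_AGHPRO TEMP
  ON c3.NUMERO_IDENTIFICACION = TEMP.CLIENTE_PACIENTE_OP;
-- The plan now ends with a "BEGIN RECOMMENDED STATS" section listing
-- the COLLECT STATISTICS statements the optimizer would benefit from.
```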

Dieter

Enthusiast

Re: Alternatives to increase the performance of this query?

I went back, collected the statistics, and ran the Explain again, and it showed me the same result. Then I ran the query to see whether the execution speed had improved, and it is still just as slow, so I am starting to think this speed is normal for the query. But it seems strange, since there are not that many records and Teradata handles much higher volumes without any problem.
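For reference, the statistics were collected roughly like this (the exact column lists are my choice, based on the join columns in the query):

```sql
-- Stats on the join columns of the large lookup table:
COLLECT STATISTICS ON EDW_METADATA.PE719_AGHPRO
  COLUMN (Numero_Identificacion, Tipo_Identificacion_Cd);

-- Stats on the matching columns of the staging table:
COLLECT STATISTICS ON BD_STAGING.PE719_AGHPRO
  COLUMN (CLIENTE_PACIENTE_OP);

COLLECT STATISTICS ON BD_STAGING.PE719_AGHPRO
  COLUMN (CLIENTE_IPS_OP);
```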

Senior Apprentice

Re: Alternatives to increase the performance of this query?

So stats are up to date and the maximum estimated row count is still only 80?

For a 3,000,000 row table? Strange.

No missing stats according to the DIAGNOSTIC?

Could you show the output of HELP STATS for those tables and the datatypes of those columns?

Did you check estimated vs. actual row count/elapsed time from the query log?

You need data from DBQL at the step level. If it is not found, you should enable it (at least to test this query).
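A sketch of what that looks like (assuming you have rights on DBC; the user name is a placeholder, and the view/columns are from the standard DBC query-log views):

```sql
-- Enable step-level query logging for the user running the load:
BEGIN QUERY LOGGING WITH STEPINFO ON etl_user;

-- After running the query, compare estimated vs. actual rows per step:
SELECT StepLev1Num, StepName, EstRowCount, RowCount,
       StepStartTime, StepStopTime
FROM DBC.QryLogStepsV
WHERE QueryID = <your QueryID>
ORDER BY StepLev1Num;
```

A step whose actual RowCount is orders of magnitude above its EstRowCount is usually where the plan went wrong.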

Dieter

Enthusiast

Re: Alternatives to increase the performance of this query?

Thank you for your invaluable help. The problem was resolved by reviewing the structure of the temporary table BD_STAGING.PE719_AGHPRO (TEMP), where it was found that the primary index had been poorly chosen and did not guarantee the uniqueness of the records. It was changed to a column that does guarantee uniqueness, and query performance improved by 100%.
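For anyone finding this later, the fix amounted to rebuilding the staging table on a primary index column that is actually unique; roughly like this (the column name here is illustrative, not the real one):

```sql
-- Rebuild the staging table with a UPI on a truly unique column:
CREATE TABLE BD_STAGING.PE719_AGHPRO_NEW AS BD_STAGING.PE719_AGHPRO
WITH DATA
UNIQUE PRIMARY INDEX (Numero_Identif_Nov_Registro);
```

With a unique primary index the rows distribute evenly across the AMPs, so the joins no longer pile up on a few skewed hash values.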

Enthusiast

Re: Alternatives to increase the performance of this query?

Hi,

  We are getting different behaviour when firing the same query multiple times on the database. Every time it takes a different amount of time to execute: first it took 2 min, then 5 min, then 15 min, and it keeps increasing. We do not understand why this is happening. Can anyone please help us understand and tune it properly?

Please find the query and explain plan below.

Query:

SELECT DISTINCT 0 AS c1 , D1.c2 AS c2 , D1.c3 AS c3 , D1.c4 AS c4 ,

D1.c5 AS c5 , D1.c6 AS c6 , D1.c7 AS c7 , D1.c8 AS c8 , D1.c9 AS c9 ,

D1.c10 AS c10 , D1.c11 AS c11 , D1.c12 AS c12 , D1.c13 AS c13 ,

D1.c1 AS c14 , 0 AS c15 , 0 AS c16 , D1.c14 AS c17 

FROM

( SELECT SUM ( T57657."ITEM_DTL_AMT" ) AS c1 , 

CASE WHEN T12446."PARTNER_BANK" IN ( 'IL' , 'NIC' , 'RS' , 'UIIC' ) THEN 'TMI' 

ELSE 'Others' 

END AS c2 , T45548."DAY_DT" AS c3 , T37817."X_SLS_AREA" AS c4 ,

T37817."NAME" AS c5 , T37817."X_SLS_REGION" AS c6 , T12468."TOP_LVL_PROD_NAME" AS c7 ,

T12446."ASSET_NUM" AS c8 , T12446."PREMIUM" AS c9 , T12446."PREMIUM_DT" AS c10 ,

T12446."PARTNER_BRANCH" AS c11 , T12446."PURCHASE_DT" AS c12 ,

T12446."INTEREST_RATE" AS c13 , T45548."ROW_WID" AS c14 

FROM

"W_LOV_D" T55657 , 

"W_LOV_D" T55645 ,

 "W_DAY_D" T45548 ,

  "W_INT_ORG_D" T37817 ,

"W_ASSET_D" T12446 , 

"W_PRODUCT_D" T12467 , 

"W_PRODUCT_DH" T12468 ,

"WC_INVOICE_ITEM_F" T57657 ,

"W_INVOICE_F" T112407 

WHERE ( T45548."ROW_WID" = T57657."INVC_DT_WID" 

AND T37817."ROW_WID" = T57657."PR_VIS_ORG_WID" 

AND T12446."ROW_WID" = T57657."ASSET_WID" 

AND T12467."ROW_WID" = T12468."ROW_WID" 

AND T12467."ROW_WID" = T57657."PROD_WID" 

AND T55657."ROW_WID" = T57657."INVC_STATUS_WID" 

AND T55645."ROW_WID" = T57657."INVC_TYPE_WID" 

AND T55645."ROW_WID" = T112407."INVC_TYPE_CD_WID" 

AND T12467."PRODUCT_TYPE_DESC" = 'Vehicle' 

AND T55645."R_TYPE" = 'FS_INVOICE_TYPE' 

AND T55645."VAL" = 'Standard' 

AND T55657."ROW_WID" = T112407."STATUS_CD_WID" 

AND T55657."R_TYPE" = 'PS_SUBCONTRACTOR_STATUS' 

AND T55657."VAL" = 'New' 

AND T57657."INVOICE_WID" = T112407."ROW_WID" 

AND T12446."ASSET_NUM" <> 'Unspecified' 

AND ( T12467."X_BU_UNIT" = 'TM' 

OR T12467."X_BU_UNIT" = 'TMPC' ) 

AND T12446."PARTNER_BANK" IS NOT NULL 

AND T45548."DAY_DT" BETWEEN TIMESTAMP'2013-02-01 00:00:00' 

AND TIMESTAMP'2013-02-28 00:00:00' ) 

GROUP BY T12446."ASSET_NUM" , T12446."INTEREST_RATE" , T12446."PREMIUM" ,

T12446."PREMIUM_DT" , T12446."PURCHASE_DT" , T12446."PARTNER_BRANCH" ,

T12468."TOP_LVL_PROD_NAME" , T37817."NAME" , T37817."X_SLS_AREA" ,

T37817."X_SLS_REGION" , T45548."ROW_WID" , T45548."DAY_DT" ,

CASE 

WHEN T12446."PARTNER_BANK" IN ( 'IL' , 'NIC' , 'RS' , 'UIIC' ) THEN 'TMI' 

ELSE 'Others' 

END ) D1 

Explain Plan:
1) First, we lock a distinct OLAP_DEV."pseudo table" for read on a

     RowHash to prevent global deadlock for OLAP_DEV.T12468. 

  2) Next, we lock a distinct OLAP_DEV."pseudo table" for read on a

     RowHash to prevent global deadlock for OLAP_DEV.T12467. 

  3) We lock a distinct OLAP_DEV."pseudo table" for read on a RowHash

     to prevent global deadlock for OLAP_DEV.T112407. 

  4) We lock a distinct OLAP_DEV."pseudo table" for read on a RowHash

     to prevent global deadlock for OLAP_DEV.T37817. 

  5) We lock a distinct OLAP_DEV."pseudo table" for read on a RowHash

     to prevent global deadlock for OLAP_DEV.T45548. 

  6) We lock a distinct OLAP_DEV."pseudo table" for read on a RowHash

     to prevent global deadlock for OLAP_DEV.T12446. 

  7) We lock a distinct OLAP_DEV."pseudo table" for read on a RowHash

     to prevent global deadlock for OLAP_DEV.T57657. 

  8) We lock OLAP_DEV.T12468 for read, we lock OLAP_DEV.T12467 for read,

     we lock OLAP_DEV.T112407 for read, we lock OLAP_DEV.T37817 for

     read, we lock OLAP_DEV.T45548 for read, we lock OLAP_DEV.T12446

     for read, and we lock OLAP_DEV.T57657 for read. 

  9) We do a BMSMS (bit map set manipulation) step that builds a bit

     map for OLAP_DEV.T12467 by way of a traversal of index # 24 which

     is placed in Spool 5.  The estimated time for this step is 0.00

     seconds. 

 10) We execute the following steps in parallel. 

      1) We do an all-AMPs RETRIEVE step from OLAP_DEV.T12467 by way of

         index # 16 "OLAP_DEV.T12467.PRODUCT_TYPE_DESC = 'Vehicle'" and

         the bit map in Spool 5 (Last Use) with a residual condition of

         ("((OLAP_DEV.T12467.X_BU_UNIT = 'TMPC') OR

         (OLAP_DEV.T12467.X_BU_UNIT = 'TM')) AND

         (OLAP_DEV.T12467.PRODUCT_TYPE_DESC = 'Vehicle')") into Spool 6

         (all_amps), which is built locally on the AMPs.  Then we do a

         SORT to order Spool 6 by the hash code of (

         OLAP_DEV.T12467.ROW_WID).  The size of Spool 6 is estimated

         with low confidence to be 18,214 rows (382,494 bytes).  The

         estimated time for this step is 2.83 seconds. 

      2) We do a two-AMP JOIN step from OLAP_DEV.T55645 by way of

         unique index # 8 "OLAP_DEV.T55645.R_TYPE = 'FS_INVOICE_TYPE',

         OLAP_DEV.T55645.VAL = 'Standard'" with a residual condition of

         ("(OLAP_DEV.T55645.R_TYPE = 'FS_INVOICE_TYPE') AND

         (OLAP_DEV.T55645.VAL = 'Standard')"), which is joined to

         OLAP_DEV.T55657 by way of unique index # 8

         "OLAP_DEV.T55657.R_TYPE = 'PS_SUBCONTRACTOR_STATUS',

         OLAP_DEV.T55657.VAL = 'New'" with no residual conditions. 

         OLAP_DEV.T55645 and OLAP_DEV.T55657 are joined using a nested

         join, with a join condition of ("(1=1)").  The result goes

         into Spool 7 (all_amps), which is duplicated on all AMPs. 

         Then we do a SORT to order Spool 7 by the hash code of (

         OLAP_DEV.T55657.ROW_WID).  The size of Spool 7 is estimated

         with high confidence to be 24 rows (696 bytes).  The estimated

         time for this step is 0.00 seconds. 

 11) We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of an

     all-rows scan, which is joined to OLAP_DEV.T112407 by way of a

     traversal of index # 16 without accessing the base table

     extracting row ids only.  Spool 7 and OLAP_DEV.T112407 are joined

     using a nested join, with a join condition of ("ROW_WID =

     OLAP_DEV.T112407.STATUS_CD_WID").  The input table

     OLAP_DEV.T112407 will not be cached in memory.  The result goes

     into Spool 8 (all_amps), which is built locally on the AMPs.  Then

     we do a SORT to order Spool 8 by field Id 1.  The size of Spool 8

     is estimated with low confidence to be 9,615 rows (374,985 bytes). 

     The estimated time for this step is 0.09 seconds. 

 12) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of an

     all-rows scan, which is joined to OLAP_DEV.T112407 by way of an

     all-rows scan with a condition of ("NOT

     (OLAP_DEV.T112407.STATUS_CD_WID IS NULL)").  Spool 8 and

     OLAP_DEV.T112407 are joined using a row id join, with a join

     condition of ("ROW_WID = OLAP_DEV.T112407.INVC_TYPE_CD_WID").  The

     input table OLAP_DEV.T112407 will not be cached in memory.  The

     result goes into Spool 9 (all_amps), which is duplicated on all

     AMPs.  Then we do a SORT to order Spool 9 by the hash code of (

     OLAP_DEV.T112407.ROW_WID).  The size of Spool 9 is estimated with

     low confidence to be 230,760 rows (12,230,280 bytes).  The

     estimated time for this step is 1 minute and 21 seconds. 

 13) We execute the following steps in parallel. 

      1) We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of

         an all-rows scan, which is joined to OLAP_DEV.T57657 by way of

         a traversal of index # 12 without accessing the base table

         extracting row ids only.  Spool 9 and OLAP_DEV.T57657 are

         joined using a nested join, with a join condition of (

         "OLAP_DEV.T57657.INVOICE_WID = ROW_WID").  The input table

         OLAP_DEV.T57657 will not be cached in memory.  The result goes

         into Spool 10 (all_amps), which is built locally on the AMPs. 

         Then we do a SORT to order Spool 10 by field Id 1.  The size

         of Spool 10 is estimated with low confidence to be 9,679 rows

         (532,345 bytes).  The estimated time for this step is 19.77

         seconds. 

      2) We do an all-AMPs RETRIEVE step from OLAP_DEV.T45548 by way of

         a traversal of index # 8 extracting row ids only with a

         residual condition of ("(OLAP_DEV.T45548.DAY_DT >= TIMESTAMP

         '2013-02-01 00:00:00.000000') AND (OLAP_DEV.T45548.DAY_DT <=

         TIMESTAMP '2013-02-28 00:00:00.000000')") into Spool 4

         (all_amps), which is built locally on the AMPs.  Then we do a

         SORT to order Spool 4 by row id eliminating duplicate rows. 

         The size of Spool 4 is estimated with high confidence to be 4

         rows.  The estimated time for this step is 0.07 seconds. 

 14) We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of an

     all-rows scan, which is joined to OLAP_DEV.T57657 by way of an

     all-rows scan with a condition of ("NOT

     (OLAP_DEV.T57657.INVC_DT_WID IS NULL)").  Spool 10 and

     OLAP_DEV.T57657 are joined using a row id join, with a join

     condition of ("(ROW_WID = OLAP_DEV.T57657.INVC_TYPE_WID) AND

     ((INVC_TYPE_CD_WID = OLAP_DEV.T57657.INVC_TYPE_WID) AND

     ((STATUS_CD_WID = OLAP_DEV.T57657.INVC_STATUS_WID) AND (ROW_WID =

     OLAP_DEV.T57657.INVC_STATUS_WID )))").  The input table

     OLAP_DEV.T57657 will not be cached in memory.  The result goes

     into Spool 11 (all_amps), which is redistributed by the hash code

     of (OLAP_DEV.T57657.INVC_DT_WID) to all AMPs.  Then we do a SORT

     to order Spool 11 by row hash.  The size of Spool 11 is estimated

     with low confidence to be 9,679 rows (590,419 bytes).  The

     estimated time for this step is 43.67 seconds. 

 15) We execute the following steps in parallel. 

      1) We do an all-AMPs JOIN step from OLAP_DEV.T45548 by way of row

         ids from Spool 4 (Last Use) with no residual conditions, which

         is joined to Spool 11 (Last Use) by way of a RowHash match

         scan.  OLAP_DEV.T45548 and Spool 11 are joined using a merge

         join, with a join condition of ("OLAP_DEV.T45548.ROW_WID =

         INVC_DT_WID").  The result goes into Spool 12 (all_amps),

         which is redistributed by the hash code of (

         OLAP_DEV.T57657.PR_VIS_ORG_WID) to all AMPs.  Then we do a

         SORT to order Spool 12 by row hash.  The size of Spool 12 is

         estimated with low confidence to be 272 rows (19,312 bytes). 

         The estimated time for this step is 0.08 seconds. 

      2) We do an all-AMPs JOIN step from OLAP_DEV.T12468 by way of a

         RowHash match scan with no residual conditions, which is

         joined to Spool 6 (Last Use) by way of a RowHash match scan. 

         OLAP_DEV.T12468 and Spool 6 are joined using a merge join,

         with a join condition of ("ROW_WID = OLAP_DEV.T12468.ROW_WID"). 

         The result goes into Spool 13 (all_amps), which is built

         locally on the AMPs.  Then we do a SORT to order Spool 13 by

         the hash code of (OLAP_DEV.T12467.ROW_WID,

         OLAP_DEV.T12468.ROW_WID).  The size of Spool 13 is estimated

         with low confidence to be 18,214 rows (1,165,696 bytes).  The

         estimated time for this step is 0.44 seconds. 

 16) We do an all-AMPs JOIN step from OLAP_DEV.T37817 by way of a

     RowHash match scan with no residual conditions, which is joined to

     Spool 12 (Last Use) by way of a RowHash match scan. 

     OLAP_DEV.T37817 and Spool 12 are joined using a merge join, with a

     join condition of ("OLAP_DEV.T37817.ROW_WID = PR_VIS_ORG_WID"). 

     The result goes into Spool 14 (all_amps), which is duplicated on

     all AMPs.  Then we do a SORT to order Spool 14 by the hash code of

     (OLAP_DEV.T57657.PROD_WID, OLAP_DEV.T57657.PROD_WID).  The size of

     Spool 14 is estimated with low confidence to be 6,528 rows (

     842,112 bytes).  The estimated time for this step is 0.12 seconds. 

 17) We do an all-AMPs JOIN step from Spool 13 (Last Use) by way of a

     RowHash match scan, which is joined to Spool 14 (Last Use) by way

     of a RowHash match scan.  Spool 13 and Spool 14 are joined using a

     merge join, with a join condition of ("(ROW_WID = PROD_WID) AND

     (PROD_WID = ROW_WID)").  The result goes into Spool 15 (all_amps),

     which is redistributed by the hash code of (

     OLAP_DEV.T57657.ASSET_WID) to all AMPs.  Then we do a SORT to

     order Spool 15 by row hash.  The size of Spool 15 is estimated

     with low confidence to be 272 rows (42,160 bytes).  The estimated

     time for this step is 0.03 seconds. 

 18) We do an all-AMPs JOIN step from OLAP_DEV.T12446 by way of a

     RowHash match scan with a condition of ("(NOT

     (OLAP_DEV.T12446.PARTNER_BANK IS NULL )) AND

     ((OLAP_DEV.T12446.ASSET_NUM > 'Unspecified') OR

     (OLAP_DEV.T12446.ASSET_NUM < 'Unspecified'))"), which is joined to

     Spool 15 (Last Use) by way of a RowHash match scan. 

     OLAP_DEV.T12446 and Spool 15 are joined using a merge join, with a

     join condition of ("OLAP_DEV.T12446.ROW_WID = ASSET_WID").  The

     input table OLAP_DEV.T12446 will not be cached in memory, but it

     is eligible for synchronized scanning.  The result goes into Spool

     3 (all_amps), which is built locally on the AMPs.  The size of

     Spool 3 is estimated with low confidence to be 272 rows (73,984

     bytes).  The estimated time for this step is 0.20 seconds. 

 19) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by

     way of an all-rows scan , grouping by field1 (

     OLAP_DEV.T12446.ASSET_NUM ,OLAP_DEV.T12446.INTEREST_RATE

     ,OLAP_DEV.T12446.PREMIUM ,OLAP_DEV.T12446.PREMIUM_DT

     ,OLAP_DEV.T12446.PURCHASE_DT ,OLAP_DEV.T12446.PARTNER_BRANCH

     ,OLAP_DEV.T12468.TOP_LVL_PROD_NAME ,OLAP_DEV.T37817.NAME

     ,OLAP_DEV.T37817.X_SLS_AREA ,OLAP_DEV.T37817.X_SLS_REGION

     ,OLAP_DEV.T45548.ROW_WID ,OLAP_DEV.T45548.DAY_DT ,( CASE WHEN

     ((OLAP_DEV.T12446.PARTNER_BANK = 'IL') OR

     ((OLAP_DEV.T12446.PARTNER_BANK = 'NIC') OR

     ((OLAP_DEV.T12446.PARTNER_BANK = 'RS') OR

     (OLAP_DEV.T12446.PARTNER_BANK = 'UIIC')))) THEN ('TMI') ELSE

     ('Others') END)).  Aggregate Intermediate Results are computed

     globally, then placed in Spool 16.  The size of Spool 16 is

     estimated with low confidence to be 272 rows (214,880 bytes).  The

     estimated time for this step is 0.05 seconds. 

 20) We do an all-AMPs RETRIEVE step from Spool 16 (Last Use) by way of

     an all-rows scan into Spool 1 (used to materialize view, derived

     table or table function D1) (all_amps), which is built locally on

     the AMPs.  The size of Spool 1 is estimated with low confidence to

     be 272 rows (72,352 bytes).  The estimated time for this step is

     0.03 seconds. 

 21) We do an all-AMPs RETRIEVE step from Spool 1 (Last Use) by way of

     an all-rows scan into Spool 18 (all_amps), which is built locally

     on the AMPs.  The size of Spool 18 is estimated with low

     confidence to be 272 rows (85,680 bytes).  The estimated time for

     this step is 0.03 seconds. 

 22) Finally, we send out an END TRANSACTION step to all AMPs involved

     in processing the request.

  -> The contents of Spool 18 are sent back to the user as the result

     of statement 1.  The total estimated time is 2 minutes and 28

     seconds. 

Also, what other alternatives can we consider when tuning the database and queries, besides distribution, indexing, and statistics?

Thanks in advance.

Ashish Patil