Query Performance

Enthusiast

Hi,

I have a table T1 which is very large, say ~500 GB. I also have around 5 more tables which are relatively small, say around 5-10 GB each. I need to tune a query that selects all columns from T1 and left outer joins each of the small tables on the PI column: T1 LEFT OUTER JOIN T2 ON T1.<PI column> = T2.<PI column> LEFT OUTER JOIN T3 ON T1.<PI column> = T3.<PI column> LEFT OUTER JOIN T4 ON T1.<PI column> = T4.<PI column>, and so on. Each join therefore touches every row of T1 (the very big table), so this kind of query takes a long time. Is there any way to tune it further?
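For reference, the shape of the query is roughly this (a sketch only; table and column names are placeholders for the real ones):

```sql
-- Sketch of the query shape described above.
-- T1 is ~500 GB; T2..T4 are small (~5-10 GB); pi_col stands for the PI column.
SELECT T1.*
FROM T1
LEFT OUTER JOIN T2 ON T1.pi_col = T2.pi_col
LEFT OUTER JOIN T3 ON T1.pi_col = T3.pi_col
LEFT OUTER JOIN T4 ON T1.pi_col = T4.pi_col;
```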

Thanks

Abhishek

5 REPLIES
Enthusiast

Re: Query Performance

Abhishek,

What version of TD are you running?  Sounds like an opportunity for some vertical partitioning. Can you post the DDL?
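One way to realize vertical partitioning on Teradata 14.0+ is a column-partitioned (columnar) table, which avoids reading columns the query doesn't touch. A sketch with hypothetical names, since the DDL hasn't been posted yet:

```sql
-- Sketch only: a column-partitioned layout for a wide table like T1.
-- Column names and types here are hypothetical; requires TD 14.0+.
CREATE TABLE T1_columnar (
    pi_col  INTEGER,
    col_a   VARCHAR(100),
    col_b   DECIMAL(18,2)
)
NO PRIMARY INDEX
PARTITION BY COLUMN;
```

Whether this helps depends on how many of T1's columns the query actually selects; it won't reduce join cost if all columns are needed.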

Cheers,

Blaine

Re: Query Performance

Hi,

Using the EXPLAIN statement, I have a query which doesn't show the total estimated time:

EXPLAIN
SELECT  last_name AS Name
  ,salary_amount AS Salary
  ,department_number AS Dept
  ,COUNT(Salary) OVER (PARTITION BY Dept
    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
    AS Dept_Count
FROM employee
WHERE Dept IN (401, 403);

Is there any way to get the total estimated time?

Thanks

Colin.

Enthusiast

Re: Query Performance

Hi Colin,

Could you share the EXPLAIN plan of the given query?

Thanks!!

Re: Query Performance

Thanks for your attention....

Explanation
---------------------------------------------------------------------------
  1) First, we lock a distinct CUSTOMER_SERVICE."pseudo table" for read
     on a RowHash to prevent global deadlock for
     CUSTOMER_SERVICE.employee.
  2) Next, we lock CUSTOMER_SERVICE.employee for read.
  3) We do an all-AMPs RETRIEVE step from CUSTOMER_SERVICE.employee by
     way of an all-rows scan with a condition of (
     "(CUSTOMER_SERVICE.employee.department_number = 401) OR
     (CUSTOMER_SERVICE.employee.department_number = 403)") into Spool 2
     (all_amps), which is built locally on the AMPs.  The size of Spool
     2 is estimated with no confidence to be 3 rows (141 bytes).  The
     estimated time for this step is 0.02 seconds.
  4) We do an all-AMPs STAT FUNCTION step from Spool 2 (Last Use) by
     way of an all-rows scan into Spool 5 (Last Use), which is assumed
     to be redistributed by value to all AMPs.  The result rows are put
     into Spool 1 (group_amps), which is built locally on the AMPs.
     The size is estimated with no confidence to be 3 rows (225 bytes).
  5) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1.

Enthusiast

Re: Query Performance

Evidently, there are many "no confidence" estimates in the explain plan. Please check the stats.
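For example, collecting statistics on the column filtered in the WHERE clause should firm up the optimizer's estimates. A sketch, using the table and column names that appear in the EXPLAIN output above:

```sql
-- Collect stats on the column the query filters on,
-- so the optimizer can estimate row counts with confidence.
COLLECT STATISTICS COLUMN (department_number)
    ON CUSTOMER_SERVICE.employee;

-- Check which statistics already exist on the table.
HELP STATISTICS CUSTOMER_SERVICE.employee;
```

After re-collecting, rerun the EXPLAIN and see whether the "no confidence" estimates change to "high confidence".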