Teradata Optimization for large table (20 billion rows)

Teradata Employee


Hi Guys,

We are currently processing a table with a maximum of 8 billion rows, where the first table is around 1.7 billion records and is joined to 17,000 rows, with MIN and SUM aggregations being used. It's a 96-AMP system; does anyone know how fast this query should run?

Thanks!

Greg

2 REPLIES
Enthusiast

Re: Teradata Optimization for large table (20 billion rows)

Run the EXPLAIN plan and look at the confidence levels and estimated time. Collect statistics on the join columns if they have not been collected already.
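
For example, something like this, using placeholder table and column names (big_fact_table, small_dim_table, join_key, group_col, and measure_col are illustrative only, not from the original post):

-- Collect stats on the join columns of both tables (placeholder names).
COLLECT STATISTICS ON big_fact_table COLUMN (join_key);
COLLECT STATISTICS ON small_dim_table COLUMN (join_key);

-- Then check the plan, confidence levels, and estimated cost.
EXPLAIN
SELECT d.group_col,
       MIN(f.measure_col) AS min_measure,
       SUM(f.measure_col) AS sum_measure
FROM   big_fact_table f
JOIN   small_dim_table d
  ON   f.join_key = d.join_key
GROUP BY d.group_col;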

Enthusiast

Re: Teradata Optimization for large table (20 billion rows)

The estimated time in an EXPLAIN is actually a cost, not a time value (even though it says time).

You cannot predict performance time from the number of AMPs alone. Performance time is affected by the hardware platform and the workload on the system, among other things.

Is your large table partitioned? Are you using the PI or a secondary index in your joins?

Are stats collected and up to date?
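
You can check both of those from the table DDL and the stats metadata, for example (big_fact_table is a placeholder name):

-- Shows the primary index and any PARTITION BY expression.
SHOW TABLE big_fact_table;

-- Shows which statistics exist and when they were last collected.
HELP STATISTICS big_fact_table;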

There is not enough information in this request to even guess at how long your query will take to run.

--Shelley