To find record counts for all tables in a particular database

Enthusiast

To find record counts for all tables in a particular database

Hi,
Can someone help me write a query to pull record counts for all tables in a particular database, or record counts for all tables present in Teradata?

Thanks a lot
6 REPLIES

Re: To find record counts for all tables in a particular database

I think Teradata charges so much because it hides, or makes it difficult to find, answers that are relatively easy in any other DB.

Enthusiast

Re: To find record counts for all tables in a particular database

I don't think it's feasible to fetch the record count of all the tables in a particular database in a single shot, as it consumes a lot of CPU.

Instead of SEL COUNT(*) you can use SEL 1, which will be comparatively faster...
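
For illustration, assuming a placeholder table mydb.mytable, the two forms being compared are:

SELECT COUNT(*) FROM mydb.mytable;   /* returns a single row containing the total count */
SELECT 1 FROM mydb.mytable;          /* returns one row per record; the client-side row count gives the total */

The second form spools one row per record back to the client, so the total comes from counting the returned rows rather than from the database aggregating them.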

Senior Supporter

Re: To find record counts for all tables in a particular database

@Stefans: Can you explain your statement that "SEL 1 will be comparatively faster"?

COUNT(*) or COUNT(1) on whole tables has been improved a lot, since it does a cylinder index scan.

But I agree that you can't fetch all record counts in a single shot.

Enthusiast

Re: To find record counts for all tables in a particular database

@ulrich: SEL 1 will not check all the fields in a row; rather, it places a '1' against each row and produces the row count. I do agree with you that COUNT(*) or COUNT(1) has been improved since it does a cylinder index scan; however, when I run both types of select in SQL Assistant, I get the result set faster with SEL 1 than with SEL COUNT(*). The EXPLAIN plan can give more info.
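
For example (mydb.mytable again being just a placeholder), the plans can be compared by prefixing each statement with EXPLAIN:

EXPLAIN SELECT COUNT(*) FROM mydb.mytable;
EXPLAIN SELECT 1 FROM mydb.mytable;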

Senior Supporter

Re: To find record counts for all tables in a particular database

@Stefans

If I compare select count(*) from bigtable 

Explanation
1) First, we lock a distinct xxx."pseudo table" for read on a
RowHash to prevent global deadlock for xxx.yyy.
2) Next, we lock xxx.yyy for read.
3) We do an all-AMPs SUM step to aggregate from xxx.yyy
by way of a cylinder index scan with no residual conditions.
Aggregate Intermediate Results are computed globally, then placed
in Spool 3. The input table will not be cached in memory, but it
is eligible for synchronized scanning. The size of Spool 3 is
estimated with high confidence to be 1 row (23 bytes). The
estimated time for this step is 1 minute and 47 seconds.
4) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by way of
an all-rows scan into Spool 1 (all_amps), which is built locally
on the AMPs. The size of Spool 1 is estimated with high
confidence to be 1 row (25 bytes). The estimated time for this
step is 0.03 seconds.
5) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1. The total estimated time is 1 minute and 47 seconds.

with Select 1 from bigtable

Explanation
1) First, we lock a distinct xxx."pseudo table" for read on a
RowHash to prevent global deadlock for xxx.yyy.
2) Next, we lock xxx.yyy for read.
3) We do an all-AMPs RETRIEVE step from xxx.yyy by way
of a traversal of index # 4 without accessing the base table with
no residual conditions into Spool 1 (all_amps), which is built
locally on the AMPs. The input table will not be cached in memory,
but it is eligible for synchronized scanning. The result spool
file will not be cached in memory. The size of Spool 1 is
estimated with low confidence to be 1,650,879,744 rows (
36,319,354,368 bytes). The estimated time for this step is 45
minutes and 22 seconds.
4) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1. The total estimated time is 45 minutes and 22
seconds.

I see nothing that convinces me this should be faster. And here it even uses a NUSI and is not accessing the base table.

Can you share your DBQL results for the better performance?
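
As a sketch (assuming DBQL query logging is enabled and you can read DBC.QryLogV), something like the following would pull the CPU and I/O figures for the two statements so they can be compared side by side:

SELECT UserName, StartTime, AMPCPUTime, TotalIOCount, QueryText
FROM DBC.QryLogV
WHERE QueryText LIKE '%bigtable%'
ORDER BY StartTime DESC;

AMPCPUTime and TotalIOCount are the figures of interest here, rather than the elapsed time seen in SQL Assistant.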

Re: To find record counts for all tables in a particular database

This should do it:

SELECT 'SELECT '''||TRIM(TABLENAME)||''' AS TABLE_NM, SUM(1) AS ROW_CNT FROM '||TRIM(DATABASENAME)||'.'||TRIM(TABLENAME)||' GROUP BY 1 UNION ALL'
FROM DBC.TABLESX
WHERE DATABASENAME = 'your_db_name'
AND TABLEKIND = 'T' /* restrict to tables so the generated SELECTs are valid */
ORDER BY TABLENAME;

Run this, then copy all rows from the output, paste into the query window, and run.  Make sure to remove the 'UNION ALL' phrase from the last record.  It may not be fast, but it should work.
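
For illustration, with two hypothetical tables T1 and T2 in your_db_name, the statement you paste and run (after dropping the trailing UNION ALL) would look like this:

SELECT 'T1' AS TABLE_NM, SUM(1) AS ROW_CNT FROM your_db_name.T1 GROUP BY 1 UNION ALL
SELECT 'T2' AS TABLE_NM, SUM(1) AS ROW_CNT FROM your_db_name.T2 GROUP BY 1;

Note that with SUM(1) and GROUP BY, an empty table simply returns no row rather than a zero count.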

By the way, I just put the TRIM functions in there to make the output look good.

Johnny Eades