What is the query to delete all the records in a table in Teradata?
Is this query correct:
Delete * from <DB_Name.Table_Name>;
In case the table is huge, you can also use DROP TABLE and CREATE TABLE commands.
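If you go the drop-and-recreate route, a minimal sketch might look like the following (database and table names are placeholders; capture the DDL before dropping so you can re-create the table exactly):

```sql
SHOW TABLE dbname.tablename;    -- capture the full CREATE TABLE DDL first
DROP TABLE dbname.tablename;
-- then re-run the DDL returned by SHOW TABLE to re-create the empty table
```

Note that DROP/CREATE also drops access rights, comments, and statistics on the table, so DELETE is usually the safer option unless you have a reason to re-create.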
What Ckraenzle mentioned is correct. You can simply use the syntax below to delete all records from a Teradata table:
delete from dbname.tablename;
Whenever we delete a huge number of records from a table, say 50 million records from a 1 billion record table, it usually takes very long to complete. As far as I know, the DELETE statement tries to delete all the records in one shot.
Is there a way to improve the performance of deleting records from a table? One of my friends told me that we can delete the records in batches by putting the DELETE statement in a loop, so that each iteration deletes 2 million records and all the records are deleted in 25 loops.
Has someone approached this way?
Don't delete in batches.
If the delete is slow, it's probably because it needs a Full Table Scan or there's secondary index maintenance. If you split it into 25 deletes, it will require the FTS and SI maintenance 25 times.
Do you have any SIs (or Triggers/FKs)?
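To check whether the table has secondary indexes, one option is to query the data dictionary view dbc.IndicesV (database and table names below are placeholders):

```sql
SELECT IndexType, IndexNumber, ColumnName
FROM dbc.IndicesV
WHERE DatabaseName = 'dbname'     -- placeholder
  AND TableName    = 'tablename'  -- placeholder
ORDER BY IndexNumber, ColumnPosition;
```

As far as I recall, an IndexType of 'S' indicates a secondary index; the primary index shows up as 'P' (or 'Q' for a partitioned primary index).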
On big tables you should try to delete data based on the partitioning schema.
Thanks for your reply. I don't have any idea about deleting data based on the partitioning schema. Can you give me an example?
Big tables are usually partitioned by date, e.g.
PARTITION BY RANGE_N (trans_date BETWEEN DATE '2005-01-01' AND DATE '2020-12-31'
EACH INTERVAL '1' DAY)
When you delete based on the partitioning column, e.g.
DELETE FROM tab
WHERE trans_date BETWEEN DATE '2010-01-01' AND DATE '2010-02-15';
there's no Full Table Scan, but very efficient partition access. Plus, this might be a FastPath Delete without Transient Journal.
Just a word of caution regarding the option of dropping the table and then re-creating it...
When you re-create, you also need to re-create the table and column comments. More importantly, previously collected statistics will be lost unless you capture the COLLECT STATISTICS statements and re-apply them after table creation. I used Dieter's process to create a macro called grab_stats that captures the existing COLLECT STATISTICS statements on a table so they can be re-applied in scenarios just like this... thanks again, Dieter!
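One way to capture the stats definitions (on Teradata 14.0 and later, as far as I know) is SHOW STATISTICS, which emits the COLLECT STATISTICS statements that you can save and re-run after re-creating the table (names below are placeholders):

```sql
SHOW STATISTICS ON dbname.tablename;
-- returns the COLLECT STATISTICS ... statements for the table;
-- save the output and re-run it after CREATE TABLE
```

I believe SHOW STATISTICS VALUES ON tablename additionally captures the collected values, not just the definitions.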