Hello all, I am new to using Teradata and I would like to ask about this scenario: can Teradata load 9 billion rows into a table in under 3 hours?
I currently have about 90 million rows that need to be blown up 100 times (to 9 billion rows) for a test scenario. The system I have has 192 AMPs, and I believe the source table's primary index is good because roughly 470,000 rows land on each AMP. The target table I will be inserting into has the same primary index as the source.
I tried running 100 INSERT ... SELECT statements and submitting them in parallel, though in the execution plan they still appear to run sequentially.
INSERT INTO target_table SELECT col1 + 1, CAST(col2 AS VARCHAR(80)) || '_001' FROM source_table
;INSERT INTO target_table SELECT col1 + 2, CAST(col2 AS VARCHAR(80)) || '_002' FROM source_table
-- ... statements 3 through 99 ...
;INSERT INTO target_table SELECT col1 + 100, CAST(col2 AS VARCHAR(80)) || '_100' FROM source_table;
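One alternative I have been wondering about is collapsing the 100 statements into a single INSERT ... SELECT by cross joining the source against a small 100-row helper table. This is only a sketch of what I mean; the helper table `multiplier` (a one-column table `n` pre-loaded with values 1 through 100) is hypothetical, and I am assuming Teradata's FORMAT phrase with '9' digits keeps leading zeros for the suffix:

```sql
-- Sketch: one INSERT ... SELECT instead of 100 serialized statements.
-- "multiplier" is a hypothetical helper table: multiplier(n INTEGER),
-- populated with n = 1 .. 100 before running this.
INSERT INTO target_table
SELECT s.col1 + m.n,
       CAST(s.col2 AS VARCHAR(80)) || '_' ||
       CAST(CAST(m.n AS FORMAT '999') AS CHAR(3))  -- '001' .. '100'
FROM source_table s
CROSS JOIN multiplier m;
```

I do not know whether a single large insert-select like this would actually finish faster than the serialized statements, since the write lock on target_table may still be the bottleneck, but at least the optimizer would see the whole workload at once.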
It has now been running for 20 hours and the target table is at about 5 billion rows, on the 60th insert statement. Each insert statement currently takes around 40 minutes to complete, and I expect that to keep growing in the succeeding steps.
I will likely let this run to completion, because if I abort it the rollback will be a real headache.
I would just like to know: is there a better way to do this, one that takes less than 3 hours?
Thanks very much for the suggestions!