How to test Checkpoints in FastLoad

I'm a newbie to Teradata and am required to do a performance analysis of checkpoints in FastLoad. I have limited understanding of the concept.

I am using DataStage 8.1, in which I am reading 100,000 records from a file and using FastLoad to insert them into a Teradata table. Checkpoints are being created after every 10,000 records.
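For context, the checkpoint interval is specified in the BEGIN LOADING statement of the FastLoad script. A minimal sketch of such a script (the database, table, column, and file names here are made up for illustration; they are not from the original job):

```
LOGON tdpid/user,password;
DATABASE mydb;

/* CHECKPOINT 10000 asks FastLoad to record progress every 10,000 rows */
BEGIN LOADING mydb.mytable
   ERRORFILES mydb.mytable_err1, mydb.mytable_err2
   CHECKPOINT 10000;

SET RECORD VARTEXT ",";
DEFINE col1 (VARCHAR(20)),
       col2 (VARCHAR(20))
FILE = input.txt;

INSERT INTO mydb.mytable (col1, col2)
VALUES (:col1, :col2);

END LOADING;
LOGOFF;
```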

In theory, if FastLoad crashes after loading, say, 20,500 rows, then loading should resume from record 20,001, correct? And the extra 500 rows will have been discarded?

To prove this I am manually aborting the job after 20,000 records have been loaded. However, when I check the table, no rows have been inserted. I was under the impression that since a checkpoint was created after 20,000 records, these 20,000 records should be present in the table.

Could this be because I am manually aborting the job? (I am stopping the job from the DataStage Director.) Is there any way to manually simulate a FastLoad failure?

How exactly can I test such checkpoints?

I'd appreciate any help, and apologize if my question is noobish...

Re: How to test Checkpoints in FastLoad

FastLoad does not load records directly into the target table during the acquisition (first) phase. It puts them into a work subtable; only when it encounters an END LOADING statement does it move them from the subtable into the target table.
So if you kill the job after loading 20,500 rows, those rows are sitting in the subtable, which is not accessible through normal SQL commands. That is why the target table appears empty when you check it.

To test restart, kill the job after 20,500 rows and then restart it. If you restart with a script containing just the BEGIN LOADING and END LOADING statements, FastLoad will apply the 20,000 checkpointed records to the target table, where you can then check them.
Alternatively, if you restart the same script, FastLoad will skip the first 20,000 records and resume loading from record 20,001. If you let it run to completion, the log will report that 20,000 records were skipped and the load resumed at record 20,001. At the end you should see 20,500 records inserted by the first job, 20,000 skipped and 80,000 loaded by the second job, and a total of 100,000 records in the table.
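A restart script for the END LOADING approach described above might look like the following sketch; it assumes the same (hypothetical) table and error-table names as the original job, since FastLoad uses the error tables to recognize the interrupted load:

```
LOGON tdpid/user,password;

/* Same target table and error tables as the interrupted job;
   no DEFINE/INSERT needed just to flush the checkpointed rows */
BEGIN LOADING mydb.mytable
   ERRORFILES mydb.mytable_err1, mydb.mytable_err2;

END LOADING;
LOGOFF;
```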

Make sure your restart script does not drop and recreate the target table or drop the error tables at the start. If it does, the job will not restart from the checkpoint but will start over from the beginning.
Years ago FastLoad restarts had a poor reputation, and many sites still clean up and rerun the whole load, but restarts have been reliable for a long time now!