I am currently investigating the best method of buffering inserts into tables from CLI. The approach currently suggested to me is to chain a number of "USING...INSERT" statements into the Request Pointer, with an equivalent amount of data in the Using Data buffer. This seems very inefficient and clunky, as CLI would need to parse the request string every time the request is executed. I am looking for something similar to manual-commit in DB2, where executed requests aren't actually committed to the database but are buffered internally by CLI; a whole batch of requests can then be committed at once with a single manual-commit call on the statement handle. Does something similar exist in Teradata? Can someone suggest another way to achieve per-session buffering of requests?
Thanks for that response; it is probably the solution I am looking for.
As a follow-up question, does the "REPEAT" option need to be specified with the "USING" clause in CLI for an iterated request? The DBCAREA has a using-data-count member that specifies the number of records in the using-data-pointer-vector, so is setting this member sufficient, or does REPEAT need to be used as well?
Also, can the 'PACK' option be specified within CLIv2, or is it only for BTEQ? If it is BTEQ-only, will CLI pack records by default, or does something equivalent to 'PACK' need to be specified for it to do so? I ask because the manual indicates the PACK option is more for limiting the number of records packed together, in order to eliminate buffer overflows.
However, you have to add the "PACK" concept to your application yourself; that is, decide what an appropriate packing factor is. A request has a 1MB limit, and this limit includes the SQL Text parcel, the Data parcels, and other parcels such as the Options parcel. To my knowledge, CLI does not PACK. It may return an error indicating that the request is over the 1MB limit; at that point you have to remove some of the rows of data and try again.
OK, thanks for that. Do you know what a 'safe' amount of data to pack into a single request would be? That is, what is the upper limit that guarantees the overall request won't exceed 1MB? I would rather pack less than the maximum than have to deal with error parcels when I break the 1MB limit.