big scale loading

Hi Guys

I’m new to Teradata and need to convert a working Oracle-based engine to Teradata. The biggest challenge is the data-loading part.

The current Oracle loading engine uses the SQL*Loader (SQL*LDR) tool and loads more than 1 billion records per day from text files, while the destination tables remain available for SELECT.
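For reference, the existing load is roughly of this shape (the connect string, control file, and data file names here are made up; a parallel direct-path load is the usual way SQL*Loader reaches that kind of volume):

sqlldr userid=loader/secret@proddb control=sales.ctl data=sales.txt direct=true parallel=true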

I thought about using Teradata TPump; it would give me the same functionality, but as far as the official documentation goes, the performance might not be adequate.
What scale of machine would be needed to support this (if it is feasible at all)?

Another option I saw is to use MultiLoad (MLoad).
Any idea whether MLoad allows SELECT statements in parallel with the load?

Thanks

Raz

Re: big scale loading

MLoad blocks SELECTs that request read locks, but it allows access locks (i.e., read-uncommitted) during the final application phase. There are no restrictions on SELECTs during the acquisition phase (when the input data is being loaded into the work table).
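For illustration (the database and table names below are made up), a reporting session can keep querying the target table during the application phase by asking for an access lock explicitly:

LOCKING TABLE mydb.sales FOR ACCESS
SELECT COUNT(*) FROM mydb.sales;

Rows returned under an access lock may reflect a partially applied batch, which is the usual trade-off of a dirty read.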

An alternative to MLoad is to load the data into a staging table in small batches, and then use INSERT...SELECT or MERGE to load your destination tables from the staging tables. This performs as well as, and often much better than, MLoad, and it does not have any of MLoad's restrictions on data types and indexes on the destination table. The concurrency considerations would be similar to MLoad; that is, the destination table will be locked for write during that last phase.
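A minimal sketch of that pattern, assuming a pipe-delimited input file and hypothetical names (mydb, stg_sales, sales, sales.txt). First, a FastLoad script fills the empty staging table:

LOGON tdpid/loaduser,password;
DATABASE mydb;
/* FastLoad needs an empty target with no secondary indexes,
   so the staging and error tables are dropped each batch */
DROP TABLE stg_sales;
DROP TABLE stg_sales_err1;
DROP TABLE stg_sales_err2;
CREATE TABLE stg_sales (
    sale_id  INTEGER,
    amount   DECIMAL(12,2)
) PRIMARY INDEX (sale_id);
SET RECORD VARTEXT "|";            /* pipe-delimited text input */
DEFINE sale_id (VARCHAR(10)),
       amount  (VARCHAR(20))
FILE = sales.txt;
BEGIN LOADING stg_sales ERRORFILES stg_sales_err1, stg_sales_err2;
INSERT INTO stg_sales (sale_id, amount)
VALUES (:sale_id, :amount);
END LOADING;
LOGOFF;

Then a plain SQL step (run from BTEQ or any client) moves the batch into the destination table; the write lock on the destination is held only for the duration of this one statement, so access-lock readers see the table throughout:

INSERT INTO mydb.sales (sale_id, amount)
SELECT sale_id, amount FROM mydb.stg_sales;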

Re: big scale loading

Thanks for your prompt and professional response.
Trying to summarize the alternatives you presented:
(1) TPump is not a valid option for this volume of records.
(2) MLoad can be used, but during its final phase the table access is limited to “read-uncommitted” (dirty reads).
(3) The recommended approach is to use FastLoad (?) into a staging area and then INSERT...SELECT into the target tables.

I guess we will follow your advice.