I have a requirement: I need to export data from one table using FastExport and load it in parallel into another table using MultiLoad (MLOAD).
I can write separate FastExport and MultiLoad jobs, but the table is huge and I don't want to waste Unix disk space, since FastExport creates a very large file.
Can we load it like a queue?
E.g., as FastExport writes each record to the file, it would immediately be loaded into the table by MultiLoad.
It is not possible to load tables row by row the way you want, because MLOAD and FastExport operate at the block level. Also, the file must be completely ready before MLOAD can start loading it.
You can achieve this with a single TPT script using the EXPORT and UPDATE operators. Refer to the TPT manual to create such a script, or browse this forum for sample scripts.
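For illustration, a minimal single-script sketch of that approach, using TPT's predefined operator templates ($EXPORT and $UPDATE) and the $INSERT DML macro. The job name, description, and all table names are placeholders, and the exact job-variable names you must supply at run time depend on your TPT version, so check the manual:

```
DEFINE JOB export_and_load
DESCRIPTION 'Stream rows from a source table straight into a target table'
(
    /* $EXPORT reads from the source table and $UPDATE (MultiLoad
       protocol) applies the generated INSERTs; the rows flow through
       TPT data streams, so no intermediate file is written to disk. */
    APPLY $INSERT TO OPERATOR ($UPDATE)
    SELECT * FROM OPERATOR ($EXPORT);
);
```

You would then run it with something like `tbuild -f export_load.tpt -u "SourceTable='src_db.t1', TargetTable='tgt_db.t1', ..."`, supplying logon and table job variables as your site requires.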
I think you can use the Named Pipes Access Module (NP AXSMOD): FastExport writes to the named pipe and MultiLoad reads from it.
You can also use a regular named pipe instead of the access module, but then restarts don't work.
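A minimal sketch of the plain-named-pipe variant: the exporter writes into a FIFO while the loader reads from it concurrently, so the full export never lands on disk. The fexp/mload invocations appear only in comments (their job scripts would name the pipe as their output/input file); a stand-in producer/consumer pair makes the pattern itself runnable:

```shell
#!/bin/sh
# Stream data through a named pipe instead of a landed flat file.
dir=$(mktemp -d)
pipe="$dir/export.pipe"
mkfifo "$pipe"

# In real use the two sides would be the Teradata utilities, e.g.:
#   fexp  < export_job.fx &    # FastExport script writing to $pipe
#   mload < load_job.ml        # MultiLoad script reading from $pipe
printf 'row1\nrow2\nrow3\n' > "$pipe" &   # stand-in for FastExport
loaded=$(wc -l < "$pipe")                 # stand-in for MultiLoad
wait

echo "rows streamed: $loaded"
rm -rf "$dir"
```

Because a pipe holds no history, a failed load cannot be restarted from a checkpoint; that is exactly the gap the Named Pipes Access Module fills with its fallback file.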
Hoping someone can give me guidance. We currently have large volumes of data being inserted into Teradata via a Perl script on Red Hat Linux. The insertion is row by row, so it takes hours to load. I was able to install tdicu, tdodbc, and TeraGSS from the tar files on this site, configure DBD::ODBC to connect to the Teradata DB, and the insertion works fine. I've been told to use JDBC for MultiLoad, BTEQ, or FastLoad, but I'm not familiar enough with Java. I've also been told to work with TPT 14, which is part of TTU, but I don't know how to get the software for Linux.
First of all, I assume that MultiLoad, FastLoad, and BTEQ are all part of the TTU package; is that correct?
If I do get the software, can I use Perl to execute the MultiLoad/FastLoad/BTEQ scripts?
Any help would be appreciated.
If you are going to use our load/unload tools for the first time, you should use TPT. It is the load/unload tool suite going forward.
TPT is part of TTU and can be found on the same media as the rest of the software.
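On the Perl question: yes, the TTU load tools are ordinary command-line executables, so a Perl script can drive them with system() or backticks, just as a shell script would. A hedged sketch of the shell equivalents (all script file names are placeholders, and each tool's exit status should be checked in real use):

```shell
tbuild   -f load_job.tpt   # TPT (recommended going forward)
bteq     < batch.bteq      # BTEQ batch SQL
mload    < load_job.ml     # MultiLoad
fastload < fl_job.fl       # FastLoad
```

Replacing the row-by-row DBD::ODBC inserts with one of these batch loaders is what will recover the hours currently spent loading.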