While not as fast as the "fast" option, you could build a FastExport job that generates your MultiLoad script for you, which saves some development time. Then you only have to write the SELECT statement for each FastExport script — the scripts are nearly identical, with just the SELECT statement changing from table to table. The output file name can stay the same if you export and then load each file before moving on to the next table, and you can reuse the same MultiLoad script file name as well.
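A minimal sketch of one of those per-table FastExport scripts. Everything here is a placeholder — the log table, the tdpid/user/password logon string, the output file name, and the table name are assumptions, and the only line you would change per table is the SELECT:

```
.LOGTABLE mydb.fexp_restartlog;      /* restart log table (placeholder name)   */
.LOGON tdpid/user,password;          /* placeholder credentials                */
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE table.dat;           /* same file name reused for every table  */
SELECT * FROM mydb.table1;           /* the only statement that changes        */
.END EXPORT;
.LOGOFF;
```

The matching MultiLoad script can then point at the same table.dat file each time, so only the SELECT above and the target table name in the load script need editing between runs.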
ARCMAIN jobs to archive and copy the data would be the quickest way to move it. You can use ARC on UNIX or Windows to dump to disk, provided you have enough disk space to hold your largest table.
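A rough sketch of the ARC script pair, assuming placeholder database, table, and archive-file names — the exact way the archive file is assigned to the FILE keyword varies by platform, so treat this as an outline rather than a runnable job:

```
/* Step 1: archive the table to disk (source system) */
LOGON tdpid/user,password;           /* placeholder credentials            */
ARCHIVE DATA TABLES (mydb.bigtable),
  RELEASE LOCK,
  FILE=ARCHIVE;                      /* ARCHIVE maps to a disk file        */
LOGOFF;

/* Step 2: copy the archived table into the target (target system) */
LOGON tdpid/user,password;
COPY DATA TABLES (targetdb.bigtable) (FROM (mydb.bigtable)),
  RELEASE LOCK,
  FILE=ARCHIVE;                      /* same disk file read back in        */
LOGOFF;
```

Because the archive file lands on disk between the two steps, the free-space check against your largest table is what gates this approach.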