SQL*Plus spool to Fastload via named pipe (Windows) is barely faster than by flat file.


I've got Fastload opening a named pipe, and I'm redirecting the output of a SQL*Plus script to the pipe in Windows. I assumed this would be far faster than having SQL*Plus spool the result to a local file and then having Fastload import the flat file. It turns out it isn't.

I've tried every parameter in SQL*Plus and a number of them in Fastload, and it has made a difference, but nothing like what I expected. I suspect it may have to do with the fact that axsmod is still spending all that time writing the fallback file to the file system. I haven't found a way to disable the fallback file, which I don't really need for this particular export process - I'd rather just have the speed. Is there a way to disable it?

Fastload is using the following:

axsmod np_AXSMOD.dll "log_level=0 block_size=1355000 fallback_directory='C:\WHATEVER\DATA' signature_check=0";

define ... file=\\.\pipe\ORA2TD;

checkpoint 1000000;
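
For context, the fragments above fit into a full Fastload script roughly like this. This is only a sketch: the logon, database, table, and column names are made up, and it assumes the input is fixed-width text matching the SQL*Plus linesize (colsep ""). Note that CHECKPOINT is normally written as a clause of BEGIN LOADING:

sessions 8;
logon tdpid/etl_user,etl_pwd;

axsmod np_AXSMOD.dll "log_level=0 block_size=1355000 fallback_directory='C:\WHATEVER\DATA' signature_check=0";

set record text;  /* fixed-width lines from SQL*Plus */

define
  cust_id   (char(10)),
  cust_name (char(50)),
  file=\\.\pipe\ORA2TD;

begin loading MYDB.STG_CUSTOMER
  errorfiles MYDB.STG_CUSTOMER_E1, MYDB.STG_CUSTOMER_E2
  checkpoint 1000000;

insert into MYDB.STG_CUSTOMER values (:cust_id, :cust_name);

end loading;
logoff;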

SQL*Plus is using:

set termout off -- controls the display of output generated by commands

set echo off -- controls whether each command is listed as it is executed

set showmode off -- controls whether the old and new settings of a system variable are shown

set serveroutput off -- controls whether to display output from dbms_output

set sqlprompt "" -- controls the sqlplus command prompt

set sqlnumber off -- sets the prompt for the second and subsequent lines of a sql command or pl/sql block

set appinfo TERADATA_ETL -- sets the process name in v$session and dbms_application_info

set array 5000 -- sets the number of rows -- called a batch -- that sql*plus will fetch from the database at one time. Max value is 5000, which allows for better efficiency at the expense of memory

set colsep "" -- determines the column separator character

set define off -- controls whether sql*plus will scan commands for substitution variables and replace them

set feedback off -- displays the number of records returned by a script when it selects at least N records

set flush off -- controls when output is sent to the user's display device. OFF allows the host O/S to buffer output

set heading off -- controls printing of column headings

set headsep off -- controls whether sql*plus recognizes the heading separator character

set linesize 271 -- sets the total num chars that display on one line before beginning a new line

set newpage none -- sets the N blank lines printed between the top of each page. None = no lines

set pagesize 0 -- sets the N lines in each page. 0 suppresses all headings, breaks, titles, and other formatting

set pause off -- allows control of the terminal. Off prevents pausing at each page

set tab off -- determines how white space is formatted. Off uses spaces

set timing off -- controls the display of timing statistics

set trimout on -- determines whether trailing blanks are trimmed from the end of each displayed line

set trimspool off -- determines whether trailing blanks are trimmed from the end of each spooled line (off leaves lines padded to the full linesize)

set verify off -- determines whether to list the text of a sql statement or command before and after replacing substitution variables

set wrap off -- controls whether to truncate the display of rows too long for the current line width

Export is accomplished by:

1) Launch fastload script and wait for pipe to open

2) Run SQL*Plus and redirect output to the pipe: sqlplus -s usr/pwd@inst @script.sqlplus > \\.\pipe\ORA2TD
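
The two steps can be wrapped in a batch file. Fastload has to have the pipe open before SQL*Plus starts writing, hence starting it in its own window and waiting. The script and log file names here are placeholders:

@echo off
rem 1) Start Fastload in its own window so axsmod creates \\.\pipe\ORA2TD first
start "fastload" cmd /c "fastload < ora2td.fastload > ora2td.log"

rem Give axsmod a moment to create the pipe before the writer connects
timeout /t 5 /nobreak > nul

rem 2) Run SQL*Plus silently and redirect its output into the pipe
sqlplus -s usr/pwd@inst @script.sqlplus > \\.\pipe\ORA2TD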

The named pipe method runs in 20:29 at 1.537M rows per minute; the flat file method runs in 22:43 at 1.353M rows per minute, so the named pipe is only about 13% faster. Does anyone see a major issue with the way I'm doing this that is preventing the named pipe method from being significantly faster? I expected much higher throughput.

Re: SQL*Plus spool to Fastload via named pipe (Windows) is barely faster than by flat file.

Update: further testing shows the ETL server is severely limited by network bandwidth. We're going to resolve that problem first, then I'll repeat this test and post the results here.