hi everyone, I'm new to fexp. All I'm trying to do is generate a comma-delimited text file from a small Latin-character-set table so I can load it into a DB2 table. Here is my script:
.logtable xxxxx;
.logon demoTdat/dbc,dbc;
.begin export sessions 2;
.export outfile c:\accts mode record format text;
select * from financial.accts;
.end export;
.logoff;
But the output data file looks something like this, even though this small table has 5 columns in total:
BÊ SV0000000013624983ƒ‚ Ê CC4561143213626054ã Ê SV0000000013626053&Y
It seems to me that only the char column data is in a readable format, while the integer and datetime columns all show up in what looks like a binary encoding. I've heard one shouldn't bother opening up the output file but instead go straight to the next task. But I can't see how DB2 will take this file and know it is for a 5-column table...
If you have a varchar, you get a length field for the string. Here, where you concatenate a number of columns into one string, you just get a length field for the whole line, not for the individual fields. So cast the whole line as a single char string:
select cast(cust_id as varchar(7)) || ',' ||
       acct_type || ',' ||
       acct_nbr || ',' ||
       cast(acct_start_date as varchar(8)) || ',' ||
       coalesce(cast(acct_end_date as varchar(8)), '00000000')
       (char(60))
from financial.accts;
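To make the length-field idea concrete, here is a hedged Python sketch (not Teradata code) that parses a file laid out as a 2-byte length followed by that many data bytes per record. The little-endian byte order and the toy records are assumptions for illustration; the exact layout of a FastExport output file depends on the MODE/FORMAT options and the client platform.

```python
import struct

def parse_records(buf):
    """Split a byte buffer into records, where each record is a
    2-byte little-endian length followed by that many data bytes.
    (Assumed layout; the real FastExport layout varies by FORMAT.)"""
    records = []
    offset = 0
    while offset + 2 <= len(buf):
        (length,) = struct.unpack_from('<H', buf, offset)
        offset += 2
        records.append(buf[offset:offset + length].decode('latin-1'))
        offset += length
    return records

# Two toy records: "SV,123" and "CC,456"
buf = struct.pack('<H', 6) + b'SV,123' + struct.pack('<H', 6) + b'CC,456'
print(parse_records(buf))  # ['SV,123', 'CC,456']
```

This is why the file looks half-readable in an editor: the char data is plain Latin-1 text, but the length prefixes (and any binary integer/date fields) render as junk characters.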
Note that in FastExport, you will get the full length of the line (including trailing spaces). If volumes are small, use a BTEQ export instead (which does not output trailing spaces). If volumes are very large, take the hit and pipe the output through a compression program.
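As a sketch of that pipe idea on a Unix-like host (the file names are hypothetical), you can strip the trailing blanks and compress in one pass:

```shell
# accts.txt is a hypothetical export file; trim trailing
# whitespace from each line, then compress the stream.
printf 'a,b,c   \n' > accts.txt
sed 's/[[:space:]]*$//' accts.txt | gzip > accts.csv.gz
```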
If the volume is pretty small, maybe in the thousands of rows, you could also take the Microsoft Excel route: Excel can easily generate a .txt or .csv file, which can be the input to your DB2 load. The other option is to run your SELECT in SQL Assistant and direct the output to a .txt file. Then again, there might be Windows/Unix platform issues that come in as a challenge. I take this approach to load some DEV Teradata tables with data from PROD.
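On the Windows/Unix point, the usual snag is line endings. A minimal Python sketch (the file names and sample rows are hypothetical) that normalizes Windows CRLF to Unix LF before handing the file to the DB2 load:

```python
# Hypothetical file names and sample data, used only for this sketch.
sample = b'CC,4561143,2004-01-02\r\nSV,1362498,2004-03-04\r\n'
with open('accts_windows.txt', 'wb') as f:
    f.write(sample)

# Rewrite Windows CRLF line endings as Unix LF.
with open('accts_windows.txt', 'rb') as src:
    data = src.read()
with open('accts_unix.txt', 'wb') as dst:
    dst.write(data.replace(b'\r\n', b'\n'))
```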