Is there a way to read parameters from multiple input files in sync (line by line) and repeatedly execute a SQL statement, substituting these values as parameters in the query (via a USING clause for each)?
Example: Two input files input1.txt and input2.txt.
Assume input1.txt has two lines with content:
A
B
And input2.txt has two lines with content:
1
2
And now I need to repeat a specific query, say:
sel * from xyz where name = :name (value from input1.txt) and Id = :id (value from input2.txt);
So ideally it should execute twice:
sel * from xyz where name = 'A' and Id = 1;
sel * from xyz where name = 'B' and Id = 2;
How do we define the USING clause for both of these variables and differentiate between them?
The solution should be platform independent.
Also, is there a way to pass these two variables through one file with some delimiter, and take care of splitting and using them as individual variables in the BTEQ script?
Thank you for your time.
There's no way to have BTEQ read two files in sync and process them as you described.
But you can read a single file carrying multiple parameters per line, using .IMPORT VARTEXT plus .REPEAT *.
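A minimal sketch of that approach, assuming a combined file combined.txt with lines like A,1 and B,2 (the logon placeholders, the parameter names p_name/p_id, and the VARCHAR sizes are illustrative, not from the original answer). Note that with .IMPORT VARTEXT every imported field must be declared VARCHAR, so a numeric column like Id needs a cast in the query:

```
.LOGON tdpid/user,password
.IMPORT VARTEXT ',' FILE = combined.txt
.REPEAT *
USING (p_name VARCHAR(30), p_id VARCHAR(10))
SELECT * FROM xyz
WHERE name = :p_name
  AND Id = CAST(:p_id AS INTEGER);
.LOGOFF
.QUIT
```

This also answers the second part of the question: a single delimited file is exactly what .IMPORT VARTEXT splits into individual USING variables for you.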
As @dnoeth said, this cannot be done as described; BTEQ simply doesn't have that capability.
Out of interest, once you've got the select statements built, what is going to be done with the output of each select?
Written to a file?
Inserted into a table?
A couple of options come to mind, but neither of them uses only BTEQ (as has already been said, BTEQ cannot do this 'out of the box').
1) Combine the two files into one on the client platform. If you need this process to be '[client] platform independent', then an obvious choice for this is Java. Once you've got a single file, use BTEQ to do the work for you as @dnoeth suggested earlier.
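A sketch of that merge step in Java, since that's the language suggested above (the class name, file names, and the comma delimiter are illustrative assumptions):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class MergeInputs {

    // Merge two files line by line into one comma-delimited file,
    // suitable for BTEQ's .IMPORT VARTEXT ','.
    public static void merge(Path in1, Path in2, Path out) {
        try {
            List<String> left = Files.readAllLines(in1);
            List<String> right = Files.readAllLines(in2);
            // The question requires the files to stay in sync line by line,
            // so a length mismatch is an error, not something to paper over.
            if (left.size() != right.size()) {
                throw new IllegalStateException("input files differ in length");
            }
            List<String> merged = new ArrayList<>();
            for (int i = 0; i < left.size(); i++) {
                merged.add(left.get(i) + "," + right.get(i));
            }
            Files.write(out, merged);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        merge(Paths.get("input1.txt"), Paths.get("input2.txt"), Paths.get("combined.txt"));
    }
}
```

With the example inputs from the question, combined.txt would contain A,1 and B,2, ready for the .IMPORT VARTEXT approach.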
Alternatively, if you want this to be '[client] platform independent', don't implement this logic on the client platform at all; do it in the DBMS.
2) If there is some data in the row of each file which allows you to join them together then:
- load each file into a separate table (say F1 and F2)
- run a single SELECT which joins F1, F2 and your other table (xyz in your example).
sel * from xyz
inner join F1 on xyz.name = F1.name
inner join F2 on xyz.id = F2.id
  and F1.other_common_field = F2.other_common_field;
This may give you what you want, check for 1:M relationships etc.
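One way to check for those 1:M relationships before trusting the joined result (a sketch using the table and column names from the example):

```sql
-- Any name appearing more than once in F1 would fan out the join;
-- run the same check on F2.id.
SELECT name, COUNT(*) AS cnt
FROM F1
GROUP BY name
HAVING COUNT(*) > 1;
```

If either query returns rows, the single-SELECT approach will produce more output rows than there are lines in the input files.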
3) If there is NO other data in the rows of each file which allows you to join them together, then you have to create some. I'd suggest using a record number (per file), which you'll have to generate as part of the load process, possibly using a 'for each row' trigger:
CREATE TABLE F1 (
  colID INTEGER,
  colRow_Number INTEGER
) PRIMARY INDEX (colID);

CREATE TRIGGER F1_SEQUENCE
AFTER INSERT ON F1
FOR EACH ROW
BEGIN ATOMIC
  UPDATE F1
  SET colRow_Number = (SELECT COUNT(*) FROM F1)
  WHERE colRow_Number IS NULL;
END;
Then once each file is loaded into a separate table, your final select becomes:
sel * from xyz
inner join F1 on xyz.name = F1.name
inner join F2 on xyz.id = F2.id
  and F1.colRow_Number = F2.colRow_Number;
Caveats (i.e. don't say I didn't warn you...)
- the row-level trigger will slow you down, perhaps too much.