Here is the scenario:
1. I have a BTEQ script that takes data from a source table with billions of rows. From the source I take only 2 years of data and load it into a stage table.
2. Due to the huge volume I had to split the load into three parts: at each step I take 8 months of data from the source and load it into the stage table, so there are 3 insert queries for the stage.
3. The stage table now holds around 80 billion rows. The stage data is then joined with another table and loaded into a target table. Because of the join, the number of rows roughly doubles, so the insert consumes a lot of CPU cycles and also fails with a spool space error.
Is there any way to split such a huge volume of records in a single step?
Note: I want to achieve this using SQL only and do not want to use any load utilities.
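To make the scenario concrete, here is a rough sketch of the current flow; the table and column names (src_tbl, stg_tbl, txn_dt) and the exact date ranges are assumptions for illustration, not from the original post:

```sql
-- Stage load split into three 8-month slices (hypothetical names/dates)
INSERT INTO stg_tbl
SELECT * FROM src_tbl
WHERE txn_dt BETWEEN DATE '2022-01-01' AND DATE '2022-08-31';

INSERT INTO stg_tbl
SELECT * FROM src_tbl
WHERE txn_dt BETWEEN DATE '2022-09-01' AND DATE '2023-04-30';

INSERT INTO stg_tbl
SELECT * FROM src_tbl
WHERE txn_dt BETWEEN DATE '2023-05-01' AND DATE '2023-12-31';
```

The problem step is the single join + insert from stg_tbl into the target, which spools the full (doubled) row set at once.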
Just giving it a try...
Load the data into three work tables (try the same partitioning used to load the stage: 8 months of data at a time). Then insert from the 3 work tables into the target table.
Loading from table to table should not use much spool space.
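A sketch of this suggestion, with hypothetical names (wrk1..wrk3, other_tbl, tgt_tbl, key_col, txn_dt): the idea is that each work table holds the already-joined result for one 8-month slice, so each join only spools roughly a third of the rows, and the final table-to-table inserts need little spool.

```sql
-- Join one 8-month slice at a time into a work table (hypothetical names)
INSERT INTO wrk1
SELECT s.key_col, o.attr_col, s.txn_dt
FROM stg_tbl s
JOIN other_tbl o
  ON o.key_col = s.key_col
WHERE s.txn_dt BETWEEN DATE '2022-01-01' AND DATE '2022-08-31';
-- ... repeat for wrk2 and wrk3 with the other two date ranges ...

-- Table-to-table inserts; these should spool very little
INSERT INTO tgt_tbl SELECT * FROM wrk1;
INSERT INTO tgt_tbl SELECT * FROM wrk2;
INSERT INTO tgt_tbl SELECT * FROM wrk3;
```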
Thanks for the response!
Yes, we can try loading the data into three different stage tables. But only at the end, when the data from all three stage tables is combined, do I have meaningful data.
Also, the stage data has to be joined with one more source table, and there are some aggregations as well, so I need to join 4 tables while loading into the target. Do you really feel this would be a better approach than the current one?
Once again, thanks for your suggestion!
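One variant worth considering, applying the same date-range split directly to the final join instead of adding extra stage tables (all names here are hypothetical): this is only safe if each GROUP BY group falls entirely inside one date slice, e.g. when the date (or a month derived from it) is part of the grouping key; groups that span slices would be aggregated incorrectly.

```sql
-- Hedged sketch: run the join + aggregation one 8-month range at a time.
-- stg_tbl, src2, tgt_tbl, key_col, txn_dt, amt are assumed names.
INSERT INTO tgt_tbl
SELECT s.key_col, s.txn_dt, SUM(s2.amt)
FROM stg_tbl s
JOIN src2 s2
  ON s2.key_col = s.key_col
WHERE s.txn_dt BETWEEN DATE '2022-01-01' AND DATE '2022-08-31'
GROUP BY s.key_col, s.txn_dt;
-- ... repeat for the remaining two 8-month ranges ...
```

Each insert then spools only one slice of the doubled row set rather than all of it at once.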