Can anyone please point out the performance benefits (if any) of using GTTs over permanent tables?
I have a big script that uses many GTTs.
I plan to replace the GTTs with permanent tables, to save the data and time if it fails after many hours of processing.
I would like to know if GTTs perform better compared to permanent tables. If yes, in what way ?
For example: differences in delay times? Is an insert into a permanent table more expensive or slower than one into a GTT? Is there overhead in maintaining the permanent journal?
Global temporary tables don't use permanent journals. They have session-level scope: the data loaded into them is not available after the session disconnects, so their usage really depends on what you are trying to achieve and what level of volatility you want for the data. Some of the basic differences between permanent tables and GTTs are:
A permanent table's data is available to anyone with the appropriate access rights after the creating session has disconnected, whereas a GTT's data has session-wide scope only.
Permanent tables occupy permanent space in the database they are created in, whereas GTTs are created in the temporary space allocated to the user creating them.
The definitions of both permanent tables and GTTs are stored in the data dictionary.
Replacing GTTs with volatile tables could be a better option than permanent tables, but it's really the usage and the scenario that dictate which type of table to use.
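For reference, a minimal Teradata sketch of the three table types (table and column names here are illustrative, not taken from the script above):

```sql
-- Permanent table: definition and data persist across sessions (perm space)
CREATE TABLE sales_perm (
    sale_id INTEGER,
    amount  DECIMAL(10,2)
) PRIMARY INDEX (sale_id);

-- Global temporary table: definition is permanent (in the dictionary),
-- but each session gets its own instance in temporary space
CREATE GLOBAL TEMPORARY TABLE sales_gtt (
    sale_id INTEGER,
    amount  DECIMAL(10,2)
) ON COMMIT PRESERVE ROWS;

-- Volatile table: both definition and data are session-local (spool space)
CREATE VOLATILE TABLE sales_vt (
    sale_id INTEGER,
    amount  DECIMAL(10,2)
) ON COMMIT PRESERVE ROWS;
```

With `ON COMMIT PRESERVE ROWS`, the rows survive each transaction for the life of the session; the default, `ON COMMIT DELETE ROWS`, empties the table at transaction end.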
Need help with a query please.
I want to know which of the two options is more efficient:
1. Creating a permanent table, inserting into the table and then collecting statistics.
Next time, I would use a delete statement before re-inserting data into the table.
2. Creating a volatile table.
The permanent table will hold perm space until the delete is performed. If the data no longer has value when the session is done, a volatile table will use spool and release the space automatically as soon as the session closes.
It may or may not be valuable to collect statistics, depending on how the table is used. If it is always a full scan (e.g. an insert-select), then stats don't matter. If complex queries are executed against the table, stats may be valuable.
Collecting stats on a table, deleting its contents and reloading it without recollecting stats can be dangerous if the amount and/or demographics of each new set of data are significantly different.
Another consideration is how the data is to be loaded into the table. Load utilities can't load into a temp table.
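As a sketch of option 2 above, a volatile table loaded via insert-select with statistics collected afterwards (names here are illustrative assumptions, not from the original script):

```sql
-- Copy the structure of an existing table into a session-local volatile table
CREATE VOLATILE TABLE stage_vt AS (
    SELECT sale_id, amount
    FROM   sales_perm
) WITH NO DATA
ON COMMIT PRESERVE ROWS;

-- Load it with an insert-select
INSERT INTO stage_vt
SELECT sale_id, amount
FROM   sales_perm;

-- Only worthwhile if later queries against stage_vt are complex enough
-- for the optimizer to benefit
COLLECT STATISTICS COLUMN (sale_id) ON stage_vt;
```

The table, its data, and its spool space all disappear automatically when the session ends, so no delete step is needed.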
Thanks for the response.
Also, could you advise whether NOT IN is more efficient than IN?
The code I have looks like below:
where abc NOT IN (sel abc from TableB where con IN (1,2) group by 1);
I have done a lot of research and understand that NOT IN is not very efficient and that subqueries should be avoided, so I am trying to replace this with something more efficient.
What would you suggest?
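One common rewrite is NOT EXISTS. A sketch, assuming the outer column abc comes from a table I'll call TableA (a hypothetical name, since the outer table isn't shown above); note that NOT IN and NOT EXISTS treat NULLs differently, so they are only equivalent if TableB.abc is never NULL:

```sql
SELECT a.*
FROM   TableA a
WHERE  NOT EXISTS (
    SELECT 1
    FROM   TableB b
    WHERE  b.con IN (1, 2)
    AND    b.abc = a.abc
);
```

The GROUP BY 1 in the original subquery becomes unnecessary here, since EXISTS only checks whether any matching row exists rather than materializing a distinct list.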