This is a good read with the note:
Note: If the user whose query unintentionally caused the spool space problem is logged on at the time you run this query, the query will not detect the user because the query is looking for spool usage for a user that is holding spool space but is not logged on.
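For reference, the kind of check being described is roughly the following (a minimal sketch, assuming the standard DBC.DiskSpaceV and DBC.SessionInfoV dictionary views):

-- Users holding spool but not currently logged on; logged-on users are excluded,
-- which is exactly the limitation the note above describes.
sel d.DatabaseName as user_name, sum(d.CurrentSpool) as held_spool_bytes
from DBC.DiskSpaceV d
where d.CurrentSpool > 0
and d.DatabaseName not in (sel UserName from DBC.SessionInfoV)
group by 1
order by 2 desc;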
Specific to queries, you can read the explain output to track down the cause.
If you are running out of spool space, you need to tune your query and check for product joins in the explain plan.
Generate the explain plan and carefully check whether there are any product joins or redistributions of data across AMPs.
Also check for character comparisons in the join conditions.
If there is a product join, try to find the reason and modify the query; see the sketch below.
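As an illustration (the table names orders and customers are made up for this example), prefix the query with EXPLAIN and look for phrases like "product join" and "redistributed by the hash code of" in the output:

-- Hypothetical example: a function applied to the join column can prevent a direct
-- PI-to-PI join and may show up as a redistribution or a product join in the explain text.
explain
select o.order_id, c.cust_name
from orders o
inner join customers c
on trim(o.cust_id) = c.cust_id;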
Hi Raja and Khurram,
I have a file with 10 GB of data. When I insert it into a table I get an "Insufficient memory" error message, even though I have free space in the database.
Why am I getting this error? What might be the reason? Could you please share your thoughts?
Thanks in advance.
You have not shared the name of the utility you are using. Can you please share the script too, if possible? Please paste the exact error as well.
Sounds like your primary index might be skewed. Remember, if you run out of space on one AMP it appears as though you have used up all available currentPerm in the database. Plug your index into the following and check for skew...
sel hashamp(hashbucket(hashrow(<list of primary index column names>))) as amp_no, count(*) as row_count
from <your table name>
group by 1
order by 1;
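If the table is large and you would rather not scan it, a rough alternative is to check per-AMP space from the dictionary instead (a sketch assuming the standard DBC.TableSizeV view; substitute your own database and table names):

-- Per-AMP CurrentPerm for the table; a large gap between the smallest and largest
-- values means the rows are skewed across AMPs.
sel Vproc as amp_no, sum(CurrentPerm) as perm_bytes
from DBC.TableSizeV
where DatabaseName = 'your_database'
and TableName = 'your_table'
group by 1
order by 1;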
Have you tried verifying whether there is any leftover spool (spool that was not released) for the user being used? I see that you are loading from a file with a straight dump into the table. As Raja asked, what type of loader are you using? What is the PI of the table being loaded, and are the values in the PI column of the file skewed?
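A quick way to check for leftover spool (a sketch assuming the standard DBC.DiskSpaceV view; 'load_user' is a placeholder for the user running the load):

-- Per-AMP spool still charged to the load user; non-zero values after the job
-- has finished point to spool that was never released.
sel Vproc as amp_no, sum(CurrentSpool) as spool_bytes
from DBC.DiskSpaceV
where DatabaseName = 'load_user'
group by 1
order by 1;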