Teradata IN List allowable limit


Hi,

On the reporting side, users enter values into a text box, and this input is fed into an IN list in the query. The IN list often ends up containing more than 5000 values/parameters.

The query itself is basic:

SELECT ID, ID_NUM FROM <TABLE_X> WHERE ID_NUM IN (val1, val2, val3, ..., val5000);

The business requirement also allows that users may sometimes enter as many as 8000 values.

Questions:

1. Is it really good practice to send such a huge list in the query?

2. I get the following error messages:

     "Insufficient memory to parse this request during optimizer phase"

     "Row size or sort key size overflow"

I have read the manuals and reference articles.

What I would like to know from this forum is: if you have faced this problem, how did you resolve it?

Thanks,

Sreekanth

3 REPLIES

Re: Teradata IN List allowable limit

Forum members, please reply with your thoughts on this one.


Re: Teradata IN List allowable limit

Check for the possibility of feeding the values through a subquery, as in the sketch below. There should be a performance gain from using a subquery if your IN clause has more than 50 values.
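A minimal sketch of the subquery approach, assuming the user-entered values can first be loaded into a staging table (the table stage_ids and its column id_num are hypothetical names; TABLE_X stands for the original <TABLE_X>):

-- Load the user-entered values into a staging table first,
-- then let the subquery feed them to the filter in place of the literal list.
SELECT ID, ID_NUM
FROM TABLE_X
WHERE ID_NUM IN (SELECT id_num FROM stage_ids);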

The error mentioned is error code 3710, which hints at a complex query requiring numerous parser passes (possibly due to your IN list). An alternative is to increase the value of MaxParseTreeSegs in the DBS Control utility. But if I were you, I would first do basic performance tuning of the query, including statistics collection.
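For example, statistics on the filter column help the optimizer here. One common form of the syntax, using the table and column from the original query (TABLE_X again standing in for the real table name):

-- Collect statistics on the column used in the IN-list filter.
COLLECT STATISTICS COLUMN (ID_NUM) ON TABLE_X;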

Additional thoughts are welcome.

Re: Teradata IN List allowable limit

While handling a fairly different problem, we used our reporting tool to read the user's inputs and call a Teradata macro. The macro read the full input list and placed it into a volatile table, which was then joined with the other tables to produce the result; this effectively works like an IN clause.

Our key problem was that as the list of values in the IN clause expands, the explain plan used to stopped running single AMP data pulls (our filters matched the PI columns) and instead went for a full table scan. By creating a volatile table with the exact same columns as the PI of the fact table and filling it up with user parameters, we were able to do a PI-PI join that resulted in data being extracted pretty fast.