I wanted to know under what circumstances we can receive Error 2616 (numeric overflow). I am facing this issue while running a SELECT query that joins columns from multiple views.
Check your query for any calculations. Assuming the datatype in question is INTEGER, below are the ranges for the integer datatypes; if a calculation crosses these limits, you can get a 2616.
SMALLINT: -32,768 to 32,767
INTEGER: -2,147,483,648 to 2,147,483,647
BIGINT: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Both of the following fail with a numeric overflow:

SELECT CAST(9999999999 AS INT);
SELECT CAST(99999*99999 AS INT);
Cast the expression to a type with a larger range and it should be fine.
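For example, widening one operand before the multiplication keeps the arithmetic in a larger type, so the second statement above no longer overflows (a sketch; exact typing rules depend on your Teradata session settings):

```sql
-- The product 9,999,800,001 exceeds the INTEGER range,
-- but fits comfortably in BIGINT once an operand is widened first.
SELECT CAST(99999 AS BIGINT) * 99999;
```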
Also, COUNT uses a larger field internally, but in Teradata session mode the result is implicitly cast to INTEGER.
So you may want to explicitly CAST(COUNT(...) AS DECIMAL(18)) or BIGINT when dealing with large tables.
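A minimal sketch of that explicit cast (my_big_table is a placeholder name):

```sql
-- Without the CAST, a row count above 2,147,483,647 can raise a 2616
-- in Teradata session mode, since COUNT's result is returned as INTEGER.
SELECT CAST(COUNT(*) AS BIGINT) AS row_cnt
FROM my_big_table;
```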
I did the required casting wherever I felt it was necessary. I am actually calculating a DENSE_RANK on a table that has 2.5 billion records, which I suspect is causing the error. For that I cast the DENSE_RANK function as shown below, but I am still getting the same issue.
Is the MAX() query working fine without a 2616 error?
Maybe you need to consider a CAST() there too, depending on the underlying column (and the data).
The internal result data type for DENSE_RANK is INTEGER, so CAST doesn't help here as it would for COUNT.
RANK, ROW_NUMBER, etc. all return INTEGER. But SUM can handle larger values. Building on dnoeth's blog posts, a BIGINT DENSE_RANK could be computed as:
SELECT part_col, data_col,
   /* DENSE_RANK: increment the rank only for the first row of each distinct value */
   SUM(CASE WHEN rowno = 1 THEN CAST(1 AS BIGINT) ELSE 0 END)
      OVER (PARTITION BY part_col ORDER BY data_col, rowno
            ROWS UNBOUNDED PRECEDING) AS big_DENSE_RANK
FROM
   (SELECT part_col, data_col,
       /* Assign sequence numbers within groups of rows that should have the same rank */
       ROW_NUMBER()
          OVER (PARTITION BY part_col, data_col ORDER BY data_col) AS rowno
    FROM my_table
   ) AS DT;
In your specific case, there would be no "part_col" and the windowed SUM would have no PARTITION BY clause.
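Concretely, the sketch without a partitioning column would look like this (my_table and data_col are placeholder names):

```sql
SELECT data_col,
   /* global BIGINT dense rank: count one per distinct data_col value */
   SUM(CASE WHEN rowno = 1 THEN CAST(1 AS BIGINT) ELSE 0 END)
      OVER (ORDER BY data_col, rowno
            ROWS UNBOUNDED PRECEDING) AS big_DENSE_RANK
FROM
   (SELECT data_col,
       ROW_NUMBER()
          OVER (PARTITION BY data_col ORDER BY data_col) AS rowno
    FROM my_table
   ) AS DT;
```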
It's a workaround, but running an OLAP function twice on a 2.5 billion row table is a huge overhead.
I really don't know why those functions can't return a BIGINT on demand; even COUNT can do that. It seems similar to the 4 digits for INTERVALs: nobody seemed to expect more than 2.1 billion rows in a table, or durations exceeding 9999 seconds :-(
There should be an Enhancement Request for this; eBay, with a 4000+ billion row table, would probably support it, too :-)