All Forums Database
Abdulaziz 10 posts Joined 10/11
21 Jan 2016
2616: Numeric overflow occurred during computation.

Hi,
I wanted to know under what circumstances/scenarios we can receive Error 2616. I am facing this issue while running a SELECT query that joins columns from multiple views.
Thank you.
Regards
Abdulaziz Shaikh

21 Jan 2016
Look for any calculations. Assuming the datatype in question is an integer type, below are the ranges for the INT datatypes.
If you cross these limits, you can get a 2616.

SMALLINT: -32,768 to 32,767
INTEGER: -2,147,483,648 to 2,147,483,647
BIGINT: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

Classic examples:
SELECT CAST(9999999999 AS INT);
SELECT CAST(99999 * 99999 AS INT);

Cast it to a higher range type and it should be good. 
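For the second example above, note that the fix is to widen an operand before the multiplication, not to cast the already-overflowed result; a minimal sketch:

```sql
-- Still fails with 2616: both operands are INTEGER, so the product
-- overflows before the outer CAST is ever applied.
-- SELECT CAST(99999 * 99999 AS INT);

-- Works: promoting one operand first makes the whole
-- multiplication happen in BIGINT.
SELECT CAST(99999 AS BIGINT) * 99999;
```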

 

Fred 1096 posts Joined 08/04
21 Jan 2016

Also, COUNT uses a larger field internally, but in Teradata mode the result is implicitly CAST to INTEGER.
So you may want an explicit CAST(COUNT(...) AS DECIMAL(18)) or BIGINT when dealing with large tables.
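For example (the table name is just a placeholder), the explicit CAST widens the result before the implicit INTEGER conversion can overflow:

```sql
-- In Teradata mode, a bare COUNT(*) comes back as INTEGER and can raise
-- 2616 on tables with more than 2,147,483,647 rows.
SELECT CAST(COUNT(*) AS BIGINT) AS row_cnt
FROM big_tab;
```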

Abdulaziz 10 posts Joined 10/11
21 Jan 2016

I did the required casting wherever I felt it was necessary. I am actually calculating a DENSE_RANK on a table which has 2.5 billion records, and I suspect this is causing the error. For that, I cast the DENSE_RANK function as shown below, but I am still getting the same issue.

SELECT
    (CAST(DENSE_RANK() OVER (ORDER BY COL1, COL2, COL3) AS BIGINT)
     + XYZ.COL21) AS KEY_COLUMN
FROM TBL1
INNER JOIN
    (SELECT MAX(COL21) AS COL21
     FROM TBL2) XYZ
    ON 1=1;

Thanks

Abdulaziz Shaikh


21 Jan 2016

Is the MAX() query working fine without a 2616 error?
Maybe you need to consider a CAST() there too, depending on the underlying column (and the data).

Abdulaziz 10 posts Joined 10/11
22 Jan 2016

Yes, that's working fine; it returns the value 1. Moreover, the datatype of TBL2.COL21 is already BIGINT.

Regards,
Abdulaziz Shaikh

Fred 1096 posts Joined 08/04
22 Jan 2016

The internal result data type for DENSE_RANK is INTEGER, so CAST doesn't help here as it would for COUNT.
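To illustrate (huge_tab is a hypothetical table): the overflow happens inside DENSE_RANK itself, before the outer CAST is applied, so once the rank passes 2,147,483,647 this still raises 2616:

```sql
-- The CAST runs on DENSE_RANK's INTEGER result, which has already
-- overflowed by the time the rank exceeds 2,147,483,647.
SELECT CAST(DENSE_RANK() OVER (ORDER BY col1) AS BIGINT)
FROM huge_tab;
```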

Abdulaziz 10 posts Joined 10/11
22 Jan 2016

So do we have any solution for generating a rank greater than what INTEGER can hold?

Regards,
Abdulaziz Shaikh

Fred 1096 posts Joined 08/04
23 Jan 2016

RANK, ROW_NUMBER, etc. all return INTEGER. But SUM can handle larger values. Building on dnoeth's blog posts, a BIGINT DENSE_RANK could be computed as:
 
SELECT part_col, data_col,
       /* DENSE_RANK: increment the rank only for the first row of each distinct value */
       SUM(CASE WHEN rowno = 1 THEN CAST(1 AS BIGINT) ELSE 0 END)
           OVER (PARTITION BY part_col ORDER BY data_col, rowno
                 ROWS UNBOUNDED PRECEDING) AS big_DENSE_RANK
FROM
  (SELECT part_col, data_col,
          /* Assign sequence numbers within groups of rows that should share a rank */
          ROW_NUMBER()
              OVER (PARTITION BY part_col, data_col ORDER BY data_col) AS rowno
   FROM tab
  ) AS DT;
 
In your specific case, there would be no "part_col" and the windowed SUM would have no PARTITION BY clause.
 
 
 

dnoeth 4628 posts Joined 11/04
23 Jan 2016

Hi Fred,
it's a workaround, but running an OLAP function twice on a 2.5-billion-row table is a huge overhead.
I really don't know why those functions can't return a BIGINT on demand; even COUNT can do that. It seems similar to the 4 digits for INTERVALs: nobody expected more than 2.1 billion rows in a table or durations exceeding 9999 seconds :-(
There should be an Enhancement Request for this; eBay, with a 4000+ billion row table, will probably support it, too :-)
 
 
 

Dieter

ulrich 816 posts Joined 09/09
25 Jan 2016

I can hardly believe it either. How do these limitations fit with the scalability of the hardware?
If there are concerns about backwards compatibility, this could be handled via dbc.control fields...
It is really hard to believe...

