goldminer 118 posts Joined 05/09
13 Sep 2012
DBQL SQLTextInfo 3577: row size or sort key size overflow

I am getting a 3577: row size or sort key size overflow error when attempting to query dbqlsqltbl.sqltextinfo in SQL Assistant (yes, it is a large query). Does anyone have any workarounds?

 

Thanks,

 

Joe

Qaisar Kiani 337 posts Joined 11/05
13 Sep 2012

Which version of Teradata are you on?

I faced a similar kind of issue some time ago, I believe on V2R6: the combined size of the selected columns was expected to grow beyond the ~64K row limit, so Teradata aborted with that error.

I couldn't find any specific solution at that time other than breaking up the SQL into multiple sets!
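For the original DBQL question, one way to apply that idea (a rough sketch, untested; it assumes the standard DBC.DBQLSqlTbl table, and the QueryID filter and 15,000-character slice width are only illustrative) is to fetch SQLTextInfo one slice per query, so no single answer row gets anywhere near the row limit:

SELECT QueryID,
       SQLRowNo,
       CAST(SUBSTR(SQLTextInfo, 1, 15000) AS VARCHAR(15000)) AS sql_part1
FROM DBC.DBQLSqlTbl
WHERE QueryID = 123456789012345;

-- second slice: characters 15,001 to 30,000
SELECT QueryID,
       SQLRowNo,
       CAST(SUBSTR(SQLTextInfo, 15001, 15000) AS VARCHAR(15000)) AS sql_part2
FROM DBC.DBQLSqlTbl
WHERE QueryID = 123456789012345;

-- and a final slice for characters 30,001 to 32,000 if needed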

dnoeth 4628 posts Joined 11/04
13 Sep 2012

Hi Joe,

how do you access that column? Only in the Select list?

It's defined as a UNICODE VARCHAR(32000), which is already close to the maximum row size of approx. 64 KB.

Dieter

 

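To put numbers on that (my arithmetic): UNICODE characters are stored as 2 bytes each, so SQLTextInfo alone accounts for 32,000 x 2 = 64,000 bytes of defined row width against a limit of roughly 65,536 bytes. Exporting it over a UTF8 session, or adding more columns or expressions to the select list, pushes it over and produces the 3577.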

Roopalini 13 posts Joined 05/08
14 Jan 2013

Hi Dieter,
I am facing a similar issue. I have two tables, A and B, each with 60 columns. When I run the statement below in Teradata Studio, I get the error 3577: row size or sort key size overflow; however, when I run the same in SQL Assistant, it runs fine. Could you let me know why Teradata Studio throws the error?
Below is the query
 
SELECT a.*,b.*
FROM db_1.ABC a,
db_1.DEF b
WHERE a.A1 = b.A1;
Thanks
Roopalini
 

dnoeth 4628 posts Joined 11/04
15 Jan 2013

Hi Roopalini,
it's probably because Studio uses a UNICODE session charset, too.
Check the table definitions to see whether they include some large VarChars defined as LATIN.
Dieter

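A quick way to check that (a sketch; DBC.ColumnsV is the standard dictionary view, and db_1/ABC/DEF are the names from the query above) is to list the character columns of both tables with their defined lengths and character sets:

SELECT TableName,
       ColumnName,
       ColumnType,          -- 'CF' = CHAR, 'CV' = VARCHAR
       ColumnLength,
       CharType             -- 1 = LATIN, 2 = UNICODE
FROM DBC.ColumnsV
WHERE DatabaseName = 'db_1'
  AND TableName IN ('ABC', 'DEF')
  AND ColumnType IN ('CF', 'CV')
ORDER BY TableName, ColumnId;

Large LATIN VarChars are the usual culprits: their export width grows once the session switches to a Unicode character set, which is what pushes a SELECT a.*, b.* over the limit.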

Roopalini 13 posts Joined 05/08
16 Jan 2013

Thanks Dieter!! After changing the CHARSET to ASCII from UTF8, I was able to run the SQL.  
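(Side note for later readers: you can confirm which session character set is in effect by running

HELP SESSION;

which reports the current character set along with the other session settings; in SQL Assistant the character set is normally configured on the ODBC data source, and in Studio on the connection profile.)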

ETMack 2 posts Joined 05/13
07 May 2013

 
In layman's terms, if you are using "SELECT *" you are returning every column of every table you are joining, and this message is saying the row has hit its size limit. What I did was start getting specific about which columns I want to display. As I join a new table and want to see every column of just that new table, I do the following:
Example:
select
    uacc.acc_num,
    oacc.cm_log_owner_id,
    ccomp.name,
    ores.result_date,
    ores.ord_value,
    pat.pat_mrn_id,
    pat.pat_name,
    ztype.name as ENC_TYPE,
    cser.prov_name as AUTHORIZING_PROVIDER,
    ibm.*
Hope this helps.
Etmack

lucactarus 2 posts Joined 02/14
10 Feb 2014

Hi. Can you please help me with the following query?
select oreplace(leapra.imp_impo/100 (format 'Z(7)9.99') (char(11)), '.', ',') imp_impo,
       oreplace(leapra.imp_fin/100  (format 'Z(7)9.99') (char(11)), '.', ',') imp_fin,
       oreplace(leapra.imp_risc/100 (format 'Z(7)9.99') (char(11)), '.', ',') riscatto,
       oreplace(leapra.imp_sir/100  (format 'Z(7)9.99') (char(11)), '.', ',') spe_inc_rate,
       oreplace(leapra.imp_cano/100 (format 'Z(7)9.99') (char(11)), '.', ',') canone
from DB_FND_VIEW_TRS_PRD.dwm_an_leapra leapra;

 

This returns error 3577.

 

If I omit one of the columns, it works fine...

TY
 
Luca
 

dnoeth 4628 posts Joined 11/04
10 Feb 2014

Hi Luca,
you need to CAST the result of each oReplace to a shorter VarChar; in TD14 oReplace returns VARCHAR(8000) UNICODE, so five of them together already exceed the row limit.
 
Or you use TO_CHAR to switch comma and period:
TO_CHAR(x, '9999D99','NLS_NUMERIC_CHARACTERS = '',.''') 

Dieter
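Applied to the first column of Luca's query, the two suggestions would look roughly like this (a sketch, untested; the VARCHAR(11) width mirrors his CHAR(11) format, and the TO_CHAR digit mask is only illustrative):

select cast(oreplace(leapra.imp_impo/100 (format 'Z(7)9.99') (char(11)), '.', ',') as varchar(11)) as imp_impo
from DB_FND_VIEW_TRS_PRD.dwm_an_leapra leapra;

select to_char(leapra.imp_impo/100, '9999999D99', 'NLS_NUMERIC_CHARACTERS = '',.''') as imp_impo
from DB_FND_VIEW_TRS_PRD.dwm_an_leapra leapra;

The same CAST (or TO_CHAR) applies to the other four columns; once each expression is pinned to a small VarChar, the combined defined size of the five columns stays well under the row limit.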

lucactarus 2 posts Joined 02/14
11 Feb 2014

Ty so much, Dieter!
 
Luca

Tapcu 1 post Joined 03/12
28 Apr 2014

Hi.

I am using Teradata 14.0 and tried to select data from a table that contains 190 columns, each of type VARCHAR(1000), and got error 3577.

Please tell me how the size of the selected data is limited. How many columns can I select?

dnoeth 4628 posts Joined 11/04
28 Apr 2014

You can select up to 2048 columns with a combined size of a bit less than 64 KB.
This means the defined maximum size, not the actual size.
If your session uses UTF8 you need to multiply the defined size by 3, for UTF16 by 2.

Dieter
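Applying Dieter's factors to the 190-column example above (my arithmetic): under a UTF8 session that is 190 columns x 1,000 characters x 3 bytes = 570,000 bytes of defined width, far beyond the roughly 64 KB (65,536 byte) limit, so the SELECT fails no matter how short the actual values are. Even with no character-set expansion at all, 190 x 1,000 = 190,000 bytes is still almost three times the limit.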

SmarakDas 51 posts Joined 02/12
20 May 2014

Hello Dieter,
 
As per your post, the row size is decided by the sum of the defined column sizes rather than the actual data size (which can be much smaller than the column specification). Say two columns are defined as VARCHAR(10) and CHAR(20). Even though the data in the two columns is just "A" and "B", the row size will be calculated as 30 bytes (LATIN), 60 bytes (UTF-16), or 90 bytes (UTF-8).
 
From a performance perspective, if I have to join a LATIN column to a UNICODE column, is it advisable to join them directly or to convert one of the columns via Unicode_To_Latin or Latin_To_Unicode? As far as I am aware, Latin_To_Unicode always works, whereas Unicode_To_Latin is not reliable, since not every Unicode character has a LATIN equivalent.
 
Thanks,
Smarak
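For what it's worth, the two directions mentioned above would look roughly like this in a join (a sketch; latin_tbl, unicode_tbl, and their columns are made-up names). Converting the LATIN side up to UNICODE is always safe:

SELECT l.cust_id,
       u.order_txt
FROM latin_tbl l
JOIN unicode_tbl u
  ON TRANSLATE(l.cust_code USING LATIN_TO_UNICODE) = u.cust_code;

Going the other way, TRANSLATE(u.cust_code USING UNICODE_TO_LATIN) raises an error whenever a character has no LATIN equivalent; adding WITH ERROR makes it substitute the error character instead of failing.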

SmarakDas 51 posts Joined 02/12
20 May 2014

Also, an earlier post says: "Thanks Dieter!! After changing the CHARSET to ASCII from UTF8, I was able to run the SQL."
 
Here, is "ASCII" the same as "LATIN"? From what I gather, there are two server character sets (LATIN and UNICODE), with UTF-8 and UTF-16 being two encodings of the Unicode character set. Kindly correct me if I am wrong.
 
Thanks,
Smarak

Raja_KT 1246 posts Joined 07/09
21 May 2014

Data modelers should be far-sighted from the beginning and make sure that, for columns used in joins, Unicode_To_Latin or Latin_To_Unicode conversions can be avoided, owing to the performance cost.

Raja K Thaw
My wiki: http://en.wikipedia.org/wiki/User:Kt_raj1
Street Children suffer not by their fault. We can help them if we want.
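As a small illustration of that advice (a sketch; the database, table, and column names are made up), declaring the join columns with the same character set up front means no Unicode_To_Latin/Latin_To_Unicode conversion is ever needed at query time:

CREATE TABLE db_1.cust_a (
    cust_id   INTEGER NOT NULL,
    cust_code VARCHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC
)
PRIMARY INDEX (cust_id);

CREATE TABLE db_1.cust_b (
    cust_id   INTEGER NOT NULL,
    cust_code VARCHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC   -- same character set as cust_a.cust_code
)
PRIMARY INDEX (cust_id);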

DBCHELP 6 posts Joined 07/14
16 Jun 2015

Same issue with TD14. The character set was LATIN.
I had a column that was VARCHAR(3000); using the function below gave error 3577: row size or sort key size overflow.
Solution: cast each result to a size appropriate for the actual data value.

-- using TD function to find semicolon and values
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 1)  AS VARCHAR(50)) AS left_value1
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 2)  AS VARCHAR(50)) AS left_value2
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 3)  AS VARCHAR(50)) AS left_value3
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 4)  AS VARCHAR(50)) AS left_value4
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 5)  AS VARCHAR(50)) AS left_value5
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 6)  AS VARCHAR(50)) AS left_value6
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 7)  AS VARCHAR(50)) AS left_value7
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 8)  AS VARCHAR(50)) AS left_value8
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 9)  AS VARCHAR(50)) AS left_value9
, CAST(STRTOK(meas_values_tbl.meas_value, ';', 10) AS VARCHAR(50)) AS left_value10

 

Many thanks for Dieter's posts.

jrgchip 4 posts Joined 05/15
29 Sep 2015

@dnoeth: "If your session uses UTF8 you need to multiply the defined size by 3, for UTF16 by 2."
UTF-8 uses 1-4 bytes per glyph, so shouldn't Teradata allocate 4 bytes (not 3) per defined char for UTF-8?
UTF-16 uses (I believe) 2 or 4 bytes per glyph, so shouldn't Teradata allocate 4 bytes (not 2) per defined char for UTF-16?
In practice, I see evidence that TD is using the factors Dieter mentioned, but they don't seem correct.
I suspect the answer is that TD's UNICODE, which I've understood to be a proprietary DBCS, is the set of 2-byte characters from UTF-16.  If that's the case, it need not do any expansion to represent it as true UTF-16 and would only need 3 bytes to expand to true UTF-8.
Confirmation or clarification of this would be hugely appreciated!
 

Fred 1096 posts Joined 08/04
29 Sep 2015

Teradata currently does not support surrogate code points (4-byte UTF16) defined in Unicode 3.1 or above.
See the "Unicode Server Character Set" topic in the "International Character Set Support" manual.
You can download or browse at www.info.teradata.com.

pinaldba 4 posts Joined 09/10
30 Sep 2015

Hi,
The error can be resolved using TRANSLATE:
 

CAST(TRANSLATE(s.SQLTextInfo USING UNICODE_TO_LATIN) AS VARCHAR(31000)) AS SQLTextInfo

 
Thanks
Pinal
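Put in context against the DBQL table from the original question, that would look something like this (a sketch; the WITH ERROR option is my addition and simply substitutes the error character instead of failing when the text contains characters with no LATIN equivalent):

SELECT s.QueryID,
       s.SQLRowNo,
       CAST(TRANSLATE(s.SQLTextInfo USING UNICODE_TO_LATIN WITH ERROR) AS VARCHAR(31000)) AS SQLTextInfo
FROM DBC.DBQLSqlTbl s;

The TRANSLATE moves the text to the LATIN character set and the CAST pins the defined width, which is what gets the row back under the limit.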

jrgchip 4 posts Joined 05/15
30 Sep 2015

Thanks, Fred.  I found the UTF-16 warning!  I don't see a similar warning that explains that 4-byte UTF-8 encodings are not supported.  But the "UTF8 Multibyte Sequences" doc only shows 1, 2, and 3-byte sequences ... implying TD does not support the 4-byte sequence!
Given that I think TD is using UCS-2 as its server character set, that makes sense.
