All Forums Database
hemanth_td 4 posts Joined 08/08
12 Jun 2013
Behavior of VARCHAR and COMPRESSed columns in SPOOL

Hello,
Could someone please answer:
1. REGARDING VARCHAR:
I have a table with one of the columns defined as VARCHAR(255) even though the char length would never exceed 15 characters.
Is there any performance impact in defining a column as VARCHAR(255) versus defining it as VARCHAR(15)?
I was told by someone that when we query a table with VARCHAR columns, the VARCHAR fields are expanded to their full defined length in spool.
The greater the spool file size, the more I/O hits --> so there is significant performance degradation from defining VARCHAR columns with "unnecessarily" great lengths.
Is my understanding correct?
2. REGARDING COMPRESS
Will the column COMPRESSion be lost when a table (defined with multi-value compression on some columns) is read into spool?
 
Thanks in advance!
 

dnoeth 4628 posts Joined 11/04
12 Jun 2013

#1: False in general, but true when the column is used in GROUP BY/ORDER BY/UNION, where VARCHARs are expanded to their defined maximum in spool. The maximum size is also considered during FastLoad/MLoad, resulting in fewer rows per block being sent. For me, an oversized VARCHAR is always an indication that you don't care (or don't know) about your actual data :-)
#2: False, any Multi-Value Compression is carried into spool (unless you do some calculation on that column).
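To illustrate both points, here is a hypothetical DDL sketch (table name, column names, and the compressed value list are invented for illustration; VARCHAR multi-value compression requires a Teradata release that supports it):

```sql
-- Right-sized VARCHAR plus Multi-Value Compression (hypothetical example).
CREATE TABLE sales_region
( region_id INTEGER NOT NULL
, region_cd VARCHAR(15)                      -- sized to the actual data, not 255
    COMPRESS ('NORTH','SOUTH','EAST','WEST') -- MVC: these values take no space in the row body
)
PRIMARY INDEX (region_id);

-- The MVC is carried into spool when the table is read,
-- but GROUP BY is one of the cases where the VARCHAR is
-- expanded to its defined maximum in spool:
SELECT region_cd, COUNT(*)
FROM sales_region
GROUP BY 1;
```

With VARCHAR(15) the spool expansion costs at most 15 bytes per value; with VARCHAR(255) the same query would expand each value to 255 bytes.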
 
Dieter

CarlosAL 512 posts Joined 04/08
12 Jun 2013

Sorry for jumping in.
About #1: since 'performance' is somewhat vague in this question, I would add that many client applications calculate buffer and memory sizes for queries (result sets) based on the declared data types, so a client app may have to reserve much more memory than is actually needed.
I agree with Dieter about taking care in design. Also, tighter sizes give you an additional error-check level on data entry.
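The error-check point can be sketched like this (hypothetical table; the exact error message depends on the client and session settings):

```sql
-- With a right-sized column, an over-long value is rejected
-- at insert time instead of being silently accepted.
CREATE TABLE customer_code
( cust_cd VARCHAR(15) NOT NULL );

INSERT INTO customer_code VALUES ('THIS-VALUE-IS-WAY-TOO-LONG');
-- fails with a string-truncation error; an oversized
-- VARCHAR(255) column would have accepted the bad value
```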
Cheers.
Carlos.

hemanth_td 4 posts Joined 08/08
12 Jun 2013

Interesting info, thanks very much Dieter!!
 
 
