
We developed a free compression tool, which you can download from our website (15 pages of documentation are included):
http://www.dwhpro.com/teradata-compression-3/
Sincerely
Roland

Hi All,
Can we create a PPI on compressed columns?
If yes, please share a link where I can find info or material.
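
For illustration, a hedged sketch of the combination being asked about (all names are made up). Note that, as far as I know, MVC cannot be placed on the primary index columns themselves, so the compressed column here is a non-index column:

    CREATE TABLE sales_hist
    ( store_id  INTEGER NOT NULL
    , sale_date DATE NOT NULL
    , region    CHAR(10) COMPRESS ('NORTH','SOUTH','EAST','WEST')  -- MVC on a non-PI column
    ) PRIMARY INDEX (store_id)
    PARTITION BY RANGE_N (sale_date BETWEEN DATE '2010-01-01'
                                    AND     DATE '2014-12-31'
                                    EACH INTERVAL '1' MONTH);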
 
Thanks,
Praveen.

Hi,
  Can we compress a TIMESTAMP field in TD 14? (I know that in earlier versions, TIMESTAMP fields could be compressed only for NULL values.) If yes, can the syntax also be provided?
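
Not an authoritative answer, but here is a sketch of what multi-value compression on a TIMESTAMP column would look like, assuming TD 14's extension of MVC to additional data types covers TIMESTAMP (table and column names are made up):

    CREATE TABLE event_log
    ( event_id INTEGER
    , event_ts TIMESTAMP(0) COMPRESS (TIMESTAMP '1900-01-01 00:00:00')  -- compress a common sentinel value
    ) PRIMARY INDEX (event_id);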
 
Thanks for the help

Hi,
When adding MVC to VARCHARs, is there any space benefit realised from NULL-valued columns?
I don't believe there is (as NULL-valued VARCHARs take up no space anyway), but I couldn't find anything specific in the manuals.
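
For reference, a sketch of the two variants in question (TD 13.10+ syntax, since earlier releases do not allow MVC on VARCHAR; names are made up). The open question is whether the second form buys anything for NULLs, given that a NULL VARCHAR already stores no data bytes:

    CREATE TABLE t_a (pk INTEGER, note VARCHAR(100) COMPRESS ('N/A'))        PRIMARY INDEX (pk);
    CREATE TABLE t_b (pk INTEGER, note VARCHAR(100) COMPRESS (NULL, 'N/A'))  PRIMARY INDEX (pk);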
Thanks
Nick

I am considering the implementation of algorithmic compression within our 13.10 system's DBQL history tables. We have quite a long retention requirement for this data, and the daily maintenance and nightly backups are starting to become an issue because of the large sizes.
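
As a first step, a quick way to gauge how big those history tables actually are; this is a standard DBC query, with 'dbql_hist' as a placeholder for wherever your history copies live:

    SELECT DatabaseName
         , TableName
         , CAST(SUM(CurrentPerm) / 1024**3 AS DECIMAL(18,2)) AS current_gb
    FROM DBC.TableSize
    WHERE DatabaseName = 'dbql_hist'
    GROUP BY 1, 2
    ORDER BY 3 DESC;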

Go in-depth on how NULLs and Compression are managed in a Teradata system.

Teradata 13.10 features Block Level Compression (BLC), which provides the capability to perform compression on whole data blocks at the file system level before the data blocks are actually written to storage. Like any compression feature, BLC helps save space and reduce I/O.

This BLC utility is for Teradata users to run against a TD 13.10 system to select a list of BLC candidate tables and to evaluate BLC's impact on space and speed for each table of interest, providing information for selecting appropriate tables to apply BLC to.


Can TPT compress data before transporting it over Ethernet, to save bandwidth (on the ETL side)?

Can FastLoad do likewise?

Everyone is aware of Teradata's continued commitment to reducing the footprint and resource usage of data by adding compression capabilities, and of the resulting perm space savings and performance gains from reduced block sizes, I/O and spool usage.

Hello Gurus,

My table has a column with LATIN character set, of type VARCHAR(50). All the records have the value '-99999' in this column.

I thought it would be wise to change the data type to CHAR(50) and compress on '-99999'. It turned out that my table size in turn increased.

Can someone please explain to me what could be the reason?
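
For context, a rough back-of-the-envelope sizing under the usual Teradata storage rules (my assumptions, not a definitive explanation):

    -- Approximate per-row cost of the column in each variant:
    --   VARCHAR(50) LATIN holding '-99999' : 2-byte offset + 6 data bytes = ~8 bytes
    --   CHAR(50)    LATIN, no compression  : 50 bytes (fixed width, space padded)
    --   CHAR(50)    LATIN, COMPRESS value  : ~0 data bytes + compress presence bits
    CREATE TABLE demo_compress
    ( id  INTEGER
    , val CHAR(50) CHARACTER SET LATIN COMPRESS ('-99999')
    ) PRIMARY INDEX (id);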

After reading that "The system always compresses nulls whether you specify null compression or not" in the Database Design (Sept 2007) documentation, I decided to test this.
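
A minimal sketch of such a test (table and column names are made up): create one table with and one without explicit NULL compression, load both with identical data containing many NULLs, and compare perm space:

    CREATE TABLE t_nocomp (pk INTEGER, c1 INTEGER)               PRIMARY INDEX (pk);
    CREATE TABLE t_comp   (pk INTEGER, c1 INTEGER COMPRESS NULL) PRIMARY INDEX (pk);
    -- ...load both tables with the same data, then:
    SELECT TableName, SUM(CurrentPerm) AS current_bytes
    FROM DBC.TableSize
    WHERE DatabaseName = DATABASE
      AND TableName IN ('t_nocomp', 't_comp')
    GROUP BY 1;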

This session will focus on block-level compression (BLC), how it works, what compression rates you can expect, and where it is appropriate to define. Examples of the impact on CPU usage and elapsed time when queries access compressed tables will be shared, and the overhead while performing different database operations on these tables will be explored. Plenty of tips and techniques for successful use of BLC are offered.

One of the new compression features in Teradata 13.10 is Block Level Compression (BLC), which provides the capability to perform compression on whole data blocks at the file system level before the data blocks are actually written to storage. Like any compression feature, BLC helps save space and reduce I/O.

There is a CPU cost to compress data on insert, and a CPU cost to decompress whole data blocks whenever the compressed data blocks are accessed. Even when only one column of a single row is needed, the whole data block must be decompressed. For updates, the compressed data blocks have to be decompressed first and then recompressed. Careful evaluation should be done before applying BLC in your production systems.

The ALC (Algorithmic Compression) test package contains UDFs simulating TD 13.10 built-in compression functions, test templates for Latin and Unicode character columns, and step-by-step instructions. It is intended for TD users to run over specific data at column level to determine the compression rates of the TD 13.10 built-in compression algorithms. The test results provide information for selecting an appropriate algorithm for specific data. These tests use read-only operations and can be executed on any release that supports UDFs (V2R6.2 and forward). It is recommended to run these tests during off-peak hours, as they will use a significant amount of system resources (CPU bound).
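
A hypothetical example of the kind of column-level probe this enables; alc_compress stands in for whichever test UDF the package installs, assumed here to take a VARCHAR and return a VARBYTE:

    SELECT SUM(CHARACTER_LENGTH(col1))    AS raw_chars
         , SUM(BYTES(alc_compress(col1))) AS compressed_bytes
    FROM mydb.mytable;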

The purpose of this series was to give you some basic queries that I use to get a quick snapshot of how well tuned an EDW is from a workload analysis and database tuning perspective.

The four topics were (direct links are provided at the end of the article):

  • Part 1 - Excessive Use of String Manipulation Verbs
  • Part 2 - Analyze Secondary Index Usage
  • Part 3 - Statistics Analysis
  • Part 4 - Compression Analysis

Have you tried them yet?

Teradata 13.10 provides the Algorithmic Compression (ALC) feature, which allows users to apply compression/decompression functions to a specific column of character or byte type. The compression/decompression functions may be Teradata built-in functions provided along with ALC, or user-provided compression/decompression algorithms registered as UDFs.
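
A sketch of the ALC syntax using the TD 13.10 built-in UTF8 transform pair (table and column names are made up):

    CREATE TABLE comments
    ( comment_id  INTEGER
    , comment_txt VARCHAR(1000) CHARACTER SET UNICODE
        COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
        DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
    ) PRIMARY INDEX (comment_id);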

Ok, so I shouldn’t even need to broach this topic, as I’m sure you have all heard it before: compress, compress, and compress.

In Part 3 of this series, we will take a quick look at how statistics are implemented and maintained at your site.

Statistics collection can be a complicated and very deep topic, with discussions on the frequency of collection, whether to use sampled stats, automation strategies, etc. This analysis is not going to go that deep; it is a high-level look at the statistics on the tables, and I am looking for just two things (a minimal starting query is sketched after the list):

  • Are statistics applied to the tables or missing?
  • For those that are applied, is there consistency in the application and collection process?
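
As promised above, a minimal starting point (mydb.mytable and col1 are placeholders):

    HELP STATISTICS mydb.mytable;                      -- what is collected and when it was last refreshed
    COLLECT STATISTICS ON mydb.mytable COLUMN (col1);  -- (re)collect stats on a single column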

In-place compression UDF. Compresses column values for VARCHARs and BLOBs, similar to zipping the column before inserting. Can also be used for encrypting data in the database; after compression, the data is unreadable until decompressed.
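
A hypothetical usage sketch; zip_col and unzip_col are stand-in names for the UDF pair described above:

    INSERT INTO mydb.docs (doc_id, doc_body)
    VALUES (1, zip_col('some long, repetitive text ...'));

    SELECT doc_id, unzip_col(doc_body) AS doc_text
    FROM mydb.docs;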