We developed a free compression tool that you can download from our website; the 15-page documentation is included.
08 May 2015
Hi All,
31 Mar 2014 | 1 comment
Hi,
07 Mar 2014 | 6 comments
Hi,
07 May 2013
I am considering the implementation of algorithmic compression within our 13.10 system's DBQL History tables. We have quite a long retention requirement for this data, and the daily maintenance and nightly backups are starting to become an issue because of the large table sizes.
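As a rough illustration of the approach being considered (all database, table, and column names below are hypothetical stand-ins for a site-specific DBQL history copy, and the built-in TransUnicodeToUTF8 / TransUTF8ToUnicode pair is used only as an example algorithm), the history table would be rebuilt with ALC declared on the bulky text column and then reloaded:

```sql
-- Sketch only: hypothetical names for a site-specific DBQL history copy.
-- ALC is declared per column; the query text gets a built-in algorithm pair.
CREATE TABLE hist_db.dbql_hst_alc
( ProcID           DECIMAL(5,0)  NOT NULL
, CollectTimeStamp TIMESTAMP     NOT NULL
, QueryID          DECIMAL(18,0) NOT NULL
, QueryText        VARCHAR(10000) CHARACTER SET UNICODE
                     COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
                     DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
) PRIMARY INDEX (QueryID);

-- Reload from the existing history table, then compare CurrentPerm in
-- DBC.TableSizeV before deciding whether to switch over.
INSERT INTO hist_db.dbql_hst_alc
SELECT ProcID, CollectTimeStamp, QueryID, QueryText
FROM   hist_db.dbql_hst;
```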
20 Mar 2013 | 2 comments
Go in-depth on how NULLs and Compression are managed in a Teradata system.
02 Mar 2012
Teradata 13.10 features Block Level Compression (BLC), which provides the capability to compress whole data blocks at the file system level before they are actually written to storage. Like any compression feature, BLC helps save space and reduce I/O. This BLC utility is for Teradata users to run against a TD 13.10 system to identify candidate tables for BLC and to evaluate the impact of BLC on space and speed for each table of interest, providing the information needed to select the appropriate tables on which to apply BLC.
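The utility itself is a download; purely as an illustration of the kind of candidate screening it automates, a query along these lines (assuming SELECT access to DBC.TableSizeV and an arbitrary 100 GB threshold) lists the largest tables, which are usually the first BLC candidates to evaluate:

```sql
-- Sum per-AMP perm space from DBC.TableSizeV to find the largest tables.
SELECT  DataBaseName
     ,  TableName
     ,  SUM(CurrentPerm) / (1024**3) AS CurrentPermGB
FROM    DBC.TableSizeV
GROUP BY 1, 2
HAVING  SUM(CurrentPerm) > 100 * (1024**3)   -- arbitrary 100 GB cut-off
ORDER BY CurrentPermGB DESC;
```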
13 Feb 2012
Can TPT compress data before transporting it over Ethernet, to save bandwidth on the ETL side? Can FastLoad do the same?
17 Nov 2011
Everyone is aware of Teradata’s continued commitment to reducing the footprint and resource usage of data by adding compression capabilities, and of the resulting perm space savings and performance gains from reduced block sizes, I/O, and spool usage.
10 Aug 2011 | 2 comments
Hello Gurus, my table has a VARCHAR(50) column with the LATIN character set. All the records have the value -99999 in this column. I thought it would be wise to change the data type to CHAR(50) and compress on '-99999'. It turned out that my table size actually increased. Can someone please explain what the reason could be?
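For reference, a minimal sketch of the two definitions being compared (table and column names are hypothetical); note that from Teradata 13.10 onward multi-value compression can also be declared directly on a VARCHAR column, so the switch to fixed-width CHAR(50) is not strictly required there:

```sql
-- Attempted change: fixed-width CHAR with MVC on the single constant value.
CREATE TABLE sandbox.t_char_mvc
( id       INTEGER NOT NULL
, flag_col CHAR(50) CHARACTER SET LATIN COMPRESS ('-99999')
) PRIMARY INDEX (id);

-- Alternative on 13.10 and later: keep VARCHAR and declare MVC on it directly.
CREATE TABLE sandbox.t_varchar_mvc
( id       INTEGER NOT NULL
, flag_col VARCHAR(50) CHARACTER SET LATIN COMPRESS ('-99999')
) PRIMARY INDEX (id);
```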
12 Apr 2011 | 1 comment
After reading in the Database Design (Sept 2007) documentation that "the system always compresses nulls whether you specify null compression or not," I decided to test this.
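A minimal version of such a test, assuming a sandbox database and using DBC.TableSizeV to compare perm space, might look like this:

```sql
-- Two otherwise identical tables: one with no COMPRESS clause, one with a
-- bare COMPRESS (which compresses NULLs). Load the same mostly-NULL data
-- into both, then compare their perm space.
CREATE TABLE sandbox.null_default
( id  INTEGER NOT NULL
, val INTEGER
) PRIMARY INDEX (id);

CREATE TABLE sandbox.null_compress
( id  INTEGER NOT NULL
, val INTEGER COMPRESS
) PRIMARY INDEX (id);

SELECT TableName, SUM(CurrentPerm) AS CurrentPermBytes
FROM   DBC.TableSizeV
WHERE  DataBaseName = 'sandbox'
  AND  TableName IN ('null_default', 'null_compress')
GROUP BY 1;
```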
10 Mar 2011 | 4 comments
This session will focus on block-level compression (BLC), how it works, what compression rates you can expect, and where it is appropriate to define. Examples of the impact on CPU usage and elapsed time when queries access compressed tables will be shared, and the overhead while performing different database operations on these tables will be explored. Plenty of tips and techniques for successful use of BLC are offered.
24 Jan 2011
One of the new compression features in Teradata 13.10 is Block Level Compression (BLC), which provides the capability to compress whole data blocks at the file system level before they are actually written to storage. Like any compression feature, BLC helps save space and reduce I/O. There is a CPU cost to compress data as it is inserted, and a CPU cost to decompress whole data blocks whenever the compressed blocks are accessed. Even when only one column of a single row is needed, the whole data block must be decompressed. For updates, the compressed data blocks have to be decompressed first and then recompressed. Careful evaluation should be done before applying BLC in your production systems.
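As an illustration of how a table opts in: on releases that support the table-level option (Teradata 14.0 onward), BLC can be requested in the table definition, while on 13.10 itself it is typically applied through DBS Control settings and the Ferret COMPRESS command. A sketch with hypothetical names:

```sql
-- Teradata 14.0+ syntax sketch: mark the table as block-compressible.
-- The available BLOCKCOMPRESSION values and their exact semantics are
-- release-dependent; check the documentation for your version.
CREATE TABLE sales.big_history
, BLOCKCOMPRESSION = MANUAL
( sale_id   BIGINT NOT NULL
, sale_date DATE
, amount    DECIMAL(18,2)
) PRIMARY INDEX (sale_id);
```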
12 Jan 2011 | 7 comments
The ALC (ALgorithmic Compression) test package contains UDFs simulating the TD 13.10 built-in compression functions, test templates for Latin and Unicode character columns, and step-by-step instructions. It is intended for TD users to run over specific data at the column level to determine the compression rates of the TD 13.10 built-in compression algorithms. The test results provide information for selecting an appropriate algorithm for specific data. The tests use read-only operations and can be executed on any release that supports UDFs (V2R6.2 and forward). It is recommended to run them during off-peak hours, as they use a significant amount of system resources (they are CPU bound).
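As an example of how such a run might be framed (the UDF, database, and column names below are hypothetical; the package documents the actual UDFs it installs), a compression rate can be estimated by comparing stored size before and after the simulated compression over a sample of real data:

```sql
-- Compare average byte length of raw vs simulated-compressed values
-- over a sample; read-only, but still CPU intensive.
SELECT AVG(OCTET_LENGTH(d.comment_txt))                        AS avg_bytes_raw
     , AVG(OCTET_LENGTH(alc_test.sim_camset_l(d.comment_txt))) AS avg_bytes_compressed
FROM  ( SELECT comment_txt
        FROM   mydb.customer_notes
        SAMPLE 100000 ) AS d;
```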
12 Nov 2010
The purpose of this series was to give you some basic queries that I use to get a quick snapshot of how well tuned an EDW is from a workload analysis and database tuning perspective. The four topics were (direct links are provided at the end of the article):
Have you tried them yet?
30 Aug 2010
Teradata 13.10 provides the Algorithmic Compression (ALC) feature, which allows users to apply compression/decompression functions to a specific column of character or byte type. The compression/decompression functions may be Teradata built-in functions provided along with ALC, or user-provided compression/decompression algorithms registered as UDFs.
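A minimal sketch of the column-level syntax, using the built-in TransUnicodeToUTF8 / TransUTF8ToUnicode pair on a Unicode column (the table and column names are hypothetical); a user-registered UDF pair would be named in the same COMPRESS USING / DECOMPRESS USING positions:

```sql
CREATE TABLE mydb.messages
( msg_id  BIGINT NOT NULL
, msg_txt VARCHAR(4000) CHARACTER SET UNICODE
            COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
            DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
) PRIMARY INDEX (msg_id);
```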
02 Aug 2010 | 13 comments
Ok, so I shouldn’t even need to broach this topic, as I’m sure you have all heard it before: compress, compress, and compress.
19 Jul 2010 | 26 comments
In Part 3 of this series, we will take a quick look at how statistics are implemented and maintained at your site. Statistics collection can be a complicated and very deep topic, with discussions on the frequency of collection, whether to use sampled stats, automation strategies, etc. This analysis is not going to go that deep; it is a high-level look at the statistics on the tables, and I am looking for just two things:
21 Jun 2010 | 2 comments
In-place compression UDF. Compresses column values for VARCHARs and BLOBs, similar to zipping the column before inserting. Can also be used for encrypting data in the database; after compression, the data is unreadable until uncompressed.
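A sketch of how such a compress/uncompress UDF pair is typically used (the function, table, and column names below are hypothetical; the actual names come with the download):

```sql
-- Store compressed bytes on insert, decompress on retrieval.
INSERT INTO mydb.docs (doc_id, body_z)
SELECT doc_id, mydb.zcompress(body)
FROM   staging.docs_in;

SELECT doc_id, mydb.zuncompress(body_z) AS body
FROM   mydb.docs
WHERE  doc_id = 42;
```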
22 Jun 2004