meet_as 15 posts Joined 01/06
29 Jan 2015
Transfer compressed data

We have two Teradata servers (a 6x and a 1x series). Since the data volume is huge (about 1.5 TB per day), I would like to compress the data first (block level, on the 6x series) and then move it to the other server (the 1x series). Is it possible to move compressed data between two Teradata servers? I have a TPT script which performs the data migration, but I don't think it can move compressed data. Is there an alternative? Can it be done through BAR? Regards, AS

Raja_KT 1246 posts Joined 07/09
30 Jan 2015

You can have a look at DSA too, since it is better than ARC and TARA. It works for version 14.10 and later. You may be able to do it through Unity too.

Raja K Thaw
My wiki:
Street Children suffer not by their fault. We can help them if we want.

seven11 26 posts Joined 12/09
19 Feb 2015

It really depends on what you are trying to do and what you have to work with; a big variable is whether these systems are in totally different places or close enough to string a cable between them.
The top-end option would be to dual-load both systems, keeping them constantly in sync.
The next option would be Teradata Data Mover, used to copy tables from one Teradata system to another.
A home-grown option would be running an ARC backup & restore through a named pipe (unofficial and unsupported by Teradata).
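The named-pipe trick above can be sketched with generic commands. This is only an illustration of the pattern, not real ARC syntax: gzip/gunzip stand in for the arcmain archive and restore jobs (whose actual invocation depends on your ARC scripts), and the point is that the "backup" stream flows straight into the "restore" without ever landing on intermediate storage.

```shell
#!/bin/sh
# Sketch of the unofficial backup-through-a-named-pipe pattern.
# gzip/gunzip are placeholders for the real archive/restore jobs.
set -e
PIPE=/tmp/arc_pipe.$$
OUTFILE=/tmp/restored_data.$$
mkfifo "$PIPE"

# "Restore" side: started first, in the background, reading from the pipe.
gunzip -c < "$PIPE" > "$OUTFILE" &

# "Backup" side: writes its (compressed) output into the pipe.
printf 'row1\nrow2\n' | gzip -c > "$PIPE"

wait
RESTORED=$(cat "$OUTFILE")
echo "$RESTORED"
rm -f "$PIPE" "$OUTFILE"
```

With real arcmain jobs you would point the archive FILE at the FIFO on one side and the restore at the same FIFO on the other; error handling is the hard part, since a failure on either end can leave the other blocked on the pipe.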
If you want to leverage a backup solution, i.e. back up the EDW to an external storage solution and, once complete, restore to the Extreme appliance: if you need to back up the data daily anyway, this might be a more efficient option. A couple of reasons DSA (Data Stream Architecture) can be better than ARC-based backup solutions:
1. If this 1.5 TB is full table sizes and the actual daily difference is a smaller percentage, DSA supports a form of incremental backup which might be beneficial. Note that while the backup is incremental, the restore is not, i.e. DSA will lay down the full restore plus all of the required delta/cumulative images every time.
2. Under DSA, unlike ARC, Block Level Compression (BLC) data is left in its compressed form when sent from the database to the backup client.
NB: on both points, since I'm guessing the hash maps between the two systems will be different (I am assuming the RDBMS version and the hash function are identical), data redistribution will need to occur on the target, which will dramatically impact restore performance; I think it also means the blocks would need to be expanded to figure out the new AMP each row is allocated to.
DSA requires specific hardware, is today certified only with NetBackup, and requires at least RDBMS 14.10 on both systems. Daily automation of the backup and restore process, factoring in possible failures, etc., may also be a bit tricky.
I'm assuming this is just some subset of data and not a full system, since that is a whole different thing.
