Random_Thought 87 posts Joined 06/09
21 Jul 2009
Experience with TRS

Does anyone here use TRS?

If so, is it reliable?

How much data do you replicate? All of it, or just a few tables?

How many TB or GB do you replicate per day?

Gut 5 posts Joined 07/09
21 Jul 2009

TRS (Teradata Replication Services) operates in two modes, Max Performance and Max Protection. When running in Max Protection it deploys a two-phase commit (2PC) mechanism that guarantees no data loss in a failure scenario. It is currently in active use by about 6-9 customers. We are just seeing these customers ramp up their volume, so we do not have a lot of customer data right now. We have tested upwards of 15 MB/second per table, with up to 4 concurrent jobs getting nearly 50 MB/second. There is a limit of 1,400 tables and 50 replication groups right now. One customer is at this maximum; others do only a few hundred tables.
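For anyone new to it, tables are assigned to replication groups with DDL along these lines. This is a minimal sketch only; the group and table names are made up, and the exact CREATE/ALTER REPLICATION GROUP syntax should be checked against the SQL reference for your release:

  -- Define a replication group covering two tables (names are examples).
  CREATE REPLICATION GROUP sales_rg
  (
    sales_db.orders,
    sales_db.order_items
  );

  -- Tables can later be added to or dropped from a group.
  ALTER REPLICATION GROUP sales_rg
    ADD (sales_db.shipments);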

Rick Stellwagen

Random_Thought 87 posts Joined 06/09
21 Jul 2009

Are there any rules of thumb for sizing the replication servers? And is it right that the TRS servers can be either Windows, AIX, or Linux? What does Teradata prefer to deploy, or what do the 6-9 customers deploy on?

This is a really interesting concept, but for our warehouse the 1,400-table limit may cause issues soon. Is this planned to be increased?

Quite a few questions!

Gut 5 posts Joined 07/09
22 Jul 2009

The servers can be Windows or Linux. For any big volumes, you should make sure you have 2 quad-core CPUs and 4 reliable disks. Having two servers, one for extracts closest to the primary system and one for apply (or Replicats, in GoldenGate terms), will also give you more scalability and, if properly configured, high availability with no data loss. All the DA customers have two servers for high availability and max throughput. When configured this way, the servers can sustain upwards of 40 MB/second, which is pretty close to the maximum Teradata can drive through TRS right now.
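To put that in perspective, 40 MB/second sustained around the clock is on the order of 3.5 TB/day. That is a back-of-envelope ceiling assuming continuous peak throughput, not a typical daily volume:

  -- Rough ceiling: 40 MB/s sustained for 24 hours, expressed in TB/day.
  SELECT 40e6 * 86400 / 1e12 AS approx_tb_per_day;  -- about 3.46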

Unfortunately, the 1,400-table limit will not be raised for about a year. Also, if you have CLOB, BLOB, or large-decimal columns, those could be an issue (fixes for them are due by the end of this year).
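If you want to check your exposure, a quick scan of the data dictionary along these lines should flag the candidate columns. A sketch: I am assuming the usual 'CO'/'BO' ColumnType codes for CLOB/BLOB and treating decimals wider than 18 digits as "large":

  -- Find columns TRS may not handle yet: LOBs and large decimals.
  SELECT DatabaseName, TableName, ColumnName, ColumnType
  FROM DBC.Columns
  WHERE ColumnType IN ('CO', 'BO')                      -- CLOB / BLOB
     OR (ColumnType = 'D' AND DecimalTotalDigits > 18)  -- large DECIMAL
  ORDER BY 1, 2, 3;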

Rick Stellwagen

Gut 5 posts Joined 07/09
22 Jul 2009

Sorry, I forgot: getting 32 GB of memory on those servers will be important if you drive the max.

Rick Stellwagen

Random_Thought 87 posts Joined 06/09
22 Jul 2009

Great, thanks for the answers.

Tiggr 1 post Joined 08/09
05 Aug 2009

The only other comment I'd make is to watch out for bulk deletes. TRS currently has a quirk that causes these to be handled as singleton row deletes on the standby node. Not ideal, and it can increase run times by several hours if you're replicating tables you truncate more than a few million rows from.
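For example, a statement like the first one below is a single all-AMPs operation on the primary, but the standby replays it one row at a time. The workaround I've seen discussed is to take the table out of its replication group, run the delete on both systems yourself, and put it back; the ALTER syntax here is illustrative, so check the DDL reference for your release:

  -- One pass on the primary, but replayed as singleton deletes on the standby:
  DELETE FROM sales_db.orders WHERE order_date < DATE '2009-01-01';

  -- Possible workaround (illustrative DDL):
  ALTER REPLICATION GROUP sales_rg DROP (sales_db.orders);
  -- ...run the bulk delete on both the primary and the standby...
  ALTER REPLICATION GROUP sales_rg ADD (sales_db.orders);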

A question of my own: is there currently any workaround for the table limit? Is it possible to run multiple instances of TRS? Although presumably you'd need separate servers, with the associated costs, for that...

Random_Thought 87 posts Joined 06/09
10 Aug 2009

Yep, bulk deletes are a major gotcha, although I would stop short of calling it a quirk or a bug. TRS needs to take the before-image of each deleted row and transfer it to the remote site in order to support rollback.
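As a rough illustration (the numbers are made up), the before-images alone for a big delete add up fast:

  -- Hypothetical: delete 10 million rows of ~200 bytes each.
  -- Before-image traffic to the remote site, expressed in GB.
  SELECT 10e6 * 200 / 1e9 AS before_image_gb;  -- about 2 GB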

You need to watch out for spill files blowing out on the nodes as well.

Random_Thought 87 posts Joined 06/09
10 Aug 2009

@Tiggr, on your other question about multiple instances: would this be possible? The RSG process that handles the communication with the TRS server is probably the bottleneck in this setup, not the TRS server itself. I think it is possible to have multiple TRS servers, but I believe the 1,400-table limit still applies.

1,400 tables is a lot of tables; is that your whole warehouse? Madmax has recently blogged about replication at eBay, but it looks homegrown rather than TRS. An interesting read, though: they don't do backups now, they just use their own replication system.

Random_Thought 87 posts Joined 06/09
10 Aug 2009

Whoops, it was MadMac (aka Michael MacIntire), not MadMax.

One is a Distinguished Architect at eBay, the other a troubled soul in a post-apocalyptic world.

Random

Gut 5 posts Joined 07/09
01 Sep 2009

Indeed, the RSG is the bottleneck in many cases. We are also working on new designs for bulk deletes and updates, but those are more than a year out, along with lifting the table limitation. 1,400 is the technical max, but practically we are saying 1,100 tables and 65 groups is where you are going to max out; fewer groups may get you a few more tables. The new work to optimize and raise the limits won't happen until the Teradata 14.0 timeframe.

Rick Stellwagen

plentyfish 16 posts Joined 05/10
08 Nov 2012

Hi Gut - have you got an update on the designs for bulk deletes and updates?
Thanks in advance 

gryback 271 posts Joined 12/08
08 Nov 2012

Teradata's strategy in this space has changed dramatically since these 2009 discussions, in particular with the emergence of the Teradata Unity and Teradata Data Mover products. To eliminate overlap in Teradata offerings, a business decision was made to discontinue Teradata Replication Services (TRS) effective August 2011. We now position Teradata Unity as the first product to consider in this space, but that depends on the actual use case(s). I would suggest contacting your Teradata account team to understand this in more depth and to discuss your specific use cases for Teradata's recommendations on how best to address them.
 
