Experience of TRS

Teradata Applications

Does anyone here use TRS?

If so, is it reliable?

How much data do you replicate? Everything, or just a few tables?

How many TB or GB do you replicate per day?
Gut
Teradata Employee

Re: Experience of TRS

TRS (Teradata Replication Services) operates in two modes, Max Performance and Max Protection. When running in Max Protection it deploys a two-phase-commit (2PC) mechanism that guarantees no data loss in a failure scenario. It is being used actively by about 6-9 customers currently; we are just seeing these customers ramp up their volume, so we do not have a lot of customer data right now. We have tested upwards of 15 MB/second per table, with up to 4 concurrent jobs getting nearly 50 MB/second. There is a limit of 1400 tables and 50 replication groups right now. One customer is at this maximum; others do only a few hundred tables.
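To relate those rates to the "how many TB or GB per day" question above, here is a back-of-envelope conversion (a sketch only; it assumes decimal units, i.e. 1 TB = 1,000,000 MB, and a rate sustained around the clock):

```python
# Rough conversion of sustained MB/s into TB/day, using the figures
# quoted above (15 MB/s per table, ~50 MB/s with 4 concurrent jobs).
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def daily_volume_tb(mb_per_second: float) -> float:
    """Convert a sustained MB/s rate into TB/day (decimal units assumed)."""
    return mb_per_second * SECONDS_PER_DAY / 1_000_000

print(f"Single table at 15 MB/s: {daily_volume_tb(15):.2f} TB/day")  # 1.30 TB/day
print(f"Four jobs at 50 MB/s: {daily_volume_tb(50):.2f} TB/day")     # 4.32 TB/day
```

So even a single table at the quoted per-table rate would cover better than a terabyte per day, if you could actually keep the pipe full.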

Re: Experience of TRS

Are there any rules of thumb for sizing the replication servers? And is it right that the TRS servers can run on Windows, AIX, or Linux? What does Teradata prefer to deploy, or what do the 6-9 customers deploy on?

This is a really interesting concept, but for our warehouse the 1400 table limit may cause issues soon. Is this planned to be increased?

Quite a few questions!
Gut
Teradata Employee

Re: Experience of TRS

The servers can be Windows or Linux. For any big volumes, you should make sure you have two quad-core CPUs and four reliable disks. Having two servers, one for extracts closest to the primary system and one for apply (or replicats, in GoldenGate terms), will also give you more scalability and, if properly configured, high availability with no data loss. All the DA customers have two servers for high availability and maximum throughput. When configured this way, the servers can sustain upwards of 40 MB/second, which is pretty close to the maximum Teradata can drive through TRS right now.

The 1400 table limit will not be raised for about a year, unfortunately. Also, if your tables use CLOB, BLOB, or large decimal columns, that could be an issue (those are fixed by the end of this year).
Gut
Teradata Employee

Re: Experience of TRS

Sorry, I forgot: getting 32 GB of memory on those servers will be important if you drive them at the maximum rate.

Re: Experience of TRS

Great, thanks for the answers.

Re: Experience of TRS

The only other comment I'd make is: watch out for bulk deletes. TRS currently has a quirk that causes these to be handled as singleton row deletes on the stand-by node. Not ideal, and it can lead to run-time increases of several hours if you're replicating tables from which you truncate more than a few million rows.
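One mitigation sometimes used for this kind of apply-side amplification is to break a huge delete into smaller committed batches, so no single replicated transaction carries millions of row images. A minimal sketch of the batching loop (the `delete_up_to` callable is purely illustrative, e.g. a key-range-bounded DELETE; this is not a TRS API):

```python
# Hypothetical sketch: instead of one multi-million-row DELETE, issue
# repeated bounded deletes so each replicated transaction stays small.
# 'delete_up_to(n)' stands in for any statement that deletes at most n
# rows and returns how many it actually deleted (illustrative only).

def batched_delete(delete_up_to, batch_rows: int = 100_000) -> int:
    """Run bounded deletes until a batch comes back short; return batch count."""
    batches = 0
    while True:
        deleted = delete_up_to(batch_rows)
        batches += 1
        if deleted < batch_rows:
            return batches

# Toy demonstration against a fake 250,000-row table:
remaining = [250_000]
def fake_delete(n):
    d = min(n, remaining[0])
    remaining[0] -= d
    return d

print(batched_delete(fake_delete))  # 3 batches: 100k + 100k + 50k
```

Whether batching actually helps depends on how the apply side groups transactions; the other common approach is simply to take the table out of the replication group around the purge window.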

Question of my own - is there any workaround for the table limit currently? Is it possible to run multiple instances of TRS? Although presumably you'd need separate servers with the associated costs involved for that...

Re: Experience of TRS

Yep, bulk deletes are a major gotcha, although I would stop short of calling it a quirk or a bug: TRS needs to take the before image of each row and transfer it to the remote site in order to support rollback.

You need to watch out for spill files blowing out on the nodes as well.

Re: Experience of TRS

@Tiggr, on your other question regarding multiple instances: would this be possible? The RSG process that handles communication with the TRS server is probably the bottleneck in this setup, not the TRS server itself. I think it is possible to have multiple TRS servers, but I believe the 1400 table limit is still the overall maximum.

1400 tables is a lot of tables; is that your whole warehouse? MadMax has recently blogged about replication at eBay, but it looks homegrown rather than TRS. An interesting read, though: they don't do backups now, they just use their own replication system.

Re: Experience of TRS

Whoops, it was MadMac (aka Michael MacIntire), not MadMax.

One is a Distinguished Architect at eBay; the other, a troubled soul in a post-apocalyptic world.
