All Forums Database
Amith Kumar 3 posts Joined 01/12
20 Dec 2013
Teradata vs OLTP

Can someone tell me the reasons why Teradata doesn't work well with OLTP compared to other RDBMSs like Oracle?

Raja_KT 1246 posts Joined 07/09
20 Dec 2013



I have seen many comparisons. First we need to know both technologies very well before we can comment. Teradata has had many OLTP features for a long time, and more in recent versions. Look at all the features (including the recent releases) and then conclude. Implement it and get real experience - real experience is far better than speculation. With macros you can do multi-statement requests (MSRs). Teradata has triggers, stored procedures, cursors, etc., and you can run your queries in both ANSI and Teradata mode, so there is no need to reinvent the wheel. Name any feature in Oracle that Teradata does not have - tell me the functionality and it can usually be done. Then turn the table and name features in Teradata that Oracle does not have.


The problem with pre-sales is that they often cannot sell well, because they lack knowledge of the requirements, exposure to other technologies, and see things only from their own perspective.



It reminds me of my dentist. He has around 500 regular patients and uses only an Excel sheet for all their details. He does not need Oracle or Teradata.


Now consider those who do need Oracle or Teradata. Oracle cannot scale up as easily and cannot do much in OLAP; Teradata can do both. Oracle is built for OLTP systems.


If you are buying a gadget from a shop, will you buy a multi-purpose one or one that serves only a single function? The choice is yours. Now suppose you are running a multi-brand giant company: you have to consider long-term strategy for profit, ROI, TCO, etc. I wish I could explain more, but time and space do not permit :).


Good luck,



Raja K Thaw
Street Children suffer not by their fault. We can help them if we want.

ltt73 6 posts Joined 04/13
12 May 2014

May I re-open this discussion?
I'm wondering if there are specific strategies for getting small, OLTP type SQL transactions to run faster within TD.
I can't use FASTLOAD or a bulk CSV load - I have small transactions, like 20-100 rows across multiple tables. My requirement is to deliver these updates/inserts in as near real time as I can.
My transactions run extremely slowly even with indexes in place - I worked with a Teradata DBA for a month to make sure every detail was set properly, and each transaction was still averaging 4 seconds.
I asked the Teradata DBAs for more connections, so that at least I could process in more threads. They did not like that idea at all.
So is it ever feasible to use TD as an OLTP target?  Am I chasing a ghost?

tomnolan 594 posts Joined 01/08
12 May 2014

The Teradata Database's locking behavior is the primary challenge for an OLTP style application. Be wary of adding threads, because that may increase lock contention, and make things worse. Try to do as much work as possible with the fewest number of connections - that will minimize write lock contention. Keep the amount of work as small as possible in each transaction - that will also minimize lock contention.
Indexes improve query performance, but require additional work during inserts, updates, and deletes ("index maintenance"). Be wary of adding indexes to tables for which inserts/updates/deletes are a significant part of the workload.
Teradata does not support read-committed transaction isolation like Oracle does. With Oracle, writers do not block readers, but with Teradata, writers will block readers. You can use access locking with Teradata, so that writers don't block readers, but that will permit dirty reads, which typically are not acceptable for an OLTP application.
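The access-locking option Tom describes can be sketched as follows. This is a minimal illustration, not a tested implementation: the `LOCKING ROW FOR ACCESS` modifier is Teradata syntax, but the `orders` table and column names are hypothetical, and the `teradatasql` driver call is commented out because it needs a live Teradata system.

```python
# Sketch: requesting an access ("dirty read") lock in Teradata SQL.
# Table/column names below are made up for illustration.
query = (
    "LOCKING ROW FOR ACCESS "
    "SELECT order_id, status FROM orders WHERE customer_id = ?"
)

# With the teradatasql Python driver this would be executed roughly as
# (commented out - requires a live Teradata system):
#
# import teradatasql
# with teradatasql.connect(host="...", user="...", password="...") as con:
#     with con.cursor() as cur:
#         cur.execute(query, [12345])
#         rows = cur.fetchall()

# The access lock lets the SELECT read rows that writers currently hold
# locked, at the cost of possibly seeing uncommitted ("dirty") data.
print(query.startswith("LOCKING ROW FOR ACCESS"))
```

As Tom notes, this trades consistency for concurrency, so it suits reporting reads rather than OLTP reads that feed subsequent writes.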
Teradata does not support row-level locking like Oracle does. Teradata has row hash locking, so each lock covers a collection of rows, leading to increased lock contention and blocking.

ToddAWalter 316 posts Joined 10/11
12 May 2014

I agree with the cautions about locking; however, running a transaction workload through a single session is very unlikely to deliver the required throughput or response time unless the volume is extremely small.
The good thing about a transactional workload is that it takes very little system resource per insert/update/delete. Thus a moderate number of sessions doing this work concurrently will not have much impact on system resources.
If any number of the updates are being applied to a single table, then you should also use the array update feature to group them together into a single operation.
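The array update idea maps onto the standard Python DBAPI `executemany` call, which the Teradata drivers (teradatasql, or batch mode in JDBC) turn into a single iterated request rather than one round trip per row. A minimal sketch, using the stdlib sqlite3 module as a stand-in for a Teradata connection (the table and data are made up):

```python
import sqlite3

# sqlite3 stands in for a teradatasql connection here; the DBAPI calls
# have the same shape. With the Teradata driver, executemany() sends all
# parameter rows as one "array" request instead of one trip per row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER, status TEXT)")

rows = [(1, "NEW"), (2, "SHIPPED"), (3, "NEW")]  # hypothetical batch
con.executemany("INSERT INTO orders (order_id, status) VALUES (?, ?)", rows)
con.commit()

count = con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 3
```

The key point is that the SQL text is parsed once and the parameter rows travel together, which is exactly the round-trip saving Todd describes.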
If there are single operations to multiple tables that can't take advantage of array updates, then those should be grouped together in multi-statement requests to avoid round-trip times.
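In Teradata, a multi-statement request is just several semicolon-separated statements sent to the database as one request. As a rough, runnable stand-in for that round-trip-saving idea, sqlite3's `executescript` runs a batch of statements from one call (the tables here are hypothetical):

```python
import sqlite3

# Rough stand-in: three single-row inserts against different tables,
# sent as one script instead of three separate requests. The analogous
# Teradata multi-statement request groups semicolon-separated statements
# into one request, saving a network round trip per statement.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (x INTEGER)")
con.execute("CREATE TABLE b (x INTEGER)")
con.execute("CREATE TABLE c (x INTEGER)")

con.executescript("""
    INSERT INTO a VALUES (1);
    INSERT INTO b VALUES (2);
    INSERT INTO c VALUES (3);
""")

total = sum(con.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in ("a", "b", "c"))
print(total)  # 3
```

For a near-real-time feed like ltt73's, combining array inserts per table with multi-statement grouping across tables minimizes the per-transaction network cost.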
You don't say what indexes you have in place. Secondary indexes can definitely make the updates take longer and can increase locking contention.
And then you have to make sure that workload management is set up so the requests get through without delays: high weight/priority, and possibly expedited AWTs if there is a lot of concurrent other work. See Carrie Ballinger's tips on workload management for tactical query workloads.
