All Forums General
johnsunnydew 43 posts Joined 09/14
08 Sep 2014
General Question

A data file has a million rows that are known to contain duplicate rows that need to be loaded.
Which utility and type of target table allows this to be done and provides the best performance?
A. FastLoad into a SET table
B. MultiLoad into a SET table
C. FastLoad into a MULTISET table
D. MultiLoad into a MULTISET table

Raja_KT 1246 posts Joined 07/09
08 Sep 2014

To load duplicate rows into a MULTISET table, use MultiLoad.
I suggest you read the material and try it yourself using Teradata Express for VMware. It is free.

 

Raja K Thaw
My wiki: http://en.wikipedia.org/wiki/User:Kt_raj1
Street Children suffer not by their fault. We can help them if we want.

09 Sep 2014

MultiLoad along with a MULTISET table will be better in your case. If you don't want duplicates, then go for the FastLoad and MULTISET table combination, since FastLoad discards duplicate rows even when the target is a MULTISET table.
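As a rough illustration of the MultiLoad-into-MULTISET approach, here is a minimal MultiLoad job sketch. The database, table, column, and file names are all assumptions for illustration, not from the original question.

```sql
/* Hypothetical MultiLoad job -- all object and file names are made up. */
.LOGTABLE mydb.ml_logtable;
.LOGON mytdpid/myuser,mypassword;

/* A MULTISET table accepts the duplicate rows that MultiLoad inserts. */
CREATE MULTISET TABLE mydb.target_multiset (
    col1 VARCHAR(10),
    col2 VARCHAR(50)
) PRIMARY INDEX (col1);

.BEGIN IMPORT MLOAD TABLES mydb.target_multiset;
.LAYOUT datalayout;
.FIELD col1 * VARCHAR(10);
.FIELD col2 * VARCHAR(50);

.DML LABEL insert_dml;
INSERT INTO mydb.target_multiset (col1, col2)
VALUES (:col1, :col2);

.IMPORT INFILE datafile.txt
    FORMAT VARTEXT ','
    LAYOUT datalayout
    APPLY insert_dml;
.END MLOAD;
.LOGOFF;
```

Unlike FastLoad, MultiLoad does not eliminate duplicate rows on the way in, so with a MULTISET target all one million rows land in the table.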

andydoorey 35 posts Joined 05/09
16 Sep 2014

The correct, most up-to-date answer is:
E - TPT Load operator into a No Primary Index (NoPI) table
The TPT Load operator has essentially the same functionality as FastLoad. It will generally remove duplicate rows before loading them to Teradata, even if the target is a MULTISET table. However, when loading into a NoPI table it skips that duplicate-elimination step and simply writes all the data to the table as quickly as possible.
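A sketch of what that TPT job might look like follows. All names (the TDPID, credentials, database, table, and file path) are assumptions for illustration; the key point is the Load operator feeding a table created with NO PRIMARY INDEX.

```sql
/* Hypothetical target table -- MULTISET with NO PRIMARY INDEX so the
   Load operator keeps every row, duplicates included. */
CREATE MULTISET TABLE mydb.target_nopi (
    col1 VARCHAR(10),
    col2 VARCHAR(50)
) NO PRIMARY INDEX;

/* Hypothetical TPT job script (run with: tbuild -f load_nopi.tpt). */
DEFINE JOB load_nopi
DESCRIPTION 'Load duplicate rows into a NoPI table via the Load operator'
(
  DEFINE SCHEMA file_schema
  (
    col1 VARCHAR(10),
    col2 VARCHAR(50)
  );

  /* Reads the delimited source file. */
  DEFINE OPERATOR file_reader
  TYPE DATACONNECTOR PRODUCER
  SCHEMA file_schema
  ATTRIBUTES
  (
    VARCHAR FileName      = 'datafile.txt',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = ','
  );

  /* FastLoad-protocol Load operator writing to the NoPI table. */
  DEFINE OPERATOR load_op
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'mytdpid',
    VARCHAR UserName     = 'myuser',
    VARCHAR UserPassword = 'mypassword',
    VARCHAR TargetTable  = 'mydb.target_nopi'
  );

  APPLY ('INSERT INTO mydb.target_nopi (col1, col2) VALUES (:col1, :col2);')
  TO OPERATOR (load_op)
  SELECT * FROM OPERATOR (file_reader);
);
```

Because a NoPI table has no primary index to hash rows by, the loader can append incoming blocks directly without sorting or duplicate-row checks, which is where the performance win comes from.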
 
