If the table has a NUPI with lots of records per PI value (e.g. an account number for transactions), then MULTISET will definitely help. It bypasses the duplicate-row check, which is almost certainly what is causing the long load times. If you do go to a MULTISET table, though, you become responsible for ensuring there are no duplicate records!

You should also take a look at your partitioning scheme. I presume that all (or the vast majority) of the records you load are for the most recent period. You do not say whether you load the 100M records once per year, or as roughly 9M per month. If you load 100M in January, 90M of these will go into last year's partition, adding to the 9-10M you loaded for last January, and 10M will go into the new partition. The 10M will go in much quicker because they are going into an empty partition.

Try changing to monthly partitions instead of annual. This will limit you to 65,535 months, or about 5,400 years, so probably not a problem.
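To make that concrete, here is a minimal sketch of both changes. The column names (account_no, txn_date, txn_amount) and the date range are made up for illustration, so substitute your own:

CREATE MULTISET TABLE master_table_new (    -- MULTISET skips the duplicate-row check
   account_no  INTEGER NOT NULL,            -- the NUPI column (hypothetical name)
   txn_date    DATE NOT NULL,               -- the partitioning column (hypothetical name)
   txn_amount  DECIMAL(18,2)
)
PRIMARY INDEX (account_no)
PARTITION BY RANGE_N (
   txn_date BETWEEN DATE '2000-01-01' AND DATE '2025-12-31'
   EACH INTERVAL '1' MONTH                  -- monthly instead of annual partitions
);

-- Since MULTISET no longer rejects duplicates for you, run a check like this
-- after each load (group by whatever column set defines a duplicate):
SELECT account_no, txn_date, txn_amount, COUNT(*)
FROM master_table_new
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1;

With monthly partitions, each month's load lands almost entirely in one small, empty partition, so the insert work stays proportional to the new rows rather than to a whole year's worth of existing data.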
Hello.... I am trying to insert records (72 million+) from one table into a larger "master" table. The master table will be the final table after everything is complete; it currently has 330 million rows in it. The insert is a simple BTEQ script:

Insert into "master table" Select * from "subset table"

I have broken the subset table into smaller table segments (4-10 million rows each), thinking that would help ease the loading times. Even these smaller segments seem to take hours, and in some cases days, to load.

The master table is defined as a SET table, and has a primary index and partitions by year. I have collected statistics on the key and on the partitions, but this does not seem to have helped with the time to load.

I was thinking about the SET vs. MULTISET definition of the table. Will this help at all?

Every year I will receive another 100 million records that need to be loaded into this table, so improving the processing is important.

Any suggestions you have would be greatly appreciated.
Thanks!
Paul
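P.S. In case it helps, here is a simplified sketch of the current definition and load script. The real column and table names are replaced with stand-ins (account_no, txn_date, txn_amount, subset_table_01):

CREATE SET TABLE master_table (             -- SET: every inserted row is checked against existing duplicates
   account_no  INTEGER NOT NULL,
   txn_date    DATE NOT NULL,
   txn_amount  DECIMAL(18,2)
)
PRIMARY INDEX (account_no)
PARTITION BY RANGE_N (
   txn_date BETWEEN DATE '2000-01-01' AND DATE '2025-12-31'
   EACH INTERVAL '1' YEAR                   -- annual partitions
);

COLLECT STATISTICS ON master_table COLUMN (account_no);
COLLECT STATISTICS ON master_table COLUMN (PARTITION);

-- BTEQ step, run once per 4-10M row segment:
INSERT INTO master_table
SELECT * FROM subset_table_01;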