tmrmodest1 5 posts Joined 03/10
27 Mar 2010
usage of check point in fastload

Hi,
A checkpoint is used to resume a paused job from the point where it was paused after an error caused by the client or by the RDBMS. Am I right with that definition?
Assume checkpoints are placed every 50,000 records and my job stops at record 60,000. When I resume the job, will loading restart from record 60,001 or from 50,001? If it restarts from 50,001, won't the 10,000 records already loaded create duplicate errors? What should be done so that loading resumes from record 60,001?

Thanks in advance,
Sathish.S

dnoeth 4628 posts Joined 11/04
27 Mar 2010

If Teradata restarts during a FastLoad, the FastLoad client just waits for the DBMS to recover and then finishes the load without resending data.

If the FastLoad job itself has to be restarted (e.g. FastLoad crashes or the target database is full), then it will start sending data from the last checkpoint, in your example from record 50,001.

You don't have to worry about duplicate rows, because FastLoad can't load them (even if the target table is MULTISET); it simply discards them.
Only violations of a unique primary index will be inserted into the UV error table.

After the End Loading phase there's info about records read/applied/in error, plus the total number of duplicate rows.
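
A rough FastLoad script sketch, just to show where the checkpoint interval is declared; database, table, column, and file names below are placeholders, not anything from this thread:

LOGON tdpid/loaduser,password;
DATABASE mydb;

SET RECORD VARTEXT "|";
DEFINE col1 (VARCHAR(10)),
       col2 (VARCHAR(20))
FILE = /data/input.txt;

/* take a checkpoint every 50,000 records; the two error tables are mandatory */
BEGIN LOADING mydb.target_tbl
   ERRORFILES mydb.target_et, mydb.target_uv
   CHECKPOINT 50000;

INSERT INTO mydb.target_tbl VALUES (:col1, :col2);

END LOADING;
LOGOFF;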

Dieter

kattamadhu 6 posts Joined 02/11
16 Feb 2012

Hi Dieter

A checkpoint is used to resume a paused job from the point where it was paused after an error caused by the client or by the RDBMS. Am I right with that definition?

Assume checkpoints are placed every 50,000 records and my job stops at record 60,000. When I resume the job, will loading restart from record 60,001 or from 50,001? If it restarts from 50,001, won't the 10,000 records already loaded create duplicate errors? What should be done so that loading resumes from record 60,001 in MultiLoad if the table is MULTISET?

Can you please explain the above concept for MultiLoad when the table is MULTISET?

MULTISET: a MULTISET table will actually allow duplicates, right?

 

Stefans 38 posts Joined 02/12
16 Feb 2012

At each checkpoint an entry is made in the SYSADMIN.FASTLOG table. In your case FastLoad will resume loading from the first row following the last successful checkpoint (50,000), i.e. row 50,001. This applies during the data acquisition phase. If the job is interrupted in the application phase, just resubmit the FastLoad script with only the BEGIN LOADING and END LOADING statements.

You can also perform a manual restart using the RECORD command, e.g. RECORD 60001, assuming 60,000 was the last record processed according to the SYSADMIN.FASTLOG table. This skips the first 60,000 records and starts from record 60,001.
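
To illustrate both cases, a hedged sketch (table, error-table, and file names are placeholders): for an application-phase interruption the resubmitted script is reduced to the loading bookends, while for a manual acquisition-phase restart the full original script is resubmitted with a RECORD statement added before the INSERT.

/* Case A: interrupted in the application phase - resubmit only this,
   with the target table and error tables left untouched */
LOGON tdpid/loaduser,password;
BEGIN LOADING mydb.target_tbl
   ERRORFILES mydb.target_et, mydb.target_uv;
END LOADING;
LOGOFF;

/* Case B: manual restart in the acquisition phase - resubmit the full
   original script and add this statement (before the INSERT) so the
   first 60,000 input records are skipped and reading starts at 60,001 */
RECORD 60001;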

Stalin

kattamadhu 6 posts Joined 02/11
16 Feb 2012

Hi

Stalin, thanks for your answer.

FastLoad won't load duplicates, right, whether the table is SET or MULTISET?

But that is not my case; I want to know what will happen with MultiLoad on a MULTISET table.

 

feinholz 1234 posts Joined 05/08
16 Feb 2012

The DBS will always eliminate the duplicate rows from SET and MULTISET tables for FastLoad jobs. In other words, on a FastLoad job, you can send duplicate rows to the Teradata database, but the database will discard them.

The Teradata database will not discard duplicate rows being sent from MultiLoad jobs when MultiLoad is loading a MULTISET table.

These are DBS rules, not client utility rules.
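
To make the MultiLoad side of this concrete, here is a rough import-task sketch against a MULTISET target (all object, layout, and file names are placeholders); because the target is MULTISET, any duplicate rows sent through this job are stored, not discarded:

.LOGTABLE mydb.ml_target_log;
.LOGON tdpid/loaduser,password;

/* take a checkpoint every 50,000 records */
.BEGIN MLOAD TABLES mydb.target_ms CHECKPOINT 50000;

.LAYOUT inlayout;
.FIELD col1 * VARCHAR(10);
.FIELD col2 * VARCHAR(20);

.DML LABEL ins_tgt;
INSERT INTO mydb.target_ms (col1, col2) VALUES (:col1, :col2);

.IMPORT INFILE /data/input.txt
        FORMAT VARTEXT '|'
        LAYOUT inlayout
        APPLY ins_tgt;

.END MLOAD;
.LOGOFF;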

 

--SteveF

Cvinodh 32 posts Joined 10/11
05 Sep 2012

That FastLoad will not load duplicate data into either a SET or a MULTISET table is perfectly understandable.

But what would happen if the target table is a NoPI table?

Does the rule still apply?

If so, how is it implemented?

 

 

feinholz 1234 posts Joined 05/08
12 Sep 2012

FastLoad can load duplicate rows into NoPI tables.

Normally, the DBS throws away duplicates during the sort (in the Application Phase).

NoPI tables are always MULTISET tables, but since there is no sort on NoPI tables, duplicate rows are not discarded by the DBS.
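
For completeness, a hypothetical NoPI target would be declared along these lines (names are placeholders); with no primary index there is no row-hash sort in which duplicate rows could be detected and thrown away:

CREATE MULTISET TABLE mydb.target_nopi (
   col1 VARCHAR(10),
   col2 VARCHAR(20)
) NO PRIMARY INDEX;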

 

 

--SteveF

vasudev 24 posts Joined 12/12
30 Mar 2013

Hi,
While loading an empty table using FastLoad, I got a database-full error during the acquisition phase (Phase 1); I have since added more space to the database. Now what can be done to complete the load process? Is resubmitting the script enough, or does something have to be specified in the BEGIN LOADING command?
Please advise.

ThomasNguyen 30 posts Joined 04/09
01 Apr 2013

Hello,
That is correct; resubmitting the job is enough. Make sure that the error tables and the target table are untouched (i.e. do not drop the error tables or recreate the target table).
Thomas

santhosh24689 1 post Joined 03/13
18 Sep 2013

Hi Feinholz,
Suppose the MLoad completes with some UV count and ET count. What are the steps to restart the MLoad?
Correct me if I am wrong: we need to check the locks on the target table and drop the UV and ET tables.
Thanks in Advance,
Santhosh
 

santhosh kumar

feinholz 1234 posts Joined 05/08
18 Sep 2013

What do you mean by "completed"?
And what do you mean by "restart"?

--SteveF

chill3che 99 posts Joined 10/12
24 Sep 2013

Hi Experts,
Just to confirm: FastLoad and MLoad run on the client side (say checkpoint every 5,000 rows, 8,000 source rows, interrupted at row 6,000). So if Teradata restarts, FastLoad/MLoad need not read a restart log table for checkpoint information, since the clients themselves were not stopped. They simply continue once Teradata is up again, without resending data that has already been sent, i.e. they continue with record 6,001.
If the client is interrupted, then FastLoad/MLoad will read the restart log table and continue from the latest checkpoint, which may resend data that has already been sent, i.e. it will start at record 5,001.
 

Thanks,
Cheeli

feinholz 1234 posts Joined 05/08
25 Sep 2013

The restart table is for ALL restarts, both DBS and client.
If the DBS restarts, all sessions are disconnected. Thus, the client utility must go through its restart logic to connect new sessions, check the availability of the restart log table, and then synchronize with the DBS as to where to restart.
In either case (whether it is a DBS restart or a client restart) the job will resume from record 5001.

--SteveF
