

Hi, I receive error message 8017: "The UserId, Password or account is invalid" while trying to load data using FastLoad. All the files are saved on the Desktop.
********** FastLoad Script **************




Facing the following issue with FASTLOAD:
RECORD is too long by n byte(s)

Hello Everyone,
I have a question on MultiLoad & FastLoad.
Let's assume that I have an empty table and I am trying to load a file into it using MultiLoad or FastLoad. Based on the knowledge below, I want to determine which of these utilities will perform better in the application phase.

I am using UTF8 as the charset in FastLoad, and any record containing a special character (such as a box character) goes to the error table (ET). Please refer to the attachment for a sample rejected value. Kindly help me handle this situation; we need to load the data exactly as it is in the source.

I have the following fastload script


                                               FIELD2                  VARCHAR(25) CHARACTER SET UNICODE CASESPECIFIC ,

Issue description:
When a job that is loading an empty table through FastLoad functionality (Teradata Connector) aborts, it leaves the table in an irrecoverable lock state, which cannot be cleared using an empty FastLoad, and whose only workaround is to drop and recreate the table.

We have an application that spawns multiple FastLoad sessions to move data from Microsoft SQL Server to Teradata concurrently. We are running the application on Microsoft Windows Server 2012 R2 and have observed random errors where some FastLoad sessions end up in a hung state.

I am using JDBC FastLoad to transfer data between different Teradata environments.
I have been hitting an error message:

While executing FastExport, we have an option to create a MultiLoad script by providing MLSCRIPT. Do we have a similar option for FastLoad? Can someone help me find out if there is one, or let me know why we don't have one for FastLoad?

Hello All,
Teradata utilities are specialised tools for loading or exporting huge volumes of data compared to conventional SQL tools like SQL Assistant & BTEQ. I wish to learn the fundamental reason for the performance gain between them. I have provided my understanding below and would appreciate any additions/corrections.

Hello All,
With the script below I exported the data using FastExport and tried to import it using FastLoad. While doing so, I am getting an error in FastLoad. Please help me understand where I am going wrong.
FastExport Script
.LOGTABLE DB.errors;
.LOGON jugal/jbhatt,bhatt;

I have exported data from one server with a '|' delimiter, and am now trying to load the same data to another server using FastLoad. During the load, I am getting the error below.
" Error on piom GET ROW: 60, Text: Column length error, row  not returned"

**** 14:24:38 Number of recs/msg: 77

Hi All,
Below is the script for the fastload:
.LOGON home/jbhatt,jugal;
DEFINE col1 (VARCHAR(50)),
        col2 (VARCHAR(50)) 
FILE= /home/jugal/finsert.txt;
insert into DB.tables1 values(:col1,:col2);
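For reference, a complete FastLoad job around a DEFINE like the one above typically also needs SET RECORD, BEGIN LOADING ... ERRORFILES, END LOADING, and LOGOFF. A hedged sketch (the delimiter, error-table names, and logon string are assumptions):

```
SET RECORD VARTEXT ",";
.LOGON home/jbhatt,jugal;
BEGIN LOADING DB.tables1 ERRORFILES DB.tables1_e1, DB.tables1_e2;
DEFINE col1 (VARCHAR(50)),
       col2 (VARCHAR(50))
FILE=/home/jugal/finsert.txt;
INSERT INTO DB.tables1 VALUES (:col1, :col2);
END LOADING;
.LOGOFF;
```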

Hi All,
I am having one problem.
I know a BTET transaction is "all or none". If I submit a multistatement transaction and any one statement fails, the entire transaction rolls back in BTET mode. However, in ANSI mode with COMMIT, only the failed statement rolls back while the others commit.
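A minimal BTEQ sketch of that ANSI-mode behaviour (the logon string and table are placeholders):

```
.SET SESSION TRANSACTION ANSI;   /* must be issued before .LOGON */
.LOGON tdpid/user,password;
INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (1);       /* if this fails (e.g. a uniqueness violation),
                                    only this statement rolls back */
COMMIT;                          /* the first INSERT still commits */
.LOGOFF;
```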

I have data that is formatted like

Fri Dec 20 15:15:18 +0000 2013

I want to FastLoad this data into Teradata into a TIMESTAMP column, but I am not sure of the format.
I tried the below, but it didn't work.
,eny_ts TIMESTAMP format 'dddbmmmbddbhh:mi:ssbhhhhbyyyy'
Any help would be really appreciated
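One hedged workaround, assuming the fixed-width layout 'Dy Mon DD hh:mi:ss +zzzz yyyy': land the raw value in a VARCHAR staging column first, then rebuild an ANSI-style string and cast it (stg, tgt, raw_ts, and eny_ts are illustrative names):

```
INSERT INTO tgt (eny_ts)
SELECT CAST(
         SUBSTR(raw_ts, 27, 4) || '-' ||          /* '2013'     */
         CASE SUBSTR(raw_ts, 5, 3)                /* 'Dec'->'12' */
           WHEN 'Jan' THEN '01' WHEN 'Feb' THEN '02' WHEN 'Mar' THEN '03'
           WHEN 'Apr' THEN '04' WHEN 'May' THEN '05' WHEN 'Jun' THEN '06'
           WHEN 'Jul' THEN '07' WHEN 'Aug' THEN '08' WHEN 'Sep' THEN '09'
           WHEN 'Oct' THEN '10' WHEN 'Nov' THEN '11' WHEN 'Dec' THEN '12'
         END || '-' ||
         SUBSTR(raw_ts, 9, 2) || ' ' ||           /* '20'       */
         SUBSTR(raw_ts, 12, 8)                    /* '15:15:18' */
       AS TIMESTAMP(0) FORMAT 'YYYY-MM-DDBHH:MI:SS')
FROM stg;
```

Note this drops the '+0000' zone; keeping it would need a TIMESTAMP WITH TIME ZONE target and zone handling in the cast.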

The following batches 5 rows of data, 2 of them with errors. The JDBC FastLoad PreparedStatement.executeBatch attempts to INSERT the batched rows into a database table but throws a JDBC SQLException. Here is how to capture the chain of JDBC FastLoad SQLException messages and stack trace from JDBC FastLoad PreparedStatement.executeBatch and the chain of JDBC FastLoad SQLWarning messages and stack trace from JDBC FastLoad Connection.rollback:
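The chain-walking part can be sketched as below. The class and method names are illustrative only; getNextException/setNextException are standard java.sql API, and the same loop applies to the SQLWarning chain via getNextWarning.

```java
import java.sql.SQLException;

public class ChainWalker {
    /** Print every message in a chained SQLException; returns the chain length. */
    public static int printChain(SQLException ex) {
        int count = 0;
        for (SQLException e = ex; e != null; e = e.getNextException()) {
            System.err.println("SQLState=" + e.getSQLState()
                               + " : " + e.getMessage());
            count++;
        }
        return count;
    }
}
```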

Does fastLoad apply Exclusive lock on target table ? If not what type of lock is applied ?

Insert into emp values
(:FNAME ,.......

The above sample code from TPT works fine. I want to convert null values in the flat file to blanks while loading.

insert into emp values ( COALESCE(:Fname,' '),.... -- Throws ERROR
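Since the error suggests the Load operator's INSERT accepts only plain :column references, one hedged workaround is to load the NULLs as-is and convert them in SQL afterwards (emp/Fname as in the snippet above):

```
/* post-load cleanup: replace NULLs with a blank */
UPDATE emp
SET Fname = ' '
WHERE Fname IS NULL;
```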


The following shows how to batch 2 rows of data, then invoke JDBC PreparedStatement.executeBatch to INSERT them into a database table.

We mostly use FastLoad/MultiLoad for our daily operations. If for some reason PROD goes offline, loads running on the system eventually fail. After that point the table becomes inaccessible; upon browsing, it says "Table being loaded".
Releasing the locks on that table didn't work to regain access to the data.

Hi -
I am new to working with Teradata. I tried searching the forums for how to create a batch file to initiate a MultiLoad script but was not very successful. I would like to use Windows scheduler to kick off a MultiLoad script to populate tables over night.
Does anyone have an example of a batch file initiating MultiLoad?

Can I write a dynamic FastLoad script?
Let's say file1 has to be loaded to table1 and file2 to table2 through a single loading script.
I would like to have just one (single) FastLoad script where I pass the file name and table name as parameters.
Appreciate any response!
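FastLoad itself has no script parameters, but a thin OS wrapper can generate one job per (file, table) pair. A hedged sketch for Unix shells (the logon string, delimiter, and column layout are assumptions; in a real wrapper FILE and TABLE would come from "$1" and "$2"):

```shell
#!/bin/sh
# Parameters (hard-coded here for illustration; take them from "$1"/"$2" in practice)
FILE=/home/jugal/file1.txt
TABLE=DB.table1

# Generate the FastLoad job for this pair
cat > job.fld <<EOF
SET RECORD VARTEXT "|";
.LOGON tdpid/user,password;
BEGIN LOADING ${TABLE} ERRORFILES ${TABLE}_e1, ${TABLE}_e2;
DEFINE col1 (VARCHAR(50)),
       col2 (VARCHAR(50))
FILE=${FILE};
INSERT INTO ${TABLE} VALUES (:col1, :col2);
END LOADING;
.LOGOFF;
EOF

# Then run it:  fastload < job.fld
```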

I wrote a FastLoad script which works fine to load my CSV files. It looks like the following:

I'm Trying to load data from a file into a volatile table. Following my FastLoad Job Script:

Is there any way to measure the total AMPCPUTime consumed by an MLOAD/FLOAD/FEXP job after completion? The values from DBQL (sum of AMPCPUTime for a particular LSN) do not seem right.
I also tried dbc.acctg; there too, the values seem low.

I need to store the value of the DataParcel column (which is of VARBYTE datatype) in FastLoad's first error table into a permanent Teradata table, but in a proper format (the DDL of this table will be exactly the same as the target table, except that all datatypes are VARCHAR).
I know of this potential solution /* from the Teradata manual */:

I have just stepped into the Teradata world.
Have a query on Fastload.
1. Can Fastload be used to Transfer Data from Table A to Table B [Empty table]?
2. If yes, does it need to use the INMOD functionality to read data from Table A?

I am using the FastLoad utility to remove and replace data in a table. If the DELETE command is part of the FastLoad script, is there any potential that another query could read the table and see no rows in the moment between when all the records are removed and when they are reinserted?

Hi Experts,
I'm getting the below error while loading a flat file to a table using the FastLoad facility. I'm new to Teradata and have tried all the options to resolve the issue, but no luck.
My input file (EMP_FLAT) looks like below:
My FL code is:

While monitoring CPU usage from DBQL for each user, grouped by day/hour/statement type,
we find that the [CheckPoint Loading] statement type consumes most of the CPU for any user running utilities, amounting to more than 90% of CPU utilization across the day.
We are using a query like this one:

I am using a FastLoad script to load close to 7 million records into a Teradata table. Around 1k records are landing in the error table (err1), and I am not able to figure out which records have erroneous values.
Can anyone share how to approach such a situation?
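One common starting point, assuming a standard FastLoad first error table (err1 here): its ErrorCode, ErrorFieldName, and DataParcel columns identify why and where each row was rejected.

```
/* summarize rejects by error code and offending field */
SELECT ErrorCode, ErrorFieldName, COUNT(*) AS rejected_rows
FROM err1
GROUP BY 1, 2
ORDER BY 3 DESC;
```

DataParcel holds the raw rejected record (VARBYTE), which can be inspected to see the original input values.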


Hi All,


Is it possible to have an 'Insert/Select' statement in the FastLoad utility?

fastload << !

<load empty TableB through file>

Insert into TableA

select * 

Thanks for your help.



I'd like to know how these 2 utilities differ behind the covers. I am aware of the glaring differences: empty vs. non-empty tables, single vs. multi-table, upserts, FastLoad locking, etc.

I'm using the TPT Wizard with the TPT Load operator (FastLoad) to generate scripts. It works fine for most of the tables, but it throws the above error for a couple of them. If I instead use the old Load and generate a MultiLoad script, it works for all the tables.
TPT Load Script

I am running into a timestamp conversion error when loading into a TIMESTAMP WITH TIME ZONE column. The column in the table is defined as
snippet of fastload:


        device_dt_ut (VARCHAR(32)),



I'm trying to load a table dump exported with FastExport, but I get the following error:

The length of: PREFIX in row: 1 was greater than defined.
              Defined: 3, Received: 3072

Here are my fastexport and fastload scripts:

I've got FastLoad opening a named pipe, and I'm redirecting the output from a SQL*Plus script to the pipe in Windows. I assumed this would be far faster than having SQL*Plus spool the result to a local file and FastLoad import the flat file. It turns out it's not.


I have a .txt file which is pipe separated and it has 543895 rows. Duplicate records are avoided by putting a serial number at the end of every row.

Hi everyone,
I would like to know if it's possible, via some system table (such as dbc.dbqlogtbl or similar), to get the number of sessions opened by a MultiLoad/FastLoad/FastExport/TPT job.
In dbqlogtbl it seems that Teradata records only the parent session, which creates the child sessions...

Teradata has completed a major compiler conversion effort that should be completely transparent to customers, yet they are the main beneficiaries.  This article:

  • Provides some historical background and context,
  • Discusses the reasons we switched compilers,
  • Identifies certain behavioral changes that were unavoidable,
  • And, finally, answers a few technical questions related to the overall process.

The venerable IBM mainframe was the original client platform developed for the Teradata RDBMS via channel-attached connections way back in the 1980s. Although we now support a wide range of client platforms using network-attached connections, a significant segment of our customer base continues to use mainframe clients.

Hello Everyone, I am trying to import data from a flat file into a Teradata table with FastLoad, but after a few seconds it throws an error. The log:


I have been testing the FastExport and FastLoad Teradata tools, and I am finding that data import using FastLoad is very slow for a particular table.

The process is as follows:

I have a ~250 node hadoop cluster containing a large data set that I want to move to Teradata as quickly as possible.  The target Teradata system has ~100 (recent generation) nodes.

I have a number of delimited files, each of the same format and each has a date as a part of the filename.


Is there a way, using FastLoad, that I can add a column to the data during the load that contains the date portion of the file name?


Can anyone provide information on the encryption strength of BTEQ/FastLoad/MultiLoad etc. when DATAENCRYPTION is ON?

Teradata FastLoad has a feature named Tenacity that allows the user to specify the number of hours that FastLoad keeps trying to log on when the maximum number of load operations is already running on the Teradata Database.

By default, the Tenacity feature is not turned on. It is enabled by the script command:
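A hedged sketch of the relevant FastLoad script commands (the values and logon string are examples only):

```
TENACITY 4;    /* keep retrying the logon for up to 4 hours */
SLEEP 6;       /* wait 6 minutes between logon attempts */
SESSIONS 8;
.LOGON tdpid/user,password;
```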