soni.karn 1 post Joined 08/15
19 Aug 2015
Load file

I am new to teradata and I am trying to load a large file (16GB) into a table using fastload. I only have this message in the execution:
'Empty file on read open:'
What could be the reason?

feinholz 1234 posts Joined 05/08
19 Aug 2015

If you are new to Teradata, then you should be using TPT to load data into Teradata and not FastLoad.
TPT (Teradata Parallel Transporter) is the load/unload utility moving forward.
FastLoad is a capped legacy utility, and we do not advocate that users who are new to Teradata pick up the older utilities.
 

--SteveF

Tuen 44 posts Joined 07/05
19 Aug 2015

@feinholz, if you want folks to move to TPT, you really need to fix it so it can consume standard input like BTEQ, FastLoad, FastExport, and MultiLoad can. That flexibility in how the tools can be used is what has made FastLoad and the other "legacy" tools so good, and it would make it much easier to script jobs and fit them into existing processes. As it stands, until that happens we would not convert to TPT, because it would require massive rewrites of existing processes and change our entire standards for how we build scripts. Being able to inline the TPT job in a shell script would be huge in our environment. The fact that TPT can't be used from Java is another downside: you can still use the FastLoad/FastExport protocols from JDBC, so it would be nice if Java could use the TPT load operators as well.
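To illustrate the inline pattern we rely on, here is a rough sketch of a FastLoad job embedded in a shell script via a here-document (all names here are hypothetical; this needs the Teradata client utilities installed):

```
#!/bin/sh
# Inline FastLoad job via a here-document -- the legacy-tool pattern
# that lets a generator script build the whole job in place.
fastload <<'EOF'
LOGON mytdpid/myuser,mypassword;
SET RECORD VARTEXT ",";
BEGIN LOADING mydb.mytable ERRORFILES mydb.et_mytable, mydb.uv_mytable;
DEFINE col1 (VARCHAR(20)),
       col2 (VARCHAR(20))
  FILE = data.txt;
INSERT INTO mydb.mytable VALUES (:col1, :col2);
END LOADING;
LOGOFF;
EOF
```

Because the script body is just text in the shell script, our generators can substitute table names, file paths, and column lists at run time without managing separate script files.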

feinholz 1234 posts Joined 05/08
19 Aug 2015

My post was directed at someone who was "new to Teradata".
Thus, that implies they have not used the legacy utilities.
For those customers, they should be using TPT and not even start using FastLoad.
For existing customers, we are not advocating that they migrate from the legacy utilities to TPT (that was a message we tried to deliver 15+ years ago, but we do not imply that now). We advocate that "new" jobs being developed should use TPT.
As for TPT, there is no need to "fix" TPT, it is not broken.   :)
TPT uses a standard interface for providing input (in my opinion, the method used by the legacy utilities is non-standard, just the way users are used to our utilities).
I cannot speak for your environment, but if you take a look at all that TPT has to offer (use of job variables and job variable files) you will see that TPT's way of doing things is just as flexible (if not more) than what you do. Users who develop scripts to run their TPT jobs now just use those scripts for generating metadata; TPT scripts can be written in a general way so that the same script can handle many different loading scenarios (just the metadata changes).
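As a sketch of what I mean by a "general" script, the job below uses the standard operator templates and takes all of its metadata from a job variables file (the names of the variables file and its values are hypothetical):

```
/* load.tpt -- generic load job; all metadata comes from job variables */
DEFINE JOB generic_load
(
  APPLY $INSERT TO OPERATOR ($LOAD)
  SELECT * FROM OPERATOR ($FILE_READER);
);
```

```
/* jobvars.txt -- one variables file per loading scenario */
TargetTable      = 'mydb.mytable',
SourceFileName   = 'data.dat',
SourceFormat     = 'Formatted'
```

The same script then serves many scenarios; you run it with something like `tbuild -f load.tpt -v jobvars.txt`, swapping in a different variables file for each job.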
As for Java, TPT does not use "the fastload/fastexport protocols from JDBC" (it is actually the other way around; the utilities and their protocols came first; our JDBC driver's adoption of those protocols came later).
 
Again, I am not advocating that you switch (although it would be my preference). My post was directed at someone who is "new to Teradata".
 

--SteveF

mm185159 21 posts Joined 03/11
20 Oct 2015

So, I found some obscure information in the FastLoad reference manual stating that FastLoad cannot support data files larger than 2 GB on Windows systems because of the old 32-bit architecture. The workaround was to use the "LARGE FILE" AXSMOD. Unfortunately, the AXSMOD source code is missing, or at least no longer available anywhere I can find. And, yes, I eventually was able to convert to a relatively simple TPT load script; however, even then, the documentation for loading a FastLoad-formatted FastExport data file with indicators was actually incorrect. So, if you are trying to load a data file with indicators for use by the LOAD operator, be sure to use the two attributes Format and IndicatorMode together, as Format='Formatted' and IndicatorMode='Yes'. The documentation is inaccurate: it indicates that IndicatorMode='Yes' can only be used with Format='Text' or 'Unformatted', which is not true.
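For anyone hitting the same thing, the relevant piece of my DataConnector producer definition looked roughly like this (operator and file names are hypothetical; the two attributes to note are Format and IndicatorMode):

```
DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA EXPORT_SCHEMA
ATTRIBUTES
(
  VARCHAR FileName      = 'export.dat',   /* file written by FastExport */
  VARCHAR OpenMode      = 'Read',
  VARCHAR Format        = 'Formatted',    /* FastLoad/FastExport record format */
  VARCHAR IndicatorMode = 'Yes'           /* file carries NULL indicator bytes */
);
```

With both attributes set, the LOAD operator consumed the indicator-mode FastExport file without complaint, despite what the 15.0 documentation said.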
If anyone knows where the Large File Access Module can be found, please respond to this thread!
Regards,
Mike M.

feinholz 1234 posts Joined 05/08
20 Oct 2015

Yeah, we noticed that documentation error, but after the 15.0 docs went out the door.
The doc is fixed in the 15.10 Reference Manual and says:
 
'Y[es]' = indicator mode data. This value is not valid for the ‘text’ or
‘delimited’ record formats.
 
As for reading files that are larger than 2GB on Windows platforms, I am looking into whether that is still true.

--SteveF

feinholz 1234 posts Joined 05/08
21 Oct 2015

I checked with the FastLoad developer and he claims that FastLoad can load a file that exceeds 2GB on 32-bit Windows platforms.
 

--SteveF

mm185159 21 posts Joined 03/11
21 Oct 2015

Thanks Steve,
You might ask your developer/engineer about this: if I extract a data file from the source with a limited record count so that the file size is just under the 2GB threshold, FastLoad reads and loads the data just fine. However, when I extract enough rows to just exceed the 2GB file size, FastLoad responds with the "empty file" message and ends with no rows read from the file, 0 rows loaded to the database, and a completion code of 0 (no errors). Same data source, same target table, just a change in row count to manipulate the source file size.
Mike

feinholz 1234 posts Joined 05/08
23 Oct 2015

Mike,
 
What version of the DataConnector are you using? (The FastLoad output will show it if you include "SHOW VERSIONS" in the script.)
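For example, putting the command at the top of the script (logon values here are placeholders) prints the version of each module, including the DataConnector, in the job output:

```
SHOW VERSIONS;   /* lists FastLoad and supporting module versions */
LOGON mytdpid/myuser,mypassword;
```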
 
This issue has been fixed in:
 
DataConnector 14.10.00.008
DataConnector 15.00.00.002
 

--SteveF
