I am not able to use the enclosedby and escapedby arguments in the Teradata Hadoop connector; I get the following error when I pass these arguments. Here I am trying to set enclosedby to a double quote and escapedby to a forward slash. The error goes away when I remove the enclosedby and escapedby arguments.
31 Aug 2016
Hi,
23 Sep 2015 | 1 comment
I just upgraded to v15.10 of Studio Express 64-bit. During a load operation, where I right-clicked on a table in the DSE and used Load, the log indicates:
    Load Task
    Loading data...
    Starting Load...
    Load Successful.
    367075 Rows Processed
    367075 Rows Loaded
04 Jun 2015 | 12 comments
I need to load from Table 1 to Table 2. Which utility works well here, and why?
27 Jan 2015 | 3 comments
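For a copy between two tables on the same Teradata system, the usual answer is plain SQL rather than a load utility: an INSERT...SELECT keeps the data inside the database and runs fully in parallel, while FastLoad, MultiLoad, and TPT are designed for moving external data in and out. A minimal sketch, with illustrative table names:

    /* Copy all rows from Table 1 into Table 2 on the same system. */
    INSERT INTO sandbox.table2
    SELECT * FROM sandbox.table1;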
I have a scenario at hand:
30 Jun 2014 | 4 comments
I have a CSV file that I'm attempting to load via TPT. I have created the CSV file in Excel. When I try to load the file with the appropriate number of delimiters, I am getting a BUFFERMAXSIZE error. When I add another delimiter to the end of each record, the file loads just fine.
18 Mar 2014 | 9 comments
Hi everyone. I'm trying to load SQL Server tables to Teradata 14 using OLE DB via TD's OleLoad tool. I'm having trouble with attributes defined as VARCHAR(MAX) in SQL Server; it seems that this is a LOB data type. Here is the script that OleLoad is generating:
30 Jul 2013 | 4 comments
Hi,
15 Jul 2013 | 13 comments
To implement SCD Type II using MultiLoad, we need to know the primary key of the data. Comparing the primary key of the source with the corresponding target column tells us whether the data already exists in the target. The entire method can then be done in two steps, as sketched below.
30 May 2013
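A minimal SQL sketch of those two steps, with illustrative table and column names (stg holds the incoming rows, dim is the Type II target). MultiLoad can apply an UPDATE and an INSERT against the same input in a single import task, which is what makes it a natural fit here:

    /* Step 1: close out the current version of any key that arrives again. */
    UPDATE dim
    SET    end_date = CURRENT_DATE,
           current_flag = 'N'
    WHERE  dim.cust_id IN (SELECT cust_id FROM stg)
      AND  dim.current_flag = 'Y';

    /* Step 2: insert every incoming row as the new current version. */
    INSERT INTO dim (cust_id, cust_name, start_date, end_date, current_flag)
    SELECT cust_id, cust_name, CURRENT_DATE, NULL, 'Y'
    FROM   stg;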
As most of you might agree, managing our collections of digital pictures is becoming quite a challenge. The number of photos continues to increase and now includes pictures from cameras as well as multiple mobile devices. And to add to my troubles, I find that I have duplicate copies in different folders and on different computers. Getting this organized is becoming a high priority. Sure, there are management solutions already available, but hey, we're tech people and it's more fun to build our own! With the free Teradata Express database and some Java coding, we have the right tools to get started.
27 Jun 2011
While loading rows into a table, FastLoad displays, by default, an output message for every 100,000 rows loaded, for example:
    **** 14:47:20 Starting row 100000
    **** 14:47:21 Starting row 200000
    **** 14:47:22 Starting row 300000
    ...
If the number of rows being loaded is less than 100,000, FastLoad will not output any of these messages. These messages serve as heartbeats, indicating that a long-running FastLoad job is still working. That was fine for jobs decades ago; however, for today's jobs, where millions (and even billions) of rows are being loaded, this much output may not be sensible. Can we imagine what the console output would look like at the default message rate of every 100,000 rows? A one-billion-row load would print 10,000 of these lines.
18 Aug 2010 | 7 comments
This presentation describes, in detail, the various load utilities supported by Teradata.
15 Jul 2010 | 1 comment
Teradata Parallel Transporter is the best-performing and recommended load/unload utility for the Teradata Database. After watching this presentation, you will learn...
08 Feb 2010
This book provides reference information about BTEQ (Basic Teradata Query), a general-purpose, command-based report and load utility tool. BTEQ provides the ability to submit SQL queries to a Teradata Database in interactive and batch user modes, then produce formatted results.
02 Feb 2010 | 1 comment
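A minimal BTEQ script showing both roles; the logon string, database, and file names below are placeholders:

    .LOGON mysystem/myuser,mypassword

    /* Submit a query and write a formatted report to a file. */
    .EXPORT REPORT FILE = orders_report.txt
    SELECT order_id, order_date, amount
    FROM   sandbox.orders
    ORDER  BY order_date;
    .EXPORT RESET

    .LOGOFF
    .QUIT

The same file works in both modes: typed interactively at the BTEQ prompt, or run in batch with input redirection, e.g. bteq < orders.bteq > orders.log.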
This book provides information on how to use Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or third-party products.
01 Feb 2010 | 1 comment
This book provides information on how to use the Teradata Parallel Transporter (Teradata PT) Application Programming Interface. There are instructions on how to set up the interface, add checkpoint and restart support, and report errors, along with code examples.
01 Feb 2010
This book provides reference information about the components of Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or with third-party products.
31 Jan 2010 | 1 comment
This book provides reference information about the components of Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or with third-party products.
19 Jan 2010 | 2 comments
Hadoop systems [1], sometimes called MapReduce systems, can coexist with the Teradata data warehouse, allowing each subsystem to be used for its core strength when solving business problems. Integrating the Teradata Database with Hadoop turns out to be straightforward using existing Teradata utilities and SQL capabilities. There are a few options for directly integrating data from a Hadoop Distributed File System (HDFS) with a Teradata Enterprise Data Warehouse (EDW), including using SQL and FastLoad. This document focuses on using a table function UDF to both access HDFS data and load it into the Teradata EDW. In our examples, there is historical data already in the Teradata EDW, presumably derived from HDFS, for trend analysis. We will show examples where the table function UDF approach is used to perform inserts or joins from HDFS into the data warehouse.
19 Oct 2009 | 7 comments
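A minimal sketch of the table function approach, assuming a UDF (given the hypothetical name hdfs_read here) has been installed that fetches a file from HDFS and returns its records as rows; the table, column, and host names are likewise illustrative:

    /* Insert HDFS records straight into the warehouse... */
    INSERT INTO edw.sales_history
    SELECT t.store_id, t.sale_date, t.amount
    FROM TABLE (hdfs_read('hdfs://namenode:8020/data/sales.txt')) AS t;

    /* ...or join them against history already in the EDW without landing the file. */
    SELECT h.store_id, h.prior_total, t.amount
    FROM TABLE (hdfs_read('hdfs://namenode:8020/data/sales.txt')) AS t
    JOIN edw.sales_history h ON h.store_id = t.store_id;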