
I am not able to use the enclosedby and escapedby arguments in the Teradata Hadoop connector. I get the following error when I pass these arguments. Here I am trying to set enclosedby to a double quote and escapedby to a forward slash. The error goes away when I remove the enclosedby and escapedby arguments.

Hi,
I have TD Studio 15.10, TD DB 15.0, HDP 2.0, and Aster 6.0. They are all connected in TD Studio and I can see the tables.
I'm running some tests on Aster against an HDP table, but when I execute the query:
 
SELECT * FROM load_from_hcatalog
(USING server('192.168.100.131')
port('9083')
username('root')
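
For reference, a complete load_from_hcatalog call also names the Hive database and table to read; assuming the remaining arguments are dbname and tablename (the values below are placeholders), the full invocation would look like this:

SELECT * FROM load_from_hcatalog
(USING server('192.168.100.131')
port('9083')
username('root')
dbname('default')        -- placeholder: Hive database that holds the table
tablename('my_table')    -- placeholder: HCatalog table to read
);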

I just upgraded to v15.10 of Studio Express 64-bit.  During a load operation, in which I right-clicked a table in the DSE and chose Load, the log indicates:

Load Task

Loading data...

Starting Load...

Load Successful.

367075 Rows Processed

367075 Rows Loaded

 

Need to load from Table 1 to Table 2. Which utility works well here, and WHY?
Please explain the limitations and advantages of each utility.
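
If Table 1 and Table 2 are on the same Teradata system, one baseline worth comparing the utilities against is a plain INSERT ... SELECT, which keeps the copy entirely inside the database (table names here are placeholders):

INSERT INTO Table2
SELECT * FROM Table1;

FastLoad and MultiLoad read their input from client-side files or access modules rather than from another Teradata table, so for a pure table-to-table copy they only come into play if the data is exported first.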

I have a scenario at hand:
Source: 9 Binary Flat Files (From Mainframe Source Systems)
Target: 1 Teradata Table
ETL Operations: Insert / Update / Delete using Informatica Workflows – Teradata MLOAD INSERT / UPDATE Connection String & Teradata MLOAD DELETE Connection String

I have a CSV file that I'm attempting to load via TPT.  I have created the CSV file in Excel.  When I try to load the file with the appropriate number of delimiters, I am getting a BUFFERMAXSIZE error.  When I add another delimiter to the end of each record, the file loads just fine.

Hi everyone.  I'm trying to load SQL server tables to Teradata14 using OleDB via TD's OleLoad tool.  I'm having trouble with attributes defined as VARCHAR(MAX) in SQL server - it seems that this is a LOB data type.  Here is the script that OleLoad is generating:

Hi,
I'm trying to transfer data from a MySQL table into a Teradata table. The export uses the ODBC operator and the import uses the LOAD operator.
All records go into the ET table, usually with 2673 (source parcel length incorrect) errors; some of them fail with a 6760 (invalid timestamp field).
 

To implement SCD Type II using MLoad, we need to know the primary key of the data. Comparing the primary key in the source with the corresponding target column tells us whether the data already exists in the target. The entire method can then be done in two steps, as sketched below.
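
A sketch of those two steps in plain SQL (table and column names here are hypothetical; an MLoad job would apply the same logic with UPDATE and INSERT DML steps):

-- Step 1: close out current target rows whose primary key arrives again in the source.
-- A real job would also compare the non-key attributes and expire only the rows that changed.
UPDATE target_dim
SET end_date = CURRENT_DATE, current_flag = 'N'
WHERE current_flag = 'Y'
  AND pk_col IN (SELECT pk_col FROM source_stg);

-- Step 2: insert the incoming source rows as the new current versions.
INSERT INTO target_dim (pk_col, attr_col, start_date, end_date, current_flag)
SELECT pk_col, attr_col, CURRENT_DATE, DATE '9999-12-31', 'Y'
FROM source_stg;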
 

Hello,

As most of you might agree, managing our collections of digital pictures is becoming quite a challenge.  The number of photos continues to increase and now includes pictures from cameras as well as from multiple mobile devices.  And to add to my troubles, I find that I have duplicate copies in different folders and on different computers.  Getting this organized is becoming a high priority.  Sure, there are management solutions already available, but hey, we're tech people and it's more fun to try to build our own!  With the free Teradata Express database and some Java coding, we have the right tools to get started.

While loading rows into a table, FastLoad by default displays an output message for every 100,000 rows loaded, for example:

**** 14:47:20 Starting row 100000
**** 14:47:21 Starting row 200000
**** 14:47:22 Starting row 300000
...

If the number of rows being loaded is less than 100,000, FastLoad will not output any of these messages.

These messages serve as heartbeats indicating that FastLoad is still making progress during a long-running job.

That was fine for jobs decades ago. For today's jobs, however, where millions (and even billions) of rows are being loaded, this much output may not be sensible. Imagine what the console output would look like at the default message rate of one line per 100,000 rows: a one-billion-row load would print 10,000 of these messages.

This presentation describes, in detail, the various load utilities supported by Teradata.

Teradata Parallel Transporter is the best-performing and recommended load/unload utility for the Teradata Database. After watching this presentation, you will learn...

This book provides reference information about BTEQ (Basic Teradata Query), a general-purpose, command-based report and load utility tool. BTEQ provides the ability to submit SQL queries to a Teradata Database in interactive and batch user modes, then produce formatted results.

This book provides information on how to use Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or third-party products.

This book provides information on how to use the Teradata Parallel Transporter (Teradata PT) Application Programming Interface. It includes instructions on setting up the interface, adding checkpoint and restart support, and error reporting, along with code examples.

This book provides reference information about the components of Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or with third-party products.

Hadoop systems [1], sometimes called MapReduce, can coexist with the Teradata Data Warehouse, allowing each subsystem to be used for its core strength when solving business problems. Integrating the Teradata Database with Hadoop turns out to be straightforward using existing Teradata utilities and SQL capabilities. There are a few options for directly integrating data from a Hadoop Distributed File System (HDFS) with a Teradata Enterprise Data Warehouse (EDW), including using SQL and FastLoad. This document focuses on using a Table Function UDF to both access and load HDFS data into the Teradata EDW. In our examples, there is historical data already in the Teradata EDW, presumably derived from HDFS for trend analysis. We will show examples where the Table Function UDF approach is used to perform inserts or joins from HDFS with the data warehouse.
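
As a rough illustration of that approach, the sketch below uses a stand-in table function named hdfs_load and hypothetical table and column names (sales_history, store_id, amount); the real UDF name and its parameters come from the setup described in the document.

-- Insert rows read from HDFS straight into the warehouse table.
INSERT INTO sales_history
SELECT *
FROM TABLE (hdfs_load('namenode.example.com', '/data/sales/2011.dat')) AS src;

-- Or join the incoming HDFS data against history already in the EDW.
SELECT h.store_id, SUM(src.amount) AS new_amount
FROM sales_history h,
     TABLE (hdfs_load('namenode.example.com', '/data/sales/2011.dat')) AS src
WHERE h.store_id = src.store_id
GROUP BY h.store_id;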