
Using Teradata Parallel Transporter with simplified syntax, we get this error:

TPT02638: Error: Conflicting data length for column(3) - "sr_abo_netto_do_oferty". Source column's data length (16) Target column's data length (8).

 

Source table:

If the TPT attributes are INTEGER SkipRows = 1 and VARCHAR SkipRowsEveryFile = 'N' (and there is more than one file), how will the loader know which file's row is to be skipped? How will the loader behave when it faces such a scenario?

I'm using Teradata Tools and Utilities 15, TPT, and the ODBC operator to move data from SQL Server to Teradata.  Every once in a while I get this error and I don't know why.  If anyone has any experience with this error, it would be appreciated.

 

I needed to unload data from Teradata and escape certain characters.  For example, some character strings contain a pipe symbol ('|'), but I am using the pipe as my delimiter.   The documentation at http:/
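As a minimal sketch of the escaping scheme being described (in Python rather than TPT, with a backslash chosen as a hypothetical escape character), data containing the delimiter can be escaped on unload and restored on read:

```python
# Hypothetical illustration: backslash-escape a pipe delimiter inside
# field values before writing pipe-delimited output, then reverse it.
DELIM = "|"
ESCAPE = "\\"

def escape_field(field: str) -> str:
    """Escape the escape character first, then the delimiter."""
    return field.replace(ESCAPE, ESCAPE + ESCAPE).replace(DELIM, ESCAPE + DELIM)

def unescape_record(line: str) -> list[str]:
    """Split on unescaped delimiters and undo the escaping."""
    fields, current, i = [], [], 0
    while i < len(line):
        ch = line[i]
        if ch == ESCAPE and i + 1 < len(line):
            current.append(line[i + 1])  # take the escaped char literally
            i += 2
        elif ch == DELIM:
            fields.append("".join(current))
            current = []
            i += 1
        else:
            current.append(ch)
            i += 1
    fields.append("".join(current))
    return fields

record = DELIM.join(escape_field(f) for f in ["a|b", "c\\d", "plain"])
print(record)                    # a\|b|c\\d|plain
print(unescape_record(record))   # ['a|b', 'c\\d', 'plain']
```

The same round-trip convention is what an escape-character attribute on a delimited-format reader or writer would implement.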

Hi! I am working on a project where I need to grab data from a remote Teradata instance into a Linux node and then push it back into the Teradata instance after some processing. I am currently using the TD Express VM to store sample data and running my code on another Linux node.

This article describes how to preserve data integrity when a Teradata Parallel Transporter (Teradata PT) export to stream job restarts.

Hi Teradata gurus,
 
I have developed a process in Java that reads some files, adds some fields, and loads the data into pipes for my TPT script.
In my tests I'm loading 6 pipes with 6 separate threads.
To load this I have these Producer operators:

Hello,
I need help with this issue.

Hi Teradata Masters,

I'm trying to provide data to be consumed by a Load operator in my TPT script through a named pipe.
My DATACONNECTOR Producer: (all runs local on Windows)

    DEFINE OPERATOR PIPE_READER()
    DESCRIPTION 'Defines file reading options'
    TYPE DATACONNECTOR PRODUCER
    SCHEMA T701S020_SCHEMA
    ATTRIBUTES
    (

Teradata Parallel Transporter (Teradata PT) has fourteen different operators. Each behaves differently. This article provides a table to help you in selecting the right operator to use for your Teradata PT job. You can view the table as Excel .xls or PDF.

Hi all,

I have to load some data using Teradata Parallel Transporter...

 

I have a record like:

1234ANA MARIA       20111209

 

I need to load something like:

PID: 1234
NAME: ANA MARIA

DATE: 2011-12-09

 

I expected to do something like:
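A minimal sketch of the split being asked for (in Python rather than TPT, with field widths of 4/16/8 assumed from the sample record; in a TPT script the same split would normally be declared as fixed-width columns in the SCHEMA):

```python
# Hypothetical fixed-width parse of the sample record above:
# PID in columns 1-4, NAME in columns 5-20, DATE (YYYYMMDD) in columns 21-28.
from datetime import datetime

record = "1234ANA MARIA       20111209"

pid = record[0:4]
name = record[4:20].rstrip()                      # trim the space padding
date = datetime.strptime(record[20:28], "%Y%m%d").strftime("%Y-%m-%d")

print("PID:", pid)      # PID: 1234
print("NAME:", name)    # NAME: ANA MARIA
print("DATE:", date)    # DATE: 2011-12-09
```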

Hi all,

I got an "EOF encountered before end of record" error when running a TPT script with filename = '*' (running file by file works fine).

Do you want to have your Teradata Parallel Transporter (Teradata PT) Stream operator jobs run faster? Are you having difficulty determining the optimal pack factor for your Stream operator jobs? Knowing how to use the Stream operator’s PackMaximum attribute enables you to determine the optimal pack factor and thus improve the performance of your Stream operator job.
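As a hedged sketch of the approach described above (connection attribute values are placeholders): run the job once with PackMaximum enabled so the Stream operator reports the largest pack factor it can negotiate, then set Pack to that value for production runs.

```
DEFINE OPERATOR STREAM_LOAD()
TYPE STREAM
SCHEMA *
ATTRIBUTES
(
    VARCHAR TdpId        = @TdpId,   /* placeholder job variables */
    VARCHAR UserName     = @User,
    VARCHAR UserPassword = @Pass,
    VARCHAR PackMaximum  = 'Yes'     /* initial run: report optimal pack */
    /* production runs: replace with INTEGER Pack = <reported value> */
);
```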

We have made great strides in improving our handling of delimited data (i.e. CSV data) in Teradata Parallel Transporter for the TTU14.00 release. This article will describe the background of the data format, the original support, and the enhancements we have made.

Hi,
I am new to TPT and am calling TPT 13.10 from the mainframe. I get the error TPT_INFRA: TPT02040: Error: Insufficient main storage to allocate 973078581 bytes for a buffer for the name of an INCLUDE file.

Job script preprocessing failed.

I am calling a simple DDL Operator to create a table.

Teradata Parallel Transporter is a really nice tool which offers new capabilities compared to the existing "old" Teradata load utilities. One of its many advantages is the ability to move data without landing it to disk (see this article). There are also other posts and blogs which give good introductions to TPT.

But as TPT needs a different scripting syntax, it is sometimes not easy to get started with this great tool. Therefore we developed and attached (zip download) a wrapper-like Java tool which should help you get a first impression of the new capabilities without needing to care about TPT syntax.

The main functionality is to copy a table from one system to a second system in a FastExport-to-pipe-to-FastLoad style. The tool checks the table DDL, generates the TPT script, and executes the needed scripts.

This series of articles is meant to familiarize people with various capabilities of the Teradata Parallel Transporter product.  Many aspects of Parallel Transporter will be covered including an overview, performance considerations, integration with ETL tools and more.

I had been doing Teradata export and import through named pipes until I used TPT. Now, after learning how to write a TPT script, I have been importing directly from the Export operator in TPT.

This book provides information on how to use Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or third-party products.

Provides a detailed demonstration of the features and functions of the Teradata Parallel Transporter (TPT) product. It explores the product capabilities and demonstrates how the product fits into the Teradata product environment as a load/unload tool, with substantial implications for the throughput and performance expectations of an Active Data Warehouse. In addition to the many special purpose ‘operators’ provided by TPT, we look at the peripheral features which make it a seamless fit into an enterprise environment. The benefits and flexibilities of a single scripting language to support all loading and unloading functions are also demonstrated, along with many practical applications of the product to real customer requirements.

The Teradata Parallel Transporter (Teradata PT) operators play a vital role in high-speed data extraction and loading geared towards the Teradata Database. Besides interfacing with the Teradata Database, some of the Teradata PT operators provide access to external sources such as files, ODBC-compliant DBMS, and message-oriented middleware.

This article describes usage tips on how to load/unload Unicode data with the UTF8 and UTF16 Teradata client session character sets using Teradata Parallel Transporter (TPT).

As of this writing, Teradata Parallel Transporter supports Unicode only on network-attached platforms. 
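As a hedged sketch of how the client session character set is declared (the job name is a placeholder), the setting can be stated at the top of the TPT job script so that it applies to every operator in the job:

```
USING CHARACTER SET UTF8
DEFINE JOB LOAD_UNICODE_DATA
(
    /* DEFINE OPERATOR ... and APPLY statements as usual; all
       operators in the job then run with the UTF8 client
       session character set. */
);
```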

 

With traditional Teradata utilities such as Fastload, Multiload, and TPump, multiple data files are usually processed in a serial manner. For example, if the data to be loaded into the Data Warehouse reside in several files, they must be either concatenated into a single file before data loading or processed sequentially on a file-by-file basis during data loading.

In contrast, Teradata Parallel Transporter (TPT) provides a feature called “directory scan” which allows data files in a directory to be processed in a parallel and scalable manner as part of the loading process. In addition, if multiple directories are stored across multiple disks, a special feature in TPT called “UNION ALL” can be used to process these directories of files in parallel, thus achieving more throughput through scalability and parallelism across disks.
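A hedged sketch of a directory scan (the schema name and directory path are placeholders): setting FileName = '*' makes the DataConnector producer read every file in the directory, distributing the files across its instances in parallel.

```
DEFINE OPERATOR DIR_SCAN_READER()
TYPE DATACONNECTOR PRODUCER
SCHEMA SOURCE_SCHEMA
ATTRIBUTES
(
    VARCHAR DirectoryPath = '/data/incoming/',  /* hypothetical path */
    VARCHAR FileName      = '*',   /* scan all files in the directory */
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = '|'
);
```

For the multi-directory case, one such producer per directory can be combined with UNION ALL in the APPLY statement so the directories are read concurrently.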