Teradata Data Mover (TDM) is a relatively new product that allows users to copy database objects, such as tables and statistics, from one Teradata Database system to another. TDM can copy join/hash indexes, journals, and triggers as well.

Teradata Data Mover makes moving tables and other objects between Teradata systems easier than ever. However, the initial release, 13.01, does require manual tuning and configuration in order to get the best performance results. Here are some tips and recommendations for improving Data Mover performance.

Teradata Parallel Transporter (Teradata PT) supports moving data from a table on a Teradata Database system, or from an ODBC-compliant RDBMS, to a Teradata table on a different Teradata Database system without landing the data to disk.

There are performance benefits, cost savings, and ease of script maintenance associated with using Teradata PT to move data without landing it to disk.

This article provides accurate information regarding the Teradata Parallel Transporter product, with the goal of clearing up common misunderstandings about the product's basic capabilities.

While loading rows into a table, FastLoad displays, by default, an output message for every 100,000 rows loaded, for example:

**** 14:47:20 Starting row 100000
**** 14:47:21 Starting row 200000
**** 14:47:22 Starting row 300000
...

If the number of rows being loaded is less than 100,000, FastLoad will not output any of these messages.

These messages act as heartbeats, indicating that a long-running FastLoad job is still making progress.

That was fine for jobs decades ago; however, for today's jobs, where millions (and even billions) of rows are being loaded, this much output may not be sensible. Can we imagine what the console output would look like at the default rate of one message per 100,000 rows?
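
As a rough illustration (plain Python, not FastLoad itself), the arithmetic below shows how many heartbeat lines the default rate would produce; the function name is invented for this sketch.

```python
# Sketch: how many "Starting row N" lines FastLoad's default message
# rate of one per 100,000 rows would produce for a given load size.
def heartbeat_lines(total_rows, rate=100_000):
    """Number of progress messages printed at the given rate."""
    return total_rows // rate

# A billion-row load would emit 10,000 progress lines at the default rate,
# while a load under 100,000 rows would emit none.
print(heartbeat_lines(1_000_000_000))  # 10000
print(heartbeat_lines(50_000))         # 0
```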

Teradata 13.0 supports a Java User Defined Function (JUDF) capability for aggregate functions, which produce summary results. In the last article, you were shown how simple it is to create a Table JUDF using Teradata tools. In this article, you will be shown how easy it is to create an Aggregate JUDF using the Teradata Plug-in for Eclipse. The Teradata JUDF Wizard and Editor simplify the process of creating, installing, and editing an Aggregate JUDF.

Teradata 13.0 supports Table Java User Defined Functions (JUDFs). A Table Java User Defined Function returns a table of data to the caller one row at a time. This article will show you how to use the Teradata Plug-in for Eclipse to quickly create a Table Java User Defined Function.

This article will introduce the new features and performance enhancements that have been added to Teradata SQL Assistant 13.10 Edition 2. The focus of this release is on usability and performance.

Please see What’s new in Teradata SQL Assistant 13.10 for details of the new features that were added to the original 13.10 release.

The Teradata Database offers a unique native capability, the Aggregate Join Index (AJI), to help support multi-dimensional Business Intelligence solutions. An AJI is an aggregated result set saved as an index in the database. The Teradata Optimizer will use the AJI automatically when queries frequently reference the same columns and aggregates.

Teradata SQL Assistant Java Edition release 13.0 is available for download. It provides an information discovery tool for retrieving and displaying data from Teradata Database systems. Some organizations publish this application via Citrix XenApp (formerly Presentation Server); with a few adjustments it will load a little faster, which should keep end users from hitting the launch icon in Citrix numerous times.

A Byte Order Mark (BOM) is the Unicode character used to denote the endianness of a text file or stream. This article will explore how the different Teradata Standalone Utilities (FastLoad, MultiLoad, TPump, FastExport) handle this within both their Job Scripts and their Data files.
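
The byte sequences involved are well defined by the Unicode standard, and a minimal sketch of BOM detection (plain Python, not the utilities' own code) looks like this; it covers only UTF-8 and UTF-16, the encodings the article discusses:

```python
# Sketch: map a file's leading bytes to the encoding its BOM implies.
# Only UTF-8 and the two UTF-16 byte orders are handled here.
BOMS = {
    b'\xef\xbb\xbf': 'UTF-8',
    b'\xff\xfe': 'UTF-16LE',   # little-endian
    b'\xfe\xff': 'UTF-16BE',   # big-endian
}

def detect_bom(first_bytes):
    """Return the encoding implied by a leading BOM, or None."""
    # Check longer BOMs first so the 3-byte UTF-8 mark wins over 2-byte marks.
    for bom, encoding in sorted(BOMS.items(), key=lambda kv: -len(kv[0])):
        if first_bytes.startswith(bom):
            return encoding
    return None

print(detect_bom(b'\xef\xbb\xbf.LOGON tdpid/user,pw;'))  # UTF-8
```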

The principal focus of query tuning is to provide reliable summary information about the data to the optimizer. This is done by collecting accurate statistics, which are stored in a synoptic data structure known as an interval histogram. The correct choice of the column and index sets on which statistics are collected can help the optimizer generate better query plans, dramatically improving query performance while reducing collection overhead. It can be difficult to understand how the optimizer uses statistics, and to decide which statistics are needed, without an automated method to help. That automated method is the Teradata Statistics Wizard, a client-based GUI for obtaining statistics recommendations for particular queries or query workloads submitted for analysis.
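
To make the idea of an interval histogram concrete, here is a conceptual sketch in plain Python of an equal-height histogram and the kind of row-count estimate an optimizer can derive from it. The layout and function names are invented for illustration; Teradata's actual histogram structure differs.

```python
# Sketch: an equal-height interval histogram summarizing a column's values,
# recording each interval's maximum value and row count.
def build_histogram(values, intervals=4):
    """Split sorted values into roughly equal intervals of (max, count)."""
    vals = sorted(values)
    size = -(-len(vals) // intervals)          # ceiling division
    return [(chunk[-1], len(chunk))            # (interval max, rows in interval)
            for chunk in (vals[i:i + size] for i in range(0, len(vals), size))]

def estimate_rows_at_most(hist, limit):
    """Estimate rows satisfying `col <= limit` from interval maxima alone."""
    return sum(count for interval_max, count in hist if interval_max <= limit)

hist = build_histogram(range(100), intervals=4)
print(hist)                             # [(24, 25), (49, 25), (74, 25), (99, 25)]
print(estimate_rows_at_most(hist, 49))  # 50
```

The point of the sketch is that a handful of (boundary, count) pairs lets the optimizer estimate selectivity without rescanning the table, which is why accurate, current statistics matter so much to plan quality.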

Teradata Visual Explain adds another dimension to the EXPLAIN modifier by depicting the execution plans of complex SQL statements visually and simply. The graphical view of the statement is displayed as discrete steps showing the flow of data during execution.

Previously, Visual Explain used the Query Capture Database (QCD) to capture query execution plan steps in relational tables for query diagnostics. QCD has performance issues due to the many inserts required into its de-normalized tables. For TD 13.10, the DBS has implemented DBQL and QCD XML plan logging. These enhancements provide additional capabilities to users wishing to tune queries and applications in order to achieve better performance.

For those not already familiar with the Teradata Workload Analyzer (TWA), TWA is one of the products affiliated with TASM (Teradata Active System Management).

The iBatis framework is a lightweight data mapping framework and persistence API. It couples objects with stored procedures or SQL statements using an XML descriptor. The iBatis SQL Map wizard allows you to quickly generate an iBatis SQL map for a given SQL statement or stored procedure.

The Teradata project is an extension of a Java project in Eclipse, and it can be created using the Teradata Project Wizard. This new type of project is set up based on Teradata standards, which can be overridden with preferences. The Teradata project will give you access to Teradata libraries.

In TPump 13.00.00.02 and higher releases the maximum pack factor has been increased from 600 to 2430.

TPump users use "PACK <statements>" in the "BEGIN LOAD" command to specify the number of data records to be packed into one request, where PACK is a TPump keyword and “statements” actually refers to the number of data records to be packed.

Packing improves network/channel efficiency by reducing the number of sends and receives between the application and the Teradata Database.
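
The efficiency gain is easy to quantify. As a rough sketch (plain Python, illustrative arithmetic only, with invented function names), here is how the raised pack factor reduces the number of requests sent to the database:

```python
import math

# Sketch: number of requests (round trips) needed to send a given number
# of data records at a given pack factor.
def requests_needed(records, pack):
    """Each request carries up to `pack` records."""
    return math.ceil(records / pack)

records = 1_000_000
print(requests_needed(records, 600))    # 1667 requests at the old maximum
print(requests_needed(records, 2430))   # 412 requests at the new maximum
```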

The TPT DataConnector Operator is the mechanism by which TPT obtains data from or sends data to external native operating system flat (sequential) files or attached access modules.

This article will address the data formats usable with the TPT DataConnector Operator. For each format, we will first deal with the concept of a record, which roughly maps to a DBS table row. Second, we will address how these data are resolved to row columns using the TPT data schema. When these two requirements are validated, we will have converted the external data record to a Teradata DBS row. If not, we have a fatal data error.
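
The record-to-row resolution described above can be sketched conceptually in plain Python (this is not TPT code; the schema layout and field names are invented for illustration):

```python
import struct

# Sketch: resolve a fixed-length binary record to named row columns using
# a schema, the way a TPT data schema maps an external record to a row.
SCHEMA = [('id', 'i'), ('qty', 'h'), ('code', '4s')]   # struct format codes
FMT = '<' + ''.join(fmt for _, fmt in SCHEMA)          # little-endian layout

def record_to_row(record):
    """Unpack one external record into named columns; reject bad data."""
    if len(record) != struct.calcsize(FMT):
        # A record that cannot be resolved against the schema is fatal.
        raise ValueError('fatal data error: record length mismatch')
    values = struct.unpack(FMT, record)
    return dict(zip((name for name, _ in SCHEMA), values))

row = record_to_row(struct.pack(FMT, 7, 3, b'ABCD'))
print(row)  # {'id': 7, 'qty': 3, 'code': b'ABCD'}
```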

While transactional processing through the use of “message queues” is a common approach in ADW today, the file-oriented approach is also beginning to find its way into ADW due to its inherent simplicity and ease of control. Today, many companies monitor and store thousands, or even hundreds of thousands, of transactions per day across their branches and stores. Transactional data is usually collected and stored as files in directories before being merged into the enterprise-wide data warehouse. In fact, some Teradata sites extract transactional data from message queues, pre-process it, and store it into different directories based on transaction type, in an “active” manner. By “active”, we mean the files are created as transactions are collected.

The Teradata Parallel Transporter (TPT) External Command Interface is a command-based interface that allows users to issue commands to TPT jobs. The term “external commands” carries two important implications. First, users can issue commands to TPT jobs from outside the TPT address space. Second, commands are processed by TPT while it is in the middle of performing ETL operations. In addition, TPT internal components such as operators (which run under different processes) can communicate with each other within a job through the same interface by using commands. As a result, ETL and system “events” are shared not only between TPT and users, but also among TPT components within a job at runtime.

This book provides reference information about BTEQ (Basic Teradata Query), a general-purpose, command-based report and load utility tool. BTEQ provides the ability to submit SQL queries to a Teradata Database in interactive and batch user modes, then produce formatted results.

This book provides information about Teradata MultiLoad, a product that provides an efficient way to deal with batch maintenance of large databases. Teradata MultiLoad is a command-driven utility for fast, high-volume maintenance on multiple tables and views in a Teradata Database.

This book provides information on how to use Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or third-party products.

This book provides information about Teradata Parallel Data Pump (TPump), a data loading utility that helps you maintain (update, delete, insert, and atomic upsert) the data in your Teradata Database. TPump uses standard Teradata SQL to achieve moderate to high data-loading rates.

This book provides information on how to use the Teradata Parallel Transporter (Teradata PT) Application Programming Interface. There are instructions on how to set up the interface, adding checkpoint and restart, error reporting, and code examples.

The Teradata MultiLoad utility provides the capability to perform batch maintenance on tables (insert, update, delete, and upsert). It loads data from external sources and provides the capability to restart jobs interrupted by errors, exceptions, and failures.

This book provides reference information about the components of Teradata Parallel Transporter (Teradata PT), an object-oriented client application that provides scalable, high-speed, parallel data extraction, loading, and updating. These capabilities can be extended with customizations or with third-party products.

In the first article in this series on Teradata error handling, we introduced the larger architecture and load utility environment. In this article, the focus will be on the FastLoad utility and some of the techniques applicable to handling errors, exceptions, and failures that can occur within a high-availability system.

This article will introduce the new features and performance enhancements that have been added to Teradata SQL Assistant 13.10. The focus of this release is on usability and performance.

Workstation BTEQ recently added Unicode support to its list of capabilities. This article will explain how to start a Unicode BTEQ session in both interactive and batch modes. Command line options have been provided to give you control and flexibility to execute BTEQ in various Unicode environments while preserving BTEQ’s legacy behavior.

Teradata PT supports loading Large Object (LOB) data into, and extracting LOB data from, Teradata Database tables. Large Objects are data types that can be Binary Large Objects (BLOBs) or Character Large Objects (CLOBs).

The Teradata Parallel Transporter (Teradata PT) operators play a vital role in high-speed data extraction and loading geared towards the Teradata Database. Besides interfacing with the Teradata Database, some of the Teradata PT operators provide access to external sources such as files, ODBC-compliant DBMS, and message-oriented middleware.

At this point you should already be familiar with Eclipse. But if you are not, Eclipse is the de facto integrated development environment (IDE) for developing Java applications. It provides comprehensive support for Java technologies, as well as a platform for plug-in tools to extend its capabilities.

Teradata’s latest and greatest Eclipse offering has been released and is available in the Teradata Developer Exchange download section (Eclipse_13.01.00). The 13.01.00 release builds on the 13.00.00 offering with increased functionality.

This article describes usage tips on how to load/unload Unicode data with the UTF8 and UTF16 Teradata client session character sets using Teradata Parallel Transporter (TPT).

As of this writing, Teradata Parallel Transporter supports Unicode only on network-attached platforms. 

This document is a quick reference to the views that provide read access into the Teradata Meta Data Services repository. This document provides information for the following users:

  • Teradata MDS end users
  • Teradata MDS administrators
  • Teradata database administrators


In the last article, it was shown how to create Java Bean Wrapper classes that can be called from a Spring Data Access Object (DAO) using the Java Bean Wrapper Wizard.

This article will demonstrate how to tie Java Bean Wrapper classes together using the Spring DAO Wizard to create a reusable data access layer for a business service. It will also show how to create an automated unit test for your DAO.

SQL Assistant Java Edition is an information discovery tool that retrieves data from Teradata Database systems and allows the data to be manipulated and stored on the desktop. It is built on top of the Eclipse Rich Client Platform (RCP).

A high-availability system must have the ability to identify and correct errors, exceptions, and failures in a timely and reliable manner to meet challenging service level objectives. The Teradata database and the utilities and components (used to both load and access data) provide capabilities to implement reliable error and exception handling functionality. These capabilities, combined with a well-designed high-availability architecture, allow a Teradata Active Enterprise Intelligence (AEI) system to meet the service level objectives required to support mission critical business processes.

This will be a series of articles explaining how the Spring framework can be used with the Teradata Plug-in for Eclipse to create a data access layer for your business services. The Java Bean Wrapper wizard allows the user to quickly generate a Java Bean class for a given SQL statement or stored procedure. The wizard also has an option to generate a Java Bean that can be run within the Spring Data Access Object (DAO) framework. By calling a generated Java Bean from a DAO, the Bean gains access to Spring transaction management. The Java Wrapper Beans are reusable components that can be used in different DAOs, and the Bean Helper Classes can be used as Spring domain objects. The Bean also facilitates support for stored procedure OUT parameters and multiple result sets inside a Spring DAO.

This article will show how to set up a project that uses Spring DAOs with the Teradata Plug-in for Eclipse. It will also show how to create Java Bean Wrapper classes that can be called from a Spring DAO using the Java Bean Wrapper Wizard.

With traditional Teradata utilities such as FastLoad, MultiLoad, and TPump, multiple data files are usually processed in a serial manner. For example, if the data to be loaded into the data warehouse reside in several files, they must either be concatenated into a single file before data loading or processed sequentially on a file-by-file basis during data loading.

In contrast, Teradata Parallel Transporter (TPT) provides a feature called “directory scan” which allows data files in a directory to be processed in a parallel and scalable manner as part of the loading process. In addition, if multiple directories are stored across multiple disks, a special feature in TPT called “UNION ALL” can be used to process these directories of files in parallel, thus achieving more throughput through scalability and parallelism across disks.
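
The serial-versus-parallel distinction can be sketched conceptually in plain Python (this is not TPT syntax; `load_file` is a stand-in for whatever per-file work the loader performs, and TPT does this internally with its own operators):

```python
# Sketch: serial file-by-file processing vs. a parallel "directory scan"
# over the same set of data files, with identical results.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def load_file(path):
    """Stand-in for loading one data file; returns its row count."""
    with open(path) as f:
        return sum(1 for _ in f)

def serial_scan(directory):
    """Process each file in turn, the traditional-utility approach."""
    return sum(load_file(p) for p in sorted(Path(directory).glob('*.dat')))

def parallel_scan(directory, workers=4):
    """Process the directory's files concurrently, directory-scan style."""
    files = sorted(Path(directory).glob('*.dat'))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(load_file, files))
```

Both scans produce the same total; the parallel version simply overlaps the per-file work, which is the essence of the throughput gain the directory scan feature provides.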

This is the fifth in a series of articles.

A Teradata Database Java User Defined Function (JUDF) is a program that operates on data stored in relational tables. UDFs allow users to add their own extensions to the Teradata SQL language. JUDFs are implemented as external functions, meaning the source is compiled externally to the DBS and kept in Java Archive (JAR) files. These JAR files are installed into the database using stored procedures in the SQLJ database. Once a JAR file is installed, a JUDF can be defined to use a Java class and method within the JAR file. The JUDF is executed via a protected mode server separate from the database process. Parameters passed from the DBS are converted to their Java form, and the Java return type from the JUDF is converted back to its DBS form.

In the last article, it was shown how a user can create a Java External Stored Procedure (JXSP) that reads SQL using answer sets with the Teradata Plug-in for Eclipse. This article will show how a JXSP that reads SQL can be created automatically. All that is required from the user is a SQL query; the content of the JXSP is generated automatically.

The Java Bean Wrapper wizard allows the user to quickly generate a Java Bean class for a given SQL statement or stored procedure. The Java Bean Wrapper Wizard has an option to create a JXSP that calls the wrapped SQL from the Java Bean.

In a typical relational database, tables are joined together via foreign key relationships. It is also common for macros and procedures to reference tables, forming dependent relationships. When making administrative decisions about database objects, it is important to know about their object relationships.

In the last article, it was shown how a user can create an Ant script to automate a build so the user can deploy a JAR and install the DDL for a Java External Stored Procedure (JXSP) outside of Eclipse. This article will show how a user can create a JXSP that reads SQL using answer sets.

Answer sets are extended result sets returned by stored procedures. This is a new feature for JXSPs in the 13.0 version of the Teradata database.

This presentation describes the new features introduced in SQL Assistant 13.0 and points out the similarities and differences between this and previous versions.

This is the third in a series of articles.

To view detailed information about the sessions logged on to the Teradata system, you should use the GetSessionData method. This method returns a Sessions collection. The sessions in this collection are sorted by user name.

This is the second in a series of articles.

Resource information is available in both summary and detail formats. Each type of information is also available at both the physical (Node) and virtual (Vproc) level.