
Teradata Studio Express 14.01 is now available for download. Teradata Studio Express is an information discovery tool for retrieving and displaying data from your Teradata Database systems. It can be run on multiple operating system platforms, such as Windows, Linux, and Mac OS X.

Teradata’s latest and greatest Eclipse offering has been released and is available in the Teradata Developer Exchange download section (Teradata Plug-in for Eclipse). The 14.01.00 release builds on the 14.00.00 offering with increased functionality.

The Teradata Named Pipe Access Module (NPAM) provides an inter-process communication link between a writer process (such as FastExport) and a reader process (such as FastLoad). The NPAM can also be used by Teradata Parallel Transporter (TPT) for data transfer.

The reader process initializes the NPAM via an initialization string, and this article details the various parameters that are set during initialization.
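As a rough sketch of the mechanism (ordinary POSIX named pipes in Python, not the NPAM itself or its actual interface):

```python
import os
import tempfile
import threading

def demo_named_pipe():
    """Write and read through a POSIX named pipe (FIFO)."""
    fifo = os.path.join(tempfile.mkdtemp(), "export.pipe")
    os.mkfifo(fifo)  # create the pipe; no data ever lands on disk

    def writer():
        # Stands in for the writer process (e.g. FastExport):
        # open() blocks until a reader attaches, then rows flow
        # through the kernel buffer straight to the reader.
        with open(fifo, "wb") as w:
            w.write(b"row1\nrow2\n")

    t = threading.Thread(target=writer)
    t.start()
    # Stands in for the reader process (e.g. FastLoad).
    with open(fifo, "rb") as r:
        data = r.read()
    t.join()
    return data

print(demo_named_pipe())  # b'row1\nrow2\n'
```

Because the pipe lives in the kernel, the exported rows never need to be staged in an intermediate file.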

This article details the different modes in which named pipes are opened and used by the NPAM for data transfer.

This article will introduce the new features and UI enhancements that have been added to Teradata SQL Assistant 14.01. The focus of this release is on usability and the introduction of Charting and the direct editing of Table data.

The Unicode™ standard defines five encodings (the first three encodings are currently supported by Teradata):
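Whichever encodings are in play, their practical difference is bytes per character. A quick sketch (the codec names here are Python's, chosen for illustration, not Teradata session character sets):

```python
# Byte counts for the same string under three Unicode encodings.
text = "Δx = 1"  # five ASCII characters plus one Greek letter

sizes = {codec: len(text.encode(codec))
         for codec in ("utf-8", "utf-16-le", "utf-32-le")}
print(sizes)  # {'utf-8': 7, 'utf-16-le': 12, 'utf-32-le': 24}
```

UTF-8 stays compact for mostly-ASCII data, while UTF-16 and UTF-32 use a fixed two and four bytes per BMP character respectively.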

The Teradata RDBMS can return a variety of errors. Some of these errors are retryable (that is, the request can be resubmitted); the simplest example is a 2631 (Transaction aborted due to %VSTR) caused by a deadlock condition. Other errors, such as data-related errors (constraint violations, etc.), are not retryable.
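A caller that wants to take advantage of retryable errors can wrap its submissions in a retry loop. A minimal sketch, with a hypothetical `execute` callable and error class standing in for a real driver:

```python
import time

RETRYABLE_CODES = {2631}  # deadlock: "Transaction aborted due to %VSTR"

class DatabaseError(Exception):
    """Hypothetical error carrying a Teradata-style error code."""
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

def submit_with_retry(request, execute, max_attempts=3, delay=0.0):
    """Resubmit `request` while the failure is retryable.

    `execute` stands in for whatever driver call actually submits
    the request; non-retryable errors (e.g. constraint violations)
    propagate immediately.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return execute(request)
        except DatabaseError as err:
            if err.code not in RETRYABLE_CODES or attempt == max_attempts:
                raise
            time.sleep(delay)

# A flaky stand-in that deadlocks twice, then succeeds.
attempts = {"n": 0}
def flaky(req):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DatabaseError(2631, "Transaction aborted due to deadlock")
    return "done"

print(submit_with_retry("UPDATE ...", flaky))  # done
```

In production code the delay would normally grow between attempts (backoff) rather than stay constant.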

If you’re waiting for an easy way to load data from one or more Teradata tables into a Teradata table without writing a Teradata PT script, wait no longer. Teradata PT Easy Loader can do it easily. In the 14.0 release, the tool can load data from a Teradata table or from SELECT statements.

About the NPAM

The Teradata Named Pipe Access Module provides an inter-process communication link between a writer process (such as FastExport) and a reader process (such as FastLoad).

Where it started:

Teradata Data Mover (version 13.10) introduced job-level security management, which allows users to specify access rights at the job level. Through the Teradata Data Mover portlet, a Viewpoint user with access to the portlet can grant or revoke access rights for individual Data Mover jobs to other Viewpoint users with access to the portlet.

Teradata Data Mover (version 13.10) introduced a graphical user interface component (portlets) that gives users a more intuitive way to copy database objects from one Teradata Database system to another. The portlet component complements and enhances the existing command-line component. It is deployed on the Teradata Viewpoint Web portal and lets users manage Data Mover jobs from the convenience of a Web browser.

In the last Teradata Data Mover (TDM) article (Executing Partial Table Copies with Teradata Data Mover), we discussed creating a TDM job to copy a subset of rows in a table between Teradata systems. That example showed how customers can avoid copying an entire table to the target system when they only want to copy recent changes made to that table. The problem with that example, though, is that the WHERE clause has a hard-coded value in it. Customers will typically want to avoid hard-coded values in their production TDM partial copy jobs, because the subset of rows they want to copy changes every time they execute the job.

Customers could simply create a new TDM job every time they want to change the WHERE clause, but that could leave many unnecessary jobs in the TDM repository that all copy data from the same table. It's much more efficient to create one job that copies a dynamic subset of rows every time it is executed; repeatedly executing the same TDM job also eliminates the overhead of creating new TDM jobs.

In the last Teradata Data Mover (TDM) article (Introduction to Teradata Data Mover: Create your first job), we discussed creating and executing a TDM job to copy a full table between Teradata systems. This use case is very common in the field when customers want to initially populate the target Teradata system with the same table that exists on the source Teradata system. Customers will not want to copy the entire table to the target system every time changes are made to the source system though. Tables on production systems can get quite large and it doesn't make sense to copy the entire table when only a subset of rows have been changed since the last copy took place. This is why TDM supports executing partial table copies as well as full table copies.

The Period data type is supported by Teradata MultiLoad as of TTU 13.0. A period represents an interval of time: it indicates when some particular event starts and when it ends. It has a beginning bound and an optional ending bound (the period is open if there is no ending bound). The beginning bound is defined by the value of a beginning element, and the ending bound by the value of an ending element. These two elements of a period must be of the same type, which must be one of the three DateTime data types: DATE, TIME, or TIMESTAMP.
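The begin/optional-end structure can be sketched as follows (a toy model for illustration, treating the ending bound as exclusive; not Teradata's implementation):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Period:
    """Toy model of a period: a beginning bound plus an optional
    ending bound. A missing ending bound makes the period open.
    The ending bound is treated as exclusive (closed-open)."""
    begin: date
    end: Optional[date] = None

    def contains(self, d: date) -> bool:
        if d < self.begin:
            return False
        return self.end is None or d < self.end

q1 = Period(date(2024, 1, 1), date(2024, 4, 1))
print(q1.contains(date(2024, 3, 31)))  # True
print(q1.contains(date(2024, 4, 1)))   # False: ending bound excluded

ongoing = Period(date(2024, 1, 1))     # open period, no ending bound
print(ongoing.contains(date(2030, 1, 1)))  # True
```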

TPump has been enhanced to dynamically determine the PACK factor and fill up the data buffer when there is variable-length data. This feature is available in Teradata TPump 13.00.00.009, 13.10.00.007, 14.00.00.000, and higher releases.
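The idea behind a dynamic pack factor can be sketched as a greedy packing loop (an illustration of the concept only, not TPump's actual algorithm):

```python
def pack_rows(row_lengths, buffer_size):
    """Greedy sketch of a dynamic pack factor: keep adding
    variable-length rows to the current buffer until the next
    row no longer fits, then start a new buffer. Returns the
    number of rows packed into each buffer."""
    packed, used, count = [], 0, 0
    for length in row_lengths:
        if count and used + length > buffer_size:
            packed.append(count)   # buffer full: record its pack factor
            used = count = 0
        used += length
        count += 1
    if count:
        packed.append(count)       # flush the final, partial buffer
    return packed

# Short rows pack densely; long rows reduce the pack factor.
print(pack_rows([40, 60, 10, 100, 30], 100))  # [2, 1, 1, 1]
```

With fixed-length rows a single pack factor suffices; variable-length data is what makes filling the buffer dynamically worthwhile.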

Data Mover currently uses ARC, TPTAPI, and JDBC to extract data from a source system and load data into the target system. Depending on the scenario, Data Mover chooses one of these three methods.

iBatis and MyBatis support custom types to override JDBC and other types when using the iBatis or MyBatis frameworks. A custom data type gives you the ability to handle any kind of special input and output a database data type may need. For example, a User Defined Type (UDT) that represents a point would require an X and Y position to be entered for input and an X and Y position to be retrieved from the database. The custom data type gives you a clearly defined programmatic mechanism to do this.

When it comes time to test your latest database application, Teradata Data Mover (DM) can easily be used to grab real world data from your production system to populate your test system.

One of the big advantages that Teradata Data Mover (DM) provides is built-in parallelism. The underlying utilities that Teradata DM uses, such as the Teradata Parallel Transporter API, do have methods for parallel work, but these are either limited to a single client machine or require users to build their own code framework. Teradata DM takes care of all the hard work and puts the world of multiple-client-machine parallelism at your fingertips.

But how to make best use of this parallelism for your big jobs? There’s a lot of power under the hood but it might not be obvious how to put that power to work. Here we’ll talk about what parallelism features are available and provide tips for how to use those to get your big data moving faster.

Teradata Parallel Transporter (Teradata PT) has fourteen different operators. Each behaves differently. This article provides a table to help you in selecting the right operator to use for your Teradata PT job. You can view the table as Excel .xls or PDF.

iBatis is now called MyBatis (iBatis 3.0). MyBatis is no longer sponsored by Apache; it is now hosted on Google Code. The MyBatis framework is a lightweight data mapping framework and persistence API. It couples objects with stored procedures or SQL statements using an XML descriptor. The Teradata Plug-in for Eclipse allows you to switch between creating projects that use MyBatis or iBatis via the Teradata Project preferences. When you switch to MyBatis, you can use new features like User Generated Keys.

User Generated Keys are unique identifiers returned from MyBatis during an insert operation. This tutorial will go through creating a Web service using user generated keys with MyBatis.

This article will show you how to use Teradata PT to copy data from one or more non-Teradata tables (e.g., Oracle tables) to a Teradata table without using any intermediate disk storage. Teradata PT uses an ODBC operator as a producer to extract data from an Oracle table (as an example) and a Load operator as a consumer to load the data into a Teradata table. You can modify the script to use other consumer operators, such as the Update, Stream, or Inserter operators.

Teradata FastLoad has a feature named Tenacity that allows the user to specify the number of hours FastLoad continues trying to log on when the maximum number of load operations is already running on the Teradata Database.

By default, the Tenacity feature is not turned on. The feature is turned on by a script command.

The iBatis (MyBatis) Stored Procedure Wizard allows you to right-click a Stored Procedure in the Teradata Plug-in for Eclipse and quickly create a Web service.

The iBatis Stored Procedure Wizard wraps a Stored Procedure into an iBatis or MyBatis SQL Map. The generated SQL map can then be used to create a Web service or a Java application that uses the iBatis or MyBatis frameworks.

Teradata Studio Express is an information discovery tool that retrieves data from Teradata and Aster Database systems and allows the data to be manipulated and stored on the desktop. It is built on top of the Eclipse Rich Client Platform (RCP).

Teradata Studio Express 14.00 (formerly Teradata SQL Assistant Java Edition) is now available for download. Teradata Studio Express is an information discovery tool for retrieving and displaying data from your Teradata Database systems.

Teradata’s latest and greatest Eclipse offering has been released and is available in the Teradata Developer Exchange download section (Teradata Plug-in for Eclipse). The 14.00.00 release builds on the 13.11.00 offering with increased functionality.

Do you want to have your Teradata Parallel Transporter (Teradata PT) Stream operator jobs run faster? Are you having difficulty determining the optimal pack factor for your Stream operator jobs? Knowing how to use the Stream operator’s PackMaximum attribute enables you to determine the optimal pack factor and thus improve the performance of your Stream operator job.

Have you ever wanted to keep your Teradata Database passwords private and not be exposed in scripts?  If you have, then we have a solution for you.

We have made great strides in improving our handling of delimited data (i.e. CSV data) in Teradata Parallel Transporter for the TTU14.00 release. This article will describe the background of the data format, the original support, and the enhancements we have made.

Teradata Plug-in for Eclipse allows Teradata application developers to quickly browse their Teradata Database, create new database objects, enter SQL statements, as well as create Java applications. In this video, Teradata IDE engineer Francine Grimmer provides a quick (9 min. 14 sec.) overview of the Teradata Plug-in for Eclipse functionality.

The use of TEXT format and INDICATORS mode when Teradata load utilities are used to load non-character data can lead to problems. This article will discuss this issue in more detail and describe what has been (will be) done to strongly discourage this usage.

Teradata Parallel Transporter (TPT) is a flexible, high-performance Data Warehouse loading tool, specifically optimized for the Teradata Database, which enables data extraction, transformation and loading. TPT incorporates an infrastructure that provides a parallel execution environment for product components called “operators.”  These integrate with the infrastructure in a "plug-in" fashion and are thus interoperable.

TPT operators provide access to such external resources as files, DBMS tables, and Messaging Middleware products, and perform various filtering and transformation functions. The TPT infrastructure includes a high performance data transfer mechanism called the data stream, used for interchanging data between the operators.

With today's businesses directly tied to mission-critical applications for decision making, continuity and availability are vital requirements for the success of Active Data Warehousing. As such, plans for recovery from any failure must be introduced into the design and deployment of ETL jobs as early as possible.

“Yum” is a package-management utility for RPM-compatible Linux operating systems that can be used with the Teradata Client packaging to let an administrator manage a repository of packages for network installation and software distribution. “Yum” stands for “Yellowdog Updater, Modified” and is an open-source, command-line product included with Red Hat Enterprise Linux and Fedora. This document will explain how to set up a simple Yum repository with the Linux Teradata Client packages, and how to use the resulting repository to install packages across the network.

Teradata provides advanced Workload Management capabilities through Teradata Active System Management (TASM). However, some customers are still relying on only Priority Scheduler, without TASM's added capabilities. These customers can easily move forward to TASM. This article shows how to use Teradata Workload Analyzer (TWA) to migrate existing Priority Scheduler settings into TASM.

Apache Ant allows the user to run a SQL task using JDBC. The Teradata SQL Ant Wizard allows you to wrap selected SQL Statements into an Ant build script from the Eclipse DTP SQL Editor.

The XML Ant build script generated from the Wizard runs the selected SQL statements inside or outside of Eclipse. The Ant build script will facilitate a consistent setup of test or base production environments. This creates a mechanism to run SQL reports and to integrate with build or schedule tools. 

CRUD is defined as the following functions of persistent storage:

  • Create — Insert a row into a database table
  • Read — Select information from a database table
  • Update — Update a row in a database table
  • Delete — Remove a row from a database table

The iBatis CRUD Wizard will generate the SQL and the iBatis code for all the CRUD operations for a selected database table in the Teradata Plug-in for Eclipse. The generated iBatis SQL map can then be used to create a Web service or a Java application that uses the iBatis framework. This gives you a quick and easy way to create an application that performs basic operations on a Teradata database table.
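For reference, the four CRUD operations map directly to four SQL statements. A minimal sketch using an in-memory SQLite table as a stand-in for a Teradata table (the SQL itself is generic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")

# Create: insert a row
conn.execute("INSERT INTO employee (id, name) VALUES (?, ?)", (1, "Ada"))
# Read: select the row back
name = conn.execute("SELECT name FROM employee WHERE id = ?", (1,)).fetchone()[0]
# Update: change the row
conn.execute("UPDATE employee SET name = ? WHERE id = ?", ("Grace", 1))
updated = conn.execute("SELECT name FROM employee WHERE id = ?", (1,)).fetchone()[0]
# Delete: remove the row
conn.execute("DELETE FROM employee WHERE id = ?", (1,))
remaining = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]

print(name, updated, remaining)  # Ada Grace 0
```

The wizard's generated SQL map wraps exactly these four statement shapes for the selected table.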

When it comes to establishing Teradata Database sessions, you may find that using BTEQ’s LOGON command by itself is not sufficient to pass along your credentials for user authentication. This article will explain what other commands you might need to use.

This article will introduce the new features and UI enhancements that have been added to Teradata SQL Assistant 13.11. The focus of this release is on usability and Section 508 conformance.

This article assumes that you are already familiar with the features in SQL Assistant 13.10 Edition 2. If not, you may wish to read the following articles first:

Although Teradata Utilities allow binary information in text files, doing so can have unintended consequences.

Teradata currently supports up to 38 digits in DECIMAL columns. There are several ways to control how many digits you're working with, and they interact in various ways.
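The 38-digit ceiling can be reproduced with Python's decimal module (illustrating the precision limit itself, not Teradata's internal arithmetic):

```python
from decimal import Decimal, getcontext

getcontext().prec = 38  # match the 38-digit DECIMAL ceiling

a = Decimal(10) ** 37   # a 38-digit value: 1 followed by 37 zeros
print((a + 1) - a == 1)  # True: the 38th digit is preserved

b = Decimal(10) ** 38   # one digit past the limit
print((b + 1) - b == 0)  # True: the 39th digit is rounded away
```

At exactly 38 significant digits arithmetic stays exact; one digit further and the low-order digit silently disappears, which is why choosing precision and scale carefully matters.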

TPT constructs a unique identifier for each TPT job submitted for execution. Even though it is not generated until your job executes, you can reference the job identifier in your job script via the new script keyword $JOBID. The job identifier consists of the job name and a TPT-generated job sequence number, joined by the hyphen character ('-'):

    <job name>-<job sequence number>

One of the biggest benefits of Parallel Transporter is the ability to scale the load process on a client load server to circumvent performance bottlenecks.  This can sometimes offer huge performance gains as compared to the legacy Teradata Stand-alone load tools such as FastLoad and MultiLoad.

This series of articles is meant to familiarize people with various capabilities of the Teradata Parallel Transporter product.  Many aspects of Parallel Transporter will be covered including an overview, performance considerations, integration with ETL tools and more.

On z/OS, the ODBC operator can be used to extract data from DB2.  The purpose of this article is to demystify the use of the ODBC operator in conjunction with DB2 on IBM’s z/OS operating system.  

TPump macrocharset support

TPump now forces CHARSET internally when building its macros! This feature is new starting with the TPump 13.10.00.03 release.

The iBatis DAO with Web Services Wizard will generate a Web Service from an iBatis SQL Map. The wizard derives all of the information needed from the iBatis SQL Map to generate the following components to create a Web service:

  • DAO (Data Access Object)
  • WSDL (Web Services Description Language)
  • XSD (XML Schema Definition)
  • Spring Configuration files

The wizard will then use the Eclipse Web Tools Platform (WTP) and Apache Axis to generate the server and client classes for the Web service. The generated classes will include code that supports Query Bands via the Teradata Access Session Manager.

 

New Release!!

TdBench 8.0 for any DBMS has been released! It works with any DBMS that supports JDBC. The commands are very similar to earlier releases, but there are many new features to improve simulation of data warehouse workloads.

Teradata Benchmark Query Driver (TdBench) provides a set of tools to help you compare the performance within a data warehouse:

  • before/after a new release of the DBMS
  • before/after changes to the PDM (indexes, compression, etc.)
  • relative performance of a new database platform

The tools provide a framework for executing benchmarks driven by a Windows Server or PC and for reporting on the results using DBQL. There are also tools for extracting a cohesive set of queries and tables from DBQL to define the benchmark.

This reference guide explains how to install, configure and use the TdBench package.