
Hi! I have some questions about the arcmain utility that I'm hoping someone could answer. I couldn't find documentation that specifically addresses them.
1. In order to use the "multistream" option, is an arc server required? (I assume yes.)

I am trying to produce a log file for each DSA (Data Streaming Architecture) job with a format similar to ARCMAIN output. The closest thing I can find is to redirect the output of the dsc command line to a file (i.e., dsc job_status_log -name my_DSA_jobname -bucket n > my_log_file).
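One way to keep a separate log per job is to build the redirect command from the job name itself. The sketch below is only an illustration of that idea; the dsc invocation mirrors the one in the question above, and the helper name dsc_log_command is hypothetical, not part of any DSA tooling.

```python
# Hypothetical helper: build a per-job dsc command line whose output is
# redirected to a log file named after the job. The dsc syntax shown is
# taken from the example in the post above (job_status_log with -name
# and -bucket); check the DSA documentation for the options on your release.
def dsc_log_command(job_name: str, bucket: int = 1) -> str:
    """Return a shell command that writes this job's status to its own log."""
    log_file = f"{job_name}.log"
    return f"dsc job_status_log -name {job_name} -bucket {bucket} > {log_file}"

print(dsc_log_command("my_DSA_jobname"))
```

Running the returned string through a shell (or a scheduler) after each job gives one ARCMAIN-style log file per DSA job instead of one shared file.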

Teradata has completed a major compiler conversion effort that should be completely transparent to customers, yet customers are the main beneficiaries.  This article:

  • Provides some historical background and context,
  • Discusses the reasons we switched compilers,
  • Identifies certain behavioral changes that were unavoidable,
  • And, finally, answers a few technical questions related to the overall process.

The venerable IBM mainframe was the original client platform developed for the Teradata RDBMS via channel-attached connections way back in the 1980s. Although we now support a wide range of client platforms using network-attached connections, a significant segment of our customer base continues to use mainframe clients.

In the last Teradata Data Mover (TDM) article (Executing Partial Table Copies with Teradata Data Mover), we discussed creating a TDM job to copy a subset of rows in a table between Teradata systems. That example showed how customers can avoid copying an entire table to the target system when they only want to copy recent changes made to that table. The problem with the example in that article, though, is that the WHERE clause contains a hard-coded value. Customers will typically want to avoid hard-coded values in their production TDM partial-copy jobs, because the subset of rows to copy changes every time the job runs. They could create a new TDM job each time the WHERE clause changes, but that would clutter the TDM repository with many redundant jobs that all copy data from the same table. It is much more efficient to create one job that copies a dynamic subset of rows on each execution; running the same job repeatedly also eliminates the overhead of creating a new job each time.
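One hedged way to picture "one job, dynamic WHERE clause" is to regenerate the job definition programmatically before each run, substituting only the row filter. This is a sketch of the pattern, not the real TDM job format: the element names job_definition, table, name, and where_clause below are illustrative placeholders, not the actual schema accepted by the TDM command-line interface.

```python
# Sketch: rebuild a partial-copy job definition with a per-execution WHERE
# clause. NOTE: the XML element names here are hypothetical placeholders --
# consult the Teradata Data Mover documentation for the real job XML schema.
import xml.etree.ElementTree as ET

def build_job_xml(job_name: str, table_name: str, where_clause: str) -> str:
    """Return a job definition whose row filter is supplied at run time."""
    job = ET.Element("job_definition")
    ET.SubElement(job, "job_name").text = job_name
    tbl = ET.SubElement(job, "table")
    ET.SubElement(tbl, "name").text = table_name
    ET.SubElement(tbl, "where_clause").text = where_clause
    return ET.tostring(job, encoding="unicode")

# Copy only rows changed since the last run; the date changes per execution,
# so the same job name is reused instead of creating a new job each time.
xml = build_job_xml("copy_recent_orders", "sales.orders",
                    "WHERE order_date > DATE '2011-01-01'")
print(xml)
```

The point of the sketch is the separation of concerns: the job identity stays fixed while the filter is injected per execution, which is what keeps the repository free of one-off jobs.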

In the last Teradata Data Mover (TDM) article (Introduction to Teradata Data Mover: Create your first job), we discussed creating and executing a TDM job to copy a full table between Teradata systems. This use case is very common in the field when customers want to initially populate the target Teradata system with the same table that exists on the source Teradata system. Customers will not want to copy the entire table to the target system every time changes are made on the source, though. Tables on production systems can get quite large, and it doesn't make sense to copy an entire table when only a subset of rows has changed since the last copy took place. This is why TDM supports executing partial table copies as well as full table copies.

Teradata Data Mover (TDM) is a relatively new product that allows users to copy database objects, such as tables and statistics, from one Teradata Database system to another. TDM can copy join/hash indexes, journals, and triggers as well.