Teradata Data Mover (TDM) is a relatively new product that allows users to copy database objects, such as tables and statistics, from one Teradata Database system to another. TDM can copy join/hash indexes, journals, and triggers as well.

TDM is an enterprise application that consists of three main components: a command-line interface, the Daemon, and one or more Agents. Please see the architecture diagram below.

TDM Architecture

[Figure: TDM Architecture diagram]

TDM's command-line interface allows users to define their intent for copying objects between two Teradata systems via a simple XML file. Users only need to provide the logon credentials for the source and target systems and the objects they want to copy. TDM takes care of the rest, automatically generating a job plan for the data movement based on the current state of the source and target systems. Under the covers, TDM uses ARC, TPTAPI, and JDBC to perform the object copy. This abstracts the complexity of the underlying data-movement methods, so users no longer have to write low-level utility scripts to copy objects between Teradata systems.

Create your first TDM job

Now that we've gone through an overview of the product, let's step through the process of executing your first TDM job. First, let's create a table on our source Teradata system. In this example, the source Teradata system will be dmdev and the target Teradata system will be dmsmp. The source table will be created under the jg185041 user, and we will do a full table copy of it to the jg185041 user on the target system. Please change these values to match your environment when executing the test.

This is the SQL file needed to create the table we will copy:

CREATE SET TABLE jg185041.items1 ,NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT
     (
      deptId INTEGER,
      prodId INTEGER,
      salesdate DATE FORMAT 'YYYY-MM-DD',
      amount DECIMAL(10,2))
PRIMARY INDEX ( prodId )
PARTITION BY RANGE_N(salesdate  BETWEEN DATE '2001-01-01' AND DATE '2010-12-31' EACH INTERVAL '1' MONTH );

COMMIT;

Here is the SQL file that inserts data into the table we will be copying:

insert into jg185041.items1 values ('1', '1','2005-12-29', '2.54');
insert into jg185041.items1 values ('2', '1','2008-08-09', '2.54');
insert into jg185041.items1 values ('3', '1','2002-12-12', '2.54');
insert into jg185041.items1 values ('4', '1','2008-08-11', '2.54');
insert into jg185041.items1 values ('5', '1','2007-11-18', '2.54');
insert into jg185041.items1 values ('6', '1','2010-08-18', '2.54');
insert into jg185041.items1 values ('7', '1','2002-09-01', '2.54');
insert into jg185041.items1 values ('8', '1','2006-02-26', '2.54');
insert into jg185041.items1 values ('9', '1','2004-09-13', '2.54');
insert into jg185041.items1 values ('10', '1','2004-11-02', '2.54');
insert into jg185041.items1 values ('1', '2','2002-05-26', '82.61');
insert into jg185041.items1 values ('2', '2','2001-03-29', '82.61');
insert into jg185041.items1 values ('3', '2','2010-07-11', '82.61');
insert into jg185041.items1 values ('4', '2','2002-02-22', '82.61');
insert into jg185041.items1 values ('5', '2','2006-01-21', '82.61');
insert into jg185041.items1 values ('6', '2','2008-10-09', '82.61');
insert into jg185041.items1 values ('7', '2','2009-11-25', '82.61');
insert into jg185041.items1 values ('8', '2','2010-12-19', '82.61');
insert into jg185041.items1 values ('9', '2','2006-10-05', '82.61');
insert into jg185041.items1 values ('10', '2','2006-09-28', '82.61');
insert into jg185041.items1 values ('1', '3','2009-07-28', '21.9');
insert into jg185041.items1 values ('2', '3','2005-07-05', '21.9');
insert into jg185041.items1 values ('3', '3','2009-01-20', '21.9');
insert into jg185041.items1 values ('4', '3','2008-02-29', '21.9');
insert into jg185041.items1 values ('5', '3','2001-01-14', '21.9');
insert into jg185041.items1 values ('6', '3','2007-06-28', '21.9');
insert into jg185041.items1 values ('7', '3','2008-11-27', '21.9');
insert into jg185041.items1 values ('8', '3','2004-10-03', '21.9');
insert into jg185041.items1 values ('9', '3','2008-11-08', '21.9');
insert into jg185041.items1 values ('10', '3','2007-01-21', '21.9');
insert into jg185041.items1 values ('1', '4','2004-05-12', '50.76');
insert into jg185041.items1 values ('2', '4','2008-12-16', '50.76');
insert into jg185041.items1 values ('3', '4','2003-07-21', '50.76');
insert into jg185041.items1 values ('4', '4','2003-02-13', '50.76');
insert into jg185041.items1 values ('5', '4','2006-12-04', '50.76');
insert into jg185041.items1 values ('6', '4','2003-11-18', '50.76');
insert into jg185041.items1 values ('7', '4','2010-06-09', '50.76');
insert into jg185041.items1 values ('8', '4','2001-03-02', '50.76');
insert into jg185041.items1 values ('9', '4','2004-09-28', '50.76');
insert into jg185041.items1 values ('10', '4','2001-08-20', '50.76');
insert into jg185041.items1 values ('1', '5','2008-02-21', '0.64');
insert into jg185041.items1 values ('2', '5','2001-08-25', '0.64');
insert into jg185041.items1 values ('3', '5','2004-12-10', '0.64');
insert into jg185041.items1 values ('4', '5','2005-08-30', '0.64');
insert into jg185041.items1 values ('5', '5','2007-09-11', '0.64');
insert into jg185041.items1 values ('6', '5','2001-07-11', '0.64');
insert into jg185041.items1 values ('7', '5','2008-08-27', '0.64');
insert into jg185041.items1 values ('8', '5','2008-03-02', '0.64');
insert into jg185041.items1 values ('9', '5','2002-05-17', '0.64');
insert into jg185041.items1 values ('10', '5','2004-02-09', '0.64');
insert into jg185041.items1 values ('1', '6','2001-04-14', '25.08');
insert into jg185041.items1 values ('2', '6','2001-11-12', '25.08');
insert into jg185041.items1 values ('3', '6','2002-02-27', '25.08');
insert into jg185041.items1 values ('4', '6','2008-07-07', '25.08');
insert into jg185041.items1 values ('5', '6','2002-02-23', '25.08');
insert into jg185041.items1 values ('6', '6','2002-09-23', '25.08');
insert into jg185041.items1 values ('7', '6','2005-07-05', '25.08');
insert into jg185041.items1 values ('8', '6','2008-08-05', '25.08');
insert into jg185041.items1 values ('9', '6','2003-01-06', '25.08');
insert into jg185041.items1 values ('10', '6','2009-02-18', '25.08');
insert into jg185041.items1 values ('1', '7','2008-06-27', '25.99');
insert into jg185041.items1 values ('2', '7','2007-01-05', '25.99');
insert into jg185041.items1 values ('3', '7','2001-06-26', '25.99');
insert into jg185041.items1 values ('4', '7','2009-01-24', '25.99');
insert into jg185041.items1 values ('5', '7','2001-09-05', '25.99');
insert into jg185041.items1 values ('6', '7','2005-07-26', '25.99');
insert into jg185041.items1 values ('7', '7','2006-08-25', '25.99');
insert into jg185041.items1 values ('8', '7','2010-12-30', '25.99');
insert into jg185041.items1 values ('9', '7','2003-10-15', '25.99');
insert into jg185041.items1 values ('10', '7','2007-08-17', '25.99');
insert into jg185041.items1 values ('1', '8','2006-12-05', '85.96');
insert into jg185041.items1 values ('2', '8','2002-06-17', '85.96');
insert into jg185041.items1 values ('3', '8','2005-08-23', '85.96');
insert into jg185041.items1 values ('4', '8','2008-08-22', '85.96');
insert into jg185041.items1 values ('5', '8','2004-02-24', '85.96');
insert into jg185041.items1 values ('6', '8','2006-11-10', '85.96');
insert into jg185041.items1 values ('7', '8','2008-12-06', '85.96');
insert into jg185041.items1 values ('8', '8','2006-10-16', '85.96');
insert into jg185041.items1 values ('9', '8','2004-09-20', '85.96');
insert into jg185041.items1 values ('10', '8','2008-06-04', '85.96');
insert into jg185041.items1 values ('1', '9','2003-03-20', '23.55');
insert into jg185041.items1 values ('2', '9','2008-02-06', '23.55');
insert into jg185041.items1 values ('3', '9','2010-04-03', '23.55');
insert into jg185041.items1 values ('4', '9','2006-09-27', '23.55');
insert into jg185041.items1 values ('5', '9','2009-10-08', '23.55');
insert into jg185041.items1 values ('6', '9','2009-10-08', '23.55');
insert into jg185041.items1 values ('7', '9','2009-06-09', '23.55');
insert into jg185041.items1 values ('8', '9','2007-08-05', '23.55');
insert into jg185041.items1 values ('9', '9','2004-12-20', '23.55');
insert into jg185041.items1 values ('10', '9','2004-09-09', '23.55');
insert into jg185041.items1 values ('1', '10','2010-12-13', '20.78');
insert into jg185041.items1 values ('2', '10','2004-05-28', '20.78');
insert into jg185041.items1 values ('3', '10','2004-10-31', '20.78');
insert into jg185041.items1 values ('4', '10','2006-03-11', '20.78');
insert into jg185041.items1 values ('5', '10','2008-01-15', '20.78');
insert into jg185041.items1 values ('6', '10','2009-02-24', '20.78');
insert into jg185041.items1 values ('7', '10','2002-07-21', '20.78');
insert into jg185041.items1 values ('8', '10','2006-09-14', '20.78');
insert into jg185041.items1 values ('9', '10','2007-09-25', '20.78');
insert into jg185041.items1 values ('10', '10','2005-12-22', '20.78');
insert into jg185041.items1 values ('1', '11','2010-07-11', '26.36');
insert into jg185041.items1 values ('2', '11','2003-08-02', '26.36');
insert into jg185041.items1 values ('3', '11','2007-09-18', '26.36');
insert into jg185041.items1 values ('4', '11','2008-04-24', '26.36');
insert into jg185041.items1 values ('5', '11','2008-12-23', '26.36');
insert into jg185041.items1 values ('6', '11','2002-09-05', '26.36');
insert into jg185041.items1 values ('7', '11','2010-07-24', '26.36');
insert into jg185041.items1 values ('8', '11','2005-08-07', '26.36');
insert into jg185041.items1 values ('9', '11','2009-12-07', '26.36');
insert into jg185041.items1 values ('10', '11','2003-12-03', '26.36');
insert into jg185041.items1 values ('1', '12','2007-07-02', '37.52');
insert into jg185041.items1 values ('2', '12','2007-04-15', '37.52');
insert into jg185041.items1 values ('3', '12','2010-08-05', '37.52');
insert into jg185041.items1 values ('4', '12','2006-03-23', '37.52');
insert into jg185041.items1 values ('5', '12','2006-10-15', '37.52');
insert into jg185041.items1 values ('6', '12','2009-05-11', '37.52');
insert into jg185041.items1 values ('7', '12','2002-01-16', '37.52');
insert into jg185041.items1 values ('8', '12','2005-03-06', '37.52');
insert into jg185041.items1 values ('9', '12','2005-06-23', '37.52');
insert into jg185041.items1 values ('10', '12','2001-05-03', '37.52');
insert into jg185041.items1 values ('1', '13','2007-11-07', '41.99');
insert into jg185041.items1 values ('2', '13','2008-05-17', '41.99');
insert into jg185041.items1 values ('3', '13','2010-05-12', '41.99');
insert into jg185041.items1 values ('4', '13','2001-06-27', '41.99');
insert into jg185041.items1 values ('5', '13','2009-11-28', '41.99');
insert into jg185041.items1 values ('6', '13','2005-06-14', '41.99');
insert into jg185041.items1 values ('7', '13','2001-06-16', '41.99');
insert into jg185041.items1 values ('8', '13','2004-04-26', '41.99');
insert into jg185041.items1 values ('9', '13','2009-10-09', '41.99');
insert into jg185041.items1 values ('10', '13','2005-12-09', '41.99');
insert into jg185041.items1 values ('1', '14','2006-01-06', '2.85');
insert into jg185041.items1 values ('2', '14','2010-05-12', '2.85');
insert into jg185041.items1 values ('3', '14','2003-10-21', '2.85');
insert into jg185041.items1 values ('4', '14','2006-09-15', '2.85');
insert into jg185041.items1 values ('5', '14','2005-10-28', '2.85');
insert into jg185041.items1 values ('6', '14','2003-04-23', '2.85');
insert into jg185041.items1 values ('7', '14','2007-05-30', '2.85');
insert into jg185041.items1 values ('8', '14','2005-01-23', '2.85');
insert into jg185041.items1 values ('9', '14','2007-04-19', '2.85');
insert into jg185041.items1 values ('10', '14','2006-12-13', '2.85');
insert into jg185041.items1 values ('1', '15','2010-04-09', '34.35');
insert into jg185041.items1 values ('2', '15','2008-09-01', '34.35');
insert into jg185041.items1 values ('3', '15','2006-07-28', '34.35');
insert into jg185041.items1 values ('4', '15','2010-09-29', '34.35');
insert into jg185041.items1 values ('5', '15','2001-05-03', '34.35');
insert into jg185041.items1 values ('6', '15','2003-09-02', '34.35');
insert into jg185041.items1 values ('7', '15','2005-07-08', '34.35');
insert into jg185041.items1 values ('8', '15','2007-04-18', '34.35');
insert into jg185041.items1 values ('9', '15','2009-03-03', '34.35');
insert into jg185041.items1 values ('10', '15','2010-09-20', '34.35');
insert into jg185041.items1 values ('1', '16','2008-12-25', '67.5');
insert into jg185041.items1 values ('2', '16','2003-03-11', '67.5');
insert into jg185041.items1 values ('3', '16','2008-12-02', '67.5');
insert into jg185041.items1 values ('4', '16','2008-03-09', '67.5');
insert into jg185041.items1 values ('5', '16','2001-02-01', '67.5');
insert into jg185041.items1 values ('6', '16','2006-04-14', '67.5');
insert into jg185041.items1 values ('7', '16','2007-04-12', '67.5');
insert into jg185041.items1 values ('8', '16','2007-10-14', '67.5');
insert into jg185041.items1 values ('9', '16','2008-12-28', '67.5');
insert into jg185041.items1 values ('10', '16','2005-10-11', '67.5');
insert into jg185041.items1 values ('1', '17','2008-03-07', '55.59');
insert into jg185041.items1 values ('2', '17','2006-07-02', '55.59');
insert into jg185041.items1 values ('3', '17','2003-10-21', '55.59');
insert into jg185041.items1 values ('4', '17','2005-03-31', '55.59');
insert into jg185041.items1 values ('5', '17','2010-01-06', '55.59');
insert into jg185041.items1 values ('6', '17','2003-05-31', '55.59');
insert into jg185041.items1 values ('7', '17','2003-04-24', '55.59');
insert into jg185041.items1 values ('8', '17','2002-07-24', '55.59');
insert into jg185041.items1 values ('9', '17','2007-09-04', '55.59');
insert into jg185041.items1 values ('10', '17','2007-05-14', '55.59');
insert into jg185041.items1 values ('1', '18','2010-10-02', '93.73');
insert into jg185041.items1 values ('2', '18','2006-02-03', '93.73');
insert into jg185041.items1 values ('3', '18','2005-05-24', '93.73');
insert into jg185041.items1 values ('4', '18','2007-10-15', '93.73');
insert into jg185041.items1 values ('5', '18','2007-07-13', '93.73');
insert into jg185041.items1 values ('6', '18','2009-11-11', '93.73');
insert into jg185041.items1 values ('7', '18','2003-11-22', '93.73');
insert into jg185041.items1 values ('8', '18','2008-11-21', '93.73');
insert into jg185041.items1 values ('9', '18','2001-03-20', '93.73');
insert into jg185041.items1 values ('10', '18','2004-12-02', '93.73');
insert into jg185041.items1 values ('1', '19','2009-05-23', '18.96');
insert into jg185041.items1 values ('2', '19','2009-11-29', '18.96');
insert into jg185041.items1 values ('3', '19','2004-12-31', '18.96');
insert into jg185041.items1 values ('4', '19','2003-10-31', '18.96');
insert into jg185041.items1 values ('5', '19','2002-12-19', '18.96');
insert into jg185041.items1 values ('6', '19','2001-12-06', '18.96');
insert into jg185041.items1 values ('7', '19','2007-12-03', '18.96');
insert into jg185041.items1 values ('8', '19','2009-08-07', '18.96');
insert into jg185041.items1 values ('9', '19','2006-04-20', '18.96');
insert into jg185041.items1 values ('10', '19','2008-01-05', '18.96');
insert into jg185041.items1 values ('1', '20','2008-12-07', '67.41');
insert into jg185041.items1 values ('2', '20','2006-12-07', '67.41');
insert into jg185041.items1 values ('3', '20','2010-09-19', '67.41');
insert into jg185041.items1 values ('4', '20','2002-06-07', '67.41');
insert into jg185041.items1 values ('5', '20','2003-05-17', '67.41');
insert into jg185041.items1 values ('6', '20','2007-12-25', '67.41');
insert into jg185041.items1 values ('7', '20','2010-10-30', '67.41');
insert into jg185041.items1 values ('8', '20','2005-04-01', '67.41');
insert into jg185041.items1 values ('9', '20','2003-10-30', '67.41');
insert into jg185041.items1 values ('10', '20','2007-06-27', '67.41');

COMMIT;

Please refer to The Friday Night Project #5 - TZA-Database article for details on how to use Ant and Eclipse to execute both SQL files so that the table is created and populated under jg185041 on the dmdev system.
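If you'd rather not set up Ant and Eclipse just to run two SQL files, BTEQ works as well. Here is a minimal sketch, assuming the DDL and insert statements above are saved as create_items1.sql and insert_items1.sql (the file names and password placeholder are only examples):

/* Run in ANSI mode because the sample scripts end with COMMIT */
.SET SESSION TRANSACTION ANSI;
.LOGON dmdev/jg185041,<password>;

/* Create the partitioned table, then load the sample rows */
.RUN FILE = create_items1.sql;
.RUN FILE = insert_items1.sql;

.LOGOFF;
.QUIT;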

Create XML

The next step in creating our first TDM job is to generate the XML file that specifies the intent of the job.  We want to copy the jg185041.items1 table from dmdev to dmsmp, so here is the XML to do so:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dmCreate xmlns="http://schemas.teradata.com/dataMover/v2009"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://schemas.teradata.com/dataMover/v2009/DataMover.xsd">
    <job_name>jg_ppi_devX</job_name>
    <source_tdpid>dmdev</source_tdpid>
    <source_user>jg185041</source_user>
    <source_password>jg185041</source_password>
    <target_tdpid>dmsmp</target_tdpid>
    <target_user>jg185041</target_user>
    <target_password>jg185041</target_password>
    <overwrite_existing_objects>true</overwrite_existing_objects>
    <log_level>1</log_level>
    <online_archive>false</online_archive>
    <database selection="unselected">
        <name>jg185041</name>
            <table selection="included">
                <name>items1</name>
            </table>
    </database>
</dmCreate>

Execute TDM job

Now that we have the XML file, we can execute the job with TDM by using the move command. The move command creates the job in TDM's repository and starts it. If all three components of TDM (command-line interface, Daemon, and Agent) have been installed properly, you can execute the move command as follows:

datamove move -f <XML_file>
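As a side note, if you only want to define the job without starting it right away, Data Mover also provides a create command, and the job can then be started separately. A sketch of that flow (check the command-line reference for your release for the exact options):

datamove create -f <XML_file>
datamove start -job_name <name_of_job>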

This is the output of the move command for our job:

dm-daemon3:/home/jg185041/DataMover/testing/ppi # datamove move -f jg_ppi_devX.xml
Data Mover Command Line 13.01.00.04
Move mode
Command parameters:
- job_name: jg_ppi_devX
- source_tdpid: dmdev
- source_user: jg185041
- source_password: ********
- source_logon_mechanism:
- source_logon_mechanism_data:
- source_account_id:
- target_tdpid: dmsmp
- target_user: jg185041
- target_password: ********
- target_logon_mechanism:
- target_logon_mechanism_data:
- target_account_id:
- overwrite_existing_objects: true
- online_archive: false
- log_level: 1
- log_to_tdi:
Starting move command...
Creating job definition...
Connected to Daemon version 13.01.00.04
Creating job in Daemon...
Job created. Job name is: jg_ppi_devX
Requesting job start...
Job started. Job Execution Name is: jg_ppi_devX-20101008140355EDT

This command successfully created the job in the Daemon's repository and started its execution.

Job status

If we want to see how the job executed, we can use TDM's status command to look at the results. Use the following to execute the status command:

datamove status -job_name <name_of_job> -output_level 3

This is the output of the status command:

dm-daemon3:/home/jg185041/DataMover/testing/ppi # datamove status -job_name jg_ppi_devX -output_level 3
Data Mover Command Line 13.01.00.04
Status mode
Command parameters:
- job_name: jg_ppi_devX
- output_level: 3
Requesting status...
Connected to Daemon version 13.01.00.04

Job Name: jg_ppi_devX
Job Execution Name: jg_ppi_devX-20101008140355EDT


TYPE  ID   STATUS                                    CURRENT STEP                START TIME          DURATION TIME
------------------------------------------------------------------------------------------------------------------
Job:  1    COMPLETED_SUCCESSFULLY                    4                           10/8/10 2:03 PM     0:0:12


TYPE  ID   STATUS                                    STEP TYPE                   START TIME           DURATION TIME
----------------------------------------------------------------------------------------------------------------------------------
Step: 3    COMPLETED_SUCCESSFULLY                    MOVE_TABLE_DATA             10/8/10 2:03 PM      0:0:10
Step: 4    COMPLETED_SUCCESSFULLY                    RESOLVE_TABLE_AFTER_LOAD    10/8/10 2:03 PM      0:0:1


TYPE  ID   NAME    MOVE PHASE      STATUS    TYPE     ROWS       BYTES      AGENT        UTILITY        TIMESTAMP
------------------------------------------------------------------------------------------------------------------------------------------------------
Task: 5    items1  MOVING_DATA     ARCHIVE   table    200        9864       Agent1       ARC            10/8/10 2:04 PM  
Task: 5    items1  MOVING_DATA     COPY      table    200        9864       Agent1       ARC            10/8/10 2:04 PM  
Task: 5    items1  MOVING_DATA     COMPLETE  table    200        9864       Agent1       ARC            10/8/10 2:04 PM  
Task: 6    items1  POST_DATA_MOVE  COMPLETE  unknown  0          0          Agent1       SQL            10/8/10 2:04 PM 

By looking at this output we can see that TDM used ARC to copy this table and that the job completed successfully in 12 seconds.  This concludes our example of executing your first Data Mover job.  You should now be able to see the jg185041.items1 table on your target system.
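As a quick sanity check (outside of TDM itself), you can log on to dmsmp and confirm that the copied table contains the same 200 rows reported in the job status:

SELECT COUNT(*) AS row_count
FROM jg185041.items1;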

The next TDM article will cover how to execute partial table copies.

Discussion
IntensityMan 2 comments Joined 02/10
24 May 2011

Is it a replace table if the object exists on the target system?

Dave

sjpeeris 7 comments Joined 11/10
28 Jul 2011

If ARC is used then the table is replaced on the target system.
If TPTAPI/JDBC is used, rows from the target table are deleted and the source rows are inserted into the target table. This is done by moving the source rows to a staging table and then doing an INSERT/SELECT or MERGE from the staging table.
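For illustration, that TPTAPI/JDBC path looks roughly like the following SQL (the staging table name is hypothetical; this is a sketch, not the exact SQL Data Mover generates):

/* rows are first loaded into a staging table on the target system */
DELETE FROM jg185041.items1;

INSERT INTO jg185041.items1 (deptId, prodId, salesdate, amount)
SELECT deptId, prodId, salesdate, amount
FROM jg185041.items1_stg;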

There is a property called "overwrite_existing_objects" that is defined at the job level. If this property is set to false, then Data Mover will give you an error if the object exists on the target.

There is another property called "compare_ddl". When set to true, DM gives an error if the source and target table DDLs are different.

Jaison

geethareddy 145 comments Joined 10/11
06 May 2013

 
Hi Jason,
We recently opened the production flood gates for TDM jobs, and because of the sheer number of TDM jobs the system is getting hampered in terms of performance and resources. I am thinking of applying a throttle to delay and restrict the TDM jobs. But as this is the first time on our box, I would like to know your thoughts on imposing throttle rules on TDM jobs.
Please share your thoughts.
Thanks,
Geeta

jgordon 13 comments Joined 05/10
07 May 2013

Hi Geeta,
If the source/target TD system is getting hampered by running too many TDM jobs at the same time then it is probably a good idea to throttle the TDM jobs.  How are you planning to throttle the jobs?  One option is to set up TASM rules in the DBS to restrict the number of sessions each TDM job can use.  Another option is to change the "jobExecutionCoordinator.maxConcurrentJobs" property in TDM's daemon.properties file to restrict how many concurrent jobs Data Mover can execute at any given time.
Jason

bbdd.tsunix 6 comments Joined 04/13
03 Sep 2013

Hi,
I'm interested in using this application. How and where can I download this software? Is this application part of TTU for Teradata 14?
Regards

bbdd.tsunix 6 comments Joined 04/13
03 Sep 2013

Hi,
I'm interested in installing Data Mover to copy data from a production environment to a development environment. Can you tell me where I can download the TDM software? Is it included on the TTU 14.0 disk?
Regards

Mikec 21 comments Joined 03/09
03 Sep 2013

Unity Data Mover can only be purchased as a Teradata Managed Server (TMS) based solution and is not part of TTU (although it does use some TTU components to do its work). Speak to your sales team about whether such a purchase would be a good way to go for your use case.

SmarakDas 25 comments Joined 02/12
23 Sep 2014

Hello Jason,

I went through the above article series and also through the TEN course by you on the same. I have four questions, being new to the Data Mover application:

(a) If "ReplaceDatabase" is set to TRUE (without any excluded tables specified), all of the target database's tables are dropped and the source database's tables are copied, irrespective of the utility used (ARC, TPTAPI, JDBC). Data Mover will most likely use ARC, yet I wish to understand whether the "ReplaceDatabase" feature is dependent on the utility chosen by the Data Mover Daemon.

(b) [Continuation of the above query] How can we ensure a scenario where the target DB's tables are replaced with the content of the source DB's tables if they already exist in both the target DB and the source DB, and new ones are created if they don't exist in the target DB?

(c) The usage of a staging database is similar to a work table in MultiLoad, wherein the operation is first performed on the staging DB's tables before the target DB's tables. Is the usage of the staging DB (if specified) dependent on the utility used by DM, along with the system's status/concurrent DM or ARC or TPT load/update limitations, or is the staging DB used whenever it has been specified by the user during job creation?

(d) What happens if the staging DB doesn't have enough space yet the target DB has sufficient space? Does the job (midway, if it has already started execution) abort, or does it skip the staging DB and start working directly on the target DB?

Thanks in advance,

Smarak

jgordon 13 comments Joined 05/10
23 Sep 2014

Smarak,
 
(a) The "ReplaceDatabase" feature is dependent on the use of ARC because that is currently the only method Data Mover supports for copying databases.
 
(b) Copying a database with ARC will ensure the target tables get replaced with the content of the source tables.  New tables will also get created if they don't already exist on the target.
 
(c) The use of a staging database is dependent on multiple factors (full or partial copy, utility chosen, staging table used, etc.).  The most common scenario for using a staging database (if one has been specified) is when doing a partial table copy with a target table that already exists and is not empty.
 
(d) Data Mover does not currently check the permspace available in the staging database or the target database prior to running the job.  If the staging or target database doesn't have enough permspace then the job will fail with a "no more room in database" error.  Data Mover will not try to load directly into the target database if there is not enough permspace in the staging database.
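If you want to check space yourself before running a job, a generic DBC query such as the following shows how much perm space is available in a staging or target database (this is not something Data Mover runs for you; 'staging_db' is only a placeholder name):

SELECT DatabaseName,
       SUM(MaxPerm) AS MaxPermBytes,
       SUM(CurrentPerm) AS CurrentPermBytes,
       SUM(MaxPerm) - SUM(CurrentPerm) AS AvailablePermBytes
FROM DBC.DiskSpaceV
WHERE DatabaseName IN ('staging_db', 'jg185041')
GROUP BY 1;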
 
Jason

maniamenon 2 comments Joined 05/09
25 Sep 2014

Hi Jason,

I have 100 TB of data. I am planning to move it to the target system initially using BAR. After that, I want to do incremental loads from the source to the target (both source and target are Teradata; updating production to DR). I do not want to depend on the ETL process for this.

In this scenario, which option do you recommend: Unity Data Mover or Teradata Incremental Backup and Restore? The important fact we need to take into consideration is that the source is Teradata 13 and the target is Teradata 14.

Please recommend.

Thanks,
Mani

jgordon 13 comments Joined 05/10
11 Nov 2014

Mani,
 
Incremental (or Changed Block) Backup is only supported in TD 14.10 or later versions with the DSA product.  Therefore, this option won't work with a source system at TD 13.00.  I recommend using Unity Data Mover or Unity Director/Loader if you want to do incremental loads from the source to target.
 
Jason

cr255014 5 comments Joined 01/14
20 Mar 2015

Do we have a checkpoint option when TPTAPI is used?

jgordon 13 comments Joined 05/10
22 Mar 2015

There is no checkpoint option when TPTAPI is used in a Data Mover job.

Varun Singh 2 comments Joined 03/15
22 Mar 2015

Hi Jason,

 

I am trying to copy data from a view to a table but am getting the error below:

[Pre/Post Session Command] Process id 48202. Standard output and error:
- data_streams: 2
- source_sessions: 8
- target_sessions: 16
- max_agents_per_task: 1
- log_level: 1
- log_to_tdi:
- log_to_event_table:
- query_band:
- sync: true
About to connect to ActiveMQ at localhost:61616
Daemon security is off.
Starting move command...
Creating job definition...
Creating job in Daemon...
Error: Cannot generate target table DDL before view column information has been set.

My source view is on a TD cloud server and the target table is in a TD database. Both the source and target tables have the same name; only the database name differs.

Could you please help me resolve this issue, and let me know what I need to take care of while loading data through a view?

 

Thanks,

Varun

jgordon 13 comments Joined 05/10
23 Mar 2015

Varun,
 
It's difficult to tell why the error occurred without seeing the relevant Data Mover logs.  Please create an incident so the GSC can properly investigate the issue.
 
Thanks,
Jason

Varun Singh 2 comments Joined 03/15
30 Mar 2015

Hi Jason,
I have loaded data successfully through a view, but I am still facing an issue. The job runs for 1-2 hours and then fails with the error below:

[ Agent: Agent1 TaskID: 66530] EXPORT Rows Exported: 7473993 Bytes Exported: 2147547034
[ Agent: Agent1 TaskID: 66530] EXPORT Rows Exported: 7474022 Bytes Exported: 2147546042
[ Agent: Agent1 TaskID: 66530] STREAM Rows Inserted: 7467276 Bytes Inserted: 2147546042 Rows Updated: 0 Rows Deleted: 0
[ Agent: Agent1 TaskID: 66530] EXPORT Rows Exported: 7473778 Bytes Exported: 2147484281
[ Agent: Agent1 TaskID: 66530] STREAM Rows Inserted: 7465684 Bytes Inserted: 2147547034 Rows Updated: 0 Rows Deleted: 0
[ Agent: Agent1 TaskID: 66530] STREAM Rows Inserted: 7465684 Bytes Inserted: 2147484281 Rows Updated: 0 Rows Deleted: 0
[ Agent: Agent1 TaskID: 66530] EXPORT Rows Exported: 7473806 Bytes Exported: 2147483786
[ Agent: Agent1 TaskID: 66530] STREAM Rows Inserted: 7465684 Bytes Inserted: 2147483786 Rows Updated: 0 Rows Deleted: 0
[ Agent: Agent1 TaskID: 66530 ] EXPORT Error(0): [ TaskID:66530:TPTAPI_EXPORT:1 ] ERROR: TPTAPI Function GetBuffer Returned Error 2595
[ Agent: Agent1 TaskID: 66530 ] STREAM Error(0): [ TaskID:66530:TPTAPI_STREAM:1 ] ERROR: Error Popping Data Buffer from Shared Space
[ Agent: Agent1 TaskID: 66530 ] EXPORT Error(0): [ TaskID:66530:TPTAPI_EXPORT:1 ] ERROR: The FastExport select request has been aborted.
[ Agent: Agent1 TaskID: 66530 ] EXPORT Error(0): [ TaskID:66530:TPTAPI_EXPORT:1 ] ERROR: TPTAPI Function GetBuffer Returned Unexpected Code 2595
[ TaskID:66530:TPTAPI_STREAM:1 ] Entering Phase: TERMINATE
[ TaskID:66530:TPTAPI_EXPORT:1 ] Terminating TPTAPI Connection
[ Agent: Agent1 TaskID: 66530] Entering TERMINATE Phase.
[ TaskID:66530:TPTAPI_STREAM:1 ] Terminating TPTAPI Connection
[ Agent: Agent1 TaskID: 66530 ] NONE Error(0): [ TaskID:66530:NONE:0 ] ERROR: Error Popping Message from Shared Space

The same job works fine when I load data through a table instead of a view, and it takes only 1 hour to load 200 million records.

Could you please help me resolve this issue, and what should I do to increase performance while loading data through a view?

 

Thanks,

Varun

jgordon 13 comments Joined 05/10
30 Mar 2015

Varun,
 
Please create an incident so the GSC can properly investigate the problem.
 
Thanks,
Jason

touseef_081 5 comments Joined 05/15
21 May 2015

Hi Jason
 
My client is using Teradata Data Mover 14.10 and they are facing a problem while saving a job, especially when they select the ARC utility (force utility) to copy data.
 
The error is:
Error: ARC cannot be used to move data Reason: Cannot use ARC when logon mechanism is provided.
 
I have gone through the documentation and couldn't find anything relevant to this error that explains how to disable the logon mechanism. Is there any way to disable this feature so my client can easily use the UDM GUI to create ARC jobs?
 
Regards
Muhammad Touseef

jgordon 13 comments Joined 05/10
21 May 2015

Muhammad,
Specifying something other than the default value for the logon mechanism parameter in a Data Mover job is not supported when ARC is chosen as the utility to copy data.  The reason for this is that Data Mover uses the multi-ARC protocol.  This protocol does not support the logon mechanism parameter.  You can avoid this error by not specifying anything for the logon mechanism parameter when you create a job through the portlet or the command-line interface.
 
FYI…this restriction is documented in the Data Mover User Guide too:
 
About Logging on to a Teradata Database System
 
To log on to a source or target Teradata Database system, provide at least one of the following or an error will result during job creation:
• User name and password
• Logon mechanism and, if required, logon data
 
If both the user name and password and the logon mechanism is valid, Data Mover connects to the source and target systems using the logon mechanism, user name, and password.  When only the logon mechanism is provided, Data Mover connects to the source and target systems using that logon mechanism.
 
Note: Logon mechanisms are not supported for Teradata ARC jobs. Use logon mechanisms only for Teradata PT API and Teradata JDBC jobs. When using the create or move command, if -source_logon_mechanism or -target_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -source_logon_mechanism or -target_logon_mechanism with Teradata ARC specified for -force_utility results in an error.

touseef_081 5 comments Joined 05/15
25 May 2015

Thanks Jason,
Problem is solved :).
 
Regards
Muhammad Touseef

ag186070 2 comments Joined 01/15
13 Aug 2015

Hi,
 
Can someone assist me with how to move a view and its dependent views through Data Mover? I want to move all the views from a database along with their dependent views, in the correct sequence.
Regards,
Atul Gangurde.

jgordon 13 comments Joined 05/10
14 Aug 2015

Data Mover does not support automatically copying dependent objects in a view.  Data Mover will only copy objects specified in the job definition.  You can copy all views in a database by including that database in the job definition.  If any dependent views are in different databases then you will also have to include those views (or their parent databases) in the job definition for them to be copied by Data Mover.
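For example, based on the dmCreate schema shown earlier in the article, including an entire database (plus a second database that holds dependent views) would look roughly like this inside the job XML (a sketch with placeholder database names; element usage may vary by Data Mover version):

<database selection="included">
    <name>views_db</name>
</database>
<database selection="included">
    <name>dependent_views_db</name>
</database>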
 
Jason

Navic 2 comments Joined 07/16
25 Jul 2016

Hi Gordon,
Does TDM create any physical file that I can move to the new box and restore?
If not, is there any option to copy a Teradata DB from one box to another using a physical copy?
 
Thnx
Navi
 

jgordon 13 comments Joined 05/10
25 Jul 2016

Data Mover does not create physical data files when copying data.  Data is transferred through buffers in memory.

Navic 2 comments Joined 07/16
25 Jul 2016

Thanks Gordon!!
 
Can we use data filters to filter data from tables while performing an archive using the arcmain utility? If yes, can you help me with the command?
If not, what is the best way to move data from one Teradata environment to another when we do not have access to both environments at the same time? Is creating flat files the only option, or are there other approaches?
Please help.
Thanks!!

jgordon 13 comments Joined 05/10
26 Jul 2016

You can only do partial copies with ARC on a limited basis.  Please read the "About Partial Copies" section in the Data Mover User Guide for more details.  If you're interested in how to do this outside of Data Mover by using stand-alone ARC then read the "Archiving Selected Partitions of PPI tables" section in the ARC User Guide.  Backing up/exporting data to flat files or a backup device is the only option I can think of to copy data between systems if you can't read from the source system and write to the target system at the same time.
