paulcy 6 posts Joined 04/06
31 Aug 2008
Teradata PT 12.0 error output schema does not match

Objective: Export a large amount of data to a pipe-delimited file.

Error: Output Schema does not match data from SELECT statement

What I am doing: I am using the TPT wizard to create and run the job. As I go through the steps in TPT, when I get to the part of selecting what columns I want, I click the 'select all' option. The columns I am selecting range in type from CHAR to INT to DECIMAL. After selecting all, I then change every type to VARCHAR, because I read in the manual that if you are exporting to a delimited file, all columns need to be VARCHAR. The problem is that when I run the job, any selected column that wasn't already a VARCHAR causes the above error. I am assuming I am doing something wrong in the TPT script, but I figured I would ask.

srinivasa meka 10 posts Joined 01/10
16 Apr 2010

I ran into a similar situation yesterday. Here are my suggestions:
1) Use the TPT wizard (Windows 32-bit) to create a template. When it is done, you will see a dialog box with four major sections. Verify that the field names are listed in the top half; towards the end you will see a SELECT statement.
2) Go to the "DEFINE SCHEMA" section and change all fields to VARCHAR, even the CHAR fields.
3) Go to the "SELECT * FROM OPERATOR" section and change the SQL SELECT statement by casting each field to VARCHAR. Of course, if a field is already a VARCHAR, there is no need to cast it.
4) If you prefer, change the login/password in this script.
5) Run it with a condition that returns no rows to see if there is any problem with the script.

Make sure to convert every field in the table to VARCHAR. TPT looked very fast at exporting and quick to respond; I received something close to a million rows in a second.

Here is a small section of SQL in TPT template or script:

UserName = 'yourUserName',
UserPassword = 'yourUserPassword',
SelectStmt = 'SELECT cast(TXN_ID as varchar(30)),
              OTHER LINES
              cast(CASHBACK_AMOUNT as varchar(20)),
              cast(TRANSACTION_AMOUNT as varchar(20))
              FROM yourdbName.yourBigFactTable WHERE issuer >= 310;',

feinholz 1234 posts Joined 05/08
17 May 2010

If you want to use the Export operator to extract the data from Teradata, then yes you need to CAST the columns in the SELECT statement being sent to Teradata. The Export operator utilizes the FastExport protocol and that protocol cannot extract the data properly in its native form.

Otherwise, use the Selector operator. This operator extracts the data in field mode (similar to what BTEQ would do) and there would be no need to CAST the columns in the SELECT statement.
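For illustration, a minimal sketch of a SQL Selector operator definition (system, credential, schema, and table names are placeholders, not from this thread):

DEFINE OPERATOR SQL_SELECTOR
DESCRIPTION 'TPT SQL Selector Operator'
TYPE SELECTOR
SCHEMA EXPORT_SCHEMA
ATTRIBUTES
(
  VARCHAR TdpId          = 'mySystem',
  VARCHAR UserName       = 'myUser',
  VARCHAR UserPassword   = 'myPassword',
  VARCHAR PrivateLogName = 'selector_log',
  VARCHAR SelectStmt     = 'SELECT col1, col2 FROM mydb.mytable;'
);

Note that the SELECT here lists the columns in their native types, with no CASTs, since the Selector extracts in field mode.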

--SteveF

Rajesh.Prabhu 16 posts Joined 08/06
23 Jul 2010

Is there anywhere we can get a manual on TPT error messages and their resolution? These errors seem a little weird compared to the Teradata FastLoad/MultiLoad errors. Appreciate your response!

feinholz 1234 posts Joined 05/08
23 Jul 2010

The error message manual for TPT is a work in progress. We have started it (in 13.10) but not all errors are in there. We will be adding them in each release.

I agree with you that some errors seem "weird", although I suppose I would classify them as "different".

Please remember that even though TPT uses the same extracting and loading "protocols" when running jobs, TPT is not the same as FastLoad or MultiLoad. It is an entirely different product (different code).

The error is actually quite explanatory. If the SELECT statement did not CAST all of the columns to VARCHAR, then the data types for the columns returned by the SELECT statement would not match what was provided in the script's SCHEMA object.
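To illustrate, a minimal sketch of a matching pair (all names and lengths are placeholders): the SCHEMA object declares only VARCHARs, and the SELECT casts each non-VARCHAR column to the same lengths:

DEFINE SCHEMA EXPORT_SCHEMA
(
  TXN_ID  VARCHAR(30),
  TXN_AMT VARCHAR(20)
);

VARCHAR SelectStmt = 'SELECT CAST(TXN_ID  AS VARCHAR(30)),
                             CAST(TXN_AMT AS VARCHAR(20))
                      FROM mydb.mytable;'

If either CAST is omitted, the row layout returned by Teradata no longer matches the schema, which is exactly this error.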

--SteveF

pauldepons 2 posts Joined 04/10
12 Aug 2010

I am getting the same error; however, I am not trying to make any changes. Exporting from Teradata and loading to Teradata, select all. The only caveat is that I am using the max DECIMAL(38). Are large decimals not supported? Do I need to change them to CHAR?

ratchetandclank 49 posts Joined 01/08
12 Aug 2010

When you are using Large Decimals in TPT, you have to specify the attribute "MaxDecimalDigits = 38" in the appropriate operator definition.
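For example, a sketch of an Export operator definition with the attribute set (system, credential, and table names are placeholders):

DEFINE OPERATOR EXPORT_OPERATOR
DESCRIPTION 'TPT Export Operator'
TYPE EXPORT
SCHEMA MY_SCHEMA
ATTRIBUTES
(
  VARCHAR TdpId            = 'mySystem',
  VARCHAR UserName         = 'myUser',
  VARCHAR UserPassword     = 'myPassword',
  INTEGER MaxDecimalDigits = 38,
  VARCHAR SelectStmt       = 'SELECT * FROM mydb.large_decimal_table;'
);

Without the attribute, the operator assumes the pre-large-decimal maximum of 18 digits, so DECIMAL(38) columns will not match the schema.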

rameshg2521 1 post Joined 06/11
14 Jun 2011

I got the same error. It worked with CAST, but it did not work with COALESCE. Example below.

Worked:
cast(Base.SUB_ACCT_NO_OCI as VARCHAR(16))
,cast(Base.SERV_ID_OCI as VARCHAR(10))

Did not work:
coalesce(cast(Base.SUB_ACCT_NO_OCI as VARCHAR(16)),'')
,coalesce(cast(Base.SERV_ID_OCI as VARCHAR(10)),'')
or
cast(coalesce(Base.SUB_ACCT_NO_OCI,'') as VARCHAR(16))
,cast(coalesce(Base.SERV_ID_OCI,'') as VARCHAR(10))

Advice is highly appreciated. Thanks. rameshg2521@gmail.com

Mainframer 4 posts Joined 07/11
27 Jul 2011

Do we have a TPT error message manual available? I am trying to figure out what the following error means:

TPT_INFRA: TPT01036: Error: Task (TaskID: 4, Task Name: SELECT_2[0001]) terminated due to the receipt of signal number 9

" Be nice to people on your way up, because you will need them on your way down " - Unknown.

feinholz 1234 posts Joined 05/08
27 Jul 2011

That error message indicates that the "exporting" operator (the producer operator) appears to have coredumped for some reason.
I would need more information than just that one line to better decipher what is going on.
I would need (at a minimum):
version of TPT
script
console output
entire job log

--SteveF

Mainframer 4 posts Joined 07/11
28 Jul 2011

Feinholz,

I had created one more post which had more information. You have already replied to that post and we have connected outside the forum.

Thanks,
Mainframer.

" Be nice to people on your way up, because you will need them on your way down " - Unknown.

Samya 21 posts Joined 11/11
24 Feb 2012

 

Hi Feinholz,

  • I have a requirement wherein I am using one TPT script to extract 3 tables, with a different set of operators written for each table, as below:

 

DEFINE JOB EXPORT_TABLE_TO_FILE_AND_LOAD_TO_TABLE
DESCRIPTION 'EXPORT SAMPLE ITS TABLE TO A FILE'
(
  Schema table1
  Export Operator1
  FileWriterOperator1
  ApplyForTable1

  Schema table2
  Export Operator2
  FileWriterOperator2
  ApplyForTable2

  Schema table3
  Export Operator3
  FileWriterOperator3
  ApplyForTable3
);

 

Now I want each Export operator to have multiple instances so that it can extract from, say, table1 more quickly than with a single instance. I am doing that by providing the number of instances required in the APPLY part, and it is working fine.

But my query is: how do I extract from the three tables in parallel? In the above script, extraction from one table at a time is done by multiple instances; extraction for table 2 starts only after extraction for table 1 is complete. Is there any way to do that in the above script?

  • One more query: I wanted my output file to be in delimited format, so I started using the SQL Selector operator, but I came to know that it doesn't support multiple instances or multiple sessions, so I moved to the Export operator. Is there any way to get output files in delimited format when using the Export operator? If yes, please state how.

  • Lastly, as far as I know, when we execute a TPT script for, say, table A, a checkpoint file is created, which is automatically deleted on successful execution of the script. Suppose table A has some 50 GB of data that I am extracting; in parallel, from a different terminal, I have run one more script for table B on the same server, with data only in the KBs. When the execution for table B starts, it puts its extracted data in the output flat file of A, the reason being that it reads the same checkpoint file created for table A. So my question is: is there any way to specify different checkpoints for different executions of a script, so as to run multiple TPT scripts without them overlapping each other?

Request you to provide your inputs ASAP.

Thanks in advance,

Regards,

Sam

feinholz 1234 posts Joined 05/08
29 Feb 2012

1. Each "Apply" step in a TPT script is a separate load (or unload) job. If you need each extract job to write to a different file, then you will need to keep them as separate jobs. If you want them to run in parallel, you will need to create multiple scripts and you can then run each TPT job in parallel.

There is a way (a bit more complex) to "fake out" TPT and do all of the extracts in parallel, but the explanation would be too long for this reply. It involves introducing an additional "derived" column in the SELECT statement (with each SELECT assigning a different value to the derived column), and then using CASE logic in the APPLY to filter the output based on the newly derived column and send the appropriate rows to the proper output file.

But this is not something to be done in the general case.

 

2. You cannot use the Export operator to create a delimited file. You must use the SQL Selector.

 

3. When you execute a TPT job, if you do not supply a "job name" then TPT defaults to your logon id. The checkpoint files are named from the "job name". To create unique checkpoint files, you need to create a unique "job name" for each invocation of TPT. Just provide the "job name" as the last argument on the command line.
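For example, two hypothetical invocations (script file names are placeholders), each given its own job name as the last argument:

$ tbuild -f export_tableA.txt jobA
$ tbuild -f export_tableB.txt jobB

Run from separate terminals, the two jobs then get checkpoint files named from "jobA" and "jobB" and will no longer read each other's checkpoints, which also addresses the overlap described above.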

 

--SteveF

Samya 21 posts Joined 11/11
26 Mar 2012

 

Hi Feinholz,

 

I have a TPT script wherein I am using the Load operator, as below:

 

DEFINE OPERATOR LOAD_OPERATOR
DESCRIPTION 'TERADATA PARALLEL TRANSPORTER LOAD OPERATOR'
TYPE LOAD
SCHEMA TEST_SCHEMA
ATTRIBUTES
(
  VARCHAR TargetTable    = 'abc.backup_its_acc',
  VARCHAR TdpId          = 'EDWUAT',
  VARCHAR UserName       = 'abcd',
  VARCHAR UserPassword   = '*******',
  VARCHAR ErrorTable1    = 'abc.E1',
  VARCHAR ErrorTable2    = 'abc.E2',
  VARCHAR LogTable       = 'logtable',
  VARCHAR PrivateLogName = 'U006'
);

 

 

During execution it gives:

LOAD_OPERATOR: private log specified: U006

How do I access this log?

 

Regards,

Sam

feinholz 1234 posts Joined 05/08
26 Mar 2012

Please refer to the TPT documentation on the use of the 'tlogview' utility to process and view the contents of the logs.

The "log" to be accessed is named with the job-id.

The "private log" is just a virtual log inside the entire job log.

The easiest way for you to look at the contents of the job log is:

$ tlogview -j <job-id> -f "*" -g

 

That command will give you the contents of every private log, and will sort the output by private log.

 

--SteveF

AshishPatil 24 posts Joined 05/12
21 Nov 2012

Hi feinholz,
Can you please help me out with the below errors I am getting while running a TPT script to load data from an Oracle table to a Teradata table? I am not able to figure out the root cause behind these errors.

Errors:
 

Teradata Parallel Transporter Version 13.10.00.04
TPT_INFRA: Semantic error at or near line 64 of Job Script File '/teradata/test.txt':
TPT_INFRA: TPT03107: Operator 'LOAD_OPERATOR' has explicit input schema. Restricted APPLY
  statement allows only deferred schema for consumer operators.
TPT_INFRA: Semantic error at or near line 64 of Job Script File '/teradata/test.txt':
TPT_INFRA: TPT03111: Rule: Restricted APPLY Statement
Compilation failed due to errors. Execution Plan was not generated.
Job script compilation failed.
Job terminated with status 8.
$
 
TPT Script:
 

DEFINE JOB ODBC_LOAD
DESCRIPTION 'Read From Oracle Using ODBC Operator'
(
  DEFINE SCHEMA test_ashish
  DESCRIPTION 'test SCHEMA'
  (
    ROW_WID         DECIMAL(10,0),
    FINANCIER_NAME  VARCHAR(100),
    ETL_PROC_WID    DECIMAL(10,0),
    BU_ID           VARCHAR(30),
    MERGED_INTO_WID DECIMAL(10,0),
    MARKET_TYPE_CD  VARCHAR(30)
  );

  DEFINE OPERATOR ODBC_Operator
  DESCRIPTION 'TPT ODBC Operator'
  TYPE ODBC
  SCHEMA test_ashish
  ATTRIBUTES
  (
    VARCHAR PrivateLogName = 'odbc_log',
    VARCHAR DSNName = @var_OraId,
    VARCHAR UserName = @var_OraUserName,
    VARCHAR UserPassword = @var_OraPassword,
    VARCHAR SelectStmt = 'SELECT ROW_WID,FINANCIER_NAME,ETL_PROC_WID,BU_ID,MERGED_INTO_WID,MARKET_TYPE_CD FROM test_ashish;'
  );

  DEFINE OPERATOR LOAD_OPERATOR
  TYPE LOAD
  SCHEMA test_ashish
  ATTRIBUTES
  (
    VARCHAR PrivateLogName = 'load_log',
    VARCHAR TdpId = @var_tdpid,
    VARCHAR UserName = @var_username,
    VARCHAR UserPassword = @var_password,
    VARCHAR TargetTable = 'test_ashish',
    VARCHAR LogTable = 'odbc_test_log',
    VARCHAR ErrorTable1 = 'odbc_test_error1',
    VARCHAR ErrorTable2 = 'odbc_test_error2'
  );

  APPLY
    'INSERT INTO TD_DEV.test_ashish
     (:ROW_WID
     ,:FINANCIER_NAME
     ,:ETL_PROC_WID
     ,:BU_ID
     ,:MERGED_INTO_WID
     ,:MARKET_TYPE_CD
     );'
  TO OPERATOR (LOAD_OPERATOR)
);
 
Regards,
Ashish

feinholz 1234 posts Joined 05/08
21 Nov 2012

You only have an APPLY statement with no accompanying SELECT: the APPLY references your consumer operator, but the producer operator (ODBC_Operator) is never used.
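A sketch of the corrected APPLY, reusing the operator names from the script above (assuming ODBC_Operator is the intended producer):

APPLY
  'INSERT INTO TD_DEV.test_ashish
   (:ROW_WID
   ,:FINANCIER_NAME
   ,:ETL_PROC_WID
   ,:BU_ID
   ,:MERGED_INTO_WID
   ,:MARKET_TYPE_CD
   );'
TO OPERATOR (LOAD_OPERATOR)
SELECT * FROM OPERATOR (ODBC_Operator);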

--SteveF

AshishPatil 24 posts Joined 05/12
21 Nov 2012

Thanks feinholz, :)

I moved ahead, but now I am getting a different error stating that it is not able to load the shared library libodbcop.so. I checked whether this file exists; it is there at the path below, and I have given it all the permissions, but it is still not working.
 
Error:
 

Teradata Parallel Transporter Version 13.10.00.04
Job log: /opt/teradata/client/13.10/tbuild/logs/ODBC_LOAD-45.out
Job id is ODBC_LOAD-45, running on edwanadev
Found CheckPoint file: /opt/teradata/client/13.10/tbuild/checkpoint/ODBC_LOADLVCP
This is a restart job; it restarts at step MAIN_STEP.
TPT_INFRA: TPT02192: Error: Cannot load shared library
           dlopen error: Could not load module /opt/teradata/client/13.10/tbuild/lib/libodbcop.so.
        Dependent module /opt/teradata/client/ODBC_64/lib/libodbc.a(odbc.so) could not be loaded.
        The module has an invalid magic number.
Could not load module /opt/teradata/client/13.10/tbuild/lib/libodbcop.so.
        Dependent module /opt/teradata/client/13.10/tbuild/lib/libodbcop.so could not be loaded.
TPT_INFRA: TPT02109: Cannot open shared library libodbcop.so, status = Open Error
TPT_INFRA: TPT02240: Error: Cannot create operator object, status = Operator Error
TPT_INFRA: TPT02103: Executor unable to set up its table object
Job step MAIN_STEP terminated (status 8)
Job ODBC_LOAD terminated (status 8)
 

feinholz 1234 posts Joined 05/08
26 Nov 2012

The ODBC operator cannot be loaded because the ODBC driver manager library cannot be loaded.
And it looks like that is occurring because you are attempting to use the 64-bit driver manager, not the 32-bit one.
TPT is a 32-bit application.
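The loader messages ("invalid magic number", the libodbc.a(odbc.so) archive member) suggest this is AIX. A hypothetical fix, assuming a 32-bit ODBC driver manager is installed under a sibling ODBC_32 directory (that path is an assumption, not taken from the log), would be to put its lib directory ahead of the 64-bit one before running tbuild:

# Assumption: the 32-bit driver manager lives under ODBC_32
$ export LIBPATH=/opt/teradata/client/ODBC_32/lib:$LIBPATH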
 

--SteveF

AndrewSchroter 14 posts Joined 11/06
07 Dec 2012

Based on an install script I looked at earlier this week, aren't the traditional utilities (FastExport/FastLoad/MultiLoad) 32-bit and the TPT utilities 64-bit? It appears to me the error message above indicates problems loading the 64-bit ODBC .so file:
 

           dlopen error: Could not load module /opt/teradata/client/13.10/tbuild/lib/libodbcop.so.
        Dependent module /opt/teradata/client/ODBC_64/lib/libodbc.a(odbc.so) could not be loaded.
        The module has an invalid magic number.

feinholz 1234 posts Joined 05/08
09 Dec 2012

TPT is a 32-bit application.

--SteveF

dbkiller 2 posts Joined 04/13
22 May 2013

I need help with the following issue. I am looking to extract data from Teradata into several files on Unix nodes. Right now we are using TPT Export to extract data on the order of terabytes. The data is chunked into several gigabytes based on date. Three or four export TPT jobs write chunked data into different files on one Unix node; we need a window of 20 hours and multiple FastExport slots on the server to achieve this. Now I would like to split the same TPT export job (using one Export operator) so that it writes subsets of the result set into different files on multiple Unix nodes, so that more data can be extracted with one FastExport utility slot (to overcome the I/O bottleneck of the disks on the Unix nodes). How can I achieve this?

@Feinholz: In one of your replies you wrote "There is a way (a bit more complex) to 'fake out' TPT"; I think you mentioned this kind of scenario. Could you please elaborate on how to solve my issue?
 
Thanks in advance

feinholz 1234 posts Joined 05/08
23 May 2013

Script-based TPT does not currently enable the user to write out data to files on different Unix nodes.
The "faking out" reference was about the ability to read data with different schemas and yet load the data into separate tables: for example, reading data from 3 flat files, each one with a separate schema, and within a single load job loading the data from the 3 files into 3 different tables.
We normally require all of the data to conform to a single schema, but there is a way to get around this by defining a "super-schema" (the union of all 3 schemas into 1). But like I said, it is a long explanation and may not be what you are looking for.

--SteveF

dlayten 3 posts Joined 01/13
06 Aug 2013

All,
When I have the following query: CASE WHEN TOTAL_PASS = 1 THEN 1 ELSE 0 END AS COMPLIANCE, it seems to work. When I do this: CASE WHEN TOTAL_PASS = 1 THEN 'Y' ELSE 'N' END AS COMPLIANCE, I get the output schema error. I would think this would not happen, because the 'Y' and 'N' data type would be VARCHAR. Am I missing something? Would I need to cast this to be something else? If so, how would I do that in a CASE statement?
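For reference, a cast around the whole CASE expression would look like this (a sketch using the column and alias from the question; the VARCHAR length is an assumption):

CAST(CASE WHEN TOTAL_PASS = 1 THEN 'Y' ELSE 'N' END AS VARCHAR(1)) AS COMPLIANCE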
 
Thanks

feinholz 1234 posts Joined 05/08
06 Aug 2013

What is the data type of COMPLIANCE?
In the future, try to provide more information (like a copy of the script?).
 

--SteveF
