We have made great strides in improving the handling of delimited data (e.g., CSV data) in Teradata Parallel Transporter for the TTU 14.00 release. This article describes the background of the data format, the original support, and the enhancements we have made.

Background and Theory

Comma-Separated Values

Comma-separated values (CSV) refers to a platform-independent data transport format, consisting of data values (expressed as character sequences), separated by commas. Of course, using comma as the value separator causes problems if the data values themselves include commas (for example, a single value containing both city and state, or a last-comma-first full name).

Delimiter-Separated Values

To avoid the problems of commas in data values, a delimiter-separated values (DSV) format was introduced. The DSV format allows the use of any separator, so that conflicts between data values and the delimiter can generally be avoided.

Although the delimiter is typically a single character, there is no technical reason that it cannot be a multi-character sequence.

Enclosing Data Values

Because there are cases where the nature of the data makes it impossible to know in advance which characters may occur in the data values, quoted DSVs were introduced. With quoted DSVs, the delimiter can occur within the data. The only thing that cannot occur within a data value (without escaping, described later) is the character sequence used to enclose the values.

General Rules

For the most generality, an application should accept the widest variety of DSV input, especially since there is no single widely accepted standard for CSV/DSV.

On the other hand, applications that emit CSV/DSV should be careful to limit their output so that it is acceptable to less forgiving receiving applications.
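As an illustration of that rule, an emitter can quote every value and double any embedded quote characters, a form that even strict readers accept. A minimal Python sketch (not part of any Teradata tool; the function name is invented for illustration):

```python
def format_dsv_line(values, delimiter=",", quote='"'):
    """Emit one conservative DSV line: quote every value and
    double any embedded quote characters."""
    out = []
    for v in values:
        # Doubling the quote character is the most widely accepted escape.
        out.append(quote + v.replace(quote, quote + quote) + quote)
    return delimiter.join(out)
```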

Teradata Implementation

Teradata’s Original Implementation

Teradata’s original implementation of CSV/DSV, referred to variously as VARTEXT or Delimited Data, had the following restrictions:

  • The delimiter was limited to one single-byte character (default: |, the vertical bar/pipe).
  • There was no support for quoted data values.
  • Empty values (indicated, in the case of the first field, by a delimiter at the start of the line; in the case of all fields other than the first and last fields, by adjacent delimiters; or, in the case of the last field, by a delimiter immediately followed by end-of-line) were passed to the DBS as NULL.

The result of parsing each input line was a series of VARCHAR() fields, each holding the value of the corresponding input data value.
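That original line-parsing behavior can be sketched in Python (a sketch only; in real TPT the expected field count comes from the job schema, which also resolves the optional trailing delimiter):

```python
def parse_vartext(line, n_fields, delimiter="|"):
    """Split a VARTEXT line on a single-character delimiter.

    Empty values (leading delimiter, adjacent delimiters, or nothing
    between the last delimiter and end-of-line) are passed to the DBS
    as NULL, modeled here as None.
    """
    parts = line.split(delimiter)
    # A delimiter after the last field is optional; when present,
    # split() yields one spurious trailing empty segment.
    if len(parts) == n_fields + 1 and parts[-1] == "":
        parts.pop()
    return [None if p == "" else p for p in parts]
```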

MBCS Support

Subsequently, support for the delimiter character was expanded to allow a single multi-byte character.

TPT Enhancements For TTU 14.00

The biggest and most important aspect of the delimited data enhancement in TPT 14.00 is support for quoted delimited data. This is important because more and more customers are moving data from non-Teradata databases into Teradata, and the export tools used to extract data from those databases often write it to flat files in delimited format, with one or more fields enclosed in quotes.

New DataConnector Operator Attribute

In order to enable this new feature, we have introduced a new attribute to the TPT DataConnector operator:

QuotedData

  • No (default, quoted data is not supported)
    • current rules apply
    • quotes are considered a part of the data
  • Yes (all fields must be quoted)
    • quotes are not considered to be part of the data
  • Optional (fields may be a mixture of quoted and unquoted)
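For example, a DataConnector operator definition enabling the new attribute might look like the following sketch (file and schema names are placeholders):

```
DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUT_SCHEMA
ATTRIBUTES
(
VARCHAR FileName      = 'input.txt',
VARCHAR OpenMode      = 'Read',
VARCHAR Format        = 'Delimited',
VARCHAR TextDelimiter = '|',
VARCHAR QuotedData    = 'Optional'
);
```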

Enclosing Data Values

Although the term “quoted” may seem to imply that either single quotes (apostrophes) or double quotes (quotation marks) are used to enclose the values, this is not the case:

  • The enclosing characters need not be quotes.
  • The enclosing characters need not be single characters (that is, a multi-character sequence can be used).
  • The open quote and close quote need not be the same; they can be distinct.

Note: for purposes of this article, “close quote” includes both a distinct close quote, and a common open and close quote.

Rules For Quotes

The following rules apply to both the open and close quote:

  • If the open quote and close quote are distinct from each other, neither can be a substring of the other.
  • Neither open quote nor close quote can be a substring of the delimiter.
  • The delimiter cannot be a substring of either open quote or close quote.
  • The backslash character (\) cannot occur in either the open quote or close quote.
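These constraints are easy to check mechanically. A Python sketch, purely illustrative (TPT performs its own validation; this is not a Teradata API):

```python
def validate_quote_marks(open_q, close_q, delimiter):
    """Raise ValueError if the quote marks break the rules above."""
    if open_q != close_q and (open_q in close_q or close_q in open_q):
        raise ValueError("one quote mark is a substring of the other")
    if open_q in delimiter or close_q in delimiter:
        raise ValueError("a quote mark is a substring of the delimiter")
    if delimiter in open_q or delimiter in close_q:
        raise ValueError("the delimiter is a substring of a quote mark")
    if "\\" in open_q or "\\" in close_q:
        raise ValueError("backslash may not appear in a quote mark")
```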

Rules For Parsing The Input Line

Parsing the input line is relatively straightforward:

  • If values are unquoted, scan for the delimiter or end-of-line, since those are the only significant characters.
  • If values are always quoted, the following rules apply:
    • At the start of the input line, or after a delimiter, an open quote must be present (otherwise, it’s a malformed input line).
    • Following an open quote, all characters become part of the data value, with the following exceptions:
      • A doubled close quote causes one close quote to become part of the data value.
      • A backslash-escaped close quote causes the escape backslash to be discarded and the close quote to become part of the data value.
      • A backslash-escaped backslash causes the escape backslash to be discarded and the second backslash to become part of the data value.
      • An undoubled, unescaped close quote terminates the data value, and must be immediately followed by a delimiter or end-of-line.
  • If values are optionally quoted, the following rules apply:
    • At the start of the input line, or after a delimiter, if an open quote is present, the value is quoted and the rules above for always-quoted values apply.
    • Otherwise, the value is unquoted, and the rules above for unquoted values apply.
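The rules above can be sketched as a small scanner. The following Python is illustrative only: it assumes a single-character delimiter and single-character quote marks, and it treats a delimiter at end-of-line as introducing a final empty field (real TPT resolves the optional trailing delimiter against the job schema):

```python
def parse_delimited(line, delimiter="|", open_q='"', close_q='"',
                    quoted="Optional"):
    """Parse one delimited input line.

    quoted mirrors the QuotedData attribute: 'No', 'Yes', or 'Optional'.
    Raises ValueError on malformed input.
    """
    fields, i, n = [], 0, len(line)
    while True:
        if quoted != "No" and i < n and line[i] == open_q:
            i += 1                              # consume the open quote
            value = []
            while True:
                if i >= n:
                    raise ValueError("unterminated quoted value")
                c = line[i]
                if c == "\\" and i + 1 < n and line[i + 1] in (close_q, "\\"):
                    value.append(line[i + 1])   # backslash-escaped close quote or backslash
                    i += 2
                elif c == close_q:
                    if i + 1 < n and line[i + 1] == close_q:
                        value.append(close_q)   # doubled close quote
                        i += 2
                    else:
                        i += 1                  # unescaped close quote ends the value
                        break
                else:
                    value.append(c)
                    i += 1
            if i < n and line[i] != delimiter:
                raise ValueError("delimiter did not immediately follow close quote")
            fields.append("".join(value))
        else:
            if quoted == "Yes":
                raise ValueError("open quote required")
            j = line.find(delimiter, i)         # unquoted: scan for delimiter or EOL
            j = n if j == -1 else j
            fields.append(line[i:j])
            i = j
        if i >= n:
            return fields
        i += 1                                  # step over the delimiter
```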

Examples of Quoted Data

Some typical five-field input lines with quoted values and using the default open/close quote might look like:

"abc"|"def"|"g|i"|"jkl"|"mno"|

"123"|"456"|"|||"|"pqr"|"xyz"

 

A typical five-field input line with quoted values using “sexed” quotes (“ and ”, for open and close quote, respectively) might look like:

“Smith”,“Jane”,“Dec 25, 1980”,“F”,“PhD”

 

A typical five-field input line with quoted values using distinct open (<#) and close (#>) quotes and comma as the delimiter might look like:

<#Smith#>,<#Jane#>,<#Dec 25, 1980#>,<#F#>,<#PhD#>

 

The previous examples, changed to only quote values when necessary (i.e., only when the value contains the delimiter) might look like:

abc|def|"g|i"|jkl|mno|

123|456|"|||"|pqr|xyz

Smith,Jane,“Dec 25, 1980”,F,PhD

Smith,Jane,<#Dec 25, 1980#>,F,PhD

Empty values

An unquoted value is empty:

  • When a delimiter occurs at the start of the line, then the first field is empty.
  • When two delimiters are adjacent to each other, then the field corresponding to that relative position is empty.
  • When the delimiter following the last value is omitted, and there are no characters between the preceding delimiter and end-of-line, then the last field is empty.

 

In each of these input lines, the first, third, and fifth fields are empty:

|abc||xyz||

|123||456|

 

A quoted value is empty:

  • When the open quote is immediately followed by the close quote.

 

In this input line, the second and fourth fields are empty:

"abc"|""|"ghi"|""|"mno"

 

The user can specify how empty data values are to be handled:

  • They can be assigned NULL (the default, for backward compatibility).
  • They can be set to zero-length VARCHARs.
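That choice amounts to a small post-processing step over the parsed fields. A Python sketch (None standing in for NULL; the flag name here is purely illustrative, not an actual operator attribute):

```python
def finish_empty_values(fields, empty_as_null=True):
    """Apply the user's choice for empty values: map them to NULL
    (modeled as None) or keep them as zero-length strings."""
    if empty_as_null:
        return [None if f == "" else f for f in fields]
    return list(fields)
```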

Escapes

When dealing with quoted values, it may be necessary to include the close quote as part of the data value. Two escape mechanisms are provided for this:

  • Doubling:
    • If the open quote and close quote are the same, and a quote is needed as part of the data value, the quote is repeated (doubled) at the location where a quote is needed in the value. A single occurrence of the quote is included in the data value for each doubled quote.
    • If the open quote and close quote are distinct, and a close quote is needed as part of the data value, the close quote is repeated (doubled) at the location where a close quote is needed in the value. A single occurrence of the close quote is included in the data value for each doubled close quote. Note that no doubling is necessary to include the distinct open quote in the value.
  • Backslash escape:
    • If the open quote and close quote are the same, and a quote is needed as part of the data value, the quote is preceded by backslash (\) at the location where a quote is needed in the value. Only the quote becomes part of the value; the backslash escape is discarded.
    • If the open quote and close quote are distinct, and a close quote is needed as part of the data value, the close quote is preceded by backslash (\) at the location where a close quote is needed in the value. Only the close quote becomes part of the value; the backslash escape is discarded. Note that no backslash escape is necessary to include the distinct open quote in the value.
    • If a backslash is needed as part of the data value, it is doubled. That is, \\ as part of the quoted input data value becomes a single backslash in the resultant value.

 

Neither escape mechanism has any meaning when a data value is unquoted. For quoted data values, the user may use either or both escape mechanisms (note that, regardless of the escape mechanism used, backslashes must be doubled).

Some examples:

"ab\"c"|"\"def"|"ghi\""|

results in three fields with values ab"c, "def, and ghi".

 

"ab""c"|"""def"|"ghi"""|

results in three fields with values ab"c, "def, and ghi".

Discussion
emilwu 34 comments Joined 12/07
19 Mar 2012

Hi, we recently got TPT 14.0 installed and are testing out the quoted delimited file features. However, I am having trouble with the "doubling" escape feature.
The data looks like "good ""job"|20333
TPT always throws the error
Delimiter did not immediately follow close quote mark in row xxxxxxx, col x

I am trying to figure out what flags need to be changed in the DataConnector. So far, no success. Can you please elaborate on how I can enable the "doubling" escape mechanism for the DataConnector?
Here is the DataConnector's definition

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA W_0_s_OTHER_PAYER_1
ATTRIBUTES
(
VARCHAR FileName = 'db2_export.del',
VARCHAR Format = 'DELIMITED',
VARCHAR QuotedData = 'Optional',
VARCHAR TextDelimiter = '|',
VARCHAR OpenQuoteMark = '"',
VARCHAR CloseQuoteMark = '"',
VARCHAR OpenMode = 'Read',
VARCHAR TrimColumns = 'Both',
VARCHAR NullColumns = 'N',
VARCHAR RowErrFileName = 'db2export.del.err'
);

feinholz 76 comments Joined 05/08
19 Mar 2012

I wrote the article prior to the feature being implemented, and I believe it is possible that the "doubling" support did not end up fully functional.

Thus, you will need to "escape" the quote character.

You would need to do:

"good\"job"|20333

assuming you have set up the escape character:

VARCHAR EscapeQuoteDelimiter = '\'

--SteveF

emilwu 34 comments Joined 12/07
21 Mar 2012

Thanks for the prompt reply... unfortunately, the source data is coming out of DB2 extracts, and the DB2 export facility by default "doubles" the delimiter. The utility has no capability to specify an escape character on the source side. Would you mind checking whether the doubling feature is, or will be, in place? I will think about other ways to export the data out of DB2 (a lot of free-text fields which can contain anything; choosing a single-character delimiter is quite a challenge here).

vinaywani 2 comments Joined 11/11
01 Aug 2012

Hi,
I had a similar issue.
Please consider the following data,
"Good""Job"|"Bad""Job",
The data that I wanted to load was,
Good""Job|Bad""Job,
Initially I tried the same method as you used and got the same error, but later I tried it with the following set of attributes and it worked:
ATTRIBUTES
(
FILENAME='/home/vw186001/data/data.dat',
Format = 'DELIMITED',
OpenMode = 'Read',
IndicatorMode = 'N',
PrivateLogName = 'Read',
AcceptExcessColumns = 'Y',
RowErrFileName = 'vw186001.jap_test.err',
TextDelimiter = '|',
TrimChar='"',
TrimColumns='Both'
)

Regards,
Vinay

feinholz 76 comments Joined 05/08
08 Aug 2012

The support of embedded characters that match the closing quote mark was not implemented in the initial "quoted VARTEXT" feature. We are working on that in 14.10. However, when you do this:

"Good""Job"

you will get this:

Good"Job

(if you want to preserve a character that matches the closing quote mark, you have to double it).

This means if you want:

Good""Job

then your data will need to be:

"Good""""Job"

But again, this will not be available until 14.10.

--SteveF

mzs 11 comments Joined 09/10
02 Jul 2013

Hello,
I found this article while searching for the way to specify how TPT would handle zero-length strings.  It says
"The user can specify how empty data values are to be handled:

  • They can be assigned NULL (the default, for backward compatibility).
  • They can be set to zero-length VARCHARs."

I did not find any parameter that would modify TPT behavior - I only get NULLs if I try to load zero-length string.  Please help
Thank you

nish_feb20 1 comment Joined 01/15
16 Jan 2015

Hi,
I have a similar issue.
"Good""Job"|"Bad""Job",
The data that I wanted to load was,
Good""Job|Bad""Job
I want to implement the same in version 13.1; what attribute should I use?
I am using  VARCHAR TrimChar = '"'
                    VARCHAR TrimColumns = 'Both'
but TPT is throwing error on this "line 82: syntax error at "VARCHAR" missing RPAREN_ in Rule: Attribute List Definition
TPT_INFRA: TPT02932: Error: TPT_INFRA: TPT02934: Error: invalid token
"

feinholz 76 comments Joined 05/08
16 Jan 2015

I cannot assist on any syntax errors unless you include the entire script.

--SteveF

vennelakanti00 3 comments Joined 11/11
24 Apr 2015

Hi Steve,
We have data as below.
12345|1234567|"abcdef" Lane|1234|xnybkhhh|2015-04-25 10:00:00
Data is pipe delimited (in the TPT job it is actually hex-10). Some of the character fields contain double quotes like the above, and they are part of the data. We are not enclosing column data in quotes, just pipe delimiting. We are using TTU 14.10. When I try to load this data, it fails with the error below. I tested the same script/data in a different environment with TTU 13.10, and it loads without any issues. Can you please assist on any specific properties to be set at script level to fix this?
DATA_CONNECTOR_PRODUCER: TPT19134 !ERROR! Fatal data error processing file '/inbound/input.txt. Delimited Data Parsing error: Delimiter did not immediately follow close quote mark in row 5937, col 5.
Below is the TPT script.
USING CHARACTER SET UTF8
DEFINE JOB TABLE_TPT
DESCRIPTION 'LOAD DATA FROM FILE TO TABLE'
(
INCLUDE @SCHEM
/*****************************/
/*****************************/
DEFINE OPERATOR DATA_CONNECTOR_PRODUCER
DESCRIPTION 'TERADATA PARALLEL TRANSPORTER DATA CONNECTOR OPERATOR'
TYPE DATACONNECTOR PRODUCER
SCHEMA PP_DB
ATTRIBUTES
(
VARCHAR PrivateLogName  = @PRIVLOG,
VARCHAR DirectoryPath   = @INDIRPATH,
VARCHAR FileName        = @FILENAME,
VARCHAR Format            = 'DELIMITED',
VARCHAR TextDelimiterHex  = '10',
VARCHAR IndicatorMode     = 'N',
VARCHAR OpenMode          = 'Read',
VARCHAR ValidUTF8 = 'UTFX',
VARCHAR ReplacementUTF8Char = '?'
);
DEFINE OPERATOR LOAD_OPERATOR
DESCRIPTION 'TERADATA PARALLEL TRANSPORTER INSERTER OPERATOR'
TYPE LOAD
SCHEMA *
ATTRIBUTES
(
VARCHAR PrivateLogName    = @PRIVLOG,
VARCHAR TargetTable       = @TGTTABLE,
VARCHAR LogTable          = @LOGTABLE,
VARCHAR ErrorTable        = @ERRTABLE,
VARCHAR ErrorTable1       = @ERRTABLE1,
VARCHAR ErrorTable2       = @ERRTABLE2,
VARCHAR DropMacro         = 'Yes',
INTEGER MaxSessions       = 120 ,
INTEGER MinSessions       = 1 ,
VARCHAR TdpId             = @TDPID,
VARCHAR UserName          = @USRNAME,
VARCHAR UserPassword      = @PWD,
INTEGER MaxDecimalDigits  = 38
);
DEFINE OPERATOR DDL_OPERATOR()
DESCRIPTION 'FOR DDL OPERATOR'
TYPE DDL
ATTRIBUTES
(
VARCHAR ARRAY ErrorList   = ['3706','3803','3807'],
VARCHAR DateForm          = 'IntegerDate' ,
VARCHAR TdpId             = @TDPID,
VARCHAR UserName          = @USRNAME,
VARCHAR WorkingDatabase   = @WORKINGDB,
VARCHAR UserPassword      = @PWD
);
STEP ddl_delete_operations
(
APPLY
('DELETE FROM '|| @TGTTABLE || ' ALL;'),
('DROP TABLE '|| @ERRTABLE1 || ';'),
('DROP TABLE '|| @ERRTABLE2 || ';'),
('DROP TABLE '|| @LOGTABLE || ';')
TO OPERATOR ( DDL_OPERATOR() );
);
STEP delimited_file_to_tera
(
APPLY ('INSERT into '||@INSERT)
TO OPERATOR (LOAD_OPERATOR[@inst])
SELECT * FROM OPERATOR (DATA_CONNECTOR_PRODUCER[@inst]);
);
);
 
 
 

feinholz 76 comments Joined 05/08
24 Apr 2015

I will have this looked into. A few notes:
1. please provide the version of TPT you are using.
2. Although not harmful, you are putting in attributes and values that do not pertain to the operator definition. They will be ignored, and thus I am not sure if it will impact results you are expecting. For example: DropMacro and MaxDecimalDigits are not valid for the LOAD operator.
 

--SteveF

feinholz 76 comments Joined 05/08
24 Apr 2015

Why are you specifying the delimiter as a hex value.
Please just use the Delimiter attribute and set it to '|'.

--SteveF

vennelakanti00 3 comments Joined 11/11
24 Apr 2015

Steve,
We have data sets containing pipe as valid data and hence going with hex-10. I will try loading them without MaxDecimalDigits. 
TPT Version: Teradata Parallel Transporter Version 14.10.00.05
Thanks,
Prasanth.
 

feinholz 76 comments Joined 05/08
24 Apr 2015

Ok, this is inconsistent with the presented problem.
You stated, "Data is pipe delimited".
 
If the data is pipe delimited, then you do not need to provide a delimiter character, as that is the default. If you want to specify it anyway, you need to use the Delimiter attribute and set it to '|'.
 
If your data contains the pipe character, then you MUST enclose the field in quotes. And then you will need to use the QuotedData attribute.
 

--SteveF

vennelakanti00 3 comments Joined 11/11
24 Apr 2015

Hi Steve,
To illustrate the problem, I used pipe. Here is how the data actually looks; we are using hex-10 as the delimiter, and that is what we have in the input file for the TPT load.
12345^P1234567^P"abcdef" Lane^P1234^Pxnybkhhh^P2015-04-25 10:00:00
Thanks.

feinholz 76 comments Joined 05/08
24 Apr 2015

Please send me the .out log file to my email:
steven.feinholz@teradata.com
 

--SteveF

manharrishi 5 comments Joined 09/13
09 Jul 2015

Hi Steve,
I am using tdload from ttu14.10 and received the below error:

Delimited Data Parsing error: Delimiter did not immediately follow close quote mark in row 26028874, col 6.

 

Upon checking the line, the data is as below (delimiter for the file is '|')

 

|"lot"most^who=mother|concern
Can this be handled using tdload. We are not using TPT script.
Thanks

feinholz 76 comments Joined 05/08
09 Jul 2015

The error message indicated column 6.
Was the data you provided the actual entire record?
Or did you remove some of the data (with or without quotes, I do not see at least 6 columns).
I am asking to figure out if there is more than one problem going on.
 
But to answer your question, not every feature of TPT is available in Easy Loader (tdload).
And "quoted delimited data" is one of them, and we are working on enhancing that feature and backporting to prior releases.
Thus, there will be an upcoming efix (patch) for TPT 14.10 that will allow the user to have access to all TPT features (usually enabled through operator attributes) through the use of job variables.
 

--SteveF

manharrishi 5 comments Joined 09/13
09 Jul 2015

Thank you Steve. Sorry, I had to remove some portion of the record before posting in the forum. 
I did a couple of tests using TTU 13.10 and TTU 14.10.
With the record as is:  |"lot"most^who=mother|concern
tdload was successful for 13.10 and failed for 14.10
After I corrected the record 
|"lot"most^who=mother|concern
to 
|"lotmost^who=mother"|concern
I could successfully load the file using ttu 14.10
 

feinholz 76 comments Joined 05/08
10 Jul 2015

Ok, so this is a regression.
If you do not mind, please send me the tdload command, the DDL of the target table and an entire row of data (a row that fails) so that I can work on this inhouse.
What specific version of TPT 14.10 are you using?
(Need to check in case we fixed this in a patch.)
 
Thanks!

--SteveF

manharrishi 5 comments Joined 09/13
10 Jul 2015

Exact version is 14.10.00.05.
Table DDL and record that failed are as below:

CREATE MULTISET TABLE SAMPLE_TEST ,NO FALLBACK ,

     NO BEFORE JOURNAL,

     NO AFTER JOURNAL,

     CHECKSUM = DEFAULT,

     DEFAULT MERGEBLOCKRATIO

     (

      PI_ID VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,

      FIELD1 VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,

      FIELD1 VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,

      FIELD1 VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,

      FIELD1 VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,

      FIELD1 VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,

      FIELD1 VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC,

      FIELD1 VARCHAR(1000) CHARACTER SET LATIN NOT CASESPECIFIC

 )

PRIMARY INDEX ( PI_ID );

 

PT123456789|E001234567|01-01-2015|spitting up||"lot"most^who=mother|concern|XYZ
-----
In the mean time, I tried creating a tpt script to load this data. I used the below attributes for the data connector.
DEFINE OPERATOR FILE_READER_OPERATOR

          TYPE   DATACONNECTOR PRODUCER

          SCHEMA Load_File_TD_TEST

     ATTRIBUTES

     (

          VARCHAR FileName            = 'samplefile.txt',

          VARCHAR OpenMode            = 'Read',

          VARCHAR Format              = 'Delimited',

          VARCHAR IndicatorMode       = 'N',

          VARCHAR AcceptExcessColumns = 'Y',

          VARCHAR TextDelimiter       = '|',

          VARCHAR QuotedData       = 'Optional'

     );

 

But still got the failure. Am I missing any attribute/used a wrong one?

manharrishi 5 comments Joined 09/13
10 Jul 2015

Hi Steve,
I changed the attribute list to below, where I changed the character used for OpenQuoteMark and CloseQuoteMark from the default double quotes (") to single quote ('). 
Then the TPT script was successful.

 ATTRIBUTES

     (

          VARCHAR FileName            = 'samplefile.txt',

          VARCHAR OpenMode            = 'Read',

          VARCHAR Format              = 'Delimited',

          VARCHAR IndicatorMode       = 'N',

          VARCHAR AcceptExcessColumns = 'Y',

          VARCHAR TextDelimiter       = '|',

          /*VARCHAR QuotedData       = 'Optional',

          VARCHAR EscapeQuoteDelimiter       = '"',*/

          VARCHAR OpenQuoteMark       = ''',

          VARCHAR CloseQuoteMark  = '''

     );

 

But probably this is more of a work around. Do you have any suggestions.
 

feinholz 76 comments Joined 05/08
10 Jul 2015

In 13.10, we did not support quoted delimited data.
Thus, the field in question:
 
|"lot"most^who=mother|concern|XYZ
 
would have loaded with the quote characters included in the data.
Starting in 14.0, we added the feature and when you enable it, your data has to adhere to the parsing rules.
And the field in question does not follow the rules, which is why the error is generated.
 

--SteveF

DiEgoR 10 comments Joined 08/06
24 May 2016

A direct link to the documentation and a few examples would really help here. Specifically, I was looking for how to export into a format MS Excel would understand, and failed to find it here or in Google. Fortunately somebody else in my company has put the solution on our internal wiki. Not sharing it myself as I am unclear about copyright etc. :-(

input output putput

19 Jul 2016

I am using TPT 15.10 to export data directly to HDFS, but I am having an issue with EscapeQuoteDelimiter and EscapeTextDelimiter, which are both ' \ '. This gives me a problem when the escape character itself is part of the data.
ex:
If Teradata value :- "abc\","sad","def" 
Expected output :- "abc\\","sad","def"
 
This comes out as expected on my dev Teradata and dev cluster, but on my production Teradata and production cluster the same script gives me the output as
"abc\","sad","def"
This is giving me a problem while parsing the data. Is this something related to database settings?
TPT  Version on my Dev

    TDICU................................... 15.10.01.00

     PXICU................................... 15.10.01.00

     PMPROCS................................. 15.10.00.05

     PMRWFMT................................. 15.00.00.02

     PMHADOOP................................ 15.10.01.00

     PMTRCE.................................. 13.00.00.02

     PMMM.................................... 15.10.00.03

     DCUDDI.................................. 15.10.00.12

     PMHEXDMP................................ 15.10.00.02

     PMHDFSDSK............................... 15.10.00.02

     PMUNXDSK................................ 15.10.00.02

 

TPT Version on Prod

 

     TDICU................................... 15.10.00.00

     PXICU................................... 15.10.00.00

     PMPROCS................................. 15.10.00.05

     PMRWFMT................................. 15.00.00.02

     PMHADOOP................................ 15.10.01.00

     PMTRCE.................................. 13.00.00.02

     PMMM.................................... 15.10.00.03

     DCUDDI.................................. 15.10.00.12

     PMHEXDMP................................ 14.10.00.02

     PMHDFSDSK............................... 15.10.00.02

     PMUNXDSK................................ 15.10.00.02

The only big difference is the PMHEXDMP version. What is it, and is that the root cause of the problem I am facing?
 
 
 
 
