
Unicode is a core technology for developing and implementing a universal, international language solution. The Unicode Tool Kit has been developed for Teradata customers who are migrating from the Latin server character set to Unicode and building a global data warehouse on the universal Unicode character set.



I need to export a few Chinese and Japanese characters stored in Teradata to a file.


I have tried different options to export data through TPT, but I am either not able to see the Unicode characters in the generated file or I face a conflicting data length error.
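For reference, a common pattern when exporting Unicode data with TPT is to declare the client session character set at the top of the job script and to size the schema in bytes rather than characters; in a UTF8 session a UNICODE VARCHAR(n) column generally needs n*3 bytes in the schema. A sketch only — job, schema, and column names are hypothetical, and the rest of the job is omitted:

```sql
/* Hypothetical TPT job fragment: session charset declared before the job. */
USING CHARACTER SET UTF8
DEFINE JOB export_unicode
(
  DEFINE SCHEMA src_schema
  (
    /* The column is VARCHAR(100) CHARACTER SET UNICODE on the server;
       in a UTF8 session the schema length is in bytes, hence 100 * 3. */
    cust_name VARCHAR(300)
  );
  /* ... operators and APPLY step omitted ... */
);
```

Undersizing the schema relative to this byte count is a frequent cause of conflicting-data-length errors.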


Hi All,
I would like to ask for your help in explaining the difference between columns whose character set is LATIN and those set to UNICODE.
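As a quick illustration of the distinction (a sketch with a hypothetical table): a LATIN column holds only characters from the Latin repertoire, one byte per character, while a UNICODE column can hold any Unicode character and is stored in a two-byte-per-character internal form.

```sql
-- Hypothetical table contrasting the two server character sets.
CREATE TABLE demo_charsets (
  id INTEGER,
  name_latin   VARCHAR(50) CHARACTER SET LATIN,    -- 1 byte/char, Latin repertoire only
  name_unicode VARCHAR(50) CHARACTER SET UNICODE   -- 2 bytes/char internally, full Unicode
);

-- This succeeds only for the UNICODE column; inserting the same value
-- into the LATIN column would raise error 6706 (untranslatable character).
INSERT INTO demo_charsets (id, name_unicode) VALUES (1, '漢');
```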

One analyst here can't see Unicode characters using a statistics package called Stata, which can only connect to TD using unixODBC.

I've been playing with the Teradata ODBC Driver for Linux x64, version 15.10. I installed it and was able to connect successfully via the tdxodbc64 tool. But unfortunately, I wasn't able to connect via a very simple Mono application (Mono runtime 3.12.1).

I'm having an issue with an 'untranslatable character' error. I've read numerous dev forum posts on the topic, reviewed TD documentation, and still can't find a solution.

Teradata 13.10 is rejecting my insert with the error:
 The string contains an untranslatable character. (6706)
Here is my INSERT statement:
Insert Into CCDW_T.UTest (ucol, i) SELECT 'ɂ',578;
This is my CREATE Table:

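The posted CREATE TABLE is truncated, but error 6706 typically means the target column is CHARACTER SET LATIN and 'ɂ' (U+0242) has no Latin equivalent. A minimal reconstruction, assuming ucol was declared LATIN; note the client session character set (e.g. UTF8) must also be able to carry the character, or the same error can occur on the way in:

```sql
-- Fails with 6706 if ucol is CHARACTER SET LATIN, e.g.:
-- CREATE TABLE CCDW_T.UTest (ucol VARCHAR(10) CHARACTER SET LATIN, i INTEGER);

-- Declaring the column UNICODE lets the insert succeed
-- (hypothetical re-creation of the table):
CREATE TABLE CCDW_T.UTest (
  ucol VARCHAR(10) CHARACTER SET UNICODE,
  i INTEGER
);

INSERT INTO CCDW_T.UTest (ucol, i) SELECT 'ɂ', 578;
```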
I'm using VBA and the Microsoft ActiveX Data Objects 2.6 Library to access our Teradata server using Teradata ODBC Driver

We have some old tables which have Unicode data (Japanese, Chinese, Korean, etc.) but with the table definition as LATIN for all columns, which shows up in the BO report as junk/unreadable characters. Now we are trying to add new columns with the character set as UNICODE using ALTER statements. The approach we are following is
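That add-and-copy approach can be sketched as follows, assuming the bytes in the LATIN column are valid Latin that merely need redeclaration (table and column names are hypothetical; if raw multi-byte data was forced into the LATIN column through a pass-through character set, a reload from the source may be needed instead):

```sql
-- Add a parallel UNICODE column.
ALTER TABLE old_tbl ADD descr_u VARCHAR(200) CHARACTER SET UNICODE;

-- Copy the data across; WITH ERROR substitutes an error character
-- for anything that cannot be translated instead of aborting the UPDATE.
UPDATE old_tbl
SET descr_u = TRANSLATE(descr_latin USING LATIN_TO_UNICODE WITH ERROR);
```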

Hi Experts,
As per one of our requirements, we are making one column CHARACTER SET UNICODE CASESPECIFIC.
But as we know, Unicode has two common encodings, UTF-8 and UTF-16.
Can we somehow know which one (UTF-8 or UTF-16) would be used after making the column UNICODE?
Thanks in advance!
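For what it's worth, UTF-8 and UTF-16 are client session character sets, not column properties; a column declared CHARACTER SET UNICODE is stored internally in a fixed two-byte (UTF-16-style) form regardless of how clients connect. One way to peek at the internal form (a sketch; the table name is hypothetical):

```sql
-- CHAR2HEXINT shows the internal representation; for a UNICODE
-- column each character appears as a 16-bit code unit,
-- e.g. 'A' shows as 0041.
SELECT CHAR2HEXINT(ucol) FROM utest_demo;
```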

The character is either
'->' (a single character)
or a '?' surrounded by a black diamond.

Can anyone let me know how to run a Teradata SQL query and output the data in UTF-16 format? I am currently using Teradata Studio Express. I am wondering whether the default output is UTF-16 or whether any translation is needed.

I have a table with a column where I can find special character sequences like &Altide; and &Abc;. The total list of special characters is 110; I identified them using the logic that anything between '&' and ';' is a special character, and finally cross-checked manually. Now I need to find and replace them in my table with a space (' ').
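If the system is on Teradata 14.0 or later, where REGEXP_REPLACE is available, the '&…;' pattern can be handled in one statement. A sketch only — table and column names are hypothetical, and the pattern should be validated against the 110 known sequences first:

```sql
-- Replace every '&' ... ';' run with a single space.
-- [^&;]+ keeps a match from spanning across a stray '&' or ';'.
-- Position 1, occurrence 0 = replace all occurrences.
UPDATE my_tbl
SET txt_col = REGEXP_REPLACE(txt_col, '&[^&;]+;', ' ', 1, 0);
```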

I have created a table as follows:

My question is about the output of a UNICODE column in BTEQ.

If I select a column defined like this...
.. it is not 11 characters wide, but 33 characters wide
(three times the declared VARCHAR length).

The bteq session character set is UTF8: .set session charset 'UTF8';
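The tripling is expected: in a UTF8 session BTEQ sizes report columns in bytes, and a single Unicode character can occupy up to three bytes in UTF-8 (for the Basic Multilingual Plane), so a VARCHAR(11) UNICODE column is given a 33-byte field. A hedged illustration with a hypothetical table:

```sql
.set session charset 'UTF8';

-- A VARCHAR(11) CHARACTER SET UNICODE column prints in a 33-byte
-- report field: 11 characters * 3 bytes per UTF-8 character.
SELECT ucol FROM utest_demo;
```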

The Unicode™ standard defines five encodings (the first three are currently supported by Teradata):

Hi, I want to use TPT in one of my projects, and I am facing the following issue:

(I have read http://developer.teradata.com/tools/articles/teradata-parallel-transporter-unicode-usage#comment-17552 and am doing the things mentioned in that article, but something is missing; here are the details.)

TPump macrocharset support

TPump now forces CHARSET internally when building its macros! This feature is new starting in TPump release.

Hi people... I am sorry if I am posting this in the wrong forum, but please bear with me.
I have a requirement where I need to find out whether a particular column in a table is Unicode or not; any pointers would be of great help. A query, if possible, is preferable.

Thanks in advance!!
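One commonly used pointer: the data dictionary records each character column's server character set in DBC.ColumnsV (DBC.Columns on older releases) as a CharType code. A sketch, with hypothetical database and table names:

```sql
-- CharType: 1 = LATIN, 2 = UNICODE
-- (other codes exist for KANJISJIS, GRAPHIC, etc.).
SELECT ColumnName, CharType
FROM DBC.ColumnsV
WHERE DatabaseName = 'mydb'
  AND TableName = 'mytbl'
  AND CharType = 2;  -- Unicode columns only
```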

Workstation BTEQ recently added Unicode Support to its list of capabilities.  This article will explain to you how to start a Unicode BTEQ session in both interactive and batch mode. Command line options have been provided to give you control and flexibility to execute BTEQ in various Unicode environments, while preserving BTEQ’s legacy behavior.

I'm writing an application which, among other things, inserts Unicode strings into Teradata. I don't know how I can see the inserted data, to check whether it was inserted properly. SQL Assistant doesn't support Unicode; I tried the BTEQ command EXPORT DATA FILE=..., but it wrote only strange symbols, not Unicode, to the file...
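One likely explanation: EXPORT DATA writes record-mode (binary) output, which looks like strange symbols in a text editor. For human-readable checking, a report-mode export in a UTF8 session is the usual approach. A sketch — system, logon, file, and table names are hypothetical:

```sql
.set session charset 'UTF8';
.logon mysystem/myuser,mypassword

.export report file=unicode_check.txt
SELECT ucol FROM mydb.utest_demo;
.export reset

.logoff
```

Opening unicode_check.txt in a UTF-8-aware editor should then show the characters as text rather than raw record bytes.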

This article describes usage tips on how to load/unload Unicode data with the UTF8 and UTF16 Teradata client session character sets using Teradata Parallel Transporter (TPT).

As of this writing, Teradata Parallel Transporter supports Unicode only on network-attached platforms.