# | Date | Forum | Type | Thread | Post

#17 | 15 Aug 2016 @ 04:00 PDT | Tools | Reply | TPT with JDBC to Kafka
    Thanks Tom. Does anyone have a working Teradata -> Kafka configuration they can share? I have Kafka, Zookeeper, & a Schema Registry up and running. But when I run the JDBC connect...

#16 | 02 Aug 2016 @ 02:13 PDT | Tools | Topic | TPT with JDBC to Kafka
    I can get TPT working from the Teradata Utilities on Linux, but I have to write this massive script to specify each column name and data type. I'm trying to get it working from Kafka's JDB...

#15 | 02 Jun 2016 @ 10:29 PDT | Tools | Topic | Installing tools on Ubuntu
    How do I install the tools on Ubuntu? Why can't Teradata just provide this as a single-line apt-get command instead of forcing developers to spend an hour struggling with this? I...

#14 | 11 Apr 2016 @ 09:08 PDT | Tools | Reply | Exporting CSV from TPT
    Thanks Dieter. Here's my log: http://pastebin.com/bXatXwKk What's interesting is that it says the query itself takes 2-3 seconds, but the time from when it starts to w...

#13 | 09 Apr 2016 @ 07:03 PDT | Tools | Reply | Exporting CSV from TPT
    I was able to get it working by changing it to: dt = ANSIDATE. It runs, but it's very slow (75,000 rows x 2 columns imported/minute). I have an 800 million row table. I've t...

#12 | 08 Apr 2016 @ 06:09 PDT | Tools | Reply | Exporting CSV from TPT
    I just changed the select statement in the example file to my own table, and I get this error: ubuntu@home:/opt/teradata/client/15.10/tbuild/sample/quickstart$ tbuild -f qstart2.txt -v j...

#11 | 08 Apr 2016 @ 04:08 PDT | Tools | Topic | Exporting CSV from TPT
    Hi, I have installed FastExport but apparently it cannot output CSV files. So now I'm trying to get ahold of Teradata Parallel Transporter, to export a large table (hundreds of millions of ro...

#10 | 08 Apr 2016 @ 07:50 PDT | Database | Reply | help optimizing GROUP BY query
    Thanks Dieter. Doing it this way will still allow me to filter quickly, say on country, even without a secondary index? How does that work? It's not unique, as a single customer may hav...

#9 | 07 Apr 2016 @ 09:19 PDT | Database | Reply | help optimizing GROUP BY query
    Hi Dieter! Is this a one-man support forum? The average is 1, the max is 55. Thanks, imran

#8 | 06 Apr 2016 @ 08:48 PDT | Database | Reply | help optimizing GROUP BY query
    That actually seems to have slowed down the query even further; it now runs in 60 seconds. This is how I partitioned it: DATA PRIMARY INDEX (customer_id, dt, country, product) PARTI...

#7 | 06 Apr 2016 @ 07:32 PDT | Tools | Reply | Teradata Studio and Teradata Analyst Pack
    Why does the link on the analyst tools page not work? Why is it not on the downloads page? This is lousy freaking support, and Teradata doesn't seem to care.

#6 | 06 Apr 2016 @ 07:11 PDT | Database | Reply | help optimizing GROUP BY query
    Would it help if I added a second level of partitioning on customer_id, so that each AMP would have all the rows of data it needs for the GROUP BY locally? Thanks

#5 | 05 Apr 2016 @ 04:32 PDT | Database | Reply | help optimizing GROUP BY query
    Thanks Dieter, I've now changed to a non-unique primary index, with a MULTISET table, and partitioning by month on the date range. Doing the aggregate statistics is pretty fast (~3 seconds)...

#4 | 05 Apr 2016 @ 04:17 PDT | Database | Reply | Partitioning by row and column
    Thanks Dieter. It turns out my DBAs don't want us creating COLUMNAR partitions in Teradata, as they have no primary indexes. I'll end this discussion here.

#3 | 01 Apr 2016 @ 10:25 PDT | Database | Reply | help optimizing GROUP BY query
    Thanks Dieter, I added the secondary index including gender, as I later need to GROUP BY gender and get a COUNT(*). I assumed that would need to be indexed, so I added it to the secondary in...

#2 | 30 Mar 2016 @ 10:26 PDT | Database | Topic | help optimizing GROUP BY query
    This is my query: CREATE TABLE rtl.intermediate AS ( SELECT customer_id, MAX(new_to) AS new_to, MIN(age) AS age, MIN(gender) AS gender, MIN(existing) AS existing FROM rtl....

#1 | 30 Mar 2016 @ 04:24 PDT | Database | Topic | Partitioning by row and column
    I am trying to create a table with both row and column partitioning to see if it will help speed up my queries. The original table has 500 million rows of data. When I try to create a partitioned...
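Several of the posts above (#5, #8) mention switching to a MULTISET table with a non-unique primary index (NUPI) and month-level row partitioning on the date column. The original DDL is truncated in the excerpts; the following is a hedged sketch of what such a table might look like in Teradata SQL. The table name, column types, and date range are assumptions for illustration, not the poster's actual definitions.

```sql
-- Hypothetical reconstruction: MULTISET table with a NUPI on customer_id
-- and row partitioning by month via RANGE_N on the date column.
-- All names and the date range are illustrative assumptions.
CREATE MULTISET TABLE rtl.example_fact
(
  customer_id INTEGER NOT NULL,
  dt          DATE FORMAT 'YYYY-MM-DD' NOT NULL,
  country     VARCHAR(2),
  product     VARCHAR(50)
)
PRIMARY INDEX (customer_id)                 -- non-unique: a customer may have many rows
PARTITION BY RANGE_N (
  dt BETWEEN DATE '2015-01-01' AND DATE '2016-12-31'
     EACH INTERVAL '1' MONTH                -- one partition per month
);
```

With a NUPI on customer_id, all rows for a given customer hash to the same AMP, so a GROUP BY customer_id can aggregate locally without redistribution; the month partitions let date-restricted scans skip irrelevant partitions.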