17 | 15 Aug 2016 @ 04:00 PDT | Tools | Reply | TPT with JDBC to Kafka | thanks Tom.
Does anyone have a working Teradata -> Kafka configuration they can share?
I have Kafka, Zookeeper, & a Schema Registry up and running.
But when I run the JDBC connect... |
16 | 02 Aug 2016 @ 02:13 PDT | Tools | Topic | TPT with JDBC to Kafka | I can get TPT working from the Teradata Utilities on Linux, but I have to write this massive script to specify each column name and data type.
I'm trying to get it working from Kafka's JDB... |
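For the Teradata -> Kafka question above, a minimal sketch of a Kafka Connect JDBC source configuration, assuming the Confluent JDBC source connector and the Teradata JDBC driver are on the Connect worker's classpath; the host, database, table, and credential values are hypothetical placeholders:

```properties
# Hypothetical Kafka Connect JDBC source config for Teradata (standalone worker).
# Assumes terajdbc4.jar is available to the connector.
name=teradata-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:teradata://tdhost/DATABASE=rtl
connection.user=myuser
connection.password=mypassword
# "bulk" re-reads the whole table each poll; use mode=incrementing with
# incrementing.column.name=<id column> for change capture instead.
mode=bulk
table.whitelist=mytable
topic.prefix=teradata-
```

With a Schema Registry running, pairing this with the Avro converters in the worker config avoids hand-writing per-column schemas, which is the pain point the post describes.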
15 | 02 Jun 2016 @ 10:29 PDT | Tools | Topic | Installing tools on Ubuntu | How do I install the tools on Ubuntu? Why can't Teradata just provide this as a single-line apt-get command instead of forcing developers to spend an hour struggling with this?
I... |
14 | 11 Apr 2016 @ 09:08 PDT | Tools | Reply | Exporting CSV from TPT | Thanks Dieter
Here's my log: http://pastebin.com/bXatXwKk
What's interesting is that it says the query itself takes 2-3 seconds, but the time from when it starts to w... |
13 | 09 Apr 2016 @ 07:03 PDT | Tools | Reply | Exporting CSV from TPT | I was able to get it working by changing it to:
dt = ANSIDATE
It runs, but it's very slow (75,000 rows x 2 columns exported per minute). I have an 800 million row table.
I've t... |
12 | 08 Apr 2016 @ 06:09 PDT | Tools | Reply | Exporting CSV from TPT | I just changed the select statement in the example file to my own table, and I get this error:
ubuntu@home:/opt/teradata/client/15.10/tbuild/sample/quickstart$ tbuild -f qstart2.txt -v j... |
11 | 08 Apr 2016 @ 04:08 PDT | Tools | Topic | Exporting CSV from TPT | Hi,
I have installed FastExport but apparently it cannot output CSV files.
So now I'm trying to get hold of Teradata Parallel Transporter, to export a large table (hundreds of millions of ro... |
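The export-to-CSV job the topic above asks about can be sketched as a TPT script. This is an illustrative outline only: the operator names, credentials, and table are hypothetical, and a common gotcha is included — the DataConnector operator's Delimited format expects VARCHAR columns, so the SELECT casts each column:

```sql
DEFINE JOB export_to_csv
(
  /* Delimited output requires VARCHAR columns in the schema. */
  DEFINE SCHEMA csv_schema
  (
    customer_id VARCHAR(20),
    dt          VARCHAR(10)
  );

  DEFINE OPERATOR export_op
  TYPE EXPORT
  SCHEMA csv_schema
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'mytdpid',      /* hypothetical system name */
    VARCHAR UserName     = 'myuser',
    VARCHAR UserPassword = 'mypassword',
    VARCHAR SelectStmt   = 'SELECT CAST(customer_id AS VARCHAR(20)),
                                   CAST(dt AS VARCHAR(10))
                            FROM rtl.mytable;'
  );

  DEFINE OPERATOR file_writer
  TYPE DATACONNECTOR CONSUMER
  SCHEMA csv_schema
  ATTRIBUTES
  (
    VARCHAR FileName      = 'export.csv',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = ','
  );

  APPLY TO OPERATOR (file_writer)
  SELECT * FROM OPERATOR (export_op);
);
```

Run with `tbuild -f jobscript.txt`. The per-column schema is exactly the "massive script" complained about in the later TPT-with-JDBC post; there is no way around declaring it in classic TPT, though TPT Easy Loader (`tdload`) can infer it for simple cases.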
10 | 08 Apr 2016 @ 07:50 PDT | Database | Reply | help optimizing GROUP BY query | thanks Dieter.
Doing it this way will still allow me to filter quickly, say on country, even without a secondary index? How does that work?
It's not unique, as a single customer may hav... |
9 | 07 Apr 2016 @ 09:19 PDT | Database | Reply | help optimizing GROUP BY query | Hi Dieter! Is this a one-man support forum?
The average is 1, the max is 55.
thanks,
imran
|
8 | 06 Apr 2016 @ 08:48 PDT | Database | Reply | help optimizing GROUP BY query | That actually seems to have slowed the query down even further; it now runs in 60 seconds.

This is how I partitioned it:
DATA PRIMARY INDEX (customer_id, dt, country, product)
PARTI... |
7 | 06 Apr 2016 @ 07:32 PDT | Tools | Reply | Teradata Studio and Teradata Analyst Pack | Why does the link on the analyst tools page not work? Why is it not on the downloads page? This is lousy freaking support, and Teradata doesn't seem to care.
|
6 | 06 Apr 2016 @ 07:11 PDT | Database | Reply | help optimizing GROUP BY query | Would it help if I added a second level of partitioning on customer_id, so that each AMP would have all the rows of data it needs for the GROUP BY locally?
thanks
|
5 | 05 Apr 2016 @ 04:32 PDT | Database | Reply | help optimizing GROUP BY query | thanks Dieter,
I've now changed to a non-unique primary index, with a MULTISET table, and partitioning by month on the date range.
Doing the aggregate statistics is pretty fast (~ 3 seconds) ... |
4 | 05 Apr 2016 @ 04:17 PDT | Database | Reply | Partitioning by row and column | Thanks Dieter,
It turns out my DBAs don't want us creating COLUMNAR partitions in Teradata, as they have no primary indexes. I'll end this discussion here.
|
3 | 01 Apr 2016 @ 10:25 PDT | Database | Reply | help optimizing GROUP BY query | thanks Dieter,
I added the secondary index including gender, as I later need to GROUP BY gender and get a COUNT(*). I assumed that would need to be indexed, so I added it to the secondary in... |
2 | 30 Mar 2016 @ 10:26 PDT | Database | Topic | help optimizing GROUP BY query | This is my query:
CREATE TABLE rtl.intermediate AS (
SELECT
customer_id,
MAX(new_to) AS new_to,
MIN(age) AS age,
MIN(gender) AS gender,
MIN(existing) AS existing
FROM rtl.... |
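The truncated CREATE TABLE above can be illustrated with a hypothetical completed form (the source table name and the trailing clauses are assumptions, not recovered from the post); in Teradata, a CTAS needs WITH DATA and benefits from an explicit PRIMARY INDEX on the grouping column:

```sql
CREATE MULTISET TABLE rtl.intermediate AS
(
  SELECT
    customer_id,
    MAX(new_to)   AS new_to,
    MIN(age)      AS age,
    MIN(gender)   AS gender,
    MIN(existing) AS existing
  FROM rtl.base_table          -- source table name is hypothetical
  GROUP BY customer_id
)
WITH DATA
PRIMARY INDEX (customer_id);   -- matches the GROUP BY key: AMP-local aggregation
```

If the source table's primary index is also customer_id, the aggregation runs without row redistribution, which is the usual first lever for this kind of query.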
1 | 30 Mar 2016 @ 04:24 PDT | Database | Topic | Partitioning by row and column | I am trying to create a table with both row and column partitioning to see if it will help speed up my queries.
The original table has 500 million rows of data.
When I try to create a partitioned... |
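A sketch of combined row and column partitioning as asked about in the topic above, with assumed column names and bounds. Note the NO PRIMARY INDEX clause: on the Teradata releases of that era, column-partitioned tables cannot have a primary index, which is the objection raised in the DBA reply later in this thread:

```sql
CREATE MULTISET TABLE rtl.sales_cp
(
  customer_id BIGINT NOT NULL,
  dt          DATE   NOT NULL,
  country     CHAR(2),
  product     INTEGER
)
NO PRIMARY INDEX               -- required for COLUMN partitioning here
PARTITION BY (
  COLUMN,                      -- column partitioning: each column stored separately
  RANGE_N (dt BETWEEN DATE '2015-01-01' AND DATE '2016-12-31'
           EACH INTERVAL '1' MONTH)   -- row partitioning by month
);
```

Queries touching few columns over a narrow date range read far less data, but point lookups by customer_id lose the primary-index access path.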