# | Date | Forum | Type | Thread | Post
1146 | 06 Nov 2014 @ 12:42 PST | Database | Reply | Transaction aborted by Administrator or operation staff | How about DBC.QryLog?
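If query logging is enabled, the aborted request should show up in DBQL. A minimal sketch of that lookup, with an assumed one-day lookback and column names as in the common DBC.QryLogV layout (availability varies by release and logging options):

    SELECT UserName,
           StartTime,
           ErrorCode,
           QueryText
    FROM   DBC.QryLogV
    WHERE  ErrorCode <> 0                                    -- failed or aborted requests only
      AND  StartTime > CURRENT_TIMESTAMP - INTERVAL '1' DAY  -- assumed lookback window
    ORDER  BY StartTime DESC;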
1145 | 05 Nov 2014 @ 11:07 PST | Database | Reply | Backup and restore report | I hope you can get that from Unity Ecosystem Manager for DSA, is that not so? Or how about a script?
1144 | 05 Nov 2014 @ 10:02 PST | Database | Reply | Summary Statistics | Summary stats are quite fast and generate table-level details like row count, block size, etc. I think they are collected automatically once you collect regular stats. Normal stats take more time. It is suggested to...
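For reference, the statement itself is short (Teradata 14.0+ syntax; the table name here is hypothetical):

    -- Collects table-level demographics (row count, average block/row size, etc.)
    -- quickly, without scanning column value distributions.
    COLLECT SUMMARY STATISTICS ON sales_db.daily_sales;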
1143 | 05 Nov 2014 @ 11:58 PST | Database | Reply | how to run multiple stored procedures in parallel ? | Can you stuff your other stored procs inside one?
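A minimal sketch of that wrapper idea, with hypothetical procedure names; note the inner CALLs still execute one after another, so this consolidates the procs into a single call rather than truly parallelizing them:

    REPLACE PROCEDURE run_all_loads()
    BEGIN
      CALL load_customers();  -- hypothetical inner procedures,
      CALL load_orders();     -- executed sequentially in this session
      CALL load_balances();
    END;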
1142 | 05 Nov 2014 @ 11:34 PST | Database | Reply | how to run multiple stored procedures in parallel ? | How about a scheduler? Check the jobs in with versioning and package them such that they are run in parallel.
1141 | 05 Nov 2014 @ 11:26 PST | Database | Reply | Query on running sum(cumulative diff)-not cdiff | Take it in a derived table and then take it outside; if necessary, add further computation or some sort of CTE.
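A rough sketch of the derived-table approach, under assumed table and column names; the MIN-over-one-preceding-row window stands in for LAG, which Teradata did not yet support in 2014:

    SELECT txn_id,
           txn_date,
           -- outer query: cumulative sum of the per-row differences
           SUM(diff_amt) OVER (ORDER BY txn_date ROWS UNBOUNDED PRECEDING) AS running_diff
    FROM (
        SELECT txn_id,
               txn_date,
               -- inner derived table: current amount minus the previous row's amount
               amount - COALESCE(MIN(amount) OVER (ORDER BY txn_date
                                                   ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING),
                                 amount) AS diff_amt
        FROM   sales_txn
    ) dt;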
1140 | 05 Nov 2014 @ 10:31 PST | Database | Reply | Cumulative Average where the partition are overlapping (Moving Avg.) | You can try AVG(amount) OVER (ORDER BY amount ROWS 10 PRECEDING) FROM your_table. You can put your WHERE clause alongside. Next you can do a UNION ALL for the 2014 data. Even I am not abl...
1139 | 05 Nov 2014 @ 08:32 PST | Database | Reply | Cumulative Average where the partition are overlapping (Moving Avg.) | Stahengik, it can be anything when you say moving average. It can be something like this example: AVG(x) OVER (ORDER BY y ROWS z PRECEDING). You can highlight what you have and what you w...
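A self-contained sketch of the window both replies above describe, with a hypothetical table, columns, and window width:

    -- Moving average over the current row plus the 10 preceding rows.
    SELECT txn_date,
           amount,
           AVG(amount) OVER (ORDER BY txn_date ROWS 10 PRECEDING) AS moving_avg
    FROM   daily_sales
    WHERE  txn_date >= DATE '2014-01-01';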
1138 | 03 Nov 2014 @ 08:29 PST | Database | Reply | Query on running sum(cumulative diff)-not cdiff | I can see that the discount amount is calculated as the previous discounted row minus a percentage of the discounted amount. How do you get these rows, or am I missing something? 5 ...
1137 | 03 Nov 2014 @ 01:37 PST | Database | Reply | Performance Tips for Group by on all columns | Btw, if your VARCHAR field(s) are very long and end up in the sort key of the spool file, they can hurt performance. Even COLLECT STATS can take more time on very long VARCHARs.
1136 | 03 Nov 2014 @ 01:10 PST | Database | Reply | Error : A column or character expression is larger than the max size. | I love Unix/Linux scripting. For big ones, I would move to Unix scripting; it is handier in certain cases. For automation work, I prefer Unix scripting :).
1135 | 02 Nov 2014 @ 11:26 PST | Database | Reply | Performance Tips for Group by on all columns | My suggestion is to go back to the designer or modeler and ask why it was made this way. There may be reasons for making it MULTISET with a NUPI. Likewise for the query, you can get in touch...
1134 | 02 Nov 2014 @ 10:58 PST | Database | Reply | Error : A column or character expression is larger than the max size. | Maybe you can try with C, C++, or Java UDFs and see. I have never crossed beyond 200 while writing UDFs. You can share your experience then.
1133 | 02 Nov 2014 @ 12:51 PDT | Database | Reply | Querygrid vs Teradata SQL-H vs Aster SQL-H | I hope you are familiar with Hadoop, Hive, HBase, etc. You will see MapReduce jobs spawned when you fire queries in Hive. With Teradata SQL-H™ you can access Hortonworks Hadoop data s...
1132 | 31 Oct 2014 @ 10:12 PDT | General | Reply | Converting varchar to timestamp | You can try this: SELECT CAST(REGEXP_REPLACE('142750261904', '([[:digit:]]{2})([[:digit:]]{2})([[:digit:]]{2})([[:digit:]]{6})', '\1:\2:\3.\4') AS TIME FORMAT ...
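The listing cuts the snippet off at the FORMAT clause. A complete, hedged version of the same idea, assuming the input string is HHMMSSFFFFFF and the target type is TIME(6):

    -- '142750261904' -> 14:27:50.261904
    SELECT CAST(REGEXP_REPLACE('142750261904',
                               '([[:digit:]]{2})([[:digit:]]{2})([[:digit:]]{2})([[:digit:]]{6})',
                               '\1:\2:\3.\4')
           AS TIME(6));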
1131 | 31 Oct 2014 @ 05:55 PDT | Database | Reply | Error in Merge Statement | Check that your PI/PPI columns match: http://www.info.teradata.com/htmlpubs/DB_TTU_13_10/index.html#page/General_Reference/B035_1096_109A/Database.05.1885.html
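For context, the usual cause is an ON clause that does not cover the target's primary index (and partitioning column, if any). A hedged sketch with hypothetical tables and columns:

    MERGE INTO tgt_sales t
    USING src_sales s
      ON  t.sale_id = s.sale_id      -- must include tgt_sales' PI column
      AND t.sale_date = s.sale_date  -- and its PPI column, if partitioned
    WHEN MATCHED THEN
      UPDATE SET amount = s.amount
    WHEN NOT MATCHED THEN
      INSERT (sale_id, sale_date, amount)
      VALUES (s.sale_id, s.sale_date, s.amount);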
1130 | 30 Oct 2014 @ 08:02 PDT | Database | Reply | Basic analysis when analyzing a query | Nothing specific to requirements that I know of, though maybe someone can share. We can develop solutions for specific requirements on top of the tools and utilities. I remember for one of the clients, where we ...
1129 | 30 Oct 2014 @ 10:38 PDT | General | Reply | COLLECT STATS ACCESS | The STATISTICS privilege was introduced in 13.0. Before 13.0, a user must have had “Index” or “Drop” privileges at the table level to be able to collect statistics.
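A one-line sketch of that grant, with hypothetical database and user names:

    -- 13.0+: lets etl_user collect/drop stats without INDEX or DROP TABLE rights
    GRANT STATISTICS ON sales_db TO etl_user;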
1128 | 30 Oct 2014 @ 10:25 PDT | Training | Reply | Pointers for TD Certification | My suggestion: personally, I do not take things for granted. Even things I have at hand, I make sure I grip firmly, lest they slip away :). There is nothing permanent in this world exce...
1127 | 30 Oct 2014 @ 03:05 PDT | Tools | Reply | BTEQ export CSV issue | With the above data it works perfectly fine for me and shows the records correctly, since the field lengths are small, even with the simple code you have: .EXPORT REPORT FILE=$HOME/ab...
1126 | 30 Oct 2014 @ 01:16 PDT | Tools | Reply | BTEQ export CSV issue | Did you cat the file in Unix? Share your table details and some sample data. I selected 3 fields and it exported fine, without CAST or anything. You can try with 2 or 3 fields first and see.
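A minimal BTEQ sketch along the lines of the truncated snippet above, with hypothetical logon, path, and table names; since REPORT mode writes column-aligned text, the commas must be concatenated into the SELECT to get CSV:

    .LOGON tdpid/your_user,your_password
    .EXPORT REPORT FILE=$HOME/ab_export.csv
    SELECT TRIM(cust_id) || ',' || TRIM(cust_name) || ',' || TRIM(city) (TITLE '')
    FROM   sandbox_db.customers;
    .EXPORT RESET
    .LOGOFF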
1125 | 29 Oct 2014 @ 11:33 PDT | Database | Reply | Tablesize in GB | The question is a bit confusing. For example, staging has 75 records and mart has 115 records. Are the table structures the same? Because sometimes staging cannot be the same as the target.
1124 | 29 Oct 2014 @ 09:25 PDT | Tools | Reply | Mload Performance Issue | Did you talk to your DBA about whether priorities are set? It may be that the size has grown by leaps and bounds... How about the network? Just my initial thoughts.
1123 | 29 Oct 2014 @ 09:18 PDT | General | Reply | Is Teradata good for Operational Online Applications | Aha! Why not me too, because I am curious :)
1122 | 29 Oct 2014 @ 10:33 PDT | Database | Reply | teradata SQL query | You can create a table AS ... WITH DATA using a select query, something like this: SELECT id, id1, ... FROM your_table QUALIFY ROW_NUMBER() OVER (PARTITION BY id, id1, ... ORDER BY id, id1, ...) > 1 o...
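A fuller hedged sketch of that pattern with hypothetical columns; QUALIFY ... > 1 keeps only the duplicate rows, whereas = 1 would instead keep one row per key:

    CREATE TABLE dup_rows AS (
      SELECT id, id1, other_col
      FROM   your_table
      QUALIFY ROW_NUMBER() OVER (PARTITION BY id, id1
                                 ORDER BY id, id1) > 1
    ) WITH DATA;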
