1146 | 06 Nov 2014 @ 12:42 PST | Database | Reply | Transaction aborted by Administrator or operation staff | How about dbc.qrylog?
|
1145 | 05 Nov 2014 @ 11:07 PST | Database | Reply | Backup and restore report | I believe you can get it from Unity Ecosystem Manager, for DSA. Is that not so? Or how about a script?
|
1144 | 05 Nov 2014 @ 10:02 PST | Database | Reply | Summary Statistics | Summary stats are quite fast and generate details like row count, block size, etc. I think they are collected automatically once you generate stats.
Normal stats take more time.
It is suggested to... |
1143 | 05 Nov 2014 @ 11:58 PST | Database | Reply | how to run multiple stored procedures in parallel ? | Can you stuff your other stored procs inside one?
|
1142 | 05 Nov 2014 @ 11:34 PST | Database | Reply | how to run multiple stored procedures in parallel ? | How about a scheduler? Check them in with version and package such that they run in parallel.
|
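The two replies above suggest driving the procedures from a client-side scheduler so they run concurrently. A minimal sketch of that idea, using only the Python standard library; the procedure names and the `call_proc` body are hypothetical stand-ins for real database calls (e.g. `cursor.callproc`):

```python
# Sketch: running several (hypothetical) stored-procedure calls in
# parallel from a client-side scheduler, standard library only.
from concurrent.futures import ThreadPoolExecutor

def call_proc(name):
    # Placeholder for a real database call such as cursor.callproc(name).
    return f"{name} done"

procs = ["proc_a", "proc_b", "proc_c"]  # hypothetical procedure names
with ThreadPoolExecutor(max_workers=len(procs)) as pool:
    # pool.map preserves input order in its results.
    results = list(pool.map(call_proc, procs))
print(results)  # ['proc_a done', 'proc_b done', 'proc_c done']
```

Each worker thread would hold its own database session, since the point is to get separate sessions running the procedures at the same time.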
1141 | 05 Nov 2014 @ 11:26 PST | Database | Reply | Query on running sum(cumulative diff)-not cdiff | Take it in a derived table and then take it outside; maybe, if necessary, add the computation there, or use some sort of CTE.
|
1140 | 05 Nov 2014 @ 10:31 PST | Database | Reply | Cumulative Average where the partition are overlapping (Moving Avg.) | You can try with
select avg(amount) over (order by amount rows 10 preceding) from your_table
You can put your WHERE clause alongside.
Next you can do a UNION ALL for the 2014 data.
Even I am not abl... |
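The `AVG(...) OVER (ORDER BY ... ROWS n PRECEDING)` idea from the reply above can be illustrated with SQLite's window functions, which use essentially the same syntax as Teradata for this frame; the table and data below are made up for the demonstration:

```python
# Illustration of a moving average with a ROWS n PRECEDING frame,
# using SQLite window functions (syntax matches Teradata here).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (amount REAL)")
con.executemany("INSERT INTO sales VALUES (?)", [(10,), (20,), (30,), (40,)])

rows = con.execute(
    """
    SELECT amount,
           -- each row averages itself with up to 2 preceding rows
           AVG(amount) OVER (ORDER BY amount ROWS 2 PRECEDING) AS mov_avg
    FROM sales
    """
).fetchall()
print(rows)  # [(10.0, 10.0), (20.0, 15.0), (30.0, 20.0), (40.0, 30.0)]
```

Note that `ROWS 2 PRECEDING` is shorthand for `ROWS BETWEEN 2 PRECEDING AND CURRENT ROW`, so early rows simply average over fewer values.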
1139 | 05 Nov 2014 @ 08:32 PST | Database | Reply | Cumulative Average where the partition are overlapping (Moving Avg.) | Stahengik,
It can be anything when you say moving average:
It can be something like this example: AVG(x) OVER (ORDER BY y ROWS z PRECEDING) .
You can highlight what you have and what you w... |
1138 | 03 Nov 2014 @ 08:29 PST | Database | Reply | Query on running sum(cumulative diff)-not cdiff | I can see that the discount amount is calculated as the previous discounted row minus a percentage of the discounted amount.
How do you get these rows, or am I missing something?
5 ... |
1137 | 03 Nov 2014 @ 01:37 PST | Database | Reply | Performance Tips for Group by on all columns | Btw, if your varchar field(s) are very long and placed in the sort key of the spool file, performance can suffer. Even COLLECT STATS can take more time for very long varchars.
|
1136 | 03 Nov 2014 @ 01:10 PST | Database | Reply | Error : A column or character expression is larger than the max size. | I love Unix/Linux scripting. For big ones, I would move to Unix scripting; it is handier in certain cases. For automation work I prefer Unix scripting :).
|
1135 | 02 Nov 2014 @ 11:26 PST | Database | Reply | Performance Tips for Group by on all columns | My suggestion is to get back to the designer or modeler and ask why it was made thus. There may be reasons for making it MULTISET with a NUPI. Likewise for the query, you can get in touch... |
1134 | 02 Nov 2014 @ 10:58 PST | Database | Reply | Error : A column or character expression is larger than the max size. | Maybe you can try with C, C++, or Java UDFs and see. I have never crossed beyond 200 while writing UDFs. You can share your experience then.
|
1133 | 02 Nov 2014 @ 12:51 PDT | Database | Reply | Querygrid vs Teradata SQL-H vs Aster SQL-H |
I hope you are familiar with Hadoop, Hive, HBase programming. You will see MR scripts spawned when you fire queries in Hive. With Teradata SQL-H™ you can access Hortonworks Hadoop data s... |
1132 | 31 Oct 2014 @ 10:12 PDT | General | Reply | Converting varchar to timestamp | you can try this:
select cast(regexp_replace('142750261904',
    '([[:digit:]]{2})([[:digit:]]{2})([[:digit:]]{2})([[:digit:]]{6})',
    '\1:\2:\3.\4')
as time format ... |
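The digit-grouping trick in the snippet above (split `hhmmssffffff` into capture groups and rejoin with separators) can be shown end to end with Python's `re` module, using the same sample value from the reply:

```python
# Same idea as the REGEXP_REPLACE above: capture 2+2+2+6 digit groups
# and rejoin them as hh:mm:ss.ffffff.
import re

raw = "142750261904"
formatted = re.sub(
    r"(\d{2})(\d{2})(\d{2})(\d{6})",  # hh, mm, ss, fractional seconds
    r"\1:\2:\3.\4",
    raw,
)
print(formatted)  # 14:27:50.261904
```

The resulting string is then suitable for a cast to a time type with a matching format, as the reply goes on to do.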
1131 | 31 Oct 2014 @ 05:55 PDT | Database | Reply | Error in Merge Statement | Check that your PI/PPI columns match:
http://www.info.teradata.com/htmlpubs/DB_TTU_13_10/index.html#page/General_Reference/B035_1096_109A/Database.05.1885.html |
1130 | 30 Oct 2014 @ 08:02 PDT | Database | Reply | Basic analysis when analyzing a query | Specific to requirements, I don't think so, but maybe someone else can share. We can develop solutions for specific requirements on top of the tools and utilities. I remember for one of the clients, where we ... |
1129 | 30 Oct 2014 @ 10:38 PDT | General | Reply | COLLECT STATS ACCESS | The GRANT STATISTICS privilege was introduced in 13.0. Before 13.0, a user must have had “Index” or “Drop” privileges at the table level to be able to collect statistics.
|
1128 | 30 Oct 2014 @ 10:25 PDT | Training | Reply | Pointers for TD Certification | My suggestion:
Personally, I do not take things for granted. Even things I have at hand, I make sure I grip them, lest they slip out of my hands :). There is nothing permanent in this world exce... |
1127 | 30 Oct 2014 @ 03:05 PDT | Tools | Reply | BTEQ export CSV issue | With the above data, it works perfectly fine for me and shows the records correctly, since the field lengths are small,
even with the simple code you have:
.export report file=$HOME/ab... |
1126 | 30 Oct 2014 @ 01:16 PDT | Tools | Reply | BTEQ export CSV issue | Can you cat it in Unix?
Share your table details and some sample data.
I selected 3 fields and it exports well, without a cast or anything.
You can try with 2 or 3 fields first and see.
|
1125 | 29 Oct 2014 @ 11:33 PDT | Database | Reply | Tablesize in GB | The question is a bit confusing. For example: staging has 75 records, mart has 115 records.
Are the table structures the same? Because sometimes staging cannot be the same as the target.
|
1124 | 29 Oct 2014 @ 09:25 PDT | Tools | Reply | Mload Performance Issue | Did you talk to your DBA about whether he sets priorities? It may be that the data size has increased by leaps and bounds. How about the network? Just my initial thought. |
1123 | 29 Oct 2014 @ 09:18 PDT | General | Reply | Is Teradata good for Operational Online Applications | Aha!!!! why not me too, because I am curious :) |
1122 | 29 Oct 2014 @ 10:33 PDT | Database | Reply | teradata SQL query | You can CREATE TABLE ... AS (SELECT ...) WITH DATA, with a select query something like this:
select id, id1, ..... from your_table
qualify row_number() over(partition by id, id1, ..... order by id, id1, .....) > 1 o... |
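The `ROW_NUMBER() ... > 1` pattern in the reply above isolates the extra copies of each key. A small demonstration using SQLite (which lacks Teradata's QUALIFY, so the window function is filtered through a subquery instead); the table name, columns, and data are invented for the example:

```python
# Sketch of the duplicate-extraction idea: number rows per (id, id1)
# key with ROW_NUMBER(), then keep the rows numbered > 1.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, id1 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 1), (1, 1), (2, 2), (2, 2), (2, 2), (3, 3)])

dupes = con.execute(
    """
    SELECT id, id1 FROM (
        SELECT id, id1,
               ROW_NUMBER() OVER (PARTITION BY id, id1
                                  ORDER BY id, id1) AS rn
        FROM t
    ) WHERE rn > 1  -- Teradata would use QUALIFY rn > 1 instead
    """
).fetchall()
print(dupes)  # the extra copies beyond the first of each (id, id1) key
```

In Teradata the same select, wrapped in `CREATE TABLE dupes AS (...) WITH DATA`, would materialize those duplicate rows into a new table directly.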