# | Date | Forum | Type | Thread | Post
628 | 05 Nov 2009 @ 04:49 PST | Database | Reply | DBQL field definitions
Hi John, check the Data Dictionary manual: HotAmp1IO: Returns the I/O count of the highest I/O utilized AMP in the step. Dieter

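To see the field in context, a minimal sketch of a DBQL query — assuming step-level logging is enabled and that HotAmp1IO lives in DBC.DBQLStepTbl (verify the table and columns against the Data Dictionary manual for your release):

    SELECT QueryID,
           StepLev1Num,
           HotAmp1IO        -- I/O count of the busiest AMP in this step
    FROM   DBC.DBQLStepTbl
    ORDER  BY HotAmp1IO DESC;
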
627 | 05 Nov 2009 @ 04:45 PST | Database | Reply | Loading Date
Regardless of any DATEFORM setting:

    create volatile table test12 ( load_date date )
    on commit preserve rows;

    insert into test12 values ('20091102' (date, format 'yyyymmdd'));

Dieter

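As a side note (not from the original post, but standard Teradata behaviour): an ANSI date literal is DATEFORM-independent as well:

    insert into test12 values (DATE '2009-11-02');
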
626 | 05 Nov 2009 @ 04:33 PST | Database | Reply | What is spool space in teradata? explain with examples.
Short question, short answer: RTFM. Dieter

625 | 05 Nov 2009 @ 04:29 PST | Database | Reply | Adding a interval to datetimestamp from another column
Hi keno, if the duration is less than 864,000,000 seconds:

    start_timestamp + (duration * INTERVAL '0000 00:00:01.000000' DAY TO SECOND)

Dieter

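A minimal sketch of the technique in a full statement, with hypothetical table/column names (event_log, with duration_sec as INTEGER seconds):

    SELECT start_timestamp
           + (duration_sec * INTERVAL '0000 00:00:01.000000' DAY TO SECOND) AS end_timestamp
    FROM   event_log;

Multiplying a one-second interval scales cleanly up to the DAY(4) limit of 9999 days, which is where the 864,000,000-second bound comes from; casting a large INTEGER directly to INTERVAL SECOND would overflow much earlier.
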
624 | 30 Oct 2009 @ 12:55 PDT | Database | Reply | Skew issue

    select a.vproc, needed, available - needed
    from ( select vproc, currentperm as needed
           from dbc.tablesize
           where databasename = 'newdb'
             and tablename = 'tablename1' ) as a
    join ( s...

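The excerpt is cut off at the second derived table; a hedged guess at its shape — free space per AMP from dbc.DiskSpace — which is a sketch of the idea, not necessarily the original answer:

    select a.vproc, needed, available - needed
    from ( select vproc, currentperm as needed
           from dbc.tablesize
           where databasename = 'newdb'
             and tablename = 'tablename1' ) as a
    join ( select vproc,
                  sum(maxperm) - sum(currentperm) as available
           from dbc.diskspace
           group by vproc ) as b
      on a.vproc = b.vproc;
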
623 | 28 Oct 2009 @ 01:46 PDT | Database | Reply | Skew issue
It was answered by Fred: "in particular, the system assigns a default Primary Index: first column alone as NUPI." Just run that HASHAMP query you used to check the distribution of the source ta...

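The usual shape of that distribution check, as a sketch (substitute the real table and its PI column(s)):

    SELECT HASHAMP(HASHBUCKET(HASHROW(pi_column))) AS amp_no,
           COUNT(*) AS row_cnt
    FROM   source_table
    GROUP  BY 1
    ORDER  BY 2 DESC;

A large spread between the highest and lowest row_cnt indicates skew on that NUPI.
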
622 | 27 Oct 2009 @ 10:02 PDT | Database | Reply | Possible values representing 'types' of 'ExceptionValue' column in DBQLogTbl
Where did you get that list from? In V2R6.2/TD12/TD13 it's documented in the Data Dictionary Manual as "0x00000001 - Exception time limit exceeded. 0x00000002 - CPU time (AMP and PE) limit e...

621 | 24 Sep 2009 @ 02:28 PDT | Database | Reply | Invalid session mode for procedure execution
You have to set the session mode to the same mode used when the SP was created. You get that info when you submit a "help procedure xxx attributes;". Then set it in the connection URL usin...

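Spelled out as a sketch (the JDBC TMODE parameter and the BTEQ command are from memory; check your driver and client docs):

    -- find the creation-time session mode:
    HELP PROCEDURE xxx ATTRIBUTES;
    -- then match it, e.g. in a JDBC URL:  jdbc:teradata://system/TMODE=ANSI
    -- or in BTEQ, before logging on:
    .SET SESSION TRANSACTION ANSI
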
620 | 24 Sep 2009 @ 02:21 PDT | Database | Reply | SQL doesn't work - syntax or teradata issue?
Hi Kevin, don't ask me why, but Teradata didn't support that kind of ANSI scalar subquery before TD13. But even if it were supported, it would run quite slowly (probably on any DBMS). Didn't y...

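The usual pre-TD13 workaround, as a generic sketch (the original query isn't shown in the excerpt, so all names here are hypothetical):

    -- instead of:  SELECT t.a, (SELECT MAX(x) FROM s WHERE s.k = t.k) FROM t;
    SELECT t.a, dt.max_x
    FROM   t
    LEFT   JOIN ( SELECT k, MAX(x) AS max_x
                  FROM   s
                  GROUP  BY k ) AS dt
      ON   dt.k = t.k;
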
619 | 20 Sep 2009 @ 09:05 PDT | Database | Reply | Issue with using single table JI
Hi Richard, AFAIK this is a known limitation of JIs: the optimizer uses them only once within a query. Dieter

618 | 20 Sep 2009 @ 09:03 PDT | Database | Reply | convert char timestamp to TD timestamp
It's not the milliseconds, it's the colon which separates them; this must be a period:

    SELECT '14 Sep 2009 10:09:54:300' AS x,
           SUBSTRING(x FROM 1 FOR 20) || '.' || SUBSTRING(x FROM 22) (TIMESTAM...

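The cast is truncated in the excerpt; a hedged completion of the idea (the exact target type and format string are assumptions):

    SELECT '14 Sep 2009 10:09:54:300' AS x,
           SUBSTRING(x FROM 1 FOR 20) || '.' || SUBSTRING(x FROM 22)
           (TIMESTAMP(3), FORMAT 'DDbMMMbYYYYbHH:MI:SS.S(3)');
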
617 | 20 Sep 2009 @ 08:59 PDT | Database | Reply | Stored Procedures and the Data Dictionary
The source code of an SP is not stored anywhere within the dbc tables. If you explain a SHOW PROCEDURE you'll see it's stored as a special row within the SP "table". Or you might get a 5535 error: No ...

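For reference, retrieving the source when it was stored at compile time (names hypothetical):

    SHOW PROCEDURE mydb.myproc;   -- the 5535 case above means the SPL source wasn't stored
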
616 | 20 Sep 2009 @ 08:54 PDT | Database | Reply | Teradata substitute to TRUNCATE command
You should always try to rewrite a row trigger as a statement trigger, which is much faster.

    ALTER TRIGGER foo DISABLED;
    DELETE FROM tab;
    ALTER TRIGGER foo ENABLED;

Dieter

615 | 20 Sep 2009 @ 08:19 PDT | Database | Reply | single login
Please clarify your requirements:
- A single session for a given user regardless of the client PC/application? Throttles in TDWM can provide that.
- Multiple sessions for a given user but only from a singl...

614 | 20 Sep 2009 @ 07:34 PDT | UDA | Reply | Teradata Schema
Hi Ansh, of course you can use 4th NF or BCNF; if you want, you might use 5th/6th/Domain Key/Inclusion Dependency/whatever NF, too. You just have to design your data model according to that :-) Dieter

613 | 20 Sep 2009 @ 07:31 PDT | UDA | Reply | distinct vs group by, insert vs create as
1. As Teradata (before TD13) optimizes GROUP BY and DISTINCT totally differently, there's a rule of thumb: if the number of rows per value is large, GROUP BY is faster, because duplicates are eliminated loc...

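For reference, the two spellings of the same deduplication:

    SELECT DISTINCT col1, col2 FROM tab;             -- redistributes all rows first, then removes duplicates
    SELECT col1, col2 FROM tab GROUP BY col1, col2;  -- removes duplicates locally on each AMP first

Per the rule of thumb above: many duplicates per value favour GROUP BY; nearly-distinct data favours DISTINCT.
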
612 | 20 Sep 2009 @ 07:19 PDT | UDA | Reply | How to check which user had deleted data
Besides the QueryLog, your site might have Access Logging enabled (view dbc.AccessLog). For security-related stuff like that, Access Logging is the preferred solution, so your DBAs should think about ...

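A sketch of such a check (column names from memory — verify dbc.AccessLog's layout against the Data Dictionary manual for your release):

    SELECT LogDate, LogTime, UserName, DatabaseName, TVMName, StatementType
    FROM   dbc.AccessLog
    WHERE  StatementType = 'Delete'
    ORDER  BY LogDate, LogTime;
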
611 | 20 Sep 2009 @ 04:00 PDT | Database | Reply | ERROR IN AMP
You lost an AMP. If it's a demo version, you might check the Windows Event Log for Teradata-related errors. The easiest fix would be a re-install, because you already lost some data. If it's not the d...

610 | 20 Sep 2009 @ 03:56 PDT | Database | Reply | Tricky grouping question
Hi Kevin, this is a common problem: creating distinct groups for the same value based on a sequenced data set. And there's a common solution for it: -- untested, because there were no DDL ...

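The common solution typically looks like this — a sketch with hypothetical names, since the thread's DDL isn't in the excerpt: compare each row with its predecessor, then run a cumulative sum over the change flags so each new run of equal values starts a new group:

    SELECT val, seq,
           SUM(CASE WHEN val = prev_val THEN 0 ELSE 1 END)
             OVER (ORDER BY seq ROWS UNBOUNDED PRECEDING) AS grp
    FROM ( SELECT val, seq,
                  MIN(val) OVER (ORDER BY seq
                                 ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS prev_val
           FROM   tab ) AS dt;
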
609 | 31 Aug 2009 @ 06:50 PDT | Database | Reply | Subquery Issue
The condition "DDDDD = 1" is in different places:
- q1: in the outer block
- q2: in the subquery
This might be the reason for different plans (and might return different answer sets, too). The t...

608 | 31 Aug 2009 @ 06:33 PDT | Database | Reply | Determining from the system catalog views whether a Table is GLOBAL TEMPORARY or GLOBAL TEMPORARY TRACE
Hi Niels, it's the column CommitOpt; if it's 'D' or 'P' then it's a GTT (ON COMMIT DELETE/PRESERVE ROWS). You're in luck: before TD12 it was not available in dbc.tables, only in dbc.tvm. Dieter

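As a sketch (TD12+, where CommitOpt is exposed in dbc.Tables):

    SELECT DatabaseName, TableName, CommitOpt
    FROM   dbc.Tables
    WHERE  TableKind = 'T'
      AND  CommitOpt IN ('D', 'P');  -- 'D' = ON COMMIT DELETE ROWS, 'P' = PRESERVE ROWS
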
607 | 12 Aug 2009 @ 10:33 PDT | Database | Reply | Deadlocks on Join Indexes - How/What/Why?
It's one insert into the base table plus three inserts into the JIs, so if it's a single-row insert, it's a RowHash lock on the target table plus three RowHash locks on the JIs. For RowHash locks there...

606 | 12 Aug 2009 @ 10:20 PDT | Database | Reply | IDENTITY COLUMN - 5753 error message
Hi Pierre, identity values are assigned by PEs or AMPs based on the type of query:
- INSERT VALUES: by the session's PE
- INSERT SELECT: by the source AMP
- MLoad/FastLoad: by the AMP which re...

605 | 04 Aug 2009 @ 05:38 PDT | Database | Reply | Ignoring duplicated rows and keep inserting.
Hi Anderson, in SQL Assistant just uncheck Tools -> Options -> Query -> "Stop query execution if an SQL error occurs". Dieter

604 | 04 Aug 2009 @ 05:06 PDT | UDA | Reply | SQL - Combine Two rows in to One
Then how do you distinguish between those rows? Looks like removing the duplicates within the derived tables using DISTINCT or GROUP BY might work. Dieter

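A sketch of that suggestion with hypothetical names, deduplicating inside the derived tables before the join:

    SELECT a.key_col, a.val1, b.val2
    FROM   ( SELECT DISTINCT key_col, val1 FROM t1 ) AS a
    JOIN   ( SELECT DISTINCT key_col, val2 FROM t2 ) AS b
      ON   a.key_col = b.key_col;
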
