Tags for failure
Re: Delete Syntax in MLOAD
Hi, I need to delete rows from a table in an MLoad script when they have no matching row in the input file. I've tried a couple of ways but couldn't succeed. DELETE FROM Employee WHERE EmpNo <> :EmpNo and EmpName <> :Empname; UTY0805 RDBMS failure, 3537: A MultiLoad DELETE Statement is Invalid.
mload delete 3707 3537 failure error
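A MultiLoad DELETE task is quite restricted (it performs a full-table-scan delete driven by at most one input record), so a WHERE clause that compares host variables row by row, as above, is rejected with error 3537. A common workaround is to load the file into a staging table and run an ordinary SQL DELETE afterwards. A minimal sketch, assuming a hypothetical staging table Employee_Stg that has already been loaded from the file:

```sql
-- Sketch, not the MLoad DELETE task itself: remove target rows that have
-- no matching row in the (hypothetical) staging table Employee_Stg.
DELETE FROM Employee
WHERE NOT EXISTS (
    SELECT 1
    FROM Employee_Stg s
    WHERE s.EmpNo   = Employee.EmpNo
      AND s.EmpName = Employee.EmpName
);
```

The staging load can be done with MLoad or FastLoad; the DELETE then runs as a normal SQL request (e.g. from BTEQ), outside MultiLoad's DELETE-task restrictions.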
Duplicate rows in MLoad?
Does MLoad support loading duplicate rows? If yes, how does it handle duplicate rows after a failure (i.e. when restarting the same MLoad job)?
failure mload restart
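For context: unlike FastLoad, MultiLoad can insert duplicate rows, but only into a MULTISET target table; on a SET table duplicates are rejected. After restarting a failed job, it can be worth verifying that no rows were applied twice. A hedged check, assuming a target table Employee with columns EmpNo and EmpName (hypothetical names, adjust to your table):

```sql
-- Sanity check after a restarted load: list rows that now occur more than once
SELECT EmpNo, EmpName, COUNT(*) AS dup_count
FROM Employee
GROUP BY EmpNo, EmpName
HAVING COUNT(*) > 1;
```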
Connections to Teradata
Hi all, I am new to Teradata (spinning up a POC) and have so far not been able to locate the information I am seeking, so I apologize in advance for such basic questions, but I would greatly appreciate some assistance.
general session backup failure recovery connection availability
Consequences of failures of UDF in non-protected Mode
As I mentioned in a previous post, I'm a Teradata newbie (migrating from Oracle) and have just started writing UDFs. I see the following warning in the UDF Programming manual and wanted to understand it in more detail. There are also some additional comments in the same section of the manual about core dumps and where they are stored.
udf failure non-protected core
3710 (Insufficient memory to parse this request during Optimizer phase) on query with multiple full joins
There is a BI-tool-generated SQL query which full-joins 30 smaller SELECTs of the following form:

(select a14.Store_ID Store_ID, sum(a11.CountConnections) WJXBFS1
 from Retail_Views.F_ACM_Connections a11, ....
 where filterset = 1) pa11

(select a14.Store_ID Store_ID, sum(a11.CountConnections) WJXBFS1
 from Retail_Views.F_ACM_Connections a11, ....
 where filterset = 2) pa12

....

into a query like:

select coalesce(pa11.Store_ID, ..., pa130.Store_ID) Store_ID,
       a132.StoreName StoreName,
       max(pa11.WJXBFS1) WJXBFS1,
       ...,
       max(pa130.WJXBFS1) WJXBFS30
from ....
database performance failure
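One common way to get a 30-way full join past the parser's memory limit is to materialize each small aggregate as a volatile table first and then join those, so the single request the Optimizer has to plan is much smaller. A sketch under that assumption, using the table and column names from the excerpt above (the elided joins that bind a14 are kept elided, as in the original):

```sql
-- Materialize each per-filterset aggregate separately instead of
-- full-joining 30 inline derived tables in one request.
CREATE VOLATILE TABLE pa11 AS (
    SELECT a14.Store_ID AS Store_ID,
           SUM(a11.CountConnections) AS WJXBFS1
    FROM Retail_Views.F_ACM_Connections a11
    -- .... joins elided as in the original query ....
    WHERE filterset = 1
    GROUP BY a14.Store_ID
) WITH DATA
PRIMARY INDEX (Store_ID)
ON COMMIT PRESERVE ROWS;
-- .... repeat for pa12 through pa130, then join the volatile tables ....
```

Each CREATE is parsed as its own request, which sidesteps the 3710 during the Optimizer phase; the trade-off is extra spool/temp space for the materialized results.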
TASM issues
Hi, a query on TASM. If we configure a workload group with a maximum CPU utilisation of 60 seconds, we would expect a query to migrate to the next group when it reaches 60 CPU seconds. Why, then, do we see queries running and consuming more than 60 seconds (sometimes up to an hour's worth of CPU time) before migrating? We can see this in Viewpoint; looking in DBQL only shows the starting bands and ending bands.
tasm issues failure
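A likely factor: TASM exception criteria are evaluated at a configurable exception interval rather than continuously, and between two evaluations a highly parallel query can accumulate CPU on every AMP at once, so the total can overshoot a 60-CPU-second threshold considerably before the next check fires. DBQL can show how far queries actually ran past the threshold. A hedged example against the standard DBQL log table (column names as documented for DBC.DBQLogTbl):

```sql
-- Queries whose total AMP CPU exceeded the 60-second threshold
SELECT QueryID,
       UserName,
       AMPCPUTime,      -- total CPU seconds summed across all AMPs
       FirstStepTime,
       FirstRespTime
FROM DBC.DBQLogTbl
WHERE AMPCPUTime > 60
ORDER BY AMPCPUTime DESC;
```

Comparing AMPCPUTime against the workload's threshold, alongside the exception-interval setting in Workload Designer, should indicate whether the overshoot matches the evaluation cadence.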