
Hi All, I have the query given below. There are nested queries A, B, and C that use the same tables for the joins; only the entities joined differ each time. The problem is that the query is not optimized and does not finish in the database for larger record counts; for a few records it gives the correct result. Can somebody help me here?

I'm working for a data-masking company, and I'm currently developing a Teradata scramble routine.
I have to scramble a table holding anywhere from 10 lakhs to 10 crores of records (roughly 1 million to 100 million). Due to performance issues, I'm planning to split the records into several phases, depending on the total record count, and scramble each phase separately.
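One common way to split a large Teradata table into phases is to derive a stable phase number from each row's primary-index value. The sketch below is only illustrative; `src_table` and `pk_col` are hypothetical names standing in for the real table and its primary-index column:

```sql
-- Illustrative sketch only: src_table and pk_col are hypothetical names.
-- HASHROW/HASHBUCKET give a stable, evenly distributed bucket per PI value,
-- so each row always lands in the same phase across runs.
select t.*,
       hashbucket(hashrow(t.pk_col)) mod 10 as phase_no   -- 10 phases: 0..9
from src_table t
where hashbucket(hashrow(t.pk_col)) mod 10 = 0;           -- process phase 0 first
```

Each phase can then be scrambled and committed independently, which keeps spool and transient-journal usage per statement bounded.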

Select coalesce(t1.col3, p1.col3) as target_Column1,
       coalesce(t2.col3, p2.col3) as target_Column2,
       coalesce(t3.col3, p3.col3) as target_Column3,
       -- columns 4 through 24 omitted in the original post
       coalesce(t25.col3, p25.col3) as target_Column25
from d1.g1_view a,
     d1.g1_metadate b
left outer join d1.val_table t1
    on t1.val_cd = a.some_ind

I am noticing some strange behavior in the Viewpoint explain: data is being inserted into the same spool number multiple times. Taking a few steps of the explain below as an example, we can see that data is put into spool 44455 multiple times. To my understanding, this should not be happening.

Hi all,
My apologies for asking something like this.
I have written a query that goes back exactly 7 days (based on the column CRT_DTTM) and takes count(*) for each hour of each day.
As I put it together with very primitive knowledge from online resources, the query turned out to be bulky (and probably ugly).
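A compact form of such a query is sketched below. The table name `my_table` is a placeholder, and it assumes CRT_DTTM is a TIMESTAMP column:

```sql
-- Sketch only: my_table is a placeholder table name.
select cast(CRT_DTTM as date)       as day_dt,
       extract(hour from CRT_DTTM)  as hour_of_day,
       count(*)                     as row_cnt
from my_table
where CRT_DTTM >= current_date - 7   -- last 7 full days
  and CRT_DTTM <  current_date
group by 1, 2
order by 1, 2;
```

Grouping on the date and the extracted hour avoids one branch (or CASE expression) per hour, which is usually what makes hand-rolled versions of this query bulky.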

I have a set of SQLs that run for a long time and consume a lot of CPU, and all of them spend that time in an aggregation step whose explain reads like the one below. Stats are collected on the obvious joining columns, but I am not sure whether they are sufficient.
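One way to check whether the collected statistics are sufficient is Teradata's HELPSTATS diagnostic, which makes EXPLAIN list the statistics the optimizer would have liked to see. The query below is only a placeholder; substitute one of the slow aggregation SQLs:

```sql
-- Ask the optimizer to report recommended statistics for this session
diagnostic helpstats on for session;

-- Placeholder query: db1.t1, db1.t2, and the columns are hypothetical
explain
select a.col1, count(*)
from   db1.t1 a
join   db1.t2 b
  on   a.id = b.id
group by a.col1;

-- The recommended COLLECT STATISTICS statements appear at the end of the
-- explain output; turn the diagnostic off when done.
diagnostic helpstats not on for session;
```

Collecting only the statistics the explain actually recommends is usually cheaper than guessing at multi-column stats for the aggregation step.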