rahul_fall2005 1 post Joined 06/10
22 Jun 2010
Improve performance by using functions instead of COALESCE


I am using the COALESCE expression in my view as follows:

COALESCE(date1, CURRENT_DATE) AS dt_1

However, I am dealing with a volume in excess of 500 million rows, and this is proving to be a performance killer because it checks for NULLs on each row. I tried using UNION to separate the NULL rows, which gives better performance.

My question: is there a faster way than COALESCE to check for and replace NULLs?
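For reference, the UNION approach mentioned above might look something like the following sketch. The table and column names (big_table, other_col) are placeholders, not from the original post:

```sql
-- Handle the non-NULL and NULL rows in separate branches so the
-- substitution only touches the (presumably small) NULL partition.
SELECT date1 AS dt_1, other_col
FROM   big_table
WHERE  date1 IS NOT NULL

UNION ALL

SELECT CURRENT_DATE AS dt_1, other_col
FROM   big_table
WHERE  date1 IS NULL;
```

Note that UNION ALL is used rather than UNION: the two branches cannot overlap, and UNION ALL avoids the duplicate-elimination step that a plain UNION would add.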

Adeel Chaudhry 773 posts Joined 04/08
22 Jun 2010


Built-in or standard functions always perform better than user-defined functions .... so I am afraid you have limited options.

Perhaps what can be done is to devise a strategy to handle the rows with NULL values separately. But that entirely depends on the architecture currently deployed.



-- If you are stuck at something .... consider it an opportunity to think anew.

Jim Chapman 449 posts Joined 09/04
23 Jun 2010

The cost of checking for null values using the coalesce function should be minimal compared to the cost of accessing the row. Any reference to a nullable column in your SQL will generate internal code to check for null values, so you incur that cost regardless.

robpaller 159 posts Joined 05/09
02 Jul 2010

Are you qualifying on the result of the COALESCE?

SELECT COUNT(*) FROM view WHERE dt_1 = DATE '1980-01-01'

The trouble with the SQL statement above is that the optimizer may not be able to leverage the histogram on the column date1, and thus may spool the entire 500 million rows before applying the condition supplied to the view.
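One possible workaround, assuming the view sits over a base table exposing date1 directly (base_table is a placeholder name), is to qualify on the underlying column instead of the COALESCE result, so the optimizer can use the statistics on date1:

```sql
-- Logically equivalent to: COALESCE(date1, CURRENT_DATE) = DATE '1980-01-01',
-- but written against the base column so collected statistics can be used.
SELECT COUNT(*)
FROM   base_table
WHERE  date1 = DATE '1980-01-01'
   OR (date1 IS NULL AND CURRENT_DATE = DATE '1980-01-01');
```

Whether this actually changes the plan depends on the view definition and collected statistics, which is why the EXPLAIN output requested below matters.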

In other words, can you provide the context in which you are using the view that has this COALESCE defined? If possible, provide the EXPLAIN output that is generated when querying the view.
