dwillier 5 posts Joined 03/14
22 Oct 2014
[Performance] how to avoid parsing time when executing macro via JDBC

Hello,
I'm working on an application with strict response-time requirements, connected to Teradata via JDBC. The app executes macros via PreparedStatement in Java.
 
We ran some load tests: we executed the same macro call 500 times with random parameter values via a PreparedStatement.
When analyzing response times in the database log, we see that, for queries with CacheFlag='A' (about 95% of queries processed):
- about 1.5 s is spent on parsing
- about 500 ms is spent executing the query
 
Is there a way to avoid that parsing time, which is killing my application's performance?
 
Thank you in advance for your advice.
 
Simplified code of the PreparedStatement:

         PreparedStatement statement = getPreparedStatement(macro);

         statement.setDate(1, new java.sql.Date(filters.getPeriod().getCurrentStartDate().getTime()));
         statement.setDate(2, new java.sql.Date(filters.getPeriod().getCurrentEndDate().getTime()));
         statement.setDate(3, new java.sql.Date(filters.getPeriod().getPreviousStartDate().getTime()));
         statement.setDate(4, new java.sql.Date(filters.getPeriod().getPreviousEndDate().getTime()));
         statement.setInt(5, filters.getSiteId());
         statement.setInt(6, filters.getRegionId());
         statement.setInt(7, filters.getCompanyId());
         statement.setInt(8, filters.getSaleChannelId());
         statement.setInt(9, filters.getFilterId() != null ? filters.getFilterId() : ApplicationConstants.TOTAL_MAGASIN);

         statement.executeQuery();

private PreparedStatement getPreparedStatement(AbstractMacroContainer macro)
        throws CannotGetJdbcConnectionException, SQLException
{
    PreparedStatement statement = prepareStatements.get(macro.getName());
    if (statement == null) {
        statement = getConnection().prepareStatement(macro.getQuery());
        // cache the statement so subsequent calls reuse it instead of re-preparing
        prepareStatements.put(macro.getName(), statement);
    }
    return statement;
}
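For context, the PreparedStatement call above reaches the database as a parameterized request, roughly the BTEQ-style USING form sketched below; that parameterized request text is what the database's request cache matches on (parameter names here are illustrative, not from the actual app):

    USING (d1 DATE, d2 DATE, d3 DATE, d4 DATE,
           g1 SMALLINT, g2 SMALLINT, g3 SMALLINT, g4 SMALLINT, n1 INTEGER)
    EXEC M_PAF_SALES_DAY_ALLOCATION_ALL_ALL(:d1, :d2, :d3, :d4, :g1, :g2, :g3, :g4, :n1);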

 

 
 

tomnolan 594 posts Joined 01/08
22 Oct 2014

Please post the DDL to create the macro.

dwillier 5 posts Joined 03/14
24 Oct 2014

Hello,
Here's an example of the DDL for one of the macros.
Thank you

Replace Macro FRDM_PAF.M_PAF_SALES_DAY_ALLOCATION_ALL_ALL
	  (  
	  p_StartTime          date format 'YYYY-MM-DD' ,
	  p_EndTime            date format 'YYYY-MM-DD' ,
	  p_StartTimeLastYear  date format 'YYYY-MM-DD' , 
	  p_EndTimeLastYear    date format 'YYYY-MM-DD' , 
	  p_GeoLvl1Id         smallint, 
	  p_GeoLvl2Id         smallint, 
	  p_GeoLvl3Id         smallint, 
	  p_GeoChannelGrpID   smallint, 
	  p_NomValueDrillId   integer
	  ) as (
	 SELECT TOP 5000  D.sid_ALL_SECTOR_C SID_NOM,id_ALL_SECTOR_C ID_NOM, ds_ALL_SECTOR_C as DS_NOM, 
    A_F_NS_VAL_SAL_AMT ,A_F_NS_QTY_UNIT_SOLD ,A_F_NS_VAL_MRG ,A_F_NS_VAL_SAL_AMT_MRG ,t.A_F_NS_QTY_TX , qtx.A_F_NS_QTY_TX as A_F_NS_QTY_TX_ALL, 
    A_F_NS_VAL_SAL_AMT_LY ,A_F_NS_QTY_UNIT_SOLD_LY ,A_F_NS_VAL_MRG_LY ,A_F_NS_VAL_SAL_AMT_MRG_LY ,t.A_F_NS_QTY_TX_LY , qtx.A_F_NS_QTY_TX_LY as A_F_NS_QTY_TX_LY_ALL, 
    A_F_NS_VAL_SAL_AMT_GEO3 ,A_F_NS_QTY_UNIT_SOLD_GEO3 ,A_F_NS_VAL_MRG_GEO3 ,A_F_NS_VAL_SAL_AMT_MRG_GEO3 ,t.A_F_NS_QTY_TX_GEO3 , qtx.A_F_NS_QTY_TX_GEO3 as A_F_NS_QTY_TX_GEO3_ALL  , 
    A_F_NS_VAL_SAL_AMT_LY_GEO3 ,A_F_NS_QTY_UNIT_SOLD_LY_GEO3 ,A_F_NS_VAL_MRG_LY_GEO3 ,A_F_NS_VAL_SAL_AMT_MRG_LY_GEO3 ,t.A_F_NS_QTY_TX_LY_GEO3 ,qtx.A_F_NS_QTY_TX_LY_GEO3 as A_F_NS_QTY_TX_LY_GEO3_ALL, 
    case when A_F_NS_VAL_SAL_AMT is not null then RK_F_NS_VAL_SAL_AMT else null end as RK_F_NS_VAL_SAL_AMT , case when A_F_NS_VAL_MRG is not null then RK_F_NS_VAL_MRG else null end as RK_F_NS_VAL_MRG  , case when A_F_NS_VAL_SAL_AMT_LY is not null then RK_F_NS_VAL_SAL_AMT_LY else null end as RK_F_NS_VAL_SAL_AMT_LY , case when A_F_NS_VAL_MRG_LY is not null then RK_F_NS_VAL_MRG_LY else null end as RK_F_NS_VAL_MRG_LY,PROMO_FLG
    FROM ( 
    			SELECT GEO.SID_SITE_ALLOCATION    ID_GEO1 , GEO.ID_REGION   ID_GEO2 ,GEO.ID_CONCEPT    ID_GEO3 , A.SID_ALL_SECTOR_H 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTimeLastYear  and :p_EndTimeLastYear   THEN F_NS_VAL_SAL_AMT END) AS A_F_NS_VAL_SAL_AMT_LY 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTimeLastYear  and :p_EndTimeLastYear   THEN F_NS_QTY_UNIT_SOLD END) AS A_F_NS_QTY_UNIT_SOLD_LY 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTimeLastYear  and :p_EndTimeLastYear   THEN F_NS_VAL_FRT_MRG+F_NS_VAL_BCK_MRG END) AS A_F_NS_VAL_MRG_LY 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTimeLastYear  and :p_EndTimeLastYear  THEN F_NS_VAL_SAL_AMT_MRG END) AS A_F_NS_VAL_SAL_AMT_MRG_LY 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTimeLastYear  and :p_EndTimeLastYear  THEN    F_NS_QTY_TX   END) AS A_F_NS_QTY_TX_LY 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTime  and  :p_EndTime  THEN F_NS_VAL_SAL_AMT END)   AS A_F_NS_VAL_SAL_AMT 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTime  and  :p_EndTime  THEN F_NS_QTY_UNIT_SOLD END) AS A_F_NS_QTY_UNIT_SOLD  
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTime  and  :p_EndTime  THEN F_NS_VAL_FRT_MRG+F_NS_VAL_BCK_MRG END) AS A_F_NS_VAL_MRG 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTime  and  :p_EndTime  THEN F_NS_VAL_SAL_AMT_MRG END) AS A_F_NS_VAL_SAL_AMT_MRG 
    			,SUM(CASE WHEN A.ID_DAY BETWEEN    :p_StartTime  and  :p_EndTime  THEN   F_NS_QTY_TX   END)   AS A_F_NS_QTY_TX 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_VAL_SAL_AMT END) OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_VAL_SAL_AMT_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_QTY_UNIT_SOLD END) OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_QTY_UNIT_SOLD_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_VAL_MRG END)  OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_VAL_MRG_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_VAL_SAL_AMT_MRG END) OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_VAL_SAL_AMT_MRG_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_QTY_TX END)  OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_QTY_TX_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_VAL_SAL_AMT_LY END) OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_VAL_SAL_AMT_LY_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_QTY_UNIT_SOLD_LY END) OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_QTY_UNIT_SOLD_LY_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_VAL_MRG_LY END) OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_VAL_MRG_LY_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_VAL_SAL_AMT_MRG_LY END)  OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_VAL_SAL_AMT_MRG_LY_GEO3 
    			,SUM(CASE WHEN GEO.ID_CONCEPT   =    :p_GeoLvl3Id  THEN A_F_NS_QTY_TX_LY END) OVER(PARTITION BY A.SID_ALL_SECTOR_H) AS A_F_NS_QTY_TX_LY_GEO3 
    			,rank() OVER(PARTITION BY A.SID_ALL_SECTOR_H order by A_F_NS_VAL_SAL_AMT desc) As RK_F_NS_VAL_SAL_AMT 
    			,rank() OVER(PARTITION BY A.SID_ALL_SECTOR_H order by A_F_NS_VAL_MRG desc) As RK_F_NS_VAL_MRG 
    			,rank() OVER(PARTITION BY A.SID_ALL_SECTOR_H order by A_F_NS_VAL_SAL_AMT_LY desc) As RK_F_NS_VAL_SAL_AMT_LY 
    			,rank() OVER(PARTITION BY A.SID_ALL_SECTOR_H order by A_F_NS_VAL_MRG_LY desc) As RK_F_NS_VAL_MRG_LY,
    			max( 0) PROMO_FLG
    			FROM A_SAL_NET_DAY_SIT_ALL_H A 
    			INNER JOIN LU_PAF_GEO GEO on A.sid_site = GEO.sid_site 
    			INNER JOIN LU_SECTOR_ALL_C D on A.SID_ALL_SECTOR_H = D.SID_ALL_SECTOR_C AND  D.SID_ALL_SECTOR_C  = :p_NomValueDrillId
    			INNER JOIN RL_SALES_CHANNEL_GRP C on GEO.SID_SALES_CHANNEL = C.SID_SALES_CHANNEL  AND SID_SALES_CHANNEL_GRP = :p_GeoChannelGrpID
    			
    			WHERE (A.ID_DAY BETWEEN  :p_StartTimeLastYear  and :p_EndTimeLastYear  or A.ID_DAY BETWEEN  :p_StartTime  and  :p_EndTime  )   
    			AND GEO.ID_CONCEPT = :p_GeoLvl3Id    
    			GROUP BY GEO.SID_SITE_ALLOCATION   , GEO.ID_REGION   ,GEO.ID_CONCEPT   ,A.SID_ALL_SECTOR_H 
    			HAVING A_F_NS_VAL_SAL_AMT is not null
    			QUALIFY GEO.SID_SITE_ALLOCATION   = :p_GeoLvl1Id   
    ) T 
    INNER JOIN LU_SECTOR_ALL_C D on T.SID_ALL_SECTOR_H = D.SID_ALL_SECTOR_C 
    INNER JOIN ( 
    			SELECT GEO.SID_SITE_ALLOCATION   ID_GEO1 , GEO.ID_REGION   ID_GEO2 ,GEO.ID_CONCEPT   ID_GEO3 
    			  ,SUM(CASE WHEN A.ID_DAY BETWEEN  :p_StartTimeLastYear  and :p_EndTimeLastYear  THEN   F_NS_QTY_TX   END) AS A_F_NS_QTY_TX_LY 
    			  ,SUM(CASE WHEN A.ID_DAY BETWEEN  :p_StartTime  and  :p_EndTime  THEN   F_NS_QTY_TX   END) AS A_F_NS_QTY_TX 
    			  ,SUM( A_F_NS_QTY_TX ) OVER() AS A_F_NS_QTY_TX_GEO3 
    			  ,SUM( A_F_NS_QTY_TX_LY ) OVER() AS A_F_NS_QTY_TX_LY_GEO3 
    			 FROM A_SAL_NET_DAY_SIT_ALL_H A 
    			 INNER JOIN LU_PAF_GEO GEO on A.sid_site = GEO.sid_site 
    			 INNER JOIN RL_SALES_CHANNEL_GRP C on GEO.SID_SALES_CHANNEL = C.SID_SALES_CHANNEL  AND SID_SALES_CHANNEL_GRP = :p_GeoChannelGrpID
    			 WHERE (A.ID_DAY BETWEEN  :p_StartTimeLastYear  and :p_EndTimeLastYear  or A.ID_DAY BETWEEN  :p_StartTime  and  :p_EndTime  ) 
    			 AND GEO.ID_CONCEPT =    :p_GeoLvl3Id    
    			 GROUP BY GEO.SID_SITE_ALLOCATION   , GEO.ID_REGION   ,GEO.ID_CONCEPT       
                         QUALIFY GEO.SID_SITE_ALLOCATION   = :p_GeoLvl1Id   
 ) qtx on t.id_geo1 = qtx.id_geo1 and t.id_geo2 = qtx.id_geo2 and t.id_geo3 = qtx.id_geo3 ;
    )
;

 

ulrich 816 posts Joined 09/09
24 Oct 2014

Is this macro really working as you posted it?
I thought GROUP BY and ordered analytic functions could not be used at the same level.
Besides that, it is a fairly complex SQL statement. Can you share the EXPLAIN? Are you accessing a JI? Views? Or are all the objects listed in the macro base tables?

feel free to donate bitcoin:12kgAUHFUqvG2sQgaRBXFhCwyf9HXdkGud

tomnolan 594 posts Joined 01/08
24 Oct 2014

As Ulrich said, that's a complex query in the macro. And since the macro parameters are reused several times throughout the query, the query can't be tested on its own outside of the macro.
 
This is too complicated to troubleshoot via a forum thread. If you're a customer, then please create a Teradata Customer Service incident.

dnoeth 4628 posts Joined 11/04
24 Oct 2014

CacheFlag='A' (about 95% of queries processed)

It's a complex query and the optimizer decided to ignore the PREPARE. According to the Admin manual:
"A" if a Specific Always decision is taken. That is, for each query USING values are peeked during request parsing.
This can be switched off either globally (using DisablePeekUsing in the dbscontrol Performance fields) or at the Profile level (by assigning a COST PROFILE with the UseHiPriority factor set).
I would recommend doing it at the Profile level to avoid negative effects on other prepared statements. For the how-to you might still involve Customer Service :-)
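To verify whether the change takes effect, you can compare parsing cost by cache flag in DBQL. A sketch, assuming query logging is enabled (and keeping in mind that DBQLogTbl.QueryText stores only the first characters of each request):

    SELECT CacheFlag
         , COUNT(*)           AS request_cnt
         , AVG(ParserCPUTime) AS avg_parser_cpu
         , AVG(AMPCPUTime)    AS avg_amp_cpu
    FROM DBC.DBQLogTbl
    WHERE QueryText LIKE '%M_PAF_SALES_DAY_ALLOCATION%'
    GROUP BY CacheFlag
    ORDER BY CacheFlag;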

Dieter

dwillier 5 posts Joined 03/14
27 Oct 2014

Hello,
Thank you all for your replies.
The macro works properly as I posted it. I agree that it's fairly complex SQL and not easy to troubleshoot via the forum.
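Regarding the GROUP BY doubt: Teradata evaluates ordered analytic functions after the GROUP BY step, so an aggregate may appear inside an OVER clause at the same level. A minimal illustration, with hypothetical table and column names:

    SELECT region_id
         , SUM(sales_amt)                             AS region_sales
         , RANK() OVER (ORDER BY SUM(sales_amt) DESC) AS sales_rank
    FROM daily_sales
    GROUP BY region_id;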
@Ulrich
The macro uses a JI. I've posted the explain plan below.
@dnoeth
Thanks for that tip. You've pointed me in a new direction with this setting.
 
I'll keep you informed on whether it achieves that goal :)
Best regards,
Damien
 
 
Explain Plan :

Explain execute M_PAF_SALES_DAY_ALLOCATION_ALL_ALL('2014-01-02','2014-01-16','2013-01-03','2013-01-17',9,1,1,1,3);

  1) First, we lock a distinct FRDM_PAF."pseudo table" for read on a
     RowHash to prevent global deadlock for FRDM_PAF.D.
  2) Next, we lock FRDM_PAF.D for read.
  3) We do a single-AMP SUM step to aggregate from 30 partitions of
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1 by way of the primary index
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" with a
     residual condition of (
     "(FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT =
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP) AND
     ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1) AND
     ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1)
     AND ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3)
     AND ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_H = 3)
     AND (((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_DAY >= DATE
     '2014-01-02') AND (FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_DAY <=
     DATE '2014-01-16')) OR
     ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_DAY >= DATE
     '2013-01-03') AND (FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_DAY <=
     DATE '2013-01-17')))))))") , grouping by field1 (
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SITE_ALLOCATION
     ,FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_REGION
     ,FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT
     ,FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_H).
     Aggregate Intermediate Results are computed locally, then placed
     in Spool 5.  The size of Spool 5 is estimated with no confidence
     to be 2 rows (378 bytes).  The estimated time for this step is
     0.00 seconds.
  4) We do a single-AMP RETRIEVE step from Spool 5 (Last Use) by way of
     the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" with
     an additional condition of ("NOT (Field_11 IS NULL)") into Spool 3
     (one-amp), which is built locally on that AMP.  The size of Spool
     3 is estimated with no confidence to be 2 rows (332 bytes).  The
     estimated time for this step is 0.03 seconds.
  5) We do a single-AMP STAT FUNCTION step from Spool 3 (Last Use) by
     way of the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" into
     Spool 15 (Last Use), which is built locally on that AMP.  The
     result rows are put into Spool 13 (one-amp), which is built
     locally on that AMP.  The size is estimated with no confidence to
     be 2 rows (628 bytes).
  6) We do a single-AMP STAT FUNCTION step from Spool 13 (Last Use) by
     way of the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" into
     Spool 18 (Last Use), which is built locally on that AMP.  The
     result rows are put into Spool 17 (one-amp), which is built
     locally on that AMP.  The size is estimated with no confidence to
     be 2 rows (636 bytes).
  7) We do a single-AMP STAT FUNCTION step from Spool 17 (Last Use) by
     way of the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" into
     Spool 21 (Last Use), which is built locally on that AMP.  The
     result rows are put into Spool 20 (one-amp), which is built
     locally on that AMP.  The size is estimated with no confidence to
     be 2 rows (644 bytes).
  8) We do a single-AMP STAT FUNCTION step from Spool 20 (Last Use) by
     way of the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" into
     Spool 24 (Last Use), which is built locally on that AMP.  The
     result rows are put into Spool 23 (one-amp), which is built
     locally on that AMP.  The size is estimated with no confidence to
     be 2 rows (652 bytes).
  9) We execute the following steps in parallel.
       1) We do a single-AMP RETRIEVE step from Spool 23 (Last Use) by
          way of the hash value of
          "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
          FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
          FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" with
          an additional condition of ("SID_SITE_ALLOCATION = 9")
          into Spool 2 (used to materialize view, derived table, table
          function or table operator T) (one-amp), which is built
          locally on that AMP.  The size of Spool 2 is estimated with
          no confidence to be 2 rows (620 bytes).  The estimated time
          for this step is 0.03 seconds.
       2) We do a single-AMP SUM step to aggregate from 30 partitions
          of FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0 by way of the
          primary index
          "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT = 1,
          FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP =
          1" with a residual condition of (
          "(FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT =
          FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP)
          AND ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT = 1)
          AND
          ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP
          = 1) AND (((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_DAY >=
          DATE '2014-01-02') AND
          (FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_DAY <= DATE
          '2014-01-16')) OR
          ((FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_DAY >= DATE
          '2013-01-03') AND
          (FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_DAY <= DATE
          '2013-01-17')))))") , grouping by field1 (
          FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SITE_ALLOCATION
          ,FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_REGION
          ,FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT).
          Aggregate Intermediate Results are computed locally, then
          placed in Spool 30.  The size of Spool 30 is estimated with
          no confidence to be 2 rows (130 bytes).  The estimated time
          for this step is 0.00 seconds.
 10) We do a single-AMP RETRIEVE step from Spool 30 (Last Use) by way
     of the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP = 1"
     into Spool 27 (one-amp), which is built locally on that AMP.  The
     size of Spool 27 is estimated with no confidence to be 2 rows (98
     bytes).  The estimated time for this step is 0.03 seconds.
 11) We do a single-AMP STAT FUNCTION step from Spool 27 (Last Use) by
     way of the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP = 1"
     into Spool 34 (Last Use), which is built locally on that AMP.  The
     result rows are put into Spool 32 (one-amp), which is built
     locally on that AMP.  The size is estimated with no confidence to
     be 2 rows (194 bytes).
 12) We do a single-AMP RETRIEVE step from Spool 32 (Last Use) by way
     of the hash value of
     "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT = 1,
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP = 1"
     with an additional condition of ("SID_SITE_ALLOCATION = 9") into
     Spool 1 (used to materialize view, derived table, table function
     or table operator qtx) (one-amp), which is built locally on that
     AMP.  The size of Spool 1 is estimated with no confidence to be 2
     rows (130 bytes).  The estimated time for this step is 0.03
     seconds.
 13) We execute the following steps in parallel.
      1) We do a single-AMP RETRIEVE step from Spool 1 (Last Use) by
         way of the hash value of
         "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT = 1,
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP =
         1" into Spool 39 (one-amp), which is built locally on that AMP.
         Then we do a SORT to order Spool 39 by the hash code of (
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_GEO1,
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_GEO2,
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_GEO3).  The size of
         Spool 39 is estimated with no confidence to be 2 rows (114
         bytes).  The estimated time for this step is 0.01 seconds.
      2) We do a single-AMP RETRIEVE step from Spool 2 (Last Use) by
         way of the hash value of
         "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_CONCEPT = 1,
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_SALES_CHANNEL_GRP = 1,
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_C = 3" into
         Spool 40 (all_amps), which is duplicated on all AMPs.
         Then we do a SORT to order Spool 40 by the hash code of (
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_GEO1,
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_GEO2,
         FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.ID_GEO3).  The size of
         Spool 40 is estimated with no confidence to be 432 rows (
         130,464 bytes).  The estimated time for this step is 0.01
         seconds.
 14) We do an all-AMPs JOIN step from Spool 39 (Last Use) by way of the
     hash value of "FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.ID_CONCEPT =
     1, FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_0.SID_SALES_CHANNEL_GRP = 1",
     which is joined to Spool 40 (Last Use) by way of a RowHash match
     scan.  Spool 39 and Spool 40 are joined using a merge join, with a
     join condition of ("(ID_GEO1 = ID_GEO1) AND ((ID_GEO2 = ID_GEO2)
     AND (ID_GEO3 = ID_GEO3 ))").  The result goes into Spool 41
     (all_amps), which is redistributed by the hash code of (
     FRDM_PAF.JI_A_SAL_NET_DAY_SIT_ALL_H_1.SID_ALL_SECTOR_H) to all
     AMPs.  Then we do a SORT to order Spool 41 by row hash.  The size
     of Spool 41 is estimated with no confidence to be 2 rows (644
     bytes).  The estimated time for this step is 0.03 seconds.
 15) We do an all-AMPs JOIN step from FRDM_PAF.D by way of a RowHash
     match scan with no residual conditions, which is joined to Spool
     41 (Last Use) by way of a RowHash match scan.  FRDM_PAF.D and
     Spool 41 are joined using a merge join, with a join condition of (
     "SID_ALL_SECTOR_H = FRDM_PAF.D.SID_ALL_SECTOR_C").  The result
     goes into Spool 38 (all_amps), which is built locally on the AMPs.
     The size of Spool 38 is estimated with no confidence to be 2 rows
     (820 bytes).  The estimated time for this step is 0.02 seconds.
 16) We do an all-AMPs STAT FUNCTION step from Spool 38 by way of an
     all-rows scan into Spool 44, which is redistributed by hash code
     to all AMPs.  The result rows are put into Spool 37 (group_amps),
     which is built locally on the AMPs.  This step is used to retrieve
     the TOP 5000 rows.  Load distribution optimization is used.
     If this step retrieves less than 5000 rows, then execute step 17.
     The size is estimated with no confidence to be 2 rows (752 bytes).
 17) We do an all-AMPs STAT FUNCTION step from Spool 38 (Last Use) by
     way of an all-rows scan into Spool 44 (Last Use), which is
     redistributed by hash code to all AMPs.  The result rows are put
     into Spool 37 (group_amps), which is built locally on the AMPs.
     This step is used to retrieve the TOP 5000 rows.  The size is
     estimated with no confidence to be 2 rows (752 bytes).
 18) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 37 are sent back to the user as the result
     of statement 1.

 

ulrich 816 posts Joined 09/09
27 Oct 2014

I saw a similar JI-related issue at a customer site; parsing there took 15 minutes.
The workaround was to set a specific diagnostic statement telling the optimizer to do a less detailed analysis, which reduced the parsing time significantly.
You might ask your Teradata Customer Service about it.

feel free to donate bitcoin:12kgAUHFUqvG2sQgaRBXFhCwyf9HXdkGud
