radsubra 33 posts Joined 10/12
28 Aug 2015
tuning query

The query below is taking a long time to execute. Any suggestions would be appreciated.
 
SELECT    PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.TRX_SC_ID , PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SALES_ORDER_LINE_KEY ,
        PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SHIP_TO_CUSTOMER_KEY , PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SK_SALES_ORDER_LINE_ID_INT ,
        CAST( PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SO_LINE_SOURCE_UPDATE_DTM AS TIMESTAMP( 0 )  ) ,
        CAST( PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.DV_SO_LINE_SOURCE_UPDATE_DT AS DATE ) ,
        CAST( PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SOURCE_COMMIT_DTM AS TIMESTAMP( 0 )  )
FROM    (
SELECT    - ( ROW_NUMBER ( ) OVER (
ORDER BY TMP1.SALES_ORDER_LINE_KEY ASC )  ) + TMP2.MAX_VAL AS TRX_SC_ID ,
        TMP1.SALES_ORDER_LINE_KEY , TMP1.SHIP_TO_CUSTOMER_KEY , TMP1.SK_SALES_ORDER_LINE_ID_INT ,
        TMP1.SO_LINE_SOURCE_UPDATE_DTM , TMP1.DV_SO_LINE_SOURCE_UPDATE_DT ,
        TMP1.SOURCE_COMMIT_DTM
FROM    (
SELECT    SOL.SALES_ORDER_LINE_KEY , SOL.SHIP_TO_CUSTOMER_KEY , SOL.SK_SALES_ORDER_LINE_ID_INT ,
        SOL.SO_LINE_SOURCE_UPDATE_DTM , SOL.DV_SO_LINE_SOURCE_UPDATE_DT ,
        SOL.SOURCE_COMMIT_DTM
FROM    NRTNCRVWDB.N_SALES_ORDER_LINE_NRT_HIST_TV SOL
WHERE    SOL.SS_CD = 'CG'
    AND SOL.END_TV_DTM = '3500-01-01 00:00:00'
    AND SOL.EDW_UPDATE_DTM > (
SELECT    LAST_EXTRACT_DATE
FROM    ETLVWDB.DW_JOB_STREAMS
WHERE    JOB_STREAM_ID = 'wf_WI_DEFAULT_APPLIED_SC' )
    AND SOL.SALES_ORDER_LINE_KEY <> - 7777
    AND NOT EXISTS (
SELECT    1
FROM    NRTNCRVWDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV
WHERE    SOL.SALES_ORDER_LINE_KEY = RU_SALES_ORDER_LINE_KEY
    AND END_TV_DTM = '3500-01-01 00:00:00'
    AND ss_cd = 'CG' ) ) TMP1 , (
SELECT    COALESCE( MINIMUM ( SK_LINE_SEQUENCE_ID_INT ) , 0 ) (NAMED MAX_VAL )
FROM    ETLVWDB.SM_SALES_CREDIT_ASSIGNMENT_NRT
WHERE    SS_CODE = 'CG' ) TMP2 ) AS PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ ( TRX_SC_ID,
        SALES_ORDER_LINE_KEY,SHIP_TO_CUSTOMER_KEY,SK_SALES_ORDER_LINE_ID_INT,
        SO_LINE_SOURCE_UPDATE_DTM,DV_SO_LINE_SOURCE_UPDATE_DT, SOURCE_COMMIT_DTM )
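Before restructuring anything, it is worth checking statistics: several row estimates in the EXPLAIN below are marked "no confidence", which usually means the optimizer is missing stats on the filter and join columns. A sketch only, assuming none of these are collected yet (check with HELP STATISTICS on each table first), using the base-table names that appear in the plan:

```sql
-- Sketch: cover the single-table predicates and the anti-join column.
-- Verify which of these already exist before running.
COLLECT STATISTICS ON NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV
    COLUMN (SS_CD, END_TV_DTM);
COLLECT STATISTICS ON NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV
    COLUMN (SALES_ORDER_LINE_KEY);
COLLECT STATISTICS ON NRTNCRDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV
    COLUMN (RU_SALES_ORDER_LINE_KEY);
COLLECT STATISTICS ON NRTNCRDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV
    COLUMN (SS_CD, END_TV_DTM);
```

With current stats the optimizer can cost the exclusion join and the big retrieves with some confidence instead of guessing.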

explain
SELECT    PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.TRX_SC_ID , PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SALES_ORDER_LINE_KEY ,
        PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SHIP_TO_CUSTOMER_KEY , PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SK_SALES_ORDER_LINE_ID_INT ,
        CAST( PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SO_LINE_SOURCE_UPDATE_DTM AS TIMESTAMP( 0 )  ) ,
        CAST( PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.DV_SO_LINE_SOURCE_UPDATE_DT AS DATE ) ,
        CAST( PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ.SOURCE_COMMIT_DTM AS TIMESTAMP( 0 )  )
FROM    (
SELECT    - ( ROW_NUMBER ( ) OVER (
ORDER BY TMP1.SALES_ORDER_LINE_KEY ASC )  ) + TMP2.MAX_VAL AS TRX_SC_ID ,
        TMP1.SALES_ORDER_LINE_KEY , TMP1.SHIP_TO_CUSTOMER_KEY , TMP1.SK_SALES_ORDER_LINE_ID_INT ,
        TMP1.SO_LINE_SOURCE_UPDATE_DTM , TMP1.DV_SO_LINE_SOURCE_UPDATE_DT ,
        TMP1.SOURCE_COMMIT_DTM
FROM    (
SELECT    SOL.SALES_ORDER_LINE_KEY , SOL.SHIP_TO_CUSTOMER_KEY , SOL.SK_SALES_ORDER_LINE_ID_INT ,
        SOL.SO_LINE_SOURCE_UPDATE_DTM , SOL.DV_SO_LINE_SOURCE_UPDATE_DT ,
        SOL.SOURCE_COMMIT_DTM
FROM    NRTNCRVWDB.N_SALES_ORDER_LINE_NRT_HIST_TV SOL
WHERE    SOL.SS_CD = 'CG'
    AND SOL.END_TV_DTM = '3500-01-01 00:00:00'
    AND SOL.EDW_UPDATE_DTM > (
SELECT    LAST_EXTRACT_DATE
FROM    ETLVWDB.DW_JOB_STREAMS
WHERE    JOB_STREAM_ID = 'wf_WI_DEFAULT_APPLIED_SC' )
    AND SOL.SALES_ORDER_LINE_KEY <> - 7777
    AND NOT EXISTS (
SELECT    1
FROM    NRTNCRVWDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV
WHERE    SOL.SALES_ORDER_LINE_KEY = RU_SALES_ORDER_LINE_KEY
    AND END_TV_DTM = '3500-01-01 00:00:00'
    AND ss_cd = 'CG' ) ) TMP1 , (
SELECT    COALESCE( MINIMUM ( SK_LINE_SEQUENCE_ID_INT ) , 0 ) (NAMED MAX_VAL )
FROM    ETLVWDB.SM_SALES_CREDIT_ASSIGNMENT_NRT
WHERE    SS_CODE = 'CG' ) TMP2 ) AS PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ ( TRX_SC_ID,
        SALES_ORDER_LINE_KEY,SHIP_TO_CUSTOMER_KEY,SK_SALES_ORDER_LINE_ID_INT,
        SO_LINE_SOURCE_UPDATE_DTM,DV_SO_LINE_SOURCE_UPDATE_DT, SOURCE_COMMIT_DTM );

 This request is eligible for incremental planning and execution (IPE)
 but does not meet cost thresholds. The following is the static plan
 for the request.
  1) First, we lock NRTNCRDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV for access,
     we lock NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV for access, and we
     lock TRANSLATIONDB.SM_SALES_CREDIT_ASSIGNMENT for access.
  2) Next, we do a two-AMP RETRIEVE step from CTLDB.DW_JOB_STREAMS by
     way of unique index # 4 "CTLDB.DW_JOB_STREAMS.JOB_STREAM_ID =
     'wf_WI_DEFAULT_APPLIED_SC'" with no residual conditions locking
     row for access into Spool 3 (group_amps), which is built locally
     on the AMPs.  The size of Spool 3 is estimated with high
     confidence to be 1 row (35 bytes).  The estimated time for this
     step is 0.00 seconds.
  3) We do a group-AMP DISPATCHER RETRIEVE step from Spool 3 (Last Use)
     by way of an all-rows scan and send the rows back to the
     Dispatcher.  The size is estimated with high confidence to be 1
     row.  The estimated time for this step is 0.00 seconds.
  4) We do an all-AMPs SUM step to aggregate from
     TRANSLATIONDB.SM_SALES_CREDIT_ASSIGNMENT by way of an all-rows
     scan with a condition of (
     "TRANSLATIONDB.SM_SALES_CREDIT_ASSIGNMENT.SS_CODE = 'CG '").
     Aggregate Intermediate Results are computed globally, then placed
     in Spool 5.  The size of Spool 5 is estimated with high confidence
     to be 1 row (19 bytes).  The estimated time for this step is 2.06
     seconds.
  5) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from Spool 5 (Last Use) by
          way of an all-rows scan into Spool 1 (used to materialize
          view, derived table, table function or table operator TMP2)
          (all_amps) (compressed columns allowed), which is built
          locally on the AMPs.  The size of Spool 1 is estimated with
          high confidence to be 1 row (25 bytes).  The estimated time
          for this step is 0.00 seconds.
       2) We do an all-AMPs RETRIEVE step from
          NRTNCRDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV by way of an all-rows
          scan with a condition of (
          "(NRTNCRDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV.END_TV_DTM =
          TIMESTAMP '3500-01-01 00:00:00') AND
          (NRTNCRDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV.SS_CD = 'CG')") into
          Spool 8 (all_amps), which is redistributed by the hash code
          of (
           NRTNCRDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV.RU_SALES_ORDER_LINE_KEY)
          to all AMPs.  Then we do a SORT to order Spool 8 by row hash
          and the sort key in spool field1 eliminating duplicate rows.
          The size of Spool 8 is estimated with high confidence to be
          112,518,121 rows (2,812,953,025 bytes).  The estimated time
          for this step is 4.61 seconds.
  6) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from Spool 1 (Last Use) by
          way of an all-rows scan into Spool 9 (all_amps) (compressed
          columns allowed), which is duplicated on all AMPs.  The size
          of Spool 9 is estimated with high confidence to be 702 rows (
          11,934 bytes).  The estimated time for this step is 0.01
          seconds.
       2) We do an all-AMPs JOIN step from
          NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV by way of an all-rows
          scan with a condition of (
           "(NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV.EDW_UPDATE_DTM >
          :%SSQ20) AND
          ((NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV.END_TV_DTM =
          TIMESTAMP '3500-01-01 00:00:00') AND
          ((NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV.SS_CD = 'CG ') AND
           (NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV.SALES_ORDER_LINE_KEY
          <> -7777 )))"), which is joined to Spool 8 (Last Use) by way
          of an all-rows scan.  NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV
          and Spool 8 are joined using an exclusion merge join, with a
          join condition of (
           "NRTNCRDB.N_SALES_ORDER_LINE_NRT_HIST_TV.SALES_ORDER_LINE_KEY
          = RU_SALES_ORDER_LINE_KEY") where unknown comparison will be
          ignored.  The result goes into Spool 10 (all_amps)
          (compressed columns allowed), which is built locally on the
          AMPs.  The size of Spool 10 is estimated with no confidence
          to be 52,877,466 rows (2,590,995,834 bytes).  The estimated
          time for this step is 3.44 seconds.
  7) We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of an
     all-rows scan, which is joined to Spool 10 (Last Use) by way of an
     all-rows scan.  Spool 9 and Spool 10 are joined using a product
     join, with a join condition of ("(1=1)").  The result goes into
     Spool 7 (all_amps) (compressed columns allowed), which is built
     locally on the AMPs.  The size of Spool 7 is estimated with no
     confidence to be 52,877,466 rows (2,908,260,630 bytes).  The
     estimated time for this step is 0.39 seconds.
  8) We do an all-AMPs STAT FUNCTION step from Spool 7 (Last Use) by
     way of an all-rows scan into Spool 13 (Last Use), which is assumed
     to be redistributed by value to all AMPs.  The result rows are put
     into Spool 11 (all_amps) (compressed columns allowed), which is
     built locally on the AMPs.  The size is estimated with no
     confidence to be 52,877,466 rows (3,437,035,290 bytes).
  9) We do an all-AMPs RETRIEVE step from Spool 11 (Last Use) by way of
     an all-rows scan into Spool 2 (used to materialize view, derived
     table, table function or table operator
     PM_VSMSMT7O3EIKY6L6SY2ZNDUT2UQ) (all_amps) (compressed columns
     allowed), which is built locally on the AMPs.  The size of Spool 2
     is estimated with no confidence to be 52,877,466 rows (
     3,225,525,426 bytes).  The estimated time for this step is 0.32
     seconds.
 10) We do an all-AMPs RETRIEVE step from Spool 2 (Last Use) by way of
     an all-rows scan into Spool 16 (group_amps), which is built
     locally on the AMPs.  The size of Spool 16 is estimated with no
     confidence to be 52,877,466 rows (4,177,319,814 bytes).  The
     estimated time for this step is 0.30 seconds.
 11) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 16 are sent back to the user as the result
     of statement 1.
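Reading the plan, the expensive parts are step 5.2, which spools and redistributes ~112.5 million rows of N_SCA_FOR_ALL_TRX_NRT_HIST_TV just to feed the exclusion merge join, and step 8, the ROW_NUMBER STAT FUNCTION over ~52.9 million rows, which forces a global redistribution and sort. If the anti-join is the bottleneck, one variant worth testing is a LEFT JOIN with an IS NULL filter, which gives the optimizer a different set of join plans to cost. A sketch only, not a drop-in fix; verify the row counts match the NOT EXISTS version before adopting it:

```sql
-- Hedged anti-join rewrite of the innermost derived table (TMP1).
SELECT  SOL.SALES_ORDER_LINE_KEY , SOL.SHIP_TO_CUSTOMER_KEY , SOL.SK_SALES_ORDER_LINE_ID_INT ,
        SOL.SO_LINE_SOURCE_UPDATE_DTM , SOL.DV_SO_LINE_SOURCE_UPDATE_DT ,
        SOL.SOURCE_COMMIT_DTM
FROM    NRTNCRVWDB.N_SALES_ORDER_LINE_NRT_HIST_TV SOL
LEFT JOIN NRTNCRVWDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV SCA
       ON  SCA.RU_SALES_ORDER_LINE_KEY = SOL.SALES_ORDER_LINE_KEY
       AND SCA.END_TV_DTM = '3500-01-01 00:00:00'
       AND SCA.SS_CD = 'CG'
WHERE   SOL.SS_CD = 'CG'
    AND SOL.END_TV_DTM = '3500-01-01 00:00:00'
    AND SOL.SALES_ORDER_LINE_KEY <> -7777
    AND SOL.EDW_UPDATE_DTM > (
        SELECT LAST_EXTRACT_DATE
        FROM   ETLVWDB.DW_JOB_STREAMS
        WHERE  JOB_STREAM_ID = 'wf_WI_DEFAULT_APPLIED_SC' )
    AND SCA.RU_SALES_ORDER_LINE_KEY IS NULL
```

The ROW_NUMBER cost is harder to avoid: numbering 52.9 million rows in one sequence is inherently a global sort. If the surrogate-key values only need to be unique rather than gap-free, generating them another way (e.g. an identity column on the target) would remove step 8 entirely.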
 

Adeel Chaudhry 773 posts Joined 04/08
30 Aug 2015

Try removing the derived tables. The query is poorly written in Informatica's SQL Override. Perhaps try implementing the logic in Informatica itself.
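For what it's worth, here is a sketch of what collapsing the derived tables could look like: the inner TMP1 block folds into the outer SELECT, and the single-row TMP2 aggregate becomes an explicit CROSS JOIN. Untested, and the offset arithmetic is carried over unchanged, so verify the results before use:

```sql
-- Sketch: one query level instead of three nested derived tables.
SELECT  - ( ROW_NUMBER() OVER ( ORDER BY SOL.SALES_ORDER_LINE_KEY ASC ) ) + TMP2.MAX_VAL AS TRX_SC_ID ,
        SOL.SALES_ORDER_LINE_KEY , SOL.SHIP_TO_CUSTOMER_KEY , SOL.SK_SALES_ORDER_LINE_ID_INT ,
        CAST( SOL.SO_LINE_SOURCE_UPDATE_DTM AS TIMESTAMP( 0 ) ) ,
        CAST( SOL.DV_SO_LINE_SOURCE_UPDATE_DT AS DATE ) ,
        CAST( SOL.SOURCE_COMMIT_DTM AS TIMESTAMP( 0 ) )
FROM    NRTNCRVWDB.N_SALES_ORDER_LINE_NRT_HIST_TV SOL
CROSS JOIN (
        SELECT COALESCE( MINIMUM( SK_LINE_SEQUENCE_ID_INT ) , 0 ) (NAMED MAX_VAL)
        FROM   ETLVWDB.SM_SALES_CREDIT_ASSIGNMENT_NRT
        WHERE  SS_CODE = 'CG' ) TMP2
WHERE   SOL.SS_CD = 'CG'
    AND SOL.END_TV_DTM = '3500-01-01 00:00:00'
    AND SOL.SALES_ORDER_LINE_KEY <> -7777
    AND SOL.EDW_UPDATE_DTM > (
        SELECT LAST_EXTRACT_DATE
        FROM   ETLVWDB.DW_JOB_STREAMS
        WHERE  JOB_STREAM_ID = 'wf_WI_DEFAULT_APPLIED_SC' )
    AND NOT EXISTS (
        SELECT 1
        FROM   NRTNCRVWDB.N_SCA_FOR_ALL_TRX_NRT_HIST_TV SCA
        WHERE  SCA.RU_SALES_ORDER_LINE_KEY = SOL.SALES_ORDER_LINE_KEY
           AND SCA.END_TV_DTM = '3500-01-01 00:00:00'
           AND SCA.SS_CD = 'CG' )
```

This mainly helps readability; the optimizer may produce the same plan, so compare EXPLAIN output before and after.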
 
HTH!

-- If you are stuck at something .... consider it an opportunity to think anew.
