SQL Server Performance

Stored Procedure slow first run

Discussion in 'Performance Tuning for DBAs' started by kdefilip, Jun 10, 2004.

  1. kdefilip Member

    Hi.
    I have a stored procedure that runs very slowly on its first run (30 sec.). All subsequent runs take 5 sec. The slowness is not in recompiles; it occurs after the start of the SP. Also, on the first run there are 275,000 reads and 2 writes (??? this is not an update, but there are 2 writes). On subsequent runs, the reads are only 125,000. If I stop and start the SQL Server process, the pattern repeats itself. This slowness is apparent to users on each "first run" of the SP, or after about 30 minutes of inactivity. I'm out of gas on this one. Any help would be greatly appreciated.
    Thanks.
  2. satya Moderator

    See http://www.sql-server-performance.com/stored_procedures.asp for tips on performance tuning of SPs.

    How often do you recompile the stored procedure?
    Have you reindexed the tables involved?
    Do you have a SET NOCOUNT ON statement at the beginning of the SP?
    Are you using any temp tables?

    Monitor the event using PROFILER and capture SYSMON (PERFMON) counters for assessment.
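
    As a rough first pass on those checks (YourProcedure and YourTable here are placeholders for your own object names):

    EXEC sp_recompile 'YourProcedure' -- flag the proc so its plan is rebuilt on the next run
    DBCC DBREINDEX ('YourTable') -- rebuild the indexes on a table the proc touches
    UPDATE STATISTICS YourTable -- refresh the statistics the optimizer relies on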

    HTH

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. kdefilip Member

    Hi.
    Recompiling makes no difference.
    Indexes have been dropped and recreated. Stats are up to date.
    SET NOCOUNT ON makes no difference.
    No temp tables are involved in this query.

    I have two trace files available. The first represents the first run of the SP.
    The second is the second run of the SP.

    I can email them to anyone who would like to look them over.
    Thanks.
  4. satya Moderator

    What are the memory settings on SQL?
    What is the service pack level on SQL & the OS?
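
    To check these quickly (a sketch; 'max server memory (MB)' is only visible once advanced options are enabled):

    SELECT @@VERSION -- reports the SQL Server build and service pack
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max server memory (MB)' -- the current memory ceiling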

    Follow the link above for guidance on improving SP performance.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  5. kdefilip Member

    SQL memory is set dynamically - low is zero, high is 1.5 GB.
    The SQL Server process is always at its max of 1.5 GB.

    OS is W2K SP4.
    SQL is SQL 2000 SP3a Standard Edition.

    These servers do show peaks exceeding physical memory, but even when there is plenty of RAM available, the same wait times are observed.
    Thanks again.
  6. Luis Martin Moderator

    How about running the Index Tuning Wizard for that procedure?


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.

  7. kdefilip Member

    I have run the Index Tuning Wizard, which comes back with no suggestions. However, when I test with SET FORCEPLAN ON (due to the nested JOIN in this query), it overrides the optimizer's plan and forces the join order as written. In that case there was a full table scan on one of the tables, so I created an index to cover this temporary situation. It had no effect, however, on the long wait in the initial execution.
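
    For reference, that test looked roughly like this (the parameter values here are made up):

    SET FORCEPLAN ON -- join order follows the FROM clause as written
    EXEC HomebaseGetCompanyAircraftList @paramCompanyId = 1, @paramJournalId = 1
    SET FORCEPLAN OFF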

    It is almost as if the result set itself is being saved somewhere, given that subsequent runs of the SP show about half as many reads.
  8. derrickleggett New Member

    This is kind of interesting. derrickleggett@yahoo.com if you want to send me the trace files. What events are you tracking? Specifically, are you tracking recompiles? Do you have the execution plan for both runs? Have you run SET STATISTICS TIME ON, SET STATISTICS IO ON, and SET STATISTICS PROFILE ON to see what's going on there?
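
    Something like this captures all three for a single run (substitute your own procedure call):

    SET STATISTICS TIME ON
    SET STATISTICS IO ON
    SET STATISTICS PROFILE ON
    EXEC YourProcedure -- your procedure call goes here
    SET STATISTICS PROFILE OFF
    SET STATISTICS IO OFF
    SET STATISTICS TIME OFF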

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  9. kdefilip Member

    Here are the results with the SET STATISTICS clauses. NOTE: the warning concerning the "local join hint" refers to the use of INNER LOOP JOIN in the statement, in line with the suggestion of MS KB article Q269883. If I take LOOP out of the statement, the execution time doubles to 35 seconds. The same is true if I use SET FORCEPLAN ON: the execution time is half. You will notice info was collected for the first run, then the second run, in each case. I did not include STATISTICS PROFILE as it did not return any info except the number of rows returned.
    Thanks.
    p.s. SP is at the end.


    ********
    WITH SET STATISTICS TIME ON

    FIRST RUN
    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 0 ms.
    Warning: The join order has been enforced because a local join hint is used.
    SQL Server parse and compile time:
    CPU time = 70 ms, elapsed time = 368 ms.

    (589 row(s) affected)


    SQL Server Execution Times:
    CPU time = 10045 ms, elapsed time = 14256 ms.

    SQL Server Execution Times:
    CPU time = 10045 ms, elapsed time = 14264 ms.

    SQL Server Execution Times:
    CPU time = 10045 ms, elapsed time = 14265 ms.

    SQL Server Execution Times:
    CPU time = 10045 ms, elapsed time = 14265 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 0 ms.

    SECOND RUN
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    (589 row(s) affected)


    SQL Server Execution Times:
    CPU time = 2784 ms, elapsed time = 4765 ms.

    SQL Server Execution Times:
    CPU time = 2784 ms, elapsed time = 4774 ms.

    SQL Server Execution Times:
    CPU time = 2784 ms, elapsed time = 4775 ms.

    SQL Server Execution Times:
    CPU time = 2784 ms, elapsed time = 4775 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 0 ms.

    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 0 ms.

    WITH SET STATISTICS IO ON

    FIRST RUN


    Warning: The join order has been enforced because a local join hint is used.

    (589 row(s) affected)

    Table 'Table_Reference'. Scan count 31111, logical reads 145331, physical reads 157, read-ahead reads 5370.
    Table 'Table_Model'. Scan count 31112, logical reads 64217, physical reads 8, read-ahead reads 48.
    Table 'Table'. Scan count 1, logical reads 3261, physical reads 2, read-ahead reads 3269.


    SECOND RUN
    (589 row(s) affected)

    Table 'Table_Reference'. Scan count 31111, logical reads 97986, physical reads 0, read-ahead reads 0.
    Table 'Table_Model'. Scan count 31112, logical reads 64176, physical reads 0, read-ahead reads 0.
    Table 'Table'. Scan count 1, logical reads 3261, physical reads 0, read-ahead reads 0.
    *******************

    Stored procedure

    CREATE PROCEDURE [HomebaseGetCompanyAircraftList]
    (
    @paramCompanyId int,
    @paramJournalId int
    )
    AS



    SELECT a.manyfields, m.manyfields, r.manyfields

    FROM Table a
    INNER LOOP JOIN Table_Model m ON a.ac_id = amod_id
    INNER LOOP JOIN Table_Reference r ON r.cref_ac_id = a.ac_id AND r.cref_journ_id = a.ac_journ_id

    WHERE cref_comp_id = @paramCompanyId AND cref_journ_id = @paramJournalId
    ORDER BY a.ac_lifecycle_stage,m.amod_make_name,m.amod_model_name,a.ac_ser_no_full
    GO


  10. derrickleggett New Member

    You should qualify all of your columns with the table alias. You were missing three of them.

    CREATE PROCEDURE [HomebaseGetCompanyAircraftList]
    (
    @paramCompanyId int,
    @paramJournalId int
    )
    AS

    SELECT
    a.manyfields,
    m.manyfields,
    r.manyfields
    FROM
    Table a
    INNER LOOP JOIN Table_Model m ON a.ac_id = m.amod_id
    INNER LOOP JOIN Table_Reference r ON r.cref_ac_id = a.ac_id
    AND r.cref_journ_id = a.ac_journ_id
    WHERE
    r.cref_comp_id = @paramCompanyId
    AND r.cref_journ_id = @paramJournalId
    ORDER BY
    a.ac_lifecycle_stage,
    m.amod_make_name,
    m.amod_model_name,
    a.ac_ser_no_full

    From that point, let's break this down.

    Parse and Compile Time
    --Caused by the procedure having to compile the first time.
    ---This isn't a forced recompile. The procedure does not exist in cache at all and has to be compiled.
    ----------------------
    First Run: CPU time = 70 ms, elapsed time = 368 ms
    Second Run: CPU time = 0 ms, elapsed time = 0 ms.

    Actual Execution Time
    --Note the four nearly identical time blocks in each run's output: STATISTICS TIME reports the same execution at several statement levels, so they describe one execution, not four, and should not be added together.
    --This drops on later runs because you've loaded the index pages into memory.
    ---From that point on the reads are logical reads, which are much more efficient and much faster.
    ---------------------
    First Run: CPU time = 10045 ms, elapsed time = 14265 ms
    Second Run: CPU time = 2784 ms, elapsed time = 4775 ms.

    Total Time (including parse and compile)
    ----------
    First Run: CPU time = 10115 ms, elapsed time = 14633 ms
    Second Run: CPU time = 2784 ms, elapsed time = 4775 ms

    Table 'Table_Reference'. Scan count 31111, logical reads 145331, physical reads 157, read-ahead reads 5370.
    Table 'Table_Model'. Scan count 31112, logical reads 64217, physical reads 8, read-ahead reads 48.

    The reads on Table_Reference are pretty high.
    --Do you have a covering index for (cref_comp_id, cref_journ_id, cref_ac_id)? (A sketch follows below.)
    --Since Table_Reference limits the result set (it carries the WHERE clause), you should consider putting it first in the join order.
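
    A sketch of such a covering index (the index name is made up; check what already exists first):

    CREATE INDEX IX_TableReference_CompJournAc
    ON Table_Reference (cref_comp_id, cref_journ_id, cref_ac_id)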

    The reads on Table_Model are also high.
    --Do you need to order this recordset?
    ---Could that be done more efficiently at the presentation layer?
    ---If you are going to order it here, do you have an index to support the sort? (See the sketch below.)
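
    If the ordering has to stay in the query, an index along these lines (again, the name is made up) could at least support the two Table_Model columns in the ORDER BY:

    CREATE INDEX IX_TableModel_MakeModel
    ON Table_Model (amod_make_name, amod_model_name)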

    When you look at your execution plan, do you see any bookmark lookups or table scans? All of these will slow the first execution down, cause more reads on all runs, and generally decrease performance.


    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  11. kdefilip Member

    Hi.
    Would a clustered index be best on the reference fields?
    What do you mean when you say I should consider putting Table_Reference first?
    What do you mean by "do I need to order this recordset"?

    I have eliminated all table scans. However, the first step in the query is a bookmark lookup, but the cost is low.

    Thanks again for your assistance.

  13. gaurav_bindlish New Member

    Well, 275,000 reads is a lot of data that needs to be brought into memory, and hence the execution takes a long time the first time. In subsequent executions the data is already cached in the buffer pool, and hence you see less execution time.

    I would recommend fine-tuning the query so that fewer pages are needed for its execution. This may include adding new covering indexes (which may not be very useful if a large part of the table is being read) and vertically partitioning the data so that the frequently used fields of the tables under consideration are in one table. That would mean more data on fewer pages, so the number of pages read will decrease, which will eventually decrease the query execution time. (A sketch of the idea follows.)
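
    As a rough sketch of that vertical split (all table and column names here are hypothetical):

    -- Narrow table holding the hot, frequently read key fields
    CREATE TABLE Table_Reference_Hot (
        cref_ac_id int NOT NULL,
        cref_journ_id int NOT NULL,
        cref_comp_id int NOT NULL
    )
    -- Companion table for the wide, rarely read fields, joined back on the key
    CREATE TABLE Table_Reference_Cold (
        cref_ac_id int NOT NULL,
        cref_journ_id int NOT NULL,
        notes varchar(2000) NULL
    )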

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

    The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
  14. derrickleggett New Member

    On the indexes, you might consider a clustered index. That would put the data rows in the index's leaf pages, so reads would be faster. You have to weigh that against the overall gain/loss on writes versus reads.
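
    A sketch (a table can have only one clustered index, so check whether the primary key already uses it; the index name is made up):

    CREATE CLUSTERED INDEX CIX_TableReference_CompJourn
    ON Table_Reference (cref_comp_id, cref_journ_id)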

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
