SQL Server Performance

sp_recompile

Discussion in 'Performance Tuning for DBAs' started by ykchakri, Apr 2, 2004.

  1. ykchakri New Member

    Hi,

    I have a strange problem happening in our production environment. Certain stored procedure calls made through the application are occasionally taking an abnormally long time. When I trace these calls and run the same SPs with the exact same parameters through QA, they execute much faster.

    I've compared the execution plans and found that the plan is the culprit. For some reason, the SP calls through the application are using completely different (and apparently wrong) execution plans. If I recompile the SP using sp_recompile, then everything goes back to normal until I run into this problem again.

    Can anyone tell me what could be the reason for this behaviour? The SP calls from the application are embedded in VC++ code and are passed to the backend through a COM+ middle tier. Please let me know if you need more details.

  2. derrickleggett New Member

    First make sure you have auto update statistics on. If you do, then:

    If you look at the stored procedures, they probably have an IF statement in them that will run completely different statements based on the data entered. If that is not the case, they probably have a huge WHERE clause with many parameters that can return wildly different results.

    Either way, you will need to break the query into sub-queries to avoid the recompiles, or force a plan on the procedure. The best thing is to have separate queries so the execution plan is optimized for how the sprocs are being called. Put the logic into your application to call the right procedure, or have a "wrapper procedure" that looks at the parameters and calls the appropriate procedure, as sketched below.
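
    To make that concrete, here is a minimal sketch of the wrapper idea. The procedure, table, and parameter names below are hypothetical stand-ins, not anything from your system:

    -- Hypothetical example: each branch becomes its own procedure so it caches its own plan.
    CREATE PROCEDURE dbo.GetOrders_ByCustomer @CustomerID int
    AS
        SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID
    GO

    CREATE PROCEDURE dbo.GetOrders_ByDateRange @StartDate datetime, @EndDate datetime
    AS
        SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate BETWEEN @StartDate AND @EndDate
    GO

    -- The "wrapper" only contains the branching logic; each branch's plan is
    -- compiled and cached for the sub-procedure that actually runs it.
    CREATE PROCEDURE dbo.GetOrders
        @CustomerID int = NULL, @StartDate datetime = NULL, @EndDate datetime = NULL
    AS
        IF @CustomerID IS NOT NULL
            EXEC dbo.GetOrders_ByCustomer @CustomerID
        ELSE
            EXEC dbo.GetOrders_ByDateRange @StartDate, @EndDate
    GO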

    What is happening is that when a really different or complex parameter set comes in, the cached execution plan is no longer a good fit. It does not change back until the statistics change enough or until you run sp_recompile. Make sense?

    Derrick Leggett

  3. Luis Martin Moderator

    About auto update statistics on:
    Be careful: when it is on, performance sometimes goes down.
    I prefer to update statistics via a scheduled job and leave auto create statistics on, but that is a matter of opinion.


    Luis Martin
    Moderator
    SQL-Server-Performance.com
  4. derrickleggett New Member

    It is kind of a hard call to make. We also run an update statistics on each database at night. If we don't have auto update statistics on, though, we get some pretty messed-up things by the end of the day, given the number of transactions we run per day.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  5. satya Moderator

    Even if you have enabled AUTO UPDATE STATISTICS on the database, it is preferable to run UPDATE STATISTICS manually or on a schedule against the tables involved.
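
    For example, a scheduled job step along these lines (table names are placeholders only; FULLSCAN reads every row, so run it off-peak):

    -- Hypothetical nightly job step: refresh statistics on the heavily modified tables.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN
    UPDATE STATISTICS dbo.OrderDetails WITH FULLSCAN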

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  6. alzdba Member

    Also schedule DBCC UPDATEUSAGE every once in a while so that your space usage information stays accurate.
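
    For example (0 means the current database; the COUNT_ROWS option also corrects the row counts reported in sysindexes, which can take a while on very large tables):

    DBCC UPDATEUSAGE (0) WITH COUNT_ROWS, NO_INFOMSGS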
  7. Chappy New Member

    Satya: Even if you have enabled AUTO UPDATE STATISTICS on the database, it is preferable to run UPDATE STATISTICS manually or on a schedule against the tables involved.

    Can you tell me why this is the case?
    Do you mean *instead* of having auto update statistics on?
    I don't understand what benefit that would give, unless you mean taking a full sample at night.

    Cheers

    Paul
  8. satya Moderator

    Paul

    On one of the services at our end, I saw a performance slowdown on certain tables that receive numerous inserts/updates, even though we had AUTO STATISTICS enabled. As part of resolving the issue I manually executed UPDATE STATISTICS, and performance was much better than before.

    I also tested without manually executing UPDATE STATS, and performance degraded again.

    So in this regard I suggested scheduling UPDATE STATS on the tables that are involved in frequent updates and inserts.

    My reply was not intended to suggest removing AUTO STATS from the database.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  9. ykchakri New Member

    Hi,

    Thanks for all your suggestions. We have Auto Update Statistics on, and we're also manually updating statistics (with FULLSCAN) once every day.

    And thank you, Derrick, for your suggestion. Yes, there are several IF clauses in some of these SPs that will execute different batches of statements based on the parameters being passed.

    But these are the same procedures that were working fine a while ago, and there have been no major changes to them.

    Also, I want to reiterate that the same SP call with the same parameters executes fine through QA. Doesn't QA use the same cached execution plan as the application?
  10. satya Moderator

    Ideally, it is best to capture the SQL Profiler, Performance Monitor, and blocking output during the same timeframe. This timeframe must encompass a time when application performance goes from good to bad. The combination of this information will help you to get a clearer picture of where the performance slowdown is occurring.
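
    For the blocking part, even a simple snapshot captured on a schedule while the slowdown is happening can help; a rough sketch against the SQL Server 2000 system tables:

    -- Who is blocked, and by whom, at this moment.
    SELECT spid, blocked, waittype, lastwaittype, cmd, hostname, program_name
    FROM master.dbo.sysprocesses
    WHERE blocked <> 0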

    For information about recompilation of stored procedures, refer to this KB article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;243586. As for the events: the Prepare SQL event indicates that an ODBC, OLE DB, or DB-Library application prepared a Transact-SQL statement, or statements, for use. The Exec Prepared SQL event indicates that the application made use of an existing prepared statement to run a command.

    Compare the number of times these two events occur. Ideally, an application should prepare a SQL statement once and run it several times. This saves the optimizer the cost of compiling a new plan each time the statement is executed.


    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  11. ykchakri New Member

    Thanks for your suggestions, Satya. I've been running the trace with these two events (Exec Prepared SQL and Prepare SQL) for the past 3 days, but nothing has been captured yet. I still hit this problem a couple of times during those 3 days. What could be wrong?

    The fact that the same SP executes faster through QA leads me to believe that there is a problem somewhere in the application coding. If there's a problem with the cached execution plan, doesn't QA use the same cached execution plan as the application?
  12. satya Moderator

    Yes, QA uses the same cached plan that the SP/code compiled, and it is better to run sp_recompile before you run it again in QA. Occasionally run UPDATE STATS on the involved tables for optimum performance.

    Since the code is called from an embedded application, it is better to approach it from that angle to dig in and debug.


    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  13. itaxd New Member

    I have a similar problem, in that a COM+ application and QA are occasionally choosing the wrong plan.

    The procs I have are:

    Proc 1 has a WHILE loop that calls proc 2 approximately 8 times, each time with different parameters.

    Proc 2 consists of 4 SELECT statements wrapped in an IF...ELSE statement. The parameter passed from proc 1 determines which statement in proc 2 is executed.

    This works for long periods, but occasionally the wrong plan is chosen, causing serious performance issues.

    None of the previous solutions have had any effect on changing the plan back to the correct one. I've tried sp_recompile, DBCC DBREINDEX on the tables involved in proc 2, EXEC ... WITH RECOMPILE, and updating statistics.

    Any further help appreciated.
  14. satya Moderator

    How do you know it uses the wrong plan?
    During that period, do you get any network issues?
    How about DBCC checks and other maintenance tasks on the database?

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  15. itaxd New Member

    Hi Satya

    I have used SQL Profiler to capture the plans when the system is running well and when the system is running badly, and the plans are different.

    No network issues.

    I run updatestats each morning, and auto update stats is on.

    What DBCC checks would you recommend?

  16. itaxd New Member

    Also, I have noticed something very strange.

    There seems to be no compile plan or execution plan listed in syscacheobjects for procedure proc 2.

    If I create both proc 1 and proc 2 along with the required tables in another database on the same server and execute proc 1, both proc 1 and proc 2 appear in the cache and the usecounts increment for each when it executes again.
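
    For reference, the sort of check I'm running looks roughly like this (proc1/proc2 stand in for the real procedure names):

    -- List cached plans for the two procedures and how often each has been reused.
    SELECT cacheobjtype, objtype, dbid, usecounts, sql
    FROM master.dbo.syscacheobjects
    WHERE sql LIKE '%proc1%' OR sql LIKE '%proc2%'
    ORDER BY objtype, usecounts DESC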

    Can anyone explain this?
  17. satya Moderator

    Run DBCC PROCCACHE for information on the procedure cache, and try freeing up the cache by using DBCC FREEPROCCACHE.
    Then recompile all the involved SPs and update statistics on the tables.
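
    A rough sketch of that sequence (object names are placeholders; note that DBCC FREEPROCCACHE clears the plan cache for the whole server, so everything recompiles on its next execution):

    DBCC PROCCACHE                          -- summary information about the procedure cache
    DBCC FREEPROCCACHE                      -- clear all cached plans (server-wide)
    EXEC sp_recompile 'dbo.proc2'           -- mark the procedure so it recompiles on next run
    UPDATE STATISTICS dbo.SomeHotTable      -- refresh stats on the tables proc2 touches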

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  18. Dave Wells New Member

    quote:Originally posted by ykchakri

    Hi,
    I've compared the execution plans and found that the plan is the culprit. For some reason, the SP calls through the application are using completely different (and apparently wrong) execution plans. If I recompile the SP using sp_recompile, then everything goes back to normal until I run into this problem again.

    I've had the same problem before, with different applications using different execution plans. It came down to using a different connection string from different places. Some means of connecting (e.g. ADO.NET) have settings that alter the connection string (i.e. add certain default options).
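
    If that is what is happening here, one thing worth checking is whether the application connection and Query Analyzer run with different SET options, since plans compiled under different SET options are cached separately. A rough sketch (the LIKE filter is a placeholder for the real procedure name):

    -- Run from Query Analyzer, and compare against the application session's options
    -- (visible in the text of Profiler's ExistingConnection event).
    DBCC USEROPTIONS

    -- setopts in syscacheobjects shows the option bitmask each cached plan was compiled with;
    -- two plans for the same procedure with different setopts values point to this problem.
    SELECT objtype, usecounts, setopts, sql
    FROM master.dbo.syscacheobjects
    WHERE sql LIKE '%YourProcName%'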
  19. hominamad Member

    ykchakri,

    We're having some issues similar to yours in our system. We're still in the process of trying to analyze the situation. Updatestats is the only way for our system to speed back up at certain times of the day. We found that AUTO UPDATE_STATS may not have a low enough threshold for the type of usage our system gets. We've been experimenting with everything from running UpdateStats nightly (resample all) to a custom job which runs every few minutes, examines rowmodctr values on our major tables, and based on % change runs updatestats on only that table.

    I wonder if these queries that get sudden slowdowns could be good candidates for index hints, etc. I'm looking forward to hearing more suggestions on this thread, and it makes me feel better that I'm not the only one with this issue! :)

    ~H
  20. satya Moderator

    As an interim solution you can get by with running UPDATE STATS on the table during the slow performance, but in production it may have an adverse effect.

    Previously, in a situation similar to this, we scheduled a daily job to reindex all the tables involved in the slow performance, and during the day we occasionally performed UPDATE STATS and recompiled the critical stored procedures, since it was a third-party-developed application.
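
    A rough sketch of that kind of daily step (table and procedure names are examples only; DBCC DBREINDEX rebuilds the indexes, which also refreshes the index statistics with a full scan):

    -- Hypothetical nightly maintenance step for the problem tables.
    DBCC DBREINDEX ('dbo.Orders')
    DBCC DBREINDEX ('dbo.OrderDetails')
    EXEC sp_recompile 'dbo.Orders'     -- procedures referencing the table recompile on their next execution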

    HTH

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  21. ykchakri New Member

    quote:
    I've had the same problem before, with different applications using different execution plans. It came down to using a different connection string from different places. Some means of connecting (e.g. ADO.NET) have settings that alter the connection string (i.e. add certain default options).

    Hi Dave,

    Can you explain a bit more about this? Because I suspect this could be the problem in our case. We have our middle-tier (COM+) code written in C++, and I was told (by my developers) that the way connections are being made to the backend changed recently.
  22. ykchakri New Member

    Hi Hominamad,

    Yes, it's good to know that I'm not the only one with these issues. We stopped depending on AUTO UPDATE_STATS a long time ago. We run a job once a day that checks STATS_DATE to decide which tables need the update.
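
    The check is roughly along these lines (a simplified sketch, not the exact job):

    -- Find indexes/statistics that haven't been updated in more than a day.
    SELECT OBJECT_NAME(i.id) AS TableName, i.name AS IndexName,
           STATS_DATE(i.id, i.indid) AS LastStatsUpdate
    FROM sysindexes i
    WHERE i.indid BETWEEN 1 AND 254
      AND OBJECTPROPERTY(i.id, 'IsUserTable') = 1
      AND STATS_DATE(i.id, i.indid) < DATEADD(day, -1, GETDATE())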

    But your idea of looking at rowmodctr sounds better. Can you please post your script?
  23. Luis Martin Moderator

    Yes, I do the same as you, Chakri. I would like to see the script too.

    Luis Martin
    Moderator
    SQL-Server-Performance.com

    The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
  24. hominamad Member

    I'm not allowed to post the exact script here but I'll give you an idea as to what we're doing. Maybe somebody here will have an idea on how to improve this process.

    We created a cursor which loops through all of our tables and retrieves the rowmodctr values from the sysindexes table. We also get the number of rows in each table, also from sysindexes. We then have an IF statement which looks like:

    IF (((@intRowModCtr * 100) / @intNumRowsTable) > @intPercent) AND @intNumRowsTable > 100
    EXEC ('UPDATE STATISTICS ' + @strTableName + ' WITH RESAMPLE, ALL')


    @intPercent is a threshold that we made a configurable setting in our application. We currently have it set to 5%. So if the table has changed by more than 5% and has more than 100 rows, we run update stats on only that table. We also insert a message into another table so that we can easily keep track of how often update stats is being run.

    This logic is written into a stored procedure, which is run every 5 minutes by a job. We're only experimenting with this right now. We found that a nightly update_stats may not be enough for our application. It seems like this may be overkill, though. If we had the need to create our own custom AUTO UPDATE STATS job, and our application is only a small-to-medium program, I'm sure many other people must be having this same problem.
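
    To give a more concrete idea of the shape of it, here is a rough, simplified reconstruction (not our actual script; the log table and variable names are placeholders, and the comparison is rearranged to avoid dividing by a zero row count):

    DECLARE @strTableName sysname, @intRowModCtr int, @intNumRowsTable int, @intPercent int
    SET @intPercent = 5                    -- configurable threshold: % of rows modified

    -- One row per table: indid 0 is a heap, indid 1 is the clustered index.
    DECLARE curTables CURSOR FAST_FORWARD FOR
        SELECT OBJECT_NAME(id), rowmodctr, [rows]
        FROM sysindexes
        WHERE indid IN (0, 1)
          AND OBJECTPROPERTY(id, 'IsUserTable') = 1

    OPEN curTables
    FETCH NEXT FROM curTables INTO @strTableName, @intRowModCtr, @intNumRowsTable
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Same test as the fragment above, written without division.
        IF @intNumRowsTable > 100 AND (@intRowModCtr * 100) > (@intPercent * @intNumRowsTable)
        BEGIN
            EXEC ('UPDATE STATISTICS ' + @strTableName + ' WITH RESAMPLE, ALL')
            -- INSERT INTO dbo.StatsUpdateLog (TableName, RunDate) VALUES (@strTableName, GETDATE())
        END
        FETCH NEXT FROM curTables INTO @strTableName, @intRowModCtr, @intNumRowsTable
    END
    CLOSE curTables
    DEALLOCATE curTables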

    That's why we're trying to figure out what other factors in our application could be causing the need to run update stats so often. We find that at some points in the day the application slows down so much that it becomes unusable. The only way to "unlock" it is to run update stats.

    I would appreciate any insight anyone could provide to this puzzle! Thanks!

    ~H
  25. simas New Member

    Hominamad,
    Thank you for posting your logic; that is an interesting idea. I would also second derrickleggett's comments and check the code that is being run. I had similar issues in my system (slowdowns until you manually update statistics), and it was a case of both nested IF ELSE logic and huge WHERE statements joining a dozen-plus tables in a single statement. Rewriting the stored procedures fixed the issue.
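
    For anyone hitting the same thing, the huge-WHERE case usually looks something like this "catch-all" shape (columns and parameters here are made up purely for illustration); a single plan has to cover every combination of optional parameters, which is what the rewrite into separate procedures/statements avoids:

    -- Illustrative only: the kind of clause that forces one plan to serve very different searches.
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders o
    JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
    WHERE (@CustomerID IS NULL OR o.CustomerID = @CustomerID)
      AND (@StartDate  IS NULL OR o.OrderDate >= @StartDate)
      AND (@EndDate    IS NULL OR o.OrderDate <= @EndDate)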

    Simeon
