SQL Server Performance

Performance improvement after detach/attach

Discussion in 'Performance Tuning for DBAs' started by drewm, Aug 30, 2005.

  1. drewm New Member

    I've got a SQL Server 2000 database serving a classic ASP web application. The performance of the database keeps nosediving and I can't figure out why.

    When performing normally, running a known query will return a result set in 5 seconds. After some time, or more commonly after a big data import (the system allows imports from CSV files), the query can take more than 20 seconds to return the same data.

    If I go and manually clean out all the data from the import (so the tables have the same numbers of rows as before) the query will still run at >20secs. I've also tried rebuilding the indexes and defragmenting the data disk, and running the maintenance tasks to compact the log files. Performance remains in the 'sucks' region.

    However, if I detach the database and immediately reattach it, everything perks up and the query runs in 5 seconds again.

    I'm not entirely sure what happens during the detach/attach process. Does anyone here have any ideas as to what might be fixing the issue (and therefore any clues to the cause)?
  2. bradmcgehee New Member

    Is the database set to automatically update statistics, or is it set to do so only manually?

    Have you only tried to update statistics manually after a performance degradation to see if this fixes it?

    -----------------------------
    Brad M. McGehee, MVP
    Webmaster
    SQL-Server-Performance.Com
  3. Luis Martin Moderator

    High disk fragmentation?
    Missing statistics?

    Luis Martin
    Moderator
    SQL-Server-Performance.com

    One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important
    Bertrand Russell


    All postings are provided “AS IS” with no warranties for accuracy.



  4. drewm New Member

    Both Auto Update Statistics and Auto Create Statistics are enabled. I've not tried updating stats manually - how is this done?

    The disk was fairly heavily fragmented, but I defragged and saw pretty much no performance improvement. It doesn't appear to be fragging at an unusual rate.
  5. ghemant Moderator

    quote:Originally posted by drewm

    Both Auto Update Statistics and Auto Create Statistics are enabled. I've not tried updating stats manually - how is this done?

    The disk was fairly heavily fragmented, but I defragged and saw pretty much no performance improvement. It doesn't appear to be fragging at an unusual rate.

    Hi,
    as BOL refers:

    1) DBCC UPDATEUSAGE
    Reports and corrects inaccuracies in the sysindexes table, which may result in incorrect space usage reports by the sp_spaceused system stored procedure.

    Syntax:
        DBCC UPDATEUSAGE
            ( { 'database_name' | 0 }
                [ , { 'table_name' | 'view_name' }
                    [ , { index_id | 'index_name' } ] ]
            )
            [ WITH [ COUNT_ROWS ] [ , NO_INFOMSGS ] ]

    2) sp_updatestats
    Runs UPDATE STATISTICS against all user-defined tables in the current database.

    3) UPDATE STATISTICS
        UPDATE STATISTICS table | view
            [ index | ( statistics_name [ ,...n ] ) ]
            [ WITH
                [ [ FULLSCAN ]
                  | SAMPLE number { PERCENT | ROWS }
                  | RESAMPLE ]
                [ [ , ] [ ALL | COLUMNS | INDEX ] ]
                [ [ , ] NORECOMPUTE ]
            ]

    Regards

    Hemantgiri S. Goswami
    ghemant@gmail.com
    "Humans don't have Caliber to PASS TIME, Time itself Pass or Fail Humans" - by Hemant Goswami
  6. bcamp New Member

    HEYYYY!!!! I guess I should have searched the forums first. I just posted about the same issue (see post: "stored proc faster after restore or re-attach", http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=10075). Unfortunately, I don't have an explanation yet. I see above where Luis Martin mentioned fragmentation. If Win's defrag tool is to be believed on our SAN drives, we do have high fragmentation. But I still don't understand why we get a performance improvement by detaching/attaching with the same files in the same directory.
  7. joechang New Member

    my inclination is to think that a detach/attach has the effect of a DROPCLEANBUFFERS and DROPPROCCACHE
    why don't you try that and compare performance
    also run perfmon for all the physical disk counters
    1) when the query is bad
    2) after a detach/attach
    3) after the above
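
    A sketch of those cache-clearing commands (run them on a test system first, since they affect the whole instance; on SQL Server 2000 the procedure-cache command is spelled DBCC FREEPROCCACHE, and the query name below is a hypothetical placeholder):

    ```sql
    -- Flush cached data pages so the next run reads from disk
    -- (instance-wide; avoid running this on a busy production server)
    DBCC DROPCLEANBUFFERS;

    -- Flush the procedure cache so plans are recompiled on next execution
    DBCC FREEPROCCACHE;

    -- Then time the known query before and after, for example:
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;
    -- EXEC dbo.KnownSlowQuery;  -- hypothetical placeholder for the test query
    ```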
  8. bcamp New Member

    Joe, Thanks for responding. In our case, that might make some sense given that the WHERE clause of the proc's query varies considerably.


    quote:Originally posted by joechang

    my inclination is to think that a detach/attach has the effect of a DROPCLEANBUFFERS and DROPPROCCACHE
    why don't you try that and compare performance
    also run perfmon for all the physical disk counters
    1) when the query is bad
    2) after a detach/attach
    3) after the above
  9. drewm New Member

    quote:Originally posted by joechang

    my inclination is to think that a detach/attach has the effect of a DROPCLEANBUFFERS and DROPPROCCACHE

    Thanks Joe. I tried both of these (although I think you meant FREEPROCCACHE, rather than DROP) and they did not restore the performance. In fact, FREEPROCCACHE reduced performance further, as would naturally be expected.

    It's all very odd indeed.

    Any other ideas?
  10. drewm New Member

    quote:Originally posted by bradmcgehee

    Have you only tried to update statistics manually after a performance degradation to see if this fixes it?

    Just tried this using sp_updatestats, and no joy. It still runs slow.

    Any other ideas? This is puzzling.
  11. joechang New Member

    if you detach and then attach, the first run should be slow because the data must be read from disk?
    same for DROPCLEANBUFFERS, so what happens on the second run for both detach/attach and DROPCLEANBUFFERS
  12. bcamp New Member

    In our case, the performance improvement was noticed on the 1st run after a detach/attach.


    quote:Originally posted by joechang

    if you detach and then attach, the first run should be slow because the data must be read from disk?
    same for DROPCLEANBUFFERS, so what happens on the second run for both detach/attach and DROPCLEANBUFFERS
  13. drewm New Member

    Exactly the same here. Performance picks up from the very moment the DB is attached.
  14. joechang New Member

    is there a difference in execution plans between good and bad
    is there a difference in disk activity
    are there open transactions
    are there open cursors
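
    Those checks can be sketched roughly as follows on SQL Server 2000 (the database name is a placeholder):

    ```sql
    -- Capture the estimated execution plan as text
    -- (paste the slow query between the GO separators)
    SET SHOWPLAN_TEXT ON;
    GO
    -- the known slow query goes here
    GO
    SET SHOWPLAN_TEXT OFF;
    GO

    -- Report the oldest open transaction in the database
    DBCC OPENTRAN ('YourDatabase');  -- 'YourDatabase' is a placeholder name

    -- List connections that currently hold open transactions
    SELECT spid, open_tran, cmd, waittype, lastwaittype
    FROM master.dbo.sysprocesses
    WHERE open_tran > 0;
    ```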
  15. bcamp New Member

    For us, the exec plans are almost identical between good & bad. We haven't compared disk activity, but we monitored avg disk queue length for the data drive & it appears to be well within tolerable limits.
  16. bcamp New Member

    drewm, I'm curious... Have you made any progress on this? Unfortunately, we still have the same prob. & the apps folks are getting a little frustrated.
  17. satya Moderator

    Can you confirm the schedule of the optimization jobs and database consistency jobs, and also whether there were any issues during the data import? Confirm the level of service pack on Windows and SQL too.
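
    The service pack level on the SQL side can be checked with something like (SERVERPROPERTY is available from SQL Server 2000 onward):

    ```sql
    -- Report version, service pack level, and edition
    SELECT SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')   AS service_pack,
           SERVERPROPERTY('Edition')        AS edition;

    -- Or the full build string
    SELECT @@VERSION;
    ```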

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  18. elevnchk New Member

    We have seen a similar performance improvement on simply restarting SQL Server. I'd be curious to know whether you see the same effect. Never found a reasonable explanation for it, though...
  19. satya Moderator

    Elevnchk

    Can you confirm what has been asked above, as optimization jobs will definitely improve the performance, and sometimes, due to bad code, this may not help much as compared to fine-tuning the code.

    You can take help from PROFILER for slow running queries and PERFMON for system resource usage.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  20. elevnchk New Member

    No defragging, etc. was done that would have impacted this phenomenon, if that's what you're asking. We've seen it happen when SQL Server Agent was turned off, so no jobs of any type were running, and when Agent was on. It's also a consistent phenomenon, further implying that it wasn't simply a case of hitting the right place in our scheduled job cycle. There is no obvious change in the perfmon profile (just monitoring the basic three counters). We've never tried to examine the load with profiler, but we've seen this happen while doing load testing -- i.e., we were running a scripted load multiple times against our server, to get statistical data, and consistently saw a 4-5 fold increase in performance for the first few minutes after having restarted the service. So it isn't just a case of hitting a bad query.

    We're running 2000 with SP4, although we haven't verified this phenomenon under SP4. We did see it under SP3 for certain, and maybe SP2.
  21. satya Moderator

  22. elevnchk New Member

    That might be a reasonable hypothesis, except for two things:

    1) The period of increased performance seems to last roughly the same amount of time, regardless of the amount or type of work done on the system. If your hypothesis were correct, we'd expect that running a load comprised entirely of selects should allow the increased performance to continue indefinitely. We have run a variety of workloads on the system, including some which perform minimal data changes.

    2) Defragmenting (or even rebuilding) our indexes does not produce the same performance gains.

    As I mentioned above, the same phenomenon is observable both in the presence and absence of scheduled jobs running -- restarting SQL Server makes everything run faster for the first few minutes after restart. Also, if your analysis were correct, it would imply that it only takes a few minutes for fragmentation to degrade the database performance by 4-5 fold, and after that the effect of increasing fragmentation only has a very, *very* gradual impact on performance (since that's the behavior we're seeing).
  23. bcamp New Member

    Update on our situation... We discovered that a batch job executed by an app was responsible for irrevocably causing the proc's performance to tank following a detach/attach. According to the apps people, this job only affects a couple hundred rows (inserts & updates). Upon this discovery we tried, as on previous occasions, rebuilding indexes & updating stats, but it didn't help.

    After that we added a previously untried index & that seems to have prevented the performance from taking a nose dive, even after the app's batch job. I intentionally avoided using any reference to "fixed." The reason is that I still don't think this explains why simply detaching/attaching a db should dramatically improve its performance, regardless of indexes/index fragmentation. Accordingly, why was it so much faster on a log-shipped standby server with similar hardware? (The batch job never affected its performance.)
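
    (A sketch of the kind of index change described - the table and column names below are hypothetical, not the actual schema:)

    ```sql
    -- Hypothetical: a nonclustered index covering the WHERE-clause columns
    -- of the query hit by the batch job (names are illustrative only)
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_StatusDate
        ON dbo.Orders (CustomerId, StatusDate);

    -- Refresh statistics on the table afterwards
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
    ```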

    We're SQL 2K SP3 on Win 2K.

    Thanks.
  24. satya Moderator

    The general theory is that detach/attach will have the upper hand initially, because all the plans will be compiled afresh and executed for the first time while the cache fills with these new plans; slowly you can see the performance degrade due to bad query plans and complex queries with no proper indexes.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  25. mmarovic Active Member

    I don't quite agree with that theory. There is a good reason why plans are cached: cached plans decrease CPU usage, because plans don't have to be built each time the same query is executed. That means, for frequently run, short-execution-time queries, you will have better performance when the plan is in cache. So, in an OLTP environment, where your average query is short and is repeated many times, you can expect performance degradation after a detach/attach. For queries where execution time outweighs compilation time, you can expect the same or better performance, because a better execution plan may be used.
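
    On SQL Server 2000 you can see which plans are cached and how often they are reused, which makes this trade-off visible, with something like:

    ```sql
    -- Cached plans and their reuse counts (SQL Server 2000)
    SELECT cacheobjtype, objtype, usecounts, sql
    FROM master.dbo.syscacheobjects
    WHERE cacheobjtype = 'Compiled Plan'
    ORDER BY usecounts DESC;
    ```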
  26. satya Moderator

    Whatever the plan is, when it is compiled for the first time you tend to see some improvement as compared to previously existing plans.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  27. mmarovic Active Member

    Not if recompilation takes more time than the actual query execution, even for the worst plan.
    Performance improvement comes when a new plan is better than the old one already in the cache, which may or may not be true.
    What you can expect with high probability is that a new plan will be the same as or better than the old one.
  28. bcamp New Member

    When we still had the prob, we tried the "WITH RECOMPILE" option in the proc, but it didn't help. Also, the execution plans were the same for the primary server & the standby server, which has similar hardware, but didn't suffer the same performance prob.
