SQL Server Performance

Update Statistics has a negative impact on performance. Why?

Discussion in 'Performance Tuning for DBAs' started by SQL_Girl, Aug 16, 2007.

  1. SQL_Girl New Member

    Hi folks.
    I am experiencing something "weird" in one of my environments. I have a ±300 GB database against which both batch and on-line transactions take place. The "auto update statistics" option is switched on, and an "Update Statistics" job runs weekly against the whole database.
    We started to realize that after the weekly "update stats" job was run, certain batch jobs' run-times suddenly increased by about 400% (from 8 minutes to 54 minutes, to name one example). I tested this in our Pre-Production environment as well and got the same results: a definite increase in run-time directly after the "update stats" job.
    Have any of you had similar experiences? What could the reason for this be?
    Would it be wise to rather cancel the "update stats" job, seeing that we do have "auto update statistics" on, and we re-index on a monthly basis, which also updates the stats? Oddly enough, after the re-indexing the timings still look as good as or better than before, even though the stats get updated then too.
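    For reference, the weekly job boils down to roughly the following (the table name in the check below is just an example):

    -- sampled refresh of all statistics in the database (roughly what our weekly job does)
    EXEC sp_updatestats

    -- check when the statistics on a given table were last touched
    SELECT name, STATS_DATE(id, indid) AS last_updated
    FROM sysindexes
    WHERE id = OBJECT_ID('dbo.BatchOrders') AND indid > 0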
    Please throw some ideas/thoughts at me. :)
    Ciao.
  2. satya Moderator

    Have you observed any transaction log or tempdb contention during this behaviour? Also, please state the edition and version of SQL Server, including service packs.
  3. martins New Member

    I might be missing the plot completely, but my take on this is that maybe your indexes get fragmented quite a lot, due to the amount or nature of your transactions. When the statistics are updated the optimizer might then decide not to use your indexes anymore (or scan the whole index), which will result in the time increase.
    All this gets sorted of course when you re-index...
    Have a look at some of your execution plans before and after the stats update, and see if they are any different. It might be a good idea to rebuild some of your indexes more frequently to avoid this, if fragmentation is the problem.
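    A rough sketch of how to do that comparison (the proc name is made up); DBCC SHOWCONTIG will also tell you whether fragmentation is actually the problem:

    -- show the estimated plan without executing; run this before and after the stats update
    SET SHOWPLAN_TEXT ON
    GO
    EXEC dbo.usp_NightlyBatch   -- hypothetical batch proc
    GO
    SET SHOWPLAN_TEXT OFF
    GO

    -- check fragmentation on all of the table's indexes
    DBCC SHOWCONTIG ('BatchOrders') WITH ALL_INDEXES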
  4. SQL_Girl New Member

    Thanks martins! Your explanation makes complete sense. I will investigate further. It just did not occur to me that this might be the reason! Sometimes one looks for a complicated explanation, when in fact it is much simpler and right in front of you. :)
    Ciao.
  5. martins New Member

    No probs :)
    Please remember to post your findings/results/feedback for reference.
  6. Rory New Member

    Hi,
    I also have a similar situation with a slightly smaller database (±60 GB). An insert (400,000 rows) into my main table (10+ million rows), contained in a sproc, used to take between 5 and 8 minutes but now takes 5 hours following a reindex. I know this because nobody had reindexed the database for many months, and then someone thought it might be a good idea.
    So does anybody have any idea why reindexing makes matters WORSE? I am planning on recompiling the sprocs concerned, and possibly taking the strategy of dropping and recreating the indexes on the target table on either side of the insert (it is an overnight batch job).
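    For the recompile part I am planning something along these lines (the proc name here is made up):

    -- force a fresh query plan for the insert proc on its next execution
    EXEC sp_recompile 'dbo.usp_NightlyInsert'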
    Any advice gladly received - and I promise to post my findings...
  7. satya Moderator

    Have you attempted to perform intermittent UPDATE STATISTICS on this table?
    DBCC DBREINDEX will do the same as dropping and recreating, but sometimes (on SQL 2000) you will get better results if you drop all the indexes on this table and recreate them after the load, say weekly.
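    As a sketch (the table name and fill factor are examples only):

    -- rebuild every index on the table, here with a 90% fill factor
    DBCC DBREINDEX ('dbo.MainTable', '', 90)

    -- follow up with a full-scan statistics refresh
    UPDATE STATISTICS dbo.MainTable WITH FULLSCAN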
  8. martins New Member

    Could be a couple of things, Rory. Let's say, for instance, you reindex with a 90% fill factor. A large insert like the one you describe might lead to the creation of many extra index pages, and sorting has to take place on all of those. Or it might mean all of your index pages need a re-order... which can be quite intensive depending on the number of indexes and the type of fields being indexed. If one of these is a clustered index, your physical data needs a re-order... you can imagine how long that can take for 10 million rows.
    Best practice when running batch inserts of this size is to drop all indexes before the load and re-create them afterwards, as sketched below. Potentially it will save you a lot of time.
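    Something along these lines (all object names are placeholders):

    -- drop the non-clustered index before the nightly load (SQL 2000 syntax)
    DROP INDEX MainTable.IX_MainTable_TransactionDate

    -- ... run the big insert here ...

    -- re-create the index once the load has finished
    CREATE NONCLUSTERED INDEX IX_MainTable_TransactionDate
    ON dbo.MainTable (TransactionDate)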
    Please post the layout of your table plus the indexes. It might help in explaining why this is happening.
  9. Rory New Member

    Hi guys - both posts really spot on. The fill factor was set at 90% for some of the indexes, but most critically the maintenance job was created with the update-statistics option turned off. We thought that, as statistics were set to auto-update for the associated tables, this would be covered. So a comprehensive UPDATE STATISTICS <table_name> not only restored performance but actually improved it, with the sprocs also benefitting from the reindex.
    The main reason for the missing update stats was that we used the option "Reorganise pages with the original amount of free space" within the maintenance plan (space is tight on the volume). This disables the 'Update statistics' option. We ended up adding a consecutive step to the scheduler to run UPDATE STATISTICS after the reindex (even though this causes Enterprise Manager to complain about the additional step when you edit the plan...).
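    The extra step itself boils down to something like this (our real table name swapped for a placeholder; we went for a full scan, but the sampling option is a choice):

    -- separate scheduled step, run straight after the reindex
    UPDATE STATISTICS dbo.MainTable WITH FULLSCAN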
    Thanks again - definitely one for the learning curve!
    PS - the only problem with dropping all the indexes is that they take hours to re-create - much longer than the insert takes now that it is fixed - but it would be good to know a faster way of rebuilding indexes...
  10. martins New Member

    Glad you got sorted, and thanks for posting your feedback.
    :D