SQL Server Performance

Clustered Index Kills Update

Discussion in 'T-SQL Performance Tuning for Developers' started by jamesdterry, Nov 3, 2002.

  1. jamesdterry New Member

    Here is a simplified version of a table I am having an issue with:

     create table jobnumbers (
         job_id integer,
         phone_number char(10),
         connect_time integer
     )

    I often need to grab all the records for a specific job, so I created a clustered index on job_id. job_id is non-unique. This worked great. I also added a non-clustered index on phone_number and job_id together.
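    Roughly like this (the index names here are just placeholders, not my real ones):

     -- Clustered index on the non-unique job_id column
     create clustered index ix_jobnumbers_job_id
         on jobnumbers (job_id)

     -- Non-clustered index covering the update's WHERE clause
     create nonclustered index ix_jobnumbers_phone_job
         on jobnumbers (phone_number, job_id)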

    I have some other code that runs updates like this:

    update jobnumbers set connect_time = connect_time + x where job_id = y and phone_number = '8885551212'

    Before I added the clustered index I could do a few thousand of these updates per second. After adding the clustered index, I get fewer than 10 per second.

    It doesn't make any sense to me.

    I replaced the clustered index with a non-clustered index on the job_id field; my reporting is now much slower, but my updating works fine. Has anyone seen anything like this, and do you have any suggestions? Thanks in advance.
  2. bradmcgehee New Member

    It is possible that you are experiencing a lot of page splitting during the INSERTs, which can slow performance. If you like, you can use Performance Monitor's SQLServer:Access Methods: Page Splits/sec counter to see how many you are experiencing.
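    If you prefer T-SQL to Performance Monitor, you can read the same counter from the sysperfinfo system table in SQL Server 7.0/2000:

     -- Page Splits/sec is stored as a cumulative count since the server
     -- started; sample it twice and divide the difference by the elapsed
     -- seconds to get an actual per-second rate.
     select cntr_value
     from master..sysperfinfo
     where object_name like '%Access Methods%'
       and counter_name = 'Page Splits/sec'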

    If the clustered index is on an incrementing key, then page splits should not be an issue because all the new pages are added at the end. But if the clustered index is non-incrementing, page splits are a very real likelihood.

    If you are getting a lot of page splits, you first need to check what the fill factor is for the clustered index. If you created the clustered index using the default fill factor of 0, then this will definitely cause page splits, because there is no room left for new rows on existing pages, and splits have to occur. If this is the case, try rebuilding the clustered index using a fill factor of, let's say, about 90% as a starting point. This may or may not be the ideal fill factor for your situation, but it is a good starting point, and you can adjust it if need be.
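    For example, something like the following rebuilds an index with a 90% fill factor (substitute your own table and index names):

     -- Rebuilds the named index, leaving roughly 10% free space per page
     -- so updates and inserts have room before a split is needed
     dbcc dbreindex ('jobnumbers', 'ix_jobnumbers_job_id', 90)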

    Also, for optimum performance, you will want to rebuild the clustered index periodically (such as once a week) to ensure that the fill factor is maintained. If you don't rebuild the indexes, then you will run into the same problem over and over again.

    Brad M. McGehee
  3. sql777 New Member

    doesn't SQL Server 2000 automatically rebuild index stats by itself?
  4. bradmcgehee New Member

    Index statistics are created automatically, if the appropriate database option is set. Here's a tip from my website that explains auto stats in more detail:

    When "auto create statistics" is turned on (which it is by default), statistics are automatically created on all columns used in the WHERE clause of a query. This occurs when a query is optimized by the Query Optimizer for the first time, assuming the column doesn't already have statistics created for it. The addition of column statistics can greatly aid the Query Optimizer so that it can create an optimum execution plan for the query.

    If this option is turned off, then missing column statistics are not automatically created, which can mean that the Query Optimizer may not be able to produce the optimum execution plan, and the query's performance may suffer. You can still manually create column statistics if you like, even when this option is turned off.

    There is really no down-side to using this option. The very first time that column statistics are created, there will be a short delay as they are created before the query runs the first time, causing the query to potentially take a little longer to run. But once the column statistics have been created, each time the same query runs, it should now run more efficiently than if the statistics did not exist in the first place. [7.0, 2000] Added 9-20-2002
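    If you want to check whether the option is currently on for a particular database, DATABASEPROPERTYEX will tell you (substitute your own database name; it returns 1 when the option is on):

     select databasepropertyex('mydb', 'IsAutoCreateStatistics')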


    To provide the up-to-date statistics the query optimizer needs to make smart query optimization decisions, you will generally want to leave the "auto update statistics" database option on (the default setting). This helps to ensure that the optimizer statistics are valid, helping to ensure that queries are properly optimized when they are run.

    But this option is not a panacea. When a SQL Server database is under very heavy load, sometimes the auto update statistics feature can update the statistics on large tables at inappropriate times, such as the busiest time of the day.

    If you find that the auto update statistics feature is running at inappropriate times, you may want to turn it off, and then manually update the statistics (using UPDATE STATISTICS) when the database is under a less heavy load.
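    For example, to refresh the statistics on a single table during a quiet period:

     -- Rebuilds all statistics on the table from a full scan of its rows,
     -- rather than the sampled scan the automatic update uses
     update statistics jobnumbers with fullscan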

    But again, consider what will happen if you do turn off the auto update statistics feature. While turning this feature off may reduce some stress on your server by not running at inappropriate times of the day, it could also cause some of your queries not to be properly optimized, which could also put extra stress on your server during busy times.

    Like many optimization issues, you will probably need to experiment to see whether turning this option on or off is more effective for your environment. But as a rule of thumb, if your server is not maxed out, then leaving this option on is probably the best decision. [7.0, 2000] Updated 9-20-2002

    Brad M. McGehee
  5. jamesdterry New Member

    I am not doing any inserts, only updates (that don't change any of the keys) and selects. I am going to install SP2 and see if that helps.
  6. bradmcgehee New Member

    An update can cause page splits, assuming the new updated data can't fit into the current position on the page. Have you used Performance Monitor to check the status of your page splits yet? This will provide the data you need to help determine if page splitting is the problem. If page splitting is not the problem, then we need to try something else.

    Brad M. McGehee
