Tips for Rebuilding Indexes

Don’t reindex your tables while your database is in active production, as doing so can lock resources and cause problems for your users. Schedule reindexing during downtime or, at worst, during periods of very light database use. [6.5, 7.0, 2000, 2005] Updated 5-1-2006

*****

If you use the CREATE INDEX command to create or rebuild your indexes, the FILLFACTOR option has a companion sub-option called PAD_INDEX. If you don’t specify PAD_INDEX, the FILLFACTOR applies only to the leaf pages of the index, not to the intermediate index pages. But if you specify PAD_INDEX along with the FILLFACTOR option, then when the index is created, the FILLFACTOR percentage is also applied to the intermediate index pages. [6.5, 7.0, 2000, 2005] Updated 5-1-2006
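
For example (the table, column, and index names below are made up for illustration, and the older non-parenthesized WITH syntax is used so the statement runs on the earlier versions as well), this statement leaves 20 percent free space on both the leaf and the intermediate index pages:

-- FILLFACTOR = 80 leaves 20 percent free space; PAD_INDEX applies that
-- same fill factor to the intermediate index pages as well.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH PAD_INDEX, FILLFACTOR = 80

Without PAD_INDEX, the 80 percent fill factor would be applied only to the leaf pages.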

*****

If you want to rebuild a clustered index using the CREATE INDEX command, and the table also has non-clustered indexes, you get the best performance by also specifying the DROP_EXISTING option. DROP_EXISTING includes optimizations that prevent the overhead of rebuilding the table’s non-clustered indexes twice. [7.0, 2000, 2005] Updated 5-1-2006
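
As a brief sketch (the table and index names are hypothetical), rebuilding an existing clustered index in place looks like this:

-- Rebuild the existing clustered index in a single step; the table's
-- non-clustered indexes are not rebuilt twice, as they would be if the
-- clustered index were dropped and then recreated separately.
CREATE CLUSTERED INDEX CIX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH DROP_EXISTING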

*****

SQL Server 2000 has a command called DBCC INDEXDEFRAG, which is used to defrag clustered and non-clustered indexes in a table or indexed view. It does this by defragging and compacting the leaf level of the index so that the physical order of the index pages matches the left-to-right logical order of the leaf nodes, which increases performance. Using DBCC INDEXDEFRAG instead of DBCC DBREINDEX is often beneficial because it does not hold locks for long periods the way DBCC DBREINDEX does. This means it can be run during production without significantly affecting performance, although any maintenance task like this should ideally still be scheduled during slow periods or downtime.

On the negative side, DBCC INDEXDEFRAG takes longer to run than DBCC DBREINDEX, and statistics are not automatically updated. This means that if you use DBCC INDEXDEFRAG, you will also need to run UPDATE STATISTICS. [2000] Updated 5-1-2006
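
A minimal sketch (the database, table, and index names are hypothetical) of defragging a single index and then refreshing its statistics:

-- Defrag the leaf level of one index: (database, table, index).
DBCC INDEXDEFRAG (Sales, Orders, IX_Orders_CustomerID)

-- DBCC INDEXDEFRAG does not update statistics, so refresh them afterward.
UPDATE STATISTICS Orders IX_Orders_CustomerID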

*****

One way to speed up reindexing your databases is to be sure that your SQL Server database and log files are physically defragged before you reindex. By ensuring that your database and log files are contiguous (defragged), reindexing will not only be faster, it will also require fewer I/O resources, helping SQL Server’s overall performance. Windows 2000 and Windows Server 2003 include a defragging utility for this purpose, although the built-in tool will only defrag closed SQL Server database and log files. Ideally, you should use a third-party defragging utility designed to defrag open SQL Server database and log files. [6.5, 7.0, 2000, 2005] Updated 5-3-2005

*****

According to Microsoft, the total number of pages in a table determines how much page fragmentation affects SQL Server’s performance. For example, if a table has fewer than 100 data pages, reindexing it to remove fragmentation won’t benefit performance. This is because other factors, such as hardware caches, SQL Server caching, and SQL Server’s read-ahead functionality, hide the negative effects of fragmentation. On the other hand, very large tables can benefit greatly from reindexing, because at that size fragmentation can significantly increase disk I/O and hurt performance. [7.0, 2000, 2005] Updated 11-1-2005
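
To check a table’s page count and fragmentation before deciding whether a rebuild is worthwhile, you can run DBCC SHOWCONTIG (the table name below is hypothetical):

-- Report page counts, scan density, and logical fragmentation for
-- every index on the table.
DBCC SHOWCONTIG (Orders) WITH ALL_INDEXES

If Pages Scanned is under 100 for an index, removing its fragmentation is unlikely to improve performance.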

*****

Many times it would be handy to have a smaller version of a production database for performance testing, because it is often difficult to provision a test system large enough to hold a very large production database. The problem is that if you take a large production database and truncate it, its statistics will change, and you will no longer be able to use it for performance testing: the index statistics will differ between the two databases, possibly resulting in different execution plans.

Fortunately, there is a workaround for this problem. Once you have created a truncated version of the larger production database, you can copy the statistics from the production database to the smaller test database, so that the execution plans for both databases will be the same, permitting apples-to-apples performance testing. See this article to learn how to perform this nifty trick. [7.0, 2000] Updated 5-1-2006
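
The referenced article covers the actual transfer steps; as a small hedged aside (the table and index names are hypothetical), you can verify that the copy worked by comparing the DBCC SHOW_STATISTICS output for the same index in both databases, since the histograms and densities should match:

-- Run in both the production and the test database and compare the output.
DBCC SHOW_STATISTICS (Orders, IX_Orders_CustomerID)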
