How to Detect Table Fragmentation in SQL Server 2000 and 2005

How do you know when a table is fragmented?

  • Poor query performance over time.
  • Increased disk activity.
  • Poor cache utilization.
  • Verify the I/O statistics of a query.
  • Verify scan density: use DBCC SHOWCONTIG in SQL Server 2000, or the dynamic management function sys.dm_db_index_physical_stats in SQL Server 2005.

Table fragmentation can occur when data is modified with INSERT, UPDATE, or DELETE statements, which over time leave gaps in each page. If a query then relies on a full or partial table scan, the extra page reads create overhead for the SQL Server process, leading to high CPU activity and sluggish response times. Defragmentation targets logical fragmentation at the leaf level of an index so that the physical order of pages again matches the logical order, restoring performance.

What is the effect of index fragmentation?

It degrades performance: ordinary queries take longer to return results. When the search arguments are selective for the data being fetched, the query optimizer can choose an efficient scan order, provided the table has an appropriate index for finding the data quickly. Whether the table has the right index is not always an easy question to answer; you have to examine the execution plans or use the Index Tuning Wizard GUI together with Profiler.

How can you measure whether a table is fragmented, and how can you reduce the fragmentation?

In SQL Server 2000, you can use DBCC SHOWCONTIG, which shows the scan density and fragmentation level of a table. Adding the WITH TABLERESULTS option returns the output as a row set with additional information. The DBCC INDEXDEFRAG and DBCC DBREINDEX statements are the best tools for reducing table fragmentation.
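
For example, a quick check of a single table might look like the statement below (the table name Orders is only a placeholder); watch the Scan Density and Logical Scan Fragmentation columns in the output:

-- Row-set output for every index on the (hypothetical) Orders table
DBCC SHOWCONTIG ('Orders') WITH TABLERESULTS, ALL_INDEXES;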

DBCC INDEXDEFRAG defragments the leaf level of an index so that the physical order of the pages matches the logical order of the leaf nodes. The statement reports the estimated percentage completed every five minutes. (You can learn more about how to verify fragmentation in SQL Server 2000 on this Web site.)
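
As a rough sketch, with the database, table and index names as placeholders, the two statements could be used like this:

-- Online defragmentation of a single index (SQL Server 2000)
DBCC INDEXDEFRAG (MyDatabase, Orders, IX_Orders_CustomerID);

-- Offline rebuild of all indexes on the table, using a 90 percent fill factor
DBCC DBREINDEX ('Orders', '', 90);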

Even if indexes are properly created, they tend to lose effectiveness over time because of the fragmentation that builds up in the table through INSERT, UPDATE and DELETE operations. The resulting fragmentation (free space on pages) leads to poor use of memory when search arguments are used to scan for a particular data set.

What’s New in SQL Server 2005?

The DBCC SHOWCONTIG statement has been deprecated. Its replacement, the dynamic management function sys.dm_db_index_physical_stats, detects fragmentation in a specific index or in all indexes on a table or indexed view. For partitioned indexes, it also provides fragmentation information for each partition. The general syntax is:

SELECT * FROM sys.dm_db_index_physical_stats (DB_ID(N'<Database Name>'), OBJECT_ID(N'<Table Name>'), NULL, NULL, '<Scanning Mode>');

To return information for all the tables and indexes, use:

SELECT * FROM sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, NULL);

The scanning mode determines the level of scanning performed to obtain the statistics the function returns. The three modes are LIMITED, SAMPLED and DETAILED, and in each case the function takes an Intent-Shared (IS) table lock. The default is LIMITED, which is the fastest: it reports page counts and external fragmentation but no page-density information. SAMPLED bases its statistics on a sample of the pages, while DETAILED reads every page and reports both internal and external fragmentation.
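
To see the difference, you can run the function twice against the same table (the database and table names below are placeholders). Under LIMITED the page-density column avg_page_space_used_in_percent comes back NULL, whereas DETAILED fills it in:

-- LIMITED: fast, external fragmentation only
SELECT index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(N'MyDatabase'), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'LIMITED');

-- DETAILED: every page is read, so page density (internal fragmentation) is reported as well
SELECT index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(N'MyDatabase'), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'DETAILED');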

The column avg_fragmentation_in_percent represents the fragmentation level of an index or heap. Because this is a table-valued function, it returns its results as a row set, so you can use SELECT INTO to capture the entire result set; there is no need for dynamic string execution to interpret the results.

SELECT * INTO #tbl_Fragmentation FROM sys.dm_db_index_physical_stats (DatabaseID, TableId, IndexId, NULL, NULL);

Note that with NULL passed as the last parameter above, the scanning mode is LIMITED, the default behavior in SQL Server 2005. To get more detail in the results, execute the following (Johan Bijens):

SELECT * INTO #tbl_Fragmentation FROM sys.dm_db_index_physical_stats (DatabaseID, TableId, IndexId, NULL, 'DETAILED');
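
Once the snapshot has been captured, you can query the temporary table as often as you like without re-running the function, for example to list the most fragmented indexes first:

-- Work with the captured snapshot instead of re-running the function
SELECT object_id, index_id, avg_fragmentation_in_percent
FROM #tbl_Fragmentation
ORDER BY avg_fragmentation_in_percent DESC;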

Under certain circumstances, SQL Server 2005 uses a feature called "forwarded records." When a row with variable-length columns in a table without a clustered index (a heap) is updated to a larger size and no longer fits on its page, SQL Server moves the row to another page and leaves a forwarding pointer in the original location. The pointer remains in place unless the row shrinks enough to move back to where it was; it can also be removed when the database is shrunk, because the shrink operation reassigns row identifiers and avoids generating forwarded rows. Forwarded records can hurt performance because of the additional I/O needed to read the pointer first and then follow it to the relocated row. The dynamic management function sys.dm_db_index_physical_stats reports the number of forwarded records in the forwarded_record_count column and, along with the other dynamic management views and functions, helps you monitor changing information such as locks, index health and so on. To return only the indexes within a particular fragmentation range, you can join the sys.indexes catalog view with this dynamic management function (a heap-oriented sketch follows the join query below):

SELECT OBJECT_NAME(i.object_id) AS TableName
     , i.name AS TableIndexName
     , phystat.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'DETAILED') AS phystat
INNER JOIN sys.indexes AS i
    ON i.object_id = phystat.object_id
   AND i.index_id = phystat.index_id
WHERE phystat.avg_fragmentation_in_percent > 10
  AND phystat.avg_fragmentation_in_percent < 40;
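
As a sketch (all object names are placeholders), the forwarded_record_count column discussed above can be checked for heaps, which are identified by index_id = 0; the column is populated only in SAMPLED or DETAILED mode:

-- Heaps with forwarded records (index_id = 0 identifies a heap)
SELECT OBJECT_NAME(ps.object_id) AS TableName,
       ps.forwarded_record_count,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
WHERE ps.index_id = 0
  AND ps.forwarded_record_count > 0;

Once fragmented indexes have been identified, SQL Server 2005 lets you defragment them with ALTER INDEX. A common rule of thumb, not a hard rule, is to reorganize moderately fragmented indexes and rebuild heavily fragmented ones (the index and table names below are again placeholders):

-- Moderate fragmentation: reorganize the index (an online operation)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;

-- Heavy fragmentation: rebuild the index completely
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);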

A Few Tips on Avoiding Fragmentation

  • When a database is created, create the data files at the largest size practical, based on the maximum amount of data you expect over a given period (say, at least three years); a sketch of such a CREATE DATABASE statement follows this list.
  • Sometimes it is acceptable to let the data files grow automatically, but keep the growth under control by specifying a maximum file size that still leaves free space on the hard disk.
  • After a period of time, re-evaluate the expected maximum database size and, if required, add more files or filegroups to the database.
  • Do not let the data files grow automatically when many data files share the same disk partition. If the data files are heavily used, place them in a different filegroup or on a different partition.
  • Perform regular database maintenance tasks, such as running DBCC DBREINDEX and recompiling stored procedures and triggers.
  • If the rows of a table are modified or deleted frequently, run UPDATE STATISTICS on the table periodically; this helps the optimizer avoid building slow execution plans from stale statistics.
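
As a rough illustration of the first two tips, the following CREATE DATABASE statement (the database name, file paths and sizes are invented for this example) pre-sizes the data file and caps its automatic growth:

CREATE DATABASE SalesArchive
ON PRIMARY
(
    NAME = SalesArchive_Data,
    FILENAME = 'D:\SQLData\SalesArchive_Data.mdf',
    SIZE = 20GB,        -- pre-sized for the data expected over the planning period
    MAXSIZE = 40GB,     -- cap growth so some disk space always remains free
    FILEGROWTH = 1GB    -- controlled automatic growth
)
LOG ON
(
    NAME = SalesArchive_Log,
    FILENAME = 'E:\SQLLogs\SalesArchive_Log.ldf',
    SIZE = 2GB,
    MAXSIZE = 10GB,
    FILEGROWTH = 512MB
);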
