Monitoring Index Fragmentation

We will store the date that this was processed and the database name we performed this upon.  The rest of the fields will be all the data that we retrieve from a SHOWCONTIG execution.

[ShowContigExecution]

CREATE TABLE ShowContigExecution (
  ShowContigExecutionID INT IDENTITY(1,1) NOT NULL,
  DateProcessed DATETIME NOT NULL DEFAULT (GETDATE()),
  ObjectName VARCHAR(50) NOT NULL,
  IndexName VARCHAR(100) NOT NULL,
  CONSTRAINT PK_ShowContigExecution PRIMARY KEY (ShowContigExecutionID)
)

We can use this table to store a simple history of each table’s executions.

When we execute a SHOWCONTIG, we can keep a record of its execution here, along with the date it was processed.  We’ll log the ObjectName and IndexName as well.  This data can be gathered from the results table, but this gives us a summary that is more easily retrieved in the future.
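For example, pulling the most recent run for each table and index out of this summary table is a simple query:

-- Last maintenance run for each table and index
SELECT ObjectName, IndexName, MAX(DateProcessed) AS LastProcessed
FROM ShowContigExecution
GROUP BY ObjectName, IndexName
ORDER BY LastProcessed DESC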

We will need a few stored procedures to gather and process this data.

[sp_GetShowContig]
This stored procedure will cycle through the tables you have picked to be maintained and gather data on their current status.  A DBCC SHOWCONTIG will be performed on each table, and the results will be deposited into a table.  These results will then be archived into another table, with the current date and database name attached to the data.

I chose to use dynamic SQL as I cycled through the tables, and needed a hard table to deposit the results into.  The dynamic SQL was as follows:

INSERT INTO ShowContigResults
  EXEC ('USE ' + @dbname + ' DBCC SHOWCONTIG (''' + @TableName + ''')
    WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
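For the statement above to work, ShowContigResults must match the shape of the DBCC SHOWCONTIG output exactly.  A sketch of that table follows; the column names come from the documented WITH TABLERESULTS output, but the data types are my approximations, so verify them against the actual output on your server.

CREATE TABLE ShowContigResults (
  ObjectName VARCHAR(128),
  ObjectId INT,
  IndexName VARCHAR(128),
  IndexId INT,
  [Level] INT,
  Pages INT,
  [Rows] BIGINT,
  MinimumRecordSize INT,
  MaximumRecordSize INT,
  AverageRecordSize DECIMAL(10,2),
  ForwardedRecords BIGINT,
  Extents INT,
  ExtentSwitches INT,
  AverageFreeBytes DECIMAL(10,2),
  AveragePageDensity DECIMAL(10,2),
  ScanDensity DECIMAL(10,2),
  BestCount INT,
  ActualCount INT,
  LogicalFragmentation DECIMAL(10,2),
  ExtentFragmentation DECIMAL(10,2)
)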

[sp_ProcessIndexDefrag]
We will use this stored procedure to compare values between different dates to see if differences have occurred.  The history table will be queried to see if there are differences in the ScanDensity or LogicalFragmentation.  If there are differences, we can proceed to perform predetermined actions on these tables and indexes.  A low-impact process would be to perform a DBCC INDEXDEFRAG.  A higher-impact process would be to rebuild the index with DBCC DBREINDEX.  Parameters can be passed into this procedure to alter the levels you would like to use as thresholds for ScanDensity and LogicalFragmentation.  The default values are as follows:

 @ScanDensity int = 10,
 @LogicalFrag int = 10

Or we could simply look for differences, as mentioned above.  I have used both processes, depending on the needs of the tables.
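Here is a minimal sketch of the decision logic inside sp_ProcessIndexDefrag.  The history table name (ShowContigHistory) and the variables @dbname, @TableName, and @IndexName are placeholders of my own, and the comparison is one plausible reading of the thresholds above:

DECLARE @CurrDensity DECIMAL(10,2), @CurrFrag DECIMAL(10,2)

-- Pull the most recent measurements for this index
SELECT TOP 1 @CurrDensity = ScanDensity, @CurrFrag = LogicalFragmentation
FROM ShowContigHistory
WHERE ObjectName = @TableName AND IndexName = @IndexName
ORDER BY DateProcessed DESC

-- Act when fragmentation exceeds the threshold, or scan density has
-- fallen more than @ScanDensity points below 100
IF @CurrFrag > @LogicalFrag OR @CurrDensity < 100 - @ScanDensity
BEGIN
  -- Low-impact option: defragment the index in place
  EXEC ('USE ' + @dbname + ' DBCC INDEXDEFRAG (0, ''' + @TableName + ''', ''' + @IndexName + ''')')
  -- Higher-impact alternative: rebuild the index entirely
  -- EXEC ('USE ' + @dbname + ' DBCC DBREINDEX (''' + @TableName + ''', ''' + @IndexName + ''')')
END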

Once you have maintained the table and index, a record will be inserted into the [ShowContigExecution] table to indicate the date this was processed.
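Assuming the variables from the sketch above, that logging step is a single statement; DateProcessed fills itself in through its default:

INSERT INTO ShowContigExecution (ObjectName, IndexName)
VALUES (@TableName, @IndexName)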

Considerations
Initially, I used these tables and stored procedures manually to perform the operations on particular tables and indexes.  I did not trust them to run on their own, even though I trusted my own code.  I would suggest the same on your systems, and ensure that you test these on a system other than production initially.  Eventually, when the system has been proven to you and others, feel free to automate these stored procedures with a job or whatever other solution you deem fit.

Once this system is turned on and automated, make sure that you review the data being gathered often, to ensure that the system continues to process as expected.

The tables that are being maintained could be stored in a configuration table, allowing you to add and remove tables from maintenance at will.  Further configuration could determine what type of action will be performed when these tables and indexes fall below the threshold.  It would be a good idea to store in the config table the type of operation you would like performed (UPDATE STATISTICS, DBCC INDEXDEFRAG, or DBCC DBREINDEX).  This gives you finer control over the operations that occur on specific tables.  For example, I have some OLTP tables on which we are unable to perform either a defrag or a reindex, for the overhead is too costly.  Instead, we rebuild the statistics.  If such a table were configured this way, it would never undergo a costly index rebuild or defrag, but only the simple statistics update.
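A sketch of such a configuration table follows.  The table name, columns, action codes, and the sample ‘Orders’ table are illustrative, not part of the scripts above.

CREATE TABLE MaintenanceConfig (
  TableName VARCHAR(50) NOT NULL,
  IndexName VARCHAR(100) NULL,             -- NULL means all indexes on the table
  MaintenanceAction VARCHAR(20) NOT NULL   -- 'STATS', 'DEFRAG', or 'REINDEX'
)

-- An OLTP table too busy for a rebuild gets the cheap statistics update
INSERT INTO MaintenanceConfig (TableName, IndexName, MaintenanceAction)
VALUES ('Orders', NULL, 'STATS')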

Another consideration when returning data from SHOWCONTIG is to limit the data that is put into the history table.  You can safely filter out rows that are returned with a null or blank IndexName.  Primary keys could be removed as well, since we are concentrating on maintaining indexes only.  You may also want to remove any indexes that occupy fewer than 8 pages; I read somewhere that anything less than that has such a small footprint that a reindex or defrag doesn’t meaningfully alter it.
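Applied to the results table, those filters might look like the following; the PK test assumes your primary keys follow a PK_ naming convention:

-- Discard rows that are not worth tracking before archiving
DELETE FROM ShowContigResults
WHERE IndexName IS NULL
   OR LTRIM(RTRIM(IndexName)) = ''
   OR IndexName LIKE 'PK[_]%'   -- primary keys, by naming convention
   OR Pages < 8                 -- too small for defrag/reindex to matter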

All these considerations, and any others you find along the way, can be added to your system as you test it in your environment.  Make sure that you keep an eye on the results and watch for patterns.  If a certain set of tables is being processed every day and never improves, it would be wise to remove them from this maintenance plan and determine another way to maintain them, or simply to ignore them altogether.  Smaller tables may not need index maintenance.

Conclusion
In the end, you should have a finely tuned index maintenance system that encompasses the majority of your troublesome tables and allows for an automated approach to altering them.  The end goal is to relieve you of the manual manipulation of these objects, automate it, gather data on the process, and allow for reporting to occur.  You can then stop touching these objects by hand and instead add a review of their statuses to your daily DBA duties.  Your system will be healthier, performance will improve, and you will have more time on your hands for more pressing tasks.
