Monitoring Index Fragmentation

A few years ago I started a new job as a DBA with a new group.  The application served the needs of hourly employees across the globe as they clocked in and out of work.  Reports were generated and payroll was issued based on the data collected in this database.  It was a 24/7 system that had grown from a simple idea and one developer into something supported by a larger team of IT professionals.  It was still a rather small group that owned and supported this system, but that group was charged with keeping it running smoothly.  Prior to my arrival, a single individual had been the database subject matter expert.  As time progressed, this system that was designed to support a few people had grown to support a worldwide organization, with many more individuals and groups using it than had been anticipated.  Slowness started creeping in.  Issues arose.  The database didn’t behave well on occasion and was frustrating those supporting it.  The manager and others had decided that the slowness could be fixed with new hardware.  After all, the existing hardware was rather old and no longer shiny.  New hardware will always speed things up, some believe.

I was excited to take on this position and learn as much about the system as I could.  I dug in and started looking into the database for things I could do to help it out.  Initially, I was blinded by the prospect of new hardware and the project of porting the database to a new system.  That’s an exciting project.  But as time marched on and I got busy learning the system, I realized that index maintenance might help in the short term.  I assumed that there was something already in place, because most people know that index maintenance is a best practice and that a database performs better when its indexes are in good shape.

Soon I realized that there simply wasn’t anything automated for optimizing indexes.  Since this was a 24/7 shop, the previous database folks had been afraid of perturbing the system by performing intense operations like index maintenance.  Yeah, a project that will have impact, I thought to myself.  I launched into setting up an automated index maintenance system.

Since this is a 24/7 system, downtime is not permitted except on rare occasions.  Index maintenance therefore needs to be an online operation, which rules out DBCC DBREINDEX except in those rare windows (DBCC INDEXDEFRAG, by contrast, can run while the system stays online).  How can we know the status of the indexes, other than by magic?  DBCC SHOWCONTIG to the rescue.  Running it reports the status of a table and its indexes; expanding it to all tables shows the status of every index on every table.  Gathering this data and storing it daily reveals trends and highlights the maintenance that is most needed.  Let me now show the system I designed and implemented.  You can grab the pieces you find valuable and add your own ideas.  The end result should be an automated system that keeps the health of your database in check.
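As a quick illustration, here is DBCC SHOWCONTIG run against a single table (the table name below is a made-up example).  The WITH TABLERESULTS option returns the fragmentation figures as a result set rather than as text messages, which is what makes it possible to capture the output into a table later on:

```sql
-- Report fragmentation for one table (hypothetical name) as a result set.
-- ALL_INDEXES includes every index on the table, not just the clustered one.
DBCC SHOWCONTIG ('dbo.TimeClockEntry') WITH TABLERESULTS, ALL_INDEXES;
```

The result set includes columns such as ObjectName, IndexName, ScanDensity, LogicalFragmentation, and ExtentFragmentation, which are exactly the values we want to collect and trend over time.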

System
We want to run DBCC SHOWCONTIG against multiple tables and indexes, gather the status of each, and store that information.  Based on the results, and on comparisons between the latest execution and previous ones, we can take predetermined actions on those tables and indexes.

We will need a couple of tables.

CREATE TABLE ShowContigResults (
  ObjectName VARCHAR(50),
  ObjectId INT,
  IndexName VARCHAR(100),
  IndexId INT,
  Lvl INT,
  CountPages INT,
  CountRows INT,
  MinRecSize INT,
  MaxRecSize INT,
  AvgRecSize INT,
  ForRecCount INT,
  Extents INT,
  ExtentSwitches INT,
  AvgFreeBytes INT,
  AvgPageDensity INT,
  ScanDensity DECIMAL(5,2),
  BestCount INT,
  ActualCount INT,
  LogicalFrag DECIMAL(5,2),
  ExtentFrag DECIMAL(5,2)
)
The columns of this table correspond to the columns returned by a DBCC SHOWCONTIG execution.

We will use this table to deposit the data from a single execution of DBCC SHOWCONTIG.  This data will later be archived to another table, with some extra fields added.  In order to run DBCC SHOWCONTIG against multiple tables, I have chosen to use a bit of dynamic SQL, and I needed a standard location to deposit the data.
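A minimal sketch of that dynamic SQL follows.  It loops over the user tables with a cursor and uses INSERT ... EXEC to deposit the TABLERESULTS output of each DBCC SHOWCONTIG call into ShowContigResults; the cursor name and the exact loop structure are my own choices here, not a prescribed implementation:

```sql
-- Sketch: run DBCC SHOWCONTIG for every user table and capture the output.
-- Clear out the staging table so it holds only the current execution's data.
TRUNCATE TABLE ShowContigResults;

DECLARE @TableName sysname;

DECLARE TableCursor CURSOR FOR
    SELECT name
    FROM sysobjects
    WHERE type = 'U';   -- user tables only

OPEN TableCursor;
FETCH NEXT FROM TableCursor INTO @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- INSERT ... EXEC deposits the TABLERESULTS result set into our table.
    INSERT INTO ShowContigResults
        EXEC ('DBCC SHOWCONTIG (''' + @TableName
              + ''') WITH TABLERESULTS, ALL_INDEXES');

    FETCH NEXT FROM TableCursor INTO @TableName;
END;

CLOSE TableCursor;
DEALLOCATE TableCursor;
```

This can be wrapped in a stored procedure and scheduled as a nightly job so the gathering step runs without any manual intervention.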

CREATE TABLE ShowContigResultsHistory (
  DateProcessed DATETIME,
  DatabaseName VARCHAR(255),
  ObjectName VARCHAR(50),
  ObjectId INT,
  IndexName VARCHAR(100),
  IndexId INT,
  Lvl INT,
  CountPages INT,
  CountRows INT,
  MinRecSize INT,
  MaxRecSize INT,
  AvgRecSize INT,
  ForRecCount INT,
  Extents INT,
  ExtentSwitches INT,
  AvgFreeBytes INT,
  AvgPageDensity INT,
  ScanDensity DECIMAL(5,2),
  BestCount INT,
  ActualCount INT,
  LogicalFrag DECIMAL(5,2),
  ExtentFrag DECIMAL(5,2),
  CONSTRAINT PK_ShowContigResultsHistory PRIMARY KEY
    (DateProcessed, DatabaseName, ObjectName, ObjectId, IndexName)
)
This table keeps the values pulled from each table’s DBCC SHOWCONTIG run, giving us the status of its indexes, and it preserves the historical values for future reference.
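The archival step itself is a single INSERT ... SELECT that copies the current run out of the staging table, stamping each row with the run date and the database name so the primary key stays unique across runs.  A sketch, assuming the two tables defined above:

```sql
-- Sketch: archive the current execution's results into the history table,
-- adding the two extra fields (run timestamp and database name).
INSERT INTO ShowContigResultsHistory
    (DateProcessed, DatabaseName, ObjectName, ObjectId, IndexName, IndexId,
     Lvl, CountPages, CountRows, MinRecSize, MaxRecSize, AvgRecSize,
     ForRecCount, Extents, ExtentSwitches, AvgFreeBytes, AvgPageDensity,
     ScanDensity, BestCount, ActualCount, LogicalFrag, ExtentFrag)
SELECT GETDATE(), DB_NAME(), ObjectName, ObjectId, IndexName, IndexId,
       Lvl, CountPages, CountRows, MinRecSize, MaxRecSize, AvgRecSize,
       ForRecCount, Extents, ExtentSwitches, AvgFreeBytes, AvgPageDensity,
       ScanDensity, BestCount, ActualCount, LogicalFrag, ExtentFrag
FROM ShowContigResults;
```

With the history in place, comparing a table’s latest LogicalFrag or ScanDensity against earlier runs is a simple self-join on ObjectName and IndexName across different DateProcessed values.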
