How to Do SQL Server Performance Trend Analysis Part 1: Using Performance Monitor to Log Data

Choose Which Objects to Monitor and Log

The next step is to decide which Performance Monitor counters you want to monitor and log. Depending on how your server is configured, there may be over 400 different counters you could measure. That is far more than you will ever need. And even if you were to try to monitor that many counters, you would generate huge amounts of data, potentially more than a megabyte per minute, depending on the log interval.

Unlike the Chart View, the Log View feature of Performance Monitor does not let you select individual counters to monitor. Instead, you must select entire Performance Monitor Objects. This means that if a particular Performance Monitor object has 25 counters in it, and you only want to monitor one of them, you will still get all 25 counters. There is no way around this.

So you must decide which Performance Monitor Objects have the counters you want to monitor, and then add them to the Performance Monitor Log View. Keep in mind that the more Objects you choose, the more data is collected, and the greater the resources used to collect the data. While you will want to collect all the Objects you need, don’t collect data on any Objects you won’t use.

While each DBA has their own preferences on which Performance Monitor Objects to monitor and log, I usually stick to these:

  • Memory

  • Network Segment

  • PhysicalDisk

  • Processor

  • Server

  • System

  • SQL Server: Access Methods

  • SQL Server: Buffer Manager

  • SQL Server: General Statistics

  • SQL Server: Locks

  • SQL Server: SQL Statistics

For the most part, these Objects include all the counters I use to monitor both NT 4.0 Server and SQL Server, although of course I don’t use every counter in every one of these Objects.

Determining the Collection Interval

Once you have selected the Performance Monitor Objects to collect, the next step is to determine the collection interval. The collection interval determines how often data is sampled and stored in the log. The default interval is 15 seconds. This means that all the counters for all the objects are collected and stored in the log every 15 seconds. The values for the counters that are collected are the averages for the counters over the 15 second interval. If you use an interval of 60 seconds, then the data will be averaged and collected over 60 seconds.

The shorter the interval, the more granular the data, and the longer the interval, the less granular the data. If the data is not granular enough, you may miss important spikes, because short-lived peaks are averaged away over the collection interval.
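To see why a long interval can hide a spike, consider this small illustration (the per-second readings are hypothetical, not real Performance Monitor output):

```python
# Hypothetical per-second CPU readings: steady 20% with one
# 1-second spike to 100% inside a 60-second window.
samples = [20] * 59 + [100]

# A 1-second collection interval would record the spike itself.
one_second_max = max(samples)

# A 60-second collection interval records only the average,
# so the spike is smoothed away.
sixty_second_avg = sum(samples) / len(samples)

print(one_second_max)    # 100
print(round(sixty_second_avg, 1))  # 21.3
```

The 100% spike shows up as a barely noticeable bump in the 60-second average, which is exactly the kind of event you can miss with a coarse interval.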

The amount of data collected is inversely proportional to the length of the collection interval: the shorter the interval, the more data is collected, and the longer the interval, the less data is collected.

You may want to experiment with various collection intervals to find the one that is best for your environment. The interval I use depends on how long I am collecting the data. Generally, if I collect data for 8 hours or less, I will often choose an interval of 30-60 seconds. But if I am collecting data around the clock, I choose an interval of 300-600 seconds, because with anything shorter the amount of data collected becomes overwhelming.
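As a back-of-the-envelope check on these intervals, here is a rough size estimate. The bytes-per-sample figure is an assumption for illustration only; the actual size of a logged sample depends on which Objects you selected:

```python
# Assumed size of one logged sample (all counters for all selected
# Objects) -- a hypothetical figure, used only for illustration.
BYTES_PER_SAMPLE = 8_192
SECONDS_PER_DAY = 86_400

def daily_log_bytes(interval_seconds):
    """Estimate bytes logged per day at a given collection interval."""
    samples_per_day = SECONDS_PER_DAY // interval_seconds
    return samples_per_day * BYTES_PER_SAMPLE

for interval in (15, 60, 300, 600):
    mb = daily_log_bytes(interval) / (1024 * 1024)
    print(f"{interval:>4}s interval -> ~{mb:.1f} MB per day")
```

Under these assumptions, the default 15-second interval produces roughly 45 MB per day, while a 600-second interval produces just over 1 MB, which shows why a larger interval is the practical choice for round-the-clock logging.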

Start Collecting the Data

Now that you have made all of these decisions, don’t forget to save your work before you begin collecting data. You can use the File|Save Log Settings option to save a file with all the information you have entered, such as the Objects you selected, so you can load it again if need be.

Once your settings are saved, you are ready to begin collecting your data. When you start, watch the screen for a while to see how much data is being collected. If the log is growing too fast, you are probably collecting too many Objects, or collecting them too often. You may want to stop the logging, make a couple of changes to reduce the amount of logged data, and try again.

If you are collecting data around the clock, you will need to periodically (say, once a day) stop the log and then restart it immediately, using a new file name. I like to use the day’s date as my file name. If you don’t do this, your log file will grow huge.
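The date-based naming scheme described above can be sketched like this (the log directory and file extension are hypothetical; use whatever path you save your logs to):

```python
from datetime import date

def log_filename(day, directory="C:\\PerfLogs"):
    """Build a daily log file name from the date, e.g. for the file
    you start after stopping the previous day's log."""
    # ISO format (YYYY-MM-DD) keeps the files sorted chronologically.
    return directory + "\\" + day.isoformat() + ".log"

print(log_filename(date(1999, 3, 15)))  # C:\PerfLogs\1999-03-15.log
```

Naming the files by date also makes it easy later to pick out a specific day's log for analysis.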

As the logs are written out, such as once a day, your next task is either to view them directly with the Chart View of Performance Monitor for a quick-and-dirty analysis, or to import the data into a SQL Server database for later analysis using Microsoft Excel.

Now read part two of this four-part tutorial, and learn how to import Performance log data into SQL Server for later trend analysis.
