10 Baselining Tips for SQL Server: Lessons From the Field

Tip 6: Use SQL Server User Settable Counters

This is perhaps the most underappreciated set of counters available. Contrary to popular belief, they are valuable and easy to use (I’m starting a publicity campaign for them as of today). Remember, performance analysis is all about linking an unexpected symptom to its cause. These counters can store valuable information about whatever is going on in your application in an easy and straightforward way: have your developers call a stored procedure from within the application, passing an integer value as a parameter, and that value will show up in your PerfMon results. I can’t find an easier way to describe it, and I can’t understand why so few people use them.
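
For illustration, a minimal sketch of the mechanism (the value 42 is an arbitrary example):

    -- Publish an application-defined value; it appears in PerfMon under the
    -- "SQLServer:User Settable" object, "Query" counter, "User counter 1" instance.
    EXEC master.dbo.sp_user_counter1 42;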

For example, suppose your application communicates synchronously with a remote system in a shared distributed transaction, and logs all error messages received from that remote system. By running sp_user_counter1 @count (where @count is the number of error messages logged over the last x seconds or minutes), you might link performance degradation of your SQL Server database to problems on the remote system, without requiring other teams to have a monitoring process as mature as yours.
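
A minimal sketch of that scenario, assuming a hypothetical dbo.RemoteErrorLog table that the application writes to (the table name and the five-minute window are illustrative):

    DECLARE @count int;

    -- Count the error messages logged over the last five minutes.
    SELECT @count = COUNT(*)
    FROM dbo.RemoteErrorLog
    WHERE LoggedAt >= DATEADD(minute, -5, GETDATE());

    -- Publish the count so it shows up next to the other PerfMon counters.
    EXEC master.dbo.sp_user_counter1 @count;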

Tip 7: Start Baselining / Stress Testing ASAP

To achieve a reliable baseline, you need to develop a load on your systems via some sort of stress testing scenario that generates a load comparable to the one you expect in a real-life situation.
But you don’t need to postpone stress testing until you can simulate the maximum expected number of users on the final build of your application. You can’t start too soon.

Loading high volumes of data into your SQL Server database tables, using bcp or SQLIOStress, can give you valuable information about what your hardware can handle. Also try the SQL Hammer tool in the SQL Server 2000 Resource Kit; it is a VB tool you can adapt to your needs.
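
As a sketch of the loading part, here is the T-SQL counterpart of a bcp load (BULK INSERT); the table name, file path, and batch size are assumptions to adapt:

    -- Load a high volume of rows while watching the disk and CPU counters.
    BULK INSERT dbo.OrdersTest
    FROM 'C:\load\orders.dat'
    WITH (
        BATCHSIZE = 10000,  -- commit every 10,000 rows
        TABLOCK             -- table lock for a faster, minimally logged load
    );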

In a later phase, the most frequently used and most demanding queries – captured with Profiler and regenerated with random parameter values – can be launched over multiple connections to simulate multiple users (see the sketch after this paragraph). SQL Server 2005 has some nice features for this (more on stress testing in a later article).
A final end-to-end stress test is still required to make sure your SQL Server database is not the only system up and running when you go live.
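
As an illustration of replaying captured queries with random values, here is a minimal sketch that hammers a hypothetical stored procedure (dbo.GetOrdersByCustomer) with randomized parameters; run it from several connections at once to simulate concurrent users:

    DECLARE @i int, @CustomerID int;
    SET @i = 0;

    WHILE @i < 1000
    BEGIN
        -- Generate a random customer id between 1 and 100000.
        SET @CustomerID = ABS(CHECKSUM(NEWID())) % 100000 + 1;

        -- Replay one of the captured, most demanding queries.
        EXEC dbo.GetOrdersByCustomer @CustomerID;

        SET @i = @i + 1;
    END;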

Tip 8: Consolidate PerfMon and SQL Profiler Data in an OLAP Cube

Daily, during maintenance hours, I schedule a batch process that collects all data and stores it in an OLAP structure: one fact table for PerfMon counters, with dimensions like time, server, object, counter, and instance; and one fact table for SQL Profiler data, with dimensions like time, query, and server name.
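
A minimal sketch of what such fact tables could look like; all table and column names are assumptions, not a prescribed schema:

    CREATE TABLE dbo.FactPerfmon (
        SampleTime   datetime     NOT NULL,  -- time dimension
        ServerName   varchar(128) NOT NULL,  -- server dimension
        ObjectName   varchar(128) NOT NULL,  -- e.g. SQLServer:Buffer Manager
        CounterName  varchar(128) NOT NULL,
        InstanceName varchar(128) NULL,
        CounterValue float        NOT NULL
    );

    CREATE TABLE dbo.FactTrace (
        EndTime    datetime     NOT NULL,  -- time dimension
        ServerName varchar(128) NOT NULL,
        QueryName  varchar(128) NOT NULL,  -- normalized query or procedure name
        Duration   bigint       NOT NULL,
        Reads      bigint       NOT NULL,
        Writes     bigint       NOT NULL,
        CPU        int          NOT NULL
    );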

If your application performs all data retrieval through stored procedures and uses sp_executesql for all data manipulation (inserts/updates/deletes), it should not be too difficult to filter the queries you need from the TextData column. More on this in a following article, where we will explore trend analysis in more detail.
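
A sketch of that filtering step, assuming the trace was written to a file and loaded with fn_trace_gettable (the file path and naming patterns are illustrative):

    SELECT TextData, Duration, Reads, Writes, CPU, EndTime
    FROM ::fn_trace_gettable(N'C:\traces\daily.trc', DEFAULT)
    WHERE TextData LIKE N'%sp_executesql%'  -- data manipulation calls
       OR TextData LIKE N'EXEC dbo.%';      -- stored procedure calls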

This way, you can link system data with the SQL statements generated by your application. Don’t forget to include the End Time data column in your trace to perform the mapping. In effect, we have just exposed the magic behind the SQL Server 2005 Profiler/PerfMon integration.
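
A sketch of the mapping itself, reusing the fact tables assumed above and assuming a 15-second PerfMon sample interval:

    -- Attach each traced statement to the PerfMon sample during which it completed.
    SELECT p.SampleTime, p.CounterName, p.CounterValue,
           t.QueryName, t.Duration, t.EndTime
    FROM dbo.FactPerfmon p
    JOIN dbo.FactTrace t
      ON t.EndTime >= p.SampleTime
     AND t.EndTime <  DATEADD(second, 15, p.SampleTime);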

Also use this process to archive data and free up space in your database tables.
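
As a sketch, assuming the fact table from above and a hypothetical 31-day retention period:

    -- Move consolidated rows to an archive table, then reclaim the space.
    INSERT INTO dbo.FactPerfmonArchive
    SELECT * FROM dbo.FactPerfmon
    WHERE SampleTime < DATEADD(day, -31, GETDATE());

    DELETE FROM dbo.FactPerfmon
    WHERE SampleTime < DATEADD(day, -31, GETDATE());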

Tip 9: Restart Processes

For both tools, special attention must be paid to restart procedures. If SQL Profiler loses its connection to the source database, you will have to restart SQL Profiler, which will suggest overwriting your current results – something you don’t want to happen at that moment. I usually start my consolidation process manually (see tip 8), rename the old table, and adjust my online monitor to UNION data from both tables (see the next article).
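
A sketch of the rename-and-UNION step; the table and view names are assumptions:

    -- Keep the results collected so far under a new name.
    EXEC sp_rename 'dbo.TraceStaging', 'TraceStaging_old';
    GO

    -- Once Profiler has recreated dbo.TraceStaging, let the online
    -- monitor read both the old and the new table.
    CREATE VIEW dbo.vTraceAll
    AS
    SELECT * FROM dbo.TraceStaging
    UNION ALL
    SELECT * FROM dbo.TraceStaging_old;
    GO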

Tip 10: Set Up Your Baseline

This is the moment of truth: once you have performed your stress tests under different load scenarios, you know how your system should behave under different circumstances. This is your baseline: a known value against which later measurements can be compared.

You will also have a clear view of your system’s scalability: where the boundaries lie, both for your system and for your application. Will your current environment support the expected number of users performing the expected number of transactions? What will happen if user numbers increase in the future? What will happen if additional features are added to the application, generating more transactions for SQL Server?

Now it’s time to log minimum, maximum, and average values for these counters for each scenario and store them as benchmarks for future use (e.g. generating alerts). I often store a number of percentiles with them as well.
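
A minimal sketch of that step, reusing the FactPerfmon table assumed in tip 8, with a hypothetical Scenario column added to tag each stress test run:

    -- Minimum / maximum / average per counter and per load scenario.
    SELECT Scenario, ObjectName, CounterName,
           MIN(CounterValue) AS MinValue,
           MAX(CounterValue) AS MaxValue,
           AVG(CounterValue) AS AvgValue
    FROM dbo.FactPerfmon
    GROUP BY Scenario, ObjectName, CounterName;

    -- An approximate 95th percentile for a single counter (pre-2005 style).
    SELECT MAX(CounterValue) AS Pctl95
    FROM (
        SELECT TOP 95 PERCENT CounterValue
        FROM dbo.FactPerfmon
        WHERE CounterName = 'Batch Requests/sec'
        ORDER BY CounterValue
    ) AS x;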

A more subtle, but also more difficult, task is to find correlations between counters. More on this subject in one of my following articles.
