Create a Performance Baseline Repository

Terms used in this article

Execution or SQL Execution represents a call to a Stored Procedure or an SQL Batch.

General Description

This process creates a Baseline repository that holds summarized data on all SQL Executions run on an instance of SQL Server. The repository sheds light on your SQL Server's activity and exposes useful information to both the Operations team and the Development team.

Process Description

As a DBA monitoring and maintaining production systems, there are times when you face a performance issue.

Following an investigation you are able to pinpoint a specific stored procedure as the cause of the performance problem.

In such a case, while working to resolve the issue, you can benefit from knowing how that procedure performed at various points in the past (e.g. last week or last month).

With a Baseline available you have immediate access to that information, allowing you to easily see the change in resource consumption (e.g. Reads, CPU time) that is causing the procedure to perform badly.

The reasons vary. It could be that the number of executions has increased significantly; if so, we may be learning that a recent version upload introduced a bug that calls the procedure far more often than is actually required (e.g. an incorrect parameter passed to a procedure can affect its control flow so that it reaches a code block that executes the procedure in question).

You may see an increase in Reads, which may indicate a missing index:

that is, an index that has been dropped, or an index that never existed but whose absence is now more significant, probably due to a change in the underlying data or a code modification (both may change the execution plan).

The above is just one example, and there are plenty of other possible reasons; the beautiful part is that the reasons keep changing, which keeps our work interesting and challenging. And we DBAs, the professionals responsible for the system's health, have to be ready for all of them. Having your own Baseline repository puts you in a better position: you know your server's activity and have it documented at various points in time.

Another reason to maintain a Baseline repository is that it simply sheds light on your database and allows you to see trends over time.

Sort the result by Duration and you see which executions are most active. An active execution means T-SQL code running in the database engine and consuming system resources. Looking at all those executions, you get the complete picture of where and how your system's resources are spread and consumed.

If your server suffers from high CPU utilization, you would probably gain some breathing room by tuning a few of the top CPU consumers. The same applies to systems that are short on IO, and so on.

A Baseline will surface in your reports any stored procedure that has become highly active, be it a new procedure or an existing one.

The process includes the following 5 stored procedures:

TraceExecutionsDelete (download)
Deletes old traces files that were created at a previous run

TraceExecutionsStart (download)
Starts the trace that captures the data needed for the performance repository

TraceExecutionsLoad (download)
Loads the trace files data to a database table

TraceExecutionsManipulateData (download)
Aggregates the captured trace data and saves it to a database table

TraceExecutionsReport (download)
Retrieves the pre aggregated data

Note: you can download the DLL used within these stored procedures here

Inside the code

This procedure uses xp_cmdshell to delete the trace files that were created in a previous run. Unfortunately I found it impossible to pass the path as a parameter, so I had to hard-code my desired path, and you will need to do the same to match your environment.
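The deletion step comes down to a single xp_cmdshell call with that hard-coded path. A minimal sketch, where the path and file name pattern are hypothetical placeholders for your own:

```sql
-- Sketch of the deletion step; C:\Traces\ and the BaseLine*.trc pattern
-- are placeholders -- hard-code your own path and trace file name.
-- xp_cmdshell must be enabled on the instance for this to work.
EXEC master..xp_cmdshell 'del C:\Traces\BaseLine*.trc', no_output;
```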

This procedure acts as a wrapper around a few of the built-in trace system stored procedures to define a trace and start it.

The code first makes use of the system stored procedure sp_trace_create, which creates a trace in a stopped state. The procedure outputs the ID of the newly created trace through the output parameter @TraceID, which is later used by sp_trace_setstatus to start the trace. The second parameter, @options, receives a value of 2, which enables file rollover in a FIFO (first in, first out) manner: once the number of trace files has reached the value defined by the @filecount parameter and the current active trace file has reached its maximum size, as defined by the @maxfilesize parameter, the oldest trace file is deleted to allow the creation of a new one.
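The create-and-start sequence described above can be sketched as follows; the path, file size, and file count are hypothetical values to adjust for your environment:

```sql
-- Sketch: create a trace in a stopped state with FIFO file rollover,
-- then start it. Path, size, and file count are hypothetical.
DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
SET @maxfilesize = 50;         -- max size in MB per trace file

EXEC sp_trace_create
     @TraceID OUTPUT,          -- returns the new trace's ID
     2,                        -- @options = 2: file rollover (FIFO)
     N'C:\Traces\BaseLine',    -- .trc extension is appended automatically
     @maxfilesize,
     NULL,                     -- no stop time
     10;                       -- @filecount: keep at most 10 rollover files

EXEC sp_trace_setstatus @TraceID, 1;  -- status 1 = start the trace
```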

Following that, the code makes several calls to sp_trace_setevent to define the events to be captured. Note that this procedure has to be called once for each column whose value we want to receive for each event; I refer to it as defining the event/column combinations.

What we do in this process is capture a few key columns, so we can later manipulate that data, for three specific events: SP:Completed, RPC:Completed and SQL:BatchCompleted.
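Since sp_trace_setevent takes one event/column pair per call, the definition section is a series of calls like the sketch below. The event and column IDs shown are the documented SQL Trace IDs; @TraceID is the ID returned by sp_trace_create, and the column selection here is illustrative, not the article's exact list:

```sql
-- Sketch: one sp_trace_setevent call per event/column combination.
-- Event IDs: 10 = RPC:Completed, 12 = SQL:BatchCompleted, 43 = SP:Completed.
-- Column IDs: 1 = TextData, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU.
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;  -- RPC:Completed / TextData
EXEC sp_trace_setevent @TraceID, 10, 13, @on;  -- RPC:Completed / Duration
EXEC sp_trace_setevent @TraceID, 10, 16, @on;  -- RPC:Completed / Reads
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- SQL:BatchCompleted / TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted / Duration
EXEC sp_trace_setevent @TraceID, 43, 1,  @on;  -- SP:Completed / TextData
EXEC sp_trace_setevent @TraceID, 43, 13, @on;  -- SP:Completed / Duration
-- ...repeated for every other column of interest (CPU, Writes, StartTime...).
```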

The two events RPC:Completed and SQL:BatchCompleted capture and cover all activity processed by the database engine that arrives from any client, either in the form of an SQL batch or wrapped as an RPC (remote procedure call).

The last event, SP:Completed, captures executions missed by the former two, that is, nested executions such as a call to a function, a trigger, or a stored procedure called from within another stored procedure. In that case the first procedure (the outer procedure), which is executed by the client, generates an RPC:Completed event, while the second stored procedure (the inner procedure), nested and called from within the first, generates an SP:Completed event; capturing all three is therefore required in order to cover the entire server activity.

An additional parameter, @Database, can be used to filter and capture activity for a single database only. The parameter has a default value of NULL, so if you want to capture the entire server activity rather than that of a specific database, simply omit the parameter.
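A database filter like this is typically applied with sp_trace_setfilter; a sketch of how the @Database parameter could drive it (column 35 is the documented DatabaseName column):

```sql
-- Sketch: restrict the trace to one database when @Database is supplied.
-- Column 35 = DatabaseName; the 0, 0 arguments mean logical AND and
-- comparison operator "equal".
IF @Database IS NOT NULL
    EXEC sp_trace_setfilter @TraceID, 35, 0, 0, @Database;
```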

This procedure accepts a single input parameter, @Path, the path to the trace files, which it loads into a database table using the system function fn_trace_gettable combined with a SELECT..INTO operation.
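The load step can be sketched as below; the path is a hypothetical example, and passing DEFAULT as the second argument tells fn_trace_gettable to read all rollover files of the trace:

```sql
-- Sketch: load the trace files into a work table in one statement.
-- @Path points at the first trace file; DEFAULT reads all rollover files.
DECLARE @Path nvarchar(256);
SET @Path = N'C:\Traces\BaseLine.trc';   -- hypothetical path

SELECT *
INTO   BaseLineFull
FROM   sys.fn_trace_gettable(@Path, DEFAULT);
```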

This procedure works on the captured trace data that was loaded by the former procedure into the table BaseLineFull, and inserts the aggregated results into a table named Baseline, making the results available for retrieval.
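The aggregation amounts to a GROUP BY over the raw trace rows. A sketch under assumed column names (the actual Baseline table layout may differ; note that the trace's TextData column is ntext and must be cast before grouping, and that Duration is reported in microseconds on SQL Server 2005 and later):

```sql
-- Sketch of the aggregation step; target column names are assumptions.
INSERT INTO Baseline (TextData, Executions, Total_Duration_ss,
                      Total_Reads, Total_Writes, Total_CPU_ms)
SELECT CAST(TextData AS nvarchar(4000)) AS TextData,
       COUNT(*)                         AS Executions,
       SUM(Duration) / 1000000          AS Total_Duration_ss,  -- microseconds -> seconds
       SUM(Reads)                       AS Total_Reads,
       SUM(Writes)                      AS Total_Writes,
       SUM(CPU)                         AS Total_CPU_ms
FROM   BaseLineFull
GROUP  BY CAST(TextData AS nvarchar(4000));
```

In practice the real procedure likely normalizes the statement text first (stripping parameter values) so that different invocations of the same procedure aggregate into one row.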

The result is sorted by the Total_Duration_ss column in descending order so that the most active execution appears at the top; if you pass the value 10 to the @Rows parameter, you end up getting the 10 most active executions.

Typically you have no reason to limit the output, but I added that option when I came across an application that retrieved the data and had a limitation on the number of rows it could render in an HTML email.
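The retrieval itself can be sketched as a TOP query over the aggregated table:

```sql
-- Sketch: return the @Rows most active executions, busiest first.
SELECT TOP (@Rows) *
FROM   Baseline
ORDER  BY Total_Duration_ss DESC;
```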


A sample of a report output is shown below:

A Baseline Repository can answer such questions:

(see the scripts in the Ad-Hock SQL file here)

Is my workload evenly spread (balanced) across all my web servers?

What are the most active executions on the server?

How many times an hour is each procedure executed?

Which database is the most active on the server?

Which application is the most active?

Which application consumes the most Reads?

Which application consumes the most Writes?

Which application consumes the most CPU?

Which execution visits the most data?

Which logins access the server?

Which login accesses which database?

Which client (host) accesses which database?

