Using the SQL Server Profiler

If you are experiencing performance problems between a client application and its SQL Server backend, perform a Profiler trace to capture and view the traffic being sent from the client to SQL Server. One of the biggest performance problems between applications and SQL Server is excessive round-trip traffic between the two. A Profiler trace can easily identify this type of traffic and can suggest how best to deal with it. The more you can reduce network round-trips between an application and SQL Server, the better your performance will be. [7.0, 2000, 2005] Updated 5-1-2006
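
As a simple illustration, here is a sketch of what chatty traffic looks like compared to batched traffic in a trace (the dbo.OrderItems table is hypothetical):

  -- Chatty pattern: the application sends each statement separately,
  -- so every INSERT costs a full network round-trip and Profiler shows
  -- a separate SQL:BatchCompleted event for each one.
  INSERT INTO dbo.OrderItems (OrderID, ProductID, Qty) VALUES (1001, 5, 2)

  -- Batched pattern: the application sends all the statements as a
  -- single batch, so Profiler shows one SQL:BatchCompleted event and
  -- the network makes one round-trip instead of three.
  INSERT INTO dbo.OrderItems (OrderID, ProductID, Qty) VALUES (1001, 5, 2)
  INSERT INTO dbo.OrderItems (OrderID, ProductID, Qty) VALUES (1001, 7, 1)
  INSERT INTO dbo.OrderItems (OrderID, ProductID, Qty) VALUES (1001, 9, 4)

Wrapping the same work in a stored procedure accomplishes the same reduction in round-trips while keeping the logic on the server.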

*****

If you use Profiler to track the duration of transactions, you might notice an unusual relationship between a transaction's duration and the amount of CPU time and/or database reads/writes it incurs. Typically, you would expect a strong correlation between long-running transactions and heavy CPU time and/or database reads/writes. But this is not always the case.

For example, I have seen very long-running transactions, many minutes long, use very little CPU time and/or produce very little in the way of database reads/writes. How can this be? How can a transaction take a long time to run if it uses so few resources on SQL Server?

The first clue to this dilemma is that in almost all cases where I see transaction duration fail to correlate with SQL Server resource use, the application in question was written in Visual Basic or Microsoft Access. Third-party applications can show the same behavior, although in that case you don't know which language was used to write them.

In almost all these cases, the culprit is not SQL Server, but the application itself. The application begins a transaction and, for whatever reason, doesn't close it as soon as its work is done. This is a prime example of a poorly coded application. One of the first rules of application design is to keep transactions as short as possible. Long transactions can lead to deadlocks, blocking locks, and overall poor application performance. If you identify an application like this, let the application's developers know about the problem. [6.5, 7.0, 2000, 2005] Updated 5-1-2006
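
If you suspect this behavior, you can confirm it from the server side. DBCC OPENTRAN reports the oldest active transaction in the current database, and on SQL Server 2005 the transaction DMVs show how long every open transaction has been sitting there:

  -- Reports the oldest active transaction in the current database,
  -- including its start time and the SPID that owns it.
  DBCC OPENTRAN

  -- SQL Server 2005: all open transactions and their ages, in seconds.
  SELECT st.session_id,
         at.transaction_begin_time,
         DATEDIFF(second, at.transaction_begin_time, GETDATE()) AS open_seconds
  FROM sys.dm_tran_active_transactions AS at
  JOIN sys.dm_tran_session_transactions AS st
       ON at.transaction_id = st.transaction_id
  ORDER BY at.transaction_begin_time

A transaction that has been open for minutes while accumulating almost no CPU or I/O is a strong sign the application is holding it open.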

*****

When the Index Tuning Wizard or Database Engine Tuning Advisor runs, it simulates the execution of the queries that have been captured by Profiler. Because the queries are simulated, not actually run, any query or stored procedure that refers to a temporary table cannot be effectively evaluated by either tool. These queries and stored procedures must be tuned manually. [7.0, 2000, 2005] Updated 5-1-2006
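
For example, a procedure like this hypothetical one cannot be evaluated by the wizard, because its temporary table exists only at run time; to tune it manually, run it yourself with statistics turned on and examine the execution plan (dbo.Orders is an illustrative table name):

  -- The wizard cannot simulate this: #totals exists only at run time.
  CREATE PROCEDURE dbo.usp_CustomerTotals
  AS
  BEGIN
      CREATE TABLE #totals (CustomerID int, Total money)

      INSERT INTO #totals (CustomerID, Total)
      SELECT CustomerID, SUM(Amount)
      FROM dbo.Orders
      GROUP BY CustomerID

      SELECT CustomerID, Total FROM #totals WHERE Total > 1000
  END
  GO

  -- Manual tuning: measure the I/O each statement actually generates.
  SET STATISTICS IO ON
  EXEC dbo.usp_CustomerTotals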

*****

If you want to tune triggers, how do you capture them using Profiler? This is not very intuitive, but it is easy. For example, the following Profiler options will allow you to capture trigger data, letting you identify triggers and see if they need to be tuned.

You can choose to capture as many events as you find useful, but the key event you must include in order to capture trigger data is:

  • SP:StmtCompleted

Some of the data columns you may want to choose include:

  • Duration
  • Event Class
  • TextData (required to identify triggers and their code)
  • ApplicationName
  • CPU
  • Writes
  • Reads
  • SPID
  • StartTime
  • EndTime

As with any Profiler trace, you will want to include one or more filters in order to reduce the amount of traffic you collect. Select those filters that are appropriate to meet your goals.
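
If you prefer to define the trace in code rather than through the GUI, here is a minimal server-side sketch using the sp_trace_* system procedures. The output file path is an assumption; event 45 is SP:StmtCompleted, and the column numbers correspond to the data columns listed above:

  DECLARE @TraceID int, @maxsize bigint, @on bit
  SET @maxsize = 50   -- maximum trace file size, in MB
  SET @on = 1

  -- Create the trace; SQL Server appends .trc to the file name.
  EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\TriggerTrace', @maxsize

  -- Event 45 = SP:StmtCompleted, with the data columns listed above.
  EXEC sp_trace_setevent @TraceID, 45, 1,  @on   -- TextData
  EXEC sp_trace_setevent @TraceID, 45, 13, @on   -- Duration
  EXEC sp_trace_setevent @TraceID, 45, 18, @on   -- CPU
  EXEC sp_trace_setevent @TraceID, 45, 16, @on   -- Reads
  EXEC sp_trace_setevent @TraceID, 45, 17, @on   -- Writes
  EXEC sp_trace_setevent @TraceID, 45, 12, @on   -- SPID
  EXEC sp_trace_setevent @TraceID, 45, 14, @on   -- StartTime
  EXEC sp_trace_setevent @TraceID, 45, 15, @on   -- EndTime
  EXEC sp_trace_setevent @TraceID, 45, 10, @on   -- ApplicationName

  EXEC sp_trace_setstatus @TraceID, 1   -- status 1 starts the trace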

When you view the trace data, identify the trigger names from the TextData column, which also displays the code found within the trigger itself. If you find that the duration, or the reads and writes, for a trigger seems excessive, you can cut and paste the trigger code into Query Analyzer or Management Studio for more detailed query analysis. [2000, 2005] Updated 5-1-2006

*****

While table scans aren't always a bad thing, too many of them may indicate a performance problem in your database. For example, your tables may be missing needed indexes, or perhaps you have poorly written WHERE clauses that force table scans.
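
For example, a WHERE clause that wraps a column in a function is not sargable and forces a scan even when the column is indexed; rewriting it as a range predicate lets the index be used (dbo.Orders and its index on OrderDate are hypothetical):

  -- Forces a scan: the function on OrderDate prevents index use.
  SELECT OrderID FROM dbo.Orders
  WHERE DATEPART(year, OrderDate) = 2005

  -- Uses the index on OrderDate: same result, written as a range.
  SELECT OrderID FROM dbo.Orders
  WHERE OrderDate >= '20050101' AND OrderDate < '20060101'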

One way to help identify excessive table scans is to use Profiler. While the method I describe below is not very elegant, it does work. What you need to do is create a Profiler trace with these specifications:

Profiler Events:

  • Scan: Started

Profiler Data Columns:

  • IndexID (Grouped)
  • Event Class
  • ApplicationName
  • SPID
  • Plus any others you find useful

Profiler Filters:

  • DatabaseName Like: [Database name being profiled]
  • Exclude any applications you don’t need to track, such as SQL Agent, etc.

When this trace is run, it will group events by IndexID, so you can see which index (or heap) each scan is being performed on. This way, you can more easily identify which IndexIDs are causing the most table scans.

Now, the problem with this is that all you get is the IndexID, which refers to index objects found in the sysindexes table of the database you are tracing. The IndexID in Profiler matches up with the indid column in the sysindexes table, and the name of the object can be found in the name column. When you look at the name column, you will see that many of the table scans are performed on system tables. You can ignore these; what you want to focus on are the user tables. Unfortunately, you can't directly see the column name, but you will see the index name or the statistic's name. A column can not only have an index, it can also have statistics on it, even if there is no index.
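
Because IndexID values are only unique within a table, it also helps to capture the ObjectID data column in the trace; it matches the id column in sysobjects. Failing that, a query like the following lists the indid-to-name mapping for every user table so you can match things up by eye (indid 0 is the heap, indid 1 is the clustered index, and auto-created statistics usually have names beginning with _WA_Sys_):

  -- Map IndexIDs from the trace to index/statistic and table names.
  SELECT o.name AS TableName,
         i.indid AS IndexID,
         i.name  AS IndexOrStatisticName
  FROM sysindexes AS i
  JOIN sysobjects AS o ON i.id = o.id
  WHERE o.type = 'U'          -- user tables only
  ORDER BY o.name, i.indid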

Once you have identified the name of the object that is subject to many table scans, the next part of your detective job is to identify the column referred to by the index or statistic name. Once you do that, you have identified the problem column and can examine it to see whether it would benefit from better indexing or a better-written query. [2000, 2005] Updated 5-1-2006

Continues…
