Using the SQL Server Profiler

Wouldn’t it be handy if you could schedule Profiler to run automatically using the SQL Server Agent scheduler? You can if you have SQL Server 2000/2005 and follow this simple two-step process.

First, create your trace in Profiler as you normally would. Then, from the “File” menu, select “Script Trace,” and then select the version of SQL Server you want to trace. This command creates a text file that includes all of the commands necessary to run the trace you have defined.

Second, create a new Transact-SQL job step and paste the contents of the text file created above into the Command box of the job step. Now schedule the job to run. The Profiler trace will run until you manually stop it. If you want the trace to stop automatically, you will have to modify the script to stop at a specified time. [2000, 2005] Updated 2-11-2005
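To give a sense of what the scripted trace contains, here is a minimal sketch of the server-side trace commands Profiler generates. The file path, maximum file size, and the particular event/column numbers shown are illustrative; your own scripted trace will include the full set of events and columns you selected, and the stop time is the part you add yourself via the `@stoptime` parameter of `sp_trace_create`.

```sql
-- Sketch of a scripted trace that stops automatically.
-- Path, size, and event/column choices below are illustrative.
DECLARE @TraceID int, @maxfilesize bigint, @stoptime datetime
SET @maxfilesize = 5
SET @stoptime = DATEADD(hh, 8, GETDATE())  -- stop automatically after 8 hours

-- Create the trace; passing a non-NULL @stoptime makes it self-terminating
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyTrace',
     @maxfilesize, @stoptime

-- Capture SQL:BatchCompleted (event 12): TextData (column 1), Duration (column 13)
DECLARE @on bit
SET @on = 1
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1
```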

*****

If you are examining a trace between an ADO-based application and SQL Server and notice that temporary stored procedures are being created, executed, and then deleted, you have a performance problem. One way to correct it is to encapsulate the client’s Transact-SQL code in permanent stored procedures on the server. Done correctly, when the code needs to be executed, the stored procedure is called from the client and runs much more efficiently than the equivalent temporary stored procedures. [7.0, 2000, 2005] Updated 2-11-2005
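As a hypothetical illustration of this tip (the procedure, table, and column names below are invented for the example, not taken from the article), the client’s inline query becomes a permanent stored procedure created once on the server:

```sql
-- Hypothetical example: instead of letting ADO prepare a temporary stored
-- procedure for this query on every connection, create a permanent one once.
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerID int
AS
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @CustomerID
GO
```

The client then calls `dbo.GetCustomerOrders` directly (in ADO, a Command object with `CommandType = adCmdStoredProc`), so no temporary stored procedures are created, executed, and deleted on each call.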

*****

Most people only use Profiler to capture traces on a case-by-case basis, and when the trace is complete, review and analyze the data. What many DBAs don’t think about is using Profiler to monitor SQL Server activity in real time.

For example, if you have a critical production server that needs to be closely watched, consider using Profiler to track long running queries in real-time. This way, you can see which queries are running the longest, allowing you to identify potential trouble spots before they become major problems.

Each morning, I turn on Profiler to trace my most critical SQL Server. I run Profiler on my workstation, not on the server itself, in order to minimize Profiler’s impact.

The events I capture in SQL Server 7.0 Profiler include:

  • TSQL:RPC:Completed
  • TSQL:BatchCompleted

The data columns I choose to collect for my real-time Profiler monitoring in SQL Server 7.0 Profiler include (in this order):

  • Duration (data is grouped by duration)
  • Event Class
  • Text
  • CPU
  • Application Name
  • Writes
  • Reads
  • SQL User Name
  • NT User Name
  • End Time

The events I capture in SQL Server 2000/2005 Profiler include:

  • Stored Procedures–RPC:Completed
  • TSQL–SQL:BatchCompleted

The data columns I choose to collect for my real-time Profiler monitoring in SQL Server 2000/2005 Profiler include (in this order):

  • Duration (data is grouped by duration)
  • Event Class
  • TextData
  • ApplicationName
  • CPU
  • Writes
  • Reads
  • NTUserName
  • LoginName
  • SPID
  • StartTime
  • EndTime

I also specify in a filter that I only want to see events that have a duration of longer than 5 seconds. This way, I focus only on slow-performing queries, ignoring the tens of thousands of events that take less than 5 seconds.
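The event, column, and filter choices above can also be expressed as a server-side trace, which is useful if you ever move this monitoring off the GUI. This is a sketch, not the author’s exact setup: it shows only the two events and the duration filter, assuming a trace already created as `@TraceID`. Note the unit difference between versions.

```sql
-- Sketch: the two events plus the "Duration > 5 seconds" filter, server-side.
-- Event 10 = RPC:Completed, event 12 = SQL:BatchCompleted.
-- Column 13 = Duration; filter comparison operator 2 = "greater than".
DECLARE @on bit
SET @on = 1
EXEC sp_trace_setevent @TraceID, 10, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on

-- SQL Server 2000 reports Duration in milliseconds (5000 = 5 seconds);
-- SQL Server 2005 server-side traces report microseconds, so use 5000000 there.
DECLARE @fiveseconds bigint
SET @fiveseconds = 5000
EXEC sp_trace_setfilter @TraceID, 13, 0, 2, @fiveseconds
```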

Throughout the day, I take a look at Profiler to see which queries are taking longer than 5 seconds. Based on this information, I can quickly get a good idea of whether the SQL Server is running normally or more slowly than normal. This is because you tend to see the same types of queries over and over, and after a little time, you learn what is normal behavior for your server and what is not.

If I see a problem, I investigate, trying to determine its cause and, if possible, fixing it. For example, one day I noticed a lot of activity on the server that was not normal. After carefully examining the Profiler trace (in real time), I noticed that one particular developer was using the production database to test a report over and over again. The report was based on a slow-running query that bogged down the server. Once I found this out, I went to his desk and asked him to perform his testing on the test SQL Server, not the production server. When the developer stopped his activity on the production server, it returned to its normal behavior.

I let Profiler run overnight. When I get in in the morning, I take a quick look at the night’s activity, and if I see any issues, I work on them. I also restart Profiler every morning (after reviewing the night’s activity) so that I don’t get too much clutter on the screen, and to reduce the amount of RAM Profiler uses on my workstation to store the captured activity.

Besides using Profiler as described above, I also monitor my SQL Server’s activity using Performance/System Monitor, tracking real-time activity in a one-hour window. I often use the Profiler and Performance Monitor real-time views together. For example, if I notice a spurt of high activity in Performance Monitor, I check Profiler to see what is going on. When a long-running query begins, it often shows up first in Performance Monitor, although I can’t tell which query is causing the problem. Once Performance Monitor shows that the high activity is over, I go to Profiler to see which query was responsible, because a query does not show up in Profiler until it has completed executing.

Another way I use real-time Profiler data is to identify very long-running queries so that I can tune them. As they appear in the Profiler trace throughout the day, I cut and paste the Transact-SQL code from the slow-running queries into Query Analyzer for analysis. In some cases, I can identify the problem and fix it, helping to speed up the query. I say in some cases because, if the poorly performing code is generated by a third-party application, there is not much I can do except be sure I have appropriate indexes on the necessary tables. If a third-party query is very slow, I will often send the vendor a copy of the trace, hoping they might fix the problem in their next release. If the application was written in-house, I go to the developer, explain the problem with the query, and provide advice on how it can be rewritten for better performance. [7.0, 2000, 2005] 5-3-2005
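One way to analyze a pasted query in Query Analyzer (an assumption about workflow on my part, not a step the article spells out) is to turn on the I/O and timing statistics first, so each run reports logical reads and CPU/elapsed time:

```sql
-- Report logical/physical reads and CPU/elapsed time for each statement
SET STATISTICS IO ON
SET STATISTICS TIME ON

-- ...paste the slow query from the Profiler trace here and execute it...
```

Comparing these numbers before and after an index or rewrite change gives a more repeatable measure of improvement than the trace duration alone.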
