Routine for monitoring perf | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Routine for monitoring perf

I'm from the developer side and had the DBA role more or less thrown onto me. I have worked as a developer for 10 years, and lately we have had big performance problems on a server. My biggest problem at the moment is where to start: we lack routines for monitoring server performance on a daily basis.
1. I have defined a real-time Performance Monitor view, but have no routine to log the data to a file or table for long-term analysis or for creating a baseline for the server. From the view we were able to figure out that we had a processor problem, by looking at processor queue length and processor time. To solve the immediate problem we got an extra processor.
2. I have tested making some server-side traces, focusing on CPU bottlenecks. My first was comparing SQL Re-Compilations/sec to Batch Requests/sec. From that trace I made a top-10 list.
3. My second attempt was a SQL trace. We read the information into a table and built a "cube" with executions/reads+writes/CPU/duration, then made top-10 lists from different angles, such as top 10 applications and top 10 statements by writes/CPU. But at the moment we have around 1500 transactions/sec during peak hours, so it takes a lot of space to make trace files and analyze the material.
4. I have played with the dynamic management views (I am not used to them) to get lists of unused indexes, recompiled statements, etc.
5. I have tested some tools like SQL Trace Analyzer.
6. I hired a more skilled DBA to help me get started.
My questions are as follows:
• How do you work with performance problems on a daily basis? Do you run traces daily?
• Do you use DMVs to collect data, or server-side traces?
• Do you use any tools like SQL Trace Analyzer or Spotlight?
We are considering a routine like this:
• Make some type of Performance Monitor log to a database, to create the baseline.
• Build automated scripts that produce top-10 lists from different angles: reads+writes per application/database/execution.
• Work with indexes, and make the application smarter, based on the top-10 lists, to solve the problems.
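A minimal sketch of the "Performance Monitor log to a database" idea, assuming SQL Server 2005 or later; the table name and the choice of counters are illustrative, not a recommendation:

```sql
-- Hypothetical baseline table, populated by a SQL Agent job every few minutes
CREATE TABLE dbo.PerfBaseline (
    capture_time  datetime      NOT NULL DEFAULT GETDATE(),
    object_name   nvarchar(128) NOT NULL,
    counter_name  nvarchar(128) NOT NULL,
    cntr_value    bigint        NOT NULL
)

-- Snapshot a few CPU-related counters into the table
INSERT INTO dbo.PerfBaseline (object_name, counter_name, cntr_value)
SELECT RTRIM(object_name), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec')
```

Note that the "/sec" counters in sys.dm_os_performance_counters are stored as cumulative totals; to get an actual rate you subtract two samples and divide by the elapsed seconds.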

There are many third-party documents and tools that discuss monitoring SQL Server and the various approaches. The main tools we use are the SQL Server blocker script, Profiler, and System Monitor. There are also many third-party applications for monitoring SQL Server; refer to the Spotlight section on this website for such tools.
1. Tracing or monitoring server usage is usually done only when there is a problem, but if a particular application database is sensitive, keeping an eye on it day to day helps reduce the impact on availability in case any issues arise.
2. We use server-side traces, DMVs, and PERFMON to collect historic information every month.
3. Only SQL Server-based tools, no third-party ones so far.
There are many links available on the web to get you up to speed on performance monitoring, in addition to this website: http://www.sql-server-performance.com/performance_monitor_counters_sql_server.asp – go through the other articles and tips in this section of the website.
KB article: http://support.microsoft.com/kb/298475
http://sqljunkies.com/Article/7F8518F9-FDAA-4FF3-8FC5-25E8946C8D0C.scuk
http://sqlserver2000.databases.aspfaq.com/how-do-i-monitor-sql-server-performance.html

Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing. Knowledge is of two kinds. We know a subject ourselves or we know where we can find information on it.
Here is what I do. You may or may not like it, but I find it works well for me.

I run server-side traces and also track the duration of procs in the trace (most of our stuff is procs – no real embedded code). I output the info from the trace into text files. I run traces every hour for about 15 minutes. I put only the records I want into a holding table, then take them from all the servers and put them into one table on a larger drive. This means the table doesn't get huge, and I also have a job that purges records from the table when they are a few weeks old.

I made reports that use the data from the table and track the frequency and duration of each proc. That way I can see what I need to focus on. If it is long-running but not called frequently, I don't care that much. If it is frequent and takes over a second, I care.

I can make copies of this table to set my benchmark, then compare the live table against that copy whenever I want.

Also, have you used the built-in reports in Management Studio? If you are using SQL 2005, they can give a top 10 by CPU or IO since the last reboot of your server. :)

One proc I love is sp_now. I will put it below. It gives a listing of whatever is running on a server, including the statement and the name of the object that is running (proc name etc.). I got it from SQL Server magazine.
Very slick, and lets you know what is causing a spike RIGHT NOW! :)

Use the "results to text" option for better viewing. :)

CREATE PROCEDURE [dbo].[sp_Now]
AS

SET NOCOUNT ON
DECLARE @handle binary(20),
        @spid smallint,
        @rowcnt smallint,
        @output varchar(500)

DECLARE ActiveSpids CURSOR FOR
SELECT sql_handle, spid
FROM sysprocesses
WHERE sql_handle <> 0x0000000000000000000000000000000000000000
  AND spid <> @@SPID
ORDER BY cpu DESC

OPEN ActiveSpids
FETCH NEXT FROM ActiveSpids
INTO @handle, @spid

SET @rowcnt = @@CURSOR_ROWS

PRINT '===================='
PRINT '= CURRENT ACTIVITY ='
PRINT '===================='
PRINT ' '
SET @output = 'ACTIVE SPIDS: ' + CONVERT(varchar(4), @rowcnt)
PRINT @output

WHILE (@@FETCH_STATUS = 0)
BEGIN
    PRINT ' '
    PRINT ' '
    PRINT 'O' + REPLICATE('x', 120) + 'O'
    PRINT 'O' + REPLICATE('x', 120) + 'O'
    PRINT ' '
    PRINT ' '
    PRINT ' '

    SELECT 'loginame'     = LEFT(loginame, 30),
           'hostname'     = LEFT(hostname, 30),
           'database'     = LEFT(DB_NAME(dbid), 30),
           'spid'         = STR(spid, 4, 0),
           'block'        = STR(blocked, 5, 0),
           'phys_io'      = STR(physical_io, 8, 0),
           'cpu(mm:ss)'   = STR((cpu / 1000 / 60), 6) + ':' +
                            CASE WHEN LEFT((STR(((cpu / 1000) % 60), 2)), 1) = ' '
                                 THEN STUFF(STR(((cpu / 1000) % 60), 2), 1, 1, '0')
                                 ELSE STR(((cpu / 1000) % 60), 2)
                            END,
           'mem(MB)'      = STR((CONVERT(float, memusage) * 8192.0 / 1024.0 / 1024.0), 8, 2),
           'program_name' = LEFT(program_name, 50),
           'command'      = cmd,
           'lastwaittype' = LEFT(lastwaittype, 15),
           'login_time'   = CONVERT(char(19), login_time, 120),
           'last_batch'   = CONVERT(char(19), last_batch, 120),
           'status'       = LEFT(status, 10),
           'nt_username'  = LEFT(nt_username, 20)
    --INTO #working1
    FROM master..sysprocesses
    WHERE spid = @spid
    PRINT ' '
    PRINT ' '

    -- Dump the input buffer to get an idea of what the spid is doing
    DBCC INPUTBUFFER(@spid)
    PRINT ' '
    PRINT ' '

    -- Use the built-in function to show the exact SQL that the spid is running
    SELECT * FROM ::fn_get_sql(@handle)

    FETCH NEXT FROM ActiveSpids
    INTO @handle, @spid
END
CLOSE ActiveSpids
DEALLOCATE ActiveSpids

Cheers!

Michael
MCDBA

"The statistics on sanity are that one out of every four persons is suffering from some sort of mental illness. Think of your three best friends — if they're okay, then it's you!"
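The hourly trace-into-a-holding-table routine described above can be sketched roughly like this, assuming SQL Server 2005; the file path and table name are made up for illustration:

```sql
-- Load a finished server-side trace file into a holding table.
-- Use INSERT INTO ... SELECT instead of SELECT ... INTO on subsequent loads.
SELECT TextData, Duration, CPU, Reads, Writes, StartTime, DatabaseID, ApplicationName
INTO dbo.TraceHolding
FROM fn_trace_gettable('D:\Traces\MyTrace.trc', DEFAULT)
WHERE Duration IS NOT NULL
```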
One thought: stay away from third-party tools if you can. Quest's tool puts a bunch of procs in each database on your production server, if I recall. Not good. They also add load to your server, something you may not want if you already have load problems!

Michael
MCDBA
Morning Satya!

Michael
MCDBA
Morning Mike,
It seems you were pretty busy last week. On the subject of your way of dealing with such issues, you are right to develop your own SPs or automated tasks to collect such historic information.
Satya SKJ
Satya,

Have you used that sp_now? Oh man, that is slick! The other day I was watching a CPU spike, just ran it, and was able to see the issue really quickly. I hardly use sp_who or sp_who2 anymore. :)

Michael
MCDBA
What I'm interested in is what to use to get the data, and what I should look for: which information to log, and which to analyze.
Is it a good approach to look at the top 10 most-called stored procedures and the queries that generate the most reads, and from there support them with better indexes and rewrite them for better plans and fewer reads, to get a better program? And work with this week after week?
Or shall we start by logging the most-recompiled stored procedures and queries, and work on getting fewer recompiles?
What shall I start with for my server trace: CPU, reads, writes, or duration? When I ask more experienced DBAs, some tell me reads, others tell me to look at CPU.
I have read a lot of documentation, blogs, and so on about what to monitor. I just want to know what you look for, so I can define my own monitoring routine.
"We use server-side traces, DMVs & PERFMON to collect the historic information every month." That is great, but still ;) what are you looking for with the server-side traces, DMVs, and PERFMON? Reads, compiles, or what?
Okay, I shall not use any third-party tools.
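For top-10 questions like these, the SQL Server 2005 DMVs can give an answer without running a trace at all; a sketch (the cached statistics reset at server restart, so this only covers activity since then):

```sql
-- Top 10 cached statements by total logical reads.
-- Swap the ORDER BY to total_worker_time (CPU) or execution_count as needed.
SELECT TOP 10
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_worker_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC
```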
Jn4u,
Refer to the links in the first reply above; there you will find the relevant information. Once you have captured the log for assessment, go through the memory- and CPU-related counters. For optimum results it is better to capture during both busy times and low-usage times on your server.

Mike,
That is quite interesting; I haven't used it yet. It seems a good candidate for blogging in this case, what do you say? (jn4u, sorry for diverting the topic a bit)
What I care about is how long it takes for the query to complete. That is what matters to the user. I believe the user is king, and they are why I have a job. :) If everything runs fast, then I don't care about CPU etc. unless it is killing the server, but then it would be long in duration, wouldn't it? :) Just my thoughts.

Satya,
It is sp_now, posted above. Give it a try. It was not written by me, but it rocks. :)

Michael
MCDBA
Well, we had a CPU problem before the upgrade, but I still want an overview of what is causing the problem. I want to limit the log size, but at the same time I want accurate information, for example by putting a filter on the "Duration" event column.
If I filter out everything that takes less than 50 ms and then make pivot tables to sum up the information into top-10 lists, is it not possible that I miss some small statement that runs hundreds of times against the database and impacts the total load? The same question applies if I limit the log size by filtering on reads, CPU, or something else.
We have a 24/7 application. Peak hours are 11:00 to 15:00, and it takes 15 minutes to fill up 5 GB of log space with no filtering.
I am struggling with the question of whether to run the server-side trace for 15 minutes every hour during peak hours, or to filter the trace and run it continuously for the full 5 hours.
The application's load is not even across all the databases. At the end of the week and the month, a lot of economic calculations drive the load; at other times it is the product-selection program that generates a lot of load, etc.
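On the filtering worry above: a statement that is individually cheap can still dominate total load through sheer execution count, which is exactly what a Duration filter hides. One compromise is to trace unfiltered for short windows and rank by summed cost rather than per-execution cost; a sketch against a hypothetical trace holding table (note that in SQL Server 2005 server-side traces, Duration is reported in microseconds):

```sql
-- Rank statements by TOTAL duration so frequent-but-fast statements still surface.
SELECT TOP 10
    CAST(TextData AS nvarchar(4000)) AS statement_text,
    COUNT(*)      AS executions,
    SUM(Duration) AS total_duration,
    AVG(Duration) AS avg_duration,
    SUM(Reads)    AS total_reads,
    SUM(CPU)      AS total_cpu
FROM dbo.TraceHolding
GROUP BY CAST(TextData AS nvarchar(4000))
ORDER BY SUM(Duration) DESC
```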
You could use server-side tracing intermittently in this case; that way you can collate the analysis for further assessment.

Satya SKJ
And you can, as I do, purge records from the table if it gets too voluminous. I would also recommend keeping only rows with an xtype of "P" (stored procedures); they will give you a basic idea of the issues.

Michael
MCDBA
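The purge job mentioned above can be as simple as a scheduled DELETE; the table name and the three-week retention window here are assumptions for illustration:

```sql
-- SQL Agent job step: drop trace rows older than three weeks
DELETE FROM dbo.TraceHolding
WHERE StartTime < DATEADD(week, -3, GETDATE())
```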
Hey Mike! Thanks for the praise for sp_now. I wrote the proc a few years ago to help show a client that the performance problem they were experiencing was NOT the result of something running on the server. I am very happy that you find it useful. I have TONS of other procs that I find just as useful. I am slowly adding them to my blog:
http://cookingwithsql.com
I also created SQL Hunter, which was mentioned in this month's edition of SQL Server magazine:
http://sqlhunter.com
It feels so good to be able to contribute to the worldwide SQL community.
Welcome to the forum, DataGod! :)

I will check sqlhunter for my searches...

MohammedU.
Moderator
SQL-Server-Performance.com

All postings are provided "AS IS" with no warranties for accuracy.
DataGod, I saw your snippet in the R2R section of SQL Mag. Good job with the site. I hope to see you around this forum. Some darn good people here!

Michael
MCDBA