Perfmon Counters Impacting Server Performance | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


We are at the very early stages of assessing our SQL Servers to determine if we should move our data to a SAN. EMC would like us to monitor the following perfmon objects.

Add all counters and all instances for the following objects:
• Physical Disk
• Process
• Processor
• SQLServer:Access Methods
• SQLServer:Buffer Manager
• SQLServer:Databases
• SQLServer:General Statistics
• SQLServer:Latches
• SQLServer:Locks
• SQLServer:Memory Manager
• SQLServer:SQL Statistics

The sample interval would be every 60 seconds for what appears to be only 2 hours (not nearly long enough, in my opinion). Should I be concerned about the load this will place on the server? I typically do not add "ALL" counters and instances for an object when monitoring a server.

Also, when monitoring the physical disk counters below, is it necessary to divide the results by the number of physical disks in each RAID set, or are the results useful as is?

Physical Disk: Average Disk sec/Read
  Average Disk sec/Write
  Average Disk Queue Length
  Disk Reads/sec
  Disk Writes/sec
  Disk Transfers/sec

Thanks, Dave
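On the per-disk question, the usual rule of thumb is that the latency counters (Avg. Disk sec/Read and sec/Write) are already per-I/O and should be read as-is, while the aggregate counters (queue length, transfers/sec) can be divided by the number of spindles behind the volume to get a per-disk feel. A minimal sketch of that normalization, with made-up sample values and an assumed 10-spindle RAID set:

```python
# Sketch: normalizing Physical Disk counters by spindle count.
# Latency counters (Avg. Disk sec/Read, sec/Write) are per-I/O and
# should NOT be divided; queue length and throughput are aggregates
# for the logical volume, so dividing by the spindle count gives a
# rough per-disk figure. All sample numbers below are hypothetical.

SPINDLES = 10  # physical disks behind this RAID volume (assumption)

sample = {
    "Avg. Disk sec/Read": 0.008,     # 8 ms, per I/O - leave as is
    "Avg. Disk Queue Length": 12.0,  # aggregate - divide by spindles
    "Disk Transfers/sec": 1500.0,    # aggregate - divide by spindles
}

per_disk_queue = sample["Avg. Disk Queue Length"] / SPINDLES
per_disk_iops = sample["Disk Transfers/sec"] / SPINDLES

print(f"queue depth per spindle: {per_disk_queue:.1f}")  # 1.2
print(f"IOPS per spindle: {per_disk_iops:.0f}")          # 150
```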
Why not 15 or 30 seconds to capture the counters?
Get information by using a performance audit, as per Brad's reference: http://www.sql-server-performance.com/articles_performance.asp

Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Hi Satya, I’ve read these before and have followed some of his suggestions. Unfortunately, the request is coming from the vendor, EMC. They are asking that I monitor the objects previously listed. As for every 15 to 30 seconds, I would consider doing that with a small number of counters, but I would be afraid to try it on a large number of counters due to the performance hit the server would take. I also don’t believe a 24-hour period, as Brad suggests, is sufficient for getting a general feel for a server’s performance. In our environment the volume of data being processed can change greatly from one day to the next, and it cannot be predicted. Typically I prefer to capture at least one week of performance statistics, including month-end processing. I am still concerned about EMC’s request to add "All" counters and instances, despite the frequency being every 60 seconds. Perhaps I’m just being paranoid. Thanks, Dave
I thought as much after I replied to your post; you seem quite confident in your SQL Server knowledge.
I would suggest creating a baseline by collecting counters during both normal and busy periods, so you can compare the two. That gives you more to take to management and the vendor about the system’s performance. I’m sure Joe and Derrick would add more to this, as they are the champs when it comes to EMC and the like. Satya SKJ
100 counters sampled at 15-second intervals will result in a .blg file of about 28 MB over 24 hours. I prefer to collect for a full week, with emphasis on end of month or whenever the big reports are run.

I am willing to bet EMC will come back with a weak configuration that really won’t be able to cut the mustard. If you are on a 4-proc box, my blanket recommendation is:
- 4 dual-channel U320 controllers
- 4 external dual-channel storage units, 4-5 disks per channel (8-10 per storage unit, for a total of 32-40 disks)

32 x 73 GB disks in RAID 10 is approximately 1 TB. This unit should support 3,000 random IOPS against a 1 TB database at <10 ms latency (10K disks), at around 2 GB/sec sequential transfer rate. 4-8 log disks should support 100 MB/sec of writes to the log.

Let EMC provide a quote for these specs versus direct-attach storage (~$25K).
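The figures above can be sanity-checked with some back-of-envelope arithmetic. The ~50 bytes per counter sample and ~100 random IOPS per 10K-RPM spindle used here are assumptions, not measured values:

```python
# Rough check of the .blg size, RAID 10 capacity, and IOPS estimates.
# BYTES_PER_SAMPLE and IOPS_PER_SPINDLE are assumed ballpark figures.

BYTES_PER_SAMPLE = 50            # assumed on-disk size per counter value
counters = 100
interval_sec = 15
samples_per_day = 24 * 3600 // interval_sec           # 5760 samples
blg_mb = counters * samples_per_day * BYTES_PER_SAMPLE / 1e6
print(f".blg per day: ~{blg_mb:.0f} MB")              # ~29 MB

disks, disk_gb = 32, 73
raid10_tb = disks * disk_gb / 2 / 1000                # mirroring halves capacity
print(f"RAID 10 capacity: ~{raid10_tb:.2f} TB")       # ~1.17 TB

IOPS_PER_SPINDLE = 100                                # assumed for a 10K disk
print(f"random read IOPS: ~{disks * IOPS_PER_SPINDLE}")  # ~3200
```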
I also prefer to track for a week at a slightly larger interval. When you track the physical disk counters, pay close attention to the IO and queue length counters. Make sure your monitoring period covers an intensive window, such as month-end processing. Otherwise, you’re going to be in big trouble when they come back with a configuration that the sales people and management will like until the engineers actually need it to run.

Also, pull the counters for each unit. Don’t pull these counts for ALL LUNs or drive letters together as a total. Pull each drive separately, and consider what each of them is used for. Is the IO random? Is it sequential? What is the importance of each one? What is the underlying number of physical drives?

I like Joe’s recommendation. I have had very good luck with EMC, though, after I told their sales people to jump off a high bridge. The TCO will be higher for a database environment on the SAN. You still have to look at raw disks when figuring IO counts for your environment. However, by spreading the files across combinations of physical RAID 5 and 10, along with cache utilization, you can have a stable, very high performance environment on the SAN. It IS more expensive than DAS, though. If anyone tells you otherwise, tell them to go find a new client.

The advantages of the EMC (is this Clariion or Symmetrix?) are:
1. Consolidated management of storage.
2. The ability to use SnapView, SanCopy, and other SAN utilities to save space and speed up some batch processes.
   - This can really be nice if you support nine copies of production, like one of my clients. You don’t have to keep an entire physical copy of the 500 GB environment; you just have to be ready to take a performance hit on the secondary environments. You can also "refresh" environments in a few minutes vs. a few hours.
3. The ability to use it as a DR tool.
   - We’ve relocated LUNs to other servers very quickly. It’s proven to be a very effective way to bring up another environment.

To give you a little idea on cost: once you pay for the base unit, you are going to spend $30K-$38K per shelf on the EMC. There are 15 disks on each shelf. You can get 72-300 GB 10K drives or 72-146 GB 15K drives in these shelves. These are on a dual-fiber shelf with fiber-attached U320 drives. As you can see, the cost is definitely higher.

If it’s a high-OLTP environment, your log files NEED to be on dedicated RAID 10 physical arrays. You can get by with putting the data files on RAID 5 or 10, depending on your IO needs, due to the more random nature of the reads and writes. The log files will kill you, though, if you try to skimp on disk space.
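The point about raw disks and RAID 5 vs. 10 can be made concrete with the standard write-penalty rule of thumb (each logical write costs 2 physical IOs on RAID 10, 4 on RAID 5 due to read-modify-write). The workload mix and the ~100 IOPS per 10K spindle below are hypothetical:

```python
# Back-of-envelope: how a logical IO load translates to physical disk
# IOPS under the usual RAID write penalties (RAID 10: 2, RAID 5: 4).
# The workload numbers are made up for illustration.

def physical_iops(reads_per_sec, writes_per_sec, write_penalty):
    """Physical IOPS the spindles must absorb for a given logical load."""
    return reads_per_sec + writes_per_sec * write_penalty

reads, writes = 1200, 400          # hypothetical logical IO mix
print("RAID 10:", physical_iops(reads, writes, 2))  # 2000
print("RAID 5: ", physical_iops(reads, writes, 4))  # 2800

# Spindles needed at an assumed ~100 random IOPS per 10K disk:
print("RAID 5 spindles:", -(-physical_iops(reads, writes, 4) // 100))  # 28
```

This is why a write-heavy log volume belongs on RAID 10: the same write load needs roughly twice the spindles on RAID 5.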
MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
BTW, the 100 counters I mentioned above, collected at 15-second intervals, should have negligible impact. The reasons for the disk spec above are:
1. A large table scan should not severely impact transaction performance.
2. The same for checkpoints.
3. The same for t-log backups.
4. Support (compressed) backup at 500-800 MB/sec.

One might argue that a transactional server should not have table scans or other large queries, but the fact is most people do like to run large queries on OLTP systems. Fortunately, a properly configured disk system can support this at reasonable cost. I suggest soliciting bids for the above performance specs (not capacity) from multiple vendors; I believe 50-60% discounts are possible with competitive bids.
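To put point 4 in perspective, here is the backup time for a 1 TB database at the quoted compressed-backup rates. This is pure arithmetic, not measured data:

```python
# Rough time to back up a 1 TB database at the compressed-backup
# rates quoted above (500-800 MB/sec). Decimal units throughout.

db_mb = 1_000_000  # 1 TB expressed in MB

for rate in (500, 800):  # MB/sec
    minutes = db_mb / rate / 60
    print(f"{rate} MB/sec -> ~{minutes:.0f} min")  # ~33 and ~21 min
```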