SAN disk access performance | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


I’ve just installed a new quad-CPU clustered server to eliminate CPU bottlenecks we were having with our app. CPU utilization would run continuously in the 75%-100% range, causing all kinds of performance and timeout issues. Now, with this new server in place, I think we have uncovered/created a disk I/O bottleneck. Our Avg. Disk Write Queue Length is running at a rate of about 5-9 requests per disk. Our Avg. Disk Read Queue Length seems to be within a tolerable range, running about 12-15 requests per disk. Does anyone know of a good utility to monitor an FC 2214 HBA going to an MSA1000 SAN device? The OEM utilities just aren’t giving me the answers I’m looking for: specifically, real-time disk access times, data throughput, etc. I have 4 more disks on order to increase my RAID volume from 5 to 9 disks to help distribute the load. Thanks,
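A common rule of thumb for the queue-length counters mentioned above is that a sustained average of more than about 2 outstanding requests per spindle suggests a disk bottleneck. A quick sketch of that check (the threshold and example numbers are the usual guideline figures, not measurements from this thread):

```python
# Rough sanity check for PhysicalDisk queue-length counters, assuming the
# common guideline that a sustained average queue length above ~2
# outstanding requests per spindle indicates a disk bottleneck.

def queue_per_spindle(avg_queue_length, spindles):
    """Average outstanding I/O requests per physical spindle."""
    return avg_queue_length / spindles

def is_bottleneck(avg_queue_length, spindles, threshold=2.0):
    """True if the per-spindle queue exceeds the rule-of-thumb threshold."""
    return queue_per_spindle(avg_queue_length, spindles) > threshold

# Example: an average queue of 15 on a 5-disk volume
print(queue_per_spindle(15, 5))   # 3.0 requests per spindle
print(is_bottleneck(15, 5))       # True  -> over the ~2/spindle guideline
print(is_bottleneck(15, 9))       # False -> spreading across 9 disks helps
```

This is only a coarse screen; latency (Avg. Disk sec/Read and sec/Write) tells you more than queue depth alone, especially behind a SAN controller with cache.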
Did you run Profiler to find slow queries or stored procedures?
In other words, have you already tuned the database itself, so that hardware is the only avenue left?
Luis Martin
Moderator All postings are provided “AS IS” with no warranties for accuracy.
I don’t know of any third-party tools to examine performance at the SAN device or switch; OEM software is generally required. We have a similar setup and use the vendor’s (IBM) software for this analysis. Also be sure to check that your read/write cache settings are appropriately set for best performance. —————————–
Brad M. McGehee, MVP
When you get the new disks, keep in mind that I believe the sweet spot on the MSA1000 is six disks. You will probably need to divide them into two separate arrays. Also, have you considered putting your log files on RAID 10? You should. One other thing: when you configure the MSA1000 disks, it’s best to split each array across the enclosure. Example, with 14 disks:
Disks 1-3 and 11-13: RAID 5
Disks 4-10: RAID 10
Disk 14: hot spare
MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
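To put rough numbers on a split like the one above, here is a back-of-the-envelope usable-capacity sketch. The 72 GB disk size is a hypothetical example (not stated in the thread), and since RAID 10 mirrors disks in pairs, the sketch uses an even disk count:

```python
# Back-of-the-envelope usable capacity for a 14-disk MSA1000 split:
# a 6-disk RAID 5 array, a RAID 10 array, and a hot spare.
# The 72 GB per-disk size is illustrative, not from the thread.

DISK_GB = 72

def raid5_usable(disks, size=DISK_GB):
    """RAID 5 loses one disk's worth of space to parity."""
    return (disks - 1) * size

def raid10_usable(disks, size=DISK_GB):
    """RAID 10 loses half its disks to mirroring (pairs only)."""
    return disks // 2 * size

print(raid5_usable(6))    # 360 GB from the 6-disk RAID 5 array
print(raid10_usable(6))   # 216 GB from a 6-disk RAID 10 array
```

The hot spare contributes no capacity; its value is automatic rebuild when a member disk fails.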
Thanks to all who responded. Luis- Our developers have made several tweeks, mostly regarding indexes, and that has made some improvements. I work the hardware/OS side of our systems, and we want to make sure the entire environment is as optimized as possible. We have our log files on a separate raid volume from the database files. The additional disks will take us from 5 to 9 for the database volume. In order to eliminate the time it takes to expand a volume, I intend to simply delete the database volume (after backups, of course), install the new drives and recreate the volume with all disks. I am considering RAID10 for this. Derrick- What did you mean about the sweet spot on the MSA1000? Are you refering to it’s optimal capabilities, and are there any variables that change that, or does that seem to be a solid number regardless of the DB environment? Again,
Thanks to all! Ken
You need to look at the MSA1000 recommended best practices. On a RAID 5 array, maximum disk I/O performance is achieved with six disks. Also, the MSA1000 sets up its channels so that one channel serves disks 1-7 and another serves disks 8-14. By splitting the disks evenly across the channels, you gain more capacity and even distribution. It would of course vary by database environment. Generally speaking, though, with a DB environment, RAID 5 for data and RAID 10 for logs is the way to go. If you can afford all RAID 10, that’s great. It’s just awfully expensive. :)

MeanOldDBA
[email protected]

When life gives you a lemon, fire the DBA.
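The RAID 5 for data / RAID 10 for logs advice above comes down to the write penalty: every logical write costs roughly 4 back-end I/Os on RAID 5 (read data, read parity, write data, write parity) but only 2 on RAID 10 (write both mirrors). A minimal sketch, assuming an illustrative 150 IOPS per spindle (a typical figure for FC disks of that era, not a measurement from this thread):

```python
# Rough ceiling on front-end write IOPS for an array, assuming the
# standard RAID write penalties: 4 back-end I/Os per write on RAID 5,
# 2 on RAID 10. The 150 IOPS-per-spindle default is illustrative.

PENALTY = {"raid5": 4, "raid10": 2}

def max_write_iops(disks, raid, per_disk_iops=150):
    """Approximate front-end write IOPS the array can sustain."""
    return disks * per_disk_iops // PENALTY[raid]

print(max_write_iops(6, "raid5"))   # 225 -> 900 back-end IOPS / 4
print(max_write_iops(6, "raid10"))  # 450 -> 900 back-end IOPS / 2
```

For the same six spindles, RAID 10 sustains twice the write rate, which is why it suits sequential-write-heavy transaction logs even though RAID 5 gives more usable capacity.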
Check whether the disks are too fragmented. Maybe SQL Server’s read-ahead is killing the system because the data is not laid out sequentially. Gaurav
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.