
SQL Server Performance Forum – Threads Archive

Newbie SAN performance question

Greetings, I'm rather new to the SAN environment so please bear with my newbie questions.
We just got an IBM DS4300 in our environment and we're in the process of trying to find the best configuration for use with one of our SQL Servers. Using SQLIO, the numbers don't look so hot compared to our locally attached SCSI disks.
I'm looking to increase these numbers. Can the HBA ever be a bottleneck at all? We're using IBM FAStT FC2-133. Your suggestions are greatly appreciated.

SQLIO TESTS (8GB test file): IOs/sec  MBs/sec

RAID5, 14 x 74GB-15K, 64K segment size
G: Read – Random        2455.45  19.18
G: Read – Sequential    4448.65  34.76
G: Write – Random       1021.16   7.98
G: Write – Sequential   1704.33  13.32

RAID10, 13 x 74GB-15K, 64K segment size
H: Read – Random        2397.43  18.73
H: Read – Sequential    3642.40  28.46
H: Write – Random       2584.59  20.19
H: Write – Sequential   1815.31  14.18

These numbers look about right to me.
If you want great IO performance, you need lots of disks distributed over multiple IO buses.
Fundamentally, there is not a lot of performance difference between SCSI and a SAN on a per-disk basis.
Thanks Joe.
I guess I'm falling into the same trap as anyone moving to a SAN and expecting much better results.
All indications so far show our SAN to be slower than one of our old locally attached SCSI arrays, made up of 14 18GB-10K U160 drives running off an IBM ServeRAID 4M controller. I expected our new SAN to be faster than this, but it isn't. I was hoping it's somehow a configuration issue on our end. How would you configure 2 LUNs for SQL use, 1 for data and 1 for log, out of 28 74GB-15K drives? I assumed what I did should be good.
I'm in the process right now of changing them to various segment sizes to see what effect it has on the SQLIO numbers.
IF I had my choice, which I don't a lot of times (grin), I would configure a LUN on a dedicated RAID 10 array for the logs, a LUN on a dedicated RAID 10 or 5 array for the data, then another RAID 10 or 5 for tempdb, depending on how much it's going to be used. On a VLDB, I would try to get separate, or multiple, physical disk arrays for each database. The LUN level of the SAN would reflect the direct physical architecture. If these aren't highly IO-intensive databases, or you can afford some slower response times, then you can save a LOT of money by having multiple LUNs on either RAID 5 or RAID 10 groups. It just depends what your needs are performance-wise. Having a SAN doesn't guarantee faster responses. The caching of a SAN can make a huge difference, though, in overall SQL Server performance if set up correctly. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
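
As a purely illustrative sketch of that layout, the data/log placement can be spelled out when the database is created; the database name, file sizes and paths below are made up, and G:/H: just stand in for whichever LUNs end up holding data and log:

rem -- hypothetical example: data file on the G: LUN, log file on the H: (RAID 10) LUN
osql -E -Q "CREATE DATABASE SalesDB ON PRIMARY (NAME = SalesDB_data, FILENAME = 'G:\MSSQL\Data\SalesDB_data.mdf', SIZE = 4096MB) LOG ON (NAME = SalesDB_log, FILENAME = 'H:\MSSQL\Log\SalesDB_log.ldf', SIZE = 1024MB)"

The point is simply that data and log land on physically separate arrays; the same idea extends to giving tempdb and index filegroups their own LUNs.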
Looks like these gents covered most of it. A couple of things to bear in mind: Fibre Channel (2 Gb/s on your DS4300) actually has less raw bandwidth than SCSI Ultra 320 (roughly 200 MB/s versus 320 MB/s). When building these configs you hope the switching engine and caching make up for it, but they might not, depending on use, as the big dogs here pointed out. We have several of the 4300s and have seen some of these limits.
You might want to look at a couple of other ways to speed up access, such as: moving your indexes to their own RAID 1, employing derived tables instead of temporary tables in your code, spreading tempdb across several files, and setting up a RAID array for dumps (to lessen the impact of backups on the office-hours workload). These may be a little different from what you're looking for, but planning ahead is a lot easier than redesigning later, and you've already spent a boatload of $$$ on the 4300. Some ideas anyway. Randy
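
Two of those suggestions are easy to script. A rough sketch via osql, where the T: and K: drive letters, file names and sizes are nothing more than placeholders (a LUN for extra tempdb files and a LUN set aside for dumps):

rem -- spread tempdb over an additional file on its own LUN (repeat per extra file)
osql -E -Q "ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB)"

rem -- send full backups to a dedicated dump array so the office-hours workload isn't hit
osql -E -Q "BACKUP DATABASE SalesDB TO DISK = 'K:\Dumps\SalesDB_full.bak' WITH INIT"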
quote: Originally posted by derrickleggett — IF I had my choice, which I don't a lot of times (grin), I would configure a LUN on a dedicated RAID 10 array for the logs… [snip]

I would not get bent out of shape over the fact that FC at 2Gbit/sec = 200MB/sec (after accounting for overhead) is less than SCSI at 320MB/sec.
These numbers are not exactly comparable, because the FC ring-type protocol can achieve higher utilization than the shared bus on SCSI.
I have seen as much as 240MB/sec sustained on a single U320 SCSI bus going to 4 disks. It might be possible to get higher in large-block sequential ops.
2Gb/s FC can sustain 195MB/sec. However, in most SANs, where there is one FC link from the server to the SAN and then a second FC link from the SAN to the actual disks, something like 160-170MB/sec is sustainable.
If everything is in the SAN cache, then 195MB/sec is sustainable.
I am going to guess that it might be possible to drive more small-block IO on FC than SCSI because of the protocol differences, but I have not verified this.
A bigger annoyance (but not an overriding one) is that I can get large sequential transfers of 50-60MB/sec per disk on SCSI but only 10-12MB/sec on SANs.
This might be due to the way SQL Server and the Windows OS handle SAN traffic rather than an inherent problem with the SAN, but I do wish the appropriate vendors would slow down on the hot air and fix this annoyance. It is more serious in data warehouse apps, where super-fast table scans on reasonably priced storage are desired.

Exactly. I was hoping someone with more in-depth numbers would explain this, but I didn't have the time or the desire to go into that level of detail. I was just pointing out that he shouldn't be disappointed that the DS4300 isn't obviously faster. The DS4300 is probably a little faster than your loop example though, because it is a switched backplane with a 512MB cache module. Thanks Joe.
quote: Originally posted by joechang — i would not get bent out of shape over the fact that FC at 2Gbit/sec = 200MB/sec (after accounting for overhead) is less than SCSI at 320MB/sec… [snip]

Thank you very much gentlemen. I’m learning a great deal here.
In doing some further SQLIO benchmarks, we do get some interesting numbers.
I hope I'm using the right settings for SQLIO too (e.g. sqlio -kR -s360 -frandom -o8 -b8 -LS -DRIVE_param.txt).

Some SQLIO benchmark data:

RAID5 drive count    | 27 x 74GB        | 13 x 74GB        | 8 x 74GB          | 5 x 74GB
                     | IOs/s     MBs/s  | IOs/s     MBs/s  | IOs/s      MBs/s  | IOs/s      MBs/s
Read – Random        | 2484.14   19.40  | 2455.45   19.18  |  1691.69   13.22  |  1502.65   11.74
Read – Sequential    | 3807.25   29.74  | 4448.65   34.76  | 10741.71   83.90  | 11940.45   93.28
Write – Random       | 1065.66    8.32  | 1021.16    7.98  |   597.62    4.67  |   412.49    3.22
Write – Sequential   | 1643.15   12.83  | 1704.33   13.32  |  2014.26   15.74  |  1792.94   14.01

RAID5 w/ 8 x 74GB    | 128K segment     | 64K segment      | 32K segment      | 8K segment
                     | IOs/s     MBs/s  | IOs/s     MBs/s  | IOs/s     MBs/s   | IOs/s     MBs/s
Read – Random        | 2528.31   19.75  | 2455.45   19.18  | 2340.51   18.29   | 2139.65   16.72
Read – Sequential    | 3632.80   28.38  | 4448.65   34.76  | 5026.60   39.27   | 5348.88   41.79
Write – Random       | 1108.51    8.66  | 1021.16    7.98  |  921.80    7.20   |  872.14    6.81
Write – Sequential   | 1648.03   12.88  | 1704.33   13.32  | 1790.98   13.99   | 1903.05   14.87

My conclusions so far: more HDs = higher random r/w but slower sequential r/w. We see diminishing returns beyond 12-14 HDs in an array for random access performance. What gets me here is the sequential read numbers and how high they scale when you have fewer disks. This kind of goes against what I see with SCSI disk systems, where more arms usually means more IOs. How would you interpret this piece of info? The same is true when changing segment sizes: larger segment size = higher random r/w, slower sequential r/w.
Should I go with the vendor-recommended 64K? My overall conclusion right now is that the IBM SAN controllers can't handle as many IOs as SCSI, which really hurts random access performance. It really flies with anything sequential due to the huge cache. Is this an informed conclusion? Thanks again.
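
For reference, a batch covering that test matrix might look something like the sketch below; the param file name, test file path/size and block sizes are placeholders, so check the SQLIO readme for the exact flag meanings rather than taking this as gospel:

rem -- param.txt, one line per test file: <path> <threads> <mask> <size in MB>
rem    e.g.  G:\sqliotest.dat 8 0x0 8192

rem -- 8K random and sequential, reads then writes (add -b64 runs for the 64K comparison)
sqlio -kR -s360 -frandom     -o8 -b8 -LS -Fparam.txt
sqlio -kR -s360 -fsequential -o8 -b8 -LS -Fparam.txt
sqlio -kW -s360 -frandom     -o8 -b8 -LS -Fparam.txt
sqlio -kW -s360 -fsequential -o8 -b8 -LS -Fparam.txt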

How many fiber channel connections do you have from the host to the SAN? Try adding more paths. Also, you really, really, really need to test RAID 10 RAID groups with dedicated drives. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Sorry, I should clarify… All these RAID arrays have their own dedicated drives, from which a single logical drive is created. I just run the benchmarks one at a time for each drive, reconfigure the RAID arrays, then run more benchmarks. The server only has 1 HBA with 1 FC connection to the DS4300 SAN.
Would an HBA with dual FC ports get me better performance?
Would adding another HBA be the same?
I assumed that multiple HBAs or FC paths only help when there's concurrent disk access from the same server.

Re: the 64K question. 64K lines up with how SQL Server 2000 does its IO (data is stored in 8K pages and allocated in 64K extents), hence we set all our drives to 64K so the storage doesn't have to split up those transfers. I have seen and been told recommendations to that effect as well. Regarding the Host Bus Adapter, we only use dual HBAs in our 4300s, so I won't speculate. You might want to make sure your HBA's "auto-sense" is connecting at 2Gb/s, because it will downshift if it encounters an issue. Do you know if your organization purchased the "turbo option" (denoted by the 2GB cache)? Are you running FC and SATA drives or just FC drives? You also might want to check the firmware revision on the controller; it was updated recently.
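
On the NTFS side, the 64K allocation unit is set when the volume is formatted; a minimal sketch (the drive letter and label are placeholders, and of course this wipes the volume, so only do it on an empty LUN):

rem -- format the data LUN with a 64K NTFS allocation unit
format G: /FS:NTFS /A:64K /V:SQLDATA /Q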
quote: Originally posted by arkitek — Sorry I should clarify… the server only has 1 HBA with 1 FC connection to the DS4300 SAN… [snip]

quote:
Would an HBA with dual FC ports get me better performance?
–It can help significantly. Read up on how FC and multi-pathing are used with the SAN. Try it and see what your performance is like after you add the channels.
quote:
Would adding another HBA be the same?
–It depends on channel distribution. It can be the same. However, look at the server and put the other HBA on a separate channel. That provides distribution of IO and redundancy at the channel level.
quote:
I assumed that multiple HBAs or FC paths only help when there's concurrent disk access from the same server.
–Not necessarily. Look at the above. I'm not guaranteeing this will make a big performance difference; I have seen performance increases from it though. In addition, I would never run a critical production system without multiple HBA cards on multiple channels of the host. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Thanks again everyone. Cannonphoder, we do have the Turbo version. I will have to verify the BIOS/firmware version; I believe it's up to date. All drives are FC.
I think I will have to rerun all the SQLIO benchmarks at a 64K block size as well, for a more accurate picture. I will have to try an additional HBA, or a dual-port one, to see. Gotta look into LUN virtualization also. Exciting stuff.
I've also read that software like Veritas Volume Manager or Solstice DiskSuite can help configure load balancing of I/O across all the paths. Does anyone have experience with these products?
If you're going to have multiple copies of a LUN on an instance (for example 3 copies of the production database LUN on one host for QA, UAT, and RELEASE) or are going to be dynamically installing disks, then Volume Manager is great. The tools Windows provides suck!!! :) That's my objective opinion of course.

MeanOldDBA
[email protected]

When life gives you a lemon, fire the DBA.
Hi
I was looking at the info you left and the test results from the beginning. Interesting stuff. The Turbo is the faster model, so you're doing well there. I'm not sure what to think of the SQLIO tests, though; I'm told RAID 10 reads faster than RAID 5 in our servers (in RAID 5 the controller has the overhead of adding and removing parity from the stripes, whereas RAID 10 is a pure transfer). Please let us know how the dual HBA affects your setup. A penny for your thoughts anyway.

SQLIO TESTS (8GB test file): IOs/sec  MBs/sec

RAID5, 14 x 74GB-15K, 64K segment size
G: Read – Random        2455.45  19.18
G: Read – Sequential    4448.65  34.76
G: Write – Random       1021.16   7.98
G: Write – Sequential   1704.33  13.32

RAID10, 13 x 74GB-15K, 64K segment size
H: Read – Random        2397.43  18.73
H: Read – Sequential    3642.40  28.46
H: Write – Random       2584.59  20.19
H: Write – Sequential   1815.31  14.18
quote: Originally posted by arkitek — Thanks again everyone. Cannonphoder, we do have the Turbo version… [snip]

We implemented an older FAStT 200 (don't ask) as a temporary system to get experience with the IBM gear. I used a 64K block size at both the hardware and NTFS level. It seemed really sluggish, especially with writes and SQL backup throughput. We then tried 8K for the data and 4K for the logs and got very, very good latency (< 1ms) on writes. I'm currently trying to figure out the optimum array/LUN size on a DS4500 with 28 drives.
Thanks.
This is really good stuff guys. Keep the posts coming about the tests. :)

MeanOldDBA
[email protected]

When life gives you a lemon, fire the DBA.
Very interesting Craig. I’m wondering if the firmware is handling the block sizes wrong or something like that. What version of SQL are you using please?
quote: Originally posted by CraigPurnell — We implemented an older FAStT 200 (don't ask) as a temporary system to get experience with the IBM gear… [snip]

Having tried a dual-HBA setup on a host, here are some performance stats (Baseline is IOs/sec; Test1 and Test2 are shown as the change relative to Baseline):

Baseline – Single HBA, SQLIO tests run against 1 LUN at a time
Test1 – Single HBA, concurrent SQLIO tests on both LUNs
Test2 – Dual HBA, concurrent SQLIO tests on both LUNs (each LUN dedicated to its own HBA and RAID controller via the Preferred Path setting)

DATA = RAID5, 14 x 74GB-15K, 64K segment size
                     | Baseline  | Test1  | Test2
Read – Random        | 2455.45   |   -5%  |    0%
Read – Sequential    | 4448.65   |  -27%  |    0%
Write – Random       | 1021.16   |  -38%  |   -4%
Write – Sequential   | 1704.33   |  -36%  |  -29%

LOG = RAID10, 13 x 74GB-15K, 64K segment size
                     | Baseline  | Test1  | Test2
Read – Random        | 2397.43   |   -5%  |    0%
Read – Sequential    | 3642.40   |  -26%  |    0%
Write – Random       | 2584.59   |  -63%  |  -19%
Write – Sequential   | 1815.31   |  -31%  |  -23%

As you can see, there is quite a drop in performance, notably in write IOs, when both LUNs are heavily accessed at the same time. A dual HBA will definitely benefit a SQL Server with the DATA and LOG volumes each dedicated to their own HBA and RAID controller (via the Preferred Path setting). This is without using multipath software like Veritas Volume Manager either; that will be next on the list to test. There is still a drop in performance affecting sequential writes with the dual-HBA config. What would be the reason for this? I'm wondering if the host server might be the culprit. Running concurrent tests with 2 host servers might give us a more accurate picture.
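
For anyone wanting to repeat the concurrent runs, one simple way to hit both LUNs at once from a single host is to launch two SQLIO instances in parallel with start; the param file names below are placeholders, each pointing at a test file on its own LUN:

rem -- kick off one SQLIO run per LUN at the same time
start "data LUN" sqlio -kW -s360 -frandom     -o8 -b8 -LS -Fdata_param.txt
start "log LUN"  sqlio -kW -s360 -fsequential -o8 -b8 -LS -Flog_param.txt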
When you use Veritas Volume Manager, test with multiple LUNs on the same physical RAID group and multiple LUNs on different RAID groups. That would be a nice metric to have. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Hi all, referring to arkitek's benchmarks:

RAID5 w/ 8 x 74GB    | 128K segment     | 64K segment      | 32K segment      | 8K segment
                     | IOs/s     MBs/s  | IOs/s     MBs/s  | IOs/s     MBs/s   | IOs/s     MBs/s
Read – Random        | 2528.31   19.75  | 2455.45   19.18  | 2340.51   18.29   | 2139.65   16.72
Read – Sequential    | 3632.80   28.38  | 4448.65   34.76  | 5026.60   39.27   | 5348.88   41.79
Write – Random       | 1108.51    8.66  | 1021.16    7.98  |  921.80    7.20   |  872.14    6.81
Write – Sequential   | 1648.03   12.88  | 1704.33   13.32  | 1790.98   13.99   | 1903.05   14.87

I'm guessing that by segment you mean block size? Or am I misinterpreting you?
Also, if you do mean block size, I'm in the dark as to why a larger block size is bad for sequential writes. It seems counter-intuitive, because you should have fewer I/O operations if you're writing larger sequential blocks. Cheers,
-Kilka
quote: Originally posted by joechang — A bigger annoyance (but not overriding) is that I can get large sequential transfers of 50-60MB/sec per disk on SCSI but only 10-12MB/sec on SANs… [snip]

At my previous site, we migrated some of our SQL Servers from EMC to an IBM FAStT900. I was used to seeing 10 MB/s for large SQL backups (the database at that point in time was 150 GB), and during our testing of the FAStT900 I got peaks of 100 MB/s. I can try and find out, but I believe they are consistently getting between 70 and 80 MB/s for those full backups. I can't comment on the LUN etc. configuration – it went over my head in the brief overview we got. As was mentioned (by Derrick, I think), in an ideal world tempdb, data and log all get their own drives, and if possible so do indexes. *sighs* … now I'm at a site where we have 3 SANs … and they are all obsolete (I kid you not) – we have no warranty and only part replacements, so they keep the 3rd SAN for swap-out, etc. *cries* … I do miss that previous site's budget – they are starting to play with 64-bit SQL Server now… CiaO Panic, Chaos, Disorder … my work here is done –unknown
Kilka, I'm calling it segment size since the SAN software calls it that. It's the same as block size, I believe.
I'm in the same line of thought as you, in that sequential IO should increase as the segment size increases.
I'm now fairly sure it has something to do with my SQLIO block size setting of 8K (the -b8 above). I really should rerun all the tests at 64K, but I started with 8K so I'm keeping it the same for comparison. I'm hoping to confirm this soon. Craig, try RAID configurations with 8 or 4 drives, as they seem to give the best results for some reason. To get the best performance, it seems you pretty much have to keep the RAID configs to 4 or 8 drives and use virtualization software or hardware to group them together. Our vendor told us this also, though no really good reason was given to us as to why this is the case. I tried RAID configs of 12 up to 24 HDs and performance was horrible, not in line with what you would expect from a SCSI setup.
Thanks Arkitek. Do you know if anyone else has done this sort of benchmark with SQLIO? I'm looking for stats independent of SAN configuration. Cheers,
-Kilka
I haven't come across any vendors that publish IO numbers, let alone SQLIO numbers. SQLIO is a fairly new tool from Microsoft, written specifically to test/troubleshoot SAN storage for SQL Server. According to its readme it can be used on local disks as well. It's much quicker/easier to use than IOMeter, which I believe is the industry standard for disk IO benchmarking.