SAN / DAS | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Hi, we’re choosing between getting a SAN or DAS setup.
Dell 6650, 4 procs, 4GB RAM. The first option we were considering was getting a Dell PV220S as direct attached storage on the server: RAID 10, 6 x 146GB (we can configure this as we want, basically).
The other option would be to slice out 250GB from our hosting provider’s SAN (RAID 5). The SAN seems like a painless and scalable solution.
The hosting provider claims that it will offer just as good performance as the DAS,
and is more fault-tolerant. (The only downside is that it’s more expensive.)
They will adjust the SAN if there is contention for I/O resources between several applications; so far they have needed to do this once. Current config: the server would have drives C & E on 2 different RAID sets:
C: OS & executables & tempdb, RAID 1, 73GB
E: DATA, 250GB on SAN

Any comments on going with the SAN instead of getting a DAS?

i would set target numbers for:
1. random IO
2. sequential IO
3. latency at low & high queue depth
if the SAN can meet the target numbers, it's a viable option.
generally i never like to share disks for a transactional DB.
also, it's better to spread load across many disks, i.e., 14 x 73GB instead of 6 x 146GB
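The target-number approach can be roughed out on paper before any benchmarking. A minimal sketch, using the 100-IOPS-per-disk figure mentioned later in this thread and an assumed ~60 MB/sec sequential rate per 10K spindle (both are back-of-envelope assumptions; real targets should come from measuring the actual hardware, e.g. with SQLIO):

```python
# Back-of-envelope aggregate targets for a striped array of identical
# disks. Per-spindle rates are assumptions, not measurements: ~100
# random IOPS and ~60 MB/sec sequential per 10K SCSI drive.

def array_targets(spindles, iops_per_disk=100, seq_mb_per_disk=60.0):
    """Aggregate random-IO and sequential targets for the whole array."""
    return {
        "random_iops": spindles * iops_per_disk,
        "sequential_mb_s": spindles * seq_mb_per_disk,
    }

small_disks = array_targets(14)   # 14 x 73GB config
large_disks = array_targets(6)    # 6 x 146GB config, same rough capacity

# More spindles means more random IOPS for the same total capacity,
# which is the point of preferring 14 x 73GB over 6 x 146GB.
print(small_disks)
print(large_disks)
```

Under these assumed per-disk rates, the 14-spindle layout offers more than twice the random-IO headroom of the 6-spindle one at similar capacity.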

Joe, I have seen you give this advice before and I am curious as to why? generally i never like to share disks for a transactional DB. Let's say you have 60 disks and each disk can do 100 random 8KB IOPS. If I had two servers that needed storage, I would give 30 disks to one server and 30 to the other. But another option would be to spread both servers' LUNs across all 60 disks. Now if both servers were getting hammered 100% of the time this would not give you any gain; however, if both servers have access patterns that are very bursty in nature, this setup would help. Each server would benefit from 60 disks, not 30. Thanks
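The burst-sharing argument above can be illustrated with a toy Monte Carlo. This is only a sketch under the numbers stated in the post (60 disks at 100 random IOPS each) plus an assumed burst profile: each server needs 4000 IOPS 20% of the time and is idle otherwise.

```python
import random

# Toy model: two servers with bursty load drawing from either dedicated
# spindles (30 each) or one shared pool (all 60). Burst profile is an
# assumption for illustration: 4000 IOPS needed 20% of the time.

DISK_IOPS = 100
random.seed(1)

def shortfall(shared, trials=100_000):
    """Fraction of intervals in which some server's demand exceeds
    the IOPS available to it."""
    misses = 0
    for _ in range(trials):
        demands = [4000 if random.random() < 0.2 else 0 for _ in range(2)]
        if shared:
            # both servers draw on all 60 spindles (6000 IOPS pooled)
            if sum(demands) > 60 * DISK_IOPS:
                misses += 1
        else:
            # each server owns 30 dedicated spindles (3000 IOPS)
            if any(d > 30 * DISK_IOPS for d in demands):
                misses += 1
    return misses / trials

print(f"dedicated: {shortfall(shared=False):.1%} of intervals starved")
print(f"shared:    {shortfall(shared=True):.1%} of intervals starved")
```

With these assumptions, the dedicated split starves a server in roughly 36% of intervals (any single burst exceeds 3000 IOPS), while the shared pool starves only when both servers burst at once, about 4%. As the post notes, if both servers are busy 100% of the time the pooling advantage disappears.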

which part of the advice? the target numbers or the sharing disks?
i give the target numbers because everyone wants to believe that SAN does wonderful things, only to end up with really horrible performance.
i say never share disks for the transactional db because most companies only have 1 OLTP system;
they may have another DB for reporting, DW, or QA,
so when those get used, transactions virtually stop.
Sorry Joe, forgot to put quotes around the statement:

"generally i never like to share disks for a transactional DB."

I guess my situation is unique. I actually manage several hundred databases across close to 150 SQL Servers. The problem for us with a traditional SAN is managing growth. Our customers pay for a set amount of storage that can change every month. I could have a 50GB database one month and then they want it to be 300GB the next.

With a traditional SAN like an EMC it makes it very hard to maintain a system like this.

In your example above I guess you are saying that the reporting DB or DW DB starts to over-utilize the SAN resources? When you see customers with multiple apps on a single SAN, where do you find the resources being over-utilized?

IOPS?
or sucking up the bandwidth?

Couldn't you limit the FC connections to keep other servers from sucking too much bandwidth? Is it possible to throttle individual connections to a SAN?

rockmoose,

sorry for jumping in on your thread without giving advice ;) I would take the direct attached RAID 10 over the shared SAN in your case.

Bert
>> sorry for jumping in on your thread without giving advice ;) I would take the direct attached RAID 10 over the shared SAN in your case.

No problem at all, I appreciate all valuable input of any form.
Is there any major reason why you advise the DAS?

To me the 2 solutions don't seem to have any major advantages/drawbacks over each other, and both are viable.
We would choose the SAN because it seems to give us the easiest and most stable & scalable solution.

Regards,
rockmoose
given that a Dell PV220S configured for 2 U320 SCSI channels, with 14 x 73GB 10K drives runs $7.5K, minus whatever discount you can squeeze out of Dell,
compared to $2-3K per drive from a SAN vendor, i just don’t see what benefit a SAN has.
SAN vendors like to talk scalability, but i am more inclined to think the only thing really scalable on SAN is the price you can pay.
I just had a case where someone had a DMX 800 with 50-60 drives getting 14MB/sec sequential, and the SAN engineer tried to claim everything was fine, it must be the DBA’s app. talk about a total waste of time.
on the control of resources, i am not aware of anything. if multiple customers must share a common storage system, try really hard not to have to guarantee a specific performance level.
Thanks, we'll see where we go with this.
The price per GB is roughly twice as high on the SAN; we use what we need for now and can get more if need be.

The concern is mainly whether the SAN will be as performant as the DAS.
I did speak with the tech at the provider and he claimed that it would be equivalent.

The argument that made me favor the SAN is the stability aspect.
The server would be attached to the SAN with fibre and failover copper connections.
And the SAN itself is more "fail-over" safe than the PV.

I would just like to get the storage space (money within reason), have a very stable setup,
go home and sleep well :).

If I get the PV, might I be able to do that as well, and have some cash to spare for something fun???

Regards,
rockmoose
In your case the DAS was RAID 10 and the SAN was RAID 5. Did you ask your host how your LUN is configured? Do you have your own dedicated spindles or are they shared with others? Bert
quote: Originally posted by bertcord: In your case the DAS was RAID 10 and the SAN was RAID 5. Did you ask your host how your LUN is configured? Do you have your own dedicated spindles or are they shared with others? Bert

Yes, RAID 5 for the SAN. The DAS we can just about do what we like with.
We are getting the technical specs for the SAN at the moment.
We'll see after the technical review of the SAN if we go that way. rockmoose
FWIW, do you know this one: ? —
Frank Kalis
Microsoft SQL Server MVP
Ich unterstütze PASS Deutschland e.V.

quote: Originally posted by FrankKalis: FWIW, do you know this one: ?

Yes, I was aware of it (read it about a year ago last time).
The material seems a bit outdated, but the principles stay the same.
I don't know how much trust you can put in technology progressing,
but yes, there are fundamental differences between 5 and 10, I know.

When I get the technical specs of the SAN I will post them here,
and you can all make your verdict, but I can see where it's going ;)

Cheers,
rockmoose
Dell/EMC. Redundant fiber HBA between server and SAN, total throughput 4Gbps with EMC powerpath software. SAN has 4GB of read and 4GB of write cache. RAID5.
146 or 300 GB disks.
LUN has 12-14 disks (Fibre), and is shared with others.
(I thought that 12-14 seemed low??? They were going to double-check this.) Currently there are 10 customers on the SAN in total; I don't know how the LUNs are distributed between them. According to the provider it's their most performant solution.
(We can get our own dedicated RAID 5 or RAID 10 on the SAN, but it doesn't seem like a cost-effective alternative, even more $.) Any comments?
so for DAS you have 6 dedicated spindles, but with the SAN you get 12-14, shared with others? I don't know if I would like this. If you go with the SAN, make sure you get them to agree to some SLA.

"According to the provider it's their most performant solution." yeah, maybe for them, haha. Everyone hears SAN and they are like, wow, they cost lots of money, they must be super fast! To me a SAN is just a bunch of external arrays smacked together. No guarantee that it will perform better.

I manage 2 SANs, an EMC CX700 and a 3Par S400. Our CX has 128 drives and supports two SQL Servers. No drives shared between them. We are planning on using the 3Par system (testing now) for multiple databases and servers that vary in size from month to month. This type of provisioning is too difficult with the EMC.

You can configure an EMC to stripe multiple drives and get decent performance... but... I wouldn't trust a hosting company to do it. If you go with the EMC I would get much more info first. Bert
the fact that your provider went with 1 DAE with the big-capacity drives instead of 2 or more DAEs with the lower-capacity drives indicates they have poor awareness of disk performance, only disk capacity.
also, sharing disks with other applications, not even in your own company, is just not a good idea. do i understand correctly: you have a dedicated PE6650 for your db, but you will be sharing disks????
We are getting storage space for the server.
2 options: DAS or SAN. Evaluating scalability, fail-over, and performance.
The SAN seems doubtful on the last point. The specs on how the disks on the SAN are set up are not completely to our satisfaction. They are 146GB 15K fibre, which is OK by me; the number of disks we get/have to share is not so good.
I accept RAID 5, since I see the SAN as a device where I just buy GB/$.
They can have any RAID they want as long as I get: scalability, fail-over, and performance.
I asked them a straight question about I/O performance of the DAS (12 x 73 RAID 10 15K) compared to their SAN.
Hopefully I will get an answer tomorrow, and we'll see what they have to say. Regards,
i can tell you ahead of time for the EMC CX line:
so long as the test set is larger than the SAN cache, random IO should be about the same between SAN and DAS. Sequential performance on the EMC for SQL Server is 9.6MB/sec per disk,
hence 12 disks should yield ~115MB/sec. On my box, 8 SCSI disks in RAID 0, spread across 2 U320 channels, yield ~280MB/sec write, 500MB/sec read.
in RAID 10, with 12 disks, i would expect about the same, depending on the controller.
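As a quick sanity check on the arithmetic above (the per-disk rates are the figures quoted in the thread, not independent measurements):

```python
# Sequential-throughput arithmetic from the thread's quoted figures.

emc_per_disk_mb = 9.6     # EMC CX sequential rate per disk, per the thread
disks_on_lun = 12
print(round(disks_on_lun * emc_per_disk_mb, 1))  # prints 115.2 (MB/sec aggregate)

# DAS comparison: 8 SCSI disks in RAID 0 across 2 U320 channels, 500MB/sec read
das_read_mb, das_disks = 500, 8
print(das_read_mb / das_disks)                   # prints 62.5 (MB/sec per disk)
```

The per-disk gap (9.6 vs ~62.5 MB/sec sequential) is the core of the argument that the SAN's sequential throughput is spindle-limited in a way the DAS is not.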
Thanks Joe,

I'm trying to deepen my knowledge in these areas.

The (max) physical capability of their SAN device is 680 MB/sec (data transfer speed).
Physical IO (to disk) would depend on the disks and # of disks themselves.
If you are served from the SAN cache, IO can be faster.

I looked at:
Disk Write Bytes/sec: avg 400KB, max 1000KB
Disk Read Bytes/sec: avg 400KB, max 1100KB
About the same.

In comparison to the numbers you gave, this seems low ;)

A general question on the SAN cache:
when evaluating these kinds of things, is it wise to "ignore" any possible benefits from the SAN cache,
and just look at real physical IO?
Or put another way, is it unwise to put too much trust in the SAN cache to boost performance?
Maybe it's another of those "it depends" questions?

Regards,
rockmoose
According to the provider their SAN would still outperform a DAS with 12 RAID10 disks.
So we go with the SAN.
I will try to run the SQLIO tool when it has been set up. rockmoose
quote: Originally posted by rockmoose According to the provider their SAN would still outperform a DAS with 12 RAID10 disks.
So we go with the SAN.
I will try to run the SQLIO tool when it has been set up. rockmoose

Hi, sorry if I'm late to the party, but I would recommend against using any of the Dell PowerVault hardware (e.g., the 220). Dell has had major issues with that hardware line recently; you should check out their support forums for details. If you see my post at the top of this forum on SQLIO, I would be very interested in your results on running the set of tests I describe. My choice was DAS from HP, but I was also looking at DAS from Dell and SAN solutions (Dell/EMC, IBM). The HP hardware performs as it should, and I didn't go bankrupt purchasing it.
Thanks for the input, georgel.
The server is ordered; it will be available in 4 weeks... I noticed your sticky topic and will try to run the tests. rockmoose