Anyone using 3par disk system? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Anyone using 3par disk system?

I have been checking out 3par's web site. It looks like they are partnered with some big players such as Veritas. They also have some big-name customers such as Merrill Lynch and AIG.

The thing that looks impressive is some of their performance claims: 90,000 IOPS with less than 10ms response time. I would love to run some SQL Server tests on one of these.

Has anyone used them before, or heard anything good or bad?

Thanks
Bert
i think slart bought one, you might send him an email,
fundamentally there is no magic behind storage,
even though marketing people want you to think so.
it is possible to distort a benchmark.
until they can demonstrate SQL Server random & sequential tests, along with high queue depth handling, i would not make any assumptions.
personally, i am a little disappointed at the lack of hard information on the capabilities of their system on their web site
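Absent hard numbers from the vendor, the kind of random-read test Joe is asking for can be sketched in a few lines of Python. This is only a rough illustration under assumed parameters (block size, queue depth, and IO count are arbitrary choices of mine, and `os.pread` is POSIX-only); without `O_DIRECT`, the OS page cache will flatter the results:

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 8192        # 8 KB reads, matching the SQL Server page size
QUEUE_DEPTH = 16    # outstanding requests -- an arbitrary assumption

def random_read_test(path, n_ios=1000, queue_depth=QUEUE_DEPTH):
    """Issue n_ios random BLOCK-sized reads against `path` using
    `queue_depth` concurrent workers; return (IOPS, mean latency in ms)."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)   # POSIX-only
    latencies = []

    def one_read(_):
        # pick a block-aligned offset at random
        offset = random.randrange(0, max(size - BLOCK, BLOCK)) // BLOCK * BLOCK
        t0 = time.perf_counter()
        os.pread(fd, BLOCK, offset)
        latencies.append(time.perf_counter() - t0)

    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        list(pool.map(one_read, range(n_ios)))
    elapsed = time.perf_counter() - t0
    os.close(fd)
    return n_ios / elapsed, 1000 * sum(latencies) / len(latencies)
```

Running this at several queue depths against a file bigger than the array's cache would show whether the claimed IOPS hold up under genuinely random load.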
Thanks Joe, I sent him an email. A friend of mine is a salesperson for them... I know, I know... a salesperson would sell his mother stuff she didn't need. In this case the guy is a very close friend, and I am not in his sales territory, so he wouldn't get the sale anyhow. I am new to SAN technology, so I am just trying to separate the marketing hype from the real deal. To someone new to SAN technology it seems much easier to manage and configure, but I am also sure that this comes with a trade-off: the easier something is, the less control you have over the configuration. Bert
Yes, we did purchase a 3par unit and we are pretty happy with our decision. We considered several options (EMC CX, EMC DMX, Hitachi 9980, Hitachi USP, IBM DS6800, IBM DS8100). I dealt with technical reps at each company to discuss what we're trying to do and how best to achieve it. In the end, the reason we chose 3par is that their system gives very good performance without much need for administration. All the systems can provide good performance, but most of them require you to spend a lot of time optimizing; 3par's design inherently makes a lot of these optimizations automatic.

The basic idea is that they take the physical disks and split them into 256MB virtual disks they call "chunklets". Then they apply your desired RAID config on these virtual disks. Their algorithm takes into account the fibre channel and power domains to make sure that the members of a RAID set don't share any single point of failure. So if you take an empty 3par system with 240 disks and tell it "create a 45GB RAID-5 volume", it will put 256MB on each of the 240 disks (240 * 256MB = 60GB; the discrepancy is the RAID overhead). In theory you could read this 45GB file as fast as one 10K FC disk can read 256MB (about 3 seconds).

So the reason I like their system is that they recognize a disk can only do about 150 random IOs/sec (depending on spindle speed and block size, of course), and their system is designed around making the best use of this finite resource while requiring a minimum of administration. The EMC DMX line has a similar algorithm (though they call their virtual disks "hypers"), but my impression after talking with their tech team is that it requires a full-time body to administer.

I don't mean to come off like a 3par zealot. I just happen to be a satisfied customer and am happy to explain why. Steve
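Steve's chunklet arithmetic can be written out explicitly. This is just a sketch of the math in his example; the 3+1 stripe width (one parity chunklet per three data chunklets) is my assumption, inferred from the 45GB → 60GB figure in the post, not something stated by 3par:

```python
CHUNKLET_MB = 256   # 3par carves every physical disk into 256 MB chunklets

def chunklets_needed(volume_gb, n_disks, raid5_stripe=4):
    """How many chunklets a RAID-5 volume consumes, and how many land on
    each disk. raid5_stripe=4 means a 3+1 stripe (hypothetical), which
    reproduces the 45 GB -> 60 GB figure from the post."""
    data_mb = volume_gb * 1024
    # RAID-5 overhead: raw space = data * stripe / (stripe - 1)
    raw_mb = data_mb * raid5_stripe / (raid5_stripe - 1)
    total = -(-raw_mb // CHUNKLET_MB)      # ceiling division
    per_disk = -(-total // n_disks)
    return int(total), int(per_disk)

# The example from the post: a 45 GB RAID-5 volume on 240 empty disks
# consumes 240 chunklets (60 GB raw), one chunklet per disk.
```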

so if they claim 90K IOPS at 10ms latency on 240 disks, that's 375 per disk,
the only way this is going to happen is if the active data resides on approx 10% of the capacity: since 240 x 72GB = 17TB, 90K IOPS at 10ms against a 2TB file is probably doable.
I don't like the idea of mixing data and logs, but if they can guarantee 0.1-0.2ms log write latency while the data is generating a heavy load, then i would be ok with that.
another item is that many mid-range SAN have poor sequential performance,
if 3par can do 2GB/sec to each of several servers, then i would be happy here.
the remaining important database performance characteristics are the ability to handle checkpoints and t-log backups without disrupting transactions.
if you buy the 3par, send me an email and i will provide some test scripts
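Joe's back-of-the-envelope arithmetic can be written out as a quick sanity check. The 150 random IOPS per 10K spindle ceiling comes from Steve's earlier post; the helper names are mine:

```python
PER_DISK_RANDOM_IOPS = 150   # rough ceiling for a 10K FC disk (from the thread)

def iops_per_disk(total_iops, n_disks):
    """Claimed aggregate IOPS spread evenly over the spindles."""
    return total_iops / n_disks

def plausible_random_iops(claimed_iops, n_disks, per_disk=PER_DISK_RANDOM_IOPS):
    """True only if the claim fits within what the spindles can sustain for
    genuinely random IO; otherwise caching or short-stroking is at work."""
    return claimed_iops <= n_disks * per_disk

# 90,000 IOPS over 240 disks is 375 per spindle -- 2.5x the random ceiling,
# so the claim must rely on cache hits or data confined to a slice of the array.
```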
Joe, we have a 240-disk 3par system on site here (configured with 240 146GB 10K disks). I would also be happy to provide you supervised access (i.e. NetMeeting) so you could run some benchmarks if you're interested. I agree that 375 IOPS/disk is quite an unrealistic figure if they're talking random IOs.

With regard to log write latency and mixing logs & data: their system uses mirrored cache and write-back caching, so the commit time is based on how long it takes to write to memory on two controllers. Actual disk activity isn't a concern unless the disks stay busy so long that you begin to fill the data cache with updates. As untraditional as it is, the beauty of their philosophy (distributing the data evenly amongst the disks, both logs and data) is that it is difficult to keep all the disks 100% busy for long (not long enough to use an appreciable portion of cache), so there is always time to flush the logs from cache to disk. Our system, which is a typical config, has 32GB of data cache and 8GB of control memory.

Their system does (err, did) have an Achilles heel with log writes, however: they took a consistent 0.3ms. I brought this to their attention and explained why it's such a problem with SQL Server. When they analyzed the situation, they discovered that the HBA driver in their system had the interrupt_coalesce option enabled, which causes the HBA driver to intentionally wait 300 microseconds (0.3ms) after the last command for any further commands before sending them forward. The purpose of this delay is to coalesce the FC commands so that a bunch of them take only one interrupt, which improves efficiency. Now that they know SQL Server log writes don't like it, they put a change in the latest version of the OS that lets you disable the interrupt_coalesce feature on a port-by-port basis, so the write latency is now based on their actual system performance. They claim the latency has improved by a factor of 6 (to 50µs / 0.05ms).

I have not seen this with my own eyes because we haven't had them apply the latest OS to our system yet, but they have been able to substantiate every performance claim they've ever made, so I have no reason to doubt them. Steve
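The effect Steve describes, a fixed delay added to every synchronous write, shows up readily in a crude latency probe. This is a minimal sketch, not a real SQL Server log writer: each small record is fsync'd before the next is issued, the same serial pattern as a log commit, so any fixed per-command delay in the path (like the 300µs coalescing wait) adds directly to the average:

```python
import os
import time

def log_write_latency(path, n_writes=200, record=b"x" * 512):
    """Average latency (ms) of small synchronous writes to `path`.
    Record size and write count are arbitrary assumptions."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.perf_counter()
    for _ in range(n_writes):
        os.write(fd, record)
        os.fsync(fd)   # force the record to stable storage before continuing
    elapsed = time.perf_counter() - start
    os.close(fd)
    return 1000 * elapsed / n_writes
```

Run against a LUN before and after disabling coalescing, a drop from ~0.3ms toward ~0.05ms per write would confirm the vendor's claim.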
Thanks Joe and Slart, I appreciate your help and feedback. I am sure you will see more questions from me in the next week. Thanks again