EMC vs NETAPP SAN | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Does anyone have any strong opinions regarding an EMC CX3-20 vs NetApp FAS270 SAN solution? Does anyone have any strong opinions regarding the snapshot, recovery, and LUN-cloning software that each provides? We have quotes from each and I figured it couldn't hurt to find out if anyone has negative or positive history with either, or if someone has experience with both and found one to be much better. Thanks in advance for any information.
The NetApps in general are designed for fast write performance, but not for sequential read performance as a consequence. I suggest you use my scripts in the SQL 2005 HW section, set performance goals based on my classifications, and get the desired performance goals written into the invoice.

SANs in general have significant management capabilities, but that is no substitute for good management at the operator level. Snapshot is nice, but this feature will be in SQL 2005 at some SP level?

We have NetApp storage systems, recently rolled out for file services. The idea is to roll them out for SQL Server storage as well, using iSCSI. I did some fairly in-depth performance testing from a SQL Server perspective and the throughput was much better than anything that could be provided via local SCSI controllers and RAID 1 or 5 arrays. However, I still have some concerns. Recently, one of the NetApp filers for the file services just died. Although it had dual NICs, there was a problem on the backplane that was in fact a single point of failure. We will use a different filer with iSCSI for SQL Server, but it is a worry nonetheless. On the plus side, NetApp support is capable, well organised and efficient.

I have some other concerns too. In the usual scenario, a DBA would probably be a local admin at the Windows level on a SQL Server box and would have some level of control over the disk subsystems. In other words, I can see for myself if a disk in an array has broken. I can also see the disk configuration and understand it! With NetApp or other similar storage systems, control is one step removed because it's in the hands of a systems tech who is skilled up to manage that kit. I feel nervous about letting go and losing sight of this critical part of what SQL Server depends on.

One other thing I really don't like about the NetApp solution is that the tools I would have to use are all wizard/GUI based, like SnapManager. There is a very limited command line to achieve the same thing, but it has almost zero documentation, and NetApp and the third-party seller only seem to know about the wizards/GUIs. I don't want to use a wizard or GUI to set up my jobs to do snapshots! Nor do I want to use a GUI to do a restore; I want to script that, because I haven't got the time to mess around doing literally hundreds of manual restores every day in test, dev, DR and other sites.
On my test system, I had to basically reverse engineer what the GUI was doing in order to work out what I could do directly via a command line. Not good at all... 0/10.

However, there are some potentially cool features that might help us a lot too. NetApp snapshots are a completely different thing from SQL Server 2005 database snapshots. Once a NetApp snapshot has been taken (an online operation with only a checkpoint as overhead), that snapshot can be cloned and mounted on any other server very quickly and efficiently, in seconds. That could provide a huge efficiency boost in our environment, where we are always restoring databases down to dev and test as well as to other sites and DR. We also have a database build process where db restores are the most time-consuming part, so mounting a clone could make a big improvement there.

We've already purchased the necessary NetApp licences so, in a way, we're committed, but I will be moving slowly and very cautiously. I will want to see test and dev servers on NetApp for about six months before I make any decisions about migrating production SQL Server to NetApp. In the end, I might never migrate the production servers.

Hope this helps.
Clive
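The hundreds-of-restores problem above is exactly the kind of thing that is easier to script than to click through. A minimal sketch, assuming plain SQL Server native backups on a file share (the share path and database names here are hypothetical, not from the thread): generate one `RESTORE DATABASE` statement per target database, then feed the result to `sqlcmd` or an agent job.

```python
# Sketch: generate scripted SQL Server restores instead of driving a GUI.
# The backup share, database names, and file layout are assumptions for
# illustration; real restores may also need WITH MOVE clauses per file.

def build_restore(db, backup_dir=r"\\backupshare\sql"):
    """Return a native T-SQL RESTORE statement for one database."""
    return (
        f"RESTORE DATABASE [{db}] "
        f"FROM DISK = N'{backup_dir}\\{db}.bak' "
        f"WITH REPLACE, RECOVERY;"
    )

def build_restore_script(databases):
    """One statement per database -- easy to hand to sqlcmd -i."""
    return "\n".join(build_restore(db) for db in databases)

if __name__ == "__main__":
    print(build_restore_script(["SalesDev", "SalesTest"]))
```

Run against dozens of dev/test/DR databases, this replaces the manual GUI restores with a repeatable, reviewable script.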
Provide hard numbers and details on why you think the NetApp can beat local storage, i.e., tests with the same number of controllers and back-end disks. The only test the NetApp should win is the random write test.
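The like-for-like comparison being asked for separates sequential from random access, since the two posts disagree mainly about those patterns. A toy sketch of the measurement shape, in Python: read the same blocks of a file in order and then shuffled, and compare throughput. This is illustration only; a real storage test needs a file far larger than every cache in the path, and tools such as SQLIO or Iometer.

```python
# Toy sequential-vs-random read comparison. File size, block size, and
# the use of a temp file are assumptions; real benchmarks must defeat
# the OS and array caches, which this deliberately small file does not.
import os
import random
import tempfile
import time

BLOCK = 64 * 1024     # 64 KB reads, typical of SQL Server scans
BLOCKS = 256          # 16 MB test file (tiny; illustration only)

def make_test_file():
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(BLOCK * BLOCKS))
    return path

def timed_read(path, offsets):
    """Read one block at each offset; return (seconds, bytes read)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(BLOCK))
    return time.perf_counter() - start, total

path = make_test_file()
seq = [i * BLOCK for i in range(BLOCKS)]
rnd = random.sample(seq, len(seq))        # same blocks, shuffled order
t_seq, n_seq = timed_read(path, seq)
t_rnd, n_rnd = timed_read(path, rnd)
print(f"sequential: {n_seq / t_seq / 1e6:.1f} MB/s")
print(f"random:     {n_rnd / t_rnd / 1e6:.1f} MB/s")
os.remove(path)
```

Running the same harness against local SCSI arrays and the iSCSI-mounted filer, with matched controller and disk counts, would give the hard numbers the post asks for.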
As folks are contributing to the thread I figured I might as well follow up. We ended up going with the EMC CX3-20. One of the selling points for me was the fact that I could define the RAID arrays myself, rather than just blindly having to accept the NetApp's double-parity RAID. We ended up with several RAID 1+0 arrays for SQL data, and RAID 5 arrays for larger storage etc. So I liked the ability to customize to fit our specific needs.

The biggest selling point for us actually ended up being the software pricing, not just the hardware. EMC is much more generous in providing software licensing to get all servers connected, whereas, at least in our situation, NetApp seemed to want to charge an arm and a leg for each new server or piece of functionality we wanted to implement. EMC was very aggressive in their pricing to try and get us to move away from our existing NetApp. They actually sold us the CX3-20, all of the software, and more hard drive space for less money than NetApp wanted just for the software to let fewer servers access the existing NetApp box. The fact that the CX3-20 is much newer hardware and has a great upgrade path didn't hurt either.

In the end I feel very comfortable with our purchase, and was just glad that I didn't see anyone comment on this thread that EMC was a horrible choice once we chose that route. :)

Hope it helps,
Dalton

Blessings aren't so much a matter of receiving them as they are a matter of recognizing what you have received.
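The RAID 1+0 for SQL data / RAID 5 for bulk storage split above follows from the standard random-write penalty: a logical write costs 2 physical I/Os on RAID 10 but 4 on RAID 5 (read data, read parity, write both). A back-of-envelope sketch; the 8-spindle count and 180 IOPS/disk figure are assumptions for illustration, not numbers from the thread.

```python
# Rough random-write capacity of an array by RAID level. The write
# penalty values are the textbook ones (RAID 10: mirror write = 2 I/Os;
# RAID 5: read-modify-write of data + parity = 4 I/Os). Spindle count
# and per-disk IOPS below are illustrative assumptions.

WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def write_iops(level, spindles, iops_per_disk=180):
    """Approximate sustainable random-write IOPS for the whole array."""
    return spindles * iops_per_disk // WRITE_PENALTY[level]

for level in ("raid10", "raid5"):
    print(level, write_iops(level, spindles=8))  # raid10 720, raid5 360
```

Same eight disks, half the random-write headroom on RAID 5, which is why it gets relegated to the larger, less write-intensive storage.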
I have some complaints against SAN vendors, and some against the equipment itself. Most people don't seem to be aware that the SAN is just another computer system sitting between the app (DB) and the actual storage. A file server serves up file systems; the SAN serves partitions made up of pieces of one or more disks. Do you think you would work better with more management layers?

My biggest complaint is the lack of respect for disk performance characteristics and application requirements. SAN vendor reps for some reason believe the big 8 GB cache on their SAN solves all performance problems. In fact, the cache usually degrades performance; the very few performance reports that use a SAN actually have the cache disabled. Performance with a SAN still requires distributing load over many spindles and IO channels.

SAN vendors advocate the high space utilization possible with a shared resource. This is more than offset by the ridiculously high cost per spindle, and high space utilization negates the performance gains possible with fractional disk utilization (short-stroke seek times). SAN vendors advocate sharing the resource.
You would have to be nuts to share storage between a critical line-of-business transaction processing app and non-revenue-impact loads (employees downloading porn; tell me this doesn't happen). Also, never share storage between OLTP and DW apps.

What also aggravates me is how slow SAN vendors are to move to new chipsets with much improved IO capability. Open your CX3-20 and see what the system is.
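The short-stroke point above can be put into rough numbers: random-access service time is roughly average seek plus half a rotation, so confining data to the outer tracks (shorter seeks) raises per-spindle IOPS, and filling the disks to high utilization gives that gain back. The seek figures and 15K RPM below are typical drive assumptions, not measurements from the thread.

```python
# Rough per-spindle random IOPS model: service time = avg seek + half a
# rotation (transfer time ignored). Seek values are illustrative
# assumptions for a 15K RPM drive at full vs short stroke.

def random_iops(avg_seek_ms, rpm=15000):
    half_rotation_ms = 0.5 * 60000.0 / rpm   # 2.0 ms at 15K RPM
    return 1000.0 / (avg_seek_ms + half_rotation_ms)

full = random_iops(avg_seek_ms=3.8)    # whole platter in use
short = random_iops(avg_seek_ms=1.5)   # only the outer tracks in use
print(f"full stroke:  {full:.0f} IOPS/spindle")   # ~172
print(f"short stroke: {short:.0f} IOPS/spindle")  # ~286
```

Under these assumptions, short-stroking buys roughly two-thirds more random IOPS per spindle, which is exactly what a high-space-utilization policy throws away.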