
SQL Server Performance Forum – Threads Archive

Input requested – Testing new SANs

We are going to be implementing two new SANs this month. We ended up purchasing two SANs from two different vendors. I work in an environment that supports 150+ SQL Servers that are currently using direct-attached SCSI storage. With so many servers we wanted to look at several solutions. We bought an EMC CX700 and a 3PAR S400. We selected the EMC as the "traditional" SAN. We wanted to check out 3PAR as the technology looks to be much easier to manage and maintain, especially in our current environment.

Details of the two SANs:

EMC CX700 – This SAN will be shared by two clustered SQL Servers. The LUN config is as follows:

Server1
Six LUNs – 10-disk RAID10 – 146GB 10K
Tlog – 4-disk RAID10 – 73GB 15K

Server2
Six LUNs – 10-disk RAID10 – 146GB 10K
Eight LUNs – 5-disk RAID5 – 146GB 10K
Tlog – 4-disk RAID10 – 73GB 15K

The application creates a new database for each month. Server1 and Server2 are virtually identical except for different indexes. Server2 is the reporting server and needs to store all historical data. The RAID5 LUNs will be used for the old data: we will move databases older than 4 months to the RAID5 LUNs, and the most current 5 months will be striped across all 6 RAID10 LUNs (a rough sketch of the file layout I have in mind follows below).
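For reference, here is a rough sketch (not a final script) of how I picture laying out one monthly database across the six RAID10 LUNs, with the log on the dedicated TLOG LUN. The drive letters, file names and sizes are only placeholders for illustration; since all six data files sit in the same filegroup, SQL Server's proportional fill spreads the I/O across all six LUNs.

-- Hypothetical monthly database: one data file per RAID10 LUN (E: through J: here),
-- log file on the dedicated TLOG LUN (L: here). Adjust names, paths and sizes.
CREATE DATABASE Sales_2005_08
ON PRIMARY
    ( NAME = Sales_2005_08_Data1, FILENAME = 'E:\MSSQL\Data\Sales_2005_08_Data1.mdf', SIZE = 10240MB, FILEGROWTH = 1024MB ),
    ( NAME = Sales_2005_08_Data2, FILENAME = 'F:\MSSQL\Data\Sales_2005_08_Data2.ndf', SIZE = 10240MB, FILEGROWTH = 1024MB ),
    ( NAME = Sales_2005_08_Data3, FILENAME = 'G:\MSSQL\Data\Sales_2005_08_Data3.ndf', SIZE = 10240MB, FILEGROWTH = 1024MB ),
    ( NAME = Sales_2005_08_Data4, FILENAME = 'H:\MSSQL\Data\Sales_2005_08_Data4.ndf', SIZE = 10240MB, FILEGROWTH = 1024MB ),
    ( NAME = Sales_2005_08_Data5, FILENAME = 'I:\MSSQL\Data\Sales_2005_08_Data5.ndf', SIZE = 10240MB, FILEGROWTH = 1024MB ),
    ( NAME = Sales_2005_08_Data6, FILENAME = 'J:\MSSQL\Data\Sales_2005_08_Data6.ndf', SIZE = 10240MB, FILEGROWTH = 1024MB )
LOG ON
    ( NAME = Sales_2005_08_Log, FILENAME = 'L:\MSSQL\Log\Sales_2005_08_Log.ldf', SIZE = 4096MB, FILEGROWTH = 512MB )
GO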
3PAR – Our 2-node system will have 64 disks. 3PAR does not use the traditional approach to LUNs: when creating a LUN, 3PAR allocates the disk space using 256MB "chunklets" from each disk, spread across the entire system. The EMC will support 2 clustered SQL Servers. The 3PAR will only have one clustered SQL Server at first. All servers use the same hardware: Dell 6850 3.33, 16GB of RAM, Windows 2003 x64, SQL 2000 SP4.

I am looking for the standard methodology others use to test, compare and benchmark new systems. Here are the things I am planning on testing.

General tests

IOMeter
IOMeter
1. Random read – my application does a large amount of random 8KB reads. I will use IOMeter to test 100% random reads with a queue depth of around 50.
2. Mixed workload – SQL Server's common access methods are:
a. 8KB – page
b. 64KB – extent
c. 256KB – when doing a large table scan I see I/O sized at 256KB
I will create a mixed workload of about 30% write and 70% read (my current application percentage; a sketch of one way to check this ratio from SQL Server follows below).
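As a side note, one way to sanity-check that read/write percentage from inside SQL Server 2000 is the fn_virtualfilestats function. A minimal sketch, assuming a database name and file id chosen only for illustration; run it twice and diff the counters to get the ratio over an interval.

-- Read/write mix for one file of a database (SQL 2000).
-- @dbid and the file id (1 = first data file) are placeholders.
DECLARE @dbid int
SET @dbid = DB_ID('Sales_2005_08')

SELECT  DbId, FileId,
        NumberReads, NumberWrites,
        BytesRead, BytesWritten, IoStallMS
FROM    ::fn_virtualfilestats(@dbid, 1)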
Does anyone here use IOMeter and have a general methodology they could share?

SQLIOStress – Used to stress the system.

TEMPDB placement – In the EMC config each server has 6 RAID10 LUNs. Do I
1. Stripe the database across 5 LUNs and place a dedicated TEMPDB on its own LUN, or
2. Stripe the database and TEMPDB across all 6 LUNs?

Data and index placement – I want to come up with some tests to determine if separating data and indexes increases performance even if the data and index files are on the same drive. I have read that this should increase performance but have never seen any tests to back it up. Many people give the advice of placing data on one drive and indexes on another, but this is not always the best solution: I had originally set up a server like this and ended up with a server that was maxed out on the index drive while the data drive was hardly stressed. This testing will be more specific to my application. I have 5 very large tables and I will try several methods of placement (one possible filegroup setup is sketched below).
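Here is the kind of setup I am thinking of for that test, as a rough sketch only; the filegroup, file, table and index names below are made up. The idea is to keep the data on the default filegroup, rebuild the nonclustered indexes of one of the large tables onto a separate filegroup, and then replay the same workload against a copy that keeps everything together.

-- Hypothetical filegroup for nonclustered indexes (names, paths and sizes are placeholders)
ALTER DATABASE Sales_2005_08
    ADD FILEGROUP IndexFG
GO

ALTER DATABASE Sales_2005_08
    ADD FILE ( NAME = Sales_2005_08_Idx1,
               FILENAME = 'F:\MSSQL\Index\Sales_2005_08_Idx1.ndf',
               SIZE = 5120MB )
    TO FILEGROUP IndexFG
GO

-- Build (or rebuild, by adding WITH DROP_EXISTING) a nonclustered index on the new filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    ON IndexFG
GO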
Any feedback or scripts would be greatly appreciated. Thanks
Bert

I'm not pretending to know very much about SANs, but I've just begun to play with http://www.quest.com/quest_central_for_sql_server/index.asp
Maybe it can assist you in your tests. I am no friend of advising splitting data and indexes. I believe that in most cases you won't gain that much, and you get better overall performance leaving it all together. This might be different when we're talking about DBs of 1TB+, but that's really a different story. —
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
I support PASS Deutschland e.V. (http://www.sqlpass.de)

just a quick comment on the 73GB 15K for logs vs 146GB 10K for data:
if you are doing transactions where low latency is critical, then the 15K would be better for data; however, if you have a very large data set with more complex queries, then the bigger 10K drives may be more economical.
15K drives are not necessary for logs if you do not do log backups; of course, almost everyone backs up their logs, so it might help.
so the final consideration is: if the performance characteristics of 10K vs 15K drives are not too critical, it might be helpful to make them all the same, so you can allocate as necessary for each function.
i assume there was no difficulty in purchasing DAEs in a mixed configuration?
Frank, Quest software looks cool. Have you seen the VERITAS i3 product? http://www.veritas.com/Products/www?c=product&refId=317 We are going to implement that product in the fall. I am curious why you would recommend splitting data and indexes in databases over 1TB but not in smaller ones?

Joe, our dataset is very large. We looked at getting all 15K 73GB drives, but we would not have enough usable GB. We have one DAE with 15 15K drives:
5 – Persistent LUN, Quorum, MSDTC
4 – TLOG1
4 – TLOG2
2 – Hot spare

Yes, we are doing TLOG backups. And I wish I had 15K 146GB drives, but that's too much $$$$. Thanks
Bert

No, I haven't tried your link yet. Thanks! I'll ping them to see if MVPs get special conditions. :D

—
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
I support PASS Deutschland e.V. (http://www.sqlpass.de)
Oops, sorry! After rereading the thread, I've noticed that you asked me about separating indexes and data only in DBs of 1TB+.

Well, 1TB is actually "just a number". Of course, it might differ from case to case, and it's no strict ultimate number. Generally I think that in SQL Server communities the use of multiple filegroups and the separation of indexes from data is advised far too quickly. Most systems work pretty well with all the objects in just one place, so the advantage one might achieve doesn't outweigh the extra maintenance overhead. I said 1TB because databases of that size are beginning to fall into the VLDB category. Such databases require far more planning during all stages than normal databases. So, in that category, the placement of objects on filegroups is definitely something to look at when trying to keep such a database responsive.

Some time ago I read an interesting case study about the benefit of additional filegroups in an ordinary OLTP database. Although this study was done by IBM on their DB2 system, I believe it's not really all that different on SQL Server. They found that you achieve the "best" overall performance with just one filegroup. Only if you have a very clear picture of the data access patterns might you benefit from additional filegroups. Generally you do better when you add more files (on more separate disks). I don't have the link right now, but you can find it in their redbook domain.

But from your past intelligent postings here, I'm sure you know all these things. Presumably better than me. :)

—
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
I support PASS Deutschland e.V. (http://www.sqlpass.de)
i really think the concept of separating data & indexes comes from an Oracle practice in the old days before RAID controllers, when the IO subsystem was very crude, so i don't think this has any relevance today.
i am aware of 3 meaningful uses of filegroups.
1. separate text and image from normal data
2. separate "live" data from static data
this could encompass a partitioned table/view.
3. isolate large tables.
if you have several large tables with new data being added, putting each in its own filegroup enables the data to be stored in a contiguous block. obviously you don't want to do this for every table; this is probably only relevant if you want very fast table scans.
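as a rough sketch of use 3 (the database, filegroup, file and index names here are all made up, and it assumes the table already has a clustered index by that name):

-- give one big table its own filegroup so its pages stay together
ALTER DATABASE BigDB ADD FILEGROUP BigTableFG
GO
ALTER DATABASE BigDB
    ADD FILE ( NAME = BigTableFG_1,
               FILENAME = 'G:\MSSQL\Data\BigTableFG_1.ndf',
               SIZE = 20480MB )
    TO FILEGROUP BigTableFG
GO
-- rebuilding the existing clustered index on the new filegroup moves the table there
CREATE CLUSTERED INDEX IX_FactSales_Date
    ON dbo.FactSales (SaleDate)
    WITH DROP_EXISTING
    ON BigTableFG
GO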

I have implemented databases on 3PAR. You simply cannot compare the S400 with the CX700. The 3PAR is in a different league altogether. It gives you a spindle for every 256MB of disk space allocated, which provides for absolutely phenomenal throughput. As a result, even a 10GB volume will have a spindle count of 40 (not counting parity in RAID5) or 80 (for RAID10, assuming you have that many disks in the system). A RAID5 (7+1, with magazine-level redundancy) on the S400 will beat any configuration you can dream up on the CX700. Don't even bother to test – it's unfair competition.

The S400 has 16GB (or more) of cache. The CX700 has only 8GB. If you are looking to make a decision between the two, I suggest you go for the 3PAR. I had 96x146GB and 32x300GB in the S400, with the 300GB drives being used for tape staging, splits, backups etc. and the 146s being used for production volumes (SQL, Exchange, file server etc.). The CX700 is a kludge, at best.
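To put rough numbers on that (assuming the array has at least that many physical disks to spread the chunklets over): 10GB = 10,240MB, and 10,240MB / 256MB per chunklet = 40 chunklets, so the data lands on about 40 spindles in RAID5 (parity chunklets on top of that), or roughly 80 chunklets and spindles once RAID10 mirroring doubles the allocation.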
cache on the SAN is essentially irrelevant;
your server memory is the cache, so all you are doing is double caching and wasting resources.
cache can help with checkpoints, so consider a 10/90 split between read/write. SAN vendors like to tell you their SAN is so powerful that you can ignore disk placement;
the fact is that for a DB you still need to separate physical disks between data and logs. the fact is also that you cannot seriously stress a storage system with SQL 2000; it can only drive sequential transfers to 700-800MB/sec. SQL 2005 can do at least 12GB/sec.
With regards to cache on the SAN, I beg to differ. In high-volume transaction processing applications, the cache on the SAN becomes very important from the standpoint of writing to disk. This can significantly improve write response times (from a server's perspective). The SAN controller provides the server with an instantaneous acknowledgement of a write operation and then takes its time to write the information to disk at the first available opportunity. This is particularly important in situations where the I/O system has to handle sporadic short peaks at a time when the drives are busy.

The SAN cache is also important when doing sequential reads, as it will pre-fetch information to improve performance. However, more than just the amount of cache within the storage system, the caching algorithms are important as well. For several years, before the DMX came along, EMC hid a multitude of sins on the Symmetrix by having a really good caching system. This, by no means, does away with the need for spindle count. An adequate spindle count is an absolute must, or your performance will go down the tube.
writes from most database engines are done by a "lazy writer", meaning that if the writes are slow, they should not slow down more important reads.
i did say that the cache can help checkpoints. you will notice that most of the benchmarks are not done on SANs;
frequently, the SAN "features" just get in the way
compared to a properly configured brute-force DA storage system,
which easily rides out disk surges without the giant cache on SAN systems,
which is why i consider it a waste.