RAID 4? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


I am going around in circles trying to figure out a SAN solution to store my SQL data. The FAS270C is a solution we are looking at, but the vendor said all the drives are RAID 4. I looked this up, and it seems like performance is no better than RAID 1 or RAID 5, and maybe worse in some cases. Does anyone have experience with RAID 4, or can anyone give me some insight as to whether I should stay away from it, or whether this FAS270C with RAID 4 will be really fast, as the vendor says? Thanks.
Check: http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel4-c.html After reading it, I suppose you can compare RAID 4 with RAID 5, but not with RAID 10. Luis Martin
Moderator
SQL-Server-Performance.com All postings are provided “AS IS” with no warranties for accuracy.
Luis, thanks. The funny thing is that the article I "looked up", as I mentioned in my post, was that exact one. That document is what indicates that RAID 4 will not let me follow best practices for my SQL processing and Cognos application processing. It seems like a very expensive solution for not much of a performance increase. I know I can’t know everything, but I keep spinning my wheels trying to figure out a good SAN solution for my SQL DB and application processing…
I wouldn’t ever want my SQL data files on RAID 4. The random read AND write performance of RAID 5 is generally better. It’s also more common, which should be a factor when designing storage solutions that are expandable and supportable. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
This might also be helpful: http://www.acnc.com/04_00.html
In contrast to Luis, I would opt for RAID 10 rather than RAID 5, if you have the money. ———————–
–Frank
http://www.insidesql.de
———————–

Couple more notes: 1. I wouldn’t buy any hardware that only supported one RAID configuration and didn’t let you customize it to your needs.
2. I agree with Frank IF you can afford it. We always put our logs on RAID 10 and data on RAID 5 for the main production systems. We throw all "extra" systems onto multi-RAID 5 arrays to be more economical. It’s all on a SAN though. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Can’t agree more on Derrick’s point #1. However, the second choice is conditional. It all depends on your RDBMS workload: whether it’s OLAP or OLTP. The two have quite different requirements for RAID levels. Since you are looking at Cognos, it appears to be OLAP. If it’s not write intensive, RAID 5 will work; otherwise, RAID 10 is a better option for the data files.
Also, the assumption is that the database has been optimized for disk IO, with multiple data files on an SMP server and more disk controllers engaged for random access. You have to understand your SAN well to use it properly and to make sure you are not a victim of hot spots. Since the log is sequential, it gains nothing from being split across multiple files, though you could do so to use available disk space on multiple drives.
Hotspots aren’t necessarily a bad thing when you manage a SAN or high-end disk subsystem. On a higher-end system, you actually want to optimize the usage of the existing drives so they are more efficient. Mismanagement resulting in unplanned hotspots, on the other hand, can be disastrous. I agree with aawara on the OLAP piece, although there are exceptions: many people are implementing Cognos solutions that are updated constantly throughout the day, and those processes can be very write intensive. It all depends on your environment. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Our Cognos system is not write intensive, but it is I/O intensive. I like the suggestion to use the HP MSA 1000; it is a very economical solution compared with the full SAN solutions, and it also looks like it will suit our needs very well. The performance specification says it can perform up to 30K I/Os per second. It doesn’t specifically say, however, which RAID levels it supports; it says that it "supports the highest level of fault tolerance (RAID ADG)". Other HP enclosures specifically say that they support RAID 0, 1, 1+0, and 5, but the MSA 1000 specs don’t. I hope it supports all of the above, and RAID 10 as well, for flexibility. Is there a specific way I should configure the MSA 1000? Does anyone have information about how they are using it, how well it is performing, and how they have configured/tuned it for their DB files? What is the best way to connect this enclosure to my servers? If we do buy it, I plan on putting logs on a RAID 1 and DB files on RAID 5 or RAID 10 (if I can get that approved). Our DB has about 30% writes.
I have an MSA 1000, several 500s, and an EMC Clariion. The MSA 1000 supports RAID 0, 1, 10, and 5. It’s a good system. Be careful when you are laying it out, though: you need to follow their suggestions exactly. They just came out with an upgrade to make it active/active, which is a very good thing. The channels only go halfway across each enclosure, though, so you’ll want to make sure you split each array across those channels. Let us know if you go with that; we can help you out more. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Derrick, we are planning on going with it and, if we can get it approved, we will look to implement next year. The cost is around $31K, which seems high. I want to connect my 2 prod servers to it and use it as a shared drive, but not in a cluster. I am aiming to have 2 72GB 15K RAID 1 disks for my transaction logs, 2 72GB 15K RAID 1 disks for my TempDB files, and the last 10 36GB 15K RAID 5 disks for the database files.

Right now we are storing all 3 on a RAID 5. We have 30% writes, so I am thinking that by splitting the T-logs out to a RAID 1, the # of writes on the RAID 5 will go down a lot. One issue I have is being able to justify a performance improvement and quicker DB processing, even if it is small. There really is no way to test unless we just buy it, but it is an expensive gamble.

What do you mean by splitting the array across the channels? Do you mean having no more than 7 physical drives in a set?

The other benefit to going with the MSA 1000 instead of using internal disks on the HP servers is that if my main prod server goes down, instead of restoring a backup copy of the DB to the backup server, I will simply have to connect the backup SQL Server instance to the current prod DB. Instead of losing half a day or more of transactions, I would only lose what was running at the time. I think this will speed up our recovery time, as we would not need to restore, only connect.

All of your input is greatly appreciated. I would like to know what the optimal configuration is for the MSA 1000. Thanks.
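For what it’s worth, the usable capacity of that proposed layout is easy to sanity-check. A quick Python sketch (drive sizes are the ones quoted above; formatting overhead and hot spares are ignored):

```python
# Usable capacity: RAID 1 keeps half the disks (the rest hold mirrors),
# RAID 5 loses one disk's worth of space to parity.

def raid1_usable(disks, size_gb):
    return disks // 2 * size_gb

def raid5_usable(disks, size_gb):
    return (disks - 1) * size_gb

layout = {
    "logs   (RAID 1, 2 x 72GB)":  raid1_usable(2, 72),
    "tempdb (RAID 1, 2 x 72GB)":  raid1_usable(2, 72),
    "data   (RAID 5, 10 x 36GB)": raid5_usable(10, 36),
}
for name, gb in layout.items():
    print(f"{name}: {gb} GB usable")
```

So the 10-disk RAID 5 set yields 9 disks' worth of data space, at the cost of the RAID 5 write penalty on 30% of the I/O.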
The cost is around $31K, which seems high.

–It’s not. That’s cheap. :)

I want to connect my 2 prod servers to it and use it as a shared drive, but not in a cluster. I am aiming to have 2 72GB 15K RAID 1 disks for my Transaction Logs, 2 72GB 15K RAID 1 disks for my TempDB files, and the last 10 36GB 15K RAID 5 disks for the Database files.

–OK, we need to do a little terminology work here. A SAN is not a NAS. You don’t have shared drives on a SAN. One host gets one LUN (drive), and only one host can see that LUN at a time. If you ever let more than one host see the same LUN at the same time, you corrupt the data. Trust me on this one. You will need separate drives for each host, which means one of two things:

1. You need separate drives for each host/drive letter combination.
2. You split each of your RAID groups (2-disk sets, 4-disk sets, etc.) into multiple LUNs and present half of each RAID group to each host.

A SAN isn’t a "file share" system.

Right now, we are storing all 3 on a RAID 5, we have 30% writes, so I am thinking by splitting out the T-logs to a RAID 1, the # of writes on the RAID 5 will go down a lot.

–You do want to put your T-logs on RAID 1 or 10, separate from the data files. I would try to get 4 drives if I could. You might also want to consider getting a second shelf for the MSA so you have more disk heads to spread the IO across, depending on how high your IO is for this environment. Just a thought.

One issue I have is being able to justify a performance improvement and quicker DB processing, even if it is small. There really is no way to test unless we just buy it, but it is an expensive gamble.

–There is a way to estimate the performance increase, but it takes a lot of time.
If you buy the Microsoft Press SQL Server Performance Tuning Technical Reference, it has a lot of good formulas for figuring out IO capacity, etc. You just need to map it all out and do a cost/benefit analysis. It would take a couple of weeks if you haven’t done it before.

What do you mean by splitting the array across the channels? Do you mean having no more than 7 physical drives in a set?

–The MSA 1000 is a dual-channel system. It’s a little odd, though, because each channel only goes halfway across the bus of each enclosure: drives 0-6 are on one channel, drives 7-13 are on the other. When you are making a multi-drive array, you want to split the arrays evenly (if possible) across the two channels. For example, a 4-disk RAID 5 array might use disks 5, 6, 7, 8, with another 4-disk RAID 10 array using 2, 3, 9, 10. On the MSA 1000, don’t go over 6 drives for your RAID 5 arrays. Performance starts to go south in a hurry after 6.

The other benefit to going with the MSA 1000 instead of using internal disks on the HP servers is that if my main Prod server goes down, instead of restoring a backup copy of the DB to the backup server, I will simply have to connect the backup SQL server instance to the current prod DB. Instead of losing 1/2 day or more of transactions, I would only lose what was running on it at the time. I think this will speed up our recovery time as we would not need to restore, we would only need to connect.

–This "can" be true. Be careful selling this as a benefit, though. There’s no substitute for having a hot spare standing by on both hosts and storage. Many times when a server gets corrupted, it will corrupt data along the way. You can also have a lot of problems taking LUNs (drives) from one system that was still running and just attaching them to another host. Just an FYI.

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
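That channel-splitting rule is mechanical enough to script. A hypothetical Python helper (`is_balanced` is my name, not an HP tool; bay numbering per the description above, with bays 0-6 on one channel and 7-13 on the other):

```python
# MSA 1000 shelf: drive bays 0-6 sit on one channel, bays 7-13 on the
# other. A multi-drive array should have members on both channels.

def channels_used(bays):
    """Set of channels (0 or 1) that a list of drive bays touches."""
    return {0 if bay <= 6 else 1 for bay in bays}

def is_balanced(bays):
    """True if the array spans both halves of the shelf."""
    return channels_used(bays) == {0, 1}

print(is_balanced([5, 6, 7, 8]))    # the RAID 5 example:  spans both
print(is_balanced([2, 3, 9, 10]))   # the RAID 10 example: spans both
print(is_balanced([0, 1, 2, 3]))    # all on one channel:  unbalanced
```

A quick check like this is easy to run against a planned layout before you commit the array configuration.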
Excellent breakdown, Derrick. I will only add the one-data-file-per-CPU rule: on an 8-way SMP, don’t split your data across more than 8 physical files. The T-log is fine as a single file. Good idea to put tempdb on a dedicated drive. Good suggestions again, Derrick.