RAID configuration advice | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

RAID configuration advice

Before long my company will upgrade our existing SQL Server hardware platform, and I need some advice and thoughts on how to partition the disks and set up the RAID configuration. The new machine will have 8 drives, 4 CPUs (dual dual-core) and 8 GB of RAM. We have multiple databases on the server; one of them accounts for approx. 70% of the load. The server is dedicated to SQL, no IIS or other apps will be running. The usual configuration for such a machine is RAID 1 for Windows, RAID 1 for the SQL log and RAID 0+1 for data, but giving it some thought, there are actually quite a lot of possible configurations.

1. Put tempdb on the Windows physical drive?
Keeping tempdb separate is good, and the main database makes heavy use of it. But how much will Windows being installed on the same physical drive affect this?

2. Put the log on its own physical drives (RAID 1)?
Normally this is a good thing, but if we move all the databases' logs onto the same drives, we will not gain much advantage from the drive head staying in the same place, so I'm thinking about moving only the large one and keeping the rest on the RAID 0+1. Anyone have input on this?

3. Use 2 x RAID 0+1?
With only 8 drives, no other arrays can be added, so this would force me to put Windows + tempdb + logs on one RAID 0+1 and data on the other RAID 0+1.

I hope some of you have input for me, maybe someone with similar hardware. I guess my main question is how important it is to give Windows its own separate drives.
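For what it's worth, wherever tempdb ends up, relocating it is just a matter of pointing its files at the new drive and restarting SQL Server. A minimal sketch (the T: drive letter is hypothetical; the logical names tempdev/templog are the SQL Server defaults):

```sql
-- Relocate tempdb to a dedicated drive (T: is a hypothetical mount point).
-- Verify the logical file names first with: EXEC tempdb..sp_helpfile
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\templog.ldf');
-- The change takes effect after the SQL Server service is restarted;
-- tempdb is recreated at the new location on startup.
```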

1) I would store it on RAID 1.
2) I would move the large one to the other physical drives.
3) No, I'd go with 1).
4) I think a lot of members will give you different opinions.
BTW: I’ve moved this to the relevant forum.
Luis Martin
Moderator

“One of the symptoms of an approaching nervous breakdown is the belief that one’s work is terribly important.”
Bertrand Russell
All postings are provided “AS IS” with no warranties for accuracy.
Thank you for your reply, and sorry about the wrong forum. So no one has tried having SQL Server data on their Windows partition, or possibly run Perfmon against their Windows drive to see how much access there is to it when SQL Server is up and running as the only app? To me it seems like a waste of 2 drives, as I don’t think Windows accesses its drives much, at least not compared to how SQL Server accesses a drive. Any input will be greatly appreciated!

I would suggest you consider the following:
2 disks, RAID 1: Windows OS, tempdb, other log files
4 disks, RAID 10: all data
2 disks, RAID 1: main DB (70% load) log files
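If the main database already exists, its log file can be moved to the dedicated RAID 1 pair with a detach/attach (the SQL 2000-era approach; database name and paths below are hypothetical):

```sql
-- Move MainDB's log file to the dedicated RAID 1 array (L: is a hypothetical drive).
EXEC sp_detach_db 'MainDB';
-- ...copy MainDB_log.ldf to L:\ at the OS level, then reattach:
EXEC sp_attach_db @dbname    = 'MainDB',
                  @filename1 = 'D:\Data\MainDB.mdf',
                  @filename2 = 'L:\MainDB_log.ldf';
```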
Based on his expertise, I suggest following Joe’s suggestions.
Luis Martin
Moderator
Thanks for the advice, everyone; it is greatly appreciated. I will try the layout Joe suggested, as it makes sense.

Array 1: 2 disks, RAID 1
Partition 1: Windows OS
Partition 2: tempdb
Partition 3: other log files

Array 2: 4 disks, RAID 10
Partition 1: all data

Array 3: 2 disks, RAID 1
Partition 1: main DB (70% load) log files
Partition 2: temporary storage for backups, only used during maintenance. Hopefully the drives have their best performance at the beginning of the platters, as usual.
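Once the arrays are in place, it’s worth confirming that each database’s files actually landed where intended. A quick check (the MainDB name is hypothetical):

```sql
-- Run inside a database to list its files (logical name, physical path, size):
USE MainDB;
EXEC sp_helpfile;

-- Or check every database at once; master..sysaltfiles lists all files (SQL 2000/2005):
SELECT DB_NAME(dbid) AS database_name, name, filename
FROM master..sysaltfiles;
```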
Hi,
may I suggest reading the following threads too for more information (the links were not preserved in the archive). :)

Regards,

Hemantgiri S. Goswami
[email protected]
“Humans don’t have Caliber to PASS TIME, Time it self Pass or Fail Humans” – by Hemantgiri S. Goswami
Do I need to use two SCSI cards for the 1/10/1 config, or is that just one SCSI card with three arrays?
If you have 2 SCSI cards, that is better, thinking of performance.

BTW: Next time, please open a new thread.

Luis Martin
Moderator

“Although nature commences with reason and ends in experience, it is necessary for us to do the opposite, that is, to commence with experience and from this to proceed to investigate the reason.”
Leonardo Da Vinci

“Never expect recognition from your children; that will come after your death.”
Hi ya,

sort of a side issue, but I’d be interested in hearing Joe’s comment on this question...

Would you have bought a 4-way dual-core box with 8 drives, or opted instead for a 4-way single-core box with an external drive array holding an additional 14 drives?

I know that Joe has in the past suggested that the cost of going to dual-core CPUs does not tend to give the same performance leap, whereas having a total of 22 spindles rather than 8 would give you a lot more flexibility.

Luke, apologies in advance, as I’m deducing that you’ve already bought the new server; I’m just trying to use this as a way of getting some more brilliance out of Joe’s brain ;-)

Cheers,
Twan
When did I say a dual was not the equivalent of 2 single cores? That’s exactly what a dual core is: 2 single cores put onto one die (for AMD; the current Intels are actually 2 single cores in one package). In theory, multi-cores could be better than an equivalent number of single cores, because proc-to-proc communication can be faster and the combined cache can be shared (since it is not unusual for the same instructions to be used by each proc).
However, the current dual cores have to run slightly slower than a single core, and not all the theoretical advantages have been implemented, depending on the specific generation.
But I do expect multi-core to get better and better. On the matter of the platform, a typical dual-socket system does not have the same memory channels as a 4-socket system (compare the Intel E7520 chipset for dual with the E8500 for quad).
I believe AMD Opterons have 2 memory channels for both single and dual core (?), so even if 2 current dual cores are not quite the same as 4 singles at the platform level, the lower licensing cost should be a big win.
I would recommend automatically starting with a 2x2 even if it turns out that a 4x2 is later required, because the 2x2 is relatively inexpensive and can always be used for other purposes, whereas buying a 4x2 is somewhat expensive.
Regardless of the platform, I always want to push for a reasonable disk system, unless the data really does fit 100% in memory.
Regardless of your strategy, someone will always want to run a report on the transaction server.
I am not a big fan of replication for this purpose, because replication will cause as many performance problems as it solves, and since the causes of the problems in running reports on a tx server can be solved, why replicate? (I do like backup servers, though.) The real problem in running reports on tx servers is that invariably some report will require a table scan that is bigger than memory.
It turns out that SQL Server 2000 can generate up to 400MB/sec (default locking) or 800MB/sec (NOLOCK) of disk reads in doing a table scan. (This is actually due to a goof in the code; SQL Server 2005 corrects this and can do 12GB/sec or more.)
If your disk system cannot keep up, the disk queue will pile up, more so on Enterprise Edition because it has a more aggressive read-ahead strategy than Standard Edition.
So any transaction that requires disk IO will now sit behind the report that put thousands of IOs in the queue. When I moved one customer’s disk array from one supporting 130MB/sec to one supporting 350MB/sec (their table scan did 300MB/sec), the report went from shutting down transactions on the 130MB/sec array to having no impact on the 350MB/sec array. So I really push for data drives that can do 400-800MB/sec, depending on the table scan characteristics.
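One rough way to see what your own scans and disks actually do: clear the buffer cache, time a forced full scan of a big table, and divide the table size by the elapsed time. A sketch (the table name is hypothetical; DBCC DROPCLEANBUFFERS requires sysadmin and should not be run casually on a production box):

```sql
-- Estimate sequential scan throughput against a large table (BigTable is hypothetical).
DBCC DROPCLEANBUFFERS;                        -- flush clean pages so the scan reads from disk
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM BigTable WITH (NOLOCK);  -- forces a full scan, NOLOCK variant
SET STATISTICS TIME OFF;
-- sp_spaceused 'BigTable' reports the table's data size;
-- MB/sec = data size in MB / elapsed seconds from STATISTICS TIME.
```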
Most recent-generation SANs can do 10MB/sec per disk, so you are looking at 40-80 disks at $2-3K per disk.
Direct-attached disks can do 50-80MB/sec depending on the generation and the file placement, i.e., use just the first 1/3 of each drive for the data file and use the rest for backups or something.
On U320 SCSI you can net about 250MB/sec per channel, which is about 3-4 good disks.
Since one rack holds 14 drives, always buy the 2-channel rack; my recommendation is to put 4 disks per rack for data, but you can add more for other uses.
I will have more data on SAS in the near future. So my preferred disk system is:
2-4 disks in the system (1 SCSI channel)
2 racks, 2 SCSI channels per rack
4 disks per rack for data, additional disks as required for logs, tempdb, etc.

Of course, don’t forget to test the disk system, especially if someone else configured it and does not know anything about performance.
Did I mention the incident at a shop with a Symmetrix DM800 with 60 disks ($250K)? The SAN person configured it using some parity scheme the vendor recommended, and it did 14MB/sec sequential (and people wonder why I insist the DBA must control his/her own disks).
Hi ya, oops, sorry Joe, I’ve just looked up the thread where I thought that’s what you said, only to find out you said the complete opposite... sorry for insinuating that you might not have advised dual-core over single-core CPUs.

Cheers