SQL Server Performance

Is a 3 disk RAID5 any better than RAID1 for DB?

Discussion in 'SQL Server 2005 Performance Tuning for Hardware' started by vich, Aug 12, 2007.

  1. vich New Member

    How about 4 disk RAID5 array?
    I have a small database (3GB) in a small shop. The IBM 235 Tower backplane only holds 6 disks. I'm upgrading some of the disks from 32GB 10K RPM to 72GB 15K RPM. The backplane is controlled by a low end dedicated RAID card (IBM ServeRAID L4x).
    Current Config: Volume 1 = RAID1 2x 32GB 10K RPM SCSI (C/D drives for OS on C: and Log files / DB Backup Files on E:). Volume 2 = RAID5 3x 32GB 10K RPM SCSI (DB only).
    We're installing a new software module (MRP) that will double or triple our database load. Would like to boost performance without bothering with a new box (this year).
    I only want to upgrade the RAID 5 portion of this solution and leave the OS / log file volume alone (at this time) due to my time constraints. HOWEVER: the D: drive is a little cramped, so adding a 3rd volume is tempting. So I could do one of the following:
    I plan to use 72GB 15k RPM drives.
    1. Make Volume 2 (E: drive) into a 4 disk RAID 5.
    2. Make Volume 2 (E: drive) into a 4 disk RAID 6.
    3. Make Volume 2 (E: drive) into a 3 disk RAID 5 + hot spare. Later, the hot spare could become spare for the RAID 1 volume (when / if we make it 15K RPM drives).
    4. Add TWO Raid-1 arrays (E and F drives). E for the DB, F for the Log files and backup.
    My questions:
    1. Since this is a very low end RAID card (Ultra 160 single channel), is the Drive RPM even the bottleneck?
    2. Since there are only 3 striping drives (using 4 drive RAID-5), and RAID-5 is supposed to have more like 7 or 8 drives, does the rule "RAID-5 is better for the DB" still hold true?
    3. Since this controller is only Ultra 160, and it controls Volume 1 as well (single channel), will I even gain anything from going to 15K RPM? Perhaps simply adding a 4th 32GB 10K RPM drive (ie: another stripe) will be enough.
    4. Is my money better spent getting a new controller and simply adding a 4th 10K RPM drive?
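    For what it's worth, a rough back-of-envelope comparison of the options. This is only a sketch: the per-disk IOPS figures are hypothetical round numbers (not measured on this controller), and it uses the standard random-write penalties of 2 for RAID 1, 4 for RAID 5, and 6 for RAID 6.

    ```python
    # Back-of-envelope effective random IOPS for the candidate layouts.
    # Assumed (hypothetical) per-disk figures: ~125 IOPS for a 10K RPM
    # SCSI disk, ~175 IOPS for a 15K RPM disk.

    def effective_iops(disks, iops_per_disk, write_penalty, read_frac=0.6):
        """Approximate random-I/O throughput for a small array.

        Reads are served by every spindle; each logical write costs
        `write_penalty` physical I/Os spread across the array.
        """
        raw = disks * iops_per_disk
        write_frac = 1.0 - read_frac
        return raw / (read_frac + write_frac * write_penalty)

    options = {
        "current: RAID 5, 3x 10K":          effective_iops(3, 125, 4),
        "option 1: RAID 5, 4x 15K":         effective_iops(4, 175, 4),
        "option 2: RAID 6, 4x 15K":         effective_iops(4, 175, 6),
        "option 3: RAID 5, 3x 15K + spare": effective_iops(3, 175, 4),
        "option 4: RAID 1, 2x 15K (DB)":    effective_iops(2, 175, 2),
    }
    for name, iops in options.items():
        print(f"{name}: ~{iops:.0f} IOPS")
    ```

    The interesting outcome is that a 2-disk RAID 1 is competitive with a 3-disk RAID 5 at a mixed read/write workload, because the RAID 5 write penalty eats most of the extra spindle; the controller and channel limits the question asks about are not modeled here at all.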
  2. satya Moderator

    First things to ask:
    What is the size of the database?
    What is the growth in the next 5 years?
    What is the type of system, OLTP or OLAP?
    How many users are connected (peak) at one time?
  3. vich New Member

    3GB, as stated. That's 5 years' data.
    Growth will accelerate greatly this year, as MRP and Inventory Management (and subsystems) go live. It's transaction-heavy. Still, adding 5GB per year is probably a high estimate.
    Users? 50 simultaneous users is likely max, but most often 20 or 30, plus some background Workorder print jobs running once per minute that use large (25 table) but indexed stored procedures.
    Biggest drag on system is Reporting and now, background processing (hourly MRP Build Explosion).
    No OLAP or OLTP... I don't know what OLTP means ... Transaction Processing? A couple of the reports use temp files, and one even has a cursor (run maybe 2 times a day - 3 minutes). The online Order Entry system (plus supporting Order Status screen, etc.) exclusively uses stored procedures that (almost invariably) use set queries (no cursors). The MRP is UDF intensive, making it like 100,000 tiny queries for each hour's run. I don't know really; but the query plan for 100 lines runs on for about 300 pages, and we process 4000 lines (builds) a day (so 400 per hour).
    My MAIN confusion here is just how much advantage a DB gets on an underpowered (4 disk) RAID 5 vs. using RAID 1. Is it marginal (like 10%) or can it be a LOT faster (like 200%)? I'm guessing that putting the log files on a volume not shared by a sometimes thrashing O/S will make a pretty big difference, due to the synchronous and serial nature of log file writes. So I can't have both (a RAID 5 volume + a dedicated logging mirror volume) and just don't know which way to go.
    If I can't get a definitive answer, then I'll just keep the configuration how it is (but add 1 disk to the RAID 5 DB volume) and change the 3 10K RPM 32GB drives out for 4 15K RPM 72GB drives. It's safe and I know it'll make some improvement - unless the weak RAID card is already posing performance problems, so that upgrading to 15K RPM drives won't help. If I switch to RAID 1 for the DB, it holds the possibility of actually making things worse (even with faster spinning drives) - however I forfeit the known improvement to log file writing by making that a dedicated volume plus 15K RPM.
    Note: I'm maxing the server memory up to 4GB and will turn on the 3GB boot.ini switch.
    Edit: A relevant fact here: of the 3GB database, 2GB of it is the Order Line table (300K pages). It gets a full table scan for most Sales Reports (because they want to divide the Sale Amount every which way). They are run maybe 10 times per day. Otherwise, that 2GB goes unused, so it should page out of memory quickly. So although I see the RAID 5 disk lights blinking like mad very often, those are presumably asynchronous writes and so unrelated to performance.
    Would I be correct to assume that, if the most current data (last 2 weeks) is all that's accessed by 98% of the queries / updates, then DB file I/O speed will be largely irrelevant, because those pages will simply stay in memory all day long? Note: I'm not terribly concerned with the performance of those Sales Reports. Besides, since that 2GB of historical data is ONLY accessed using full table scans or via its clustered index, perhaps a RAID 1 will provide faster access to it anyway (or at least, RAID 5 would not provide a serious advantage).
    I suppose I'll set the DB MAX Memory at 2.5GB, leaving 1.5GB for the OS. I'll bring it up incrementally, taking some snapshots of paging etc., but I'm guessing that's where it'll end up - if that helps any. It seems irrelevant to this DISK question, except to illustrate whether memory paging will be thrashing to disk or not.
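    The working-set assumption above can be sanity-checked with quick arithmetic (a sketch using only figures from this thread: 8KB SQL Server data pages, a 3GB database, ~2GB of cold Order Line history, and the planned 2.5GB max server memory):

    ```python
    # SQL Server stores data in 8 KB pages.
    PAGE_KB = 8

    db_gb = 3.0            # whole database (from the post)
    cold_history_gb = 2.0  # Order Line table, scanned ~10x/day
    buffer_pool_gb = 2.5   # planned max server memory

    # Everything that isn't cold history is the "hot" working set.
    hot_gb = db_gb - cold_history_gb
    hot_pages = hot_gb * 1024 * 1024 / PAGE_KB

    print(f"hot working set: {hot_gb:.1f} GB ({hot_pages:,.0f} pages)")
    print("hot set fits in buffer pool:", hot_gb <= buffer_pool_gb)
    ```

    If the hot set really is around 1GB, daily reads would come almost entirely from cache, and the disks would mostly see log writes and checkpoint flushes - the case where a dedicated log volume matters most.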
