SQL Server Performance

How far to go with Spindle Count?

Discussion in 'SQL Server 2005 Performance Tuning for Hardware' started by BradB, Dec 4, 2006.

  1. BradB New Member

    I have a question about a new server I am building. I was originally going to have to load SQL 2000 on the box; however, our software will be SQL 2005-ready in the next couple of weeks.

    The SQL server runs Peoplesoft/JDE Oneworld, about 60 to 100 users plus some custom intranet running from the same DB.

    We're looking at a dual Intel 5000 quad-core, 16 GB of RAM, and a couple of Adaptec 4805 8-channel SAS controllers, and now I'm trying to decide how far to push the spindle count per controller.

    My budget is pretty open. I've found Maxtor Atlas II 15k SAS drives on sale at Dell for $153. We're looking at somewhere between 32 and 64 drives PER controller. I've always read that 4 to 7 drives per U320 channel is a good number, so applying that logic to each channel of an 8-channel SAS controller I end up at 32 to 56 drives. I can get 16-bay SAS enclosures from Xtore for $2,200 each; 4 of those on one 8-channel controller would give me 60 drives plus 4 hot spares. I'd configure it RAID 10, and I'd only need about 30% of the space on each drive, which should keep access times lower by not using the entire throw of the arm. I'm leaning towards 128 drives, two 8-channel SAS controllers, and 8 16-bay SAS enclosures. I'm just wondering if anyone knows at what point increasing spindle count stops boosting IO performance.

    If I do the math in my head: 200 random read IOs per drive, 60 drives, that's 12,000 read IOs for a 60-drive RAID 10. If each IO is 8 KB, that's 96,000 KB, or a mere ~94 MB per second. I'm still nowhere near the bandwidth limit of the controller.
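    Brad's back-of-envelope numbers can be sanity-checked with a quick sketch (the 200 IOPS per drive and 8 KB per IO are his assumed figures, not measurements):

    ```python
    # Rough IOPS/throughput estimate for a random-read workload, using
    # the assumed figures from the post above: 200 random read IOPS per
    # 15k drive, 60 drives in the RAID 10, 8 KB per IO (SQL Server page).
    iops_per_drive = 200
    drives = 60
    io_size_kb = 8

    total_iops = iops_per_drive * drives
    throughput_mb = total_iops * io_size_kb / 1024

    print(total_iops)               # 12000 random read IOPS
    print(round(throughput_mb, 1))  # 93.8 MB/s aggregate
    ```

    The point the math makes: random IO saturates the spindles long before it saturates the controller's bandwidth.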

    My current setup is Win2k3/SQL2k with 5 Adaptec 2200S controllers running 60 drives (12 drives per Adaptec 2200S, or 6 drives per U320 channel). I have data, indexes, logs and temp DBs split up as logically as I can between the 4 arrays, but I sometimes still see queue lengths in excess of 100 for a single 12-drive array. (Server is a 4-way Xeon MP 2.5 with 16 GB of RAM.)

    If anyone wants to poke holes in my logic please do so. My mind hurts from planning this server for the last couple days, any input appreciated.

    Thanks
    Brad
  2. joechang New Member

    random iops will not max the controller bandwidth, even for SCSI

    just fill the system PCI-e slots with SAS adapters, then distribute the disks across the controllers,
    this way you will have max sequential performance
    figure your large reporting queries might generate 30MB/sec per disk, so plan on supporting 1-2GB/sec table/index scan performance
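    joechang's sequential-scan sizing can be sketched the same way (his ~30 MB/s-per-disk scan figure is an estimate, and the 2 GB/s target is just the upper end of his suggested range):

    ```python
    # How many disks it takes to sustain a target table/index scan rate,
    # assuming ~30 MB/s of sequential throughput per disk (joechang's
    # figure above). Illustrative arithmetic, not a measured result.
    mb_per_disk = 30
    target_gb_per_sec = 2

    disks_needed = target_gb_per_sec * 1024 / mb_per_disk
    print(round(disks_needed))  # ~68 disks for 2 GB/s of scan throughput
    ```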
  3. bfilgate New Member

    I was also wondering about creating a RAID 10 with many drives -- the original post from BradB suggested creating one with 60 drives. However, joechang indicated that you want to spread the drives across multiple controllers.

    Does anyone know if you can span a RAID 10 across more than one controller? We have used LSI MegaRAID controllers here and we could not see how to do it with those.

    Our other alternative is to create multiple RAID 10 logical drives and then partition our data across the logical drives (though we are still on SQL Server 2000). That is not nearly as simple as just working with one large RAID 10, if that were possible and could properly utilize the capacity of the various subsystems (e.g. controller, channel, disk I/O, etc.).

    Am I missing something here?

    -Bruce

  4. joechang New Member

    i would not go to 60 per array
    a reasonable size is 1-2 arrays per external storage unit
    part of this is to allow reasonable units to expand,
    ie, if you needed to add storage, just add another unit without disrupting the symmetry (# of disks per array)

  5. bfilgate New Member

    Thanks Joe,

    Do you know if you can span a RAID 10 across more than one SAS controller with LSI cards?

    The LSI SAS cards look like they can only support up to 16 drives in a RAID 10. While I could make more than one RAID 10 and put indexes on one and tables on another, etc. -- for random IO where there are many different usage patterns, having 2 logical drives with 16 physical drives each does not let those various usage patterns share the IO of all drives as a single 32-drive RAID 10 would. Ideally I think I would like to have up to a 32-drive RAID 10.

    From reading through posts on the lists, it seems like everyone takes it for granted that they can form a RAID 10 of any size they want and the only constraint is what you might want to do with it. Am I missing something -- are there not limits with SAS RAID controllers on how many physical drives you can put in a RAID 10?

    If anyone knows of any SAS cards that can do this that are not LSI then I would appreciate any suggestions. We would like a card with a 256 MB BBU Cache that can support effective use of large RAID arrays.

    Thanks,
    -Bruce
  6. joechang New Member

    i do not like super large arrays, that is an array made up of too many disks

    i also do not buy into the data index split between file groups
    rather, split the filegroup (including primary) into multiple files, one for each data array

    even if you do use file groups
    consider having each FG with a file on each array
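    The reason one file per array spreads the load is SQL Server's proportional-fill allocation: extents are allocated across all files in a filegroup in proportion to each file's free space, so IO naturally lands on every array. A toy model of that behavior (a simplified sketch, not the actual engine algorithm):

    ```python
    # Toy model of proportional fill: each new extent goes to the file
    # with the most free space, which with equally sized files degenerates
    # into a round-robin across all of them. Simplified illustration only.
    def allocate_extents(free_pages, n_extents):
        """Return the file index chosen for each of n_extents allocations."""
        placements = []
        free = list(free_pages)
        for _ in range(n_extents):
            i = max(range(len(free)), key=lambda j: free[j])
            placements.append(i)
            free[i] -= 8  # an extent is 8 pages (64 KB)
        return placements

    # Four equally sized files, one per disk array: allocations rotate
    # across all four, so every array sees a share of the IO.
    print(allocate_extents([1000, 1000, 1000, 1000], 8))
    # [0, 1, 2, 3, 0, 1, 2, 3]
    ```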
