SQL Server Performance

Another disk advice

Discussion in 'SQL Server 2005 Performance Tuning for Hardware' started by taras, Jun 11, 2008.

  1. taras New Member


    I've just acquired a new server, to release two old boxes.

    I'm going to run an old version of MySQL (3.23) and MS SQL 2005 on it.

    The server has plenty of memory and CPU, and is equipped with 16 x 15K SAS disks and a single 4-port controller.

    The MySQL is so old it only consists of one data file (no transaction log), and is pretty read-intensive.
    The MSSQL has very intensive IO on data and transaction log, and fairly intensive IO on tempDB.
    So I'm considering the following configurations....
    c: OS, Pagefile - 4 disk RAID10
    d: tempDB - 4 disk RAID10
    e: MSSQL Data + MySQL Data - 4 disk RAID10
    f: Transaction logs - 4 disk RAID10


    c: OS, Pagefile, tempDB - 4 disk RAID10
    d: MSSQL Data + MySQL Data - 6 disk RAID10
    e: Transaction logs - 6 disk RAID10
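    Whichever layout you pick, tempdb has to be relocated after install. A sketch of the move, assuming tempdb lands on d: as in the first layout (the logical names tempdev/templog are the SQL Server 2005 defaults; the paths are hypothetical):

```sql
-- Relocate tempdb files to the dedicated d: array.
-- NAME values are the default logical names; FILENAME paths are assumed.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'd:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'd:\tempdb\templog.ldf');
-- The move takes effect after the SQL Server service is restarted.
```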

    A little info on the MSSQL db... ~50 million records, 11 GB data, 10 GB transaction log, 200 simultaneous users.

    Which configuration would you recommend?

  2. Kewin New Member

    You don't say how large your drives are, but anyway... For a start, I like to keep things pretty simple.
    My preference is to have at least 4 physically separate arrays, maybe 5, depending on whether tempdb will be a hotspot.
    You only mention I/O intensity related to reads, so I'll just assume that reads are more common than writes.
    For that purpose, RAID 5 is a much underestimated config, both in terms of value for money and actual end performance.
    The controller itself also plays an important part in the overall performance of any given disksystem.
    Anyways, for a 'starter' config, why not try out:
    c: o/s, binaries, pagefile - 2 disks RAID 1
    d: datafiles - 7 disks RAID 5
    e: translogs - 2 disks RAID 1
    f: misc, files, backups, junk etc... - 3 disks RAID5
    g: tempdb datafile (optional) - 2 disks RAID 1
    Now, the above isn't quite realistic, because there are no disks left for spares.
    Don't forget that the main purpose (imo) when designing arrays isn't performance first, it's redundancy.
    You need to think about maintainability and backup/recovery strategies as well as system performance.
    One of the main parameters when deciding isn't how many rows there are, how many Gigs of data, or how many users
    (though these things do count); the essential thing is *how* the databases are used.
  3. taras New Member

    Hi Kenneth

    Thanks for your reply.
    Each disk is 73 GB (so no matter which config, there is enough capacity for a long time).
    Actually the MSSQL database has ~10% more writes than reads. The server imports CSV files each night, with approx 2-3 million records in each, which are used for updating records in the database.
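    For reference, a nightly load like that is typically done with something along these lines (the file path, staging table, and terminators here are assumptions, not taken from the actual job):

```sql
-- Sketch of a nightly CSV load into a staging table.
-- dbo.ImportStaging and the path are hypothetical examples.
BULK INSERT dbo.ImportStaging
FROM 'e:\imports\nightly.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);
```

    With writes that heavy on the data drive, the import itself is one more reason to keep the data array separate from the transaction logs.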
    The system makes heavy use of global transactions (transactions started in the application, passed into the database (with row locks) and back several times, before being committed).
    Regarding spares, I was planning to set up monitoring on the hardware, and then have extra disks ready to hot-swap.

    Backup can be done during off-hours.
    File backups go directly to an attached NAS.
    DB backup was planned to be placed on the transaction drive (early tests show it takes less than 15 minutes that way).
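    Roughly, that backup would be something like the following (the database name and path are made up; f: is the transaction-log drive from my first layout):

```sql
-- Full backup to the transaction-log array; name and path are examples.
BACKUP DATABASE MyDb
TO DISK = 'f:\backup\MyDb_full.bak'
WITH INIT, STATS = 10;
```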

    I have experience with using RAID 5 on MSSQL with heavy writes, and I'm not that keen on the idea.

    My main concern is performance, though of course without losing redundancy.


