SQL Server Performance

SAS RAID configuration on ProLiant DL580 G4

Discussion in 'SQL Server 2005 Performance Tuning for Hardware' started by RichHolt, Nov 5, 2007.

  1. RichHolt New Member

    Hardware: HP ProLiant DL580 G4. 4x dual-core Xeon procs w/HT. 20GB RAM. 8x 146GB 10K SAS drives.
    Software: Windows Server 2003 SP2, SQL 2005 Enterprise Edition SP2
    Situation: this server will host about 6 databases on local storage. Assuming that disk fault-tolerance is a must-have, I think I've settled on 4 RAID 1 mirror sets:
    #1 containing the C: partition with system files and a D: partition for application files + other file storage
    #2 containing one nearly read-only DB and the most active read-write DB
    #3 containing 2-3 other DBs
    #4 containing 2-3 remaining DBs
    ... and tempdb spread across all four arrays
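    For concreteness, the tempdb layout I have in mind is one equally sized data file per mirror set, so proportional fill spreads allocations across all four arrays. A minimal sketch, assuming the arrays are mounted as E:, F:, G: and H: (drive letters and sizes are placeholders, not final values):
    -- One tempdb data file per RAID 1 set; equal sizes keep proportional fill even.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  FILENAME = 'E:\tempdb\tempdb.mdf',  SIZE = 2GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb ADD FILE    (NAME = tempdev2, FILENAME = 'F:\tempdb\tempdb2.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb ADD FILE    (NAME = tempdev3, FILENAME = 'G:\tempdb\tempdb3.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb ADD FILE    (NAME = tempdev4, FILENAME = 'H:\tempdb\tempdb4.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
    -- Moving the primary file (tempdev) only takes effect after a service restart.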
    Does anyone have experience with a system like this? If I convert the drives to Dynamic disks, I could create a stripe set over the mirrors, but I've been told that both software RAID and Dynamic disks are bad for SQL Server. Normally I'd have the SAN guys set up new allocations for me, but SAN is not an option this time.
    I've done some preliminary load/stress testing with SQLIOsim, and although write performance can get a bit backed up under heavy load, read performance is quite good -- and these databases are primarily for reporting purposes (with batch updates that may be fairly write intensive at times).
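    Once real databases are on the box I also plan to sanity-check latency from inside SQL Server rather than relying only on SQLIOsim. A rough sketch of the kind of query I'd use against the SQL 2005 I/O stats DMV (numbers are cumulative since the last service restart):
    -- Per-file average read/write latency in ms since the instance started.
    SELECT DB_NAME(vfs.database_id) AS db,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.num_of_writes,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id;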
    Any feedback is welcome.
    Before you suggest that I engage HP Server Storage Support Engineering for some help, let me say that I already tried it. The engineer I got was completely incompetent! He eventually closed the service ticket and told me I'd have to go elsewhere for help, since "that server can't operate with 146GB SAS disks, the most it can handle is 72GB drives" (the server was purchased with all 8 146GB disks installed by HP). And to add to my dissatisfaction, he told me that the Support Pack and drivers that were installed were the latest and correct versions, but it took me about 5 minutes to find out that they were really outdated (in some cases by 3 or more releases). I don't trust them anymore.
  2. satya Moderator

    What kind of analysis did you get from SQLIOsim? Did you generate load during both busy periods and low-traffic hours on the database?
    What is the expected growth of this database, and how many users are connected at any given time?
    What kind of optimization jobs are planned to keep up the performance?
  3. RichHolt New Member

    Hi satya,
    As I mentioned, this is not a single database. It is a handful of DBs related to a new application that we're landing.
    Thank you for your time! Thanks for sharing your expertise! [:D]
    [quote user="satya"]
    What kind of analysis did you get from SQLIOsim? Did you generate load during both busy periods and low-traffic hours on the database?
    [/quote]
    Generated load with 6 data files and 6 log files, 2 of them with 2 streams each, spread across the volumes as outlined in my earlier message. This server has no busy or light times -- it is essentially on the bench right now; I was trying to get a feel for the hardware throughput capability in [4x RAID1] versus [2x RAID1 + 1x RAID10]. The only load was the OS and SQLIOsim.
    If you want to see a rollup of disk performance during the sim test, I can run a quick analysis of the perfmon logs and post it here...
    [quote user="satya"]
    What is the expected growth of this database, and how many users are connected at any given time?
    [/quote]
    It's a brand new app, so I don't have any history on its usage yet. I'll take a SWAG and say: the most active DB will grow by 2GB/month across various tables, the second most active at just under that rate, but the rest will grow very slowly if at all.
    User count will generally be quite low, perhaps <10. However, connections will be significantly higher, with a second server and IIS being used to connect a series of web services to the databases. Based on a short-duration monitor/log of connection counts in a development environment, there may be as many as 150 connections at any time, with the average around 20.
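    If anyone wants to reproduce that kind of sampling, a quick count of user connections per database on SQL 2005 looks something like this (just a sketch, not the exact logging I set up):
    -- Current user sessions grouped by the database they are connected to.
    SELECT DB_NAME(sp.dbid) AS db, COUNT(*) AS connections
    FROM master.sys.sysprocesses AS sp
    WHERE sp.spid > 50                -- spids <= 50 are typically system sessions
    GROUP BY sp.dbid
    ORDER BY connections DESC;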
    [quote user="satya"]
    What kind of optimization jobs are planned to keep up the performance?
    [/quote]
    Standard stuff, mostly. Weekly rebuild of fragmented indexes, daily full backups with hourly transaction log backups on the DBs in Full Recovery. About 2/3 of the databases are run in Simple recovery mode by vendor recommendation, to save resources (slowly changing reference data).
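    For the index piece I'm planning the usual SQL 2005 pattern: walk sys.dm_db_index_physical_stats and rebuild anything past a threshold. A rough sketch only -- the 30% cutoff is arbitrary and the options are illustrative:
    -- Weekly pass for one database: rebuild indexes over ~30% fragmented.
    DECLARE @sql nvarchar(max);
    DECLARE frag CURSOR FOR
        SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
               + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ' REBUILD;'
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
        JOIN sys.objects AS o ON o.object_id = ps.object_id
        JOIN sys.schemas AS s ON s.schema_id = o.schema_id
        JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
        WHERE ps.avg_fragmentation_in_percent > 30
          AND ps.index_id > 0;        -- skip heaps
    OPEN frag;
    FETCH NEXT FROM frag INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sp_executesql @sql;
        FETCH NEXT FROM frag INTO @sql;
    END
    CLOSE frag;
    DEALLOCATE frag;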
    Do you have some optimization and maintenance suggestions I should consider?
  4. RichHolt New Member

    I should mention that the application makes fairly heavy use of work tables in tempdb.
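    Because of that, I plan to keep an eye on how much of tempdb those work tables actually occupy once the app is live. A quick check on SQL 2005 (sketch only; pages are 8 KB each):
    -- Current tempdb space breakdown, in MB.
    SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
           SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,  -- work tables, sorts, hashes
           SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
           SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
    FROM tempdb.sys.dm_db_file_space_usage;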
