SQL Server Performance

HBA Queue Depth and IOPS, etc...

Discussion in 'SQL Server 2008 Performance Tuning for DBAs' started by herasyej, Jul 28, 2009.

  1. herasyej New Member

    Please excuse my lack of knowledge, as I am more of a sys admin than a DBA. I am trying to figure out how to get the most out of the IOPS on the QLogic HBAs in my MS SQL 2008 cluster.
    Current Environment:
    2x HP DL380 G5, dual quad-core, 32 GB RAM
    OS: Windows 2008 x64 Enterprise Edition
    DB: MS SQL 2008 Enterprise Edition with a per-processor license for each CPU
    5 SAN volumes for the environment: 3x 1 TB (one each for data, logs, and backup) and 2x 10 GB (one for MSDTC, one for the cluster quorum)
    I have been working with one of my data architects, and we've hit what appears to be a bottleneck in transaction processing; to be more exact, about 1 million sequential writes to a blank table. At first I was getting about 200 IOPS across my HBA (completion time for the test: ~1.25 hr), not exactly what I would call good performance. I made a few adjustments, placing my data and log volumes on separate arrays on individual storage tiers on the SAN, which increased performance by 200%, bringing me to approximately 600 IOPS across the HBA (~28 min). When running a similar test on a less powerful production environment that is non-clustered and non-SAN, the test completes in approximately 3-5 min. I know I am limited in IOPS by my RAID configuration: currently 6x 750 GB SATA at ~100 IOPS per disk, set up as RAID 5. I also tested RAID 1+0 and saw no increase in performance, so 600 IOPS seems to be the physical limit of the disks and array.
    Now, I have done a bunch of searching, seen some suggestions to enhance performance, and wanted some input. One thing I would like to know, just from a knowledge standpoint: is there a way to tune SQL 2008 to make better use of each IO, i.e., have multiple transactions sent per disk/HBA IO?
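    For context, the test is essentially a single-row insert loop roughly along these lines (the table and column names are made up for illustration, not our real schema):
        -- Rough reconstruction of the test: ~1 million single-row inserts.
        -- In autocommit mode, every INSERT is its own transaction, so each
        -- one forces a synchronous write to the transaction log.
        CREATE TABLE dbo.WriteTest (id INT NOT NULL, payload CHAR(100) NOT NULL);

        DECLARE @i INT = 1;
        WHILE @i <= 1000000
        BEGIN
            INSERT INTO dbo.WriteTest (id, payload) VALUES (@i, 'x');
            SET @i += 1;
        END;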
    These are a few things I would like to know if they have been tested and proven as performance enhancers:
    1. Moving tempdb to a separate storage tier from all other SQL data. Suggestions have been a RAID 1+0 and splitting tempdb into a number of files equal to the number of processor cores; in my case, 2x quad-core = 8 individual files, one per core (a sketch of this change is below, after this list). Will this take advantage of each core for better throughput? Will having tempdb on a separate tier make better use of the IOs being utilized on each LUN?
    2. HBA queue depth. This seems to be somewhat configurable via regedit on OS versions prior to 2008, but I can't find much about it for Server 2008. Based on some of the responses I've seen, adjusting the queue depth could drastically increase throughput; has this been done successfully in 2008?
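    For point 1, the change I have in mind would look something like this (the drive letter, file names, and sizes are placeholders, not our actual layout):
        -- Sketch: add tempdb data files until the count matches the core count.
        -- Paths and sizes below are placeholders.
        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
                      SIZE = 4GB, FILEGROWTH = 0);
        ALTER DATABASE tempdb
            ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf',
                      SIZE = 4GB, FILEGROWTH = 0);
        -- ...and so on, up to 8 data files for 8 cores.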
    This is all I have for now; I'd appreciate whatever input anyone can provide. I have also been looking into how the volumes are configured (block size and chunk size on the SAN and the local system); any suggestions on that would be much appreciated as well.
    Thanks in Advance for any help.
    Eric
  2. Luis Martin Moderator

    Welcome to the forums!
    Could you post your SQL 2008 configuration?
    For that, run sp_configure.
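    If only a short list comes back, enable the advanced options first:
        -- Show the full configuration, including advanced options:
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure;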
    Thanks.
  3. herasyej New Member

    Luis, as requested.
    Thanks, Eric
    name minimum maximum config_value run_value
    access check cache bucket count 0 65536 0 0
    access check cache quota 0 2147483647 0 0
    Ad Hoc Distributed Queries 0 1 0 0
    affinity I/O mask -2147483648 2147483647 0 0
    affinity mask -2147483648 2147483647 0 0
    affinity64 I/O mask -2147483648 2147483647 0 0
    affinity64 mask -2147483648 2147483647 0 0
    Agent XPs 0 1 1 1
    allow updates 0 1 0 0
    awe enabled 0 1 0 0
    backup compression default 0 1 0 0
    blocked process threshold (s) 0 86400 0 0
    c2 audit mode 0 1 0 0
    clr enabled 0 1 0 0
    common criteria compliance enabled 0 1 0 0
    cost threshold for parallelism 0 32767 5 5
    cross db ownership chaining 0 1 0 0
    cursor threshold -1 2147483647 -1 -1
    Database Mail XPs 0 1 0 0
    default full-text language 0 2147483647 1033 1033
    default language 0 9999 0 0
    default trace enabled 0 1 1 1
    disallow results from triggers 0 1 0 0
    EKM provider enabled 0 1 0 0
    filestream access level 0 2 2 0
    fill factor (%) 0 100 90 90
    ft crawl bandwidth (max) 0 32767 100 100
    ft crawl bandwidth (min) 0 32767 0 0
    ft notify bandwidth (max) 0 32767 100 100
    ft notify bandwidth (min) 0 32767 0 0
    index create memory (KB) 704 2147483647 0 0
    in-doubt xact resolution 0 2 0 0
    lightweight pooling 0 1 0 0
    locks 5000 2147483647 10000 10000
    max degree of parallelism 0 64 0 0
    max full-text crawl range 0 256 4 4
    max server memory (MB) 16 2147483647 60416 60416
    max text repl size (B) -1 2147483647 65536 65536
    max worker threads 128 32767 0 0
    media retention 0 365 0 0
    min memory per query (KB) 512 2147483647 8192 8192
    min server memory (MB) 0 2147483647 8192 8192
    nested triggers 0 1 1 1
    network packet size (B) 512 32767 4096 4096
    Ole Automation Procedures 0 1 0 0
    open objects 0 2147483647 0 0
    optimize for ad hoc workloads 0 1 1 1
    PH timeout (s) 1 3600 60 60
    precompute rank 0 1 0 0
    priority boost 0 1 1 1
    query governor cost limit 0 2147483647 0 0
    query wait (s) -1 2147483647 -1 -1
    recovery interval (min) 0 32767 0 0
    remote access 0 1 1 1
    remote admin connections 0 1 0 0
    remote login timeout (s) 0 2147483647 20 20
    remote proc trans 0 1 0 0
    remote query timeout (s) 0 2147483647 600 600
    Replication XPs 0 1 0 0
    scan for startup procs 0 1 0 0
    server trigger recursion 0 1 1 1
    set working set size 0 1 0 0
    show advanced options 0 1 1 1
    SMO and DMO XPs 0 1 1 1
    SQL Mail XPs 0 1 0 0
    transform noise words 0 1 0 0
    two digit year cutoff 1753 9999 2049 2049
    user connections 0 32767 0 0
    user options 0 32767 0 0
    xp_cmdshell 0 1 0 0
  4. RickNZ New Member

    How are your data and log RAID volumes partitioned over the physical drives? You say you have 6x750 GB drives. Are all five of your SAN volumes spread over all 6 drives? If so, that would explain your issue right there. Make sure your logs are on separate physical spindles -- even one pair of drives in RAID 1 would be a huge step up.
    Is there anything else on either the data or log volumes? If so, you might try defragmenting the drives.
    Are the data and/or log files configured with a small enough size such that they will auto-grow? If so, try setting the size large enough that auto-grow won't happen.
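    Something along these lines, with your own database and logical file names (the ones below are examples only):
        -- Pre-size the data and log files so auto-grow never fires mid-test.
        -- Database and logical file names here are examples only.
        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, SIZE = 200GB);
        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log,  SIZE = 50GB);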
    When you tested RAID 10, was that on your data volume, or log, or both? Or does your current setup require them to be the same?
  5. herasyej New Member

    Rick,
    My volumes are configured on different storage tiers, 3 different ones to be exact: 1 for data, 1 for logs, and 1 for backup, MSDTC, and the cluster quorum, as stated in my first post. Each storage tier consists of a single RAID array of 7x 750 GB drives with one drive as a hot spare, and two of the tiers host a single volume each: data on one and logs on the other.
    As for autogrowth, since I have plenty of space on each volume (~1 TB), I set the database size manually to prevent growth latency.
    As for RAID 1+0, I tried it on the log and data volumes, individually and together, and didn't get any increase in performance.
    Hope this helps.
    Eric
  6. RickNZ New Member

    OK, to summarize, is this correct?
    Volume A: 7x750 GB, RAID 5, 1 TB partition, data only
    Volume B: 7x750 GB, RAID 5, 1 TB partition, logs only
    Volume C: 7x750 GB, RAID 5, 1 TB partition for backup, 10 GB for MSDTC, 10 GB for cluster quorum
    Oh, except you say that you have "plenty of space" on each volume. Does that mean the partition size is the full volume size, and you just have the size of the filegroup set to 1 TB?
    Have you looked at the PhysicalDisk queue length and transfers per second when the system is slow? If so, what do they show? It might also be interesting to look at SQL's memory use: SQLServer:Buffer Manager / Page life expectancy and Lazy writes/sec.
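    If PerfMon is awkward to run against the cluster, you can read the SQL counters straight from a DMV:
        -- Buffer pool health: low page life expectancy plus high lazy
        -- writes/sec point at memory pressure rather than a pure disk limit.
        SELECT object_name, counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Page life expectancy', 'Lazy writes/sec')
          AND object_name LIKE '%Buffer Manager%';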
    In that configuration, I'm a little surprised that your throughput didn't improve when changing the log from RAID 5 to RAID 10. Are all of the volumes on the same controller?
    It might be interesting to use the Contig utility to check how many fragments your data and log files have. For best performance, each should have only one fragment:
    http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
  7. herasyej New Member

    Rick,
    You are correct about the volumes. Each RAID 5 has 3.4 TB capacity, but only 1 TB is partitioned on each. My data and logs are on separate storage tiers on separate SAN controllers using different paths.
    I think you may be looking too deep into this. I'm at the physical IOPS limit of my drives, and the only way to increase IOPS is to add more disks to the RAID, which isn't really an option. What I am essentially looking for is a way to make better use of each IO that SQL issues; right now it's one transaction per IO, which in my eyes is not very efficient. Like I said, I'm not a DBA, so tuning is something I'm not too familiar with. I guess I'm looking for a way to have SQL submit multiple transactions per IO; is that possible?
    Eric
  8. RickNZ New Member

    What concerns me about the IOPS you posted is that SQL write performance is largely determined by log disk speed (at least until RAM fills up), and sequential IOPS should be 40 or more times higher than random. If you can do 100 8KB IOPS in random mode, you should be able to do 4000 or more in sequential.
    However, if you're convinced that your logs are writing sequentially, then the answer to your other question is that SQL writes once to the log for each transaction. You can improve write performance by increasing the number of rows that you write in each transaction. It's not possible to go the other way and increase the number of transactions per write, since the writes are how the DB ensures the integrity of a transaction.
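    As a sketch, reusing the hypothetical test table from the first post, the difference is wrapping batches of rows in one explicit transaction instead of letting each insert commit on its own (the batch size of 1,000 is arbitrary):
        -- 1,000 rows now share one commit, and therefore one log flush,
        -- instead of paying a synchronous log flush per row.
        DECLARE @i INT = 0, @j INT;
        WHILE @i < 1000000
        BEGIN
            BEGIN TRANSACTION;
            SET @j = 0;
            WHILE @j < 1000
            BEGIN
                INSERT INTO dbo.WriteTest (id, payload) VALUES (@i + @j, 'x');
                SET @j += 1;
            END;
            COMMIT;
            SET @i += 1000;
        END;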
    Another factor in insert performance is server round-trips. You can reduce the number of round-trips using command batching. Depending on your application, you may be able to combine fewer round-trips with a larger transaction size by using table-valued parameters, with a single INSERT statement that reads from the incoming table.
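    A minimal T-SQL sketch of the table-valued parameter approach (the type, procedure, and table names are made up, matching the hypothetical test table from earlier; in practice the client would fill the parameter from application code):
        -- One round-trip, one transaction, one set-based insert.
        CREATE TYPE dbo.WriteTestRows AS TABLE (id INT NOT NULL, payload CHAR(100) NOT NULL);
        GO
        CREATE PROCEDURE dbo.InsertWriteTestRows
            @rows dbo.WriteTestRows READONLY
        AS
            INSERT INTO dbo.WriteTest (id, payload)
            SELECT id, payload FROM @rows;
        GO
        -- Caller fills the parameter and makes a single call:
        DECLARE @t dbo.WriteTestRows;
        INSERT INTO @t VALUES (1, 'x'), (2, 'y');
        EXEC dbo.InsertWriteTestRows @rows = @t;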
    EDIT: actually, increasing the number of drives in your array is not the only way to increase random IOPS. Another way is to make the partitions narrower (short-stroking): by reducing average seek time, IOPS will increase.
    Oh, another question: what is your RAID stripe size? That can also be a big factor.
  9. RickNZ New Member

    The other idea I had in mind was to reorganize your drives in a way that would increase IOPS.
    Something like this:
    Volume A: 14x 750 GB, RAID 5, controller/path #1, 128 KB stripe size, 1 TB partition for data (important that it's the first partition on the volume), ~500 GB for tempdb, 10 GB for MSDTC, 10 GB for cluster quorum, 1 TB for backups.
    Volume B: 7x 750 GB (5 data, 1 parity, 1 hot spare), RAID 5, 64 KB stripe size, controller/path #2, 1 TB partition for logs.
    The narrower data partition (same space spread over more drives) will increase random IOPS. Having more drives will also increase IOPS.
  10. moh_hassan20 New Member

    You based your I/O performance testing on writes only; what tool did you use to measure?
    Use the SQLIOSim utility to simulate SQL Server activity on the SAN, taking into account sequential writes for the log and random reads for the data. It's described here:
    http://support.microsoft.com/kb/231619
    Can you post the results of that test?
    What are the values of the cache read/write setting and the current queue depth?
    Is the 600 IOPS for one LUN or for the whole host?
