SQL Server Performance

Memory issue causing disk issues?

Discussion in 'SQL Server 2008 Performance Tuning for DBAs' started by Shifty1981, Jan 7, 2011.

  1. Shifty1981 New Member

    I have a sneaking suspicion that our disk stats are poor because of excessive paging due to a lack of memory. Below are some stats. Is there a way to prove that paging activity is the cause of the excessive disk activity, and if so, how do I determine how much more RAM to buy? We're running SQL Server 2008 R2 x64 Enterprise on Windows Server 2008 R2 x64. We have 64 GB of RAM in a high-I/O OLTP database environment. The previous admin gave 32 GB of the RAM to SQL Server and left 32 GB to the OS and other processes, presumably to help with the handling of all the customer files they send us (we back up and restore customer files all day long).

    The only reason I'm not convinced already is because the disk (RAID1 set) that has the pagefile and OS on it has decent stats. It's the RAID sets with the DB on them that perform poorly. Maybe there's another stat I should be looking at to help determine this?


    Average Disk sec/Transfer
    P drive (RAID10 - 14 drives) 0.014
    G drive (RAID10 - 14 drives) 0.014
    K drive (where DB is, RAID10 - 14 drives) 0.041
    F drive (where tempdb, tlog are, RAID10 - 4 drives) 0.007
    C drive (OS, pagefile, RAID1) 0.005


    Average Disk Queue Length
    P drive (RAID10 - 14 drives) 0.68
    G drive (RAID10 - 14 drives) 6.02
    K drive (where DB is, RAID10 - 14 drives) 71.22
    F drive (where tempdb, tlog are, RAID10 - 4 drives) 2.57
    C drive (OS, pagefile, RAID1) 0.04

    % Disk Time
    P drive (RAID10 - 14 drives) 67.6
    G drive (RAID10 - 14 drives) 596.0
    K drive (where DB is, RAID10 - 14 drives) 7358.7
    F drive (where tempdb, tlog are, RAID10 - 4 drives) 261.0
    C drive (OS, pagefile, RAID1) 5.6

    Other stats
    Checkpoint pages/sec 1246.618
    Page File Usage 0.739
    Page life expectancy 429.733
    Page reads/sec (SQL:BM) 1672.008
    Page Splits/sec 65.997
    Page writes/sec (SQL:BM) 1371.892
    Pages/sec 534.441
    Lazy writes/sec 27.735
    Target Server Memory (KB) 33554432
    Total Server Memory (KB) 33554432
    Total Available Memory (MBytes) 13166.208
    Virtual Memory 58881281741
    Full scans 77.165
    Batch Requests/sec 643.659
    Latch waits 1238.704
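
    One way to separate buffer-pool I/O from OS paging is to ask SQL Server for its own per-file I/O accounting. The sketch below uses standard DMVs available in SQL Server 2008; if the database files on K: show high read latency here while the pagefile drive stays quiet, the pressure is buffer-pool churn (reads of data pages), not OS paging:

    ```sql
    -- Per-database-file I/O latency as measured by SQL Server itself.
    SELECT DB_NAME(vfs.database_id)                              AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_ms DESC;
    ```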
  2. Luis Martin Moderator

    The one counter that looks high to me is Pages/sec. As for the disks, those numbers sound good.
    You have Target Server Memory = Total Server Memory, so to me you need more memory.
    Two questions:
    1) Is SQL Server dedicated (nothing else running on the box)?
    2) Are all maintenance jobs (index defragmentation, etc.) running?
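
    The Target vs. Total comparison can be checked directly from the performance-counter DMV; a minimal sketch:

    ```sql
    -- Target = how much memory SQL Server wants; Total = how much it currently has.
    SELECT counter_name, cntr_value / 1024 AS value_mb
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Target Server Memory (KB)', N'Total Server Memory (KB)');
    ```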
  3. Shifty1981 New Member

    Hi, sorry; I was away on vacation and then catching up at work. Below are my responses and questions:
    1. Is SQL Server dedicated? Not technically. We have our company software, which our clients talk to, on the server. It takes data from them, stores info about the data, and stores the data itself (files). Other than that, the only software besides SQL Server on the box is antivirus. Currently there are no plans to move the application off to another server, as that would cost money they are not willing to spend (buying more servers). They have no reason to believe the application is putting a significant load on the server, though they may be mistaken.
    2. I'm not familiar enough with SQL Server to know which jobs you're referring to, nor where to determine if and when they are running.
    3. Can you point me to an article discussing target vs. total memory? We have set a max memory in Management Studio (32 GB out of 64). My limited understanding is that:
    a. if you let SQL Server use as much memory as the OS will allow, then Target greater than Total means you need more memory;
    b. if you cap the amount it should use, SQL Server will just report Total = Target, making the conclusion less clear.
    Maybe I'm mistaken, though. Wouldn't be the first time.
    Thanks!
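
    For point (b), one place to look past the Target = Total ceiling is the OS-level memory DMV, which reports how much physical memory Windows still has free and whether it considers memory to be low (available in SQL Server 2008 and later):

    ```sql
    -- OS-level view of physical memory as seen by SQL Server.
    SELECT total_physical_memory_kb     / 1024 AS total_mb,
           available_physical_memory_kb / 1024 AS available_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;
    ```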
  4. satya Moderator

    See my blog post at http://sqlserver-qa.net/blogs/perftune/archive/2008/04/16/3922.aspx; you have collected the majority of the relevant counters, and they look right.
    Do you see any other warnings in the SQL Server error log for this memory behaviour?
    Also, what is the memory setting on this SQL Server instance: is it dynamic, or are MIN & MAX set?
    Check the Windows event log for any warnings or notifications as well.
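
    Searching the SQL Server error log for memory-related messages can be scripted; a sketch using the undocumented but widely used xp_readerrorlog procedure:

    ```sql
    -- Arguments: log number (0 = current), log type (1 = SQL error log),
    -- then up to two search strings to filter on.
    EXEC sys.xp_readerrorlog 0, 1, N'memory', NULL;
    ```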
  5. Shifty1981 New Member

    The memory is set to 32 GB out of the 64 available. Since the application the clients communicate with is on the server, and they're sending client files to us, the team decided before I was here that giving half to SQL Server and half to everything else was a good idea. Of course, they had no metrics to determine whether this amount was right, too much, or too little. Determining that would be fantastic.
    There are MSSQLSERVER errors in the event logs under a heavy load. Some are below:
    1204 - LOCK error

    701 - in rare cases, this is a SQL Server insufficient-memory error. We saw this when running a custom cleanup tool that was hammering our database. Unfortunately, we do need to be able to run this tool, but we're worried it will mean needing much more memory, or DAS or SAN storage. For example, during these events we saw Average Disk Queue Length in the thousands, compared to the usual lower numbers.
    Our app also has its own event log entries, and sometimes we see query timeout errors and foreign key errors. Sometimes we know why there are foreign key errors (someone accidentally deleted rows that were later referenced), but sometimes we have no idea why, since no one has been running any tools. The only way we can think of to determine when and why this is happening (the key being deleted, that is; the event log entry just shows up some time later when the key is referenced) is to find a tool that can somehow record the date a row is inserted, updated, and deleted, so that when another row that references it hits a foreign key error we can look back at the timestamps and figure out what query was running when the row was deleted.
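
    Lacking a third-party tool, a plain AFTER DELETE trigger writing to an audit table is one way to timestamp deletions. The table and column names below are placeholders for illustration, not your actual schema:

    ```sql
    -- Hypothetical audit table recording who deleted which parent row, and when.
    CREATE TABLE dbo.ParentDeleteAudit (
        ParentID  int      NOT NULL,
        DeletedAt datetime NOT NULL DEFAULT GETDATE(),
        DeletedBy sysname  NOT NULL DEFAULT SUSER_SNAME()
    );
    GO
    -- Fires once per DELETE statement; the "deleted" pseudo-table holds all removed rows.
    CREATE TRIGGER dbo.trg_Parent_DeleteAudit
    ON dbo.Parent
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.ParentDeleteAudit (ParentID)
        SELECT ParentID FROM deleted;
    END;
    ```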
  6. satya Moderator

  7. MikaS New Member

    Hi,

    32 GB of memory for an application seems like a lot, unless it's SQL Server ;) You should probably check how much memory the application you're running on the DB server actually consumes. From my own experience, SQL Server will always eventually claim the maximum amount of memory it's been assigned. This is, as far as I understand, because of all the different caches SQL Server has (it has plenty of those).

    While SQL Server can handle these caches and cleanup routines just fine, I've run on a few occasions into a situation where Windows (especially Windows 2008 R2) memory management keeps interfering with SQL Server operations by paging data out to virtual memory (disk), and it slows things down a lot.

    This can be prevented with the Lock Pages in Memory policy. There's an MSDN article saying that you don't need it on 64-bit environments, but I've learned the hard way that in some cases you actually do :)
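
    Whether Lock Pages in Memory is actually in effect can be confirmed from sys.dm_os_process_memory (SQL Server 2008 and later); a non-zero locked-pages figure means the policy is active:

    ```sql
    -- locked_page_allocations_kb > 0 indicates Lock Pages in Memory is in use.
    SELECT physical_memory_in_use_kb  / 1024 AS in_use_mb,
           locked_page_allocations_kb / 1024 AS locked_mb,
           page_fault_count
    FROM sys.dm_os_process_memory;
    ```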
  8. satya Moderator

    I believe it is ideal to set MIN & MAX memory when there is more than 32 GB of physical memory available; in recent times we have seen similar issues reported on 64-bit platforms.
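
    Setting MIN & MAX server memory is done through sp_configure; the values below are purely illustrative placeholders, not a recommendation for this workload:

    ```sql
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 4096;   -- illustrative value
    EXEC sp_configure 'max server memory (MB)', 40960;  -- illustrative value
    RECONFIGURE;
    ```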
