SQL Server Performance

SQL Not using enough memory

Discussion in 'Performance Tuning for DBAs' started by Davecbr600, Sep 9, 2005.

  1. Davecbr600 New Member

    Hello all,

    First post and all that. So my v.card is behind the bar - help yourselves to a drink.

    I am running SQL Server 2000 Enterprise on MS Windows Server 2003 Enterprise Edition SP1, with 8 Intel Xeon CPUs running at 3.16 GHz, 14.9 GB of RAM, and 600 GB of RAID 5 storage.

    I ran a DTS package which loads 13 GB of data and watched Performance Monitor to see how the server handled it. I noticed that my Pages/sec counter was way high (around 800), which from what I've read here is okay, as this happens when you use DTS. But my buffer cache hit ratio dropped for a while from above 95 to around 88. I also checked my total and target server memory, which both said 2.5 GB. Even the standard Task Manager reckons SQL is only using 2.5 GB.

    Now why would SQL start doing all this paging when it has a whole heap of memory it could use? The memory properties for the instance are set to allocate dynamically, and it shows 15,335 MB of memory - so it can see the 14.9 GB, it's just not using it. I read somewhere that I need to add the /3gb switch to the boot.ini file so SQL can see the extra RAM, but I believe that it already can.

    Any ideas would be of great help.

  2. Luis Martin Moderator

    Windows 2000 Usage Considerations
    Before you configure Windows 2000 for AWE memory, consider the following:

    To enable Windows 2000 Advanced Server or Windows 2000 Datacenter Server to support more than 4 GB of physical memory, you must add the /pae parameter to the boot.ini file.

    To enable Windows 2000 Advanced Server and Windows 2000 Datacenter Server to support a 3-GB virtual address space, you must add the /3gb parameter to the boot.ini file. This allows user applications to address 3 GB of virtual memory and reserves 1 GB of virtual memory for the operating system.
    However, if there is more than 16 GB of physical memory available on a computer, Windows 2000 needs 2 GB of virtual memory address space for system purposes and therefore can support only a 2-GB virtual address space.

    In order to allow AWE to use the memory range above 16 GB, be sure the /3gb parameter is not in the boot.ini file. If it is, Windows 2000 will be unable to address any memory above 16 GB. When allocating SQL Server AWE memory on a 32-GB system, Windows 2000 may require at least 1 GB of available memory to manage AWE. Therefore, when starting an instance of SQL Server with AWE enabled, it is recommended that you do not use the default max server memory setting, but instead limit it to 31 GB or less.

    For more information, see Using AWE Memory on Windows 2000.
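    On SQL Server 2000 Enterprise the AWE side of this is done with sp_configure. A minimal sketch (the 12 GB cap is just an example for a ~15 GB box; 'awe enabled' is an advanced option and only takes effect after the SQL Server service is restarted):

    ```sql
    -- 'awe enabled' is hidden until advanced options are on
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE

    EXEC sp_configure 'awe enabled', 1
    RECONFIGURE

    -- With AWE on, SQL 2000 memory is static, so set an explicit cap (value in MB)
    EXEC sp_configure 'max server memory', 12288
    RECONFIGURE

    -- Restart the SQL Server service for 'awe enabled' to take effect
    ```

    The SQL Server service account also needs the 'Lock pages in memory' privilege for AWE to work.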

    Luis Martin

    One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important
    Bertrand Russell

    All postings are provided “AS IS” with no warranties for accuracy.

  3. Davecbr600 New Member


    Good answer but my boot.ini file already has the /3gb switch.

    The contents are here:

    [boot loader]
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(2)WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /3gb /pae
  4. Twan New Member

    Hi ya,

    Your DTS files are on the server too? If so, then remember that
    - reading the DTS file will generate pages/sec
    - the load is very write-intensive, and the data must be written physically to the log, which again will create pages/sec

    Check out the SQL performance counters for page reads/sec and page writes/sec.
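    On SQL Server 2000 you can also read those counters from inside the server via the master..sysperfinfo table (a sketch; exact counter names can vary slightly between builds, and the /sec counters are cumulative raw values, so sample twice and take the difference):

    ```sql
    -- Buffer Manager counters relevant to the paging question
    SELECT object_name, counter_name, cntr_value
    FROM master..sysperfinfo
    WHERE object_name LIKE '%Buffer Manager%'
      AND counter_name IN ('Page reads/sec', 'Page writes/sec', 'Buffer cache hit ratio')
    ```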

  5. Davecbr600 New Member


    Yeah, I followed the guide on here to help me monitor the server's performance while the DTS package was doing its stuff, to check for memory and disk bottlenecks. The processors seem fine.

    I understand that while the data is on the same server, and while it is such a large data import, there will be a great deal of disk activity. But why would my target and total server memory only be showing 2.5 GB when the server has 14.9 GB of RAM?

    That's the thing that's making me scratch my head.

    Maybe SQL Server will take up more resources as it becomes used more...?
  6. satya Moderator

    If you're not seeing any performance loss then I suggest you leave the process for SQL Server to handle. If the data load and the queries that run afterwards aren't affected, then carry on with the optimization jobs; SQL Server will definitely take more memory if it needs it.

    Satya SKJ
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  7. Davecbr600 New Member

    okay - I'll keep an eye on it.
  8. mulhall New Member

    Can you detail the format of the data and how DTS loads it? You may well want to avoid the transaction log.
  9. satya Moderator

    Yes and what was the Database RECOVERY model during the load?

    Satya SKJ
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  10. Davecbr600 New Member

    The data is split into 16 structured flat files ranging from around 500 MB to 2 GB, with 4 much smaller associated files for each one. Each file is loaded into its own load table, which has its indexes dropped and then re-created once the data has been loaded.

    I am using the standard GUI DTS - not BULK INSERT or bcp.

    The transaction log does get hammered when the load happens.

    The recovery model is full. But as the load is DTS and not BULK INSERT or bcp, I believe (correct me if I am wrong) that changing the recovery model won't make any difference to the transaction log.
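    If it turns out the load is using a bulk operation after all, switching the database to the bulk-logged recovery model for the load window is cheap to try (a sketch; 'LoadDB' and the backup path are placeholders - note that a log backup containing bulk operations can't be used for point-in-time restore):

    ```sql
    ALTER DATABASE LoadDB SET RECOVERY BULK_LOGGED

    -- ... run the DTS load here ...

    ALTER DATABASE LoadDB SET RECOVERY FULL
    BACKUP LOG LoadDB TO DISK = 'D:\Backups\LoadDB_log.bak'
    ```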
  11. satya Moderator

    By default DTS will use a bulk insert operation, and as the database is in the full recovery model the system will take care of any contention. If you're continuing the log backups then it is not the area to be concerned about, although writing events to the transaction log when it is huge can cause a performance hit.

    Satya SKJ
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  12. Davecbr600 New Member

    yeah, I try to keep on top of transaction log backups just to keep the size down.

    hmmm... the plot thickens!

    cheers for your replies thus far though.
  13. satya Moderator

    http://www.sql-server-performance.com/operating_system_tuning_w2003.asp fyi.

    Satya SKJ
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  14. mulhall New Member

    If the files are being loaded sequentially then there wouldn't be a need for a greater amount of memory.

    However, on re-reading your post I notice you mention an instance - are you implying that there are multiple instances running?

    Dynamic allocation across multiple instances will show that all of the memory is available when in fact SQL is juggling it between them.
  15. Davecbr600 New Member

    I have increased the throttle on the DTS package and made a connection for each load file, so it should load multiple files at the same time. Only one instance is installed/running, and no other applications are installed on the machine. This is a dedicated SQL Server box.

    I've checked and re-checked the available bytes vs. the total/target SQL memory bytes and there is a difference. The total and target are running at 2.5 GB all the time, whereas there is still 11 GB available on the server.

    I am wondering if I need to run sp_configure and set 'awe enabled' to 1... But I am doing stuff on the server at the moment so it will have to wait.
  16. Davecbr600 New Member

    Right, I've just had a look at another server and the same thing was happening: SQL was only running with 2.5 GB.

    Nowt was running on that, so I ran

    exec sp_configure 'awe enabled', 1

    stopped and started the service, and voila - 8 GB in use.
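    After the restart you can confirm what SQL Server is actually using from the Memory Manager counters (a sketch; on SQL 2000 these particular cntr_value figures are already reported in KB):

    ```sql
    -- Target = what SQL Server aims to use; Total = what it is using now
    SELECT counter_name, cntr_value
    FROM master..sysperfinfo
    WHERE object_name LIKE '%Memory Manager%'
      AND counter_name LIKE '%Server Memory%'
    ```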
  17. mulhall New Member

    Hah! Can't believe nobody suggested that... Apologies, Dave.
