SQL Server Performance

Target Memory & Total Memory Relationship

Discussion in 'Performance Tuning for DBAs' started by rjacops, May 5, 2003.

  1. rjacops New Member

    What is the proper relationship between "Total Server Memory (KB)" and "Target Server Memory (KB)" for a properly configured SQL Server?

    According to the last two paragraphs of the following article from this site, these two counters should be about the same. Otherwise, the article indicates you need more RAM.

    However, one of our new servers (W2K Server, SQL2K SP3 Standard Edition) has plenty of RAM with 2.5 GB and is set up to use dynamic memory allocation. With no database set up yet and no applications using this server yet, performance monitor shows the target memory far higher than the total memory. Target is about 2.3 GB and total is about 40 MB. Does this indicate a problem? If so, what is the solution?

    My guess is that perhaps this does not indicate a problem and the total memory will actually creep up dynamically as needed. There appears to be very little documentation on this subject.
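
    For reference, both counters can be read from Query Analyzer as well as from Performance Monitor. This is a sketch against SQL Server 2000's `master.dbo.sysperfinfo` view; the exact counter-name spelling can vary between builds, hence the LIKE patterns:

    ```sql
    -- Read the two memory counters straight from SQL Server 2000
    SELECT counter_name, cntr_value
    FROM master.dbo.sysperfinfo
    WHERE object_name LIKE '%Memory Manager%'
      AND (counter_name LIKE 'Target Server Memory%'
           OR counter_name LIKE 'Total Server Memory%')
    ```

    Values are reported in KB, so on an idle server you would expect to see roughly the same 2.3 GB target / 40 MB total gap described above.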

    Thanks In Advance,

    Ryan Jacops
  2. bradmcgehee New Member

    Your assessment is correct. Once the system is in production, you will see the behavior described in the article. The article was written under the assumption that the server was already in production.

    Brad M. McGehee, MVP
  3. ErrorHandler New Member

    I am having the same problem on a production server. We are having some speed problems on this server, and the "target server memory" is far higher than the "total server memory". Referring to the above article, I also thought we had a RAM problem. Nevertheless, the server has 1.1 GB RAM and there is still 400 MB free. But when I look in Performance Monitor, I see "target server memory" at about 780 MB and "total server memory" at about 350 MB. Does this mean we have a RAM problem, or does it prove nothing?

    If this is not the problem, I really don't know where else to look. The server's cache hit ratio is about 99%, and I set 'priority boost' to 1. I'm also quite new to performance tuning. Any ideas?
  4. satya Moderator

    How about the memory settings on SQL Server?
    Are any other applications running in parallel?
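
    For example, the relevant settings can be inspected with `sp_configure`. A sketch (the 'awe enabled' option is only usable on Enterprise-class editions, so treat that line as optional):

    ```sql
    -- Show SQL Server's memory-related configuration
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'min server memory (MB)'
    EXEC sp_configure 'max server memory (MB)'
    EXEC sp_configure 'awe enabled'
    ```

    With dynamic memory management, 'min server memory (MB)' defaults to 0 and 'max server memory (MB)' to its maximum, so any other values indicate someone has constrained SQL Server's memory.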

    Satya SKJ
  5. vbkenya New Member

    The memory statistics are not expected to reflect what you have physically installed on the system. Remember that the dynamism of SQL Server's memory management depends on the presence of other applications (and services) on the server and on the workload (queries) it is handling.

    By the way, a cache hit ratio of 99% is, in my opinion, not bad at all.
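
    For anyone who wants to check that figure without Perfmon, the ratio can also be computed from `sysperfinfo`. Note that the 'ratio' counter must be divided by its matching 'base' counter; a sketch:

    ```sql
    -- Compute the buffer cache hit ratio in T-SQL (SQL Server 2000)
    SELECT 100.0 * r.cntr_value / b.cntr_value AS hit_ratio_pct
    FROM master.dbo.sysperfinfo r
    JOIN master.dbo.sysperfinfo b
      ON b.object_name = r.object_name
    WHERE r.object_name LIKE '%Buffer Manager%'
      AND r.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base'
    ```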
  6. ErrorHandler New Member

    By the 99% cache hit ratio I meant: that could not be the problem. The only other program running on this server is ARCserve 2000, which only backs up at night. Apart from ARCserve, the server is a clean install of W2K Advanced Server. When I check the Windows 2000 Task Manager, the processor is not overloaded (I check at busy moments), and as I said there is still about 400-450 MB RAM free.
    But when the server 'wants' 780 MB and only 'takes' 350, isn't that a problem? (Dynamic memory management; nothing changed in sp_configure except 'priority boost'.) Can't that be a reason for performance loss? (We have no spare RAM to test it.)
  7. rjacops New Member

  8. bradmcgehee New Member

    As there is not much "documented" material on this subject, I have checked all of the SQL Servers I manage, each with from 2 GB to 8 GB of RAM. In every case, "Total Server Memory (KB)" and "Target Server Memory (KB)" are equal. Also, the buffer cache hit ratio is 99+% on each server. All of these servers have been up for quite some time.

    According to the book, Microsoft SQL Server 2000 Performance Tuning, the definition of "Target Server Memory (KB)" is "The total amount of dynamic memory that SQL Server may potentially consume."

    My guess is that you don't see "Total Server Memory (KB)" and "Target Server Memory (KB)" equal under these conditions:

    1) The server has been turned on only recently and little activity has occurred.
    2) When you have far more RAM than you really need.
    3) When you have far less RAM than you really need.

    What do others think about this?

    Brad M. McGehee, MVP
  9. Chappy New Member

    I would also add a 4th bullet to that list: situations where SQL Server is not on a dedicated server. I would imagine that if other processes require memory, it can cause SQL Server's total memory to fall below its target.
  10. ErrorHandler New Member

    Response to rjacops: Enabling AWE didn't help. I didn't think it would, because I read somewhere that enabling AWE only shows results when you have over 2 GB of RAM (not sure about the 2 GB figure, but it was certainly more than the 1.1 GB we currently have).

    I'm starting to think our speed problems are not related to "Target Server Memory" and "Total Server Memory" at all, since there seems to be no way to get them equal (I still don't even know whether it is really necessary for them to be equal), and we see the same phenomenon on both our SQL 2000 servers (so I checked). They are both too slow, though: before, we had NT Server and SQL 6.5, and since the migration to W2K Advanced Server and SQL 2000 there have been extreme performance losses.
  11. lindaflor New Member

    Response to ErrorHandler: Have you just migrated from NT/SQL 6.5 (performance very quick) to W2000/SQL 2000 (performance very slow)?
  12. ErrorHandler New Member

    Indeed, to be more specific: w2000 adv server.
  13. lindaflor New Member

    The SQL optimizer in SQL 2000 is different from SQL 6.5, hence the indexes you used in SQL 6.5 will probably be unsuitable for SQL 2000.

    Also, you will have to revisit your stored procedures to see which ones suffer from performance issues. This could be due to temporary tables being defined halfway down a sproc, causing recompilation, etc.
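
    The mid-procedure recompilation pattern might look like the following hypothetical procedure (the names are invented for illustration only):

    ```sql
    CREATE PROCEDURE dbo.usp_ExampleRecompile  -- hypothetical name
    AS
    BEGIN
        -- Work done before the temp table exists
        SELECT COUNT(*) FROM sysobjects

        -- Temp table created halfway down: the statements that
        -- reference it cannot be compiled up front, so SQL Server
        -- recompiles the procedure from this point on
        CREATE TABLE #work (id int NOT NULL)
        INSERT INTO #work (id) SELECT id FROM sysobjects
        SELECT COUNT(*) FROM #work
    END
    ```

    Moving all temp-table DDL to the top of the procedure (or using a table variable in SQL 2000) is the usual way to cut down these recompiles.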

    Doing the above will improve the performance of the application using SQL 2000.

  14. ErrorHandler New Member

    Thanks very much... so I should try rebuilding the indexes? Is DBCC DBREINDEX enough for that?
    There are a hell of a lot of stored procs, and they're 3rd-party software, so that's something I should pass on... The problem was that the 3rd party blamed us for the speed problems (slow server/configuration issues), so they didn't do a thing. Anyway, thanks!!!!
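
    As a sketch, a fragmentation check followed by a rebuild might look like this ('Orders' is a placeholder table name; the empty index-name argument means all indexes on the table):

    ```sql
    -- Check fragmentation first (look at scan density and
    -- logical scan fragmentation in the output)
    DBCC SHOWCONTIG ('Orders')

    -- Rebuild all indexes on the table with a 90% fill factor
    DBCC DBREINDEX ('Orders', '', 90)

    -- Refresh the statistics the SQL 2000 optimizer relies on
    UPDATE STATISTICS Orders
    ```

    After a migration from 6.5, refreshing statistics can matter as much as the rebuild itself, since the new optimizer's plans depend on them.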
  15. lindaflor New Member

    DBCC DBREINDEX may help, but it won't be enough.

    You will need to get out SQL Profiler to find those poorly running queries.

    If it's 3rd-party software, take it up with them once you have gained the evidence from SQL Profiler showing that the sprocs and indexes they gave you for SQL 6.5 are not adequate for SQL 2000.
  16. ErrorHandler New Member

    OK, thanks for the help!
