Analyzing SQL Server 2000 Data Caching

CREATE PROCEDURE prCheckRecentCache @dbname SYSNAME
AS

DECLARE @sql VARCHAR(2000)

SELECT @sql =
'SELECT so.name AS [Object],
CASE m.indexid
WHEN 0 THEN ''Data''
WHEN 1 THEN ''Clustered Index''
ELSE ''Nonclustered Index ID: '' + CONVERT(VARCHAR, m.indexid)
END AS [Cache Data Type],
CONVERT(DECIMAL(9,2), 8 * m.Buffers / 1024.00) AS [MB Cached],
m.InsertDate, db_name(m.dbid) AS [Database]
FROM pubs..MemusageRecord m
JOIN ' + @dbname + '..sysobjects so
ON m.objectid = so.id
WHERE m.InsertDate = (SELECT MAX(InsertDate) FROM pubs..MemusageRecord)
AND m.dbid = db_id(''' + @dbname + ''')
ORDER BY [MB Cached] DESC'

EXECUTE (@sql)

To use this procedure, just execute it with the name of the database you’re interested in as a parameter, for example:

EXEC prCheckRecentCache 'PRODDB'

This procedure in itself is of limited usefulness, as it only returns the most recent cache information for a single database. It is supplied mainly for illustrative purposes, so that you can modify and adapt it for your own use.

Since this data really needs pivoting to be analyzed properly, it is better to export it to Excel and manipulate it there, as Excel is a better tool for this kind of analysis. You may want to create charts tracking how much data from certain tables of interest is cached over time, as you run queries of interest on the server.
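As a starting point for that export, the snapshot table can be pivoted in T-SQL before it ever reaches Excel. The sketch below produces one row per snapshot time with a column per table; the table names (`Orders`, `Invoices`) and the database name `PRODDB` are examples only, and the `MemusageRecord` schema is assumed to match the procedure above.

```sql
-- Sketch: pivot cache snapshots into one row per snapshot time,
-- one column per table of interest, ready for export to Excel.
-- 'Orders', 'Invoices' and 'PRODDB' are placeholder names.
SELECT m.InsertDate,
       SUM(CASE WHEN so.name = 'Orders'
                THEN 8 * m.Buffers / 1024.00 ELSE 0 END) AS [Orders MB],
       SUM(CASE WHEN so.name = 'Invoices'
                THEN 8 * m.Buffers / 1024.00 ELSE 0 END) AS [Invoices MB]
FROM pubs..MemusageRecord m
JOIN PRODDB..sysobjects so
  ON m.objectid = so.id
WHERE m.dbid = db_id('PRODDB')
GROUP BY m.InsertDate
ORDER BY m.InsertDate
```

Each `CASE` expression acts as one "column" of the pivot; add one per table you want to chart.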

So Now That I Have the Data, What Does It Mean, and What Can I Do with It?

This data is going to be most useful if you have large databases, certainly larger than will fit in RAM. It will also be of interest if you have several large databases on the same server. You can use this captured information to determine, at any given time:

  • how much data cache each database has in total

  • which objects, and which of their indexes, are being cached

  • how data cache is changing with time and query load

  • how the makeup of data in the cache correlates with query execution times

  • the rate at which objects age out of cache, and which ones tend to stay in RAM all the time

  • the impact of big queries on cache and vice versa
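The first of these, total data cache per database, falls straight out of the snapshot table. The following is a sketch only, assuming the same `pubs..MemusageRecord` table and columns used by the procedure above:

```sql
-- Sketch: total data cache held by each database
-- at the most recent snapshot.
SELECT db_name(m.dbid) AS [Database],
       CONVERT(DECIMAL(9,2), SUM(8 * m.Buffers / 1024.00)) AS [MB Cached]
FROM pubs..MemusageRecord m
WHERE m.InsertDate = (SELECT MAX(InsertDate) FROM pubs..MemusageRecord)
GROUP BY m.dbid
ORDER BY [MB Cached] DESC
```

Dropping the `WHERE` clause and grouping by `InsertDate` as well gives the same totals over time, which answers the third question in the list.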

The reason I created this system was an investigation I conducted into the impact of running lengthy, complex reports. These reports had been performing poorly and had been severely impacting other users of the server. The database in question is approximately 70GB in size, running on an 8-processor server with 4GB of RAM. The server also hosts a full-size copy of the live database, which is used for testing and training. I wanted to gather evidence showing that keeping this second copy on the server was a bad idea, as the two databases would be competing for data cache. This could be proved by running the reports on a test server with several databases on it, and showing how a single execution of a long-running report could flush the other databases’ table data out of cache, seriously impacting the performance of queries against those databases.

Other useful insights into your server’s memory can also be gained by monitoring the cache like this. When checking on the usage of memory on my live server, I noticed a small but significant amount of memory was being consumed by two tables which contain historical data about scheduled tasks run by the application. This data is not needed for querying by the users, but has been allowed to accumulate in these tables ever since the server was first set up. However, the application’s task scheduling software constantly queries the entire table, keeping it in RAM. By archiving data out of these tables, the cache they occupied could be greatly reduced, freeing up memory for other tables.

Although you have little control over SQL Server’s data caching strategy, the insights you gain from this monitoring can be highly useful. It is most valuable when used in conjunction with other monitoring, such as Windows Performance Monitor counters, specifically SQL Server: Buffer Manager\Buffer Cache Hit Ratio, as well as more general counters such as the disk counters. You can see how the server behaves when it needs to query data that is not already in RAM, compared to how it performs when the data it needs is in cache: for example, correlating heavy disk usage with SQL Server not finding data in cache.

Tom Pullen has been a DBA for 6 years. He currently works for Oxfam, a large UK charity. He has experience in SQL Server 6.5, 7.0 and 2000. He is married with two children. Contact him at:

Copyright 2004 by the author.

