SQL Server Performance Forum – Threads Archive
SQL Server out of locks
Every so often, our SQL 7.0 server starts spitting back errors complaining that it is out of locks. I let SQL Server dynamically allocate as much memory as it needs to service lock requests, so I'm at a loss what else I can do. I am profiling various queries and gradually decreasing lock requirements where possible, but with close to 600,000 lines of code, it's going to be a long time before I get it optimal. Is there anything I can do in the meantime to increase the lock limit? Thank you.
In SQL Server 7.0, the "locks" SQL Server option defaults to "0", which tells SQL Server to dynamically manage locks for you. Each lock takes 96 bytes. When SQL Server manages locks dynamically, no more than 40% of the buffer cache is ever used for locks. So if you are getting out-of-locks errors and your "locks" setting is "0", this indicates that you have bounced against your wall of available memory. One option is to add more memory to your server. Another option, but one that I don't recommend, is to override the "locks" setting with a value that requires more than 40% of your buffer cache. While this may resolve your locking errors, it will reduce the amount of memory left in the buffer cache, hurting your server's performance. Another option is what you are doing now, optimizing the code, but as you said, this can take a lot of time. If you haven't done so already, you can use Profiler to track locking and see what in your code is causing the most problems, helping you to establish good priorities.
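As a rough sketch of inspecting locks and applying the override Brad advises against (the 96-bytes-per-lock figure is from his reply; "locks" is an advanced option in 7.0, so "show advanced options" may need to be enabled first, and the value 100,000 here is purely illustrative):

```sql
-- List currently held and requested locks, one row per lock, by spid.
EXEC sp_lock;

-- Expose advanced options so the 'locks' setting is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- View the current 'locks' setting; 0 = dynamic lock management.
EXEC sp_configure 'locks';

-- Override dynamic management with a fixed ceiling of 100,000 locks.
-- At roughly 96 bytes per lock, that reserves about 9.6 MB up front --
-- memory that is no longer available to the buffer cache.
EXEC sp_configure 'locks', 100000;
RECONFIGURE;
```

Setting "locks" back to 0 restores dynamic management.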
Brad M. McGehee
What dictates the actual size of the buffer cache itself?
Whilst I agree with your advice that it would be bad to exceed the recommended 40% set aside for locking, is there an opportunity to increase the buffer cache size itself, hence increasing the value that 40% equates to? I suspect there isn't, as it may be a hardware-based cache. Thanks
The SQL Server buffer cache is by default dynamically managed by SQL Server, and it is a subset of the total memory used by SQL Server. By default, SQL Server will take as much RAM as it needs in order to optimize the size of the buffer cache (assuming there is enough RAM for the buffer cache size that SQL Server wants). I recommend that you allow SQL Server to manage memory dynamically. You can override the defaults and tell SQL Server how much RAM to allocate (minimum and maximum amounts), and of the RAM specified, SQL Server will always use as much as it is allocated to produce as big a buffer cache as it can. The buffer cache is used for a variety of things, including data pages, index pages, execution plans, locks, and more. In SQL Server 7.0 and 2000, you can't specify the minimum and maximum size of the buffer cache itself (you could in SQL Server 6.5), only the total amount of RAM available to SQL Server. If your buffer cache hit ratio is 99% or above, you have plenty of buffer cache. So if you were to raise the number of locks above the default value and the buffer cache hit ratio still remained 99% or higher, performance should not suffer. But if you raise the amount of memory devoted to locking and then see the buffer cache hit ratio drop, performance will be negatively affected. If this is the case, you may be able to overcome it by adding more RAM to your server. SQL Server will take advantage of the extra RAM and make your buffer cache larger, increasing your buffer cache hit ratio.
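A sketch of checking the hit ratio and setting the memory bounds described above (counter names are assumed from the 7.0 sysperfinfo table, and the memory values are illustrative; verify both against your own build):

```sql
-- Buffer cache hit ratio, computed from the raw performance counters.
-- The ratio counter must be divided by its companion 'base' counter.
SELECT (hits.cntr_value * 100.0) / base.cntr_value
         AS buffer_cache_hit_ratio_pct
FROM master..sysperfinfo AS hits
JOIN master..sysperfinfo AS base
  ON base.counter_name = 'Buffer cache hit ratio base'
WHERE hits.counter_name = 'Buffer cache hit ratio';

-- Optionally bound the total RAM SQL Server may use (values in MB).
-- These are the server-wide limits; the buffer cache size itself
-- cannot be set directly in 7.0, as noted above.
EXEC sp_configure 'min server memory', 128;
EXEC sp_configure 'max server memory', 512;
RECONFIGURE;
```

If the ratio stays at 99% or above after raising the "locks" setting, the lock memory is not crowding out the buffer cache.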
Brad M. McGehee