Performance Tuning SQL Server's Configuration Settings

Each time SQL Server locks a record, the lock must be stored in memory. By default, the value for the “locks” option is “0”, which means that lock memory is dynamically managed by SQL Server. Internally, SQL Server can reserve from 2% to 40% of available memory for locks. In addition, if SQL Server determines that allocating additional memory for locking could cause paging at the operating system level, it will not allocate the memory to locks, instead giving it up to the operating system.

In almost all cases, you should allow SQL Server to dynamically manage locks, leaving the default value as is. If you enter your own value for the “locks” option (legal values are from 5000 to 2147483647, measured in number of locks, not KB), then SQL Server cannot dynamically manage this portion of memory, which could cause other areas of SQL Server to experience poor performance.

If you get an error message that says you have exceeded the maximum number of locks available, you have these options:

  • Closely examine your queries to see if they are causing excessive locking. If they are, it is possible that performance is also being hurt because of a lack of concurrency in your application. It is better to fix bad queries than it is to allocate too much memory to tracking locks.
  • Reduce the number of applications running on this server.
  • Add more RAM to your server.
  • Boost the number of locks to a high value (based on trial and error). This is the least desirable option as giving memory to locks prevents it from being used by SQL Server for more beneficial purposes.
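Before resorting to a manual value, it can help to see how many locks are currently held and to confirm the option is still at its dynamic default. A sketch using sp_configure (the syslockinfo query assumes the SQL Server 2000 system table; names may differ on other versions):

```sql
-- How many locks are currently held (SQL Server 2000 system table)
SELECT COUNT(*) AS current_locks FROM master.dbo.syslockinfo;

-- "locks" is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Verify the option is still at its dynamic default of 0
EXEC sp_configure 'locks';

-- If it was changed, restore dynamic lock management
-- (takes effect after the SQL Server service restarts)
EXEC sp_configure 'locks', 0;
RECONFIGURE;
```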

[7.0, 2000] Updated 1-28-2005

*****

The “max text repl size” setting is used to specify the maximum size of text or image data that can be inserted into a replicated column in a single physical INSERT, UPDATE, WRITETEXT, or UPDATETEXT transaction. If you don’t use replication, or if you don’t replicate text or image data, then this setting should not be changed.

The default value is 65536 bytes, the minimum value is 0, and the maximum value is 2147483647. If you do heavy replication of text or image data, consider increasing this value, but only if individual text or image values exceed 64K. As with most of these settings, you will have to experiment with various values to see what works best for your particular circumstances. [7.0, 2000] Updated 1-28-2005
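If you do need to raise the limit, the change is made with sp_configure; the 131072-byte value below is purely an illustration:

```sql
-- Raise the replication limit to 128 KB (example value only)
EXEC sp_configure 'max text repl size', 131072;
RECONFIGURE;
```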

*****

The “max worker threads” SQL Server configuration setting is used to determine how many worker threads are made available to the sqlservr.exe process from the operating system. The default value is 255 worker threads. SQL Server itself uses some threads, but they will be ignored for this discussion; the focus here is on threads created for the benefit of users.

If there are more than 255 user connections, then SQL Server will use thread pooling, where more than one user connection shares a single worker thread. Although thread pooling reduces the amount of system resources used by SQL Server, it can also increase contention between the user connections for access to SQL Server, hurting performance.

To find out how many worker threads your SQL Server is using, check the number of connections that are currently made to your server using Enterprise Manager. For each SQL Server connection, one worker thread is created, up to the total number of worker threads specified in the “max worker threads” setting. For example, if there are 100 connections, then 100 worker threads would be employed. But if there are 500 connections and only 255 worker threads available, then only 255 worker threads are used, with connections sharing the limited worker threads.

Assuming there is enough RAM in your server, for best performance, you will want to set the “max worker threads” setting to a value equal to the maximum number of user connections your server ever experiences, plus 5. But there are some limitations to this general recommendation, as we will soon see.

As has already been mentioned, the default value for the “max worker threads” is 255. If your server will never experience over 255 connections, then this setting should not be changed from its default value. This is because worker threads are only created when needed. If there are only 50 connections to the server, there will only be that many worker threads, not 255 (the default value).

If you generally have over 255 connections to your server, and if “max worker threads” is set to the default value of 255, what will happen is that SQL Server will begin thread pooling. This means that a single thread will be responsible for more than one connection. Now comes the dilemma. If you increase the “max worker threads” so that there is one thread for each connection, SQL Server will take up additional resources (mostly memory). If you have plenty of RAM in your server that is not being used by SQL Server or any other application, then boosting the “max worker threads” can help boost the performance of SQL Server.

But if you don’t have any extra RAM available, then adding more worker threads can hurt SQL Server’s performance. In this case, allowing SQL Server to use thread pooling offers better performance, because thread pooling uses fewer resources than dedicating a thread to each connection. On the downside, thread pooling can introduce resource contention between connections. For example, two connections sharing a thread can conflict when both want to perform some task at the exact same time (which can’t be done, because a single thread can only service one connection at a time).

So what do you do? In brief, if your server normally has fewer than 255 connections, leave this setting at its default value. If your server has more than 255 connections and you have extra RAM, then consider bumping up the “max worker threads” setting to the number of connections plus 5. But if you don’t have any extra RAM, then leave the setting at its default value. For SQL Servers with thousands of connections, you will have to experiment to find the fine line between the extra resources used by additional worker threads and the contention between connections all fighting for the same worker thread.
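One way to gauge the connection count before adjusting the setting is to query the sysprocesses system table; the “plus 5” figure below just follows the rule of thumb above, and the 400-connection peak is a hypothetical example:

```sql
-- Count current user connections (spid > 50 filters out system
-- processes on SQL Server 2000; adjust for your version)
SELECT COUNT(*) AS user_connections
FROM master.dbo.sysprocesses
WHERE spid > 50;

-- "max worker threads" is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example: server peaks at about 400 connections, so 400 + 5
EXEC sp_configure 'max worker threads', 405;
RECONFIGURE;  -- takes effect after the SQL Server service restarts
```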

As you might expect, before using this setting in production, you will want to test your server’s performance before and after the change to see if SQL Server benefited, or was hurt, from the change. [7.0, 2000] Updated 1-28-2005

*****

When a query runs, SQL Server does its best to allocate the optimum amount of memory for it to run efficiently and quickly. By default, the “minimum memory per query” setting guarantees each query at least 1024 KB of memory to run. The setting can be set from 512 to 2147483647 KB.

If a query needs more memory to run efficiently, and if it is available, then SQL Server automatically assigns more memory to the query. Because of this, changing the value of the “minimum memory per query” default setting is generally not advised.

In some cases, if your SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted if you increase the “minimum memory per query” setting to a higher value, such as 2048 KB, or perhaps a little higher. As long as there is “excess” memory available in the server (essentially, RAM that is not being used by SQL Server), then boosting this setting can help overall SQL Server performance. But if there is no excess memory available, increasing the amount of memory for this setting is more likely to hurt overall performance, not help it. [7.0, 2000] Updated 2-25-2005
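If you decide to experiment with a higher floor on a server with memory to spare, the change is made with sp_configure; 2048 KB is simply the example value discussed above:

```sql
-- "min memory per query" is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the per-query memory floor from 1024 KB to 2048 KB (example)
EXEC sp_configure 'min memory per query', 2048;
RECONFIGURE;
```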

*****

This configuration option affects performance, but not in the conventional way. By default, the “nested triggers” option is set to “1”. This means that nested triggers (triggers fired by other triggers, which can cascade up to a maximum depth of 32 levels) are allowed to run. If you change this setting to “0”, then nested triggers are not permitted. Obviously, by not allowing nested triggers, overall performance can be improved, but at the cost of application flexibility.

This setting should be left at its default value, unless you want to prevent developers from using nested triggers. [7.0, 2000] Updated 2-25-2005
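Should you choose to disable nested triggers, it is a one-line change:

```sql
-- 0 = nested triggers not permitted; 1 (the default) = permitted
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
```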

*****

“Open objects” refers to the total number of objects (such as tables, views, rules, defaults, triggers, and stored procedures) that can be open at the same time in SQL Server. The default setting for this option, which is “0”, tells SQL Server to dynamically increase or decrease this number in order to obtain the best overall performance of the server.

In rare cases, generally when physical server memory is fully used, it is possible to get a message telling you that you have exceeded the number of open objects available. The best solution to this is to increase the server’s memory, or to reduce the load on the server, such as reducing the number of databases maintained on the server.

If neither of the above options is practical, you can manually configure the maximum number of available open objects by setting the “open objects” value to an appropriately high value. The problem with this is twofold. First, determining the proper value will take trial and error. Second, any memory allocated to open objects will be taken away from other SQL Server needs, hurting the server’s overall performance. Sure, your application will run once you change this setting, but it could run slower if the server is under memory pressure. Avoid changing this setting. [7.0, 2000] Updated 2-25-2005
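If you are forced into a manual value despite the caveats above, the pattern is the same as for the other options; the 40000 below is purely illustrative, and only trial and error will find the right number for your server:

```sql
-- "open objects" is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'open objects', 40000;  -- illustrative value only
RECONFIGURE;
```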

*****

The “query governor cost limit” option allows you to limit the maximum length of time a query can run on a server, and is one of the few SQL Server configuration options whose use I endorse. For example, let’s say that some users of your server like to run very long-running queries that really hurt the performance of your server. By setting this option, you could prevent them from running any queries that exceeded, say, 300 seconds (or whatever number you pick). The default value for this setting is “0”, which means that there is no limit to how long a query can run.

The value you set for this option is approximate, and is based on how long the Query Optimizer estimates the query will run. If the estimate is more than the time you have specified, the query won’t run at all, producing an error instead. This can save a lot of valuable server resources.

On the other hand, users can get really unhappy with you if they can’t run the queries they have to run in order to do their jobs. What you might consider doing is helping those users write more efficient queries. That way, everyone will be happy.

If this setting is set to “0”, consider adding a value here and see what happens. Just don’t make it too small. You might consider starting with a value of about 600 seconds and seeing what happens. If that is OK, then try 500 seconds, and so on, until users start complaining. [7.0, 2000] Updated 2-25-2005
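The limit can be set server-wide with sp_configure, or tried out per connection first with SET QUERY_GOVERNOR_COST_LIMIT, which is a safer way to experiment before committing the whole server:

```sql
-- Server-wide: refuse queries estimated to run longer than 600 seconds
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 600;
RECONFIGURE;

-- Per-connection: affects only the current session, handy for testing
SET QUERY_GOVERNOR_COST_LIMIT 600;
```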

*****

If SQL Server is very busy and is hurting for memory resources, it will queue what it considers memory-intensive queries (those that use sorting or hashing) until there is enough memory available to run them. In some cases, there just isn’t enough memory to run them and they eventually time out, producing an error message. By default, a query will time out after a time period equal to 25 times the estimated amount of time the Query Optimizer thinks it will take for the query to run.

The best solution for such a problem is to add more memory to the server, or to reduce its load. But if that can’t be done, one option, although fraught with problems of its own, is to use the “query wait” configuration option. The default setting for this option is “-1”, which waits the time period described above. If you want the time out period to be greater so that queries won’t time out, you can set the “query wait” time to a large enough number. As you might guess, you will have to determine this time out number yourself through trial and error.
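If you do decide to override the timeout, the value is specified in seconds (and setting it back to -1 restores the default behavior described above); the 1200 below is an arbitrary example:

```sql
-- "query wait" is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example: let memory-intensive queries wait up to 20 minutes
EXEC sp_configure 'query wait', 1200;
RECONFIGURE;
```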

The problem with using this option is that a transaction containing the intensive query may be holding locks while it waits, which can cause deadlocks or other locking contention problems, which in the end may be a bigger problem than the query timing out. Because of this, changing this option is not recommended. [7.0, 2000] Updated 2-25-2005

*****

SQL Server has the ability, if properly configured, to look for stored procedures to run automatically when the mssqlserver service starts. This can be handy if you want a particular action to occur on startup, such as the loading of a specific stored procedure into cache so that it is already there when users begin accessing the server.

By default, the “scan for startup procs” option is set to “0”, which means that a scan for stored procedures is not done at startup. If you don’t have any startup stored procedures, then this is the obvious setting. There is no point spending resources looking for stored procedures that don’t exist.

But if you do have one or more stored procedures you want to execute on server startup, then this option has to be set to “1”, which turns on the startup scan. [7.0, 2000] Updated 4-4-2005
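Marking a stored procedure to run at startup is done with sp_procoption; the procedure name below is hypothetical, and the procedure must live in the master database:

```sql
-- Turn on the startup scan
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'scan for startup procs', 1;
RECONFIGURE;

-- Mark a procedure (hypothetical name) to run when the service starts;
-- it must be in the master database and take no input parameters
EXEC sp_procoption 'usp_WarmCache', 'startup', 'true';
```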

*****

The “set working set size” option is used when you want to fix the amount of memory SQL Server reserves as its working set when it starts. This option also prevents the operating system from paging out SQL Server’s memory.

By default, this setting is set to “0”, which means that this option is not used. To turn this option on, it must be set to “1”, and the “min server memory” and “max server memory” options must both be set to the same value. This is the value used to reserve the working set size.
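Putting the three required pieces together looks like this; the 512 MB figure is just an example for a dedicated server, not a recommendation:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- min and max server memory must match (values in MB; 512 is an example)
EXEC sp_configure 'min server memory', 512;
EXEC sp_configure 'max server memory', 512;

-- Reserve that amount as a fixed working set
-- (takes effect after the SQL Server service restarts)
EXEC sp_configure 'set working set size', 1;
RECONFIGURE;
```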

As with most options, this one should not generally be necessary. The only time you might want to consider it is if the server is dedicated to SQL Server, has a very heavy load, and has sufficient memory available. Even then, any performance boost gained will be minimal, and you risk the potential of not leaving enough memory to the operating system. Testing is key to the successful use of this option. [7.0, 2000] Updated 4-4-2005
