SQL Server Configuration Performance Checklist

Nested Triggers

This configuration option does affect performance, but not in the conventional way. By default, the “nested triggers” option is set to “1”, which means that nested triggers (triggers fired by other triggers, cascading up to a maximum of 32 levels) are allowed to run. If you change this setting to “0”, nested triggers are not permitted. Obviously, by not allowing nested triggers, overall performance can be improved, but at the cost of application flexibility.

This setting should be left at its default value, unless you want to prevent developers from using nested triggers. Also, keep in mind that some third-party applications depend on nested triggers and could fail if you turn them off.
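
If you want to verify or reset this option during your audit, a minimal T-SQL sketch follows; it assumes you have permission to run sp_configure:

-- Display the current and configured values of "nested triggers"
EXEC sp_configure 'nested triggers';

-- Restore the default of 1 (nested triggers allowed) if it has been changed
EXEC sp_configure 'nested triggers', 1;
RECONFIGURE;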

Network Packet Size (B)

“Network packet size” determines the size of the packets SQL Server uses when it talks to clients over a network. The default value is 4096 bytes, with a legal range from a minimum of 512 bytes to a maximum based on the largest amount of data the network protocol you are using supports.

In theory, changing this value can boost performance if the size of the packet more or less matches the size of the data in the packet. For example, if the data is small, less than 512 bytes on average, changing the default value of 4096 bytes to 512 bytes can boost performance. Or, if you are doing a lot of data movement, such as with bulk loads, or if you deal with a lot of TEXT or IMAGE data, then increasing the packet size beyond 4096 bytes means it takes fewer packets to send the data, resulting in less overhead and better performance.

In theory, this sounds great. In reality, you will see little, if any, performance boost. This is because there is no such thing as an average data size: in some cases data is small, and in other cases it is very large. Because of this, changing the default value of the “network packet size” is generally not very useful.

As a part of your audit, carefully question any value for this setting other than the default. If you can’t get a good answer, change it back.
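
Resetting the option back to its default is straightforward. Here is a minimal sketch using sp_configure; “network packet size (B)” is an advanced option, so “show advanced options” must be enabled first:

-- Allow advanced options to be viewed and changed
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Check the current value, then restore the 4096-byte default
EXEC sp_configure 'network packet size (B)';
EXEC sp_configure 'network packet size (B)', 4096;
RECONFIGURE;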

Open Objects

“Open objects” refers to the total number of objects (such as tables, views, rules, defaults, triggers, and stored procedures) that can be open at the same time in SQL Server. The default setting for this option, which is “0”, tells SQL Server to dynamically increase or decrease this number in order to obtain the best overall performance of the server.

In rare cases, generally when server memory is fully used, it is possible to get a message telling you that you have exceeded the number of open objects available. The best solution to this is to increase the server’s memory, or to reduce the load on the server, such as by reducing the number of databases it maintains.

If neither of the above options is practical, you can manually configure the maximum number of available open objects by setting the “open objects” value to a sufficiently high number. The problem with this is twofold. First, determining the proper value will take much trial and error. Second, any memory allocated to open objects is taken away from other SQL Server needs, potentially hurting the server’s overall performance. Sure, your application will now run when you change this setting, but it will run more slowly. Avoid changing this setting.

As you are performing your audit, if you find any setting other than “0”, then one of three things is true: someone made a mistake and it needs to be corrected; the server’s hardware is too small and more RAM needs to be added; or some of this server’s work needs to be moved to another, less busy, server.
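
To check whether the dynamic default is still in place, you can inspect the option with sp_configure. A minimal sketch, assuming the SQL Server 2000 option name “open objects” (an advanced option):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- A config_value other than 0 means someone has overridden the dynamic behavior
EXEC sp_configure 'open objects';

-- Restore the default so SQL Server manages open objects dynamically
EXEC sp_configure 'open objects', 0;
RECONFIGURE;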

Priority Boost

By default, SQL Server processes run at the same priority as any other applications on a server. In other words, no single application process has a higher priority than another when it comes to competing for CPU cycles.

The “priority boost” configuration option allows you to change this. The default value of “0” means that the priority of SQL Server processes is the same as that of all other application processes. If you change it to “1”, SQL Server runs at a higher priority than other application processes. In essence, this means that SQL Server gets first claim on CPU cycles over other applications running on the same server. But does this really boost the performance of SQL Server?

Let’s look at a couple of scenarios. First, let’s assume a server runs not only SQL Server, but other applications (not recommended for best performance, but a real-world possibility), and that there is plenty of CPU power available. If this is the case, and you give SQL Server a priority boost, what happens? Not much. If there is plenty of CPU power available, a priority boost doesn’t mean much. Sure, SQL Server might gain a few milliseconds here and there compared to the other applications, but I doubt you would be able to notice the difference.

Now let’s look at a similar scenario, but assume that CPU power is virtually exhausted. If SQL Server is given a priority boost in this case, it will get its work done faster, but only at the cost of slowing down the other applications. If this is what you want, OK. But a better solution would be to add CPU power to the server, or to reduce the server’s load.

But what if SQL Server is running on a dedicated server with no other applications, and there is plenty of excess CPU power available? In this case, boosting the priority will not gain a thing, as nothing (other than parts of the operating system) is competing for CPU cycles, and besides, there are plenty of extra cycles to go around.

And last of all, if SQL Server is on a dedicated server and the CPU is maxed out, giving it a priority boost is at best a zero-sum game: parts of the operating system could be starved of CPU time and negatively affected, while the gain for SQL Server, if any, will be very small.

As you can see, this option is not worth the effort. In fact, Microsoft has documented several problems related to using this option, which makes this option even less desirable to try.

If you find this option turned on in your audit, question its purpose. If you are not currently having any problems with it on, you can probably leave it on without issues, but I would still recommend setting it back to its default.
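
Setting it back is simple. A minimal sketch follows; “priority boost” is an advanced option, and the change takes effect only after the SQL Server service is restarted:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Return "priority boost" to its default of 0 (normal process priority)
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
-- Remember: the new value is not applied until SQL Server is restarted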

Query Governor Cost Limit

The “query governor cost limit” option allows you to limit the maximum length of time a query can run, and it is one of the few SQL Server configuration option changes that I endorse. For example, let’s say that some of the users of your server like to run very long-running queries that really hurt the performance of your server. By setting this option, you could prevent them from running any query that would exceed, say, 300 seconds (or whatever number you pick). The default value for this setting is “0”, which means that there is no limit to how long a query can run.

The value you set for this option is approximate, and is based on how long the Query Optimizer estimates the query will run. If the estimate is more than the time you have specified, the query won’t run at all, producing an error instead. This can save a lot of valuable server resources.

On the other hand, users can get really unhappy with you if they can’t run the queries they need in order to do their jobs. What you might consider doing is helping those users write more efficient queries. That way, everyone will be happy.

Unlike most of my other suggestions, if your audit turns up a value here other than “0”, great. As long as users aren’t complaining, this is a good deal. In fact, if this setting is set to “0”, consider adding a value and see what happens. Just don’t make it too small. You might start with a value of about 600 seconds; if that is OK, try 500 seconds, and so on, until users start complaining, then back off.
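
For example, to try the 600-second starting point suggested above, something like the following sketch should work; “query governor cost limit” is an advanced option:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Refuse to run any query the optimizer estimates will exceed 600 seconds
EXEC sp_configure 'query governor cost limit', 600;
RECONFIGURE;

The limit can also be set for a single connection with SET QUERY_GOVERNOR_COST_LIMIT, which is handy for testing a value before applying it server-wide.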

Query Wait (s)

If SQL Server is very busy and hurting for memory, it will queue what it considers memory-intensive queries (those that use sorting or hashing) until there is enough memory available to run them. In some cases, there just isn’t enough memory to run them, and they eventually time out, producing an error message. By default, a query times out after a period equal to 25 times the Query Optimizer’s estimate of how long the query will take to run.

The best solution for such a problem is to add more memory to the server, or to reduce its load. But if that can’t be done, one option, although fraught with problems of its own, is the “query wait” configuration option. The default setting of “-1” waits for the time period described above and then causes the query to time out. If you want a longer timeout period so that queries won’t time out, you can set “query wait” to a sufficiently large number of seconds. As you might guess, you will have to determine this timeout number yourself through trial and error.

The problem with using this option is that a transaction containing a memory-intensive query may hold locks while it waits, which can cause deadlocks or other locking contention problems, and in the end that may be a bigger problem than the query timing out. Because of this, changing this option is not recommended.

If you find a non-default value in your audit, find out why. If there is no good reason to keep it, change it back to the default value. But, if someone has thought this out thoroughly, and if you cannot detect any locking issues, then consider leaving this option as is.
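
If you do need to inspect or reset the option, a minimal sketch follows; the sp_configure name is “query wait (s)”, and it is an advanced option:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- The default of -1 means: wait 25 times the estimated query cost, then time out
EXEC sp_configure 'query wait (s)';

-- Restore the default if the non-default value can't be justified
EXEC sp_configure 'query wait (s)', -1;
RECONFIGURE;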

