SQL Server 2005 Server Configuration Performance Audit Checklist Part 2
Each time SQL Server locks a record, the lock must be stored in memory. By default, the value for the “locks” option is “0”, which means that lock memory is dynamically managed by SQL Server. Internally, SQL Server can reserve up to 60% of available memory for locks. In addition, if SQL Server determines that allocating memory for locking could cause paging at the operating system level, it will not allocate the memory to locks, instead giving it up to the operating system in order to prevent paging.
In almost all cases, you should allow SQL Server to dynamically manage locks, leaving the default value as is. If you enter your own value for lock memory (legal values are from 5000 to 2147483647 KB), then SQL Server cannot dynamically manage this portion of memory. In addition, no more memory than what you have specified can be used for locking, which may cause lock memory to run out under some circumstances.
If you get an error message that says you have exceeded the maximum number of locks available, you have these options:
• Closely examine your queries to see if they are causing excessive locking. If they are, it is possible that performance is also being hurt because of a lack of concurrency in your application. It is better to fix bad queries than it is to allocate additional memory to tracking locks.
• Reduce the number of applications running on the server.
• Add more RAM to your server.
• Boost the number of locks to a higher value (based on trial and error). This is the least desirable option as giving memory to locks prevents it from being used by SQL Server for other purposes, as needed.
Do your best to resist using this option. If you find in your audit that this setting is set to a value other than the default, find out why. If you can’t find out why, or if the reason is poor, change it back to the default value.
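As a quick reference, the audit check and the reset described above can be sketched in T-SQL (“locks” is an advanced option, so advanced options must be made visible first):

```sql
-- Make advanced options visible so "locks" can be inspected.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the configured and running values for the "locks" option.
EXEC sp_configure 'locks';

-- Restore dynamic lock management (the default of 0) if a non-default
-- value was found and cannot be justified.
EXEC sp_configure 'locks', 0;
RECONFIGURE;
```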
Max Degree of Parallelism
This option allows you to specify if parallelism is turned on, turned off, or only turned on for some CPUs, but not for all CPUs in your server. Parallelism refers to the ability of the Query Optimizer to use more than a single CPU to execute a single query. By default, parallelism is turned on and can use as many CPUs as there are in the server (unless this has been reduced due to the affinity mask option). If your server has only one CPU, the “max degree of parallelism” value is ignored.
The default for this option is “0”, which means that parallelism is turned on for all available CPUs. If you change this setting to “1”, then parallelism is effectively turned off, because a query can use only a single CPU. Any higher value specifies how many CPUs can be used for parallelism. For example, if your server has 8 CPUs and you only want parallelism to use up to 4 of them, you can specify a value of 4 for this option.
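The 8-CPU scenario above can be configured with sp_configure (“max degree of parallelism” is an advanced option; the value 4 is just the example from the text):

```sql
-- Make advanced options visible first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Limit parallel query plans to at most 4 of the server's 8 CPUs.
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```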
If parallelism is turned on, as it is by default if you have multiple CPUs, then the query optimizer will evaluate each query for the possibility of using parallelism, which takes a little overhead. On many OLTP servers, the nature of the queries being run often doesn’t lend itself to parallel execution. Examples of this include standard SELECT, INSERT, UPDATE, and DELETE statements. In such cases, the query optimizer is wasting its time evaluating each query to see if it can take advantage of parallelism. If you know that your queries will probably never benefit from parallelism, you can save a little overhead by turning this feature off, so queries aren’t evaluated for it. This is subject to the “cost threshold for parallelism” setting previously discussed.
Of course, if the nature of the queries that are run on your SQL Server can take advantage of parallelism, you will not want to turn parallelism off. For example, if your OLTP server runs many correlated subqueries, or other complex queries, then you will probably want to leave parallelism on. You will want to test this setting to see if making this particular change will help, or hurt, your SQL Server’s performance in your unique operating environment.
In most cases, because most servers run both OLTP and OLAP queries, parallelism should be kept on. As part of your performance audit, if you find parallelism turned off, or restricted, find out why. You will also want to determine if the server is virtually all OLTP-oriented. If so, then turning off parallelism might be justified, although you will want to thoroughly test this to see if it helps or hurts overall SQL Server performance. But if the server runs mixed OLTP and OLAP, or mostly OLAP queries, then parallelism should generally be left on for best overall performance.
There is one more complication to this setting. In some cases, certain queries, even long-running ones, just don’t run efficiently using parallelism. In fact, they can take much more time to run in parallel than they do serially. If you discover any such queries, you can use the MAXDOP query hint to turn off parallelism for those problem queries, while still keeping parallelism turned on for all other queries.
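As a sketch, the MAXDOP query hint looks like this (the table and column names are illustrative only):

```sql
-- Force this one query to run serially, while server-wide
-- parallelism remains on for all other queries.
SELECT CustomerID, SUM(OrderTotal) AS TotalSales
FROM dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 1);
```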
Max Server Memory (MB) & Min Server Memory (MB)
For best SQL Server performance, you want to dedicate your SQL Servers to only running SQL Server, not other applications. And in most cases, the settings for the “maximum server memory” and the “minimum server memory” should be left to their default values. This is because the default values allow SQL Server to dynamically allocate memory in the server for the best overall optimum performance. If you “hard code” a minimum or maximum memory setting, you risk hurting SQL Server’s performance.
On the other hand, if SQL Server cannot be dedicated to its own physical server (other applications run on the same physical server along with SQL Server) you might want to consider changing either the minimum or maximum memory values, although this is generally not required.
Let’s take a closer look at each of these two settings.
The “maximum server memory” setting, when set to the default value of 2147483647 (in MB), tells SQL Server to manage the use of memory dynamically, and if it needs it, to use as much RAM as is available (while leaving some memory for the operating system).
If you want SQL Server to not use all of the available RAM in the server, you can manually set the maximum amount of memory SQL Server can use by specifying a number between 4 (the lowest value you can enter) and the maximum amount of RAM in your server (but don’t allocate all the RAM to SQL Server, as the operating system needs some RAM too).
Only in cases when SQL Server has to share memory with other applications on the same server, or when you want to artificially keep SQL Server from using all of the RAM available to it, would you want to change the default value. For example, if your “other” application(s) are more important than SQL Server’s performance, then you can restrain SQL Server’s performance if you want by restricting how much RAM it can use.
There are also two potential performance issues you can create if you do attempt to set the “maximum server memory” setting manually. First, if you allocate too much memory to SQL Server, and not enough for other applications or the operating system, then the operating system may have no choice but to begin excessive paging, which will slow performance of your server. Also, if you are using the Full-Text Search service, you must also leave plenty of memory for its use. Its memory is not dynamically allocated like the rest of SQL Server’s memory, and there must be enough available memory for it to run properly.
The “min server memory” setting, when set to the default value of 0 (in MB), tells SQL Server to manage the use of memory dynamically. This means that SQL Server will start allocating memory as is needed, and the minimum amount of RAM used can vary as SQL Server’s needs vary.
If you change the “min server memory” setting to a value other than the default of 0, this does not mean that SQL Server immediately allocates that much memory at startup, as many people assume. Instead, once SQL Server’s memory usage grows to reach the specified minimum (because the memory is needed), its allocation will never drop below that minimum.
For example, if you specify a minimum value of 100 MB and then restart SQL Server, SQL Server will not immediately use 100 MB of RAM. Instead, it will only take as much as it needs. If it never needs 100 MB, then the minimum is never reached. But once SQL Server’s memory usage does exceed the specified 100 MB, that 100 MB becomes the floor below which SQL Server’s memory allocation will not drop, even if the memory is later no longer needed. Because of this behavior, there is little reason to change the “min server memory” setting to any value other than its default.
If your SQL Server is dedicated, there is no reason to use the “min server memory” setting at all. If you are running other applications on the same server as SQL Server, there might be a very small benefit of changing this setting to a minimum figure, but it would be hard to determine what this value should be, and the overall performance benefit would be negligible.
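If you do decide to cap SQL Server’s memory on a shared server, the change can be sketched as follows (the 6144 MB figure is purely hypothetical; pick a value appropriate to your server’s RAM and workload):

```sql
-- Memory settings are advanced options.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Hypothetical shared server with 8 GB of RAM: cap SQL Server at
-- 6144 MB, leaving the rest for the OS and other applications.
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;

-- Leave "min server memory" at its default of 0.
EXEC sp_configure 'min server memory (MB)', 0;
RECONFIGURE;
```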
If you find in your audit that these settings are set to values other than the defaults, find out why. If you can’t find out why, or if the reason is poor, change them back to their default values.
Max Text Repl Size
The “max text repl size” setting is used to specify the maximum size of text or image data that can be inserted into a replicated column in a single physical INSERT, UPDATE, WRITETEXT, or UPDATETEXT transaction. If you don’t use replication, or if you don’t replicate text or image data, then this setting should not be changed.
The default value is 65536, the minimum value is 0, and the maximum value is 2147483647 (in bytes). If you do heavy replication of text or image data, you might want to consider increasing this value only if the size of this data exceeds 64K. But as with most of these settings, you will have to experiment with various values to see what works best for your particular circumstances.
As part of your audit, if you don’t use replication, then the only correct value here is the default value. If the default value has been changed, you need to investigate if text or image data is being replicated. If not, or if the replicated data is less than 64K, then change it back to the default value.
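If your audit does find a justified need to replicate text or image data larger than 64 KB, the change might look like this (the 128 KB value is only an example to experiment with):

```sql
-- Raise the maximum replicated text/image size from the default
-- 65536 bytes (64 KB) to 131072 bytes (128 KB).
EXEC sp_configure 'max text repl size', 131072;
RECONFIGURE;
```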
Max Worker Threads
The “max worker threads” SQL Server configuration setting is used to determine how many worker threads are made available to the sqlservr.exe process from the operating system. Generally speaking, one worker thread is assigned to each connection to SQL Server. This includes both system and user threads. If the number of actual connections exceeds the amount of worker threads assigned by SQL Server, then thread pooling begins, which means that connections may share threads. While thread sharing does save on memory, it can hurt the overall performance of SQL Server.
In SQL Server 2005, the default value is 0 (in SQL Server 2000, it was 255). A value of 0 means that SQL Server will determine the maximum number of worker threads based on the following schedule:
Number of CPUs      Max worker threads (32-bit computer)
<= 4 processors     256
8 processors        288
16 processors       352
32 processors       480
If the number of connections exceeds the amount specified by SQL Server (see the chart above), and if the server is not memory bound, consider changing the default value of 0 to a number slightly larger than the maximum number of simultaneous connections you expect SQL Server to service. This way, thread sharing is not performed, and the overall performance of the server is boosted. Of course, if your server is currently under memory pressure, you don’t want to change the default setting for this option unless you are able to add more RAM to the server.
But if you don’t have any extra RAM available, then adding more worker threads can hurt SQL Server’s performance. In this case, allowing SQL Server to use thread pooling offers better performance (in the form of a compromise), because thread pooling uses fewer resources than dedicated threads. On the downside, thread pooling can introduce resource contention between connections. For example, two connections sharing a thread can conflict when both want to perform some task at the exact same time (which can’t be done, because a single thread can only service one connection at a time).
As you might expect, before using this setting in production, you will want to test your server’s performance before and after the change to see if SQL Server benefited, or was hurt, from the change.
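During the audit, you can compare the configured setting against the ceiling SQL Server 2005 actually computed, as in this minimal sketch (the value 600 is illustrative only):

```sql
-- The worker thread ceiling SQL Server computed at startup.
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- "max worker threads" is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: raise the limit above the expected number of
-- simultaneous connections, on a server that is not memory bound.
EXEC sp_configure 'max worker threads', 600;
RECONFIGURE;
```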
Min Memory Per Query
When a query runs, SQL Server does its best to allocate the optimum amount of memory for it to run efficiently and quickly. By default, the “minimum memory per query” setting allocates 1024 KB, as a minimum, for each query to run. The “minimum memory per query” setting can be set from 0 to 2147483647 KB.
If a query needs more memory to run efficiently, and if it is available, then SQL Server automatically assigns more memory to the query. Because of this, changing the value of the “minimum memory per query” default setting is generally not advised.
In some cases, if your SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted if you increase the “minimum memory per query” setting to a higher value, such as 2048 KB, or perhaps a little higher. As long as there is “excess” memory available in the server (essentially, RAM that is not being used by SQL Server), then boosting this setting can help overall SQL Server performance. But if there is no excess memory available, increasing the amount of memory for this setting is more likely to hurt overall performance, not help it.
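The experiment described above can be sketched as follows (2048 KB is the example value from the text; test performance before and after):

```sql
-- "min memory per query" is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the per-query minimum from the default 1024 KB to 2048 KB
-- on a server with "excess" memory available.
EXEC sp_configure 'min memory per query', 2048;
RECONFIGURE;
```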
Nested Triggers

This configuration option does affect performance, but not in the conventional way. By default, the “nested triggers” option is set to “1”. This means that nested triggers (a trigger fired by the action of another trigger, which can cascade up to a maximum limit of 32 levels) are allowed to run. If you change this setting to “0”, then nested triggers are not permitted. Obviously, by not allowing nested triggers, overall performance can be improved, but at the cost of application flexibility.
This setting should be left to its default value, unless you want to prevent developers from using nested triggers. Also, some third-party applications could fail if you turn off nested triggers, assuming they depend on them.
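If you do decide to disallow nested triggers, the change is straightforward (“nested triggers” is not an advanced option):

```sql
-- 1 (the default) allows nested triggers; 0 disallows them.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
```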
Network Packet Size (B)
“Network packet size” determines the packet size SQL Server uses when it talks to clients over a network. The default value is 4096 bytes, with a legal range from a minimum of 512 bytes, to a maximum value which is based on the maximum packet size that the network protocol you are using supports.
In theory, by changing this value, performance can be boosted if the size of the packet more or less matches the size of the data in the packet. For example, if the data moved over the wire is small, less than 512 bytes on average, changing the default value of 4096 bytes to 512 bytes can boost performance. Or, if you are doing a lot of data movement, such as with bulk loads, or if you deal with a lot of TEXT or IMAGE data, then increasing the default packet size to a number larger than 4096 bytes means it will take fewer packets to send the data, resulting in less overhead and better performance.
In theory, this sounds great. In reality, you will see little, if any, performance boost. This is because there is no such thing as an average data size. In some cases data is small, and in other cases, data is very large. Because of this, changing the default value of the “network packet size” is generally not very useful.
As a part of your audit, carefully question any value for this setting other than the default. If you can’t get a good answer, change it back.
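A minimal sketch of the audit check (4096 bytes is the default you would expect to see):

```sql
-- "network packet size (B)" is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Compare the running value against the 4096-byte default.
EXEC sp_configure 'network packet size (B)';
```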
Open Objects

This option is no longer functional in SQL Server 2005, although it has been retained for backward compatibility with older scripts. In SQL Server 2005, the number of open database objects is managed dynamically and is limited only by available memory.
Priority Boost

By default, SQL Server processes run at the same priority as any other applications on a server. In other words, no single application process has a higher priority than another when it comes to getting and receiving CPU cycles.
The “priority boost” configuration option allows you to change this. The default value for this option is “0”, which means that the priority of SQL Server processes is the same as that of all other application processes. If you change it to “1”, then SQL Server has a higher priority than other application processes. In essence, this means that SQL Server has first priority to CPU cycles over other application processes running on the same server. But does this really boost performance of SQL Server?
Let’s look at a couple of scenarios. First, let’s assume a server runs not only SQL Server, but other apps (not recommended for best performance, but a real-world possibility), and that there is plenty of CPU power available. If this is the case, and if you give SQL Server a priority boost, what happens? Not much. If there is plenty of CPU power available, a priority boost doesn’t mean much. Sure, SQL Server might gain a few milliseconds here and there as compared to the other applications, but I doubt you would be able to notice the difference.
Now let’s look at a similar scenario as above, but let’s assume that CPU power is virtually all exhausted. If this is the case, and SQL Server is given a priority boost, sure, SQL Server will now get its work done faster, but only at the cost of slowing down the other applications. If this is what you want, OK. But a better solution would be to boost CPU power on the server, or reduce the server’s load.
But what if SQL Server is running on a dedicated server with no other applications and if there is plenty of excess CPU power available? In this case, boosting the priority will not gain a thing, as there is nothing competing (other than part of the operating system) for CPU cycles, and besides, there are plenty of extra cycles to go around.
And last of all, if SQL Server is on a dedicated server, and the CPU is maxed out, giving it a priority boost is a zero sum game as parts of the operating system could potentially be negatively affected if you do. And the gain, if any, will be very little for SQL Server.
As you can see, this option is not worth the effort. In fact, Microsoft has documented several problems related to using this option, which makes this option even less desirable to try.
If you find this option turned on in your audit, question its purpose. If you currently are not having any problems with it on, you can probably leave it on without issues. But I would recommend setting it back to its default.
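If the audit does turn up “priority boost” enabled without a good reason, resetting it can be sketched as follows (note that the change only takes effect after the SQL Server service restarts):

```sql
-- "priority boost" is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 0 is the default: SQL Server runs at normal process priority.
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
-- Requires a restart of the SQL Server service to take effect.
```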