SQL Server Configuration Performance Checklist


Index Create Memory (KB)

The index create memory setting determines how much memory SQL Server can use for the sort operations that occur when creating indexes. The default value of “0” tells SQL Server to determine the ideal value automatically. In almost all cases, SQL Server will configure the amount of memory optimally.

But in some unusual cases, especially with very large tables, it is possible for SQL Server to make a mistake, causing large indexes to be created very slowly, or not at all. If you run into this situation, you may want to consider setting the index create memory option yourself, although you will have to use trial and error to find the optimum value for your situation. Legal settings for this option run from 704 to 2147483647. This number refers to the amount of RAM, in KB, that SQL Server can devote to creating the index.

Keep in mind that if you do change the setting, this memory will then be allocated for index creation and will not be available for other uses. If your server has more than enough RAM, this will be no problem. But if your server is short on RAM, changing this setting could negatively affect the performance of other aspects of SQL Server. You might consider making this change only when you are creating or rebuilding large indexes, and returning the setting to the default at all other times.
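For example, a minimal sketch of that approach, using sp_configure to raise the value just before a large index build and then return it to the default, might look like this (the 4096 KB figure is purely illustrative; you would arrive at your own value by trial and error):

    -- "index create memory (KB)" is an advanced option, so expose it first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Temporarily grant 4096 KB (illustrative value only) to index creation sorts.
    EXEC sp_configure 'index create memory (KB)', 4096;
    RECONFIGURE;

    -- ... create or rebuild the large index here ...

    -- Return the option to its default so SQL Server manages this memory itself.
    EXEC sp_configure 'index create memory (KB)', 0;
    RECONFIGURE;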

As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. If you can’t find out why, or if there is not a good reason, change it back to the default value.

Lightweight Pooling

SQL Server 7.0 and 2000, by default, run in what is called “thread mode.” This means that SQL Server uses UMS (User Mode Scheduler) threads to run user processes. SQL Server creates one UMS thread per processor, with each one taking turns running the many user processes found on a busy SQL Server. For optimum efficiency, the UMS attempts to balance the number of user processes run by each thread, which in effect spreads all of the user processes evenly over all the CPUs in the server.

SQL Server also has an optional mode it can run in, called fiber mode. In this case, SQL Server still uses one thread per processor (as in thread mode), but the difference is that multiple fibers are run within each thread. A fiber assumes the identity of the thread under which it executes and is scheduled non-preemptively with respect to the other SQL Server threads running on the server. Think of a fiber as a “lightweight thread,” which, under certain circumstances, takes less overhead to manage than standard UMS threads. Fiber mode is turned on and off using the “lightweight pooling” SQL Server configuration option. The default value is “0”, which means that fiber mode is turned off.

So what does all this mean? Like everything, there are pros and cons to running in one mode over another. Generally speaking, fiber mode is only beneficial when all of the following conditions exist:

  • Two or more CPUs are found on the server (the more CPUs, the larger the benefit).

  • All of the CPUs are running near maximum (90-100%) most of the time.

  • There is a lot of context switching occurring on the server (as reported by the Performance Monitor counter System: Context Switches/sec). Generally speaking, more than 5,000 context switches per second is considered high.

  • The server is making little or no use of distributed queries or extended stored procedures.

If all of the above are true, then turning on the “lightweight pooling” option in SQL Server may yield a 5% or greater boost in performance.
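If all four conditions are met and you want to test fiber mode, a sketch of how it would be turned on with sp_configure follows (note that “lightweight pooling” is an advanced option and the change does not take effect until the SQL Server service is restarted):

    -- "lightweight pooling" is an advanced option, so expose it first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Turn fiber mode on (1). Set it back to 0 to return to the default thread mode.
    EXEC sp_configure 'lightweight pooling', 1;
    RECONFIGURE;

    -- Restart the SQL Server service for the change to take effect.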

But if all four of these conditions are not true, then turning on “lightweight pooling” could actually degrade performance. For example, if your server makes heavy use of distributed queries or extended stored procedures, then turning on “lightweight pooling” will definitely cause a problem, because these features cannot run on fibers. SQL Server will have to switch back and forth between fiber mode and thread mode as needed, which hurts SQL Server’s performance.

As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. In addition, check to see if the four conditions above exist. If they do, then turning “lightweight pooling” on may be beneficial. If these four conditions do not exist, then use the default value of “0”.

Locks

Each time SQL Server locks a record, the lock must be stored in memory. By default, the value for the “locks” option is “0”, which means that lock memory is dynamically managed by SQL Server. Internally, SQL Server can reserve from 2% to 40% of available memory for locks. In addition, if SQL Server determines that allocating additional memory for locking could cause paging at the operating system level, it will not allocate the memory to locks, instead giving it up to the operating system in order to prevent paging.

In almost all cases, you should allow SQL Server to dynamically manage locks, leaving the default value as is. If you enter your own value for the “locks” option (legal values run from 5000 to 2147483647 and specify a maximum number of locks, not an amount of memory), then SQL Server cannot dynamically manage this portion of memory, which could cause some other areas of SQL Server to experience poor performance.
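During your audit, a quick way to check this option is to run sp_configure and look at the configured value, resetting it to the default if there is no good reason for the change; a sketch:

    -- "locks" is an advanced option, so expose it before viewing or changing it.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Display the current configured and running values for "locks".
    EXEC sp_configure 'locks';

    -- If it has been changed without good reason, return it to the default of 0
    -- so lock memory is managed dynamically (takes effect after a restart).
    EXEC sp_configure 'locks', 0;
    RECONFIGURE;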

If you get an error message that says you have exceeded the maximum number of locks available, you have these options:

  • Closely examine your queries to see if they are causing excessive locking. If they are, it is possible that performance is also being hurt because of a lack of concurrency in your application. It is better to fix bad queries than it is to allocate too much memory to tracking locks.

  • Reduce the number of applications running on the server.

  • Add more RAM to your server.

  • Boost the number of locks to a high value (based on trial and error). This is the least desirable option as giving memory to locks prevents it from being used by SQL Server for other purposes, as needed.

Do your best to resist using this last option. If you find in your audit that this setting is some value other than the default, find out why. If you can’t find out why, or if the reason is poor, change it back to the default value.

Max Degree of Parallelism

This option allows you to specify whether parallelism is turned on, turned off, or limited to only some of the CPUs in your server. Parallelism refers to the ability of the query optimizer to use more than a single CPU to execute a query. By default, parallelism is turned on and can use as many CPUs as there are in the server (unless this has been reduced by the affinity mask option). If your server has only one CPU, the “max degree of parallelism” value is ignored.

The default for this option is “0”, which means that parallelism is turned on and available to all CPUs. If you change this setting to “1”, then parallelism is turned off. A value greater than “1” specifies the maximum number of CPUs that can be used by a single parallel query. For example, if your server has 8 CPUs and you only want parallel queries to use up to 4 of them, you can specify a value of 4 for this option. Although this option is available, it is doubtful that using it in this way would really provide any performance benefits.
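As an illustration, this is how the server-wide setting would be changed with sp_configure; the value of 4 is just the example from above, “1” turns parallelism off, and “0” restores the default:

    -- "max degree of parallelism" is an advanced option, so expose it first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Limit each parallel query plan to at most 4 CPUs (example value only).
    EXEC sp_configure 'max degree of parallelism', 4;
    RECONFIGURE;

    -- The change takes effect immediately; no restart is required.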

If parallelism is turned on, as it is by default if you have multiple CPUs, then the query optimizer will evaluate each query for the possibility of using parallelism, which takes a little overhead. On many OLTP servers, the nature of the queries being run often doesn’t lend itself to parallel execution; examples include standard SELECT, INSERT, UPDATE, and DELETE statements. Because of this, the query optimizer is wasting its time evaluating each query to see if it can take advantage of parallelism. If you know that your queries will probably never need parallelism, you can save a little overhead by turning this feature off, so queries aren’t evaluated for it.

Of course, if the nature of the queries that are run on your SQL Server can take advantage of parallelism, you will not want to turn parallelism off. For example, if your OLTP server runs many correlated subqueries, or other complex queries, then you will probably want to leave parallelism on. You will want to test this setting to see if making this particular change will help, or hurt, your SQL Server’s performance in your unique operating environment.
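One low-risk way to run such a test is the MAXDOP query hint, which overrides the server-wide “max degree of parallelism” setting for a single query; the query below is just a hypothetical example showing the syntax:

    -- Run this one query without parallelism, leaving the server-wide setting alone.
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM Orders                -- hypothetical table used only for illustration
    GROUP BY CustomerID
    OPTION (MAXDOP 1);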

In most cases, because most servers run both OLTP and OLAP queries, parallelism should be kept on. As part of your performance audit, if you find parallelism turned off or restricted, find out why. You will also want to determine whether the server is almost entirely OLTP-oriented. If so, then turning off parallelism might be justified, although you will want to test this thoroughly to see if turning it off helps or hurts overall SQL Server performance. But if the server runs mixed OLTP and OLAP, or mostly OLAP queries, then parallelism should be left on for best overall performance.

