SQL Server 2005 Server Configuration Performance Audit Checklist Part 2

Blocked Process Threshold

This option is new to SQL Server 2005.

The blocked process threshold option is used to specify the threshold, in seconds, when blocked process reports are generated. By default, no blocked process reports are produced.

You can define an alert to be executed when this event is generated. For example, you can choose to notify a DBA to take appropriate action to handle a blocking situation.

The blocked process threshold option uses the deadlock monitor background thread to traverse the list of tasks that have been waiting longer than, or a multiple of, the configured threshold. The event is generated once per reporting interval for each blocked task.

The blocked process report is done on a best effort basis. There is no real-time reporting. If your server is experiencing a lot of blocking, consider turning on this option to help you identify and troubleshoot blocking issues.
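As an illustration, enabling a 20-second threshold with sp_configure might look like this (the 20-second value is only an example; choose one appropriate for your environment):

```sql
-- 'blocked process threshold' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Generate a blocked process report for any task blocked 20 seconds or longer
EXEC sp_configure 'blocked process threshold', 20;
RECONFIGURE;
```

The report itself is raised as the Blocked Process Report event class, which can be captured with Profiler or used to fire the alert described above.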


Cost Threshold for Parallelism

Using parallelism to execute a SQL Server query has its costs, because it takes some additional overhead to run a single query in parallel on multiple CPUs rather than serially on a single CPU. But if the benefit of running a query in parallel (parallel queries can often use fewer overall resources for long-running queries) outweighs the additional overhead, then using parallelism is a good thing.

As a rule of thumb, if a query can run serially very fast, there is no point in even considering parallelism for the query, as the extra time required to evaluate it for possible parallelism might be longer than the time it takes to run the query serially.

By default, if the Query Optimizer determines that the serial plan for a query has an estimated cost of less than 5 (a cost figure that roughly corresponds to estimated elapsed seconds on a reference machine), parallelism is not considered by SQL Server. This figure can be modified using the "cost threshold for parallelism" SQL Server option, anywhere from 0 to 32767. So if you set this value to 10, the Query Optimizer won't consider parallelism for any query whose estimated serial cost is less than 10.

In most cases, you should not change this setting. But if you find that your SQL Server runs many queries with parallelism, and CPU usage is very high, raising this setting from 5 to a higher figure (you will have to experiment to find the ideal figure for your situation) will reduce the number of queries using parallelism, potentially reducing the overall usage of your server's CPUs, which may help the overall performance of your server. It could also hurt the performance of your server.
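For example, raising the threshold from the default of 5 to 10 (a value you would arrive at only through testing against your own workload) might look like this:

```sql
-- 'cost threshold for parallelism' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Only consider parallel plans for queries with an estimated serial cost above 10
EXEC sp_configure 'cost threshold for parallelism', 10;
RECONFIGURE;
```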

Another option to consider is to reduce the value from 5 to a smaller number, although this could hurt, rather than help, performance in many cases. One area where a smaller value might be useful is where SQL Server is acting as a data warehouse and many very complex queries are being run. A lower value will allow the Query Optimizer to use parallelism more often, which can help in some situations.

You will want to test changes to the default value thoroughly before implementing them on your production servers.

If SQL Server only has access to a single CPU (either because there is only one CPU in the server, or because of an "affinity mask" setting), parallelism is not considered for a query.

If you find in your audit that the cost threshold for parallelism has been changed from the default value, find out why. If you can’t get an answer, move it back to the default value.
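During the audit itself, you can check the configured and running values of this (or any) option without changing anything, for example:

```sql
-- Compare the configured value against the value currently in use
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'cost threshold for parallelism';
```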

Cursor Threshold

If your SQL Server does not use cursors as part of the T-SQL code it is running, or uses them very little, then this setting should never be changed from its default value of "-1".

A "cursor threshold" of "-1" tells SQL Server to execute all cursors synchronously, which is the ideal setting if the result sets of cursors executed on your server are not large. But if many, or all, of the cursors running on your SQL Server produce very large result sets, then executing cursors synchronously is not the most efficient way to execute them.

The "cursor threshold" setting has two other settings (besides the default) for running large cursors. A setting of "0" tells SQL Server to run all cursors asynchronously, which is more efficient if most or all of the cursors' result sets are large.

What if some of the cursor result sets are small and some are large? In this case, you can decide what counts as large and what counts as small, and then use that number as the cutoff point for SQL Server. For example, let's say that we consider any cursor result set of under 1000 rows as small, and any cursor result set of over 1000 rows as large. If this is the case, we can set the "cursor threshold" to 1000.

When the "cursor threshold" is set to 1000, if the Query Optimizer predicts that a cursor's result set will be fewer than 1000 rows, the cursor is run synchronously. And if the Query Optimizer predicts that the result set will be more than 1000 rows, the cursor is run asynchronously.
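Using the 1000-row cutoff from the example above, the change would look like this:

```sql
-- 'cursor threshold' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cursors whose estimated result set exceeds 1000 rows are populated asynchronously
EXEC sp_configure 'cursor threshold', 1000;
RECONFIGURE;
```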

In many ways, this option provides the best of both worlds. The only problem is predicting the ideal "cursor threshold". To determine this, you will need to test. But as you might expect, the default value is often the best, and you should only change this option if you know for sure that your application uses very large cursors, and you have tested the change and know for sure that it has helped, not hurt, performance.

As a part of your audit, you may also want to investigate how often cursors are used, and how large the result sets are. Only by knowing this will you know what the best setting is for your server. Of course, you could always try to eliminate the use of cursors on the server. This way, the setting can remain at the default value, and you don’t have to worry about the overhead of cursors.


Default Trace Enabled

By default, SQL Server 2005 performs a Profiler trace on an ongoing basis. This information can be very handy when troubleshooting some types of problems. The default trace log is stored by default in the MSSQL\LOG directory using a rollover trace file. The name for the default trace log file is log.trc.
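You can confirm the default trace is running, and locate its current rollover file, with a query against the sys.traces catalog view:

```sql
-- Is the default trace enabled, and where is its current rollover file?
SELECT id, path, max_size, is_rollover
FROM sys.traces
WHERE is_default = 1;
```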

For most servers, you will want to leave this option on because of the benefits it provides when troubleshooting. On the other hand, this option creates additional overhead, which might be noticed on a very busy server. Only consider turning this option off if your server is maxed out on CPU and I/O resources.


Fill Factor (%)

This option allows you to change the default fill factor used for indexes when they are built. By default, the fill factor setting is "0". A setting of "0" is somewhat confusing: it means that leaf index pages are filled 100% (not 0%), but that intermediate (non-leaf) index pages have some space left in them (they are not filled 100%). Legal settings for the fill factor range from 0 through 100.

The default fill factor only comes into play when you build or rebuild indexes without specifying a specific fill factor. If you do specify a fill factor when you create a new index, that value is used, not the default fill factor.

In most cases, it is best to leave the default fill factor alone, and if you want a value other than the default fill factor, then specify it when you create an index. In fact, this is a best practice.
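Following that best practice, you leave the server-wide default at "0" and specify the fill factor explicitly on the individual index. For example (the table and index names here are purely hypothetical):

```sql
-- Hypothetical index: leave 10% free space in its leaf pages
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (FILLFACTOR = 90);
```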

As a part of your audit, note if the fill factor is some figure other than the default value of "0". If it is, try to find out why. And if you can’t find out why the default value was changed, or there is not a good reason, switch it back to the default value. Also, if the value has been changed, keep in mind that any indexes created after the change may be using this default fill factor value. If so, you may need to reevaluate these indexes to see if the fill factor used to create them is appropriate.


Index Create Memory (KB)

The Index Create Memory option controls the maximum amount of memory initially allocated for creating an index. If more memory is needed later for the index creation, and the memory is available, SQL Server will use it, exceeding the setting of this option. If additional memory is not available, index creation will use the memory initially allocated.

The default value of "0" tells SQL Server to automatically determine the ideal value for this setting. In most cases, SQL Server will configure the amount of memory optimally.

But in some unusual cases, especially with very large tables, it is possible for SQL Server to make a mistake, causing large indexes to be created very slowly, or not at all. If you run into this situation, you may want to consider setting the Index Create Memory value yourself, although you will have to experiment until you find the optimum setting for your situation. Legal settings for this option range from 704 to 2147483647.
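If such experimentation showed that a manual value helped, the change might look like this (the 8192 KB figure is purely illustrative; you would arrive at your own value through testing):

```sql
-- 'index create memory (KB)' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Initially allocate 8192 KB for index creation (illustrative value only)
EXEC sp_configure 'index create memory (KB)', 8192;
RECONFIGURE;
```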

Keep in mind that if you do change this setting, the memory is allocated for index creation and will not be available for other uses. If your server has more than enough RAM, this is no problem. But if your server is short on RAM, changing this setting could negatively affect the performance of other aspects of SQL Server.

As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. If you can’t find out why, or if there is not a good reason, change it back to the default value.

Lightweight Pooling

SQL Server 2005, by default, runs in what is called "thread mode." This means that SQL Server uses schedulers (the SQLOS schedulers, the successor to SQL Server 2000’s UMS, or User Mode Schedulers) to run user processes on worker threads. SQL Server creates one scheduler per processor, with each one taking turns running the many user processes found on a busy SQL Server. For optimum efficiency, the scheduler attempts to balance the number of user processes run by each thread, which in effect evenly balances all of the user processes over all of the CPUs in the server.

SQL Server also has an optional mode it can run in, called fiber mode. In this case, SQL Server still uses one thread per processor (like thread mode), but the difference is that multiple fibers are run within each thread. Fibers assume the identity of the thread they run on and are non-preemptive with respect to other SQL Server threads running on the server. Think of a fiber as a "lightweight thread," which, under certain circumstances, takes less overhead to manage than a standard worker thread. Fiber mode is turned on and off using the "lightweight pooling" SQL Server configuration option. The default value is "0", which means that fiber mode is turned off.

So what does all this mean? Like everything, there are pros and cons to running in one mode over another. Generally speaking, fiber mode is only beneficial when all of the following conditions exist:

• Two or more CPUs are found on the server (the more the CPUs, the larger the benefit).
• All of the CPUS are running near maximum (90-100%) most of the time.
• There is a lot of context switching occurring on the server (as reported by the Performance Monitor System object counter Context Switches/sec). Generally speaking, more than 5,000 context switches per second is considered high.
• The server is making little or no use of distributed queries or extended stored procedures.

If all of the above are true, then turning on the "lightweight pooling" option in SQL Server may generate a 5% or greater boost in performance.

But if the four conditions are not all true, then turning on "lightweight pooling" could actually degrade performance. For example, if your server makes use of many distributed queries or extended stored procedures, then turning on "lightweight pooling" will definitely cause a problem, because they cannot make use of fibers, which means that SQL Server will have to switch back and forth between fiber mode and thread mode as needed, hurting SQL Server’s performance.

As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. In addition, check to see if the four conditions above exist. If they do, then turning "lightweight pooling" on may be beneficial. If these four conditions do not exist, then use the default value of "0".
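If your audit finds the option turned on without a good reason, returning it to the default looks like this (note that a restart of the SQL Server service is required for the change to take effect):

```sql
-- 'lightweight pooling' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 0 = thread mode (the default); 1 = fiber mode
EXEC sp_configure 'lightweight pooling', 0;
RECONFIGURE;
```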
