SQL Server 2005 Database Options and Configuration Performance Audit Checklist

Auto_Update_Statistics_Async

You are familiar with the AUTO_UPDATE_STATISTICS database option just discussed. Under most conditions, this feature provides the query optimizer with up-to-date index and column statistics so that optimum query plans can be created to execute queries. But not always.

In SQL Server 2005, there is a new feature that can overcome some of the potential negatives of this database option: a new database option called AUTO_UPDATE_STATISTICS_ASYNC. If this option is turned on for a database, and the AUTO_UPDATE_STATISTICS option is also turned on, the behavior of the statistics updating process changes. Instead of the query having to wait for the statistics to be updated before it can execute (the default behavior), the query runs immediately using the old statistics and query plan, and a background thread is spawned to perform the actual statistics update. Once the statistics have been updated, they are available the next time a query that needs them is run.
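As a quick sketch, assuming a database named AdventureWorks (substitute your own database name), you can turn the option on and confirm both settings from sys.databases:

    -- Asynchronous statistics updates only take effect when
    -- AUTO_UPDATE_STATISTICS is also turned on
    ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
    ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON;

    -- Verify both settings (1 = on)
    SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
    FROM sys.databases
    WHERE name = 'AdventureWorks';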

What this accomplishes is to make the performance of affected queries more predictable, and potentially to eliminate application timeouts caused by long-running statistics updates. But keep this in mind: it is also possible that this option could make a query run longer, not shorter. How? If the rows in the table have changed so much that the statistics need to be updated, then running the query immediately with the old statistics and old query plan, instead of waiting for the update, could cause the query to take much more time, because the old query plan and statistics are badly out of date and a new, more efficient query plan, based on the new statistics, is needed.

This is virtually impossible to predict. Because outdated statistics and query plans could make queries run slower, Microsoft recommends using AUTO_UPDATE_STATISTICS_ASYNC only under two conditions:

  • When it is more important that queries respond in a predictable amount of time than that they always run with the most efficient query plan.
  • When you have an application that times out while waiting for statistics to be automatically updated.

If you don’t experience either of the above, then you should not use the AUTO_UPDATE_STATISTICS_ASYNC option.

The AUTO_UPDATE_STATISTICS_ASYNC database option, like so many options in SQL Server, has trade-offs. If you think this option could help you out in a particular situation, test it first on a test system. You don’t want to experiment on your production system.

Auto_Shrink

Some databases need to be shrunk periodically in order to free up disk space as older data is deleted from the database. But don’t be tempted to use the auto_shrink database option to accomplish this task, as it can waste SQL Server resources unnecessarily.

By default, the auto_shrink option is turned off, which means that the only way to free up empty space in a database is to do so manually. If you turn this option on, SQL Server will check every 30 minutes to see if it needs to shrink the database. Not only does this use up resources that could better be used elsewhere, it also can cause unexpected bottlenecks in your database when the auto_shrink process kicks in and does its work at the worst possible time.

If you need to shrink databases periodically, perform this step manually using the DBCC SHRINKDATABASE or DBCC SHRINKFILE commands, or use the SQL Server Agent, or a Database Maintenance Plan, to schedule regular file shrinking during less busy times.
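As a minimal sketch, assuming a database named AdventureWorks whose data file has the logical name AdventureWorks_Data (check sys.database_files for your actual logical file names), a manual shrink looks like this:

    -- Shrink the whole database, leaving 10 percent free space
    DBCC SHRINKDATABASE (AdventureWorks, 10);

    -- Or shrink a single file to a target size of 200 MB
    DBCC SHRINKFILE (AdventureWorks_Data, 200);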

As part of your audit, if you find this option turned on, you will need to research why it was turned on. If you can’t find the reason, or if the reason is a poor one, turn this option off.
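During the audit itself, you can check and correct the setting with Transact-SQL such as the following (database name assumed):

    -- Returns 1 if auto_shrink is on, 0 if it is off
    SELECT DATABASEPROPERTYEX('AdventureWorks', 'IsAutoShrink');

    -- Turn the option off
    ALTER DATABASE AdventureWorks SET AUTO_SHRINK OFF;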

Compatibility Level

SQL Server 2005 has a database compatibility mode that allows applications written for previous versions of SQL Server to continue to run, even though SQL Server 2005 may have deprecated some of the functionality of the older versions. If you want maximum performance from your databases, you don’t want to run them in compatibility mode, as not all new performance-related features are supported in compatibility mode.

Instead, your databases should be running in native SQL Server 2005 mode. Of course, this may require you to modify your older application to make it SQL Server 2005 compliant, but in most cases the additional work required to update your application will be more than paid for by improved performance.

The SQL Server 7.0 compatibility level is referred to as “70,” the SQL Server 2000 compatibility level as “80,” and SQL Server 2005 native mode as “90.” If you upgrade a database from a previous version of SQL Server, SQL Server 2005 will keep the database at the compatibility level of that previous version. Because of this, be sure to check what compatibility level your databases are currently running under in SQL Server 2005. If they are not running at level “90,” test them to see if they will run successfully at “90,” and then change the setting appropriately.
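Checking and changing the compatibility level can be done like this (the database name is assumed; in SQL Server 2005 the level is changed with the sp_dbcmptlevel system stored procedure):

    -- List every database and its compatibility level (70, 80, or 90)
    SELECT name, compatibility_level FROM sys.databases;

    -- Switch a database to native SQL Server 2005 mode
    EXEC sp_dbcmptlevel 'AdventureWorks', 90;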

Read_Only

If a database will be used for read-only purposes only, such as for reporting, consider turning the read_only setting on (the default setting is off). This eliminates the overhead of locking in the database and, in turn, can potentially boost the performance of queries run against it. If you need to modify the database on rare occasions, you can turn the setting off, make your change, and then turn it back on.
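For example, assuming a hypothetical reporting database named SalesReporting, and keeping in mind that switching to read-only requires exclusive access to the database:

    -- Put the database into read-only mode
    ALTER DATABASE SalesReporting SET READ_ONLY;

    -- Later, to apply changes, make it writable again temporarily
    ALTER DATABASE SalesReporting SET READ_WRITE;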

Page_Verify

Because SQL Server data pages (8K) and the 512-byte sectors that Windows Server writes to disk are different sizes, it is possible during power failures, or if you have disk driver or physical disk problems, for your database files to become physically corrupted.

Here’s why. Every time the operating system writes an 8K SQL Server data page to disk, it must break the data up into multiple 512-byte units to be stored on disk. Once the first 512 bytes of data are written, SQL Server assumes that the entire 8K has been written to disk successfully. So if a problem occurs before all of the 512-byte units that make up the 8K SQL Server page are written, SQL Server does not know it has happened. This is known as a torn page.

As you can imagine, this corrupts the data page and, in effect, makes your entire database corrupt. There is no way to fix a database corrupted by a torn page, except to restore a known good backup. One of the best ways to prevent this problem is to ensure your server has battery backup. But this does not prevent all problems, because a defective disk driver can also cause similar corruption (I have seen this happen).

If you are worried about getting torn pages in your SQL Server databases, you can have SQL Server tell you if they occur (although it can’t prevent them or fix them after they have occurred).

In SQL Server 2005, there is a new option called Page_Verify. In earlier versions of SQL Server, it was known as TORN_PAGE_DETECTION.

Page_Verify has three options you can set: NONE, CHECKSUM, and TORN_PAGE_DETECTION.

When NONE is specified, database page writes do not generate a checksum or torn page detection information. This offers the least protection, but the highest performance.

When CHECKSUM is specified, the Database Engine calculates a checksum over the contents of each page and stores the value in the page header when a page is written to disk. When the page is read from disk, the checksum is recalculated and compared to the original checksum value. This option, which is the default option, offers the best tradeoff in protection and performance.

If the TORN_PAGE_DETECTION option is selected, a specific bit for each 512-byte sector in the 8 KB database page is saved in the database page header when the page is written to disk. When the page is read from disk, the torn bits stored in the page header are compared with the actual page sector information. This option uses fewer resources than CHECKSUM, but it can only detect torn pages, not other forms of page corruption, so it offers less protection.

As a DBA, you will have to decide for yourself which option best fits your needs for data integrity and performance.
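Setting and auditing the option follows the same pattern as the other database options discussed here (database name assumed):

    -- Use checksums, the default and usually the best choice
    ALTER DATABASE AdventureWorks SET PAGE_VERIFY CHECKSUM;

    -- Audit the current setting on every database
    SELECT name, page_verify_option_desc FROM sys.databases;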

Parameterization

One of the main benefits of using a stored procedure to execute Transact-SQL code is that once a stored procedure is compiled and executed the first time, the query plan is cached by SQL Server. So the next time the same stored procedure is run (assuming the same connection parameters are used), SQL Server does not have to recompile the stored procedure again, instead reusing the query plan created during the first compilation of the stored procedure. If the same stored procedure is called over and over again, with the query plan being reused each time, this can help reduce the burden on SQL Server’s resources, boosting its overall performance.

But as we all know, not all SQL Server-based applications use stored procedures to communicate with SQL Server. Many times the Transact-SQL is sent from the application to SQL Server in the form of a static or dynamic statement. Because SQL Server is pretty smart, by default, it has the ability to do something called simple parameterization.

Essentially, simple parameterization means that SQL Server can take a look at the static or dynamic Transact-SQL being sent to it from an application, and if it finds any values that it considers to be parameters, it will parameterize the Transact-SQL, which allows the resulting query plan to be reused, much like how SQL Server can reuse the query plans of stored procedures.
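To make this concrete, here is a hypothetical example (the Orders table and its columns are invented for illustration) showing an ad hoc query and one way to look for its parameterized plan in the cache:

    -- An ad hoc query with a literal value SQL Server may auto-parameterize
    SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 42;

    -- In the plan cache, the statement appears in parameterized form, roughly:
    -- (@1 int) SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @1
    SELECT cp.usecounts, cp.objtype, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%Orders%';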

But what if your application uses mostly complex queries, queries that cannot be automatically parameterized by SQL Server using simple parameterization? This is where a new feature of SQL Server 2005 comes to the rescue. This new feature is called forced parameterization. When forced parameterization is turned on, it tells SQL Server to force the parameterization of virtually all SELECT, INSERT, UPDATE, and DELETE statements. There are a few types of queries that cannot be forced, but the exceptions are few.

With forced parameterization turned on, SQL Server performs fewer compilations of statements because it can now reuse more query plans than before, helping to reduce resource usage and boost performance.

But I know what you are thinking: there is no such thing as a free lunch. Am I right? And you are right. Like most aspects of SQL Server performance tuning, there are trade-offs to be considered.

Here are some of those trade-offs:

  • Since SQL Server has to force parameterization on virtually all statements, it has to perform a little extra work up front to do the parameterization.
  • Queries with widely varying parameter values may end up reusing an inappropriate query plan, reducing performance.
  • Because literal constants in a query are changed to parameters, the Query Optimizer might not always choose the best plan, reducing performance.
  • The Query Optimizer may not choose the ideal plan for queries on partitioned tables and distributed partitioned views.
  • The Query Optimizer is less likely to match a query to an indexed view or an index on a computed column.

Essentially, Microsoft recommends that forced parameterization be used only in limited situations, such as applications that experience high volumes of concurrent queries submitted as static or dynamic Transact-SQL rather than as stored procedures. This is the situation that best lends itself to forced parameterization. If your application does not fall into this category and you turn forced parameterization on, it is very possible that performance will get worse.

Forced parameterization is turned on and off at the database level. In other words, it is an all-or-nothing setting: either all of the static and dynamic Transact-SQL in the database uses simple parameterization, or all of it uses forced parameterization.
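The setting is changed with ALTER DATABASE and can be audited from sys.databases (database name assumed):

    -- Turn forced parameterization on for one database
    ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED;

    -- Revert to the default behavior
    ALTER DATABASE AdventureWorks SET PARAMETERIZATION SIMPLE;

    -- Audit: 1 = forced, 0 = simple
    SELECT name, is_parameterization_forced FROM sys.databases;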

Continues…
