What the IT Decision-Maker Needs to Know About SQL Server Performance Tuning
Executive Summary: The View From 10,000 Feet
The most common SQL Server performance problems can often be fixed quickly and inexpensively. However, companies sometimes pursue complex, expensive and ultimately ineffective solutions because of an incomplete understanding of their problem. Before undertaking performance optimization, it is essential to monitor, measure, and most importantly, correctly interpret the appropriate SQL Server performance metrics to identify the source of the problem.
This article is far from a complete treatment of SQL Server performance optimization. It focuses on the 20% of bottlenecks that cause 80% of the problems, with an emphasis on the simplest, least expensive solutions. After reading it you will not be a SQL Server performance expert, but you will be in a better position to make decisions regarding SQL Server performance optimization.
Performance Tuning SQL Server: The View from 5000 Feet
This article is not intended to be a step-by-step guide for implementing any of the procedures discussed. Use this advice at your own risk, and don’t make or allow any changes to a production server without testing thoroughly first.
If you only take one thing away from this reading, let it be this: Take care of the basics first. Start with the simple, inexpensive fixes. Before undertaking a $50,000 hardware upgrade, spend a few hours optimizing the indexing in your database, then reassess the performance issues. In our experience, the most common sources of poor SQL Server performance are inadequate indexing, poorly-written SQL code, and insufficient memory. These are also areas where the most dramatic improvements can often be made with the least investment.
Before you spend a lot of money on new hardware or complex fixes, make sure your applications are using the resources you have efficiently. Even when monitoring shows that one or more hardware resources are overloaded, it does not necessarily mean that they are inadequate. Inefficient application code or missing indexes can put unnecessarily heavy loads on your hardware. New hardware won’t always help if your application code is generating huge numbers of blocking locks, or is scanning a 200 million row table from beginning to end every time it needs a part number.
In all that follows we will be talking about SQL Server 2000, differentiating between Standard and Enterprise Editions where appropriate. Most of the discussion applies also to version 7, and will continue to apply to SQL Server 2005.
Monitoring, Measuring and Testing
Before you can fix performance problems, you need to know what’s causing them. Fortunately, SQL Server has some very good diagnostic tools built in, and others are available for free download from Microsoft’s web site. For the most part these tools are fairly simple to use. The hard part is knowing how to interpret the results they generate. This article will give you some rough rules-of-thumb, but if you don’t have anyone on staff experienced in this area, you may need outside expertise for an accurate and detailed analysis.
SQL Server Should Be on Its Own Server
Before we get into the details of performance tuning, it is necessary to emphasize that your production SQL Server should not be sharing resources with any other major application. It shouldn’t be on a domain controller and it shouldn’t be on a web server. It should be the only application running on the server, and it should usually be configured with its default memory options so that it can dynamically manage the memory on that server. If this is not possible, your performance tuning job becomes more difficult.
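As a sketch of what the default dynamic memory configuration looks like, you can inspect the relevant settings with sp_configure. This is a hypothetical check, not a recommendation to change anything; the values shown in the comments are the SQL Server 2000 defaults.

```sql
-- Checking memory configuration on SQL Server 2000 (hypothetical example).
-- With these defaults, SQL Server manages its memory dynamically.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'min server memory (MB)'   -- default: 0
EXEC sp_configure 'max server memory (MB)'   -- default: 2147483647
```

If both values are at their defaults, SQL Server will grow and shrink its memory usage in response to demand from other processes on the machine, which is the behavior you want on a dedicated server.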
Indexing
The simple act of placing an index on the right column in a table can often reduce execution time for a critical procedure from minutes to seconds, or from hours to minutes. Really.
During database design it is somewhat difficult to predict which columns should be indexed, but once the application is in production it is relatively simple to find out where an important index is missing. We will get to that below, in the discussion of ShowPlan and Profiler.
When SQL Server doesn’t have an index to follow, it looks up data using a table scan. Using the metaphor of an encyclopedia, a table scan is exactly like starting at Volume A and reading every line until you find all the information you’re looking for. To be sure you haven’t missed anything, you have to read all the way through the Z’s.
Using an index is like, well, using an index. You find a keyword through an alphabetical lookup in a highly condensed summary of everything in the encyclopedia. It gives you the volume and page, and you go straight to the information. Time saved? About three weeks. Adjusted to a millisecond time scale, it’s much the same for databases.
What to Index?
This is not a simple question. Here are some guidelines, but keep in mind that for every guideline there are circumstances in which it does not lead to the best indexing choice.
When you create a primary key, it is automatically indexed with a clustered index unless you explicitly create the index as non-clustered. The primary key is often not the best use of the only clustered index a table can have, so you should always create the index explicitly, based on the column(s) you consider ideal for taking full advantage of a clustered index.
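As an illustration, here is a hypothetical Orders table where the surrogate key gets a non-clustered primary key, freeing the single clustered index for a date column that range queries hit far more often. The table and index names are invented for this sketch.

```sql
-- Hypothetical example: reserve the clustered index for the column
-- that benefits most from physical ordering, not the surrogate key.
CREATE TABLE Orders (
    OrderID    int IDENTITY(1,1) NOT NULL,
    OrderDate  datetime NOT NULL,
    CustomerID int NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
)

-- Queries that filter or sort on date ranges benefit from the rows
-- being physically ordered by OrderDate.
CREATE CLUSTERED INDEX IX_Orders_OrderDate ON Orders (OrderDate)
```

Whether a date, a foreign key, or some other column deserves the clustered index depends entirely on the query patterns of your application.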
Foreign keys, columns that reference primary keys in another table, should almost always be indexed, but that is not done automatically when you create the foreign key. You must do so explicitly.
It is critically important to index columns that are frequently used in the WHERE clause of important queries, but the subtleties of deciding which columns or combinations of columns to index are well beyond the scope of this article. Keep in mind that this is perhaps the single most important aspect of performance tuning, and you must find the expertise you need to get it done.
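Both points above come down to explicit CREATE INDEX statements. Assuming the hypothetical Orders table used earlier in this discussion, the sketch below indexes a foreign key column and a column that appears in frequent WHERE clauses; neither index is created for you automatically.

```sql
-- Hypothetical examples: foreign key columns and frequent filter
-- columns must be indexed explicitly.
CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)  -- foreign key column
CREATE INDEX IX_Orders_OrderDate  ON Orders (OrderDate)   -- frequent WHERE-clause column
```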
You can experiment with indexing using Query Analyzer. Be sure to do it on a test database, because your experiments will negatively affect performance in the database you are working on. We find the graphical statistics window less useful than the old text-based tool. Use the SET STATISTICS IO ON statement to enable statistics output for the query window; it generates text output of performance metrics. The key metric is the number of logical reads, which is a direct measurement of query efficiency: the lower the number of logical reads, the better.
Execute the query and note the number of logical reads. Test different indexing schemes by comparing the number of logical reads they generate against this baseline number. When you hit the right indexing scheme, you may see some dramatic changes. 50,000 reads being reduced to 5 reads is not uncommon.
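The workflow described above might look like the following in Query Analyzer. The table, column, and read counts are hypothetical; the point is the before-and-after comparison of logical reads.

```sql
-- Measuring logical reads before and after adding an index
-- (table, column, and figures are hypothetical).
SET STATISTICS IO ON

SELECT OrderID, OrderDate
FROM Orders
WHERE CustomerID = 12345
-- Messages pane might show:
-- Table 'Orders'. Scan count 1, logical reads 50000, ...

CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)

-- Re-run the same SELECT; with a useful index, logical reads
-- may drop from tens of thousands to a handful.
```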
Auto Create Statistics
This is an option you can set on a database by database basis. By default it is on, and it usually should be left that way. It allows SQL Server to automatically create a collection of statistics for the data in an unindexed column. In the absence of a useful index, the statistics help it decide how to execute a query most efficiently. Statistics collections are not as efficient as indexes, but they are much better than nothing, and SQL Server can create them on the fly when it needs additional information to make better query execution decisions.
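On SQL Server 2000 you can check and set this option per database. The database name below is a placeholder for your own.

```sql
-- Checking and enabling Auto Create Statistics on SQL Server 2000
-- (database name is hypothetical).
EXEC sp_dboption 'MyDatabase', 'auto create statistics'   -- check current setting

ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON   -- enable it
```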
A very good DBA might not want to enable automatic statistics creation because he or she is already on top of the performance issues, and Auto Create tends to clutter the table with pseudo-indexes. That’s fine for a very experienced DBA. Everyone else should enable Auto Create Statistics, if only to find out where SQL Server thinks you ought to have an index but do not. Statistics collections appear as indexes in the system tables, but carry a system-generated name with a prefix like “_WA_”. When you see these collections defined on a column, consider putting an index on that column, though it is not always warranted.
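To see where SQL Server has created these statistics collections, you can query the sysindexes system table in SQL Server 2000. This is a sketch; the exact naming pattern of auto-created statistics can vary.

```sql
-- Listing auto-created statistics in the current SQL Server 2000 database.
-- Note: underscore is a LIKE wildcard, so it is escaped as [_].
SELECT OBJECT_NAME(id) AS table_name, name AS stats_name
FROM sysindexes
WHERE name LIKE '[_]WA%'
```

Tables that show up repeatedly in this list are good candidates for a closer look at their indexing.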