Tips for Performance Tuning SQL Server OLTP Databases
Try to avoid performing both OLTP and OLAP activity within the same database. If you do, OLTP activities will slow down OLAP activities, and OLAP activities will slow down OLTP activities. OLTP and OLAP activities should be considered mutually exclusive, and ideally, should be relegated to their own databases.
For example, OLTP activities should occur in a database specifically designed and tuned for OLTP. As appropriate, data from the OLTP database should be moved to an OLAP database or Analysis Services cube so that OLAP queries can be performed against it. This way, OLTP and OLAP activity won’t interfere with each other or hurt your application’s performance. [2000, 2005, 2008] Updated 2-4-2009
Keep transactions as short as possible. The shorter transactions are, the shorter that locks have to be held, which results in both greater performance and scalability. Here are some ways to help keep transactions as short as possible:
- It is especially important that transactions aren’t kept open while waiting for user input. User input should be collected as a whole, and then submitted at one time. In other words, don’t collect user input in the middle of a transaction.
- Use stored procedures for your code.
- Only return the exact data needed, no extra columns or rows should be returned.
- Try to reduce the number of round trips between the client and SQL Server.
- For large units of work, consider breaking them into smaller units, so that there are many quick transactions rather than a single large one. A large transaction holds locks for longer periods, contributing to blocking and performance problems.
- From the opposite perspective, if there are very many small transactions, contributing to excess round trips and network traffic, consider batching them together so that there are fewer round trips between the client and SQL Server.
- Do not use a transaction to view and browse data.
- Consider using a lower transaction isolation level, if appropriate.
- Always manage your transaction by beginning and ending your transaction explicitly. Do not use implicit transactions.
[2000, 2005, 2008] Updated 2-4-2009
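The points above can be sketched as a single stored procedure that wraps one short, explicit transaction, so the client makes one round trip and locks are held only briefly. The procedure, table, and column names below are hypothetical:

```sql
CREATE PROCEDURE dbo.usp_PlaceOrder
    @CustomerID int,
    @ProductID  int,
    @Quantity   int
AS
BEGIN
    SET NOCOUNT ON;

    -- All user input was collected before this call; the transaction
    -- is begun and committed explicitly, never left open for input.
    BEGIN TRANSACTION;

    INSERT INTO dbo.Orders (CustomerID, OrderDate)
    VALUES (@CustomerID, GETDATE());

    DECLARE @OrderID int;
    SET @OrderID = SCOPE_IDENTITY();

    INSERT INTO dbo.OrderDetails (OrderID, ProductID, Quantity)
    VALUES (@OrderID, @ProductID, @Quantity);

    COMMIT TRANSACTION;
END
```

On SQL Server 2005 and later, a TRY/CATCH block with a ROLLBACK in the CATCH branch can be added so that an error never leaves the transaction open.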
Avoid performing large numbers of INSERTs, UPDATEs, or DELETEs in a single transaction. If you do, all the records affected by your action will be locked until the transaction is done. If you find that you need to perform mass data changes, it is better to batch them into smaller, shorter transactions to prevent unnecessary locking, which can block other users. [2000, 2005, 2008] Updated 2-4-2009
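For example, on SQL Server 2005 and later a mass DELETE can be broken into small batches like this (the table name and cutoff date are hypothetical; on SQL Server 2000, SET ROWCOUNT can be used instead of DELETE TOP):

```sql
SET NOCOUNT ON;
DECLARE @BatchSize int;
SET @BatchSize = 5000;

WHILE 1 = 1
BEGIN
    -- Each iteration is its own short transaction, so locks are
    -- released between batches and other users are not blocked long.
    DELETE TOP (@BatchSize)
    FROM dbo.OrderHistory
    WHERE OrderDate < '20080101';

    -- A short final batch means no rows are left to delete.
    IF @@ROWCOUNT < @BatchSize
        BREAK;
END
```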
As transactional data accumulates in database tables, consider archiving older data to separate tables in the same database, or in other databases. This can considerably reduce query time on large tables. Another option is to move this older data to an OLAP database. [2000, 2005, 2008] Updated 2-4-2009
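A minimal sketch of such an archive step, assuming a dbo.OrdersArchive table with the same structure as the hypothetical dbo.Orders table:

```sql
-- Move orders older than a cutoff into the archive table so the
-- active table (and its indexes) stay small and queries stay fast.
BEGIN TRANSACTION;

INSERT INTO dbo.OrdersArchive (OrderID, CustomerID, OrderDate, Amount)
SELECT OrderID, CustomerID, OrderDate, Amount
FROM dbo.Orders
WHERE OrderDate < '20080101';

DELETE FROM dbo.Orders
WHERE OrderDate < '20080101';

COMMIT TRANSACTION;
```

For large volumes, this step itself can be batched, as described in the previous tip.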
Design SQL Server-based applications to make the fewest network trips between the client and SQL Server. One way to do this is to try to execute a single stored procedure for an entire transaction. Of course, keep this transaction as short as possible to prevent potential blocking. [2000, 2005, 2008] Updated 2-4-2009
If an OLTP transaction affects many tables, and if you can control the order in which these tables are accessed within the transaction, try to locate any references to the most used tables toward the end of the transaction. This helps to reduce the amount of time locks have to be held during the transaction, helping to boost concurrency. [2000, 2005, 2008] Updated 2-4-2009
As an application is being designed, try to identify those transactions that are time critical, such as looking up a customer or placing an order, and focus on designing the application to meet these critical needs. If the application has already been written, you can also take a similar path by identifying the time critical transactions and then focusing on how to improve them, one-by-one. [2000, 2005, 2008] Updated 2-4-2009
Avoid using implicit transactions. Use explicit transactions instead. This is because it is not always easy or possible to determine when an implicit transaction starts. It is possible for an implicit transaction to start without your knowledge, potentially reducing concurrency and causing other problems.
For example, if an application uses implicit transactions but the developer forgets to explicitly COMMIT or ROLLBACK a transaction, that transaction remains open, potentially still holding locks and preventing other users from accessing the data they need. If this situation repeats itself, more and more open locks accumulate and, at some point, the database becomes unusable.
To find out if there are any open transactions in a database, run the DBCC OPENTRAN command. [2000, 2005, 2008] Updated 2-4-2009
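For example, run the command in the database you want to check; it reports the oldest active transaction, if any, along with its SPID and start time (the database name below is hypothetical):

```sql
-- Switch to the database of interest, then check for open transactions.
USE MyDatabase;
DBCC OPENTRAN;
```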
Some tables that are accessed frequently may have columns that are accessed very frequently, but other columns that are rarely accessed. If the size of this table is causing performance problems, one option is to split the table into two tables. The first table will have the columns that are frequently accessed, and the second table will have the columns that are rarely accessed. When the columns in the second table are needed along with the frequently accessed columns, then the two tables can be joined. [2000, 2005, 2008] Updated 2-4-2009
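A sketch of such a vertical split, with hot columns in one table and rarely accessed columns in a second table sharing the same key (all names hypothetical):

```sql
-- Frequently accessed columns stay in the narrow, hot table.
CREATE TABLE dbo.Customer (
    CustomerID int          NOT NULL PRIMARY KEY,
    Name       varchar(100) NOT NULL,
    Email      varchar(200) NOT NULL
);

-- Rarely accessed, wide columns move to a companion table.
CREATE TABLE dbo.CustomerDetail (
    CustomerID int NOT NULL PRIMARY KEY
        REFERENCES dbo.Customer (CustomerID),
    Notes      varchar(8000) NULL,
    FaxNumber  varchar(30)   NULL
);

-- Join only when the rarely accessed columns are actually needed:
SELECT c.CustomerID, c.Name, d.Notes
FROM dbo.Customer AS c
JOIN dbo.CustomerDetail AS d
  ON d.CustomerID = c.CustomerID;
```

The narrow table packs more rows per page, so the common queries read far fewer pages.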
OLTP database applications should typically be fully normalized to remove redundant data. Redundant data causes SQL Server to perform unnecessary I/O, reducing performance. There are of course exceptions to this rule of thumb, which are discussed elsewhere on this web site. [2000, 2005, 2008] Updated 2-4-2009
Don’t over index your OLTP tables, as every index you add increases the time it takes to perform INSERTS, UPDATES, and DELETES. There is a fine line between having the ideal number of indexes (for SELECTs) and the ideal number for data modifications. [2000, 2005, 2008] Updated 2-4-2009
If you have a very active OLTP server application with many INSERTS, UPDATES, and DELETES, it is possible that the default “recovery interval” of 0 (which means that SQL Server determines the appropriate recovery interval) may not be appropriate. If you are watching the performance of your server with the Performance Monitor and notice that you have regular periods of 100% disk-write activity (occurring during the checkpoint process), you may want to set the “recovery interval” to a higher number, such as 5. You will have to experiment to find the ideal number for your particular circumstances. [2000, 2005, 2008] Updated 2-4-2009
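Changing this setting is done through sp_configure; 'recovery interval' is an advanced option, and the value is in minutes (5 here, as suggested above):

```sql
-- Expose advanced options so 'recovery interval' can be changed.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set the maximum recovery time to roughly 5 minutes.
EXEC sp_configure 'recovery interval', 5;
RECONFIGURE;
```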
On tables that experience high levels of updates, consider adding a timestamp column to the table, if there isn’t one already. A timestamp column helps SQL Server perform optimistic concurrency control more efficiently. If a timestamp column is not available, SQL Server must compare the previously read values of all columns in a row to the row’s current values at the time of the update; with a timestamp column, only that single column needs to be compared. [2000, 2005, 2008] Updated 2-4-2009
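A minimal sketch of this technique, on a hypothetical dbo.Orders table (SQL Server updates the timestamp, also called rowversion, automatically on every modification to a row):

```sql
-- Add the timestamp column; its value is maintained by SQL Server.
ALTER TABLE dbo.Orders ADD RowVer timestamp NOT NULL;

-- Optimistic concurrency check: @OriginalRowVer holds the value the
-- client read earlier; the UPDATE succeeds only if the row is unchanged.
DECLARE @OrderID int, @OriginalRowVer timestamp;
-- (both values would be supplied by the application)

UPDATE dbo.Orders
SET ShipDate = GETDATE()
WHERE OrderID = @OrderID
  AND RowVer = @OriginalRowVer;

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by another user since it was read.';
```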
On tables that experience high levels of updates on indexed columns, consider creating those indexes using the PAD_INDEX option of the CREATE INDEX command. The PAD_INDEX option tells SQL Server how much space to leave open on each page in the intermediate levels of the index. If PAD_INDEX is not used, then the default behavior is that each intermediate index page is only given enough empty space to hold at least one row of the maximum size the index can have. For tables experiencing heavy index updates, this can cause the intermediate index pages to split often, causing unnecessary overhead.
To help prevent this intermediate index page splitting, indexes should be created with both the FILLFACTOR and the PAD_INDEX options. The PAD_INDEX option will take the value provided in the FILLFACTOR option, and apply it to the intermediate index pages. For example, if a FILLFACTOR of 90 is used in the CREATE INDEX command, then the PAD_INDEX will use this value, and only fill 90% of the intermediate index page space, instead of leaving enough space for just one new row.
But if your tables don’t have much UPDATE activity, then don’t use the PAD_INDEX option. If you set PAD_INDEX so as to leave lots of free space that is never needed, you only cause unnecessary disk I/O when SQL Server reads these pages, reducing performance. [2000, 2005, 2008] Updated 2-4-2009
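For example, using the SQL Server 2005 and later WITH ( ) syntax on a hypothetical index (on SQL Server 2000, the equivalent is WITH PAD_INDEX, FILLFACTOR = 90):

```sql
-- Leave 10% free space on leaf-level pages (FILLFACTOR = 90) and
-- apply the same fill to the intermediate index pages (PAD_INDEX),
-- reducing page splits on heavily updated indexed columns.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (FILLFACTOR = 90, PAD_INDEX = ON);
```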
Moving transactional data from a production OLTP database to another database for reporting purposes is a common way to prevent OLAP-type transactions from interfering with your production database.
Some of the more common ways to accomplish this goal are replication, a periodic batch process, SSIS, log shipping, and even backup and restore. Each of these has its pros and cons.
In many cases, reports don’t have to be up-to-date to the most current transaction, and periodic refreshes, like those listed above, from the production database to a reporting database, are all that is needed. But sometimes, reporting needs dictate that the data must be up-to-the-second. If this is the case, then periodic refreshes may not work.
One option would be to run time-critical reports directly against the production database, and non-time-critical reports against a reporting database. Assuming the time-critical reports won’t interfere with production, this may be an acceptable option. But if running those reports does interfere with production, then your options are limited.
One option you may want to consider is to use triggers to automatically move the required transactional data from the production database to a reporting database. This would ensure that the reporting database has up-to-the-second data for time-critical reports. Of course, adding triggers adds overhead to the production database, but it may be less than the cost of running the time-critical reports on the production database. If you face this situation, you will probably want to test thoroughly to find out which option works best for you. [2000, 2005, 2008] Updated 2-4-2009
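A sketch of such a trigger, assuming the reporting database is named ReportDB, lives on the same instance, and contains a matching dbo.Orders table (all names hypothetical):

```sql
-- Copies each newly inserted order into the reporting database as
-- part of the original INSERT, keeping reports up-to-the-second.
CREATE TRIGGER trg_Orders_CopyToReporting
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO ReportDB.dbo.Orders (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM inserted;
END
```

Note that the trigger runs inside the original transaction, so any failure in the reporting insert rolls back the production insert as well; this is part of the overhead trade-off described above.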
While replication can be used to offload some work from the production server, it also puts some extra load on it. The log reader agent will be at work on the server, identifying those transactions that need to be replicated. Because of this, you must balance the performance degradation caused by adding replication against the performance boost provided by not having to do reporting on a production OLTP SQL Server. As much as possible, you will want to minimize the amount of overhead that replication incurs. One of the biggest things you can do in this case is to locate the distribution database on a dedicated server, not on the OLTP server. [2000, 2005, 2008] Updated 2-4-2009