Reducing SQL Server Locks

Use the least restrictive transaction isolation level possible for your user connection, instead of always using the default READ COMMITTED. To do this without causing other problems, carefully analyze the nature of each transaction to determine what effect a different isolation level will have.

One example of when not to use the default READ COMMITTED isolation level is when running queries to produce reports. Using the READ UNCOMMITTED isolation level stops the query from taking shared locks, speeding up both the query itself and other queries hitting the same tables. This, of course, will only work if your reports are tolerant of potentially “dirty” data, which is not a problem for many reports. [6.5, 7.0, 2000, 2005] Updated 10-16-2005
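As a minimal sketch of the reporting scenario above (the table and column names are hypothetical), the isolation level can be lowered for the session before the report query runs, then restored afterwards:

```sql
-- Hypothetical reporting query; Sales.Orders and its columns are illustrative.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT OrderDate, SUM(TotalDue) AS DailySales
FROM Sales.Orders
GROUP BY OrderDate;

-- Restore the default isolation level when the reporting work is done.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```

The same effect can be had for a single table in a single query with the WITH (NOLOCK) table hint, which avoids changing the isolation level for the whole session.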

*****

Try one or more of the following suggestions to help avoid blocking locks: 1) Use clustered indexes on heavily used tables; 2) Make appropriate use of non-clustered indexes; 3) Try to avoid Transact-SQL statements that affect large numbers of rows at once, especially INSERT and UPDATE statements; 4) Try to have your UPDATE and DELETE statements use an index; and 5) When using nested transactions, avoid commit and rollback conflicts. [6.5, 7.0, 2000, 2005] Updated 10-16-2005
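Suggestions 1 and 4 can be sketched together (all object names here are illustrative): an index on the column used by an UPDATE's WHERE clause lets SQL Server seek to and lock only the rows it changes, instead of scanning and locking far more of the table.

```sql
-- Illustrative names. A clustered index on the search key used by updates
-- lets SQL Server seek directly to the affected rows.
CREATE CLUSTERED INDEX IX_Orders_OrderID ON dbo.Orders (OrderID);

-- This UPDATE can seek on the index, locking only the row it changes;
-- a predicate on an unindexed column would instead force a scan that
-- touches (and may lock) many rows it never modifies.
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1001;
```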

*****

If there is a lot of contention for a particular table in your database, consider turning off page locking for that table, forcing SQL Server to use row-level locking instead. This helps reduce contention for rows located on the same page, but it also makes SQL Server work a little harder to track all of the row locks. How well this option works for you depends on the tradeoff between the reduced contention and the extra work SQL Server must perform to maintain many row locks. Testing will be needed to determine what is best for your particular environment. Use the sp_indexoption stored procedure to turn off page locking for any particular table. [7.0, 2000, 2005] Updated 10-16-2005
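A minimal sketch of the sp_indexoption call (the table name is hypothetical):

```sql
-- Illustrative table name. Disallowing page locks on this table's indexes
-- forces SQL Server to take row locks (or escalate to a table lock) instead.
EXEC sp_indexoption 'dbo.Orders', 'DisAllowPageLocks', TRUE;
```

In SQL Server 2005 the same setting is also exposed through ALTER INDEX ... SET (ALLOW_PAGE_LOCKS = OFF).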

*****

If table scans are regularly used to access data in a table, and the table doesn’t have any useful indexes to prevent this, then consider turning off both row locking and page locking for that table. This in effect tells SQL Server to use only table locks when accessing the table. It speeds access to the table because SQL Server no longer has to escalate from row locks to page locks to a table lock each time a table lock is needed to perform the table scan. On the negative side, doing this can increase contention for the table. Assuming the data in the table is mostly read-only, this should not be too much of a problem. Testing will be needed to determine what is best for your particular environment. [7.0, 2000, 2005] Updated 10-16-2005
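This again uses sp_indexoption, once for each lock type (the table name is hypothetical):

```sql
-- Illustrative table name. With both row and page locks disallowed,
-- SQL Server takes a table lock whenever it accesses this table.
EXEC sp_indexoption 'dbo.LookupCodes', 'DisAllowRowLocks', TRUE;
EXEC sp_indexoption 'dbo.LookupCodes', 'DisAllowPageLocks', TRUE;
```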

*****

Do not create temporary tables from within a stored procedure that is invoked by an INSERT INTO ... EXECUTE statement. If you do, locks will be taken on the syscolumns, sysobjects, and sysindexes tables in the tempdb database, blocking others from using tempdb, which can significantly affect performance. [6.5, 7.0, 2000] Updated 10-16-2005
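One way to avoid the pattern, sketched with hypothetical procedure and table names: create the temporary table in the calling batch instead, and have the stored procedure simply return its result set with SELECT.

```sql
-- Avoid: INSERT INTO ... EXECUTE where the procedure itself runs
-- CREATE TABLE #... internally, which holds locks on tempdb system tables.

-- Safer sketch: the caller owns the temp table; the (hypothetical)
-- procedure only SELECTs its results, creating no temp tables of its own.
CREATE TABLE #results (OrderID int, TotalDue money);

INSERT INTO #results
EXECUTE dbo.usp_GetOrderTotals;
```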

*****

To help reduce the amount of time it takes to complete a transaction (and thus reduce how long records are locked), try to avoid using WHILE statements or Data Definition Language (DDL) within a transaction. In addition, do not open a transaction while browsing data, and don’t SELECT more data than you need for the transaction at hand. For best performance, always keep transactions as short as possible. [6.5, 7.0, 2000, 2005] Updated 10-16-2005

*****

While nesting transactions is perfectly legal, it is not recommended because of its many pitfalls. If you nest transactions and your code fails to commit or roll back a transaction properly, it can hold locks open indefinitely, significantly impacting performance. [6.5, 7.0, 2000, 2005] Updated 10-16-2005
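The core pitfall can be seen in how @@TRANCOUNT behaves (the UPDATE is illustrative): an inner COMMIT does not actually commit anything, so locks are held until the outermost transaction ends, and a single ROLLBACK anywhere undoes all nested work.

```sql
BEGIN TRANSACTION;               -- @@TRANCOUNT = 1
    BEGIN TRANSACTION;           -- @@TRANCOUNT = 2
        UPDATE dbo.Orders SET Status = 'Hold' WHERE OrderID = 1001;
    COMMIT;                      -- only decrements @@TRANCOUNT to 1;
                                 -- nothing is durable and locks are still held
-- A ROLLBACK at this point would undo ALL of the nested work at once.
COMMIT;                          -- @@TRANCOUNT = 0; work commits, locks release
```

If the outer COMMIT is skipped (for example, an error path returns early), the connection keeps its locks until it is closed or rolled back.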

*****

By default in SQL Server, a transaction will wait indefinitely for a lock to be released before continuing. If you want, you can assign a lock timeout value so that long-running locks won’t cause other transactions to wait for long periods of time. To assign a lock timeout value, run the command SET LOCK_TIMEOUT length_of_time_in_milliseconds from within your connection. [7.0, 2000, 2005] Updated 10-16-2005
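A minimal sketch, using a 5-second timeout as an example value:

```sql
-- Wait at most 5 seconds (5000 ms) for any lock.
-- If the timeout is exceeded, SQL Server raises error 1222
-- ("Lock request time out period exceeded") to the caller.
SET LOCK_TIMEOUT 5000;

-- Check the current setting; -1 means wait indefinitely (the default).
SELECT @@LOCK_TIMEOUT;
```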

*****

Sometimes you need to perform a mass INSERT or UPDATE of thousands, if not millions of rows. Depending on what your are doing, this could take some time. Unfortunately, performing such an operation could cause locking problems for other users. If you know users could be affected by your long-running operation, consider breaking up the job into smaller batches, perhaps even with a WAITFOR statement, in order to allow others to “sneak” in and get some of their work done. [6.5, 7.0, 2000, 2005] Updated 10-16-2005
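The batching idea can be sketched like this (table and column names are illustrative; UPDATE TOP requires SQL Server 2005, while earlier versions can use SET ROWCOUNT to the same effect):

```sql
-- Illustrative batched UPDATE: each pass is a short transaction, so locks
-- are released between batches instead of being held for the whole job.
WHILE 1 = 1
BEGIN
    UPDATE TOP (5000) dbo.Orders   -- TOP in UPDATE is SQL Server 2005 syntax
    SET Archived = 1
    WHERE Archived = 0;

    IF @@ROWCOUNT = 0 BREAK;       -- nothing left to update

    WAITFOR DELAY '00:00:02';      -- pause so other sessions can acquire locks
END;
```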

*****

In SQL Server 7.0, when a replication snapshot is generated, SQL Server puts shared locks on all of the tables that are being published for replication. As you can imagine, this can affect users who are trying to update records in the locked tables. Because of this, you may want to schedule snapshots to be created during less busy times of the day. This is especially true if there are a lot of tables, or if the tables are very large.

In SQL Server 2000 and 2005, this behavior has changed. Assuming that all subscribers are SQL Server, then SQL Server 2000 and 2005 will use what is called concurrent snapshot processing, which does not put shared locks on the affected tables, helping to boost concurrency. [2000, 2005] Updated 10-16-2005
