Advanced SQL Server Locking

Lock Granularity

SQL Server has a number of lock types, and it can also choose different granularities for those locks.

If you’ve ever run sp_lock, or looked at the current activity window in Enterprise Manager, you’re sure to have seen at least four or five different lock types. I’ll go through each of them quickly.

  • Database (DB): This is a session lock — i.e., it’s not connected to any transaction, only to a user connected to a specific DB. This prevents a database from being dropped while one or more users are connected to it. Note, however, that SQL Server knows that master and tempdb can’t be dropped, so it doesn’t take a DB lock on those databases.
  • Table (TAB): This is the coarsest logical lock SQL Server can use. Often you’ll find intent locks on this level. (Feel insecure about intent locks? Read more here.)
  • Extent (EXT): These locks are not, as some believe, used to lock logical rows; they are taken when SQL Server allocates space — when it is about to create new tables or expand existing ones — and you might also see them when a file is grown.
  • Page (PAG): When SQL Server needs to lock a lot of rows at the same time, and the available lock slots are running out, it might choose to use page locks. More often, you’ll see intent locks at the page level. Up to and including SQL Server 6.5, this was the finest lock there was.
  • Key (KEY): The finest level of locking possible in SQL Server, together with the RID lock. KEY locks are used on indexes, and RID locks on heaps.
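You can see these lock types for yourself with sp_lock; a minimal sketch (the spid value is just an example):

```sql
-- List all locks currently held on the instance
EXEC sp_lock;

-- Or restrict the output to one connection (spid 52 is hypothetical)
EXEC sp_lock 52;
```

The Type column in the output shows the granularity (DB, TAB, EXT, PAG, KEY, RID), and the Mode column shows the lock mode (S, X, IS, IX, and so on).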

Now, when studying locking behavior in SQL Server 2000, I’ve found that SQL Server will, most of the time, value concurrency higher than speed. High concurrency means that many users can work at the same time; this is achieved with locks that are as small as possible, so as not to lock other users out of the data unnecessarily. High speed, on the other hand, can be achieved by using larger locks, since taking one large lock is faster than taking many smaller ones.

SQL Server 2000 can also escalate your locks if it notices that you are locking more and more rows. In that case, SQL Server takes a table lock and drops all the individual locks on the pages/keys/RIDs. Note that all escalations go straight to table locks; SQL Server will not escalate from RID/KEY locks to page locks.

When does SQL Server 2000 escalate your locks? Well, it has nothing to do with how large a percentage of the table you are locking; the only thing that matters is how many locks are in use in the system as a whole. When locks consume a large enough percentage of memory, SQL Server 2000 will try to escalate the locks of all transactions on all connections. It will also try to escalate your locks if the lock slots are running out. You can try this yourself by lowering the number of lock slots SQL Server will use, with the procedure sp_configure.
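As a sketch of lowering the lock slots, using the 'locks' option (an advanced option; to my knowledge 5000 is its minimum value, and 0, the default, means SQL Server manages lock memory dynamically):

```sql
-- 'locks' is an advanced option, so make it visible first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the number of lock slots at 5000
EXEC sp_configure 'locks', 5000;
RECONFIGURE;
```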

SQL Server will try to keep concurrency high by using locks that are as small as possible. But sometimes you know things about your data that SQL Server doesn’t, and this information changes which locking level you want to use. An example is a large look-up table that is only ever read. Instead of getting a lot of key locks, you want to lock the whole table in one go. Your options here are locking hints or sp_indexoption.

Locking hints are well known and well documented in BOL, so I won’t repeat them here. The system stored procedure sp_indexoption is a nice way to force SQL Server to use a specific lock size.
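As a quick reminder of the hint syntax (Orders here is Northwind’s table, which I use again later in this article):

```sql
-- Ask for one shared table lock instead of many key locks
SELECT * FROM Orders WITH (TABLOCK);

-- Or go the other way and insist on row-level locks
SELECT * FROM Orders WITH (ROWLOCK);
```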

With sp_indexoption, you can turn off locking at the row or page level. This means you don’t need locking hints: all locks on the table/index will be of the size you specify. Even though BOL claims the procedure is used to choose locking granularities on indexes, it can also be used on a heap; just pass the name of the table as the @IndexNamePattern parameter. Very nice, and fairly unknown.
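A sketch of sp_indexoption; the index and table names below are my assumptions, not from the original article:

```sql
-- Force table-level locks on an index by disallowing row and page locks
-- (PK_Orders is Northwind's primary key index on Orders)
EXEC sp_indexoption 'Orders.PK_Orders', 'AllowRowLocks', FALSE;
EXEC sp_indexoption 'Orders.PK_Orders', 'AllowPageLocks', FALSE;

-- On a heap (no clustered index), pass the table name itself
-- as @IndexNamePattern (MyHeapTable is hypothetical)
EXEC sp_indexoption 'MyHeapTable', 'AllowRowLocks', FALSE;
```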

The factors that matter do not end there, though. If you are using one of the two higher isolation levels (REPEATABLE READ or SERIALIZABLE), and don’t have any usable indexes on the selection criteria, SQL Server is forced to lock most of the table, if not all of it, to satisfy your query. I’ll give you an example here.
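The original code for this example is not reproduced here, so the following is a hypothetical reconstruction against Northwind; the ShipCity predicate is my assumption, chosen because that column has no index by default:

```sql
USE Northwind;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;

-- No usable index on ShipCity, so SQL Server must scan the table
-- and hold locks on the rows it reads to guarantee serializability
UPDATE Orders
SET    Freight = Freight
WHERE  ShipCity = 'London';

-- Leave the transaction open, then run sp_lock from another window
```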

Now, in another window, check the output of sp_lock. On my computer, I see 853 locks corresponding to the connection on which I ran the UPDATE. The Orders table in Northwind has 830 rows, so I’ve locked every row. Roll back the transaction, and try reissuing the update after creating an index, like this.
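Again, the original statements are not shown, so this is a sketch with an assumed column and index name:

```sql
-- After rolling back the earlier transaction, index the selection column
ROLLBACK TRAN;
CREATE INDEX IX_Orders_ShipCity ON Orders (ShipCity);

-- Reissue the update and compare the sp_lock output
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
UPDATE Orders
SET    Freight = Freight
WHERE  ShipCity = 'London';
```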

Now the sp_lock output shows only 25 locks. This is an often-overlooked part of performance tuning. Even if you are just using READ COMMITTED, the default isolation level, you’ll see a dramatic difference: in my test, the lock count went from 136 to 24 after creating the index.

