Will Transact-SQL lock a table? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Will Transact-SQL lock a table?

I have a SQL statement which deletes some records in a table. The table has more than 100 million records. The SQL takes more than 30 seconds to finish execution. Will this SQL lock the table? The SQL deletes records based on the primary key. Below is my SQL:
DELETE FROM DISPLAY_LOG WHERE LOG_ID < @primary_key
LOG_ID is the primary key of the table. The type of this field is INT.
Hi,
It can take a row lock or a page lock, or the lock can be escalated to a table lock. If you are having any locking issues, check these articles:
http://www.sql-server-performance.com/reducing_locks.asp
http://www.sql-server-performance.com/lock_contention_tamed_article.asp
At that data volume I believe this is normal, provided you are performing the database optimization tasks regularly. Often I see those tasks ignored while users still expect 100% performance during resource-intensive operations in SQL Server. Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing.
IMO, 30 seconds on 100 million rows is fast. Madhivanan
Failing to plan is Planning to fail
Based on the system resource usage for the delete, SQL Server may escalate the locks to table level, which will block other users. If you don't want to lock many rows at a time, you can use a WHILE loop to delete the data a fraction (say 1/10) of the rows at a time, but this will increase the execution time. A sketch of such a loop follows below.
Mohammed U.
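A minimal sketch of such a batched delete, assuming SQL Server 2000 (where SET ROWCOUNT caps the rows a DELETE touches; on 2005+ you would use DELETE TOP instead). The 1000-row batch size is only a placeholder to tune:

SET ROWCOUNT 1000                 -- cap each DELETE at 1000 rows
WHILE 1 = 1
BEGIN
    -- @primary_key is the cutoff value from the original statement
    DELETE FROM DISPLAY_LOG WHERE LOG_ID < @primary_key
    IF @@ROWCOUNT = 0 BREAK       -- nothing left below the cutoff
END
SET ROWCOUNT 0                    -- reset so later statements are unaffected

Each DELETE commits as its own transaction, so row locks are released between iterations and other sessions get a chance to run in between batches.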
Sometimes locking is good and at other times it creates performance issues; on the whole, proper normalization should take care of it. Also I would like to hear from the originator about the scale of the problem. Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing.
quote: Originally posted by Madhivanan
IMO, 30 seconds on 100 million rows is fast.

The table has 100 million rows, but 100 million rows aren't being deleted. So is 30 sec fast or slow? We don't know; how many rows are being deleted? Do as suggested and loop: delete about 1000 rows at a time. The delete will run longer, but performance will not be impacted. Hank
When I delete one record, other applications which are trying to insert or update records in the same table are able to continue their work. When I delete 10000 records, SQL Server locks the table for 30 seconds to 1 minute. I added the ROWLOCK hint, so the SQL became:
DELETE FROM DISPLAY_LOG WITH (ROWLOCK) WHERE LOG_ID < @primary_key
But the effect is the same when I delete 10000 records: other applications wait until they time out. When I reduce the deletion to 5000 records, other applications wait 5 to 30 seconds. It is fine when I use a WHILE loop for the deletion; other applications do not need to wait for the deletion to finish, but the execution of the SQL takes a few minutes.

Your choice: which is better, making other applications wait, or the delete taking longer? We don't know the app, so you have to decide. Any data modification statement (insert, update, or delete) is treated as a transaction. It doesn't matter if the statement affects 1 row or a million rows; SQL Server treats it as one transaction. A common problem is that many people think the more rows they can update in one statement the better. The downside is the locks that SQL Server places on rows and other resources, which block other processes until the statement completes; you can watch those locks for yourself, as shown below. Hank
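For example, while the long delete runs, a quick check from a second connection (the spid 57 below is a placeholder for the deleting session's id taken from sp_who2):

EXEC sp_who2    -- the BlkBy column shows which spid is blocking the waiters
EXEC sp_lock 57 -- a TAB resource with mode X means the delete has escalated to an exclusive table lock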
Hi, what is the isolation level of your database? Is it READ COMMITTED? Also read up on Snapshot Isolation and READ_COMMITTED_SNAPSHOT. Madhu
My database uses the default isolation level, READ COMMITTED. My SQL Server version is 2000. Snapshot Isolation is only available in SQL Server 2005, right?
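For reference only, since neither option exists on SQL Server 2000: on 2005 the row-versioning settings Madhu mentioned are enabled per database like this (MyDatabase is a placeholder, and the READ_COMMITTED_SNAPSHOT change requires no other active connections in the database):

ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON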
Why don't you try this: (a) create an SP to delete the records;
(b) inside this SP, set the isolation level to READ UNCOMMITTED first, i.e.
(c) delete the rows;
(d) set the isolation back to READ COMMITTED.
eg. (filling in the DELETE from the original post)
CREATE PROC TestSP
    @primary_key INT
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    DELETE FROM DISPLAY_LOG WHERE LOG_ID < @primary_key
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
END
What I mean is, since NOLOCK can't be applied in a DELETE, this is the only way to avoid locks. My understanding is that this isolation SET statement will affect only that connection/session. I still feel that somewhere I am logically wrong. Can you try this and let the group know the result? Thanks
Madhu
The isolation level applies to reads in a SELECT statement. A lock is still going to be placed on each row being deleted, which can escalate to a table lock if the number of rows is large enough. Hank
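Put differently, if dirty reads were acceptable, it would be the blocked readers, not the delete, that opt out of waiting. A sketch (the SELECT and its @some_log_id filter are hypothetical):

SELECT * FROM DISPLAY_LOG WITH (NOLOCK) WHERE LOG_ID = @some_log_id  -- reads without requesting shared locks, so it does not wait on the delete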
I tried the SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED and SET TRANSACTION ISOLATION LEVEL READ COMMITTED commands. SQL Server does not lock the table when deleting 5000 records, but the table is locked when the number of records deleted is 10000.
