SQL Server Performance

Will Transact-SQL lock a table?

Discussion in 'SQL Server 2005 General Developer Questions' started by enum5, Dec 22, 2006.

  1. enum5 New Member

    I have a SQL statement that deletes some records from a table. The table has more than 100 million records, and the statement takes more than 30 seconds to finish. Will this statement lock the table? It deletes records based on the primary key. Below is my SQL:

    DELETE FROM DISPLAY_LOG WHERE LOG_ID < @primary_key

    LOG_ID is the primary key of the table. The type of this field is INT.
  2. ranjitjain New Member

  3. satya Moderator

    At this data volume I would say this is normal, provided you are performing database optimization tasks regularly. Often I see those tasks being ignored while users still expect 100% performance during resource-intensive operations in SQL Server.

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided AS IS with no rights for the sake of knowledge sharing.
  4. Madhivanan Moderator

    IMO, 30 seconds on 100 million rows is fast

    Madhivanan

    Failing to plan is Planning to fail
  5. MohammedU New Member

    Depending on the system resources used to delete the data, SQL Server may escalate the locks to table level, which will block other users...

    If you don't want to lock many rows at a time, you can use a WHILE loop to delete the data in smaller batches (say, a tenth of the rows at a time), but this will increase the execution time...
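    A minimal sketch of this batching approach, assuming the DISPLAY_LOG table and @primary_key cutoff from the original post (the 5000-row batch size and the cutoff value are illustrative):

    DECLARE @primary_key INT
    SET @primary_key = 1000000  -- hypothetical cutoff value

    SET ROWCOUNT 5000           -- cap each DELETE at 5000 rows (on SQL Server 2005 you could use DELETE TOP (5000) instead)
    WHILE 1 = 1
    BEGIN
        DELETE FROM DISPLAY_LOG WHERE LOG_ID < @primary_key
        IF @@ROWCOUNT = 0 BREAK -- nothing left to delete
    END
    SET ROWCOUNT 0              -- restore the default (no limit)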


    Mohammed U.
  6. satya Moderator

    Locking is good at times, and at other times it creates performance issues; as a whole, proper normalization should take care of it. I would also like to hear from the originator about the scale of the problem.

    Satya SKJ
    Microsoft SQL Server MVP
    Contributing Editor & Forums Moderator
    http://www.SQL-Server-Performance.Com
    This posting is provided AS IS with no rights for the sake of knowledge sharing.
  7. HankS New Member

    quote:Originally posted by Madhivanan

    IMO, 30 seconds on 100 million rows is fast

    Madhivanan

    Failing to plan is Planning to fail

    The table has 100 million rows, but 100 million rows aren't being deleted. So is 30 seconds fast or slow? We don't know; how many rows are being deleted? Do as suggested and loop, deleting about 1000 rows at a time (as in the sketch above). The delete will run longer, but performance will not be impacted.

    Hank
  8. enum5 New Member

    When I delete one record, other applications that are trying to insert or update records in the same table are able to continue their work. When I delete 10000 records, SQL Server locks the table for 30 seconds to 1 minute. I added a ROWLOCK hint to the SQL, so it became this:

    DELETE FROM DISPLAY_LOG WITH (ROWLOCK) WHERE LOG_ID < @primary_key

    But the effect is the same when I delete 10000 records: other applications wait until they time out. When I reduce the deletion to 5000 records, other applications wait 5 - 30 seconds.

    It is fine when I use a WHILE loop for the deletion. Other applications do not need to wait for the deletion to finish, but the execution of the SQL takes a few minutes.
  9. HankS New Member

    Your choice: which is better, making other applications wait or the delete taking longer? We don't know the app, so you have to decide.

    Any data modification statement (insert, update, or delete) is treated as a transaction. It doesn't matter whether the statement affects 1 row or a million rows; SQL Server treats it as one transaction. A common problem is that many people think the more rows they can modify in one statement, the better. The downside is the locks that SQL Server places on rows and other resources, which block other processes until the statement completes.
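    To make that concrete: under the default autocommit mode, a standalone DELETE behaves as if it were wrapped in a transaction like the one below (the cutoff value is hypothetical), so every lock it takes is held until the final commit.

    BEGIN TRANSACTION
        DELETE FROM DISPLAY_LOG WHERE LOG_ID < 1000000  -- locks accumulate here...
    COMMIT TRANSACTION                                  -- ...and are released only here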

    Hank
  10. madhuottapalam New Member

    Hi,

    What is the isolation level of your database? Is it READ COMMITTED? Also read up on Snapshot Isolation and READ_COMMITTED_SNAPSHOT.
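    For reference, READ_COMMITTED_SNAPSHOT is turned on per database like this (SQL Server 2005 only; the database name below is a placeholder, and the statement requires that no other connections are active in that database):

    ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON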

    Madhu
  11. enum5 New Member

    My database uses the default isolation level, READ COMMITTED. My SQL Server version is 2000. Snapshot Isolation is only available in SQL Server 2005, right?
  12. madhuottapalam New Member

    Why don't you try this:

    (a) Create a stored procedure to delete the records
    (b) inside this SP, first set the isolation level to READ UNCOMMITTED, i.e.
    (c) delete the rows
    (d) set the isolation level back to READ COMMITTED.
    e.g.:
    CREATE PROC TestSP
        @primary_key INT
    AS
    BEGIN
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
        DELETE FROM DISPLAY_LOG WHERE LOG_ID < @primary_key
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    END
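    (Assuming a parameterized version like the above, you would then call it as EXEC TestSP @primary_key = 1000000, with a cutoff value of your choice.)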

    What I mean is, since NOLOCK can't be applied to a DELETE, this is the only way I can see to avoid the locks. My understanding is that this SET statement affects only that connection/session. I still feel that somewhere I am logically wrong. Can you try this and let the group know the result?

    thanks


    Madhu
  13. HankS New Member

    The isolation level applies to reads in a SELECT statement. A lock is still going to be placed on each row being deleted, and that can escalate to a table lock if the number of rows is large enough.
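    If you want to watch that escalation happen, one way is to run the delete in one connection and, from a second connection, list the current locks:

    EXEC sp_lock
    -- look for rows with Type = TAB against the table's object ID;
    -- on SQL Server 2005 you can query sys.dm_tran_locks instead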

    Hank
  14. enum5 New Member

    I tried the SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED and SET TRANSACTION ISOLATION LEVEL READ COMMITTED commands. SQL Server does not lock the table when deleting 5000 records, but the table is locked when the number of records deleted is 10000.
