SQL Server Performance

MSSQL Table Locking During Transaction - Probably... Maybe...

Discussion in 'SQL Server 2008 General Developer Questions' started by LFN, Dec 15, 2010.

  1. LFN New Member

After much searching, reading, re-reading, further searching and reading, I find myself at a dead end.
With so little detail (at least that I can find or understand) in the MSDN articles and BOL, I throw myself humbly at the mercy of those better experienced to help solve the problem I face. To anyone who donates their time to help me, useful or not, I sincerely thank you in advance.
I trust you will forgive my likely naive attempt to explain the problem...
If I could talk to my SQL Server directly, which I often do, and have it actually understand me, which it often doesn't, I would say:
"Please stop anything from changing the data I'm about to work with, and don't let anything add a record related to those I'm about to work with... but if anyone wants to read the data while I'm working, that's OK."
Essentially, then, I think I'm looking for a way to make a table read-only for the duration of my stored procedure's transaction.
    The basic flow of my sp is:
    -- Take a copy of the data I want to work with (OK for people to read the actual data)
    CREATE TABLE #TempTable (RowId int IDENTITY(1,1), Table1_PrimaryKey int, KeepRow bit)

    INSERT INTO #TempTable (Table1_PrimaryKey, KeepRow)
    SELECT Table1_PrimaryKey, 0
    FROM Table1
    WHERE (stuff that decides which rows I want)

    -- Start loop on records in #TempTable
    IF (I want this row) BEGIN
        UPDATE #TempTable SET KeepRow = 1
    END
    -- Carry on looping till done... then...

    DELETE FROM Table1
    WHERE Table1_PrimaryKey IN (
        SELECT Table1_PrimaryKey FROM #TempTable WHERE KeepRow = 0
    )
The bit that's making my head spin is that all the relevant rows I selected to start with form a kind of set: they are related by a foreign key to a parent table. I need to make sure not only that the rows I am working with are not modified or deleted, but also that no new rows can be added to Table1 (related to the same parent object) while I'm working.
It is, however, safe for people to look at the rows while I'm deciding about them (as I have copied them into a temp table to work with).
So... kind of make the table read-only while I'm working.
It feels like locking of some kind should help here... any ideas which ones? Have I made any sense at all?
Thank you again in advance - even if only for reading this far.
  2. Luis Martin Moderator

    Welcome to the forums!!
What about creating a database role containing all users and granting only "select" while you are running your sp, then, when finished, reverting to normal?
    Just a thought.
  3. judedsouza New Member

Hello. You can use both concepts, pessimistic as well as optimistic. Read this article - it might give you some insight: http://www.mssqltips.com/tip.asp?tip=1501. That approach uses SQL Server objects. However, if you want to control it yourself, you could implement your own concept of optimistic locking. E.g., say you're working on a customer account. You could create a table (or use the same table), and when you're going to work on that customer, store the customer id and user id in it; when other users try to update that customer, your update procedure would look into this table and warn them that the record is locked. When you're done with the customer, you would delete the record from that table. I personally do not like this solution. If you really want optimistic locking, then try using rowversioning. Jude
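As a rough sketch of the rowversion-based optimistic check suggested above (the table and column names here are illustrative, not from the thread):

```sql
-- Assumes Customers has a rowversion column named RowVer.
-- The UPDATE succeeds only if the row is unchanged since it was read.
UPDATE Customers
SET    Name = @NewName
WHERE  CustomerId = @CustomerId
  AND  RowVer = @OriginalRowVer   -- rowversion value read earlier with the row

IF @@ROWCOUNT = 0
    RAISERROR('The record was changed by another user.', 16, 1)
```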
  4. LFN New Member

Adriaan - yes, unfortunately everything I am doing in the loop is required to be covered by the transaction - it's a bit of a pain :)
judedsouza - you're right, I made a mistake in my post. I am indeed setting the "keep" flag in the #Temp table, not the "parent" table - thanks for the catch. I have just read the article you posted - thank you for that - and that is indeed how I do my concurrency checking.
The problem I was facing was more that it was a set of data that I needed to ensure was not modified, rather than the individual records themselves.
For example:
Child1 Child2 Child3
I needed to ensure that no extra children were created or removed in the child table while I was working (as well as not allowing modifications). It is OK for other processes to read the data while the transaction is running, though - it's valid data until the changes are committed (in one go at the end of the transaction).
Unfortunately I cannot guarantee that the parent table will always be set with something to mark the child data as "in use" (due to the nature of the applications using the db - it's not all in my control).
Therefore I could see no way of solving the issue with concurrency techniques. Even if I could track each child version, it wouldn't help with preventing people creating new children etc.
The best I've come up with so far is using "TABLOCK" with "HOLDLOCK" to freeze the table during the transaction - which is of course not ideal, but it's what I am currently testing. If I come up with anything better I'll post it.
This post is also active on MSDN if anyone's interested:
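For reference, the TABLOCK + HOLDLOCK approach described above might look like this (a sketch built from the code earlier in the thread; the ParentId column and @ParentId parameter are illustrative):

```sql
BEGIN TRAN

-- TABLOCK takes a shared lock on the whole table; HOLDLOCK keeps it until
-- COMMIT. Writers (including INSERTs of new children) are blocked, but
-- other READ COMMITTED readers can still SELECT.
INSERT INTO #TempTable (Table1_PrimaryKey, KeepRow)
SELECT Table1_PrimaryKey, 0
FROM   Table1 WITH (TABLOCK, HOLDLOCK)
WHERE  ParentId = @ParentId

-- ... decide which rows to keep ...

DELETE FROM Table1
WHERE  Table1_PrimaryKey IN
       (SELECT Table1_PrimaryKey FROM #TempTable WHERE KeepRow = 0)

COMMIT TRAN
```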
  5. satya Moderator

Just out of interest, what kind of application is using the SQL database that you are having these blocking/locking issues with?
As you have seen in the references above, this is managed by SQL Server automatically unless the code uses HINTs to control it. SQL Server takes into account the number of locks that are held on a particular scan, the number of locks that are held by the whole transaction, and the memory that is being used for locks in the system as a whole. Typically, SQL Server's default behavior results in lock escalation occurring only at those points where it would improve performance or when you must reduce excessive system lock memory to a more reasonable level.
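If you want to see what the lock manager is actually holding for a given session, one way is to query the locks DMV (the session id 53 below is just an example):

```sql
-- Current locks held or requested by one session
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = 53
```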
  6. LFN New Member

satya - I'm mostly using ASP.NET web applications and VB.NET desktop apps. Generally speaking I have very little to do with locking, as it's very rare that I need to move away from the default READ COMMITTED... it's just that in this scenario I could see no way around the problem. The challenge was about preventing new related rows from being added, rather than preventing updates to existing data.
  7. satya Moderator

    As you are using SQL 2008 (correct me if I'm wrong) then you can take advantage of ISOLATION LEVELS, just a reference from BOL about it:
    Choosing a transaction isolation level does not affect the locks acquired to protect data modifications. A transaction always gets an exclusive lock on any data it modifies, and holds that lock until the transaction completes, regardless of the isolation level set for that transaction. For read operations, transaction isolation levels primarily define the level of protection from the effects of modifications made by other transactions.
    A lower isolation level increases the ability of many users to access data at the same time, but increases the number of concurrency effects (such as dirty reads or lost updates) users might encounter. Conversely, a higher isolation level reduces the types of concurrency effects that users may encounter, but requires more system resources and increases the chances that one transaction will block another. Choosing the appropriate isolation level depends on balancing the data integrity requirements of the application against the overhead of each isolation level. The highest isolation level, serializable, guarantees that a transaction will retrieve exactly the same data every time it repeats a read operation, but it does this by performing a level of locking that is likely to impact other users in multi-user systems. The lowest isolation level, read uncommitted, may retrieve data that has been modified but not committed by other transactions. All of the concurrency side effects can happen in read uncommitted, but there is no read locking or versioning, so overhead is minimized.
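A minimal sketch of the highest level described above, SERIALIZABLE, applied to this thread's problem (the ParentId column and @ParentId parameter are illustrative; effective key-range locking assumes an index on the filtered column):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRAN

-- Under SERIALIZABLE this read takes key-range locks, so other sessions
-- cannot insert new rows matching the predicate until COMMIT.
SELECT Table1_PrimaryKey
FROM   Table1
WHERE  ParentId = @ParentId

-- ... work with the rows ...

COMMIT TRAN
```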
  8. sqldbarocks New Member

What are the isolation level recommendations?
  9. sqldbarocks New Member

I checked in sys.dm_exec_requests and by default it is 2. Would that be fine?
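For reference, the value 2 in that DMV maps to READ COMMITTED; the column can be checked for the current session like this:

```sql
-- transaction_isolation_level: 0 = Unspecified, 1 = ReadUncommitted,
-- 2 = ReadCommitted, 3 = RepeatableRead, 4 = Serializable, 5 = Snapshot
SELECT session_id, transaction_isolation_level
FROM   sys.dm_exec_requests
WHERE  session_id = @@SPID
```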
  10. satya Moderator

SQLdbarocks, it depends on what kind of transaction you are running. At a high level, these problems are due to multiple processes trying to access or modify the same data while SQL Server's lock manager ensures data integrity. They can be addressed by a number of techniques, which may include database design, consistent data access in stored procedures, shortening the transaction length, issuing dirty reads, etc.
  11. Adriaan New Member

    Couple of observations -
    You're keeping the transaction open while looping, and updating on a row-by-row basis, before doing the delete. This looping is time-consuming, so unless the condition for setting the KeepRow column depends on other rows in the same transaction, you should try to make the update set-based to reduce the run time.
    I would assume that you would be setting the delete flag on your temp table, but you're setting the flag on the "parent" table.
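Adriaan's set-based suggestion might look roughly like this (the keep condition is a placeholder for the per-row logic in the loop):

```sql
-- One set-based UPDATE instead of a row-by-row loop
UPDATE t
SET    KeepRow = 1
FROM   #TempTable t
JOIN   Table1 t1 ON t1.Table1_PrimaryKey = t.Table1_PrimaryKey
WHERE  (condition that decides which rows to keep)
```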
  12. RamJaddu Member

    This should do the trick... you should update the main table right after BEGIN TRAN; it holds an exclusive lock on the table until you finish the transaction:

    BEGIN TRAN

    --- the update below will hold an exclusive lock on the table until the transaction completes
    --- code begin
    UPDATE Table1 SET Table1_PrimaryKey = Table1_PrimaryKey
    --- code end

    -- Take a copy of the data I want to work with (OK for people to read the actual data)
    CREATE TABLE #TempTable (RowId int IDENTITY(1,1), Table1_PrimaryKey int, KeepRow bit)

    INSERT INTO #TempTable (Table1_PrimaryKey, KeepRow)
    SELECT Table1_PrimaryKey, 0
    FROM Table1
    WHERE (stuff that decides which rows I want)

    -- Start loop on records in #TempTable
    IF (I want this row) BEGIN
        UPDATE #TempTable SET KeepRow = 1
    END
    -- Carry on looping till done... then...

    DELETE FROM Table1
    WHERE Table1_PrimaryKey IN (
        SELECT Table1_PrimaryKey FROM #TempTable WHERE KeepRow = 0
    )
    COMMIT TRAN
