client-side isolation level | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


I am developing an application with VB6 as the front-end and SQL Server 2000 as the back-end. I have a master-detail form, like Orders and OrderDetails. My question is: how can I let one user modify an order with more than one row, while not allowing other users to modify that specific order?

SET TRANSACTION ISOLATION LEVEL allows you to modify the isolation level. I don't really recommend doing this, though. You might like to read about the UPDLOCK hint, which allows you to place an update lock on the records you select (i.e., the order and its order details). This will let other users read the data, but will block them from modifying it. You could also implement your own custom locking, which would place the order number into a table (and then check it was inserted successfully) to indicate a user is editing it. This way has the advantage that you can display an error message if you discover the record is already being edited, rather than simply blocking until the other user has finished editing.
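A minimal sketch of the UPDLOCK approach, assuming the Orders/OrderDetails tables and an @OrderNo parameter from the original post (the hints must run inside a transaction for the locks to be held):

```sql
-- Select the order with an update lock: other sessions can still read the
-- rows, but cannot take their own update/exclusive locks until we commit.
BEGIN TRANSACTION

SELECT *
FROM Orders WITH (UPDLOCK)
WHERE OrderNo = @OrderNo

SELECT *
FROM OrderDetails WITH (UPDLOCK)
WHERE OrderNo = @OrderNo

-- ... perform the edits here ...

COMMIT TRANSACTION
```

Note that the update locks are released at COMMIT, so this only helps while the editing work happens inside the same transaction and connection.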

If you have only one update query, you will not need to explicitly open a transaction.
Otherwise you can do it like this:

```sql
SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- this is the default

BEGIN TRANSACTION tUpdateOrder

-- Do your updates. Check for errors after every operation inside the
-- transaction and roll back when needed.
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION tUpdateOrder
    -- exit with error
END

-- another update
-- check again for errors

COMMIT TRANSACTION tUpdateOrder
```

Bambola.
On the client side, a user requests an order to update. From an OLE DB connection I simply send the requested data with:

```sql
SELECT *
FROM OrderDetails
WHERE OrderNo = @OrderNo
```

Now the data is at the client side, and I want to allow no one but the first user who requested this order to update it. The connection might be bounded or unbounded. If I use a serializable isolation level or a key lock on the data, I impose a lot of overhead on SQL Server; a modification might take minutes or even hours.
First I wanted to implement Chappy's solution, but "custom locking" can lead to bad situations: with an abnormal application end, an order might remain "custom-locked" forever. Has anybody had this problem? With a solution, of course.
Probably there are better solutions, but this is what comes to mind now. You could add a datetime field, and update it whenever a row (or order, according to what you need) is updated. Pass the value to your application together with all the other fields, and when you are updating, check to see if it has changed. If it has, you will know that the rows were updated.
If rows are updated frequently this might not be a good solution. Bambola.
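The datetime check above can be folded into the UPDATE itself, so the test and the write happen atomically. A sketch, assuming a hypothetical LastModified column and a @Quantity parameter for the value being changed:

```sql
-- Optimistic check: only update the row if it still carries the datetime
-- the client originally read. If another user changed the row first, no
-- row matches and @@ROWCOUNT comes back 0.
UPDATE OrderDetails
SET    Quantity      = @Quantity,
       LastModified  = GETDATE()
WHERE  OrderNo       = @OrderNo
  AND  LastModified  = @OriginalLastModified

IF @@ROWCOUNT = 0
    -- someone else updated the order since we read it; tell the user
    RAISERROR('Order was modified by another user.', 16, 1)
```

No locks are held between the read and the write, which is why long edit sessions cost SQL Server nothing; the price is that the second writer only finds out at save time.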
For handling this problem, the client application can be modified to avoid any overhead on SQL Server. A flag can be added to the table, which will be set by an application stored procedure when a record is read for modification. After updating the record, the flag will be reset. The application has to make sure, while reading records for modification, that it reads only records whose flags are not set. This way the application handles the logic of transaction isolation, and it is a good solution for unbounded connections. As an alternative to the previously mentioned suggestion in this forum of using an update datetime field in the table and passing it to the application, you can very well use a timestamp field in place of the datetime field. Gaurav
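One way to sketch that flag-based stored procedure, assuming a hypothetical EditFlag bit column on the Orders table. Claiming the flag and testing it happen in a single UPDATE, so two users cannot claim the same order at once:

```sql
-- Claim the order for editing: succeeds only if nobody holds the flag.
UPDATE Orders
SET    EditFlag = 1
WHERE  OrderNo  = @OrderNo
  AND  EditFlag = 0

IF @@ROWCOUNT = 1
    SELECT * FROM Orders WHERE OrderNo = @OrderNo  -- claimed; return the data
ELSE
    RAISERROR('Order is being edited by another user.', 16, 1)

-- The update procedure saves the changes and then resets the claim:
-- UPDATE Orders SET ..., EditFlag = 0 WHERE OrderNo = @OrderNo
```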
Gaurav – Take a situation in which one user reads the row, and then goes to have a coffee break (happens here in Italy all the time… ;-) ). Now this row is locked for all other users trying to read it.

However, with the datetime, all of them can read it, and each one of them can modify it, provided no one else has since they read it.

Bambola.
quote:Originally posted by daidaluus
The connection might be bounded or unbounded. If I use a serializable isolation level or a key lock on the data, I impose a lot of overhead on SQL Server; a modification might take minutes or even hours.

Because of the mentioned constraint of the update happening over minutes or hours and the connection being unbounded, I suggested application-level locking of the row. Also, the requirement is to lock the row until the modification is made, so it does not matter if the change takes hours. Gaurav
quote: now data is at client side and i want to allow no one but the first user who requested this order to update it.

In this case you are right, Gaurav. I was not paying enough attention to that limit. :)

I'd only add that if you are using the flag, you might want to have some procedure that resets the flags after a certain amount of time, or you will end up with "dead" rows. In this case you would need both fields.

Bambola.
Dead rows can be removed by having a job that rolls back the flag after a certain amount of time. This time can be configured in the job, and the application also has to take care of this. Gaurav
Not sure I understand what you mean with this flag scenario. Setting a flag when the record is read, then unsetting it when it's updated: what if someone just reads the record and doesn't want to update? Couldn't you be potentially stopping other users wanting to update? Going back to the 'OrderLocks' table idea: you could insert a record into this table as soon as a user starts modifying a record. This contains the order ID and the current date and time.
A job cleans out records from this table where the date <= now – N minutes.
The app then keeps the lock alive by refreshing it every N-1 minutes.
This means that if the client app crashes, the lock will eventually time out.
It also means many users can read the record concurrently, and it supports a user taking 4 hours to modify it.
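The three moving parts above could look something like this, with an assumed OrderLocks table (OrderNo, LockedBy, LockedAt) and a unique key on OrderNo as the real guarantee against double inserts:

```sql
-- 1) Acquire the lock when the user starts editing.
--    @@ROWCOUNT = 0 afterwards means someone else holds it.
INSERT INTO OrderLocks (OrderNo, LockedBy, LockedAt)
SELECT @OrderNo, @UserID, GETDATE()
WHERE NOT EXISTS (SELECT 1 FROM OrderLocks WHERE OrderNo = @OrderNo)

-- 2) Keep-alive, called by the client app every N-1 minutes.
UPDATE OrderLocks
SET    LockedAt = GETDATE()
WHERE  OrderNo = @OrderNo AND LockedBy = @UserID

-- 3) Cleanup job, scheduled periodically: expire stale locks.
DELETE FROM OrderLocks
WHERE  LockedAt <= DATEADD(minute, -@N, GETDATE())
```

If the app crashes and stops sending keep-alives, step 3 removes the row after N minutes and the order becomes editable again.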
Chappy, I agree with you on this idea of using a separate locks table, but I wanted to accomplish the same task using the existing table. The idea behind keeping a flag was that the application has to be intelligent enough to call different stored procedures when it is reading the data for display versus for updating. The sp used to collect data for display just reads it, and does not have to worry about whether the flag is set, as it is only showing a snapshot of the info in the table. So even if a user has read some record for modification, others can still read it. But when it comes to updating the data, the stored procedure handles the logic of reading the record only when the flag is not set, and also sets the flag once the record has been read. And I still stick to the logic of having a cleanup job. This way both objectives, update isolation and read concurrency, are met. This logic has to be handled by the application and cannot be accomplished automatically, as there is nothing like a SELECT trigger. Gaurav
Sounds like this can be reduced to a problem of a session for an order. If you can store, per order, who has it checked out, then you're sorted. This could be done simply by adding a DateTime and a UserID field to the order table.
Selects do this:
1) If the lock (DateTime) has expired, then set UserID.
2) If UserID is the same as the existing one, then update DateTime.

Updates do this:
1) If UserID does not match that in the table, then the update is not allowed. (This doesn't need to check DateTime, as that is done within the select statement.)
2) Expire DateTime.
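The steps above can be sketched in two statements, assuming hypothetical LockUserID/LockTime columns on the Orders table and a 30-minute lease (the timeout value is an illustration, not from the thread):

```sql
-- Select side: claim the order if it is free, the previous lease has
-- expired, or we already own it. Then return the row only on success.
UPDATE Orders
SET    LockUserID = @UserID,
       LockTime   = GETDATE()
WHERE  OrderNo = @OrderNo
  AND (LockUserID IS NULL
       OR LockTime < DATEADD(minute, -30, GETDATE())  -- lease expired
       OR LockUserID = @UserID)                       -- already ours

IF @@ROWCOUNT = 1
    SELECT * FROM Orders WHERE OrderNo = @OrderNo
ELSE
    RAISERROR('Order is checked out by another user.', 16, 1)

-- Update side: only the owner may save, and saving expires the lease.
UPDATE Orders
SET    Status     = @Status,    -- @Status stands in for the real edits
       LockUserID = NULL,
       LockTime   = NULL
WHERE  OrderNo = @OrderNo AND LockUserID = @UserID
```

Because both the check and the claim sit in one UPDATE, two users racing to check out the same order cannot both succeed.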
