SQL Server Performance

replication locking (sql2000 / transactional)

Discussion in 'Performance Tuning for SQL Server Replication' started by beckylou, Dec 7, 2005.

  1. beckylou New Member

    Currently, we have a table replicated from one database to another on the
    same server.

    If I update 127 records based on the primary key, the subscription process
    locks all the records until all 127 commands have been executed, seemingly
    regardless of how they were updated.

    eg.

    CREATE TABLE Orgs (
        OrganisationID INT PRIMARY KEY CLUSTERED,
        TimeLastUpdated DATETIME NOT NULL
    )

    DECLARE @SelectedOrgs TABLE ( OrganisationID INT PRIMARY KEY CLUSTERED )

    /* ... insert 127 organisations to update here ... */


    ---- Process Type 1

    UPDATE Orgs
    SET TimeLastUpdated = GETDATE()
    FROM Orgs
    INNER JOIN @SelectedOrgs SO ON Orgs.OrganisationID = SO.OrganisationID

    ---- Process Type 2

    WHILE ( blah )
    BEGIN
        -- one at a time in asc order
        SELECT @OrgID = OrganisationID FROM @SelectedOrgs WHERE ( blah )

        UPDATE Orgs SET TimeLastUpdated = GETDATE() WHERE OrganisationID = @OrgID
    END
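
    For completeness, the "blah" placeholders just stand in for our real filtering
    logic; a runnable sketch of the same row-at-a-time pattern (my placeholder loop
    condition, not the exact production code) looks roughly like this:

    DECLARE @OrgID INT
    SET @OrgID = 0

    WHILE EXISTS ( SELECT 1 FROM @SelectedOrgs WHERE OrganisationID > @OrgID )
    BEGIN
        -- take the next organisation in ascending key order
        SELECT @OrgID = MIN(OrganisationID)
        FROM @SelectedOrgs
        WHERE OrganisationID > @OrgID

        -- one single-row, autocommitted update per iteration
        UPDATE Orgs SET TimeLastUpdated = GETDATE() WHERE OrganisationID = @OrgID
    END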


    ----
    Using the default isolation level and no explicit transactions, the table in
    the subscriber database was locked until all the commands had been executed.

    What gives - both types lock the whole table on the subscriber.

    Surely it's not meant to hold the same locks on the subscriber as were taken
    when the data was updated on the publisher?
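
    If anyone wants to see it for themselves, watching the locks on the subscriber
    while the distribution agent applies the batch makes it obvious. Something as
    simple as this, run against the subscriber, shows the agent's spid holding
    exclusive locks until the whole batch commits, and everything else queueing up
    behind it:

    -- current locks, including those held by the distribution agent's spid
    EXEC sp_lock

    -- who is blocked behind whom (check the BlkBy column)
    EXEC sp_who2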


    Is there any way to stop it locking the whole thing while it processes the
    commands?
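
    The only knob I've spotted so far - untested - is the distribution agent's own
    batching. By default it groups roughly 100 publisher transactions (or 1000
    commands) into a single transaction at the subscriber, so every row touched in
    that batch stays locked until the batch commits. Shrinking the batch via the
    agent's -CommitBatchSize / -CommitBatchThreshold parameters should at least
    make it commit (and release locks) more often, e.g. appended to the
    distribution agent's job step command (server and database names below are
    placeholders):

    -Publisher <publisher> -PublisherDB <pubdb> -Distributor <distributor>
    -Subscriber <subscriber> -SubscriberDB <subdb>
    -CommitBatchSize 10 -CommitBatchThreshold 100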


    Ideally we'd upgrade to 2k5 and use row versioning... but that won't happen
    for a while.
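
    When the upgrade does happen, turning on row versioning for the subscriber
    database should just be a one-liner (database name here is a placeholder, and
    the statement needs the database free of other connections at the time):

    ALTER DATABASE SubscriberDB SET READ_COMMITTED_SNAPSHOT ON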


    ----

    I need to implement a similar thing on a much larger database that needs to be
    up 24/7 and can't be locked for long periods of time. Meanwhile, on the
    publisher, 5% of the data may be changed in one go - but if we can't get the
    subscriber to use optimistic locking, we can't use replication.
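
    As a stopgap we could make the readers on the subscriber tolerate the locks
    instead of waiting on them, i.e. dirty reads via NOLOCK, accepting that a
    report might see a half-applied batch (hypothetical reporting query, date made
    up):

    SELECT OrganisationID, TimeLastUpdated
    FROM Orgs WITH (NOLOCK)
    WHERE TimeLastUpdated >= '20051201'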


  2. satya Moderator

  3. beckylou New Member

    satya, thanks for your post, but the 2nd doesn't really help because the commands I was executing were not in an explicit transaction - each command ran in its own implicit (autocommit) transaction.

    And yet, when the data is sent to the subscriber, the whole dataset is locked (on the subscriber) until all the updates have been processed, regardless of whether they were batched or done singly.
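
    If it helps anyone dig further, counting the commands per publisher transaction
    in the distribution database shows how the updates were grouped before the
    agent replayed them (assumes the default distribution database name):

    -- replicated commands per publisher transaction
    SELECT xact_seqno, COUNT(*) AS command_count
    FROM distribution.dbo.MSrepl_commands WITH (NOLOCK)
    GROUP BY xact_seqno
    ORDER BY xact_seqno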
