SQL Server Performance

Multi-million update statement is slow because of trigger

Discussion in 'SQL Server 2005 T-SQL Performance Tuning' started by kkt, Dec 18, 2007.

  1. kkt New Member

    Hi pros,
    I have an update statement that updates about 14 million records. It involves a few joins and finally updates a single column in the target table. The execution takes 26 minutes to complete (on reasonably good hardware).

    I noticed an update trigger on my table. The logic of the trigger is wrapped in an IF UPDATE(columnX) statement, so it does nothing unless columnX is changed. Does it really? I am updating columnY, so the trigger should not affect me. Yet if I disable the trigger, my statement takes 5 minutes - a big difference (a sketch of that disable/enable test follows this post).

    I get the feeling that the presence of the trigger causes the inserted and deleted tables to be created in my tempdb no matter what. If that is true, then my update creates two copies of 14 million records. This is all just speculation, as I am not sure. Any comments will be appreciated. If my speculation is true, how can I avoid this? Keep in mind that my table is large and I cannot easily disable the trigger, because others may need to access it and update columnX.
    Thanks for any help
    Kosta
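    A minimal sketch of the disable/enable timing test mentioned above, using the trigger and table names that appear later in this thread (run it on a test copy, or keep in mind that other sessions could change AdditionalCost/CommissionValue2 while the trigger is off):

    -- Time the big UPDATE with the trigger off, then switch it back on.
    DISABLE TRIGGER [dbo].[ESFIItemEntry_UPD_Update_ESFILineItem] ON [dbo].[ESFIItemEntry]

    -- ... the 14-million-row UPDATE of columnY goes here ...

    ENABLE TRIGGER [dbo].[ESFIItemEntry_UPD_Update_ESFILineItem] ON [dbo].[ESFIItemEntry]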
  2. Adriaan New Member

    Does the body of the trigger code actually start with IF UPDATE(columnX), and does it have a set of BEGIN and END markers?
    I would expect that the inserted and deleted snapshots are ALWAYS created, regardless of whether a trigger is defined, to support the roll-back of transactions. In general, UPDATE queries take more time than INSERT queries; that's pretty much a fact of nature.
    To handle large updates gracefully, the common suggestion is to do it in batches of a couple of thousand rows at a time, using SET ROWCOUNT and a WHILE EXISTS check for the iteration.
  3. kkt New Member

    I am not sure I understand your last paragraph. In any case there is definitely a big difference. Here is the trigger. I do NOT update either of the two columns (AdditionalCost, CommissionValue2):

    ALTER TRIGGER [dbo].[ESFIItemEntry_UPD_Update_ESFILineItem] ON [dbo].[ESFIItemEntry] FOR UPDATE
    AS
    BEGIN
        IF UPDATE(AdditionalCost) OR UPDATE(CommissionValue2)
        BEGIN
            UPDATE LI
            SET LI.AdditionalCost = E.AdditionalCost,
                LI.CommissionValue2 = E.CommissionValue2
            FROM inserted E
            INNER JOIN ESFILineItem LI ON (E.fDocumentLineGID = LI.GID)

            IF @@ERROR <> 0
            BEGIN
                RAISERROR ('Update of LineItem through updated ValuatedCostValue and/or AdditionalCostFromValues of Item Entries failed.', 16, 1)
                ROLLBACK TRANSACTION
            END
        END
    END
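    As a side note, the same guard is often written with SET NOCOUNT ON and an early RETURN. This is only a restructuring sketch of the trigger above - it does not by itself avoid whatever cost the inserted and deleted tables add to the outer update:

    ALTER TRIGGER [dbo].[ESFIItemEntry_UPD_Update_ESFILineItem] ON [dbo].[ESFIItemEntry] FOR UPDATE
    AS
    BEGIN
        SET NOCOUNT ON              -- keep the trigger from sending extra "rows affected" messages

        -- Leave immediately when neither watched column appears in the SET list.
        IF NOT (UPDATE(AdditionalCost) OR UPDATE(CommissionValue2))
            RETURN

        UPDATE LI
        SET LI.AdditionalCost = E.AdditionalCost,
            LI.CommissionValue2 = E.CommissionValue2
        FROM inserted E
        INNER JOIN ESFILineItem LI ON (E.fDocumentLineGID = LI.GID)

        IF @@ERROR <> 0
        BEGIN
            RAISERROR ('Update of LineItem through updated ValuatedCostValue and/or AdditionalCostFromValues of Item Entries failed.', 16, 1)
            ROLLBACK TRANSACTION
        END
    END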
  4. Adriaan New Member

    The control-of-flow in the trigger is fine, so when neither of those two columns is updated it should finish almost instantly. (And you're not by any chance issuing the UPDATE command from the Jet engine, right?)
    To do a series of batch updates, use code like this:
    SET ROWCOUNT 1000
    WHILE EXISTS (SELECT * FROM target WHERE conditions_for_update AND column <> new_value)
    BEGIN
        UPDATE target
        SET column = new_value
        WHERE conditions_for_update AND column <> new_value
    END
    SET ROWCOUNT 0
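    As a side note, on SQL Server 2005 the same loop is usually written with UPDATE TOP (n), since SET ROWCOUNT is documented as deprecated for INSERT, UPDATE and DELETE from that version on. A sketch using the same placeholder names as the code above:

    WHILE 1 = 1
    BEGIN
        -- Change at most 1000 qualifying rows per iteration.
        UPDATE TOP (1000) target
        SET column = new_value
        WHERE conditions_for_update AND column <> new_value

        -- Stop once an iteration finds nothing left to update.
        IF @@ROWCOUNT = 0 BREAK
    END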
  5. kkt New Member

    I am performing my query from SQL Server Query Analyzer.
    I will try what you are proposing, but why would it be faster? My search condition is not simple and the table is large, so I will have to run several searches against a large table. I still do not understand why the trigger adds such a delay.
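    One way to check the tempdb suspicion from the first post directly is the session space-usage DMV that ships with SQL Server 2005, run from a second connection while the big update is executing. A minimal sketch - the session_id value 52 is a placeholder for the spid of the updating session, and both views report on tempdb:

    -- Pages this session has allocated in tempdb (internal objects = work tables,
    -- work files, sorts; user objects = temp tables and table variables).
    SELECT session_id,
           internal_objects_alloc_page_count,
           user_objects_alloc_page_count
    FROM sys.dm_db_session_space_usage
    WHERE session_id = 52

    -- Database-wide tempdb picture, including pages reserved for the row version
    -- store, which is what services a trigger's inserted and deleted tables in 2005.
    SELECT SUM(version_store_reserved_page_count) AS version_store_pages
    FROM sys.dm_db_file_space_usage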
  6. Adriaan New Member

    The proper way to compare update times with and without the trigger is of course to do it on a backup of your database: restore it once for testing with the trigger, then restore it again for testing without the trigger. Make sure that the computer that hosts the test database does not have additional workload during the tests.
  7. kkt New Member

    I measure on a test server. I do not restore between tests (why would I?). There is no workload on the server. The measurement is consistent in every run I try: it always runs a lot slower with the trigger on (I checkpoint, run DBCC DROPCLEANBUFFERS and so on - I even restart the SQL Server service).
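    For reference, a sketch of the cache-clearing steps mentioned above, run between timed attempts; DBCC FREEPROCCACHE and the timing switch are additions beyond what was listed, and all of this belongs on a test server only:

    CHECKPOINT                 -- flush dirty pages so the next command can evict everything
    DBCC DROPCLEANBUFFERS      -- start the next run with a cold buffer cache
    DBCC FREEPROCCACHE         -- optional: also throw away cached plans
    SET STATISTICS TIME ON     -- report CPU and elapsed time for the statement under test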
  8. Adriaan New Member

    The reason to restore between the two tests is to make sure that you will be updating the same number of rows, and that any issues related to file size have an identical starting point. If you've already run a big update, there will be plenty of unused space in your data file, hence the file growth for the second run of the update may not even kick in.
    Have you compared with/without trigger in batches?
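    A small sketch of how the free-space starting point can be checked before each timed run, so that file growth can be ruled in or out (both commands are standard in SQL Server 2005):

    EXEC sp_spaceused          -- reserved vs. unallocated space in the current database
    DBCC SQLPERF (LOGSPACE)    -- log size and percent used for every database on the instance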
