
Poor Delete Trigger Performance

Discussion in 'SQL Server 2008 T-SQL Performance Tuning' started by Blasterboy, Jun 30, 2010.

  1. Blasterboy New Member

    I currently have a problem with the performance of a delete operation:
    The operation deletes about a dozen rows from a table "VERSIONS" with the primary key "VersionID". I have a second table "DEPENDENCIES" that stores the dependencies between the versions of the first table in the columns "ParentID" and "ChildID". These columns hold foreign keys referencing the primary key "VersionID".
    When a row from the table "VERSIONS" is deleted, I want the rows from the table "DEPENDENCIES" that refer to that version to be deleted as well.
    I created a foreign key relationship with "Delete Rule: Cascade" between "VERSIONS.VersionID" and "DEPENDENCIES.ParentID".
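    For reference, here is a minimal DDL sketch of the setup described so far (table and column names are taken from this post; the column types, the BLOB column name, and the plain FK on "ChildID" are assumptions):
    CREATE TABLE VERSIONS (
        VersionID int NOT NULL PRIMARY KEY,  -- clustered by default
        Payload varbinary(max) NULL          -- hypothetical name for the large BLOB column
    );
    CREATE TABLE DEPENDENCIES (
        ParentID int NOT NULL REFERENCES VERSIONS (VersionID) ON DELETE CASCADE,
        ChildID  int NOT NULL REFERENCES VERSIONS (VersionID)
        -- Adding ON DELETE CASCADE on ChildID as well is rejected (error 1785:
        -- "may cause cycles or multiple cascade paths"), as explained next.
    );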
    Unfortunately, it is not possible to create a second one between "VERSIONS.VersionID" and "DEPENDENCIES.ChildID", which is why I implemented the delete rule in a trigger instead. This is the code for the trigger:
    USE [PerformanceP3]
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER TRIGGER [dbo].[T_VERSIONS_DTrig] ON [dbo].[VERSIONS] FOR DELETE AS
    SET NOCOUNT ON

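    -- Delete every DEPENDENCIES row whose ChildID matches a deleted VersionID
    -- (old-style comma join; equivalent to an inner join against the "deleted" pseudo-table)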
    DELETE DEPENDENCIES FROM deleted, DEPENDENCIES WHERE deleted.VersionID = DEPENDENCIES.ChildID
    The problem is that the execution of the trigger is very slow: it takes about 5 seconds on my local machine and about 22 seconds on another SQL Server instance in a remote location, although only 41 dependency rows are affected.
    SQL Server Profiler shows 7,791 reads and 1,800 writes for that operation.
    When I remove the command "DELETE DEPENDENCIES FROM deleted, DEPENDENCIES WHERE deleted.VersionID = DEPENDENCIES.ChildID" from the trigger, it is NOT significantly faster; but when I drop the trigger entirely, the operation runs in no time.
    So I assume that the time is spent preparing the trigger and the "deleted" table. This could indeed take some time, because the table "VERSIONS" has a column with rather large BLOBs.
    Is there any way to boost the performance here? I do not need the BLOB column in the trigger. Is there a way to exclude it from the "deleted" table?
  2. Blasterboy New Member

    Any ideas?
  3. satya Moderator

    Just going back to basics on triggers: since they execute every time a table is INSERTED, UPDATED, or DELETED (depending on how the trigger is created), they can produce a lot of overhead.
    In keeping with the performance tips in this regard, make sure the code that runs after firing is kept to a minimum to reduce execution overhead. For referential integrity I wouldn't recommend a trigger; you can take care of it with SQL Server's built-in DRI/check constraint features.
    I'm assuming that you may not have any associated rollback actions in your code, which would only add fuel to the problem. Rather, you can control this with error handling to find out the reason for a rollback.
    Coming back to the execution of the above trigger, I see a significant execution difference between your machine and the server. Think about database optimization: when it runs on the server, there are other processes involved, plus indexes on that table and other referential integrity handling. So I recommend that you COMPILE the table, which will compile the associated SPs, views & functions and might get you some better execution.
    I also recommend looking at the estimated execution plan for the code that is running slow. Keep us posted with your results...
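    For example, from a query window you can capture the estimated plan without executing the statement (a sketch; the VersionID value is just a placeholder):
    SET SHOWPLAN_XML ON;
    GO
    -- The DELETE below is compiled but NOT executed; its estimated plan is returned instead
    DELETE FROM VERSIONS WHERE VersionID = 42;
    GO
    SET SHOWPLAN_XML OFF;
    GO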
  4. Blasterboy New Member

    First of all, thanks for your answer.
    [quote user="satya"]In keeping with the performance tips in this regard, make sure the code that runs after firing is kept to a minimum to reduce execution overhead.[/quote]
    In this case the trigger actually does nothing: I deleted all lines of code except "SET NOCOUNT ON", and the trigger still needs 5 seconds on my local machine (SQL Server 2008 Developer Edition) and 12 seconds on the remote machine (SQL Server 2005 Developer Edition).
    [quote user="satya"]For referential integrity I wouldn't recommend a trigger; you can take care of it with SQL Server's built-in DRI/check constraint features.[/quote]
    I am using a DRI constraint for the foreign key "VersionID" <-> "ParentID". But it is not possible to create a second one on "VersionID" <-> "ChildID"; SQL Server displays the error message "Multiple delete paths". That's why I am using a trigger for the second one.
    [quote user="satya"]I'm assuming that you may not have any associated rollback actions in your code, which would only add fuel to the problem. Rather, you can control this with error handling to find out the reason for a rollback.[/quote]
    Since there is no code in the trigger, I am assuming that there is no rollback, although the complete action takes place in a transaction.
    [quote user="satya"]So I recommend that you COMPILE the table, which will compile the associated SPs, views & functions and might get you some better execution.[/quote]
    Sorry, I don't know how to COMPILE a table. Could you please tell me how to do this?
    [quote user="satya"]I also recommend looking at the estimated execution plan for the code that is running slow.[/quote]
    The execution plan shows that most of the time is spent on "Clustered Index Delete" (Estimated Operator Cost: 0.0401 (76%)), but nothing that would explain a 5-second wait.
  5. satya Moderator

    To compile a table you can use sp_recompile (see Books Online too), which will mark the associated SPs & views for recompilation.
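    For example (a one-line sketch using a table name from this thread):
    EXEC sp_recompile N'dbo.VERSIONS';  -- plans referencing the table are recompiled on their next run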
    With regard to the execution plan, I would guess the optimizer might be waiting on disk to return some information; the DMVs are helpful for getting a snapshot of such information.
    Take a look at sys.dm_exec_cached_plans, which can get you the size of the plan, the handle of the plan (which is binary), and the number of times the plan has been used. Also, sys.dm_exec_requests shows currently running processes, wait information, all the session settings for the processes, etc. It also includes the plan handle for every plan currently being run. If you join sys.dm_exec_requests to sys.dm_exec_cached_plans you can see exactly which processes are using which plans. Because this shows you the spid (the session_id column) and the plan_handle, you can see which users are running which commands.
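    A sketch of that join (the column list is per the documented DMVs; run it from a second session while the slow delete is executing):
    SELECT r.session_id, r.status, r.wait_type, r.total_elapsed_time,
           p.size_in_bytes, p.usecounts, t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_cached_plans AS p ON r.plan_handle = p.plan_handle
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;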
  6. Adriaan New Member

    Check the other tables - both those with FKs referring to this table, and the ones you're handling through this delete trigger. They may very well have delete triggers of their own.
    Then check if these other tables have cascading deletes, and if these tables are missing a unique constraint.
    Check if any of these triggers employs cursor-based logic (instead of set-based).
    Check for clustered indexes based on a column that is not an IDENTITY column, and/or on more than one column (a query sketch for these checks follows below).
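    A sketch of these checks against the catalog views (the table-name filter is an assumption based on this thread):
    -- Triggers defined on the tables involved
    SELECT t.name AS trigger_name, OBJECT_NAME(t.parent_id) AS table_name
    FROM sys.triggers AS t
    WHERE OBJECT_NAME(t.parent_id) IN ('VERSIONS', 'DEPENDENCIES');
    -- Foreign keys and their delete rules (CASCADE vs. NO_ACTION)
    SELECT fk.name, OBJECT_NAME(fk.parent_object_id) AS referencing_table,
           fk.delete_referential_action_desc
    FROM sys.foreign_keys AS fk;
    -- Index layout: type_desc distinguishes CLUSTERED from NONCLUSTERED
    SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, i.type_desc
    FROM sys.indexes AS i
    WHERE OBJECT_NAME(i.object_id) IN ('VERSIONS', 'DEPENDENCIES');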
    ***
    Some systems avoid deletions by having a bit column that flags a row as "deleted". As long as your application always filters out all rows that are flagged as "deleted", at least you don't have to worry about slow deletions. Perhaps then delete the rows as part of db maintenance.
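    A minimal sketch of that pattern (the IsDeleted column name and the literal ID are hypothetical):
    -- Flag column instead of physical deletes
    ALTER TABLE VERSIONS ADD IsDeleted bit NOT NULL DEFAULT 0;
    -- "Deleting" a version becomes a cheap update
    UPDATE VERSIONS SET IsDeleted = 1 WHERE VersionID = 42;
    -- Application queries must always filter out flagged rows
    SELECT VersionID FROM VERSIONS WHERE IsDeleted = 0;
    -- Physical cleanup can run later, as part of db maintenance
    DELETE FROM VERSIONS WHERE IsDeleted = 1;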
  7. Blasterboy New Member

    [quote user="Adriaan"]Check the other tables - both those with FKs referring to this table, and the ones you're handling through this delete trigger. They may very well have delete triggers of their own.[/quote]
    Good point. I tried removing the constraints and checked the operation again, but it did not perform significantly better. There are no other delete triggers in the involved tables.
    [quote user="Adriaan"]Check for clustered indexes based on a column that is not an IDENTITY column, and/or on more than one column.[/quote]
    I am using clustered indexes on all PK columns and non-clustered on all FK columns.
    Just to mention one point: The database is not really big. The VERSIONS table has 236 rows and the DEPENDENCIES table has 432 rows. Nothing that should be a problem.
  8. Adriaan New Member

    Clustered index on multiple columns? That may work, provided you have no non-clustered indexes.
    Do you have the auto-shrink option on for the db?
  9. Blasterboy New Member

    [quote user="Adriaan"]Clustered index on multiple columns? That may work, provided you have no non-clustered indexes.[/quote]
    I have one clustered index per table (on the PK) and non-clustered indexes on every FK. Although I must admit I do not really understand the difference.
    [quote user="Adriaan"]Do you have auto-shrink opion on for the db?[/quote]
    No. Here are the settings:
    Auto Close: True
    Auto Create Statistics: True
    Auto Shrink: False
    Auto Update Statistics: True
    Auto Update Statistics Asynchronously: False
  10. Adriaan New Member

    The question was whether you had a clustered index on multiple columns.
    What data type is (are) the column(s) in your PK?
  11. Blasterboy New Member

    [quote user="Adriaan"]
    The question was whether you had a clustered index on multiple columns.
    What data type is (are) the column(s) in your PK?
    [/quote]
    No, the clustered index is only on one column: the PK.
    The PK is an integer.
  12. Blasterboy New Member

    [quote user="satya"]To compile a table you can use sp_recompile (see Books Online too), which will mark the associated SPs & views for recompilation.[/quote]
    I recompiled the table, but that did not make a difference.
    [quote user="satya"]Take a look at sys.dm_exec_cached_plans, which can get you the size of the plan, the handle of the plan (which is binary), and the number of times the plan has been used. Also, sys.dm_exec_requests shows currently running processes, wait information, all the session settings for the processes, etc.[/quote]
    I don't understand how these views can help me to find the operation that takes so long.
    [quote user="satya"]Because this shows you the spid (the session_id column) and the plan_handle, you can see which users are running which commands[/quote]
    Since I'm working with a development server instance, I'm the only user on the database and on the server.
