SQL Server Performance

Use of Triggers & Performance Tradeoffs...

Discussion in 'Performance Tuning for DBAs' started by fletcherdunton, Sep 5, 2006.

  1. fletcherdunton New Member

    As a new team member on a project, I was asked to review the design of the application database. I noticed that each table had a TRIGGER on it (FOR INSERT, UPDATE, DELETE) which, when fired, copied the affected row from the application database to a second log database.

    The DBA and developers explained that they modeled this on a popular design pattern whose purpose is to provide an audit trail. I don't know the name of the pattern or its origin, but the log database has a corresponding table for each table in the application database.

    For example, if the application database were Northwind, then this design pattern would have a NorthwindLog database. Likewise, the tables in NorthwindLog would be EmployeesLog, OrdersLog, CustomersLog, etc., each with a schema similar to its Northwind counterpart.
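
    Roughly speaking, each trigger looks something like the sketch below. (The Orders/OrdersLog names and columns are just illustrative here, not our actual schema.)

        -- Illustrative sketch only: a simplified version of the per-table audit trigger.
        -- dbo.Orders, NorthwindLog.dbo.OrdersLog and their columns are made-up names.
        CREATE TRIGGER trg_Orders_Audit
        ON dbo.Orders
        FOR INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Copy inserted/updated rows into the log database.
            INSERT INTO NorthwindLog.dbo.OrdersLog (OrderID, CustomerID, Amount, AuditAction, AuditDate)
            SELECT i.OrderID, i.CustomerID, i.Amount, 'INSERT/UPDATE', GETDATE()
            FROM inserted AS i;

            -- Copy deleted rows (rows in deleted with no matching row in inserted).
            INSERT INTO NorthwindLog.dbo.OrdersLog (OrderID, CustomerID, Amount, AuditAction, AuditDate)
            SELECT d.OrderID, d.CustomerID, d.Amount, 'DELETE', GETDATE()
            FROM deleted AS d
            WHERE NOT EXISTS (SELECT 1 FROM inserted AS i WHERE i.OrderID = d.OrderID);
        END;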

    My concern is that the performance of the system is poor, and I need to assess just how much these triggers cost on each INSERT, UPDATE, and DELETE against the app database.

    However, before I jump into profiling this, I thought it would be worth opening this up for feedback in this forum:

    1 - Does anyone know more about this design pattern and examples of where it is used?
    (I suspect the project's Functional Spec was very loose, and the requirement for auditability may NOT actually need to cover every INSERT, UPDATE, and DELETE on every table.)

    2 - Purely from a DBA design standpoint in a transactional system, is this an accepted practice?

    3 - As far as triggers go, is there a best-practices body of knowledge for SQL Server?

    Your feedback is appreciated. Thank you.


    Fletcher Dunton
    MCAD, MCP
  2. bradmcgehee New Member

    I am not familiar with this pattern, but that doesn't mean a whole lot.

    Generally speaking, using triggers for auditing as you describe is a poor choice (in my opinion). If you need auditing, there are third-party programs that will do complete auditing for you without being such a performance burden. But I guess if you can't afford these third-party programs, then creating your own works; it's just expensive from a resource perspective.

    Does the database perform well now? If so, then perhaps the triggers aren't that much of a problem. But if you are having a performance problem, and you can't identify any other reason it is running poorly (such as missing indexes), then getting rid of the triggers and using a third-party product will boost the performance of the database.
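
    If you want a rough feel for how much the triggers themselves cost, one simple test is to time the same batch of inserts on a test copy with the trigger enabled and then disabled. The table, staging table, and trigger names below are made up purely for illustration:

        -- Rough comparison on a test copy: time the same batch insert with and without the audit trigger.
        -- dbo.Orders, dbo.OrdersStaging and trg_Orders_Audit are illustrative names only.
        SET NOCOUNT ON;
        DECLARE @t1 DATETIME, @t2 DATETIME;

        -- 1) Trigger enabled
        SET @t1 = GETDATE();
        INSERT INTO dbo.Orders (OrderID, CustomerID, Amount)
        SELECT OrderID, CustomerID, Amount FROM dbo.OrdersStaging;
        SET @t2 = GETDATE();
        PRINT 'With trigger: ' + CAST(DATEDIFF(ms, @t1, @t2) AS VARCHAR(20)) + ' ms';

        -- Remove the test rows before the second run
        DELETE dbo.Orders WHERE OrderID IN (SELECT OrderID FROM dbo.OrdersStaging);

        -- 2) Trigger disabled
        ALTER TABLE dbo.Orders DISABLE TRIGGER trg_Orders_Audit;

        SET @t1 = GETDATE();
        INSERT INTO dbo.Orders (OrderID, CustomerID, Amount)
        SELECT OrderID, CustomerID, Amount FROM dbo.OrdersStaging;
        SET @t2 = GETDATE();
        PRINT 'Without trigger: ' + CAST(DATEDIFF(ms, @t1, @t2) AS VARCHAR(20)) + ' ms';

        ALTER TABLE dbo.Orders ENABLE TRIGGER trg_Orders_Audit;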

    -----------------------------
    Brad M. McGehee, SQL Server MVP
  3. joechang New Member

    i have seen this in banking and financial applications
    many applications do not need this,
    if there is a reason for it in this specific case, then go ahead
    i would not do this just because the dba saw it in another app and assumed it was common practice

    my preference is to use stored procedures to implement this rather than triggers
    the major reason is that this allows bulk operations for high row count mods, instead of single row ops

    a minor reason is that triggers do not show up in the estimated execution plan, making it more difficult to diagnose performance mistakes
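
    something along these lines, purely as an illustrative sketch, the table and column names are made up

        -- illustrative sketch only, not from the actual app: the proc writes its own audit rows
        -- dbo.Orders and NorthwindLog.dbo.OrdersLog columns are made-up names
        CREATE PROCEDURE dbo.usp_Orders_UpdateAmount
            @CustomerID int,
            @NewAmount  money
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRANSACTION;

            -- the data modification, potentially many rows in one statement
            UPDATE dbo.Orders
            SET Amount = @NewAmount
            WHERE CustomerID = @CustomerID;

            -- the audit write for the same rows, a single set-based insert
            INSERT INTO NorthwindLog.dbo.OrdersLog (OrderID, CustomerID, Amount, AuditAction, AuditDate)
            SELECT OrderID, CustomerID, Amount, 'UPDATE', GETDATE()
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID;

            COMMIT TRANSACTION;
        END;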
  4. fletcherdunton New Member

    Thanks for the input. : ) The QA team really hasn't been able to fully test production performance yet. I'm new to the project, and when I reviewed the results of the last round of performance testing, I noted that the entire application failed miserably because of middle-tier issues. However, code changes are in process to remedy this, and then QA can stress test the entire system. The test requirements call for 1000 records a second as the pass criterion, so I'm looking at the database now and doing some project planning so we can remediate if needed.

    Fletcher Dunton
    MCAD, MCP
  5. joechang New Member

    1000 inserts/sec or reads/sec?

    1000 writes per sec will require a middle-advanced skill level
  6. fletcherdunton New Member

    1000 inserts/sec


    Fletcher Dunton
    MCAD, MCP
  7. fletcherdunton New Member

    Oops. I meant 1000 inserts per minute. (Sorry)

    Fletcher Dunton
    MCAD, MCP
  8. FrankKalis Moderator

    quote:Originally posted by joechang

    i have seen this in banking and financial applications
    many applications do not need this,
    if there is a reason for it in this specific case, then go ahead
    i would not do this just because the dba saw it in another app and assumed it was common practice

    my preference is to use stored procedures to implement this rather than triggers
    the major reason is that this allows bulk operations for high row count mods, instead of single row ops

    a minor reason is that triggers do not show up in the estimated execution plan, making it more difficult to diagnose performance mistakes

    And even in those banking and financial applications there is no need for such an audit trail on each and every single table. Typically it is implemented for legal reasons only on the most sensitive and critical transactional tables, which reduces the audited tables to nothing but a handful in most environments.

    --
    Frank Kalis
    Moderator
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.de
  9. fletcherdunton New Member

    Great points and a nice forum. Thank you for your brevity and rigor. Best regards to all.

    Fletcher Dunton
    MCAD, MCP
