
Recovery model and performance

Discussion in 'Performance Tuning for DBAs' started by buckley, Feb 11, 2003.

  1. buckley New Member

    Hi all,

    I am currently optimizing our SQL Server because the CPU regularly maxed out at 100%. I identified the bad queries using the Profiler, but I am stuck with a question regarding the recovery model.

    I set this to simple a while ago because the transaction log grew to tremendous numbers. My question is: what's the best model from a *performance* viewpoint, if any? (I perform a daily backup, which is sufficient if there should be any data loss.)
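
    For reference, switching the model looks something like this (MyDB is just a placeholder — substitute your own database name):

    ```sql
    -- Check the current recovery model
    SELECT DATABASEPROPERTYEX('MyDB', 'Recovery');

    -- Switch to the simple recovery model
    ALTER DATABASE MyDB SET RECOVERY SIMPLE;
    ```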

    In some posts I read that it is good because there is no logging overhead. Others said it's bad because SQL Server must write many times to the data files. Can anybody shed some light on this, please?

    Many thanks
  2. satya Moderator

    For production databases, FULL recovery is always recommended. If you look at the recovery models topic in Books Online, all three models are explained there.

    Simple recovery permits high performance and reclaims log space, but in the event of a crash, recent changes will be lost and you can only recover from the last backup. FULL, on the other hand, gives you the capability to recover to any point in time.

    You should always maintain transaction log backups, which can be used in the event of a database crash.
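
    A typical schedule is a nightly full backup plus frequent log backups in between. As a sketch (MyDB and the paths are placeholders):

    ```sql
    -- Nightly full backup
    BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak';

    -- Frequent transaction log backups (only valid under FULL or
    -- BULK_LOGGED recovery; under SIMPLE there is no log to back up)
    BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';
    ```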


    Satya SKJ
  3. buckley New Member

    I understand the risk you take by not having a transaction log, but this is fine in my case. I take full backups every night, which is enough. I don't need "point in time" recovery.

    I am merely interested in the performance viewpoint. Is it slower because SQL Server has to truncate the log periodically? Or does it do this in a smart way, for example when the server is not busy?
  4. satya Moderator

    If the TLOG is not important, then you can use the SIMPLE recovery model; on a performance basis it helps for bulk copy operations.

    Apart from that, PERFORMANCE in other areas depends on db design and queries.


    Satya SKJ
  5. buckley New Member

    So I can assume that the simple recovery model is the best performance-wise of all three options (full, simple, bulk-logged), in both OLAP and OLTP environments.

    My only guess was that maybe the transaction log speeded up inserts/updates/deletes. And because the simple model keeps a minimal tran log, and so must truncate it a lot, I thought it could slow down performance.

    But someone could also say that the transaction log causes too much overhead.

    So simple gives the best performance if you just take full backups?
  6. bradmcgehee New Member

    No matter which recovery model you choose, transactions will always be written to the log. When a checkpoint occurs (about once a minute by default), what happens depends on your recovery model. If you choose simple, then the log is automatically truncated at every checkpoint. But if you use full, the log is only truncated when a transaction log backup is made.
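
    For what it's worth, the checkpoint frequency is governed by the server-wide 'recovery interval' option, and you can also trigger one by hand:

    ```sql
    -- Show the 'recovery interval' setting (0 is the default, which in
    -- practice means roughly a checkpoint every minute under load)
    EXEC sp_configure 'recovery interval';

    -- Force a checkpoint manually in the current database
    CHECKPOINT;
    ```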

    Based on this, my guess would be that the simple model might be more resource intensive than the full model, because a truncate occurs about every minute, versus the full model, which only truncates when a log backup is made. But in reality, I doubt you would be able to tell any significant difference between the two from a performance perspective. I think backups are more important than a little performance boost, and I do log backups every 15 minutes on my production servers. This also acts to keep my logs small. Sure, this does exact some performance hit, as a backup is done every 15 minutes, but it is more than acceptable.

    I would spend more of your time worrying about how to optimize your queries, identifying those that are causing your excessive CPU utilization.
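
    At the time of this thread, Profiler was the tool for that. On SQL Server 2005 and later you can also pull the top CPU consumers straight from the plan cache DMVs — a sketch:

    ```sql
    -- Top 10 statements by average CPU time, from the plan cache
    SELECT TOP 10
           qs.total_worker_time / qs.execution_count AS avg_cpu_time,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS query_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_cpu_time DESC;
    ```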

    Brad M. McGehee
