SQL Server Performance

Restrict log file size

Discussion in 'Performance Tuning for DBAs' started by dp75, Apr 28, 2005.

  1. dp75 New Member

    [SQL 7.0 on Win2000]
    I have a large database:
    Data file: 225 GB
    Log file currently: 50 GB
    Both files have the default settings, i.e. autogrow by 10% and unrestricted maximum file size.

    There are frequent large updates and inserts (I'm not sure the SQL is optimized, but that isn't my main concern here).
    Because of this, the log file isn't shrinking automatically, even though there are explicit commits in the SQL.
    Therefore I'm manually using DBCC SHRINKFILE, which is a pain, as I have to run it multiple times.
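
    (For reference, this is roughly what I run each time; the database and logical log file names below are placeholders, not my real ones:)

    -- Roughly the manual shrink I keep re-running; names are placeholders
    USE MyBigDB
    DBCC SHRINKFILE (MyBigDB_Log, 10240)   -- target size in MB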

    My two issues are:
    1. Long durations to update millions of rows (I know this is expected, but there are bottlenecks somewhere).
    >> I am going to check fragmentation on the data and indexes; I'm also currently running an OS defrag (the disk is apparently 98% fragmented!).

    2. The log file settings specified above.
    If I set a max size for the log file, I understand that when it reaches this limit, no further DML can be performed and errors are raised.
    I know that log-size monitoring can be set up to alert before it reaches that maximum.

    Question: is there a way to make the log file loop around, i.e. when it is full, old records are deleted, so it never grows above the max size and never errors?
    I am using a simple recovery mode, i.e. trunc. log on chkpt., and so do not require the recovery capabilities of the log.
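
    (If I did cap it, I understand it would look something like this; the names and size are placeholders:)

    -- Sketch of capping the log file at roughly 60 GB; names are placeholders
    ALTER DATABASE MyBigDB
    MODIFY FILE (NAME = MyBigDB_Log, MAXSIZE = 60000MB)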

    TIA
  2. satya Moderator

    AUTO_SHRINK only reduces the size of the transaction log if the database is set to SIMPLE recovery model or if the log is backed up.

    Is it possible to break the large update into smaller transactions in order to keep the T-log size under control?
    Try to defragment the disk at the operating-system level too.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. dp75 New Member

    The update is already broken up into fairly small transactions, although I'm reviewing this.
    Also, as I said, I am currently running an NT defrag.

    Does anyone have an answer to my question above?
    I don't think there's a way to do it, but it would be great if there were a way to automatically limit the log file size.
  4. mmarovic Active Member

    The only way I can imagine is to break the multi-million-row updates into small batches (as suggested by Satya), e.g. 5000 updates per transaction; see the sketch after the quote below. Another thing that can be done (though I don't have experience with it) is to checkpoint more frequently. To control that you can play with the 'recovery interval' server setting. From BOL:
    quote:If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:
    The log becomes 70 percent full.

    The number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.

    Automatic checkpoints truncate the unused portion of the transaction log if the database is using the simple recovery model. The log is not truncated by automatic checkpoints if the database is using the full or bulk-logged recovery models.
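
    A minimal sketch of the batching idea, assuming a hypothetical table with a flag column that marks processed rows (SET ROWCOUNT works on 7.0, where UPDATE does not support TOP):

    SET ROWCOUNT 5000                     -- limit each UPDATE to 5000 rows
    WHILE 1 = 1
    BEGIN
        BEGIN TRANSACTION
        UPDATE dbo.BigTable
        SET    Processed = 1
        WHERE  Processed = 0              -- predicate shrinks as batches complete
        IF @@ROWCOUNT = 0
        BEGIN
            COMMIT TRANSACTION
            BREAK                         -- nothing left to update
        END
        COMMIT TRANSACTION                -- short transactions let checkpoints free log space
    END
    SET ROWCOUNT 0                        -- reset the session limit

    -- And the server setting for more frequent checkpoints (value is in minutes):
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'recovery interval', 1
    RECONFIGURE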
  5. dhilditch New Member

    If you have SIMPLE recovery mode, the log file should not keep growing. You can still use DBCC SHRINKFILE on the data file to shift pages back to the beginning of the file, and review your fill factor etc.

    For the duration problem, check whether the updates touch any columns contained in a clustered index; if so, you might want to consider changing your clustered index.

    Have you had a look at the size of your tempdb? Is this changing in size drastically when performing your operations?
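
    A quick way to check the current tempdb file sizes (sysfiles reports size in 8 KB pages):

    USE tempdb
    SELECT name, size * 8 / 1024 AS size_mb
    FROM   sysfiles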

    Dave.

    Small Businesses
    www.matiogi.com
  6. dp75 New Member

    My database has the equivalent of simple recovery mode (i.e. trunc. log on chkpt., etc.).
    Yes, tempdb is also growing drastically and not releasing space; this is another problem.
    I have even tried stopping SQL Server and deleting the tempdb files; this normally recreates a default tempdb, but it reverts to the previous size!
    I should be able to find a solution to this, though.

    My main question is:
    Does anyone know how to defragment data within SQL Server (not the OS defragmentation, which I've tried)?
    I have used DBCC SHOWCONTIG and DBCC DBREINDEX to defragment indexes (see below), which is fine, but that does not release space; it only helps performance.
    The last resort would be to drop and recreate the tables, but for a database of my size that is extreme.
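
    (For reference, this is what I've been running per table; the table name is a placeholder:)

    -- Per large table; the table name is a placeholder
    DECLARE @id int
    SELECT @id = OBJECT_ID('dbo.BigTable')
    DBCC SHOWCONTIG (@id)               -- report fragmentation
    DBCC DBREINDEX ('dbo.BigTable')     -- rebuild all indexes on the table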
  7. FrankKalis Moderator

    Do you have clustered indexes on your tables?
    Also, by any chance, did you at any time have a huge amount of BLOB data that you've since deleted?

    --
    Frank Kalis
    SQL Server MVP
    http://www.insidesql.de
  8. derrickleggett New Member

    quote:
    My database has the equivalent of simple recovery mode (i.e. trunc. log on chkpt., etc.).

    In addition to Frank's questions, what do you mean by that quote? There is no such thing as an "equivalent of simple recovery mode". Either you have it in simple or you don't.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  9. mmarovic Active Member

    It means his server is MS SQL Server 7.0 and he has 'trunc. log on chkpt.' turned on. That is the equivalent of the MS SQL 2000 simple recovery model.
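
    On 7.0 that option is set (or checked) with sp_dboption; the database name here is a placeholder:

    -- SQL 7.0 equivalent of the SIMPLE recovery model
    EXEC sp_dboption 'MyBigDB', 'trunc. log on chkpt.', 'true'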
  10. dhilditch New Member

    re: Question: is there a way to make the log file loop around, i.e. when it is full, old records are deleted, so it never grows above the max size and never errors?

    No, there is no way to do this.

    From my understanding, even in simple mode the transaction log is used for any in-flight transactions, but upon commit/rollback that log space becomes reusable and the file can be truncated back to the earliest open transaction. So maybe there are some long-running transactions in your code that never perform a ROLLBACK or COMMIT? This would also explain tempdb growing and never shrinking, as any temporary (or, more importantly here I think, derived) tables are only deleted when the connection that created them closes.

    Sounds like you need to look at your code.

    Dave.

    Small Businesses
    www.matiogi.com
  11. mmarovic Active Member

    quote:I have even tried stopping SQL Server and deleting the tempdb files; this normally recreates a default tempdb, but it reverts to the previous size!
    I should be able to find a solution to this, though.
    There is no need to delete the tempdb files; it is enough to stop and restart SQL Server. tempdb will be recreated with the initial size you defined for it. It is a good idea to set tempdb's initial size slightly above the maximum size you may need, because you don't want tempdb to autogrow during heavy processing. If the initial size is overestimated, you can change that setting (see the sketch below) and it will take effect the next time you restart SQL Server.
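
    A sketch of changing that setting; 'tempdev' is the default logical name of the tempdb data file, and the size value is illustrative:

    -- Becomes tempdb's initial size at the next restart
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 12000MB)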
  12. satya Moderator

    Ensure the drive hosting tempdb has the space required for it to grow; you can shrink tempdb if required. But this will be of little use, as the application queries will keep increasing the size of tempdb.

    Maybe you need to think about archiving the database on a regular basis to keep its size manageable.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  13. dp75 New Member

    Thanks for your comments. Some answers from me, and further info I've found:
    - There are no BLOB or large object types in the database
    - There are several clustered indexes, so this could be causing problems
    - I am unable to archive data, as the users require all of it for reporting purposes

    Regarding tempdb:
    In the last few days, since starting this thread, tempdb has remained at 11 GB, and I am still unable to shrink it (I've tried DBCC SHRINKFILE, CHECKPOINT, restarting SQL Server, and deleting the tempdb files).
    I've never seen this phenomenon before; has anyone else?

    I am looking at more drastic measures now, because I have recreated from scratch (using a generated script) one of the user databases, which was 33 GB and is now reduced to 0.5 GB!
    The other two databases on the server are 100 GB and 225 GB in size.
    So I will be planning carefully to recreate them.
    It appears that SQL Server just isn't releasing the space.
    As far as I know there are plenty of commits in the code.
    The system is generally OLTP, with batch updates and inserts overnight, which is definitely fragmenting the disk; an OS defrag has reported 94% fragmentation.
    It cannot defrag much because the database files are too large, so I will recreate the databases first.

    Also, I'm hoping to move to SQL 2000 in the near future.
  14. dhilditch New Member

    From EM, go to Databases | tempdb, right-click it and select All Tasks -> Shrink Database. From there, click the 'Files' button, find the oversized file using the drop-down list, then change the 'Shrink action' to 'Shrink file to:' and select something low. Click OK and you should be done. If you've tried shrinking and it still isn't getting smaller, it is actually shrinking to the minimum size it was last told to keep, and this method lets you specify a new minimum size.
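
    (Under the covers this boils down to something like the following; the logical file name is the default one and the target size is illustrative:)

    -- T-SQL equivalent of the EM dialog; target size is in MB
    USE tempdb
    DBCC SHRINKFILE (tempdev, 1024)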

    Hope this helps!

    Regards,
    Dave

    david@matiogi.com
    www.matiogi.com
  15. satya Moderator

    There is a significant difference between version 7 and 2000 in how the shrink operation works: in 7 it is not immediate, while in 2000 it performs the action immediately.

    Rather than doing it from Enterprise Manager, which can be flaky at times, I suggest scheduling the job during low-traffic hours.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  16. Richard Ding New Member

    You should also check whether you have open transactions lingering around in tempdb:

    DBCC OPENTRAN
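
    It also takes a database name, so you can check each database in question; the user database name below is a placeholder:

    DBCC OPENTRAN ('tempdb')    -- reports the oldest active transaction, if any
    DBCC OPENTRAN ('MyBigDB')   -- placeholder user database name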
  17. ghemant Moderator

    As Mr. Dave said, it's surely a problem in the code. About a year back I solved the same problem in my office by correcting/optimizing a stored procedure where some coders had mistakenly forgotten to commit/rollback the transaction. Please do check; I'm sure you will find something.

    hsGoswami
    ghemant@gmail.com
