SQL Server Performance

Transaction Log File Will not Shrink

Discussion in 'T-SQL Performance Tuning for Developers' started by wolffy, Oct 3, 2003.

  1. wolffy New Member

    Hi everyone,

    I have had this problem only a few times in my 7 years as a DBA. The transaction log file grows very large, usually over 5 GB, and it truncates after the backup process runs for the database and log, but it leaves the physical file size very large.


    Usually the shrinkfile in Enterprise Manager works fine, but when it doesn't work it is frustrating. I have tried everything using T-SQL and EM, and the only thing that will force MSSQL to give up that file size is to change the recovery option to Simple and then back to Full. I know this is dangerous since it is altering or wiping out the transaction log.

    This is not a typical problem for me, but it seems that when I make some index changes or defrag the indexes, it blows the transaction log up from 2 GB to 15 GB. The database size is 5 GB.
    I am using SQL Server 2000 Standard. Oh yes, and there is plenty of disk space.


    Has anyone ever had this problem before?


    Thanks for your help.


    Steven.
  2. Luis Martin Moderator

    I suppose the following are stupid questions, but you never know.

    1) Did you set the DB to single user (meaning you alone) when you tried to shrink?
    2) Same as 1), but after stopping and restarting SQL Server?



    Luis Martin

    ...Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
    Bertrand Russell
  3. wolffy New Member

    I cannot set the DB to single user since this is a 24x7 box. But I thought you didn't have to just to shrink the database, the transaction log, or their physical file size. Why am I able to change the recovery option, which changes the file size, without setting single user or DB only?

    What am I losing when I do that? Am I losing unrecoverable transactions in the transaction log?


    Thanks for your help,

    Steven
  4. satya Moderator

    If the shrink process is not working, there may be uncommitted transactions pending in the T-log. You can deal with these by issuing a CHECKPOINT to write them to disk, and then use the SHRINK process to reduce the size of the T-log.

    The DB does not necessarily have to be in single-user or read-only mode; SQL Server will take care of the process. Just make sure you don't issue a BACKUP statement/job while this is running.
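    As a rough sketch of that sequence ('MyDB' and the logical file name 'MyDB_Log' are placeholders, not names from this thread):

    ```sql
    -- Find the real logical log file name first:
    -- SELECT name FROM sysfiles WHERE status & 0x40 = 0x40
    USE MyDB
    GO
    -- Flush pending dirty pages to disk
    CHECKPOINT
    GO
    -- Then try to shrink the log file to a target size in MB;
    -- it will not shrink below the active portion of the log
    DBCC SHRINKFILE (MyDB_Log, 500)
    GO
    ```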

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  5. royv New Member

    I have found the SHRINKFILE functions to be very frustrating at times. This is what I do, and I hope one of the other forum members can verify whether this is safe. Once a month I detach the database, delete the log file, and then reattach the database without the log file, and it will create a new, very small transaction log file. BUT I ALWAYS FIRST take a full backup of the database. Look at sp_detach_db and sp_attach_db.
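    A sketch of that detach/re-attach approach ('MyDB' and the file path are placeholders; the single-file variant of attach is what rebuilds the log). As the poster says, take a full backup first — deleting the log loses point-in-time recovery:

    ```sql
    -- Detach the database (no active connections allowed)
    EXEC sp_detach_db 'MyDB'
    -- ...delete or rename the .ldf file at the OS level...
    -- Attaching with only the data file makes SQL Server build a new, small log
    EXEC sp_attach_single_file_db 'MyDB', 'D:\Data\MyDB.mdf'
    ```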


    "How do you expect to beat me when I am forever?"
  6. wolffy New Member

    royv,

    Thanks for the help. I have found that switching to Simple recovery and back to Full accomplishes almost the same result as detaching, and since this is a 24x7 database I cannot bring it down.
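    The recovery-model switch described above would look something like this ('MyDB' and 'MyDB_Log' are placeholders). Note that switching to SIMPLE breaks the log backup chain, so a full backup right after returning to FULL is prudent:

    ```sql
    ALTER DATABASE MyDB SET RECOVERY SIMPLE
    GO
    DBCC SHRINKFILE (MyDB_Log, 500)
    GO
    ALTER DATABASE MyDB SET RECOVERY FULL
    GO
    -- Re-establish a recovery base point after breaking the log chain
    BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_full.bak'
    GO
    ```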


    Satya,
    I tried issuing a CHECKPOINT before shrinking, which resulted in the usable T-log space growing. I tried CHECKPOINT, SHRINKFILE, and even BACKUP LOG, in different orders. None of them would shrink the file size, which is 50% bigger than I want it to be.

    What does CHECKPOINT really do? I thought it would write the transaction log to the database and shrink the DB, but it looks like I was wrong.

    Thanks for your help,

    steven.
  7. ykchakri New Member

    If the database is in FULL recovery mode, the checkpoint will write all the committed transactions to the database, but will not truncate them. The next log backup or an explicit truncate will truncate this log allowing the log file to shrink.

    If the database is in Simple recovery mode, then the committed transactions will be written and truncated.
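    In FULL recovery, the sequence that actually frees log space is therefore a log backup followed by a shrink; a minimal sketch (names and paths are placeholders):

    ```sql
    -- Back up (and thereby truncate) the inactive portion of the log
    BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_log.trn'
    GO
    -- Now the physical file has reusable space that SHRINKFILE can release
    DBCC SHRINKFILE (MyDB_Log, 500)
    GO
    ```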
  8. Luis Martin Moderator

    Suggestion:

    Set the transaction log's growth increment to a fixed number of MB, not a percentage.
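    That setting can be changed like this (a sketch; 'MyDB' and 'MyDB_Log' are placeholders):

    ```sql
    -- Grow the log in fixed 100 MB steps instead of by a percentage,
    -- so growth events stay predictable as the file gets large
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Log, FILEGROWTH = 100MB)
    ```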



    Luis Martin

    ...Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
    Bertrand Russell
  9. satya Moderator

    True, that may help too.
    BTW, what service pack level is this SQL Server on?

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  10. wolffy New Member

    SP3. I tried experimenting on other databases and found that some of them will not shrink the log file at all unless I change the options to Simple recovery and back to Full. This is true on different servers that have nothing in common. So I think the log file must be locked somehow, maybe in use? Does that make sense? I know I am guessing, but that is all I have left.

    If I limit the size of the tran log file, won't it just overwrite the old data, which will then be lost? What if I need to use the tran log for recovery?
  11. ykchakri New Member

    If the database is in 'FULL' recovery mode and if the tran log file is set to 'unlimited', the log file keeps growing until you take a backup of the log file. It is recommended in this case to take frequent log backups so that the log will be truncated and the space will be re-used.

    If you restrict the size of the log file in 'FULL' recovery mode, as soon as it hits this limit, all the subsequent data modification statements will fail with an error and the AUTO CHECKPOINT will be disabled. But, it will not overwrite the old data.
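    A cap like that is set per file; a sketch with placeholder names (and, per the explanation above, it should be paired with frequent log backups, since in FULL recovery hitting the cap fails modifications rather than overwriting the log):

    ```sql
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Log, MAXSIZE = 2000MB)
    ```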
  12. bilal New Member

    Hi.

    My friend, we are sailing in the same boat, and here is what I found out. It's a Microsoft bug. When you run a DBCC INDEXDEFRAG statement on your database, it causes the transaction log to blow up (in my case from 300 MB to 3 GB). I tried the Microsoft workaround but it doesn't work: http://support.microsoft.com/defaul...port/kb/articles/Q282/2/86.ASP&NoWebContent=1

    I am still investigating. It seems I have to back up the log with the TRUNCATE_ONLY option and then issue a DBCC SHRINKFILE (file, size) command.

    Let me know how you plan to deal with it.



  13. satya Moderator

    ... and you can also use DBCC SHRINKFILE with the {EMPTYFILE | NOTRUNCATE | TRUNCATEONLY} options anyway.
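    For reference, a sketch of those options against a placeholder file name (TRUNCATEONLY is the one most relevant to a log; the other two mainly apply to data files):

    ```sql
    -- Release unused space at the end of the file without moving data
    DBCC SHRINKFILE (MyDB_Log, TRUNCATEONLY)
    -- DBCC SHRINKFILE (MyDB_Data, NOTRUNCATE)  -- compact data without releasing space
    -- DBCC SHRINKFILE (MyDB_Data, EMPTYFILE)   -- migrate contents to other files in the filegroup
    ```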

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  14. Argyle New Member

    Here is a classic script I always use to shrink the logs in SQL 7/2000. Create it in master.


    create proc sp_force_shrink_log
    --------------------------------------------------------------------------------
    -- Purpose: Shrink transaction log of the current database in SQL Server 7.0.
    -- Author: Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000
    -- zanevsky@azdatabases.com
    --------------------------------------------------------------------------------
        @target_percent tinyint = 0,
        @target_size_MB int = 10,
        @max_iterations int = 1000,
        @backup_log_opt nvarchar(1000) = 'with truncate_only'
    as
    set nocount on

    declare @db sysname,
        @last_row int,
        @log_size decimal(15,2),
        @unused1 decimal(15,2),
        @unused decimal(15,2),
        @shrinkable decimal(15,2),
        @iteration int,
        @file_max int,
        @file int,
        @fileid varchar(5)

    select @db = db_name(),
        @iteration = 0

    create table #loginfo (
        id int identity,
        FileId int,
        FileSize numeric(22,0),
        StartOffset numeric(22,0),
        FSeqNo int,
        Status int,
        Parity smallint,
        CreateTime varchar(50)
    )

    create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )

    create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )
    insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40
    select @file_max = @@rowcount

    if object_id( 'table_to_force_shrink_log' ) is null
        exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

    insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount

    select @log_size = sum( FileSize ) / 1048576.00,
        @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
        @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from #loginfo

    select @unused1 = @unused -- save for later

    select 'iteration' = @iteration,
        'log size, MB' = @log_size,
        'unused log, MB' = @unused,
        'shrinkable log, MB' = @shrinkable,
        'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )

    while @shrinkable * 100 / @log_size > @target_percent
        and @shrinkable > @target_size_MB
        and @iteration < @max_iterations begin

        select @iteration = @iteration + 1 -- this is just a precaution

        exec( 'insert table_to_force_shrink_log select name from sysobjects
               delete table_to_force_shrink_log' )

        select @file = 0
        while @file < @file_max begin
            select @file = @file + 1
            select @fileid = fileid from #logfiles where id = @file
            exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
        end

        exec( 'backup log [' + @db + '] ' + @backup_log_opt )

        truncate table #loginfo
        insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
        select @last_row = @@rowcount

        select @log_size = sum( FileSize ) / 1048576.00,
            @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
            @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
        from #loginfo

        select 'iteration' = @iteration,
            'log size, MB' = @log_size,
            'unused log, MB' = @unused,
            'shrinkable log, MB' = @shrinkable,
            'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
    end

    if @unused1 < @unused
        select 'After ' + convert( varchar, @iteration ) +
            ' iterations the unused portion of the log has grown from ' +
            convert( varchar, @unused1 ) + ' MB to ' +
            convert( varchar, @unused ) + ' MB.'
        union all
        select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
        union all
        select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
        union all
        select 'Sometimes the log would not shrink to a size smaller than several megabytes.' where @unused <= 10
    else
        select 'It took ' + convert( varchar, @iteration ) +
            ' iterations to shrink the unused portion of the log from ' +
            convert( varchar, @unused1 ) + ' MB to ' +
            convert( varchar, @unused ) + ' MB.'

    exec( 'drop table table_to_force_shrink_log' )

    GO
    Standard Disclaimer: Use this at your own risk.

    /Argyle
  15. mitoja New Member

    Maybe I'm asking a stupid question, but why would anyone want to shrink the transaction log? Truncating it marks the inactive virtual logs, which will then be reused (so let the server reuse them and work inside the log). Shrinking it will just force a later auto-growth, which can cause a performance issue.
  16. satya Moderator

    What is the point of dusting off a (nearly) 5-year-old thread? [:)]
    I appreciate your insight on the subject, though.
  17. robkraft New Member

    I've been asking myself this question for the last 2 days. As a consultant, I support a SQL 2000 installation. Some of the databases that are about 1 GB have 30 GB transaction logs. At this point, I know the logs are not going to shrink on their own. They are not 24x7, so I could easily force them to shrink. I would go ahead and do this, but I first want to find out from my client whether they leave the log files this large for a reason. Perhaps they delete all the data and run a monstrous logged import every night, and they don't want the file growing on the fly. I don't think this is the case, but it is one plausible reason for wanting large, empty log files.
  18. satya Moderator

    Don't forget to keep the log size under control; scheduling a frequent log backup job is a good idea.
  19. jahzwolf New Member

    So I had the same issue and I didn't like any of the above solutions. So I tried something new: I tried to create a second log file, and it showed as being created in the GUI. But when I went back in to look, the new file had disappeared, the original LARGE file had now shrunk to the minimum size, and the new file was gone completely. It is a bug I am going to submit to Microsoft.
  20. Luis Martin Moderator

    Welcome to the forums!
    This thread started in 2003.
    Please open a new one in this place:
    http://sql-server-performance.com/Community/forums/35.aspx
  21. jahzwolf New Member

    Thank you for your forum. It helps a lot in the thick of the struggle.
  22. satya Moderator

    Jahzwolf, do you have AUTO SHRINK set on the database?
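    (For anyone checking this on their own database, a sketch with 'MyDB' as a placeholder name:)

    ```sql
    -- Returns 1 if AUTO_SHRINK is on, 0 if off
    SELECT DATABASEPROPERTYEX('MyDB', 'IsAutoShrink')
    GO
    -- AUTO_SHRINK is usually best left off on production databases
    ALTER DATABASE MyDB SET AUTO_SHRINK OFF
    GO
    ```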

Share This Page