SQL Server Performance

Huge size of Log file???

Discussion in 'General DBA Questions' started by Joozh, Apr 18, 2005.

  1. Joozh New Member

    The log file of one of my databases has grown to 60 GB. I have tried to shrink the database, but the log file size remains the same.

    Any clues how I can handle this? If I delete the log file, will it cause any problems?
    Will appreciate help.

    Many TIA
  2. mmarovic Active Member

    Do not delete the log file! Use the script below instead:
    alter database db set recover simple
    go
    dbcc shrinkfile (db_log, 100)
    go
    checkpoint
    go
    dbcc shrinkfile (db_log, 100)
    alter database db set recovery full
    go
  3. deepakontheweb New Member

    Take a full backup just after executing the above script.

    Deepak Kumar

    --An eye for an eye and everyone shall be blind
  4. Joozh New Member

    Thanks for the reply, but I am not clear on where/how to run the script.

    Suppose the name of my database is Sales and the name of the log file is therefore Sales_Log.

    Can you please guide me on where and how to run the above script?
  5. deepakontheweb New Member

    Just run this query in a Query Analyzer window. Do you know how to take a full backup after that?



    Use [sales]
    Go
    alter database [sales] set recover simple
    go
    dbcc shrinkfile (sales_log, 100)
    go
    checkpoint
    go
    dbcc shrinkfile (sales_log, 100)
    alter database [sales] set recovery full
    go


    Deepak Kumar

    --An eye for an eye and everyone shall be blind
  6. mmarovic Active Member

    I assumed your recovery model is full. 100 doesn't have to be the target size; better to pick a reasonably large target size and schedule transaction log backups frequently enough.
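    To pick a sensible target size, it helps to first check how much of the log is actually in use. A minimal sketch, using the thread's example database Sales:

    ```sql
    -- Show log size and percent used for every database on the server;
    -- a log that is mostly unused has room to shrink.
    DBCC SQLPERF(LOGSPACE)
    GO

    -- List the logical file names and current sizes (size is in 8 KB pages).
    USE Sales
    GO
    SELECT name, size * 8 / 1024 AS size_MB FROM sysfiles
    GO
    ```

    The logical name returned here (e.g. Sales_Log) is what dbcc shrinkfile expects, not the physical path.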
  7. FrankKalis Moderator

    Actually, you can use a rather brute-force method. sp_detach_db the db in question. Now you can delete the log, rename it, or do whatever else you want with the log file. SQL Server will issue a message like

    Device activation error. The physical file name 'd:\daten\sql\MSSQL\data\northwnd.ldf' may be incorrect.
    New log file 'D:\DATEN\SQL\MSSQL\DATA\Northwind_log.LDF' was created.

    when you attach the db using sp_attach_db, but as you can see, it will create a new log file.
    You should see this method only as a kind of last resort; it should never be everyday practice. And always, always make sure you have a recent backup.
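    Frank's sequence, sketched as T-SQL (the database name Sales and the data-file path are illustrative assumptions, not from his post):

    ```sql
    -- 1) Detach the database; no connections may be using it.
    EXEC sp_detach_db 'Sales'

    -- 2) At this point, delete or rename Sales_Log.ldf on disk.

    -- 3) Re-attach using only the data file; SQL Server creates a new log file.
    EXEC sp_attach_single_file_db @dbname = 'Sales',
        @physname = 'D:\Program Files\Microsoft SQL Server\MSSQL\Data\Sales.mdf'
    ```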

    --
    Frank Kalis
    SQL Server MVP
    http://www.insidesql.de
  8. Joozh New Member

    Originally posted by deepakontheweb:
    "Just run this query in query analyzer window.. do you know how to take full backup after that??" (script quoted above)

    When I try to run the above in Query Analyzer, it gives a message saying:
    Incorrect syntax near 'recover'

    :(
  9. mmarovic Active Member

    How about:
    alter database sales set recovery simple
  10. deepakontheweb New Member

    Use [sales]
    Go
    alter database [sales] set recovery simple
    go
    dbcc shrinkfile (sales_log, 100)
    go
    checkpoint
    go
    dbcc shrinkfile (sales_log, 100)
    alter database [sales] set recovery full
    go


    -- Use above coding.. lastly only 'Y' was missing from recovery

    Deepak Kumar

    --An eye for an eye and everyone shall be blind
  11. satya Moderator

  12. Joozh New Member

    Thanks everyone, but no luck. I changed the code and added the missing 'y', but now I get the message "could not locate the <...file....> in sysfiles".

    my log file is located in D:\Program Files\Microsoft SQL Server\MSSQL\Data\Sales_Log, so I changed the above code to:

    dbcc shrinkfile ("D:\Program Files\Microsoft SQL Server\MSSQL\Data\Sales_Log", 100)

    Any more clues please.

    P.S.
    Sorry for the botheration.
  13. deepakontheweb New Member

    Please do not change anything in the code below; just execute it all at once, anywhere in Query Analyzer.


    Use [sales]
    Go
    alter database [sales] set recovery simple
    go
    dbcc shrinkfile (sales_log, 100)
    go
    checkpoint
    go
    dbcc shrinkfile (sales_log, 100)
    alter database [sales] set recovery full
    go



    take a full backup of database after this..

    Deepak Kumar

    --An eye for an eye and everyone shall be blind
  14. Joozh New Member

    Tons of thanks everyone :)

    It appears that the problem is now solved. Special thanks to Deepak.

    Regards.
  15. mmarovic Active Member

    Frank's solution works too, and only a short downtime is needed.
  16. Joozh New Member

    Thanks mmarovic. Next time I run into a similar problem (though I hope not), I will also give Frank's suggestion a try.

    Just for clarity: Frank's suggestion is to
    1) Detach the database
    2) Delete the Log file
    3) Attach the database

    Is the above correct? Please confirm, especially whether I have omitted any important step.

    Regards.
  17. mmarovic Active Member

    Yes, that's correct. I prefer the solution you used because you can define the size you want to shrink the log file to, and no downtime is needed. With the detach/attach approach you will have a few seconds of downtime (if the initial log size is not really big).

    Don't forget to schedule transaction log backup.
  18. Joozh New Member

    Thanks for the confirmation.

    Suppose I use Deepak's suggestion and the problem gets solved. Do I still have to schedule backups of the transaction log? If so, why?

    Sorry if the question is silly.
  19. Argyle New Member

    As mentioned, please do not use the detach, delete log, attach procedure unless it's a last resort. Never do it on a critical system; there is no guarantee that it will work. We've had people do that here, the attach didn't work, and the database was marked suspect. Then you're in deep water and have to go through a long, unsupported process to bring it back online:
    http://myitforum.techtarget.com/articles/18/view.asp?id=7381

    There are other ways to shrink your log. Below is an example script you can run in the database.

    The log contains all changes in your database. You need to back it up to keep it from growing forever. For example, issue a full backup once per day and transaction log backups every four hours.
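    The schedule described above could be set up as two SQL Agent jobs running roughly this T-SQL (backup paths and the database name Sales are illustrative):

    ```sql
    -- Daily job: full database backup.
    BACKUP DATABASE Sales
    TO DISK = 'D:\Backup\Sales_full.bak'
    WITH INIT

    -- Every-four-hours job: transaction log backup.
    -- Each log backup truncates the inactive portion of the log,
    -- which is what keeps the log file from growing forever.
    BACKUP LOG Sales
    TO DISK = 'D:\Backup\Sales_log.trn'
    ```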

    Shrink stored procedure:
    use master
    go

    create proc sp_force_shrink_log
    --------------------------------------------------------------------------------
    -- Purpose: Shrink transaction log of the current database in SQL Server 7.0.
    -- Author: Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000
    -- zanevsky@azdatabases.com
    --------------------------------------------------------------------------------
    @target_percent tinyint = 0,
    @target_size_MB int = 10,
    @max_iterations int = 1000,
    @backup_log_opt nvarchar(1000) = 'with truncate_only'
    as
    set nocount on

    declare @db sysname,
    @last_row int,
    @log_size decimal(15,2),
    @unused1 decimal(15,2),
    @unused decimal(15,2),
    @shrinkable decimal(15,2),
    @iteration int,
    @file_max int,
    @file int,
    @fileid varchar(5)

    select @db = db_name(),
    @iteration = 0

    create table #loginfo (
    id int identity,
    FileId int,
    FileSize numeric(22,0),
    StartOffset numeric(22,0),
    FSeqNo int,
    Status int,
    Parity smallint,
    CreateTime varchar(50)
    )

    create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )

    create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )
    insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40
    select @file_max = @@rowcount

    if object_id( 'table_to_force_shrink_log' ) is null
    exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

    insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount

    select @log_size = sum( FileSize ) / 1048576.00,
    @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
    @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from #loginfo

    select @unused1 = @unused -- save for later

    select 'iteration' = @iteration,
    'log size, MB' = @log_size,
    'unused log, MB' = @unused,
    'shrinkable log, MB' = @shrinkable,
    'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )

    while @shrinkable * 100 / @log_size > @target_percent
    and @shrinkable > @target_size_MB
    and @iteration < @max_iterations begin
    select @iteration = @iteration + 1 -- this is just a precaution

    exec( 'insert table_to_force_shrink_log select name from sysobjects
    delete table_to_force_shrink_log')

    select @file = 0
    while @file < @file_max begin
    select @file = @file + 1
    select @fileid = fileid from #logfiles where id = @file
    exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
    end

    exec( 'backup log [' + @db + '] ' + @backup_log_opt )

    truncate table #loginfo
    insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount

    select @log_size = sum( FileSize ) / 1048576.00,
    @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
    @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from #loginfo

    select 'iteration' = @iteration,
    'log size, MB' = @log_size,
    'unused log, MB' = @unused,
    'shrinkable log, MB' = @shrinkable,
    'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
    end

    if @unused1 < @unused
    select 'After ' + convert( varchar, @iteration ) +
    ' iterations the unused portion of the log has grown from ' +
    convert( varchar, @unused1 ) + ' MB to ' +
    convert( varchar, @unused ) + ' MB.'
    union all
    select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
    union all
    select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
    union all
    select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10

    else
    select 'It took ' + convert( varchar, @iteration ) +
    ' iterations to shrink the unused portion of the log from ' +
    convert( varchar, @unused1 ) + ' MB to ' +
    convert( varchar, @unused ) + ' MB'

    exec( 'drop table table_to_force_shrink_log' )
    GO
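    A hypothetical call, once the procedure above has been created in master (parameter values are illustrative):

    ```sql
    USE Sales
    GO
    -- Shrink the log toward roughly 50 MB or 2% unused,
    -- giving up after the default 1000 iterations.
    EXEC sp_force_shrink_log @target_percent = 2, @target_size_MB = 50
    GO
    ```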

  20. derrickleggett New Member

    Here is a perfect example of WHY you only use attach/detach when absolutely necessary:
    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=48523

    :)

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  21. FrankKalis Moderator

    Murphy's law at work. :D

    --
    Frank Kalis
    SQL Server MVP
    http://www.insidesql.de
  22. vijaygarapati New Member

    Is taking a full database backup essential after running this script, or are there any workarounds?

    Use [sales]
    Go
    alter database [sales] set recovery simple
    go
    dbcc shrinkfile (sales_log, 100)
    go
    checkpoint
    go
    dbcc shrinkfile (sales_log, 100)
    alter database [sales] set recovery full
    go

    Can anyone tell me the logic involved in this script? Why are we using the shrinkfile command twice?

    Spare me if this sounds silly.

    Thanks,
    Regards
  23. Joozh New Member

    Thanks everyone for the various suggestions.

    The code solution from Argyle is really "Heavy Duty" stuff for me to absorb :) so I hope I will never run into such a scenario again ;)
  24. deepakontheweb New Member

    Vijay:

    alter database [sales] set recovery simple
    go
    dbcc shrinkfile (sales_log, 100)

    When we change the database recovery model to simple, the SQL engine discards all inactive transactions from the log (see the simple recovery model characteristics), and then we try to shrink the log file.

    checkpoint
    go
    dbcc shrinkfile (sales_log, 100)

    Issuing a checkpoint forces all dirty pages for the current database to be written to disk, so the log file gets new entries. Executing shrinkfile again then removes the remaining free space from the log file.

    alter database [sales] set recovery full
    go

    --This puts the recovery model back to full. Taking a full backup is required because the SQL engine has discarded transactions without a log backup; it is as if the database had been in simple recovery since the last full backup you took. So if you need to recover the database up to the last transaction, you can't.

    Taking a new full backup starts a new backup chain from there onwards, as per your backup and recovery strategy.
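    To make those last steps concrete, a minimal sketch (the backup path is illustrative):

    ```sql
    -- Confirm the recovery model was switched back to full.
    SELECT DATABASEPROPERTYEX('Sales', 'Recovery')

    -- The new full backup from which later log backups can be restored.
    BACKUP DATABASE Sales
    TO DISK = 'D:\Backup\Sales_full.bak'
    WITH INIT
    ```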









    Deepak Kumar

    --An eye for an eye and everyone shall be blind
  25. vijaygarapati New Member

    Thank you, Deepak, for your detailed explanation of the script.
  26. gmilner New Member

    See "Dynamically shrink the transaction log with a stored procedure"
    http://codetempest.com/node/10

    Using this, you can shrink your log file like so:

    USE MyDatabase
    GO
    EXEC sp_trx_log_shrink 2, 50
    GO

    Or in a weekly/nightly SQL Job.
  27. mmarovic Active Member

  28. satya Moderator

    I agree with mmarovic; it is a waste of system resources to keep shrinking the transaction log when other maintenance jobs just contribute to increasing its size again.

    The ideal solution is to monitor the growth and maintain transaction log backups at regular intervals to keep the virtual log size in check, and to shrink only if you have free-space problems on the drive where the transaction log is located.

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
