Huge size of Log file??? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Huge size of Log file???

The log file of one of my databases has grown to 60 GB. I have tried to shrink the database but the log file size remains the same. Any clues how I can handle this? If I delete the log file, will it cause any problem?
Will appreciate any help. Many TIA
Do not delete the log file!!! Use the script below instead:
alter database db set recover simple
go
dbcc shrinkfile (db_log, 100)
go
checkpoint
go
dbcc shrinkfile (db_log, 100)
alter database db set recovery full
go

Take a full backup just after executing the above script. Deepak Kumar – An eye for an eye and everyone shall be blind
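For example, a minimal full backup command could look like this (the database name and backup path are illustrative, not from the thread):

backup database [db] to disk = 'D:\Backups\db_full.bak' with init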
Thanks for the reply, but I am not clear on where/how to run the script. Suppose the name of my database is Sales and the name of the log file is thus Sales_Log. Can you please guide me on where and how to run the above script?
Just run this query in a Query Analyzer window.. do you know how to take a full backup after that??
Use [sales]
Go
alter database [sales] set recover simple
go
dbcc shrinkfile (sales_log, 100)
go
checkpoint
go
dbcc shrinkfile (sales_log, 100)
alter database [sales] set recovery full
go
Deepak Kumar –An eye for an eye and everyone shall be blind
I assumed your recovery model is full. 100 doesn't have to be the target size; better to set a reasonably big target size and schedule frequent enough transaction log backups.
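A scheduled transaction log backup might look like this (names and path are illustrative; run it from a SQL Server Agent job at whatever interval fits your recovery needs):

backup log [sales] to disk = 'D:\Backups\sales_log.trn'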
Actually you can use a rather brute-force method: sp_detach_db the db in question. Now you can delete the log, rename it, or do whatever else you want with the log file. When you attach the db using sp_attach_db, SQL Server will issue a message like:
Device activation error. The physical file name 'd:\daten\sql\MSSQL\data\northwnd.ldf' may be incorrect.
New log file 'D:\DATEN\SQL\MSSQL\DATA\Northwind_log.LDF' was created.
but as you see, it will create a new log file.
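A sketch of that sequence (database name and file path are illustrative; sp_attach_single_file_db lets SQL Server build a fresh log from just the data file):

use master
go
exec sp_detach_db 'Northwind'
go
-- delete or rename the old .ldf file here, then reattach from the .mdf only:
exec sp_attach_single_file_db 'Northwind', 'D:\DATEN\SQL\MSSQL\DATA\northwnd.mdf'
go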
You should see this method only as a kind of last resort. It should never be everyday practice! And always, always make sure you have a most recent backup. —
Frank Kalis
SQL Server MVP
http://www.insidesql.de

quote: Originally posted by deepakontheweb

Just run this query in query analyzer window.. do you know how to take full backup after that??

Use [sales]
Go
alter database [sales] set recover simple
go
dbcc shrinkfile (sales_log, 100)
go
checkpoint
go
dbcc shrinkfile (sales_log, 100)
alter database [sales] set recovery full
go

When I try to run the above in Query Analyzer, it gives a message saying:
Incorrect syntax near 'recover' :(
How about:
alter database sales set recovery simple

Use [sales]
Go
alter database [sales] set recovery simple
go
dbcc shrinkfile (sales_log, 100)
go
checkpoint
go
dbcc shrinkfile (sales_log, 100)
alter database [sales] set recovery full
go
— Use the above code.. only the 'y' was missing from 'recovery'. Deepak Kumar – An eye for an eye and everyone shall be blind
Article and KBA to the relevant issue:
http://www.devx.com/getHelpOn/10MinuteSolution/16493/1954?pf=true
http://www.support.microsoft.com/?id=315512
HTH
Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Thanks everyone, but no luck. I changed the code and added the missing 'y', but now I get the message "could not locate the <…file….> in sysfiles". My log file is located in D:\Program Files\Microsoft SQL Server\MSSQL\Data\Sales_Log, so I changed the above code to:
dbcc shrinkfile ("D:\Program Files\Microsoft SQL Server\MSSQL\Data\Sales_Log", 100)
Any more clues please. P.S. Sorry for the bother.
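Note: dbcc shrinkfile expects the logical file name as recorded in sysfiles, not the physical path. The logical names can be listed like this (the output depends on your database):

use [sales]
go
select name, filename from sysfiles
go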
Please do not change anything in the code below and just execute it all at once, anywhere in Query Analyzer..
Use [sales]
Go
alter database [sales] set recovery simple
go
dbcc shrinkfile (sales_log, 100)
go
checkpoint
go
dbcc shrinkfile (sales_log, 100)
alter database [sales] set recovery full
go
Take a full backup of the database after this.. Deepak Kumar – An eye for an eye and everyone shall be blind
Tons of thanks everyone :)

It appears that the problem is now solved. Special thanks to Deepak.

Regards.
Frank’s solution works too and only short downtime is needed.
Thanks mmarovic. Next time I run into a similar problem (though I hope not), I will also give Frank's suggestion a try. Just for clarity, Frank's suggestion is to:
1) Detach the database
2) Delete the log file
3) Attach the database
Is the above correct? Please confirm, especially if I have omitted any important step. Regards.
Yes, that's correct. I prefer the solution you used because you can define the size you want to shrink the log file to, and no downtime is needed. If you use the detach/attach approach you will have a few seconds of downtime (if the initial log size is not really big). Don't forget to schedule transaction log backups.
Thanks for the confirmation. Suppose I use Deepak's suggestion and the problem gets solved. Do I still have to schedule backups of the transaction log? If so, why? Sorry if the question is silly.
As mentioned, please do not use the detach, delete log, attach procedure unless it's a last resort. Never do it on a critical system. There is no guarantee that it will work. We've had people do that here, and then the attach didn't work and the database was marked suspect. Then you're in deep water and you have to go through a long unsupported process to bring it back online:
http://myitforum.techtarget.com/articles/18/view.asp?id=7381
There are other ways to shrink your log. Below is an example script you can run in the database. The log contains all changes to your database; you need to back it up to keep it from growing forever. For example, issue a full backup once per day and transaction log backups every four hours. Shrink stored procedure:
use master
go

create proc sp_force_shrink_log
--------------------------------------------------------------------------------
-- Purpose: Shrink transaction log of the current database in SQL Server 7.0.
-- Author: Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000
-- [email protected]
--------------------------------------------------------------------------------
    @target_percent tinyint = 0,
    @target_size_MB int = 10,
    @max_iterations int = 1000,
    @backup_log_opt nvarchar(1000) = 'with truncate_only'
as
set nocount on

declare @db sysname,
        @last_row int,
        @log_size decimal(15,2),
        @unused1 decimal(15,2),
        @unused decimal(15,2),
        @shrinkable decimal(15,2),
        @iteration int,
        @file_max int,
        @file int,
        @fileid varchar(5)

select @db = db_name(), @iteration = 0

create table #loginfo (
    id int identity,
    FileId int,
    FileSize numeric(22,0),
    StartOffset numeric(22,0),
    FSeqNo int,
    Status int,
    Parity smallint,
    CreateTime varchar(50)
)

create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )

create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )

-- collect the ids of all log files (status bit 0x40 marks log files in sysfiles)
insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40
select @file_max = @@rowcount

if object_id( 'table_to_force_shrink_log' ) is null
    exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount

select @log_size = sum( FileSize ) / 1048576.00,
       @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
       @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
from #loginfo

select @unused1 = @unused -- save for later

select 'iteration' = @iteration,
       'log size, MB' = @log_size,
       'unused log, MB' = @unused,
       'shrinkable log, MB' = @shrinkable,
       'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )

while @shrinkable * 100 / @log_size > @target_percent
  and @shrinkable > @target_size_MB
  and @iteration < @max_iterations
begin
    select @iteration = @iteration + 1 -- this is just a precaution

    -- generate some log activity so the active portion wraps back to the start of the file
    exec( 'insert table_to_force_shrink_log select name from sysobjects
           delete table_to_force_shrink_log' )

    select @file = 0
    while @file < @file_max
    begin
        select @file = @file + 1
        select @fileid = fileid from #logfiles where id = @file
        exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
    end

    exec( 'backup log [' + @db + '] ' + @backup_log_opt )

    truncate table #loginfo
    insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount

    select @log_size = sum( FileSize ) / 1048576.00,
           @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
           @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from #loginfo

    select 'iteration' = @iteration,
           'log size, MB' = @log_size,
           'unused log, MB' = @unused,
           'shrinkable log, MB' = @shrinkable,
           'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
end

if @unused1 < @unused
    select 'After ' + convert( varchar, @iteration ) +
           ' iterations the unused portion of the log has grown from ' +
           convert( varchar, @unused1 ) + ' MB to ' +
           convert( varchar, @unused ) + ' MB.'
    union all
    select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
    union all
    select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
    union all
    select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10
else
    select 'It took ' + convert( varchar, @iteration ) +
           ' iterations to shrink the unused portion of the log from ' +
           convert( varchar, @unused1 ) + ' MB to ' +
           convert( varchar, @unused ) + ' MB'

exec( 'drop table table_to_force_shrink_log' )
GO
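A usage sketch for the procedure above (the database name and parameter values are illustrative):

use [sales]
go
exec sp_force_shrink_log @target_size_MB = 50, @max_iterations = 100
go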
Here is a perfect example of WHY you only use attach/detach when absolutely necessary:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=48523 :)
MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
Murphy's law at work. :D
--
Frank Kalis
SQL Server MVP
http://www.insidesql.de
Is taking a full database backup essential after running this script, or are there any workarounds??
Use [sales]
Go
alter database [sales] set recovery simple
go
dbcc shrinkfile (sales_log, 100)
go
checkpoint
go
dbcc shrinkfile (sales_log, 100)
alter database [sales] set recovery full
go
Can anyone tell me the logic involved in this script? Why are we using the shrinkfile command twice? Spare me if this sounds silly. Thanks,
Regards
Thanks everyone for the various suggestions.

The code solution from Argyle is really "Heavy Duty" stuff for me to absorb :) so I hope I will never run into such a scenario again ;)
Vijay:
alter database [sales] set recovery simple
go
dbcc shrinkfile (sales_log, 100)
When we change the database recovery model to simple, the SQL engine drops all inactive transactions (read the simple recovery model characteristics), and then we try to shrink the log file.
checkpoint
go
dbcc shrinkfile (sales_log, 100)
Issuing a checkpoint forces all dirty pages for the current database to be written to disk, so the log file gets new entries. Executing shrinkfile again then removes the newly freed space from the log file.
alter database [sales] set recovery full
go
This puts the recovery model back to full. Having a full backup afterwards is required because the SQL engine has dropped all transactions without a tran log backup, which is equivalent to the database having been in simple recovery since the last full backup you took. So if you need to recover the database to the last transaction, you can't. Taking a new full backup starts a fresh backup chain from there onwards, as per your backup and recovery strategy. (See the verification example after this post.) Deepak Kumar – An eye for an eye and everyone shall be blind
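To double-check that the recovery model really was switched back to full, you can query it directly (the database name is illustrative):

select databasepropertyex('sales', 'Recovery')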
Thank you Deepak for your detailed explanation of the script.
See "Dynamically shrink the transaction log with a stored procedure"
http://codetempest.com/node/10 Using this, you can shrink your log file thusly: USE MyDatabase
GO
EXEC sp_trx_log_shrink 2, 50
GO Or in a weekly/nightly SQL Job.
quote: Or in a weekly/nightly SQL Job.
Now that is a bad, bad idea! Please read: http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=6894&SearchTerms=log
I agree with mmarovic; shrinking the transaction log on a schedule just wastes system resources when other maintenance jobs keep growing it again. The ideal solution is to monitor the growth and take Tlog backups at regular intervals to keep the virtual log size in check, and only shrink if you have free space problems on the drive where the Tlog is located (see the monitoring example after this post). Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
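For monitoring, the built-in DBCC SQLPERF(LOGSPACE) report shows the log size and percent used for every database, which makes it easy to spot growth before it becomes a free-space problem:

dbcc sqlperf(logspace)

Comparing these numbers over time shows whether the scheduled log backups are keeping up with the growth.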