Log File Huge | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

For one of my databases the log file has grown to over 8GB. Full database backups are made of this database daily, and transaction log backups every 30 minutes. I use this same routine on other databases that are more active and process many more transactions, and their log files aren't nearly as huge. I've run the full database backup manually and then performed a DBCC SHRINKFILE on the log file, and it's now down to 8GB from 11GB. Any ideas?
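For reference, one way to see how much of that log file is actually in use (in the SQL Server 2000 era this thread is set in) is DBCC SQLPERF. The database and logical file names below are placeholders, and the 1024MB target is just an example:

```sql
-- Show log file size and percent used for every database.
-- A large file that is mostly empty just needs shrinking;
-- a large file that is mostly full still holds active log records.
DBCC SQLPERF(LOGSPACE);

-- After a log backup has truncated the inactive portion,
-- shrink the file (logical file name, target size in MB):
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_Log, 1024);
```

If the log shows as nearly full even right after a log backup, something (often an open transaction) is preventing truncation, which is what the replies below dig into.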
What’s your database recovery model? Nathan H. Omukwenyi
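In case it helps, the recovery model can be checked from Query Analyzer with DATABASEPROPERTYEX (the database name here is a placeholder):

```sql
-- Returns FULL, SIMPLE, or BULK_LOGGED (SQL Server 2000 and later)
SELECT DATABASEPROPERTYEX('MyDatabase', 'Recovery') AS RecoveryModel;
```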

Sounds like you may have a long-running transaction. You may want to check the currently running processes to find out. If that is the case, you could use the query governor cost limit feature, but that may not be a viable option depending on your situation. Also, if you do have a long-running transaction, you could explicitly force a commit on that transaction so that a normal transaction log backup can truncate the inactive portion of the log. Of course, you'll then need to run DBCC SHRINKFILE if you want to actually release that space back to the OS. Just a thought. – SQL.Admin – US.Florida
Database recovery model is "Full". We use this setting on all of our other databases though and do not have this problem. sqladmin — how can I tell if I have a long-running transaction?
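One quick way to answer that question directly, alongside the trace and Current Activity suggestions below: DBCC OPENTRAN reports the oldest active transaction in the current database, including its SPID and start time (database name is a placeholder):

```sql
USE MyDatabase;
-- Reports the oldest active transaction (SPID, name, start time),
-- or states that there are no active open transactions.
DBCC OPENTRAN;
```

A transaction with a start time hours in the past is exactly the kind that keeps the log from truncating.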
itistic… The first thing I would do is ask the application administrators. Your database is supporting them, and they should be able to tell you who's running what transaction. Just ask them if anything of larger-than-normal size is running right now. Once they tell you, you can look up which transaction it could be in the current processes under Enterprise Manager / Current Activity / Process Info. You can, if necessary, right-click the process and kill it, or you can find the SPID (system process ID), open Query Analyzer, and run: KILL ##. For example, let's say you found a process with ID number 60; you would then run: KILL 60. That will end the process. Some DBAs don't have the authority to kill certain transactions, and that is why it's generally a good idea to ask the application admins what is running. They can also identify long-running transactions and cancel them from their application tools. Once that is done, give it a minute and check the processes to be sure they have stopped; all you have to do is refresh the screen. Once you know for sure, back up the transaction log, and it should truncate automatically. Then run DBCC SHRINKFILE to reduce the file size. Hope this is useful. – SQL.Admin – US.Florida
:::CORRECTION ON PREVIOUS POST::: In my example above, if you found a process with ID number 60, the command should of course be: KILL 60. Sorry about that, typo. – SQL.Admin – US.Florida
We are a small company and happen to be the developers of all the apps here as well. When I look at all of the running processes (Management/Current Activity/Process Info in Enterprise Manager), all of the processes have a login time of today, since the server was reset this morning. I don't see what could possibly be left running after the server reset. It just makes no sense.
If I were you, I would run a trace to identify long-running transactions. A long-running transaction may come from a huge (bulk) insert/update/delete or from some maintenance task, e.g. index defragmentation. Big inserts/updates/deletes can be rewritten to use loops committing a smaller number of changes (e.g. 1000) per cycle. Index rebuild/defragmentation should be done one index at a time, and only when fragmentation is above an acceptable level.
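A batched rewrite like mmarovic describes might look like this sketch. The table name and date predicate are hypothetical; SET ROWCOUNT is the SQL Server 2000-era way to cap how many rows a DML statement touches:

```sql
-- Delete old rows 1000 at a time so each batch commits separately
-- and the log space can be reclaimed by the regular log backups,
-- instead of one giant transaction holding the log for its full run.
SET ROWCOUNT 1000;
DELETE FROM OrdersArchive WHERE OrderDate < '20040101';
WHILE @@ROWCOUNT > 0
BEGIN
    DELETE FROM OrdersArchive WHERE OrderDate < '20040101';
END;
SET ROWCOUNT 0;  -- reset so later statements are unaffected
```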
mmarovic is right. The best thing to do is run a trace at this point. – SQL.Admin – US.Florida
Well, I've come into the office today to check and find that the log file is now (after shrinking) down to a little over 1GB. It seems that as every day goes by, more and more transactions are being cleared from the log file. Would this make sense?