SQL Server Performance Forum – Threads Archive

.LDF File is huge any help?

Hey guys, I have a database on SQL Server 2000 that handles billing for a local telco which we've just switched to. The problem is that with only 3 cycles of data in it (three bill runs), the appname_log.ldf file has grown to 20.3 gigabytes (the .mdf file is three gigabytes). It's growing exponentially, and I'm not very familiar with this issue. Is there anything I can do to arrest its growth, shrink it, or what have you?
There are a few things you can do.
– Shrink it: run BACKUP LOG db_name WITH TRUNCATE_ONLY, then DBCC SHRINKFILE (file_name). You can schedule a job to do this regularly. It would be a good idea to run a full backup after that.
– Change the recovery model, but that depends on the point to which you need to be able to recover.
– Find the operations that cause this growth and, if possible, rewrite them using operations that are not fully logged. For example, DELETE FROM table_name is fully logged, while TRUNCATE TABLE table_name is minimally logged.
A sketch of the shrink sequence is below. Bambola.
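
To make that concrete, here is a minimal sketch of the truncate-and-shrink sequence, assuming a database named BillingDB whose log file has the logical name BillingDB_Log (both names, and the backup path, are placeholders; check sysfiles for your real logical file names):

-- Find the logical file names (DBCC SHRINKFILE takes the logical name, not the .ldf path)
USE BillingDB
GO
SELECT name, filename, size FROM sysfiles
GO
-- Empty the log without backing it up (note: this breaks the log backup chain)
BACKUP LOG BillingDB WITH TRUNCATE_ONLY
GO
-- Shrink the log file down to roughly 500 MB (the target size is given in MB)
DBCC SHRINKFILE (BillingDB_Log, 500)
GO
-- Take a full backup afterwards so you still have a valid recovery point
BACKUP DATABASE BillingDB TO DISK = 'D:\Backups\BillingDB_full.bak'
GO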

Valid reference by Bambola. To keep an eye on T-log growth, refer to this link http://vyaskn.tripod.com/track_sql_database_file_growth.htm and this KBA http://support.microsoft.com/support/kb/articles/Q272/3/18.ASP to follow the SHRINK procedure, in addition to the above reference. A quick way to check log usage by hand is sketched below.
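
As a lightweight supplement to those links, here is a minimal sketch for spot-checking log usage from Query Analyzer (the job scheduling and logging around it are left to the reader):

-- Reports the size of each database's log file and the percentage currently in use
DBCC SQLPERF(LOGSPACE)
GO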
_________
Satya SKJ

For this you can use the DBCC SHRINKFILE command (refer to Books Online). The other method is as follows (this method will recreate a new log). First take a backup of the database for safekeeping, and keep the server offline, i.e. it should not be accessible to the users.
1. Use sp_detach_db (see Books Online) to detach the database.
2. Move the .ldf to some other area.
3. Use sp_attach_single_file_db to attach the database.
4. This will create a log of approximately 1 MB (or whatever the size of the model database's log is) in the area where the .mdf file is located.
5. If you have enough space, maintain the log in the same place; or, if you want to keep the file in the old .ldf area, follow the steps below:
– Again use sp_detach_db for this database (see Books Online).
– Move the log file to wherever you want to keep it.
– Use the sp_attach_db command to attach the database.
6. Go to Enterprise Manager and increase the size of the log (to whatever size you require).
A sketch of steps 1–3 is below.
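
For reference, a minimal sketch of steps 1–3, assuming a database named BillingDB with its files under D:\Data (the name and paths are placeholders):

-- 1. Detach (make sure no users are connected first)
EXEC sp_detach_db 'BillingDB'
GO
-- 2. Now move or rename D:\Data\BillingDB_log.ldf at the operating-system level.
-- 3. Re-attach from the .mdf alone; SQL Server builds a fresh log next to the .mdf
EXEC sp_attach_single_file_db 'BillingDB', 'D:\Data\BillingDB.mdf'
GO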
Rushendra
Yeah, I will go with the solution suggested by Rushendra. Also search the forum for previous discussions on "Shrink Log".
One more recommendation would be to keep the transactions in the application as short as possible. You can divide a single large delete/insert into multiple batches, as in the sketch below.
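
A minimal sketch of a batched delete on SQL Server 2000, assuming a hypothetical table BillDetail with a BillDate column (SET ROWCOUNT caps how many rows each DELETE touches on that version):

-- Delete in 10,000-row chunks so each transaction, and its log usage, stays small
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM BillDetail WHERE BillDate < '20030101'
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0  -- reset so later statements are unaffected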
Also, are you using DTS in your application? Gaurav
The method adopted by Rushendra is not advisable in a production environment, since it forces an outage. If it is a one-off requirement then it is fine to do so; otherwise DBCC SHRINKDATABASE or SHRINKFILE is widely used in order to keep the database available at all times (even MS recommends it). _________
Satya SKJ

Satya is absolutely right. Taking a production database offline to shrink it (which is something that should be done regularly and is an online operation) doesn't seem like a good idea to me. Bambola.
The solution suggested was offered on the assumption that the process running on the system is more of a batch job than an online process, since presumably the billing run is done once a month. If this is an online process, we have to think of something else. Gaurav
Running SHRINKFILE as a job doesn't affect performance when it is done during low traffic on the database, i.e. around midnight. We have most of our DBCC checks and shrink jobs scheduled once a week at midnight; those are OLTP applications, and a few are 24/7, 365-day applications too, with no fuss or issues at all. Moreover, dealing with sp_attach_db and sp_detach_db will prove costly and is not suitable for regular jobs. A sketch of such a weekly job body is below. HTH
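
For illustration only, a minimal sketch of what such a weekly midnight job might execute, again assuming the placeholder names BillingDB and BillingDB_Log:

-- Weekly maintenance: integrity check, log backup, then shrink the log back to ~500 MB
DBCC CHECKDB ('BillingDB')
GO
BACKUP LOG BillingDB TO DISK = 'D:\Backups\BillingDB_log.trn'
GO
DBCC SHRINKFILE (BillingDB_Log, 500)
GO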
_________
Satya SKJ

Thanks for all the suggestions. I had thought of just using Enterprise Manager to create a new log file (with limited growth) and then delete the old log file.

This database, while large, actually only has transactions occur at weekly intervals; the rest of the time, data is only being read. Additionally, at this point the database is in fine condition, with no need to restore anything, so I'm thinking the old transaction logs are essentially useless (up to this point). Any thoughts on that?

Also, if you think it is feasible to just recreate a new log file and delete the old one: if I apply a maximum limit on the file size growth, what happens when the log file reaches/exceeds this limit?

Thanks for all the great advice. I think initially I will shrink it, then come back and see what you advise from here. :)
If you have a complete database backup and there haven't been any transactions after it, you can delete the old log file. Anyway, my understanding is that when you delete the log file, ideally all the open transactions should move to the new log file. When you limit log file growth and the log file gets full, and the application needs log space for a transaction, SQL Server returns a message to the application stating that the log file is full and the transaction is aborted. A sketch of setting such a limit is below. Gaurav
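
As an aside, a minimal sketch of capping log growth from Query Analyzer rather than Enterprise Manager, using the placeholder names BillingDB and BillingDB_Log:

-- Cap the log at 2 GB; once it fills, further transactions fail with a log-full error
ALTER DATABASE BillingDB
MODIFY FILE (NAME = BillingDB_Log, MAXSIZE = 2GB)
GO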
The best option is to run T-log backups every 15 minutes during that heavy transaction period. For that, initially define AUTOGROW with no limit, see to what extent the T-log file grows, set that as the size, and schedule regular T-log backups as described. Otherwise, the best option is to execute a SHRINK job on the T-log after that busy period to reduce the size. BTW, if the T-log is not important, you can set the RECOVERY model to SIMPLE and concentrate on database backups. Both options are sketched below.
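
A minimal sketch of both options, once more with the placeholder database name BillingDB:

-- Option 1: frequent log backups keep the log from growing (schedule every 15 minutes)
BACKUP LOG BillingDB TO DISK = 'D:\Backups\BillingDB_log.trn'
GO
-- Option 2: if point-in-time recovery is not needed, switch to SIMPLE recovery;
-- the log is then truncated at each checkpoint and only full/differential backups matter
ALTER DATABASE BillingDB SET RECOVERY SIMPLE
GO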
_________
Satya SKJ
