Deleting Records Increases Log Size | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Hello friends,
I need some advice on the best way to delete millions of records without my transaction log growing while I do so.
Basically, we are an ISP, so our database is OLTP. Customers are connected 24 X 7 to our RADIUS servers, which is why I cannot afford downtime of even a few seconds.
This database has now grown to over 3 GB, which I know is not that big, but the problem is that the disk is only 10 GB. The guy before me didn't take care of this issue. On top of that, we do not need the old inactive records in this database, so I want to remove them. The recovery model is, of course, set to FULL. A few days ago I tried to remove all the inactive records in one batch, with the intention of keeping only the last 6 months of data. First, it took too long: it just kept going until an error message told me the transaction log had grown until the disk was full. I then had to shrink the log, and that did not work either.
Now my question is whether I should delete the records in small batches, or change the recovery model from FULL to SIMPLE while the records are being deleted. I will be doing this during off-peak time.
What I am not totally clear on is whether changing the recovery model from FULL to SIMPLE makes a difference in the time it takes to delete records, and in the size of the transaction log file. If so, is that the best option, or should I go with deleting records in small chunks?
Please advise me on this. Any help will be greatly appreciated.
With the simple recovery model, the log should not grow the way it does under full recovery, but you know the risk.
My recommendation is: buy more disk and keep full recovery.
Luis Martin
Moderator
All postings are provided "AS IS" with no warranties for accuracy.
For another solution, you could use the following loop:
1. delete 1 million records
2. tran log backup
3. dbcc shrinkdb
4. repeat

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
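The delete/backup loop above might be sketched in T-SQL roughly as follows. The table name `dbo.RadiusLog`, its `LastActivity` column, the database name `MyDatabase`, and the backup path are all assumptions for illustration; adjust them to your own schema.

```sql
-- Sketch of the delete / log-backup loop, assuming a hypothetical table
-- dbo.RadiusLog with a datetime column LastActivity, in a database
-- named MyDatabase. Adjust names, paths, and sizes to your system.
SET ROWCOUNT 1000000;        -- cap each DELETE at 1 million rows

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE FROM dbo.RadiusLog
    WHERE LastActivity < DATEADD(month, -6, GETDATE());

    SET @rows = @@ROWCOUNT;

    -- Back up the log so the space it used can be reclaimed
    BACKUP LOG MyDatabase TO DISK = 'D:\Backup\MyDatabase_log.trn';
END

SET ROWCOUNT 0;              -- reset the row limit for this session
```

The log backups between batches are what keep the log from growing: each backup marks the committed portion of the log as reusable before the next batch starts.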
I would recommend keeping the full recovery model and deleting rows in batches of, say, 5000. Add an appropriate delay, e.g. WAITFOR DELAY '00:00:02', and schedule transaction log backups frequently enough (e.g. every hour). I assume this is going to be a permanent job. All the example values (delay time, tran log backup frequency, batch size) are just wild guesses; I would play with different values on a dev server and, after testing, implement the process in production.
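A small-batch version of that idea, with the WAITFOR DELAY pause, might look like this sketch. Again, the `dbo.RadiusLog` table and `LastActivity` column are hypothetical; the batch size and delay are starting points to tune on a dev server.

```sql
-- Sketch of a gentler, permanent cleanup job: small batches with a
-- pause between them, assuming the hypothetical dbo.RadiusLog table.
SET ROWCOUNT 5000;           -- delete at most 5000 rows per pass

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE FROM dbo.RadiusLog
    WHERE LastActivity < DATEADD(month, -6, GETDATE());

    SET @rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:02';  -- give other transactions room to run
END

SET ROWCOUNT 0;              -- reset the row limit
```

Small batches keep each transaction short, so regular log backups can reclaim log space as the job runs, and the 24 X 7 RADIUS traffic is not blocked for long.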
If you are deleting the records all in one transaction, i.e. a single delete statement, changing to simple recovery won't help you anyway. It will still write the entire transaction to the transaction log, and only truncate the log after the transaction has completed (after a checkpoint). So, in your case, the disk will fill up and the transaction will fail. I like Derrick's solution, except I would use DBCC SHRINKFILE instead of SHRINKDB to shrink the file. If you're going to delete a second million, a third... you really wouldn't want to shrink the transaction log until you've deleted all the records. Otherwise it'll just grow again, and that creates a performance hit.
Steve
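Once all the deletes are finished, a final log backup followed by DBCC SHRINKFILE might look like the sketch below. The logical file name `MyDatabase_Log`, the target size, and the backup path are assumptions; sp_helpfile lists the real logical file names for your database.

```sql
-- Assumed logical log file name MyDatabase_Log; check with sp_helpfile.
-- Back up the log first so the inactive portion can be released.
BACKUP LOG MyDatabase TO DISK = 'D:\Backup\MyDatabase_log.trn';

DBCC SHRINKFILE (MyDatabase_Log, 500);   -- target size in MB
```

Shrinking once at the end, as Steve suggests, avoids the repeated grow/shrink cycle that hurts performance.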
Thank you very much for the replies. I really appreciate it.
Had I not received any other replies, I was going to do what Derrick suggested anyway. Now that you guys have all suggested the same approach, I will go ahead with this plan.
Thanks once again.