How to estimate transaction log size? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

How to estimate transaction log size?

If I simply want to insert all the rows from one table into another table which has the same structure and indexes, is there a way to calculate how much space I should allocate for the transaction log? I’m currently doing the insert from a table which has 14 million rows into another table, and the size of the original table is about 7G. The transaction log has already grown 16G since the insert started. I don’t understand why the log grows so much.
You should probably have dropped the indexes before you did that. :) How many indexes do you have on that table?

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
Yeah really, every time you update a table with indexes you are performing a multi-table insert; indexes are basically tables under system control. I would also change the recovery model to bulk-logged while performing large-scale inserts like the one you did.
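For reference, switching the recovery model around the load might look like the sketch below. The database name MyDB and the backup path are placeholders, not from the thread; taking a log backup after switching back to full keeps the log chain usable for point-in-time recovery.

-- Switch to bulk-logged before the load, then back to full afterwards.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED

-- ... run the large insert here ...

ALTER DATABASE MyDB SET RECOVERY FULL
-- Back up the log so point-in-time recovery is possible again.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'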
I intentionally don’t want to drop the indexes, so I can see the effect of having indexes. I have about 8 indexes. The 7G is the reserved size from sp_spaceused, so it already includes both the data and index size. I have just changed the recovery model to bulk-logged now since the server will be out of space soon. The transaction log has grown to 20G. However, I remember that bulk-logged won’t help in this case since I am simply running: insert into newtable select * from oldtable. I hope someone can explain why the transaction log grows so much.

Yes, any data modification operation takes more transaction log space than the size of the data being modified. I believe this is because the transaction log does not contain only the data, but also the data modification statements being executed and other overhead such as table and index information. For such large tables it is better to insert the data in batches and perform frequent transaction log backups during the process so that the transaction log truncates frequently.
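A minimal sketch of the batching idea, assuming an integer key column; the names id, oldtable, newtable, MyDB and the backup path are placeholders:

-- Copy rows in key ranges of roughly 100,000 and back up the log between
-- batches so it can truncate instead of growing.
DECLARE @lo INT, @hi INT, @batch INT
SELECT @lo = MIN(id), @hi = MAX(id) FROM oldtable
SET @batch = 100000

WHILE @lo <= @hi
BEGIN
    INSERT INTO newtable
    SELECT * FROM oldtable
    WHERE id >= @lo AND id < @lo + @batch

    -- Let the log truncate (full or bulk-logged recovery).
    BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'

    SET @lo = @lo + @batch
END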
Originally posted by pcsql: "I intentionally don’t want to drop the indexes to see the effect of having indexes."

That’s the effect: the TL will grow in your case to 20G. :)

Originally posted by pcsql: "insert into newtable select * from oldtable ... I hope someone can explain why the transaction log will grow so much."

INSERT INTO is not a bulk operation; SELECT INTO is. SELECT INTO is minimally logged, depending on your recovery model. :D
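To illustrate the difference with the tables from this thread (newtable2 is a placeholder; the SELECT INTO target must not already exist):

-- Fully logged: every inserted row is written to the transaction log.
INSERT INTO newtable SELECT * FROM oldtable

-- Minimally logged under the simple or bulk-logged recovery model:
-- creates newtable2 and logs mainly extent/page allocations.
SELECT * INTO newtable2 FROM oldtable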
“*” Why do people keep using that??????

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
I had to end the insert since the hard drive was running out of space. Now I have removed all the indexes and reset the recovery model to Full. I’m rerunning the insert.

And then will you rerun the insert with the recovery set to simple? This would be kind of a fun test. Are you recording the speed/size difference?

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
I just left the recovery model at Full. This time the insert finished and the transaction log grew only 5G. I’m testing it with 1 index now.
If you have time, pcsql, after you run this test try this one: remove all indexes, change your insert into a SELECT INTO statement using the bulk-logged or simple recovery model, and compare speed/sizes against the other tests.
Then apply all the indexes back and see how long it takes to delete all those records. (a looooooooooooooooooooooooooooooong time)

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
Or you can TRUNCATE it. :D
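A quick sketch of why that is cheaper, using the table name from this thread (run one or the other, not both):

-- DELETE logs every removed row; TRUNCATE TABLE only logs the page
-- deallocations, so it is fast and uses very little log space.
DELETE FROM newtable
TRUNCATE TABLE newtable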
That would be boring.

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
Inserting with 1 index consumed an extra 3G of log space compared with no indexes. Hi Lazy_DBA, I have thought about trying SELECT INTO. However, I have decided not to try it since the SELECT INTO target must be a new table. Hi derrickleggett, I’m testing deleting 2 million rows with all the indexes in place. I guess it will take a long time and consume a lot of log space.
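If the delete has to run against the indexed table, one option on SQL Server 2000 is to delete in batches so each transaction (and each hit on the log) stays small. A sketch, with a placeholder filter and the same placeholder names as above:

-- SET ROWCOUNT limits how many rows each DELETE touches (SQL Server 2000).
SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    DELETE FROM newtable WHERE id <= 2000000
    IF @@ROWCOUNT = 0 BREAK
    -- Back up the log between batches if the database is in full recovery.
    BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'
END
SET ROWCOUNT 0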

Yep. Let us know. Isn’t this great fun. :)

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
I’m still hoping someone can give me a hint on how to calculate how much space to allocate for the transaction log when doing such an insert or delete. Does the book Inside SQL Server 2000 mention anything like this?
I have a workbook that tells you all that. Sometime, I’ll pull it out and let you know. lol

MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
You may refer to this link http://www.sqlservercentral.com/scripts/contributions/901.asp for information. HTH

Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
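Building on that, one practical way to estimate the log space for an insert like this is to measure a small sample and extrapolate. A sketch, assuming the tables from this thread; the 140,000-row sample is roughly 1% of the 14 million rows mentioned above, and DBCC SQLPERF(LOGSPACE) reports log size and percent used for every database:

-- Measure log usage before the sample insert.
DBCC SQLPERF(LOGSPACE)

-- Insert roughly 1% of the rows as a sample.
INSERT INTO newtable SELECT TOP 140000 * FROM oldtable

-- Measure again and multiply the observed log growth by about 100
-- for a rough estimate of the full load.
DBCC SQLPERF(LOGSPACE)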
Hi derrickleggett, The delete of 2 million rows actually did not take too long. It took less than an hour, but 6G of log space. When the table grows to a certain size (both data and index), dropping all the indexes before the insert and then recreating them after the insert will probably take a long time. Any suggestion?