I tried to run a DELETE on a table, based on rows in another table, for archiving, but the statement ran for over 3 hours and eventually failed because the transaction log ran out of space. I am deleting 13.5 million rows from a table that has 22 million rows total and is 48 columns wide. This is the statement I used:

delete from <database>.dbo.<table> where id in (select id from <database_arch>.dbo.<table>)

When I select on the same criteria:

select * from <database>.dbo.<table> where id in (select id from <database_arch>.dbo.<table>)

the results come back in 22.5 minutes. The size of the data being deleted is 4.6 GB, but while the delete was running the transaction log grew to over 60 GB before running out of space.

How much overhead is there for statements written to the transaction log? I suppose I could change the recovery model to SIMPLE to avoid the log running out of space, but why would it grow that big in the first place? And why did it take so long? Would using a join instead of a subquery improve performance?

Any comments to help me understand what happened would be appreciated. Thanks.
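For what it's worth, the join version I had in mind looks something like this (just a sketch using the same placeholder table names as above, not yet tested):

```sql
-- Join-based delete: same rows as the IN (subquery) form,
-- but expressed as a join against the archive table.
DELETE t
FROM <database>.dbo.<table> AS t
INNER JOIN <database_arch>.dbo.<table> AS a
    ON a.id = t.id;

-- I have also seen DELETE TOP (N) used in a loop to delete in
-- smaller batches so each batch commits separately and the log
-- does not have to hold the whole 13.5 million rows at once,
-- but I have not tried that yet.
```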