
SQL Server Performance Forum – Threads Archive

Delete 1 Million rows out of 300 Million

Hi, I have a huge table with more than 300 million rows. What is the fastest and most efficient method to delete rows based on a condition that would match around 1 million rows? Thanks, Rajoo
Delete in batches, say 10,000 rows at a time, and if you don’t need the transaction log to roll back the statement, do a DUMP TRANSACTION WITH NO_LOG after each iteration. HTH. Gaurav
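A minimal sketch of that batching approach (the table name BigTable, the database name MyDB, and the WHERE condition are placeholders; DUMP TRANSACTION is legacy SQL Server 7.0/2000 syntax, removed in later versions):

SET ROWCOUNT 10000                 -- limit each DELETE to 10,000 rows

DECLARE @deleted INT
SET @deleted = 1

WHILE @deleted > 0
BEGIN
    DELETE FROM BigTable
    WHERE SomeColumn = 'X'         -- your delete condition here

    SET @deleted = @@ROWCOUNT

    -- Truncate the inactive log between batches; this breaks the
    -- log backup chain, so only do it if you can afford to.
    DUMP TRANSACTION MyDB WITH NO_LOG
END

SET ROWCOUNT 0                     -- reset so later statements are unaffected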
Thanks. Rajoo
Use SET ROWCOUNT 10000 and DBCC SQLPERF(LOGSPACE) to monitor the log space during this process. _________
Satya SKJ
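For the monitoring part, running this from another connection reports the log size and percentage used for every database on the server:

DBCC SQLPERF(LOGSPACE)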

Before running the scenarios provided above, you might benefit from dropping any non-clustered indexes (assuming that you will use a clustered index for the query) on the table. Recreate them after the final DELETE.
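For example (the index, table, and column names here are hypothetical):

-- Before the mass delete:
DROP INDEX BigTable.IX_BigTable_SomeColumn

-- ...batched DELETE runs here...

-- After the final DELETE:
CREATE NONCLUSTERED INDEX IX_BigTable_SomeColumn ON BigTable (SomeColumn)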
Also make sure any other table with a foreign key constraint referencing the primary key of this table has an index on the foreign key column. Look at the execution plan for the delete: if there are FK constraints, each will show up as a join to the referencing table, and if that table does not already have an index, it will show up as a scan. If you are certain there are no references, you can also drop the constraint and re-establish it later, as sketched below.
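A sketch of both options, with hypothetical names (ChildTable, BigTableID, FK_Child_Big):

-- Option 1: index the referencing column so the FK check in the
-- delete plan becomes a seek instead of a scan.
CREATE NONCLUSTERED INDEX IX_ChildTable_BigTableID ON ChildTable (BigTableID)

-- Option 2: if you are certain no child rows reference the rows being
-- deleted, drop the constraint, delete, then re-establish it.
ALTER TABLE ChildTable DROP CONSTRAINT FK_Child_Big

-- ...batched DELETE runs here...

ALTER TABLE ChildTable
    ADD CONSTRAINT FK_Child_Big
    FOREIGN KEY (BigTableID) REFERENCES BigTable (ID)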

vbKenya has made a good point about indexes, but I would like to add that although the selectivity of the clustered index will definitely help the delete, dropping the clustered index before deleting the records may also be worth considering, since it reduces the time spent shuffling pages. The index can be created again after the delete. I suggest this because after the delete you will have to defragment the table anyway. Before supporting this completely, though, I should say the suggestion rests on the assumption that the clustered index is going to take a heavy hit from the delete. If all the records to be deleted sit together, I think there is no need to drop the clustered index. Gaurav
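If the clustered index is kept, the fragmentation left behind by the delete can be measured and repaired with the tooling of that era (the table name is again a placeholder):

-- Measure fragmentation on the table.
DBCC SHOWCONTIG ('BigTable')

-- Rebuild all indexes on the table; '' means all indexes, 90 is the fill factor.
DBCC DBREINDEX ('BigTable', '', 90)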