SQL Server Performance Forum – Threads Archive

delete about 2M records in a table of 13M records

Hi all, I am facing a problem: our SQL Server has a daily job that deletes about 2M records from a table of 13M records. It takes quite a long time. How can I speed it up? An index is set up on the column that the WHERE clause of the DELETE statement filters on, but it still does not help much. Linus
Simply delete the rows in smaller batches, say 1,000 rows per batch, and in any case it is better to schedule the job during lower-traffic hours on the database. Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
http://www.askasqlguru.com/ This posting is provided AS IS with no rights, for the sake of knowledge sharing. Knowledge is of two kinds: we know a subject ourselves, or we know where we can find information on it.
Large batches mean large log files, file growth, and potentially disruptive locking. I would break the delete up into small chunks; you will get faster commits, which will increase your transaction throughput.
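A minimal sketch of the batched delete suggested above, assuming SQL Server 2005 or later; dbo.BigTable, EventDate and the 30-day retention rule are placeholder names used only for illustration:

-- Delete qualifying rows in batches of 1,000 until none remain.
DECLARE @BatchSize int;
DECLARE @RowsDeleted int;
SET @BatchSize = 1000;
SET @RowsDeleted = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE EventDate < DATEADD(DAY, -30, GETDATE());

    SET @RowsDeleted = @@ROWCOUNT;
END

Each iteration commits its own small transaction, so locks are held only briefly and the log space can be reused between batches.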
Actually, you can even bump the count up to 10k or 50k instead of 1k if you have good hardware (at least a few processors). And while you are doing that in a loop, have a job running to truncate the log every minute or two. I have done a similar delete of a couple of hundred million rows by deleting 500k at a time. ***********************
Dinakar Nethi
***********************
http://weblogs.sqlteam.com/dinakar/
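A sketch of how the log can be kept in check while the loop runs, assuming the database is in FULL recovery and using a periodic log backup in place of a separate "truncate log" job; MyDb, dbo.BigTable, EventDate and the backup path are placeholder names:

-- Larger batches, with a log backup every 10 batches so log space is reused.
DECLARE @BatchSize int;
DECLARE @RowsDeleted int;
DECLARE @BatchCount int;
SET @BatchSize = 50000;
SET @RowsDeleted = 1;
SET @BatchCount = 0;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE EventDate < DATEADD(DAY, -30, GETDATE());

    SET @RowsDeleted = @@ROWCOUNT;
    SET @BatchCount = @BatchCount + 1;

    IF @BatchCount % 10 = 0
        BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn';
END

If the database is in SIMPLE recovery, the log backup is not needed; the small committed transactions (and checkpoints) already let the log space be reused.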
If you have to do this every day, and that need will continue, you might also look at partitioning the table. That way you might be able to set your system up such that the 2M rows each day are actually stored in a separate structure, which you can remove more easily.
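A rough sketch of what that could look like, assuming SQL Server 2005 or later (table partitioning requires Enterprise Edition there); all object names and boundary dates below are illustrative:

-- Date-based partition function and scheme (boundary values are examples).
CREATE PARTITION FUNCTION pfDaily (datetime)
AS RANGE RIGHT FOR VALUES ('20080101', '20080102', '20080103');

CREATE PARTITION SCHEME psDaily
AS PARTITION pfDaily ALL TO ([PRIMARY]);

-- If dbo.BigTable is created on psDaily(EventDate), the oldest day's rows can be
-- removed as a metadata-only operation instead of a 2M-row DELETE:
-- (dbo.BigTable_Staging must have an identical structure on the same filegroup)
-- ALTER TABLE dbo.BigTable SWITCH PARTITION 2 TO dbo.BigTable_Staging;
-- TRUNCATE TABLE dbo.BigTable_Staging;

The SWITCH and TRUNCATE complete in seconds regardless of the row count, which is the main attraction of this approach.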
Good point raised by Dinakar, and you have to make sure the transaction log size is handled during such an exercise. Also ensure the log is truncated regularly, even if you are willing to keep the database in the SIMPLE recovery model.
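A quick way to keep an eye on this while the job runs, assuming SQL Server 2005 or later; MyDb is a placeholder database name:

-- Log file size and percentage used for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Confirm which recovery model the database is actually in.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDb';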
[quote user="ndinakar"]Actually you can even bump the count to 10 or 50k instead of 1k if you have a good hardware (atleast a few processors). And while you are doing that in a loop, have a job running to truncate the log every 1 min or 2 mins. I have done similar delete of couple hundred mill rows by deleting 500k at a time. ***********************
Dinakar Nethi
***********************
http://weblogs.sqlteam.com/dinakar/
[/quote]
And you can do even more: you can keep the database in the BULK_LOGGED recovery model so it will not consume as much log, making the process faster.
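For reference, switching the recovery model is a single statement in each direction (MyDb and the backup path are placeholders); note the caveat about DELETE logging in the next reply:

-- Switch to BULK_LOGGED for the duration of the bulk operation.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- ... run the bulk operation here ...

-- Switch back, then take a log backup so the log backup chain continues cleanly.
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn';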

Not entirely correct. Minimally logged operations include bulk import operations (bcp, INSERT … SELECT * FROM OPENROWSET(BULK…), and BULK INSERT) and partial updates to large value data types using the UPDATE statement's .WRITE clause when inserting or appending new data. Note that minimal logging is not used when existing values are updated, and a DELETE is fully logged under every recovery model, so BULK_LOGGED will not reduce the logging for this job.
As per BOL: when you use bulk-logged recovery to import a large set of table rows, consider distributing the bulk import among multiple batches. Each batch equates to one transaction. Thus, when a batch completes, its log becomes available to be backed up, and the next log backup will reclaim the log space used to bulk import that batch of rows.
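A minimal sketch of that BOL guidance, assuming a flat-file import; the file path, table name, batch size and database name are placeholders:

-- BATCHSIZE splits the import into separate transactions; TABLOCK helps the
-- import qualify for minimal logging under BULK_LOGGED or SIMPLE recovery.
BULK INSERT dbo.BigTable
FROM 'D:\Data\import.dat'
WITH (BATCHSIZE = 100000, TABLOCK);

-- A log backup after (or between) batches reclaims the space those batches used.
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn';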
