SQL Server Performance Forum – Threads Archive
Deleting rows from big tables
We have a table called "ArchiveReportCurrent" with more than a billion rows, and we now need to delete around 650 million rows from it. Deleting these rows in one go would bring everything to a halt: the last time we tried, it took more than 10 hours and the logs grew massive. We are now planning to do this programmatically, deleting a few hundred rows at a time and committing each batch. We also need to make sure the delete statements do not lock the table or the database.
That way the deletes shouldn't flood the logs and we won't take the same performance hit. Does anyone have a script to delete a small batch of rows at a time and commit after each batch, without locking the table? Any help would be appreciated. Thanks.
If you are deleting rows based on time…
WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM ArchiveReportCurrent
    WHERE DueDate < '20020701';
    IF @@ROWCOUNT = 0 BREAK;   -- check immediately after the DELETE
    WAITFOR DELAY '00:00:10';
END
Table partitioning is the right choice in this case… MohammedU.
What if I am not deleting based on time, but based on some other WHERE clause?
I am thinking of committing every time I delete a batch of rows, to avoid massive logs. Will the above script commit every time it deletes data?
Yes, it commits after every iteration, or you can wrap each batch in an explicit BEGIN TRAN and COMMIT TRAN…
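To make the commit behavior explicit, the loop above can be rewritten with one transaction per batch. A minimal sketch, reusing the ArchiveReportCurrent table and DueDate cutoff from this thread; note that batching keeps the *active* log small so its space can be reused between batches, but each delete is still fully logged:

```sql
WHILE 1 = 1
BEGIN
    BEGIN TRAN;

    DELETE TOP (1000) FROM ArchiveReportCurrent
    WHERE DueDate < '20020701';

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRAN;   -- nothing left to delete
        BREAK;
    END

    COMMIT TRAN;       -- commit this batch so log space can be reclaimed

    WAITFOR DELAY '00:00:10';  -- let other sessions get their locks in
END
```

Under the SIMPLE recovery model the log truncates on its own after each commit; under FULL you would also need regular log backups while the loop runs.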
To support Mohammed's point, partitioning seems to be the way to go. If you decide to partition, inserting the rows you want to keep into a new table in batches of 10,000 or so and then dropping the old table, or bulk copying out and back in, may be the best choice for the initial population of the partitioned table.
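The "copy the keepers into a new table, then drop the old one" approach mentioned above can be sketched as follows. The table and DueDate cutoff come from this thread; the Id key column and the new table's name are assumptions, so adjust them to the real schema:

```sql
-- Copy only the rows you want to KEEP into a new table, in batches,
-- walking the (assumed) Id key so each batch picks up where the last stopped.
DECLARE @batch  INT    = 10000;
DECLARE @lastId BIGINT = 0;

WHILE 1 = 1
BEGIN
    INSERT INTO ArchiveReportCurrent_New
    SELECT TOP (@batch) *
    FROM ArchiveReportCurrent
    WHERE Id > @lastId
      AND DueDate >= '20020701'   -- rows we are keeping
    ORDER BY Id;

    IF @@ROWCOUNT = 0 BREAK;

    SELECT @lastId = MAX(Id) FROM ArchiveReportCurrent_New;
END

-- Then swap the tables:
-- DROP TABLE ArchiveReportCurrent;
-- EXEC sp_rename 'ArchiveReportCurrent_New', 'ArchiveReportCurrent';
```

With 650 million of a billion rows being deleted, copying the ~350 million survivors is far less work than deleting the majority in place.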
Can you tell me how to set up table partitioning, what factors to consider, what this setup affects, and what kind of maintenance is required later?
Exploring Table and Index Partitioning in SQL Server 2005
http://www.sqljunkies.com/article/f4920050-6c63-4109-93ff-c2b7eb0a5835.scuk
Partitioned Tables and Indexes in SQL Server 2005, by Kimberly L. Tripp
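As the articles above explain, SQL Server 2005 partitioning comes down to three objects: a partition function, a partition scheme, and a table created on that scheme. A minimal sketch with hypothetical yearly boundaries on DueDate (filegroups, boundary dates, and column list are placeholders):

```sql
-- 1. Partition function: maps DueDate values to partition numbers.
CREATE PARTITION FUNCTION pfDueDate (DATETIME)
AS RANGE RIGHT FOR VALUES ('20020101', '20030101', '20040101');

-- 2. Partition scheme: maps each partition to a filegroup (all PRIMARY here).
CREATE PARTITION SCHEME psDueDate
AS PARTITION pfDueDate ALL TO ([PRIMARY]);

-- 3. Table created on the scheme, partitioned by DueDate.
CREATE TABLE ArchiveReportPartitioned
(
    Id      BIGINT   NOT NULL,
    DueDate DATETIME NOT NULL
) ON psDueDate (DueDate);

-- Removing an old date range is then a metadata-only operation instead of
-- a 650-million-row DELETE: switch the partition out to an empty staging
-- table (same structure, same filegroup) and truncate it.
-- ALTER TABLE ArchiveReportPartitioned SWITCH PARTITION 1 TO ArchiveStaging;
-- TRUNCATE TABLE ArchiveStaging;
```

The maintenance asked about above is mostly sliding-window upkeep: periodically SPLIT the function to add a new boundary and SWITCH/MERGE to retire the oldest partition.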