Hi all, I have a particular situation. We have a process (Process A) that goes through all the data inserted today in one table and performs a complex calculation using multiple tables. If a particular condition is not met, it copies the row into a table (ProcessData). This process scans more than 100K rows but inserts fewer than 1,000 rows into the table.

Then I have another process (Process B), which takes the rows from the ProcessData table one by one and runs further complex processing. Process B uses some huge, heavily used tables (30+ million rows and around 300K distinct user connections daily), but it works only with a very small subset of the data. Both Process A and Process B are Windows services running at off-peak times.

Now, what is the best way of archiving the data from the ProcessData table?

Suggestion 1: When Process B completes a row, delete the row and move the data into an archive table.

Suggestion 2: Wait until Process B completes all rows, then move all rows to the archive table and truncate ProcessData.

Suggestion 3: When Process B completes a row, mark it (using a flag). Create a separate scheduled job that periodically moves the flagged rows.

Please let me know which is the best method.
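To make Suggestion 3 concrete, here is a minimal sketch of the flag-then-sweep pattern using SQLite in Python. The table and column names (`ProcessData`, `ProcessDataArchive`, `payload`, `processed`) are hypothetical, since the actual schema isn't shown; on SQL Server the sweep would typically be an `INSERT ... SELECT` plus `DELETE` in one transaction, run by an Agent job.

```python
import sqlite3

# Hypothetical schema for illustration:
# ProcessData(id, payload, processed) and ProcessDataArchive(id, payload).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ProcessData ("
    "id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER DEFAULT 0)"
)
conn.execute("CREATE TABLE ProcessDataArchive (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO ProcessData (payload) VALUES (?)",
    [("row-%d" % i,) for i in range(5)],
)

def mark_processed(row_id):
    # Process B calls this after finishing a row: it only flips a flag,
    # so the archive table is never touched during processing.
    conn.execute("UPDATE ProcessData SET processed = 1 WHERE id = ?", (row_id,))
    conn.commit()

def archive_flagged():
    # The scheduled sweep job: copy flagged rows to the archive, then
    # delete them, inside one transaction so a crash mid-sweep cannot
    # lose rows or archive them twice.
    with conn:
        conn.execute(
            "INSERT INTO ProcessDataArchive (id, payload) "
            "SELECT id, payload FROM ProcessData WHERE processed = 1"
        )
        conn.execute("DELETE FROM ProcessData WHERE processed = 1")

mark_processed(1)
mark_processed(3)
archive_flagged()
print(conn.execute("SELECT COUNT(*) FROM ProcessData").fetchone()[0])         # 3
print(conn.execute("SELECT COUNT(*) FROM ProcessDataArchive").fetchone()[0])  # 2
```

The appeal of this pattern for your workload is that the flag update is a tiny, fast write during Process B's run, while the heavier move-and-delete work is deferred to a separate job you can schedule whenever the big tables are quietest.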