I've seen this a handful of times over the years: when updating "large" tables (say, 200 - 300 million rows), instead of updating the entire table in one statement, the developer chose to update it in chunks, maybe 10,000 or 100,000 records at a time. Apart from those few cases, I've seen far more instances where a loop was refactored into a single UPDATE and performance improved dramatically. Why is there a pattern of using loops to update chunks of data when the data set is "large"? What makes this more manageable for the server versus sticking with a single update?
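
For reference, here's a minimal sketch of the chunked pattern I'm describing, in T-SQL against a hypothetical `dbo.Orders` table with a `Processed` flag (the table and column names are just for illustration):

```sql
-- Batched update: loop until no more rows qualify.
-- Each iteration touches at most 10,000 rows, so each
-- transaction stays comparatively small.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.Orders
    SET    Processed = 1
    WHERE  Processed = 0;

    IF @@ROWCOUNT = 0
        BREAK;   -- nothing left to update
END
```

versus the single-statement version:

```sql
-- Single update: one statement, one transaction covering every qualifying row.
UPDATE dbo.Orders
SET    Processed = 1
WHERE  Processed = 0;
```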