I am developing a warehouse box-packing application that samples inventory every minute to determine, based on certain algorithms, exactly how many boxes can be packed for each product type. The inventory turns over very quickly since they pack boxes 24x7.

My plan was to truncate and insert approximately 3,000 to 5,000 records every minute or so to give the packers a constant view of which boxes are available to pack. I can't use a view (or a set of views) for this, as far as I know, because the algorithms that determine which widgets can be packed in each layer of each box are quite sophisticated: each layer targets a quality median for the sum of that layer, which requires sampling individual widget properties until the target median is found. This is done for every layer in every box, so I believe a cursor-based process is necessary -- please let me know if there's a better way.

What I'm wondering is: rather than truncating and inserting 3,000 to 5,000 rows of data every minute or two, 24x7, would it be better to create the 5,000 rows one time and re-initialize them each cycle via an UPDATE (basically keep a fixed set of records that gets reused every minute or two)? Which approach is better for (1) performance and (2) database fragmentation, etc.?
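To make the two refresh strategies concrete, here is a minimal sketch of what I mean, using SQLite via Python just for illustration (the table name, columns, and slot-keyed schema are placeholders I made up, not my real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE packable_boxes (
    slot_id INTEGER PRIMARY KEY,   -- fixed slot, needed for the UPDATE strategy
    product_type TEXT,
    boxes_available INTEGER
)""")

def refresh_truncate_insert(rows):
    """Strategy 1: wipe the table and repopulate it every cycle."""
    with conn:
        conn.execute("DELETE FROM packable_boxes")  # SQLite stand-in for TRUNCATE
        conn.executemany(
            "INSERT INTO packable_boxes (slot_id, product_type, boxes_available) "
            "VALUES (?, ?, ?)",
            rows,
        )

def refresh_update_in_place(rows):
    """Strategy 2: keep a fixed set of rows and overwrite them every cycle."""
    with conn:
        conn.executemany(
            "UPDATE packable_boxes SET product_type = ?, boxes_available = ? "
            "WHERE slot_id = ?",
            [(product, avail, slot) for (slot, product, avail) in rows],
        )

# Hypothetical sample data: 5,000 rows across 50 product types.
sample = [(i, f"product-{i % 50}", i % 10) for i in range(5000)]
refresh_truncate_insert(sample)   # initial population (or every cycle, in strategy 1)
refresh_update_in_place(sample)   # subsequent cycles reuse the same rows (strategy 2)
print(conn.execute("SELECT COUNT(*) FROM packable_boxes").fetchone()[0])  # prints 5000
```

In strategy 2 the row count never changes; only the column values are rewritten each cycle, which is what makes me wonder whether it avoids the churn of repeated truncate/insert.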