We're developing a conversion program to load data from a file-based version of our software into the new SQL-based version. The program runs with no one else accessing the database and with most (if not all) indexes on the various tables removed; they get added back at the end. The first action is to make sure there aren't any rows in any of the target tables, so that we start with a clean slate.

The program processes 100 rows from the main table in the old system, generating data in memory for a number of related tables as well, then (using ODBC) inserts the rows into each table. Some tables get more than one row per main-table row. The order in which the tables are loaded matters, because we need to learn the auto-assigned identity values and put them into foreign key columns (parents have to be added before children). This process repeats until all the data has been loaded; if it fails at any point, we abandon everything.

It would have been silly to wrap a transaction around the whole exercise, as the log would grow quite large, and it's easy enough to truncate all the tables at startup (or delete all the rows, if something prevents TRUNCATE TABLE from working). Nor did there seem to be much value in using a transaction for each group of rows. ODBC's autocommit is on by default, and we weren't doing anything explicit with transactions beyond leaving autocommit on.

Someone suggested that we should expect better performance if we placed BEGIN/COMMIT calls around the part of the code that writes the data, and our testing showed that this is in fact the case. I was surprised by the extent of the speedup; I would have thought that the writing of the new rows would dwarf the transaction overhead.
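Our real code talks to the server through ODBC, but the shape of the loop is easy to show in miniature. The sketch below uses Python's sqlite3 module (a different engine, purely for illustration) in autocommit mode, then issues an explicit BEGIN/COMMIT around each 100-row batch, feeding the parent's auto-assigned identity (lastrowid here) into the children's foreign key column. All table and column names are invented for the example.

```python
import sqlite3

def load_batches(conn, old_rows, batch_size=100):
    """Insert parent rows and their children, one transaction per batch.

    Wrapping each batch in BEGIN/COMMIT means the engine commits once
    per batch instead of once per statement (autocommit's behavior).
    """
    cur = conn.cursor()
    for start in range(0, len(old_rows), batch_size):
        batch = old_rows[start:start + batch_size]
        cur.execute("BEGIN")  # explicit transaction around the batch
        try:
            for name, details in batch:
                # Parent first: we need its identity value...
                cur.execute("INSERT INTO parent (name) VALUES (?)", (name,))
                parent_id = cur.lastrowid  # auto-assigned identity
                # ...before the children can reference it.
                cur.executemany(
                    "INSERT INTO child (parent_id, detail) VALUES (?, ?)",
                    [(parent_id, d) for d in details])
            cur.execute("COMMIT")
        except Exception:
            cur.execute("ROLLBACK")  # on any failure we abandon everything
            raise

# isolation_level=None puts sqlite3 in autocommit, like ODBC's default.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id),
                         detail TEXT);
""")
# Hypothetical old-system rows: (name, [child details]); some rows
# generate more than one child row, as in our conversion.
old = [(f"row{i}", ["a", "b"] if i % 2 else ["a"]) for i in range(250)]
load_batches(conn, old)
```

The only transaction logic is the BEGIN/COMMIT pair per batch; everything between them is the same inserts we were already doing under autocommit.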
The biggest chunk of transaction-related overhead, I'd have thought, would be writing the data needed to support rollback to the log, and that happens regardless of whether there are many individual transactions or one big one. Given that everything being done is an insert, there's not much "before-image" data being written to the log at all.

Is this a known optimization? I don't remember seeing it mentioned before, and I've read a lot. Of course, in most cases you want transactions to protect the data, but in this context it simply wasn't necessary: we think it's right to restart completely, which lets us avoid adding a lot of logic to the conversion that would let it skip work already (possibly only partly) done. (Any bug that caused a failure could have written bad data to SQL, so that's another reason to start over each time.)

Would we have seen as big a speedup if everything being done was an update to an existing row? There, the data needed to roll back would be the previous values, which is a lot more information. Any comments would be appreciated.

J. Merrill / Analytical Software Corp.
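One way to see how much of the cost is per-commit overhead (each commit forces the log to durable storage) rather than rollback data: time N single-row inserts under autocommit against the same N inserts inside one explicit transaction. The sketch below does that with Python's sqlite3 against a file-based database; it's a different engine than the one discussed above, and the row count and sizes are arbitrary, but the commit-per-statement cost it exposes is generic to journaled engines.

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(db_path, n, batched):
    """Insert n rows; return elapsed seconds.

    batched=False: autocommit, so every INSERT is its own transaction
    and its own log flush.  batched=True: one BEGIN/COMMIT around all
    n inserts, so the log is flushed once.
    """
    conn = sqlite3.connect(db_path, isolation_level=None)  # autocommit
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    t0 = time.perf_counter()
    if batched:
        conn.execute("BEGIN")
    for _ in range(n):
        conn.execute("INSERT INTO t (v) VALUES (?)", ("x" * 20,))
    if batched:
        conn.execute("COMMIT")
    elapsed = time.perf_counter() - t0
    conn.close()
    return elapsed

with tempfile.TemporaryDirectory() as d:
    per_row = timed_inserts(os.path.join(d, "a.db"), 500, batched=False)
    one_txn = timed_inserts(os.path.join(d, "b.db"), 500, batched=True)
print(f"autocommit: {per_row:.3f}s, one transaction: {one_txn:.3f}s")
```

On a file-based database the batched run is typically faster by an order of magnitude or more, even though the amount of row data written is identical, which is consistent with the speedup being per-commit overhead rather than rollback-image volume.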