I have a large table: approximately 30 million rows with 125 columns. There are 7 indexes, each on a single column; one is a clustered unique index and the other 6 are non-clustered unique. This table is part of the database of a 3rd-party application, and we are attempting to upgrade to a new version of that application.

As part of the conversion process, an ALTER TABLE command is run, adding 2 INT NOT NULL columns to the table with a default value of 0. This command has now been running for 29 hours. The database file has grown from 135 GB to 224 GB. The log file was set to 50 GB before the upgrade, and the database is in Simple recovery mode. The transaction log space used is growing very slowly: it has gone from about 1 GB used to 11 GB used. The size of the table (via Management Studio properties) is 32 GB, making it by far the largest table in this database.

We need to get this upgrade done over a weekend, but at the rate it's running it looks like we'll need a 4-day weekend to do it. Does anyone have any ideas on how to speed this up?
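For reference, the statement the conversion runs is presumably something like the following (I don't have the vendor's actual script; the table, column, and constraint names here are placeholders):

```sql
-- Hypothetical reconstruction of the vendor's ALTER TABLE step:
-- two INT NOT NULL columns, each with a default of 0.
ALTER TABLE dbo.BigTable
    ADD NewCol1 INT NOT NULL CONSTRAINT DF_BigTable_NewCol1 DEFAULT (0),
        NewCol2 INT NOT NULL CONSTRAINT DF_BigTable_NewCol2 DEFAULT (0);
```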