Spotlight on ApexSQL Log: SQL Server Audit & Recovery Tool

Multi-row statements

What may have sounded like overkill for a single-row UPDATE quickly proves almost essential when dealing with statements that affect multiple rows of a table. From my own experience in the communities, I would say these are by far the most common cases (apart from dropped objects or truncated tables). Typically you run into this situation when people with access to management utilities like SQL Server Management Studio (or Enterprise Manager and Query Analyzer in SQL Server 2000 and below), or even Microsoft Access, fire statements against the database that affect more rows than they intended. Why such people have access to these tools in the first place isn't the subject of this discussion, so let me just say that after you have recovered from such a statement, you should have a serious talk with that user and seriously rethink your permission strategy. Typically, though certainly not always, such statements are fired by "power users" who have access to these tools. But even experienced DBAs might forget to include a WHERE clause in an UPDATE or DELETE statement, especially when they are under pressure.

The steps to undo a transaction that spans multiple rows are the same as the ones I've described in the "Single row UPDATE" section. So whether you issue a statement like:
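The original statement isn't shown here, so as an illustration, assume a hypothetical Customers table; an UPDATE whose WHERE clause catches more rows than intended might look like this:

```sql
-- Hypothetical example: the WHERE clause matches far more
-- rows than the author of the statement intended
UPDATE dbo.Customers
SET Discount = 0.10
WHERE Country = 'Germany';
```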


or omit the WHERE clause entirely, the procedure is the same. From the main grid you select all the rows belonging to the transaction, check them, and create your undo script. Here's an example of what this will look like for the above statement:

The undo script, which you can run against the database, will look like this:
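The generated script isn't reproduced here, but in essence it consists of one statement per affected row, restoring the original values read from the transaction log. Continuing the hypothetical Customers example, a sketch might look like this (table, column, and key values are illustrative, not the tool's actual output):

```sql
-- Illustrative sketch of a generated undo script:
-- one UPDATE per affected row, restoring the pre-change values
UPDATE dbo.Customers SET Discount = 0.05 WHERE CustomerID = 17;
UPDATE dbo.Customers SET Discount = 0.00 WHERE CustomerID = 23;
UPDATE dbo.Customers SET Discount = 0.07 WHERE CustomerID = 42;
```

The real script would typically also wrap these statements in a transaction, so the undo itself can be rolled back if something goes wrong.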


There are actually two "methods" to remove rows from a table. One is the standard DELETE statement mentioned above; the other is the TRUNCATE TABLE statement. There are significant differences between the two, which deserve a brief explanation:

DELETE logs the data for each row affected by the statement in the transaction log and physically removes the rows from the file, one row at a time. Because all the data is logged, DELETE operations can be slow when many rows are affected. In addition to this performance issue, disk space can also become a problem, since during a large DELETE operation the log can grow massively.

To circumvent this, some people use TRUNCATE TABLE when they wish to completely flush a table. TRUNCATE is significantly faster than DELETE because of the way it "removes" rows. Actually, TRUNCATE doesn't physically remove any rows at all; it deallocates whole data pages and removes the pointers to indexes. The data physically still exists until it is overwritten or the database is shrunk. Deallocating data pages doesn't require many resources and is therefore very fast. The transaction log only records the fact that data pages were deallocated and isn't concerned with the data itself, so the log has far less work to do in a TRUNCATE TABLE operation. Both of these factors tip the scales in favor of TRUNCATE being much faster than DELETE.
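To make the contrast concrete, here are the two ways of emptying a table side by side (the table name is illustrative):

```sql
-- Fully logged: every deleted row is written to the transaction log,
-- so ApexSQL Log can reconstruct each row from the log records
DELETE FROM dbo.AuditTrail;

-- Minimally logged: only the page deallocations are recorded,
-- which is why recovery depends on the pages not yet being reused
TRUNCATE TABLE dbo.AuditTrail;
```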

I will ignore the existing limitations on the use of TRUNCATE (such as FOREIGN KEY references) and assume the user has the appropriate permissions to issue a TRUNCATE TABLE statement. To recover from that statement, you use the "Recovery Wizard".
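The statement in question would look something like this (the actual table name isn't given, so a hypothetical one is used):

```sql
-- The operation we want to recover from
TRUNCATE TABLE dbo.AuditTrail;
```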

On the second page of the wizard you select which table you want to recover.

After clicking on the Finish button, ApexSQL Log attempts to recover as much data as possible from that table. What does that mean?
Remember that a TRUNCATE operation doesn't physically remove the data; it only deallocates the data pages, and only this fact is recorded in the transaction log. As mentioned above, the data still exists until it is overwritten or the database is shrunk. So here we have the major factor that influences the success of a recovery from a TRUNCATE TABLE operation: time.

The sooner you are alerted after the TRUNCATE event has occurred, the better your chances of recovering the data. When your database has high activity, and therefore a constant need for new pages, it is merely a question of time until the just-deallocated pages are allocated to some other object and thus lost for recovery purposes. The same holds true when your database is set to auto-shrink (one of the options that shouldn't be set anyway) and the auto-shrink process kicks in, or when the database is shrunk manually. The shrink process completely removes the truncated pages from the database files, making any recovery attempt moot.
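Given that risk, it's worth verifying that auto-shrink is actually off. A quick check and fix (substitute your own database name):

```sql
-- Check whether auto-shrink is enabled (1 = on, 0 = off)
SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoShrink');

-- Turn it off if it is enabled
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;
```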


