
SQL Server Performance Forum – Threads Archive

Transaction Log Filling up during DTS run

I’m looking for a little guidance understanding how the SQL Server transaction log behaves during a DTS package run. I’ve read up on the subject at the MSDN site but am seeing some results that I don’t understand.

Background: I have a DTS package that basically reads data from an MS Access DB into a SQL Server DB. It has many steps, some of which alter the Access DB prior to the import, and some of which deal with the data in SQL Server. The data coming in from Access is fairly small, but the package moves larger amounts of data around within SQL Server.

Problem: One particular SQL Task errors off frequently. This task attempts to copy about 2 million records from one table to another within SQL Server:

insert into TABLE1
select * from TABLE2

The error is: "The log file for database ‘DB_NAME’ is full. Back up the transaction log for the database to free up some log space." I have set every single step within the package (including those that deal with the MS Access DB) to commit after successful execution of the step. The DBA says the log file for this DB is already a huge percentage of the DB and shouldn’t be increased.

Here’s the weird part. Even though this step fills up the log file when executing in the package, I can run it manually by right-clicking on the step and selecting Execute Step. It runs OK and doesn’t receive the error. My conclusion is that the step isn’t filling up the log file by itself; the package as a whole is. However, I assumed that the log would be cleared after each step since I have them all set to commit after execution.

Any guidance as to why I’m seeing this problem would be greatly appreciated. I’m looking into changing the step to commit after a certain number of records, but I suspect this won’t resolve the error since the step runs fine by itself. Is there some way to ensure that the log file is getting cleared during the execution of a DTS package?
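(For reference, here is a minimal sketch of the "commit after a certain number of records" idea mentioned above. It assumes a hypothetical unique key column ID on both tables; SET ROWCOUNT caps each batch, and because each INSERT commits as its own transaction, the log space can be reused between batches once the log is backed up, or automatically under the SIMPLE recovery model.)

SET ROWCOUNT 50000   -- hypothetical batch size
WHILE 1 = 1
BEGIN
    -- copy the next batch of rows that are not in TABLE1 yet
    INSERT INTO TABLE1
    SELECT t2.*
    FROM TABLE2 t2
    WHERE NOT EXISTS (SELECT 1 FROM TABLE1 t1 WHERE t1.ID = t2.ID)   -- ID is an assumed key
    IF @@ROWCOUNT = 0 BREAK   -- nothing left to copy
END
SET ROWCOUNT 0   -- turn the limit back off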
The way the system handles the transaction log depends on the recovery model selected for the database. Under the SIMPLE recovery model the server manages the log file(s) for you and keeps them trimmed when it can. While that is nice, the problem arises if you have a crash and need to recover: under SIMPLE you can only restore from your last full backup. The FULL recovery model lets you restore from your last full backup, any differential backups, and any number of log backups you may have taken. Obviously this provides the most complete options; however, it carries the burden of proper log file maintenance. Typically the DBA will have a job that runs hourly (or on some set schedule) to back up the log file. Once the system knows those transactions are secured, it allows the space in the log file to be reused for new transactions. However, if nothing is done between full backups, the log file simply continues to hold every transaction that has occurred since the last full backup, so that it can restore them. Meaning, your DBA may have nothing in place to care for the log files at all, which isn’t a good approach. Or they may have something in place, and you just happened to get lucky and run in Query Analyzer right after a log backup had been performed, while your DTS package failed because it ran when the log was already nearly full and before the log was backed up.
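As a quick sanity check, something like this shows which model a database is using and how full its log is (‘DB_NAME’ again standing in for the real database name):

SELECT DATABASEPROPERTYEX('DB_NAME', 'Recovery')   -- returns SIMPLE, FULL, or BULK_LOGGED
DBCC SQLPERF(LOGSPACE)   -- log size and percent used for every database on the server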
Resolutions:
1. If your DBA has no active policy for backing up the log files, they need to put one in place before you worry about this one particular query.
2. Assuming you now have a policy in place that backs up the log files, your DBA can give you the script that performs the log backup, and you can simply have your job run that command before it runs your package. That way it won’t matter whether you run before or after the system’s planned backup, because you back up the log as part of your job.
3. Your DBA can put a performance alert in place that catches when the log hits 90% full (or whatever value they deem appropriate) and kicks off the log backup job regardless of when it last ran; see the sketch below.
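For example, on SQL Server 2000 an alert along these lines could do it; the job name 'Backup DB_NAME Log' is just a placeholder for whatever log backup job the DBA already has:

-- rough sketch: fire the existing log backup job once the log passes 90% used
EXEC msdb.dbo.sp_add_alert
    @name = 'DB_NAME log nearly full',
    @performance_condition = 'SQLServer:Databases|Percent Log Used|DB_NAME|>|90',
    @job_name = 'Backup DB_NAME Log'   -- hypothetical job name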
On top of what druer said, you can add a backup transaction log step before and after the critical step’s execution.
Thanks for the explanations. Can I back up the transaction log with an SQL Task or is this accomplished via some other DTS object?
You can definitely do log backups using T-SQL commands. The following command backs up the log for the database called WorkOrder to a backup device named "WorkOrder_LogBackup". (Clever naming convention, huh!)
BACKUP LOG WorkOrder TO WorkOrder_LogBackup
By default that command appends each new log backup to the device, and it will keep appending until you "initialize" the backup file with a command like this:
BACKUP LOG WorkOrder TO WorkOrder_LogBackup WITH INIT
I do the "WITH INIT" immediately following my nightly database backup. In other words, once I know everything in the data has been safely backed up, I am free to "reinitialize" the space that was being used. An alternative method is to have the log backups written as new files each time with a date/time stamp. Then you can move those off on some schedule, or have them start overlaying themselves automatically at some point, like "I want to keep the files for 4 days and just throw away the files from 4 days prior once I write the first one for the 5th day." You can find all of the details, and probably better explanations, if you look up the BACKUP LOG command online.
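To illustrate the date/time-stamped variant, a rough sketch (the WorkOrder database name is from above; the backup path is a placeholder, and purging old files would be a separate cleanup step):

DECLARE @file varchar(260)
SET @file = 'D:\Backups\WorkOrder_log_'
    + CONVERT(varchar(8), GETDATE(), 112)   -- yyyymmdd
    + '_' + REPLACE(CONVERT(varchar(8), GETDATE(), 108), ':', '')   -- hhmmss
    + '.trn'
BACKUP LOG WorkOrder TO DISK = @file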