SQL Server Performance Forum – Threads Archive
transaction log file size

First, let me say thanks in advance for any help/feedback provided. We have an ecommerce site/client that we are working with that uses an MS SQL 2000 database as the backend. Hardware specifications (in case they are needed): dual 2.4 GHz Xeons, 3 GB memory, 3x36 GB SCSI drives running RAID 5.

The problem we are seeing is abnormal transaction log file size and growth. Specifically, the MDF file is 4.7 GB, and as of today the LDF is 25.7 GB. Yes, that is more than 8 times the size of the data file. My/our understanding is that (generically speaking) upon completion of database backups the LDF would be committed to the data file and thereby keep the files in check. In this case, we have tried setting maintenance plans in place and we have taken manual backups, but the process does not seem to be occurring.

While reading through some of the other threads on similar topics, I saw suggestions to run DBCC CHECKPOINT, DBCC SHRINKFILE, detach the db and auto-generate a new log file, and a host of other suggestions. Approximately one month ago, while working with the client, we encountered the log file at a size of 7 GB. At the time it raised an alarm, and we subsequently went through these processes and ultimately detached the db and regenerated a new log. It is since that time that the log file has built its way up to 25 GB (less than a month).

In terms of database activity, we are not talking about a large number of db transactions per day/hour: there is no serialized inventory, only a handful of sales orders each day, and so forth.

One last thing to note: we have several other sites/clients running the same ecommerce system that we work with / manage, all with relatively close server specifications. Below are samples of the MDF/LDF sizes of those sites (simply showing that there is specifically something wrong with this server/database/environment that is causing this problem).

server MDF (in GB) LDF (in MB)
sporting 8.8 18.3
home&garden2 8.7 1.2
home&garden 7.4 63
PROBLEM SITE 4.7 25,757
apparel 2.6 1
hobbyshop 2 1
home&garden3 0.9 1
music 0.5 1

Again, thanks for any feedback/suggestions/thoughts on what might be going wrong and any steps that could be taken to resolve this issue.
On-Demand Dynamic Marketing Solutions
Hi,
1) Are you performing re-indexing?
2) Defragmentation of indexes?
3) Is there any bulk insertion of records?

Well, you may refer to the KB articles below:
http://support.microsoft.com/?id=317375
http://support.microsoft.com/?id=110139

Regards,

Hemantgiri S. Goswami
[email protected]
"Humans don't have Caliber to PASS TIME, Time itself Pass or Fail Humans" – by Hemantgiri S. Goswami
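Before changing anything, it is worth checking how much of the log is actually in use and whether a long-running open transaction is pinning it (an open transaction prevents log truncation no matter how often backups run). A minimal diagnostic sketch using standard SQL Server 2000 commands; the database name `ProblemSiteDB` is a placeholder for the affected database:

```sql
-- Show log file size and percent used for every database on the server
DBCC SQLPERF(LOGSPACE);

-- Report the oldest active transaction in the problem database;
-- an old open transaction blocks log truncation
USE ProblemSiteDB;  -- placeholder name
DBCC OPENTRAN;
```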
If running a DBCC SHRINKFILE or an intermittent CHECKPOINT doesn't get you results, it is better to address the problem with a long-term plan. How frequently is the transaction log backed up?
What is the recovery model on the database?
What is the schedule of the database maintenance jobs? If you are not performing regular transaction log backups, you will see exactly this effect: the log grows very large. There isn't a problem with SQL Server; refer to Books Online on transaction log architecture for further information.
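To answer the questions above on SQL Server 2000, the recovery model can be checked and a log backup taken with statements along these lines (the database name and backup path are placeholders):

```sql
-- Report the current recovery model (FULL, BULK_LOGGED, or SIMPLE)
SELECT DATABASEPROPERTYEX('ProblemSiteDB', 'Recovery') AS RecoveryModel;

-- Back up the transaction log; in FULL recovery this is what marks
-- the inactive portion of the log as reusable
BACKUP LOG ProblemSiteDB
TO DISK = 'D:\Backups\ProblemSiteDB_log.bak';  -- placeholder path
```

Scheduling a log backup like this every 15–60 minutes (via a maintenance plan or SQL Agent job) is what keeps a full-recovery log from growing without bound.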
SQL Server 2000 always generates automatic checkpoints. The interval between automatic checkpoints is based on the number of records in the log, not time, so it can be highly variable: it is long if few modifications are made in the database, and checkpoints occur frequently if a lot of data is modified.

The interval between automatic checkpoints is calculated from the recovery interval server configuration option, which specifies the maximum time SQL Server should use to recover a database during a system restart. SQL Server estimates how many log records it can process in the recovery interval during a recovery operation. The interval also depends on whether or not the database is using the simple recovery model. If the database is using either the full or bulk-logged recovery model, an automatic checkpoint is generated whenever the number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.
If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:
The log becomes 70 percent full.
The number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.
Automatic checkpoints truncate the unused portion of the transaction log if the database is using the simple recovery model. The log is not truncated by automatic checkpoints if the database is using the full or bulk-logged recovery models.
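If point-in-time recovery is not actually needed for this database, the usual SQL Server 2000 approach is to switch it to the simple recovery model (so checkpoints truncate the log automatically), truncate the existing log, and shrink the physical file. This is a sketch only, not something to run blindly in production; the logical log file name and target size are placeholders:

```sql
-- Switch to simple recovery if point-in-time restore is not required
ALTER DATABASE ProblemSiteDB SET RECOVERY SIMPLE;

-- Discard the inactive portion of the log without backing it up
-- (SQL 2000 syntax; removed in later versions)
BACKUP LOG ProblemSiteDB WITH TRUNCATE_ONLY;

-- Shrink the physical log file back to a sensible size (target in MB)
DBCC SHRINKFILE ('ProblemSiteDB_log', 500);  -- logical file name is a placeholder
```

If full recovery must be kept, skip the `ALTER DATABASE` and `TRUNCATE_ONLY` steps and rely on regular `BACKUP LOG ... TO DISK` jobs instead; shrinking then only reclaims space already freed by those backups.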
Contributing Editor & Forums Moderator
This posting is provided "AS IS" with no rights, for the sake of knowledge sharing.