MANUAL Log Shipping | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


My configuration has good news and bad news. The good news: my database is pretty small (~30 MB), I'm using SQL Server 2000 Enterprise on both machines, and I have full control (of the databases, at least). The bad news: my two machines cannot see each other. All I have is a one-way FTP poke through a firewall from the production machine to the reporting one. I have implemented a manual form of log shipping to synchronize my reporting machine with the production one (some lag is fine). Here are my steps:

On the production machine:
* full data backups (weekly)
* DTS package that performs the following (3 hour cycle)
– query a tracking db for next log number to use
– log backup (using next log number as filename)
– build an ftp command file
– execute the ftp command file
– log the log number in the tracking db, along with certain stats on the reporting machine
On the reporting machine:
* VB executable that checks the FTP drop site for new files and grabs them (30 min cycle)
* DTS package that performs log restores (also uses a tracking db to keep track of which log numbers have been applied, and to know which file to look for next)

This ran fine for about six months. Recently, however, my log files have gone insane. Typical file size for the log backups had been in the ballpark of 150–200 KB. Then, probably due to my general incompetence with the production server's maintenance plan, the transaction log file grew (at one point it had allocated about 850 MB), and the transaction log backups became erratic, some as large as 1.5 GB. I managed (with a fair amount of reading) to whittle the log file back down to 2 MB. I then added a shrink command to the DTS package on the production machine, let it run for a night, and that helped (though not completely; the log file still managed to jump up to about 450 MB). Playing with my DTS package, it seemed that if I ran it 2 or 3 times in a row, the log file would stabilize at 2 MB. So I added another shrink command to the sequence (the two shrink commands now straddle the BACKUP LOG command), and that seems to be holding it steady (it did for one night anyway; I'll know more Monday). Meanwhile, I'm mystified as to what the heck is going on in my transaction log. The db is not that busy. And when I ran that info command against the system file (when the log was huge), the status column was mostly 0 (which is why the shrink command is helping, I think). So my questions (finally) are:
1. Has anyone else out there had experience implementing log shipping under the constraints of my configuration (the machines can't see each other, so they can't register each other, etc.)?
2. How can I look inside my log file and find out what is causing it to balloon up like that?
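For reference, the 3-hour backup step described above, with the two shrink commands straddling the log backup, can be sketched in T-SQL. All names here (the tracking database, tables, paths, and logical log file name) are made up for illustration:

```sql
-- Sketch of the 3-hour backup step (all names are hypothetical).
DECLARE @n int, @file varchar(128)

-- Query the tracking db for the next log number to use.
SELECT @n = MAX(LogNumber) + 1 FROM ShipTracking..LogsSent

-- Shrink #1 (the commands straddle the backup, as described above).
DBCC SHRINKFILE (MyDb_Log, 2)

-- Back up the log, using the next log number as the filename.
SET @file = 'D:\LogShip\MyDb_' + CONVERT(varchar, @n) + '.trn'
BACKUP LOG MyDb TO DISK = @file

-- Shrink #2.
DBCC SHRINKFILE (MyDb_Log, 2)

-- Record the shipped log number in the tracking db.
INSERT ShipTracking..LogsSent (LogNumber, BackupDate) VALUES (@n, GETDATE())
```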
1. First of all, you can take advantage of PROFILER or server-side traces to monitor the database activity. Check the log sizes before the maintenance plan's scheduled run and again after execution to see the difference. My inclination is that the maintenance plan is contributing a good deal of the space in the transaction log. 2. You need a third-party tool to view the transaction log; DBCC LOG doesn't give you much joy. One more thing: if the transaction log bulges up to 400 KB in a week, then I suggest sizing it at 500 KB instead of adding a scheduled job to shrink the transaction log file. BTW, are you using the standard log shipping monitor or your own coded log shipping? And how is a DTS package taking care of the log restores on the secondary server? Satya SKJ
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
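Alongside the Profiler trace suggested above, one simple before-and-after check of log usage is DBCC SQLPERF, which needs no trace setup:

```sql
-- Report the size and percent-used of every database's transaction log.
-- Run this just before and just after the maintenance plan executes
-- to see how much space the plan itself contributes.
DBCC SQLPERF (LOGSPACE)
```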
1. Can Profiler be used to monitor what goes into the transaction log? Or is there a way to use Profiler to look at the actual db activity and gauge how large the transaction log should be? 2. Any recommendations on which third-party tools to use? Actually, 400 KB is way, way less than what I'm seeing. My transaction log grew to 200 MB overnight, on a 38 MB database, and this was with transaction log backups happening every 3 hours. I am having to use my own coded log shipping since (correct me if I'm wrong) the built-in log shipping wizard requires the two servers to be able to see/register each other. I do not have that luxury; they are on physically separate networks. The only connection I have is a one-way FTP poke through a firewall, which, btw, the built-in FTP tasks do not work with (I doubt I can explain the nature of that firewall completely; basically, the FTP session gets routed to the correct machine based on what login gets passed to the firewall, which ends up being an extra login step the FTP task can't handle). This is also why I have a DTS task doing the log restores on the target machine (is there a way to configure the built-in log shipping to allow for a 'blind' connection?).
Yes, PROFILER will let you monitor all the transactions and activity against the database. Refer to these links about third-party tools to read the Tlog:
Using SQL Agent scheduled jobs, you can perform your own log shipping rather than going with VB or DTS to restore the log. Refer to this link about performing your own log shipping with a set of scheduled jobs. Moreover, if the current approach is working fine, then don't worry much about it. The main reason behind the log bulge is that you're using DTS to perform a set of activities which contribute to the transaction size. HTH Satya SKJ
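The restore side of such a job-based scheme might look something like the following sketch. The names are hypothetical; the WITH STANDBY option is one way to keep a reporting database readable between restores:

```sql
-- Sketch of the restore step on the reporting machine (names are hypothetical).
DECLARE @n int, @file varchar(128)

-- Next log number expected, from the local tracking db.
SELECT @n = MAX(LogNumber) + 1 FROM ShipTracking..LogsApplied

SET @file = 'D:\LogShip\MyDb_' + CONVERT(varchar, @n) + '.trn'

-- STANDBY leaves the database readable between restores,
-- which is what you want on a reporting server.
RESTORE LOG MyDb FROM DISK = @file
    WITH STANDBY = 'D:\LogShip\MyDb_undo.dat'

-- Record the applied log number so the job knows which file to look for next.
INSERT ShipTracking..LogsApplied (LogNumber, RestoreDate) VALUES (@n, GETDATE())
```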
ty for refs and the point about DTS bulging the logs… Sounds like if I redo the log shipping using stored procedures, and get rid of the shrink commands, I should be able to get control of this (though I won’t be able to fully implement the shipping in the article since I can’t set up linked servers)… tanstaafl
Instead of relying on DTS to restore, you can take care of the process with ISQL, since as you say the two SQL Servers cannot be linked on the same network. That way SQL may not put as much stress on the Tlog size. HTH Satya SKJ
Log Explorer turned out to be the ticket to finding out what my problem was (I obtained an evaluation copy; now I'm going to have to get a purchase approved, it's just too good a tool to pass up). Turns out the huge log files were due to bad programming on my part (big surprise there, still working on some parts of my learning curve hehehe).
Using Log Explorer, I was able to see that some of my transactions were unbelievably huge. While most consisted of a single line (my front end is all web enabled), inserts to one particular table resulted in a little over 40,000 operations. The sequence went like this:
* INSERT data for the new row
* DELETE a different row
* re-INSERT the exact same data just deleted into a new row (and I mean exact, even the primary key)
* DELETE another row
* re-INSERT the exact same data just deleted into a new row
<<lather, rinse, repeat until ALL rows in the table had been DELETED and re-INSERTED, even ending with a DELETE/re-INSERT of the very first row of the transaction>>

Since the table had around 20,000 rows, every new insert into this table caused a transaction consisting of over 40,000 operations. And now the part where I explain the stupid thing I did... I was having trouble with a user-supplied date field, and was experimenting with different ways of converting the text entry into a valid date-time field.
One of the things I tried was a trigger. The way it worked was simple: insert the supplied text into a text field, then have a trigger attempt to copy the string into a date-time field. If the string did not parse correctly, it did not kill the user's input; it just failed to copy/convert it into a valid date-time (the date was not the only data being entered on the form). Clumsy, I know, but I was learning. Now comes the really bad part. At the time I was playing with this, I was under the mistaken notion that such a trigger would automatically have only the scope of the row that caused it, so I failed to include a WHERE clause to limit it to the new row. Consequently, every time there was an insert, the trigger went through and applied the update to every row in the table. The part that I do find odd is that even though the trigger procedure was written as an UPDATE, it was implemented in the log as a DELETE followed by an INSERT. So now the trigger is gone, and I am fine-tuning some client-side JavaScript to screen the date string input before submitting the form (anyone have some handy that takes into consideration all the long month/short month/leap year stuff?).
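For anyone who hits the same trigger mistake: a trigger fires once per statement, not once per row, so without a join to the `inserted` pseudo-table the UPDATE hits every row in the table. A sketch of the bug and the fix, with made-up table and column names:

```sql
-- Hypothetical table: EntryText is the user's raw input, EntryDate the parsed value.

-- The buggy version: nothing ties the UPDATE to the inserted row,
-- so every insert rewrites the entire table (20,000 rows = ~40,000 log operations).
CREATE TRIGGER trg_ParseDate_Bad ON MyTable FOR INSERT AS
    UPDATE MyTable
    SET EntryDate = CONVERT(datetime, EntryText)
    WHERE ISDATE(EntryText) = 1
GO

-- The corrected version: join to the inserted pseudo-table so only
-- the newly inserted row(s) are touched.
CREATE TRIGGER trg_ParseDate ON MyTable FOR INSERT AS
    UPDATE m
    SET m.EntryDate = CONVERT(datetime, m.EntryText)
    FROM MyTable m
    JOIN inserted i ON m.RowId = i.RowId
    WHERE ISDATE(m.EntryText) = 1
GO
```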