Log Shipping and Compression | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Log Shipping and Compression

I have a small network pipe between the two servers and want to set up log shipping. I take TLog backups every 15 minutes. Overnight I also run imports, which push the 15-minute TLog backup files to ~200 MB. I get about 3 MB per minute over the cross-country link, so a 200 MB file (holding only 15 minutes of data) takes over an hour to copy, and the copy process backs up for hours. It usually catches up by the time the business day starts. The powers that be are not sold on a larger network pipe when we run under capacity for most of the day, so I thought compression might be another solution. I have experience with SQL LiteSpeed for compression, but I don't believe we can integrate it into a fully Microsoft-supported log shipping configuration. Anyone have ideas on how to get around my small network pipe?
Basic connectivity is not the only thing to take into consideration. Log shipping does not apply any compression to the transaction log files or to the initial backup made by the Database Maintenance Plan Wizard, so the files copied over the network can be large. As these files grow, they take longer to copy and slow down the log shipping process, which is why a properly designed network with low latency is important. You can compensate for the size of the files and the resulting lag by increasing the load delay.
Yes, either use the third-party tool SQL LiteSpeed, which is a good one for compression and speedy backups, or stay with the current configuration. If there is any chance of increasing the pipe to a larger size, then plain log shipping will do the job. I believe log shipping support is available in that tool; see http://www.sql-server-performance.com/sql_litespeed_spotlight.asp fyi. Satya SKJ
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
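For reference, LiteSpeed drives its compressed backups through extended stored procedures, so a log backup job step would look roughly like the sketch below. The database name and path are hypothetical, and the parameter names are from memory, so check the LiteSpeed documentation for the exact signature:

    -- Compressed transaction log backup via LiteSpeed's extended stored procedure
    -- (hypothetical database name and path; parameter names from memory,
    -- verify them against the LiteSpeed docs before using)
    EXEC master.dbo.xp_backup_log
        @database = 'MyDatabase',
        @filename = 'E:\LogShip\MyDatabase_tlog.bkp',
        @compressionlevel = 3

A higher compression level trades backup CPU for a smaller file, which is exactly the trade-off a slow cross-country link wants.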
I wonder what effect it would have if I just added a compression step to the transaction log backup job and an uncompress step to the restore job. Thanks for the information, Satya. I had seen that LiteSpeed may help in this area, but I want to keep it all Microsoft for support.
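As a rough sketch of that idea (assuming xp_cmdshell is enabled and gzip.exe is on the path; the database name and paths below are made up for illustration), the backup job would gain a second step that shells out to the compressor once the log backup completes:

    -- Job step 1: the normal transaction log backup (hypothetical names/paths)
    BACKUP LOG MyDatabase
        TO DISK = 'E:\LogShip\MyDatabase_tlog.trn'
        WITH INIT

    -- Job step 2: compress the finished file so only the smaller .gz file
    -- travels over the slow link (assumes gzip.exe is on the PATH)
    EXEC master.dbo.xp_cmdshell 'gzip -9 E:\LogShip\MyDatabase_tlog.trn'

The restore job would mirror this with a gzip -d step before its RESTORE LOG ... WITH NORECOVERY step.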
I believe you have the option to compress the file using WinZip or other tools, but calling those compression tools from within SQL Server may have a negative effect on memory consumption if the server is used extensively. There should be some solution at the operating system level to compress the files once SQL Server completes the backup; check the Windows scheduler information on TechNet for this purpose. Satya SKJ
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
I would echo what Satya says about using a third-party tool like SQL LiteSpeed. Although it is not Microsoft technology, I know that Microsoft use it extensively internally.

Regarding the option of doing the compression yourself, I think that could work, but it could be a bit tricky. I've used gzip (from www.gzip.org) for compressing full database backups and it works pretty well, and I'm sure there is a host of other free compression tools on the market.

The copy job (on the secondary) is going to be looking for a log backup file. When you compress the file it will have a different extension, so the copy job will miss it. You either need to give the compressed file the same extension you're using for the log backup file, or you need to update the relevant tables so that SQL Server copies a different file from the one it thinks it should.

Like I said, it could get a bit tricky, but interesting :)

The other thing you might want to look at is scheduling the log backup to run every 5 minutes overnight, so that each log backup file is smaller. The total throughput is the same, but it's better to copy 10 small files across the network than 1 massive file. I doubt it will yield significant improvements, but it might be worth a shot.

At the end of the day, though, I would say getting a bigger pipe is the best option. If the powers that be want a redundant solution then they have to pay up, or accept that they might not get what they hoped for if things go pear-shaped. It's always the way with management: they don't see any value in spending money on their DBAs' recommendations because they don't see a return on investment, until the primary server goes down in the middle of the night and they find they cannot go live on the secondary because it is 4 hours behind on log shipping ;)

Good luck with that.

quote: Originally posted by biged123456
I wonder what effect it would have if I just added a compression step to the transaction log backup job and an uncompress step to the restore job. Thanks for the information, Satya. I had seen that LiteSpeed may help in this area, but I want to keep it all Microsoft for support.

Karl Grambow
www.sqldbcontrol.com
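To make the extension trick above concrete, here is one hedged sketch of how it might look, again via xp_cmdshell with hypothetical paths: compress the backup on the primary and then rename the .gz file back to the extension the copy job expects, and have the secondary reverse the renaming before its load step:

    -- Primary: compress, then give the file back the extension the copy job expects
    EXEC master.dbo.xp_cmdshell 'gzip -9 E:\LogShip\MyDatabase_tlog.trn'
    EXEC master.dbo.xp_cmdshell 'ren E:\LogShip\MyDatabase_tlog.trn.gz MyDatabase_tlog.trn'

    -- Secondary: put the .gz extension back and decompress before the restore runs
    EXEC master.dbo.xp_cmdshell 'ren F:\LogShip\MyDatabase_tlog.trn MyDatabase_tlog.trn.gz'
    EXEC master.dbo.xp_cmdshell 'gzip -d F:\LogShip\MyDatabase_tlog.trn.gz'

Note that the built-in copy and restore jobs know nothing about these extra steps, so the timing of the rename and decompress relative to those jobs would need care, which is the tricky part Karl mentions.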
Bill Wunder has done some excellent work on LiteSpeed and log shipping: http://b.wunder.home.comcast.net/15758.htm. He has written the T-SQL tasks and procedures to fully implement it.
The only tweaks I had to make were due to our server and database names being a bit on the long side. Regards
Paul
