SQL Server Performance

Logship a large transaction log due to reindexing

Discussion in 'SQL Server 2005 Log Shipping' started by WingSzeto, Sep 25, 2007.

  1. WingSzeto Member

    We are running SQL Server 2005 SP2. I have set up log shipping from server A to server B and server C. Servers A and B are on the same LAN, and server C is in a different location about 20 miles from server A, connected to it via a VPN. Our transaction log is backed up and transferred every 15 minutes, and its size ranges from 20 MB to 100 MB. The VPN speed is acceptable: it takes about 3~7 minutes to transfer 20 MB of data, depending on traffic.
    We reindex once a month, and when this happens it creates a huge transaction log, up to 15~20 GB in size. Obviously, at our VPN transfer rate, it could take days to copy that transaction log to server C. I am wondering if anyone has run into this type of issue and has a good way to handle it.
    Wingman
  2. satya Moderator

    What you could do is stop the log shipping and, once the maintenance tasks are finished, simply restore the database from the primary server to server C. Alternatively, you could perform reindexing on selected tables only and take intermittent log backups to keep the log size down.
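    The intermittent log backups could simply be the existing log shipping backup job, triggered between maintenance steps. A minimal sketch (the job name below is an assumption, based on the default naming the native log shipping wizard uses):

    ```sql
    -- Start the existing log-shipping backup job so the log is backed up
    -- (and can be truncated) before the next maintenance step runs.
    -- The job name is a placeholder; check yours in SQL Server Agent.
    EXEC msdb.dbo.sp_start_job @job_name = N'LSBackup_MyProductionDB';
    ```

    Using the log shipping job itself (rather than an ad hoc BACKUP LOG) keeps the backup in the normal shipping chain, so the secondary servers stay in sync.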
  3. WingSzeto Member

    Thanks for the ideas. I have been thinking the same. To restore the db and re-activate log shipping on both server B and server C, I need to restore the db to both servers, which is fine. The only problem is that the whole process of restoring the db to server C will take 8 hours, mainly because the VPN line is slow and I have to zip and unzip the db. Even zipped (about 5 GB), the db takes 3 hours to transfer.
    As for your idea of reindexing selected tables and taking intermittent log backups to keep the size down: log shipping is currently set up to back up the transaction log every 15 minutes. Are you suggesting that I disable the log shipping backup schedule, reindex selected tables, manually run the log shipping job when done, then start another reindex once the backup finishes, and repeat the process?
    wingman
  4. satya Moderator

    No, don't disable log shipping in any case. For the selective reindexing, make sure to carry it out with a script on selected tables rather than using a DB maintenance plan. I manage maintenance on a 500 GB database every weekend between two geographical sites without any issues, by managing the log size.
    You can take advantage of third-party tools such as Red Gate's or Quest's for backup compression and transfer between the sites. If site C is only an additional contingency, then I don't believe you should worry too much about the speed.
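    A selective-reindex script of the kind described might be sketched like this on SQL Server 2005 (table names and the fill factor are placeholders, not from the thread):

    ```sql
    -- Rebuild indexes on a hand-picked list of large tables one at a time,
    -- so a transaction log backup can run between tables and each shipped
    -- log file stays small. Table names and fill factor are placeholders.
    DECLARE @tables TABLE (name sysname);
    INSERT INTO @tables (name) VALUES (N'dbo.Orders');
    INSERT INTO @tables (name) VALUES (N'dbo.OrderDetails');

    DECLARE @t sysname, @sql nvarchar(400);
    DECLARE c CURSOR FOR SELECT name FROM @tables;
    OPEN c;
    FETCH NEXT FROM c INTO @t;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'DBCC DBREINDEX (''' + @t + N''', '''', 90)';
        EXEC (@sql);  -- rebuild all indexes on this table, fill factor 90
        -- take (or trigger) a transaction log backup here before moving on
        FETCH NEXT FROM c INTO @t;
    END
    CLOSE c;
    DEALLOCATE c;
    ```

    The point of the loop is the pause between tables: each table's rebuild generates a bounded amount of log, which gets backed up and shipped before the next rebuild starts.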
  5. WingSzeto Member

    Sorry, I didn't mean disable log shipping; I meant disabling the backup and copy jobs and manually executing them after each selective reindexing job is done. Even so, I would still need to zip the log file, because it will still be 2~4 GB. We have 30 users sharing the same VPN line, and that's why I can't afford to keep the line busy all day with a large log file.
    OK, let's follow your idea of using a third-party tool to compress and transfer log files between sites. How would the tool work together with the log shipping process? Is it possible to configure log shipping to use a third-party tool for the compression and transfer? I don't see that the log shipping setup wizard allows me to incorporate another tool to do the backup and copy, do you? Does that mean I have to create my own custom log shipping script instead of using SQL Server's built-in log shipping process? I assume you use a third-party tool to do log shipping, right?
    Wingman
  6. satya Moderator

    If you go with a third-party tool, then there is no need to use the native log shipping process; I believe tools like Quest LiteSpeed have such a facility. For more information on these tools, refer to the Software Spotlight articles section on this website.
    As for the backups and reindexing, you will be performing them during off-peak hours anyway, so I would run them in the early hours when there are few users. If your shop is 24/7, then you have to set aside such a maintenance window for better performance.
  7. WingSzeto Member

    Thanks for the advice. For now, I will use the native log shipping process. Here is what I am going to do; tell me if it is OK.
    Before the reindex starts, I disable the log shipping backup job on the primary server and the log shipping copy and restore jobs on the standby server. I start the reindexing. Once the reindexing is done, I run the log shipping backup job manually (still leaving it disabled), zip the transaction log file, and manually transfer it over to the standby server. After unzipping the transaction log file, I run the log shipping restore manually (again leaving the job disabled). Once the restore is done, I re-enable all the log shipping jobs on both servers.
    Since I am doing the above manually, when I re-enable the log shipping jobs, will log shipping know that I have already copied and restored the latest transaction log manually, and not copy and restore it again? My thinking is yes, because I believe there is a mechanism within the log shipping process to determine that. Please let me know your thoughts.
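    The disable/enable steps above can themselves be scripted rather than done through the SQL Server Agent UI. A sketch (job names are assumptions based on the default log shipping naming; run the first block on the primary and the equivalent for the copy/restore jobs on the standby):

    ```sql
    -- On the primary: disable the log-shipping backup job before reindexing.
    -- Job name is a placeholder; check yours under SQL Server Agent > Jobs.
    EXEC msdb.dbo.sp_update_job
        @job_name = N'LSBackup_MyProductionDB', @enabled = 0;

    -- ... reindex, then manual backup / zip / transfer / restore ...

    -- Afterwards, re-enable the job (repeat with the LSCopy_* and
    -- LSRestore_* job names on the standby server).
    EXEC msdb.dbo.sp_update_job
        @job_name = N'LSBackup_MyProductionDB', @enabled = 1;
    ```

    `sp_start_job` can likewise replace the "run the job manually" steps, which makes the whole sequence schedulable as a single monthly Agent job.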
    wingman
  8. satya Moderator

    Can you handle that manual process on a regular basis? Such activities should always be handled seamlessly; avoid manual processes wherever possible.
    There may be a glitch in log shipping's tracking of whether the previous log shipping job completed, but you could try it once to see whether it works.
    The major problem here is the bandwidth between the sites; any improvement there will go a long way toward getting the logs onto the standby server.
  9. SQLServerUser42 New Member

    I just read the article "How to stop the transaction log of a SQL Server database from growing unexpectedly" ( http://support.microsoft.com/kb/873235 ) and think this section might be of interest to you:
    To defragment the indexes to benefit the workload performance in your production environment, use the DBCC INDEXDEFRAG Transact-SQL statement instead of the DBCC DBREINDEX Transact-SQL statement. If you run the DBCC DBREINDEX statement, the transaction log may expand significantly when your SQL Server database is in Full recovery mode. Additionally, the DBCC INDEXDEFRAG statement does not hold the locks for a long time, unlike the DBCC DBREINDEX statement.
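    As a concrete sketch of the INDEXDEFRAG alternative (database, table, and index names below are placeholders):

    ```sql
    -- DBCC INDEXDEFRAG works in many small transactions, so regular log
    -- backups taken while it runs keep the transaction log from growing
    -- the way a single large DBCC DBREINDEX would.
    DBCC INDEXDEFRAG (MyProductionDB, 'dbo.Orders', 'PK_Orders');
    ```

    The trade-off is that INDEXDEFRAG only reorders existing pages (it does not rebuild the index or update statistics), so it may leave more fragmentation behind than a full rebuild.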

    For additional information about defragmenting the indexes in SQL Server 2000, see the following Microsoft Web site:
    http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
    If you must run the DBCC DBREINDEX statement as a job that is part of the database maintenance plan, you must break the job up into multiple jobs. Additionally, you must take frequent backups of the transaction logs between the execution of the jobs.
    Good luck!