SQL Server Performance

Performance Problem Log shipping SQL2005 standard edition

Discussion in 'SQL Server 2005 Log Shipping' started by Twingo, Dec 2, 2008.

  1. Twingo New Member

    We have two SQL Server 2005 Standard Edition servers with about twenty databases of 1 to 3 GB each. We set up log shipping with a one-minute interval, and the more databases we added, the worse our application's performance became. It got so bad that we had to stop log shipping.
    If you have any ideas, best practices, or anything else that could help resolve this problem, I would appreciate it.
    I'm a beginner in the SQL world, and my English is school English, so sorry if it's not good (Google Translate is my best friend!)
  2. Elisabeth Redei New Member

    Hi T,
    I am surprised to hear that; the only thing I can think of is that you have contention on your disks. How have you configured your disks in terms of where your database and log files reside, and to which drive do you direct your transaction log backup files?
    Did you have a look at Performance Monitor for the following counters:
    Logical Disk:
    - % Disk Time
    - Current Disk Queue Length
    - Avg. Disk sec/Read and Avg. Disk sec/Write
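    If Performance Monitor is awkward to set up, the same latency picture can be had from inside SQL Server. A sketch, assuming SQL Server 2005 or later (the DMV and `sys.master_files` exist from 2005 onward); it shows average read/write latency per database file:

    ```sql
    -- Per-file I/O latency in milliseconds, an alternative to the
    -- Logical Disk counters in Performance Monitor.
    SELECT DB_NAME(vfs.database_id)                           AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY avg_write_ms DESC;
    ```

    Sustained double-digit millisecond averages on the log files would point to the same disk contention the counters above would show.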
  3. Twingo New Member

    Hi Elisabeth,
    thanks for your answer.
    We have seen a sharp increase in Current Disk Queue Length.
    We are on a SAN from NetApp platform.
    Do you have a third SQL server can relieve the other two, and if so, how do you use?

    Thank you in advance
  4. Elisabeth Redei New Member

    Hi again,
    I am not sure I understand your last question... The point is, though: if disk I/O is your problem, then that is also where you have to look for a solution. Is there any way you can move things around a little bit, moving the transaction logs and the transaction log backups to different drives in the OS and somewhere else on your SAN? Make sure that everything is configured optimally as well, such as write vs. read cache, queue depth in the HBA, etc. I recommend you diagnose the SAN (or have someone from the vendor do it) to find out exactly where the bottleneck is.
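    Moving a transaction log to a different drive can be done with `ALTER DATABASE`. A minimal sketch, where the database name, logical file name, and target path are all placeholders for your own:

    ```sql
    -- Point the catalog at the new location first; the change takes
    -- effect the next time the database starts.
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_log,
                     FILENAME = N'L:\SQLLogs\MyDatabase_log.ldf');

    -- Take the database offline, move the .ldf file with the OS,
    -- then bring it back online.
    ALTER DATABASE MyDatabase SET OFFLINE;
    -- (copy/move the physical file here)
    ALTER DATABASE MyDatabase SET ONLINE;
    ```

    This requires an outage for the database in question, so it is best scheduled together with the SAN work.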
  5. Twingo New Member

    Hi Elisabeth,
    Thanks for your answer. We will check the SAN and his configuration.
    I'll be back here as soon as the problem is solve
  6. melvinlusk Member

    Having logs shipped every minute is quite frequent. What exactly are you using the log shipped database for? Reporting, disaster recovery?
    Keep in mind that you won't be able to access the target DB while logs are being applied. This makes it a very poor choice for reporting if you need up-to-the-minute data. You may be better off using transactional replication.
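    Whether the secondary is readable between restores depends on how the restore job applies the logs. A hedged sketch of the two options (file paths are placeholders): `STANDBY` leaves the database read-only between restores, while `NORECOVERY` keeps it inaccessible:

    ```sql
    -- STANDBY: readable (read-only) between restores; the undo file
    -- holds uncommitted transactions so the next log can still apply.
    RESTORE LOG MyDatabase
        FROM DISK = N'\\backupshare\MyDatabase\MyDatabase.trn'
        WITH STANDBY = N'L:\SQLLogs\MyDatabase_undo.dat';

    -- NORECOVERY: not accessible at all, but restores apply faster
    -- and users are never kicked out mid-restore.
    RESTORE LOG MyDatabase
        FROM DISK = N'\\backupshare\MyDatabase\MyDatabase.trn'
        WITH NORECOVERY;
    ```

    Even with `STANDBY`, users must be disconnected while each log is applied, which is why frequent restores make the secondary a poor reporting server.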
  7. satya Moderator

    I would second Melvin's opinion here: why do you need it every minute?
    If you see the performance problem while log shipping the transaction logs to the standby server, it means network bandwidth is playing a major role, in addition to the disks Elisabeth referred to; check the SAN settings too.
    How big is the transaction log backup every minute?
    How many rows are inserted/updated/deleted every minute in this database?
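    The per-minute backup size can be read from the backup history on the primary. A sketch against the standard `msdb` history tables (type `'L'` marks transaction log backups); the row count of 60 is just an example covering the last hour at a one-minute interval:

    ```sql
    -- Size of the most recent transaction log backups, newest first.
    SELECT TOP (60)
           bs.database_name,
           bs.backup_start_date,
           bs.backup_size / 1024 AS backup_size_kb
    FROM msdb.dbo.backupset AS bs
    WHERE bs.type = 'L'
    ORDER BY bs.backup_start_date DESC;
    ```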

  8. Twingo New Member

    For the moment, log shipping is off. The files being log shipped are not very big, about 600 KB. But we will change the log shipping interval from every minute to the default of 15 minutes.
    And we have seen that we have a lot of I/O problems on the SAN. So the first thing to fix, before re-enabling log shipping, will be the I/O on the SAN. We will ask the NetApp people to check it.
    After that, we will set up log shipping every 15 minutes, but with a third server (a backup server for the log shipping).
    I hope this will be the right choice.
  9. Elisabeth Redei New Member

    Running with a 1-minute interval should not be a problem - I have done it for years on quite a busy banking system (on 2000 as well). The only time we had problems was during larger batches, but we got around that by "pausing" (disabling, that is) the jobs during those batches.
    You probably need to keep the copy and restore jobs on larger intervals, though.
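    Pausing around a batch just means disabling the SQL Agent jobs and re-enabling them afterwards. A sketch using the standard `msdb` procedure; the job name is a placeholder - use the names the log shipping wizard created on your servers:

    ```sql
    -- Disable the log shipping backup job before the batch...
    EXEC msdb.dbo.sp_update_job
         @job_name = N'LSBackup_MyDatabase', @enabled = 0;

    -- ... run the large batch here ...

    -- ...and re-enable it afterwards.
    EXEC msdb.dbo.sp_update_job
         @job_name = N'LSBackup_MyDatabase', @enabled = 1;
    ```

    The same can be done for the copy and restore jobs on the secondary if the batch generates more log than they can keep up with.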
  10. satya Moderator

    Stick to the schedule as it was earlier, and once the hardware is fixed for good, the best option is to perform some baselining and benchmarking of the platform for future capacity planning.
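    A baseline can be as simple as periodically snapshotting a handful of counters. A minimal sketch from inside SQL Server, using two commonly tracked counters as examples; which counters matter for your workload is an assumption you would refine:

    ```sql
    -- Snapshot a couple of headline counters for a baseline.
    -- Run on a schedule and store the results in a table over time.
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Page life expectancy',
                           N'Batch Requests/sec');
    ```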
