SQL Server Performance Forum – Threads Archive
Replicating large batch inserts
I have transactional replication set up between 2 SQL Server databases on 2 different
boxes. Both the log reader and distribution agent run in continuous mode.
The distributor is residing on a third SQL Server box. We are having performance issues
with replication when there are large batch deletes/inserts happening on the
publisher. There is a batch job that runs for about 8-10 hours everyday on the
publisher and deletes/inserts thousands of records as part of transactions. The amount
of time to replicate all this data on the subscriber is around 13-15 hours which is
not acceptable to our user community. While monitoring I found that the distribution
database (MSRepl_commands table) at times has millions of records in it which would
explain why the latency is so high. Add to it the fact that there are large
transactions occurring on the publisher. I was wondering if anyone has faced a similar problem before. Are there any configuration
changes I can make to the replication infrastructure to reduce latency? Would appreciate your help. Thanks.
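One way to quantify the backlog described above is to query the distribution database directly. A rough sketch, using the standard object names for the distribution database (verify against your own install before relying on it):

```sql
-- Run on the distributor: approximate count of commands waiting to be
-- delivered to subscribers. COUNT(*) can itself be slow when the table
-- holds millions of rows, so treat this as an occasional check.
SELECT COUNT(*) AS pending_commands
FROM distribution.dbo.MSrepl_commands WITH (NOLOCK);

-- Run at the publisher, in the publication database: latency and
-- throughput statistics for the log reader.
EXEC sp_replcounters;
```

Tracer tokens (SQL Server 2005 and later) are another way to measure end-to-end latency without touching the distribution tables.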
Can you run this batch in smaller chunks? Also collect PERFMON stats during the process to find out what is happening with disk resource usage. Adding more memory might help in some cases, but it really depends on database optimization and how well the schema has been normalized. Satya SKJ
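A minimal sketch of the chunking idea, assuming a placeholder table dbo.BatchTable with a ProcessDate column (the real batch job's table names and predicate will differ). Each small transaction is replicated as its own batch instead of one enormous one:

```sql
-- Hypothetical chunked delete: commit in batches of 5000 rows so the
-- log reader and distribution agent handle many small transactions.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.BatchTable
    WHERE ProcessDate < DATEADD(DAY, -30, GETDATE());

    SET @rows = @@ROWCOUNT;  -- loop exits when nothing is left to delete
END
```

The same pattern applies to the inserts, e.g. inserting from a staging table in keyed ranges rather than in a single statement.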
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
This posting is provided AS IS with no warranties, and confers no rights; shared for the sake of knowledge sharing.
Read the SQL Server 2005 Books Online topic "Enhancing General Replication Performance". Mohammed.
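That Books Online topic covers, among other things, Distribution Agent parameters that affect batch delivery. An illustrative fragment of the kind of settings it discusses, added to the agent job step's command line (the server/database names and values here are placeholders, not recommendations; test any change in a non-production environment first):

```sql
-- Distribution Agent (distrib.exe) parameters sometimes tuned for
-- large-batch workloads:
--   -CommitBatchSize 1000        commands applied per transaction at the subscriber
--   -CommitBatchThreshold 2000   commands read before a commit is forced
--   -SubscriptionStreams 4       parallel connections to the subscriber (SQL 2005+)
--
-- Example agent job step command line (names are placeholders):
--   -Publisher PUBSRV -PublisherDB PubDb -Distributor DISTSRV
--   -Subscriber SUBSRV -SubscriberDB SubDb
--   -CommitBatchSize 1000 -CommitBatchThreshold 2000 -SubscriptionStreams 4
```

These can also be set through a custom agent profile in Replication Monitor rather than by editing the job step directly.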