Performance Tuning SQL Server Transactional Replication

The Distribution Agent has a profile parameter called CommitBatchSize, which is configured from the “Replication Agent Profile Details” dialog box at the Distributor. This parameter determines how many replication transactions are committed in the distribution database in a single batch. The default value is 100.
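
If you would rather check the current value from a query window than through the dialog box, the agent profiles can also be inspected with the replication stored procedures in msdb at the Distributor. The sketch below is only an example; the profile_id you pass depends on which profile your Distribution Agent actually uses.

-- Run at the Distributor, in the msdb database.
-- List the profiles defined for the Distribution Agent (agent_type 3).
EXEC msdb.dbo.sp_help_agent_profile @agent_type = 3

-- Show the parameters (including CommitBatchSize, if it is defined)
-- for one profile. The profile_id of 1 is just a placeholder; use the
-- profile_id returned by the call above for the profile you care about.
EXEC msdb.dbo.sp_help_agent_parameter @profile_id = 1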

This setting can affect performance in two different ways. First, the larger the CommitBatchSize value, the fewer commits have to be made to the distribution database. Because each commit is a relatively expensive operation, fewer commits mean less overhead.

While increasing the CommitBatchSize sounds like a good idea, there is a corresponding downside. Larger batches take longer to commit, which means locks in the distribution database are held longer than they would be if the commits completed more quickly. Locks, as you know, reduce concurrency in a database, and reduced concurrency means reduced performance.

So the issue is: which is more important, faster replication or fewer locks? The ideal size of the CommitBatchSize parameter depends on your specific situation. For example, if there is a large amount of activity in the distribution database coming from both the publisher and many subscribers, then reducing the batch size can be beneficial because of reduced locking (improved concurrency) in the distribution database. With better concurrency, transactions that have to wait on locks don’t have to wait as long, and overall performance is boosted. For example, if there are 100 subscribers, each subscriber needs to be updated from the distribution database. This can create a lot of activity in the distribution database, which can be slowed down if many locks are being held.
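
One rough way to tell whether locking in the distribution database is really the bottleneck is to look for blocked sessions while the agents are busy. On SQL Server 7.0 and 2000, a quick check against master.dbo.sysprocesses along the lines of the sketch below can help; it assumes the distribution database uses the default name of “distribution,” so adjust the name if yours is different.

-- Quick check for blocking inside the distribution database.
-- Rows returned here are sessions currently waiting on a lock
-- held by another session (the spid shown in the blocked column).
SELECT spid, blocked, lastwaittype, waitresource, cmd
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('distribution')
  AND blocked <> 0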

On the other hand, if most of the activity in the distribution database is caused by a Log Reader Agent that runs often (or even continuously), rather than by other sources (such as subscribers), then increasing the CommitBatchSize parameter makes more sense, since larger batch sizes mean fewer commits, which in turn reduces overhead and boosts performance.

So what should you do? Unless you are aware of specific issues with transactional replication performance, leave the CommitBatchSize option alone. If you are experiencing performance problems, you may want to experiment with different CommitBatchSize settings until you find one that helps. If you do, be sure you have a good way to measure performance before and after your experimentation so you can verify that the change actually helped. [7.0, 2000] Added 12-26-2001
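
If you do decide to experiment, the value can be changed either from the same “Replication Agent Profile Details” dialog box or with sp_change_agent_parameter at the Distributor. The example below is only a sketch: the profile_id of 1 is a placeholder, the new value of 200 is arbitrary, and system-supplied profiles generally cannot be modified, so you may first need to create your own profile (for example with sp_add_agent_profile) and assign it to the agent.

-- Run at the Distributor, in the msdb database.
-- Change CommitBatchSize for one agent profile. The change applies to
-- every agent that uses this profile, and takes effect the next time
-- those agents start.
EXEC msdb.dbo.sp_change_agent_parameter
    @profile_id = 1,
    @parameter_name = 'CommitBatchSize',
    @parameter_value = '200'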
