Performance Tuning SQL Server Merge Replication

In merge replication, the publisher, distributor, and subscriber all play a part. Because of this, it is important that each of these servers be properly tuned for best overall merge replication performance. Pay particular attention to the I/O performance of the publisher and subscriber (the distributor plays only a very small part in merge replication), and to the network connections between the publisher and distributor and between the distributor and subscriber (assuming each role is on a separate server). [7.0, 2000, 2005] Added 12-27-2001

*****

If you create an article using a filter, it is important that you create an index on each column used in the filter’s WHERE clause. If you don’t, then SQL Server must perform a table scan. If the columns are indexed, then SQL Server can perform an index seek, and find the rows very quickly. [7.0, 2000, 2005] Added 8-15-2000
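For example, if a hypothetical Orders article is filtered with WHERE TerritoryID = 3 (the table, column, and index names here are illustrative only), an index like the following lets SQL Server seek on the filter column:

    -- Hypothetical example: the article's row filter references TerritoryID,
    -- so index that column to allow an index seek instead of a table scan.
    CREATE NONCLUSTERED INDEX IX_Orders_TerritoryID
    ON dbo.Orders (TerritoryID);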

*****

If you create an article based on a JOIN filter (JOIN filters allow you to include in your article rows from a related table), then you need to index all the columns used for the JOIN filter. This is important because every time the Merge Agent runs, it searches the base table to find out which rows in the base table and the related tables are to be included in the article. If you don’t have an index on the JOIN columns, then SQL Server has to perform a table scan. If the JOIN columns are indexed, then SQL Server can perform an index seek, and find the rows quickly. [7.0, 2000, 2005] Added 8-15-2000
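As a minimal sketch (the table and column names are again illustrative), if the JOIN filter relates Orders to OrderDetails on OrderID, index the JOIN column on the related table; the parent side is usually already covered by its primary key:

    -- Hypothetical example: the JOIN filter is Orders.OrderID = OrderDetails.OrderID,
    -- so index the JOIN column on the related (child) table.
    CREATE NONCLUSTERED INDEX IX_OrderDetails_OrderID
    ON dbo.OrderDetails (OrderID);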

*****

If you want to use merge replication for a published table, then that table must have a ROWGUIDCOL column. If the table doesn’t have one before you try to publish it, SQL Server will add one for you automatically. For best overall performance, you should add the ROWGUIDCOL column to the table manually so SQL Server won’t have to do it for you. This will speed up the snapshot process when merge replication is first started for a table. Also, you should add a non-clustered index to this column, which will boost performance. If you allow SQL Server to create the ROWGUIDCOL column automatically, an index is not automatically created. [7.0, 2000, 2005] Added 8-15-2000
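Here is a minimal sketch of doing this yourself before publishing, assuming a hypothetical dbo.Orders table (the table, column, and index names are illustrative):

    -- Add the uniqueidentifier column with the ROWGUIDCOL property yourself,
    -- with a default so existing rows are populated, then index it.
    ALTER TABLE dbo.Orders
        ADD rowguid UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
        CONSTRAINT DF_Orders_rowguid DEFAULT NEWID();

    CREATE NONCLUSTERED INDEX IX_Orders_rowguid
    ON dbo.Orders (rowguid);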

*****

When the Merge Agent sends updates between the Publisher and Subscriber, it does so in batches. By default, each batch includes 100 generations (a generation is a logical group of changes per article). Under normal circumstances, this default value is appropriate. But if the published table experiences frequent updates and there are a large number of rows in a single transaction, then consider increasing the size of the batch to boost performance.

There are three settings in the Merge Agent profile that you can use to configure batching. They include:

·         BcpBatchSize: This setting specifies how many rows of data can be sent in a batch. Increasing this value increases the number of rows that are copied in a single bulk copy operation. Increasing or decreasing this setting could improve performance, depending on your particular situation. Only through experimentation will you be able to determine the ideal value for optimum performance.

·         DownloadGenerationsPerBatch: This setting specifies the size of the generation batch download (from publisher to subscriber). For systems that experience many modified rows or frequent updates, increasing the value can sometimes boost performance.

·         UploadGenerationsPerBatch: This setting specifies the size of the generation batch upload (from subscriber to publisher). For systems that experience many modified rows or frequent updates, increasing the value can sometimes boost performance.

Unless your merge replication setup is not performing adequately, you should not change these settings. If you are having performance problems, perform all other tuning options first; only if you still have a problem should you experiment with these values. Before you begin experimenting, be sure you have a way to accurately measure performance before and after making changes to these settings. In addition, change only one parameter at a time during your testing to ensure that your results are valid. [7.0, 2000] Updated 12-27-2001
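If you do decide to experiment, these profile parameters can be changed at the Distributor with sp_change_agent_parameter. A minimal sketch, assuming the Merge Agent profile you want to modify has a profile_id of 4 (check your own Distributor with sp_help_agent_profile) and using illustrative values only:

    -- Run at the Distributor. The profile_id and parameter values shown
    -- here are assumptions; adjust them for your own environment.
    EXEC sp_change_agent_parameter
        @profile_id = 4,
        @parameter_name = N'UploadGenerationsPerBatch',
        @parameter_value = N'500';

    EXEC sp_change_agent_parameter
        @profile_id = 4,
        @parameter_name = N'DownloadGenerationsPerBatch',
        @parameter_value = N'500';

The agent picks up profile changes the next time it runs.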

*****

Merge replication offers both static and dynamic filters. Static filters use a literal value to include or exclude data in a subscription. Dynamic filters, on the other hand, use an intrinsic function instead. For best performance, use static instead of dynamic filters, as SQL Server requires more overhead to process dynamic filters than static ones. [7.0, 2000, 2005] Added 8-16-2000
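As a rough illustration of the difference (the publication name, table, and filter values are hypothetical, and you would use one form or the other), the filter is supplied through the @subset_filterclause argument of sp_addmergearticle:

    -- Static filter: a literal value, which is cheaper to evaluate.
    EXEC sp_addmergearticle
        @publication = N'SalesPub',
        @article = N'Orders',
        @source_object = N'Orders',
        @subset_filterclause = N'TerritoryID = 3';

    -- Dynamic filter: an intrinsic function evaluated per subscriber,
    -- which adds processing overhead.
    EXEC sp_addmergearticle
        @publication = N'SalesPub',
        @article = N'Orders',
        @source_object = N'Orders',
        @subset_filterclause = N'SalesRep = SUSER_SNAME()';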

*****

Although horizontal filtering can reduce the amount of data that is replicated from one database to another, it also incurs high overhead when used with merge replication. The Merge Agent performs the horizontal filtering, and every row that is updated or deleted on the publisher causes extra work for the agent.

In many cases, performance may be better if horizontal filtering is not used with merge replication. Only testing will tell you for sure which option is best for you. [2000, 2005] Added 10-9-2001

*****

SQL Server 2000 supports a merge replication feature called dynamic snapshots. By default, dynamically filtered publications use a series of individual INSERT statements from the publisher to apply the initial snapshot to subscribers. If there is a lot of data to be merged, this can be a long process that involves a lot of server overhead.

As an option, you can configure dynamic snapshots instead. Rather than using INSERTs to apply the data, bcp is used, which is much faster and produces less overhead. Using dynamic snapshots can lengthen the time it takes to produce the snapshot, but it greatly speeds up the application of the snapshot to a subscriber. Dynamic snapshots can be configured using Enterprise Manager or the Create Dynamic Snapshot Job Wizard. [2000] Added 10-9-2001
