Implementing the SQL Server Multiple Publishers and Central Subscriber Replication Topology

Introduction

This article describes how to set up the following topology:

    P1 --> S <-- P2

where P1 and P2 are two different publishers and S is the central subscriber. This is a fairly typical requirement: for example, the subscriber might be a head office and each publisher a separate branch of a company. In this arrangement, data often needs to be centralized and amalgamated for the purposes of report generation.

This article describes the setup using transactional replication, but it is worth noting that there are alternatives: the same topology can also be built with merge replication by reversing the server designations, i.e. S becomes the publisher and P1 and P2 become subscribers. This is possible because the publisher/subscriber metaphor doesn't really apply to merge replication, where the distinction between publisher and subscriber is far less clear-cut than in transactional replication. Still, this approach means using merge replication where it is not really required, and if you are replicating a lot of data, performance could suffer unnecessarily. So, let's look at how to set this up using transactional replication.

Implementation

Consider the following situation: we have a table named tCityRevenues, with an identity column as its primary key.
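The article does not reproduce the table definition, so the following sketch is illustrative only; the column names and types are assumptions:

```sql
-- Hypothetical definition of tCityRevenues. The identity column serves
-- as the primary key, which is what must be partitioned between P1 and P2.
CREATE TABLE dbo.tCityRevenues
(
    RevenueID INT IDENTITY(1, 1) NOT NULL
        CONSTRAINT PK_tCityRevenues PRIMARY KEY,
    City      VARCHAR(50) NOT NULL,
    Revenue   MONEY       NOT NULL
);
```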

It exists on both servers P1 and P2, and we want to amalgamate the data on a central subscriber. First, the data must be partitioned to ensure that there is no overlap of primary key values across the two publisher tables. This must be set up manually, and there are various options available: you could use a multiple-column primary key, with one column holding a constant designator identifying the source server; alternatively, you could set a different seed for the identity column (the primary key) on each publisher. If we use the latter method, some typical data could be as follows:

P1 — sample rows 'London' and 'Manchester', with identity values from P1's seed range.

P2 — sample rows 'Chicago' and 'Washington', with identity values from P2's seed range.
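If the table already exists with the same definition on both publishers, one way to implement the identity-seed partitioning described above is to reseed P2's identity out of P1's range. This is a sketch; the seed value is an arbitrary illustration:

```sql
-- On publisher P2: move the identity seed well clear of the range
-- used on P1, so primary key values never collide on the subscriber.
DBCC CHECKIDENT (N'dbo.tCityRevenues', RESEED, 1000000);
-- The next row inserted on P2 will receive an identity value of 1000001.
```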

The transactional publication on P1 is set up as usual. Replicating it to subscriber S creates the table tCityRevenues and adds the 'London' and 'Manchester' records. When setting up the next publication, on P2, we must be careful to alter the article properties. Normally the default setting, 'DROP existing table and re-create it', is used; here, however, that setting would remove the 'London' and 'Manchester' records, leaving only the new 'Chicago' and 'Washington' rows. So, on the Snapshot tab, we change the article property to 'Keep the existing table unchanged':
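The same article-property change can be made in T-SQL when creating the publication on P2. This is a sketch; the publication name is an assumption. The @pre_creation_cmd value N'none' corresponds to 'Keep the existing table unchanged', while the default, N'drop', corresponds to 'DROP existing table and re-create it':

```sql
-- Add the tCityRevenues article to P2's publication without dropping
-- the table that the P1 snapshot has already created on the subscriber.
EXEC sp_addarticle
    @publication      = N'CityRevenuesPub',  -- assumed publication name
    @article          = N'tCityRevenues',
    @source_owner     = N'dbo',
    @source_object    = N'tCityRevenues',
    @pre_creation_cmd = N'none';             -- default is N'drop'
```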

As a result, the schema file is scripted slightly differently: there is no 'DROP TABLE' statement, and the table is created only if it does not already exist. This is just what we require, since the table has already been created by the P1 publication. Using this option still creates the BCP data file containing the US records, resulting in the table below:

S — tCityRevenues now contains all four rows: 'London', 'Manchester', 'Chicago' and 'Washington'.