SQL Server Performance

SQL Merge Replication

Discussion in 'Performance Tuning for SQL Server Replication' started by andrewmbourne, Oct 1, 2004.

  1. andrewmbourne New Member

Hi all. A few questions I'm looking for answers to before I roll out my real replication setup.

I'm going to be using merge replication to a remote site so I can keep a level of autonomy between the servers. My first question: I only have limited bandwidth and my database is fairly large, so what's the best way to get my table schema to the remote site? Am I best backing it up and restoring it, and if so, at what point, before or after creating the publication? Also, when I create the publication, what are the best snapshot settings for the table articles? Do I really want to drop the table on a name conflict, when it's already there and correct because it was restored? Would it be better to just leave the existing table unchanged, as there won't be any structure changes once it goes live? And do I want to copy objects such as indexes and triggers to the destination, as again these will already be on the subscribing db?
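For what it's worth, the "name conflict" behaviour in the snapshot settings maps to the article's @pre_creation_cmd option. A sketch, assuming a publication named MergePub and a table Customers (both hypothetical names):

```sql
-- @pre_creation_cmd controls what happens if the table already exists
-- at the subscriber when the snapshot is applied:
--   'drop'     - drop and recreate it (the default)
--   'delete'   - keep the table, delete matching rows
--   'truncate' - keep the table, truncate it
--   'none'     - leave the existing table and its data untouched
EXEC sp_addmergearticle
    @publication      = N'MergePub',
    @article          = N'Customers',
    @source_object    = N'Customers',
    @pre_creation_cmd = N'none';
```

With @pre_creation_cmd = N'none' the restored table, along with its indexes and triggers, is left exactly as it is.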

Thanks, and if anyone can help, great.

PS: sorry if I've asked dumb questions, but we all need to start somewhere.

    ---------------------
    So long and thanks for all the fish
  2. xpthinker New Member

Hello andrewmbourne. First, you should plan which database objects (tables, views, stored procedures, etc.) you want to replicate to the other servers. Optimizing the publication will help you minimize the bandwidth constraints. I also suggest using a pull subscription instead of push, in order to offload your main db server. Hope this helps.
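A pull subscription runs the merge agent at the subscriber rather than at the distributor. A minimal sketch, run at the subscriber; the publication, server, and database names are hypothetical:

```sql
-- Register the pull subscription at the Subscriber.
EXEC sp_addmergepullsubscription
    @publication  = N'MergePub',
    @publisher    = N'MAINSERVER',
    @publisher_db = N'SalesDB';

-- Create the merge agent job at the Subscriber, so the
-- synchronisation workload stays off the main server.
EXEC sp_addmergepullsubscription_agent
    @publication  = N'MergePub',
    @publisher    = N'MAINSERVER',
    @publisher_db = N'SalesDB';
```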

    xpthinker
    IT Professional, MCP
  3. Twan New Member

    Hi ya,

The other thing that we've utilised before is to have a local subscriber which is synchronised, then backed up, moved to the remote site, and restored.

The steps would be:
    - create the publication
    - add the local subscriber which will get the schema
    - add the remote subscriber which is set to already have the schema
    - turn off the logreader job
    - run the snapshot
    - distribute the snapshot to the local subscriber
    - backup, ship to remote site and restore the local subscriber db
    - turn on the logreader job
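The two subscription calls in the steps above can be sketched in T-SQL, run at the publisher; the publication, server, and database names are hypothetical:

```sql
-- Local subscriber: initialised from the snapshot as usual.
EXEC sp_addmergesubscription
    @publication   = N'MergePub',
    @subscriber    = N'LOCALSERVER',
    @subscriber_db = N'SalesDB',
    @sync_type     = N'automatic';

-- Remote subscriber: @sync_type = 'none' marks it as already having
-- the schema and data (from the shipped backup), so no snapshot is
-- applied to it.
EXEC sp_addmergesubscription
    @publication   = N'MergePub',
    @subscriber    = N'REMOTESERVER',
    @subscriber_db = N'SalesDB',
    @sync_type     = N'none';
```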

    Your ultimate solution depends I guess on:
    - how big the databases are
    - whether it is quicker to physically ship a cd or transfer it over the link
    - whether the link is exclusively for sql traffic
- how likely the schema is to change after the initial snapshot

With merge replication, be aware of triggers on the remote database, since by default a trigger will fire when a change is originally made and again when the change is replicated. It's not a problem in itself, you just need to bear it in mind when writing the triggers.
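One way to handle this: if a trigger should only fire for local changes, it can be marked NOT FOR REPLICATION, so the merge agent's changes skip it. A sketch with hypothetical table names:

```sql
CREATE TRIGGER trg_Orders_Audit
ON Orders
FOR INSERT
NOT FOR REPLICATION
AS
BEGIN
    -- Runs for local inserts only; skipped when the merge agent
    -- applies replicated rows into this table.
    INSERT INTO OrdersAudit (OrderID, AuditDate)
    SELECT OrderID, GETDATE()
    FROM inserted;
END
```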

Also, stored procedures, functions, permissions, etc. are never replicated across.

    Cheers
    Twan
  4. andrewmbourne New Member

Hi, thanks, that helps. Another quick question: if I tell the snapshot to drop the table and recreate it, will it copy over the identity seeds?

    Cheers Andy

    ---------------------
    So long and thanks for all the fish
  5. Twan New Member

If a table is dropped, the identity seed for the table is dropped with it; recreating the table creates a new seed as per the table definition, the default being 1.
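If the counter needs to be put back after a drop-and-recreate, it can be checked and reseeded explicitly (the table name here is hypothetical):

```sql
-- Report the current identity value without changing it:
DBCC CHECKIDENT ('Customers', NORESEED);

-- Reseed so the next identity value inserted is 5001:
DBCC CHECKIDENT ('Customers', RESEED, 5000);
```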

    Cheers
    Twan
