SQL Server Performance

How to configure a backup drive in a clustered environment?

Discussion in 'SQL Server 2005 Clustering' started by gary, Jun 25, 2009.

  1. gary New Member

    Hi,
    We have a backup group in Cluster Administrator, and the only resource inside this group is the backup drive. I am able to move this group to the other node manually, but when the SQL group moves (when a failover is initiated), the backup group does not move, and because of this the backups fail. I tried to add the backup drive as a dependency on the SQL Server service so that it can fail over along with the SQL instances when a failover occurs, but I did not find the option to add the backup drive (which is in the backup group) as a dependency. I went to the properties of SQL Server (INS1) in Resources (in the left pane of Cluster Administrator) -> Dependencies -> Modify -> here I should see the backup drive to add as a dependency, BUT it is not there.
    Please advise me on how you configure the backup drive in a clustered environment so that backups do not fail in the case of a failover.
    Thank You
  2. ndinakar Member

    I think you can only add resources within the same group as dependencies, so the backup drive should be in the SQL group, and then you can set the dependency.
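    Once the disk is in the SQL group and the dependency is set, one way to sanity-check it (this is an assumption on my part, but the DMV does exist on clustered SQL Server 2005 instances) is to query sys.dm_io_cluster_shared_drives from the instance; the backup drive letter should then show up alongside the data drives:

        -- Lists the clustered drive letters this SQL Server instance depends on.
        -- After moving the backup disk into the SQL group and adding the dependency,
        -- its drive letter should appear in the results.
        SELECT DriveName FROM sys.dm_io_cluster_shared_drives;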
  3. gary New Member

    Thank You,
    So we need to have just one group in Cluster Administrator, for example the SQL group, add the backup drive into that group only, and then add it as a dependency, right?
    In this case, do I need to delete the Backup group?
    How do you actually configure it?

  4. LouisP New Member

    When dealing with cluster drives, you should always use UNC paths, i.e. \\servername\share-name. While I have always accessed drives that belong to SQL Server instances through a share, here is something I think will work: create a network name and IP address resource for the backup drive group. Then you can use that network name as the server name in the UNC. Unfortunately, I don't have access to a cluster to try this out.
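    Just to sketch the idea (the names here are made up, and as I said I can't verify this on a cluster): if the network name for the backup group were SQLBACKUP and the Z: drive were shared out as Backups, a backup over the UNC would look something like this and would resolve no matter which node currently owns the drive:

        -- Backup over the network name that moves with the backup group,
        -- instead of a drive letter that may be owned by a different node.
        BACKUP DATABASE db_name
        TO DISK = '\\SQLBACKUP\Backups\Full\db_name.bak'
        WITH INIT;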
  5. gary New Member

    Thanks,
    I will elaborate on my scenario:

    1. We have a 3-node active/passive/active cluster setup: Node1 (A), Node2 (P), and Node3 (A).
    2. Node1 should fail over to Node2, and Node3 should also fail over to Node2, but not Node1 to Node3 or Node3 to Node1.
    3. We have a SQL group (which has drives D, E, F, and T for data files, secondary data files, log files, and tempdb respectively).
    4. We have a backup group (which has drive Z for storing the database backups).
    5. We have a backup script like the one below: a backup procedure which backs up the given database to the specified backup path with the given backup type (Full, Diff, or Log); a rough sketch of the procedure is at the end of this post. For example:

    exec backup_sp 'db_name', 'Z:\Backups\Full', 'Full' -- let's say this is on node 1.

    6. A failover occurred and the SQL group moved to Node2 (P), but the backup group did not, and the backups failed. How do I make this backup group fail over to Node2?
    7. SQL group -> Properties -> Failback -> Prevent failback or Allow failback.

    Which option should be selected (Prevent failback or Allow failback) according to best practice?

    In our case, we selected Allow failback. Please advise...
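    For reference, backup_sp is roughly along these lines (a simplified sketch, leaving out the details):

        -- Simplified sketch of the backup procedure: takes a database name,
        -- a backup folder, and a backup type (Full, Diff, or Log).
        CREATE PROCEDURE backup_sp
            @db_name sysname,
            @path    nvarchar(260),
            @type    varchar(10)
        AS
        BEGIN
            DECLARE @file nvarchar(400);
            SET @file = @path + N'\' + @db_name + N'.bak';

            IF @type = 'Full'
                BACKUP DATABASE @db_name TO DISK = @file WITH INIT;
            ELSE IF @type = 'Diff'
                BACKUP DATABASE @db_name TO DISK = @file WITH DIFFERENTIAL;
            ELSE IF @type = 'Log'
                BACKUP LOG @db_name TO DISK = @file WITH INIT;
        END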
  6. LouisP New Member

    Hi Gary,
    Three-node clusters, UGH! The handful that I supported were my least favorite, the reasons being:
    1) During SQL Server installs of the base product, service packs, or hotfixes, there is extra work to turn off the virus scanner and TURN ON all required services on all three nodes.
    2) Despite all of #1 being taken care of, many installs still failed, and required calls to MS to resolve.
    3) And the situation you mention, keeping each cluster group where it belongs.
    As far as I know, there is no official mechanism to tell the instances where to fail over to. As my ex-employer spent plenty on dedicated Microsoft support, I tend to believe that this is indeed the case. Which also brings us to a not-so-great feature of Microsoft clustering: there are occasional glitches that a regular server would ignore, but that the cluster will initiate a failover for. One incident that I remember was a timeout to the domain controller. So if the workloads absolutely cannot co-exist on one node until a failback can be scheduled, you will need some sort of monitoring to fail back right away.
    To deal with your Z: drive situation, the 100% bulletproof way to have a drive always available is to use the UNC, as I mentioned in my original reply. We faced that situation plenty of times: some applications (or LOBs) owned multiple instances and wanted to copy files back and forth, but as you discovered, a specific drive may be on a different node at times. A UNC will always work, and even in an extreme situation such as DR to a remote site, the DNS entries for the server can be changed so the scripts/jobs will still work.
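    In your case that would mean changing the path parameter in the backup call from the drive letter to the UNC; with made-up names again (a share called Backups on the backup group's network name SQLBACKUP), the call would become something like:

        -- Same call as before, but the path goes through the network name
        -- instead of Z:, so it works from whichever node owns the disk.
        exec backup_sp 'db_name', '\\SQLBACKUP\Backups\Full', 'Full'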
    Hope this helps.
    Louis
