SQL Server Performance

Migration to new hardware and active/active/active

Discussion in 'SQL Server Clustering' started by abalonie, Sep 17, 2005.

  1. abalonie New Member

    I have a question about this migration. We went from an active/passive SQL cluster to an active/active/active setup on new hardware, but on the same SAN. The new base cluster node was set up, as were the other two nodes. For the initial migration, however, I had to configure it as active/passive because there was not enough room in the SAN to add the disk sets for the other two nodes. Once everything was migrated to the new node, I was able to pull the original disks from the old cluster out and replace them with newer, larger ones for the other two nodes. Now the problem is that I can't initialize these new disks, and I believe it's because the cluster is already set up. So what do I need to do to get these disks initialized so I can assign the sets to the other nodes?

    Any help is appreciated. Also, can somebody confirm whether I should use the word "instance" instead of "node", since I am dealing with a multi-instance clustered environment?

    Thanks :)
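    For reference, this is roughly what initializing a fresh LUN normally looks like from the node that owns the shared disks, assuming Windows Server 2003; the disk number, drive letter, and volume label below are placeholders, not my actual values:

        REM Run on the node that currently owns the shared disks.
        REM Inside diskpart: pick the new LUN (Disk 2 is a placeholder),
        REM create a partition (creating the first partition is what
        REM initializes the disk), and assign a drive letter.
        diskpart
        list disk
        select disk 2
        create partition primary
        assign letter=M
        exit

        REM The 2003 diskpart cannot format, so format from cmd afterwards
        format M: /FS:NTFS /V:SQLDATA2

    If even that fails, it usually means the node does not actually have access to the LUN at the storage level.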
  2. derrickleggett New Member

    Explain what you mean by active/active/active from a clustering and instance setup perspective. An active/active SQL Server cluster is in reality an active/passive plus passive/active cluster: two active instances (one on each server), each utilizing clustering to fail over to the opposite server.
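    To make that concrete, here is a minimal sketch of checking the layout from the command line with cluster.exe; the group and node names in the illustrative output are made up:

        REM List the cluster nodes and their state
        cluster node /status

        REM List the resource groups and which node currently owns each
        cluster group /status

        REM Illustrative output shape:
        REM   Group            Node      Status
        REM   SQL Group 1      NODE1     Online
        REM   SQL Group 2      NODE2     Online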

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  3. abalonie New Member

    What I mean is that we want three active servers, set up so that if a node fails, its instance fails over to another active node (whichever one we specify). They each have their own SQL Server 2000 installs, and the setup is exactly right except for the disk part.

    I have a feeling I am going to need to detach the databases and remove them from the cluster in order to bring the additional sets of disks online for the other nodes. I just need somebody to confirm this, because I can't figure it out any other way. The disks show as "unreadable" and "not initialized" in Disk Management. When I try to initialize one, it gives an error saying it's unable to do so and to check the event logs. The logs tell me the Virtual Disk Service has stopped on one node and started on the other. There isn't much to Google for what I am doing, so I am just checking with the pros.
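    The "whichever one we specify" part is each group's preferred-owners list, which can be set from the command line as well as in Cluster Administrator. A hedged cluster.exe sketch, with placeholder group and node names:

        REM Let "SQL Group 3" fail over between NODE3 and NODE1 only
        cluster group "SQL Group 3" /setowners:NODE3,NODE1

        REM Verify the preferred-owner order
        cluster group "SQL Group 3" /listowners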

    Thanks
  4. derrickleggett New Member

    Well, I wish Brad was online right now. He might be able to answer you Monday. He knows a lot more about clustering than I do. My guess is that you're going to have to reboot the servers to get these disks to show online. I'm not sure why you're having the issue in the first place though.

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  5. mulhall New Member

    I'd say your SAN is allowing access to only one of your nodes. If that's not it exactly, it's SAN management that's holding things up, not SQL.

    Just to clarify nodes and instances:

    Nodes are individual installations of the OS in a cluster.

    Instances are individual installations of SQL Server.

    Whatever your setup, Windows/SQL clustering only allows a given instance of SQL Server to be served by one node at a time.
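    A quick way to see the distinction from SQL Server's own side, assuming SQL Server 2000 SP3 or later (that is when the ComputerNamePhysicalNetBIOS property appeared); the virtual server and instance names are placeholders:

        REM Ask a clustered instance which physical node it is running on right now
        osql -E -S VSQL1\INST1 -Q "SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode, SERVERPROPERTY('InstanceName') AS InstanceName, SERVERPROPERTY('IsClustered') AS IsClustered"

    After a failover, CurrentNode changes while the virtual server name you connect to stays the same.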


    Clear as mud?
  6. abalonie New Member

    Thanks for the suggestion. The SAN is only seen by the active cluster node at the moment. Remember, I have an active/passive setup right now because of the disk space issue. So with these new drives added, the only node that's going to see them is the active one. When I try to initialize them, it fails. The array is an MSA1000.

    Any help is greatly appreciated.

    Thanks!
  7. abalonie New Member

    Also, the drives are properly "shared" to the other nodes through the Array Configuration Utility. From all the servers I can see all the arrays I built; I just can't initialize them.
  8. Argyle New Member

    What is the error message when you try to initialize the disks in Disk Management? Have you tried removing the disks on the SAN, creating new ones, and assigning them to the nodes again? Are you running Windows 2000 or 2003?
  9. mulhall New Member

    I'd still say that the SAN configuration is allowing read/write to only one node. Until your SAN management grants the other nodes access, they won't be able to initialise the shared drives.
  10. abalonie New Member

    Yes, that was it. I had failed to allow "presentation" of the arrays I created to the fibre controllers of the other nodes. They all see each other now. Thanks, guys, for pointing me in the right direction.

    Now my next question: to create and add these disk resources to the cluster and assign ownership to the proper instance, do I need to detach all the databases currently on the base node?
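    For what it's worth, here is roughly what I expect the disk resource creation to look like with cluster.exe; the resource, group, and node names are placeholders, and all of this can be done in Cluster Administrator instead:

        REM Create a Physical Disk resource in the target instance's group
        cluster resource "Disk M:" /create /group:"SQL Group 2" /type:"Physical Disk"

        REM The resource also needs its disk signature set before it will come
        REM online; Cluster Administrator fills this in when you pick the disk,
        REM which is the easier route.

        REM Bring it online, then move the group to test failover
        cluster resource "Disk M:" /online
        cluster group "SQL Group 2" /moveto:NODE3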

    Thanks again
  11. mulhall New Member

    No worries.

    You've got me a little confused now, though: the installation of SQL Server should only be done once the shared disk resource is available, not before. Is it already installed on another shared disk?
  12. abalonie New Member

    Oh, I see what you mean. Yes, I am going to need to uninstall SQL Server on the new nodes and reinstall it onto the newly added hard disks. What is the best way to go about this? What I plan to do is:

    1. Remove the nodes with the new disks from the cluster
    2. Uninstall SQL Server
    3. Reinstall clustered SQL Server onto the new disks
    4. Add the nodes back to the cluster

    Is that right? I've sketched a quick pre-flight check below. This is uncharted territory for me, as I sort of inherited this project midway when both my manager and a fellow sysadmin decided to quit on the same day. :(

    I will be here this weekend performing this. Any advice today is greatly appreciated, and I will keep researching this until then.

    Thanks again
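    For reference, the pre-flight check I plan to run before step 3, assuming cluster.exe and osql are available; the virtual server and instance names are placeholders:

        REM All nodes should report Up before running SQL Server setup
        cluster node /status

        REM The new disk resources should be online in their groups
        cluster resource /status

        REM After the reinstall, confirm the instance knows it is clustered
        osql -E -S VSQL2\INST2 -Q "SELECT SERVERPROPERTY('IsClustered') AS IsClustered"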
