Changing memory on a SQL Server cluster

Discussion in 'SQL Server 2008 General DBA Questions' started by Trev256, Apr 14, 2011.

  1. Trev256 New Member

    I need to change the SQL Server max memory setting on all 4 instances, as they are set too high. Can I do this one instance at a time by failing the resources over off each instance, ensuring all databases are running on the other instances, and then changing the max memory setting on that instance? In other words, I would change the max memory setting on each instance while the other instances support the databases. Is this possible, or are there problems with this plan?
  2. davidfarr Member

    As I understand it, you can only change this memory setting while the instance is running. sp_configure 'max server memory (MB)' calls a procedure from the master database that also updates the master database, and the master database can only be accessed if the instance is running. You therefore cannot change the setting on an instance after failing its resources away; you should change it on each instance while it is running and supporting its databases.
    Calling sp_configure 'max server memory (MB)', followed by RECONFIGURE, to decrease the max memory takes effect immediately; no instance restart is required. In your case it would be best to start on the current active node, change the memory setting, then fail over to each node in turn, changing the setting on each one as you go, until you return to your preferred active instance/node.
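    For example, something along these lines, where 8192 MB is just a placeholder for whatever target value you decide on:

        -- 'max server memory (MB)' is an advanced option, so expose it first.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Lower the cap; this takes effect immediately, no restart required.
        EXEC sp_configure 'max server memory (MB)', 8192;  -- placeholder target value
        RECONFIGURE;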
  3. Trev256 New Member

    Are you 100% sure you can change the memory setting without restarting the instance? Let me know if you have actually tested this or whether you are making a best guess.
    Also, are you suggesting removing the databases from each instance before changing the memory setting? Your description is unclear to me.
    Thanks so much.
  4. davidfarr Member

    I have just tested this on my own server a moment ago (SQL Server 2008 R2) with all databases attached, and I am 100% sure. An instance restart might be necessary if you were increasing the minimum server memory setting, because that memory might currently be in use by another application on the server. Decreasing maximum server memory, however, is easy for SQL Server to do immediately, because that memory is already in use by the service itself and can be released without conflicting with any other application. I can assure you that it is safe to do; you cannot lose databases or corrupt data by reducing the max server memory setting while the instance is connected to databases. (You would of course lose data if you opened the server hardware and pulled out a DIMM memory module while the instance was running, but that's a different scenario.)
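    If you want to verify this for yourself rather than take my word for it, sys.configurations records whether a setting is dynamic, meaning RECONFIGURE applies it without a restart:

        -- is_dynamic = 1 means the change takes effect on RECONFIGURE, no restart needed.
        SELECT name, value, value_in_use, is_dynamic
        FROM sys.configurations
        WHERE name = 'max server memory (MB)';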
    If a server process (thread) is currently using that memory when you execute the command, SQL Server will wait until that process has finished executing before dropping that memory space. The command to reduce the memory will not suddenly kill processes at random, and it will not erase uncommitted data in memory or corrupt data.
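    If you want to watch the memory actually being released, you can poll sys.dm_os_process_memory (available from SQL Server 2008 onwards) before and after the change:

        -- Physical memory currently committed by this instance, in MB.
        SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb
        FROM sys.dm_os_process_memory;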
    To change the memory setting on each node of your cluster, you will need to fail over each node anyway in order to bring up the active instance on each node. So indirectly you will be restarting each instance of your cluster anyway, even though you technically would not need to if it were a single stand-alone server.
  5. Trev256 New Member

    Thanks - as all instances are running at the moment, I will decrease the memory without failing over; I don't need to bring any instance online. Does this make sense?
