
vMotion - What are the risks?

We are currently using one virtual SEA on our VMware vSphere cluster, which consists of 5 hosts.

According to KB132204, VMware vMotion is not supported.

Our question is: what are the risks of breaking our SEA if we do use vMotion?

Our SEA resides on one of the 5 hosts of our cluster, and we use central storage.
In normal operation the VM remains on the same host, but there can be situations in which the VM needs to be moved to one of the other hosts.

For example, when we need to put the host that currently runs our SEA into maintenance mode.
In that case all the VMs will be moved by vMotion to the other hosts. This is one of the benefits of having a cluster.

What is the risk that our SEA will break?

Would it perhaps be advisable to shut down the VM prior to the vMotion instead of doing a live migration?

Soon we will also migrate our current cluster to new hardware. Part of this migration is moving all the current VMs to the new VMware cluster using vMotion.

What is the best practice to avoid breaking our virtual SEA?



  • Hi Luich,

    To make a very long story forum/email friendly:

    I have not seen any issues where the appliance is vMotioned while it is powered down (i.e. a cold migration), so I would say your use case carries very low risk. The issues I have seen revolve around live transfers and resource sharing - e.g. the appliance may not see increases/reductions in hard-disk space when /persist is already mounted, or you get IP conflicts because it doesn't detect networking changes. Although I have not kept up with the development case SEA-499, I believe the other effect could potentially break the "increase disk space" option.
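    As a small aside (not from Sophos documentation, just my own illustration with placeholder values), a quick post-move reachability check can catch that class of networking problem early:

    ```python
    # Sketch: confirm the appliance still answers on its expected IP and ports after a move.
    # The address and ports below are placeholders - adjust to your environment.
    import socket

    APPLIANCE_IP = "192.0.2.10"   # expected (static) IP of the SEA
    PORTS = [25, 443]             # e.g. SMTP and the admin UI

    for port in PORTS:
        try:
            # A successful TCP connect means something is listening on that IP:port.
            with socket.create_connection((APPLIANCE_IP, port), timeout=5):
                print(f"{APPLIANCE_IP}:{port} reachable")
        except OSError as exc:
            print(f"{APPLIANCE_IP}:{port} NOT reachable: {exc}")
    ```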

    In your case (a scripted sketch of these steps follows below):

    #1 optional - I would log into the appliance and ensure networking is correct for the new host (if necessary), i.e. no IP conflicts / DHCP issues, and that the appliance is reachable on the new host's network

    (you may also have to change Exchange send/receive connectors and firewall port forwarding to the new IP)

    #2 power down the appliance

    #3 snapshot it, as a best practice

    #4 move it to the new host 

    #5 power it up
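
    Here is the scripted sketch mentioned above: a minimal pyVmomi example of steps #2-#5 (a cold migration). This is only my own illustration, not an official Sophos or VMware procedure; the vCenter address, credentials, VM name and target host name are placeholders, and you can of course do the same thing from the vSphere client instead.

    ```python
    # Minimal sketch: cold-migrate a VM with pyVmomi (pip install pyvmomi).
    # All names, addresses and credentials below are placeholders for this example.
    import atexit
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def find_obj(content, vimtype, name):
        """Look up a managed object (VM, host, ...) by its inventory name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
        try:
            return next((o for o in view.view if o.name == name), None)
        finally:
            view.Destroy()

    ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    atexit.register(Disconnect, si)
    content = si.RetrieveContent()

    vm = find_obj(content, [vim.VirtualMachine], "sophos-sea")              # placeholder VM name
    new_host = find_obj(content, [vim.HostSystem], "esxi02.example.local")  # placeholder host

    # #2 Power down the appliance (a cold migration avoids the live-vMotion caveats above).
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        WaitForTask(vm.PowerOffVM_Task(), si=si)

    # #3 Take a snapshot as a rollback point.
    WaitForTask(vm.CreateSnapshot_Task(name="pre-move", description="before host move",
                                       memory=False, quiesce=False), si=si)

    # #4 Relocate the powered-off VM to the new host (disks stay on the shared datastore).
    WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=new_host)), si=si)

    # #5 Power it up again.
    WaitForTask(vm.PowerOnVM_Task(), si=si)
    ```

    Within one cluster on shared storage the relocate only changes the host; moving to a different cluster would typically also mean setting the resource pool in the RelocateSpec.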

    I would say your risk is very minimal.

    Worst case: as long as you still have one cluster member, you can always rebuild your entire cluster by deploying new VMs and clustering them into that member. Doing this one host at a time would further reduce any risk, and there is also no need to remove members from the cluster as long as they can communicate with each other.

    So there are some features that are "technically" not supported, but at the same time, in my experience, you shouldn't have a meltdown in that respect either.
