Hi all,
On Friday I patched our Sophos UTM cluster to 9.506 and found that the cluster is broken unless the passive node is switched off. The VMs are on the same ESX host, and I have checked that both have the virtual MAC setting set to 0. Has anyone else noticed this, and do you have a workaround?
Cheers
Anthony
This is good to know - thanks! You're the second person to report this here. If you have a paid license, please be sure to report this as a bug to Sophos Support.
Cheers - Bob
Thank you. I have the same problem too. Please update us if you find a solution.
Please open a ticket with Sophos Support.
Same problem here. I would be happy to get some information from Sophos regarding this issue.
Hi Anth, lior, me & sfue,
Have any of you created a ticket with the Support team? If you have, please provide case numbers so I can follow up. If not, please create a ticket and provide me with the case number so we can have this possible bug reported to our development team.
Thanks, Karlos
Hi Karlos,
Unfortunately, I only have the Home Edition, so no support ticket number. Sorry.
Same issue here. Both nodes use the same MAC address, which they did not in version 9.406, for example. The virtual MAC feature is disabled.
I opened a ticket and added further information to it: Case 7816322
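For anyone who wants to confirm the duplicate-MAC symptom before opening a ticket, here is a minimal sketch. It assumes the UTM shell has `ip` and `awk` available and that `eth0` is the interface in question (adjust for your layout); run the same command on both nodes and compare the output.

```shell
# On a real node you would run:  ip link show eth0
# Below, $sample stands in for that output (a made-up example MAC)
# so the extraction step can be shown end to end.
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether 00:50:56:aa:bb:01 brd ff:ff:ff:ff:ff:ff'

# Extract just the MAC address from the "link/ether" line.
echo "$sample" | awk '/link\/ether/ {print $2}'
```

If both nodes print the same address, you are hitting the issue described in this thread.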
Same problem here. We updated our Sophos to the latest release, and now we must shut down one of the nodes.
I tried all the answers given for the same problem on older versions of UTM, like setting virtual_mac to 0, putting ethernetX.ignore.... = TRUE, and restarting my hosts.
But nothing works...
Hi
I have got something of the same problem here at my home lab. Primary and backup are running on different ESXi servers, and it looks like this MAC problem confuses the distributed virtual switch. I moved the UTM VM adapters over to ESXi standard switches a few days ago, and it has worked since.
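Related to Geir's standard-switch workaround: a setting that often matters for HA setups on ESXi is the vSwitch security policy, which can silently drop traffic from changed or forged MAC addresses. This is a hedged sketch of how to relax it on a standard vSwitch from the ESXi host shell; `vSwitch0` is an assumed name, and whether this helps with the 9.506 issue specifically is unconfirmed, so treat it as something to try, not a fix from Sophos.

```shell
# Hedged sketch for the ESXi host shell (not the UTM shell).
# Allow guests on the standard vSwitch to change their MAC address
# and to transmit frames with a source MAC that differs from the
# adapter's configured one. "vSwitch0" is a placeholder name.
esxcli network vswitch standard policy security set \
    --vswitch-name vSwitch0 \
    --allow-mac-change true \
    --allow-forged-transmits true

# Verify the resulting policy:
esxcli network vswitch standard policy security get \
    --vswitch-name vSwitch0
```

Note that port groups can override the vSwitch-level policy, so check the port group the UTM adapters are attached to as well.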
Cheers - Geir
Many thanks for opening a ticket, Patrick. I've got the same issue with my home-lab UTM HA cluster on VMware vSphere 6.5.
Unfortunately, I only have standard support for the two hardware HA clusters (2x SG310 and 2x SG230) at work :-(.
I hope it's only a "small" issue that is easy for the technical team to fix.
BR,
Florian