
This discussion has been locked.
You can no longer post new replies to this discussion. If you have a question, you can start a new discussion.

XGS SSD Firmware - is anyone else having issues with HA nodes not coming back up?

I started the SSD firmware update (KB-000045380) on an XGS136 HA A/P cluster.

First I applied the update to the AUX node (node 2). It was successful: the machine re-joined the cluster, and in the end the A/P cluster was all green.

I then switched the PRI HA role from node 1 to node 2 and waited until the A/P cluster was all green again. So node 2 is now PRI.

The AUX node (node 1) has now been down for 25 minutes since the SSD update command was issued.

I'll wait until tomorrow and then power cycle it.

Is anyone else running into this?

XGS136_XN01_SFOS 19.5.3 MR-3-Build652 HA-Standalone# cish
console> system ha show details
 HA details
 HA status                           |   Enabled
 HA mode                             |   Active-passive
 Cluster ID                          |   0
 Initial primary                     |   X1310xxxxxBQ44 (Node1)
 Preferred primary                   |   No preference
 Load balancing                      |   Not applicable
 Dedicated port                      |   Port10
 Monitoring port                     |   -
 Keepalive request interval          |   250
 Keepalive attempts                  |   16
 Hypervisor-assigned MAC addresses   |   Disabled

 Local node
 Serial number (nodename)            |   X1310xxxxx8X84 (Node2)
 Current HA role                     |   Standalone
 Dedicated link's IP address         |   10.1.178.6
 Last status change                  |   09:41:15 PM, Jan 24, 2024

 Peer node
 Serial number (nodename)            |   X1310xxxxxBQ44 (Node1)
 Current HA role                     |   Fault
 Dedicated link's IP address         |   10.1.178.5
 Last status change                  |   09:41:15 PM, Jan 24, 2024



  • Excuse me, why did you switch the PRI HA node from node 1 to node 2?

    I plan to update the SSD tomorrow, and I thought I could do it like this:

    - switch to node 2 (AUX) via CLI (ssh -F /static/ha/hauser.conf hauser@xxx.xxx.xxx.xxx)

    - update node 2

    - wait until node 2 comes up and HA is green (node 1 PRI, node 2 AUX)

    - update node 1 over CLI

    - wait until node 1 comes up and HA is green.
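
    The steps above, sketched as one session (the IP is a placeholder, and "system ssd show/update" are the commands discussed in this thread; treat this as a sketch, not an exact transcript):

    ```text
    # SSH to the AUX node over the dedicated HA link as the built-in HA user
    ssh -F /static/ha/hauser.conf hauser@10.1.178.6

    # from the appliance menu, open the device console, then:
    console> system ssd show      # check the SSD firmware status first
    console> system ssd update    # start the update; the node reboots when finished
    ```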

    Am I missing something?

    Thanks in advance

  • Hello  

    I would carry out the update that way. Please note that the power supply to the firewall may have to be disconnected, so it would be good if someone is on site to operate the firewall.

    Hello admin_idl, and thanks for your answer.

    I just noticed that over CLI menu 5 -> 3 it isn't possible to run the "system ssd show/update" commands (only over CLI menu option 4), so the procedure I just wrote wouldn't work. I'm wondering how to separate the two firewalls without breaking the HA cluster.

  • Hello,

    First you need to update the passive firewall. It is then briefly offline, and the active firewall runs standalone. Wait until the HA is up again, then execute the command on the active firewall. As soon as the active firewall is back online, it can take over the active role in the HA cluster again.

  • 1. go to cli -> 4.

    2. Look for the AUX IP via command "system ha show details"

    3. telnet "AUX IP"

    4. start the update and wait until it has finished.

    5. do the same with the PRI

    that would be my plan.
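
    Sketched as a session (the IPs are placeholders taken from the HA output earlier in the thread, and the prompts are illustrative):

    ```text
    console> system ha show details   # note the AUX node's dedicated link IP
    console> telnet 10.1.178.5        # connect to the AUX node over the HA link
    console> system ssd update        # run the update on the AUX, wait for HA to go green
    # then repeat the telnet/update steps against the former PRI node
    ```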

Reply Children
  • You can jump from advanced shell to console by using the command: 

    # cish 

    So do the SSH part to the AUX advanced shell and enter cish.
    You will not see which appliance you are on after entering cish, as it switches to the console prompt.
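
    For example (the hostname prompt is illustrative; the "Local node" serial number in the HA output identifies which appliance you are on):

    ```text
    # from the AUX advanced shell, switch to the console
    XGS136_SFOS# cish
    console> system ha show details   # check the "Local node" serial number
    ```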
