
Slave node stuck in up2date state

The slave node is stuck in the UP2DATE state. Rebooting does not solve the problem.

MASTER: 2 Node2 198.19.250.2 9.403004 ACTIVE
SLAVE: 1 Node1 198.19.250.1 9.356003 UP2DATE

Login to the slave node fails with "permission denied". Changing the SSH password for the slave via WebAdmin does not work either.

Possible Reason:
version of package '/var/up2date/sys/u2d-sys-9.356003-404005.tgz.gpg' doesn't fit, skipping
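
For reference, the mismatch can usually be confirmed from the master's shell. A minimal check, assuming the stock Sophos UTM location /etc/version for the firmware version (verify the path on your installation); /var/up2date/sys/ is the directory from the log line above:

    # on the master node, as root
    cat /etc/version           # firmware version of this node, e.g. 9.403004
    ls -l /var/up2date/sys/    # u2d-sys packages staged for the slave

If the staged u2d-sys package is built against a different base version than the slave is actually running, the slave skips it and never catches up on its own, which is exactly the "doesn't fit, skipping" behaviour quoted above.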



Parents
  • Hi Technik.

    You're saying that you are not able to ssh into your slave node using ha_utils ssh?

    If so, you are in a bit of a pickle here. I would suggest contacting support ASAP if this is a commercial license. Another course of action would be to destroy your cluster, upgrade both nodes to the same version and rebuild the cluster. Don't worry about losing anything: when you destroy the cluster, it will automatically factory-reset and shut down your slave node, and your master node will remain intact.

    Just make sure, if you decide to rebuild your cluster, that your master node's uptime is higher than your slave node's; otherwise the slave might end up being selected as master and wipe out the master node's configuration, logs and reports. Just reboot the slave node a few times before joining it to the cluster and Bob's your uncle.
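
    A quick way to compare the two uptimes once the rebuilt slave is reachable again: uptime is a standard Linux command, and ha_utils ssh is the same helper mentioned above. A minimal sketch, run from the master's shell:

        # on the master node
        uptime         # note the master's uptime
        ha_utils ssh   # hop onto the slave over the HA link
        uptime         # the slave's uptime should be lower
        exit           # back to the master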

    Regards - Giovani

Children
  • Hi Giovani

    With your quick guide I was able to solve the issue.

    > destroy cluster
    > factory reset slave and bring it to the newest firmware version
    > update master to the newest firmware version
    > re-create cluster (pay attention to uptime)

    > sync nodes... all good! (quick check below)
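
    A quick check at the end, assuming the stock Sophos UTM HA log lives at /var/log/high-availability.log (verify the path on your box):

        # on the master node, as root
        tail -f /var/log/high-availability.log   # both nodes should end up ACTIVE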

    Bob's my uncle now! ;)

    Many thanks for your help.

    Regards
    Marcel