I deployed VI-19.0.0_GA.VMW-317.zip last Sunday and migrated SFOS 19.0.0 GA-Build317 from the old SFV4C6 to this new one (because of swap problems). Veeam ONE Monitor started sending Guest disk space "/var" alarms today. It looks like the SFOS v19 image has much less space for /var than v18.5 (in line with https://community.sophos.com/sophos-xg-firewall/sfos-v19-early-access-program/f/discussions/133616/v19-hyperv-disk-usage-reporting-full).
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# fdisk -l
Disk /dev/sda: 16 GB, 17179869184 bytes, 33554432 sectors
2088 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Device     Boot StartCHS     EndCHS       StartLBA  EndLBA    Sectors  Size   Id Type
/dev/sda1       1023,254,63  1023,254,63  33144832  33423359  278528   136M   83 Linux
/dev/sda2       1023,254,63  1023,254,63  33423360  33554431  131072   64.0M  83 Linux

Disk /dev/sdb: 80 GB, 85899345920 bytes, 167772160 sectors
10443 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes
Disk /dev/sdb doesn't contain a valid partition table
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# df -h
Filesystem             Size    Used    Available Use% Mounted on
none                   640.6M  11.6M   582.3M    2%   /
none                   2.9G    24.0K   2.9G      0%   /dev
none                   2.9G    16.9M   2.9G      1%   /tmp
none                   2.9G    14.7M   2.9G      0%   /dev/shm
/dev/boot              127.7M  26.6M   98.4M     21%  /boot
/dev/mapper/mountconf  560.3M  93.4M   462.9M    17%  /conf
/dev/content           11.8G   493.4M  11.3G     4%   /content
/dev/var               3.7G    3.5G    174.5M    95%  /var
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# df -m /var
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/var   3776      3589 170       95%  /var
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# du -d 1 -m /var/ | sort -nr | head
2995  /var/
945   /var/tslog
742   /var/newdb
565   /var/eventlogs
218   /var/savi
204   /var/tmp
192   /var/avira4
46    /var/sasi
24    /var/conan_new
24    /var/conan
By the way, I have no idea why "df -m /var" differs from "du -ms /var".
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# df -m /var/ && du -ms /var/
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/var   3776      3561 199       95%  /var
2967  /var/
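The usual reason for such a gap: df asks the filesystem how many blocks are allocated, while du sums up the files it can reach in the directory tree, so space held by deleted-but-still-open files (plus filesystem metadata and reserved blocks) is counted by df only. If lsof happens to be available on the appliance (an assumption; a minimal Busybox build may not ship it), a quick check would be:

# Open files with link count 0 are deleted but still hold blocks,
# which df counts and du does not.
lsof +L1 | grep /var

Any process listed there keeps that space allocated until it closes the handle or is restarted.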
It might be resolved in v19 MR-1, but I can't see it in the Resolved issues list. In any case, I get "No records found" when I check for new firmware.
I've tried to find out how to purge the report logs, but the firewall help is not very instructive. What is the recommended procedure?
We have now posted the v19.0 MR1 installers at https://support.sophos.com/support/s/article/KB-000043162?language=en_US.
This could be: NC-94291
Fixed in MR1.
I believe this problem (NC-94291) is fixed in build 350, but is it possible to correct it on a live system? When we updated another firewall, the /var filesystem REMAINED small. I have found some info about increasing the /var size in https://support.sophos.com/support/s/article/KB-000036775?language=en_US. Is it worth trying this after the update, or do we have to reinstall the XG firewall again and restore the config from a backup?
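For context only: on a generic Linux system, growing a filesystem in place means growing the underlying volume first and then the filesystem on top of it. A minimal sketch, assuming /var sits on an LVM logical volume with the hypothetical names vg0/var (the actual SFOS layout and the KB-000036775 procedure may well differ):

# Check the current layout first (names below are hypothetical).
lvs
# Grow the logical volume by 4 GB...
lvextend -L +4G /dev/vg0/var
# ...then grow the ext3/ext4 filesystem into the new space
# (online growth is supported when enlarging).
resize2fs /dev/vg0/var

Whether something like this is safe on an appliance like SFOS is exactly the open question here, which is why an official procedure is worth waiting for.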
LuCar Toni said: All the information is available. The bug is fixed. You need to reinstall with the new v19.0 MR1 installer or an older installer.
That one is not available! As I have written now in several posts! What kind of solution approach is that??
We are working on publishing the MR1 installers as we speak; I'll update this thread when they're available.
Hello guys, as long as we don't get a solution from Sophos, the only steps you can take are as follows:
1. Reduce the report retention time to the lowest possible value.
2. Flush the report data on the shell.
Unfortunately I have to do this every second day now, because our customer's appliance is more than 500 km away. @Sophos: great bug, it keeps us at work. SOLVE IT!
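Since the pain here is checking /var by hand every other day, a small watchdog can at least take over the monitoring. A minimal sketch, assuming a POSIX shell with df, awk, and logger on the appliance (the 90% threshold and the var-watch tag are arbitrary choices, not anything from this thread):

#!/bin/sh
# Hypothetical /var watchdog: log a warning when usage crosses a threshold.
THRESHOLD=90
USAGE=$(df -m /var | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "/var is ${USAGE}% full on $(hostname)" | logger -t var-watch
fi

Run from cron every few hours, or pipe the message to a mail relay instead of logger, it at least turns the manual check into an alert.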
I do apologize for the trouble & extra work this issue has caused you & your customer. It was our fault this bug was introduced in v19.0 GA, and we are doing a full RCA to figure out how it happened and what we can improve internally to prevent this type of issue from happening again.
Generally we only publish a version's installer to MySophos when it fully GAs, but I am working with the team now to publish the MR1 installer ASAP to provide you (and others affected) a viable solution.
Bobby, thanks for your answer, which sounds a bit more reliable to me. Please remember the many installations that were made with 19.0.x. A new installation costs you reputation and us money. There must be another solution. BR Gerd
LuCar Toni said: Then use the v18.5 MR4 installer. Do the installation, jump to the end of the setup, and do not restore your backup. Then upgrade via SIG to v19.0 MR1. After the upgrade, restore your backup and you should be fine. Including reboots, you should be good to go in 15 minutes.
15 minutes for deploying and configuring the VM (when I have to set the default GW in the shell after every reboot), upgrading, deregistering the old VM from Central, backing up the config and stopping the old VM, restoring the config and trying to register to Central? You must be Superman. It took me two hours for the standby SFV4C6, and I'm still not able to register it to Central, neither via OTP nor via account :-( I'll open a new discussion for that.
I'll prepare a VM for the main SFV4C6, but I will have to wait for a maintenance window to do the transfer.
Thanks Tomas, and imagine you are not on site and the firewall is the default gateway...
The main SFV4C6 is the default GW, MTA, WAF, VPN, etc. Arranging maintenance and downtime is not easy :-(
Like Luca said, this issue affects VMs which were deployed with the v19.0 GA installer. If you re-deploy the VM using this MR1 installer, it would resolve the issue.
I understand re-deploying a VM is a sizeable effort, so in the meantime we're also trying to find a solution to dynamically grow the partition size on a running VM without re-deploying.
Hi Bobby, thank you for your answer, which gives us some initial hope. Please stay on this topic. A solution without reinstallation must be worked out. Thanks, Gerd
Hi Gerd Rehders1, TomasLavicky, Jiri Hadamek,
We have developed a workaround that will resize the /var partition on a running VM (while keeping your data), but it requires a reboot of the VM.
Can you please PM me your email address, so I can send you the workaround and instructions on how to run it?
This sounds very good.
You just got a PM.
Thanks a lot!
This time SOPHOS has done a great job!
The fix worked for me, and I can now schedule the maintenance windows on the affected appliances.
THANKS A LOT!!!
I am glad we were able to help and that the workaround worked for you. Again, I do apologize for any inconvenience this issue may have caused you.
Hi Bobby, just one last piece of feedback. I have just patched all systems. One machine came up with the reportdb dead, but after a flush of the report data it is running fine! The only difference between the machines was that the one with the dead reportdb was already running SFOS 19.0.1 MR-1-Build365. That flush was the first great tip from LuCar Toni (see above). BR Gerd
Hi Bobby, I have the same problem. Can you share the solution?
I have the same problem here. Can you share the solution with me?
If you contact Sophos Support and reference NC-94291, they would be able to provide the workaround for you.
Alternatively, if you PM me your email address, I can email it to you as well.
Hi Bob, thanks a lot for your help. The issue was fixed.