I deployed VI-19.0.0_GA.VMW-317.zip last Sunday and migrated SFOS 19.0.0 GA-Build317 from the old SFV4C6 to this new one (because of swap problems). Veeam ONE Monitor started sending Guest disk space "/var" alarms today. It looks like the SFOS v19 image has much less space for /var than v18.5 (similar to https://community.sophos.com/sophos-xg-firewall/sfos-v19-early-access-program/f/discussions/133616/v19-hyperv-disk-usage-reporting-full).
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# fdisk -l
Disk /dev/sda: 16 GB, 17179869184 bytes, 33554432 sectors
2088 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Device    Boot StartCHS    EndCHS      StartLBA  EndLBA   Sectors Size  Id Type
/dev/sda1      1023,254,63 1023,254,63 33144832  33423359  278528 136M  83 Linux
/dev/sda2      1023,254,63 1023,254,63 33423360  33554431  131072 64.0M 83 Linux

Disk /dev/sdb: 80 GB, 85899345920 bytes, 167772160 sectors
10443 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes
Disk /dev/sdb doesn't contain a valid partition table
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# df -h
Filesystem                Size      Used Available Use% Mounted on
none                    640.6M     11.6M    582.3M   2% /
none                      2.9G     24.0K      2.9G   0% /dev
none                      2.9G     16.9M      2.9G   1% /tmp
none                      2.9G     14.7M      2.9G   0% /dev/shm
/dev/boot               127.7M     26.6M     98.4M  21% /boot
/dev/mapper/mountconf   560.3M     93.4M    462.9M  17% /conf
/dev/content             11.8G    493.4M     11.3G   4% /content
/dev/var                  3.7G      3.5G    174.5M  95% /var
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# df -m /var
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/var                  3776      3589       170  95% /var
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# du -d 1 -m /var/|sort -nr|head
2995 /var/
945  /var/tslog
742  /var/newdb
565  /var/eventlogs
218  /var/savi
204  /var/tmp
192  /var/avira4
46   /var/sasi
24   /var/conan_new
24   /var/conan
BTW I have no idea why "df -m /var" differs from "du -ms /var".
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# df -m /var/ && du -ms /var/
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/var                  3776      3561       199  95% /var
2967 /var/
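One generic explanation I found for df/du mismatches (not SFOS-specific): a process holding an open descriptor to a deleted file keeps its blocks allocated. df counts those blocks, but du cannot see the file any more. A minimal sketch of the effect on any Linux box:

```shell
# Generic demo: space held by a deleted-but-open file is counted by
# df but is invisible to du.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=1024 2>/dev/null  # write 1 MB
exec 3<"$f"   # this shell keeps the file open on descriptor 3
rm -f "$f"    # du no longer sees the file, df still counts its blocks
exec 3<&-     # only now is the space actually released
echo "deleted-but-open demo finished"
```

On a firewall that would typically mean a daemon still holding a rotated or deleted log open; restarting the service (or rebooting) usually reconciles the two numbers.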
It could be resolved in v19 MR-1, but I can't see it in the Resolved issues list. Anyway, I get "No records found" when I check for new firmware.
I've tried to find out how to purge report logs, but the firewall help is not very instructive. What is the recommended procedure?
We have now posted the v19.0 MR1 installers at https://support.sophos.com/support/s/article/KB-000043162?language=en_US.
Like Luca said, this issue affects VMs which were deployed with the v19.0…
This could be: NC-94291
Fixed in MR1.
I believe this problem (NC-94291) is fixed in build 350. But is it possible to correct this problem on a "live system"? When we updated another firewall, the /var filesystem REMAINED small. I have found some info about "increasing /var size" in https://support.sophos.com/support/s/article/KB-000036775?language=en_US. Is it worth trying this after the update? Or do we have to reinstall the XG firewall again and restore the config from a backup?
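For scale, plugging the figures from the fdisk -l and df -m output earlier in the thread into a quick shell calculation shows how much of the allocated disk the small /var leaves unused:

```shell
# Figures copied from the outputs earlier in this thread
disk_bytes=85899345920   # /dev/sdb per "fdisk -l" (the 80 GB data disk)
var_mb=3776              # /var size per "df -m /var" (1M-blocks)

disk_mb=$((disk_bytes / 1024 / 1024))
echo "disk=${disk_mb}MB /var=${var_mb}MB unused=$((disk_mb - var_mb))MB"
# prints: disk=81920MB /var=3776MB unused=78144MB
```

So roughly 78 GB of the provisioned disk is sitting idle while /var runs at 95%.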
Are there any suggestions for this yet? We are also affected and have 5 productive instances! SFOS 19.0.1 MR-1 build 350 is not available as an install image, and today it seems to have been withdrawn as an update as well??? We need a way to repair this without reinstalling!
It is fixed in MR1. The build version does not matter. You should have access to v19.0 MR1 in your Licensing Portal. community.sophos.com/.../sophos-firewall-v19-mr1-re_2d00_release-build-365-is-now-available
19.0.1 MR-1 is available as an update, but NOT as installation media. But if you click on "check for updates" on a firewall today, it says "no update available". That was different yesterday!
Besides, you did not answer my question! The update does not fix the error with the /var directory! So the question remains: how can the problem be fixed without reinstalling??
I opened a case regarding this issue (NC-94291)
This could be related to V19.0 MR1. See: https://community.sophos.com/sophos-xg-firewall/b/blog/posts/sophos-firewall-v19-mr1-re_2d00_release-build-365-is-now-available
There are two fixes:
NC-100679 & NC-94291
So you can update your current installation to v19.0 MR1 and try it. The respin version fixed another issue, which could potentially have caused this one.
Hi Toni, thank you for the quick response. I need you to explain your "... try it ...". Do you mean that /var would be "expanded" once the update has finished? Or do we need to export, reinstall, and import? Or do you not know, and mean "maybe the /var size changes after a reinstall"?
As I wrote: the update does not fix the error with the /var directory!
Of course I checked before I wrote that!
SFVUNL_KV01_SFOS 19.0.1 MR-1-Build365# df -h
Filesystem                Size      Used Available Use% Mounted on
none                    613.2M      1.5M    567.0M   0% /
none                      1.9G     24.0K      1.9G   0% /dev
none                      1.9G     51.7M      1.9G   3% /tmp
none                      1.9G     14.6M      1.9G   1% /dev/shm
/dev/boot               127.7M     34.3M     90.7M  27% /boot
/dev/mapper/mountconf   560.3M     72.0M    484.3M  13% /conf
/dev/content             11.8G    496.1M     11.3G   4% /content
/dev/var                  3.7G      1.5G      2.2G  41% /var
We are not increasing /var because there is no reason to. And 41% is actually fine from my perspective; /var is rarely used.
This firewall is brand new (installed yesterday).
And we have another one that is 3 weeks old; there it is at 100%!
Sorry Luca, please don't think I'm stupid, but your answer is simply wrong.
There is an NC-94291 for this issue, and you try to say 41% is fine........
Here is an example of how it should be. (This appliance has been updated from 18.x.)
SFV1C4_SO01_SFOS 19.0.0 GA-Build317# df -h
Filesystem                Size      Used Available Use% Mounted on
none                    231.4M     12.6M    202.6M   6% /
none                      1.9G     20.0K      1.9G   0% /dev
none                      1.9G     21.5M      1.9G   1% /tmp
none                      1.9G     14.6M      1.9G   1% /dev/shm
/dev/boot               127.7M     50.0M     75.0M  40% /boot
/dev/mapper/mountconf   385.4M     73.1M    308.3M  19% /conf
/dev/content              3.6G    601.6M      3.0G  16% /content
/dev/var                 29.9G      8.6G     21.3G  29% /var
There IS a reason!!
We "consistently" have only 5%-10% free. Sophos reports this nearly every day as an error. AND our monitoring system reports it permanently. AND we keep losing reports older than a few days. AND the email system sometimes refuses to send bigger emails. AND we have allocated an 80 GB disk for a 4 GB /var. AND....
SFV4C6_VM01_SFOS 19.0.0 GA-Build317# df -h
Filesystem                Size      Used Available Use% Mounted on
none                    640.6M     11.6M    582.3M   2% /
none                      2.9G    968.3M      2.0G  33% /tmp
none                      2.9G     24.0K      2.9G   0% /dev
none                      2.9G     14.7M      2.9G   0% /dev/shm
/dev/boot               127.7M     26.6M     98.4M  21% /boot
/dev/mapper/mountconf   560.3M     95.4M    460.9M  17% /conf
/dev/content             11.8G    493.8M     11.3G   4% /content
/dev/var                  3.7G      3.3G    328.9M  91% /var
Ok, let's rephrase this.
You are facing a different problem.
Your /var filling up is not caused by a bug; instead, the size of your partition is simply too small. This has nothing to do with any of the IDs I mentioned. Those IDs tackle issues which increase the amount of data stored, and resolve that.
But if your partition is not utilizing the full disk size, that is a different issue. You can follow up on this one with support.