
UTM 9 up2date stuck on version 9.352-6 and pattern 94753

My software-based home UTM 9 is stuck on version 9.352-6 and pattern 94753. The dashboard says I have "3 Update(s) available for download". But when I go to the Up2Date page it says: "Current firmware version: 9.352-6" - "Your firmware is up to date." and "Current pattern version: 94753" - "Your patterns are up to date."

I tried to manually download and install the "u2d-sys-9.353004-354004.tgz.gpg" file from ftp.astaro.com/.../ but... after installation and reboot it still says I have v9.352-6 and pattern 94753, and that I have "3 Update(s) available for download".

So I am wondering if there is a trick to getting up2date (or manual installations) to work again. They have always worked fine in the past on this box and I have been using UTM 9 for several years without issue.

Any help or suggestions would be great!!



  • "3 Update(s) available for download" indicates that you have configured "Manual" for 'Firmware Download Interval'.  On the 'Firmware' tab, click on 'Check for Up2Date packages now'.  If you had selected an interval, try toggling it to "Manual" and then back.  Depending on your download speed, you'll need to wait awhile to see what happens.

    Cheers - Bob

    PS Up2Dates must be applied in order. 9.353004-354004 cannot be applied to a UTM running 9.352.

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Thanks for your reply.

    The "Firmware Download Interval" and the "Pattern Download/Installation Interval" are both set to "Every 15 minutes", and I have never changed those settings to manual. I can try switching those to "Manual" and back to "Every 15 minutes". That would be great if it were that easy!!

    Also, under "Available Firmware Up2Dates" on the "Up2Date" page it says "There are no Up2Date packages available for installation"... which obviously is NOT correct.


    As for manually attempting to update the firmware...
    I was aware that they need to be applied in order.  I just missed that the patch number on the 9.352 download file was not the same as the current version on my UTM - I saw 9.352006 and just jumped to the next file after it. I see now I need to try installing "u2d-sys-9.352006-353004" before I can try installing "9.353004-354004".


    Thanks! I will give those two things a try.

  • 1. Enable UTM SSH access in Management -> System Settings -> Shell Access. Define the loginuser and root passwords and put the "Internal (Network)" object in the Allowed Networks box.
    2. Connect to the UTM LAN IP address using the PuTTY tool. First log in as loginuser, then run the su command and provide the root password.
    3. Navigate to the /var/up2date/sys folder and check its contents with the ls command. Remove everything in it with the rm * command.
    4. Run the auisys.plx command to force up2date (see the sketch below).
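
    Roughly, the whole sequence from the shell looks like this (just a sketch; 192.168.0.1 here is only a placeholder for your UTM's LAN IP):

    ssh loginuser@192.168.0.1    # step 2: log in as loginuser (same thing PuTTY does)
    su                           # become root; enter the root password
    cd /var/up2date/sys          # step 3: downloaded firmware Up2Dates are cached here
    ls                           # check the folder's contents first
    rm *                         # remove everything in it
    auisys.plx                   # step 4: force a fresh Up2Date run (add --verbose for more detail)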

  • vilic, Thanks so much for the step-by-step!!  That really helped.

    Well, I have some more info now... it looks like I am having a storage problem. When I ran auisys.plx --verbose I got some out-of-space errors.

    How do I clear up the needed space for up2date?

    --------------

    Install u2d packages <avira3>
    Starting installing up2date packages for type 'avira3'
    Installing up2date package: /var/up2date/avira3/u2d-avira3-9.14302.tgz.gpg
    Verifying up2date package signature
    Up2Date failed: Not enough free space for '/var/up2date/avira3-install'. Required space: 314187 KB Available space: 205020 KB;  inodes: 304307
    Install u2d packages <savi>
    Starting installing up2date packages for type 'savi'
    Installing up2date package: /var/up2date/savi/u2d-savi-9.8633.tgz.gpg
    Verifying up2date package signature
    Up2Date failed: Not enough free space for '/var/up2date/savi-install'. Required space: 268122 KB Available space: 205020 KB;  inodes: 304307
    A serious error occured during installation! (40)

    --------------

    But the dashboard says I have plenty of space:

    Log Disk: 9% of 53.3 GB

    Data Disk: 11% of 40.7 GB

    --------------

    When I ran df from the SSH command window... it looks like /dev/sda6 is almost completely full (Use% = 96%). All the other partitions are at 1% or 0%, with /dev/sda5 being the next highest at 12%... so it must be that /dev/sda6 is too full.

    What can I delete from /dev/sda6?

    Thanks again for your help with this. 

  • Update: I guess /dev/sda6 is the root / partition.

  • Update 2: In the hardware log I found that the root partition went from a fairly steady 50-60% usage over the last year to 90-93% usage between the 1st of the year and 1/10/2016... and it has stayed there since, which is most likely why my up2date will no longer work.

    I deleted most of the auto backups... which helped me get a small amount of extra space back on root (I checked with the df command as I deleted them).

    But is there anything more I can do to recover some of that 30% of space it used to have before the 1st of the year?


    Thanks again!

  • Sorry for all the back-to-back posts... but I am kind of struggling here. I haven't been able to figure out what files to clean up/remove to regain the lost space.

    So I was wondering: since I have plenty of space on some of the other partitions, does anyone know if I can use something like GParted (a partition editor) to reallocate enough space so up2date can start running again?


    Thanks!

  • Try, from the command line: du -shx /var/storage/*

    What does that give you?
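
    If you'd rather see those ranked by size, this should work as well (plain du and sort, nothing version-specific assumed):

    du -skx /var/storage/* | sort -n    # sizes in KB, biggest directories last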

    Cheers - Bob

     
  • Bob, when I ran that command I got back this:


    /root # du -shx /var/storage/*
    16K     /var/storage/agent
    88M     /var/storage/chroot-clientlessvpn
    4.0M    /var/storage/chroot-ftp
    148M    /var/storage/chroot-http
    17M     /var/storage/chroot-pop3
    26M     /var/storage/chroot-reverseproxy
    69M     /var/storage/chroot-smtp
    271M    /var/storage/cores
    16K     /var/storage/lost+found
    32K     /var/storage/pgsql
    1.8G    /var/storage/pgsql92
    2.1G    /var/storage/swapfile


    So it looks like the pgsql92 and the swapfile are hogging up most of the space.

    Any suggestions?

  • First, it looks like you have this installed on a disk that is too small.

    You probably can remove all of the core dumps in /var/storage/cores.  Be careful in /var/storage/pgsql92/data/pg_xlog that you don't delete anything with a time stamp equal to or newer than that on archive_status.
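
    For example, something like this (a sketch; eyeball the time stamps yourself before removing anything):

    cd /var/storage/pgsql92/data/pg_xlog
    ls -l archive_status    # note the time stamp on this directory
    ls -lt                  # newest first; only the files listed below archive_status are older than it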

    Cheers - Bob

     
  • Thanks Bob for the info.

    My UTM box has been running fine with the 120 GB drive for a couple of years now, and the HDD size wasn't an issue until the 1st of the year (Jan 2016), when some files started growing very fast and chewing up space.

    Also, your statement about the size of my HDD is kind of surprising, since the "UTM 9.3 quick start guide" PDF states in its "Minimum Hardware Requirements" section "20 GB hard disk drive (40 GB recommended)"... and my drive is 3x the "recommended" size.

    Maybe it's time to rebuild using a bigger disk... so if 120 GB is too small, what size HDD would be "REALLY" recommended??

    But until I have a chance to do that...

    Here is the du output for "/var/storage/cores/*"... so can I just run an "rm *" command in that "cores" directory?

    /var/storage/cores # du -shx /var/storage/cores/*
    24M     /var/storage/cores/admin-reporter..4233
    215M    /var/storage/cores/cssd.12817
    12M     /var/storage/cores/ips-reporter.pl.26440
    1.8M    /var/storage/cores/ipv6_watchdog.4256
    20M     /var/storage/cores/ulogd.4058


    If so, it looks like I will gain back at least a couple hundred MB...

    Thanks again for sticking with me on this!!!

  • I would first do ls -l /var/storage/cores to check that you don't have a recent core dump you might want to keep for Support.

    You say, "And the HD size wasn't an issue until the 1st of the year (Jan 2016) when some files started growing very fast and chewing up space."  Do you know where that growth was?  Do you have any Reporting on when the partition started to grow?  Can you show us the graph?

    Again, you'll probably want to delete some of the pgsql92 files as I mentioned above, but do so carefully, with my warning above in mind.

    Cheers - Bob

     
  • Bob,

    I am sorry, but I am a bit of a noob with the inner workings, so more details on this statement would be greatly appreciated: "Be careful in /var/storage/pgsql92/data/pg_xlog that you don't delete anything with a time stamp equal to or newer than that on archive_status."

    When you said "be careful"... I read it as "stay away from".  So how and where do I see the "archive_status" time stamp? Any other specifics would be helpful.

    (And yes, I will backup my config before deleting anything.)

    Also, should a 120 GB HDD be big enough for a 5-user site? My Logs directory is hardly touched... it's really ROOT that is almost completely full.

    Here is the image of my Storage graph...

    As always.... THANKS for your help!!

  • Run ls -l /var/storage/pgsql92/data/pg_xlog - you will see that there is a sub-directory archive_status.

    Yes, 120GB should be enough for five users.

    Show us the results of the version command and of df.

    Cheers - Bob

     
  • Here you go...

    :/ # version
    Current software version...: 9.352006
    Hardware type..............: Software Appliance
    Installation image.........: 9.004-33.1
    Installation type..........: asg
    Installed pattern version..: 94753
    Downloaded pattern version.: 94753
    Up2Dates applied...........: 37 (see below)
                                 sys-9.004-9.004-33.34.1.tgz (Mar 10  2013)
                                 sys-9.004-9.005-29.15.2.tgz (Mar 10  2013)
                                 sys-9.005-9.005-15.16.1.tgz (Mar 10  2013)
                                 sys-9.005-9.006-15.5.2.tgz (Apr 16  2013)
                                 sys-9.006-9.100-5.16.1.tgz (May 20  2013)
                                 sys-9.100-9.101-16.12.1.tgz (Jun 12  2013)
                                 sys-9.101-9.102-11.8.2.tgz (Jul  3  2013)
                                 sys-9.102-9.103-8.5.2.tgz (Nov 18  2013)
                                 sys-9.103-9.104-5.17.2.tgz (Nov 18  2013)
                                 sys-9.104-9.105-17.9.1.tgz (Nov 18  2013)
                                 sys-9.105-9.106-9.17.1.tgz (Nov 18  2013)
                                 sys-9.106-9.107-17.33.2.tgz (Jan 20  2014)
                                 sys-9.107-9.108-33.23.2.tgz (Mar 26  2014)
                                 sys-9.108-9.109-23.1.2.tgz (Mar 26  2014)
                                 sys-9.109-9.110-1.22.1.tgz (Apr 13  2014)
                                 sys-9.110-9.111-22.7.1.tgz (Apr 13  2014)
                                 sys-9.111-9.111-7.11.1.tgz (Jul 28  2014)
                                 sys-9.111-9.112-7.12.1.tgz (Jul 28  2014)
                                 sys-9.112-9.113-12.1.2.tgz (Jul 28  2014)
                                 sys-9.113-9.203-1.3.1.tgz (Jul 28  2014)
                                 sys-9.203-9.204-3.20.1.tgz (Jul 28  2014)
                                 sys-9.204-9.205-20.12.1.tgz (Sep 14  2014)
                                 sys-9.205-9.206-12.35.1.tgz (Sep 14  2014)
                                 sys-9.206-9.207-35.19.2.tgz (Jan 19  2015)
                                 sys-9.207-9.208-19.8.5.tgz (Jan 19  2015)
                                 sys-9.208-9.209-8.8.1.tgz (Jan 19  2015)
                                 sys-9.209-9.210-8.20.1.tgz (Jan 19  2015)
                                 sys-9.210-9.304-20.9.2.tgz (Jan 26  2015)
                                 sys-9.304-9.305-9.4.1.tgz (Jan 26  2015)
                                 sys-9.305-9.306-4.6.1.tgz (Jan 26  2015)
                                 sys-9.306-9.307-6.6.1.tgz (Mar  6  2015)
                                 sys-9.307-9.308-6.16.2.tgz (Mar  6  2015)
                                 sys-9.308-9.309-16.3.1.tgz (Mar 11  2015)
                                 sys-9.309-9.310-3.11.1.tgz (May 10  2015)
                                 sys-9.310-9.311-11.3.1.tgz (Jun  2  2015)
                                 sys-9.311-9.312-3.8.1.tgz (Jun  2  2015)
                                 sys-9.312-9.313-8.3.1.tgz (Jul  9  2015)
    Up2Dates available.........: 0
    Factory resets.............: 0
    Timewarps detected.........: 0

    ---

    :/ # df
    Filesystem      1K-blocks     Used  Available Use% Mounted on
    /dev/sda6         5412452  4883672     230800  96% /
    udev              1669276       80    1669196   1% /dev
    tmpfs             1669276        0    1669276   0% /dev/shm
    /dev/sda1          338875    24774     292085   8% /boot
    /dev/sda5        42628172  4373268   35957868  11% /var/storage
    /dev/sda7        55903552   298328   52600092   1% /var/log
    /dev/sda8         2559076     5908    2403460   1% /tmp
    tmpfs             1669276        0    1669276   0% /var/sec/chroot-httpd/dev/shm
    tmpfs             1669276        0    1669276   0% /var/storage/chroot-reverseproxy/dev/shm
    tmpfs             1669276        8    1669268   1% /var/storage/chroot-smtp/tmp/ram

  • Sorry... back-to-back posts again...

    From this ls -l /var/storage/pgsql92/data/pg_xlog... I got back lots of rows of entries like the examples below (I shortened the list quite a bit).

    So you are saying I could run the "rm 000000010000004600000041" command on entries like this? (the May 1st entry, for example)

    But I should stay away from the May 6th entries, since "archive_status" is also "May 6"? (ignoring the time of day for now, just to be safe)

    -rw------- 1 postgres postgres 16777216 May  6 13:20 00000001000000460000003E
    -rw------- 1 postgres postgres 16777216 May  6 14:35 00000001000000460000003F
    -rw------- 1 postgres postgres 16777216 May  6 14:56 000000010000004600000040
    -rw------- 1 postgres postgres 16777216 May  1 23:32 000000010000004600000041
    -rw------- 1 postgres postgres 16777216 May  2 00:14 000000010000004600000042
    -rw------- 1 postgres postgres 16777216 May  2 01:05 000000010000004600000043
    -rw------- 1 postgres postgres 16777216 May  2 01:47 000000010000004600000044
    -rw------- 1 postgres postgres 16777216 May  2 02:27 000000010000004600000045
    drwx------ 2 postgres postgres    12288 May  6 14:39 archive_status
    cronkright:/ #

  • With only eight files, it's not worth deleting anything.  Usually, these problems are associated with either the cores or the pgsql92 directory, but I just reread the thread and noticed that you had already picked up on the /dev/sda6 problem near the beginning.  What do you get from du -shx /*

    Cheers - Bob

     
  • No No No... I edited the list down to ask the question about the dates compared to the "archive_status" folder...

    Here is the full list from running "ls -l /var/storage/pgsql92/data/pg_xlog":

    :/root #  ls -l /var/storage/pgsql92/data/pg_xlog
    total 1736716
    -rw------- 1 postgres postgres 16777216 May  5 05:02 000000010000004600000023
    -rw------- 1 postgres postgres 16777216 May  5 06:40 000000010000004600000024
    -rw------- 1 postgres postgres 16777216 May  5 06:40 000000010000004600000025
    -rw------- 1 postgres postgres 16777216 May  5 06:54 000000010000004600000026
    -rw------- 1 postgres postgres 16777216 May  5 08:59 000000010000004600000027
    -rw------- 1 postgres postgres 16777216 May  5 10:30 000000010000004600000028
    -rw------- 1 postgres postgres 16777216 May  5 11:47 000000010000004600000029
    -rw------- 1 postgres postgres 16777216 May  5 13:01 00000001000000460000002A
    -rw------- 1 postgres postgres 16777216 May  5 14:13 00000001000000460000002B
    -rw------- 1 postgres postgres 16777216 May  5 15:19 00000001000000460000002C
    -rw------- 1 postgres postgres 16777216 May  5 16:20 00000001000000460000002D
    -rw------- 1 postgres postgres 16777216 May  5 17:17 00000001000000460000002E
    -rw------- 1 postgres postgres 16777216 May  5 18:17 00000001000000460000002F
    -rw------- 1 postgres postgres 16777216 May  5 18:40 000000010000004600000030
    -rw------- 1 postgres postgres 16777216 May  5 18:40 000000010000004600000031
    -rw------- 1 postgres postgres 16777216 May  5 19:23 000000010000004600000032
    -rw------- 1 postgres postgres 16777216 May  5 21:14 000000010000004600000033
    -rw------- 1 postgres postgres 16777216 May  5 22:39 000000010000004600000034
    -rw------- 1 postgres postgres 16777216 May  6 00:32 000000010000004600000035
    -rw------- 1 postgres postgres 16777216 May  6 03:11 000000010000004600000036
    -rw------- 1 postgres postgres 16777216 May  6 05:50 000000010000004600000037
    -rw------- 1 postgres postgres 16777216 May  6 06:40 000000010000004600000038
    -rw------- 1 postgres postgres 16777216 May  6 06:40 000000010000004600000039
    -rw------- 1 postgres postgres 16777216 May  6 07:44 00000001000000460000003A
    -rw------- 1 postgres postgres 16777216 May  6 09:41 00000001000000460000003B
    -rw------- 1 postgres postgres 16777216 May  6 10:52 00000001000000460000003C
    -rw------- 1 postgres postgres 16777216 May  6 12:05 00000001000000460000003D
    -rw------- 1 postgres postgres 16777216 May  6 13:20 00000001000000460000003E
    -rw------- 1 postgres postgres 16777216 May  6 14:35 00000001000000460000003F
    -rw------- 1 postgres postgres 16777216 May  6 15:47 000000010000004600000040
    -rw------- 1 postgres postgres 16777216 May  6 17:39 000000010000004600000041
    -rw------- 1 postgres postgres 16777216 May  6 18:40 000000010000004600000042
    -rw------- 1 postgres postgres 16777216 May  6 18:40 000000010000004600000043
    -rw------- 1 postgres postgres 16777216 May  6 18:59 000000010000004600000044
    -rw------- 1 postgres postgres 16777216 May  6 20:05 000000010000004600000045
    -rw------- 1 postgres postgres 16777216 May  6 21:15 000000010000004600000046
    -rw------- 1 postgres postgres 16777216 May  6 22:20 000000010000004600000047
    -rw------- 1 postgres postgres 16777216 May  6 23:25 000000010000004600000048
    -rw------- 1 postgres postgres 16777216 May  7 00:41 000000010000004600000049
    -rw------- 1 postgres postgres 16777216 May  7 02:20 00000001000000460000004A
    -rw------- 1 postgres postgres 16777216 May  7 04:01 00000001000000460000004B
    -rw------- 1 postgres postgres 16777216 May  7 05:40 00000001000000460000004C
    -rw------- 1 postgres postgres 16777216 May  7 06:40 00000001000000460000004D
    -rw------- 1 postgres postgres 16777216 May  7 06:40 00000001000000460000004E
    -rw------- 1 postgres postgres 16777216 May  7 07:04 00000001000000460000004F
    -rw------- 1 postgres postgres 16777216 May  7 08:24 000000010000004600000050
    -rw------- 1 postgres postgres 16777216 May  7 09:32 000000010000004600000051
    -rw------- 1 postgres postgres 16777216 May  7 10:45 000000010000004600000052
    -rw------- 1 postgres postgres 16777216 May  7 11:57 000000010000004600000053
    -rw------- 1 postgres postgres 16777216 May  7 13:19 000000010000004600000054
    -rw------- 1 postgres postgres 16777216 May  7 14:41 000000010000004600000055
    -rw------- 1 postgres postgres 16777216 May  7 15:51 000000010000004600000056
    -rw------- 1 postgres postgres 16777216 May  7 16:51 000000010000004600000057
    -rw------- 1 postgres postgres 16777216 May  7 17:51 000000010000004600000058
    -rw------- 1 postgres postgres 16777216 May  7 18:40 000000010000004600000059
    -rw------- 1 postgres postgres 16777216 May  7 18:40 00000001000000460000005A
    -rw------- 1 postgres postgres 16777216 May  7 19:02 00000001000000460000005B
    -rw------- 1 postgres postgres 16777216 May  7 20:06 00000001000000460000005C
    -rw------- 1 postgres postgres 16777216 May  7 21:02 00000001000000460000005D
    -rw------- 1 postgres postgres 16777216 May  7 22:02 00000001000000460000005E
    -rw------- 1 postgres postgres 16777216 May  7 22:57 00000001000000460000005F
    -rw------- 1 postgres postgres 16777216 May  7 23:59 000000010000004600000060
    -rw------- 1 postgres postgres 16777216 May  8 01:35 000000010000004600000061
    -rw------- 1 postgres postgres 16777216 May  8 03:09 000000010000004600000062
    -rw------- 1 postgres postgres 16777216 May  8 04:42 000000010000004600000063
    -rw------- 1 postgres postgres 16777216 May  8 06:24 000000010000004600000064
    -rw------- 1 postgres postgres 16777216 May  8 06:40 000000010000004600000065
    -rw------- 1 postgres postgres 16777216 May  8 06:40 000000010000004600000066
    -rw------- 1 postgres postgres 16777216 May  8 07:39 000000010000004600000067
    -rw------- 1 postgres postgres 16777216 May  8 09:00 000000010000004600000068
    -rw------- 1 postgres postgres 16777216 May  8 10:22 000000010000004600000069
    -rw------- 1 postgres postgres 16777216 May  8 11:50 00000001000000460000006A
    -rw------- 1 postgres postgres 16777216 May  8 12:48 00000001000000460000006B
    -rw------- 1 postgres postgres 16777216 May  8 13:58 00000001000000460000006C
    -rw------- 1 postgres postgres 16777216 May  8 15:07 00000001000000460000006D
    -rw------- 1 postgres postgres 16777216 May  8 16:15 00000001000000460000006E
    -rw------- 1 postgres postgres 16777216 May  8 17:23 00000001000000460000006F
    -rw------- 1 postgres postgres 16777216 May  8 18:40 000000010000004600000070
    -rw------- 1 postgres postgres 16777216 May  8 18:40 000000010000004600000071
    -rw------- 1 postgres postgres 16777216 May  8 18:49 000000010000004600000072
    -rw------- 1 postgres postgres 16777216 May  8 19:52 000000010000004600000073
    -rw------- 1 postgres postgres 16777216 May  8 21:01 000000010000004600000074
    -rw------- 1 postgres postgres 16777216 May  8 22:02 000000010000004600000075
    -rw------- 1 postgres postgres 16777216 May  8 23:11 000000010000004600000076
    -rw------- 1 postgres postgres 16777216 May  9 01:02 000000010000004600000077
    -rw------- 1 postgres postgres 16777216 May  9 03:00 000000010000004600000078
    -rw------- 1 postgres postgres 16777216 May  9 04:50 000000010000004600000079
    -rw------- 1 postgres postgres 16777216 May  9 06:37 00000001000000460000007A
    -rw------- 1 postgres postgres 16777216 May  9 06:40 00000001000000460000007B
    -rw------- 1 postgres postgres 16777216 May  9 06:40 00000001000000460000007C
    -rw------- 1 postgres postgres 16777216 May  9 07:25 00000001000000460000007D
    -rw------- 1 postgres postgres 16777216 May  9 08:39 00000001000000460000007E
    -rw------- 1 postgres postgres 16777216 May  9 09:47 00000001000000460000007F
    -rw------- 1 postgres postgres 16777216 May  9 10:50 000000010000004600000080
    -rw------- 1 postgres postgres 16777216 May  9 11:55 000000010000004600000081
    -rw------- 1 postgres postgres 16777216 May  9 12:56 000000010000004600000082
    -rw------- 1 postgres postgres 16777216 May  9 14:02 000000010000004600000083
    -rw------- 1 postgres postgres 16777216 May  9 15:10 000000010000004600000084
    -rw------- 1 postgres postgres 16777216 May  9 16:10 000000010000004600000085
    -rw------- 1 postgres postgres 16777216 May  9 17:20 000000010000004600000086
    -rw------- 1 postgres postgres 16777216 May  9 17:22 000000010000004600000087
    -rw------- 1 postgres postgres 16777216 May  4 19:04 000000010000004600000088
    -rw------- 1 postgres postgres 16777216 May  4 20:33 000000010000004600000089
    -rw------- 1 postgres postgres 16777216 May  4 21:51 00000001000000460000008A
    -rw------- 1 postgres postgres 16777216 May  4 23:39 00000001000000460000008B
    -rw------- 1 postgres postgres 16777216 May  5 02:35 00000001000000460000008C
    drwx------ 2 postgres postgres    12288 May  9 17:20 archive_status

  • ... and here is the "du -shx /*" you asked for...

    :/ # du -shx /*
    6.9M    /bin
    23M     /boot
    80K     /dev
    2.1M    /doc
    85M     /etc
    0       /fsck_corrected_errors
    12K     /home
    0       /inst
    140M    /lib
    16K     /lost+found
    8.0K    /media
    4.0K    /mnt
    268K    /opt
    du: cannot access `/proc/8786/task/8786/fd/4': No such file or directory
    du: cannot access `/proc/8786/task/8786/fdinfo/4': No such file or directory
    du: cannot access `/proc/8786/fd/4': No such file or directory
    du: cannot access `/proc/8786/fdinfo/4': No such file or directory
    0       /proc
    72K     /root
    4.0K    /run
    7.1M    /sbin
    0       /sys
    2.0M    /tmp
    824M    /usr
    3.7G    /var

  • You will definitely want to run

    rm /var/storage/pgsql92/data/pg_xlog/00000001000000460000002?
    rm /var/storage/pgsql92/data/pg_xlog/00000001000000460000003?
    rm /var/storage/pgsql92/data/pg_xlog/00000001000000460000004?
    rm /var/storage/pgsql92/data/pg_xlog/00000001000000460000005?
    rm /var/storage/pgsql92/data/pg_xlog/00000001000000460000006?
    rm /var/storage/pgsql92/data/pg_xlog/00000001000000460000007?
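
    The trailing ? in each of those lines matches the final hex digit, so each command removes up to 16 of the 16 MB WAL segments. If you want to be extra careful, preview each glob with ls before the rm, e.g.:

    ls -l /var/storage/pgsql92/data/pg_xlog/00000001000000460000002?

    Nothing it lists should have a time stamp as new as archive_status.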

    Based on your last post, there's more like 20 or 40 GB on this hard drive, not 120.

    Cheers - Bob

     
  • So does that mean the UTM installer did not use the whole HDD when it created the initial partitions?

  • So after all of this I am still unclear on how to resolve this problem. At this point I am wondering if my best option is to back up my config and reinstall??

    Now it says "7 Update(s) available for download"... uggggh!!

    fortress4