[garner] constant 30% CPU, resolve cache error

Hi there,
Sophos XG230 running v19.0.1.
The garner process is constantly sitting at 30% CPU here.
Looking closer with tail -f /log/garner.log, you can see the following.

usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
usercache_output: resolve_gr_cache for FW_GW_MODULE failed
usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
[... the same FW_PBR_MODULE failure repeats continuously ...]

I don't think that is correct.
How can I get this problem solved?

thx

Stefan



  • Hello Stefan,

    Thank you for reaching out to the community. On the CLI, select option 5. Device Management, then option 3. Advanced Shell, and run the following command:

    service garner:restart -ds nosync

    Then check the logs again. A quick before/after check is sketched below.
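
    If you want to compare garner's CPU usage before and after the restart, something like this from the Advanced Shell should work (a sketch; it assumes the standard BusyBox top/grep that the appliance shell provides):

    # snapshot garner's CPU usage (BusyBox top, batch mode, one iteration)
    top -b -n 1 | grep -i garner

    # restart the logging/reporting daemon, then watch whether the errors return
    service garner:restart -ds nosync
    tail -f /log/garner.log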

    Thanks & Regards,
    _______________________________________________________________

    Vivek Jagad | Technical Account Manager 3 | Cyber Security Evolved



  • Can you share the following outputs:
    1.) df -kh
    2.) ls -larth /var/cores
    3.) tail -f /log/garner.log
    4.) tail -f /log/syslog.log
    5.) tail -f applog.log 

    Thanks & Regards,
    _______________________________________________________________

    Vivek Jagad | Technical Account Manager 3 | Cyber Security Evolved



  • df -kh

    df -kh
    Filesystem                Size      Used Available Use% Mounted on
    none                      1.6G     14.3M      1.5G   1% /
    none                      3.8G     28.0K      3.8G   0% /dev
    none                      3.8G     21.0M      3.8G   1% /tmp
    none                      3.8G     14.6M      3.8G   0% /dev/shm
    /dev/boot               127.7M     31.9M     93.0M  26% /boot
    /dev/mapper/mountconf
                            957.7M     77.1M    876.6M   8% /conf
    /dev/content             11.2G    424.2M     10.8G   4% /content
    /dev/var                 87.1G     29.2G     57.9G  34% /var
    

    ls -larth /var/cores

    -rw-------    1 root     0         888.7M Jul 27 13:44 core.snort
    drwxrwxrwt    2 root     0           4.0K Jul 27 23:58 .
    drwxr-xr-x   41 root     0           4.0K Nov 14 10:42 ..
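
    Side note: the only thing in /var/cores is a snort crash dump from July, and /var still has ~58G free, so disk space is not the issue here. If the space were ever needed, I assume the stale core could simply be deleted:

    rm -f /var/cores/core.snort   # remove the old snort core dump (keep it if support still wants it)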
    

     tail -f /log/garner.log

    usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
    usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
    usercache_output: resolve_gr_cache for FW_GW_MODULE failed
    usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
    usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
    [... FW_PBR_MODULE and FW_GW_MODULE failures repeat continuously ...]
    Nov 14 11:05:42Z: OPPOSTGRES: move_table_to_usedqueue: moving table 'available_fwapplicationv7_1668251090' FD: 14
    Nov 14 11:05:42Z: OPPOSTGRES: move_table_to_usedqueue: table 'available_fwapplicationv7_1668251090' is moved to 'tbl_used_fwapplicationv7' queue
    usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
    usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
    usercache_output: resolve_gr_cache for FW_GW_MODULE failed
    usercache_output: resolve_gr_cache for FW_PBR_MODULE failed
    

    tail -f /log/syslog.log

    Nov 14 10:45:13Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 10:45:29Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 10:49:11Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 10:50:20Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 10:51:54Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 10:55:37Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 10:59:17Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 11:00:25Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 11:01:26Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    Nov 14 11:06:45Z localhost exim: looking for plugins in '/usr/lib/sasl2', failed to open directory, error: No such file or directory
    

    tail -f applog.log 

    Nov 14 11:00:01Z apiInterface:request mode -> 1201.
    Nov 14 11:00:01Z apiInterface:Current ver :::'1500.1' 
    Nov 14 11:00:01Z apiInterface:entityjson::::::::cli::alert=HASH(0xb71dd00)
    Nov 14 11:00:01Z Info:: Transaction will not be rolled back for opcode setAlertSettings. If any operation fails, request is part of multiple request : 
    Nov 14 11:00:06Z String Built : No|LiveUserids|BWids 1|2,25,24|10,0,10
    Nov 14 11:00:11Z Checking new IPs of fqdn for mta
    Nov 14 11:01:12Z getpublickey success Key: 79e509a6409a522f667eb9d53c95aa487eeb 
    Nov 14 11:03:04Z String Built : No|LiveUserids|BWids 1|24,25|10,0
    Nov 14 11:05:10Z Checking new IPs of fqdn for mta
    Nov 14 11:06:04Z String Built : No|LiveUserids|BWids 1|25,24|0,0
    

  • Do you have IPsec tunnels configured? If yes, how many?
    Are they policy-based or tunnel-based IPsec tunnels?
    And did you recently update the firmware from v18.5.4 MR-4 to v19.0.1 MR-1?
    Are you facing this issue after the upgrade?

    Thanks & Regards,
    _______________________________________________________________

    Vivek Jagad | Technical Account Manager 3 | Cyber Security Evolved



  • >Do you have IPsec tunnels configured? If yes, how many?
    Three IPsec site-to-site tunnels and one for Sophos Connect remote access IPsec.

    >Are they policy-based or tunnel-based IPsec tunnels?
    Route-based VPN.

    >Did you recently update the firmware from v18.5.4 MR-4 to v19.0.1 MR-1?
    From v18.5.2.

    >Are you facing this issue after the upgrade?
    No.

  • How many route-based tunnels are configured?

    Since when did you notice these logs being generated?

    As the garner service is responsible for the logging and reporting part, have you faced any problems generating reports from the Reports dashboard, or noticed missing logs in the Log Viewer? A quick health check for garner is sketched below.
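
    To confirm garner is up and still writing logs, something like this from the Advanced Shell should do (a sketch; I'm assuming the -S status flag of the SFOS service wrapper behaves here as on other builds):

    service -S -ds nosync | grep -i garner   # show the daemon's current status
    tail -n 20 /log/garner.log               # confirm fresh lines are still arriving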

    Thanks & Regards,
    _______________________________________________________________

    Vivek Jagad | Technical Account Manager 3 | Cyber Security Evolved



  • >How many route-based tunnels are configured?
    Only two, but with quite a few routes and larger subnets.

    >Since when did you notice these logs being generated?
    After switching to v19.0.1, I noticed that on average, when the users are in the office, we have 50-60% CPU load.
    That was about 15-20% lower before.
    What is also noticeable in this context is that snort and garner are the main causes of the high CPU load; a one-shot snapshot of the top consumers is sketched below.
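
    For reference, a snapshot like this from the Advanced Shell shows the top consumers (a sketch assuming the BusyBox top that ships on the appliance):

    top -b -n 1 | head -15   # batch mode, single iteration, processes sorted by CPU usage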

  • Snort is responsible for IPS.
    During this increase in CPU load, have you faced any of the following issues:
    1.) Access to the firewall's web GUI?
    2.) SSH access?
    3.) Number of users? Do they face any trouble accessing the internet, or any slowness?
    4.) Network impact on LAN/WAN?

    Thanks & Regards,
    _______________________________________________________________

    Vivek Jagad | Technical Account Manager 3 | Cyber Security Evolved



  • >Snort is responsible for IPS.
    I know :)

    Regarding points 1, 2 and 4:
    No problems at all, apart from this garner and snort CPU issue.

    To point 3:
    120 users, currently about 20 of them in home office.

    That should not be a problem for the XG230.
    From what I read and see, however, there have always been complaints about high CPU utilization from garner and snort.
    This is an older problem already.