
Impact on internet speed when creating firewall exceptions

Hey there,

How much of an impact on internet speed does creating a new firewall exception have?

Since the firewall has to go through the entire ruleset, it should slow down every request a little bit.

Is there an upper bound on the number of exceptions I should create, or a formula for how much each exception slows the traffic down?

Thanks for your thoughts.

David



  • It may depend on what specific "exception" type you are talking about.

    Within the Web Filter, the Web Exceptions perform well as long as the RegEx is written correctly.  Poorly written RegEx, or putting FQDNs into the RegEx field, results in poor performance.

    Within firewall rules, a firewall rule that relies on FQDN Host objects in the Source or Destination impacts performance, because it needs to do an additional FQDN-to-IP lookup for every new connection.  If you only have a few, this is not a problem; however, I have seen customers create dozens of FQDN hosts and put them in firewall rules as a way of creating an "exception" to allow traffic.

    Related is this article:
    https://support.sophos.com/support/s/article/KBA-000008111?language=en_US
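As a toy sketch of that scaling (hypothetical names and documentation-range addresses, not SFOS internals): if a rule has to resolve every FQDN host for each new connection, the lookup count grows with both connection rate and host count.

```python
lookups = 0

def resolve(fqdn):
    """Stand-in for a DNS query; a real firewall asks its resolver."""
    global lookups
    lookups += 1
    return {"198.51.100.7"}  # placeholder answer (documentation range)

def rule_matches(conn_src, fqdn_hosts):
    """A new connection is checked against every FQDN host in the rule."""
    return any(conn_src in resolve(host) for host in fqdn_hosts)

fqdn_hosts = ["a.example.com", "b.example.com", "c.example.com"]
for _ in range(10):                        # ten new connections...
    rule_matches("192.0.2.1", fqdn_hosts)  # ...that match none of the hosts
print(lookups)  # 10 connections x 3 FQDN hosts = 30 resolutions
```

A couple of FQDN hosts are harmless, but the product of hosts and connections is what adds up.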

  • Hey Michael Dunn,

    I'm talking about the Web Filter Exceptions. I would say that the RegEx I'm using is decent, e.g. ^([A-Za-z0-9.-]*\.)?google\.com/

    But is there an upper bound on the number of exceptions I should create, e.g. 5, 10, 50, 100 or 500? Or is there no definite answer to this question, because there are so many factors playing a major role in determining the delay created by an exception? Therefore you should just use correctly written regex and as few exceptions as possible.
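For illustration only (Python's `re` as a stand-in for the proxy's regex engine, which may differ in detail), the anchored pattern above matches the domain and its subdomains while rejecting lookalike hosts:

```python
import re

# The exception pattern from the post: an optional subdomain prefix,
# then the literal domain, anchored at the start of the URL.
pattern = re.compile(r"^([A-Za-z0-9.-]*\.)?google\.com/")

print(bool(pattern.match("google.com/maps")))   # True: bare domain
print(bool(pattern.match("maps.google.com/")))  # True: subdomain
print(bool(pattern.match("evilgoogle.com/")))   # False: lookalike host
```

The `^` anchor and the required trailing dot in the optional group are what keep hosts like evilgoogle.com from slipping through.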

  • Yes, that is a well-written regex.  There are customers running several hundred of them written like this with no problems.  There is no specific limit.

    It is like...  how many pencils can you carry?  Each individual pencil is lightweight and you can carry many at a time.  Generally you would not worry about it.  But if you are also carrying bricks, then suddenly every gram counts and you want to look at the pencils.  And if you put "google.com" into the regex, it suddenly becomes a pencil that weighs 5x as much.  If you put "*.google.com.*maps", it weighs 20x as much.

    Note that the only impact of regex is CPU processing.

    I have seen customer boxes with CPU issues (mostly due to other stuff) and poorly written regex.  Improving the quality of the regex made a noticeable difference in CPU usage.
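A rough illustration of that CPU cost (Python's `re` module and an artificial URL; real proxy timings will differ): an anchored pattern gives up after examining the start of a non-matching URL, while a leading-wildcard pattern re-scans from every position in the string.

```python
import re
import timeit

# An artificial long, non-matching URL.
url = "cdn" + "x" * 500 + ".example.net/some/long/path"

anchored = re.compile(r"^([A-Za-z0-9.-]*\.)?google\.com/")
wildcard = re.compile(r".*google\.com.*maps")  # the "heavy pencil" style

t_anchored = timeit.timeit(lambda: anchored.search(url), number=5000)
t_wildcard = timeit.timeit(lambda: wildcard.search(url), number=5000)
print(f"anchored: {t_anchored:.3f}s  wildcard: {t_wildcard:.3f}s")
```

Both patterns fail to match here, but the wildcard one does far more work per URL, and that extra work is paid on every request the exception list is checked against.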

  • Hi,

    If you want to use URLs in lieu of regex, I would suggest you use SSL/TLS exceptions and processing.

    an

    XG115W - v20.0.2 MR-2 - Home

    XG on VM 8 - v21 GA
