This discussion has been locked.

Firewall Global Rule domain name entry limit

Hello, folks

I have created a rule to block newly discovered hosts that use exploit kits and the like to deliver ransomware. I understand there are many such hosts; I am just adding those that are prevalent at the time.

As the list will grow with these additions, I would like to know at what point such a rule becomes large enough that it either reaches a limit defined by the Console (if such a limit exists), or at what point processing the rule on a workstation may become problematic. I don't expect the latter to really be an issue, but I would like to know whether there is a limit on the number of hosts that can be listed in a rule.

Thank you.



  • Hello Blood,

    I couldn't find any documentation or information regarding any limits. So, I am fairly certain there aren't any.

    As for your second question, if you are worried about performance, deploy the policy to a couple of computers and monitor behavior to ensure things work as expected. 

    Regards,

    Barb@Sophos
    Community Support Engineer | Sophos Technical Support
    Knowledge Base  |  @SophosSupport  | Sign up for SMS Alerts
    If a post solves your question use the 'This helped me' link.


  • Hi, Barb

    Thanks for replying to my question.

    That's great. 

  • Hello Blood,

    I am fairly certain there aren't any [limits]
    I beg to differ slightly with Barb (no offence meant): there probably are limits, but normally one won't hit them. Just academic nit-picking [;)].

    @Blood: How often is this Global rule triggered (I assume you've made it a high-priority rule, haven't you)? Just curious, because personally I'd not block newly discovered hosts with SCF, for several reasons. You have to add these discovered hosts manually (what's your source, BTW?), and there's a not insignificant latency - it might already be too late, or the host might already be a late host. Today's network firewalls can process feeds of rogue hosts automatically (but, well, they aren't free). Web Protection/Web Control might also already have the latest intelligence. Admittedly, you could be the one who detects or suspects such a host - but even then I'd rather tell the network firewall to block the address (or name - please note that SEC/SCF resolves the name when you add it to the rule; AFAIK the client's SCF doesn't re-resolve the name but uses the IP).

    Christian 
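The resolve-once behaviour described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Sophos code: `snapshot_rule` stands in for what the console does when a host name is added to a rule, using the standard-library `socket.gethostbyname` call.

```python
# Hedged sketch: why a hostname-based firewall rule can go stale.
# The console resolves each name ONCE at rule-creation time; the client
# then matches on the stored IP and (AFAIK) never re-resolves the name.
# snapshot_rule is a hypothetical name, not a Sophos API.
import socket

def snapshot_rule(hostnames):
    """Resolve each host name once, as the console does when the rule is saved."""
    rule = {}
    for name in hostnames:
        try:
            rule[name] = socket.gethostbyname(name)  # stored IP, never refreshed
        except socket.gaierror:
            rule[name] = None  # unresolvable names contribute nothing to the rule
    return rule

rule = snapshot_rule(["localhost"])
print(rule)  # e.g. {'localhost': '127.0.0.1'}
# If the attacker later re-points the domain to a new IP, the stored
# snapshot still blocks only the old address - hence the latency concern.
```

The practical consequence is the one raised above: a list of malicious domains entered by hand is effectively a list of the IPs they resolved to on the day you added them.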

  • Hi, Christian.

    Yes, I was thinking about these questions as well and they are good points.

    The main trigger for this is the fact that I work for a small (i.e. not rich) charity, and one of my biggest concerns is ransomware. I was reading an article from SecurityWeek that showcased a couple of guys whose network (or a network they managed on behalf of someone else) was breached by ransomware. They looked into user behaviour and discovered that the majority of the ransomware infections linked back to sites hosting exploit kits. https://www.securityweek.com/taken-ransomware-certain-skills-required

    So, that's my starting point.

    I would love to be able to set up a hardware firewall / next-gen firewall and ask the company to pay for a subscription to a reputable real-time block list, but they don't have that sort of cash, so I'm doing the next best thing. Our staff undergo security awareness training (which has paid off several times already). Coupled with Sophos Endpoint Protection and Cybereason's Ransomfree, I hope to cover as much as possible.