Fixing UTM, Topic #3, The Difficulty of Blocking Hostile IPs

There are several common problems created by the lack of integration between the proxies and the firewall rules.  Here is one of them. 

Suppose you use the logs to determine that some remote IP addresses are actively probing or attacking your network, so you want to block all traffic to and from them.  You cleverly create a network group called "Hostile IPs" for this purpose.   In how many places does it need to be configured?

  • Firewall rules:   
    From Any to Hostile IPs, any port, BLOCK
    From Hostile IPs to ANY, any port, BLOCK
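For comparison, on a plain Linux box the same pair of BLOCK rules could be sketched with iptables plus an ipset (the set name "hostile_ips" and the sample range are illustrative assumptions, not UTM syntax; requires root):

```shell
# Create a set holding the hostile addresses (name is illustrative)
ipset create hostile_ips hash:net
ipset add hostile_ips 198.51.100.0/24

# Drop traffic in both directions, mirroring the two BLOCK rules above
iptables -I FORWARD -m set --match-set hostile_ips src -j DROP
iptables -I FORWARD -m set --match-set hostile_ips dst -j DROP
```

With an ipset, new hostile addresses are added to the set in one place and both rules pick them up automatically.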

  • Standard Web Proxy:
    Need to modify EVERY Filter Action to block by both DNS name and IP, using the website block list by RegEx, the website block list by Domain, or tags.  Since a tag can be created for the name and IP in one step, tags are probably easiest.
    You cannot use the "Hostile IPs" Network Group at all.   
    You cannot specify a network range, but you might be able to approximate an IP range using regular expressions.   
    It will be challenging to keep these entries synchronized over time with the Hostile IPs list, because the configuration must be replicated to any new Filter Actions created in the future. 
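As a sketch of the regex workaround mentioned above, a /24 can be approximated like this (the addresses and pattern are illustrative, not UTM block-list syntax; note the character class also admits invalid octets such as .999):

```shell
# Approximate 198.51.100.0/24 with a regular expression (illustrative only)
RANGE_RE='^198\.51\.100\.[0-9]{1,3}$'

echo "198.51.100.57" | grep -qE "$RANGE_RE" && echo "blocked"   # in range
echo "198.51.101.57" | grep -qE "$RANGE_RE" || echo "allowed"   # out of range
```

Ranges that do not fall on an octet boundary (e.g. a /28) need alternation groups and get ugly fast, which is part of why this approach is hard to maintain.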

  • Transparent Web Proxy:
    Option 1:   Use the same mechanism as for Standard Web.
    Option 2 (easier): Add the "Hostile IPs" object to the Transparent Mode Skip list.  This causes the packet to be handled by the firewall rules, where it will be blocked by the rules created above.

  • FTP Proxy (Standard/Transparent/Both):
    Add the "Hostile IPs" object to the (FTP) Transparent Mode Skip List.  Do NOT check the box to allow FTP traffic.   Packets will then be handled by the firewall, where they will be blocked.

  • EMail Proxy:
    Under the Relaying tab, add the "Hostile IPs" object to the Blocked Hosts/Networks list.

  • Generic Proxy:
    If used for outbound traffic, no change is needed, because the proxy connects only to a designated list of hosts.
    If used for inbound traffic, "Allowed Networks" will need to be reworked to punch a hole in the network ranges so that the Hostile IPs are not allowed.

  • Socks Proxy: 
    Not appropriate for inbound traffic.
    Not restrictable for outbound traffic.

  • WAF
    On EVERY Site Path Routing object, enable "Access Control", and add the "Hostile IPs" object to the Denied list. 
    Needs to be replicated to any new Site Path Routing objects that are created in the future.

  • VPN (Client and Site-to-Site)
    Firewall rules will apply.

This is a simple task in every other firewall product.   It could be and should be in UTM as well.

  • What if you created a static route for these hostile IPs (a gateway route to 240.0.0.0 for IPv4, or to 100:: for IPv6)?  240.0.0.0/4 is reserved address space in IPv4, and 100::/64 is the IPv6 discard-only prefix.

    Maybe a simple blackhole route would also work, but I am not sure whether it does.

    If your UTM doesn't know how to route back to the hostile IPs, then most likely no traffic is possible.
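As a sketch of this idea on a plain Linux shell (roughly what UTM runs underneath; the hostile addresses and the interface name are placeholders, and "onlink" is needed because the bogus gateway is not actually reachable):

```shell
# Route a hostile IPv4 address toward reserved space (240.0.0.0/4)
ip route add 203.0.113.5/32 via 240.0.0.1 dev eth0 onlink

# Route a hostile IPv6 address toward the discard-only prefix (100::/64)
ip -6 route add 2001:db8::bad/128 via 100::1 dev eth0 onlink

# Alternatively, a blackhole route drops matching packets directly,
# with no bogus gateway at all
ip route add blackhole 203.0.113.5/32
```

The blackhole form answers the "not sure if it really does" question on Linux at least: the kernel silently discards packets to a blackhole destination.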

  • In reply to apijnappels:

    DNAT to DeadEnd is the solution normally proposed, but one needs a unique and non-existent destination for each source, which creates a problem if the number of blocked sources becomes non-trivial.   Your suggestion of a static routing rule sounds like it might be a great approach, as it could be used for any number of hostile addresses, and would prevent replies from being sent.   Is it unreasonable to think that this functionality should have been considered by Astaro or Sophos or both, and then documented?   Given the awkwardness of these solutions, they should have been treated as stopgaps until the design could be improved.

    The current attitude seems to be "when system managers discover an unexpected vulnerability, we will be happy to tell them which gimmicks to use to plug the vulnerability.   Since we have a sufficient set of gimmicks, nothing needs to be changed."   That seems to have been inherited from Astaro, so I am not one who thinks the old Astaro days were somehow better days.

    Maybe the problem is that the internals are so bad that nothing CAN be changed, which would leave customers with these options:

    • Switch to XG Firewall
    • Use UTM behind a real firewall
    • Use UTM as a firewall, after learning all of its surprises.
    • Use UTM as a firewall, without learning all of its surprises, and be surprised when your protection is breached.

    I am hoping and believing that a fix for UTM is not impossible, just suffering from a lack of imagination, which I am trying to fix.   

    If the proxy entry points are configured as firewall objects, with the ability to place firewall rules before and between those objects, all of the problems can be solved.   

    UTM seems to dominate XG Firewall on issues of logging and web protection, which are very important to me, so I am reluctant to switch.   On the other hand, XG Firewall is really a firewall. 

     

  • In reply to DouglasFoster:

    Hi,

    actually, that's quite easy:

    Create a network definition group "attackers". Put all hostile IPs there.

    Create a DNAT rule for this "attackers" group with target "blackhole", which is one of the possible blackhole IP addresses.

    That's it.

    BR,

    HP
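For illustration, a rough iptables equivalent of this recipe might look as follows (the attacker address and the dead-end target are assumptions; UTM generates rules of this general shape from the WebAdmin DNAT form):

```shell
# Rewrite the destination of any packet from an attacker to a dead-end
# address in reserved space, so no real host ever answers
iptables -t nat -A PREROUTING -s 203.0.113.5 -j DNAT --to-destination 240.0.0.1
```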

  • In reply to HanspeterHolzer:

    I knew about the DNAT gimmick, but had written it off as requiring multiple destinations.   So your reply left me wondering where I became confused.  Overnight, I grasped the answer:

     

    A DNAT rule requires a single-IP-to-single-IP mapping for the destination, so the rule could be implemented in two ways:

    <any> to <Hostile IP #n> becomes <any> to <Dead End #n>

    which requires a unique dead end for each hostile address.   This is what I was thinking at the start, but for at least IPv4, the better approach is yours:

    <Any Hostile IP> to <My Public IP #n> becomes <Any Hostile IP> to <Dead End #n>

    If you have only one public IP, then you only need one rule.   Even if you have multiple IPs, the number is finite, so it is the right approach.
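The two forms can be contrasted in iptables terms (all addresses are illustrative placeholders): the first needs a dead end per hostile address, the second only one rule per public IP.

```shell
# Form 1: one rule and one dead end per hostile destination (scales badly)
iptables -t nat -A PREROUTING -d 203.0.113.5 -j DNAT --to-destination 240.0.0.1

# Form 2: one rule per public IP, any hostile source, one shared dead end
ipset create hostile_ips hash:net
ipset add hostile_ips 203.0.113.0/24
iptables -t nat -A PREROUTING -m set --match-set hostile_ips src \
  -d 198.51.100.10 -j DNAT --to-destination 240.0.0.1
```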

     

    I wonder if an IPv6 user can comment on whether this works.   I have the impression that with IPv6, every internal MAC address becomes a unique external IPv6 address.  If true, then the whole approach breaks down.

    What is the right choice for the dead end address?

    • I don't think loopback addresses work because UTM responds to pings on all of the 127.0.0.x addresses that I have tested.
    • In a routed internal network, you don't want to cause packet routing or even path searching, so the dead-end address should be on an adjacent LAN segment.
    • On the local segment, ARP overhead might be an issue but is probably insignificant.   However, address scarcity might become an issue if there are a lot of public IPs.
    • There is a need to document that the dead-end address can never be used for anything else.
    • Best solution is probably to reserve a subnet for this purpose, bind it to an unused interface, and then allocate addresses from that subnet.   Assuming that you have an unused interface.
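On a plain Linux box, reserving a dead-end subnet on an otherwise unused interface might look like this sketch (the dummy interface and the RFC 5737 test subnet are assumptions; UTM's WebAdmin does not expose dummy interfaces directly):

```shell
# Create a dummy interface and bind a reserved test subnet to it
ip link add deadend0 type dummy
ip addr add 192.0.2.1/24 dev deadend0   # RFC 5737 TEST-NET-1, never routed
ip link set deadend0 up
# Dead-end targets can now be drawn from 192.0.2.2 - 192.0.2.254
```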

    Does this fit your understanding?   I have never seen an explanation of how to pick the dead-end address.

     

    Then there is the documentation problem.   How did you learn about the DNAT gimmick?   Based on my experience, and the unhappy posts that I have seen here or on the Ideas site, it appears that this is normal:

    • System administrator discovers an unexpected hole in his security posture.
    • He raises an issue with Sophos Support or a question in this forum.
    • He learns how to use DNAT to plug the hole in his security.

    How and when do we move to a posture where the information needed to configure the product securely is learned before the security holes are created?

     

    Is it really acceptable to use a DNAT gimmick when a DENY rule is what is appropriate and intuitive?   Remember that in business, the configuration needs to be understood by all of the assigned system administrators, both the ones on payroll now and the ones hired in the future.

  • In reply to DouglasFoster:

    I should add to the scenario of the shocked system manager.   The full sequence is:

    • System administrator discovers an unexpected hole in his security posture.
    • He raises an issue with Sophos Support or a question in this forum.
    • He learns how to use DNAT to plug the hole in his security.
    • He says, "OK, but you will document this as a bug and fix it?"
    • Sophos says, "No, because it is not a bug, it is a feature!"

    Bah.  Humbug.

  • Hey Douglas,

    Thank you for taking the time to write this very detailed and valuable series. You've provided great points and ideas for our development team. 

    I'll go ahead and tag some members of the UTM team so they can have a read.

    Appreciate the feedback.

      

    Cheers,
    Karlos

  • In reply to Karlos:

    Thank you for a gracious response.   I don't want to make the product harder to sell by complaining in public, but I do want to move the needle away from the status quo.   Feel free to contact me by email or phone if you need follow-up.

  • In reply to DouglasFoster:

    DouglasFoster
    What is the right choice for the dead end address?

    There are two good choices for "dead-end" addresses: one for IPv4 (240.0.0.0/4, which is reserved for future use and will most likely never be allocated) and one for IPv6 (100::/64, which is specifically reserved for discarding traffic).  See this link.

  • In reply to apijnappels:

    Good info on dead-end addresses.  Thank you.

  • I don't know, but one IP pissed me off every day last month on port 22 (to be precise, 46.17.96.12).  Maybe the owner is in this forum.
    What did I do?

    I created a DNAT rule:
    From Any, using (TCP/UDP 22), change destination to 46.17.96.12

    And I masquerade ANY network to my WAN (to get the DNAT to work).
    So, everyone who tries my port 22 will be sent to this NL IP.
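For illustration, the poster's two rules might be sketched in iptables as follows (the interface name is an assumption; note this redirects ALL inbound port-22 traffic, legitimate or not, back toward 46.17.96.12):

```shell
# Redirect every inbound SSH connection attempt to the attacker's own address
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
  -j DNAT --to-destination 46.17.96.12
# Masquerade so the redirected packets leave with the WAN source address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```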