
UTM 9 version 9.411-3, HTTP/S DROPPED packets are never dropped but are logged as DROPS.

EDITED: BLUF, see Rulz #2. The UTM "services" such as the Web Proxy, WAF, DNS, DHCP, etc. all take precedence over the network firewall rules. If you need to restrict devices from using those ports and protocols, you must do 100% of that configuration in that specific service's section, not in the firewall rule base. The only explanation I can offer for the firewall DROPs I saw logged is that touching the config may have invalidated the stateful-inspection tables, causing packets from now-unknown established sessions to be rejected and logged as dropped.
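For intuition only (generic Linux netfilter, not Sophos UTM's actual internal rules): a transparent proxy is typically implemented as a REDIRECT in the nat table's PREROUTING chain, which is traversed before the filter table's FORWARD chain where user firewall rules live. A minimal sketch, assuming eth0 is the inside interface and 8080 is the proxy's local port (both assumptions):

```shell
# Illustrative iptables sketch (assumptions: eth0 = inside interface,
# local proxy listening on port 8080). Not Sophos UTM's actual rules.

# nat/PREROUTING runs first: divert inbound HTTP to the local proxy.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 8080

# filter/FORWARD runs later, but redirected packets are now destined for
# the local proxy and never traverse FORWARD, so this DROP never matches them.
iptables -A FORWARD -i eth0 -j DROP
```

This ordering is why a DROP in the forwarding rule base cannot stop traffic that a transparent proxy has already intercepted.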

 

I recently saw a host on my local network that seemed to be getting past the firewall rules and I was puzzled.

I have some rules that are set up as DROP rules and others as REJECT rules.

The drop rule was logging as "DROPPED" in RED, but I was still pulling HTTP pages, pinging, tracerouting, etc., which should NOT be happening. The traffic never had any issue getting past this rule, which should be stopping everything! The rule is simple: SOURCE -> (any protocol) -> ANY, DROP, LOG. It is as if nothing is actually blocked.

So I moved the rule to the very top of the rule base (it was at the #5 position before, and all the rules above it were switched off anyway). Same result. I also made sure I was seeing all the rules and not looking at a subset. I don't have any IP ANY ANY ALLOWs above this rule (or anywhere else, for that matter).

I changed the rule from a DROP to a REJECT, and bang: now the rule is doing something different. It seems to be intermittently rejecting, and at other times passing traffic.

I changed the network object in the rule from the Windows VM where I originally detected this activity to another VM, an Ubuntu server with a static configuration. I then did some wget requests for a domain I own that is parked at GoDaddy, and 50% of the time I would get:
2017-03-28 22:58:44 ERROR 503: Service Unavailable.
The other 50% of the time, I would get:
HTTP request sent, awaiting response... 200 OK
And the index.html file would be served up.

Same rule, set to always fire, with no time settings.
I have no load-balancing settings. Just a simple inside interface and outside interface.
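To put a number on that intermittency, a repeat-fetch loop can tally the outcomes. Everything below is a hypothetical helper, not part of the UTM, and the URL is a placeholder for the parked domain:

```shell
#!/bin/sh
# tally: count successes (exit status 0) vs. failures among a list of
# wget exit statuses passed as arguments.
tally() {
  ok=0; fail=0
  for code in "$@"; do
    if [ "$code" -eq 0 ]; then ok=$((ok + 1)); else fail=$((fail + 1)); fi
  done
  echo "ok=$ok fail=$fail"
}

# In practice, collect real exit statuses from repeated fetches, e.g.:
#   codes=""
#   for i in $(seq 1 10); do
#     wget -q -O /dev/null "http://your-parked-domain.example/"  # placeholder URL
#     codes="$codes $?"
#     sleep 1
#   done
#   tally $codes
```

wget exits non-zero on a 503, so a roughly even ok/fail split would match the 50/50 behavior described above.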

At this point, I don't trust this firewall to be dropping packets at all. I am concerned that perhaps the firewall has been compromised in some fashion.

Has anyone else seen this, where a simple rule to DROP traffic shows in the LIVE LOG as "DROPPED" but the traffic is in fact passed?



  • ummmm..... serious stuff as you say.

    1. Have you checked to see if there are any automatic rules applied? They generally sit above any manual rules.

    2. Are you using the web proxy as well as additional rules?

  • I just installed a new UTM-9 VM from ISO which I just downloaded. I configured the VM as a 64-bit appliance, preserved MAC addresses and then restored configuration backup.

    Once the UTM established a connection to the world, it downloaded around 500 MB of data, which appears to be a content-filtering database. During this time NO TRAFFIC was permitted past the UTM 9. Once the download finished, I saw the same behavior as before.

    A host at the top of my rule base with a DENY or REJECT on ANY protocol and ANY destination is intermittently being PASSED through the IP netfilter.

  • 1) No automatic rules are present. (I had a DNAT rule there previously, but during troubleshooting it was the first thing I turned off and I didn't really need it active right now)

    2) Yes, I am using the web proxy in transparent mode with no authentication. 

    I am NOT using a parent proxy (chaining).

    I have a standard default base "block all content" web policy, and a policy above it that blocks all sorts of stuff.

    I am able to ping as well, which shouldn't involve the webfilter/proxy.

    In my config I am using country blocking and also application filtering.

    I am going to trim the rule base down to see what happens, start turning features off, etc.

     

    Feeling very exposed right now.

  • Okay, so after turning off features one by one... I got to the WebFilter.

    When I switched the webfilter OFF, the expected behavior occurred on the test Ubuntu VM: it was unable to connect on ports 80/443, 100% of the time.

    I was under the impression the installation wizard rule "Web Surfing" was required to permit HTTP/HTTPS from localnet to global via the webfilter/proxy.

    I would have thought the DENY HTTP/HTTPS rule at the top of the rule base would take precedence over that. Apparently not!

    In addition, ICMP was enabled on the global tab, so ICMP was getting through regardless of the DENY rule.

     

    In the past, I had a group of VMs that I wanted isolated. I put a DROP rule at the top of my rule base with those network objects in it. It worked previously; the browser would never get a response. I discovered this issue because I was working on one of the VMs and a message popped up saying an update was available. Huh? That wasn't supposed to be happening.

  • Hi, Carl - first I've seen you here - welcome to the UTM Community!

    I bet #2 in Rulz will help you understand the situation.

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • CarlMankinen said:

    I recently saw a host on my local network that seemed to be getting past the firewall rules and I was puzzled.

    I have some rules that are setup as DROP rules and others as REJECT rules.

    The drop rule was logging as "DROPPED" in RED, but I was pulling HTTP pages, pinging, traceroute, etc which should NOT be happening. With NEVER any issue getting past this rule which should be stopping all traffic! The rule is simple, SOURCE->(any protocol)->ANY, DROP, LOG.  It is like nothing is actually blocked.

     

    Oh, there are so many things wrong with this thread, starting with the title, but I will keep my mouth shut [:#]
  • The above is a common mistake (or gotcha) with the UTM, hence my 2nd question. It's different from other firewalls, and you need to examine the differences.

    I've done this myself, adding firewall rules for web browsing, DNS, ping, etc., and then realising that you don't need any of them if you enable the other features.

    Once your interfaces are set up, along with the DNS proxy and the web proxy, you will find you don't need any firewall rules to browse the web.

     

    Same with the SMTP proxy: there is no need to set a traditional DNAT rule to allow SMTP. Same for the web application firewall; no DNAT is needed for web servers.

  • Hmm, so once you enable transparent web filtering, you essentially allow every device to pass HTTP/HTTPS traffic.

    No way to block those devices by IP address, I suppose, other than the "allowed networks" setting?

     

    Why do I see all the RED DROPPED log entries for them if they are being passed through the proxy? That is what really concerns me the most. 

     

    I don't use the built in DNS/DHCP services. I run those locally and need rules to allow them to pass through the UTM.

    Previously I had these test VMs in a different /25 subnet which wasn't in the "allowed networks" ACL on the web filter.

    I ended up merging that subnet with the /24 that is in the allowed networks.

     

    It seems the only way to get around this (since I want web filtering) is to ditch transparent filtering and force authentication, or go back to a separate subnet for the test VMs.

    I needed the DNAT rule previously because I was also modifying the destination port in the process, and the target wasn't a web server defined in the UTM.

  • Yeah, rule #2 pretty much nails it.

    It makes me think that a standard L3 firewall in front of the UTM might be a good idea. But it's a pain if the overload/hide NAT/PAT is performed at the UTM.

    I have been using this for several years, but I always had a "trusted" and "untrusted" subnet.

    The untrusted subnet wasn't set in the "allowed networks" on any service, so it was being filtered.

    I assumed the HTTP/HTTPS firewall rules were having a part in that because the UTM automagically created a "WebSurfing" rule, etc.

     

    How does it explain the live log showing DROPPED packets, though? (Dropped is what I want, but I'm perplexed why traffic would be PASSED by the proxy yet logged as DROPPED.)

    (I will read through the rest of Rulz, maybe that is rule #4...)

  • No, you can specify subnets, hosts, or whatever you want to access the web filtering. You can specify everything and then add certain hosts to skip it. In this case, the hosts not going through it will indeed need FW rules to allow them through.

    Don't forget that the web proxy is exactly that: a web proxy. It will filter web requests, i.e. ports 80/443. It can be a complicated beast as well, but it gives you far more granular control than just opening ports 80/443 and is worth investing time in.

    With regard to DNS, the same applies. We have 6 internal DNS servers which service our network (28 sites, 2 data centres), and they forward to the UTM DNS proxy. They are the only servers allowed to access the proxy. Same with our 4 Exchange servers and the SMTP proxy.
    This way, there are no rules in the FW for SMTP or DNS. They are simply blocked by default, so no matter what a client does on the network, e.g. trying to change their DNS or running a rogue SMTP server, there are only 2 egress points, which we constantly monitor.

    The UTM can be a simple or complicated beast, and I too have been stung by the traditional-setup gotchas you mention, which took us a little while to work out on our first install.

    When the 2nd UTM went in, we worked the opposite way: no FW rules, with the web proxy, DNS proxy, etc. enabled, and then we tightened down from there. Yes, we have hosts that need to skip the proxy, which we simply add to a "skip proxy hosts" group, with the necessary corresponding FW rule created for each. This way, we have good visibility of our web browsing, and our FW rules are kept to a minimum.