This discussion has been locked.
You can no longer post new replies to this discussion. If you have a question you can start a new discussion

Web Filter "Skip Transparent Mode Source Hosts/Nets" ignored

 Hello,

I am using UTM 9.503-4 in a home environment. I would like some internal hosts to be bypassed in Web Filtering, so I have added them to the "Skip Transparent Mode Source Hosts/Nets" list and checked "Allow HTTP/S traffic for listed hosts/nets". However, the policy helpdesk shows that the bypassed clients are still being filtered, and many sites using SSL work erratically or not at all. Turning off Web Filtering completely usually resolves the issue and allows traffic to traverse using the MASQ and firewall rules.
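For context, on a generic Linux firewall a transparent proxy with a source skip list is typically built from NAT rules along these lines (a hypothetical sketch of the general technique, not the UTM's actual generated rules; the client IP and proxy port are placeholders):

```shell
# Bypassed host: accept in the NAT table before the redirect fires,
# so its traffic is never sent to the proxy listener
iptables -t nat -A PREROUTING -s 192.168.1.50 -p tcp --dport 80  -j ACCEPT
iptables -t nat -A PREROUTING -s 192.168.1.50 -p tcp --dport 443 -j ACCEPT

# Everyone else: intercepted and redirected to the local proxy
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8080
```

If the skip rules end up below the redirect rules, or stale conntrack entries keep mapping existing flows to the proxy, a "bypassed" host is still intercepted.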

My main reason for using Web Filtering is quotas on YouTube and Netflix, but neither of these works, as the quota never cuts off the connection once it is established (probably because I am not proxying SSL, as that breaks too many sites). So I am then limited to using time ranges, in which case there doesn't seem to be any advantage to using Web Filtering - I can just use regular L3 rules and time ranges. Is it futile to use the Web Filter in transparent mode without a trusted SSL cert on the clients, since most traffic is SSL these days?

I was hoping for something like the Palo Alto level of application awareness and control, but I guess that's not going to happen for free.



  • Some further information on this issue. Each time Web Filtering gets to the point where sites are not working correctly, even for bypassed clients, the only quick solution is to change the IP on one of the clients. Changing the client IP, either manually or by editing the static IP assignment on the FW and then renewing the client lease, lets broken sites work immediately - most commonly Facebook, Flickr and GMail. The symptom is pages that never finish loading and time out. This leads me to suspect the FW is not properly handling the MASQ or internal client IPs when web filtering is turned off, but is instead still incorrectly redirecting the requests to the proxy (which is now off).
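If stale connection tracking is the cause, flushing the tracked entries for the affected client should have the same effect as changing its IP. On a generic netfilter firewall (assuming the conntrack userspace tool is installed; the client IP is a placeholder) that would be:

```shell
# List tracked flows originating from the affected client
conntrack -L -s 192.168.1.50

# Delete them so new connections are re-evaluated against the current rules
conntrack -D -s 192.168.1.50
```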

    I think I've pretty much determined the Web Filtering isn't going to offer me any extra value over L3 rules with time ranges, which is a shame, as I'm now basically back to the function of an iptables/ipchains firewall from 15+ years ago.

    If anyone has a way to use quotas that work for streaming media content, and still allow me to properly bypass some clients entirely, I'd love to hear it.

     

  • Did you configure web filtering as transparent? If you have it configured as standard, the skip transparent source setting will have no effect.


    Managing several Sophos firewalls both at work and at some home locations, dedicated to continuously improving IT security and happy to help others with their IT-security challenges.

  • Yes, the Web Filter was set to transparent. I found this note in the docs under the SNAT section which may shed some light on what was happening. 

    "Note – You have to add the SNAT rules before you activate the Web Filter. Sophos UTM priorities Web Filter settings higher than SNAT rules. If you select a SNAT rule while the Web Filter is activated the rule may not work. You can activate or deactivate the Web Filter on the Web Protection > Web Filtering > Global page."

    Even removing all MASQ and Web Filter settings was not enough to restore normal function for the affected clients; a restart of the firewall was required. Yes, I could log in to the shell and run tcpdump to see what was going on, but that defeats the purpose of using the UTM for me. I spend too much time on this stuff at work and just want something simple for home. I ran Untangle (at home) for several years with fewer issues, but I was hoping the Sophos UTM would add a few features, like quotas, that I would like.

  • Hi, Shawn, and welcome to the UTM Community!

    You already know a lot about firewalls, web filters, etc., but are not yet proficient in using and understanding WebAdmin, the configuration daemon and middleware.  Your questions assume that the WebAdmin methodology will be like others that you already know, and you ask how to make UTM work similarly.

    Instead, why not tell us what you want to accomplish.  I bet you'll find good answers.

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Thanks Bob. I thought I explained that in my original post, and I have spent significant time searching for solutions before coming here, and most posts suggest that what I would like to do is not possible with this UTM. But here goes:

    1. I would like to enforce time quotas for certain types of traffic, for certain users or devices (I can deal with how the user or device is determined using IPs, VLANs, etc.) The enforcement needs to take place without end user interaction, as some devices may not allow for this, such as media players or game consoles.

    e.g., 2 hours per day for Netflix, or for streaming video in general, for a user, device, or group thereof.

    2. The ability to reliably exclude certain source users or devices from Web Filtering, allowing this traffic to be subject only to MASQ, FW and QOS rules.

    In the absence of support for a completely automated time quota, some alternatives might be acceptable:

    a) providing a client portal where a user can activate a quota for their devices which do not support user interaction. The user could, for example, sign in to their portal and activate 2 hours of streaming-video time for the game console assigned to their user group, and the UTM would then begin a two-hour access window for that device, cutting it off at the end of the period.

    b) use a QOS rule to allow a specific amount of data per day to be used for streaming video by a device, after which severe throttling would be applied. I believe this one is possible, and did do some testing, which indicated it should work (although classification of traffic may be an issue). However, because some of the traffic I would like to quota is not consistent or intense in bandwidth use, this is a much less desirable option. 
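On a plain Linux firewall I would approximate option (b) with the netfilter quota match plus traffic shaping: the rule matches until the byte budget is spent, after which traffic falls through to a heavily throttled class. A hypothetical sketch (device IP, byte budget, interface, and rates are all placeholders, and this is not how the UTM implements its shaping):

```shell
# Up to ~2 GB from the media device passes unmarked (full speed)
iptables -t mangle -A FORWARD -s 192.168.1.60 -m quota --quota 2000000000 -j RETURN

# Traffic beyond the budget gets a firewall mark...
iptables -t mangle -A FORWARD -s 192.168.1.60 -j MARK --set-mark 9

# ...which tc shapes down to 8 kbit/s on the WAN interface
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
tc class add dev eth0 parent 1: classid 1:9  htb rate 8kbit ceil 8kbit
tc filter add dev eth0 parent 1: protocol ip handle 9 fw flowid 1:9

# Note: the quota counter only resets when the rule is reloaded,
# so a daily cron reload would be needed for a per-day budget
```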

    One of my main frustrations at this point is that testing this configuration has left me with broken clients, and since one of these is my primary PC, it is affecting my productivity. The symptoms I am seeing are these:

    i) clients (Windows 10 mainly, but sometimes Android) fail to fully load certain sites, such as Google Drive, GMail, Facebook, or Flickr.

    ii) clients will sometimes work with problem sites, but usually with degraded performance. 

    iii) initial connection to a new site, such as following a link from Facebook or GMail, will immediately return a connection reset. Sometimes loading the same link again will work.

    The clients are configured to bypass the Web Filter for HTTP/S, the logs confirm this is happening, and the firewall rules are allowing the outbound traffic. Chrome, Edge, and Firefox have all been tested in normal and incognito modes. The clients are clean of malware, and local security policies on the clients have been ruled out. The issue also affects some non-browser client services, such as Adobe Lightroom's Publish to Facebook or Flickr plugins, which will time out or fail with an error.

    In each case so far, manually assigning the client an IP within the same VLAN, but one which is not bypassing the Web Filter, resolves the issue.

    I have seen this sort of issue with Linux firewalls in years past, caused by conntrack tables filling up or by configuration errors in MASQ/NAT rules. With only ~20 devices behind this box I can't imagine the former is the case, but the latter is possible. If I have to dig too far into things, it defeats the purpose of going with this UTM for me. I've built custom squid proxies and firewalls for campus environments with thousands of users, but those days are long behind me now...
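The conntrack-table theory is at least easy to rule in or out from the shell (standard /proc interfaces on any netfilter-based box):

```shell
# Tracked connections right now, versus the configured ceiling
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# Kernel complaints if the table ever overflowed
dmesg | grep -i conntrack
```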

    btw, I am running the latest UTM release in a VM on ESXi 6.0 with two hardware NICs dedicated to the VM, if that matters.

    Thanks for any assistance or insight you can provide.

  • The question is: is the browser configured to use the proxy (the default port is 8080)? The UTM can skip the host, but the client may still insist on using the proxy.
    Is Endpoint Protection configured, with Web Control enabled?

  • Hello,

    The clients are not configured to use a proxy; the UTM is transparently redirecting their requests to the Web Filter proxy. This is what I need to happen, since some dumb devices are not capable of proxy configuration, nor do I wish to worry about client proxy settings.

    I'm aware of how a transparent proxy works, in theory, and in practice, having built and managed many squid proxies on *nix, including using Cisco WCCP forwarding in Cat6500 switches, but I may be missing some subtle nuance of the UTM configuration. At the moment, the Web Filter proxy is working well, but that is with all clients going through it. The issue that is most important is that I can effectively bypass certain clients and not encounter the connectivity problems I reported in the previous post.

    You mentioned Allowed Target Services, which appears to be the equivalent of Safe Ports in squid, meaning the ports the outbound proxy connection will connect to on behalf of the client. That is simple in standard mode with a client configured to use the proxy, but what is not clear in the docs is how these are interpreted in transparent mode, where it states that only ports 80 and 443 are intercepted. This statement suggests that Allowed Target Services is not applicable in transparent mode: "The disadvantage however is that only HTTP requests can be processed". It means that for non-HTTP/S requests, a firewall rule (and a MASQ entry, in my case) is required, as expected.
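For comparison, the squid mechanism referred to here is the Safe_ports ACL, which limits the destination ports the proxy will connect to on a client's behalf (abbreviated from the stock squid.conf defaults):

```
acl Safe_ports port 80          # http
acl Safe_ports port 443         # https
acl Safe_ports port 21          # ftp
http_access deny !Safe_ports
```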

    Under Transparent Mode Skiplist, this is stated: "To allow HTTP traffic (without proxy) for these hosts and networks, select the Allow HTTP/S traffic for listed hosts/nets checkbox. If you do not select this checkbox, you must define specific firewall rules for the hosts and networks listed here." Could it be that both checking this box *and* creating specific firewall rules for the skip hosts creates some kind of odd state where some packets are handled by the hidden Web Filter skip-list rule and others by the manual firewall rule? I have not dug into the shell much aside from a few tcpdumps, but maybe I should, to see what these hidden rules look like.
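Dumping the generated rules from the shell would settle it; the redirect and any skip entries should be visible in the NAT table (generic iptables/conntrack commands, not UTM-specific tooling; the client IP is a placeholder):

```shell
# List the NAT PREROUTING chain with packet counters; look for REDIRECT
# rules to the proxy port and whether skiplist ACCEPT rules sit above them
iptables -t nat -L PREROUTING -n -v --line-numbers

# Conntrack shows whether an existing flow from a "bypassed" client is
# still being rewritten to the local proxy port
conntrack -L -s 192.168.1.50 -p tcp
```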

    Thanks.

  • Transparent mode also works with configured clients, for example an outside client. If such a client needs to open a page on, for example, x.x.x.x:4444, you have to put the WebAdmin port in Allowed Target Services.

    My question is whether you are using Endpoint Protection, and where you put the client you want to skip transparent mode - in Destination or Source?
    It is hard to believe this doesn't work.
    Do a simple test: exclude one host, and that host should immediately appear in the firewall rules; otherwise we are missing something.
    Maybe a screenshot would help.

  • For #1, here's an example with Netflix, Shawn.  Once a certain number of KB have been downloaded, throughput drops to 1 Kbps.  I'd have to play with this, as I suspect the Limit should be for destination instead of source:

      

    Is that what you wanted?

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Bob, they messed something up. Doing some tests now for a different purpose, I can't access WebAdmin from the PC that uses the standard proxy, but I do get internet access.

    20:33:15 WebAdmin connection attempt HTTP  
    192.168.1.250 : 57580
    192.168.1.250 : 8444

    192.168.1.250 is the Internal interface of the UTM