Web Filtering Transparent Mode - Best Practice

We have an SG310 and we have implemented Transparent Mode web filtering. When everything works well, users on the network can browse to safe websites. But occasionally the Sophos will block them from visiting a site that they have used before (and that we have deemed safe). As a workaround, we tell the user to browse to any non-https site and then try again, which usually fixes the problem.

How/Why does this work as a quick fix? What's happening when the user goes to an http site (vs https)?

Is there a more elegant way to resolve these occasional authentication issues? And while we are on this topic, it looks like this just does not work for our Mac users. The Mac users end up enabling Wi-Fi on their Macs so that they can browse the Internet. Having that turned on in addition to the Ethernet on the corporate LAN causes weirdness.

Just wondering what's best practice for implementing Transparent Mode.

We tried using the standard proxy mode, but this quickly became a headache because every software app on the network that needed internet access had to be configured to point to the proxy server (for example, UPS WorldShip or FedEx Ship Manager). This, plus our corporate policy that forces users to change their passwords every 90 days, caused a lot of IT headaches. So I guess I am looking for the best of both worlds: a proxy, but without the manual work of configuring one, and without the glitches of the transparent proxy.



  • Start with my post, which is pinned to the top of the web filtering sub-forum: Optimizing web proxy – Lessons Learned

    Short version:

    Do not consider Standard and Transparent modes to be mutually exclusive.  Instead, use both.

    Transparent mode has fundamental limitations, which are documented, and which you have discovered.  It cannot perform transparent user authentication for https traffic, because the user information is in the encrypted part of the packet.  As a workaround, it assumes the username associated with the last known http request from the same source IP is the username for the https request.   If there is no previous http request, the request is handled as unauthenticated, and apparently you have configured a rule to block unauthenticated traffic.
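
    To make that workaround concrete, here is a rough sketch of the fallback logic (illustrative pseudocode only, not UTM's actual implementation; the IPs and usernames are invented).  An http request populates an IP-to-username table, and a later https request from the same source IP reuses that entry; with no prior http request the lookup comes back empty, and a block-unauthenticated rule then fires.  That is why sending the user to any non-https site first "fixes" the block.

    ```python
    # Sketch of transparent-mode authentication fallback (not UTM code).
    last_http_user = {}  # source IP -> username seen on the most recent http request

    def record_http_request(src_ip, username):
        """http requests carry the username in the clear, so remember it."""
        last_http_user[src_ip] = username

    def user_for_https_request(src_ip):
        """https requests hide the username, so reuse the last http mapping."""
        return last_http_user.get(src_ip)  # None -> treated as unauthenticated

    # A client that has never made an http request is unauthenticated, so a
    # "block unauthenticated traffic" rule blocks it; after one http visit the
    # mapping exists and later https requests are attributed to that user.
    record_http_request("10.0.0.42", "jsmith")
    assert user_for_https_request("10.0.0.42") == "jsmith"   # allowed
    assert user_for_https_request("10.0.0.99") is None       # blocked as unauthenticated
    ```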

    I suggest reactivating Standard Mode with AD SSO authentication, plus Transparent Mode with None authentication, for the same source addresses.   Your browsers will use Standard Mode with AD SSO.   Most fat-client applications will not need any special configuration, because they will use transparent mode by default and will not be asked for authentication.   There are a bunch of hidden operations like automatic updates for Java or Adobe, as well as operating system functions like error reporting, that use web protocols but may not work if authentication is required.

    For best protection, fat-client FTP applications should be manually configured to use the Standard Web Proxy.  I have another document in the Wiki section, written earlier, which discusses web proxy considerations, including the three options for FTP proxy.

    You will still have issues with a few sites and a few fat-client applications, but this should be manageable because you will have eliminated the most common sources of problems.

    Side note:  If an https site is blocked or warned, the CA root certificate needs to be deployed to the client device for the block/warn page to display without certificate errors.   Many users assume that if they are not doing https inspection, they can skip this step.

  • This is great information, thank you. I was not aware that both could be used together.

    As an alternate approach to what you recommended (and just asking because I am curious):

    Currently I have HTTPS Scan set to 'URL Filtering Only'. Would setting it to 'Decrypt and Scan' also help mitigate the issue I am experiencing with transparent mode? In other words, just scan ALL traffic and call it a day. (...or is there a reason why this isn't enabled by default?)

    And finally, regarding the CA root certificate: if I wanted to eliminate certificate errors on the block page, would I have to purchase a trusted certificate, or could I use a self-signed certificate?

  • How it works

    The CA root certificate is something that UTM generates; you cannot purchase one for this purpose.  "Root" means that it is able to generate (sign) certificates for other resources, which is not something a public CA will sell you.  When UTM blocks an attempt to navigate to example.com, the reply has to (appear to) be from example.com, or the browser will ignore the reply.  For http traffic, any device can offer that pretense, because there is no ability to verify identity.  For https traffic, an https session needs to be established before the reply can be sent.  To establish the session without browser warnings, the server (UTM) has to prove its identity as example.com.  To establish this identity, it invents a certificate for example.com, signed by its own CA, and sends that to the client.  With https inspection, UTM pretends to be the remote server on every transaction, not just on the block/warn pages.
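
    As a rough illustration of that minting step (not UTM's code; UTM does all of this internally, and the names below are invented), the following Python sketch uses the cryptography library to create a self-signed root CA and then invent a leaf certificate for example.com signed by that CA.  The browser accepts the invented certificate only if the root CA certificate has been installed as trusted on the client, which is exactly why the root certificate has to be deployed.

    ```python
    # Illustrative sketch only: how a proxy-style CA invents a certificate
    # for a site it wants to answer for.  Requires the "cryptography" package.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    now = datetime.datetime.utcnow()

    # 1. The proxy's own root CA: a self-signed certificate that can sign others.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Proxy CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name)
        .issuer_name(ca_name)                      # self-signed root
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # 2. On a block page (or on every request with https inspection), mint a
    #    leaf certificate for the requested name, signed by the proxy's CA.
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
        .issuer_name(ca_name)                      # signed by the proxy CA, not a public CA
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=7))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )

    # The client trusts leaf_cert only if ca_cert is in its trusted root store.
    ```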

    Https inspection will cause a few more navigation problems, because it introduces another level of complexity, and some websites will not work.

    Https inspection also ensures that certificates are trusted and that the encryption meets whatever is configured as the UTM standard.   This can be an advantage, but it will also hinder access to poorly-secured sites.

    Https inspection allows UTM to see and log every web request, so you can see whether a user is hanging out on facebook or merely stumbling on "like us on Facebook" icons.  Without it, UTM only logs the connection event, and that only includes the FQDN, not the full request.  Because it can see the whole request and reply, it is able to block malware in the reply.

    And yes, https inspection will solve your problems with transparent AD SSO.