
Sophos proxy internet access / firewall rules

Hello,

I have a question regarding Sophos proxy internet access / firewall rules.

I have my VPN Router on eth2, in a DMZ. The Router connects to the Internet as an OpenVPN client.

Eth2 DMZ Config:

10.0.0.1/24

Default GW: 10.0.0.3 (OpenVPN Router)

Multipath Rule:

Internal Network – Any – Internet IPv4 – By Interface – DMZ VPN

When I access the Internet from my LAN devices, I can browse over the OpenVPN Router in the DMZ; this works fine. But my questions are:

  1. The Firewall Rule contains the standard Web Surfing service group with the services:

HTTP 80, HTTPS 443, HTTP Proxy 8080, and HTTP Web Cache 3128. HTTP, HTTPS, and HTTP Proxy are also included under allowed services.

Does this mean that when a LAN client accesses a website, the client connects to the website directly, because HTTP, HTTPS, and HTTP Proxy 8080 are defined as allowed target services? For security reasons, shouldn't the client ask the proxy, and the proxy connect to the website?

  2. Under Network Protection – Advanced I saw the option to activate a Generic Proxy. In which scenario would this be useful?

 

Thanks a lot!

Best Regards

Sally



  • I'm just trying to be a bit more anonymous on the web, leaving as small a footprint as possible to make it harder for search engine operators / big data collectors to collect data about me. I believe this is a personal right, which is now very much supported here in Europe by the GDPR.

    Of course, you do not know whether the data will be shared by the VPN provider, but it's the same as when you leave your house: you lock your front door to protect yourself rather than leaving it open for everyone.

  • I think the big data people are not very dependent on your IP address.   Every time you choose a store to place an order you give them implied information about your location as well as your email address.   I assume that they do not provide your email address to their tracking partners -- not because they care about your privacy, but simply because they consider their customer list to be proprietary information.   So instead they create (or reuse) a token that represents you, and achieve the same result.

    Here is my understanding of why this works:

    • The first site passes something that represents you to the tracking site as it invokes the tracking site as an embedded page element.
    • The tracking site drops a cookie that represents you, which it is allowed to do because it has been referenced in the master web page.
    • The next site you visit uses the same process.   
    • The tracking site has access to all the cookies it created from all participating websites, so it can match the "you" on site 1 to the "you" on sites 2 through N.
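
    A toy sketch may make that matching step concrete (the site names, IDs, and data structures here are all invented for illustration):

        # Toy model of cross-site tracking via a third-party cookie.
        # Everything here is made up; it only illustrates the matching logic.
        tracker_cookies = {}  # the tracker's cookie in each browser
        tracker_db = []       # what the tracking site accumulates server-side

        def visit(site, browser):
            # The site embeds an element from the tracker, so the browser
            # sends (or first receives) the tracker's cookie on every visit.
            uid = tracker_cookies.setdefault(browser, "uid-%d" % (len(tracker_cookies) + 1))
            tracker_db.append((uid, site))

        visit("shop-one.example", browser="my-browser")
        visit("news-site.example", browser="my-browser")
        visit("shop-two.example", browser="my-browser")

        # The tracker can now join "you" across all three sites,
        # with no need for your IP address:
        print([site for uid, site in tracker_db if uid == "uid-1"])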

    Your IP address is part of this data collection, but they are interested in tracking you at home, at work, and on your cell phone.  So I don't think the IP address matters much, and I don't trust the anonymizers.   Servers cost money, so they are either using your data or they are plotting something against you (or your country).   I just don't believe there are any charities out there.

    On the other hand, UTM is actually a pretty powerful tool for fighting the tracking people, but it requires some work.    The general process is:

    • Implement https inspection so you know everything possible about the invisible links that are used in the websites you access.
    • Analyze the logs to separate the sites you recognize from the sites that are not recognizable.
    • Look up the unrecognized FQDNs to identify which ones are tracking sites.
    • Create blocks for the tracking site domains.

    The biggest problem with this is that UTM does not provide good enough tools for parsing the web logs for this purpose.   I parse my logs into a SQL database; a sketch of the idea is below.   An introduction to the process is pinned to the top of the Reporting sub-forum.    My own web log parsing is more complex than the examples in that post, so it is not posted, but I will provide it on request for those who get the basic stuff working.
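
    As a sketch of what that parsing can look like (the sample line and field names below are my guesses at the UTM key="value" log format, not a verbatim entry; adjust them to match your own http.log):

        # Parse UTM-style web filter log lines into SQLite for later analysis.
        import re
        import sqlite3
        from urllib.parse import urlparse

        sample = 'id="0001" severity="info" sub="http" action="pass" url="https://www.example.com/index.html"'

        def parse_fields(line):
            # UTM web filter logs are sequences of key="value" pairs.
            return dict(re.findall(r'(\w+)="([^"]*)"', line))

        con = sqlite3.connect("weblog.db")
        con.execute("CREATE TABLE IF NOT EXISTS hits (host TEXT, action TEXT)")

        fields = parse_fields(sample)
        host = urlparse(fields.get("url", "")).hostname or ""
        con.execute("INSERT INTO hits VALUES (?, ?)", (host, fields.get("action", "")))
        con.commit()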

    All of this may be too intimidating for a home user, but it can work. 

    SQL Express is free, but you will need to be able to write your own SQL queries for analyzing the information once it is in database format. 
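
    To give a taste of the queries involved, here is one against the toy "hits" table from the parsing sketch above (the schema is my own invention): the unfamiliar high-volume hosts are the ones worth looking up.

        # Which hosts appear most often in the parsed web log?
        # Assumes the weblog.db built by the parsing sketch above.
        import sqlite3

        con = sqlite3.connect("weblog.db")
        query = "SELECT host, COUNT(*) AS n FROM hits GROUP BY host ORDER BY n DESC LIMIT 20"
        for host, n in con.execute(query):
            print("%6d  %s" % (n, host))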

  • Follow-up:

    I started using the process I described above, and as a result I block these four domains:

    • adnxs.com
    • demdex.net
    • dotomi.com
    • hotjar.com

    It is easy enough to confirm that a site is a tracker once you have the domain name.   Just google the name, and the browser will give you a summary of the vendor's webpage without the risk of navigating to the page itself.   The real tracking sites are big business, so their websites are safe.

    Eventually I gave up on the process.  All along, my fear was that the blocked sites would have a visible impact on the user experience.    This was aggravated when I learned of one tracking site that is implemented as a content delivery network -- you get their images on your favorite website, they get you in trade.   I don't remember the name, but it "took the wind out of my sails".

    As explained in my Web Filtering Lessons Learned post, the recommended way to block these companies is to create a website exception for the domain (with the checkbox enabled for including subdomains).   Apply a tag to the exception and name it "Tracking Sites Blocked".   Then in your Filter Actions, specify that sites tagged with "Tracking Sites Blocked" are always blocked.

  • I think in my environment it is important to have the UTM configured correctly on a technical level. A good tip from you was to implement HTTPS inspection, but I have not really found a good description of how to set this up correctly so that not so many certificate errors pop up.

    I enabled Decrypt and Scan, regenerated the Signing CA in the Filtering Options under HTTPS CAs, and exported it as a PKCS#12 certificate with a password. Then I imported it under Windows and macOS. I get a lot of SSL errors; some websites work, but many do not. It also makes a difference whether I use Safari, Edge, or Mozilla Firefox.

    Is there a best practice guide for getting the Decrypt and Scan option up and running, also for a home user?

    Thank you
    Sally

  • For Apple, there is an extra step for activating a root certificate.   See this link.

    https://community.sophos.com/kb/en-us/126895

    Firefox does not use the Windows certificate system.   Instead, its certificate store is part of each user's personal profile, which will float between PCs if you use the option to log in.   This may be cute for some purposes, but it is one of several things that make Mozilla products unsuitable for use in a business.   On my version, the click sequence seems to be:

    • Gear icon (or about:preferences)
    • Security section... Certificates sub-section... View Certificates button
    • Authorities tab
    • Import button

    Each logon account needs to repeat this process.

    For Chrome and IE, my guess is that you loaded the wrong certificate or loaded it into the wrong certificate store.

    Wrong certificate?

    • UTM has multiple root certificates, one for Proxy, one for VPN, one for something else.   Make sure you exported the Proxy CA.

    Wrong location?

    • There are certificate stores available for each user, each service, and the whole computer.   You probably want it in the computer context so that it applies to every login account.     
    • Start... Run... MMC.exe
    • Add/Remove Snap-In... Certificates... Computer Account...  Local Computer
    • Look for the Proxy CA certificate in the Personal... Certificates and Intermediate Certification Authorities... Certificates paths.  If found, delete it.
    • Import the certificate again, and choose the option to let Windows choose the certificate store (recommended), or manually choose the Trusted Root Certification Authorities folder.
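
    Once the certificate is in place, one way to confirm what your client actually sees on the wire is to inspect the issuer of the certificate presented for any HTTPS site: with Decrypt and Scan working, it should be your Proxy CA rather than a public authority.   A short sketch (the hostname is just a placeholder, and the "cryptography" package is assumed to be installed):

        # Print the issuer of the certificate a TLS server (or the proxy
        # impersonating it) presents. Verification is disabled on purpose:
        # we only want to inspect the chain, not validate it.
        import socket
        import ssl
        from cryptography import x509

        HOST = "www.example.com"  # placeholder; use any HTTPS site

        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE

        with socket.create_connection((HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                der = tls.getpeercert(binary_form=True)

        cert = x509.load_der_x509_certificate(der)
        print("Issuer:", cert.issuer.rfc4514_string())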

    To make things worse...

    UTM exports the root certificate with the private key included, which is why it insists on a password.   This is bad, because it makes every device where it is installed into a potential certificate issuer.   Exploiting the problem would be tricky, but it is a risk that should be mitigated.   The root certificate does not need the private key to do what it needs to do on the client devices.   Sophos should provide a button to export without the key; sometimes they do not seem to appreciate the need to configure things to be "secure by default".

    To remedy the problem:

    • Once you have it loaded successfully on a Windows PC, right-click on the certificate and choose Export.
    • Follow the steps in the wizard, but uncheck the option for exporting the private key.   Also uncheck the option about deleting the certificate when export is complete.
    • Export the certificate to a new filename.  You will not need a password.
    • Remove the original certificate from your desktop devices, then import the key-free version of the certificate back into them.
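
    If you would rather strip the key before the file ever touches a client, a short script can also re-save just the certificate from the PKCS#12 export (the filenames and password below are placeholders; the "cryptography" package is assumed):

        # Extract only the certificate from a UTM PKCS#12 export,
        # leaving the private key behind.
        from cryptography.hazmat.primitives.serialization import Encoding, pkcs12

        with open("proxy-ca.p12", "rb") as f:
            key, cert, extra = pkcs12.load_key_and_certificates(f.read(), b"password")

        # A bare DER certificate (.cer) imports on Windows and macOS
        # without prompting for a password.
        with open("proxy-ca.cer", "wb") as f:
            f.write(cert.public_bytes(Encoding.DER))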