
Blocking Ads best practice (bulk updates to URL Groups?)

We use mostly Apple equipment (MacBook, iPhone, iPad), and on our MacBooks we have Little Snitch, an application-aware outbound firewall that blocks attempts to reach advertising sites, trackers, etc. That doesn't help the phones and tablets, though.

We could do something through DNS, perhaps, but it seems like adding the worst offenders to a URL Group to block would work as well. The only problem is: how would you save, modify, or update a URL Group in bulk (i.e. not one at a time through the GUI)?

Or is this not best-practice and we should use a different mechanism (DNS Pi Hole or something similar)?

(I'm thinking I could feed my discoveries to Sophos somehow to get them to reclassify some sites to Advertising, but that feels slow, they might not agree with my classification, and some folks might want to actually visit the sites in question because they use their services.)

P.S. In the Managed TLS Exclusion List there is "ecure.echosign.com" which I imagine is a copy/paste error and should say "secure", but maybe not.



This thread was automatically locked due to age.
  • For your purposes, the web filtering done by the web proxy and by the DPI engine is the same. While the web proxy is older technology, it is still absolutely supported. There are a few things the web proxy can do that DPI cannot; they are all documented in WebAdmin.

    HTTP over UDP is the QUIC protocol - I think we changed the name of the option in WebAdmin at some point. Neither the proxy nor the DPI engine can handle QUIC.

    rfcat is incorrect/misleading when he says that DPI "does not scan web policies completely (block web sites)".

    With your configuration, traffic going through ports 80/443 will use DPI mode. However, if you configure the computer/browser to use a proxy on port 3128, it uses the web proxy.
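    If you want to verify which path a request takes, you can point a client explicitly at the proxy port from a script. A minimal sketch (the firewall address is a placeholder for your XG's LAN IP, and the actual request is left commented out since it needs a live firewall):

```python
import urllib.request

# Hypothetical firewall address; replace with your XG's LAN IP.
FIREWALL = "192.168.1.1"

# Route requests through the XG web proxy on port 3128 so they are
# handled by the proxy engine instead of the transparent DPI path.
proxy = urllib.request.ProxyHandler({
    "http": f"http://{FIREWALL}:3128",
    "https": f"http://{FIREWALL}:3128",
})
opener = urllib.request.build_opener(proxy)

# opener.open("http://example.com")  # would go via the explicit proxy
```

    Requests made through this opener should then show up in the firewall log as proxy traffic rather than DPI traffic.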

  • As the XG does have categorization, it will block a large number of sites automatically. However, it will not know about all sites; categorization is imperfect. Part of the problem is context: when your browser visits randomsite.com it sends all sorts of cookies and headers, and it gets back ads. When our categorizer visits the same URL without any of those details, sometimes it doesn't get ads. And there are domains that we know get used for multiple things.

    AFAIK Little Snitch works on the laptops and stops things before they are sent to the XG, so you won't be able to tell what would have been caught by the XG had the traffic made it there.
    So if you just took the list from Little Snitch and put it in the XG, you might end up duplicating a huge number of entries that are already categorized and blocked.

    One by one, you can use the policy tester to report the category of each URL, so you can see whether it would have been blocked had it made it to the XG.

    Editing URL Groups and Custom Categories can also be done using the XML API, if you want a more programmatic / bulk way of doing things. I don't know the limit on the number of entries in a URL Group; AFAIK for Custom Categories it is 2,000 domains, even using the XML API.
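    To give a feel for the bulk route, here is a sketch that builds an XML API request to update a URL Group. The entity and field names ("WebFilterURLGroup", "URLlist") follow the general pattern of the API but should be checked against the API documentation for your firmware version; the credentials and group name are placeholders:

```python
import xml.etree.ElementTree as ET

def build_urlgroup_request(user, password, group_name, urls):
    """Build an XML API payload that replaces a URL Group's member list.

    Entity/field names here are assumptions based on the API's usual
    pattern; verify them against the XML API reference for your version.
    """
    root = ET.Element("Request")
    login = ET.SubElement(root, "Login")
    ET.SubElement(login, "Username").text = user
    ET.SubElement(login, "Password").text = password
    setel = ET.SubElement(root, "Set", operation="update")
    group = ET.SubElement(setel, "WebFilterURLGroup")
    ET.SubElement(group, "Name").text = group_name
    urllist = ET.SubElement(group, "URLlist")
    for u in urls:
        ET.SubElement(urllist, "URL").text = u
    return ET.tostring(root, encoding="unicode")

payload = build_urlgroup_request(
    "apiadmin", "secret", "Blocked-Ads",
    ["ads.example.com", "tracker.example.net"],
)

# The payload would be sent as the 'reqxml' parameter to the firewall's
# API endpoint, e.g. https://<firewall>:4444/webconsole/APIController
# (the API has to be enabled and the client IP allowed first).
print(payload)
```

    That lets you keep the master list in a text file under version control and push the whole group in one request instead of clicking through the GUI.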

  • Hi Michael,

    My statement is based on the information provided in the XG GUI screens.

    Ian

    XG115W - v20 GA - Home

    XG on VM 8 - v20 GA

    If a post solves your question please use the 'Verify Answer' button.