
How do you block/allow domains with revolving/rotating or distributed/geo-dependent DNS?

Our system needs to allow outgoing HTTP/S connections to Amazon S3.  In the past, I was able to create a "DNS Group" object that kept track of the 700+ IP addresses associated with "s3.amazonaws.com", but after recent firmware updates the DNS Group object is reduced to just one IP address.

I asked Sophos Support about this behavior, and they responded that the DNS Group object was never designed to track all IPs for a domain name with revolving/rotating or distributed/geo-dependent DNS.

Besides entering all of S3's 300 IP blocks manually as network objects (and updating them as they change), I was wondering whether anyone has a solution that remedies this behavior?

How do you block/allow a domain name that has revolving or distributed DNS?

Cheers!



This thread was automatically locked due to age.
  • Sam, I'm confused.  Is there a reason you're addressing this with anything other than Web Filtering?

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Bob,

    We don't have Web Filtering or any other Endpoint, Enterprise, or Email services running on our UTMs.  We simply use the Web Protection suite for our externally facing web sites, and the Network Protection layer for stateful L3 firewalling of those servers only.

    We are trying to restrict our servers from making outgoing HTTP/S connections except where needed; one example is connecting to S3 to upload documents for CDN cache usage.

    It hadn't occurred to me to try Web Filtering for the servers' outgoing traffic, but even so I need to ask the same question: will it allow me to connect to a domain name that has distributed or revolving DNS resolution while keeping the catch-all DENY ANY that Network Protection provides?

    Cheers

    SAM

  • Am I understanding correctly that there is no way to permit packets being blocked by the firewall based on a regex URL? For example,

    Default DROP TCP 192.168.0.143:5601 → 54.91.149.109:2000

    Where 54.91.149.109 is *.amazonaws.com

  • REGEX only works for URLs in Web Protection.  In Network Protection, that's not possible.  In part, it's because of the way DNS works around the world.

    What does that packet represent?

    Cheers - Bob

     
  • To verify, it is not possible to create a firewall rule that accomplishes:

    Permit from local network host X, source port 1:65535, destination port 9445, destination host www.rainforestcloud.com

    where www.rainforestcloud.com is an AWS host with a variable IP address?

    Thx

  • I used http://centralops.net/co/DomainDossier.aspx to see that www.rainforestcloud.com has two IPs, and the name server switches the order every few seconds.  Instead of using a DNS Host definition, try your same firewall rule with a DNS Group definition using www.rainforestcloud.com.

    Cheers - Bob

     
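    The rotation Bob observed with Domain Dossier can also be classified programmatically.  This is a sketch (a hypothetical helper, not a Sophos tool) that takes repeated DNS answers and distinguishes a stable answer, a rotating order over a fixed set (the www.rainforestcloud.com case), and a changing set (the s3.amazonaws.com case):

```python
def classify_dns_answers(answers):
    """Classify repeated DNS answers (each a list of IP strings in the
    order returned): 'stable' if nothing changes, 'rotating order' if
    the same set comes back in varying order, 'changing set' otherwise."""
    orders = {tuple(a) for a in answers}        # distinct orderings seen
    sets = {frozenset(a) for a in answers}      # distinct answer sets seen
    if len(orders) == 1:
        return "stable"
    return "rotating order" if len(sets) == 1 else "changing set"
```

    Feeding it answers gathered from repeated `dig +short` runs would reproduce what Domain Dossier shows: two A-records swapping order is "rotating order", while S3's behavior shows up as "changing set".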
    My documentation (9.4) says that a DNS Group is intended for one name that has multiple resource records.   That seems different from your situation: one name that returns different results on subsequent queries without ever publishing all of the possibilities.

    You are right that you cannot create a Cisco-style network object with a long list of IPs, but you can create a firewall rule with that long list.   If you need the list again for another firewall rule, you can use the Clone option to avoid typing it a second time.   The real headache for you is that the list has to be typed into the UI rather than being prepared offline and loaded from a text file.

    Web Protection is probably the best part of UTM; you should use it.

    I have found Webserver Protection / WAF difficult to get exactly right (effective but without false positives).   Do you have any insight to share from your experience tuning your WAF configuration?

  • Doug, I think you scanned my answer too quickly - that FQDN has two A-records.  The Amazon name server delivers both every time, but changes the order every few seconds, so a DNS Group is exactly what he needs.

    Cheers - Bob

     
    Thanks for clarifying, Bob.

    Back to Bob's original recommendation:   If you use Standard-Mode web filtering, your server sends the URL to UTM, and UTM does the DNS resolution.   It is entirely possible to create a web filtering policy that says "These specific servers can only send web traffic to these specific remote hosts", which is the result that I think you want.   The allow/block decisions will be made before the DNS lookup occurs, so the 700 or so IP addresses do not matter at all.

    Yes, other protocols will require other strategies, but when you have a Swiss Army Knife, you need to use the tool that is optimized for the problem.

    Yes, there are firewalls that can create a network object list more easily.   You can put a relatively inexpensive firewall in front of UTM and have a good solution.   I don't perceive the firewall subsystem as UTM's best feature.  

    UTM was a winner for me because (a) it was available at a price that I could get approved by management, and (b) it did a lot of useful things for me.   It was a given that it would not be best-in-class on all of its features.   I have frustrations with the product, but its effect on our perimeter defenses has been extraordinarily beneficial.

  • To add to your post, from more of a technical perspective, I think of these features this way:

    Web Filtering (a paid-for feature in the UTM that we don't have) allows the admin to restrict the web protocols HTTP/S (and possibly others?) based on the request header values written by the client's user agent.  It can therefore do a string match on those URLs / domains, so the DNS resolution of those domains / subdomains (and the resulting L3 IP addresses of the remote host[s]) is moot.  This provides flexibility when restricting the protocols that the Web Filtering daemon supports (again, HTTP/S, possibly FTP, maybe others?).
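    As a sketch of why the resolved IPs are moot for a URL-matching proxy (hypothetical allowlist patterns, not UTM's actual rule engine), a decision like this operates purely on the requested hostname:

```python
import re

# Hypothetical allowlist of destination-domain patterns.  The proxy
# matches the hostname from the request (or CONNECT target), so the
# IPs the name resolves to never enter the decision.
ALLOWED_PATTERNS = [r"(^|\.)amazonaws\.com$"]

def proxy_allows(host: str) -> bool:
    """True if the requested hostname matches an allowed pattern."""
    host = host.lower().rstrip(".")
    return any(re.search(p, host) for p in ALLOWED_PATTERNS)
```

    Anchoring the pattern with `$` matters: without it, a hostname like s3.amazonaws.com.evil.example would slip through.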

    Network Protection (a paid feature that we do have) allows the admin to restrict ANY port based solely on the L3 IP addresses of the source & destination.  This is the generic network firewall we're all used to; it does not ascertain or care what protocol you're using, only the network port you're trying to access.  This allows a bit of flexibility when running daemons on non-standard ports, but GREATLY LIMITS flexibility with today's DNS resolution strategies; think distributed or geo-dependent DNS, rotating A records, or multiple A records.

    When entering Host / Network / DNS Definitions as network objects in the UTM, the UTM does the DNS resolution on its own and populates those network objects with the real IP addresses it resolves.  This is where I found another quirk that's worth mentioning.

    1) If your client (internal laptop / server) has DNS server X defined in its network settings, and that's different from the UTM's DNS server Y (Network Services -> DNS -> Forwarders), you may run into an issue where your client resolves one IP and sends its packet to the UTM, where that IP may not [yet] be stored in the network object and therefore doesn't match your firewall rule.  This is especially true for distributed or geo-dependent DNS resolution (think global CDNs) and is magnified if, for example, X and Y are on separate continents.

    2) In a similar vein, if you have rotating DNS resolution with hundreds of IP addresses (think Amazon S3), it is certainly possible - in fact probable - that the client will create its packet with an IP address that the UTM hasn't [yet] resolved and stored in its network object definition.  I have proof of this scenario in my UTM: a) the network object for my DNS Group "s3.amazonaws.com" is 700 IP addresses large; b) when the network object is first created, and before the UTM has resolved most of the 700 addresses, my client packets have a slim chance of matching the firewall rule.  Even when the object is "fully populated", the rotating nature of the DNS resolution is not accurately or promptly tracked by the UTM.  If AWS drops a whole /24 from the DNS resolution for s3.amazonaws.com and adds a different /24, it will take some time for the UTM to store all of those changes.  During this time my firewall rule essentially contains stale IPs; it becomes likely that my client will form a packet with a destination that the UTM doesn't yet have... and then I wait... and I wait... until the UTM has finally re-resolved s3.amazonaws.com and updated its massive 700-address network object.
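    The stale-object race described in 2) can be modeled with a toy cache (the refresh interval below is made up; UTM's actual re-resolution logic isn't documented here):

```python
class StaleDnsGroup:
    """Toy model of a firewall network object that re-resolves a name
    only periodically (every `refresh_every` checks; made-up number)."""
    def __init__(self, resolver, refresh_every=5):
        self.resolver = resolver          # callable returning current IPs
        self.refresh_every = refresh_every
        self.cached, self.checks = set(), 0

    def matches(self, dst_ip):
        if self.checks % self.refresh_every == 0:
            self.cached = set(self.resolver())   # periodic re-resolution
        self.checks += 1
        return dst_ip in self.cached

current_answer = ["54.231.0.1"]           # what DNS returns right now
group = StaleDnsGroup(lambda: current_answer)

hit = group.matches("54.231.0.1")         # cache is fresh: packet passes
current_answer = ["52.216.0.1"]           # DNS rotates to a new block
miss = group.matches("52.216.0.1")        # client resolved freshly, but the
                                          # firewall's cached object is stale
```

    The client always resolves at send time, while the firewall object lags behind; the gap between `hit` and `miss` is exactly the window where valid traffic gets dropped.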

    My post here was intended to point out our use case: we are trying to restrict egress connections - not just for web protocols - using the standard L3 firewall.  Fortunately, the UTM does support DNS Group objects, which do "store" DNS resolutions for some time.  But there are difficulties with rotating or geo-dependent DNS.

    I wonder if we should be talking with the IETF for standardizing these DNS strategies.  Obviously, organizations are using creative DNS resolution strategies to benefit their business cases - and I vote that we allow security & firewall admins a front-row seat to a standardizing discussion for these creative strategies.

    Cheers!

    SAM

  • I don't understand how geographic DNS works.   Guess I need to do some reading.   If someone has a pointer, add it to this post or send me a private message.

    You are certainly right that there can be skew between the client's resolved IP and the UTM's in this situation.   Here are my thoughts on that:

    The recommended DNS configuration (see "Rulz" in the Wiki section, courtesy of BAlfson) is to have your internal DNS servers forward to UTM, then have UTM forward to the Internet.   There are three advantages: (a) it ensures that UTM Web Filtering Transparent Proxy block messages can resolve to UTM; (b) it allows UTM Web Filtering Transparent Proxy to perform "pharming" protection, where UTM re-resolves the DNS-to-IP address and corrects any apparent errors before releasing the packet (HTTPS scanning should be enabled); (c) for your situation, it ensures that UTM sees everything that the client sees.   Only the last one applies to you.

    To comment on how web filtering works:  

    If you use Standard Mode, the browser forwards everything to UTM and says "Please fetch this for me".   As a result, the DNS resolution occurs on UTM only.   I am sure that this works for HTTPS if HTTPS inspection is enabled, and I think (but am not certain) that it works with HTTPS inspection off.

    In any mode, the UTM Web Proxy examines the request URL, IP address, and port to decide whether to release the HTTP(S) request.   This includes scoring the URL for category and reputation, scoring the IP address for bots, and checking static rules, which can be FQDN patterns or regex patterns.   It also checks the reply for malicious content.   (Content checking of HTTPS traffic requires HTTPS Inspection to be enabled, which requires a little setup.)   All of this is configurable based on source IP, user ID, and time of day.
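    For the FQDN-pattern flavor mentioned above, a wildcard like *.amazonaws.com behaves roughly like a shell glob against the hostname (a sketch; UTM's own matcher may differ in the details):

```python
from fnmatch import fnmatchcase

def host_matches(host: str, pattern: str) -> bool:
    """Shell-style wildcard match of a hostname against an FQDN
    pattern such as '*.amazonaws.com' (case-insensitive)."""
    return fnmatchcase(host.lower(), pattern.lower())
```

    Note that *.amazonaws.com does not match the bare amazonaws.com, so allowlists often need both forms entered.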

     

  • Doug,

    That's all great if you are paying for the Web Filtering proxy features on the UTMs.  We are a small web hosting company, with major WAF / reverse proxy needs and almost no standard web proxy needs.

    As for geo-dependent DNS and CDNs, this should clear up some of it:

    https://www.nczonline.net/blog/2011/11/29/how-content-delivery-networks-cdns-work/

    Cheers!

  • Sam, depending on what your needs are, you might be able to get by with a 10-IP Web Protection subscription in a dedicated VM instance or an SG 115 with a Web Protection subscription.  Have you discussed these possibilities with your Sophos partner?  Although the Web Protection solution is limited to HTTP/S "conversations," these can occur on any port.

    Cheers - Bob

     