I think I have this configured correctly to drop any DNS query out to the open internet:
Last match wins, correct?
John
Sadly, tcpdump is telling me no bueno:
17:51:23.370149 IP 10.41.32.33.33991 > 8.8.8.8.53: 63725+ A? captive.roku.com. (34)
17:51:23.370306 IP 10.41.32.33.45656 > 8.8.4.4.53: 19954+ A? captive.roku.com. (34)
17:51:23.395031 IP 8.8.4.4.53 > 10.41.32.33.45656: 19954 5/0/0 CNAME d1k85ogl73rd7b.cloudfront.net., A 54.192.7.25, A 54.192.7.11, A 54.192.7.209, A 54.192.7.45 (141)
17:51:23.406605 IP 8.8.8.8.53 > 10.41.32.33.33991: 63725 5/0/0 CNAME d1k85ogl73rd7b.cloudfront.net., A 54.192.7.25, A 54.192.7.11, A 54.192.7.45, A 54.192.7.209 (141)
This is all DNS traffic to Google's DNS servers. I do not know why you would want to block it, nor whether you can block it with firewall rules.
The firewall rules are only activated if traffic bypasses all of the proxies. (Read the articles in the Wiki section.) In this case, I think the DNS subsystem acts like a proxy so the firewall rules are not evaluated.
The firewall live log has less data than the full log. If logging is enabled for a rule, a log entry is created for each packet, and the rule number is in the log entry. Judicious use of Allow Rule logging can help hunt down unexpected firewall behavior, as long as the packet is actually handled by the firewall rules.
For continuity of discussion, quoting DouglasFoster:

"This is all DNS traffic to Google's DNS servers. I do not know why you would want to block it, nor whether you can block it with firewall rules."

I need to force all DNS traffic out through a different egress for filtering. Any stateful firewall should be able to block this type of traffic. That is firewall 101.
DouglasFoster said:

"The firewall rules are only activated if traffic bypasses all of the proxies. (Read the articles in the Wiki section.) In this case, I think the DNS subsystem acts like a proxy so the firewall rules are not evaluated."

That is a good tidbit of information: proxied traffic is not evaluated by the firewall rules. Interesting. However, in this particular case I don't think the UTM's DNS proxy is in play, because the log file is empty:
(I also don't know where to enable it.)
DouglasFoster said:

"The firewall live log has less data than the full log. If logging is enabled for a rule, a log entry is created for each packet, and the rule number is in the log entry. Judicious use of Allow Rule logging can help hunt down unexpected firewall behavior, as long as the packet is actually handled by the firewall rules."

This section was really helpful, because I had been working on the assumption that the last match wins, and that is incorrect: the first match wins. Once I moved the DNS drop rule above the any/any rule:
I then started getting matches and more importantly drops:
Thanks for the help!
John
Glad it is fixed. To correct something that I said:
DNS service is always active, because UTM uses its internal DNS for its own lookup requirements.
However, you can prevent it from servicing other clients by leaving the Allowed Networks list for DNS server empty, which is apparently what you have done.
This will force the traffic to the Firewall Rules, where it is evaluated from low to high, as you confirmed.
Can you share your technique for DNS blacklisting?
UTM has a DNS blacklist implemented in the IPS module, but I am not happy with it, because it drops the query packet rather than returning a negative (NXDOMAIN) result. Silence only causes the client to conclude that part of the DNS infrastructure is broken and to look for an answer by another method. I am looking for a solution that tells the client to stop looking.
DouglasFoster said:

"Can you share your technique for DNS blacklisting?"

Sure!
Like when Elwood said in the Blues Brothers movie, "Jake, I gotta pull over," we may end up in the weeds on this!
:D
There were several engineering goals with this DNS blacklist:
There are also infrastructure considerations that cannot be ignored when reviewing this configuration; adjust accordingly. This blacklist is designed to support the specific way DNS traffic flows in our environment:
host > Domain Controller > DMZ BIND DNS server > open Internet
The Domain Controller is authoritative for its zone internally, and the BIND DNS server is authoritative for the same zone externally, facing the Internet.
The need to unmask hosts arises because of how many DCs are in play. DNS logging on the DCs cannot be enabled due to the resources it consumes, so generally the BIND servers would only see the DCs making the queries. As with anything, there are trade-offs: the unmasking depends on a host making an HTTP or HTTPS call. If a host attempts any other port, it will slip through the unmasking. The good news is that the DNS blacklist will still stop the traffic from reaching the open Internet; we just won't know exactly who was being stopped.
That said, let's get into it!
Remember to keep situational awareness of BIND's working directory path, which depends on your OS. I prefer FreeBSD as my Unix server OS, and when BIND is installed there the path is:
/var/named/etc/namedb
As such, all of the working directories live inside of "namedb":
First create a directory called "blacklist" and then inside of that directory create a file called "baddomains.conf". For testing populate "baddomains.conf" with:
# SOC Blacklist, Effective 29 JUNE 2018
zone "000webhostapp.com" {type master; file "../blacklist/baddomains.hosts";};
Next, configure "named.conf" with an include statement pointing to the "baddomains.conf" file:
// Blacklist include statement
include "../blacklist/baddomains.conf";
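Taken together, the layout above can be sketched as a few shell commands. This sketch uses a scratch directory as a stand-in for /var/named/etc/namedb so it is safe to try anywhere; on a real server, point NAMEDB at your actual namedb path and reload named afterwards.

```shell
# Stand-in for /var/named/etc/namedb; point this at the real path on your server.
NAMEDB=/tmp/namedb-demo
mkdir -p "$NAMEDB/blacklist"

# The blacklist "mini named.conf", holding only zone statements.
cat > "$NAMEDB/blacklist/baddomains.conf" <<'EOF'
# SOC Blacklist, Effective 29 JUNE 2018
zone "000webhostapp.com" {type master; file "../blacklist/baddomains.hosts";};
EOF

# Append the include statement to named.conf, once.
touch "$NAMEDB/named.conf"
grep -q 'blacklist/baddomains.conf' "$NAMEDB/named.conf" || cat >> "$NAMEDB/named.conf" <<'EOF'
// Blacklist include statement
include "../blacklist/baddomains.conf";
EOF
```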
This "baddomains.conf" file is like a mini named.conf file, inasmuch as it contains only zone statements, and it is where all of the blacklist magic happens.
Finally, create a "baddomains.hosts" file and populate it with:
$TTL 3600
@ IN SOA ns1.yourdomainhere.com. helpdesk.yourdomainhere.com. (
2018062901 ;serial
10800 ;refresh
1800 ;retry
7d ;expire
172800 ) ;minimum
IN NS ns1.yourdomainhere.com.
IN NS ns2.yourdomainhere.com.
IN A 172.24.13.25
IN A 172.16.13.25
* IN A 172.24.13.25
* IN A 172.16.13.25
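Before pointing named at the zone file, it is worth validating it. Here is a sketch that writes the file above to a scratch location and, where BIND's tools are installed, checks it with named-checkzone (the sinkhole addresses and NS names are the placeholders from the example; substitute your own):

```shell
# Write the example baddomains.hosts to a scratch location.
ZONEFILE=/tmp/baddomains.hosts
cat > "$ZONEFILE" <<'EOF'
$TTL 3600
@ IN SOA ns1.yourdomainhere.com. helpdesk.yourdomainhere.com. (
        2018062901 ;serial
        10800      ;refresh
        1800       ;retry
        7d         ;expire
        172800 )   ;minimum
  IN NS ns1.yourdomainhere.com.
  IN NS ns2.yourdomainhere.com.
  IN A 172.24.13.25
  IN A 172.16.13.25
* IN A 172.24.13.25
* IN A 172.16.13.25
EOF

# Validate against any blacklisted zone name; the same file serves every zone.
if command -v named-checkzone >/dev/null 2>&1; then
  named-checkzone 000webhostapp.com "$ZONEFILE"
fi
```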
Adjust as needed including your preferred TTL value.
The intent of the "baddomains.hosts" file is that it is a static file that always points to the webserver(s). The intent of the "baddomains.conf" file is that it is updated, either manually or with automation, to blacklist identified malicious zones or zones that you just don't like (e.g., doubleclick.net).
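Since "baddomains.conf" is the only file that changes, updates are easy to script. A minimal sketch (the blacklist_zone helper is my own, not part of the thread): append a zone statement if it isn't already present. In production, point it at the real baddomains.conf and follow up with "rndc reconfig" so named picks up the new zone.

```shell
# blacklist_zone CONF DOMAIN: add DOMAIN to the blacklist config, idempotently.
blacklist_zone() {
  conf="$1"; domain="$2"
  grep -q "zone \"$domain\"" "$conf" 2>/dev/null || \
    printf 'zone "%s" {type master; file "../blacklist/baddomains.hosts";};\n' \
      "$domain" >> "$conf"
}

# Demo against a scratch file; in production use
# /var/named/etc/namedb/blacklist/baddomains.conf, then run "rndc reconfig".
conf=$(mktemp)
blacklist_zone "$conf" "doubleclick.net"
blacklist_zone "$conf" "doubleclick.net"   # second call is a no-op
```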
Using our test domain:
Because of the wildcard in the "baddomains.hosts" file, you can tack anything onto the front of the zone and it will still resolve; "www", for example:
If anyone reading this has questions, PM me your contact information and I will be happy to help work through whatever is causing problems.
Enjoy!
John