This discussion has been locked.
You can no longer post new replies to this discussion. If you have a question you can start a new discussion

HTTPS inspection - how to identify a blocked domain when it's not obvious from http.log entries? (NB low priority question.)

Hi Folks

Apologies for resorting to asking [what's likely a noob-grade] question on the forum, but I'd love to hear any tips on how to resolve issues such as the one outlined below. (I've fixed the example below, but only because I had a good idea of which domains I'd likely have to create exceptions for.)

My Configuration:

I've been running UTM - in transparent and NAT mode - for over 3 years, and for the past two, I've also been using HTTPS web filtering. This year, I've additionally been using the 'proxy auto configuration' feature (a JavaScript file that directs browsers to the UTM, thereby blocking non-standard ports) and everything is working fabulously well.

I have a Ubiquiti WAP on my management LAN and I recently moved my UniFi controller onto one of my VLANs (setting appropriate firewall rules to let STUN and HTTP Proxy ports through) and again, everything is working perfectly.

The Problem:

Today, I updated UniFi. Within that package, there is a button which you can press to prompt UniFi to check for new WAP firmware, but though it showed a nice, friendly green tick about 30 seconds after pressing it (implying nothing was amiss), it didn't download the current firmware.

I cleared the http.log, pressed the firmware-seeking button twice and gave it a minute, then I opened the log, but all that it showed were the two entries below:

2019:06:10-11:09:44 hadrian httpproxy[5329]: id="0003" severity="info" sys="SecureWeb" sub="http" request="(nil)" function="read_request_headers" file="request.c" line="1626" message="Read error on the http handler 87 (Input/output error)"
2019:06:10-11:10:43 hadrian httpproxy[5329]: id="0003" severity="info" sys="SecureWeb" sub="http" request="(nil)" function="read_request_headers" file="request.c" line="1626" message="Read error on the http handler 89 (Input/output error)"

Looking at the packetfilter.log didn't reveal anything interesting; there was only one entry related to the UniFi controller's MAC address (not of any informative use, and it occurred 18 minutes after the above two events).
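As an aside, when the entries are this terse it can still help to split the key="value" pairs apart programmatically (for example, to filter a large http.log by function or message). A rough sketch in Python, using the first log entry above:

```python
import re

def parse_utm_log(entry):
    """Split a UTM httpproxy log line into its key="value" fields."""
    timestamp, rest = entry.split(" ", 1)
    fields = dict(re.findall(r'(\w+)="([^"]*)"', rest))
    fields["timestamp"] = timestamp
    return fields

line = ('2019:06:10-11:09:44 hadrian httpproxy[5329]: id="0003" severity="info" '
        'sys="SecureWeb" sub="http" request="(nil)" function="read_request_headers" '
        'file="request.c" line="1626" '
        'message="Read error on the http handler 87 (Input/output error)"')

fields = parse_utm_log(line)
print(fields["message"])  # -> Read error on the http handler 87 (Input/output error)
```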

I know that Ubiquiti used to host their firmware at dl.ubnt.com, but a few months ago, they changed their main web site from ubnt.com to ui.com, so as a wild guess, I created the two exceptions below...

^https?://([A-Za-z0-9.-]*\.)ubnt\.com/*
^https?://([A-Za-z0-9.-]*\.)ui\.com/*

...and after again pressing the 'check for new firmware' button in UniFi, this time it downloaded the firmware (and there were no error entries in the http.log).
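Incidentally, it's worth sanity-checking what patterns like these do and don't match before relying on them - note that, as written, the `([A-Za-z0-9.-]*\.)` group is mandatory, so the bare domains themselves won't match. A quick Python check (the firmware URLs here are made-up examples, not Ubiquiti's real paths):

```python
import re

# The two exceptions from the post above
patterns = [
    r'^https?://([A-Za-z0-9.-]*\.)ubnt\.com/*',
    r'^https?://([A-Za-z0-9.-]*\.)ui\.com/*',
]

def matches_any(url):
    """Return True if any exception pattern matches the start of the URL."""
    return any(re.match(p, url) for p in patterns)

print(matches_any('https://dl.ubnt.com/firmware/XC.v4.0.bin'))  # True
print(matches_any('https://fw-update.ui.com/'))                 # True
print(matches_any('https://ubnt.com/'))   # False - the subdomain group is mandatory
print(matches_any('https://evil-ubnt.com/'))                    # False
```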

So, in this instance I managed to fix the problem (though only by pseudo-guessing the domains), and I was wondering whether I am missing another 'technique' which might help when investigating such issues in the future? I have come across a similar problem in the past (with similar log entries) when trying to debug an issue with an iPad weather app (I eventually gave up and used an alternative app; it generated 'useful' log entries, thus enabling me to create an exception for it).

My sincere apologies for sending you all to sleep, but if anyone can give me any hints on how to investigate similar anomalies in the future (without resorting to Wireshark), it would massively enhance my own knowledge (and you never know, perhaps it'll even enable me to help other home users on this forum sometime down the road).

Kind regards,
Briain (GM8PKL)

PS I have SSH access set up and I'm more than happy to use terminal trickery. :)



  • In my experience, "input/output error" means "could not negotiate a ciphersuite acceptable to both devices". Disabling HTTPS inspection for that URL should solve the problem.

    In my prior research, I determined that UTM uses a FIPS-certified version of OpenSSL, and they are limited because OpenSSL.org has been slow to deploy a FIPS module for the newer versions of the OpenSSL libraries. You can use "openssl version" from the SSH environment to check the version of OpenSSL on your UTM.

  • Hi

    Thank you very much indeed for the above information. Thinking back, I have quite often seen these "input/output error" messages in the log (though fortunately, I can usually link them to a browsing event, thus enabling me to easily figure out the appropriate exceptions) so it's just fabulous to now know what these messages are actually indicating.

    I had a look at UTM's OpenSSL version and I see what you mean; whilst my laptop (Debian) is using 1.1.0j (20 Nov 2018), UTM is using 1.0.2j-fips (26 Sep 2016), so it is now getting a little 'long in the tooth', as they say. I also had a look at the OpenSSL information page (https://www.openssl.org/docs/fips.html) and it looks like they'll not be validating anything else until OpenSSL 1.1.1 has been released.

    Incidentally, I've just re-installed the errant iPad app that I'd also mentioned, and yes, it resulted in exactly the same UTM log entries (as did the Ubiquiti FW repository site prior to me adding the appropriate exceptions). Interestingly, it's a BBC weather app (provided and back-ended by MeteoGroup), and though MeteoGroup's own weather app works fine (I already have an exception for ^https?://[A-Za-z0-9.-]*\.consumer\.meteogroup\.com/), the app that they've built for the BBC does not work, so taking a wild stab in the dark, I've just tried adding the exceptions below...

    ^https?://([A-Za-z0-9.-]*\.)?bbc\.co\.uk/
    ^https?://([A-Za-z0-9.-]*\.)?meteogroup\.com/

    ...but no joy for the BBC chap, so I'll dig out an old WAP, stick it on an access port (mirrored to a test port), associate the iPad with that WAP and see if I can thus establish, using Wireshark, exactly who it's 'calling home' to (I'm just creating a new R Pi - with a minimalist build & Wireshark - to add to my test equipment collection). :)

    Kind regards (and thank you, once again)

    Briain

  • After an unhappy experience with a regex that allowed too much, I have become opposed to using them. Regexes are rarely necessary.

    Instead, use a Website object and assign it to a Tag.  You can make up any name you want for the tag.  Then create your Exception object and apply it to websites with that Tag.

    The Website object can apply to an FQDN or to a whole organization, based on the option to include subdomains. The Tag name helps to document why you created the exception. And the Website object applies to all protocols: HTTP, HTTPS and FTP.

  • Hi

    Just for your amusement, I already have a large list of regex entries, but just for fun (and my UTM education) I briefly tried the tagging trick - for the very first time - just a couple of days ago, but oddly, I didn't have any success (obviously, I have either misunderstood the concept or done something monumentally stupid; likely both)! :)

    A couple of days back, I found out that Radio Paradise have introduced two new Internet 'radio' stations which stream in FLAC format. Whilst the old Radio Paradise stream has always worked without issue, when I tried the new FLAC ones (by adding them to an m3u file, which is scanned by my media server so that I can then play the streams on a Linn DS), nothing happened (silence was golden, as they say). Looking at the UTM logs, I established that there were two domains involved, so I decided to try the tagging trick and created two Websites entries (with 'include subdomains' ticked).

    I then created a bespoke category with pretty much everything bypassed (just for testing).

    Then, when that didn't work, I instead tried replacing the two Websites entries with a single one.

    That still didn't work, so I switched off my 'Streaming Services - Tagged Version' exception and instead added the below exception (to my usual streaming exceptions list)...

    ^https?://[A-Za-z0-9.-]*\.radioparadise\.com

    ...and it worked just fine.

    Next rainy day, I will have a good ponder (over a glass of wine) and try to figure out what I got wrong (likely lots; both in terms of wrong bits and wine intake).

    Kind regards,

    Bri :)

  • PS Just after posting the above, I looked at the playlist m3u file and noted that the two FLAC entries point to stream.radioparadise.com.

    #EXTINF:-1,[*RP] Radio Paradise
    37.130.228.60/aac-320.m3u
    #EXTINF:-1,[*RPaF] Radio Paradise Flac
    stream.radioparadise.com/flac
    #EXTINF:-1,[*RPaM] Radio Paradise Mellow Flac
    stream.radioparadise.com/mellow-flac
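    Incidentally, one way to see at a glance which hosts a playlist will contact (and hence which domains may need exceptions) is to pull the host names out of the m3u entries; a rough Python sketch using the playlist above:

```python
from urllib.parse import urlparse

# The playlist from the post above
m3u = """#EXTINF:-1,[*RP] Radio Paradise
37.130.228.60/aac-320.m3u
#EXTINF:-1,[*RPaF] Radio Paradise Flac
stream.radioparadise.com/flac
#EXTINF:-1,[*RPaM] Radio Paradise Mellow Flac
stream.radioparadise.com/mellow-flac"""

hosts = set()
for entry in m3u.splitlines():
    entry = entry.strip()
    if not entry or entry.startswith('#'):
        continue  # skip blank lines and #EXTINF metadata
    # these entries have no scheme, so add one before parsing
    url = entry if '://' in entry else 'http://' + entry
    hosts.add(urlparse(url).hostname)

print(sorted(hosts))  # -> ['37.130.228.60', 'stream.radioparadise.com']
```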

    So, rather than using the domains that I'd previously identified (from the UTM logs), I tried a Website entry for stream.radioparadise.com instead.

    I tried it both with the category set as permitted and then with the 'do not override' option selected, but still no stream (and both http://icy-4.radioparadise.com/mellow-flac and http://icy-5.radioparadise.com/mellow-flac show up in the http.log), so when time permits, I'll try creating Website exceptions for all of the hosts I've thus far identified.

    I'll stick to using regexes for now, then at a later date, I'll do some head-scratching and try to work out where I've gone astray. :)

    Kind regards

    Bri