Is it right that, with HTTPS decryption disabled, suspicious HTTPS URLs are allowed? I would have thought this would be done at the URL level, before any decryption and scanning is performed.
Can you clarify your question a bit more?
We can still do URL filtering but only for the hostname.
For example, the firewall can see you are connecting to cnn.com and can block it, but if you are allowed to connect to the domain, then without decrypt and scan the firewall cannot see that you are downloading an executable file or that you are going to cnn.com/news.
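The split can be sketched in a few lines of Python. This is purely illustrative (the function name is made up): it just shows which part of a URL a firewall can observe without decryption versus which part travels inside the encrypted session.

```python
from urllib.parse import urlsplit

def firewall_view(url: str) -> dict:
    """Split a URL into what a firewall can see without decryption
    (the hostname, via the TLS handshake / DNS lookup) and what it
    cannot (the path and query, which are inside the encrypted session)."""
    parts = urlsplit(url)
    encrypted = parts.path + (("?" + parts.query) if parts.query else "")
    return {"visible": parts.hostname, "encrypted": encrypted}

print(firewall_view("https://www.cnn.com/news"))
# → {'visible': 'www.cnn.com', 'encrypted': '/news'}
```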
There are some things to consider when trying to decide whether to use HTTPS Decrypt and Scan.
In reply to MasterRoshi:
Thanks for the reply.
What I'm referring to is that I can configure the Web Filter to block "Suspicious" sites. This works fine with HTTP, which goes through the A/V scanner, but the HTTPS rule has no HTTP or HTTPS AV scanner, and the URL is still allowed through, despite the rule having the Web Policy set to block...
For example, https://www.sophostest.com/spyware/index.html will still get through, whereas http://www.sophostest.com/spyware/index.html gets blocked.
I would have thought that a web rule would be able to look at the URL and reject it, since the correct URL appears in the logs, without needing decrypt and scan enabled.
In reply to BLS:
The problem with the sophostest.com example is that the domain itself is not actually part of the suspicious category; just a certain section of it is, for testing purposes. In the real world, this is almost never the case. So you would be protected if https://sophostest.com itself were suspicious.
As stated before, categories work without decrypt and scan although finer detail can be provided with it enabled.
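The distinction can be sketched as a lookup table. Everything here is hypothetical (the category names and database shape are made up for illustration): entries can cover a whole hostname or just a path prefix, and without decrypt and scan only the hostname-wide entry can ever match.

```python
# Hypothetical category database -- entries are either hostname-wide
# (path prefix of None) or path-specific. Names are illustrative only.
CATEGORIES = {
    ("sophostest.com", None): "IT/Testing",        # whole domain
    ("sophostest.com", "/spyware"): "Suspicious",  # one section only
}

def categorize(hostname, path=None):
    """Return the category for a request. With decrypt and scan the path
    is known, so a finer path-specific entry can match; without it we can
    only fall back to the hostname-wide entry."""
    if path is not None:
        for (host, prefix), cat in CATEGORIES.items():
            if host == hostname and prefix and path.startswith(prefix):
                return cat
    return CATEGORIES.get((hostname, None), "Uncategorized")

print(categorize("sophostest.com"))                        # hostname only (HTTPS, no decrypt)
print(categorize("sophostest.com", "/spyware/index.html")) # full URL (HTTP, or decrypt enabled)
```

With decrypt and scan disabled, the HTTPS request never reaches the second lookup, so the "Suspicious" path entry can never fire.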
OK, then how does the category filter work? Surely it must have the whole URL in the database in order to reject it, since from what I can tell it does this before any data has been downloaded.
If that's the case, then as a future enhancement, would it be possible to include a feature that enables the web URL filter without having to use decrypt and scan?
I have a situation where we are looking for a product to protect guests, who therefore won't have the CA installed and so can't be covered by decrypt and scan.
If that were possible, anyone could do a man-in-the-middle attack on any device. The feature request is impossible for anyone (outside of a rogue publicly trusted CA) to achieve, since it would invalidate the whole purpose of TLS and the publicly trusted CA model on the internet. If you could do this, anyone connecting to your network would be vulnerable to you seeing their passwords when they log in to their bank or social media accounts, etc.
The reason we cannot see what happens after you connect to a domain is that an encrypted session is created from the endpoint to the server. We know you are connecting to sophostest.com because the client has to state the server name it is connecting to in plain text in the TLS handshake, so we can block the sophostest.com domain this way. However, if you are allowed to access sophostest.com, then once the TLS handshake is complete and you make a GET for /spyware, that request is encrypted, so we don't know what you are requesting unless we perform a man-in-the-middle and decrypt the traffic.
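The plain-text server name lives in the SNI (server_name) extension of the ClientHello. As a rough sketch, the snippet below builds a deliberately simplified ClientHello (real ones carry many more extensions) and then extracts the hostname from it with nothing but byte offsets, which is essentially all a firewall needs for hostname filtering; the structure follows the TLS record and handshake layout, but this is an illustration, not production parsing code.

```python
import struct

def build_client_hello(hostname):
    """Build a minimal, simplified TLS ClientHello carrying an SNI
    extension. Illustrative only -- real ClientHellos are richer."""
    name = hostname.encode()
    # SNI extension: type 0x0000, a server_name_list with one host_name entry
    sni_entry = b"\x00" + struct.pack(">H", len(name)) + name
    sni_list = struct.pack(">H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack(">HH", 0x0000, len(sni_list)) + sni_list
    extensions = struct.pack(">H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"                            # client_version: TLS 1.2
        + b"\x00" * 32                         # random (zeroed for the example)
        + b"\x00"                              # session_id length = 0
        + struct.pack(">H", 2) + b"\x13\x01"   # one cipher suite
        + b"\x01\x00"                          # one compression method: null
        + extensions
    )
    handshake = b"\x01" + struct.pack(">I", len(body))[1:] + body  # type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack(">H", len(handshake)) + handshake

def extract_sni(record):
    """Walk the unencrypted ClientHello and pull out the host_name from
    the SNI extension -- no decryption needed."""
    pos = 5                   # skip TLS record header
    pos += 4                  # skip handshake type + 3-byte length
    pos += 2 + 32             # skip client_version + random
    pos += 1 + record[pos]    # skip session_id
    pos += 2 + struct.unpack(">H", record[pos:pos + 2])[0]  # cipher suites
    pos += 1 + record[pos]    # compression methods
    end = pos + 2 + struct.unpack(">H", record[pos:pos + 2])[0]
    pos += 2
    while pos < end:
        ext_type, ext_len = struct.unpack(">HH", record[pos:pos + 4])
        pos += 4
        if ext_type == 0x0000:  # server_name: skip list length + name_type
            name_len = struct.unpack(">H", record[pos + 3:pos + 5])[0]
            return record[pos + 5:pos + 5 + name_len].decode()
        pos += ext_len
    return None

print(extract_sni(build_client_hello("sophostest.com")))  # → sophostest.com
```

The GET line that follows the handshake never appears in the clear, which is why the /spyware path is invisible without decrypt and scan.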
I think I understand now.
So HTTPS filtering can only act on the domain, not the full URL, unless HTTPS decrypt and scan is enabled.
I can see this being a challenge going forward: with SSL certificates freely available through Let's Encrypt, and the drive to make everything on the web secure (which it should be; I have no problem with this!), a UTM becomes just a firewall.
Especially in a BYOD environment...
Unfortunately, this is the situation with every vendor at the moment. We can give maximum security to devices you control (and can push trusted root CAs to); guest users still get good security, but not at the level of corporate users.