
Partial content (Range) rejected by proxy after upgrading to 9.6

After upgrading from 9.510-5 to 9.600-5, the proxy started responding with "416 Requested range not satisfiable" to any request that has to be AV-scanned and includes a Range: header. In /var/log/http.log, the denied request shows reason="range". A sniff on the outside wire shows that the request from the proxy to the webserver is processed correctly.
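To reproduce the symptom, a single ranged GET through the proxy is enough. This is my own sketch (the URL and proxy address are placeholders, not values from this thread): a 206 means the range was honoured, a 416 matches the blocks described above.

```python
# Hypothetical probe: send one GET with a Range header through the
# proxy and report the status code. PROXY/URL are placeholders.
import urllib.error
import urllib.request

def ranged_request(url, first=0, last=1023):
    """Build a GET asking only for bytes first..last of the object."""
    req = urllib.request.Request(url)
    req.add_header("Range", f"bytes={first}-{last}")
    return req

def probe(url, proxy):
    """Return the HTTP status code the proxy answers with."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
    try:
        with opener.open(ranged_request(url), timeout=10) as resp:
            return resp.status   # 206 if partial content was served
    except urllib.error.HTTPError as err:
        return err.code          # 416 if the proxy rejected the range
```

Usage would look like `probe("http://example.com/file.pdf", "http://proxy.example:8080")`.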

 

Workaround: after putting the problematic URL patterns into a web filtering exception rule with AV scanning turned off, the requests are processed as they were before the upgrade to 9.6.

 

Is this an intended behavioral change in 9.6? I know that serving partial content can be abused to bypass AV scanning, so it is generally a good idea to handle requests for partial content differently and not to pass their responses through unscanned, but IMHO the proxy should use a more sophisticated algorithm to deal with this situation:

1) send a HEAD request to determine whether the content size is within the size limit for scanning;
2) if it is over the size limit, process the request without scanning;
3) if it is within the size limit, request the whole content (and possibly cache it for subsequent range requests), scan it, and serve the partial content to the client.
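The three steps above could be sketched roughly as follows. This is only an illustration of the proposed logic, not actual UTM internals; the function names and the size limit are assumptions.

```python
# Sketch of the proposed proxy-side handling of a Range request.
# head(), fetch_full() and scan() stand in for the proxy's own
# machinery; the size limit is an assumed configuration value.

AV_SCAN_SIZE_LIMIT = 50 * 1024 * 1024  # assumed scanning limit (bytes)

def handle_range_request(head, fetch_full, scan, start, end):
    """head() -> content length; fetch_full() -> whole body as bytes;
    scan(body) -> True if the content is clean."""
    size = head()                         # 1) HEAD to learn content size
    if size > AV_SCAN_SIZE_LIMIT:
        return ("pass-unscanned", None)   # 2) over limit: pass through
    body = fetch_full()                   # 3) fetch the whole object
    if not scan(body):                    #    (cacheable for later ranges)
        return ("blocked", None)
    return ("206", body[start:end + 1])   # serve only the requested range
```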

 

OH

 



This thread was automatically locked due to age.
  • Hello Michael.

    We are using 9.601-5. I can't find any entries with reason="range" in the Web Filtering logs from before the upgrade to that version.

    We use Microsoft Edge 44.17763.1.0 and Internet Explorer 11.316.17763.0.

    Various websites produce the problem. If I put the website in Web Protection - Web Filter Profiles - Filter Actions - Allowed Sites, the user gets the PDF more often but not always.

    When clicking on the PDF link, either nothing happens, a blank page appears or the PDF loads. For most people, most of the time, the PDF will load.

    The quick answer is that we tested PDFs when we implemented this in UTM; however, there are many combinations, so it is possible that some combination is not handled correctly.  But 9.6 has been out for a while and the Dev team has not heard about this from anywhere else.

    If there is a product issue, this needs to be raised through a support ticket.

  • Hi Nigel,

    How about showing us a line from the Web Filtering log where a PDF did not load?

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • 2019:03:20-16:11:17 proxy-1 httpproxy[5548]: id="0002" severity="info" sys="SecureWeb" sub="http" name="web request blocked" action="block" method="GET" srcip="xxx.xxx.xxx.xxx" dstip="185.217.40.162" user="xxxxxxxx" group="Active Directory Users" ad_domain="xxxxxxxx" statuscode="416" cached="0" profile="REF_HttProContaInterScc (Corporate)" filteraction="REF_HttCffActivDirecUsers (Standard Active Directory Users)" size="0" request="0xccd16a00" url="www.ellenmacarthurfoundation.org/.../GC-Spring-Report-Summary.pdf" referer="www.ellenmacarthurfoundation.org/.../GC-Spring-Report-Summary.pdf" error="" authtime="966" dnstime="9" aptptime="0" cattime="86" avscantime="0" fullreqtime="80895" device="0" auth="2" ua="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063" exceptions="" category="105" reputation="neutral" categoryname="Business" content-type="application/pdf" reason="range"

    These happen hundreds of times per day for PDF files since the UTM upgrade. It is thousands of times per day including other files. It will retry and usually get the PDF, but not always.
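With blocks at this volume, tallying them from the log is quicker than eyeballing it. A rough sketch (my own, assuming the key="value" layout shown in the log line above) that counts reason="range" blocks per content type in /var/log/http.log:

```python
# Tally reason="range" blocks per content type from UTM http.log
# lines; the regex matches the key="value" pairs in the log format.
import re
from collections import Counter

PAIR = re.compile(r'(\w[\w-]*)="([^"]*)"')

def tally_range_blocks(lines):
    counts = Counter()
    for line in lines:
        fields = dict(PAIR.findall(line))
        if fields.get("reason") == "range":
            counts[fields.get("content-type", "unknown")] += 1
    return counts

# e.g.: with open("/var/log/http.log") as f: print(tally_range_blocks(f))
```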

  • My lab is on 9.601, yet I had no trouble loading that PDF, Nigel, neither through the Transparent nor the Standard Proxy.

    2019:03:22-13:33:53 post httpproxy[6621]: id="0001" severity="info" sys="SecureWeb" sub="http" name="http access" action="pass" method="GET" srcip="10.x.y.65" dstip="185.217.40.162" user="myusername" group="Open Web Access" ad_domain="MEDIASOFT" statuscode="200" cached="0" profile="REF_RMxbSZXQTi (Office)" filteraction="REF_IiqUeSGrWr (Open Web Access)" size="3332969" request="0x9d34300" url="https://www.ellenmacarthurfoundation.org/assets/downloads/GC-Spring-Report-Summary.pdf" referer="" error="" authtime="235" dnstime="47401" aptptime="118" cattime="84009" avscantime="110043" fullreqtime="3806549" device="1" auth="2" ua="Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" exceptions="" category="105" reputation="neutral" categoryname="Business" country="United Kingdom" sandbox="-" content-type="application/pdf"

    I'm beginning to suspect your browser.  Have you tried a different one, from a different machine type with a different OS?

    By the way, since you mentioned this, I grepped 'range' in http.log and found that Windows updates were being blocked, so I added an Exception in Internet Options and added the following DNS Group object:

    Thanks for stimulating me to check that!

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • A packet sniffer can help figure out what is going on.  One of the things to pay attention to is the response header "Accept-Ranges".
     
    In this case, the far server sets the Accept-Ranges to be "bytes".
    In XG, it will modify this header to "none".
    In UTM, it does not modify this header, unless it has to.
     
    How clients behave is...  dependent on the client.  However here is one scenario:
     
    Client makes a request to the far server for the whole file.
    Header comes back with Accept-Ranges: bytes (range requests are allowed)
    Client makes a second concurrent request for a range.
    The second request is blocked with 416 and Accept-Ranges: none
    The first request should still proceed and download the file.
     
    The client should be smart enough not to drop the original full-file download.  Or, if it does drop it, it should retry with a full-file download when the range request fails.
     
    A quick test in my environment worked.
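The fallback behaviour described in that scenario could be expressed as a small decision function. This is purely illustrative of what a well-behaved client should do, not any real browser's logic:

```python
# Sketch of client-side fallback after a range request: if the
# server answers 416, or stops advertising ranges via Accept-Ranges,
# fall back to a full-file download instead of abandoning it.

def next_action(status, headers):
    """Decide what a client should do after issuing a range request."""
    if status == 206:
        return "continue-ranged"        # partial content was honoured
    if status == 416 or headers.get("Accept-Ranges") == "none":
        return "retry-full-download"    # ranges refused: fetch whole file
    return "continue-full"              # e.g. 200 carrying the whole body
```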
  • Hi.

    I have tried different browsers with the same problem. One of the remote sites seems to get the problem more than most, but anyone can get it intermittently.

    Sophos have remoted on, extracted logs of it happening and escalated to third line.

  • Thanks for keeping us in the loop on this, Nigel.

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • No fix yet. We didn't want to bypass AV scanning and so it has been escalated.