
New Ideas/Feature Requests

Hello,

I submitted a few new ideas (see below) and would appreciate it if folks could take a look and vote for them if you like them. They are on the first page here: https://ideas.sophos.com/forums/143211-secure-web-gateway

Let me know if you have posted new ideas and I will take a look to vote on yours. I have already gone through most of them and voted on quite a few.

Thanks!

First Request:

When a URL is blocked, the log should have more detail:

When a URL is blocked, the log should give more detail about why it was blocked. We should be able to see these details so we can troubleshoot and fix the issue ourselves instead of calling support.

Second Request:

Block Newly Registered Websites (Very Important!) (Websense has a "Newly Registered Websites" category, so should the SWG)

The ability to block newly registered websites (say, for the first 30 or 60 days after registration) would be great. So many new sites are created with malicious intent. The Sophos Firewall can already do this, and so should the Web Gateway.

Third Request:

Time should include thousandths of a second (important when another security product detects an incident, so you can look at browsing history and pinpoint the site that caused it).

Time should include thousandths of a second when searching user web history. When an endpoint security product detects malicious web activity, it would be helpful if the Web Gateway displayed the Date/Time column down to the thousandth of a second so we can match the exact time with the event recorded on the endpoint. That way we can block the URL by adding it to the local site list. I know the Web Gateway can do this, because I see this detail in my SIEM when the Web Gateway sends events to it.

 

Thank you!

  • Hi Gary,

    In regard to your requests: #1 and #3 already exist. #2 I'm not sure about, but product managers actively review feature requests and consider all ideas posted.

     

    Here is a brief tutorial on how I troubleshoot the SWA.

    To do this you will need to export the log file to a syslog server. I recommend a Linux-based system so you can easily use tools like grep, cut, sed, and awk.

    Once your logs are uploaded you can see the entire format here:  http://swa.sophos.com/webhelp/swa/concepts/InterpretingLogFiles.html
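
    As a quick sketch (assuming the exported file lands at /var/log/swa.log; adjust the path to wherever your syslog server writes it), you can cut each line down to the handful of fields that matter for a first pass:

    # keep only the client, status, action, reason and domain fields for quick scanning
    awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^(h=|s=|act=|rsn=|dom=)/) printf "%s ", $i; print "" }' /var/log/swa.log | less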

     

    Now for the troubleshooting goodies:

     

    The first question you need to ask is:

    #1

    Is the site HTTPS or HTTP? If HTTPS, the first thing you can do is create an HTTPS scanning exemption, then a certificate exemption. If the site immediately starts working, verify the site's certificate setup with a tool like https://www.ssllabs.com/ssltest/index.html: test the target site and look for redirection issues, certificate issues, or a grade of less than a C. Chances are the site is being blocked for certificate issues.
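
    If you also want to sanity-check the certificate from the command line (a minimal sketch; substitute the real hostname for example.com), openssl will show the chain and the verification result:

    # dump the certificate chain and look for "Verify return code: 0 (ok)" at the end
    openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null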

     

    #2

    The next thing to try is to create a local site list entry and set it to "trusted". This disables AV scanning and allows clients to make "byte range requests", which helps in cases where the AV scanner hangs on streaming content.

    #3

    The next thing to try is setting the browser to use an explicit proxy: point it directly at the appliance IP on port 8080 (see the curl sketch below). If you're using transparent redirection and explicit proxy works, there is a 99% chance of some sort of redirection issue. The most common fix is "the WCCP service needs a quick restart"; perhaps a service group is not working (granted, that would be more widespread than a single website).

    - Private tabs will ensure content is not being cached, which rules out the equivalent of a 307 redirect failing or serving stale data. (Also make sure caching is NOT enabled on the appliance itself, in the advanced section.)
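
    A quick way to test the explicit-proxy path without changing browser settings (a sketch only; 10.0.0.50 stands in for your appliance IP) is to send a single request through it with curl:

    # if this succeeds but the same URL fails under transparent redirection, suspect WCCP/redirection
    curl -v -x http://10.0.0.50:8080 http://www.example.com/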

    Also, if clients are in full web control, see this document: https://community.sophos.com/kb/en-us/122384

    Ensure the workstation, your authentication, and DNS are set up as described here: https://community.sophos.com/kb/en-us/126599

    Check out the notes in my deployment guide: https://community.sophos.com/kb/en-us/126692

    For the most part these are one-off KBs; once you're all set up you shouldn't need them anymore, other than for reference.

     

    If the above fails...

    Now it's time to check the logs:

    http://swa.sophos.com/webhelp/swa/concepts/InterpretingLogFiles.html

     

    EP=1 SXL=1 h=10.99.115.13 u="DOMAIN\\johnsmith" s=200 X=- t=1336666489 T=284453
    Ts=0 act=1 cat="0x220000002a" app="-" rsn=- threat="-" type="text/html" ctype="text/html"
    sav-ev=4.77 sav-dv=2012.5.10.4770003 uri-dv=- cache=- in=1255 out=26198
    meth=GET ref="-" ua="Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0"
    req="GET http://www.google.ca/ HTTP/1.1" dom="google.ca" filetype="-" rule="0"
    filesize=25815 axtime=0.048193 fttime=0.049360 scantime=0.011 src_cat="0x2f0000002a"
    labs_cat="0x2f0000002a" dcat_prox="-" target_ip="74.125.127.94" labs_rule_id="0"
    reqtime=0.027 adtime=0.001625 ftbypass=- os=Windows authn=53 auth_by=portal_cache 
    dnstime=0.000197 quotatime=- sandbox=-

     

    EP/SXL = if you see this in the log file, the client is in full web control. The appliance ignores ANY request that's tagged with endpoint, as policy has already been run on the request.

    h=10.99.115.13 u="DOMAIN\\johnsmith" shows the cached user and IP. If there is no user ("-"), the IP is not authenticated and it's probably hitting your default policy or being blocked.
    s=200 is the HTTP status code returned from the site: 200 is good, 3xx is some sort of redirect, 4xx/5xx is generally bad.
    t=1336666489 is the exact time in epoch seconds. You can use a converter or the date command to convert it right down to the second; on a Linux box you can sed/awk/sort on the exact time you wish.
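
    For example, on a Linux box the date command will convert the epoch value from the sample line above (standard date syntax, nothing appliance-specific):

    date -d @1336666489        # GNU/Linux
    date -r 1336666489         # BSD/macOS equivalent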

    act

    Action code that identifies the outcome of the request:
    -7 = User is shown a sandbox analysis page.
    -6 = User attempted to proceed on a quota page, but the request was blocked.
    -5 = Block page displayed: daily quota time exceeded.
    -4 = Quota time warning displayed.
    -3 = User proceeded but request was blocked.
    -2 = Request was warned.
    -1 = Request was blocked.
    1 = Request was allowed.
    2 = Request was warned and user decided to proceed.
    3 = User proceeded.
    4 = User accepts a quota time and proceeds.
    5 = Request proceeded after quota was accepted.

    This is the first major part. I would do something like grep -E 'act=-[0-9]' sophos_log, which pulls out every request with a negative action code, i.e. anything that was blocked or warned, whether for policy or perhaps bad certificates.

    Each blocked entry must then be matched against the rsn= field.

    rsn

    Reason code that identifies why a particular request was blocked. The supported codes are listed below; however, this list is subject to change.
    1401 = Blocked because request contains a virus.
    1402 = Blocked by Local or Sophos URI list.
    1403 = Blocked by file type.
    1404 = Blocked because the request is encrypted and could not be scanned.
    1405 = Blocked because the virus scanner timed out when trying to scan the request.
    1406 = Blocked by policy.
    1407 = Blocked because the originating server failed SSL certificate validation.
    1408 = Blocked 'Range' requests.
    1409 = Blocked by tag.
    1410 = Blocked (lookup failed).
    1411 = Blocked because of application control.
    1412 = Blocked by Sandstorm.


    The most common ones are 1406 (blocked by policy), 1408 (byte-range requests), and 1407 (failed certificate validation).
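
    To see which reasons are actually hitting you (a sketch, assuming the same sophos_log file as above), count the rsn= values on blocked lines:

    # tally blocked requests by reason code, most frequent first
    grep 'act=-1' sophos_log | grep -o 'rsn=[0-9]*' | sort | uniq -c | sort -rn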

    What I would normally do is live-monitor the logs, grep out the IP, and sort by the epoch time.
    What you will see is often something like:

    packet 1: a request for url.xxx.com
    packet 2: url.xxx.com making a request to yyy.com
    packet 3: act=-1 rsn=1406, for example

    This would be a typical case where the initial page is NOT blocked, but the site itself spiders out and pulls down content from another site that is blocked.
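
    A minimal version of that workflow (hypothetical log path and client IP; substitute your own):

    # follow the live log for one client
    tail -f /var/log/swa.log | grep 'h=10.99.115.13'
    # or, after the fact, pull that client's requests with a negative action code (blocked/warned)
    grep 'h=10.99.115.13' /var/log/swa.log | grep -E 'act=-[0-9]'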

    A good example: you're watching a YouTube video and the video never starts. You could set the site to trusted and make sure it's not hanging on AV scanning, but what is actually going on in the background is that you make a request for the video (that's fine), the video makes a back-end request to an ad, and that request is blocked, so the connection is never returned to the original request, meaning the video never starts.

    Another example: the back-end server hosting the ad has a bad or self-signed certificate, so the appliance blocks it for that reason.


    The last thing we really care about in the log file is the category and risk class.

    cat="0x220000002a"
    0x will always be the same.
    22 means low risk; 21 would be trusted, 23 would be high risk, etc.
    2a is the actual category of the site.
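
    If you want to pull those two pieces out of a whole log rather than eyeballing each line, a small sed pass works (a sketch, again assuming /var/log/swa.log):

    # split cat="0x220000002a" into risk class (22) and category code (0000002a), then tally
    grep -o 'cat="0x[0-9a-f]*"' /var/log/swa.log | sed -E 's/cat="0x(..)([0-9a-f]+)"/risk=\1 cat=\2/' | sort | uniq -c | sort -rn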

    Again, when you're looking at the set of packets you may see something like:
    packet 1: 0x21000001a
    packet 2: 0x22000001a
    packet 3: 0x21000001a
    packet 4: 0x24000001a

    In this case you would want to whitelist the sites that came in as 22 (the change takes 5 minutes to take effect). Get as many sites as possible set to "trusted" in the initial test to help troubleshoot.
    In the case of packet #4, it's high risk. Risk class ALWAYS trumps policy: anything that is globally blocked or high risk will be blocked before policy is even run.

    Once the site works, change the risk class back until you find the right one.
    Generally this is not required, but it can be a very useful way to find the exact thing that's blocked: it could be a .jpg, a link, a back-end server, or anything else along those lines.


    Final notes:

    Keep in mind that only the five major browsers should ever go through the appliance. Things like fax machines, network printers, or anything else that's not a user joined to the domain should be excluded from filtering.

    So no applications; you can also try making user-agent string exclusions for authentication.

    This should give you everything you need to troubleshoot any issue with the SWA.


    Unfortunately it's not exactly "easy", and sometimes opening a support case and watching the logs in real time is simply better.

    Cheers
  • Hi Red_Warrior. I appreciate your response very much!

    For number 1.

    Ultimately, what I am looking for is that when I click on the Search tab and search by "User" or "Site", the dashboard shows me a status of Allowed or Blocked, with a separate column that shows what triggered the block right there, instead of having to go through the steps you described. I am just too darn busy with all sorts of tasks to complete, not just managing and reviewing the SWA, so it's important for me to be as efficient as possible.

    Regarding #2.

    So many malware sites are newly registered domain names. Having a "Newly registered domain names" category would be huge for blocking malware and avoiding endpoint infections. Websense does this, and so does the Sophos Firewall.

    Regarding #3

    Sometimes endpoint A/V or another security device blocks malware being distributed from a website while a user is browsing the Internet. The endpoint A/V or other security device will often just list the IP address and not the website URL. One of the first things I want to know is which website is distributing malware so I can add an entry to the SWA local site list to block it. So I log on to the Sophos SWA, search by user, and narrow it down by the time the security device reported, to find out which website was distributing malware. The search results on the SWA may show 35 sites with the same time, such as 1:35 PM. How do I know which of these 35 sites was dishing out malware? If the search results showed finer precision, such as 1:35:15 PM or better, like the security device does, I would have it and could manually block the URL in the SWA.

    Thanks again!

    Gary

  • Hi Gary,

    I think it's important to note here that the Labs team monitors malware sources and spammers, and has the very best access when it comes to malicious sites.

    All of their work is packaged up in update packages and distributed to appliances and endpoint clients in near real time.

    To be honest, you really should not have to be constantly updating the local site list or adding domains manually. Just ensure there are no update issues.

    The effect of adding so many sites is that the appliance will need to scan the entire local site list file for every request, which may not be as efficient as you would like. The other thing to note is that you can only have 8192 entries.

    To be totally honest, there is no reason you should ever need that many entries. The vast majority of bad sites are collected from literally millions of submissions and pushed out via data updates every few minutes.

    All feature requests are reviewed by developers and product managers. I think you posted some good points, so we'll see what happens.

    Cheers

  • 1) Go to Search, search by User, put in the username, and filter by status Blocked.  The entries in the list should have statuses like "Blocked (Virus)", "Blocked (Policy)", etc.

     

    2) Newly registered domains are not known by the categorizer and appear as "Uncategorized".  Sophos collects the domain names of requests for all uncategorized sites and then attempts to categorize them correctly.  The SWA does not support a domain being in multiple categories, so we cannot support anything like "Alcohol and Newly Registered".  Therefore it is either new (uncategorized) or it is not new (categorized).

     

    3) Every device has its own clock that can be off (even if using NTP), and the timestamp of detection can be different from the timestamp of downloading.  Timestamps are very useful to narrow down a general time window, but unless you are comparing timestamps from the same device, anything less than a second is useless.  Even down to the second is suspect.

    swa t=1:10:20.300 User performed GET /malware.exe
    swa t=1:10:24.500 SWA received response body from server and started malware scan
    swa t=1:10:26.600 malware scan complete and clean, starting to send file to client
    swa t=1:10:27.100 file completely sent and log made (using the 1:10:20.300 timestamp)

    client T=1:10:26.300 file download complete in browser
    client T=1:10:28.900 AV scanner on client detects virus
    client T=1:10:30.100 AV scanner on client reports virus cleaned

    So...  which timestamps are you going to match up?  The virus reporter on the client found something at 1:10:28.900 but the SWA log for the download is 1:10:20.300 (8 seconds off).  In order to keep requests from a client straight and clear as an "order of events" (as far as I remember) the SWA logs the request start time, not the response end time.  And even if it did, and if the clocks were sync'd perfectly, the AV detection time on the client could be seconds later.