I want to use the UTM 9 Webserver Protection reverse proxy (virtual webserver) to protect my Nextcloud installation.
I have Nextcloud with Collabora running in Docker, and I can't get Collabora working. I imported the Let's Encrypt certificates into the UTM, so SSL is not the problem. Does anyone have an idea how to modify the reverse proxy on the UTM to get it working with Collabora?
I have the same problem with my installation. The reverse proxy works fine for logging in and file exchange, but I cannot open or edit files with Collabora. Every time I try to open a file I get a white screen and the document is not opened.
I think SSL is not a problem on my side either. But I suspect the UTM has a problem with the different names - cloud.... for my Nextcloud installation and office.... for the Collabora installation. Do you think this is possible?
Here are some log file entries:
2017:06:25-22:47:07 remote httpd[31268]: [core:notice] [pid 31268:tid 3995577200] [client 91.17.50.163:59112] AH00026: found %2f (encoded '/') in URI (decoded='/lool/cloud.server.com/.../7_ocukbswiqfwn, returning 404
2017:06:25-22:47:07 remote httpd: id="0299" srcip="91.17.50.163" localip="172.20.96.1" size="373" user="-" host="91.17.50.163" method="GET" statuscode="404" reason="-" extra="-" exceptions="-" time="1568" url="/lool/cloud.server.com/.../7_ocukbswiqfwn server="office.server.com" port="443" query="" referer="-" cookie="-" set-cookie="-" uid="WVAhS6wUYAEAAHokKKgAAACm"
2017:06:25-22:47:08 remote httpd[31268]: [core:notice] [pid 31268:tid 3978791792] [client 91.17.50.163:59113] AH00026: found %2f (encoded '/') in URI (decoded='/lool/cloud.server.com/.../7_ocukbswiqfwn, returning 404
2017:06:25-22:47:08 remote httpd: id="0299" srcip="91.17.50.163" localip="172.20.96.1" size="373" user="-" host="91.17.50.163" method="GET" statuscode="404" reason="-" extra="-" exceptions="-" time="1620" url="/lool/cloud.server.com/.../7_ocukbswiqfwn server="office.server.com" port="443" query="" referer="-" cookie="-" set-cookie="-" uid="WVAhTKwUYAEAAHokKKkAAACo"
Best regards
André
It seems clear that it is objecting to the URL because it contains %2f.
I think if you check adjacent log entries for one starting with "[ModSecurity:", it will contain a section of the form [id 123456]. Put that number into the rigid filtering exception list, or turn off rigid filtering completely (which weakens security more).
Hi Douglas,
thanks for your quick answer. At the moment I have no filter active and no firewall profile on my virtual servers. This is only for testing purposes; I know that this is not safe ;-). What do you mean by "adjacent log entries"?
Maybe the .conf for my Collabora installation is helpful:
<VirtualHost *:443>
ServerName office.server.com:443

<Directory /var/www>
Options -Indexes
</Directory>

# SSL configuration, you may want to take the easy route instead and use Lets Encrypt!
SSLEngine on
SSLCertificateChainFile /etc/letsencrypt/live/office.server.com/chain.pem
SSLCertificateFile /etc/letsencrypt/live/office.server.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/office.server.com/privkey.pem
SSLOpenSSLConfCmd DHParameters /etc/letsencrypt/live/office.server.com/dhparam.pem
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
SSLHonorCipherOrder on
SSLCompression off

# Encoded slashes need to be allowed
AllowEncodedSlashes NoDecode

# Container uses a unique non-signed certificate
SSLProxyEngine On
SSLProxyVerify None
SSLProxyCheckPeerCN Off
SSLProxyCheckPeerName Off

# keep the host
ProxyPreserveHost On

# static html, js, images, etc. served from loolwsd
# loleaflet is the client part of LibreOffice Online
ProxyPass /loleaflet https://127.0.0.1:9980/loleaflet retry=0
ProxyPassReverse /loleaflet https://127.0.0.1:9980/loleaflet

# WOPI discovery URL
ProxyPass /hosting/discovery https://127.0.0.1:9980/hosting/discovery retry=0
ProxyPassReverse /hosting/discovery https://127.0.0.1:9980/hosting/discovery

# Main websocket
ProxyPassMatch "/lool/(.*)/ws$" wss://127.0.0.1:9980/lool/$1/ws nocanon

# Admin Console websocket
ProxyPass /lool/adminws wss://127.0.0.1:9980/lool/adminws

# Download as, Fullscreen presentation and Image upload operations
ProxyPass /lool https://127.0.0.1:9980/lool
ProxyPassReverse /lool https://127.0.0.1:9980/lool
</VirtualHost>
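One side note on this config: the 404s in the logs come from the UTM's own httpd, not from this VirtualHost. Apache rejects %2f in the request path with AH00026 and answers 404 itself whenever AllowEncodedSlashes is off, so the request never reaches the backend. The UTM generates its proxy configuration internally, so these directives cannot simply be pasted into it, but for comparison this is roughly what a hand-built Apache reverse proxy in front of Collabora would need (hostnames taken from this thread; the backend address 192.0.2.10 is a placeholder):

<VirtualHost *:443>
ServerName office.server.com:443
# (certificate directives as in the VirtualHost above would go here)
SSLEngine on
SSLProxyEngine On
ProxyPreserveHost On
# Let %2f through to the backend instead of answering AH00026 / 404
AllowEncodedSlashes NoDecode
# nocanon keeps Apache from re-encoding or normalising the already-encoded path
ProxyPassMatch "/lool/(.*)/ws$" wss://192.0.2.10:9980/lool/$1/ws nocanon
ProxyPass /lool https://192.0.2.10:9980/lool nocanon
ProxyPassReverse /lool https://192.0.2.10:9980/lool
</VirtualHost>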
Regards
I have been looking at WAF logs recently, and they are hard to parse. I suggest you open a prior day's log in a text editor for review. Some lines start with a timestamp and have basic source-target information, similar to the web filter logs. Other lines do not have that information but do contain data about the alarms that were triggered. Any line with [id value] represents a rigid filtering rule that fired. If you do not want that rule enforced against this website, you put that ID number into the rigid filtering exceptions. There is no published master list of rule IDs; you just handle them as they come. Hopefully the source-target information allows you to distinguish known-good traffic from possibly-hostile traffic.
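To connect that to what the exception actually does: the UTM WAF is Apache-based and the [id ...] values are ModSecurity rule IDs, so putting an ID into the rigid filtering exceptions amounts to a ModSecurity rule exclusion. On a plain Apache + mod_security2 setup the equivalent would be roughly the following sketch (the ID 123456 is only illustrative; on the UTM you only ever use the GUI list, never configuration files):

# Illustrative only - on the UTM the same effect comes from adding the ID
# to the firewall profile's skipped/exception rule list in the GUI.
<IfModule security2_module>
# Stop enforcing the rule whose [id ...] appeared in the WAF log
SecRuleRemoveById 123456
</IfModule>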
I have been looking in the log with the firewall profile activated. I found the following three IDs and created exclusions - 950120, 960032, 981203. I got a warning...
"The list of skipped filter rules contains the following required infrastructure rules: 981203. Disabling a required infrastructure rule can lead to attacks not being blocked by the Web Application Firewall."
...and saved the settings.
But opening a document with Collabora still does not work; see the logs below:
2017:06:26-22:36:56 remote httpd: id="0299" srcip="91.17.50.163" localip="172.20.96.1" size="1523" user="-" host="91.17.50.163" method="POST" statuscode="200" reason="-" extra="-" exceptions="-" time="242458" url="/loleaflet/4f4593a/loleaflet.html" server="office.server.com" port="443" query="?WOPISrc=https%3A%2F%2Fcloud.server.com%2Fapps%2Frichdocuments%2Fwopi%2Ffiles%2F262_ocukbswiqfwn&title=About.odt&lang=de-DE&closebutton=1&revisionhistory=1" referer="-" cookie="-" set-cookie="-" uid="WVFwaKwUYAEAAGXQps4AAAAH"
2017:06:26-22:36:56 remote httpd[26064]: [core:notice] [pid 26064:tid 4062718832] [client 91.17.50.163:52232] AH00026: found %2f (encoded '/') in URI (decoded='/lool/cloud.server.com/.../262_ocukbswiqfwn, returning 404
2017:06:26-22:36:56 remote httpd: id="0299" srcip="91.17.50.163" localip="172.20.96.1" size="375" user="-" host="91.17.50.163" method="GET" statuscode="404" reason="-" extra="-" exceptions="-" time="3543" url="/lool/cloud.server.com/.../262_ocukbswiqfwn server="office.server.com" port="443" query="" referer="-" cookie="-" set-cookie="-" uid="WVFwaKwUYAEAAGXQps8AAAAI"
2017:06:26-22:36:57 remote httpd[26064]: [core:notice] [pid 26064:tid 4054326128] [client 91.17.50.163:52233] AH00026: found %2f (encoded '/') in URI (decoded='/lool/cloud.server.com/.../262_ocukbswiqfwn, returning 404
2017:06:26-22:36:57 remote httpd: id="0299" srcip="91.17.50.163" localip="172.20.96.1" size="375" user="-" host="91.17.50.163" method="GET" statuscode="404" reason="-" extra="-" exceptions="-" time="2975" url="/lool/cloud.server.com/.../262_ocukbswiqfwn server="office.server.com" port="443" query="" referer="-" cookie="-" set-cookie="-" uid="WVFwaawUYAEAAGXQptAAAAAJ"
Is it possible that the UTM has a problem with the redirection and the two different domain names, office.server.com + cloud.server.com?
Any other ideas?
Yes, it seems plausible that a URL-within-a-URL could confuse the UTM WAF. I suggest you open a case with Sophos Support, if you have not done so already, as that is the only way to get bugs documented and fixed.
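To make the URL-within-a-URL point concrete: in the log above, loleaflet.html is requested with the query parameter WOPISrc=https%3A%2F%2Fcloud.server.com%2Fapps%2Frichdocuments%2Fwopi%2Ffiles%2F262_ocukbswiqfwn, and the follow-up /lool/ requests carry that encoded URL inside the path itself, which is exactly what the UTM's front-end httpd rejects because of the %2f. The backend config posted earlier already anticipates this; shown again below with an illustrative path (the exact trailing segments are assumed):

# The /lool/ paths embed the URL-encoded WOPI source as a path segment, e.g.
#   /lool/https%3A%2F%2Fcloud.server.com%2Fapps%2Frichdocuments%2Fwopi%2Ffiles%2F262_ocukbswiqfwn/ws
# so encoded slashes are unavoidable and must survive both proxy hops.
ProxyPassMatch "/lool/(.*)/ws$" wss://127.0.0.1:9980/lool/$1/ws nocanon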
Yes, it seems to be a bug. I opened a support ticket and referred to this thread.
Case 8145469
André and Stefan, what happens if you select 'Pass host header' in the Virtual Server?
Cheers - Bob
Hey Bob,
in my configuration the option "Pass host header" has always been checked. The last time I tested was more than 8 weeks ago, and my VM with Nextcloud is not running at the moment. I think the certificates have already expired.
Regards André
Same here. But I tested with this option and without - no change. However, without this option I don't see the taskbar from Collabora, so the option should be checked; that is important.
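For what it's worth, that matches the backend config posted earlier: the UTM's "Pass host header" option is roughly the WAF-side counterpart of the ProxyPreserveHost On line, i.e. the browser's original Host header is forwarded to the backend instead of being rewritten, which Collabora needs to build its UI (hence the missing taskbar without it). A one-line sketch of the equivalent on a hand-built Apache proxy (assumption - how the UTM implements it internally is not documented in this thread):

# Forward the client's original Host: header to the backend
# instead of replacing it with the backend's hostname.
ProxyPreserveHost On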