greets
zaphod
___________________________________________
Home: Zotac CI321 (8GB RAM / 120GB SSD) with latest Sophos UTM
Work: 2 SG430 Cluster / many other models like SG105/SG115/SG135/SG135w/...
use source internal (network) / service web surfing / destination Internet IPv4
for each workstation you must set separate throttle rules.
Hello everyone,
I know this has been covered a lot on this forum, and I have searched and read pretty much every post that contains useful information about download throttling, but I cannot seem to get it working.
Simple explanation: Download throttling may not be working on its own. Turn it off so it doesn't interfere with anything in the future, unless you need either distributed limits or ingress limiting. In that case, fix your rules based on the following link, and (possibly) create dummy bandwidth pools that do nothing on each interface to force a correct tc configuration on the backend:
community.sophos.com/.../119111
Elaboration: The bandwidth pool you created is probably what forces tc to create the download throttling queues correctly on the backend. (This might be fixed by now.) Only assign bandwidth pools with upper limits, and throttling rules, to internal interfaces. You never want to forcefully drop already-received traffic on WAN interfaces for traffic flow management purposes.
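As a rough illustration of the kind of backend queueing being discussed, here is a generic Linux tc sketch of an HTB egress limit on a LAN-facing interface, analogous to a bandwidth pool with an upper limit. This is an assumption-laden sketch, not the UTM's actual backend config: the interface name, class layout, and rates are all hypothetical.

```shell
# Hedged sketch: generic Linux HTB egress shaping on a LAN-facing
# interface, analogous to a bandwidth pool with an upper limit.
# Interface name and rates are hypothetical, not taken from a UTM.
DEV=eth1            # internal (LAN-facing) interface

# Root HTB qdisc; unclassified traffic falls into class 1:20
tc qdisc add dev "$DEV" root handle 1: htb default 20

# Parent class: the total rate available on this link
tc class add dev "$DEV" parent 1: classid 1:1 htb rate 20mbit ceil 20mbit

# "Pool" class with an upper limit: download traffic leaving toward
# the LAN is queued and shaped here instead of dropped on the WAN side
tc class add dev "$DEV" parent 1:1 classid 1:10 htb rate 5mbit ceil 8mbit

# Default class for everything else
tc class add dev "$DEV" parent 1:1 classid 1:20 htb rate 15mbit ceil 20mbit
```

Shaping on egress like this lets excess packets sit briefly in the HTB queue rather than being policed (dropped) on ingress, which is the behavioral difference the rest of this thread argues about.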
You should always use bandwidth pools with upper limits for egress limiting, and download throttling rules for ingress limiting, relative to the interface on which the rule is assigned. The exception is when you require the per-rule bandwidth to be distributed per IP or per IP pair; those options only exist for download throttling rules, unless I'm missing something. Keep in mind that as long as you leave Limit Downlink and Upload optimizer enabled on the interface the limiting bandwidth pool(s) are assigned to, you are already getting a result more or less identical to what a download throttling rule in shared mode would give you. Don't bother migrating or creating throttling rules just to use shared mode; you're probably already getting shared-mode behavior without realizing it.
Revised 04/23/2017 - Per following discussion with Bob (thanks Bob!), and response from Bill (thanks Bill!)
Hi, Keith, and welcome to the UTM Community!
You said, "You don't want to forcefully drop already received traffic on WAN interfaces, ever, for traffic flow management purposes." This is what the Download Throttling rule in the KB article does. What is your reasoning behind this comment?
Cheers - Bob
Hi Bob :-)
My reasoning behind this comment is the following (off the top of my head):
Already-paid-for WAN traffic (bandwidth + PPS + processing overhead) should not be dropped to shape traffic flows, unless your WAN connection has more bandwidth and lower latency than your LAN connections, which should never really be the case.
When throttling traffic, it should always be done internally, where the price for dropped / throttled / re-transmitted packets is far lower and far less disruptive to the network as a whole.
Download traffic should be throttled on its way out of LAN interfaces, and upload traffic on its way out of WAN interfaces, which is how bandwidth pools (and, I believe, download throttling) behave now, as long as you bind your pools to the correct interfaces. This lets the QoS device (the UTM in this case) buffer the packets internally instead of dropping them on the WAN, where drops frequently cause expensive WAN-based re-transmits.
As the QoS device (UTM) sends the buffered packets out to LAN hosts at the throttled rate, each LAN host returns timestamped ACK packets to the sending host based on when it received the delayed packets. The sending host sees the added delay and slows the flow down (and TCP window scaling kicks in for TCP flows), without any WAN packets being dropped or re-transmitted. If the receive buffer on the WAN interface fills during an initial flow burst, the UTM has no choice but to drop packets; that will also cause the sending host to immediately slow the flow, and some expensive WAN re-transmits will happen. This is unavoidable in edge cases.
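The slowdown described above follows from a basic TCP bound: a flow can move at most one window per round trip, so inflating the observed RTT by queueing packets (and hence delaying ACKs) throttles the sender without dropping anything. A minimal sketch of the arithmetic, with illustrative (not measured) numbers:

```python
# Minimal sketch: TCP throughput is bounded by window_size / RTT.
# Queueing delay added by a LAN-side shaper inflates the RTT the
# sender observes, which slows the flow without WAN packet drops.
# The window size and RTT values below are illustrative only.

def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on a single TCP flow: one window per round trip."""
    return window_bytes * 8 / rtt_seconds

WINDOW = 64 * 1024  # 64 KiB receive window

# Unshaped path: 20 ms round-trip time
fast = max_throughput_bps(WINDOW, 0.020)

# Shaped path: LAN-side queueing inflates the observed RTT to 80 ms
slow = max_throughput_bps(WINDOW, 0.080)

print(f"unshaped: {fast / 1e6:.1f} Mbit/s")  # ~26.2 Mbit/s
print(f"shaped:   {slow / 1e6:.1f} Mbit/s")  # ~6.6 Mbit/s
```

Quadrupling the observed RTT cuts the flow's ceiling to a quarter, which is exactly the "sender slows down on its own" effect the post describes.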
However, the overall effect is that, by limiting download flows only on internal LAN interfaces, we confine the worst-case scenario of dropping and re-transmitting expensive WAN packets to extreme edge cases. We also avoid making expensive WAN operations the best-case scenario, which is what happens when we limit inbound traffic on WAN interfaces.
Multiply this, for example, over the 32,000 traffic flows the Home license permits (say you set up a single rule to throttle all LAN hosts), and you start to get a glimpse of the bigger picture of how much WAN congestion this simple change can save.
Converting a worst-case scenario into an edge case, simply by changing which interface you throttle traffic on, with identical results as far as the download throttling feature is concerned, is a win-win-win in my book.
There is published information on this very concept, plus synthetic and real-world tests, easily found on Google. They go a whole lot further down the rabbit hole than my post does, and everything backs up my overall understanding of the issue.
Cheers!
Thanks, Keith, for an excellent explanation of how it should make a difference...
I don't know how other solutions do QoS, but, overall, unless inbound packets are dropped so that the sender slows its stream because it's not getting ACKs back, flows clogging the pipe will continue at full throttle. Some senders support Explicit Congestion Notification (ECN), but I don't think it's reasonable to count on that unless you control both ends of the conversation, as might be possible in the IBM intranet.
My understanding is that there is little buffering in the UTM of traffic passing through, so whether you throttle traffic with Bandwidth Pools or Download Throttling rules on LAN or WAN interfaces, the result is the same - packets are dropped. That's the only way I know of to get a message consistently to the sender to slow down.
Another example of a complication is traffic where the packets are handled by the Web Proxy in the UTM. The Proxy will accept the entire downloaded file unless there's a Download Throttling rule in place on the External interface; in fact, this is often the primary issue we deal with. With a total download bandwidth of 20Mbps, my reservations of 2Mbps for VoIP would include, in order, something like the following on the External interface:
Now, I could be convinced to change my configurations if a developer that handles QoS confirms that there's a noticeable difference between slowing a few ACKs and dropping "excess" traffic. Using either approach, it's difficult with real-time streams like VoIP to prevent "jitteryness" if there isn't a good margin between available and reserved bandwidth.
Cheers - Bob