
Yet again: Download Throttling

Hello everyone,
I know this has been covered a lot on this forum, and I have searched and read pretty much every post that contains useful information regarding download throttling; however, I cannot seem to get it working.

Firstly, I want to know if my idea of 'Download Throttling' is correct...
Let's say I have 10 workstations behind my UTM, and I wish to throttle each workstation's internet download speed to 1 Mbit/s.  I assume the Download Throttling section of the UTM is what this is designed for?

This is how I have attempted to throttle my clients download speeds:
Firstly, I turned on QoS on my WAN adapter:


I then created a Traffic Selector, which will trigger when it sees traffic from the internet on ports 80 and 443 heading to my internal network.
*edit* I understand now that this would not work, because the client's end of each connection uses a randomly chosen ephemeral port during the initial handshake with the server, rather than 80/443.

The rule has since been reversed.
*end edit*


I then created the throttle rule named "Limit Web Traffic" at 512 kbit/s (I set it really low so I could see whether it was working).


Things to note:
I have tried this with Web Filtering disabled and with it enabled in transparent mode. I have also tried creating throttling rules using the flow monitor; while that creates rules just like mine, they still do not function.
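
To measure the effective rate from a workstation, independent of the flow monitor, something like the following can be used. This is only a rough check; the URL is just a placeholder for any large file reachable through the UTM:

# measure average download speed (bytes/sec) through the UTM from a workstation
curl -o /dev/null -w 'average speed: %{speed_download} bytes/sec\n' \
    http://example.com/largefile.bin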

The results with the throttle rule active.


Is something in my configuration incorrect, or am I using this feature in a way it was not intended?

Thank you.


  • Simple explanation: download throttling may not be working on its own... turn it off so it doesn't mess anything up in the future. If you need either distributed limits or ingress limiting, fix your rules based on the following link, and (possibly) create dummy bandwidth pools that do nothing on each interface to force the correct tc config on the backend:

    community.sophos.com/.../119111

     

    Elaboration: the bandwidth pool you created is probably what forces tc to create the download throttling queues correctly on the backend (this might be fixed by now). Only assign bandwidth pools with upper limits, and throttling rules, to internal interfaces. You don't want to forcefully drop already received traffic on WAN interfaces, ever, for traffic flow management purposes.

    You should always be using bandwidth pools with upper limits set to limit traffic: bandwidth pools with upper limits for egress limiting, and download throttling rules for ingress limiting, relative to the interface on which the rule is assigned. The exception is if you require the per-rule bandwidth to be distributed per IP or per IP pair; those options only exist for download throttling rules, unless I'm missing something. Keep in mind that, as long as you leave Limit Downlink and Upload Optimizer enabled on the interface the limiting bandwidth pool(s) are assigned to, you are already getting a result more or less identical to what a download throttling rule in shared mode would give you. Don't bother migrating or creating throttling rules just to use shared mode; you're probably already getting shared-mode behavior and just don't realize it.
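
    To make the egress vs. ingress distinction concrete, here is a rough sketch of the two mechanisms in raw tc terms. This is only an illustration: eth0 stands in for an internal (LAN) interface, eth1 for the WAN interface, and the rates are arbitrary; it is not necessarily what the UTM actually generates on the backend.

    # egress shaping (roughly what a bandwidth pool with an upper limit amounts to):
    # queue and delay traffic leaving the LAN interface toward the clients
    tc qdisc add dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit

    # ingress limiting (roughly what a download throttling rule amounts to):
    # police traffic arriving on the WAN interface, dropping anything over the rate
    tc qdisc add dev eth1 handle ffff: ingress
    tc filter add dev eth1 parent ffff: protocol ip u32 match u32 0 0 \
        police rate 1mbit burst 100k drop flowid :1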

    Revised 04/23/2017, per the following discussion with Bob (thanks Bob!) and the response from Bill (thanks Bill!).

  • Hi, Keith, and welcome to the UTM Community!

    You said, "You don't want to forcefully drop already received traffic on WAN interfaces, ever, for traffic flow management purposes."  This is what the Download Throttling rule in the KB article does.  What is your reasoning behind this comment?

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Hi Bob :-)

    My reasoning behind this comment is the following (off the top of my head):

    WAN traffic you have already paid for (bandwidth + PPS + processing overhead) should not be dropped in order to shape traffic flows, unless your WAN connection has more available bandwidth and lower latency than your LAN connections, which should never really be the case.

    When throttling traffic, it should always be done internally, where the price for dropped / throttled / re-transmitted packets is far lower and far less disruptive to the network as a whole.

    Download traffic should be throttled on its way out of LAN interfaces, and upload traffic should be throttled on its way out of WAN interfaces, which is how bandwidth pools (and download throttling, I believe) behave now, as long as you bind your pools to the correct interfaces. This enables the QoS device (the UTM in this case) to cache the packets internally, as opposed to dropping them on the WAN, which frequently causes expensive WAN-based re-transmits.

    As the QoS device (UTM) sends the cached packets out to the LAN hosts at the throttled rate, the LAN host returns timestamped ACK packets to the sending host based on when it received the cached, throttled packets. This in turn causes the sending host to slow down the flow, as it sees a large amount of delay (and TCP window scaling kicks in for TCP flows), without any WAN packets actually being dropped or re-transmitted. If the receive buffer on the WAN interface becomes full during an initial flow burst, the UTM has no choice but to drop packets, which will also cause the sending host to immediately slow the flow, and some expensive WAN re-transmits will happen; this is unavoidable in edge cases.

    However, the overall effect is that, by only limiting download flows via internal LAN interfaces, we restrict the worst-case scenario of dropping and re-transmitting expensive WAN packets to extreme edge cases only. We also avoid making expensive WAN operations the best-case scenario, which is what happens when we limit inbound traffic on WAN interfaces.

    Multiply this, for example, over the 32,000 traffic flows the Home license permits you (say you set up a single rule to throttle all LAN hosts), and you start to get a glimpse of the bigger picture as far as how much WAN congestion this simple change can save.

    Converting a worst case scenario into an edge case scenario, simply by changing which interface you throttle traffic on, with identical results as far as the download throttling feature is concerned, is a win win win in my book.

    There is published information, plus synthetic and real-world tests, easily found on Google, on this very concept. They go a whole lot further down the rabbit hole than my post does, and everything backs up my overall understanding of the issue.

    Cheers!

  • Thanks, Keith, for an excellent explanation of how it should make a difference...

    I don't know how other solutions do QoS, but, overall, unless inbound packets are dropped, the sender doesn't slow its stream (it keeps getting ACKs back), and flows clogging the pipe will continue at full throttle.  Some senders support Explicit Congestion Notification (ECN), but I don't think it's reasonable to count on that unless you control both ends of the conversation, as might be possible in the IBM intranet.

    My understanding is that there is little buffering in the UTM of traffic passing through, so whether you throttle traffic with Bandwidth Pools or Download Throttling rules on LAN or WAN interfaces, the result is the same - packets are dropped.  That's the only way I know of to get a message consistently to the sender to slow down.

    Another example of a complication is traffic whose packets are handled by the Web Proxy in the UTM.  The Proxy will accept the entire downloaded file unless there's a Download Throttling rule in place on the External interface.  In fact, this is often the primary issue we deal with.  With a total download bandwidth of 20 Mbps and 2 Mbps reserved for VoIP, my configuration would include, in order, something like the following on the External interface:

    1. Throttle 'Internet -> VoIP -> External (Address)' to 1Gbps [in other words, an Exception to the next rule]
    2. Throttle 'Any -> Any -> Any' to 18Mbps
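
    Very roughly, and only as a sketch, those two rules would correspond to a pair of ingress policers in raw tc like the ones below. The interface name (eth1) and the VoIP provider range are placeholders made up for illustration; this is not necessarily what the UTM generates on the backend.

    # attach the ingress hook to the external interface
    tc qdisc add dev eth1 handle ffff: ingress

    # rule 1: exception for the VoIP provider's range - effectively unlimited
    tc filter add dev eth1 parent ffff: protocol ip prio 1 u32 \
        match ip src 198.51.100.0/24 \
        police rate 1000mbit burst 1m drop flowid :1

    # rule 2: everything else policed to 18 Mbps
    tc filter add dev eth1 parent ffff: protocol ip prio 2 u32 \
        match u32 0 0 \
        police rate 18mbit burst 256k drop flowid :1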

    Now, I could be convinced to change my configurations if a developer who handles QoS confirms that there's a noticeable difference between slowing a few ACKs and dropping "excess" traffic.  Using either approach, it's difficult with real-time streams like VoIP to prevent "jitteryness" if there isn't a good margin between available and reserved bandwidth.

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • I agree: unless the sending host sees dropped packets or delayed ACKs, it will not slow its stream. My method leans toward forcing delayed ACKs, then dropping packets if needed. Your method would change this, dropping packets first, which in turn causes delayed ACKs.

    Also agreed, ECN is simply not enterprise-ready, and never will be unless they find a way to fix the MITM issue. It is far too vulnerable to manipulation, even if it were more widely implemented.

    Senders will slow down or speed up flows based on the rate at which ACKs come in; it is part of TCP's sliding-window flow control (which window scaling extends), and everything with a TCP stack supports it. If it didn't, your downloads would start at one constant rate and never change, faster or slower.

    As for throttling the web proxy, now you've got me there. Unless SUTM were to do something like add a virtual interface to enable applying QoS rules directly to the web proxy, assigning specific web-proxy-related throttling rules to the WAN interface is probably the only way to make that work with the current architecture.

    I'm also with you here: lots of inbound real-time flows, like VoIP and IPTV, depend on the sudden increases in bandwidth they need being available the moment they need it, not ~250 ms later... at 250 ms it's already too late, and you have service degradation and/or flat-out interruption.

    I'm just not seeing how shaping externally would help any of this though, apart from the internal UTM services. I'm seeing lots of situations where it could hurt, and have seen a fair few situations where it absolutely does hurt...

    If you are relying on QoS to make intelligent decisions about which packets to delay and/or drop, the traffic has already made its way past your internet connection and into your WAN interface. Choosing where to drop said traffic at this point simply moves the re-transmission point. I'd rather have my UTM re-transmitting dropped packets to LAN hosts than have the sending host outside my network do it. This is the buffer I'm talking about: it is not something specific to the architecture of SUTM, but something inherent to the architecture of network drivers and stacks. Everything has a re-transmit buffer; however large or small, it always exists, unless we are talking about really, really cheap hardware, and we aren't.

    This is an interesting conversation indeed ;-)

    I may not be able to get back to you again until tomorrow; don't take it personally. I have fun with this stuff, even if it turns out I'm wrong :-)

  • Ditto, Keith.  Hopefully someone from Sophos will see this and either chime in or point us to a document.

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Good technical discussion. I just wanted to chime in about the different terminology and tabs used in UTM 9. Long-time users like Bob probably remember that we used to have only bandwidth pools, which worked fine for most traffic, and any kind of QoS had to be applied to traffic leaving the firewall; whether that was a LAN or WAN interface didn't matter. So to throttle incoming WAN traffic, you had to apply a rule to the LAN interface and throttle it there.

    Download throttling was added during the 9.1 beta because people couldn't grasp the simple fact that you had to throttle traffic leaving the firewall, and you couldn't really drop traffic on the WAN interface without unexpected page hangs, etc. In any case, they added the feature due to feature requests for a download throttle that you could implement without understanding how QoS functions or having to RTFM.

    Here is my discussion with one of the devs when it was first introduced, where I was confused about download throttling. They completely eliminated some of the fine-tuning after the beta.

  • Hi Bill, thanks for the reply. Definitely helpful as to the "why?" part of it all.

    That leaves me with a few more questions, of course, and it relates to a good point Bob brought up...

    When dealing with the issue of managing traffic for internal components like the web filter, which on their own pull in traffic, process it, then forward it on to the appropriate interface, what would be the "SUTM correct" way of implementing rules that manage these flows? Can we use either bandwidth pools or download throttling, as long as the rules are correct in terms of the final source / destination? Or do we have to use one or the other, or even specifically formatted rules, to effectively shape traffic flows "middle man'd" by internal SUTM services?

    One more, just for clarification: are the download throttling rules also applied at the "tc" level of the kernel, as the bandwidth pools are, with some extra logic or tc's ingress mode applied to make sure they work correctly on the back end? Or are the download throttling rules applied by a different layer / mechanism that allows ingress shaping outside of tc?

    Thanks again Bill!

  • At present, there is no way other than Download Throttling to handle "internal flows" in the UTM.  I don't know the answer to your "tc" question, but the difference is similar to the difference between a DNAT and an SNAT.  Download Throttling rules are applied to arriving traffic as soon as the packet is accepted by conntrack or a firewall rule, or maybe before, and definitely before anything else happens.  Bandwidth Pools are the last thing considered before the packet leaves the interface (just before SNAT/masq, is my guess).

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Are you sure about that order? It seems logical, don't get me wrong, but both types of shaping rules have visibility into the NAT layer.

    If they didn't, you wouldn't be able to shape traffic on LAN interfaces by external address/port, nor would you be able to shape traffic on WAN interfaces by internal address/port. I have worked with firewall products like that in the past; they make QoS setup extremely mind-numbing.

    Given that we know the shaper has NAT visibility, I guess I'd like Sophos to chime in here again; I don't see many easy ways to determine the internal order of operations with respect to the QoS layer and the other parts of the routing layer.

    Sophos has done a very good job of reflecting, in the WebGUI, how things are layered in the stack on the backend. QoS is (as it should be) grouped into the Interfaces & Routing section. We know it's part of that layer, but (maybe you already know this) how do we know the ordering within this layer specifically?

    I know, I know, most readers just went "who cares, it works". I'd like to know, because you never know when knowing might matter down the road.

    If I had to guess, I would say the routing / NAT / QoS layer doesn't really have a set order; it's more of a "they all cross-talk at multiple points in the chain" sort of deal. That's just a guess though...

  • Ironic as it is, this new ingress / egress information has me moving all of my upload throttling rules to the download throttling tab, away from the bandwidth pools tab, where they were doing absolutely nothing.

    To add to the irony, my download throttling rules have to stay under the bandwidth pools tab, or they would cease to work. Lol indeed...

  • To be honest, I have not tested what the tc output is when using bandwidth pools vs. download throttling, and I would be interested in the results myself. You can turn off all your rules temporarily, add a bandwidth pool on the LAN interface, and then compare the tc rules it creates against those created by a download throttling rule for the same port, etc. If I had to guess, I think there is no difference, but I could be wrong.
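
    For anyone who wants to run that comparison, something like the following (with eth0 / eth1 as placeholder names for the LAN and WAN interfaces) should show what the backend actually created. This is just a generic tc inspection sketch, not anything UTM-specific:

    tc -s qdisc show dev eth0                # queueing disciplines (root/egress and ingress hooks)
    tc -s class show dev eth0                # HTB classes, e.g. those created by bandwidth pools
    tc -s filter show dev eth0               # filters mapping traffic into those classes
    tc -s filter show dev eth1 parent ffff:  # ingress policers, e.g. those created by download throttling rules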

  • As far as I can tell, you are correct Bill.

    After a little more research following that question, I found that "tc" absolutely does have an "ingress" mode you can apply, on a per-rule basis, using raw tc commands.

    So it would seem all QoS rules are implemented in tc (which is a good thing), the difference being that rules created under the download throttling tab have the ingress option applied to them, whereas rules created under the bandwidth pools tab do not.

    tc rules are egress-only by default, and tc does not support (as far as I have read) bidirectional rules (both egress and ingress at the same time) like ALTQ does on FreeBSD. And no, that is not a feature request. I very much prefer single-direction-only QoS rules; it helps keep things sorted and keeps issues from malformed rules to a minimum ;-)

  • So, Keith and Bill, do your last posts indicate that the sequence I surmised may not be correct but that it doesn't matter as thinking of QoS in that way leads to a "correct" configuration?

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • I'm honestly still not sure on this. Hopefully Sophos can weigh in on it specifically.

    My guess is that your resulting rules would be correct, as long as you think of them in an egress vs. ingress manner.

    Which, if I understood you right Bob, is the overall way you are looking at them, so yes, your rules would be correct.

    Again, I had to move 2 of my QoS rules to the download throttling tab to correct them. I was still thinking in an ALTQ manner, after migrating my config to SUTM from a FreeBSD based system.

  • Hi Bob, Keith is right on all counts as far as the mechanism and the engineering aspect. The only thing I would change in your earlier post is that, as Keith pointed out, we are mostly playing with egress (outgoing packets). There is a lot more flexibility there: we can assign different priorities, delay packets, apply bandwidth limits, etc. on the queues when the traffic is leaving the firewall. With ingress policing, you really don't have much choice; the only option is dropping traffic and then letting TCP/IP's built-in controls slow that traffic down.

    So yes, there are actual differences, and throttling doesn't just drop packets in all cases. Also, the throttling is done at the kernel level, so ingress/egress shouldn't have any influence on proxies or other daemons, as those technically come after the kernel has already sorted through all the traffic. A proxy cache may give you skewed results by caching/scanning some of the traffic and then releasing it to the client in one chunk, making it seem like you have more bandwidth available than you really do, but the actual throttling has already been done.

    Of course, I have not written any iptables rules since I found Shorewall, and I definitely don't even remember the syntax since jumping on Astaro [:$], so my knowledge is outdated and limited.

     

    EDIT: I know the confusion you are describing from some of the earlier versions, where the proxy seemed like it was doing its own thing while QoS was not having any effect. I think that has been fixed since the later v8 / early v9 releases. The reason for that wasn't the proxy but the way the rules were written for QoS. So, for example, a rule like

    "QoS any traffic coming from the internet on port 80 to ANY LAN" got bypassed when the proxies DNATted that traffic. That was not a shortcoming of QoS but of the implementation, and of the complexity of applying that rule correctly for proxies with different configurations and profiles.

  • I will chip in here:

    Download throttling does work even with "web filtering on"

    The trick is to select "Application selector" and NOT "traffic selector" under Traffic selectors.

    I tried this successfully this morning using the following:

     

    DOWNLOADS:
    Traffic selector = application (http)
    Download throttling = Bound to WAN > select above traffic selector and enter limit in kb/s

    UPLOADS:
    Same as above but you need to use Bandwidth Pools and ensure "Upper limit" is selected with appropriate kb/s

    Doing both of the above limited my download and upload using http with web filtering enabled.

  • Just did a quick sanity check / test on SUTM current [9.413-4]:

    Limiting upload bandwidth on an internal interface with a download throttling rule (ingress) limits the upload rate from the LAN host to the web proxy. The web proxy then uploads the memory-cached (I have disk caching disabled) data at full interface speed. This is using service-based traffic selectors.

    Using the exact same rule(s), uploading a single stream of data larger than the maximum antivirus scan size works as expected. The traffic is forwarded through SUTM at the desired limited rate.

    Without testing further, I'm fairly certain we can assume this would also be the case when limiting download traffic with bandwidth pools (egress) on an internal interface, using a service-based traffic selector, for all data streams coming in under the maximum antivirus scan size: they would be downloaded by the web proxy at full speed, into memory, then sent from the web proxy to the LAN host at the limited rate.

    QoS is indeed still "doing its own thing". I understand why; I just wanted to make sure this current behavior was noted. Hopefully it will add to Bob's sanity, and the sanity of others reading this ;-)

    It seems the straightforward solution, as Louis pointed out, is to use layer 7 application selectors in your traffic selector definitions, rather than layer 4 service (mostly port-based) selectors. This way, your limiting rules won't care what source / destination port the traffic is using; they'll only care what the layer 7 type of the traffic is, and http(s) will match regardless of how the web proxy modifies said traffic.

    As I understand it, however, layer 7 based rules (anywhere in SUTM) are far more computationally expensive than layer 4 based rules. And you have to think of them in terms of all the traffic the rule(s) would be evaluated against, not just the traffic they would match. So this may not be a great solution on hardware with lower-capacity CPUs.