cOS Core DiffServ Support
cOS Core supports the DiffServ architecture in the following ways:

cOS Core forwards the 6 bits which make up the DiffServ Differentiated Services Code Point (DSCP).
cOS Core copies the 6 DSCP bits into the priority QoS bits of Ethernet VLAN frames on outbound interfaces.
As described later in this chapter, DSCP bits can be used by the cOS Core traffic shaping subsystem as a basis for prioritizing traffic passing through the Clavister firewall.
With IPsec tunnels, cOS Core automatically copies the entire Differentiated Service Field (DSField) of inner packets to the outer tunnel IP header of ESP packets. The field can alternatively be set to a fixed value in the outer tunnel packets. This is described further in Section 10.3.19, DiffServ with IPsec.
It is important to understand that cOS Core traffic shaping does not add new DiffServ information as packets traverse a Clavister firewall. The cOS Core traffic shaping priorities described later in this chapter are for traffic shaping within cOS Core only and are not translated into DiffServ information that is then added to packets.
DSCP with cOS Core is also discussed in a Clavister Knowledge Base article at the following link:
https://kb.clavister.com/324735663
Explicit Congestion Notification Handling
In addition to DiffServ, the Explicit Congestion Notification (ECN) feature in the TCP protocol is supported by some routers and allows end-to-end notification of congestion without dropping packets. cOS Core does not support ECN for TCP flow control. However, by default, cOS Core does not alter the ECN bits as they pass through the firewall. If required, ECN bits can instead be stripped by changing the global cOS Core setting TCPECN which can be found in TCP Settings.

ECN with cOS Core is also discussed in a Clavister Knowledge Base article at the following link:
https://kb.clavister.com/317180249
The Traffic Shaping Solution
Architectures like DiffServ, however, fall short if applications themselves supply the network with QoS information. In most networks it is rarely appropriate to let the applications, the users of the network, decide the priority of their own traffic. If the users cannot be relied upon then the network equipment must make the decisions concerning priorities and bandwidth allocation.

cOS Core provides QoS control by allowing the administrator to apply limits and guarantees to the network traffic passing through the Clavister firewall. This approach is often referred to as traffic shaping and is well suited to managing bandwidth for local area networks as well as to managing the bottlenecks that might be found in larger wide area networks. It can be applied to any traffic, including traffic passing through VPN tunnels.
Traffic Shaping Objectives
Traffic shaping operates by measuring and queuing IP packets with respect to a number of configurable parameters. The objectives are:

Applying bandwidth limits and queuing packets that exceed configured limits, then sending them later when bandwidth demands are lower.
Dropping packets if packet buffers are full. The packets to be dropped should be chosen from those that are responsible for the congestion.
Prioritizing traffic according to administrator decisions. If traffic with a high priority increases while a communication line is full, traffic with a low priority can be temporarily limited to make room for the higher priority traffic.
Providing bandwidth guarantees. This is typically accomplished by treating a certain amount of traffic (the guaranteed amount) as high priority. The traffic that is in excess of the guarantee then has the same priority as other traffic, competing with all the other non-prioritized traffic.
Traffic shaping does not typically work by queuing up immense amounts of data and then sorting out the prioritized traffic to send before sending non-prioritized traffic. Instead, the amount of prioritized traffic is measured and the non-prioritized traffic is limited dynamically so that it will not interfere with the throughput of prioritized traffic.
Note: Traffic shaping will not work with the SIP ALG

Any traffic connection that triggers an IP policy that uses the SIP ALG cannot also be subject to traffic shaping.
cOS Core offers extensive traffic shaping capabilities for the packets passing through the Clavister firewall. Different rate limits and traffic guarantees can be created as policies based on a filter that specifies the traffic's source, destination and protocol. This filter is similar to the filter used for IP rule set entries.
The two key components for traffic shaping in cOS Core are Pipes and Pipe Rules, which are described next.
Note that using pipes for traffic shaping is also described in an article in the Clavister Knowledge Base at the following link:
https://kb.clavister.com/324736152
Pipes
A Pipe is the fundamental object for traffic shaping and is a conceptual channel through which data traffic can flow. It has various characteristics that define how traffic passing through it is handled. As many pipes as are required can be defined by the administrator. None are defined by default.

Pipes are simplistic in that they do not care about the types of traffic that pass through them nor the direction of that traffic. They simply measure the aggregate data that passes through them and then apply the administrator configured limits for the pipe as a whole or for Precedences and/or Groups (these concepts are explained later in Section 11.1.6, Precedences).
cOS Core is capable of handling hundreds of pipes simultaneously, but in reality most scenarios require only a handful of pipes. It is possible that dozens of pipes might be needed in scenarios where individual pipes are used for individual protocols. Large numbers of pipes might also be needed in an ISP scenario where individual pipes are allocated to each client.
Pipe Rules

One or more Pipe Rules make up the cOS Core Pipe Rule set which determines what traffic will flow through which pipes. Each pipe rule is defined like other cOS Core security policies: by specifying both the source/destination and interface/network for which the rule is to trigger, as well as the service.

Caution: Avoid using "any" as the source interface

Filtering criteria in a pipe rule should be as specific as possible, triggering on specific interfaces and networks. In particular, avoid using any as the source interface. In certain cases, this could result in traffic shaping being applied to the same traffic twice.
Once a new connection is permitted by the IP rule set, the pipe rule set is then checked for any matching pipe rules. Pipe rules are checked in the same way as other rule sets, by scanning entries from top to bottom (first to last). The first matching rule, if any, decides if the connection is subject to traffic shaping. Keep in mind that any connection that does not trigger a pipe rule will not be subject to traffic shaping and could potentially use as much bandwidth as it wants.
The rule set for pipe rules is initially empty with no rules predefined. At least one rule must be created for traffic shaping to begin to function.
Pipe Rule Chains
When a pipe rule is defined, the pipes to be used with that rule are also specified and they are placed into one of two lists in the pipe rule. These lists are:

The Forward Chain
This is the pipe or pipes that will be used for outgoing (leaving) traffic from the firewall. One, none or a series of pipes may be specified.
The Return Chain
This is the pipe or pipes that will be used for incoming (arriving) traffic. One, none or a series of pipes may be specified.
The pipes that are to be used are specified in a pipe list. If only one pipe is specified then that is the pipe whose characteristics will be applied to the traffic. If a series of pipes are specified then these will form a Chain of pipes through which traffic will pass. A chain can be made up of a maximum of 8 pipes.
Explicitly Excluding Traffic from Shaping
If no pipe is specified in a pipe rule list then traffic that triggers the rule will not flow through any pipe. It also means that the triggering traffic will not be subject to any other matching pipe rules that might be found later in the rule set.

This provides a means to explicitly exclude particular traffic from traffic shaping. Such rules are not absolutely necessary but if placed at the beginning of the pipe rule set, they can guard against accidental traffic shaping by later rules.
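As an illustration only, such an exclusion rule could look like the following CLI sketch (shown script-style with # comment lines; the dmz interface and dmz_net address object are hypothetical and the exact syntax should be checked against the cOS Core CLI Reference Guide):

# Hypothetical sketch: exclude LAN-to-DMZ traffic from all shaping by
# defining a pipe rule with no pipes in either chain, placed before
# the other pipe rules
add PipeRule Name=no_shaping
            SourceInterface=lan
            SourceNetwork=lan_net
            DestinationInterface=dmz
            DestinationNetwork=dmz_net
            Service=all_services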
Pipes Will Not Work With Stateless Policy Rules
It is important to understand that traffic shaping will not work with traffic that flows as a result of triggering a Stateless Policy entry in the IP rule set. (With an IP Rule, this is known as a FwdFast rule.)

The reason for this is that traffic shaping is implemented by using the cOS Core state engine, which is the subsystem that deals with the tracking of connections. A Stateless Policy does not set up a connection in the state engine. Instead, packets are considered not to be part of a connection and are forwarded individually to their destination, bypassing the state engine.
Using Pipes with Application Control
When using the Application Control feature, it is possible to associate a pipe object directly with an Application Rule object in order to define a bandwidth for a particular application. For example, the bandwidth allocated to the BitTorrent peer-to-peer application could be limited in this way.

This feature is discussed further in Section 3.7, Application Control.
The simplest use of pipes is for bandwidth limiting. This is also a scenario that does not require much planning. The example that follows applies a bandwidth limit to inbound traffic only. This is the direction most likely to cause problems for Internet connections.
Example 11.1. Applying a Simple Bandwidth Limit
Begin by creating a simple pipe that limits all traffic passing through it to 2 megabits per second, regardless of what traffic it is.
Command-Line Interface
Device:/>
add Pipe std-in LimitKbpsTotal=2000
InControl
Follow similar steps to those used for the Web Interface below.
Web Interface
Traffic needs to be passed through the pipe and this is done by using the pipe in a Pipe Rule.
We will use the above pipe to limit inbound traffic. This limit will apply to the actual data packets, and not the connections. In traffic shaping we're interested in the direction that data is being shuffled, not which computer initiated the connection.
Create a simple rule that allows everything from the inside, going out. We add the pipe that we created to the return chain. This means that the packets travelling in the return direction of this connection (outside-in) should pass through the std-in pipe.
Command-Line Interface
Device:/>
add PipeRule SourceInterface=lan
SourceNetwork=lan_net
DestinationInterface=wan
DestinationNetwork=all-nets
Service=all_services
ReturnChain=std-in
Name=Outbound
InControl
Follow similar steps to those used for the Web Interface below.
Web Interface
This setup limits all traffic from the outside (the Internet) to 2 megabits per second. No priorities are applied, and neither is any dynamic balancing.
Using a Single Pipe for Both Directions
A single pipe does not care in which direction the traffic through it is flowing when it calculates total throughput. Using the same pipe for both outbound and inbound traffic is allowed by cOS Core but this will not partition the pipe limit exactly in two between the two directions.

In the previous example only bandwidth in the inbound direction is limited. In most situations, this is the direction that becomes full first. But what if the outbound traffic must be limited in the same way?
Just inserting std-in in the forward chain will not work since we probably want the 2 Mbps limit for outbound traffic to be separate from the 2 Mbps limit for inbound traffic. If 2 Mbps of outbound traffic attempts to flow through the pipe in addition to 2 Mbps of inbound traffic, the total attempting to flow is 4 Mbps. Since the pipe limit is 2 Mbps, the actual flow will be close to 1 Mbps in each direction.
Raising the total pipe limit to 4 Mbps will not solve the problem since the single pipe will not know that 2 Mbps of inbound and 2 Mbps of outbound are the intended limits. The result might be 3 Mbps outbound and 1 Mbps inbound since this also adds up to 4 Mbps.
Using Two Separate Pipes Instead
The recommended way to control bandwidth in both directions is to use two separate pipes, one for inbound and one for outbound traffic. In the scenario under discussion each pipe would have a 2 Mbps limit to achieve the desired result. The following example goes through the setup for this.

Example 11.2. Limiting Bandwidth in Both Directions
Create a second pipe for outbound traffic:
Command-Line Interface
Device:/>
add Pipe std-out LimitKbpsTotal=2000
InControl
Follow similar steps to those used for the Web Interface below.
Web Interface
After creating a pipe for outbound bandwidth control, add it to the forward pipe chain of the rule created in the previous example:
Command-Line Interface
Device:/>
set PipeRule Outbound ForwardChain=std-out
InControl
Follow similar steps to those used for the Web Interface below.
Web Interface
This results in all outbound connections being limited to 2 Mbps in each direction.
In the previous examples a static traffic limit for all outbound connections was applied. What if the aim is to limit web surfing more than other traffic? Assume that the total bandwidth limit is 250 Kbps and 125 Kbps of that is to be allocated to web surfing inbound traffic.
The Incorrect Solution
Two "surfing" pipes for inbound and outbound traffic could be set up. However, it is not usually required to limit outbound traffic since most web surfing usually consists of short outbound server requests followed by long inbound responses.A surf-in pipe is therefore first created for inbound traffic with a 125 Kbps limit. Next, a new Pipe Rule is set up for surfing that uses the surf-in pipe and it is placed before the rule that directs everything else through the std-in pipe. That way web surfing traffic goes through the surf-in pipe and everything else is handled by the rule and pipe created earlier.
Unfortunately this will not achieve the desired effect, which is allocating a maximum of 125 Kbps to inbound surfing traffic as part of the 250 Kbps total. Inbound traffic will pass through one of two pipes: one that allows 250 Kbps, and one that allows 125 Kbps, giving a possible total of 375 Kbps of inbound traffic but this exceeds the real limit of 250 Kbps.
The Correct Solution
To provide the solution, create a chain of the surf-in pipe followed by the std-in pipe in the pipe rule for surfing traffic. Inbound surfing traffic will now first pass through surf-in and be limited to a maximum of 125 Kbps. Then, it will pass through the std-in pipe along with other inbound traffic, which will apply the 250 Kbps total limit.

If surfing uses the full limit of 125 Kbps, those 125 Kbps will occupy half of the std-in pipe, leaving 125 Kbps for the rest of the traffic. If no surfing is taking place then all of the 250 Kbps allowed through std-in will be available for other traffic.
This does not provide a bandwidth guarantee for web browsing but instead limits it to 125 Kbps and provides a 125 Kbps guarantee for everything else. For web browsing, the normal rules of first-come, first-forwarded will apply when competing for the 125 Kbps of bandwidth. This may mean 125 Kbps, but it may also mean a much slower speed if the connection is flooded.
Setting up pipes in this way only puts limits on the maximum values for certain traffic types. It does not give priorities to different types of competing traffic.
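A CLI sketch of this chained setup is shown below. It assumes a 250 Kbps std-in pipe, the http-all service object used later in this chapter, and a comma-separated syntax for pipe chains; verify the exact syntax against the cOS Core CLI Reference Guide.

# Sketch: pipe for inbound web surfing, limited to 125 Kbps
add Pipe surf-in LimitKbpsTotal=125
# Assume std-in now carries the 250 Kbps total of this scenario
set Pipe std-in LimitKbpsTotal=250
# Surfing rule placed before the general rule: inbound surfing traffic
# passes first through surf-in and then through std-in
add PipeRule Name=Surfing
            SourceInterface=lan
            SourceNetwork=lan_net
            DestinationInterface=wan
            DestinationNetwork=all-nets
            Service=http-all
            ReturnChain=surf-in,std-in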
The Default Precedence is Zero
All packets that pass through cOS Core traffic shaping pipes have a Precedence. In the examples so far, precedences have not been explicitly set and so all packets have had the same default precedence, which is 0.

There are 8 Possible Precedence Levels
Eight precedences exist which are numbered from 0 to 7. Precedence 0 is the least important (lowest priority) precedence and 7 is the most important (highest priority) precedence. A precedence can be viewed as a separate traffic queue; traffic in precedence 2 will be forwarded before traffic in precedence 0, and traffic in precedence 4 will be forwarded before traffic in precedence 2.

Precedence Priority is Relative
The priority of a precedence comes from the fact that it is either higher or lower than another precedence and not from the number itself. For example, if two precedences are used in a traffic shaping scenario, choosing precedences 4 and 6 instead of 0 and 3 will make no difference to the end result.

Allocating Precedence to Traffic
The way precedence is assigned to traffic is specified in the triggering pipe rule and can be done in one of three ways:

Use the precedence of the first pipe
Each pipe has a Default Precedence and packets take the default precedence of the first pipe they pass through.
Use a fixed precedence
The triggering pipe rule explicitly allocates a fixed precedence.
Use the DSCP bits
Take the precedence from the DSCP bits in the packet. DSCP is a subset of the DiffServ architecture where the Type of Service (ToS) bits are included in the IP packet header.
Specifying Precedences Within Pipes
When a pipe is configured, a Default Precedence, a Minimum Precedence and a Maximum Precedence can be specified.

As described above, the Default Precedence is the precedence taken by a packet if it is not explicitly assigned by a pipe rule.
The minimum and maximum precedences define the precedence range that the pipe will handle. If a packet arrives with an already allocated precedence below the minimum then its precedence is changed to the minimum. Similarly, if a packet arrives with an already allocated precedence above the maximum, its precedence is changed to the maximum.
For each pipe, separate bandwidth limits may be optionally specified for each precedence level. These limits can be specified in kilobits per second and/or packets per second (if both are specified then the first limit reached will be the limit used).
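For illustration, the sketch below shows how per-precedence limits might be specified for a pipe in the CLI. The pipe name is hypothetical and the property names for the precedence range and per-precedence limits (written here as MinPrec, DefaultPrec, MaxPrec and LimitKbps<n>) are assumptions; confirm them in the cOS Core CLI Reference Guide.

# Hypothetical sketch: a 2 Mbps pipe with a 96 Kbps limit/guarantee at
# precedence 2 and 250 Kbps at precedence 4 (property names assumed)
add Pipe prio-pipe
            LimitKbpsTotal=2000
            MinPrec=0
            DefaultPrec=0
            MaxPrec=7
            LimitKbps2=96
            LimitKbps4=250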
Tip: Specifying bandwidth

Remember that when specifying network traffic bandwidths, the prefix Kilo means 1000 and NOT 1024. For example, 3 Kbps means 3000 bits per second. Similarly, the prefix Mega means one million in a traffic bandwidth context.
Precedence Limits are also Guarantees
A precedence limit is both a limit and a guarantee. The bandwidth specified for a precedence is also a guarantee that this bandwidth will be available, at the expense of lower precedences. If the specified bandwidth is exceeded, the excess traffic falls to the lowest precedence. The lowest precedence has a special meaning, which is explained next.

The Lowest (Best Effort) Precedence
The precedence which is the minimum (lowest priority) pipe precedence has a special meaning: it acts as the Best Effort Precedence. All packets processed at this precedence will always be processed on a "first come, first forwarded" basis.

Packets with a higher precedence than best effort that exceed the limit of their precedence will automatically be transferred down into the lowest (best effort) precedence and will be treated the same as other packets at the lowest precedence.
In the illustration below the minimum precedence is 2 and the maximum precedence is 6. Precedence 2 is taken as the best effort precedence.
Lowest Precedence Limits
It is usually not necessary to specify a limit for the lowest (best effort) precedence since this precedence simply uses any spare bandwidth not used by higher precedences. However, a limit could be specified if there is a need to restrict the bandwidth used by the lowest precedence. This might be the case if a particular traffic type always gets the lowest precedence but needs to have restricted bandwidth usage.

Precedences Only Apply When a Pipe is Full
Precedences have no effect until the total limit specified for a pipe is reached. This is true because until the pipe limit is reached (it becomes "full") there is no competition between precedences.

When the pipe is full, traffic is prioritized by cOS Core according to precedence, with higher precedence packets that do not exceed the precedence limit being sent before lower precedence packets. Lower precedence packets are buffered until they can be sent. If buffer space becomes exhausted then they are dropped.
If a total limit for a pipe is not specified, it is the same as saying that the pipe has unlimited bandwidth and consequently it can never become full so precedences have no meaning.
Applying Precedences
Continuing to use the previous traffic shaping example, let us add the requirement that SSH and Telnet traffic is to have a higher priority than all other traffic. To do this we add a Pipe Rule specifically for SSH and Telnet and set the priority in the rule to be a higher priority, say 2. We specify the same pipes in this new rule as are used for other traffic.

The effect of doing this is that the SSH and Telnet rule sets the higher priority on packets related to these services and these packets are sent through the same pipe as other traffic. The pipe then makes sure that these higher priority packets are sent first when the total bandwidth limit specified in the pipe's configuration is exceeded. Lower priority packets will be buffered and sent when higher priority traffic uses less than the maximum specified for the pipe. The buffering process is sometimes referred to as "throttling back" since it reduces the flow rate.
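As a sketch, the extra rule could be created as shown below. The Precedence property name and the ssh-telnet service object (a service covering TCP ports 22 and 23) are assumptions, so check the exact names in the cOS Core CLI Reference Guide. The rule would need to be placed before the more general Outbound rule so that it triggers first.

# Sketch: SSH/Telnet traffic uses the same pipes but at precedence 2
# ("Precedence" and the "ssh-telnet" service name are assumed)
add PipeRule Name=ssh_telnet
            SourceInterface=lan
            SourceNetwork=lan_net
            DestinationInterface=wan
            DestinationNetwork=all-nets
            Service=ssh-telnet
            ForwardChain=std-out
            ReturnChain=std-in
            Precedence=2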
The Need for Guarantees
A problem can occur, however, if prioritized traffic is a continuous stream such as real-time audio, resulting in continuous use of all available bandwidth and unacceptably long queuing times for other services such as surfing, DNS or FTP. A means is required to ensure that lower priority traffic gets some portion of bandwidth and this is done with Bandwidth Guarantees.

Using Precedences as Guarantees
Specifying a limit for a precedence also guarantees that there is a minimum amount of bandwidth available for that precedence. Traffic flowing through a pipe will get the guarantee specified for the precedence it has, at the expense of traffic with lower precedences.

To change the prioritized SSH and Telnet traffic from the previous example to a 96 Kbps guarantee, the precedence 2 limit for the std-in pipe is set to be 96 Kbps.
This does not mean that inbound SSH and Telnet traffic is limited to 96 Kbps. Limits in precedences above the best effort precedence will only limit how much of the traffic gets to pass in that specific precedence.
If more than 96 Kbps of precedence 2 traffic arrives, any excess traffic will be moved down to the best effort precedence. All traffic at the best effort precedence is then forwarded on a first-come, first-forwarded basis.
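In CLI form this could be a single property change on the existing std-in pipe, assuming the per-precedence limit property is named LimitKbps2 (an assumption to verify in the cOS Core CLI Reference Guide):

# Sketch: a 96 Kbps precedence 2 limit on std-in acts as a 96 Kbps
# guarantee for traffic entering the pipe at precedence 2
set Pipe std-in LimitKbps2=96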
Note: A limit on the lowest precedence has no meaning

Setting a maximum limit for the lowest (best effort) precedence or any lower precedences has no meaning and will be ignored by cOS Core.
Differentiated Guarantees
A problem arises if the aim is to give a specific 32 Kbps guarantee to Telnet traffic, and a specific 64 Kbps guarantee to SSH traffic. A 32 Kbps limit could be set for precedence 2, a 64 Kbps limit set for precedence 4 and then pass the different types of traffic through each precedence. However, there are two obvious problems with this approach:

Which traffic is more important? This question does not pose much of a problem here, but it becomes more pronounced as the traffic shaping scenario becomes more complex.
The number of precedences is limited. This may not be sufficient in all cases, even without the "which traffic is more important?" problem.
The solution is to create two new pipes: one for telnet traffic, and one for SSH traffic, much like the "surf" pipe that was created earlier.
First, remove the 96 Kbps limit from the std-in pipe, then create two new pipes: ssh-in and telnet-in. Set the default precedence for both pipes to 2, and set the precedence 2 limit to 64 Kbps for ssh-in and 32 Kbps for telnet-in.
Then, split the previously defined rule covering ports 22 through 23 into two rules, covering 22 and 23, respectively:
Keep the forward chain of both rules as std-out only. Again, to simplify this example, we concentrate only on inbound traffic, which is the direction that is the most likely to be the first one to fill up in client-oriented setups.
Set the return chain of the port 22 rule to ssh-in followed by std-in.
Set the return chain of the port 23 rule to telnet-in followed by std-in.
Set the priority assignment for both rules to Use defaults from first pipe; the default precedence of both the ssh-in and telnet-in pipes is 2.
Using this approach rather than hard-coding precedence 2 in the rule set, it is easy to change the precedence of all SSH and Telnet traffic by changing the default precedence of the ssh-in and telnet-in pipes.
Notice that we did not set a total limit for the ssh-in and telnet-in pipes. We do not need to since the total limit will be enforced by the std-in pipe at the end of the respective chains.
The ssh-in and telnet-in pipes act as a "priority filter": they make sure that no more than the reserved amount, 64 and 32 Kbps, respectively, of precedence 2 traffic will reach std-in. SSH and Telnet traffic exceeding their guarantees will reach std-in as precedence 0, the best-effort precedence of the std-in and ssh-in pipes.
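Putting these steps together, a hedged CLI sketch might look as follows. The DefaultPrec and LimitKbps2 property names, the comma-separated chain syntax and the ssh and telnet service object names are assumptions; only the bandwidth figures come from this example.

# Sketch: priority filter pipes with a default precedence of 2
add Pipe ssh-in DefaultPrec=2 LimitKbps2=64
add Pipe telnet-in DefaultPrec=2 LimitKbps2=32
# One rule per service; the precedence is taken from the first pipe in
# the return chain (ssh-in or telnet-in), then std-in applies the total
add PipeRule Name=SSH
            SourceInterface=lan
            SourceNetwork=lan_net
            DestinationInterface=wan
            DestinationNetwork=all-nets
            Service=ssh
            ForwardChain=std-out
            ReturnChain=ssh-in,std-in
add PipeRule Name=Telnet
            SourceInterface=lan
            SourceNetwork=lan_net
            DestinationInterface=wan
            DestinationNetwork=all-nets
            Service=telnet
            ForwardChain=std-out
            ReturnChain=telnet-in,std-in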
Note: The return chain ordering is important

Here, the ordering of the pipes in the return chain is important. Should std-in appear before ssh-in and telnet-in, then traffic will reach std-in at the lowest precedence only and hence compete for the 250 Kbps of available bandwidth with other traffic.
cOS Core provides a further level of control within pipes through the ability to split pipe bandwidth into individual resource users within a group and to apply a limit and guarantee to each user.
Individual users can be grouped by cOS Core using one of the following: the source or destination IP address, the source or destination network, or the source or destination port.
This feature is switched on by enabling the Grouping option in a pipe. The individual users of a group can then have a limit and/or guarantee specified for them in the pipe. For example, if grouping is done by source IP then each user corresponds to a unique source IP address.
A Port Grouping Includes the IP Address
If a grouping by port is selected then this implicitly also includes the IP address. For example, port 1024 of host computer A is not the same as port 1024 of host computer B. It is the combination of port and IP address that identifies a unique user in a group.

Grouping by Networks Requires the Size
If the grouping is by source or destination network then the network size must also be specified. In other words, the netmask for the network must be specified for cOS Core.

Specifying Group Limits
Once the grouping method is selected, the next step is to specify the Group Limits. These limits can consist of one or both of the following:

Group Limit Total
This value specifies a limit for each user within the grouping. For example, if the grouping is by source IP address and the total specified is 100 Kbps then this is saying that no one IP address can take more than 100 Kbps of bandwidth.
Group Precedence Guarantees
In addition to, or as an alternative to the total group limit, individual precedences can have values specified. These values are, in fact, guarantees (not limits) for each user in a group. For example, precedence 3 might have the value 50 Kbps and this is saying that an individual user (in other words, each source IP if that is the selected grouping) with that precedence will be guaranteed 50 Kbps at the expense of lower precedences.
The precedences for each user must be allocated by different pipe rules that trigger on particular users. For example, if grouping is by source IP then different pipe rules will trigger on different IPs and send the traffic into the same pipe with the appropriate precedence.
The potential sum of the precedence values could clearly become greater than the capacity of the pipe in some circumstances so it is important to specify the total pipe limit when using these guarantees.
Combining the Group Total and Precedences
Use of group precedences and the group total can be combined. This means that:

The users in a group are first separated by pipe rules into precedences.
The users are then subject to the guarantees specified for their precedence.
The combined traffic is subject to the total group limit.
The illustration below shows this flow where the grouping has been selected to be according to source IP.
Another Simple Groups Example
Consider another situation where the total bandwidth limit for a pipe is 400 Kbps. If the aim is to allocate this bandwidth amongst many destination IP addresses so that no single IP address can take more than 100 Kbps of bandwidth, the following steps are needed:

Set the pipe limit, as usual, to be 400 Kbps.
Set the Grouping option for the pipe to have the value Destination IP.
Set the total for the pipe's Group Limits to be 100 Kbps.
Bandwidth is now allocated on a "first come, first forwarded" basis but no single destination IP address can ever take more than 100 Kbps. No matter how many connections are involved the combined total bandwidth can still not exceed the pipe limit of 400 Kbps.
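In CLI form, this could be sketched as below. The Grouping value PerDestIP comes from the scenario tables later in this chapter, but the GroupLimitKbpsTotal property name and the pipe name are assumptions; verify against the cOS Core CLI Reference Guide.

# Hypothetical sketch: 400 Kbps pipe, grouped per destination IP, with
# each destination IP limited to 100 Kbps
add Pipe dst-limit-pipe
            LimitKbpsTotal=400
            Grouping=PerDestIP
            GroupLimitKbpsTotal=100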
Combining Pipe and Group Limit Precedence Values
Let us suppose that grouping is enabled by one of the options such as source IP and some values for precedences have been specified under Group Limits. How do these combine with values specified for the corresponding precedences in Pipe Limits?

In this case, the Group Limits precedence value is a guarantee and the Pipe Limits value for the same precedence is a limit. For example, if traffic is being grouped by source IP and the Group Limits precedence 5 value is 5 Kbps and the Pipe Limits precedence 5 value is 20 Kbps, then after the fourth unique source IP (4 x 5 = 20 Kbps) the precedence limit is reached and the guarantees may no longer be met.
Dynamic Balancing
Instead of specifying a total for Group Limits, the alternative is to enable the Dynamic Balancing option. This ensures that the available bandwidth is divided equally between all addresses regardless of how many there are. This is done up to the limit of the pipe.

If a total group limit of 100 Kbps is also specified with dynamic balancing, then this still means that no single user may take more than that amount of bandwidth.
Precedences and Dynamic Balancing
As discussed, in addition to specifying a total limit for a grouping, limits can be specified for each precedence within a grouping. If we specify a precedence 2 grouping limit of 30 Kbps then this means that users assigned a precedence of 2 by a pipe rule will be guaranteed 30 Kbps no matter how many users are using the pipe. Just as with normal pipe precedences, traffic in excess of 30 Kbps for users at precedence 2 is moved down to the best effort precedence.

Continuing with the previous example, we could limit how much guaranteed bandwidth each inside user gets for inbound SSH traffic. This prevents a single user from using up all available high-priority bandwidth.
First we group the users of the ssh-in pipe so limits will apply to each user on the internal network. Since the packets are inbound, we select the grouping for the ssh-in pipe to be Destination IP.
Now specify per-user limits by setting the precedence 2 limit to 16 Kbps per user. This means that each user will get no more than a 16 Kbps guarantee for their SSH traffic. If desired, we could also limit the group total bandwidth for each user to some value, such as 40 Kbps.
There will be a problem if 5 or more users utilize SSH simultaneously: 16 Kbps times 5 is more than 64 Kbps. The total limit for the pipe will still be in effect, and each user will have to compete for the available precedence 2 bandwidth the same way they have to compete for the lowest precedence bandwidth. Some users will still get their 16 Kbps, some will not.
Dynamic balancing can be enabled to improve this situation by making sure all of the 5 users get the same amount of limited bandwidth. When the 5th user begins to generate SSH traffic, balancing lowers the limit per user to about 13 Kbps (64 Kbps divided by 5 users).
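A sketch of this per-user setup on the ssh-in pipe is shown below. The Grouping, GroupLimitKbps2 and DynamicBalancing property names are assumptions based on the option names used in this section; verify them against the cOS Core CLI Reference Guide.

# Sketch: group ssh-in per destination IP so each internal user gets a
# 16 Kbps precedence 2 guarantee, shared fairly with dynamic balancing
set Pipe ssh-in Grouping=PerDestIP GroupLimitKbps2=16 DynamicBalancing=Yes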
Dynamic Balancing takes place within each precedence of a pipe individually. This means that if users are allotted a certain small amount of high priority traffic, and a larger chunk of best-effort traffic, all users will get their share of the high-precedence traffic as well as their fair share of the best-effort traffic.
The CLI command pipes can be used to look at a snapshot of traffic shaping activity. A key point about this command is that after it is entered, cOS Core analyses activity over the following one second period and then displays the results on the console.

Consider the following example usage:
Device:/>
pipes -users my_pipe1
This will report on only the active users in the pipe called my_pipe1 over the one second period after the command is entered. The number of users active over that one second period may only be a fraction of those active over a longer time interval.
For a complete description of pipes command options, see the separate cOS Core CLI Reference Guide.
If using traffic shaping with IPsec or any tunneling protocol, the following should be noted:
Tunnels introduce overhead
If traffic shaping is set up to measure the traffic inside VPN tunnels then it should be remembered that this is raw data without any overhead so it will usually be less than the bandwidth used by the tunnel that carries it. VPN protocols such as IPsec can add significant overhead to the data because of control data and encryption. For this reason, it is recommended that the limits specified in the traffic shaping pipes for tunneled IPsec data are set at around 20% below the actual available bandwidth.
Pipe rules can trigger on either the tunnel or the data inside the tunnel
It is possible to initiate traffic shaping by having a Pipe Rule object that triggers either on the tunnel itself or the data that is being tunneled. The recommendation is to use pipe rules that trigger on the tunnel. This will mean that the bandwidth issue outlined in the previous point is avoided since traffic shaping will be measuring the outer tunnel data and not the data inside the tunnel.
If a Pipe Rule triggers on the tunnel itself, then it should be noted that the Source Interface property of the Pipe Rule should be set to a value of core. It should never be set to a value of any as this could mean that the rule triggers twice: once for the tunnel and once for the tunneled data.
The Importance of a Pipe Limit
Traffic shaping only comes into effect when a pipe in cOS Core is full. That is to say, it is carrying as much traffic as the total limit allows. If a 500 Kbps pipe is carrying 400 Kbps of low priority traffic and 90 Kbps of high priority traffic then there is 10 Kbps of bandwidth left and there is no reason to throttle back anything. It is therefore important to specify a total limit for a pipe so that it knows what its capacity is and the precedence mechanism is totally dependent on this.

Relying on the Group Limit
A special case where a total pipe limit is not specified is when a group limit is used instead. The bandwidth limit is then placed on, for example, each user of a network where the users must share a fixed bandwidth resource. An ISP might use this approach to limit individual user bandwidth by specifying a "Per Destination IP" grouping. Knowing when the pipe is full is not important since the only constraint is on each user. If precedences were being used, a total pipe limit would have to be specified.

Limits should not be more than the Available Bandwidth
If pipe limits are set higher than the available bandwidth, the pipe will not know when the physical connection has reached its capacity. If the connection is 500 Kbps but the total pipe limit is set to 600 Kbps, the pipe will believe that it is not full and it will not throttle lower precedences.

Limits should be less than Available Bandwidth
Pipe limits should be slightly below the network bandwidth. A recommended value is to make the pipe limit 95% of the physical limit. The need for this difference becomes less with increasing bandwidth since 5% represents an increasingly larger piece of the total.

The reason for the lower pipe limit is how cOS Core processes traffic. For outbound connections where packets leave the firewall, there is always the possibility that cOS Core might slightly overload the connection because of the software delays involved in deciding to send packets and the packets actually being dispatched from buffers.
For inbound connections, there is less control over what is arriving and what has to be processed by the traffic shaping subsystem and it is therefore more important to set pipe limits slightly below the real connection limit to account for the time needed for cOS Core to adapt to changing conditions.
Attacks on Bandwidth
Traffic shaping cannot protect against incoming resource exhaustion attacks, such as DoS attacks or other flooding attacks. cOS Core will prevent these extraneous packets from reaching the hosts behind the firewall, but it cannot protect against the connection becoming overloaded if an attack floods it.

Watching for Leaks
When setting out to protect and shape a network bottleneck, make sure that all traffic passing through that bottleneck passes through the defined cOS Core pipes.

If there is traffic going through the Internet connection that the pipes do not know about, cOS Core cannot know when the Internet connection becomes full.
The problems resulting from leaks are exactly the same as in the cases described above. Traffic "leaking" through without being measured by pipes will have the same effect as bandwidth consumed by parties outside of administrator control but sharing the same connection.
Troubleshooting
For a better understanding of what is happening in a live setup, the console command:

Device:/>
pipes -users <pipename>

can be used to display a list of currently active users in each pipe.
cOS Core traffic shaping provides a sophisticated set of mechanisms for controlling and prioritizing network packets. The following points summarize the important points when using it:
Select the traffic to manage through Pipe Rules.
Pipe Rules send traffic through Pipes.
A pipe can have a limit which is the maximum amount of traffic allowed.
A pipe can only know when it is full if a total limit for the pipe is specified.
A single pipe should handle traffic in only one direction (although 2 way pipes are allowed).
Pipes can be chained so that one pipe's traffic feeds into another pipe.
Specific traffic types can be given a priority in a pipe.
Priorities can be given a maximum limit which is also a guarantee. Traffic that exceeds this will be sent at the minimum precedence which is also called the Best Effort precedence.
At the best effort precedence all packets are treated on a "first come, first forwarded" basis.
Within a pipe, traffic can also be separated on a Group basis. For example, by source IP address. Each user in a group (for example, each source IP address) can be given a maximum limit and precedences within a group can be given a limit/guarantee.
A pipe limit need not be specified if group members have a maximum limit.
Dynamic Balancing can be used to specify that all users in a group get a fair and equal amount of bandwidth.
This section looks at some more scenarios and how traffic shaping can be used to solve particular problems.
A Basic Scenario
The first scenario will examine the configuration shown in the image below, in which incoming and outgoing traffic is to be limited to 1 megabit per second.

The reason for using 2 different pipes in this case is that these are easier to match to the physical link capacity. This is especially true with asymmetric links such as ADSL.
First, two pipes called in-pipe and out-pipe need to be created with the following parameters:
Pipe Name | Min Prec | Def Prec | Max Prec | Grouping | Net size | Pipe limit |
---|---|---|---|---|---|---|
in-pipe | 0 | 0 | 7 | PerDestIP | 24 | 1000 Kbps |
out-pipe | 0 | 0 | 7 | PerSrcIP | 24 | 1000 Kbps |
Dynamic Balancing should be enabled for both pipes. Instead of PerDestIP and PerSrcIP we could have used PerDestNet and PerSrcNet if there were several networks on the inside.
The next step is to create the following Pipe Rule which will force traffic to flow through the pipes.
Rule Name | Forward Pipes | Return Pipes | Source Interface | Source Network | Destination Interface | Destination Network | Selected Service
---|---|---|---|---|---|---|---
all_1mbps | out-pipe | in-pipe | lan | lannet | wan | all-nets | all_services
The rule will force all traffic to the default precedence level and the pipes will limit total traffic to their 1 Mbps limit. Having Dynamic Balancing enabled on the pipes means that all users will be allocated a fair share of this capacity.
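For reference, a CLI sketch of this scenario follows. The LimitKbpsTotal, PipeRule and chain properties are those used in the earlier examples; the MinPrec, DefaultPrec, MaxPrec, Grouping, NetSize and DynamicBalancing property names are assumptions to be checked against the cOS Core CLI Reference Guide.

# Sketch: the two pipes from the table above (precedence range 0-7,
# grouping per IP with a /24 net size, dynamic balancing enabled)
add Pipe in-pipe LimitKbpsTotal=1000 MinPrec=0 DefaultPrec=0 MaxPrec=7
            Grouping=PerDestIP NetSize=24 DynamicBalancing=Yes
add Pipe out-pipe LimitKbpsTotal=1000 MinPrec=0 DefaultPrec=0 MaxPrec=7
            Grouping=PerSrcIP NetSize=24 DynamicBalancing=Yes
# The single rule that forces all LAN-to-Internet traffic into the pipes
add PipeRule Name=all_1mbps
            SourceInterface=lan
            SourceNetwork=lannet
            DestinationInterface=wan
            DestinationNetwork=all-nets
            Service=all_services
            ForwardChain=out-pipe
            ReturnChain=in-pipe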
Using Several Precedences
We now extend the above example by allocating priorities to different kinds of traffic accessing the Internet from a headquarters office.

Assume there is a symmetric 2/2 Mbps link to the Internet. Descending priorities and traffic requirements will be allocated to the following users:
To implement this scheme, we can use the in-pipe and out-pipe. We first enter the Pipe Limits for each pipe. These limits correspond to the list above and are:
Now create the Pipe Rules:
Rule Name | Forward Pipes | Return Pipes | Source Interface | Source Network | Dest Interface | Dest Network | Selected Service | Precedence
---|---|---|---|---|---|---|---|---
web_surf | out-pipe | in-pipe | lan | lannet | wan | all-nets | http-all | 0
voip | out-pipe | in-pipe | lan | lannet | wan | all-nets | H323 | 6
citrix | out-pipe | in-pipe | lan | lannet | wan | all-nets | citrix | 4
other | out-pipe | in-pipe | lan | lannet | wan | all-nets | all_services | 2
These rules are processed from top to bottom and force different kinds of traffic into precedences based on the Service. Customized service objects may need to be created first in order to identify particular types of traffic. The all_services rule at the end catches anything that falls through from the earlier rules, since it is important that no traffic bypasses the pipe rule set; otherwise, traffic shaping will not work.
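As an example of one of these rules in CLI form, the voip rule could be sketched as follows; the Precedence property name is an assumption (it corresponds to the fixed precedence option in the Web Interface) and should be verified against the cOS Core CLI Reference Guide.

# Sketch: VoIP (H323) traffic through the same pipes at fixed
# precedence 6 ("Precedence" is an assumed property name)
add PipeRule Name=voip
            SourceInterface=lan
            SourceNetwork=lannet
            DestinationInterface=wan
            DestinationNetwork=all-nets
            Service=H323
            ForwardChain=out-pipe
            ReturnChain=in-pipe
            Precedence=6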
Pipe Chaining
Suppose the requirement now is to limit the precedence 2 capacity (other traffic) to 1000 Kbps so that it does not spill over into precedence 0. This is done with pipe chaining where we create new pipes called in-other and out-other both with a Pipe Limit of 1000. The other pipe rule is then modified to use these:
Rule Name | Forward Pipes | Return Pipes | Source Interface | Source Network | Dest Interface | Dest Network | Selected Service | Precedence
---|---|---|---|---|---|---|---|---
other | out-other, out-pipe | in-other, in-pipe | lan | lannet | wan | all-nets | all_services | 2
Note that in-other and out-other are first in the pipe chain in both directions. This is because we want to limit the traffic immediately, before it enters the in-pipe and out-pipe and competes with VoIP, Citrix and Web-surfing traffic.
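A CLI sketch of this change is shown below; the comma-separated chain syntax is an assumption to verify against the cOS Core CLI Reference Guide.

# Sketch: limit "other" traffic to 1000 Kbps before it reaches the
# main pipes by chaining the new pipes first
add Pipe in-other LimitKbpsTotal=1000
add Pipe out-other LimitKbpsTotal=1000
set PipeRule other ForwardChain=out-other,out-pipe ReturnChain=in-other,in-pipe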
A VPN Scenario
In the cases discussed so far, all traffic shaping is occurring inside a single Clavister firewall. VPN is typically used for communication between a headquarters and branch offices, in which case pipes can control traffic flow in both directions. With VPN, it is the tunnel which is the source and destination interface for the pipe rules.

An important consideration which has been discussed previously is allowance in the Pipe Total values for the overhead used by VPN protocols. As a rule of thumb, a pipe total of 1700 Kbps is reasonable for a VPN tunnel where the underlying physical connection capacity is 2 Mbps.
It is also important to remember to insert into the pipe all non-VPN traffic using the same physical link.
Pipe chaining can be used as a solution to the problem of VPN overhead. A limit which allows for this overhead is placed on the VPN tunnel traffic and non-VPN traffic is inserted into a pipe that matches the speed of the physical link.
To do this we first create separate pipes for the outgoing traffic and the incoming traffic. VoIP traffic will be sent over a VPN tunnel that will have a high priority. All other traffic will be sent at the best effort priority (see above for an explanation of this term). Again, a 2/2 Mbps symmetric link is assumed.
The pipes required will be:
vpn-in (Total: 1700 Kbps)
vpn-out (Total: 1700 Kbps)
in-pipe (Total: 2000 Kbps)
out-pipe (Total: 2000 Kbps)
The following pipe rules are then needed to force traffic into the correct pipes and precedence levels:
Rule Name | Forward Pipes | Return Pipes | Src Int | Source Network | Dest Int | Destination Network | Selected Service | Precedence
---|---|---|---|---|---|---|---|---
vpn_voip_out | vpn-out, out-pipe | vpn-in, in-pipe | lan | lannet | vpn | vpn_remote_net | H323 | 6
vpn_out | vpn-out, out-pipe | vpn-in, in-pipe | lan | lannet | vpn | vpn_remote_net | all_services | 0
vpn_voip_in | vpn-in, in-pipe | vpn-out, out-pipe | vpn | vpn_remote_net | lan | lannet | H323 | 6
vpn_in | vpn-in, in-pipe | vpn-out, out-pipe | vpn | vpn_remote_net | lan | lannet | all_services | 0
out | out-pipe | in-pipe | lan | lannet | wan | all-nets | all_services | 0
in | in-pipe | out-pipe | wan | all-nets | lan | lannet | all_services | 0
With this setup, all VPN traffic is limited to 1700 Kbps, the total traffic is limited to 2000 Kbps and VoIP to the remote site is guaranteed 500 Kbps of capacity before it is forced to best effort.
SAT with Pipes
If SAT is being used, for example with a web server or FTP server, that traffic also needs to be forced into pipes or it will escape traffic shaping and ruin the planned quality of service. In addition, server traffic is initiated from the outside so the order of pipes needs to be reversed: the forward pipe is the in-pipe and the return pipe is the out-pipe.

A simple solution is to put a "catch-all-inbound" rule at the bottom of the pipe rule set. However, the external interface (wan) should be the source interface to avoid putting into pipes traffic that is coming from the inside and going to the external IP address. This last rule will therefore be:
Rule Name | Forward Pipes | Return Pipes | Source Interface | Source Network | Dest Interface | Dest Network | Selected Service | Precedence
---|---|---|---|---|---|---|---|---
all-in | in-pipe | out-pipe | wan | all-nets | core | all-nets | all_services | 0
Note: SAT and ARPed IP Addresses

If the SAT is from an ARPed IP address, the wan interface needs to be the destination.
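As a sketch in CLI form, the catch-all inbound rule could be added as shown below, using the same properties as the earlier examples:

# Sketch: catch-all rule for inbound (SAT'ed) server traffic, placed at
# the bottom of the pipe rule set; note the reversed pipe order
# compared with the outbound rules
add PipeRule Name=all-in
            SourceInterface=wan
            SourceNetwork=all-nets
            DestinationInterface=core
            DestinationNetwork=all-nets
            Service=all_services
            ForwardChain=in-pipe
            ReturnChain=out-pipe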