Quality of service (QoS) is a set of techniques to manage bandwidth, delay, jitter, and packet loss for flows in a network. The Internet Engineering Task Force (IETF) defines two major models for QoS on IP-based networks: Integrated Services (IntServ) and Differentiated Services (DiffServ).
The Intserv model integrates resource reservation and traffic control mechanisms to support special handling of individual traffic flows. The Diffserv model uses traffic control to support special handling of aggregated traffic flows.
Integrated Services (IntServ)
This type of QoS allows for end-to-end QoS in the sense that the original end station can make a request for special treatment of its packets through the network, and that request is propagated through every hop in the packet’s path to the destination.
Routers keep track of individual flows (per-flow state).
The Resource Reservation Protocol (RSVP) is an IP service that enables hosts to request reserved bandwidth along the path to the destination host for use in an IntServ environment.
The receiver is the one who actually requests the reservation, not the sender. The sender sends a Path message to the receiver, which collects information about the QoS capabilities of the intermediate nodes.
The receiver then processes the Path information and generates a Reservation (Resv) request, which is sent upstream to make the actual request to reserve resources.
When the sender gets this Resv, the sender begins to send data.
RSVP is a unidirectional process, so a bidirectional flow (such as an IPVC) requires this process to happen once for each sender.
Reserving bandwidth is necessary only for real-time applications, such as VoIP or IP-based video; other data flows do not need it. RSVP works for both unicast and multicast.
Differentiated Service (DiffServ)
QoS characteristics (bandwidth and delay, for example) are managed on a hop-by-hop basis by policies that are established independently at each intermediate device in a network.
Packet Classification and Marking
Packets are classified and marked (using the DS field in the IP header) to receive a particular per-hop forwarding behavior on nodes along their path.
Sophisticated classification, marking, policing, and shaping operations need only be implemented at network boundaries or hosts. The principal QoS mechanisms are:
- Packet classification
- Packet marking
- Congestion management
- Congestion avoidance
- Traffic conditioning
Packet classification is a set of mechanisms that can distinguish one type of packet from others (for example, by ACL or NBAR). Packet classification is typically performed as close as possible to the traffic source and is usually used in conjunction with packet marking.
Packet marking is a function that allows a networking device to mark packets differently, based on their classification, so that they may be distinguished more easily at future network devices.
Marking is recommended to be deployed only at the first-hop Layer 3-capable device.
Layer 2 Marking:
- Ethernet: 802.1q/ISL CoS bits (3 bits).
- Frame Relay: DE bit (1 bit)
- MPLS: EXP bits (3 bits)
The only way to mark Ethernet frames at Layer 2 is in the ISL or 802.1Q header.
Layer 3 marking is done using the ToS/DSCP field in the IP header.
RFC 2474 redefines the IPv4 type of service (ToS) octet originally specified in RFC 791 and more commonly discussed as IP precedence, replacing it with the DS field.
Class Selector (CS) is defined to be backward compatible with IP Precedence. IP Precedence = CS.
Mapping Layer 2 to Layer 3 Values
As frames/packets move from the Layer 2 environment to a Layer 3 environment, the ISL or 802.1Q header is lost. To preserve end-to-end QoS, this loss creates a need for the ability to map Layer 2 CoS values to Layer 3 ToS values (either IP precedence or DSCP).
Trusted interfaces do not alter the QoS marking of ingress frames. Untrusted interfaces alter the QoS markings to a configurable value CoS or DSCP. This value is typically zero (Best Effort) for untrusted interfaces.
Behavior aggregate (BA): A BA is a collection of packets with the same DSCP value crossing a link in a particular direction.
Congestion is defined as a full transmit queue.
Congestion management mechanisms isolate various classes of traffic, protect each class from other classes, and then prioritize the access of each class to various network resources.
Congestion management in Cisco routers is an egress function.
Congestion management is typically used at all network layers (access, distribution, and core).
Three main steps:
- Queues are created at the interface where congestion is expected.
- Packets are then assigned to these queues, based on classification characteristics such as DiffServ codepoint (DSCP) value.
- Packets are then scheduled for transmission.
Four main types of queuing:
- First In, First Out
- Priority Queuing
The queues are called High, Medium, Normal, and Low, and packets are serviced in that order, with all packets from the High queue being transmitted first.
- Custom Queuing
With CQ, there can be up to 16 queues and the packets in those queues are serviced in a round-robin fashion.
- Weighted Fair Queuing
Weighted Fair Queuing (WFQ) is a dynamic process that divides bandwidth among queues based on weights.
There are several forms of WFQ, including Class-based Weighted Fair Queuing (CBWFQ) and Low Latency Queuing (LLQ).
A priority queue (PQ) was later added to the CBWFQ mechanism. LLQ is CBWFQ plus a single PQ, which receives strict scheduling priority.
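The scheduling idea can be sketched in Python. This is an illustrative model only, not Cisco's implementation: real LLQ interleaves priority packets as they arrive, and the queue names and weights here are invented.

```python
def llq_schedule(priority_q, class_qs, weights):
    """Transmission order under a simplified LLQ: the single strict-priority
    queue is drained first, then class queues are served weighted round-robin."""
    order = []
    while priority_q:
        order.append(priority_q.pop(0))   # PQ gets strict scheduling priority
    while any(class_qs):
        for q, w in zip(class_qs, weights):
            for _ in range(w):            # each class gets up to `w` packets per round
                if q:
                    order.append(q.pop(0))
    return order

print(llq_schedule(["voip1", "voip2"],
                   [["web1", "web2", "web3"], ["ftp1"]],
                   weights=[2, 1]))
# → ['voip1', 'voip2', 'web1', 'web2', 'ftp1', 'web3']
```

The weights play the role of the bandwidth guarantees each CBWFQ class receives; the voice packets never wait behind data.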
Congestion avoidance mechanisms are designed to prevent interfaces or queues from becoming congested in the first place.
Congestion avoidance is a dropping discipline, not a queuing mechanism.
Tail Drop Mechanism
Most troublesome is the fact that tail drop does not use any intelligence to determine which packets should be dropped.
In some situations tail drop allows a single connection or a few flows to monopolize queue space, preventing other connections from getting room in the queue. This “lock-out” phenomenon is often the result of synchronization or other timing effects.
The tail drop discipline allows queues to maintain a full (or, almost full) status for long periods of time, since tail drop signals congestion (via a packet drop) only when the queue has become full.
If the queue is full or almost full, an arriving burst will cause multiple packets to be dropped. This can result in a global synchronization of flows throttling back, followed by a sustained period of lowered link utilization, reducing overall throughput.
Global synchronization: a number of TCP sources enter the slow-start process simultaneously.
Random Early Detection (RED) Mechanism
Congestion avoidance is implemented in Cisco routers as Weighted Random Early Detection (WRED) and is the process of monitoring the depth of a queue, and randomly dropping packets of various flows to prevent the queue from filling completely.
- Minimum threshold: When the average queue depth exceeds the minimum threshold, packets start to be discarded. The rate at which packets are dropped increases linearly until the average queue depth hits the maximum threshold.
- Maximum threshold: When the average queue size exceeds the maximum threshold, all packets are dropped.
- Mark probability denominator: This number represents the fraction of packets that are dropped when the average queue depth is at the maximum threshold. For example, a mark probability denominator of 10 means that 1 of every 10 packets is dropped when the average queue depth equals the maximum threshold.
Average = (old_average * (1 - 2^-n)) + (current_queue_size * 2^-n), where n is the exponential weighting factor, which is user configurable.
If the value for n is set too high, WRED will not work properly and the net impact will be roughly the same as if WRED were not in use at all.
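One WRED step can be sketched in Python: update the exponentially weighted moving average of queue depth, then derive a drop probability from the thresholds. The parameters below are illustrative, not Cisco defaults.

```python
def wred_decision(avg, current_qdepth, n, min_th, max_th, mark_prob_denom):
    """Return (new_average, drop_probability) for one WRED step.
    average = old_average * (1 - 2**-n) + current_queue_size * 2**-n"""
    avg = avg * (1 - 2 ** -n) + current_qdepth * 2 ** -n
    if avg < min_th:
        p_drop = 0.0          # below the minimum threshold: no drops
    elif avg >= max_th:
        p_drop = 1.0          # above the maximum threshold: drop everything
    else:
        # linear ramp from 0 up to 1/mark_prob_denom at the maximum threshold
        p_drop = (avg - min_th) / (max_th - min_th) / mark_prob_denom
    return avg, p_drop

avg, p = wred_decision(avg=20, current_qdepth=20, n=9,
                       min_th=10, max_th=30, mark_prob_denom=10)
print(avg, p)   # average halfway between thresholds: drop ~1 in 20 packets
```

Because the average moves slowly (large n means heavy smoothing), short bursts fit in the queue while sustained congestion triggers drops, which is exactly why an oversized n makes WRED behave as if it were absent.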
Policers drop packets that exceed a defined rate to control the rate at which traffic passes through the policer.
Policing is usually an inbound mechanism, but it can be applied outbound as well.
A two-rate policer specifies both a CIR and a peak information rate (PIR).
The goal of the shaper is to limit the rate at which packets pass through the shaper by buffering packets that exceed a defined rate and sending those packets later.
Shaping normalizes the traffic flow.
Shaping is always an outbound mechanism.
Cisco offers two shaping mechanisms: Generic Traffic Shaping (GTS) and Frame Relay Traffic Shaping (FRTS).
Token Bucket Mechanism
The committed burst (Bc) is the amount of data that is guaranteed to be delivered by the network within one committed rate measurement interval (Tc). It corresponds to a committed information rate (CIR) using the formula CIR = Bc / Tc.
Data is always sent in bursts, and not evenly with the CIR speed because the frames are put on the wire using the clock rate of the physical circuit. Thus, the committed burst specifies how many octets can be sent out in one interval.
CIR (bps) = Bc (bits) / Tc (sec)
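The CIR = Bc / Tc relationship can be checked with a small Python sketch of the shaping cycle; the link rate and backlog below are illustrative numbers.

```python
def shaper_intervals(cir_bps, tc_s, backlog_bits):
    """Send at most Bc = CIR * Tc bits per interval; return (Bc, number of
    Tc intervals needed to drain the backlog). Illustrative token-bucket view:
    each burst goes out at the physical clock rate, then the shaper waits."""
    bc = cir_bps * tc_s          # committed burst per interval, in bits
    intervals = 0
    while backlog_bits > 0:
        backlog_bits -= bc       # one interval's worth of tokens spent
        intervals += 1
    return bc, intervals

bc, n = shaper_intervals(cir_bps=64_000, tc_s=0.125, backlog_bits=32_000)
print(bc, n)   # Bc = 8000 bits per 125-ms interval; 4 intervals to drain
```

Averaged over the four intervals the flow moves at exactly the CIR, even though each individual burst is clocked out at full line rate.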
Per Hop Behavior (PHB)
The treatment given to a packet at each node, or hop, is called a per-hop behavior (PHB).
Traffic can be grouped into behavior aggregates only by DSCP, or based on multiple fields (for example, DSCP and source IP address).
Assured forwarding (AF, RFC 2597), and expedited forwarding (EF, RFC 2598) are PHBs that have recommended codepoints. EF has only 1 recommended codepoint, whereas AF has 12 recommended codepoints.
The default PHB is DSCP value 0 (000000), that is, best-effort forwarding.
Expedited Forwarding PHB
The EF PHB recommends DSCP 46 (101110) be used to mark packets that should receive EF treatment.
Low loss, Low latency, Low jitter, Assured bandwidth, End-to-End Service through DS Domains.
Suitable for Real-Time traffic, including voice traffic
Devices should not queue (or should queue very little) EF traffic.
EF traffic will be given strict priority for transmission
To prevent starvation of other traffic, the arrival rate of packets receiving the EF PHB must be less than or equal to their departure rate. A policer can be used to rate-limit the EF traffic to accomplish this.
Accomplished by Low Latency Queuing (LLQ) in Cisco devices.
Assured Forwarding PHB Group
The AF PHB groups define four independently forwarded AF classes. Within each AF class, an IP packet can be assigned one of three different levels of drop precedence.
RFC 2597 defines 12 DSCPs, which correspond to 4 AF classes, each class having 3 levels of “drop precedence”. High drop precedence packets are going to be dropped first in an AF class.
Codepoints Recommended by RFC 2597
| Class | Low Drop Precedence | Medium Drop Precedence | High Drop Precedence |
| --- | --- | --- | --- |
| AF1 | 001010 (AF11) | 001100 (AF12) | 001110 (AF13) |
| AF2 | 010010 (AF21) | 010100 (AF22) | 010110 (AF23) |
| AF3 | 011010 (AF31) | 011100 (AF32) | 011110 (AF33) |
| AF4 | 100010 (AF41) | 100100 (AF42) | 100110 (AF43) |
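The codepoints in the table follow a simple bit pattern: the three most significant DSCP bits carry the class and the next two bits carry the drop precedence, so dscp = 8 * class + 2 * drop_precedence. A quick Python check:

```python
def af_dscp(af_class, drop_precedence):
    """Decimal DSCP for AFxy per the RFC 2597 layout:
    class in bits 5-3, drop precedence in bits 2-1, lowest bit zero."""
    return 8 * af_class + 2 * drop_precedence

# AF41 = 100010 = 34, AF13 = 001110 = 14
print(af_dscp(4, 1), format(af_dscp(4, 1), "06b"))   # → 34 100010
print(af_dscp(1, 3), format(af_dscp(1, 3), "06b"))   # → 14 001110
```

This also explains why higher drop precedence within a class means a numerically higher DSCP, even though it buys worse treatment under congestion.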
The traffic is marked by an ingress policer. During periods of congestion, this traffic is placed into the classes as it egresses the aggregation router toward other regions. Because each class of traffic is independently forwarded, and each class is guaranteed a certain amount of bandwidth, the classes do not interfere with each other in any way.
Accomplished by Class-Based Weighted Fair Queuing (CBWFQ / HQF) in Cisco devices.
Two types of compression are available:
- Payload compression of Layer 2 frames. One of two algorithms, Stacker or Predictor, can be configured for this type of compression.
Payload compression is primarily performed on Layer 2 frames and therefore compresses the entire Layer 3 packet. The Layer 2 payload compression methods include Stacker, Predictor, and Microsoft Point-to-Point Compression (MPPC). Payload compression should not be used for VoIP.
- Compressed Real-time Transport Protocol (cRTP), maps the three headers, IP, UDP, and RTP, with a combined 40 bytes, to 2 or 4 bytes, depending on whether a CRC is transmitted. This compression can dramatically improve the performance of a link.
cRTP is a way to take advantage of similarities between successive packets and reduce the 40-byte header to somewhere between 2 and 5 bytes. Use of cRTP should be limited to links with speeds of T1 or less.
Compression methods are based on eliminating redundancy. Using header compression mechanisms, most header information can be sent only at the beginning of the session, stored in a dictionary, and then referenced in later packets by a short dictionary index.
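The per-packet effect of cRTP can be estimated with simple arithmetic. The sketch below assumes a 20-byte G.729 payload, 6 bytes of Layer 2 overhead, and the 2-byte (no UDP checksum) cRTP case; all of these values are illustrative assumptions.

```python
def voip_packet_bytes(payload, l2_overhead, crtp=False):
    """Per-packet size for a VoIP stream: the IP+UDP+RTP headers total
    40 bytes uncompressed; assume cRTP shrinks them to 2 bytes."""
    header = 2 if crtp else 40
    return payload + header + l2_overhead

before = voip_packet_bytes(20, 6)              # → 66 bytes per packet
after = voip_packet_bytes(20, 6, crtp=True)    # → 28 bytes per packet
print(before, after)
```

With these assumed numbers the per-packet size falls by more than half, which is why cRTP matters most on the sub-T1 links where VoIP header overhead dominates.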
Link Fragmentation and Interleaving (LFI)
Serialization delay is the time that it takes to place bits onto the circuit. If the serialization delay would otherwise be greater than 15 ms, fragmentation is needed.
LFI is a Layer 2 technique in which large frames are broken into smaller, equal-sized fragments, and transmitted over the link in an interleaved fashion with more latency-sensitive traffic flows (such as VoIP). Using LFI, smaller frames are prioritized and a mixture of fragments is sent over the link. LFI reduces the queuing delay of small frames, because the frames are sent almost immediately. Link fragmentation, therefore, reduces delay and jitter by expediting the transfer of smaller frames. LFI should be used on slow links (that is, links with a bandwidth less than 768 kbps).
There are several types of LFI:
- MLP Interleaving LFI on multilink PPP links
- FRF.12 LFI for Frame Relay data permanent virtual circuits (PVCs)
- FRF.11 Annex C LFI for Frame Relay Voice over Frame Relay (VoFR) PVCs
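Fragment sizing follows directly from serialization delay (bits divided by link rate). A sketch, assuming a 10-ms fragment target (the text gives 15 ms as the upper bound):

```python
def lfi_fragment_bytes(link_bps, target_delay_s=0.010):
    """Largest fragment whose serialization delay stays within the target."""
    return int(link_bps * target_delay_s / 8)

def serialization_delay_ms(frame_bytes, link_bps):
    """Time to clock a frame onto the wire, in milliseconds."""
    return frame_bytes * 8 / link_bps * 1000

print(lfi_fragment_bytes(64_000))             # → 80-byte fragments on 64 kbps
print(serialization_delay_ms(1500, 64_000))   # → 187.5 ms for a full 1500-byte frame
```

The second number shows the problem LFI solves: without fragmentation, a voice packet stuck behind one full-size frame on a 64-kbps link waits nearly 190 ms.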
Three steps to implement QoS by using MQC:
- Configure classification by using the class-map command.
- Configure traffic policy by associating the traffic class with one or more QoS features using the policy-map command.
- Attach the traffic policy to inbound or outbound traffic on interfaces, subinterfaces, or virtual circuits using the service-policy command.
Configuring Trust Boundary on Catalyst Switches
The mls qos trust command defines the type of trust that a Catalyst switch has for traffic arriving on a specific interface. By default, there is no trust.
If Layer 2 CoS is trusted (mls qos trust cos), the CoS marking of the incoming packets is used to select the ingress and egress queues. Two situations can arise:
- If the pass-through dscp option is not configured, the DSCP value in the incoming packet is overwritten, using the CoS-to-DSCP mapping table.
- If the pass-through dscp option is configured, the original DSCP is retained in the packet and transmitted when the packet leaves the switch.
Mapping CoS to Network Layer QoS
CoS-to-DSCP map: This map defines eight DSCP values that correspond to CoS values 0 to 7. Mapping is performed only on ports that trust incoming CoS.
DSCP-to-CoS map: This setting maps dscp-list (as many as 13 DSCP values) to the defined CoS value (range from 0 to 7).
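On many Catalyst platforms the default CoS-to-DSCP map is simply CoS * 8 (0, 8, 16, ..., 56), and the default DSCP-to-CoS direction keeps only the three most significant DSCP bits. These defaults are an assumption here; verify them on your platform. A sketch:

```python
def default_cos_to_dscp(cos):
    """Assumed Catalyst default: CoS c maps to DSCP 8 * c."""
    return 8 * cos

def default_dscp_to_cos(dscp):
    """Assumed Catalyst default: the top three DSCP bits become the CoS."""
    return dscp >> 3

print([default_cos_to_dscp(c) for c in range(8)])   # → [0, 8, 16, 24, 32, 40, 48, 56]
print(default_dscp_to_cos(46))                      # → 5 (EF traffic maps to CoS 5)
```

Both maps are configurable, but the defaults make the Layer 2 and Layer 3 markings line up: CoS 5 voice becomes DSCP 40 unless the map is adjusted to yield EF (46).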
Class-Based Policing Configuration
If Bc (in bytes) is not specified, it will default to CIR / 32, or 1500 bytes, whichever is higher. When using the formula CIR / 32 to calculate the default Bc (in bytes), Cisco IOS uses a Tc of 0.25 second, where:
Bc (in bytes) = (CIR * Tc) / 8
Bc (in bytes) = (CIR * 0.25 seconds) / 8 = CIR / 32
If Be (in bytes) is not specified, it will default to Bc. In a single token bucket case, Cisco IOS ignores the Be value. This means that excess bursting is disabled.
The Be rate can be specified when a violate action is configured. Therefore, using a dual token bucket allows Be to be explicitly configured instead of using the default value of Be = Bc. Be specifies the size of the second (excess) token bucket.
Definition of the pir parameter enables dual-rate policing, which uses two separate rates: CIR and PIR. The Bc and Be keywords and their associated arguments (conform-burst and peak-burst , respectively) are optional.
If Bc is not specified, Bc (in bytes) will default to CIR / 32, or 1500 bytes, whichever is higher. If Be is not specified, Be (in bytes) will default to PIR / 32, or 1500 bytes, whichever is higher.
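The default-burst rules above can be expressed directly as a small function (a sketch; values in bytes, using integer division):

```python
def default_bursts(cir_bps, pir_bps=None):
    """Default Bc/Be in bytes per the rules above: Bc = max(CIR / 32, 1500).
    Dual-rate (PIR given): Be = max(PIR / 32, 1500); single-rate: Be = Bc."""
    bc = max(cir_bps // 32, 1500)
    be = max(pir_bps // 32, 1500) if pir_bps else bc
    return bc, be

print(default_bursts(32_000))               # → (1500, 1500): slow link hits the floor
print(default_bursts(1_000_000, 2_000_000)) # → (31250, 62500)
```

The CIR / 32 term is just (CIR * 0.25 s) / 8, so the default burst always corresponds to a quarter-second of traffic at the committed rate, with 1500 bytes as a floor so at least one full-size frame conforms.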
Class-Based Shaping Configuration
The shape average and shape peak commands configure average and peak shaping, respectively. The Bc and Be values in bits can be explicitly configured, or Cisco IOS can automatically calculate their optimal values. It is recommended that you not configure Bc and Be, so that the Cisco IOS algorithm determines the best values to use. Class-based shaping uses a single token bucket with a maximum token bucket size of Bc + Be.
The default burst size is based on 200-ms interval and LLQ bandwidth.
Cisco AutoQoS VoIP Functions
- Cisco Express Forwarding (CEF) must be enabled
- No QoS policies (that is, service policies) can be attached to the interface.
- Correct bandwidth must be specified on the interface or subinterface where Cisco AutoQoS VoIP is enabled.
- If the interface or subinterface has a link speed of 768 kbps or lower, an IP address should be configured on the interface or subinterface using the ip address command.
- By default, Cisco AutoQoS VoIP will enable Multilink PPP (MLP) and copy the configured IP address to the multilink bundle interface.
Configuring Cisco AutoQoS VoIP: Routers
Monitoring Cisco AutoQoS VoIP: Routers
Configuring Cisco AutoQoS VoIP: Switches
When the Cisco AutoQoS VoIP feature is initially enabled on a switch, QoS is globally enabled with the mls qos global configuration command.
When the auto qos voip trust interface configuration command is entered, the ingress classification on the interface is set to trust the CoS QoS label that is received in a frame, and the egress queues on the interface are reconfigured.
Configuring Cisco AutoQoS for the Enterprise
Monitoring Cisco AutoQoS for the Enterprise