Definitions:
Tail drop — When a queue becomes full and new packets are dropped.
Speed mismatch — The incoming interface speed exceeds the outgoing interface speed.
Aggregation problem — The traffic from many interfaces aggregates into one interface that does not have enough bandwidth.
Confluence problem — The joining of multiple traffic streams causes congestion on an interface.
Transmit queue (TxQ) or tx_ring — The hardware queue of an interface, always a FIFO queue.
There are two levels to a queue: the software queue and the hardware queue. The software queue can change its queuing algorithm, while the hardware queue is always FIFO. The software portion of a queue can contain multiple queues within it, allowing for tiered service based on different algorithms. Once a packet is assigned to a software queue it can still be dropped, depending on the algorithm employed for that queue.
You can set the size of the hardware queue with the tx-ring-limit command. Keep in mind, however, that a hardware queue that is too large imposes FIFO-like delay, while one that is too short is inefficient because it causes too many CPU interrupts.
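For example, a minimal sketch of shrinking the hardware queue on a platform that supports the command (the value 10 is illustrative, not a recommendation):

```
interface Serial2/1
 ! Limit the hardware FIFO ring to 10 packets so more of the
 ! queuing decision happens in the software queue.
 tx-ring-limit 10
```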
Queuing Methods:
First In First Out (FIFO)
The default queuing mechanism for interfaces faster than 2.048 Mbps. If an interface is unlikely to be congested, FIFO may be an appropriate choice. Packets line up in a single queue with no regard for class, priority, or type, similar to a gas station with only one register and no pay-at-the-pump.
The main drawback of FIFO is that high-volume applications are allowed to consume more bandwidth, while lower-volume applications may not be serviced adequately.
Priority Queuing (PQ)
Priority queuing has four queues; traffic must be assigned to one or it defaults to the normal queue. Access lists are commonly used to assign packets to each queue. Priority queuing uses strict priority: as long as packets sit in the high queue, they are processed while all other queues are starved. When the high queue is empty, one packet from the medium queue is processed, after which the scheduler restarts at the high queue. This continues all the way down to the low queue, which is serviced only when all other queues are empty. The default queue size gets smaller as the priority increases, although you can adjust the defaults.
The four priority queues and their default queue size:
- High — 20
- Medium — 40
- Normal — 60
- Low — 80
Priority Queuing Configuration
You can configure up to 16 priority lists on a router; this example configures priority list 7. For reference, see the Classification post for IP precedence values.
! The precedence value can be given by name or number: the keyword
! "internet" matches IP precedence 6 (internetwork control).
! This access list matches that traffic, and the priority list
! assigns it to the high queue.
access-list 101 permit ip any any precedence internet
priority-list 7 protocol ip high list 101
!
! Map ssh and telnet as medium priority.
priority-list 7 protocol ip medium tcp 22
priority-list 7 protocol ip medium tcp telnet
!
! Map port 123 (ntp) as normal and all other traffic as low priority.
priority-list 7 protocol ip normal tcp 123
priority-list 7 default low
!
! The only queue size we are changing is low, but the command
! requires us to "set" high, medium, and normal as well.
priority-list 7 queue-limit 20 40 60 100
!
interface Serial2/1
 bandwidth 128
 ip address 192.168.123.2 255.255.255.0
 ! Apply the priority-group to the interface.
 priority-group 7
Confirm configuration:
R2#sh queueing
Current fair queue configuration:

  Interface      Discard    Dynamic  Reserved  Link    Priority
                 threshold  queues   queues    queues  queues
  Serial0/0      64         256      0         8       1
  Serial0/1      64         256      0         8       1
  Serial2/0      64         32       0         8       1
  Serial2/2      64         32       0         8       1
  Serial2/3      64         32       0         8       1

Current DLCI priority queue configuration:
Current priority queue configuration:

List   Queue  Args
7      low    default
7      high   protocol ip          list 101
7      medium protocol ip          tcp port 22
7      medium protocol ip          tcp port telnet
7      normal protocol ip          tcp port 123
7      low    limit 100
R2#sh int s2/1
Serial2/1 is up, line protocol is up
  Hardware is CD2430 in sync mode
  Internet address is 192.168.123.2/24
  MTU 1500 bytes, BW 128 Kbit/sec, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation HDLC, loopback not set
  Keepalive set (10 sec)
  Last input 00:00:02, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3732824
  vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
  Queueing strategy: priority-list 7
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  Output queue (queue priority: size/max/drops):
     high: 0/20/0, medium: 0/40/0, normal: 0/60/3732824, low: 0/100/0
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     1330530 packets input, 74885327 bytes, 0 no buffer
     Received 284949 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     1623908 packets output, 401764019 bytes, 0 underruns
     0 output errors, 0 collisions, 7 interface resets
     0 unknown protocol drops
     0 output buffer failures, 0 output buffers swapped out
     10 carrier transitions
     DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
And finally to show debugging of priority queuing:
R2#debug priority
Priority output queueing debugging is on
R2#
4w0d: PQ: Serial2/1 output (Pk size/Q 64/0)
4w0d: PQ: Serial2/1 output (Pk size/Q 24/0)
4w0d: PQ: Serial2/1 output (Pk size/Q 24/0)
4w0d: PQ: Serial2/1: ip (s=192.168.123.2, d=192.168.123.3) -> high
4w0d: PQ: Serial2/1 output (Pk size/Q 48/0)
4w0d: PQ: Serial2/1: ip (s=192.168.123.2, d=192.168.123.3) -> high
4w0d: PQ: Serial2/1 output (Pk size/Q 44/0)
4w0d: PQ: Serial2/1 output (Pk size/Q 24/0)
4w0d: PQ: Serial2/1: ip (defaulting) -> low
4w0d: PQ: Serial2/1 output (Pk size/Q 104/3)
4w0d: PQ: Serial2/1: ip (defaulting) -> low
4w0d: PQ: Serial2/1 output (Pk size/Q 104/3)
Round Robin (RR)
There is no priority in RR queuing. The scheduler takes one packet from the first queue, then one from the next, repeating the process for each queue and effectively sharing the bandwidth equally among the queues.
Weighted Round Robin (WRR)
A modified version of RR in which each queue is assigned a weight, effectively allotting it a portion of the bandwidth.
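On Cisco routers, WRR-style scheduling is available as Custom Queuing (CQ), where each queue is serviced for a configured byte count per round. A minimal sketch; the list number, port, and byte counts are illustrative:

```
! Assign web traffic to queue 1 and everything else to queue 2.
queue-list 1 protocol ip 1 tcp www
queue-list 1 default 2
!
! Weight the queues: queue 1 is serviced for roughly 3000 bytes
! per round and queue 2 for 1500, about a 2:1 bandwidth share.
queue-list 1 queue 1 byte-count 3000
queue-list 1 queue 2 byte-count 1500
!
interface Serial2/1
 custom-queue-list 1
```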
Weighted Fair Queuing (WFQ)
The WFQ scheduler has two goals:
1. Provide fairness among the existing flows: each flow receives the same amount of bandwidth as other flows with the same precedence.
2. Provide more bandwidth to flows with higher IP precedence values: a higher-precedence flow receives more bandwidth than a lower-precedence flow, hence the “weighted” in the name.
The effect of these goals is that lower-volume flows get relatively better service and higher-volume flows get worse service.
WFQ is supported only on slow links (2.048 Mbps and slower), where it is the default. WFQ is either on or off; there are no classification configuration options.
WFQ can be disabled on an interface with:
no fair-queue
It is enabled with the commands:
fair-queue [cdt [dynamic-queues [reservable-queues]]]
hold-queue max-limit out
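As a sketch with illustrative values (a CDT of 128, 512 dynamic queues, no reservable queues, and a 600-packet output hold queue; these numbers are assumptions, not recommendations):

```
interface Serial2/0
 ! congestive discard threshold, dynamic queues, reservable queues
 fair-queue 128 512 0
 ! absolute limit on packets held across all output queues
 hold-queue 600 out
```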
You can see what has been configured for an interface using:
sh int s0/1
or
sh queue int s2/0
R1#sh queue s2/0
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3968103
  Queueing strategy: weighted fair
  Output queue: 0/1000/256/3967618 (size/max total/threshold/drops)
     Conversations  0/10/32 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 96 kilobits/sec
R1#sh queueing
Current fair queue configuration:

  Interface      Discard    Dynamic  Reserved  Link    Priority
                 threshold  queues   queues    queues  queues
  Serial2/0      256        32       0         8       1
  Serial2/1      64         32       0         8       1
  Serial2/2      64         32       0         8       1
  Serial2/3      64         32       0         8       1
  Serial2/4      64         32       0         8       1
  Serial2/5      64         32       0         8       1
  Serial2/6      64         32       0         8       1
  Serial2/7      64         32       0         8       1

Current DLCI priority queue configuration:
Current priority queue configuration:
Current custom queue configuration:
Current random-detect configuration:
Current per-SID queue configuration:
R1#sh int s2/0
Serial2/0 is up, line protocol is up
  Hardware is CD2430 in sync mode
  Internet address is 192.168.12.1/24
  MTU 1500 bytes, BW 128 Kbit/sec, DLY 20000 usec,
     reliability 255/255, txload 81/255, rxload 1/255
  Encapsulation HDLC, loopback not set
  Keepalive set (10 sec)
  Last input 00:00:02, output 00:00:00, output hang never
  Last clearing of "show interface" counters 3w3d
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3968103
  vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
  Queueing strategy: weighted fair
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  Output queue: 0/1000/256/3967618 (size/max total/threshold/drops)
     Conversations  0/10/32 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 96 kilobits/sec
...output removed...

R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#int s2/0
R1(config-if)#no fair-queue
R1(config-if)#^Z
*Apr 13 15:59:25.050: %SYS-5-CONFIG_I: Configured from console by console
R1#sh int s2/0
Serial2/0 is up, line protocol is up
  Hardware is CD2430 in sync mode
  Internet address is 192.168.12.1/24
  MTU 1500 bytes, BW 128 Kbit/sec, DLY 20000 usec,
     reliability 255/255, txload 71/255, rxload 1/255
  Encapsulation HDLC, loopback not set
  Keepalive set (10 sec)
  Last input 00:00:08, output 00:00:06, output hang never
  Last clearing of "show interface" counters 3w3d
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3968103
  vvvvvvvvvvvvvvvvvvvvvvvvv
  Queueing strategy: fifo
  ^^^^^^^^^^^^^^^^^^^^^^^^^
...output removed...
WFQ Classification and Scheduling
WFQ is flow-based and dynamically builds and deletes queues based on the number of flows. The number of queues that the router can build is configurable from 16 to 4096 (inclusive), with a default of 256. When the number of flows exceeds the maximum number of queues, new flows are assigned to existing queues.
Flows are identified by creating a hash from the following fields:
- Source IP
- Destination IP
- Protocol Number
- Type of Service (ToS)
- Source TCP/UDP port number
- Destination TCP/UDP port number
Because WFQ needs access to the packet header fields it does not work with tunneling or encryption.
WFQ Drop Policy
Hold Queue Limit — The absolute limit of the number of packets in all queues. Once this limit has been reached any arriving packet is dropped.
WFQ Aggressive Dropping — When a packet arrives while the hold queue is full, the packet is dropped.
Congestive Discard Threshold (CDT) — Limits the number of packets in each individual queue.
WFQ Early Dropping — When a packet arrives and the congestive discard threshold (CDT) has been reached for its queue, the packet is dropped even though the hold queue is not full, unless a packet in another queue has a higher sequence number, in which case that packet is dropped instead.
Class-Based Weighted Fair Queuing (CBWFQ)
Classes for CBWFQ are defined by class maps, not access lists.
You can create up to 64 queues; the class-default queue is always present, and if you do not specify bandwidth for it, it uses whatever bandwidth has not been reserved by the other classes. Each queue has a reserved bandwidth with a bandwidth guarantee. When there is room, a queue can use more bandwidth than its allotment.
The bandwidth command reserves bandwidth for the queue of a class. The maximum reserved bandwidth is 75% of the interface bandwidth and can be changed with the max-reserved-bandwidth interface command. The bandwidth percent and bandwidth remaining percent commands allocate a percentage of the interface bandwidth or of the remaining unreserved bandwidth, respectively.
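For example, raising the reservation ceiling to 85% on an interface (the value is illustrative):

```
interface Serial2/1
 ! Allow class bandwidth reservations up to 85% of the link.
 max-reserved-bandwidth 85
```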
You can allocate bandwidth using only one form of the bandwidth command at a time; you cannot mix bandwidth classes with bandwidth percent classes in the same policy map.
Each queue can have one of two drop policies: tail drop or WRED. Tail drop is the default, while WRED requires extra configuration. Simply put, WRED discards packets before the queue is full, making some TCP connections react to lost packets and slow their rate of delivery. WRED would not be a good option for VOIP, but then, neither is CBWFQ.
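As a sketch, WRED is enabled per class with the random-detect command inside a policy map (the policy and class names here are hypothetical):

```
policy-map ExamplePolicy
 class BulkData
  bandwidth percent 30
  ! Enable WRED instead of the default tail drop for this queue.
  random-detect
```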
The greatest drawback of CBWFQ is that it does not address the low-latency requirements of VOIP.
How to configure CBWFQ
! 1. Create the access list.
ip access-list extended ImportantWeb
 permit tcp 192.168.0.0 0.0.255.255 host 192.168.34.234
!
! 2. Create the class-map.
class-map match-all ImportantWeb
 match access-group name ImportantWeb
!
! 3. Create the policy-map.
policy-map ImportantWeb
 class ImportantWeb
  bandwidth percent 50
  queue-limit 70
 ! Change class-default from FIFO to WFQ.
 class class-default
  fair-queue
!
! 4. Apply the policy-map to the interface.
interface Serial2/1
 ip address 192.168.123.2 255.255.255.0
 service-policy output ImportantWeb
Low-Latency Queuing (LLQ)
LLQ is a feature of CBWFQ that adds a strict-priority queue; you turn it on by using the priority command. The strict-priority queue is policed but is given priority over all other queues, so it is best able to provide the low delay and jitter required by VOIP applications. Because the queue is policed, the other queues are not starved of bandwidth during congestion: packets destined for the strict-priority queue are dropped once its bandwidth is exceeded.
With LLQ you get the best of both worlds: low latency for traffic in the priority queue and guaranteed bandwidth for traffic in the other queues.
LLQ configuration requires only one change from CBWFQ: instead of the bandwidth command, use the priority command. It enables LLQ on the class, reserves bandwidth, and enables the policing function.
priority {bandwidth-kbps | percent percentage} [burst]
! 1. Create the access list.
ip access-list extended ImportantWeb
 permit tcp 192.168.0.0 0.0.255.255 host 192.168.34.234
 remark "This really speeds up all access to the server."
!
! 2. Create the class-map.
class-map match-all ImportantWeb
 match access-group name ImportantWeb
!
! 3. Create the policy-map. Notice the only difference between
! CBWFQ and LLQ, at least in the configuration, is the word
! priority rather than bandwidth.
policy-map ImportantWeb
 class ImportantWeb
  priority percent 50
 class class-default
  fair-queue
!
! 4. Apply the policy-map to the interface.
interface Serial2/1
 ip address 192.168.123.2 255.255.255.0
 service-policy output ImportantWeb