First a comment on the structuring of my notes. In order to make them more legible I have started using headings for each section and then bolding the subsections. I believe it makes it easier to read and I do go back and use these notes to study for the actual test.
A queue organizes packets waiting to exit an interface; the size of the queue affects delay, jitter, and loss.
- A longer queue decreases the chance of tail drop but increases average delay and typically increases jitter as well.
- A shorter queue increases the chance of tail drop but decreases the average delay and typically decreases jitter.
- If congestion is sustained for long periods of time, drops become just as likely no matter the queue length.
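The trade-off in the list above can be seen with a toy simulation (plain Python of my own, not anything IOS actually does): a FIFO queue serviced at 1 packet per tick while 2 packets per tick arrive, so congestion is sustained. The longer queue absorbs more packets before tail drop begins and holds packets longer (more delay), but once full, both queues drop at the same steady rate.

```python
def simulate(queue_limit, arrivals_per_tick=2, ticks=20):
    """Serve 1 packet per tick from a FIFO queue; count tail drops and
    the queue depth each arriving packet sees (a proxy for its delay)."""
    queue = 0
    drops = 0
    depth_seen = []          # queue depth at each successful enqueue
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if queue >= queue_limit:
                drops += 1   # tail drop: queue is full
            else:
                depth_seen.append(queue)
                queue += 1
        queue -= 1           # one packet serialized out per tick
    return drops, sum(depth_seen) / len(depth_seen)

# Once either queue is full, both drop 1 packet per tick (sustained
# congestion); the longer queue only delays when drops start.
short_drops, short_delay = simulate(queue_limit=4)
long_drops, long_delay = simulate(queue_limit=16)
```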
Hardware Queue or TX Ring
If space is available in the hardware queue no output queuing is performed on a packet. It is only with congestion on the hardware queue that software queues are used.
- Hardware queues always perform FIFO scheduling and cannot be changed.
- The hardware queue uses one single queue per interface.
- IOS shortens the hardware queue automatically when a software queue is applied.
- The hardware queue length can be configured to a different value.
The command show controllers interface shows information about the hardware queue.
R4#sh controllers s0/0/0
Interface Serial0/0/0
Hardware is GT96K
DTE V.35 TX and RX clocks detected.
idb at 0x65EF56B4, driver data structure at 0x65EFCE60
wic_info 0x65EFD484
Physical Port 0, SCC Num 0
MPSC Registers:
MMCR_L=0x000304C0, MMCR_H=0x00000000, MPCR=0x00000000
CHR1=0x00FE007E, CHR2=0x00000000, CHR3=0x0000064A, CHR4=0x00000000
CHR5=0x00000000, CHR6=0x00000000, CHR7=0x00000000, CHR8=0x00000000
CHR9=0x00000000, CHR10=0x00003008
SDMA Registers:
SDC=0x00002201, SDCM=0x00000080, SGC=0x0000C000
CRDP=0x160FEA50, CTDP=0x160FECD0, FTDB=0x160FECD0
Main Routing Register=0x0003FFC0 BRG Conf Register=0x00480000
Rx Clk Routing Register=0x76543288 Tx Clk Routing Register=0x76543219
GPP Registers:
Conf=0x43030002, Io=0x46064250, Data=0x7B7BBDA9, Level=0x180000
Conf0=0x43030002, Io0=0x46064250, Data0=0x7B7BBDA9, Level0=0x180000
0 input aborts on receiving flag sequence
0 throttles, 0 enables
0 overruns
0 transmitter underruns
0 transmitter CTS losts
23 rxintr, 28 txintr, 0 rxerr, 0 txerr
52 mpsc_rx, 0 mpsc_rxerr, 0 mpsc_rlsc, 6 mpsc_rhnt, 47 mpsc_rfsc
6 mpsc_rcsc, 0 mpsc_rovr, 0 mpsc_rcdl, 0 mpsc_rckg, 0 mpsc_bper
0 mpsc_txerr, 29 mpsc_teidl, 0 mpsc_tudr, 0 mpsc_tctsl, 0 mpsc_tckg
0 sdma_rx_sf, 0 sdma_rx_mfl, 0 sdma_rx_or, 0 sdma_rx_abr, 0 sdma_rx_no
0 sdma_rx_de, 0 sdma_rx_cdl, 0 sdma_rx_ce, 0 sdma_tx_rl, 0 sdma_tx_ur, 0 sdma_tx_ctsl
0 sdma_rx_reserr, 0 sdma_tx_reserr
0 rx_bogus_pkts, rx_bogus_flag FALSE
0 sdma_tx_ur_processed
tx_limited = 0(128), errata19 count1 - 0, count2 - 0
In the above listing, see the line tx_limited = 0(128), errata19 count1 - 0, count2 - 0 at the bottom of the output. This hardware queue holds 128 packets, and the 0 means the queue size has not been limited by a queuing tool on this interface.
Enable priority queuing to change the hardware queue length.
R4#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#int s0/0/0
R4(config-if)#priority-group 1
R4(config-if)#do sh controllers s0/0/0
Interface Serial0/0/0
Hardware is GT96K
Output removed for brevity.
tx_limited = 1(2), errata19 count1 - 0, count2 - 0
After enabling priority queuing with the priority-group command, you can see that the new length of the hardware queue is 2, and the 1 means the length has been limited as a result of queuing being configured.
The hardware queue length can be changed with the tx-ring-limit x command, as seen below; this was done with the priority queue still active.
R4(config-if)#tx-ring-limit 50
R4(config-if)#do sh controllers s0/0/0
Interface Serial0/0/0
Hardware is GT96K
Output removed for brevity.
tx_limited = 1(50), errata19 count1 - 0, count2 - 0
Queuing on Interfaces, Subinterfaces and Virtual Circuits
Traffic is not even placed in a software queue unless the hardware queue is full; however, traffic shaping can cause shaping queues to fill even when there is no congestion on the physical interface. In effect, traffic shaping on the subinterfaces creates congestion between the shaping queues and the physical interface software queues. On a physical interface, traffic can only leave at the speed of the physical clock rate; similarly, packets can only leave a shaping queue at the traffic-shaping rate.
For the test we just need to know the basic concepts of FIFO, PQ, CQ and MDRR.
FIFO uses tail drop to decide when to drop or enqueue packets. The same trade-off described above holds true for FIFO: a longer queue decreases the chance of tail drop but increases average delay and typically increases jitter, while a shorter queue increases the chance of tail drop but decreases average delay and typically decreases jitter.
Configuring FIFO actually requires you to turn off all other types of queuing.
In my example I am still using the priority queuing configured above.
R4(config-if)#do sh int s0/0/0
Serial0/0/0 is up, line protocol is up
  Hardware is GT96K Serial
  MTU 1500 bytes, BW 1544 Kbit/sec, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation FRAME-RELAY, loopback not set
  Keepalive set (10 sec)
  CRC checking enabled
  LMI enq sent 206, LMI stat recvd 206, LMI upd recvd 0, DTE LMI up
  LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0
  LMI DLCI 1023  LMI type is CISCO  frame relay DTE
  FR SVC disabled, LAPF state down
  Broadcast queue 0/64, broadcasts sent/dropped 4/0, interface broadcasts 1
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:34:20
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
  Queueing strategy: priority-list 1
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  Output queue (queue priority: size/max/drops):
     high: 0/20/0, medium: 0/40/0, normal: 0/60/0, low: 0/80/0
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     206 packets input, 5846 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     210 packets output, 3736 bytes, 0 underruns
     0 output errors, 0 collisions, 8 interface resets
     0 unknown protocol drops
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
     DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
So I remove the priority queue, and you can see that the interface is back to its default, weighted fair queuing.
R4(config-if)#no priority-group 1
R4(config-if)#do sh int s0/0/0
Serial0/0/0 is up, line protocol is up
  Hardware is GT96K Serial
  Output removed for brevity.
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
To put the interface in FIFO queuing I have to remove WFQ.
R4(config-if)#no fair-queue
R4(config-if)#do sh int s0/0/0
Serial0/0/0 is up, line protocol is up
  Hardware is GT96K Serial
  Output removed for brevity.
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
You can change the default queue length with the command hold-queue x out.
R4(config-if)#hold-queue 20 out
R4(config-if)#do sh int s0/0/0
Serial0/0/0 is up, line protocol is up
  Hardware is GT96K Serial
  Output removed for brevity.
  Queueing strategy: fifo
  Output queue: 0/20 (size/max)
Priority Queuing (PQ)
With priority queuing the highest-priority queues are always serviced first. There are four queues: High, Medium, Normal and Low. If the High queue has a packet it is serviced; if not, the Medium queue is serviced, and so on down to the Low queue. The process always starts back at the High queue. As a result the lower-priority queues can be starved. This fact makes it an unpopular queuing choice.
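The servicing logic described above can be sketched in a few lines of Python (an illustration of the algorithm, not IOS code; the queue contents and packet names are made up):

```python
from collections import deque

def pq_schedule(queues, budget):
    """Strict-priority scheduler. `queues` maps name -> deque, ordered
    High to Low; `budget` is how many packets to transmit. After every
    packet the scan restarts at the High queue, which is what starves
    the lower-priority queues whenever High stays busy."""
    sent = []
    for _ in range(budget):
        for name, q in queues.items():
            if q:
                sent.append((name, q.popleft()))
                break        # restart at the High queue
    return sent

queues = {
    "high": deque(["h1", "h2"]),
    "medium": deque(["m1"]),
    "normal": deque(["n1"]),
    "low": deque(["l1"]),
}
order = pq_schedule(queues, budget=5)
```

Both High packets go first; Low is only reached once every higher queue is empty.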
Custom Queuing (CQ)
Custom queuing addresses the largest drawback of PQ by servicing all queues even during congestion. It has 16 queues available, implying 16 classification categories. It does not have the option to service one queue first; instead it performs round-robin service on each queue, beginning with the first queue. CQ takes packets from that queue until the total byte count specified for that queue has been met or exceeded. After the queue has been serviced or has no more packets, CQ moves on to the next queue and repeats the process.
The CQ scheduler essentially guarantees the minimum bandwidth for each queue, while allowing queues to have more bandwidth under the right conditions. If 5 queues have been configured with byte counts of 5,000, 5,000, 10,000, 10,000 and 20,000 for queues 1 through 5, the percentage of bandwidth given to each queue is 10, 10, 20, 20, and 40 percent. But if queue 4 has no traffic over a short period of time, the CQ scheduler moves to another queue. Only queues 1-3 and 5 have packets waiting, so the distribution changes: the queues would receive 12.5, 12.5, 25, 0 and 50 percent of the bandwidth.
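The arithmetic above is just each queue's byte count divided by the total over the queues that currently have packets waiting; a small sketch (hypothetical helper of my own, not an IOS mechanism) reproduces both distributions:

```python
def cq_shares(byte_counts):
    """Effective bandwidth share (percent) per queue, given the
    configured byte counts of the queues that currently have
    packets waiting."""
    total = sum(byte_counts.values())
    return {q: 100 * b / total for q, b in byte_counts.items()}

configured = {1: 5000, 2: 5000, 3: 10000, 4: 10000, 5: 20000}
all_active = cq_shares(configured)      # 10 / 10 / 20 / 20 / 40 percent
queue4_idle = cq_shares({q: b for q, b in configured.items() if q != 4})
```

With queue 4 idle, its 20 percent is redistributed in proportion to the remaining byte counts, giving 12.5 / 12.5 / 25 / 50 percent.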
The queues are numbered, not named, and no queue gets better service than another.
Modified Deficit Round-Robin (MDRR)
MDRR is similar to the CQ scheduler in that it reserves a percentage of link bandwidth for a particular queue. MDRR removes packets from a queue until the quantum value (QV) for that queue has been consumed. MDRR repeats the process for every queue, in order from 0 through 7. Any extra bytes sent beyond the QV are treated as a deficit and subtracted from the QV on the next pass. As a result MDRR provides an exact bandwidth reservation.
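The deficit bookkeeping can be sketched as follows (a simplified model of the round described above; the queue contents, quantum values, and the empty-queue reset behavior are illustrative assumptions, not Cisco's implementation):

```python
from collections import deque

def mdrr_round(queues, quantum, deficit):
    """One MDRR pass. Each queue may send while its credit is positive;
    sending a packet larger than the remaining credit drives the credit
    negative, and that shortfall (the deficit) carries into the next
    round, so over time each queue gets exactly its reserved share."""
    sent = []
    for name, q in queues.items():
        deficit[name] += quantum[name]       # add this round's credit
        while q and deficit[name] > 0:
            pkt_len = q.popleft()
            deficit[name] -= pkt_len         # may go negative: deficit
            sent.append((name, pkt_len))
        if not q:
            deficit[name] = 0                # empty queue forfeits credit
    return sent

queues = {0: deque([900, 900]), 1: deque([600, 600, 600])}
quantum = {0: 1000, 1: 1000}
deficit = {0: 0, 1: 0}
round1 = mdrr_round(queues, quantum, deficit)
```

After the first pass, queue 1 has sent 1,200 bytes against a 1,000-byte quantum, so it starts the next pass with only 800 bytes of effective credit.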
Concepts and Configuration WFQ, CBWFQ and LLQ
Weighted Fair Queuing (WFQ)
WFQ does not allow classification options to be configured; it classifies packets based on flow. A flow consists of all packets that have the same source and destination IP addresses and the same source and destination port numbers. WFQ also favors low-volume, higher-precedence flows over high-volume, lower-precedence flows. Each flow uses a different queue, with up to a maximum of 4,096 queues per interface, and WFQ uses a modified tail-drop policy. WFQ may be the most widely deployed QoS tool on Cisco routers, so take your time on this section.
WFQ can be seen as being too fair: with many flows, WFQ will give some bandwidth to every flow. WFQ is also a poor choice for voice and interactive video traffic because both need low delay and low jitter. By being too fair it can starve voice and video.
Flows are identified by six fields in a packet.
- Source IP address
- Destination IP address
- Transport layer protocol (TCP or UDP)
- Source port
- Destination port
- IP Precedence
Flows are considered to exist only as long as packets from the flow are waiting; if there is a break in traffic and no packets are in the queue, the queue is removed. The show queue command shows WFQ's view of the current flows.
WFQ has two goals:
1. To provide fairness among the existing flows, giving each flow an equal amount of bandwidth. With each flow receiving the same bandwidth, lower-volume flows prosper while higher-volume flows suffer.
2. To provide more bandwidth to flows with higher IP precedence values. The "weight" in WFQ is based on precedence: WFQ provides a share of the link bandwidth based on each flow's precedence plus one. Precedence 7 flows get 8 times more bandwidth than precedence 0 flows because (7+1)/(0+1) = 8.
When moving packets to the hardware queue, WFQ takes the packet with the lowest sequence number (SN) among all of the queues.
SN = Previous_SN + (weight * new_packet_length)
Weight = 32,384 / (IP_Precedence + 1)
SN = Previous_SN + ((32,384 / (IP_Precedence + 1)) * new_packet_length)
WFQ calculates the SN before adding a packet to its queue, and even before the decision is made to drop the packet, because the modified tail-drop decision is based on the SN. The formula considers the length of the packet, the weight of the flow, and the previous SN. By considering packet length, the formula assigns a higher SN to larger packets and a lower SN to smaller packets. By including the SN of the previous packet, the formula assigns a larger SN to queues that already have a number of packets enqueued.
WFQ always weights the packets based on the first 3 bits of the ToS byte, the Precedence field.
The larger the precedence value, the lower the weight, making the SN smaller and therefore favoring that flow over another with a lower precedence value.
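Plugging the formulas above into code shows the 8:1 ratio between precedence 7 and precedence 0 for two flows of the same packet size, both starting from SN 0 (the function name and sample values are mine):

```python
def wfq_sn(previous_sn, precedence, packet_length):
    """Sequence number assigned to a new packet in a WFQ flow queue,
    per SN = Previous_SN + (32,384 / (IP_Precedence + 1)) * length."""
    weight = 32384 // (precedence + 1)
    return previous_sn + weight * packet_length

# Two flows, each enqueuing a 100-byte packet into an empty queue.
sn_prec0 = wfq_sn(0, 0, 100)   # weight 32,384 -> SN 3,238,400
sn_prec7 = wfq_sn(0, 7, 100)   # weight  4,048 -> SN   404,800
```

The precedence 7 packet gets an SN one-eighth the size, so it is scheduled ahead of the precedence 0 packet and its flow receives roughly eight times the bandwidth.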
WFQ Drop Policy, Number of Queues and Queue Length
WFQ places an absolute limit on the number of packets across all queues, called the hold-queue limit. If a new packet arrives for any queue when the hold-queue limit has been reached, the packet is discarded.
WFQ also places a limit on individual queues, called the congestive discard threshold (CDT). If an individual queue's CDT has been reached, WFQ looks across all of the queues for a packet with a higher calculated SN. If such a packet is found, it is discarded and the new packet is enqueued; otherwise the new packet is dropped.
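A rough sketch of that decision (my own simplification: it compares the new packet's SN against the last-enqueued SN of each queue, which is where each queue's highest SN sits; Cisco does not publish the implementation in this detail):

```python
def enqueue_modified_tail_drop(queues, flow, new_sn, cdt, hold_queue_limit):
    """Decide what happens when a packet with SN `new_sn` arrives for
    `flow`. `queues` maps flow -> list of SNs in ascending order."""
    total = sum(len(q) for q in queues.values())
    if total >= hold_queue_limit:
        return "drop new packet (hold-queue limit reached)"
    q = queues.setdefault(flow, [])
    if len(q) >= cdt:
        # CDT reached: look for an already-queued packet with a higher SN
        worst = max(queues, key=lambda f: queues[f][-1] if queues[f] else -1)
        if queues[worst] and queues[worst][-1] > new_sn:
            queues[worst].pop()          # discard the higher-SN packet
            q.append(new_sn)
            return "drop queued packet, enqueue new"
        return "drop new packet (CDT reached, no higher SN found)"
    q.append(new_sn)
    return "enqueue"

# Flow "b" is at its CDT, but flow "a" holds a packet with SN 500 > 200,
# so that packet is sacrificed and the new one is enqueued.
queues = {"a": [100, 500], "b": [50]}
action = enqueue_modified_tail_drop(queues, "b", new_sn=200,
                                    cdt=1, hold_queue_limit=10)
```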
WFQ also keeps eight hidden queues for overhead traffic generated by the router. WFQ uses a very low weight for these hidden queues in order to favor that overhead traffic.
IOS uses WFQ by default on all serial interfaces with bandwidths set at T1/E1 speeds and below. To turn on WFQ, use the command fair-queue.
To change the hold queue of an interface use the command hold-queue x out.
Class-Based WFQ (CBWFQ)
CBWFQ uses MQC to classify traffic, so anything you can match with MQC you can match with CBWFQ. It can reserve a minimum amount of bandwidth for each queue, expressed either as a rate or as a percentage of the link bandwidth.
CBWFQ supports both tail drop and WRED. There are 64 queues available in CBWFQ and WRED can be enabled on any of them. WRED works well for less drop-sensitive traffic such as data but is not a good choice for voice and video.
If a packet is not classified by CBWFQ it goes to the class-default queue. Inside the class-default queue CBWFQ can use either FIFO or WFQ. With WFQ it uses the SN calculation within that queue just as WFQ normally does. Using WFQ in the default class is an advantage for CBWFQ because WFQ treats low-volume flows well, and they are likely to be interactive traffic. So with CBWFQ, classify the traffic you know and reserve the proper bandwidth for it; let the traffic you cannot characterize fall into the class-default queue, where WFQ dynamically applies fairness.
The CBWFQ scheduler gives a percentage of bandwidth to each class based on the configured values, although the algorithm is not published.
Delay- and jitter-sensitive traffic still suffers with CBWFQ because other queues can be serviced while those packets wait.
This is an example taken from QoS p. 298.
- All VOIP payload traffic is marked with DSCP EF and placed in one queue.
- All other traffic is marked with DSCP BE and placed in a different queue.
- Give the VOIP traffic 58 kbps of bandwidth on the link.
- Use WRED and WFQ for the non-VOIP traffic.
class-map match-all voip-rtp
 match ip rtp 16384 16383
class-map match-all dscp-ef
 match ip dscp ef
!
! This is the input policy-map.
policy-map voip-be
 class voip-rtp
  set ip dscp ef
 class class-default
  set ip dscp 0
!
! This is the output policy-map.
policy-map queue-on-dscp
 class dscp-ef
  bandwidth 58
  queue-limit 30
 class class-default
  ! WRED
  random-detect dscp-based
  ! WFQ
  fair-queue
!
interface Ethernet 0/0
 service-policy input voip-be
!
interface serial 0/0
 service-policy output queue-on-dscp
Low Latency Queuing (LLQ)
LLQ is an option of CBWFQ applied to one or more classes. CBWFQ treats these classes as strict priority and always services a packet from these classes if one is waiting. Therefore if you use CBWFQ and the priority command, you have enabled LLQ. This overcomes the biggest drawback of CBWFQ: latency-sensitive packets having to wait while the scheduler services packets with lower SNs. With LLQ, priority queues are serviced first while bandwidth is still guaranteed for traffic in other queues.
LLQ actually polices the priority queue based on the configured bandwidth. The packets in the PQ still have low latency, but LLQ prevents that queue from consuming more than its configured amount. The policing function of LLQ takes care of protecting the other queues from the LLQ, discarding packets when needed.
Configuration of LLQ is similar to that of CBWFQ, but instead of using the bandwidth command, use the priority command. The priority command sets the guaranteed minimum bandwidth, which is also the policed maximum.
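The policing effect can be modeled as a simple per-interval byte allowance (a toy model of the behavior during congestion, not the actual token-bucket mechanics; the function and numbers are mine):

```python
def llq_service(pq_bytes_waiting, priority_rate_bps, interval_s):
    """Bytes the priority queue may send this interval: it is serviced
    first (low latency), but never beyond what the configured priority
    rate allows; the excess is policed (dropped) under congestion."""
    allowance = priority_rate_bps * interval_s // 8   # bits -> bytes
    sent = min(pq_bytes_waiting, allowance)
    dropped = pq_bytes_waiting - sent
    return sent, dropped

# priority 58 (kbps) over a 1-second interval -> 7,250-byte allowance
sent, dropped = llq_service(10000, 58000, 1)
```

If less than the allowance is waiting, everything is sent and nothing is dropped, which is why a correctly sized priority class never notices the policer.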
Please note, the example below is based on QoS p. 308, however, I have made considerable changes in my answer. If it is not correct I am to blame, not the authors.
- R3’s S0/0 is clocked at 128 kbps and is the output interface.
- R3’s input interface is Ethernet 1/0.
- VOIP payload is marked with DSCP EF and placed in its own queue, using tail drop. This class gets 58 kbps and is the LLQ.
- NetMeeting voice and video from Server1 to Client1 is marked with DSCP AF41 and placed in its own queue, using tail drop. It gets 22 kbps.
- Any HTTP traffic with "important" in the URL is marked with AF21 and placed in its own queue. The class gets 29 kbps.
- Any HTTP traffic with "not-so" in the URL is marked with AF23 and placed in its own queue. The class gets 8 kbps.
- All other traffic is marked with DSCP BE and placed in its own queue with WRED and WFQ. This class gets the remaining 20 kbps.
You can have multiple low-latency queues in a single policy map, and with multiple LLQs each class is policed at its configured rate. You get more granularity in what you police.
! All of this is to classify incoming traffic.
! ip cef is for NBAR
ip cef
class-map match-all dscp-ef
 match ip dscp ef
class-map match-any dscp-af41
 match ip rtp 16384 16383
 match access-group 101
class-map match-all important
 match protocol http url "*important*"
class-map match-all not-so
 match protocol http url "*not-so*"
!
policy-map incoming-traffic
 class dscp-ef
  set dscp ef
 class dscp-af41
  set dscp af41
 class important
  set dscp af21
 class not-so
  set dscp af23
 class class-default
  set dscp default
!
policy-map outgoing-traffic
 class dscp-ef
  priority 58
 class dscp-af41
  bandwidth 22
 class important
  bandwidth 29
 class not-so
  bandwidth 8
 class class-default
  ! This bandwidth command not needed.
  bandwidth 20
  random-detect dscp-based
  fair-queue
!
interface ethernet 1/0
 ! Output omitted for brevity.
 ip nbar protocol-discovery
 service-policy input incoming-traffic
!
interface serial 0/0
 ! Output omitted for brevity.
 bandwidth 128
 service-policy output outgoing-traffic