Queuing Network Traffic on Cisco Routers Using Weighted Fair Queuing

Weighted fair queuing (WFQ) is another queuing technique, and is the default used on router interfaces with less than 2.048 Mbps of bandwidth. WFQ aims to ensure that all traffic flows receive predictable bandwidth to meet their needs. For example, it places smaller, interactive traffic (like Telnet) at the front of a queue. Traffic is ordered in the queue according to when the last bit of a packet is received, rather than the first. This helps to ensure that larger packets do not interfere with smaller packets and starve them of bandwidth.
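The "last bit received" scheduling idea can be illustrated with a toy model. The Python sketch below is a heavily simplified illustration of WFQ ordering by virtual finish time (packet size divided by flow weight); it is my own simplification, not Cisco's actual implementation:

```python
import heapq

def wfq_order(packets):
    """Order packets by virtual finish time, a simplified WFQ model.

    packets: list of (flow_id, size_bytes, weight) in arrival order.
    A higher weight (e.g. derived from IP Precedence) yields an
    earlier finish time, and so earlier service.
    """
    finish = {}  # last virtual finish time per flow
    heap = []
    for seq, (flow, size, weight) in enumerate(packets):
        start = finish.get(flow, 0.0)
        f = start + size / weight
        finish[flow] = f
        heapq.heappush(heap, (f, seq, flow, size))
    return [(flow, size) for _, _, flow, size in
            (heapq.heappop(heap) for _ in range(len(heap)))]

# A small interactive (Telnet-like) packet that arrives after a large
# FTP packet is still serviced first: its last bit "finishes" earlier.
print(wfq_order([("ftp", 1500, 1), ("telnet", 64, 1)]))
# [('telnet', 64), ('ftp', 1500)]
```

This mirrors the behavior described above: the small interactive packet jumps ahead of the large one even though it arrived later.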

Traffic is classified according to a number of factors before being placed in a queue. For example, WFQ is aware of quality of service (QoS) techniques like IP Precedence. This allows it to characterize traffic according to its packet-defined priority, and ensure that it is allocated an acceptable level of bandwidth. By the same token, on Frame Relay interfaces, WFQ will take into account diagnostic messages like FECN, BECN, and the discard eligibility (DE) of frames.

Packet Loss and Echo on VoIP Networks

Packet loss can occur on any network for a variety of reasons, including congested links (packets dropped when buffers are full), routing problems, equipment misconfiguration, and more. Because voice traffic uses UDP as its transport protocol, dropped packets are lost and are not resent. In a voice conversation, this manifests as speech that sounds cut short or clipped – portions of the conversation might be lost, where a user saying “Hello, may I speak to Dan?” might sound more like “Hello, may… Dan?”, or similar.

The codecs outlined in previous articles are typically capable of concealing (via a DSP) up to 30 ms of lost voice traffic without any noticeable impact on quality. However, Cisco voice packets use a 20 ms payload, so effectively only one consecutive packet can be lost at any point in time. To avoid packet loss issues, it is important that the underlying IP network is properly designed (including redundancy), and that QoS techniques are implemented effectively.
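The relationship between the 20 ms payload and the 30 ms concealment limit is simple arithmetic, sketched below in Python (the function name is mine, for illustration only):

```python
def max_concealable_packets(payload_ms=20, conceal_ms=30):
    """Consecutive lost packets a DSP can conceal, given the
    per-packet payload (Cisco default 20 ms) and an approximate
    30 ms concealment limit."""
    return conceal_ms // payload_ms

print(max_concealable_packets())        # 1
print(max_concealable_packets(10, 30))  # 3 (with a 10 ms payload)
```

A smaller payload per packet makes the stream more loss-tolerant, at the cost of more packets (and more header overhead) per second.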

Another issue that impacts voice calls is one that you have likely already experienced, namely the sound of your own voice echoing back to you a short time after speaking. Echo occurs when part of the voice transmitted “leaks” back on to the return path of a call. To compensate for this, most codecs use built-in echo cancellation techniques. On a Cisco gateway (such as a router), echo cancellation settings are configured by default, but can be tuned in order to compensate for different degrees of echo experienced by users.

Variable Delays Associated with VoIP Traffic

The bullet points below describe each type of variable delay encountered with VoIP traffic, and how these issues can be compensated for where possible.

  • Queuing Delay. When a WAN interface is congested, traffic must be queued using any of the various methods looked at in this chapter. Although a method like LLQ can prioritize voice traffic, another issue exists. Consider a situation where a router interface currently has no priority traffic waiting to be sent, and begins to forward a large frame containing FTP data. If a voice packet then arrives, it cannot be sent until the FTP frame (which is already being serialized) is completed. As such, the voice packet must wait, incurring a delay that might not be reasonable. For example, if the voice packet is stuck behind a 1500 byte frame being sent over a 64 kbps link, it would be subject to a delay of approximately 187 ms, which (in conjunction with other delay factors) would put it well beyond acceptable limits. To account for queuing delays, a technique called Link Fragmentation and Interleaving (LFI) is used on links with speeds below 768 kbps. When implemented, a router will fragment larger packets (like the FTP frame in this example) into smaller pieces, and then “interleave” the voice packets onto the link. As such, the voice packets do not need to wait for the entire FTP frame to be sent. When choosing a fragment size, aim for one that produces approximately a 10 ms serialization delay but does not fragment voice packets.
  • Dejitter Delay. As mentioned earlier, jitter occurs when packets do not arrive when expected. When dejitter buffers are configured at the receiving end of a voice network, packets that arrive with timing variations are buffered, and then played out with a constant delay. The use of dejitter buffers does add some delay to the voice network, so the buffers should generally be kept small. Other QoS techniques looked at earlier in this section help to reduce the overall exposure of voice traffic to jitter issues, but dejitter buffers are a specific solution to help minimize jitter on the receiving end of a VoIP network.
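The serialization-delay and fragment-size figures quoted under Queuing Delay come from simple formulas, sketched here in Python (function names are mine, not from any Cisco tool):

```python
def serialization_delay_ms(frame_bytes, link_bps):
    """Time to clock a frame onto the wire, in milliseconds."""
    return frame_bytes * 8 / link_bps * 1000

def lfi_fragment_bytes(link_bps, target_ms=10):
    """Fragment size yielding roughly target_ms of serialization delay."""
    return int(link_bps * target_ms / 1000 / 8)

# A 1500-byte frame on a 64 kbps link ties up the wire for ~187 ms
print(serialization_delay_ms(1500, 64000))  # 187.5

# LFI fragment size targeting ~10 ms of delay on the same link
print(lfi_fragment_bytes(64000))            # 80
```

Note how the 10 ms target shrinks the fragment to 80 bytes on a 64 kbps link; on faster links the fragment size grows proportionally, and above 768 kbps a full 1500-byte frame already serializes in under 16 ms, which is why LFI is not needed there.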

It’s important to keep the concept of “end-to-end” in mind when calculating delay. Don’t forget that if a packet needs to pass through 3 routers, and each router adds a 10 ms delay to the forwarding of the packet based on queuing considerations, that adds an additional 30 ms of delay to the packet between the source and destination. Calculations of end-to-end delay will be looked at in more detail shortly in the planning section.

QoS Mechanisms for Improving VoIP Quality (Part 2)

In order for a queuing mechanism like LLQ or IP RTP Priority to queue voice packets into a priority queue correctly, it must be able to identify the traffic as VoIP. With IP RTP Priority, packets are matched and priority queued according to the UDP port numbers used by RTP voice traffic, which fall into the range 16384 to 32767 (even port numbers only) in Cisco implementations. Odd UDP port numbers in this range are used for call control information and are not prioritized – they are serviced by the WFQ method like all other traffic.
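The even/odd port rule can be expressed as a simple predicate. This Python sketch (the function name is mine, not an IOS feature) mirrors how IP RTP Priority classifies a UDP port:

```python
def is_rtp_voice_port(port):
    """True if a UDP port falls in Cisco's RTP voice range.

    Even ports 16384-32767 carry voice payload (RTP); odd ports in
    the range carry control information and are not priority-queued.
    """
    return 16384 <= port <= 32767 and port % 2 == 0

print(is_rtp_voice_port(16384))  # True  (RTP voice, priority queued)
print(is_rtp_voice_port(16385))  # False (control, WFQ-serviced)
print(is_rtp_voice_port(5004))   # False (outside Cisco's range)
```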

With LLQ, VoIP traffic is typically identified based on either port numbers (through the use of access lists), or through traffic classification mechanisms. If you recall from Chapter 4, the IP header includes a field that can be used to designate a service “type”, also known as Type of Service (ToS) or IP Precedence. Based on the value configured in this field, network equipment like routers can be configured to grant certain types of traffic (like VoIP) a higher priority under the queuing methods in use. For example, on a network that supports voice traffic, all voice packets could be tagged with an IP Precedence value of 5. Because this setting is configured in the IP header, it stays with a packet all the way from the source to the destination, helping to ensure end-to-end QoS – assuming an appropriate queuing mechanism that considers this information is implemented on all intermediary routing equipment. LLQ would be the logical choice in such a scenario. On most networks, VoIP traffic has its IP Precedence value configured at the edge of the network, namely on an IP phone. In some cases, however, the phone might not have this ability, and IP Precedence settings might be added to the packet at the distribution layer according to configured policies.
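For reference, the IP Precedence value occupies the top three bits of the ToS byte, so precedence 5 corresponds to a ToS byte of 0xA0. A small Python sketch of the bit layout:

```python
def tos_byte(precedence):
    """Build a ToS byte from an IP Precedence value (top 3 bits)."""
    if not 0 <= precedence <= 7:
        raise ValueError("IP Precedence is 0-7")
    return precedence << 5

# Voice traffic marked with IP Precedence 5
print(tos_byte(5))       # 160
print(hex(tos_byte(5)))  # 0xa0
```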

QoS Mechanisms for Improving VoIP Quality

Implementing QoS mechanisms is another key consideration in order to ensure that VoIP traffic is forwarded across a network in a timely manner. A variety of different queuing mechanisms can be used on WAN interfaces to help prioritize voice traffic in order to ensure that it is serviced in this manner, and not delayed by other traffic that is less time-sensitive. While the four main queuing techniques typically implemented on Cisco router serial interfaces were looked at earlier in this chapter, voice traffic is typically prioritized using one of the three queuing methods listed below.

  • Class-Based Weighted Fair Queuing (CBWFQ). Class-based WFQ works in a manner somewhat similar to traditional WFQ, with the exception that “classified” traffic can be placed into reserved bandwidth queues, ensuring that certain types of traffic (such as VoIP) are allocated a guaranteed amount of bandwidth. A scheduler services the queues based on the bandwidth assigned to them, also known as the “weight”. While CBWFQ ensures that all packets are allocated appropriate bandwidth based on their weight (and that all queues are serviced), it does not implement strict priority. In other words, this queuing method can still result in delays for VoIP traffic.
  • Low Latency Queuing (LLQ). The LLQ queuing method is strongly recommended as the queuing method for use on WAN links that need to support time-sensitive traffic like VoIP. While LLQ functions in a manner very similar to CBWFQ, it does implement one very important additional feature, namely a priority queue. The priority queue is allocated a defined amount of priority bandwidth (weight), and is always serviced first as long as it does not exceed this bandwidth. Other types of traffic can be assigned to reserved queues (or a default queue) with pre-defined weights, ensuring that they are not starved of bandwidth.
  • IP RTP Priority. The IP RTP Priority queuing method presents one of the simplest methods to ensure that VoIP packets are serviced with appropriate priority. When this queuing method is implemented, RTP voice packets (only) are automatically placed into a priority queue, while all other traffic is queued according to WFQ methods. IP RTP Priority can be implemented with a single command, which makes it an easy way to prioritize voice traffic, especially in environments where all other traffic can be handled equally. IP RTP priority does not become active until a WAN interface is experiencing congestion.

LLQ and IP RTP Priority are the two most popular queuing methods for prioritizing VoIP traffic.
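To make the difference between strict priority (LLQ) and weighted servicing (CBWFQ) concrete, here is a deliberately minimal Python model of LLQ-style dequeuing. It omits the policer that real LLQ applies to the priority queue, and the class names and weights are hypothetical:

```python
from collections import deque

class LLQ:
    """Minimal LLQ model: one strict-priority queue serviced first,
    plus weighted class queues. Real IOS LLQ also polices the
    priority queue to its configured bandwidth; omitted here."""

    def __init__(self, class_weights):
        self.priority = deque()
        self.classes = {c: deque() for c in class_weights}
        self.weights = class_weights

    def enqueue(self, cls, pkt):
        queue = self.priority if cls == "voice" else self.classes[cls]
        queue.append(pkt)

    def dequeue(self):
        if self.priority:  # strict priority: always serviced first
            return self.priority.popleft()
        # Simplified: serve the non-empty class with the largest weight
        for cls in sorted(self.classes, key=self.weights.get, reverse=True):
            if self.classes[cls]:
                return self.classes[cls].popleft()
        return None

q = LLQ({"ftp": 1, "web": 2})
q.enqueue("ftp", "ftp-1")
q.enqueue("voice", "rtp-1")
q.enqueue("web", "web-1")
print(q.dequeue())  # rtp-1 (priority queue drained first)
print(q.dequeue())  # web-1 (higher-weight class serviced next)
```

The key property to notice: the voice packet is dequeued first even though it arrived after the FTP packet, which is exactly the behavior CBWFQ alone cannot guarantee.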

VoIP, Network Congestion, and the Importance of QoS Techniques

Network congestion is an issue that can lead to a variety of problems on any data network; when the data network is also supporting voice traffic, these issues are even more serious. For example, WAN interfaces on a router may already be at or very near capacity, leading to queuing issues that may result in packets being delayed, or even dropped as queues fill up. While this might not be a huge issue for non-interactive and reliable traffic like an FTP transfer, it presents a much greater problem when the network needs to support highly interactive traffic like packet-switched voice. If the level of congestion is high enough, users may be unable to complete their calls, may have existing calls dropped, or may experience a variety of delays that make it difficult to participate in a “smooth” conversation.

In order to properly design a network to support voice traffic, WAN links need to be provisioned correctly, and QoS mechanisms need to be implemented in order to ensure that voice traffic is prioritized. When provisioning a WAN link to support multiple services (including voice), total traffic should account for a maximum of 75% of the link’s bandwidth, with the remaining 25% left available for additional needs, such as routing protocol requirements. When provisioning or planning a WAN link that will support voice traffic, keep in mind that the codec used will have the biggest influence on the amount of bandwidth used. Multiplying the bandwidth figure associated with a codec by the number of simultaneous phone conversations that need to be supported provides a good indication of how much bandwidth will need to be dedicated to voice traffic alone across WAN links. Of course data traffic will also need to be considered, but this will vary in different network environments.
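The provisioning guidance above reduces to a quick calculation. In the Python sketch below, the ~24 kbps per-call figure for G.729 is an illustrative assumption – actual per-call bandwidth varies with the codec, payload size, and Layer 2 encapsulation:

```python
def voice_bandwidth_kbps(codec_kbps, calls):
    """Bandwidth needed for a number of simultaneous calls."""
    return codec_kbps * calls

def max_provisioned_kbps(link_kbps, fraction=0.75):
    """Rule of thumb: provision at most ~75% of link capacity."""
    return link_kbps * fraction

# Hypothetical example: 10 simultaneous G.729 calls at ~24 kbps each
voice = voice_bandwidth_kbps(24, 10)
print(voice)                               # 240
print(max_provisioned_kbps(512))           # 384.0
print(voice <= max_provisioned_kbps(512))  # True: fits, leaving
                                           # 144 kbps for data
```

In this hypothetical case the 512 kbps link can carry the 10 calls within the 75% budget, but only 144 kbps of provisioned capacity remains for data, which may or may not be adequate.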

ATM Reference Model

ATM maps to the Data Link and Physical layers of the OSI model, but uses its own reference model to describe its functions. This model consists of three layers, which map to the OSI model as shown in the figure below.

The ATM reference model maps to the Data Link and Physical layers of the OSI model.

At the lowest layer of the ATM reference model is its Physical layer, which serves the same function as the corresponding layer in the OSI model – defining interfaces, supported media, and so forth. Both the ATM and ATM adaptation layers map to the Data Link layer. The lower of the two (the ATM layer) is responsible for managing virtual circuits and cell relay functions. The ATM adaptation layer is somewhat more complex – not only does it encapsulate upper-layer data into a cell, but it also defines the different service classes associated with ATM traffic.

One of the main benefits of ATM as a network transmission technology is its support for Quality of Service (QoS). Different types of ATM traffic are categorized according to their required bandwidth and the types of connections that they require. The ATM adaptation layer (AAL) supports four main types of service for ATM cells, as outlined below:

AAL1. AAL1 is a connection-oriented service that provides a constant bit rate (CBR) for the purpose of transporting very time-sensitive data such as voice or video traffic. AAL1 traffic requires that timing be synchronized between the ultimate source and destination endpoints.

AAL2. AAL2 is also a connection-oriented service that requires clocking between a sender and receiver, but is meant for traffic that is more intermittent or “variable” in nature. This makes it highly applicable to voice traffic, as it usually doesn’t have a constant data flow.

AAL3/4. AAL3/4 provides both connection-oriented and connectionless delivery services. Formerly two separate services (AAL3 connection-oriented, AAL4 connectionless), they have been merged into a single service. AAL3/4 provides a variable bit-rate service suitable for traffic that is not time-sensitive.

AAL5. AAL5 is the service most commonly implemented for the purpose of transferring data over ATM networks, such as standard IP traffic. It supports both connection-oriented and connectionless services, and is best suited to traffic that is not delay-sensitive.
Remembering the purpose of the ATM adaptation layers can be tough. Just remember that time-sensitive traffic will generally use a lower ATM AAL number, while traditional data traffic is typically implemented over AAL 5.

Layer 4 Switching

Now that you’re familiar with Layer 3 switching, you’re probably curious about what Layer 4 switching represents. Well, the answer isn’t as difficult as you might have imagined. Quite simply, a Layer 4 switch is typically just a Layer 3 switch that is also capable of making decisions based on Layer 4 information. Layer 4 (the Transport Layer) carries information about the source and destination TCP and UDP ports in use, which generally represent unique applications. Because of this, a Layer 4 switch is capable of making forwarding decisions according to the applications in use.

For example, an administrator might choose to prioritize VoIP traffic through the use of Quality of Service (QoS) features, granting VoIP applications more bandwidth. Conversely, the Layer 4 port information could also be used to route the packets from certain applications along a different path than other traffic. Ultimately, a Layer 4 switch gives administrators a higher level of control over how bandwidth is used within a network.

Communications Over VoIP Networks

When terminal devices like IP phones wish to communicate, call processing software like Cisco CallManager is typically involved in the process. While some calls will be between two IP phones on the same subnet, some may be to a remote IP network (for example, across a WAN link), while others will be to traditional phones connected to the PSTN. When two users with IP phones on the same subnet need to communicate, a router does not need to be involved, consistent with how IP operates. However, when the users are located on different subnets, a router (or Layer 3 switch) must be involved in order to route the IP-based voice traffic from one subnet to the other. In this case, the router used between the subnets does not need to be voice-enabled. Instead, it will simply route traffic across the network as it would any IP packets. Only the third situation requires a voice-enabled router – when a user on the IP network needs to communicate with users on the PSTN. In this scenario, a router that includes a voice module is needed to convert from IP to traditional voice in one direction, and from voice to IP in the other.

In order for a router to properly route voice traffic across an IP network or to an external user connected to the PSTN, dial peers need to be configured on the router. Remember that users using an IP phone will not be dialing the destination IP address that they wish to reach. Instead, they will be dialing a complete phone number or extension number associated with the user they wish to reach. The configuration of dial peers associates a phone number or extension number with an IP address or the voice port to which the call should be forwarded. For example, if a user wishes to reach another user on the IP network at extension “1234”, a dial peer (specifically, a VoIP peer) would be configured on the router mapping that extension to the destination IP address. Similarly, if a user needed to connect to someone on the PSTN, the phone number (usually a small portion of the number) could be configured in a dial peer (known as a plain old telephone service or “POTS” peer) to specify that the traffic should be forwarded out of a voice port on the router, which may be connected to a PBX or directly to a PSTN trunk link.
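The mapping a dial peer performs can be sketched as a lookup from dialed digits to a forwarding target. The Python below models only a tiny subset of Cisco’s destination-pattern syntax (“.” matching any single digit); the extensions, IP address, and voice-port name are all hypothetical:

```python
def match_dial_peer(dialed, peers):
    """Return the target of the dial peer whose destination pattern
    matches the dialed digits; '.' matches any single digit."""
    for pattern, target in peers:
        if len(pattern) == len(dialed) and all(
                p == "." or p == d for p, d in zip(pattern, dialed)):
            return target
    return None

# Hypothetical peers: a VoIP peer mapping extension 1234 to an IP
# address, and a POTS peer sending 9-prefixed calls out a voice port
peers = [
    ("1234", ("voip", "10.1.1.20")),
    ("9...", ("pots", "voice-port 1/0/0")),
]

print(match_dial_peer("1234", peers))  # ('voip', '10.1.1.20')
print(match_dial_peer("9555", peers))  # ('pots', 'voice-port 1/0/0')
print(match_dial_peer("5678", peers))  # None (no peer configured)
```

Real IOS dial-peer matching is richer than this (wildcards like “T”, longest-match selection, preference values), but the basic idea is the same: dialed digits select a peer, and the peer determines whether the call leg is VoIP or POTS and where it goes next.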

One of the advantages of configuring dial peers is that an administrator has a high degree of control over the entire call-routing process. For example, in order to reduce costs, an administrator could configure a dial peer such that when a user in the Toronto office needs to connect to a PSTN user in Frankfurt, the call is first routed over the IP WAN to the Frankfurt office, where a voice-enabled router dials the (now local) call to the Frankfurt PSTN. An obvious advantage in this scenario is that the long distance charges associated with originating the call in Toronto are reduced to a local call. In the same way that both POTS and VoIP peers can be configured, so can dial peers for both VoFR and VoATM.

Outside of helping to route calls along the correct path, dial peers are also used to apply different attributes to the various “call legs” that a transmission passes over between the source and destination devices. A call leg is simply the logical path between voice gateways (such as a router) or between a gateway and destination device. Examples of attributes that might be applied to a particular call leg include the codec used, QoS settings, and so forth. You will learn more about codecs and QoS settings in upcoming articles.

Ultimately, the process of routing a call from a particular source to the correct destination is somewhat similar to a traditional voice call on the PSTN. When a user picks up an IP handset, the local gateway (such as the Cisco router) provides the user with dial tone. As the user keys in the number they wish to reach, these digits are forwarded to the gateway, which collects them until the appropriate dial peer can be identified. Once identified, the call is forwarded along the call leg to the next gateway (or destination or PSTN switch). At the most basic level, this is similar to how a PSTN switch or PBX makes forwarding decisions on a traditional voice network.

Voice Networking Issues and Goals

A traditional voice network relies upon 64 kbps circuit-switched connections between the originator and the recipient of a call. While this dedicated bandwidth helps to ensure the quality of a call, it is also somewhat wasteful. At many points in any conversation, voice traffic is not crossing the circuit, since natural silences occur in human speech (although it certainly depends on who you are talking to). Even so, the circuit is connected, and the bandwidth is not available for other users.

In contrast, when a voice conversation is passed across a network using packet switching, a dedicated circuit is not created. Instead, the voice “data” becomes the payload of a packet or frame, which is subsequently packet-switched across the network according to the technology or protocol used. For example, with VoIP the voice data is the payload of an IP datagram, which is subsequently switched or routed across a network just like any other IP data traffic. Unless explicitly configured to handle VoIP traffic differently (via mechanisms like queuing), routers will treat the packet in the same way as any other IP packet – routing it from the source node to the destination across the “best” possible path. The initial benefit of this method is clear – voice traffic only uses network resources as required, and does not necessarily require the “reservation” of bandwidth resources (or a dedicated circuit) when voice traffic is not being transferred. This is an advantage, but it also presents some challenges. For example, because voice traffic is time-sensitive, techniques like QoS and compression need to be considered and implemented to ensure that packets arrive at their destination in a timely manner.

Through the implementation of technologies like VoIP, companies can also reduce costs, using existing WAN links to transfer packet-based voice traffic between locations, rather than expensive tie trunks. While existing WAN links may have enough excess capacity to handle this additional traffic, it is quite likely that they will need to be upgraded to support the additional traffic that would result from adding packet-switched voice traffic to the network. Although most voice networking vendors (including Cisco) are careful to remind you of the administrative benefits of managing a single converged network rather than separate voice and data networks, the reality is that moving to a single converged network usually requires significant staff training and more importantly, proper planning.

Examples of typical organization goals associated with implementing a VoIP network solution include:

  • Reduce costs associated with traditional PSTN connections and long distance charges
  • Lower overall total cost of ownership
  • Improve user productivity
  • Reduce reliance on a single vendor for equipment or services
  • Enable new IP-based voice applications to be deployed
  • Move towards a single managed network for voice and data