Cisco IOS Network Traffic Encryption Features

Encryption features found in the Cisco IOS provide the ability to secure data communications by encrypting the payload of packets. Once encrypted, the contents of packets cannot be read by utilities such as network analyzers. While encryption provides the benefit of securing network communications, it also comes with a cost in the form of higher router CPU utilization.

While a variety of data encryption techniques exist, Cisco routers provide the ability to secure data using two primary technologies: Cisco Encryption Technology (CET) and IPSec. CET is an older proprietary encryption method developed by Cisco, and has been phased out of the Cisco IOS as of version 12.1. IPSec is an encryption framework standardized by the IETF, with input from a number of vendors, including Cisco. Not only is IPSec an Internet standard, it also provides interoperable encryption between the equipment of different vendors.

Encryption techniques are most commonly employed to securely transmit data over untrusted public networks like the Internet. For example, data encryption is used to implement what are known as Virtual Private Networks (VPNs), using the Internet rather than dedicated WAN links as a backbone to connect locations. Imagine a situation in which a company has two locations, each of which is connected to the public Internet using Cisco routers whose IOS images support IPSec. The company uses the IPSec capabilities of the routers to form a secure encrypted tunnel over the Internet. When a user from Office 1 attempts to communicate with a server in Office 2, data will be encrypted at the Office 1 router, sent over the Internet as a regular datagram (with an encrypted payload), and then decrypted at the Office 2 router. The end stations need not be aware of the encryption, nor have any encryption capabilities of their own.
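
To make this scenario more concrete, the following is a minimal sketch of what the Office 1 router's IPSec configuration might look like using pre-shared keys. All addresses, names, key values, and access list numbers are hypothetical, the transform set shown is only one of many possible choices, and exact syntax varies between IOS versions.

    ! Phase 1 (ISAKMP) policy and pre-shared key for the Office 2 peer
    crypto isakmp policy 10
     encryption 3des
     hash sha
     authentication pre-share
     group 2
    crypto isakmp key MYSECRETKEY address 203.0.113.2
    !
    ! Phase 2 transform set defining how traffic is encrypted
    crypto ipsec transform-set OFFICE-TSET esp-3des esp-sha-hmac
    !
    ! Crypto map tying together the peer, transform set, and interesting traffic
    crypto map OFFICE-VPN 10 ipsec-isakmp
     set peer 203.0.113.2
     set transform-set OFFICE-TSET
     match address 101
    !
    ! Only traffic between the two office LANs is encrypted
    access-list 101 permit ip 192.168.1.0 0.0.0.255 192.168.2.0 0.0.0.255
    !
    interface Serial0/0
     crypto map OFFICE-VPN

The Office 2 router would carry a mirror-image configuration, with the peer address, pre-shared key, and access list reversed accordingly.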

While the ability to encrypt traffic using Cisco routers is a useful feature, it can also have a considerable impact on router performance, especially CPU utilization. As a general rule, Cisco recommends that encryption not be configured on routers whose CPU utilization is already above 40%.

Cisco IOS Network Traffic Compression Features

In some network environments, such as those with very limited bandwidth, one IOS feature worth taking a look at is the ability to compress data. Compression algorithms included with the IOS allow network traffic to be compressed in different ways, potentially making more bandwidth available on slow links. For example, if a router were to compress data in a ratio of 2:1, it would effectively double the available bandwidth on a link. While this may immediately sound like a good idea, it’s worth noting that compression comes with a cost, usually in the form of much higher router CPU utilization. The two main compression algorithms used on Cisco routers are known as Stacker and Predictor. While Stacker generally provides higher compression ratios, Predictor tends to be less CPU intensive, though at the cost of additional router memory.

Two of the most common types of compression used on a Cisco router are known as Layer 2 payload compression and TCP header compression. Layer 2 payload compression functions on serial links and is used to compress the entire payload of a frame (a PPP frame, for example), though not the frame header itself. This can result in significant reductions in the size of the payload, although the savings will vary depending upon how compressible the contents of the payload are. For example, payload compression will have very little impact on frames carrying a file that is already highly compressed, such as a zip archive. Payload compression is generally used on links operating at speeds between 56 kbps and 1.544 Mbps.
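
As a rough illustration, Layer 2 payload compression might be enabled on a PPP serial link as in the sketch below. The interface name is hypothetical, `compress predictor` could be substituted where lower CPU overhead is preferred, and both ends of the link must be configured to match.

    interface Serial0/0
     encapsulation ppp
     ! Enable Stacker payload compression on this PPP link
     compress stac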

TCP header compression (outlined in RFC 1144) works differently. In any given TCP session between two systems, many of the fields in the TCP header do not change, which leads to a great deal of redundant data being passed over a link. TCP header compression helps to reduce this traffic by removing some of the redundant fields found in the TCP header. It keeps track of the header by storing a copy of it on either side of a link on which compression is enabled. TCP header compression will often remove up to 35 bytes from each transmitted segment. It is commonly implemented only on very slow links, such as those running at speeds below 32 kbps.
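
Enabling TCP header compression is a one-line change per interface, as in this minimal sketch (interface name hypothetical); the command must be applied at both ends of the link.

    interface Serial0/0
     ! Compress TCP/IP headers as per RFC 1144 (Van Jacobson compression)
     ip tcp header-compression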

Other compression techniques supported on Cisco routers include:

  • Microsoft Point-to-Point Compression (MPPC), another method of compressing data on PPP links.
  • FRF.9, which can be used to compress data between endpoints on Frame Relay PVCs.
  • Real-Time Transport Protocol (RTP) header compression (also known as compressed RTP or cRTP), which compresses the headers of RTP packets, such as those used with VoIP. A brief configuration sketch of FRF.9 and cRTP follows this list.
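
As a rough sketch, the latter two might be enabled as follows; the interface numbers and DLCI are hypothetical, and both ends of each link must agree on the settings.

    ! FRF.9 payload compression on a Frame Relay PVC
    interface Serial0/0.1 point-to-point
     frame-relay interface-dlci 100
     frame-relay payload-compression FRF9 stac
    !
    ! RTP header compression (cRTP) on a PPP link carrying voice
    interface Serial0/1
     encapsulation ppp
     ip rtp header-compression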

As mentioned earlier, compression does have one major downside, in that it tends to be CPU intensive. Cisco generally recommends that compression be disabled if router CPU utilization is consistently above 65%. Under these circumstances, one alternative is to offload compression processing to an Advanced Integration Module (AIM) hardware card, if supported by your router model.

Queuing Network Traffic on Cisco Routers Using Weighted Fair Queuing

Weighted fair queuing (WFQ) is another queuing technique, and is the default used on router interfaces with less than 2.048 Mbps of bandwidth. Weighted fair queuing is concerned with ensuring that all traffic flows receive predictable bandwidth to meet their needs. For example, it will place smaller, interactive traffic (like telnet) at the front of a queue. Traffic is placed in the queue according to when the last bit is received, rather than the first. This helps to ensure that larger packets do not interfere with smaller packets, starving them of bandwidth.

Traffic is classified according to a number of factors before being placed in a queue. For example, WFQ is aware of quality of service (QoS) techniques like IP Precedence. This allows it to characterize traffic according to its packet-defined priority, and ensure that it is allocated an acceptable level of bandwidth. By the same token, on Frame Relay interfaces, WFQ will take into account congestion indicators like the FECN and BECN bits, as well as the discard eligibility (DE) bit of frames.
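
Because WFQ is the default on slower serial interfaces, it often requires no configuration at all. Where it has been disabled or needs tuning, it can be enabled as in the sketch below; the interface name and congestive discard threshold are hypothetical values.

    interface Serial0/0
     ! Enable WFQ, dropping new packets from a flow once 64 are queued
     fair-queue 64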

Queuing Network Traffic on Cisco Routers Using Custom Queuing

Unlike priority queuing (which will always empty higher-priority queues first), custom queuing allows you to assign bandwidth to different types of traffic based on criteria like protocol and port number. Custom queuing works in a round-robin fashion, moving from queue to queue, ensuring that each is allocated its apportioned bandwidth.

Custom queuing defines a transmission size for each queue in bytes. For example, FTP traffic may be assigned a byte count of 3000, while telnet and HTTP traffic are each assigned a byte count of 1500. This would effectively split the bandwidth such that FTP would receive approximately 50% of the bandwidth, while telnet and HTTP would each receive approximately 25%. In cycling through the queues, custom queuing would access the FTP queue, send 3000 bytes of data (rounded up to complete a packet if necessary), and then move on to the telnet queue, where it would send 1500 bytes, and so on. Up to 16 custom queues can be defined for a given interface.
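
A minimal sketch of this example might look like the following; the list number, queue numbers, and interface name are hypothetical.

    ! Classify traffic into queues by TCP port
    queue-list 1 protocol ip 1 tcp ftp
    queue-list 1 protocol ip 2 tcp telnet
    queue-list 1 protocol ip 3 tcp www
    ! Assign each queue its per-cycle byte count
    queue-list 1 queue 1 byte-count 3000
    queue-list 1 queue 2 byte-count 1500
    queue-list 1 queue 3 byte-count 1500
    !
    interface Serial0/0
     custom-queue-list 1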

Queuing Network Traffic on Cisco Routers Using Priority Queuing

Another queuing technique that can be employed on Cisco routers is priority queuing, which uses four different queue levels: high, medium, normal, and low. Traffic can be allocated to these queues based on criteria like protocol or port number. For example, a company might allocate VoIP traffic to the high queue, IPX traffic to the medium queue, telnet traffic to the normal queue, and FTP traffic to the low queue.

Priority queuing works by emptying each queue according to its level of precedence: the high queue is always emptied first, followed by the medium, normal, and low queues. Once the high queue is empty, the router will begin servicing the medium queue, and so forth. However, if packets arrive in a higher-priority queue, the router will immediately switch back to servicing that queue first. As such, higher-priority traffic has the ability to monopolize access to the network; if the high queue is always full, it will continually be serviced at the expense of the other queues, and packets in lower-priority queues may be delayed or dropped.
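
A sketch of the example above follows. The list and access list numbers are hypothetical, as is the use of the standard RTP port range to identify VoIP traffic; note that a default queue should always be specified for unclassified traffic.

    ! VoIP (RTP) traffic to the high queue, matched via an access list
    access-list 101 permit udp any any range 16384 32767
    priority-list 1 protocol ip high list 101
    ! IPX to medium, telnet to normal, FTP to low
    priority-list 1 protocol ipx medium
    priority-list 1 protocol ip normal tcp telnet
    priority-list 1 protocol ip low tcp ftp
    priority-list 1 default normal
    !
    interface Serial0/0
     priority-group 1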

Queuing Network Traffic on Cisco Routers Using FIFO

FIFO is by far the simplest queuing technique, and is the default method used on Cisco router interfaces that have more than 2.048 Mbps of bandwidth available. When an interface is using FIFO, packets are added to a single queue and processed in the order in which the router receives them. FIFO provides a few key benefits. First and foremost, it is not very computationally taxing on the router, and its behavior is entirely predictable. This makes it a reasonable choice for interfaces that have a great deal of bandwidth available and a relatively light load.

On the downside, FIFO does nothing to recognize that traffic from one application may be more time-sensitive than traffic from another. For example, time-critical SNA traffic might experience delays caused by large FTP data transfers, since the router will simply process packets in the order they are received. This can also be a problem for applications like VoIP, which rely on data reaching its destination in a timely manner. FIFO queuing on congested networks can lead to latency, jitter, and even packet loss.
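
Because FIFO is the default on faster interfaces, it is rarely configured explicitly. On a slower interface where WFQ is the default, FIFO can be restored by disabling fair queuing, as in this sketch (interface name hypothetical).

    interface Serial0/0
     ! Disable WFQ, reverting the interface to FIFO queuing
     no fair-queue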

Cisco Router Queuing Methods

Like all network equipment, a router is limited in terms of its available resources. While memory and interface speeds may differ depending upon a router’s model and the specific network environment, optimization of resources is often necessary to help ensure that traffic reaches its destination in a timely manner. To make better use of available bandwidth, a router is capable of using different queue scheduling mechanisms that control how traffic is forwarded out an interface. These mechanisms can ultimately be altered to give one type of traffic a higher priority than another, change the ways in which applications have access to bandwidth, and so forth. This is not unlike waiting in line at a bank. Some banks serve all customers on a first-come, first-served basis, regardless of what they need. In others, special queues exist for business customers, regular individuals, or general customer service. While different queuing methods will have varying degrees of perceived “fairness”, the ability to configure them allows resources to be allocated according to need.

When implementing a queuing mechanism on a router, the goal is to try to reduce congestion, such that applications have an appropriate level of access to bandwidth. Queuing allows you to control the order in which traffic should be prioritized for sending. In some cases, the queuing technique used is fairly simple: packets are simply forwarded out an interface in the order that the router receives them. While this sounds fair, the method is not always optimal. It might allow a particular application to monopolize bandwidth, at the expense of a more mission-critical data stream. Cisco supports four main queuing methods on its routers, each with associated advantages and disadvantages. These include:

  • First In, First Out (FIFO)
  • Weighted Fair Queuing
  • Priority Queuing
  • Custom Queuing

Each of these queuing methods is looked at in more detail in its own article.

Planning VoIP Networks

When planning to implement VoIP on a network, a network designer needs to pay particular attention to ensuring that sufficient bandwidth is available to support voice traffic on WAN links. In previous sections you already learned that the codec chosen will impact the bandwidth requirements associated with a voice call. However, other elements that need to be considered include the overhead associated with the RTP, UDP, and IP headers, as well as Layer 2 framing. In order to determine the required bandwidth, two main values first need to be calculated: the size of a voice packet, and the voice packets-per-second rate.

To calculate the size of a voice packet, you must add together the size of the RTP, UDP, and IP headers, as well as the payload size and Layer 2 framing overhead. For example, let’s say that you intend to use the G.729 codec without RTP header compression over a PPP link. In this case, the combined RTP/UDP/IP header size would be 40 bytes, as you learned earlier. The payload size would be 20 bytes, and the PPP framing an additional 6 bytes, adding up to 66 bytes total. Converting this number to bits yields a packet size of 66 x 8, or 528 bits.

To determine the packets-per-second rate, divide the codec bit rate by the payload size of a packet, expressed in bits. Earlier in this chapter you learned that the G.729 codec uses a bit rate of 8 kbps, or 8000 bps. The payload of a G.729 voice packet is 20 bytes, or 160 bits. Therefore, the packets-per-second rate is 50 (8000 divided by 160).

Finally, to determine the bandwidth per call, multiply the total voice packet size by the number of packets per second. In this case, the calculation is 528 bits for the total packet size, multiplied by a packets-per-second rate of 50, for a total of 26,400 bps (26.4 kbps). In other words, a call using the G.729 codec and no header compression requires approximately 26.4 kbps of bandwidth. When header compression is used (assuming a compressed header size of 2 bytes rather than 40), the same calculation yields a bandwidth requirement of 11.2 kbps, so IP RTP header compression is definitely worth exploring. For example, if 256 kbps of bandwidth on the WAN link were dedicated to VoIP traffic, the link could handle approximately 9 simultaneous calls without IP RTP header compression, or 22 with it. Don’t forget that voice conversations are duplex: in other words, with header compression enabled, a total of 11 simultaneous conversations could occur across the WAN link, or 11 voice data streams in each direction.
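
Summarizing the arithmetic for the G.729 example (the cRTP case assumes the 2 byte compressed header noted above):

    packet size        = (40 + 20 + 6) bytes x 8 bits    = 528 bits
    packets per second = 8000 bps / 160 bits             = 50 pps
    bandwidth per call = 528 bits x 50 pps               = 26,400 bps (26.4 kbps)
    with cRTP          = (2 + 20 + 6) bytes x 8 x 50 pps = 11,200 bps (11.2 kbps)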

When planning a network to carry voice traffic, the numbers above provide a fairly accurate estimation of WAN bandwidth requirements. However, as you learned earlier, a typical voice call may consist of anywhere between 30 and 40 percent silence. In order to take advantage of these silences (and not transmit packets of “silence”), Voice Activity Detection (VAD) can be implemented using Cisco CallManager. When enabled, periods of silence are suppressed, and not packetized for transmission across the network. This can result in substantial bandwidth savings, which would subsequently be available for other network application traffic.

Packet Loss and Echo on VoIP Networks

Packet loss can occur on any network for a variety of reasons, including congested links (packets dropped when buffers are full), routing problems, equipment misconfiguration, and more. Because voice traffic uses UDP as its transport protocol, dropped packets are simply lost, and are not resent. On a voice call, this manifests as speech that sounds cut short or clipped: portions of the conversation might be lost, such that a user speaking “Hello, may I speak to Dan?” might sound more like “Hello, may… Dan?”.

The codecs outlined in previous articles are typically capable of concealing (via a DSP) up to 30 ms of lost voice traffic without any noticeable impact on quality. However, since Cisco voice packets use a 20 ms payload, effectively only a single lost packet can be concealed at any point in time. To minimize packet loss issues, it is important that the underlying IP network is properly designed (including redundancy), and that QoS techniques are implemented effectively.

Another issue that impacts voice calls is one that you have likely already experienced: the sound of your own voice echoing back to you a short time after speaking. Echo occurs when part of the transmitted voice signal “leaks” back onto the return path of a call. To compensate, most codecs use built-in echo cancellation techniques. On a Cisco gateway (such as a router), echo cancellation settings are configured by default, but can be tuned to compensate for different degrees of echo experienced by users.
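
On an IOS voice gateway, echo cancellation is tuned per voice port, as in the following sketch; the port number and coverage value are hypothetical.

    voice-port 1/0/0
     ! Echo cancellation is enabled by default; coverage sets the tail length (ms)
     echo-cancel enable
     echo-cancel coverage 32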

Variable Delays Associated with VoIP Traffic

The bullet points below describe each type of variable delay encountered with VoIP traffic, and explain how these issues can be compensated for where possible.

  • Queuing Delay. When a WAN interface is congested, traffic must be queued using one of the various methods looked at in this chapter. Although a method like LLQ can prioritize voice traffic, another issue exists. Consider a situation in which a router interface currently has no priority traffic waiting to be sent, and begins to forward a large frame containing FTP data. If a voice packet then arrives, it cannot be sent until the FTP frame (which is already being serialized) is completed. As such, the voice packet must wait, incurring a delay that may be unacceptable. For example, if the voice packet is stuck behind a 1500 byte frame being sent over a 64 kbps link, it will be subject to a delay of approximately 187 ms, which (in conjunction with other delay factors) would push it well beyond acceptable limits. To account for queuing delays, a technique called Link Fragmentation and Interleaving (LFI) is used on links with speeds below 768 kbps. When implemented, a router will fragment larger packets (like the FTP packet in this example) into smaller pieces, and then “interleave” the voice packets between the fragments. As such, a voice packet need not wait for an entire large packet to be sent. When choosing a fragment size, aim for approximately a 10 ms serialization delay per fragment, while ensuring that voice packets themselves are not fragmented. A configuration sketch follows this list.
  • Dejitter Delay. As mentioned earlier, jitter occurs when packets do not arrive when expected. When dejitter buffers are configured at the receiving end of a voice network, packets that arrive with timing variations are buffered, and then played out with a constant delay. The use of dejitter buffers does add some delay to the voice network, so the buffers should generally be kept small. Other QoS techniques looked at earlier in this section help to reduce the overall exposure of voice traffic to jitter, but dejitter buffers are a specific solution for minimizing jitter at the receiving end of a VoIP network.
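
A rough sketch of LFI using Multilink PPP interleaving appears below. The interface names, addressing, and group number are hypothetical, and exact multilink command syntax varies between IOS versions.

    interface Multilink1
     ip address 10.1.1.1 255.255.255.252
     ppp multilink
     ! Fragment large packets so each fragment serializes in roughly 10 ms
     ppp multilink fragment-delay 10
     ! Allow small (voice) packets to be interleaved between fragments
     ppp multilink interleave
    !
    interface Serial0/0
     encapsulation ppp
     ppp multilink
     ppp multilink group 1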

It’s important to keep the concept of “end-to-end” in mind when calculating delay. Don’t forget that if a packet needs to pass through 3 routers, and each router adds 10 ms of queuing delay, the packet incurs an additional 30 ms of delay between the source and destination. Calculations of end-to-end delay will be looked at in more detail shortly, in the planning section.