For better or for worse, Layer 3 switching is implemented in many different ways by different vendors. Even in Cisco’s line of Catalyst switches, many different methods are used depending upon the hardware series and model. The different methods used by Catalyst switches will be looked at shortly. For now, let’s reexamine the process by which a traditional router forwards traffic.
Imagine a very simple network that consists of two hosts that are part of different VLANs on the same switch. Each VLAN is connected to a different port on a traditional router. Recall that even though the two hosts are connected to the same switch, routing is necessary in order for them to communicate, since they are members of different VLANs. Host A will create a packet listing itself as the source IP address, and Host B as the destination IP address. It will then frame the packet, listing itself as the source MAC address, and router interface E0 as the destination MAC address. Once complete, the frame is forwarded across the network to the router.
When the frame arrives at the router’s E0 interface, the CRC is verified, the MAC framing is stripped away, and the packet is passed to the Network Layer. At this layer the router verifies the IP header checksum, and then examines the routing table to determine where the packet should be forwarded next. After decrementing the packet’s TTL by 1 (and recalculating the header checksum accordingly), the router reframes the packet with its E1 interface as the source MAC address, and the MAC address of Host B as the destination. Once complete, the frame is forwarded back to the switch, and ultimately to Host B. All this work for just one routed packet!
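The per-hop steps just described can be sketched in a few lines of Python. This is a deliberately simplified model with hypothetical names and structures: a real router performs a longest-prefix match rather than an exact lookup, and does this work in specialized hardware.

```python
# A toy model of one routed hop: strip the old framing, look up the next hop,
# decrement the TTL, and reframe. Field names are illustrative, not real.

def route_packet(frame, routing_table, router_macs):
    """Simulate forwarding a frame that arrived on a router interface."""
    # Step 1: the Layer 2 framing is stripped; only the IP packet remains.
    ip = dict(frame["ip"])
    # Step 2: consult the routing table (simplified to an exact-match lookup).
    out_iface, next_hop_mac = routing_table[ip["dst"]]
    # Step 3: decrement the TTL by 1; discard the packet if it expires.
    ip["ttl"] -= 1
    if ip["ttl"] <= 0:
        return None
    # Step 4: reframe with the outbound interface's MAC as the new source.
    return {"src_mac": router_macs[out_iface], "dst_mac": next_hop_mac, "ip": ip}

# Host A sends to Host B; the router reframes out its E1 interface.
frame = {"src_mac": "AA:AA", "dst_mac": "E0:MAC", "ip": {"dst": "10.0.2.5", "ttl": 64}}
routed = route_packet(frame, {"10.0.2.5": ("E1", "BB:BB")}, {"E1": "E1:MAC"})
print(routed["src_mac"], routed["dst_mac"], routed["ip"]["ttl"])  # E1:MAC BB:BB 63
```

Note that the source and destination IP addresses never change along the path; only the MAC framing and TTL are rewritten at each hop.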
Ultimately, every routed packet is forwarded in this manner on a traditional IP network. Even though many packets may ultimately need to be forwarded between the same two hosts, this process must still be completed. As you’ll see shortly, Layer 3 switching techniques go a long way towards reducing the amount of overhead associated with routing packets between networks.
During your career in internetworking, you will continuously hear talk of equipment functioning at different “layers”. As a recurring theme since Chapter 1, you should now be aware that the layers being referred to are those in the OSI model. While the concept of a switch or a bridge as a Layer 2 device or a router as a Layer 3 device should now seem elementary, it’s easy to become confused by the mish-mash of marketing lingo that pervades the industry. What is a Layer 4 switch, for example? Well, it depends on whom you ask. Ultimately, vendors tend to use the different layers of the OSI model to represent intelligent decision-making features in their equipment. In some cases, as with Layer 3 switching, the term used represents a clearly defined and valid function. In others, like Layer 4 switching, what the term actually means can be a little less clear.
At the most basic level, the role of a Layer 3 switch is more or less identical to that of a router. Recall that a Layer 2 switch makes forwarding decisions based on the destination MAC address of a frame. In the same way, a Layer 3 switch is also capable of carrying out the functions of a router, making forwarding decisions based on the destination IP address of a packet. For all intents and purposes, a Layer 3 switch is basically a traditional Layer 2 switch that is also capable of routing traffic between different subnets or networks. The big difference with a Layer 3 switch is usually speed, namely the speed at which it is capable of routing. Recall that Layer 2 switching is typically a much faster operation than routing, if only because there is less work involved in the forwarding process. With a Layer 3 switch, routing can often occur at close to the same forwarding rates as those associated with Layer 2 switching.
EIGRP operates through the use of four key technologies:
Neighbor Discovery. Similar to link state protocols, EIGRP routers also periodically send out “hello” packets, letting neighboring routers know that they are functioning and available. On LANs and point-to-point links, these messages are sent out as multicasts every 5 seconds. On a multipoint network (like Frame Relay) with speeds of T1 or lower, these packets are sent every 60 seconds. As long as these “hello” packets are received, an EIGRP router assumes that its neighbors are available for the purpose of exchanging routing table information. If three “hello” periods pass without receiving a “hello” message, a router will consider its neighbor unavailable and make the necessary routing table changes. On a LAN, this can happen in as little as 15 seconds (3 times the “hello” message interval).
Reliable Transport Protocol. The Reliable Transport Protocol (RTP) is responsible for ensuring that EIGRP updates actually reach neighboring routers, and in the correct order. EIGRP updates are sent out as multicasts to address 224.0.0.10. When a neighboring router receives an update, RTP requires that an acknowledgement be sent. This is different from many routing protocols, which send update traffic in a connectionless manner.
Diffusing Update Algorithm. DUAL is the algorithm used by EIGRP to ensure fast convergence and to guarantee that the most efficient loop-free route advertised by neighbors is the one added to a router’s routing table. DUAL uses the lowest calculated metric to determine the best path to a destination, referred to as the feasible distance; the neighbor providing that best path is known as the successor, and is used as the next hop router to which packets will be sent. Neighbors whose own advertised (reported) distance to the destination is lower than the feasible distance are known as feasible successors, and serve as guaranteed loop-free backup paths. When a topology change occurs, an EIGRP router can immediately promote a feasible successor to become the new next hop. In cases where no feasible successor exists (that is, all reported distances are higher than the feasible distance), the EIGRP router must recompute the route.
Protocol-Dependent Modules. Because it is capable of routing multiple protocols (IP, IPX, and AppleTalk), EIGRP implements what are known as protocol-dependent modules. For example, the IP EIGRP module will automatically redistribute IGRP routes into EIGRP and vice versa. Similarly, AppleTalk EIGRP will redistribute routes into and out of AppleTalk RTMP.
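The DUAL terminology above can be made concrete with a short sketch. The feasibility condition is the key idea: a neighbor qualifies as a feasible successor only when its own reported distance to the destination is lower than this router’s feasible distance. The function and tuple layout below are illustrative, not Cisco’s implementation.

```python
# A sketch of DUAL route classification. Each neighbor is described by a
# tuple: (name, reported_distance, total_metric_via_that_neighbor).

def classify_neighbors(neighbors):
    """Return (feasible_distance, successor, feasible_successors)."""
    # Feasible distance: the lowest total metric to the destination.
    fd = min(total for _, _, total in neighbors)
    # The successor is the neighbor providing that best total metric.
    successor = min(neighbors, key=lambda n: n[2])[0]
    # Feasibility condition: reported distance must be BELOW the
    # feasible distance, which guarantees the path is loop-free.
    feasible_successors = [
        name for name, reported, total in neighbors
        if reported < fd and name != successor
    ]
    return fd, successor, feasible_successors

nbrs = [("R2", 10, 25), ("R3", 20, 30), ("R4", 40, 45)]
fd, best, backups = classify_neighbors(nbrs)
print(fd, best, backups)  # 25 R2 ['R3'] -- R4's reported distance (40) fails the test
```

If R2 fails, the router can switch to R3 immediately; with no feasible successor (as with R4 alone), DUAL would instead have to query neighbors and recompute.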
EIGRP offers greater flexibility, reliability, and better convergence times than a traditional distance-vector protocol. One limiting factor is that EIGRP is proprietary to Cisco – as such, EIGRP is limited to networks running Cisco equipment.
While IGRP might be a better solution than RIP when it comes to scalability, EIGRP takes things many steps further. First of all, EIGRP is classless, meaning that it supports the use of VLSM. Unlike IGRP, EIGRP supports the routing of multiple protocols, including IP, AppleTalk, and IPX. EIGRP is usually described as a hybrid protocol, meaning that it displays characteristics of both a distance vector and link state protocol.
EIGRP uses the same metrics as IGRP in making its routing decisions – bandwidth, delay, reliability, load, and MTU. The default metrics used are again the same, bandwidth and delay. However, for a more granular level of control, EIGRP multiplies each of the metrics by 256 before performing the calculation of the composite metric. EIGRP was designed to make much better use of bandwidth, and to allow routers to have a much better awareness of neighboring routers.
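Using the default K values (where only bandwidth and delay count), the scaled composite metric can be computed directly. The helper below is a worked sketch with illustrative inputs: bandwidth is the slowest link on the path in kbps, and delay is the cumulative path delay in microseconds.

```python
# EIGRP composite metric with default K values (K1 = K3 = 1):
#   metric = 256 * (10^7 / min_bandwidth_kbps + cumulative_delay_usec / 10)
# The factor of 256 is the scaling EIGRP applies on top of the IGRP formula.

def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
    bw_term = 10**7 // min_bandwidth_kbps   # inverse of the slowest link
    delay_term = total_delay_usec // 10     # delay in tens of microseconds
    return 256 * (bw_term + delay_term)

# A path whose slowest link is a T1 (1544 kbps) with 40,000 usec total delay:
print(eigrp_metric(1544, 40000))  # 2681856
```

Because every term is multiplied by 256, EIGRP can distinguish between paths that IGRP’s coarser metric would treat as equal.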
Instead of sending its entire routing table out at regular intervals, an EIGRP router instead sends out only partial updates, and even then, only when a route changes. Obviously this makes better use of the available network bandwidth. An EIGRP router also has a more complete view of the network than a typical distance vector protocol – in addition to its own routing table, it maintains a topology table containing the routes advertised by its neighboring routers. When an EIGRP router cannot find a route to a network based on all the information it currently has, it sends out a query to other routers, which is propagated until a route is found.
Although RIPv2 represents a significant improvement over the original version, RIPv2 is still a routing protocol for IPv4 networks only. Because of this, a new version of RIP, referred to as RIPng (or RIP version 3), has been developed in order to bring this popular distance vector routing protocol to IPv6 networks. In case you’re curious, the “ng” in RIPng stands for “next generation”.
RIPng functions in a manner almost identical to RIPv2, though with a couple of key differences. The first is that instead of using IPv4 addresses in its update messages, RIPng uses IPv6 addresses and prefixes. The second change is that when a RIPng router needs to communicate with other RIPng routers, it uses a special multicast address (FF02::9) as the destination address.
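Python’s standard `ipaddress` module can confirm the nature of the RIPng destination address just mentioned: FF02::9 parses as an IPv6 multicast address.

```python
# Verify that the RIPng destination address is an IPv6 multicast address
# using only the standard library.
import ipaddress

addr = ipaddress.ip_address("FF02::9")
print(addr.version)       # 6
print(addr.is_multicast)  # True -- FF00::/8 is the IPv6 multicast range
```

The FF02:: prefix in particular marks link-local scope, which is exactly what a routing protocol wants: updates reach neighbors on the local link and are never forwarded beyond it.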
As of this writing, RIPng was still not a finalized Internet standard. It is currently a proposed standard in the RFC process, but Cisco already supports the protocol in their IPv6 IOS images.
RIPv2 is the newer, enhanced version of the RIP routing protocol, and is specified in RFC 1723. In many ways, this newer version is still very similar to its predecessor – it is still a distance vector protocol that uses hop count as its metric (the hop count limit is still 15), and it still has a default administrative distance of 120. However, version 2 also introduces a number of features not found in the original version. Firstly, RIPv2 is classless; this means that it can be used on networks that employ variable-length subnet masking (VLSM). This is possible because RIPv2 includes the subnet mask associated with a destination network in its routing table updates. Where routing table updates were broadcast in RIP version 1, RIPv2 instead uses multicasts to send updates – specifically, a router will send updates to the multicast address 224.0.0.9.
RIPv2 is also capable of employing authentication between neighboring routers. This is another feature not found in the original version. You may be asking why authentication might be an issue when it comes to routing table updates. Remember that a RIPv1 update was no more than a broadcast, and that routers completely trust the information provided by neighbors. Now, imagine how easy it would be for anyone to set up another RIP router on a network (even versions of Windows can be configured as a RIP router), and begin broadcasting all sorts of incorrect routing table information! It certainly wouldn’t take long to really mess up those RIP routing tables. With RIP version 2, authentication can be enabled on any router interface using either plain text or MD5 authentication. If authentication is enabled, a router will only accept updates from routers whose updates contain the correct authentication string.
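The MD5 variant works on a keyed-hash principle: both routers share a secret key, and a digest computed over the update plus the key travels with the update. The sketch below illustrates the idea only; it is not the actual RIPv2 wire format, and the function names and key are invented for the example.

```python
# A loose model of keyed-MD5 authentication: the sender hashes the update
# together with a shared secret, and the receiver recomputes the digest
# before trusting the update. Not the real RIPv2 packet layout.
import hashlib

SHARED_KEY = b"s3cret"  # the authentication string configured on both routers

def sign_update(update_bytes, key=SHARED_KEY):
    return hashlib.md5(update_bytes + key).hexdigest()

def accept_update(update_bytes, digest, key=SHARED_KEY):
    # A router only accepts the update if the digest matches its own key.
    return sign_update(update_bytes, key) == digest

update = b"route 10.1.0.0/16 metric 3"
tag = sign_update(update)
print(accept_update(update, tag))                    # True
print(accept_update(update, tag, key=b"wrong-key"))  # False
```

The key itself never crosses the wire, so a rogue router that does not know the secret cannot produce a digest the legitimate routers will accept.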
IGMP Snooping is another Layer 2 function that helps to better manage the multicast traffic that a switch comes into contact with. Quite simply, not every environment is going to consist of Cisco equipment only. Routers and switches from other manufacturers may be present on the network, and these will be incapable of using the CGMP protocol. However, many vendors do implement a feature known as IGMP Snooping on their switches to help reconcile some of the multicast traffic issues mentioned earlier. A variety of Cisco Catalyst switch models do support IGMP Snooping, but it is important to recognize that at any given point in time, a Cisco switch can only be configured for either CGMP or IGMP Snooping – not both simultaneously.
As the name suggests, IGMP Snooping is a method that actually “snoops” or inspects IGMP traffic on a switch. When enabled, a switch will watch for IGMP messages passed between a host and a router, and will add the necessary ports to its multicast table, ensuring that only the ports that require a given multicast stream actually receive it. Unfortunately, IGMP Snooping suffers from one major drawback, namely the need for the switch to inspect all IGMP traffic, on top of its other responsibilities. However, in environments that do not support CGMP, IGMP Snooping provides a solid alternative to having all multicast traffic flooded to all ports.
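The snooping behavior just described amounts to maintaining a group-to-ports table learned from IGMP membership reports. The toy class below models the idea; the structure is illustrative and not any vendor’s actual implementation.

```python
# A toy model of IGMP Snooping: the switch inspects membership reports,
# records which ports joined which group, and then forwards a given
# multicast only to those ports instead of flooding everywhere.

class SnoopingSwitch:
    def __init__(self, num_ports):
        self.ports = range(1, num_ports + 1)
        self.groups = {}  # multicast group address -> set of member ports

    def snoop_report(self, port, group):
        """Called when an IGMP membership report is seen on a port."""
        self.groups.setdefault(group, set()).add(port)

    def forward_multicast(self, group):
        # With no snooped state for a group, fall back to flooding all ports.
        return sorted(self.groups.get(group, set(self.ports)))

sw = SnoopingSwitch(num_ports=8)
sw.snoop_report(port=3, group="224.1.1.1")
sw.snoop_report(port=7, group="224.1.1.1")
print(sw.forward_multicast("224.1.1.1"))  # [3, 7] rather than all 8 ports
```

The cost mentioned above shows up in `snoop_report`: every IGMP packet crossing the switch must be inspected, which is extra work on top of normal Layer 2 forwarding.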
CGMP is a proprietary Cisco protocol and Layer 2 multicasting feature that works in conjunction with IGMP in order to make the forwarding of multicast frames more efficient. Because the protocol is Cisco specific, it only works on Cisco routers and switches. Basically, CGMP is a protocol used by a Cisco router to inform a Catalyst switch of the specific host that wishes to receive a multicast. With information about the specific hosts that wish to receive a multicast, the switch can then filter the multicast traffic, ensuring that only the hosts that require it actually receive it.
You should recall that switches make forwarding decisions based on MAC addresses rather than IP addresses. However, when a host wishes to join a multicast group for the purpose of receiving a transmission, it sends a membership report to the local router. When the router forwards the multicast onto the network, it cannot use the destination MAC address of a single host, because if this were the case, all other hosts that need the multicast would reject it. Instead, the router uses a specially created MAC address that is a variation on the multicast IP address. You don’t need to know the technical details of how this happens for the CCDA, but it is sufficient to say that the MAC address used identifies the multicast stream. When switches receive these frames, the MAC address shows them as being a multicast, and the frames will be forwarded out all ports. However, when CGMP is enabled on both switches and the local router, a different process takes place.
When the router receives the original IGMP message from a host requesting a multicast, this message includes the host’s source IP address and MAC address. Based on this information, the router is able to send a CGMP message to its switches, letting them know both the MAC address of the host and the MAC address of the multicast group. With this information, a switch is able to dynamically add another entry to its MAC address table, specifying that the host’s port should receive traffic destined for the multicast MAC address. Ultimately, this ensures that only the hosts that require the multicast stream actually have it forwarded to them, eliminating the need for the switch to forward the multicast to all ports.
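For the curious, the “specially created MAC address” mentioned above is derived mechanically from the group’s IP address: the fixed multicast prefix 01:00:5E is followed by the low-order 23 bits of the IP address. The helper below demonstrates the mapping (the addresses are illustrative).

```python
# Map an IPv4 multicast group address to its Layer 2 multicast MAC address:
# the prefix 01:00:5E plus the low-order 23 bits of the IP address.

def multicast_ip_to_mac(ip):
    octets = [int(o) for o in ip.split(".")]
    # Keep only the low 23 bits: mask off the top bit of the second octet.
    return "01:00:5E:{:02X}:{:02X}:{:02X}".format(
        octets[1] & 0x7F, octets[2], octets[3]
    )

print(multicast_ip_to_mac("224.1.1.1"))    # 01:00:5E:01:01:01
print(multicast_ip_to_mac("239.129.1.1"))  # 01:00:5E:01:01:01 -- same MAC!
```

Because only 23 of the 28 significant group-address bits survive the mapping, 32 different IP multicast groups share each MAC address, which is why hosts still filter received multicasts at Layer 3.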
Before getting into the details of Layer 2 multicasting protocols, let’s first take a look at how IP multicasting works in a more generic sense. Although IP-based communications have not been looked at in detail yet, you have learned that multicasting is a one-to-many method of transmission. In order to facilitate this, a special class of IP addresses is designated or reserved for multicasts – Class D, or those addresses that fall into the numerical range between 224 and 239 in the first octet of an IP address. For example, the address 224.1.1.1 would be a valid multicast destination address. Later in this book I’ll cover multicast addresses in more detail.
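The Class D rule is easy to check programmatically, and the standard library agrees with the first-octet test described above. The small helper here is illustrative.

```python
# An IPv4 address is a multicast (Class D) destination when its first
# octet falls between 224 and 239 inclusive.
import ipaddress

def is_class_d(ip):
    first_octet = int(ip.split(".")[0])
    return 224 <= first_octet <= 239

# Cross-check the manual rule against the standard library:
for addr in ("224.1.1.1", "239.255.255.255", "192.168.1.1"):
    assert is_class_d(addr) == ipaddress.ip_address(addr).is_multicast

print(is_class_d("224.1.1.1"), is_class_d("192.168.1.1"))  # True False
```

In binary terms the range corresponds to addresses whose first four bits are 1110, which is how `ipaddress` (via the 224.0.0.0/4 block) performs the same check.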
The primary multicast protocol used on the Internet is the Internet Group Management Protocol (IGMP). IGMP is a Network layer protocol whose primary responsibility is managing the groups of hosts that wish to receive a multicast, just like the name suggests. When a user opens a multicast-enabled application on their system, their computer sends an IGMP message to the local router. This message essentially tells the router that it should forward the requested multicast traffic onto this network. In turn, this router contacts its upstream router, telling that router to forward the particular multicast traffic requested. Overall, this ensures that forwarding a multicast to the entire Internet is unnecessary. Only those routers that actually have network hosts that “need” the multicast will have the traffic forwarded to them.
In fact, there may be many hosts on this network that wish to receive the same multicast transmission. In this case, as they open their multicast applications, they will simply begin processing the same frames forwarded by the router – again, the multicast is only sent once, but multiple hosts are listening and accepting the traffic. In order to reduce unnecessary traffic on the network, the local router will periodically send out what is known as an IGMP host membership query. If no systems respond to these queries, the router will ultimately stop forwarding this multicast traffic to the network.
Recall again that by default, a switch will forward multicast frames to all ports. What this means is that even though there are only perhaps two or three hosts that wish to receive the multicast, it will still be forwarded to all systems, most of which will simply discard the traffic. This can place a heavy load on both the hosts and any switches, especially in cases where many different multicast streams need to be forwarded.
In order to deal with this traffic more effectively, Cisco Catalyst switches can use one of two different Layer 2 multicast features – the Cisco Group Management Protocol (CGMP) and IGMP Snooping.
A multicast is a type of transmission in which a single traffic flow is sent to multiple recipients – in other words, a one-to-many technique. In the world of TCP/IP, multicast transmissions are a Network layer concept, using the Internet Group Management Protocol (IGMP) to manage which systems will ultimately receive a multicast, and which routers will forward it.
You may also recall from other articles that switches will, by default, forward all broadcast, multicast, and unknown destination frames to all connected ports. While this may not seem unreasonable at first glance, imagine a multicast being forwarded to literally hundreds of ports when only one or two hosts actually need the data being sent. Obviously this is wasteful, and some technique is required to both reduce the amount of work required by individual switches, and the number of systems that need to process unnecessary traffic.