Network Switches

When you think of a switch, simply consider it to be a bridge with more ports. This higher port density makes switches a practical, high-performance replacement for hubs, though a more expensive one. Much like a bridge, each port on a switch defines a separate collision domain. In this way, a network can be microsegmented into many very small collision domains, especially if each device is connected to its own dedicated port. If multiple systems connect to a single switch port through a hub, those hub-connected systems share one collision domain. Note that while a switch creates a larger number of smaller collision domains, broadcasts and multicasts are still forwarded to all ports. The process by which a switch or bridge forwards broadcast or multicast traffic to all ports is sometimes referred to as “flooding”.

Figure: Switch collision domains.

When a frame enters a switch, the switch looks up the destination hardware address in its MAC table and forwards the frame only to the port where that MAC address was learned. If the switch doesn’t yet know about the destination address (perhaps because a system was only recently turned on), it forwards the frame out all ports except the one on which it arrived, a process also referred to as flooding. Switching is usually handled in hardware by Application-Specific Integrated Circuits (ASICs). These special chips allow switching to take place at what is sometimes referred to as wire speed, offering significantly faster performance than a bridge, which usually implements its forwarding logic in software. Much like a bridge, a switch can also calculate the CRC on a frame to be sure it isn’t corrupt, though this depends on the switching method in use. We’ll look at different switching methods in Chapter 3.
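To make the learning, forwarding, and flooding behaviour concrete, here is a minimal Python sketch of the decision a Layer 2 switch makes for each incoming frame. The class, method names, and MAC addresses are illustrative assumptions only, not any vendor’s actual implementation, and a real switch performs this lookup in ASIC hardware rather than in software.

```python
# Minimal sketch of a learning switch's forwarding decision (illustrative only;
# real switches implement this lookup in ASIC hardware, not software).

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self):
        # MAC table: maps a learned source MAC address to the port it was seen on
        self.mac_table = {}

    def handle_frame(self, src_mac, dst_mac, in_port, all_ports):
        # Learn: associate the sender's MAC with the port the frame arrived on
        self.mac_table[src_mac] = in_port

        # Broadcasts, multicasts, and unknown unicast destinations are flooded
        # out every port except the one the frame arrived on
        is_multicast = int(dst_mac.split(":")[0], 16) & 1  # group bit of first octet
        if dst_mac == BROADCAST or is_multicast or dst_mac not in self.mac_table:
            return [p for p in all_ports if p != in_port]

        # Known unicast destination: forward only to the port where it was learned
        return [self.mac_table[dst_mac]]


switch = LearningSwitch()
ports = [1, 2, 3, 4]
# Host A (port 1) sends to a not-yet-learned host B: the frame is flooded to ports 2, 3, 4
print(switch.handle_frame("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 1, ports))
# Host B replies from port 2; host A is now known, so the frame goes only to port 1
print(switch.handle_frame("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:01", 2, ports))
```

Running the sketch prints [2, 3, 4] for the first frame and [1] for the reply, mirroring the flood-then-forward behaviour described above.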

Tip: Remember that a bridge or switch segments a network into a greater number of smaller collision domains.

Switching significantly increases performance on a LAN, and replacing hubs with switches should be a primary consideration when attempting to improve network performance. In fact, if every device is connected to its own switch port, every device sits in its own collision domain and collisions are eliminated, particularly when ports operate in full-duplex mode. The absence of collisions lets each device make use of 100% of the available bandwidth. Consider the figure below, in which users are connected directly to their own 10 Mbps switch ports and the server is connected to a 100 Mbps port. In this scenario, each user has access to a full 10 Mbps of bandwidth to the server, collision free.
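As a rough back-of-the-envelope check on this scenario, the short sketch below compares the aggregate bandwidth the user ports could demand with the capacity of the server’s port. The port speeds come from the example above; the number of users is an assumption for illustration.

```python
# Back-of-the-envelope bandwidth check for the scenario above (values are illustrative).
user_port_mbps = 10      # each user connects to a dedicated 10 Mbps port
server_port_mbps = 100   # the server connects to a 100 Mbps port
num_users = 10           # assumed number of connected users

aggregate_demand = user_port_mbps * num_users
print(f"Aggregate user demand:   {aggregate_demand} Mbps")
print(f"Server port capacity:    {server_port_mbps} Mbps")
print(f"Oversubscription ratio:  {aggregate_demand / server_port_mbps:.1f}:1")
# With 10 users the aggregate demand equals the server port's capacity (1.0:1),
# so each user can in principle use their full 10 Mbps to the server, collision free.
```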

Figure: Switch with 10 and 100 Mbps ports.

You may have noticed that switches are often described according to an OSI layer – for example, Layer 2 or Layer 3. A Layer 2 switch performs switching based on MAC addresses, as previously described. A Layer 3 switch does this as well, but also includes integrated routing functionality. Layer 3 switching concepts are covered in detail in Chapter 8.

Tip: An Ethernet switch functions in a manner similar to a transparent bridge.

Author: Dan DiNicolo

Dan DiNicolo is a freelance author, consultant, trainer, and the managing editor of 2000Trainers.com. He is the author of the CCNA Study Guide found on this site, as well as many books, including the PC Magazine titles Windows XP Security Solutions and Windows Vista Security Solutions.