Synchronous Data Link Control (SDLC)

SDLC is a data link control protocol developed by IBM in the 1970s for use on serial WAN links in SNA environments, and it later served as the basis for the HDLC protocol standardized by the ISO. Unlike HDLC, in which connected systems operate as equal peers, SDLC follows a hierarchical communications structure made up of primary and secondary stations.

On an SDLC network, a primary station controls the communication process, using a polling mechanism to determine when a secondary station can send data. For example, imagine an SNA network with one central host (a mainframe) and many terminals at remote locations. In this case, the mainframe would act as the primary station, and the terminals as secondaries. As the primary, the mainframe polls the secondary devices, one at a time, to see whether they have data to send. A secondary can only send data when permitted by the primary. If a secondary does have data to send, it waits to be polled by the primary, and then sends as much data as permitted before control passes back to the primary.
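The poll/response discipline can be sketched in a few lines of Python. This is a toy model only: the station names and queued data are invented, and real SDLC frames carry address fields, control fields, and frame check sequences that are omitted here.

    # Toy model of SDLC-style polling; stations and data are hypothetical.
    class Secondary:
        def __init__(self, name, outbound=None):
            self.name = name
            self.outbound = list(outbound or [])  # data queued until polled

        def polled(self):
            # A secondary transmits only in response to a poll.
            data, self.outbound = self.outbound, []
            return data

    class Primary:
        def __init__(self, secondaries):
            self.secondaries = secondaries

        def poll_cycle(self):
            # The primary visits each secondary in turn; control always
            # returns to the primary after a secondary finishes sending.
            for station in self.secondaries:
                for frame in station.polled():
                    print(f"{station.name} -> primary: {frame}")

    terminals = [Secondary("T1", ["inquiry #1"]), Secondary("T2")]
    Primary(terminals).poll_cycle()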

The figure below illustrates an SNA network with SDLC links between a host and two remote locations.

On an SDLC network, the primary station will poll secondary stations one at a time to see whether they have data to transmit. Secondaries can only send data if polled by a primary.

While the vast majority of networks no longer run the SDLC protocol, a number of protocols still in use on networks today were ultimately derived from the foundation it provided. Derivative protocols of SDLC include:

High-Level Data Link Control (HDLC). This protocol, looked at earlier in the chapter, is the default serial interface encapsulation method on Cisco routers.

Link Access Procedure, Balanced (LAPB). A variation of the HDLC protocol that handles framing, error control, and flow control on X.25 networks.

IEEE 802.2 (Logical Link Control). Works with popular LAN protocols like 802.3 (Ethernet) and 802.5 (Token Ring) to provide connectionless and connection-oriented services at the Data Link layer.

Qualified Logical Link Control (QLLC). This protocol provides services at the Data Link layer to allow SNA traffic to be transported over X.25 networks.

Systems Network Architecture (SNA)

Systems Network Architecture (SNA) was developed by IBM in the 1970s as a method to facilitate communication among various IBM products and technologies, but mainly between mainframes and terminal devices. Largely considered a legacy protocol, SNA is still in use on many networks today that require communication with mainframe and AS/400 systems. SNA is more of a communications framework than a single protocol. In fact, its layered architecture formed the basis for the OSI model. The SNA and OSI protocol stacks are compared in the figure below for reference purposes only.

Comparison of the OSI and SNA reference models.

Legacy SNA is used in mainframe environments, such as those running IBM S/370 and S/390 systems. Mainframes are typically used in environments that require very high-end transaction processing, such as aggregated billing systems, databases, and so forth. SNA traffic usually doesn't require huge amounts of bandwidth, often working across links as slow as 9600 bps. SNA traffic also tends to be time-critical, which in the past often led companies to carry it on a dedicated parallel network. On most networks today, however, SNA traffic is prioritized using various traffic queuing techniques, many of which are described in Chapter 14. As a general rule, SNA traffic tends to be very lightweight, a function of the character-based traffic passed between terminals and mainframes by inquiry/response applications.

Four main physical entities are found on a traditional SNA network. These are outlined below, and illustrated in the figure that follows.

Hosts. A host in an SNA environment is typically a large IBM mainframe such as an S/370. This system would be responsible for the centralized processing and storage of data.

Front-end Processors (FEP). A FEP is a communications controller that is used to manage the physical network and related communications links, including those connecting to remote sites.

Cluster Controllers. A cluster controller serves as the connection point for terminals in an SNA environment, handling input/output functions between the host and terminals.

Terminals. A terminal is an end-user device comprised of a keyboard and display screen in mainframe and minicomputer environments. Lacking processing power of its own, a terminal simply acts as an input and display facility, with commands carried out on the mainframe. While hardware terminals like the 3270 were commonplace in the past, these devices have largely been replaced by terminal emulation software that can run on a variety of operating systems.

SNA network environment with both a local and remote location.

SNA traffic can be transmitted over a variety of different Data Link layer technologies, but has traditionally been implemented using Token Ring on LANs, and Synchronous Data Link Control (SDLC) over WAN links. SDLC will be looked at in more detail shortly.
One important issue to keep in mind is that SNA traffic is not routable, which presents an obvious problem in large network environments. One way to work around this limitation is to implement Data Link Switching on network routers, which allows SNA traffic to be bridged across a routed network by creating TCP tunnels between routers defined as peers. The Cisco implementation of Data Link Switching is known as Data Link Switching Plus (DLSw+), and is only available as part of certain IOS feature sets.
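As a rough illustration only, a minimal DLSw+ peering might resemble the following configuration. The peer addresses are invented, and the exact syntax and available options vary by IOS release and feature set.

    ! Router A (peer addresses are hypothetical)
    dlsw local-peer peer-id 10.1.1.1
    dlsw remote-peer 0 tcp 10.2.2.2

    ! Router B
    dlsw local-peer peer-id 10.2.2.2
    dlsw remote-peer 0 tcp 10.1.1.1

Once both routers define each other as remote peers, a TCP connection forms between them, and SNA traffic can be bridged across the routed network.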

ATM Reference Model

ATM maps to the Data Link and Physical layers of the OSI model, but uses its own reference model to describe its functions. This model consists of three layers, which map to the OSI model as shown in the figure below.

The ATM reference model maps to the Data Link and Physical layers of the OSI model.

The lowest layer of the ATM reference model is its Physical layer, which serves the same function as the corresponding layer in the OSI model: defining interfaces, supported media, and so forth. Both the ATM layer and the ATM adaptation layer map to the Data Link layer of the OSI model. The lower of the two, the ATM layer, is responsible for managing virtual circuits and cell relay functions. The ATM adaptation layer is somewhat more complex: not only does it encapsulate upper-layer data into cells, but it also defines the different service classes associated with ATM traffic.

One of the main benefits of ATM as a network transmission technology is its support for Quality of Service (QoS). Different types of ATM traffic are categorized according to the bandwidth and connection characteristics they require. The ATM adaptation layer (AAL) supports four main types of service for ATM cells, as outlined below:

AAL1. AAL1 is a connection-oriented service that provides a constant bit rate (CBR) for the purpose of transporting very time-sensitive data such as voice or video traffic. AAL1 traffic requires that timing be synchronized between the ultimate source and destination endpoints.

AAL2. AAL2 is also a connection-oriented service that requires clocking between a sender and receiver, but is meant for traffic that is more intermittent or “variable” in nature. This makes it highly applicable to voice traffic, as it usually doesn’t have a constant data flow.

AAL3/4. AAL3/4 provides both connection-oriented and connectionless delivery services. Formerly two separate services (AAL3 was connection-oriented, while AAL4 was connectionless), the two have since been merged into a single service. AAL3/4 provides a variable bit-rate service suitable for traffic that is not time-sensitive.

AAL5. AAL5 is the service most commonly implemented for the purpose of transferring data over ATM networks, such as standard IP traffic. It supports both connection-oriented and connectionless services, and is best suited to traffic that is not delay-sensitive.
Remembering the purpose of the ATM adaptation layers can be tough. Just remember that time-sensitive traffic will generally use a lower ATM AAL number, while traditional data traffic is typically implemented over AAL 5.
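As a study aid (not part of any ATM API), the four service classes above can be condensed into a small Python lookup table:

    # Study-aid summary of the AAL service classes described above.
    AAL_CLASSES = {
        "AAL1":   {"bit_rate": "constant", "connections": "connection-oriented",
                   "timing": "required", "typical_use": "voice/video streams"},
        "AAL2":   {"bit_rate": "variable", "connections": "connection-oriented",
                   "timing": "required", "typical_use": "intermittent voice"},
        "AAL3/4": {"bit_rate": "variable", "connections": "both",
                   "timing": "not required", "typical_use": "non-time-sensitive data"},
        "AAL5":   {"bit_rate": "variable", "connections": "both",
                   "timing": "not required", "typical_use": "IP data"},
    }
    print(AAL_CLASSES["AAL5"]["typical_use"])  # IP data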

ATM Communications

Communication between hosts on an ATM network is usually accomplished through the use of either switched virtual circuits (SVCs) or permanent virtual circuits (PVCs). Through the use of switched virtual circuits, an ATM network can behave almost like a typical circuit-switched network. When PVCs are used, a path must be defined between endpoints across an ATM internetwork, including on intermediary switches. While a PVC provides the benefit of reducing the overhead associated with call setup and teardown, it also limits data to a single path across the network, eliminating redundant paths. In contrast, SVC connections are created on demand – opened as necessary, and then terminated once data transfer is complete.

In an ATM environment, a virtual circuit consists of two different elements: virtual channels and virtual paths. A virtual channel is an individual circuit that provides a connection between ATM endpoints, such as a client computer and a server. A virtual path is somewhat like a pipe, acting as a multiplexer of individual virtual channels across an ATM network. A virtual path is identified by a number known as a Virtual Path Identifier (VPI), while a Virtual Channel Identifier (VCI) number identifies a particular virtual channel.

As an example, consider the figure below, which shows two virtual paths across an ATM network. Each of these virtual paths bundles together multiple virtual channels. The benefit of this model is that virtual paths can be created on an end-to-end basis across an ATM network of switches. Individual virtual channels are not switched one by one; instead, all virtual channels that are part of the same virtual path are switched as a single unit. Additional virtual channels can be added to an existing virtual path, eliminating the overhead associated with defining additional paths. Similarly, if an ATM switch were to fail, only the virtual path would need to be rerouted, rather than each individual virtual channel. Both VPI and VCI information is stored in the ATM cell header.

Virtual paths bundle virtual channels between ATM network endpoints.
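The "switch the path, not the channels" idea can be sketched as a simple lookup table. The port numbers and identifier values below are invented for illustration; a real ATM switch also recomputes the header checksum and handles many other details.

    # Sketch of virtual-path switching: only the VPI is consulted and
    # rewritten, so every VCI inside the path is switched as a unit.
    VP_TABLE = {
        # (input port, input VPI) -> (output port, output VPI)
        (1, 5): (3, 9),
        (2, 7): (3, 12),
    }

    def switch_cell(in_port, vpi, vci):
        out_port, out_vpi = VP_TABLE[(in_port, vpi)]
        return out_port, out_vpi, vci  # the VCI passes through untouched

    print(switch_cell(1, 5, 42))  # (3, 9, 42)
    print(switch_cell(1, 5, 99))  # (3, 9, 99) -- same path, different channel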

ATM also supports two different types of connections between systems – point-to-point and point-to-multipoint. As its name suggests, a point-to-point connection consists of communication between only two systems. These connections can transfer data in both directions simultaneously over a single virtual circuit – the process being referred to as bidirectional communication. Conversely, a point-to-multipoint connection allows data to be transferred from one host to many hosts. ATM point-to-multipoint connections support unidirectional communication only. A brief comparison of both is provided in the table below.

A comparison of point-to-point and point-to-multipoint ATM connections.

ATM Network Equipment

Two main types of equipment exist on ATM networks – ATM switches, and ATM endpoints. As its name suggests, an ATM switch handles cell-switching functions across an ATM network. This includes accepting incoming cells from other ATM switches or endpoints, modifying cell header information as necessary, and then sending cells on to the next switch or end device. An ATM endpoint is a network device equipped with an ATM network interface card, such as a router, computer, LAN switch, and so forth. Cisco router models in the 5500 series are commonly equipped with ATM expansion cards for the purpose of connecting to an ATM backbone.

Special terms are used to describe the connection points between ATM equipment: the User Network Interface (UNI) and the Network Node Interface (NNI). UNI represents a connection between an endpoint, such as an ATM-enabled PC, and an ATM switch. NNI is the term used to describe connections between ATM switches. ATM equipment and connection points are illustrated in the figure below.

ATM connection points.

Asynchronous Transfer Mode (ATM)

Asynchronous Transfer Mode (ATM) is a high-speed switching technology whose roots trace back to broadband ISDN initiatives conceived in the 1980s by organizations like the ITU-T. Originally designed with the high-speed transfer of voice, video, and data over public networks in mind, the scope of ATM has broadened to include transmission over private LANs and WANs. Typical transmission rates on ATM networks range from 155Mbps up to multi-gigabit speeds. ATM is capable of running over a variety of network media including fiber optics and UTP.

Although often referred to as a packet switching technology, ATM is best described as a hybrid that includes elements of both circuit and cell switching. While the interconnections between ATM switching equipment can be used to provide multiple redundant paths over which data can travel, ATM uses virtual circuits to establish dedicated connections between network endpoints.

The main protocol data unit of ATM is technically not a packet. ATM instead uses fixed-length "cells" to encapsulate the data transferred over ATM networks. ATM cells are relatively small, at only 53 bytes in length: a 5-byte header plus a 48-byte payload, as illustrated in the figure below. Their small, fixed size makes them well suited to time-sensitive traffic like voice and video, in contrast to traditional variable-length packets, which make network delays less predictable.

An ATM cell is always 53 bytes in length, comprised of a 5-byte header and 48-byte payload.
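As a sketch of how those 5 header bytes break down at the UNI (GFC 4 bits, VPI 8, VCI 16, payload type 3, CLP 1, HEC 8), the following Python unpacks a cell header. The sample byte values are invented.

    # Unpack the 5-byte ATM UNI cell header; sample values are hypothetical.
    def parse_uni_header(header: bytes):
        assert len(header) == 5
        bits = int.from_bytes(header, "big")      # 40 header bits
        return {
            "gfc": (bits >> 36) & 0xF,     # generic flow control
            "vpi": (bits >> 28) & 0xFF,    # virtual path identifier
            "vci": (bits >> 12) & 0xFFFF,  # virtual channel identifier
            "pt":  (bits >> 9) & 0x7,      # payload type
            "clp": (bits >> 8) & 0x1,      # cell loss priority
            "hec": bits & 0xFF,            # header error control
        }

    cell = bytes([0x00, 0x50, 0x02, 0xA0, 0x00]) + bytes(48)  # 53 bytes total
    print(len(cell), parse_uni_header(cell[:5]))  # vpi=5, vci=42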

X.25 Communications

Like Frame Relay, X.25 uses virtual circuits to define a path across a packet-switched network. In cases where connections should be available at all times, PVCs are the better choice, as they eliminate the overhead associated with call setup and teardown. For less frequent connections, SVCs can also be used.

What X.25 lacks in speed, it more than makes up for in reliability. In fact, X.25 was designed primarily with reliability in mind, since the analog circuitry over which X.25 originally ran tended to be rather error-prone, requiring a high degree of error checking. The reliability of today's digital networks is part of the reason why other packet-switching technologies like Frame Relay have grown so popular. In some parts of the world, however, availability and reliability still make X.25 a WAN solution worth considering.

Communication between DTE devices over an X.25 network is subject to delays because of its reliability features. For example, X.25 networks use a store-and-forward method, where intermediate devices buffer packets as they cross the X.25 network. Not only does this buffering ensure that the receiving device (such as the next PSE in the path) is ready and able to receive data, but it also provides an opportunity for frames to be checked for errors. Additionally, as a frame is forwarded between switches across the point-to-point links of an X.25 network, acknowledgements must be sent back to the device that forwarded the frame, in order to be sure that it arrived. In cases where an acknowledgement is not received, the frame will be retransmitted. To make the acknowledgement process more efficient, a windowing mechanism is used that allows multiple packets to be sent before an acknowledgement must be received.

Consider the figure below, which shows one DTE device sending data to another across an X.25 network. For illustration purposes, only a single frame is shown traversing the three PSEs between the routers. At each step along the way, the recipient must send an acknowledgement back to the sender. For example, when the first X.25 switch receives a frame from the sending router, it sends an acknowledgement to that router. When the frame is forwarded on to the second switch, that switch sends an acknowledgement back to the first, and so on. This illustrates the degree of error checking that makes X.25 such a reliable WAN technology.
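The windowed acknowledgement idea can be sketched in a few lines of Python. This is a toy model, not LAPB: frame formats, sequence-number arithmetic, timers, and retransmission are all omitted.

    # Toy sketch of windowed acknowledgements on one X.25 hop.
    WINDOW = 2  # frames that may be outstanding before an ack is required

    def send_over_hop(frames, link_name):
        outstanding = []
        for seq, frame in enumerate(frames):
            outstanding.append(seq)
            print(f"{link_name}: sent frame {seq} ({frame})")
            if len(outstanding) == WINDOW:
                print(f"{link_name}: ack received up to frame {seq}")
                outstanding.clear()
        if outstanding:
            print(f"{link_name}: ack received up to frame {outstanding[-1]}")

    # Each hop (router->PSE, PSE->PSE, and so on) repeats the same
    # store-and-forward and acknowledgement process.
    send_over_hop(["a", "b", "c"], "DTE->PSE1")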

X.25 Protocols and Standards

Even though it predates the OSI model, the protocols and physical standards over which X.25 works are considered to map to the model’s lowest three layers. These protocols and standards are described following this reference figure:

X.25 protocols and standards and their relationship to the OSI model. 

Network Layer. At the Network layer, X.25 implements the Packet-Layer Protocol (PLP). PLP is responsible for call setup and teardown functions, data transfer between DTE devices, and the fragmentation and reassembly of data. A PLP header is added to higher-layer data during the encapsulation process, and identifies the type of payload (control information or data), the PLP packet type, and the virtual circuit that the packet is associated with. A sketch of how these header fields can be unpacked follows this list.

Data Link Layer. At the Data Link layer, X.25 implements a protocol known as Link Access Procedure, Balanced (LAPB). LAPB is a variation of the HDLC protocol that handles framing, error control, and flow control, as well as acknowledgements for frames as they travel between nodes on an X.25 network.

Physical Layer. X.25 is capable of using a variety of different physical and electrical interfaces to connect DTE to DCE devices. X.25 has traditionally used the X.21bis standard to provide full-duplex connectivity at speeds up to 19.2 Kbps. Other physical interfaces like EIA/TIA-232 are also commonly used with X.25.
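As promised above, here is a hedged sketch of how the 3-byte PLP header (modulo-8 format) might be unpacked in Python: a 4-bit general format identifier, a 12-bit logical channel identifier split across the first two bytes, and a 1-byte packet type identifier whose low-order bit distinguishes data packets from control packets. The sample byte values are invented.

    # Unpack a 3-byte X.25 PLP header (modulo-8); sample values invented.
    def parse_plp_header(header: bytes):
        assert len(header) == 3
        gfi = header[0] >> 4      # general format identifier (Q, D, modulo)
        lcgn = header[0] & 0x0F   # logical channel group number
        lcn = header[1]           # logical channel number
        pti = header[2]           # packet type identifier
        return {
            "gfi": gfi,
            "virtual_circuit": (lcgn << 8) | lcn,  # 12-bit channel identifier
            "is_data": (pti & 0x01) == 0,          # low bit 0 -> data packet
            "packet_type": pti,
        }

    print(parse_plp_header(bytes([0x10, 0x2A, 0x00])))  # circuit 42, data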

X.25 Networks and Equipment

X.25 is a packet-switching WAN technology that exists at the Physical, Data Link, and Network layers of the OSI model. Originally conceived in the 1970s, X.25 is still surprisingly popular, largely a result of its worldwide adoption by most telecommunications carriers and its reputation for reliable data transfer. In fact, in many parts of the world, X.25 still represents the only reliable data transfer technology available. While it originally provided connectivity at rates of 56Kbps (and sometimes much less), the X.25 standard was revised in 1992 to support speeds up to 2Mbps.

X.25 Equipment

Three main types of equipment exist on X.25 networks: DTE devices, DCE devices, and X.25 packet-switching equipment (PSE). DTE equipment on an X.25 network would typically be a router, computer, or terminal of some sort, located at the customer premises. DCE devices in the X.25 world are located at the carrier's facilities, and act as an interface between DTE equipment and PSE. Any given carrier's X.25 network will consist of many PSEs, including interconnections to other service providers. Ultimately, these devices form the X.25 packet-switched "cloud", as illustrated below.

The X.25 network is comprised of DCE and PSE equipment. DTE resides at the customer premises.

Another common piece of equipment found on an X.25 network is the packet assembler/disassembler, or PAD. A PAD is a device that connects a DTE device to the X.25 network, performing three primary functions: packet buffering, assembly, and disassembly. In many cases, companies dial into a PAD service using a traditional modem. In other cases, an operating system will be capable of running X.25 protocols locally and will use what is known as a "smart card" to connect to the X.25 network. A PAD is physically located between a DTE and DCE device on an X.25 network, as illustrated in Figure 11-25. A Cisco router does not require a PAD to connect to an X.25 network, as it is capable of using X.25 encapsulation on serial interfaces.

Route Summarization and Redistribution

If you recall from our look at Classless Interdomain Routing (CIDR), it is possible to represent many individual subnets or networks in a single routing table entry by allocating a custom subnet mask. Sometimes referred to as supernetting, route summarization simply involves collapsing a number of contiguous routing table entries into a single entry. This not only saves routing table space, but also makes routing more efficient.

In the world of routing protocols, route summarization refers to the process by which a protocol (like OSPF) will summarize multiple entries into a single routing table entry at a boundary on the network. For example, consider a network where networks 10.0.8.0/24 through 10.0.15.0/24 all exist behind Router A. Instead of Router A advertising each of these eight networks to a neighboring router, it would make much more sense to summarize the routes into a single routing table entry, and forward information about one network only.

Networks 10.0.8.0/24 through 10.0.15.0/24 can be summarized into a single routing table entry or advertisement: 10.0.8.0/21. If you are having trouble remembering how I came up with the new network prefix of /21, I would strongly suggest reviewing the CIDR section of Chapter 5. For those looking for a quick refresher, eight contiguous networks need to be summarized in this example. By reclaiming 3 bits from the network portion of the existing /24 mask, leaving a /21, I can summarize eight networks, since 2^3 equals 8.
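The arithmetic is easy to verify with Python's standard ipaddress module, which collapses contiguous networks into their summary:

    # Verify the summary route using the standard library.
    import ipaddress

    networks = [ipaddress.ip_network(f"10.0.{third}.0/24") for third in range(8, 16)]
    summary = list(ipaddress.collapse_addresses(networks))
    print(summary)  # [IPv4Network('10.0.8.0/21')] -- eight /24s become one /21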