Frame Relay is a popular WAN protocol because it makes it easy to construct reliable and inexpensive networks. Its main advantage over simple point-to-point serial links is the ability to connect one site to many remote sites through a single physical circuit. Frame Relay uses virtual circuits to connect any physical circuit in a cloud to any other physical circuit. Many virtual circuits can coexist on a single physical interface.
This section offers only a quick refresher on how Frame Relay works. If you are unfamiliar with Frame Relay, we recommend reading the more detailed description of the protocol and its features found in T1: A Survival Guide (O'Reilly).
The Frame Relay standard allows for both Switched Virtual Circuits (SVCs) and Permanent Virtual Circuits (PVCs), although support for SVCs in Frame Relay switching equipment continues to be relatively rare. Most fixed Frame Relay WANs use PVCs rather than SVCs, which allows you to configure the routers as if they were connected by a set of point-to-point physical circuits. SVCs, on the other hand, provide a mechanism for the network to make connections dynamically between any two physical circuits as they are needed. In general, SVCs are more complicated to configure and manage, so most network engineers prefer PVCs unless the carrier offers significant cost benefits for using SVCs. SVCs tend to be most practical when the site-to-site traffic is relatively light and intermittent.
Each virtual circuit is identified by a Data Link Connection Identifier (DLCI), which is simply a number between 0 and 1023. In practice, Cisco routers can only use DLCI numbers in the range 16 through 1007 to carry user data; the remaining values are reserved for signaling and management functions such as LMI.
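A typical point-to-point configuration assigns one of these DLCI numbers to each subinterface, with one subinterface per remote site on the same physical interface. The following is only a minimal sketch; the interface numbers, addresses, site names, and DLCI values (102 and 103) are placeholders:

interface Serial0
 encapsulation frame-relay
 no ip address
!
! one point-to-point subinterface per remote site
interface Serial0.1 point-to-point
 description PVC to Site B
 ip address 10.1.12.1 255.255.255.252
 frame-relay interface-dlci 102
!
interface Serial0.2 point-to-point
 description PVC to Site C
 ip address 10.1.13.1 255.255.255.252
 frame-relay interface-dlci 103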
If the router at Site A wants to send a packet to Site B, it simply puts the DLCI number of the virtual circuit that connects to Site B into the Frame Relay header. Although a physical circuit can carry many virtual circuits, each connecting to a different remote circuit, there is no ambiguity about where the network should send each individual packet.
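The same DLCI selection is visible on a multipoint Frame Relay interface, where you map each remote address to a DLCI explicitly. Again, the addresses and DLCI numbers below are only placeholders, continuing the hypothetical example of Site A reaching Sites B and C:

interface Serial0
 encapsulation frame-relay
 ip address 10.1.1.1 255.255.255.0
 ! reach Site B's router through DLCI 102, Site C's through DLCI 103
 frame-relay map ip 10.1.1.2 102 broadcast
 frame-relay map ip 10.1.1.3 103 broadcast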
It's important to remember, though, that the DLCI number only has local significance. That is, the DLCI number doesn't uniquely identify the whole virtual circuit, just the connection from the local physical circuit to the Frame Relay switch at the Telco central office. The DLCI number associated with this virtual circuit can change several times before it reaches the remote physical circuit.
We like to use this fact to our advantage when constructing a Frame Relay network. Instead of thinking of the DLCI number as a virtual circuit identifier, we use it to uniquely label each physical circuit. Suppose, for example, that Site A has virtual circuits to both Sites B and C. Then we would use the same DLCI number at both Sites B and C to label the virtual circuits that terminate at Site A. This is just one of many possible DLCI numbering schemes, but we prefer it because it makes troubleshooting easier. Unfortunately, while this scheme works well in hub-and-spoke network topologies, it tends to become unworkable in meshed or partially meshed networks.
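Continuing the hypothetical numbering from the earlier sketches (101 labeling Site A's circuit, 102 Site B's, and 103 Site C's), and assuming the carrier agreed to assign DLCIs according to this scheme, the spoke routers would look something like this:

! Site B router: the PVC back to the hub at Site A
interface Serial0.1 point-to-point
 frame-relay interface-dlci 101
!
! Site C router: the PVC back to the hub at Site A uses the same number
interface Serial0.1 point-to-point
 frame-relay interface-dlci 101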
Frame Relay has several built-in Quality of Service (QoS) features. Each virtual circuit has two important service-level parameters: the Committed Information Rate (CIR) and the Excess Information Rate (EIR). The CIR is the contracted minimum throughput of a virtual circuit; as long as you send data at a rate below the CIR, it should all arrive. The EIR is the additional capacity above the CIR that the network will attempt, but not guarantee, to deliver. The worst case occurs when the router sends data through a single virtual circuit at the full line speed of the physical circuit. The network will generally just drop any packets that exceed the EIR, so it is customary to make the sum of CIR and EIR on each virtual circuit equal to the line speed of the physical circuit. This makes it physically impossible to exceed the EIR on any PVC.
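The carrier enforces these parameters, but you can also configure the router to shape its output to the CIR so that it never relies on the excess region. The following is only a sketch of Frame Relay traffic shaping, assuming a 128Kbps CIR on the PVC to Site B; the class name, burst values, interface, and DLCI are placeholders:

map-class frame-relay SHAPE-128K
 ! CIR in bits per second; committed burst (Bc) in bits
 frame-relay cir 128000
 frame-relay bc 16000
 frame-relay be 0
!
interface Serial0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
interface Serial0.1 point-to-point
 frame-relay interface-dlci 102
  class SHAPE-128K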
When the router sends packets faster than the CIR you have contracted with your network provider, the carrier network may drop some or all of the excess packets if there is congestion in the cloud. To indicate which packets are in the excess region, the first switch to receive them will often set the Discard Eligible (DE) bit in their Frame Relay headers. If there is no congestion, these packets are delivered normally; but if they pass through a congested part of the carrier's network, the switches know they can drop them without violating the CIR commitments. Simply counting the packets that arrive at the receiving router with the DE bit set therefore gives a useful measure of how often your network exceeds the CIR on each PVC. Because traffic patterns are not symmetrical in most networks, you should monitor the number of DE packets received separately at both ends of every PVC.
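On a Cisco router, these counters appear in the output of show frame-relay pvc. The fragment below is abbreviated and purely illustrative (the counter values are placeholders, and the exact fields and layout vary with the IOS version); the counters of interest here are "in DE pkts" and "out DE pkts":

Router#show frame-relay pvc 102

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0.1
  input pkts 0             output pkts 0            in bytes 0
  out bytes 0              dropped pkts 0           in FECN pkts 0
  in BECN pkts 0           out FECN pkts 0          out BECN pkts 0
  in DE pkts 0             out DE pkts 0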
By default, the router sends frames into the cloud without the DE bit set. What happens next is up to the carrier, but it is common for the first switch to monitor the incoming traffic rate using some variation of the following algorithm. During each sample period (typically a short interval such as one second), the switch counts the incoming bytes on each PVC. If the count exceeds what the CIR allows for that PVC, the switch sets the DE bit in all of the excess frames. For example, on a PVC with a CIR of 128Kbps and a one-second sample period, roughly everything after the first 16,000 bytes in any given second would be marked DE.
However, it is also possible to configure the router to set the DE bit on low-priority traffic, in the hope that the network will drop these low-priority packets in preference to the high-priority ones. This is something of a gamble, of course, and its success depends critically on the precise algorithm that your WAN vendor uses for handling congestion. You should consult with your vendor to understand their traffic shaping and policing mechanisms before attempting this type of configuration.
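On a Cisco router, you do this with a DE list. The following sketch marks the DE bit on traffic matched by an access list; the access list number, match criteria, DE list number, and DLCI are all placeholders:

! treat bulk FTP data as low priority
access-list 150 permit tcp any any eq ftp-data
frame-relay de-list 1 protocol ip list 150
!
interface Serial0.1 point-to-point
 frame-relay de-group 1 102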
There are two other extremely useful flags in the Frame Relay header. These are the Forward Explicit Congestion Notification (FECN) and the Backward Explicit Congestion Notification (BECN) bits. These simply indicate that the packet encountered congestion somewhere in the carrier's network. Congestion is most serious when you are sending at a rate higher than the CIR value; if your carrier marks the DE bit of these excess packets, then congestion in one of their switches could mean dropped packets.
If a packet encounters congestion in a carrier switch, that switch will often set the FECN flag in the packet's header. Then, when the remote router finally receives this packet, it knows that the packet was delayed. By itself this is not all that useful, because the receiving router cannot directly affect the rate at which the sending router forwards packets along this virtual circuit. So the Frame Relay standard also includes the BECN flag.
When a switch encounters congestion and needs to set the FECN flag on a packet, it looks for another packet traversing the same PVC in the opposite direction, and marks it with a BECN flag. This way, the sending router immediately knows that the packets it is sending are encountering congestion.
Note, however, that not all Frame Relay switches implement these features in the same way. You cannot safely assume that there is no congestion just because you don't see any FECN or BECN frames. Similarly, not seeing DE frames doesn't necessarily mean that you aren't exceeding the CIR for a PVC. In Recipe 10.6, for example, we show how to configure a router to act as a Frame Relay switch, but the router does not implement these congestion notification features at all. The DE counter is also not a meaningful indicator of how often you exceed the CIR if your devices are configured to send low-priority frames with the DE bit set.
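Recipe 10.6 covers that configuration in detail, but for context, a router acting as a Frame Relay switch essentially just forwards frames between DLCIs with statements like the following sketch (the interface names and DLCI numbers are placeholders, not the recipe's exact values):

frame-relay switching
!
interface Serial0
 encapsulation frame-relay
 frame-relay intf-type dce
 frame-relay route 102 interface Serial1 201
!
interface Serial1
 encapsulation frame-relay
 frame-relay intf-type dce
 frame-relay route 201 interface Serial0 102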
By default, the router does not react to FECN and BECN markings. Recipe 10.9 shows how you can look at statistics on FECN and BECN frames to get an idea of how the network is performing. In the next chapter, Recipe 11.11 shows how to configure a router to adapt automatically to congestion in the carrier network by reading and responding to the BECN flags, reducing its sending rate until the congestion disappears.
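The BECN-based adaptation in Recipe 11.11 is built on Frame Relay traffic shaping. As a rough sketch of the kind of configuration involved (the rates and class name here are placeholders, not the recipe's exact values):

map-class frame-relay ADAPT-BECN
 frame-relay cir 256000
 frame-relay mincir 128000
 frame-relay adaptive-shaping becn
!
interface Serial0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay class ADAPT-BECN

When BECN-marked frames arrive, the router throttles the PVC back toward the mincir value, and ramps back up toward the CIR when the BECNs stop.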