Bandwidth is one of the key factors that affect QoS in a network; the more bandwidth there is, the better the QoS will be. However, simply increasing bandwidth will not necessarily solve all congestion and flow problems.
Intuitively, the easiest way to increase bandwidth would seem to be to increase the link capacity of the network to accommodate all applications and users, allowing extra, spare bandwidth. Although this solution sounds simple, increasing bandwidth is expensive and takes time to implement, and there are often technological limitations to upgrading to a higher bandwidth. In any event, ignoring QoS in favor of increasing bandwidth is at best a temporary fix: the faster the network becomes, the faster traffic grows, and the problems return.

The figure illustrates a more rational approach: using advanced queuing and compression techniques. Queuing classifies traffic into QoS classes and then prioritizes each class according to its relative importance. The basic queuing mechanism is first-in, first-out (FIFO). Other queuing mechanisms provide additional granularity to serve voice and business-critical traffic. Such traffic types should receive sufficient bandwidth to support their application requirements. Voice traffic should receive prioritized forwarding, and the least important traffic should receive the unallocated bandwidth that remains after prioritized traffic is accommodated. Cisco IOS QoS software provides a variety of mechanisms that can be used to assign bandwidth priority to specific classes of traffic:
* Priority queuing (PQ) or custom queuing (CQ)
* Modified deficit round robin (MDRR)
* Distributed type of service (ToS)-based and QoS group weighted fair queuing (WFQ)
* Class-based weighted fair queuing (CBWFQ)
* Low-latency queuing (LLQ)
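As a sketch of how the last two mechanisms are typically combined, a CBWFQ policy with an LLQ priority class might look like the following. The class names, match criteria, bandwidth values, and interface are illustrative assumptions, not part of the original text:

```
! Classify voice and business-critical traffic (names and DSCP values are examples)
class-map match-any VOICE
 match ip dscp ef
class-map match-any CRITICAL
 match ip dscp af31

! LLQ: the "priority" class gets strict-priority forwarding up to 128 kbps;
! CBWFQ: the CRITICAL class gets a 256-kbps bandwidth guarantee;
! everything else shares the remaining bandwidth fairly.
policy-map WAN-EDGE
 class VOICE
  priority 128
 class CRITICAL
  bandwidth 256
 class class-default
  fair-queue

! Apply the policy outbound on the WAN link
interface Serial0/0
 service-policy output WAN-EDGE
```

The key design point is that `priority` bounds voice delay by servicing that queue first (and polices it to its allocation), while `bandwidth` only guarantees a minimum share without strict priority.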
Another way to increase the available link bandwidth is to optimize link usage by compressing the payload of frames, which virtually increases the link capacity. Compression, however, also adds delay because of the complexity of compression algorithms. Using hardware compression can accelerate packet payload compression. Stacker and Predictor are two compression algorithms that are available in Cisco IOS software.
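A minimal sketch of enabling payload (link) compression on a low-speed serial link follows; the interface name is an assumption, and Predictor could be substituted for Stacker depending on whether CPU or memory is the scarcer resource:

```
! Stacker payload compression on a PPP serial link (both ends must match)
interface Serial0/0
 encapsulation ppp
 compress stac
```

Stacker generally uses more CPU but less memory than Predictor, so the choice depends on the platform's available resources.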
Another mechanism that is used for link bandwidth efficiency is header compression. Header compression is especially effective in networks where most packets carry small amounts of data (that is, where the payload-to-header ratio is small). Typical examples of header compression are TCP header compression and Real-Time Transport Protocol (RTP) header compression.
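Both forms of header compression can be sketched as interface-level commands; the interface name is illustrative, and RTP header compression is what benefits small voice packets the most, since it shrinks the roughly 40-byte IP/UDP/RTP header to a few bytes:

```
! Enable TCP and RTP header compression on a low-speed WAN link
! (must be configured on both ends of the link)
interface Serial0/0
 encapsulation ppp
 ip tcp header-compression
 ip rtp header-compression
```

Because header compression runs hop by hop, each router on the path decompresses and recompresses headers, which costs CPU; it is therefore usually reserved for slow links where the bandwidth savings outweigh that cost.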
Payload compression is always end-to-end compression, and header compression is hop-by-hop compression.
Example: Using Available Bandwidth More Efficiently
In a network with remote sites that use interactive traffic and voice for daily business, bandwidth availability is an issue. In some regions, broadband bandwidth services are difficult to obtain or, in the worst case, are not available. This situation means that available bandwidth resources must be used efficiently. Advanced queuing techniques, such as CBWFQ or LLQ, and header compression mechanisms, such as TCP and RTP header compression, are needed to use the bandwidth more efficiently.
The figure shows an example of how to use bandwidth efficiently using advanced queuing and header compression mechanisms. In this scenario, a low-speed WAN link connects two office sites. Both sites are equipped with IP phones, PCs, and servers that run interactive applications, such as terminal services. Because the available bandwidth is limited, an appropriate strategy for efficient bandwidth use must be determined and implemented.
Administrators must choose suitable queuing and compression mechanisms for the network based on the kind of traffic that is traversing the network. The example in the figure uses LLQ and RTP header compression to provide the optimal quality for voice traffic. CBWFQ and TCP header compression...