A Synthesis on the Paper Entitled “Congestion Avoidance and Control” by V. Jacobson and M. Karels

As internetworks evolved from the idea of linking heterogeneous individual networks differing in nature and architecture, congestion along the communication channels between end points became a growing concern. Jacobson and Karels, of LBL and UC Berkeley respectively, looked into this problem in their paper entitled “Congestion Avoidance and Control” by characterizing what network congestion is and suggesting several approaches for detecting congestion within a network and bringing it back from a congested state into a stable configuration, or equilibrium.

As noted by the authors, the first in a series of congestion collapses happened on the network connecting LBL to UC Berkeley, when throughput dropped roughly a thousandfold. Nowadays, we as Internet users are not that aware of the details of such issues in packet transmission across the Internet, and it is a good thing for ordinary users that such events are hidden from them. In response to the event, LBL and UCB put new algorithms into the then-current 4BSD TCP connecting both campuses: (1) round-trip-time variance estimation, (2) exponential retransmit timer backoff, (3) slow start, (4) more aggressive receiver ack policy, (5) dynamic window sizing on congestion, (6) Karn’s clamped retransmit backoff, and (7) fast retransmit. Algorithms (1)–(5) were conceived from the principle of “conservation of packets” in a TCP connection. The authors describe a connection as being in equilibrium when a new packet is not put into the network until an old one has left. The sending end points are conservative in sending out packets, while the receivers are limited only by the capacity of their buffers. When every end point in the network subscribes to this principle, the network is much less likely to become clogged. The principle implies a well-defined pacing, or synchronizing, mechanism for the transmission of packets across the network.
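
To make the idea concrete, here is a minimal sketch (the class and names are mine, not the paper’s) of a sender obeying conservation of packets: it keeps at most a window’s worth of packets in flight and injects a new packet only when an acknowledgment tells it an old one has left the network.

```python
# A minimal sketch (not from the paper) of the "conservation of packets" idea:
# the sender keeps at most `window` packets outstanding and injects a new
# packet only when an acknowledgment signals that an old one has left.

class ConservativeSender:
    def __init__(self, window):
        self.window = window          # receiver-advertised window (packets)
        self.outstanding = 0          # packets in flight, not yet acked

    def can_send(self):
        # Conservation: never exceed the window of packets in flight.
        return self.outstanding < self.window

    def on_send(self):
        assert self.can_send()
        self.outstanding += 1

    def on_ack(self):
        # An ack means an old packet left the network; a new one may enter.
        self.outstanding -= 1


sender = ConservativeSender(window=4)
while sender.can_send():
    sender.on_send()                  # fill the window
sender.on_ack()                       # one packet leaves ...
print(sender.can_send())              # ... so one new packet may be clocked out
```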

The authors present three ways in which conservation of packets can fail: (1) the connection never actually reaches equilibrium, i.e., a stable configuration of packet transmission; (2) a sender does not follow the “conservation of packets” principle and injects a new packet before an old one has left; and (3) resource limits along the path prevent the connection from reaching equilibrium. Before a connection can be in equilibrium, it first has to be started: initial packets must be sent toward the destination gradually, increasing slowly over time so as not to overwhelm the network. Since an internetwork is dynamic in nature, reaching equilibrium is a slow and non-uniform process, and the rate depends on the network’s current configuration (size, topology, and type of network are some contributing factors). The principle of conservation of packets implies a self-clocking mechanism: acknowledgments returning from the receiver clock new packets out of the sender, and this self-clocking behavior drives the connection toward equilibrium, provided the hosts follow the principle. In line with this, the authors developed the slow-start algorithm, which gradually increases the rate at which packets are put into the network. The solution is subtle and simple to implement.
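
A rough sketch of the slow-start idea follows; the function below is illustrative only and assumes, as is usual, that the window starts at one segment and grows by one segment per acknowledgment, which roughly doubles it every round trip until the receiver’s window is reached.

```python
# A rough sketch of slow start: start the congestion window at one segment
# and grow it by one segment per acknowledgment, which roughly doubles the
# window each round-trip time until the receiver's window is reached.

def slow_start_rounds(receiver_window, initial_cwnd=1):
    """Yield the congestion window at the start of each round trip."""
    cwnd = initial_cwnd
    while cwnd < receiver_window:
        yield cwnd
        # Each of the cwnd segments sent this round is acked, and each ack
        # opens the window by one more segment.
        cwnd = min(cwnd * 2, receiver_window)
    yield cwnd

print(list(slow_start_rounds(receiver_window=16)))   # [1, 2, 4, 8, 16]
```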

Assuming that packet transmission is stable, the next thing to look into is how retransmission is handled when a packet is delayed or lost, and therefore how the protocol estimates the round-trip time. The authors note that one mistake in computing the retransmit timer is not accounting for the variance of the round-trip time. They developed a cheap way of estimating this variation, which yields a retransmit timer that avoids unnecessary retransmissions. They also note that another problem with round-trip timing is backoff estimation: when a host must retransmit, it needs to decide how long to wait before resending. The approach taken in the paper, argued for but not proven, is exponential backoff.
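
The sketch below shows a mean-plus-deviation retransmit timer in the spirit of the paper’s estimator, together with exponential backoff on timeout. The gains of 1/8 and 1/4 and the deviation multiplier of 4 are the commonly used values, not necessarily the exact constants of the 4BSD implementation.

```python
# A hedged sketch of a mean-plus-deviation retransmit timer: keep
# exponentially weighted averages of the round-trip time and of its
# deviation, and set the timeout from both.  Constants are illustrative.

class RetransmitTimer:
    def __init__(self, first_sample):
        self.srtt = first_sample              # smoothed round-trip time
        self.rttvar = first_sample / 2        # smoothed mean deviation
        self.backoff = 1                      # exponential backoff factor

    def on_rtt_sample(self, measured):
        err = measured - self.srtt
        self.srtt += err / 8                  # gain 1/8 on the mean
        self.rttvar += (abs(err) - self.rttvar) / 4   # gain 1/4 on the deviation
        self.backoff = 1                      # a fresh sample resets the backoff

    def on_timeout(self):
        self.backoff *= 2                     # exponential retransmit backoff

    def rto(self):
        return self.backoff * (self.srtt + 4 * self.rttvar)


timer = RetransmitTimer(first_sample=0.10)    # seconds
for sample in (0.12, 0.11, 0.30):             # a delayed packet inflates the rto
    timer.on_rtt_sample(sample)
print(round(timer.rto(), 3))
timer.on_timeout()                            # a loss doubles the timeout
print(round(timer.rto(), 3))
```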

Now assuming that the flow of data across the network is stable and the timers are working, failures most likely come from lost packets. The authors point out that packets are lost either because they are damaged in transit or because there is congestion somewhere along the path to the destination and buffer capacity is insufficient. In the paper, congestion avoidance has two necessary components. The first is a notification mechanism by which the network can tell the end points that congestion is occurring or about to occur. The second is that the end points must respond properly to that signal by adjusting the volume of packets they send into the network, so that the load eventually dissipates. To decrease the volume of packets it sends, a host can shrink its window; conversely, if resources free up, hosts should make use of them by increasing the window. A host has no way of knowing that resources have been freed, though, so all it can do is keep increasing its window size until it hits a limit. The question, then, is what policy to use for adjusting the window in response to a congestion signal (or the lack of one). The authors state that the best increase policy is to make small, constant additions to the window size, cutting it multiplicatively on congestion; this is the additive increase/multiplicative decrease policy. They also point out that slow start and congestion avoidance are different algorithms: when a restart is needed after packet loss, slow start is needed in addition to congestion avoidance for the connection to cope with the situation.
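
A simplified sketch of how slow start and additive increase/multiplicative decrease might be combined is given below; the threshold variable and the choice to halve the window on congestion follow the commonly described scheme, and details such as segment sizes and fractional increases are glossed over.

```python
# A simplified sketch combining slow start with additive-increase /
# multiplicative-decrease congestion avoidance.

class CongestionWindow:
    def __init__(self, receiver_window):
        self.cwnd = 1.0                   # congestion window, in segments
        self.ssthresh = receiver_window   # slow-start threshold
        self.rwnd = receiver_window       # receiver-advertised limit

    def on_round_trip(self):
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2                # slow start: exponential opening
        else:
            self.cwnd += 1                # congestion avoidance: additive increase
        self.cwnd = min(self.cwnd, self.rwnd)

    def on_congestion(self):
        # Multiplicative decrease: remember half the current window as the
        # threshold, then restart from a one-segment window with slow start.
        self.ssthresh = max(self.cwnd / 2, 2)
        self.cwnd = 1.0


w = CongestionWindow(receiver_window=32)
for _ in range(6):
    w.on_round_trip()                     # 2, 4, 8, 16, 32, 32
w.on_congestion()                         # ssthresh = 16, cwnd = 1
for _ in range(6):
    w.on_round_trip()                     # 2, 4, 8, 16, 17, 18
print(w.cwnd, w.ssthresh)
```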

A future work mentioned by the authors is a gateway-side “congestion detection” algorithm. Unfair sharing of resources cannot be detected at the end points, but the gateway has enough information to detect it and to balance usage among hosts. A host, upon being notified of congestion in the network, may implement congestion avoidance or it may choose to hog resources instead. A host that hogs resources will simply have its packets dropped, which is the gateway’s way of telling it that it is using more than its fair share of bandwidth.
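
Since the paper leaves gateway-side detection as future work, the following is only a hypothetical illustration of the idea: a gateway, which sees every flow passing through it, holds each flow to a fair share of its queue and drops the excess, the drops themselves being the signal to the misbehaving host.

```python
# Hypothetical illustration only: a gateway holds each flow to a fair share
# of its queue capacity and drops the excess packets of over-using flows.

from collections import Counter

def enforce_fair_share(queued_packets, capacity):
    """Return (kept, dropped) given a list of (flow_id, packet) pairs."""
    flows = Counter(flow for flow, _ in queued_packets)
    fair_share = max(capacity // max(len(flows), 1), 1)
    kept, dropped, used = [], [], Counter()
    for flow, packet in queued_packets:
        if used[flow] < fair_share and len(kept) < capacity:
            used[flow] += 1
            kept.append((flow, packet))
        else:
            dropped.append((flow, packet))   # the drop itself signals the flow
    return kept, dropped

queue = [("A", i) for i in range(6)] + [("B", i) for i in range(2)]
kept, dropped = enforce_fair_share(queue, capacity=8)
print(len(kept), len(dropped))   # flow A is held to its share; its excess is dropped
```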
