A Synthesis on the Paper Entitled “Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks” by D. Chiu and R. Jain

D. Chiu and R. Jain, in their paper entitled “Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks,” analyzed several increase/decrease algorithms that adjust the load a user contributes to a network. The work responds to the fact that network congestion persists as an issue across heterogeneous networks built on old and new technologies. The algorithms were analyzed using efficiency, fairness, convergence time, and size of oscillations as metrics. The authors distinguished congestion control from congestion avoidance, the former being a reactive mechanism and the latter a proactive one. Both mechanisms are important for an internetwork and are worth more in-depth study, but this paper focuses on congestion avoidance. The analysis came about from the authors’ need to select algorithms for the binary feedback scheme proposed in another paper. It is also scoped to a class of distributed algorithms for decentralized decision making, in which the decision of how to respond to the current status of the network, as indicated by the resource managers, is delegated to the hosts.

In the paper the authors define the state of congestion of a network in terms of the number of packets in it. Each host’s load contributes to the overall load of the network. Each host starts with an initial value for its resource demand, and these demands are served by the resource managers as best as the situation allows. While serving the hosts, the resource managers also describe the status of the network. Upon receipt of this status indicator, a cooperative response is expected from the hosts. Under heavy load, the hosts are expected to lighten their demands so the network can gradually recover; under light load, they are encouraged to use as much of the resources as the system can support. As noted in the paper, a host’s next demand is a function of the network status reported by the resource managers and of the host’s previous demand. The status indicator is what the authors call binary feedback, since the values signifying the status are defined as 0 for underload and 1 for overload. This design choice keeps the resource manager’s side of the algorithm simple to implement.
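To make the update rule concrete, here is a minimal sketch in Python of the binary-feedback loop; the function and constant names are my own illustrative choices, not notation from the paper.

```python
# Minimal sketch of the binary-feedback update: the next demand depends
# only on the feedback bit and the previous demand. Names are illustrative.

UNDERLOAD, OVERLOAD = 0, 1

def next_demand(feedback, previous_demand, increase, decrease):
    """Return the host's next resource demand.

    `increase` and `decrease` are callables implementing whichever
    increase/decrease policy is chosen (additive, multiplicative, ...).
    """
    if feedback == OVERLOAD:
        return decrease(previous_demand)   # lighten the load
    return increase(previous_demand)       # the network can absorb more
```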

The change (increase or decrease) in a host’s resource demand is what the authors define as the host’s control. Along with this, several linear controls were presented, namely, (1) multiplicative increase/multiplicative decrease, (2) additive increase/additive decrease, (3) additive increase/multiplicative decrease, and (4) multiplicative increase/additive decrease. These controls were evaluated by the authors based on the previously mentioned criteria, namely, (1) efficiency, (2) fairness, (3) distributedness, and (4) convergence. The efficiency metric describes how close the overall utilization of resources (load) in the system is to the maximum load the network can allow; it does not take into consideration how the resources are allocated among the hosts. The fairness metric, on the other hand, concerns the allocation of resources to each host in the network. The hosts are divided into classes defined by the resource at which they are bottlenecked, and each host in a class should receive the same share of that bottleneck resource as the other hosts in the class. This scheme of allocation is referred to in other papers as the max-min fairness criterion. The total load capacity of the network and the total number of hosts sharing the resources are known to the resource managers, but the distributedness criterion requires that control schemes demand minimal, ideally no, dissemination of this information to the hosts. Control schemes with shorter convergence time (the time the system takes to reach equilibrium) and smaller oscillation amplitude (the variation around the equilibrium state) are also preferred.
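Here is a quick sketch in Python of the four linear control families listed above; the step sizes and factors are illustrative placeholders, not values recommended by the paper.

```python
# The four linear controls written as update rules x -> x', where x is a
# host's demand and `overload` is the binary feedback bit.
A_INC, A_DEC = 1.0, 1.0    # additive step sizes (illustrative)
M_INC, M_DEC = 1.1, 0.5    # multiplicative factors (illustrative)

def mimd(x, overload):     # multiplicative increase / multiplicative decrease
    return x * M_DEC if overload else x * M_INC

def aiad(x, overload):     # additive increase / additive decrease
    return max(x - A_DEC, 0.0) if overload else x + A_INC

def aimd(x, overload):     # additive increase / multiplicative decrease
    return x * M_DEC if overload else x + A_INC

def miad(x, overload):     # multiplicative increase / additive decrease
    return max(x - A_DEC, 0.0) if overload else x * M_INC
```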

In the paper, the control schemes are first evaluated to determine which among them converge. From the resulting set of feasible schemes, a subset satisfying the distributedness criterion is identified, and from that subset the schemes that optimize the trade-off between fairness and efficiency are identified further. To visualize the fairness and efficiency of a specific control scheme, a two-dimensional space is used in which the horizontal axis corresponds to the resource allocation of one user and the vertical axis to that of the other. A point (a, b) in this space represents the allocations a and b of the two hosts. The target level of efficiency is the line defined by the set of all points such that a + b equals the total load capacity, while the target level of fairness is the line defined by the set of all points such that a = b. The goal of a control scheme is to bring the allocation of resources for the two users to the intersection of these two lines, as it is at that point that the trade-off between fairness and efficiency is optimized. Convergence to fairness and convergence to efficiency were discussed separately by the authors. After considering the convergence of the control schemes, the authors looked into the distributedness criterion, which further limits the set of feasible controls to those that do not require knowledge of the state of the system beyond the binary feedback from the resource managers to the hosts. Taken together, the restrictions limit the set of feasible controls to those that can converge to fairness and efficiency in a distributed manner.
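The vector-space argument can be illustrated with a toy two-user simulation: under additive increase/multiplicative decrease driven by a binary overload signal, the allocation point drifts toward the intersection of the efficiency and fairness lines. The capacity and constants below are arbitrary choices of mine, not figures from the paper.

```python
# Toy two-user AIMD simulation of the convergence argument above.
CAPACITY = 100.0          # total load the network can carry (illustrative)
A_INC, M_DEC = 1.0, 0.5   # additive step and multiplicative factor

def step(x1, x2):
    overload = (x1 + x2) > CAPACITY       # binary feedback
    if overload:
        return x1 * M_DEC, x2 * M_DEC     # multiplicative decrease
    return x1 + A_INC, x2 + A_INC         # additive increase

x1, x2 = 85.0, 10.0                       # start from an unfair allocation
for _ in range(200):
    x1, x2 = step(x1, x2)

# x1 and x2 end up nearly equal (fairness), while their sum keeps
# oscillating around the efficiency line x1 + x2 = CAPACITY.
print(round(x1, 1), round(x2, 1))
```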

In conclusion, the authors note that the combined restrictions on the set of feasible controls are represented in the two-dimensional plane as the intersection of the regions representing the individual restrictions on convergence to fairness and to efficiency. In the plane, this is represented as a line connecting the origin and the point of intersection of the efficiency and fairness lines.

Beyond the main analysis, the authors also briefly looked into nonlinear controls and presented some practical considerations for implementing the control algorithms.

It is important to note that the success of congestion avoidance based on the control schemes presented in the paper depends greatly on the cooperative response of the hosts. Uncooperative behavior can be handled by the network through enforcing controls at the gateways acting as resource managers. Still, it is generally advantageous for a host to cooperate with the rest of the network: the collective, sacrificial response of the hosts assures the whole network of better performance, which in turn reflects back to each host as a better network usage experience.


A Synthesis on the Paper Entitled “Congestion Avoidance and Control” by V. Jacobson and M. Karels

As internetworks evolved since the conception of the idea of linking heterogeneous individual networks differing in nature and architecture, the issue of congestion across communication channels from end point to end point became a concern. Jacobson and Karels, of LBL and UC Berkeley respectively, in their paper entitled “Congestion Avoidance and Control,” looked into this problem by characterizing what network congestion is and suggesting several approaches for detecting congestion within a network and bringing it back into a stable configuration, or equilibrium, from a state of congestion.

As noted by the authors, the first in a series of congestion collapses happened on the network connecting LBL to UC Berkeley, when a thousand-fold drop in throughput occurred. Nowadays we as Internet users are not much aware of the details of such issues in packet transmission across the Internet, and it is a good thing for common users that such events are hidden from them. In response to the unforeseen event, LBL and UC Berkeley added new algorithms to the 4BSD TCP implementation then connecting both campuses: (1) round-trip time variance estimation, (2) exponential retransmit timer backoff, (3) slow start, (4) a more aggressive receiver ACK policy, (5) dynamic window sizing on congestion, (6) Karn’s clamped retransmit backoff, and (7) fast retransmit. Algorithms (1)–(5) were conceived from the principle of “conservation of packets” in a TCP connection: the authors call a connection in equilibrium if a new packet is not introduced into the network until an old one leaves. The sending end points are conservative in sending out packets, while the receivers are limited only by the capacity of their buffers. When every end point subscribes to this principle, the network is much less likely to get clogged. In effect, the principle implies a synchronizing mechanism for the transmission of packets across the network.
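As a toy illustration of the principle (and certainly not the 4BSD implementation), a sender obeying packet conservation injects a new packet only when an acknowledgement signals that an old one has left the network:

```python
# Sketch of a sender that respects "conservation of packets": at most
# `window` packets are in flight, and a new one is sent only after an
# acknowledgement frees a slot. Names are illustrative.

class ConservativeSender:
    def __init__(self, window):
        self.window = window
        self.in_flight = 0

    def can_send(self):
        return self.in_flight < self.window

    def on_send(self):
        self.in_flight += 1

    def on_ack(self):
        # an old packet has left the network; there is room for a new one
        self.in_flight = max(self.in_flight - 1, 0)
```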

Three scenarios were presented by the authors in which conservation of packets may fail: (1) the connection never actually gets into equilibrium, a stable configuration of packet transmission; (2) a sender does not subscribe to the “conservation of packets” principle; and (3) resource limits along the path prevent the connection from reaching equilibrium. Before a connection ever gets into equilibrium, it first needs to be started: initial packets must be sent out gradually, with the sending rate increasing slowly over time, to avoid overwhelming the network. Since an internetwork is dynamic in nature, the process of reaching equilibrium is slow and non-uniform, and the rate varies with the configuration of the network (factors such as size, topology, and type of network). The “conservation of packets” principle implies a self-clocking mechanism, and this self-clocking eventually drives the connection into equilibrium, provided the hosts follow the principle. In line with this, the authors developed the slow-start algorithm, which gradually increases the rate at which packets are sent into the network. The solution has the virtue of being both subtle and simple to implement.
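A minimal sketch of the slow-start idea follows, assuming the window is counted in whole segments and capped by a hypothetical threshold; it only shows the exponential growth pattern, not the actual 4BSD code.

```python
# Slow start: begin with a one-segment congestion window and add one
# segment per acknowledgement, which roughly doubles the window each
# round-trip time. `ssthresh` is an assumed cap on the growth.

def slow_start_windows(rtts, ssthresh=64):
    """Yield the congestion window (in segments) at the start of each RTT."""
    cwnd = 1
    for _ in range(rtts):
        yield cwnd
        cwnd = min(cwnd * 2, ssthresh)    # each ACK in the round adds a segment

print(list(slow_start_windows(8)))        # [1, 2, 4, 8, 16, 32, 64, 64]
```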

Assuming the transmission of packets is stable, it is then necessary to look into the handling of retransmissions in case of failure (delay or loss), and therefore into the protocol’s round-trip time estimator. The authors noted that a common mistake in choosing the retransmit timeout is failing to account for the variance of the round-trip time. They developed a cheap way of estimating this variation, which yields a retransmit timer that avoids unnecessary retransmissions. They also noted that another problem with round-trip timing is backoff estimation: when a host needs to retransmit, it must decide how long to wait before resending. The approach mentioned in the paper, stated without proof, is exponential backoff.
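In the same spirit as the cheap variance estimate discussed above, a smoothed round-trip-time estimator with a mean-deviation term can be sketched as follows; the gains (1/8, 1/4), the deviation multiplier, and the backoff cap are common choices I am assuming here, not necessarily the paper’s exact constants.

```python
# Sketch of an RTT estimator that tracks both the mean and the mean
# deviation of round-trip samples, plus an exponential retransmit backoff.

class RttEstimator:
    def __init__(self, first_sample):
        self.srtt = first_sample           # smoothed round-trip time
        self.rttvar = first_sample / 2     # smoothed mean deviation

    def update(self, sample):
        err = sample - self.srtt
        self.srtt += err / 8               # low-gain average of the mean
        self.rttvar += (abs(err) - self.rttvar) / 4
        return self.rto()

    def rto(self):
        # retransmit timeout: the mean plus a few deviations
        return self.srtt + 4 * self.rttvar

def backoff(rto, retries, cap=64.0):
    """Exponential retransmit backoff: double the timeout per retry, up to a cap."""
    return min(rto * (2 ** retries), cap)
```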

Now assuming that the flow of data across the network is stable and the timers are working, failures are most likely due to lost packets. The authors point out that packets get lost either because they are damaged in transit or because there is congestion somewhere along the path to the destination and buffer capacity is insufficient. In the paper, congestion avoidance is defined to have two necessary components. First is a notification mechanism by which the network can inform the end points of current or impending congestion. Second, the end points must respond properly to the congestion signal by adjusting the volume of packets they send into the network, which eventually allows the load to dissipate. To decrease the volume of packets sent, a host may decrease its window size; conversely, if resources are freed up, hosts should make use of them by increasing their window size. A host has no way of knowing that resources have been freed, though; all it can do is keep increasing its window size until a limit is hit. The question, then, is what policy to use for adjusting the window size in response to a congestion signal (or the lack of one). The authors state that the best increase policy is to make small constant changes to the window size; this is what they call the additive increase/multiplicative decrease policy. They also point out that the slow-start algorithm and the congestion avoidance algorithm are different mechanisms: when a restart is needed after a packet loss, slow start is required in addition to congestion avoidance for the connection to recover.
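Putting the two mechanisms together, a congestion window might be managed roughly as follows; the variable names and constants are illustrative, and this is only a sketch of the policy, not the protocol code.

```python
# Sketch of window management: slow start below the threshold, additive
# increase (congestion avoidance) above it, and a multiplicative cut of
# the threshold plus a slow-start restart when a loss signals congestion.

class CongestionWindow:
    def __init__(self):
        self.cwnd = 1.0         # congestion window, in segments
        self.ssthresh = 64.0    # slow-start threshold (assumed initial value)

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                # slow start: doubles per RTT
        else:
            self.cwnd += 1.0 / self.cwnd    # congestion avoidance: ~+1 per RTT

    def on_loss(self):
        self.ssthresh = max(self.cwnd / 2, 2.0)   # multiplicative decrease
        self.cwnd = 1.0                           # restart with slow start
```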

A future work mentioned by the authors is a gateway-side “congestion detection” algorithm. Unfair distribution of resources cannot be detected at the end points, but the gateway has enough information to detect it and to balance resource usage among hosts. A host, upon being notified of congestion in the network, may opt to implement congestion avoidance or to hog resources instead; a host that hogs resources will simply have its packets dropped, which is the gateway’s way of signaling its unfair usage of bandwidth.


A Synthesis on the Paper Entitled “The Design Philosophy of the DARPA Internet Protocols” by D. Clark

In his paper entitled “The Design Philosophy of the DARPA Internet Protocols,” Clark discusses the principles governing the design of the Internet as proposed and implemented under DARPA. The main goal of the DARPA Internet Architecture was to provide an effective technique for using interconnected individual networks. The initial networks considered for connection were the ARPANET and the ARPA packet radio network, so that the radio network could utilize the large service machines on the ARPANET. The technology selected for the connection was packet switching rather than circuit switching. The designers also opted to connect the existing networks rather than create a new, unified network meant to absorb future networks as they emerge.

A set of more specific goals was also presented in the paper to define what is sought in the design of an effective internetwork: persistence of communication despite the loss of gateways or networks, support for multiple types of communication services, accommodation of a variety of networks, provision for distributed management of resources, cost effectiveness, a low level of effort for connecting an end-point host, and accountability for the resources used in the Internet. As the author mentions, these are not a definitive set of goals but a baseline for the design.

The most important goal of the design is to make sure that communication between end points persists despite failures in the network, unless there is no physical path over which to route the packets. Preserving the state of the communication means preserving the information on the number of packets transmitted and acknowledged, or keeping track of the outstanding flow control permissions. The model the author coined and prefers for preserving this state is “fate-sharing”: the state of the communication is kept at the end points availing of the service, so it is lost only if the end point itself is lost. Secondly, the design should support various types of services differing in their requirements for speed, latency, and reliability. One such service is the bi-directional reliable delivery of data, a virtual circuit service (e.g., remote login, file transfer) implemented using TCP. Other types of service include the XNET cross-Internet debugger and the real-time delivery of digitized speech. As a result, it was decided early on that several other transport protocols would need to be created to handle the various services. The design was also meant to accommodate several types of networks, including long-haul nets, local area nets, broadcast satellite nets, packet radio networks, and other varieties. Meeting this goal is as important as the other two, since it defines the range of networks the design can accommodate. The remaining goals are of lesser importance than the first three, yet are still desirable. The goal of distributed management of the Internet’s resources has been met, as not all gateways are managed by a single agency. On the other hand, cost efficiency may have been compromised for the sake of interoperability of networks (the design of packets). Another point of inefficiency could be the retransmission of lost packets from one end to the other, crossing several other networks if necessary. Also, connecting a host to the Internet appears more costly than in other architectures, since the set of services to be supported must be implemented in the end point itself. It is worth noting that at the time the paper was published, the accounting of packet flows was still being studied, so no further details were presented.

The author presented the notion of realizations to describe particular sets of hosts, gateways, and networks. These realizations may differ in magnitude and in requirements such as speed and delay. An Internet designer needs to consider the details of such realizations to make the design fit for implementation.


A Synthesis on the Paper Entitled “A Protocol for Packet Network Intercommunication” by V. Cerf and R. Kahn

In their 1974 paper “A Protocol for Packet Network Intercommunication,” Cerf and Kahn proposed a protocol for communication (the sharing of resources) across networks using data packets. At that time there were already individual packet-switching networks with established protocols for routing packets internally. These networks may differ in the protocol used for addressing the receiver of a message, the maximum size of data packet accepted by hosts within the network, the time delays in accepting, delivering, and transporting packets, the mechanisms used to correct corrupted messages, and the mechanisms for status reporting, routing, fault detection, and isolation.

Given so many differences between individual networks, internetwork communication requires a mechanism that takes all these differences into account. An obvious approach is to have the interface translate each network’s conventions into an internetwork-wide agreed set of conventions, but this would complicate the design of the interface. In the paper, the authors instead assumed a common set of conventions shared by the hosts or processes in all networks belonging to an internetwork, relieving the interface of this complexity. The interface’s main concern then becomes the transport of packets from one network to another, or to several others. Since the interface acts as an entrance or exit to or from a particular network, the authors naturally called it a GATEWAY.

As discussed by the authors, the notion of a gateway is a very simple and intuitive one: any data packet that needs to be transported to another network must pass through a gateway. The authors proposed a protocol to which all networks wishing to communicate with one another must subscribe. In the proposal, every data packet that crosses from one network to another is reformatted at the gateway to meet the requirements of the network it is entering, each time it enters a new network. The gateway may also split, or fragment, packets so that they fit those requirements. The information describing a packet is carried in its headers, most notably the source and destination addresses in the internetwork header.
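The fragmentation step can be pictured with a small sketch: the gateway splits a packet’s payload so that each fragment fits the next network’s maximum size, copying the internetwork addressing into every fragment. The field names here are my own shorthand, not the packet format defined in the paper.

```python
# Sketch of gateway fragmentation: split the payload into chunks that fit
# the next network's limit, carrying the source/destination addresses and
# a byte-based sequence number in each fragment.

def fragment(packet, max_payload):
    """Return a list of fragments whose payloads fit within `max_payload` bytes."""
    payload = packet["payload"]
    fragments = []
    for offset in range(0, len(payload), max_payload):
        fragments.append({
            "src": packet["src"],
            "dst": packet["dst"],
            "seq": packet["seq"] + offset,
            "payload": payload[offset:offset + max_payload],
        })
    return fragments
```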

The mechanism for handling the transmission and acceptance of messages from host to host is assigned to a transmission control program (TCP), as proposed by the authors. Each TCP in turn is connected to the packet switches that serve its host. The TCP may also fragment messages into segments, both because the local network limits the maximum message length (in bytes) and because the TCP may be serving several processes that wish to communicate with outside networks. The problem of how the TCP should segment messages intended for a common destination process or host is also discussed, and two approaches were considered. The authors opted for the approach in which each packet carries information about the source and destination processes, indicated in the packet’s process header.

The notion of ports is also introduced as a unique identifier for a specific message stream between two communicating processes. Since port identification differs across operating systems, a uniform addressing scheme for ports is also needed.

The authors also discussed the problem of reassembling and sequencing the segments received by the destination TCP, and proposed a scheme for handling it at the receiving end. Each arriving packet is associated with a specific port, and the packets are reassembled into the original message based on their sequence numbers. Once reassembly is done, the checksum of the message is computed to verify that the reassembled message is not corrupted. Additional flags, ES (end of segment) and EM (end of message), are included in the packet headers to mark the boundaries of segments and messages on the receiving side.
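A rough sketch of the receiving side follows: segments for a port are ordered by sequence number, concatenated once a segment flagged EM arrives, and checked against a checksum. The field names and the CRC32 checksum are illustrative assumptions of mine, not the checksum or format specified in the paper.

```python
# Sketch of reassembly at the destination: order segments by sequence
# number, require an end-of-message (EM) flag, and verify a checksum
# over the rebuilt message before handing it to the process.

import zlib

def reassemble(segments, expected_checksum):
    """`segments` is a list of dicts with 'seq', 'payload', and 'em' keys."""
    ordered = sorted(segments, key=lambda s: s["seq"])
    if not ordered or not ordered[-1]["em"]:
        return None                                 # message not complete yet
    message = b"".join(s["payload"] for s in ordered)
    if zlib.crc32(message) != expected_checksum:
        return None                                 # corrupted: needs retransmission
    return message
```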

Retransmission and duplicate detection were also tackled by the authors. They proposed an acknowledgement-based approach in which the receiving end acknowledges the receipt of packets according to a sliding window of sequence numbers; packets arriving outside this window are discarded and must be retransmitted by the sender. Buffers can also be employed to handle incoming packets, and packets that cannot yet be accommodated in the buffers are likewise discarded and retransmitted by the sending end.
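The acceptance rule can be summarized in a few lines: only sequence numbers that fall inside the current window and have not been seen before are kept. This is a paraphrase of the idea in code, with names of my own choosing.

```python
# Sketch of sliding-window acceptance and duplicate detection on the
# receiving end; anything outside the window (or already received) is
# discarded and left to the sender's retransmission machinery.

def accept(seq, window_left, window_size, received):
    """Return True if the segment should be kept, False if it is discarded."""
    in_window = window_left <= seq < window_left + window_size
    if not in_window or seq in received:
        return False
    received.add(seq)
    return True
```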

The proposal of Cerf and Kahn generally describes how the Internet works as we know it to this day. There have been deviations from the original proposal, but in general it describes the basic set of protocols to which the Internet subscribes.


A Synthesis on the Paper Entitled “Rethinking the design of the Internet: The end to end arguments vs. the brave new world” by D. Clark

In Clark’s paper “Rethinking the design of the Internet: The end to end arguments vs. the brave new world,” various issues and problems regarding the Internet and its use are discussed relative to the end to end arguments, the design principle of the Internet. The end to end arguments suggest that the lower levels of the system, the core of the network, be kept simple in design. Application-level needs should be addressed at the end points of the network and not in the network itself, since the concern of the network should only be the transport of data packets from one end to another. The identified advantages of keeping the core simple are reduced complexity in the core of the network, flexibility of the network to accommodate new applications, and increased reliability of end-point applications.

Since the conception of communicating internetworks, several new requirements have emerged which pose challenges to the end to end arguments as the design philosophy of the Internet. These include the security of transactions across the network, new resource-demanding applications, the introduction of new players involved in the commercialization of services on top of the Internet (such as ISPs), the involvement of third parties in the communication between end points, and the servicing of less savvy users. All these new requirements must be accommodated by the end to end arguments philosophy if the core of the network is to remain simple.

In today’s transactions over the Internet, security (and sometimes anonymity) are basic requirements for communication. Payment over the Internet has given rise to parties that serve mainly as intermediaries between the two ends involved in a transaction. Governments have also started to insert themselves into end-to-end communications over the Internet for purposes of surveillance, censorship, and taxation; this is nothing new for governments, going back to the invention of wiretapping. Attacks by hackers and the propagation of computer viruses over the network have caused anxiety among end users and a lack of trust in hardware and software.

The challenge is to handle these issues without violating the end to end arguments design philosophy. Technical responses have been employed to address them. The end to end arguments suggest that changes be made at the end points and not in the network itself: application-specific enhancements in the core network may crash the network itself when that application fails, or may make the network inflexible, limiting the range of applications that can be attached to it. It is suggested, then, that enhancements be made on the application side so that any damage is limited to the end points. Some violations of the end to end arguments have nevertheless been deployed recently, justified by the purposes they serve: the installation and use of firewalls, traffic filters, and Network Address Translation elements, which serve to keep anomalies away from the end points and to manage the address space. Along with these enhancements in the core of the network come issues such as the imposition of control on the communication path and the revealing or hiding of message contents. Labeling of information has been adopted in some countries to answer the need for classifying messages running across the network.

In addition to the technical solutions proposed and adopted to answer the aforementioned issues, non-technical measures also play a major part. Laws on cyberspace have started to be drafted, passed, and enforced in some countries to exert a certain level of control over transactions on the Internet. This may be a violation of the end to end arguments, but its purpose justifies its adoption. Labeling schemes have been proposed and adopted to classify information over the Internet, even though the effectiveness and scope of enforcement of such schemes are not assured. The trans-boundary nature of the Internet greatly complicates the enforcement of cyberspace laws, so voluntary submission to such laws is desired for the “regulation” of transactions over the Internet.


Hacking Call Me Maybe :]

Ryan Challinor (I think he is from MIT) hacked his wristwatch with a heart rate monitor to control the tempo of Carly Rae Jepsen’s hit song “Call Me Maybe” played on a music player. The higher his heart rate gets, the faster the tempo of the song goes. I think the usual tempo of the song when played unaltered is 135 BPM. Now that is clever! 😀

Now I am thinking of what other cool applications of his hack there could be. What about automatically choosing the genre of the songs played on your music player based on your heart rate while you are running? Hmmm... coolness! ~_<


Pick a Book 2012 – Recommended Readings

Oh yes! One of the most awaited times of the year has come again. Once every year our college opens up a pick-a-book week in our department wherein both the faculty and we students can choose books we would like to request for purchase by the library. The following are the books which The Turtle and the Hedgehog recommended for the department, in no particular order:

  • Data Management of Protein Interaction Networks (M. Cannataro and P. Guzzi)
  • Graph-based Natural Language Processing and Information Retrieval (R. Mihalcea, D. Radev)
  • GPU Computing Gems (W. Hwu)
  • Distributed and Cloud Computing (K. Hwang, J. Dongarra, G. Fox)
  • Mathematical Foundations of Computer Networking (S. Keshav)
  • The Art of Multiprocessor Programming (M. Herlihy and N. Shavit)
  • Combinatorics of Permutations (M. Bona)
  • Introduction to Quantum Optics (Y. Shih)

Hopefully our complete list of suggestions will get purchased. Each of these books will contribute to what The Turtle and the Hedgehog are currently doing in our respective fields of interest and will be a big help for other students who find our interests interesting as well. ~_<

Excelsior!

J. A. Aborot
