MeshStream: Peer-to-Peer Video Streaming

It has been a while, and a lot has happened over the past semester, especially in our Advanced Computer Networks course. Peer-to-peer (P2P) overlay networks became the topic of our term mini project. My partner and I looked into the possibility of creating a P2P-based video streaming application, and we managed to build a simple proof-of-concept application that uses a P2P client library and a media player library for Java for seeking, downloading, buffering, and playing back pieces from other peers. We named our project MeshStream: Mesh for its P2P aspect and Stream for the playback aspect. I am personally planning to develop the concept further by looking into papers on P2P-based video streaming this summer. After further planning and redesigning, we plan to offer the application to the community as open source, so anybody could develop it into something more useful. In the meantime, I am back to the drawing board for the redesign of the application. The source code will be offered for forking on BitBucket.org once everything is set up and working properly. 🙂 Ciao! Excelsior! Jeff

Posted in Computer Networks, Software Development

A Synthesis on the Paper Entitled "A Survey and Comparison of Peer-to-Peer Overlay Network Schemes" by E. K. Lua et al.

Peer-to-peer overlay networks offer features whose importance weighs differently depending on the situation. As the authors mention in their paper, these include:

  • robust wide-area routing architecture
  • efficient search of data items
  • selection of nearby peers
  • redundant storage
  • permanence
  • hierarchical naming
  • trust and authentication
  • anonymity
  • massive scalability
  • fault tolerance

The authors also mentioned two classifications of P2P overlay networks, namely, structured and unstructured.

On Structured P2P Overlay Networks

Structured P2P overlays take advantage of determinism to increase the efficiency of queries within the network. They make use of Distributed Hash Tables (DHTs) for mapping data objects to peers using key-value pairs, in which keys are mapped to unique peers. Each peer also maintains a look-up table for keeping track of its neighboring peers at all times. Note that this requires periodic updating of each peer's look-up table to ensure efficient query propagation across the network. It is worth noting that, in theory, DHT-based P2P overlays guarantee an average of O(log N) hops per query for any data object within the network. The physical topology of the underlying network may differ from what is theoretically assumed, though. This may cause an increase in latency within the overlay network, which may greatly affect the performance of the applications running on the upper tier of the infrastructure. An advantage of the determinism of structured P2P overlay networks is their efficiency in locating "rare" items within the network. Unstructured networks do not scale well in locating such items because queries are flooded; on the other hand, unstructured networks do work well when the same items are highly replicated across the network.
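
To make the key-to-peer mapping concrete, here is a minimal sketch (not from the paper) of a consistent-hashing-style DHT lookup; the peer names, the 16-bit identifier space, and the sample key are all made up for illustration.

```python
import hashlib
from bisect import bisect_right

def h(value: str) -> int:
    """Hash a string into the identifier space (2^16 here, for readability)."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % 2**16

class SimpleDHT:
    """Toy DHT: a key is assigned to the first peer whose id follows the key's id."""
    def __init__(self, peer_names):
        self.ring = sorted((h(p), p) for p in peer_names)

    def lookup(self, key: str) -> str:
        kid = h(key)
        ids = [pid for pid, _ in self.ring]
        idx = bisect_right(ids, kid) % len(self.ring)  # wrap around the ring
        return self.ring[idx][1]

dht = SimpleDHT(["peerA", "peerB", "peerC", "peerD"])
print(dht.lookup("movie-chunk-042"))  # deterministic: the same key always maps to the same peer
```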

Content Addressable Network (CAN) is a highly structured P2P overlay network infrastructure. CAN facilitates efficient query routing through the way it assigns identifiers to the peers of the network. The identifier space (both for data objects and nodes) is represented by a d-dimensional coordinate space divided into sectors or zones which are assigned to peers. As is common in structured P2P overlay networks, peers also keep a look-up table of neighbors to facilitate efficient routing of requests across the network. Introducing multiple d-dimensional coordinate spaces expands the zones of specific peers, thereby also allowing scalability within the network. In contrast to CAN, Chord uses a ring structure to map peers to identifiers in the identifier space. Peers maintain a lookup table, termed a finger table, to keep track of neighboring peers. In Chord, traversal of the network and assignment of keys to peers use the notion of a peer's successor, with finger entries spaced at powers of two modulo 2^m from the origin peer, which is how queries are routed. Both CAN and Chord provide facilities for handling the "insertion" of new peers into and the "deletion" of current peers from the network: CAN handles this by splitting and joining zones, while Chord reassigns successor pointers. Other structured P2P overlay networks discussed by the authors are Tapestry, Pastry, and Kademlia.
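
As a rough illustration of how Chord-style routing reaches the peer responsible for a key in roughly logarithmic hops, here is a toy sketch; the 6-bit identifier space, the node ids, and the simplified closest-preceding-finger rule are assumptions for illustration, not the paper's algorithm verbatim.

```python
# Toy Chord-style lookup on an m-bit identifier ring (m = 6 here).
M = 6
RING = 2 ** M

def in_interval(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b  # the interval wraps past 0

class ChordNode:
    def __init__(self, node_id, all_ids):
        self.id = node_id
        ordered = sorted(all_ids)
        # finger[i] = successor of (id + 2^i) mod 2^m
        self.fingers = [self._successor_of((node_id + 2**i) % RING, ordered)
                        for i in range(M)]

    @staticmethod
    def _successor_of(point, ordered_ids):
        for nid in ordered_ids:
            if nid >= point:
                return nid
        return ordered_ids[0]

def lookup(nodes, start_id, key):
    """Follow fingers toward the node responsible for key; hop count stays small."""
    current, hops = nodes[start_id], 0
    while not in_interval(key, current.id, current.fingers[0]):
        # jump to the furthest finger that still precedes the key (simplified rule)
        nxt = max((f for f in current.fingers if in_interval(f, current.id, key)),
                  default=current.fingers[0])
        current, hops = nodes[nxt], hops + 1
    return current.fingers[0], hops

ids = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
nodes = {i: ChordNode(i, ids) for i in ids}
print(lookup(nodes, 8, key=54))  # -> node 56 in only a couple of hops
```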

Note that in each of these structured infrastructures, some of the issues commonly addressed are query request and response time, keeping track of neighbor peers to facilitate efficient propagation of requests, and the joining and leaving of peers, which results in dynamic reassignment of data objects within the network. Application performance benefits well from the structure of these network infrastructures in terms of query propagation, thanks to the deterministic assignment of identifiers in the network. The investment in computing space made by structured P2P overlay networks results in a gain in the speed at which requests and responses propagate across the network.

On Unstructured P2P Overlay Networks

In contrast to structured P2P overlay networks, unstructured overlay networks do not depend on the topology of the overlay network for assigning identifiers to data items and peers. In unstructured networks, there is no notion of d-dimensional zones or rings; the topology of the network evolves freely as peers join and leave it.

Common to some infrastructures under this category is the use of super-peers or ultra-peers. As the name suggests, these are peers with high bandwidth, large disk space, and high processing power. The Gnutella protocol and the FastTrack file-sharing system share this feature. Peers publish their file lists to these ultra-peers as meta-information to facilitate more efficient querying of data items. Ultra-peers act as directories in which ordinary peers can look up data item information pointing to the peer(s) hosting the queried file, if any. Queries are directed to the ultra-peers (which are capable of processing queries because of their high capacity) and are propagated using the common flooding approach.
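
A minimal sketch of a TTL-limited query flood over such an overlay might look like the following; the topology, the ultra-peer indexes, and the TTL value are invented for illustration.

```python
# Toy TTL-limited query flood over an unstructured overlay (Gnutella-style).
from collections import deque

neighbors = {
    "u1": ["sp1"], "u2": ["sp1"], "u3": ["sp2"],
    "sp1": ["u1", "u2", "sp2"], "sp2": ["u3", "sp1"],
}
index = {"sp1": {"song.mp3": "u2"}, "sp2": {"clip.avi": "u3"}}  # ultra-peers cache file lists

def flood_query(start, filename, ttl=3):
    seen, frontier = {start}, deque([(start, ttl)])
    while frontier:
        node, t = frontier.popleft()
        hit = index.get(node, {}).get(filename)
        if hit:
            return f"{filename} hosted by {hit} (found via {node})"
        if t == 0:
            continue
        for nb in neighbors[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, t - 1))
    return "not found within TTL"

print(flood_query("u1", "clip.avi"))
```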

Decentralized unstructured P2P overlay networks, like the Freenet network and networks built on top of the Gnutella protocol, have the advantage of high reliability and fault tolerance compared to centralized ones like those built on top of the BitTorrent protocol, where the central component is a single point of failure. On the other hand, the details of the BitTorrent protocol's design justify the choice of centralization for more efficient file sharing. A tracker is employed to keep track of the activities and the files available for sharing across the network, and the protocol employs measures, among them pipelining and choking, to ensure a fair exchange of bits between peers.
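
As a rough idea of what a choking decision could look like (the actual BitTorrent policy has more detail), here is a small sketch; the slot count and the sample upload rates are made up.

```python
# Rough sketch of a tit-for-tat choking decision: keep a few best uploaders
# unchoked plus one optimistic unchoke; numbers are illustrative only.
import random

def choose_unchoked(upload_rates, slots=3):
    """upload_rates: dict peer -> observed upload rate toward us."""
    ranked = sorted(upload_rates, key=upload_rates.get, reverse=True)
    unchoked = set(ranked[:slots])
    rest = ranked[slots:]
    if rest:
        unchoked.add(random.choice(rest))  # optimistic unchoke: give a newcomer a chance
    return unchoked

print(choose_unchoked({"p1": 50, "p2": 10, "p3": 80, "p4": 5, "p5": 33}))
```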

Note that any protocol which requires caching or keeping meta-information in some part(s) of the network requires refreshing, either periodically or aperiodically. In centralized networks like those built on BitTorrent, this is most likely done by all peers propagating their file lists to the central management node. This can happen either periodically (peers republish their file lists) or in a manner already built into the design of the protocol (refreshes are incorporated into queries and responses). This incurs an overhead space cost for maintaining the list in exchange for a faster request-response time, which some protocols and systems address by using high-capacity peers. Space cost is also ceasing to be an issue thanks to the enormous advances in hardware; time cost usually still comes first as a concern compared to space, although this may vary depending on the situation and the nature of the application running on the overlay network.

How much weight is given to the integrity and quality of the data received by requesting peers may also vary depending on the situation. Downloaders usually do not mind much about the quality of the movie files they are receiving as long as they get a usable copy. Security, as always, remains a major concern in P2P file-sharing systems, though. Peers are not assured of it when engaging with file-sharing systems on the Internet; usually, they are only assured of the availability of the data they are looking for.

Violation of copyright laws is also a common issue among file-sharing networks. It would be desirable to incorporate into the design of these protocols the filtering of data being shared across the network, but this is a difficult task because of the volatility of data in the network and the frequent evolution (maybe revolution would be the more correct term) of network topologies. Identifiers for classifying content could be standardized (but by whom?), yet even that would be a far cry from solving this issue.

Posted in Computer Networks

On the Frenzy of Reading Books . . . Again

 

The end of the year 2012 has come to me extraordinarily, as I found myself getting back into the frenzy of reading books…again. I have long been a fan of the tale of a boy who flies through the wind at night and dwells on an island called Neverland, never growing old and always chasing adventure at every tick of the clock. Of course, that has to be Peter Pan (Hey, who does not know Peter? Kids nowadays, maybe?). Add to that the great stories crafted by Tolkien, namely The Lord of the Rings, The Hobbit, The Silmarillion, and several other pieces, and those written by C. S. Lewis, like The Lion, the Witch and the Wardrobe and his several writings on life and philosophy (and maybe faith, subtly).

 

As the end of 2012 approached, I found myself reading literary books again. (Most of my reading during 2012 was of scientific journals, papers, and books.) Every trip to the mall became synonymous with hunting for books in Booksale outlets. Some curiosities of my childhood days came to the surface again. Daydreams of dragons, dwarves, elves, halflings, barbarians, vikings, wizards, and several other mythical creatures of great stories once again occupied my always-working mind. They became the major subjects of my book-hunting sessions in every Booksale outlet I went into. Scientific foundation books are still on my shopping list, of course, and for a man of science that will always be true. On the other hand, these book-hunting sessions have taken a toll on resources like time and money without my becoming aware of it. I once went to Booksale to check some new stock and walked out of the store six to seven books richer and more or less one thousand five hundred pesos poorer.

 

Half unaware, I started reading several classic pieces like The Pickwick Papers and Oliver Twist by Charles Dickens, written during the early 1800s, Stuart Little by E. B. White, and of course The Hobbit by J. R. R. Tolkien. I also started looking for several contemporary series by R. A. Salvatore, like The Legend of Drizzt (of which Book VII, The Legacy, I read just before Christmas) and his other trilogies (a note: R. A. S. began writing about elves, dwarves, and heroes when he got inspired by Tolkien's The Lord of the Rings). The two books Peter and the Starcatchers and Peter and the Shadow Thieves by D. Barry and R. Pearson also caught my curiosity (along with several other books by the authors, like Peter and the Secret of Rundoon). In their writings the authors imaginatively tell the story of how Peter Pan came to be, from an orphan who does not even know his family name to a flying, never-aging boy. Gladly, I also got my hands on one of Philip Pullman's works, The Amber Spyglass (his first two books, The Golden Compass and The Subtle Knife, are still on my hunt list), and two of J. Stroud's works, Heroes of the Valley and The Golem's Eye (the second book of a trilogy), each of which is still on my soon-to-read list.

 

I admire storytellers, I mean the good ones, of course. Not all can tickle your mind and lull it into a lullaby of imagination. Not all authors can put you into Peter Pan's shoes to soar the night sky, or into Mr. Pickwick's suit to join a gang of gentlemen on some silly countryside adventures. I love books and enjoy reading them as much as I admire their authors. Books are like a ticket to a movie house, only the movie plays in your mind, and it takes a handful of good imagining to experience it. Reading a book is more participative, I should say, compared to watching a movie, wherein the cast and the staff have already done all the imagining for you.

 

Deep inside my mind, I am thinking that, perhaps in a very subtle manner, I love reading books and good storytellers because I myself dream of becoming a good storyteller at some point in my life…à la Bilbo Baggins.

 

Now, it is time to get back to my books…once again.

 

Excelsior! Jeff

 

Posted in Books

To Read is to Dream. . .

Aaaah! Christmas break is now just around the corner. For a bookworm and lover of Tolkienish readings like me, this is the season to sit back, sip some coffee, and pick up some books on my To Read list. Of course, I also have my reading list of scientific papers which I need and want to look into, but that is another story, I suppose. There is a time for everything under the sun, says King Solomon. ^__^

In my reading list, I have the following:

  • Heroes of the Valley by Jonathan Stroud – I have already covered some five chapters or more, as far as I remember. I got stuck when busyness in the lab and office got hold of me. I haven't read J. Stroud's Bartimaeus Trilogy, but I am giving this one a shot. I am happy so far with how the story is going.
  • The Hobbit by J. R. R. Tolkien – I have already finished reading the book, and as expected, Tolkien never ceases to amaze me. I would love to read it again after watching The Hobbit in the cinema. I am glad that my mental image of the dwarves and of the dragon basking in the pile of gold fits well with the depiction of the characters in the movie.
  • The Silmarillion by J. R. R. Tolkien – I have not finished reading this one, again due to busyness. Tolkien has crafted such a wonderfully rich fantasy in his writings about Middle-earth and his characters that an imaginative person like me cannot help but daydream.
  • The Legacy, Book VII of The Legend of Drizzt series by R. A. Salvatore – I have to postpone this one because I want to read the previous six books of the series first, so I can trace its references back to them while reading. So the goal now is to find bargain copies of the other books! I hope I won't get tempted to get a copy at Fully Booked. Grrrr.

These are just some of the books which I would like to enjoy reading in my free time. I also hope to have time to write reviews of them, to record my reading experiences and to give others a viewpoint before they read them.

This Christmas, at the top of my wish list is… time. I wish to have time for myself and time to enjoy my books. ^__^

Have a Merry Christmas everyone!

Excelsior! Jeff.

Posted in Books

A Synthesis on the Paper Entitled “Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks” by D. Chiu and R. Jain

In their paper "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks", D. Chiu and R. Jain analyze several increase/decrease algorithms for raising or lowering the load a user contributes to a network. This is in response to the fact that network congestion persists as an issue across heterogeneous networks built on old and new technologies. These algorithms were analyzed using efficiency, fairness, convergence time, and size of oscillations as metrics. The authors differentiate congestion control from congestion avoidance, with the former being a reactive mechanism and the latter a proactive one. Both mechanisms are important for an internetwork and are worth devoting time to for more in-depth study; this paper, though, focuses on congestion avoidance. The analysis came about through the authors' selection of algorithms to be used for the variant called the binary feedback scheme, proposed in another paper. The analysis is also scoped to a class of distributed algorithms designed for distributed decision-making. Specifically, these algorithms belong to the category of decentralized decision-making algorithms, in which the decision of how to respond to the current status of the network, as indicated by the resource managers, is left to the hosts.

In the paper the authors define the state of congestion of a network in terms of the number of packets in it. Each host's load contributes to the overall load of the network. Initially, each host has some value for its resource demand. These demands are served by the resource managers as well as the situation allows. While serving the hosts, the resource managers also describe the status of the network. Upon receipt of the status indicator, a cooperative response is expected from the hosts: under heavy load, the hosts are expected to lighten their demands so the network can gradually cope, and under light load, the hosts are encouraged to use up as much of the resources as the system can support. As noted in the paper, the succeeding demand of a host is a function of the network status as defined by the resource managers and the host's previous resource demand. The network status indicated by the resource managers is what the authors call binary feedback, given that the values signifying the status are defined as 0 for underload and 1 for overload. This design choice gives the benefit of a simple implementation of the algorithm on the resource manager's side.

The change (increase or decrease) in a host's resource demand is what the authors define as the host's control. Four linear controls are presented, namely, (1) multiplicative increase/multiplicative decrease, (2) additive increase/additive decrease, (3) additive increase/multiplicative decrease, and (4) multiplicative increase/additive decrease. These controls are evaluated by the authors based on the previously mentioned set of criteria, namely, (1) efficiency, (2) fairness, (3) distributedness, and (4) convergence. The efficiency metric describes how close the overall utilization of resources (load) in the system is to the maximum load the network can allow; it does not take into consideration the allocation of resources to individual hosts. The fairness metric describes the allocation of resources to each host in the network. The hosts are divided into classes defined by the type of resource on which they bottleneck, and each host in a class should have an allocation of the bottleneck resource equal to that of the other hosts in that class. This scheme of allocation is referred to in other papers as the max-min fairness criterion. Information on the total load capacity of the network and the total number of hosts sharing the resources is known to the resource managers, and disseminating this information to all hosts should require only minimal effort. Control schemes with a shorter convergence time (the time it takes the system to reach equilibrium) and a smaller oscillation amplitude (the variance around the equilibrium state) are also preferred.
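
Written as update rules, the four linear controls might be sketched like this; the parameter values below are placeholders, not the paper's.

```python
# The four linear controls as update rules on a host's demand x.
aI, bI = 1.0, 1.05   # additive / multiplicative increase parameters (placeholders)
aD, bD = -1.0, 0.5   # additive / multiplicative decrease parameters (placeholders)

controls = {
    "AIAD": (lambda x: x + aI, lambda x: x + aD),
    "MIMD": (lambda x: x * bI, lambda x: x * bD),
    "AIMD": (lambda x: x + aI, lambda x: x * bD),
    "MIAD": (lambda x: x * bI, lambda x: x + aD),
}

def next_load(x, feedback, scheme):
    increase, decrease = controls[scheme]
    return decrease(x) if feedback == 1 else increase(x)  # 1 = overload, 0 = underload

print(next_load(40.0, 1, "AIMD"))  # -> 20.0
```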

In the paper, the control schemes above are first evaluated to determine which among them converge. From the resulting set of feasible schemes, the subset satisfying the distributedness criterion is identified, and from that subset, the schemes which optimize the trade-off between fairness and efficiency are identified further. To visualize the trade-off between fairness and efficiency of a specific control scheme, a two-dimensional space is used in which the horizontal axis corresponds to the resource allocation of one user and the vertical axis to that of another. A point (a, b) in the space represents the allocation of resources to the two hosts. The target level of efficiency is the line defined by the set of all points such that a + b = total load capacity, while the target level of fairness is the line defined by the set of all points such that a = b. The goal of a control scheme is to bring the allocation of resources for the two users to the intersection of these two lines, as it is at that point that the trade-off between fairness and efficiency is optimized. The convergence to fairness and the convergence to efficiency are discussed separately by the authors. After considering the convergence of the control schemes, the authors look into integrating the distributedness criterion, which further limits the set of feasible controls to those which do not require knowledge of the state of the system beyond the binary feedback from the resource manager to the hosts. The full set of restrictions therefore narrows the feasible controls to those which can converge, in a distributed manner, to fairness and efficiency.
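
A toy simulation of the additive increase/multiplicative decrease control for two hosts sharing one bottleneck illustrates this convergence toward the fairness line; the capacity, step sizes, and starting demands are invented for illustration.

```python
# Toy AIMD simulation for two hosts sharing one bottleneck.
CAPACITY = 100.0
ADD, MULT = 1.0, 0.5  # additive increase step, multiplicative decrease factor

def step(x, overloaded):
    # binary feedback: overload -> multiplicative decrease, underload -> additive increase
    return x * MULT if overloaded else x + ADD

x1, x2 = 10.0, 60.0  # deliberately unequal starting demands
for _ in range(200):
    overloaded = (x1 + x2) > CAPACITY
    x1, x2 = step(x1, overloaded), step(x2, overloaded)

print(round(x1, 1), round(x2, 1))  # the demands end up oscillating near the fairness line x1 ≈ x2
```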

In conclusion, the authors note that the total restriction on the set of feasible controls is represented in the two-dimensional plane as the intersection of the regions representing the individual restrictions on convergence to fairness and to efficiency. In the plane, it is represented as a line connecting the origin and the point where the efficiency and fairness lines intersect.

Beyond the main analysis, the authors also briefly look into non-linear controls and present some practical considerations for implementing the control algorithms.

It is important to note that the success of congestion avoidance based on the control schemes presented in the paper depends greatly on the cooperative response of the hosts. Uncooperative behavior can be handled by the network by enforcing controls at the gateways acting as resource managers. For a host in the network, it is still generally advantageous to cooperate with the rest of the network in order to have a better network usage experience. Implicitly, the idea of a collective, sacrificial response from the hosts assures the whole network of better performance and therefore reflects back positively on the hosts as a better network usage experience.

Posted in Computer Networks

A Synthesis on the Paper Entitled “Congestion Avoidance and Control” by V. Jacobson and M. Karels

As internetworks have evolved since the conception of the idea of linking heterogeneous individual networks differing in nature and architecture, the issue of congestion along communication channels from end point to end point has become a concern. Jacobson and Karels, of LBL and UCB respectively, in their paper "Congestion Avoidance and Control", look into this problem by characterizing what network congestion is and by suggesting several approaches for detecting congestion within a network and bringing the network back into a stable configuration, or equilibrium, from a state of congestion.

As noted by the authors, the first in a series of congestion collapses happened on the network connecting LBL to UC Berkeley, when throughput dropped by roughly a factor of a thousand. Nowadays, we as Internet users are not very aware of the details of such issues in packet transmission across the Internet, and it is a good thing for ordinary users that such events are largely hidden. In response to the unforeseen event, LBL and UCB put new algorithms into the then-current 4BSD TCP connecting both campuses. These are (1) round-trip time variance estimation, (2) exponential retransmit timer backoff, (3) slow start, (4) a more aggressive receiver ACK policy, (5) dynamic window sizing on congestion, (6) Karn's clamped retransmit backoff, and (7) fast retransmit. Algorithms (1) to (5) were conceived based on the principle of "conservation of packets" in a TCP connection: the authors consider a connection to be in equilibrium if new packets are not introduced into the network until old ones leave. The sending end points are conservative in sending out packets, while the receivers are limited only by the capacity of their buffers. When every end point in the network subscribes to this principle, the network is much less likely to get clogged. This implies a kind of synchronizing mechanism for the transmission of packets across the network.

Three scenarios are presented by the authors in which conservation of packets may fail: (1) the connection never really gets into an equilibrium, or stable configuration, of packet transmission; (2) the senders do not subscribe to the "conservation of packets" principle; and (3) resource limits prevent the connection from reaching equilibrium. Before the network can get into equilibrium, it first needs to be kickstarted: initial packets must be sent toward the destination end points gradually, increasing slowly over time to avoid overwhelming the network. Since an internetwork is dynamic in nature, the process of reaching equilibrium is a slow and non-uniform one, and the rate varies depending on the dynamic configuration of the network (factors such as size, topology, and type of network contribute to it). The principle of "conservation of packets" implies a self-clocking mechanism for the network, and this self-clocking eventually drives the network into a state of equilibrium (provided the hosts subscribe to the principle). In line with this, the authors developed the slow-start algorithm, which facilitates the gradual ramp-up of packets sent into the network. This solution enjoys both subtlety and simplicity of implementation.

Once packet transmission is stable, it is necessary to look into the handling of retransmission in case of failure (delay or loss), and therefore into the protocol's round-trip time estimator. The authors note that one common mistake in computing the retransmit timer is not accounting for the variance of the round-trip time. They developed a cheap way of estimating this variation, which results in a retransmit timer that avoids unnecessary retransmissions of packet data. They also note that another problem with round-trip timing is the backoff estimation: when a host needs to retransmit packet data, it is important to know how long to wait before resending. One approach mentioned in the paper, without proof, is exponential backoff.
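
A sketch of the kind of mean-deviation round-trip time estimator the authors describe might look like the following; the gains (1/8, 1/4) and the four-times-deviation timeout are the commonly cited values, and the initial state and samples are invented.

```python
# Sketch of a mean-deviation RTT estimator with a retransmit timeout derived from it.
def make_rtt_estimator(srtt=1.0, rttvar=0.5, g=0.125, h=0.25):
    state = {"srtt": srtt, "rttvar": rttvar}

    def update(measured_rtt):
        err = measured_rtt - state["srtt"]
        state["srtt"] += g * err                             # smoothed round-trip time
        state["rttvar"] += h * (abs(err) - state["rttvar"])  # smoothed mean deviation
        return state["srtt"] + 4 * state["rttvar"]           # retransmit timeout estimate

    return update

rto = make_rtt_estimator()
for sample in [1.0, 1.2, 0.9, 3.0, 1.1]:   # a spike in RTT inflates the timeout, avoiding spurious retransmits
    print(round(rto(sample), 3))
```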

Now assuming that the flow of data across the network is stable and that the timers are working, failures will most likely come from lost packets. The authors point out that data packets get lost in the network either because they are damaged in transit or because there is congestion somewhere along the path to the destination and the buffer capacity there is insufficient. In the paper, congestion avoidance is defined to have two necessary components. First is a notification mechanism by which the network can inform the end points of current or imminent congestion. Second, the end points must respond properly to the congestion signal by adjusting the volume of packets sent into the network to reduce the current load, which eventually leads to the dissipation of the network load. To decrease the volume of packets sent, a host may decrease its window size; conversely, if resources are freed up, hosts should make the most of them by increasing their window sizes. A host has no mechanism for knowing that resources have been freed up, though; what it can do is keep trying to increase its window size until it hits the limit. The question that comes to mind, then, is the policy for adjusting the window size in response to a congestion signal (or the lack of one) from the network. The authors state that the best increase policy is to make small constant changes to the window size: this is what they call the additive increase/multiplicative decrease policy. The authors also point out that the slow-start algorithm and the congestion avoidance algorithm are different things; in case a restart due to packet loss is needed, slow start is applied in addition to congestion avoidance for the connection to cope with the situation.
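
A toy view of how a congestion window might evolve under slow start combined with additive increase/multiplicative decrease is sketched below; the threshold, the loss round, and the per-round doubling are simplifications for illustration, not the paper's exact algorithm.

```python
# Toy congestion-window trace: slow start, then additive increase, then a loss event.
def simulate(rounds=20, ssthresh=16, loss_at=(12,)):
    cwnd, trace = 1, []
    for t in range(rounds):
        trace.append(cwnd)
        if t in loss_at:                 # congestion signal (e.g. a dropped packet)
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                     # restart with slow start
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow start: roughly exponential growth per round
        else:
            cwnd += 1                    # congestion avoidance: additive increase
    return trace

print(simulate())
```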

A future work mentioned by the authors is a gateway-side "congestion detection" algorithm. Unfair distribution of resources cannot be detected at the end points, but there is enough information at the gateway to detect and balance resource usage among hosts. A host, upon being notified of congestion in the network, may opt to implement congestion avoidance or to hog resources instead; if it hogs resources, it will simply have its packets dropped, which is the gateway's way of signaling its unfair usage of bandwidth.

Posted in Computer Networks

A Synthesis on the Paper Entitled "The Design Philosophy of the DARPA Internet Protocols" by D. Clark

In the paper entitled "The Design Philosophy of the DARPA Internet Protocols", Clark discusses the principles governing the design of the Internet as proposed and implemented by DARPA. The main goal of the DARPA Internet architecture was to provide an effective technique for using connected individual networks. The initial networks considered for connection were the ARPANET and the ARPA packet radio network, so that the radio network could utilize the large service machines of the ARPANET. The technology selected for the connection was packet switching instead of circuit switching, and it was decided to connect the existing networks rather than create a new unified major network to absorb future networks as they emerged.

A set of more specific goals is also presented in the paper to define what is sought from the design of an effective internetwork: persistence of communication despite the loss of a gateway or network, support for multiple types of communication services, accommodation of a variety of networks, provision for distributed management of resources, cost effectiveness, a low level of effort for connecting an end-point host, and accountability for resources used in the Internet. As the author mentions, these are not a definitive set of goals but a baseline for the design.

The most important goal of the design is to make sure that communication between end points persists despite failures in the network, unless there is no physical path over which to route the packets of the communication. Preserving the state of a communication means preserving information such as the number of packets transmitted and acknowledged, or keeping track of the outstanding flow-control permissions. A model for persisting the communication is "fate-sharing", as coined and preferred by the author: with fate-sharing, the information about the communication is gathered at the end point availing of the service. Secondly, the design should support various types of services differing in their requirements for speed, latency, and reliability. Several of these services, such as bidirectional reliable delivery of data and virtual circuit service (e.g., remote login and file transfer), are implemented using TCP. Other types of service include the XNET cross-Internet debugger and real-time delivery of digitized speech. As a result, it was decided early on that several other transport layers would need to be created to handle the various services. The design was also meant to accommodate several types of networks, including long-haul nets, local area nets, broadcast satellite nets, packet radio networks, and more. Meeting this goal is as important as the first two, as it defines the range of networks the design will accommodate. The other goals are of lesser importance compared to the first three, yet are still desirable for the design to meet. The goal of distributed management of the Internet's resources has been met, as not all gateways are managed by a single agency. On the other hand, cost efficiency may have been compromised for the sake of the interoperability of networks (the design of packets); another point of inefficiency could be the retransmission of lost packets from one end to the other, crossing several other networks if necessary. Also, the cost of connecting a host to the Internet seems higher compared to other architectures, as the set of services desired to be supported must be implemented at the end point. It is worth noting that at the time of the paper's publication, the accounting of packet flows was still being studied, so no further details were presented.

The author presents the notion of realizations to describe particular sets of hosts, gateways, and networks. These realizations may differ in magnitude and in requirements such as speed and delay. An Internet designer needs to consider the details of such realizations to make the design fit for implementation.

Posted in Computer Networks

A Synthesis on the Paper Entitled "A Protocol for Packet Network Intercommunication" by V. Cerf and R. Kahn

In the paper A Protocol for Packet Network Intercommunication (Cerf and Kahn, 1974), Cerf and Kahn propose a protocol for communication (sharing of resources) across networks using data packets. At the time, there already existed individual packet-switching networks with established protocols for routing packets internally. These individual networks may differ in the protocol used for addressing the receiver of a message, the maximum size of data packet accepted by each host within the network, the time delays in accepting, delivering, and transporting data packets within the network, the mechanisms used for correcting corrupted messages, and the mechanisms for checking status information, routing, fault detection, and isolation.

Given so many differences between individual networks, if internetwork communication is desired, it is necessary to provide a mechanism for communication in which all these differences are taken into consideration. An obvious way to approach this problem is to transform the conventions of the source at the interface into an internetwork-wide agreed set of conventions, but this would complicate the design of the internetwork interface. In the paper, the authors instead assume a common set of conventions shared by the hosts or processes in all the networks belonging to an internetwork, relieving the interface of those design complexities; the interface's concern is then mainly the transport of packet data from one network to another, or to several others. Since the interface acts as an entrance to or exit from a particular network, the authors naturally call it a GATEWAY.

As discussed by the authors, the notion of a gateway is a very simple and intuitive one: any data packet which needs to be transported to another network must pass through a gateway. The authors propose a protocol to which all the networks that want to communicate with each other need to subscribe. In the proposal, each data packet is reformatted every time it crosses a gateway and enters a new network, so that it meets the requirements of the network it is entering. Splitting, or fragmentation, of data packets may occur at the gateway so that the packets fit the requirements of the receiving network. The details describing the packets are indicated in the packet headers, most notably the source and destination information carried in the internetwork header.

The authors propose that the mechanism for handling the transmission and acceptance of messages from host to host be assigned to a transmission control program (TCP). Each TCP is in turn connected to the packet switches which serve its host. Fragmentation of messages into segments may also be done by the TCP, both because the local network has a limit on the maximum message length (in bytes) and because the TCP may be serving several processes that wish to communicate with outside networks. The problem of how the TCP should segment messages intended for a common destination process or host is also discussed, and two approaches are considered. The authors opt for the solution in which each packet is assumed to carry information on the destination and source process, indicated in the packet's process header.

The notion of ports is also introduced as a unique identifier for a specific message stream between two communicating processes. Since the identification of ports differs across operating systems, there is also a need for a uniform addressing scheme for ports.

The authors also discuss the problem of reassembling and sequencing the segments received by the destination TCP, and they propose a scheme for handling it on the receiving end. Each arriving packet is matched to a specific port, and the packets are reassembled into the original message or text based on their sequence numbers. The checksum of the message is computed once reassembly is done, to verify that the reassembled message is not corrupted. Additional flags, ES (end of segment) and EM (end of message), are also included in the packet headers to indicate the completion of the assembly of segments and messages on the receiving side.

Retransmission and duplicate detection are also tackled by the authors. They propose an acknowledgement-based approach in which the receiving end acknowledges the receipt of packets according to a sliding window of sequence numbers. Any packets arriving outside this window are discarded and must be retransmitted by the sender. As proposed, buffers can also be employed to handle incoming packets; any packets which cannot yet be accommodated in the buffers are likewise discarded and retransmitted by the sending end.
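
A small sketch of a receive-side sliding window with duplicate detection and in-order reassembly, in the spirit of what the paper describes; the window size and the sample packets are invented.

```python
# Toy receive-side sliding window: out-of-window packets are dropped (and would be
# retransmitted), in-window packets are buffered and delivered in sequence order.
class Receiver:
    def __init__(self, window=4):
        self.next_seq = 0          # left edge of the window
        self.window = window
        self.buffer = {}
        self.delivered = []

    def receive(self, seq, data):
        if not (self.next_seq <= seq < self.next_seq + self.window):
            return "discard (outside window; sender must retransmit)"
        self.buffer[seq] = data
        while self.next_seq in self.buffer:  # deliver any in-order run
            self.delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return f"ack up to {self.next_seq - 1}"

rx = Receiver()
for seq, data in [(0, "he"), (2, "lo"), (1, "l"), (2, "lo"), (9, "x")]:
    print(seq, rx.receive(seq, data))       # the duplicate (2) and far-future (9) packets are dropped
print("".join(rx.delivered))                # reassembled text: "hello"
```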

The proposal of Cerf and Kahn generally describes how the Internet works as we know it to this day. There have been deviations from the full proposal, but in general it describes the basic set of protocols to which the Internet subscribes.

Posted in Computer Networks

A Synthesis on the Paper Entitled "Rethinking the design of the Internet: The end to end arguments vs. the brave new world" by D. Clark

In Clark's paper "Rethinking the design of the Internet: The end to end arguments vs. the brave new world", various issues and problems regarding the Internet and its use are discussed relative to the end-to-end arguments design principle of the Internet. The end-to-end arguments suggest that the lower levels of the system, the core of the network, be kept simple in design: application-level needs should be addressed at the end points of the network and not in the network itself, since the concern of the network should only be facilitating the transport of data packets from one end to another. The identified advantages of keeping the core network simple are a reduction in the complexity of the network's core, flexibility of the network to accommodate new applications, and an increase in the reliability of end-point applications.

Since the creation of the notion of communicating internetworks, several new requirements have emerged which pose challenges to the end-to-end arguments design philosophy of the Internet. These include the security of transactions across the network, new resource-demanding applications, the introduction of new players involved in the commercialization of services on top of the Internet (like ISPs), the introduction of third parties into the communication of end points, and the servicing of less savvy users. All these new requirements must be handled under the end-to-end arguments philosophy while keeping the core of the network simple.

In today's transactions over the Internet, security (and sometimes anonymity) is a base requirement for communication. Payment over the Internet has led to the rise of parties which mainly serve as intermediaries between the two ends involved in a transaction. Governments (most likely in various countries) have also started joining end-to-end communications over the Internet for purposes of surveillance, censorship, and taxation; this is nothing new for governments since the invention of wiretapping. Attacks over the Internet by hackers and the propagation of computer viruses over the network have caused anxiety among end users, leading to a lack of trust in hardware and software.

It is a challenge for the end-to-end arguments design philosophy to handle these issues without being violated. Technical responses have been employed to answer them, and the end-to-end arguments suggest that the changes be made at the end points and not in the network itself. Application-specific enhancements in the core network may cause the network itself to crash when the specific application fails, or may make the network inflexible, thereby limiting the range of applications which can be attached to it. It is suggested, then, that enhancements be made on the application side so that the scope of any damage is limited to the end points. Some violations of the end-to-end arguments have been employed recently, though, and they are justifiable based on the purposes they serve: the installation and use of firewalls, traffic filters, and Network Address Translation elements, which serve the purposes of preventing anomalies at the end points and of address space management. Along with these enhancements in the core of the network come issues like the imposition of control on the communication path and the revealing or hiding of message contents. Labeling of information has been adopted in some countries to answer the need for classifying messages running across the network. In addition to the technical solutions proposed and adopted to answer the aforementioned issues, non-technical solutions also play a major part. Laws on cyberspace have started to be drafted, passed, and enforced in some countries to impose a certain level of control on transactions over the Internet; this may be a violation of the end-to-end arguments, but its purpose justifies its adoption. Labeling schemes have been proposed and adopted to classify information over the Internet, even though the efficiency and scope of enforcement of such schemes are not totally assured. The trans-boundary nature of the Internet has a huge effect on the enforcement of cyberspace laws, so voluntary submission to such laws is desired for the "regulation" of transactions over the Internet.

Posted in Computer Networks

Hacking Call Me Maybe :]

Ryan Challinor (I think he is from MIT) hacked his heart-rate-monitor wristwatch to control the tempo of Carly Rae Jepsen's hit song Call Me Maybe played on a music player. The higher his heart rate gets, the faster the tempo of the song goes. I think the usual tempo of the song when played unaltered is 135 BPM. Now that is clever! 😀

Now I am thinking of what other cool applications of his hack there could be. What about automatically choosing the genre of the songs being played on your music player based on your heart rate while you are running? Hmmm…coolness! ~_<
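
Just for fun, a tiny sketch of that genre idea; the heart-rate thresholds and the genres are, of course, made up.

```python
# Playful sketch: pick a playlist genre from the runner's heart rate (bpm).
def genre_for(bpm):
    if bpm < 100:
        return "acoustic"
    if bpm < 140:
        return "pop"            # roughly Call Me Maybe territory
    return "drum and bass"

for bpm in [85, 120, 165]:
    print(bpm, "->", genre_for(bpm))
```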

Posted in Software Development