EP1782245A2 - Systems for distributing data over a computer network and methods for arranging nodes for data distribution over a computer network - Google Patents

Systems for distributing data over a computer network and methods for arranging nodes for data distribution over a computer network

Info

Publication number
EP1782245A2
Authority
EP
European Patent Office
Prior art keywords
node
nodes
server
user
child
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05769257A
Other languages
German (de)
English (en)
Other versions
EP1782245A4 (fr)
Inventor
Mike O'neal
John P. Talton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Network Foundation Technologies LLC
Original Assignee
Network Foundation Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Network Foundation Technologies LLC filed Critical Network Foundation Technologies LLC
Publication of EP1782245A2
Publication of EP1782245A4

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • H04L67/1044 Group management mechanisms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • H04L67/1087 Peer-to-peer [P2P] networks using cross-functional networking aspects
    • H04L67/1089 Hierarchical topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • H04L67/1074 Peer-to-peer [P2P] networks for supporting data block transmission mechanisms

Definitions

  • Various embodiments of the present invention relate to systems for distributing data (e.g., content data) over a computer network and methods of arranging nodes for distribution of data (e.g., content data) over a computer network.
  • the systems and methods of the present invention may be applied to the distribution of streaming audiovisual data over the Internet.
  • The term "node" (e.g., as used in the phrase a first node communicates with a second node) is intended to refer to a point of connection in a computer network (e.g., a computer network including the Internet, a local-area-network, a wide-area-network, a wireless network).
  • The term "upstream" (e.g., as used in the phrase a first computer system sends data via an upstream connection to a second computer system) is intended to refer to the communication path between a first computer system and the Internet when the first computer system is sending data to a second computer system via the Internet.
  • The term "downstream" (e.g., as used in the phrase a first computer system receives data via a downstream connection from a second computer system) is intended to refer to the communication path between a first computer system and the Internet when the first computer system is receiving data from a second computer system via the Internet.
  • The term "uptree" (e.g., as used in the phrase a first node sends data via an uptree connection to a second node) is intended to refer to the network topology communication path between a first node and a second node when the first node is sending data to a second node which is higher up on the network topology tree (that is, closer to the root server).
  • the term "docked” (e.g., as used in the phrase a child node docks with a parent node) is intended to refer to forming a connection between nodes via which data may flow in at least one direction (e.g., at least uptree or downtree).
  • each node in the network has an address.
  • a computer system resident at a particular address may have sufficient bandwidth or capacity to receive data from, and to transmit data to, many other computer systems at other addresses.
  • An example of such a computer system is a server, many commercial versions of which can simultaneously exchange data with thousands of other computer systems.
  • a computer system at another location may have only sufficient bandwidth to effectively exchange data with only one other computer system.
  • An example of such a system is an end user's personal computer connected to the Internet by a very low speed dialup modem.
  • an end user's personal computer system may have even greater bandwidth when connected to the Internet by ISDN lines, DSL (e.g., ADSL) lines, cable modems, T1 lines or even higher capacity links.
  • various embodiments of the present invention may take advantage of the availability of such higher capacity end user systems (e.g., those computer systems capable of essentially simultaneously exchanging data with multiple computer systems).
  • a content provider distributes its data by making the data available on a server node 8 simultaneously to a plurality of users at user nodes 12 (of note, the terms "server", "root server" and "primary server" may be used interchangeably throughout the present application to refer to the same device (i.e., the highest level parent node in a given network)).
  • the double-headed arrows show the two-way communication between each end user's computer system and the server.
  • the content provider's server transmits a separate stream of signals to each receiver node.
  • the content provider would typically either add equipment to increase capacity or it would engage a mirror site to accomplish essentially the same result as adding equipment.
  • the capacities of the end user computers are of virtually no consequence in such a system.
  • Another system for distributing data is the Napster™ music file exchange system provided by Napster, Inc. of Redwood City, Calif. A schematic of the Napster™ music file exchange system (as its operation is presently understood) is illustrated in FIG. 2.
  • the server 9 instead maintains a database relating to the various music files on the computers of users who are logged onto the server 9.
  • the first user causes his computer to query the server 9 for the second user's node address, and a connection is made between the first and second users' computers. Through that connection, the first user's computer notifies the second user's computer of the desired file, and the second user's computer responds by transmitting a copy of the desired music file directly to the first user's computer.
  • a first user attempting to download a particular file from a second user must start completely over again if the second user cancels its transmission or goes off line during the data transfer.
  • streaming media is a series of packets (e.g., of compressed data), each packet representing moving images and/or audio.
  • the connection is made between the server and the user node (files requested by the user are typically transmitted by the server in full to the user node and the browser program may store the files in buffer memory and display the content on the user's computer system monitor - some files may be more permanently stored in the computer system's memory for later viewing or playing.)
  • the connection with the server is typically terminated once the files have been received at the user node (or the connection may be terminated a short time thereafter). Either way, the connection is usually of a very short time duration.
  • With streaming media, the contact between the server and user nodes is essentially continuous.
  • the server sends streaming media packets of data to the user node.
  • a streaming media player installed on the user's computer system (e.g., software such as RealMedia™ from RealNetworks, Inc. of Seattle, Wash.) causes the data to be stored in buffer memory. The player decompresses the data and begins playing the moving images and/or audio represented by the streaming media data on the user's computer system. As the data from a packet is played, the buffer containing that packet is emptied and becomes available to receive a new packet of data.
  • Continuous action content such as, for example, the display of recorded motion picture films, videos or television shows may be distributed and played in essentially "real time".
  • live events such as, for example, concerts, football games, court trials, and political debates may be transmitted and viewed essentially “live” (with only the brief delays needed for compression of the data being made available on the server, transmission from the server to the user node, and decompression and play on the user's computer system preventing a user from seeing the event at the exact same moment in time as a person actually at the event).
  • the server node and user node may stay connected to each other until all the packets of data representing the content have been transmitted.
  • Various embodiments of the present invention relate to a system for distributing data (e.g., content data) over a computer network and a method of arranging receiver nodes in a computer network such that the capacity of a server is effectively increased (e.g., the capacity of a server may be effectively multiplied many times over; the capacity of the server may be effectively increased exponentially).
  • the present invention may take advantage of the excess capacity many receiver nodes possess, and may use such receiver nodes as repeaters.
  • the distribution system may include node(s) having database(s) which indicate ancestor(s) and/or descendant(s) of the node so that reconfiguration of the distribution network may be accomplished without burdening the system's primary server.
  • the process may include the steps of providing a new user node (or connection requesting user node) with a connection address list of nodes within the network, having the new user node (or connection requesting user node) go to (or attempt to go to) the node at the top of the connection address list, determine whether that node is still part of the distribution network, and connect thereto if it is, and if it is not, to go to (or attempt to go to) the next node on the connection address list.
  • a propagation signal may be transmitted to the nodes below it in the network, causing them to move up in the network in a predetermined order.
  • the present invention may provide a decentralized approach which provides, to each new user node (or connection requesting user node) a path back to the root server.
  • various embodiments of the present invention may be applied to the transmission (e.g., the "appointment" transmission) of audiovisual content such as, for example (which example is intended to be illustrative and not restrictive), live or pre-recorded concerts, football games, court trials, political debates, motion picture films, videos or television shows.
  • Such transmission of audiovisual content may be made with acceptable levels of quality, wherein "acceptable levels of quality" may vary depending upon the end users and the type of transmission.
  • Such transmission of audiovisual content may be carried out using streaming media transmissions intended to reach large audiences, in much the way that television shows transmitted over television cable and broadcast media reach large audiences (in this regard, the present invention may enable a server to transmit streaming media to the large number of users which would be essentially simultaneously logging on to view a particular audiovisual presentation under this television-type "large audience" transmission).
  • FIG. 1 is a schematic drawing of a prior art computer information distribution network
  • FIG. 2 is a schematic drawing of another prior art computer information distribution network
  • FIG. 3 is a schematic drawing of an embodiment of a computer information distribution network formed pursuant to the present invention.
  • FIG. 4 is a schematic drawing of another embodiment of a computer information distribution network formed pursuant to the present invention.
  • FIG. 5 is a schematic drawing of another embodiment of a computer information distribution network formed pursuant to the present invention.
  • FIG. 6 is a schematic drawing of a particular topology of a computer information distribution network formed pursuant to an embodiment of the present invention
  • FIG. 7 is a schematic drawing of a particular topology of the computer information distribution network formed pursuant to an embodiment of the present invention as shown in FIG. 6 (after the occurrence of an event);
  • FIG. 8 is a schematic drawing of another topology of a computer information distribution network formed pursuant to an embodiment of the present invention.
  • FIG. 9 is a schematic drawing of the topology of the computer information distribution network formed pursuant to an embodiment of the present invention as shown in FIG. 8 (after the occurrence of an event);
  • FIG. 10 is a flow diagram of an embodiment of the present invention showing a Server Connection Routine which may be performed when a prospective child node seeks to join the distribution network;
  • FIG. 11 is a flow diagram of an embodiment of the present invention showing a Prospective Child Node's Request to Server for a Connection Routine
  • FIGS. 12A-12E are schematic drawings showing varying topologies of the computer information distribution network formed pursuant to an embodiment of the present invention under several circumstances;
  • FIG. 13 is a block diagram of an embodiment of the present invention showing the memory blocks into which the software used may partition a user node's memory
  • FIG. 14 is a flow diagram of an embodiment of the present invention showing a Prospective Child Node's Connection Routine - the routine which a new user node (or connection requesting user node) may go through in attempting to connect to a distribution chain (or tree) after receiving a connection address list;
  • FIG. 15 is a flow diagram of an embodiment of the present invention showing a Prospective Child Node's Connection Routine With Return to Server Subroutine;
  • FIG. 16 is a flow diagram of an embodiment of the present invention illustrating a Prospective Parent Node's Connection Routine
  • FIG. 17 is a flow diagram of an embodiment of the present invention illustrating a Server's Connection Instruction Routine
  • FIG. 18 is a flow diagram of an embodiment of the present invention illustrating a Fully Occupied Parent's Connection Instruction Routine
  • FIG. 19 is a flow diagram of an embodiment of the present invention illustrating a Multi-Node Selection Subroutine
  • FIGS. 20A and 20B are together a flow diagram of an embodiment of the present invention illustrating a Universal Connection Address Routine;
  • FIG. 21 is a schematic drawing of a topology of a computer information distribution network of an embodiment of the present invention before a new node will be added using the Universal Connection Address Routine;
  • FIG. 22 is a schematic drawing of a topology of the computer information distribution network of an embodiment of the present invention when a new node is added using the Universal Connection Address Routine;
  • FIG. 23 is a schematic drawing of a topology of a computer information distribution network of an embodiment of the present invention before a reconfiguration event;
  • FIG. 24 is a schematic drawing of a topology of a computer information distribution network of an embodiment of the present invention shown in FIG. 23 after a reconfiguration event;
  • FIG. 25 is a schematic drawing of another topology of a computer information distribution network of an embodiment of the present invention before a reconfiguration event;
  • FIG. 26 is a schematic drawing of a topology of a computer information distribution network of an embodiment of the present invention shown in FIG. 25 after a reconfiguration event;
  • FIG. 27 is a flow diagram of an embodiment of the present invention showing a Child Node's Propagation Routine;
  • FIG. 28 is a schematic drawing of another topology of a computer information distribution network of an embodiment of the present invention before a reconfiguration event;
  • FIG. 29 is a schematic drawing of a topology of a computer information distribution network of an embodiment of the present invention shown in FIG. 28 after a reconfiguration event;
  • FIG. 30 is a schematic drawing of another topology of a computer information distribution network of an embodiment of the present invention before a "complaint" regarding communications;
  • FIG. 31 is a flow diagram of an embodiment of the present invention showing a Grandparent's Complaint Response Routine.
  • FIG. 32 shows one "Virtual Tree” (or “VTree”) embodiment of the present invention (in which a server of a distribution network is configured to include a number of "virtual" nodes);
  • FIG. 33 shows another "Virtual Tree” (or “VTree”) embodiment of the present invention (in which a server of a distribution network is configured to include a number of "virtual" nodes); and
  • FIGS. 34A-34C relate to various distribution network topology examples according to the present invention.
  • the primary server node (or simply, server) 11 provides content data (e.g., streaming media) to user nodes 12 connected directly to it (sometimes referred to as "first level user nodes").
  • Each first level user node 12 has a second level user node 13 connected to it and each second level user node 13 has a third level user node 14 connected to it.
  • the computer system at each first level user node 12 passes a copy of the content data received from server node 11 to the computer system at the second level user node 13 attached to such first level user node 12.
  • the computer system at each second level user node 13 in turn passes the content data on to the computer system at the third level user node 14 attached to it.
  • the computer systems at the server and user nodes may have distribution software installed in them which enables the nodes to be arranged as shown and for the computer systems to receive and re-transmit data.
  • The cascadingly connected arrangement of nodes (i.e., first level nodes are connected to the server, second level nodes are connected to first level nodes, third level nodes are connected to second level nodes and so on) shown in FIG. 3 takes advantage of the bandwidth available in certain nodes to essentially simultaneously receive and transmit data.
  • the effective distribution capacity of a server is in essence multiplied by the number of levels of nodes linked together.
  • the distribution capacity of the server node is increased from 8 user nodes to 24 in just three levels.
  • many user nodes may have at least sufficient bandwidth (e.g., upstream bandwidth as well as downstream bandwidth) to receive data from one node and to re-transmit streams of data essentially simultaneously to two or more other nodes.
  • This capacity could be used in another embodiment to set up a computer network with a cascadingly connected exponential propagation arrangement 16 (as shown in FIG. 4).
  • an exponential propagation arrangement effectively increases the distribution capacity of a server exponentially. For example (which example is intended to be illustrative and not restrictive), with just three levels of user nodes, each having the capacity to retransmit two data streams, the distribution capacity of the server in FIG. 4 is increased from 8 user nodes to 56.
  • a distribution network may also be set up as a cascadingly connected hybrid linear/exponential arrangement 18, such as shown in FIG. 5.
  • the effective distribution capacity grows more quickly in a hybrid linear/exponential arrangement with each new level than does the distribution capacity in a linear propagation arrangement, and less quickly than does the distribution capacity in a pure exponential arrangement.
  • any of these arrangements allows a server's distribution capacity to be greatly increased (e.g., with little or no additional investment in equipment for the server). Further, any other desired hybrid arrangement may be used.
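  • The capacity figures above follow directly from the per-level growth rules: each level of the linear arrangement of FIG. 3 adds another eight nodes (8 + 8 + 8 = 24 after three levels), while each level of the exponential arrangement of FIG. 4 doubles (8 + 16 + 32 = 56). The short sketch below is a hypothetical illustration rather than part of the disclosure (the function name and parameters are assumptions); it computes the cumulative number of user nodes reachable for a given number of levels and repeat factor.

```python
def reachable_nodes(server_slots: int, levels: int, repeats_per_node: int) -> int:
    """Cumulative number of user nodes a server can reach through a cascade.

    server_slots     -- user nodes the server feeds directly (the first level)
    levels           -- number of user-node levels in the distribution chains
    repeats_per_node -- copies of the stream each user node re-transmits
                        (1 = linear arrangement, 2 = exponential/binary arrangement)
    """
    total = 0
    width = server_slots               # nodes in the current level
    for _ in range(levels):
        total += width
        width *= repeats_per_node      # each node feeds this many children
    return total

# FIG. 3 (linear): 8 direct slots, 3 levels, one re-transmitted stream per node
assert reachable_nodes(8, 3, 1) == 24
# FIG. 4 (exponential): 8 direct slots, 3 levels, two re-transmitted streams per node
assert reachable_nodes(8, 3, 2) == 56          # 8 + 16 + 32
# A hybrid arrangement, as in FIG. 5, falls between these two growth rates.
```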
  • a user node connected to a server may transmit data to the server indicating the identity of the user node, what the user node wants and/or other data, while the server node may transmit data confirming its identity and containing information and other content to the user node.
  • arrows may not be shown for the sake of simplicity.
  • an exponential propagation arrangement may create the most distribution capacity.
  • many (and perhaps even most) user nodes may have sufficient bandwidth to re-transmit multiple acceptable-quality copies of the data.
  • a number of user nodes may not have sufficient bandwidth to re-transmit more than one copy of the data with acceptable quality, and some user nodes may not have sufficient bandwidth to re-transmit even a single acceptable copy.
  • a distribution system employing the present invention may be configured, for example, as a hybrid propagation network.
  • a personal computer acting as a server node may reach many (e.g., hundreds, thousands or more) user nodes, even if the server node itself has capacity to transmit content data directly to only one other node.
  • the length of the distribution chains (i.e., the number of levels of user nodes linked through each other to the server) may be kept as small as possible to reduce the probability that the nodes at the ends of the chains will suffer a discontinuity of service.
  • the user nodes having the greatest bandwidth may be placed as far uptree as possible (i.e., be placed as close as possible to the server, where "close" refers to network node level and not necessarily to geographical proximity).
  • a user node may be deemed unreliable for any of a number of reasons.
  • One example (which example is intended to be illustrative and not restrictive), is that the user node is connected to the Internet by lines having intermittent failures.
  • Another example (which example is intended to be illustrative and not restrictive), is that the user is merely sampling the content available on the network and intentionally disconnects the user node from the distribution network after discerning that he or she has no interest in the content.
  • the user nodes may be positioned in the most advantageous positions, taking into account the dynamic nature of the distribution network, in which many user nodes may enter and leave the distribution network throughout the server's transmission of a streaming media show.
  • the invention may help to preserve the viewing experience of users at user nodes positioned even at the end of a distribution chain (the server and/or user nodes may be enabled to perform the operations required to set up and maintain the distribution network by having data distribution software installed therein).
  • the distribution chains can be viewed as a plurality of family trees, each family tree being rooted to the server 11 through one of the first level nodes 12 (each node in the distribution network may have distribution software loaded in it which enables the node to perform the functions described below; before any new node may join the distribution network, such new node may also have such software loaded in it).
  • the network topologies have a branch factor of two (i.e., no user node is assigned more than two child nodes to be connected directly to it).
  • a network topology with a branch factor of two may be referred to as a binary tree topology. It should be understood, of course, that the teachings set forth herein may be extended to network topologies having other branch factors, for example (which example is intended to be illustrative and not restrictive), branch factors of three, four or more.
  • FIG. 6 is a schematic drawing of a distribution network topology useful for describing the current "User Node Arrival" example.
  • This FIG. 6 shows a distribution chain or family tree rooted to the server 11 through user node A, a first level user node 12 (the dashed lines represent connections from the server to other first level nodes or from node A to another user node).
  • User node A could be thought of as a child node of the server and as a parent node for other user nodes connected directly to it.
  • User node B, a second level user node 13, could be thought of as A's child.
  • User nodes C and D, third level user nodes 14, may be thought of as B's children and A's grandchildren (and each other's siblings).
  • User nodes E and F, fourth level user nodes 15, may be thought of as C's children (and each other's siblings).
  • User node G, also a fourth level user node 15, may be thought of as D's child.
  • user nodes E, F and G may be thought of as B's grandchildren and A's great grandchildren.
  • When a new user node (or connection requesting user node) 19, such as node X in FIG. 6, seeks connection to the distribution network, it will first make a temporary connection to the server node (or a connection server, not shown) in order to begin the process for connecting to the distribution system.
  • the server (or connection server) will discern from the user node a bandwidth rating (discussed below) appropriate to that node and, depending upon the available capacity of the server and any existing distribution chains, the server will either assign the new user node to a spot directly connected to the server or will provide the new user node with a connection path through a tree to the server.
  • FIG. 10 is a flow diagram associated with this example showing a Server's Connection Routine which is performed when a prospective child node seeks to join the distribution network.
  • the server performs the Server's Connection Instruction Routine (discussed below), in which the server determines what connection instructions to give to the new user node (or connection requesting user node).
  • the server then goes to step 102 where it determines whether, as a result of the Server's Connection Instruction Routine, the prospective child node is being instructed to dock with the server. If the prospective child node is being instructed to dock with the server, then the server goes to step 103 in which the server would allow the new user node to dock with it, and the server would begin transmitting data (e.g., streaming media) directly to the new user node.
  • two different servers could be used - one to perform the server's connection routine and the other to transmit data (e.g., streaming media) - since both servers would be performing server functions, they will hereinafter sometimes be considered a single server for purposes of the description herein.
  • If the prospective child node is not being instructed to dock with the server, the server goes to step 104, in which it provides the new user node with a connection address list and disconnects the new user node from the server.
  • FIG. 11 is a flow diagram associated with this example illustrating the Prospective Child Node's Request to Server for a Connection Routine.
  • Upon making the temporary connection to the server, the new user node goes to step 111 in which it provides bandwidth rating and performance rating information to the server. It then proceeds to step 112 in which it receives connection instructions from the server. Then the new user node proceeds to step 113 to determine whether it has been instructed to dock directly with the server.
  • If it has been instructed to dock directly with the server, the new user node proceeds to step 114 in which it erases any information which may have been in its ancestor database and, if the distribution software has a Return to Server Subroutine in it, resets the connection attempt counter to zero.
  • the new user node then proceeds to step 115 in which it docks with the server and begins receiving data (e.g., streaming media). If the new user node is not being instructed to dock directly with the server, then the new user node goes to step 116 in which it receives the new connection address list from the server and loads such list into the user node's ancestor database and begins the Prospective Child Node's Connection Routine (discussed below). If the distribution software has a Return to Server Subroutine in it, the connection attempt counter is reset to one in step 116.
  • the server has determined that the new user node X will not be allowed to connect directly to the server. Also, for the purposes of these examples, all of the user nodes are presumed to be equally capable of simultaneously re-transmitting two copies of the content data and that the tree rooted through node A is the most appropriate tree through which node X should be connected to the server. In one embodiment the server may rely on chain length in determining to which particular user node, already in the distribution network, that node X should connect.
  • Path D-B-A-S (where "S" is the server) represents the shortest available path from an end of a chain back to the server (e.g., from a node level point of view and not necessarily from a geographical proximity point of view), and the server 11 gives that path information, or connection address list, to node X during step 104 of FIG. 10 (and node X receives such list during step 116 of FIG. 11). That is, node X will be given a connection address list with the URLs and/or IP addresses of each of nodes D, B, A and S.
  • the distribution software in node X causes the path information to be stored in the ancestor portion (or ancestor database) 132 of node X's topology database 131 shown in FIG. 13.
  • the ancestor database may include an ancestor list, a list of node X's ancestors' addresses from node X's parent back to the server node.
  • FIG. 13 shows an example block diagram (which example is intended to be illustrative and not restrictive) showing the memory blocks into which the software used in connection with the present invention may partition a user node's memory 130.
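  • As a rough illustration of the memory partition of FIG. 13, the sketch below models a user node's topology database and rating buffers as a single data structure. The class and field names echo the buffer names in the text but are otherwise assumptions, not the actual distribution software.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeMemory:
    """Sketch of the memory blocks of FIG. 13 (field names assumed for illustration)."""
    # topology database 131
    ancestor_list: List[str] = field(default_factory=list)    # ancestor database 132: parent ... server
    descendant_list: List[str] = field(default_factory=list)  # progeny known to this node
    # counters and flags
    connection_attempt_counter: int = 0                        # connection attempt counter 135
    reconfiguration_flag: int = 0                              # reconfiguration flag buffer 136
    # rating buffers
    elapsed_time: float = 0.0                                  # elapsed time buffer 137
    bandwidth_rating: float = 1.0                              # bandwidth rating buffer 138
    performance_rating: float = 1.0                            # performance rating buffer 139
    utility_rating: float = 0.0                                # utility rating buffer 121
    potential_retransmission_rating: float = 0.0               # potential re-transmission rating buffer 122

# Example: node X stores the connection address list D-B-A-S in its ancestor database.
node_x = NodeMemory(ancestor_list=["D", "B", "A", "S"])
```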
  • Node X attempts to contact node D first, the user node most distant from server 11 in the path.
  • node D may have departed from the network, as may have one or more of its ancestors, resulting in a reconfiguration (discussed below) of at least a portion of the tree of which D was a part.
  • FIG. 14 is a flow diagram associated with this example showing the Prospective Child Node's Connection Routine, the routine which a new user node, here node X, will go through in attempting to connect to a distribution chain (or tree) after receiving a connection address list (which node X has stored in the ancestor portion of its topology database) from the server or from a prospective parent node or during a reconfiguration event.
  • node X attempts to contact the first node on the connection address list.
  • the first node, and only node, on the connection address list could be the server itself.
  • node D is the first node on the list.
  • Node X then proceeds to step 142 and determines whether the first node on the connection address list is still on-line and still part of the distribution network (in one example, if no response is received within a predetermined period of time, from the first node on the connection address list, the answer to the query in step 142 will be deemed to be no). If node D is on-line and still part of the distribution network, node X proceeds to step 143 in which node X inquires whether node D has room for node X. This inquiry may need to be made because the distribution network may have gone through a reconfiguration event resulting in node D's not having sufficient capacity to provide a copy of the content data to node X.
  • node X proceeds to step 144 in which it connects (or docks) with node D and begins receiving content data from it. This is depicted in FIG. 7. Note that node X is now one of several level four nodes 15.
  • In step 142, if node D is not on-line (e.g., no response is received from node D within a predetermined period of time) or if node D is on-line but is no longer part of the distribution system (e.g., subsequent to the server's obtaining its topology data the user of the system at node D either caused his or her computer system to go off-line or to leave the distribution system, or there was a line failure causing the computer system to go off line), as depicted in FIG.
  • node X goes to step 145 in which it deletes the first address from the connection address list in node X's ancestor database (and, in one example, and for a purpose which will become clear when discussing reconfiguration events below, sets its reconfiguration flag buffer 136 to the lowest propagation rating).
  • node B in the present example, becomes the first node on the connection address list.
  • node X goes back to step 141 and repeats the routine described above. Note that because of node D's leaving the distribution network, a reconfiguration event was triggered which resulted in node G changing from a fourth level node 15 to a third level node 14.
  • In step 143, if the prospective parent node has no room for the new node (e.g., the capacity of the prospective parent node is fully occupied), the new node goes to step 146, in which it receives a new connection address list from the prospective parent.
  • the prospective parent may then perform a Fully Occupied Parent's Connection Instruction Routine, discussed below in connection with FIG. 18, wherein it creates the new connection address list based on topology data obtained from its progeny. That new list may include the path back to the server through the prospective parent just in case, as discussed above, there are user node departures along the path.
  • When node X gets to step 143, node B will respond that it has no room and node X will proceed to step 146.
  • In step 146, the new connection address list node X receives from node B will be G-B-A-S.
  • Node X then returns to step 141 and repeats the routine from that point on.
  • This time, step 143 will result in node G responding that it has room for node X.
  • Node X will then perform step 144 and be connected to the distribution network through node G as shown in FIG. 9.
  • node X becomes a fourth level node 15.
  • the distribution software may include a Return to Server Subroutine, comprised of steps 151, 152 and 153 as shown in FIG. 15, as part of the Prospective Child Node's Connection Routine.
  • This subroutine reduces the risk that a prospective child node would enter into an endless cycle of fruitless attempts to dock with nodes in a particular tree. If the answer to the query in step 143 is "no," then node X goes to step 151 in which it increments by one the connection attempt counter 135 in node X's memory. Then node X goes to step 152 in which it determines whether the connection attempt limit has been reached.
  • the limit may be preset at any number greater than one and may depend upon what the designer of a particular distribution network determines would be a reasonable amount of time for a node to attempt to make a connection on a particular tree, or a branch of a tree, before that node should be given an opportunity to obtain a completely new connection address list directly from the server. If the connection attempt limit has not been reached, then node X proceeds with step 146 as discussed above. If the connection attempt limit has been reached, then node X goes to step 153, in which it goes back to the server and begins the connection routine again as discussed above in connection with FIG. 6. If docking with a parent node is successful, then after step 144 node X performs step 154 in which the connection attempt counter is set to zero.
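  • The child-side logic of FIGS. 14 and 15 can be summarized in a short sketch. The structure below is an assumed, simplified rendering (the Peer class, the network dictionary and the attempt limit of five are illustrative choices, not the patent's actual software); in particular, step 146 is approximated by a helper that points the child at one of the full parent's children that still has room.

```python
class Peer:
    """Minimal stand-in for a node already in the distribution network (assumed)."""
    def __init__(self, name, max_children=2, online=True):
        self.name = name
        self.max_children = max_children
        self.online = online
        self.children = []            # names of child nodes currently docked

def new_list_from_full_parent(network, parent, current_list):
    """Simplified stand-in for step 146: a fully occupied parent points the child at
    one of its own children that still has room, keeping the path back uptree."""
    for child_name in parent.children:
        child = network.get(child_name)
        if child and len(child.children) < child.max_children:
            return [child_name] + current_list
    return current_list[1:]           # nothing below has room: move uptree instead

def child_connection_routine(network, child_name, connection_list, attempt_limit=5):
    """Sketch of the Prospective Child Node's Connection Routine (FIGS. 14 and 15)."""
    ancestors = list(connection_list)
    attempts = 0
    while ancestors:
        parent = network.get(ancestors[0])              # step 141: contact first node on list
        if parent is None or not parent.online:         # step 142: departed or off-line
            ancestors.pop(0)                            # step 145: delete the first address
            continue
        if len(parent.children) < parent.max_children:  # step 143: does it have room?
            parent.children.append(child_name)          # step 144: dock, begin receiving data
            return parent.name
        attempts += 1                                   # step 151 (Return to Server Subroutine)
        if attempts >= attempt_limit:                   # step 152: attempt limit reached
            return "S"                                  # step 153: go back to the server
        ancestors = new_list_from_full_parent(network, parent, ancestors)   # step 146
    return "S"                                          # all ancestors gone: return to the server

# FIGS. 7-9 example: D has left the network, B is fully occupied, and node X,
# handed the list D-B-A-S, ends up docking with node G.
network = {name: Peer(name) for name in "SABCEFG"}
network["S"].max_children = 8
network["A"].children = ["B", "A2"]
network["B"].children = ["C", "G"]
network["C"].children = ["E", "F"]
print(child_connection_routine(network, "X", ["D", "B", "A", "S"]))   # 'G'
```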
  • In FIG. 16, a flow diagram associated with this example illustrating the Prospective Parent Node's Connection Routine is shown.
  • node B may begin performing the Prospective Parent Node's Connection Routine.
  • In step 161, in response to node X's query, node B determines whether it is part of the distribution system node X seeks to join. If node B were not part of the distribution network, it would respond in the negative to node X's query and node B would be finished with the Prospective Parent Node's Connection Routine.
  • the answer is "yes" and node B proceeds to step 162.
  • node B determines whether it has room for a new node. If the answer were "yes,” node B would proceed to step 163 where it would allow node X to dock with it, and node B would begin transmitting to node X data (e.g., streaming media) originating from the server. In the example illustrated in FIG. 8, the answer is "no,” and node B, acting as an instructing node, goes to step 164 where it performs the Fully Occupied Parent's Connection Instruction Routine (discussed below) and provides the prospective new child node (here node X) with a new connection address list. As noted above, the new connection address list may include the path back to the server through the node B (the prospective parent in this example) in the event that there are user node departures along the path, which departures may include node B.
  • the temporary connection between node B and node X is terminated, and node X is sent on its way.
  • the new connection address list is G-B-A-S.
  • node G performs the Prospective Parent Node's Connection Routine discussed above.
  • node G allows node X to dock with it.
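  • The parent side of that handshake (FIG. 16) is brief: confirm membership in the distribution system, accept the child if a slot is free, and otherwise hand back a new connection address list. A minimal sketch follows, with assumed field names (in_network, children, max_children) and a deliberately simplified stand-in for the Fully Occupied Parent's Connection Instruction Routine.

```python
def prospective_parent_connection_routine(parent, child_name, path_to_server):
    """Sketch of FIG. 16. parent is a dict such as
    {"name": "B", "in_network": True, "children": ["C", "G"], "max_children": 2};
    path_to_server is the parent's own path back to the root, e.g. ["B", "A", "S"]."""
    if not parent["in_network"]:                            # step 161: not part of this system
        return {"accepted": False}
    if len(parent["children"]) < parent["max_children"]:    # step 162: room for a new node?
        parent["children"].append(child_name)               # step 163: dock and relay the stream
        return {"accepted": True}
    # step 164: simplified Fully Occupied Parent's Connection Instruction Routine -
    # redirect the child toward the parent's progeny, keeping the path back to the server.
    new_list = [parent["children"][0]] + path_to_server
    return {"accepted": False, "new_connection_address_list": new_list}

node_b = {"name": "B", "in_network": True, "children": ["C", "G"], "max_children": 2}
print(prospective_parent_connection_routine(node_b, "X", ["B", "A", "S"]))
# {'accepted': False, 'new_connection_address_list': ['C', 'B', 'A', 'S']}
```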
  • Distribution Network Construction: in one example, the length of the distribution chains (i.e., the number of levels of user nodes linked through each other to the root server) is intended to be kept as small as possible.
  • the user nodes would be distributed in this example through the first level until all the direct connection slots to the server were filled. Then as new user nodes sought connection to the distribution network, they would be assigned to connections with the first level nodes 12 until all the slots available on the first level nodes were filled. This procedure would be repeated with respect to each level as more and more new user nodes attempted to connect to the distribution network.
  • the server, acting as an instructing node, would perform a Server's Connection Instruction Routine in which one step is determining whether there is room on the server for a new user node and, if so, the server would instruct the new user node to connect directly to the server. If there were no room on the server, then the server would perform the step of consulting its network topology database and devising a connection address list having the fewest number of connection links to a prospective parent user node. After performing the Server's Connection Instruction Routine, the server would either allow the new user node to dock directly with the server or send the new user node on its way to the first prospective parent user node on the connection address list.
  • In one example, preferring a partially occupied potential parent node in a particular level (i.e., a prospective parent node already having a child but still having an available slot for an additional child node) over unoccupied potential parent nodes on the same level may help to keep the number of interior nodes (i.e., the number of nodes re-transmitting data) to a minimum.
  • In another example, nodes that have zero children may be preferred over nodes that already have one child. While it is true that this may increase the number of repeat nodes, what the inventors have found is that by filling the frontier in an "interlaced" fashion, connecting nodes build up their buffers more quickly (allowing a reduction in the incidence of stream interruptions).
  • FIGS. 12A-12C illustrate, in this example, what happens when a partially occupied parent node is preferred over a childless parent node as a destination address for a new user node (assuming all other factors are equal).
  • FIG. 12A is a schematic diagram associated with this example showing a topology wherein nodes C and D, both third level nodes 14, have remaining capacity for one or more child nodes.
  • Server 11 has the choice of sending new user node X to either node C, as shown in FIG. 12B, or node D, as shown in FIG. 12C, without increasing the length of the longest chain in the distribution network. However, user nodes are free to leave the distribution network at any time. With the topology shown in FIG.
  • the user nodes having the greatest reliable capability should be placed as high up in a distribution chain as possible (i.e., as far uptree as possible) because they would have the ability to support the greatest number of child nodes, grandchild nodes and so on.
  • a user node which has been continuously connected to the distribution network for a long period of time may be considered under this example to be likely more reliable (either because of line conditions or user interest) than a user node which has been continuously connected to the network for a short period of time.
  • Another factor is bandwidth rating, which may be determined, for example, by actual testing of the user node when it first attempts to connect to the server or a parent node (e.g., at initial connection time) or by the nominal bandwidth as determined by the type of connection made by the user node to the Internet.
  • a user node with a 56 Kbits/sec dialup modem connection to the Internet is essentially useless for re-transmission of content data because of its small bandwidth.
  • such a node is assigned a bandwidth rating of zero (0.0).
  • a user node with a cable modem or DSL (e.g., ADSL) connection to the Internet may be given, in this example, a bandwidth rating of one (1.0), because it is a common type of node and has a nominal upstream transmission bandwidth of 128 Kbits/sec, which is large enough to potentially re-transmit two acceptable quality copies of the content data it receives (assuming a 50Kbit/sec content data stream).
  • Such capability fits well into a binary tree topology.
  • a full rate ISDN modem connection nominally has an upstream bandwidth of 56 Kbits/sec and a downstream bandwidth of 56 Kbits/sec, which would potentially support acceptable quality re-transmission of a single copy of the content data stream (assuming a 50Kbit/sec content data stream).
  • a user node with a full rate ISDN modem connection to the Internet may be given, in this example, a bandwidth rating of one-half (0.5), or half the rating of a user node connected to the Internet by a DSL (e.g., ADSL) or cable modem connection.
  • User nodes with T1 or greater links to the Internet should be able to support more (e.g., at least twice as many) streams as DSL (e.g., ADSL) or cable modems, and therefore may be given, in this example, a bandwidth rating of two (2.0).
  • bandwidth ratings greater than 2.0 may be assigned under this example to Internet connections having greater bandwidth than T1 connections.
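  • The example ratings above amount to a small lookup table, sketched below (the function name and connection-type labels are assumptions; the ratings and the 50 Kbit/sec stream assumption come from the text).

```python
def bandwidth_rating(connection_type: str) -> float:
    """Example bandwidth ratings from the text, assuming a 50 Kbit/sec content stream."""
    ratings = {
        "dialup_56k": 0.0,   # too little upstream bandwidth to re-transmit even one copy
        "isdn_full":  0.5,   # ~56 Kbit/sec upstream: one acceptable-quality copy
        "dsl_adsl":   1.0,   # ~128 Kbit/sec upstream: two copies (fits a binary tree)
        "cable":      1.0,
        "t1":         2.0,   # at least twice the streams of a DSL or cable connection
    }
    # connections with greater bandwidth than T1 may be given ratings above 2.0
    return ratings.get(connection_type, 2.0)

print(bandwidth_rating("dsl_adsl"))   # 1.0
```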
  • Other bandwidth networks (e.g., 50 Kbit/sec and 100 Kbit/sec networks) may also be used.
  • the bandwidth needed for a node to be "repeat capable" may depend on the particular network to which it is attached (upon initial connection the connecting node may first test its upstream bandwidth and then report that bandwidth to the network server; the server may determine whether the node is "repeat capable" or not).
  • a node may be deemed "repeat capable” if the node is not firewalled (e.g., port 1138 is open) and the node has sufficient bandwidth to retransmit two copies of the incoming stream.
  • a third factor is performance.
  • a user node's performance rating may, in one example, be zero (0.0) if it is disconnected as a result of a Grandparent's Complaint Response Routine (discussed below in connection with FIG. 31). Otherwise, the user node's performance rating, in this example, may be one (1.0).
  • a user node's utility rating may be determined by multiplying connection time by performance rating by bandwidth rating. That is, in this example:
  • Utility Rating = Connection Time x Performance Rating x Bandwidth Rating.
  • Connection time, bandwidth rating, performance rating, utility rating and potential re-transmission rating may be stored in the user node's elapsed time buffer 137, bandwidth rating buffer 138, performance rating buffer 139, utility rating buffer 121 and potential re-transmission rating buffer 122, respectively.
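  • A minimal sketch of the utility rating computation, using the buffer contents described above (the function and argument names are assumptions):

```python
def utility_rating(connection_time_s: float, performance_rating: float,
                   bandwidth_rating: float) -> float:
    """Utility Rating = Connection Time x Performance Rating x Bandwidth Rating."""
    return connection_time_s * performance_rating * bandwidth_rating

# A DSL node (bandwidth rating 1.0) connected for one hour with a clean record:
print(utility_rating(3600.0, 1.0, 1.0))   # 3600.0
# The same node after being disconnected by a Grandparent's Complaint Response Routine:
print(utility_rating(3600.0, 0.0, 1.0))   # 0.0
```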
  • nodes entering the network may be judged to be either "repeat capable” or “non-repeat capable” (wherein non-repeat capable nodes may be called “bumpable” nodes).
  • Repeat capability may be based on: (1) Upstream bandwidth (e.g., tested at initial connection); and (2) the firewall status, opened or closed, of a node. In one specific example, if a node is either firewalled or has upstream bandwidth less than (approx.) 2.5 times the video streaming rate, that node will be deemed bumpable. All nodes joining the network, whether bumpable or non-bumpable, may be placed as close to the server as possible. However, for the purpose of placement (as described for example in the Universal Connection Address Routine of Figure 20), a repeat capable node can bump a bumpable node when being placed in the network.
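  • The repeat-capable/bumpable test described above can be sketched as follows; the 2.5x threshold and the firewall (port 1138) check come from the text, while the function and parameter names are assumptions.

```python
def is_bumpable(upstream_kbps: float, firewalled: bool, stream_kbps: float = 50.0) -> bool:
    """A node is 'bumpable' (non-repeat-capable) if it is firewalled (e.g., port 1138
    closed) or its tested upstream bandwidth is below roughly 2.5x the streaming rate."""
    return firewalled or upstream_kbps < 2.5 * stream_kbps

print(is_bumpable(128.0, firewalled=False))   # False: repeat capable (128 >= 125)
print(is_bumpable(128.0, firewalled=True))    # True: a firewalled node cannot repeat
print(is_bumpable(56.0, firewalled=False))    # True: not enough upstream bandwidth
```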
  • In FIG. 17, a flow diagram associated with this example illustrating the Server's Connection Instruction Routine is shown.
  • This routine need not necessarily rely on the new user node's utility rating.
  • the server would want to put those user nodes with the highest potential re-transmission rating as close to the server in a distribution chain as possible (i.e., as high uptree as possible) because they have the greatest likelihood of being able to re-transmit one or more copies of content data.
  • a potential re-transmission rating of zero in this example indicates that a user node has no ability to (or little expected reliability in) re-transmitting even one copy of content data (the server in this example would want to put a user node with a zero rating as far as reasonably possible from the server in a distribution chain (i.e., as low downtree as possible)).
  • the server is concerned about whether the potential re-transmission rating is zero (i.e., either one or both of the performance rating and bandwidth rating is zero) or greater than zero (i.e., both the performance rating and the bandwidth rating are greater than zero).
  • In step 171, the server node interrogates the new user node (or connection requesting node) for its bandwidth rating and performance rating. If the new user node is really new to the distribution network, or if it has returned to the server because all of the user node's ancestor nodes have disappeared from the network, the new user node's performance memory will contain, in this example, a performance rating of 1.0 (e.g., the default rating). However, if the new user node has been dispatched to the server for a new connection to the network because the new user node had failed to provide content data to one of its child nodes, then, in one example, its performance memory will contain a performance rating of zero.
  • In step 172, the server determines, in this example, whether the potential re-transmission rating of the connection requesting node is greater than zero (i.e., whether both the bandwidth rating and the performance rating are greater than zero, or, if only the bandwidth rating is considered, whether the bandwidth rating is greater than zero (i.e., the connection requesting node is a high-bandwidth node)). If the answer is "yes," then the server goes to step 173 in which the server determines whether it has room for the new user node. If the answer to the query in step 173 is "yes," then the server goes to step 174 in which it instructs the new user node to connect directly to the server. Then the server goes to step 102 in the Server's Connection Routine (see FIG. 10).
  • If the answer to the query in step 173 is "no" (i.e., the server does not have the capacity to serve the new user node directly), then the server goes to step 175.
  • In step 175, the server, acting as an instructing node, consults its network topology database and devises a connection address list having the fewest number of connection links to a prospective parent node. That is, the server checks its information regarding the nodes in the level closest to the server (i.e., the highest uptree) to determine whether there are any potential parent nodes with space available for the new user node.
  • If there are no potential parent nodes with space available, then the database is checked regarding nodes in the level one link further from the server, and so on until a level is found having at least one potential parent node with space available for the new user node. That is, the server determines which parent node with unused capacity for a child node is closest to the server (i.e., in the highest level, with the first level being the highest), and devises the connection address list from such prospective parent node to the server. The server then goes to step 102 (shown in FIG. 10).
  • the server could skip step 172 and go directly to step 173 (or it could go to step 172 and, if the answer to the query in step 172 is "no" (i.e., either one or both of the bandwidth rating and the performance rating are zero, or, if only the bandwidth rating is considered, whether the bandwidth rating is zero (i.e., the connection requesting node is a low-bandwidth node)), the server could go to step 175).
  • In step 176, the server consults its network topology database and devises a connection address list having the greatest number of connection links to a prospective parent node. That is, the server checks its information regarding the nodes in the level furthest from the server (i.e., the lowest downtree) to determine whether there are any potential parent nodes with space available for the new user node. If there are no potential parent nodes with space available, then the database is checked regarding nodes in the level one link closer to the server, and so on until a level is found having at least one potential parent node with space available for the new user node. In this manner, user nodes having limited or no reliable re-transmission capability may be started off as far from the server as possible and will have a reduced effect on the overall capacity of the distribution network.
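  • Steps 171-176 amount to a level-by-level search over the server's topology database: repeat-capable nodes are steered to the open slot with the fewest links back to the server, zero-rated nodes to the slot with the most. The sketch below is an assumed rendering (the topology dictionary layout and the function name are illustrative, not the patent's data structures).

```python
def server_connection_instruction(topology, server_free_slots,
                                  bandwidth_rating, performance_rating):
    """Sketch of FIG. 17. topology maps a node name to a dict with keys
    'level' (1 = directly below the server), 'free_slots' and 'path_to_server'."""
    retransmission_ok = bandwidth_rating > 0 and performance_rating > 0    # step 172
    if retransmission_ok and server_free_slots > 0:                        # step 173
        return ["S"]                                                       # step 174: dock directly
    candidates = [n for n, info in topology.items() if info["free_slots"] > 0]
    if not candidates:
        return None
    if retransmission_ok:
        best = min(candidates, key=lambda n: topology[n]["level"])   # step 175: fewest links
    else:
        best = max(candidates, key=lambda n: topology[n]["level"])   # step 176: greatest links
    return [best] + topology[best]["path_to_server"]                 # connection address list

# FIG. 6 example: only node D has an open slot, so a repeat-capable node X is
# handed the connection address list D-B-A-S.
topology = {
    "A": {"level": 1, "free_slots": 0, "path_to_server": ["S"]},
    "B": {"level": 2, "free_slots": 0, "path_to_server": ["A", "S"]},
    "C": {"level": 3, "free_slots": 0, "path_to_server": ["B", "A", "S"]},
    "D": {"level": 3, "free_slots": 1, "path_to_server": ["B", "A", "S"]},
}
print(server_connection_instruction(topology, 0, 1.0, 1.0))   # ['D', 'B', 'A', 'S']
```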
  • one or more reconfiguration events may have transpired since the server's topology database was last updated.
  • the first prospective parent node which is actually present on the distribution network for the new user node to contact may not have room for the new user node.
  • the server provides node X with the following connection address list: C-B-A-S.
  • If node C had disappeared from the network between the last update of the server's topology database and node X's attempting to contact node C, then node E, by virtue of a reconfiguration event, would be connected, in this example, to node B as shown in FIG. 12D. Then node X, in performing the Prospective Child Node's Connection Routine discussed in connection with FIGS. 14 and 15, would contact node B. Node B, in the Prospective Parent Node's Connection Routine, discussed in connection with FIG. 16, would have to answer the query of step 162 in the negative and go to step 164, in which it performs the Fully Occupied Parent's Connection Instruction Routine.
  • In FIG. 18, a flow diagram associated with this example illustrating that routine is shown. It is similar to the Server's Connection Instruction Routine. Since the fully occupied parent node has already determined that it has no room for the new user node (or connection requesting user node), the Fully Occupied Parent's Connection Instruction Routine does not need to include a step in which a determination is made regarding whether there is room for the new user node (or connection requesting node). In the Fully Occupied Parent's Connection Instruction Routine the fully occupied parent node, acting as an instructing node, first performs step 181 in which it interrogates the new user node for its bandwidth rating and performance rating.
  • the fully occupied parent node determines whether the potential re-transmission rate of the new user node is greater than zero. (If only the bandwidth rating is considered, then it determines whether the bandwidth rating is greater than zero (i.e., the connection requesting node is a high-bandwidth node).) If the answer is "yes,” then the fully occupied parent node goes to step 183 in which the fully occupied parent node consults its topology database, which contains the latest information available to that node regarding the links from the fully occupied parent node back to the server and regarding the fully occupied parent node's own progeny (i.e., its children, grandchildren etc.,) and devises a new connection address list having the fewest number of connection links to a new prospective parent node.
  • the fully occupied parent node checks its information regarding the nodes in the level closest to the fully occupied parent node (but not closer to the server) to determine whether there are any potential parent nodes with space available for the new user node. If there are no potential parent nodes with space available, then the database is checked regarding nodes in the level one link further from the fully occupied parent node (and further from the server), and so on until a level is found having at least one potential parent node with space available for the new user node.
• the fully occupied parent node determines which new prospective parent node with unused capacity for a child node is closest to the fully occupied parent node, and devises the connection address list from such new prospective parent node through the fully occupied parent node and on to the server.
• the fully occupied parent node then goes to step 184 in which it provides the new user node with the new connection address list and disconnects the new user node from the fully occupied parent.
• If the answer is "no," the fully occupied parent node goes to step 185.
• In step 185 the fully occupied parent node consults its network topology database and devises a connection address list having the greatest number of connection links to a new prospective parent node. That is, the fully occupied parent node checks its information regarding the nodes in the level furthest from it (and farther from the server than it is) to determine whether there are any potential parent nodes with space available for the new user node.
• If there are none, the database is checked regarding nodes in the level one link closer to the fully occupied parent node, and so on until a level is found having at least one potential parent node with space available for the new user node. As discussed above, this helps assure that new user nodes having limited or no reliable re-transmission capability are started off as far from the server as possible.
• The fully occupied parent node devises the connection address list from such new prospective parent node through the fully occupied parent node and on to the server; the fully occupied parent node then goes to step 184, where it performs as discussed above.
• The distribution software could be designed such that a fully occupied parent performs an abbreviated Fully Occupied Parent's Connection Instruction Routine, in which steps 181, 182 and 185 are not performed. That is, it could be presumed that the server has done the major portion of the work needed to determine where the new user node should be placed and that the fully occupied parent user node need only redirect the new user node to the closest available new prospective parent. In such event, only steps 183 and 184 would be performed.
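• By way of illustration only, a minimal sketch of the Fully Occupied Parent's Connection Instruction Routine (steps 181-185) is given below; the dict-based descendant database, the field names and the sample values are assumptions made for this sketch and are not part of the routine as described above:

```python
# Hypothetical sketch of the Fully Occupied Parent's Connection Instruction
# Routine (steps 181-185). Data layout and values are illustrative only.

# Descendant database of fully occupied parent "B": level offset below B -> nodes,
# each recording its path back up to B and how many free child slots it has.
DESCENDANTS = {
    1: [{"path": ["D"], "free_slots": 0}, {"path": ["E"], "free_slots": 1}],
    2: [{"path": ["H", "D"], "free_slots": 2}],
}
ANCESTOR_LIST = ["B", "A", "S"]   # node B back to the server "S"

def connection_instruction(retransmission_rate, descendants, ancestors):
    """Return a new connection address list for a redirected new user node."""
    # Step 182: repeat-capable nodes are kept close (step 183 searches the nearest
    # level first); non-repeat-capable nodes are pushed downtree (step 185 searches
    # the furthest level first).
    order = sorted(descendants) if retransmission_rate > 0 else sorted(descendants, reverse=True)
    for depth in order:
        for node in descendants[depth]:
            if node["free_slots"] > 0:
                # Step 184: the list runs from the chosen prospective parent back
                # through the fully occupied parent and on to the server.
                return node["path"] + ancestors
    return ancestors[-1:]   # nothing available: send the node back to the server

print(connection_instruction(1, DESCENDANTS, ANCESTOR_LIST))   # ['E', 'B', 'A', 'S']
print(connection_instruction(0, DESCENDANTS, ANCESTOR_LIST))   # ['H', 'D', 'B', 'A', 'S']
```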
• In the example discussed above in which node C had disappeared from the network when new user node X had been given, by the server, connection address list C-B-A-S, and in which node B is a fully occupied parent node as shown in FIG. 12D, node B would appear to have the choice of devising either connection address list D-B-A-S or E-B-A-S, regardless of whether the full or abbreviated Fully Occupied Parent's Connection Instruction Routine were performed.
  • the distribution software could have an additional subroutine as part of steps 175, 176, 183 and 185.
  • An example of this subroutine, called the Multi-Node Selection Subroutine is illustrated in FIG. 19.
• In step 191 the server or fully occupied parent node deciding where to send a new user node determines whether any of the potential new parent nodes is partially occupied. As discussed earlier, in one example a partially occupied potential parent node may be preferred over an unoccupied potential parent node. In this case, if any of the potential parent nodes is partially occupied, then the server or fully occupied parent node goes to step 192. In step 192 the partially occupied prospective parent node with the highest utility rating is selected as the new prospective parent node. If there were only a single partially occupied potential parent node, then that node is selected.
• If in step 191 it is determined that there are no partially occupied potential parent nodes, then the server or fully occupied parent node goes to step 193. In step 193 the unoccupied prospective parent node with the highest utility rating is selected as the new prospective parent node.
• As a matter of design choice, the software engineer could have step 193 follow an affirmative response to the query in step 191 and step 192 follow a negative response; in such event, unoccupied prospective parent nodes would be selected ahead of partially occupied prospective parent nodes. (All other things being equal, it may be advantageous to select nodes that have zero children over nodes that already have one child; while it is true that this may increase the number of repeat nodes, what the inventors have found is that by filling the frontier in an "interlaced" fashion, connecting nodes build up their buffers more quickly, allowing a reduction in the incidence of stream interruptions.)
• After whichever of steps 192 and 193 is completed, the server or fully occupied parent node returns to the step from which it entered the Multi-Node Selection Subroutine (i.e., step 175, 176, 183 or 185), and completes that step.
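• The Multi-Node Selection Subroutine of steps 191-193 might be sketched as follows (illustrative only; the candidate records and the utility values are assumptions):

```python
# Hypothetical sketch of the Multi-Node Selection Subroutine (steps 191-193).

def select_prospective_parent(candidates):
    """candidates: prospective parents with space available; each is a dict with
    'addr', 'children' (current child count) and 'utility' (utility rating)."""
    # Step 191: in this example a partially occupied prospective parent is
    # preferred over an unoccupied one (the opposite preference is an equally
    # valid design choice, as noted above).
    partially_occupied = [c for c in candidates if c["children"] > 0]
    pool = partially_occupied if partially_occupied else candidates
    # Steps 192/193: within the chosen pool, take the highest utility rating.
    return max(pool, key=lambda c: c["utility"])["addr"]

# FIG. 12D example: node B chooses between unoccupied nodes D and E; node D has
# been connected longer, so it carries the (hypothetical) higher utility rating.
candidates = [{"addr": "D", "children": 0, "utility": 0.9},
              {"addr": "E", "children": 0, "utility": 0.7}]
print(select_prospective_parent(candidates))   # 'D'
```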
  • node B would perform the Fully Occupied Parent's Connection Instruction Routine. Regardless of the bandwidth and performance ratings of node X, node B would be choosing between nodes D and E in the third level. In step 191 node B would determine that neither D nor E is partially occupied, and therefore node B would go to step 193. Assuming that nodes E and D have equal bandwidth and performance ratings and that node D was connected to the network longer than node E, node D would be selected because it would have the higher utility rating since it was connected to the network longer than node E.
  • Node B would then go to step 194 and then return to the step from which it entered the Multi-Node Selection Subroutine.
• When node B returns to step 183 or 185, it completes that step and moves on to step 184.
  • node B provides new user node X with new connection address list D-B-A-S and node X connects to the distribution network as shown in FIG. 12E.
• As discussed above, it is desirable to keep user nodes having the highest bandwidth capabilities closer to the server (e.g., in order to allow the greatest expansion of the distribution system).
• Under the routines described so far, however, zero-bandwidth-rated nodes may nevertheless appear relatively far uptree (thereby stunting the growth of that chain).
  • the following method may be used in constructing the distribution network both by servers and by prospective parents which are actually completely occupied, either of which may be thought of as an instructing node (that is, software enabling the routines discussed below could be installed on servers and user nodes alike).
  • each child node reports directly to (or is tested by) its parent node with respect to information relating to the child node's bandwidth rating, performance rating, potential re-transmission rating and utility rating.
  • each parent node reports all the information it has obtained regarding its child nodes on to its own parent node (a parent node also reports to each of its child nodes the address list from that parent back to the server, which list forms what may be referred to as the ancestor portion of the topology database - in addition, a parent node reports to each of its child nodes the addresses of their siblings). The reports occur during what may be referred to as a utility rating event.
  • Utility rating events may occur, for example, on a predetermined schedule (e.g., utility rating events may occur as frequently as every few seconds).
  • each node stores in its topology database the topology of the tree (including all its branches) extending below that node, and the server stores in its topology database the topology of all the trees extending below it.
  • This may be referred to as the descendant portion (or descendant database) 133 of the topology database (see FIG. 13).
  • the descendant database of a particular node may include a descendant list, a list of the addresses of all the nodes cascadingly connected below that particular node. Included in the topology database information may be the utility ratings of the nodes below the node in which that particular topology database resides.
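• A minimal sketch of how the ancestor, descendant and sibling portions of a node's topology database might be represented, and how a child's report could be folded in at a utility rating event, is given below (the field names and the ratings are assumptions):

```python
# Hypothetical sketch of a per-node topology database and a utility rating event.
from dataclasses import dataclass, field

@dataclass
class TopologyDatabase:
    ancestor_list: list = field(default_factory=list)   # this node back to the server
    descendant_db: dict = field(default_factory=dict)   # address -> ratings of each descendant
    sibling_list: list = field(default_factory=list)    # addresses of this node's siblings

def utility_rating_event(parent_db, child_addr, child_report):
    """Fold one child's report (its own ratings plus its descendants') into the parent."""
    parent_db.descendant_db[child_addr] = child_report["ratings"]
    parent_db.descendant_db.update(child_report["descendants"])

# A child ("C") reports its own ratings and everything it knows about its subtree.
parent = TopologyDatabase(ancestor_list=["A", "S"])
report = {"ratings": {"bandwidth": 1, "performance": 0.8, "utility": 0.8},
          "descendants": {"E": {"bandwidth": 0, "performance": 0.5, "utility": 0.5}}}
utility_rating_event(parent, "C", report)
print(sorted(parent.descendant_db))   # ['C', 'E']
```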
  • each parent node (including the server), acting as an instructing node, devises two lists of prospective (or recommended) parent nodes.
  • the first list or Primary Recommended Parent List ("PRPL"), stored in the Primary Recommended Parent List buffer 123 (see FIG. 13), lists all the nodes in the descendant portion of that node's topology database which have bandwidth available to support another child node (in one example (which example is intended to be illustrative and not restrictive), in a binary tree system, all nodes in the descendant portion of the topology database having (i) a bandwidth rating of at least one and (ii) less than two child nodes would be listed).
• For example, the PRPL of a second level node would list a third level node with available bandwidth ahead of a fourth level node with available bandwidth even if the fourth level node's utility rating were higher than that of the third level node.
• The second list in this example, or Secondary Recommended Parent List ("SRPL"), stored in the Secondary Recommended Parent List buffer 124 (see FIG. 13), lists all the nodes in the descendant portion of that node's topology database which have the ability to re-transmit content data to child nodes but are fully occupied, with at least one child node that is incapable of re-transmitting content data to another child node (in one example (which example is intended to be illustrative and not restrictive), in a binary tree system, all nodes in the descendant portion of the topology database having (i) a bandwidth rating of at least one and (ii) at least one child node having a bandwidth rating less than one (i.e., being incapable of re-transmitting content data to two child nodes) would be listed).
• In one example, the nodes in the SRPL would be listed with those nodes which are closest to the node in which that particular topology database resides at the top of the list, and those nodes which are in the same level would be ranked such that the node with the highest utility rating would be listed first, the node with the second highest utility rating would be listed second and so on.
• In other words, the SRPL lists those parent nodes having the growth of their branches (i.e., their further progeny) blocked or partially blocked by a low-bandwidth child node; such blocking may lead to unbalanced growth of the distribution system, and a limitation on the total capacity of the system.
• If a node (including a server) has room for another child node or is the parent of a low-bandwidth node, it may be listed on its own PRPL or SRPL.
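• For illustration, the PRPL and SRPL described above might be built from the descendant portion of a node's topology database roughly as follows (binary-tree case; the node records and the utility values are assumptions):

```python
# Hypothetical sketch of PRPL/SRPL construction for a branch-factor-two system.

def build_lists(descendants):
    """descendants: dicts with 'addr', 'level' (links below this node), 'bandwidth',
    'utility' and 'children' (list of the child nodes' bandwidth ratings)."""
    order = lambda n: (n["level"], -n["utility"])        # closest level first, then utility
    prpl = sorted((n for n in descendants
                   if n["bandwidth"] >= 1 and len(n["children"]) < 2), key=order)
    srpl = sorted((n for n in descendants
                   if n["bandwidth"] >= 1 and len(n["children"]) == 2
                   and any(bw < 1 for bw in n["children"])), key=order)
    return [n["addr"] for n in prpl], [n["addr"] for n in srpl]

# A fragment of FIG. 21 as seen from node A (utility values are invented here):
descendants = [
    {"addr": "C", "level": 1, "bandwidth": 1, "utility": 0.9, "children": [0, 1]},  # child F is low-bandwidth
    {"addr": "D", "level": 2, "bandwidth": 1, "utility": 0.8, "children": [0, 1]},  # child H is low-bandwidth
    {"addr": "E", "level": 2, "bandwidth": 1, "utility": 0.6, "children": []},
]
prpl, srpl = build_lists(descendants)
print(prpl, srpl)   # ['E'] ['C', 'D']
```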
• In FIGS. 20A and 20B, an example flow diagram illustrating what may be referred to as the Universal Connection Address Routine is shown.
  • a server or a fully occupied prospective parent node receiving a connection request may be referred to as an "instructing node.”
• When the instructing node performs step 201, it receives a connection request.
• In step 202 it interrogates or tests node N to determine whether it is a low-bandwidth node
• (low-bandwidth may mean, for example, either or both of: (1) a node that is incapable of broadcasting two copies of its incoming stream on to its children; and (2) a node that is incapable of rebroadcasting its stream because it is firewalled).
  • a low-bandwidth node is a node with a bandwidth rating less than one. If node N is a low bandwidth node, the instructing node proceeds to step 203 in which the instructing node determines whether there are any available prospective parent nodes which are not fully occupied. Sometimes the distribution network may be fully occupied.
  • the instructing node's PRPL will be empty. If it is empty, the response to the query in step 203 would be yes. Then the instructing node goes to step 209 in which the new node is sent back to the server to start the connection process from the beginning. If the response to the query in step 203 is no, then the instructing node goes to step 204 in which it selects a prospective (or recommended) parent node for node N. The instructing node then moves on to step 205 in which it consults its topology database and devises a connection address list from the recommended parent node back to the server, and provides such connection address list to node N. (If the instructing node is a user node, then the connection address list leads back to the server through the instructing node.) At this point, node N performs the Prospective Child Node's Connection Routine discussed above in connection with FIG. 14.
  • the instructing node inserts node N into its topology database as a child of the recommended parent node. It does this because other new nodes may seek connection instructions prior to the next utility rating event (i.e., before reports providing updated information regarding the topology of the distribution network are received by the instructing node), and such new nodes should not be sent to a prospective parent which the instructing node could know is fully occupied.
  • the instructing node then goes to step 206 in which it checks its topology database to determine whether the recommended parent, with node N presumptively docked to it as a child node, is fully occupied (in the example here of a binary tree network, whether it has two child nodes).
• If the answer to the query in step 206 is yes, the instructing node goes to step 207, in which it removes the recommended parent from the PRPL and inserts it into the SRPL; the instructing node has then finished the routine.
• If the answer to the query in step 202 is no (e.g., node N is not a low-bandwidth node; in the binary tree network example it is a high-bandwidth node capable of re-transmitting content data to two child nodes), the instructing node moves on to step 208. There it determines whether both the PRPL and SRPL are empty (which may occur under certain circumstances, such as, for example, when the number of levels in a distribution system is capped and at least all the nodes on all but the last level are fully occupied with high-bandwidth nodes). If so, the instructing node goes to step 209 in which the new node is sent back to the server to start the connection process from the beginning.
• Otherwise, the instructing node goes to step 210 in which it selects a prospective (or recommended) parent node for node N from the PRPL and an alternate recommended parent node from the SRPL (if either the PRPL or SRPL is empty, the instructing node will make a "nil" selection from that list - the instructing node knows from step 208 that at least one of the lists will not be empty).
  • the instructing node then goes to step 211 in which it determines whether the alternate recommended parent node is closer to the server (i.e., higher uptree) than the recommended parent node derived from the PRPL. If the alternate recommended parent node is on the same level as, or on a lower level than the recommended parent node derived from the PRPL (or if the selection from the SRPL is nil), then the answer to the query in step 211 is no.
  • the instructing node goes to step 212 in which it consults its topology database and devises a connection address list from the recommended parent node back to the server, and provides such connection address list to node N (if the instructing node is a user node, then the connection address list leads back to the server through the instructing node).
  • node N performs the Prospective Child Node's Connection Routine discussed above in connection with FIG. 14, and in step 212, the instructing node inserts node N into its topology database as a child of the recommended parent node.
• Then the instructing node moves to step 213 in which it adds node N to an appropriate position on the PRPL. It does this because, as a result of step 202, it knows that node N is capable of re-transmitting content data to its own child nodes.
  • the instructing node then goes to step 214 in which it checks its topology database to determine whether the recommended parent, with node N presumptively docked to it as a child node, is fully occupied (in the example here of a binary tree network, whether it has two child nodes). If the answer is no, then the instructing node has finished this routine. If the answer is yes, then the instructing node goes to step 215 in which it removes the recommended parent from the PRPL because it is now deemed to not be an available prospective parent node.
  • the instructing node goes to step 216 in which it consults its topology database to determine whether any of the recommended parent node's children is a low-bandwidth node (in this example, knowing that node N is not a low-bandwidth node and knowing that the recommended parent node has two child nodes, the question is whether the child node other than node N is a low-bandwidth node.) If the answer is no (i.e., all the recommended parent node's children are high-bandwidth nodes), then the instructing node has finished the routine.
• If the answer is yes, the instructing node moves on to step 217 in which it adds the recommended parent to the SRPL.
• If the answer to the query in step 211 is yes (i.e., the alternate recommended parent (selected from the SRPL) is: (i) on a higher level than the recommended parent (selected from the PRPL) or (ii) the selection from the PRPL is nil), then the instructing node moves to step 218. In that step the instructing node consults its topology database to determine: (i) which of the alternate recommended parent node's child nodes is a low-bandwidth node or (ii) if they both are low-bandwidth nodes, which of the child nodes has been connected to the system a shorter period of time.
  • the instructing node may send a disconnect signal to that child node with instructions to return to the server to start the connection process from the beginning (as a new user node (or connection requesting user node)).
• In another example, the bumped node may climb its path to its grandparent, which gives it a new connection path, rather than returning to the root server (doing so typically means that the incoming node that bumped the non-repeat-capable node ends up being that node's new parent).
  • the instructing node moves on to step 219 in which it consults its topology database and devises a connection address list from the alternate recommended parent node back to the server, and provides such connection address list to node N (if the instructing node is a user node, then the connection address list leads back to the server through the instructing node).
  • node N performs the Prospective Child Node's Connection Routine discussed above in connection with FIG. 14, and in step 219, the instructing node inserts node N into its topology database as a child of the alternate recommended parent node.
  • the instructing node moves to step 220 in which it adds node N to an appropriate position on the PRPL. It does this because, as a result of step 202, it knows that the node N is capable of re-transmitting content data to its own child nodes.
  • the instructing node then goes to step 221 in which it checks its topology database to determine whether all the child nodes of the alternate recommended parent are high- bandwidth nodes. If the answer is no, then the instructing node has finished this routine. If the answer is yes, then the instructing node goes to step 222 in which it removes the alternate recommended parent from the SRPL because it is now deemed to not be an available alternative prospective parent node since the growth of its progeny line is not even partially blocked by one of its own children. At this point the instructing node has finished the routine.
• Using this routine, the distribution network will be built with each new node assigned to the shortest tree (or chain), that is, the tree with the fewest number of links between it and the server.
• Low-bandwidth nodes, which would tend to block the balanced growth of the distribution network, would be displaced by high-bandwidth nodes and moved to the edges of the network, where they would have reduced effect on the growth of the network.
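• A condensed, illustrative sketch of the Universal Connection Address Routine of FIGS. 20A and 20B appears below; the dict-based node records and list handling are assumptions, and intermediate hops in the connection address lists are omitted for brevity:

```python
# Hypothetical, condensed sketch of the Universal Connection Address Routine
# (steps 201-222) for a binary-tree system. Intermediate hops between the chosen
# prospective parent and the instructing node are omitted to keep the sketch short.

def universal_connection(state, new_node_is_low_bw, new_addr="N"):
    """state: dict with 'prpl', 'srpl' (lists of node records) and 'ancestors'
    (the instructing node back to the server). A node record has 'addr', 'level'
    and 'children' (list of (child_addr, child_is_low_bandwidth) pairs)."""
    prpl, srpl = state["prpl"], state["srpl"]

    if new_node_is_low_bw:                                    # step 202 -> steps 203-207
        if not prpl:
            return "RETURN_TO_SERVER"                         # step 209
        parent = prpl[0]                                      # step 204
        parent["children"].append((new_addr, True))           # step 205 (presumptive dock)
        if len(parent["children"]) == 2:                      # step 206
            prpl.remove(parent)
            srpl.append(parent)                               # step 207
        return [parent["addr"]] + state["ancestors"]

    if not prpl and not srpl:                                 # step 208
        return "RETURN_TO_SERVER"                             # step 209
    recommended = prpl[0] if prpl else None                   # step 210
    alternate = srpl[0] if srpl else None

    if alternate and (recommended is None or alternate["level"] < recommended["level"]):
        # Step 211 is answered "yes": bump the alternate parent's low-bandwidth child.
        victim = next(c for c in alternate["children"] if c[1])     # step 218
        alternate["children"].remove(victim)
        alternate["children"].append((new_addr, False))             # step 219
        prpl.append({"addr": new_addr, "level": alternate["level"] + 1, "children": []})  # step 220
        if not any(is_low for _, is_low in alternate["children"]):
            srpl.remove(alternate)                                   # steps 221-222
        return [alternate["addr"]] + state["ancestors"]

    # Step 211 is answered "no": use the recommended parent from the PRPL.
    recommended["children"].append((new_addr, False))                # step 212
    prpl.append({"addr": new_addr, "level": recommended["level"] + 1, "children": []})    # step 213
    if len(recommended["children"]) == 2:                            # step 214
        prpl.remove(recommended)                                     # step 215
        if any(is_low for _, is_low in recommended["children"]):     # step 216
            srpl.append(recommended)                                 # step 217
    return [recommended["addr"]] + state["ancestors"]

# FIG. 21 example, seen from instructing node A: E is on the PRPL, C is on the SRPL.
state = {"ancestors": ["A", "S"],
         "prpl": [{"addr": "E", "level": 2, "children": []}],
         "srpl": [{"addr": "C", "level": 1, "children": [("F", True), ("G", False)]}]}
print(universal_connection(state, new_node_is_low_bw=False))   # ['C', 'A', 'S']
```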
• FIG. 21 may be used as an example to illustrate how the Universal Connection Address Routine works. It shows primary server 11 with node A as first level node 12, its child nodes B and C as second level nodes 13, their child nodes D, E, F and G as third level nodes 14, node D and G's child nodes H, I, J and K as fourth level nodes 15, and node I's child nodes L and M as fifth level nodes 17. Assume for this example that all available docking links directly to the server 11 are occupied by high-bandwidth nodes, and that all trees are at full capacity other than that rooted to the server 11 through first level node A. The utility rating for each node is set forth in FIG. 21 under its letter designation.
• Low-bandwidth nodes in this example (i.e., those nodes having a bandwidth rating less than one) are shown with a bandwidth rating of "0," which indicates in this example that such nodes will not have any child nodes assigned to them.
  • Nodes A, B, C, D, E, G, I, J and K are high- bandwidth nodes (or repeater nodes (i.e., they are capable of re-transmitting the content data they receive from their respective parent nodes to their respective child nodes, if any)).
  • Nodes F, H, L, and M will not appear on any list in this example since they are low- bandwidth nodes.
  • Nodes B, C, D, G, I and A itself will not appear on the PRPL in this example since these nodes are fully occupied.
  • nodes C, D, and I will appear on the SRPL in this example because they each have at least one low bandwidth child node.
  • the PRPL would be as follows in this example:
  • the SRPL in this example would be as follows:
• If node N is a low-bandwidth node, node A will give, in this example, node N the following as its new connection address list: E-B-A-S.
• Since node E would not have two child nodes, it would remain on the PRPL.
• If, instead, node N is a high-bandwidth node, node A will compare (step 211 of FIG. 20B) the first node on the PRPL (the recommended parent node) with the first node on the SRPL (the alternate recommended parent node).
• Node C, the first node on the SRPL, is a higher level node than node E, the first node on the PRPL. So, node A will send a disconnect signal (step 218 of FIG. 20B) to node F, node C's low-bandwidth child node. Then it will provide node N with the following new connection address list and add node N to the PRPL (step 219 of FIG. 20B): C-A-S.
• Since node C would now have two high-bandwidth child nodes (nodes N and G), node C would be removed from the SRPL (step 222 of FIG. 20B).
• When node F returns to the server 11 (or, in another example, goes to Node A) for a new connection, the server (or Node A) will also use the Universal Connection Address Routine (Node F may go to Node A rather than the server because (in one example) when a node is "kicked" it may climb its internal connection path rather than jumping directly to the root server). Since node F is a low-bandwidth node, the server (or Node A) will give node F the following as its new connection address list:
  • FIG. 22 illustrates the new topology. As can be seen, absent intervening events, low- bandwidth node F will end up moving down from the third level to the fourth level and the bandwidth capacity of the third level will increase from six child nodes to eight (its maximum in this binary tree example). The potential bandwidth capacity of the fourth level would also increase, from ten child nodes to twelve.
• Turning now to a Distribution Network Reconfiguration example (which example is intended to be illustrative and not restrictive), it is noted that under this example the user nodes are free to exit the distribution network at will. And, of course, user nodes may exit the distribution network due to equipment failures between the user node and the communication network (the Internet in the examples discussed herein). This was first discussed in connection with FIGS. 10A-E.
  • the present invention may handle the reconfiguration required by a user node departure by having certain user nodes in the same tree as the departing node perform a Propagation Routine.
  • the results of the propagation routine of this example are illustrated in FIGS. 23 and 24.
  • There a tree rooted to the server through first level user node P is illustrated.
  • Node P has two child nodes, second level nodes Q and X.
  • Through node X P has two grandchild nodes, third level nodes A and B.
• Based on the relative utility ratings of nodes Q and X, P has sent signals to its children instructing them to set the color of the flag represented by the information in their respective reconfiguration flag buffers to "green" and "red," respectively.
  • the use of colors as designators is merely discretionary.
  • the number of grades of propagation ratings assigned by a parent node may be equal to the number of children each parent node has.
• Thus, in a system with a branch factor of "n," the maximum number of grades of propagation ratings may be "n." Since the distribution network in the examples discussed herein is a binary tree distribution network, a parent node will be required to assign at most up to two grades of propagation ratings.
  • node P during the most recent utility rating measurement event, discerned that node Q has a higher utility rating than node X, and therefore P has assigned node Q a green rating, represented by the solid-lined circle surrounding the letter Q in FIG. 23.
  • P has assigned node X a red rating, represented by the dashed-lined circle surrounding the letter X in FIG. 23.
  • node X has assigned green and red ratings to third level nodes A and B, respectively.
• When a parent node assigns a propagation rating to a child node, it may also provide to such child node the address of the child node's sibling, if there is any.
  • a child node may store information about its sibling (or siblings) in the sibling portion (or sibling database) 134 of its topology database (see FIG. 13).
  • the sibling database includes a sibling list, a list of the addresses of a node's siblings (the data relating to the siblings' addresses may also contain information regarding the propagation ratings of the siblings (e.g., in the event that the distribution system has a branch factor greater than two)).
  • nodes Q and X know that they are each other's siblings and nodes A and B know that they are each other's siblings.
• In another example, nodes do not store information about their siblings.
  • a node generally does not know who its sibling is. Instead, cross connections between a red node and its green sibling are implemented via a "priority join" message sent to a red child that includes the IP address of the green sibling as the first entry in the connection path list.
• Upon receiving an upgrade propagation rating signal from node P (sent, in this example, when node P's other child node X departs the network), node P's remaining child node Q would set its reconfiguration flag buffer 136 (see FIG. 13) to the next highest available propagation rating grade. Since in the example illustrated in FIG. 23, node Q's reconfiguration flag buffer already is set for the highest propagation rating grade (here green), node Q could either do nothing or reset its propagation rating in response to the upgrade propagation rating signal. The result would be the same: node Q's propagation rating would remain green. If node Q's propagation rating were red, then it would set its propagation rating to green in response to the upgrade propagation rating signal.
  • Node Q would do nothing else in response to the upgrade propagation rating signal (note that as a matter of design choice, the software engineer could have the parent of the departed node (here node P is the parent of departed node X) send no upgrade propagation rating signal to its remaining child node (here node Q) if its remaining child node already is at the highest propagation rating).
• The recipients of the propagation signal would respond thereto as follows. First they would check their own respective propagation ratings. If the propagation signal's recipient has the highest propagation rating grade (here green), it would: reset its propagation rating to the lowest rating grade (here red); re-transmit the propagation signal to its own child nodes (if there are any); disconnect the child node which did not have the highest propagation rating prior to its receipt of the propagation signal, if the propagation signal's recipient with the green rating has more than one child node; allow its sibling to dock with it as a child node for the purpose of transmitting content data to that child node; and dock (or remain docked), for purposes of receiving content data, with the node sending the propagation signal to it (in systems having a branch factor of more than two, the propagation signal recipient whose propagation rating had been the highest would disconnect its child nodes which did not have the highest propagation rating prior to the receipt of the propagation signal just sent to it).
• If the propagation signal's recipient has other than the highest propagation rating (i.e., just prior to the receipt of the propagation signal), it would upgrade its propagation rating to the next highest rating grade; dock with its sibling which had the highest rating grade; and begin receiving content data from it. A recipient with other than the highest propagation rating does not re-transmit the propagation signal to its own child nodes.
  • nodes A and B receive the propagation signal from node P. Since node A has the highest propagation rating grade, here green, it: (i) sets its reconfiguration flag buffer 136 (see FIG. 13) so that it has the lowest propagation rating grade, here red; and (ii) docks with node P (becoming a second level node 13) to begin receiving content data from node P. Node B changes its propagation rating from red to the next higher propagation rating (and since this example is a binary tree (or branch factor two) system, the next higher rating is the highest, green) and docks with node A to receive content data. The resulting topology is shown in FIG. 24. Note that node B remains a third level node 14.
  • FIGS. 25 and 26 illustrate what happens in this example when the departing node is green.
  • node X is the departing node and it is green.
• When node P sends the upgrade propagation rating signal to node Q, node Q changes its propagation rating from red to green.
  • the reconfiguration event proceeds essentially as described in the paragraph immediately above, and results in the topology shown in FIG. 26 (which is the same as the topology in FIG. 24).
  • FIG. 27 is a flow diagram associated with an example showing the Child Node's Propagation Routine, the routine which is followed by a child node upon receiving a propagation signal.
• This example is provided for illustration only, and is not restrictive, and other examples are of course possible (e.g., the color of a node may be assigned by its parent such that nodes assign and know the colors of their children but not of themselves).
  • the child node first performs step 271 wherein it determines whether it has received such a signal. If it has not, then it does nothing. If it has received a propagation signal, it proceeds to step 272 wherein it checks its propagation rating grade in reconfiguration flag buffer 136 (see FIG. 13) . If its propagation rating is at the highest grade (i.e., it is a "green" node in this example), then it proceeds to step 273 where it sets its reconfiguration flag buffer to the lowest propagation rating grade. It then proceeds to step 274 in which it re-transmits the propagation signal to its own child nodes, and which results in the undocking of all its child nodes except for the one with the highest propagation rating.
• The child node (i.e., the child node whose propagation rating was at the highest grade) then performs step 275 in which it determines whether it is already docked with the node which sent the propagation signal. If it is not (i.e., it received the propagation signal from its grandparent), then it proceeds to step 276. In that step it: (i) devises a new connection address list, which is the ancestor portion of its topology database with the current nominal parent node removed, resulting in the grandparent node becoming the first node on the connection address list; and (ii) then performs the Prospective Child Node's Connection Routine (i.e., it goes to step 141 discussed above in connection with FIGS. 14 and 15). The Prospective Child Node's Connection Routine is performed because some likelihood exists that even the grandparent node may have departed from the distribution system between the moment it sent out the propagation signal and the moment that its highest rated grandchild from its missing child attempted to dock with it.
• If the answer to the query in step 275 is in the affirmative (i.e., the child node is receiving a propagation signal which has been re-transmitted by its parent), then the child node does nothing more (or as illustrated in FIG. 27, it performs step 277 which is to remain docked to its parent node).
• If the answer to the query in step 272 is in the negative (i.e., it does not have the highest propagation rating (it is a "red" node in this example)), then it proceeds to step 278 in which it: (i) sets its reconfiguration flag buffer so that its propagation rating is the next higher grade (in this binary tree system the rating is upgraded from red to green); (ii) undocks from the parent with which it was docked before receiving the propagation signal (if it was docked with a parent before receiving such signal); and (iii) either (a) waits a predetermined period of time during which the node's sibling which was green prior to their receipt of the propagation signal should have re-transmitted the propagation signal to its own child nodes (thereby causing any red child nodes to undock from it) or (b) confirms that the node's sibling which was green prior to their receipt of the propagation signal actually did re-transmit the propagation signal to its child nodes (if it has any).
• The child node then performs step 279 in which it: (i) devises a new connection address list, which is the ancestor portion of its topology database with its sibling node having the highest propagation rating placed in front of the previous parent as the first node on the connection address list; and (ii) then performs the Prospective Child Node's Connection Routine (i.e., it goes to step 141 discussed above in connection with FIGS. 14 and 15).
  • the Prospective Child Node's Connection Routine is performed because some likelihood exists that the sibling may have departed from the distribution system between the moment that the propagation signal had been sent to the child node and the moment that the child node attempted to dock with its sibling.
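• The Child Node's Propagation Routine of FIG. 27 might be sketched as follows for a binary-tree system (illustrative only; the Node class, its fields and attempt_connection() stand in for the actual distribution software):

```python
# Hypothetical sketch of the Child Node's Propagation Routine (steps 271-279).
from dataclasses import dataclass, field
from typing import Optional

GREEN, RED = "green", "red"

@dataclass
class Node:
    addr: str
    rating: str = GREEN
    parent: Optional["Node"] = None
    children: list = field(default_factory=list)
    ancestor_list: list = field(default_factory=list)   # parent, grandparent, ..., "S"
    sibling_list: list = field(default_factory=list)    # sibling addresses

    def attempt_connection(self, address_list):
        # Stand-in for the Prospective Child Node's Connection Routine (FIG. 14).
        print(f"{self.addr} tries to dock using {address_list}")

def child_propagation_routine(node, signal_sender):
    """React to a propagation signal received from 'signal_sender' (steps 271-279)."""
    if node.rating == GREEN:                          # step 272
        node.rating = RED                             # step 273
        for child in node.children:                   # step 274: cascade downtree
            child_propagation_routine(child, node)
        if node.parent is signal_sender:              # step 275
            return                                    # step 277: remain docked
        # Step 276: drop the missing parent so the grandparent heads the list.
        node.attempt_connection(node.ancestor_list[1:])
    else:                                             # a "red" node: steps 278-279
        node.rating = GREEN
        sibling = node.sibling_list[0]                # its formerly green sibling
        node.attempt_connection([sibling] + node.ancestor_list)

# FIG. 23 example: node X departs; its children A (green) and B (red) receive the
# propagation signal from grandparent P.
p = Node("P", ancestor_list=["S"])
a = Node("A", rating=GREEN, ancestor_list=["X", "P", "S"], sibling_list=["B"])
b = Node("B", rating=RED, ancestor_list=["X", "P", "S"], sibling_list=["A"])
child_propagation_routine(a, p)   # A tries to dock using ['P', 'S']
child_propagation_routine(b, p)   # B tries to dock using ['A', 'X', 'P', 'S']
```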
  • part (iii) of step 278 may be handled using the "priority join” mechanism. More particularly, in this example, a priority join from a node X cannot be refused by a target node T - it must accept (dock) the incoming node. If the target node T already had two children G & R, the target node T instructs its red child R to connect (via "priority join") to the green node G. The node T then disconnects from its red child R immediately prior to accepting the incoming node X. The bumped node R then sends a priority join to its own green sibling A as the target. These actions cause the reconfiguration events to cascade until the edge of the tree is reached (of note, the algorithm described in this paragraph may have essentially the same effect as illustrated in Figures 28 and 29 - under certain circumstances the algorithm described in this paragraph may be somewhat more reliable).
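• A small sketch of the cascading "priority join" mechanism just described is shown below (the Node class and slot names are assumptions; the "green"/"red" slots mirror the propagation ratings used above):

```python
# Hypothetical sketch of the cascading "priority join" mechanism.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    addr: str
    green: Optional["Node"] = None   # child with the higher propagation rating
    red: Optional["Node"] = None     # child with the lower propagation rating

def priority_join(incoming, target):
    """Dock 'incoming' under 'target'; a priority join may not be refused, so a full
    target first bumps its red child, which priority-joins its own green sibling."""
    if target.green is None:                 # room available: dock as the (only) child
        target.green = incoming
        return
    if target.red is not None:               # target is full: bump the red child first
        bumped = target.red
        target.red = None
        priority_join(bumped, target.green)  # the bumped child joins its green sibling
    target.red = incoming                    # the incoming node takes the freed slot

# Demo: T is full with children G and R; X priority-joins T, so R cascades onto G.
g, r = Node("G"), Node("R")
t = Node("T", green=g, red=r)
priority_join(Node("X"), t)
print(t.green.addr, t.red.addr, g.green.addr)   # G X R
```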
• FIG. 28 illustrates an example distribution system comprising node P as a first level node 12; nodes Q and X as second level nodes 13; nodes A and B as third level nodes 14; nodes C, D, E and F as fourth level nodes 15; nodes G and H as fifth level nodes 17; nodes I and J as sixth level nodes 20; nodes K and L as seventh level nodes 21; and nodes M and N as eighth level nodes 22.
• Nodes Q, B, C, E, H, I, L and M have red propagation ratings, as symbolized by the dashed circles, and the remaining nodes have green propagation ratings as symbolized by the solid circles.
  • node P sends out an upgrade propagation rating signal to node Q and a propagation signal to the children of node P's departed child node X (i.e., nodes A and B).
• In another example, node X may send out messages to its parent and children immediately prior to its departure, rather than the parent of node X initiating these messages as described above.
  • node Q changes its propagation rating to green.
  • nodes A and B begin the Child Node's Propagation Routine. With respect to node A, it answers the query of step 271 in the affirmative. Since its rating is green, it also answers the query of step 272 in the affirmative.
• In step 273, node A changes the setting of its reconfiguration flag buffer to show a red propagation rating.
• In step 274 it re-transmits the propagation signal to its child nodes C and D. It answers the query of step 275 in the negative because it is not docked, for purposes of receiving content data, with node P.
  • Node A then goes to step 276 wherein it consults the ancestor portion of its topology database and creates a new connection address list starting with its grandparent.
  • the new connection address list is P-S.
  • node A performs the Prospective Child Node's Connection Routine, starting with step 141. (See FIG. 14.) Assuming that no other intervening events have occurred, node A will successfully dock with node P.
  • Node B answers the query of step 271 in the affirmative and, because its propagation rating was red when it received the propagation signal, it answers the query of step 272 in the negative. From there it goes to step 278 in which it changes its propagation rating to green. Then in step 279 node B consults the ancestor portion of its topology database and creates a new connection address list with its sibling, node A, placed in front of node X as the first address on the new connection address list. The new connection address list is A-X-P-S. Then node B performs the Prospective Child Node's Connection Routine, starting with step 141. Assuming that no other intervening events have occurred, node B will successfully dock with node A (if node A has disappeared, node B would attempt to dock with node X, but since it is not there, it would move on to node P.)
• Node A, which had the higher propagation rating of the two child nodes of departed node X, moves up one level to become a second level node 13. Since it is a "new" entry to the second level, its initial utility rating and its propagation rating will be lower than that of node Q. As time goes by, at subsequent utility rating measurement events node A's utility rating (and hence its propagation rating) may become higher than that of node Q.
• Node B, which had the lower propagation rating of the two child nodes of departed node X, does not re-transmit the propagation signal to its child nodes (nodes E and F), and they will follow B wherever it goes in the distribution system.
• Thus, node B becomes the child of node A while remaining a third level node 14. At least initially, it will be node A's child with the higher utility rating and propagation rating.
• When node A re-transmits the propagation signal, its child nodes C and D (which were fourth level nodes 15 in FIG. 28 when node X was still in the system) receive the propagation signal. Since node C has a red propagation rating, it, like node B, will remain in its level, change its propagation rating to green and dock with its sibling, with the result being that node D becomes its parent.
• Since node D had a green propagation rating when it received the propagation signal, it answers the queries of steps 271 and 272 in the affirmative and changes its propagation rating to red in step 273. It answers the query of step 275 in the affirmative, and remains docked with node A. As a result, node D moves up a level and becomes a third level node 14 (with, at least until the next utility rating event, a lower utility rating and propagation rating than its new sibling, node B).
  • This process proceeds down the tree, with each child of a node which moves up a level doing one of the following under this example: (i) if it is a node having a green propagation rating, it remains docked with its parent, thereby itself moving up a level, changes its propagation rating to red and re-transmits the propagation signal to its child nodes; or (ii) if it is a node having a red propagation rating, it docks with the node which was its sibling, thereby staying in the same level, and changes its propagation rating to green.
  • reconfiguration using the Child Node's Propagation Routine of this example when a node departs the system does not necessarily result, in the long run, in significantly more inter-node reconnection events than any other reconfiguration method (and may result in fewer reconnection events).
  • reconfiguration using the Child Node's Propagation Routine of this example helps assure that the more reliable nodes are promoted up the distribution network even if many reconfiguration events occur close in time to each other in a particular tree.
  • a propagation routine may be set forth as follows (wherein node promotion during the network reconfiguration event is precipitated by node X's departure):
  • each node's topology database may include information regarding that node's ancestors and descendants.
• When a child node is no longer receiving satisfactory service from its parent node, the child may, in one example, send a "complaint" message to its grandparent.
  • the grandparent may check whether the parent is still there. If it is not, then the grandparent may send a propagation signal to the child nodes of the missing parent.
• If the parent is still there, the grandparent node may nevertheless send a disconnect signal to the parent node (e.g., sending it back to the server to begin the connection process again). This may occur, for example, when one of the following two conditions exists: (i) the child node is the only child node of the parent, or (ii) the child and its sibling(s) are complaining to the grandparent.
  • the grandparent would also send a propagation signal to the child nodes of the disconnected parent, and a reconfiguration event would occur.
• Otherwise, the grandparent may assume that the problem is with the complaining child node.
• In that case, the grandparent may send a disconnect signal to the complaining child node (e.g., sending it back to the server to begin the connection process again).
• If the complaining child node had its own child nodes, they would contact the departed child node's parent to complain, starting a reconfiguration event (in another example (which example is intended to be illustrative and not restrictive), the grandparent would send out signals to the malfunctioning parent (instructing it to depart) and to the children (instructing the green child to climb its path to its grandparent and instructing the red child to cross connect to the green child)).
• Reference is now made to FIG. 30 (which depicts a tree in a binary system having a primary server 11, in which node A, a first level node 12, has two child nodes B and B' (which are second level nodes 13); node B has two child nodes C and C' (which are third level nodes 14); node C has two child nodes D and D' (which are fourth level nodes 15); and node D has two child nodes E and E' (which are fifth level nodes 17)) and to FIG. 31 (which is a flow diagram showing an example Grandparent's Complaint Response Routine).
• Who does node C complain to in this example when node C no longer gets satisfactory service from node B? Node C complains to its grandparent, node A. If node C does not hear back from node A within a predefined amount of time (e.g., if node A has left the network), node C will then exit the network and immediately reconnect as a new node by going to the primary server S for assignment.
• Node A will choose to either remove its child node B or its grandchild node C. A will make this determination based on whether node C alone, or both node C and its sibling node C', are experiencing problems with node B, together with the knowledge of whether node A is continuing to receive satisfactory service reports from node B.
• If node A is continuing to get satisfactory service reports from node B and there is no indication that node C' (the sibling of node C) is experiencing problems with node B, then node A will assign the "fault" to node C and issue a disconnect order for its removal. At this point the green child of node C (i.e., node D or D') will move up a level, connecting to parent B, and the red child of node C (the other of node D or D') will connect as a child of its former sibling. The reconfiguration event will then propagate as discussed above.
• If, on the other hand, node A is not getting satisfactory service reports in this example from node B and/or a complaint from node C's sibling arrives within a 'window' of node C's complaint, then node A will assign the "fault" to node B and issue a disconnect order for its removal. At this point the green child of node B (i.e., node C or C') will move up a level, connecting to grandparent node A, and the red child of node B (the other of node C or C') will connect as a child of its former sibling. The reconfiguration event will then propagate as discussed above.
• Consider next the case in which node C is the only child of node B.
• In one example, node B may be disconnected by node A based solely on the recommendation of node C.
• In another example, the system will not disconnect a node with one child based solely on the complaint of that child (in this case there may be insufficient data to accurately assign blame - disconnecting the node being complained against and promoting the complainer may have the potential to promote a node that is experiencing problems).
  • node A receives a complaint from node C about node B in step 311.
• Node A then goes to step 312 in which it checks its topology database to determine whether node C is the only child of node B. If the answer is no (as shown in FIG. 30), then node A goes to step 313 in which node A determines whether there is a communication problem between it and node B. If the answer is no, then node A proceeds to step 314 in which it determines whether it has received a similar complaint about node B from node C' within a predetermined period of time of node A's having received the complaint from node C.
• If the answer to the query in step 314 is no, then node A goes to step 315 in which it sends a disconnect signal to node C.
• At this point node A has completed the Grandparent's Complaint Response Routine (in one example a propagation signal does not necessarily have to be sent by node A to node C's child nodes - if any exist, they will complain to node B which, upon discerning that node C is no longer in the distribution network, will send a propagation signal to such complaining nodes).
• If the answer to the query in step 313 or step 314 is yes, then node A proceeds to step 316 in which it disconnects node B.
• Node A then proceeds to step 317 in which it sends a propagation signal to its grandchild nodes (here nodes C and C') and a reconfiguration event will occur as described above.
  • the method of this example described above takes a "2 out of 3" approach on the question of node removal.
• Many interior nodes in a binary tree distribution network constructed pursuant to the present invention will have direct connections to three other nodes: their parent, their green child, and their red child.
  • the connections between a node and its three "neighbor" nodes can be thought of as three separate communications channels.
  • a node is removed from the network when there are indications of failure or inadequate performance on two of these three channels. When there are indications that two of the channels are working normally, the complaining node is presumed to be "unhealthy" and is removed.
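• The "2 out of 3" decision at the heart of the Grandparent's Complaint Response Routine (FIG. 31) might be sketched as follows (illustrative only; the boolean inputs summarize the checks of steps 312-314):

```python
# Hypothetical sketch of the grandparent's "2 out of 3" removal decision
# (Grandparent's Complaint Response Routine, steps 311-317).

def complaint_response(complainer_is_only_child, parent_link_healthy,
                       sibling_also_complained):
    """Decide, from the grandparent's point of view, whether to remove the parent
    being complained about or the complaining child."""
    # Step 312: with a single child there is too little evidence to assign blame;
    # this sketch follows the example above in which no node is removed (another
    # example removes the parent on the child's recommendation alone).
    if complainer_is_only_child:
        return "NO_ACTION"
    # Steps 313-314: do two of the three channels point at the parent?
    if (not parent_link_healthy) or sibling_also_complained:
        return "DISCONNECT_PARENT_AND_PROPAGATE"     # steps 316-317
    # Otherwise the complaining child itself is presumed to be unhealthy.
    return "DISCONNECT_COMPLAINING_CHILD"            # step 315

print(complaint_response(False, True, False))    # DISCONNECT_COMPLAINING_CHILD
print(complaint_response(False, False, False))   # DISCONNECT_PARENT_AND_PROPAGATE
print(complaint_response(False, True, True))     # DISCONNECT_PARENT_AND_PROPAGATE
```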
• In another example, nodes may continually grade their own performance; if a node detects that there is insufficient bandwidth on two or more of its links (links to its parent and its children), the node will perform a graceful departure from the network (initiating a reconfiguration as described elsewhere) and then the node will either climb its path to an ancestor node (or go directly to the server) to request a connection path to the edge of the tree.
  • This system allows nodes to "step aside" when they detect connection / bandwidth related issues.
• In one example, the content data buffer 125 (shown in FIG. 13) of a user node may be larger the higher uptree the node is in a distribution chain (i.e., the closer it is to the root server). The larger the buffer, the greater the amount of time between a particular event (or the node's receiving the content data) and the node's actually playing the content data in its content data buffer.
• Because the content data buffer of a node is sized to vary the time at which the playing of the content data is started, the content data buffer acts as a delay.
  • the delay may be a timer or a delay buffer 126 as shown in FIG. 13, or a combination of elements.
  • the delay's purpose may be to help assure that all users experience the play of the content data approximately simultaneously.
  • the period of time (i.e., the delay time) between:
• the moment content data is received by a node, or a predetermined moment such as, for example, a reporting event or utility rating event (the occurrence of which may be based on time periods as measured by a clock outside of the distribution system and would occur approximately simultaneously for all nodes); and
  • the playing of the content data by the node is greater for a node the higher uptree the node is on the distribution chain. That is, the delays in first level nodes create greater delay times than do the delays in second level nodes, the delays in second level nodes create greater delay times than the delays in third level nodes and so on. Described mathematically, in an n level system, where x is a number from and including 2 to and including n, the delay time created by a delay in an (x-1) level node is greater than the delay time created by a delay in an x level node.
• first level nodes could have a built-in effective delay time in playing content data of one hundred twenty seconds;
• second level nodes could have a built-in effective delay time of one hundred five seconds;
• third level nodes could have a built-in effective delay time of ninety seconds;
• fourth level nodes could have a built-in effective delay time of seventy-five seconds;
• fifth level nodes could have a built-in effective delay time of sixty seconds;
• sixth level nodes could have a built-in effective delay time of forty-five seconds;
• seventh level nodes could have a built-in effective delay time of thirty seconds; and
• eighth level nodes could have a built-in effective delay time of fifteen seconds.
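• The staggered schedule listed above follows a simple pattern (fifteen-second steps over eight levels), which might be expressed as follows (illustrative only):

```python
# Hypothetical sketch of the per-level delay schedule in the eight-level example.

def effective_delay_seconds(level, max_levels=8, step=15):
    """Delay before playout for a node at the given level (1 = closest to the server)."""
    return step * (max_levels + 1 - level)

for level in range(1, 9):
    print(level, effective_delay_seconds(level))   # 1 120, 2 105, ..., 8 15
```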
  • the system may not endeavor to have buffers closer to the server contain more data than buffers of nodes that are further away. Instead, every node may have a buffer that can hold, for example, approximately two minutes worth of A/V content.
  • the system may endeavor to fill every node's buffer as rapidly as possible, regardless of the node's distance from the server, in order to give each node the maximum amount of time possible to deal with network reconfiguration and uptree node failure events.
  • nodes may receive a "current play time index" when they connect to the server. This value may be used to keep all nodes at approximately the same point in the stream - to have all nodes playing approximately the same moment of video at the same time - by telling them, essentially, begin playing A/V content at this time index.
  • the play time index may be, for example, approximately two minutes behind "real time” - allowing for nodes to buffer locally up to two minutes of content (the present invention may permit drift between nodes within the buffer (e.g., two minute buffer) or the present invention may reduce or eliminate this drift to keep all nodes playing closer to "lock step").
• FIG. 32 shows a server of a distribution network configured to include a number of "virtual" nodes (such virtual nodes may provide "multi-node" or "super-node" capability). More particularly, in this example (which example is intended to be illustrative and not restrictive), the server is a single computer running software which provides functionality for seven distinct nodes (as a result, this single computer includes 4 leaves (nodes V3-V6) to which downtree user nodes (not shown) may connect).
• Root node N may be thought of as "real" - because there is one real machine running the software (all of the other nodes (i.e., V1, V2, V3, V4, V5 and V6) may be thought of as "virtual", since they are all running on node N's hardware).
  • the server is a single computer running software which provides functionality for four distinct nodes. More particularly, in the example in FIG.
• the VTree of the server is configured with: (i) one 3rd level leaf (Node V3) to which downtree user nodes (not shown) may connect; (ii) a 2nd level leaf (Node V1) to which one downtree user node (not shown) may connect (and to which Node V3 is connected); and (iii) another 2nd level leaf (Node V2) to which downtree user nodes (not shown) may connect.
• Root node N may be thought of as "real" - because there is one real machine running the software (all of the other nodes (i.e., V1, V2, and V3) may be thought of as "virtual", since they are all running on node N's hardware).
• In one example, the computer (e.g., server) may run multiple instances of software, each of which instances provides the functionality for one of the virtual nodes (e.g., by using a portion of memory as a "port").
• In another example, the computer (e.g., server) may run a single instance of software to provide the functionality for a plurality (e.g., all) of the virtual nodes on the server (e.g., by using a portion of memory as a "port").
  • such virtual nodes may be implemented by one or more user computers.
  • a distribution network for the distribution of data between a first computer system and a second computer system comprises: a first software program running on the first computer system for enabling the first computer system to perform as a node in the network; and a second software program running on the second computer system for enabling the second computer system to perform as a plurality of virtual nodes in the network.
  • the second computer system is a server.
  • the second computer system is a user computer.
• In one example, the first software program and the second software program utilize the same source code. In another example, the first software program and the second software program utilize different source code.
  • a distribution network for the distribution of data between a first computer system and a second computer system comprises: a first software program running on the first computer system for enabling the first computer system to perform as a node in the network; and a plurality of instances of a second software program running on the second computer system for enabling the second computer system to perform as a plurality of virtual nodes in the network.
  • the second computer system is a server.
  • the second computer system is a user computer.
  • the first software program and the second software program utilize the same source code.
  • the first software program and the second software program utilize different source code.
  • data flowing between one or more nodes in the network may be encrypted (e.g., using a public key/private key mechanism).
  • the encryption may be applied to only a checksum portion of the data flow.
  • the encryption may be applied to all of one or more messages.
  • the present invention may provide a mechanism to help ensure that hackers: (1) can not construct nodes which pose as the server and issue destructive commands, such as network shutdown messages; and (2) can not intercept the broadcast data stream and replace that stream with some other stream (e.g., replacing a network feed of CNN with pornography).
  • the present invention may operate by having nodes receive from the server, as part of the meta data sent by a server to all connection requesting nodes, a public key.
• This public key may then be used (e.g., by every node in the system) to verify that: (1) the content of every data packet; and (2) the content of all non-data packets sent in the "secure message format" were produced by the server and have not been altered in any way prior to their arrival at this node, regardless of the number of nodes that have "rebroadcast" the packet.
  • nodes rebroadcast data packets and secure packets to their children completely unaltered from the form in which they were received from their parent.
  • the root server encodes (using its private key) only the checksum information associated with each data packet and secure packet.
  • because every node in the network can independently compute what the checksum of a packet should be and can also decrypt the encrypted checksum attached to that message by the server, every node can independently verify that a packet has been received unaltered from the root server. Packets that have been altered in any way may be rejected as invalid and ignored by the system.
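As a concrete illustration of signing only the checksum, the sketch below uses the third-party Python `cryptography` package with RSA-PSS over SHA-256; the library and algorithm choices are assumptions (the specification prescribes neither), but the flow mirrors the description above: the root server signs, every downtree node verifies with the distributed public key, and altered packets are rejected.

    # Minimal sketch of the "sign only the checksum" idea.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Root server: generates the key pair; the public key is shipped to nodes
    # as part of the connection metadata.
    server_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    server_public_key = server_private_key.public_key()

    def sign_packet(payload: bytes) -> bytes:
        """Server side: the RSA-PSS signature covers the SHA-256 digest
        (checksum) of the payload, not the payload itself."""
        return server_private_key.sign(payload, PSS, hashes.SHA256())

    def packet_is_authentic(payload: bytes, signature: bytes) -> bool:
        """Any node, at any depth: verify the packet arrived unaltered from the root."""
        try:
            server_public_key.verify(signature, payload, PSS, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    packet = b"A/V frame bytes..."
    sig = sign_packet(packet)
    assert packet_is_authentic(packet, sig)
    assert not packet_is_authentic(packet + b"tampered", sig)   # altered packets rejected

One practical benefit of the checksum-only approach is that the expensive private-key operation covers a fixed-size digest no matter how large the payload is.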
  • a node (e.g., a server node) may store some or all of the data that it receives and/or transmits.
  • the stored data may be "replayed" (e.g., such as to aid in network reconfiguration).
  • such stored data may be associated with certain time information and such replay may utilize the stored time information for timing purposes.
  • a node may store some or all information regarding connection requests and its response to connection requests.
  • the stored data may be "replayed" (e.g., such as to aid in modeling the network configuration).
  • such stored data may be associated with a system time and such replay may utilize the stored time information to model the network configuration at a particular time.
  • time information may be utilized with propagated network data so that the node (e.g., a server node) can more accurately model the network configuration.
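The time-stamped storage and replay idea can be sketched as follows (Python; the ConnectionEvent/EventLog names and the exact log format are assumptions used only for illustration):

    # Minimal sketch of storing connection-request events with a system time so
    # that the network configuration can be modelled ("replayed") as it existed
    # at a particular moment.
    import time
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional


    @dataclass
    class ConnectionEvent:
        timestamp: float          # system time when the event was handled
        node_id: str
        parent_id: Optional[str]  # None means the node departed


    @dataclass
    class EventLog:
        events: List[ConnectionEvent] = field(default_factory=list)

        def record(self, node_id: str, parent_id: Optional[str],
                   timestamp: Optional[float] = None) -> None:
            when = time.time() if timestamp is None else timestamp
            self.events.append(ConnectionEvent(when, node_id, parent_id))

        def topology_at(self, when: float) -> Dict[str, str]:
            """Replay events up to `when` to model the topology at that time."""
            parents: Dict[str, str] = {}
            for ev in self.events:
                if ev.timestamp > when:
                    break
                if ev.parent_id is None:
                    parents.pop(ev.node_id, None)
                else:
                    parents[ev.node_id] = ev.parent_id
            return parents


    log = EventLog()
    log.record("A", "server", timestamp=1.0)
    log.record("B", "A", timestamp=2.0)
    log.record("B", None, timestamp=5.0)        # B later departs
    print(log.topology_at(3.0))                 # {'A': 'server', 'B': 'A'}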
  • the following process may be employed when a node enters the network. More particularly, when the server adds a node X to the system (gives it a connection path to node Y) the server will add node X to its network topology model as a child of node Y. In other words, the server will assume that the connection will take place normally with no errors.
  • the system must account for the fact that it may take a number of propagate NTM/up cycles for the true status of node X to be reported back to the server.
  • the server may compare the newly propagated (updated) NTM it is receiving from its children to the list of "recently added nodes". If a recently added node is not in the propagated NTM it will be re-added to the NTM as a child of node Y. At some point node X will no longer be considered “recently added” and the server will cease adding node X back into its NTM. In other words, the server will assume that, for whatever reason, node X never successfully connected to the network.
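A minimal sketch of the "recently added nodes" heuristic follows (Python; the dictionary layout, the RECENTLY_ADDED_TTL grace period, and the function names are assumptions, not values taken from the specification):

    # A freshly placed node is re-added to the server's NTM until it either
    # shows up in a propagated NTM or its grace period expires.
    import time
    from typing import Dict, Tuple

    RECENTLY_ADDED_TTL = 30.0                   # seconds; an assumed grace period

    ntm: Dict[str, str] = {}                    # node_id -> parent_id (server's model)
    recently_added: Dict[str, Tuple[str, float]] = {}   # node_id -> (parent_id, time added)


    def add_node(node_x: str, node_y: str) -> None:
        """Server hands node X a path to node Y and optimistically records it."""
        ntm[node_x] = node_y
        recently_added[node_x] = (node_y, time.time())


    def merge_propagated_ntm(propagated: Dict[str, str]) -> None:
        """Replace the model with what the children actually reported, then
        re-add any recent node the report has not caught up with yet."""
        ntm.clear()
        ntm.update(propagated)
        now = time.time()
        for node_x, (node_y, added_at) in list(recently_added.items()):
            if node_x in propagated or now - added_at > RECENTLY_ADDED_TTL:
                recently_added.pop(node_x)      # confirmed, or assumed never connected
            else:
                ntm[node_x] = node_y            # keep predicting it as a child of Y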
  • the following process may be employed when a node gracefully exits the system. More particularly, when a node (node X) leaves the network by performing a graceful exit (e.g., the user selects "exit” or clicks the "x” or issues some other command to close the application) the node will send signals to its parent and child nodes (if any) immediately prior to actually shutting down. Upon receipt of such a "depart" message, the parent of the departing node (node P) will modify its internal NTM (Network Topology Model) to a configuration that corresponds to the predicted shape of the network after the departing node exits.
  • node P will assume that its NTM is accurate and that the entire cascade of reconfiguration events initiated by the departure of node X will happen normally. As such, node P will now expect the former green child of node X (node XG — if such a child exists) to connect — essentially taking node X's place in the network.
  • node P's NTM represents a prediction of the (anticipated) state of the network rather than the actual reported configuration.
  • node P will mark node XG as "soft" in its NTM - meaning node XG is expected to take up this position in the network but has not yet physically done so.
  • Node P will report this "predicted" NTM (along with node XG's status as a soft node) up during its propagate NTM/Up cycles, until node P physically accepts a child connection in node XG's "reserved" spot and receives an actual report of the shape of the downtree NTM from that child, via propagate NTM/ups.
  • the above actions may be taken in order to have the propagated NTMs reflect, as accurately as possible, the topology of the network.
  • a complete and accurate picture of the current shape of the network at any instance in time can never be guaranteed.
  • although the root server has the "widest" view of the network, without the heuristics described herein that view would be the "most dated" of any node in the network.
  • heuristics such as those just described may be vital to producing NTMs that are as close to complete and accurate as possible (e.g., so that the Universal Connection Address Routine can position incoming nodes so as to keep the distribution tree as nearly balanced as possible).
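The "soft node" prediction described above can be sketched as follows (Python; the NTMEntry/LocalNTM names and fields are illustrative assumptions): node P drops the departing node X from its local model and records X's green child XG as a soft child of P until XG physically connects and reports its downtree NTM.

    from dataclasses import dataclass, field
    from typing import Dict, Optional


    @dataclass
    class NTMEntry:
        parent: Optional[str]
        soft: bool = False            # expected to occupy the spot, not yet connected


    @dataclass
    class LocalNTM:
        entries: Dict[str, NTMEntry] = field(default_factory=dict)

        def handle_graceful_exit(self, node_x: str, green_child: Optional[str]) -> None:
            departed = self.entries.pop(node_x, None)
            if departed and green_child:
                # Predict that XG takes X's place; mark it soft until it reports in.
                self.entries[green_child] = NTMEntry(parent=departed.parent, soft=True)

        def confirm_child(self, node_id: str) -> None:
            """Called when the reserved spot is physically taken and a real
            downtree NTM arrives via propagate NTM/up."""
            if node_id in self.entries:
                self.entries[node_id].soft = False


    ntm_p = LocalNTM()
    ntm_p.entries["X"] = NTMEntry(parent="P")
    ntm_p.handle_graceful_exit("X", green_child="XG")
    print(ntm_p.entries["XG"])        # NTMEntry(parent='P', soft=True)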
  • some or all of the data flowing in the network may be associated with a particular data type (e.g., audiovisual data, closed caption data, network control data, file data). Further, data flowing in the network may be stored for user use at a later time (e.g., data of the type "file" may contain program guide data and may be stored for use by a user (or the client program) at a later time).
  • while the present invention may be utilized to broadcast audio/video content "live" (which traditional peer-to-peer file sharing systems such as BitTorrent, KaZaA, etc. are poorly suited for), various embodiments of the present invention may have the ability to distribute various kinds of content in addition to audiovisual material.
  • the system of the present invention may routinely transmit commands between nodes and data concerning the topology of the network.
  • the system of the present invention may be capable of streaming Closed Captioning text information to accompany A/V material.
  • the system of the present invention may be capable of transmitting files "in the background" while A/V content is being broadcast.
  • the system of the present invention may be capable of distributing any form of digital data, regardless of type or size.
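A minimal sketch of typed data handling follows (Python; the DataType enum values simply restate the example types given above, and the handler names are assumptions):

    # Tag every unit of data flowing through the network with a type, so that
    # "file" data such as program-guide files can be stored for later use
    # while A/V data is played immediately.
    from dataclasses import dataclass
    from enum import Enum, auto


    class DataType(Enum):
        AUDIOVISUAL = auto()
        CLOSED_CAPTION = auto()
        NETWORK_CONTROL = auto()
        FILE = auto()


    @dataclass
    class Packet:
        data_type: DataType
        payload: bytes


    stored_files = []

    def handle_packet(pkt: Packet) -> None:
        if pkt.data_type is DataType.FILE:
            stored_files.append(pkt.payload)    # e.g., program guide data kept for later
        elif pkt.data_type is DataType.AUDIOVISUAL:
            pass                                # hand to the player
        elif pkt.data_type is DataType.CLOSED_CAPTION:
            pass                                # overlay on the A/V stream
        else:
            pass                                # act on network control commands


    handle_packet(Packet(DataType.FILE, b"program guide bytes"))
    print(len(stored_files))                    # 1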
  • a process for docking a connection requesting user node with a distribution network including the following steps:
  • selecting from a group of nodes including the instructing node and said docked user nodes a recommended parent node for said connection requesting user node, wherein said recommended parent node has apparent available capacity to transmit content data to said connection requesting user node and is at least as close (i.e., in terms of network topology) to said instructing node as any other docked user node having apparent available capacity to transmit content data to said connection requesting user node (of note, placement may also or alternatively depend, as described elsewhere in the present application, in whole or in part on whether the connection requesting node is repeat-capable or not (i.e., whether it can support children)); and
  • providing said connection requesting user node with a connection address list listing each node in said instructing node's topology database from said recommended parent node back to said instructing node.
  • the process may further comprise the following steps:
  • (e) having said connection requesting user node go to (or attempt to go to) the node at the top of said connection address list;
  • (f) having said connection requesting user node determine whether the node at the top of said connection address list is part of the distribution network;
  • (g) if the node at the top of said connection address list is not part of the distribution network, deleting such node from said connection address list and repeating steps (e) and (f) with respect to the next node at the top of said connection address list;
  • (h) if the node at the top of said connection address list is part of said distribution network, having said connection requesting user node dock with said node at the top of said connection address list.
  • step (d) may further comprise the following step:
  • the process may further comprise the following steps:
  • (e) having said connection requesting user node go to (or attempt to go to) the node at the top of said connection address list; (f) having said connection requesting user node determine whether the node at the top of said connection address list is part of the distribution network;
  • (g) if the node at the top of said connection address list is not part of the distribution network, deleting such node from said connection address list and repeating steps (e) and (f) with respect to the next node at the top of said connection address list;
  • (i) if the node at the top of said connection address list is part of said distribution network and has available capacity to transmit content data to said connection requesting user node, having said connection requesting user node dock with the node at the top of said connection address list.
  • the process may further comprise the following steps:
  • the process may further comprise the steps of counting the times that step (j) is performed and, when step (j) is performed a predetermined number of times, having said connection requesting user node repeat steps (a)-(d) with said primary server being the instructing node.
  • a process for docking a connection requesting user node with a distribution network including the following steps:
  • providing said connection requesting user node with a connection address list listing each node in said instructing node's topology database from said recommended parent node back to said instructing node and, if said instructing node is not a primary server node, back to said primary server node.
  • a process for docking a connection requesting user node with a distribution network including the following steps:
  • said docked user nodes' respective bandwidth capacities including a designation of whether said respective docked user node's bandwidth capacity is below a predetermined threshold (e.g., said respective docked user node is a low-bandwidth node) or at least said predetermined threshold (e.g., said respective docked user node is a high-bandwidth node);
  • a primary recommended parent node list ("PRPL") comprised of the instructing node (if it has available capacity to transmit content data to said connection requesting user node) plus those docked user nodes having apparent available capacity to transmit content data to said connection requesting user node, with said instructing node being placed first on the PRPL (if it is on the PRPL) and said docked user nodes having apparent available capacity to transmit content data to said connection requesting user node being ranked with those docked nodes which are closer (i.e., in terms of network topology) to the instructing node being ranked higher than those docked nodes which are further away (i.e., in terms of network topology) and with equidistant (i.e., in terms of network topology) docked user nodes being ranked such that those docked nodes having higher utility ratings are ranked higher than docked nodes having lower utility ratings (a minimal ranking sketch appears after this process description);
  • a secondary recommended parent node list ("SRPL") comprised of said instructing node (if it has no available capacity to transmit content data to said connection requesting user node but does have at least one low-bandwidth node docked directly to it) plus said docked user nodes which (i) are high-bandwidth nodes with no available capacity to transmit content data to said connection requesting user node and (ii) have at least one low-bandwidth node docked directly to it, with said instructing node being placed first on the SRPL (if it is on the SRPL) and said docked user nodes on the SRPL being ranked with those docked nodes which are closer (i.e., in terms of network topology) to the instructing node being ranked higher than those docked nodes which are further away (i.e., in terms of network topology) and with equidistant (i.e., in terms of network topology) docked user nodes being ranked such that those docked nodes having higher utility ratings are ranked higher than docked nodes having lower utility ratings;
  • determining whether the connection requesting user node is a low-bandwidth node;
  • if the connection requesting user node is a low-bandwidth node, providing said connection requesting user node with a connection address list listing each node in said instructing node's topology database from said recommended parent node back to said instructing node and, if said instructing node is not a primary server node, back to said primary server node;
  • if the connection requesting user node is a high-bandwidth node
  • step (f)(ii) may include the following steps:
  • step (g)(iv) may include the following steps: (1) adding said connection requesting user node to the topology database as a child of the recommended parent node; (2) if said recommended parent node would have no apparent available capacity to transmit content data to an additional node with said connection requesting user node docked with it, deleting the recommended parent node from the PRPL; and
  • step (g)(v) may include the following steps:
  • the process may further comprise the following steps:
  • having said connection requesting user node go to (or attempt to go to) the node at the top of said connection address list;
  • having said connection requesting user node determine whether the node at the top of said connection address list is part of the distribution network;
  • the process may further comprise the following steps:
  • having said connection requesting user node go to (or attempt to go to) the node at the top of said connection address list;
  • having said connection requesting user node determine whether the node at the top of said connection address list is part of the distribution network;
  • (k) if the node at the top of said connection address list is part of said distribution network, determining whether said node at the top of said connection address list has available capacity to transmit content data to said connection requesting user node; and (l) if the node at the top of said connection address list is part of said distribution network and has available capacity to transmit content data to said connection requesting user node, having said connection requesting user node dock with the node at the top of said connection address list.
  • the process may further comprise the following steps:
  • (m) if the node at the top of said connection address list is part of said distribution network and the node at the top of said connection address list does not have available capacity to transmit content data to said connection requesting user node, repeating steps (a)-(g), with the node at the top of said connection address list being said instructing node and a new connection address list being the product of steps (a)-(g).
  • the process may further comprise the steps of counting the times that step (m) is performed and, when step (m) is performed a predetermined number of times, having said connection requesting user node repeat steps (a)-(g) with said primary server being the instructing node.
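The PRPL ranking referred to above can be sketched as follows (Python; the Candidate fields, the numeric utility ratings, and the tie-breaking implementation are illustrative assumptions): the instructing node is placed first when it has capacity, remaining candidates with apparent capacity are ordered by topological distance from the instructing node, and equidistant candidates are ordered by utility rating.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Candidate:
        node_id: str
        distance: int          # hops from the instructing node in its topology database
        utility_rating: float
        has_capacity: bool
        is_instructing_node: bool = False


    def build_prpl(candidates: List[Candidate]) -> List[str]:
        eligible = [c for c in candidates if c.has_capacity]
        eligible.sort(key=lambda c: (not c.is_instructing_node, c.distance, -c.utility_rating))
        return [c.node_id for c in eligible]


    prpl = build_prpl([
        Candidate("I", distance=0, utility_rating=0.9, has_capacity=True, is_instructing_node=True),
        Candidate("A", distance=1, utility_rating=0.5, has_capacity=True),
        Candidate("B", distance=1, utility_rating=0.8, has_capacity=True),
        Candidate("C", distance=2, utility_rating=0.9, has_capacity=False),
    ])
    print(prpl)    # ['I', 'B', 'A']  -- C is excluded (no apparent capacity)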
  • a process for docking a connection requesting user node with a distribution network including the following steps:
  • determining whether said connection requesting user node's bandwidth capacity is below a predetermined threshold (e.g., said connection requesting user node is a low-bandwidth node) or at least said predetermined threshold (e.g., said connection requesting user node is a high-bandwidth node);
  • if said connection requesting user node is a high-bandwidth user node, selecting from a group of nodes including the instructing node and said docked user nodes a recommended parent node for said connection requesting user node, wherein said recommended parent node has apparent available capacity to transmit content data to said connection requesting user node and is at least as close (i.e., in terms of network topology) to said instructing node as any other docked user node having apparent available capacity to transmit content data to said connection requesting user node;
  • if said connection requesting user node is a low-bandwidth user node, selecting from a group of nodes including the instructing node and said docked user nodes a recommended parent node for said connection requesting user node, wherein said recommended parent node has apparent available capacity to transmit content data to said connection requesting user node and is at least as far (i.e., in terms of network topology) from said instructing node as any other docked user node having apparent available capacity to transmit content data to said connection requesting user node; and
  • providing said connection requesting user node with a connection address list listing each node in said instructing node's topology database from said recommended parent node back to said instructing node.
  • step (f) may further comprise the following step: (i) if said instructing node is not a server node, including in said connection address list a list of nodes cascadingly connected with each other from the instructing node back to said server node.
  • a process for connecting a connection requesting user node to a computer information distribution network having a primary server node and user nodes docked therewith in a cascaded relationship comprising the following steps:
  • (a) providing said connection requesting user node with a connection address list which sets forth a list of user nodes docked in series with each other back to said primary server;
  • (b) having said connection requesting user node go to (or attempt to go to) the node at the top of said connection address list;
  • (c) having said connection requesting user node determine whether the node at the top of said connection address list is part of the distribution network;
  • (d) if the node at the top of said connection address list is not part of the distribution network, deleting such node from said connection address list and repeating steps (b) and (c) with respect to the next node at the top of said connection address list;
  • (e) if the node at the top of said connection address list is part of said distribution network, having said connection requesting user node dock with the node at the top of said connection address list.
  • a process for connecting a connection requesting user node to a computer information distribution network having a primary server node and user nodes docked therewith in a cascaded relationship comprising the following steps:
  • (a) providing said connection requesting user node with a connection address list which sets forth a list of user nodes docked in series with each other back to said primary server;
  • (b) having said connection requesting user node go to (or attempt to go to) the node at the top of said connection address list;
  • (c) having said connection requesting user node determine whether the node at the top of said connection address list is part of the distribution network;
  • (d) if the node at the top of said connection address list is not part of the distribution network, deleting such node from said connection address list and repeating steps (b) and (c) with respect to the next node at the top of said connection address list;
  • (e) if the node at the top of said connection address list is part of said distribution network, determining whether the node at the top of said connection address list has available capacity to transmit content data to said connection requesting user node;
  • (f) if the node at the top of said connection address list is part of said distribution network and has available capacity to transmit content data to said connection requesting user node, having said connection requesting user node dock with the node at the top of said connection address list (a sketch of this list-walking procedure appears below).
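The list-walking procedure used by both processes above can be sketched as follows (Python; the function and callback names are assumptions, and the "no capacity" case is only stubbed out, since the full behaviour, re-running placement with the refusing node as instructing node, is described separately above):

    from typing import Callable, List, Optional


    def walk_connection_address_list(
        address_list: List[str],
        is_in_network: Callable[[str], bool],
        has_capacity: Callable[[str], bool],
        require_capacity: bool = True,
    ) -> Optional[str]:
        while address_list:
            candidate = address_list[0]
            if not is_in_network(candidate):
                address_list.pop(0)         # drop the departed node and try the next one
                continue
            if require_capacity and not has_capacity(candidate):
                return None                 # caller re-runs placement with this node as instructing node
            return candidate                # dock here
        return None


    parent = walk_connection_address_list(
        ["n7", "n3", "server"],
        is_in_network=lambda n: n != "n7",  # pretend n7 has already left the network
        has_capacity=lambda n: True,
    )
    print(parent)                           # 'n3'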
  • a process for reconfiguring the network in the event of a user node's departure therefrom including the following steps:
  • the node's propagation rating if it is a user node having a parent node which is a user node, wherein a propagation rating is one of a predetermined number of grades ranging from highest to second highest to third highest and so on, with the number of grades being equal to said predetermined maximum number;
  • nodes may not have data about their sibling (i.e., nodes do not know who their sibling is); instead, the mechanism used to get a red node to connect to its green sibling may be for the parent of those two nodes (or a proxy for that parent) to send the red child: (1) a connection path with that red child's green sibling at the top of the path; and (2) a command to "priority join" to that green child - when the green child node accepts the incoming "priority join” connection request from its former red
  • a process for reconfiguring the network in the event of a user node's departure therefrom including the following steps:
  • the node's propagation rating if it is a user node having a parent node which is a user node, wherein a propagation rating is one of a predetermined number of grades ranging from highest to second highest to third highest and so on, with the number of grades being equal to said predetermined maximum number;
  • an ancestor list setting forth the node's ancestor nodes' addresses (if it has any ancestor nodes) back to the primary server node, with the node's parent node being atop the ancestor list;
  • step (c)(iii) may include the following additional step: (1) waiting a predetermined period of time so as to allow the red node's sibling with the highest propagation rating prior to the red node's receipt of the propagation signal (hereinafter the "green node") to retransmit the propagation signal to the green node's child nodes (if any).
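A rough sketch of the "priority join" hand-off follows (Python; the PriorityJoin message shape, the numeric propagation ratings, and the assumption that the green child reconnects to the departing node's parent are illustrative readings of the description above, not a definitive implementation):

    from dataclasses import dataclass
    from typing import Dict, List


    @dataclass
    class PriorityJoin:
        connection_path: List[str]      # green sibling first, then back toward the server
        command: str = "priority_join"


    def reconfigure_after_departure(
        parent_id: str,
        children_of_x: Dict[str, int],  # child_id -> propagation rating (1 = highest)
        path_to_server: List[str],
    ) -> Dict[str, PriorityJoin]:
        """Return the message each red child of departing node X should receive."""
        green = min(children_of_x, key=children_of_x.get)   # highest propagation rating
        messages = {}
        for child in children_of_x:
            if child == green:
                continue                # the green child moves up to the parent instead
            messages[child] = PriorityJoin([green, parent_id] + path_to_server)
        return messages


    msgs = reconfigure_after_departure("P", {"XG": 1, "XR": 2}, ["server"])
    print(msgs["XR"].connection_path)   # ['XG', 'P', 'server']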
  • a process for handling a purported communication interruption between said first user node and said second user node including the following steps:
  • step (b) may include the following step: (ii) sending a signal to the user nodes docked with said first user node indicating the departure of said first user node from said network.
  • each user node may have no more than a predetermined maximum number of child nodes docked with it, and wherein step (b) may include the following steps
  • a propagation rating is one of a predetermined number of grades ranging from highest to second highest to third highest and so on, with the number of grades being equal to said predetermined maximum number;
  • a distribution network for the distribution of content data from a server node to user nodes is provided, wherein said user nodes are connected to said server node and each other in cascaded relationship, and wherein:
  • At least one of said user nodes is a repeater node connected directly to said server node, wherein said repeater node retransmits content data received by it to a user node docked to it for purpose of receiving content data from said repeater node (hereinafter referred to as a "child node"); and
  • each repeater node has the ability to provide to a user node which is attempting to dock with said repeater node connection address instructions.
  • each repeater node may include a descendant database indicating:
  • connection address instructions may refer said user node which is attempting to dock with said repeater node to a node in said descendant database.
  • each user node may include an ancestor database indicating to which node said user node is docked so that said user node may receive content data therefrom (hereinafter referred to as a "parent node"), and to which node, if any, at said point in time, said parent node is docked so that it may receive content data therefrom.
  • said user node may contact another node on its ancestor database.
  • each child node of a repeater node may include a sibling database indicating which user nodes, if any, are also child nodes of said repeater node.
  • a distribution network for the distribution of content data from a server node to user nodes wherein n levels of user nodes are cascadingly connected to said server node, wherein n is a number greater than one, wherein each user node includes a delay which causes the playing of content data by such node to be delayed a period of time (hereinafter "delay time") from a point in time, and wherein delays in higher level nodes create greater delay times than do delays in lower level nodes.
  • a distribution network for the distribution of content data from a server node to user nodes wherein n levels of user nodes are cascadingly connected to said server node, wherein n is a number greater than one, wherein each user node includes a delay which causes the playing of content data by such node to be delayed a period of time (hereinafter "delay time") from a point in time, wherein x is a number from and including 2 to and including n, and wherein delays in (x-1) level nodes create greater delay times than do delays in x level nodes.
  • said point in time is an event experienced approximately simultaneously by substantially all user nodes.
  • said point in time is when said node receives content data.
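One way to realize the level-dependent delay, offered only as a sketch under the assumption of a fixed per-level increment (the specification requires only that level x-1 delay playback longer than level x):

    def playback_delay(level: int, n_levels: int, per_level_delay: float = 0.5) -> float:
        """Delay (seconds) applied from the node's chosen reference point in time;
        nodes nearer the server wait longer, so all levels play roughly in step."""
        if not 1 <= level <= n_levels:
            raise ValueError("level must be between 1 and n_levels")
        return (n_levels - level) * per_level_delay


    # With 4 cascaded levels, level 1 waits 1.5 s and level 4 waits 0 s.
    print([playback_delay(lvl, 4) for lvl in range(1, 5)])   # [1.5, 1.0, 0.5, 0.0]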
  • any desired branch factor may be used (see, e.g., FIG. 34A showing a branch factor of 1 (i.e., each parent node may have only one child node); FIG. 34B showing a branch factor of 2 (i.e., each parent node may have one or two child nodes); and FIG. 34C showing a branch factor of 3 (i.e., each parent node may have one, two or three child nodes)).
  • a node ID may be based upon: (a) an IP address (external/internal/DHCP); (b) a URL; and/or (c) a Port.
  • the root server may propagate "its" time upon assignment of a connection and the user node may get a "delta" (i.e., difference) from its clock to use (in the event of a reset the user node may re-acquire the time from the server (or from another time-providing source)).
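A minimal sketch of the clock "delta" (Python; the NodeClock name and the two-minute offset in the example are illustrative assumptions):

    import time


    class NodeClock:
        def __init__(self) -> None:
            self.delta = 0.0

        def sync(self, server_time: float) -> None:
            # Offset between the server's clock and this machine's clock,
            # acquired when the server propagates "its" time (or after a reset).
            self.delta = server_time - time.time()

        def now(self) -> float:
            # Network-wide ("server") time as seen by this node.
            return time.time() + self.delta


    clock = NodeClock()
    clock.sync(server_time=time.time() + 120.0)    # pretend the server is 2 minutes ahead
    print(round(clock.now() - time.time()))        # ~120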
  • the various time intervals discussed herein may be predetermined time intervals (e.g., .5 sec, 1 sec, 1 min.) and/or the time intervals may be based upon some varying criteria (e.g., flow of data in the network, number of nodes in the network, number of levels in the network, position of a given node in the network).
  • partial propagation of data (e.g., ancestor data and/or descendent data) may be utilized.
  • one or more of the steps described herein may be omitted (and the various steps which are performed may not necessarily need to be carried out in the order described herein (which description was intended to represent a number of examples, and not be restrictive)).
  • the root server and each of the user nodes may utilize essentially identical software for carrying out the network configuration, reconfiguration and data transfer operations described herein or the root server may have dedicated server software while the user nodes have different, dedicated user node software.
  • the entire distribution network may have the same branch factor (e.g., a branch factor of 2) or the user nodes may have a first branch factor (e.g., a branch factor of 2) and the root server may have a second branch factor (e.g., the root server may be capable of acting as a parent to more user nodes than 2).
  • heuristics may be employed to drive network configuration, reconfiguration and/or data flow (e.g., based upon historic network configuration, reconfiguration and/or data flow).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Various embodiments of the present invention relate to a system for distributing data (such as content data) over a computer network, as well as to a method for arranging receiving nodes in a computer network such that the capacity of a server is effectively increased (e.g., the capacity of a server may be effectively increased multiplicatively, exponentially, etc.). In one embodiment, the present invention exploits the excess capacity possessed by a number of receiving nodes and may use those receiving nodes as repeaters. The distribution system may comprise one or more nodes having one or more databases indicating one or more ancestors and/or descendants of the node, such that reconfiguration of the distribution network can be accomplished without overloading the system's primary server. One embodiment of the present invention may include a method for configuring a computer information distribution network comprising a primary server node and cascadingly connected user nodes, and for reconfiguring the network in the event that a user node departs from the network. In one example (a non-restrictive example given by way of illustration), the method may include providing a new user node (or a connection requesting user node) with a connection address list of nodes within the network, having the new user node (or connection requesting user node) go (or attempt to go) to the node at the top of the connection address list, determining whether that node is still part of the distribution network and, if so, connecting to it, and, if not, having it go (or attempt to go) to the next node on the connection address list. In another example (a non-restrictive example given by way of illustration), when a user node departs from the distribution network, a propagation signal may be sent to the nodes below it in the network, causing them to move up the network in a predetermined order. In another example (a non-restrictive example given by way of illustration), the present invention may employ a decentralized approach to provide each new user node (or connection requesting user node) with a path back to the root server.
EP05769257A 2004-07-09 2005-07-11 Systemes destines a distribuer des donnees sur un reseau informatique et procedes destines a agencer des noeuds en vue d'une distribution de donnees sur un reseau informatique Withdrawn EP1782245A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US58687604P 2004-07-09 2004-07-09
PCT/US2005/024515 WO2006010111A2 (fr) 2004-07-09 2005-07-11 Systemes destines a distribuer des donnees sur un reseau informatique et procedes destines a agencer des noeuds en vue d'une distribution de donnees sur un reseau informatique

Publications (2)

Publication Number Publication Date
EP1782245A2 true EP1782245A2 (fr) 2007-05-09
EP1782245A4 EP1782245A4 (fr) 2010-08-25

Family

ID=35785785

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05769257A Withdrawn EP1782245A4 (fr) 2004-07-09 2005-07-11 Systemes destines a distribuer des donnees sur un reseau informatique et procedes destines a agencer des noeuds en vue d'une distribution de donnees sur un reseau informatique

Country Status (3)

Country Link
EP (1) EP1782245A4 (fr)
CA (2) CA2577443C (fr)
WO (1) WO2006010111A2 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010031001A1 (fr) 2008-09-12 2010-03-18 Network Foundation Technologies, Llc Système de distribution de données de contenu sur un réseau informatique et procédé d'agencement de nœuds pour une distribution de données sur un réseau informatique
CN115277723B (zh) * 2022-07-19 2024-06-18 国能信控互联技术有限公司 边缘采集历史模块基于缓冲事件的断点续传方法及系统

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030051051A1 (en) * 2001-09-13 2003-03-13 Network Foundation Technologies, Inc. System for distributing content data over a computer network and method of arranging nodes for distribution of data over a computer network
WO2003069467A1 (fr) * 2002-02-13 2003-08-21 Horizon, A Glimpse Of Tomorrow, Inc. Systeme d'execution distribuee

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5884031A (en) * 1996-10-01 1999-03-16 Pipe Dream, Inc. Method for connecting client systems into a broadcast network
US7664840B2 (en) * 2001-09-13 2010-02-16 Network Foundation Technologies, Llc Systems for distributing data over a computer network and methods for arranging nodes for distribution of data over a computer network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030051051A1 (en) * 2001-09-13 2003-03-13 Network Foundation Technologies, Inc. System for distributing content data over a computer network and method of arranging nodes for distribution of data over a computer network
WO2003069467A1 (fr) * 2002-02-13 2003-08-21 Horizon, A Glimpse Of Tomorrow, Inc. Systeme d'execution distribuee

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2006010111A2 *

Also Published As

Publication number Publication date
CA2975031C (fr) 2019-12-17
CA2577443C (fr) 2017-08-22
WO2006010111A2 (fr) 2006-01-26
EP1782245A4 (fr) 2010-08-25
WO2006010111A3 (fr) 2007-04-12
CA2577443A1 (fr) 2006-01-26
CA2975031A1 (fr) 2006-01-26

Similar Documents

Publication Publication Date Title
US7664840B2 (en) Systems for distributing data over a computer network and methods for arranging nodes for distribution of data over a computer network
US7536472B2 (en) Systems for distributing data over a computer network and methods for arranging nodes for distribution of data over a computer network
US7512676B2 (en) Systems for distributing data over a computer network and methods for arranging nodes for distribution of data over a computer network
CA2640407C (fr) Systeme et procede de distribution de donnees sur un reseau informatique
CA2686978C (fr) Systeme et procede de diffusion de contenu vers des noeuds dans des reseaux informatiques
US7035933B2 (en) System of distributing content data over a computer network and method of arranging nodes for distribution of data over a computer network
CA2577287C (fr) Systeme pour la distribution de donnees sur un reseau informatique et procedes pour l'agencement de noeuds pour la distribution de donnees sur un reseau informatique
CA2975031C (fr) Systemes destines a distribuer des donnees sur un reseau informatique et procedes destines a agencer des noeuds en vue d'une distribution de donnees sur un reseau informatique
CA2577129C (fr) Systemes de distribution de donnees via un reseau informatique, et procedes d'agencements de noeuds pour la distribution de donnees via un reseau informatique

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070208

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 15/16 20060101AFI20070502BHEP

R17D Deferred search report published (corrected)

Effective date: 20070412

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20100722

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 15/16 20060101AFI20070502BHEP

Ipc: H04L 29/08 20060101ALI20100716BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20161216

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180201