WO2005079001A1 - Network architecture - Google Patents
- Publication number
- WO2005079001A1 (PCT/CA2005/000194)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- nodes
- network
- queue
- hspp
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/02—Topology update or discovery
- H04L45/04—Interdomain routing, e.g. hierarchical routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/02—Topology update or discovery
- H04L45/03—Topology update or discovery by updating link state protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
- H04L45/122—Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/26—Route discovery packet
Definitions
- the present invention relates generally to electronic, telecommunication and computing devices that communicate with each other and more particularly to a network architecture therefor.
- PSTN public switched telephone network
- LAN local area networks
- WAN wide area networks
- VOIP voice over internet protocol
- IPv4 Internet Protocol Version 4
- IP Internet Protocol
- routers and routing tables throughout the Internet are extremely bloated, increasing complexity in traffic routing and increasing network latency.
- IPv6 offers potential relief in the form of a larger address space, but the upgrade to IPv6 is expected to be slow.
- AODV Ad Hoc On Demand Distance Vector
- IETF Internet Engineering Task Force
- DSDV Destination-Sequenced Distance Vector
- DSDV is a proactive protocol that uses a constant flood of updates to create and maintain routes to and from all nodes in the network.
- a detailed description of DSDV is found at http://citeseer.ist.psu.edU/cache/papers/cs/2258/http:zSzzSzwww.srvloc.orgzSzcharliepzSz txtzSzsigcomm94zSzpaper.pdf/perkins94highly.pdf or http://citeseer.ist.psu.edu/perkins94highly.html.
- While DSDV has the advantage of providing loop-free routing, it has the disadvantage of only working in small networks. In large networks the control traffic easily exceeds the available bandwidth.
- OLSR Optimized Link State Routing
- OLSR is a proactive protocol that attempts to build knowledge of the network topology.
- a detailed description of OLSR can be found in this IETF draft http://hipercom.inria.fr/olsr/draft-ietf-manet-olsr-11.txt. While OLSR has the advantage of being a more efficient link state protocol, it is still unable to support larger networks.
- OSPF is a proactive link state protocol that is used by some internet core routers. A detailed description of OSPF can be found in this IETF document http://www.ietf.org/rfc/rfc1247.txt. While OSPF allows core internet routers to route around failures, it has limitations on the size of networks it is able to support.
- a first aspect of the invention provides a network that comprises a plurality of nodes and a plurality of links interconnecting neighbouring ones of the nodes.
- Each of the nodes is operable to maintain information about each of the nodes that are within a first portion of the nodes.
- the information includes: a first identity of another one of the nodes within the first portion; and for each first identity, a second identity representing a neighbouring node that is a desired step to reach the another one of the nodes respective to the first identity.
- Each of the nodes is operable to determine a neighbouring node that is a desired step to locate the nodes in a second portion of the nodes that are not included in the first portion.
- the determination is based on which of the neighbouring nodes most frequently appears in each second identity.
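The frequency-based determination described above can be sketched as follows. This is an illustrative reading of the claim language, not the patent's implementation; the function and table names are assumptions:

```python
# Each node keeps a table mapping each known node (the "first portion")
# to the neighbouring node that is the desired step toward it. Nodes
# outside that table (the "second portion") are reached via whichever
# neighbour appears most frequently among the recorded steps.
from collections import Counter

def default_step(best_step_table):
    """best_step_table: dict of {known_node_id: neighbour_id}.
    Returns the neighbour most frequently appearing as a desired step,
    i.e. the step used toward nodes this node has no knowledge of."""
    counts = Counter(best_step_table.values())
    neighbour, _ = counts.most_common(1)[0]
    return neighbour

# N2 appears three times as a best step, so unknown nodes go via N2.
table = {"N2": "N2", "N3": "N2", "N4": "N5", "N6": "N2"}
print(default_step(table))  # "N2"
```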
- each of the nodes is operable to exchange the information with its neighbouring nodes.
- each link has a set of service characteristics such that any path between two of the nodes has a cumulative set of service characteristics; and wherein the desired step is based on which of the paths has a desired cumulative set of service characteristics.
- the service characteristics include at least one of bandwidth, latency and bit error rate.
- the nodes are at least one of computers, telephones, sensors and personal digital assistants.
- the links are based on at least one of wired and wireless connections.
- a network core is formed between neighbouring nodes that determine each other's desired step to reach the nodes within the second portion.
- each node is operable to instruct other nodes between the core and the node to maintain information about the node.
- each node is operable to request information about the nodes within the second portion; each node being operable to make the request to the other nodes between the core and the node.
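The core formation described above, in which neighbouring nodes each determine the other as their desired step toward the second portion, can be sketched as follows. This is a hypothetical illustration; the names are not from the patent text:

```python
# A "core" link exists wherever two neighbouring nodes mutually pick
# each other as their default step toward nodes they do not know.
def core_links(default_steps):
    """default_steps: {node_id: neighbour chosen as default step}.
    Returns the set of mutually-chosen node pairs forming the core."""
    core = set()
    for node, step in default_steps.items():
        if default_steps.get(step) == node:  # the choice is mutual
            core.add(frozenset((node, step)))
    return core

# N2 and N3 choose each other, so the core forms between them.
steps = {"N1": "N2", "N2": "N3", "N3": "N2", "N4": "N3"}
print(core_links(steps))  # {frozenset({'N2', 'N3'})}
```

Nodes off the core can then, per the text, instruct the nodes between themselves and the core to maintain information about them, and direct requests about unknown nodes along the same chain.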
- One advantage of the present invention over the prior art is that the network architecture taught herein allows for large-scale self-organizing networks. This feature is enabled, for certain embodiments, because very few nodes in the network need actually have knowledge of the entire network. Collectively, all nodes in the network have knowledge of the entire network, and nodes that are unaware of other nodes, but which need to find such other nodes, are provided with means of locating those other nodes by seeking such knowledge from other nodes in the network having relevant knowledge. For these and other reasons, the present invention is a novel self-organizing network architecture that enables substantially larger self-organizing networks than prior art self-organizing network architectures.
- a second aspect of the invention provides a self-organizing network comprising at least 2,000 nodes interconnected by a plurality of links.
- a third aspect of the invention provides a self-organizing network comprising at least 5,000 nodes interconnected by a plurality of links.
- a fourth aspect of the invention provides a self-organizing network comprising at least 10,000 nodes interconnected by a plurality of links.
- a fifth aspect of the invention provides a self-organizing network comprising at least 100,000 nodes interconnected by a plurality of links.
- Figure 1 is a schematic representation of a network in accordance with an embodiment of the invention.
- Figure 2 shows a flow-chart depicting a method of spreading network knowledge in accordance with an embodiment of the invention
- Figure 3 is a schematic representation of a network depicting a performance of a step of the method of Figure 2, in accordance with an embodiment of the invention
- Figure 4 is a schematic representation of a network depicting a performance of a step of the method of Figure 2, in accordance with an embodiment of the invention
- Figure 5 is a schematic representation of a network depicting a performance of a step of the method of Figure 2, in accordance with an embodiment of the invention
- Figure 6 is a schematic representation of a network depicting a performance of a step of the method of Figure 2, in accordance with an embodiment of the invention
- Figure 7 is a schematic representation of a network depicting a performance of a step of the method of Figure 2, in accordance with an embodiment of the invention.
- Figure 8 is a schematic representation of a network depicting a performance of a step of the method of Figure 2, in accordance with an embodiment of the invention.
- Figure 9 is a schematic representation of a network depicting a performance of a step of the method of Figure 2, in accordance with an embodiment of the invention.
- Figure 10 is a schematic representation of a network in accordance with another embodiment of the invention.
- Figure 11 is a schematic representation of a network in accordance with another embodiment of the invention.
- Figure 12 is another schematic representation of the network of Figure 11;
- Figure 13 is a schematic representation of a network in accordance with another embodiment of the invention.
- Figure 14 is a schematic representation of a network in accordance with another embodiment of the invention.
- Figure 15 is another schematic representation of the network of Figure 14;
- Figure 16 is another schematic representation of the network of Figure 14;
- Figure 17 is a schematic representation of a network in accordance with another embodiment of the invention.
- Figure 18 shows a flow-chart depicting a method of obtaining network knowledge in accordance with another embodiment of the invention.
- Figure 19 is a schematic representation of a network in accordance with another embodiment of the invention.
- Figure 20 shows a flow-chart depicting a method of exchanging information to establish a connection between nodes in accordance with another embodiment of the invention
- Figure 21 shows a flow-chart depicting an initialization process for a method of establishing a connection between nodes in accordance with another embodiment of the invention
- Figure 22 is a schematic representation of a network showing the additive property of cumulative link cost for a method of spreading node knowledge in accordance with another embodiment of the invention.
- Figure 23 shows a flow-chart depicting the flow of node knowledge through a network for a method of spreading node knowledge in accordance with an embodiment of the invention
- Figure 24 shows a flow-chart depicting the flow of node knowledge through a network for a method of spreading node knowledge in accordance with an embodiment of the invention
- Figure 25 shows a flow-chart depicting the flow of node knowledge through a network for a method of spreading node knowledge in accordance with an embodiment of the invention
- Figure 26 shows a flow-chart depicting the flow of node knowledge through a network for a method of spreading node knowledge in accordance with an embodiment of the invention
- Figure 27 is a schematic representation of a network showing a method for detecting an isolated core in accordance with an embodiment of the invention
- Figure 28 shows a flow-chart depicting a method for routing through a network using TCP/IP as an example of a protocol that can be emulated, in accordance with an embodiment of the invention
- Figure 29 is a schematic representation of a network showing node A directly connected to nodes B and C; node C only connected to node A; and node B directly connected to four nodes;
- Figure 30 shows a flow-chart depicting how service time on a queue can be calculated in accordance with an embodiment of the invention
- Figure 31 is a schematic representation of a network showing an arrangement of nodes and queues in accordance with an embodiment of the invention.
- Figure 32 shows a number of flow-charts depicting a series of steps showing knowledge of a queue propagating through a network in accordance with an embodiment of the invention
- Figure 33 is a schematic representation of a network showing every node in the network having just become aware of the EUS created queue, in accordance with an embodiment of the invention.
- Figure 34 is a schematic representation of the network of Figure 33 with one of the connections between the node with the EUS created queue removed;
- Figure 35 is a schematic representation of the network of Figure 33 with the directly connected node that lost its connection to the node with the EUS created queue set to a latency of infinity;
- Figure 36 is a schematic representation of the network of Figure 33 with all the nodes' 'chosen destinations' at infinity;
- Figure 37 is a schematic representation of the network of Figure 33 with all nodes that can be set to infinity being set to infinity;
- Figure 38 is a schematic representation of the network of Figure 33 with every node that has been set to infinity paused for a fixed amount of time, and then picking the lowest latency destination it sees that is not infinity;
- Figure 39 is a schematic representation of the network of Figure 33 showing that as soon as a node that was at infinity becomes non-infinity it tells the nodes directly connected to it immediately;
- Figure 40 shows a flow-chart depicting the incoming latency update outlined in the schematic representations of Figures 33-39;
- Figure 41 shows a flow-chart depicting latency at infinity
- Figure 42 is a schematic representation of a network showing the data stream on nodes between the ultimate sender and ultimate receiver;
- Figure 43 is a schematic representation of a network showing an example of a potential loop to be avoided
- Figure 44 shows a chart comparing the median latency over a time period to the maximum latency over another time period
- Figure 45 is a graph depicting bytes of data in queue over time, and showing minimum queue levels during time intervals;
- Figure 46 is a schematic representation of a network showing that when a node at capacity sees a GUID it sent to a possible additional chosen destination it knows that choice would be a bad choice;
- Figure 47 shows a flow-chart depicting a method of deciding when to add/remove a chosen destination while not 'At Capacity';
- Figure 48 is a schematic representation of a network showing a loop that was accidentally created in nodes not in the data stream;
- Figure 49 is a schematic representation of a network showing node A and node B negotiating so that node A can send to node B;
- Figure 50 is a schematic representation of a network showing how node A indicates it wants to send more data;
- Figure 51 is a schematic representation of a network showing how two nodes can negotiate transfers of messages when a quota is limited;
- Figure 52 is a schematic representation of a network showing how two nodes can negotiate transfers of messages when a quota is limited;
- Figure 53 is a schematic representation of a network showing how two nodes can negotiate transfers of messages when a quota is limited.
- Figure 54 is a schematic representation of a network showing each node's next best step to the core, and that same network rearranged to better illustrate the hierarchy this process creates.
- Network 30 comprises a plurality of nodes N1, N2 and N3. Collectively, nodes N1, N2 and N3 are referred to as nodes N, and generically they are referred to as node N. This nomenclature is used for other elements discussed herein.
- Node N1 is connected to node N2 via a first physical link L1.
- Node N2 is connected to node N3 via a second link L2.
- Node N1 is a neighbour to node N2 and likewise node N2 is a neighbour to node N1, since they are connected by link L1.
- node N3 is a neighbour to node N2 and likewise node N2 is a neighbour to node N3, since they are connected by link L2.
- the term "neighbour" (and variants thereof, as the context requires) is used herein to refer to nodes N that are connected to another node N by a single link L.
- Each node N is any type of computing device that is operable to communicate with another node N via a respective link L. Any type of computing device is contemplated, such as a personal computer (“PC”), a laptop computer, a personal digital assistant (“PDA”), a voice over internet protocol (“VOIP”) landline telephone, a cellular telephone, a smart sensor, etc., or combinations thereof. Each node N can be different types of computing devices.
- PC personal computer
- PDA personal digital assistant
- VOIP voice over internet protocol
- Each link L is based on any type of communications link, or combinations or hybrids thereof, be they wired or wireless, including but not limited to OC3, T1, Code Division Multiple Access (“CDMA”), Orthogonal Frequency Division Multiplexing (“OFDM”), Global System for Mobile Communications (“GSM”), General Packet Radio Service (“GPRS”), Ethernet, 802.11 and its variants, Bluetooth etc.
- CDMA Code Division Multiple Access
- OFDM Orthogonal Frequency Division Multiplexing
- GSM Global System for Mobile Communications
- GPRS General Packet Radio Service
- each node N is operable to connect and communicate with any neighbouring nodes N via the respective link L therebetween.
- Each node N maintains a network information database D that is configured to maintain knowledge about at least some of the other nodes N within network 30.
- Each database D is maintained in volatile storage (e.g. random access memory (“RAM”)) and/or non-volatile storage (e.g. hard disc drive) or combinations of volatile or nonvolatile storage, in a computing environment associated with its respective node N.
- RAM random access memory
- non-volatile storage e.g. hard disc drive
- Database D is used by each node N to locate other nodes N in network 30, so that the particular node N can send traffic to that other node N and/or to share knowledge about those other nodes N.
- Each database D is shown on Figure 1 as an oval indicated with the reference D and located within its respective node N, to represent that node N maintaining its own respective database D. More particularly, database D1 is shown within node N1, database D2 is shown within node N2, and database D3 is shown within node N3.
- the size, complexity and other overhead metrics that define the structure of each database D are chosen so that a particular database D only occupies a portion of the overall computing resources that are available in its respective node N.
- the structure of database D is thus typically, though not necessarily, chosen to leave a significant portion of the computing resources of node N free to perform the regular computing tasks of that node N. Further details about such overhead metrics will be discussed in greater detail below.
- service characteristics includes any known quality of service (“QOS") metrics including bandwidth, latency, bit error rate, etc that can be used to assess the quality of a link L.
- Service characteristics can also include pricing, in that the financial cost incurred to carry traffic over one link may be different than the financial cost to carry traffic over another link. It will thus be assumed that each database D has substantially the same structure, an example of such a structure being shown in Table I.
- each database D provides a list of at least some of the nodes N in network 30, other than the node N that is maintaining the particular database D ("other nodes N").
- Each database D also ranks those other nodes N according to their importance within network 30. Metrics that reflect importance include, but are not limited to, the proximity of such other nodes N, and/or which of the other nodes N carries a proportionately greater share of traffic in network 30, and/or the proximity of a node N to a data flow going to another node N. Other metrics will now occur to those of skill in the art, some of which will be discussed in greater detail below.
- Each database D also identifies those other nodes N, and the neighbouring node N that represents the next best step to reach a respective other node N.
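A minimal sketch of database D follows, under assumed field names (the patent's Table I is not reproduced here): each entry records a node's rank and the neighbouring node that is the next best step toward it, and the owning node appears as a "null" entry.

```python
# Illustrative structure for the network information database D.
class Database:
    def __init__(self, owner):
        # Rank 0 / best_neighbour None plays the role of the "null"
        # entry identifying the node that owns this database.
        self.entries = {owner: {"rank": 0, "best_neighbour": None}}

    def learn(self, node_name, rank, best_neighbour):
        """Record knowledge of another node and the next best step to it."""
        self.entries[node_name] = {"rank": rank,
                                   "best_neighbour": best_neighbour}

    def next_step(self, node_name):
        """Return the neighbour to forward toward node_name, or None."""
        entry = self.entries.get(node_name)
        return entry["best_neighbour"] if entry else None

# Database D1 after network knowledge has spread: N3 is reached via N2.
d1 = Database("N1")
d1.learn("N2", rank=1, best_neighbour="N2")
d1.learn("N3", rank=2, best_neighbour="N2")
print(d1.next_step("N3"))  # "N2"
```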
- Node name identifies the specific other node N.
- Such a node name can be based on any known or future network addressing scheme. Examples of known node addressing schemes include telephone numbers, or Medium Access Control (“MAC”) addresses, or Internet Protocol ("IP”) addresses. Such an addressing scheme can be chosen according to other factors affecting the design of network 30 and/or the nodes N therein. Of note, however, in the addressing scheme the name of each node N need not reflect the location of that node N in the network, as is found in other addressing schemes, e.g. telephone numbers that have area codes corresponding to a geographic location.
- the node name is identified according to the reference character in the Figures. For example, where a Node Name entry under Column 2 indicates "N1", then node N1 is being identified.
- When network 30 is initialized (e.g. when all of the nodes N each connect to each other according to the topology shown in Figure 1), the contents of each database D will be empty, except that each database D will contain a "null" entry identifying the particular node N that owns the particular database D. Table II thus shows how database D1 is initially populated with a "null" entry, identifying node N1.
- Table III thus shows how database D2 is initially populated with a "null" entry, identifying node N2.
- Table IV thus shows how database D3 is initially populated with a "null" entry, identifying node N3.
- a microprocessor on each node N will perform a set of programming instructions. Those instructions can be substantially the same for each node N.
- Referring to Figure 2, a flowchart representing a method for maintaining network knowledge in accordance with an embodiment of the invention is indicated generally at 200.
- Method 200 can be implemented into a set of programming instructions for execution on a microprocessor on each node N to populate and maintain the contents of each database D.
- method 200 is operated on each node N in system 30 in order to maintain the database D respective to that node N. The following discussion of method 200 will thus lead to further understanding of system 30 and its various components.
- It is to be understood that system 30 and/or method 200 can be varied, need not be performed in the exact sequence shown in Figure 2, and need not work exactly as discussed herein; such variations are within the scope of the present invention.
- At step 210, the presence of neighbours is determined.
- each node N determines whether it has any new neighbouring nodes N, or whether any existing neighbouring nodes N have ceased connecting to that node N.
- When step 210 is first performed by node N1, node N1 will thus send out an initialization message over link L1 to node N2 in order to query the existence of node N2 at the end of link L1.
- Such an initialization message can be performed according to any known means corresponding to the type of protocol used to implement link L1.
- step 210 will also be performed by node N2, and node N2 will thus send out a network initialization signal over link L1 to node N1 in order to query the existence of node N1.
- node N2 will thus send out a network initialization signal over link L2 to node N3 in order to query the existence of node N3.
- step 210 will also be performed by node N3, and node N3 will thus send out a network initialization signal over link L2 to node N2 in order to query the existence of node N2.
- initialization message IM1-2 is being sent from node N1 to node N2; initialization message IM3-2 is being sent from node N3 to node N2; initialization message IM2-3 is being sent from node N2 to node N3; initialization message IM2-1 is being sent from node N2 to node N1.
- initialization messages IM do not exchange node knowledge, in order to simplify initialization messages IM, and allow node knowledge of a node N to spread in substantially the same manner for all nodes N.
- This initialization message IM can contain processing and memory characteristics of node N as they relate to the node's ability to maintain network knowledge. Such processing and memory characteristics can include the memory of the node N that is dedicated to maintaining network knowledge, and the like. In the present embodiment, however, node names themselves are not exchanged as part of the initialization messages IM.
- each node N will now be aware of its neighbouring nodes N, and thus be in a position to begin populating and maintaining its respective database D by making use of neighbouring databases D.
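The initialization message IM described above can be sketched as a small structure. The field names are assumptions for illustration; per the text, node knowledge is deliberately not carried at this stage:

```python
# Hypothetical shape of an initialization message IM exchanged at step
# 210. It carries processing/memory characteristics only; node names
# are not exchanged, so knowledge of every node N spreads later (at
# step 220) in substantially the same manner.
from dataclasses import dataclass

@dataclass
class InitializationMessage:
    sender_link: str             # e.g. "L1", the link the IM travels over
    knowledge_memory_bytes: int  # memory dedicated to network knowledge

# IM1-2: node N1 queries the existence of node N2 over link L1.
im_1_2 = InitializationMessage(sender_link="L1",
                               knowledge_memory_bytes=4 * 1024 * 1024)
print(im_1_2.sender_link)  # "L1"
```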
- method 200 will advance from step 210 to step 220 at which point network knowledge will be exchanged between neighbour nodes N, such neighbours having been identified at step 210.
- Each node N can now make use of a neighbouring database D to gain more knowledge about network 30.
- This exchange is represented by showing a set of bi-directional knowledge exchange messages KEM.
- the knowledge exchange between node Nl and node N2 is indicated as knowledge exchange message KEM 1-2, while the knowledge exchange between node N2 and node N3 is indicated as knowledge exchange message KEM2-3.
- method 200 advances from step 220 to step 230, at which point local knowledge is updated as a result of the information exchange from step 220.
- databases D1, D2 and D3 can be updated to reflect information about neighbouring nodes N, as shown in Tables V, VI and VII respectively.
- Table V thus shows how database D1 is now populated after the initial performance of step 230 by node N1.
- Row 1 is now populated, showing that node Nl now has knowledge of a node named node N2, and that node N2 is the best neighbour through which node N2 can be reached.
- Table VI thus shows how database D2 is now populated after the initial performance of step 230 by node N2.
- Table VI (Updated from Table III) Database D2
- Row 1 is now populated, showing that node N2 now has knowledge of a node named node Nl, and that node Nl is the best neighbour through which node Nl can be reached.
- Row 2 is now populated, showing that node N2 now has knowledge of a node named node N3, and that node N3 is the best neighbour through which node N3 can be reached.
- node Nl has been given a rank of "1”
- node N3 has been given a rank of "3”.
- rankings were made purely as a matter of convenience, given that no metrics exist by which to actually choose which to rank higher. However, rankings made on more complex bases will be discussed in greater detail below.
- Table VII thus shows how database D3 is populated after the initial performance of step 230 by node N3.
- Row 1 is now populated, showing that node N3 now has knowledge of a node named node N2, and that node N2 is the best neighbour through which node N2 can be reached.
- Knowledge path K1-2 corresponds with Row 1 of Table V, indicating that node N1 has knowledge of N2
- knowledge path K2-1 corresponds with Row 1 of Table VI, indicating that node N2 has knowledge of N1
- knowledge path K2-3 corresponds with Row 2 of Table VI, indicating that node N2 has knowledge of node N3
- knowledge path K3-2 corresponds with Row 1 of Table VII, indicating node N3 has knowledge of node N2.
- Payload traffic generated at an origin node N that is intended for a destination node N can now actually be delivered to nodes N in accordance with knowledge paths K, where a knowledge path exists between an origin node N and a destination node N. Such delivery of payload traffic can be effected via the best neighbour routings shown in Column 3, to the extent that Column 2 is populated in the database D of the origin node N with network knowledge about the destination node N.
- payload traffic or “payload” refers to any data generated by an application executing on the origin node N that is intended for a destination node N.
- payload traffic can include emails, web pages, application files, printer files, audio files, video files or the like.
- payload traffic can include voice transmissions.
- Other types of payload data will now occur to those of skill in the art.
- nodes N1 and N2 can now exchange payload traffic, since they have knowledge of each other.
- Nodes N2 and N3 can also exchange payload traffic, since they have knowledge of each other.
- nodes N1 and N3 cannot exchange traffic since they do not have knowledge of each other.
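Delivery via best-neighbour routings can be sketched as a hop-by-hop lookup. This is an illustrative reading, assuming each node's database is reduced to a best-step table; the names are not from the patent:

```python
# A payload moves one hop at a time, each node consulting its own
# best-step table for the neighbour toward the destination. A node
# with no knowledge of the destination cannot deliver (returns None),
# matching the state where N1 and N3 do not yet know each other.
def route(databases, origin, destination):
    """databases: {node: {known_node: best_neighbour}}.
    Returns the list of hops, or None if knowledge is missing."""
    path, current = [origin], origin
    while current != destination:
        step = databases[current].get(destination)
        if step is None:
            return None  # no knowledge path exists yet
        path.append(step)
        current = step
    return path

# State after the second pass of method 200: N1 reaches N3 via N2.
dbs = {"N1": {"N2": "N2", "N3": "N2"},
       "N2": {"N1": "N1", "N3": "N3"},
       "N3": {"N1": "N2", "N2": "N2"}}
print(route(dbs, "N1", "N3"))  # ['N1', 'N2', 'N3']
```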
- method 200 then cycles back from step 230 to step 210, where method 200 begins anew for the second time.
- At step 210, the presence of neighbours is determined.
- method 200 will advance again from step 210 to step 220 at which point additional network knowledge will be exchanged between neighbour nodes N.
- each node N can now make use of a neighbouring database D to gain more knowledge about network 30.
- method 200 then advances, for the second time, from step 220 to step 230, at which point local knowledge is updated as a result of the information exchange from step 220.
- databases D1, D2 and D3 can be updated to reflect information about neighbouring nodes N, as shown in Tables VIII, IX and X respectively.
- Table VIII thus shows how database D1 is now populated after the second performance of step 230 by node N1.
- Rows 0 and 1 remain the same as from Table V. However, Row 2 is now populated, showing that node N1 now has knowledge of a node named node N3, and that node N2 is the best neighbour through which node N3 can be reached.
- Table IX thus shows how database D2 is now populated after the second performance of step 230 by node N2.
- Table X thus shows how database D3 is populated after the second performance of step 230 by node N3.
- Payload traffic generated at an origin node N that is intended for a destination node N can now actually be delivered to nodes N in accordance with knowledge paths K, where a knowledge path exists between an origin node N and a destination node N. Such delivery of payload traffic can be effected via the best neighbour routings shown in Column 3, to the extent that Column 2 is populated in the database D of the origin node N with network knowledge about the destination node N. Thus, more specifically, all nodes N can now exchange payload traffic, since they have knowledge of each other. Of particular note, after this pass through method 200, node N1 and node N3 can send payload traffic to each other, via node N2 as the step between them.
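The repeated passes of steps 220 and 230 can be sketched as neighbour-to-neighbour merges: after one pass each node knows only its neighbours; after a second pass N1 learns of N3 via N2. This is an illustrative simulation under assumed data structures, not the patent's code:

```python
# One pass of method 200 over every node: learn neighbours (steps
# 210/220), then merge each neighbour's knowledge, recording the
# neighbour the knowledge came from as the best step (step 230).
def exchange_pass(topology, tables):
    """topology: {node: [neighbours]}; tables: {node: {known: step}}.
    Returns the updated best-step tables after one full pass."""
    updated = {n: dict(t) for n, t in tables.items()}
    for node, neighbours in topology.items():
        for nb in neighbours:
            updated[node].setdefault(nb, nb)  # a neighbour is its own step
            for known in tables[nb]:          # merge neighbour's knowledge
                if known != node:
                    updated[node].setdefault(known, nb)
    return updated

topo = {"N1": ["N2"], "N2": ["N1", "N3"], "N3": ["N2"]}
tables = {n: {} for n in topo}
tables = exchange_pass(topo, tables)  # first pass: neighbours only
print(tables["N1"])                   # {'N2': 'N2'}
tables = exchange_pass(topo, tables)  # second pass: N3 learned via N2
print(tables["N1"])                   # {'N2': 'N2', 'N3': 'N2'}
```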
- At step 210, the presence of neighbours is determined.
- At step 220, knowledge is exchanged with neighbours according to the neighbours found present at step 210.
- At step 230, local knowledge is updated based on the exchange.
- After step 230, and as shown in Figure 8, the result is that database D1 remains the same, maintaining the contents as shown in Table VIII, because insufficient cycles of method 200 have occurred for the loss of node N3 to propagate to database D1.
- database D2 is now updated in accordance with Table XI.
- Database D3 is also updated to reflect the initial data found in Table IV.
- This is represented in Figure 8, and no existing nodes N are removed. Accordingly, nothing occurs at step 210 since no changes have occurred, and method 200 advances from step 210 to step 220.
- Network 30a includes substantially the same elements as network 30, and like elements include like references but followed by the suffix "a". More specifically, network 30a includes more nodes Na and links La, but the basic structure of those nodes Na and links La are substantially the same as their counterparts in system 30. To simplify explanation, however, network 30a is shown without specific tables showing the contents of databases Da.
- Network 30a includes nodes Na1, Na2 and Na3 that are connected via links La.
- network 30a has undergone two complete passes through method 200 and thus databases Da are in the same state as shown for network 30 in Figure 7.
- network 30a includes a fourth node Na4, that is initially not connected to any other node Na.
- node N4a reaches node N1a via node N2a. This is reflected in Table XII at Row 4, Column 12, wherein node N2a is reflected as the next best neighbour to reach node N1a from node N4a.
- service characteristics for each link can vary, and databases for each node incorporate knowledge of such service characteristics when selecting a best neighbour as a next best step through which to route payload traffic.
- Network 30b is substantially the same as network 30a, and like elements include like references but followed by the suffix "b". More specifically, network 30b includes links Lb, which follow the same paths as links La in network 30a. Also, network 30b includes four nodes Nb, which are substantially the same as nodes Na in network 30a. However, in network 30b each link Lb has different service characteristics, whereas in network 30a each link La has the same service characteristics. Table XIII shows an exemplary set of service characteristics for each link Lb. Table XIII: Service Characteristics for Links Lb
- Column 1 of Table XIII identifies the particular link Lb in question.
- Column 2 identifies the bandwidth of the link Lb identified in the same row.
- Column 3 indicates the financial cost for carrying traffic over a particular link Lb in terms of cents per kilobyte.
- Table XIII can include any other service characteristics that are desired, such as bit error rate, latency etc.
- the information for each link Lb can thus be made part of each database Db, and propagated through network 30b using method 200 or a suitable variant thereof, in much the same manner as node knowledge can be propagated throughout network 30b.
- Databases Db respective to each node Nb know the details of each link Lb to which they are directly connected. For example, node N4b will know the details of links L3b and L4b as shown in Table XIII. By the same token, node N3b will know the details of links L4b and L2b. In a present embodiment, each node Nb only knows about itself and the links Lb that it has to directly connected nodes Nb, but each node Nb need not know anything about the overall network topology.
- Databases Db respective to each node Nb on either end of a particular pathway will know the cumulative service characteristics associated with the links Lb that define that pathway, once that database Db has knowledge of that node.
- node N4b knows about node N1b
- node N4b will also know the cumulative service characteristics (and therefore the cumulative 'cost') of all links Lb between node N4b and node N1b.
- that node Nb can use such information in order to determine the "Best Neighbour" as the next best step through which to route traffic.
- node N4b can determine that node N3b is the next best step to reach both node N2b and node N1b, if speed of delivery of payload traffic is a priority.
- Table XIV thus shows how a portion of databases Db would appear if node N4b made such a determination (assuming that the information in Table XIII is not shown in Table XIV).
- Figure 12 shows the path Pb of payload traffic for traffic originating from node N4b destined for node N1b based on the contents of database D4b as shown in Table XIV.
- path Pb does not travel via link L3b, but instead travels via links L4b and L2b.
- Table XIV can thus be populated by optimizing for particular service characteristics of links Lb, such as bandwidth, cost, bit error rate, etc.
- Table XIV would change if the best neighbour was chosen based on the next best step having the least financial cost, and ignoring bandwidth altogether.
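- The choice just described can be sketched as follows. This is a hypothetical illustration only; the bandwidth and cost figures below are illustrative stand-ins and are not taken from Table XIII. Optimizing for bandwidth selects node N3b as the Best Neighbour (matching path Pb via links L4b and L2b), while optimizing for least financial cost alone selects a different next best step:

```python
# Two candidate routes from node N4b to node N1b, each summarized by its
# bottleneck bandwidth (Mbit/s) and cumulative financial cost (cents/KB).
# The figures are assumptions for illustration only.
routes = {
    "via L3b (direct)": {"bandwidth": 1, "cost": 0.1, "first_hop": "N1b"},
    "via L4b then L2b": {"bandwidth": 10, "cost": 0.5, "first_hop": "N3b"},
}

def best_neighbour(routes, optimize):
    if optimize == "bandwidth":        # speed of delivery is the priority
        choice = max(routes, key=lambda r: routes[r]["bandwidth"])
    else:                              # least financial cost, ignoring bandwidth
        choice = min(routes, key=lambda r: routes[r]["cost"])
    return routes[choice]["first_hop"]

assert best_neighbour(routes, "bandwidth") == "N3b"  # the higher-bandwidth path
assert best_neighbour(routes, "cost") == "N1b"       # the cheaper choice differs
```

- The same database structure thus yields different "Best Neighbour" entries depending on which service characteristic is being optimized.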
- Network 30c includes the same types of elements as networks 30, 30a and 30b and like elements include like references but followed by the suffix "c".
- links Lc have substantially the same length and substantially the same service characteristics, though in other embodiments links Lc can have varying lengths and service characteristics, similar to links Lb.
- Network 30c includes more nodes Nc and links Lc; to simplify explanation, network 30c is shown without specific tables showing the contents of databases Dc.
- each node does not maintain knowledge about the entire network 30c, but only a portion of the network 30c. (Such a configuration is in fact presently preferred when the teachings herein are applied to networks of a size where knowledge of the entire network results in an impractically large consumption of the overall computing resources of a given node.)
- each database Dc can store ten rows of information. The first row is the null row as previously described in relation to Table II, which identifies the node Nc to which a particular database Dc belongs. The remaining nine rows allow the database Dc to maintain knowledge of nine other nodes Nc within network 30c.
- nodes Nc in network 30c are constructed to a limit of nine other nodes for explanation purposes.
- Databases Dc for each node Nc maintain a concept of a "core". Where a specific node Nc is not included in a particular database Dc, the core represents a default path along which that given node Nc may be located.
- network 30c includes a core Cc which lies along link L6c, the details of which will be discussed in greater detail below.
- the size of the network according to the architecture of network 30c will thus complement the collective storage capacity of the two nodes Nc that define the core Cc.
- node N6c and node N9c have sufficient capacity such that the nine rows in each of databases D6c and D9c are sufficient to maintain knowledge of every node within network 30c.
- While each node Nc performs method 200, it will "hear" of more other nodes Nc than that node Nc will store. Accordingly, each node Nc is also operable to perform a prioritization operation to choose which nine other nodes Nc within network 30c to maintain knowledge of within its database Dc. Such a prioritization operation can be based on any prioritization criterion or criteria, or combinations thereof, such as which other nodes Nc are closest, which other nodes Nc carry the most traffic, to which other nodes Nc that particular node typically sends payload traffic, etc., and such other criteria as will now occur to those of skill in the art.
- Such prioritization criteria thus also provide the "rank" of each node Nc in order of importance, thereby defining the order in which the database Dc is populated, and the order in which node knowledge should be sent to other nodes Nc in the network. [0096] In the present example, it will be assumed that the prioritization criteria for each node Nc are to populate its database Dc in order to maintain knowledge of:
- proximal nodes Nc: the other nodes Nc that are closest to that node Nc.
- Proximal nodes Nc are ranked in order of proximity;
- Originating and destination nodes are ranked according to the amount of payload traffic being carried on their behalf, and supersede proximal nodes.
- Payload traffic nodes Nc are ranked according to the importance of a particular payload traffic in relation to another, and supersede all proximal nodes and supersede up to half of the originating or destination nodes Nc. Importance of payload traffic can be based on volume of traffic, or speed of traffic, or the like;
- nodes are ordered by their distance from a marked data stream value, except in such cases where:
- 1. This node is in the path of a High Speed Propagation Path ("HSPP", which is discussed in greater detail below) for this destination node, and this directly connected node is: in the path to the core and the HSPP is a notify HSPP; or one of the nodes that told us of this HSPP and the HSPP is a request HSPP. 2. This node is marked in the data stream for this destination node. If a node is marked in the data stream, it will tell its directly connected nodes that have not marked it in the data stream a Distance from Data Stream (also referred to herein as a Distance From Stream or "DFS") of 0. To those that have marked it in the data stream, it will tell a DFS equal to the link cost ("LC") associated with the service characteristics of the links to the destination node. This will be explained in greater detail below.
- LC Link Cost
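- The DFS advertisement rule above can be sketched as follows. This is a simplified, hypothetical illustration of only the stated rule: a node marked in the data stream for a destination tells unmarked neighbours a DFS of 0, and tells marked neighbours a DFS equal to the link cost (LC) to the destination.

```python
def advertised_dfs(neighbour_has_marked_me, lc_to_destination):
    """DFS told to one directly connected node by a node that is itself
    marked in the data stream for the destination."""
    if neighbour_has_marked_me:
        return lc_to_destination   # marked neighbours hear the LC
    return 0                       # unmarked neighbours hear a DFS of 0

assert advertised_dfs(False, 5) == 0
assert advertised_dfs(True, 5) == 5
```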
- Figure 15 shows the other nodes Nc with which database D2c will be populated, represented as a closed dashed Figure and referred to herein as knowledge path block K2c-xc.
- Knowledge paths block K2c-xc surrounds all of the other nodes Nc of which node N2c is aware, i.e. nodes N3c-N10c, and node N12c.
- Figure 16 shows the other nodes Nc with which database D6c will be populated, represented as a closed dashed Figure and referred to herein as knowledge path block K6c-xc.
- Knowledge path block K6c-xc surrounds all of the other nodes Nc of which node N6c is aware, i.e. nodes N2c-N5c, nodes N7c-N11c, and node N12c. While not shown in the Figures, those of skill in the art will now appreciate the contents of the other databases Dc at this point in the present example.
- link L6c represents the "core" of network 30c at this point in the example.
- the core is shown in Figure 14 as an ellipse encircling link L6c and indicated at Cc.
- The fact that link L6c is specifically the core Cc of network 30c need not be expressly maintained in each database Dc. Rather, each database Dc will determine a "Best Neighbour" indicating a neighbour that is the next best step in order to reach core Cc.
- the "Best Neighbour" to reach core Cc can be determined by examining database Dc to find which neighbouring node Nc is most frequently referred to as the "Best Neighbour" to reach the other nodes that are expressly maintained in database Dc. In the event that no neighbouring node Nc appears more frequently as a Best Neighbour, then the Best Neighbour appearing in Row 1, associated with the top-most ranked other node, can be selected as the Best Neighbour to reach the network 30c.
- a core is formed any time that two neighbouring nodes Nc point to each other as being the Best Neighbour to reach the core.
- node N1c will populate database D1c with knowledge about nodes N2c-N10c, while node N14c will populate database D14c with knowledge about nodes N5c-N13c. Thus, nodes N1c and N14c will not have knowledge of each other. Now assume that node N1c wishes to send payload traffic to node N14c. [0105] Since node N1c has no knowledge of node N14c, at this point node N1c can perform method 800 shown in Figure 18 in order to gain such knowledge. Beginning at step 810, originating node N1c will receive a request to send payload traffic to destination node N14c. Such a request can come from another application executing on a computing environment associated with originating node N1c.
- a request sent to a neighbour node can be in the form of: 'if you see route information for my destination node, can you make sure to tell me about it so I can make a good choice on where to send my payload data'. If a node has some payload to send, but no place to send it, it will hang onto that payload until a timeout on the payload expires (if there is one), or it needs that room for other packets, or it is told a route update that will allow it to route to a directly connected node.
- step 820 a determination is made as to whether the destination node to which the payload traffic is destined is located in the local database Dc.
- database D1c does not include information about destination node N14c, and so the result of this determination would be "no", and method 800 would advance from step 820 to 830.
- payload traffic could be sent via the Best Neighbour identified in the database, in much the same manner as was described in relation to network 30a in Figure 12, or network 30b in Figure 13.
- a query will be sent towards the core asking for knowledge of destination node N14c.
- Such a query will be passed towards the core Cc, by each neighbouring node Nc, along the path of "Best Neighbours" that lead to core Cc, until the query reaches a node Nc that has knowledge of node N14c.
- each node Nc will receive the query, examine its own database Dc, and, if it has knowledge of destination node N14c, it will send such knowledge back through the path to originating node Nlc.
- node Nc receiving the query does not have knowledge of destination node N14c, then it will pass the query on to the neighbouring node Nc that is its Best Neighbour leading to core Cc, until the query reaches a node Nc that has knowledge of node N14c.
- the query from node N1c will follow the path from node N2c; to node N3c; to node N6c; and finally to node N9c, since node N9c will have knowledge of node N14c due to the prioritization criteria defined above.
- knowledge of node N14c will be passed back through node N6c; to node N3c; to node N2c and finally to node N1c, with nodes N6c, N3c, N2c each keeping a record of the knowledge of node N14c in their respective databases Dc so that they can pass payload traffic on behalf of node N1c.
- a response will eventually be received by the originating node N1c to the query generated at step 830.
- node N1c will thus receive knowledge back from node N9c about node N14c, and, at step 850, node N1c will update its database D1c with knowledge of node N14c.
- Method 800 can then advance from step 850 to step 830 and payload traffic can be sent to node N14c from node N1c, in much the same manner as was described in relation to network 30a in Figure 12, or network 30b in Figure 13.
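- The query flow of method 800 can be sketched as follows. This is a simplified, hypothetical illustration: each node is assumed to hold a table of known destinations and a pointer to its next best step towards core Cc; the query travels coreward until some node knows the destination, and the knowledge is cached by each node on the way back.

```python
def query_towards_core(databases, to_core, origin, destination):
    """databases: {node: {known destination: best neighbour}};
    to_core: {node: next best step towards the core}.
    Returns the knowledge found; intermediate nodes cache a route entry."""
    path = [origin]
    node = origin
    while destination not in databases[node]:
        node = to_core[node]            # pass the query towards the core
        path.append(node)
    knowledge = databases[node][destination]
    # Pass knowledge back along the path; each node records a route so it
    # can later forward payload traffic on behalf of the originating node.
    for prev, hop in zip(path[:-1], path[1:]):
        databases[prev][destination] = hop   # hop is prev's best neighbour
    return knowledge

# Mirrors the example: N1c -> N2c -> N3c -> N6c -> N9c, where only N9c
# initially knows node N14c (the route entry "N11c" is an assumption).
dbs = {"N1c": {}, "N2c": {}, "N3c": {}, "N6c": {}, "N9c": {"N14c": "N11c"}}
core = {"N1c": "N2c", "N2c": "N3c", "N3c": "N6c", "N6c": "N9c"}
query_towards_core(dbs, core, "N1c", "N14c")
# N1c, N2c, N3c and N6c now all hold a route entry for N14c.
```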
- link L14c is added to join node N12c and node N8c; link L15c is added between node N8c and node N5c; and link L16c is added between node N5c and node N2c.
- Each node Nc performs method 200 a number of times to absorb the knowledge of these new links Lc. As such knowledge propagates throughout network 30c, eventually, the path of payload traffic from node N1c to node N14c will travel via nodes N2c; N5c; N8c; N12c and N13c.
- method 800 is directed to "pulling" knowledge of a destination node
- node Nc that is not known by an originating node from the core Cc
- node Nc can also "push" knowledge of itself towards nodes at the core Cc, so that when method 800 is performed an originating node Nc can be sure that it will find information about the new/destination node Nc at core Cc.
- such a "push" of knowledge was not needed due to the performance criteria that automatically ensured that node N9c at core C would gain knowledge of node N14c.
- a "push" of knowledge of nodes Nc at the core Cc can be desired.
- each node N (and its variants) can also keep a separate record of all information that was sent to that node N (and its variants) by neighbouring nodes N (and its variants), even if that particular neighbouring node N (and its variants) was not chosen as the Best Neighbour for storage in that database D. This can allow that node N with its Best Neighbour removed to select its next Best Neighbour from the remaining neighbouring nodes N without having to rerun method 200, or otherwise wait for an update from all other remaining neighbour nodes.
- the present invention thus provides a novel system, method and apparatus for networking.
- the network architecture of the present invention can enable individual nodes in the network to coordinate their activities such that the sum of their activities allows communication between nodes in the network.
- the principal limitation of existing ad-hoc networks used in a wireless environment is the ability to scale past a few hundred nodes, yet the network architecture and associated methods at least mitigate and in certain circumstances overcome prior art scaling problems.
- Each node in a network is directly connected to one or more other nodes via a link.
- a node could be a computer, network adapter, switch, wireless access point or any device that contains memory and the ability to process data.
- the form of a node is not particularly limited.
- a link is a connection between two nodes that is capable of carrying information.
- a link between two nodes could be several different links that are 'bonded' together.
- a link could be physical (wires, etc), actual physical items (such as boxes, widgets, liquids, etc), computer buses, radio, microwave, light, quantum interactions, sound, etc.
- a link could be a series of separate links of varying types. However, the form of a link is not particularly limited.
- Link cost is a value that is used to describe the quality of the link.
- the link cost for the link can be based on (but not limited to):
- the link cost of a pipe is approximately inversely related to its quality. For example, a 1Mbit pipe should have 10 times the link cost of a 10Mbit pipe. These link costs will be used to find the best path through the network using a Dijkstra-like algorithm.
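- The relationship above can be sketched as follows, assuming (for illustration only) a 10 Mbit reference bandwidth: link cost is set inversely proportional to bandwidth, and the resulting costs feed a Dijkstra-like lowest-cumulative-cost search.

```python
import heapq

REFERENCE_MBIT = 10.0   # assumed reference bandwidth for illustration

def link_cost(bandwidth_mbit):
    # Lower bandwidth -> proportionally higher link cost, so a 1 Mbit
    # pipe costs 10 times what a 10 Mbit pipe costs.
    return REFERENCE_MBIT / bandwidth_mbit

def best_path_cost(links, src, dst):
    """links: {node: [(neighbour, link cost), ...]}.
    Dijkstra-like search for the lowest cumulative link cost."""
    queue, seen = [(0.0, src)], set()
    while queue:
        d, n = heapq.heappop(queue)
        if n == dst:
            return d
        if n in seen:
            continue
        seen.add(n)
        for m, c in links.get(n, []):
            heapq.heappush(queue, (d + c, m))
    return float("inf")

# A 1 Mbit direct link from A to B, versus two 10 Mbit hops via C: the
# two-hop path has the lower cumulative cost (2.0 versus 10.0).
links = {"A": [("B", link_cost(1)), ("C", link_cost(10))],
         "C": [("B", link_cost(10))]}
```

- The two-hop, higher-bandwidth path wins despite having more hops, which is the behaviour the inverse cost mapping is intended to produce.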
- An alternative embodiment involves randomly varying the calculated link cost by a small random amount that does not exceed 1% (for example) of the total link cost.
- Each node in the network has a unique name.
- This unique name could be: 1. Generated by the node. 2. Assigned prior to node activation. 3. Requested from a central location by the node in a manner similar in result to a DHCP (Dynamic Host Configuration Protocol) server. If a node was to request a name from a central location using this described network, it would first pick a random unique name and use that name to request a name from the central location.
- DHCP Dynamic Host Configuration Protocol
- a node may keep its name permanently, or may generate a new name on startup or any time it wants to. Node A can send a message to node B if node A knows the name of node B.
- a node may have multiple names in order to emulate several different nodes. For example a node might have two names: 'Print_Server' and 'File_Server'.
- a node may generate a new name for each new connection that is established to it.
- Ports are discussed as a destination for messages, however the use of ports in these examples is not meant to limit the invention to only using ports.
- a person skilled in the art would be aware of other mechanisms that could be used as message destinations. For example, nodes could generate a unique name for each connection.
- nodes should have a unique name.
- An alternative embodiment would allow a node to share a name with another node or nodes in the network. This will be discussed in detail later.
- each node has only one unique name associated with it. This should not be seen as limiting the scope of this invention.
- a node may share the same name as one or more other nodes in the network.
- nodes will need to exchange some information to establish that connection. This information may include version numbers, etc.
- Alternative embodiments could include the exchanging of a 'tie-breaker' number that will allow a node to choose between two otherwise equal links. It is suggested that the same tie-breaker value is given to all directly connected nodes. If a node A tells node B that it has already seen an equivalent tie-breaker number from some other node, then node B will need to pick a new tie-breaker number and send it to all of its directly connected nodes. This process is illustrated in Figure 20.
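- The re-pick behaviour can be sketched as follows. This is a hypothetical illustration (the 32-bit value range is an assumption): a node draws tie-breaker values until it finds one that no directly connected node reports having already seen, and then gives that same value to all of its directly connected nodes.

```python
import random

def pick_tiebreaker(seen_by_neighbours, rng):
    """Draw until the value collides with nothing a directly connected
    node reports having seen; the winner is sent to all of them."""
    while True:
        value = rng.randrange(2**32)
        if value not in seen_by_neighbours:
            return value

taken = {7, 42}       # values neighbours report having already seen
value = pick_tiebreaker(taken, random.Random(0))
assert value not in taken
```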
- Alternative embodiments could include a maximum count of nodes that this node wants to know about. For example, if node A has limited memory it would tell node B to tell it about no more than X different nodes.
- Alternative embodiments could include exchanging of link costs for the link that was used to establish the connection. If the link cost changes during the operation of the network a node may send a message to its directly connected node on the other end of the link that the link cost has changed. If link costs are exchanged, nodes may agree on the same link cost or may still pick different link costs, indicating an asymmetrical connection.
- the connection is assumed to be able to deliver the messages in order and error free. If this is not possible, it is assumed that the connection will be treated as 'failed'.
- In order for node A to send a message to node B, node A needs to know the name of node B as well as the directly connected node or link that is the next best step to get to node B.
- node A or node B wants another node to send them messages then they have to tell at least one directly connected node about their name.
- Node information includes the name of the node and the cumulative link cost to reach that node.
- no node knows about the names of any other node except itself, thus the initial cumulative link cost for the nodes that it knows about (itself) would be 0.
- Each node stores the information that it has received from each link. A node does not need to know the name of the node on the other end of the link. All it needs to do is record the knowledge that the node on the other end of the link is sending it. A node will store all the node updates it has received from neighbour nodes.
- When a node N has received knowledge of node B from a link, it will compare the cumulative link costs for node B that it has received from other links. It will pick the link with the lowest cumulative link cost as its "Best Neighbour" for the messages flowing to node B. When a node N sends an update for node B to its directly connected nodes, it will tell them the name of the node and the lowest cumulative link cost that it has received from its directly connected nodes.
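- This rule can be sketched as follows. The names are hypothetical, and it is assumed here (for illustration) that the receiving node adds its own link's cost to the cumulative cost it was told, so the stored figure is the cost as seen from this node.

```python
def on_update(store, link, destination, advertised_cost, link_cost):
    """store: {destination: {link: cumulative cost as seen from this node}}.
    Records the update and returns (best link, cost to advertise onward)."""
    costs = store.setdefault(destination, {})
    costs[link] = advertised_cost + link_cost   # assumption: add local link cost
    best = min(costs, key=costs.get)            # lowest cumulative cost wins
    return best, costs[best]

store = {}
on_update(store, "link_to_N2", "B", 3, 1)               # node B at cost 4 via N2
best, cost = on_update(store, "link_to_N5", "B", 1, 1)  # cost 2 via N5 is lower
assert (best, cost) == ("link_to_N5", 2)
```

- Because every update received is retained per link, the node can fall back to the next-lowest link immediately if its Best Neighbour disappears, as discussed elsewhere in this document.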
- Figures 23-26 show the flow of node knowledge through a network. All the links are assumed to have a cost of one. This is considered to be an example only and in no way is meant to limit the generality of this invention. For example, links may have different link costs, and the number of nodes and their specific interconnections may be infinitely varied.
- An alternative embodiment could use the tie-breaker number (discussed earlier) to pick between two or more links with the lowest cumulative link cost.
- the fCumulativeLinkCost should be set to zero on the node with that particular name.
- Alternative embodiments could have the fCumulativeLinkCost set to nonzero on the node with that particular name. This could be used to disguise the true location of a destination node. Setting the fCumulativeLinkCost to non-zero on the node with that particular name (for example 50) will not affect the convergence of the network.
- A node will also need to tell its neighbours about any nodes where the lowest cumulative link cost for a particular destination node changed.
- If a link that is not chosen as a 'Best Neighbour' for a destination node A changes, and after the change that link is still not chosen as a 'Best Neighbour' for destination node A, then the cumulative link cost would remain the same for node A and no updates would need to be sent to directly connected nodes.
- the core of the network will most likely have nodes with more memory and bandwidth than an average node, and will most likely be centrally located topologically.
- nodes can only approximate where the core of the network is.
- An alternative embodiment could have the node use some combination of factors to determine what its 'next best step to the core' is. These factors could include some combination of (although not limited to):
- a node does not need to know where the center of the network is, only its 'next best step to the core'.
- a core can be defined as when two nodes select each other as their next best step towards the core. There is nothing special about the core; the two nodes that form the core act as any other nodes in the network would act.
- the directly connected node with the highest 'tie-breaker' value (which was passed during initialization) will be selected as the next best step towards the core. This mechanism will ensure that there are no loops in a non-trivial network (besides Node A -> Node B -> Node A type loops). If this tie-breaker embodiment is not used, then a random selection can be made.
- This idea of using a node's 'next best step to the core' forms a hierarchy.
- This hierarchy can be used to push specific node knowledge up the hierarchy to the top of the tree.
- the HSPP's discussed later
- Figure 54 is an example of a network where each node has selected a directly connected node as its next best step to the core. The network is then rearranged to better show the nature of the hierarchy that is created. As the network topology changes, so will the hierarchy that is formed. Detecting an isolated core
- a core is defined as two directly connected nodes that have selected each other as the next best step to the core.
- Figure 27 illustrates an example of this.
- both nodes that form the core in this example, Node A and Node B, will check to see how many directly connected nodes they have. If there is more than one directly connected node, then they will examine all the other directly connected nodes.
- log(fCumulativeLinkCost +1)*500 can be the credit assigned.
- This metric has the advantage of giving more weight to those destination nodes that are further away. In a dense mesh with similar connections and nodes, this type of metric can help better, more centralized cores form.
- Another possible embodiment which can be used to extend the idea of providing more weighting to destination nodes that are further away is to order all destination nodes by their link costs, and only use the x% (for example, 50%) that are the furthest away to determine the next best step to the core.
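- The two weighting metrics just described can be sketched as follows. The constant 500, the 50% fraction, and the destination costs used below are the illustrative figures given in the text or assumptions; this is a sketch of the metrics only, not of the full core-selection process.

```python
import math

def credit(cumulative_link_cost):
    # log(fCumulativeLinkCost + 1) * 500 -- distant destinations earn
    # more credit, helping a more central core form.
    return math.log(cumulative_link_cost + 1) * 500

def furthest_fraction(dest_costs, fraction=0.5):
    """Keep only the x% (here 50%) of destinations that are furthest
    away, for use in determining the next best step to the core."""
    ranked = sorted(dest_costs, key=dest_costs.get, reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

assert credit(9) > credit(1)      # a destination 9 links away earns more credit
assert furthest_fraction({"A": 1, "B": 4, "C": 9, "D": 16}) == ["D", "C"]
```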
- Another embodiment can use a weighting value assigned to each node. This weight could be assigned by the node that created the name. For example, if this weighting value was added to the node update structure it would look like this: struct sNodeKnowledge { Name NameOfTheNode; Float fCumulativeLinkCost; Int nWeight; };
- nWeight value (that is in the sNodeKnowledge structure) can be used to help cores form near more powerful nodes.
- the credit assigned could be multiplied by 10^nWeight (where 10 is an example).
- nWeight value should be assigned in a consistent fashion across all nodes in the network. Possible nWeight values for types of nodes:
- If a link is given an asymmetric cost, for example the link L that joins nodes A and B has a cost of 10 when going from A to B and a cost of 20 when going from B to A, then an alternative embodiment is useful to help the core form in a single location in the network.
- the nodes agree on the link cost for a particular link and base their 'Best Neighbour' selection on this shared link cost.
- If asymmetric link costs are used to determine the 'Best Neighbour', then symmetric link costs can be used to choose the next step to the core. Using symmetric link costs can help ensure that a core actually forms.
- a node will agree with the node it is linked to on an alternative cost for the link.
- This alternative link cost will be the same for both nodes.
- This alternative link cost will be used to adjust the cumulative link cost.
- a choice for 'Best Neighbour' will be made with this alternative cumulative link cost.
- This 'Best Neighbour' will be assigned the credit that goes towards picking it as the next best step to the core, even if it was not the 'Best Neighbour' picked as the next best step to the actual node.
- AlternativeCumulativeLinkCost = ActualCumulativeLinkCost + (AlternativeLinkCost - ActualLinkCost) High Speed Propagation Path(s) ("HSPP")
- Since nodes not at the core of the network will generally not have as much memory as nodes at the core, they may forget about a node N that relies on them to allow others to connect to node N. If these nodes did forget, no other node in the network would be able to connect to that node N.
- An approach is to use the implicit hierarchy created by each node's choice as to its 'next best step to the core'. Node knowledge is pushed up and down this hierarchy to the core. This allows efficient transfer of node knowledge to and from the center of the network.
- Node knowledge can be pushed/pulled using a methodology referred to herein as a High Speed Propagation Path ("HSPP").
- HSPP High Speed Propagation Path
- An HSPP can be thought of as a marked path/paths between a node and the core. Once that path has been set up it is maintained until the node that created it has been removed.
- There are two types of HSPPs. The first is a notify HSPP.
- the notify HSPP pulls knowledge of a particular node towards the core. Nodes that have an HSPP running through them are not allowed to forget about the node that is associated with the HSPP. All nodes create a notify HSPP to drive knowledge of themselves towards the core.
- a request HSPP is only created when a node is looking for knowledge of another node.
- the request HSPP operates in an identical way to the notify HSPP except that instead of pulling knowledge towards the core it pulls knowledge back to the node that created it.
- An HSPP travels to the core using each node's next best step to the core. Each node's 'next best step to the core' creates an implicit hierarchy.
- An HSPP is not a path for user messages itself; rather, it forces nodes on the path to retain knowledge of the node or nodes in question, and to send knowledge of that node or nodes quickly along the HSPP. It also raises the priority of updates for the node names associated with the HSPP. This has the effect of sending route updates quickly towards the top of this implicit hierarchy.
- the HSPP does not specify where user data messages flow.
- the HSPP is only there to guarantee that there is always at least one path to the core, and to help nodes form an initial connection to each other. Once an initial connection has been formed, nodes no longer need to use the HSPP.
- An HSPP may be referenced as belonging to one node name, or being associated with one node name. This in no way limits the number, or type of nodes that an HSPP may be associated with.
- the name of the HSPP is usually the name of the node that the HSPP will be pushing/pulling to/from the core.
- An HSPP is tied to a particular node name or class/group of node names. If a node hosts an HSPP for a particular destination node it will immediately process and send node knowledge of any nodes that are referenced by that HSPP.
- An alternative embodiment could limit that processing to: 1. Initial knowledge of the destination node 2. When the destination node fCumulativeLinkCost goes to infinity 3. When the destination node fCumulativeLinkCost moves from infinity to some other value
- the HSPP only sets up a path with a very high priority for knowledge of a particular node or nodes. This means that node updates for those nodes referenced by the HSPP will be immediately sent.
- An HSPP is typically considered a bi-directional path.
- Alternative embodiments can have two types of HSPPs.
- One type pushes knowledge of a node towards the core. This type of HSPP could be called a notify HSPP.
- When a node is first connected to the network, it can create an HSPP based on its node name. This HSPP will push knowledge of this newly created node towards the core of the network. The HSPP created by this node can be maintained for the life of the node, or for as long as the node wants to maintain the HSPP. If the node is disconnected or the node decides to no longer maintain the HSPP, then it will be removed.
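- The travel of such an HSPP to the core can be sketched as follows. This is a hypothetical illustration: each node simply forwards the HSPP along its own next best step to the core, and every node on the path marks itself as hosting the HSPP so that it may not forget the associated node name.

```python
def establish_hspp(name, origin, to_core, hosts):
    """to_core: {node: next best step towards the core, or None at the core};
    hosts: {node: set of node names that node must retain knowledge of}."""
    node = origin
    while node is not None:
        hosts.setdefault(node, set()).add(name)   # this node now hosts the HSPP
        node = to_core.get(node)                  # keep travelling coreward

# A newly connected node A creates a notify HSPP for its own name;
# the chain A -> B -> C is an assumed topology, with C at the core.
to_core = {"A": "B", "B": "C", "C": None}
hosts = {}
establish_hspp("A", "A", to_core, hosts)
# A, B and C now all host the HSPP and cannot forget node A's name.
```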
- An alternative embodiment could have this HSPP be a 'push HSPP' instead of a bi-directional HSPP.
- If a node is trying to connect to another node N, it will create an HSPP that references that node N. This HSPP will travel to the core and help pull back knowledge of node N to the node that created the HSPP and wants to connect to node N.
- An alternative embodiment could have this HSPP be a 'pull HSPP' instead of a bi-directional HSPP.
- Both types of HSPP will travel to the core.
- the HSPP that sends knowledge of a node to the core can be maintained for the life of the node.
- the request HSPP will probably be maintained for the life of the connection.
- An alternative embodiment has nodes that create an HSPP send that HSPP to all directly connected nodes, instead of only to their next best step to the core of the network. This embodiment allows the network to be more robust while moving and shifting.
- An alternative embodiment includes making sure that a node will not send a directly connected node more HSPPs than the maximum nodes requested by that directly connected node.
- An HSPP does not specify where user data should flow, it only helps to establish a connection (possibly non-optimal) between nodes, or one node and the core.
- An HSPP will travel to the core even if it encounters node knowledge before it reaches the core.
- Alternative embodiments can have the HSPP stop before it reaches the core.
How an HSPP is Established and Maintained
- Even if a node limits the number of nodes it wants to be told about, that node stores as many HSPP's as are given to it. A node should not send more HSPP's to a directly connected node than the maximum destination node count that directly connected node requested.
- the amount of memory available on nodes will be such that it can be assumed that there is enough memory, and that no matter how many HSPP's pass through a node it will be able to store them all. This is even more likely because the number of HSPP's on a node will be roughly related to how close this node is to the core, and a node is usually not close to a core unless it has lots of capacity, and therefore probably lots of memory.
- An HSPP takes the form of:
struct sHSPP {
    // The name of the node could be replaced with a number
    // (discussed later). It may also represent a class of nodes
    // or node name.
    sNodeName nnName;
    // a boolean to tell the node if the HSPP is being
    // activated or removed.
    bool bActive;
    // a boolean to decide if this is a UR (or US) generated HSPP
    bool bURGenerated;
};
- an HSPP H is considered to be called HSPP H regardless of what bActive or bURGenerated (more generally the HSPP Type) are.
- the HSPP H can derive its name from the node name that it represents.
- HSPP H is not linked to the node name (or names) it references.
- the HSPP structure in this embodiment might look like this (for example):
struct sAlternateHSPP {
    // a unique name to represent this HSPP
    sHSPPName HSPPName;
    // The name of the node could be replaced with a number
    // (discussed later). It may also represent a class of nodes
    // or node name.
    sNodeName nnName;
    // a boolean to tell the node if the HSPP is being
    // activated or removed.
    bool bActive;
    // a boolean to decide if this is a UR (or US) generated HSPP
    bool bURGenerated;
};
- a UR generated HSPP can also be called a 'Notify HSPP' and a US generated HSPP can also be called a 'Request HSPP'.
- An HSPP will be re-established whenever the HSPP's path is changed or broken. This should be guaranteed by the process by which the next step to the core of the network is generated.
- the HSPP finds a non-looping path to the core, and when it reaches the core it stops spreading. It does this because the two nodes that form the core will select each other as their next best step towards the core.
- the purpose of the HSPP generated by the UR is to maintain a path between it and the core at all times, so that all nodes in the system can find it by sending a US generated HSPP (a request HSPP) to the core.
- When a node N receives an active HSPP H from any of its directly connected nodes, it will send an active HSPP H on to the node (or nodes) selected as its next best step to the core, provided that the node (or nodes) selected as its next best step to the core has not sent it an active HSPP H.
- When a node A establishes a connection to another node B, node A can use an HSPP to pull route information for node B to itself (called a request HSPP). This HSPP should also be sent to all directly connected nodes.
- An alternative embodiment has only one type of HSPP that moves data bi-directionally. This type of HSPP would be able to replace both a push/notify HSPP and a pull/request HSPP. In this embodiment the bURGenerated parameter is omitted.
- An HSPP does not need to be continually resent. Once an HSPP has been established in a static network, no additional HSPP messages need to be sent. This will be apparent to someone skilled in the art.
- Each node remembers which directly connected nodes have told it about which HSPP's; a node also typically remembers which HSPP's it has told to directly connected nodes.
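The per-neighbour bookkeeping described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation; the class name, neighbour identifiers, and method names are all hypothetical:

```cpp
#include <map>
#include <set>
#include <string>

// Hypothetical sketch of per-neighbour HSPP bookkeeping: each node
// records which HSPP's it has sent to each directly connected node,
// and which HSPP's each directly connected node has told it about.
class HsppTracker {
    // neighbour id -> set of HSPP names already sent to that neighbour
    std::map<int, std::set<std::string>> sent;
    // neighbour id -> set of HSPP names that neighbour told us about
    std::map<int, std::set<std::string>> received;
public:
    // Record that a neighbour announced an HSPP to us.
    void noteReceived(int neighbour, const std::string& hspp) {
        received[neighbour].insert(hspp);
    }
    // An HSPP needs to be sent at most once per neighbour;
    // returns true only the first time it is scheduled for sending.
    bool shouldSend(int neighbour, const std::string& hspp) {
        return sent[neighbour].insert(hspp).second;
    }
    // Did this neighbour tell us about this HSPP?
    bool toldUs(int neighbour, const std::string& hspp) const {
        auto it = received.find(neighbour);
        return it != received.end() && it->second.count(hspp) > 0;
    }
};
```

With this bookkeeping, an established HSPP generates no further messages in a static network, matching the "does not need to be continually resent" property above.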
- This alternative embodiment can help maintain connection in a low bandwidth environment.
- the 'Priority Notify HSPP' is the same as 'notify HSPP' except that it will be sent before all 'Notify HSPP's'. This will be discussed later.
- For example, if a node is attempting to communicate with another node, or is aware that another node is attempting to communicate with it, then it can change its notify HSPP's into 'Priority Notify HSPP's'.
- the HSPP structure might be amended to look like this:
struct sHSPP {
    // The name of the node could be replaced with a number
    // (discussed previously)
    sNodeName nnName;
    // a boolean to tell the node if the HSPP is being
    // activated or removed.
    bool bActive;
    // the HSPP Type (for ex: Request HSPP, Notify HSPP,
    // Priority Notify HSPP)
    int nHSPPType;
};
- An alternative embodiment adjusts the order that HSPP's are sent.
- This alternative embodiment can be used to stop simple loops from forming.
- If a node A has picked node B as a 'Best Neighbour' for messages going to node N, then node A will tell node B that it has been picked.
- node A could send node B a message that looks like this:
struct sIsBestNeighbour {
    Name NodeName;
    Boolean bIsBestNeighbour;
};
- If node A has told node B that it is the 'Best Neighbour' for messages going to node N, then node B will be unable to pick node A as the 'Best Neighbour' for messages going to node N. If the only possible choice node B has for messages going to node N is node A, then B will select no 'Best Neighbour' and set its cumulative link cost to node N to infinity.
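The selection rule above can be sketched as a small function. This is an illustrative sketch only; the structure and function names are assumptions, and a real implementation would integrate this with the node's routing tables:

```cpp
#include <limits>
#include <string>
#include <vector>

// Sketch of the 'Best Neighbour' loop-prevention rule: a node may not
// pick, as its 'Best Neighbour' for node N, any neighbour that has
// already picked it for node N.  If no candidate remains, it selects
// no neighbour and reports an infinite cumulative link cost.
struct Candidate {
    std::string name;
    float costToN;      // cumulative link cost to node N via this neighbour
    bool pickedUsForN;  // neighbour sent us sIsBestNeighbour for node N
};

struct Choice {
    std::string bestNeighbour;  // empty if none could be chosen
    float cumulativeLinkCost;
};

Choice pickBestNeighbour(const std::vector<Candidate>& candidates) {
    Choice c{"", std::numeric_limits<float>::infinity()};
    for (const auto& cand : candidates) {
        if (cand.pickedUsForN) continue;  // rule: cannot pick back
        if (cand.costToN < c.cumulativeLinkCost) {
            c.bestNeighbour = cand.name;
            c.cumulativeLinkCost = cand.costToN;
        }
    }
    return c;
}
```

This prevents the simple two-node loop where A forwards to B while B forwards back to A for the same destination.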
- This alternative embodiment can be used to mark those nodes that are in the data stream.
- a node is only considered as 'in the data stream' if it is marked as 'in the data stream'.
- a node may forward payload packets without being marked in the data stream. If a node is forwarding payload packets, but is not marked in the data stream it is not considered as 'in the data stream'.
- If a node A has attempted to establish a data connection to another node N in the network, it will tell the node B that it has selected as its 'Best Neighbour' to node N that node B is a 'Best Neighbour' for node N and that it is in the data stream for node N.
- If a node B has been told that it is in the data stream by a directly connected node that has told B that it is a 'Best Neighbour', then node B will tell the directly connected node C that it has selected as a 'Best Neighbour' for node N that node C is in the data stream.
- this message might look like this:
struct sInTheDataStream {
    Name NodeName;
    Boolean bIsInTheDataStream;
};
[0276] If node B was marked as in the data stream for messages going to node N, it would tell the node that it has selected as its next best step to node N that it is in the data stream. If node B is no longer marked as in the data stream because:
- node B will tell its 'Best Neighbour' C that it is no longer in the data stream.
- a node is only marked as being in the data stream by this flag.
- a node may forward message packets without being marked as in the data stream.
- This alternative embodiment can be used to order the node updates in a network. This ordering allows the network to become much more efficient by sending updates to maintain and converge data flows before other updates.
- the sNodeKnowledge structure used to pass node knowledge around might be modified to look like this (for example):
struct sNodeKnowledge {
    Name NameOfTheNode;
    Float fCumulativeLinkCost;
    Float fCumulativeLinkCostFromStream;
};
- the fCumulativeLinkCostFromStream is incremented in the same way as the fCumulativeLinkCost. However, if a node is in the data stream for a particular node it will reset the fCumulativeLinkCostFromStream to 0 before sending the update to its directly connected nodes.
- the fCumulativeLinkCostFromStream is also initialized to zero.
- An alternative embodiment to help in low bandwidth environments is to have nodes set their fCumulativeLinkCostFromStream to a non-zero value (for example 50) if they are not exchanging user data with another node. If they are in communication with another node they would set their fCumulativeLinkCostFromStream to 0.
- An alternative embodiment could also set a non-zero fCumulativeLinkCostFromStream to a multiple of the min, max, average (etc) of the link costs associated with the links that this node has established.
- fCumulativeLinkCostFromStream = fCumulativeLinkCost + 0.1f;
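The two-cost update rule above can be sketched as follows. This is a minimal sketch under assumptions: the exact point at which the link cost is added is not fully specified in the text, so the function below assumes the sender adds the link cost before forwarding, and zeroes the from-stream cost when it is itself in the data stream:

```cpp
// Sketch of preparing a destination-node update for forwarding:
// both costs grow by the link cost, but a node marked 'in the data
// stream' resets fCumulativeLinkCostFromStream to 0 before sending.
struct sNodeKnowledge {
    float fCumulativeLinkCost;
    float fCumulativeLinkCostFromStream;
};

sNodeKnowledge forwardUpdate(sNodeKnowledge in, float linkCost,
                             bool inDataStream) {
    sNodeKnowledge out;
    out.fCumulativeLinkCost = in.fCumulativeLinkCost + linkCost;
    // A node in the data stream advertises a from-stream cost of zero,
    // so nearby nodes see how far they are from an active flow.
    out.fCumulativeLinkCostFromStream =
        inDataStream ? 0.0f : in.fCumulativeLinkCostFromStream + linkCost;
    return out;
}
```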
- the TreeMap is a data structure that allows items to be removed from it in ascending key order. This allows more important updates to be sent to the directly connected node before less important updates.
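The ordered scheduling described above can be sketched in C++ using std::multimap in place of a Java TreeMap (both iterate in ascending key order). Class and method names are illustrative assumptions:

```cpp
#include <map>
#include <string>

// Sketch of ordered update scheduling: keys are the
// fTempCumLinkCostFStream values, and the lowest key (most urgent
// update, i.e. closest to an active data stream) is popped first.
class UpdateScheduler {
    std::multimap<float, std::string> pending;  // cost -> destination node
public:
    void schedule(float cost, const std::string& destNode) {
        pending.emplace(cost, destNode);
    }
    // Remove and return the most important (lowest-cost) update.
    std::string popNext() {
        auto it = pending.begin();
        std::string dest = it->second;
        pending.erase(it);
        return dest;
    }
    bool empty() const { return pending.empty(); }
};
```

A multimap is used rather than a map so that two destinations with equal costs can both remain scheduled.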
- the fTempCumLinkCostFStream should also be used on a per connection basis to determine which destination node updates should be sent. For example, if there are five destination nodes with fTempCumLinkCostFStream values of:
- This node would then send an infinity update for node G. It would then schedule a delayed send for destination node M. (See 'Delayed Sending')
- An infinity destination node update makes sure that the messages needed to pass this information are sent for the node that is getting an infinity update.
- This example includes several different embodiments; for those that are not used, someone skilled in the art will be able to omit the relevant item(s).
- This alternative embodiment helps node knowledge to be removed from the network when a node is removed from the network. [0310] If node knowledge is not removed from the network, then a proper hierarchy and core will have trouble forming.
- the update should be delayed.
- the update should be delayed by (Latency+1)*2 ms, or in this example 22 ms. This latency should also exceed a multiple of the delay between control packet updates (see 'Propagation Priorities').
- someone skilled in the art will be able to experiment and find good delay values for their application.
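The delay rule quoted above is a one-line computation; a minimal sketch, with the function name as an assumption:

```cpp
// Sketch of the delayed-send rule: with a measured update latency of
// 10 ms, the non-infinity update follows the infinity update after
// (Latency + 1) * 2 = 22 ms, matching the example in the text.
int delayedSendMs(int updateLatencyMs) {
    return (updateLatencyMs + 1) * 2;
}
```

As the text notes, the constants here are starting points; implementers would tune the delay against the control-packet update interval for their deployment.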
- This alternative embodiment helps node knowledge to be removed from the network when a node is removed from the network.
- This node will send the directly connected node N an update of infinity for this destination node A. Then after a suitable delay (See Delayed Sending) this node will send a non-infinity update for this destination node to the directly connected node, except in the case where this node A still meets criteria 1.
- An infinity update is an update with the fCumulativeLinkCost value set to infinity (see above for a more complete definition).
- This delayed sending does not need to occur if either of these conditions is met: 1.
- This node is in the path of an HSPP for this destination node, and this directly connected node is: c. In the path to the core and the HSPP is a notify HSPP or there is only one type of HSPP. d.
- One of the nodes that told us of this HSPP and the HSPP is a request HSPP or there is only one type of HSPP.
- This network system and method can be used to emulate most other network protocols, or as a base for an entirely new network protocol.
- TCP/IP will be used as an example of a protocol that can be emulated.
- TCP/IP as an example is not meant to limit the application of this invention to TCP/IP.
- If the IP address can be guaranteed to be unique, then the IP address could serve as the GUID, providing a seamless replacement of an existing TCP/IP network stack with this new network invention.
- IP addresses would not necessarily need to reflect a node's position in the network hierarchy.
- the network will determine a route to the destination (if such a route exists), and continually improve the route until an optimal route has been found.
- the receiving end will look identical to TCP/IP, except that a request to determine the IP address of the connecting node will yield a GUID instead (or an IP address if those are being used as GUIDs).
- Figure 28 is an example of where this routing method would fit into the TCP/IP example.
- this new routing approach allows a TCP/IP-like interface for end user applications. This is an example and is not meant to limit this routing approach to any particular interface (TCP/IP for example) or application.
- If node A wishes to establish a connection with node B, it will first send out a request HSPP (discussed earlier) to all directly connected nodes. This request HSPP will draw and maintain route information about node B to node A. This request HSPP will be sent out even if node A already has knowledge of node B.
- node A can change its notify HSPP to a priority notify HSPP and inform all its directly connected nodes. This can help connectivity in low bandwidth mobile environments since it would allow nodes that are communicating to have their information spread before those nodes that are not communicating.
- This HSPP will travel to the core even if it encounters node route knowledge before reaching the core.
- Once node A has a non-infinity next best step to node B, it will send out a 'connection request message' to the specified port on node B. This request will be sent to the directly connected node that has been selected as the 'Best Neighbour' for messages going to node B. [0345] If the 'marking the data stream' embodiment is used, then node A will tell the directly connected node that it has selected as a 'Best Neighbour' that it is in the data stream for node B.
- The use of ports is for example only and is not meant to limit the scope of this invention.
- a possible alternative could be a new node name specifically for incoming connections. Someone skilled in the art would be aware of variations.
- The connection request message might contain the GUID of node A, and what port to send the connection reply message to. It may also contain a nUniqueRequestID that is used to allow node B to detect and ignore duplicate requests from node A.
- the connection request message looks like this (for example):
struct sConnectionRequest {
    // the name of node A, could be replaced with a number
    // for reduced overhead.
    sNodeName nnNameA;
    // Which port on node A to reply to
    int nSystemDataPort;
    // Which port to send end user messages to on node A
    int nUserDataPort;
    // a unique request id that node B can use to
    // decide which duplicate requests to ignore
    int nUniqueRequestID;
};
- When node B receives the 'connection request' message from node A, it will generate a request HSPP for node A and send it to all directly connected nodes. This will draw and maintain route information about node A to node B. [0350] If the alternative embodiment that uses 'priority notify hspp' is used, then node B can change its notify HSPP to a priority notify HSPP and inform all its directly connected nodes. This can help connectivity in low bandwidth mobile environments.
- Node B will wait until it sees where its next best step to node A is, and then mark the route to node A as 'In the data stream'.
- Node B will then send a sConnectionAccept message to node A on the port specified (sConnectionRequest.nSystemDataPort).
- This message looks like this:
struct sConnectionAccept {
    // the name of node B
    sNodeName nnNameB;
    // the port for user data on node B
    int nUserDataPortB;
    // the unique request ID provided by A in the
    // sConnectionRequest message
    int nUniqueRequestID;
};
- the sConnectionAccept message will be sent until node A sends a sConnectionConfirmed message that is received by node B, or a timeout occurs.
- the sConnectionConfirmed message looks like this:
struct sConnectionConfirmed {
    // the name of node A, could be replaced with a number
    // for reduced overhead.
    sNodeName nnNameA;
    // the unique request ID provided by A in the
    // sConnectionRequest message
    int nUniqueRequestID;
};
[0355] If a timeout occurs during the process, the connection is deemed to have failed and will be dismantled. The request HSPP's that both nodes have generated will be removed, and the 'in the data stream' flag(s) will be removed (if they were added).
- both nodes may send user data messages to each other's respective ports. These messages would then be routed to the end user software via sockets (in the case of TCP/IP).
- An alternative embodiment would not require a connection to be established, just the sending of EUS messages/payload packets when a route to the destination node was located.
- This alternative embodiment can be used to optimize messages and name passing.
- a structure for this could look like (for example):
struct sCreateQNameMapping {
    // size of the name for the node
    int nNameSize;
    // name of the node
    char cNodeName[nNameSize];
    // the number that will represent this node name
    int nMappedNumber;
};
[0362] Then instead of sending the long node name each time it wants to send a destination node update, or message, it can send a number that represents that node name (sCreateQNameMapping.nMappedNumber). When node A decides it no longer wants to tell node B about the destination node called 'THISISALONGNODENAME.GUID', it could tell B to forget about that mapping.
- That structure would look like:
struct sRemoveQNameMapping {
    int nMappedNumber;
};
- Each node would maintain its own internal mapping of what names mapped to which numbers. It would also keep a translation table so that it could convert a name from a directly connected node to its own naming scheme. For example, a node A might use:
- node B would have a mapping that would allow it to convert node A's numbering scheme to a numbering scheme that makes sense for node B. In this example it would be:
- Using this numbering scheme also allows messages to be easily tagged as to which destination node they are destined for. For example, if the system had a message of 100 bytes, it would reserve the first four bytes to store the mapped number of the destination node the message is being sent to, followed by the message. This would make the total message size 104 bytes.
- An example of this structure also includes the size of the message:
struct sMessage {
    // the number that maps to the name of the node where
    // this message is being sent to
    int uiNodeID;
    // the size of the payload packet
    int uiMsgSize;
    // the actual payload packet
    char cMsg[uiMsgSize];
};
- When this message is received by a node, that node would refer to its translation table and convert the destination mapping number to its own mapping number. It can then use this mapping number to decide if this node is the destination for the payload packet, or if it needs to send this payload packet to another directly connected node.
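The mapping and translation scheme above can be sketched as a small class. This is an illustrative sketch; the class name, method names, and numbering policy are assumptions, though the mapped-number idea follows the sCreateQNameMapping structure in the text:

```cpp
#include <map>
#include <string>

// Sketch of per-node name/number mapping: each node keeps its own
// internal numbering for node names, plus a translation table that
// converts a neighbour's mapped numbers into its own numbering.
class NameMapper {
    std::map<std::string, int> nameToLocal;   // our own numbering
    std::map<int, std::string> localToName;
    std::map<int, int> neighbourToLocal;      // neighbour's number -> ours
    int next = 0;
public:
    // Assign (or look up) our internal number for a node name.
    int internNode(const std::string& name) {
        auto it = nameToLocal.find(name);
        if (it != nameToLocal.end()) return it->second;
        nameToLocal[name] = next;
        localToName[next] = name;
        return next++;
    }
    // Apply a sCreateQNameMapping received from a neighbour.
    void addNeighbourMapping(int neighbourNumber, const std::string& name) {
        neighbourToLocal[neighbourNumber] = internNode(name);
    }
    // Translate a received message's destination number (in the
    // neighbour's numbering) into our own numbering.
    int translate(int neighbourNumber) const {
        return neighbourToLocal.at(neighbourNumber);
    }
};
```

After translation, the node compares the local number against its own identity to decide whether to consume the payload packet or forward it.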
- This alternative embodiment is used to help remove name mappings that are no longer needed.
- If a destination node has a fCumulativeLinkCost of infinity continuously for more than X ms (for example 5000 ms) and it has sent this update to all directly connected nodes, then this node will remove knowledge of this destination node.
- a node should attempt to reuse forgotten internal node numbers before using new numbers.
- This alternative embodiment is used to speed up the routing of packets.
- node B would have a mapping that would allow it to convert node A's numbering scheme to a numbering scheme that makes sense for node B. In this example it would be:
- If the directly connected nodes reuse internal node numbers, and the number of destination nodes that these nodes know about is less than the amount of memory available for storing these node numbers, then the node can use array lookups for sending messages.
- This alternative embodiment helps ensure that O(1) routing can be used and avoids the use of a hash map.
- a node could provide each directly connected node with unique node number-name mappings. This will ensure that the directly connected node won't need to resort to using a hash table to perform message routing (see above).
- a user data packet can be sent whenever there is a route available.
- a Time-To-Live (TTL) scheme may also be implemented by someone skilled in the art.
- TTL might be a multiple of the fCumulativeLinkCost value for the destination node that is calculated by the node that creates the payload packets. Each node will then subtract its LinkCost for the link that the packet is received on from the TTL. If the TTL goes below zero, the packet could be removed.
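The cost-based TTL above can be sketched with two small functions. The multiplier and function names are assumptions for illustration only:

```cpp
// Sketch of the cost-based TTL rule: the sender initialises the TTL as
// a multiple of its fCumulativeLinkCost to the destination; each
// forwarding node subtracts the link cost of the link the packet
// arrived on, and the packet is dropped when the TTL falls below zero.
float initialTtl(float fCumulativeLinkCost, float multiple = 2.0f) {
    return fCumulativeLinkCost * multiple;
}

// Returns true if the packet may still be forwarded, false if dropped.
bool decrementTtl(float& ttl, float incomingLinkCost) {
    ttl -= incomingLinkCost;
    return ttl >= 0.0f;
}
```

Unlike a hop-count TTL, this budget shrinks faster over expensive links, so a packet wandering off the expected path is discarded sooner.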
- This differs from hop-count based TTL schemes, such as those used in a protocol like TCP/IP.
- This alternative embodiment can help shift network traffic away from overly congested nodes, or nodes that are running low on battery.
- This alternative embodiment allows nodes to share a name.
- If the node required a stateful connection, then it would initially connect to the closest node with that name. That closest node would then return its unique name, which could be used to establish a connection that needed state.
- the name of the node returned could be the unique name of the node.
- Total 'control' bandwidth should be limited to a percent of the maximum bandwidth available for all data.
- There should be a split between the amount of control bandwidth allocated to route updates and the amount of control bandwidth allocated to HSPP updates. For example, 75% of the control bandwidth could be allocated to route updates and the remaining 25% could be allocated to HSPP updates. Someone skilled in the art could modify these numbers to better suit their implementation.
- This section of the document describes an embodiment that allows multiple paths for end user data to form between two communicating nodes. This embodiment also allows for paths to move and shift to avoid congestion.
- This network does not rely on any agent possessing global knowledge of the network.
- the constituents of the network are nodes and queues.
- 1. A node will only send messages to directly connected nodes that it has specified as chosen destinations.
  2. A node will only send a message to a chosen destination if the latency of data in the queue on that node is greater than the latency of that chosen destination minus the minimum latency of all chosen destinations.
  3. Nodes not currently in the data stream have only one chosen destination. Nodes in the data stream can have multiple chosen destinations.
  4. When looking for a better chosen destination, nodes not in the data stream use passive loop checking, while nodes in the data stream use active loop checking.
  5. Connections are established and maintained in a TCP/IP manner.
  6. Nodes in large networks look for knowledge in the core of the network.
Data Flow Principles
  1. A stream of data must not cause its own path latencies to change, except in the case where the flow is past capacity.
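Rule 2 above can be sketched as a small predicate. This is a minimal illustration under stated assumptions (the function and parameter names are hypothetical), not the patent's implementation:

```cpp
#include <algorithm>
#include <vector>

// Sketch of the send-eligibility rule: a node sends to a chosen
// destination only if the latency of data in its own queue is greater
// than that destination's latency minus the minimum latency over all
// of its chosen destinations.
bool maySendTo(float ownQueueLatency, float destLatency,
               const std::vector<float>& chosenDestLatencies) {
    float minLatency = *std::min_element(chosenDestLatencies.begin(),
                                         chosenDestLatencies.end());
    return ownQueueLatency > destLatency - minLatency;
}
```

The effect is that the fastest chosen destination is always eligible, while slower destinations only receive traffic once enough data has queued locally to justify the detour.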
- Each node in this network is directly connected to one or more other nodes.
- a node could be a computer, network adapter, switch, or any device that contains memory and an ability to process data. Each node has no knowledge of other nodes except those nodes to which it is directly connected.
- a connection between two nodes could be several different connections that are 'bonded' together. The connection could be physical (wires, etc), actual physical items (such as boxes, widgets, liquids, etc), computer buses, radio, microwave, light, quantum interactions, etc.
- 'Chosen Destinations' is equivalent to 'Best Neighbours'. It is used in this section of the document since 'Best Neighbour' may be somewhat misleading: there can only really be one 'best neighbour', whereas there can be multiple 'chosen destinations'.
- Queues are used as the destination for EUS messages/payload, as well as messages that are used to establish and maintain reliable communication. Every node that is aware of the existence of a queue has a corresponding queue with an identical name. This corresponding queue is a copy of the original queue; however, the contents of queues on different machines will be different.
- Messages are transferred between nodes using queues of the same name. A message will continue to be transferred until it reaches the original queue.
- the original queue is the queue that was actually created by the EUS, or the system, to be the message recipient.
- a node that did not create the original queue does not know which node created the original queue.
- Each queue created in the system is given a unique label that includes an EUS or system assigned queue number and a globally unique identifier (GUID).
- GUID globally unique identifier
- Each node can support multiple queues. There is no requirement that specific queues need to be associated with specific nodes. A node is not required to remember all queues it has been told about.
- No node attempts to build a global network map, or has any knowledge of the network as a whole except for the nodes it is directly connected to.
- the only knowledge it has is that a queue exists, how long a message will take to reach the node that originally created that queue, and the maximum time a latency update from the original node will take to reach this node.
- Latencies play a central role in choosing the best path for data in the network.
- If node B tells node A that its latency is X seconds, it is saying that if node A were to pass a message to node B, that message would take X seconds to arrive at the ultimate destination and be de-queued by the EUS.
- This latency value as calculated by node B is:
- Min Over Time Period is a period of time determined by the time it takes to perform a minimum of five sends or receives from the send and receive nodes associated with this queue. It is also a minimum time of 30 ms (or a reasonable multiple of the granularity of the fast system timer). This will be discussed in more detail later.
- Service Time On This Queue is the time it takes for the node to attempt to send data from all other queues before it comes back to this one, excluding this particular queue.
- Figure 30 illustrates how service time could be calculated.
- Some types of nodes will have system timers with different resolutions available. Many times the low resolution timer is much faster to read the time from, thus it makes sense to use the lower resolution timer to increase the performance of a node. [0452] The tradeoff is a slightly more complex algorithm for determining the service time on a queue.
- the node will also record how many messages it was able to send from all the queues.
- Each queue will also have recorded the number of messages that were sent from that queue to that particular chosen destination during that time period.
- Service Time = ([TotalMessagesSentToCD] - [TotalMessagesSentFromQToCD]) / [TotalMessagesSentToCD] * [TotalTimeInSecondsForIterations]
- This value is only calculated when it is being sent as part of a latency calculation. This reduces computational overhead.
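The service-time formula above translates directly into code. A minimal sketch, with variable names taken from the formula in the text and the function name as an assumption:

```cpp
// Sketch of the low-resolution-timer service-time estimate: the
// fraction of messages sent to a chosen destination that came from
// queues *other* than this one, scaled by the elapsed time for those
// iterations.  Computed lazily, only when a latency update is sent.
float serviceTime(int totalMessagesSentToCD,
                  int totalMessagesSentFromQToCD,
                  float totalTimeInSecondsForIterations) {
    return float(totalMessagesSentToCD - totalMessagesSentFromQToCD)
           / float(totalMessagesSentToCD)
           * totalTimeInSecondsForIterations;
}
```

For example, if 100 messages went to the chosen destination over 2 seconds and 25 of them came from this queue, the other queues consumed roughly 1.5 seconds of that period.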
- This value is very similar to the value of 'ping' in a traditional TCP/IP network.
- This physical network latency is added to the latency provided to directly connected nodes, every time a calculation is performed using the latency that is provided by a directly connected node. For example, physical network latency would be used when:
- This value can be initialized by sending and timing a series of predefined packets to the directly connected node.
- Physical Network Latency = [AverageMsgSize] / ([TotalBytesSentDuringPeriod] / [ElapsedTime])
- the time period should be chosen to be similar to the time period used to calculate service time.
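The physical-network-latency formula above is likewise a one-liner; a minimal sketch with the function name as an assumption:

```cpp
// Sketch of the physical network latency estimate: average message
// size divided by the observed throughput (bytes per second) gives the
// time one average-sized message occupies the link.
float physicalNetworkLatency(float averageMsgSize,
                             float totalBytesSentDuringPeriod,
                             float elapsedTime) {
    return averageMsgSize / (totalBytesSentDuringPeriod / elapsedTime);
}
```

As the text notes, this plays a role similar to 'ping' in a traditional TCP/IP network, and is added whenever a neighbour-provided latency is used in a calculation.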
- Node A creates Queue A1 and sends a message to Queue B with a request to open communication. It asks for a reply to be sent to Queue A1.
- the request would have a structure that looks like this:
struct sConnectionRequest {
    // queue A1 (could be replaced with a number -
    // discussed later)
    sQNameType qnReplyQueueName;
    // update associated with queue A1 (explained
    // later). Includes Latency, UpdateLatency, etc.
    sQUpdate quQueueUpdate;
};
- If node A has not seen a reply from node B in queue A1, and queue A1 on node A is not marked 'in the data stream' (indicating that there is an actual connection between node B and queue A1), and it still has non-infinity knowledge of queue B (indicating that queue B, and thus node B, still exists and is functioning), it will resend this message.
- Node B sends a message to Queue A1 saying: I've created a special Queue B1 for you to send messages to. I've allocated a buffer of X bytes to re-order out-of-order messages.
struct sConnectionReply {
    // queue B1
    sQNameType qnDestQueueForMessages;
    // update associated with queue B1 (explained
    // later). Includes Latency, UpdateLatency, etc.
    sQUpdate quQueueUpdate;
    // buffer used to re-order incoming messages
    integer uiMaximumOutstandingMessageBytes;
};
- If node B does not see a reply from node A in queue B, and queue B1 on node B is not 'in the data stream', and node B still has non-infinity knowledge of queue A1, it will resend this message.
- Node B will continue resending this message until it receives a sConfirmConnection message, and queue B1 is marked 'in the data stream'.
- Whenever node A receives a sConnectionReply from node B on queue A1, and it has knowledge of queue B1, it will send a reply to queue B indicating a connection is successfully set up.
- Node A can then start sending messages. It must not have more than the given buffer size of bytes in flight at a time. Node B sends acknowledgements of received messages from node A. Node B sends these acknowledgements as messages to queue A1.
- Acknowledgements of sent messages can be represented as a range of messages. Acknowledgements will be coalesced together. For example, the acknowledgement of message groups 10-35 and 36-50 will become an acknowledgement of message group 10-50. This allows multiple acknowledgements to be represented in a single message.
- the acknowledgement message looks like:
struct sAckMsg {
    integer uiFirstAckedMessageID;
    integer uiLastAckedMessageID;
};
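The coalescing step described above can be sketched as follows. This is an illustrative sketch assuming the acknowledgement ranges arrive sorted by first message ID; the function name and sorting assumption are not from the patent:

```cpp
#include <vector>

// Structure name follows the text; fields are first/last message IDs
// of a contiguously acknowledged range.
struct sAckMsg {
    unsigned uiFirstAckedMessageID;
    unsigned uiLastAckedMessageID;
};

// Sketch of acknowledgement coalescing: adjacent or overlapping
// [first, last] ranges in a sorted list are merged, so acks 10-35 and
// 36-50 become a single ack 10-50, as in the example above.
std::vector<sAckMsg> coalesce(const std::vector<sAckMsg>& acks) {
    std::vector<sAckMsg> out;
    for (const auto& a : acks) {
        if (!out.empty() &&
            a.uiFirstAckedMessageID <= out.back().uiLastAckedMessageID + 1) {
            // Extend the previous range rather than emitting a new one.
            if (a.uiLastAckedMessageID > out.back().uiLastAckedMessageID)
                out.back().uiLastAckedMessageID = a.uiLastAckedMessageID;
        } else {
            out.push_back(a);
        }
    }
    return out;
}
```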
- ACKs Acknowledgements
- Messages are stored on the node where the EUS created them until they have been acknowledged. This allows the messages to be resent if they were lost in transit.
- If the network informs node B that queue A1 is no longer visible, it will remove queue B1 from the network and de-allocate all buffers associated with the communication. If the network informs node A that queue B1 is no longer visible, then node A will remove queue A1. [0492] This will only occur if all possible paths between node A and node B have been removed, or one or both of the nodes decides to terminate communication.
- If messages are not acknowledged in time by node B (via an acknowledgement message in queue A1), then node A will resend those messages.
- Node B can increase or decrease the 're-order' buffer size at any time and will inform node A of the new size with a message to queue A1. It would change the size depending on the amount of data that could be allocated to an individual queue. The amount of data that could be allocated to a particular queue is dependent on:
  1. How much memory the node has
  2. How many queues it remembers
  3. How many data flows are going through it
  4. How many queues originate on this node
- This resize message looks like this:
struct sResizeReOrderBuffer {
    // since messages can arrive out of order,
    // the version number will help the sending
    // node determine the most recent
    // 'ResizeReorderBuffer'.
    integer uiVersion;
    // the size of the buffer
    integer uiNewReOrderSize;
};
- There is also a buffer on the send side (node A).
- the size of that buffer is controlled by the system software running on that node. It will always be equal to or less than the maximum window size provided by node B.
- a node is considered in the data stream if it is on the path for data flowing between an ultimate sender and ultimate receiver. A node knows it is in the data stream because a directly connected node tells it that it is in the data stream. [0498] Data may flow through a node that is not marked as in the data stream.
- the first node to tell another node that it is 'in the data stream' is the node where the EUS resides that is sending a message to that particular queue. For example, if node B wants to send a message to queue A1, node B would be the first node to tell another node that it is 'in the data stream' for queue A1. A node will send updates for queues like queue B without marking them 'in the data stream'.
- A node in a data stream for a particular queue will tell all its nodes that are
- Nodes communicate with directly connected nodes to send messages created by an EUS to another EUS. Nodes will also send messages used to establish and maintain reliable communication with another EUS.
- This update takes the structure of:
    struct sQUpdate {
        // the name of the queue. Can be replaced with
        // a number (discussed later)
        sQName qnName;
        // the time it would take one message to travel
        // from this node to the ultimate receiver and be
        // consumed by the EUS
        float fLatency;
        // if true, this node is already handling as
        // much data as it can send (discussed later)
        bool bAtCapacity;
        // the maximum time a latency update will
        // take to travel from the ultimate receiver
        // to this node (discussed later)
        float fUpdateLatency;
        // calculated in a similar fashion to 'fUpdateLatency',
        // and records the distance from a marked data stream
        // for this node
        float fLatencyFromStream;
    };
[0513] Regardless of whether this is a previously unknown queue, or an update to an already known queue, the same information can be sent.
- When a node picks a directly connected node as a 'chosen destination', it must tell that node that it was selected as a 'chosen destination'.
- The structure of the message looks like this:
    struct sPickedAsChosenDestination {
        // the name of the queue
        sQName qnName;
    };
- A node will never pick another node as a 'chosen destination' if that node already has this node as a 'chosen destination' for that queue. If this happens because both nodes pick each other at the same time, it needs to be resolved instantly.
- Figure 32 is a series of steps showing knowledge of a queue propagating through the network.
- The linkages between nodes and the number of nodes in this diagram are exemplary only; in fact there could be indefinite variations of linkages within any network topography, both from any node and between any number of nodes.
- Every node keeps track of what queues it has told its directly connected nodes about. Every new queue that the directly connected node has not been told about will be immediately sent (see Propagation Priorities). In the case of a brand new connection, nodes on either side of that connection would send knowledge of every queue they were aware of.
- If a node does not contain enough memory to store the names, latencies, etc. of every queue in the network, the node can 'forget' those queues it deems unimportant. The node will choose to forget those queues for which it is furthest from a marked data stream. The node will use the value 'fLatencyFromStream' to decide how far it is from the marked data stream.
- An alternative embodiment could use the value 'fLatencyFromStream' to represent its distance from either a marked data stream or a node carrying payload packets.
- A structure for this could look like:
    struct sCreateQNameMapping {
        int nNameSize;
        char cQueueName[Size];
        int nMappedNumber;
    };
- That structure would look like:
    struct sRemoveQNameMapping {
        int nNameSize;
        char cQueueName[Size];
        int nMappedNumber;
    };
- Each node would maintain its own internal mapping of what names mapped to which numbers. It would also keep a translation table so that it could convert a name from a directly connected node to its own naming scheme. For example, node A might use:
- Node B would have a mapping that would allow it to convert node A's numbering scheme to a numbering scheme that makes sense for node B. In this example it would be:
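The mapping tables themselves were lost in this extraction, but the mechanism can be sketched as follows. This is an illustrative sketch, not the patent's specification: the structure names, fixed sizes, and storage strategy are assumptions.

```c
#include <string.h>

/* Illustrative sketch: each node keeps its own queue-name -> number map,
   plus a translation table from a neighbour's numbers to its own. */
#define MAX_QUEUES 64

typedef struct {
    char name[32];     /* queue name, e.g. "A1" */
    int  localNumber;  /* this node's number for the queue */
} QueueMapping;

typedef struct {
    QueueMapping map[MAX_QUEUES];
    int          count;
    /* translation[neighbourNumber] = this node's own number */
    int          translation[MAX_QUEUES];
} NodeMappings;

/* Record that a neighbour refers to queue 'name' by 'neighbourNumber'.
   If the name is unknown, a fresh local number is allocated. */
void learn_neighbour_mapping(NodeMappings *n, const char *name,
                             int neighbourNumber) {
    for (int i = 0; i < n->count; i++) {
        if (strcmp(n->map[i].name, name) == 0) {
            n->translation[neighbourNumber] = n->map[i].localNumber;
            return;
        }
    }
    QueueMapping *m = &n->map[n->count];
    strncpy(m->name, name, sizeof m->name - 1);
    m->name[sizeof m->name - 1] = '\0';
    m->localNumber = n->count;
    n->translation[neighbourNumber] = m->localNumber;
    n->count++;
}

/* Convert a neighbour's queue number to this node's own number. */
int translate(const NodeMappings *n, int neighbourNumber) {
    return n->translation[neighbourNumber];
}
```

Here both sides can keep exchanging compact numbers once the name-to-number mapping messages above have been sent.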
- This node will use a GUID probe to check for loops in the remaining possible nodes using the 'GUID probe' process described later in 'Adding additional routes'.
- the node will set its latency to infinity, and tell all connected nodes immediately of this new latency.
- a node If a node has a 'chosen destination' tell it a latency of infinity, it will instantly stop sending data to that node and will remove that node as a 'chosen destination'. If all 'chosen destinations' have been removed the node will set its own latency to infinity and immediately tell its directly connected nodes. [0546] Once a node has set its latency for a queue to infinity and tells its directly connected nodes, it waits for a certain time period (one second for example). At the end of this time period the node will instantly choose as a chosen destination any directly connected node that does not have a latency of infinity, and resume the sending of data.
- This time period is based on a multiple of how long it would take this node to send the update that this queue has gone to infinity (see Propagation Priorities later). This value is then multiplied by 10, or a suitably large number that is dependent on the interconnectedness of the network.
- An EUS has created a queue on one of the nodes that has a direct connection to two nodes, one on each side of the network.
- The node with the EUS-created queue will check to see if any of the nodes it is directly connected to are using it as a sender. Since all of them are, it does not need to probe them with GUIDs to determine if they loop. This node then sets itself to a latency of infinity. This is shown in Figure 35.
- If a node's latency for a queue is at infinity for more than several seconds, the node can assume that there is no other alternative route to the ultimate receiver, and any messages in the queue can be deleted along with knowledge of the queue.
- A node that is not currently in the data stream will always try to improve its latency to the ultimate receiver by selecting a node with a lower latency than its current chosen destination.
- A node needs to be sure that when it is selecting a different 'chosen destination' it will not introduce a loop.
- a node looking to upgrade its connection will prefer any node that is not 'at capacity' (explained later) over any node that is 'at capacity', regardless of latency.
- Figure 43 is another example of a potential loop to be avoided.
- Node B is trying to decide if node F is a better choice than node A. It will compare the difference in 'fUpdateLatency' between node F and node A. The two values in this example would be:
- Node B can't immediately discard node F as a valid new 'chosen destination' just because it has a higher 'fUpdateLatency'. This is because the alternative route that node F provides, although potentially a longer path to the ultimate destination, could be faster because of congestion on the route provided by node A.
- The basic idea behind passive loop testing is the following:
  • The 'fUpdateLatency' difference between A and F (in this example 5 seconds) is how long it will take, at maximum, for a latency update sent from node B to reach node F.
  • If a loop is present, then the maximum latency value from node F during this period of time will be greater than the median latency value from node A during the same time period before this time.
- The total time period for the median must never be longer than the value of node A's 'fUpdateLatency'. For example, if the difference between the 'fUpdateLatency' values of node A and node F was 500 seconds, and node A's 'fUpdateLatency' was 8 seconds, the time period for calculating the median would be only 8 seconds. The time period watching for a maximum would be 500 seconds.
- Figure 44 illustrates this. [0583] This technique may yield a false positive for a loop; however, it will only very rarely yield a false negative. Dealing with a loop is discussed later.
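The passive loop test described above reduces to comparing a windowed maximum against a windowed median. A minimal sketch in C; the function names and sample handling are illustrative assumptions, not the patent's implementation:

```c
#include <stdlib.h>

/* qsort comparator for floats */
static int cmp_float(const void *a, const void *b) {
    float fa = *(const float *)a, fb = *(const float *)b;
    return (fa > fb) - (fa < fb);
}

/* Median of n latency samples (copies the input so caller order is kept). */
float median(const float *samples, int n) {
    float tmp[256];
    for (int i = 0; i < n; i++) tmp[i] = samples[i];
    qsort(tmp, n, sizeof(float), cmp_float);
    return (n % 2) ? tmp[n / 2] : (tmp[n / 2 - 1] + tmp[n / 2]) / 2.0f;
}

float maximum(const float *samples, int n) {
    float m = samples[0];
    for (int i = 1; i < n; i++) if (samples[i] > m) m = samples[i];
    return m;
}

/* Passive loop test: suspect a loop if the maximum latency reported by the
   candidate (node F) during the watch window exceeds the median latency
   reported by the current chosen destination (node A) over its window. */
int loop_suspected(const float *fromF, int nF, const float *fromA, int nA) {
    return maximum(fromF, nF) > median(fromA, nA);
}
```

A latency spike from F above A's median during the watch window suggests B's own updates came back around through F, i.e. a loop.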
- If the 'fUpdateLatency' for node F is less than node A's, or becomes less during the course of the comparison, then no loop is possible and the node can select node F as a new chosen destination without further delay.
- Each queue of each node also has a mechanism to detect when it is sending or receiving data at capacity.
- A queue on a node is considered at capacity when the latency of data in its queue exceeds max([all chosen destination latencies]) - min([all chosen destination latencies]).
- A time interval is defined as the time in which every destination able to send has sent a certain number of messages (for instance, 10), with a minimum of a certain time period (for example, double the minimum granulation of the fast system timer) and a maximum of another time period (for example, 6 seconds).
- a node is considered at capacity if it is unable to bring the queue latency down to this level over this time period. If it is unable to do so, then there is too much data flowing into the node to successfully send it out almost as soon as it arrives.
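The 'at capacity' criterion above is a simple max-minus-min comparison. A minimal sketch in C; the function and variable names are illustrative, not from the patent:

```c
/* A queue is 'at capacity' when the latency of data waiting in it exceeds
   max(chosen destination latencies) - min(chosen destination latencies).
   'destLatencies' holds the latency reported by each chosen destination. */
int queue_at_capacity(float queueLatency, const float *destLatencies, int n) {
    float lo = destLatencies[0], hi = destLatencies[0];
    for (int i = 1; i < n; i++) {
        if (destLatencies[i] < lo) lo = destLatencies[i];
        if (destLatencies[i] > hi) hi = destLatencies[i];
    }
    return queueLatency > hi - lo;
}
```

Intuitively, the spread between the slowest and fastest chosen destinations is the amount of buffering that multi-path sending can usefully absorb; queued latency beyond that spread means data is arriving faster than it can be forwarded.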
- Nodes that are not in the flow of data will attempt to find non-looping alternatives to 'chosen destinations' that become marked 'at capacity'. If a node is in the data stream, it will not attempt to remove an 'at capacity' node as a 'chosen destination' because of its 'at capacity' status; it will make its decision to remove that node based on latency only.
- a node 'at capacity' because it has too much data flowing into it will make a list of all possible additional routes using directly connected nodes.
- a possible additional route is a node that:
- The node will generate a GUID. This GUID will be sent down each possible route to test each of the routes for a loop. If a loop is detected, that route is discarded from the list of possible additional routes.
- Each GUID that corresponds to a possible route is sent to the destination node next along that route. That node will store and forward that GUID on to all nodes it has as 'chosen destinations'. If the node chooses a new node for a destination, then the GUID will be passed to that new node. A node will deactivate a GUID by telling all 'chosen destinations' to forget the GUID. If all of the nodes telling it to remember the GUID either tell it to stop remembering the GUID, tell it that it is no longer chosen as a destination, or are disconnected, the GUID is deactivated.
- Figure 46 is an example of this. In Figure 46, if the node at capacity sees a GUID it sent to a possible additional chosen destination, it knows that choice would be a bad choice.
- A GUID message is composed of a GUID, the name of the queue in question, a 'travel time', and a note telling the node to either 'remember' or 'forget' this GUID.
- The GUID message will take the structure of:
    struct sGUIDProbe {
        // could also be a number that represents this queue
        // (discussed previously)
        sQueueName qnQueueName;
        // true if the node is supposed to remember this GUID,
        // false if it's supposed to forget it
        bool bRememberGUID;
        // the actual GUID
        char cGUID[constantGuidSize];
        // how far the GUID will travel (based on fUpdateLatency)
        float fMaximumGUIDTravelTime;
    };
[0608]
- The travel time for the GUID is set as triple (for example) the difference between the 'fUpdateLatency' of the node looking for a new route and the 'fUpdateLatency' of the possible new route that is not at capacity.
- When a node receives a GUID probe, it subtracts its contribution to the 'fUpdateLatency' value from the fMaximumGUIDTravelTime before it tells its directly connected nodes of this GUID probe (instead of adding this value to 'fUpdateLatency' the way it normally does). If, after it subtracts its contribution from fMaximumGUIDTravelTime, the value is less than 0, the GUID probe is not passed on to any chosen destinations.
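The travel-budget handling described above can be sketched as follows. This is an illustrative model; the structure layout and helper names are assumptions, and only the travel-time arithmetic is taken from the text:

```c
/* Sketch of a GUID probe's travel budget. Each node subtracts its own
   contribution to 'fUpdateLatency' from the probe's remaining travel time;
   once the budget goes negative the probe is not passed on. */
typedef struct {
    char  cGUID[16];
    float fMaximumGUIDTravelTime; /* remaining travel budget */
} GuidProbe;

/* Returns 1 if the probe should be forwarded to chosen destinations,
   0 if its travel budget is exhausted. */
int forward_guid_probe(GuidProbe *p, float myUpdateLatencyContribution) {
    p->fMaximumGUIDTravelTime -= myUpdateLatencyContribution;
    return p->fMaximumGUIDTravelTime >= 0.0f;
}

/* Initial budget: triple (for example) the difference between the
   'fUpdateLatency' of the prober and that of the candidate route. */
float initial_travel_time(float proberUpdateLatency,
                          float candidateUpdateLatency) {
    return 3.0f * (candidateUpdateLatency - proberUpdateLatency);
}
```

Bounding the probe's travel by accumulated 'fUpdateLatency' keeps it from flooding the whole network while still covering the region where a loop back to the prober could exist.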
- The 'at capacity' node will remove the GUIDs. If this node is still 'at capacity' after a period of time, it will retry the process, looking for alternatives. It will wait (for example) three times the maximum 'fMaximumGUIDTravelTime' used for the last round of GUID probes. [0614] Even though a node has several choices of where to send data, the maximum latency allowed in the queue is still max([all chosen destination latencies]) - min([all chosen destination latencies]), subject to available memory on that node. As soon as this new destination is chosen, the node will be able to clear its 'at capacity' status.
- Nodes not in the data stream only ever have one chosen destination. They don't remove additional sources; instead, they switch from one source to a better source (discussed previously).
- Nodes in the data stream are the only nodes that are given the potential to develop multiple data paths (discussed previously).
- If a node in the data stream does not use a particular 'chosen destination' to send data for a certain amount of time, then the node will remove that chosen destination from its list of chosen destinations and alert that node that it is no longer a chosen destination.
- The amount of time to wait before removing an unused chosen destination should be relatively long compared to the amount of time required to create the connection in the first place.
- The amount of time a chosen destination is maintained could also be dynamically adjusted over time, based on how much time elapses between when a node is removed and when it is re-added.
- A node must always have at least one 'chosen destination' if any possible choice exists (if not, its latency would be at infinity).
- The node does this in the hope that it will replace its current 'chosen destinations' with better choices. This will allow the node to make the entire route faster, as well as need less buffer space for messages passing through it.
- The node will probe the possible choice with a GUID probe (described above). If the GUID probe fails (a loop was detected), then the next time the node attempts to optimize this connection it will pick another directly connected node with the next lowest latency.
- Figure 47 is a flowchart that illustrates this process.
- Figure 48 shows a loop that was accidentally created in nodes not in the data stream.
- The GUID probe (see previously) is set up to travel for PLT * 5 (for example) time through the network.
- Latency and 'at capacity' updates are passed both in token updates (defined later) and in a constant stream that is throttled not to exceed X% of node-to-node bandwidth. Usually this number would be 1-5%. The node would cycle through all known available latencies in a round-robin fashion (see Propagation Priorities). Other ways to determine the order or frequency in which to send queue updates could also be used.
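The round-robin, bandwidth-throttled update stream described above can be sketched with a simple token-budget model. This is an illustrative sketch under assumed names; the patent does not specify this mechanism in code:

```c
/* Sketch: round-robin latency updates throttled to a fraction of link
   bandwidth (e.g. 1-5%). */
typedef struct {
    int   nextQueue;   /* round-robin cursor over known queues */
    int   numQueues;
    float budgetBytes; /* bytes of update traffic currently allowed */
} UpdateThrottle;

/* Accrue budget for 'dt' seconds on a link of 'linkBps' bytes/sec, with
   'fraction' (e.g. 0.02 for 2%) reserved for latency updates. */
void accrue(UpdateThrottle *t, float dt, float linkBps, float fraction) {
    t->budgetBytes += dt * linkBps * fraction;
}

/* Returns the index of the next queue whose update may be sent, or -1 if
   the throttle has no budget for an update of 'updateSize' bytes. */
int next_update(UpdateThrottle *t, float updateSize) {
    if (t->budgetBytes < updateSize) return -1;
    t->budgetBytes -= updateSize;
    int q = t->nextQueue;
    t->nextQueue = (t->nextQueue + 1) % t->numQueues;
    return q;
}
```

The cycling cursor gives every known queue a turn, while the byte budget caps the update stream at the chosen fraction of link bandwidth.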
- Each node has a variable amount of memory, primarily RAM, used to support information relevant to connections to other nodes and queues, e.g. message data, latencies, GUIDs, chosen destinations etc.
- An example of the need for flow control is if node A has chosen node B as a destination for messages. It is important that node A is not allowed to overrun node B with too much data.
- Flow control operates using the mechanism of tokens.
- Node B will give node A a certain number of tokens corresponding to the number of bytes that node A can send to node B. Node A is not allowed to transfer more bytes than this number. When node B has more space available and it realizes node A is getting low on tokens, node B can send node A more tokens.
- Node-to-node flow control is used to constrain the total number of bytes of any data (queues and system messages) sent from node A to node B.
- Queue-to-queue flow control is used to constrain the number of bytes that move from a queue in node A to a queue in node B with the same name.
- When node B first gives node A tokens, it limits the total number of outstanding tokens to a small number, as a start-up state from which to adjust to maximize throughput from node A.
- Node B knows it has not given node A a high enough 'outstanding tokens' limit when two conditions are met:
  • node A has told node B that it had more messages to send but could not because it ran out of tokens, and
  • node B has encountered a 'no data to send' condition where a destination would have accepted data if node B had had it to send.
- node B will wait for a 'no data to send' condition before increasing the 'outstanding tokens' limit for node A.
- Node B keeps track of how many tokens it thinks node A has by subtracting the sizes of messages it sees from the number of tokens it has given node A. If it sees node A is below 50% of the 'outstanding limit' that node B assigned node A, and node B is able to accept more data, then node B will send more tokens up to node A. Node B can give node A tokens at its discretion up to the 50% point, but at that point it must act.
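The token accounting node B performs for node A can be sketched as follows. The structure and function names are illustrative assumptions; only the 50% rule and the subtract-on-receive bookkeeping come from the text:

```c
/* Sketch of the token ('outstanding limit') accounting node B keeps for a
   sender, node A. */
typedef struct {
    int outstandingLimit; /* max tokens node A may hold */
    int tokensHeldByA;    /* node B's estimate of A's remaining tokens */
} TokenState;

/* Node B sees a message of 'size' bytes from A: deduct from its estimate. */
void on_message_received(TokenState *t, int size) {
    t->tokensHeldByA -= size;
}

/* Node B must top A up once A drops below 50% of the outstanding limit
   (earlier top-ups are at B's discretion). Returns the tokens granted. */
int maybe_grant_tokens(TokenState *t, int canAcceptMoreData) {
    if (!canAcceptMoreData) return 0;
    if (t->tokensHeldByA < t->outstandingLimit / 2) {
        int grant = t->outstandingLimit - t->tokensHeldByA;
        t->tokensHeldByA += grant;
        return grant;
    }
    return 0;
}
```

Because B only tracks an estimate, the grant tops A back up to the full outstanding limit rather than handing out a fixed increment.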
- Node B has created the default quota it wants to provide to node A. It then sends a message to node A with the quota (the difference between the current and the maximum). It also includes a version number that is incremented each time the maximum limit is changed.
- If node A wants to send a message of 5 bytes to node B, it will not have enough quota. Node A would then send a message to node B saying 'I'd like to send more'. It will then set its 'Last Want More Ver' to match the current version. This will prevent node A from asking over and over again for more quota if node B has not satisfied the original request.
- This message looks like this:
    struct sRequestMoreQuota {
        // the queue name or number (see previous)
        sQName qnName;
    };
- Node B has no data in its queue, and yet it would have been able to send to its chosen destination, so it will increase the maximum quota limit for node A to 100 bytes. It will send the new quota along with the new version number.
- Figure 52 shows this state.
- Node A now has enough quota to send its 5 byte message.
- node A removes 5 bytes from its available quota.
- When the message is received by node B, it removes 5 bytes from the current quota it thinks node A has.
- Figure 53 shows this state.
- the same approach to expanding the 'outstanding limit' for queue-to-queue flow control also applies to node-to-node flow control.
- The 'outstanding limit' is also constantly shrunk at a small but fixed rate by the system (for example, 1% every second). This allows automatic correction over time for 'outstanding limits' that may have grown large in a high-capacity environment but are now in a low-capacity environment, where the 'outstanding limit' is unnecessarily high. If this constant shrinking drops the 'outstanding limit' too low, then the previous mechanism (requesting more tokens, and more being given if the receiving node encounters a 'no data to send' condition) will detect it and increase it again.
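The constant-shrink rule above compounds over time; a minimal sketch (the function name and per-second stepping are illustrative assumptions):

```c
/* The outstanding limit shrinks at a small fixed rate (e.g. 1% per second)
   so limits grown in a high-capacity period decay once traffic drops.
   Applied as a compounding per-second multiplier. */
float shrink_outstanding_limit(float limit, int seconds, float ratePerSec) {
    for (int s = 0; s < seconds; s++)
        limit *= (1.0f - ratePerSec);
    return limit;
}
```

For example, a 1000-byte limit decays to roughly 990 bytes after one second at 1% per second, and keeps compounding downward until the request-more-tokens mechanism pushes it back up.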
- TCP/IP window size selection is important. If the window size in TCP/IP is too small, performance will suffer; if it is too large, system resources will be used up without increasing performance.
- This invention allows rapid convergence to the best window size using a
- A key metric that a node uses to determine which nodes it will send to is latency. If there are a thousand seconds of data remaining to send, then all paths with a latency to the destination of under 1000 seconds should be considered. If there is a very small amount of data and the latency to send it is 10 ms, then very few paths (and only the fastest) will be used to transfer data. [0686] This allows nodes to recruit as many or as few nodes as needed to ensure the fastest transfer of data. This technique allows us to implicitly increase bandwidth when needed by trading off latency that is not needed.
- the amount of data in transit is also limited by the size of buffer the sending node can allocate to that queue.
- The best size for the send buffer is such that its latency is:
    SendBufferLatency = Max(AllChosenDestinationLatencies) - Min(AllChosenDestinationLatencies)
- the node with the EUS sending the messages should allow this send buffer to grow to a point where the EUS can keep the queue 'at capacity' (in the same way as flow control works). This ensures that all 'chosen destinations' can be used as much as possible.
- the size of the re-order buffer has no relation to the number of messages
- Control messages will be broken into two groups. Both these groups will be individually bandwidth throttled based on a percentage of maximum bandwidth. Each directly connected node will have its own version of these two groups.
- The first bandwidth-throttled group sends these messages. These messages should be concatenated together to fit into the size of block that control messages fit into.
  1. Name-to-number mappings for queues needed for the following messages
  2. Standard flow control messages
  3. GUID probes
  4. Informing a node if it is now a 'Chosen Destination'
  5. HSPP messages
  6. Initial Queue Knowledge/To Infinity/From Infinity of HSPP queues
  7. Initial Queue Knowledge/To Infinity/From Infinity of non-HSPP queues
- The second group sends latency updates for queues. It divides the queues into three groups, and sends each of these groups in a round-robin fashion, interleaved with each other 1:1:1.
- the first two groups are created by ordering all queues using the value of 'fLatencyFromStream'. If the queue has multiple chosen destinations, then the 'chosen destination' with the lowest latency is used to decide which 'fLatencyFromStream' value we're going to use.
- The queues are ordered in ascending order, in a similar manner to that described previously in the single-path embodiment. They are divided into two groups based on how many updates can be sent in half a second using the throttled bandwidth. This ensures that the first group will be entirely updated frequently, and the rest will still be updated, but less frequently.
- Each latency update includes a value 'fUpdateLatency'.
- 'fUpdateLatency' is calculated separately for queues in each of the three groups. It is calculated as the amount of time that it takes to send all items in the group once. This value is added to the 'fUpdateLatency' of the chosen destination with the lowest 'fLatency'.
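The 'fUpdateLatency' computation above is a simple recurrence along the route. A sketch with illustrative names (the patent gives the rule in prose, not code):

```c
/* A queue's 'fUpdateLatency' at this node: the time to send the queue's
   whole update group once, plus the 'fUpdateLatency' reported by the
   chosen destination with the lowest 'fLatency'. Each hop adds its own
   group send time, so the value accumulates toward the ultimate receiver. */
float compute_update_latency(float groupFullRoundSendTime,
                             float lowestLatencyDestUpdateLatency) {
    return groupFullRoundSendTime + lowestLatencyDestUpdateLatency;
}
```

For instance, a node whose group takes 3 seconds to send once, with a chosen destination reporting 5 seconds, would advertise 8 seconds; this same accumulated value bounds GUID probe travel.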
- This value is also used when determining how far a GUID probe will travel.
- the time to send each of the three groups should be constantly updated based on current send rates.
- a queue can only be a member of one of these groups at a time. This is important, otherwise the 'fUpdateLatency' would be difficult to calculate.
- For example, if node A is in the data stream, and its time to update the group which the particular queue is in takes 3 seconds, it will tell all directly connected nodes that it is 3 seconds from the data stream. Alternatively, it could tell all directly connected nodes that it is 0 seconds from the data stream.
- This software could be used to integrate generating, transmission and consumption to deal with both ordinary changes and untoward events by making decisions based on predetermined criteria and acting immediately.
13. Used to enable remote computing by dynamically linking users and remote sites with no human intervention.
14. Used in air traffic control by managing and coordinating aircraft, air traffic and ground resources.
15. Used to coordinate and network varying communications technologies such as wireless, land line, satellite, computer and airborne systems.
16. Used to create efficient routes for the physical delivery of goods to various destinations, such routes able to be altered dynamically for varying circumstances such as traffic pattern changes, additions or deletions to the route destinations.
17. Used as a mathematical tool, similar to biological computing, for solving multiple simultaneous computations to find a correct solution, especially to complex problems that involve many criteria.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/598,020 US7961650B2 (en) | 2004-02-16 | 2005-02-16 | Network architecture |
EP05714444A EP1721414A4 (en) | 2004-02-16 | 2005-02-16 | Network architecture |
CA2558002A CA2558002C (en) | 2004-02-16 | 2005-02-16 | Network architecture |
JP2006553397A JP4611319B2 (en) | 2004-02-16 | 2005-02-16 | Network architecture |
US13/102,169 US20110267981A1 (en) | 2004-02-16 | 2011-05-06 | Network architecture |
Applications Claiming Priority (14)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002457909A CA2457909A1 (en) | 2004-02-16 | 2004-02-16 | Method and system for self-organizing reliable, multiple path data flow transmission of data on a network |
CA2,457,909 | 2004-02-16 | ||
US54434104P | 2004-02-17 | 2004-02-17 | |
US60/544,341 | 2004-02-17 | ||
CA002464274A CA2464274A1 (en) | 2004-04-20 | 2004-04-20 | System and method for a self-organizing, reliable, scalable network |
CA2,464,274 | 2004-04-20 | ||
CA002467063A CA2467063A1 (en) | 2004-05-17 | 2004-05-17 | System and method for a self-organizing, reliable, scalable network |
CA2,467,063 | 2004-05-17 | ||
CA2,471,929 | 2004-06-22 | ||
CA 2471929 CA2471929A1 (en) | 2004-06-22 | 2004-06-22 | System and method for a self-organizing, reliable, scalable network |
CA002476928A CA2476928A1 (en) | 2004-08-16 | 2004-08-16 | Pervasive mesh network |
CA2,476,928 | 2004-08-16 | ||
CA002479485A CA2479485A1 (en) | 2004-09-20 | 2004-09-20 | System and method for a self-organizing, reliable, scalable network |
CA2,479,485 | 2004-09-20 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/102,169 Continuation US20110267981A1 (en) | 2004-02-16 | 2011-05-06 | Network architecture |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005079001A1 true WO2005079001A1 (en) | 2005-08-25 |
Family
ID=34865560
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2005/000194 WO2005079001A1 (en) | 2004-02-16 | 2005-02-16 | Network architecture |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1721414A4 (en) |
JP (1) | JP4611319B2 (en) |
WO (1) | WO2005079001A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103916215B (en) * | 2014-03-14 | 2017-08-01 | 上海交通大学 | The implementation method of real-time mobile Ad hoc networks based on token passing mechanism |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040003111A1 (en) * | 2001-04-20 | 2004-01-01 | Masahiro Maeda | Protocol and structure for self-organizing network |
US20040018839A1 (en) * | 2002-06-06 | 2004-01-29 | Oleg Andric | Protocol and structure for mobile nodes in a self-organizing communication network |
US6788650B2 (en) * | 2002-06-06 | 2004-09-07 | Motorola, Inc. | Network architecture, addressing and routing |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002158669A (en) * | 2000-11-17 | 2002-05-31 | Sanyo Electric Co Ltd | Internet-connecting method |
JP3797157B2 (en) * | 2001-08-27 | 2006-07-12 | 日本電信電話株式会社 | Wireless node path registration method, wireless node, program, and recording medium recording program |
KR100916162B1 (en) * | 2001-11-29 | 2009-09-08 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Fingerprint database maintenance method and system |
US7117264B2 (en) * | 2002-01-10 | 2006-10-03 | International Business Machines Corporation | Method and system for peer to peer communication in a network environment |
-
2005
- 2005-02-16 WO PCT/CA2005/000194 patent/WO2005079001A1/en active Application Filing
- 2005-02-16 JP JP2006553397A patent/JP4611319B2/en not_active Expired - Fee Related
- 2005-02-16 EP EP05714444A patent/EP1721414A4/en not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See also references of EP1721414A4 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101203473B1 (en) | 2005-12-30 | 2012-11-21 | 삼성전자주식회사 | Method for beacon exchange between devices with asymmetric links and system thereof |
CN102291797A (en) * | 2011-07-04 | 2011-12-21 | 东南大学 | Method for designing wireless terminal having routing function |
CN103312574A (en) * | 2013-06-28 | 2013-09-18 | 北京奇艺世纪科技有限公司 | Interactive method and device for nodes in point-to-point network |
US11005776B2 (en) | 2018-10-08 | 2021-05-11 | EMC IP Holding Company LLC | Resource allocation using restore credits |
US10630602B1 (en) | 2018-10-08 | 2020-04-21 | EMC IP Holding Company LLC | Resource allocation using restore credits |
US11005775B2 (en) | 2018-10-08 | 2021-05-11 | EMC IP Holding Company LLC | Resource allocation using distributed segment processing credits |
WO2020076394A1 (en) * | 2018-10-08 | 2020-04-16 | EMC IP Holding Company LLC | Resource allocation using restore credits |
CN112805684A (en) * | 2018-10-08 | 2021-05-14 | Emc Ip控股有限公司 | Resource allocation using recovery borrowing |
GB2591928A (en) * | 2018-10-08 | 2021-08-11 | Emc Ip Holding Co Llc | Resource allocation using restore credits |
US11201828B2 (en) | 2018-10-08 | 2021-12-14 | EMC IP Holding Company LLC | Stream allocation using stream credits |
US11431647B2 (en) | 2018-10-08 | 2022-08-30 | EMC IP Holding Company LLC | Resource allocation using distributed segment processing credits |
GB2591928B (en) * | 2018-10-08 | 2023-05-17 | Emc Ip Holding Co Llc | Resource allocation using restore credits |
US11765099B2 (en) | 2018-10-08 | 2023-09-19 | EMC IP Holding Company LLC | Resource allocation using distributed segment processing credits |
US11936568B2 (en) | 2018-10-08 | 2024-03-19 | EMC IP Holding Company LLC | Stream allocation using stream credits |
Also Published As
Publication number | Publication date |
---|---|
JP2007523546A (en) | 2007-08-16 |
JP4611319B2 (en) | 2011-01-12 |
EP1721414A1 (en) | 2006-11-15 |
EP1721414A4 (en) | 2011-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7961650B2 (en) | | Network architecture |
Baras et al. | | A probabilistic emergent routing algorithm for mobile ad hoc networks |
US9608912B2 (en) | | Computing disjoint paths for reactive routing mesh networks |
Wang et al. | | Adaptive multipath source routing in ad hoc networks |
JP4060316B2 (en) | | Multipath reactive routing in mobile ad hoc networks |
EP1579716B1 (en) | | Routing scheme based on virtual space representation |
US8923305B2 (en) | | Flooding-based routing protocol having database pruning and rate-controlled state refresh |
Wu | | An extended dynamic source routing scheme in ad hoc wireless networks |
JP2005524336A (en) | | Policy processing of traffic in mobile ad hoc networks |
JP2005524318A (en) | | Traffic tracking in mobile ad hoc networks |
JP2005524363A (en) | | Channel assignment in mobile ad hoc networks |
Asokan | | A review of Quality of Service (QoS) routing protocols for mobile ad hoc networks |
WO2005079001A1 (en) | | Network architecture |
US7245640B2 (en) | | Packet origination |
US6615273B1 (en) | | Method for performing enhanced target identifier (TID) address resolution |
CA2558002C (en) | | Network architecture |
Raju et al. | | Scenario-based comparison of source-tracing and dynamic source routing protocols for ad hoc networks |
Lü et al. | | Adaptive swarm-based routing in communication networks |
Shu et al. | | Provisioning QoS guarantee by multipath routing and reservation in ad hoc networks |
Ziane et al. | | Inductive routing based on dynamic end-to-end delay for mobile networks |
CN116016336B (en) | | HRPL-based efficient inter-node communication method |
Layuan et al. | | A QoS multicast routing protocol for mobile ad-hoc networks |
Doshi et al. | | SAFAR: An adaptive bandwidth-efficient routing protocol for mobile ad hoc networks |
Deshpande et al. | | Distributed cache architecture for scalable quality of service for distributed networks |
Park et al. | | Distributed semantic service discovery for MANET |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AK | Designated states | Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
 | AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
 | WWE | Wipo information: entry into national phase | Ref document number: 2006553397; Country of ref document: JP |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | WWW | Wipo information: withdrawn in national office | Ref document number: DE |
 | WWE | Wipo information: entry into national phase | Ref document number: 2558002; Country of ref document: CA |
 | WWE | Wipo information: entry into national phase | Ref document number: 2005714444; Country of ref document: EP; Ref document number: 3378/CHENP/2006; Country of ref document: IN |
 | WWE | Wipo information: entry into national phase | Ref document number: 200580011384.6; Country of ref document: CN |
 | WWP | Wipo information: published in national office | Ref document number: 2005714444; Country of ref document: EP |
 | WWE | Wipo information: entry into national phase | Ref document number: 10598020; Country of ref document: US |
 | WWP | Wipo information: published in national office | Ref document number: 10598020; Country of ref document: US |