AU4740099A - Recursive traffic distribution IP/data network model - Google Patents
- Publication number
- AU4740099A (application number AU47400/99A)
- Authority
- AU
- Australia
- Prior art keywords
- node
- group
- level
- nodes
- units
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Description
P/00/011 28/5/91 Regulation 3.2
AUSTRALIA
Patents Act 1990
ORIGINAL
COMPLETE SPECIFICATION STANDARD PATENT Invention Title: "RECURSIVE TRAFFIC DISTRIBUTION IP/DATA NETWORK MODEL"
The following statement is a full description of this invention, including the best method of performing it known to us: Technical field This invention relates to a method and arrangement for transferring network node status information between nodes.
Background Art Least cost routing examines the network topography to determine the shortest path between the source of a message and its destination. This method does not take account of the load status of the individual links of the chosen path so one or more of the links may become overloaded, preventing or disrupting the delivery of the message to the destination.
An alternative proposal is to take into account the load status of the links when determining the chosen path. Thus the system may determine all the shortest paths and make the path selection on the basis of the path with the links carrying the least traffic. To implement this, it is necessary for all the nodes to exchange load status information. As the number of nodes and links in a network grows, the implementation of this technique requires the exchange of a large amount of load status traffic, as each node broadcasts the load status of its associated links to the other nodes.
In our co-pending application number 44470/99 (127045 SY), we disclose a network in which the message is spread over all the practical paths between the source and destination. An advantageous embodiment of that invention involves the elimination of paths having heavily loaded links. Again, the implementation of this technique requires the exchange of load status information between nodes, generating a large volume of traffic.
Disclosure of the Invention This specification discloses a network arrangement for a plurality of nodes, each node being connected to one or more other nodes by corresponding node links, the network being arranged into a recursive hierarchy of units having two or more levels, the nodes being the units of the first level of the hierarchy, the units of higher levels of the hierarchy being formed by groupings of the units of the previous level, wherein the units of a level exchange corresponding load status information.
Brief Description of the Drawings Figure 1 is a schematic representation of a network in which the nodes are arranged in a recursive hierarchy, in accordance with an embodiment of the invention.
Figure 2 represents a load status monitor.
Figure 3 illustrates the message structure for exchanging information at different levels.
Best Mode of Carrying out the Invention Figure 1 shows a network of nodes interconnected by links. According to the embodiment shown in Figure 1, the nodes are linked in a logical hierarchy.
The nodes, exemplified by the circles, some of which are numbered 101, 202, 303, are shown interconnected by node links, represented by the lines drawn between the nodes.
The nodes are formed into groups 10, 11, 12, 20, 30. The groups are interconnected by group links, for example 1001 between node 105 of group 10 and node 121 of group 12.
The group links are shown in the following Table 1.

TABLE 1: GROUP LINKS

Node | Group | Node | Group | Link |
---|---|---|---|---|
105 | 10 | 121 | 12 | 1001 |
112 | 11 | 122 | 12 | 1002 |
103 | 10 | 111 | 11 | 1003 |
123 | 12 | 202 | 20 | 1201 |
101 | 10 | 302 | 30 | 1301 |
102 | 10 | 303 | 30 | 1302 |

Preferably the group links have a larger traffic capacity than node links.
Group links may equate to regional trunks within a particular carrier's network, or to links between different carriers, different countries or different global regions, for example.
As shown in Figure 1, the levels of the hierarchy start with the nodes. The units of the second level are the groups of nodes. The units L31, L32, L33, of the third level are one or more groups, and the units of the fourth level, L41, L42 are formed by aggregating units of the third level. Thus L41 is the aggregation of L31 and L33, while L42 encompasses L32.
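The recursive hierarchy described above can be sketched in code. This is an illustrative sketch only, assuming a simple nested data structure; the specification does not prescribe any implementation, and the names `Unit` and `group` are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    """A unit at some level of the recursive hierarchy.

    Level-1 units are individual nodes; a unit at level n > 1
    is a grouping of units from level n - 1.
    """
    name: str
    level: int
    members: list["Unit"] = field(default_factory=list)

def group(name: str, members: list[Unit]) -> Unit:
    """Form a higher-level unit from units of the previous level."""
    assert members and all(m.level == members[0].level for m in members)
    return Unit(name, members[0].level + 1, members)

# Mirroring Figure 1: groups 10, 11, 12 aggregate into unit L31,
# and L41 is formed from L31 and L33, while L42 encompasses L32.
nodes = {n: Unit(n, 1) for n in ["101", "105", "113", "124", "201", "301"]}
g10 = group("10", [nodes["101"], nodes["105"]])
g11 = group("11", [nodes["113"]])
g12 = group("12", [nodes["124"]])
L31 = group("L31", [g10, g11, g12])
L32 = group("L32", [group("20", [nodes["201"]])])
L33 = group("L33", [group("30", [nodes["301"]])])
L41 = group("L41", [L31, L33])
L42 = group("L42", [L32])
print(L41.level)  # 4
```

The point of the recursion is that the same grouping operation is applied at every level, so the hierarchy can be extended to any depth.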
Preferably the nodes are grouped on the basis of communication path topography so that there are relatively few links in the shortest path between any two nodes in a group.
The groups themselves, 10, 11, 12, 20, 30, are linked by designated links 1001, 1002, 1003, 1201, 1301, 1302 connecting designated nodes in the corresponding groups.
Within each group, a master node is assigned, 104, 113, 124, 201, 301.
Because each node in a group knows the load status of all the links in that group, the master node has the information to enable it to compile available capacity reports adapted to meet the requirements of the higher levels.
The physical node interconnections are illustrated at A in Figure 1 and B,C and D illustrate the conceptual logical links for the second, third and fourth levels of hierarchy.
At level A, the nodes of each group communicate load status information to each of the other nodes of the corresponding group.
Thus the master nodes 104, 113, 124 of network 1, shown at B, are aggregated under the supervision of a single master node 113 which is assigned to the next higher level. Because, in the example shown, networks 2 and 3 have only one group each, the same masters 201, 301 are used at all levels.
At level C, the master nodes 113, 201, 301 exchange information as to the overall load status of their associated networks 1, 2, and 3.
The nodes at level C are then 113, 201, 301. By then aggregating 113 and 301, one of these two can be designated to manage the level D information exchange for both network 1 and network 3. Thus at level D, 201 and 301 can exchange information on the available capacity for the regions covered by L42 and L41.
At level C, information on the capacity of networks 1, 2 and 3 is exchanged between the networks. At level B, information on the capacity of groups 10, 11, 12, 30 is exchanged between the groups.
At level A, the nodes interchange information on traffic capacity at the node link level, within the groups.
Figure 2 shows an arrangement for monitoring the available capacity of the links connected to a node.
For each link connected to a node there is a buffer 51, 52, 53, e.g. in the form of a FIFO.
The traffic level monitor 50 checks the level of the contents of the buffers to measure the available capacity on the basis of the speed of the link associated with the buffer. The result of the monitoring is then reported to the other nodes in the same group.
In a simplified measuring system the monitor may report whether or not a link has spare capacity, e.g. by checking whether a buffer's content is above or below a predetermined threshold.
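The buffer-based monitoring scheme above can be sketched as follows. This is a hedged sketch, not the claimed apparatus: the class name, the 0.8 occupancy threshold, and the capacity estimate scaled by free buffer share are all illustrative assumptions.

```python
from collections import deque

class LinkMonitor:
    """Illustrative monitor for one link's FIFO buffer (cf. Figure 2)."""

    def __init__(self, link_id: str, link_speed_bps: int,
                 buffer_limit: int, threshold: float = 0.8):
        self.link_id = link_id
        self.link_speed_bps = link_speed_bps
        self.buffer = deque()            # FIFO buffer associated with the link
        self.buffer_limit = buffer_limit
        self.threshold = threshold       # occupancy fraction taken as "full" (assumed)

    def has_spare_capacity(self) -> bool:
        """Simplified binary report: occupancy below threshold means spare capacity."""
        return len(self.buffer) / self.buffer_limit < self.threshold

    def available_capacity_bps(self) -> float:
        """Finer-grained estimate: scale link speed by the free buffer share."""
        free = 1 - len(self.buffer) / self.buffer_limit
        return self.link_speed_bps * free

mon = LinkMonitor("1001", link_speed_bps=10_000_000, buffer_limit=100)
mon.buffer.extend(range(90))             # buffer 90% full
print(mon.has_spare_capacity())          # False
```

A node would run one such monitor per attached link and report the results to the other nodes in its group, per the text above.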
The load status information exchange is carried out on the following basis.
The nodes within a group each notify the other nodes within that group of the load status of the links connected to the notifying node.
At the group level, each group notifies the other groups of the load status of the links connected to the notifying group and of the load status of internal paths within the group available for interconnecting the group links connected to the notifying group.
For example, Group 12 is connected to Group 20 via link 1201, to Group 10 via link 1001, and to Group 11 via link 1002.
Preferably, the node designated as master node manages the interchange of information between the groups.
Table 2 shows the master nodes for each group.
TABLE 2

GROUP | MASTER NODE |
---|---|
10 | 104 |
11 | 113 |
12 | 124 |
20 | 201 |
30 | 301 |

As can be seen in Figure 1, the master nodes take part in the higher level exchanges but their number is progressively reduced by the recursive grouping.
Thus, in the embodiment shown in Figure 1, while there are 5 master nodes shown at level B, there are only 3 at level C and 2 at level D.
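The progressive reduction of master nodes can be sketched numerically. This is illustrative only; the choice of the first member as master is an arbitrary assumption (in the embodiment, 113 is designated master of its three-member group, so it is listed first here).

```python
def designate_masters(groupings: list[list[str]]) -> list[str]:
    """Pick one master per grouping (here, by convention, the first member)."""
    return [members[0] for members in groupings]

level_b = ["104", "113", "124", "201", "301"]            # 5 group masters
level_c = designate_masters([["113", "104", "124"],      # network 1
                             ["201"],                    # network 2
                             ["301"]])                   # network 3
level_d = designate_masters([["201"],                    # region L42
                             ["301", "113"]])            # region L41
print(len(level_b), len(level_c), len(level_d))  # 5 3 2
```

Each recursive grouping step keeps one representative per group, which is why the volume of status traffic at the higher levels stays small.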
Preferably, the grouping is carried out on the basis of proximity in the sense of the number of links in the path. Of course this is not a strict rule at the node level because the nodes at either end of a group link are joined by a single link, while there may be more than 2 links between nodes within a group. Other factors which influence grouping are geographical proximity and network ownership, as well as the traffic flows.
For example, the nodes of network 2 may be geographically close to nodes of network 1, but network 1 may be owned by a different carrier from network 2.
At level D, the nodes 201 and 301 exchange information on the available capacity between network 3 and network 2. This information would, for example, be based on the load status of links 1201, 1301, 1302, and the capacity across network 1 between link 1201 and the links 1301, 1302. The information need only identify the maximum available capacity at the time, which varies in accordance with the load on the various network elements.
For the sake of clarity the information will be given the following names: Level D: Regional; Level C: Network; Level B: Group; Level A: Node.
Regional information may be, for example, the maximum available capacity between the "electrically" remotest groups. The term "electrically" refers to the number of links and may include cable, optical and radio links.
Network information may be, for example, the capacity between the various networks, including the trans-network capacity between the network links 1201, 1301, 1302.
Group information could be typified by the capacity between groups, including the trans-group capacity between the group links.
Node information is the information broadcast by a node to the other nodes within its group concerning the load status of the node and its associated links.
Group information can be deduced from node information. Each node in a group knows the load status of all the nodes in that group. Thus the master node 124 in group 12 knows the status of group link 1001 from node 121, group link 1002 from node 122, and group/network link 1201 from node 123, as well as the status of all the internal nodes and links within group 12. Node 124 can therefore calculate the available capacity across group 12 between any pair of the links 1201, 1001, 1002. Preferably the master node 124 would use the "all practical paths" algorithm of our Australian Patent application 44470/99 (Docket No. 127045 SY) to calculate the trans-group capacity. This group information is interchanged between the group master nodes 201, 124, 113, 104, 301 at level B.
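The "all practical paths" algorithm of the co-pending application is not reproduced in this specification. As a hedged stand-in, the sketch below computes the single widest path (maximum bottleneck capacity) between two border nodes of a group, which gives a lower bound on the trans-group capacity a master node could report; the internal link capacities shown for group 12 are hypothetical.

```python
import heapq

def widest_path_capacity(links: dict[tuple[str, str], float],
                         src: str, dst: str) -> float:
    """Max over paths of the minimum link capacity along the path."""
    graph: dict[str, list[tuple[str, float]]] = {}
    for (a, b), cap in links.items():
        graph.setdefault(a, []).append((b, cap))
        graph.setdefault(b, []).append((a, cap))
    # Modified Dijkstra: maximise the path bottleneck instead of minimising cost.
    best = {src: float("inf")}
    heap = [(-float("inf"), src)]
    while heap:
        neg_cap, u = heapq.heappop(heap)
        if u == dst:
            return -neg_cap
        for v, cap in graph.get(u, []):
            bottleneck = min(-neg_cap, cap)
            if bottleneck > best.get(v, 0):
                best[v] = bottleneck
                heapq.heappush(heap, (-bottleneck, v))
    return 0.0

# Hypothetical internal capacities of group 12 between its border
# nodes 121 (group link 1001), 122 (1002) and 123 (1201):
internal = {("121", "122"): 5.0, ("122", "123"): 3.0, ("121", "123"): 2.0}
print(widest_path_capacity(internal, "121", "123"))  # 3.0
```

Here the two-hop path 121-122-123 (bottleneck 3.0) beats the direct link of capacity 2.0, illustrating why the master must consider internal paths and not just direct links.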
The units of the level B group domain are again grouped together, in this embodiment, into 3 network groups. The network groups include two one-member groups 201 and 301, and one three-member group 124, 113, 104. The network master of each one-member group is the sole member of the group, while 113 is designated as the master of the three-member network group.
The three network masters from level B interchange network information at the level C network domain. The information relates to the network links connecting the respective networks, and the trans-network information relating to the capacity between the pairs of network links.
At the regional domain, level D, the network masters 201, 113, 301 have been formed into two groups, resulting in two regional masters 201, 301, which exchange information on the available capacity between the two regions.
The regional master nodes 201, 301, convey the regional link capacity information to the other regional nodes. In the present embodiment 301 conveys the information to 113. 201 is the only regional node in the other regional grouping.
The regional nodes 201, 113 and 301 are all network master nodes and they convey the inter-regional and inter-network capacity information to the network level nodes. In our embodiment, 113 conveys this information to the nodes 104, 124.
Each of the network level nodes 201, 124, 113, 104, 301 is a group master and relays the higher level information to each of the nodes in its group.
The grouping of the units at each level means that the information exchanged at each level becomes more generalised.
This means that a node has detailed capacity information about the other nodes in its group, capacity information about other groups in its network, capacity information about the other networks in its region, and information about the inter-regional capacity.
In a preferred embodiment the group master handles the interchange of node link capacity information. Each node, instead of broadcasting its load status to all the other nodes in the group, sends the information only to the group master, which collates the information from each node and relays the information to the other nodes. The message from the group master preferably incorporates the higher level load status information, so that each node has an overall picture of the entire system.
Thus the group master may broadcast a message including the information shown in Figure 3. The first segment RL includes the load status at the regional link level D. A second portion of the payload includes a number of segments of information on the inter-network load status NL. A third portion includes segments GL on the inter-group load status, and the fourth portion includes segments on the load status of the nodes within the group.
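The four-portion broadcast of Figure 3 can be sketched as follows. This is a minimal sketch assuming a simple keyed layout; the specification does not define an encoding, so only the segment names RL, NL and GL come from the description, and everything else (function name, field contents) is illustrative.

```python
def build_group_master_message(regional, network, group, node):
    """Assemble the four ordered portions of the group master broadcast."""
    return {
        "RL": regional,    # regional-link load status (level D)
        "NL": network,     # inter-network load status segments (level C)
        "GL": group,       # inter-group load status segments (level B)
        "node": node,      # per-node link load status within the group (level A)
    }

msg = build_group_master_message(
    regional=[{"link": "L41-L42", "spare": True}],
    network=[{"link": "1201", "spare": True}, {"link": "1301", "spare": False}],
    group=[{"link": "1001", "spare": True}],
    node=[{"node": "121", "links": {"1001": True}}],
)
print(list(msg))  # ['RL', 'NL', 'GL', 'node']
```

Because the higher-level portions are aggregates, the message stays compact even though it spans the entire hierarchy.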
Claims (8)
1. A network arrangement for a plurality of nodes, each node being connected to one or more other nodes by corresponding node links, the network being arranged into a recursive hierarchy of units having two or more levels, the nodes being the units of the first level of the hierarchy, the units of higher levels of the hierarchy being formed by groupings of the units of the previous level, wherein the units of a level exchange corresponding load status information.
2. An arrangement as claimed in claim 1 wherein, within each group of units, a master entity is designated, the master entity conveying inter-unit load status information relating to the units of that level to the next higher level.
3. An arrangement as claimed in claim 1 or claim 2 wherein, in the first level, a selected node in each group is designated as the master node for the corresponding group, the master node managing the transfer of node load status information within its corresponding group.
4. An arrangement as claimed in claim 1, claim 2 or claim 3 wherein the load status information includes information on the available traffic capacity between the ports of each unit.
5. An arrangement as claimed in any one of claims 1 to 4 wherein each node includes node load status monitoring means to monitor the load status of the links connected to the node.
6. An arrangement as claimed in any one of claims 1 to 5 wherein at least one node of each second level group is connected to a node of at least one other second level group via a corresponding group link whereby group load status information can be interchanged.
7. An arrangement as claimed in claim 6 wherein the units of the third level are formed by mutually interconnected second level units.
8. A network arrangement for interchanging load status information substantially as herein described with reference to the accompanying drawings. DATED THIS EIGHTEENTH DAY OF AUGUST 1999 ALCATEL
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU47400/99A AU4740099A (en) | 1999-09-06 | 1999-09-06 | Recursive traffic distribution IP/data network model |
AU68099/00A AU6809900A (en) | 1999-09-06 | 2000-08-30 | Recursive traffic distribution ip/data network model |
PCT/AU2000/001023 WO2001019019A1 (en) | 1999-09-06 | 2000-08-30 | Recursive traffic distribution ip/data network model |
EP00955957A EP1216540A4 (en) | 1999-09-06 | 2000-08-30 | Recursive traffic distribution ip/data network model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU47400/99A AU4740099A (en) | 1999-09-06 | 1999-09-06 | Recursive traffic distribution IP/data network model |
Publications (1)
Publication Number | Publication Date |
---|---|
AU4740099A true AU4740099A (en) | 2001-03-08 |
Family
ID=3734255
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU47400/99A Abandoned AU4740099A (en) | 1999-09-06 | 1999-09-06 | Recursive traffic distribution IP/data network model |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1216540A4 (en) |
AU (1) | AU4740099A (en) |
WO (1) | WO2001019019A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FI20010552A0 (en) | 2001-03-19 | 2001-03-19 | Stonesoft Oy | Processing of state information in a network element cluster |
GB0707666D0 (en) * | 2007-04-20 | 2007-05-30 | Prolego Technologies Ltd | Analysis of path diversity structure in networks using recursive abstraction |
EP2963875B1 (en) * | 2014-07-02 | 2018-02-28 | ABB Schweiz AG | Method for processing data streams including time-critical messages of a power network |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE59207963D1 (en) * | 1991-10-15 | 1997-03-06 | Siemens Ag | METHOD FOR NON-HIERARCHARIC ROUTING IN A COMMUNICATION NETWORK |
JP2581011B2 (en) * | 1993-07-23 | 1997-02-12 | 日本電気株式会社 | Local area network traffic control system |
EP0660569A1 (en) * | 1993-12-22 | 1995-06-28 | International Business Machines Corporation | Method and system for improving the processing time of the path selection in a high speed packet switching network |
US5872773A (en) * | 1996-05-17 | 1999-02-16 | Lucent Technologies Inc. | Virtual trees routing protocol for an ATM-based mobile network |
US5905871A (en) * | 1996-10-10 | 1999-05-18 | Lucent Technologies Inc. | Method of multicasting |
ATE315861T1 (en) * | 1997-02-18 | 2006-02-15 | Cit Alcatel | ROUTING METHOD IN HIERARCHICAL STRUCTURED NETWORKS |
JP3063721B2 (en) * | 1997-04-30 | 2000-07-12 | 日本電気株式会社 | Topology information exchange device and machine-readable recording medium recording program |
DE19742582C1 (en) * | 1997-09-26 | 1999-04-29 | Siemens Ag | Telecommunication network management method |
DE19746904B4 (en) * | 1997-10-23 | 2004-09-30 | Telefonaktiebolaget L M Ericsson (Publ) | Traffic data evaluation device and associated method for a network with dynamic switching |
-
1999
- 1999-09-06 AU AU47400/99A patent/AU4740099A/en not_active Abandoned
-
2000
- 2000-08-30 EP EP00955957A patent/EP1216540A4/en not_active Withdrawn
- 2000-08-30 WO PCT/AU2000/001023 patent/WO2001019019A1/en not_active Application Discontinuation
Also Published As
Publication number | Publication date |
---|---|
EP1216540A1 (en) | 2002-06-26 |
EP1216540A4 (en) | 2005-01-05 |
WO2001019019A1 (en) | 2001-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104488238B (en) | The system and method controlled for cluster link aggregation in network environment | |
CN102204188B (en) | Routing computation method and host node device in virtual network element | |
CN101605278B (en) | Method for realizing adaptive signaling in distributed control collaborative optical networks | |
CA2212278C (en) | Route finding in communications networks | |
US10021025B2 (en) | Distributed determination of routes in a vast communication network | |
US20070242607A1 (en) | Method and system for controlling distribution of network topology information | |
US20030031124A1 (en) | Inter-working mesh telecommunications networks | |
CN101227377B (en) | Method for implementing shared risk link circuit group separation crossing field path | |
CN104871490B (en) | The multipath communication device of energy ecology and its method for distributing business for improving energy ecology can be improved | |
EP1083696A3 (en) | System and method for packet level distributed routing in fiber optic rings | |
US9762479B2 (en) | Distributed routing control in a vast communication network | |
WO2004066641A3 (en) | Routing signaling messages to the same destination over different routes using message origination information | |
US8948178B2 (en) | Network clustering | |
CN101674217B (en) | Method for realizing permanent ring network protection in MESH network | |
CN107547365A (en) | A kind of message transmissions routing resource and device | |
CN106941455A (en) | The method and device that balanced load is shared | |
CN107888492A (en) | A kind of method and apparatus of VRRP load balancing | |
CN106875501A (en) | A kind of highway tolling system multichannel connection communication method | |
CN101753455A (en) | Retransmission method and device | |
CN100546273C (en) | The processing method of multiplex section loop chain road in ASON | |
CN106453121B (en) | A kind of link dynamic load configuration method, system and server | |
CN1953409A (en) | A networking method for semi-network configuration of network and its system | |
AU4740099A (en) | Recursive traffic distribution IP/data network model | |
Kim et al. | Adaptive packet routing in a hypercube | |
CN110139173A (en) | A kind of network dividing area method reducing optical transfer network end-to-end time delay |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NAA1 | Application designating australia and claiming priority from australian document |
Ref document number: 6809900 Country of ref document: AU |
MK1 | Application lapsed section 142(2)(a) - no request for examination in relevant period |