WO2001019038A1 - System and method for internodal information routing within a communications network - Google Patents

System and method for internodal information routing within a communications network

Info

Publication number
WO2001019038A1
WO2001019038A1 (PCT/SE1999/001545, SE9901545W)
Authority
WO
WIPO (PCT)
Prior art keywords
node
routing
nodes
link
communication system
Prior art date
Application number
PCT/SE1999/001545
Other languages
English (en)
Inventor
Karl Johan Mårten SUNDLING
Christer GILÉN
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/SE1999/001545 priority Critical patent/WO2001019038A1/fr
Priority to GB0205944A priority patent/GB2371944B/en
Priority to AU14205/00A priority patent/AU1420500A/en
Priority to DE19983979T priority patent/DE19983979T1/de
Publication of WO2001019038A1 publication Critical patent/WO2001019038A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/22Alternate routing

Definitions

  • the present invention relates to communications networks having a number of interconnected components or nodes, and, in particular, to a system and method for improving internodal communications within a network of interconnected nodes.
  • IT information technology
  • a typical IT system or intranet may contain dozens of computers, servers and peripheral equipment (printers, facsimile machines, etc.) interconnected in a lattice or network.
  • switching processors and other components are also becoming increasingly interconnected.
  • intercommunications between the various components, also referred to herein as nodes, are also growing rapidly with the increasing thirst for information sharing resources.
  • real-time communications are necessary to coordinate phone calls therethrough.
  • Existing protocols for implementing internodal data communications, although generally providing sufficient speed, efficiency and error handling for large networks and lengthy data communications, are not well-suited for small information package exchange. It has been found that for internodal communications of roughly 50 bytes or less between a small number, e.g., under 40, of nodes, existing protocols are not efficient. In fact, the "overhead" for such small scale communications under existing protocols in addressing, routing and error checking is often much larger than the original information content.
  • It is a first object of the present invention to provide an improved system protocol so that small information packages may be transported internodally with only a small amount of overhead.
  • the present invention is directed to a system and method for facilitating the routing of small information packages between nodes in a network.
  • the system and method of the present invention utilize a routing table at each node of the network.
  • the routing table for each node contains a list of the nodes, ordered the same in each table, and an indexing value is utilized to refer to particular node values.
  • Internodal linkage information is also stored in the routing table along with alternative links should the primary link fail. The addition/deletion of nodes and the failure of any link are handled by an update function.
  • FIGURE 1 is an exemplary network system having a relatively small number of nodes, upon which the principles of the system and method of the present invention may be employed;
  • FIGURE 2 is a routing table employed at the various nodes within the network system shown in FIGURE 1 and exemplary contents therein;
  • FIGURE 3 illustrates further examples of routing tables, such as shown in FIGURE 2, used in conjunction with the system and method of the present invention.
  • FIGURE 4 is a state diagram illustrating various states for various nodes when practicing the system and method of the present invention.
  • FIGURE 5 is an illustration of a message header utilized in practicing the system and method of the present invention.
  • FIGURE 6 is an illustration of a link command used in the present invention.
  • FIGURE 7 is an illustration of another link command used in the present invention.
  • FIGURE 8 is an illustration of a routing command used in the present invention.
  • FIGURE 9 is an illustration of an acknowledgment command used in the present invention.
  • FIGURE 10 is an illustration of another acknowledgment command used in the present invention.
  • FIGURE 11 is an illustration of another routing command used in the present invention.
  • FIGURE 12 is an illustration of another acknowledgment command used in the present invention.
  • FIGURE 13 is an illustration of a sample network used in conjunction with the description to demonstrate the messaging capabilities of the system and method of the present invention.
  • With reference to FIGURE 1 of the drawings, there is illustrated an exemplary embodiment of a small, i.e., four node, network within which the principles of the present invention may be implemented. It should, of course, be understood that the principles of the present invention may be implemented on larger (and smaller) networks of nodes.
  • a network, generally designated by the reference numeral 100, contains therein various nodes, i.e., nodes N1, N2, N3 and N4, as well as a number of links, segments or paths therebetween.
  • each of the nodes constitutes or contains therein a computer, server or other processing unit, i.e., units U1, U2, U3 and U4.
  • node N1 contains unit U1 therein and has two links therefrom, i.e., a first link N1:L1 which connects node N1 to node N3 across a first internodal segment 105, and a second link N1:L2 which connects node N1 to node N2 across a second segment 110.
  • node N2 contains unit U3 therein and has two links therefrom, i.e., a first link N2:L1 which connects node N2 to node N1 across the segment 110, and a second link N2:L2 which connects node N2 to node N3 across another segment 115.
  • Node N3, likewise, contains unit U4 therein and has three links: (1) a link N3:L1 which connects node N3 to node N4 across a further segment 120, (2) a link N3:L2 which connects node N3 to node N2 across the segment 115, and (3) a link N3:L3 which connects node N3 to node N1 across the segment 105.
  • node N4 contains unit U2 therein and has only one link, i.e., link N4:L1, which connects node N4 to node N3 across the segment 120. Intercommunication between the nodes according to the present invention is accomplished through the utilization of a simplified and uniform addressing scheme, which is described in more detail later in this specification.
  • the addressing scheme of the present invention employs address indexing, whereby the full identifier or address of a node or processing unit need not be employed to specify that unit. Instead, a smaller identifier is utilized to economize on message size and thereby increase transmission speed.
  • addresses and identifiers are employed, e.g., the aforementioned unique nodal unit address, a smaller index identifier and a physical or topological network address, all discussed in more detail below.
  • the aforementioned nodal unit address is a concatenation of various information relating to that unit, e.g., the unit's product number and serial number, forming a unique identifier for the unit, e.g., a mother board or processor. It should, therefore, be understood that if the unit were replaced at a given node, the identity or unit address for that node, e.g., unit U1 in node N1 of FIGURE 1, would change, e.g., to unit U5.
  • the improvements of the system and method of the present invention are most suitable to relatively static networks that infrequently add or delete units or nodes. It should also be understood, however, that since the unit address information is typically utilized infrequently, e.g., usually during network setup only, the lengthy and ungainly unit address identifiers of the prior art may be employed to form the simpler index identifiers that serve the same function in the present invention using a fraction of the transmission bandwidth.
  • Shown in FIGURE 2 is an exemplary routing table 200 which is preferably utilized at each of the nodes of the network 100 shown in FIGURE 1.
  • the first column of the routing table 200 contains the above unit addresses of the various nodes within network 100, which are sorted in accordance with the values of the respective identities or unit addresses, identified in sorted order as units U1 to U4 in FIGURE 1.
  • the unit addresses constitute an alphanumeric string of up to 32 characters.
  • the ordering of the routing table or list 200 is independent of the configuration or topology of the network 100, and is, preferably, solely dependent on the alphanumeric (or other defined) ordering of the particular unit addresses of all of the nodes, i.e., U1 to U4, in the given network 100.
  • the remaining nodes bear no ordinal relationship: unit U2 is paired with node N4; unit U3 with node N2; and unit U4 with node N3.
  • each node in the network 100 contains a respective routing table 200 therein, each having the same unit address entries U1 to U4 in that order.
  • Since each routing table 200 is sorted using the aforedescribed unit addresses, each routing table 200 in each node of network 100 has a uniform structure. With reference now to FIGURE 3, there are illustrated routing tables for the respective nodes of network 100.
  • each routing table is ordered consistently, i.e., according to the particular unit addresses U1 to U4 within network 100, along with the corresponding node associated therewith, i.e., nodes N1, N4, N2 and N3, respectively.
  • each node within network 100 need not transmit the lengthy, e.g., 32-byte, unit address identification to other nodes within the network 100. Instead, only a short, e.g., 8-bit, integer reference or index is needed to refer to a given unit address within each ordered list.
  • unit U1 and node N1 are referenced by an index of zero; unit U2 and node N4 by an index of one; and units U3 and U4 (nodes N2 and N3) by indices of two and three, respectively.
  • Using an 8-bit variable would allow up to 256 distinct unit index addresses.
  • when a unit address (node device) is removed from or added to the network 100, the ordering thereafter changes and the index values would have to be updated, as discussed further hereinafter and sketched below.
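  • As a rough illustration of the index derivation described above, the following Python sketch (not part of the patent; the function name build_index and the placeholder address strings are illustrative) sorts the unit addresses and assigns each a small integer index, assuming the 8-bit limit of 256 entries mentioned above:

```python
def build_index(unit_addresses):
    """Derive short integer index addresses from the lengthy unit addresses.

    The unit addresses (alphanumeric strings of up to 32 characters) are
    sorted identically at every node, so a given unit ends up with the same
    index network-wide.
    """
    ordered = sorted(unit_addresses)           # common precedence, e.g. alphanumeric order
    if len(ordered) > 256:                     # an 8-bit index allows up to 256 entries
        raise ValueError("too many units for an 8-bit index")
    return {address: index for index, address in enumerate(ordered)}


# Placeholder address strings for the four units of FIGURE 1:
indices = build_index(["U1", "U2", "U3", "U4"])
assert indices["U1"] == 0 and indices["U2"] == 1   # U2/node N4 is referenced by index one
```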
  • Another address utilized in the present invention is a physical address associated with the respective node's physical location within the network 100.
  • the physical address field pertains to the topology or physical interrelationships between the discrete nodes, e.g., node N3 has a particular value associated therewith which is separate and distinct from the other nodes in network 100.
  • the topology and connectivity information between the various nodes within a network 100 is typically defined when the network 100 is setup and remains the same even if a particular unit, e.g., a new motherboard or processor (U) for a given node, is exchanged.
  • the values for the physical address in the preferred embodiment of the present invention are also short integers, facilitating their use within a short internodal message. It should be understood, however, that there is no correlation between the physical addresses and the routing protocol, and that the physical addresses are needed when a "user" at the transport layers needs to send data.
  • the improved routing protocol of the present invention needs to be initialized, although it should be understood that no particular set-up, start or configuration phase is required for the present invention, i.e., any configurations or reconfigurations may occur at any time.
  • a routing table 200 such as the one illustrated in FIGURE 2, is created for each node.
  • the routing table 200 is built up with information from routing messages received from each node in the network 100.
  • each node preferably sends a routing message on all of the links to which it is connected.
  • the routing message contains information on the node's unit address, i.e., the lengthy identifier and the number of links needed to reach it.
  • When a node receives a routing message on a link, the node checks if this information has already been received. If the information in the message has been received before, it will be discarded; otherwise, the information, together with information on which link the message was received on, is stored in the routing table 200. The receiving node will also add one to the number of links needed to reach the specified node in the message before forwarding the message on all links except the link the message was received on.
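  • The receive-and-forward behaviour just described might be sketched as follows; the helper names (routing_table, already_knows, store, send) are assumptions, not part of the patent:

```python
def on_routing_message(node, message, incoming_link):
    """Handle a routing message announcing the unit address of some node and
    the number of links needed to reach it."""
    if node.routing_table.already_knows(message):
        return                                     # information received before: discard it
    # Store the route together with the link the message arrived on.
    node.routing_table.store(message.unit_address, incoming_link, message.link_count)
    message.link_count += 1                        # one more link is needed from here on
    for link in node.links:
        if link is not incoming_link:              # forward on all links except the source link
            node.send(link, message)
```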
  • the routing table 200 contains a list of alternative links to reach that node, the list sorted according to the number of links involved in each alternative.
  • a node will use a second alternative link in case the first alternative link goes down, and a third if the preceding two go down, etc.
  • the routing table contains a row of information for each node (and corresponding units) in the network.
  • the rows in all routing tables are sorted by the node's unit address so that all routing tables in all nodes will have their rows sorted in the same order.
  • the routing tables may be sorted differently for a short time after something has changed in the network, e.g., a unit in a node has been replaced, before all routing tables have been updated again.
  • the routing table information can be altered if any of the following events occur: (1) an incoming link, that was previously not available to the node, comes up working; (2) an incoming link, that was previously available to the node, turns out not to work; and (3) an incoming routing message on a link indicates an update change. If any of the routing tables is updated with route information that affects the first alternative route, the nodes will share this information with their neighbors. This contributes to a chain reaction where everything is triggered from this type of event. Every node has this type of behavior so that the changes in the routing tables can be passed on to the neighboring nodes.
  • Routing table 300 contains a first or primary link to node N4 in the second row, i.e., link N1:L1, and a secondary link N1:L2 should the primary link be down.
  • the unit index address used in referencing node N4 in this example is one, i.e., the second listing in the ordered list of routing table 300.
  • the source node N1, after consulting the corresponding routing table 300, sends the message on the primary link N1:L1, which corresponds to segment 105 in FIGURE 1.
  • Upon receipt of the message at node N3, i.e., the terminating end of segment or link 105, node N3 examines the message to obtain information on the message's destination.
  • the message includes the unit index address of the destination node, i.e., one, providing node N3 with a simplified indication of where to send the message, i.e., check row two (index one) of the routing table 320, which specifies link N3:L1 as the primary link. No secondary link is specified for this example due to the topology of the network 100.
  • Node N3, after consulting its routing table 320, forwards the message along with the index value on the link N3:L1, which corresponds to segment 120, to destination node N4.
  • node N4 examines the message, retrieves the aforementioned unit index address (one) and examines its own routing table 330 to determine where the message is destined, as above with node N3. As shown in FIGURE 3, however, no links are provided, indicating a final destination node. For assurance, node N4 verifies that the unit address indicated by the index, i.e., unit U2 for index value one, corresponds. If so, node N4 sends an acknowledgment message back to node N1 using the aforedescribed mechanism. Should node N1 not receive the acknowledgment, node N1 retransmits the message.
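  • The hop-by-hop forwarding illustrated by the walk-through above can be summarized by this sketch; link-failure handling and acknowledgment details are simplified, and all object and method names are illustrative:

```python
def route(node, message):
    """Forward a message toward its destination using only the short index address."""
    row = node.routing_table[message.dest_index]    # rows are ordered identically in every node
    if not row.links:                               # no link listed: this is the destination
        if row.unit_address == node.unit_address:   # verify the index really designates this unit
            node.deliver(message)
            node.send_acknowledgment(message)       # the originator retransmits if this is lost
        return
    for link in row.links:                          # primary link first, then the alternatives
        if link.is_up():
            node.send(link, message)
            return
```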
  • a particularly preferred embodiment of the system and method of the present invention is within a distributed system of interconnected processors serving a particular overall task, e.g., a digital telecommunications switching system such as exemplified in Applicants'
  • a particularly preferred embodiment at the present time is utilized in an extension module group in which a small number of processors, e.g., five to seven, form a small group or network serving a large number of telephone extension numbers, e.g., about 2,000.
  • the invention is particularly directed to such systems and methods where the unit or device addresses are uniformly sorted in each routing list in a predetermined manner across all of the processors and uniquely indexable in each.
  • OSI Open Systems Interconnection
  • the network layer is responsible for delivering messages to one specified destination node, as described hereinabove, or to all nodes with the assistance of the lower layers.
  • the network layer is connected via one or more segments or links to its neighboring nodes in the network 100.
  • Each link at the network layer is made up of a number of 64 kbit timeslots defined by a configuration application.
  • the processors in the presently preferred extension module group, for example, share bandwidth across a common link to a local station, such as a node within a Public Switched Telephone Network (PSTN), illustrated by a link segment 125 in FIGURE 1.
  • PSTN Public Switched Telephone Network
  • the common link 125 may have a high bandwidth capacity, e.g., two Mbits, which is time shared by the various nodes in 64 kbit timeslots.
  • a routing table such as the routing table 200 in FIGURE 2, is used to find the appropriate link when transferring messages to a particular destination node.
  • a setup routine is necessary to generate the requisite routing tables for each network node.
  • An originating node broadcast message is first sent on all links to that node and back to the transport layer because the Transport Service Access Points (TSAPs) are not allowed to communicate directly with each other in one transport layer entity. In all other node broadcasts, messages are sent on all links except the link where the message is received and the message is delivered to the transport layer.
  • TSAPs Transport Service Access Points
  • the aforementioned routing tables maintain information about all possible routes to reach a given node, as well as a cost associated therewith.
  • the cost for each route is calculated as the sum of the delay for each link in the actual route.
  • the delay for one link may be normalized to 1,000/Bandwidth (BW), where BW is the number of 64 kbit timeslots in the link. For example, a link with 4 timeslots has a set cost of 250, and the term 1,000/BW is the delay for an 8-byte package in microseconds. Accordingly, the delay to reach a particular node will depend upon the number of discrete nodes to pass and the available bandwidth.
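  • The cost arithmetic above is simple enough to state directly; the 1,000/BW normalization and the 4-timeslot example are taken from the text, while the function names are merely illustrative:

```python
def link_delay(timeslots):
    """Delay of one link, normalized to 1,000/BW microseconds for an 8-byte
    package, where BW is the number of 64 kbit timeslots in the link."""
    return 1000 // timeslots


def route_cost(timeslots_per_link):
    """Cost of a route: the sum of the delays of every link along it."""
    return sum(link_delay(ts) for ts in timeslots_per_link)


assert link_delay(4) == 250          # the 4-timeslot example given in the text
assert route_cost([4, 4]) == 500     # two such links traversed in series
```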
  • in the event of a link failure, the routing table 300 for node N1 would make the secondary link, i.e., N1:L2, the primary link and delete the failed link N1:L1.
  • Each of the routing tables is preferably built by a routing protocol, such as Applicants' Assignee's Link Routing Information Protocol (LRIP), after the network is restarted. Each routing table is thereafter updated dynamically when necessary, as exemplified above. If, however, a node disappears or appears in the specified network 100, all of the routing tables in all of the nodes are updated, typically by a network supervision function.
  • LRIP Link Routing Information Protocol
  • each node within the routing tables is sorted using the unit address (NODE-ID), a 32 byte alphanumeric string, and an integer index address is derived therefrom to minimize data package overhead when addressing the nodes within the network 100.
  • NODE-ID unit address
  • each linkage alternative within routing table 200, e.g., a first 205, a second 210 and a third 215, has an associated cost, i.e., C1, C2 and C3, respectively, representing the microseconds of delay to reach the destination node using that link for an 8-byte information package.
  • a cost of zero means that the information has reached its destination and no further routing is required.
  • the costs are preferably ordered from lowest to highest, i.e., C1 ≤ C2 ≤ C3, so that the cheapest alternative is tried first.
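  • A routing table row with its sorted cost alternatives might be modelled roughly as below; the class and field names are not taken from the patent, and the structure is only a sketch of what the text describes:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Alternative:
    link: str            # e.g. "N1:L1"
    cost: int            # microseconds of delay for an 8-byte package; 0 = destination reached


@dataclass
class RoutingRow:
    node_id: str                                   # the lengthy NODE-ID (unit address)
    alternatives: List[Alternative] = field(default_factory=list)

    def add(self, link, cost):
        self.alternatives.append(Alternative(link, cost))
        self.alternatives.sort(key=lambda a: a.cost)   # C1 <= C2 <= C3: cheapest first

    def best_link(self):
        return self.alternatives[0].link if self.alternatives else None


# A routing table is then a list of such rows sorted by NODE-ID, so that the
# position of a row doubles as the short index address carried in messages.
```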
  • The nodes are referenced by node index addresses to simplify addressing. From an origination node, as soon as a link becomes available, the node begins to establish contact with a neighboring node at the terminating end of that link. Upon contact, all requisite routing table data within the origination node is transmitted to the connected node, and then to all other nodes connected thereto (along with the costs). Messages are then sent containing the NODE-ID and the respective cost to transfer messages via this node.
  • a routing table such as table 200 in FIGURE 2
  • this new linkage information is forwarded to all other available links, which means all nodes since the routing table updating is repeated in each node.
  • each update on each link has to be acknowledged before an acknowledgment is sent to the originator.
  • Each generated update message is marked with a session number, unique for each route, that makes it possible to identify such a message if it comes back to a node on another link than it was sent on. The session number makes it possible to have several sessions in parallel, that is, an update message can be sent on several links in parallel. If a message for a particular node (NODE-ID) provides no new link or no lower cost to the routing table, the information is discarded.
  • the routing table in a node is built up by a sorted list of all known NODE-IDs, ordered by a precedence operation common to all the nodes, e.g., ordering by ASCII character values.
  • the first entry will be given node address equal to 0, the next entry in the list will be given node address equal to 1 and so on.
  • the routing table list can be resorted during traffic when nodes disappear or new nodes are added. This means that some data packages might be delivered to the wrong transport layer entity if the nodes have been renumbered, e.g., previous node 4 has been renumbered to node 5.
  • the higher layer entity that receives the message has to solve this problem. It also means that broadcast data might not be delivered to all nodes.
  • each node keeps track of previous routing table node addresses, that is, there is a memory of the node number each node had before the last time the routing table was resorted.
  • the previous number will only be valid for a specified time after the routing table has been resorted.
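  • The resorting behaviour and the temporary memory of previous node numbers could look roughly like the following; the grace-period handling and all names are assumptions, since the text only states that the previous number remains valid for a specified time:

```python
import time


class NodeNumbering:
    """Current NODE-ID ordering plus the pre-resort ordering, remembered for a
    grace period so a higher layer can recognize packages addressed under the
    old numbering."""

    def __init__(self, grace_seconds=30.0):        # the length of the validity window is assumed
        self.grace = grace_seconds
        self.current = []                          # sorted NODE-IDs; list position = index address
        self.previous = []
        self.resorted_at = float("-inf")

    def resort(self, node_ids):
        self.previous, self.current = self.current, sorted(node_ids)
        self.resorted_at = time.monotonic()

    def node_id(self, index):
        return self.current[index] if index < len(self.current) else None

    def previous_node_id(self, index):
        """Only valid for a specified time after the last resort."""
        if time.monotonic() - self.resorted_at > self.grace:
            return None
        return self.previous[index] if index < len(self.previous) else None
```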
  • the information packages or messages are routed by use of a network primitive, ROUTECHANGE, described in more detail hereinafter, which holds information as to the cost to reach a given node, through the sending node and via the particular link on which it is sent.
  • ROUTECHANGE a network primitive
  • Various rules govern how the aforedescribed routing tables are handled, when ROUTECHANGE is generated and how to act upon reception of a ROUTECHANGE message.
  • a first rule is that when a link state, discussed in more detail hereinafter in connection with FIGURE 4, is changed from “not connected” to "connected”, this status change along with the transmission cost associated therewith is sent to all reachable neighboring nodes to that new available link.
  • the respective newly-connected neighboring nodes can update their routing tables with the NODE-IDs that can be reached via the link of the received message, ensuring that the routing table only contains the cheapest cost for each possible link to a certain node.
  • rule 2 is directed to the situation when a link state changes to "not connected" in which case all occurrences of that link must be removed from the respective routing tables. If due to this link removal the cheapest cost is changed for any node, ROUTECHANGE messages are then generated to the other links with the cost change information for all affected nodes in the respective tables.
  • if a node becomes unreachable, the entry for that node is preferably removed from the routing table. For example, in FIGURE 1, if segment or link 120 becomes disabled, node N4 is unreachable, and is preferably removed from the respective routing tables. The reason for this is that nodes can switch identities when repaired. Therefore, requiring re-entry of the node means that proper addressing and identification is preferred.
  • when the cost for the second best route changes, that change is communicated along the primary, best-cost route, guaranteeing that all of the nodes always have all possible routes available.
  • the change to the second, third or subsequent best routes may be made as discovered or queued until a substantive modification or other additional changes are made.
  • the various updates of the routing tables are preferably performed by the aforedescribed LRIP messages, which stop automatically after all updating has been performed (rule 6).
  • the transmission cost is set to zero within a node since no transference or links are needed. As discussed, the cost is set to 1000/BW in each adjacent neighboring node's routing tables. The cost is increased by 1000/BW in each receiving node along the route to a destination node.
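  • Pulling rules 1-3 and the cost accumulation together, a received ROUTECHANGE might be processed roughly as follows; the helper names are assumptions, and the acknowledgment commands described below are omitted from this sketch:

```python
UNREACHABLE = 0xFFFF                                   # all ones in the cost field


def on_routechange(node, msg, incoming_link):
    """Apply a received ROUTECHANGE and pass it on if the cheapest cost changed."""
    if msg.cost == UNREACHABLE:
        node.routing_table.remove_path(msg.node_id, incoming_link)
    else:
        # The cost grows by 1000/BW for the link the message arrived on.
        cost_here = msg.cost + 1000 // incoming_link.timeslots
        node.routing_table.keep_cheapest(msg.node_id, incoming_link, cost_here)

    if node.routing_table.best_cost_changed(msg.node_id):
        for link in node.links:
            if link is not incoming_link and link.is_connected():
                node.send_routechange(link,
                                      node_id=msg.node_id,
                                      cost=node.routing_table.best_cost(msg.node_id),
                                      session=msg.session)
```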
  • In FIGURE 4 there is shown a finite state machine model illustrating various states for the available links.
  • There are three possible states for a given link: "not connected", "1st time-out" and "connected".
  • the states work as follows: a given link always starts in the "not connected” state. A timer is started and every time the timer expires a so-called "ping" message is sent on the link. If a ping or a responsive "pong” message is received back, the state then changes to the "connected” state, which means that the link has established a contact with a neighboring node on this link.
  • a timer also supervises the link connection. The timer is triggered upon each message that is received from the link.
  • the routing table is updated and information about which nodes can no longer be reached by this path is sent to all other links that are in the state "connected" or "1st time-out". Every time the state is changed from "not connected" to "connected", the existing information in the routing table is sent to the connected node upon this link.
  • FIGURE 4 is exemplary of a potential mechanism for handling link status issues, and the scope of the present invention is not limited to this technique.
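  • The three-state link supervision of FIGURE 4 might be approximated as in the sketch below; the exact role of the "1st time-out" state is not spelled out in the text, so treating it as a single tolerated silent interval is an assumption, as are all of the helper names:

```python
NOT_CONNECTED, FIRST_TIMEOUT, CONNECTED = "not connected", "1st time-out", "connected"


class LinkSupervisor:
    def __init__(self, link):
        self.link = link
        self.state = NOT_CONNECTED                 # a given link always starts here

    def on_timer_expired(self):
        if self.state == NOT_CONNECTED:
            self.link.send_ping()                  # keep probing until a neighbor answers
        elif self.state == CONNECTED:
            self.state = FIRST_TIMEOUT             # assumed: tolerate one silent interval
            self.link.send_ping()
        elif self.state == FIRST_TIMEOUT:
            self.state = NOT_CONNECTED             # contact lost; withdraw routes via this link
            self.link.report_unreachable_nodes()

    def on_message_received(self, message):
        self.link.restart_timer()                  # every received message re-arms supervision
        if message.kind == "PING":
            self.link.send_pong()
        if self.state != CONNECTED and message.kind in ("PING", "PONG"):
            self.state = CONNECTED                 # contact established on this link
            self.link.send_routing_table()         # existing routing information is sent over
```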
  • the first byte of a network header message in accordance with the present invention is used to determine the type of message.
  • a presently preferred configuration of the bits within that byte is shown in FIGURE 5.
  • the first bit, the leftmost bit labeled "7" in the figure, is a mode bit, i.e., if a zero, a particular node is indicated in the following seven bits, i.e., bits six to zero, designating up to 128 distinct nodes.
  • One command, i.e., a broadcast to all nodes, is represented by a series of ones.
  • various LRIP messages and link supervision commands are indicated when the second bit, labeled "6", is zero.
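  • Based on that bit layout (bit 7 as the mode bit, a node index in bits 6-0, all ones for a broadcast, and bit 6 at zero for LRIP or link supervision commands), the first byte could be classified as in this sketch; combinations not described in the text are simply reported as "other":

```python
BROADCAST = 0xFF                              # a series of ones addresses all nodes


def classify_first_byte(byte):
    """Interpret the first byte of a network header message."""
    if byte == BROADCAST:
        return ("broadcast", None)
    if byte & 0x80 == 0:                      # mode bit (bit 7) is zero: a node is addressed
        return ("node", byte & 0x7F)          # bits 6..0 designate up to 128 distinct nodes
    if byte & 0x40 == 0:                      # bit 6 is zero: LRIP message or link supervision
        return ("command", byte)
    return ("other", byte)                    # remaining combinations are not described here


assert classify_first_byte(0x05) == ("node", 5)
assert classify_first_byte(0x80) == ("command", 0x80)   # 0x80 is the PING value given below
```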
  • the various network primitives involved are the aforedescribed PING, PONG and ROUTECHANGE commands, along with a Primary Acknowledgment (PRACK), a Final Acknowledgment (FRACK), a REROUTECHANGE and a FRACK Acknowledgment (FRACKA).
  • PRACK Primary Acknowledgment
  • FRACK Final Acknowledgment
  • FRACKA FRACK Acknowledgment
  • PING is used to establish contact upon a link and to supervise a link
  • PONG is used to acknowledge a PING upon a link
  • ROUTECHANGE is used to transmit routing information to connected nodes.
  • a PRACK command is sent back on the link from which a ROUTECHANGE was received.
  • a FRACK command is also sent back on the link where a ROUTECHANGE was received, but when a given node determines that the ROUTECHANGE does not have to be forwarded on to other nodes or when all outstanding ROUTECHANGES have been acknowledged by FRACKs.
  • Each FRACK traverses backwards in the network to the originating node of the ROUTECHANGE. If after a given period of time no FRACKA has been received, the FRACK is retransmitted.
  • similarly, if no acknowledgment is received within a given period of time, the ROUTECHANGE command is retransmitted.
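  • The acknowledgment chain can be sketched as a per-command supervision timer; the retransmission trigger for the ROUTECHANGE (a missing PRACK) is inferred from the description, and the helper names are assumptions:

```python
def on_ack_timer(node, link, pending):
    """Called when the acknowledgment timer for a previously sent command expires.

    pending.kind is either "ROUTECHANGE" (awaiting a PRACK on the same link) or
    "FRACK" (awaiting a FRACKA from the next node toward the originator).
    """
    if pending.kind == "ROUTECHANGE" and not pending.prack_seen:
        node.send(link, pending.message)           # retransmit the routing command
        node.restart_ack_timer(link, pending)
    elif pending.kind == "FRACK" and not pending.fracka_seen:
        node.send(link, pending.message)           # retransmit the final acknowledgment
        node.restart_ack_timer(link, pending)
```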
  • The PING and PONG commands are designated by the respective hexadecimal values 80 and 81 in the right-most bits of the network header message of FIGURE 5.
  • The ROUTECHANGE command is further illustrated in FIGURE 8 with various additional fields: a 5-bit session number length (SL), a 15-bit cost factor in hexadecimal, 32-byte alphanumeric DESTINATION and ORIGINATION node strings, and a 2-33 byte sequence number.
  • the cost field represents the cheapest link cost for sending messages to the specified DESTINATION node via the ORIGINATION node generating the message. If the cost is set to all ones, i.e., hexadecimal FFFF, this indicates that the DESTINATION node is no longer reachable via the generating node, which means that the path must be removed from the routing tables. Upon removal of all of the paths, the whole record for the DESTINATION node is removed.
  • the SNO field indicates the session number.
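  • The ROUTECHANGE fields listed above can be carried in a simple structure such as the one below; the exact bit packing of FIGURE 8 is not reproduced, so this is only a field-level sketch:

```python
from dataclasses import dataclass


@dataclass
class RouteChange:
    session_length: int    # SL, 5 bits: length of the session number that follows
    cost: int              # cost factor; all ones (given as hexadecimal FFFF) = unreachable
    destination: str       # 32-byte alphanumeric NODE-ID of the DESTINATION node
    origination: str       # 32-byte alphanumeric NODE-ID of the ORIGINATION node
    session_number: bytes  # SNO, 2-33 bytes identifying the update session

    def marks_unreachable(self):
        return self.cost == 0xFFFF
```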
  • a primary acknowledgment message or PRACK is illustrated.
  • PRACK a primary acknowledgment message
  • When a neighboring node receives a ROUTECHANGE from an adjacent node, the neighboring node, by acknowledging that the routing command was received, takes on the responsibility of further supervision when that neighbor is not an end or terminating node. The remaining fields are as described in connection with FIGURE 8.
  • the FRACK command, illustrated in FIGURE 10, handles the final acknowledgment of routing information to a source node.
  • a REROUTECHANGE command is illustrated in FIGURE 11
  • a FRACKA command is a FRACK command acknowledgment.
  • In FIGURE 13, an interlinked network 1300 of nodes A-D is illustrated.
  • a proposed node E, discussed further herein, is shown in outline. To simplify cost calculations, all links have a cost factor of one. To better illustrate the manner in which the session number (SNO) is built up, the nodes are included in the SNO followed by a colon and the current link number. The sequence number is excluded for convenience. It should be understood that although the following sequences are presented serially, the sequences may also be practiced, where applicable, in parallel.
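  • The session-number notation used in the walk-through (each forwarding node appended as "node:link") can be mimicked in a couple of lines; the " + " separator follows the notation of the description rather than any wire format:

```python
def extend_session(session, node_name, link_number):
    """Append a forwarding node and the link it used, e.g. "E:0" -> "E:0 + B:0"."""
    hop = f"{node_name}:{link_number}"
    return hop if not session else f"{session} + {hop}"


sno = extend_session("", "E", 0)       # node E opens the session on its link 0
sno = extend_session(sno, "B", 0)      # node B forwards on its link 0
sno = extend_session(sno, "C", 0)      # node C forwards on its link 0
assert sno == "E:0 + B:0 + C:0"        # matches the session shown in the walk-through below
```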
  • For a message to a particular destination node, e.g., from origination node A to destination node D, the message first passes to node C along a segment 1310. Using the aforedescribed index address to the routing table at node C, the network layer looks up node D (via the index) and finds the cheapest link therefrom to the destination node D, i.e., the only link thereto along segment 1320. After forwarding the message on to node D, the time to live is decremented by one and the link timer for link 0 in node C is triggered. Upon arrival of the message in node D, the network layer looks up node D and finds no further links, i.e., the message has reached its destination.
  • a message may be transmitted to all of the nodes in the network 1300 from, for example, node A.
  • Upon arrival at node C, the network layer checks that the message has not been received earlier. If the message has not been received earlier, then the network layer presents the message to the transport layer and forwards the message on all other links. The time to live is then decremented by one, and the link timer for path 0 in node C is triggered. In this manner, the broadcast message is sent from node C to nodes B and D in FIGURE 13, attached thereto.
  • nodes B and D have no other links upon which to send a ROUTECHANGE. Nodes B and D send FRACK messages to node C and sessions C:l and C:2 terminate.
  • When a new node, e.g., node E, is attached to the network 1300, e.g., at node B across a segment 1340, the new node E, via a ROUTECHANGE command, connects to node B using the new NODE-ID for node E.
  • the cost is set to zero and the SNO is set to E:0.
  • in node B, the cost is set to one (received cost incremented by one). After receiving a PRACK from node B, supervision over node E is turned off.
  • Node B is then responsible for forwarding ROUTECHANGES to its adjacent nodes, i.e., nodes A and C, designating the NODE-ID for node E, costs of one meaning node E is reachable, and sessions E:0 + B:0, respectively.
  • Node B thereafter receives PRACKs from nodes A and C, respectively, and supervision is turned off.
  • Nodes A and C are then responsible for forwarding ROUTECHANGE messages to their own adjacent nodes.
  • Node C for example, then updates node A attached thereto with a ROUTECHANGE command, designating NODE-ID for node E, SNO equal to E:0 + B:0 + C:0 and a cost of two, meaning reachable via node C.
  • a final acknowledgment or FRACK is then received at node A from node B since node B is the node that initiated the session, which in a session list is identified by E:0 + B:0. Since node A has received the FRACK from node B, node A can then FRACK node C, which it should be understood must wait on a FRACK (not yet sent) to node D before the respective FRACK message is sent to node B. Node C initiates the appropriate ROUTECHANGE to node D, designating the NODE-ID for node E, session E:0 + B:0 + C:2, and a cost of two meaning node E is reachable via node C. Node C, upon receiving the FRACK from node D, sends a FRACK message to node B, which must wait for a FRACK from node A before sending FRACK to node E.
  • Node A forwards a ROUTECHANGE command to node C, again designating the NODE-ID as node E, the SNO as E:0 + B:2 + A:1, and a cost of two.
  • Node C sends a PRACK message back to node A.
  • Node C then initiates a ROUTECHANGE to node B, designating the NODE-ID as node E, session
  • node C need not send a message to node D since a cheaper way already exists.
  • Node B sends a FRACK to node C since node B initiated the session.
  • Node B looks up its session list and identifies a session E:0 + B:2.
  • Node C then sends a FRACK to node A, which sends a FRACK message to node B, which finally sends a FRACK to the new node E.
  • the nodes according to the present invention, e.g., computers, may be interlinked in any manner, e.g., in a ring formation, a star or in any other way.
  • the system and method of the present invention in a preferred implementation utilize datagram-type transport services, i.e., a connectionless application.
  • a further advantage of the present invention in distributing the administration and control of the network to all of the nodes is the elimination of a master node, whose failure would otherwise put the entire network on hold and necessitate the creation of a standby node to handle command functions until the master node reappears. Also, without a master node no startup sequence is needed since the routing mechanism is the same throughout the system, creating an advantageous symmetry of functionality.
  • each node performs the addressing without a master node coordinating it all.
  • each node performs a mapping translation, mapping a large address space (the lengthy NODE-ID identifiers) into a smaller address space (the short index values referencing the list element within the respective routing tables).
  • the system and method of the present invention is most useful in real-time critical systems requiring immediate communications of short messages across redundant pathways.
  • as the number of nodes increases substantially, e.g., over a hundred, the principles of the present invention become less advantageous, although still applicable for other efficiencies.
  • the number of nodes is under one hundred, preferably under forty or fifty and most preferably under ten.
  • the principles of the present invention may be implemented in various ranges, e.g., 2-100, or more preferably, 2-50, 2-10 or 5-10, or most preferably 5-7.

Abstract

The invention concerns a system and method for facilitating the routing of small information packages between the nodes (N) of a network (100). More specifically, the system and method utilize a routing table (200) at each node (N) of the network. The routing table for each node contains a list of the nodes, ordered the same in each table, and an indexing value is used to refer to particular node values. Internodal linkage information is also stored in the routing table, along with an alternative link (210, 215) should the primary link (205) fail. The addition/deletion of nodes and the failure of any link are handled by an update function.
PCT/SE1999/001545 1999-09-06 1999-09-06 Systeme et procede de routage internodal d'information dans un reseau de communication WO2001019038A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/SE1999/001545 WO2001019038A1 (fr) 1999-09-06 1999-09-06 Systeme et procede de routage internodal d'information dans un reseau de communication
GB0205944A GB2371944B (en) 1999-09-06 1999-09-06 System and method for internodal information routing within a communications network
AU14205/00A AU1420500A (en) 1999-09-06 1999-09-06 System and method for internodal information routing within a communications network
DE19983979T DE19983979T1 (de) 1999-09-06 1999-09-06 System und Verfahren zum Weiterleiten von Zwischenknoteninformation innerhalb eines Kommunikationsnetzes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE1999/001545 WO2001019038A1 (fr) 1999-09-06 1999-09-06 Systeme et procede de routage internodal d'information dans un reseau de communication

Publications (1)

Publication Number Publication Date
WO2001019038A1 true WO2001019038A1 (fr) 2001-03-15

Family

ID=20415397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE1999/001545 WO2001019038A1 (fr) 1999-09-06 1999-09-06 Systeme et procede de routage internodal d'information dans un reseau de communication

Country Status (4)

Country Link
AU (1) AU1420500A (fr)
DE (1) DE19983979T1 (fr)
GB (1) GB2371944B (fr)
WO (1) WO2001019038A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003021867A2 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseaux actifs de vehicules relies
WO2003021895A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. A Corporation Of The State Of Delaware Reseau actif de vehicule adapte a une architecture specifique
WO2003021894A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif de vehicule avec redondance de chemin de communication
WO2003021889A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. A Corporation Of The State Of Delaware Reseau vehicule actif utilisant des chemins de communication multiples
WO2003021897A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif de vehicule a structure centrale
WO2003021898A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif pour vehicules a dispositifs redondants
WO2003021893A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif pour vehicules a redondance de donnees
WO2003021892A1 (fr) * 2001-08-31 2003-03-13 Motorola Inc. Reseau actif pour vehicules a portion reservee
WO2003021896A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif de vehicule comportant des appareils tolerants aux fautes
WO2004023719A2 (fr) * 2002-09-09 2004-03-18 Sheer Networks Inc. Correlation de cause profonde dans des reseaux sans connexion
EP1575226A1 (fr) * 2004-03-12 2005-09-14 Alcatel Procédé de transmission de paquets de données dans un réseau de télécommunication et dispositif mettant en oeuvre ce procédé

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004031717A1 (de) * 2004-06-30 2006-01-26 Siemens Ag Effiziente Berechnung von Routingtabellen für ein Routing anhand von Zieladressen

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999033232A2 (fr) * 1997-12-19 1999-07-01 Telefonaktiebolaget Lm Ericsson (Publ) Procede et dispositif pour reseau a commutation par paquets

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999033232A2 (fr) * 1997-12-19 1999-07-01 Telefonaktiebolaget Lm Ericsson (Publ) Procede et dispositif pour reseau a commutation par paquets

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WILLIAM STALLINGS: "Data and computer communications", 1991, MACMILLAN PUBLISHING COMPANY, NEW YORK, XP002927480 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003021896A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif de vehicule comportant des appareils tolerants aux fautes
WO2003021893A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif pour vehicules a redondance de donnees
WO2003021894A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif de vehicule avec redondance de chemin de communication
WO2003021889A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. A Corporation Of The State Of Delaware Reseau vehicule actif utilisant des chemins de communication multiples
US7027387B2 (en) 2001-08-31 2006-04-11 Motorola, Inc. Vehicle active network with data redundancy
WO2003021898A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif pour vehicules a dispositifs redondants
WO2003021895A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. A Corporation Of The State Of Delaware Reseau actif de vehicule adapte a une architecture specifique
WO2003021892A1 (fr) * 2001-08-31 2003-03-13 Motorola Inc. Reseau actif pour vehicules a portion reservee
WO2003021897A1 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseau actif de vehicule a structure centrale
WO2003021867A3 (fr) * 2001-08-31 2003-10-09 Motorola Inc Reseaux actifs de vehicules relies
WO2003021867A2 (fr) * 2001-08-31 2003-03-13 Motorola, Inc. Reseaux actifs de vehicules relies
US8194536B2 (en) 2001-08-31 2012-06-05 Continental Automotive Systems, Inc. Vehicle active network with fault tolerant devices
US6747365B2 (en) 2001-08-31 2004-06-08 Motorola, Inc. Vehicle active network adapted to legacy architecture
US6931004B2 (en) 2001-08-31 2005-08-16 Motorola, Inc. Vehicle active network with backbone structure
US7415508B2 (en) 2001-08-31 2008-08-19 Temic Automotive Of North America, Inc. Linked vehicle active networks
US7173903B2 (en) * 2001-08-31 2007-02-06 Temic Automotive Of North America, Inc. Vehicle active network with communication path redundancy
WO2004023719A2 (fr) * 2002-09-09 2004-03-18 Sheer Networks Inc. Correlation de cause profonde dans des reseaux sans connexion
US7373563B2 (en) 2002-09-09 2008-05-13 Sheer Networks Inc. Root cause correlation in connectionless networks
EP1953962A1 (fr) * 2002-09-09 2008-08-06 Cisco Technology, Inc. Corrélation de cause profonde dans des réseaux sans connexion
WO2004023719A3 (fr) * 2002-09-09 2004-05-06 Sheer Networks Inc Correlation de cause profonde dans des reseaux sans connexion
FR2867643A1 (fr) * 2004-03-12 2005-09-16 Cit Alcatel Procede de transmission de paquets de donnees dans un reseau de telecommunication et dispositif mettant en oeuvre ce procede
EP1575226A1 (fr) * 2004-03-12 2005-09-14 Alcatel Procédé de transmission de paquets de données dans un réseau de télécommunication et dispositif mettant en oeuvre ce procédé
US7567563B2 (en) 2004-03-12 2009-07-28 Alcatel Methods and systems for detecting malfunctioning nodes in a telecommunication network

Also Published As

Publication number Publication date
AU1420500A (en) 2001-04-10
GB2371944A (en) 2002-08-07
GB0205944D0 (en) 2002-04-24
DE19983979T1 (de) 2002-08-01
GB2371944B (en) 2003-10-29

Similar Documents

Publication Publication Date Title
US6785277B1 (en) System and method for internodal information routing within a communications network
US6065062A (en) Backup peer pool for a routed computer network
US6115751A (en) Technique for capturing information needed to implement transmission priority routing among heterogeneous nodes of a computer network
EP0836781B1 (fr) Procede et dispositif de synchronisation de transmissions de donnees sur des liaisons a la demande dans un reseau
US6023733A (en) Efficient path determination in a routed network
US7778161B2 (en) Signaling system for telecommunications
US5687168A (en) Link state routing device in ATM communication system
US5309433A (en) Methods and apparatus for routing packets in packet transmission networks
US6084879A (en) Technique for capturing information needed to implement transmission priority routing among heterogeneous nodes of a computer network
US6298061B1 (en) Port aggregation protocol
US6791948B1 (en) Distributed switch and connection control arrangement and method for digital communications network
WO2005020022A2 (fr) Reseau arborescent d'auto-guerison
JPH09130401A (ja) 順方向及び逆方向仮想接続ラベルに基づくatmネットワークを走査するシステム及び方法
US6147992A (en) Connectionless group addressing for directory services in high speed packet switching networks
EP1009130A1 (fr) Services de répertoire distribué pour localiser des ressources de réseau dans un trés grand réseau de commutation par paquets
WO2001019038A1 (fr) Systeme et procede de routage internodal d'information dans un reseau de communication
US7246168B1 (en) Technique for improving the interaction between data link switch backup peer devices and ethernet switches
US6765908B1 (en) System and method for transferring packets in a “connectionless” network
US6490618B1 (en) Method and apparatus for SNA/IP correlation in a mixed APPN and DLSW network
US6791979B1 (en) Mechanism for conveying data prioritization information among heterogeneous nodes of a computer network
US20040170130A1 (en) Spontaneous topology discovery in a multi-node computer system
US6850518B1 (en) DLSw RIF passthru technique for providing end-to-end source route information to end stations of a data link switching network
US6865178B1 (en) Method and system for establishing SNA connection through data link switching access services over networking broadband services
Cisco Designing APPN Internetworks
Cisco Designing APPN Internetworks

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref document number: 200205944

Country of ref document: GB

Kind code of ref document: A

RET De translation (de og part 6b)

Ref document number: 19983979

Country of ref document: DE

Date of ref document: 20020801

WWE Wipo information: entry into national phase

Ref document number: 19983979

Country of ref document: DE

122 Ep: pct application non-entry in european phase