EP1714446A1 - Reliable message distribution with enhanced emfc for ad hoc mesh networks - Google Patents

Reliable message distribution with enhanced emfc for ad hoc mesh networks

Info

Publication number
EP1714446A1
Authority
EP
European Patent Office
Prior art keywords
multicast
data
node
nodes
network
Prior art date
Legal status
Withdrawn
Application number
EP05713134A
Other languages
German (de)
French (fr)
Inventor
Fred Bauer
Lee Boynton
Current Assignee
Packethop Inc
Original Assignee
Packethop Inc
Priority date
Filing date
Publication date
Application filed by Packethop Inc
Publication of EP1714446A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W40/00 Communication routing or communication path finding
    • H04W40/24 Connectivity information management, e.g. connectivity discovery or connectivity update
    • H04W40/28 Connectivity information management, e.g. connectivity discovery or connectivity update for reactive routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/16 Multipoint routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W40/00 Communication routing or communication path finding
    • H04W40/24 Connectivity information management, e.g. connectivity discovery or connectivity update
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W40/00 Communication routing or communication path finding
    • H04W40/24 Connectivity information management, e.g. connectivity discovery or connectivity update
    • H04W40/32 Connectivity information management, e.g. connectivity discovery or connectivity update for defining a routing cluster membership

Definitions

  • FIG. 1 shows a typical mesh network 12 with a node A communicating with a node B through multiple hops, links, nodes 14, etc.
  • the nodes 14 can be any combination of wired or wireless mobile communication devices, such as portable computers that may include wireless modems, network routers, switches, Personal Digital Assistants (PDAs), cell phones, or any other type of mobile processing device that can communicate within mesh network 12.
  • PDAs Personal Digital Assistants
  • the network nodes 14 in mesh network 12 all communicate by sending messages to each other using the Internet Protocol (IP). Each message consists of one or more multicast User Datagram Protocol (UDP) packets.
  • IP Internet Protocol
  • UDP User Datagram Protocol
  • Each node 14 includes one or more network interfaces, all of which may be members of the same multicast group. Each node has an associated nodeid used for identifying both the source and the intended set of recipients of a message. Because the messages are multicast, routing details are transparent to the application.
  • Transmission Control Protocol (TCP) is commonly used to provide reliability for point to point communication, using a direct Acknowledgement (ACK) based design, but in a multicast scenario with multiple peers, the more efficient approach is a Negative Acknowledgement (NACK) based design. That is, when data is successfully transmitted, no additional communication is needed to affirm that fact; there are no acknowledgements. When packet loss is detected, the NACK is sent back to the source to request retransmission.
  • Sequence numbers are used to detect missing or out of sequence packets.
  • the present invention is such a NACK-based design.
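The NACK-based loss detection described above can be sketched as follows. This is an illustrative example, not taken from the patent text: the function names and the convention that sequence numbers start at 1 are assumptions.

```python
def missing_sequences(received, last_seen):
    """Return the sequence numbers a receiver should request via NACK,
    given the set of sequence numbers received so far and the highest
    sequence number observed. Successfully received packets trigger no
    acknowledgement at all -- only gaps produce traffic."""
    return sorted(set(range(1, last_seen + 1)) - set(received))

# Packets 1, 2 and 5 arrived; 3 and 4 were lost, so only those are NACKed.
# missing_sequences({1, 2, 5}, 5) -> [3, 4]
```

The efficiency gain over an ACK-based design is that the common case (no loss) generates zero control traffic.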
  • it is necessary to maintain certain data consistency between the different nodes 14. For example, all the nodes 14 may need to know which devices are part of the same multicast groups. This requires all of the nodes 14 to have the same versions of different multicast tables. This is typically done by exchanging data between nodes 14 and then responding with NACK responses if the data is not successfully received. A substantial amount of bandwidth and processing resources are required to maintain data consistency between the different nodes 14. Current techniques for maintaining data consistency between different mobile nodes are also inefficient. Computers in the modern Internet communicate using a common language based on the well-understood mechanisms of routing.
  • Routers in the Internet compute the best path to all known computers and act as traffic cops to direct such traffic. The results of these computations are stored in what is known as a forwarding table.
  • This forwarding table specifies a next hop for each possible destination. The next hop is the computer to which traffic must be forwarded for a particular destination address. Frequently a default router is specified as the preferred router to which to forward traffic when the destination is not known to a router.
  • Non-router computers known as hosts, also have a forwarding table. In the conventional Internet, a host's forwarding table tends to be much simpler than a router's forwarding table because hosts typically are connected to the Internet by one interface and the specified default router handles most addresses. These assumptions do not hold for hosts in a mobile mesh network.
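The forwarding-table behavior described above can be illustrated with a minimal sketch. The table contents and node names are hypothetical, not taken from the patent's figures:

```python
# A forwarding table maps each known destination to its next hop; a
# default router handles destinations the table does not know about.
FORWARDING_TABLE = {
    "B": "B",  # directly reachable neighbor
    "C": "B",  # reached through B
}
DEFAULT_ROUTER = "B"

def next_hop(destination):
    """Return the node to which traffic for `destination` is forwarded."""
    return FORWARDING_TABLE.get(destination, DEFAULT_ROUTER)
```

A conventional host's table often reduces to little more than the default entry, which is exactly the assumption that breaks down for mobile mesh nodes.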
  • FIGS. 12 and 13 show a network topology where a node A provides unicast forwarding. Table 1 shows a unicast forwarding table for the 4-node network topology shown in FIGS. 12 and 13.
  • Node A's unicast forwarding table. Internet addresses are the 32-bit integer addresses specified in Internet Protocol version 4 (IPv4).
  • IPv4 Internet Protocol version 4
  • IPv6 Internet Protocol version 6
  • Human readable addresses such as www.packethop.com are translated by Domain Name System (DNS) servers into their integer equivalents. These addresses are commonly known as unicast addresses. Unicast addresses specify a unique computer on the Internet. A portion of Internet addresses, however, are reserved for multicast. Multicast addresses are used for 1-to-many communication from a computer to a group of computers. Traffic sent to a multicast address will arrive not at one computer, but at many computers. Examples of applications that might use multicast include classroom lectures and video conferences. Routers that receive multicast traffic need to simultaneously forward that multicast traffic to one or more destinations.
  • MFC Multicast Forwarding Cache
  • a router To compute either the forwarding table or the multicast forwarding cache, a router first needs to compute paths to known destinations using a routing protocol.
  • routing protocols exist, all of which are based on well-known graph theory algorithms established by mathematicians following Euler's original work on the Königsberg Bridge problem in 1736. These routing algorithms may be broadly categorized into link-state and distance-vector algorithms. Distance-vector algorithms exchange shortest-path distances to destinations between communicating routers. Based on this shortest-path distance information, each router independently computes its forwarding table. The prime example of a distance-vector based routing algorithm is the Routing Information Protocol (RIP).
  • RIP Routing Information Protocol
  • a link-state routing protocol distributes the topology of the network to all nodes, each of which independently computes its forwarding table.
  • Link-state based routing protocols are the most widely deployed in the Internet. Once the routing protocol has computed the shortest path to all destinations, the router may update its forwarding table. These updates usually take place each time the network topology changes in a way that results in a forwarding table change. In a similar fashion, the router must update the multicast forwarding cache based on available information about multicast sources and receivers. To update the multicast forwarding cache, the router uses a multicast routing protocol. A multicast routing protocol may or may not use the previously computed unicast routing table.
  • the Protocol Independent Multicast (PIM) protocol uses the results computed by the unicast routing protocol, while the Distance Vector Multicast Routing Protocol (DVMRP) uses its own internal unicast routing protocol.
  • the multicast routing protocol updates the multicast forwarding cache on each router.
  • Internet hosts that are multicast receivers identify themselves to nearby routers using the Internet Group Management Protocol (IGMP).
  • IGMP Internet Group Management Protocol
  • Each router then distributes this multicast group receiver membership information to peer routers using its multicast routing protocol.
  • Internet hosts that are multicast sources simply send multicast packets destined for the appropriate multicast group address to their nearby routers.
  • Each router is then responsible for forwarding those multicast packets as dictated by its multicast forwarding cache.
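A multicast forwarding cache can be sketched as a map from (source, group) to outgoing interfaces. The interface names, group address, and topology below are illustrative assumptions, not details from the patent:

```python
# Illustrative multicast forwarding cache (MFC): keyed by (source, group),
# each entry lists the interfaces on which a packet must be replicated.
MFC = {
    ("A", "239.1.1.1"): ["eth0", "wlan0"],  # fan out to two interfaces
    ("B", "239.1.1.1"): ["wlan0"],
}

def forward_interfaces(source, group):
    """Interfaces on which to replicate a multicast packet; empty list if
    the router holds no entry for this (source, group) pair."""
    return MFC.get((source, group), [])
```

Unlike the unicast forwarding table, an entry may name several outgoing interfaces, which is why the router must replicate rather than merely forward.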
  • Mobile mesh networks are also known as Mobile Ad-hoc Networks (MANETs)
  • MANET Mobile Ad-hoc Network
  • IETF Internet Engineering Task Force
  • TBRPF Topology Broadcast based on Reverse-Path Forwarding
  • OLSR Optimized Link State Routing Protocol
  • AODV Ad hoc On-Demand Distance Vector Routing
  • DSR Dynamic Source Routing Protocol for Mobile Ad Hoc Networks
  • a computer in a mobile mesh network may be constantly changing its position and connection to peer nodes.
  • a mesh node may have continuously changing attributes such as location, IP address, and connection to peers. This breaks many of the assumptions built into conventional Internet protocols and networks.
  • a second consequence of mesh node mobility is commonly referred to as the hidden node problem, which refers to the inability of all mesh nodes to hear each other's traffic through the same wireless interface. This contrasts with the ability of wired interfaces to hear all traffic from connected neighbors.
  • Conventional multicast forwarding caches do not support either changing IP addresses or interfaces suffering from the hidden node problem. For example, in FIG. 15, node C does not hear node A's transmissions and thus node C's scheduling of transmissions may interfere with node B's intended reception from node A. In this sense, node C is hidden from node A and vice versa.
  • a computer may be viewed as a router or host depending on its relative position within the topology. Routers forward traffic on behalf of peers; hosts do not.
  • PDAs Personal Digital Assistants
  • a multicast forwarding cache is not typically available in end-user platforms such as Windows XP and CE operating systems.
  • every node is by definition a router and may also be a host. That is to say, each node must be capable of forwarding traffic on behalf of peers. This blurs the distinction between host and router for mesh nodes.
  • Mobile mesh network wireless interfaces face greater Quality of Service (QoS) hurdles than their wired equivalents. As a consequence, QoS continues to be important in parts of the Internet such as mobile mesh networks.
  • QoS Quality of Service
  • a Data Distribution Service transfers information between nodes in an ad hoc mobile mesh network.
  • the DDS includes many different novel features including techniques for coalescing retransmit requests to minimize traffic, providing a reasonable level of reliability for event oriented communications, multicasting retransmissions for use by many nodes, and providing other optimizations for multicast traffic.
  • the DDS uses UDP datagrams for communications. Communications operate in a truly peer-to-peer fashion without requiring central authority or storage, and can be purely ad hoc and not depend on any central server. The need for traditional acknowledgement packets can also be eliminated under normal operation. Such a NACK-based protocol proves to be more efficient than traditional ACK-based approaches such as TCP.
  • An Enhanced Multicast Forwarding Cache supports multicast transmissions in mobile mesh networks.
  • the enhanced MFC is designed to support mesh node mobility, quality of service, and security requirements that are particular to mesh networks. To achieve these goals, the enhanced MFC draws from a global state maintained by a unicast routing protocol, multicast aware applications, and distributed services. The eMFC distributes this derived global state through the use of an eMFC-specific multicast packet header.
  • Information contained within the eMFC header is also used to collect and derive multicast traffic statistics at each mesh node. To maintain backwards compatibility, multicast traffic without the eMFC-specific header is also honored by the MFC. Mobile mesh network specific interfaces, such as radio interfaces, as well as conventional interface types are supported. Security is maintained through the use of authentication and encryption techniques.
  • FIG. 1 is a diagram of a mesh network.
  • FIG. 2 is a block diagram of a Data Distribution Service (DDS) provided in a mesh network.
  • FIG. 3 is a diagram of a source data packet.
  • FIG. 4 is a block diagram of a source node shown in FIG. 2.
  • FIG. 5 is a block diagram of a receiver node shown in FIG. 2.
  • FIG. 6 is a diagram showing different DDS messages.
  • FIGS. 7-11 show different communications scenarios that are provided by the DDS.
  • FIG. 12 shows a conventional network topology.
  • FIG. 13 shows unicast paths for a node in the conventional network shown in
  • FIG. 14 shows a multicast path for multicast source A and destinations B,C, and D.
  • FIG. 15 shows three mesh nodes illustrating a hidden node problem.
  • FIG. 16 shows an enhanced MFC system architecture.
  • FIG. 17 shows a multicast packet that includes an enhanced MFC (eMFC) packet header.
  • FIG. 18 shows how the multicast packet in FIG. 6 is sent between different nodes in a mesh network.
  • FIG. 19 is a table that shows the different fields in the eMFC header.
  • FIG. 20 is a block diagram showing how the multicast packets can be overlaid with different mesh networks.
  • FIGS. 21-23 are diagrams showing how a mesh node forwards multicast packets according to mesh interface information.
  • FIGS. 24-26 are diagrams showing how duplicate multicast packets are handled in a mesh network.
  • FIG. 27 is a diagram showing a malicious listener within radio range of mesh nodes.
  • FIGS. 28-30 show how Quality of Service (QoS) operations are performed using eMFC.
  • FIG. 31 is a block diagram of the components in one of the mesh nodes.
  • FIG. 2 shows several nodes 22 that may operate in a mesh network 20 similar to the mesh network 12 previously shown in FIG. 1.
  • the nodes 22 can be any type of mobile device that conducts wireless or wired peer to peer mesh communications. For example, personal computers with wireless modems, Personal Digital Assistants (PDAs), cell phones, etc.
  • Other nodes 22 can be wireless routers that communicate to other nodes through wired or wireless IP networks.
  • the mobile devices 22 can move ad-hoc into and out of the network 20 and dynamically reestablish peer-to-peer communications with the other nodes. It may be necessary that each individual node 22A, 22B and 22C have some or all of the same versions for different data items 26.
  • the data items 26 in one example, may be certain configuration data used by the nodes 22 for communicating with other nodes.
  • the configuration data 26 may include node profile information, video settings, etc.
  • the data 26 can include multicast information, such as multicast routing tables, that identify nodes that are members of the same multicast groups.
  • a Data Distribution Service (DDS) 24 is used to more efficiently maintain consistency between the data 26 in the different nodes 22.
  • a source node 22A is defined as a node that may have internally updated its local data 26A and then sent out a data message 28 notifying the other nodes 22B and 22C of the data change.
  • Receiver nodes 22B and 22C are defined as nodes that need to be updated with the changes made internally by source node 22A.
  • the DDS 24 sends and receives source data packets 38 as shown in FIG. 3 that are used in a variety of different ways.
  • the source data packet 38 can be used by the source node 22A to multicast a status or data message 28 that notifies other nodes 22B and 22C of a change in or current status for data 26A.
  • the receiver nodes 22B or 22C may multicast a Negative Acknowledgement (NACK) message 32.
  • NACK Negative Acknowledgement
  • the receiver node 22C may multicast NACK message 32 when data 26C is missing updates for some of the data items in data 26A. This could happen, for example, when the mobile device 22C has temporarily been out of contact with mesh network 20 such that it did not receive a data message 28.
  • the source node 22A may multicast a repair message 30.
  • the repair message 30 may provide information necessary to update data 26C in receiver node 22C and possibly data 26B in receiver node 22B with the latest changes made to data 26A.
  • the repair message 30 in one example, may be an EXPIRED message indicating the requested data item is no longer available or a CHANGE message identifying the data items in data 26A that have been changed.
  • the Data Distribution Service (DDS) 24 in one implementation uses symbolic keys (data names) across all nodes in the mesh network 20 to maintain consistency between data items 26.
  • the DDS 24 can be built within a reliable transport scheme or can be implemented as described below.
  • the DDS 24 avoids buffering data persistently twice and avoids retransmitting previous revisions of a particular data record when only the most recent data item is required. This is a more complete design and solves the particular problems associated with replicating a "small" database across a mesh network.
  • DDS 24 works as a distributed hash table, binding keys (data names) to values (data).
  • the nodes 22 talk to each other via the DDS protocol described below and use a single unified view of key-to-value (data name-to-data) mapping to maintain consistency of data 26 across the entire mesh network 20. Data consistency is provided by proactively replicating the database 26 in each node 22.
  • the database 26 can include any object stored internally in the nodes 22. Changes in data 26 are tracked by the DDS 24 across multiple network nodes.
  • the DDS 24 thus maintains a set of data items 26 distributed across many nodes 22.
  • the DDS 24 does not need to separately buffer transmissions like a reliable transport because the data 26 is already stored persistently before the communication commences.
  • FIG. 3 shows one example of a source data packet 38 as previously described in FIG. 2.
  • the source data packet 38 includes a header 52 that is used for conducting DDS operations. Some or all of the fields in header 52 may be used for sending messages 28, 30 or 32 described in FIG. 2.
  • the source data packet 38 includes a nodeid field 40 that is unique to the originating node sending the message. Every source data packet 38 includes a packetid field 42 that is global among all transmissions from the originating node sending the packet. The value in packetid field 42 is a monotonically increasing number. Receiver nodes record packetids as the packets are received.
  • the header 52 also includes a Global Revision Value (GRV) in global revision field 44 that identifies a latest revision to the data 26 in the node 22 sending the source data packet 38.
  • the GRV defines a latest revision that has been made to the data 26 in a particular source node 22, regardless of the data item and what type of revision was made.
  • the GRV for source node 22A corresponds with the number of changes that have been made locally to data 26A (FIG. 2). So a first revision to a data item A would increment the GRV by 1 and a different revision to a different data item would cause the GRV to increment again by 1.
  • a history field 46 indicates how many packetids are remembered for possible retransmission.
  • the GRV value and the history count in the history field 46 are each tracked by the nodes receiving the source data packets 38.
  • the history field 46, in combination with the GRV 44, defines a window of packetids from the node 22 identified by nodeid field 40 that are available for retransmission. Communication of this history-based window allows peers to avoid sending a NACK for data that is known to be expired. It is the responsibility of every receiver node 22 to then request retransmission of packets that it determines have not been successfully received.
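The history-based window can be sketched as follows. The exact window bounds are an assumption for illustration; the point is that a receiver only NACKs packetids still inside the sender's advertised window, since anything older is known to be expired.

```python
def retransmit_window(grv, history):
    """Packetids still available for retransmission at the sender,
    derived from its Global Revision Value and history count."""
    return (grv - history, grv)

def should_nack(missing_id, grv, history):
    """True only if the missing packetid is inside the sender's window;
    NACKing below the window would request data known to be expired."""
    low, high = retransmit_window(grv, history)
    return low <= missing_id <= high
```

For example, with GRV 12 and a history of 5, packetid 8 is worth a NACK but packetid 3 is not.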
  • An action field 48 identifies a type of message that is associated with the source data packet 38.
  • the source data packet 38 may be used for sending a STATUS, DATA, NACK, EXPIRED, CHANGE, or RETRANSMIT message as will be described in further detail below in FIG. 6.
  • Data Model: A payload 50 is included in the source data packet 38.
  • the payload 50 includes a data-name (key) and associated data-revision number.
  • the data-name as described above is essentially a key identifying a particular data item.
  • the data-revision number identifies a revision for the particular data item identified by the data-name. For example, a fourth revision to a "profile" data item may have the entry "profile:4" in the payload 50.
  • a fifth revision to a "video settings" data item would be identified in payload 50 as "video settings:5".
  • the payload 50 will also contain the actual revised data (data-value) as changed by the source node.
  • the synchronization granularity is at the object level - the value of a key. All properties of an object (if relevant) are part of the object and do not need to be synchronized separately.
  • the data 26 (FIG. 2) that is stored for each data-name has its own revision number (data-revision), so that a CHANGE message can coalesce multiple changes for a given datum down to the minimum when replicating over the mesh.
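The per-key coalescing just described can be sketched in a few lines. The function and field names are illustrative assumptions; the behavior shown is that several changes to the same data-name collapse to one (key, latest-revision) entry:

```python
def coalesce_changes(change_log):
    """change_log: list of (data_name, data_revision) pairs in the order
    the changes were applied. Returns the minimal set of entries a CHANGE
    message must carry -- one latest revision per key."""
    latest = {}
    for name, rev in change_log:
        latest[name] = max(rev, latest.get(name, rev))
    return latest

# Three changes, two of them to "profile", coalesce to two entries:
# coalesce_changes([("profile", 4), ("video settings", 5), ("profile", 5)])
#   -> {"profile": 5, "video settings": 5}
```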
  • Every node 22 in the mesh network 20 (FIG. 2) emits its Global Revision Value (GRV) as part of every packet.
  • GRV Global Revision Value
  • changes to specific objects, or other data items in the node can be tracked at a finer resolution with the second data-revision number that is associated with each individual data item.
  • If a receiver node, for example receiver node 22B or 22C in FIG. 2, finds "holes" in the transmission window defined by [GRV-history, data-revision], it sends a NACK message 32 back to the source node 22A to request a repair (FIG. 2).
  • the actual NACK request may include multiple sequence numbers or GRV values so that NACK messages 32 are coalesced.
  • the NACK message 32 may not be sent immediately, but may be sent after a random back-off interval.
  • the back-off interval may be exponentially distributed. Because the repair packets are multicast, any node that requests a particular data item, could cause other nodes 22 to receive the same data item.
  • the random back-off transmission period for NACK messages allows other nodes to suppress similar NACK requests, thus reducing the number of NACKs that need to be sent.
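The back-off and suppression behavior can be sketched as below. The mean delay value and function names are assumptions for illustration; the two ideas shown are the exponentially distributed wait and the pruning of NACKs already overheard from peers:

```python
import random

def schedule_nack(mean_delay=0.5):
    """Seconds to wait before sending a NACK, drawn from an exponential
    distribution so that peers' timers rarely fire simultaneously."""
    return random.expovariate(1.0 / mean_delay)

def fire_nack(pending_ids, overheard_ids):
    """When the timer expires, suppress any ids a peer has already NACKed
    (or that a multicast repair has already satisfied)."""
    return sorted(set(pending_ids) - set(overheard_ids))
```

Because repairs are multicast, one node's NACK can satisfy many listeners, and the randomized delay gives the others time to notice and stay silent.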
  • FIG. 4 shows a source node 22A that includes a Central Processing Unit (CPU) 58 that operates software that provides the Data Distribution Service (DDS) 24.
  • CPU Central Processing Unit
  • FIG. 5 shows a CPU 78 in one of the receiver nodes 22B or 22C that includes software for operating the DDS 24.
  • the CPU 78 wirelessly sends and receives DDS messages via a transceiver 82 connected to an antenna 84.
  • the CPU 58 in the source node 22A also keeps track of the Global Revision Value (GRV) 62 that is associated with changes made to any of the data items 52, 54 and 56.
  • GRV Global Revision Value
  • the CPU 58 has currently made twelve GRV changes to the data items 52, 54 and 56.
  • GRV 11 corresponds to a change made to data item 54, and GRVs 10 and 12 correspond to changes made to data item 52.
  • Changes in the GRV 62 can also be attributed to multiple changes in the same data item.
  • GRV 10 is attributed to revision 4 for profile 52
  • GRV 12 is attributed to revision 5 for profile 52.
  • the current state of receiver node 22C is shown in FIG. 5.
  • Receiver node 22C contains a "profile" data item 72, "video settings" data item 74 and a "multicast table" data item 76.
  • FIG. 6 shows the different DDS messages that can be sent by the DDS 24 in the different mesh nodes.
  • Status Message: A STATUS message 90 is periodically sent by the source node 22A when no other packets are being sent, to indicate that nothing has changed. In the example shown in FIG. 6, the STATUS message 90 includes the nodeid 40 for the source node 22A and the Global Revision Value (GRV) 44.
  • GRV Global Revision Value
  • the action field 48 identifies the packet as a STATUS message. If the receiver node 22B or 22C has the same GRV 44 for the same nodeid 40, then no further action is required.
  • a DATA message 92 is sent whenever the source node 22A changes, modifies, adds, removes, etc. a data item.
  • the DATA message 92 carries the actual data that needs to be updated or added to all of the receiver nodes 22B and 22C.
  • the DATA message 92 is multicast out to the other nodes in the mesh network 20 and contains the data-name and its data-revision number.
  • Negative Acknowledgements: Every receiver node keeps track of received packets, adding the packetid and its received timestamp to its list of received packets, which is maintained on a per-source basis. For example, the source data packets 38 (FIG. 3) are indexed by the source nodeid. The receiving nodes also store the global revision value and history values for the other nodes and update them for every received packet if the GRV has changed. The packetids outside the window are removed from the list, and the resulting list is scanned for missing packetids. A NACK message 94 is sent by the receiver node when missing data is detected. An exponentially distributed random time interval can be calculated and used before sending out the NACK message 94.
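The per-source bookkeeping and gap scan described above can be sketched as follows. The data-structure shapes are assumptions for illustration:

```python
def scan_for_missing(received_by_source, windows):
    """received_by_source: {nodeid: set of packetids received}.
    windows: {nodeid: (low, high)} derived from each sender's GRV and
    history values. Packetids below the window are discarded, then the
    remainder is scanned for holes. Returns {nodeid: [missing ids]}."""
    missing = {}
    for nodeid, ids in received_by_source.items():
        low, high = windows[nodeid]
        in_window = {i for i in ids if i >= low}          # trim expired ids
        holes = [i for i in range(low, high + 1) if i not in in_window]
        if holes:
            missing[nodeid] = holes                        # these get NACKed
    return missing
```

Each entry of the result would then be scheduled for a NACK after the random back-off interval.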
  • the receiver node 22C schedules itself to wake up after the random time interval to check the GRV again and to possibly send a NACK 94 corresponding to the missing source data packet 38.
  • When CHANGE messages 98 or EXPIRED messages 96 are received from source node 22A responsive to the NACKs 94, all pending NACKs are updated and received packets are removed from the list.
  • the NACK messages 94 received from other nodes that request the same information are also removed from the list.
  • the packetids that fall outside the transmission window also get removed. If a NACK list becomes empty while waiting, it gets completely removed.
  • When a NACK message 94 gets sent, it is added to a repair-requested list with the timestamp of when it was sent. Similarly, subsequent NACKs sent by peers are also added to this list.
  • the receiver node 22C still wakes up every so often to scan the lists of all nodes, and restart the NACK process for peers that are missing packets but have not had any other activity. This is retried until the lifetime of the packets has expired. The period of retrying could be adapted if no traffic at all is detected for a node. This could also better handle the case of a node completely disappearing from the mesh network 20.
  • GRV 12 and an associated source nodeid for source node 22A.
  • the source node 22A does not send the actual data in response to the NACK message 94.
  • the source node 22A sends a changelist, enumerating the keys (data-names) and their specific data-revision numbers for the GRVs identified in the NACK message 94.
  • the resulting CHANGE message 98 (FIG. 6) enumerates all the global revision values that are being addressed, and a list of key/values/revisions 99 that are associated with those global revision values. For example, in FIG.
  • the data itself for both the video settings and the profile may not be sent in the CHANGE message 98.
  • the single most current data can be sent out to satisfy older repair requests. This is accomplished with a CHANGE message 98, which identifies what older global revisions should be updated with a single new version of the data.
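How one CHANGE message satisfies several older repair requests can be sketched as below. The message shape and function names are illustrative assumptions; the behavior shown is that all NACKed global revisions for a key map onto the single current (revision, value) pair:

```python
def build_change(nacked_grvs, grv_to_key, current):
    """nacked_grvs: global revisions requested in NACKs.
    grv_to_key: {grv: data_name} mapping maintained by the repairing node.
    current: {data_name: (data_revision, value)} -- only the latest version
    of each data item is kept. Returns (handled GRVs, current entries)."""
    keys = {grv_to_key[g] for g in nacked_grvs if g in grv_to_key}
    return sorted(nacked_grvs), {k: current[k] for k in keys}

# NACKs for GRVs 10 and 12, both changes to "profile", are answered with
# one entry carrying only the newest revision of that key.
```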
  • older values are not kept and only the most current version of the data item and the corresponding revision number are maintained. For example, in FIG.
  • the receiver node 22C notes all the old global revisions that are being handled, and looks at the key/values/revisions to determine which keys it needs to request for retransmission, and sends a RETRANSMIT message 100 (FIG. 6) back to the source node 22A.
  • the receiver node 22C does not send a retransmit request for the video settings.
  • the receiver node 22C determines that current profile data item 72 is out of date.
  • the CPU 78 in FIG. 5 sends a RETRANSMIT message 100 as shown in FIG. 6 that requests the source node 22A to send the profile data associated with GRV 12.
  • the source node 22A responds to the RETRANSMIT message 100 by producing another DATA message 92 as shown in FIG.
  • the source node 22A may send an EXPIRED message 96. This handles race conditions when the data expires while this exchange is happening.
  • the EXPIRED indication can alternatively be part of the CHANGE message 98. If any of the intervening messages are lost, the whole process starts over.
  • When the source node 22A receives the NACK message 94 (FIG. 6), it first checks that the faulty packetid is within the transmission window. If not, it sends the EXPIRED message 96 indicating that peers should stop asking for it. If within the transmission window, the original data item is fetched and resent. The source data items remain in a sent packet list until their original lifetime has expired, independent of the number of retransmissions. The entry is updated to indicate the time of the most recent transmission. If a NACK message 94 is received within a small time after a data packet is sent out by the source node 22A, it may be ignored with the assumption that the just-sent data packet will satisfy the NACK 94. In the case of a race condition that fails in favor of dropping the NACK 94, the receiver node 22C will request the data item again. Note that any node in the mesh network 20 (FIG. 2) may provide the repair; the source node 22A can be any node that has the requested data.
  • a random back-off delay is utilized to prevent every node from simultaneously doing so.
  • This optional choice implies that the node providing the repairs stores global revision to data revision mappings for all other nodes.
  • the protocol has another advantage in that the DDS message exchange prevents data from being sent multiple times, particularly for the case that a single key is getting repeatedly updated. In this case only the most current value will get transmitted, whereas with the reliable multicast transport, every change is sent, just to overwrite the previous value.
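The coalescing advantage described above can be illustrated with a small sketch (all names are hypothetical): updates to the same key overwrite each other in a pending set, so only the most current value and revision go out in the next announcement.

```python
class CoalescingQueue:
    """Sketch of the DDS coalescing behavior: repeated updates to one
    key collapse so only the most current value is transmitted."""

    def __init__(self):
        self.pending = {}   # key -> (revision, value); later puts overwrite

    def put(self, key, revision, value):
        self.pending[key] = (revision, value)

    def flush(self):
        """Return one batch of (key, revision, value) tuples and clear it."""
        batch = [(k, rev, val) for k, (rev, val) in self.pending.items()]
        self.pending.clear()
        return batch
```

A plain reliable multicast transport would instead send every intermediate value, each one only to be overwritten by the next.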
  • FIGS. 7-11 explain some of the different DDS delivery scenarios that can occur during data consistency operations.
  • Sequential delivery. FIG. 7 shows one of the simplest cases, where the source data packets 38 (FIG. 3) are sent from source node 22A and successfully received sequentially in their original sending order by the receiver node 22B.
  • Each packet N has a global version number that indicates that the packet being received is the most current, so no additional action need be taken.
  • the source data packets are sent by the source node 22A and successfully received by the receiver node 22B, but their original order is not maintained.
  • FIG. 8 shows the case where the packet N+1 is received by the receiver node 22B before the expiration of random delay time T. In this case the NACK message 94 is suppressed.
  • FIG. 9 shows the case when a packet sequence number skips by one and the missing source data packet N+1 has still not been received by the time the NACK message 94 is scheduled to go out.
  • a NACK message 94 is pending and packet N+1 has not been received within time T.
  • the receiver node 22B sends NACK message 94 to the source node 22A that identifies the source data packet N+1 as described above.
  • the source node 22A through the DDS protocol (the CHANGE and RETRANSMIT messages are not shown) then resends the source data packet N+1.
  • FIG. 10 shows the situation when more than one source data packet is detected as missing.
  • the set of packet numbers, data items, or GRVs can be encapsulated into a single NACK message 94.
  • the NACK message 94 is ready to be sent (i.e. the soonest random backoff interval) all pending NACKs for packets N+1 and N+2 are included in NACK message 94.
  • the protocol between the NACK and the retransmission of the data is not shown.
  • FIG. 11 shows the situation where multiple receiver nodes 22B and 22C detect missing packets.
  • Each receiver node 22B and 22C could potentially send a NACK message 94.
  • each receiver node 22B and 22C may use a random delay before sending their NACK message 94. This will likely cause one of the receiver nodes 22B or 22C to send a NACK message before the other receiver node.
  • receiver node 22C is scheduled to send the NACK message 94 at random time interval T and receiver node 22B is scheduled to send the same NACK message 94 at random time interval T+1 after receiver node 22C. Because the NACK messages 94 are multicast, other nodes will see the first NACK message 94 sent by receiver node 22C.
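The receiver-side scenarios in FIGS. 7-11 can be sketched together: a missing packet schedules a NACK after a random back-off, arrival of the packet or an overheard NACK from another receiver cancels it, and all NACKs due at once are coalesced. This is an illustrative model; the class name, timing values, and callback structure are assumptions.

```python
import random

class NackScheduler:
    """Sketch of receiver-side NACK scheduling and suppression."""

    def __init__(self, max_backoff=0.5, rng=random.random):
        self.max_backoff = max_backoff
        self.rng = rng
        self.pending = {}   # packetid -> time the NACK is due

    def on_missing(self, packetid, now):
        # Schedule a NACK after a random delay T (FIG. 9).
        if packetid not in self.pending:
            self.pending[packetid] = now + self.rng() * self.max_backoff

    def on_packet(self, packetid):
        # The missing packet arrived before T expired: suppress (FIG. 8).
        self.pending.pop(packetid, None)

    def on_overheard_nack(self, packetids):
        # Another receiver's multicast NACK covers these ids (FIG. 11).
        for pid in packetids:
            self.pending.pop(pid, None)

    def due(self, now):
        # All due ids are coalesced into one NACK message (FIG. 10).
        ids = sorted(p for p, t in self.pending.items() if t <= now)
        for pid in ids:
            del self.pending[pid]
        return ids
```

Because NACKs are multicast, the first receiver's NACK reaches the others, whose `on_overheard_nack` then cancels the duplicate request.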
  • an Enhanced MFC (eMFC) system architecture 212 is a distributed multicast routing mechanism and consists of a multicast forwarding cache 224 and a multicast table computation 222. These two components derive information from global and local states available on a mesh node to properly route multicast traffic. All of the nodes running the enhanced MFC 212 create an overlay network over both mobile mesh networks and conventional Internet Protocol (IP) based networks.
  • IP Internet Protocol
  • Multicast aware applications 216 use socket application program interface (API) calls 217 to open a multicast socket 220, declare themselves as a multicast source, set the multicast data type (e.g., video, voice, bulk data, and so forth), send multicast data 242, receive multicast data 242, and close the socket 220.
  • API application program interface
  • These socket calls 217 rely on the underlying multicast forwarding cache 224 to select the zero or more network interfaces 226 for forwarding multicast traffic 242.
  • the multicast forwarding cache 224 is maintained by a multicast table computation component 222.
  • Multicast table 222 fills in the multicast forwarding cache 224 with entries for each known multicast source and group.
  • the multicast table computation component 222 derives these multicast group senders and groups from global state information available within the mobile mesh network.
  • a public example of such a global state distribution protocol is the Multicast Session Directory sdr modeled on work done by Van Jacobson at Lawrence Berkeley National Laboratory (LBNL).
  • the multicast table computation component 222 derives a network topology from the underlying unicast routing protocol 218.
  • protocol 218 is a proactive, ad-hoc, link-state based protocol, however it need not be.
  • Internet multicast protocols Distance Vector Multicast Routing Protocol (DVMRP) and Multicast Extensions to OSPF (Open Shortest Path First), for example, derive their topology information from distance vector and link-state protocols respectively.
  • DVMRP Distance Vector Multicast Routing Protocol
  • OSPF Open Shortest Path First
  • Multicast membership information 228 and legacy multicast support 230 are provided to the multicast table computation 222.
  • the multicast membership information 228 in one example is global state information that all mesh nodes contain that is distributed using a Distributed Distribution Service (DDS) as described above.
  • Legacy multicast support 230 relates to existing multicast support in either the MFC 224 or in the multicast packet. If a packet does not include an eMFC header, then the packet can revert back to using legacy multicast support 230 as a conventional multicast packet.
  • Mesh interface support 234 relates to specific mesh node information.
  • a node may determine that a particular interface is a mesh interface and accordingly provide any necessary routing decision to account for the mesh network.
  • the enhanced MFC 212 is provided through a distributed member state 232 that is relayed to the nodes in the mesh network through an enhanced MFC header 250 that is shown in more detail in FIG. 17.
  • Three eMFC operations of particular interest include duplicate packet detection 236, security feature support 238 and QoS enhancements 240.
  • the enhanced MFC 212 is a distributed multicast routing mechanism that maintains global state using proprietary packet header 251 pre-pended on multicast packets 250. This distributed state is necessary for proper support of features such as Quality of Service and link quality measures.
  • This header 251 contains fields necessary to distribute eMFC state to peer mesh nodes 270 and support features such as Quality of Service (QoS).
  • QoS Quality of Service
  • this same eMFC header 251 is seen by each enhanced MFC 212 along the path to the final multicast destinations. This is because each mesh node 270 consults the eMFC 212 before forwarding the multicast packet 250.
  • a first mesh node 270A may be located in a vehicle
  • a second mesh node 270B may operate in a Personal Digital Assistant (PDA)
  • a third mesh node 270C may operate in a wireless laptop computer.
  • the multicast packet 250 is sent by node 270A and is prepended with the eMFC header 251.
  • the eMFC 212 in mesh node 270B routes the multicast packet 250 to mesh node 270C according to the information in the eMFC header 251.
  • Mesh node 270C receives and possibly continues to route the multicast packet 250 according to the information in eMFC header 251.
  • the enhanced MFC packet header 251 serves to distribute state for this multicast stream to all mesh nodes along its path.
  • the MFC packet header 251 allows the mesh nodes to conduct more effective multicast related operations such as duplicate packet detection 236, security feature support 238, and QoS enhancements 240 (FIG. 16) that are not currently supported in conventional mesh networks.
  • the individual fields of the eMFC header 251 are described in further detail in FIG. 19.
  • a version number 252 is used for backwards compatibility with other multicast versions.
  • Router ID 254 Router Identifier
  • the router ID 254 remains constant throughout the lifetime of the mobile mesh network, is associated with a particular mesh node, and is not tied to any IP source address. Thus the router identifier 254 can remain the same for a mesh node 270 even when the node moves to another location in the same or a different mesh network.
  • the header 251 includes a sequence number 256 that identifies the multicast packet number in the multicast stream sent by a particular router ID 254.
  • a destination address 257 and destination port 258 identify the multicast address for a particular multicast group such as shown in tables 2 and 3 above.
  • the traffic category 260 is used for QoS operations in the mesh nodes 270.
  • the eMFC header 251 can also be used in the nodes to derive multicast traffic statistics. These statistics can be used for quality of service features described below.
  • An optional encryption value 262 shown in FIG. 17 can be used for identifying a type of encryption scheme used with the multicast packet 250.
  • the eMFC header 251 is located after the IP header and before the data payload 264.
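The header fields listed above (version 252, router ID 254, sequence number 256, destination address 257, destination port 258, traffic category 260, and the optional encryption value 262) can be sketched as a fixed-layout structure. The field order follows the description; the field widths and byte order are assumptions, since the patent text does not specify them.

```python
import struct

# Assumed layout: 1-byte version, 4-byte router ID, 4-byte sequence
# number, 4-byte destination address, 2-byte destination port, 1-byte
# traffic category, 1-byte optional encryption identifier.
EMFC_FMT = "!B4sI4sHBB"

def pack_emfc(version, router_id, seq, dest_addr, dest_port, category, enc=0):
    """Build a hypothetical eMFC header to prepend to a multicast packet."""
    return struct.pack(EMFC_FMT, version, router_id, seq,
                       dest_addr, dest_port, category, enc)

def unpack_emfc(buf):
    """Parse the hypothetical eMFC header from the front of a packet."""
    size = struct.calcsize(EMFC_FMT)
    names = ("version", "router_id", "seq", "dest_addr",
             "dest_port", "category", "enc")
    return dict(zip(names, struct.unpack(EMFC_FMT, buf[:size])))
```

Each hop would parse this header with `unpack_emfc` before consulting its forwarding state, since the header rides between the IP header and the data payload.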
  • FIG. 20 shows how nodes within the mobile mesh network are either directly connected to other nodes in the same mesh or with other mobile mesh networks via an overlay network.
  • mesh 1 and mesh 2 communicate between themselves via a rendezvous 280.
  • the rendezvous is a publicly known, pre-established server that connects to meshes 1 and 2 via a tunnel 281.
  • the rendezvous server 280 itself contains an enhanced MFC 212 and appears as a mesh node peer to mesh nodes 1 and 2.
  • the nodes on mesh 1 and mesh 2 can communicate with each other using eMFC 212 or can communicate with other nodes in Internet 282 via conventional multicast protocols.
  • eMFC 212 can communicate with other nodes in Internet 282 via conventional multicast protocols.
  • two nodes on disparate mesh networks or on different mesh and Internet networks can exchange the eMFC information contained in the eMFC header 251 (FIG. 17).
  • Enhanced MFC 212 supports multicast on both conventional Internet network interfaces and mesh-specific network interfaces. Specifically, the eMFC 212 supports interfaces that suffer from the hidden node problem by repeating multicast traffic on those mesh node interfaces that face multicast listeners that may not normally hear multicast traffic.
  • a multicast packet 250 sent from a conventional interface may be expected to reach all peers connected to that interface.
  • Ethernet interfaces on a hub or switch are examples of conventional interfaces. Even if this assumption is not true, for example in the case of multicast transmissions passing through some switches, the underlying system components, in this case the switch, have been designed to compensate. In the case of mesh interfaces, however, no such compensation exists. Instead, mesh nodes must repeat multicast traffic on some mesh interfaces for the benefit of downstream nodes that cannot hear the original multicast transmission.
  • mesh node A may send out a multicast packet 250 that is destined for node C.
  • node C may not be in range to receive packet 250 directly from node A.
  • node B has to operate as a router to relay multicast packet 250 from node A to node C.
  • blindly repeating multicast packet 250 to every node within range can create broadcast storms where all nodes are broadcasting the same multicast packets.
  • the mesh nodes take into account mesh interface information when making decisions regarding forwarding multicast packets.
  • node B (FIG. 22) may receive a multicast packet in block 300.
  • Node B determines that the packet 250 has an enhanced MFC header 251 in block 302.
  • Node B in decision block 304 determines whether or not the packet must be repeated on the received mesh interface. If not, then any conventional multicast routing is performed in block 306. However, if the packet must be repeated on the received mesh interface in decision block 304, then the node determines if it has any downstream receivers associated with the mesh interface in decision block 308. If not, then the multicast packet need not be repeated and normal multicast operations are conducted in block 306. If node B does have downstream receivers in decision block 308, the multicast packet is repeated to the identified downstream nodes in block 310 on the received mesh interface, thus forwarding traffic onwards to downstream nodes that can hear the received mesh interface but not the original multicast packet (FIG. 22).
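The decision flow in blocks 300-310 above can be sketched as one function. This is an illustrative reading of the flowchart, not the patented implementation; the dictionary shapes and callback names (`forward`, `repeat`) are assumptions.

```python
def handle_mesh_packet(packet, iface, downstream, forward, repeat):
    """Sketch of blocks 300-310: decide whether a received multicast
    packet must be repeated on the mesh interface it arrived on.

    `downstream` maps a mesh interface name to the set of downstream
    nodes that can hear that interface."""
    if not packet.get("emfc_header"):
        forward(packet)          # no eMFC header: conventional handling
        return
    if not iface.get("must_repeat"):
        forward(packet)          # block 306: conventional multicast routing
        return
    receivers = downstream.get(iface["name"], set())
    if not receivers:
        forward(packet)          # block 306: no downstream receivers
        return
    repeat(packet, iface["name"], receivers)   # block 310: repeat onwards
```

Gating the repeat on the downstream-receiver check is what keeps this from degenerating into the broadcast storm the text above warns about.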
  • Downstream nodes may or may not be members of the multicast group associated with the multicast address in the eMFC header 251 (FIG. 17).
  • the multicast packet 250 is sent to mesh node B from node A.
  • node B may still forward the packet to node C since node C is a downstream receiver for node B. This allows another mesh node downstream from node C, that is a member of the multicast group, to successfully receive multicast packet 250 from node C.
  • node D is not a designated downstream mesh node for node C. Thus, node C will not transmit multicast packet 250 to mesh node D.
  • FIG. 23 shows in more detail how node B routes multicast packets 250.
  • Node B receives the multicast packet in block 320.
  • Node B identifies the members of the multicast group in block 322 according to router ID 254, the destination address 257 and destination port 258 (FIG. 17) in the eMFC header 251 and the distributed multicast routing table.
  • the source of the multicast packet is identified in block 322 via the router identifier 254 in the eMFC header 251.
  • node B (FIG. 22) identifies any nodes for forwarding the multicast packet 250 according to local routing tables.
  • the multicast routing table in block 326 may require node B to forward the multicast packet from the source node identified in block 322 to one or more of the nodes identified in block 322. Accordingly, node B forwards the multicast packet 250 to the identified nodes in block 328, if they pass the mesh interface criteria described in FIG. 21.
  • Conventional routing protocols notify nodes of their associated downstream nodes. This, for example, is performed by the multicast membership information 228 in FIG. 16.
  • the distributed eMFC headers 251 then identify the particular multicast group associated with the multicast packet 250.
  • Duplicate Packet Detection. Nodes in the mesh network have the possible disadvantage of receiving duplicate multicast packets.
  • a mesh node may receive multiple copies of the same multicast traffic for a variety of reasons including mobility or interface changes. For example, in FIG. 24 a node A may send out a multicast packet 250 to node B. Node B may then broadcast the same multicast packet 250 to node C. However, that same broadcast of multicast packet 250 may also be received back at node A. The duplicate multicast packet 250 can cause node A to repeat processing on the same multicast packet. Thus, duplicate packet detection is particularly important in the mobile, wireless environment of a mobile mesh network.
  • the duplicate packets 250 are identified by the eMFC 212 in node A and silently dropped in operation 346 before reaching the application that processes the packet.
  • FIG. 25 shows the basic logic performed at the eMFC 212 to detect and drop duplicate packets.
  • the node receives a multicast packet.
  • the enhanced MFC 212 in the node reads the information in the eMFC header 251 (FIG. 17). If the eMFC information 251 indicates a received multicast packet is a duplicate of a packet previously received by the same node, the packet is dropped in block 346. If not, the packet is forwarded in block 348. Duplicate multicast packets are detected using a combination of the router ID
  • FIG. 26 explains in more detail.
  • a mesh node receives a multicast packet.
  • the eMFC 212 checks the router ID 254 in the packet header 251
  • the node forwards the multicast packet in a normal manner in block 360. If the node has received other packets with the same router ID in block 352, then the destination address 257, destination port 258, and packet sequence number 256 values are checked in block 354. If these values are different from those of other recently transmitted packets, the packet is forwarded in block 360. If the router ID, destination address, destination port, and sequence number are the same as those of a recently transmitted packet, the packet is determined to be a duplicate and discarded in block 358.
  • the enhanced MFC 212 tags each multicast packet at the source node with a monotonically increasing sequence number 256.
  • the sequence number 256 is accordingly used at each hop in the path from source to receivers to weed out and drop duplicate multicast packets.
  • multicast packets may arrive out of order, so the eMFC 212 checks for reception of multicast sequence numbers rather than simply keeping a maximum sequence number for each multicast stream. Likewise sequence numbers may "roll-over". A sequence number rolls-over when the maximum sequence number has been assigned and the next packet is marked with the lowest sequence number. The eMFC 212 also compensates for sequence number roll-over.
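The detection logic in blocks 350-358, including the out-of-order and roll-over handling just described, can be sketched as follows. The class name, the history depth, and the 32-bit sequence space are assumptions for illustration.

```python
from collections import deque

MAX_SEQ = 2**32    # assumption: 32-bit sequence numbers
HISTORY = 64       # assumption: how many recent numbers to remember per flow

class DuplicateFilter:
    """Sketch of eMFC duplicate detection: a packet is a duplicate when
    the same (router ID, destination address, destination port) flow has
    recently carried the same sequence number. A set of recently seen
    numbers is kept per flow, rather than a single maximum, because
    packets may arrive out of order; comparing modulo MAX_SEQ makes
    sequence-number roll-over harmless."""

    def __init__(self):
        self.flows = {}   # flow key -> (deque of recent seqs, set of them)

    def is_duplicate(self, router_id, dest_addr, dest_port, seq):
        key = (router_id, dest_addr, dest_port)
        seq %= MAX_SEQ
        order, seen = self.flows.setdefault(key, (deque(), set()))
        if seq in seen:
            return True           # discard (block 358)
        order.append(seq)
        seen.add(seq)
        if len(order) > HISTORY:
            seen.discard(order.popleft())
        return False              # forward (block 360)
```

Keying on the router ID rather than the IP source address matches the earlier point that the router identifier stays constant even as a node moves.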
  • multicast traffic between nodes running the enhanced MFC 212 is secured by supporting security features such as authenticating adjacent neighbors and encrypting multicast traffic hop-by-hop.
  • Security is particularly important in a frequently changing, mobile wireless network, such as mobile mesh network.
  • Each mobile node A-D using the eMFC 212 may take advantage of security features available in the system. For example, each mobile mesh node A-D authenticates itself to directly connected neighbors. After authenticating each other and exchanging certificates, mobile node peers then encrypt multicast traffic on a hop-by-hop basis. Thus multicast traffic destined for a mobile node peer that mistakenly arrives at a listener within radio range 366 does not arrive in the clear.
  • a malicious listener 364 must first break the encrypted multicast packet as sent by the previous hop. This encryption is carried out across tunnels established between mesh nodes A-D and the rendezvous 280 (FIG. 20) as well.
  • an encryption identifier 262 may optionally be contained in the eMFC header 251 to identify a particular type of encryption scheme used by the source of the multicast packet 250.
  • QoS Enhancements. Enforcement of Quality of Service is particularly important in a wireless environment with limited bandwidth and potential radio interference such as in mobile mesh networks.
  • the enhanced MFC 212 supports quality service through traffic measurement and enforcement measures such as packet prioritization, admission control, and traffic shaping. Applications aware of the eMFC 212 support these QoS features by marking application packets into well known categories. Legacy application packets are marked as "best effort" by default.
  • FIG. 28 shows multiple mesh nodes 270 that each may transmit and receive multicast packets 250. One or more of the mesh nodes may make QoS decisions regarding received packets.
  • a node 270A may be located in a vehicle that sends multicast packets 250 to a PDA node 270B.
  • PC mesh nodes 270C and 270D may also send multicast packets 250 to the PDA node 270B.
  • the PDA node 270B may not have the capacity to process and forward all of the multicast packets received from nodes 270A, 270C and 270D. In this case, some of the packets 250 may have to be dropped in QoS operation 370.
  • the PDA node 270B may be able to process some or all of the received packets 250, but must prioritize their processing order.
  • Multicast packets handled by the eMFC 212 are prioritized according to their traffic category 260 (FIG. 17). Sample traffic categories are shown in the priority table in FIG. 30.
  • eMFC 212 can also mark multicast packets with the appropriate Differentiated Services Code Point (DSCP) bits as defined by the IETF. This permits further prioritization below the eMFC 212 by interfaces that support traffic prioritization such as 802.11i interfaces.
  • FIG. 29 describes in more detail how the enhanced MFC 212 in the nodes 270 in FIG. 28 are used for conducting QoS services.
  • the nodes 270 are configured with a priority table for different traffic categories.
  • a priority table is shown in FIG.
  • as the enhanced MFC 212 sends and receives multicast traffic from peers in block 382, it also measures link quality hop-by-hop with respect to multicast traffic. It does so by tracking the number of multicast packets sent and received successfully for each directly connected peer mesh node running the eMFC 212. These measurements are taken in block 384 according to different combinations of the router ID 254, destination address 257, destination port 258, sequence number 256, and traffic category 260 in the eMFC header 251 (FIG. 17). Link costs, as computed by the multicast table computation component 222 (FIG.
  • the eMFC 212 can impose multicast rate limits if desired. Policy set by network administration on traffic limits for multicast packets will be enforced by the eMFC 212.
  • a service level agreement (SLA) concerning the amount of video traffic permissible in the mobile mesh network can be enforced to limit the video traffic allowed at each hop during multicast transmission. Video sources that exceed this limit would not be allowed past the first eMFC 212, sparing the mobile mesh network from excessive traffic.
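The per-hop traffic limit described above can be sketched as a token bucket sized from the SLA rate: video packets that exceed the allowance are dropped at the first eMFC they reach. The rates and the token-bucket formulation are illustrative assumptions, not the patented mechanism.

```python
class TokenBucket:
    """Sketch of an SLA-style rate limit on one traffic category."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, size, now):
        """Admit a packet of `size` bytes at time `now`, or drop it."""
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

One bucket per (source router ID, traffic category) pair would let each hop identify and throttle an over-limit video source without affecting other traffic.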
  • the eMFC 212 identifies video traffic in block 386 via the traffic category 260 in FIG. 17.
  • the eMFC 212 identifies the source of the video traffic and the amount of video traffic received from that source in block 384 according to the router ID 254 and corresponding sequence number 256.
  • the eMFC 212 then prioritizes the processing of the video traffic in block 388 according to the priority table shown in FIG. 30. As shown in the priority table of FIG.
  • highest priority may be given to different types of low bandwidth control traffic.
  • the larger data traffic such as video data may be given a lower priority.
  • the eMFC 212 may then either drop or delay the processing of some or all of the video traffic according to the amount of received traffic.
  • the multicast groups identified by the destination address 257 and the destination port 258 may have different priority levels. This allows particular users, such as supervisors or emergency personnel, to send messages at a higher priority than other users.
  • the combination of the router ID 254, destination address 257, destination port 258 and traffic category 260 is used to assign particular groups of users different priority levels.
  • the eMFC 212 can enforce multicast session characteristics such as the number of multicast sessions, throughput per session, or multicast participants per session.
  • the eMFC 212 can then track the statistics for particular types of data such as packets received from a particular source (router ID), destination address and/or port, or packets having a particular traffic category.
  • the statistics can identify the amount of packets received for the particular type of traffic and the percentage of that type of traffic that was successfully processed, dropped, etc.
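The category-based prioritization in blocks 386-388 can be sketched as a priority queue keyed on the traffic category from the eMFC header. The specific categories and ranks below are illustrative assumptions in the spirit of the priority table of FIG. 30 (lower rank = higher priority); the actual table is configured on the nodes.

```python
import heapq

# Hypothetical priority table: low-bandwidth control traffic highest,
# large data traffic such as video lower, legacy traffic "best effort".
PRIORITY = {"control": 0, "voice": 1, "video": 2, "bulk": 3, "best_effort": 4}

class QosQueue:
    """Sketch of category-based packet prioritization: packets are
    dequeued by traffic-category rank, FIFO within a category."""

    def __init__(self):
        self.heap = []
        self.counter = 0    # tie-breaker preserving arrival order

    def push(self, packet):
        rank = PRIORITY.get(packet.get("category"), PRIORITY["best_effort"])
        heapq.heappush(self.heap, (rank, self.counter, packet))
        self.counter += 1

    def pop(self):
        return heapq.heappop(self.heap)[2]
```

A node short of capacity, like the PDA node 270B above, would service this queue in order and drop or delay whatever remains at the low-priority end.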
  • FIG. 31 shows the components inside a mesh node 270 used for conducting eMFC 212.
  • a Central Processing Unit (CPU) 402 accesses software that provides the eMFC operations 212.
  • the CPU 402 sends and receives multicast packets via a transceiver 404 and antenna 406.
  • a memory 402 may include the multicast routing tables and priority tables described above.
  • the enhanced MFC 212 supports multicast traffic generated both with and without eMFC headers 251.
  • the eMFC supports both legacy multicast applications and those written using eMFC features. Not all multicast applications will take advantage of the features of the eMFC 212. Consequently, support for legacy multicast applications is built in to the eMFC. Using this legacy source and receiver information, the eMFC 212 sets the multicast forwarding cache 224 (FIG. 16) to forward multicast packets from multicast source applications.
  • Legacy multicast packets received without the eMFC headers 251 are passed directly to the applications registered for that multicast group.
  • Legacy multicast applications running on mesh nodes hosting an eMFC 212 use standard multicast socket API calls 217 (FIG. 16). These calls are intercepted, noted, and passed along by the eMFC 212.
  • Legacy multicast sources running on nodes in the mobile mesh network that do not host the eMFC 212 are detected by neighbor nodes running the eMFC 212. An example of such a multicast source would be a camera within the mesh sending video multicast traffic.
  • Multicast receivers running on nodes in the mobile mesh network not running the eMFC 212 are detected via the IGMP messages issued by every multicast receiver.
  • Legacy multicast sender and receiver information is propagated as global state.
  • Legacy multicast packets are marked for "best effort" delivery, the default quality of service class.
  • the system described above can use dedicated processor systems, micro controllers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware. For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A Data Distribution Service DDS transfers information between nodes (14) in an ad hoc mobile mesh network (12). The DDS includes many different novel features including techniques for coalescing retransmit requests to minimize traffic, providing a reasonable level of reliability for event oriented communications, multicasting retransmissions for use by many nodes (14), and providing other optimizations for multicast traffic. An Enhanced Multicast Forwarding Cache (eMFC) supports multicast transmissions in mobile mesh networks (12) and can be used in combination with the DDS. The enhanced MFC is designed to support mesh node mobility, quality of service, and security requirements that are particular to mesh networks. To achieve these goals, the enhanced MFC draws from a global state maintained by a unicast routing protocol, multicast aware applications, and the distributed DDS and EMFC services. The eMFC distributes this derived global state through the use of an eMFC-specific multicast packet header.

Description

RELIABLE MESSAGE DISTRIBUTION WITH ENHANCED EMFC FOR AD HOC MESH NETWORKS
This application claims priority from U.S. Provisional Application Serial No. 60/543,352, filed February 9, 2004 and U.S. Provisional Application Serial No. 60/543,353, filed February 9, 2004.

Background

FIG. 1 shows a typical mesh network 12 with a node A communicating with a node B through multiple hops, links, nodes 14, etc. The nodes 14 can be any combination of wired or wireless mobile communication devices such as portable computers that may include wireless modems, network routers, switches, Personal Digital Assistants (PDAs), cell phones, or any other type of mobile processing device that can communicate within mesh network 12. The network nodes 14 in mesh network 12 all communicate by sending messages to each other using the Internet Protocol (IP). Each message consists of one or more multicast User Datagram Protocol (UDP) packets. Each node 14 includes one or more network interfaces, all of which may be members of the same multicast group. Each node has an associated nodeid used for identifying both the source and the intended set of recipients of a message. Because the messages are multicast, routing details are transparent to the application. Transmission Control Protocol (TCP) is commonly used to provide reliability for point-to-point communication, using a direct Acknowledgement (ACK) based design, but in a multicast scenario with multiple peers, the more efficient approach is a Negative Acknowledgement (NACK) based design. That is, when data is successfully transmitted, no additional communication is needed to affirm that fact; there are no acknowledgements. When packet loss is detected, the NACK is sent back to the source to request retransmission. Sequence numbers are used to detect missing or out of sequence packets. The present invention is such a NACK-based design. In mesh networks, it is necessary to maintain certain data consistency between the different nodes 14.
For example, all the nodes 14 may need to know which devices are part of the same multicast groups. This requires all of the nodes 14 to have the same versions of different multicast tables. This is typically done by exchanging data between nodes 14 and then responding with NACK responses if the data is not successfully received. A substantial amount of bandwidth and processing resources are required to maintain data consistency between the different nodes 14. Current techniques for maintaining data consistency between different mobile nodes are also inefficient. Computers in the modern Internet communicate using a common language based on the well-understood mechanisms of routing. Routers in the Internet compute the best path to all known computers and act as traffic cops to direct such traffic. The results of these computations are stored in what is known as a forwarding table. This forwarding table specifies a next hop for each possible destination. The next hop is the computer to which traffic must be forwarded for a particular destination address. Frequently a default router is specified as the preferred router to which to forward traffic when the destination is not known to a router. Non-router computers, known as hosts, also have a forwarding table. In the conventional Internet, a host's forwarding table tends to be much simpler than a router's forwarding table because hosts typically are connected to the Internet by one interface and the specified default router handles most addresses. These assumptions do not hold for hosts in a mobile mesh network. FIGS. 12 and 13 show a network topology where a node A provides unicast forwarding. Table 1 shows a unicast forwarding table for the 4-node network topology shown in FIGS. 12 and 13.
Table 1: Node A's unicast forwarding table

Internet addresses are the 32-bit integer addresses specified in Internet Protocol version 4 (IPv4) or the 128-bit addresses specified in Internet Protocol version 6 (IPv6). Human readable addresses such as www.packethop.com are translated by Domain Name Servers (DNS) into their integer equivalents. These addresses are commonly known as unicast addresses. Unicast addresses specify a unique computer on the Internet. A portion of Internet addresses, however, is reserved for multicast. Multicast addresses are used for 1-to-many communication from a computer to a group of computers. Traffic sent to a multicast address will arrive not at one computer, but at many computers. Examples of applications that might use multicast include classroom lectures and video conferences. Routers that receive multicast traffic need to simultaneously forward that multicast traffic to one or more destinations. To do so, routers need to use a specific version of the forwarding table commonly known as the Multicast Forwarding Cache (MFC). Example operations for a multicast forwarding cache are shown in FIG. 14. A multicast group consisting of nodes B, C, and D is denoted by a single multicast address G. Table 2 shows node A's multicast forwarding cache and Table 3 shows node B's multicast forwarding cache.
Table 3: Node B's multicast forwarding cache
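To make the structure of a multicast forwarding cache concrete, the following is a minimal sketch in Python. The class name, interface names, and the simple reverse-path check are illustrative assumptions, not taken from the patent's actual tables.

```python
# Hypothetical sketch of a multicast forwarding cache (MFC).
# Each (source, group) pair maps to the expected inbound interface and
# the set of next hops to which matching traffic is duplicated.

class MulticastForwardingCache:
    def __init__(self):
        # (source, group) -> (inbound interface, set of next hops)
        self._entries = {}

    def add_route(self, source, group, inbound_iface, next_hops):
        self._entries[(source, group)] = (inbound_iface, set(next_hops))

    def forward(self, source, group, arrived_on):
        """Return the next hops for a packet, or an empty set when there
        is no entry or the packet arrived on an unexpected interface
        (a simple reverse-path check)."""
        entry = self._entries.get((source, group))
        if entry is None:
            return set()
        inbound_iface, next_hops = entry
        if arrived_on != inbound_iface:
            return set()  # fails the reverse-path check; drop
        return next_hops

# Node A duplicates traffic from local source A for group G toward B and C:
mfc = MulticastForwardingCache()
mfc.add_route("A", "G", "local", {"B", "C"})
```

A lookup for a packet from source A to group G arriving on the expected interface yields the set of next hops; any other arrival is dropped.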
To compute either the forwarding table or the multicast forwarding cache, a router first needs to compute paths to known destinations using a routing protocol. Several such routing protocols exist, all of which are based on well-known graph theory algorithms established by mathematicians following on Euler's original work on the Königsberg Bridge problem in 1736. These routing algorithms may be broadly categorized into link-state and distance-vector algorithms. Distance-vector algorithms exchange shortest-path distances to destinations between communicating routers. Based on this shortest-path distance information, each router independently computes its forwarding table. The prime example of a distance-vector based routing algorithm is the Routing Information Protocol (RIP). A link-state routing protocol, by contrast, distributes the topology of the network to all nodes, each of which independently computes its forwarding table. The prime example of a link-state algorithm is Open Shortest Path First (OSPF). Link-state based routing protocols are the most widely deployed in the Internet. Once the routing protocol has computed the shortest path to all destinations, the router may update its forwarding table. These updates usually take place each time the network topology changes in a way that results in a forwarding table change. In a similar fashion, the router must update the multicast forwarding cache based on available information about multicast sources and receivers. To update the multicast forwarding cache, the router uses a multicast routing protocol. A multicast routing protocol may or may not use the previously computed unicast routing table.
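As an illustration of the link-state approach described above, the sketch below computes a next-hop forwarding table with Dijkstra's shortest-path algorithm. The topology and function names are hypothetical; real protocols such as OSPF add areas, link-state flooding, and tie-breaking rules omitted here.

```python
import heapq

def compute_forwarding_table(topology, root):
    """Dijkstra over an undirected weighted graph; returns a
    destination -> next-hop table as a link-state router would.
    `topology` maps node -> {neighbor: link cost}."""
    dist = {root: 0}
    first_hop = {}           # destination -> first hop on shortest path
    heap = [(0, root, None)]
    while heap:
        d, node, hop = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                # when leaving the root, the first hop is the neighbor itself
                first_hop[nbr] = nbr if node == root else hop
                heapq.heappush(heap, (nd, nbr, first_hop[nbr]))
    return first_hop

# Illustrative 4-node topology (A-B, B-C, B-D), not the patent's figures:
topology = {
    "A": {"B": 1},
    "B": {"A": 1, "C": 1, "D": 1},
    "C": {"B": 1},
    "D": {"B": 1},
}
table = compute_forwarding_table(topology, "A")
```

For node A in this topology, every destination is reached through next hop B, mirroring the shape of a unicast forwarding table like Table 1.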
For example, the Protocol Independent Multicast (PIM) protocol uses the results computed by the unicast routing protocol while the Distance Vector Multicast Routing Protocol (DVMRP) uses its own internal unicast routing protocol. In either case, the multicast routing protocol updates the multicast forwarding cache on each router. Internet hosts that are multicast receivers identify themselves to nearby routers using the Internet Group Management Protocol (IGMP). Each router then distributes this multicast group receiver membership information to peer routers using its multicast routing protocol. Internet hosts that are multicast sources simply send multicast packets destined for the appropriate multicast group address to their nearby routers. Each router is then responsible for forwarding those multicast packets as dictated by its multicast forwarding cache.
Mobile mesh networks are also known as Mobile Ad-hoc Networks
(MANET). Mobile mesh networks, for example, are used by emergency services personnel where the communication nodes are wireless devices that are constantly moving. The Internet Engineering Task Force (IETF) has started the MANET working group to address mobile mesh network routing challenges. Four unicast routing protocols specific to mobile mesh networks, referred to as ad-hoc routing protocols, have come out of the IETF MANET working group: Topology Dissemination Based on Reverse-Path Forwarding (TBRPF), Optimized Link State Routing Protocol (OLSR), Ad hoc On-Demand Distance Vector Routing (AODV), and The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR). Three of these protocols have advanced to experimental Request For Comment
(RFC) status (RFCs 3684, 3626, and 3561, respectively). The fourth ad-hoc protocol, DSR, is expected to advance to experimental RFC status shortly. Multicast ad-hoc protocols have not yet been standardized.

Mesh Multicast Forwarding Caches
Mobile mesh networks differ from conventional Internet networks in a number of ways. The differences of most relevance to the multicast forwarding cache are mobility, lack of distinction between hosts and routers, Quality of Service
(QoS) requirements, and security requirements. A computer in a mobile mesh network, referred to as a node, may be constantly changing its position and connection to peer nodes. Unlike computers in more conventional wired computer networks, a mesh node may have continuously changing attributes such as location, IP address, and connection to peers. This breaks many of the assumptions built into conventional Internet protocols and networks. A second consequence of mesh node mobility is commonly referred to as the
"hidden node problem" as shown in FIG. 15. The hidden node problem refers to the inability of all mesh nodes to hear each other's traffic through the same wireless interface. This contrasts with the ability of wired interfaces to hear all traffic from connected neighbors. Conventional multicast forwarding caches support neither changing IP addresses nor interfaces suffering from the hidden node problem. For example, in FIG. 15, node C does not hear node A's transmissions and thus node C's scheduling of transmissions may interfere with node B's intended reception from node A. In this sense, node C is hidden from node A and vice versa.
In the conventional Internet, a computer may be viewed as a router or host depending on its relative position within the topology. Routers forward traffic on behalf of peers; hosts do not. Typical computer users rarely, if ever, operate a router. This is reflected in many design assumptions applied to the computers used most often by users, such as laptops, Personal Digital Assistants (PDAs), and personal computers. For example, a multicast forwarding cache is not typically available in end-user platforms such as the Windows XP and CE operating systems. In a mobile mesh network, however, every node is by definition a router and may also be a host. That is to say, each node must be capable of forwarding traffic on behalf of peers. This blurs the distinction between host and router for mesh nodes.
Mobile mesh network wireless interfaces face greater Quality of Service (QoS) hurdles than their wired equivalents. As a consequence, QoS continues to be important in parts of the Internet like mobile mesh networks. Conventional multicast forwarding caches, however, have little or no support for QoS. Wireless mobile mesh network traffic is also more susceptible to interception than traffic on conventional wired networks. Because of this ease of interception, mobile mesh network traffic must be more carefully guarded, even at the transport level.
For these four reasons, conventional multicast forwarding cache technology fails to meet the needs of mobile mesh network nodes. This invention addresses this and other such problems.
Summary of the Invention
A Data Distribution Service (DDS) transfers information between nodes in an ad hoc mobile mesh network. The DDS includes many different novel features, including techniques for coalescing retransmit requests to minimize traffic, providing a reasonable level of reliability for event-oriented communications, multicasting retransmissions for use by many nodes, and providing other optimizations for multicast traffic. The DDS uses UDP datagrams for communications. Communications operate in a truly peer-to-peer fashion without requiring central authority or storage, and can be purely ad hoc and not depend on any central server. The need for traditional acknowledgement packets can also be eliminated under normal operation. Such a NACK-based protocol proves to be more efficient than traditional acknowledgement-based approaches such as TCP. The DDS is amenable to very long recovery intervals, matching well with nodes on wireless networks that lose coverage for significant periods of time, and it also works well with constantly changing network topologies. Reliability can also be handled over a span of time that might correspond to losing wireless coverage.
An Enhanced Multicast Forwarding Cache (eMFC) supports multicast transmissions in mobile mesh networks. The enhanced MFC is designed to support the mesh node mobility, quality of service, and security requirements that are particular to mesh networks. To achieve these goals, the enhanced MFC draws from a global state maintained by a unicast routing protocol, multicast-aware applications, and distributed services. The eMFC distributes this derived global state through the use of an eMFC-specific multicast packet header. Information contained within the eMFC header is also used to collect and derive multicast traffic statistics at each mesh node. To maintain backwards compatibility, multicast traffic without the eMFC-specific header is also honored by the MFC.
Mobile mesh network specific interfaces, such as radio interfaces, as well as conventional interface types are supported. Security is maintained through the use of authentication and encryption techniques. The foregoing and other objects, features and advantages of the invention will become more readily apparent from the following detailed description of a preferred embodiment of the invention which proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of a mesh network.
FIG. 2 is a block diagram of a Data Distribution Service (DDS) provided in a mesh network.
FIG. 3 is a diagram of a source data packet.
FIG. 4 is a block diagram of a source node shown in FIG. 2.
FIG. 5 is a block diagram of a receiver node shown in FIG. 2.
FIG. 6 is a diagram showing different DDS messages.
FIGS. 7-11 show different communications scenarios that are provided by the DDS.
FIG. 12 shows a conventional network topology.
FIG. 13 shows unicast paths for a node in the conventional network shown in FIG. 12.
FIG. 14 shows a multicast path for multicast source A and destinations B, C, and D.
FIG. 15 shows three mesh nodes illustrating a hidden node problem.
FIG. 16 shows an enhanced MFC system architecture.
FIG. 17 shows a multicast packet that includes an enhanced MFC (eMFC) packet header.
FIG. 18 shows how the multicast packet in FIG. 17 is sent between different nodes in a mesh network.
FIG. 19 is a table that shows the different fields in the eMFC header.
FIG. 20 is a block diagram showing how the multicast packets can be overlaid with different mesh networks.
FIGS. 21-23 are diagrams showing how a mesh node forwards multicast packets according to mesh interface information.
FIGS. 24-26 are diagrams showing how duplicate multicast packets are handled in a mesh network.
FIG. 27 is a diagram showing a malicious listener within radio range of mesh nodes.
FIGS. 28-30 show how Quality of Service (QoS) operations are performed using eMFC.
FIG. 31 is a block diagram of the components in one of the mesh nodes.
DETAILED DESCRIPTION
FIG. 2 shows several nodes 22 that may operate in a mesh network 20 similar to the mesh network 12 previously shown in FIG. 1. The nodes 22 can be any type of mobile device that conducts wireless or wired peer-to-peer mesh communications, for example, personal computers with wireless modems, Personal Digital Assistants (PDAs), cell phones, etc. Other nodes 22 can be wireless routers that communicate with other nodes through wired or wireless IP networks. The mobile devices 22 can move ad-hoc into and out of the network 20 and dynamically reestablish peer-to-peer communications with the other nodes. It may be necessary that each individual node 22A, 22B and 22C have some or all of the same versions of different data items 26. The data items 26, in one example, may be certain configuration data used by the nodes 22 for communicating with other nodes. For example, the configuration data 26 may include node profile information, video settings, etc. In another example, the data 26 can include multicast information, such as multicast routing tables, that identifies nodes that are members of the same multicast groups.
Data Distribution Service
A Data Distribution Service (DDS) 24 is used to more efficiently maintain consistency between the data 26 in the different nodes 22. A source node 22A is defined as one of the nodes that may have internally updated its local data 26A and then sent out a data message 28 notifying the other nodes 22B and 22C of the data change. Receiver nodes 22B and 22C are defined as nodes that need to be updated with the changes made internally by source node 22A. The DDS 24 sends and receives source data packets 38, as shown in FIG. 3, that are used in a variety of different ways. For example, the source data packet 38 can be used by the source node 22A to multicast a status or data message 28 that notifies other nodes 22B and 22C of a change in, or the current status of, data 26A. In response to the multicast status or data message 28, the receiver nodes 22B or 22C may multicast a Negative Acknowledgement (NACK) message 32. For example, the receiver node 22C may multicast NACK message 32 when data 26C is missing updates for some of the data items in data 26A. This could happen, for example, when the mobile device 22C has temporarily been out of contact with mesh network 20 such that it did not receive a data message 28. In response to the NACK message 32, the source node 22A may multicast a repair message 30. The repair message 30 may provide information necessary to update data 26C in receiver node 22C, and possibly data 26B in receiver node 22B, with the latest changes made to data 26A. The repair message 30, in one example, may be an EXPIRED message indicating the requested data item is no longer available or a CHANGE message identifying the data items in data 26A that have been changed. The Data Distribution Service (DDS) 24 in one implementation uses symbolic keys (data names) across all nodes in the mesh network 20 to maintain consistency between data items 26.
The DDS 24 can be built within a reliable transport scheme or can be implemented as described below. The DDS 24 avoids buffering data persistently twice and avoids retransmitting previous revisions of a particular data record when only the most recent data item is required. This is a more complete design and solves the particular problems associated with replicating a "small" database across a mesh network. Conceptually, the DDS 24 works as a distributed hash table, binding keys (data names) to values (data). The nodes 22 talk to each other via the DDS protocol described below and use a single unified view of key-to-value (data name-to-data) mapping to maintain consistency of data 26 across the entire mesh network 20. Data consistency is provided by proactively replicating the database 26 in each node 22. The database 26 can include any object stored internally in the nodes 22. Changes in data 26 are tracked by the DDS 24 across multiple network nodes
22 by detecting a global revision value for neighboring nodes, noting what revision of the data the local node already possesses, asking for change lists describing the delta between two such revisions, and potentially requesting retransmission of the missing data. The DDS 24 thus maintains a set of data items 26 distributed across many nodes 22. The DDS 24 does not need to separately buffer transmissions like a reliable transport because the data 26 is already stored persistently before the communication commences.
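The key-to-value binding with per-item and global revision counters described above can be sketched as a small local store. All names here are assumptions for illustration; the actual DDS storage format is not specified at this level.

```python
class LocalDataStore:
    """Minimal sketch of a DDS-style node store: symbolic keys map to
    (value, data-revision) pairs, and every change also bumps a
    node-wide Global Revision Value (GRV). Only the most recent value
    per key is retained."""
    def __init__(self):
        self.grv = 0
        self.items = {}        # key -> (value, data_revision)
        self.grv_to_key = {}   # grv -> (key, data_revision)

    def put(self, key, value):
        _, rev = self.items.get(key, (None, 0))
        rev += 1                       # per-item data-revision
        self.grv += 1                  # node-wide global revision
        self.items[key] = (value, rev)
        self.grv_to_key[self.grv] = (key, rev)
        return self.grv

store = LocalDataStore()
store.put("profile", {"name": "node-22A"})   # GRV 1, profile revision 1
store.put("video settings", {"fps": 30})     # GRV 2
store.put("profile", {"name": "node-22A2"})  # GRV 3, profile revision 2
```

After these three changes the node's GRV is 3 even though only two keys exist, matching the text's point that the GRV counts changes regardless of which data item changed.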
Source Data Packets
FIG. 3 shows one example of a source data packet 38 as previously described in FIG. 2. The source data packet 38 includes a header 52 that is used for conducting DDS operations. Some or all of the fields in header 52 may be used for sending the messages 28, 30 or 32 described in FIG. 2. The source data packet 38 includes a nodeid field 40 that is unique to the originating node sending the message. Every source data packet 38 includes a packetid field 42 that is global among all transmissions from the originating node sending the packet. The value in packetid field 42 is a monotonically increasing number. Receiver nodes record packetids as the packets are received. The header 52 also includes a Global Revision Value (GRV) in global revision field 44 that identifies the latest revision to the data 26 in the node 22 sending the source data packet 38. The GRV defines the latest revision that has been made to the data 26 in a particular source node 22, regardless of the data item and what type of revision was made. For example, the GRV for source node 22A corresponds with the number of changes that have been made locally to data 26A (FIG. 2). So a first revision to a data item A would increment the GRV by 1 and a different revision to a different data item would cause the GRV to increment again by 1. A history field 46 indicates how many packetids are remembered for possible retransmission. The GRV value and the history count in the history field 46 are each tracked by the nodes receiving the source data packets 38. The history field 46, in combination with the GRV 44, defines a window of packetids from the node 22 identified by nodeid field 40 that are available for retransmission. Communication of this history-based window allows peers to avoid sending a NACK for data that is known to be expired. It is the responsibility of every receiver node 22 to then request retransmission of packets that it determines have not been successfully received.
An action field 48 identifies a type of message that is associated with the source data packet 38. For example, the source data packet 38 may be used for sending a STATUS, DATA, NACK, EXPIRED, CHANGE, or RETRANSMIT message, as will be described in further detail below with reference to FIG. 6.

Data Model
A payload 50 is included in the source data packet 38. The payload 50 includes a data-name (key) and an associated data-revision number. The data-name as described above is essentially a key identifying a particular data item. The data-revision number identifies a revision for the particular data item identified by the data-name. For example, a fourth revision to a "profile" data item may have the entry "profile:4" in the payload 50. A fifth revision to a "video settings" data item would be identified in payload 50 as "video settings:5". The payload 50 will also contain the actual revised data (data-value) as changed by the source node. By including some or all of the information in header 52 in the source data packet 38, no additional control traffic is required for maintaining consistency between the data 26 in the different nodes 22 (FIG. 2). Note that the synchronization granularity is at the object level - the value of a key. All properties of an object (if relevant) are part of the object and do not need to be synchronized separately. The data 26 (FIG. 2) that is stored for each data-name has its own revision number (data-revision), so that a CHANGE message can coalesce multiple changes for a given datum down to the minimum when replicating over the mesh.
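A rough picture of the header and payload fields just described, expressed as a Python dataclass. The field names and the window computation are assumptions based on the description; this is not the wire format.

```python
from dataclasses import dataclass, field

@dataclass
class SourceDataPacket:
    """Illustrative layout of the DDS source data packet described
    above; field names are assumptions, not an actual wire format."""
    nodeid: str        # unique originating node (nodeid field 40)
    packetid: int      # monotonically increasing per node (field 42)
    grv: int           # Global Revision Value of the sender (field 44)
    history: int       # packetids remembered for retransmission (field 46)
    action: str        # STATUS, DATA, NACK, EXPIRED, CHANGE, RETRANSMIT
    payload: dict = field(default_factory=dict)  # "data-name:revision" -> value

    def retransmit_window(self):
        # packetids older than grv - history are no longer repairable
        return (self.grv - self.history, self.grv)

pkt = SourceDataPacket("node-22A", packetid=57, grv=12, history=8,
                       action="DATA",
                       payload={"profile:5": {"name": "placeholder"}})
```

Given GRV 12 and a history of 8, the advertised retransmission window covers revisions 4 through 12, so peers know not to NACK anything older.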
Data Protocol
Every node 22 in the mesh network 20 (FIG. 2) emits its Global Revision Value (GRV) as part of every packet. As described above, changes to specific objects, or other data items in the node, can be tracked at a finer resolution with the second data-revision number that is associated with each individual data item. When a receiver node, for example receiver node 22B or 22C in FIG. 2, finds "holes" in the transmission window defined by [GRV-history, GRV], it sends a NACK message 32 back to the source node 22A to request a repair (FIG. 2). The actual NACK request may include multiple sequence numbers or GRV values so that NACK messages 32 are coalesced. The NACK message 32 may not be sent immediately, but may be sent after a random back-off interval. The back-off interval may be exponentially distributed. Because the repair packets are multicast, any node that requests a particular data item could cause other nodes 22 to receive the same data item. The random back-off transmission period for NACK messages allows other nodes to suppress similar NACK requests, thus reducing the number of NACKs that need to be sent.
To explain further, FIG. 4 shows a source node 22A that includes a Central Processing Unit (CPU) 58 that operates software that provides the Data Distribution
Service (DDS) 24. The CPU 58 wirelessly sends and receives DDS messages 63 via a transceiver 60 that is coupled to an antenna 61. Similarly, FIG. 5 shows a CPU 78 in one of the receiver nodes 22B or 22C that includes software for operating the DDS 24. The CPU 78 wirelessly sends and receives DDS messages via a transceiver 82 connected to an antenna 84. The source node 22A in this example contains three different data items. Data item 52 contains profile data, data item 54 contains video settings, and data item 56 contains multicast tables or other types of configuration data. Each data item includes an associated data-revision number. For example, the profile data item 52 currently has a data-revision = 5 and the video settings 54 currently have a data-revision = 4. The CPU 58 in the source node 22A also keeps track of the Global Revision Value (GRV) 62 that is associated with changes made to any of the data items 52,
54, and 56. In the example shown in FIG. 4, the CPU 58 has currently made twelve GRV changes to the data items 52, 54 and 56. GRV change 11 was made to data item 54 and GRV changes 10 and 12 were made to data item 52. Changes in the GRV 62 can also be attributed to multiple changes in the same data item. For example, GRV 10 is attributed to revision 4 for profile 52 and GRV 12 is attributed to revision 5 for profile 52. For explanation purposes, the current state of receiver node 22C is shown in FIG. 5. Receiver node 22C contains a "profile" data item 72, a "video settings" data item 74 and a "multicast table" data item 76. The profile data item 72 currently has an associated data-revision number = 3 and the video settings data item
74 currently has a data-revision number = 4. The receiver node 22C keeps track of a current Global Revision Value (GRV) 79 associated with source node 22A as GRV=9. For example, the last data update received from source node 22A had an associated GRV=9. FIG. 6 shows the different DDS messages that can be sent by the DDS 24 in the different mesh nodes.

Status Message
A STATUS message 90 is periodically sent by the source node 22A when no other packets are being sent. In one embodiment, the STATUS message 90 is sent out periodically to indicate nothing has changed. In the example shown in FIG. 6, the STATUS message 90 includes the nodeid 40 for the source node 22A and the
Global Revision Value (GRV) 44 for the source node 22A. The action field 48 identifies the packet as a STATUS message. If the receiver node 22B or 22C has the same GRV 44 for the same nodeid 40, then no further action is required. In the example shown in FIGS. 4 and 5, a STATUS message 90 sent by source node 22A includes a GRV = 12 and the corresponding GRV in the receiver node 22C is GRV = 9. This prompts the receiver node 22C to send a NACK message 94.
Data Message
A DATA message 92 is sent whenever the source node 22A changes, modifies, adds, or removes a data item. The DATA message 92 carries the actual data that needs to be updated or added in all of the receiver nodes 22B and 22C.
When data 26A (FIG. 2) is changed, the DATA message 92 is multicast out to the other nodes in the mesh network 20 and contains the data-name and its data-revision number. In this example, the DATA message 92 identifies data-name=profile and data-revision=5 and includes the actual updated version 5 profile data. The DATA message 92 also includes the latest global revision value GRV=12 for the source node 22A. After receiving the DATA message 92, or the STATUS message 90, the receiver node 22C compares the GRV=12 in the DATA or STATUS message with the most recently received GRV for that nodeid. Normally the GRV received from the source node 22A should be incremented by 1, corresponding to this packet's data change, in which case the locally stored most recent revision for the source node is updated with the data in DATA message 92.
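The receiver-side GRV comparison described above can be sketched as a small helper. The function name and return convention are hypothetical; the document specifies only the comparison logic, not an API.

```python
def process_announcement(last_seen_grv, packet_grv):
    """Sketch of the receiver-side GRV check. Returns ("apply", [])
    when the packet carries the expected next revision, ("nack",
    missing GRVs) when intermediate revisions were missed, and
    ("ignore", []) for duplicate or stale packets."""
    if packet_grv <= last_seen_grv:
        return ("ignore", [])
    if packet_grv == last_seen_grv + 1:
        return ("apply", [])
    # a hole: every GRV from last seen + 1 up to this packet is missing
    return ("nack", list(range(last_seen_grv + 1, packet_grv + 1)))
```

With the figures' example state (receiver holds GRV 9 and hears a STATUS with GRV 12), this yields a NACK for GRVs 10 through 12, matching the exchange described below.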
Negative Acknowledgements (NACK)
Every receiver node keeps track of received packets, adding the packetid and its received timestamp to its list of received packets, which is maintained on a per-source basis. For example, the source data packets 38 (FIG. 3) are indexed by the source nodeid. The receiving nodes also store the global revision value and history values for the other nodes and update them for every received packet if the GRV has changed. The packetids outside the window are removed from the list, and the resulting list is scanned for missing packetids. A NACK message 94 is sent by the receiver node when missing data is detected. An exponentially distributed random time interval can be calculated and used before sending out the NACK message 94. This prevents NACK implosion, where multiple receiver nodes try to send NACK messages 94 for the same requested packet at the same time. The receiver node 22C schedules itself to wake up after the random time interval to check the GRV again and to possibly send a NACK 94 corresponding to the missing source data packet 38. As CHANGE messages 98 or EXPIRED messages 96 are received from source node 22A responsive to the NACKs 94, all pending NACKs are updated and received packets are removed from the list. The NACK messages 94 received from other nodes that request the same information are also removed from the list. The packetids that fall outside the transmission window also get removed. If a NACK list becomes empty while waiting, it gets completely removed. When a NACK message 94 gets sent, it is added to a repair-requested list with the timestamp of when it was sent. Similarly, subsequent NACKs sent by peers are also added to this list. In the absence of any activity, the receiver node 22C still wakes up every so often to scan the lists of all nodes, and restarts the NACK process for peers that are missing packets but have not had any other activity. This is retried until the lifetime of the packets has expired.
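The back-off and suppression bookkeeping described in this section can be sketched as follows. The class, its parameters (such as the mean delay), and the in-memory lists are illustrative assumptions; timestamp expiry against the transmission window is omitted for brevity.

```python
import random

class NackScheduler:
    """Sketch of NACK back-off and suppression (parameter values are
    guesses). Missing GRVs are batched; an exponentially distributed
    random delay spreads peers out, and a NACK overheard from a peer
    cancels matching pending entries."""
    def __init__(self, mean_delay=0.5, rng=None):
        self.mean_delay = mean_delay
        self.rng = rng or random.Random()
        self.pending = {}  # grv -> scheduled send time

    def schedule(self, now, missing_grvs):
        # one exponentially distributed delay for the whole batch
        delay = self.rng.expovariate(1.0 / self.mean_delay)
        for grv in missing_grvs:
            self.pending.setdefault(grv, now + delay)
        return delay

    def overheard_nack(self, peer_grvs):
        # a peer asked for the same data; the multicast repair will
        # reach us too, so suppress our duplicate requests
        for grv in peer_grvs:
            self.pending.pop(grv, None)

    def due(self, now):
        ready = sorted(g for g, t in self.pending.items() if t <= now)
        for g in ready:
            del self.pending[g]
        return ready  # coalesced into a single NACK message
```

If a peer's NACK covering GRVs 10 and 11 is overheard before the local timer fires, only GRV 12 remains to be requested, which is exactly the suppression behavior described above.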
The period of retrying could be adapted if no traffic at all is detected for a node. This could also better handle the case of a node completely disappearing from the mesh network 20. If the receiving node 22C misses the DATA message 92, the loss will not be noticed until the next DATA, STATUS, EXPIRED, or CHANGE message is received from the source node 22A. All of these messages also include the global revision value (GRV) for the source node 22A. At that point, the receiving node 22C notes that the GRV 62 for the sender node 22A is greater than what it last saw (GRV=9), and multicasts the NACK message 94 identifying the GRV number, or numbers, it is missing. For example, in FIG. 4, the CPU 58 may send out a status message 63 with a
GRV = 12 and an associated source nodeid for source node 22A. The receiver node
22C has a current GRV for that source nodeid of GRV=9. Accordingly, the receiver node 22C multicasts a NACK message 94 as shown in FIG. 6 that identifies missing GRVs 10-12.
CHANGE, RETRANSMIT and EXPIRED Messages
The source node 22A does not send the actual data in response to the NACK message 94. Instead, the source node 22A sends a changelist, enumerating the keys (data-names) and their specific data-revision numbers for the GRVs identified in the NACK message 94. The resulting CHANGE message 98 (FIG. 6) enumerates all the global revision values that are being addressed, and a list of keys/values/revisions 99 that are associated with those global revision values. For example, in FIG. 4, the source node 22A may send back a CHANGE message 98 that includes data-name=video settings:4 for GRV = 11 and data-name=profile:5 for GRV = 12. The data itself for both the video settings and the profile may not be sent in the CHANGE message 98. In another example, if a data item changes multiple times before another node notices, the single most current data can be sent out to satisfy older repair requests. This is accomplished with a CHANGE message 98, which identifies which older global revisions should be updated with a single new version of the data. In one implementation, older values are not kept and only the most current version of the data item and the corresponding revision number are maintained. For example, in FIG. 4, the source node 22A might not send back the data-revision associated with GRV=10, since GRV=10 is associated with a previous older version of the profile 52 (data-name=profile, data-revision=4) that is no longer valid. The receiver node 22C notes all the old global revisions that are being handled, looks at the keys/values/revisions to determine which keys it needs to request for retransmission, and sends a RETRANSMIT message 100 (FIG. 6) back to the source node 22A. For example, the "video settings" data item in the receiver node 22C in FIG. 5 has a data-revision value=4 for the nodeid associated with source node 22A. This is the same data-revision value indicated in CHANGE message 98 in FIG. 6.
Therefore, the receiver node 22C does not send a retransmit request for the video settings. The profile data item 72 in the receiver node 22C in FIG. 5 has a data-revision value = 3. However, the profile in the CHANGE message 98 in FIG. 6 has a data-revision value = 5. Thus, the receiver node 22C determines that the current profile data item 72 is out of date. Accordingly, the CPU 78 in FIG. 5 sends a RETRANSMIT message 100 as shown in FIG. 6 that requests the source node 22A to send the profile data associated with GRV 12. The source node 22A responds to the RETRANSMIT message 100 by producing another DATA message 92 as shown in FIG. 6 that contains the profile data item 52 in FIG. 4. Alternatively, if the profile data item 52 in FIG. 4 is no longer available, the source node 22A may send an EXPIRED message 96. This handles race conditions where the data expires while this exchange is happening. The EXPIRED indication can alternatively be part of the CHANGE message 98. If any of the intervening messages are lost, the whole process starts over. When the source node 22A receives the NACK message 94 (FIG. 6), it first checks that the faulty packetid is within the transmission window. If not, it sends the
EXPIRED message 96 indicating that peers should stop asking for it. If within the transmission window, the original data item is fetched and resent. The source data items remain in a sent-packet list until their original lifetimes have expired, independent of the number of retransmissions. The entry is updated to indicate the time of the most recent transmission. If a NACK message 94 is received within a small time after a data packet is sent out by the source node 22A, it may be ignored with the assumption that the just-sent data packet will satisfy the NACK 94. In the case of a race condition that fails in favor of dropping the NACK 94, the receiver node 22C will request the data item again. Note that any node in the mesh network 20 (FIG. 2) that has the required repair data and mappings of global revisions to particular data revisions can respond to the NACK message 94. Thus, in the above discussion, the source node 22A can be any node that has the requested data. A random back-off delay is utilized to prevent every node from responding simultaneously. This optional choice implies that the node providing the repairs stores global-revision-to-data-revision mappings for all other nodes. The protocol has another advantage in that the DDS message exchange prevents data from being sent multiple times, particularly in the case where a single key is getting repeatedly updated. In this case only the most current value will get transmitted, whereas with a reliable multicast transport, every change is sent, just to overwrite the previous value.
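The repair exchange described above (coalescing NACKed GRVs into a CHANGE list on the source, then deciding on the receiver which keys actually need retransmission) can be sketched as two small functions. Names and data shapes are assumptions; CHANGE lists here carry only revisions, matching the note that the data itself may not be sent.

```python
def build_change_list(nacked_grvs, grv_to_key, current_rev):
    """Source side: map each NACKed GRV to its key, keeping only the
    single most current revision per key; older revisions of the same
    key are coalesced away."""
    change = {}
    for grv in nacked_grvs:
        if grv not in grv_to_key:
            continue  # would be answered separately with an EXPIRED message
        key, _ = grv_to_key[grv]
        change[key] = current_rev[key]  # coalesce to the newest revision
    return change

def retransmit_requests(change_list, local_rev):
    """Receiver side: request only keys whose advertised revision is
    newer than what is held locally."""
    return sorted(k for k, rev in change_list.items()
                  if rev > local_rev.get(k, 0))

# Illustrative state mirroring FIGS. 4 and 5:
grv_to_key = {10: ("profile", 4), 11: ("video settings", 4),
              12: ("profile", 5)}
current_rev = {"profile": 5, "video settings": 4}
change = build_change_list([10, 11, 12], grv_to_key, current_rev)
wanted = retransmit_requests(change, {"profile": 3, "video settings": 4})
```

With the figures' state, the CHANGE list advertises profile revision 5 and video settings revision 4, and the receiver (holding profile revision 3) requests retransmission of the profile only, as in the text above.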
Scenarios
FIGS. 7-11 explain some of the different DDS delivery scenarios that can occur during data consistency operations.

No errors, sequential delivery
FIG. 7 shows one of the simplest cases, where the source data packets 38 (FIG. 3) are sent from source node 22A and successfully received sequentially, in their original sending order, by the receiver node 22B. Each packet N has a global revision value that indicates that the packet being received is the most current, so no additional action need be taken.
No errors, out of order delivery In the next case shown in FIG. 8, the source data packets are sent by the source node 22A and successfully received by the receiver node 22B, but their original order is not maintained. This causes a NACK message 94 (FIG. 6) to be sent after a random delay T, unless the "skipped" packet N+1 is received before that delay time T expires. FIG. 8 shows the case where the packet N+1 is received by the receiver node 22B before the expiration of random delay time T. In this case the NACK message 94 is suppressed.
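The receiver-side rule of FIG. 8 can be sketched as a small scheduler. This is a hedged illustration, not the actual implementation: the class name, the 0.5 second maximum delay, and the abstract clock passed as `now` are all assumptions made for the example.

```python
import random

class NackScheduler:
    """Schedules a NACK for a skipped packet after a random delay T,
    and suppresses it if the missing packet arrives before T expires."""
    def __init__(self, max_delay=0.5):
        self.max_delay = max_delay
        self.pending = {}  # sequence number -> scheduled fire time

    def on_gap(self, seq, now):
        """Packet `seq` was skipped: schedule a NACK at now + random T."""
        self.pending[seq] = now + random.uniform(0.0, self.max_delay)

    def on_receive(self, seq):
        """The missing packet arrived in time: suppress its pending NACK."""
        self.pending.pop(seq, None)

    def due(self, now):
        """Sequence numbers whose delay expired and still need a NACK."""
        return sorted(s for s, t in self.pending.items() if t <= now)
```

The same structure also supports the suppression case of FIG. 11: overhearing another node's multicast NACK for the same sequence numbers would simply call `on_receive`-style cancellation for those entries.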
Simple Repair FIG. 9 shows the case when a packet sequence number skips by one and the missing source data packet N+1 has still not been received by the time the NACK message 94 is scheduled to go out. In this case, a NACK message 94 is pending and packet N+1 has not been received within time T. Accordingly, the receiver node 22B sends NACK message 94 to the source node 22A that identifies the source data packet N+1 as described above. The source node 22A through the DDS protocol (the CHANGE and RETRANSMIT messages are not shown) then resends the source data packet N+1.
Coalesced Repair FIG. 10 shows the situation when more than one source data packet is detected as missing. The set of packet numbers, data items, or GRVs can be encapsulated into a single NACK message 94. At the time the NACK message 94 is ready to be sent (i.e. the soonest random backoff interval) all pending NACKs for packets N+1 and N+2 are included in NACK message 94. Again, the protocol between the NACK and the retransmission of the data is not shown.
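The coalescing step of FIG. 10 amounts to folding every pending gap into one message when the soonest back-off timer fires. The sketch below is illustrative only; the message field names are hypothetical and do not reflect the actual NACK wire format.

```python
def coalesce_nack(sender_id, pending_ids):
    """Build one NACK message covering all currently missing packet ids
    (or data items / GRVs), rather than one message per gap."""
    return {
        "type": "NACK",
        "from": sender_id,
        "missing": sorted(set(pending_ids)),  # deduplicated, e.g. [N+1, N+2]
    }
```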
NACK suppression FIG. 11 shows the situation where multiple receiver nodes 22B and 22C detect missing packets. Each receiver node 22B and 22C could potentially send a NACK message 94. However, each receiver node 22B and 22C may use a random delay before sending its NACK message 94. This will likely cause one of the receiver nodes 22B or 22C to send a NACK message before the other receiver node. For example, receiver node 22C is scheduled to send the NACK message 94 at random time interval T and receiver node 22B is scheduled to send the same NACK message 94 at random time interval T+1 after receiver node 22C. Because the NACK messages 94 are multicast, other nodes will see the first NACK message 94 sent by receiver node 22C. This causes receiver node 22B to suppress sending the same NACK message as long as the NACK message 94 received from receiver node 22C contains the packet ids missing in receiver node 22B. The DDS protocol between the NACKs and the retransmission of the DATA is again not shown.

Referring to FIG. 16, an Enhanced MFC (eMFC) system architecture 212 is a distributed multicast routing mechanism and consists of a multicast forwarding cache 224 and a multicast table computation 222. These two components derive information from global and local states available on a mesh node to properly route multicast traffic. All of the nodes running the enhanced MFC 212 create an overlay network over both mobile mesh networks and conventional Internet Protocol (IP) based networks. FIG. 16 shows a node 270 that operates the enhanced MFC 212 in a mesh network. Multicast aware applications 216 use socket application program interface (API) calls 217 to open a multicast socket 220, declare themselves as multicast sources, set the multicast data type (e.g., video, voice, bulk data, and so forth), send multicast data 242, receive multicast data 242, and close the socket 220.
These socket calls 217 rely on the underlying multicast forwarding cache 224 to select the zero or more network interfaces 226 for forwarding multicast traffic 242. The multicast forwarding cache 224 is maintained by a multicast table computation component 222. The multicast table computation component 222 fills in the multicast forwarding cache 224 with entries for each known multicast source and group. The multicast table computation component 222 derives these multicast senders and groups from global state information available within the mobile mesh network. A public example of such a global state distribution protocol is the Multicast Session Directory sdr modeled on work done by Van Jacobson at Lawrence Berkeley National Laboratory (LBNL). The multicast table computation component 222 derives a network topology from the underlying unicast routing protocol 218. Ideally, protocol 218 is a proactive, ad-hoc, link-state based protocol, although it need not be. Internet multicast protocols Distance Vector Multicast Routing Protocol (DVMRP) and Multicast Extensions to OSPF (Open Shortest Path First), for example, derive their topology information from distance vector and link-state protocols respectively. The conventional elements in block 214 are well known to those skilled in the art and are therefore not described in further detail. Enhanced multicast operations are shown in block 213. Multicast membership information 228 and legacy multicast support 230 are provided to the multicast table computation 222. The multicast membership information 228 in one example is global state information that all mesh nodes contain that is distributed using a Data Distribution Service (DDS) as described above. Legacy multicast support 230 relates to existing multicast support in either the MFC 224 or in the multicast packet. If a node does not include an eMFC header, then the packet can revert back to using legacy multicast support 230 from a conventional multicast packet.
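The division of labor described above can be sketched with a minimal data structure: the table computation component installs one entry per known (source, group) pair, and socket sends consult the cache for the zero or more outgoing interfaces. The layout is an assumption for illustration, not the actual cache format.

```python
class MulticastForwardingCache:
    """Sketch of the multicast forwarding cache (224): entries keyed by
    (source, group), each mapping to a set of outgoing interfaces."""
    def __init__(self):
        self.entries = {}  # (source, group) -> set of interface names

    def install(self, source, group, interfaces):
        """Called by the table computation component (222) as topology
        and membership state change."""
        self.entries[(source, group)] = set(interfaces)

    def lookup(self, source, group):
        """Called on send/forward: returns zero or more interfaces (226)."""
        return self.entries.get((source, group), set())
```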
Mesh interface support 234 relates to specific mesh node information. For example, a node may determine that a particular interface is a mesh interface and accordingly provide any necessary routing decision to account for the mesh network. The enhanced MFC 212 is provided through a distributed member state 232 that is relayed to the nodes in the mesh network through an enhanced MFC header 250 that is shown in more detail in FIG. 17. Three eMFC operations of particular interest include duplicate packet detection 236, security feature support 238 and QoS enhancements 240.
Distributed Multicast State Referring to FIGS. 17 and 18, the enhanced MFC 212 is a distributed multicast routing mechanism that maintains global state using proprietary packet header 251 pre-pended on multicast packets 250. This distributed state is necessary for proper support of features such as Quality of Service and link quality measures. As each multicast packet 250 flows through the enhanced MFC 212 (FIG. 16) operating on a mesh node, it is marked by pre-pending the eMFC specific header 251. This header 251 contains fields necessary to distribute eMFC state to peer mesh nodes 270 and support features such as Quality of Service (QoS). As the multicast packet 250 flows across a mobile mesh network 269 (FIG. 18), this same eMFC header 251 is seen by each enhanced MFC 212 along the path to the final multicast destinations. This is because each mesh node 270 consults the eMFC 212 before forwarding the multicast packet 250. For example, in FIG. 18, a first mesh node 270A may be located in a vehicle, a second mesh node 270B may operate in a Personal Digital Assistant (PDA), and a third mesh node 270C may operate in a wireless laptop computer. The multicast packet 250 is sent by node 270A and is prepended with the eMFC header 251. The eMFC 212 in mesh node 270B routes the multicast packet 250 to mesh node 270C according to the information in the eMFC header 251. Mesh node 270C receives and possibly continues to route the multicast packet 250 according to the information in eMFC header 251. As the multicast packet 250 flows through the mesh network 269, moving from one mobile mesh network node 270 to another, the enhanced MFC packet header 251 serves to distribute state for this multicast stream to all mesh nodes along its path. The MFC packet header 251 allows the mesh nodes to conduct more effective multicast related operations such as duplicate packet detection 236, security feature support 238, and QoS enhancements 240 (FIG. 
16) that are not currently supported in conventional mesh networks. The individual fields of the eMFC header 251 are described in further detail in FIG. 19. A version number 252 is used for backwards compatibility with other multicast versions. Mesh nodes sending multicast packets 250 are identified with a Router Identifier (Router ID) 254 to eliminate dependence on IP addresses that may or may not change over time as the node 270 moves and turns interfaces on and off. The router ID 254 remains constant throughout the lifetime of the mobile mesh network, is associated with a particular mesh node, and is not tied to any IP source address. Thus the router identifier 254 can remain the same for a mesh node 270 even when the node moves to another location in the same or a different mesh network. The header 251 includes a sequence number 256 that identifies the multicast packet number in the multicast stream sent by a particular router ID 254. A destination address 257 and destination port 258 identify the multicast address for a particular multicast group such as shown in tables 2 and 3 above. The traffic category 260 is used for QoS operations in the mesh nodes 270. In addition to distributing state, the eMFC header 251 can also be used in the nodes to derive multicast traffic statistics. These statistics can be used for quality of service features described below. An optional encryption value 262 shown in FIG. 17 can be used for identifying a type of encryption scheme used with the multicast packet 250. In one implementation, the eMFC header 251 is located after the IP header and before the data payload 264. FIG. 20 shows how nodes within the mobile mesh network are either directly connected to other nodes in the same mesh or with other mobile mesh networks via an overlay network. For example, two meshes named mesh 1 and mesh 2 communicate between themselves via a rendezvous 280. 
The rendezvous is a publicly known, pre-established server that connects to meshes 1 and 2 via a tunnel 281. The rendezvous server 280 itself contains an enhanced MFC 212 and appears as a mesh node peer to the nodes in meshes 1 and 2. The nodes on mesh 1 and mesh 2 can communicate with each other using eMFC 212 or can communicate with other nodes in Internet 282 via conventional multicast protocols. Thus two nodes on disparate mesh networks or on different mesh and Internet networks can exchange the eMFC information contained in the eMFC header 251 (FIG. 17). Mesh Interface Support
Enhanced MFC 212 supports multicast on both conventional Internet network interfaces and mesh-specific network interfaces. Specifically, the eMFC 212 supports interfaces that suffer from the hidden node problem by repeating multicast traffic on those mesh node interfaces that face multicast listeners that may not normally hear multicast traffic. Referring to FIGS. 21 and 22, a multicast packet 250 sent from a conventional interface may be expected to reach all peers connected to that interface. Ethernet interfaces on a hub or switch are examples of conventional interfaces. Even if this assumption is not true, for example in the case of multicast transmissions passing through some switches, the underlying system components, in this case the switch, have been designed to compensate. In the case of mesh interfaces, however, no such compensation exists. Instead mesh nodes must repeat multicast traffic on some mesh interfaces for the benefit of downstream nodes that cannot hear the original multicast transmission.
For example, in FIG. 22, mesh node A may send out a multicast packet 250 that is destined for node C. However, node C may not be in range to receive packet 250 directly from node A. In this situation, node B has to operate as a router to relay multicast packet 250 from node A to node C. However, blindly repeating multicast packet 250 to every node within range can create broadcast storms where all nodes are broadcasting the same multicast packets. To eliminate this and other problems, the mesh nodes take into account mesh interface information when making decisions regarding forwarding multicast packets. For example, in FIG. 21, node B (FIG. 22) may receive a multicast packet in block 300. Node B determines that the packet 250 has an enhanced MFC header 251 in block 302. Node B in decision block 304 determines whether or not the packet must be repeated on the received mesh interface. If not, then any conventional multicast routing is performed in block 306. However, if the packet must be repeated on the received mesh interface in decision block 304, then the node determines if it has any downstream receivers associated with the mesh interface in decision block 308. If not, then the multicast packet need not be repeated and normal multicast operations are conducted in block 306. If node B does have downstream receivers in decision block 308, the multicast packet is repeated to the identified downstream nodes in block 310 on the received mesh interface, thus forwarding traffic onwards to downstream nodes that can hear the received mesh interface but not the original multicast packet (FIG. 22). Downstream nodes may or may not be members of the multicast group associated with the multicast address in the eMFC header 251 (FIG. 17). For example in FIG. 22, the multicast packet 250 is sent to mesh node B from node A. 
Even though node C may not be identified in the multicast group for multicast packet 250, node B may still forward the packet to node C since node C is a downstream receiver for node B. This allows another mesh node downstream from node C, that is a member of the multicast group, to successfully receive multicast packet 250 from node C. In this example, node D is not a designated downstream mesh node for node C. Thus, node C will not transmit multicast packet 250 to mesh node D. This prevents the broadcast storms that normally occur when multicast packets are sent over a mesh network. FIG. 23 shows in more detail how node B routes multicast packets 250. Node B receives the multicast packet in block 320. Node B identifies the members of the multicast group in block 322 according to the router ID 254, the destination address 257 and destination port 258 (FIG. 17) in the eMFC header 251 and the distributed multicast routing table. The source of the multicast packet is identified via the router identifier 254 in the eMFC header 251. In block 326 node B (FIG. 22) identifies any nodes for forwarding the multicast packet 250 according to local routing tables. In other words, the multicast routing table in block 326 may require node B to forward the multicast packet from the identified source node to one or more of the identified multicast group members. Accordingly, node B forwards the multicast packet 250 to the identified nodes in block 328, if they pass the mesh interface criteria described in FIG. 21. Conventional routing protocols notify nodes of their associated downstream nodes. This for example, is performed by the multicast membership information 228 in FIG. 16. The distributed eMFC headers 251 then identify the particular multicast group associated with the multicast packet 250.
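The FIG. 21 decision path can be sketched as a small routing function. This is an illustrative sketch under assumed data structures (dictionaries for the packet, interface, and downstream-receiver table); it is not the patented implementation.

```python
def route_mesh_multicast(packet, iface, downstream):
    """Decide how to handle a multicast packet received on a mesh interface.

    packet:     dict; 'emfc_header' is truthy when an eMFC header (251) is present
    iface:      dict; 'repeat_required' marks interfaces needing hidden-node repeat
    downstream: dict mapping interface name -> list of downstream receivers
    """
    if not packet.get("emfc_header"):
        return "LEGACY"    # no eMFC header: fall back to legacy multicast support
    if iface.get("repeat_required") and downstream.get(iface["name"]):
        return "REPEAT"    # repeat on the received mesh interface (block 310)
    return "NORMAL"        # conventional multicast routing (block 306)
```

For instance, node B holding a downstream entry for node C on `mesh0` would return `"REPEAT"`, while the same packet with no downstream receivers falls through to normal routing, avoiding blind rebroadcast.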
Duplicate Packet Detection Nodes in the mesh network may receive duplicate multicast packets. A mesh node may receive multiple copies of the same multicast traffic for a variety of reasons including mobility or interface changes. For example, in FIG. 24 a node A may send out a multicast packet 250 to node B. Node B may then broadcast the same multicast packet 250 to node C. However, that same broadcast of multicast packet 250 may also be received back at node A. The duplicate multicast packet 250 can cause node A to repeat processing on the same multicast packet. Thus, duplicate packet detection is particularly important in the mobile, wireless environment of a mobile mesh network. The duplicate packets 250 are identified by the eMFC 212 in node A and silently dropped in operation 346 before reaching the application that processes the packet. FIG. 25 shows the basic logic performed at the eMFC 212 to detect and drop duplicate packets. In block 340, the node receives a multicast packet. In block 342 the enhanced MFC 212 in the node reads the information in the eMFC header 251 (FIG. 17). If the eMFC information 251 indicates a received multicast packet is a duplicate of a packet previously received by the same node, the packet is dropped in block 346. If not, the packet is forwarded in block 348. Duplicate multicast packets are detected using a combination of the router ID
254, sequence number 256, destination address 257 and destination port 258 in the eMFC header 251 (FIG. 17). This provides more exact determination of duplicate packet reception. FIG. 26 explains in more detail. In block 350 a mesh node receives a multicast packet. The eMFC 212 checks the router ID 254 in the packet header 251
(FIG. 17). If packets with the same router ID have never been processed, the node forwards the multicast packet in a normal manner in block 360. If the node has received other packets with the same router ID in block 352, then the destination address 257, destination port 258, and packet sequence number 256 values are checked in block 354. If these values are different than other recently received packets, the packet is forwarded in block 360. If the router ID, destination address, destination port, and sequence number are the same as those of another recently received packet, the packet is determined to be a duplicate and discarded in block 358. The enhanced MFC 212 tags each multicast packet at the source node with a monotonically increasing sequence number 256. The sequence number 256 is accordingly used at each hop in the path from source to receivers to weed out and drop duplicate multicast packets. Note that multicast packets may arrive out of order, so the eMFC 212 checks for reception of multicast sequence numbers rather than simply keeping a maximum sequence number for each multicast stream. Likewise sequence numbers may "roll-over". A sequence number rolls-over when the maximum sequence number has been assigned and the next packet is marked with the lowest sequence number. The eMFC 212 also compensates for sequence number roll-over.
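The duplicate check above can be sketched as follows. This is a hedged illustration, not the actual eMFC code: the flow is keyed on (router ID, destination address, destination port), recently seen sequence numbers are remembered as a set so out-of-order arrivals are not misclassified, and roll-over is handled with modular arithmetic. The 16-bit sequence space and the window size are assumptions introduced for the example.

```python
SEQ_SPACE = 1 << 16  # assumed 16-bit sequence number space

class DuplicateDetector:
    def __init__(self, window=1024):
        self.window = window
        self.seen = {}  # (router_id, dest_addr, dest_port) -> set of recent seqs

    def is_duplicate(self, router_id, dest_addr, dest_port, seq):
        key = (router_id, dest_addr, dest_port)
        recent = self.seen.setdefault(key, set())
        if seq in recent:
            return True   # duplicate: caller drops it silently
        recent.add(seq)
        # Forget sequence numbers more than `window` away in either
        # direction, computed modulo SEQ_SPACE to survive roll-over.
        recent -= {s for s in recent
                   if (seq - s) % SEQ_SPACE > self.window
                   and (s - seq) % SEQ_SPACE > self.window}
        return False
```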
Security Feature Support
In FIG. 27, multicast traffic between nodes running the enhanced MFC 212 is secured by supporting security features such as authenticating adjacent neighbors and encrypting multicast traffic hop-by-hop. Security is particularly important in a frequently changing, mobile wireless network, such as a mobile mesh network. Each mobile node A-D using the eMFC 212 may take advantage of security features available in the system. For example, each mobile mesh node A-D authenticates itself to directly connected neighbors. After authenticating each other and exchanging certificates, mobile node peers then encrypt multicast traffic on a hop-by-hop basis. Thus multicast traffic destined for a mobile node peer that mistakenly arrives at a listener within radio range 366 does not arrive in the clear. A malicious listener 364 must first break the encryption of the multicast packet as sent by the previous hop. This encryption is carried out across tunnels established between mesh nodes A-D and the rendezvous 280 (FIG. 20) as well. In addition, an encryption identifier 262 may optionally be contained in the eMFC header 251 to identify a particular type of encryption scheme used by the source of the multicast packet 250.
QoS Enhancements Enforcement of Quality of Service (QoS) is particularly important in a wireless environment with limited bandwidth and potential radio interference such as in mobile mesh networks. The enhanced MFC 212 supports quality of service through traffic measurement and enforcement measures such as packet prioritization, admission control, and traffic shaping. Applications aware of the eMFC 212 support these QoS features by marking application packets into well-known categories. Legacy application packets are marked as "best effort" by default. To explain further, FIG. 28 shows multiple mesh nodes 270 that each may transmit and receive multicast packets 250. One or more of the mesh nodes may make QoS decisions regarding received packets. For example, a node 270A may be located in a vehicle that sends multicast packets 250 to a PDA node 270B. At the same time, PC mesh nodes 270C and 270D may also send multicast packets 250 to the PDA node 270B. Unfortunately, the PDA node 270B may not have the capacity to process and forward all of the multicast packets received from nodes 270A, 270C and 270D. In this case, some of the packets 250 may have to be dropped in QoS operation 370. Alternatively, the PDA node 270B may be able to process some or all of the received packets 250, but must prioritize their processing order. Multicast packets handled by the eMFC 212 are prioritized according to their traffic category 260 (FIG. 17). Sample traffic categories are shown in the priority table in FIG. 30. If an eMFC transmission queue becomes too full, packets are dropped using drop priorities specified by the traffic categories 260. For example, all video packets that make up a video frame may be dropped at once rather than simply dropping individual video packets. The eMFC 212 can also mark multicast packets with the appropriate differentiated services field codepoints (DSCP) bits as defined by the IETF. 
This permits further prioritization below the eMFC 212 by interfaces that support traffic prioritization such as 802.11i interfaces. FIG. 29 describes in more detail how the enhanced MFC 212 in the nodes 270 in FIG. 28 is used for conducting QoS services. In block 380, the nodes 270 are configured with a priority table for different traffic categories. One example of a priority table is shown in FIG. 30 and may be distributed to the different nodes 270 using the DDS system described above. As the enhanced MFC 212 sends and receives multicast traffic from peers in block 382, it also measures link quality hop-by-hop with respect to multicast traffic. It does so by tracking the number of multicast packets sent and received successfully for each directly connected peer mesh node running the eMFC 212. These measurements are taken in block 384 according to different combinations of the router ID 254, destination address 257, destination port 258, sequence number 256, and traffic category 260 in the eMFC header 251 (FIG. 17). Link costs, as computed by the multicast table computation component 222 (FIG. 16), are a combination of link capacity, link quality, and the node's willingness to serve as a router averaged over time. The final metric is a combination of these factors as well as platform characteristics such as CPU speed, total memory, and battery capacity. As link quality changes, link costs reflect the changes and the multicast table computation component prefers those links with better metrics when computing multicast forwarding cache entries. Given individual link metrics, traffic category distributions, and maximum link capacity derived in block 384, the eMFC 212 can impose multicast rate limits if desired. Policy set by network administration on traffic limits for multicast packets will be enforced by the eMFC 212. 
For example, a service level agreement (SLA) concerning the amount of video traffic permissible in the mobile mesh network can be enforced to limit the video traffic allowed at each hop during multicast transmission. Video sources that exceed this limit would not be allowed past the first eMFC 212, sparing the mobile mesh network from excessive traffic. For example, the eMFC 212 in block 386 identifies video traffic via the traffic category 260 in FIG. 17. The eMFC 212 identifies the source of the video traffic and the amount of video traffic received from that source in block 384 according to the router ID 254 and corresponding sequence number 256. The eMFC 212 then prioritizes the processing of the video traffic in block 388 according to the priority table shown in FIG. 30. As shown in the priority table of FIG. 30, highest priority may be given to different types of low bandwidth control traffic. The larger data traffic, such as video data, may be given a lower priority. The eMFC 212 may then either drop or delay the processing of some or all of the video traffic according to the amount of received traffic. In another implementation, the multicast groups identified by the destination address 257 and the destination port 258 may have different priority levels. This allows particular users, such as supervisors or emergency personnel, to send messages at a higher priority than other users. Thus, the combination of the router ID 254, destination address 257, destination port 258 and traffic category 260 is used to assign particular groups of users different priority levels. The eMFC 212 can enforce multicast session characteristics such as the number of multicast sessions, throughput per session, or multicast participants per session. 
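The category-based queueing described above can be sketched as follows. The category names and numeric priorities here are hypothetical examples standing in for the FIG. 30 priority table, with low-bandwidth control traffic ranked highest, bulk video lower, and unmarked legacy traffic defaulting to best effort.

```python
# Assumed priority table (lower number = higher priority); the real
# table is distributed to the nodes via DDS, as described above.
PRIORITY = {"control": 0, "voice": 1, "video": 2, "bulk": 3, "best-effort": 4}

def enqueue(queue, packet, capacity):
    """Insert a packet by traffic category; when the transmission queue
    overflows, drop the lowest-priority packets and return them."""
    queue.append(packet)
    queue.sort(key=lambda p: PRIORITY.get(p["category"], PRIORITY["best-effort"]))
    dropped = queue[capacity:]   # everything past capacity is lowest priority
    del queue[capacity:]
    return dropped
```

A fuller sketch would also drop related packets together (e.g., every packet of a video frame at once) rather than individually, as the text notes.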
In block 390 the eMFC 212 can then track the statistics for particular types of data such as packets received from a particular source (router ID), destination address and/or port, or packets having a particular traffic category. The statistics can identify the number of packets received for the particular type of traffic and the percentage of that type of traffic that was successfully processed, dropped, etc. FIG. 31 shows the components inside a mesh node 270 used to provide the eMFC 212 operations. A Central Processing Unit (CPU) 402 accesses software that provides the eMFC operations 212. The CPU 402 sends and receives multicast packets via a transceiver 404 and antenna 406. A memory 402 may include the multicast routing tables and priority tables described above.
Legacy Multicast Support The enhanced MFC 212 supports multicast traffic generated both with and without eMFC headers 251. Thus the eMFC supports both legacy multicast applications and those written using eMFC features. Not all multicast applications will take advantage of the features of the eMFC 212. Consequently, support for legacy multicast applications is built into the eMFC. Using this legacy source and receiver information, the eMFC 212 sets the multicast forwarding cache 224
(FIG. 16) and forwards multicast packets from multicast source applications according to the eMFC 212. Legacy multicast packets received without the eMFC headers 251 are passed directly to the applications registered for that multicast group. Legacy multicast applications running on mesh nodes hosting an eMFC 212 use standard multicast socket API calls 217 (FIG. 16). These calls are intercepted, noted, and passed along by the eMFC 212. Legacy multicast sources running on nodes in the mobile mesh network that do not host the eMFC 212 are detected by neighbor nodes running the eMFC 212. An example of such a multicast source would be a camera within the mesh sending video multicast traffic. Multicast receivers running on nodes in the mobile mesh network not running the eMFC 212 are detected via the IGMP messages issued by every multicast receiver. Legacy multicast sender and receiver information is propagated as global state. Legacy multicast packets are marked for "best effort" delivery, the default quality of service class. The system described above can use dedicated processor systems, microcontrollers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware. For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries. In any event, the functional blocks and software modules or features of the flexible interface can be implemented by themselves, or in combination with other operations in either hardware or software. 
Having described and illustrated the principles of the invention in a preferred embodiment thereof, it should be apparent that the invention may be modified in arrangement and detail without departing from such principles. We claim all modifications and variations coming within the spirit and scope of the following claims.

Claims
1. A network processing node, comprising: a processor sending Data Distribution Service (DDS) messages associated with data items contained in the network processing node, the DDS messages identifying a global revision value associated with a revision status for multiple different data items in the network processing node and data-revision numbers that identify revision status for the individual data items.
2. The network processing node according to claim 1 wherein the processor operates an enhanced multicast forwarding protocol that provides a Multicast Forwarding Header (MFH) for multicast packets transmitted over the mesh network, the MFH including a device identifier for a sending node and being independent of any Internet Protocol (IP) address associated with the sending node and further including a multicast group identifier identifying nodes in the mesh network associated with a same multicast group.
3. The network processing node according to claim 1 wherein the processor sends a STATUS message that identifies the network processing node sending the STATUS message and the global revision value associated with the last revision to the data items in the identified network processing node.
4. The network processing node according to claim 1 wherein the processor sends a DATA message that identifies: the node sending the DATA message, a global revision value for the node sending the DATA message, a data-name identifying a data item contained in the DATA message, a data-revision value for the data item, and the actual data item identified by the data-name.
5. The network processing device according to claim 1 wherein the processor receives a Negative Acknowledge (NACK) message that identifies global revision values for data that was not successfully received.
6. The network processing device according to claim 5 wherein the NACK message identifies the network processing node associated with the identified global revision values.
7. The network processing device according to claim 5 wherein the
NACK message contains a group of multiple global revision values each associated with the same or different data items that have not been successfully received.
8. The network processing device according to claim 5 wherein the processor sends a CHANGE message identifying for the global revision values identified in the NACK message a data-name identifying a data item and a data- revision number for the identified data item.
9. The network processing device according to claim 8 wherein the processor receives a RETRANSMIT message that identifies the data-names in the CHANGE message that are requested to be retransmitted, the processor in response to the RETRANSMIT message sending a DATA message that contains the data items for the identified data-names.
10. The network processing device according to claim 8 wherein the processor sends global revision values and associated data-names in the CHANGE message only for the latest versions of the data items associated with the global revision values in the NACK message.
11. The network processing device according to claim 8 wherein the processor maintains time stamps for the data items and sends out EXPIRED messages for requested updates to data items with expired time stamps.
12. The network processing device according to claim 2 wherein the processor prioritizes the multicast packets according to a traffic category, priority table, device identifier, multicast group identifier and a sequence number in the MFH.
13. The node according to claim 12 wherein the priority table and a multicast routing table used by the processor for prioritizing multicast packet processing are automatically distributed to the node using the DDS messages.
14. An ad-hoc mesh network, comprising: multiple mobile nodes that conduct logical point-to-point wireless communications with their neighbors within the mesh network and further provide hops for forwarding messages between other nodes in the mesh network, the nodes providing a mesh multicast protocol that forwards multicast packets between different nodes according to both a mesh network routing table and a mesh based multicast header in the multicast packets.
15. The network according to claim 14 including a device identifier in the multicast header associated with a particular device sending the multicast packets that does not change when the device moves to different locations in and out of the mesh network.
16. The network according to claim 15 including a source router ID, multicast destination address and port address in the multicast header that identifies nodes in the mesh network that are members of a same multicast group.
17. The network according to claim 15 including a sequence number in the multicast header used in combination with the device identifier to identify duplicate multicast packets sent from and returned back to the same node.
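Claim 17 pairs the device identifier with a sequence number to detect multicast packets that loop back to their origin. A minimal sketch of such a duplicate filter, with purely illustrative names, might look like this:

```python
def make_dup_filter():
    """Return a predicate that flags (device-id, sequence-number) pairs
    already seen, so looped-back multicast packets can be dropped."""
    seen = set()

    def is_duplicate(device_id, seq):
        key = (device_id, seq)
        if key in seen:
            return True
        seen.add(key)
        return False

    return is_duplicate

dup = make_dup_filter()
dup("dev-A", 10)  # first sighting: forward the packet
```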
18. The network according to claim 14 including a traffic category in the multicast header used by the nodes to prioritize the processing and forwarding of packets to other nodes in the mesh network.
19. The network according to claim 18 including a priority table and a multicast routing table that are automatically distributed to the different nodes in the mesh network that are used in combination with a device identifier, a sequence number, a multicast group address and the traffic category in the multicast header to prioritize the processing and forwarding of the multicast packets.
20. The network according to claim 14 including at least some of the nodes operating a Data Distribution Service (DDS) that sends and receives DDS messages that use Global Revision Values (GRVs) for tracking changes to groups of different data items located on different nodes and for maintaining consistent versions of the data items on the different nodes.
21. The network according to claim 20 wherein the DDS messages identify data-revision values for individual data items associated with the GRVs and are used to distribute a priority table and a multicast routing table to the different nodes that are used in combination with a device identifier, a sequence number, a multicast group address and the traffic category in the multicast header to prioritize the processing and forwarding of the multicast packets.
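Claims 20 and 21 use Global Revision Values to keep data items consistent across nodes; a node that hears a peer advertise a GRV higher than the one it holds knows which revisions to request in a NACK (claims 5 through 7). Assuming, purely for illustration, that GRVs are consecutive integers per node:

```python
def missing_revisions(local_grv, advertised_grv):
    """Return the global revision values this node has not yet received
    from a peer, given its own highest GRV and the peer's advertised GRV."""
    return list(range(local_grv + 1, advertised_grv + 1))

# A node holding GRV 3 that hears an advertisement of GRV 6 would
# NACK revisions 4, 5, and 6.
gap = missing_revisions(3, 6)
```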
EP05713134A 2004-02-09 2005-02-08 Reliable message distribution with enhanced emfc for ad hoc mesh networks Withdrawn EP1714446A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US54335304P 2004-02-09 2004-02-09
US54335204P 2004-02-09 2004-02-09
PCT/US2005/003985 WO2005079026A1 (en) 2004-02-09 2005-02-08 Reliable message distribution with enhanced emfc for ad hoc mesh networks

Publications (1)

Publication Number Publication Date
EP1714446A1 true EP1714446A1 (en) 2006-10-25

Family

ID=34864520

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05713134A Withdrawn EP1714446A1 (en) 2004-02-09 2005-02-08 Reliable message distribution with enhanced emfc for ad hoc mesh networks

Country Status (3)

Country Link
EP (1) EP1714446A1 (en)
JP (1) JP2007527160A (en)
WO (1) WO2005079026A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7623458B2 (en) * 2005-09-30 2009-11-24 The Boeing Company System and method for providing integrated services across cryptographic boundaries in a network
JP4513730B2 (en) * 2005-11-29 2010-07-28 沖電気工業株式会社 Wireless communication apparatus, wireless communication method, and wireless communication system
JP2008211682A (en) * 2007-02-27 2008-09-11 Fujitsu Ltd Reception program, transmission program, transmission/reception system, and transmission/reception method
US8169974B2 (en) 2007-04-13 2012-05-01 Hart Communication Foundation Suspending transmissions in a wireless network
US8570922B2 (en) 2007-04-13 2013-10-29 Hart Communication Foundation Efficient addressing in wireless hart protocol
US8325627B2 (en) 2007-04-13 2012-12-04 Hart Communication Foundation Adaptive scheduling in a wireless network
US8230108B2 (en) 2007-04-13 2012-07-24 Hart Communication Foundation Routing packets on a network using directed graphs
US8356431B2 (en) 2007-04-13 2013-01-22 Hart Communication Foundation Scheduling communication frames in a wireless network
WO2009104538A1 (en) * 2008-02-22 2009-08-27 株式会社オートネットワーク技術研究所 Vehicle-mounted electronic controller
WO2010008867A2 (en) 2008-06-23 2010-01-21 Hart Communication Foundation Wireless communication network analyzer
US8274908B2 (en) * 2009-07-24 2012-09-25 Intel Corporation Quality of service packet processing without explicit control negotiations
CN101873273A (en) * 2010-07-08 2010-10-27 华为技术有限公司 Routing forwarding method, routing node and wireless communication network
JP2013183252A (en) * 2012-03-01 2013-09-12 Nec Commun Syst Ltd Communication terminal on ad hoc network, method and system for information control between terminals
JP6086195B2 (en) * 2012-09-26 2017-03-01 岩崎通信機株式会社 Wireless mesh network system and wireless communication apparatus
JP6086194B2 (en) * 2012-09-26 2017-03-01 岩崎通信機株式会社 Wireless mesh network system and wireless communication apparatus
FR3119068B1 (en) * 2021-01-20 2023-11-03 Ad Waibe Method for managing helicopter mission messages and device for its implementation

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
EP0833294A4 (en) * 1996-03-15 2007-11-28 Sony Corp Data transmitter, data transmission method, data receiver, data receiving method, data transfer device, and data transfer method
US6404739B1 (en) * 1997-04-30 2002-06-11 Sony Corporation Transmitter and transmitting method, receiver and receiving method, and transceiver and transmitting/receiving method

Non-Patent Citations (1)

Title
See references of WO2005079026A1 *

Also Published As

Publication number Publication date
JP2007527160A (en) 2007-09-20
WO2005079026A1 (en) 2005-08-25

Similar Documents

Publication Publication Date Title
WO2005079026A1 (en) Reliable message distribution with enhanced emfc for ad hoc mesh networks
US20050175009A1 (en) Enhanced multicast forwarding cache (eMFC)
US7924728B2 (en) Systems and methods for energy-conscious communication in wireless ad-hoc networks
Perkins et al. RFC3561: Ad hoc on-demand distance vector (AODV) routing
US8005054B2 (en) Communication system, communication method, communication terminal device, control method thereof, and program
Perkins et al. Ad hoc on-demand distance vector (AODV) routing
Li et al. OTERS (On-tree efficient recovery using subcasting): A reliable multicast protocol
US20050160345A1 (en) Apparatus, system, method and computer program product for reliable multicast transport of data packets
US20050174972A1 (en) Reliable message distribution in an ad hoc mesh network
US20020065930A1 (en) Collaborative host masquerading system
JP4598073B2 (en) Transmitting apparatus and transmitting method
JP2008005455A (en) Communication terminal and retransmission request method
WO2014186614A2 (en) Disrupted adaptive routing
Kunz Multicasting in mobile ad-hoc networks: achieving high packet delivery ratios
Gopinath et al. An experimental study of the cache-and-forward network architecture in multi-hop wireless scenarios
Clausen et al. The lln on-demand ad hoc distance-vector routing protocol-next generation (loadng)
Braun et al. Energy-efficient TCP operation in wireless sensor networks
Jawhar et al. Towards more reliable and secure source routing in mobile ad hoc and sensor networks
JP2009010575A (en) Repeater for multicast communication, and terminal device
Kafaie et al. FlexONC: Joint cooperative forwarding and network coding with precise encoding conditions
Lee et al. Active request management in stateful forwarding networks
Garg et al. A Survey of QoS parameters through reactive routing in MANETs
US7450512B1 (en) Recirculating retransmission queuing system and method
Reddy et al. Ant‐inspired level‐based congestion control for wireless mesh networks
Sathiyajothi Performance analysis of routing protocols for manet using NS2 simulator

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060905

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

RIN1 Information on inventor provided before grant (corrected)

Inventor name: BAUER, FRED

Inventor name: BOYNTON, LEE

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090831