WO2011034785A1 - Methods for improved server redundancy in dynamic networks - Google Patents

Methods for improved server redundancy in dynamic networks

Info

Publication number
WO2011034785A1
WO2011034785A1 (PCT/US2010/048378)
Authority
WO
WIPO (PCT)
Prior art keywords
server
network
dynamic
active
ssn
Application number
PCT/US2010/048378
Other languages
French (fr)
Inventor
Raymond B. Miller
Edward Grinshpun
Original Assignee
Alcatel-Lucent Usa Inc.
Application filed by Alcatel-Lucent Usa Inc.
Priority to CN2010800412798A (CN102576324A)
Priority to EP10760830A (EP2478439A1)
Priority to JP2012529806A (JP5697672B2)
Priority to KR1020127009686A (KR101479919B1)
Publication of WO2011034785A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/1658 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • G06F 11/1662 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit, the resynchronized component or unit being a persistent storage device
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/202 Error detection or correction by redundancy in hardware using active fault-masking, where processing functionality is redundant
    • G06F 11/2023 Failover techniques
    • G06F 11/2028 Failover techniques eliminating a faulty processor or activating a spare
    • G06F 11/2041 Redundant processing functionality with more than one idle spare processing component
    • G06F 11/2097 Redundancy in hardware maintaining the standby controller/processing unit updated
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/40 Network arrangements, protocols or services independent of the application payload, for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Hardware Redundancy (AREA)

Abstract

In one embodiment, a server (106) is assigned a candidate secondary server role such that the dynamic network (1000) employs a "make-before-break" redundancy where redundant nodes proactively synchronize replicated data and state information with a standby secondary server prior to releasing the responsibilities of active primary (102) and/or secondary (104) server(s). The "make-before-break" redundancy ensures relatively high availability of dynamic networks and realized services.

Description

METHODS FOR IMPROVED SERVER REDUNDANCY IN DYNAMIC NETWORKS
BACKGROUND OF THE INVENTION
Dynamic networks, such as those for emergency response, disaster recovery and/or military operations, require high availability and server redundancy. Simultaneously, these types of networks must also account for the higher probability of individual nodes or clusters of nodes entering and leaving the network due to normal operational scenarios and/or catastrophic failures. These network entries and network exits result in a higher occurrence of controlled and uncontrolled switchovers between redundant nodes in which the network is left unprotected and vulnerable. To reduce network vulnerability, the transient time during these switchovers for which the network is left unprotected should be reduced.
SUMMARY OF THE INVENTION
Current server redundancy mechanisms establish synchronization between a primary server or node and a secondary server or node for the purposes of data and state replication as well as signaling for health monitoring between the systems. When the primary server fails or is interrupted, the secondary server assumes the primary server role and a new secondary server is identified. But these mechanisms address only single failure conditions.
Conventional mechanisms do not handle the case in which the secondary server fails at the same time as the primary server or during the transition from primary to secondary server. This is acceptable for static networks because the likelihood of dual failures is somewhat remote. However, in the case of dynamic networks, nodes enter and leave the network more frequently as part of normal operations. Further, when dynamic networks are deployed as emergency networks or in military situations, they may be more susceptible to multiple failure conditions due to the harsh user environments in which they are deployed. Further still, because current server redundancy mechanisms only address single failure conditions, dynamic networks are left unprotected while establishing data and state synchronization with a new secondary server after the primary or secondary server fails.
Example embodiments employ a "make-before-break" redundancy where the redundant nodes proactively synchronize replicated data and state information with a standby secondary server prior to releasing the responsibilities of active primary and/or secondary server(s). Example embodiments utilize new interactions with mechanisms for assigning redundancy roles to the servers, new interactions with the existing redundancy mechanisms to trigger checkpoint replication with the new secondary server, as well as bi-casting of replicated information from the primary server to both the active and new secondary servers during a transition period.
In connection with example embodiments, the dynamic network nodes may be mounted on emergency or military vehicles to provide wireless access to first responders and/or military personnel. By using 4th Generation (4G) broadband wireless cellular access technology, example embodiments may improve the data transfer rate to and from first responders as well as improve in-building penetration for wireless communication.
According to at least one example embodiment, network nodes form a mesh network (1) for inter-node communication, (2) to support mobility of end-users, and (3) to improve scalability (e.g., more vehicles at the scene will be able to support more end-users on the access side). The mesh networks are dynamic in nature in that the network nodes (and vehicles on which the network nodes are mounted) may enter or leave the scene during normal operations.
After the mesh network has been established, if either the primary or secondary server node within the mesh network fails or indicates its intent to leave the network, a role assignment algorithm is executed to elect a new secondary server and trigger a redundancy mechanism to synchronize the new secondary server prior to, for example, signaling the active secondary server that it may leave (if the active secondary server is leaving) or prior to the active secondary server becoming the new primary server (if the active primary server is leaving).
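As an editorial illustration only (not part of the original disclosure), the following minimal Python sketch walks through the make-before-break sequence just summarized. All names (Node, bulk_checkpoint, bicast, controlled_shutdown) and the sample data are assumed placeholders, not an API defined by the embodiments; the point of the sketch is that the candidate standby is fully synchronized before the primary releases its role.

```python
# Hypothetical sketch of a make-before-break controlled shutdown of the PSN.
class Node:
    def __init__(self, name):
        self.name = name
        self.role = "RN"               # Relay Node by default
        self.data = {}                 # replica of dynamic persistent data

    def __repr__(self):
        return f"{self.name}({self.role})"

def bulk_checkpoint(source, target):
    """Copy the complete dynamic persistent data store to the target."""
    target.data = dict(source.data)

def bicast(delta, targets):
    """Send an incremental change to both standby nodes during the transition."""
    for t in targets:
        t.data.update(delta)

def controlled_shutdown(psn, ssn, candidate):
    # 1. Elect and synchronize the candidate SSN while the PSN is still active.
    candidate.role = "candidate-SSN"
    bulk_checkpoint(psn, candidate)
    # 2. Bi-cast further changes to both standbys (transient 1+2 state).
    bicast({"session/42": "updated"}, [ssn, candidate])
    # 3. Only now does the PSN leave; promotions happen independently.
    psn.role = "offline"
    ssn.role = "PSN"                   # active SSN promotes itself to primary
    candidate.role = "SSN"             # candidate promotes itself to active SSN

if __name__ == "__main__":
    n102, n104, n106 = Node("102"), Node("104"), Node("106")
    n102.role, n104.role = "PSN", "SSN"
    n102.data = {"session/42": "initial"}
    bulk_checkpoint(n102, n104)
    controlled_shutdown(n102, n104, n106)
    print(n104, n106, n106.data)       # 104(PSN) 106(SSN) {'session/42': 'updated'}
```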
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will become more fully understood from the detailed description given herein below and the accompanying drawings, wherein like elements are represented by like reference numerals, which are given by way of illustration only and thus are not limiting of the present invention and wherein:
FIG. 1 illustrates a rapidly deployable network in which example embodiments may be implemented; and
FIG. 2 is a signal flow diagram for illustrating a method for improved server redundancy according to an example embodiment.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Various example embodiments will now be described more fully with reference to the accompanying drawings in which some example embodiments are shown. Specific structural and functional details disclosed herein are merely representative for the purposes of describing example embodiments. Accordingly, while example embodiments are capable of various modifications and alternative forms, embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit example embodiments to the particular forms disclosed. On the contrary, example embodiments are to cover all modifications, equivalents, and alternatives. Like numbers refer to like elements throughout the description of the figures.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. By contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between" versus "directly between," "adjacent" versus "directly adjacent," etc.).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the," are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Specific details are provided in the following description to provide a thorough understanding of example embodiments. However, it will be understood by one of ordinary skill in the art that example embodiments may be practiced without these specific details. For example, systems and networks may be shown in block diagrams so as not to obscure the example embodiments in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown and/or discussed without unnecessary detail in order to avoid obscuring example embodiments.
Example embodiments may be described as a process depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, signal flow diagram or a block diagram. Although a signal flow diagram may describe the operations or interactions as a sequential process, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of the operations or interactions may be re-arranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function.
Illustrative embodiments will be described with reference to acts and symbolic representations of operations (e.g., in the form of a signal flow diagram) that may be implemented as program modules or functional processes, including routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types, and that may be implemented using existing hardware at existing network elements or control nodes (e.g., network nodes or servers within a mesh network). Such existing hardware may include one or more Central Processing Units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computers or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing," "computing," "calculating," "determining," "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device (e.g., a network node or server within a mesh network), that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Note also that the software-implemented aspects of the example embodiments are typically encoded on some form of programmable or computer-readable storage medium. The storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read-only memory, or "CD ROM"), and may be read-only or random access. The example embodiments are not limited by these aspects of any given implementation.
FIG. 1 illustrates a dynamic rapidly deployable network (RDN) including two smaller mesh networks 1000 and 2000 in which methods according to example embodiments may be implemented.
Referring to FIG. 1, the dynamic RDN includes a satellite backhaul network 116, which is connected to a cellular backhaul network 118 by a private intranet or the Internet 100.
A first mesh network 1000 is connected to the Internet or private intranet 100 via the cellular backhaul network 118.
As is well-known, a wireless mesh network (WMN) is a wireless communications network composed of radio nodes organized in a mesh topology. Wireless mesh networks are dynamic. Often, wireless mesh networks include mesh routers and gateways.
Mesh routers are generally computers/servers that forward traffic to and from each other as well as mesh gateways. Mesh gateways are also computers/servers that may connect to, for example, the Internet. In one example, wireless mesh networks can be implemented with various wireless technologies including 802.11, 802.16, cellular technologies or combinations of more than one type. The mesh routers and/or gateways generally have wireless communication capabilities to serve and provide access to end-user terminals. End-user terminals may include mobile phones, laptops, personal digital assistants (PDAs) or any other device having wireless transmission capabilities.
Referring back to FIG. 1, the first mesh network 1000 includes a plurality of network servers or nodes 102, 104, 106 connected to one another via wireless communications links. These servers in the first mesh network 1000 are referred to as RDN Mobile Network Nodes (RDN MNNs). In the mesh network 1000 shown in FIG. 1, the plurality of servers 102, 104, 106 may be mesh gateways or mesh routers having well-known capabilities as well as the additional capabilities discussed herein. The mesh network 1000 may serve a plurality of end-user terminals (not shown) within reach of each node's cellular boundary.
In FIG. 1, a second mesh network 2000 is connected to the first mesh network 1000 via the satellite backhaul network 116, the Internet or the private intranet 100 and the cellular backhaul network 118. The second mesh network 2000 also includes a plurality of network servers or nodes 110, 112, 114 connected to one another via wireless communications links. As was the case with the mesh network 1000, the plurality of servers 110, 112, 114 may be mesh gateways or mesh routers. Although not shown in FIG. 1, a plurality of end-user terminals may also be present in the second mesh network 2000.
The plurality of network nodes or servers shown in FIG. 1 may be mounted on, for example, emergency vehicles to provide wireless access to emergency first responders.
By utilizing wireless mesh networks such as the mesh networks 1000 and 2000 shown in FIG. 1, the network may be scaled relatively easily to the size required for a particular incident by deploying more (or fewer) RDN MNNs to a given location (e.g., disaster location, forward area, etc.).
As will be discussed in more detail below, each of the plurality of servers 102, 104, 106 of the first mesh network 1000 may be assigned a role within the mesh network. Although specific roles will be discussed with regard to specific ones of the plurality of servers 102, 104, 106, it will be understood that each of these servers or nodes may serve any of the roles within the mesh network 1000.
Moreover, although only three network nodes are shown in the mesh network 1000, it will be understood that each of mesh networks 1000 and 2000 may include any number of network nodes.
FIG. 2 is a signal flow diagram for describing a method for improving server redundancy in a dynamic network. The signal flow diagram shown in FIG. 2 illustrates example interaction between servers of the mesh network 1000 in a situation in which a controlled shutdown of an active primary server with a make-before-break redundancy is performed. It will be understood that example embodiments may be implemented in conjunction with other networks, dynamic or otherwise.
Referring to FIG. 2, at step 1, a first server 102 is initialized. Initialization includes, but is not limited to, power up of the system and its components, starting of the system processes and services to a stable state of operation, and completion of network entry of the server into the mesh network 1000.
At step 2, a role assignment algorithm is performed within the mesh network 1000. Role assignment is a mechanism that runs on each network server upon entry into (initialization within) the wireless mesh network to negotiate and assign roles to nodes within the wireless mesh network. An example role assignment mechanism is described in U.S. Patent Application No. 11/953,426 (Publication No. 2009/0147702) to Buddhikot et al., which was filed on December 10, 2007.
For example, after or in response to initialization within the mesh network, each network node is assigned a "role" based on network topology and what roles have already been assigned to network nodes in the mesh network. In the example mesh network 1000 shown in FIG. 1, there are five possible roles for a network node. In some cases, a network node may be assigned more than one role. The possible roles for each node include:
Relay Node (RN): This is the basic role in a dynamic RDN. In this role, a given network node has its access and relay interfaces active. The RN participates in the mesh network with its neighboring network nodes and provides access to end-user devices. The backhaul interfaces for the RN are inactive. As a result, the RN cannot directly access backhaul networks such as an external macro-cellular, satellite, or wireline network. Instead, the RN accesses the external backhaul networks via a Gateway Node (GN).
Primary Server Node (PSN): This network node acts as a Relay Node (RN) and also hosts the centralized services necessary for operation of the RDN wireless network. Centralized services may be any service for which there is a single active instance for the entire instance of a particular mesh network. Conversely, distributed services are on a per-node basis. There is one active PSN per mesh network.
Secondary Server Node (SSN): This network node acts as an RN as well as a redundant network node in the event the active PSN fails or exits the network. Applications running on the PSN synchronize their dynamic persistent data to their counterpart applications in the SSN, thereby enabling a relatively smooth handoff between the PSN and SSN when necessary.
Candidate Gateway Node (CGN): This network node acts as an RN and has an enabled backhaul interface. The CGN is capable of providing access to a backhaul network such as an external macro-cellular, satellite, or wireline network, but does not act as an active gateway for any network node.
Gateway Node (GN): The GN acts as an RN and has its backhaul interface active. The GN also serves as a gateway to a backhaul network such as an external macro-cellular, satellite, or wireline network. Thus, other network nodes in the mesh network may access the external networks via the GN.
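For illustration only, the five roles above can be captured as a small enum together with a toy assignment rule that mirrors the simple scenario of FIG. 2 (first node becomes PSN, second becomes SSN, later nodes become RNs). The enum and the function assign_role are editorial assumptions; the actual role assignment is negotiated among nodes and considers topology.

```python
from enum import Enum

class Role(Enum):
    RN  = "Relay Node"
    PSN = "Primary Server Node"
    SSN = "Secondary Server Node"
    CGN = "Candidate Gateway Node"
    GN  = "Gateway Node"

def assign_role(existing_roles):
    """Pick a server role for a newly initialized node (toy rule)."""
    if Role.PSN not in existing_roles:
        return Role.PSN          # first node hosts the centralized services
    if Role.SSN not in existing_roles:
        return Role.SSN          # second node becomes the redundant standby
    return Role.RN               # primary and secondary already assigned

roles = []
for node in ("102", "104", "106"):
    r = assign_role(roles)
    roles.append(r)
    print(node, "->", r.value)   # 102 -> Primary Server Node, then Secondary Server Node, then Relay Node
```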
Referring back to Step 2 in FIG. 2, through the role assignment algorithm, the network nodes 102, 104 and 106 within the mesh network 1000 determine which of the nodes will serve as the PSN. For example, if network node 102 is the first and only node initialized within the mesh network 1000, this network node is assigned the role of primary server or PSN. For the sake of discussion, it is assumed that network node 102 is assigned the primary server role. As such, the network node 102 is sometimes referred to herein as the PSN 102. After the network node 102 has been assigned the primary server role, normal active system operations begin. As is known in the art, normal active system operations include authentication/authorization and possible admittance of subscribers on end-user terminals into the mesh network 1000, transmitting and receiving information to and from end-users within the mesh network 1000, applying traffic policies and Quality of Service (QoS) mechanisms, and other operations consistent with the operation of mesh routers.
Upon (or in response to) initialization of another network node, for example, network node 104 at step 3, the role assignment algorithm is performed between network nodes 102 and 104 at step 4. During the role assignment algorithm, because the network node 102 is already serving as the PSN, the network node 104 is assigned a secondary (standby) server role (at step 5). In this simple example, node 104 is selected as the SSN since it is the only other node present. However, in larger, more complex network topologies, the selection of the SSN may take into account factors such as, but not limited to, the number of network hops away, the routing tree topology, and the processor load on the particular network node. At step 6, network node 104 attaches to the active PSN 102 as the designated secondary server or SSN. After being designated the SSN, dynamic persistent data at the active PSN 102 is synchronized with the dynamic persistent data at the active SSN 104 at step 7. For example, at step 7 dynamic persistent data at the active PSN 102 is bulk-checkpointed to the active SSN 104.
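One way to combine the selection factors just listed is a weighted cost function, sketched below purely as an illustration; the weights, field names and candidate values are assumptions and not specified by the embodiments.

```python
# Hypothetical scoring of SSN candidates by hop count and processor load.
def select_ssn(candidates):
    """Return the candidate node dict with the lowest cost."""
    def cost(node):
        # Fewer hops means faster checkpointing; a lightly loaded node is
        # better able to absorb the standby workload.
        return 2.0 * node["hops_to_psn"] + 1.0 * node["cpu_load"]
    return min(candidates, key=cost)

candidates = [
    {"name": "104", "hops_to_psn": 1, "cpu_load": 0.30},
    {"name": "106", "hops_to_psn": 2, "cpu_load": 0.10},
]
print(select_ssn(candidates)["name"])   # "104": one hop and moderate load wins
```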
Bulk-checkpointing is the transfer of the complete data store from one network node to another to synchronize the data at the network nodes in the dynamic RDN. Bulk-checkpointing is performed when an SSN has no redundant data from the active PSN. In this example, the bulk-checkpointing is performed after the SSN 104 has initially been assigned the secondary server role.
In more detail, "dynamic data" is the data of a running application, such as state information, that is stored in memory and usually does not survive a process restart. "Dynamic persistent data" is dynamic data that is stored in some persistent memory store so that the data persists across a process restart. For example, dynamic persistent data may be dynamic object state information (e.g., object being called, service flow, etc.) that a device (e.g., a network node) is required to maintain in software to provide the associated functions and services. "Static data" is data that is stored in a long term storage device, such as a database on a disk drive. For example, static data may be system configuration data such as host name, domain name, or the provisioning of subscriber information.
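The distinction between dynamic and dynamic persistent data can be illustrated with a small write-through store, shown below as an editorial sketch: the class name, file name and JSON format are assumptions, and a real node would use a more efficient persistence mechanism.

```python
import json, os

class DynamicPersistentStore:
    """Runtime state that is also written to disk so it survives a restart."""
    def __init__(self, path):
        self.path = path
        self.state = {}                      # in-memory dynamic data
        if os.path.exists(path):             # reload after a process restart
            with open(path) as f:
                self.state = json.load(f)

    def set(self, key, value):
        self.state[key] = value              # update the running state
        with open(self.path, "w") as f:      # write-through so the state persists
            json.dump(self.state, f)

store = DynamicPersistentStore("psn_state.json")
store.set("flow/7", {"subscriber": "alice", "qos": "gold"})
```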
Bulk-checkpointing of dynamic persistent data between network nodes may be performed in various ways. For example, the data may be saved in a memory or on a disk shared between network nodes depending on the required access speed. In a redundant architecture example, specific dynamic data on the PSN 102 is modified as a result of transactions or events within or to the applications, while this dynamic data on the SSN 104 is modified as a result of replication of the active PSN 102.
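One possible realization of bulk-checkpointing, given only as a hedged sketch, is to serialize the complete data store into a single message that the standby applies wholesale; the message format and function names below are assumptions.

```python
import json

def make_bulk_checkpoint(psn_store):
    """Serialize the complete data store for transfer to a standby."""
    return json.dumps({"type": "bulk", "data": psn_store}).encode()

def apply_bulk_checkpoint(message, ssn_store):
    """Replace the standby's replica with the received full copy."""
    payload = json.loads(message.decode())
    assert payload["type"] == "bulk"
    ssn_store.clear()
    ssn_store.update(payload["data"])

psn_store = {"flow/1": "active", "flow/2": "idle"}
ssn_store = {}
apply_bulk_checkpoint(make_bulk_checkpoint(psn_store), ssn_store)
print(ssn_store == psn_store)   # True: the standby now mirrors the primary
```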
As shown visually in FIG. 2, synchronization of the dynamic persistent data between the active PSN 102 and the active SSN 104 provides 1+1 redundancy in the mesh network.
At step 8, synchronization between the active PSN 102 and the active SSN 104 is updated. In one example, at step 8, subsequent dynamic persistent data changes at the active PSN 102 are incrementally checkpointed to the active SSN 104. Incremental checkpointing is the transfer of only the delta or difference between the two synchronized network nodes (e.g., the active PSN 102 and the active SSN 104). This occurs, for example, when some, but not all, of the dynamic persistent data on the active PSN 102 has been modified since the most recent bulk-checkpointing between the active PSN 102 and the active SSN 104. Incremental checkpointing may be triggered each time a dynamic persistent data object is modified, each time a set of dynamic persistent data objects is modified, or on a more granular basis. Incremental checkpointing is not performed if the dynamic persistent data on the active PSN 102 has not been modified.
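A minimal sketch of incremental checkpointing follows; it sends only the modified objects to the standby and, as an assumption for brevity, ignores deletions. The function names are illustrative.

```python
def compute_delta(current, last_checkpointed):
    """Return only the dynamic persistent data objects that changed."""
    return {k: v for k, v in current.items()
            if last_checkpointed.get(k) != v}

def incremental_checkpoint(psn_store, ssn_store, last_checkpointed):
    delta = compute_delta(psn_store, last_checkpointed)
    if not delta:                      # nothing modified: no checkpoint is sent
        return {}
    ssn_store.update(delta)            # standby applies only the difference
    last_checkpointed.update(delta)    # remember what has been replicated
    return delta

psn  = {"flow/1": "active", "flow/2": "idle"}
ssn  = dict(psn)                       # already in sync after a bulk checkpoint
seen = dict(psn)
psn["flow/2"] = "active"               # one object is modified on the PSN
print(incremental_checkpoint(psn, ssn, seen))   # {'flow/2': 'active'}
```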
After having been synchronized with the active PSN 102, the active SSN 104 monitors the health of the PSN 102. In the event that the active PSN 102 fails or performs a controlled shutdown, the active SSN 104 is ready to assume the primary server role. For example, SSN 104 may send a message to PSN 102, expecting an acknowledgement from PSN 102. If, after a number of retries to account for possible packet loss within the network, SSN 104 does not receive an acknowledgement from PSN 102, SSN 104 assumes the primary server role responsibilities. If, in the future, PSN 102 becomes reachable again, the role assignment mechanism may downgrade one of the nodes to SSN status.
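The probe-and-retry behavior just described can be sketched as follows; the retry count, probe function and simulated responses are assumptions chosen only to illustrate why several unanswered probes are required before a takeover.

```python
def monitor_primary(send_probe, retries=3):
    """Return True if the SSN should assume the primary server role."""
    for _ in range(retries):
        if send_probe():               # acknowledgement received: PSN is healthy
            return False
    return True                        # no acknowledgement after all retries

# Simulate a PSN that answers once and then stops responding (e.g., it has
# shut down); the probe function stands in for a real network exchange.
responses = iter([True, False, False, False])
probe = lambda: next(responses)

print(monitor_primary(probe))          # False: the PSN acknowledged a probe
print(monitor_primary(probe))          # True: three probes went unanswered, take over
```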
After establishing the mesh network 1000, other systems (e.g., network nodes 106) may enter and be initialized within the mesh network 1000. An example situation in which this occurs will be described in more detail below with reference to FIG. 2.
Referring still to FIG. 2, for example, at step 9 a network node 106 enters the mesh network 1000 in the same manner as discussed above with regard to step 3. After (or in response to) initialization, at step 10, the role assignment algorithm is performed between the active PSN 102, the active SSN 104 and the newly entered node 106. Because both the primary and secondary server roles have already been assigned in the wireless mesh network 1000, network node 106 is assigned a non-redundant role at step 11. For example, network node 106 is designated as a relay node (RN). The RN 106 then begins normal operations.
During normal operations, the active PSN 102 may initiate a controlled shutdown (e.g., in expectation of leaving the mesh network). As noted above, example embodiments will be discussed with regard to this situation. However, a similar process may occur if the active PSN 102 begins to fail, rather than initiate a controlled shutdown.
Referring back to FIG. 2, at step 12 the active PSN 102 initiates a controlled shutdown because it intends to leave the mesh network 1000. In response to an indication or notification that the active PSN 102 intends to leave the mesh network 1000, the role assignment algorithm is triggered to identify a candidate SSN at step 13. The candidate SSN is a standby SSN, which will become the active SSN after being synchronized with the active PSN and active SSN. At step 14, a candidate SSN is identified and assigned the candidate secondary server role pending completion of dynamic persistent data synchronization with the active PSN 102 and SSN 104. In the example shown in FIG. 1, the network node 106 is identified as the candidate or standby SSN.
At step 15, running dynamic persistent data at the active PSN 102 is synchronized with dynamic persistent data at the candidate SSN 106. In one example, the running dynamic persistent data at the active PSN 102 is bulk-checkpointed to the candidate SSN 106.
While synchronizing the dynamic persistent data between the active PSN 102 and the candidate SSN 106, any incremental changes to the dynamic persistent data are bi-casted to both the active SSN 104 and the candidate SSN 106 to ensure that, if either the active SSN 104 (the candidate PSN) or the active PSN 102 fails, at least one SSN is still present in the mesh network 1000.
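The bi-casting step can be sketched as applying each incremental change to both standby replicas, as below; the function and variable names are illustrative assumptions.

```python
def bicast_incremental(delta, active_ssn_store, candidate_ssn_store):
    """Apply one incremental change to both standby replicas."""
    for replica in (active_ssn_store, candidate_ssn_store):
        replica.update(delta)

active_ssn = {"flow/1": "active"}          # node 104, already in sync
candidate  = {"flow/1": "active"}          # node 106, just bulk-checkpointed
bicast_incremental({"flow/9": "new"}, active_ssn, candidate)
print(active_ssn == candidate)             # True: either standby could take over
```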
Upon completion of dynamic persistent data synchronization between the active PSN 102 and the candidate SSN 106, dynamic persistent data at the active PSN 102, the active SSN 104 and the candidate SSN 106 is synchronized at step 17. Once synchronized, the role assignment algorithm is complete and the dynamic network is in a transient 1+2 redundant state.
The active SSN 104 continues to monitor the health of the active PSN 102 and is prepared to assume the primary server role when the active PSN 102 completes the controlled shutdown (or alternatively fails). In addition, the candidate SSN 106 is ready to become the active SSN if the active SSN 104 or the active PSN 102 fails or leaves the mesh network 1000.
Returning to FIG. 2, at step 19, when the active PSN 102 completes the controlled shutdown (e.g., exits the dynamic RDN), the active SSN 104 detects the exit of the active PSN 102 at step 20, and the active SSN 104 promotes itself to the active primary server role at step 21. The SSN 104 then informs all other nodes of its new status.
In parallel, at step 22 the candidate SSN 106 detects the exit of the active PSN 102 from the mesh network 1000, and promotes itself to the active secondary server role at step 23. Note that, in order to reduce (e.g., minimize) the time that the network is left unprotected, the candidate SSN 106 does not wait for the message from the SSN 104 indicating its promotion to PSN status before promoting itself to the active SSN. Thus, the promotion of the candidate SSN 106 to the active SSN may actually occur before, or in parallel with, the promotion of the SSN 104 to the PSN.
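The following sketch illustrates why the two promotions of steps 21 and 23 can proceed independently: each surviving node reacts to the detected exit of the PSN on its own, without waiting for the other's announcement. The on_psn_exit handler, the role strings and the use of threads are assumptions for illustration, not the disclosed message flow.

```python
import threading

def on_psn_exit(node):
    """Role transition run by each surviving node when the active PSN's exit
    is detected (hypothetical handler; message formats are not specified)."""
    if node["role"] == "SSN":
        node["role"] = "PSN"   # step 21: active SSN promotes itself to PSN
    elif node["role"] == "candidate-SSN":
        node["role"] = "SSN"   # step 23: candidate promotes itself immediately,
                               # without waiting for the new PSN's announcement

nodes = [{"id": 104, "role": "SSN"}, {"id": 106, "role": "candidate-SSN"}]
# Both handlers may run in parallel; neither depends on the other's message.
threads = [threading.Thread(target=on_psn_exit, args=(n,)) for n in nodes]
for t in threads:
    t.start()
for t in threads:
    t.join()
print([(n["id"], n["role"]) for n in nodes])   # -> [(104, 'PSN'), (106, 'SSN')]
```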
At step 24, the now active SSN 106 attaches to the newly active PSN 104, and at least 1+1 redundancy is continuously maintained in the mesh network 1000 despite the controlled shutdown of the PSN 102.
Subsequent dynamic persistent data changes of the active PSN 104 are then incrementally checkpointed to the active SSN 106 at step 25.
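A minimal sketch of such incremental checkpointing is shown below: only the keys that changed since the last checkpoint are applied to the standby copy. The incremental_checkpoint function and the set-of-changed-keys representation are assumptions; the disclosure does not define the delta encoding or transport used between the PSN and the SSN.

```python
def incremental_checkpoint(primary_data, standby_data, changes):
    """Apply only the entries that changed since the last checkpoint to the
    standby copy (illustrative sketch only)."""
    for key in changes:
        if key in primary_data:
            standby_data[key] = primary_data[key]   # changed or added entry
        else:
            standby_data.pop(key, None)             # entry deleted on the primary

psn_data = {"session-1": "state-A", "session-4": "state-D"}
ssn_data = {"session-1": "state-A", "session-2": "state-B"}
incremental_checkpoint(psn_data, ssn_data, changes={"session-4", "session-2"})
print(ssn_data == psn_data)   # -> True after the incremental checkpoint
```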
After having assumed the secondary server role, the active SSN 106 monitors the health of the PSN 104 and is prepared to become the PSN if the PSN 104 fails or performs a controlled shutdown.
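By way of example only, health monitoring of the PSN could be based on a heartbeat timeout, as in the sketch below. The monitor_psn helper and the three-second timeout are assumptions; the disclosure only requires that the SSN monitor the health of the PSN and be prepared to take over.

```python
import time

def monitor_psn(last_heartbeat, timeout_s=3.0, now=None):
    """Return True if the PSN should be considered failed. The heartbeat
    mechanism and timeout value are assumptions made for illustration."""
    now = time.monotonic() if now is None else now
    return (now - last_heartbeat) > timeout_s

# The active SSN would call this periodically and, on True, promote itself.
print(monitor_psn(last_heartbeat=0.0, now=10.0))   # -> True: no heartbeat for 10 s
print(monitor_psn(last_heartbeat=9.0, now=10.0))   # -> False: PSN still healthy
```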
Subsequently, similar scenarios occur when the designated or active SSN leaves the network or when any of the three nodes fails in the dynamic network. When the active SSN leaves the network, 1+1 redundancy is maintained because the candidate SSN (e.g., node 108) promotes itself to the active secondary server role.
Moreover, although not discussed explicitly herein, the same procedures may be performed within the second mesh network 2000 shown in FIG. 1. Because these procedures are substantially the same as those discussed above with regard to the first mesh network 1000 in FIG. 1, a detailed discussion is omitted.
Example embodiments ensure relatively high availability of dynamic networks and realized services. More generally, example embodiments improve the availability of networks and services for all networks, but especially for those that are dynamic and/or prone to multiple simultaneous failures.
Example embodiments reduce the time that dynamic networks are left unprotected, for example, during a transient period of redundancy switchover. This may increase the availability of the network and services, which are particularly important during emergency situations.
The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the invention, and all such modifications are intended to be included within the scope of the invention.

Claims

WE CLAIM:
1. A method of operating a dynamic network (1000), the method CHARACTERIZED BY:
assigning a primary server role to a first server (102);
assigning a secondary server role to a second server (104);
synchronizing dynamic persistent data between the first and second servers;
assigning a non-redundant server role to a third server (106); and
assigning a candidate secondary server role to at least the third server in response to notification of a pending removal of at least one of the first and second servers from the dynamic network.
2. The method of claim 1, further CHARACTERIZED BY:
initiating controlled shutdown of the first server in expectation of removal of the first server from the dynamic network.
3. The method of claim 2, further CHARACTERIZED BY:
identifying the third server as a candidate secondary server;
synchronizing dynamic persistent data of the first server with each of the second and third servers; and
promoting the second server to the primary server role in response to an indication that the first server has been removed from the dynamic network.
4. The method of claim 3, wherein the synchronizing dynamic persistent data of the first server with each of the second and third servers is CHARACTERIZED BY:
bulk-checkpointing dynamic persistent data of the first server with the second server.
5. The method of claim 4, wherein the synchronizing dynamic persistent data of the first server with each of the second and third servers is CHARACTERIZED BY:
bulk-checkpointing dynamic persistent data of the first server with the third server.
6. The method of claim 5, wherein the synchronizing dynamic persistent data of the first server with each of the second and third servers is CHARACTERIZED BY:
bicasting subsequent, incremental changes to the dynamic persistent data of the first server to each of the second and third servers.
7. The method of claim 3, wherein the synchronizing dynamic persistent data of the first server with each of the second and third servers is CHARACTERIZED BY:
bicasting subsequent, incremental changes to the dynamic persistent data of the first server to each of the second and third servers.
8. The method of claim 1, wherein the synchronizing step is CHARACTERIZED BY:
bulk-checkpointing dynamic persistent data of the first server to the second server.
9. The method of claim 8, wherein the synchronizing step is further CHARACTERIZED BY:
incrementally checkpointing subsequent changes to the dynamic persistent data of the first server to the second server.
10. The method of claim 1, further CHARACTERIZED BY:
reassigning the primary server role to the second server in response to an indication that the first server has been removed from the dynamic network; and
reassigning the secondary server role to the third server in response to the indication that the first server has been removed from the dynamic network.
PCT/US2010/048378 2009-09-18 2010-09-10 Methods for improved server redundancy in dynamic networks WO2011034785A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN2010800412798A CN102576324A (en) 2009-09-18 2010-09-10 Methods for improved server redundancy in dynamic networks
EP10760830A EP2478439A1 (en) 2009-09-18 2010-09-10 Methods for improved server redundancy in dynamic networks
JP2012529806A JP5697672B2 (en) 2009-09-18 2010-09-10 A method for improved server redundancy in dynamic networks
KR1020127009686A KR101479919B1 (en) 2009-09-18 2010-09-10 Methods for improved server redundancy in dynamic networks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/585,576 US9569319B2 (en) 2009-09-18 2009-09-18 Methods for improved server redundancy in dynamic networks
US12/585,576 2009-09-18

Publications (1)

Publication Number Publication Date
WO2011034785A1 true WO2011034785A1 (en) 2011-03-24

Family

ID=43513919

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/048378 WO2011034785A1 (en) 2009-09-18 2010-09-10 Methods for improved server redundancy in dynamic networks

Country Status (6)

Country Link
US (1) US9569319B2 (en)
EP (1) EP2478439A1 (en)
JP (1) JP5697672B2 (en)
KR (1) KR101479919B1 (en)
CN (1) CN102576324A (en)
WO (1) WO2011034785A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103095837A (en) * 2013-01-18 2013-05-08 浪潮电子信息产业股份有限公司 Method achieving lustre metadata server redundancy
US9332413B2 (en) 2013-10-23 2016-05-03 Motorola Solutions, Inc. Method and apparatus for providing services to a geographic area
KR101587766B1 (en) * 2015-07-16 2016-01-22 주식회사 케이티 System and method for portable backpack base station under TVWS or satellite backhaul
CN107688584A (en) * 2016-08-05 2018-02-13 华为技术有限公司 A kind of method, node and the system of disaster tolerance switching
JP6787576B2 (en) * 2017-02-20 2020-11-18 ウイングアーク1st株式会社 Cloud relay system and relay server
CN111800476A (en) * 2020-06-14 2020-10-20 洪江川 Data processing method based on big data and cloud computing and cloud big data server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020188711A1 (en) * 2001-02-13 2002-12-12 Confluence Networks, Inc. Failover processing in a storage system
US20030177411A1 (en) 2002-03-12 2003-09-18 Darpan Dinker System and method for enabling failover for an application server cluster
US20070260696A1 (en) 2006-05-02 2007-11-08 Mypoints.Com Inc. System and method for providing three-way failover for a transactional database
US20080016386A1 (en) 2006-07-11 2008-01-17 Check Point Software Technologies Ltd. Application Cluster In Security Gateway For High Availability And Load Sharing

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6353834B1 (en) * 1996-11-14 2002-03-05 Mitsubishi Electric Research Laboratories, Inc. Log based data architecture for a transactional message queuing system
DE19836347C2 (en) 1998-08-11 2001-11-15 Ericsson Telefon Ab L M Fault-tolerant computer system
JP4689137B2 (en) * 2001-08-08 2011-05-25 株式会社日立製作所 Remote copy control method and storage system
US6691244B1 (en) * 2000-03-14 2004-02-10 Sun Microsystems, Inc. System and method for comprehensive availability management in a high-availability computer system
US6854069B2 (en) * 2000-05-02 2005-02-08 Sun Microsystems Inc. Method and system for achieving high availability in a networked computer system
US7143167B2 (en) * 2000-05-02 2006-11-28 Sun Microsystems, Inc. Method and system for managing high-availability-aware components in a networked computer system
US6934875B2 (en) 2000-12-29 2005-08-23 International Business Machines Corporation Connection cache for highly available TCP systems with fail over connections
US6871296B2 (en) * 2000-12-29 2005-03-22 International Business Machines Corporation Highly available TCP systems with fail over connections
US6769071B1 (en) * 2001-01-23 2004-07-27 Adaptec, Inc. Method and apparatus for intelligent failover in a multi-path system
US7164676B1 (en) * 2001-03-21 2007-01-16 Cisco Technology, Inc. Method and apparatus for a combined bulk and transactional database synchronous scheme
US20030005350A1 (en) 2001-06-29 2003-01-02 Maarten Koning Failover management system
JP2005018510A (en) 2003-06-27 2005-01-20 Hitachi Ltd Data center system and its control method
US7222340B2 (en) * 2004-01-27 2007-05-22 Research In Motion Limited Software-delivered dynamic persistent data
US7383463B2 (en) * 2004-02-04 2008-06-03 Emc Corporation Internet protocol based disaster recovery of a server
US20060023627A1 (en) 2004-08-02 2006-02-02 Anil Villait Computing system redundancy and fault tolerance
JP4671399B2 (en) * 2004-12-09 2011-04-13 株式会社日立製作所 Data processing system
US7664788B2 (en) * 2005-01-10 2010-02-16 Microsoft Corporation Method and system for synchronizing cached files
JP2006215954A (en) * 2005-02-07 2006-08-17 Hitachi Ltd Storage system and archive management method for storage system
US7783742B2 (en) * 2005-06-02 2010-08-24 Microsoft Corporation Dynamic process recovery in a distributed environment
US7765427B2 (en) * 2005-08-05 2010-07-27 Honeywell International Inc. Monitoring system and methods for a distributed and recoverable digital control system
US7793147B2 (en) * 2006-07-18 2010-09-07 Honeywell International Inc. Methods and systems for providing reconfigurable and recoverable computing resources
US7890662B2 (en) * 2007-08-14 2011-02-15 Cisco Technology, Inc. System and method for providing unified IP presence
US20090147702A1 (en) 2007-12-10 2009-06-11 Buddhikot Milind M Method and Apparatus for Forming and Configuring a Dynamic Network of Mobile Network Nodes
JP5192226B2 (en) 2007-12-27 2013-05-08 株式会社日立製作所 Method for adding standby computer, computer and computer system
US7917494B2 (en) * 2008-07-11 2011-03-29 Adobe Software Trading Company Limited System and method for a log-based data storage

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2478439A1

Also Published As

Publication number Publication date
EP2478439A1 (en) 2012-07-25
KR101479919B1 (en) 2015-01-07
JP2013505499A (en) 2013-02-14
US20110072122A1 (en) 2011-03-24
JP5697672B2 (en) 2015-04-08
KR20120054651A (en) 2012-05-30
US9569319B2 (en) 2017-02-14
CN102576324A (en) 2012-07-11

Similar Documents

Publication Publication Date Title
US9569319B2 (en) Methods for improved server redundancy in dynamic networks
US6920320B2 (en) Method and apparatus for stable call preservation
KR20070026327A (en) Redundant routing capabilities for a network node cluster
US10666554B2 (en) Inter-chassis link failure management system
WO2020057445A1 (en) Communication system, method, and device
WO2007048319A1 (en) A disaster recovery system and method of service controlling device in intelligent network
WO2012155629A1 (en) Network disaster recovery method and system
WO2022217786A1 (en) Cross-network communicaton method, apparatus, and system for multi-bus network, and storage medium
US10986015B2 (en) Micro server built-in switch uplink port backup mechanism
WO2016177098A1 (en) Conference backup method and device
US20180048487A1 (en) Method for handling network partition in cloud computing
US8559940B1 (en) Redundancy mechanisms in a push-to-talk realtime cellular network
WO2023284366A1 (en) Dbng-cp backup method and apparatus
US20240323707A1 (en) Network resilience
CN110603798A (en) Resilient consistency high availability in multiple single boards
CN114301763A (en) Distributed cluster fault processing method and system, electronic device and storage medium
CN117478488B (en) Cloud management platform switching system, method, equipment and medium
US20200322260A1 (en) Systems and methods for automatic traffic recovery after vrrp vmac installation failures in a lag fabric
US10277700B2 (en) Control plane redundancy system
KR101588715B1 (en) A Building Method of High-availability Mechanism of Medical Information Systems based on Clustering Algorism
JP2013192022A (en) Network device, link aggregation system and redundancy method therefor
CN108418716B (en) Network connection recovery method, device and system and readable storage medium
CN108199946B (en) Data forwarding method and communication system
US20210135981A1 (en) Spanning Tree Enabled Link Aggregation System
CN116450413A (en) Container cluster, management method and device, electronic equipment and storage medium

Legal Events

Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 201080041279.8; Country of ref document: CN)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10760830; Country of ref document: EP; Kind code of ref document: A1)
REEP Request for entry into the european phase (Ref document number: 2010760830; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2010760830; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2291/CHENP/2012; Country of ref document: IN)
WWE Wipo information: entry into national phase (Ref document number: 2012529806; Country of ref document: JP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20127009686; Country of ref document: KR; Kind code of ref document: A)