WO2018076972A1 - Failover method, device, and system - Google Patents

Failover method, device, and system

Info

Publication number
WO2018076972A1
Authority
WO
WIPO (PCT)
Prior art keywords
level node
node
target
disaster
backup
Prior art date
Application number
PCT/CN2017/102802
Other languages
English (en)
Chinese (zh)
Inventor
张书兵
孙艳
黄泽旭
黄凯耀
徐日东
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2018076972A1

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; network security protocols
    • H04L 9/40: Network security protocols
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/10: Architectures or entities
    • H04L 65/1016: IP multimedia subsystem [IMS]
    • H04L 65/1066: Session management
    • H04L 65/1073: Registration or de-registration

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a failover method, apparatus, and system.
  • DC: Data Center
  • IMS: IP Multimedia Subsystem
  • LTE: Long Term Evolution
  • For example, the service data in DC1 can be backed up in DC2, the service data in DC2 can be backed up in DC3, and the service data in DC3 can be backed up in DC1.
  • If DC1 fails, the other DCs (that is, DC2 and DC3) share the service of DC1; when the service data in DC1 is required, it is obtained from the backup DC of DC1 (that is, DC2). In other words, data of the failed DC is reached by accessing the backup DC across DC boundaries.
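The ring-style backup scheme described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the dictionary and function names are assumptions.

```python
# Each DC's service data is backed up in the "next" DC, forming a ring:
# DC1 -> DC2 -> DC3 -> DC1. Reads for a failed DC must cross DC boundaries.
BACKUP_DC = {"DC1": "DC2", "DC2": "DC3", "DC3": "DC1"}

def read_service_data(dc, failed_dcs):
    """Return the DC that serves reads for `dc`'s service data."""
    if dc in failed_dcs:
        # Cross-DC access: the data must be fetched from the backup DC,
        # which is the source of the packet-loss/surge problems described.
        return BACKUP_DC[dc]
    return dc
```

This makes visible the problem the invention targets: every read against a failed DC becomes a cross-DC access.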
  • Embodiments of the present invention provide a failover method, apparatus, and system, which can reduce problems such as packet loss or surge caused by data access across DCs.
  • An embodiment of the present invention provides a failover method, including: a first-level node acquires a target disaster-tolerant group identifier corresponding to a target user equipment, where the target disaster-tolerant group identifier indicates the correspondence between the primary DC and the backup DC where a second-level node is located (the first-level node is the front-end node of the second-level node), and the service data of the target user equipment is stored in both the primary DC and the backup DC; the first-level node receives a service request sent by the target user equipment; and if the second-level node in the primary DC fails, the first-level node switches the service request to the second-level node in the backup DC according to the target disaster-tolerant group identifier.
  • In this way, the first-level node (the front-end node) can, according to the target disaster-tolerant group identifier, directly switch the service request of the target user equipment to the second-level node in the backup DC, which already stores the service data of the target user equipment and can therefore execute the service request locally. This avoids having to access, across DCs, the service data required to execute the service request, thereby reducing problems such as packet loss or traffic surge caused by accessing data across DCs.
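The switching rule just described can be condensed into a short sketch. All names here (the identifier format "DCa-DCb" and the routing function) are illustrative assumptions, not specified by the source.

```python
# A disaster-tolerant group identifier maps to a (primary DC, backup DC)
# pair; the front-end (first-level) node routes a service request to the
# backup DC only when the second-level node in the primary DC has failed.
DISASTER_GROUPS = {"DC1-DC2": ("DC1", "DC2")}  # identifier -> (primary, backup)

def route_service_request(group_id, primary_failed):
    primary, backup = DISASTER_GROUPS[group_id]
    return backup if primary_failed else primary
```

Because the backup DC already holds the user's service data, the switched request needs no cross-DC data access.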
  • Specifically, the first-level node obtains the target disaster-tolerant group identifier corresponding to the target user equipment as follows: the first-level node receives a registration request sent by the target user equipment; the first-level node sends the registration request to the second-level node, so that the second-level node determines the target disaster-tolerant group identifier corresponding to the target user equipment; and the first-level node receives the target disaster-tolerant group identifier sent by the second-level node.
  • The second-level node determines the target disaster-tolerant group identifier corresponding to the target user equipment as follows: the second-level node determines, according to the registration request, one disaster recovery group from multiple disaster recovery groups as the target disaster recovery group of the second-level node, where each disaster recovery group includes a primary DC and a backup DC; the second-level node backs up the service data of the target user equipment in the primary DC and the backup DC of the target disaster recovery group; and the second-level node sends the target disaster-tolerant group identifier to the first-level node.
  • When the first-level node is an S-CSCF node and the second-level node is an AS, the second-level node sends the target disaster-tolerant group identifier to the first-level node as follows: the AS sends a registration response message to the S-CSCF node, where the registration response message includes a first private parameter, and the first private parameter carries the target disaster-tolerant group identifier.
  • The registration request may carry the DC information of the first-level node, where the DC information indicates the primary DC and the backup DC where the first-level node is located. In this case, the second-level node determines, according to the registration request, one disaster recovery group from the plurality of disaster recovery groups as its target disaster recovery group by using the primary DC of the first-level node as the primary DC of the second-level node and the backup DC of the first-level node as the backup DC of the second-level node.
  • When the first-level node is an S-CSCF node and the second-level node is an AS, the registration request includes a second private parameter, and the second private parameter carries the DC information of the S-CSCF node.
  • Alternatively, the second-level node determines, according to the registration request, one disaster recovery group from the plurality of disaster recovery groups as its target disaster recovery group by using its current DC as the primary DC of the second-level node and any DC other than the primary DC as the backup DC of the second-level node.
  • Alternatively, the second-level node determines the target disaster recovery group as follows: the second-level node receives special identifier information sent by the first-level node and uses the disaster recovery group corresponding to that special identifier information as the target disaster recovery group, where the second-level node stores the correspondence between the special identifier information and the target disaster recovery group.
  • The method further includes: if the second-level node in the primary DC has not failed, the first-level node sends the service request to the second-level node in the primary DC according to the disaster recovery group identifier.
  • The method further includes: the second-level node records the correspondence between the target disaster-tolerant group identifier and the target user equipment in the HSS. If the second-level node in the primary DC fails and the first-level node switches the service request to the second-level node in the backup DC according to the target disaster-tolerant group identifier, then after the switch the second-level node in the backup DC determines a new disaster-tolerant group identifier, where the correspondence between the primary DC and the backup DC indicated by the new identifier is the reverse of that indicated by the target disaster-tolerant group identifier; the second-level node in the backup DC then updates, in the HSS, the correspondence between the target disaster-tolerant group identifier and the target user equipment to the correspondence between the new disaster-tolerant group identifier and the target user equipment.
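The identifier-reversal rule above (the new identifier indicates the reversed primary/backup correspondence) can be sketched directly. The "DCa-DCb" identifier format is an assumption for illustration; the source only requires that the indicated correspondence be reversed.

```python
# After a switch, the backup DC becomes the acting primary, so the new
# disaster-tolerant group identifier swaps the roles of the two DCs.
def flipped_group_id(group_id):
    primary, backup = group_id.split("-")
    return f"{backup}-{primary}"
```

Flipping twice restores the original identifier, which matches the idea that a later switch back to the recovered DC reverses the correspondence again.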
  • That is, the first-level node switches the service request to the backup DC according to the target disaster-tolerant group identifier.
  • The method further includes: a GSC node monitors whether each level of node in each DC is faulty; if it detects that the second-level node in the primary DC has failed, the GSC node sends a capacity expansion instruction to the second-level node in the backup DC, where the expansion instruction instructs the second-level node in the backup DC to perform a capacity expansion operation, so that it can prepare the corresponding resources and process the corresponding service requests on behalf of the failed DC.
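The GSC monitoring-and-expansion step can be illustrated with a small sketch. The data shapes and the `send_expand` callback are assumptions; the real GSC interface is not specified here.

```python
# Illustrative GSC check: for each disaster group, if the second-level
# node in the primary DC is unhealthy, instruct its peer in the backup DC
# to expand capacity so it can absorb the failed DC's load.
def check_and_expand(node_status, groups, send_expand):
    """node_status: {dc_name: True if the second-level node there is healthy};
    groups: {group_id: (primary_dc, backup_dc)}."""
    for group_id, (primary, backup) in groups.items():
        if not node_status.get(primary, False):
            send_expand(backup)  # backup DC prepares resources for the extra load
```

A real GSC would run this periodically or on health-check events; here a single pass shows the decision rule.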
  • An embodiment of the present invention provides a first-level node, including: an acquiring unit, configured to acquire a target disaster-tolerant group identifier corresponding to a target user equipment, where the target disaster-tolerant group identifier indicates the correspondence between the primary DC and the backup DC where a second-level node is located, the first-level node is the front-end node of the second-level node, and the service data of the target user equipment is stored in the primary DC and the backup DC; a receiving unit, configured to receive a service request sent by the target user equipment; and a sending unit, configured to: if the second-level node in the primary DC fails, switch the service request to the second-level node in the backup DC according to the target disaster-tolerant group identifier.
  • The acquiring unit is further configured to receive a registration request sent by the target user equipment; the sending unit is further configured to send the registration request to the second-level node, so that the second-level node determines the target disaster tolerance group identifier corresponding to the target user equipment; and the acquiring unit is further configured to receive the target disaster tolerance group identifier sent by the second-level node.
  • The sending unit is further configured to: if the second-level node in the primary DC does not fail, send the service request to the second-level node in the primary DC according to the disaster tolerance group identifier.
  • An embodiment of the present invention provides a second-level node, including: a determining unit, configured to determine, according to a registration request sent by the first-level node, one disaster recovery group from the multiple disaster recovery groups as the target disaster recovery group of the second-level node.
  • Each disaster recovery group includes a primary DC and a backup DC.
  • the first-level node is the front-end node of the second-level node.
  • A backup unit is configured to back up the service data of the target user equipment in the primary DC and the backup DC in the target disaster recovery group, and a sending unit is configured to send the target disaster recovery group identifier to the first-level node.
  • The sending unit is specifically configured to send a registration response message to the S-CSCF node, where the registration response message includes a first private parameter, and the first private parameter carries the target disaster recovery group identifier.
  • The registration request carries the DC information of the first-level node, where the DC information indicates the primary DC and the backup DC where the first-level node is located; the determining unit is specifically configured to, according to the registration request, use the primary DC of the first-level node as the primary DC of the second-level node and the backup DC of the first-level node as the backup DC of the second-level node, so as to determine the target disaster recovery group.
  • Alternatively, the determining unit is specifically configured to use the current DC as the primary DC of the second-level node and any DC other than the primary DC as the backup DC of the second-level node, so as to determine the target disaster recovery group.
  • The second-level node further includes an acquiring unit, where the acquiring unit is configured to receive special identifier information sent by the first-level node; the determining unit is specifically configured to use the disaster recovery group corresponding to the special identifier information as the target disaster recovery group, where the second-level node stores the correspondence between the special identifier information and the target disaster recovery group.
  • The second-level node further includes a recording unit, where the recording unit is configured to record, in the HSS, the correspondence between the target disaster-tolerant group identifier and the target user equipment; the determining unit is further configured to determine a new disaster recovery group identifier after the service request is switched, where the correspondence between the primary DC and the backup DC indicated by the new identifier is the reverse of that indicated by the target disaster recovery group identifier; and the recording unit is further configured to update the recorded correspondence between the target disaster recovery group identifier and the target user equipment to the correspondence between the new disaster-tolerant group identifier and the target user equipment.
  • An embodiment of the present invention provides a first-level node, including a processor, a memory, a bus, and a communication interface; the memory is configured to store computer-executable instructions, the processor is connected to the memory through the bus, and the processor executes the computer-executable instructions stored in the memory to cause the first-level node to perform any one of the above failover methods.
  • An embodiment of the present invention provides a second-level node, including a processor, a memory, a bus, and a communication interface; the memory is configured to store computer-executable instructions, the processor is connected to the memory through the bus, and the processor executes the computer-executable instructions stored in the memory to cause the second-level node to perform any one of the above failover methods.
  • An embodiment of the present invention provides a GSC node, including: a monitoring unit, configured to monitor whether a node at each level in each DC fails; and a sending unit, configured to send, if it is detected that the second-level node in the primary DC is faulty, a capacity expansion instruction to the second-level node in the backup DC, where the expansion instruction instructs the second-level node in the backup DC to perform a capacity expansion operation.
  • An embodiment of the present invention provides a failover system, comprising the first-level node according to any one of the implementations of the above second aspect, and the second-level node according to any one of the implementations of the above third aspect.
  • the above failover system further includes the GSC node as described in the sixth aspect, the GSC node being connected to both the first level node and the second level node.
  • An embodiment of the present invention provides a computer storage medium for storing computer software instructions used by the above first-level node, and/or second-level node, and/or GSC node, including a program designed for the first-level node and/or the second-level node to perform any one of the above aspects.
  • the names of the first-level node, the second-level node, and the GSC node are not limited to the device itself. In actual implementation, these devices may appear under other names. As long as the functions of the respective devices are similar to the present invention, they are within the scope of the claims and the equivalents thereof.
  • FIG. 1 is a schematic diagram of an application scenario for performing failover in the prior art.
  • FIG. 2 is a schematic structural diagram 1 of a fault switching system according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram 2 of a fault switching system according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram 1 of interaction of a fault handover method according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram 2 of interaction of a fault handover method according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram 3 of interaction of a fault handover method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram 3 of a fault switching system according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram 4 of a fault switching system according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an NFV system according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a first-level node according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of a second-level node according to an embodiment of the present disclosure.
  • FIG. 12 is a first schematic structural diagram of a hardware structure of a first level node/second level node according to an embodiment of the present disclosure
  • FIG. 13 is a second schematic structural diagram of a hardware structure of a first-level node/second-level node according to an embodiment of the present invention.
  • first and second are used for descriptive purposes only, and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, features defining “first” and “second” may include one or more of the features either explicitly or implicitly. In the description of the present invention, "a plurality” means two or more unless otherwise stated.
  • The embodiment of the present invention provides a failover method, which is applicable to the failover system 100 shown in FIG. 2, where the failover system 100 includes at least a first-level node 21 and a second-level node 22, and the first-level node 21 is a front-end node of the second-level node 22.
  • the plurality of first level nodes 21 may be disposed in the same or different DCs.
  • the plurality of second level nodes 22 may also be disposed within the same or different DCs.
  • When the first-level node 21 is a P-CSCF node, the second-level node 22 may be an S-CSCF (Serving-Call Session Control Function) node at the back end of the P-CSCF node; when the first-level node 21 is an S-CSCF node, the second-level node 22 may be an AS (Application Server) at the back end of the S-CSCF node.
  • P-CSCF: Proxy-Call Session Control Function
  • AS: Application Server
  • Similarly, when the first-level node 21 is a RAN node, the second-level node 22 may be an MME (Mobility Management Entity) at the back end of the RAN node.
  • MME: Mobility Management Entity
  • The second-level node 22 may also be a forwarding device at the back end of the MME, for example, an SGW (Serving Gateway) or a PGW (Packet Data Network Gateway), and so on.
  • a P-CSCF node 1, an S-CSCF node 1, and an AS1 are disposed in the DC1;
  • a P-CSCF node 2, an S-CSCF node 2, and an AS2 are disposed in the DC2;
  • A P-CSCF node 3, an S-CSCF node 3, and an AS3 are disposed in the DC3.
  • The service data includes, for example, registration data and/or session data.
  • DC1 in FIG. 3 is taken as the main DC.
  • the service data of 20 user equipments is stored in DC1, that is, the 20 user equipments are all attributed to DC1.
  • When each user equipment registers, the service data of user equipment 1 through user equipment 10 can be backed up to DC2, and the service data of user equipment 11 through user equipment 20 can be backed up to DC3; each level of node assigns a disaster recovery group identifier to each user equipment that registers.
  • In this way, the first-level node 21 in each DC stores, for different user equipments, the disaster recovery group identifiers of the corresponding second-level nodes 22, and the service data of each user equipment is stored in the primary DC and the backup DC indicated by its disaster recovery group identifier.
  • Take the first-level node 21 shown in FIG. 2 as the P-CSCF node 1 shown in FIG. 3. When the P-CSCF node 1 receives a service request from user equipment 1 (that is, the target user equipment), the P-CSCF node 1 may determine, according to the obtained disaster-tolerant group identifier corresponding to user equipment 1 (that is, the target disaster-tolerant group identifier), that for the second-level node 22 (that is, the back-end S-CSCF node of the P-CSCF node 1) the primary DC is DC1 and the backup DC is DC2. If the S-CSCF node in DC1 fails, the P-CSCF node 1 can directly switch the service request to the S-CSCF node 2 in the backup DC (that is, DC2) according to the target disaster tolerance group identifier; since DC2 itself stores the service data of user equipment 1, the S-CSCF node 2 in DC2 can directly use that service data to execute the service request.
  • It can be seen that the first-level node 21 can directly switch the service request of the target user equipment to the second-level node 22 in the backup DC according to the target disaster-tolerant group identifier, and that node executes the service request. This avoids having to access, across DCs, the service data required to execute the service request of the target user equipment as in the prior art, which in turn reduces problems such as packet loss or traffic surge caused by cross-DC data access.
  • the disaster recovery group identifier may be a domain name or a floating service IP address
  • When the failover system 100 is applied to a CS (Circuit Switched)/PS (Packet Switched) network, the foregoing disaster-tolerant group identifier may be a GT code, such as an MSC (Mobile Switching Center) number or an MME number; this embodiment does not impose any limitation on this.
  • The following description takes the case where the P-CSCF node is the first-level node 21 and the S-CSCF node is the second-level node 22, and the case where the S-CSCF node is the first-level node 21 and the AS is the second-level node 22, as examples.
  • the method for obtaining the target disaster-tolerant group identifier corresponding to the target user equipment in the registration process of the user equipment is as follows. As shown in FIG. 4, the method includes:
  • the P-CSCF node 1 receives a first registration request sent by the target user equipment.
  • When the target user equipment needs to register, it may select the corresponding P-CSCF node, for example, the P-CSCF node 1 in DC1, according to a preset policy, and send the first registration request to the selected P-CSCF node 1.
  • the P-CSCF node 1 sends the first registration request to the S-CSCF node 1.
  • the first registration request may be sent to the default S-CSCF node.
  • Alternatively, the foregoing first registration request may be sent to the S-CSCF node in the same DC (that is, DC1), namely the S-CSCF node 1, so that cross-DC interaction between the first-level node and the second-level node can be reduced.
  • The S-CSCF node 1 determines, according to the first registration request, one disaster tolerance group from a plurality of disaster recovery groups (for example, N disaster recovery groups, N > 1) as the first target disaster tolerance group corresponding to the target user equipment.
  • Each disaster recovery group includes one primary DC and one backup DC. If the current IMS network includes three DCs, the IMS network includes six disaster recovery groups, namely: DC1-DC2, DC1-DC3, DC2-DC1, DC2-DC3, DC3-DC1, and DC3-DC2.
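The six groups listed above are simply the ordered (primary, backup) pairs over three DCs; in general, N DCs yield N*(N-1) groups. A sketch (the identifier format is assumed):

```python
from itertools import permutations

# Enumerate every ordered (primary, backup) pair of distinct DCs.
def disaster_groups(dcs):
    return [f"{p}-{b}" for p, b in permutations(dcs, 2)]
```

For `["DC1", "DC2", "DC3"]` this yields exactly the six groups named in the text.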
  • The first registration request may use the PATH parameter to carry the DC information of the first-level node, where the DC information indicates the primary DC and the backup DC where the first-level node, that is, the P-CSCF node 1, is located.
  • In this way, the S-CSCF node 1 may, according to the foregoing first registration request, use the primary DC of the P-CSCF node 1 as its own primary DC and the backup DC of the P-CSCF node 1 as its own backup DC, so as to determine the first target disaster recovery group. For example, if the primary DC of the P-CSCF node 1 is DC1 and its backup DC is DC2, then the primary DC of the S-CSCF node 1 is also DC1 and its backup DC is also DC2.
  • In this case, the P-CSCF node 1 and the S-CSCF node 1 can be switched together to the same standby DC (that is, DC2), and the subsequent P-CSCF node 2 and S-CSCF node 2 do not need to interact.
  • Alternatively, the second-level node (that is, the S-CSCF node 1) can use its current DC as the primary DC of the second-level node and any DC other than the primary DC as the backup DC of the second-level node, so as to determine the first target disaster-tolerant group. For example, the DC where the S-CSCF node 1 is currently located is DC1, that is, DC1 is the primary DC of the S-CSCF node 1, and the backup DC of the S-CSCF node 1 may be any DC other than DC1. Because the P-CSCF node 1 is generally connected to the S-CSCF node in its own DC (that is, the S-CSCF node 1), if DC1 is also the primary DC of the S-CSCF node 1, then at least in the normal traffic flow in which the primary DC has not failed, the S-CSCF node 1 and the P-CSCF node 1 will not generate cross-DC access operations.
  • the S-CSCF node 1 backs up the service data of the target user equipment in the primary DC and the backup DC in the first target disaster tolerance group.
  • In the embodiment of the present invention, data backup is performed at the granularity of the user equipment, so that when a certain network element in a DC fails, the different user equipments attributed to that network element can be migrated to different backup DCs instead of migrating all user equipments to the same backup DC, thereby reducing the load pressure on each backup DC.
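Per-user-equipment backup granularity can be sketched as spreading a network element's users over the remaining DCs. The round-robin assignment here is an illustrative assumption; the source only requires that users can be distributed across different backup DCs.

```python
# Spread the users homed in `primary_dc` over the other DCs, so a failure
# of their network element does not shift the whole load onto one backup DC.
def assign_backup_dcs(user_ids, primary_dc, all_dcs):
    candidates = [dc for dc in all_dcs if dc != primary_dc]
    return {u: candidates[i % len(candidates)] for i, u in enumerate(user_ids)}
```

With three DCs and DC1 as the home, users alternate between DC2 and DC3, matching the FIG. 3 example where user equipments 1-10 back up to DC2 and 11-20 to DC3 (by range rather than round-robin).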
  • the S-CSCF node 1 records the correspondence between the first target disaster tolerance group identifier and the target user equipment in the HSS (Home Subscriber Server).
  • the network element that interacts with the S-CSCF node 1 can obtain the correspondence between the primary and backup DCs of the S-CSCF node 1 corresponding to the target user equipment from the HSS.
  • the S-CSCF node 1 sends the first target disaster tolerance group identifier to the P-CSCF node 1.
  • For example, the S-CSCF node 1 may carry the Service-Route header field in the 200 OK message and transmit the first target disaster tolerance group identifier in the Service-Route, so that the first-level node (the P-CSCF node 1) obtains the target disaster-tolerant group identifier of the second-level node corresponding to the target user equipment.
  • Subsequently, the first-level node can switch the service request of the target user equipment to the second-level node in the backup DC according to the target disaster-tolerant group identifier; since the second-level node in the backup DC stores the service data of the target user equipment, it can execute the service request without performing a cross-DC operation.
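The text says the identifier travels in the 200 OK's Service-Route header but does not fix an encoding; the sketch below assumes it rides as a hypothetical URI parameter named `dr-group` (both the parameter name and the parsing are illustrative assumptions).

```python
# Pull an assumed "dr-group" parameter out of a Service-Route header value.
def extract_group_id(service_route):
    for part in service_route.split(";"):
        if part.startswith("dr-group="):
            return part.split("=", 1)[1]
    return None  # identifier not present
```

A real implementation would use a proper SIP parser; the point is only that the first-level node recovers the identifier from the registration response.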
  • Similarly, the P-CSCF node 1 can obtain the disaster recovery group identifier corresponding to each user equipment that registers.
  • In addition, the S-CSCF node 1 can also download the subscription information of the target user equipment from the HSS; if the subscription information indicates that third-party registration is required, the S-CSCF node 1 sends a third-party registration request to the AS1.
  • If the S-CSCF node is the first-level node 21 and the AS is the second-level node 22, the method shown in FIG. 4 for the first-level node 21 to obtain the target disaster-tolerant group identifier corresponding to the target user equipment further includes the following steps 201-205:
  • the S-CSCF node 1 sends a second registration request to AS1.
  • The AS1 determines, according to the second registration request, one disaster recovery group as the second target disaster tolerance group corresponding to the target user equipment.
  • In step 201, when the S-CSCF node 1 sends the registration request to the AS1, the existing standard may be extended and a second private parameter may be defined in the second registration request; for example, a contact parameter defined in the second registration request serves as the second private parameter, and the second private parameter can carry the DC information of the S-CSCF node 1 (that is, the first-level node), thereby implementing the delivery of DC information between the S-CSCF node 1 and the AS1.
  • In this way, the AS1 may, according to the second registration request, use the primary DC of the S-CSCF node 1 as its own primary DC and the backup DC of the S-CSCF node 1 as its own backup DC, so as to determine the second target disaster recovery group.
  • In this case, the active-standby DC relationships of the P-CSCF node, the S-CSCF node, and the AS are the same, so that cross-DC access operations can be avoided to the greatest extent.
  • Alternatively, the AS1 may use its current DC as the primary DC of the second-level node and any DC other than the primary DC as the backup DC of the second-level node, so as to determine the second target disaster tolerance group.
  • Alternatively, the iFC criterion may carry special identifier information, for example, a predefined special domain name. The S-CSCF node 1 obtains the special domain name carried in the iFC criterion and then sends it to the AS1 in the Request-URI or Route header field of the registration message. The AS1, that is, the second-level node, stores the correspondence between the special domain name and the target disaster-tolerant group; therefore, the AS1 can use the disaster-tolerant group corresponding to the received special domain name as the second target disaster-tolerant group, so that the active and standby DCs corresponding to the different user equipments in these special group services are the same.
  • Certainly, the foregoing special identifier information may also be a predefined string or any other identifier, as long as the correspondence between the special identifier information and the target disaster-tolerant group is stored in the second-level node. It can be understood that a person skilled in the art may set the special identifier information according to the actual application; the embodiment of the present invention imposes no limitation on this.
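A minimal sketch of the correspondence described above, with illustrative domain names and group identifiers (all values are assumptions for the example), could look like:

```python
# Sketch: the second-level node (AS) keeps a table mapping a special
# identifier carried in the iFC criterion (here a predefined domain
# name) to a disaster-tolerant group, so that all user equipments in
# the same special group service share the same active/standby DCs.
# Domain names and group ids below are illustrative.

SPECIAL_ID_TO_GROUP = {
    "group1.ims.example.com": {"group_id": "drg-7", "primary": "DC1", "backup": "DC2"},
    "group2.ims.example.com": {"group_id": "drg-8", "primary": "DC2", "backup": "DC1"},
}

def select_target_group(request_uri_domain: str):
    """Return the disaster-tolerant group bound to the special domain,
    or None if the request carries no known special identifier."""
    return SPECIAL_ID_TO_GROUP.get(request_uri_domain)

grp = select_target_group("group1.ims.example.com")
assert grp["primary"] == "DC1" and grp["backup"] == "DC2"
```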
  • the AS1 backs up the service data of the target user equipment in the primary DC and the backup DC in the second target disaster tolerance group.
  • the AS1 records the correspondence between the identifier of the second target disaster tolerance group and the target user equipment in the HSS.
  • the AS1 sends the second target disaster tolerance group identifier to the S-CSCF node 1.
  • Similarly, the SIP (Session Initiation Protocol) and 3GPP (3rd Generation Partnership Project) protocols do not define, between the S-CSCF node and the AS, a specific header field that enables the AS to pass its own information, such as the second target disaster-tolerant group identifier, to the S-CSCF node. Therefore, a first private parameter can be defined in the 200 OK message (that is, the registration response message); for example, the contact parameter in the 200 OK message serves as the first private parameter, and this parameter carries the target disaster-tolerant group identifier of the AS to the S-CSCF node.
  • Alternatively, the target disaster-tolerant group identifier of the AS may be carried to the S-CSCF node in the contact header field of a redirect message, without changing the existing standard.
  • In this case, the S-CSCF node records the received target disaster-tolerant group identifier of the AS and carries it in a redirect message sent to the AS; finally, the AS sends a 200 OK message to the S-CSCF node to end the registration process.
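On the response side, the S-CSCF node only needs to pull the identifier back out of the contact header. The sketch below assumes a private `+drg-id` parameter (an illustrative name; SIP/3GPP define no such field) and, as above, uses simple string handling rather than a full SIP parser.

```python
# Sketch: S-CSCF side extraction of the AS's target disaster-tolerant
# group identifier from a private contact header parameter. "+drg-id"
# is an assumed, non-standard parameter name.
from typing import Optional

def extract_group_id(contact_header: str) -> Optional[str]:
    """Return the disaster-tolerant group id, or None if absent."""
    for param in contact_header.split(";")[1:]:
        name, _, value = param.partition("=")
        if name.strip() == "+drg-id":
            return value.strip('"')
    return None

ok_contact = '<sip:as1@ims.example.com>;+drg-id="drg-7"'
assert extract_group_id(ok_contact) == "drg-7"
assert extract_group_id("<sip:as1@ims.example.com>") is None
```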
  • the S-CSCF node 1 sends a 200 OK message to the target user equipment through the P-CSCF node 1 to complete the registration process.
  • So far, during the registration process of the target user equipment, the first-level node 21 serving as the front-end network element has acquired the target disaster-tolerant group identifier of the second-level node 22 corresponding to the target user equipment.
  • After the target user equipment completes registration, it can initiate a service request, for example a call request, to P-CSCF node 1 in the primary DC. Since P-CSCF node 1 (that is, the first-level node) has obtained, during the above registration process, the target disaster-tolerant group identifier of the S-CSCF node (that is, the second-level node), the P-CSCF node can directly send the service request, according to the target disaster-tolerant group identifier, to S-CSCF node 1 in the primary DC indicated by that identifier. Because the service data of the target user equipment is already stored in S-CSCF node 1 in the primary DC, S-CSCF node 1 can directly execute the service request according to the service data.
  • If the P-CSCF node detects that S-CSCF node 1 in the primary DC is faulty, it may send the service request, according to the target disaster-tolerant group identifier, to S-CSCF node 2 in the backup DC indicated by that identifier. Since the service data of the target user equipment has been backed up in S-CSCF node 2 in the backup DC during the above registration process, S-CSCF node 2 can directly execute the service request according to the service data, thereby avoiding data access operations across DCs.
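The routing rule described above — try the second-level node in the primary DC and, on failure, switch to the backup DC named by the same identifier — can be sketched as follows (all names and the health-check/send callbacks are illustrative):

```python
# Sketch of the first-level node's failover routing rule. Because the
# backup DC already holds the user's backed-up service data, switching
# to it requires no cross-DC data access.
from dataclasses import dataclass

@dataclass
class DisasterGroup:
    group_id: str
    primary_dc: str
    backup_dc: str

def route_request(group: DisasterGroup, is_alive, send):
    """Route to the second-level node in the primary DC if alive,
    otherwise fail over to the backup DC."""
    if is_alive(group.primary_dc):
        return send(group.primary_dc)
    # Primary-DC node faulty: switch to the backup DC.
    return send(group.backup_dc)

group = DisasterGroup("drg-7", "DC1", "DC2")
# Simulate a fault in DC1: the request lands in DC2.
assert route_request(group, lambda dc: dc != "DC1", lambda dc: dc) == "DC2"
```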
  • The above example is described by taking the case in which the S-CSCF node is a first-level node and the AS is a second-level node.
  • the foregoing failover method may be performed in a steady state call process, or may be performed in a new call process, which is not limited in this embodiment of the present invention.
  • The above failover method will now be described in detail by taking as an example the sequential faults of the calling S-CSCF node and the called S-CSCF node in a new call process. Specifically, as shown in FIG. 5, the method includes:
  • the calling P-CSCF node receives a new call service request sent by the target user equipment, that is, an invite request.
  • During the registration process of the target user equipment, the calling P-CSCF node has acquired the target disaster-tolerant group identifier of the second-level node, that is, the calling S-CSCF node. The identifier indicates the primary DC and the backup DC where the calling S-CSCF node is located; for example, the calling S-CSCF node in the primary DC is calling S-CSCF node 1, and the calling S-CSCF node in the backup DC is calling S-CSCF node 2. Normally, the calling P-CSCF node can directly send the above invite request to calling S-CSCF node 1; when the calling P-CSCF node detects that calling S-CSCF node 1 has failed, the following step 302 is performed.
  • The calling P-CSCF node sends the invite request, according to the target disaster-tolerant group identifier of the calling S-CSCF node, to calling S-CSCF node 2 in the backup DC indicated by that identifier.
  • Since the service data of the target user equipment has been backed up in the backup DC, calling S-CSCF node 2 in the backup DC can directly process the above invite request according to the service data.
  • the calling S-CSCF node 2 sends the above invite request to the I-CSCF node.
  • the I-CSCF node queries the HSS for the target disaster tolerant group identifier of the called S-CSCF node.
  • The I-CSCF node is an entry point of the IMS network. During a call, a call entering the IMS network is first routed to the I-CSCF node; the I-CSCF node then obtains, from the HSS, the address of the S-CSCF node with which the user equipment is registered, and routes the service request to the corresponding S-CSCF node.
  • After receiving the invite request sent by calling S-CSCF node 2, the I-CSCF node queries the HSS for the target disaster-tolerant group identifier of the second-level node, that is, the called S-CSCF node. The identifier indicates the primary DC and the backup DC where the called S-CSCF node is located; for example, the called S-CSCF node in the primary DC is called S-CSCF node 1, and the called S-CSCF node in the backup DC is called S-CSCF node 2.
  • Normally, the I-CSCF node can directly send the above invite request to called S-CSCF node 1; when the I-CSCF node detects that called S-CSCF node 1 has failed, the following step 305 is performed.
  • the I-CSCF node sends the above invite request to the backup DC indicated by the target disaster tolerant group identifier according to the target disaster tolerant group identifier of the called S-CSCF node.
  • Since the service data of the target user equipment has been backed up in the backup DC, called S-CSCF node 2 in the backup DC can directly process the above invite request according to the service data.
  • the called S-CSCF node 2 sends the updated new disaster tolerant group identifier of the called S-CSCF node to the calling S-CSCF node 2.
  • the calling S-CSCF node 2 sends the updated new disaster tolerant group identifier of the calling S-CSCF node to the calling P-CSCF node.
  • In order that a subsequent network element directly interacting with the called S-CSCF node 2, for example the calling S-CSCF node 2, can send a received service request (for example an update message or a bye message) directly to the non-faulty called S-CSCF node 2, in step 306 the called S-CSCF node 2 sends the updated new disaster-tolerant group identifier of the called S-CSCF node to the calling S-CSCF node 2 through the I-CSCF node.
  • Similarly, in order that the calling P-CSCF node can directly send a service request to the non-faulty calling S-CSCF node 2, the active/standby DC relationship indicated by the target disaster-tolerant group identifier of the calling S-CSCF node is switched, and the updated new disaster-tolerant group identifier of the calling S-CSCF node is obtained.
  • Further, through the re-registration process of the target user equipment, the target disaster-tolerant group identifier recorded in the surrounding network elements may be updated to the new disaster-tolerant group identifier by using a standard protocol, so that when a neighbouring network element needs to interact with the second-level node, it can directly interact, according to the new disaster-tolerant group identifier, with the second-level node in the non-failed backup DC.
  • the re-registration process of the target user equipment is as shown in FIG. 6 , and the re-registration process includes:
  • the target user equipment initiates a re-registration request to the P-CSCF node.
  • the P-CSCF node initiates a re-registration request of the target user equipment to the I-CSCF node.
  • the I-CSCF node sends the re-registration request to the S-CSCF node 2 in the backup DC according to the target disaster tolerant group identifier of the S-CSCF node.
  • The S-CSCF node 2 instructs the HSS to update the recorded correspondence between the target disaster-tolerant group identifier and the target user equipment to the correspondence between the new disaster-tolerant group identifier and the target user equipment.
  • That is, the second-level node in the backup DC (the above S-CSCF node 2) can update, in the HSS, the target disaster-tolerant group identifier of the second-level node (that is, the identifier of the S-CSCF node already recorded in step 105) to the new disaster-tolerant group identifier, so that a network element that interacts with the second-level node can obtain from the HSS the new disaster-tolerant group identifier corresponding to the target user equipment, and interact with the second-level node in the DC indicated by the new disaster-tolerant group identifier.
  • the S-CSCF node 2 carries the new disaster tolerance group identifier in a 200 OK message and sends the identifier to the P-CSCF node.
  • Subsequently, the P-CSCF node can directly send the service request of the target user equipment to the non-failed S-CSCF node 2 according to the new disaster-tolerant group identifier. Because the service data of the target user equipment is already stored in the DC where S-CSCF node 2 is located, that is, the backup DC indicated by the original target disaster-tolerant group identifier, S-CSCF node 2 can directly execute the service request according to the service data. This avoids data access operations across DCs and reduces the risk of packet loss or surges caused by cross-DC data access.
  • the P-CSCF node sends a 200 OK message to the target user equipment to complete the registration process.
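Step 404 above amounts to overwriting one binding in the HSS. A minimal sketch, with a plain dictionary standing in for the HSS subscriber store (an assumption made purely for brevity), is:

```python
# Sketch of the HSS-side update during re-registration: replace the
# recorded target disaster-tolerant group identifier for the user with
# the new one, so neighbours later querying the HSS reach the
# second-level node in the non-failed DC directly. The dict stands in
# for the HSS subscriber store; keys/ids are illustrative.

hss = {"user-001": "drg-7"}  # user equipment -> disaster-tolerant group id

def update_group_binding(hss_store: dict, user: str, new_group_id: str) -> None:
    """Overwrite the user's disaster-tolerant group binding in the HSS."""
    hss_store[user] = new_group_id

update_group_binding(hss, "user-001", "drg-7b")
assert hss["user-001"] == "drg-7b"
```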
  • The failover system 100 provided by the foregoing embodiments, and any of the foregoing failover methods, may be applied to an IMS network or to an LTE network; the embodiment of the present invention imposes no limitation on this. For example, when the failover system 100 is applied to an LTE network, as shown in FIG. 7, each DC may be provided with network elements such as one or more RAN (Radio Access Network) nodes, an MME (Mobility Management Entity), an SGW (Serving Gateway), or a PGW (Packet Data Network Gateway).
  • Taking the registration process of a user equipment in the EPC (Evolved Packet Core) network as an example, MME 1 allocates the corresponding SGW 1 and PGW 1 to the user equipment.
  • In this case, SGW 1 can serve as the first-level node and PGW 1 as the second-level node. Similar to the foregoing embodiments, PGW 1 can determine a target disaster-tolerant group from the N disaster-tolerant groups and send the target disaster-tolerant group identifier of PGW 1 to SGW 1 in the response message to the bearer setup request; SGW 1 then forwards the target disaster-tolerant group identifier of PGW 1 by using the attach response message.
  • The target disaster-tolerant group identifier may be a floating service IP address, which becomes effective on the backup DC indicated by the target disaster-tolerant group identifier after the primary DC fails.
  • In addition, SGW 1 can save the target disaster-tolerant group identifier of PGW 1 locally. Subsequently, when the user equipment initiates a service request such as data or voice, once SGW 1 detects that PGW 1 is faulty, it switches the service of the user equipment, according to the target disaster-tolerant group identifier of PGW 1, to the backup DC (for example, DC2) indicated by that identifier, and the service request is executed by PGW 2 in DC2. Because the service data of the user equipment has been backed up in DC2, switching the service of the user equipment to DC2 causes no cross-DC data access operations, thereby reducing the risk of packet loss or surges caused by cross-DC data access.
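The floating-service-IP behaviour described above can be sketched as a small state object: the address itself never changes across the switch, only the DC on which it is active (the address and DC names below are illustrative):

```python
# Sketch: a floating service IP used as the target disaster-tolerant
# group identifier. After the primary DC fails, the same IP becomes
# effective on the backup DC, so the SGW keeps using one address.
# Values are illustrative.

class FloatingServiceIP:
    def __init__(self, ip: str, primary_dc: str, backup_dc: str):
        self.ip = ip
        self.active_dc = primary_dc
        self.backup_dc = backup_dc

    def fail_over(self) -> None:
        """Activate the floating IP on the backup DC after a fault."""
        self.active_dc = self.backup_dc

vip = FloatingServiceIP("10.0.0.9", "DC1", "DC2")
vip.fail_over()
assert vip.active_dc == "DC2" and vip.ip == "10.0.0.9"
```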
  • The above example is described by taking the case in which SGW 1 is the first-level node and PGW 1 is the second-level node; certainly, the first-level node and the second-level node may be any network elements in the EPC network, which is not limited in this embodiment of the present invention.
  • Further, a GSC (Global Service Control) node 23 may be added to the failover system 100. The GSC node 23 is connected to each DC in the failover system 100 and can be used to send an elastic scale-out indication to the second-level node 22 in the backup DC that is preparing the disaster-tolerant takeover, so that the second-level node 22 in the backup DC can prepare the corresponding resources and process the corresponding service requests on behalf of the second-level node in the failed DC.
  • The NFV (Network Functions Virtualization) system includes functional nodes such as the network functions virtualization orchestrator (NFVO), the virtualized network function manager (VNFM), the virtualized infrastructure manager (VIM), the operations support system (OSS) or business support system (BSS), the element manager (EM), VNF nodes, and the network functions virtualization infrastructure (NFVI). The NFVO, the VNFM, and the VIM belong to the NFV management and orchestration domain (NFV-MANO).
  • The VNFM is responsible for lifecycle management of VNF instances, such as instantiation, expansion/contraction, query, update, and termination. The VIM is the management portal for infrastructure and resources, providing resource management for VNF instances, including providing the underlying infrastructure resources on which VNF instances run. The NFVO performs operations such as operation, management, and coordination of the VIM, and can be connected to all VIMs and VNFMs in the NFV system.
  • In the embodiment of the present invention, a GSC node 23 may be added to a conventional NFV system; the GSC node 23 may be deployed on a VNF, in the NFV-MANO, on an EM, or as an independent network node.
  • the GSC node 23 can monitor and maintain the state of the VNF, where the VNF can be regarded as any node within the DC, for example, the first level node or the second level node.
  • Specifically, an interface may be added between the GSC node 23 and the VNFM, so that after detecting the failure of the second-level node in the primary DC, the GSC node 23 instructs the VNFM to elastically scale out the second-level node in the backup DC. For example, in the foregoing example, the GSC node 23 can send to the VNFM an expansion instruction for calling S-CSCF node 2 (that is, the second-level node in the backup DC), so that the VNFM performs the capacity-expansion operation on the VNF where calling S-CSCF node 2 is located in the backup DC; calling S-CSCF node 2 can thus prepare the corresponding resources in advance and take over the service requests of the failed calling S-CSCF node 1.
  • Optionally, the GSC node 23 can carry the number of target user equipments in the foregoing expansion instruction, so that the VNFM can determine the specific expansion policy and the size of the expansion based on the number of target user equipments.
  • Alternatively, the GSC node 23 can carry, in the foregoing expansion instruction, the number of VMs (virtual machines) that need to be added, so that the VNFM can determine the specific expansion policy and the size of the expansion based on the number of VMs; the embodiment of the present invention imposes no limitation on this.
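The two variants of the expansion instruction can be illustrated with a small sizing function; the 500-UEs-per-VM capacity figure is purely an illustrative assumption, as the specification leaves the concrete policy to the VNFM:

```python
# Sketch: how a VNFM might derive the scale-out size from an expansion
# instruction that carries either the number of affected user
# equipments or an explicit number of VMs to add. UES_PER_VM is an
# assumed capacity figure, not from the specification.

UES_PER_VM = 500

def vms_to_add(num_target_ues: int = 0, num_vms: int = 0) -> int:
    """Pick the scale-out size from whichever field the instruction carries."""
    if num_vms:
        return num_vms
    # Round up: ceil(num_target_ues / UES_PER_VM) via floor division.
    return -(-num_target_ues // UES_PER_VM)

assert vms_to_add(num_target_ues=1200) == 3
assert vms_to_add(num_vms=2) == 2
```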
  • In addition, the GSC node 23 may also expose an interface to the NFVO, so as to provide the NFVO with information about the DCs where the VNFs are located, enabling the NFVO to orchestrate the overall network resources; this is not limited in this embodiment of the present invention.
  • In summary, the embodiment of the present invention provides a failover method, in which the first-level node serving as the front-end node obtains, during the registration process of the target user equipment, the target disaster-tolerant group identifier corresponding to the target user equipment. The target disaster-tolerant group identifier indicates the correspondence between the primary DC and the backup DC where the second-level node serving as the back-end node is located, and the service data of the target user equipment is stored in both the primary DC and the backup DC. After receiving the service request sent by the target user equipment, if the second-level node in the primary DC is faulty, the first-level node can directly switch the service request, according to the target disaster-tolerant group identifier, to the second-level node in the backup DC. Because the service data of the target user equipment is stored in the second-level node in the backup DC, the service request can be executed directly by the second-level node in the backup DC, thereby avoiding the cross-DC access to the service data required for executing the service request of the target user equipment, as occurs in the prior art, and in turn reducing problems such as packet loss or surges caused by cross-DC data access.
  • In order to implement the above functions, each network element, such as the first-level node 21, the second-level node 22, and the GSC node 23, includes corresponding hardware structures and/or software modules for performing the various functions.
  • In combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is implemented in hardware or in computer software driving hardware depends on the specific application and the design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
  • The embodiment of the present invention may divide the first-level node 21 and the second-level node 22 into function modules according to the foregoing method examples. For example, each function module may be divided according to each function, or two or more functions may be integrated into one processing module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiment of the present invention is schematic and is only a logical function division; another division manner may be used in actual implementation.
  • FIG. 10 is a schematic diagram showing a possible structure of the first-level node 21 involved in the foregoing embodiments. The first-level node 21 includes an obtaining unit 31 and a sending unit 32. The obtaining unit 31 is configured to support the first-level node 21 in executing process 101 in FIG. 4 and processes 301 and 304 in FIG. 5; the sending unit 32 is configured to support the first-level node 21 in executing process 102 in FIG. 4 and processes 302 and 305 in FIG. 5. For all the relevant content of the steps involved in the foregoing method embodiments, reference may be made to the function descriptions of the corresponding function modules; details are not described herein again.
  • FIG. 11 shows a possible structural diagram of the second-level node 22 involved in the foregoing embodiments. The second-level node 22 includes an obtaining unit 41, a sending unit 42, a backup unit 43, a recording unit 44, and a determining unit 45. The obtaining unit 41 is configured to support the second-level node 22 in executing processes 101 and 201 in FIG. 4 and process 302 in FIG. 5; the sending unit 42 is configured to support the second-level node 22 in executing processes 106 and 206 in FIG. 4 and processes 303, 306, and 307 in FIG. 5; the backup unit 43 is configured to support the second-level node 22 in executing processes 104 and 203 in FIG. 4; the recording unit 44 is configured to support the second-level node 22 in executing processes 105 and 204 in FIG. 4 and process 308 in FIG. 5; the determining unit 45 is configured to support the second-level node 22 in executing processes 103 and 202 in FIG. 4. For all the relevant content of the steps involved in the foregoing method embodiments, reference may be made to the function descriptions of the corresponding function modules; details are not described herein again.
  • FIG. 12 shows a possible structural diagram of the first-level node 21/second-level node 22 involved in the above embodiments. The first-level node 21/second-level node 22 includes a processing module 1302 and a communication module 1303. The processing module 1302 is configured to control and manage the actions of the first-level node 21/second-level node 22; for example, the processing module 1302 is configured to support the node in executing processes 101-106 and 201-205 in FIG. 4, processes 301-307 in FIG. 5, processes 401-406 in FIG. 6, and/or other processes for the techniques described herein. The communication module 1303 is configured to support communication between the first-level node 21/second-level node 22 and other network entities. The first-level node 21/second-level node 22 may further include a storage module 1301 for storing the program code and data of the first-level node 21/second-level node 22.
  • The processing module 1302 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication module 1303 may be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1301 may be a memory.
  • When the processing module 1302 is a processor, the communication module 1303 is a communication interface, and the storage module 1301 is a memory, the node involved in the embodiment of the present invention may be the first-level node 21/second-level node 22 shown in FIG. 13.
  • the first level node 21/second level node 22 includes a processor 1312, a communication interface 1313, a memory 1311, and a bus 1314.
  • the communication interface 1313, the processor 1312, and the memory 1311 are connected to each other through a bus 1314.
  • The bus 1314 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 13, but this does not mean that there is only one bus or one type of bus.
  • The steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or may be implemented by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, which may be stored in a random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a core network interface device.
  • the processor and the storage medium may also exist as discrete components in the core network interface device.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.


Abstract

An embodiment of the present invention provides a failover method, device, and system, relating to the field of communications technologies, and capable of reducing problems such as packet loss or surges caused by cross-DC data access. The method comprises the following steps: a first-level node obtains a target disaster-tolerant group identifier corresponding to a target user equipment, the identifier indicating the correspondence between the primary DC and the backup DC where a second-level node is located (the first-level node being a front-end node of the second-level node), both the primary DC and the backup DC storing the service data of the target user equipment; the first-level node receives a service request sent by the target user equipment; and if the second-level node in the primary DC fails, the first-level node switches the service request to the second-level node in the backup DC according to the target disaster-tolerant group identifier.
PCT/CN2017/102802 2016-10-28 2017-09-21 Procédé, dispositif et système de basculement WO2018076972A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610964275.6A CN108011737B (zh) 2016-10-28 2016-10-28 一种故障切换方法、装置及系统
CN201610964275.6 2016-10-28


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109450604A (zh) * 2018-09-25 2019-03-08 国家电网有限公司客户服务中心 一种面向灾备的异地双活系统业务等级划分方法
CN112615903A (zh) * 2020-11-30 2021-04-06 中科热备(北京)云计算技术有限公司 一种云备份智能冗余方法
CN112954264A (zh) * 2019-12-10 2021-06-11 浙江宇视科技有限公司 一种平台备份保护方法及装置
CN113268378A (zh) * 2021-05-18 2021-08-17 Oppo广东移动通信有限公司 数据容灾方法、装置、存储介质及电子设备
CN113535464A (zh) * 2020-04-17 2021-10-22 海能达通信股份有限公司 一种容灾备份方法、服务器、集群系统和存储装置
CN114095342A (zh) * 2021-10-21 2022-02-25 新华三大数据技术有限公司 备份的实现方法及装置
CN115102872A (zh) * 2022-07-05 2022-09-23 广东长天思源环保科技股份有限公司 一种基于工业互联网标识解析的环保监测数据自证明系统
CN115277376A (zh) * 2022-09-29 2022-11-01 深圳华锐分布式技术股份有限公司 灾备切换方法、装置、设备及介质
CN115396296A (zh) * 2022-08-18 2022-11-25 中电金信软件有限公司 业务处理方法、装置、电子设备及计算机可读存储介质

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109995765A (zh) * 2019-03-11 2019-07-09 河北远东通信系统工程有限公司 一种ims网络agcf双归属注册方法
CN112199240B (zh) * 2019-07-08 2024-01-30 华为云计算技术有限公司 一种节点故障时进行节点切换的方法及相关设备
CN112311566B (zh) * 2019-07-25 2023-10-17 中国移动通信集团有限公司 业务容灾方法、装置、设备和介质
CN114079612B (zh) * 2020-08-03 2024-06-04 阿里巴巴集团控股有限公司 容灾系统及其管控方法、装置、设备、介质
CN112099990A (zh) * 2020-08-31 2020-12-18 新华三信息技术有限公司 一种容灾备份方法、装置、设备及机器可读存储介质
CN114422331B (zh) * 2022-01-21 2024-04-05 中国工商银行股份有限公司 容灾切换方法、装置及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101094237A (zh) * 2007-07-30 2007-12-26 中兴通讯股份有限公司 一种ip多媒体子系统中网元间的负荷分担方法
CN101447890A (zh) * 2008-04-15 2009-06-03 中兴通讯股份有限公司 一种下一代网络中改进的应用服务器容灾的系统及方法
CN101459533A (zh) * 2008-04-16 2009-06-17 中兴通讯股份有限公司 一种下一代网络中改进的应用服务器容灾的系统及方法
US7783618B2 (en) * 2005-08-26 2010-08-24 Hewlett-Packard Development Company, L.P. Application server (AS) database with class of service (COS)
CN102546544A (zh) * 2010-12-20 2012-07-04 中兴通讯股份有限公司 一种ims网络中应用服务器的组网结构

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461779B (zh) * 2014-11-28 2018-02-23 华为技术有限公司 一种分布式数据的存储方法、装置及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7783618B2 (en) * 2005-08-26 2010-08-24 Hewlett-Packard Development Company, L.P. Application server (AS) database with class of service (COS)
CN101094237A (zh) * 2007-07-30 2007-12-26 中兴通讯股份有限公司 一种ip多媒体子系统中网元间的负荷分担方法
CN101447890A (zh) * 2008-04-15 2009-06-03 中兴通讯股份有限公司 一种下一代网络中改进的应用服务器容灾的系统及方法
CN101459533A (zh) * 2008-04-16 2009-06-17 中兴通讯股份有限公司 一种下一代网络中改进的应用服务器容灾的系统及方法
CN102546544A (zh) * 2010-12-20 2012-07-04 中兴通讯股份有限公司 一种ims网络中应用服务器的组网结构

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109450604A (zh) * 2018-09-25 2019-03-08 国家电网有限公司客户服务中心 Service level classification method for a disaster-recovery-oriented geo-redundant active-active system
CN112954264B (zh) * 2019-12-10 2023-04-18 浙江宇视科技有限公司 Platform backup protection method and apparatus
CN112954264A (zh) * 2019-12-10 2021-06-11 浙江宇视科技有限公司 Platform backup protection method and apparatus
CN113535464A (zh) * 2020-04-17 2021-10-22 海能达通信股份有限公司 Disaster recovery backup method, server, cluster system and storage device
CN113535464B (zh) * 2020-04-17 2024-02-02 海能达通信股份有限公司 Disaster recovery backup method, server, cluster system and storage device
CN112615903A (zh) * 2020-11-30 2021-04-06 中科热备(北京)云计算技术有限公司 Intelligent redundancy method for cloud backup
CN113268378A (zh) * 2021-05-18 2021-08-17 Oppo广东移动通信有限公司 Data disaster recovery method, apparatus, storage medium and electronic device
CN114095342B (zh) * 2021-10-21 2023-12-26 新华三大数据技术有限公司 Backup implementation method and apparatus
CN114095342A (zh) * 2021-10-21 2022-02-25 新华三大数据技术有限公司 Backup implementation method and apparatus
CN115102872A (zh) * 2022-07-05 2022-09-23 广东长天思源环保科技股份有限公司 Self-certification system for environmental monitoring data based on industrial internet identifier resolution
CN115102872B (zh) * 2022-07-05 2024-02-27 广东长天思源环保科技股份有限公司 Self-certification system for environmental monitoring data based on industrial internet identifier resolution
CN115396296A (zh) * 2022-08-18 2022-11-25 中电金信软件有限公司 Service processing method and apparatus, electronic device, and computer-readable storage medium
CN115277376A (zh) * 2022-09-29 2022-11-01 深圳华锐分布式技术股份有限公司 Disaster recovery switchover method, apparatus, device and medium

Also Published As

Publication number Publication date
CN108011737A (zh) 2018-05-08
CN108011737B (zh) 2021-06-01

Similar Documents

Publication Publication Date Title
WO2018076972A1 (fr) Switchover method, device and system
US10743175B2 (en) Method, apparatus, and system for disaster recovery of IMS
CN110536330B (zh) UE migration method, apparatus and system, and storage medium
US8719617B2 (en) Method and device for realizing IP multimedia subsystem disaster tolerance
US11316708B2 (en) Gx session recovery for policy and charging rules function
US8713355B2 (en) Method and apparatus for managing communication services for user endpoint devices
WO2020063412A1 (fr) PDU session re-establishment method, apparatus and system, and storage medium
US11889330B2 (en) Methods and related devices for implementing disaster recovery
WO2015018248A1 (fr) Method, corresponding apparatus and system for recovering a called service of a terminal
US20140359340A1 (en) Subscriptions that indicate the presence of application servers
WO2006053502A1 (fr) Method for ensuring information consistency between network nodes
CN105592486A (zh) Disaster recovery method, network element and server
CN104185220B (zh) Failover method for IMS core network device, and edge access control device
US20150312387A1 (en) Methods and apparatus for resolving data inconsistencies in an ims network
CN108141440A (zh) SIP server with multiple identifiers
US10659427B1 (en) Call processing continuity within a cloud network
CN110024358B (zh) Access to services provided by a distributed data storage system
WO2018076973A1 (fr) Load adjustment method, apparatus and system
WO2009124439A1 (fr) Failover processing method for a serving call session control function
WO2015062538A1 (fr) Disaster recovery switchover method, device and system
EP3736696A1 (fr) Détection et réponse précoces de défaillances de session gx/rx
WO2024125269A1 (fr) Voice service establishment method, network device and storage medium
CN118042449A (zh) Network storage function fault detection and disaster recovery method, and related device
CN105049230A (zh) DNS-based vehicle disaster recovery method for a distributed multimedia subsystem, and vehicle disaster recovery system thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17866323; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17866323; Country of ref document: EP; Kind code of ref document: A1)