CN117527534A - Disaster recovery method, device and system for data center

Info

Publication number: CN117527534A
Authority: CN (China)
Prior art keywords: network element, information, function set, network, data center
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202210912562.8A
Other languages: Chinese (zh)
Inventors: 李卓明, 张继东
Current Assignee: Huawei Technologies Co Ltd (the listed assignee may be inaccurate)
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/28: Routing or path finding of packets in data switching networks using route fault recovery


Abstract

The application provides a method and an apparatus for data center disaster recovery. The method includes the following steps: a first network function set logic network element receives first information, where the first information indicates that a first instance access and mobility management network element terminates serving a terminal device, and the first instance access and mobility management network element is located in a second data center; the first network function set logic network element determines, according to the first information, that a second instance access and mobility management network element serves the terminal device, where the second instance access and mobility management network element is located in a first data center; and the first network function set logic network element determines a first correspondence, namely the correspondence between the AMF UE NGAP ID of the terminal device and the second instance access and mobility management network element. According to this scheme, when one data center is unavailable, the AMF instance serving the terminal device is replaced with an AMF instance of another data center without any additional interface signaling, so that data center disaster recovery is carried out smoothly while interruption or delay of service processing is avoided.

Description

Disaster recovery method, device and system for data center
Technical Field
The present application relates to the field of communications, and more particularly, to a method, apparatus, and system for disaster recovery in a data center.
Background
In the network architecture of a fifth generation new radio (5G NR) communication system, the next generation radio access network (NG-RAN) is connected to the 5G core network (5GC). On the user plane, the connection between the radio access network (RAN) and the user plane function (UPF) runs over a transport network capable of routing interworking, and the user plane connection for a session is established dynamically by means of endpoint addresses specified by the control plane, without the two sides pre-configuring a fixed connection for each other. By contrast, the control plane between the RAN and the access and mobility management function (AMF) requires both sides to configure each other's information in advance in order to establish a connection: the RAN needs to configure every AMF it connects to, including the AMF name, the public land mobile network identifier (PLMN ID), the list of supported network slices, and the interface address, and the AMF likewise needs the configuration information of each RAN in order to establish the signaling connection.
To increase the reliability of the RAN-to-AMF connection, an existing technique groups multiple AMFs with the same slice-support capability within an AMF region into one AMF set (AMF Set). All AMFs within an AMF Set can access the context information of the same user terminal device. If an AMF fails, or if the link between the RAN and an AMF fails, the RAN can reselect another AMF within the AMF Set through next generation application protocol (NGAP) messages so that it continues to serve the terminal device. To provide data center disaster recovery, the AMF instances of an AMF Set are usually deployed in data centers (DCs) at different geographic locations, and each RAN is simultaneously connected to at least two AMF instances of the AMF Set deployed in different DCs.
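The AMF Set behavior described above can be pictured with a small sketch. The following Python model is only an illustration under assumed names and fields (AmfInstance, AmfSet, and the reselection policy); it is not the patent's or 3GPP's actual data structure.

```python
# A minimal sketch of the AMF Set idea, assuming an in-memory model;
# class names, fields and the reselection policy are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class AmfInstance:
    name: str
    data_center: str          # DC where this instance is deployed
    interface_address: str
    available: bool = True


@dataclass
class AmfSet:
    region_id: int
    set_id: int
    instances: list[AmfInstance] = field(default_factory=list)

    def reselect(self, failed: AmfInstance) -> AmfInstance | None:
        """RAN-side reselection: pick another available instance of the same
        AMF Set, preferring one deployed in a different data center."""
        candidates = [i for i in self.instances if i.available and i is not failed]
        other_dc = [i for i in candidates if i.data_center != failed.data_center]
        return (other_dc or candidates or [None])[0]
```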
However, in the above technique, when all AMF instances deployed in a data center become unavailable, the AMF serving the UE is replaced with an AMF instance of another data center in the AMF set, the AMF UE NGAP ID (the next generation application protocol interface identifier on the access and mobility management network element side) is re-allocated to the UE, and the new value is then notified to the RAN serving the UE. Replacing the AMF instance therefore requires additional NGAP interface signaling; moreover, the UE can continue processing services (for example, location update or session establishment) only after it has successfully established a connection with the new AMF instance, so the replacement may interrupt or delay the ongoing service processing.
Therefore, a data center disaster recovery method and apparatus are needed that can ensure smooth disaster recovery without additional interface signaling and without interrupting or delaying service processing.
Disclosure of Invention
The application provides a method, a device and a system for disaster recovery of a data center, which can simplify interface configuration and reduce signaling overhead.
In a first aspect, a method for data center disaster recovery is provided. The method includes the following steps: a first network function set logic network element receives first information, where the first information indicates that a first instance access and mobility management network element terminates serving the terminal device, the first network function set logic network element is located in a first data center, the first instance access and mobility management network element is located in a second data center, and the second data center is different from the first data center; the first network function set logic network element determines, according to the first information, that a second instance access and mobility management network element serves the terminal device, where the second instance access and mobility management network element is located in the first data center, and the first and second instance access and mobility management network elements belong to the same access and mobility management network element set; and the first network function set logic network element determines a first correspondence, where the first correspondence is the correspondence between the AMF UE NGAP ID (the next generation application protocol interface identifier of the terminal device on the access and mobility management network element side) and the second instance access and mobility management network element.
According to the scheme disclosed in this application, when one data center is unavailable, the AMF instance serving the terminal device is replaced with an AMF instance of another data center without any additional interface signaling, so that data center disaster recovery is carried out smoothly while interruption or delay of service processing is avoided.
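As an illustration of the first aspect, the following sketch assumes that the first network function set logic network element keeps the first correspondence as an in-memory dictionary keyed by AMF UE NGAP ID; all class, method and field names are assumptions introduced for this example only, not the patent's implementation.

```python
# Illustrative sketch of the first aspect: on learning that an AMF instance in the
# other data center has stopped serving, bind each affected UE's existing
# AMF UE NGAP ID to a local instance of the same AMF set.
class FirstNfSetLogicElement:
    def __init__(self, data_center, local_amf_instances):
        self.data_center = data_center
        self.local_amf_instances = local_amf_instances
        # first correspondence: AMF UE NGAP ID -> serving AMF instance in this DC
        self.first_correspondence = {}

    def on_first_information(self, first_information):
        """Handle the indication that the first AMF instance has terminated
        serving; the AMF UE NGAP ID allocated earlier is kept unchanged."""
        for amf_ue_ngap_id in first_information["affected_amf_ue_ngap_ids"]:
            second_instance = self._select_local_instance()
            self.first_correspondence[amf_ue_ngap_id] = second_instance

    def _select_local_instance(self):
        # e.g. the least loaded local AMF instance (assumed selection policy)
        return min(self.local_amf_instances, key=lambda inst: inst.load)
```

In this picture the disaster recovery step is purely a local table update: the AMF UE NGAP ID allocated earlier is reused, which is why no additional NGAP signaling toward the RAN is needed.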
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the first network function set logic network element receives second information, where the second information is uplink information of the terminal device; and the first network function set logic network element sends the second information to the second instance access and mobility management network element according to the first correspondence. In this way, the uplink information of the terminal device can be quickly forwarded, according to the first correspondence, to the second instance access and mobility management network element that serves the terminal device; the AMF UE NGAP ID does not need to be re-allocated to the terminal device, no additional interface signaling is needed, service interruption or delay for the terminal device is avoided, and user experience is improved.
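Continuing the same hypothetical sketch, forwarding uplink information then reduces to a single lookup in the first correspondence; the function and field names below are assumptions for illustration.

```python
# Hypothetical sketch: deliver the second information (uplink) to whichever AMF
# instance the first correspondence currently points at, without re-allocating
# the AMF UE NGAP ID.
def forward_uplink(second_information: dict, first_correspondence: dict) -> None:
    amf_ue_ngap_id = second_information["amf_ue_ngap_id"]
    second_instance = first_correspondence[amf_ue_ngap_id]   # first correspondence lookup
    second_instance.handle_uplink(second_information)        # assumed instance API
```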
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the first network function set logic network element obtains the RAN UE NGAP ID, the next generation application protocol interface identifier of the terminal device on the radio access network side; and the first network function set logic network element establishes a second correspondence, where the second correspondence is the correspondence between the RAN UE NGAP ID and a first link of the first network function set logic network element.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the first network function set logic network element receives third information, where the third information is downlink information of the terminal device; and the first network function set logic network element sends the third information over the first link to the first radio access network according to the second correspondence, where the first radio access network is the radio access network serving the terminal device. In this way, the downlink information of the terminal device can be sent, according to the second correspondence, to the first radio access network serving the terminal device, so that service interruption for the terminal device is avoided and user experience is improved.
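A corresponding hypothetical sketch for the downlink direction, assuming the second correspondence is a dictionary from RAN UE NGAP ID to a link object with a send() method; these names are illustrative only and not defined by the application.

```python
# Hypothetical sketch of downlink handling via the second correspondence
# (RAN UE NGAP ID -> first link toward the serving RAN).
def forward_downlink(third_information: dict, second_correspondence: dict) -> None:
    ran_ue_ngap_id = third_information["ran_ue_ngap_id"]
    first_link = second_correspondence[ran_ue_ngap_id]   # link toward the first RAN
    first_link.send(third_information)
```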
With reference to the first aspect, in certain implementations of the first aspect, the first instance access and mobility management network element terminates serving the terminal device in at least one of the following cases: the link between the second data center and the first radio access network fails; the first instance access and mobility management network element fails; or the second data center fails or is shut down.
With reference to the first aspect, in certain implementation manners of the first aspect, the first network function set logic network element receives first information, including: the first network function set logical network element receives first information from a unified distributed database, the unified distributed database being used for sharing data between the first data center and the second data center.
With reference to the first aspect, in certain implementation manners of the first aspect, the first network function set logic network element receives first information, including: the first network function set logic network element receives the first information from a second network function set logic network element or a network repository network element, the second network function set logic network element being located in a second data center.
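The two ways of receiving the first information described above can be summarized in a small sketch; the database and notification interfaces shown here are assumptions made for illustration and are not defined by the application.

```python
# Illustrative sketch (not the patent's protocol) of the two described sources of
# the first information: a unified distributed database shared by the data
# centers, or a direct notification from the second network function set logic
# network element or the network repository network element.
def receive_first_information(source: str, shared_db=None, notification=None) -> dict:
    if source == "unified_distributed_db":
        # assumed key/value style read from the shared database
        return shared_db.get("first_information")
    if source in ("second_nf_set_element", "network_repository"):
        # assumed notification body carrying the first information
        return notification["first_information"]
    raise ValueError(f"unknown source: {source}")
```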
With reference to the first aspect, in certain implementations of the first aspect, the determining, by the first network function set logic network element, of the first correspondence includes: the first network function set logic network element obtains a third correspondence, where the third correspondence is the correspondence between the AMF UE NGAP ID and the first instance access and mobility management network element, and the third correspondence is determined by the second network function set logic network element; and the first network function set logic network element establishes the first correspondence according to the first information and the third correspondence. In this way, the first network function set logic network element can determine, from the first information and the third correspondence, that the terminal device identified by the AMF UE NGAP ID can no longer be served by the first instance access and mobility management network element, and can then determine the first correspondence according to the actual conditions of its data center (such as load conditions and the number of AMF instances), so that an AMF instance with better service quality can be selected for the terminal device, which helps improve the service performance of the terminal device.
With reference to the first aspect, in other implementations of the first aspect, the determining, by the first network function set logic network element, of the first correspondence includes: the first network function set logic network element receives fourth information, where the fourth information includes the first correspondence, and the first correspondence is determined by the second network function set logic network element; and the first network function set logic network element obtains the AMF UE NGAP ID according to the second information, and then obtains the first correspondence according to the AMF UE NGAP ID and the fourth information. In this way, the first network function set logic network element obtains a first correspondence that has already been determined by another network element and needs no additional processing of its own, which reduces service latency, avoids interruption or delay of the terminal device's services, and improves user experience.
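The two implementation options above (deriving the first correspondence locally from the third correspondence, or taking it from the fourth information) can be contrasted in one hypothetical helper; the dictionary shapes and the selection hook are assumptions for illustration only.

```python
# Hypothetical sketch contrasting the two described ways of obtaining the
# first correspondence.
def determine_first_correspondence(option: str,
                                   first_information=None,
                                   third_correspondence=None,
                                   fourth_information=None,
                                   select_local_instance=None) -> dict:
    if option == "derive_from_third_correspondence":
        # local derivation: pick a local AMF instance per affected UE
        affected = set(first_information["affected_amf_ue_ngap_ids"])
        return {ngap_id: select_local_instance()
                for ngap_id in third_correspondence
                if ngap_id in affected}
    if option == "from_fourth_information":
        # the peer network function set logic network element already decided the mapping
        return dict(fourth_information["first_correspondence"])
    raise ValueError(f"unknown option: {option}")
```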
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the first network function set logic network element sends fifth information to the unified distributed database, where the fifth information includes at least one of the first correspondence and the second correspondence. In this way, other network function set logic network elements can obtain these correspondences from the unified distributed database, so that the uplink and downlink information of the terminal device can be sent to the target network element, service interruption or delay for the terminal device is avoided, and user experience is improved.
With reference to the first aspect, in some implementations of the first aspect, the first link is a link newly added between the first network function set logic network element and the first radio access network, and the method further includes: the first network function set logic network element sends sixth information, where the sixth information includes interface information of the first link. In this way, a new link from the first network function set logic network element to the first radio access network can be added during data center disaster recovery, which improves service processing performance, avoids service interruption or delay for the terminal device, and improves user experience.
In a second aspect, a method for disaster recovery in a data center is provided. The method comprises the following steps: the second network function set logic network element determines that the first instance access and mobility management network element is terminated to serve the terminal equipment, and the second network function set logic network element and the first instance access and mobility management network element are positioned in a second data center; the second network function set logic network element sends first information, the first information is used for the first network function set logic network element to determine that the second instance access and mobility management network element serves the terminal equipment, the first network function set logic network element and the second instance access and mobility management network element are located in a first data center, and the second data center is different from the first data center.
According to the scheme disclosed in this application, when one data center is unavailable, the AMF instance serving the terminal device is replaced with an AMF instance of another data center without any additional interface signaling, so that data center disaster recovery is carried out smoothly while interruption or delay of service processing is avoided.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: when the second information is the initial uplink information of the terminal device, the second network function set logic network element allocates to the terminal device the AMF UE NGAP ID, that is, the next generation application protocol interface identifier on the access and mobility management network element side; the second network function set logic network element determines that the first instance access and mobility management network element serves the terminal device; and the second network function set logic network element establishes a third correspondence, where the third correspondence is the correspondence between the AMF UE NGAP ID and the first instance access and mobility management network element.
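A minimal sketch of this initial-uplink handling, assuming a simple counter for AMF UE NGAP ID allocation and load-based instance selection; both policies are assumptions made for this example, not rules stated in the application.

```python
# Illustrative sketch of the second aspect's initial handling: allocate the
# AMF UE NGAP ID, select the first AMF instance and record the third correspondence.
import itertools


class SecondNfSetLogicElement:
    def __init__(self, local_amf_instances):
        self.local_amf_instances = local_amf_instances
        self._id_allocator = itertools.count(1)
        # third correspondence: AMF UE NGAP ID -> first AMF instance
        self.third_correspondence = {}

    def on_initial_uplink(self, second_information):
        amf_ue_ngap_id = next(self._id_allocator)          # allocate AMF UE NGAP ID
        first_instance = min(self.local_amf_instances,
                             key=lambda inst: inst.load)   # assumed selection policy
        self.third_correspondence[amf_ue_ngap_id] = first_instance
        return amf_ue_ngap_id, first_instance
```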
With reference to the second aspect, in certain implementation manners of the second aspect, the method further includes: the second network function set logic network element receives second information, wherein the second information is uplink information of the terminal equipment; and the second network function set logic network element sends second information to the second instance access and mobility management network element according to the third corresponding relation.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: the second network function set logic network element determines that the first instance access and mobility management network element terminates serving the terminal device and determines that the second instance access and mobility management network element serves the terminal device; and the second network function set logic network element modifies the third correspondence into a first correspondence, where the first correspondence is the correspondence between the AMF UE NGAP ID and the second instance access and mobility management network element.
With reference to the second aspect, in certain implementation manners of the second aspect, the method further includes: the second network function set logic network element sends fourth information to the unified distributed database, wherein the fourth information comprises the first corresponding relation, and the unified distributed database is used for sharing data between the first data center and the second data center.
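The failover step on the second element's side, combining the modification of the third correspondence into the first correspondence with its publication through the unified distributed database, can be sketched as follows; the database put() API and the dictionary shapes are assumptions for illustration.

```python
# Hypothetical sketch: rewrite the third correspondence so it points at the
# second AMF instance (yielding the first correspondence) and publish it as the
# fourth information through the shared database so the peer data center can read it.
def fail_over(third_correspondence: dict, second_instance, shared_db) -> dict:
    first_correspondence = {ngap_id: second_instance
                            for ngap_id in third_correspondence}
    shared_db.put("first_correspondence", first_correspondence)  # fourth information
    return first_correspondence
```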
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: the second network function set logic network element obtains the RAN UE NGAP ID, the next generation application protocol interface identifier of the terminal device on the radio access network side; and the second network function set logic network element determines a fourth correspondence, where the fourth correspondence is the correspondence between the RAN UE NGAP ID and a second link of the second network function set logic network element.
With reference to the second aspect, in certain implementation manners of the second aspect, the method further includes: the second network function set logic network element receives third information, wherein the third information is downlink information of the terminal equipment; the second network function set logic network element sends third information from the second link to the first radio access network according to the fourth corresponding relation, and the first radio access network is a radio access network serving the terminal equipment.
With reference to the second aspect, in certain implementation manners of the second aspect, the method further includes: the second network function set logic network element transmits seventh information, wherein the seventh information comprises at least one of a third corresponding relation and a fourth corresponding relation.
With reference to the second aspect, in some implementations of the second aspect, the first instance access and mobility management network element terminates serving the terminal device in at least one of the following cases: the link between the second data center and the first radio access network fails; the first instance access and mobility management network element fails; or the second data center fails or is shut down.
With reference to the second aspect, in some implementations of the second aspect, the second network function set logic network element sends the first information, including: the second network function set logic network element sends the first information to the unified distributed database.
With reference to the second aspect, in other implementations of the second aspect, the sending, by the second network function set logic network element, the first information includes: the second network function set logic network element sends the first information to the first network function set logic network element or the network warehouse function network element.
In a third aspect, a device for data center disaster recovery is provided. The device includes: a transceiver unit, configured to receive first information, where the first information indicates that a first instance access and mobility management network element terminates serving the terminal device, the first network function set logic network element is located in a first data center, the first instance access and mobility management network element is located in a second data center, and the second data center is different from the first data center; and a processing unit, configured to determine, according to the first information, that a second instance access and mobility management network element serves the terminal device, where the second instance access and mobility management network element is located in the first data center, and the first and second instance access and mobility management network elements belong to the same access and mobility management network element set. The processing unit is further configured to determine a first correspondence, where the first correspondence is the correspondence between the AMF UE NGAP ID (the next generation application protocol interface identifier of the terminal device on the access and mobility management network element side) and the second instance access and mobility management network element.
According to the scheme disclosed in this application, when one data center is unavailable, the AMF instance serving the terminal device is replaced with an AMF instance of another data center; the AMF UE NGAP ID does not need to be re-allocated to the user terminal device and no additional interface signaling is needed, so that data center disaster recovery is carried out smoothly while interruption or delay of service processing is avoided.
With reference to the third aspect, in some implementations of the third aspect, the transceiver is further configured to receive second information, where the second information is uplink information of the terminal device; and the processing unit is also used for sending the second information to the second instance access and mobility management network element according to the first corresponding relation.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is further configured to obtain a next generation application protocol interface identifier RAN UE NGAP ID of the terminal device on a radio access network side; the processing unit is further configured to establish a second corresponding relationship, where the second corresponding relationship is a corresponding relationship between the RAN UE NGAP ID and the first link of the first network function set logic network element.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is further configured to receive third information, where the third information is downlink information of the terminal device; and the processing unit is further used for sending the third information from the first link to the first radio access network according to the second corresponding relation, wherein the first radio access network is a radio access network serving the terminal equipment.
With reference to the third aspect, in certain implementations of the third aspect, the first instance access and mobility management network element terminates serving the terminal device in at least one of the following cases: the link between the second data center and the first radio access network fails; the first instance access and mobility management network element fails; or the second data center fails or is shut down.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is specifically configured to receive the first information from a unified distributed database, where the unified distributed database is used for sharing data between the first data center and the second data center.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is specifically configured to receive the first information from a second network function set logic network element or a network repository network element, where the second network function set logic network element is located in the second data center.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is configured to obtain a third corresponding relationship, where the third corresponding relationship is a corresponding relationship between an AMF UE NGAP ID and a first instance access and mobility management network element, and the third corresponding relationship is determined by a second network function set logic network element; and the processing unit is used for establishing a first corresponding relation according to the first information and the third corresponding relation.
With reference to the third aspect, in other implementation manners of the third aspect, the transceiver unit is configured to receive fourth information, where the fourth information includes a first correspondence, and the first correspondence is determined by a second network function set logic network element; and the processing unit is used for acquiring the first corresponding relation according to the second information.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is further configured to send fifth information to the unified distributed database, where the fifth information includes at least one of the first correspondence and the second correspondence.
With reference to the third aspect, in some implementations of the third aspect, the first link is a link that the first network function set logic network element is newly added to the first radio access network, and the transceiver unit is further configured to send sixth information, where the sixth information includes interface information of the first link.
In a fourth aspect, a data center disaster recovery device is provided. The device includes: a processing unit, configured to determine that the first instance access and mobility management network element terminates serving the terminal device, where the second network function set logic network element and the first instance access and mobility management network element are located in a second data center; and a transceiver unit, configured to send first information, where the first information is used by the first network function set logic network element to determine that the second instance access and mobility management network element serves the terminal device, the first network function set logic network element and the second instance access and mobility management network element are located in a first data center, and the second data center is different from the first data center.
According to the scheme disclosed in this application, when one data center is unavailable, the AMF instance serving the terminal device is replaced with an AMF instance of another data center without any additional interface signaling, so that data center disaster recovery is carried out smoothly while interruption or delay of service processing is avoided.
With reference to the fourth aspect, in some implementations of the fourth aspect, the processing unit is further configured to allocate to the terminal device the AMF UE NGAP ID, that is, the next generation application protocol interface identifier on the access and mobility management network element side; and the processing unit is further configured to establish a third correspondence, where the third correspondence is the correspondence between the AMF UE NGAP ID and the first instance access and mobility management network element.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver is further configured to receive second information, where the second information is uplink information of the terminal device; and the processing unit is further used for sending the second information to the second instance access and mobility management network element according to the third corresponding relation.
With reference to the fourth aspect, in some implementations of the fourth aspect, the processing unit is further configured to determine that the second instance access and mobility management network element serves the terminal device; the processing unit is further configured to modify the third corresponding relationship into a first corresponding relationship, where the first corresponding relationship is a corresponding relationship between the AMF UE NGAP ID and the second instance access and mobility management network element.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver unit is further configured to send fourth information to a unified distributed database, where the fourth information includes the first correspondence, and the unified distributed database is used for sharing data between the first data center and the second data center.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver unit is further configured to obtain a next generation application protocol interface identifier RAN UE NGAP ID corresponding to the terminal device at the radio access network side; the processing unit is further configured to determine a fourth corresponding relationship, where the fourth corresponding relationship is a corresponding relationship between the RAN UE NGAP ID and the second link of the second network function set logic network element.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver is further configured to receive fifth information, where the fifth information is downlink information of the terminal device; and the transceiver unit is further used for transmitting the fifth information from the second link to the first radio access network according to the fourth corresponding relation, wherein the first radio access network is a radio access network serving the terminal equipment.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver unit is further configured to send seventh information, where the seventh information includes at least one of the third correspondence and the fourth correspondence.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first instance access and mobility management network element terminates serving the terminal device in at least one of the following cases: the link between the second data center and the first radio access network fails; the first instance access and mobility management network element fails; or the second data center fails or is shut down.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver unit is further configured to send the first information to a unified distributed database.
With reference to the fourth aspect, in other implementations of the fourth aspect, the transceiver unit is further configured to send the first information to a first network function set logical network element or a network repository function network element.
In a fifth aspect, a data center disaster recovery device is provided for implementing the above methods. The data center disaster recovery device may be the first network function set logic network element entity in the first aspect or the second aspect, or a device including the first network function set logic network element entity; alternatively, it may be the second network function set logic network element entity in the first aspect or the second aspect, or a device including the second network function set logic network element entity. The data center disaster recovery device includes corresponding modules, units, or means for implementing the above methods, and the modules, units, or means may be implemented by hardware, by software, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above.
With reference to the fifth aspect, in some possible implementations, the disaster recovery device of the data center may include a processing module and a transceiver module. The transceiver module, which may also be referred to as a transceiver unit, is configured to implement the transmitting and/or receiving functions of any of the above aspects and any possible implementation thereof. The transceiver module may be formed by a transceiver circuit, transceiver or communication interface. The processing module may be configured to implement the processing functions of any of the aspects described above and any possible implementation thereof. The processing module may be, for example, a processor.
With reference to the fifth aspect, in some possible implementations, the transceiver module includes a transmitting module and a receiving module, which are respectively configured to implement the transmitting and receiving functions in any one of the foregoing aspects and any possible implementation thereof.
In a sixth aspect, a data center disaster recovery device is provided, including a processor. The processor is coupled to a memory, and after reading instructions in the memory, performs the method in any one of the above aspects according to the instructions. The data center disaster recovery device may be the first network function set logic network element entity in the first aspect or the second aspect, or a device including the first network function set logic network element entity; alternatively, it may be the second network function set logic network element entity in the first aspect or the second aspect, or a device including the second network function set logic network element entity.
With reference to the sixth aspect, in one possible implementation manner, the disaster recovery device of the data center further includes a memory, where the memory is used to store necessary program instructions and data.
With reference to the sixth aspect, in one possible implementation, the disaster recovery device of the data center is a chip or a chip system. Optionally, when the disaster recovery device of the data center is a chip system, it may consist of a chip, or may include the chip and other discrete components.
In a seventh aspect, a data center disaster recovery device is provided, including a processor and an interface circuit. The interface circuit is configured to receive a computer program or instructions and transmit them to the processor; the processor is configured to execute the computer program or instructions to cause the data center disaster recovery device to perform the method in any one of the above aspects.
With reference to the seventh aspect, in one possible implementation, the disaster recovery device of the data center is a chip or a chip system. Optionally, when the disaster recovery device of the data center is a chip system, it may consist of a chip, or may include the chip and other discrete components.
In an eighth aspect, a communication system is provided that includes the first network function set logical network element of the first aspect and the second network function set logical network element of the second aspect.
The first network function set logic network element is used for receiving first information, the first information is used for indicating that the first instance access and mobility management network element is terminated to serve the terminal equipment, the first network function set logic network element is located in a first data center, the first instance access and mobility management network element is located in a second data center, the second data center is different from the first data center, the second instance access and mobility management network element is determined to serve the terminal equipment according to the first information, the second instance access and mobility management network element is located in the first data center, the first instance access and mobility management network element and the second instance access and mobility management network element belong to the same access and mobility management network element set, and a first corresponding relation is determined, wherein the first corresponding relation is the corresponding relation between an AMF UE NGAP ID and the second instance access and mobility management network element. And the second network function set logic network element is used for determining that the first instance access and mobility management network element is terminated to serve the terminal equipment and sending the first information to the first network function set logic network element.
In a ninth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the above aspects.
It should be noted that, the above computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor, and embodiments of the present application are not limited in this regard.
In a tenth aspect, there is provided a computer readable medium storing program code which, when run on a computer, causes the computer to perform the method of the above aspects.
In an eleventh aspect, a chip system is provided, comprising a memory for storing a computer program and a processor for calling and running the computer program from the memory, such that a communication device in which the chip system is installed performs the method of any of the above-mentioned first to fourth aspects and possible implementations thereof.
The chip system may include an input chip or interface for transmitting information or data, and an output chip or interface for receiving information or data, among other things.
Drawings
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a communication system to which the present application is applicable.
Fig. 3 is a schematic diagram of a service architecture of a 5G system to which the present application is applicable.
Fig. 4 is a schematic diagram of a RAN and 5G core network to which the present application is applicable.
Fig. 5 is a system architecture diagram provided in an embodiment of the present application.
Fig. 6 is a schematic flow chart of a first data center disaster recovery method of the present application.
Fig. 7 is a schematic flow chart of a disaster recovery method for a second data center of the present application.
Fig. 8 is a schematic flow chart of a third data center disaster recovery method of the present application.
Fig. 9 is a schematic flow chart of a fourth data center disaster recovery method of the present application.
Fig. 10 is a schematic block diagram of an example of a disaster recovery device for a data center provided in the present application.
Fig. 11 is a schematic block diagram of an example of a disaster recovery device for a data center provided in the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a communication system provided herein. As shown in fig. 1, the system includes a first network function set logical network element 110 and a second network function set logical network element 120. Optionally, the system 100 may further include a unified database network element 130 and a network repository network element 140. System 100 may be used to perform a method of disaster recovery for a data center in accordance with embodiments of the present application.
The first network function set logic network element 110 is configured to receive first information, where the first information is used to instruct the first instance access and mobility management network element to terminate serving for the terminal device, the first network function set logic network element 110 is located in a first data center, the first instance access and mobility management network element is located in a second data center, and the second data center is different from the first data center; determining that a second instance access and mobility management network element serves the terminal equipment according to the first information, wherein the second instance access and mobility management network element is located in a first data center, and the first instance access and mobility management network element and the second instance access and mobility management network element belong to the same access and mobility management network element set; determining a first corresponding relation, wherein the first corresponding relation is a corresponding relation between an AMF UE NGAP ID of a next generation application protocol interface identifier of the terminal equipment on the side of an access and mobility management network element and a second instance access and mobility management network element.
The second network function set logic network element 120 is configured to determine that the first instance access and mobility management network element terminates serving the terminal device, and send the first information.
Optionally, the unified database network element 130 is configured to send the first information to the first network function set logic network element 110.
Optionally, the network repository network element 140 is configured to send the first information to the first network function set logic network element 110.
According to the communication system provided in this application, when one data center is unavailable, the AMF instance serving the terminal device is replaced with an AMF instance of another data center; the AMF UE NGAP ID does not need to be re-allocated to the user terminal device and no additional interface signaling is needed, so that data center disaster recovery is carried out smoothly while interruption or delay of service processing is avoided.
The system 100 shown in fig. 1 may be applied to the fifth generation (5th generation, 5G) network architecture shown in fig. 2 or fig. 3, and may of course also be used in future network architectures, such as the sixth generation (6th generation, 6G) network architecture, which is not specifically limited in this embodiment of the present application.
A 5G system in a different scenario will be illustrated in connection with fig. 2 and 3. It should be understood that the 5G system described herein is merely an example and should not be construed as limiting the present application in any way.
Fig. 2 shows a schematic architecture of a basic 5G system 200. As shown in fig. 2, the system 200 includes: a policy control function (PCF), an AMF, a session management function (SMF), a radio access network (RAN), unified data management (UDM), a data network (DN), a user plane function (UPF), a UE, an application function (AF), a network slice selection function (NSSF), and/or an authentication server function (AUSF), etc. Optionally, the following functions (not shown in fig. 2) may also be included: a unified data repository (UDR), a network exposure function (NEF), or a network repository function (NF repository function, NRF).
The main functions of each network element are described as follows:
1. Terminal device
The terminal device in the embodiments of the present application may be: user equipment (UE), a mobile station (MS), a mobile terminal (MT), an access terminal, a subscriber unit, a subscriber station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or the like.
The terminal device may be a device providing voice/data connectivity to a user, for example, a handheld device or a vehicle-mounted device with a wireless connection function. Currently, some examples of terminals are: a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, or a terminal device in a future evolved public land mobile network (PLMN), and the like, which is not limited in the embodiments of the present application.
By way of example and not limitation, in the embodiments of the present application, the terminal device may also be a wearable device. A wearable device, also referred to as a wearable smart device, is a general term for devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not merely a hardware device; it can also provide powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include devices that are full-featured and large-sized and can implement all or some functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as smartphones, for example, various smart bracelets and smart jewelry for vital sign monitoring. In addition, in the embodiments of the present application, the terminal device may also be a terminal device in an internet of things (IoT) system.
2. Radio access network RAN
The radio access network is an access network that implements access network functions based on wireless communication technology. The radio access network can manage radio resources, provide radio access or air interface access services for terminals, and forward control signaling and user data between the terminal and the core network.
By way of example and not limitation, the radio access network may be an evolved NodeB (eNB or eNodeB) in an LTE system, a radio controller in a cloud radio access network (CRAN) scenario, a relay station, an access point, a vehicle-mounted device, a wearable device, an access device in a 5G network or in a future evolved PLMN network, an access point (AP) in a WLAN, or a gNB in an NR system; the embodiments of the present application are not limited thereto.
3. Access and mobility management function network element AMF
The access and mobility management function network element is mainly used for mobility management, access management, and the like; it can implement functions of a mobility management entity (MME) other than session management, such as lawful interception and access authorization (or authentication), and is also used to transfer user policies between the UE and the PCF. In the embodiments of the present application, it can be used to implement the functions of the access and mobility management network element.
4. Session management function network element SMF
The session management function network element is mainly used for session management, internet protocol (internet protocol, IP) address allocation and management of terminal equipment, selecting a user plane function (user plane function, UPF) network element, a termination point of a policy control and charging function interface, downlink data notification, and the like. In the embodiment of the application, the method and the device can be used for realizing the function of the session management network element.
5. User plane function network element UPF
The user plane function network element can be used for packet routing and forwarding, QoS parameter processing of user plane data, and the like. User data accesses a data network (DN) through this network element. In the embodiments of the present application, it can be used to implement the functions of the user plane network element; for example, because the service experience of a UE may differ depending on which UPF its session is established on, the SMF needs to select an appropriate UPF for the UE's session.
6. Policy control network element PCF
The policy control network element provides a unified policy framework for governing network behavior and provides policy rule information to control plane function network elements (such as the AMF and the SMF). It is mainly responsible for policy control functions at the session and service flow levels, such as charging, QoS bandwidth guarantee, mobility management, and UE policy decisions.
7. Network element NEF with open network capability
The network capability opening function network element is used to open services and network capability information (such as a terminal location, whether a session is reachable) provided by the 3GPP network function to the outside, etc.
8. Application function network element AF
The application function network element is mainly used to convey requirements of the application side to the network side, for example, QoS requirements or subscriptions to user state events. The AF may be a third-party functional entity or an application service deployed by the operator, such as the IMS voice call service. When the application function entity of a third-party application interacts with the core network, authorization may be performed through the NEF: for example, the third-party application function sends a request message directly to the NEF, the NEF determines whether the AF is allowed to send the request message, and if the check passes, it forwards the request message to the corresponding PCF or to the unified data management (UDM).
9. Unified data management network element UDM
The unified data management network element is mainly used for unified data management and supports authentication credential processing in the 3GPP authentication and key agreement mechanism, user identity processing, access authorization, registration and mobility management, subscription management, short message management, and the like.
10. Unified data storage network element UDR
The unified data storage network element is mainly used for the access function of subscription data, policy data, application data and other types of data.
In the above architecture, the respective interface functions are described as follows:
n7: and the interface between PCF and SMF is used for issuing PDU session granularity and service data flow granularity control strategy.
N15: and the interface between the PCF and the AMF is used for issuing UE strategies and access control related strategies.
N5: and the interface between the AF and the PCF is used for issuing application service requests and reporting network events.
N4: the interface between SMF and UPF is used for transferring information between control plane and user plane, including control plane-oriented forwarding rule, qoS control rule, flow statistics rule, etc. issuing and user plane information reporting.
N11: an interface between the SMF and the AMF for conveying PDU session tunnel information between the RAN and the UPF, conveying control messages sent to the UE, conveying radio resource control information sent to the RAN, etc.
N2: and an interface between the AMF and the RAN, which is used for transmitting radio bearer control information and the like from the core network side to the RAN.
N1: the interface between the AMF and the UE, access independent, is used to deliver QoS control rules etc. to the UE.
N8: the interface between the AMF and the UDM is used for the AMF to acquire subscription data and authentication data related to access and mobility management from the UDM, and the AMF registers the current mobility management related information of the UE from the UDM.
N10: and the interface between the SMF and the UDM is used for the SMF to acquire session management related subscription data from the UDM, registering the current session related information of the UE from the UDM, and the like.
In addition, although not shown in fig. 2, the UDR may also have a direct interface with the PCF and the UDM, corresponding to the N36 interface and the N35 interface, respectively, where the N36 interface is used for the PCF to obtain policy related subscription data and application data related information from the UDR, and the N35 interface is used for the UDM to obtain user subscription data information from the UDR.
It should be understood that the network architecture applied to the embodiments of the present application is merely an exemplary network architecture described from the perspective of a conventional point-to-point architecture and a service-based architecture; the network architectures to which the embodiments of the present application are applicable are not limited thereto, and any network architecture capable of implementing the functions of the above network elements is applicable to the embodiments of the present application.
It should be understood that the names of interfaces between the network elements in fig. 2 are only an example, and the names of interfaces in the specific implementation may be other names, which are not specifically limited in this application. Furthermore, the names of the transmitted messages (or signaling) between the various network elements described above are also merely an example, and do not constitute any limitation on the function of the message itself.
The network element may also be referred to as an entity, a device, an apparatus, a module, or the like, which is not particularly limited in this application. Also, in this application, for ease of understanding and description, the term "network element" is sometimes omitted; for example, the SMF network element is abbreviated as SMF, in which case "SMF" should be understood as the SMF network element. The same applies to similar cases hereinafter and is not repeated.
It will be appreciated that the network elements or functions described above may be either network elements in a hardware device, software functions running on dedicated hardware, or virtualized functions instantiated on a platform (e.g., a cloud platform). Alternatively, the network element or the function may be implemented by one device, or may be implemented by a plurality of devices together, or may be a functional module in one device, which is not specifically limited in this embodiment of the present application.
It should also be understood that in the communication system shown in fig. 2, the functions of the respective constituent network elements are merely exemplary, and that not all the functions are necessary when the respective constituent network elements are applied in the embodiments of the present application.
In addition, the name of each network element included in fig. 2 (such as PCF, AMF, etc.) is only one possible name, and the name does not limit the functions of the network element itself. In 5G networks and other future networks, the above network elements may also have other names, which is not specifically limited in the embodiments of the present application. For example, in a 6G network, some or all of the above network elements may keep the 5G terminology or adopt other names; this is stated here once and not repeated below.
It should be further noted that fig. 2 describes communication between control plane function network elements by taking non-service-based interfaces as an example, but this does not limit the protection scope of the embodiments of the present application. Those skilled in the art will understand that the control plane function network elements in fig. 2 may also communicate through service-based interfaces; for example, the service-based interface provided by the AMF may be Namf, the service-based interface provided by the SMF may be Nsmf, the service-based interface provided by the UDM may be Nudm, the service-based interface provided by the AF may be Naf, the service-based interface provided by the PCF may be Npcf, and so on.
The network elements in fig. 2 are connected through a reference-point-based architecture, which does not limit the embodiments of the present application. Fig. 3 presents a schematic architecture diagram based on service-based interfaces. As shown in fig. 3, the architecture includes: NSSF, AUSF, UDM, NEF, NRF, PCF, AF, AMF, SMF, UE, RAN, UPF, and/or DN, etc. In fig. 3, the service-based interface provided by the NSSF may be Nnssf, the service-based interface provided by the NEF may be Nnef, the service-based interface provided by the NRF may be Nnrf, the service-based interface provided by the AMF may be Namf, the service-based interface provided by the SMF may be Nsmf, the service-based interface provided by the UDM may be Nudm, the service-based interface provided by the AF may be Naf, the service-based interface provided by the PCF may be Npcf, the service-based interface provided by the AUSF may be Nausf, and the service-based interface provided by the CHF may be Nchf; the interfaces between the control plane functions and the RAN and the UPF are non-service-based interfaces. The UE is connected to the AMF through the N1 interface and to the RAN through the radio resource control (RRC) protocol; the RAN is connected to the AMF through the N2 interface and to the UPF through the N3 interface; the UPF is connected to the DN through the N6 interface and to the SMF through the N4 interface. For related descriptions, reference may be made to the 5G system architecture in the standard; for brevity, the connection relationships of architecture 300 are not further described here.
Fig. 4 shows a network architecture diagram, in the current technology, of a 5G mobile communication network in which a next generation radio access network (NG-RAN) and a 5G core network (5GC) are connected and data center disaster recovery is implemented. As shown in fig. 4, the (R)AN is an access network through which the terminal device accesses the core network. In the 5G radio access network there are two types of base stations: the gNB (gNodeB), which supports the 5G new radio, and the evolved base station NG-eNB (NG-eNodeB), which supports the 4G radio; both are connected to the 5G core network via the NG interface (also called the N2 interface). The AMF receives and transmits RAN signaling mainly through the NG interface, and completes the registration procedure of the user, the forwarding of session management (SM) signaling, and mobility management. The UPF is the user plane function responsible for forwarding user data. In the current technology, the NG-RAN node is connected to the core network through the NG interface, which comprises the NG-C interface between the AMF and the NG-RAN and the NG-U interface between the UPF and the NG-RAN. To increase reliability, one AMF Set is composed of a plurality of AMFs with the same slice supporting capability within an AMF region. All AMFs within an AMF Set are able to access the context information of the same user terminal device. If a certain AMF fails, or if the link between the RAN and that AMF fails, the RAN may reselect another AMF within the AMF Set to continue serving the terminal device via next generation application protocol (NGAP) messages. As shown in fig. 4, to provide data center disaster recovery, the AMF instances within an AMF Set are often deployed in data centers (DCs) at different geographic locations, and each RAN simultaneously connects to at least two AMF instances of the AMF Set deployed at different DCs.
In the networking shown in fig. 4, in the prior art the RAN needs to be configured with each AMF it connects to, including the AMF name, the public land mobile network identifier (PLMN ID), the supported network slice list, and the interface address. Although the UE may be served by another AMF that takes over from the failed AMF, this replacement is not completely transparent to the RAN: the RAN still needs to be informed, via an additional NGAP interface message, that the AMF serving the UE has been replaced by another AMF within the AMF Set.
When one AMF fails, another AMF in the AMF Set takes over the failed AMF to serve the terminal device. The new AMF allocates a new AMF UE NGAP ID to the terminal device and sends a UE transport network layer association (UE TNLA) binding update message to the RAN, informing the RAN of the AMF UE NGAP ID newly allocated to the UE, so that the new AMF and a new NG interface link serve the UE. The purpose of the UE TNLA binding is to specify which NG interface link is used for delivering the NGAP messages related to this UE.
When the signaling link from the RAN to a certain AMF fails, the RAN selects another reachable AMF in the AMF Set and sends a request message to it; after receiving the request message, the new AMF allocates a new AMF UE NGAP ID to the terminal device and sends it to the RAN. From that point on, the terminal device is served by the new AMF.
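To make the prior-art behaviour described above concrete, the following Python sketch models a RAN configured with each connected AMF (name, PLMN ID, slice list, interface address) that, on AMF or link failure, reselects another AMF in the same AMF Set, which then allocates a new AMF UE NGAP ID to the UE. All class and field names are illustrative assumptions rather than standardized structures.

```python
# Illustrative sketch of the prior-art failover described above; names are assumptions.
from dataclasses import dataclass, field
import itertools


@dataclass
class AMF:
    name: str
    plmn_id: str
    slice_list: list[str]
    address: str
    alive: bool = True
    _next_id: itertools.count = field(default_factory=lambda: itertools.count(1))

    def allocate_amf_ue_ngap_id(self) -> int:
        # In the prior art, the taking-over AMF allocates a NEW AMF UE NGAP ID.
        return next(self._next_id)


class RAN:
    def __init__(self, amf_set: list[AMF]):
        # The RAN must be configured with every AMF it connects to.
        self.amf_set = amf_set
        self.ue_bindings: dict[int, tuple[AMF, int]] = {}  # RAN UE NGAP ID -> (AMF, AMF UE NGAP ID)

    def reselect_on_failure(self, ran_ue_ngap_id: int) -> None:
        # Pick another reachable AMF in the same AMF Set.
        new_amf = next(a for a in self.amf_set if a.alive)
        new_id = new_amf.allocate_amf_ue_ngap_id()
        # Additional NGAP signalling (UE TNLA binding update) informs the RAN of the new ID.
        self.ue_bindings[ran_ue_ngap_id] = (new_amf, new_id)


amf1 = AMF("amf-dc1", "460-00", ["eMBB"], "10.0.1.1")
amf2 = AMF("amf-dc2", "460-00", ["eMBB"], "10.0.2.1")
ran = RAN([amf1, amf2])
ran.ue_bindings[7] = (amf2, amf2.allocate_amf_ue_ngap_id())
amf2.alive = False
ran.reselect_on_failure(7)   # the UE is now served by amf1 with a newly allocated ID
```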
Therefore, in the above technology, when all AMF instances deployed in a certain data center are unavailable, additional NGAP interface signaling is required to allocate a new AMF UE NGAP ID to the UE and to replace the AMF serving the UE with an AMF instance of another data center in the AMF Set; the UE can continue to process services (e.g., location update, session establishment, etc.) only after a connection with the new AMF instance is successfully established, and the process of replacing the AMF may cause interruption or delay of the ongoing service processing.
Based on this, the present application provides a data center disaster recovery method and device, which can replace the AMF instance serving a UE within an AMF Set from one data center to another data center without allocating a new AMF UE NGAP ID to the UE and without additional interface signaling, thereby avoiding interruption or delay of service processing.
Fig. 5 shows a system architecture diagram provided in an embodiment of the present application. As shown in fig. 5, compared with the existing system, the system architecture provided in the embodiment of the present application adds a new network function set logic function (NSLF). The NSLF uses the existing NG interface to connect to the RAN (the gNB or the NG-eNB, etc.) and presents one or more logical AMFs to the RAN. Each data center is provided with an NSLF; the NSLF manages all NG interface links from the data center to the RAN and is connected to each AMF of the AMF Set within the data center. The NSLFs of the data centers form an NSLF Set, and the multiple AMF instances of the AMF Set located in different data centers are virtualized into one logical AMF. The system architecture provided by the embodiment of the present application further adds a unified distributed database (UDDB); NSLF and AMF instances in any data center can access the UDDB, so that data can be accessed and shared uniformly across DCs.
When a normally available AMF instance and NG interface link exist in the data center where the NSLF resides, the NSLF selects the serving AMF instance for the UE within that data center; otherwise, it routes the UE-related NGAP message to the NSLF of another data center, and an AMF instance of that other data center is selected to serve the UE. The NSLF of the new data center (i.e., the NSLF of the other data center) binds the UE TNLA to an NG interface link of its own data center, and the subsequent UE-related NGAP messages are received and sent over the NG interface link of the new data center.
In the whole process, the RAN does not perceive the change of AMF instance, no new AMF UE NGAP ID needs to be allocated to the UE, and no additional NGAP signaling is required. Meanwhile, since from the RAN's point of view the NG interface link of either data center belongs to the logical AMF connected to it, after the RAN receives a UE-related NGAP message from the NG interface link of the other data center, the binding relationship between the UE's NGAP association and the newly selected link is automatically updated, and the currently ongoing service is neither interrupted nor delayed.
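The following Python sketch, using assumed in-memory stand-ins for the NSLF, the AMF instances and the UDDB, illustrates the routing behaviour described above: an NSLF serves a UE-related NGAP message with a local AMF instance when one is reachable over a local NG link, and otherwise hands the message to the NSLF of the other data center, without the RAN perceiving any change of logical AMF or AMF UE NGAP ID.

```python
# Minimal sketch, assuming in-memory stand-ins for the NSLF, AMF instances and UDDB.
from dataclasses import dataclass


@dataclass
class AMFInstance:
    name: str
    dc: str
    available: bool = True


class UDDB:
    """Unified distributed database shared by all data centers (simplified)."""
    def __init__(self):
        self.ue_context: dict[int, dict] = {}   # keyed by AMF UE NGAP ID
        self.link_state: dict[str, bool] = {}   # DC name -> NG links usable?


class NSLF:
    def __init__(self, dc: str, local_amfs: list[AMFInstance], uddb: UDDB):
        self.dc, self.local_amfs, self.uddb = dc, local_amfs, uddb
        self.peer: "NSLF | None" = None          # NSLF of the other data center

    def handle_uplink(self, amf_ue_ngap_id: int, msg: str) -> str:
        local = next((a for a in self.local_amfs if a.available), None)
        if local and self.uddb.link_state.get(self.dc, False):
            # Serve locally; the RAN keeps the same logical AMF and the same AMF UE NGAP ID.
            self.uddb.ue_context[amf_ue_ngap_id] = {"serving_amf": local.name, "dc": self.dc}
            return f"{local.name} handles '{msg}'"
        # Otherwise route to the peer NSLF, which rebinds the UE TNLA to its own NG link.
        return self.peer.handle_uplink(amf_ue_ngap_id, msg)


uddb = UDDB()
uddb.link_state = {"DC1": True, "DC2": False}          # e.g. DC2 lost its NG links
nslf1 = NSLF("DC1", [AMFInstance("amf-11", "DC1")], uddb)
nslf2 = NSLF("DC2", [AMFInstance("amf-21", "DC2")], uddb)
nslf1.peer, nslf2.peer = nslf2, nslf1
print(nslf2.handle_uplink(42, "registration request"))  # transparently served from DC1
```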
Fig. 6 shows a schematic flow chart of a first data center disaster recovery method of the present application. The data center disaster recovery method 600 is described below in combination with the network architectures shown in fig. 2 to fig. 5. The instance access and mobility management network element may also be referred to as an AMF instance, and the logical access and mobility management network element may also be referred to as a logical AMF.
S610, the first network function set logic network element receives first information, where the first information is used to instruct the first instance access and mobility management network element to terminate serving the terminal device, the first network function set logic network element is located in a first data center, the first instance access and mobility management network element is located in a second data center, and the second data center is different from the first data center.
The reason that the first instance access and mobility management network element terminates serving the terminal device includes, but is not limited to, at least one of the following: the link between the second data center and the first radio access network fails; the first instance access and mobility management network element fails; the second data center fails or is shut down.
As one possible implementation, the first network function set logical network element receives the first information from a unified distributed database, the unified distributed database being used for sharing data by the first data center and the second data center.
As another possible implementation, the first network function set logic network element receives the first information from a second network function set logic network element or a network repository network element, the second network function set logic network element being located in the second data center.
S620, the first network function set logic network element determines that the second instance access and mobility management network element serves the terminal equipment according to the first information, the second instance access and mobility management network element is located in the first data center, and the first instance access and mobility management network element and the second instance access and mobility management network element belong to the same access and mobility management network element set.
As a possible implementation manner, the first network function set logic network element may determine the first correspondence by itself. For example, the first network function set logic network element obtains a third corresponding relation, where the third corresponding relation is a corresponding relation between the AMF UE NGAP ID and the first instance access and mobility management network element, the third corresponding relation is determined by the second network function set logic network element, and the first network function set logic network element establishes the first corresponding relation according to the first information and the third corresponding relation.
As another possible implementation manner, the first network function set logic network element may obtain the first correspondence from another network element or device. For example, the first network function set logic network element receives fourth information from the second network function set logic network element, the fourth information including the first correspondence, the first correspondence being determined by the second network function set logic network element; the first network function set logic network element obtains the first correspondence according to the fourth information.
S630, the first network function set logic network element determines a first corresponding relation, wherein the first corresponding relation is a corresponding relation between the next generation application protocol interface identifier AMF UE NGAP ID of the terminal equipment at the access and mobility management network element side and the second instance access and mobility management network element.
Optionally, in the embodiment of the present application, the first network function set logic network element receives second information, where the second information is uplink information of the terminal device, and sends the second information to the second instance access and mobility management network element according to the first correspondence. In this way, the uplink information of the terminal equipment can be quickly sent to the second instance access and mobility management network element for providing service for the terminal equipment according to the first corresponding relation, so that service interruption of the terminal equipment is avoided, and user experience is improved.
Optionally, the first network function set logic network element acquires a next generation application protocol interface identifier RAN UE NGAP ID of the terminal device at the radio access network side, and establishes a second corresponding relationship, where the second corresponding relationship is a corresponding relationship between the RAN UE NGAP ID and the first link of the first network function set logic network element.
Optionally, the first link is a link newly added by the first network function set logic network element for the first radio access network.
After determining the second correspondence, when the first network function set logic network element receives third information, where the third information is downlink information of the terminal device, the first network function set logic network element may send the third information from the first link to a first radio access network serving the terminal device according to the second correspondence. In this way, the downlink information of the terminal equipment can be sent to the first wireless access network for providing service for the terminal equipment according to the second corresponding relation, so that service interruption of the terminal equipment is avoided, and user experience is improved.
Optionally, the first network function set logic network element sends fifth information to the unified distributed database, where the fifth information includes at least one of the first correspondence and the second correspondence. Optionally, the first network function set logic network element sends sixth information, where the sixth information includes interface information of the first link. In this way, other network function set logic network elements can acquire the corresponding relation from the unified distributed database, so that the up-down information of the terminal equipment can be sent to the target network element, the service interruption of the terminal equipment is avoided, and the user experience is improved.
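The following is a minimal sketch of the per-UE record implied by the first and second correspondences, and of the fifth and sixth information being written to the unified distributed database; the field names are assumptions for illustration, not standardized information elements.

```python
# Sketch of the NSLF context for one UE and its publication to the UDDB (names assumed).
from dataclasses import dataclass, asdict


@dataclass
class UENSLFContext:
    amf_ue_ngap_id: int        # NGAP identifier allocated on the AMF side
    ran_ue_ngap_id: int        # NGAP identifier allocated on the RAN side
    serving_amf_instance: str  # first correspondence: AMF UE NGAP ID -> AMF instance
    bound_ng_link: str         # second correspondence: RAN UE NGAP ID -> NG link (UE TNLA binding)


class UDDB:
    def __init__(self):
        self.records: dict[int, dict] = {}
        self.link_info: dict[str, dict] = {}

    def publish_context(self, ctx: UENSLFContext) -> None:
        # "Fifth information": the correspondences, shared so any NSLF can route this UE.
        self.records[ctx.amf_ue_ngap_id] = asdict(ctx)

    def publish_link(self, link_name: str, interface_info: dict) -> None:
        # "Sixth information": interface information of a (possibly newly added) NG link.
        self.link_info[link_name] = interface_info


uddb = UDDB()
ctx = UENSLFContext(amf_ue_ngap_id=42, ran_ue_ngap_id=7,
                    serving_amf_instance="amf-instance-11", bound_ng_link="ng-link-dc1-1")
uddb.publish_context(ctx)
uddb.publish_link("ng-link-dc1-1", {"ip": "10.0.1.10", "sctp_port": 38412})
```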
According to the scheme disclosed by the application, when one data center is unavailable, the AMF instance serving the terminal equipment is replaced by the AMF instance of the other data center, a new AMF UE NGAP ID is not required to be reassigned to the terminal equipment, extra interface signaling is not required, smooth realization of disaster recovery of the data center can be ensured, and meanwhile, interruption or delay of service processing can be avoided.
Fig. 7 is a schematic flow chart of a second data center disaster recovery method of the present application. In this embodiment, all links from the RAN to DC 2 fail, and data center disaster recovery is performed. NSLF 1 and AMF instance 11 are located at DC 1, and NSLF 2 and AMF instance 21 are located at DC 2. The first network function set logic network element corresponds to NSLF 1 located in the first data center DC 1, the second network function set logic network element corresponds to NSLF 2 located in the second data center DC 2, the first instance access and mobility management network element corresponds to AMF instance 21 located in the second data center DC 2, and the second instance access and mobility management network element corresponds to AMF instance 11 located in the first data center DC 1.
S701, the RAN sends a UE NGAP initial request message to NSLF 2.
Specifically, the RAN may be configured with the name or address information of the AMF Set according to the current technology. When the name is configured, the RAN can query the corresponding address information through a domain name server (DNS) using the name of the AMF Set. The address information of the AMF Set is the IP address of the logical AMF and the corresponding SCTP port number. When the UE accesses the RAN and performs services such as network registration, the RAN sends the UE-related NGAP messages to the NSLF (from the RAN's point of view, it sends them to the logical AMF). If the uplink message is an initial message (i.e., the uplink message does not carry an AMF UE NGAP ID), the RAN determines that there are two NG interface links to the logical AMF and selects one of the links for load sharing, for example the link connected to NSLF 2 located at DC 2, and sends a UE NGAP initial request message (UE NGAP Initial Request), which carries the UE's RAN UE NGAP ID.
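As a small illustration of the link selection in S701, the following Python sketch treats an uplink NGAP message without an AMF UE NGAP ID as an initial message and chooses one NG link towards the logical AMF for load sharing; non-initial messages reuse the link currently bound for the UE. The round-robin load-sharing policy and the names used are assumptions.

```python
# Sketch of initial-message link selection (S701); the round-robin policy is an assumption.
import itertools


class RANLinkSelector:
    def __init__(self, ng_links: list[str]):
        self.ng_links = ng_links                  # e.g. one link per data center
        self._rr = itertools.cycle(ng_links)
        self.ue_link: dict[int, str] = {}         # RAN UE NGAP ID -> chosen link

    def select(self, ran_ue_ngap_id: int, amf_ue_ngap_id: int | None) -> str:
        if amf_ue_ngap_id is None:
            # Initial message: no AMF UE NGAP ID yet, pick a link for load sharing.
            link = next(self._rr)
            self.ue_link[ran_ue_ngap_id] = link
            return link
        # Non-initial message: keep using the link currently bound for this UE.
        return self.ue_link[ran_ue_ngap_id]


sel = RANLinkSelector(["ng-link-to-NSLF1", "ng-link-to-NSLF2"])
print(sel.select(ran_ue_ngap_id=7, amf_ue_ngap_id=None))   # initial request, load shared
print(sel.select(ran_ue_ngap_id=7, amf_ue_ngap_id=42))     # subsequent message, same link
```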
S702, NSLF2 selects AMF instance 21 to serve the UE.
Specifically, NSLF 2 determines that AMF instance 21 and possibly other AMF instances, such as AMF instance 22 (not shown), of the AMF Set exist in the present DC (DC 2), assigns an AMF UE NGAP ID to this UE, and decides that AMF instance 21 will provide services for the UE.
S703, NSLF2 creates a NSLF context for the UE.
Specifically, NSLF 2 creates the NSLF context of the UE and stores it in the UDDB. The context records a third correspondence, that is, the correspondence between the AMF UE NGAP ID and AMF instance 21; according to this correspondence, any subsequent NSLF (including, but not limited to, NSLF 1 and NSLF 2) can directly forward a UE-related NGAP message (for example, an NGAP message carrying a session establishment request) to AMF instance 21 for processing, based on the AMF UE NGAP ID.
Optionally, the NSLF context of the UE further includes a fourth correspondence, that is, the correspondence between the RAN UE NGAP ID and the second link of NSLF 2, in other words the correspondence between the RAN UE NGAP ID and a link of the logical AMF (i.e., the UE TNLA binding information). For a downlink message of the UE (for example, the third information) sent by the AMF instance, NSLF 2 sends it over the corresponding NG interface link according to the recorded correspondence between the RAN UE NGAP ID and the second link, so that the downlink message finally reaches the RAN serving the UE.
S704, NSLF 2 sends a UE-related NGAP message to AMF instance 21.
The UE-related NGAP message may include second information, where the second information is uplink information of the UE, and the second information may include an AMF UE NGAP ID of the UE.
S705, AMF instance 21 creates a UE context.
Specifically, the AMF instance 21 creates a UE context for the UE and stores it in the UDDB, from which any one of the AMF instances in the subsequent AMF Set can access the UE context.
S706, AMF instance 21 sends a UE-related NGAP message to NSLF 2.
Specifically, AMF instance 21 sends the UE's downlink NGAP message, i.e., the UE NGAP initial response message (UE NGAP Initial Response), to NSLF 2 of the present DC.
S707, NSLF 2 queries the NSLF context of the UE.
Specifically, after receiving the downlink message, NSLF 2 queries the UE NSLF context from UDDB according to the RAN UE NGAP ID in the message, to obtain a fourth corresponding relationship, that is, a second link corresponding relationship between the RAN UE NGAP ID and NSLF 2, that is, binding information of UE TNLA.
S708, NSLF 2 sends a UE NGAP initial response message to the RAN.
Specifically, NSLF 2 sends a UE NGAP initial response message (UE NGAP Initial Response) to the RAN from the NG interface link of the present DC (DC 2) according to the binding information of the UE TNLA queried in step S707.
S709, the other core network element sends the service message of the UE to the AMF instance 21.
Specifically, other core network functions that serve this UE, such as the session management network element SMF, send the UE's service messages to the AMF; these messages may carry a uniform resource identifier (URI).
S710, NSLF 2 selects AMF instance 21 to serve the UE.
Specifically, NSLF 2 determines from the URI that AMF instance 21 provides services for the UE.
S711, NSLF 2 sends a service message of the UE to AMF instance 21.
It should be understood that after S711 is performed, the AMF instance 21 may also send a downlink service message of the UE to NSLF 2, and the process may refer to steps S706 to S708, which are not described herein for brevity.
S712, the RAN sends a non-initial uplink message to NSLF 2.
The non-initial uplink message indicates that the uplink message includes an AMF UE NGAP ID of the UE.
S713, NSLF 2 queries the NSLF context of the UE.
Specifically, after receiving the uplink message, NSLF 2 queries the UE NSLF context from the UDDB according to the AMF UE NGAP ID in the message, to obtain a third correspondence between the AMF UE NGAP ID and AMF instance 21.
S714, NSLF 2 sends a UE-related NGAP message to AMF instance 21.
S715, the RAN sends a UE-related NGAP message to NSLF 1.
Specifically, when all links between the RAN and NSLF 2 fail, after NSLF 2 detects a link failure, the latest link state is refreshed into the UDDB. The RAN selects another link to the logical AMF, namely, a link between the RAN and NSLF 1, and sends an uplink NGAP message of the UE, where the message carries an AMF UE NGAP ID and a RAN UE NGAP ID.
S716, NSLF 1 selects AMF instance 11 to serve the UE.
Specifically, after receiving the UE's uplink NGAP message, NSLF 1 queries the UDDB for the NSLF context of the UE according to the AMF UE NGAP ID, finds that AMF instance 21 recorded in the context as serving the UE is not in the present DC (i.e., DC 1), further queries the status of each NG interface link through the UDDB (i.e., obtains the first information), determines that the links of DC 2 have failed, and then decides that AMF instance 11 of DC 1 will serve the UE and processes the UE's NGAP message.
S717, NSLF 1 sends a UE-related UE NGAP message to AMF instance 11.
S718, NSLF 1 updates the NSLF context of the UE.
Specifically, NSLF 1 updates the UE NSLF context and modifies the third correspondence into the first correspondence, that is, modifies the correspondence between the AMF UE NGAP ID and AMF instance 21 into a correspondence between the AMF UE NGAP ID and AMF instance 11; according to this correspondence, any subsequent NSLF (including but not limited to NSLF 1 and NSLF 2) can directly forward the UE-related NGAP messages to AMF instance 11 based on the AMF UE NGAP ID.
Optionally, NSLF 1 updates the UE NSLF context and modifies the fourth correspondence into the second correspondence, that is, modifies the correspondence between the RAN UE NGAP ID and the second link of NSLF 2 into the correspondence between the RAN UE NGAP ID and the first link of NSLF 1; according to the recorded correspondence between the RAN UE NGAP ID and the link, the NSLF transmits subsequent downlink messages of this UE sent by the AMF instance from the NG interface link of DC 1 to the RAN serving the UE. Optionally, the modification of the fourth correspondence into the second correspondence by NSLF 1 may also be completed between the subsequent steps S719 and S720.
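The following sketch, with assumed names and an in-memory dictionary standing in for the UDDB, condenses steps S715 to S718: NSLF 1 receives a UE-related uplink message, finds from the shared context that the recorded serving AMF instance sits in DC 2, confirms from the link state that DC 2 is unreachable, and rewrites the third and fourth correspondences into the first and second correspondences while keeping the AMF UE NGAP ID unchanged.

```python
# Sketch of the take-over in S715-S718 (link failure case); names are assumptions.

def take_over_ue(uddb: dict, amf_ue_ngap_id: int, local_dc: str,
                 local_amf: str, local_link: str) -> dict:
    ctx = uddb["ue_context"][amf_ue_ngap_id]
    serving_dc = ctx["dc"]
    if serving_dc != local_dc and not uddb["link_state"].get(serving_dc, False):
        # Third correspondence -> first correspondence: same AMF UE NGAP ID, new AMF instance.
        ctx["serving_amf_instance"] = local_amf
        # Fourth correspondence -> second correspondence: rebind the UE TNLA to a local NG link.
        ctx["bound_ng_link"] = local_link
        ctx["dc"] = local_dc
    return ctx


uddb = {
    "ue_context": {42: {"dc": "DC2", "serving_amf_instance": "amf-21",
                        "bound_ng_link": "ng-link-dc2-1", "ran_ue_ngap_id": 7}},
    "link_state": {"DC1": True, "DC2": False},   # all NG links of DC2 have failed
}
print(take_over_ue(uddb, 42, "DC1", "amf-11", "ng-link-dc1-1"))
# The AMF UE NGAP ID (42) is unchanged, so no extra NGAP signalling towards the RAN is needed.
```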
S719, AMF instance 11 sends a UE-related UE NGAP message to NSLF 1.
The UE related UE NGAP message may be a downlink message, which may include third information, where the third information is downlink information of the terminal device.
S720, NSLF 1 queries the UE for updated NSLF context.
Specifically, after receiving the downlink message, NSLF 1 queries the UE NSLF context from the UDDB according to the RAN UE NGAP ID in the message, and obtains the corresponding relationship between the RAN UE NGAP ID and the first link of NSLF 1, that is, the binding information of UE TNLA.
S721, NSLF 1 sends a UE related UE NGAP message to the RAN.
Specifically, NSLF 1 transmits a UE NGAP message (including third information) from the NG interface link of the present DC (DC 1) to the RAN according to the binding information of the UE TNLA.
S722, the RAN updates the TNLA binding of the UE.
Specifically, after the RAN receives the NGAP message of the UE from the link of NSLF 1, the TNLA binding update of the UE is completed.
According to the scheme disclosed by the application, when one data center is unavailable, the AMF instance serving the terminal equipment is replaced by the AMF instance of the other data center, no extra interface signaling is needed, smooth realization of disaster recovery of the data center can be ensured, and meanwhile, interruption or delay of service processing can be avoided.
Fig. 8 is a schematic flow chart of a third data center disaster recovery method of the present application. In this embodiment, AMF instance 21 at DC 2 fails, and data center disaster recovery occurs. Wherein the first network function set logic network element corresponds to NSLF 1 located in the first data center DC 1, the second network function set logic network element corresponds to NSLF 2 located in the second data center DC 2, the first instance access and mobility management network element corresponds to AMF instance 21 located in the second data center DC 2, and the second instance access and mobility management network element corresponds to AMF instance 11 located in the first data center DC 1.
S801, the RAN sends a UE NGAP initial request message to NSLF 2.
S802, NSLF 2 selects AMF instance 21 to serve the UE.
S803, NSLF 2 creates a NSLF context for the UE.
S804, NSLF 2 sends a UE-related NGAP message to AMF instance 21.
S805, AMF instance 21 creates a UE context.
S806, the AMF instance 21 sends a UE-related NGAP message to NSLF 1.
S807, NSLF 2 queries the NSLF context of the UE.
S808, NSLF 2 sends a UE NGAP initial response message to the RAN.
In steps S801 to S808, the RAN completes the initial connection establishment of the NGAP of the UE, and the AMF instance 21 located in DC 2 provides services for the UE. Reference may be made specifically to the descriptions in steps S701 to S708 in fig. 7, and for brevity, the description is omitted here.
S809, the RAN sends a UE-related NGAP message to NSLF 2.
The UE-related NGAP message is a non-initial uplink message, that is, the uplink message includes an AMF UE NGAP ID.
S810, NSLF 2 queries the NSLF context of the UE.
Specifically, after receiving the uplink message, NSLF 2 queries the UE NSLF context from the UDDB according to the AMF UE NGAP ID in the message, to obtain a third corresponding relationship, that is, the corresponding relationship between the AMF UE NGAP ID and the AMF instance 21.
S811, NSLF 2 determines that AMF instance 21 terminates the service, and selects AMF instance 11 to provide the service for the UE.
Specifically, NSLF 2 discovers that AMF instance 21, which is recorded in the context and serves the UE, currently terminates the service (or fails), and that no other AMF instance is available in the present DC (i.e., DC 2), then determines that AMF instance 11 at DC 1 serves the UE, and processes the NGAP message of the UE.
S812, NSLF 2 updates the NSLF context of the UE.
Specifically, NSLF 2 updates the UE NSLF context and modifies the third correspondence into the first correspondence, that is, modifies the correspondence between the AMF UE NGAP ID and AMF instance 21 into the correspondence between the AMF UE NGAP ID and AMF instance 11; according to this correspondence (the first correspondence), any subsequent NSLF (including but not limited to NSLF 1 and NSLF 2) can directly forward the UE-related NGAP messages to AMF instance 11 for processing, based on the AMF UE NGAP ID. Further, NSLF 2 may send the fifth information to the UDDB, and the fifth information may include the first correspondence.
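A sketch of steps S810 to S813 under assumed names: NSLF 2 looks up the third correspondence, finds that the recorded AMF instance has terminated service and that no other instance is available in DC 2, rewrites the context to point at an instance of DC 1, publishes the change (the fifth information) to the UDDB, and forwards the uplink message to NSLF 1.

```python
# Sketch of S810-S813 (AMF instance failure in DC2); all names are assumptions.

def handle_uplink_on_amf_failure(uddb: dict, amf_ue_ngap_id: int,
                                 local_instances: dict, peer_nslf) -> str:
    ctx = uddb["ue_context"][amf_ue_ngap_id]
    serving = ctx["serving_amf_instance"]
    if local_instances.get(serving, False):
        return f"{serving} handles the message locally"
    if not any(local_instances.values()):
        # No usable instance in this DC: rewrite the context to an instance of the other DC.
        ctx["serving_amf_instance"] = "amf-11"    # instance located in DC1 (assumed name)
        uddb["ue_context"][amf_ue_ngap_id] = ctx  # "fifth information" made visible to all NSLFs
        # Forward the UE-related NGAP message (carrying the unchanged AMF UE NGAP ID) to NSLF 1.
        return peer_nslf(amf_ue_ngap_id, ctx)
    return "reselect another local instance"


def nslf1(amf_ue_ngap_id: int, ctx: dict) -> str:
    # NSLF 1 reads the first correspondence from the shared context and delivers the message.
    return f"NSLF1 forwards AMF UE NGAP ID {amf_ue_ngap_id} to {ctx['serving_amf_instance']}"


uddb = {"ue_context": {42: {"serving_amf_instance": "amf-21"}}}
dc2_instances = {"amf-21": False}                 # AMF instance 21 has failed
print(handle_uplink_on_amf_failure(uddb, 42, dc2_instances, nslf1))
```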
S813, NSLF 2 sends a UE-related NGAP message to NSLF 1.
The UE-related NGAP message may include second information, where the second information is uplink information of the UE, and the information includes an AMF UE NGAP ID of the UE.
S814, NSLF 1 queries the NSLF context of the UE.
Specifically, after receiving the uplink message, NSLF 1 queries the UE NSLF context from the UDDB according to the AMF UE NGAP ID in the message, to obtain a first corresponding relationship, that is, a corresponding relationship between the AMF UE NGAP ID and the AMF instance 11.
S815, NSLF 1 sends a UE-related NGAP message to AMF instance 11.
Specifically, NSLF 1 sends a UE-related NGAP message (e.g., second information) to AMF instance 11 according to the first correspondence.
S816, AMF instance 11 sends a UE-related NGAP message to NSLF 1.
Specifically, AMF instance 11 obtains the UE context from UDDB and processes it, and then sends the UE's downstream NGAP message to NSLF 1 of the present DC (DC 1).
Optionally, the downlink NGAP message includes third information, where the third information is downlink information of the UE, and the third information may include an AMF UE NGAP ID and a RAN UE NGAP ID.
S817, NSLF 1 queries the NSLF context of the UE.
Specifically, after receiving the downlink message, NSLF 1 queries the UE NSLF context from UDDB according to the RAN UE NGAP ID in the message, to obtain a fourth corresponding relationship, that is, a link corresponding relationship between the RAN UE NGAP ID and NSLF 2, which may also be referred to as binding information of UE TNLA. NSLF 1 finds the NG interface link bound at DC 2, then queries the status of the AMF instances from UDDB, determines that all AMF instances of DC 2 are not available, and then decides to send a downlink message to the RAN using the NG interface link (i.e., the first link) of the present DC (DC 1).
S818, NSLF 1 determines that the link of the present DC is normal, updates the TNLA binding of the UE.
S819, NSLF 1 updates the NSLF context of the UE.
Specifically, NSLF 1 determines that the link of the present DC 1 is normal, modifies the UE TNLA binding information in the UE NSLF context in the UDDB, and modifies the fourth correspondence into the second correspondence, that is, modifies the correspondence between the RAN UE NGAP ID and the second link of NSLF 2 into the correspondence between the RAN UE NGAP ID and the first link of NSLF 1. Further, NSLF 1 may send the fifth information to the UDDB, and the fifth information may include the second correspondence.
S820, NSLF 1 sends a UE-related NGAP message to the RAN.
Specifically, NSLF 1 sends a UE-related NGAP message (e.g., third information) to the RAN serving the UE through the first link of NSLF 1 according to the second correspondence.
S821, the RAN updates the TNLA binding of the UE.
According to the scheme disclosed by the application, when one data center is unavailable, the AMF instance serving the terminal equipment is replaced by the AMF instance of the other data center, no extra interface signaling is needed, smooth realization of disaster recovery of the data center can be ensured, and meanwhile, interruption or delay of service processing can be avoided.
Fig. 9 is a schematic flow chart of a fourth data center disaster recovery method of the present application. In this embodiment, NSLF 2 at DC 2 fails or is scheduled to shut down, NSLF 1 at DC 1 adds a new link (e.g., link 503 shown in dashed lines in fig. 5) to provide link redundancy for data center disaster recovery. Wherein the first network function set logic network element corresponds to NSLF 1 located in the first data center DC 1, the second network function set logic network element corresponds to NSLF 2 located in the second data center DC 2, the first instance access and mobility management network element corresponds to AMF instance 21 located in the second data center DC 2, and the second instance access and mobility management network element corresponds to AMF instance 12 located in the first data center DC 1.
S901, NSLF 1 determines that NSLF 2 is out of service.
NSLF 1 may determine that NSLF 2 has stopped service through step S901a, shown by the dashed line in the figure: NSLF 2 is planned to be shut down for reasons such as upgrade, maintenance or energy saving, and sends a planned shutdown notification to NSLF 1 located at DC 1. Alternatively, NSLF 1 is notified through the NRF or a network management function network element that NSLF 2 has stopped service. For example, NSLF 1 may receive the first information from NSLF 2, the NRF or the network management function network element, and the first information may indicate that the second data center DC 2 terminates serving the terminal device.
S902, NSLF 1 queries NG interface link information.
Specifically, NSLF 1 queries the UDDB for link information for all NG interfaces, and considers all links in DC 2 unavailable. If no other DC links are connected to the RAN, NSLF 1 decides to add a new link (e.g., link 503 shown in dashed lines in fig. 5) to provide link redundancy, increasing reliability.
S903, NSLF 1 sends AMF configuration update information to the RAN.
Specifically, NSLF 1 sends an AMF configuration update message (AMF Configuration Update) to the RAN. The AMF Configuration Update includes the sixth information, which may include an AMF TNLA to Add information element carrying the interface information of the newly added link (i.e., the first link).
Optionally, the sixth information may further include an AMF TNLA to Remove information element carrying all the links in DC 2 (i.e., the second link). After receiving the message, the RAN establishes the new link (i.e., the first link) with NSLF 1 and releases the links (i.e., the second link) established with DC 2.
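As an illustration of S903, the following sketch assembles the content of the AMF Configuration Update as a plain dictionary; the information element layout is simplified and the addresses are placeholders, so it illustrates the sixth information rather than an exact NGAP encoding.

```python
# Sketch of the AMF Configuration Update content in S903 (simplified, not an exact NGAP encoding).

def build_amf_configuration_update(links_to_add: list[dict],
                                   links_to_remove: list[dict]) -> dict:
    return {
        "message": "AMF Configuration Update",
        # "Sixth information": interface information of the newly added first link.
        "AMF_TNLA_to_Add": links_to_add,
        # Optionally tear down all links that terminate in the failed/closing DC2.
        "AMF_TNLA_to_Remove": links_to_remove,
    }


update = build_amf_configuration_update(
    links_to_add=[{"endpoint": "10.0.1.20", "sctp_port": 38412, "dc": "DC1", "link": "first link"}],
    links_to_remove=[{"endpoint": "10.0.2.20", "sctp_port": 38412, "dc": "DC2", "link": "second link"}],
)
print(update)
# On reception the RAN establishes the new link with NSLF 1 and releases the DC2 links,
# while still seeing the same logical AMF.
```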
S904, NSLF 1 updates NG interface link information.
Specifically, NSLF 1 may send the sixth information in step S903 to the UDDB, updating NG interface link information.
S905, the RAN sends a UE-related NGAP message to NSLF 1.
The UE-related NGAP message is a non-initial uplink message, that is, the uplink message includes an AMF UE NGAP ID.
S906, NSLF 1 queries the NSLF context of the UE.
Specifically, after receiving the uplink message, NSLF 1 queries the UE NSLF context from the UDDB according to the AMF UE NGAP ID in the message, to obtain a third corresponding relationship, that is, the corresponding relationship between the AMF UE NGAP ID and the AMF instance 21.
S907, NSLF 1 determines that NSLF 2 terminates the service, and selects AMF instance 12 to provide the service for the UE.
Specifically, NSLF 1 finds that NSLF 2 of the DC (DC 2) where the AMF instance 21 serving the UE recorded in the context is located is not available, determines that the AMF instance 12 located at DC 1 serves the UE, and processes the NGAP message of the UE.
S908, NSLF 1 updates the NSLF context of the UE.
Specifically, NSLF 1 updates the UE NSLF context, and modifies the third correspondence to be the first correspondence, that is, modifies the correspondence between the AMF UE NGAP ID and the AMF instance 21 to be the correspondence between the AMF UE NGAP ID and the AMF instance 12.
S909, NSLF 1 sends a UE-related NGAP message to AMF instance 12.
The UE-related NGAP message is an uplink message, for example, the message may include second information, and the second information may include an AMF UE NGAP ID of the UE.
S910, the AMF instance 12 sends a UE-related NGAP message to NSLF 1.
Specifically, the AMF instance 12 obtains the context of the UE from the UDDB and processes the UE, and then sends a downlink NGAP message of the UE to NSLF 1 of the DC (DC 1), for example, the message may include third information, where the third information carries an AMF UE NGAP ID and a RAN UE NGAP ID.
S911, NSLF 1 queries the NSLF context of the UE.
Specifically, after receiving the downlink message, NSLF 1 queries the UE NSLF context from UDDB according to the RAN UE NGAP ID in the message, to obtain a fourth corresponding relationship, that is, a link corresponding relationship between the RAN UE NGAP ID and NSLF 2, which is also referred to as binding information of UE TNLA.
S912, NSLF 1 determines that the DC link is normal, and updates the TNLA binding of the UE.
Specifically, NSLF 1 finds that the bound NG interface link is at DC 2, determines that none of the NG links of DC 2 is available, and then decides to send the downlink message to the RAN using the NG interface link of the present DC (DC 1). NSLF 1 determines that the link of DC 1 is normal, modifies the UE TNLA binding information in the UE NSLF context in the UDDB, and modifies the fourth correspondence into the second correspondence, that is, modifies the correspondence between the RAN UE NGAP ID and the link of NSLF 2 into the correspondence between the RAN UE NGAP ID and the link of NSLF 1.
S913, NSLF 1 sends a UE-related NGAP message to the RAN.
Specifically, NSLF 1 transmits a downlink NGAP message (e.g., third information) of the UE from a link (e.g., first link) of NSLF 1 to the RAN.
S914, the RAN updates the TNLA binding of the UE.
According to the scheme disclosed by the application, when one data center is unavailable, the AMF instance serving the terminal equipment is replaced by the AMF instance of the other data center, and additional interface signaling is not needed, so that smooth realization of disaster recovery of the data center can be ensured, and meanwhile, interruption or delay of service processing can be avoided.
It should be understood that the sequence numbers of the above processes do not mean the order of execution, and the execution order of the processes should be determined by the functions and internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It is also to be understood that in the various embodiments of the application, terms and/or descriptions of the various embodiments are consistent and may be referenced to one another in the absence of a particular explanation or logic conflict, and that the features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
It will be appreciated that in the above embodiments of the present application, the method implemented by the communication device may also be implemented by a component (e.g. a chip or a circuit) that may be configured inside the communication device.
In the above description, the UE is taken as an example of the "terminal device" of the present application when describing the methods; in practical applications, the UE may be replaced by another terminal device, which is not limited in the present application.
The method for disaster recovery of the data center provided in the embodiment of the present application is described in detail above with reference to fig. 6 to 9. The disaster recovery method of the data center is mainly introduced from the interaction angle among the network elements. It will be appreciated that each network element, in order to implement the above-described functions, includes corresponding hardware structures and/or software modules that perform each function. Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The following describes in detail the disaster recovery device for data center provided in the embodiment of the present application with reference to fig. 10 and 11. It should be understood that the descriptions of the apparatus embodiments and the descriptions of the method embodiments correspond to each other, and thus, descriptions of details not shown may be referred to the above method embodiments, and for the sake of brevity, some parts of the descriptions are omitted.
The embodiments of the present application may divide the transmitting end device or the receiving end device into functional modules according to the above method examples; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; other division manners are possible in actual implementation. The following description takes the case where each functional module is divided corresponding to each function as an example.
Fig. 10 is a schematic block diagram of an example of a disaster recovery device 1000 for a data center provided in the present application. Any of the network elements involved in any of the methods 700 to 900, such as a network function set logic network element, a mobility management network element, a radio access network, etc., may be implemented by the data center disaster recovery device shown in fig. 10.
It should be appreciated that data center disaster recovery device 1000 may be a physical device, a component (e.g., an integrated circuit, a chip, etc.) of a physical device, or a functional module in a physical device.
As shown in fig. 10, the data center disaster recovery device 1000 includes: one or more processors 1010. The processor 1010 may store execution instructions for performing the methods of embodiments of the present application. Optionally, an interface may be invoked in the processor 1010 to implement the receive and transmit functions. The interface may be a logical interface or a physical interface, which is not limited. For example, the interface may be a transceiver circuit, or an interface circuit. The transceiver circuitry, or interface circuitry, for implementing the receive and transmit functions may be separate or may be integrated. The transceiver circuit or the interface circuit may be used for reading and writing codes/data, or the transceiver circuit or the interface circuit may be used for transmitting or transferring signals.
Alternatively, the interface may be implemented by a transceiver. Optionally, the data center disaster recovery device 1000 can also include a transceiver 1030. The transceiver 1030 may be referred to as a transceiver unit, a transceiver circuit, a transceiver, etc. for implementing a transceiver function.
Optionally, the data center disaster recovery device 1000 can also include a memory 1020. The specific deployment location of the memory 1020 is not specifically limited in the embodiments of the present application, and the memory may be integrated into the processor or may be independent of the processor. In the case where the data center disaster recovery device 1000 does not include a memory, the data center disaster recovery device 1000 may have a processing function, and the memory may be disposed in other locations (e.g., a cloud system).
The processor 1010, memory 1020, and transceiver 1030 communicate with each other via internal communication paths to transfer control and/or data signals.
It is understood that although not shown, data center disaster recovery device 1000 can also include other devices, such as input devices, output devices, batteries, and the like.
Optionally, in some embodiments, the memory 1020 may store execution instructions for performing the methods of embodiments of the present application. The processor 1010 may execute instructions stored in the memory 1020 in conjunction with other hardware (e.g., transceiver 1030) to perform the steps of the method execution shown below, the specific operation and benefits of which may be found in the description of the method embodiments above.
The method disclosed in the embodiments of the present application may be applied to the processor 1010 or implemented by the processor 1010. The processor 1010 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps and logical blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the instructions from the memory and completes the steps of the above method in combination with its hardware.
It is to be appreciated that the memory 1020 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
Fig. 11 is a schematic block diagram of a data center disaster recovery device 1100 provided by the present application.
Alternatively, the specific form of the disaster recovery device 1100 in the data center may be a general purpose computer device or a chip in the general purpose computer device, which is not limited in the embodiment of the present application. As shown in fig. 11, the disaster recovery device for a data center includes a processing unit 1110 and a transceiver unit 1120.
Specifically, the data center disaster recovery device 1100 may be any network element related to the present application, and may implement functions that can be implemented by the network element. It should be appreciated that data center disaster recovery device 1100 may be a physical device, a component (e.g., an integrated circuit, a chip, etc.) of a physical device, or a functional module in a physical device.
In one possible design, the disaster recovery device 1100 of the data center may be the first network function set logic network element device in the above method embodiment, or may be a chip for implementing the function of the first network function set logic network element device in the above method embodiment.
For example, the transceiver unit 1120 is configured to receive first information, where the first information is used to instruct the first instance access and mobility management network element to terminate serving the terminal device, the first network function set logic network element is located in a first data center, the first instance access and mobility management network element is located in a second data center, and the second data center is different from the first data center; the processing unit 1110 is configured to determine, according to the first information, that the second instance access and mobility management network element serves the terminal device, where the second instance access and mobility management network element is located in the first data center, and the first instance access and mobility management network element and the second instance access and mobility management network element belong to the same access and mobility management network element set; the processing unit 1110 is further configured to determine a first correspondence, where the first correspondence is a correspondence between the next generation application protocol interface identifier AMF UE NGAP ID of the terminal device at the access and mobility management network element side and the second instance access and mobility management network element.
Optionally, the transceiver unit 1120 is further configured to receive second information, where the second information is uplink information of the terminal device; the processing unit 1110 is further configured to send second information to the second instance access and mobility management network element according to the first correspondence.
Optionally, the transceiver unit 1120 is further configured to obtain a next generation application protocol interface identifier RAN UE NGAP ID of the terminal device on the radio access network side; the processing unit 1110 is further configured to establish a second correspondence, where the second correspondence is a correspondence between the RAN UE NGAP ID and a first link of the first network function set logical network element.
Optionally, the transceiver unit 1120 is further configured to receive third information, where the third information is downlink information of the terminal device; the processing unit 1110 is further configured to send third information from the first link to a first radio access network according to the second correspondence, where the first radio access network is a radio access network serving the terminal device.
Optionally, the first instance access and mobility management network element terminates serving the terminal device, including at least one of: the link between the second data center and the first wireless access network fails; the first instance access and mobility management network element fails; the second data center fails or shuts down.
Optionally, the transceiver unit 1120 is specifically configured to receive the first information from a unified distributed database, where the unified distributed database is used for sharing data between the first data center and the second data center.
Optionally, the transceiver unit 1120 is specifically configured to receive the first information from a second network function set logic network element or a network repository function network element, where the second network function set logic network element is located in the second data center.
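The two reception paths described above, through the unified distributed database or directly from the second network function set logic network element or the network repository function, could be sketched as follows; the record layout and the polling approach are assumptions made only for illustration.

```python
# A minimal sketch of the two reception paths for the first information,
# assuming a shared key-value view of the unified distributed database and a
# direct notification callback; the names and record layout are hypothetical.
from typing import Callable, Dict, Optional


class FirstInfoReceiver:
    def __init__(self, shared_db: Dict[str, dict]) -> None:
        self._db = shared_db   # stands in for the unified distributed database
        self._seen: set = set()

    def poll_shared_database(self) -> Optional[dict]:
        """Database path: read 'first information' records published into the
        database shared by the first and second data centers."""
        for key, record in self._db.items():
            if record.get("type") == "first_information" and key not in self._seen:
                self._seen.add(key)
                return record
        return None

    def on_direct_notification(self, record: dict,
                               handler: Callable[[dict], None]) -> None:
        """Direct path: the second network function set logic network element
        (or the network repository function) pushes the record to us."""
        handler(record)


if __name__ == "__main__":
    db = {"evt-1": {"type": "first_information",
                    "failed_amf": "amf-instance-dc2-1"}}
    rx = FirstInfoReceiver(db)
    assert rx.poll_shared_database()["failed_amf"] == "amf-instance-dc2-1"
```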
Optionally, the transceiver unit 1120 is configured to obtain a third correspondence, where the third correspondence is a correspondence between the AMF UE NGAP ID and the first instance access and mobility management network element, and the third correspondence is determined by the second network function set logic network element; the processing unit 1110 is configured to establish the first correspondence according to the first information and the third correspondence.
Optionally, the transceiver unit 1120 is configured to receive fourth information, where the fourth information includes the first correspondence, and the first correspondence is determined by the second network function set logic network element; the processing unit 1110 is configured to obtain the first correspondence according to the fourth information.
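The two ways of determining the first correspondence described above, rebuilding it locally from the third correspondence versus accepting it ready-made inside the fourth information, can be contrasted in a short sketch; the dictionary encodings and field names are illustrative assumptions.

```python
# Sketch of two ways of obtaining the first correspondence: derive it from
# the third correspondence plus the first information, or accept it as
# carried in the fourth information. Layouts are assumptions, not the
# patent's encoding.
from typing import Dict


def derive_from_third(third: Dict[int, str], failed: str, replacement: str) -> Dict[int, str]:
    """Rebuild: every AMF UE NGAP ID bound to the failed instance in the
    third correspondence is rebound to the replacement instance."""
    return {uid: (replacement if inst == failed else inst) for uid, inst in third.items()}


def accept_from_fourth(fourth_information: Dict[str, Dict[int, str]]) -> Dict[int, str]:
    """Accept: the peer has already computed the first correspondence and
    ships it as a whole inside the fourth information."""
    return dict(fourth_information["first_correspondence"])


if __name__ == "__main__":
    third = {1001: "amf-instance-dc2-1", 1002: "amf-instance-dc2-1"}
    derived = derive_from_third(third, "amf-instance-dc2-1", "amf-instance-dc1-2")
    assert derived == {1001: "amf-instance-dc1-2", 1002: "amf-instance-dc1-2"}
    assert accept_from_fourth({"first_correspondence": derived}) == derived
```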
Optionally, the transceiver unit 1120 is further configured to send fifth information to the unified distributed database, where the fifth information includes at least one of the first correspondence and the second correspondence.
Optionally, the first link is a link newly added by the first network function set logic network element to the first radio access network, and the transceiver unit 1120 is further configured to send sixth information, where the sixth information includes interface information of the first link.
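As an illustration of the newly added first link, the sixth information essentially advertises the interface details of that link to the radio access network. The sketch below assumes an address/port pair (the standard NGAP SCTP port 38412 appears only as an example) and a hypothetical message shape; neither is prescribed by the embodiment.

```python
# Illustrative sketch of publishing interface information ('sixth
# information') for a newly added link toward the first radio access
# network; the transport details are an assumption for the example only.
from dataclasses import dataclass, asdict


@dataclass
class LinkInterfaceInfo:
    link_id: str
    local_address: str
    local_port: int


def build_sixth_information(link: LinkInterfaceInfo) -> dict:
    """Package the interface information of the newly added first link so
    the radio access network can start using it for this terminal device."""
    return {"type": "sixth_information", "interface": asdict(link)}


if __name__ == "__main__":
    info = build_sixth_information(LinkInterfaceInfo("link-1", "192.0.2.10", 38412))
    assert info["interface"]["local_port"] == 38412
```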
It should be understood that, when the data center disaster recovery device 1100 is the first network function set logic network element, the transceiver unit 1120 in the data center disaster recovery device 1100 may be implemented through a communication interface (such as a transceiver or an input/output interface), and the processing unit 1110 in the data center disaster recovery device 1100 may be implemented through at least one processor, for example, may correspond to the processor 1010 shown in fig. 10.
Optionally, the data center disaster recovery device 1100 may further include a storage unit, where the storage unit may be used to store instructions or data, and the processing unit may call the instructions or data stored in the storage unit to implement a corresponding operation.
It should also be understood that the specific process of each unit performing the corresponding steps has been described in detail in the above method embodiments, and is not described herein for brevity.
In another possible design, the data center disaster recovery device 1100 may be the second network function set logic network element in the above method embodiment, or may be a chip for implementing the functions of the second network function set logic network element in the above method embodiment.
For example, the processing unit 1110 is configured to determine that the first instance access and mobility management network element terminates serving the terminal device, where the second network function set logic network element and the first instance access and mobility management network element are located in a second data center; the transceiver unit 1120 is configured to send first information, where the first information is used by the first network function set logic network element to determine that the second instance access and mobility management network element serves the terminal device, the first network function set logic network element and the second instance access and mobility management network element are located in a first data center, and the second data center is different from the first data center.
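A minimal sketch of this side follows: detect that the first instance terminates service and emit the first information either into the unified distributed database or directly toward the peer. The failure checks, message fields, and publish interface are assumptions made for illustration, not the embodiment's signaling.

```python
# Hypothetical sketch of the second network function set logic network
# element: detect termination of the first instance AMF and emit the
# 'first information'; names and message shape are assumptions.
from typing import Callable, Iterable


def detect_termination(link_down: bool, instance_failed: bool, dc_down: bool) -> bool:
    """Any one of the listed conditions counts as the first instance AMF
    terminating service (link failure, instance failure, data center outage)."""
    return link_down or instance_failed or dc_down


def emit_first_information(failed_instance: str,
                           publish: Callable[[dict], None],
                           targets: Iterable[str] = ("uddb",)) -> dict:
    """Send the first information either to the unified distributed database
    or directly to the first network function set logic network element / NRF."""
    record = {"type": "first_information",
              "failed_amf": failed_instance,
              "targets": list(targets)}
    publish(record)
    return record


if __name__ == "__main__":
    sent = []
    emit_first_information("amf-instance-dc2-1", sent.append)
    assert detect_termination(link_down=False, instance_failed=True, dc_down=False)
    assert sent[0]["failed_amf"] == "amf-instance-dc2-1"
```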
According to the solution disclosed in this application, when one data center is unavailable, the AMF instance serving the terminal device is replaced with an AMF instance in the other data center without additional interface signaling, which ensures that disaster recovery of the data center proceeds smoothly while avoiding interruption or delay of service processing.
Optionally, the processing unit 1110 is further configured to allocate, to the terminal device, a next generation application protocol interface identifier AMF UE NGAP ID on the access and mobility management network element side; the processing unit 1110 is further configured to establish a third correspondence, where the third correspondence is a correspondence between the AMF UE NGAP ID and the first instance access and mobility management network element.
Optionally, the transceiver unit 1120 is further configured to receive second information, where the second information is uplink information of the terminal device; the processing unit 1110 is further configured to send the second information to the second instance access and mobility management network element according to the third correspondence.
Optionally, the processing unit 1110 is further configured to determine that the second instance access and mobility management network element serves the terminal device; the processing unit 1110 is further configured to modify the third correspondence into the first correspondence, where the first correspondence is a correspondence between the AMF UE NGAP ID and the second instance access and mobility management network element.
Optionally, the transceiver unit 1120 is further configured to send fourth information to a unified distributed database, where the unified distributed database is used for sharing data between the first data center and the second data center, and the fourth information includes the first correspondence.
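The bookkeeping described in the preceding paragraphs, allocating the AMF UE NGAP ID, keeping the third correspondence, rewriting it into the first correspondence on failover, and packaging it as the fourth information, could look roughly like the sketch below. The identifier allocation policy, class names, and record layout are hypothetical.

```python
# Sketch of the second network function set logic network element's
# bookkeeping: allocate an AMF UE NGAP ID, keep the third correspondence,
# rewrite it into the first correspondence on failover, and package it as
# the 'fourth information'. All names are illustrative assumptions.
import itertools
from typing import Dict


class SecondSetBookkeeper:
    def __init__(self) -> None:
        self._next_id = itertools.count(1)
        # third correspondence: AMF UE NGAP ID -> first instance AMF
        self.correspondence: Dict[int, str] = {}

    def allocate(self, first_instance: str) -> int:
        """Allocate an AMF UE NGAP ID and bind it to the first instance AMF."""
        amf_ue_ngap_id = next(self._next_id)
        self.correspondence[amf_ue_ngap_id] = first_instance
        return amf_ue_ngap_id

    def fail_over(self, failed: str, second_instance: str) -> Dict[int, str]:
        """Modify the third correspondence into the first correspondence
        (AMF UE NGAP ID -> second instance AMF)."""
        for uid, inst in self.correspondence.items():
            if inst == failed:
                self.correspondence[uid] = second_instance
        return self.correspondence

    def fourth_information(self) -> dict:
        """Payload pushed to the unified distributed database so the other
        data center can pick up the first correspondence."""
        return {"type": "fourth_information",
                "first_correspondence": dict(self.correspondence)}


if __name__ == "__main__":
    keeper = SecondSetBookkeeper()
    uid = keeper.allocate("amf-instance-dc2-1")
    keeper.fail_over("amf-instance-dc2-1", "amf-instance-dc1-2")
    assert keeper.fourth_information()["first_correspondence"][uid] == "amf-instance-dc1-2"
```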
Optionally, the transceiver unit 1120 is further configured to obtain a next generation application protocol interface identifier RAN UE NGAP ID of the terminal device on the radio access network side; the processing unit 1110 is further configured to determine a fourth correspondence, where the fourth correspondence is a correspondence between the RAN UE NGAP ID and a second link of the second network function set logic network element.
Optionally, the transceiver unit 1120 is further configured to receive third information, where the third information is downlink information of the terminal device; the transceiver unit 1120 is further configured to send the third information from the second link to the first radio access network according to the fourth correspondence, where the first radio access network is a radio access network serving the terminal device.
Optionally, the transceiver unit 1120 is further configured to send seventh information, where the seventh information includes at least one of the third correspondence and the fourth correspondence.
Optionally, the first instance access and mobility management network element terminating serving the terminal device includes at least one of the following: a link between the second data center and the first radio access network fails; the first instance access and mobility management network element fails; the second data center fails or shuts down.
Optionally, the transceiver unit 1120 is further configured to send the first information to a unified distributed database.
Optionally, the transceiver unit 1120 is further configured to send the first information to the first network function set logic network element or a network repository function network element.
It should be appreciated that, when the data center disaster recovery device 1100 is a second network function set logic network element, the transceiver unit 1120 in the data center disaster recovery device 1100 may be implemented through a communication interface (such as a transceiver or an input/output interface), for example, may correspond to the communication interface 1030 shown in fig. 10, and the processing unit 1110 in the data center disaster recovery device 1100 may be implemented through at least one processor, for example, may correspond to the processor 1010 shown in fig. 10.
Optionally, the data center disaster recovery device 1100 may further include a storage unit, where the storage unit may be used to store instructions or data, and the processing unit may call the instructions or data stored in the storage unit to implement a corresponding operation.
It should also be understood that the specific process of each unit performing the corresponding steps has been described in detail in the above method embodiments, and is not described herein for brevity.
It should also be understood that the apparatus 1100 may also be used to implement the functions of other network elements in the above method embodiments, such as the UDDB and the NRF, where the transceiver unit 1120 may be used to implement operations related to reception and transmission, and the processing unit 1110 may be used to implement operations other than reception and transmission; for details, refer to the above method embodiments, which are not listed here one by one.
In addition, in the present application, the data center disaster recovery device 1100 is presented in the form of functional modules. A "module" herein may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the described functions.
In a simple embodiment, those skilled in the art will recognize that the apparatus 1100 may take the form shown in fig. 10. The processing unit 1110 may be implemented by the processor 1010 shown in fig. 10; alternatively, if the computer device shown in fig. 10 includes a memory 1020, the processing unit 1110 may be implemented by the processor 1010 and the memory 1020. The transceiver unit 1120 may be implemented by the transceiver 1030 shown in fig. 10, where the transceiver 1030 includes a receiving function and a transmitting function. In particular, the processor is implemented by executing a computer program stored in the memory.
Alternatively, when the apparatus 1100 is a chip, the functions and/or implementation processes of the transceiver unit 1120 may also be implemented by pins, circuits, or the like. The memory may be a storage unit in the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip in the computer device, such as the memory 1020 shown in fig. 10, or a storage unit disposed in another system or device rather than in the computer device.
Various aspects or features of the present application can be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein encompasses a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media may include, but are not limited to: magnetic storage devices (e.g., hard disk, floppy disk, or magnetic tape, etc.), optical disks (e.g., compact Disk (CD), digital versatile disk (digital versatile disc, DVD), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), cards, sticks, key drives, etc.). Additionally, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.
According to the method provided by the embodiment of the application, the application further provides a computer program product, which comprises: computer program code which, when run on a computer, causes the computer to perform the method of any of the embodiments shown in fig. 7 to 9.
According to the method provided in the embodiments of the present application, there is further provided a computer readable medium storing a program code, which when run on a computer, causes the computer to perform the method of any one of the embodiments shown in fig. 7 to 9.
According to the method provided by the embodiment of the application, the application further provides a system which comprises the device or equipment.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or digital subscriber line (digital subscriber line, DSL)) or a wireless manner (e.g., infrared, radio, or microwave). The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a high-density digital video disc (digital video disc, DVD)), a semiconductor medium (e.g., a solid state disk (solid state disk, SSD)), or the like.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Furthermore, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with one another in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should also be understood that the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be further understood that the terms "first", "second", and the like in the embodiments of the present application are merely used to distinguish between different objects, for example, different "information", "devices", or "units". The specific objects and the correspondence between different objects should be determined by their functions and internal logic, and the terms do not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (26)

1. A method of disaster recovery for a data center, comprising:
the method comprises the steps that a first network function set logic network element receives first information, wherein the first information is used for indicating a first instance access and mobility management network element to terminate serving terminal equipment, the first network function set logic network element is located in a first data center, the first instance access and mobility management network element is located in a second data center, and the second data center is different from the first data center;
the first network function set logic network element determines that a second instance access and mobility management network element serves the terminal equipment according to the first information, the second instance access and mobility management network element is located in the first data center, and the first instance access and mobility management network element and the second instance access and mobility management network element belong to the same access and mobility management network element set;
the first network function set logic network element determines a first corresponding relation, wherein the first corresponding relation is a corresponding relation between a next generation application protocol interface identifier AMF UE NGAP ID of the terminal equipment at the access and mobility management network element side and the second instance access and mobility management network element.
2. The method according to claim 1, wherein the method further comprises:
the first network function set logic network element receives second information, wherein the second information is uplink information of the terminal equipment;
and the first network function set logic network element sends the second information to the second instance access and mobility management network element according to the first corresponding relation.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the first network function set logic network element acquires a next generation application protocol interface identifier (RAN UE NGAP ID) of the terminal equipment in a wireless access network;
and the first network function set logic network element establishes a second corresponding relation, wherein the second corresponding relation is the corresponding relation between the RAN UE NGAP ID and a first link of the first network function set logic network element.
4. A method according to claim 3, characterized in that the method further comprises:
the first network function set logic network element receives third information, wherein the third information is downlink information of the terminal equipment;
and the first network function set logic network element sends the third information from the first link to a first wireless access network according to the second corresponding relation, wherein the first wireless access network is a wireless access network serving the terminal equipment.
5. The method according to any of claims 1 to 4, wherein the first instance access and mobility management network element terminates serving the terminal device, comprising at least one of:
a link between the second data center and the first wireless access network fails;
the first instance access and mobility management network element fails;
the second data center fails or shuts down.
6. The method according to any of claims 1 to 5, wherein the first network function set logical network element receives first information, comprising:
the first network function set logic network element receives the first information from a unified distributed database, the unified distributed database being used for sharing data between the first data center and the second data center.
7. The method according to any of claims 1 to 5, wherein the first network function set logical network element receives first information, comprising:
the first network function set logic network element receives the first information from a second network function set logic network element or a network repository function network element, and the second network function set logic network element is located in the second data center.
8. The method according to any of claims 1 to 7, wherein the first network function set logical network element determining a first correspondence comprises:
the first network function set logic network element obtains a third corresponding relation, wherein the third corresponding relation is a corresponding relation between the AMF UE NGAP ID and the first instance access and mobility management network element, and the third corresponding relation is determined by the second network function set logic network element;
and the first network function set logic network element establishes the first corresponding relation according to the first information and the third corresponding relation.
9. The method according to any of claims 1 to 7, wherein the first network function set logical network element determining a first correspondence comprises:
the first network function set logic network element receives fourth information, wherein the fourth information comprises the first corresponding relation, and the first corresponding relation is determined by the second network function set logic network element;
and the first network function set logic network element acquires the first corresponding relation according to the fourth information.
10. The method according to any one of claims 1 to 9, further comprising:
the first network function set logic network element sends fifth information to the unified distributed database, wherein the fifth information comprises at least one of the first corresponding relation and the second corresponding relation.
11. The method according to any one of claims 1 to 10, wherein the first link is a link newly added by the first network function set logic network element to the first radio access network,
the method further comprises the steps of:
the first network function set logic network element sends sixth information, wherein the sixth information comprises interface information of the first link.
12. A method of disaster recovery for a data center, comprising:
a second network function set logic network element determines that a first instance access and mobility management network element terminates serving a terminal device, wherein the second network function set logic network element and the first instance access and mobility management network element are located in a second data center;
the second network function set logic network element sends first information, wherein the first information is used by a first network function set logic network element to determine that a second instance access and mobility management network element serves the terminal device, the first network function set logic network element and the second instance access and mobility management network element are located in a first data center, and the second data center is different from the first data center.
13. The method according to claim 12, wherein the method further comprises:
the second network function set logic network element allocates, to the terminal equipment, a next generation application protocol interface identifier AMF UE NGAP ID corresponding to the access and mobility management network element side;
and the second network function set logic network element establishes a third corresponding relation, wherein the third corresponding relation is the corresponding relation between the AMF UE NGAP ID and the first instance access and mobility management network element.
14. The method of claim 13, wherein the method further comprises:
the second network function set logic network element receives second information, wherein the second information is uplink information of the terminal equipment;
and the second network function set logic network element sends second information to the second instance access and mobility management network element according to the third corresponding relation.
15. The method according to claim 13 or 14, characterized in that the method further comprises:
the second network function set logic network element determines that the second instance access and mobility management network element serves the terminal equipment;
the second network function set logic network element modifies the third corresponding relation into a first corresponding relation, wherein the first corresponding relation is a corresponding relation between the AMF UE NGAP ID and the second instance access and mobility management network element.
16. The method of claim 15, wherein the method further comprises:
the second network function set logic network element sends fourth information to a unified distributed database, wherein the fourth information comprises the first corresponding relation, and the unified distributed database is used for sharing data between the first data center and the second data center.
17. The method according to any one of claims 12 to 16, further comprising:
the second network function set logic network element acquires a next generation application protocol interface identifier RAN UE NGAP ID corresponding to the terminal equipment at a wireless access network side;
and the second network function set logic network element determines a fourth corresponding relation, wherein the fourth corresponding relation is the corresponding relation between the RAN UE NGAP ID and a second link of the second network function set logic network element.
18. The method of claim 17, wherein the method further comprises:
the second network function set logic network element receives third information, wherein the third information is downlink information of the terminal equipment;
and the second network function set logic network element sends third information from the second link to the first wireless access network according to the fourth corresponding relation, wherein the first wireless access network is a wireless access network serving the terminal equipment.
19. The method according to any one of claims 12 to 18, further comprising:
the second network function set logic network element sends seventh information, wherein the seventh information comprises at least one of the third corresponding relation and the fourth corresponding relation.
20. The method according to any of claims 12 to 19, wherein the first instance access and mobility management network element terminates serving the terminal device, comprising at least one of:
the link between the second data center and the first wireless access network fails;
the first instance access and mobility management network element fails;
the second data center fails or shuts down.
21. The method according to any of claims 12 to 20, wherein the second network function set logical network element sends the first information comprising:
and the second network function set logic network element sends the first information to the unified distributed database.
22. The method according to any of claims 12 to 21, wherein the second network function set logical network element sends the first information comprising:
the second network function set logic network element sends the first information to the first network function set logic network element or a network repository function network element.
23. A data center disaster recovery device, comprising:
a memory for storing computer instructions;
a processor for executing computer instructions stored in the memory, to cause the data center disaster recovery device to perform the method of any one of claims 1 to 11, or,
causing the data center disaster recovery device to perform the method of any one of claims 12 to 22.
24. A system for disaster recovery of a data center is characterized by comprising a first network function set logic network element and a second network function set logic network element, wherein,
the first network function set logic network element is configured to perform the method of any one of claims 1 to 11,
the second network function set logic network element is configured to perform the method of any of claims 12 to 22.
25. A computer readable storage medium, having stored thereon a computer program which, when executed by a communication device, causes the method of any of claims 1 to 22 to be performed.
26. A computer program product containing instructions which, when run on a computer, cause the method of any one of claims 1 to 22 to be performed.