WO2018049567A1 - Application migration method, apparatus, and system - Google Patents

Application migration method, apparatus, and system

Info

Publication number
WO2018049567A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual machine
controller
application migration
interruption
data
Prior art date
Application number
PCT/CN2016/098883
Other languages
English (en)
French (fr)
Inventor
张丰裕
金爱进
王岩
徐长春
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2016/098883
Publication of WO2018049567A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 - Interconnection of networks

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to an application migration method, apparatus, and system.
  • In the prior art, a unified coordinating network element, the Mobility Coordinator (English: Mobility Coordinator, MC for short), is used to coordinate user equipment (English: User Equipment, UE for short) mobility events and application mobility events, so as to implement application migration from one virtual machine (English: Virtual Machine, VM for short) to another VM, with data caching started before the migration begins.
  • Because data caching is started before the VM migration, the caching time is close to the total VM migration time; the uplink data is therefore cached for too long and the service interruption is too long, resulting in a poor user experience.
  • If uplink data is not cached before the VM is migrated, uplink data may continue to be sent to the source side, which may cause packet loss.
  • current application migration solutions are prone to packet loss or long service interruptions.
  • the embodiments of the present invention provide an application migration method, device, and system, which can avoid the problem of packet loss or long service interruption time occurring during application migration.
  • an embodiment of the present invention provides an application migration method, including:
  • When a first virtual machine needs to perform application migration, the first controller determines a second virtual machine to which the application is to be migrated; acquires an interruption event in the application migration process from the first virtual machine to the second virtual machine; and sends a notification message indicating the interruption event to a second controller, so that the second controller controls, based on the interruption event, the caching of data sent to the first virtual machine during the application migration process.
  • the first controller may be a controller of an application side (user plane), and the second controller may be a controller of a network side (control plane).
  • the acquired interrupt event may include an interruption time point and an interruption duration during application migration, and the like. Therefore, the second controller can control the buffering of the data in the interrupt process corresponding to the interrupt event based on the interrupt event information such as the interrupt time point and the interrupt duration to reduce the cache time, thereby reducing the service interruption time.
  • Optionally, the interruption event may be predicted, and the notification message includes an interruption time point and an interruption duration. Acquiring the interruption event in the application migration process from the first virtual machine to the second virtual machine may specifically be: the first controller acquires migration state parameters of the application migration process from the first virtual machine to the second virtual machine, and predicts the interruption time point and the interruption duration of the application migration process according to the migration state parameters.
  • The migration state parameters may include the dirty page generation rate, the link bandwidth, the amount already migrated, the total amount to be migrated, and the like; based on these, the total migration time, the interruption duration, the interruption time point, and so on can be predicted. Further, the interruption time point may be derived from the predicted total migration time, the interruption duration, and the current time.
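As a minimal illustration of this prediction step (not part of the patent text), the sketch below estimates the total migration time, interruption duration, and interruption time point from the migration state parameters named above; the function name, parameter names, and the simple convergence model are assumptions.

```python
import time

def predict_interrupt_event(total_bytes, migrated_bytes, bandwidth_bytes_per_s,
                            dirty_rate_bytes_per_s, downtime_threshold_bytes):
    """Hypothetical sketch: estimate the interruption event of a pre-copy migration."""
    if bandwidth_bytes_per_s <= dirty_rate_bytes_per_s:
        return None  # with these parameters the iterative copy would never converge

    remaining = total_bytes - migrated_bytes
    # Iterative copying shrinks the remaining data at (bandwidth - dirty rate)
    # until it falls below the threshold that triggers the final stop-and-copy.
    copy_time = max(remaining - downtime_threshold_bytes, 0) / (
        bandwidth_bytes_per_s - dirty_rate_bytes_per_s)
    # Interruption duration: time to copy the last dirty pages (plus CPU state).
    interruption_duration = downtime_threshold_bytes / bandwidth_bytes_per_s
    total_migration_time = copy_time + interruption_duration
    # Interruption time point = current time + total migration time - interruption duration.
    interruption_time_point = time.time() + total_migration_time - interruption_duration
    return {
        "interruption_time_point": interruption_time_point,
        "interruption_duration": interruption_duration,
        "total_migration_time": total_migration_time,
    }
```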
  • Optionally, the interruption event may also be obtained by monitoring the migration state in real time. In that case, acquiring the interruption event in the application migration process from the first virtual machine to the second virtual machine may specifically be: the first controller monitors whether an interruption event occurs during the application migration from the first virtual machine to the second virtual machine. Further, sending the notification message indicating the interruption event to the second controller may specifically be: when the interruption event is detected, the first controller sends the notification message indicating the interruption event to the second controller.
  • Optionally, determining the second virtual machine to which the application is to be migrated may specifically be: the second controller determines a target gateway and sends the identifier information of the target gateway to the first controller; the first controller receives the identifier information of the target gateway sent by the second controller and determines, according to the identifier information of the target gateway, the second virtual machine to which the application is to be migrated. The target gateway and the second virtual machine are thereby determined.
  • the target gateway is a gateway after the application is migrated.
  • Optionally, determining the second virtual machine to which the application is to be migrated may instead be: the first controller acquires the location information of the user equipment and determines, according to the location information, the second virtual machine to which the application is to be migrated. Further, the first controller may send the location information of the second virtual machine to the second controller, so that the second controller determines the target gateway according to the location information of the second virtual machine. The target gateway and the second virtual machine are thereby determined.
  • Optionally, the second controller may further send a service stop command to the first controller; the first controller receives the service stop command sent by the second controller, and, in response to the service stop command, controls the first virtual machine to stop the service, performs the memory copy and central processing unit (English: Central Processing Unit, CPU for short) synchronization between the first virtual machine and the second virtual machine, and controls the second virtual machine to start the service when the synchronization is completed.
  • an embodiment of the present invention further provides an application migration method, including:
  • The second controller receives a notification message sent by the first controller, where the notification message indicates an interruption event in the application migration process from a first virtual machine that needs to perform application migration to a second virtual machine; and, based on the interruption event, the second controller controls the caching of data sent to the first virtual machine during the application migration process.
  • The first controller may be a controller on the application side, and the second controller may be a controller on the network side.
  • the acquired interrupt event may include an interruption time point and an interruption duration during application migration, and the like. Therefore, the second controller can control the buffering of the data in the interrupt process corresponding to the interrupt event based on the interrupt event information such as the interrupt time point and the interrupt duration.
  • Optionally, the notification message may include the interruption time point and the interruption duration of the interruption event, and the data cached during the interruption may be cached at the first gateway corresponding to the first virtual machine, that is, the gateway before the application migration.
  • In that case, controlling, based on the interruption event, the caching of data sent to the first virtual machine during the application migration process may specifically be: the second controller generates a data cache command including the interruption time point and the interruption duration, and sends the data cache command to the first gateway corresponding to the first virtual machine, so that the first gateway caches, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Optionally, the notification message may include the interruption time point and the interruption duration of the interruption event, and the data cached during the interruption may be cached at the second gateway, that is, the gateway after the application migration. In that case, controlling, based on the interruption event, the caching of data sent to the first virtual machine during the application migration process may specifically be: the second controller generates a data cache command including the interruption time point and the interruption duration, and sends the data cache command to the second gateway selected by the second controller, so that the second gateway caches, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • That is, the cache location of the data sent to the first virtual machine during the application migration process may be the first gateway corresponding to the first virtual machine (that is, the gateway before the application migration), or the second gateway corresponding to the second virtual machine (that is, the gateway after the application migration), or the base station corresponding to the second virtual machine (that is, the base station after the application migration).
  • the above cache device may reserve a cache space according to the interrupt duration in the application migration process to perform data caching when the interrupt time point arrives.
  • the cache memory can match the interrupt duration, that is, the cache device can reserve memory for buffering the uplink data based on the interrupt duration.
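On the cache device side (first gateway, second gateway, or base station), the reservation and buffering described above could look roughly like the following; the class, sizing rule, and method names are illustrative assumptions only.

```python
from collections import deque

class UplinkBuffer:
    """Sketch: reserve a buffer sized by the interruption duration and hold
    uplink packets destined for the first virtual machine during that window."""

    def __init__(self, interruption_duration_s, expected_rate_bps):
        # Reserve memory proportional to the predicted interruption duration.
        self.capacity_bytes = int(interruption_duration_s * expected_rate_bps / 8)
        self.used_bytes = 0
        self.packets = deque()

    def buffer(self, packet: bytes) -> bool:
        """Cache a packet if reserved space remains; otherwise report failure."""
        if self.used_bytes + len(packet) > self.capacity_bytes:
            return False
        self.packets.append(packet)
        self.used_bytes += len(packet)
        return True

    def drain(self, forward):
        """After the migration completes, forward everything to the second VM."""
        while self.packets:
            forward(self.packets.popleft())
        self.used_bytes = 0
```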
  • Optionally, the second controller may obtain the location information of the user equipment, determine the second gateway according to the location information, and send the identifier information of the second gateway to the first controller, so that the first controller determines, according to the identifier information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated. The second gateway and the second virtual machine are thereby determined.
  • Optionally, the second controller may instead receive the location information of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated, and determine the second gateway according to the location information of the second virtual machine. In this case the second virtual machine is selected by the first controller; the second gateway and the second virtual machine are thereby determined.
  • Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event, and the data cached during the interruption may be cached at the base station corresponding to the second virtual machine, that is, the base station after the application migration. In that case, controlling, based on the interruption event, the caching of data sent to the first virtual machine during the application migration process may specifically be: the second controller generates a data cache command including the interruption time point and the interruption duration, and sends the data cache command to the base station corresponding to the second virtual machine, so that the base station caches, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Optionally, the data cache command may be sent to the cache device (the first gateway, the second gateway, or the base station) immediately, or it may be sent on a timer set according to the interruption time point and dispatched when the timer expires. If timed sending is used, the second controller can monitor whether the interruption time point has arrived before sending the data cache command, and perform the step of sending the data cache command when the interruption time point arrives. The data cache command is sent to the cache device for data caching so as to reduce the caching time and thereby the service interruption time; the timed option is sketched below.
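A minimal sketch of the timed-sending option on the second controller; the function name and the send_cache_command callback are hypothetical, and the actual transport to the gateway or base station is left abstract.

```python
import threading
import time

def schedule_cache_command(interruption_time_point, interruption_duration,
                           send_cache_command):
    """Sketch: hold the data cache command until the predicted interruption
    time point arrives, then dispatch it to the cache device."""
    delay = max(interruption_time_point - time.time(), 0)
    timer = threading.Timer(
        delay,
        send_cache_command,  # e.g. delivers the command to the GW-U or eNB
        kwargs={"interruption_duration": interruption_duration},
    )
    timer.start()
    return timer  # caller may cancel() the timer if the migration plan changes
```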
  • Optionally, the second controller may further send a service stop command to the first controller, so that the first controller, based on the service stop command, controls the memory copy and CPU synchronization between the first virtual machine and the second virtual machine. The service stop command forces the migration to enter its last iteration, reducing the total migration time.
  • Optionally, when the application migration is completed, the second controller may update the data forwarding rule; the updated data forwarding rule indicates that the data cached during the application migration process is forwarded to the second virtual machine.
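Conceptually, the forwarding-rule update amounts to re-pointing the uplink destination from the first VM to the second VM and flushing the cached packets, as in this illustrative sketch (the rule table, parameters, and callback are hypothetical).

```python
def complete_migration(forwarding_table, session_id, second_vm_addr,
                       cached_packets, forward):
    """Sketch: update the data forwarding rule and flush buffered uplink data
    to the second virtual machine once the application migration completes."""
    # Re-point the uplink forwarding rule from the first VM to the second VM.
    forwarding_table[session_id] = second_vm_addr
    # Forward the data cached during the interruption to the second VM.
    for packet in cached_packets:
        forward(second_vm_addr, packet)
```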
  • The embodiment of the present invention further provides an application migration device, which may be specifically configured in the foregoing first controller, and includes a determining module, an event acquiring module, and a sending module; by using these modules, the application migration device can implement some or all of the steps of the application migration method of the first aspect.
  • The embodiment of the present invention further provides an application migration device, which may be specifically configured in the foregoing second controller, and includes a message receiving module and a cache control module; by using these modules, the application migration device can implement some or all of the steps of the application migration method of the second aspect.
  • an embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a program, and the program includes some or all of the steps of the application migration method of the first aspect.
  • an embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a program, and the program includes some or all of the steps of the application migration method of the second aspect.
  • the embodiment of the present invention further provides a controller, including: a communication interface, a memory, and a processor, where the processor is respectively connected to the communication interface and the memory; wherein
  • the memory is used to store driver software
  • the processor reads the driver software from the memory and performs some or all of the steps of the application migration method of the first aspect described above by the driver software.
  • the embodiment of the present invention further provides a controller, including: a communication interface, a memory, and a processor, where the processor is respectively connected to the communication interface and the memory; wherein
  • the memory is used to store driver software
  • the processor reads the driver software from the memory and performs some or all of the steps of the application migration method of the second aspect described above by the driver software.
  • In a ninth aspect, the embodiment of the present invention further provides an application migration system, including: a first controller, a second controller, a first virtual machine, and a second virtual machine; wherein
  • the first controller is configured to perform part or all of the steps of the application migration method of the foregoing first aspect
  • the second controller is configured to perform some or all of the steps of the application migration method of the second aspect above.
  • In the embodiments of the present invention, when a first virtual machine needs to perform application migration, the first controller on the application side may determine the second virtual machine to which the application is to be migrated, acquire the interruption event in the application migration process from the first virtual machine to the second virtual machine, and send a notification message indicating the interruption event to the second controller on the network side, so that the second controller can, based on the interruption event, control the caching of data sent to the first virtual machine during the migration process. The interruption event in the application migration process is thus obtained and reported to the network side, so that the network side can buffer uplink data packets in time to ensure reliable session continuity and prevent packet loss, and the data caching time can be kept as close as possible to the interruption time of the virtual machine migration, shortening the data caching time.
  • FIG. 1 is a schematic diagram of application migration according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of an application migration method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of another application migration method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a scenario of application migration according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of interaction of an application migration method in the scenario of FIG. 4;
  • FIG. 6 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 4;
  • FIG. 7 is a schematic diagram of another application migration scenario provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of interaction of an application migration method in the scenario of FIG. 7;
  • FIG. 9 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 4;
  • FIG. 10 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 7;
  • FIG. 11 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 4;
  • FIG. 12 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 7;
  • FIG. 13 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 4;
  • FIG. 14 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 7;
  • FIG. 15 is a schematic structural diagram of an application migration apparatus according to an embodiment of the present disclosure.
  • FIG. 16 is a schematic structural diagram of another application migration apparatus according to an embodiment of the present invention.
  • FIG. 17 is a schematic structural diagram of an application migration system according to an embodiment of the present invention.
  • FIG. 18 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • FIG. 19 is a schematic structural diagram of another controller according to an embodiment of the present invention.
  • CDMA: Code Division Multiple Access
  • WCDMA: Wideband Code Division Multiple Access
  • TD-SCDMA: Time Division-Synchronous Code Division Multiple Access
  • UMTS: Universal Mobile Telecommunications System
  • LTE: Long Term Evolution
  • The user equipment (English: User Equipment, UE for short) may also be referred to as a terminal, a mobile station (English: Mobile Station, MS for short), or a mobile terminal. It can communicate with one or more core networks through a radio access network (RAN). The user equipment may be a mobile terminal, such as a mobile phone (or "cellular" phone) or a computer with a mobile terminal, or a portable, pocket-sized, handheld, computer built-in, or in-vehicle mobile device that exchanges voice and/or data with the radio access network.
  • The base station may be a base station in GSM or CDMA, such as a base transceiver station (English: Base Transceiver Station, BTS for short), or a base station in WCDMA, such as a NodeB, or an evolved base station in LTE, such as an eNB or e-NodeB (evolutional Node B), or a base station in a future network; this is not limited in the embodiments of the present invention.
  • FIG. 1 is a schematic diagram of application migration according to an embodiment of the present invention.
  • As shown in FIG. 1, the application runs on a virtual machine (VM); once the application needs to be migrated from the source VM to the target VM, the migration can be started.
  • the memory data of the source VM needs to be sent to the target VM during migration to ensure the continuity of the service provided by the VM, which requires multiple rounds of iteration.
  • the first iteration can transmit all the memory data in the source VM.
  • Subsequent iterations will iteratively copy the newly updated data, such as the dirty page data written by the VM.
  • The process repeats until the remaining memory data, that is, the dirty pages, is small enough.
  • While the memory is being iteratively copied, the running of the program is not interrupted.
  • When the remaining VM memory data is small, for example when the number of dirty pages is lower than a preset number threshold or the memory data size is lower than a preset memory threshold, the remaining memory data is copied in the last round of iteration.
  • In the last round, the VM being migrated is suspended, that is, the source VM no longer updates memory, and other non-memory data (such as CPU and network state) is also sent to the target VM. The service is interrupted during this period; the interruption duration of the migration is also known as the downtime.
  • Two quantities characterize the migration: the total migration time, that is, the time from the start of the migration until the target VM provides services; and the interruption duration, that is, the time during which the source VM stops serving while the last dirty-page iteration and CPU synchronization are performed. The shorter the interruption duration, the less noticeable it is to the user and the better the business continuity.
  • Linux uses a page as the cache unit; when a process modifies data in the cache, the kernel marks the page as dirty.
  • The total migration time may be determined by the size of the migrated memory, the service type, the link bandwidth, the dirty page generation rate, and similar parameters, and the interruption duration may be determined according to parameters such as the dirty page generation rate, the link bandwidth, the service type, and the number of iterations.
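For orientation only, the dependence on these parameters can be written as a rough pre-copy estimate (a simplification added here, not a formula from the patent), where M is the memory to be migrated, B the link bandwidth, R the dirty page generation rate, M_th the remaining-data threshold that triggers the final round, and S the CPU/device state size:

```latex
T_{\text{total}} \approx \frac{M - M_{\text{th}}}{B - R} + T_{\text{down}},
\qquad
T_{\text{down}} \approx \frac{M_{\text{th}} + S}{B}
```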
  • The first controller involved in the embodiments of the present invention may be an application-side controller, such as a cloud controller (Cloud controller), for example a Management and Orchestration (MANO) entity; the second controller may be the controller on the network side, that is, the control plane (English: Control Plane, CP for short).
  • The VM is actually migrated from one host to another under the control and management of the hypervisor in the host machine, and the VM service transitions smoothly during the migration process; the running VM itself does not perceive the migration process.
  • the Libvirt interface in the host can provide a unified interface for different hypervisor technologies, such as an open source kernel-based virtual machine (Kernel-based Virtual Machine, KVM for short), Xen, and the like.
  • Cloud controllers can control VM migration through the Libvirt client and sense the state of VM migration in real time.
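As a rough sketch of such real-time sensing through the libvirt interface (this uses the libvirt Python bindings; the job-statistics field names shown are typical of libvirt/QEMU but should be treated as assumptions here):

```python
import libvirt  # libvirt Python bindings

def poll_migration_stats(domain_name, uri="qemu:///system"):
    """Sketch: read live migration job statistics for a migrating domain."""
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(domain_name)
        stats = dom.jobStats()  # statistics of the currently running migration job
        return {
            "memory_total": stats.get("memory_total"),
            "memory_remaining": stats.get("memory_remaining"),
            "memory_dirty_rate": stats.get("memory_dirty_rate"),  # pages per second
        }
    finally:
        conn.close()
```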
  • the VM migration technology includes a pre-copy, a post-copy, and a log system-based migration technology.
  • the pre-copy mode is taken as an example to describe the application migration process.
  • the embodiment of the invention discloses an application migration method, a controller and a system, which can reduce packet loss occurring during the application migration process and reduce the time of service interruption. The details are explained below.
  • FIG. 2 is a schematic flowchart of an application migration method according to an embodiment of the present invention.
  • The method of this embodiment may be specifically applied in the controller on the application side (user plane), that is, the first controller. As shown in FIG. 2, the application migration method of this embodiment includes the following steps:
  • When the first virtual machine needs to perform application migration, determine the second virtual machine to which the application is to be migrated.
  • Optionally, determining the second virtual machine to which the application is to be migrated may be: receiving the identifier information of the target gateway sent by the second controller, where the target gateway is determined by the second controller; and determining, according to the identifier information of the target gateway, the second virtual machine to which the application is to be migrated.
  • For example, the second controller, such as the CP, may select a target gateway user plane function (English: User Plane Function of Gateway, GW-U for short), that is, the target gateway, and notify the application side of the selected GW-U information; the first controller, such as the Cloud controller, can then select the target VM, that is, the second virtual machine, according to the target GW-U.
  • Optionally, determining the second virtual machine to which the application is to be migrated may instead specifically be: obtaining the location information of the user equipment, and determining, according to the location information, the second virtual machine to which the application is to be migrated. Further, the first controller may send the location information of the second virtual machine to the second controller, so that the second controller determines the target gateway according to the location information of the second virtual machine. Specifically, the Cloud controller may first select the target VM location according to the UE location information and notify the network side, and the CP then selects a suitable GW-U as the target gateway by combining the current location information of the UE and the target VM location information.
  • Optionally, the migration state parameters may be obtained in real time from the hypervisor layer and may include the total amount of data to be migrated, the amount already migrated, the link bandwidth, the dirty page generation rate, and so on; these parameters are used to predict the point in time at which the first virtual machine, that is, the source VM, stops serving, namely the interruption time point. The interruption time point may be derived from the predicted total migration time, the interruption duration, and the current time.
  • Optionally, acquiring the interruption event in the application migration process from the first virtual machine to the second virtual machine may specifically be: monitoring whether an interruption event occurs during the application migration from the first virtual machine to the second virtual machine.
  • Correspondingly, sending the notification message indicating the interruption event to the second controller may specifically be: when the interruption event is detected, sending the notification message indicating the interruption event to the second controller. That is, the interruption event in the application migration process can be detected in real time, and the second controller is notified, when the interruption event is detected, so that it can control the data caching. Further, when the end of the interruption event is detected, the second controller may be notified to end the data caching.
  • Optionally, the first controller may further receive a service stop command sent by the second controller; in response to the service stop command, it controls the first virtual machine to stop serving, performs the memory copy and CPU synchronization between the first virtual machine and the second virtual machine, and, when the synchronization is completed, controls the second virtual machine to start the service.
  • In a specific implementation, the Libvirt driver can be modified to control the source VM to immediately stop the current iterative process and enter the last migration round, and this capability can be exposed through the Nova API; that is, the controller can directly control the source VM through the Libvirt interface to immediately stop the current iterative process and perform the last dirty-page iteration and CPU state synchronization, thus reducing the total iteration time.
  • For example, the CP may send a service stop command to the Cloud controller to command the source VM to immediately stop the current iterative process, enter the last iteration, and perform dirty-page copying and CPU synchronization; when the synchronization is completed, the second virtual machine is started and the service is provided through the target VM, reducing the total migration time.
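The patent relies on a modified Libvirt driver exposed through the Nova API for this "enter the last iteration now" command; as a rough approximation with stock libvirt (an assumption made here, not the patent's mechanism), one can relax the downtime budget and pause the source domain so that no new dirty pages are produced and the final copy round plus CPU-state synchronization can run immediately:

```python
import libvirt

def force_final_iteration(domain_name, max_downtime_ms=300, uri="qemu:///system"):
    """Approximation sketch: stop dirty-page growth so the migration converges now."""
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(domain_name)
        # Let the hypervisor switch over once the remaining data fits this downtime budget.
        dom.migrateSetMaxDowntime(max_downtime_ms, 0)
        # Pausing the guest stops memory updates, so the remaining dirty pages
        # can be copied in a single final round.
        dom.suspend()
    finally:
        conn.close()
```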
  • FIG. 3 is a schematic flowchart of an application migration method according to another embodiment of the present invention.
  • The method of this embodiment may be specifically applied in the controller on the network side, that is, the second controller, such as the foregoing CP. As shown in FIG. 3, the application migration method of this embodiment includes the following steps:
  • 201 Receive a notification message sent by the first controller, where the notification message indicates an interruption event in an application migration process from the first virtual machine to the second virtual machine that needs to perform application migration.
  • 202: Control, based on the interruption time point and the interruption duration of the interruption event, the caching of data sent to the first virtual machine during the application migration process.
  • Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event; controlling, based on the interruption event, the caching of data sent to the first virtual machine during the application migration process may be: generating a data cache command including the interruption time point and the interruption duration, and sending the data cache command to the second gateway selected by the second controller, so that the second gateway caches, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Optionally, the second controller may acquire the location information of the user equipment, determine the second gateway according to the location information, and send the identifier information of the second gateway to the first controller, so that the first controller determines, according to the identifier information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated.
  • Optionally, the second controller may instead receive, from the first controller, the location information of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated, and determine the second gateway according to the location information of the second virtual machine.
  • That is, the location of the uplink data cache in the application migration process may be the first gateway corresponding to the first virtual machine (that is, the gateway before the application migration), or the second gateway corresponding to the second virtual machine (that is, the gateway after the application migration), or the base station corresponding to the second virtual machine (that is, the base station after the application migration).
  • the cached data may be data that needs to be sent to the first virtual machine during the application migration interruption, such as uplink data sent by the UE to the first virtual machine.
  • Optionally, the notification message further includes the interruption time point of the interruption event; before sending the data cache command, the second controller may monitor whether the interruption time point has arrived, and perform the step of sending the data cache command when the interruption time point arrives.
  • Alternatively, the second controller may send the cache command to the cache device (such as the first gateway, the second gateway, or the base station) immediately after receiving the interruption event message, so that the cache device itself detects the arrival of the interruption time point and then performs data buffering based on the interruption duration.
  • Optionally, the second controller may further send a service stop command to the first controller, so that the first controller, based on the service stop command, controls the memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine.
  • the second controller may further update the data forwarding rule when the application migration is completed.
  • the updated data forwarding rule indicates that the data cached in the application migration process is forwarded to the second virtual machine.
  • In summary, when a first virtual machine needs to perform application migration, the first controller on the application side may determine the second virtual machine to which the application is to be migrated, acquire the interruption event in the application migration process from the first virtual machine to the second virtual machine, and send a notification message indicating the interruption event to the second controller on the network side, so that the second controller can, based on the interruption event, control the caching of data sent to the first virtual machine during the migration process. The interruption event in the application migration process is thus obtained and reported to the network side, so that the network side can buffer uplink data packets in time to ensure reliable session continuity and prevent packet loss, and the data caching time can be kept as close as possible to the interruption time of the virtual machine migration, shortening the data caching time.
  • FIG. 4 is a schematic diagram of a scenario of application migration according to an embodiment of the present invention.
  • the UE moves in the scenario and triggers edge application migration.
  • the UE mobile handover may be specifically S1 handover and X2 handover.
  • the embodiment of the present invention uses S1 handover as an example for description.
  • the service needs to be migrated from the source VM, that is, the first virtual machine, to the target VM, that is, the second virtual machine.
  • the source GW-U is the gateway before the application migration, that is, the first gateway;
  • the target GW-U is the gateway after the application migration, that is, the second gateway;
  • The target eNB is the base station after the application migration, that is, the base station corresponding to the second virtual machine.
  • FIG. 5 is a schematic diagram of interaction of an application migration method in the scenario of FIG.
  • In this embodiment, the second controller, such as the CP, selects the target GW-U and notifies the application side, and the target VM can then be selected according to the selected target GW-U.
  • the application migration method in the embodiment of the present invention may include the following steps:
  • The UE moves and sends a measurement report to the source eNB.
  • the measurement data can be obtained through the air interface measurement, and a measurement report (Measure report) can be generated, and the measurement report including the measurement data is reported to the source eNB.
  • the source eNB judges the UE movement based on the measurement data, thereby making a handover decision.
  • The measurement data includes the location information at the time the UE performs the air interface measurement.
  • the source eNB sends a UE mobility handover request to the CP.
  • the CP selects a target GW-U according to the location information of the UE.
  • The source eNB determines, according to the measurement data reported by the UE, that the UE has moved; when application migration is required, a UE mobility handover request may be generated and sent to the network-side CP. After receiving the UE mobility handover request, the CP may select a GW-U as the target GW-U based on the UE location information.
  • the CP sends a handover coordination request to the Cloud controller.
  • the handover cooperation request may include selected user identification information such as a UE ID and information of the selected target GW-U.
  • the UE ID can be used to determine an application to which the UE is connected, and the application side Cloud controller can maintain a correspondence between the UE ID and the connected application.
  • The selected target GW-U information may include the Internet Protocol (English: Internet Protocol, IP for short) address through which the target GW-U connects to the application side.
  • the Cloud controller selects a target VM.
  • the network side and the application side can synchronize their respective processes. For example, for the network side, a process of sending a session creation request, creating an indirect data forwarding tunnel, eNB handover, radio bearer creation, bearer update, and the like may be performed.
  • the Cloud controller can select a VM as the target VM to be migrated according to the UE location information.
  • the target VM may be selected based on a selection principle such as path optimization or distance.
  • the target VM may be selected based on network topology and service requirements pre-configured on the Cloud controller and based on UE location information and the relative load of each computing node.
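As a toy illustration of such a selection policy (distance to the UE weighted against compute-node load; the scoring, field names, and function are assumptions added here, not specified by the patent):

```python
import math

def select_target_vm(candidate_vms, ue_location):
    """Sketch: pick the candidate VM with the best distance/load trade-off.

    candidate_vms: list of dicts such as {"vm_id": "...", "x": 0.0, "y": 0.0, "load": 0.3}
    ue_location:   (x, y) tuple describing the UE's current position
    """
    def score(vm):
        distance = math.hypot(vm["x"] - ue_location[0], vm["y"] - ue_location[1])
        return distance * (1.0 + vm["load"])  # penalize heavily loaded compute nodes

    return min(candidate_vms, key=score)
```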
  • the Cloud controller may return a handover cooperative response message to the CP, where the response message may include the identifier and location information of the selected target VM, and the like.
  • the VM begins the migration, making memory copies and dirty page iterations.
  • the Cloud controller can enable the VM migration interrupt event prediction function.
  • Through the default interface (such as the Libvirt client mentioned above), the Cloud controller can obtain the migration state parameters, such as the dirty page generation rate, the link bandwidth, the amount migrated, and the total migration amount, and use them to predict the interruption event during the VM migration, including the predicted interruption time point and the interruption duration.
  • the interruption time point can be obtained according to the predicted total migration time, the interruption duration, and the current time.
  • the Cloud controller sends a notification message including the interrupt event to the CP.
  • the notification message including the predicted interruption time point and the interruption duration is sent to the CP, indicating the service stop time of the source VM.
  • the CP may return a confirmation message to the Cloud controller.
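The content of such a notification message, and the CP's acknowledgement, might be carried as a small structured payload along the lines below; the field names and values are purely illustrative assumptions.

```python
import json
import time

# Hypothetical payload sent by the Cloud controller to the CP once the
# interruption event of the migrating source VM has been predicted.
notification_message = {
    "event": "vm_migration_interruption_predicted",
    "source_vm_id": "vm-0001",                      # assumed identifier
    "interruption_time_point": time.time() + 12.5,  # absolute time, in seconds
    "interruption_duration_ms": 180,                # predicted downtime
}
print(json.dumps(notification_message))

# The CP could reply with a simple confirmation message:
confirmation = {"event": "ack", "ref": "vm_migration_interruption_predicted"}
print(json.dumps(confirmation))
```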
  • the CP sends a cache command to the target GW-U.
  • the target GW-U caches data.
  • the CP can learn the specific time point of the source VM service interruption, that is, the interruption time point, from the notification message.
  • the CP may start a timer according to the interruption time point.
  • the CP sends an uplink data cache command to the target GW-U, and the data cache command may carry the interruption duration.
  • Optionally, the CP may also send a data cache command including the interruption time point and the interruption duration to the target GW-U as soon as the interruption event is predicted, so that the target GW-U can reserve cache space according to the interruption duration and perform data caching when the interruption time point arrives.
  • the cache memory can match the interrupt duration, that is, the target GW-U can reserve memory for buffering the uplink data based on the interrupt duration.
  • the CP may also send a service stop command to the Cloud controller, and the source VM immediately stops the current iterative process, interrupts the service, enters the last iteration, and performs a memory copy and a CPU synchronization process to reduce the total migration time.
  • the target VM can start the service, and the Cloud controller can send a migration completion message to the CP.
  • the VM migration is completed, and the CP updates the data forwarding rule.
  • the CP can update the data (message) forwarding rule on the target GW-U, and switch the data forwarding of the target GW-U from the source VM to the destination VM.
  • the CP sends a message to the target GW-U, forwards the buffered uplink data, forwards it to the target VM, and provides the service through the target VM.
  • the bearer between the source GW-U and the source eNB may be deleted, and the indirect forwarding tunnel deletion procedure is performed.
  • FIG. 6 is a schematic diagram of interaction of another application migration method in the scenario of FIG.
  • In this embodiment, the application-side Cloud controller may select the target VM according to the location information of the UE and notify the network-side CP of the location information of the selected target VM, and the CP selects the target GW-U according to the location information of the selected target VM.
  • the application migration method in the embodiment of the present invention includes the following steps:
  • the UE moves, and sends a measurement report to the source eNB.
  • the source eNB sends a UE mobility handover request to the CP.
  • the CP sends a handover coordination request to the Cloud controller.
  • the handover cooperation request may include location information, a UE ID, and/or a VM ID of the UE.
  • the location information of the UE may be information such as a cell identifier, or a tracking area identifier (TAI).
  • The request may be sent when the location information changes. For example, if the request includes a cell ID, it may be sent when the cell is switched; if the request includes a TAI, it may be sent when the UE enters another TA, that is, when the TAI changes.
  • the Cloud controller selects the target VM according to the location information of the UE.
  • the Cloud controller may select a suitable VM as the target VM according to the location information of the UE, and the target VM may be selected according to the selection principle of the path optimization or the nearest distance principle, and details are not described herein.
  • the CP selects the target GW-U.
  • the Cloud controller may return a handover cooperative response message to the CP, where the handover coordinated response message may carry location information of the target VM.
  • the CP may select a suitable GW-U as the target GW-U according to the location information of the UE and the location information of the target VM.
  • the CP may notify the Cloud controller of the selected target GW-U information, including the egress IP information of the target GW-U.
  • the CP can perform a 3GPP handover procedure, including a process of sending a session creation request, creating an indirect data forwarding tunnel, eNB handover, radio bearer creation, and bearer update.
  • the Cloud controller may return a handover cooperative response message to the CP, where the response message may include location information of the selected target VM.
  • the VM begins the migration, making memory copies and dirty page iterations.
  • The Cloud controller can enable the VM migration interruption event prediction function. Through the default interface (such as the above-mentioned Libvirt client), it can obtain the migration state parameters, such as the dirty page generation rate, the link bandwidth, the amount migrated, and the total migration amount, and predict the interruption event during the VM migration, including the predicted interruption time point and the interruption duration.
  • the Cloud controller sends a notification message including the interrupt event to the CP.
  • the CP sends a cache command to the target GW-U.
  • the target GW-U caches data.
  • the VM migration is completed, and the CP updates the data forwarding rule.
  • the process of the X2 handover is similar to the S1 handover process.
  • The application side is notified of the UE location information and/or the selected target GW-U information, and the coordination of UE mobility and application mobility in the X2 handover process may refer to the coordinated processing in the S1 handover process, which is not described here again.
  • FIG. 7 is a schematic diagram of another application migration scenario according to an embodiment of the present invention.
  • the UE does not move, and the application moves to near the UE.
  • the service needs to be migrated from the source VM, that is, the first virtual machine, to the target VM, that is, the second virtual machine.
  • the source GW-U is a gateway before application migration, that is, the first gateway
  • the target GW-U is a gateway after application migration, that is, the foregoing second gateway
  • The eNB is the base station corresponding to the first virtual machine and also the base station corresponding to the second virtual machine; that is, the UE is under the same base station before and after the migration.
  • Here the application is migrated while the UE does not move, for example when the UE attaches, or resources are preempted, or the current user experience is poor and the quality of service (English: Quality of Service, QoS for short) does not meet the user's requirements; in such cases the application needs to actively migrate closer to the UE.
  • FIG. 8 is a schematic diagram of interaction of an application migration method in the scenario of FIG. Specifically, as shown in FIG. 8, the application migration method in the embodiment of the present invention includes the following steps:
  • the Cloud controller decides to apply the migration and selects the target VM.
  • The Cloud controller may decide to migrate the application when the UE attaches, or resources are preempted, or the current user experience is poor and the QoS does not meet the user's requirements, and may select a VM as the target VM based on the current location information of the UE.
  • the Cloud controller returns a coordinated message to the CP.
  • the CP selects the target GW-U according to the location information of the target VM.
  • The Cloud controller can send a coordinated message including the target VM ID, the location information of the target VM, and the like to the CP.
  • the CP can determine which application of the UE is currently being migrated according to the VM ID, and select a GW-U as the target GW-U according to the location information of the target VM.
  • the CP can perform a 3GPP handover procedure, including a process of sending a session creation request, creating an indirect data forwarding tunnel, eNB handover, radio bearer creation, and bearer update.
  • For downlink data arriving at the source eNB, the indirect data forwarding tunnel may specifically be source eNB -> source GW-U -> target GW-U -> target eNB -> UE.
  • Optionally, the CP may also send, to the Cloud controller, the egress IP information of the target GW-U that connects to the VM.
  • the VM starts to migrate, performs memory iteration and dirty page copying process, and can further predict the source VM interrupt event.
  • Steps 503 and 504 can be performed simultaneously; the order of their execution is not limited.
  • the Cloud controller sends a notification message including the interrupt event to the CP.
  • the notification message including the predicted interruption time point, the interruption duration, and the source VM ID may be sent to the CP, indicating the service stop time of the source VM.
  • the CP may return a confirmation message to the Cloud controller.
  • the CP sends a cache command to the target GW-U.
  • the target GW-U caches data.
  • The CP can determine, from the source VM ID in the notification message, which application of the UE is to stop service, and learn the interruption time point of the source VM service interruption.
  • the CP may start a timer according to the interruption time point.
  • the CP sends an uplink data cache command to the target GW-U, and the data cache command may carry the interrupt duration and the target VM ID.
  • Optionally, the CP may also send a data cache command including the interruption time point and the interruption duration to the target GW-U as soon as the interruption event is predicted, so that the target GW-U can reserve buffer space according to the interruption duration and perform data caching when the interruption time point arrives.
  • the cache memory can match the interrupt duration, that is, the target GW-U can reserve memory for buffering the uplink data based on the interrupt duration.
  • the CP may also send a service stop command to the Cloud controller, and the source VM immediately stops the current iterative process, and enters the last iteration of the memory copy and the CPU synchronization process to reduce the total migration time.
  • the target VM can start the service, and the Cloud controller can send a migration completion message to the CP.
  • the VM migration is completed, and the CP updates the data forwarding rule.
  • the CP can update the data (message) forwarding rule on the target GW-U, and switch the data forwarding of the target GW-U from the source VM to the destination VM.
  • the CP sends a message to the target GW-U, forwards the buffered uplink data, forwards it to the target VM, and provides the service through the target VM.
  • the bearer between the source GW-U and the eNB may be deleted, and the indirect forwarding tunnel deletion procedure is performed.
  • It can be seen that the network side and the application side exchange the mobility events of both the UE and the application, and the Cloud controller is given an interruption event prediction function to predict the interruption event, so that the network side can be notified in advance. The network side performs uplink data buffering based on the interruption event and caches the uplink data at the target GW-U, so that the uplink service interruption time is close to the interruption duration of the application migration. Compared with the prior-art approach of caching uplink data from the moment the virtual machine starts to migrate, this greatly reduces the uplink data caching time, even from the second level to the millisecond level, effectively ensuring reliable session continuity and preventing packet loss.
  • FIG. 9 is a schematic diagram of another interaction of the application migration method in the scenario of FIG. Specifically, as shown in FIG. 9, the application migration method in the embodiment of the present invention may include the following steps:
  • The UE moves and sends a measurement report to the source eNB.
  • the source eNB sends a UE mobility handover request to the CP.
  • the CP selects the target GW-U.
  • the CP sends a handover coordination request to the Cloud controller.
  • the Cloud controller selects the target VM.
  • the Cloud controller sends a notification message including the interrupt event to the CP.
  • the specific manners of the steps 601 to 607 can refer to the descriptions of the steps 301 to 307 or the steps 401 to 408 in the foregoing embodiment, that is, the CP selects the target GW-U, and notifies the application side of the selected target GW-U information.
  • The Cloud controller then selects the target VM according to the selected target GW-U; or the Cloud controller selects the target VM and notifies the CP of the location information of the selected target VM, and the CP selects the target GW-U according to the location information of the selected target VM. Details are not described here again.
  • the CP sends a cache command to the source GW-U.
  • the source GW-U caches data.
  • the CP can learn the interruption time point of the source VM service interruption from the notification message.
  • the CP can start a timer according to the interruption time point.
  • the CP sends an uplink data cache command to the source GW-U, and the data cache command can carry the interruption duration.
  • Optionally, the CP may also send a data cache command including the interruption time point and the interruption duration to the source GW-U as soon as the interruption event is predicted, so that the source GW-U can reserve cache space according to the interruption duration and perform data caching when the interruption time point arrives.
  • the cache memory can match the interrupt duration, that is, the source GW-U can reserve memory for buffering the uplink data based on the interrupt duration.
  • the CP may also send a service stop command to the Cloud controller, and the source VM immediately stops the current iterative process, and enters the last round of memory copy and CPU synchronization process to reduce the total migration time.
  • the target VM can start the service, and the Cloud controller can send a migration completion message to the CP.
  • the VM migration is completed, and the CP updates the data forwarding rule.
  • the CP can update the data (message) forwarding rule on the source GW-U, and the data forwarding of the source GW-U is switched from the source VM to the destination VM.
  • the CP sends a message to the source GW-U, forwards the buffered uplink data, forwards it to the target VM, and provides the service through the target VM.
  • the bearer between the source GW-U and the source eNB may be deleted, and the indirect forwarding tunnel deletion procedure is performed.
  • FIG. 10 is a schematic diagram of interaction of another application migration method in the scenario of FIG. 7.
  • the Cloud controller decides to apply the migration and selects the target VM.
  • the Cloud controller sends a notification message including the interrupt event to the CP.
  • the notification message including the predicted interruption time point and the interruption duration is sent to the CP, indicating the service stop time of the source VM.
  • the CP may return a confirmation message to the Cloud controller.
  • 704. The CP sends a cache command to the source GW-U.
  • 705. The source GW-U caches data.
  • 706. The VM migration is completed, and the CP selects the target GW-U.
  • Specifically, after the memory copy and CPU state synchronization on the application side are completed, the target VM can start the service; after the Cloud controller captures this event, it can send a migration completion message carrying the location information of the target VM to the CP. The CP may then select a GW-U as the target GW-U according to the location information of the target VM and the location information of the UE.
  • Optionally, the selected target GW-U may be deployed in a VM in a one-to-one relationship with the VM; or the selected target GW-U may be deployed in a VM in a many-to-one relationship with the VM, in which case the target GW-U may be selected from the multiple GW-Us connected to the VM, for example, according to conditions such as the GW-U node load and the end-to-end link delay (a selection sketch is given below).
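As a purely illustrative example of such a selection rule, the sketch below scores candidate GW-Us by a weighted combination of node load and end-to-end link delay. The Candidate fields, the weights, and the normalization are assumptions introduced for illustration, not part of the described embodiments.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    load: float       # normalized node load in [0, 1]
    delay_ms: float   # measured end-to-end link delay toward the UE/eNB side

def select_target_gw_u(candidates, load_weight=0.5, delay_weight=0.5):
    """Pick the GW-U with the lowest weighted cost of load and delay.

    Assumed normalization: delay is scaled by the worst candidate so that
    both terms are comparable; a real deployment may use other policies."""
    worst_delay = max(c.delay_ms for c in candidates) or 1.0

    def cost(c):
        return load_weight * c.load + delay_weight * (c.delay_ms / worst_delay)

    return min(candidates, key=cost)

# Example: two GW-Us reachable from the target VM.
best = select_target_gw_u([
    Candidate("gw-u-1", load=0.7, delay_ms=4.0),
    Candidate("gw-u-2", load=0.3, delay_ms=6.0),
])
```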
  • 707. The CP updates the data forwarding rule.
  • Specifically, the CP can update the data (packet) forwarding rule on the source GW-U, switching the data forwarding of the source GW-U from the source VM to the target VM. After the data forwarding rule on the source GW-U is updated, the CP sends a message to the source GW-U, instructing it to forward the buffered uplink data to the target VM, and the service is provided by the target VM.
  • the bearer between the source GW-U and the eNB may be deleted, and the indirect forwarding tunnel deletion procedure is performed.
  • In a specific embodiment, after the target GW-U is selected, the CP may send a create session request to the target GW-U. The create session request message may include the location information of the target VM, such as the egress IP information of the target VM. The target GW-U may reply to the CP with a response message for the session request, and the response message may carry the egress IP information via which the target GW-U connects to the target VM. The CP may then send a create indirect data forwarding tunnel request to the target GW-U.
  • After receiving the request, the target GW-U may open a port number for the indirect forwarding tunnel and send a response message carrying the port number information to the CP.
  • In addition, the CP sends a create indirect data forwarding tunnel message to the source GW-U, including the port number information opened by the target GW-U for the indirect forwarding tunnel, and the source GW-U sends a create indirect forwarding tunnel reply message to the CP.
  • The CP then sends a GW-U reselection completion message to the Cloud controller. The message includes the egress IP information via which the target GW-U connects to the target VM, that is, to the application side.
  • Further, after the target GW-U is selected and the indirect forwarding tunnel is established, the CP can send a buffer command to the source GW-U. At this point, for downlink data arriving at the source eNB, the indirect data forwarding tunnel can be source eNB -> source GW-U -> target GW-U -> target eNB -> UE (in this embodiment of the present invention, the source eNB and the target eNB are the same eNB).
  • The CP sends a path switch request to the eNB to switch the path from the source GW-U to the target GW-U, where the request carries the port information via which the target GW-U connects to the eNB. At the same time, the eNB sends a path switch notification message to the source GW-U. The path switch notification message may include an end marker data packet, where the end marker is used to notify the source GW-U that the eNB will no longer send data packets to it; after forwarding the current data packets, the source GW-U can start releasing the bearer and deleting the forwarding path to complete the path switch (a sketch of this end-marker handling is given below). After the path switch is completed, the eNB may send a handover complete notification message to the Cloud controller.
  • the bearer between the source GW-U and the eNB may be deleted, and the indirect forwarding tunnel deletion procedure is performed.
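To make the end-marker handling concrete, the following Python sketch shows how a source GW-U might release its resources only after the end marker confirms that no further packets will arrive from the eNB. The class and method names, and the simple boolean state model, are hypothetical assumptions used only for illustration.

```python
class SourceGwU:
    """Sketch of assumed source GW-U behaviour during the path switch."""

    def __init__(self):
        self.bearer_active = True
        self.forwarding_path_active = True

    def on_packet_from_enb(self, packet, is_end_marker=False):
        if is_end_marker:
            # The end marker tells the source GW-U that the eNB will not send
            # any more packets; resources can now be released safely.
            self._release_bearer()
            self._delete_forwarding_path()
            return
        if self.forwarding_path_active:
            self._forward_over_indirect_tunnel(packet)

    def _forward_over_indirect_tunnel(self, packet):
        # source GW-U -> target GW-U -> target eNB -> UE (assumed tunnel path)
        pass

    def _release_bearer(self):
        self.bearer_active = False

    def _delete_forwarding_path(self):
        self.forwarding_path_active = False
```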
  • In this embodiment of the present invention, the network side and the application side exchange the movement events of the two objects, the UE and the application, and an interruption event prediction function is added to the Cloud controller to predict the interruption event, so that the network side can be notified in advance of the time point at which the source VM stops serving. The network side then performs uplink data buffering based on the interruption event, buffering the uplink data at the source GW-U, and performs the target GW-U selection and the establishment of the indirect data forwarding tunnel only after the VM migration is completed. This makes the uplink service interruption time close to the interruption duration of the application migration; compared with the prior-art approach of buffering uplink data as soon as the virtual machine starts to migrate, the uplink data caching time is greatly reduced, even from the second level to the millisecond level, which effectively ensures reliable session continuity and prevents packet loss.
  • Moreover, because the uplink data is cached on the source GW-U and forwarded to the target VM only after the VM migration is completed, this method can effectively avoid link failures caused by a failed application migration: if the VM migration fails, the VM state is rolled back and the source VM continues to provide services, and the cached data is still delivered to the source VM, which further prevents packet loss (see the sketch below).
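A compact way to picture this buffer-and-forward behaviour with rollback is the sketch below. It is an illustrative Python assumption: the UplinkBuffer class, the forward() callback, and the migration-result handling are hypothetical names, not the API of the embodiments.

```python
from collections import deque

class UplinkBuffer:
    """Sketch of the source GW-U uplink buffer during the migration interruption.

    Packets are held while both VMs are paused; once the migration outcome is
    known, the buffer is flushed either to the target VM (success) or back to
    the source VM (failure/rollback), so no uplink packet is lost."""

    def __init__(self, forward):
        # forward(vm, packet) is an assumed callback delivering a packet to a VM.
        self.forward = forward
        self.buffering = False
        self.queue = deque()

    def start_buffering(self):
        self.buffering = True

    def on_uplink_packet(self, packet, source_vm):
        if self.buffering:
            self.queue.append(packet)
        else:
            self.forward(source_vm, packet)

    def on_migration_result(self, success, source_vm, target_vm):
        destination = target_vm if success else source_vm
        while self.queue:
            self.forward(destination, self.queue.popleft())
        self.buffering = False
```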
  • FIG. 11 is a schematic diagram of the interaction of yet another application migration method in the scenario of FIG. 4. Specifically, as shown in FIG. 11, the application migration method in this embodiment of the present invention may include the following steps:
  • 801. The UE moves and sends a measurement report to the source eNB.
  • 802. The source eNB sends a UE mobility handover request to the CP.
  • 803. The CP selects the target GW-U.
  • 804. The CP sends a handover coordination request to the Cloud controller.
  • 805. The Cloud controller selects the target VM.
  • 806. VM migration is performed, and the Cloud controller predicts the interruption event during the migration.
  • 807. The Cloud controller sends a notification message including the interruption event to the CP.
  • For the specific manner of steps 801 to 807, reference may be made to the descriptions of steps 301 to 307 or steps 401 to 408 in the foregoing embodiments. That is, the CP selects the target GW-U and notifies the application side of the selected target GW-U information, and the Cloud controller selects the target VM according to the selected target GW-U; or the Cloud controller selects the target VM and notifies the CP of the location information of the selected target VM, and the CP selects the target GW-U according to the location information of the selected target VM. Details are not described here again.
  • 808. The CP sends a cache command to the target eNB.
  • 809. The target eNB caches data.
  • In a specific embodiment, after receiving the notification message indicating the interruption event in which the source VM stops serving, the CP can learn, from the notification message, the interruption time point of the source VM service interruption. The CP can start a timer according to the interruption time point, and when the timer expires, the CP sends an uplink data cache command to the target eNB, where the data cache command can carry the interruption duration. Alternatively, the CP may send a data cache command including the interruption time point and the interruption duration to the target eNB as soon as the interruption event is predicted, so that the target eNB can reserve cache space according to the interruption duration and start caching data when the interruption time point arrives. Optionally, the cache memory can match the interruption duration, that is, the target eNB can reserve memory for buffering the uplink data based on the interruption duration.
  • Further optionally, the CP may also send a service stop command to the Cloud controller, instructing the source VM to immediately stop the current iterative process and enter the last round of memory copy and CPU synchronization, so as to reduce the total migration time.
  • When the CPU synchronization is completed, the target VM can start the service, and the Cloud controller can send a migration completion message to the CP. Further, after the migration is completed, the CP can send a message to the target eNB, instructing it to forward the cached data, that is, to forward it to the target VM, and to receive the downlink data of the source VM; the service then starts to be provided by the target VM.
  • The X2 handover process is similar to the S1 handover process: in both cases, the network side notifies the application side of the UE location information and/or the selected target GW-U information during the handover. For the coordination of UE mobility and application mobility in the X2 handover process, reference may be made to the mobility coordination processing in the S1 handover process, and details are not described here again.
  • FIG. 12 is a schematic diagram of the interaction of yet another application migration method in the scenario of FIG. 7. Specifically, as shown in FIG. 12, the application migration method in this embodiment of the present invention may include the following steps:
  • 901. The Cloud controller decides on the application migration and selects the target VM.
  • 902. The Cloud controller returns a coordination message to the CP.
  • 903. The CP selects the target GW-U according to the location information of the target VM.
  • 904. VM migration is performed, and the Cloud controller predicts the interruption event during the migration.
  • 905. The Cloud controller sends a notification message including the interruption event to the CP.
  • 906. The CP sends a cache command to the eNB.
  • 907. The eNB caches data.
  • In a specific embodiment, after receiving the notification message indicating the interruption event in which the source VM stops serving, the CP can determine, from the source VM ID in the notification message, which application of the UE is about to stop serving, and learn the interruption time point of the source VM service interruption. The CP can start a timer according to the interruption time point, and when the timer expires, the CP sends an uplink data cache command to the eNB, where the data cache command can carry the interruption duration and the target VM ID. Alternatively, the CP may immediately send a data cache command to the eNB so that the eNB performs data caching based on the interruption time point and the interruption duration. Optionally, the cache memory can match the interruption duration, that is, the eNB can reserve memory for buffering the uplink data based on the interruption duration.
  • Further optionally, the CP may also send a service stop command to the Cloud controller, instructing the source VM to immediately stop the current iterative process and enter the last round of memory copy and CPU synchronization, so as to reduce the total migration time.
  • When the CPU synchronization is completed, the target VM can start the service, and the Cloud controller can send a migration completion message to the CP.
  • 908. The VM migration is completed, and the CP updates the data forwarding rule.
  • Specifically, after receiving the migration completion message, the CP can update the packet forwarding rule on the eNB, switching the data packet forwarding of the target GW-U from the source VM to the target VM. After the data forwarding rule is updated, the CP sends a message to the eNB, instructing it to forward the buffered uplink data to the target VM, and the service is provided by the target VM. In addition, the bearer between the source GW-U and the eNB may be deleted, and the indirect forwarding tunnel deletion procedure may be performed.
  • In this embodiment of the present invention, the network side and the application side exchange the movement events of the two objects, the UE and the application, and an interruption event prediction function is added to the Cloud controller to predict the interruption event, so that the network side can be notified in advance of the time point at which the source VM stops serving. The network side performs uplink data buffering based on the interruption event and buffers the uplink data at the eNB, so that the uplink service interruption time is close to the interruption duration of the application migration. Compared with the prior-art approach of buffering uplink data as soon as the virtual machine starts to migrate, this greatly reduces the uplink data caching time, even from the second level to the millisecond level, which effectively ensures reliable session continuity and prevents packet loss.
  • FIG. 13 is a schematic diagram of the interaction of yet another application migration method in the scenario of FIG. 4. In this embodiment of the present invention, the interruption event during the VM migration is captured through the existing interfaces and functions of the Cloud controller and reported to the network side. Specifically, as shown in FIG. 13, the application migration method in this embodiment of the present invention may include the following steps:
  • 1001. The UE moves and sends a measurement report to the source eNB.
  • 1002. The source eNB sends a UE mobility handover request to the CP.
  • 1003. The CP selects the target GW-U.
  • 1004. The CP sends a handover coordination request to the Cloud controller.
  • 1005. The Cloud controller selects the target VM.
  • For the specific manner of steps 1001 to 1005, reference may be made to the descriptions of steps 301 to 305 or steps 401 to 406 in the foregoing embodiments. That is, the CP selects the target GW-U and notifies the application side of the selected target GW-U information, and the Cloud controller selects the target VM according to the selected target GW-U; or the Cloud controller selects the target VM and notifies the CP of the location information of the selected target VM, and the CP selects the target GW-U according to the location information of the selected target VM. Details are not described here again.
  • 1006. VM migration is performed, and the Cloud controller monitors the interruption events during the migration in real time.
  • Specifically, when the Cloud controller detects the source VM interruption event, it can send a notification message indicating the interruption event in which the source VM stops serving to the CP (a monitoring sketch is given at the end of this flow).
  • 1007. The Cloud controller sends the notification message including the interruption event to the CP.
  • 1008. Data caching is performed.
  • In a specific embodiment, after receiving the notification message indicating the interruption event in which the source VM stops serving, the CP can send an acknowledgement message to the Cloud controller and learn, from the notification message, the interruption time point of the source VM service interruption. The CP can start a timer according to the interruption time point, and when the timer expires, the CP sends an uplink data cache command to the source GW-U, the target GW-U, or the target eNB for data caching, where the cache command can carry the interruption duration.
  • Alternatively, the CP may immediately send a data cache command including the interruption time point and the interruption duration to the source GW-U, the target GW-U, or the target eNB when the interruption event is reported, so that the source GW-U, the target GW-U, or the target eNB can reserve cache space according to the interruption duration and perform data caching when the interruption time point arrives.
  • Further optionally, the CP may also send a service stop command to the Cloud controller, instructing the source VM to immediately stop the current iterative process and enter the last round of memory copy and CPU synchronization, so as to reduce the total migration time.
  • When the CPU synchronization is completed, the target VM can start the service, and the Cloud controller can send a migration completion message to the CP.
  • 1009. The VM migration is completed, and the CP updates the data forwarding rules.
  • Further, after the migration is completed, the CP can send a message to the source GW-U, the target GW-U, or the target eNB, instructing it to forward the cached data to the target VM.
  • The X2 handover process is similar to the S1 handover process: in both cases, the network side notifies the application side of the UE location information and/or the selected target GW-U information during the handover. For the coordination of UE mobility and application mobility in the X2 handover process, reference may be made to the mobility coordination processing in the S1 handover process, and details are not described here again.
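The real-time monitoring referred to in step 1006 can be pictured with the following sketch. It is a hypothetical Python illustration: hypervisor_client, its get_migration_phase() call, and the phase names are assumptions standing in for whatever interface the Cloud controller already exposes (for example, a libvirt-style migration job query), not an actual API defined by the embodiments.

```python
import time

def monitor_migration(hypervisor_client, notify_cp, poll_interval_s=0.01):
    """Poll the migration job and report interruption events to the CP.

    Assumed phases returned by get_migration_phase():
      'precopy'       - iterative memory copy, source VM still serving
      'stop_and_copy' - source VM paused (interruption starts)
      'completed'     - target VM can start serving
    """
    interruption_reported = False
    while True:
        phase = hypervisor_client.get_migration_phase()
        if phase == "stop_and_copy" and not interruption_reported:
            # The source VM has stopped serving: notify the network side so
            # it can start buffering uplink data immediately.
            notify_cp({"event": "interruption_start", "time": time.time()})
            interruption_reported = True
        elif phase == "completed":
            notify_cp({"event": "migration_complete", "time": time.time()})
            return
        time.sleep(poll_interval_s)
```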
  • FIG. 14 is a schematic diagram of the interaction of yet another application migration method in the scenario of FIG. 7. Specifically, as shown in FIG. 14, the application migration method in this embodiment of the present invention may include the following steps:
  • 1101. The Cloud controller decides on the application migration and selects the target VM.
  • 1102. The CP selects the target GW-U.
  • For the specific manner of steps 1101 and 1102, reference may be made to the descriptions of steps 501 and 502 in the foregoing embodiment, and details are not described here again.
  • 1103. VM migration is performed, and the Cloud controller monitors the interruption events during the migration in real time.
  • 1104. The Cloud controller sends a notification message including the interruption event to the CP.
  • Specifically, when the Cloud controller detects the source VM interruption event, it can send a notification message indicating the interruption event in which the source VM stops serving to the CP.
  • 1105. Data caching is performed. For the specific manner of step 1105, reference may be made to the description of step 1008 in the foregoing embodiment, and details are not described here again.
  • 1106. The VM migration is completed, and the CP updates the data forwarding rule.
  • Specifically, after receiving the migration completion message, the CP can update the packet forwarding rule on the eNB, switching the data packet forwarding of the caching device, such as the source GW-U, the target GW-U, or the eNB, from the source VM to the target VM. After the data forwarding rule is updated, the CP sends a message to the caching device, such as the source GW-U, the target GW-U, or the eNB, instructing it to forward the buffered uplink data to the target VM, and the service is provided by the target VM.
  • the bearer between the source GW-U and the eNB may be deleted, and the indirect forwarding tunnel deletion procedure is performed.
  • In this embodiment of the present invention, the interruption event during the application migration is monitored in real time and reported to the network side, so that uplink data can be buffered in a timely manner and the service interruption time is reduced; moreover, only existing interfaces need to be used, which reduces the system cost.
  • FIG. 15 is a schematic structural diagram of an application migration apparatus according to an embodiment of the present invention.
  • Specifically, as shown in FIG. 15, the application migration apparatus of this embodiment of the present invention may include a determining module 11, an event obtaining module 12, and a sending module 13, where:
  • the determining module 11 is configured to determine, when the first virtual machine needs to perform application migration, a second virtual machine to which the application migration needs to be migrated;
  • the event obtaining module 12 is configured to acquire an interrupt event during an application migration process from the first virtual machine to the second virtual machine;
  • the sending module 13 is configured to send a notification message for indicating the interruption event to another controller, so that the other controller controls, based on the interruption event, the caching of the data sent to the first virtual machine during the application migration process.
  • The application migration apparatus described in this embodiment of the present invention may be disposed in the first controller, and the other controller may correspond to the second controller. The first controller may be a controller on the application side, and the second controller may be a controller on the network side.
  • the acquired interrupt event may include an interruption time point and an interruption duration during application migration, and the like. Therefore, the second controller can control the buffering of the data in the interrupt process corresponding to the interrupt event based on the interrupt event information such as the interrupt time point and the interrupt duration to reduce the cache time, thereby reducing the service interruption time.
  • Optionally, the notification message includes an interruption time point and an interruption duration, and the event obtaining module 12 may specifically include:
  • a parameter obtaining unit, configured to acquire migration state parameters during the application migration process from the first virtual machine to the second virtual machine; and
  • a prediction unit, configured to predict, according to the migration state parameters acquired by the parameter obtaining unit, the interruption time point and the interruption duration of the application migration process (a minimal prediction sketch is given after this list).
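The prediction unit can be illustrated with the following sketch of a simple pre-copy model. It is an assumed, simplified estimator in Python: the geometric-convergence model, the parameter names, and the fixed CPU-synchronization overhead are illustrative assumptions, not the exact prediction method of the embodiments, which only state that parameters such as the dirty page generation rate, link bandwidth, migrated amount, and total amount are used.

```python
import time

def predict_interruption(total_bytes, migrated_bytes, dirty_rate_bps,
                         bandwidth_bps, downtime_threshold_s=0.05,
                         cpu_sync_overhead_s=0.01, max_rounds=30):
    """Estimate (interruption_time_point, interruption_duration) for pre-copy.

    Simplified model: each round transfers the remaining dirty data while new
    dirty pages accumulate at dirty_rate_bps; the final stop-and-copy round is
    entered once the remainder can be sent within downtime_threshold_s."""
    remaining = total_bytes - migrated_bytes
    elapsed = 0.0
    for _ in range(max_rounds):
        round_time = remaining / bandwidth_bps
        if round_time <= downtime_threshold_s or dirty_rate_bps >= bandwidth_bps:
            break  # enter the final stop-and-copy round with this remainder
        elapsed += round_time
        # Dirty pages produced during this round must be copied in the next one.
        remaining = dirty_rate_bps * round_time
    interruption_duration = remaining / bandwidth_bps + cpu_sync_overhead_s
    interruption_time_point = time.time() + elapsed  # when the source VM pauses
    return interruption_time_point, interruption_duration
```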
  • Optionally, the event obtaining module 12 may be specifically configured to monitor whether an interruption event occurs during the application migration process from the first virtual machine to the second virtual machine; and
  • the sending module 13 may be specifically configured to send, when it is detected that the interruption event occurs, a notification message for indicating the interruption event to the other controller.
  • Further optionally, the determining module 11 may be specifically configured to receive identification information of a target gateway sent by the other controller, where the target gateway is determined by the other controller, and determine, according to the identification information of the target gateway, the second virtual machine to which the application migration needs to be migrated.
  • Alternatively, the determining module 11 may be specifically configured to acquire location information of the user equipment and determine, according to the location information, the second virtual machine to which the application migration needs to be migrated;
  • and the sending module 13 is further configured to send the location information of the second virtual machine to the other controller, so that the other controller determines the target gateway according to the location information of the second virtual machine.
  • Further optionally, the controller may further include:
  • a receiving module, configured to receive a service stop command sent by the other controller; and
  • a control module, configured to: in response to the service stop command, control the first virtual machine to stop serving, and perform the memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine;
  • where the control module is further configured to control the second virtual machine to start the service when the synchronization is completed (a sketch of this final stop-and-copy round is given below).
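The behaviour of this control module around the final migration round can be sketched as follows. The hypervisor calls (pause_source_vm, copy_remaining_memory, sync_cpu_state, start_target_vm) are hypothetical placeholders for whatever hypervisor-level interface the first controller actually uses; the sketch only illustrates the ordering described above.

```python
def handle_service_stop_command(hypervisor):
    """Sketch: on the service stop command, cut the iterative pre-copy short,
    run the final stop-and-copy round, then start the target VM."""
    # 1. Stop the current iterative copy round and pause the source VM, so no
    #    new dirty pages are produced (the service interruption begins).
    hypervisor.pause_source_vm()

    # 2. Final round: copy the remaining dirty memory and synchronize the CPU
    #    (and other non-memory) state to the target VM.
    hypervisor.copy_remaining_memory()
    hypervisor.sync_cpu_state()

    # 3. Synchronization complete: the target VM starts serving, ending the
    #    interruption; a migration completion message can then be reported.
    hypervisor.start_target_vm()
```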
  • FIG. 16 is a schematic structural diagram of another application migration apparatus according to an embodiment of the present invention.
  • Specifically, as shown in FIG. 16, the application migration apparatus of this embodiment of the present invention may include a message receiving module 21 and a cache control module 22, where:
  • the message receiving module 21 is configured to receive a notification message sent by another controller, where the notification message indicates an interruption event in the application migration process from the first virtual machine that needs to perform application migration to the second virtual machine; and
  • the cache control module 22 is configured to control, based on the interruption event, the caching of the data sent to the first virtual machine during the application migration process.
  • The application migration apparatus described in this embodiment of the present invention may be disposed in the second controller, and the other controller may correspond to the first controller. The first controller may be a controller on the application side, and the second controller may be a controller on the network side.
  • the acquired interrupt event may include an interruption time point and an interruption duration during application migration, and the like. Therefore, the second controller can control the buffering of the data in the interrupt process corresponding to the interrupt event based on the interrupt event information such as the interrupt time point and the interrupt duration to reduce the cache time, thereby reducing the service interruption time.
  • Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event, and the cache control module 22 may be specifically configured to: generate a data cache command including the interruption time point and the interruption duration, and send the data cache command to the first gateway corresponding to the first virtual machine, so that the first gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Alternatively, the cache control module 22 may be specifically configured to: generate a data cache command including the interruption time point and the interruption duration, and send the data cache command to a second gateway selected by the controller in which the apparatus is disposed, so that the second gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Alternatively, the cache control module 22 may be specifically configured to: generate a data cache command including the interruption time point and the interruption duration, and send the data cache command to the base station corresponding to the second virtual machine, so that the base station buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Further optionally, the apparatus may further include:
  • a time monitoring module, configured to monitor whether the interruption time point arrives, and to notify the cache control module 22 to send the data cache command when it is detected that the interruption time point arrives.
  • Further optionally, the apparatus may further include:
  • a first determining module, configured to acquire location information of the user equipment and determine the second gateway according to the location information; and
  • a first sending module, configured to send identification information of the second gateway to the other controller, so that the other controller determines, according to the identification information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated.
  • Further optionally, the message receiving module 21 is further configured to receive, from the other controller, location information of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated; and the apparatus may further include:
  • a second determining module, configured to determine the second gateway according to the location information of the second virtual machine.
  • Further optionally, the apparatus may further include:
  • a second sending module, configured to send a service stop command to the other controller, so that the other controller, based on the service stop command, controls the memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine.
  • Further optionally, the apparatus may further include:
  • an update module, configured to update the data forwarding rule when the application migration is completed, where the updated data forwarding rule indicates that the data cached during the application migration process is to be forwarded to the second virtual machine.
  • In this embodiment of the present invention, when the first virtual machine needs to perform application migration, the first controller on the application side can determine the second virtual machine to which the application migration needs to be migrated, acquire the interruption event in the application migration process from the first virtual machine to the second virtual machine, and send a notification message indicating the interruption event to the second controller on the network side, so that the second controller can control, based on the interruption event, the caching of the data sent to the first virtual machine during the application migration process. In this way, the interruption event in the application migration process is obtained and the network side is notified, so that the network side can buffer the uplink data packets in a timely manner, which ensures reliable session continuity and prevents packet loss, and the data caching time can be as close as possible to the interruption time of the virtual machine migration, thereby shortening the data caching time.
  • FIG. 17 is a schematic structural diagram of an application migration system according to an embodiment of the present invention.
  • the application migration system of the embodiment of the present invention may include: a first controller 1, a second controller 2, a first virtual machine 3, and a second virtual machine 4;
  • the first controller 1 is configured to: determine, when the first virtual machine 3 needs to perform application migration, the second virtual machine 4 to which the application migration needs to be migrated; acquire the interruption event in the application migration process from the first virtual machine 3 to the second virtual machine 4; and send a notification message for indicating the interruption event to the second controller 2; and
  • the second controller 2 is configured to receive the notification message sent by the first controller 1, and control, based on the interruption event, the caching of the data sent to the first virtual machine 3 during the application migration process.
  • the first controller, the second controller, the first virtual machine, and the second virtual machine in the embodiment of the present invention may refer to the related descriptions of the corresponding embodiments in FIG. 1-14, and details are not described herein again.
  • FIG. 18 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • Specifically, the controller of this embodiment of the present invention includes a communication interface 300, a memory 200, and a processor 100, where the processor 100 is connected to the communication interface 300 and the memory 200, respectively. The memory 200 may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one disk memory. The communication interface 300, the memory 200, and the processor 100 may be connected to each other through a bus, or may be connected in other manners; in this embodiment, a bus connection is used as an example for description.
  • The controller in this embodiment of the present invention may correspond to the first controller in the embodiments corresponding to FIG. 2 to FIG. 14, and may specifically be a controller on the application side in the communication network, such as a Cloud controller. For details, refer to the related descriptions of the first controller in the embodiments corresponding to FIG. 2 to FIG. 14. Specifically:
  • the memory 200 is configured to store driver software; and
  • the processor 100 reads the driver software from the memory 200 and, under the action of the driver software, performs some or all of the steps of the application migration method on the first controller side described above.
  • Optionally, the notification message includes an interruption time point and an interruption duration, and when acquiring, under the action of the driver software, the interruption event in the application migration process from the first virtual machine to the second virtual machine, the processor 100 specifically performs the following steps: acquiring migration state parameters during the application migration process from the first virtual machine to the second virtual machine; and predicting the interruption time point and the interruption duration of the application migration process according to the migration state parameters.
  • Optionally, when acquiring, under the action of the driver software, the interruption event in the application migration process from the first virtual machine to the second virtual machine, the processor 100 specifically performs the step of monitoring whether an interruption event occurs during the application migration process, and, when it is detected that the interruption event occurs, sends a notification message for indicating the interruption event to the second controller.
  • Optionally, when determining, under the action of the driver software, the second virtual machine to which the application migration needs to be migrated, the processor 100 specifically performs the following steps: receiving identification information of the target gateway sent by the second controller, where the target gateway is determined by the second controller, and determining, according to the identification information of the target gateway, the second virtual machine to which the application migration needs to be migrated.
  • Alternatively, when determining the second virtual machine to which the application migration needs to be migrated, the processor 100 specifically performs the following steps: acquiring location information of the user equipment, and determining, according to the location information, the second virtual machine to which the application migration needs to be migrated.
  • Further optionally, the processor 100 is further configured to perform the following steps under the action of the driver software: receiving a service stop command sent by the second controller; in response to the service stop command, controlling the first virtual machine to stop serving and performing the memory copy and CPU synchronization between the first virtual machine and the second virtual machine; and, when the synchronization is completed, controlling the second virtual machine to start the service.
  • FIG. 19 is a schematic structural diagram of another controller according to an embodiment of the present invention.
  • Specifically, the controller of this embodiment of the present invention includes a communication interface 600, a memory 500, and a processor 400, where the processor 400 is connected to the communication interface 600 and the memory 500, respectively. The memory 500 may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one disk memory. The communication interface 600, the memory 500, and the processor 400 may be connected to each other through a bus, or may be connected in other manners; in this embodiment, a bus connection is used as an example for description.
  • The controller in this embodiment of the present invention may correspond to the second controller in the embodiments corresponding to FIG. 2 to FIG. 14, and may specifically be a controller on the network side in the communication network, such as a CP. For details, refer to the related descriptions of the second controller in the embodiments corresponding to FIG. 2 to FIG. 14. Specifically:
  • the memory 500 is configured to store driver software; and
  • the processor 400 reads the driver software from the memory 500 and, under the action of the driver software, performs some or all of the steps of the application migration method on the second controller side described above.
  • Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event, and when controlling, under the action of the driver software, the caching of the data sent to the first virtual machine during the application migration process based on the interruption event, the processor 400 specifically performs the following steps: generating a data cache command including the interruption time point and the interruption duration, and sending the data cache command to the first gateway corresponding to the first virtual machine, so that the first gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Alternatively, the processor 400 specifically performs the following steps: generating a data cache command including the interruption time point and the interruption duration, and sending the data cache command to the second gateway selected by the second controller, so that the second gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Alternatively, the processor 400 specifically performs the following steps: generating a data cache command including the interruption time point and the interruption duration, and sending the data cache command to the base station corresponding to the second virtual machine, so that the base station buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration process.
  • Further optionally, before sending the data cache command under the action of the driver software, the processor 400 is further configured to monitor whether the interruption time point arrives, and the step of sending the data cache command is performed when it is detected that the interruption time point arrives.
  • Further optionally, the processor 400 is further configured to perform the following steps under the action of the driver software: acquiring location information of the user equipment, determining the second gateway according to the location information, and sending identification information of the second gateway to the first controller, so that the first controller determines, according to the identification information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated.
  • Alternatively, the processor 400 is further configured to perform the following steps under the action of the driver software: receiving, from the first controller, location information of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated, and determining the second gateway according to the location information of the second virtual machine.
  • Further optionally, after receiving the notification message sent by the first controller under the action of the driver software, the processor 400 is further configured to send a service stop command to the first controller, so that the first controller, based on the service stop command, controls the memory copy and CPU synchronization between the first virtual machine and the second virtual machine.
  • Further optionally, the processor 400 is further configured to update the data forwarding rule when the application migration is completed, where the updated data forwarding rule indicates that the data cached during the application migration process is to be forwarded to the second virtual machine.
  • In this embodiment of the present invention, when the first virtual machine needs to perform application migration, the first controller on the application side can determine the second virtual machine to which the application migration needs to be migrated, acquire the interruption event in the application migration process from the first virtual machine to the second virtual machine, and send a notification message indicating the interruption event to the second controller on the network side, so that the second controller can control, based on the interruption event, the caching of the data sent to the first virtual machine during the application migration process, the data being cached at the first gateway, the second gateway, or the base station. In this way, the interruption event in the application migration process is obtained and the network side is notified, so that the network side buffers the uplink data packets in a timely manner, and the data caching time can be as close as possible to the interruption time of the virtual machine migration, thereby shortening the data caching time.
  • The disclosed apparatus and method may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division of the modules is only a logical function division, and there may be other division manners in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or module, and may be electrical, mechanical or otherwise.
  • The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of hardware plus software function modules.
  • the above-described integrated modules implemented in the form of software function modules can be stored in a computer readable storage medium.
  • The software function modules described above are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes: a U disk, a mobile hard disk, a read-only memory (English: Read-Only Memory, ROM for short), a random access memory (English: Random Access Memory, RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

本发明实施例公开了一种应用迁移方法、装置及系统,其中,所述方法包括:当第一虚拟机需要进行应用迁移时,确定出所述应用迁移需要迁移到的第二虚拟机;获取从所述第一虚拟机到所述第二虚拟机的应用迁移过程中的中断事件;向第二控制器发送用于指示所述中断事件的通知消息,以使所述第二控制器基于所述中断事件控制缓存所述应用迁移过程中发送至所述第一虚拟机的数据。采用本发明实施例,能够避免应用迁移过程中出现的丢包或业务中断时间长的问题。

Description

一种应用迁移方法、装置及系统 技术领域
本发明涉及通信技术领域,尤其涉及一种应用迁移方法、装置及系统。
背景技术
随着通信技术的不断发展,业务时延要求越来越高,如在未来网络如第五代移动通信技术(英文:The Fifth Generation Mobile Communication Technology,简称5G)网络的一些应用场景中,将会出现大量低时延要求的业务。为了满足低时延要求,数据中心(英文:Data Center,简称DC)逐渐下沉,分布式部署在网络边缘,以达到更靠近用户的目的,减少路径回传。同时,随着虚拟化技术与分布式云技术的发展,应用可以进行虚拟化、实例化,使得应用的迁移成为可能。
目前,在应用迁移时,是通过一个统一的协同网元移动协同(英文:Mobility Coordinator,简称MC)来协同用户设备(英文:User Equipment,简称UE)以及应用的移动事件,以实现一个虚拟机(英文:Virtual Machine,简称VM)到另一个VM的应用迁移,并在迁移开始之前就进行数据缓存。然而,该在VM迁移之前就开始进行数据缓存的方式,使得缓存时间与VM总迁移时间相近,这就造成了上行数据缓存时间过长,进而业务中断时间过长,导致的用户体验较差。而如果在VM迁移之前不作上行数据缓存,则源侧VM停止服务后,上行数据可能继续往源侧发送,则容易造成丢包现象。综上,目前的应用迁移方案容易导致丢包或业务中断时间长。
发明内容
本发明实施例提供一种应用迁移方法、装置及系统,能够避免应用迁移过程中出现的丢包或业务中断时间长的问题。
第一方面,本发明实施例提供了一种应用迁移方法,包括:
当第一虚拟机需要进行应用迁移时,第一控制器确定出所述应用迁移需要 迁移到的第二虚拟机;并获取从该第一虚拟机到该第二虚拟机的应用迁移过程中的中断事件;向第二控制器发送用于指示该中断事件的通知消息,以使所述第二控制器基于该中断事件控制缓存该应用迁移过程中发送至该第一虚拟机的数据。
其中,该第一控制器可以为应用侧(用户面)的控制器,第二控制器可以为网络侧(控制面)的控制器。该获取的中断事件可包括在应用迁移过程中的中断时间点和中断持续时间等等。从而第二控制器可基于该中断时间点和中断持续时间等中断事件信息控制该中断事件对应的中断过程中的数据的缓存,以减少缓存时间,从而减少业务中断时间。
在可选的实施例中,该中断事件可以是预测得到的,则该通知消息包括中断时间点以及中断持续时间;该获取从第一虚拟机到第二虚拟机的应用迁移过程中的中断事件,可以具体为:第一控制器获取从第一虚拟机到第二虚拟机的应用迁移过程中的迁移状态参数;根据该迁移状态参数预测该应用迁移过程中的中断时间点以及中断持续时间。
其中,该迁移状态参数可包括脏页产生速率、链路带宽、已迁移量、总迁移量等参数,并可基于该迁移状态参数预测得到总迁移时间、中断持续时间和中断时间点等等。进一步的,该中断时间点可基于预测的总迁移时间、中断持续时间,并结合当前时刻得到的。
在可选的实施例中,该中断事件还可以是通过实时监测迁移状态得到的,则该获取从第一虚拟机到第二虚拟机的应用迁移过程中的中断事件,可以具体为:第一控制器监测从所述第一虚拟机到所述第二虚拟机的应用迁移过程中是否发生中断事件。进一步的,所述向第二控制器发送用于指示该中断事件的通知消息,可以具体为:当监测到所述中断事件发生时,第一控制器向第二控制器发送用于指示该中断事件的通知消息。
在可选的实施例中,该确定出应用迁移需要迁移到的第二虚拟机的方式可以具体为:第二控制器确定出目标网关,并向第一控制器发送该目标网关的标识信息;第一控制器接收第二控制器发送的目标网关的标识信息,并根据该目标网关的标识信息确定出该应用迁移需要迁移到的第二虚拟机。从而确定出目标网关和第二虚拟机。
其中,该目标网关为应用迁移后的网关。
在可选的实施例中,该确定出应用迁移需要迁移到的第二虚拟机的方式可以具体为:第一控制器获取用户设备的位置信息,并根据所述位置信息确定出需要应用迁移需要迁移到的第二虚拟机。进一步的,第一控制器还可向所述第二控制器发送所述第二虚拟机的位置信息,以使所述第二控制器根据所述第二虚拟机的位置信息确定出目标网关。从而确定出目标网关和第二虚拟机。
在可选的实施例中,第二控制器还可向第一控制器发送服务停止命令;第一控制器接收第二控制器发送的服务停止命令,并响应该服务停止命令,控制第一虚拟机停止服务,进行第一虚拟机和第二虚拟机之间的内存拷贝及中央处理器(英文:Central Processing Unit,简称CPU)同步,并当同步完成时,控制第二虚拟机启动服务。从而通过服务停止命令进入最后一轮迭代,以减少总迁移时间。
第二方面,本发明实施例还提供了一种应用迁移方法,包括:
第二控制器接收第一控制器发送的通知消息,该通知消息指示了从需要进行应用迁移的第一虚拟机到第二虚拟机的应用迁移过程中的中断事件;基于该中断事件控制缓存应用迁移过程中的发送至所述第一虚拟机的数据。
其中,该第一控制器可以为应用侧的控制器,第二控制器可以为网络侧的控制器。该获取的中断事件可包括在应用迁移过程中的中断时间点和中断持续时间等等。从而第二控制器可基于该中断时间点和中断持续时间等中断事件信息控制该中断事件对应的中断过程中的数据的缓存。
在可选的实施例中,该通知消息中可包括该中断事件的中断时间点和中断持续时间,且该中断过程中的缓存数据可以是存储在第一虚拟机对应的第一网关即应用迁移前的网关中的。则该基于所述中断事件控制缓存所述应用迁移过程中的发送至所述第一虚拟机的数据的方式可以具体为:第二控制器生成包括该中断时间点和中断持续时间的数据缓存命令;向该第一虚拟机对应的第一网关发送该数据缓存命令,以使该第一网关基于所述中断时间点和中断持续时间缓存应用迁移过程中发送至该第一虚拟机的数据。
在可选的实施例中,该通知消息中可包括所述中断事件的中断时间点和中断持续时间,且该中断过程中的缓存数据可以是存储在第二网关即应用迁移后的网关中的。则该基于所述中断事件控制缓存所述应用迁移过程中的发送至所 述第一虚拟机的数据的方式可以具体为:第二控制器生成包括该中断时间点和中断持续时间的数据缓存命令;向第二控制器选取的第二网关发送数据缓存命令,以使第二网关基于该中断时间点和中断持续时间缓存应用迁移过程中发送至该第一虚拟机的数据。
也就是说,该应用迁移过程中发送至第一虚拟机的数据的缓存位置可以为第一虚拟机对应的第一网关(即应用迁移前的网关),或者第二虚拟机对应的第二网关(即应用迁移后的网关),或者第二虚拟机对应的基站(即应用迁移后的基站)等。上述的缓存设备可根据该应用迁移过程中的中断持续时间预留缓存空间,以在该中断时间点到达时进行数据缓存。可选的,缓存内存可与该中断持续时间相匹配,即缓存设备可基于该中断持续时间预留用于缓存上行数据的内存。
进一步可选的,第二控制器可获取用户设备的位置信息,并根据该位置信息确定出第二网关;向第一控制器发送该第二网关的标识信息,以使第一控制器根据该第二网关的标识信息确定出需要进行应用迁移的第一虚拟机要迁移到的第二虚拟机。从而确定出该第二网关和第二虚拟机。
进一步可选的,第二控制器可接收第一控制器发送的需要进行应用迁移的第一虚拟机要迁移到的第二虚拟机的位置信息;根据该第二虚拟机的位置信息确定出第二网关。该第二虚拟机是第一控制器选取出的,从而确定出该第二网关和第二虚拟机。
在可选的实施例中,该通知消息中包括所述中断事件的中断时间点和中断持续时间,且该中断过程中的缓存数据可以是存储在第二虚拟机对应的基站即应用迁移后的基站中的。则该基于所述中断事件控制缓存所述应用迁移过程中的发送至所述第一虚拟机的数据的方式可以具体为:第二控制器生成包括该中断时间点和中断持续时间的数据缓存命令;向第二虚拟机对应的基站发送该数据缓存命令,以使该第二虚拟机对应的基站基于该中断时间点和中断持续时间缓存该应用迁移过程中发送至所述第一虚拟机的数据。
在可选的实施例中,该数据缓存命令可以实时发送给缓存设备如上述的第一网关或第二网关或基站的,也可以是基于该中断时间点进行定时,并在定时时间到达时发送给该缓存设备的。若为定时发送,则在发送该数据缓存命令之前,第二控制器可监测该中断时间点是否到达;当监测到该中断时间点到达时, 执行所述发送所述数据缓存命令的步骤。从而将该数据缓存命令发送给该缓存设备进行数据缓存,以减少缓存时间,从而减少业务中断时间。
在可选的实施例中,在所述接收第一控制器发送的通知消息之后,第二控制器还可向第一控制器发送服务停止命令,以使所述第一控制器基于该服务停止命令控制进行第一虚拟机和第二虚拟机之间的内存拷贝及CPU同步。从而通过服务停止命令进入最后一轮迭代,以减少总迁移时间。
在可选的实施例中,当所述应用迁移完成时,第二控制器即可更新数据转发规则;其中,更新后的数据转发规则指示将该应用迁移过程中缓存的数据转发到该第二虚拟机上。从而实现应用迁移。
第三方面,本发明实施例还提供了一种应用迁移装置,具体可设置于上述的第一控制器中,包括:确定模块、事件获取模块以及发送模块,该应用迁移装置可通过上述模块实现上述第一方面的应用迁移方法的部分或全部的步骤。
第四方面,本发明实施例还提供了一种应用迁移装置,具体可设置于上述的第二控制器中,包括:消息接收模块以及缓存控制模块,该应用迁移装置可通过上述模块实现上述第二方面的应用迁移方法的部分或全部的步骤。
第五方面,本发明实施例还提供了一种计算机存储介质,所述计算机存储介质存储有程序,所述程序执行时包括上述第一方面的应用迁移方法的部分或全部的步骤。
第六方面,本发明实施例还提供了一种计算机存储介质,所述计算机存储介质存储有程序,所述程序执行时包括上述第二方面的应用迁移方法的部分或全部的步骤。
第七方面,本发明实施例还提供了一种控制器,包括:通信接口、存储器和处理器,所述处理器分别与所述通信接口和存储器连接;其中,
所述存储器用于存储驱动软件;
所述处理器从所述存储器读取所述驱动软件并在所述驱动软件的作用下执行上述第一方面的应用迁移方法的部分或全部的步骤。
第八方面,本发明实施例还提供了一种控制器,包括:通信接口、存储器和处理器,所述处理器分别与所述通信接口和存储器连接;其中,
所述存储器用于存储驱动软件;
所述处理器从所述存储器读取所述驱动软件并在所述驱动软件的作用下执行上述第二方面的应用迁移方法的部分或全部的步骤。
第九方面,本发明实施例还提供了一种应用迁移系统,包括:第一控制器、第二控制器、第一虚拟机和第二虚拟机;其中,
所述第一控制器用于执行上述第一方面的应用迁移方法的部分或全部的步骤;
所述第二控制器用于执行上述第二方面的应用迁移方法的部分或全部的步骤。
在本发明实施例中,应用侧的第一控制器可在第一虚拟机需要进行应用迁移时,确定出该应用迁移需要迁移到的第二虚拟机,通过获取从第一虚拟机到第二虚拟机的应用迁移过程中的中断事件,并向网络侧的第二控制器发送用于指示该中断事件的通知消息,以使第二控制器能够基于该中断事件控制缓存应用迁移过程中发送至该第一虚拟机的数据,从而能够通过获取应用迁移过程中的中断事件,并通知网络侧,让网络侧及时缓存上行数据报文,以确保可靠的会话连续性,防止丢包,使得数据缓存时间能尽量地接近虚拟机迁移的中断时间,从而缩短了数据缓存时间。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的一种应用迁移示意图;
图2是本发明实施例提供的一种应用迁移方法的流程示意图;
图3是本发明实施例提供的另一种应用迁移方法的流程示意图;
图4是本发明实施例提供的一种应用迁移的场景示意图;
图5是图4场景下的一种应用迁移方法的交互示意图;
图6是图4场景下的另一种应用迁移方法的交互示意图;
图7是本发明实施例提供的另一种应用迁移的场景示意图;
图8是图7场景下的一种应用迁移方法的交互示意图;
图9是图4场景下的又一种应用迁移方法的交互示意图;
图10是图7场景下的另一种应用迁移方法的交互示意图;
图11是图4场景下的又一种应用迁移方法的交互示意图;
图12是图7场景下的又一种应用迁移方法的交互示意图;
图13是图4场景下的又一种应用迁移方法的交互示意图;
图14是图7场景下的又一种应用迁移方法的交互示意图;
图15是本发明实施例提供的一种应用迁移装置的结构示意图;
图16是本发明实施例提供的另一种应用迁移装置的结构示意图;
图17是本发明实施例提供的一种应用迁移系统的结构示意图;
图18是本发明实施例提供的一种控制器的结构示意图;
图19是本发明实施例提供的另一种控制器的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
应理解,本申请的技术方案可应用于各种通信系统,如码分多址(英文:Code Division Multiple Access,简称CDMA)、宽带码分多址(英文:Wideband Code Division Multiple Access,简称WCDMA)、时分同步码分多址(英文:Time Division-Synchronous Code Division Multiple Access,简称TD-SCDMA)、通用移动通信系统(英文:Universal Mobile Telecommunication System,简称UMTS)、长期演进(英文:Long Term Evolution,简称LTE)系统等,随着通信技术的不断发展,本申请的技术方案还可用于未来网络,如第五代移动通信技术(英文:The Fifth Generation Mobile Communication Technology,简称5G)网络,本发明实施例不做限定。
在本申请中,用户设备(英文:User Equipment,简称UE)还可称之为终端、移动台(英文:Mobile Station,简称MS)或移动终端等。其可以经无线 接入网(如RAN,radio access network)与一个或多个核心网进行通信,用户设备可以是移动终端,如移动电话(或称为“蜂窝”电话)和具有移动终端的计算机,还可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置,它们与无线接入网交换语言和/或数据,等等。在本发明实施例中,基站可以是GSM或CDMA中的基站,如基站收发台(英文:Base Transceiver Station,简称BTS),也可以是WCDMA中的基站,如NodeB,还可以是LTE中的演进型基站,如eNB或e-NodeB(evolutional Node B),或未来网络中的基站,本发明实施例不做限定。
下面对本发明实施例的应用迁移场景进行介绍。请参见图1,图1是本发明实施例提供的一种应用迁移示意图。如图1所示,在需要对某一虚拟机(VM)即源VM进行应用(实例)迁移,并确定出需要迁移到的VM即目标VM之后,即可开始进行迁移,该应用迁移具体可指VM业务的迁移。因VM的整个执行状态都存储在内存中的,则在迁移时需向目标VM发送源VM的内存数据以确保VM提供服务的连续性,这就需要经过多轮迭代。第一轮迭代可传输源VM中所有的内存数据,后续迭代会不断地迭代复制刚更新过的数据,如被VM写过的脏页(dirty page)数据,该过程重复进行,直至内存数据较少如脏页足够少。该迭代过程中,尽管一直在迭代复制内存,但并不中断程序的运行。当VM内存数据较少,比如脏页数量低于预设数目阈值,或内存数据大小低于预设内存阈值等时,可复制最后的内存数据,进行最后一轮迭代。在最后一轮传输内存数据时,挂起正在被迁移的VM,即源VM不再进行内存更新,且其他非内存数据(如CPU和网络状态)也同时发送给目标VM。这段时间源VM和目的VM都不提供服务,且其应用不再运行,直至目标VM启动服务。这一段的应用不运行的时间称为迁移的中断持续时间(又称停机时间)。其中,衡量VM迁移性能的两个重要指标为:总迁移时间,即开始迁移到目标VM提供服务的时间;以及中断持续时间,即源VM停止服务而进行最后一次脏页迭代以及CPU同步的时间,此时源VM和目的VM都不提供服务。该中断持续时间越短,使得用户无法觉察,业务连续性越好。其中,以linux系统为例,linux是以页作为高速缓存的单位,当进程修改了高速缓存里的数据时,该页就被内核标记为脏页。可选的,该总迁移时间可以是根据迁移内存大小、 业务类型、链路带宽、脏页产生速率等参数确定出的,该中断持续时间可以是根据脏页产生速率、链路带宽、业务类型、迭代次数等参数确定出的。
具体的,本发明实施例涉及的第一控制器可以为应用侧的控制器,如云控制器Cloud controller,例如管理和编排器(英文:Management and Organization,简称MANO),第二控制器可以为网络侧的控制器如控制面(英文:Controller Plane,简称CP)中。在进行应用迁移时,实际上是通过宿主机中的Hypervisor(虚拟机监控层)的控制和管理将VM从一个宿主机中迁移到另一个宿主机中,并在迁移过程中保持VM业务的平滑运行,VM本身业务不感知迁移过程。可选的,宿主机中的Libvirt接口可为不同的Hypervisor技术,如开源的基于内核的虚拟机(英文:Kernel-based Virtual Machine,简称KVM)、Xen等提供统一的接口。Cloud controller(如MANO)可通过Libvirt client来控制VM的迁移,并实时感知VM迁移的状态。VM迁移技术包括Pre-copy、post-copy、基于日志系统的迁移技术等,本发明实施例以Pre-copy方式为例,对应用迁移过程进行说明。
本发明实施例公开了一种应用迁移方法、控制器及系统,能够减少应用迁移过程中出现的丢包,以及减少业务中断的时间。以下分别详细说明。
请参见图2,是本发明实施例的一种应用迁移方法的流程示意图。具体的,本发明实施例的所述方法可具体应用于应用侧(用户面)的控制器,即第一控制器中,如图2所示,本发明实施例的所述应用迁移方法包括以下步骤:
101、当第一虚拟机需要进行应用迁移时,确定出所述应用迁移需要迁移到的第二虚拟机。
作为一种可选的实施方式,所述确定出所述应用迁移需要迁移到的第二虚拟机,可以具体为:接收第二控制器发送的目标网关的标识信息,所述目标网关为所述第二控制器确定出的;根据所述目标网关的标识信息确定出所述应用迁移需要迁移到的第二虚拟机。具体的,第二控制器如CP可根据UE移动事件选取目标网关用户面功能(英文:User Function of Gateway,简称GW-U)即目标网关,通知应用侧选取的GW-U信息,第一控制器如云控制器Cloud controller可根据目标GW-U选取目标VM,即第二虚拟机。
作为一种可选的实施方式,所述确定出所述应用迁移需要迁移到的第二虚 拟机,可以具体为:获取用户设备的位置信息,并根据所述位置信息确定出需要所述应用迁移需要迁移到的第二虚拟机。进一步的,该第一控制器还可向所述第二控制器发送所述第二虚拟机的位置信息,以使所述第二控制器根据所述第二虚拟机的位置信息确定出目标网关。具体的,还可以是Cloud controller根据UE位置信息先选取目标VM位置,通知网络侧,CP再结合UE当前位置信息以及目标VM位置信息选取合适的GW-U作为目标网关。
102、获取从所述第一虚拟机到所述第二虚拟机的应用迁移过程中的中断事件。
103、向第二控制器发送用于指示所述中断事件的通知消息,以使所述第二控制器基于所述中断事件控制缓存所述应用迁移过程中发送至所述第一虚拟机的数据。
可选的,所述通知消息包括中断时间点以及中断持续时间;所述获取从所述第一虚拟机到所述第二虚拟机的应用迁移过程中的中断事件,可以具体为:获取从所述第一虚拟机到所述第二虚拟机的应用迁移过程中的迁移状态参数;根据所述迁移状态参数预测所述应用迁移过程中的中断时间点以及中断持续时间。具体的,该迁移状态参数可以是从Hypervisor层实时获取的,其可包括总迁移数据量、已迁移量,链路带宽,脏页产生速率等,从而通过该迁移状态参数预测第一虚拟机即源VM停止服务的时间点,即中断时间点。其中,该中断时间点可基于预测的总迁移时间、中断持续时间,并结合当前时刻得到。
可选的,所述获取从所述第一虚拟机到所述第二虚拟机的应用迁移过程中的中断事件,可以具体为:监测从所述第一虚拟机到所述第二虚拟机的应用迁移过程中是否发生中断事件。进一步的,所述向第二控制器发送用于指示所述中断事件的通知消息,可以具体为:当监测到所述中断事件发生时,向第二控制器发送用于指示所述中断事件的通知消息。也就是说,可以通过实时检测应用迁移过程中的中断事件,并在检测到该中断事件时通知第二控制器控制进行数据缓存。进一步的,在检测到中断事件结束时,则可通知第二控制器控制结束该数据缓存。
进一步可选的,该第一控制器还可接收所述第二控制器发送的服务停止命令;响应所述服务停止命令,控制第一虚拟机停止服务,进行第一虚拟机和第 二虚拟机之间的内存拷贝及CPU同步;当同步完成时,控制第二虚拟机启动服务。其中,可通过修改Libvirt Driver,实现控制源侧VM立即停止当前迭代过程来作最后一次迁移,并可通过Nova API开放该能力,即目前第二控制器可以通过Libvirt接口直接控制源VM立即停止当前的迭代过程,来进行最后一次脏页迭代以及CPU状态同步过程,从而缩短迭代的总时间。具体的,CP可发送服务停止命令给Cloud controller,命令源VM立即停止当前迭代过程,进入最后一轮迭代,进行脏页拷贝以及CPU同步,当同步完成后,第二虚拟机开始启动,即可通过目标VM提供服务,以减少总迁移时间。
请参见图3,图3是本发明另一实施例提供的一种应用迁移方法的流程示意图。具体的,本发明实施例的所述方法可具体应用于网络侧的控制器,即第二控制器中,如上述的CP中,如图3所示,本发明实施例的所述应用迁移方法包括以下步骤:
201、接收第一控制器发送的通知消息,所述通知消息指示了从需要进行应用迁移的第一虚拟机到第二虚拟机的应用迁移过程中的中断事件。
202、基于所述中断事件控制缓存所述应用迁移过程中的发送至所述第一虚拟机的数据。
可选的,所述通知消息中包括所述中断事件的中断时间点和中断持续时间;所述基于所述中断事件控制缓存所述应用迁移过程中的发送至所述第一虚拟机的数据可以具体为:生成包括所述中断时间点和中断持续时间的数据缓存命令;向所述第一虚拟机对应的第一网关发送所述数据缓存命令,以使所述第一网关基于所述中断时间点和中断持续时间缓存所述应用迁移过程中发送至所述第一虚拟机的数据。
可选的,所述通知消息中包括所述中断事件的中断时间点和中断持续时间;所述基于所述中断事件控制缓存所述应用迁移过程中的发送至所述第一虚拟机的数据,可以具体为:生成包括所述中断时间点和中断持续时间的数据缓存命令;向第二控制器选取的第二网关发送所述数据缓存命令,以使所述第二网关基于所述中断时间点和中断持续时间缓存所述应用迁移过程中发送至所述第一虚拟机的数据。
作为一种可选的实施方式,该第二控制器可获取用户设备的位置信息,并 根据所述位置信息确定出所述第二网关;向第一控制器发送所述第二网关的标识信息,以使所述第一控制器根据所述第二网关的标识信息确定出需要进行应用迁移的第一虚拟机要迁移到的第二虚拟机。
作为一种可选的实施方式,第二控制器还可接收第一控制器发送的需要进行应用迁移的第一虚拟机要迁移到的第二虚拟机的位置信息;根据所述第二虚拟机的位置信息确定出所述第二网关。
可选的,所述通知消息中包括所述中断事件的中断时间点和中断持续时间;所述基于所述中断事件控制缓存所述应用迁移过程中的发送至所述第一虚拟机的数据可以具体为:生成包括所述中断时间点和中断持续时间的数据缓存命令;向所述第二虚拟机对应的基站发送所述数据缓存命令,以使所述第二虚拟机对应的基站基于所述中断时间点和中断持续时间缓存所述应用迁移过程中发送至所述第一虚拟机的数据。
也就是说,该应用迁移过程中上行数据缓存的位置可以为第一虚拟机对应的第一网关(即应用迁移前的网关),或者第二虚拟机对应的第二网关(即应用迁移后的网关),或者第二虚拟机对应的基站(即应用迁移后的基站)。其缓存的数据可以是应用迁移中断过程中需要发送至第一虚拟机的数据,如UE发送给该第一虚拟机的上行数据。
进一步可选的,所述通知消息还包括所述中断事件的中断时间点;在所述发送所述数据缓存命令之前,该第一控制器还可监测所述中断时间点是否到达;当监测到所述中断时间点到达时,执行所述发送所述数据缓存命令的步骤。具体的,第一控制器预测到应用迁移过程中的中断事件并通知第二控制器之后,第二控制器可根据该中断事件的中断时间点立刻向上述的缓存设备如第一网关、第二网关或基站发送缓存命令,以使缓存设备基于该中断时间点缓存数据。或者第二控制器还可在接收到中断事件消息时,立刻向缓存设备发送缓存命令,缓存设备根据中断时间检测到中断时间点到达时,再基于中断持续时间进行数据缓存。
进一步可选的,在所述接收第一控制器发送的通知消息之后,该第二控制器还可向第一控制器发送服务停止命令,以使所述第一控制器基于所述服务停止命令控制进行第一虚拟机和第二虚拟机之间的内存拷贝及中央处理器CPU 同步。
进一步可选的,该第二控制器还可在所述应用迁移完成时,更新数据转发规则。其中,更新后的数据转发规则指示将所述应用迁移过程中缓存的数据转发到所述第二虚拟机上。
在本发明实施例中,应用侧的第一控制器可在第一虚拟机需要进行应用迁移时,确定出该应用迁移需要迁移到的第二虚拟机,通过获取从第一虚拟机到第二虚拟机的应用迁移过程中的中断事件,并向网络侧的第二控制器发送用于指示该中断事件的通知消息,以使第二控制器能够基于该中断事件控制缓存应用迁移过程中发送至该第一虚拟机的数据,从而能够通过获取应用迁移过程中的中断事件,并通知网络侧,让网络侧及时缓存上行数据报文,以确保可靠的会话连续性,防止丢包,使得数据缓存时间能尽量地接近虚拟机迁移的中断时间,从而缩短了数据缓存时间。
请参见图4,图4是本发明实施例提供的一种应用迁移的场景示意图。如图4所示,该场景下UE移动切换,触发边缘应用迁移。该UE移动切换可以具体为S1切换和X2切换,本发明实施例以S1切换为例进行说明。该场景下,业务需要从源VM即上述的第一虚拟机需要迁移到目标VM即上述的第二虚拟机。如图4所示,该源GW-U为应用迁移前的网关,即上述的第一网关;该目标GW-U为应用迁移后的网关,即上述的第二网关;源eNB为应用迁移前的基站,目标e NB为应用迁移后的基站,即第二虚拟机对应的基站。
请结合图4,一并参见图5,图5是图4场景下的一种应用迁移方法的交互示意图。在本发明实施例中,第二控制器如CP可根据UE移动事件选取目标GW-U,并通知应用侧该选取的目标GW-U信息,应用侧的第一控制器如云控制器Cloud controller可根据选取的目标GW-U选取目标VM。具体的,如图5所示,本发明实施例的所述应用迁移方法可包括以下步骤:
301、UE移动,向源eNB发送测量报告。
具体的,当UE移动时,可通过空口测量,得到测量数据,并可生成一个测量报告(Measure report),将包括该测量数据的测量报告上报给源eNB。源eNB基于该测量数据判断UE移动,从而进行切换决策。其中,该测量数据可 包括UE进行空口测量时的位置信息。
302、源eNB向CP发送UE移动切换请求。
303、CP根据UE位置信息选取目标GW-U。
具体的,源eNB根据UE上报的测量数据判断得到UE发生移动,需要进行应用迁移时,可向网络侧CP发生UE移动切换请求。CP接收到UE移动切换请求之后,即可基于UE位置信息为其选取一个GW-U作为目标GW-U。
304、CP向Cloud controller发送切换协同请求。
其中,该切换协同请求中可包括选取的用户标识信息如UE ID以及选取的目标GW-U的信息。该UE ID可用于确定UE所连接的应用,应用侧Cloud controller可维护UE ID与所连接的应用的对应关系。该选取的目标GW-U信息可包括该目标GW-U与应用侧相连接的出口互联网协议(Internet Protocol,简称IP)地址信息。
305、Cloud controller选取目标VM。
在CP发送切换协同请求之后,网络侧与应用侧可同步进行各自相应的流程。例如,对于网络侧,可执行发送会话创建请求、创建间接数据转发隧道、eNB切换,无线承载创建、承载更新等流程。对于应用侧,Cloud controller可根据UE位置信息,选取一个VM作为需要迁移到的目标VM。可选的,该目标VM可基于路径最优或距离最近等选取原则进行选取。例如,该目标VM可以是基于预先配置于Cloud controller的网络拓扑和业务需求并根据UE位置信息以及当前各计算节点的相对负荷选择出的。
306、进行VM迁移,Cloud controller预测迁移过程中的中断事件。
具体的,应用侧选取好目标VM后,Cloud controller可向CP返回一个切换协同响应消息,该响应消息中可包括选取的目标VM的标识和位置信息等等。VM开始进行迁移,进行内存拷贝以及脏页迭代过程。在迁移开始时,Cloud controller可开启VM迁移中断事件预测功能,具体可通过预设接口(如上述的Libvirt client)获取迁移状态参数,如脏页产生速率、链路带宽、已迁移量、总迁移量等参数,以实现对VM迁移过程中的中断事件的预测,包括预测中断时间点及中断持续时间等参数。其中,中断时间点可以根据预测的总迁移时间、中断持续时间,并结合当前时刻得出。
307、Cloud controller向CP发送包括该中断事件的通知消息。
具体的,Cloud controller预测到源VM停止服务的中断事件以后,即可发送包括该预测得到的中断时间点和中断持续时间等参数的通知消息给CP,指示源VM的服务停止时间。CP接收到该通知消息后可向Cloud controller返回一个确认消息。
308、CP向目标GW-U发送缓存命令。
309、目标GW-U缓存数据。
具体实施例中,CP收到指示源VM停止服务的中断事件的通知消息后,可从该通知消息中获知源VM服务中断的具体时间点,即中断时间点。CP可根据该中断时间点启动定时器,当定时时间达到时,CP发送上行数据缓存命令给目标GW-U,该数据缓存命令可携带中断持续时间。或者,CP还可在预测到该中断事件时,立即向目标GW-U发送包括该中断时间点和中断持续时间的数据缓存命令,从而目标GW-U可根据该中断持续时间预留缓存空间,以在该中断时间点到达时进行数据缓存。可选的,缓存内存可与该中断持续时间相匹配,即目标GW-U可基于该中断持续时间预留用于缓存上行数据的内存。
进一步可选的,CP还可向Cloud controller发送服务停止命令,命令源VM立即停止当前迭代过程,中断服务,进入最后一轮迭代,进行内存拷贝以及CPU同步过程,以减少总迁移时间。当CPU同步完成后,目标VM可启动服务,Cloud controller可向CP发送一个迁移完成消息。
310、VM迁移完成,CP更新数据转发规则。
具体的,迁移完成之后,CP即可更新目标GW-U上的数据(报文)转发规则,将目标GW-U的数据转发从源VM切换到目的VM上。在目标GW-U上的数据转发规则更新好后,CP发送消息给目标GW-U,将缓存的上行数据进行转发,转发给目标VM,并通过目标VM提供服务。此外,可删除源GW-U与源eNB之间的承载,执行间接转发隧道删除流程。
请结合图4,一并参见图6,图6是图4场景下的另一种应用迁移方法的交互示意图。在本发明实施例中,应用侧Cloud controller可根据UE位置信息选取目标VM,并通知网络侧CP选取的目标VM的位置信息,CP根据选取 的目标VM的位置信息选取目标GW-U。具体的,如图6所示,本发明实施例的所述应用迁移方法包括以下步骤:
401、UE移动,向源eNB发送测量报告。
402、源eNB向CP发送UE移动切换请求。
具体的,该步骤401至402的具体方式可参照上述实施例中步骤301至302的描述,此处不赘述。
403、CP向Cloud controller发送切换协同请求。
具体的,该切换协同请求中可包括UE的位置信息、UE ID和/或VM ID。其中,该UE位置信息可以是小区标识cell ID或者跟踪区标识(Tracking Area Identity,简称TAI)等信息。进一步的,该请求可在位置信息发生变化时发送,例如,若该请求中包括Cell ID,则可在小区切换时发送该请求;若该请求中包括TAI,则可在UE进入另一个TA,即TAI变化时发送。
404、Cloud controller根据UE位置信息选取目标VM。
具体的,Cloud controller可根据UE位置信息,选取一个合适的VM作为目标VM,该目标VM可以是基于路径最优或距离最近原则等选取原则选取出的,此处不再赘述。
405、返回切换协同响应消息。
406、CP选取目标GW-U。
具体的,Cloud controller在选取了目标VM之后,即可向CP返回一个切换协同响应消息,该切换协同响应消息可携带目标VM的位置信息。CP接收到Cloud controller发送的切换协同响应消息之后,可结合UE的位置信息以及目标VM的位置信息,选取一个合适的GW-U作为目标GW-U。目标GW-U选取好后,CP可通知Cloud controller该选取的目标GW-U的信息,包括该目标GW-U的出口IP信息。CP可执行3GPP切换流程,包括发送会话创建请求、创建间接数据转发隧道、eNB切换,无线承载创建、承载更新等流程。
407、进行VM迁移,Cloud controller预测迁移过程中的中断事件。
具体的,应用侧选取好目标VM后,Cloud controller可向CP返回一个切换协同响应消息,该响应消息中可包括选取的目标VM的位置信息。VM开始进行迁移,进行内存拷贝以及脏页迭代过程。在迁移开始时,Cloud controller 可开启VM迁移中断事件预测功能,具体可通过预设接口(如上述的Libvirt client)获取迁移状态参数,如脏页产生速率、链路带宽、已迁移量、总迁移量等参数,以实现对VM迁移过程中的中断事件的预测,包括预测中断时间点及中断持续时间等参数。
408、Cloud controller向CP发送包括该中断事件的通知消息。
409、CP向目标GW-U发送缓存命令。
410、目标GW-U缓存数据。
411、VM迁移完成,CP更新数据转发规则。
具体的,该步骤408至411的具体方式可参照上述实施例中步骤307至310的描述,此处不赘述。
其中,基于X2切换的过程与S1切换过程类似,在网络侧切换过程中都会通知应用侧UE位置信息和/或选取的目标GW-U信息,该X2切换过程中的UE移动与应用移动协同可具体参照S1切换过程中的移动协同处理方式,此处不赘述。
请参见图7,图7是本发明实施例提供的另一种应用迁移的场景示意图。如图7所示,该场景下UE不移动,应用移动至靠近UE处。该场景下,业务需要从源VM即上述的第一虚拟机需要迁移到目标VM即上述的第二虚拟机。如图7所示,该源GW-U为应用迁移前的网关,即上述的第一网关;该目标GW-U为应用迁移后的网关,即上述的第二网关;eNB为第一虚拟机对应的基站,也为第二虚拟机对应的基站,即该应用迁移前和迁移后UE处于同一基站下。具体的,该场景下只有应用迁移,而UE不发生移动,比如可为UE附着时,或资源抢占,或当前用户体验不佳、服务质量(英文:Quality of Service,简称QoS)不满足用户需求等场景,应用需主动迁移到靠近UE。
请结合图7,一并参见图8,图8是图7场景下的一种应用迁移方法的交互示意图。具体的,如图8所示,本发明实施例的所述应用迁移方法包括以下步骤:
501、Cloud controller决策应用迁移,选取目标VM。
具体的,Cloud controller可在UE附着时,或资源抢占,或当前用户体验不佳、QoS不满足用户需求,决策应用进行迁移,并可根据UE当前的位置信 息选取一个VM作为目标VM。
502、Cloud controller向CP返回协同消息。
503、CP根据目标VM的位置信息选取目标GW-U。
具体的,应用侧选取好目标VM后,Cloud controller即可向CP发送包括目标VM ID、目标VM的位置信息等目标VM的信息的协同消息。CP收到目标VM协同消息后,即可根据VM ID确定UE当前的哪个应用正在迁移,并根据目标VM的位置信息,选取一个GW-U作为目标GW-U。CP可执行3GPP切换流程,包括发送会话创建请求、创建间接数据转发隧道、eNB切换,无线承载创建、承载更新等流程。其中,对于到达源eNB的下行数据,该间接数据转发隧道可以具体为源eNB->源GW-U->目标GW-U->目标eNB->UE。
进一步的,CP还可向Cloud controller发送目标GW-U与VM连接的出口IP信息。
504、进行VM迁移,Cloud controller预测迁移过程中的中断事件。
此外,应用侧选取好目标VM后,VM开始迁移,进行内存迭代和脏页拷贝过程,并可进一步对源VM中断事件进行预测。可选的,该步骤503和504可以同时进行,或者其执行先后顺序不受限定。
505、Cloud controller向CP发送包括该中断事件的通知消息。
具体的,Cloud controller预测到源VM停止服务的中断事件以后,即可发送包括该预测得到的中断时间点、中断持续时间、源VM ID等参数的通知消息给CP,指示源VM的服务停止时间。CP接收到该通知消息后可向Cloud controller返回一个确认消息。
506、CP向目标GW-U发送缓存命令。
507、目标GW-U缓存数据。
具体实施例中,CP收到指示源VM停止服务的中断事件的通知消息后,即可从该通知消息中的源VM ID确定UE当前的哪个应用将停止服务,获知源VM服务中断的中断时间点。CP可根据该中断时间点启动定时器,当定时时间达到时,CP发送上行数据缓存命令给目标GW-U,该数据缓存命令可携带中断持续时间和目标VM ID。或者,CP还可在预测到该中断事件时,立即向目标GW-U发送包括该中断时间点和中断持续时间的数据缓存命令,从而 目标GW-U可根据该中断持续时间预留缓存空间,以在该中断时间点到达时进行数据缓存。可选的,缓存内存可与该中断持续时间相匹配,即目标GW-U可基于该中断持续时间预留用于缓存上行数据的内存。
进一步可选的,CP还可向Cloud controller发送服务停止命令,命令源VM立即停止当前迭代过程,进入最后一轮迭代内存拷贝以及CPU同步过程,以减少总迁移时间。当CPU同步完成后,目标VM可启动服务,Cloud controller可向CP发送一个迁移完成消息。
508、VM迁移完成,CP更新数据转发规则。
具体的,迁移完成之后,CP即可更新目标GW-U上的数据(报文)转发规则,将目标GW-U的数据转发从源VM切换到目的VM上。在目标GW-U上的数据转发规则更新好后,CP发送消息给目标GW-U,将缓存的上行数据进行转发,转发给目标VM,并通过目标VM提供服务。此外,可删除源GW-U与eNB之间的承载,执行间接转发隧道删除流程。
在本发明实施例中,可通过网络侧与应用侧交互UE与应用两种对象的移动事件,并通过对Cloud controller增加中断事件的预测功能,对中断事件进行预测,使得能够提前告知网络侧源VM停止服务的时间点,让网络侧基于中断事件进行上行数据缓存,将上行数据缓存至目标GW-U上,从而让上行服务中断时间接近于应用迁移的中断持续时间,相比于现有技术中的在虚拟机开始迁移时就缓存上行数据的方式,则大大地减少了上行数据缓存的时间,甚至从秒级别降到了毫秒级别,有效地保障了可靠的会话连续性,且防止了丢包的发生。
请结合图4,一并参见图9,图9是图4场景下的又一种应用迁移方法的交互示意图。具体的,如图9所示,本发明实施例的所述应用迁移方法可包括以下步骤:
601、UE移动,向源eNB发送测量报告。
602、源eNB向CP发送UE移动切换请求。
603、CP选取目标GW-U。
604、CP向Cloud controller发送切换协同请求。
605、Cloud controller选取目标VM。
606、进行VM迁移,Cloud controller预测迁移过程中的中断事件。
607、Cloud controller向CP发送包括该中断事件的通知消息。
具体的,该步骤601至607的具体方式可参照上述实施例中步骤301至307或步骤401至408的描述,也即CP选取目标GW-U,并通知应用侧该选取的目标GW-U信息,Cloud controller根据选取的目标GW-U选取目标VM;或者,Cloud controller选取目标VM,并通知CP选取的目标VM的位置信息,CP根据选取的目标VM的位置信息选取目标GW-U。此处不赘述。
608、CP向源GW-U发送缓存命令。
609、源GW-U缓存数据。
具体实施例中,CP收到指示源VM停止服务的中断事件的通知消息后,可从该通知消息中获知源VM服务中断的中断时间点。CP可根据该中断时间点启动定时器,当定时时间达到时,CP发送上行数据缓存命令给源GW-U,该数据缓存命令可携带中断持续时间。或者,CP还可在预测到该中断事件时,立即向源GW-U发送包括该中断时间点和中断持续时间的数据缓存命令,从而源GW-U可根据该中断持续时间预留缓存空间,以在该中断时间点到达时进行数据缓存。可选的,缓存内存可与该中断持续时间相匹配,即源GW-U可基于该中断持续时间预留用于缓存上行数据的内存。
进一步可选的,CP还可向Cloud controller发送服务停止命令,命令源VM立即停止当前迭代过程,进入最后一轮内存拷贝以及CPU同步过程,以减少总迁移时间。当CPU同步完成后,目标VM可启动服务,Cloud controller可向CP发送一个迁移完成消息。
610、VM迁移完成,CP更新数据转发规则。
具体的,迁移完成之后,CP即可更新源GW-U上的数据(报文)转发规则,将源GW-U的数据转发从源VM切换到目的VM上。在源GW-U上的数据转发规则更新好后,CP发送消息给源GW-U,将缓存的上行数据进行转发,转发给目标VM,并通过目标VM提供服务。此外,可删除源GW-U与源eNB之间的承载,执行间接转发隧道删除流程。
请结合图7,一并参见图10,图10是图7场景下的另一种应用迁移方法 的交互示意图。具体的,如图10所示,本发明实施例的所述应用迁移方法包括以下步骤:
701、Cloud controller决策应用迁移,选取目标VM。
702、进行VM迁移,Cloud controller预测迁移过程中的中断事件。
703、Cloud controller向CP发送包括该中断事件的通知消息。
具体的,Cloud controller预测到源VM停止服务的中断事件以后,即可发送包括该预测得到的中断时间点、中断持续时间的通知消息给CP,指示源VM的服务停止时间。CP接收到该通知消息后可向Cloud controller返回一个确认消息。
704、CP向源GW-U发送缓存命令。
705、源GW-U缓存数据。
具体的,该步骤704至705的具体方式可参照上述实施例中步骤608至609的描述,此处不赘述。
706、VM迁移完成,CP选取目标GW-U。
具体的,应用侧内存拷贝和CPU状态同步完成后,目标VM即可启用服务,Cloud controller捕获到该事件后,即可发送携带目标VM的位置信息的迁移完成消息给CP。进一步的,CP可根据目标VM位置信息以及UE的位置信息,选取一个GW-U作为目标GW-U。可选的,该选取的目标GW-U可以是置于VM中的,与VM为一对一关系;或者,该选取的目标GW-U也可以是置于VM中的,与VM为多对一关系,则该目标GW-U可从VM相连接的多个GW-U中选取出,比如可根据GW-U节点负荷、端到端链路时延等条件选取出。
707、CP更新数据转发规则。
具体的,迁移完成之后,CP即可更新源GW-U上的数据(报文)转发规则,将源GW-U的数据转发从源VM切换到目的VM上。在源GW-U上的数据转发规则更新好后,CP发送消息给源GW-U,将缓存的上行数据进行转发,转发给目标VM,并通过目标VM提供服务。此外,可删除源GW-U与eNB之间的承载,执行间接转发隧道删除流程。
具体实施例中,在选取好目标GW-U后,CP可发送创建会话请求给目标 GW-U。该会话请求消息中可包括目标VM的位置信息,如目标VM的出口IP信息。目标GW-U可回复针对该会话请求的响应消息给CP,该响应消息中可携带目标GW-U与目标VM相连的出口IP消息。具体的,CP可发送创建间接数据前传隧道请求给目标GW-U。目标GW-U接收到该请求后,可为为间接前传隧道开启一个端口号,并发送携带该端口号信息的响应消息给CP。此外,
CP发送创建间接数据前传隧道消息给源GW-U,包括目标GW-U为间接前传隧道开启的端口号信息,源GW-U发送创建间接前传隧道回复消息给CP。CP发送GW-U重选完成消息给Cloud controller。消息中包括目标GW-U与目标VM即应用侧相连接的出口IP信息。
进一步的,目标GW-U选取完成,间接转发隧道建立完成之后,CP即可发送缓存命令给源GW-U,此时对于到达源eNB的下行数据,其间接数据转发隧道可以为源eNB->源GW-U->目标GW-U->目标eNB->UE(在本发明实施例中,源eNB与目标eNB为同一eNB)。CP发送路径切换请求给eNB,将路径从源GW-U切换到目标GW-U,该请求中携带目标GW-U与eNB相连接的端口信息。同时,eNB发送路径切换通知消息给源GW-U,该路径切换通知消息中可包括end marker数据包,该end marker可用于通知源GW-U该eNB不再向源GW-U发送数据报文,源GW-U在发送完当前数据报文后可开始释放承载并删除转发路径,以实现路径切换。在路径切换完成之后,eNB可发送一个切换完成通知消息给Cloud controller。此外,可删除源GW-U与eNB之间的承载,执行间接转发隧道删除流程。
在本发明实施例中,可通过网络侧与应用侧交互UE与应用两种对象的移动事件,并通过对Cloud controller增加中断事件的预测功能,对中断事件进行预测,使得能够提前告知网络侧源VM停止服务的时间点,让网络侧基于中断事件进行上行数据缓存,将上行数据缓存至源GW-U上,待VM迁移完成后再进行目标GW-U选取以及间接数据前传隧道的建立,从而让上行服务中断时间接近于应用迁移的中断持续时间,相比于现有技术中的在虚拟机开始迁移时就缓存上行数据的方式,则大大地减少了上行数据缓存的时间,甚至从秒级别降到了毫秒级别,有效地保障了可靠的会话连续性,且防止了丢包的发生。而且,该将上行数据缓存在源GW-U上,待VM迁移完成后再转发到目标VM 上的方式,则能够有效地避免因应用迁移失败造成的链路失效等现象,因如果VM迁移失败,则VM状态回滚,由源VM继续提供服务,缓存数据仍然发完源侧VM,由源VM提供服务,这就进一步防止了丢包。
请结合图4,一并参见图11,图11是图4场景下的又一种应用迁移方法的交互示意图。具体的,如图11所示,本发明实施例的所述应用迁移方法可包括以下步骤:
801、UE移动,向源eNB发送测量报告。
802、源eNB向CP发送UE移动切换请求。
803、CP选取目标GW-U。
804、CP向Cloud controller发送切换协同请求。
805、Cloud controller选取目标VM。
806、进行VM迁移,Cloud controller预测迁移过程中的中断事件。
807、Cloud controller向CP发送包括该中断事件的通知消息。
具体的,该步骤801至807的具体方式可参照上述实施例中步骤301至307或步骤401至408的描述,也即CP选取目标GW-U,并通知应用侧该选取的目标GW-U信息,Cloud controller根据选取的目标GW-U选取目标VM;或者,Cloud controller选取目标VM,并通知CP选取的目标VM的位置信息,CP根据选取的目标VM的位置信息选取目标GW-U。此处不赘述。
808、CP向目标eNB发送缓存命令。
809、目标eNB缓存数据。
具体实施例中,CP收到指示源VM停止服务的中断事件的通知消息后,可从该通知消息中获知源VM服务中断的中断时间点。CP可根据该中断时间点启动定时器,当定时时间达到时,CP发送上行数据缓存命令给目标eNB,该数据缓存命令可携带中断持续时间。或者,CP还可在预测到该中断事件时,立即向目标eNB发送包括该中断时间点和中断持续时间的数据缓存命令,从而目标eNB可根据该中断持续时间预留缓存空间,在该中断时间点到达时进行数据缓存。可选的,缓存内存可与该中断持续时间相匹配,即目标eNB可基于该中断持续时间预留用于缓存上行数据的内存。
进一步可选的,CP还可向Cloud controller发送服务停止命令,命令源VM立即停止当前迭代过程,进入最后一轮内存拷贝以及CPU同步过程,以减少总迁移时间。当CPU同步完成后,目标VM可启动服务,Cloud controller可向CP发送一个迁移完成消息。
进一步的,迁移完成之后,CP可发送消息给目标eNB,将缓存数据进行转发,即转发给目标VM,并接收源VM的下行数据,服务开始由目标VM提供。
其中,基于X2切换的过程与S1切换过程类似,在网络侧切换过程中都会通知应用侧UE位置信息和/或选取的目标GW-U信息,该X2切换过程中的UE移动与应用移动协同可具体参照S1切换过程中的移动协同处理方式,此处不赘述。
请结合图7,一并参见图12,图12是图7场景下的又一种应用迁移方法的交互示意图。具体的,如图12所示,本发明实施例的所述应用迁移方法可包括以下步骤:
901、Cloud controller决策应用迁移,选取目标VM。
902、Cloud controller向CP返回协同消息。
903、CP根据目标VM的位置信息选取目标GW-U。
904、进行VM迁移,Cloud controller预测迁移过程中的中断事件。
905、Cloud controller向CP发送包括该中断事件的通知消息。
具体的,该步骤901至905的具体方式可参照上述实施例中步骤501至505的描述,此处不赘述。
906、CP向eNB发送缓存命令。
907、eNB缓存数据。
具体实施例中,CP收到指示源VM停止服务的中断事件的通知消息后,即可从该通知消息中的源VM ID确定UE当前的哪个应用将停止服务,获知源VM服务中断的中断时间点。CP可根据该中断时间点启动定时器,当定时时间达到时,CP发送上行数据缓存命令给eNB,该数据缓存命令可携带中断持续时间和目标VM ID。或者立即向eNB发送数据缓存命令,以使eNB基于该中断时间点和中断持续时间进行数据缓存。可选的,缓存内存可与该中断持 续时间相匹配,即eNB可基于该中断持续时间预留用于缓存上行数据的内存。
Further optionally, the CP may also send a service stop command to the Cloud controller, ordering the source VM to immediately stop the current iteration and enter the final round of iterative memory copy and CPU synchronization, so as to reduce the total migration time. After the CPU synchronization is completed, the target VM can start serving, and the Cloud controller may send a migration complete message to the CP.
908. The VM migration is completed, and the CP updates the data forwarding rule.
Specifically, after receiving the migration complete message, the CP can update the packet forwarding rule on the eNB, switching the data packet forwarding of the target GW-U from the source VM to the target VM. After the data forwarding rule on the target GW-U has been updated, the CP sends a message to the eNB to forward the buffered uplink data to the target VM, and the service is then provided by the target VM. In addition, the bearer between the source GW-U and the eNB may be deleted, and the indirect forwarding tunnel deletion procedure may be performed.
In this embodiment of the present invention, mobility events of both the UE and the application can be exchanged between the network side and the application side, and the Cloud controller is enhanced with an interruption event prediction function, so that the time point at which the source VM stops serving can be made known to the network side in advance. The network side buffers uplink data based on the interruption event, caching the uplink data on the eNB, so that the uplink service interruption time approaches the interruption duration of the application migration. Compared with the prior-art approach of buffering uplink data as soon as the virtual machine starts migrating, this greatly reduces the uplink data buffering time, even from the level of seconds to the level of milliseconds, effectively ensures reliable session continuity, and prevents packet loss.
With reference to FIG. 4, refer also to FIG. 13, which is an interaction diagram of yet another application migration method in the scenario of FIG. 4. In this embodiment of the present invention, the existing interfaces and functions of the Cloud controller can be used to capture the interruption event during the VM migration and report it to the network side. Specifically, as shown in FIG. 13, the application migration method in this embodiment of the present invention may include the following steps:
1001. The UE moves and sends a measurement report to the source eNB.
1002. The source eNB sends a UE mobility handover request to the CP.
1003. The CP selects a target GW-U.
1004. The CP sends a handover coordination request to the Cloud controller.
1005. The Cloud controller selects a target VM.
Specifically, for the details of steps 1001 to 1005, reference may be made to the description of steps 301 to 305 or steps 401 to 406 in the foregoing embodiments; that is, either the CP selects the target GW-U and notifies the application side of the selected target GW-U information, and the Cloud controller selects the target VM according to the selected target GW-U; or the Cloud controller selects the target VM and notifies the CP of the location information of the selected target VM, and the CP selects the target GW-U according to the location information of the selected target VM. Details are not described herein again.
1006. VM migration is performed, and the Cloud controller monitors interruption events during the migration in real time.
Specifically, when the Cloud controller detects an interruption event of the source VM, it can send the CP a notification message indicating the interruption event in which the source VM stops serving.
1007. The Cloud controller sends a notification message including the interruption event to the CP.
1008. Data buffering is performed.
In a specific embodiment, after receiving the notification message indicating the interruption event in which the source VM stops serving, the CP may send an acknowledgment message to the Cloud controller and can learn from the notification message the interruption time point at which the source VM's service is interrupted. The CP may start a timer based on the interruption time point; when the timer expires, the CP sends an uplink data buffering command to the source GW-U, the target GW-U, or the target eNB for data buffering. The buffering command may carry the interruption duration. Alternatively, when the interruption event is obtained, the CP may immediately send the source GW-U, the target GW-U, or the target eNB a data buffering command including the interruption time point and the interruption duration, so that the source GW-U, the target GW-U, or the target eNB can reserve buffer space according to the interruption duration and buffer data when the interruption time point arrives.
Further optionally, the CP may also send a service stop command to the Cloud controller, ordering the source VM to immediately stop the current iteration and enter the final round of memory copy and CPU synchronization, so as to reduce the total migration time. After the CPU synchronization is completed, the target VM can start serving, and the Cloud controller may send a migration complete message to the CP.
1009. The VM migration is completed, and the CP updates the data forwarding rule.
Further, after the migration is completed, the CP may send a message to the source GW-U, the target GW-U, or the target eNB to forward the buffered data, that is, to forward it to the target VM.
The X2-based handover procedure is similar to the S1 handover procedure: in the network-side handover procedure, the application side is notified of the UE location information and/or the selected target GW-U information. For the coordination of UE mobility and application mobility in the X2 handover procedure, reference may be made to the mobility coordination processing in the S1 handover procedure, and details are not described herein again.
With reference to FIG. 7, refer also to FIG. 14, which is an interaction diagram of yet another application migration method in the scenario of FIG. 7. Specifically, as shown in FIG. 14, the application migration method in this embodiment of the present invention may include the following steps:
1101. The Cloud controller decides on application migration and selects a target VM.
1102. The CP selects a target GW-U.
Specifically, for the details of steps 1101 to 1102, reference may be made to the description of steps 501 to 502 in the foregoing embodiment, and details are not described herein again.
1103. VM migration is performed, and the Cloud controller monitors interruption events during the migration in real time.
1104. The Cloud controller sends a notification message including the interruption event to the CP.
Specifically, when the Cloud controller detects an interruption event of the source VM, it can send the CP a notification message indicating the interruption event in which the source VM stops serving.
1105. Data buffering is performed.
Specifically, for the details of step 1105, reference may be made to the description of step 1008 in the foregoing embodiment, and details are not described herein again.
1106. The VM migration is completed, and the CP updates the data forwarding rule.
Specifically, after receiving the migration complete message, the CP can update the packet forwarding rule on the eNB, switching the data packet forwarding of the buffering device, such as the source GW-U, the target GW-U, or the eNB, from the source VM to the target VM. After the data forwarding rule has been updated, the CP sends a message to the buffering device, such as the source GW-U, the target GW-U, or the eNB, to forward the buffered uplink data to the target VM, and the service is then provided by the target VM. In addition, the bearer between the source GW-U and the eNB may be deleted, and the indirect forwarding tunnel deletion procedure may be performed.
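One way to express the rule update and the subsequent draining of the buffered uplink data, with the forwarding table reduced to a plain mapping for illustration (all names and the table layout are assumptions of the sketch):

    from typing import Callable, Dict, Iterable

    def switch_forwarding(rules: Dict[str, str], flow_id: str,
                          source_vm: str, target_vm: str,
                          buffered_packets: Iterable[bytes],
                          send: Callable[[str, bytes], None]) -> None:
        # Point the buffering device's forwarding rule for this flow at the target VM,
        # then drain the uplink packets that were held while the source VM was stopped.
        if rules.get(flow_id) != source_vm:
            raise ValueError("forwarding rule no longer points at the source VM")
        rules[flow_id] = target_vm
        for packet in buffered_packets:
            send(target_vm, packet)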
In this embodiment of the present invention, interruption events during the application migration are monitored in real time and reported to the network side, so that uplink data can be buffered in a timely manner, which reduces the service interruption time; only existing interfaces are needed, which lowers the system cost.
Refer to FIG. 15, which is a schematic structural diagram of an application migration apparatus according to an embodiment of the present invention. Specifically, as shown in FIG. 15, the application migration apparatus in this embodiment of the present invention may include a determination module 11, an event acquisition module 12, and a sending module 13, where:
the determination module 11 is configured to, when a first virtual machine needs to perform application migration, determine a second virtual machine to which the application migration is to be performed;
the event acquisition module 12 is configured to acquire an interruption event during the application migration from the first virtual machine to the second virtual machine; and
the sending module 13 is configured to send, to another controller, a notification message for indicating the interruption event, so that the other controller controls, based on the interruption event, the buffering of data sent to the first virtual machine during the application migration.
The application migration apparatus in this embodiment of the present invention may be provided in the foregoing first controller, and the other controller may correspond to the foregoing second controller. The first controller may be a controller on the application side, and the second controller may be a controller on the network side. The acquired interruption event may include the interruption time point and the interruption duration during the application migration, and so on. The second controller can therefore control, based on interruption event information such as the interruption time point and the interruption duration, the buffering of data during the interruption corresponding to the interruption event, so as to reduce the buffering time and thereby reduce the service interruption time.
Optionally, the notification message includes an interruption time point and an interruption duration, and the event acquisition module 12 may specifically include:
a parameter acquisition unit, configured to acquire a migration state parameter during the application migration from the first virtual machine to the second virtual machine; and
a prediction unit, configured to predict, according to the migration state parameter acquired by the parameter acquisition unit, the interruption time point and the interruption duration during the application migration.
Optionally, the event acquisition module 12 may be specifically configured to:
monitor whether an interruption event occurs during the application migration from the first virtual machine to the second virtual machine; and
the sending module 13 may be specifically configured to:
when the occurrence of the interruption event is detected, send, to another controller, a notification message for indicating the interruption event.
Further optionally, the determination module 11 may be specifically configured to:
receive identification information of a target gateway sent by another controller, where the target gateway is determined by the other controller; and
determine, according to the identification information of the target gateway, the second virtual machine to which the application migration is to be performed.
Further optionally, the determination module 11 may be specifically configured to:
acquire location information of user equipment, and determine, according to the location information, the second virtual machine to which the application migration is to be performed; and
the sending module 13 is further configured to send the location information of the second virtual machine to the other controller, so that the other controller determines a target gateway according to the location information of the second virtual machine.
Further optionally, the apparatus may further include:
a receiving module, configured to receive a service stop command sent by the other controller; and
a control module, configured to control, in response to the service stop command, the first virtual machine to stop serving, and to perform memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine, where
the control module is further configured to control the second virtual machine to start serving when the synchronization is completed.
Refer to FIG. 16, which is a schematic structural diagram of another application migration apparatus according to an embodiment of the present invention. Specifically, as shown in FIG. 16, the application migration apparatus in this embodiment of the present invention may include a message receiving module 21 and a buffering control module 22, where:
the message receiving module 21 is configured to receive a notification message sent by another controller, where the notification message indicates an interruption event during the application migration from a first virtual machine that needs to perform application migration to a second virtual machine; and
the buffering control module 22 is configured to control, based on the interruption event, the buffering of data sent to the first virtual machine during the application migration.
The application migration apparatus in this embodiment of the present invention may be provided in the foregoing second controller, and the other controller may correspond to the foregoing first controller. The first controller may be a controller on the application side, and the second controller may be a controller on the network side. The acquired interruption event may include the interruption time point and the interruption duration during the application migration, and so on. The second controller can therefore control, based on interruption event information such as the interruption time point and the interruption duration, the buffering of data during the interruption corresponding to the interruption event, so as to reduce the buffering time and thereby reduce the service interruption time.
Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event, and the buffering control module 22 may be specifically configured to:
generate a data buffering command including the interruption time point and the interruption duration; and
send the data buffering command to a first gateway corresponding to the first virtual machine, so that the first gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event, and the buffering control module 22 may be specifically configured to:
generate a data buffering command including the interruption time point and the interruption duration; and
send the data buffering command to a pre-selected second gateway, so that the second gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event, and the buffering control module 22 may be specifically configured to:
generate a data buffering command including the interruption time point and the interruption duration; and
send the data buffering command to a base station corresponding to the second virtual machine, so that the base station corresponding to the second virtual machine buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
Further, in an optional embodiment, the apparatus may further include:
a time monitoring module, configured to monitor whether the interruption time point arrives, and notify the buffering control module 22 to send the data buffering command when it is detected that the interruption time point arrives.
Further, in an optional embodiment, the apparatus may further include:
a first determination module, configured to acquire location information of user equipment and determine the second gateway according to the location information; and
a first sending module, configured to send identification information of the second gateway to another controller, so that the other controller determines, according to the identification information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated.
Further, in an optional embodiment,
the message receiving module 21 is further configured to receive, from another controller, location information of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated; and
the apparatus may further include:
a second determination module, configured to determine the second gateway according to the location information of the second virtual machine.
Further, in an optional embodiment, the apparatus may further include:
a second sending module, configured to send a service stop command to another controller, so that the other controller controls, based on the service stop command, the memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine.
Further, in an optional embodiment, the apparatus may further include:
an update module, configured to update a data forwarding rule when the application migration is completed,
where the updated data forwarding rule indicates that the data buffered during the application migration is to be forwarded to the second virtual machine.
In this embodiment of the present invention, when the first virtual machine needs to perform application migration, the first controller on the application side can determine the second virtual machine to which the application migration is to be performed, acquire the interruption event during the application migration from the first virtual machine to the second virtual machine, and send the second controller on the network side a notification message for indicating the interruption event, so that the second controller can control, based on the interruption event, the buffering of data sent to the first virtual machine during the application migration. By acquiring the interruption event during the application migration and notifying the network side, the network side can buffer uplink data packets in a timely manner, which ensures reliable session continuity, prevents packet loss, and keeps the data buffering time as close as possible to the interruption time of the virtual machine migration, thereby shortening the data buffering time.
Refer to FIG. 17, which is a schematic structural diagram of an application migration system according to an embodiment of the present invention. Specifically, as shown in FIG. 17, the application migration system in this embodiment of the present invention may include a first controller 1, a second controller 2, a first virtual machine 3, and a second virtual machine 4, where:
the first controller 1 is configured to: when the first virtual machine 3 needs to perform application migration, determine the second virtual machine 4 to which the application migration is to be performed; acquire an interruption event during the application migration from the first virtual machine 3 to the second virtual machine 4; and send, to the second controller 2, a notification message for indicating the interruption event; and
the second controller 2 is configured to receive the notification message sent by the first controller 1, and control, based on the interruption event, the buffering of data sent to the first virtual machine 3 during the application migration.
Specifically, for the first controller, the second controller, the first virtual machine, and the second virtual machine in this embodiment of the present invention, reference may be made to the related descriptions of the embodiments corresponding to FIG. 1 to FIG. 14, and details are not described herein again.
Refer to FIG. 18, which is a schematic structural diagram of a controller according to an embodiment of the present invention. Specifically, as shown in FIG. 18, the controller in this embodiment of the present invention includes: a communication interface 300, a memory 200, and a processor 100, where the processor 100 is connected to the communication interface 300 and the memory 200, respectively. The memory 200 may be a high-speed RAM memory, or may be a non-volatile memory, for example, at least one disk memory. The communication interface 300, the memory 200, and the processor 100 may be connected for data exchange via a bus, or may be connected in other manners; in this embodiment, a bus connection is used for description. Specifically, the controller in this embodiment of the present invention may correspond to the first controller in the embodiments corresponding to FIG. 2 to FIG. 14, and may specifically be a controller on the application side of the communication network, such as a Cloud controller; for details, refer to the related descriptions of the first controller in the embodiments corresponding to FIG. 2 to FIG. 14. Here,
the memory 200 is configured to store driver software; and
the processor 100 reads the driver software from the memory and, under the action of the driver software, performs the following:
when a first virtual machine needs to perform application migration, determining a second virtual machine to which the application migration is to be performed;
acquiring an interruption event during the application migration from the first virtual machine to the second virtual machine; and
sending, to a second controller via the communication interface 300, a notification message for indicating the interruption event, so that the second controller controls, based on the interruption event, the buffering of data sent to the first virtual machine during the application migration.
Optionally, the notification message includes an interruption time point and an interruption duration; when performing, under the action of the driver software, the acquiring of an interruption event during the application migration from the first virtual machine to the second virtual machine, the processor 100 specifically performs the following steps:
acquiring a migration state parameter during the application migration from the first virtual machine to the second virtual machine; and
predicting, according to the migration state parameter, the interruption time point and the interruption duration during the application migration.
Optionally, when performing, under the action of the driver software, the acquiring of an interruption event during the application migration from the first virtual machine to the second virtual machine, the processor 100 specifically performs the following steps:
monitoring whether an interruption event occurs during the application migration from the first virtual machine to the second virtual machine, where
the sending, to a second controller, of a notification message for indicating the interruption event includes:
when the occurrence of the interruption event is detected, sending, to the second controller via the communication interface 300, a notification message for indicating the interruption event.
Optionally, when performing, under the action of the driver software, the determining of the second virtual machine to which the application migration is to be performed, the processor 100 specifically performs the following steps:
receiving, via the communication interface 300, identification information of a target gateway sent by the second controller, where the target gateway is determined by the second controller; and
determining, according to the identification information of the target gateway, the second virtual machine to which the application migration is to be performed.
Optionally, when performing, under the action of the driver software, the determining of the second virtual machine to which the application migration is to be performed, the processor 100 specifically performs the following steps:
acquiring location information of user equipment, and determining, according to the location information, the second virtual machine to which the application migration is to be performed, where
the processor 100 further performs the following step:
sending the location information of the second virtual machine to the second controller via the communication interface 300, so that the second controller determines a target gateway according to the location information of the second virtual machine.
Further optionally, the processor 100 is further configured to perform, under the action of the driver software, the following steps:
receiving, via the communication interface 300, a service stop command sent by the second controller;
in response to the service stop command, controlling the first virtual machine to stop serving, and performing memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine; and
when the synchronization is completed, controlling the second virtual machine to start serving.
Further, refer to FIG. 19, which is a schematic structural diagram of another controller according to an embodiment of the present invention. Specifically, as shown in FIG. 19, the controller in this embodiment of the present invention includes: a communication interface 600, a memory 500, and a processor 400, where the processor 400 is connected to the communication interface 600 and the memory 500, respectively. The memory 500 may be a high-speed RAM memory, or may be a non-volatile memory, for example, at least one disk memory. The communication interface 600, the memory 500, and the processor 400 may be connected for data exchange via a bus, or may be connected in other manners; in this embodiment, a bus connection is used for description. Specifically, the controller in this embodiment of the present invention may correspond to the second controller in the embodiments corresponding to FIG. 2 to FIG. 14, and may specifically be a controller on the network side of the communication network, such as the CP; for details, refer to the related descriptions of the second controller in the embodiments corresponding to FIG. 2 to FIG. 14. Here,
the memory 500 is configured to store driver software; and
the processor 400 reads the driver software from the memory and, under the action of the driver software, performs the following:
receiving, via the communication interface 600, a notification message sent by a first controller, where the notification message indicates an interruption event during the application migration from a first virtual machine that needs to perform application migration to a second virtual machine; and
controlling, based on the interruption event, the buffering of data sent to the first virtual machine during the application migration.
Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event; when performing, under the action of the driver software, the controlling, based on the interruption event, of the buffering of data sent to the first virtual machine during the application migration, the processor 400 specifically performs the following steps:
generating a data buffering command including the interruption time point and the interruption duration; and
sending, via the communication interface 600, the data buffering command to a first gateway corresponding to the first virtual machine, so that the first gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event; when performing, under the action of the driver software, the controlling, based on the interruption event, of the buffering of data sent to the first virtual machine during the application migration, the processor 400 specifically performs the following steps:
generating a data buffering command including the interruption time point and the interruption duration; and
sending, via the communication interface 600, the data buffering command to a second gateway selected by the second controller, so that the second gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
Optionally, the notification message includes the interruption time point and the interruption duration of the interruption event; when performing, under the action of the driver software, the controlling, based on the interruption event, of the buffering of data sent to the first virtual machine during the application migration, the processor 400 specifically performs the following steps:
generating a data buffering command including the interruption time point and the interruption duration; and
sending, via the communication interface 600, the data buffering command to a base station corresponding to the second virtual machine, so that the base station corresponding to the second virtual machine buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
Optionally, before performing, under the action of the driver software, the sending of the data buffering command, the processor 400 is further configured to perform the following steps:
monitoring whether the interruption time point arrives; and
when it is detected that the interruption time point arrives, performing the step of sending the data buffering command.
Optionally, the processor 400 is further configured to perform, under the action of the driver software, the following steps:
acquiring location information of user equipment, and determining the second gateway according to the location information; and
sending identification information of the second gateway to the first controller via the communication interface 600, so that the first controller determines, according to the identification information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated.
Optionally, the processor 400 is further configured to perform, under the action of the driver software, the following steps:
receiving, via the communication interface 600, location information, sent by the first controller, of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated; and
determining the second gateway according to the location information of the second virtual machine.
Optionally, after performing, under the action of the driver software, the receiving of the notification message sent by the first controller, the processor 400 is further configured to perform the following step:
sending a service stop command to the first controller via the communication interface 600, so that the first controller controls, based on the service stop command, the memory copy and CPU synchronization between the first virtual machine and the second virtual machine.
Optionally, the processor 400 is further configured to perform, under the action of the driver software, the following steps:
updating a data forwarding rule when the application migration is completed,
where the updated data forwarding rule indicates that the data buffered during the application migration is to be forwarded to the second virtual machine.
In this embodiment of the present invention, when the first virtual machine needs to perform application migration, the first controller on the application side can determine the second virtual machine to which the application migration is to be performed, acquire the interruption event during the application migration from the first virtual machine to the second virtual machine, and send the second controller on the network side a notification message for indicating the interruption event, so that the second controller can control, based on the interruption event, the buffering of data sent to the first virtual machine during the application migration, buffering the data on the first gateway, the second gateway, or the base station. By acquiring the interruption event during the application migration and notifying the network side, the network side can buffer uplink data packets in a timely manner, which ensures reliable session continuity, prevents packet loss, and keeps the data buffering time as close as possible to the interruption time of the virtual machine migration, thereby shortening the data buffering time.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into modules is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or modules may be electrical, mechanical, or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements to some or all of the technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (33)

  1. An application migration method, applied to a first controller, wherein the method comprises:
    when a first virtual machine needs to perform application migration, determining a second virtual machine to which the application migration is to be performed;
    acquiring an interruption event during the application migration from the first virtual machine to the second virtual machine; and
    sending, to a second controller, a notification message for indicating the interruption event, so that the second controller controls, based on the interruption event, buffering of data sent to the first virtual machine during the application migration.
  2. The method according to claim 1, wherein the notification message comprises an interruption time point and an interruption duration, and the acquiring an interruption event during the application migration from the first virtual machine to the second virtual machine comprises:
    acquiring a migration state parameter during the application migration from the first virtual machine to the second virtual machine; and
    predicting, according to the migration state parameter, the interruption time point and the interruption duration during the application migration.
  3. The method according to claim 1, wherein the acquiring an interruption event during the application migration from the first virtual machine to the second virtual machine comprises:
    monitoring whether an interruption event occurs during the application migration from the first virtual machine to the second virtual machine; and
    the sending, to a second controller, a notification message for indicating the interruption event comprises:
    when the occurrence of the interruption event is detected, sending, to the second controller, a notification message for indicating the interruption event.
  4. The method according to any one of claims 1 to 3, wherein the determining a second virtual machine to which the application migration is to be performed comprises:
    receiving identification information of a target gateway sent by the second controller, wherein the target gateway is determined by the second controller; and
    determining, according to the identification information of the target gateway, the second virtual machine to which the application migration is to be performed.
  5. The method according to any one of claims 1 to 3, wherein the determining a second virtual machine to which the application migration is to be performed comprises:
    acquiring location information of user equipment, and determining, according to the location information, the second virtual machine to which the application migration is to be performed; and
    the method further comprises:
    sending the location information of the second virtual machine to the second controller, so that the second controller determines a target gateway according to the location information of the second virtual machine.
  6. The method according to claim 4 or 5, wherein the method further comprises:
    receiving a service stop command sent by the second controller;
    in response to the service stop command, controlling the first virtual machine to stop serving, and performing memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine; and
    when the synchronization is completed, controlling the second virtual machine to start serving.
  7. An application migration method, applied to a second controller, wherein the method comprises:
    receiving a notification message sent by a first controller, wherein the notification message indicates an interruption event during application migration from a first virtual machine that needs to perform application migration to a second virtual machine; and
    controlling, based on the interruption event, buffering of data sent to the first virtual machine during the application migration.
  8. The method according to claim 7, wherein the notification message comprises an interruption time point and an interruption duration of the interruption event, and the controlling, based on the interruption event, buffering of data sent to the first virtual machine during the application migration comprises:
    generating a data buffering command comprising the interruption time point and the interruption duration; and
    sending the data buffering command to a first gateway corresponding to the first virtual machine, so that the first gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
  9. The method according to claim 7, wherein the notification message comprises an interruption time point and an interruption duration of the interruption event, and the controlling, based on the interruption event, buffering of data sent to the first virtual machine during the application migration comprises:
    generating a data buffering command comprising the interruption time point and the interruption duration; and
    sending the data buffering command to a second gateway selected by the second controller, so that the second gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
  10. The method according to claim 7, wherein the notification message comprises an interruption time point and an interruption duration of the interruption event, and the controlling, based on the interruption event, buffering of data sent to the first virtual machine during the application migration comprises:
    generating a data buffering command comprising the interruption time point and the interruption duration; and
    sending the data buffering command to a base station corresponding to the second virtual machine, so that the base station corresponding to the second virtual machine buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
  11. The method according to any one of claims 8 to 10, wherein before the sending the data buffering command, the method further comprises:
    monitoring whether the interruption time point arrives; and
    when it is detected that the interruption time point arrives, performing the step of sending the data buffering command.
  12. The method according to claim 9, wherein the method further comprises:
    acquiring location information of user equipment, and determining the second gateway according to the location information; and
    sending identification information of the second gateway to the first controller, so that the first controller determines, according to the identification information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated.
  13. The method according to claim 9, wherein the method further comprises:
    receiving location information, sent by the first controller, of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated; and
    determining the second gateway according to the location information of the second virtual machine.
  14. The method according to claim 7, wherein after the receiving a notification message sent by a first controller, the method further comprises:
    sending a service stop command to the first controller, so that the first controller controls, based on the service stop command, memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine.
  15. The method according to any one of claims 7 to 14, wherein the method further comprises:
    updating a data forwarding rule when the application migration is completed,
    wherein the updated data forwarding rule indicates that the data buffered during the application migration is to be forwarded to the second virtual machine.
  16. An application migration apparatus, provided in a controller, wherein the apparatus comprises:
    a determination module, configured to determine, when a first virtual machine needs to perform application migration, a second virtual machine to which the application migration is to be performed;
    an event acquisition module, configured to acquire an interruption event during the application migration from the first virtual machine to the second virtual machine; and
    a sending module, configured to send, to another controller, a notification message for indicating the interruption event, so that the other controller controls, based on the interruption event, buffering of data sent to the first virtual machine during the application migration.
  17. The apparatus according to claim 16, wherein the notification message comprises an interruption time point and an interruption duration, and the event acquisition module comprises:
    a parameter acquisition unit, configured to acquire a migration state parameter during the application migration from the first virtual machine to the second virtual machine; and
    a prediction unit, configured to predict, according to the migration state parameter acquired by the parameter acquisition unit, the interruption time point and the interruption duration during the application migration.
  18. The apparatus according to claim 16, wherein the event acquisition module is specifically configured to:
    monitor whether an interruption event occurs during the application migration from the first virtual machine to the second virtual machine; and
    the sending module is specifically configured to:
    when the occurrence of the interruption event is detected, send, to another controller, a notification message for indicating the interruption event.
  19. The apparatus according to any one of claims 16 to 18, wherein the determination module is specifically configured to:
    receive identification information of a target gateway sent by another controller, wherein the target gateway is determined by the other controller; and
    determine, according to the identification information of the target gateway, the second virtual machine to which the application migration is to be performed.
  20. The apparatus according to any one of claims 16 to 18, wherein the determination module is specifically configured to:
    acquire location information of user equipment, and determine, according to the location information, the second virtual machine to which the application migration is to be performed; and
    the sending module is further configured to send the location information of the second virtual machine to the other controller, so that the other controller determines a target gateway according to the location information of the second virtual machine.
  21. The apparatus according to claim 19 or 20, wherein the apparatus further comprises:
    a receiving module, configured to receive a service stop command sent by the other controller; and
    a control module, configured to control, in response to the service stop command, the first virtual machine to stop serving, and to perform memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine,
    wherein the control module is further configured to control the second virtual machine to start serving when the synchronization is completed.
  22. An application migration apparatus, provided in a controller, wherein the apparatus comprises:
    a message receiving module, configured to receive a notification message sent by another controller, wherein the notification message indicates an interruption event during application migration from a first virtual machine that needs to perform application migration to a second virtual machine; and
    a buffering control module, configured to control, based on the interruption event, buffering of data sent to the first virtual machine during the application migration.
  23. The apparatus according to claim 22, wherein the notification message comprises an interruption time point and an interruption duration of the interruption event, and the buffering control module is specifically configured to:
    generate a data buffering command comprising the interruption time point and the interruption duration; and
    send the data buffering command to a first gateway corresponding to the first virtual machine, so that the first gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
  24. The apparatus according to claim 22, wherein the notification message comprises an interruption time point and an interruption duration of the interruption event, and the buffering control module is specifically configured to:
    generate a data buffering command comprising the interruption time point and the interruption duration; and
    send the data buffering command to a pre-selected second gateway, so that the second gateway buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
  25. The apparatus according to claim 22, wherein the notification message comprises an interruption time point and an interruption duration of the interruption event, and the buffering control module is specifically configured to:
    generate a data buffering command comprising the interruption time point and the interruption duration; and
    send the data buffering command to a base station corresponding to the second virtual machine, so that the base station corresponding to the second virtual machine buffers, based on the interruption time point and the interruption duration, the data sent to the first virtual machine during the application migration.
  26. The apparatus according to any one of claims 23 to 25, wherein the apparatus further comprises:
    a time monitoring module, configured to monitor whether the interruption time point arrives, and notify the buffering control module to send the data buffering command when it is detected that the interruption time point arrives.
  27. The apparatus according to claim 24, wherein the apparatus further comprises:
    a first determination module, configured to acquire location information of user equipment and determine the second gateway according to the location information; and
    a first sending module, configured to send identification information of the second gateway to another controller, so that the other controller determines, according to the identification information of the second gateway, the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated.
  28. The apparatus according to claim 24, wherein
    the message receiving module is further configured to receive, from another controller, location information of the second virtual machine to which the first virtual machine that needs to perform application migration is to be migrated; and
    the apparatus further comprises:
    a second determination module, configured to determine the second gateway according to the location information of the second virtual machine.
  29. The apparatus according to claim 22, wherein the apparatus further comprises:
    a second sending module, configured to send a service stop command to another controller, so that the other controller controls, based on the service stop command, memory copy and central processing unit (CPU) synchronization between the first virtual machine and the second virtual machine.
  30. The apparatus according to any one of claims 22 to 29, wherein the apparatus further comprises:
    an update module, configured to update a data forwarding rule when the application migration is completed,
    wherein the updated data forwarding rule indicates that the data buffered during the application migration is to be forwarded to the second virtual machine.
  31. A controller, comprising: a communication interface, a memory, and a processor, wherein the processor is connected to the communication interface and the memory, respectively;
    the memory is configured to store driver software; and
    the processor is configured to read the driver software from the memory and, under the action of the driver software, perform the following:
    when a first virtual machine needs to perform application migration, determining a second virtual machine to which the application migration is to be performed;
    acquiring an interruption event during the application migration from the first virtual machine to the second virtual machine; and
    sending, to a second controller, a notification message for indicating the interruption event, so that the second controller controls, based on the interruption event, buffering of data sent to the first virtual machine during the application migration.
  32. A controller, comprising: a communication interface, a memory, and a processor, wherein the processor is connected to the communication interface and the memory, respectively;
    the memory is configured to store driver software; and
    the processor is configured to read the driver software from the memory and, under the action of the driver software, perform the following:
    receiving a notification message sent by a first controller, wherein the notification message indicates an interruption event during application migration from a first virtual machine that needs to perform application migration to a second virtual machine; and
    controlling, based on the interruption event, buffering of data sent to the first virtual machine during the application migration.
  33. An application migration system, comprising: a first controller, a second controller, a first virtual machine, and a second virtual machine, wherein
    the first controller is configured to: when the first virtual machine needs to perform application migration, determine the second virtual machine to which the application migration is to be performed; acquire an interruption event during the application migration from the first virtual machine to the second virtual machine; and send, to the second controller, a notification message for indicating the interruption event; and
    the second controller is configured to receive the notification message sent by the first controller, and control, based on the interruption event, buffering of data sent to the first virtual machine during the application migration.
PCT/CN2016/098883 2016-09-13 2016-09-13 Application migration method, apparatus and system WO2018049567A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/098883 WO2018049567A1 (zh) 2016-09-13 2016-09-13 Application migration method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/098883 WO2018049567A1 (zh) 2016-09-13 2016-09-13 Application migration method, apparatus and system

Publications (1)

Publication Number Publication Date
WO2018049567A1 true WO2018049567A1 (zh) 2018-03-22

Family

ID=61618567

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/098883 WO2018049567A1 (zh) 2016-09-13 2016-09-13 Application migration method, apparatus and system

Country Status (1)

Country Link
WO (1) WO2018049567A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130185719A1 (en) * 2012-01-17 2013-07-18 Microsoft Corporation Throttling guest write ios based on destination throughput
CN103685368A (zh) * 2012-09-10 2014-03-26 China Telecom Corp., Ltd. Method and system for migrating data
CN103825915A (zh) * 2012-11-16 2014-05-28 China Telecom Corp., Ltd. Service mobility management method and system in a virtualized environment
CN103856480A (zh) * 2012-11-30 2014-06-11 International Business Machines Corp. User datagram protocol packet migration during virtual machine migration
CN104468397A (zh) * 2014-11-06 2015-03-25 Hangzhou H3C Technologies Co., Ltd. Method and apparatus for lossless packet forwarding during virtual machine live migration

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110968393A (zh) * 2018-09-30 2020-04-07 Alibaba Group Holding Ltd. Virtual machine migration processing method, storage medium, and computing device
CN110968393B (zh) * 2018-09-30 2023-05-02 Alibaba Group Holding Ltd. Virtual machine migration processing method, storage medium, and computing device
US11748131B2 (en) 2019-11-20 2023-09-05 Red Hat, Inc. Network updates for virtual machine migration
CN110971468A (zh) * 2019-12-12 2020-04-07 Guangxi University Delayed-copy incremental container checkpoint processing method based on dirty page prediction
CN110971468B (zh) * 2019-12-12 2022-04-05 Guangxi University Delayed-copy incremental container checkpoint processing method based on dirty page prediction
US11513983B2 (en) 2020-05-15 2022-11-29 International Business Machines Corporation Interrupt migration
CN115426406A (zh) * 2022-07-25 2022-12-02 Xinjiang University Internet of Things data acquisition system

Similar Documents

Publication Publication Date Title
EP3508002B1 (en) Relocation of mobile edge computing services
EP3338431B1 (en) Distributed gateway
WO2018049567A1 (zh) Application migration method, apparatus and system
US11617217B2 (en) Transport layer handling for split radio network architecture
CN112567715B (zh) Application migration mechanism for edge computing
WO2019237999A1 (zh) Transmission link management, establishment and migration method, apparatus, base station and storage medium
JP6432955B2 (ja) Method, apparatus and system for migrating virtual network function instances
CN110868739B (zh) Method performed by user equipment, user equipment, and handover command generation method
CN111132253B (zh) Joint mobility management method for communication handover and service migration
JP2017518683A (ja) Method and apparatus for virtual base station migration in a BBU pool
US20140254554A1 (en) Method and device for forwarding uplink data
WO2020226546A1 (en) Methods and apparatus in a network node or base station
US10306519B2 (en) Method for providing an application service in a cellular network
CN103379516B (zh) Cache apparatus, cache control apparatus, and method for detecting handover
CN110753372A (zh) Information processing method and apparatus in a baseband processing split architecture, and storage medium
CN105022658B (zh) Virtual machine migration method, system and related apparatus
JP7250114B2 (ja) Service node update method, terminal device, and network-side device
CN111432438B (zh) Real-time migration method for base station processing tasks
CN108430084B (zh) Base station handover method and system
CN116325930A (zh) Device, method, apparatus and computer-readable medium for topology redundancy
CN112073980B (zh) Service migration method and system for mobile edge computing
CN114557048A (zh) Device, method, apparatus and computer-readable medium for inter-CU topology adaptation
US20230292195A1 (en) Conditional Handover Behavior Upon Dual Active Protocol Stacks Fallback
WO2014071637A1 (zh) Method and device for performing network configuration on a virtual machine
WO2024037439A1 (zh) Computing power task migration method, apparatus and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16915953

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16915953

Country of ref document: EP

Kind code of ref document: A1