WO2016161677A1 - Service offloading method and system - Google Patents

Service offloading method and system

Info

Publication number
WO2016161677A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
mobile node
layer device
cloud
policy
Prior art date
Application number
PCT/CN2015/077754
Other languages
English (en)
French (fr)
Inventor
郑侃
孟涵琳
龙航
Original Assignee
北京邮电大学
Priority date
Filing date
Publication date
Application filed by 北京邮电大学
Publication of WO2016161677A1 publication Critical patent/WO2016161677A1/zh

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04W: Wireless communication networks
    • H04W 8/00: Network data management
    • H04W 8/18: Processing of user or subscriber data, e.g. subscribed services, user preferences or user profiles; transfer of user or subscriber data
    • H04W 4/00: Services specially adapted for wireless communication networks; facilities therefor
    • H04W 4/30: Services specially adapted for particular environments, situations or purposes
    • H04W 4/40: Services specially adapted for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/44: Services for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

Definitions

  • the present application relates to the field of mobile cloud computing and software-defined network technologies, and in particular, to a service offloading method and system.
  • the performance of mobile nodes has continued to improve.
  • mobile terminal devices can communicate with each other through Device-to-Device (D2D) technology; vehicles can communicate with each other through Vehicle-to-Vehicle (V2V) technology, and vehicles can also communicate with roadside wireless access units through Vehicle-to-Infrastructure (V2I) technology.
  • both mobile vehicles and terminal equipment carry a large number of sensor devices.
  • In terms of both computing and storage, both are equipped with high-capacity storage and increasingly fast processors; vehicles in particular have been called computers on wheels.
  • All of these mobile node resources can be grouped together to form a micro cloud for sharing, thereby improving resource utilization while ensuring user experience.
  • deploying distributed small clouds has become a widely recognized solution in the industry.
  • A small cloud can be a small-scale computer deployed near a wireless access point, bringing the service closer to the user; the cloud network composed of these small clouds is called a local cloud. Owing to its powerful processing and storage capabilities, the traditional centralized remote cloud remains an indispensable cloud service provider.
  • service offloading refers to offloading all or part of the applications originally executed on the mobile node to the cloud for execution.
  • One of the benefits of service offloading is that it can reduce the energy consumption of mobile nodes.
  • the embodiment of the present application provides a service offloading method, which provides a service offloading solution applicable to a three-layer cloud network architecture.
  • the embodiment of the present application further provides a service offloading system, which provides a service offloading solution applicable to a three-layer cloud network architecture.
  • a service offloading method includes: receiving, by an application layer device, a service offload request sent by a mobile node; in response to the service offload request, determining a service offloading policy according to auxiliary decision information; and controlling, according to the determined service offloading policy, a control layer device to offload the service from the mobile node.
  • a service offloading system includes an application layer device, where the application layer device is configured to: receive a service offload request sent by a mobile node; in response to the service offload request, determine a service offloading policy according to auxiliary decision information; and control, according to the determined service offloading policy, a control layer device to offload the service from the mobile node.
  • the solution proposes that the application layer device determines the service offloading policy in response to the service offload request and controls the control layer device to offload the service from the mobile node according to that policy, thereby providing a service offloading scheme applicable to the three-layer cloud network architecture.
  • FIG. 1 is a schematic flowchart of a service offloading method applicable to a three-layer cloud network architecture according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of an application scenario of the service offloading method according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of a network controller interacting with a data layer and an application layer respectively through a southbound interface and a northbound interface in a software-defined network;
  • FIG. 4 is a schematic structural diagram of a service offloading system according to an embodiment of the present disclosure.
  • Embodiment 1 of the present application first provides a service offloading method.
  • the specific implementation flow chart of the method is shown in FIG. 1 and mainly includes the following steps:
  • Step 11 The application layer device receives a service offload request sent by the mobile node.
  • Step 11 is described in detail below.
  • SDN is an emerging network technology. Its main idea is to separate the control plane and the data plane of the underlying devices, so that the logical architecture of the network comprises an application layer, a control layer and a data layer. Under this architecture, the application layer is responsible for decision making, and the control layer is responsible for controlling the data layer resources (generally the communication, computing and storage resources of mobile nodes, wireless access points and servers) to achieve flexible control of the network. With this technology, adjusting the network configuration or deploying new network devices or services requires only code-level modifications, which can significantly reduce network operating costs and accelerate the introduction of new devices or services. In addition, SDN makes it easy to virtualize the network, which allows network computing and storage resources to be integrated conveniently and greatly improves the efficiency of resource utilization.
  • the application layer device may be located at an application layer of the SDN.
  • the application layer device may specifically be an application. Since the main function of this application is to make decisions on service offload requests, it can be referred to as an "offload application".
  • the service offload request received by the offload application may be sent by a data node located at the data layer.
  • the data node may directly send the service offload request to the application layer, or may send the service offload request to the control layer, and the control layer forwards the service offload request to the application layer.
  • the application layer device and the control layer device (both may be software implemented applications) may be disposed in the base station and exist as functional modules of the virtual base station.
  • Step 12 The application layer device determines the service offload policy according to the auxiliary decision information in response to the received service offload request.
  • the auxiliary decision information may be directly obtained by the application layer device itself.
  • the application layer device may also trigger the control layer device to obtain the auxiliary decision information.
  • the auxiliary decision information may be any information that serves as a basis for determining the service offloading policy. For example, it may be computing resource usage information in the specific network where the mobile node that sent the service offload request is located. Taking the mobile node as an in-vehicle device as an example, the specific network where the mobile node is located may refer to the vehicular network that the mobile node accesses by means of V2V or V2I, and the like.
  • Suppose the auxiliary decision information is the computing resource usage information of other in-vehicle devices in the vehicular network that communicate with the in-vehicle device in V2V mode.
  • If the application layer device determines from this information that another in-vehicle device has idle computing resources, the service offloading policy may be determined as "offload the service corresponding to the service offload request to the in-vehicle device where the idle computing resources are located".
  • the application layer device may, in response to the received service offload request, select one service offloading policy from a plurality of optional service offloading policies as the optimal one.
  • Specifically, the method first determines the identifier of the optimal service offloading policy according to a discrete-time Markov decision process and a value-iteration algorithm, and then determines, from the optional service offloading policies, the policy with that identifier as the optimal service offloading policy.
  • Before describing how to determine the service offloading policy, a specific application scenario of the method provided in Embodiment 1 of the present application is introduced.
  • FIG. 2 is a schematic diagram of a typical three-layer cloud network architecture, which may include a micro cloud, a local cloud, and a traditional centralized remote cloud.
  • the micro cloud is generally composed of mobile nodes such as mobile phones, in-vehicle communication devices, and tablet computers (such as A to F in FIG. 2, all of which are mobile nodes), and each mobile node has communication, calculation, and storage functions.
  • a micro cloud is a collection of a number of interconnected moving vehicles.
  • Resources in the micro cloud include the communication, storage and computing resources of the mobile nodes.
  • the end-to-end delay of data transmission in the micro-cloud is low.
  • the resources in the micro-cloud are dynamically changed.
  • the local cloud is generally composed of servers; here, "local" is a relative term.
  • The server cluster deployed near the wireless access point accessed by a mobile node constitutes that mobile node's local cloud.
  • the wireless access point mentioned here may be a base station in a Long Term Evolution (LTE) system, or a base station connected to the mobile node by other communication methods such as Wireless Fidelity (WiFi).
  • a wireless access point can interact with a local cloud deployed in its vicinity through vehicle-to-infrastructure (V2I) communication technology or the like. The average end-to-end delay in the local cloud lies between the average end-to-end delay in the micro cloud and that in the remote cloud.
  • the traditional centralized remote cloud is generally composed of servers.
  • the server mentioned here can be a centralized cloud server.
  • the remote cloud has abundant resources, but its end-to-end delay is longer due to its longer backhaul link.
  • the service offloading policies in the embodiment of the present application may include, but are not limited to, the following three types: offloading the mobile node's service to the micro cloud, offloading it to the local cloud, and offloading it to the remote cloud.
  • Here, the micro cloud is composed of the mobile node and other mobile nodes; the local cloud is composed of a server cluster deployed near the wireless access point accessed by the mobile node; and the remote cloud is composed of a centralized cloud server.
  • the service to be offloaded may be specified by the mobile node, for example by notifying the application layer device of the service's identifier in the service offload request; alternatively, it may be selected by the application layer device from the services running on the mobile node, according to queried service information of the node.
  • The specific selection may be random, or the services may be selected in descending order of the processing resources they consume on the mobile node, and so on.
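Where the application layer device itself must pick the service to offload, the two selection rules just described (random, or by descending processing-resource demand) can be sketched as follows. All names are illustrative and the per-service resource map is an assumed input, not something the text defines:

```python
import random

def select_service_to_offload(services, strategy="by_demand", rng=random):
    """Pick which of the mobile node's running services to offload when
    the offload request does not name one.

    `services` maps a service name to the processing resources it
    consumes on the mobile node (an assumed input shape).
    """
    if strategy == "random":
        # Random selection among the node's services.
        return rng.choice(sorted(services))
    # Per the text: prefer the service consuming the most processing
    # resources on the mobile node.
    return max(services, key=services.get)
```
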
  • the determination of the service offloading policy is based on the auxiliary decision information.
  • The auxiliary decision information mentioned here may be obtained by the control layer device at the notification of the application layer device.
  • The auxiliary decision information may specifically include the following information:
  • n_vc^i denotes the number of service offload requests occupying i computing resources in the micro cloud (hereinafter "the micro cloud") where the mobile node described in step 11 (hereinafter "the mobile node") is located.
  • The total number of computing resources occupied by service offload requests in the micro cloud is Σ_{i=1}^{N} i·n_vc^i; this total is not greater than the sum M of all computing resources in the micro cloud.
  • N is the maximum number of computing resources that a single service offload request can occupy; M satisfies the constraint M ≤ K, where K is the maximum number of mobile nodes that the micro cloud can support. It should be noted that, to simplify the analysis, the embodiment of the present application assumes that each mobile node can be virtualized into one computing resource.
  • n_vc = (n_vc^1, ..., n_vc^N) represents the allocation of computing resources in the micro cloud.
  • Similarly, n_lc^i denotes the number of service offload requests occupying i computing resources in the cloud where the wireless access point accessed by the mobile node is located (hereinafter "the local cloud"). Obviously, the total number of computing resources occupied by service offload requests in the local cloud is Σ_{i=1}^{N_2} i·n_lc^i; this total is not greater than the sum M_lc of all computing resources in the local cloud.
  • N_2 is the maximum number of computing resources that a single service offload request can occupy in the local cloud.
  • n_lc = (n_lc^1, ..., n_lc^{N_2}) represents the allocation of computing resources in the local cloud.
  • h represents the channel state of the mobile node.
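The auxiliary decision information above (the micro-cloud allocation n_vc, the local-cloud allocation n_lc, the micro-cloud resource total M, and the channel state h) can be gathered into one state structure. A minimal Python sketch, where field and method names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class SystemState:
    """State built from the auxiliary decision information:
    n_vc[i-1] = number of offload requests occupying i computing
    resources in the micro cloud (i = 1..N); n_lc likewise for the
    local cloud; M = total computing resources in the micro cloud;
    h = channel-state index."""
    n_vc: Tuple[int, ...]
    n_lc: Tuple[int, ...]
    M: int
    h: int

    def micro_resources_used(self) -> int:
        # Sum over i of i * n_vc^i, which must not exceed M.
        return sum(i * n for i, n in enumerate(self.n_vc, start=1))

    def micro_resources_free(self) -> int:
        return self.M - self.micro_resources_used()
```

For example, n_vc = (2, 1) means two requests hold one resource each and one request holds two, so four of the M resources are in use.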
  • the following describes the specific process of determining the service offload policy based on the above auxiliary decision information.
  • the process can include the following sub-steps:
  • Sub-step 1 Establish a system state space S.
  • s is the state of the system.
  • the system referred to herein refers to a communication system in which a mobile node is located, and the communication system includes a micro cloud, a local cloud, and a remote cloud.
  • n vc , n lc , M, h have been explained above and will not be described here.
  • the "system event" e includes the arrival and departure of service offload requests in the system, and the arrival and departure of mobile nodes in the micro cloud;
  • that is, e ∈ {A_p, A_v, D_p, D_v}, where:
  • A_p represents the arrival of a service offload request in the system;
  • A_v represents the arrival of a mobile node in the micro cloud;
  • D_p represents the departure (completion) of a service offload request in the system;
  • D_v represents the departure of a mobile node from the micro cloud.
  • Sub-step 2 Establish a behavior set corresponding to the system state space S.
  • This set of behaviors contains all the behaviors that the system can perform.
  • the behavior set is generally defined per state. Specifically, the behavior set can be expressed by the following formula [2]:
  • s represents the state of the system;
  • A_s represents the set of behaviors that can be performed in state s, and an element of A_s can be represented as a(s), i.e. a(s) ∈ A_s.
  • the purpose of step 12 is to select one behavior from this behavior set as the service offloading policy.
  • the following section will further explain how to select behaviors from this set of behaviors.
  • when the system event e in state s is the departure of a service offload request, or the joining or leaving of a mobile node in the micro cloud, the system only updates the resource occupancy in the resource pool without performing any other operation; the behavior a(s) adopted in state s can accordingly be defined as a null behavior.
  • When the mapping relationship between a(s) and its value is a(s) ∈ {1, 2, ..., i, ..., N}, the value i represents that the service of the mobile node that sent the service offload request described in step 11 is offloaded to the micro cloud and i computing resources are allocated to the request, that is, i computing resources are allocated to the service.
  • Likewise, it can be defined that when a(s) ∈ {N+1, N+2, ..., 2N}, the service is offloaded to the local cloud and a(s) − N computing resources are allocated to the request; and when a(s) = 2N+1, the service is offloaded to the remote cloud.
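The value-to-behavior mapping just described can be sketched as a small decoder. The ranges 1..N, N+1..2N and 2N+1 follow the text; treating a(s) = 0 as the pure bookkeeping behavior (resource-pool update only) is an assumption, since the text leaves that value unspecified:

```python
def decode_action(a, N):
    """Decode the integer behavior a(s) onto an offloading decision.

    1..N     -> offload to the micro cloud, allocating a resources
    N+1..2N  -> offload to the local cloud, allocating a - N resources
    2N+1     -> offload to the remote cloud
    0        -> bookkeeping only (assumed encoding of the null behavior)
    """
    if a == 0:
        return ("update_only", 0)
    if 1 <= a <= N:
        return ("micro_cloud", a)
    if N < a <= 2 * N:
        return ("local_cloud", a - N)
    if a == 2 * N + 1:
        return ("remote_cloud", 0)
    raise ValueError(f"behavior {a} outside 0..{2 * N + 1}")
```
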
  • Sub-step three to sub-step seven are further described below.
  • the purpose of sub-steps 3 to 7 is to determine the value mapped to by the service offloading policy described in step 12, that is, the value mapped to by the behavior a(s) that the system adopts in response to the service offload request described in step 11. From that value and the definitions above, the behavior a(s) it maps to can be determined, and the determined behavior can be carried out in step 13 below, thereby offloading the service.
  • Sub-step 3 Establish a system reward model.
  • the actual return r(s, a) of executing behavior a(s), i.e. of executing the service offloading policy a(s), refers to the income the system can obtain; it is given by formula [3]:
  • k(s, a) is the immediate return of the system (that is, the return obtained immediately after the system performs behavior a(s));
  • o(s, a) is the expected loss over the time between two consecutive system events.
  • k(s, a) can be calculated according to the following formula [4]:
  • the purpose of the reward model is to ensure the user experience of the mobile node while maximizing system benefits.
  • E_0 is the upper limit of the return the system can obtain after receiving a request; β represents the cost per unit time; −P represents the penalty imposed on the system; and d_vc, d_lc and d_rc are, respectively, the total delays required to complete an offload request in the micro cloud, the local cloud and the remote cloud, where each delay includes the transmission delay of sending the service data corresponding to the offload request to the cloud and the processing delay of performing the corresponding computation on that data in the cloud.
  • The service data here specifically refers to the amount of data corresponding to a given service (such as the code of a program to be offloaded). The following further explains d_vc, d_lc and d_rc:
  • d vc can be calculated according to the following formula [5]:
  • i represents the number of computing resources allocated to the service offload request in the micro cloud, that is, the number of computing resources occupied by the service offload request in the micro cloud;
  • ⁇ p represents the service rate of each computing resource.
  • D represents the amount of data corresponding to the service that the mobile node requests to offload
  • SNR represents the signal-to-noise ratio of the wireless channel between the mobile node and its wireless access point
  • h represents an index characterizing the channel state.
  • the transmission rate of the wireless channel can be calculated using the Shannon formula.
  • the wireless transmission delay between the mobile node and the remote cloud is the same as the transmission delay between the mobile node and its wireless access point, and is included in the total delay.
  • Given the remote cloud's abundant computing resources, the processing delay in d_rc can be neglected, so d_rc can be calculated by the following formula [7]:
  • d_backhaul is the transmission delay of the backhaul link to the remote cloud, that is, the transmission delay from the mobile node's wireless access point to the remote cloud.
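Under the definitions above, the three delays can be sketched as follows. The exact forms of formulas [5] to [7] are not reproduced in the text, so the compositions below (which transmission and processing terms each cloud's delay includes or neglects) are assumptions consistent with the surrounding description:

```python
import math

def shannon_rate(bandwidth_hz, snr):
    """Wireless transmission rate from the Shannon formula:
    R = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr)

def d_vc(D, i, mu_p):
    """Micro-cloud delay (sketch of formula [5]): data amount D served
    by i computing resources, each with service rate mu_p. Neglecting
    V2V transmission delay inside the micro cloud is an assumption."""
    return D / (i * mu_p)

def d_lc(D, j, mu_p, bandwidth_hz, snr):
    """Local-cloud delay (sketch of formula [6]): transmission over
    the wireless channel plus processing on j local-cloud resources."""
    return D / shannon_rate(bandwidth_hz, snr) + D / (j * mu_p)

def d_rc(D, bandwidth_hz, snr, d_backhaul):
    """Remote-cloud delay (sketch of formula [7]): the same wireless
    transmission delay as to the access point, plus the backhaul
    delay; remote-cloud processing delay is neglected given its
    abundant resources, which is an assumption."""
    return D / shannon_rate(bandwidth_hz, snr) + d_backhaul
```
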
  • When a service is offloaded to the micro cloud, the system obtains the income (E_0 − βd_vc), where βd_vc is the system overhead; similarly, when a service is offloaded to the local cloud or to the remote cloud, the system obtains (E_0 − βd_lc) or (E_0 − βd_rc) respectively. When an offloaded service in the system completes, or a mobile node joins the micro cloud, the system has no income. When the micro cloud has sufficient available computing resources and a mobile node leaves, the system likewise has no income. But when all computing resources in the micro cloud have been allocated and a mobile node leaves, the computing resource allocated to one of the services is inevitably withdrawn and that service is interrupted, so the system incurs the penalty −P.
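The income-and-penalty rules just listed can be sketched as an immediate-return function k(s, a). The event names and argument shape here are illustrative, not from the text:

```python
def immediate_reward(event, E0, beta, delay=None, P=0.0, micro_free=0):
    """Immediate return k(s, a) following the reward model above.

    event: "offload" (a request is accepted and sent to some cloud,
    incurring total delay `delay`), "completion", "node_join", or
    "node_leave". micro_free is the number of unallocated micro-cloud
    resources at the moment of the event.
    """
    if event == "offload":
        # Income E0 minus the delay cost beta * d_cloud (system overhead).
        return E0 - beta * delay
    if event == "node_leave" and micro_free == 0:
        # All micro-cloud resources allocated: the departure interrupts
        # a running service, so the system is penalised by -P.
        return -P
    # Completion, node join, or node leave with spare resources: no income.
    return 0.0
```
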
  • the embodiment of the present application can select a service offloading policy using multiple reward models; for example, the average-return model of r(s, a) can be adopted. Under this model, the expected loss o(s, a) in formula [3] can be calculated by the following formula [8]:
  • c(s, a) is the system loss rate, calculated by formula [9]; τ(s, a) is the expected time between two consecutive decision epochs. The calculation of τ(s, a) will be introduced later and is not repeated here.
  • Sub-step 4 Establish the transition probability matrix of the system.
  • The average occurrence rate of system events is obtained, denoted λ(s, a).
  • The state here is the virtual state the system is in after it performs the selected service offloading policy and before the next system event occurs. Accordingly, λ(s, a) can be expressed by formula [10]:
  • λ_p denotes the total arrival rate of service offload requests, and μ_p the departure rate of offload tasks; λ_v denotes the arrival rate of mobile nodes in the micro cloud, and μ_v the departure rate of mobile nodes from the micro cloud.
  • The transition probability from state s to state s′ after the system performs behavior a can be represented by p(s′ | s, a).
  • τ(s, a) can then be calculated, which in turn allows the expected loss o(s, a) to be computed.
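Assuming exponential inter-event times, the total event rate λ(s, a) and the expected sojourn time τ(s, a) = 1/λ(s, a) can be sketched as follows. The decomposition of formula [10] below is a reconstruction; in particular, the assumption that a request holding i micro-cloud resources completes at rate i·μ_p is not stated in the text:

```python
def event_rate(n_vc, num_nodes, lam_p, mu_p, lam_v, mu_v, K):
    """Total event rate lambda(s, a): the sum of the rates of every
    event that can occur next (reconstruction of formula [10]).

    n_vc[i-1] = offload requests holding i micro-cloud resources; a
    request on i resources is assumed to finish at rate i * mu_p.
    Node arrivals occur at rate lam_v while the micro cloud holds
    fewer than K nodes; each of num_nodes nodes departs at rate mu_v.
    """
    completions = sum(i * n * mu_p for i, n in enumerate(n_vc, start=1))
    arrivals = lam_p + (lam_v if num_nodes < K else 0.0)
    departures = num_nodes * mu_v
    return arrivals + completions + departures

def expected_sojourn(rate):
    """tau(s, a) = 1 / lambda(s, a): expected time to the next event."""
    return 1.0 / rate
```
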
  • the embodiment of the present application can adopt multiple reward models to select the behavior a(s) as the system's service offloading policy in state s, and different reward criteria may lead to differences in how the return is calculated.
  • Here, selecting the behavior a(s) under the average-return model of r(s, a) is introduced.
  • the actual average return g^π that the system can obtain after performing policy π in state s can be expressed as follows:
  • Equation [14] is the average return model of the semi-Markov decision process
  • v(s) is the value function of state s
  • g is the return process gain
  • the average return model of the semi-Markov decision process can be converted into a common discrete time decision model by standardization, which makes the analysis simpler.
  • Step 2 For each state, calculate its value function according to formula [19]
  • Step 4 For each state s, calculate its static optimal strategy according to formula [22] and stop iteration.
  • Based on the mapping between a(s) and numerical values defined above, the a(s) mapped to by the computed value can be determined; that a(s) is then the behavior selected by the system in state s (i.e., the service offloading policy) and is performed.
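The iterative procedure outlined in the steps above can be sketched generically. Since formulas [19] and [22] are not reproduced in the text, a plain discounted discrete-time model stands in for the normalized semi-Markov model; the data structures and the discount factor are illustrative assumptions:

```python
def value_iteration(states, actions, P, r, gamma=0.9, tol=1e-6, max_iter=10_000):
    """Value-iteration sketch for the discretized decision model.

    states: iterable of hashable states; actions(s) -> iterable of
    behaviors; P[(s, a)] maps next states to probabilities;
    r[(s, a)] is the one-step return.
    """
    v = {s: 0.0 for s in states}
    for _ in range(max_iter):
        # Bellman update of the value function for every state.
        v_new = {
            s: max(r[(s, a)] + gamma * sum(p * v[t] for t, p in P[(s, a)].items())
                   for a in actions(s))
            for s in states
        }
        if max(abs(v_new[s] - v[s]) for s in states) < tol:
            v = v_new
            break
        v = v_new
    # Stationary optimal policy: greedy with respect to the fixed point.
    policy = {
        s: max(actions(s),
               key=lambda a: r[(s, a)] + gamma * sum(p * v[t]
                                                     for t, p in P[(s, a)].items()))
        for s in states
    }
    return v, policy
```

In the offloading setting, each behavior would be one of the integer actions a(s) described above, and the transition probabilities would come from the rates of formula [10].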
  • step 13 can be further performed to implement the offloading of services on the mobile node.
  • Step 13 The application layer device controls the control layer device to offload the service from the mobile node according to the determined service offloading policy.
  • the operations that the control layer device can perform include: 1. notifying B and C to establish a corresponding virtual machine (VM) for A through virtualization technology and to process the services offloaded from A using their computing resources (the use of virtual machines shields the differences between underlying systems, so that mobile node resources produced by different manufacturers can be utilized effectively); 2. notifying A to send information about the service it requests to offload to B and C; and so on.
  • the operations that the control layer device can perform include: 1. notifying the wireless access point accessed by D and E to establish corresponding virtual machines for D and E through virtualization technology and to process the services offloaded from D and E using its computing resources; 2. notifying D and E to send information about the services they request to offload to the wireless access point; and so on.
  • Suppose the mobile node that sends the service offload request to the application layer is F, and the service offloading policy determined by the application layer device is to offload F's service to the remote cloud.
  • Then the operations that the control layer device can perform include: 1. notifying the remote cloud to establish a corresponding virtual machine for F through virtualization technology and to process the service offloaded from F using its computing resources; 2. notifying F to send information about the service it requests to offload to the wireless access point; and so on.
  • the resources of each layer cloud in the three-layer cloud network architecture may form a virtualized resource pool by using a virtualization technology, and the virtualized resource pool may be centrally managed by a network controller in a software-defined network control layer.
  • the control layer device in the foregoing steps 11 to 13 may be equivalent to the network controller.
  • the application layer device described above can be equivalent to an application in a software-defined network application layer.
  • the network controller can interact with the data layer (including various types of wireless access points and mobile nodes in the data layer) through the southbound interface, and interact with the application layer through the northbound interface.
  • the network controller can adjust related information in the virtualized resource pool (including the computing and storage resource situation of each layer's cloud, etc.).
  • Embodiment 2 of the present application provides a service offloading system.
  • the schematic diagram of the system is shown in FIG. 4, and includes an application layer device 41.
  • the main functions of the application layer device 41 are: receiving a service offload request sent by the mobile node; determining a service offloading policy according to the auxiliary decision information in response to the service offload request; and controlling the control layer device to offload the service from the mobile node according to the determined service offloading policy.
  • the system may also include a control layer device 42.
  • the application layer device 41 is further configured to trigger the control layer device 42 to obtain the auxiliary decision information; and determine the service offload policy according to the auxiliary decision information acquired by the control layer device 42.
  • the control layer device is used to obtain the auxiliary decision information and provide it to the application layer device under the control of the application layer device 41.
  • the system provided in Embodiment 2 of the present application is a communication system where a mobile node that sends a service offload request is located.
  • the application layer device 41 may be specifically configured to: according to the obtained auxiliary decision information, determine the identifier of the optimal service offloading policy using a discrete-time Markov decision process and a value-iteration algorithm; and, from the optional service offloading policies, determine the policy with that identifier as the optimal service offloading policy.
  • the application layer device 41 is specifically configured to: according to the determined service offloading policy, control the control layer device to offload the service on the mobile node to a specific cloud for processing.
  • the specific cloud includes at least one of the following: a micro cloud composed of the mobile node and other mobile nodes;
  • a local cloud composed of a server cluster deployed near the wireless access point accessed by the mobile node; and
  • a remote cloud composed of a centralized cloud server.
  • the auxiliary decision information may include the following information: the allocation of computing resources in the micro cloud and in the local cloud, and the channel state of the mobile node.
  • the application layer device 41 may be an application in the application layer of a software-defined network; the control layer device 42 may be a controller in the control layer of the software-defined network.
  • the mobile node may be an in-vehicle device.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising an instruction device.
  • The instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device then provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer readable media, such as read-only memory (ROM) or flash memory (flash RAM).
  • Memory is an example of a computer readable medium.
  • Computer readable media include permanent and non-permanent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • as defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

This application discloses a service offloading method, for providing a service offloading scheme suitable for a three-layer cloud network architecture. The method includes: an application layer device receives a service offloading request sent by a mobile node; in response to the service offloading request, determines a service offloading policy according to auxiliary decision information; and, according to the determined service offloading policy, controls a control layer device to offload a service from the mobile node. This application also discloses a service offloading system.

Description

Service offloading method and system

Technical field

This application relates to the technical fields of mobile cloud computing and software-defined networking, and in particular to a service offloading method and system.
Background art

In recent years the performance of mobile nodes (including mobile vehicles, mobile terminal devices, and the like) has been improving continuously. In terms of communication, mobile terminal devices can communicate via Device-to-Device (D2D) technology; vehicles can communicate with one another via Vehicle-to-Vehicle (V2V) technology, and can also communicate with roadside wireless access units via Vehicle-to-Infrastructure (V2I) technology. In terms of sensing, both mobile vehicles and terminal devices carry large numbers of sensors. In terms of computing and storage, both are equipped with large-capacity storage and ever faster processors; vehicles in particular have been called computers on wheels. The resources of all these mobile nodes can be pooled to form a micro cloud and shared, improving resource utilization while preserving the user experience. Meanwhile, to reduce the end-to-end latency of cloud service interactions of mobile nodes, deploying distributed cloudlets has become a widely accepted solution in the industry. A cloudlet may be a small computer deployed near a wireless access point, bringing services close to the user; the cloud network formed by such cloudlets is called the local cloud. Owing to its powerful processing and storage capability, the traditional centralized remote cloud remains an indispensable cloud service provider.

Through the deployment of a three-layer cloud network architecture, the performance of mobile nodes can be further improved, and their service experience improves greatly as well. However, for the mobile node's need for "service offloading", the prior art has not proposed how to implement service offloading under a three-layer cloud network architecture.

It should be noted that service offloading means offloading all or part of an application originally executed by a mobile node to the cloud for execution. One benefit of service offloading is that it saves the energy consumption of the mobile node.
Summary of the invention

Embodiments of this application provide a service offloading method, to provide a service offloading scheme suitable for a three-layer cloud network architecture.

Embodiments of this application further provide a service offloading system, to provide a service offloading scheme suitable for a three-layer cloud network architecture.

Embodiments of this application adopt the following technical solutions:

A service offloading method, including: an application layer device receives a service offloading request sent by a mobile node; in response to the service offloading request, determines a service offloading policy according to auxiliary decision information; and, according to the determined service offloading policy, controls a control layer device to offload a service from the mobile node.

A service offloading system, including an application layer device, where the application layer device is configured to: receive a service offloading request sent by a mobile node; in response to the service offloading request, determine a service offloading policy according to auxiliary decision information; and, according to the determined service offloading policy, control a control layer device to offload a service from the mobile node.

At least one of the above technical solutions adopted in the embodiments of this application can achieve the following beneficial effect:

The scheme proposes that the application layer device, in response to a service offloading request, determines a service offloading policy and, according to that policy, controls the control layer device to offload the service from the mobile node, thereby providing a service offloading scheme suitable for a three-layer cloud network architecture.
Brief description of the drawings

The drawings described here are provided for a further understanding of this application and constitute a part of it; the illustrative embodiments of this application and their descriptions are used to explain this application and do not constitute an undue limitation of it. In the drawings:

Figure 1 is a schematic flowchart of a service offloading method suitable for a three-layer cloud network architecture provided by an embodiment of this application;

Figure 2 is a schematic diagram of an application scenario of the service offloading method provided by an embodiment of this application;

Figure 3 is a schematic diagram of a network controller in a software-defined network interacting with the data layer and the application layer through the southbound interface and the northbound interface, respectively;

Figure 4 is a schematic structural diagram of a service offloading system provided by an embodiment of this application.

Detailed description

To make the objectives, technical solutions and advantages of this application clearer, the technical solutions of this application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.

The technical solutions provided by the embodiments of this application are described in detail below with reference to the drawings.
Embodiment 1

To provide a service offloading scheme suitable for a three-layer cloud network architecture, Embodiment 1 of this application first provides a service offloading method. A flowchart of a specific implementation of the method is shown in Figure 1 and mainly includes the following steps:

Step 11: the application layer device receives a service offloading request sent by a mobile node.

Step 11 is described in detail below.

First, the logical architecture of the software-defined network (SDN) in which the application layer device resides is introduced.

SDN is an emerging network technology whose main idea is to separate the control plane and the data plane from the underlying devices, so that the logical architecture of the network consists of three layers: an application layer, a control layer and a data layer. Based on such an architecture, a pattern in which the application layer is responsible for decision-making and the control layer is responsible for controlling the resources of the data layer (generally including the communication, computing and storage resources of mobile nodes, wireless access points and servers) enables flexible control of the network. With this technology, whether adjusting the network configuration or deploying new network devices or services, only code-level modifications are required. This can significantly reduce network operating costs and speed up the introduction of new devices or services. In addition, SDN makes network virtualization easy, which facilitates the integration of network computing and storage resources and greatly improves resource utilization.

In this embodiment, the application layer device may be located at the application layer of the SDN. The application layer device may specifically be an application. Since the main function of this application is to make decisions on service offloading requests, it may be called the "offloading application". The service offloading request received by the offloading application may be sent by a data node located at the data layer.

In this embodiment, the data node may send the service offloading request directly to the application layer, or may send it to the control layer, which then forwards it to the application layer. In one implementation, both the application layer device and the control layer device (each of which may be an application implemented in software) may be deployed in a base station and exist as functional modules of a virtual base station.
Step 12: in response to the received service offloading request, the application layer device determines a service offloading policy according to auxiliary decision information.

In this embodiment, the auxiliary decision information may be obtained directly by the application layer device itself; when the auxiliary decision information is managed by the control layer device, the application layer device may instead trigger the control layer device to obtain it. The auxiliary decision information may be any information serving as a basis for determining the service offloading policy, for example information on the usage of computing resources in the particular network where the mobile node that sent the service offloading request resides. Taking a vehicle-mounted device as the mobile node, the particular network where the mobile node resides may be the Internet of Vehicles that the node accesses via V2V or V2I, and so on.

Still taking a vehicle-mounted device as the mobile node, if the auxiliary decision information is information on the computing-resource usage of other vehicle-mounted devices that communicate with this device via V2V within the Internet of Vehicles it accesses, then, when the application layer device determines from this information that idle computing resources exist on those other devices, it may determine the service offloading policy to be "offload the service corresponding to the service offloading request to the vehicle-mounted device(s) holding the idle computing resources".

In this embodiment, when the communication system in which the mobile node resides has multiple candidate service offloading policies in its current state, the application layer device, in response to the received service offloading request, may select one of the candidate policies as the optimal service offloading policy.

A way of selecting the optimal service offloading policy is described in detail below. This way first determines the identifier of the optimal service offloading policy in accordance with a discrete-time Markov decision process and a value iteration algorithm, and then, according to that identifier, determines the candidate policy bearing the identifier as the optimal service offloading policy.
To explain how the service offloading policy is determined, a specific application scenario of the method provided by Embodiment 1 is first introduced.

Specifically, the application scenario may be as shown in Figure 2, a schematic diagram of a typical three-layer cloud network architecture comprising a micro cloud, a local cloud and a traditional centralized remote cloud.

The micro cloud is generally composed of mobile nodes such as mobile phones, vehicle-mounted communication devices and tablet computers (A to F in Figure 2 are all mobile nodes), each of which has communication, computing and storage capabilities. Taking mobile vehicles as an example, a set of interconnected mobile vehicles is called a micro cloud. The resources in the micro cloud include the communication, storage and computing resources of its mobile nodes. The end-to-end latency of data transmission within the micro cloud is low; moreover, because of the uncertain behavior of mobile nodes (such as their arrival and departure), the resources in the micro cloud change dynamically.

The local cloud is generally composed of servers, "local" being a relative notion. For example, for the mobile nodes A to F shown in Figure 2, the cloud formed by the servers deployed near the wireless access point these nodes access is their local cloud. The wireless access point here may be a base station in a Long Term Evolution (LTE) system, or a base station that establishes connections with mobile nodes by other communication means such as Wireless Fidelity (WiFi). The wireless access point can exchange information with the local cloud deployed near it through, for example, Vehicle-to-Infrastructure (V2I) communication technology. The average end-to-end latency of the local cloud lies between that of the micro cloud and that of the remote cloud.

The traditional centralized remote cloud is also generally composed of servers, which here may be centralized cloud servers. The remote cloud has abundant resources, but its long backhaul link leads to a long end-to-end latency.

Based on the above three-layer cloud network architecture, the service offloading policies in this embodiment may include, but are not limited to, the following three:

offload the service of the mobile node that sent the service offloading request to the micro cloud and allocate corresponding computing resources;

offload the service of the mobile node to the local cloud and allocate corresponding computing resources;

offload the service of the mobile node to the remote cloud.

Here, the micro cloud is formed jointly by this mobile node and other mobile nodes; the local cloud is formed by the server cluster deployed near the wireless access point the node accesses; and the remote cloud is formed by centralized cloud servers.

In addition, the service to be offloaded may be designated by the mobile node, for example by carrying the identifier of the service in the service offloading request sent to the application layer device; or it may be selected by the application layer device, according to the queried service information of the node, from among the services the node has subscribed to. The selection may be random, or the services may be selected one by one in descending order of the processing resources each consumes on the mobile node, and so on.
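The service-selection rule just described (pick services in descending order of the processing resources they consume on the node) can be sketched as below. This is an illustrative sketch only; the service names and resource costs are invented for the example and do not come from the patent.

```python
# Hypothetical service list: (service name, processing-resource cost on the node).
services = [("navigation", 2.0), ("video-transcode", 8.0), ("sensor-log", 0.5)]

def pick_services_to_offload(services, how_many=1):
    """Return the names of the `how_many` most resource-hungry services,
    i.e. the candidates to offload first under the descending-cost rule."""
    ranked = sorted(services, key=lambda sc: sc[1], reverse=True)
    return [name for name, _cost in ranked[:how_many]]

print(pick_services_to_offload(services))     # ['video-transcode']
print(pick_services_to_offload(services, 2))  # ['video-transcode', 'navigation']
```

Random selection, the other option mentioned above, would simply replace the sort with `random.choice`.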
In this embodiment, the service offloading policy is determined on the basis of the auxiliary decision information.

The auxiliary decision information here may be obtained by the control layer device upon notification by the application layer device. Specifically, the auxiliary decision information may include the following:

1. n_i^vc

n_i^vc denotes the number of service offloading requests that each occupy i computing resources in the micro cloud where the mobile node of step 11 resides (hereafter the micro cloud). Clearly, the total number of computing resources occupied by offloading requests in the micro cloud is

Σ_{i=1}^{N} i·n_i^vc

and this total does not exceed the total M of all computing resources in the micro cloud.

In the above formula, N is the maximum number of computing resources a single service offloading request can occupy; M satisfies the constraint M ≤ K, where K is the maximum number of mobile nodes the micro cloud can support. It should be noted that, to simplify the analysis, this embodiment assumes that each mobile node can be virtualized into one computing resource.

A person skilled in the art will understand that, if each mobile node could instead be virtualized into k computing resources, M would satisfy the constraint M ≤ kK.

According to the definition of n_i^vc, one can set n_vc = (n_1^vc, n_2^vc, …, n_N^vc), where n_vc represents the allocation of computing resources in the micro cloud.
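The occupancy bookkeeping above (total occupied resources Σ i·n_i, capped at M) can be sketched as a couple of helper functions. This is a minimal illustration under the patent's one-resource-per-node assumption; the numbers in the example are invented.

```python
# n_vc[i-1] holds the number of offloading requests that each occupy i
# computing resources (i = 1..N); M is the cloud's total resource count.

def occupied_resources(n_vc):
    """Total resources held by admitted requests: sum over i of i * n_i."""
    return sum(i * count for i, count in enumerate(n_vc, start=1))

def can_admit(n_vc, M, i):
    """True if a new request asking for i resources still fits,
    i.e. occupied + i <= M."""
    return occupied_resources(n_vc) + i <= M

# Example: N = 3; two requests hold 1 resource each, one holds 2 -> 4 occupied.
state = [2, 1, 0]
print(occupied_resources(state))      # 4
print(can_admit(state, M=6, i=2))     # True:  4 + 2 <= 6
print(can_admit(state, M=6, i=3))     # False: 4 + 3 >  6
```

The same bookkeeping applies unchanged to the local cloud, with M replaced by Mlc.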
2. n_i^lc

n_i^lc denotes the number of service offloading requests that each occupy i computing resources in the cloud of the wireless access point the mobile node accesses (hereafter the local cloud). Clearly, the total number of computing resources occupied by offloading requests in the local cloud is

Σ_{i=1}^{N2} i·n_i^lc

and this total does not exceed the total Mlc of all computing resources in the local cloud.

In the above formula, N2 is the number of computing resources occupied by the single service offloading request that occupies the most computing resources in the local cloud.

According to the definition of n_i^lc, one can set n_lc = (n_1^lc, n_2^lc, …, n_{N2}^lc), where n_lc represents the allocation of computing resources in the local cloud.

3. h

h denotes the channel state of the mobile node.
The specific process of determining the service offloading policy from the above auxiliary decision information is described below.

Specifically, the process may include the following sub-steps:

Sub-step 1: establish the system state space S.

S is expressed as in formula [1]:

S={s|s=(nvc,nlc,M,h,e)}   [1]

where s is the state the system is in. The system here is the communication system in which the mobile node resides, comprising the micro cloud, the local cloud and the remote cloud. The meanings of n_vc, n_lc, M and h have been explained above and are not repeated here. e denotes a system event in the system event set, e ∈ E = {A, Dvc, Dlc, Dv}.

The "system events" represented by e include: the arrival and departure of service offloading requests in the system, and the arrival and departure of mobile nodes in the micro cloud. A = {Ap, Av}, where Ap denotes the arrival of a service offloading request in the system and Av denotes the arrival of a mobile node in the micro cloud; Dvc = {D_1^vc, …, D_N^vc}, where D_i^vc denotes the departure of an offloading request occupying i computing resources in the micro cloud, with i ranging over [1, N]; Dlc = {D_1^lc, …, D_N^lc}, where D_i^lc denotes the departure of a request occupying i computing resources in the local cloud, with i ranging over [1, N]; and Dv denotes the departure of a mobile node from the micro cloud.
Sub-step 2: establish the action set corresponding to the system state space S.

The action set contains all the actions the system can perform. Since the actions the system can perform differ from state to state, the action set is generally defined per state. Specifically, it can be expressed as formula [2] (reconstructed here from the action numbering defined below; the original presents [2] only as an image):

A_s = {a(s)}, a(s) ∈ {0, 1, …, 2N+1}   [2]

where s denotes the state of the system and A_s denotes the set of actions that can be performed in state s, whose elements are written a(s), i.e. a(s) ∈ A_s.

In one implementation, the purpose of step 12 is precisely to select one action from this action set as the service offloading policy. How an action is selected from the set is explained further below.

In this embodiment, when the system event e in state s is the departure of an offloading request, or the arrival (or departure) of a mobile node in the micro cloud, the system only updates the resource occupancy of the resource pool and performs no other action, so the value mapped to the action a(s) taken in that state can be defined as 0, i.e. a(s) = 0.

When the system event e in state s is the arrival of an offloading request, one can define: when the mapping between a(s) and values is a(s) ∈ {1, 2, …, i, …, N}, the corresponding service of the requesting mobile node of step 11 is offloaded to the micro cloud and i computing resources are allocated to the request, i.e. to that service; when a(s) ∈ {N+1, N+2, …, N+i, …, 2N}, the service is offloaded to the local cloud and i computing resources are allocated to the request; and when a(s) = 2N+1, the service is offloaded to the remote cloud.
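The action numbering just defined can be made concrete with a small decoding helper. This is a hedged sketch of the mapping only; the function name and the value of N below are illustrative, not from the patent text.

```python
N = 4  # max resources a single request may occupy (assumed value for the demo)

def decode_action(a, n=N):
    """Map an action index a(s) in {0, ..., 2N+1} to (offload target, resources):
    0 -> bookkeeping only; 1..N -> micro cloud with i resources;
    N+1..2N -> local cloud with i resources; 2N+1 -> remote cloud."""
    if a == 0:
        return ("no-op", 0)
    if 1 <= a <= n:
        return ("micro-cloud", a)
    if n + 1 <= a <= 2 * n:
        return ("local-cloud", a - n)
    if a == 2 * n + 1:
        return ("remote-cloud", 0)
    raise ValueError("action outside {0, ..., 2N+1}")

print(decode_action(3))  # ('micro-cloud', 3)
print(decode_action(6))  # ('local-cloud', 2)
print(decode_action(9))  # ('remote-cloud', 0)
```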
Sub-steps 3 to 7 are introduced next. Their purpose is to determine the value mapped by the service offloading policy of step 12, i.e. the value mapped by the action a(s) the system takes in response to the service offloading request of step 11. From that value, and the definitions above, the action a(s) it maps to can be determined; the determined action is then carried out in step 13 below, thereby offloading the service.

Sub-step 3: establish the system reward model.

In this embodiment, the actual reward r(s,a) of the system for performing action a(s) in state s (i.e. executing offloading policy a(s)) can be computed according to the reward model of formula [3], where the reward is the gain the system can obtain:

r(s,a)=k(s,a)-o(s,a)   [3]

where k(s,a) is the immediate reward of the system (the reward obtained immediately after the system performs a(s)), and o(s,a) is the expected cost between the occurrence instants of two consecutive system events.

In this embodiment, k(s,a) can be computed according to formula [4] (the original presents [4] only as an image; the piecewise form below is reconstructed from the explanation that follows):

k(s,a) = E0 − βdvc, when the service is offloaded to the micro cloud with i resources;
k(s,a) = E0 − βdlc, when the service is offloaded to the local cloud;
k(s,a) = E0 − βdrc, when the service is offloaded to the remote cloud;
k(s,a) = 0, when an offloaded service completes, or a node arrives, or a node departs while resources remain;
k(s,a) = −P, when a node departs while all micro-cloud computing resources are allocated.   [4]

The purpose of this reward model is to maximize the system gain while guaranteeing the user experience of the mobile node.
In formula [4], E0 is the upper bound of the reward the system can obtain after accepting a request; β denotes the value of each unit of time; −P denotes a penalty on the system; and dvc, dlc and drc are the total delays required to complete an offloaded task request in the micro cloud, the local cloud and the remote cloud, respectively. This delay comprises the transmission delay of offloading the service data corresponding to the request to the cloud and the processing delay of performing the corresponding computation on that service data in the cloud. Service data here means the amount of data corresponding to a service (e.g. the code of a program expected to be offloaded). dvc, dlc and drc are explained further below:

1. dvc

Since the end-to-end transmission delay in the micro cloud is low, when an offloading request is allocated i computing resources in the micro cloud, the transmission delay can be neglected in the total delay dvc of completing the service, which is then taken to equal the processing delay of the request. dvc can therefore be computed by formula [5]:

dvc=1/(iμp)   [5]

In formula [5], i is the number of computing resources in the micro cloud allocated to the offloading request, i.e. the number it occupies there, and μp is the service rate of each computing resource.

2. dlc

For the local cloud, the transmission delay between the mobile node and its wireless access point is affected by the channel state. Assuming each mobile node can be allocated an orthogonal channel of bandwidth B, dlc is computed by formula [6]:

dlc=1/(iμp)+D/(B log2(1+SNR(h)))   [6]

where D is the amount of data of the service the mobile node requests to offload, SNR is the signal-to-noise ratio of the wireless channel between the node and its wireless access point, and h is the index characterizing the channel state. To simplify the analysis, the transmission rate of the wireless channel is computed with the Shannon formula.

3. drc

For the remote cloud, the transmission delay between the mobile node and the remote cloud includes the same node-to-access-point transmission delay; however, owing to the powerful processing capability of the remote cloud, the processing delay can be neglected when computing the total delay drc, which is therefore computed by formula [7]:

drc=D/(B log2(1+SNR(h)))+dbackhaul   [7]

In formula [7], dbackhaul is the transmission delay of the backhaul link to the remote cloud, i.e. from the mobile node's wireless access point to the remote cloud.

When a service is offloaded to the micro cloud and allocated i computing resources, the system gains (E0 − βdvc), where βdvc is the system overhead; similarly, when a service is offloaded to the local cloud or the remote cloud, the system gains (E0 − βdlc) or (E0 − βdrc) respectively. When an offloaded service completes, or a mobile node joins the micro cloud, the system gains nothing; when a mobile node departs while the micro cloud still has sufficient available computing resources, the system likewise gains nothing; but when a mobile node departs while all the computing resources of the micro cloud have been allocated, the computing resources allocated to one of the services are inevitably interrupted, so the system incurs the penalty −P.
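The delay formulas [5]–[7] and the immediate reward E0 − βd can be computed directly. The sketch below is illustrative only: every numeric value in it is a made-up placeholder, not a parameter from the patent.

```python
import math

def d_vc(i, mu_p):
    """Micro cloud (formula [5]): transmission delay neglected;
    processing on i resources at service rate mu_p each."""
    return 1.0 / (i * mu_p)

def rate(B, snr):
    """Shannon rate of the wireless link, B * log2(1 + SNR)."""
    return B * math.log2(1.0 + snr)

def d_lc(i, mu_p, D, B, snr):
    """Local cloud (formula [6]): processing delay plus uplink delay."""
    return 1.0 / (i * mu_p) + D / rate(B, snr)

def d_rc(D, B, snr, d_backhaul):
    """Remote cloud (formula [7]): processing delay neglected;
    uplink delay plus backhaul delay."""
    return D / rate(B, snr) + d_backhaul

def immediate_reward(E0, beta, delay):
    """k(s, a) = E0 - beta * d for an admitted offloading request."""
    return E0 - beta * delay

# Example with invented numbers: 2 micro-cloud resources, unit service rate.
print(immediate_reward(E0=10.0, beta=2.0, delay=d_vc(i=2, mu_p=1.0)))  # 9.0
```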
The computation of the other parameters in formula [3] is described below.

This embodiment can adopt various reward models for selecting the service offloading policy, for example the average-reward model of r(s,a). Under that model, specifically, the expected cost o(s,a) in formula [3] can be computed by formula [8]:

o(s,a)=c(s,a)τ(s,a)   [8]

In formula [8], c(s,a) is the loss rate of the system, given by formula [9] (reproduced only as an image in the original and not reconstructed here); τ(s,a) is the expected time between two consecutive decision instants, whose computation is described later and not repeated here.
Sub-step 4: establish the transition probability matrix of the system.

This embodiment assumes that the arrivals and departures of service offloading requests follow Poisson processes, as do the arrivals and departures of mobile nodes in the micro cloud.

Under these assumptions, solving for the transition probability matrix requires the average event rate, denoted σ(s,a). To simplify the expressions, first define the virtual state the system occupies after it executes the selected service offloading policy and before the next system event occurs. σ(s,a) is then given by formula [10] (reproduced only as an image in the original): it sums the total arrival rate of offloading requests, the departure rates of the offloaded task requests currently being served, the arrival rate λv of mobile nodes in the micro cloud, and the departure rate μv of mobile nodes in the micro cloud.

Based on formula [10], the probability that the system transitions from state s to state s' after performing action a is denoted p(s'|s,a), given by formula [11]; the determination of its parameters is given in Table 1 (both formula [11] and Table 1 are reproduced only as images in the original).

From formula [10], τ(s,a) can be computed, which in turn allows the expected cost o(s,a) to be computed.
This embodiment can adopt many reward models to select the action a(s) as the system's service offloading policy in state s; different reward criteria lead to differences in the reward computation. For example, the average-reward model of r(s,a) can be used to select a(s), as described in detail below.

If the average-reward model is used, the actual average reward gπ the system can obtain by executing policy π from state s can be expressed as (formula [12], reconstructed from the standard semi-Markov average-reward definition; the original is an image):

gπ = lim_{T→∞} E_s^π[Σ_{n=0}^{T−1} r(s_n, a_n)] / E_s^π[Σ_{n=0}^{T−1} τ(s_n, a_n)]   [12]

Based on formula [12], the actual reward of the policy π* the system selects and executes in state s (i.e. the determined service offloading policy) can be expressed as:

gπ* = max_π gπ   [13]

The π* satisfying [13] can be solved for by iteration, i.e.:

v(s) = max_{a∈As} { r(s,a) − g·τ(s,a) + Σ_{s'∈S} p(s'|s,a)·v(s') }   [14]

Formula [14] is the average-reward model of the semi-Markov decision process, where v(s) is the value function of state s and g is the gain of the reward process.

Further, this semi-Markov average-reward model can be normalized into an ordinary discrete-time decision model, which simplifies the analysis. This embodiment introduces η = Kλp + λv + μv + (K + Mlc)Nμp to normalize the semi-Markov average-reward model, where η satisfies [1 − p(s|s,a)]σ(s,a) ≤ η < ∞ for all s ∈ S and a ∈ As.

Because of the mapping relations of formulas [15] to [17] (the standard uniformization transform; reconstructed here, as the originals appear only as images):

r̃(s,a) = r(s,a)·σ(s,a)   [15]

p̃(s'|s,a) = p(s'|s,a)·σ(s,a)/η, for s' ≠ s   [16]

p̃(s|s,a) = 1 − [1 − p(s|s,a)]·σ(s,a)/η   [17]

formula [18] holds:

ṽ(s) + g̃ = max_{a∈As} { r̃(s,a) + Σ_{s'∈S} p̃(s'|s,a)·ṽ(s') }   [18]
Since the state space S and the action sets As are countable, this embodiment can solve formula [18] with a value iteration algorithm, with the following specific steps:

Step 1: set the value function of every state to zero, i.e. v^0(s) = 0 for all s ∈ S, and set the iteration count n = 0.

Step 2: for each state, compute its value function v^{n+1}(s) according to formula [19]:

v^{n+1}(s) = max_{a∈As} { r̃(s,a) + Σ_{s'∈S} p̃(s'|s,a)·v^n(s') }   [19]

Step 3: check whether formula [20] holds for a given accuracy ε; if so, go to step 4, otherwise set n = n + 1 and return to step 2. Here sp(v) denotes the span of v, which can be expressed as formula [21]:

sp(v^{n+1} − v^n) < ε   [20]

sp(v) = max_{s∈S} v(s) − min_{s∈S} v(s)   [21]

Step 4: for each state s, compute its stationary optimal policy according to formula [22] and stop the iteration:

π*(s) = argmax_{a∈As} { r̃(s,a) + Σ_{s'∈S} p̃(s'|s,a)·v(s') }   [22]

It should be noted that, after π*(s) has been computed, the action a(s) it maps to can be determined from the mapping between a(s) and values defined earlier; that a(s) is then taken as the action (i.e. the service offloading policy) the system selects in state s, and the action is executed.
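Steps 1 to 4 above amount to span-based value iteration for an average-reward discrete-time MDP. The sketch below implements them for a generic model given as dictionaries; it assumes the rewards and transition probabilities have already been uniformized. The tiny two-state example at the bottom is invented purely to exercise the code and has nothing to do with the patent's state space.

```python
def span(v):
    """sp(v) = max v - min v, as in formula [21]."""
    vals = list(v.values())
    return max(vals) - min(vals)

def value_iteration(states, actions, r, p, eps=1e-6, max_iter=10_000):
    """states: iterable of states; actions[s]: candidate actions in s;
    r[(s, a)]: (uniformized) reward; p[(s, a)]: dict s' -> probability.
    Returns (policy, v)."""
    v = {s: 0.0 for s in states}                                   # step 1
    for _ in range(max_iter):
        v_new = {s: max(r[(s, a)] +
                        sum(q * v[s2] for s2, q in p[(s, a)].items())
                        for a in actions[s])
                 for s in states}                                  # step 2
        converged = span({s: v_new[s] - v[s] for s in states}) < eps
        v = v_new
        if converged:                                              # step 3
            break
    policy = {s: max(actions[s],
                     key=lambda a: r[(s, a)] +
                     sum(q * v[s2] for s2, q in p[(s, a)].items()))
              for s in states}                                     # step 4
    return policy, v

# Invented 2-state example: action 1 in s0 "offloads" (reward 1, mostly to s1).
states = ["s0", "s1"]
actions = {"s0": [0, 1], "s1": [0]}
r = {("s0", 0): 0.0, ("s0", 1): 1.0, ("s1", 0): 2.0}
p = {("s0", 0): {"s0": 1.0},
     ("s0", 1): {"s1": 0.9, "s0": 0.1},
     ("s1", 0): {"s0": 0.9, "s1": 0.1}}
policy, _ = value_iteration(states, actions, r, p)
print(policy["s0"])  # 1: the offloading-style action beats staying put
```

The self-loop probabilities in the example keep the chain aperiodic, which is what makes the span criterion of step 3 converge; the uniformization transform above serves the same purpose for the patent's model.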
After the service offloading policy has been determined by performing step 12, the following step 13 can be performed to offload the service from the mobile node.

Step 13: according to the determined service offloading policy, the application layer device controls the control layer device to offload the service from the mobile node.

Taking Figure 2 as an example, if the mobile node that sent the service offloading request to the application layer device is A, and the policy determined by the application layer device in step 12 is "offload A's service to B and C in the micro cloud", the operations the control layer device can perform include: 1. instructing B and C to establish corresponding virtual machines (VMs) for A through virtualization technology and to process the service offloaded from A with their computing resources, where the use of virtual machines masks the differences of the underlying systems so that the resources of mobile nodes made by different manufacturers can be used effectively; 2. instructing A to send the relevant information of the service it requests to offload to B and C; and so on.

Still taking Figure 2 as an example, if the mobile nodes that sent service offloading requests are D and E, and the policy determined by the application layer device in step 12 is "offload the services of D and E to their local cloud", the operations the control layer device can perform include: 1. instructing the wireless access point accessed by D and E to establish corresponding virtual machines for D and E through virtualization technology and to process the services offloaded from D and E with its computing resources; 2. instructing D and E to send the relevant information of the services they request to offload to that wireless access point; and so on.

Still taking Figure 2 as an example, if the mobile node that sent the service offloading request is F, and the policy determined by the application layer device in step 12 is "offload F's service to its remote cloud", the operations the control layer device can perform include: 1. instructing F's remote cloud to establish a corresponding virtual machine for F through virtualization technology and to process the service offloaded from F with its computing resources; 2. sending the relevant information of the service F requests to offload to the remote cloud; and so on.

In this embodiment, the resources of each cloud layer in the three-layer cloud network architecture can be formed into a virtualized resource pool through virtualization technology, and this pool can be centrally managed by the network controller in the control layer of the software-defined network. The control layer device in steps 11 to 13 can correspond to this network controller, and the application layer device described above can correspond to an application in the application layer of the software-defined network.

As shown in Figure 3, the network controller can interact with the data layer (which includes various wireless access points and mobile nodes) through the southbound interface, and with the application layer through the northbound interface. The network controller can retrieve the relevant information in the virtualized resource pool (including the computing and storage resources of each cloud layer, etc.).
Embodiment 2

Embodiment 2 of this application provides a service offloading system, whose structure is shown schematically in Figure 4 and which includes an application layer device 41. The main functions of the application layer device 41 are: receiving a service offloading request sent by a mobile node; in response to the service offloading request, determining a service offloading policy according to auxiliary decision information; and, according to the determined service offloading policy, controlling a control layer device to offload the service from the mobile node.

In one implementation, the system may further include a control layer device 42. When the system includes the control layer device 42, the application layer device 41 is further configured to trigger the control layer device 42 to obtain the auxiliary decision information, and to determine the service offloading policy according to the auxiliary decision information obtained by the control layer device 42; the control layer device is configured to obtain the auxiliary decision information under the control of the application layer device 41 and provide it to the application layer device.

In one implementation, the system provided by Embodiment 2 is the communication system in which the mobile node that sends the service offloading request resides. In such a scenario, the application layer device 41 may specifically be configured to: determine, according to the obtained auxiliary decision information and in accordance with a discrete-time Markov decision process and a value iteration algorithm, the identifier of the optimal service offloading policy; and, according to that identifier, determine, from the candidate service offloading policies, the policy bearing the identifier as the optimal service offloading policy.

In one implementation, the application layer device 41 may specifically be configured to control, according to the determined service offloading policy, the control layer device to offload the service on the mobile node to a particular cloud for processing, where the particular cloud includes at least one of the following:

a micro cloud formed jointly by the mobile node and other mobile nodes;

a local cloud formed by the server cluster deployed near the wireless access point the mobile node accesses;

a remote cloud formed by centralized cloud servers.

In one implementation, the auxiliary decision information may include the following information:

the numbers of service offloading requests occupying each given number of computing resources in the micro cloud where the mobile node resides;

the numbers of service offloading requests occupying each given number of computing resources in the cloud of the wireless access point the mobile node accesses;

the channel state of the mobile node.

In one implementation, the application layer device 41 may be an application in the application layer of a software-defined network, and the control layer device 42 may be a controller in the control layer of the software-defined network.

In one implementation, the mobile node may be a vehicle-mounted device.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.

The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

It should also be noted that the terms "comprise", "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article or device that comprises it.

Those skilled in the art will appreciate that embodiments of this application may be provided as a method, a system, or a computer program product. Accordingly, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

The above are merely embodiments of this application and are not intended to limit it. Various modifications and variations of this application will occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principles of this application shall fall within the scope of the claims of this application.

Claims (10)

  1. A service offloading method, characterized in that it comprises:
    an application layer device receiving a service offloading request sent by a mobile node;
    in response to the service offloading request, determining a service offloading policy according to auxiliary decision information;
    according to the determined service offloading policy, controlling a control layer device to offload a service from the mobile node.
  2. The method according to claim 1, characterized in that determining a service offloading policy according to auxiliary decision information comprises:
    triggering the control layer device to obtain the auxiliary decision information;
    determining the service offloading policy according to the obtained auxiliary decision information.
  3. The method according to claim 2, characterized in that the system is the communication system in which the mobile node is located;
    determining the service offloading policy according to the obtained auxiliary decision information comprises:
    determining, according to the obtained auxiliary decision information and in accordance with a discrete-time Markov decision process and a value iteration algorithm, the identifier of the optimal service offloading policy;
    according to the identifier, determining, from the candidate service offloading policies, the service offloading policy bearing the identifier as the optimal service offloading policy.
  4. The method according to claim 3, characterized in that controlling a control layer device to offload the service from the mobile node according to the determined service offloading policy comprises:
    according to the determined service offloading policy, controlling the control layer device to offload the service on the mobile node to a particular cloud for processing;
    wherein the particular cloud comprises at least one of the following:
    a micro cloud formed jointly by the mobile node and other mobile nodes;
    a local cloud formed by a server cluster deployed near the wireless access point accessed by the mobile node;
    a remote cloud formed by centralized cloud servers.
  5. The method according to any one of claims 1 to 4, characterized in that the auxiliary decision information comprises:
    the numbers of service offloading requests occupying each given number of computing resources in the micro cloud where the mobile node is located;
    the numbers of service offloading requests occupying each given number of computing resources in the cloud of the wireless access point accessed by the mobile node;
    the channel state of the mobile node.
  6. The method according to any one of claims 1 to 4, characterized in that the application layer device is an application in the application layer of a software-defined network, and the control layer device is a controller in the control layer of the software-defined network.
  7. The method according to claim 6, characterized in that the method is applied in an Internet of Vehicles.
  8. A service offloading system, characterized in that it comprises an application layer device, wherein:
    the application layer device is configured to receive a service offloading request sent by a mobile node; in response to the service offloading request, determine a service offloading policy according to auxiliary decision information; and, according to the determined service offloading policy, control a control layer device to offload a service from the mobile node.
  9. The system according to claim 8, characterized in that the system further comprises a control layer device;
    the application layer device is configured to trigger the control layer device to obtain the auxiliary decision information, and to determine the service offloading policy according to the obtained auxiliary decision information;
    the control layer device is configured to obtain the auxiliary decision information under the control of the application layer device and provide it to the application layer device.
  10. The system according to claim 9, characterized in that the system is the communication system in which the mobile node is located;
    the application layer device is configured to: determine, according to the obtained auxiliary decision information and in accordance with a discrete-time Markov decision process and a value iteration algorithm, the identifier of the optimal service offloading policy; and, according to the identifier, determine, from the candidate service offloading policies, the service offloading policy bearing the identifier as the optimal service offloading policy.
PCT/CN2015/077754 2015-04-07 2015-04-29 一种业务卸载方法及系统 WO2016161677A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510161162.8A CN104869151A (zh) 2015-04-07 2015-04-07 一种业务卸载方法及系统
CN201510161162.8 2015-04-07

Publications (1)

Publication Number Publication Date
WO2016161677A1 true WO2016161677A1 (zh) 2016-10-13

Family

ID=53914669

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/077754 WO2016161677A1 (zh) 2015-04-07 2015-04-29 一种业务卸载方法及系统

Country Status (2)

Country Link
CN (1) CN104869151A (zh)
WO (1) WO2016161677A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058934A (zh) * 2019-04-25 2019-07-26 中国石油大学(华东) 一种在大规模云雾计算环境中制定最优任务卸载决策的方法
CN111741054A (zh) * 2020-04-24 2020-10-02 浙江工业大学 一种移动用户深度神经网络计算卸载时延最小化方法
CN112162863A (zh) * 2020-10-20 2021-01-01 哈尔滨工业大学 一种边缘卸载决策方法、终端及可读存储介质
CN112162789A (zh) * 2020-09-17 2021-01-01 中国科学院计算机网络信息中心 一种基于软件定义的边缘计算随机卸载决策方法及系统
CN114296828A (zh) * 2021-12-30 2022-04-08 中国电信股份有限公司 数据计算任务的卸载方法及装置、存储介质、设备
US11411883B2 (en) 2019-05-27 2022-08-09 Toyota Motor Eng. & Manuf. North America. Inc. Hierarchical computing architecture for traffic management
CN114928609A (zh) * 2022-04-27 2022-08-19 南京工业大学 物联网场景的异构云-边环境的最优任务卸载方法
US11458996B2 (en) 2020-04-13 2022-10-04 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods to enable reciprocation in vehicular micro cloud

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507267A (zh) * 2015-09-07 2017-03-15 中国移动通信集团公司 业务处理方法、终端及基站
CN105764121B (zh) * 2016-01-18 2019-04-23 浙江工业大学 一种蜂窝流量卸载网络中基于动态排序的设备与基站连接方法
CN105812461B (zh) * 2016-03-09 2019-03-12 福州大学 一种移动云环境情景感知计算迁移方法
CN107333281B (zh) * 2017-05-15 2019-08-20 北京邮电大学 移动计算卸载协同控制系统及方法
CN107766135B (zh) * 2017-09-29 2021-04-27 东南大学 移动朵云中基于粒子群和模拟退火优化的任务分配方法
CN108958916B (zh) * 2018-06-29 2021-06-22 杭州电子科技大学 一种移动边缘环境下工作流卸载优化方法
CN109067842B (zh) * 2018-07-06 2020-06-26 电子科技大学 面向车联网的计算任务卸载方法
CN109144719B (zh) * 2018-07-11 2022-02-15 东南大学 移动云计算系统中基于马尔科夫决策过程的协作卸载方法
CN108924796B (zh) * 2018-08-15 2020-04-07 电子科技大学 一种资源分配及卸载比例联合决策的方法
CN109343946B (zh) * 2018-09-19 2021-08-13 长安大学 一种软件定义车联网计算任务迁移和调度方法
WO2020133098A1 (zh) * 2018-12-27 2020-07-02 驭势科技(北京)有限公司 一种分布式计算网络系统与方法
CN112839360B (zh) * 2019-11-25 2023-04-18 华为技术有限公司 资源分配的方法及通信设备
CN111355779B (zh) * 2020-02-18 2021-05-28 湖南大学 基于服务的车联网任务卸载方法及其卸载装置
CN111464976B (zh) * 2020-04-21 2021-06-22 电子科技大学 一种基于车队的车辆任务卸载决策和总体资源分配方法
CN112888021B (zh) * 2021-01-29 2022-08-23 重庆邮电大学 一种车联网中避免中断的任务卸载方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140094142A1 (en) * 2012-09-28 2014-04-03 Cisco Technology, Inc. Network based on demand wireless roaming
US20140095695A1 (en) * 2012-09-28 2014-04-03 Ren Wang Cloud aware computing distribution to improve performance and energy for mobile devices
US20140105103A1 (en) * 2012-10-16 2014-04-17 Cisco Technology, Inc. Offloaded Security as a Service

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8493851B2 (en) * 2010-05-07 2013-07-23 Broadcom Corporation Method and system for offloading tunnel packet processing in cloud computing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140094142A1 (en) * 2012-09-28 2014-04-03 Cisco Technology, Inc. Network based on demand wireless roaming
US20140095695A1 (en) * 2012-09-28 2014-04-03 Ren Wang Cloud aware computing distribution to improve performance and energy for mobile devices
US20140105103A1 (en) * 2012-10-16 2014-04-17 Cisco Technology, Inc. Offloaded Security as a Service

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058934A (zh) * 2019-04-25 2019-07-26 中国石油大学(华东) 一种在大规模云雾计算环境中制定最优任务卸载决策的方法
US11411883B2 (en) 2019-05-27 2022-08-09 Toyota Motor Eng. & Manuf. North America. Inc. Hierarchical computing architecture for traffic management
US11458996B2 (en) 2020-04-13 2022-10-04 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods to enable reciprocation in vehicular micro cloud
CN111741054A (zh) * 2020-04-24 2020-10-02 浙江工业大学 一种移动用户深度神经网络计算卸载时延最小化方法
CN111741054B (zh) * 2020-04-24 2022-07-26 浙江工业大学 一种移动用户深度神经网络计算卸载时延最小化方法
CN112162789A (zh) * 2020-09-17 2021-01-01 中国科学院计算机网络信息中心 一种基于软件定义的边缘计算随机卸载决策方法及系统
CN112162863A (zh) * 2020-10-20 2021-01-01 哈尔滨工业大学 一种边缘卸载决策方法、终端及可读存储介质
CN112162863B (zh) * 2020-10-20 2024-04-02 哈尔滨工业大学 一种边缘卸载决策方法、终端及可读存储介质
CN114296828A (zh) * 2021-12-30 2022-04-08 中国电信股份有限公司 数据计算任务的卸载方法及装置、存储介质、设备
CN114928609A (zh) * 2022-04-27 2022-08-19 南京工业大学 物联网场景的异构云-边环境的最优任务卸载方法

Also Published As

Publication number Publication date
CN104869151A (zh) 2015-08-26

Similar Documents

Publication Publication Date Title
WO2016161677A1 (zh) 一种业务卸载方法及系统
US11218546B2 (en) Computer-readable storage medium, an apparatus and a method to select access layer devices to deliver services to clients in an edge computing system
WO2020168761A1 (zh) 训练模型的方法和装置
CN112153700B (zh) 一种网络切片资源管理方法及设备
US10929189B2 (en) Mobile edge compute dynamic acceleration assignment
US9350682B1 (en) Compute instance migrations across availability zones of a provider network
WO2020224437A1 (zh) 一种通信方法、装置、实体及计算机可读存储介质
US9465641B2 (en) Selecting cloud computing resource based on fault tolerance and network efficiency
US20230086899A1 (en) Unlicensed spectrum harvesting with collaborative spectrum sensing in next generation networks
Maiti et al. An effective approach of latency-aware fog smart gateways deployment for IoT services
CN106856438B (zh) 一种网络业务实例化的方法、装置及nfv系统
Meng et al. A utility-based resource allocation scheme in cloud-assisted vehicular network architecture
Hattab et al. Optimized assignment of computational tasks in vehicular micro clouds
Chen et al. Latency minimization for mobile edge computing networks
CN103414657A (zh) 一种跨数据中心的资源调度方法、超级调度中心和系统
KR20230043044A (ko) 디지털 트윈 보조 회복성을 위한 방법 및 장치
US20230136612A1 (en) Optimizing concurrent execution using networked processing units
Harnal et al. Load balancing in fog computing using qos
CN112202829A (zh) 基于微服务的社交机器人调度系统和调度方法
US11228516B1 (en) Edge computing management using multiple latency options
Laroui et al. Virtual mobile edge computing based on IoT devices resources in smart cities
KR20220047408A (ko) 무선 네트워크들에서의 모델 지원 심층 강화 학습 기반 스케줄링
Nayyer et al. Cfro: Cloudlet federation for resource optimization
CN112583941A (zh) 一种支持接入多电力终端的方法、单元节点及电力物联网
Al-Razgan et al. [Retracted] A Computational Offloading Method for Edge Server Computing and Resource Allocation Management

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15888232

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC ( EPO FORM 1205A DATED 07/02/2018 )

122 Ep: pct application non-entry in european phase

Ref document number: 15888232

Country of ref document: EP

Kind code of ref document: A1