CN112148492B - Service deployment and resource allocation method considering multi-user mobility - Google Patents

Service deployment and resource allocation method considering multi-user mobility

Info

Publication number
CN112148492B
CN112148492B (application CN202011038113.2A)
Authority
CN
China
Prior art keywords
service
user
deployment
computing
overhead
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011038113.2A
Other languages
Chinese (zh)
Other versions
CN112148492A (en)
Inventor
陈智麒
张胜
钱柱中
李文中
陆桑璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202011038113.2A
Publication of CN112148492A
Application granted
Publication of CN112148492B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00 Details relating to CAD techniques
    • G06F2111/04 Constraint-based CAD
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00 Details relating to CAD techniques
    • G06F2111/10 Numerical modelling
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a service deployment and resource allocation method considering multi-user mobility, applied to an edge computing network scenario. The edge computing scenario is modeled, the problem is regarded as an optimization problem that jointly captures service computation delay overhead, transmission delay overhead and service migration overhead, and the multi-user service deployment and computing resource allocation scheme is obtained by solving this optimization problem subject to the decision constraints. The invention fills a gap in the field: it supports multi-user service deployment, accounts for user mobility, is widely applicable, and improves task allocation and execution efficiency in the edge computing scenario, thereby improving the overall processing performance of the network.

Description

Service deployment and resource allocation method considering multi-user mobility
Technical Field
The invention relates to the field of edge computing, in particular to a service deployment and resource allocation method considering multi-user mobility in an edge computing network environment.
Background
The last decade has been one of rapid development for cloud computing. By providing dynamic network resource pools, virtualization and high availability, cloud computing has turned the internet into users' data and computing center, allowing a large number of services to run in the cloud, and it has become an important direction in information technology. However, everything has two sides, and as understanding of cloud computing has deepened, its disadvantages have been exposed. (1) Privacy and security: because users must hand their own information to the cloud computing platform in order to use its services, the platform runs the risk of leaking users' privacy. (2) Data transmission cost: the development of the internet of things has brought a continuous stream of new intelligent devices, and these devices generate an amount of data that far exceeds the carrying capacity of the network and of cloud computing centers. For example, the surveillance cameras now in widespread use produce large amounts of video data every day, and an autonomous vehicle can generate up to 5 TB of data per day; transmitting all of this data to the cloud for processing is not acceptable under current technical conditions. (3) High latency: due to the centralized nature of cloud computing, some nodes are necessarily far away from the computing center, and their real-time performance often cannot be guaranteed. Autonomous driving and industrial automation, however, are known to have strict requirements on real-time data processing, and if the network delay caused by data transmission cannot meet these requirements, catastrophic results may occur.
Mobile edge computing (MEC) has developed in response to these problems of cloud computing. By pushing computing, storage and other resources to the edge of the network, mobile edge computing overcomes a series of shortcomings of the traditional cloud computing architecture and provides a network computing paradigm that is low-latency, privacy-preserving, highly scalable, fast and efficient. A typical mobile edge computing scenario involves three kinds of computing devices, namely cloud computing nodes, edge computing nodes (e.g., deployed on edge servers) and user equipment nodes, each with the following characteristics:
Cloud computing node: in general, the computing resources of the cloud computing node can be regarded as unlimited, and any task submitted to the cloud can be processed with high-speed computing capability, but the communication cost of uploading local data to the cloud is high. Representative public cloud services include AWS, Google Cloud and Microsoft Azure.
User equipment node: these user devices may be smartphones, intelligent in-vehicle systems, embedded internet-of-things devices and so on. They can be regarded as the source of the data, or as being very close to it, so their data communication cost is negligible; however, owing to hardware cost and battery endurance considerations, they are typically not equipped with high computing performance.
Edge computing node: this can be regarded as a compromise between the two. It has a certain amount of computing capability, and its data communication cost is within an acceptable range. Edge computing nodes are often deployed at mobile network access points (APs) and can provide edge computing services to mobile devices within the coverage of the AP. An edge computing node can be considered an extension of the cloud computing node, but, constrained by server performance and cost, it provides limited computing resources. When facing multiple services, it is assumed that an edge computing node incurs a certain service computation delay by allocating proportions of its CPU computing resources, with different services occupying different proportions of the CPU.
In a typical mobile edge computing environment, however, the mobility of the user must be considered. Within a certain period of time a user may roam from one place to another, so that the data delay to the cloud computing node and to the user's own device does not change with the change of network position, whereas the delay to each edge computing node does change. The edge node providing the computing service must then consider whether to continue running the service on that edge node or to migrate the service to another edge computing node with lower latency. The trade-off is that the former only increases the data communication delay, while the latter can effectively reduce the communication delay but introduces an additional migration overhead, where the migration overhead includes the overhead of destroying and re-creating the virtual machine, the bandwidth occupied by migrating the virtual machine state, and even the overhead of migration failure. However, according to the work surveyed by the present inventors, existing related research has several shortcomings, such as considering only single-user service deployment decisions or only coarse-grained allocation of computing resources, and it cannot achieve high-performance task allocation and execution under multi-user mobility.
Disclosure of Invention
The purpose of the invention: in view of this gap in the prior art, the invention provides a service deployment and resource allocation method considering multi-user mobility in an edge computing network environment, which jointly considers how user services are deployed on edge computing nodes and how the edge computing nodes allocate CPU proportions to those services.
The technical solution is as follows: a service deployment and resource allocation method considering multi-user mobility, comprising the following steps:
(1) Establishing a mathematical model for the mobile edge computing scenario:
according to the cloud computing node, the user equipment nodes and the edge computing node set ε = {1, 2, …, E} contained in the mobile edge computing scenario, the computing nodes of the model comprise all three kinds of devices; service deployment decisions are made at discrete time slices, so the time of the model is defined as the discrete time slice set {1, 2, …, T}; and the decision space formed by the service deployment and resource allocation problem is analyzed: when time slice t arrives, the service of user u must decide on which computing node it is deployed; and, for each computing node, if more than one user service is deployed on it within time slice t, the CPU computing resource allocation ratios among these services must be considered;
according to the three overheads contained in the mobile edge computing scenario, namely computation delay overhead, transmission delay overhead and service migration overhead, the optimization objective of the model is determined as minimizing the weighted sum of the three overheads, wherein the computation delay overhead refers to the delay from the arrival of a user's service request at a computing node until the computed result is returned; the transmission delay overhead refers to the delay from when the user sends a service request until the request is received by a computing node; and the service migration overhead refers to the overhead generated by migrating the service correspondingly after the user moves;
(2) Solving the established mathematical model to obtain a multi-user service deployment and resource allocation scheme.
Further, the optimization objective of the model is expressed as:
min Σ_{t=1}^{T} Σ_{j∈U} [ w_1·C_j^comp(t) + w_2·C_j^comm(t) + w_3·C_j^mig(t) ]

wherein x_{i,j}^t is a 0/1 indicator variable that indicates whether user service j is deployed on computing node i at time slice t, x_{i,j}^t = 1 representing deployment and x_{i,j}^t = 0 the opposite; y_{i,j}^t represents the CPU computing resource allocation ratio and is a ratio variable between 0 and 1, representing the proportion of computing resources allocated to user service j at computing node i during time slice t; C_j^comp(t) represents the computation delay overhead of user service j at time slice t; C_j^comm(t) represents the transmission delay overhead of user service j at time slice t; C_j^mig(t) represents the migration overhead of user service j at time slice t; and w_1, w_2, w_3 are respectively the weights of the three overheads.
Further, the step (2) of solving the established mathematical model includes:
Denote by X the space of all possible assignments of x; any x' ∈ X uniquely corresponds to an optimal solution y*, so a service deployment and computing resource allocation state is defined as s = (x', y*) ∈ S, and the overall overhead of the state is C_{s'→s}, i.e., the additional overhead required to move from the state s' of the previous time slice to the state s of the current time slice, where S refers to the feasible solution set of service deployment and computing resource allocation schemes;
build a graph G = (V, L), where the vertex set V represents the set of states and the edge set L represents the set of total overheads between pairs of states; specifically, the weight of the edge between two adjacent states s_i^{t-1} and s_j^t represents the service migration overhead involved between states i and j, plus the service computation delay and transmission delay overheads at time slice t;
add artificial nodes S and D on the two sides of the graph G, and solve the shortest path between S and D through the following algorithm:
A. Receiving the input and initializing a dynamic programming state table Φ, which records the mapping from a deployment scheme s_t to its corresponding overhead;
B. Looping continuously from t = 1 to T;
C. In each iteration, obtaining all (E+2)^U possible deployment schemes by Cartesian product and denoting them as the set H, where E is the number of edge computing nodes and U is the number of user services;
D. Traversing the deployment schemes in the set H, and updating Φ with the entry having the smallest sum of the overhead of each scheme at the previous time slice t-1 and the current overhead;
E. Returning the deployment scheme with the least overhead among all deployment schemes in Φ.
As a more preferred embodiment, solving the established mathematical model in step (2) includes:
Denote by X the space of all possible assignments of x; any x' ∈ X uniquely corresponds to an optimal solution y*, so a service deployment and computing resource allocation state is defined as s = (x', y*) ∈ S, and the overall overhead of the state is C_{s'→s}, i.e., the additional overhead required to move from the state s' of the previous time slice to the state s of the current time slice, where S refers to the feasible solution set of service deployment and computing resource allocation schemes;
build a graph G = (V, L), where the vertex set V represents the set of states and the edge set L represents the set of total overheads between pairs of states; specifically, the weight of the edge between two adjacent states s_i^{t-1} and s_j^t represents the service migration overhead involved between states i and j, plus the service computation delay and transmission delay overheads at time slice t;
add artificial nodes S and D on the two sides of the graph G, and solve the shortest path between S and D through the following algorithm:
a. Receiving the input and initializing Φ, which records the mapping from a deployment scheme s_t to its corresponding overhead;
b. Looping continuously from t = 1 to T;
c. In each iteration, obtaining all (E+2)^U possible deployment schemes and denoting them as the set H, where E is the number of edge computing nodes and U is the number of user services;
d. Obtaining κ schemes by uniform sampling from the deployment schemes in the set H, and updating Φ with the entry having the smallest sum of the overhead of each sampled scheme at the previous time slice t-1 and the overhead of the current sampled scheme;
e. Returning the deployment scheme with the least overhead among all deployment schemes in Φ.
As a more preferred embodiment, solving the established mathematical model in step (2) includes:
The service set deployed on edge computing node i is denoted Λ_i = SUBSET({λ_1, λ_2, …, λ_n}), where λ_n represents the n-th service; w_{ij} is used to represent the transmission delay overhead plus the service migration overhead of deploying user service j onto edge computing node i, and v_{ij} = λ_j / c_i is used to represent the computation delay overhead of deploying user service j onto edge computing node i, where c_i is the computing capability of edge computing node i;
deploy {λ_1, λ_2, …, λ_n} in arbitrary order, tentatively deploying λ_j onto every edge computing node ε; for the i-th edge computing node ε_i, define LOAD(ε_i) to denote the sum of the overheads of the user services already carried by that node, and define LOAD(ε_i, λ_j) to denote the total overhead generated by deploying one more user service j on ε_i in addition to the existing services;
execute the following update algorithm and select the ε with the minimum final overhead increase for deployment:
1) For each edge computing node ε_i, setting LOAD(ε_i) = 0;
2) When time slice t arrives, performing the following steps:
i. For each user service j, calculating LOAD(ε_i, λ_j) and selecting the edge computing node ε with the minimum overhead;
ii. Deploying user service j on that edge computing node ε, and then updating LOAD(ε) with LOAD(ε, λ_j).
Beneficial effects: for the problem of multi-user service deployment and computing resource allocation in an edge computing environment, the invention builds a model of the edge computing scenario, regards the problem as an optimization problem that jointly captures service computation delay overhead, transmission delay overhead and service migration overhead, and obtains the multi-user service deployment and computing resource allocation scheme by solving this optimization problem subject to the decision constraints. The invention fills a gap in the field: it supports multi-user service deployment, accounts for user mobility, is widely applicable, and improves task allocation and execution efficiency in the edge computing scenario, thereby improving the overall processing performance of the network.
Drawings
FIG. 1 is a schematic diagram of a service deployment and computing resource allocation scenario provided by an embodiment of the present invention;
FIG. 2 is an exemplary diagram of a dynamic programming algorithm of an offline algorithm FDSP provided by an embodiment of the present invention;
FIG. 3 is an exemplary diagram of the online algorithm OSP provided by an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
Referring to the edge computing scenario of fig. 1, in one embodiment, a service deployment and resource allocation method that considers multi-user mobility includes the steps of:
Step (1): establishing a mathematical model for the service deployment and resource allocation problem according to the mobile edge computing scenario.
In a mobile edge computing scenario, the devices involved include the cloud computing node (also referred to as the cloud server), the user equipment nodes, and the edge computing nodes, also referred to as the set ε = {1, 2, …, E} of edge servers. All three kinds of computing devices can be regarded as so-called computing nodes. To facilitate modeling of user movements, service deployment decisions are made at discrete time slices, so time is defined as the discrete time slice set {1, 2, …, T}.
In analyzing the decision space formed by the service deployment and resource allocation problem, two points need to be considered: first, when time slice t arrives, the service of user u must decide on which computing node it is deployed; second, for each computing node, if more than one user service is deployed on it in time slice t, the CPU computing resource allocation ratios among these services must be considered.
For the first point: firstly, the user equipment should not be far away from the edge computing node running the service, otherwise excessive network transmission delay is incurred; secondly, user services should, as far as possible, not be deployed on the same edge computing node, otherwise the service computation delay increases significantly because multiple services contend for the same server resources; in addition, because the edge computing nodes are heterogeneous, different nodes can provide different computing capability for the same user service depending on the current machine state, such as the hardware and software configuration of CPU clock frequency, memory frequency, instruction set and so on. The invention uses the decision variable x_{i,j}^t to characterize the service deployment scheme; x_{i,j}^t is a 0/1 indicator variable that indicates whether user service j is deployed on computing node i at time slice t, x_{i,j}^t = 1 representing deployment and x_{i,j}^t = 0 the opposite. Note that at time slice t a service can be, and can only be, deployed on one computing node, so there is the following decision constraint:

Σ_i x_{i,j}^t = 1, for every user service j and every time slice t.
for the second point, a simplest allocation is to allocate CPU computing resources uniformly, assuming that a node provides computing power of size c (e.g., using CPU cycles per minute), and n user services have been deployed on the node, then each service obtains computing power ofFor the ith service, the service calculation time delay can be +.>To represent. It is easy to find that this distribution following the uniformity principle is not an optimal distribution, a better one based on the data throughput lambda of the service i To dynamically determine the CPU computing resource allocation ratio of the service. The present invention uses the decision variable +.>To indicate (I)>Is a ratio variable from 0 to 1, which indicates that user service j is being computed at node i at time slice tThe ratio of allocated computing resources. Note that at time slice t, one compute node can provide a total of 100% of the computing resource allocation ratio at most, and therefore has the following decision constraint:
wherein the last inequality is expressed in the sense that ifIndicating that in time slice t, user service j will not be deployed on compute node i, then there is naturally +.>If->User service j is deployed on compute node i, thenAny value between 0 and 1 may be taken.
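Purely as an illustration of these two constraints (a minimal sketch, not part of the patent text; the array layout with shape (T, N, U), where N is the total number of computing nodes, is an assumption), a feasibility check could look as follows:

```python
import numpy as np

def constraints_hold(x, y, tol=1e-9):
    """Check the deployment constraint on x and the CPU-ratio constraint on y.

    x, y : arrays of shape (T, N, U); x is 0/1, y lies in [0, 1].
    """
    one_node_per_service = np.all(x.sum(axis=1) == 1)           # sum_i x[t,i,j] = 1
    cpu_not_oversubscribed = np.all(y.sum(axis=2) <= 1 + tol)   # sum_j y[t,i,j] <= 1
    share_only_if_deployed = np.all(y <= x + tol)               # y[t,i,j] <= x[t,i,j]
    return bool(one_node_per_service and cpu_not_oversubscribed and share_only_if_deployed)
```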
A user's service in the mobile edge computing scenario is analyzed from generation to execution; it mainly involves three overheads: computation delay overhead, transmission delay overhead and service migration overhead. The computation delay overhead is the delay from the arrival of a user's service request at a computing node until the computed result is returned; the transmission delay overhead refers to the delay from when the user sends a service request until the request is received by a computing node; and the service migration overhead refers to the overhead generated by migrating the service correspondingly after the user moves. The goal is then to minimize the weighted sum of these three overheads. The optimization objective and constraints of the mathematical model follow from the analysis of the decision space, as set out below.
in time slice t, the user will have a service calculation demand according to the application demand of the user, and useExpressed specifically, is the computational demand of user service j at time slice t. It can be noted that the service calculation demand of the user varies with the time slice t, which is the result of the resultant of various factors such as the mobility of the user, the service timeliness and the change of the user demand, and it is assumed that the demand is directly known. Taking video analysis service as an example +.>The method can be known according to the size of the input video, the frame rate, the accuracy requirement of analysis tasks and the like. In addition, the present invention uses symbol c i To represent the computing power provided by computing node i (e.g., CPU clock frequency), it should be noted here that for cloud computing nodesAnd an edge computing node epsilon that provides no distinction between computing power to different users, but the local computing power of each user device may be considered different, as internet of things devices such as embedded computing devices themselves tend to provide different computing power due to hardware cost and power endurance considerations. Given service deployment and computing resource allocation decisions according to the definition above>And->The calculation delay overhead of the user service j at the time slice t can be obtained:
the transmission delay is typically composed of an access delay and a propagation delay. Access delay refers to the delay from a user equipment to a nearby access point, which is typically determined jointly by the radio environment and the terminal equipment. For propagation delay, if the user service is deployed locally, the propagation delay is almost negligible and can be regarded as 0; if the user service is deployed in the cloud, the propagation delay from the user device to the cloud can be considered to be a constant, and the specific value is determined by the cloud service provider; if the user service is deployed on an edge computing node, the latency is typically related to the edge-to-edge delay between the access point and the edge computing node on which the service resides. In the most ideal case, the service is deployed directly on the access point with negligible edge-to-edge propagation delay. For the transmission delay, the invention uniformly usesThe representation means the delay of the service request of user j to the computing node i at time slice t. According to the above definition, a service deployment decision is given>The transmission delay overhead of the user service j at the time slice t can be obtained:
since the movement of the user brings about a change in the network environment, the edge computing node closest to the user changes accordingly, so that when the user generates a new movement, the deployment location of the user service needs to be considered again: one way is to always run the service in the original location, regardless of the user's movements; another way is to migrate services with the user, which introduces service migration overhead. Service migration overhead including destroy and re-installThe overhead of newly creating a virtual machine, the bandwidth occupation overhead of migrating the virtual machine state, and even the failure overhead including migration failure. The invention uses symbolsRepresenting service migration overhead representing migration overhead for migrating user service j from computing node i to i' at time slice t, the overhead being related to specific user service type, user service running state, and being obtained by measurement in practical application, and being assumed to be a constant for convenience. Note that when i is equal to i', that is, service migration does not occur, the service migration overhead thereof can be considered as 0. From the above definition, it is not difficult to derive the service migration overhead of user service j at time slice t:
wherein the method comprises the steps ofIndicating whether user service j is deployed on compute node i during time slice t-1,/-, for example>Indicating whether user service j is deployed on compute node i' at time slice t, +.>And->Are all [000 … 1 … 000 ]]Such a vector, only one node being 1, is the currently deployed node, so by +.>Service migration overhead from the previous node to the next node can be calculated. Note that i' and i have no relationship, are all used to traverse the set of compute nodes, heThey differ in that i is the compute node under the t-1 time slice and i' is the compute node under the t time slice.
The service computation delay overhead, transmission delay overhead and service migration overhead follow the foregoing discussion, and the overall goal is to minimize their total. One way is to assign different weights to the three overheads and then sum them; denote the weights of the three overheads by w_1, w_2, w_3 respectively. The corresponding overall weighted overhead of user service j at time slice t is:

C_j(t) = w_1·C_j^comp(t) + w_2·C_j^comm(t) + w_3·C_j^mig(t)

Given the time slice sequence {1, 2, …, T}, it is desirable to minimize the overall overhead of all users over time, so the overall problem can be formally expressed as:

min Σ_{t=1}^{T} Σ_{j∈U} [ w_1·C_j^comp(t) + w_2·C_j^comm(t) + w_3·C_j^mig(t) ]

subject to the decision constraints on x_{i,j}^t and y_{i,j}^t given above.
the following table shows the meaning of the symbols mentioned above:
table 1 sign meanings used in mathematical models
Step (2): solving the established mathematical model to obtain the multi-user service deployment and resource allocation scheme.
For the optimization problem established in step (1): when the problem scale is small, the invention provides the dynamic-programming-based FDSP (Full Dynamic Service Placement) algorithm, which obtains an exact solution; when the problem scale grows, the invention provides the improved SDSP (Sampling Dynamic Service Placement) algorithm, which uses a sampling idea to effectively mitigate the combinatorial explosion in the number of states faced by FDSP. The invention also provides the greedy online algorithm OSP (Online Service Placement), which accounts for the fact that future system information is unknown when making service decisions. Each is described in detail below.
FDSP is an exact algorithm based on dynamic programming, which is applicable because the optimal computing resource allocation scheme is uniquely determined by a service deployment scheme and the problem itself has a time-series structure, so the computation can be iterated over multiple stages. By adding two artificial nodes, the problem is converted into a shortest-path problem between these two nodes, and a state transition equation is given. Specifically, a service deployment and computing resource allocation scheme can be regarded as a state; by combinatorial counting, there are (E+2)^U states in total. Let X be the space of all possible assignments of x; any x' ∈ X uniquely corresponds to an optimal solution y*, so a service deployment and computing resource allocation state can be defined as s = (x', y*) ∈ S, where S is the feasible solution set of service deployment and computing resource allocation schemes. According to the definitions of the service computation delay, transmission delay overhead and service migration overhead, the overall overhead of a state can be defined as C_{s'→s}, i.e., the overhead required to move from the state s' of the previous time slice to the state s of the current time slice. In fact, a graph G = (V, L) can be constructed, where V is the set of vertices representing the states and L is the set of edges between vertices representing the overheads between pairs of states; at each time slice t the (E+2)^U states represent all (E+2)^U service deployment schemes, E+2 being the number of all computing nodes (the 2 consists of the cloud server and the user equipment), and the weight of the edge between two adjacent states s_i^{t-1} and s_j^t represents the service migration overhead involved between states i and j, plus the service computation delay and transmission delay overheads at time slice t. By adding artificial nodes S and D on the two sides of the graph G as the start and end points of the shortest path, the problem is converted into finding the shortest path between S and D. This can be solved with dynamic programming, the core of which is the state transition equation Φ_t(s) = min_{s'} { Φ_{t-1}(s') + C_{s'→s} }, where Φ_t(s) represents the cumulative minimum overhead of choosing state s at time slice t; it can be regarded as a sub-problem of the original problem.
The algorithm is based on a dynamic programming idea, and comprises the following specific steps:
A. Receiving the input, including the computing capability c_i of each computing node, the service computation demands λ_j^t, the transmission delays d_{i,j}^t, the service migration overheads M_{i,i'}^j, the number of time slices T, the number of edge servers E and the number of user services U; initializing the dynamic programming state table Φ, which records the mapping from a deployment scheme s_t to its corresponding overhead;
B. Looping continuously from t = 1 to T, performing steps C and D;
C. In each iteration, obtaining all (E+2)^U possible deployment schemes by Cartesian product and denoting them as the set H;
D. Traversing the deployment schemes in the set H, and updating Φ with the entry having the smallest sum of the overhead of each scheme at the previous time slice t-1 and the current overhead;
E. Returning the deployment scheme with the least overhead among all deployment schemes in Φ.
Taking FIG. 2 as an example, in the figure T = 4, E = 2 and U = 2. When t = 1, all 16 possible deployment schemes can be enumerated; one of them, for instance, represents the scheme in which user service 1 is deployed to edge server 1 and user service 2 is also deployed to edge server 1. The optimal deployment scheme is obtained by finding the optimal path between S and D in the graph.
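The following sketch illustrates how steps A through E could be organized (an illustration only, not part of the patent text; the helper functions stage_cost(t, s), standing for the computation-plus-transmission overhead of deployment scheme s at time slice t, and migration_cost(s_prev, s), standing for the migration overhead between two schemes, are assumed to be supplied by the cost model):

```python
from itertools import product

def fdsp(T, E, U, stage_cost, migration_cost):
    """Exact dynamic programming over all (E+2)**U deployment schemes per time slice.

    A deployment scheme is a tuple s of length U, where s[j] is the index of the
    computing node (E edge nodes plus the cloud and the local device) hosting
    user service j.  Returns the minimum-overhead schedule and its overhead.
    """
    schemes = list(product(range(E + 2), repeat=U))       # the set H
    phi = {s: (stage_cost(0, s), [s]) for s in schemes}   # scheme -> (cost, path)
    for t in range(1, T):
        new_phi = {}
        for s in schemes:
            prev = min(phi, key=lambda p: phi[p][0] + migration_cost(p, s))
            cost = phi[prev][0] + migration_cost(prev, s) + stage_cost(t, s)
            new_phi[s] = (cost, phi[prev][1] + [s])
        phi = new_phi
    best = min(phi, key=lambda s: phi[s][0])
    return phi[best][1], phi[best][0]
```

Its cost grows with (E+2)^U per time slice, which is exactly the state-explosion problem that the sampling variant below is meant to relieve.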
FDSP obtains an exact solution, but the dynamic-programming-based approach faces a combinatorial explosion in the number of states, so an offline algorithm SDSP based on state sampling is also provided. The algorithm follows the dynamic programming idea with state sampling, and its specific steps are as follows:
a. Receiving the input, including the computing capability c_i of each computing node, the service computation demands λ_j^t, the transmission delays d_{i,j}^t, the service migration overheads M_{i,i'}^j, the number of time slices T, the number of edge servers E and the number of user services U; initializing the dynamic programming state table Φ, which records the mapping from a deployment scheme s_t to its corresponding overhead;
b. Looping continuously from t = 1 to T, performing steps c and d;
c. In each iteration, obtaining all possible deployment schemes and denoting them as the set H;
d. Obtaining κ schemes by uniform sampling from the (E+2)^U deployment schemes in the set H, and updating Φ with the entry having the smallest sum of the overhead of each sampled scheme at the previous time slice t-1 and the overhead of the current sampled scheme;
e. Returning the deployment scheme with the least overhead among all deployment schemes in Φ.
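A corresponding sketch of the sampled variant is given below (again an illustration with assumed helpers rather than the patent's own procedure; as a simplification it draws κ schemes uniformly at random, with replacement, instead of first materializing the full set H and then sampling from it, which keeps the memory footprint small while sampling over the same space):

```python
import random

def sdsp(T, E, U, kappa, stage_cost, migration_cost, seed=0):
    """Dynamic programming with state sampling: keep kappa sampled schemes per slice."""
    rng = random.Random(seed)

    def sample_schemes():
        # draw kappa deployment schemes uniformly at random
        return [tuple(rng.randrange(E + 2) for _ in range(U)) for _ in range(kappa)]

    phi = {s: (stage_cost(0, s), [s]) for s in sample_schemes()}
    for t in range(1, T):
        new_phi = {}
        for s in sample_schemes():
            prev = min(phi, key=lambda p: phi[p][0] + migration_cost(p, s))
            cost = phi[prev][0] + migration_cost(prev, s) + stage_cost(t, s)
            new_phi[s] = (cost, phi[prev][1] + [s])
        phi = new_phi
    best = min(phi, key=lambda s: phi[s][0])
    return phi[best][1], phi[best][0]
```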
Considering that an offline algorithm requires the information of an entire time period to be known in advance, the present invention converts the original continuous-time optimization problem into a sequence of single-step optimization problems and proposes the online algorithm OSP to solve it.
For the online problem, what is known is the previous service deployment scheme x^{t-1}, from which the service deployment scheme x^t at the current time slice t is solved. In the original model, the computation delay overhead, transmission delay overhead and service migration overhead are all computed from each user's point of view, but they can in fact be considered from the computing node's point of view. For edge computing node i, if its deployed service set is Λ_i = SUBSET({λ_1, λ_2, …, λ_n}), then it incurs a transmission delay overhead and a service migration overhead for each deployed service; these two overheads are denoted jointly by w_{ij}, representing the transmission delay overhead plus the service migration overhead of deploying user service j onto edge computing node i. Similarly, v_{ij} = λ_j / c_i represents the computation delay overhead of deploying user service j onto edge computing node i. For this problem there is the following greedy strategy: deploy {λ_1, λ_2, …, λ_n} in arbitrary order; for each λ_j, tentatively deploy it onto every edge server ε and select the ε whose final overhead increase is minimal. For each edge server ε_i, define LOAD(ε_i) to denote the sum of the overheads of the user services already carried by that server, and define LOAD(ε_i, λ_j) to denote the total overhead generated by deploying one more user service j on ε_i in addition to the existing services.
The following algorithm is adopted:
1) For each edge server ε_i, set LOAD(ε_i) = 0;
2) When time slice t arrives, perform the following steps:
i. For each user service j, calculate LOAD(ε_i, λ_j) and select the ε with the minimum overhead;
ii. Deploy user service j on that edge server ε, and then update LOAD(ε) with LOAD(ε, λ_j).
Taking FIG. 3 as an example, edge computing node ε_1 already carries the two user services {λ_1, λ_2}, so its existing overhead is LOAD(ε_1) = 2(v_{1,1} + v_{1,2}) + w_{1,1} + w_{1,2}; similarly, LOAD(ε_2) = v_{2,3} + w_{2,3} and LOAD(ε_3) = 3(v_{3,4} + v_{3,5} + v_{3,6}) + w_{3,4} + w_{3,5} + w_{3,6}. Now consider deploying λ_7 onto one of ε_1, ε_2, ε_3; tentatively deploying λ_7 onto each ε gives:
LOAD(ε_1 ∪ λ_7) = 3(v_{1,1} + v_{1,2} + v_{1,7}) + w_{1,1} + w_{1,2} + w_{1,7}
LOAD(ε_2 ∪ λ_7) = 2(v_{2,3} + v_{2,7}) + w_{2,3} + w_{2,7}
LOAD(ε_3 ∪ λ_7) = 4(v_{3,4} + v_{3,5} + v_{3,6} + v_{3,7}) + w_{3,4} + w_{3,5} + w_{3,6} + w_{3,7}
The ε with the minimum overhead among these three schemes is then selected for deployment.
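A sketch of one OSP time slice consistent with the example above follows (not part of the patent text; the dictionaries v and w holding the per-node computation and transmission-plus-migration overheads, and the optional initial placement argument, are assumptions following the notation v_{ij} and w_{ij}):

```python
def osp_step(services, edges, v, w, initial=None):
    """Greedy online placement of arriving user services for one time slice.

    services : user service indices j, handled in arbitrary order
    edges    : edge computing node indices i
    v[i][j]  : computation delay overhead of service j on edge node i (lambda_j / c_i)
    w[i][j]  : transmission delay plus migration overhead of service j on edge node i
    initial  : optional dict {i: [j, ...]} of services already carried by each node
    """
    placed = {i: list((initial or {}).get(i, [])) for i in edges}

    def load(i, extra=None):
        # LOAD(eps_i [+ lambda_j]): n co-located services each run n times slower
        svc = placed[i] + ([extra] if extra is not None else [])
        n = len(svc)
        return n * sum(v[i][j] for j in svc) + sum(w[i][j] for j in svc)

    placement = {}
    for j in services:
        best = min(edges, key=lambda i: load(i, j))   # node whose new total load is smallest
        placed[best].append(j)
        placement[j] = best
    return placement
```

For instance, with initial = {1: [1, 2], 2: [3], 3: [4, 5, 6]} and services = [7], the three candidate values evaluated by load() correspond to the three LOAD(ε ∪ λ_7) expressions above, and λ_7 is placed on whichever is smallest.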
The specific embodiments of the present invention have been described in detail above, but the present invention is not limited to the specific details of the above embodiments, and various equivalent changes can be made to the technical solution of the present invention within the scope of the technical concept of the present invention, and all the equivalent changes belong to the protection scope of the present invention.

Claims (6)

1. A service deployment and resource allocation method considering multi-user mobility, applied to an edge computing network environment, characterized in that the method comprises the following steps:
(1) Establishing a mathematical model for the mobile edge computing scenario:
according to the cloud computing node, the user equipment nodes and the edge computing node set ε = {1, 2, …, E} contained in the mobile edge computing scenario, the computing nodes of the model comprise all three kinds of devices; service deployment decisions are made at discrete time slices, so the time of the model is defined as the discrete time slice set {1, 2, …, T}; and the decision space formed by the service deployment and resource allocation problem is analyzed: when time slice t arrives, the service of user u must decide on which computing node it is deployed; and, for each computing node, if more than one user service is deployed on it within time slice t, the CPU computing resource allocation ratios among these services must be considered;
according to the three overheads contained in the mobile edge computing scenario, namely computation delay overhead, transmission delay overhead and service migration overhead, the optimization objective of the model is determined as minimizing the weighted sum of the three overheads, wherein the computation delay overhead refers to the delay from the arrival of a user's service request at a computing node until the computed result is returned; the transmission delay overhead refers to the delay from when the user sends a service request until the request is received by a computing node; and the service migration overhead refers to the overhead generated by migrating the service correspondingly after the user moves;
(2) Solving the established mathematical model to obtain a multi-user service deployment and resource allocation scheme.
2. The service deployment and resource allocation method considering multi-user mobility according to claim 1, wherein the optimization objective of the model is expressed as:
min Σ_{t=1}^{T} Σ_{j∈U} [ w_1·C_j^comp(t) + w_2·C_j^comm(t) + w_3·C_j^mig(t) ]

wherein x_{i,j}^t is a 0/1 indicator variable that indicates whether user service j is deployed on computing node i at time slice t, x_{i,j}^t = 1 representing deployment and x_{i,j}^t = 0 the opposite; y_{i,j}^t represents the CPU computing resource allocation ratio and is a ratio variable between 0 and 1, representing the proportion of computing resources allocated to user service j at computing node i during time slice t; C_j^comp(t) represents the computation delay overhead of user service j at time slice t; C_j^comm(t) represents the transmission delay overhead of user service j at time slice t; C_j^mig(t) represents the migration overhead of user service j at time slice t; w_1, w_2, w_3 are respectively the weights of the three overheads; T is the number of time slices; and U is the user set.
3. The service deployment and resource allocation method considering multi-user mobility according to claim 2, wherein the C_j^comp(t) is calculated as follows:

C_j^comp(t) = Σ_i x_{i,j}^t · λ_j^t / (y_{i,j}^t · c_i)

wherein λ_j^t represents the computation demand of user service j at time slice t and c_i represents the computing capability provided by computing node i; the C_j^comm(t) is calculated as follows:

C_j^comm(t) = Σ_i x_{i,j}^t · d_{i,j}^t

wherein d_{i,j}^t represents the transmission delay of the service request of user j to computing node i at time slice t; and the C_j^mig(t) is calculated as follows:

C_j^mig(t) = Σ_i Σ_{i'} x_{i,j}^{t-1} · x_{i',j}^t · M_{i,i'}^j

wherein M_{i,i'}^j represents the migration overhead of migrating user service j from computing node i to i' at time slice t.
4. The service deployment and resource allocation method considering multi-user mobility according to claim 2, wherein said step (2) of solving the established mathematical model comprises:
denoting by X the space of all possible assignments of x; any x' ∈ X uniquely corresponds to an optimal solution y*, so a service deployment and computing resource allocation state is defined as s = (x', y*) ∈ S, and the overall overhead of the state is C_{s'→s}, i.e., the additional overhead required to move from the state s' of the previous time slice to the state s of the current time slice, where S refers to the feasible solution set of service deployment and computing resource allocation schemes;
building a graph G = (V, L), where the vertex set V represents the set of states and the edge set L represents the set of total overheads between pairs of states; specifically, the weight of the edge between two adjacent states s_i^{t-1} and s_j^t represents the service migration overhead involved between states i and j, plus the service computation delay and transmission delay overheads at time slice t;
adding artificial nodes S and D on the two sides of the graph G, and solving the shortest path between S and D through the following algorithm:
A. Receiving the input and initializing a dynamic programming state table Φ, which records the mapping from a deployment scheme s_t to its corresponding overhead;
B. Looping continuously from t = 1 to T, performing steps C and D;
C. In each iteration, obtaining all (E+2)^U possible deployment schemes by Cartesian product and denoting them as the set H, where E is the number of edge computing nodes and U is the number of user services;
D. Traversing the deployment schemes in the set H, and updating Φ with the entry having the smallest sum of the overhead of each scheme at the previous time slice t-1 and the current overhead;
E. Returning the deployment scheme with the least overhead among all deployment schemes in Φ.
5. The service deployment and resource allocation method considering multi-user mobility according to claim 2, wherein said step (2) of solving the established mathematical model comprises:
denoting by X the space of all possible assignments of x; any x' ∈ X uniquely corresponds to an optimal solution y*, so a service deployment and computing resource allocation state is defined as s = (x', y*) ∈ S, and the overall overhead of the state is C_{s'→s}, i.e., the additional overhead required to move from the state s' of the previous time slice to the state s of the current time slice, where S refers to the feasible solution set of service deployment and computing resource allocation schemes;
building a graph G = (V, L), where the vertex set V represents the set of states and the edge set L represents the set of total overheads between pairs of states; specifically, the weight of the edge between two adjacent states s_i^{t-1} and s_j^t represents the service migration overhead involved between states i and j, plus the service computation delay and transmission delay overheads at time slice t;
adding artificial nodes S and D on the two sides of the graph G, and solving the shortest path between S and D through the following algorithm:
a. Receiving the input and initializing Φ, which records the mapping from a deployment scheme s_t to its corresponding overhead;
b. Looping continuously from t = 1 to T, performing steps c and d;
c. In each iteration, obtaining all (E+2)^U possible deployment schemes and denoting them as the set H, where E is the number of edge computing nodes and U is the number of user services;
d. Obtaining κ schemes by uniform sampling from the deployment schemes in the set H, and updating Φ with the entry having the smallest sum of the overhead of each sampled scheme at the previous time slice t-1 and the overhead of the current sampled scheme;
e. Returning the deployment scheme with the least overhead among all deployment schemes in Φ.
6. The service deployment and resource allocation method considering multi-user mobility according to claim 2, wherein said step (2) of solving the established mathematical model comprises:
denoting the service set deployed on edge computing node i as Λ_i = SUBSET({λ_1, λ_2, …, λ_n}), where λ_n represents the n-th service; using w_{ij} to represent the transmission delay overhead plus the service migration overhead of deploying user service j onto edge computing node i, and using v_{ij} = λ_j / c_i to represent the computation delay overhead of deploying user service j onto edge computing node i, where c_i is the computing capability of edge computing node i;
deploying {λ_1, λ_2, …, λ_n} in arbitrary order, tentatively deploying λ_j onto every edge computing node ε; for the i-th edge computing node ε_i, defining LOAD(ε_i) to denote the sum of the overheads of the user services already carried by that node, and defining LOAD(ε_i, λ_j) to denote the total overhead generated by deploying one more user service j on ε_i in addition to the existing services;
executing the following update algorithm and selecting the ε with the minimum final overhead increase for deployment:
1) For each edge computing node ε_i, setting LOAD(ε_i) = 0;
2) When time slice t arrives, performing the following steps:
i. For each user service j, calculating LOAD(ε_i, λ_j) and selecting the edge computing node ε with the minimum overhead;
ii. Deploying user service j on that edge computing node ε, and then updating LOAD(ε) with LOAD(ε, λ_j).
CN202011038113.2A 2020-09-28 2020-09-28 Service deployment and resource allocation method considering multi-user mobility Active CN112148492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011038113.2A CN112148492B (en) 2020-09-28 2020-09-28 Service deployment and resource allocation method considering multi-user mobility

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011038113.2A CN112148492B (en) 2020-09-28 2020-09-28 Service deployment and resource allocation method considering multi-user mobility

Publications (2)

Publication Number Publication Date
CN112148492A CN112148492A (en) 2020-12-29
CN112148492B true CN112148492B (en) 2023-07-28

Family

ID=73895122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011038113.2A Active CN112148492B (en) 2020-09-28 2020-09-28 Service deployment and resource allocation method considering multi-user mobility

Country Status (1)

Country Link
CN (1) CN112148492B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112822701A (en) * 2020-12-31 2021-05-18 中山大学 Multi-user deep neural network model segmentation and resource allocation optimization method in edge computing scene
CN112511652B (en) * 2021-02-03 2021-04-30 电子科技大学 Cooperative computing task allocation method under edge computing
CN113301151B (en) * 2021-05-24 2023-01-06 南京大学 Low-delay containerized task deployment method and device based on cloud edge cooperation
CN113259472A (en) * 2021-06-08 2021-08-13 江苏电力信息技术有限公司 Edge node resource allocation method for video analysis task
CN114139730B (en) * 2021-06-30 2024-04-19 武汉大学 Dynamic pricing and deployment method for machine learning tasks in edge cloud network
CN113595801B (en) * 2021-08-09 2023-06-30 湘潭大学 Edge cloud network server deployment method based on task traffic and timeliness
CN115834386A (en) * 2022-11-04 2023-03-21 北京沐融信息科技股份有限公司 Intelligent service deployment method, system and terminal for edge computing environment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846704A (en) * 2017-10-26 2018-03-27 北京邮电大学 A kind of resource allocation and base station service arrangement method based on mobile edge calculations
CN109151864A (en) * 2018-09-18 2019-01-04 贵州电网有限责任公司 A kind of migration decision and resource optimal distribution method towards mobile edge calculations super-intensive network
CN109862592A (en) * 2018-12-06 2019-06-07 北京邮电大学 Resource management and dispatching method under a kind of mobile edge calculations environment based on multi-base station cooperative
CN111090522A (en) * 2019-12-13 2020-05-01 南京邮电大学 Scheduling system and decision method for service deployment and migration in mobile edge computing environment
WO2020119648A1 (en) * 2018-12-14 2020-06-18 深圳先进技术研究院 Computing task unloading algorithm based on cost optimization
CN111585916A (en) * 2019-12-26 2020-08-25 国网辽宁省电力有限公司电力科学研究院 LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3457733B1 (en) * 2016-05-28 2022-04-06 Huawei Technologies Co., Ltd. Mobile edge orchestrator and application migration system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846704A (en) * 2017-10-26 2018-03-27 北京邮电大学 A kind of resource allocation and base station service arrangement method based on mobile edge calculations
CN109151864A (en) * 2018-09-18 2019-01-04 贵州电网有限责任公司 A kind of migration decision and resource optimal distribution method towards mobile edge calculations super-intensive network
CN109862592A (en) * 2018-12-06 2019-06-07 北京邮电大学 Resource management and dispatching method under a kind of mobile edge calculations environment based on multi-base station cooperative
WO2020119648A1 (en) * 2018-12-14 2020-06-18 深圳先进技术研究院 Computing task unloading algorithm based on cost optimization
CN111090522A (en) * 2019-12-13 2020-05-01 南京邮电大学 Scheduling system and decision method for service deployment and migration in mobile edge computing environment
CN111585916A (en) * 2019-12-26 2020-08-25 国网辽宁省电力有限公司电力科学研究院 LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Edge Cloud Capacity Allocation for Low Delay Computing on Mobile Devices; Can Wang; 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC); full text *
Task Migration and Task Deployment in Mobile Edge Computing (移动边缘计算中的任务迁移与任务部署); 蔡政; China Masters' Theses Full-text Database, Information Science and Technology Series; Chapter 3 *

Also Published As

Publication number Publication date
CN112148492A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112148492B (en) Service deployment and resource allocation method considering multi-user mobility
CN109951821B (en) Task unloading scheme for minimizing vehicle energy consumption based on mobile edge calculation
Zaman et al. LiMPO: Lightweight mobility prediction and offloading framework using machine learning for mobile edge computing
CN109951873B (en) Task unloading mechanism under asymmetric and uncertain information in fog computing of Internet of things
Chen et al. Budget-constrained edge service provisioning with demand estimation via bandit learning
CN110445866B (en) Task migration and cooperative load balancing method in mobile edge computing environment
Shu et al. Dependency-aware and latency-optimal computation offloading for multi-user edge computing networks
Misra et al. Multiarmed-bandit-based decentralized computation offloading in fog-enabled IoT
CN108600299B (en) Distributed multi-user computing task unloading method and system
CN111988787B (en) Task network access and service placement position selection method and system
CN114205317B (en) SDN and NFV-based service function chain SFC resource allocation method and electronic equipment
CN114595049A (en) Cloud-edge cooperative task scheduling method and device
CN113391824A (en) Computing offload method, electronic device, storage medium, and computer program product
CN109189563A (en) Resource regulating method, calculates equipment and storage medium at device
Asghari et al. Server placement in mobile cloud computing: A comprehensive survey for edge computing, fog computing and cloudlet
CN116233928A (en) Unloading decision and resource allocation method based on general sense calculation integration
Badri et al. A sample average approximation-based parallel algorithm for application placement in edge computing systems
CN113010317B (en) Combined service deployment and task offloading method and device, computer equipment and medium
CN114007231A (en) Heterogeneous unmanned aerial vehicle data unloading method and device, electronic equipment and storage medium
Li et al. Efficient data offloading using Markovian decision on state reward action in edge computing
CN113190342A (en) Method and system architecture for multi-application fine-grained unloading of cloud-edge cooperative network
KR102056894B1 (en) Dynamic resource orchestration for fog-enabled industrial internet of things networks
Khanh et al. Fuzzy‐Based Mobile Edge Orchestrators in Heterogeneous IoT Environments: An Online Workload Balancing Approach
Cao et al. Performance and stability of application placement in mobile edge computing system
CN112948114B (en) Edge computing method and edge computing platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant