CN110580199A - service migration method based on particle swarm in edge computing environment - Google Patents


Info

Publication number
CN110580199A
CN110580199A
Authority
CN
China
Prior art keywords
task
server
equipment
base station
delay
Prior art date
Legal status
Granted
Application number
CN201910871666.7A
Other languages
Chinese (zh)
Other versions
CN110580199B (en)
Inventor
梁靓
蒋鹏
武彦飞
贾云健
陈正川
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201910871666.7A
Publication of CN110580199A
Application granted
Publication of CN110580199B
Status: Active


Classifications

    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F9/505 Allocation of resources to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the load
    • G06F9/5072 Grid computing
    • G06F9/5088 Techniques for rebalancing the load in a distributed system involving task migration
    • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. particle swarm optimisation [PSO]
    • G06F2009/4557 Distribution of virtual machine instances; migration and load balancing
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a particle swarm-based service migration method in an edge computing environment, belonging to the technical field of mobile communication. The method comprises the following steps. S1: server modeling. S2: service request reporting: each device sends its state information to the local network controller. S3: finding the devices whose delay does not meet the requirement: the local network controller estimates each device's delay from the received device state information and the network information collected by the state collection module, and judges whether the delay requirement of the corresponding task type is met. S4: making a migration decision: a server is selected for each device whose requirement cannot be met, and energy consumption and delay are weighed comprehensively, together with factors such as task size, task type, link stability and server state, to make the migration decision with the maximum benefit. The invention can trigger service migration in real time when device performance does not reach the standard, and maximizes the benefit of service migration while meeting the task requirements of each device.

Description

Service migration method based on particle swarm in an edge computing environment
Technical Field
The invention belongs to the technical field of mobile communication, and relates to a particle swarm-based service migration method in an edge computing environment.
Background
Mobile cloud computing (MCC), the integration of cloud computing and mobile computing, has found widespread use over the last few years. It relies heavily on the centralization of computing and data resources, so that these resources can be accessed by distributed end users on demand; that is, the data processing, computation and even storage of applications need not be performed on mobile devices with limited processing power, which greatly reduces the hardware requirements that applications place on mobile devices. However, since cloud services are provided by large centralized data centers far from the user, users must tolerate long delays caused by the connection to the remote cloud data center; and since the data interaction generated by all applications passes through the core network, congestion arises easily during network peak periods, placing great stress on the core network. MCC thus suffers not only from high latency and congestion, but also from security holes, low coverage and delayed data transmission. For 5G these challenges may be even harder to solve, and the main goal of mobile edge computing (MEC) is to address the challenges encountered by MCC systems. MEC provides services to users by deploying part of the resources of the mobile-cloud data center (e.g., storage and processing capability) to the edge of the radio access network (RAN), i.e., close to the users. Data processing requests generated by mobile applications therefore only need to be handled by the MEC server at the edge of the local network, which returns the result, instead of traversing the core network and the data center. This not only greatly relieves the pressure on the core network, but also significantly reduces the network delay of applications.
With the development of MEC, it has become a key technology for realizing the vision of the Internet of Things. An important issue in MEC is service migration caused by user mobility. The coverage of a single edge server is limited, and the mobility of user terminals (e.g., smartphones and smart vehicles) can cause significant degradation of network performance and even disruption of ongoing edge services; the user then needs to migrate the service to a neighbouring server in order to maintain low-latency access. Since services are largely carried by virtual machines, the virtual machine hosting the service can be migrated to the vicinity of the user's current location using live virtual machine migration, so as to keep the user's service delay low.
The prior art has the following defects:
Most existing service migration strategies are not comprehensive: they make decisions mainly from the network side using information such as load and user requests, ignoring the influence of user mobility; or they optimize only a single performance index instead of considering several performance indices jointly; or they analyze device mobility with a one-dimensional mobility model that cannot truly reflect device mobility in real scenarios; or they perform service migration only once per fixed time period instead of triggering migration at any time, so the migration efficiency is too low.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a particle swarm-based service migration method in an edge computing environment, which can trigger service migration in real time when the performance of a device does not meet the standard, and maximize the benefit of service migration while meeting the task requirements of each device.
In order to achieve this purpose, the invention provides the following technical scheme:
The service migration method based on the particle swarm in the edge computing environment comprises the following steps:
S1: server modeling;
S2: service request reporting: each device sends its state information to the local network controller;
S3: finding the devices whose delay does not meet the requirement: the local network controller estimates each device's delay from the received device state information and the network information collected by the state collection module, and judges whether the delay requirement of the corresponding task type is met;
S4: making a migration decision: selecting a server for each device whose requirement cannot be met, weighing energy consumption and delay comprehensively together with factors such as task size, task type, link stability and server state, and making the migration decision with the maximum benefit while ensuring the performance requirements of the device's task.
Further, step S1 specifically comprises:
For a uRLLC task, first compute the minimum processing capacity $V_{need}$ that must be allocated to meet the delay threshold condition. Since this type of service has a strict delay requirement, when the server has enough remaining processing capacity $V_{remain}$, allocating only the minimum $V_{need}$ to the task leaves headroom unused, while allocating all of the remaining capacity would clearly have a great influence on subsequent tasks. The capacity allocated to the task is therefore set dynamically, in combination with the processing capacity about to be released in the server: the larger the remaining capacity and the capacity to be released are, the larger the allocated capacity is. In some cases the delay of the other parts of the task may be so large that $V_{need}$ exceeds the server's current remaining capacity; if the server waits until enough capacity is released before processing, the waiting time grows, which increases $V_{need}$, which in turn requires waiting even longer for capacity to be released, so the task barely meets its threshold while the performance of subsequent tasks is seriously degraded. For this reason a minimum capacity threshold $V_{umin}$ is set for uRLLC tasks: when the remaining capacity is greater than $V_{umin}$ but less than $V_{need}$, the capacity allocated to the task is $V_{umin}$.
In the dynamic-allocation rule, $V_i^{release}$ denotes the processing capacity released in the $i$-th subsequent time slot and $\chi_i$ is the weight of that slot; the larger $i$ is, the smaller $\chi_i$ is.
$V_{need}$ is proportional to the amount of computation $C_i$ required by the task of device $i$:
$$V_{need} = \frac{C_i}{D_u - t_i}$$
where $D_u$ is the delay threshold of the uRLLC task and $t_i$ is the sum of the delays of device $i$'s task other than the processing delay.
For an eMBB task, the delay requirement is lower than for a uRLLC task, so the allocated capacity is usually lower than for a uRLLC task; and when the server load is low, in order to preserve capacity for subsequent tasks as much as possible, the capacity allocated to an eMBB task has an upper limit.
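The uRLLC branch of the allocation rule above can be sketched as follows. This is a simplified sketch: the full rule also weighs the capacity about to be released (the $\chi_i$-weighted sum), which is omitted here, and the function name is illustrative, not from the patent.

```python
def allocate_urllc(v_need: float, v_remain: float, v_umin: float) -> float:
    """Capacity granted to a uRLLC task (simplified sketch).

    v_need   - minimum capacity that still meets the delay threshold
    v_remain - server's currently unallocated capacity
    v_umin   - fixed lower bound reserved for uRLLC tasks
    """
    if v_remain >= v_need:
        return v_need   # enough headroom: grant the delay-meeting minimum
    if v_remain >= v_umin:
        return v_umin   # between the floor and v_need: grant the floor
    return 0.0          # below the floor: candidate for migration in S4
```

A return of 0.0 marks the task as unservable locally, which is exactly the situation step S4 handles by migrating the service.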
Further, in step S2, the device status information includes the size and type of the task generated by the device, the location, the transmission power, the bandwidth, and the server selection.
Further, in step S3, the specific content is as follows:
The local network controller calculates the data transmission delay from device $i$ to base station $j$ from the received device information and the information collected by the state collection module as:

$$t_{i,j}^{trans} = \frac{D_i}{v_{i,j}}$$

where $v_{i,j}$ is the data transmission speed of device $i$ and $D_i$ is the task size of device $i$. $v_{i,j}$ is obtained from

$$v_{i,j} = B \log_2\!\left(1 + \frac{P_{i,j}}{N_0}\right), \qquad P_{i,j} = \frac{P_i}{L_{i,j}}$$

where $B$ is the channel bandwidth, $P_{i,j}$ is the power of device $i$ received by base station $j$, $P_i$ is the power at which device $i$ transmits, $L_{i,j}$ is the path loss between device $i$ and server $j$, and $N_0$ is the noise power.
The relay transmission delay of device $i$'s task from relay base station $j$ to target base station $j'$ is:

$$t_{i,j,j'}^{relay} = \frac{D_i}{v_{j,j'}}$$

where $v_{j,j'}$ is the speed at which the task is transmitted from base station $j$ to base station $j'$.
The processing time required by device $i$'s task, i.e., the processing delay, is:

$$t_i^{proc} = \frac{C_i}{V_i}$$

where $C_i$ is the computation amount of the task and $V_i$ is the processing capacity allocated to it. The service migration delay of device $i$ is

$$t_i^{mig} = \frac{M_i}{v_{j,j'}}$$

where $M_i$ is the size of the virtual machine in which the service of device $i$ resides. The larger the data amount of a task, the larger the data amount contained in the virtual machine; that is, the size of the virtual machine is proportional to the data amount of the service's tasks already processed:

$$M_i \propto D_i^{done}$$

where $D_i^{done}$ is the data amount of the tasks already processed by the virtual machine in which the service of device $i$ resides.
The queuing delay is calculated as follows:
① The processing times required by the $n$ tasks in the queue are known, as are the remaining processing times of the $c_{vm}$ tasks currently being served in the server.
② Create an array $b$ holding the remaining processing times of the $c_{vm}$ tasks in service, and sort the data in $b$ in descending order. The $c_{vm}$-th entry of the sorted array $b$ is the queuing delay of the service of the 1st device in the queue.
③ Add the processing time of the 1st device in the queue to its queuing delay, insert the sum into array $b$, and sort the updated array in descending order again. The $c_{vm}$-th entry of the sorted array is the queuing delay of the service of the 2nd device in the queue.
④ By analogy, the queuing delay of the $n$-th task in the queue is obtained.
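The queuing steps above translate directly into code: the array $b$ starts with the busy times of the $c_{vm}$ tasks in service, and after each queued task its finish time is inserted and the array re-sorted, so the $c_{vm}$-th entry of the descending array is always the next task's queuing delay.

```python
def queuing_delays(busy_times, task_times, c_vm):
    """busy_times: remaining processing times of the c_vm tasks in service.
    task_times: processing time of each of the n queued tasks, in arrival order.
    Returns the queuing delay of each queued task."""
    b = sorted(busy_times, reverse=True)   # descending order, as in step 2
    delays = []
    for t in task_times:
        wait = b[c_vm - 1]                 # c_vm-th entry = earliest free VM
        delays.append(wait)
        b.append(wait + t)                 # step 3: finish time of this task
        b.sort(reverse=True)
    return delays                          # step 4: by analogy for all n tasks
```

For instance, with two VMs busy for 2 and 4 time units and queued tasks needing 1 and 10 units, the queuing delays are 2 and 3: the second task starts as soon as the first finishes at time 3.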
Further, in step S4, the specific content is as follows:
The connectivity between user device $i$ and base station $j$ changes as the device moves. Connectivity between a device and a base station is expressed by the outage probability, which can be estimated as a function of the distance between them. The probability density function of the signal-to-noise ratio of the signal received by the base station follows a log-normal distribution:

$$f(\gamma) = \frac{1}{\sqrt{2\pi}\,\sigma\,\gamma}\exp\!\left(-\frac{(\ln\gamma-\mu)^2}{2\sigma^2}\right)$$

where $\sigma$ is the standard deviation of the log-normal shadowing and $\mu$ is the mean of $\ln\gamma$. The link is interrupted when the signal-to-noise ratio received by the server falls below a threshold $\Gamma$, so the outage probability is the probability that the received SNR falls below $\Gamma$, and the link stability between device $i$ and base station $j$ is expressed as

$$P_{out,i,j} = \Pr(\gamma_{i,j} < \Gamma) = \int_0^{\Gamma} f(\gamma)\,d\gamma$$

A selectable server for device $i$ must satisfy:

$$P_{i,j} \ge P_{min}, \qquad P_{out,i,j} \le P_{outmax}$$

where $P_{min}$ is the signal reception threshold of the base station and $P_{outmax}$ is the outage probability threshold.
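With log-normal shadowing, $\ln(\mathrm{SNR})$ is normally distributed, so the outage probability is a Gaussian CDF evaluated at $\ln\Gamma$. The sketch below assumes that parameterization; the function and argument names are illustrative.

```python
import math

def outage_probability(mu: float, sigma: float, ln_gamma: float) -> float:
    """P(ln SNR < ln Gamma) for ln SNR ~ Normal(mu, sigma^2), via erfc."""
    return 0.5 * math.erfc((mu - ln_gamma) / (sigma * math.sqrt(2.0)))

def server_selectable(rx_power: float, p_min: float,
                      p_out: float, p_out_max: float) -> bool:
    # A candidate server must hear the device strongly enough AND keep the
    # link's outage probability under the threshold.
    return rx_power >= p_min and p_out <= p_out_max
```

When the mean of $\ln\mathrm{SNR}$ sits exactly at the threshold, the outage probability is 0.5; raising the mean drives it down, which is the intuition behind estimating connectivity from distance.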
Define two integer variables $S_i, X_i \in N_{server}$, denoting respectively the base station hosting the target server selected by the device and the relay base station selected by the device, where $N_{server}$ is the set of servers (base stations). When $X_i = S_i$, the device has no relay base station. Define a binary variable $H_i$ indicating whether a relay is used: $H_i = 0$ when $X_i = S_i$, and $H_i = 1$ otherwise.
The total delay of device $i$'s task is then:

$$T_i = t_{i,X_i}^{trans} + H_i\, t_{X_i,S_i}^{relay} + t_i^{queue} + t_i^{proc} + t_i^{mig}$$

The total energy consumption is obtained analogously: the energy of each part is its delay multiplied by the corresponding power. Let $T_i^{before}, T_i^{after}$ and $E_i^{before}, E_i^{after}$ denote the delay and energy consumption of device $i$'s task before and after migration. Delay and energy consumption are normalized so that they can be compared at the same magnitude, and the benefit after migration is

$$A = \omega_t\, \Delta\tilde{T}_i + \omega_e\, \Delta\tilde{E}_i$$

where $\Delta\tilde{T}_i$ and $\Delta\tilde{E}_i$ are the normalized reductions in delay and energy consumption, and $\omega_t, \omega_e$ are their weights.
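The benefit computation can be sketched as below. The normalization constants `t_max` / `e_max` and the default weights are assumed parameters for illustration; the patent fixes the weighted-sum form but not these particular values.

```python
def migration_benefit(t_before, t_after, e_before, e_after,
                      t_max, e_max, w_t=0.5, w_e=0.5):
    """Weighted sum of normalized delay and energy reductions (benefit A)."""
    dt = (t_before - t_after) / t_max   # normalized delay reduction
    de = (e_before - e_after) / e_max   # normalized energy reduction
    return w_t * dt + w_e * de          # migrate only if this is positive
```

Setting $w_t$ high favours delay-sensitive (uRLLC-like) traffic; setting $w_e$ high favours battery-constrained devices, which is how different requirements map to different migration decisions.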
An improved quantum particle swarm algorithm is used to find the migration decision that maximizes $A$.
The larger the contraction-expansion coefficient $\beta$ in the quantum particle swarm algorithm, the more favourable the search of the global region and the faster the algorithm converges, but a high-precision solution is harder to obtain; the smaller $\beta$, the more favourable the search of a local region and the easier it is to obtain a high-precision solution, but convergence slows down. Instead of decreasing $\beta$ linearly with the iteration number as in the traditional quantum particle swarm algorithm, in the improved algorithm $\beta$ decreases with the iteration number along a cosine curve:

$$\beta(k) = \beta_0 + (\beta_m - \beta_0)\cos\!\left(\frac{\pi k}{2 k_{max}}\right)$$

where $\beta_m$ and $\beta_0$ are the upper and lower bounds of $\beta$, $k$ is the current iteration number, and $k_{max}$ is the maximum number of iterations.
Therefore, in the initial stage of the algorithm, since the difference between the global extremum and each particle's historical individual extremum is large, searching at high speed for a period of time allows the swarm to approach the global extremum quickly; in the later stage of evolution, after the individual historical extrema are close to the global extremum, a lower speed for a period of time strengthens the local search capability and thus improves the accuracy of the algorithm.
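The cosine schedule described above can be sketched as follows; the default bounds are illustrative, and the exact curve is a reconstruction consistent with the stated endpoints (fast global search early, fine local search late).

```python
import math

def beta(k: int, k_max: int, beta_m: float = 1.0, beta_0: float = 0.5) -> float:
    """Contraction-expansion coefficient falling from beta_m to beta_0
    along a cosine curve over k_max iterations."""
    return beta_0 + (beta_m - beta_0) * math.cos(math.pi * k / (2.0 * k_max))
```

At $k=0$ the coefficient equals $\beta_m$ (widest global search) and at $k=k_{max}$ it equals $\beta_0$; unlike a linear decay, the cosine keeps $\beta$ near $\beta_m$ longer before dropping.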
The invention changes the mean best position of the particles in the traditional quantum particle swarm algorithm from the plain average of each particle's best position to a randomly weighted average of the best positions, strengthening the randomness of the algorithm and its ability to escape local extreme points.
Because the Logistic chaotic map has the characteristics of ergodicity, regularity, randomness and sensitivity to initial values, the particle swarm is initialized with the following Logistic chaotic map:

$$s_{i,n+1} = 4 s_{i,n}(1 - s_{i,n}), \quad i \in I_m$$
$$x_{i,n+1} = 4 x_{i,n}(1 - x_{i,n}), \quad i \in I_m$$
$$S_{i,n+1,1} = \lfloor s_{i,n+1} N_s \rfloor$$
$$X_{i,n+1,1} = \lfloor x_{i,n+1} N_s \rfloor$$

The fixed points 0, 0.25, 0.5, 0.75 and 1 cannot be taken as initial values. $S_{i,n+1,1}$ and $X_{i,n+1,1}$ denote the $S$ and $X$ parameters of the $i$-th component of the $(n+1)$-th particle in the first iteration. The algorithm of the present invention generates the next generation of particle positions as follows:
The attractor of the $n$-th particle in the $k$-th iteration exists to guarantee convergence of the algorithm, and each particle converges to its point attractor, whose $i$-th components for the two decision variables are

$$p_{s,i,n,k} = \varphi\, p_{s,i,n} + (1-\varphi)\, P_{sg,i}, \qquad p_{x,i,n,k} = \varphi\, p_{x,i,n} + (1-\varphi)\, P_{xg,i}$$

where $p_{s,i,n}, p_{x,i,n}$ are the $i$-th components of the historical best position of the $n$-th particle for the two decision variables, and $P_{sg,i}, P_{xg,i}$ are the $i$-th components of the global best position of the swarm so far. $\varphi$ is a random factor in $(0,1)$, $u$ is a random number in $(0,1)$, and $L$ determines the probability of the particle appearing at a position relative to the attractor; its $i$-th components for the two decision variables of the $n$-th particle in the $k$-th iteration are

$$L_{s,i,n,k+1} = 2\beta(k)\,|mbest_{s,i} - S_{i,n,k}|$$
$$L_{x,i,n,k+1} = 2\beta(k)\,|mbest_{x,i} - X_{i,n,k}|$$

where $mbest_{s,i}, mbest_{x,i}$ are the $i$-th components, for the two decision variables, of the randomly weighted mean best position of the current particles, and $\kappa_n$ is the normalized random number of the $n$-th particle, obtained by each particle generating a random number in $(0,1)$ followed by normalization; it represents the contribution of each particle's current best position to $mbest$.

The final particle position equations are:

$$S_{i,n,k+1} = \left\lfloor p_{s,i,n,k} + \tfrac{L_{s,i,n,k}}{2}\ln\tfrac{1}{u} \right\rfloor \bmod N_s, \quad u(k) \ge 0.5$$
$$S_{i,n,k+1} = \left\lfloor p_{s,i,n,k} - \tfrac{L_{s,i,n,k}}{2}\ln\tfrac{1}{u} \right\rfloor \bmod N_s, \quad u(k) < 0.5$$
$$X_{i,n,k+1} = \left\lfloor p_{x,i,n,k} + \tfrac{L_{x,i,n,k}}{2}\ln\tfrac{1}{u} \right\rfloor \bmod N_s, \quad u(k) \ge 0.5$$
$$X_{i,n,k+1} = \left\lfloor p_{x,i,n,k} - \tfrac{L_{x,i,n,k}}{2}\ln\tfrac{1}{u} \right\rfloor \bmod N_s, \quad u(k) < 0.5$$
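The position update can be sketched for one integer decision component as follows. It follows the standard QPSO form (attractor between the personal and global best, step length $L = 2\beta\,|mbest - x|$, sign chosen by whether $u \ge 0.5$, result wrapped modulo $N_s$); function and argument names are illustrative.

```python
import math
import random

def qpso_step(x: int, pbest: int, gbest: int, mbest: float,
              beta_k: float, n_servers: int, rng: random.Random) -> int:
    """One QPSO update of a single integer decision component."""
    phi = rng.random()                     # random factor in (0, 1)
    p = phi * pbest + (1.0 - phi) * gbest  # point attractor of the particle
    u = rng.random() or 1e-12              # random in (0, 1); guard u == 0
    step = beta_k * abs(mbest - x) * math.log(1.0 / u)  # = (L / 2) * ln(1/u)
    new_x = p + step if rng.random() >= 0.5 else p - step
    return int(new_x) % n_servers          # truncate and wrap into {0,...,Ns-1}
```

Note that when the particle's best, the global best and the mean best all coincide, the step length is zero and the particle stays put, which is the convergence behaviour the attractor is meant to guarantee.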
The specific steps of solving with the quantum particle swarm algorithm are:
1) Randomly take two values between 0 and 1 as the $S$ and $X$ parameters of the first particle of the initial generation;
2) Iterate the $S$, $X$ parameters of the remaining initial-generation particles with the Logistic map;
3) Judge whether each particle satisfies the constraints; if not, iterate again; if so, compute the fitness of each particle and find each particle's best position and the best position among all particles so far;
4) If the termination condition is met, go to step 6); otherwise go to step 5);
5) Iterate the next generation of particles with the quantum particle swarm update and go to step 3);
6) End.
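Steps 1) to 6) can be sketched as a minimal driver, specialised to one integer decision variable (the target-server index). `logistic_init` implements the chaotic initialization of steps 1)-2); the inner candidate draw is a simplified stand-in for the full quantum particle swarm update, and `fitness` / `constraint` stand in for the benefit $A$ and the feasibility conditions of step S4. All names are illustrative.

```python
import random

def logistic_init(n_particles: int, n_servers: int, s0: float = 0.123):
    """Steps 1)-2): iterate s <- 4 s (1 - s) and map (0,1) onto {0..Ns-1}.
    s0 must avoid the fixed points 0, 0.25, 0.5, 0.75, 1."""
    s, positions = s0, []
    for _ in range(n_particles):
        s = 4.0 * s * (1.0 - s)              # Logistic chaotic iteration
        positions.append(int(s * n_servers))  # map onto server indices
    return positions

def solve(fitness, constraint, n_particles=20, n_servers=5, k_max=50, seed=0):
    rng = random.Random(seed)
    swarm = logistic_init(n_particles, n_servers)
    # Step 3): enforce constraints; fall back to server 0 (assumed feasible).
    pbest = [x if constraint(x) else 0 for x in swarm]
    gbest = max(pbest, key=fitness)
    for _ in range(k_max):                    # steps 3)-5)
        for i in range(n_particles):
            cand = rng.randrange(n_servers)   # stand-in for the QPSO update
            if constraint(cand) and fitness(cand) > fitness(pbest[i]):
                pbest[i] = cand
        gbest = max(pbest, key=fitness)
    return gbest                              # step 6): best decision found
```

With a toy fitness peaking at server 3 and no constraints, the driver converges to 3, illustrating the loop structure rather than the full update rule.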
The invention has the following beneficial effects:
(1) The invention supports a real-time service migration system model based on edge computing, providing an effective platform for the system to collect real-time information about devices and the network. Service migration is triggered immediately once device performance fails to reach the standard, which meets the requirements of real device mobility and the immediacy of service migration; compared with other service migration models, it has stronger real-time performance and practicability.
(2) On the basis of a fixed minimum processing capacity that the server allocates to each device task, the invention dynamically allocates processing capacity by combining the task type, the server's remaining capacity, the capacity about to be released, and other factors. Compared with other server capacity-allocation methods, this improves server efficiency and task processing speed.
(3) The invention takes the change of time delay and energy consumption before and after migration as the benefit, and obtains the migration decision of maximizing the benefit under different requirements by different weight settings of the time delay and the energy consumption.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a diagram of a service migration system model based on edge computing according to the present invention;
FIG. 2 is a diagram illustrating steps performed in the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Fig. 1 is a model diagram of the service migration system of the present invention, which specifically includes:
Equipment: the device generates tasks in a Poisson stream, and a plurality of continuous tasks form a service and are processed in the same virtual machine.
Controlling the base station: the coverage is large, mainly used for transmitting control signaling.
A server: the system is deployed around the small base station, and has certain computing resources and storage resources through communication between the small base station and the outside.
The small base station: the coverage is small, mainly used for transmitting data, and will periodically report the information of the connected server to the control base station.
The local network controller: the system is mainly used for managing equipment mobility and controlling a base station, and comprises 4 modules including a state collection module, a delay prediction module, a server selection module and a service migration module.
When the base station hosting the target server selected by a device cannot communicate with the device directly, the device selects one of the base stations it can reach directly as a relay to communicate with the target base station. On this basis, the invention provides a delay-aware method: when a user needs a service, the delay prediction module predicts the service delay from the information collected by the state collection module; when the user's delay does not meet the requirement, the server selection module in the local network controller selects, based on the state collection module's information, the server that yields the maximum delay and energy-consumption benefit, and finally the service migration module performs the migration.
The server allocates a virtual machine to each service of a device: when the first task of a service arrives at the server, the server allocates a virtual machine to execute it, and subsequent tasks of the same service are processed by the same virtual machine. After the service finishes, the virtual machine is destroyed so that it no longer occupies server resources.
Fig. 2 is a diagram of implementation steps of a service migration scheme based on latency and energy consumption according to the present invention, and the specific steps are as follows:
S1: server modeling;
S2: service request reporting: each device sends its state information to the local network controller;
S3: finding the devices whose delay does not meet the requirement: the local network controller estimates each device's delay from the received device state information and the network information collected by the state collection module, and judges whether the delay requirement of the corresponding task type is met;
S4: making a migration decision: selecting a server for each device whose requirement cannot be met, weighing energy consumption and delay comprehensively together with factors such as task size, task type, link stability and server state, and making the migration decision with the maximum benefit while ensuring the performance requirements of the device's task;
s1 provides a method for dynamically allocating server processing capacity, which includes the following steps:
For a uRLLC task, first calculate the minimum processing capacity Vneed that must be allocated to satisfy the delay-threshold condition. Since this type of service has a strict delay requirement, when the server has ample remaining processing capacity Vremain it should not allocate only the minimum Vneed; yet allocating all of the remaining capacity to the task would clearly have a great impact on other pending tasks. The capacity allocated to the task is therefore set dynamically in combination with the capacity about to be released in the server: the larger the remaining capacity and the capacity to be released, the larger the allocation. In some cases a large delay in the other parts of the task makes Vneed so large that the server's current remaining capacity cannot satisfy it; if the server waited until enough capacity were released later, the waiting time would grow, Vneed would grow with it, and even more released capacity would have to be awaited, so the task would barely meet its threshold while seriously degrading the performance of subsequent tasks. For this purpose a minimum processing-capacity threshold Vumin is set for uRLLC tasks: when the remaining capacity is greater than Vumin but less than Vneed, the capacity allocated to the task is Vumin, i.e.
where the first quantity denotes the processing capacity released in the i-th subsequent time slot and χi the weight of that slot; the larger i is, the smaller χi;
where the quantity shown denotes the amount of computation required by the task of device i, to which Vneed is proportional, i.e.
Du is the delay threshold of the uRLLC task, and ti is the sum of the delays of device i's task other than the processing delay;
For an eMBB task, the delay requirement is looser than for a uRLLC task, so most of the time the allocated processing capacity is lower than for a uRLLC task; when the server load is low, in order to reserve processing capacity for subsequent tasks as far as possible, the capacity allocated to the task is capped by an upper limit, i.e.
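The allocation rules above can be sketched in code. The blend of remaining and soon-released capacity, the clamp to Vremain, and every name below are assumptions, since the patent gives the exact formulas only as images:

```python
def allocate_urllc(v_need, v_remain, v_to_release, v_umin, chi):
    """Hypothetical sketch of the dynamic uRLLC allocation rule.

    v_to_release: capacities released in subsequent time slots;
    chi: the decaying per-slot weights described in the patent.
    """
    weighted_release = sum(c * v for c, v in zip(chi, v_to_release))
    if v_remain >= v_need:
        # More headroom (remaining + soon-released capacity) => larger
        # allocation, but never more than what the server actually has.
        return min(v_remain, v_need + weighted_release)
    if v_remain > v_umin:
        return v_umin          # between Vumin and Vneed: allocate Vumin
    return 0.0                 # cannot serve the task right now


def allocate_embb(v_fair, v_emax):
    # eMBB: looser delay requirement; the allocation is capped by an
    # upper limit so capacity stays available for subsequent tasks.
    return min(v_fair, v_emax)
```
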
S3 provides modeling of the delay, which is as follows:
The local network controller calculates the data transmission delay from the device i to the base station j according to the received device information and the information collected by the state collection module as follows:
whereinIndicating the data transmission speed of the device i,indicating the task size of device i.Can be obtained from the following formula
Wherein
B is the bandwidth of the channel and,is the power of device i received by base station j,Is the power at which device i transmits a signal, Li,jis the path loss between device i and server j, N0Is the noise power;
The relay transmission delay of the task of the device i from the relay base station j to the target base station j' is as follows:
where the quantity shown denotes the rate at which the task is transmitted from base station j to base station j';
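As a sketch, the Shannon-rate transmission delay and the relay-hop delay described above might be computed as follows (all function and parameter names are hypothetical):

```python
import math


def transmission_delay(task_bits, bandwidth_hz, rx_power_w, noise_power_w):
    # Shannon-capacity rate r = B * log2(1 + P_rx / N0); delay = size / rate.
    rate = bandwidth_hz * math.log2(1.0 + rx_power_w / noise_power_w)
    return task_bits / rate


def relay_delay(task_bits, backhaul_rate_bps):
    # Relay hop j -> j': task size divided by the inter-base-station rate.
    return task_bits / backhaul_rate_bps
```
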
The processing time required by the task of the device i, that is, the processing delay time, is:
The service migration delay of device i is
where the quantity shown denotes the size of the virtual machine in which device i's service resides. The larger the data volume of the task, the more data the virtual machine contains; that is, the size of the virtual machine is proportional to the amount of task data of the service already processed, so
the quantity shown denotes the amount of data of the tasks already processed by the virtual machine in which device i's service resides;
The queuing delay is calculated as follows:
① The processing durations required by the n tasks in the queue and by the cvm tasks currently being processed in the server are known.
② Create an array b from the processing durations of the cvm in-service tasks and sort its entries from large to small. The cvm-th entry of the sorted array b is the queuing delay of the service of the 1st device in the queue.
③ Add the processing duration of the 1st device in the queue to its queuing delay, put the sum into array b, and sort the updated array from large to small. The cvm-th entry of the sorted array b is the queuing delay of the service of the 2nd device in the queue.
④ Continuing in the same way yields the queuing delay of the n-th task in the queue.
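The array procedure above amounts to tracking the earliest moment one of the cvm virtual machines frees up; a min-heap gives that value directly. A minimal sketch (all names assumed):

```python
import heapq


def queuing_delays(busy_remaining, task_times):
    """busy_remaining: remaining processing times of the cvm tasks
    currently in service; task_times: processing times of the n queued
    tasks in queue order. Returns each queued task's queuing delay."""
    heap = list(busy_remaining)
    heapq.heapify(heap)
    delays = []
    for t in task_times:
        wait = heapq.heappop(heap)      # earliest VM release = queuing delay
        delays.append(wait)
        heapq.heappush(heap, wait + t)  # that VM is busy again until wait + t
    return delays
```
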
S4 sets out the constraints on candidate servers and the migration-decision steps, as follows:
The connectivity between user equipment i and base station j changes as the device moves; it is expressed by an outage probability, which can be estimated as a function of the distance between the device and the base station. The probability density function of the signal-to-noise ratio received by the base station follows a log-normal distribution as follows:
where σ is the standard deviation of log-normal shadowing;
When the signal-to-noise ratio received by the server falls below a certain threshold Γ, the link is interrupted; the outage probability is thus the probability that the received SNR falls below Γ, so the link stability between device i and base station j is expressed as follows:
The optional server of device i needs to satisfy the following condition:
where Pmin denotes the signal reception threshold of the base station and Poutmax denotes the outage probability threshold;
Two integer variables Si, Xi ∈ Nserver are defined, denoting respectively the base station hosting the target server selected by the device and the relay base station selected by the device, where Nserver is the set of servers (base stations); when Xi = Si, the device has no relay base station. A binary variable Hi is defined:
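A hedged sketch of the outage test: log-normal shadowing makes the SNR Gaussian in dB, so the probability of falling below the threshold is a Q-function, computable via erfc. All names and the dB parameterization are assumptions:

```python
import math


def outage_probability(mean_snr_db, gamma_db, sigma_db):
    # P(SNR < Γ) for a Gaussian SNR in dB with std sigma_db (log-normal
    # shadowing); expressed through the complementary error function.
    return 0.5 * math.erfc((mean_snr_db - gamma_db) / (sigma_db * math.sqrt(2.0)))


def is_selectable(rx_power_dbm, p_min_dbm, p_out, p_out_max):
    # Candidate-server condition from the patent: received power at or
    # above the base station's threshold, outage probability below the cap.
    return rx_power_dbm >= p_min_dbm and p_out <= p_out_max
```
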
The total delay of device i's task is then:
the total energy consumption is as follows:
The energy consumption of each part is the corresponding delay multiplied by the corresponding power; the quantities shown denote the delay and energy consumption of device i's task before and after migration, respectively. Delay and energy consumption are normalized so that they can be compared at the same magnitude:
The benefit after migration is therefore
A migration decision that maximizes A is then found using an improved particle swarm algorithm;
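A minimal sketch of the normalized benefit A. Normalizing by the pre-migration values and the equal weighting of the delay and energy terms are assumptions, since the patent's formula appears only as an image:

```python
def migration_benefit(t_before, t_after, e_before, e_after, w_t=0.5, w_e=0.5):
    # Normalize delay and energy to their pre-migration values so the two
    # quantities are comparable, then take the weighted reduction as A.
    dt = (t_before - t_after) / t_before
    de = (e_before - e_after) / e_before
    return w_t * dt + w_e * de
```

A positive A means migration reduces the weighted combination of delay and energy; the decision maximizing A is then searched by the particle swarm.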
In the quantum particle swarm algorithm, a larger contraction-expansion coefficient β favours searching the global region: the algorithm then converges faster, but a high-precision solution is harder to obtain; a smaller β favours searching a local region, making a high-precision solution easier to obtain but slowing convergence. Whereas β decreases linearly with the iteration count in the conventional quantum particle swarm algorithm, in the improved algorithm β decreases with the iteration count along a cosine curve, i.e.
where βm and β0 are the upper and lower bounds of β, k is the current iteration number, and kmax is the maximum number of iterations;
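The cosine schedule can be sketched as follows; the bound values βm = 1.0 and β0 = 0.5 are placeholders, not values from the patent:

```python
import math


def beta_schedule(k, k_max, beta_upper=1.0, beta_lower=0.5):
    # Cosine decay from beta_upper at k = 0 to beta_lower at k = k_max,
    # replacing the conventional linear schedule.
    return beta_lower + 0.5 * (beta_upper - beta_lower) * (
        1.0 + math.cos(math.pi * k / k_max))
```
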
Thus in the initial stage of the algorithm, while the gap between the global extremum and a particle's historical individual extremum is large, the search runs at high speed for a period of time so that the global extremum can be approached quickly; in the later stage of evolution, once the particles' historical individual extrema are close to the global extremum, a lower speed is used for a period of time to strengthen the algorithm's local search and thereby improve its accuracy;
The invention also changes the mean individual best position of the current particles in the conventional quantum particle swarm algorithm from the plain average of each particle's best position to a randomly weighted average of those best positions, strengthening the randomness of the algorithm and its ability to escape the pull of local extreme points;
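The randomly weighted mean best position can be sketched as below: each particle draws a random number in (0,1), the draws are normalized to sum to 1 (the κn of the patent), and the personal bests are averaged with those weights. All names are assumed:

```python
import random


def weighted_mbest(pbest_positions):
    # Random weights normalized to sum to 1: each particle's personal best
    # contributes a random share, instead of the uniform mean of plain QPSO.
    w = [random.random() for _ in pbest_positions]
    s = sum(w)
    w = [x / s for x in w]
    dim = len(pbest_positions[0])
    return [sum(w[n] * p[i] for n, p in enumerate(pbest_positions))
            for i in range(dim)]
```
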
Because the Logistic chaotic map has the characteristics of ergodicity, regularity, randomness, sensitivity to initial values and the like, the particle swarm is initialized by adopting the following Logistic chaotic map:
si,n+1=4si,n(1-si,n) i∈Im
xi,n+1=4xi,n(1-xi,n) i∈Im
where the fixed points 0, 0.25, 0.5, 0.75 and 1 cannot be taken as initial values, and Si,n+1,1, Xi,n+1,1 denote the S and X parameters of the i-th component of the (n+1)-th particle of the first iteration. The algorithm of the present invention generates the next generation of particle positions as follows:
where ps,i,n,k and px,i,n,k are the i-th components of the two decision variables of the attractor of the n-th particle at the k-th iteration; the attractor exists to guarantee convergence of the algorithm, each particle converging to a point of its attractor. Ps,i,n and Px,i,n denote the i-th components of the historical best positions of the n-th particle's two decision variables, and Psg,i, Pxg,i the i-th components of the two decision variables at the global best position of the swarm so far. The random factor shown lies in (0,1), u is a random number in (0,1), and L is the magnitude governing the probability of the particle appearing at the corresponding position relative to the attractor; Ls,i,n,k and Lx,i,n,k, the i-th components of L for the n-th particle of the k-th iteration, are
Ls,i,n,k+1=2β(k)|mbests,i-Si,n,k|
Lx,i,n,k+1=2β(k)|mbestx,i-Xi,n,k|
mbests,i and mbestx,i are the i-th components of the two decision variables of the randomly weighted mean best position of the current particles; κn denotes the normalized random number of the n-th particle, obtained by having each particle generate a random number in (0,1) and then normalizing, and represents the contribution of each particle's current best position to mbest.
The final particle position equation is:
Si,n,k+1 = (ps,i,n,k + (Ls,i,n,k+1/2)·ln(1/u)) mod Ns,  u(k) ≥ 0.5
Si,n,k+1 = (ps,i,n,k − (Ls,i,n,k+1/2)·ln(1/u)) mod Ns,  u(k) < 0.5
Xi,n,k+1 = (px,i,n,k + (Lx,i,n,k+1/2)·ln(1/u)) mod Ns,  u(k) ≥ 0.5
Xi,n,k+1 = (px,i,n,k − (Lx,i,n,k+1/2)·ln(1/u)) mod Ns,  u(k) < 0.5
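Assuming the standard QPSO update form (x' = p ± (L/2)·ln(1/u), wrapped into the server index set by a modulo), the chaotic initialization and one coordinate update could be sketched as below; every name is hypothetical:

```python
import math
import random


def logistic_sequence(length, seed=0.37):
    # Logistic map s_{n+1} = 4 s_n (1 - s_n); the fixed points
    # 0, 0.25, 0.5, 0.75, 1 must be avoided as initial values.
    vals, s = [], seed
    for _ in range(length):
        vals.append(s)
        s = 4.0 * s * (1.0 - s)
    return vals


def qpso_update(x, attractor, mbest, beta_k, n_servers):
    # One coordinate update: L = 2*beta(k)*|mbest - x|, then
    # x' = p +/- (L/2)*ln(1/u), reduced mod n_servers to stay a valid index.
    u = random.random()
    big_l = 2.0 * beta_k * abs(mbest - x)
    step = 0.5 * big_l * math.log(1.0 / u)
    x_new = attractor + step if random.random() >= 0.5 else attractor - step
    return int(round(x_new)) % n_servers
```
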
The specific steps of solving with the quantum particle swarm algorithm are as follows:
1) Randomly take two values between 0 and 1 as the S and X parameters of the first particle of the initial generation;
2) Iterate the S, X parameters of the remaining initial-generation particles with the Logistic map;
3) Check whether each particle satisfies the constraints; if not, iterate it again; if so, compute each particle's fitness and record each particle's best position and the best position among all particles so far;
4) If the termination condition is met, go to step 6); otherwise go to step 5);
5) Iterate the next generation of particles with the quantum particle swarm algorithm and go to step 3);
6) End.
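The six steps above can be sketched as a loop skeleton; `fitness`, `feasible`, `init_swarm` and `evolve` are placeholders for the problem-specific pieces of the patent, not its actual implementation:

```python
def qpso_solve(fitness, feasible, init_swarm, evolve, k_max=50):
    # Steps 1)-2): initialize the swarm (e.g. via the Logistic map).
    swarm = list(init_swarm())
    pbest = list(swarm)
    gbest = max(pbest, key=fitness)
    for k in range(k_max):                       # step 4): iteration budget
        # Step 5): evolve each particle with the QPSO update.
        swarm = [evolve(p, pbest[i], gbest, k) for i, p in enumerate(swarm)]
        # Step 3): constraint check, fitness, personal/global bests.
        for i, p in enumerate(swarm):
            if not feasible(p):
                swarm[i] = pbest[i]              # fall back to a feasible point
            elif fitness(p) > fitness(pbest[i]):
                pbest[i] = p
        gbest = max(pbest, key=fitness)
    return gbest                                 # step 6): best decision found
```
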
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (5)

1. The service migration method based on the particle swarm in the edge computing environment is characterized in that: the method comprises the following steps:
S1: modeling by a server;
S2: service request reporting: the equipment sends the state information of the equipment to a local network controller;
S3: finding out the equipment with the delay not meeting the requirement: the local network controller estimates the time delay of the equipment according to the received equipment state information and the network information collected by the state collection module, and judges whether the time delay requirement of the corresponding type of task is met;
S4: making a migration decision: and selecting equipment which cannot meet the requirements for the server, comprehensively considering energy consumption and time delay by combining factors such as task size, task type, link stability and server state, and making a migration decision with the maximum benefit while ensuring the performance index requirements of the equipment task.
2. The particle swarm-based service migration method in the edge computing environment according to claim 1, wherein: the step S1 specifically includes:
For a uRLLC task, first calculate the minimum processing capacity Vneed that must be allocated to satisfy the delay-threshold condition; the capacity allocated to the task is set dynamically in combination with the capacity about to be released in the server, the larger the remaining capacity and the capacity to be released, the larger the allocation; a minimum processing-capacity threshold Vumin is set for uRLLC tasks, and when the remaining capacity is greater than Vumin but less than Vneed, the capacity allocated to the task is Vumin, i.e.
where the first quantity denotes the processing capacity released in the i-th subsequent time slot and χi the weight of that slot; the larger i is, the smaller χi;
where the quantity shown denotes the amount of computation required by the task of device i, to which Vneed is proportional, i.e.
Du is the delay threshold of the uRLLC task, and ti is the sum of the delays of device i's task other than the processing delay;
For an eMBB task, the delay requirement is looser than for a uRLLC task and the allocated processing capacity is lower than for a uRLLC task; when the server load is low, in order to reserve processing capacity for subsequent tasks as far as possible, the capacity allocated to the task is capped by an upper limit, i.e.
3. The particle swarm-based service migration method in the edge computing environment according to claim 1, wherein: in step S2, the device status information includes the size and type of the task generated by the device, its location, transmission power, bandwidth, and server selection.
4. The particle swarm-based service migration method in the edge computing environment according to claim 1, wherein step S3 specifically includes the following:
The local network controller calculates the data transmission delay from the device i to the base station j according to the received device information and the information collected by the state collection module as follows:
where the two quantities denote the data transmission rate of device i and the size of device i's task, respectively; the rate is obtained from
where
B is the channel bandwidth; the first quantity shown is the power of device i received by base station j and the second is the power at which device i transmits; Li,j is the path loss between device i and server j, and N0 is the noise power;
the relay transmission delay of the task of the device i from the relay base station j to the target base station j' is as follows:
where the quantity shown denotes the rate at which the task is transmitted from base station j to base station j';
The processing time required by the task of the device i, that is, the processing delay time, is:
The service migration delay of device i is
where the quantity shown denotes the size of the virtual machine in which device i's service resides; the larger the data volume of the task, the more data the virtual machine contains, i.e. the size of the virtual machine is proportional to the amount of task data of the service already processed, i.e.
where the quantity shown denotes the amount of data of the tasks already processed by the virtual machine in which device i's service resides;
The queuing delay is calculated as follows:
① The processing durations required by the n tasks in the queue and by the cvm tasks currently being processed in the server are known;
② Create an array b from the processing durations of the cvm in-service tasks and sort its entries from large to small; the cvm-th entry of the sorted array b is the queuing delay of the service of the 1st device in the queue;
③ Add the processing duration of the 1st device in the queue to its queuing delay, put the sum into array b, and sort the updated array from large to small; the cvm-th entry of the sorted array b is the queuing delay of the service of the 2nd device in the queue;
④ Continuing in the same way yields the queuing delay of the n-th task in the queue.
5. The particle swarm-based service migration method in the edge computing environment according to claim 1, wherein step S4 specifically includes the following:
The connectivity between user equipment i and base station j changes as the device moves; it is represented by an outage probability, estimated as a function of the distance between the device and the base station; the probability density function of the signal-to-noise ratio received by the base station follows a log-normal distribution as follows:
Where σ is the standard deviation of log-normal shadowing;
When the signal-to-noise ratio received by the server falls below a certain threshold Γ, the link is interrupted; the outage probability is the probability that the received SNR falls below Γ, and the link stability between device i and base station j is expressed as follows:
the optional server of device i needs to satisfy the following condition:
where Pmin denotes the signal reception threshold of the base station and Poutmax denotes the outage probability threshold;
Two integer variables Si, Xi ∈ Nserver are defined, denoting respectively the base station hosting the target server selected by the device and the relay base station selected by the device, where Nserver is the set of servers; when Xi = Si, the device has no relay base station; a binary variable Hi is defined:
The total latency of the task for device i is then:
The total energy consumption is as follows:
The energy consumption of each part is the corresponding delay multiplied by the corresponding power; the quantities shown denote the delay and energy consumption of device i's task before and after migration, respectively; delay and energy consumption are normalized for comparison at the same magnitude:
The benefit after migration is
A migration decision that maximizes A is found using an improved particle swarm algorithm;
In the quantum particle swarm algorithm, a larger contraction-expansion coefficient β favours searching the global region: the algorithm then converges faster, but a high-precision solution is harder to obtain; a smaller β favours searching a local region, making a high-precision solution easier to obtain but slowing convergence; whereas β decreases linearly with the iteration count in the conventional quantum particle swarm algorithm, in the improved algorithm β decreases with the iteration count along a cosine curve, i.e.
where βm and β0 are the upper and lower bounds of β, k is the current iteration number, and kmax is the maximum number of iterations;
In the initial stage of the algorithm, while the gap between the global extremum and a particle's historical individual extremum is large, the search runs at high speed for a period of time so that the global extremum can be approached quickly; in the later stage of evolution, once the particles' historical individual extrema are close to the global extremum, a lower speed is used for a period of time to strengthen the algorithm's local search and thereby improve its accuracy;
The improved quantum particle swarm algorithm further changes the mean individual best position of the current particles from the plain average of each particle's best position to a randomly weighted average of those best positions, strengthening the randomness of the algorithm and its ability to escape the pull of local extreme points.
CN201910871666.7A 2019-09-16 2019-09-16 Service migration method based on particle swarm in edge computing environment Active CN110580199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910871666.7A CN110580199B (en) 2019-09-16 2019-09-16 Service migration method based on particle swarm in edge computing environment

Publications (2)

Publication Number Publication Date
CN110580199A true CN110580199A (en) 2019-12-17
CN110580199B CN110580199B (en) 2022-04-22

Family

ID=68811326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910871666.7A Active CN110580199B (en) 2019-09-16 2019-09-16 Service migration method based on particle swarm in edge computing environment

Country Status (1)

Country Link
CN (1) CN110580199B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110839075A (en) * 2019-11-08 2020-02-25 重庆大学 Service migration method based on particle swarm in edge computing environment
CN111813506A (en) * 2020-07-17 2020-10-23 华侨大学 Resource sensing calculation migration method, device and medium based on particle swarm algorithm
CN112118312A (en) * 2020-09-17 2020-12-22 浙江大学 Network burst load evacuation method facing edge server
CN112383900A (en) * 2020-10-09 2021-02-19 山西大学 Device-to-device proximity service method based on consensus algorithm
CN112395090A (en) * 2020-11-19 2021-02-23 华侨大学 Intelligent hybrid optimization method for service placement in mobile edge computing
CN113114733A (en) * 2021-03-24 2021-07-13 重庆邮电大学 Distributed task unloading and computing resource management method based on energy collection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846704A (en) * 2017-10-26 2018-03-27 北京邮电大学 A kind of resource allocation and base station service arrangement method based on mobile edge calculations
CN109600178A (en) * 2018-12-07 2019-04-09 中国人民解放军军事科学院国防科技创新研究院 The optimization method of energy consumption and time delay and minimum in a kind of edge calculations
CN109885397A (en) * 2019-01-15 2019-06-14 长安大学 The loading commissions migration algorithm of time delay optimization in a kind of edge calculations environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YING XIE ET AL: "A novel directional and non-local-convergent particle swarm optimization based workflow scheduling in cloud–edge environment", 《FUTURE GENERATION COMPUTER SYSTEMS》 *
YUANZHE LI ET AL: "An energy-aware Edge Server Placement Algorithm in Mobile Edge Computing", 《2018 IEEE INTERNATIONAL CONFERENCE ON EDGE COMPUTING》 *
LIU Mengjie: "Research on Collaborative Resource Optimization of Mobile Terminals and Intelligent Base Stations in Mobile Edge Computing", 《China Master's Theses Full-text Database, Information Science and Technology》 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Liang Jing; Xiao Jintao; Jiang Peng; Wu Yanfei; Jia Yunjian; Chen Zhengchuan
Inventor before: Liang Jing; Jiang Peng; Wu Yanfei; Jia Yunjian; Chen Zhengchuan
GR01 Patent grant