Service migration method based on particle swarm in an edge computing environment
Technical Field
The invention belongs to the technical field of mobile communication and relates to a particle swarm-based service migration method in an edge computing environment.
Background
Mobile cloud computing (MCC), an integration of cloud computing and mobile computing, has found widespread use in recent years. It relies heavily on centralizing computing and data resources so that distributed end users can access them on demand: data processing, computation, and even storage for applications need not be performed on mobile devices with limited processing power, which greatly reduces the hardware requirements that applications place on mobile devices. However, because cloud services are provided by large centralized data centers far away from the user, users suffer long delays when connecting to the remote cloud data center, and because all application data interaction passes through the core network, congestion easily arises during network peak periods and puts great stress on the core network. Besides high latency and congestion, MCC also suffers from security holes, low coverage, and delayed data transmission. For 5G these challenges may be even harder to solve, and the main goal of Mobile Edge Computing (MEC) is to address the challenges encountered by MCC systems. MEC serves users by deploying part of the resources of a mobile cloud computing data center (e.g., storage and processing capability) to the edge of the Radio Access Network (RAN), i.e., close to the users. Data processing requests generated by mobile applications therefore only need to be handled by the MEC server at the edge of the local network, which returns the result; they do not need to be processed by the core network and a data center. This not only greatly relieves pressure on the core network but also significantly reduces the network delay of applications.
With the development of MEC, it has become a key technology for realizing the vision of the Internet of Things. An important issue in MEC is service migration caused by user mobility. The coverage of a single edge server is limited, and the mobility of user terminals (e.g., smartphones and smart vehicles) may cause significant degradation of network performance and even disruption of ongoing edge services; in that case the user needs to migrate the service to a neighbouring server in order to maintain low-latency access. Since services are typically carried in virtual machines, virtual machine live migration can be used to move the virtual machine hosting the service close to the user's current location, ensuring low delay when the user accesses the service.
The prior art has the following defects:
Most existing service migration strategies are not comprehensive: they make decisions mainly from the network side using information such as load and user requests, without considering the influence of user mobility; or they optimize only a single performance index instead of considering several performance indexes together; or they analyze device mobility with a one-dimensional mobility model that cannot truly reflect device mobility in actual scenarios; or they perform service migration only once per fixed time period instead of triggering migration at any time, so that migration efficiency is too low.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a particle swarm-based service migration method in an edge computing environment, which can trigger service migration in real time when the performance of a device does not meet the standard and maximize the benefit of service migration while meeting the task requirements of each device.
In order to achieve this purpose, the invention provides the following technical scheme:
The service migration method based on the particle swarm in the edge computing environment comprises the following steps:
S1: server modeling;
S2: service request reporting: the device sends its state information to the local network controller;
S3: finding the devices whose delay does not meet the requirement: the local network controller estimates the delay of each device from the received device state information and the network information collected by the state collection module, and judges whether the delay requirement of the corresponding task type is met;
S4: making a migration decision: for devices that cannot meet the requirements, select a server by comprehensively considering energy consumption and delay together with factors such as task size, task type, link stability, and server state, and make the migration decision with the maximum benefit while guaranteeing the performance index requirements of the device's task.
Further, step S1 specifically includes:
For a uRLLC task, the minimum processing capacity V_need that must be allocated to satisfy the delay threshold condition is calculated first. Since this type of service has a strict delay requirement, when the server has enough remaining processing capacity V_remain, allocating the task only the minimum capacity V_need is too conservative, while allocating it all of the remaining capacity would obviously have a great influence on other pending tasks. The capacity allocated to the task is therefore determined dynamically in combination with the processing capacity about to be released in the server: the larger the remaining capacity and the to-be-released capacity are, the larger the allocation. In some cases a large delay in the other parts of the task makes V_need so large that the current remaining capacity of the server cannot satisfy it; if the server waits until enough capacity is released later, the waiting time increases, V_need increases accordingly, and still more released capacity must be waited for, so the task would barely meet the threshold condition while seriously degrading the performance of subsequent tasks. For this purpose a minimum capacity threshold V_umin is set for uRLLC tasks: when the remaining capacity is greater than V_umin but less than V_need, the capacity allocated to the task is V_umin, i.e.
where V_i^release denotes the processing capacity released in the i-th subsequent time slot and χ_i denotes the weight of that slot; the larger i is, the smaller χ_i is;
V_need is proportional to the computation amount c_i required by the task of device i, i.e.
V_need = c_i / (D_u − t_i),
where D_u is the delay threshold of the uRLLC task and t_i is the sum of the remaining delays of device i's task excluding the processing delay;
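The allocation rule above can be sketched as follows. This is a minimal sketch under stated assumptions: the patent does not print the dynamic-allocation formula, so the `share` factor governing how much spare and to-be-released capacity is granted is hypothetical; `V_need = c_i/(D_u − t_i)` follows the definitions in the text.

```python
def allocate_urllc(c_i, D_u, t_i, v_remain, v_release, chi, v_umin, share=0.5):
    """Sketch of processing-capacity allocation for a uRLLC task.

    c_i: computation amount; D_u: delay threshold; t_i: sum of the task's
    other delays; v_release/chi: capacity released in upcoming slots and the
    (decreasing) slot weights; share: hypothetical fraction of spare capacity
    the task may additionally take.
    """
    v_need = c_i / (D_u - t_i)  # minimum capacity meeting the delay threshold
    if v_remain >= v_need:
        # dynamic allocation: grows with remaining and soon-to-be-released capacity
        spare = (v_remain - v_need) + sum(x * v for x, v in zip(chi, v_release))
        return v_need + share * spare
    if v_remain >= v_umin:
        return v_umin           # degraded but admissible allocation V_umin
    return None                 # server cannot admit the task right now
```

For example, with `c_i = 10`, `D_u = 5`, `t_i = 3` the minimum is `V_need = 5`; a server with 20 units free grants more than the minimum, while one with 4 units free falls back to `V_umin`.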
For an eMBB task, the delay requirement is looser than for a uRLLC task, so most of the time the capacity allocated to it is lower than that allocated to a uRLLC task; moreover, when the server load is low, in order to reserve as much capacity as possible for subsequent tasks, the capacity allocated to an eMBB task has an upper bound, i.e.
Further, in step S2, the device state information includes the size and type of the task generated by the device, the device's location, transmission power, bandwidth, and server selection.
Further, in step S3, the specific content is as follows:
The local network controller calculates the data transmission delay from device i to base station j from the received device information and the information collected by the state collection module as
t_{i,j}^trans = c_i / r_{i,j},
where r_{i,j} denotes the data transmission rate of device i and c_i denotes the task size of device i; r_{i,j} can be obtained from the following formula:
r_{i,j} = B log2(1 + P_{i,j} / N_0), with P_{i,j} = P_i / L_{i,j},
where B is the channel bandwidth, P_{i,j} is the power of device i received by base station j, P_i is the power at which device i transmits, L_{i,j} is the path loss between device i and base station j, and N_0 is the noise power;
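The transmission-delay computation can be sketched as below. This is a sketch under assumptions: the text defines the quantities but not the exact formulas, so the Shannon rate `r = B·log2(1 + SNR)` and the linear path-loss relation between transmit and received power are plausible reconstructions, not the patent's printed equations.

```python
import math

def transmission_delay(c_i, bandwidth, p_tx, path_loss, noise):
    """Data transmission delay of device i's task to base station j.

    Received power is modelled as transmit power divided by linear path loss
    (an assumption), and the rate follows the Shannon formula.
    """
    p_rx = p_tx / path_loss                          # power received at base station j
    rate = bandwidth * math.log2(1.0 + p_rx / noise)  # achievable data rate (bit/s)
    return c_i / rate                                # delay = task size / rate
```

For instance, a 1 Mbit task over a 1 MHz channel with SNR 3 (so `log2(4) = 2` bit/s/Hz) takes 0.5 s.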
The relay transmission delay of the task of device i from relay base station j to target base station j' is
t_{i,j,j'}^relay = c_i / r_{j,j'},
where r_{j,j'} represents the rate at which the task is transmitted from base station j to base station j';
The processing time required by the task of device i, i.e., the processing delay, is t_i^proc = c_i / V_i, where V_i is the processing capacity allocated to the task;
The service migration delay of device i is
t_i^mig = M_i / r_{j,j'},
where M_i represents the size of the virtual machine in which the service of device i resides. The larger the data amount of the tasks, the larger the data amount contained in the virtual machine; that is, the size of the virtual machine is proportional to the data amount of the tasks of the service that have already been processed, so
M_i ∝ c_i^done,
where c_i^done denotes the data amount of the tasks already processed by the virtual machine in which the service of device i resides;
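The migration-delay model above can be sketched in a few lines. A sketch under assumptions: the proportionality constant `alpha` between processed data and VM size is hypothetical (the text states only proportionality), and the VM is assumed to transfer at the inter-base-station rate.

```python
def migration_delay(processed_data, alpha, link_rate):
    """Service migration delay of device i.

    VM size is proportional to the data already processed by the service
    (M_i = alpha * c_done, alpha hypothetical); the VM is transferred at the
    inter-base-station rate r_{j,j'}.
    """
    vm_size = alpha * processed_data   # M_i grows with processed task data
    return vm_size / link_rate
```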
The steps for calculating the queuing delay are as follows:
① The processing durations required by the n tasks in the queue and the remaining processing times of the C_vm tasks being served in the server are known.
② Create an array b from the remaining processing times of the C_vm in-service tasks and sort the data in b in descending order. The C_vm-th datum of the sorted array b is the queuing delay of the service of the 1st device in the queue.
③ Add the processing time of the 1st device in the queue to its queuing delay, put the sum into array b, and sort the data in the updated array b in descending order. The C_vm-th datum of the sorted array b is the queuing delay of the service of the 2nd device in the queue.
④ By analogy, the queuing delay of the n-th task in the queue is obtained.
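The steps above can be sketched directly: keeping array b sorted in descending order, the C_vm-th (smallest) entry is always the time until the first virtual machine frees up, which is the next task's queuing delay.

```python
def queuing_delays(in_service_remaining, queued_task_times):
    """Queuing delay of each queued task, following steps ①-④.

    in_service_remaining: remaining processing times of the C_vm tasks
    currently being served; queued_task_times: processing times of the n
    queued tasks in queue order.
    """
    b = sorted(in_service_remaining, reverse=True)  # array b, descending
    c_vm = len(b)
    delays = []
    for t in queued_task_times:
        wait = b[c_vm - 1]        # C_vm-th largest = first slot to free up
        delays.append(wait)
        b[c_vm - 1] = wait + t    # that slot is now busy until wait + t
        b.sort(reverse=True)      # re-sort the updated array
    return delays
```

With two in-service tasks finishing in 3 and 1 time units and queued tasks of length 2, 2, 5, the queuing delays come out as 1, 3, 3.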
Further, in step S4, the specific content is as follows:
The connectivity between user device i and base station j changes as the device moves. It is expressed in terms of the outage probability, which can be estimated as a function of the distance between the device and the base station. The probability density function of the signal-to-noise ratio of the signal received by the base station follows a log-normal distribution as follows:
where σ is the standard deviation of the log-normal shadowing;
When the signal-to-noise ratio received by the server is smaller than a certain threshold Γ, the link is interrupted, so the outage probability is the probability that the received SNR falls below Γ, and the link stability between device i and base station j is expressed as follows:
The optional servers of device i need to satisfy the following condition:
where P_min represents the signal reception threshold of the base station and P_outmax represents the outage probability threshold;
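The admissibility check can be sketched as below. A sketch under assumptions: with log-normal shadowing the SNR in dB is Gaussian, so the outage probability P(SNR < Γ) reduces to a Gaussian CDF; the exact distribution parameters used by the patent are not printed.

```python
import math

def outage_probability(mean_snr_db, gamma_db, sigma_db):
    """P(SNR < Γ) under log-normal shadowing: SNR in dB is Gaussian with
    standard deviation sigma, so outage is the Gaussian CDF at Γ."""
    z = (gamma_db - mean_snr_db) / sigma_db
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def server_admissible(p_rx, p_min, mean_snr_db, gamma_db, sigma_db, p_out_max):
    """A server is optional for device i only if the received power meets the
    base station's reception threshold P_min and the outage probability stays
    at or below the threshold P_outmax."""
    return (p_rx >= p_min and
            outage_probability(mean_snr_db, gamma_db, sigma_db) <= p_out_max)
```

When the mean SNR equals the threshold Γ, the outage probability is exactly 0.5, matching the symmetric Gaussian model.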
Two integer variables S_i, X_i ∈ N_server are defined, indicating respectively the base station where the target server selected by the device is located and the relay base station selected by the device, where N_server is the set of servers (base stations); when X_i = S_i, the device has no relay base station. A binary variable H_i is defined:
The total latency of the task for device i is then:
the total energy consumption is as follows:
The energy consumption of each part is its delay multiplied by the corresponding power; the respective symbols denote the delay and energy consumption of device i's task before and after migration. The delay and energy consumption are normalized so that they can be compared at the same magnitude:
The benefit A after migration is therefore
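The benefit computation can be sketched as follows. A sketch under assumptions: the patent's exact normalization is not printed, so the relative change (before − after)/before and the default equal weights are illustrative; the weights w_t, w_e encode the delay/energy preference mentioned in the text.

```python
def migration_benefit(t_before, t_after, e_before, e_after, w_t=0.5, w_e=0.5):
    """Benefit A of migrating device i's service: weighted sum of the
    normalized delay reduction and the normalized energy reduction."""
    gain_t = (t_before - t_after) / t_before   # normalized delay change
    gain_e = (e_before - e_after) / e_before   # normalized energy change
    return w_t * gain_t + w_e * gain_e
```

Halving both delay and energy yields A = 0.5, while a migration that worsens delay without saving energy yields a negative benefit.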
Finding a migration decision that maximizes A using an improved particle swarm algorithm;
In the quantum particle swarm algorithm, a larger contraction-expansion coefficient β favors searching the global region and speeds up convergence, but makes a high-precision solution harder to obtain; a smaller β favors searching a local region and obtaining a high-precision solution, but slows convergence. Whereas β decreases linearly with the iteration count in the traditional quantum particle swarm algorithm, in the improved algorithm β is set to decrease with the iteration count along a cosine curve, i.e.
where β_m and β_0 are the upper and lower bounds of β, k is the current iteration number, and k_max is the maximum number of iterations;
Therefore, in the initial stage of the algorithm, when the gap between the global best and the particles' historical personal bests is large, the particles search at high speed for a period of time so that they can approach the global best quickly; in the later stage of evolution, after the particles' historical personal bests are close to the global best, a lower speed is used for a period of time to strengthen the local search ability of the algorithm and thereby improve its accuracy;
The invention changes the mean best position of the current particles in the traditional quantum particle swarm algorithm from the plain average of each particle's best position to a randomly weighted average of each particle's best position, which strengthens the randomness of the algorithm and its ability to escape local extrema;
Because the Logistic chaotic map has characteristics such as ergodicity, regularity, randomness, and sensitivity to initial values, the particle swarm is initialized with the following Logistic chaotic map:
s_{i,n+1} = 4 s_{i,n} (1 − s_{i,n}), i ∈ I_m
x_{i,n+1} = 4 x_{i,n} (1 − x_{i,n}), i ∈ I_m
S_{i,n+1,1} = [s_{i,n+1} · N_s]
X_{i,n+1,1} = [x_{i,n+1} · N_s]
where the fixed points 0, 0.25, 0.5, 0.75, and 1 cannot be taken as initial values, and S_{i,n+1,1}, X_{i,n+1,1} respectively represent the S and X parameters of the i-th component of the (n+1)-th particle of the first iteration. The algorithm of the present invention generates the next generation of particle positions as follows:
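The chaotic initialization can be sketched as below. A sketch under assumptions: the mapping of the chaotic value to a server index (truncation into 1..N_s) is one reasonable reading of S = [s · N_s]; the seed value is illustrative and must avoid the listed fixed points.

```python
def logistic_chaos_init(n_particles, dim, n_servers, seed=0.123):
    """Initialize integer decision variables (S or X) with the Logistic
    chaotic map s_{n+1} = 4 s_n (1 - s_n). The seed must avoid the fixed
    points 0, 0.25, 0.5, 0.75 and 1."""
    assert seed not in (0.0, 0.25, 0.5, 0.75, 1.0)
    particles, s = [], seed
    for _ in range(n_particles):
        row = []
        for _ in range(dim):
            s = 4.0 * s * (1.0 - s)                         # chaotic iteration
            row.append(int(s * n_servers) % n_servers + 1)  # server index 1..N_s
        particles.append(row)
    return particles
```

Because the map is ergodic on (0, 1), the generated indices spread over the whole server set rather than clustering, which is the stated motivation for chaotic initialization.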
where p_{s,i,n,k} and p_{x,i,n,k} are the i-th components of the two decision variables of the attractor of the n-th particle at the k-th iteration; the attractor exists to guarantee convergence of the algorithm, and each particle converges to its point attractor. p_{s,i,n} and p_{x,i,n} denote the i-th components of the historical best position of the two decision variables of the n-th particle, and P_{sg,i}, P_{xg,i} denote the i-th components of the global best position of the two decision variables of the swarm so far. φ is a random factor in (0,1), u is a random number in (0,1), and L characterizes the probability magnitude of the particle appearing at a position relative to the attractor; L_{s,i,n,k} and L_{x,i,n,k}, the i-th components of L for the n-th particle of the k-th iteration, are
L_{s,i,n,k+1} = 2β(k)|mbest_{s,i} − S_{i,n,k}|
L_{x,i,n,k+1} = 2β(k)|mbest_{x,i} − X_{i,n,k}|
where mbest_{s,i} and mbest_{x,i} are respectively the i-th components of the two decision variables of the randomly weighted mean best position of the current particles, and κ_n is the normalized random number of the n-th particle, obtained by generating a random number in (0,1) for each particle and then normalizing; it represents the contribution of each particle's current best position to mbest.
The final particle position equations are:
S_{i,n,k+1} = [p_{s,i,n,k} + (L_{s,i,n,k+1}/2)·ln(1/u(k))] mod N_s, u(k) ≥ 0.5
S_{i,n,k+1} = [p_{s,i,n,k} − (L_{s,i,n,k+1}/2)·ln(1/u(k))] mod N_s, u(k) < 0.5
X_{i,n,k+1} = [p_{x,i,n,k} + (L_{x,i,n,k+1}/2)·ln(1/u(k))] mod N_s, u(k) ≥ 0.5
X_{i,n,k+1} = [p_{x,i,n,k} − (L_{x,i,n,k+1}/2)·ln(1/u(k))] mod N_s, u(k) < 0.5
The specific steps of solving with the quantum particle swarm algorithm are as follows:
1) Randomly take two values between 0 and 1 as the S and X parameters of the first particle of the initial generation;
2) Iterate the S and X parameters of the remaining initial-generation particles with the Logistic map;
3) Judge whether each particle satisfies the constraint conditions; if not, iterate again; if so, calculate the fitness of each particle and find the best position of each particle and the best position among all particles so far;
4) If the termination condition is met, go to step 6); otherwise, go to step 5);
5) Generate the next generation of particles with the quantum particle swarm algorithm and go to step 3);
6) End.
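The core update, including the randomly weighted mbest described above, can be sketched for a continuous one-dimensional decision variable. A sketch under assumptions: the ± rule tied to u ≥ 0.5 and the step form (L/2)·ln(1/u) follow standard QPSO conventions, not a printed equation from the patent.

```python
import math
import random

def weighted_mbest(pbest):
    """Randomly weighted mean best position: each particle contributes its
    best position with a normalized random weight (the modification the
    invention makes to the plain average)."""
    w = [random.random() for _ in pbest]
    return sum(wi * p for wi, p in zip(w, pbest)) / sum(w)

def qpso_step(positions, pbest, gbest, beta):
    """One QPSO update sketch: attractor mixes personal and global best,
    step length L = 2*beta*|mbest - x|, new point at attractor ± (L/2) ln(1/u)."""
    m = weighted_mbest(pbest)
    new_positions = []
    for x, p in zip(positions, pbest):
        phi = random.random()
        attractor = phi * p + (1.0 - phi) * gbest
        L = 2.0 * beta * abs(m - x)
        u = 1.0 - random.random()            # keep u in (0, 1] so ln(1/u) is finite
        step = 0.5 * L * math.log(1.0 / u)
        sign = 1.0 if random.random() >= 0.5 else -1.0
        new_positions.append(attractor + sign * step)
    return new_positions
```

The weighted mean always stays within the range of the personal bests, so the attractors remain in a sensible region of the search space.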
The invention has the following beneficial effects:
(1) The invention builds a real-time service migration system model based on edge computing, which provides an effective platform for the system to collect real-time information about devices and the network; service migration is triggered immediately once device performance falls below the standard, which matches actual device mobility and the real-time requirement of service migration, and is more responsive and practical than other service migration models.
(2) On the basis of the server allocating a fixed minimum processing capacity to each device task, the invention dynamically allocates processing capacity by combining the task type, the server's remaining processing capacity, the processing capacity the server is about to release, and other factors. Compared with other server capacity allocation methods, this improves server efficiency and task processing speed.
(3) The invention takes the change in delay and energy consumption before and after migration as the benefit, and obtains the benefit-maximizing migration decision under different requirements through different weight settings for delay and energy consumption.
additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a diagram of a service migration system model based on edge computing according to the present invention;
FIG. 2 is a diagram illustrating steps performed in the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided for the purpose of illustrating the invention only and are not intended to limit it. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Fig. 1 is a model diagram of the service migration system of the present invention, which specifically includes:
Equipment: a device generates tasks as a Poisson stream; several consecutive tasks form a service and are processed in the same virtual machine.
Control base station: large coverage, mainly used for transmitting control signaling.
Server: servers are deployed near the small base stations, communicate with the outside through the small base stations, and have certain computing and storage resources.
Small base station: small coverage, mainly used for transmitting data; it periodically reports information about its connected servers to the control base station.
Local network controller: mainly used for managing device mobility and controlling base stations; it comprises four modules: a state collection module, a delay prediction module, a server selection module, and a service migration module.
When the base station hosting the target server selected by a device cannot communicate with the device directly, the device selects one of the base stations it can reach directly as a relay base station to communicate with the base station hosting the target server. Based on this, the invention provides a delay-aware method: when a user needs a service, the delay prediction module predicts the service delay from the information collected by the state collection module; when the user's delay does not meet the requirement, the server selection module in the local network controller selects the server yielding the maximum delay and energy benefit according to the state collection module's information, and finally the service migration module performs the service migration.
The server allocates a virtual machine to each service of a device: when the first task of a service arrives at the server, the server allocates a virtual machine to execute it, and subsequent tasks of the same service are processed by the same virtual machine. After the service is finished, the virtual machine is released so that it no longer occupies server resources.
Fig. 2 is a diagram of implementation steps of a service migration scheme based on latency and energy consumption according to the present invention, and the specific steps are as follows:
S1: server modeling;
S2: service request reporting: the device sends its state information to the local network controller;
S3: finding the devices whose delay does not meet the requirement: the local network controller estimates the delay of each device from the received device state information and the network information collected by the state collection module, and judges whether the delay requirement of the corresponding task type is met;
S4: making a migration decision: for devices that cannot meet the requirements, select a server by comprehensively considering energy consumption and delay together with factors such as task size, task type, link stability, and server state, and make the migration decision with the maximum benefit while guaranteeing the performance index requirements of the device's task;
S1 provides a method for dynamically allocating server processing capacity, as follows:
For a uRLLC task, the minimum processing capacity V_need that must be allocated to satisfy the delay threshold condition is calculated first. Since this type of service has a strict delay requirement, when the server has enough remaining processing capacity V_remain, allocating the task only the minimum capacity V_need is too conservative, while allocating it all of the remaining capacity would obviously have a great influence on other pending tasks. The capacity allocated to the task is therefore determined dynamically in combination with the processing capacity about to be released in the server: the larger the remaining capacity and the to-be-released capacity are, the larger the allocation. In some cases a large delay in the other parts of the task makes V_need so large that the current remaining capacity of the server cannot satisfy it; if the server waits until enough capacity is released later, the waiting time increases, V_need increases accordingly, and still more released capacity must be waited for, so the task would barely meet the threshold condition while seriously degrading the performance of subsequent tasks. For this purpose a minimum capacity threshold V_umin is set for uRLLC tasks: when the remaining capacity is greater than V_umin but less than V_need, the capacity allocated to the task is V_umin, i.e.
where V_i^release denotes the processing capacity released in the i-th subsequent time slot and χ_i denotes the weight of that slot; the larger i is, the smaller χ_i is;
V_need is proportional to the computation amount c_i required by the task of device i, i.e.
V_need = c_i / (D_u − t_i),
where D_u is the delay threshold of the uRLLC task and t_i is the sum of the remaining delays of device i's task excluding the processing delay;
For an eMBB task, the delay requirement is looser than for a uRLLC task, so most of the time the capacity allocated to it is lower than that allocated to a uRLLC task; moreover, when the server load is low, in order to reserve as much capacity as possible for subsequent tasks, the capacity allocated to an eMBB task has an upper bound, i.e.
S3 provides modeling of the delay, which is as follows:
The local network controller calculates the data transmission delay from device i to base station j from the received device information and the information collected by the state collection module as
t_{i,j}^trans = c_i / r_{i,j},
where r_{i,j} denotes the data transmission rate of device i and c_i denotes the task size of device i; r_{i,j} can be obtained from the following formula:
r_{i,j} = B log2(1 + P_{i,j} / N_0), with P_{i,j} = P_i / L_{i,j},
where B is the channel bandwidth, P_{i,j} is the power of device i received by base station j, P_i is the power at which device i transmits, L_{i,j} is the path loss between device i and base station j, and N_0 is the noise power;
The relay transmission delay of the task of device i from relay base station j to target base station j' is
t_{i,j,j'}^relay = c_i / r_{j,j'},
where r_{j,j'} represents the rate at which the task is transmitted from base station j to base station j';
The processing time required by the task of device i, i.e., the processing delay, is t_i^proc = c_i / V_i, where V_i is the processing capacity allocated to the task;
The service migration delay of device i is
t_i^mig = M_i / r_{j,j'},
where M_i represents the size of the virtual machine in which the service of device i resides. The larger the data amount of the tasks, the larger the data amount contained in the virtual machine; that is, the size of the virtual machine is proportional to the data amount of the tasks of the service that have already been processed, so
M_i ∝ c_i^done,
where c_i^done denotes the data amount of the tasks already processed by the virtual machine in which the service of device i resides;
The steps for calculating the queuing delay are as follows:
① The processing durations required by the n tasks in the queue and the remaining processing times of the C_vm tasks being served in the server are known.
② Create an array b from the remaining processing times of the C_vm in-service tasks and sort the data in b in descending order. The C_vm-th datum of the sorted array b is the queuing delay of the service of the 1st device in the queue.
③ Add the processing time of the 1st device in the queue to its queuing delay, put the sum into array b, and sort the data in the updated array b in descending order. The C_vm-th datum of the sorted array b is the queuing delay of the service of the 2nd device in the queue.
④ By analogy, the queuing delay of the n-th task in the queue is obtained.
S4 sets out the constraints on the optional servers and the migration decision steps, as follows:
The connectivity between user device i and base station j changes as the device moves. It is expressed in terms of the outage probability, which can be estimated as a function of the distance between the device and the base station. The probability density function of the signal-to-noise ratio of the signal received by the base station follows a log-normal distribution as follows:
where σ is the standard deviation of the log-normal shadowing;
When the signal-to-noise ratio received by the server is smaller than a certain threshold Γ, the link is interrupted, so the outage probability is the probability that the received SNR falls below Γ, and the link stability between device i and base station j is expressed as follows:
The optional servers of device i need to satisfy the following condition:
where P_min represents the signal reception threshold of the base station and P_outmax represents the outage probability threshold;
Two integer variables S_i, X_i ∈ N_server are defined, indicating respectively the base station where the target server selected by the device is located and the relay base station selected by the device, where N_server is the set of servers (base stations); when X_i = S_i, the device has no relay base station. A binary variable H_i is defined:
the total latency of the task for device i is then:
the total energy consumption is as follows:
The energy consumption of each part is its delay multiplied by the corresponding power; the respective symbols denote the delay and energy consumption of device i's task before and after migration. The delay and energy consumption are normalized so that they can be compared at the same magnitude:
The benefit A after migration is therefore
finding a migration decision that maximizes A using an improved particle swarm algorithm;
In the quantum particle swarm algorithm, a larger contraction-expansion coefficient β favors global search and speeds up convergence, but makes a high-precision solution harder to obtain; a smaller β favors local search and high-precision solutions, but slows convergence. Whereas the traditional quantum particle swarm algorithm decreases β linearly with the iteration number, the improved algorithm decreases β along a cosine curve, namely
β(k) = β_0 + (β_m − β_0)(1 + cos(πk/k_max))/2
where β_m and β_0 are the upper and lower bounds of β, k is the current iteration number, and k_max is the maximum number of iterations;
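A minimal sketch of this cosine schedule; the bounds β_0 = 0.5 and β_m = 1.0 are common QPSO choices assumed here, not values stated by the invention:

```python
import math

def beta(k, k_max, beta_0=0.5, beta_m=1.0):
    # Contraction-expansion coefficient: decays from beta_m at k = 0
    # to beta_0 at k = k_max along a cosine curve.
    return beta_0 + (beta_m - beta_0) * (1.0 + math.cos(math.pi * k / k_max)) / 2.0
```

Early iterations keep β near β_m (global search), late iterations near β_0 (local refinement), with the fastest decrease around the middle of the run.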
Therefore, in the initial stage of the algorithm, when the gap between the global best and each particle's historical personal best is large, the search moves at high speed for a period of time so that the global optimum can be approached quickly; in the later stage of the evolution, once the personal bests are close to the global best, a lower speed strengthens the local search capability and improves the accuracy of the algorithm;
The invention changes the mean best position of the current particles in the traditional quantum particle swarm algorithm from the plain average of all particles' personal best positions to a randomly weighted average, which strengthens the randomness of the algorithm and its ability to escape local extreme points;
Because the Logistic chaotic map is ergodic, regular, random, and sensitive to initial values, the particle swarm is initialized with the following Logistic chaotic map:
s_{i,n+1} = 4 s_{i,n}(1 − s_{i,n}),  i ∈ I_m
x_{i,n+1} = 4 x_{i,n}(1 − x_{i,n}),  i ∈ I_m
where the fixed points 0, 0.25, 0.5, 0.75 and 1 must not be taken as initial values, and s_{i,n+1,1}, x_{i,n+1,1} denote the S and X parameters of the i-th component of the (n+1)-th particle of the first iteration. The algorithm of the present invention generates the next generation of particle positions as follows:
Here p_{s,i,n,k} and p_{x,i,n,k} are the i-th components of the two decision variables of the attractor of the n-th particle at the k-th iteration; the attractor guarantees convergence of the algorithm, each particle converging toward it:
p_{s,i,n,k} = φ·P_{s,i,n} + (1 − φ)·P_{sg,i},  p_{x,i,n,k} = φ·P_{x,i,n} + (1 − φ)·P_{xg,i}
P_{s,i,n} and P_{x,i,n} are the i-th components of the historical best position of the n-th particle for the two decision variables; P_{sg,i} and P_{xg,i} are the i-th components of the global best position of the swarm so far; φ is a random factor in (0,1); u is a random number in (0,1); L governs the spread of the particle around the attractor, and its i-th components for the two decision variables of the n-th particle at the k-th iteration, L_{s,i,n,k} and L_{x,i,n,k}, are:
L_{s,i,n,k+1} = 2β(k)|mbest_{s,i} − S_{i,n,k}|
L_{x,i,n,k+1} = 2β(k)|mbest_{x,i} − X_{i,n,k}|
mbest_{s,i} and mbest_{x,i} are the i-th components of the two decision variables of the randomly weighted mean best position of the current particles; κ_n is the normalized random weight of the n-th particle, obtained by drawing a random number in (0,1) for each particle and then normalizing, and represents the contribution of that particle's current best position to mbest:
mbest_{s,i} = Σ_n κ_n P_{s,i,n},  mbest_{x,i} = Σ_n κ_n P_{x,i,n}
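The randomly weighted mean best position can be sketched as follows; the function name and argument layout are illustrative assumptions:

```python
import random

def weighted_mbest(pbest, rng=random):
    # pbest[n][i]: i-th component of the historical best position of particle n.
    # Each particle draws a weight in (0,1); the weights are normalised to sum
    # to 1, so kappa[n] is particle n's contribution to mbest.
    kappa = [rng.random() for _ in pbest]
    total = sum(kappa)
    kappa = [w / total for w in kappa]
    dim = len(pbest[0])
    return [sum(kappa[n] * pbest[n][i] for n in range(len(pbest)))
            for i in range(dim)]
```

Because the weights are a convex combination, each component of mbest always lies between the smallest and largest corresponding components of the personal best positions.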
The final particle position equations are:
S_{i,n,k+1} = (p_{s,i,n,k} + (L_{s,i,n,k+1}/2)·ln(1/u)) mod N_server,  u(k) ≥ 0.5
S_{i,n,k+1} = (p_{s,i,n,k} − (L_{s,i,n,k+1}/2)·ln(1/u)) mod N_server,  u(k) < 0.5
X_{i,n,k+1} = (p_{x,i,n,k} + (L_{x,i,n,k+1}/2)·ln(1/u)) mod N_server,  u(k) ≥ 0.5
X_{i,n,k+1} = (p_{x,i,n,k} − (L_{x,i,n,k+1}/2)·ln(1/u)) mod N_server,  u(k) < 0.5
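One update of a single integer component (a server index) under this rule can be sketched as follows; the clamping of u away from zero and the rounding to an integer index are implementation assumptions:

```python
import math
import random

def update_component(pos, pbest_i, gbest_i, mbest_i, beta_k, n_server, rng):
    # Attractor between the particle's own best and the global best.
    phi = rng.random()
    p = phi * pbest_i + (1.0 - phi) * gbest_i
    # Characteristic length L = 2 * beta * |mbest - current position|.
    L = 2.0 * beta_k * abs(mbest_i - pos)
    u = rng.random() or 1e-12          # avoid log(1/0)
    step = (L / 2.0) * math.log(1.0 / u)
    new = p + step if u >= 0.5 else p - step
    # Wrap the result into the server (base station) index set.
    return int(round(new)) % n_server

rng = random.Random(0)
samples = [update_component(4, 3, 7, 5.0, 0.8, 10, rng) for _ in range(200)]
```

Whatever the random draws, the modulo keeps every new position inside the server set, matching the role of mod N_server in the equations above.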
The specific steps of solving by using the quantum particle swarm algorithm are as follows:
1) Randomly take two values in (0,1) as the S and X parameters of the first particle of the initial generation;
2) Generate the S and X parameters of the remaining initial particles by iterating the Logistic map;
3) Check whether each particle satisfies the constraints; if not, iterate again; if so, compute each particle's fitness and update each particle's best position and the best position among all particles so far;
4) If the termination condition is met, go to step 6); otherwise go to step 5);
5) Generate the next generation of particles with the quantum particle swarm update and go to step 3);
6) End.
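The steps above can be sketched as the following loop. The fitness, constraint, and update hooks are problem-specific placeholders (assumed interfaces), and the one-dimensional demo objective used below is purely illustrative:

```python
import random

def logistic_init(n_particles, rng):
    # Steps 1)-2): first particle at random (avoiding the map's fixed points),
    # the rest by iterating s_{n+1} = 4 s_n (1 - s_n).
    v = rng.random()
    while v in (0.0, 0.25, 0.5, 0.75, 1.0):
        v = rng.random()
    swarm = [v]
    for _ in range(n_particles - 1):
        v = 4.0 * v * (1.0 - v)
        swarm.append(v)
    return swarm

def qpso_solve(fitness, feasible, update, n_particles=8, k_max=50, seed=0):
    rng = random.Random(seed)
    swarm = logistic_init(n_particles, rng)
    # Step 3): redraw any particle that violates the constraints.
    swarm = [p if feasible(p) else rng.random() for p in swarm]
    pbest = list(swarm)
    gbest = max(pbest, key=fitness)
    for k in range(k_max):                 # steps 4)-5)
        swarm = [update(p, pbest[i], gbest, k, rng)
                 for i, p in enumerate(swarm)]
        for i, p in enumerate(swarm):
            if feasible(p) and fitness(p) > fitness(pbest[i]):
                pbest[i] = p
        gbest = max(pbest, key=fitness)
    return gbest                           # step 6)
```

With a toy objective such as maximizing −(p − 0.6)² over [0,1] and a small random-walk update around the global best, the loop converges toward p = 0.6.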
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.