CN115866626A - NSGA-II and simulated annealing based service deployment method under edge environment - Google Patents

NSGA-II and simulated annealing based service deployment method under edge environment

Info

Publication number
CN115866626A
Authority
CN
China
Prior art keywords
service
micro base station
user
Prior art date
Legal status
Granted
Application number
CN202310153844.9A
Other languages
Chinese (zh)
Other versions
CN115866626B (en)
Inventor
储成浩
蔡汝坚
袁水平
陈伟雄
Current Assignee
Anhui Sigao Intelligent Technology Co ltd
Original Assignee
Anhui Sigao Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Sigao Intelligent Technology Co ltd filed Critical Anhui Sigao Intelligent Technology Co ltd
Priority to CN202310153844.9A priority Critical patent/CN115866626B/en
Publication of CN115866626A publication Critical patent/CN115866626A/en
Application granted granted Critical
Publication of CN115866626B publication Critical patent/CN115866626B/en
Status: Active

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a service deployment method in an edge environment based on NSGA-II and simulated annealing, which comprises the following steps: constructing an edge service framework and initializing the attributes of the users and the micro base stations; modeling the edge service deployment problem by taking minimization of the total delay time for users to complete their service chains and minimization of the total number of services processed by the cloud server as optimization targets, and taking the coverage range of the micro base stations and the number of services they can deploy as constraints; encoding the service deployment schemes of all the micro base stations as individuals of the NSGA-II population and solving the model to obtain a final population; selecting a suitable number of excellent individuals as an initial solution set and optimizing it with an improved multi-objective simulated annealing algorithm; and performing fast non-dominated sorting on the optimized solution set to obtain the Pareto optimal front, from which a suitable solution is selected as the final edge service deployment scheme. The invention makes maximal use of the service capability of the micro base stations, and the hybrid algorithm further improves the quality of the solutions, thereby providing a better edge service deployment scheme.

Description

NSGA-II and simulated annealing based service deployment method under edge environment
Technical Field
The invention belongs to the technical field of edge computing, and particularly relates to a service deployment method in an edge environment based on NSGA-II and simulated annealing.
Background
Mobile applications are becoming increasingly complex, requiring a large amount of computing power and consuming considerable energy; because the processing power and battery capacity of mobile devices are limited, heavy computing tasks are often offloaded to a remote cloud server for processing. However, traditional cloud computing bears tremendous pressure: owing to the long distance between the cloud server and the end user, together with ever-increasing network traffic and computational workload, it faces the challenge of maintaining a reliable, low-latency connection with the user. To address this challenge, edge computing was proposed. A service provider deploys services on micro base stations that are distributed across a region and are closer to the users, so that computation is performed at the edge of the network without transmission to a remote cloud server, achieving fast responses to mobile devices. However, micro base stations have limited service capability, and only a few services can be deployed on each micro base station. Meanwhile, each service has several candidates, for example UnionPay, Alipay and WeChat for electronic payment. How to determine the services deployed on each micro base station and select their candidates has therefore become a pressing problem.
The existing edge service deployment methods mainly comprise simple greedy algorithms and genetic algorithms, and they have at least the following problems:
(1) The number of services processed by the cloud server is not considered. In existing solutions, only the service chain response time of the user is taken into account in order to provide a better quality of service. In the edge environment, adjacent micro base stations are connected to each other to form a micro base station connection graph; if a certain service in a user's service chain cannot be executed on any micro base station of the connection graph, the service is uploaded to the cloud server for processing, and the cloud server returns the result after all remaining unexecuted services of the chain have been executed. When the input and output data of a service flow across the micro base station connection graph, transmission time between micro base stations is incurred, whereas on the cloud server this transmission time does not exist. It follows that if the service chain contains a service that no micro base station can execute, then the earlier that service appears in the chain, the shorter the total response time of the chain; at the same time, the earlier that service appears, the more services are processed by the cloud server. In other words, a short service chain response time does not imply a small number of services executed in the cloud. Therefore, if only the service chain response time is considered, and no deployment can complete all user service chains on the micro base stations, unreasonable deployments may arise: the micro base stations tend to deploy the services that complete whole chains and the services that appear late in a chain. According to the above analysis, the purpose of deploying a late service is merely to hand the chain over to the cloud server as early as possible, and once an earlier service of the chain has been uploaded to the cloud server, the later services of that chain are processed by the cloud server anyway, so the late services deployed on the micro base stations are invalid for the corresponding chains. Such unreasonable deployment reduces the total delay of a small number of service chains, but greatly wastes the service capability of the micro base stations and increases the working pressure of the cloud server.
(2) The deployment scheme is not sufficiently optimized. A simple greedy algorithm cannot handle the NP-hard edge service deployment problem well, while an ordinary genetic algorithm searches inefficiently in the later stage of evolution and is prone to premature convergence, that is, a super individual appears in the population whose fitness greatly exceeds the average fitness of the current population. Such an individual quickly occupies an absolute proportion of the population, the diversity of the population drops rapidly, the population essentially loses its ability to evolve, and the algorithm converges early to a locally optimal solution, so a better edge service deployment scheme cannot be provided.
Disclosure of Invention
In view of the above, the invention provides a service deployment method in an edge environment based on NSGA-II and simulated annealing, which is used for solving the problems that conventional edge service deployment methods do not consider the number of times users access the cloud server and are insufficiently optimized. The method mainly comprises the following steps:
S1, constructing an edge service framework consisting of M micro base stations and N users, constructing a service deployment scheme for the micro base stations, and initializing the attributes of the users and the micro base stations, wherein each user requests a service chain from the server, the service chain is composed of different types of services, each service has c specific candidate services, services deployed on the micro base stations are processed by the micro base stations, and services not deployed on the micro base stations are processed by the cloud server;
S2, modeling the edge service deployment problem by taking minimization of the total delay time for users to complete their service chains and minimization of the total number of services processed by the cloud server as optimization targets, and taking the coverage range of the micro base stations and the number of services they can deploy as constraints;
S3, encoding the service deployment schemes of all the micro base stations as individuals of the NSGA-II population, solving the model established in step S2 with the NSGA-II algorithm, and obtaining a final population after the specified number of iterations is reached;
S4, selecting a suitable number of excellent individuals from the final population obtained in step S3 as an initial solution set, and optimizing the solution set through an improved multi-objective simulated annealing algorithm to obtain an optimized solution set;
and S5, performing fast non-dominated sorting on the optimized solution set obtained in step S4 to obtain the Pareto optimal front, and selecting from the front a suitable solution as the final edge service deployment scheme, with the aim of balancing minimization of the total delay time for users to complete their service chains and minimization of the total number of services processed by the cloud server.
Further, in step S1, the initializing step specifically includes:
S11, setting the types of services and the number of candidates of each service;
S12, setting the number of users, the geographic position of each user and the requested service chain, wherein the service chain consists of a string of services of different types and a candidate is selected for each service, the service chain of user i being defined as:
SC_i = (c_{1 j_1}, c_{2 j_2}, \ldots, c_{Q j_Q})
wherein c_{q j_q} denotes that the service numbered q in the service chain of user i has selected its j_q-th candidate;
S13, setting the number M of micro base stations, the geographic position of each micro base station and the coverage range of its service signal, and determining the number of services each micro base station can deploy and the reachability between micro base stations, wherein a service chain requested by a user can be routed between two mutually reachable micro base stations.
Further, step S2 specifically includes:
S21, calculating the total delay time for all users to complete their service chains: when a user requests a service chain, the data is first uploaded to the nearest micro base station whose signal covers the user, or directly uploaded to the cloud server through the macro base station if the user is not covered by any micro base station; the micro base stations then process the services of the chain in sequence, and whenever a service is not deployed on the current micro base station, the data is routed to the nearest reachable micro base station on which the service is deployed, or transmitted to the cloud server if no reachable micro base station has deployed the service; once the service chain data reaches the cloud server, the remaining services are processed by the cloud server; after the whole service chain has been processed, the data is transmitted back to the user through the macro base station if the last service was processed by the cloud server, and otherwise through the nearest micro base station covering the user. The calculation formula of the total delay time for all users to complete their service chains is as follows:
T = \sum_{n=1}^{N} \left( t_{up}^{n} + \sum_{q=1}^{Q} \left( t_{q}^{n} + t_{exe} \right) + t_{down}^{n} \right)
wherein N is the number of users, Q is the length of the service chain, t_{up}^{n} is the data uplink time of the nth user, t_{q}^{n} is the time taken to transmit the qth service of the nth user to a qualified micro base station or to the cloud server, t_{exe} is the execution time for processing a single service, and t_{down}^{n} is the data downlink time of the nth user;
t_{up}^{n} = \begin{cases} \alpha \, d(n, S_n), & \text{if user } n \text{ is covered by a micro base station} \\ \alpha \, d(n, S_0) + T_b, & \text{otherwise} \end{cases}
wherein α is the reciprocal of the wireless transmission rate, d(n, S_0) is the distance from user n to the macro base station S_0, d(n, S_n) is the distance from user n to S_n, the nearest micro base station that can cover user n, and T_b is the time for data to be transmitted from a base station to the cloud server through the backbone network;
t_{q}^{n} = \begin{cases} \beta \, d(S_p, S_q), & \text{if a reachable micro base station has deployed the qth service} \\ T_b, & \text{if the service has to be forwarded to the cloud server} \\ 0, & \text{if the data is already at the cloud server} \end{cases}
wherein β is the reciprocal of the wired transmission rate, d(S_p, S_q) is the distance from micro base station p to micro base station q, S_p is the micro base station currently holding the data, S_q is the micro base station that is reachable from S_p with the minimum number of hops and can process the qth service of user n, and T_b is the time for data to be transmitted from a base station to the cloud server through the backbone network;
t_{down}^{n} = \begin{cases} \beta \, d(S_e, S_n) + \alpha \, d(n, S_n), & \text{if the last service is processed by a micro base station} \\ T_b + \alpha \, d(n, S_0), & \text{if the last service is processed by the cloud server} \end{cases}
wherein d(S_e, S_n) is the distance from micro base station e to micro base station n, S_e is the micro base station that processes the last service of user n's service chain, S_n is the nearest micro base station that can cover user n, and T_b is the time for data to be transmitted from a base station to the cloud server through the backbone network;
S22, calculating the total number of services processed by the cloud server:
W = \sum_{n=1}^{N} \sum_{q=1}^{Q} y_{n,q}
wherein y_{n,q} = 1 indicates that the qth service of the service chain requested by user n is processed by the cloud server, and y_{n,q} = 0 indicates that it is processed by a micro base station;
S23, modeling the edge service deployment problem as follows: under the constraints of the coverage range of the micro base station and the number of deployed services, the total delay time of a user for completing a service chain is minimized, the total number of services processed by the cloud server is minimized, and the mathematical model is as follows:
\min \; T = \sum_{n=1}^{N} \left( t_{up}^{n} + \sum_{q=1}^{Q} \left( t_{q}^{n} + t_{exe} \right) + t_{down}^{n} \right)
\min \; W = \sum_{n=1}^{N} \sum_{q=1}^{Q} y_{n,q}
\text{s.t.} \quad d_m \le cap(S_m), \; m = 1, \ldots, M
\qquad \; d(n, S_n) \le cov(S_n), \; n = 1, \ldots, N
wherein d_m is the number of services deployed on micro base station m, cap(S_m) is the maximum number of services that micro base station m can deploy, and cov(S_n) is the signal coverage radius of the micro base station that covers user n and is closest to user n.
Further, step S3 specifically includes:
S31, encoding the service deployment schemes of all the micro base stations as: X = [x(b_1), ..., x(b_M)], wherein x(b_i) is the deployment vector of micro base station i, the length of the deployment vector is b_i, and each element of the vector is a candidate c_{ij}, indicating that the jth candidate of service i is deployed;
S32, initializing the population, each individual being deployed randomly: for micro base station i, b_i different services are randomly selected and a candidate is randomly chosen for each of them, wherein b_i is the maximum number of services that micro base station i can deploy;
S33, performing selection, crossover and mutation on the current population to generate an offspring population;
S34, merging the parent population and the offspring population and performing fast non-dominated sorting;
S35, according to the non-dominated sorting result, computing crowding distances starting from the 0th front, adding individuals to the population in descending order of crowding distance, discarding individuals whose crowding distance is zero, and, after a front has been processed, processing the next front in turn until the population size reaches the expected value;
and S36, judging whether the specified number of iterations has been reached; if so, terminating the NSGA-II algorithm and outputting the current population, and otherwise returning to step S33 for the next iteration.
Further, the selection mode in step S33 is binary tournament selection with replacement, in which the comparison basis is the dominance relationship; the crossover mode is multi-point crossover, in which a segment of a certain length is selected and the deployment vectors of the two individuals within the segment are exchanged; the mutation mode is multi-point mutation, in which a segment of a certain length is selected and the deployment vectors of the individual within the segment are randomly redeployed.
Further, in step S4, the traditional single-objective simulated annealing algorithm is improved into a multi-objective simulated annealing algorithm, which specifically includes:
S41, performing fast non-dominated sorting on the final population obtained in step S3, taking the individuals on the 0th front and the 1st front as the initial solution set of the improved simulated annealing algorithm, and performing the following optimization operations on each individual of the initial solution set;
S42, performing a mutation operation on the current solution individual, the mutation being that a segment of the deployment vector of a certain length is randomly selected and randomly redeployed, generating a neighbouring solution individual;
S43, if the current solution individual cannot dominate the newly generated solution individual, adding the new solution to the solution set and letting it replace the current solution for the next iteration; otherwise, determining the solution that enters the next iteration according to the following Metropolis acceptance criterion:
I_{k+1} = \begin{cases} I_k', & \text{if } r < \exp\!\left( -\, \| F_k' - F_k \| \,/\, T \right) \\ I_k, & \text{otherwise} \end{cases}
wherein I_{k+1} is the solution individual that enters the next iteration, I_k is the current solution individual, I_k' is the new solution individual generated from the current solution individual by mutation, r is a random real number in [0,1], F_k is the fitness value vector of I_k, F_k' is the fitness value vector of I_k', the fitness values being defined in the same way as the individual fitness values in NSGA-II, and T is the current temperature;
S44, if k is less than the specified number of iterations, returning to step S42 for the next iteration at temperature T; otherwise, performing the cooling operation of simulated annealing, T = a × T, where a is the temperature decrease coefficient and a ∈ (0, 1), and proceeding to step S45;
S45, if T < T_min, wherein T_min is the set lower temperature limit, terminating the simulated annealing algorithm and outputting the current solution set; otherwise, returning to step S42 and performing the next iteration after cooling.
The technical scheme provided by the invention has the following beneficial effects:
1) By combining NSGA-II with the simulated annealing algorithm, a service deployment scheme for micro base stations in the field of edge computing is provided; the scheme balances the total delay time for users to complete their service chains against the total number of services processed by the cloud server, makes full use of the service capability of the micro base stations, and reduces the working pressure of the cloud server;
2) The number of services processed by the cloud server is taken into account, unreasonable deployment is avoided, the service capability of the micro base stations is utilized to the greatest extent, and the method has good practicability;
3) The hybrid algorithm further optimizes and improves the quality of the solutions, so that a better edge service deployment scheme can be provided.
Drawings
FIG. 1 is a flow chart of a service deployment method in an edge environment based on NSGA-II and simulated annealing according to the present invention;
FIG. 2 is a block diagram of a service deployment method in an edge environment based on NSGA-II and simulated annealing according to the present invention;
FIG. 3 is a schematic diagram of an edge service deployment framework in an embodiment of the invention;
FIG. 4 is a schematic diagram of the final solution set obtained in the embodiment of the present invention, in which the square points represent the Pareto optimal solutions obtained by the NSGA-II algorithm and the triangular points represent the Pareto optimal solutions after optimization by the simulated annealing algorithm;
fig. 5 is a schematic diagram of a solution set obtained by using random deployment in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
The embodiment of the invention provides a service deployment method in an edge environment based on NSGA-II and simulated annealing; the overall flow and the block diagram are shown in FIG. 1 and FIG. 2, and the method comprises the following steps:
S1, constructing an edge service framework composed of M micro base stations and N users, referring to FIG. 3, which is a schematic diagram of an edge service deployment framework in an embodiment of the present invention, and initializing the attributes of the users and the micro base stations, wherein each user requests a service chain from the server, the service chain is composed of different types of services, each service has several specific candidate services, services deployed on the micro base stations are processed by the micro base stations, and services not deployed on the micro base stations are processed by the cloud server.
A specific candidate service may be denoted c_{ij}, which refers to the jth candidate of the service numbered i. For example, when a user shops online, the service chain comprises browsing commodities and payment, where the commodity-browsing service can be provided by candidates such as Taobao, JD.com and Pinduoduo, and the payment service can be provided by candidates such as Alipay and WeChat; a particular user's service chain might select Taobao for the commodity-browsing service and Alipay for the payment service. Each micro base station deploys several specific services to respond to user requests; the services of a user's service chain are processed in sequence and can be routed between micro base stations, different micro base stations responding to different services, and if no micro base station contains the service requested by the user, the cloud server responds to the request. The initialization step specifically comprises:
s11, setting the type of service and the number of candidates of each service.
In this embodiment, a total of 10 services are set, and then an integer in the interval [2,5] is randomly generated for each service as the number of candidates for the service.
S12, setting the number of users, the geographic position of each user and the requested service chain, wherein the service chain is composed of a string of services of different types and a candidate is selected for each service; the service chain of user i is defined as:
SC_i = (c_{1 j_1}, c_{2 j_2}, \ldots, c_{Q j_Q})
wherein c_{q j_q} denotes that the service numbered q in the service chain of user i has selected its j_q-th candidate.
In this embodiment, an EUA data set (https://github.com/swinedge/EUA-dataset) is used, which is a location information data set commonly used in the field of edge computing, including geographical location information of Australian edge servers and end users. Here, the user data in the EUA data set is used: 400 users are randomly extracted from the EUA data set, and the geographic positions of the users are represented by longitude and latitude. Then, a service chain is generated for each user, the length of each service chain is set to 10, and a candidate is randomly selected for each service.
S13, setting the number M of micro base stations, the geographic position of each micro base station and the coverage range of its service signal, and determining the number of services each micro base station can deploy and the reachability between micro base stations, wherein, if two micro base stations are mutually reachable, a service chain requested by a user can be routed between them.
In this embodiment, the server data in the EUA data set is used: 50 servers are randomly extracted as the micro base stations, and the geographical location of each micro base station is represented by longitude and latitude. For each micro base station, an integer in the interval [200,600] is randomly generated as its coverage radius, and an integer in the interval [3,5] is randomly generated as the number of services it can deploy. The reachability among the micro base stations is represented by an adjacency matrix, each micro base station is connected with at most 3 micro base stations within 300 m, and the shortest hop count between every two micro base stations is obtained with the Floyd algorithm.
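Purely for illustration (the patent itself contains no code), the following Python sketch builds the adjacency relationship and the shortest hop counts between micro base stations with the Floyd-Warshall algorithm described above; the random coordinates and all function and variable names are assumptions, while the 300 m link range and the limit of 3 links per station follow this embodiment.

```python
import math
import random

INF = float("inf")

def build_hop_matrix(positions, link_range=300.0, max_links=3):
    """positions: list of (x, y) micro base station coordinates in metres."""
    m = len(positions)
    hops = [[0 if i == j else INF for j in range(m)] for i in range(m)]
    degree = [0] * m
    # Link each micro base station to at most `max_links` nearest neighbours within range.
    for i in range(m):
        neighbours = sorted((math.dist(positions[i], positions[j]), j)
                            for j in range(m) if j != i)
        for dist, j in neighbours:
            if dist > link_range or degree[i] >= max_links:
                break
            if degree[j] < max_links and hops[i][j] == INF:
                hops[i][j] = hops[j][i] = 1
                degree[i] += 1
                degree[j] += 1
    # Floyd-Warshall relaxation gives the shortest hop count between every pair.
    for k in range(m):
        for i in range(m):
            for j in range(m):
                if hops[i][k] + hops[k][j] < hops[i][j]:
                    hops[i][j] = hops[i][k] + hops[k][j]
    return hops

random.seed(0)
stations = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(50)]
hop_matrix = build_hop_matrix(stations)   # hop_matrix[i][j] == INF means unreachable
```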
And S2, modeling the edge service deployment problem by taking the minimum total delay time of the user for completing the service chain and the minimum total number of services processed by the cloud server as optimization targets and taking the coverage area of the micro base station and the number of the deployed services as constraints.
And S21, calculating the total delay time for all users to complete their service chains. When a user requests a service chain, the data is uploaded to the nearest micro base station whose signal covers the user, or directly uploaded to the cloud server through the macro base station if the user is not covered by any micro base station; the micro base stations then process the services of the chain in sequence, and whenever a service is not deployed on the current micro base station, the data is routed to the nearest reachable micro base station on which the service is deployed, or transmitted to the cloud server if no reachable micro base station has deployed the service; the cloud server can process all types of services, and once the service chain data reaches the cloud server, the remaining services are processed by the cloud server; after the whole service chain has been processed, the data is transmitted back to the user through the macro base station if the last service was processed by the cloud server, and otherwise through the nearest micro base station covering the user. The calculation formula of the total delay time for all users to complete their service chains is as follows:
T = \sum_{n=1}^{N} \left( t_{up}^{n} + \sum_{q=1}^{Q} \left( t_{q}^{n} + t_{exe} \right) + t_{down}^{n} \right)
wherein N is the number of users, Q is the length of the service chain, t_{up}^{n} is the data uplink time of the nth user, t_{q}^{n} is the time taken to transmit the qth service of the nth user to a qualified micro base station or to the cloud server, t_{exe} is the execution time for processing a single service, and t_{down}^{n} is the data downlink time of the nth user.
t_{up}^{n} = \begin{cases} \alpha \, d(n, S_n), & \text{if user } n \text{ is covered by a micro base station} \\ \alpha \, d(n, S_0) + T_b, & \text{otherwise} \end{cases}
wherein α is the reciprocal of the wireless transmission rate, d(n, S_0) is the distance from user n to the macro base station S_0, d(n, S_n) is the distance from user n to S_n, the nearest micro base station that can cover user n, and T_b is the time for data to be transmitted from a base station to the cloud server through the backbone network.
t_{q}^{n} = \begin{cases} \beta \, d(S_p, S_q), & \text{if a reachable micro base station has deployed the qth service} \\ T_b, & \text{if the service has to be forwarded to the cloud server} \\ 0, & \text{if the data is already at the cloud server} \end{cases}
wherein β is the reciprocal of the wired transmission rate, d(S_p, S_q) is the distance from micro base station p to micro base station q, S_p is the micro base station currently holding the data, S_q is the micro base station that is reachable from S_p with the minimum number of hops and can process the qth service of user n, and T_b is the time for data to be transmitted from a base station to the cloud server through the backbone network.
t_{down}^{n} = \begin{cases} \beta \, d(S_e, S_n) + \alpha \, d(n, S_n), & \text{if the last service is processed by a micro base station} \\ T_b + \alpha \, d(n, S_0), & \text{if the last service is processed by the cloud server} \end{cases}
wherein d(S_e, S_n) is the distance from micro base station e to micro base station n, S_e is the micro base station that processes the last service of user n's service chain, S_n is the nearest micro base station that can cover user n, and T_b is the time for data to be transmitted from a base station to the cloud server through the backbone network.
S22, calculating the total number of the services processed by the cloud server:
W = \sum_{n=1}^{N} \sum_{q=1}^{Q} y_{n,q}
wherein y_{n,q} = 1 indicates that the qth service of the service chain requested by user n is processed by the cloud server, and y_{n,q} = 0 indicates that it is processed by a micro base station.
Specifically, there are two optimization goals:
\min \; T = \sum_{n=1}^{N} \left( t_{up}^{n} + \sum_{q=1}^{Q} \left( t_{q}^{n} + t_{exe} \right) + t_{down}^{n} \right)
\min \; W = \sum_{n=1}^{N} \sum_{q=1}^{Q} y_{n,q}
The constraint conditions are that a user can only communicate with the macro base station or with a micro base station covering the user, and that each micro base station can deploy at most the number of services specified in step S13.
S23, modeling the edge service deployment problem as follows: under the constraints of the coverage area of the micro base station and the number of deployed services, the total delay time of a user for completing a service chain is minimized, the total number of services processed by a cloud server is minimized, and a mathematical model is as follows:
\min \; T = \sum_{n=1}^{N} \left( t_{up}^{n} + \sum_{q=1}^{Q} \left( t_{q}^{n} + t_{exe} \right) + t_{down}^{n} \right)
\min \; W = \sum_{n=1}^{N} \sum_{q=1}^{Q} y_{n,q}
\text{s.t.} \quad d_m \le cap(S_m), \; m = 1, \ldots, M
\qquad \; d(n, S_n) \le cov(S_n), \; n = 1, \ldots, N
wherein d_m is the number of services deployed on micro base station m, cap(S_m) is the maximum number of services that micro base station m can deploy, and cov(S_n) is the signal coverage radius of the micro base station that covers user n and is closest to user n.
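To make the model concrete, the following simplified Python sketch (an assumed rendering, not the patent's implementation) computes the two objective values T and W for a given deployment by following the routing rules of steps S21-S22; the constants ALPHA, BETA, T_B and T_EXE, the macro base station position MACRO_POS and all identifiers are illustrative assumptions.

```python
import math

# Assumed illustrative constants; the patent does not fix numeric values here.
ALPHA, BETA = 1e-3, 1e-4        # reciprocals of the wireless / wired transmission rates
T_B, T_EXE = 0.5, 0.2           # backbone transfer time and per-service execution time
MACRO_POS = (0.0, 0.0)          # assumed position of the macro base station
INF = float("inf")

def evaluate(users, stations, hops, deployment):
    """users: list of {'pos': (x, y), 'chain': [candidate services, e.g. (service, candidate) pairs]}
       stations: list of {'pos': (x, y), 'cover': radius}
       hops: hop-count matrix between micro base stations (INF if unreachable)
       deployment: deployment[m] is the set of candidate services deployed on station m.
       Returns (total delay T, number of cloud-processed services W)."""
    total_delay, cloud_count = 0.0, 0
    for u in users:
        covering = [m for m, s in enumerate(stations)
                    if math.dist(u["pos"], s["pos"]) <= s["cover"]]
        if covering:   # uplink to the nearest covering micro base station
            entry = min(covering, key=lambda m: math.dist(u["pos"], stations[m]["pos"]))
            delay = ALPHA * math.dist(u["pos"], stations[entry]["pos"])
            current, at_cloud = entry, False
        else:          # uplink through the macro base station straight to the cloud
            delay = ALPHA * math.dist(u["pos"], MACRO_POS) + T_B
            current, at_cloud = None, True
        for svc in u["chain"]:
            if not at_cloud:
                reachable = [m for m in range(len(stations))
                             if svc in deployment[m] and hops[current][m] != INF]
                if reachable:   # route to the reachable station with the fewest hops
                    nxt = min(reachable, key=lambda m: hops[current][m])
                    delay += BETA * math.dist(stations[current]["pos"], stations[nxt]["pos"])
                    current = nxt
                else:           # no reachable station deploys it: hand over to the cloud
                    delay += T_B
                    at_cloud = True
            if at_cloud:
                cloud_count += 1
            delay += T_EXE
        if at_cloud:            # downlink via the macro base station
            delay += T_B + ALPHA * math.dist(u["pos"], MACRO_POS)
        else:                   # downlink via the entry micro base station
            delay += BETA * math.dist(stations[current]["pos"], stations[entry]["pos"]) \
                     + ALPHA * math.dist(u["pos"], stations[entry]["pos"])
        total_delay += delay
    return total_delay, cloud_count
```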
And S3, encoding the service deployment schemes of all the micro base stations as individuals of the NSGA-II population, solving the model established in step S2 with the NSGA-II algorithm, and obtaining the final population after the specified number of iterations is reached.
S31, encoding the service deployment schemes of all the micro base stations as: X = [x(b_1), ..., x(b_M)], wherein x(b_i) is the deployment vector of micro base station i, the length of the deployment vector is b_i, and each element of the vector is a candidate c_{ij}, indicating that the jth candidate of service i is deployed; the chromosome of each individual of the population in the NSGA-II algorithm thus represents a service deployment scheme for all micro base stations.
S32, initializing the population, each individual being deployed randomly: for micro base station i, b_i different services are randomly selected and a candidate is randomly chosen for each of them, wherein b_i is the maximum number of services that micro base station i can deploy;
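The encoding and random initialization of steps S31-S32 can be sketched as follows (illustrative Python, all names assumed); a (service, candidate) pair stands for an element c_ij of a deployment vector.

```python
import random

def random_individual(station_capacity, num_services, candidates_per_service):
    """station_capacity[i] = b_i, the maximum number of services micro base station i can deploy."""
    individual = []
    for b_i in station_capacity:
        services = random.sample(range(num_services), b_i)             # b_i distinct services
        vector = [(s, random.randrange(candidates_per_service[s]))     # one random candidate each
                  for s in services]
        individual.append(vector)
    return individual

# Example matching this embodiment: 50 stations, 10 service types, 2-5 candidates each.
capacities = [random.randint(3, 5) for _ in range(50)]
candidates = [random.randint(2, 5) for _ in range(10)]
population = [random_individual(capacities, 10, candidates) for _ in range(100)]
```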
and S33, selecting, crossing and mutating the current population to generate a progeny population.
In the embodiment, 4 individuals are randomly selected from the current population each time, the championship match selection is carried out in two groups, and the comparison basis of the selection is the domination relationship; selecting the obtained two parent individuals to perform cross operation, namely randomly selecting a plurality of micro base stations and exchanging service deployment of the two individuals to the micro base stations; and respectively carrying out mutation operation on the two individuals obtained after the crossing, namely randomly selecting a plurality of micro base stations and carrying out random service deployment on the micro base stations. And repeating the selection cross variation operation until the filial generation population reaches the specified scale.
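One possible Python rendering of these operators is sketched below (assumed, not taken from the patent); when neither tournament contestant dominates the other, the sketch falls back to a random pick, and the numbers of crossover and mutation points are assumed parameters.

```python
import copy
import random

def dominates(f_a, f_b):
    """f_a, f_b: objective tuples (total delay, cloud-processed services); smaller is better."""
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def tournament(population, fitness):
    """Binary tournament selection with replacement, compared by Pareto dominance."""
    a, b = random.sample(range(len(population)), 2)
    if dominates(fitness[a], fitness[b]):
        return population[a]
    if dominates(fitness[b], fitness[a]):
        return population[b]
    return population[random.choice((a, b))]

def crossover(parent1, parent2, num_points=3):
    """Multi-point crossover: swap the deployment vectors of a few randomly chosen stations."""
    child1, child2 = copy.deepcopy(parent1), copy.deepcopy(parent2)
    for m in random.sample(range(len(parent1)), num_points):
        child1[m], child2[m] = child2[m], child1[m]
    return child1, child2

def mutate(individual, capacities, num_services, candidates, num_points=2):
    """Multi-point mutation: redeploy a few randomly chosen stations from scratch."""
    for m in random.sample(range(len(individual)), num_points):
        services = random.sample(range(num_services), capacities[m])
        individual[m] = [(s, random.randrange(candidates[s])) for s in services]
    return individual
```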
And S34, merging the parent population and the offspring population and performing fast non-dominated sorting.
In this embodiment, merging the populations serves to keep the elite individuals; fast non-dominated sorting divides the whole population into a number of fronts according to the dominance relationship, such that every individual on a front with a larger index is dominated by some individual on a front with a smaller index, while individuals on the same front do not dominate each other.
S35, according to the non-dominated sorting result, computing crowding distances starting from the 0th front, adding individuals to the population in descending order of crowding distance, discarding individuals whose crowding distance is zero (i.e., whose objective values coincide with those of their neighbouring individuals), and, after a front has been processed, processing the next front in turn until the population size reaches the expected value; selecting individuals front by front according to crowding distance protects the diversity of the population.
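The elitist selection of steps S34-S35 can be sketched as follows (illustrative code; unlike the full method described above, this simplified version does not additionally discard individuals whose crowding distance is zero).

```python
def dominates(f_a, f_b):   # same helper as in the preceding sketch
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def fast_nondominated_sort(fitness):
    """Split indices 0..len(fitness)-1 into fronts; front 0 is the non-dominated set."""
    fronts, dominated_by, counts = [[]], [set() for _ in fitness], [0] * len(fitness)
    for p in range(len(fitness)):
        for q in range(len(fitness)):
            if dominates(fitness[p], fitness[q]):
                dominated_by[p].add(q)
            elif dominates(fitness[q], fitness[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]   # drop the trailing empty front

def crowding_distance(front, fitness):
    dist = {p: 0.0 for p in front}
    for obj in range(len(fitness[front[0]])):
        ordered = sorted(front, key=lambda p: fitness[p][obj])
        span = fitness[ordered[-1]][obj] - fitness[ordered[0]][obj] or 1.0
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")
        for k in range(1, len(ordered) - 1):
            dist[ordered[k]] += (fitness[ordered[k + 1]][obj] - fitness[ordered[k - 1]][obj]) / span
    return dist

def select_next_population(merged, fitness, size):
    """Fill the next population front by front, preferring large crowding distances."""
    survivors = []
    for front in fast_nondominated_sort(fitness):
        dist = crowding_distance(front, fitness)
        for p in sorted(front, key=lambda q: dist[q], reverse=True):
            if len(survivors) < size:
                survivors.append(merged[p])
    return survivors
```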
And S36, judging whether the specified number of iterations has been reached; if so, terminating the NSGA-II algorithm and outputting the current population, and otherwise returning to step S33 for the next iteration.
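Tying steps S32-S36 together, an assumed outline of the NSGA-II outer loop, built from the helpers sketched above, could look as follows; evaluate_fn is assumed to map one individual to its objective tuple (T, W), for example by converting each station's deployment vector to the set form expected by the evaluator sketched after step S23.

```python
def nsga2(pop_size, generations, evaluate_fn, capacities, num_services, candidates):
    """Outer loop of steps S32-S36, built from the helpers sketched above."""
    population = [random_individual(capacities, num_services, candidates)
                  for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [evaluate_fn(ind) for ind in population]
        offspring = []
        while len(offspring) < pop_size:   # selection, crossover and mutation
            p1, p2 = tournament(population, fitness), tournament(population, fitness)
            c1, c2 = crossover(p1, p2)
            offspring.append(mutate(c1, capacities, num_services, candidates))
            offspring.append(mutate(c2, capacities, num_services, candidates))
        merged = population + offspring                  # keep the elite parents
        merged_fitness = [evaluate_fn(ind) for ind in merged]
        population = select_next_population(merged, merged_fitness, pop_size)
    return population
```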
And S4, selecting a suitable number of excellent individuals from the final population obtained in step S3 as an initial solution set, and optimizing the solution set through an improved multi-objective simulated annealing algorithm to obtain an optimized solution set; the improved algorithm optimizes each individual of the input initial solution set and keeps the good solutions encountered during the optimization.
S41, performing fast non-dominated sorting on the final population obtained in step S3, taking the individuals on the 0th front and the 1st front as the initial solution set of the improved simulated annealing algorithm, and performing the following optimization operations on each individual of the initial solution set.
S42, performing a mutation operation on the current solution individual, the mutation being that a segment of the deployment vector of a certain length is randomly selected and randomly redeployed, generating a neighbouring solution individual; this searches the neighbouring solution space of the current individual, and the mutation is similar to the mutation in the genetic algorithm, except that the mutated segments are shorter.
S43, if the current solution individual cannot dominate the newly generated solution individual, adding the new solution to the solution set and letting it replace the current solution for the next iteration; otherwise, determining the solution that enters the next iteration according to the following Metropolis acceptance criterion:
I_{k+1} = \begin{cases} I_k', & \text{if } r < \exp\!\left( -\, \| F_k' - F_k \| \,/\, T \right) \\ I_k, & \text{otherwise} \end{cases}
wherein I_{k+1} is the solution individual that enters the next iteration, I_k is the current solution individual, I_k' is the new solution individual generated from the current solution individual by mutation, r is a random real number in [0,1], F_k is the fitness value vector of I_k, F_k' is the fitness value vector of I_k', the fitness values being defined in the same way as the individual fitness values in NSGA-II, and T is the current temperature;
s44, if k is smaller than the specified iteration times, returning to the step S42 to perform the next iteration at the temperature T; otherwise, carrying out temperature reduction operation of simulated annealing: t = a × T, where a is the temperature decrease coefficient, and a ∈ (0, 1), and proceeds to step S45; at each temperature, the iterative search is performed several times, ensuring an adequate search of the adjacent solution space for the current solution.
S45, if T < T_min, wherein T_min is the set lower temperature limit, terminating the simulated annealing algorithm and outputting the current solution set; otherwise, returning to step S42 and performing the next iteration after cooling.
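The loop of steps S41-S45 can be condensed into the sketch below; the scalar gap used in the Metropolis test (a sum of per-objective differences), the default temperatures and the identifiers are assumptions made for illustration, since the patent only states that the fitness values are defined as in NSGA-II.

```python
import math
import random

def dominates(f_a, f_b):   # as in the earlier sketches
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def anneal(solution, evaluate_fn, neighbour_fn, t0=1.0, t_min=1e-3, a=0.9, iters_per_t=20):
    """Optimize one elite individual; keep every non-dominated neighbour met on the way."""
    archive = [solution]
    current, f_cur = solution, evaluate_fn(solution)
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            cand = neighbour_fn(current)          # short random redeployment (step S42)
            f_cand = evaluate_fn(cand)
            if not dominates(f_cur, f_cand):      # current does not dominate the neighbour
                archive.append(cand)
                current, f_cur = cand, f_cand
            else:                                 # Metropolis acceptance of a worse neighbour
                gap = sum(fc - fo for fc, fo in zip(f_cand, f_cur))
                if random.random() < math.exp(-gap / t):
                    current, f_cur = cand, f_cand
        t *= a                                    # cooling: T = a * T with a in (0, 1)
    return archive
```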
And S5, performing fast non-dominated sorting on the optimized solution set obtained in step S4 to obtain the Pareto optimal front, and selecting from the front, according to the requirements, a suitable solution as the final edge service deployment scheme. The principle for selecting a suitable solution from the Pareto optimal front is to balance, according to the load capacity of the cloud server, the following two aspects: minimizing the total delay time for users to complete their service chains, so that the quality of the user experience is best; and minimizing the total number of services processed by the cloud server, so that the service capability of the micro base stations is fully utilized and the workload of the cloud server is reduced.
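As one possible heuristic (assumed, not prescribed by the patent) for picking such a balanced solution from the Pareto optimal front, the deployment closest to the ideal point after min-max normalisation of both objectives can be chosen:

```python
def pick_balanced(front_solutions, front_fitness):
    """Choose the solution closest, after min-max normalisation, to the ideal point."""
    lows = [min(f[i] for f in front_fitness) for i in range(2)]
    highs = [max(f[i] for f in front_fitness) for i in range(2)]
    def norm_dist(f):
        return sum(((f[i] - lows[i]) / ((highs[i] - lows[i]) or 1.0)) ** 2 for i in range(2))
    best = min(range(len(front_solutions)), key=lambda k: norm_dist(front_fitness[k]))
    return front_solutions[best]
```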
In this embodiment, the final solution set obtained by NSGA-II and the simulated annealing algorithm is shown in FIG. 4, where the square points represent the Pareto optimal solution set obtained by the NSGA-II algorithm and the triangular points represent the Pareto optimal solution set after optimization by the simulated annealing algorithm; each point corresponds to a deployment scheme, and comparison with the solution set obtained by random deployment shown in FIG. 5 shows that the method of the present invention achieves a substantially better optimization of the edge service deployment scheme. After the final solution set is obtained, a suitable edge deployment scheme can be selected according to the trade-off between the total delay time for users to complete their service chains and the total number of services processed by the cloud server.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A service deployment method under an edge environment based on NSGA-II and simulated annealing is characterized by comprising the following steps:
S1, constructing an edge service framework consisting of M micro base stations and N users, constructing a service deployment scheme for the micro base stations, and initializing the attributes of the users and the micro base stations, wherein each user requests a service chain from the server, the service chain is composed of different types of services, each service has c specific candidate services, services deployed on the micro base stations are processed by the micro base stations, and services not deployed on the micro base stations are processed by the cloud server;
S2, modeling the edge service deployment problem by taking minimization of the total delay time for users to complete their service chains and minimization of the total number of services processed by the cloud server as optimization targets, and taking the coverage range of the micro base stations and the number of services they can deploy as constraints;
S3, encoding the service deployment schemes of all the micro base stations as individuals of the NSGA-II population, solving the model established in step S2 with the NSGA-II algorithm, and obtaining a final population after the specified number of iterations is reached;
S4, selecting a suitable number of excellent individuals from the final population obtained in step S3 as an initial solution set, and optimizing the solution set through an improved multi-objective simulated annealing algorithm to obtain an optimized solution set;
and S5, performing fast non-dominated sorting on the optimized solution set obtained in step S4 to obtain the Pareto optimal front, and selecting from the front a suitable solution as the final edge service deployment scheme, with the aim of balancing minimization of the total delay time for users to complete their service chains and minimization of the total number of services processed by the cloud server.
2. The NSGA-II and simulated annealing based service deployment method under the edge environment according to claim 1, wherein in the step S1, the step of initializing specifically comprises:
S11, setting the types of services and the number of candidates of each service;
S12, setting the number of users, the geographic position of each user and the requested service chain, wherein the service chain is composed of a string of services of different types and a candidate is selected for each service, the service chain of user i being defined as:
SC_i = (c_{1 j_1}, c_{2 j_2}, \ldots, c_{Q j_Q})
wherein c_{q j_q} denotes that the service numbered q in the service chain of user i has selected its j_q-th candidate;
S13, setting the number M of micro base stations, the geographic position of each micro base station and the coverage range of its service signal, and determining the number of services each micro base station can deploy and the reachability between micro base stations, wherein a service chain requested by a user can be routed between two mutually reachable micro base stations.
3. The NSGA-II and simulated annealing based service deployment method under the edge environment according to claim 1, wherein the step S2 specifically comprises:
S21, calculating the total delay time for all users to complete their service chains: when a user requests a service chain, the data is first uploaded to the nearest micro base station whose signal covers the user, or directly uploaded to the cloud server through the macro base station if the user is not covered by any micro base station; the micro base stations then process the services of the chain in sequence, and whenever a service is not deployed on the current micro base station, the data is routed to the nearest reachable micro base station on which the service is deployed, or transmitted to the cloud server if no reachable micro base station has deployed the service; once the service chain data reaches the cloud server, the remaining services are processed by the cloud server; after the whole service chain has been processed, the data is transmitted back to the user through the macro base station if the last service was processed by the cloud server, and otherwise through the nearest micro base station covering the user. The calculation formula of the total delay time for all users to complete their service chains is as follows:
T = \sum_{n=1}^{N} \left( t_{up}^{n} + \sum_{q=1}^{Q} \left( t_{q}^{n} + t_{exe} \right) + t_{down}^{n} \right)
wherein N is the number of users, Q is the length of the service chain, t_{up}^{n} is the data uplink time of the nth user, t_{q}^{n} is the time taken to transmit the qth service of the nth user to a qualified micro base station or to the cloud server, t_{exe} is the execution time for processing a single service, and t_{down}^{n} is the data downlink time of the nth user;
t_{up}^{n} = \begin{cases} \alpha \, d(n, S_n), & \text{if user } n \text{ is covered by a micro base station} \\ \alpha \, d(n, S_0) + T_b, & \text{otherwise} \end{cases}
wherein α is the reciprocal of the wireless transmission rate, d(n, S_0) is the distance from user n to the macro base station S_0, d(n, S_n) is the distance from user n to S_n, the nearest micro base station that can cover user n, and T_b is the time for data to be transmitted from a base station to the cloud server through the backbone network;
t_{q}^{n} = \begin{cases} \beta \, d(S_p, S_q), & \text{if a reachable micro base station has deployed the qth service} \\ T_b, & \text{if the service has to be forwarded to the cloud server} \\ 0, & \text{if the data is already at the cloud server} \end{cases}
wherein β is the reciprocal of the wired transmission rate, d(S_p, S_q) is the distance from micro base station p to micro base station q, S_p is the micro base station currently holding the data, and S_q is the micro base station that is reachable from S_p with the minimum number of hops and can process the qth service of user n;
t_{down}^{n} = \begin{cases} \beta \, d(S_e, S_n) + \alpha \, d(n, S_n), & \text{if the last service is processed by a micro base station} \\ T_b + \alpha \, d(n, S_0), & \text{if the last service is processed by the cloud server} \end{cases}
wherein d(S_e, S_n) is the distance from micro base station e to micro base station n, S_e is the micro base station that processes the last service of user n's service chain, and S_n is the nearest micro base station that can cover user n;
S22, calculating the total number of services processed by the cloud server:
W = \sum_{n=1}^{N} \sum_{q=1}^{Q} y_{n,q}
wherein y_{n,q} = 1 indicates that the qth service of the service chain requested by user n is processed by the cloud server, and y_{n,q} = 0 indicates that it is processed by a micro base station;
S23, modeling the edge service deployment problem as follows: under the constraints of the coverage range of the micro base station and the number of deployed services, the total delay time of a user for completing a service chain is minimized, the total number of services processed by the cloud server is minimized, and the mathematical model is as follows:
\min \; T = \sum_{n=1}^{N} \left( t_{up}^{n} + \sum_{q=1}^{Q} \left( t_{q}^{n} + t_{exe} \right) + t_{down}^{n} \right)
\min \; W = \sum_{n=1}^{N} \sum_{q=1}^{Q} y_{n,q}
\text{s.t.} \quad d_m \le cap(S_m), \; m = 1, \ldots, M
\qquad \; d(n, S_n) \le cov(S_n), \; n = 1, \ldots, N
wherein d_m is the number of services deployed on micro base station m, cap(S_m) is the maximum number of services that micro base station m can deploy, and cov(S_n) is the signal coverage radius of the micro base station that covers user n and is closest to user n.
4. The NSGA-II and simulated annealing based service deployment method under the edge environment according to claim 1, wherein the step S3 specifically comprises:
S31, encoding the service deployment schemes of all the micro base stations as: X = [x(b_1), ..., x(b_M)], wherein x(b_i) is the deployment vector of micro base station i, the length of the deployment vector is b_i, and each element of the vector is a candidate c_{ij}, indicating that the jth candidate of service i is deployed;
S32, initializing the population, each individual being deployed randomly: for micro base station i, b_i different services are randomly selected and a candidate is randomly chosen for each of them, wherein b_i is the maximum number of services that micro base station i can deploy;
S33, performing selection, crossover and mutation on the current population to generate an offspring population;
S34, merging the parent population and the offspring population and performing fast non-dominated sorting;
S35, according to the non-dominated sorting result, computing crowding distances starting from the 0th front, adding individuals to the population in descending order of crowding distance, discarding individuals whose crowding distance is zero, and, after a front has been processed, processing the next front in turn until the population size reaches the expected value;
and S36, judging whether the specified number of iterations has been reached; if so, terminating the NSGA-II algorithm and outputting the current population, and otherwise returning to step S33 for the next iteration.
5. The NSGA-II and simulated annealing based service deployment method under the edge environment as claimed in claim 4, wherein the selection mode in step S33 is binary tournament selection with replacement, in which the comparison basis is the dominance relationship; the crossover mode is multi-point crossover, in which a segment of a certain length is selected and the deployment vectors of the two individuals within the segment are exchanged; and the mutation mode is multi-point mutation, in which a segment of a certain length is selected and the deployment vectors of the individual within the segment are randomly redeployed.
6. The NSGA-II and simulated annealing based service deployment method under the edge environment according to claim 1, wherein in step S4, the traditional single-objective simulated annealing algorithm is improved into a multi-objective simulated annealing algorithm, which specifically comprises the following steps:
S41, performing fast non-dominated sorting on the final population obtained in step S3, taking the individuals on the 0th front and the 1st front as the initial solution set of the improved simulated annealing algorithm, and performing the following optimization operations on each individual of the initial solution set;
S42, performing a mutation operation on the current solution individual, the mutation being that a segment of the deployment vector of a certain length is randomly selected and randomly redeployed, generating a neighbouring solution individual;
S43, if the current solution individual cannot dominate the newly generated solution individual, adding the new solution to the solution set and letting it replace the current solution for the next iteration; otherwise, determining the solution that enters the next iteration according to the following Metropolis acceptance criterion:
I_{k+1} = \begin{cases} I_k', & \text{if } r < \exp\!\left( -\, \| F_k' - F_k \| \,/\, T \right) \\ I_k, & \text{otherwise} \end{cases}
wherein I_{k+1} is the solution individual that enters the next iteration, I_k is the current solution individual, I_k' is the new solution individual generated from the current solution individual by mutation, r is a random real number in [0,1], F_k is the fitness value vector of I_k, F_k' is the fitness value vector of I_k', the fitness values being defined in the same way as the individual fitness values in NSGA-II, and T is the current temperature;
S44, if k is less than the specified number of iterations, returning to step S42 for the next iteration at temperature T; otherwise, performing the cooling operation of simulated annealing, T = a × T, where a is the cooling coefficient and a ∈ (0, 1), and proceeding to step S45;
S45, if T < T_min, wherein T_min is the set lower temperature limit, terminating the simulated annealing algorithm and outputting the current solution set; otherwise, returning to step S42 and performing the next iteration after cooling.
CN202310153844.9A 2023-02-23 2023-02-23 Service deployment method based on NSGA-II and simulated annealing in edge environment Active CN115866626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310153844.9A CN115866626B (en) 2023-02-23 2023-02-23 Service deployment method based on NSGA-II and simulated annealing in edge environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310153844.9A CN115866626B (en) 2023-02-23 2023-02-23 Service deployment method based on NSGA-II and simulated annealing in edge environment

Publications (2)

Publication Number Publication Date
CN115866626A true CN115866626A (en) 2023-03-28
CN115866626B CN115866626B (en) 2023-05-12

Family

ID=85658750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310153844.9A Active CN115866626B (en) 2023-02-23 2023-02-23 Service deployment method based on NSGA-II and simulated annealing in edge environment

Country Status (1)

Country Link
CN (1) CN115866626B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116307296A (en) * 2023-05-22 2023-06-23 南京航空航天大学 Cloud resource optimization configuration method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110418356A (en) * 2019-06-18 2019-11-05 深圳大学 A kind of calculating task discharging method, device and computer readable storage medium
CN112181655A (en) * 2020-09-30 2021-01-05 杭州电子科技大学 Hybrid genetic algorithm-based calculation unloading method in mobile edge calculation
CN112882723A (en) * 2021-02-24 2021-06-01 武汉大学 Edge service deployment method facing parallel micro-service combination
CN113220364A (en) * 2021-05-06 2021-08-06 北京大学 Task unloading method based on vehicle networking mobile edge computing system model
CN113781002A (en) * 2021-09-18 2021-12-10 北京航空航天大学 Low-cost workflow application migration method based on agent model and multi-population optimization in cloud edge cooperative network
WO2022116957A1 (en) * 2020-12-02 2022-06-09 中兴通讯股份有限公司 Algorithm model determining method, path determining method, electronic device, sdn controller, and medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110418356A (en) * 2019-06-18 2019-11-05 深圳大学 A kind of calculating task discharging method, device and computer readable storage medium
CN112181655A (en) * 2020-09-30 2021-01-05 杭州电子科技大学 Hybrid genetic algorithm-based calculation unloading method in mobile edge calculation
WO2022116957A1 (en) * 2020-12-02 2022-06-09 中兴通讯股份有限公司 Algorithm model determining method, path determining method, electronic device, sdn controller, and medium
CN112882723A (en) * 2021-02-24 2021-06-01 武汉大学 Edge service deployment method facing parallel micro-service combination
CN113220364A (en) * 2021-05-06 2021-08-06 北京大学 Task unloading method based on vehicle networking mobile edge computing system model
CN113781002A (en) * 2021-09-18 2021-12-10 北京航空航天大学 Low-cost workflow application migration method based on agent model and multi-population optimization in cloud edge cooperative network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
冯晨 et al.: "Optimal selection of outsourced cloud service composition based on the NSGA-II-SA algorithm" *
钟云峰: "Research on task offloading strategies for cloud-edge collaborative computing in the Industrial Internet" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116307296A (en) * 2023-05-22 2023-06-23 南京航空航天大学 Cloud resource optimization configuration method
CN116307296B (en) * 2023-05-22 2023-09-29 南京航空航天大学 Cloud resource optimization configuration method

Also Published As

Publication number Publication date
CN115866626B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
CN109840154B (en) Task dependency-based computing migration method in mobile cloud environment
Konstantinidis et al. Multi-objective energy-efficient dense deployment in wireless sensor networks using a hybrid problem-specific MOEA/D
CN111585816B (en) Task unloading decision method based on adaptive genetic algorithm
CN110134493B (en) Service function chain deployment algorithm based on resource fragment avoidance
CN112286677B (en) Resource-constrained edge cloud-oriented Internet of things application optimization deployment method
CN111182570A (en) User association and edge computing unloading method for improving utility of operator
CN112911608B (en) Large-scale access method for edge-oriented intelligent network
CN111949409B (en) Method and system for unloading computing task in power wireless heterogeneous network
Tam et al. Multifactorial evolutionary optimization to maximize lifetime of wireless sensor network
CN115866626A (en) NSGA-II and simulated annealing based service deployment method under edge environment
CN105491599A (en) Novel regression system for predicting LTE network performance indexes
CN111836284A (en) Energy consumption optimization calculation and unloading method and system based on mobile edge calculation
CN115396953A (en) Calculation unloading method based on improved particle swarm optimization algorithm in mobile edge calculation
CN114449490A (en) Multi-task joint computing unloading and resource allocation method based on D2D communication
Wang Collaborative task offloading strategy of UAV cluster using improved genetic algorithm in mobile edge computing
Wang et al. Multi-objective joint optimization of communication-computation-caching resources in mobile edge computing
CN113139639A (en) MOMBI-based smart city application-oriented multi-target calculation migration method and device
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
CN114980216B (en) Dependency task unloading system and method based on mobile edge calculation
CN116089091A (en) Resource allocation and task unloading method based on edge calculation of Internet of things
CN115499876A (en) Computing unloading strategy based on DQN algorithm under MSDE scene
CN109150739B (en) MOEA/D-based multi-target base station active storage allocation method
Hoiles et al. Risk-averse caching policies for YouTube content in femtocell networks using density forecasting
CN113347255A (en) Edge server site selection deployment model and solving method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20230328

Assignee: HUBEI THINGO TECHNOLOGY DEVELOPMENT Co.,Ltd.

Assignor: Anhui Sigao Intelligent Technology Co.,Ltd.

Contract record no.: X2023980039196

Denomination of invention: A Service Deployment Method in Edge Environment Based on NSGA-II and Simulated Annealing

Granted publication date: 20230512

License type: Exclusive License

Record date: 20230810

EE01 Entry into force of recordation of patent licensing contract