CN115866626B - Service deployment method based on NSGA-II and simulated annealing in edge environment - Google Patents


Publication number: CN115866626B
Authority: CN (China)
Prior art keywords: service, micro base station, user, deployment
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310153844.9A
Other languages: Chinese (zh)
Other versions: CN115866626A
Inventors: 储成浩, 蔡汝坚, 袁水平, 陈伟雄
Current and original assignee: Anhui Sigao Intelligent Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Anhui Sigao Intelligent Technology Co ltd; priority to CN202310153844.9A; published as CN115866626A and granted as CN115866626B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a service deployment method for edge environments based on NSGA-II and simulated annealing, comprising the following steps: construct an edge service framework and initialize the attributes of the users and micro base stations; model the edge service deployment problem with minimizing the total delay for users to complete their service chains and minimizing the total number of services processed by the cloud server as optimization objectives, and with the micro base stations' coverage ranges and deployable service counts as constraints; encode the service deployment schemes of all micro base stations as individuals in an NSGA-II population and solve the model to obtain a final population; select a suitable number of excellent individuals from the final population as an initial solution set and optimize it with an improved multi-objective simulated annealing algorithm; apply fast non-dominated sorting to the optimized solution set to obtain the Pareto-optimal front, and select a suitable solution from the front as the final edge service deployment scheme. The invention maximizes the service capability of the micro base stations, and the hybrid algorithm further improves solution quality, thereby yielding a better edge service deployment scheme.

Description

Service deployment method based on NSGA-II and simulated annealing in edge environment
Technical Field
The invention belongs to the technical field of edge computing, and particularly relates to a service deployment method in an edge environment based on NSGA-II and simulated annealing.
Background
Today, mobile applications are becoming increasingly diverse and complex, demanding substantial computing power and consuming considerable energy. Because mobile devices have limited processing power and battery capacity, heavy computing tasks are often offloaded to remote cloud servers. However, given the long distance between cloud servers and end users, together with ever-increasing network traffic and computational load, traditional cloud computing is under tremendous pressure and struggles to maintain reliable, low-latency connections with users. Edge computing has been proposed to address this challenge: service providers deploy services on micro base stations that are widely distributed and closer to users, so that services are provided and computations performed at the network edge without transmission to a remote cloud server, enabling fast responses to mobile devices. However, micro base stations have limited service capability, and only a few services can be deployed on each one. At the same time, each service may have multiple candidates to choose from; electronic payment, for example, may be provided by UnionPay, Alipay, or WeChat. Determining which services each micro base station deploys, and which candidate to select for each, has therefore become a pressing problem.
Existing edge service deployment methods are mainly simple greedy algorithms and genetic algorithms, which have at least the following problems:
(1) The number of services handled by the cloud server is not considered. To provide better quality of service to users, existing schemes typically focus only on the response time of users' service chains. In the edge environment, adjacent micro base stations are interconnected to form a micro base station communication graph. If some service in a user's service chain cannot be executed on any micro base station in this graph, it is uploaded to the cloud server, which then executes all remaining services in the chain before returning the result. Transmission time between micro base stations is incurred when a service's input and output data flow across the communication graph, whereas no such inter-station transmission occurs on the cloud server. It follows that when a service chain contains a service no micro base station can execute, the total response time of the chain is shorter the earlier that service appears in the chain, because more of the chain is then processed by the cloud server. In this situation, a short service-chain response time does not imply that few services are executed by the cloud.
Consequently, if only the response time of users' service chains is considered, and the services cannot all be deployed on the micro base stations, unreasonable deployments may arise: the micro base stations' deployments tend to satisfy a few complete service chains while otherwise deploying services that appear late in the chains. By the analysis above, a late service is effectively deployed only to push its chain to the cloud server as early as possible, and once a chain is uploaded, all of its remaining services are processed by the cloud, so the late services deployed on the micro base stations are useless for their corresponding chains. Such unreasonable deployment shortens the total delay of a small number of service chains, but greatly wastes the service capability of the micro base stations and aggravates the load on the cloud server.
(2) The deployment scheme is insufficiently optimized. Simple greedy algorithms cannot cope well with the NP-hard edge service deployment problem, while ordinary genetic algorithms search inefficiently in the later stages of evolution and easily fall into "premature convergence": a super individual appears whose fitness greatly exceeds the population average, quickly occupies an absolute share of the population, and sharply reduces population diversity, so that the population essentially loses its ability to evolve. The algorithm thus converges early to a locally optimal solution and cannot produce a good edge service deployment scheme.
Disclosure of Invention
In view of the above, the invention provides a service deployment method for edge environments based on NSGA-II and simulated annealing, which addresses the problems that existing edge service deployment methods ignore the number of services users send to the cloud server and optimize insufficiently. The method mainly comprises the following steps:
S1, constructing an edge service framework consisting of M micro base stations and N users, constructing the micro base stations' service deployment scheme, and initializing the attributes of the users and micro base stations, wherein each user requests a service chain, the service chain consists of different kinds of services, each service has c specific candidate services, services deployed on micro base stations are processed by the micro base stations, and services not deployed on any micro base station are processed by the cloud server;
S2, modeling the edge service deployment problem with minimizing the total delay for users to complete their service chains and minimizing the total number of services processed by the cloud server as optimization objectives, and with the micro base stations' coverage ranges and deployable service counts as constraints;
S3, encoding the service deployment schemes of all micro base stations as individuals in an NSGA-II population, solving the model established in step S2 with the NSGA-II algorithm, and obtaining a final population after a specified number of iterations;
S4, selecting a suitable number of excellent individuals from the final population obtained in step S3 as an initial solution set, and optimizing this solution set with an improved multi-objective simulated annealing algorithm to obtain an optimized solution set;
S5, applying fast non-dominated sorting to the optimized solution set obtained in step S4 to obtain the Pareto-optimal front, which balances minimizing the total delay for users to complete their service chains against minimizing the total number of services processed by the cloud server, and selecting a suitable solution from the front as the final edge service deployment scheme.
Further, in step S1, the initializing step specifically includes:
S11, setting the types of services and the number of candidates of each service;
S12, setting the number of users, each user's geographic position, and the requested service chain, wherein a service chain consists of a series of different kinds of services and one candidate is selected for each service; the service chain of user $i$ is defined as $SC_i = [C_{1 j_1}, C_{2 j_2}, \dots, C_{Q j_Q}]$, where $C_{q j_q}$ indicates that the $j_q$-th candidate is selected for the $q$-th service in the service chain of user $i$;
S13, setting the number M of micro base stations, each micro base station's geographic position and service signal coverage range, and determining the number of services each micro base station can deploy and the reachability between micro base stations, wherein a user's requested service chain can be routed between two mutually reachable micro base stations.
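The initialization of steps S11 to S13 can be illustrated with a minimal Python sketch. This is not part of the patent: the data structures, the uniform random positions, and the constants are illustrative assumptions (the candidate, coverage, and capacity ranges mirror the embodiment described later).

```python
import random

random.seed(42)  # reproducible illustration

# S11: service types and per-type candidate counts
NUM_SERVICE_TYPES = 10
candidates = {s: random.randint(2, 5) for s in range(NUM_SERVICE_TYPES)}

# S12: users, each with a position and a requested service chain;
# one candidate index j_q is chosen per service q in the chain
NUM_USERS = 4
CHAIN_LEN = 10
users = []
for _ in range(NUM_USERS):
    chain = [(q, random.randrange(candidates[q])) for q in range(CHAIN_LEN)]
    users.append({
        "pos": (random.uniform(0.0, 1000.0), random.uniform(0.0, 1000.0)),
        "chain": chain,  # element = (service type q, candidate index j_q)
    })

# S13: micro base stations with position, signal coverage radius,
# and maximum number of deployable services
NUM_STATIONS = 3
stations = [{
    "pos": (random.uniform(0.0, 1000.0), random.uniform(0.0, 1000.0)),
    "coverage": random.randint(200, 600),
    "capacity": random.randint(3, 5),
} for _ in range(NUM_STATIONS)]

print(len(users), len(stations))
```

In the embodiment described later, user and station positions come from the EUA dataset rather than being drawn uniformly at random.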
Further, step S2 specifically includes:
S21, calculating the total delay for all users to complete their service chains. When a user requests a service chain, the data is first uploaded to the nearest micro base station whose signal covers the user; if no micro base station covers the user, the data is uploaded directly to the cloud server through the macro base station. The micro base station then processes the services in the chain in order; on encountering a service not deployed on the current micro base station, the chain is routed to the nearest reachable micro base station that deploys the service, and if no reachable micro base station deploys it, the chain is transmitted to the cloud server through the macro base station. Once service chain data reaches the cloud server, all remaining services are processed there. After the complete chain is processed, if the last service was processed by the cloud server the data is returned to the user through the macro base station; otherwise it is returned through the nearest micro base station covering the user. The total delay for all users to complete their service chains is calculated as:
$$T_{total} = \sum_{n=1}^{N} \Big( t_{up}^{n} + \sum_{q=1}^{Q} \big( t_{trans}^{n,q} + t_{exe} \big) + t_{down}^{n} \Big)$$
where $N$ is the number of users, $Q$ is the length of a service chain, $t_{up}^{n}$ denotes the data uplink time of the $n$-th user, $t_{trans}^{n,q}$ denotes the time for the $q$-th service of the $n$-th user to be transmitted to a qualified micro base station or to the cloud server, $t_{exe}$ is the execution time of a single service, and $t_{down}^{n}$ denotes the data downlink time of the $n$-th user;
$$t_{up}^{n} = \begin{cases} \alpha \, d(n, S_n), & \text{user } n \text{ is covered by a micro base station} \\ \alpha \, d(n, S_0) + T_b, & \text{otherwise} \end{cases}$$
where $\alpha$ is the inverse of the wireless transmission rate, $d(n, S_0)$ is the distance from user $n$ to the macro base station $S_0$, $d(n, S_n)$ is the distance from user $n$ to the nearest micro base station $S_n$ covering user $n$, and $T_b$ is the time for data to travel from a base station to the cloud server over the backbone network;
$$t_{trans}^{n,q} = \begin{cases} \beta \, d(S_p, S_q), & \text{a reachable micro base station deploys the } q\text{-th service} \\ T_b, & \text{the service is uploaded to the cloud server} \\ 0, & \text{the data is already on the cloud server} \end{cases}$$
where $\beta$ is the inverse of the wired transmission rate, $d(S_p, S_q)$ is the distance from micro base station $p$ to micro base station $q$, $S_p$ is the base station currently holding the data, $S_q$ is the micro base station with the minimum hop count from $S_p$ that can process the $q$-th service of user $n$, and $T_b$ is the time for data to travel from a base station to the cloud server over the backbone network;
$$t_{down}^{n} = \begin{cases} \beta \, d(S_e, S_n) + \alpha \, d(n, S_n), & \text{the last service is processed by a micro base station} \\ T_b + \alpha \, d(n, S_0), & \text{the last service is processed by the cloud server} \end{cases}$$
where $d(S_e, S_n)$ is the distance from micro base station $e$ to micro base station $n$, $S_e$ denotes the micro base station that processes the last service of user $n$'s service chain, $S_n$ denotes the nearest micro base station covering user $n$, and $T_b$ is the time for data to travel from a base station to the cloud server over the backbone network;
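The piecewise uplink and downlink terms above can be read as simple distance-weighted costs. A hypothetical Python sketch follows; the function and parameter names are assumptions, and Euclidean distance stands in for the patent's distance function $d(\cdot,\cdot)$:

```python
import math

def dist(a, b):
    """Euclidean distance, standing in for the patent's d(.,.)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def uplink_time(user_pos, covering_station_pos, macro_pos, alpha, t_backbone):
    """t_up: to the nearest covering micro base station if one exists,
    otherwise via the macro base station plus the backbone to the cloud."""
    if covering_station_pos is not None:
        return alpha * dist(user_pos, covering_station_pos)
    return alpha * dist(user_pos, macro_pos) + t_backbone

def downlink_time(last_station_pos, covering_station_pos, user_pos, macro_pos,
                  alpha, beta, t_backbone, last_on_cloud):
    """t_down: back through the covering micro base station, or through the
    macro base station when the last service ran on the cloud server."""
    if last_on_cloud:
        return t_backbone + alpha * dist(macro_pos, user_pos)
    return (beta * dist(last_station_pos, covering_station_pos)
            + alpha * dist(covering_station_pos, user_pos))

# user at the origin, covering station 100 m away, alpha = 0.01 s/m
t_up = uplink_time((0, 0), (100, 0), (500, 0), alpha=0.01, t_backbone=2.0)
print(t_up)
```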
S22, calculating the total number of services processed by the cloud server:
$$V = \sum_{n=1}^{N} \sum_{q=1}^{Q} v_{n,q}$$
where $v_{n,q} = 1$ indicates that the $q$-th service of the service chain requested by user $n$ is processed by the cloud server, and $v_{n,q} = 0$ indicates that it is processed by a micro base station;
S23, modeling the edge service deployment problem as: under the constraints of micro base station coverage range and deployable service count, minimize the total delay for users to complete their service chains and minimize the total number of services processed by the cloud server; the mathematical model is:
$$\min \; T_{total}$$
$$\min \; V$$
$$\text{s.t.} \quad d_m \le cap(S_m), \quad m = 1, \dots, M$$
$$d(n, S_n) \le cov(S_n), \quad n = 1, \dots, N$$
where $d_m$ is the number of services deployed on micro base station $m$, $cap(S_m)$ is the maximum number of services micro base station $m$ can deploy, and $cov(S_n)$ is the signal coverage range of the nearest micro base station covering user $n$.
Further, the step S3 specifically includes:
S31, encoding the service deployment schemes of all micro base stations as: $X = [x(b_1), \dots, x(b_M)]$, where $x(b_i)$ is the deployment vector of micro base station $i$, the length of the deployment vector is $b_i$, and each element of the vector is a $C_{ij}$, representing the deployed $j$-th candidate of service $i$;
S32, initializing the population with each individual deployed randomly: for micro base station $i$, randomly select $b_i$ different services and randomly select one candidate for each, where $b_i$ is the maximum number of services micro base station $i$ can deploy;
S33, applying selection, crossover, and mutation to the current population to generate an offspring population;
S34, merging the parent and offspring populations and performing fast non-dominated sorting;
S35, according to the non-dominated sorting result, starting from front 0, calculating the crowding distance and adding individuals to the population in order of decreasing crowding distance, discarding individuals whose crowding distance is zero; after one front is processed, the next front is processed in turn, until the population reaches the expected size;
S36, judging whether the specified number of iterations has been reached; if so, stopping the NSGA-II algorithm and outputting the current population, otherwise returning to step S33 for the next iteration.
Further, the selection in step S33 is binary tournament selection with replacement, with the dominance relation as the comparison basis; the crossover is multi-point crossover, selecting a segment of a certain length and exchanging the two individuals' deployment vectors within the segment; the mutation is multi-point mutation, selecting a segment of a certain length and randomly redeploying the individual's deployment vectors within the segment.
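The operators just described (binary tournament by dominance, multi-point crossover, multi-point mutation) might be sketched as follows. This is an illustrative simplification, not the patent's code: an individual is a list of per-station deployment vectors, and `redeploy` is a hypothetical helper that draws a fresh random deployment for a station.

```python
import random

def dominates(fa, fb):
    """Pareto dominance for minimized objective vectors."""
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))

def binary_tournament(pop):
    """pop: list of (individual, fitness_vector) pairs. Draw two with
    replacement and return the dominating individual; pick randomly
    when neither dominates."""
    (ind_a, fit_a), (ind_b, fit_b) = random.choice(pop), random.choice(pop)
    if dominates(fit_a, fit_b):
        return ind_a
    if dominates(fit_b, fit_a):
        return ind_b
    return random.choice([ind_a, ind_b])

def segment_crossover(x, y):
    """Swap the deployment vectors of a random contiguous segment of
    base stations between two individuals."""
    i, j = sorted(random.sample(range(len(x) + 1), 2))
    child_x, child_y = x[:], y[:]
    child_x[i:j], child_y[i:j] = y[i:j], x[i:j]
    return child_x, child_y

def segment_mutation(x, redeploy):
    """Randomly redeploy the stations inside a random segment;
    redeploy(k) returns a fresh random deployment vector for station k."""
    i, j = sorted(random.sample(range(len(x) + 1), 2))
    return x[:i] + [redeploy(k) for k in range(i, j)] + x[j:]
```

Crossover here only exchanges whole deployment vectors between individuals, so both children remain feasible whenever the parents are.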
Further, in step S4, the conventional single-objective simulated annealing algorithm is improved into a multi-objective simulated annealing algorithm, specifically:
S41, performing fast non-dominated sorting on the final population obtained in step S3, taking the individuals on fronts 0 and 1 as the initial solution set of the improved simulated annealing algorithm, and performing the following optimization on each individual in the initial solution set;
S42, mutating the current solution individual to generate a neighboring solution individual, where the mutation randomly selects a segment of deployment vectors of a certain length and redeploys it randomly;
S43, if the current solution individual does not dominate the newly generated solution individual, adding the new solution to the solution set and replacing the current solution for the next iteration; otherwise, determining the solution that enters the next iteration according to the following Metropolis acceptance criterion:
$$I_{k+1} = \begin{cases} I_k', & r < \exp\!\left( - \dfrac{\left\| F_k' - F_k \right\|}{T} \right) \\ I_k, & \text{otherwise} \end{cases}$$
where $I_{k+1}$ is the solution entering the next iteration, $I_k$ is the current solution individual, $I_k'$ is the new solution individual generated by mutating the current one, $r$ is a random real number in $[0,1]$, $F_k$ is the fitness-value vector of $I_k$ and $F_k'$ that of $I_k'$ (fitness values are defined as for individuals in NSGA-II), and $T$ is the current temperature;
S44, if $k$ is smaller than the specified number of iterations, returning to step S42 for the next iteration at temperature $T$; otherwise performing the simulated annealing cooling operation $T = a \cdot T$, where $a \in (0, 1)$ is the cooling coefficient, and proceeding to step S45;
S45, if $T \le T_{min}$, where $T_{min}$ is the set lower temperature limit, terminating the simulated annealing algorithm and outputting the current solution set; otherwise returning to step S42 for the next iteration after cooling.
The technical scheme provided by the invention has the following beneficial effects:
1) By combining NSGA-II with a simulated annealing algorithm, a service deployment scheme for micro base stations in edge computing is provided that balances the total delay for users to complete their service chains against the total number of services processed by the cloud server, makes full use of the micro base stations' service capability, and reduces the load on the cloud server;
2) The number of services processed by the cloud server is taken into account, unreasonable deployment is avoided, the micro base stations' service capability is used to the greatest extent, and the method has good practicability;
3) The hybrid algorithm further improves solution quality and can produce a better edge service deployment scheme.
Drawings
FIG. 1 is a flow chart of a service deployment method in an edge environment based on NSGA-II and simulated annealing in accordance with the present invention;
FIG. 2 is a block diagram of a service deployment method in an edge environment based on NSGA-II and simulated annealing in accordance with the present invention;
FIG. 3 is a schematic diagram of an edge service deployment framework in an embodiment of the invention;
FIG. 4 is a schematic diagram of the final solution set obtained in the embodiment of the present invention, wherein square points represent Pareto-optimal solutions obtained by the NSGA-II algorithm and triangular points represent Pareto-optimal solutions after optimization by the simulated annealing algorithm;
FIG. 5 is a schematic diagram of the solution set obtained using random deployment in an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
The embodiment of the invention provides a service deployment method under an edge environment based on NSGA-II and simulated annealing, the general flow and the block diagram of which are shown in figures 1 and 2, comprising the following steps:
s1, constructing an edge service framework consisting of M micro base stations and N users, referring to FIG. 3, FIG. 3 is a schematic diagram of an edge service deployment framework in an embodiment of the invention. And initializing attributes of the users and the micro base stations, wherein each user requests a service chain from a server, the service chain is composed of different kinds of services, each service comprises a plurality of specific candidate services, the service deployed by the micro base station is processed by the micro base station, and the service not deployed by the micro base station is processed by the cloud server.
One particular candidate service may be represented as: c ij Refers to the j-th candidate of service number i. For example, when a user shops on the internet, the service chain comprises commodity browsing and payment, wherein commodity browsing service can be provided by candidates such as panda, jingdong, spell, and the like, and payment service can be provided by candidates such as a payment treasure and WeChat waiting. The service chain of a specific user may be to browse commodity service selection panned treasures, and pay service selection paid treasuresThe method comprises the steps of carrying out a first treatment on the surface of the Each micro base station can deploy a plurality of specific services so as to respond to the user request; the service of the user request service chain needs to be processed in sequence, and can be routed among the micro base stations, so that different micro base stations can respond to different services, and if no micro base station contains the service requested by the user, the cloud server responds to the user request. The initialization step specifically comprises the following steps:
s11, setting the type of the service and the number of candidates of each service.
In this embodiment, 10 kinds of services are set first, and then an integer in the interval [2,5] is randomly generated for each kind of service as the candidate number of the service.
S12, setting the number of users, each user's geographic position, and the requested service chain, wherein a service chain consists of a series of different kinds of services and one candidate is selected for each service; the service chain of user $i$ is defined as $SC_i = [C_{1 j_1}, C_{2 j_2}, \dots, C_{Q j_Q}]$, where $C_{q j_q}$ indicates that the $j_q$-th candidate is selected for the $q$-th service in the service chain of user $i$.
In this embodiment, the EUA dataset (https://github.com/swinnedge/EUA-dataset) is used, a location-information dataset commonly used in the field of edge computing that contains geographic location information for Australian edge servers and end users. The user data in the EUA dataset is adopted here: 400 users are randomly extracted, with each user's geographic position represented by longitude and latitude. A service chain is then generated for each user, with the chain length set to 10 and one candidate randomly selected for each service.
S13, setting the number M of micro base stations, each micro base station's geographic position and service signal coverage range, and determining the number of services each micro base station can deploy and the reachability between micro base stations; if two micro base stations are mutually reachable, a user's requested service chain can be routed between them.
In this embodiment, the server data in the EUA dataset is adopted: 50 servers are randomly extracted as micro base stations, with geographic positions represented by longitude and latitude. For each micro base station, an integer in the interval [200,600] is then randomly generated as its coverage range, and an integer in the interval [3,5] as its number of deployable services. The reachability between micro base stations is expressed with an adjacency matrix, each micro base station is connected to at most 3 micro base stations within 300 m, and the minimum hop count between every pair of micro base stations is computed with the Floyd algorithm.
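The shortest hop counts mentioned above can be obtained with the Floyd (Floyd-Warshall) algorithm over the reachability adjacency matrix; a minimal sketch follows (the 3-station chain is an illustrative example, not embodiment data):

```python
INF = float("inf")

def floyd_hops(adj):
    """adj[i][j] is True when stations i and j are directly connected;
    returns the matrix of minimum hop counts (INF when unreachable)."""
    n = len(adj)
    hops = [[0 if i == j else (1 if adj[i][j] else INF) for j in range(n)]
            for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if hops[i][k] + hops[k][j] < hops[i][j]:
                    hops[i][j] = hops[i][k] + hops[k][j]
    return hops

# chain 0-1-2: station 0 reaches station 2 in two hops
adj = [[False, True, False],
       [True, False, True],
       [False, True, False]]
print(floyd_hops(adj)[0][2])  # 2
```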
S2, modeling an edge service deployment problem by taking the minimum total delay time of a user completing a service chain and the minimum total number of services processed by a cloud server as optimization targets and taking the coverage range of a micro base station and the number of deployed services as constraints.
S21, calculating the total delay for all users to complete their service chains. When a user requests a service chain, the data is first uploaded to the nearest micro base station whose signal covers the user; if no micro base station covers the user, the data is uploaded directly to the cloud server through the macro base station. The micro base station then processes the services in the chain in order; on encountering a service not deployed on the current micro base station, the chain is routed to the nearest reachable micro base station that deploys the service, and if no reachable micro base station deploys it, the chain is transmitted to the cloud server through the macro base station. The cloud server can process all types of services, so once service chain data reaches the cloud server, all remaining services are processed there. After the complete chain is processed, if the last service was processed by the cloud server the data is returned to the user through the macro base station; otherwise it is returned through the nearest micro base station covering the user. The total delay for all users to complete their service chains is calculated as:
$$T_{total} = \sum_{n=1}^{N} \Big( t_{up}^{n} + \sum_{q=1}^{Q} \big( t_{trans}^{n,q} + t_{exe} \big) + t_{down}^{n} \Big)$$
where $N$ is the number of users, $Q$ is the length of a service chain, $t_{up}^{n}$ denotes the data uplink time of the $n$-th user, $t_{trans}^{n,q}$ denotes the time for the $q$-th service of the $n$-th user to be transmitted to a qualified micro base station or to the cloud server, $t_{exe}$ is the execution time of a single service, and $t_{down}^{n}$ denotes the data downlink time of the $n$-th user.
$$t_{up}^{n} = \begin{cases} \alpha \, d(n, S_n), & \text{user } n \text{ is covered by a micro base station} \\ \alpha \, d(n, S_0) + T_b, & \text{otherwise} \end{cases}$$
where $\alpha$ is the inverse of the wireless transmission rate, $d(n, S_0)$ is the distance from user $n$ to the macro base station $S_0$, $d(n, S_n)$ is the distance from user $n$ to the nearest micro base station $S_n$ covering user $n$, and $T_b$ is the time for data to travel from a base station to the cloud server over the backbone network.
$$t_{trans}^{n,q} = \begin{cases} \beta \, d(S_p, S_q), & \text{a reachable micro base station deploys the } q\text{-th service} \\ T_b, & \text{the service is uploaded to the cloud server} \\ 0, & \text{the data is already on the cloud server} \end{cases}$$
where $\beta$ is the inverse of the wired transmission rate, $d(S_p, S_q)$ is the distance from micro base station $p$ to micro base station $q$, $S_p$ is the base station currently holding the data, $S_q$ is the micro base station with the minimum hop count from $S_p$ that can process the $q$-th service of user $n$, and $T_b$ is the time for data to travel from a base station to the cloud server over the backbone network.
$$t_{down}^{n} = \begin{cases} \beta \, d(S_e, S_n) + \alpha \, d(n, S_n), & \text{the last service is processed by a micro base station} \\ T_b + \alpha \, d(n, S_0), & \text{the last service is processed by the cloud server} \end{cases}$$
where $d(S_e, S_n)$ is the distance from micro base station $e$ to micro base station $n$, $S_e$ denotes the micro base station that processes the last service of user $n$'s service chain, $S_n$ denotes the nearest micro base station covering user $n$, and $T_b$ is the time for data to travel from a base station to the cloud server over the backbone network.
S22, calculating the total number of services processed by the cloud server:
$$V = \sum_{n=1}^{N} \sum_{q=1}^{Q} v_{n,q}$$
where $v_{n,q} = 1$ indicates that the $q$-th service of the service chain requested by user $n$ is processed by the cloud server, and $v_{n,q} = 0$ indicates that it is processed by a micro base station.
Specifically, there are two optimization objectives:
$$\min \; T_{total}$$
$$\min \; V$$
The constraints are that each user can only communicate with the macro base station or with micro base stations covering that user, and each micro base station can deploy only the number of services limited in step S13.
S23, modeling the edge service deployment problem as: under the constraints of micro base station coverage range and deployable service count, minimize the total delay for users to complete their service chains and minimize the total number of services processed by the cloud server; the mathematical model is:
$$\min \; T_{total}$$
$$\min \; V$$
$$\text{s.t.} \quad d_m \le cap(S_m), \quad m = 1, \dots, M$$
$$d(n, S_n) \le cov(S_n), \quad n = 1, \dots, N$$
where $d_m$ is the number of services deployed on micro base station $m$, $cap(S_m)$ is the maximum number of services micro base station $m$ can deploy, and $cov(S_n)$ is the signal coverage range of the nearest micro base station covering user $n$.
S3, encoding the service deployment schemes of all micro base stations as individuals in the NSGA-II population, solving the model established in step S2 with the NSGA-II algorithm, and obtaining the final population after the specified number of iterations.
S31, encoding service deployment schemes of all micro base stations as follows: x= [ x (b) 1 ),...,x(b M )]Wherein x (b) i ) Is the deployment vector of the micro base station i, and the length of the deployment vector is b i The element in the vector is C ij Representing the jth candidate where service i is deployed; the chromosome of each individual of the population in the NSGA-ii algorithm represents a service deployment scenario for all micro base stations.
S32, initializing the population with random deployments: for each individual and each micro base station i, randomly select b_i different services and randomly select one candidate for each service, where b_i is the maximum number of services micro base station i can deploy;
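The encoding of S31 and the random initialization of S32 can be sketched as follows; the `(service, candidate)` pair representation is an illustrative choice, not mandated by the patent:

```python
import random

def random_deployment(b_i, num_services, num_candidates):
    """Deployment vector x(b_i) for one micro base station: b_i distinct
    service types, each paired with one randomly chosen candidate index."""
    services = random.sample(range(num_services), b_i)
    return [(s, random.randrange(num_candidates)) for s in services]

def random_individual(capacities, num_services, num_candidates):
    """Chromosome X = [x(b_1), ..., x(b_M)]: one deployment vector per
    micro base station, where capacities[i] = b_i."""
    return [random_deployment(b, num_services, num_candidates)
            for b in capacities]
```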
S33, performing selection, crossover, and mutation on the current population to generate an offspring population.
In this embodiment, 4 individuals are randomly selected from the current population each time and tournament selection is performed in two pairs, with the dominance relation as the comparison criterion; the two resulting parent individuals undergo crossover, in which several micro base stations are randomly selected and the service deployments of the two individuals at those stations are exchanged; the two individuals obtained after crossover each undergo mutation, in which several micro base stations are randomly selected and their services are randomly redeployed. These selection, crossover, and mutation operations are repeated until the offspring population reaches the specified size.
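The genetic operators above can be sketched in Python; the function names and the deployment-vector representation are illustrative assumptions (stations as lists of deployments, objectives minimized):

```python
import random

def dominates(f1, f2):
    """f1 Pareto-dominates f2 (both objective vectors minimized)."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def tournament_pick(pop, fit):
    """Binary tournament on the dominance relation, as in step S33."""
    i, j = random.sample(range(len(pop)), 2)
    return pop[i] if dominates(fit[i], fit[j]) else pop[j]

def crossover(p1, p2, k=2):
    """Exchange the deployment vectors of k randomly chosen micro base
    stations between two parent individuals."""
    c1, c2 = [v[:] for v in p1], [v[:] for v in p2]
    for m in random.sample(range(len(p1)), k):
        c1[m], c2[m] = c2[m], c1[m]
    return c1, c2

def mutate(ind, stations, redeploy):
    """Randomly redeploy `stations` micro base stations; `redeploy(m)` is a
    caller-supplied function returning a fresh random deployment for m."""
    out = [v[:] for v in ind]
    for m in random.sample(range(len(ind)), stations):
        out[m] = redeploy(m)
    return out
```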
S34, merging the parent and offspring populations and performing fast non-dominated sorting.
In this embodiment, merging the populations retains elite individuals. Fast non-dominated sorting divides the combined population into several fronts according to the dominance relation: no individual on a lower-numbered front is dominated by an individual on a higher-numbered front, and individuals on the same front do not dominate each other.
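The standard fast non-dominated sort referred to above can be sketched as follows (a self-contained illustration over objective vectors, not the patent's own implementation):

```python
def dominates(f1, f2):
    """f1 Pareto-dominates f2 (both objective vectors minimized)."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def fast_non_dominated_sort(fitness):
    """Split a population (list of objective vectors) into fronts of indices:
    front 0 is non-dominated; each later front becomes non-dominated once all
    earlier fronts are removed."""
    n = len(fitness)
    dominated = [[] for _ in range(n)]  # indices each i dominates
    count = [0] * n                     # how many individuals dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(fitness[i], fitness[j]):
                dominated[i].append(j)
            elif dominates(fitness[j], fitness[i]):
                count[i] += 1
        if count[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```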
S35, according to the non-dominated sorting result, computing the crowding distance starting from front 0 and adding individuals to the new population in descending order of crowding distance, discarding individuals whose crowding distance is zero (i.e., identical to their neighboring individuals); after one front is processed, the next front is handled in turn until the population reaches the expected size. Selecting individuals from a front by crowding distance protects population diversity.
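The crowding-distance computation of step S35 follows the usual NSGA-II definition; a sketch over one front's objective vectors:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for one front of objective vectors:
    boundary solutions get infinity; interior solutions accumulate, per
    objective, the normalized gap between their two sorted neighbors."""
    n = len(front)
    dist = [0.0] * n
    for obj in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for k in range(1, n - 1):
            gap = front[order[k + 1]][obj] - front[order[k - 1]][obj]
            dist[order[k]] += gap / (hi - lo)
    return dist
```

Individuals with distance zero sit at exactly the same objective point as their neighbors, which is why step S35 discards them.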
S36, judging whether the specified number of iterations has been reached; if so, stop the NSGA-II algorithm and output the current population, otherwise return to step S33 for the next iteration.
S4, selecting an appropriate number of high-quality individuals from the final population obtained in step S3 as the initial solution set, and optimizing this set with an improved multi-objective simulated annealing algorithm to obtain the optimized solution set. The improved algorithm optimizes each individual of the input initial solution set and retains the good solutions encountered during the optimization process.
S41, performing fast non-dominated sorting on the final population obtained in step S3, taking the individuals on front 0 and front 1 as the initial solution set of the improved simulated annealing algorithm, and applying the following optimization to each individual in the initial solution set.
S42, applying a mutation to the current solution individual, in which a segment of the deployment vector of a certain length is randomly selected and randomly redeployed, generating a neighboring solution individual; this searches the neighborhood of the current solution. The mutation is similar to that in the genetic algorithm, but the mutated segment is shorter.
S43, if the current solution individual does not dominate the newly generated solution individual, add the new solution to the solution set and let it replace the current solution for the next iteration; otherwise, determine the solution entering the next iteration according to the following Metropolis acceptance criterion:

$$I_{k+1} = \begin{cases} I_k', & r < \exp\left(-\dfrac{\lVert F_k' - F_k \rVert}{T}\right) \\ I_k, & \text{otherwise} \end{cases}$$

where I_{k+1} is the solution individual entering the next iteration, I_k is the current solution individual, I_k' is the new solution individual generated by mutating I_k, r is a random real number in [0,1], F_k is the fitness vector of I_k and F_k' is that of I_k' (fitness values are defined as for individuals in NSGA-II), and T is the current temperature.
S44, if k is less than the specified number of iterations, return to step S42 for the next iteration at temperature T; otherwise perform the simulated annealing cooling operation T = a·T, where a ∈ (0, 1) is the cooling coefficient, and proceed to step S45. Iterating the search several times at each temperature ensures a sufficient exploration of the neighborhood of the current solution.
S45, if T < T_min, where T_min is the set lower temperature limit, terminate the simulated annealing algorithm and output the current solution set; otherwise return to step S42 for the next iteration after cooling.
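Steps S42 to S45 can be sketched as one annealing loop; `fitness_fn` and `neighbor` are caller-supplied, and the default temperature and iteration parameters are illustrative values, not taken from the patent:

```python
import math
import random

def dominates(f1, f2):
    """f1 Pareto-dominates f2 (both objective vectors minimized)."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def msa_optimize(sol, fitness_fn, neighbor, T0=1.0, T_min=1e-2, a=0.9, iters=10):
    """Improved multi-objective simulated annealing (steps S42-S45):
    mutate to a neighboring solution; if the current solution does not
    dominate it, archive it and move there; otherwise accept the worse
    neighbor with Metropolis probability exp(-||F' - F|| / T); after
    `iters` moves per temperature, cool with T <- a*T until T < T_min."""
    archive = [sol]
    cur, f_cur = sol, fitness_fn(sol)
    T = T0
    while T >= T_min:
        for _ in range(iters):
            new = neighbor(cur)
            f_new = fitness_fn(new)
            if not dominates(f_cur, f_new):
                archive.append(new)          # good solution: keep and move
                cur, f_cur = new, f_new
            elif random.random() < math.exp(-math.dist(f_new, f_cur) / T):
                cur, f_cur = new, f_new      # Metropolis acceptance
        T *= a
    return archive
```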
S5, performing fast non-dominated sorting on the optimized solution set obtained in step S4 to obtain the Pareto-optimal front, and selecting from the front a suitable solution as the final edge service deployment scheme. The selection principle is to balance, according to the load capacity of the cloud server, two aspects: minimizing the total delay time for users to complete their service chains, which gives the best user quality of experience, and minimizing the total number of services processed by the cloud server, which fully exploits the service capacity of the micro base stations and reduces the cloud server's workload.
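One way to realize the selection principle of step S5 is an even trade-off rule; this is an illustrative choice (the patent leaves the exact rule to the cloud server's load capacity):

```python
def pick_balanced(front):
    """front: list of (total_delay, cloud_count) objective pairs on the
    Pareto front. Returns the index of the solution minimizing the sum of
    min-max normalized objectives, an even balance of the two goals."""
    lo = [min(f[i] for f in front) for i in (0, 1)]
    hi = [max(f[i] for f in front) for i in (0, 1)]
    def score(f):
        return sum((f[i] - lo[i]) / ((hi[i] - lo[i]) or 1) for i in (0, 1))
    return min(range(len(front)), key=lambda i: score(front[i]))
```

Weighting the normalized objectives unevenly would bias the choice toward lower delay or lower cloud load as required.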
In this embodiment, the final solution sets obtained by NSGA-II and the simulated annealing algorithm are shown in Fig. 4, where square points represent the Pareto-optimal solution set obtained by the NSGA-II algorithm and triangular points represent the solution set further optimized by the simulated annealing algorithm; each point corresponds to one deployment scheme. Compared with the randomly deployed solution set shown in Fig. 5, the method of the invention greatly improves the edge service deployment schemes. After the final solution set is obtained, a suitable edge deployment scheme can be chosen by trading off the users' total service chain delay against the total number of services processed by the cloud server.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A service deployment method in an edge environment based on NSGA-II and simulated annealing, characterized by comprising the following steps:
S1, constructing an edge service framework consisting of M micro base stations and N users, constructing the service deployment scheme of the micro base stations, and initializing the attributes of the users and the micro base stations, wherein each user requests a service chain from the server, the service chain consists of different kinds of services, each service has c specific candidate services, services deployed on the micro base stations are processed by the micro base stations, and services not deployed on the micro base stations are processed by the cloud server;
S2, modeling the edge service deployment problem with minimizing the total delay time for users to complete their service chains and minimizing the total number of services processed by the cloud server as the optimization objectives, and with the micro base station coverage and the number of deployed services as the constraints;
S3, encoding the service deployment schemes of all micro base stations as individuals in the NSGA-II population, solving the model established in step S2 with the NSGA-II algorithm, and obtaining the final population after the specified number of iterations is reached;
S4, selecting an appropriate number of high-quality individuals from the final population obtained in step S3 as the initial solution set, and optimizing this set with an improved multi-objective simulated annealing algorithm to obtain the optimized solution set;
S5, performing fast non-dominated sorting on the optimized solution set obtained in step S4 to obtain the Pareto-optimal front, and selecting from the front a suitable solution that balances minimizing the total delay time for users to complete their service chains against minimizing the total number of services processed by the cloud server, as the final edge service deployment scheme.
2. The service deployment method in the edge environment based on NSGA-II and simulated annealing according to claim 1, wherein in step S1, the initialization specifically comprises:
S11, setting the types of services and the number of candidates for each service;
S12, setting the number of users, the geographic position of each user, and the requested service chain, wherein the service chain consists of a series of different kinds of services with one candidate selected for each service, and the service chain of user i is defined as:

$$SC_i = \big(C_{1,j_1}, C_{2,j_2}, \dots, C_{Q,j_Q}\big)$$

wherein C_{q,j_q} indicates that the service numbered q in the service chain of user i selects its j_q-th candidate;
S13, setting the number M of micro base stations, the geographic position and service signal coverage of each micro base station, and determining the number of services each micro base station can deploy and the reachability between micro base stations, wherein a service chain requested by a user is routed between reachable micro base stations.
3. The service deployment method in the edge environment based on NSGA-II and simulated annealing according to claim 1, wherein step S2 specifically comprises:
S21, calculating the total delay time for all users to complete their service chains: when a user requests a service chain, the data is first uploaded to the nearest micro base station whose signal covers the user; if no micro base station covers the user, the data is uploaded directly to the cloud server through the macro base station; the micro base station then processes the services in the service chain in order, and when it encounters a service not deployed on the current micro base station, the data is routed to the nearest reachable micro base station that deploys the service; if no reachable micro base station deploys the service, the data is transmitted to the cloud server through the macro base station for processing; once the service chain data has been transmitted to the cloud server, the remaining services are processed by the cloud server; after the complete service chain is processed, if the last service was processed by the cloud server the data is transmitted back to the user through the macro base station, otherwise through the micro base station nearest to the user whose signal covers the user; the calculation formula of the total delay time for all users to complete their service chains is:
$$T = \sum_{n=1}^{N}\left(t_n^{up} + \sum_{q=1}^{Q}\left(t_{n,q}^{trans} + t_{exe}\right) + t_n^{down}\right)$$

wherein N is the number of users, Q is the length of a service chain, t_n^{up} is the data uplink time of the n-th user, t_{n,q}^{trans} is the time to transmit the q-th service of the n-th user to a suitable micro base station or to the cloud server, t_{exe} is the processing execution time of a single service, and t_n^{down} is the data downlink time of the n-th user;
$$t_n^{up} = \begin{cases} \alpha \, d(n, s_n), & \text{if user } n \text{ is covered by a micro base station} \\ \alpha \, d(n, S_0) + T_b, & \text{otherwise} \end{cases}$$

wherein α is the reciprocal of the wireless transmission rate, d(n, S_0) is the distance from user n to the macro base station, S_0 denotes the macro base station, d(n, s_n) is the distance from user n to the nearest micro base station that can cover user n, and T_b is the time for data to be transmitted between a base station and the cloud server over the backbone network;
$$t_{n,q}^{trans} = \beta \, d(S_p, S_q)$$

wherein β is the reciprocal of the wired transmission rate, d(S_p, S_q) is the distance from micro base station p to micro base station q, S_p denotes the base station currently holding the data, and S_q denotes the micro base station with the fewest hops from S_p that can process the q-th service of user n;
$$t_n^{down} = \beta \, d(S_e, S_n) + \alpha \, d(n, S_n)$$

wherein d(S_e, S_n) is the distance from micro base station e to micro base station n, S_e denotes the micro base station that processed the last service of user n's service chain, and S_n denotes the nearest micro base station that can cover user n;
S22, calculating the total number of services processed by the cloud server:

$$Y = \sum_{n=1}^{N}\sum_{q=1}^{Q} y_{n,q}$$

wherein y_{n,q} = 1 indicates that the q-th service of the service chain requested by user n is processed by the cloud server, and y_{n,q} = 0 indicates that it is processed by a micro base station;
S23, modeling the edge service deployment problem as: under the constraints of micro base station coverage and the number of deployed services, minimize the total delay time for users to complete their service chains and minimize the total number of services processed by the cloud server, the mathematical model being:

$$\min \; T = \sum_{n=1}^{N}\left(t_n^{up} + \sum_{q=1}^{Q}\left(t_{n,q}^{trans} + t_{exe}\right) + t_n^{down}\right)$$

$$\min \; Y = \sum_{n=1}^{N}\sum_{q=1}^{Q} y_{n,q}$$

$$\text{s.t.} \quad d_m \le Cap(S_m), \quad \forall m \in \{1,\dots,M\}$$

$$\phantom{\text{s.t.} \quad} d(n, S_n) \le Cov(S_n), \quad \forall n \in \{1,\dots,N\}$$

wherein d_m is the number of services deployed on micro base station m, Cap(S_m) is the maximum number of services micro base station m can deploy, and Cov(S_n) is the signal coverage radius of the micro base station nearest to user n.
4. The service deployment method in the edge environment based on NSGA-II and simulated annealing according to claim 1, wherein step S3 specifically comprises:
S31, encoding the service deployment schemes of all micro base stations as: X = [x(b_1), ..., x(b_M)], wherein x(b_i) is the deployment vector of micro base station i with length b_i, and each element C_{ij} of the vector indicates that the j-th candidate of service i is deployed;
S32, initializing the population with random deployments: for each individual and each micro base station i, randomly selecting b_i different services and randomly selecting one candidate for each service, wherein b_i is the maximum number of services micro base station i can deploy;
S33, performing selection, crossover, and mutation on the current population to generate an offspring population;
S34, merging the parent and offspring populations and performing fast non-dominated sorting;
S35, computing the crowding distance starting from front 0 according to the non-dominated sorting result, adding individuals to the population in descending order of crowding distance while discarding individuals whose crowding distance is zero, and processing the next front in turn after the current front is processed, until the population reaches the expected size;
S36, judging whether the specified number of iterations has been reached; if so, stopping the NSGA-II algorithm and outputting the current population, otherwise returning to step S33 for the next iteration.
5. The service deployment method in the edge environment based on NSGA-II and simulated annealing according to claim 4, wherein the selection in step S33 is binary tournament selection with replacement, the comparison basis being the dominance relation; the crossover is multi-point crossover, in which a segment of a certain length is selected and the deployment vectors of the two individuals within the segment are exchanged; the mutation is multi-point mutation, in which a segment of a certain length is selected and the deployment vector of the individual within the segment is randomly redeployed.
6. The service deployment method in the edge environment based on NSGA-II and simulated annealing according to claim 1, wherein in step S4 the traditional single-objective simulated annealing algorithm is improved into a multi-objective simulated annealing algorithm, specifically comprising:
S41, performing fast non-dominated sorting on the final population obtained in step S3, taking the individuals on front 0 and front 1 as the initial solution set of the improved simulated annealing algorithm, and applying the following optimization to each individual in the initial solution set;
S42, applying a mutation to the current solution individual, in which a segment of the deployment vector of a certain length is randomly selected and randomly redeployed, generating a neighboring solution individual;
S43, if the current solution individual does not dominate the newly generated solution individual, adding the new solution to the solution set and letting it replace the current solution for the next iteration; otherwise, determining the solution entering the next iteration according to the following Metropolis acceptance criterion:

$$I_{k+1} = \begin{cases} I_k', & r < \exp\left(-\dfrac{\lVert F_k' - F_k \rVert}{T}\right) \\ I_k, & \text{otherwise} \end{cases}$$

wherein I_{k+1} is the solution individual entering the next iteration, I_k is the current solution individual, I_k' is the new solution individual generated by mutating I_k, r is a random real number in [0,1], F_k is the fitness vector of I_k and F_k' is that of I_k' (fitness values are defined as for individuals in NSGA-II), and T is the current temperature;
S44, if k is less than the specified number of iterations, returning to step S42 for the next iteration at temperature T; otherwise, performing the simulated annealing cooling operation T = a·T, wherein a ∈ (0, 1) is the cooling coefficient, and proceeding to step S45;
S45, if T < T_min, wherein T_min is the set lower temperature limit, terminating the simulated annealing algorithm and outputting the current solution set; otherwise, returning to step S42 for the next iteration after cooling.
CN202310153844.9A 2023-02-23 2023-02-23 Service deployment method based on NSGA-II and simulated annealing in edge environment Active CN115866626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310153844.9A CN115866626B (en) 2023-02-23 2023-02-23 Service deployment method based on NSGA-II and simulated annealing in edge environment

Publications (2)

Publication Number Publication Date
CN115866626A CN115866626A (en) 2023-03-28
CN115866626B true CN115866626B (en) 2023-05-12

Family

ID=85658750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310153844.9A Active CN115866626B (en) 2023-02-23 2023-02-23 Service deployment method based on NSGA-II and simulated annealing in edge environment

Country Status (1)

Country Link
CN (1) CN115866626B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116307296B (en) * 2023-05-22 2023-09-29 南京航空航天大学 Cloud resource optimization configuration method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110418356A (en) * 2019-06-18 2019-11-05 深圳大学 A kind of calculating task discharging method, device and computer readable storage medium
CN112181655A (en) * 2020-09-30 2021-01-05 杭州电子科技大学 Hybrid genetic algorithm-based calculation unloading method in mobile edge calculation
CN112882723A (en) * 2021-02-24 2021-06-01 武汉大学 Edge service deployment method facing parallel micro-service combination
CN113220364A (en) * 2021-05-06 2021-08-06 北京大学 Task unloading method based on vehicle networking mobile edge computing system model
CN113781002A (en) * 2021-09-18 2021-12-10 北京航空航天大学 Low-cost workflow application migration method based on agent model and multi-population optimization in cloud edge cooperative network
WO2022116957A1 (en) * 2020-12-02 2022-06-09 中兴通讯股份有限公司 Algorithm model determining method, path determining method, electronic device, sdn controller, and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
冯晨 et al. "Optimal selection of outsourced cloud service compositions based on the NSGA-II-SA algorithm." Modern Manufacturing Engineering, no. 3, 2022. *
钟云峰. "Research on computing task offloading strategies for cloud-edge collaboration in the Industrial Internet." China Master's Theses Full-text Database, no. 12, 2022. *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20230328

Assignee: HUBEI THINGO TECHNOLOGY DEVELOPMENT Co.,Ltd.

Assignor: Anhui Sigao Intelligent Technology Co.,Ltd.

Contract record no.: X2023980039196

Denomination of invention: A Service Deployment Method in Edge Environment Based on NSGA - II and Simulated Annealing

Granted publication date: 20230512

License type: Exclusive License

Record date: 20230810
