CN113127193A - Method and device for unloading and scheduling dynamic services of edge network - Google Patents
- Publication number
- CN113127193A (application number CN202110310209.8A)
- Authority
- CN
- China
- Prior art keywords
- scheduling
- window
- service
- information
- system information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/12—Computing arrangements based on biological models using genetic models
- G06N3/126—Evolutionary algorithms, e.g. genetic algorithms or genetic programming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/509—Offload
Abstract
The invention provides a method and a device for offloading and scheduling dynamic services of an edge network, wherein the method comprises the following steps: updating initial system information in the optimization window of the current rolling window according to the system information of the current scheduling window, to obtain updated system information; establishing a service offloading and scheduling model in the current optimization window according to the updated system information; and analyzing the service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the next rolling window. By collecting and updating system information within the current rolling window, the method establishes a multi-objective computation offloading and task scheduling optimization model, obtains the optimal scheduling scheme for the next rolling window through model analysis, and performs service offloading and resource scheduling optimization window by window. This greatly reduces computational complexity and improves the robustness and practicality of the dynamic service offloading and resource scheduling scheme in coping with dynamic network environments and changing user services.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for offloading and scheduling dynamic services of an edge network.
Background
Mobile Edge Computing (MEC), one of the key technologies of 5G, has become an important core technology for enabling intelligent products and applications. By deploying edge cloud or fog servers at the network edge, MEC provides convenient cloud computing services close to terminal devices, which effectively compensates for the limited computing capability of those devices and greatly reduces the long-distance transmission delay and backhaul traffic burden of remote mobile cloud computing services. With the rapid development and deployment of edge networks, more and more new intelligent products and applications will benefit from MEC. However, due to limited edge network resources, existing edge network service offloading and resource scheduling mechanisms can hardly meet the massive-connection and performance requirements of future intelligent products and applications.
Current research on edge network service offloading and resource scheduling addresses the dynamics of real network environments and user equipment, including random disturbances of the wireless environment, user mobility, and the random arrival of user services. Some work on dynamic service offloading and resource scheduling uses Lyapunov optimization or Markov theory to optimize the long-term performance of the network and its users in a dynamic environment. However, the dynamic service offloading and resource scheduling schemes proposed by these studies are not very robust or practical in the face of dynamic network environments and changing user services.
Therefore, how to better implement dynamic service offloading and resource scheduling has become a research focus in the industry.
Disclosure of Invention
The invention provides a method and a device for offloading and scheduling dynamic services of an edge network, which are used for better realizing dynamic service offloading and resource scheduling.
The invention provides a method for offloading and scheduling dynamic services of an edge network, which comprises the following steps:
updating initial system information in the optimization window of the current rolling window according to the system information of the current scheduling window, to obtain updated system information;
establishing a service offloading and scheduling model in the current optimization window according to the updated system information;
and analyzing the service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the next rolling window.
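The three steps above repeat once per rolling window. A minimal sketch of that loop, assuming simple callables for each phase (all function and field names here are illustrative, not taken from the patent):

```python
def rolling_horizon_schedule(windows, collect_info, update_info, build_model, solve):
    """Sketch of the rolling-horizon procedure: in each rolling window, the
    scheduling phase collects system information, the optimization phase
    updates the model inputs and solves for the *next* window's scheme."""
    schemes = []
    system_info = {}  # initial system information carried between windows
    for w in windows:
        observed = collect_info(w)                        # scheduling window
        system_info = update_info(system_info, observed)  # optimization window
        model = build_model(system_info)
        schemes.append(solve(model))                      # next window's scheme
    return schemes
```

With trivial stand-ins for the four callables, the loop threads the observed information through and returns one scheme per window.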
According to the method for offloading and scheduling dynamic services of the edge network provided by the invention, updating the initial system information in the optimization window of the current rolling window according to the system information of the current scheduling window to obtain updated system information specifically comprises:
acquiring network state change information and newly added task set information in the scheduling window of the current rolling window;
updating the network state information of the initial system information in the optimization window of the current rolling window according to the network state change information, to obtain updated network state information;
and updating the user service information of the initial system information in the optimization window of the current rolling window according to the newly added task set information, to obtain updated user service information.
According to the method for offloading and scheduling dynamic services of the edge network provided by the invention, analyzing the service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the next rolling window specifically comprises:
generating a plurality of chromosomes as the initial population based on a two-layer encoding method according to the updated system information;
wherein the first-layer sequence of each chromosome is the subtask execution order of the user services, and the second-layer sequence is the offloading computation server indicator sequence;
performing genetic operations on the initial population, and obtaining the optimal chromosome once a preset termination condition is met;
and decoding the optimal chromosome according to the two-layer encoding method to obtain the optimal service offloading and scheduling scheme for the next rolling window.
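The two-layer encoding can be sketched as follows. This is an illustrative reading of the scheme, not the patent's exact representation: layer one is a permutation of subtask identifiers, layer two maps each subtask to an offloading target (0 for local execution, 1..M for the MEC server behind base station m).

```python
import random

def make_chromosome(subtasks, num_servers, rng=random):
    """Two-layer encoding: layer 1 is a permutation giving the subtask
    execution order; layer 2 assigns each subtask an offloading target
    (0 = local execution, 1..M = MEC server behind base station m)."""
    order = list(subtasks)
    rng.shuffle(order)
    servers = [rng.randint(0, num_servers) for _ in order]
    return order, servers

def decode(chromosome):
    """Decode a chromosome into the execution order plus a
    subtask -> server offloading plan."""
    order, servers = chromosome
    return order, dict(zip(order, servers))
```

Crossover and mutation would then operate on the permutation layer and the server layer separately, which is a common design for such encodings.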
According to the method for offloading and scheduling dynamic services of the edge network provided by the invention, updating the user service information of the initial system information in the optimization window of the current rolling window according to the newly added task set information, to obtain updated user service information, specifically comprises:
obtaining adaptive scheduling scheme information based on a greedy algorithm according to the newly added task set information;
and updating the user service information of the initial system information according to the adaptive scheduling scheme information, to obtain updated user service information.
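One plausible form of such a greedy placement, sketched under the assumption that each newly arrived subtask is simply assigned to whichever resource (local CPU, index 0, or an MEC server) can finish it earliest; the cost model and names are illustrative:

```python
def greedy_insert(new_tasks, server_finish_times, exec_time):
    """Greedy adaptive placement sketch: each newly arrived subtask goes to
    the resource that can finish it earliest, and that resource's
    earliest-free time is advanced accordingly."""
    plan = {}
    finish = dict(server_finish_times)
    for task in new_tasks:
        best = min(finish, key=lambda s: finish[s] + exec_time(task, s))
        plan[task] = best
        finish[best] += exec_time(task, best)
    return plan, finish
```

For example, with a local CPU twice as slow as the single MEC server, the first task goes to the server; the second then sees equal finish times and takes the first candidate.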
According to the method for offloading and scheduling dynamic services of the edge network provided by the invention, before updating the system information acquired in the scheduling window of the current rolling window, the method further comprises:
dividing the global time axis at a preset time interval to obtain a plurality of rolling windows;
wherein the global time axis is the time axis along which user services are executed in chronological order, and each rolling window comprises a scheduling window and an optimization window.
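The window division can be sketched as below. The halfway split between the scheduling phase and the optimization phase is an assumption for illustration; the patent only requires that each rolling window contain both phases:

```python
def divide_time_axis(horizon, interval):
    """Split the global time axis into rolling windows of equal length;
    each window [start, end) contains a scheduling phase followed by an
    optimization phase (the halfway split point is illustrative)."""
    windows = []
    t = 0.0
    while t < horizon:
        end = min(t + interval, horizon)
        mid = t + (end - t) / 2
        windows.append({"scheduling": (t, mid), "optimization": (mid, end)})
        t = end
    return windows
```

The last window is truncated at the horizon, so the division works for any horizon, not only multiples of the interval.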
According to the method for offloading and scheduling dynamic services of the edge network provided by the invention, after obtaining the updated system information of the current rolling window, the method further comprises:
determining the difference between the average user channel gain of the current rolling window and that of the next rolling window, according to the network state information of the current rolling window and of the next rolling window;
and determining the time interval adjustment information of the next rolling window according to the difference and a preset threshold.
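One way to realize such threshold-based interval adjustment is sketched below. The shrink factor, growth rule, and lower bound are assumptions; the patent only specifies comparing the gain difference against a preset threshold:

```python
def adjust_interval(interval, gain_now, gain_next, threshold,
                    factor=0.5, min_interval=0.1):
    """Interval-adaptation sketch: if the average user channel gain changes
    by more than `threshold` between consecutive rolling windows, shrink the
    next window's interval so the scheduler reacts faster; otherwise keep it."""
    diff = abs(gain_next - gain_now)
    if diff > threshold:
        return max(min_interval, interval * factor)
    return interval
```

A stable channel leaves the interval untouched, while a sharp change halves it (down to the floor), trading optimization overhead for responsiveness.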
According to the method for offloading and scheduling dynamic services of the edge network provided by the invention, the method further comprises:
re-acquiring system information in the scheduling window of the current rolling window when large-scale changes of the network topology and the user set are detected, so as to establish a new service offloading and scheduling model;
and analyzing the new service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the current scheduling window.
The invention also provides a device for offloading and scheduling dynamic services of the edge network, which comprises:
a model establishing module, configured to update the initial system information in the optimization window of the current rolling window according to the system information of the current scheduling window to obtain updated system information,
and to establish a service offloading and scheduling model in the current optimization window according to the updated system information;
and a scheme generation module, configured to analyze the service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the next rolling window.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of any one of the above methods for dynamic traffic offload and scheduling of the edge network.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the edge network dynamic traffic offloading and scheduling method as described in any of the above.
The invention provides a method and a device for offloading and scheduling dynamic services of an edge network. By acquiring and updating system information within the current rolling window, a multi-objective computation offloading and task scheduling optimization model is established; the optimal service offloading and resource scheduling scheme for the next rolling window is obtained through model analysis; and service offloading and resource scheduling optimization is performed window by window. A large-scale or infinite-horizon global optimization problem is thereby decomposed into a series of interrelated small-scale local optimization sub-problems, which greatly reduces computational complexity and improves the robustness and practicality of the dynamic service offloading and resource scheduling scheme in coping with dynamic network environments and changing user services.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for dynamic traffic offload and scheduling of an edge network according to the present invention;
FIG. 2 is a schematic diagram of the encoding of parent chromosomes in the genetic algorithm according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an adaptive scheduling scheme for obtaining a current scheduling window based on a greedy algorithm according to an embodiment of the present invention;
FIG. 4 is a frame diagram of a scheduling mechanism of a method for dynamic traffic offload and scheduling for an edge network according to the present invention;
fig. 5 is a schematic structural diagram of an edge network dynamic traffic offload and scheduling apparatus provided in the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of the method for offloading and scheduling dynamic traffic of an edge network provided by the present invention. As shown in Fig. 1, the method comprises:
Step S1: in the optimization window of the current rolling window, update the initial system information according to the system information of the current scheduling window, to obtain the updated system information.
Specifically, the rolling window described herein has a time attribute with a given interval size, and comprises a scheduling window and an optimization window.
As time progresses, the rolling window continuously rolls forward; within each interval, service offloading and resource scheduling are optimized by acquiring and updating system information.
In the embodiment of the present invention, the current rolling window is the rolling window in which service offloading and scheduling are performed at the current time.
It should be noted that the method of the present invention is applicable to multi-service, multi-user, multi-base-station edge network scenarios with timing-dependent services; a timing-dependent service as described herein is a user service whose sub-services have sequential dependencies along the time line.
In an embodiment of the present invention, there are multiple base stations M = {1, 2, …, m, …, M} in the network, and each base station is equipped with a mobile edge server of a given computing capacity, which provides computing services for tasks offloaded by users. There are randomly moving users U = {1, 2, …, u, …, U} in the network with dynamically arriving traffic: in each time period t, each user's services arrive randomly, and each service may comprise a plurality of sub-services that must be executed in sequence. The user equipment supports multi-connectivity, so multiple services of the same user can be offloaded through different base stations to different mobile edge servers for processing. All services of all users in the network are therefore numbered uniformly; the multi-user service set is denoted Service = {1, 2, …, k, …, K}, and each service has its own set of sub-services.
The embodiment of the present invention uses the tuple (R_kl, Z_kl, D_kl, Ta_kl) to represent the requirement of each sub-service, where R_kl denotes the sub-service source data size, Z_kl the sub-service computing requirement, D_kl the maximum tolerated delay of the sub-service (the maximum allowed duration for completing it), and Ta_kl the sub-service arrival time. Each sub-service of each user can be offloaded to a different mobile edge server via multi-connectivity. Let a_kl denote the sub-service offloading indicator, where a_kl = 0 means local execution and a_kl = m means the sub-service's computation is offloaded through the m-th base station to its MEC server.
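The per-sub-service requirement tuple and offloading indicator can be captured in a small record type; this is an illustrative data structure, with units assumed for concreteness:

```python
from dataclasses import dataclass

@dataclass
class SubService:
    """The per-sub-service requirement tuple (R_kl, Z_kl, D_kl, Ta_kl)
    and its offloading indicator a_kl (0 = local, m = MEC server m)."""
    R: float    # source data size to transmit (e.g. bits)
    Z: float    # computing requirement (e.g. CPU cycles)
    D: float    # maximum tolerated delay (e.g. seconds)
    Ta: float   # arrival time
    a: int = 0  # offloading indicator, defaults to local execution

    def is_offloaded(self):
        return self.a > 0
```

A scheduler would then set `a` per sub-service when the offloading scheme is decoded.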
With the above-described network information and user service information, it is understood that the system information in the method of the present invention specifically includes network state information, such as information of channel state, network topology, user equipment and server computing power, and user service information, such as information of newly arrived service sets and cancelled service sets.
Furthermore, the scheduling window of the rolling window is responsible for monitoring and acquiring the system information. The system information of the current scheduling window is the system information monitored and collected within the current scheduling window; the initial system information is the system information inherited from the previous rolling window, before the current scheduling window begins monitoring and collecting.
the optimization window of the rolling window is responsible for updating the system information monitored and collected in the scheduling window through the interaction of the user and the network information, and specifically comprises the following steps:
when the scheduling window rolls to the optimization window interval, the optimization window updates the initial system information according to the system information collected by the scheduling window to obtain updated system information;
the updated system information described in the present invention refers to the final system information of the current rolling window obtained according to the collected system information on the basis of the initial system information of the current rolling window.
And step S2, establishing a service unloading and scheduling model according to the updated system information in the current optimization window.
Specifically, the traffic offload and scheduling model described in the present invention is a model that converts the computational offload and traffic scheduling online optimization sub-problem into a multi-objective optimization problem within a rolling window.
The method of the invention uses a weighting method to model the multi-objective computation offloading and service scheduling problem, comprehensively considering measurement criteria such as network efficiency, resource cost, and user experience. Let C_k denote user U_k's total application completion time, E_k user U_k's application energy consumption overhead, and D_k user U_k's lingering service delay beyond the minimum service requirement.
Here user U_k's total completion time is C_k = max_l {C_kl}, and the lingering delay beyond the minimum service requirement is D_k = max_l {max{0, C_kl − D_kl}}, where C_kl denotes the completion time of the l-th sub-service of service k.
Compared with static scheduling, the method introduces a deviation performance index between the scheduling scheme X_{i−1} of the (i−1)-th scheduling window and the scheduling scheme X_i of the current window, so as to minimize the resource bias and scheduling overhead caused by changes between successive scheduling schemes. The computation offloading and service scheduling online-optimization sub-problem of a scheduling window can therefore be modeled as a multi-objective optimization problem whose objective minimizes, over all users, the weighted sum of α·C_k, β·E_k, and γ·D_k together with the deviation index.
Here α denotes the delay requirement coefficient of user U_k's application and β the energy consumption requirement coefficient. α and β can be preset according to the actual situation of the edge network; their specific values are not limited in the embodiment of the invention. Adjusting α and β sets the executed tasks' sensitivity to delay and energy consumption; γ is a penalty coefficient whose value is determined by each user U_k's application.
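Given the quantities defined above, the weighted per-user cost can be evaluated as sketched below. The aggregation details (a plain sum of energies, the deviation term handled separately) are assumptions for illustration:

```python
def user_cost(completion_times, energies, deadlines, alpha, beta, gamma):
    """Weighted per-user cost alpha*C_k + beta*E_k + gamma*D_k, where
    C_k is the latest sub-service completion time, E_k the total energy
    overhead (assumed to be a plain sum), and D_k the worst lingering
    delay beyond each sub-service's maximum tolerated delay."""
    C_k = max(completion_times)
    E_k = sum(energies)
    D_k = max(0.0, max(c - d for c, d in zip(completion_times, deadlines)))
    return alpha * C_k + beta * E_k + gamma * D_k
```

For two sub-services finishing at 1.0 s and 3.0 s against 2.0 s deadlines, with 0.5 J each and unit weights, the cost is 3 + 1 + 1 = 5.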
In the embodiment of the present invention, for the sub-problem of service offloading and resource scheduling online optimization of each scheduling window, on one hand, the effectiveness of scheduling in terms of user experience and network performance needs to be considered, and on the other hand, the stability of the scheduling schemes of the previous and subsequent times needs to be measured, so as to avoid frequent switching of users and large change of resource configuration. Meanwhile, the offloading of sub-traffic in traffic with timing dependency also needs to follow inter-traffic timing constraints.
Further, a service that must be executed before a given service is called an immediately preceding service of that service. Let s_kl1 denote the data transmission start time of the l-th sub-service of service k, c_kl1 its data transmission completion time, s_kl2 its computation start time, c_kl2 its computation completion time, and P_kl the set of immediately preceding sub-services of the l-th sub-service of service k. The logical constraints between user sub-services, and between the offloading data transmission phase and the computation phase, i.e. the constraint conditions of the service offloading and scheduling model, can be expressed as s_kl2 ≥ c_kl1 and s_kl1 ≥ max_{l′∈P_kl} {c_kl′2}.
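These precedence constraints can be checked mechanically for a candidate schedule; a sketch, with all dictionary names illustrative:

```python
def respects_precedence(start_tx, end_tx, start_comp, end_comp, predecessors):
    """Check, for every sub-service l, the two timing constraints:
    (1) computation starts only after l's own data transmission completes;
    (2) l's transmission starts only after every immediately preceding
        sub-service has finished its computation."""
    for l in start_tx:
        if start_comp[l] < end_tx[l]:
            return False
        for p in predecessors.get(l, ()):
            if start_tx[l] < end_comp[p]:
                return False
    return True
```

A feasibility check like this would sit inside the model's fitness evaluation, rejecting (or penalizing) chromosomes whose decoded schedules violate the timing dependencies.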
thus, based on the updated system information, a traffic offload and scheduling model can be established within the current optimization window.
And step S3, analyzing the service unloading and scheduling model to obtain the optimal service unloading and scheduling scheme of the next rolling window.
Specifically, the method analyzes the established multi-objective computation offloading and service scheduling optimization model; the model can be solved with a genetic algorithm or a deep reinforcement learning algorithm to obtain the optimal service offloading and scheduling scheme for the next rolling window.
The next rolling window described in the invention is the rolling window interval that immediately follows the current rolling window interval; the two are contiguous in time.
In the embodiment of the invention, the optimal service offloading and scheduling scheme refers to the optimal scheduling order of the subtasks of the user services together with the offloading computation server assigned to each subtask.
By the method provided by the embodiment of the invention, system information within the current rolling window is acquired and updated, a multi-objective computation offloading and service scheduling optimization model is established, and the optimal service offloading and resource scheduling scheme for the next rolling window is obtained through model analysis. Performing service offloading and resource scheduling optimization window by window decomposes a large-scale or infinite-horizon global optimization problem into a series of interrelated small-scale local optimization sub-problems, which greatly reduces computational complexity and improves the robustness and practicality of the dynamic service offloading and resource scheduling scheme in coping with dynamic network environments and changing user services.
Based on any of the above embodiments, updating the initial system information in the optimization window of the current rolling window according to the system information of the current scheduling window, to obtain updated system information, specifically comprises:
acquiring network state change information and newly added service set information in the scheduling window of the current rolling window;
updating the network state information of the initial system information in the optimization window of the current rolling window according to the network state change information, to obtain updated network state information;
and updating the user service information of the initial system information in the optimization window of the current rolling window according to the newly added service set information, to obtain updated user service information.
Specifically, the network state change information described in the invention includes changes in channel state, network topology, user equipment, server computing capacity, and the like; the service set information includes the newly arrived user service set and the cancelled user service set described above. The newly arrived user service set is the set of newly added service subtasks that require offloading and scheduling; the cancelled user service set is the set of service subtasks whose offloading and scheduling the user cancels.
Further, in the optimization window of the current rolling window, the network state information of the initial system information is updated according to the changes in channel state, network topology, and user equipment and server computing capacity, to obtain the updated network state information; and the user service information of the initial system information is updated according to the newly arrived and cancelled user service sets, to obtain the updated user service information.
With respect to the above-described information, to facilitate the execution of subsequent algorithms, embodiments of the present invention collect, update, and maintain the following data primarily in the system at each rolling window interval:
network environment data, including the matrix G of radio channel gains from users to the respective base stations, the matrix F_local of computing resources available to the user equipment, the matrix F_mec of computing resources available to the MEC servers, the time matrix T_local of computing resources available to the user equipment, the time matrix T_bs of radio resources available to the base stations, and the time matrix T_mec of computing resources available to the MEC servers;
Network topology data comprising an available base station/server set M and a user set U;
the user service information data includes a service state table series_list (including user codes, service codes, sub-service states and sub-service requirements) and the scheduling scheme execution status.
According to the embodiment of the invention, network environment data, network topology data and user service information data monitored and collected in the scheduling window time period are updated in the optimization window time period through the interaction of the user and the network information, so that final system information data in the current rolling window time period are obtained, and the calculation and analysis of a subsequent model are facilitated.
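As a minimal illustration (all names, field layouts and shapes here are hypothetical, not taken from the patent), the system information maintained at each rolling window interval could be organized and refreshed as follows:

```python
from dataclasses import dataclass, field

@dataclass
class SystemInfo:
    # Network environment data (matrices kept as nested lists for brevity)
    G: list        # radio channel gains, one row per user, one column per base station
    F_local: list  # computing resources available to each user equipment
    F_mec: list    # computing resources available to each MEC server
    T_local: list  # time of computing resources available to user equipment
    T_bs: list     # time of radio resources available to base stations
    T_mec: list    # time of computing resources available to MEC servers
    # Network topology data
    M: set = field(default_factory=set)  # available base station/server set
    U: set = field(default_factory=set)  # user set
    # User service information data (user code, service code, state, demand)
    service_list: list = field(default_factory=list)

def update_system_info(info, state_changes, new_services, cancelled):
    """Apply the scheduling-window observations inside the optimization window:
    overwrite changed network-state fields, drop cancelled services and
    append newly arrived ones."""
    for name, value in state_changes.items():
        setattr(info, name, value)  # e.g. a refreshed channel gain matrix G
    info.service_list = [s for s in info.service_list if s not in cancelled]
    info.service_list.extend(new_services)
    return info
```

The updated `SystemInfo` is then what the optimization window hands to the model-building step.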
Based on any of the above embodiments, the step of analyzing the traffic offload and scheduling model to obtain the optimal traffic offload and scheduling scheme of the next rolling window specifically includes:
generating a plurality of chromosomes as initial populations based on a two-layer coding method according to the updated system information;
the first layer sequence of each chromosome is a sub-service execution sequence of user service, and the second layer sequence is an unloading calculation server indication sequence;
performing genetic operation based on the initial population, and obtaining an optimal chromosome under the condition of meeting a termination condition;
and decoding the optimal chromosome according to the two-layer coding method to obtain the optimal service unloading and scheduling scheme of the next rolling window.
In particular, in embodiments of the present invention, a genetic algorithm may be utilized to solve the traffic offload and scheduling model.
In the genetic algorithm, each solution may be defined by a chromosome, and the present embodiment employs a segmented coding method to define the chromosome.
The coding method is key to the genetic algorithm: it affects the mutation probability and the crossover probability, so the choice of coding greatly influences the efficiency of the genetic computation.
Therefore, the embodiment of the present invention adopts an expression method based on the service execution sequence and a two-layer coding method for the chromosome, wherein the first layer represents the sub-service execution sequence O of the user services, and the second layer represents the unloading computation server indication A. The chromosome can be represented as:
I_i = [O_i; A_i]  (4)
further, a plurality of chromosomes may be generated as the initial population based on the two-layer encoding method according to the updated system information. In the embodiment of the present invention, if the number of the plurality of chromosomes is set to be N, N initial solutions may be generated, each solution is a traffic offload and resource scheduling manner, and the N manners may be regarded as an initial population. Wherein, N is a preset value, which can be set according to the control parameters of the genetic algorithm, and can be set to 50, 100, 150, etc.
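The population-initialization step above can be sketched as follows; the chromosome layout follows the two-layer coding I_i = [O_i; A_i], while the population size N and the problem dimensions are illustrative parameters, and a uniform number of sub-services per service is assumed for brevity:

```python
import random

def random_chromosome(num_services, subtasks_per_service, num_servers):
    """Two-layer chromosome I = [O; A]: O is a sub-service execution order in
    which service k appears once per sub-service; A assigns each sub-service a
    server (0 = local execution, 1..num_servers = offload target)."""
    O = [k for k in range(1, num_services + 1) for _ in range(subtasks_per_service)]
    random.shuffle(O)
    A = [random.randint(0, num_servers)
         for _ in range(num_services * subtasks_per_service)]
    return O, A

def initial_population(N, num_services, subtasks_per_service, num_servers):
    """Generate N random solutions as the initial population."""
    return [random_chromosome(num_services, subtasks_per_service, num_servers)
            for _ in range(N)]
```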
Fig. 2 is a schematic diagram of encoding of parent chromosomes based on genetic algorithm calculation in the embodiment of the present invention, and as shown in fig. 2, the embodiment of the present invention adopts an expression method based on a service execution sequence, and adopts a two-layer encoding method for chromosomes.
As shown in fig. 2, the parent chromosome I_i = [1, 2, 1, 3, 2, 3; 0, 2, 1, 0, 1, 2] is composed of two parts, i.e. the service processing sequence vector O_i = {1, 2, 1, 3, 2, 3} and the service unloading decision vector A_i = {0, 2, 1, 0, 1, 2}. In O_i, the first "1" represents the first sub-service Tsk_11 of service 1, and the second "1" represents the second sub-service Tsk_12 of service 1. Thus, the service processing sequence of O_i is {Tsk_11, Tsk_21, Tsk_12, Tsk_31, Tsk_22, Tsk_32}. In A_i, the service unloading decision vector is encoded in order of the service numbers of the mobile equipment and can be interpreted as {a_11 = 0, a_12 = 2, a_21 = 1, a_22 = 0, a_31 = 1, a_32 = 2}. Based on O_i and A_i, the immediately preceding service set of each service can be identified and the problem optimization target value can be calculated.
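The interpretation of a chromosome in fig. 2 can be reproduced with a small decoding routine; this is a sketch under the stated convention (the j-th occurrence of service k in O is sub-service Tsk_kj, and A is read in service-number order):

```python
def decode(O, A):
    """Translate a two-layer chromosome into an ordered sub-service list and
    an offload-decision map."""
    seen = {}
    sequence = []
    for k in O:
        seen[k] = seen.get(k, 0) + 1
        sequence.append((k, seen[k]))      # the pair (k, j) stands for Tsk_kj
    decisions, idx = {}, 0
    for k in sorted(seen):                 # A is encoded by service number
        for j in range(1, seen[k] + 1):
            decisions[(k, j)] = A[idx]     # a_kj
            idx += 1
    return sequence, decisions
```

Running it on the fig. 2 parent reproduces the sequence {Tsk_11, Tsk_21, Tsk_12, Tsk_31, Tsk_22, Tsk_32} and the decisions {a_11 = 0, ..., a_32 = 2}.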
And (4) carrying out genetic operation based on the initial population, and obtaining the optimal chromosome under the condition of meeting a preset termination condition.
The preset termination condition described in the present invention includes a preset convergence condition, or a preset maximum number of iterations.
In an embodiment of the present invention, in order to evaluate the quality of each solution, the objective function (1) is used to calculate a fitness value for each chromosome. The fitness function is defined as Fit(x) = G − f(x), where f(x) is the value of objective function (1) for chromosome x and G is a constant large enough to ensure that the fitness is non-negative.
Specifically, the process of finding the optimal solution by the genetic algorithm is a genetic operation process, and mainly comprises three genetic operations, namely, a selection operation, a crossover operation and a mutation operation.
For the selection operation, a roulette-wheel selection method may be employed to select superior solutions; in each selection, a chromosome is drawn from the N chromosomes of the population with probability proportional to its fitness, so that chromosomes with better fitness are more likely to be selected.
The general steps of roulette-wheel selection are:
step 1, calculating the fitness value of each individual, wherein f(x_m) is the fitness value of the m-th individual;
step 2, calculating the selection probability of each individual, p_m = f(x_m) / Σ_{k=1..N} f(x_k);
step 3, calculating the cumulative probability q_m = Σ_{k=1..m} p_k;
step 4, generating a random number θ uniformly distributed in (0, 1];
step 5, if θ ≤ q_1, then selecting the first chromosome x_1; otherwise, if q_{m−1} < θ ≤ q_m, then selecting the m-th chromosome (2 ≤ m ≤ N).
The fitness can be calculated according to the pre-designed fitness function.
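A compact roulette-wheel selection following the steps above might look like this (the fitness values are assumed pre-computed and non-negative):

```python
import random

def roulette_select(population, fitnesses, rng=random):
    """Roulette-wheel selection: selection probability proportional to fitness,
    realized through the cumulative distribution q_1 <= q_2 <= ... <= q_N."""
    total = sum(fitnesses)
    q, acc = [], 0.0
    for f in fitnesses:
        acc += f / total          # p_m = f(x_m) / sum_k f(x_k)
        q.append(acc)             # q_m = sum_{k<=m} p_k
    theta = rng.random()          # random number in [0, 1)
    for m, qm in enumerate(q):
        if theta <= qm:
            return population[m]
    return population[-1]         # guard against floating-point round-off
```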
For the crossover operation, the aim of crossover is to generate new individuals by combining parts of the parent individuals, so that the effective inheritance of characteristics ensures that information from the parent individuals is passed on to the offspring.
In the embodiment of the present invention, crossover operations are performed on the parent chromosomes, and randomly selected gene segments are swapped at the corresponding positions of O and A. However, due to the data characteristics of O in the chromosome, the crossover operation may cause data redundancy, with part of the service processing sequence information being lost. In this case, to ensure the feasibility and effectiveness of the service processing sequences in the offspring chromosomes, the redundant gene positions can be examined and supplemented with the missing gene data.
As shown in FIG. 2, the second to fourth columns of O_i and the third to fifth columns of A_i are randomly swapped between chromosome i and chromosome (i+1). In the embodiment of the present invention, inspection shows that the last gene data of O is redundant in both offspring chromosome i and offspring chromosome (i+1). At the same time, the gene data of Tsk_13 is missing in offspring chromosome i, and the gene data of Tsk_33 is missing in offspring chromosome (i+1). Thus, 1 and 3 can be filled into the redundant gene positions of offspring chromosomes i and (i+1), respectively.
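A hedged sketch of the crossover-with-repair idea follows. For brevity it swaps one common segment in both O and A (in fig. 2 the O and A segments are chosen independently), the segment bounds `lo`/`hi` are fixed rather than random, and both parents are assumed to encode the same overall set of sub-services:

```python
def crossover_with_repair(O1, A1, O2, A2, lo=1, hi=4):
    """Swap the gene segments [lo:hi] of O and A between two parents, then
    repair O in each child: extra occurrences of a service are redundant and
    are overwritten by the services that went missing, restoring a feasible
    processing sequence."""
    c1_O, c2_O = O1[:lo] + O2[lo:hi] + O1[hi:], O2[:lo] + O1[lo:hi] + O2[hi:]
    c1_A, c2_A = A1[:lo] + A2[lo:hi] + A1[hi:], A2[:lo] + A1[lo:hi] + A2[hi:]

    def repair(child, template):
        target = {k: template.count(k) for k in set(template)}
        seen, redundant = {}, []
        for i, k in enumerate(child):
            seen[k] = seen.get(k, 0) + 1
            if seen[k] > target.get(k, 0):
                redundant.append(i)            # extra copy of service k
        missing = []
        for k, n in target.items():
            missing += [k] * (n - min(seen.get(k, 0), n))
        for i, k in zip(redundant, missing):   # fill missing services in
            child[i] = k
        return child

    return (repair(c1_O, O1), c1_A), (repair(c2_O, O2), c2_A)
```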
For the mutation operation, the purpose of mutation is to increase population diversity by randomly changing some genes of a chromosome, generating new individuals through small perturbations of the chromosome.
In the embodiment of the present invention, two modes may be included for the variation.
The first way is to randomly select two gene positions in the first-layer code O and apply interchange mutation; the second-layer code is then changed accordingly, i.e. the unloading computation server indications are adjusted according to the interchanged positions of the first-layer code.
The second way is to directly mutate the second layer code a to change the indication position of the offload computation server.
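The two mutation modes could be realized as below; this is a simplified sketch in which the first mode swaps the offload indications at the same two positions that were interchanged in O (one plausible reading of the description above):

```python
import random

def mutate_swap(O, A, rng=random):
    """Mode 1: interchange mutation on two random positions of the first-layer
    code O, with the corresponding second-layer entries swapped as well."""
    O, A = O[:], A[:]
    i, j = rng.sample(range(len(O)), 2)
    O[i], O[j] = O[j], O[i]
    A[i], A[j] = A[j], A[i]
    return O, A

def mutate_offload(A, num_servers, rng=random):
    """Mode 2: directly mutate one entry of the second-layer code A so the
    sub-service points at a (possibly different) computation server."""
    A = A[:]
    i = rng.randrange(len(A))
    A[i] = rng.randint(0, num_servers)   # 0 = local, 1..num_servers = offload
    return A
```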
Further, through crossover and mutation operations, a population of progeny is generated. The parent population and the offspring population are merged into a new population and all chromosomes are sorted in descending order according to the calculated fitness. According to the natural evolution principle, chromosomes with poor fitness are removed, and a certain number of chromosomes with good fitness are reserved in a new population. Based on the population evolution algorithm, fitness calculation, selection operation, crossover operation and mutation operation are repeatedly carried out until a preset termination condition is reached, and the optimal chromosome is obtained.
And decoding the corresponding scheduling scheme of the optimal chromosome by using a two-layer coding method to obtain the optimal service unloading and scheduling scheme of the next rolling window.
Through the embodiment of the present invention, the steps of solving the service offloading and scheduling model according to the genetic algorithm and obtaining the optimal service offloading and scheduling scheme of the next rolling window can be specifically expressed as follows:
step 5, selecting superior individuals from the population by adopting a roulette selection method to form a paired population P1;
step 6, carrying out genetic operations including selection operation, crossover operation and mutation operation on the population P1 to generate a progeny population P2;
step 7, calculating an objective function for the offspring population P2 according to the traffic unloading and scheduling scheme;
step 8, reinserting the offspring population into the population P0 to replace parent individuals, and outputting the objective values of the individuals of the current generation after insertion;
and 9, if the convergence condition is met or the iteration times are reached, terminating the operation, otherwise, returning to the step 4.
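The steps above can be condensed into a generic outer evolution loop; the selection, variation (crossover plus mutation) and fitness operators are passed in as placeholders rather than being the patent's exact procedures:

```python
def evolve(init_population, fitness_fn, select, vary, max_iter=100, eps=1e-6):
    """Generic GA outer loop: evaluate, keep the fittest (elitist truncation),
    build a mating pool, generate offspring and merge them with the parents,
    until the best fitness converges or the iteration budget is exhausted."""
    population = list(init_population)
    best_prev = None
    for _ in range(max_iter):
        scored = sorted(population, key=fitness_fn, reverse=True)
        population = scored[:len(init_population)]   # drop poor chromosomes
        best = fitness_fn(population[0])
        if best_prev is not None and abs(best - best_prev) < eps:
            break                                    # convergence condition
        best_prev = best
        mating_pool = [select(population) for _ in population]
        population = population + vary(mating_pool)  # parents + offspring
    return population[0]
```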
By the method of the embodiment of the invention, the multi-objective service calculation unloading and service scheduling optimization problem model is calculated and analyzed based on the genetic algorithm, and the optimal service unloading and scheduling scheme of the next rolling window is obtained.
Based on any of the above embodiments, the step of updating the user service information of the initial system information in the optimized window of the current rolling window according to the newly added service set information to obtain updated user service information specifically includes:
obtaining self-adaptive scheduling scheme information based on a greedy algorithm according to the newly added service set information;
and updating the user service information of the initial system information according to the self-adaptive scheduling scheme information to obtain updated user service information.
Specifically, in the embodiment of the present invention, the information of the newly added service set includes a newly arrived user service set and a cancelled user service set; the information of the adaptive scheduling scheme described in the present invention represents the user service information corresponding to the adaptive scheduling scheme.
Further, for the newly arrived service in the scheduling window period, in order to be able to adaptively provide the scheduling service in the scheduling window period, the newly arrived service is subjected to service coding, and the initial scheduling scheme information corresponding to the newly arrived service is generated by using a greedy algorithm.
In the embodiment of the present invention, the newly arrived service described in the present invention is the newly added service set information.
Through the embodiment of the invention, the specific steps for generating the adaptive scheduling scheme information are as follows:
and carrying out service coding on the newly arrived service, and adding all the sub-services of the newly arrived service to the user sub-service execution sequence O of the original scheduling scheme of the current scheduling window according to the service arrival sequence.
Then, according to the execution time sequence of the sub-services, the function value of objective function (1) under different unloading decisions is calculated, and, based on a greedy algorithm, the unloading decision corresponding to the minimum objective function value is selected and appended behind the unloading computation server indication sequence A of the original scheduling scheme of the current scheduling window.
Therefore, after the initial scheduling scheme information of the newly arrived service is added to the original scheduling scheme information of the current window, the self-adaptive scheduling scheme information is obtained.
In the embodiment of the invention, the generated self-adaptive scheme is decoded to obtain the user service information corresponding to the self-adaptive scheme, and the user service information of the initial system information is updated to obtain the updated user service information.
By the method of the embodiment of the invention, the newly arrived service in the scheduling window period can be self-adaptively provided with the scheduling service based on the greedy algorithm.
Fig. 3 is a schematic diagram of obtaining an adaptive scheduling scheme of the current scheduling window based on a greedy algorithm in an embodiment of the present invention. As shown in fig. 3, the original scheduling scheme of the current scheduling window, i.e. the optimal scheduling scheme generated by the optimization window of the previous rolling window, is Schedule = {1, 2, 3, 1, 3, 2; 0, 1, 2, 2, 1, 2}; the newly arrived service is coded as 4 and has 2 related sub-services. The function value of objective function (1) under different decisions is calculated based on the greedy algorithm, and the initial scheduling decision corresponding to the minimum objective function value is selected as {a_41 = 0, a_42 = 2}; the initial scheduling decision of the newly arrived service is therefore added into the original scheduling scheme, generating the adaptive scheduling scheme Schedule = {1, 2, 3, 1, 3, 2, 4, 4; 0, 1, 2, 2, 1, 2, 0, 2}.
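The greedy pre-decision of fig. 3 can be sketched as follows, with the objective function passed in as a placeholder for objective function (1); the function evaluates a candidate sequence O together with the decisions chosen so far:

```python
def greedy_insert(schedule_O, schedule_A, new_service, subtasks, num_servers,
                  objective):
    """Greedy pre-decision for a newly arrived service: append its sub-services
    to the execution sequence O in arrival order, then, for each sub-service,
    pick the unloading decision that minimizes the objective given the
    decisions already fixed."""
    O = schedule_O + [new_service] * subtasks
    A = schedule_A[:]
    for _ in range(subtasks):
        best = min(range(num_servers + 1), key=lambda a: objective(O, A + [a]))
        A.append(best)                     # greedy choice for this sub-service
    return O, A
```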
Based on any of the above embodiments, before the step of updating the system information acquired by the scheduling window of the current rolling window, the method further includes:
dividing a global time axis according to a preset time interval to obtain a plurality of rolling windows;
the global time axis is a time axis for executing user services according to time sequence, and each rolling window comprises a scheduling window and an optimization window.
Specifically, in order to decompose a global optimization complex problem of a large scale or infinite period into a series of small scale local optimization sub-problems related to each other, in the embodiment of the present invention, the whole dynamic scheduling process is divided into a plurality of continuous static scheduling intervals.
Further, with Δ T as a unit time interval, a global time axis for executing user services in chronological order is divided into a plurality of rolling windows. Each rolling window is divided into two basic sub-windows according to functions: an optimization window and a scheduling window.
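The window division can be illustrated as below; the 50/50 split between the scheduling window and the optimization window is an assumption made for the sketch, not something specified by the patent:

```python
def rolling_windows(horizon, delta_t, sched_frac=0.5):
    """Divide a global time axis [0, horizon) into rolling windows of length
    delta_t, each split into a scheduling window followed by an optimization
    window."""
    windows, t = [], 0.0
    while t < horizon:
        end = min(t + delta_t, horizon)
        split = min(t + delta_t * sched_frac, end)
        windows.append({"scheduling": (t, split), "optimization": (split, end)})
        t = end
    return windows
```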
By the embodiment of the invention, each scheduling interval can be optimized on line by the rolling window rescheduling method, so that the system can achieve local optimization in each scheduling interval, and the service unloading and resource scheduling scheme can adapt to complicated and variable dynamic users and network environment changes.
Based on any of the above embodiments, after the step of obtaining updated system information of the current scroll window, the method further includes:
determining the difference value between the average user channel gain of the current rolling window and the average user channel gain of the next rolling window according to the network state information of the current rolling window and the network state information of the next rolling window;
and determining the time interval adjustment information of the next rolling window according to the difference and a preset threshold.
Specifically, the time interval adjustment information of the next rolling window described in the present invention refers to a specific value of the time interval of the next rolling window obtained through optimization adjustment.
Further, by monitoring the network state information of the current rolling window and the next rolling window in real time, the average user channel gain value of the current rolling window and the average user channel gain value of the next rolling window can be obtained, so that the difference value between the two values is determined, and the dynamic property of the network environment is judged.
In the embodiment of the present invention, the step of adjusting and optimizing the time interval of the next rolling window specifically includes:
setting M as a constant integer; G_threshold is a preset threshold, namely the channel gain difference threshold; Time_down is the interval reduction step, and Time_up is the interval increase step.
If the average user channel gain difference between the current rolling window and the next rolling window is greater than the set threshold G_threshold for M consecutive times, the network environment can be judged to be highly dynamic, and the time interval Δ T of the next rolling window is reduced by the length Time_down so as to shorten the time interval of the next rolling window;
if the average user channel gain difference between the current rolling window and the next rolling window is less than the set threshold G_threshold for M consecutive times, the network environment is judged to have low dynamics, and the time interval Δ T of the next rolling window is increased by the length Time_up so as to prolong the time interval of the next rolling window.
The method of the embodiment of the invention judges the dynamic property of the network environment by monitoring the average user channel gain difference value of the front rolling window and the back rolling window, thereby adaptively adjusting the time interval of the rolling window interval.
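The interval-adjustment rule above can be written as a small function; the min/max bounds are an added safeguard not specified in the patent, and all parameter values are illustrative:

```python
def adjust_interval(delta_t, gain_diffs, M=3, G_threshold=0.5,
                    time_down=0.1, time_up=0.1, min_dt=0.1, max_dt=10.0):
    """Adaptive rolling-window interval: if the average-channel-gain difference
    exceeds G_threshold for M consecutive windows, the environment is highly
    dynamic and delta_t is shortened; if it stays below the threshold for M
    consecutive windows, delta_t is lengthened."""
    recent = gain_diffs[-M:]
    if len(recent) == M:
        if all(d > G_threshold for d in recent):
            delta_t = max(min_dt, delta_t - time_down)   # shorten interval
        elif all(d < G_threshold for d in recent):
            delta_t = min(max_dt, delta_t + time_up)     # lengthen interval
    return delta_t
```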
Based on any of the above embodiments, the method further comprises:
in the scheduling window of the current rolling window, under the condition that the network topology and the user set are monitored to be changed in a large scale, system information is obtained again so as to establish a new service unloading and scheduling model;
and analyzing the new service unloading and scheduling model to obtain the service unloading and scheduling scheme of the current scheduling window.
Specifically, a large-scale change of the network topology and the user set described in the present invention is also referred to as a rescheduling trigger event; system information is re-acquired when a rescheduling trigger event is monitored in the scheduling window of the current rolling window.
The re-acquired system information described in the present invention is the system information obtained by monitoring and collecting the current scheduling window again.
After entering the optimization window from the scheduling window, the current optimization window establishes a new service unloading and scheduling model according to the acquired system information, and then performs optimization analysis on the new service unloading and scheduling model through optimization algorithms such as a genetic algorithm or deep reinforcement learning, so as to obtain an optimal service unloading and scheduling scheme of the current scheduling window.
By the method of the embodiment of the invention, when monitoring large-scale changes of the network topology and the user set and events needing urgent rescheduling, such as faults of a base station or equipment, the scheduling scheme optimization decision in the scheduling window is started, and the purpose of self-adaptively providing service during the scheduling window is realized.
Fig. 4 is a frame diagram of a scheduling mechanism of a dynamic traffic offload and scheduling method for an edge network according to the present invention, as shown in fig. 4, a global time axis is divided into a plurality of rolling windows at time intervals Δ T, a current rolling window is an (i-1) th rolling window, a next rolling window is an ith rolling window, the (i-1) th rolling window includes an (i-1) th scheduling window and an (i-1) th optimization window, and the (i) th rolling window includes an (i) th scheduling window and an (i) th optimization window.
As shown in fig. 4, in the (i-1) th scheduling window, information collection and monitoring are performed, wherein the information includes network topology information, network environment information and traffic information. In addition, the system is also responsible for service coding and pre-decision, namely, initial scheduling scheme information corresponding to newly arrived services is obtained aiming at the newly arrived services through a service coding and greedy pre-decision algorithm, and then an adaptive scheduling scheme is generated;
As time goes on, the scheduling window period ends and the prediction time of the optimization window period begins. In the (i-1) th optimization window, the network information is updated according to the information collected and monitored in the scheduling window, including updating the user/base station sets, updating the channel gains and node resource states, and updating the service information; optionally, the window interval may be adjusted in the optimization window to obtain the time interval of the (i) th rolling window. After the updated network information is obtained, the optimization of the task unloading and resource scheduling sub-problem of the scheduling window is started in the optimization window: a multi-objective computation unloading and task scheduling optimization problem model is established, and the resource-limited multi-objective optimization problem of the scheduling window is solved by using optimization algorithms such as a genetic algorithm, so as to obtain the optimal service unloading and scheduling scheme of the next rolling window, i.e. the (i) th scheduling window, which the (i) th scheduling window executes after the rescheduling time of the optimization window period.
In the (i-1) th scheduling window, if a rescheduling trigger event is monitored, such as a large-scale change of the network topology or the user set, the optimization program for computation unloading and task scheduling is restarted immediately; that is, a multi-objective computation unloading and task scheduling optimization problem model is established according to the re-collected and re-monitored system information, and the resource-limited multi-objective optimization problem of the scheduling window is solved by using optimization algorithms such as a genetic algorithm, so as to obtain the optimal service unloading and scheduling scheme of the (i) th scheduling window.
Fig. 5 is a schematic structural diagram of an edge network dynamic traffic offload and dispatch device provided by the present invention, and as shown in fig. 5, the edge network dynamic traffic offload and dispatch device provided by the present invention includes:
a model building module 510, configured to update initial system information according to system information of a current scheduling window in an optimization window of a current rolling window, to obtain updated system information;
establishing a service unloading and scheduling model according to the updated system information in the current optimization window;
and a scheme generating module 520, configured to analyze the service offloading and scheduling model to obtain an optimal service offloading and scheduling scheme of the next rolling window.
The invention provides a dynamic service unloading and scheduling device of an edge network, which establishes a multi-objective calculation unloading and task scheduling problem optimization model by collecting and updating system information in a current rolling window, obtains an optimal service unloading and resource scheduling scheme of a next rolling window through model analysis, and gradually performs service unloading and resource scheduling optimization on each rolling window interval, thereby realizing that a global optimization complex problem of a large scale or infinite period is decomposed into a series of small-scale local optimization sub-problems which are related to each other, greatly reducing the calculation complexity, and improving the robustness and the practicability of the dynamic service unloading and resource scheduling scheme for dealing with dynamic network environment and user service change.
The edge network dynamic service offloading and scheduling apparatus described in the present invention and the above-described edge network dynamic service offloading and scheduling method may be referred to in a corresponding manner, and thus are not described herein again.
Fig. 6 is a schematic structural diagram of an electronic device provided in the present invention, and as shown in fig. 6, the electronic device may include: a processor (processor)610, a communication Interface (Communications Interface)620, a memory (memory)630 and a communication bus 640, wherein the processor 610, the communication Interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform the edge network dynamic traffic offload and scheduling method, which comprises: updating initial system information in an optimization window of a current rolling window according to system information of a current scheduling window to obtain updated system information; establishing a service unloading and scheduling model according to the updated system information in the current optimization window; and analyzing the service unloading and scheduling model to obtain the optimal service unloading and scheduling scheme of the next rolling window.
In addition, the logic instructions in the memory 630 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer being capable of executing the edge network dynamic traffic offloading and scheduling method provided by the above methods, the method including: updating initial system information in an optimization window of a current rolling window according to system information of a current scheduling window to obtain updated system information; establishing a service unloading and scheduling model according to the updated system information in the current optimization window; and analyzing the service unloading and scheduling model to obtain the optimal service unloading and scheduling scheme of the next rolling window.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, is implemented to perform the edge network dynamic traffic offloading and scheduling method provided by the above methods, the method including: updating initial system information in an optimization window of a current rolling window according to system information of a current scheduling window to obtain updated system information; establishing a service unloading and scheduling model according to the updated system information in the current optimization window; and analyzing the service unloading and scheduling model to obtain the optimal service unloading and scheduling scheme of the next rolling window.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for dynamic service offloading and scheduling in an edge network, characterized by comprising the following steps:
updating initial system information in the optimization window of the current rolling window according to system information of the current scheduling window, to obtain updated system information;
establishing a service offloading and scheduling model according to the updated system information in the current optimization window; and
solving the service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the next rolling window.
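The claim does not prescribe a concrete implementation of this update-build-solve pipeline. The following Python sketch illustrates the flow under toy assumptions: system information is a dictionary, the "model" is a stand-in wrapper, and the solver is a shortest-job-first heuristic. All names (`update_system_info`, `plan_next_window`, the task tuples) are hypothetical, not drawn from the patent.

```python
def update_system_info(initial_info: dict, scheduling_info: dict) -> dict:
    """Merge the system information observed in the current scheduling
    window into the initial information of the optimization window."""
    updated = dict(initial_info)
    updated.update(scheduling_info)
    return updated

def plan_next_window(initial_info, scheduling_info, solve):
    """Claim-1 pipeline: update info -> build model -> solve for the
    offloading and scheduling scheme of the next rolling window."""
    info = update_system_info(initial_info, scheduling_info)
    model = {"system_info": info}  # stand-in for the real optimization model
    return solve(model)

# Toy solver: order tasks by ascending size (shortest-job-first).
scheme = plan_next_window(
    {"channel_gain": 0.8, "tasks": []},
    {"channel_gain": 0.6, "tasks": [("t1", 5), ("t2", 2)]},
    lambda m: sorted(m["system_info"]["tasks"], key=lambda t: t[1]),
)
print(scheme)  # [('t2', 2), ('t1', 5)]
```

In the patented method the `solve` step is the genetic algorithm of claim 3; the lambda here merely keeps the sketch self-contained.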
2. The edge network dynamic service offloading and scheduling method according to claim 1, wherein the step of updating the initial system information in the optimization window of the current rolling window according to the system information of the current scheduling window specifically comprises:
acquiring network state change information and newly added task set information within the scheduling window of the current rolling window;
updating the network state information of the initial system information in the optimization window of the current rolling window according to the network state change information, to obtain updated network state information; and
updating the user service information of the initial system information in the optimization window of the current rolling window according to the newly added task set information, to obtain updated user service information.
3. The edge network dynamic service offloading and scheduling method according to claim 1, wherein the step of solving the service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the next rolling window specifically comprises:
generating a plurality of chromosomes as an initial population based on a two-layer encoding method according to the updated system information,
wherein the first-layer sequence of each chromosome is a subtask execution sequence of the user services, and the second-layer sequence is an offloading computation server indicator sequence;
performing genetic operations on the initial population to obtain an optimal chromosome once a preset termination condition is met; and
decoding the optimal chromosome according to the two-layer encoding method to obtain the optimal service offloading and scheduling scheme for the next rolling window.
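The two-layer encoding of claim 3 can be sketched as follows. This is an illustrative assumption, not the patented encoding itself: layer one is modeled as a permutation of subtask indices (execution order) and layer two as a list of server indices, with `random_chromosome` and `decode` as hypothetical helper names.

```python
import random

def random_chromosome(num_subtasks, num_servers, rng):
    """Two-layer encoding: layer 1 is a permutation giving the subtask
    execution order; layer 2 maps each subtask to an offloading server."""
    order = list(range(num_subtasks))
    rng.shuffle(order)
    servers = [rng.randrange(num_servers) for _ in range(num_subtasks)]
    return order, servers

def decode(chromosome):
    """Decode a chromosome into an (execution step, subtask, server) plan."""
    order, servers = chromosome
    return [(step, task, servers[task]) for step, task in enumerate(order)]

rng = random.Random(0)          # seeded for reproducibility
plan = decode(random_chromosome(4, 2, rng))
assert sorted(t for _, t, _ in plan) == [0, 1, 2, 3]  # every subtask scheduled once
assert all(s in (0, 1) for _, _, s in plan)           # valid server indices
print(plan)
```

Crossover and mutation operators would then act on the two layers separately (e.g. order crossover on layer one, point mutation on layer two) so that offspring remain valid chromosomes; the patent leaves those operators to the genetic-operation step.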
4. The edge network dynamic service offloading and scheduling method according to claim 2, wherein the step of updating the user service information of the initial system information in the optimization window of the current rolling window according to the newly added task set information specifically comprises:
obtaining adaptive scheduling scheme information based on a greedy algorithm according to the newly added task set information; and
updating the user service information of the initial system information according to the adaptive scheduling scheme information, to obtain updated user service information.
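Claim 4 names a greedy algorithm without fixing its form. One common choice, shown here purely as an assumed example, is least-loaded-server assignment with larger tasks placed first; `greedy_schedule` and the server names are hypothetical.

```python
def greedy_schedule(new_tasks, server_loads):
    """Greedy adaptive scheduling: assign each newly arrived task to the
    server that currently finishes earliest (minimum accumulated load),
    considering larger tasks first."""
    loads = dict(server_loads)
    assignment = {}
    for task, cost in sorted(new_tasks, key=lambda t: -t[1]):
        server = min(loads, key=loads.get)  # least-loaded server
        assignment[task] = server
        loads[server] += cost
    return assignment, loads

assignment, loads = greedy_schedule(
    [("a", 4), ("b", 2), ("c", 3)],
    {"edge1": 0.0, "edge2": 1.0},
)
print(assignment, loads)
```

Running the example, task `a` (cost 4) goes to the initially idle `edge1`, `c` to `edge2`, and `b` to whichever server then finishes first, leaving balanced loads; the resulting scheme would feed back into the user service information as per the claim.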
5. The edge network dynamic service offloading and scheduling method according to claim 1, wherein before the step of updating the initial system information according to the system information of the current scheduling window, the method further comprises:
dividing a global time axis at a preset time interval to obtain a plurality of rolling windows,
wherein the global time axis is the time axis along which user services are executed in chronological order, and each rolling window comprises a scheduling window and an optimization window.
6. The edge network dynamic service offloading and scheduling method according to claim 1, wherein after the step of obtaining the updated system information of the current rolling window, the method further comprises:
determining the difference between the average user channel gain of the current rolling window and that of the next rolling window according to the network state information of the two windows; and
determining time-interval adjustment information for the next rolling window according to the difference and a preset threshold.
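Claim 6 leaves the adjustment rule open. A plausible sketch, with the shrink/grow factors and clamping bounds as pure assumptions: if the average channel gain changes by more than the threshold, shorten the next interval so the scheduler reacts faster; otherwise lengthen it to amortize re-optimization cost.

```python
def adjust_interval(interval, gain_now, gain_next, threshold,
                    shrink=0.5, grow=1.5, min_iv=1.0, max_iv=16.0):
    """Adapt the rolling-window interval to channel dynamics: a gain
    difference above `threshold` shrinks the next interval, a stable
    channel grows it, clamped to [min_iv, max_iv]."""
    diff = abs(gain_now - gain_next)
    factor = shrink if diff > threshold else grow
    return max(min_iv, min(max_iv, interval * factor))

print(adjust_interval(4.0, 0.9, 0.4, 0.2))   # 2.0 (large change -> shrink)
print(adjust_interval(4.0, 0.9, 0.85, 0.2))  # 6.0 (stable -> grow)
```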
7. The edge network dynamic service offloading and scheduling method according to claim 1, further comprising:
re-acquiring system information and establishing a new service offloading and scheduling model when a large-scale change in the network topology or the user set is detected within the scheduling window of the current rolling window; and
solving the new service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the current scheduling window.
8. An edge network dynamic service offloading and scheduling apparatus, comprising:
a model establishing module, configured to update initial system information in the optimization window of the current rolling window according to system information of the current scheduling window to obtain updated system information, and to establish a service offloading and scheduling model according to the updated system information in the current optimization window; and
a scheme generation module, configured to solve the service offloading and scheduling model to obtain the optimal service offloading and scheduling scheme for the next rolling window.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the edge network dynamic service offloading and scheduling method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the edge network dynamic service offloading and scheduling method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110310209.8A CN113127193A (en) | 2021-03-23 | 2021-03-23 | Method and device for unloading and scheduling dynamic services of edge network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113127193A true CN113127193A (en) | 2021-07-16 |
Family
ID=76773830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110310209.8A Pending CN113127193A (en) | 2021-03-23 | 2021-03-23 | Method and device for unloading and scheduling dynamic services of edge network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113127193A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108256671A (en) * | 2017-12-26 | 2018-07-06 | 佛山科学技术学院 | A kind of more resources of multitask based on learning-oriented genetic algorithm roll distribution method |
US20200184407A1 (en) * | 2018-12-10 | 2020-06-11 | At&T Intellectual Property I, L.P. | Telecommunication network customer premises service scheduling optimization |
Non-Patent Citations (1)
Title |
---|
吕昕晨 (Lyu Xinchen): "Research on Task Migration and Resource Management in Mobile Edge Computing", Information Science and Technology Series, pages 36-99 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113641496A (en) * | 2021-08-13 | 2021-11-12 | 西安工程大学 | DIDS task scheduling optimization method based on deep reinforcement learning |
CN113641496B (en) * | 2021-08-13 | 2023-12-12 | 陕西边云协同网络科技有限责任公司 | DIDS task scheduling optimization method based on deep reinforcement learning |
CN113780745A (en) * | 2021-08-16 | 2021-12-10 | 华中科技大学 | IT (information technology) personnel scheduling method and system driven by door-to-door service requirement |
CN113780745B (en) * | 2021-08-16 | 2024-05-14 | 华中科技大学 | IT personnel scheduling method and system driven by door-to-door service demand |
CN115016932A (en) * | 2022-05-13 | 2022-09-06 | 电子科技大学 | Embedded distributed deep learning model resource elastic scheduling method |
CN117422426A (en) * | 2023-12-18 | 2024-01-19 | 广州南华工程管理有限公司 | Information optimization method and system based on water transport engineering BIM model |
CN117422426B (en) * | 2023-12-18 | 2024-04-12 | 广州南华工程管理有限公司 | Information optimization method and system based on water transport engineering BIM model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113127193A (en) | Method and device for unloading and scheduling dynamic services of edge network | |
Sabuncuoglu et al. | Job shop scheduling with beam search | |
Li et al. | A hybrid load balancing strategy of sequential tasks for grid computing environments | |
CN112685170B (en) | Dynamic optimization of backup strategies | |
CN106610867B (en) | On-chip network task scheduling method and device | |
CN112637806B (en) | Transformer substation monitoring system based on deep reinforcement learning and resource scheduling method thereof | |
CN110851277A (en) | Task scheduling strategy based on edge cloud cooperation in augmented reality scene | |
Lu et al. | A resource investment problem based on project splitting with time windows for aircraft moving assembly line | |
CN116185523A (en) | Task unloading and deployment method | |
CN114118832A (en) | Bank scheduling method and system based on historical data prediction | |
CN113139639B (en) | MOMBI-oriented smart city application multi-target computing migration method and device | |
Satrya et al. | Evolutionary computing approach to optimize superframe scheduling on industrial wireless sensor networks | |
Kashyap et al. | DECENT: Deep learning enabled green computation for edge centric 6G networks | |
CN106406082B (en) | System control method, device, controller and control system | |
CN115759672A (en) | Customer service scheduling method and device | |
CN114124554B (en) | Virtual network service chain throughput prediction method | |
Nemmich et al. | An Enhanced Discrete Bees Algorithm for Resource Constrained Optimization Problems | |
CN113342487B (en) | Cloud computing resource scheduling method based on online fault tolerance | |
CN114741857A (en) | Satellite time-frequency resource scheduling method, computer equipment and readable medium | |
CN114936808A (en) | Cloud-edge cooperative task management system and method for substation fault detection | |
Belhor et al. | Multiobjective Evolutionary Algorithm for Home Health Care Routing and Scheduling Problem | |
Wang et al. | A Novel Coevolutionary Approach to Reliability Guaranteed Multi‐Workflow Scheduling upon Edge Computing Infrastructures | |
Wena et al. | Multistage human resource allocation for software development by multiobjective genetic algorithm | |
CN110689320A (en) | Large-scale multi-target project scheduling method based on co-evolution algorithm | |
CN116521453B (en) | Cloud cluster disaster recovery method and related equipment based on integer linear programming model ILP |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||