CN114599096A - Mobile edge calculation unloading time delay optimization method and device and storage medium - Google Patents

Mobile edge calculation unloading time delay optimization method and device and storage medium Download PDF

Info

Publication number
CN114599096A
CN114599096A · Application CN202210148031.6A
Authority
CN
China
Prior art keywords
task
user
processing time
population
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210148031.6A
Other languages
Chinese (zh)
Inventor
叶帆
窦海娥
王磊
赵君喜
郑宝玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202210148031.6A priority Critical patent/CN114599096A/en
Publication of CN114599096A publication Critical patent/CN114599096A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 Local resource management
    • H04W 72/04 Wireless resource allocation
    • H04W 72/044 Wireless resource allocation based on the type of the allocated resource
    • H04W 72/0446 Resources in time domain, e.g. slots or frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/12 Computing arrangements based on biological models using genetic models
    • G06N 3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 Local resource management
    • H04W 72/50 Allocation or scheduling criteria for wireless resources
    • H04W 72/56 Allocation or scheduling criteria for wireless resources based on priority criteria
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Genetics & Genomics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Physiology (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a mobile edge computing offloading delay optimization method, device and storage medium. The method comprises the following steps: comprehensively calculating and evaluating the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain priority weights; constructing a task processing time model and calculating the task processing time; establishing a problem model of the optimization target using the priority weights and the task processing time; and, based on the problem model of the optimization target, obtaining the global optimum through iteration and generating the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.

Description

Mobile edge calculation unloading time delay optimization method and device and storage medium
Technical Field
The invention relates to a mobile edge computing offloading delay optimization method, device and storage medium, and belongs to the technical field of mobile edge computing.
Background
Task offloading means that a user offloads tasks that cannot be processed locally to a data center for processing. Resource allocation mainly refers to the allocation of wireless channels, wired channels, and server computing and caching resources. Task offloading is closely coupled with resource allocation: after a task allocation decision is made, the reasonable allocation of resources must be considered. For example, when a task is offloaded, how many wireless channels or time slots should the radio access network allocate to transmit it, how much wired-channel bandwidth should be allocated in the backhaul network, and how much computing resource should the data center finally allocate to handle the task? All of these must be properly planned to use resources efficiently, and the rational allocation of resources determines task execution efficiency and the user's service experience. During data splitting, the transmission bandwidth depends on the minimum bandwidth along the multiple paths. In general, wired transmission capacity is easy to expand by laying optical fiber and does not become the limiting factor of the transmission rate; wireless resources, however, are comparatively scarce and are likely to become the bottleneck that limits transmission speed.
Disclosure of Invention
The invention aims to overcome the deficiencies of the prior art and provides a mobile edge computing offloading delay optimization method, device and storage medium that effectively avoid network congestion and data accumulation, make full use of the computing resources of heterogeneous edge servers while guaranteeing the transmission of priority users, and effectively reduce the system delay.
To achieve this object, the invention adopts the following technical scheme:
In a first aspect, the invention provides a mobile edge computing offloading delay optimization method, comprising:
comprehensively calculating and evaluating the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain priority weights;
constructing a task processing time model and calculating the task processing time;
establishing a problem model of the optimization target using the priority weights and the task processing time;
and, based on the problem model of the optimization target, obtaining the global optimum through iteration and generating the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.
Further, comprehensively calculating and evaluating the task delay response priority of each user in the communication system using the analytic hierarchy process model to obtain the priority weights includes:
establishing the analytic hierarchy process model;
comparing the importance of the factors at the same level with respect to a criterion at the level above to construct judgment matrices;
after a judgment matrix is constructed, verifying quantitatively whether it deviates too far from a consistent matrix;
calculating the weight of each index, the weights being obtained with the arithmetic mean method;
and forming a new matrix from the weight vectors of all criterion-layer–scheme-layer judgment matrices and calculating the task priority weight vector.
Further, the task processing time model is constructed and the task processing time is calculated by the following formula:
[equation image: task processing time t_n combining the five offloading modes]
where t_n denotes the task processing time, d_n the task data amount, α the channel selection coefficient, k_n the percentage of time slots occupied by user n when offloading to the base station, C_S and C_M the channel capacities of the SBS and the MBS respectively, β the offloading coefficient, ζ_S and ζ_M the time required by the SBS and the MBS respectively to transmit a unit amount of data, and f_n^{SB_m}, f_n^{MB} and f_n^{CE} the CPU frequencies allocated to user n by the corresponding servers on SBS m, the MBS and the cloud computing center; n denotes the user number and m denotes the SBS number.
Further, the problem model of the optimization target is established using the priority weights and the task processing time, with the following formulation:

min (1/N)·Σ_{n=1}^{N} w_n·t_n

s.t. C1: 0 ≤ α_{n,m} ≤ 1, n ∈ N, m ∈ M
C2: [equation image: range constraint on the offloading coefficients β_n]
C3: t_n ≤ t_n^max, n ∈ N
C4: [equation image: wireless channel resource constraint with node channel resources B_k]
C5: [equation image: node computing resource constraint with node computing resources C_k]

where w_n denotes the priority weight of user n in the system per unit time in the model and t_n denotes the task processing time; C1 and C2 are the range constraints on the user channel selection coefficients and offloading coefficients, indicating that a user can be served by only one node while a node can serve multiple users; C3 is the user delay constraint, indicating that no processing time may exceed the maximum tolerated completion time; and C4 and C5 indicate that wireless channel resources and node computing resources are limited, with B_k and C_k denoting the channel resources and computing resources of a node, respectively.
Further, obtaining the global optimum through iteration based on the problem model of the optimization target and generating the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed includes:
Step 1: determining the population as a set of feasible solutions; the population is composed of chromosomes, the row of a chromosome represents the corresponding serving node, the number of chromosomes is M+2, the genes on a chromosome are binary, 0 denoting absence and 1 denoting presence of the gene, and the column of a gene corresponds to a user in the system, so that the length of each chromosome, i.e. the number of genes, is N;
Step 2: calculating the population fitness value, first calculating the fitness of each chromosome:
[equation image: chromosome fitness f_i]
where N is the total number of terminal devices and δ_ij indicates whether a gene is present, and then calculating the fitness of the whole population:
[equation image: population fitness F]
in the algorithm, the smaller the fitness value, the better the fitness, the fitter the population is for survival and the more likely it is to become the global optimum;
Step 3: selecting elite populations; to prepare the chromosomes for mating in the next step, populations are selected by the roulette-wheel method, and the probability that a population is selected is defined as
[equation image: selection probability P_i]
where F_i denotes the fitness value of the i-th population; the operation is repeated to obtain two populations, the chromosome with the best parent fitness is retained and placed behind the offspring in one population, and the chromosome in the same row is selected in the other population;
Step 4: adopting single-point crossover, randomly selecting a crossover point, and exchanging genes between the two chromosome segments selected in Step 3 before and after the crossover point to generate new offspring;
Step 5: randomly selecting mutation points on the chromosome and changing the gene values at those points;
Step 6: repeating Steps 2-5 until the number of evolutionary iterations is reached.
Further, the method further comprises: for the service requests issued by user nodes in the heterogeneous network per unit time, obtaining the minimum average processing time of all tasks in the system model while guaranteeing the transmission of priority users, and solving for the optimal solution corresponding to the minimum of the optimization target.
In a second aspect, the invention provides a mobile edge computing offloading delay optimization apparatus, comprising:
a first calculation unit, configured to comprehensively calculate and evaluate the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain priority weights;
a second calculation unit, configured to construct a task processing time model and calculate the task processing time;
an optimization-target problem model building unit, configured to establish a problem model of the optimization target using the priority weights and the task processing time;
and a generating unit, configured to obtain the global optimum through iteration based on the problem model of the optimization target and generate the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.
In a third aspect, the present invention provides a mobile edge computing offload delay optimization apparatus, including a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of the preceding claims.
In a fourth aspect, the present invention provides a computer-readable storage medium having a computer program stored thereon, characterized in that: the program when executed by a processor implements the steps of any of the methods described above.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a mobile edge computing unloading time delay optimization method, a device and a storage medium, which are used for optimizing a computing unloading strategy by making a reasonable task allocation decision and a resource allocation strategy, namely a hierarchical analysis model and an improved genetic algorithm, according to a resource allocation optimization theory under the application of mobile edge computing and combining a heterogeneous network model architecture, effectively avoiding network congestion and data accumulation, fully utilizing computing resources of a heterogeneous edge server while ensuring the transmission of priority users, and effectively optimizing and reducing the system time delay.
Drawings
FIG. 1 is a diagram of a heterogeneous network system model;
FIG. 2 is a diagram of a hierarchical analysis model;
FIG. 3 is a schematic diagram of a genetic algorithm population;
FIG. 4 is a flow chart of a genetic algorithm;
FIG. 5 is a population fitness distribution plot;
FIG. 6 is a population average processing time evolution curve;
FIG. 7 is a graph of average task processing time versus number of users;
FIG. 8 is a graph of average task processing time versus number of task cycles;
FIG. 9 is a graph showing the average task processing time as a function of the average data amount of the task.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1
The invention provides a mobile edge computing offloading delay optimization method, comprising the following steps:
comprehensively calculating and evaluating the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain priority weights;
constructing a task processing time model and calculating the task processing time;
establishing a problem model of the optimization target using the priority weights and the task processing time;
and, based on the problem model of the optimization target, obtaining the global optimum through iteration and generating the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.
The content of the above embodiment is described below with reference to a preferred embodiment.
As shown in fig. 1, the heterogeneous network system model adopted in this embodiment consists of N user terminal devices, M small-cell base stations (SBS), 1 macro-cell base station (MBS) and 1 cloud computing center. MEC servers are deployed on the SBSs and the MBS and can provide computing services for users, while the cloud computing center has larger-scale computing equipment and can provide strong computing power and centralized operation and maintenance management, offering rich network and computing services to users. The set of terminal devices is denoted N = {1, 2, ..., N}, n ∈ N, where N is the total number of terminal devices and n is the number of each user device; the set of small-cell base stations is denoted M = {1, 2, ..., M}, m ∈ M, where M is the total number of SBSs and m is the number of each SBS.
Assume that each device has a delay-sensitive or computation-intensive task to complete per unit time, and the computation is offloaded to a server on a nearby base station or to the cloud computing center. The task τ_n of device n is expressed as:

τ_n = (d_n, λ_n, t_n^max)

where d_n denotes the task data amount, λ_n the number of CPU cycles required to execute the task, and t_n^max the maximum completion time tolerated by the task.
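For illustration, a minimal Python sketch of this task tuple follows; the class and field names are hypothetical and merely mirror the symbols d_n, λ_n and t_n^max defined above, and the numeric values are taken from the simulation section later in the description.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Task tuple tau_n = (d_n, lambda_n, t_n_max) of terminal device n."""
    d_n: float        # task data amount (bits)
    lambda_n: float   # CPU cycles required to execute the task
    t_n_max: float    # maximum tolerated completion time (s)

# Example: one task per device for N devices (illustrative t_n_max).
N, M = 4, 2  # terminal devices and small-cell base stations
tasks = [Task(d_n=18e6 * 8, lambda_n=0.5e9, t_n_max=1.0) for _ in range(N)]
print(tasks[0])
```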
For the network model used by the user equipment, wireless communication is assumed to use a Time Division Multiple Access (TDMA) scheme, which allows multiple user devices to use the same frequency in different time slots, so that each device can offload data in the time slots allocated by the base station. Considering the heterogeneity of device access, terminal devices connect to the SBS or the MBS wirelessly, while the SBS connects to the MBS and the MBS connects to the cloud computing center over wired links. To save cost, no wired connection is provided between the SBS and the cloud computing center. When the computing power of a terminal device is insufficient, the device can offload its task wirelessly to the MEC server on a nearby SBS, which returns the result data to the originating device after computation; or the data is forwarded by the SBS to the MBS, computed by the MEC server on the MBS, and returned along the original path; or the data is forwarded twice, by the SBS and then the MBS, to the more powerful cloud computing center for computation, after which the processing result is returned to the device. In general, the amount of data sent by a user is much larger than the amount of result data returned over the downlink, so the delay caused by downlink data can be ignored when considering the delay.
Based on the above system model and related theory, a heterogeneous-network-oriented mobile edge computing offloading delay optimization method is proposed, comprising the following four steps.
Step one: comprehensively calculate and evaluate the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain the weights.
To guarantee the Quality of Experience (QoE) of different users, the priority of all users appearing in the system per unit time is scored with an Analytic Hierarchy Process (AHP) model, which can solve complex decision problems with multiple indexes. The procedure consists of the following 5 steps:
(1) Establishing the hierarchical structure.
The analytic hierarchy model is shown in fig. 2, and the hierarchical structure is divided into three layers: 1. the target layer, i.e. the evaluation target, which is to distinguish task priorities; 2. the criterion layer, whose evaluation indexes are the three parameters contained in task τ_n = (d_n, λ_n, t_n^max), n ∈ N: index S1 corresponds to the task data amount d_n, index S2 to the number of CPU cycles λ_n required by the server to process the task, and index S3 to the maximum completion time t_n^max tolerated by the task; 3. the scheme layer, i.e. the alternatives, which are the tasks to be offloaded in the system per unit time. Through the model, the proportion of each task's priority relative to all tasks is obtained; the larger the proportion, the higher the priority, and vice versa.
(2) Constructing the judgment matrices.
The importance of the factors at the same layer is compared pairwise with respect to a criterion in the layer above to construct a judgment matrix. The weights of the factors are difficult to give directly in the matrix, so the relative importance of every pair of factors is given instead, represented by a scale a, with the scale value judgment standard given in Table 1. This step first constructs the target-layer–criterion-layer judgment matrix A = (a_ij)_{3×3}, and then constructs the criterion-layer–scheme-layer judgment matrices B_k = (a_ij)_{N×N}, k ∈ {S1, S2, S3}, corresponding to the 3 indexes, where a_ij > 0 and a_ji = 1/a_ij.
TABLE 1 Scale value judgment standards
[table image: scale value judgment standards — not reproduced]
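As a sketch of how such a reciprocal judgment matrix could be assembled in Python (the function name is illustrative and the scale values below are placeholders, not the patent's Table 1):

```python
import numpy as np

def judgment_matrix(pairwise_scale):
    """Build a reciprocal AHP judgment matrix from upper-triangular
    pairwise scale values a_ij (importance of factor i over factor j)."""
    n = pairwise_scale.shape[0]
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = pairwise_scale[i, j]
            A[j, i] = 1.0 / pairwise_scale[i, j]   # reciprocal property a_ji = 1/a_ij
    return A

# Illustrative comparison of the three indexes S1, S2, S3.
scale = np.array([[1.0, 3.0, 5.0],
                  [0.0, 1.0, 2.0],
                  [0.0, 0.0, 1.0]])
A = judgment_matrix(scale)
print(A)
```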
(3) Consistency check.
After a judgment matrix is constructed, whether it deviates too far from a consistent matrix must be verified quantitatively. The consistency check consists of two steps:
1) Calculate the consistency index CI:

CI = (λ_max − n) / (n − 1)

where λ_max denotes the maximum eigenvalue of the judgment matrix and n denotes the order of the judgment matrix, i.e. the number of indexes.
2) Calculate the consistency ratio CR:

CR = CI / RI

where RI is the average random consistency index, with the correspondence given in Table 2:
TABLE 2 Correspondence between RI and n
[table image: RI values for each order n — not reproduced]
If CR is less than 0.1, the consistency check is passed; otherwise the elements of the judgment matrix must be modified until the check is passed. Here, consistency checks are performed on the target-layer–criterion-layer judgment matrix A and the criterion-layer–scheme-layer judgment matrices B, respectively.
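A minimal Python sketch of this check follows. The RI lookup values are the commonly used averages for orders 1-9 and are an assumption, since the patent's own Table 2 image is not reproduced.

```python
import numpy as np

# Commonly used average random consistency indexes RI for matrix orders 1..9
# (assumed values; the patent's Table 2 is not reproduced here).
RI_TABLE = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
            6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    """CI = (lambda_max - n)/(n - 1), CR = CI / RI for judgment matrix A."""
    n = A.shape[0]
    lambda_max = np.max(np.real(np.linalg.eigvals(A)))
    ci = (lambda_max - n) / (n - 1)
    cr = ci / RI_TABLE[n]
    return ci, cr

ci, cr = consistency_ratio(np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]))
print(f"CI={ci:.4f}, CR={cr:.4f}, passed={cr < 0.1}")
```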
(4) Calculating the weight of each index. The weights are obtained with the arithmetic mean method in the following three steps:
1) Normalize the judgment matrix by columns.
2) Sum the elements of each row of the column-normalized matrix.
3) Divide each element of the resulting vector by n to obtain the weight vector. Taking the criterion-layer–scheme-layer judgment matrix B_{S1} = (a_ij)_{N×N} as an example, the resulting weight vector is W_{S1} = [w_1, w_2, ..., w_N]^T, with each component expressed as:

w_i = (1/N) · Σ_{j=1}^{N} ( a_ij / Σ_{k=1}^{N} a_kj ), i = 1, 2, ..., N

Following the same calculation, the weight vectors W_{S2} and W_{S3} of the other two criterion-layer–scheme-layer judgment matrices can be obtained, as well as the weight vector Λ = [λ_{S1}, λ_{S2}, λ_{S3}]^T of the target-layer–criterion-layer judgment matrix A = (a_ij)_{3×3}.
(5) Calculating the task priorities.
The weight vectors of all criterion-layer–scheme-layer judgment matrices form a new matrix Δ = (W_{S1} W_{S2} W_{S3}), and the task priority weight vector W is then calculated as:

W = Δ·Λ = [w_1, w_2, ..., w_N]^T

where w_n denotes the priority weight of user n in the system per unit time in the model; the larger w_n, the higher the task priority and the better the QoE.
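The arithmetic-mean weighting and the synthesis W = ΔΛ can be sketched in Python as follows; the function names are illustrative, and the judgment matrices below are random reciprocal placeholders rather than the patent's data.

```python
import numpy as np

def arithmetic_mean_weights(B):
    """Weights of a judgment matrix: normalize by columns, sum rows, divide by order."""
    col_normalized = B / B.sum(axis=0, keepdims=True)
    return col_normalized.sum(axis=1) / B.shape[0]

def task_priority_weights(A, B_s1, B_s2, B_s3):
    """Synthesize task priorities W = Delta @ Lambda (index weights times criterion weights)."""
    Lam = arithmetic_mean_weights(A)                       # target -> criterion weights
    Delta = np.column_stack([arithmetic_mean_weights(B) for B in (B_s1, B_s2, B_s3)])
    return Delta @ Lam                                     # one priority weight w_n per task

# Illustrative 3x3 criterion matrix and N x N scheme matrices for N = 4 tasks.
rng = np.random.default_rng(0)
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
Bs = []
for _ in range(3):
    upper = rng.integers(1, 9, size=(4, 4)).astype(float)
    B = np.triu(upper, 1) + np.tril(1.0 / upper.T, -1) + np.eye(4)
    Bs.append(B)
W = task_priority_weights(A, *Bs)
print(W, W.sum())  # priority weights of the 4 tasks; they sum to (approximately) 1
```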
Step two: construct the task processing time model, comprising the channel transmission time and server execution time models.
Assume that in the scenario of the invention every task needs to be offloaded for computation to an MEC server or to the cloud computing center. Define 0 ≤ k_n ≤ 1, n ∈ N, as the percentage of time slots occupied by user n when offloading to the base station. Define the channel selection coefficient α_{n,m} ∈ {0, 1}, n ∈ N, m ∈ M, representing the channel selection decision of user n, where n is the user terminal number, N the set of terminal devices, m the SBS number and M the set of SBSs. When user n transmits its task to SBS m, α_{n,m} = 1; when the user transmits its task to the MBS, α_n = 1. Because a task cannot be split, only one of the SBSs or the MBS can be selected over the wireless link, so the channel selection coefficients of user n satisfy the constraint:

Σ_{m=1}^{M} α_{n,m} + α_n = 1

where M is the total number of SBSs.
(1) Channel transmission time
When α_{n,m} = 1, consider the delay of user n accessing SBS m over the wireless channel. Assuming all SBSs have the same bandwidth B_S, the channel capacity C_S of the SBS is obtained from the Shannon formula:

C_S = B_S · log₂(1 + P_S·h_n^S / σ²)

where h_n^S denotes the channel gain between the SBS and user n, P_S the SBS forward-link transmission power and σ² the Gaussian white noise power of the wireless channel. The transmission rate r_n^S between user n and the SBS is expressed as:

r_n^S = k_n · C_S

where k_n is the percentage of time slots occupied by user n when offloading to the base station, 0 ≤ k_n ≤ 1, n ∈ N. The wireless transmission delay t_n^{S,tr} on the uplink to the SBS is then expressed as:

t_n^{S,tr} = d_n / (k_n · C_S)
Similarly, consider the delay of user n directly accessing the MBS. Assuming the MBS bandwidth is B_M, the channel capacity C_M of the MBS is obtained from the Shannon formula:

C_M = B_M · log₂(1 + P_M·h_n^M / σ²)

where h_n^M denotes the channel gain between the MBS and user n and P_M the MBS forward-link transmission power. The transmission rate r_n^M between user n and the MBS is expressed as:

r_n^M = k_n · C_M

and the wireless transmission delay t_n^{M,tr} on the uplink to the MBS is expressed as:

t_n^{M,tr} = d_n / (k_n · C_M)
Next, consider the delay of the task over the wired channels. Because the SBS is connected to the MBS, and the MBS to the cloud computing center, by wired media such as optical fiber, signal interference is greatly reduced compared with wireless transmission. The model therefore assumes that the transmission rate in the wired channel is a relatively large value and that the wired delay is proportional to the task data amount. The wired transmission delay t_n^{S→M} incurred when the task of user n is transmitted from the SBS to the MBS is expressed as:

t_n^{S→M} = ζ_S · d_n

where ζ_S is a constant representing the time required by the SBS to transmit a unit amount of data. Likewise, the wired transmission delay t_n^{M→C} incurred when the task of user n is transmitted from the MBS to the cloud computing center is expressed as:

t_n^{M→C} = ζ_M · d_n

where ζ_M is a constant representing the time required by the MBS to transmit a unit amount of data.
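A short Python sketch of these transmission-delay formulas follows; all numeric parameter values are illustrative assumptions, not values from the patent.

```python
import math

def shannon_capacity(bandwidth_hz, tx_power_w, channel_gain, noise_power_w):
    """Channel capacity C = B * log2(1 + P*h / sigma^2)."""
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_power_w)

def wireless_uplink_delay(d_n_bits, k_n, capacity_bps):
    """Wireless transmission delay d_n / (k_n * C) under a TDMA slot share k_n."""
    return d_n_bits / (k_n * capacity_bps)

def wired_delay(d_n_bits, zeta_per_bit):
    """Wired transmission delay, proportional to the task data amount: zeta * d_n."""
    return zeta_per_bit * d_n_bits

# Illustrative numbers: 20 MHz SBS bandwidth, a small channel gain, low noise power.
C_S = shannon_capacity(20e6, 1.0, 1e-7, 1e-13)
t_tr = wireless_uplink_delay(18e6 * 8, k_n=0.2, capacity_bps=C_S)
t_wired = wired_delay(18e6 * 8, zeta_per_bit=1e-9)
print(f"C_S={C_S/1e6:.1f} Mbit/s, uplink delay={t_tr:.3f} s, wired delay={t_wired:.3f} s")
```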
(2) Server execution time
Define the offloading coefficients β_n^{SB_m}, β_n^{MB}, β_n^{CE} ∈ {0, 1}, representing the offloading execution decision of user n, where SB_m, MB and CE denote the servers deployed on SBS m, the MBS and the cloud computing center, respectively. A coefficient value of 1 indicates that the task is offloaded to and executed on the server of the corresponding node; a value of 0 indicates that it is not executed on the server of that node. Because a task cannot be split, it can only be offloaded to one server and executed there in full, and every task is offloaded and executed successfully, so the following constraint must be satisfied for each user n:

Σ_{m=1}^{M} β_n^{SB_m} + β_n^{MB} + β_n^{CE} = 1

The execution times of task τ_n on the servers are expressed as:

t_n^{SB_m,exe} = λ_n / f_n^{SB_m},  t_n^{MB,exe} = λ_n / f_n^{MB},  t_n^{CE,exe} = λ_n / f_n^{CE}

where t_n^{SB_m,exe}, t_n^{MB,exe} and t_n^{CE,exe} denote the server execution times of task τ_n on SBS m, the MBS and the cloud computing center respectively, λ_n the number of CPU cycles required to execute the task, and f_n^{SB_m}, f_n^{MB} and f_n^{CE} the CPU frequencies allocated to user n by the corresponding servers on SBS m, the MBS and the cloud computing center.
(3) Task processing time
According to the different task transmission paths and execution endpoints, the model provides 5 different task offloading modes for a known task τ_n = (d_n, λ_n, t_n^max), where n denotes the user number and m the SBS number.
1) When the channel selection coefficient α_{n,m} = 1 and the offloading coefficient β_n^{SB_m} = 1, the task is offloaded directly over the wireless link onto SBS m. This mode is named mode 1, and its task processing time is:

t_n^1 = d_n/(k_n·C_S) + λ_n/f_n^{SB_m}

where the processing time consists of two parts: the wireless uplink transmission delay d_n/(k_n·C_S) from the terminal to the SBS and the execution time λ_n/f_n^{SB_m} on the MEC server of SBS m.
2) When α_{n,m} = 1 and β_n^{MB} = 1, the task reaches the SBS over the wireless link, then reaches the MBS over the wired link, and is finally offloaded onto the MBS. This mode is named mode 2, and its task processing time is:

t_n^2 = d_n/(k_n·C_S) + ζ_S·d_n + λ_n/f_n^{MB}

where the processing time consists of three parts: the wireless uplink transmission delay from the terminal to the SBS, the wired transmission delay ζ_S·d_n for forwarding the task of user n from the SBS to the MBS, and the execution time λ_n/f_n^{MB} on the MEC server of the MBS.
3) When α_n = 1 and β_n^{MB} = 1, the task is offloaded directly over the wireless link onto the MBS. This mode is named mode 3, and its task processing time is:

t_n^3 = d_n/(k_n·C_M) + λ_n/f_n^{MB}

where the processing time consists of two parts: the wireless uplink transmission delay d_n/(k_n·C_M) from the terminal to the MBS and the execution time λ_n/f_n^{MB} on the MEC server of the MBS.
4) When α_n = 1 and β_n^{CE} = 1, the task reaches the MBS over the wireless link, then reaches the cloud computing center over the wired link, and is finally offloaded onto the cloud computing center. This mode is named mode 4, and its task processing time is:

t_n^4 = d_n/(k_n·C_M) + ζ_M·d_n + λ_n/f_n^{CE}

where the processing time consists of three parts: the wireless uplink transmission delay from the terminal to the MBS, the wired transmission delay ζ_M·d_n for forwarding the task of user n from the MBS to the cloud computing center, and the execution time λ_n/f_n^{CE} on the MEC server of the cloud computing center.
5) When α_{n,m} = 1 and β_n^{CE} = 1, the task reaches the SBS over the wireless link, then reaches the MBS over the wired link, is forwarded over the wired link to the cloud computing center, and is finally offloaded onto the cloud computing center. This mode is named mode 5, and its task processing time is:

t_n^5 = d_n/(k_n·C_S) + ζ_S·d_n + ζ_M·d_n + λ_n/f_n^{CE}

where the processing time consists of four parts: the wireless uplink transmission delay from the terminal to the SBS, the wired transmission delay for forwarding the task of user n from the SBS to the MBS, the wired transmission delay for forwarding it from the MBS to the cloud computing center, and the execution time on the MEC server of the cloud computing center.
To ensure the uniqueness of the mode selection, combining the channel selection coefficients and the offloading coefficients, the task processing time obtained from the above 5 modes can be summarized as:

t_n = Σ_{m=1}^{M} α_{n,m}·(β_n^{SB_m}·t_n^1 + β_n^{MB}·t_n^2 + β_n^{CE}·t_n^5) + α_n·(β_n^{MB}·t_n^3 + β_n^{CE}·t_n^4)

Because the channel selection coefficients satisfy the constraint Σ_{m=1}^{M} α_{n,m} + α_n = 1 and the offloading coefficients satisfy the constraint Σ_{m=1}^{M} β_n^{SB_m} + β_n^{MB} + β_n^{CE} = 1, the above expression can be expanded and simplified into a single task processing time expression t_n in terms of d_n, k_n, C_S, C_M, ζ_S, ζ_M, λ_n and the allocated CPU frequencies.
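A Python sketch of the five mode processing times follows; the function and parameter names are illustrative and simply encode the expressions t_n^1 to t_n^5 above, and the example values are placeholders.

```python
def task_processing_time(mode, d_n, lambda_n, k_n, C_S, C_M,
                         zeta_S, zeta_M, f_SBm, f_MB, f_CE):
    """Task processing time t_n for offloading modes 1..5 of the model."""
    if mode == 1:    # wireless to SBS m, executed on SBS m
        return d_n / (k_n * C_S) + lambda_n / f_SBm
    if mode == 2:    # wireless to SBS, wired to MBS, executed on MBS
        return d_n / (k_n * C_S) + zeta_S * d_n + lambda_n / f_MB
    if mode == 3:    # wireless directly to MBS, executed on MBS
        return d_n / (k_n * C_M) + lambda_n / f_MB
    if mode == 4:    # wireless to MBS, wired to cloud, executed in the cloud
        return d_n / (k_n * C_M) + zeta_M * d_n + lambda_n / f_CE
    if mode == 5:    # wireless to SBS, wired to MBS, wired to cloud, executed in the cloud
        return d_n / (k_n * C_S) + zeta_S * d_n + zeta_M * d_n + lambda_n / f_CE
    raise ValueError("mode must be 1..5")

# Example with illustrative parameter values.
t = task_processing_time(2, d_n=18e6 * 8, lambda_n=0.5e9, k_n=0.2,
                         C_S=4e8, C_M=6e8, zeta_S=1e-9, zeta_M=1e-9,
                         f_SBm=5e9, f_MB=10e9, f_CE=25e9)
print(f"mode-2 processing time: {t:.3f} s")
```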
Step three: establish the problem model of the optimization target by combining the analytic hierarchy process model with the task processing time model.
The problem to be solved by the invention is the following: for the service requests issued by user nodes in the heterogeneous network per unit time, obtain the minimum average processing time of all tasks in the system model while guaranteeing the transmission of priority users, and solve for the optimal solution corresponding to the minimum of the optimization target, i.e. the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed. The problem function is expressed as:

min (1/N)·Σ_{n=1}^{N} w_n·t_n

s.t. C1: 0 ≤ α_{n,m} ≤ 1, n ∈ N, m ∈ M
C2: [equation image: range constraint on the offloading coefficients β_n]
C3: t_n ≤ t_n^max, n ∈ N
C4: [equation image: wireless channel resource constraint with node channel resources B_k]
C5: [equation image: node computing resource constraint with node computing resources C_k]

where C1 and C2 are the range constraints on the user channel selection coefficients and offloading coefficients, indicating that a user can be served by only one node while a node can serve multiple users; C3 is the user delay constraint, indicating that no processing time may exceed the maximum tolerated completion time; and C4 and C5 indicate that the wireless channel resources and the node computing resources are limited, with B_k and C_k denoting the channel resources and computing resources of a node, respectively.
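A minimal sketch of evaluating this weighted-average objective for a candidate offloading decision is given below; it assumes the priority weights w_n and per-user processing times t_n have already been computed with the functions sketched earlier, and the feasibility check shown covers only the delay constraint C3.

```python
import numpy as np

def weighted_average_processing_time(w, t):
    """Optimization objective: (1/N) * sum_n w_n * t_n."""
    w, t = np.asarray(w), np.asarray(t)
    return float(np.mean(w * t))

def satisfies_delay_constraint(t, t_max):
    """Constraint C3: every task finishes within its tolerated completion time."""
    return bool(np.all(np.asarray(t) <= np.asarray(t_max)))

# Illustrative values for N = 4 users.
w = [0.4, 0.3, 0.2, 0.1]          # AHP priority weights
t = [0.8, 1.2, 0.9, 1.5]          # processing times under some (alpha, beta) decision
t_max = [2.0, 2.0, 2.0, 2.0]
print(weighted_average_processing_time(w, t), satisfies_delay_constraint(t, t_max))
```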
Step four: solve the resulting load-balancing scheduling problem with an improved genetic algorithm, obtain the global optimum through iteration, and generate the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.
This problem can be regarded as a load-balancing scheduling problem, which is non-convex, i.e. many locally optimal solutions occur over the global range. To find the globally optimal solution, the invention solves it with an improved Genetic Algorithm (GA); compared with the traditional genetic algorithm, its fitness function takes into account the priority transmission of tasks according to their weights. The parameter set of the problem includes: the task priority weight matrix, the task cycle-count matrix, the task data amount matrix, the node processing speed matrix and the node processing time matrix. The algorithm flow is as follows:
Step 1: population initialization. Determine the population as a set of feasible solutions and set the population size to 100. A population is composed of chromosomes; the row of a chromosome represents the corresponding serving node, and the number of chromosomes is M+2. The genes on a chromosome are binary, 0 denoting absence and 1 denoting presence of the gene, and the column of a gene corresponds to a user in the system, so the length of each chromosome, i.e. the number of genes, is N. A schematic of the population is shown in fig. 3.
Step 2: calculate the population fitness value. The fitness of each chromosome is calculated first:

[equation image: chromosome fitness f_i, computed from the weighted processing times of the users whose genes are present on chromosome i]

where N is the total number of terminal devices and δ_ij indicates whether the gene is present. The fitness of the whole population is then calculated:

[equation image: population fitness F, aggregating the chromosome fitness values]

In the algorithm, the smaller the fitness value, the better the fitness, the fitter the population is for survival and the more likely it is to become the global optimum.
Step 3: selection. Elite populations are selected in order to prepare the chromosomes for mating in the next step. Populations are selected by the roulette-wheel method, and the probability that a population is selected is defined as

[equation image: selection probability P_i as a function of the fitness value F_i]

where F_i denotes the fitness value of the i-th population. The operation is repeated to obtain two populations; the chromosome with the best parent fitness is retained and placed behind the offspring in one population, and the chromosome in the same row is selected in the other population.
Step 4: crossover. Single-point crossover is adopted: a crossover point is randomly selected, and genes are exchanged between the two chromosome segments selected in Step 3 before and after the crossover point to generate new offspring; the crossover probability is set to 0.6.
Step 5: mutation. Gene positions on the chromosome are selected at random and the gene values there are changed; to prevent the algorithm from converging too quickly to a locally optimal solution, the mutation probability is set to 0.1.
Step 6: repeat Steps 2-5 until the number of evolutionary iterations is reached.
The above algorithm flow is shown in fig. 4. The feasible solution corresponding to the population with the minimum final fitness value is the global optimal solution, i.e. the channel selection coefficient and offloading coefficient decision for each user task that minimizes the average task processing time of the system while guaranteeing the transmission of priority users.
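The following Python sketch illustrates the overall flow (binary chromosome matrix, roulette-wheel selection, single-point crossover with probability 0.6, mutation with probability 0.1). The fitness used here is the weighted average processing time of the encoded assignment, a stand-in consistent with the optimization objective since the patent's exact fitness formula is not reproduced; weighting the roulette wheel by 1/F_i to favour smaller fitness is likewise a design choice, and all helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_individual(num_nodes, num_users):
    """Binary chromosome matrix: one serving node (row) per user (column)."""
    ind = np.zeros((num_nodes, num_users), dtype=int)
    ind[rng.integers(num_nodes, size=num_users), np.arange(num_users)] = 1
    return ind

def fitness(ind, w, t_node_user):
    """Weighted average processing time of the assignment (smaller is better)."""
    per_user_time = (ind * t_node_user).sum(axis=0)
    return float(np.mean(w * per_user_time))

def roulette_pick(pop, fits):
    """Roulette-wheel selection; pressure favours smaller fitness via 1/F_i weights."""
    inv = 1.0 / np.asarray(fits)
    return pop[rng.choice(len(pop), p=inv / inv.sum())]

def crossover(a, b, p=0.6):
    """Single-point crossover on the user axis with probability p."""
    if rng.random() < p:
        cut = rng.integers(1, a.shape[1])
        a, b = (np.hstack([a[:, :cut], b[:, cut:]]),
                np.hstack([b[:, :cut], a[:, cut:]]))
    return a.copy(), b.copy()

def mutate(ind, p=0.1):
    """With probability p, reassign one randomly chosen user to a random node."""
    if rng.random() < p:
        user = rng.integers(ind.shape[1])
        ind[:, user] = 0
        ind[rng.integers(ind.shape[0]), user] = 1
    return ind

def improved_ga(w, t_node_user, pop_size=100, generations=80):
    num_nodes, num_users = t_node_user.shape
    pop = [random_individual(num_nodes, num_users) for _ in range(pop_size)]
    best = min(pop, key=lambda ind: fitness(ind, w, t_node_user))
    for _ in range(generations):
        fits = [fitness(ind, w, t_node_user) for ind in pop]
        elite = pop[int(np.argmin(fits))]          # keep the best parent
        children = [elite.copy()]
        while len(children) < pop_size:
            a, b = roulette_pick(pop, fits), roulette_pick(pop, fits)
            a, b = crossover(a, b)
            children += [mutate(a), mutate(b)]
        pop = children[:pop_size]
        cand = min(pop, key=lambda ind: fitness(ind, w, t_node_user))
        if fitness(cand, w, t_node_user) < fitness(best, w, t_node_user):
            best = cand
    return best, fitness(best, w, t_node_user)

# Illustrative instance: M = 2 SBSs -> M + 2 = 4 nodes, N = 6 users.
M, N = 2, 6
w = np.full(N, 1.0 / N)                            # priority weights (uniform here)
t_node_user = rng.uniform(0.5, 3.0, (M + 2, N))    # processing time of user j on node i
best, best_fit = improved_ga(w, t_node_user, pop_size=30, generations=40)
print("best weighted average processing time:", round(best_fit, 3))
```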
To further illustrate the effectiveness of the proposed scheme, simulations were carried out; the simulation tools used were Python and MATLAB. In the simulation, the number of user terminals is set to N = 1000 and the total number of SBSs to M = 8; the total frequency of the MEC servers on all SBSs is f^SB (value not reproduced here), the total frequency of the MEC server on the MBS is f^MB = 80 GHz, and the total frequency of the cloud computing center server is f^CE = 200 GHz. The number of CPU cycles of the tasks to be offloaded follows a normal distribution with mean 0.5 Gigacycles, and the task data amount follows a normal distribution with mean 18 MB. Fig. 5 shows the distribution of the fitness values of the 100 populations as the number of iterations changes; it can be seen that the chromosome fitness becomes better and better as the iterations increase, and the total task processing time gradually converges. Fig. 6 shows the evolution curve of the population average processing time; as the number of iterations increases, the average task processing time decreases, and the system performance becomes stable at about 80 iterations. To verify the superiority of the proposed scheme in mobile edge computing delay optimization, it is compared with two other heuristic algorithms, Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), by varying the number of users, the average number of task cycles and the task data amount with a controlled-variable method; the simulation comparisons are shown in fig. 7, fig. 8 and fig. 9, from which it can be seen that the genetic algorithm proposed herein outperforms the two traditional heuristic algorithms. In addition, to verify the necessity of handling task priorities, a traditional genetic algorithm that does not consider task priority is compared with the improved genetic algorithm. Fig. 7 shows that as the number of users increases, the performance of the improved genetic algorithm proposed by the invention is always better than that of the traditional genetic algorithm; when the number of users exceeds 1400, the rising trend of the users' average task processing time becomes faster and the performance begins to deteriorate, which shows that the proposed algorithm exploits the advantages of edge computing in offloading task delay optimization when handling large-scale user groups.
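The task statistics used in this simulation setup can be reproduced with a short sketch like the following; the standard deviations and the clipping bounds are assumptions, since only the means are stated above.

```python
import numpy as np

rng = np.random.default_rng(42)
N_USERS = 1000

# Task parameters drawn as in the simulation setup: normally distributed
# CPU cycle counts (mean 0.5 Gigacycles) and data amounts (mean 18 MB).
# Standard deviations are assumed; only the means are given in the text.
cycles = np.clip(rng.normal(0.5e9, 0.1e9, N_USERS), 1e8, None)
data_mb = np.clip(rng.normal(18.0, 4.0, N_USERS), 1.0, None)

print(f"mean cycles = {cycles.mean()/1e9:.2f} Gcycles, "
      f"mean data = {data_mb.mean():.1f} MB")
```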
Example 2
This embodiment provides a mobile edge computing offloading delay optimization apparatus, comprising:
a first calculation unit, configured to comprehensively calculate and evaluate the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain priority weights;
a second calculation unit, configured to construct a task processing time model and calculate the task processing time;
an optimization-target problem model building unit, configured to establish a problem model of the optimization target using the priority weights and the task processing time;
and a generating unit, configured to obtain the global optimum through iteration based on the problem model of the optimization target and generate the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.
Example 3
This embodiment provides a mobile edge computing offloading delay optimization device comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate according to the instructions to perform the steps of the method of Embodiment 1.
Example 4
This embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the method of Embodiment 1.
The above description is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (9)

1. A mobile edge computing offloading delay optimization method, characterized by comprising the following steps:
comprehensively calculating and evaluating the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain priority weights;
constructing a task processing time model and calculating the task processing time;
establishing a problem model of the optimization target using the priority weights and the task processing time;
and, based on the problem model of the optimization target, obtaining the global optimum through iteration and generating the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.
2. The mobile edge computing offloading delay optimization method of claim 1, wherein comprehensively calculating and evaluating the task delay response priority of each user in the communication system using the analytic hierarchy process model to obtain the priority weights comprises:
establishing the analytic hierarchy process model;
comparing the importance of the factors at the same level with respect to a criterion at the level above to construct judgment matrices;
after a judgment matrix is constructed, verifying quantitatively whether it deviates too far from a consistent matrix;
calculating the weight of each index, the weights being obtained with the arithmetic mean method;
and forming a new matrix from the weight vectors of all criterion-layer–scheme-layer judgment matrices and calculating the task priority weight vector.
3. The mobile edge computing offloading delay optimization method of claim 1, wherein the task processing time model is constructed and the task processing time is calculated by the following formula:
[equation image: task processing time t_n combining the five offloading modes]
where t_n denotes the task processing time, d_n the task data amount, α the channel selection coefficient, k_n the percentage of time slots occupied by user n when offloading to the base station, C_S and C_M the channel capacities of the SBS and the MBS respectively, β the offloading coefficient, ζ_S and ζ_M the time required by the SBS and the MBS respectively to transmit a unit amount of data, and f_n^{SB_m}, f_n^{MB} and f_n^{CE} the CPU frequencies allocated to user n by the corresponding servers on SBS m, the MBS and the cloud computing center; n denotes the user number and m denotes the SBS number.
4. The mobile edge computing offloading delay optimization method of claim 1, wherein the problem model of the optimization target is established using the priority weights and the task processing time, with the following formulation:

min (1/N)·Σ_{n=1}^{N} w_n·t_n

s.t. C1: 0 ≤ α_{n,m} ≤ 1, n ∈ N, m ∈ M
C2: [equation image: range constraint on the offloading coefficients β_n]
C3: t_n ≤ t_n^max, n ∈ N
C4: [equation image: wireless channel resource constraint with node channel resources B_k]
C5: [equation image: node computing resource constraint with node computing resources C_k]

where w_n denotes the priority weight of user n in the system per unit time in the model and t_n denotes the task processing time; C1 and C2 are the range constraints on the user channel selection coefficients and offloading coefficients, indicating that a user can be served by only one node while a node can serve multiple users; C3 is the user delay constraint, indicating that no processing time may exceed the maximum tolerated completion time; and C4 and C5 indicate that wireless channel resources and node computing resources are limited, with B_k and C_k denoting the channel resources and computing resources of a node, respectively.
5. The mobile edge computing offloading delay optimization method of claim 1, wherein obtaining the global optimum through iteration based on the problem model of the optimization target and generating the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed comprises:
Step 1: determining the population as a set of feasible solutions; the population is composed of chromosomes, the row of a chromosome represents the corresponding serving node, the number of chromosomes is M+2, the genes on a chromosome are binary, 0 denoting absence and 1 denoting presence of the gene, and the column of a gene corresponds to a user in the system, so that the length of each chromosome, i.e. the number of genes, is N;
Step 2: calculating the population fitness value, first calculating the fitness of each chromosome:
[equation image: chromosome fitness f_i]
where N is the total number of terminal devices and δ_ij indicates whether a gene is present, and then calculating the fitness of the whole population:
[equation image: population fitness F]
in the algorithm, the smaller the fitness value, the better the fitness, the fitter the population is for survival and the more likely it is to become the global optimum;
Step 3: selecting elite populations; to prepare the chromosomes for mating in the next step, populations are selected by the roulette-wheel method, and the probability that a population is selected is defined as
[equation image: selection probability P_i]
where F_i denotes the fitness value of the i-th population; the operation is repeated to obtain two populations, the chromosome with the best parent fitness is retained and placed behind the offspring in one population, and the chromosome in the same row is selected in the other population;
Step 4: adopting single-point crossover, randomly selecting a crossover point, and exchanging genes between the two chromosome segments selected in Step 3 before and after the crossover point to generate new offspring;
Step 5: randomly selecting mutation points on the chromosome and changing the gene values at those points;
Step 6: repeating Steps 2-5 until the number of evolutionary iterations is reached.
6. The mobile edge computing offloading delay optimization method of claim 1, further comprising: for the service requests issued by user nodes in the heterogeneous network per unit time, obtaining the minimum average processing time of all tasks in the system model while guaranteeing the transmission of priority users, and solving for the optimal solution corresponding to the minimum of the optimization target.
7. A mobile edge computing offloading delay optimization apparatus, characterized by comprising:
a first calculation unit, configured to comprehensively calculate and evaluate the task delay response priority of each user in the communication system using an analytic hierarchy process model to obtain priority weights;
a second calculation unit, configured to construct a task processing time model and calculate the task processing time;
an optimization-target problem model building unit, configured to establish a problem model of the optimization target using the priority weights and the task processing time;
and a generating unit, configured to obtain the global optimum through iteration based on the problem model of the optimization target and generate the optimal combination of channel selection coefficient and offloading coefficient parameters adopted when each user's task is processed.
8. A mobile edge computing offloading delay optimization device, characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the program when executed by a processor implements the steps of the method of any one of claims 1 to 6.
CN202210148031.6A 2022-02-17 2022-02-17 Mobile edge calculation unloading time delay optimization method and device and storage medium Pending CN114599096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210148031.6A CN114599096A (en) 2022-02-17 2022-02-17 Mobile edge calculation unloading time delay optimization method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210148031.6A CN114599096A (en) 2022-02-17 2022-02-17 Mobile edge calculation unloading time delay optimization method and device and storage medium

Publications (1)

Publication Number Publication Date
CN114599096A true CN114599096A (en) 2022-06-07

Family

ID=81805823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210148031.6A Pending CN114599096A (en) 2022-02-17 2022-02-17 Mobile edge calculation unloading time delay optimization method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114599096A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174397A (en) * 2022-07-28 2022-10-11 河海大学 Federal edge learning training method and system combining gradient quantization and bandwidth allocation
CN115174397B (en) * 2022-07-28 2023-10-13 河海大学 Federal edge learning training method and system combining gradient quantization and bandwidth allocation
CN115827185A (en) * 2022-10-31 2023-03-21 中电信数智科技有限公司 6G aerial base station and Beidou aerial obstacle avoidance combined method, storage medium and equipment
CN115827185B (en) * 2022-10-31 2023-12-01 中电信数智科技有限公司 Method, storage medium and equipment for combining 6G air base station with Beidou air obstacle avoidance
CN116521340A (en) * 2023-04-27 2023-08-01 福州慧林网络科技有限公司 Low-delay parallel data processing system and method based on large-bandwidth network
CN116521340B (en) * 2023-04-27 2023-10-10 福州慧林网络科技有限公司 Low-delay parallel data processing system and method based on large-bandwidth network
CN116782412A (en) * 2023-08-17 2023-09-19 北京航空航天大学 High dynamic heterogeneous wireless network resource allocation method based on random access
CN116782412B (en) * 2023-08-17 2023-11-14 北京航空航天大学 High dynamic heterogeneous wireless network resource allocation method based on random access

Similar Documents

Publication Publication Date Title
CN109413724B (en) MEC-based task unloading and resource allocation scheme
CN114599096A (en) Mobile edge calculation unloading time delay optimization method and device and storage medium
CN111447619B (en) Joint task unloading and resource allocation method in mobile edge computing network
CN108920279B (en) Mobile edge computing task unloading method under multi-user scene
CN109951869B (en) Internet of vehicles resource allocation method based on cloud and mist mixed calculation
Lee et al. An online secretary framework for fog network formation with minimal latency
CN111182570B (en) User association and edge computing unloading method for improving utility of operator
CN108809695B (en) Distributed uplink unloading strategy facing mobile edge calculation
CN111586720B (en) Task unloading and resource allocation combined optimization method in multi-cell scene
CN110098969B (en) Fog computing task unloading method for Internet of things
CN108391317B (en) Resource allocation method and system for D2D communication in cellular network
CN108901075B (en) GS algorithm-based resource allocation method
CN111405569A (en) Calculation unloading and resource allocation method and device based on deep reinforcement learning
CN111641891B (en) Task peer-to-peer unloading method and device in multi-access edge computing system
Hussain et al. System capacity maximization with efficient resource allocation algorithms in D2D communication
CN111132191A (en) Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server
CN110719641B (en) User unloading and resource allocation joint optimization method in edge computing
CN111800812B (en) Design method of user access scheme applied to mobile edge computing network of non-orthogonal multiple access
CN110856259A (en) Resource allocation and offloading method for adaptive data block size in mobile edge computing environment
CN111953547B (en) Heterogeneous base station overlapping grouping and resource allocation method and device based on service
CN111601327B (en) Service quality optimization method and device, readable medium and electronic equipment
CN113590279A (en) Task scheduling and resource allocation method for multi-core edge computing server
Kopras et al. Task allocation for energy optimization in fog computing networks with latency constraints
Sun et al. A joint learning and game-theoretic approach to multi-dimensional resource management in fog radio access networks
Ortín et al. Joint cell selection and resource allocation games with backhaul constraints

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination