CN114666339B - Edge offloading method and system based on neutrosophic sets, and storage medium - Google Patents
Edge offloading method and system based on neutrosophic sets, and storage medium
- Publication number
- CN114666339B (application CN202210140585.1A)
- Authority
- CN
- China
- Prior art keywords
- edge server
- task
- edge
- candidate
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04L67/1021—Server selection for load balancing based on client or server locations
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
- H04L67/101—Server selection for load balancing based on network conditions
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of edge computing offloading, and discloses an edge offloading method, system and storage medium based on neutrosophic sets. The method comprises: computing a first cost required to execute a task on the mobile device and a second cost required to execute the task on a first edge server, where the first edge server is the edge server closest to the mobile device among N edge servers; offloading the task to the first edge server for execution when the first cost is greater than the second cost; and, when the first edge server has reached a load threshold, locking a second edge server based on the context parameters of each of the N edge servers and offloading the task to the second edge server for execution. A multi-level edge computing offloading strategy is thus designed, which settles both whether a task needs to be offloaded and where it should be offloaded, reducing the cost of executing the task.
Description
Technical Field
The invention relates to the technical field of edge computing offloading, in particular to an edge offloading method, system and storage medium based on neutrosophic sets.
Background
With the rapid development of wireless communication and Internet technology, the era of the Internet of Everything has quietly arrived. It is expected that by 2030 the number of mobile devices in China will reach 4 billion. Meanwhile, novel applications such as VR, online gaming and autonomous driving are springing up; the size and complexity of the data they generate are increasing sharply, and their delay requirements are ever stricter, without which the user's quality of experience is hard to guarantee. Although mobile devices are equipped with increasingly powerful CPUs, their limited size, storage space and battery power mean they cannot cope with delay-sensitive and compute-intensive applications on their own. Thus, edge computing came into being.
Edge Computing (EC) is a new computing model that deploys computing and storage resources, such as cloudlets, micro data centers, or fog nodes, at the network edge, closer to the mobile devices or sensors, to make up for the devices' shortcomings in resource storage, computational performance and energy efficiency, thereby providing fast processing and low-latency services. Edge computing still faces many technical challenges, such as computation offloading and mobility management. As one of the key technologies of edge computing, computation offloading refers to a terminal device handing part or all of a computation task to an edge server for processing; it plays an important role in minimizing delay and guaranteeing quality of service. Mobility management refers to how to select a suitable edge node to serve a user, according to the user's movement trajectory, when the user is in a densely and complexly covered network area. In response to these problems, academia has produced a great deal of research and many algorithms aimed at making optimal offloading decisions that minimize energy consumption while satisfying execution delay constraints. However, these studies have some shortcomings. Some algorithms assume by default that all tasks must be offloaded, which is clearly unreasonable, because some tasks may be cheaper to process on the mobile device. When multiple edge nodes exist near the mobile device, current research is insufficient on how to select an optimal node by jointly considering the context factors of the edge nodes and the mobile device and fusing multiple attributes. And in some studies the offloading decision model and mobility management are either handled separately, or the offloading decision is based entirely on mobility, whereas mobility is in fact only one of the important context factors influencing the decision.
As a result, the offloading schemes of existing edge offloading methods incur a high cost.
Disclosure of Invention
The invention provides an edge offloading method, system and storage medium based on neutrosophic sets, aiming to solve the problem that the offloading schemes of existing edge offloading methods incur a high cost.
In order to achieve the purpose, the invention is realized by the following technical scheme:
In a first aspect, the present invention provides an edge offloading method based on neutrosophic sets, applied to a computation offloading structure that includes a cloud computing center, N edge servers and a mobile device, where N is a positive integer. The method comprises:
S1, calculating a first cost required to execute a task on the mobile device and a second cost required to execute the task on a first edge server, where the first edge server is the edge server closest to the mobile device among the N edge servers;
S2, offloading the task to the first edge server for execution when the first cost is greater than the second cost;
S3, when the first edge server has reached a load threshold, locking a second edge server based on the context parameters of each candidate edge server among the N edge servers, the context parameters comprising user mobility, network conditions, the load of each edge server, and CPU utilization;
S4, offloading the task to the second edge server for execution.
In a second aspect, an embodiment of the present application provides an edge offloading system based on neutrosophic sets, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
In a third aspect, the present application provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the method according to the first aspect.
Beneficial effects:
In the edge offloading method based on neutrosophic sets provided by the invention, when executing the task on the mobile device would cost more, the task is first offloaded to the first edge server, the one closest to the mobile device, for execution; and when the first edge server has reached a load threshold, a second edge server is locked based on the context parameters of each of the N edge servers and the task is offloaded to the second edge server for execution. A multi-level edge computing offloading strategy is thus designed, which settles both whether a task needs to be offloaded and where it should be offloaded, reducing the cost of executing the task. On this basis, the second edge server is determined from the user mobility, the network conditions, the load of each edge server and the CPU utilization, fully accounting for the real-time movement of the user across the coverage of different cloudlets; the neutrosophic set handles the high variability of the context parameters over time, saving energy and time and reducing the number of failed tasks.
Drawings
FIG. 1 is a flowchart of an edge offloading method based on neutrosophic sets according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-tier computing offload architecture in accordance with a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a cloud model of a preferred embodiment of the present invention;
FIG. 4 is a graph of the mobility of a user over time in accordance with a preferred embodiment of the present invention;
FIG. 5 is a graph of mean number of task failures for NSCO and the method of the comparative experiment in accordance with a preferred embodiment of the present invention;
FIG. 6 is a graph of the average elapsed time for NSCO and two comparative methods of the preferred embodiment of the present invention when processing different numbers of tasks;
FIG. 7 is a graph of the average consumed energy for NSCO and two comparative methods of the preferred embodiment of the present invention when dealing with different numbers of tasks.
Detailed Description
The technical solutions of the present invention are described clearly and completely below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of "first," "second," and the like, herein does not denote any order, quantity, or importance, but rather the terms "first," "second," and the like are used to distinguish one element from another. Also, the use of the terms "a" or "an" and the like do not denote a limitation of quantity, but rather denote the presence of at least one. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used only to indicate relative positional relationships, and when the absolute position of the object to be described is changed, the relative positional relationships are changed accordingly.
Referring to fig. 1, an embodiment of the present application provides an edge offloading method based on neutrosophic sets, applied to a computation offloading structure that includes a cloud computing center, N candidate edge servers and a mobile device, where N is a positive integer. The method comprises:
S1, calculating a first cost required to execute a task on the mobile device and a second cost required to execute the task on a first edge server, where the first edge server is the candidate edge server closest to the mobile device among the N candidate edge servers;
the edge unloading method based on the central intelligence set is suitable for a three-layer computing unloading structure shown in fig. 2, and in the structure, a cloud computing center can provide stable and powerful computing capacity and is suitable for processing computing intensive tasks and delay tolerant tasks. Cloudlet (edge server) is a server with computing and network resources deployed at the edge of the network, has better computing power than mobile devices, and has lower latency than cloud computing. Four offload destinations are considered in this application: local to the mobile device, the closest cloudlet (first edge server), the best cloudlet (second edge server), the cloud computing center, denoted in the following calculations with subscripts m, nea, opt, cl, respectively.
In a three-tier cloud-edge hybrid environment, the offloading problem is: where should each task be executed, and how should an optimal cloudlet be selected from the context factors, so as to minimize the overall completion time and energy consumption? Specifically, for a group of n tasks, suppose w tasks are executed locally, x tasks in the nearest cloudlet, y tasks in the optimal cloudlet, and z tasks on the cloud server; then:

w + x + y + z = n (1)

The cost C_site of executing a task at a given site consists of two parts, the task's completion time T and its consumed energy E:

C_site = α·T_site + β·E_site, site ∈ {m, nea, opt, cl} (2)

α and β are weight factors that trade off the time and energy parts of the cost and can be adjusted according to the user's preference for each factor. The computation of the completion time T and the energy consumption E is detailed later; their values are normalized before entering the cost calculation.
S2, offloading the task to the first edge server for execution when the first cost is greater than the second cost.
It should be noted that not all tasks need to be offloaded; the mobile device itself provides a good operating environment for some small tasks. The method determines whether a task needs to be offloaded through the offloading cost.
The delay and energy consumption for completing the task on the mobile device are calculated as follows:

T_m = I/S_m (3)
E_m = P_mo·(I/S_m) (4)

where I is the task size, described by the number of instructions, S_m is the number of instructions the mobile device executes per unit time, and P_mo is the power consumed per unit time while the mobile device executes the task.
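As a quick numeric check of equations (3) and (4), combined with the weighted cost of equation (2), the following sketch uses illustrative values; none of these numbers come from the patent:

```python
# Illustrative values only; I, S_m and P_mo are assumed, not from the patent.
I = 2e9        # task size: 2e9 instructions (assumed)
S_m = 1e9      # instructions per second executed by the device (assumed)
P_mo = 0.9     # device power draw while computing, in watts (assumed)

T_m = I / S_m            # equation (3): 2.0 s
E_m = P_mo * (I / S_m)   # equation (4): 1.8 J

alpha, beta = 0.5, 0.5   # user-preference weights from equation (2)
C_m = alpha * T_m + beta * E_m   # local cost, before normalization
```
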
If the task request is responded to by the nearest cloudlet, the corresponding delay and energy are computed from the components defined below; in particular, the delay implied by these definitions is T_nea = D_u/B_u + D_nea/S_p + I/S_nea + Q_nea + D_d/B_d.
In this embodiment, the parameters involved are defined as follows: D_nea/S_p denotes the propagation time; D_u/B_u and D_d/B_d denote the upload-link and return-link times; I/S_nea denotes the time the nearest cloudlet spends processing the task; Q_nea denotes the time spent queueing in the nearest cloudlet; D_u denotes the amount of uploaded data and D_d the amount of returned data; B_u denotes the upload data rate and B_d the return data rate; P_ts denotes the power consumed per unit time while the mobile device transmits data; P_tr the power consumed per unit time while the mobile device receives data; P_mo the power consumed per unit time while the mobile device performs task computation; P_i the power consumed per unit time while the mobile device is idle (waiting for the result); I denotes the number of instructions of the task to be executed, representing the task size; S_m the number of instructions executed per unit time by the mobile device; S_nea the number of instructions executed per unit time by the nearest cloudlet; S_opt the number of instructions executed per unit time by the best cloudlet; S_cl the number of instructions executed per unit time by the cloud; S_p denotes the propagation speed; D_cl the distance between the nearest cloudlet and the cloud; D_nea the distance between the nearest cloudlet and the mobile device; D_opt the distance between the nearest and best cloudlets; Q_opt the queueing time in the best cloudlet; and T_timer the set timer time.
Therefore, when C_m ≤ C_nea, the task is processed locally on the mobile device at lower cost; if C_m > C_nea, it is more appropriate to offload the task to the nearest cloudlet.
S3, when the first edge server has reached a load threshold, locking a second edge server based on the context parameters of each of the N edge servers, the context parameters comprising user mobility, network conditions, the load of each edge server, and CPU utilization;
S4, offloading the task to the second edge server for execution.
In the above edge offloading method based on neutrosophic sets, when executing the task on the mobile device would cost more, the task is first offloaded to the first edge server, the one closest to the mobile device, for execution; and when the first edge server has reached a load threshold, a second edge server is locked based on the context parameters of each of the N edge servers and the task is offloaded to the second edge server for execution. A multi-level edge computing offloading strategy is thus designed, which settles both whether a task needs to be offloaded and where it should be offloaded, reducing the cost of executing the task. On this basis, the second edge server is determined from the user mobility, network conditions, server load and CPU utilization of each candidate edge server, fully accounting for the real-time movement of the user across the coverage of different cloudlets.
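The multi-level strategy recapped above can be sketched as a small dispatch routine. This is an illustrative sketch only: the cost functions below are simplified stand-ins for the patent's time/energy formulas, and all identifiers and default values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Server:
    speed: float   # instructions per second
    load: float    # current load, in [0, 1]
    queue: float   # queueing delay, seconds

def cost_local(instructions, device_speed, alpha=0.5, beta=0.5, power=1.0):
    t = instructions / device_speed
    return alpha * t + beta * power * t          # simplified C_m = a*T_m + b*E_m

def cost_edge(instructions, server, link_delay, alpha=0.5, beta=0.5, idle_power=0.2):
    t = link_delay + instructions / server.speed + server.queue
    return alpha * t + beta * idle_power * t     # simplified C_nea

def offload_decision(instructions, device_speed, nearest, candidates,
                     link_delay=0.1, load_threshold=0.9):
    # S1: first cost (local) vs second cost (nearest cloudlet)
    if cost_local(instructions, device_speed) <= cost_edge(instructions, nearest, link_delay):
        return "local"
    if nearest.load < load_threshold:
        return "nearest"                         # S2: offload to first edge server
    # S3 placeholder: a full implementation ranks candidates by the
    # neutrosophic score of their time-varying context parameters.
    best = max(candidates, key=lambda s: s.speed, default=None)
    return "best" if best else "cloud"           # S4, with cloud fallback
```

The cloud fallback in the last line corresponds to the timer-expiry case described below, where no suitable cloudlet responds.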
Optionally, S3 specifically includes:
S31, taking the candidate edge servers among the N candidate edge servers other than the first edge server as candidates, and constructing a time-varying context matrix for each candidate edge server from its context parameters at the latest q time instants;
S32, converting each candidate edge server's time-varying context matrix into a single-valued neutrosophic context matrix using the backward cloud generator algorithm;
S33, aggregating the single-valued neutrosophic context matrix into a single-valued neutrosophic number for the candidate edge server using the single-valued neutrosophic weighted average aggregation operator;
S34, calculating each candidate edge server's score with the score function of single-valued neutrosophic numbers, and taking the highest-scoring candidate edge server as the second edge server.
In this optional embodiment, the neutrosophic set handles the high variability of the context parameters over time, which not only saves energy and time but also reduces the number of failed tasks.
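Steps S33 and S34 rely on a neutrosophic aggregation operator and a score function whose formulas are not reproduced in the text above. A common choice in the neutrosophic-set literature, assumed here, is the single-valued neutrosophic weighted average (SVNWA) and the score s(⟨T, I, F⟩) = (2 + T − I − F)/3:

```python
import math

# Assumed SVNWA operator and score function from the neutrosophic-set
# literature; the patent text does not spell these formulas out.

def svnwa(numbers, weights):
    """Aggregate (T, I, F) triples with weights summing to 1."""
    t = 1.0 - math.prod((1.0 - T) ** w for (T, _, _), w in zip(numbers, weights))
    i = math.prod(I ** w for (_, I, _), w in zip(numbers, weights))
    f = math.prod(F ** w for (_, _, F), w in zip(numbers, weights))
    return (t, i, f)

def score(svnn):
    """Score function for step S34: higher is better."""
    T, I, F = svnn
    return (2.0 + T - I - F) / 3.0

def pick_best(candidates, weights):
    """S33/S34 sketch: aggregate each candidate's per-attribute SVNNs, take the max score."""
    return max(candidates, key=lambda svnns: score(svnwa(svnns, weights)))
```

With equal weights, a candidate whose contexts all convert to ⟨0.8, 0.2, 0.1⟩ outranks one whose contexts convert to ⟨0.5, 0.4, 0.3⟩.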
In some scenarios, the nearest cloudlet reaches its load threshold and cannot serve newly arriving tasks. The nearest cloudlet then needs to act as an agent: it broadcasts a request and seeks help from other nearby cloudlets. Specifically, in one example, the step of locking the second edge server (the best cloudlet) proceeds as follows.
The first edge server broadcasts a task request message to search for other nearby cloudlets and sets a timer T_timer, with T_timer << T_nea. When nearby cloudlets can satisfy the task's request, they respond to the agent with their own CPU utilization, current load, and network connection with other cloudlets; the agent selects the best cloudlet from the candidates' information and forwards the task to it for execution. In this case, the delay T_opt, the energy consumption E_opt and the cost are calculated in the same form as in the nearest-cloudlet case, with the best cloudlet's parameters substituted.
Optionally, the method further includes: setting a preset time threshold, and offloading the task to the cloud computing center if the second edge server has not been successfully locked within the preset time threshold.
In this alternative embodiment, if the timer T_timer reaches 0 and no cloudlet has responded to the task's request, there is no suitable cloudlet nearby; at this point the task is offloaded to the cloud computing center, and the cloud server provides the service. In this way, tasks on the mobile device can be offloaded regardless of whether the second edge server is successfully locked, reducing the pressure on the mobile device. In this case, the delay, the energy consumption and the cost are calculated in the same form, with the cloud server's parameters substituted.
It should be noted that, at different times, the network exhibits different degrees of congestion, the available resources of the servers also change dynamically over time, and, owing to its mobility, the terminal device is in different positions at different times. That is, the network conditions, the server resources and the terminal device's position all change dynamically with time, and these context factors affecting the offloading decision exhibit time-varying characteristics.
Tasks may be offloaded to different cloudlets at different times and in different places. These time-varying context factors play an important role in offloading decisions, yet the time-varying dynamics of context parameters are rarely considered in existing research. Here, the four context parameters of user mobility, network conditions, server load and CPU utilization are described in a time-varying manner using single-valued neutrosophic sets.
When the user is within the service range of p candidate cloudlets, in order to select the best cloudlet, the values of the four context parameters of the p candidates over the latest q time instants are collected to support the decision. Mobility is represented by the predicted residence time of the user within each cloudlet, denoted here M_ij, i ∈ {1, 2, ..., p}, j ∈ {1, 2, ..., q}. The other three context parameters are the load of the candidate cloudlet, its CPU utilization, and the network condition between the candidate and the agent cloudlet; their values can be obtained through the corresponding APIs and are denoted here L_ij, U_ij and N_ij, respectively. Mobility is a benefit-type indicator (the larger, the better), while the other three are cost-type indicators; to avoid mixing the two orientations, the cost-type indicators are made benefit-type with y' = max_y − y, where max_y is the maximum value of indicator y. Meanwhile, so that features of different dimensions lie on the same numerical scale and features with large variance do not dominate, the data are normalized with y = (y − min_y)/(max_y − min_y). Thus, for the p candidate cloudlets within the user's service range, the normalized context attribute data over the latest q time instants can be expressed as a matrix whose element at position (i, j) is the quadruple (M_ij, L_ij, U_ij, N_ij), i.e. the normalized context attribute data of cloudlet i at time j, i ∈ {1, 2, ..., p}, j ∈ {1, 2, ..., q}.
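The forward (benefit-type) transformation y' = max_y − y and the min-max normalization described above can be sketched as follows; the helper names are ours, not the patent's:

```python
def forwardize(values):
    """Cost-type -> benefit-type: y' = max_y - y (used for load, CPU, network)."""
    m = max(values)
    return [m - v for v in values]

def min_max(values):
    """Min-max normalization: y = (y - min_y) / (max_y - min_y)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # degenerate column: no spread
    return [(v - lo) / (hi - lo) for v in values]
```

For example, `forwardize([1, 3, 2])` gives `[2, 0, 1]`, and `min_max` of that gives `[1.0, 0.0, 0.5]`.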
the time-varying context information has uncertainty, and the central intelligence Set (NS) has independent membership function, uncertain membership function and non-membership function, and can well represent inconsistent and uncertain information generated due to time variation. But it is defined in the nonstandard unit subintervals ]0-,1+ [.
First, the Single-Valued Neutrosophic Set (SVNS) is defined. Let X be a given universe of discourse; a single-valued neutrosophic set A on X is expressed through a truth-membership function T_A(x), an indeterminacy-membership function I_A(x), and a falsity-membership function F_A(x) as:

A = {⟨x, T_A(x), I_A(x), F_A(x)⟩ | x ∈ X}

where T_A(x), I_A(x), F_A(x) ∈ [0, 1], satisfying 0 ≤ T_A(x) + I_A(x) + F_A(x) ≤ 3. An element of the single-valued neutrosophic set A on the universe X is called a Single-Valued Neutrosophic Number (SVNN), abbreviated ⟨T_A, I_A, F_A⟩.
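As a small illustration of the constraints just stated, a container type can enforce T, I, F ∈ [0, 1] and 0 ≤ T + I + F ≤ 3 (an illustrative sketch; the class is ours, not the patent's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SVNN:
    """Single-valued neutrosophic number <T, I, F>."""
    T: float  # truth-membership degree
    I: float  # indeterminacy-membership degree
    F: float  # falsity-membership degree

    def __post_init__(self):
        if not all(0.0 <= v <= 1.0 for v in (self.T, self.I, self.F)):
            raise ValueError("T, I, F must each lie in [0, 1]")
        # Redundant given the per-component check, kept to mirror the definition.
        if not 0.0 <= self.T + self.I + self.F <= 3.0:
            raise ValueError("T + I + F must lie in [0, 3]")
```
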
In order to establish an SVNS model for the time-varying context attributes of the cloudlets, and to support the subsequent multi-attribute decision based on neutrosophic sets, the three membership degrees must be derived from each cloudlet's context sequence data over the q time instants. For this, the method selects the Cloud Model (CM), a cognitive model based on probability statistics and fuzzy set theory that realizes bidirectional conversion between qualitative concepts and quantitative data, to complete the conversion.
It is worth explaining that a cloud consists of cloud drops; one cloud drop is a single realization of a qualitative concept, and a certain number of cloud drops together express a cloud. In this application, the value of a time-varying context of cloudlet_i at time j can be regarded as one cloud drop, and the data sequence over q time instants represents a cloud model of that context.
As shown in FIG. 3, the numerical characteristics of the cloud model are characterized by three values: the expectation Ex, the entropy En, and the hyper-entropy He. The expectation Ex is the expectation of the spatial distribution of the cloud drops in the universe of discourse and represents the basic certainty of the qualitative concept. The entropy En measures the uncertainty of the qualitative concept, reflecting the range of values acceptable to the concept in the universe of discourse, i.e. the span of the cloud in FIG. 3. The hyper-entropy He is the uncertainty of the entropy En; it expresses the degree of dispersion of the cloud model and reflects the thickness of the cloud in FIG. 3. Therefore, taking the expectation Ex as the truth-membership degree T_A, the entropy En as the indeterminacy degree I_A, and the hyper-entropy He as the falsity degree F_A completes the conversion to an SVNS, i.e. ⟨T_A, I_A, F_A⟩ = ⟨Ex, En, He⟩.
Further, the application adopts the backward cloud generator algorithm in cloud model theory to complete the conversion:
Specifically, the input to the backward cloud generator algorithm may be a certain time-varying context sequence of cloudlet_i, i ∈ {1, 2, ..., p}; the output may be the numerical characteristics (Ex, En, He) of the corresponding context cloud model of cloudlet_i.
In one example, the above process is explained taking the load data of cloudlet_1 over 10 moments. After the data are normalized, the time-varying load data of cloudlet_1 are (0.98, 1.00, 0.54, 0.00, 0.60, 0.51, 0.26, 0.82, 0.80, 0.50), and the corresponding cloud model is:
Inputting the data into the backward cloud generator yields the digital characteristics of the corresponding cloud model, namely the expectation Ex, the entropy En, and the hyper-entropy He, from which the load single-valued neutrosophic number of cloudlet_1 is obtained.
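As a concrete illustration (the patent's own formula images are not reproduced in this translation), the backward cloud generator can be sketched in Python. The estimators used here, Ex as the sample mean, En from the mean absolute deviation scaled by sqrt(π/2), and He from the gap between the sample variance and En², follow the standard one-dimensional backward cloud generator of cloud model theory; function and variable names are illustrative.

```python
import math

def backward_cloud_generator(samples):
    """Estimate the cloud model's digital features (Ex, En, He)
    from a sequence of normalized context samples."""
    n = len(samples)
    # Expectation Ex: sample mean of the cloud droplets.
    ex = sum(samples) / n
    # Entropy En: sqrt(pi/2) times the mean absolute deviation.
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in samples) / n
    # Hyper-entropy He: dispersion of En, from the sample variance minus En^2.
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)
    he = math.sqrt(abs(s2 - en ** 2))
    return ex, en, he

# Load data of cloudlet_1 over 10 moments, from the example above.
load = [0.98, 1.00, 0.54, 0.00, 0.60, 0.51, 0.26, 0.82, 0.80, 0.50]
ex, en, he = backward_cloud_generator(load)
print(ex, en, he)
```

The resulting triple (Ex, En, He) is then read as the SVNN <T, I, F> of the load context, as described above.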
So far, given the time-varying data sequences of the network conditions, load, and CPU utilization of cloudlet_i, after conversion with the backward cloud generator algorithm, the single-valued neutrosophic context models of cloudlet_i are established respectively as:
To establish a single-valued neutrosophic mobility model, mobility needs to be measured first.
Referring to FIG. 4, which illustrates the change of a user's mobility over time: assuming the user starts moving in direction α, changes the moving direction to α1 at a certain moment, and moves in direction α2 at the next moment, mobility is measured by predicting the dwell time of the mobile device within a cloudlet's service range, providing a reference for cloudlet selection.
Suppose at some moment the user is simultaneously within the service areas of two cloudlets. Taking cloudlet_2 as an example, R denotes the service radius of the cloudlet, S denotes the distance the user travels along the moving direction before leaving the cloudlet's coverage, and v denotes the user's moving speed; the moving direction and speed v may be obtained via GPS. D is the straight-line distance between the user's current position and the cloudlet. Setting the user's current position as (A, B) and the position of the cloudlet as (a, b), the user's dwell time within a certain cloudlet can be calculated by the following formula:
The distance S can be calculated by a trigonometric function:
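The exact formula is lost in this translation, so the following sketch uses one standard geometric reading of the quantities defined above: the travel distance S to the boundary of the circular service area of radius R is obtained from the projection of the user-to-cloudlet vector onto the movement direction, and the dwell time is S/v. All names are illustrative and the decomposition is an assumption, not the patent's literal equation.

```python
import math

def dwell_time(user, cloudlet, direction, v, R):
    """Predicted time the user remains inside a cloudlet's service
    area of radius R, moving at speed v along `direction`.
    Returns 0.0 if the straight-line path misses the coverage."""
    ax, ay = user            # current user position (A, B)
    cx, cy = cloudlet        # cloudlet position (a, b)
    dx, dy = direction
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm          # unit movement direction
    # Vector from user to cloudlet centre, decomposed along the path.
    wx, wy = cx - ax, cy - ay
    along = wx * ux + wy * uy              # D * cos(theta)
    d2 = wx * wx + wy * wy                 # D^2
    perp2 = max(d2 - along * along, 0.0)   # squared distance from path to centre
    if perp2 >= R * R:
        return 0.0                         # path never enters the coverage
    # Distance S travelled before crossing the coverage boundary.
    s = along + math.sqrt(R * R - perp2)
    return max(s, 0.0) / v

# User at the centre of a cloudlet with R = 50 m, moving at 2 m/s:
t = dwell_time((0.0, 0.0), (0.0, 0.0), (1.0, 0.0), 2.0, 50.0)
print(t)  # 25.0: 50 m of coverage ahead at 2 m/s
```

Dwell times computed this way over q moments form the mobility sequence that is fed into the backward cloud generator in the next step.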
Thus, the mobility data sequence of the user over q moments in cloudlet_i is obtained. After conversion with the backward cloud generator algorithm, the mobility single-valued neutrosophic set model is established as:
To sum up, for cloudlet_i, i ∈ {1, 2, ..., p}, its single-valued neutrosophic context model is given by the following matrix, where each row represents one of the p candidate cloudlets and the columns list the SVNS of the four context factors, namely mobility, network conditions, load, and CPU utilization:
After obtaining the single-valued neutrosophic context model of cloudlet_i, a decision is made according to the four relevant context factors of the candidate cloudlets, and the optimal cloudlet is selected. In a multi-attribute decision problem, the attributes of each alternative are often complex, and different context attributes contribute to the decision to different degrees and should therefore be given different weight values. Because the attribute weights are completely unknown here, the situation matches the characteristics of entropy in fuzzy theory. Therefore, the optimal weight of each context attribute under the SVNS environment is calculated using neutrosophic entropy theory.
Let A be a single-valued neutrosophic set on the universe of discourse X = {x_1, x_2, ..., x_n}, with entropy E(A) defined in terms of A and its complement A^c. In the context matrix, each element represents the SVNN of a certain context of cloudlet_i, and each column represents the SVNS of one context attribute. Thus, the weight of each attribute is calculated as follows:
where ctx ∈ {M, D, L, C} represents the four context attributes.
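The entropy-based weighting can be sketched as follows. Since the patent's entropy formula is garbled in this translation, the sketch assumes the widely used single-valued neutrosophic entropy E(A) = 1 - (1/n) Σ (T + F)·|I - (1 - I)|, and takes each attribute's weight as its normalized entropy complement, so that less ambiguous attributes weigh more; both choices are assumptions, and the data values are illustrative.

```python
def svns_entropy(column):
    """Entropy of one context attribute's SVNS: a list of
    (T, I, F) triples, one per candidate cloudlet."""
    n = len(column)
    total = sum((t + f) * abs(i - (1 - i)) for t, i, f in column)
    return 1 - total / n

def attribute_weights(matrix):
    """matrix[i][j] = (T, I, F) of candidate i, attribute j.
    Lower-entropy (more informative) attributes get larger weights."""
    m = len(matrix[0])                     # number of attributes
    cols = [[row[j] for row in matrix] for j in range(m)]
    ents = [svns_entropy(c) for c in cols]
    denom = sum(1 - e for e in ents)
    return [(1 - e) / denom for e in ents]

# Two candidates, two context attributes (illustrative values):
ctx = [
    [(0.9, 0.1, 0.1), (0.5, 0.4, 0.4)],
    [(0.8, 0.2, 0.2), (0.6, 0.3, 0.5)],
]
w = attribute_weights(ctx)
print(w)  # weights sum to 1; the first attribute is more decisive here
```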
Finally, using equation (13), the Single-Valued Neutrosophic Set Weighted Average aggregation operator (SVNSWA), the context-attribute SVNSs of cloudlet_i are aggregated into the SVNN of candidate cloudlet_i, denoted SVNN_i = {T_i, I_i, F_i}, where ctx ∈ {M, D, L, C}.
After obtaining SVNN_i, the score of each candidate cloudlet is calculated using the scoring function of formula (14). The scoring function is an important indicator for ranking SVNNs: the larger the membership degree T, the larger the SVNN; the smaller the indeterminacy I, the larger the SVNN; likewise, the smaller the non-membership F, the larger the SVNN. After the list of scores for the candidate cloudlets is obtained, the cloudlet with the highest score is selected as the best cloudlet.
score(SVNN_i) = (T_i + 1 - I_i + 1 - F_i)/3 (14)
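Equations (13) and (14) can be sketched together. The score function below is formula (14) verbatim; the aggregation assumes the standard single-valued neutrosophic weighted averaging operator (T = 1 - Π(1 - T_j)^{w_j}, I = Π I_j^{w_j}, F = Π F_j^{w_j}), since equation (13) itself is not legible in this translation. Weights and candidate values are illustrative.

```python
def svnswa(svnns, weights):
    """Aggregate one candidate's per-attribute SVNNs (T, I, F)
    into a single SVNN with the weighted averaging operator."""
    t = i = f = 1.0
    for (tj, ij, fj), wj in zip(svnns, weights):
        t *= (1 - tj) ** wj
        i *= ij ** wj
        f *= fj ** wj
    return 1 - t, i, f

def score(svnn):
    """Scoring function of formula (14): larger T and smaller
    I, F yield a higher score."""
    t, i, f = svnn
    return (t + 1 - i + 1 - f) / 3

weights = [0.3, 0.3, 0.2, 0.2]   # from the entropy step (illustrative)
candidates = {
    "cloudlet1": [(0.7, 0.2, 0.1), (0.6, 0.3, 0.2), (0.8, 0.1, 0.1), (0.5, 0.4, 0.3)],
    "cloudlet2": [(0.4, 0.5, 0.4), (0.5, 0.4, 0.3), (0.3, 0.6, 0.5), (0.6, 0.3, 0.4)],
}
best = max(candidates, key=lambda c: score(svnswa(candidates[c], weights)))
print(best)  # cloudlet1: higher aggregated score
```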
In summary, the present application provides the neutrosophic-set-based edge offloading method (NSCO). The algorithm can be described as follows:
{ Input: task = <I, D_u>, where I is the number of instructions of the task to be executed and D_u is the amount of data uploaded at the time of offloading.
Output: the execution position of task t.
1. Calculate the cost C_m of executing the task locally and the cost C_nea of executing it on the nearest cloudlet using equations (2)(3)(4)(5)(6);
2. if C_m ≤ C_nea:
3.     execute the task locally;
4. else:
5.     the nearest cloudlet receives the task;
6. if the nearest cloudlet meets the task requirements:
7.     execute the task on the nearest cloudlet;
8. else:
9.     the nearest cloudlet acts as an agent, broadcasting task request messages to other available cloudlets nearby;
10. if, within time T_timer, there are other cloudlets that can satisfy the offloading request of the task:
11.     obtain the four context data (user mobility, network conditions, load, CPU utilization) of all candidate cloudlets over the latest q moments to form the time-varying context matrix of cloudlet_i;
12.     use the backward cloud generator algorithm to convert it into a single-valued neutrosophic context matrix;
13.     aggregate the single-valued neutrosophic context matrix into SVNN_i using the weighted average aggregation operator of equation (13);
14.     compare the SVNN_i using equation (14), take the cloudlet with the highest score as the best cloudlet, and transfer the task from the nearest cloudlet to the best cloudlet;
15. else:
16.     no cloudlet in the vicinity can execute the task within T_timer; offload the task to the cloud.
}
The main flow of the algorithm determines the execution position of the task. First, steps 1-7 decide, according to the costs C_m and C_nea of executing the task, whether to offload the task to the nearest cloudlet. Then, steps 8-14 show that when the nearest cloudlet cannot satisfy the task request, the time-varying context information of the cloudlets, modeled as single-valued neutrosophic sets, is used to select the best cloudlet to execute the task. Finally, steps 15-16 indicate that if no suitable candidate cloudlet responds within T_timer, the task is offloaded to the cloud computing center. The time complexity of the algorithm is O(n): the single-valued neutrosophic weighted average aggregation operator takes O(n), the comparison of candidate cloudlets by the SVNN scoring function takes O(n), these two stages run one after the other, and the remaining steps take O(1), so the overall complexity is O(n).
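The control flow of steps 1-16 above can be sketched compactly. The cost values, availability flags, and candidate scores are placeholders, since the patent's cost equations (2)-(6) are not reproduced here; only the branching structure follows the listing.

```python
def nsco_decide(local_cost, nearest_cost, nearest_ok, candidate_scores,
                timer_expired):
    """Return where the task runs: 'local', 'nearest', the id of the
    best candidate cloudlet, or 'cloud'.  candidate_scores maps a
    cloudlet id to its score(SVNN_i) from equations (13)-(14)."""
    # Steps 1-4: compare local cost with the nearest cloudlet's cost.
    if local_cost <= nearest_cost:
        return "local"
    # Steps 5-7: the nearest cloudlet executes the task if it can.
    if nearest_ok:
        return "nearest"
    # Steps 9-14: the nearest cloudlet acts as agent; pick the best
    # candidate by neutrosophic score if any replied within T_timer.
    if not timer_expired and candidate_scores:
        return max(candidate_scores, key=candidate_scores.get)
    # Steps 15-16: fall back to the cloud computing centre.
    return "cloud"

print(nsco_decide(5.0, 8.0, False, {}, False))   # local
print(nsco_decide(9.0, 8.0, True, {}, False))    # nearest
print(nsco_decide(9.0, 8.0, False,
                  {"cloudlet5": 0.71, "cloudlet15": 0.83}, False))  # cloudlet15
print(nsco_decide(9.0, 8.0, False, {}, True))    # cloud
```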
In the following, the performance of the neutrosophic-set-based edge offloading method provided by the present application is verified through experiments.
(1) Data sets: the Stanford Drone dataset and the Alibaba cluster dataset were used in the experiments herein. The Stanford Drone dataset continuously records the motion trajectories of pedestrians in a certain area of the Stanford University campus and provides specific position information of users. The Alibaba cluster dataset provides cluster traces from actual production, recording relevant data of 4000 servers over 8 days; the servers' CPU utilization and load were extracted in the experiments. The network condition is measured by communication delay and calculated according to the simulation parameters in (2).
(2) Simulation parameters: an Advantech EIS-D210 (3846 MIPS, 1.5 GHz, 4 GB RAM) is used herein as the cloudlet parameter, and a Dell PowerEdge (31790 MIPS, 3.0 GHz, 768 GB RAM) as the cloud server parameter. The service radius of a cloudlet is set to 0-50 m, the bandwidth between the user and a cloudlet to 100 Mbps, and the bandwidth between the user and the cloud server to 1 Gbps. Task sizes follow a uniform distribution with a mean of 4600 M instructions; the data transmission amount per task follows the same distribution with a mean of 750 kilobytes.
(3) Comparative experiments: the NSCO algorithm herein is compared with "Application-aware cloudlet selection for computation offloading in multi-cloudlet environment" (appAware) and "mCloud: A Context-Aware Offloading Framework for Heterogeneous Mobile Cloud" (mCloud).
appAware: different cloudlets can execute different types of application programs; applications are distributed to different cloudlets according to the type of the requested application, thereby balancing workload, reducing system delay, and lowering power consumption.
mCloud: taking into account changes in the mobile device's context (network conditions), it helps select wireless media and cloud resources so as to make better offloading decisions, providing better performance and lower battery consumption.
(4) Evaluation indexes: three evaluation criteria are selected: the average number of task failures, response time, and energy consumption. A failed task is defined as one for which the user's dwell time within a certain cloudlet is less than the completion time of the task offloaded to that cloudlet, because in this case the user has already left the service area of the cloudlet and cannot receive the result.
In one case study, 20 cloudlets were defined to simulate the task offloading experiments, and the raw context-parameter data over 10 moments were converted into neutrosophic sets, yielding the results shown in Table 1. The weighted average aggregation of equation (13) was then used to obtain the single-valued neutrosophic number of each candidate cloudlet, as shown in Table 2. Finally, the list of scores for the candidate cloudlets was obtained using equation (14):
cloudlet15 > cloudlet5 > cloudlet13 > cloudlet7 > cloudlet3 > ... > cloudlet14.
TABLE 1 Single-valued neutrosophic numbers of each context parameter
TABLE 2 Aggregated single-valued neutrosophic numbers of the candidate cloudlets
Then, comparative analysis was performed as follows:
(1) Analysis of the average number of task failures
The average number of task failures was measured with 25, 50, 75, and 100 tasks respectively; the results for the neutrosophic-set-based time-varying context-aware edge offloading method proposed herein (abbreviated NSCO) and for the comparison methods are shown in FIG. 5.
There are two reasons for the above advantages: (1) NSCO takes into account the high mobility of the user and captures the user's movement trend through the user's dwell time within a cloudlet's range. Predicting user dwell times helps filter out cloudlets with little available time, thereby avoiding potential task offloading failures. (2) NSCO predicts the future from historical data, on the premise that the most suitable cloudlet over the recent past should also be near-optimal in the near future. When the nearest cloudlet cannot meet the task requirements and the task must be transferred to the best surrounding cloudlet, NSCO uses the historical data of the relevant context attributes and combines the cloud model and neutrosophic aggregation algorithms to select the optimal nearby cloudlet, which plays an important role in reducing the number of task offloading failures.
In the comparison methods appAware and mCloud, neither the temporal dynamics of cloudlet selection nor the mobility of the user is considered, so the number of failed tasks is large. In addition, appAware only considers whether a cloudlet is dedicated to processing a certain type of task; if no such cloudlet exists, the task is offloaded to the cloud center, but transferring the task to the cloud increases the response time, and tasks often fail because the response time is too long. mCloud only considers the context of the network interface and decides where to process the task solely according to whether a network interface is available, so at some moments a large number of tasks end up being processed on the local device.
(2) Analysis of the time and energy consumed by task offloading
FIG. 6 and FIG. 7 show the average elapsed time and energy consumption of NSCO and the two comparison methods when processing different numbers of tasks. As can be seen from the figures, with the scheme proposed herein the average response time is reduced by about 28.9% and 54.7% relative to appAware and mCloud respectively, and the average energy consumption is reduced by about 33.2% and 56.8% respectively.
Besides the overly long response times that cause task failures, as analyzed in (1), and thereby increase the system's overall response time: in appAware, if no cloudlet dedicated to a task's type is available, the task is offloaded to the cloud center, but transferring the task to the cloud increases the propagation delay, which affects both response time and energy consumption. In mCloud, due to unreasonable task allocation, too many tasks are processed locally, which also increases response time and energy consumption.
In the scheme proposed herein, the nearest cloudlet is selected first. If it cannot satisfy the task's requirements, the nearest cloudlet acts as a proxy server and selects the best cloudlet from those in its vicinity to handle the task. If none of the nearby cloudlets respond, the task is offloaded to the cloud. Moreover, when selecting the optimal cloudlet, four context factors (user mobility, network conditions, cloudlet CPU utilization, and cloudlet load) are considered holistically; these correspond respectively to the propagation time, communication time, processing time, and queuing time within the response time, so the time spent offloading tasks is short and the energy consumed is low, improving the user's overall experience.
In summary, the problem of task offloading under edge computing is studied herein. A computation offloading policy is proposed that takes multiple context factors, such as user mobility, into account. When the nearest cloudlet cannot process the offloaded task, the problem is converted into a multi-attribute decision problem: an optimal cloudlet is selected from the neighborhood to process the task, and neutrosophic sets are adopted to handle the high dynamic variability of the context data over time. Simulation results show that the proposed strategy reduces delay by 28.9%-54.7% and power consumption by 33.2%-56.8%.
In this embodiment, the edge server may also refer to an edge cloud.
The application also provides a neutrosophic-set-based edge offloading system, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program. The neutrosophic-set-based edge offloading system can realize each embodiment of the neutrosophic-set-based edge offloading method and achieve the same beneficial effects, which are not repeated here.
The present application also provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the method steps described above. The computer-readable storage medium can implement the embodiments of the neutrosophic-set-based edge offloading method and achieve the same beneficial effects, which are not repeated here.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the above teachings. Therefore, the technical solutions that can be obtained by a person skilled in the art through logical analysis, reasoning or limited experiments based on the prior art according to the concepts of the present invention should be within the scope of protection determined by the claims.
Claims (7)
1. An edge offloading method based on neutrosophic sets, applied to a computation offloading structure, wherein the computation offloading structure comprises a cloud computing center, N edge servers, and a mobile device, N being a positive integer, and the method comprises the following steps:
S1, calculating a first cost required when a task is executed on the mobile device and a second cost required when the task is executed on a first edge server, wherein the first edge server is the edge server closest to the mobile device among the N edge servers;
S2, offloading the task to the first edge server for execution when the first cost is greater than the second cost;
S3, when the first edge server has reached a load threshold, determining a second edge server based on context parameters of each of the N edge servers, wherein the context parameters comprise user mobility, network conditions, load of each edge server, and CPU utilization;
S4, offloading the task to the second edge server for execution;
wherein S3 specifically comprises:
S31, taking the edge servers other than the first edge server among the N edge servers as candidate edge servers, and constructing a time-varying context matrix of each candidate edge server according to its context parameters over the latest q moments;
S32, converting the time-varying context matrix of each candidate edge server into a single-valued neutrosophic context matrix using a backward cloud generator algorithm;
S33, aggregating the single-valued neutrosophic context matrix into a single-valued neutrosophic number of the candidate edge server using a single-valued neutrosophic weighted average aggregation operator;
S34, calculating the score of each candidate edge server using a scoring function of single-valued neutrosophic numbers, and taking the candidate edge server with the highest score as the second edge server.
2. The neutrosophic-set-based edge offloading method of claim 1, further comprising: setting a preset time threshold, and offloading the task to the cloud computing center if the second edge server is not successfully determined within the preset time threshold.
3. The method of claim 1, wherein S31 comprises:
taking the estimated user dwell time within the range of the candidate edge server as the user mobility of the candidate edge server;
obtaining, through the API of the candidate edge server, the network conditions, server load, and CPU utilization corresponding to the candidate edge server;
normalizing the context parameters of the candidate edge server to obtain the time-varying context matrix as follows:
in the formula, q represents q moments, p represents p candidate edge servers, i ∈ {1, 2, ..., p}, and j ∈ {1, 2, ..., q}.
4. The method of claim 3, wherein the single-valued neutrosophic numbers of the candidate edge servers in S33 satisfy the following relation:
where each element represents the neutrosophic representation of the context of candidate edge server i, T_i represents the membership degree of candidate edge server i, I_i represents the indeterminacy membership degree of candidate edge server i, and F_i represents the non-membership degree of candidate edge server i.
5. The neutrosophic-set-based edge offloading method of claim 4, wherein S34 comprises:
establishing the scoring function score(SVNN_i) of single-valued neutrosophic numbers as follows:
score(SVNN_i) = (T_i + 1 - I_i + 1 - F_i)/3;
calculating the score of each candidate edge server using the scoring function, and taking the candidate edge server with the highest score as the second edge server.
6. A neutrosophic-set-based edge offloading system, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
7. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210140585.1A CN114666339B (en) | 2022-02-16 | 2022-02-16 | Edge unloading method and system based on noose set and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114666339A CN114666339A (en) | 2022-06-24 |
CN114666339B true CN114666339B (en) | 2023-04-11 |
Family
ID=82027226
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210140585.1A Active CN114666339B (en) | 2022-02-16 | 2022-02-16 | Edge unloading method and system based on noose set and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114666339B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116112865B (en) * | 2023-01-17 | 2023-10-03 | 广州爱浦路网络技术有限公司 | Edge application server selection method based on user equipment position, computer device and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111104211A (en) * | 2019-12-05 | 2020-05-05 | 山东师范大学 | Task dependency based computation offload method, system, device and medium |
CN112887435A (en) * | 2021-04-13 | 2021-06-01 | 中南大学 | Method for improving task unloading cooperation rate in edge calculation |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10771569B1 (en) * | 2019-12-13 | 2020-09-08 | Industrial Technology Research Institute | Network communication control method of multiple edge clouds and edge computing system |
CN111274037B (en) * | 2020-01-21 | 2023-04-28 | 中南大学 | Edge computing task unloading method and system |
BR112022019005A2 (en) * | 2020-03-23 | 2022-11-01 | Apple Inc | STRUCTURE OF SERVICE DISCOVERY AND DYNAMIC DOWNLOAD FOR CELLULAR NETWORK SYSTEMS BASED ON EDGE COMPUTING |
CN111835849B (en) * | 2020-07-13 | 2021-12-07 | 中国联合网络通信集团有限公司 | Method and device for enhancing service capability of access network |
US11427215B2 (en) * | 2020-07-31 | 2022-08-30 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for generating a task offloading strategy for a vehicular edge-computing environment |
CN112306696B (en) * | 2020-11-26 | 2023-05-26 | 湖南大学 | Energy-saving and efficient edge computing task unloading method and system |
CN112600895B (en) * | 2020-12-07 | 2023-04-21 | 中国科学院深圳先进技术研究院 | Service scheduling method, system, terminal and storage medium for mobile edge calculation |
Also Published As
Publication number | Publication date |
---|---|
CN114666339A (en) | 2022-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022121097A1 (en) | Method for offloading computing task of mobile user | |
CN107766135B (en) | Task allocation method based on particle swarm optimization and simulated annealing optimization in moving cloud | |
CN111930436B (en) | Random task queuing unloading optimization method based on edge calculation | |
CN111586720B (en) | Task unloading and resource allocation combined optimization method in multi-cell scene | |
CN112004239A (en) | Computing unloading method and system based on cloud edge cooperation | |
WO2023040022A1 (en) | Computing and network collaboration-based distributed computation offloading method in random network | |
CN109151864B (en) | Migration decision and resource optimal allocation method for mobile edge computing ultra-dense network | |
CN111475274B (en) | Cloud collaborative multi-task scheduling method and device | |
CN113296845A (en) | Multi-cell task unloading algorithm based on deep reinforcement learning in edge computing environment | |
CN111586696A (en) | Resource allocation and unloading decision method based on multi-agent architecture reinforcement learning | |
CN114143891A (en) | FDQL-based multi-dimensional resource collaborative optimization method in mobile edge network | |
Cha et al. | Fuzzy logic based client selection for federated learning in vehicular networks | |
WO2024174426A1 (en) | Task offloading and resource allocation method based on mobile edge computing | |
CN111984419A (en) | Complex task computing and transferring method for marginal environment reliability constraint | |
Lv et al. | Edge computing task offloading for environmental perception of autonomous vehicles in 6G networks | |
CN114666339B (en) | Edge unloading method and system based on noose set and storage medium | |
CN114390057A (en) | Multi-interface self-adaptive data unloading method based on reinforcement learning under MEC environment | |
CN112867066A (en) | Edge calculation migration method based on 5G multi-cell deep reinforcement learning | |
Huang et al. | Federated learning based qos-aware caching decisions in fog-enabled internet of things networks | |
CN114064294B (en) | Dynamic resource allocation method and system in mobile edge computing environment | |
Zheng et al. | Digital Twin Enabled Task Offloading for IoVs: A Learning-Based Approach | |
CN116828534B (en) | Intensive network large-scale terminal access and resource allocation method based on reinforcement learning | |
CN117858109A (en) | User association, task unloading and resource allocation optimization method based on digital twin | |
CN111930435A (en) | Task unloading decision method based on PD-BPSO technology | |
Li et al. | D2D-assisted computation offloading for mobile edge computing systems with energy harvesting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||