CN115835306A - Task processing method, device, equipment and storage medium - Google Patents

Task processing method, device, equipment and storage medium

Info

Publication number
CN115835306A
CN115835306A (application CN202211459678.7A)
Authority
CN
China
Prior art keywords
node
time delay
delay
task
determining
Prior art date
Legal status
Pending
Application number
CN202211459678.7A
Other languages
Chinese (zh)
Inventor
张力方
胡泽妍
刘桂志
王玉婷
李一喆
李宏平
Current Assignee
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd
Priority to CN202211459678.7A
Publication of CN115835306A
Legal status: Pending

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a task processing method, apparatus, device, and storage medium, relating to the field of communications technology. The method comprises: monitoring whether the resource processing capacity of an MEC node is greater than or equal to a preset occupancy ratio to obtain a judgment result, where the resource processing capacity represents the ratio of the used resources on the MEC node to the usable resources of the MEC node; determining a target processing node according to the judgment result, a first delay, and a second delay, where the first delay is the sum of the communication delay and the computation delay of offloading the task to a fog node, the second delay is the sum of the communication delay and the computation delay of offloading the task to a cloud node, and the target processing node is the fog node, the cloud node, or the MEC node; and, in response to the target processing node not being the MEC node, offloading the task to the target processing node for processing. The method and apparatus can reduce the delay of task processing.

Description

Task processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method, an apparatus, a device, and a storage medium for task processing.
Background
With the continuous development of artificial intelligence and mobile internet technologies, a large number of business applications such as augmented reality, face recognition, image rendering, and automatic driving have emerged. These applications usually consume huge computing resources, storage resources, and energy. At present, the computing power of terminals is limited and their battery capacity is low, so they cannot meet the processing requirements of these applications; cloud computing was therefore proposed and has developed rapidly.
Cloud computing uses virtualization technology to establish an ultra-large computing resource pool, so that various applications can obtain the computing resources, storage resources, software, and platform services they require. The emergence of cloud computing meets the need for computation-intensive business processing, but business applications such as automatic driving are also delay-sensitive, and in many cases the transmission delay from a terminal to the cloud cannot meet their requirement for ultra-low delay. Therefore, the European Telecommunications Standards Institute (ETSI) established an Industry Specification Group (ISG) for Mobile Edge Computing (MEC) in December 2014 and started mobile edge computing standardization. ETSI defines MEC as a network architecture that provides information technology (IT) and cloud computing capabilities within the radio access network, close to mobile users; it aims to migrate IT and cloud computing from the core network to the edge access network, so as to shorten the end-to-end delay of task processing and ensure the security and privacy of data. In September 2016, the concept of mobile edge computing was extended to Multi-access Edge Computing (MEC), which extends edge computing from telecommunication cellular networks to other wireless networks, including Wireless Fidelity (WiFi) and fixed access technologies, to broaden its applicability in heterogeneous networks.
Although the massive deployment of edge computing devices and terminals alleviates the bandwidth shortage, network congestion, and excessive delay caused by uploading massive data to cloud computing centers, it also causes computing resources to be deployed ubiquitously, inevitably producing a "computing island" effect. On the one hand, edge computing nodes do not cooperate effectively in processing tasks, and the computing resources of a single node cannot meet the resource requirements of ultra-large computation-intensive tasks such as image rendering, so the ultra-low-delay requirement of business applications that are both computation-intensive and delay-sensitive cannot be satisfied. On the other hand, because of unbalanced network load, some edge computing nodes are overloaded and cannot process computing tasks effectively while other computing nodes remain idle, so the computing resources of the edge network cannot be fully utilized.
Therefore, in order to utilize the heterogeneous computing resources of the whole network efficiently and cooperatively, in 2019 operators, equipment vendors, and others proposed the Computing-Aware Network (CAN), a technical scheme for computing-network convergence based on distributed systems, to realize joint optimized scheduling of information and communication technology (ICT) systems and provide end-to-end experience guarantees. The CAN aims to connect and coordinate the various computing capabilities of cloud computing, edge computing, and terminals through the network, realize deep fusion and cooperative perception of computing and the network, and achieve on-demand scheduling and efficient sharing of computing resources.
Computing-power-aware routing and computing resource allocation are key problems in the research of computing-aware networks; in traditional network architectures, computing power and the network are usually managed separately. In terms of computing-power management, computation offloading is a key technology of edge computing. Since the edge computing concept was proposed, many researchers have put forward task offloading strategies based on single-user multi-node, multi-user single-node, and multi-user multi-node scenarios, which are essentially a matching of terminal tasks to edge computing nodes; however, the problem of high task processing delay remains.
Disclosure of Invention
The application provides a task processing method, apparatus, device, and storage medium, which are used to solve the problem of high task processing delay.
In a first aspect, the present application provides a task processing method, including: monitoring whether the resource processing capacity of the MEC node is greater than or equal to a preset occupation ratio or not to obtain a judgment result, wherein the resource processing capacity is used for representing the occupation ratio of the used resources on the MEC node to the usable resources of the MEC node; determining a target processing node according to the judgment result, a first time delay and a second time delay, wherein the first time delay is the sum of communication time delay and calculation time delay for unloading the task to the fog node, the second time delay is the sum of communication time delay and calculation time delay for unloading the task to the cloud node, and the target processing node is the fog node or the cloud node or the MEC node; and in response to the target processing node not being an MEC node, offloading the task to the target processing node for processing.
In one possible implementation, determining a target processing node according to the determination result, the first delay and the second delay includes: in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset ratio, determining the size relation among the service tolerance time delay, the first time delay and the second time delay corresponding to the task; determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the service tolerance time delay; in response to that only one of the first time delay and the second time delay is smaller than the service tolerance time delay, determining that the target processing node is a node with the time delay smaller than the service tolerance time delay in the cloud node and the fog node; and in response to the fact that the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is a node with smaller time delay in the cloud node and the fog node.
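The branching described in this implementation (for the case where the MEC node's occupancy has reached the preset ratio) can be sketched as a small selection routine. This is an illustrative reading rather than code from the application; the function name and the node labels ("mec", "fog", "cloud") are assumptions, and ties at exact equality are resolved arbitrarily here because the text does not specify them:

```python
def select_node_when_overloaded(t_fog: float, t_cloud: float, t_tolerance: float) -> str:
    """Choose the target processing node when the MEC node's resource
    occupancy has reached the preset ratio.

    t_fog: first delay (offload to fog node); t_cloud: second delay
    (offload to cloud node); t_tolerance: service tolerance delay.
    """
    if t_fog >= t_tolerance and t_cloud >= t_tolerance:
        return "mec"  # neither offload target meets the tolerance
    if t_fog < t_tolerance and t_cloud < t_tolerance:
        # Both qualify: pick the one with the smaller delay.
        return "fog" if t_fog <= t_cloud else "cloud"
    # Exactly one qualifies: pick it.
    return "fog" if t_fog < t_tolerance else "cloud"
```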
In a possible implementation manner, the task processing method may further include: in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, acquiring first energy consumption for unloading the task to the fog node and second energy consumption for unloading the task to the cloud node; in response to the first energy consumption being less than the second energy consumption, determining the target processing node as a fog node; in response to the first energy consumption being greater than the second energy consumption, determining the target processing node as a cloud node; and in response to the first energy consumption being equal to the second energy consumption, determining the target processing node to be a fog node or a cloud node.
In a possible implementation manner, the task processing method may further include: and in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is a cloud node or a fog node according to the first energy consumption, the second energy consumption, the first time delay and the second time delay based on the balance requirements of the energy consumption and the time delay.
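The energy-consumption tie-break, together with one possible reading of the "balance requirement" between energy consumption and delay, can be sketched as follows. The weighted-score formula is purely an assumption for illustration; the application does not specify how energy and delay are balanced:

```python
def select_by_energy(e_fog: float, e_cloud: float) -> str:
    """Energy tie-break when both the fog delay and the cloud delay
    already satisfy the service tolerance delay."""
    if e_fog < e_cloud:
        return "fog"
    if e_fog > e_cloud:
        return "cloud"
    return "fog"  # equal energy: either node is acceptable; fog chosen here


def select_by_tradeoff(e_fog: float, e_cloud: float,
                       t_fog: float, t_cloud: float,
                       w_energy: float = 0.5) -> str:
    """Illustrative weighted energy/delay score for the 'balance
    requirement' variant; the linear weighting is an assumption."""
    score_fog = w_energy * e_fog + (1.0 - w_energy) * t_fog
    score_cloud = w_energy * e_cloud + (1.0 - w_energy) * t_cloud
    return "fog" if score_fog <= score_cloud else "cloud"
```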
In one possible implementation, determining a target processing node according to the judgment result, the first delay, and the second delay includes: in response to the judgment result being that the resource processing capacity of the MEC node is smaller than the preset ratio, determining the magnitude relationship among the third delay (the delay of the MEC node processing the task itself), the first delay, and the second delay; determining the target processing node as the MEC node in response to both the first delay and the second delay being larger than the third delay; and, in response to at least one of the first delay and the second delay being smaller than or equal to the third delay, determining the target processing node based on the minimum-delay principle according to the magnitude relationship between the first delay, the second delay, the third delay, and the service tolerance delay corresponding to the task.
In a possible implementation manner, based on a minimum delay principle, determining a target processing node according to a size relationship between a first delay, a second delay, a third delay, and a service tolerance delay corresponding to a task, includes: and in response to the first time delay, the second time delay and the third time delay all being smaller than the service tolerance time delay, determining the target processing node as a node with the minimum energy consumption in the MEC node, the cloud node and the fog node.
In a possible implementation manner, based on a minimum delay principle, determining a target processing node according to a size relationship between a first delay, a second delay, a third delay, and a service tolerance delay corresponding to a task, further includes: in response to the first delay and the second delay both being less than or equal to a third delay, and the third delay being greater than or equal to a traffic-tolerant delay, determining a target processing node according to: in response to that the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is the node with the minimum energy consumption in the cloud node and the fog node; responding to the fact that the service tolerance time delay is between the first time delay and the second time delay, and determining that the target processing node is a node with smaller time delay in the cloud node and the fog node; and determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the service tolerance time delay.
In a possible embodiment, the third delay is between the first delay and the second delay. Correspondingly, determining the target processing node based on the minimum-delay principle according to the magnitude relationship between the first delay, the second delay, the third delay, and the service tolerance delay corresponding to the task further comprises: in response to the third delay being smaller than the service tolerance delay, determining the target processing node as the node with lower energy consumption between the MEC node and the node whose delay is smaller than the third delay; in response to the third delay being greater than or equal to the service tolerance delay, determining the target processing node as the node whose delay is smaller than the third delay; and, in response to the delay that is smaller than the third delay also being larger than the service tolerance delay, determining the target processing node as the MEC node.
In a second aspect, the present application provides a task processing apparatus, including:
the monitoring module is used for monitoring whether the resource processing capacity of the MEC node is larger than or equal to a preset occupation ratio or not to obtain a judgment result, wherein the resource processing capacity is used for representing the occupation ratio of the used resources on the MEC node to the usable resources of the MEC node;
the determining module is used for determining a target processing node according to the judgment result, a first time delay and a second time delay, wherein the first time delay is the sum of communication time delay and calculation time delay for unloading the task to the fog node, the second time delay is the sum of communication time delay and calculation time delay for unloading the task to the cloud node, and the target processing node is the fog node or the cloud node or the MEC node;
and the processing module is used for unloading the task to the target processing node for processing in response to the fact that the target processing node is not the MEC node.
In a possible implementation, the determining module is specifically configured to: in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset ratio, determining the size relation among the service tolerance time delay, the first time delay and the second time delay corresponding to the task; determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the service tolerance time delay; in response to that only one of the first time delay and the second time delay is smaller than the service tolerance time delay, determining that the target processing node is a node with the time delay smaller than the service tolerance time delay in the cloud node and the fog node; and determining that the target processing node is a node with smaller time delay in the cloud node and the fog node in response to the fact that the first time delay and the second time delay are both smaller than the service tolerance time delay.
In one possible embodiment, the determining module may be further configured to: in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, acquiring first energy consumption for unloading the task to the fog node and second energy consumption for unloading the task to the cloud node; in response to the first energy consumption being less than the second energy consumption, determining the target processing node as a fog node; in response to the first energy consumption being greater than the second energy consumption, determining the target processing node as a cloud node; and in response to the first energy consumption being equal to the second energy consumption, determining the target processing node to be a fog node or a cloud node.
In one possible embodiment, the determining module may be further configured to: and in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is a cloud node or a fog node according to the first energy consumption, the second energy consumption, the first time delay and the second time delay based on the balance requirements of the energy consumption and the time delay.
In one possible embodiment, the determining module may be further configured to: in response to the judgment result that the resource processing capacity of the MEC node is smaller than the preset ratio, determining the magnitude relation among the third time delay, the first time delay and the second time delay of the MEC node processing task; determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the third time delay; and in response to at least one of the first time delay and the second time delay being less than or equal to the third time delay, determining a target processing node based on a minimum time delay principle according to the size relationship between the first time delay, the second time delay and the third time delay and the service tolerance time delay corresponding to the task.
In one possible embodiment, the determining module may be further configured to: and in response to the first time delay, the second time delay and the third time delay all being smaller than the service tolerance time delay, determining the target processing node as a node with the minimum energy consumption in the MEC node, the cloud node and the fog node.
In one possible embodiment, the determining module may be further configured to: in response to the first delay and the second delay both being less than or equal to a third delay, and the third delay being greater than or equal to a traffic-tolerant delay, determining a target processing node according to: in response to that the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is the node with the minimum energy consumption in the cloud node and the fog node; responding to the fact that the service tolerance time delay is between the first time delay and the second time delay, and determining that the target processing node is a node with smaller time delay in the cloud node and the fog node; and determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the service tolerance time delay.
In a possible embodiment, the third delay is between the first delay and the second delay. The determining module is further configured to: in response to the third delay being smaller than the service tolerance delay, determine the target processing node as the node with lower energy consumption between the MEC node and the node whose delay is smaller than the third delay; in response to the third delay being greater than or equal to the service tolerance delay, determine the target processing node as the node whose delay is smaller than the third delay; and, in response to the delay that is smaller than the third delay also being larger than the service tolerance delay, determine the target processing node as the MEC node.
In a third aspect, the present application provides a task processing device, including: a memory and a processor. The memory is used for storing program instructions; the processor is for calling program instructions in the memory to perform the task processing method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed, the task processing method of the first aspect is implemented.
In a fifth aspect, the present application provides a computer program product comprising a computer program that, when executed, is configured to implement the task processing method of the first aspect.
According to the task processing method, apparatus, device, and storage medium provided by the application, a judgment result is obtained by monitoring whether the resource processing capacity of the MEC node is greater than or equal to a preset occupancy ratio, where the resource processing capacity represents the ratio of the used resources on the MEC node to the usable resources of the MEC node; a target processing node is determined according to the judgment result, a first delay, and a second delay, where the first delay is the sum of the communication delay and the computation delay of offloading the task to the fog node, the second delay is the sum of the communication delay and the computation delay of offloading the task to the cloud node, and the target processing node is the fog node, the cloud node, or the MEC node; and, in response to the target processing node not being the MEC node, the task is offloaded to the target processing node for processing. The task processing method provided by the application combines the resource processing capacity of the MEC node with the first delay and the second delay to determine the target processing node for the task, achieving dynamic selection of the target processing node so as to reduce the delay of task processing.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of a task processing method applied to a computing power network system topology diagram according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a task processing method according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a task processing device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a task processing device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, article, or apparatus.
It should be understood that, unless otherwise specified, all references to "MEC node" in the embodiments of the present application refer to a multi-access edge computing node.
In order to solve the problems in the related art, the application provides a task processing method that considers offloading tasks to a fog node or a cloud node for processing when the MEC node carries a large number of tasks. Specifically, a target processing node for the task is determined among the MEC node, the fog node, and the cloud node based on the relationship between the resource processing capacity of the MEC node and the preset occupancy ratio, and the task is offloaded to the target processing node when the target processing node is not the MEC node. That is, the resource processing capacity of the MEC node and the delays of the fog node and the cloud node are sensed dynamically, and a target processing node is selected for the task, thereby reducing the delay of task processing.
Furthermore, energy consumption of the MEC node, the fog node and the cloud node can be comprehensively considered, and the node with the minimum time delay and the minimum energy consumption is selected as the target processing node.
Fig. 1 is a schematic diagram of the topology of a computing power network system to which the task processing method provided in an embodiment of the present application is applied. As shown in Fig. 1, the topology includes terminals, routing nodes (R), MEC nodes, fog nodes (also referred to as "fog node servers"), and cloud nodes (also referred to as "central servers" or "center servers"). There may be multiple routing nodes; the fog node server and the central server can hold computing resources; and the terminal may be a mobile phone, a computer, a notebook computer, or the like.
Fig. 1 also shows the links between the nodes and the weights of those links. For example, W12_R represents the weight between routing node 1 and routing node 2, and W31_N represents the weight between routing node 3 and the MEC node; the remaining weights are denoted in the same way. A weight may be a composite value over dimensions such as path length and bandwidth occupancy.
Illustratively, given the weights, a node server (a fog node server or a cloud node server) can be selected according to the minimum value of the weight calculation. A node server here is not a single server but a server cluster, consisting of multiple nodes and a management unit that manages the whole device; each node includes a module management unit that switches the node's operation mode, and the module management unit switches each node to operate alone or in cooperation with other nodes based on configuration information sent by the management unit. In this embodiment, as shown in Fig. 1, a user may transmit information to a node server through a terminal; the transmitted information may pass through several routes, and different routes may reach different routing nodes and node servers. For example, a route may be selected according to the minimum value of the weight calculation, different candidate routes may have different delays, and the node server may be selected according to the minimum-delay principle; if the delays are the same, the node server may be selected according to the minimum-energy-consumption principle.
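Selecting a route by the minimum value of the weight calculation, as described above, amounts to a shortest-path search over the weighted topology of Fig. 1. A Dijkstra-style sketch follows; the graph shape and node names are illustrative assumptions, and the weights stand in for the composite link values (path length, bandwidth occupancy, etc.):

```python
import heapq


def min_weight_route(graph: dict, src: str, dst: str):
    """Return (path, total_weight) for the minimum-weight route from src
    to dst. `graph` maps each node to {neighbor: link_weight}."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking back from dst to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```

With the route weights known, the same search run toward each candidate node server yields the per-server cost used for the minimum-delay comparison.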
Those skilled in the art will appreciate that the computing power network system topology shown in Fig. 1 does not limit the system model to which the task processing method is applicable; the system may include more or fewer routing nodes, cloud node servers, and fog node servers than shown.
A task processing method according to an exemplary embodiment of the present application is described below with reference to fig. 2 in conjunction with the example of fig. 1. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principle of the present application, and the embodiments of the present application are not limited by the application scenario shown in fig. 1.
Fig. 2 is a flowchart illustrating a task processing method according to an embodiment of the present application. As shown in fig. 2, the task processing method in the embodiment of the present application includes the following steps:
s201: and monitoring whether the resource processing capacity of the MEC node is greater than or equal to a preset occupation ratio or not to obtain a judgment result, wherein the resource processing capacity is used for representing the occupation ratio of the used resources on the MEC node to the usable resources of the MEC node.
It can be understood that the larger the resource processing capability, the larger the delay with which the MEC node processes tasks; when the resource processing capability is small, the delay of the MEC node processing a task is usually within the service tolerance delay. The resource processing capability of the MEC node is therefore monitored, yielding one of two judgment results: the resource processing capability of the MEC node is greater than or equal to the preset ratio, or it is smaller than the preset ratio.
In this step, the resource processing capability of the MEC node may be obtained in a variety of ways. For example, the server uploads the resource capability of the MEC node to a centralized controller (similar to a resource scheduling management platform), and the centralized controller may collect information such as the resource processing capability of the MEC node in real time. The preset ratio may be set manually according to actual conditions and/or empirical values. For example, if the maximum number of tasks a certain MEC node can process simultaneously is 500, and the task processing method provided by the present application should be considered once the MEC node is processing 400 tasks simultaneously, then the preset ratio is 400/500, that is, 80%.
For example, the judgment result is either that the resource processing capability of the MEC node is greater than or equal to the preset ratio or that it is smaller than the preset ratio; if the preset ratio is 80%, the judgment result is that the resource processing capability of the MEC node is greater than or equal to 80%, or that it is less than 80%.
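The monitoring step described above can be sketched as follows; the task-count capacity model and the 80% default come from the example in this section, while the function name and interface are illustrative assumptions:

```python
# Sketch of step S201: judge whether the MEC node's resource usage
# reaches a preset ratio. The task-count capacity model and the 80%
# default threshold are illustrative assumptions from the example.
def is_mec_overloaded(used_tasks: int, max_tasks: int, preset_ratio: float = 0.8) -> bool:
    """Return True when used resources / usable resources >= preset ratio."""
    occupancy = used_tasks / max_tasks
    return occupancy >= preset_ratio

# Example from the text: capacity 500 tasks, currently processing 400.
print(is_mec_overloaded(400, 500))  # 400/500 = 80% -> True
```

With this judgment result in hand, the method then chooses between keeping the task on the MEC node and offloading it, as described in the following steps.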
S202: and determining a target processing node according to the judgment result, a first time delay and a second time delay, wherein the first time delay is the sum of communication time delay and calculation time delay for unloading the task to the fog node, the second time delay is the sum of communication time delay and calculation time delay for unloading the task to the cloud node, and the target processing node is the fog node or the cloud node or the MEC node.
On the basis of the judgment result, the target processing node is determined by combining the first delay and the second delay, so as to reduce the task processing delay.
By way of example, let T^{trans}_{n,m} denote the communication delay for offloading task n to fog node m, T^{comp}_{n,m} the computation delay for offloading task n to fog node m, T^{trans}_{n,c} the communication delay for offloading task n to cloud node c, T^{comp}_{n,c} the computation delay for offloading task n to cloud node c, T_mec_wait_com the waiting and processing delay of the service at the MEC node, and E_{n,mec} the energy consumption for offloading task n on the MEC node. Then the first delay is T^{trans}_{n,m} + T^{comp}_{n,m}, the second delay is T^{trans}_{n,c} + T^{comp}_{n,c}, and the third delay is T_mec_wait_com.
The specific implementation of determining the target processing node is different according to different judgment results, and reference may be made to the following embodiments. The target processing node is a fog node, a cloud node or an MEC node, and if the target processing node is the MEC node, the MEC node still processes the task; if the target processing node is a fog node or a cloud node, step S203 is executed.
S203: and in response to the target processing node not being an MEC node, offloading the task to the target processing node for processing.
The task processing method provided by the embodiments of the present application monitors whether the resource processing capability of the MEC node is greater than or equal to a preset ratio to obtain a judgment result, where the resource processing capability represents the ratio of the resources already used on the MEC node to the usable resources of the MEC node; determines a target processing node according to the judgment result, a first delay, and a second delay, where the first delay is the sum of the communication delay and the computation delay for offloading the task to the fog node, the second delay is the sum of the communication delay and the computation delay for offloading the task to the cloud node, and the target processing node is the fog node, the cloud node, or the MEC node; and, in response to the target processing node not being the MEC node, offloads the task to the target processing node for processing. The target processing node of the task is determined by combining the resource processing capability of the MEC node with the first delay and the second delay, so that the target processing node is selected dynamically and the task processing delay is reduced.
On the basis of the above embodiment, how to "determine the target processing node based on the judgment result, the first delay, and the second delay" is explained case by case.
First, a method for calculating time delay, energy consumption, and the like according to the embodiment of the present application will be described. For example, the parameters in the calculation formula are illustrated as follows:
T_{n,t} represents the service tolerance delay;
Z_n represents the total task amount;
T^{prop}_{n,c} represents the additional transmission delay introduced when task n is offloaded to the cloud node;
R_{n,c} represents the transmission rate from task n to the cloud node;
R_{n,m} represents the transmission rate from task n to the fog node;
R_{n,m}(a) represents the transmission rate from task n to fog node m when task n monopolizes the channel of fog node m;
n_m(a) represents the number of tasks selecting fog node m;
E_{n,m} represents the energy consumption for offloading task n onto fog node m;
E_{n,c} represents the energy consumption for offloading task n onto cloud node c;
p^{trans}_{n,c} represents the transmit power of task n for communicating with the cloud node;
p^{trans}_{n,m} represents the transmit power of task n for communicating with the fog node;
f_c represents the computing capability of the cloud node;
f_m represents the computing capability of the fog node;
E_{n,mec} represents the energy consumption for offloading task n on the MEC node;
p^{trans}_{n,mec} represents the transmit power of task n for communicating with the MEC node;
R_{n,mec} represents the transmission rate from task n to the MEC node;
n_mec(a) represents the number of tasks selecting the MEC node.
Among these, T_{n,t}, T_mec_wait_com, Z_n, R_{n,c}, R_{n,m}, R_{n,mec}, n_m(a), n_mec(a), p^{trans}_{n,c}, p^{trans}_{n,m}, p^{trans}_{n,mec}, f_c, and f_m are known quantities that can be obtained in advance.
According to the parameter description, the time delay and energy consumption parameters required by the embodiment of the application are obtained through calculation, and the calculation process is as follows:
(1) Transmission rate from task n to fog node m:
R_{n,m} = R_{n,m}(a) / n_m(a)
(2) The communication delay, computation delay, and energy consumption for offloading task n to fog node m may be expressed as:
Communication delay: T^{trans}_{n,m} = Z_n / R_{n,m}
Computation delay: T^{comp}_{n,m} = γ_n · Z_n / f_m
where γ_n denotes the computation efficiency; in the ideal case, γ_n = 1.
Energy consumption: E_{n,m} = p^{trans}_{n,m} · Z_n / R_{n,m}
(3) The communication delay, computation delay, and energy consumption for offloading task n to cloud node c may respectively be expressed as:
Communication delay: T^{trans}_{n,c} = Z_n / R_{n,c} + T^{prop}_{n,c}
Computation delay: T^{comp}_{n,c} = γ_n · Z_n / f_c
Energy consumption: E_{n,c} = p^{trans}_{n,c} · Z_n / R_{n,c}
(4) The energy consumption for offloading task n onto the MEC node may be expressed as:
Energy consumption: E_{n,mec} = p^{trans}_{n,mec} · Z_n / R_{n,mec}
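Putting the formulas above together, the first delay, second delay, and offloading energies can be computed as in the following sketch; the formula shapes follow the standard MEC-offloading reconstruction above, and all numeric values are made-up illustrations, not data from the application:

```python
# Sketch: computing the first delay (fog), second delay (cloud), and the
# offloading energies from the parameters listed above. All numbers below
# are illustrative placeholders.
def fog_delay(z_n, r_nm_a, n_m, f_m, gamma_n=1.0):
    r_nm = r_nm_a / n_m                # (1) channel rate shared by n_m(a) tasks
    t_trans = z_n / r_nm               # communication delay
    t_comp = gamma_n * z_n / f_m       # computation delay
    return t_trans + t_comp            # first delay

def cloud_delay(z_n, r_nc, f_c, t_prop=0.0, gamma_n=1.0):
    t_trans = z_n / r_nc + t_prop      # communication delay incl. backhaul term
    t_comp = gamma_n * z_n / f_c       # computation delay
    return t_trans + t_comp            # second delay

def offload_energy(p_trans, z_n, rate):
    return p_trans * z_n / rate        # energy = transmit power x transmit time

# Illustrative values: a 10-unit task, fog channel shared by 2 tasks.
t1 = fog_delay(z_n=10.0, r_nm_a=20.0, n_m=2, f_m=5.0)
t2 = cloud_delay(z_n=10.0, r_nc=50.0, f_c=100.0, t_prop=0.5)
print(t1, t2)  # roughly 3.0 and 0.8
```

The third delay, T_mec_wait_com, is a known quantity collected by the centralized controller rather than computed from these formulas.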
illustratively, the preset occupancy is set to 80%.
Based on the foregoing, in the first case, determining the target processing node according to the determination result, the first time delay, and the second time delay, the method may further include: in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset ratio, determining the size relation among the service tolerance time delay, the first time delay and the second time delay corresponding to the task; determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the service tolerance time delay; in response to that only one of the first time delay and the second time delay is smaller than the service tolerance time delay, determining that the target processing node is a node with the time delay smaller than the service tolerance time delay in the cloud node and the fog node; and in response to the fact that the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is a node with smaller time delay in the cloud node and the fog node.
That is, when the resource processing capacity of the MEC node is more than or equal to 80%:
If T^{trans}_{n,m} + T^{comp}_{n,m} > T_{n,t} and T^{trans}_{n,c} + T^{comp}_{n,c} > T_{n,t}, the task may be processed directly on the MEC node, i.e., the target processing node is the MEC node. In this case, the delay of the fog node processing the task and the delay of the cloud node processing the task are both greater than the service tolerance delay corresponding to the task, so offloading the task to the fog node or the cloud node would still not reduce the task processing delay much; the task is therefore still processed by the MEC node.
If T^{trans}_{n,m} + T^{comp}_{n,m} ≥ T_{n,t} and T^{trans}_{n,c} + T^{comp}_{n,c} < T_{n,t}, the task is offloaded to the cloud node for processing;
If T^{trans}_{n,c} + T^{comp}_{n,c} ≥ T_{n,t} and T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t}, the task is offloaded to the fog node for processing;
If T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t} and T^{trans}_{n,c} + T^{comp}_{n,c} < T_{n,t}, the task is offloaded to whichever of the cloud node and the fog node has the smaller delay. Exemplarily, when both the first delay and the second delay are smaller than the service tolerance delay, the first delay is compared with the second delay: if the first delay is greater than the second delay, the cloud node is selected as the target processing node of the service to be processed; if the first delay is less than the second delay, the fog node is selected as the target processing node of the service to be processed; and if the first delay equals the second delay, either the cloud node or the fog node may be selected as the target processing node of the service to be processed.
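The four delay-only branches of this high-load case can be gathered into one selection function; this is an illustrative sketch of the case analysis above (node labels, the function interface, and breaking the equal-delay tie toward the fog node are assumptions the text permits):

```python
# Sketch of the high-load case (MEC resource usage >= preset ratio),
# using only delays: t1 = first delay (fog), t2 = second delay (cloud),
# t_tol = service tolerance delay. Labels and interface are illustrative.
def select_node_high_load(t1: float, t2: float, t_tol: float) -> str:
    if t1 >= t_tol and t2 >= t_tol:
        return "mec"     # offloading cannot meet the tolerance delay anyway
    if t1 < t_tol and t2 >= t_tol:
        return "fog"     # only the fog delay meets the tolerance
    if t2 < t_tol and t1 >= t_tol:
        return "cloud"   # only the cloud delay meets the tolerance
    return "fog" if t1 <= t2 else "cloud"  # both qualify: smaller delay wins

print(select_node_high_load(2.0, 3.0, 4.0))  # both qualify -> fog
```

When both delays meet the tolerance, the energy-based refinement described next can replace the simple smaller-delay tie-break.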
Further, the target processing node can be determined by combining the energy consumption of the cloud node and the fog node processing tasks. In one implementation manner, the task processing method may further include: in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, acquiring first energy consumption for unloading the task to the fog node and second energy consumption for unloading the task to the cloud node; in response to the first energy consumption being less than the second energy consumption, determining the target processing node as a fog node; in response to the first energy consumption being greater than the second energy consumption, determining the target processing node as a cloud node; and in response to the first energy consumption being equal to the second energy consumption, determining the target processing node to be a fog node or a cloud node.
Illustratively, when the resource processing capability of the MEC node is greater than or equal to 80%, energy consumption is introduced as an additional criterion: on the basis of ensuring a small delay, energy consumption is also considered:
If T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t} and T^{trans}_{n,c} + T^{comp}_{n,c} < T_{n,t}, the energy consumption of the two nodes can be compared and handled by case:
Case one: if E_{n,m} < E_{n,c}, the task is offloaded to the fog node for processing;
Case two: if E_{n,c} < E_{n,m}, the task is offloaded to the cloud node for processing.
By comprehensively considering the usage of server resources and energy consumption, and dynamically sensing server delay, energy consumption, and similar conditions when selecting the offloading node for the service, the present application greatly reduces network energy consumption.
In addition, based on the balanced requirements of energy consumption and time delay, the target processing node can be determined by combining the energy consumption weight and the time delay weight. Therefore, the task processing method may further include: and in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is a cloud node or a fog node according to the first energy consumption, the second energy consumption, the first time delay and the second time delay based on the balance requirements of the energy consumption and the time delay.
For example, if delay and energy consumption are equally important for the method proposed in the present application, the delay weight and the energy consumption weight may each be 0.5; if delay is more important than energy consumption, the delay weight may be 0.7 and the energy consumption weight 0.3. The specific weight values may be set according to actual needs or historical experience, which is not limited in the embodiments of the present application.
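One hedged way to realize this weighted balance is a linear cost per candidate node, as sketched below; the 0.7/0.3 weights mirror the example above, and the assumption that delay and energy are already on comparable (pre-normalized) scales is mine, not the application's:

```python
# Sketch: weighted delay/energy cost for choosing between fog and cloud.
# Assumes delay and energy have been normalized to comparable scales.
def weighted_cost(delay: float, energy: float,
                  w_delay: float = 0.7, w_energy: float = 0.3) -> float:
    return w_delay * delay + w_energy * energy

def select_balanced(t1: float, e1: float, t2: float, e2: float) -> str:
    """Pick fog vs. cloud by the smaller weighted cost (tie -> fog)."""
    return "fog" if weighted_cost(t1, e1) <= weighted_cost(t2, e2) else "cloud"

print(select_balanced(t1=1.0, e1=2.0, t2=2.0, e2=0.5))  # -> fog
```

Changing the weights to 0.5/0.5 recovers the equal-importance setting mentioned above.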
In a second case, the task processing method may further include: determining the magnitude relation among the third time delay, the first time delay and the second time delay of the MEC node processing task in response to the judgment result that the resource processing capacity of the MEC node is smaller than the preset ratio; determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the third time delay; and in response to at least one of the first delay and the second delay being smaller than or equal to the third delay, determining a target processing node based on a minimum delay principle according to the size relationship between the first delay, the second delay, the third delay and the service tolerance delay corresponding to the task.
Illustratively, if the resource processing capability of the MEC node is less than 80%, and T_mec_wait_com < T^{trans}_{n,m} + T^{comp}_{n,m}, T_mec_wait_com < T^{trans}_{n,c} + T^{comp}_{n,c}, and T_mec_wait_com < T_{n,t}, the task can be processed directly on the MEC node; otherwise, reference may be made to the subsequent examples.
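This gate for the lightly loaded case can be sketched directly; the function name and boolean interface are illustrative assumptions:

```python
# Sketch: when MEC usage is below the preset ratio, the task stays on the
# MEC node if its own waiting+processing delay t3 beats both offloading
# delays and the service tolerance delay. Interface is illustrative.
def mec_can_keep_task(t1: float, t2: float, t3: float, t_tol: float) -> bool:
    return t3 < t1 and t3 < t2 and t3 < t_tol

print(mec_can_keep_task(t1=3.0, t2=4.0, t3=1.0, t_tol=2.0))  # True
```

Only when this gate fails does the finer-grained case analysis below come into play.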
Optionally, based on a minimum delay principle, determining the target processing node according to a size relationship between the first delay, the second delay, the third delay, and the service tolerance delay corresponding to the task, includes: and in response to the first time delay, the second time delay and the third time delay all being smaller than the service tolerance time delay, determining the target processing node as a node with the minimum energy consumption in the MEC node, the cloud node and the fog node.
Illustratively, when the resource processing capability of the MEC node is < 80%, the service tolerance delay T_{n,t} is compared with the waiting and processing delay T_mec_wait_com of the service at the MEC node. In particular, when T_mec_wait_com < T_{n,t}:
If T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t} and T^{trans}_{n,c} + T^{comp}_{n,c} < T_{n,t}, the node with the least energy consumption is selected as the target processing node, i.e., Min{E_{n,m}, E_{n,c}, E_{n,mec}};
If T^{trans}_{n,m} + T^{comp}_{n,m} ≥ T_{n,t}, the target processing node can be selected according to the relation between T_{n,t} and the sum of the communication delay and the computation delay for offloading the task to the cloud node, i.e., if T^{trans}_{n,c} + T^{comp}_{n,c} < T_{n,t}, the node with the least energy consumption is selected as the target processing node, i.e., Min{E_{n,m}, E_{n,c}, E_{n,mec}}.
If T^{trans}_{n,c} + T^{comp}_{n,c} ≥ T_{n,t}, the target processing node can be selected according to the relation between T_{n,t} and the sum of the communication delay and the computation delay for offloading the task to the fog node, i.e., if T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t}, the node with the least energy consumption is selected as the target processing node, i.e., Min{E_{n,m}, E_{n,c}, E_{n,mec}}.
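When every candidate meets the tolerance delay in this lightly loaded case, the minimum-energy rule reduces to an argmin over the three energies, as in this sketch (node labels and interface are illustrative):

```python
# Sketch: Min{E_{n,m}, E_{n,c}, E_{n,mec}} — pick the least-energy node
# once all candidates satisfy the service tolerance delay.
def select_min_energy(e_fog: float, e_cloud: float, e_mec: float) -> str:
    energies = {"fog": e_fog, "cloud": e_cloud, "mec": e_mec}
    return min(energies, key=energies.get)  # argmin over the three energies

print(select_min_energy(3.0, 2.0, 2.5))  # -> cloud
```

The two-candidate variants Min{E_{n,m}, E_{n,c}} and the like follow the same pattern with a smaller dictionary.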
Further, based on the minimum delay principle, determining the target processing node according to the size relationship between the first delay, the second delay, the third delay, and the service tolerance delay corresponding to the task, may further include: in response to the first delay and the second delay both being less than or equal to a third delay, and the third delay being greater than or equal to a traffic-tolerant delay, determining a target processing node according to: in response to that the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is the node with the minimum energy consumption in the cloud node and the fog node; responding to the fact that the service tolerance time delay is between the first time delay and the second time delay, and determining that the target processing node is a node with smaller time delay in the cloud node and the fog node; and determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the service tolerance time delay.
Illustratively, when the resource processing capability of the MEC node is < 80%, if T^{trans}_{n,m} + T^{comp}_{n,m} ≤ T_mec_wait_com, T^{trans}_{n,c} + T^{comp}_{n,c} ≤ T_mec_wait_com, and T_mec_wait_com ≥ T_{n,t}, the service tolerance delay T_{n,t} can be compared with the waiting and processing delay T_mec_wait_com of the service at the MEC node. If T_{n,t} ≤ T_mec_wait_com, the target processing node can be selected according to the relation between T_{n,t}, the sum of the communication delay and the computation delay for offloading the task to the fog node, and the sum of the communication delay and the computation delay for offloading the task to the cloud node, specifically:
If T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t} and T^{trans}_{n,c} + T^{comp}_{n,c} < T_{n,t}, then according to the minimum energy consumption principle, the node with the least energy consumption is selected as the target processing node, i.e., Min{E_{n,m}, E_{n,c}}.
If T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t} ≤ T^{trans}_{n,c} + T^{comp}_{n,c}, the fog node is selected as the target processing node.
If T^{trans}_{n,m} + T^{comp}_{n,m} ≥ T_{n,t} and T^{trans}_{n,c} + T^{comp}_{n,c} ≥ T_{n,t}, it may be chosen to process the task on the MEC node or to forgo processing the task.
Further, when the third delay is between the first delay and the second delay, determining the target processing node based on the minimum delay principle according to the relation between the first delay, the second delay, the third delay, and the service tolerance delay corresponding to the task may further include: in response to the third delay being smaller than the service tolerance delay, determining the target processing node as the node with the lower energy consumption among the MEC node and the node whose corresponding delay is smaller than the third delay; in response to the third delay being greater than or equal to the service tolerance delay, determining the target processing node as the node whose corresponding delay is smaller than the third delay; and in response to the delay that is smaller than the third delay also being greater than or equal to the service tolerance delay, determining the target processing node as the MEC node.
Illustratively, when the resource processing capability of the MEC node is < 80%, if the third delay is between the first delay and the second delay, the service tolerance delay T_{n,t} can be compared with the waiting and processing delay T_mec_wait_com of the service at the MEC node. When T_mec_wait_com < T_{n,t}: if T^{trans}_{n,m} + T^{comp}_{n,m} ≤ T_mec_wait_com ≤ T^{trans}_{n,c} + T^{comp}_{n,c}, then according to the minimum energy consumption principle, the node with the least energy consumption is selected as the target processing node, i.e., Min{E_{n,m}, E_{n,mec}}; if T^{trans}_{n,c} + T^{comp}_{n,c} ≤ T_mec_wait_com ≤ T^{trans}_{n,m} + T^{comp}_{n,m}, then according to the minimum energy consumption principle, the node with the least energy consumption is selected as the target processing node, i.e., Min{E_{n,c}, E_{n,mec}}.
Illustratively, when the resource processing capability of the MEC node is < 80%, if the third delay is between the first delay and the second delay, the service tolerance delay T_{n,t} can be compared with the waiting and processing delay T_mec_wait_com of the service at the MEC node. When T_{n,t} ≤ T_mec_wait_com: if T^{trans}_{n,m} + T^{comp}_{n,m} < T_{n,t} (with T^{trans}_{n,m} + T^{comp}_{n,m} ≤ T_mec_wait_com ≤ T^{trans}_{n,c} + T^{comp}_{n,c}), the fog node is selected as the target processing node; if T^{trans}_{n,c} + T^{comp}_{n,c} < T_{n,t} (with T^{trans}_{n,c} + T^{comp}_{n,c} ≤ T_mec_wait_com ≤ T^{trans}_{n,m} + T^{comp}_{n,m}), the cloud node is selected as the target processing node.
Illustratively, when the resource processing capability of the MEC node is < 80%, if the third delay is between the first delay and the second delay, the service tolerance delay T_{n,t} can be compared with the waiting and processing delay T_mec_wait_com of the service at the MEC node. When T_{n,t} ≤ T_mec_wait_com: if T_{n,t} ≤ T^{trans}_{n,m} + T^{comp}_{n,m} ≤ T_mec_wait_com ≤ T^{trans}_{n,c} + T^{comp}_{n,c}, or T_{n,t} ≤ T^{trans}_{n,c} + T^{comp}_{n,c} ≤ T_mec_wait_com ≤ T^{trans}_{n,m} + T^{comp}_{n,m}, it may be chosen to process the task on the MEC node or to forgo processing the task.
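The sub-cases above, where the third delay lies between the first and second delays, can be summarized in one sketch; this is one reading of the case analysis under the stated assumptions, not the application's literal procedure (t3 stands for T_mec_wait_com, and the caller supplies the energies):

```python
# Sketch: low-load case where t3 lies between t1 (fog) and t2 (cloud).
# One reading of the case analysis; labels and interface are illustrative.
def select_between(t1, t2, t3, t_tol, e_fog, e_cloud, e_mec):
    # Identify the offloading target faster than the MEC node.
    if t1 <= t3:
        name, t_fast, e_fast = "fog", t1, e_fog
    else:
        name, t_fast, e_fast = "cloud", t2, e_cloud
    if t3 < t_tol:
        # MEC itself meets the tolerance: pick the cheaper of MEC and
        # the faster offloading target (minimum energy principle).
        return name if e_fast < e_mec else "mec"
    if t_fast < t_tol:
        # Only the faster offloading target meets the tolerance.
        return name
    # Nothing meets the tolerance: keep the task on MEC (or forgo it).
    return "mec"
```

For example, with t1=1, t2=5, t3=2, t_tol=3 the fog node is faster than the MEC node, so the choice falls to whichever of the fog and MEC nodes consumes less energy.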
In summary, the task processing method provided by the application achieves the purpose of dynamically selecting the target processing node by judging the resource processing capability of the MEC node and combining the time delay, and can reduce the time delay of task processing; furthermore, energy consumption is introduced, and the aim of processing tasks with lower time delay and less energy consumption is achieved by comprehensively considering time delay and energy consumption.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 3 is a schematic structural diagram of a task processing device according to an embodiment of the present application. For ease of illustration, only portions relevant to the embodiments of the present application are shown. As shown in fig. 3, the task processing device 30 includes: a monitoring module 31, a determination module 32 and a processing module 33. Wherein,
the monitoring module 31 is configured to monitor whether the resource processing capability of the MEC node is greater than or equal to a preset ratio, to obtain a judgment result, where the resource processing capability represents the ratio of the resources already used on the MEC node to the usable resources of the MEC node.
The determining module 32 is configured to determine a target processing node according to the determination result, a first time delay and a second time delay, where the first time delay is a sum of a communication time delay and a computation time delay for offloading the task to the fog node, the second time delay is a sum of a communication time delay and a computation time delay for offloading the task to the cloud node, and the target processing node is the fog node, the cloud node, or the MEC node.
And the processing module 33 is configured to, in response to that the target processing node is not the MEC node, offload the task to the target processing node for processing.
In a possible implementation, the determining module 32 is specifically configured to: in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset ratio, determining the size relation among the service tolerance time delay, the first time delay and the second time delay corresponding to the task; in response to the first time delay and the second time delay being larger than the service tolerance time delay, determining that the target processing node is an MEC node; in response to that only one of the first time delay and the second time delay is smaller than the service tolerance time delay, determining that the target processing node is a node with the time delay smaller than the service tolerance time delay in the cloud node and the fog node; and determining that the target processing node is a node with smaller time delay in the cloud node and the fog node in response to the fact that the first time delay and the second time delay are both smaller than the service tolerance time delay.
In one possible implementation, the determining module 32 may be further configured to: responding to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and both the first time delay and the second time delay are smaller than the service tolerance time delay, and acquiring first energy consumption for unloading the task to the fog node and second energy consumption for unloading the task to the cloud node; in response to the first energy consumption being less than the second energy consumption, determining the target processing node as a fog node; in response to the first energy consumption being greater than the second energy consumption, determining the target processing node as a cloud node; and in response to the first energy consumption being equal to the second energy consumption, determining the target processing node to be a fog node or a cloud node.
In one possible implementation, the determining module 32 may be further configured to: and in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to the preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is a cloud node or a fog node according to the first energy consumption, the second energy consumption, the first time delay and the second time delay based on the balance requirements of the energy consumption and the time delay.
In one possible implementation, the determining module 32 may be further configured to: determining the magnitude relation among the third time delay, the first time delay and the second time delay of the MEC node processing task in response to the judgment result that the resource processing capacity of the MEC node is smaller than the preset ratio; determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the third time delay; and in response to at least one of the first time delay and the second time delay being less than or equal to the third time delay, determining a target processing node based on a minimum time delay principle according to the size relationship between the first time delay, the second time delay and the third time delay and the service tolerance time delay corresponding to the task.
In one possible implementation, the determining module 32 may be further configured to: and in response to the fact that the first time delay, the second time delay and the third time delay are all smaller than the service tolerance time delay, determining that the target processing node is a node with the minimum energy consumption in the MEC node, the cloud node and the fog node.
In one possible implementation, the determining module 32 may be further configured to: in response to the first delay and the second delay both being less than or equal to a third delay, and the third delay being greater than or equal to a traffic-tolerant delay, determining a target processing node according to: in response to that the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is the node with the minimum energy consumption in the cloud node and the fog node; responding to the fact that the service tolerance time delay is between the first time delay and the second time delay, and determining that the target processing node is a node with smaller time delay in the cloud node and the fog node; and determining the target processing node as an MEC node in response to the first time delay and the second time delay being larger than the service tolerance time delay.
In one possible embodiment, the third delay is between the first delay and the second delay. The determining module 32 is further configured to: in response to the third delay being smaller than the service tolerance delay, determine the target processing node as the node with the lower energy consumption among the MEC node and the node whose corresponding delay is smaller than the third delay; in response to the third delay being greater than or equal to the service tolerance delay, determine the target processing node as the node whose corresponding delay is smaller than the third delay; and in response to the delay that is smaller than the third delay also being greater than or equal to the service tolerance delay, determine the target processing node as the MEC node.
The task processing device provided in the embodiment of the present application has similar implementation principles and technical effects to those of the embodiments described above, and reference may be made to the embodiments described above specifically, which are not repeated herein.
Fig. 4 is a schematic structural diagram of a task processing device according to an embodiment of the present application. As shown in fig. 4, the task processing device 40 includes: at least one processor 410, a memory 420, a communication interface 430, and a system bus 440. The memory 420 and the communication interface 430 are connected to the processor 410 through the system bus 440 and complete mutual communication, the memory 420 is used for storing instructions, the communication interface 430 is used for communicating with other devices, and the processor 410 is used for calling the instructions in the memory to execute the scheme of the task processing method embodiment.
The Processor 410 mentioned in fig. 4 may be a general-purpose Processor, including a central processing unit, a Network Processor (NP), etc.; a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The Memory 420 may include a Random Access Memory (RAM), a Static Random Access Memory (SRAM), an electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk, such as at least one disk Memory.
The communication interface 430 is used to enable communication between the task processing device and other devices (e.g., clients).
The system bus 440 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus 440 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Those skilled in the art will appreciate that the task processing device illustrated in fig. 4 does not constitute a limitation of the task processing device and may include more or fewer components than those illustrated, or some of the components may be combined, or a different arrangement of components.
The embodiment of the present application also provides a computer-readable storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed, the task processing method described above is implemented.
The embodiment of the present application also provides a computer program product, which includes a computer program; when the computer program is executed, the task processing method described above is implemented.
The embodiment of the present application further provides a chip for executing instructions, where the chip is configured to perform the task processing method in any one of the above method embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. A task processing method, comprising:
monitoring whether a resource processing capacity of a multi-access edge computing (MEC) node is greater than or equal to a preset occupation ratio to obtain a judgment result, wherein the resource processing capacity represents the ratio of the resources used on the MEC node to the resources available on the MEC node;
determining a target processing node according to the judgment result, a first time delay and a second time delay, wherein the first time delay is the sum of the communication time delay and the computation time delay of offloading a task to a fog node, the second time delay is the sum of the communication time delay and the computation time delay of offloading the task to a cloud node, and the target processing node is the fog node, the cloud node, or the MEC node;
in response to the target processing node not being the MEC node, offloading the task to the target processing node for processing.
2. The task processing method according to claim 1, wherein the determining a target processing node according to the determination result, the first delay, and the second delay includes:
in response to the judgment result being that the resource processing capacity of the MEC node is greater than or equal to the preset occupation ratio, determining a magnitude relationship among a service tolerance time delay corresponding to the task, the first time delay, and the second time delay;
in response to the first time delay and the second time delay both being greater than the service tolerance time delay, determining the target processing node to be the MEC node;
in response to only one of the first time delay and the second time delay being less than the service tolerance time delay, determining the target processing node to be the node, of the cloud node and the fog node, whose time delay is less than the service tolerance time delay;
and in response to the first time delay and the second time delay both being less than the service tolerance time delay, determining the target processing node to be the node with the smaller time delay of the cloud node and the fog node.
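The branching in claim 2, applied when the MEC node is at or above the preset occupation ratio, can be sketched as follows (an illustrative Python fragment; the function and node names are hypothetical and not part of the claim):

```python
def choose_when_overloaded(t_fog, t_cloud, tol):
    """Pick an offload target when the MEC node's resources are exhausted.

    t_fog / t_cloud: the first and second time delays; tol: service tolerance delay.
    """
    if t_fog >= tol and t_cloud >= tol:
        return "mec"            # neither target meets the tolerance: process locally
    if t_fog < tol and t_cloud < tol:
        # Both targets qualify: the node with the smaller delay wins.
        return "fog" if t_fog < t_cloud else "cloud"
    # Exactly one target qualifies.
    return "fog" if t_fog < tol else "cloud"
```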
3. The task processing method according to claim 2, further comprising:
in response to the judgment result that the resource processing capacity of the MEC node is greater than or equal to a preset occupation ratio and the first time delay and the second time delay are both less than the service tolerance time delay, acquiring first energy consumption for unloading the task to the fog node and second energy consumption for unloading the task to the cloud node;
in response to the first energy consumption being less than the second energy consumption, determining the target processing node to be the fog node;
in response to the first energy consumption being greater than the second energy consumption, determining the target processing node as the cloud node;
in response to the first energy consumption being equal to the second energy consumption, determining the target processing node to be the fog node or the cloud node.
4. The task processing method according to claim 3, further comprising:
and in response to the judgment result that the resource processing capacity of the MEC node is larger than or equal to a preset occupation ratio and the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is the cloud node or the fog node according to the first energy consumption, the second energy consumption, the first time delay and the second time delay based on the balance requirements of energy consumption and time delay.
5. The task processing method according to any one of claims 1 to 4, wherein the determining a target processing node according to the determination result, the first delay, and the second delay includes:
in response to the judgment result being that the resource processing capacity of the MEC node is smaller than the preset occupation ratio, determining a magnitude relationship among a third time delay for the MEC node to process the task, the first time delay, and the second time delay;
in response to the first time delay and the second time delay both being greater than the third time delay, determining the target processing node to be the MEC node;
and in response to that at least one of the first delay and the second delay is less than or equal to the third delay, determining the target processing node according to the size relationship between the first delay, the second delay, the third delay and the service tolerance delay corresponding to the task based on a minimum delay principle.
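The decision of claims 5 and 6, for the case where the MEC node still has spare capacity, can be sketched as follows (illustrative Python only; the `energy` mapping and the fallback ordering are assumptions, not claim language):

```python
def choose_when_underloaded(t_fog, t_cloud, t_mec, tol, energy):
    """Decision sketch when the MEC node's resource usage is below the threshold.

    energy: hypothetical map of node name -> estimated energy consumption.
    """
    if t_fog > t_mec and t_cloud > t_mec:
        return "mec"            # local processing is the fastest option
    if t_fog < tol and t_cloud < tol and t_mec < tol:
        # Every option meets the tolerance: minimise energy consumption (claim 6).
        return min(("mec", "fog", "cloud"), key=lambda n: energy[n])
    # Otherwise fall back to the minimum-delay principle.
    delays = {"fog": t_fog, "cloud": t_cloud, "mec": t_mec}
    return min(delays, key=delays.get)
```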
6. The task processing method according to claim 5, wherein the determining the target processing node according to the relationship among the first delay, the second delay, the third delay, and the service tolerance delay corresponding to the task based on the minimum delay principle includes:
and in response to that the first time delay, the second time delay and the third time delay are all smaller than the service tolerance time delay, determining that the target processing node is the node with the minimum energy consumption in the MEC node, the cloud node and the fog node.
7. The task processing method according to claim 5, wherein the determining the target processing node based on the minimum delay rule according to a size relationship between the first delay, the second delay, the third delay, and a service tolerance delay corresponding to the task further comprises:
in response to the first time delay and the second time delay both being less than or equal to the third time delay, and the third time delay being greater than or equal to the service tolerance time delay, determining the target processing node as follows:
in response to that the first time delay and the second time delay are both smaller than the service tolerance time delay, determining that the target processing node is the node with the minimum energy consumption in the cloud node and the fog node;
in response to the service tolerance time delay being between the first time delay and the second time delay, determining the target processing node to be the node with the smaller time delay of the cloud node and the fog node;
and in response to the first time delay and the second time delay both being greater than the service tolerance time delay, determining the target processing node to be the MEC node.
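The three sub-cases of claim 7 can be sketched in Python (illustrative only; the `energy` mapping is a hypothetical input, and the names are not claim language):

```python
def choose_claim7_case(t_fog, t_cloud, tol, energy):
    """Sketch for the case where both offload delays are <= the MEC delay
    and the MEC delay itself misses the service tolerance delay.

    energy: hypothetical map of node name -> estimated energy consumption.
    """
    if t_fog < tol and t_cloud < tol:
        # Both targets meet the tolerance: pick the lower-energy one.
        return min(("fog", "cloud"), key=lambda n: energy[n])
    if t_fog > tol and t_cloud > tol:
        return "mec"            # nothing meets the tolerance: keep the task local
    # The tolerance lies between the two delays: pick the smaller-delay node.
    return "fog" if t_fog < t_cloud else "cloud"
```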
8. The task processing method according to claim 5, wherein the third delay is between the first delay and the second delay, and the determining the target processing node is based on a minimum delay rule according to a size relationship between the first delay, the second delay, the third delay, and a service tolerance delay corresponding to the task, further comprises:
in response to the third time delay being smaller than the service tolerance time delay, determining the target processing node to be the node with lower energy consumption between the MEC node and the node whose corresponding time delay is smaller than the third time delay;
in response to the third time delay being greater than or equal to the service tolerance time delay, determining the target processing node to be the node whose corresponding time delay is smaller than the third time delay;
and in response to the time delay that is smaller than the third time delay being greater than the service tolerance time delay, determining the target processing node to be the MEC node.
9. A task processing apparatus, characterized by comprising:
a monitoring module, configured to monitor whether the resource processing capacity of a multi-access edge computing (MEC) node is greater than or equal to a preset occupation ratio to obtain a judgment result, wherein the resource processing capacity represents the ratio of the resources used on the MEC node to the resources available on the MEC node;
a determining module, configured to determine a target processing node according to the determination result, a first time delay and a second time delay, where the first time delay is a sum of a communication time delay and a computation time delay for offloading the task to the fog node, the second time delay is a sum of a communication time delay and a computation time delay for offloading the task to the cloud node, and the target processing node is the fog node or the cloud node or the MEC node;
and the processing module is used for unloading the task to the target processing node for processing in response to the target processing node not being the MEC node.
10. A task processing device characterized by comprising: a memory and a processor;
the memory is to store program instructions;
the processor is configured to call the program instructions in the memory to perform the task processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed, implement the task processing method of any one of claims 1 to 8.
CN202211459678.7A 2022-11-21 2022-11-21 Task processing method, device, equipment and storage medium Pending CN115835306A (en)


Publications (1)

Publication Number Publication Date
CN115835306A true CN115835306A (en) 2023-03-21



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination