WO2024098872A1 - Task processing method, system, apparatus and device, and computer-readable storage medium


Info

Publication number
WO2024098872A1
Authority
WO
WIPO (PCT)
Prior art keywords
task
resource
computing node
target
strategy
Prior art date
Application number
PCT/CN2023/113135
Other languages
French (fr)
Chinese (zh)
Inventor
张亚强
李茹杨
邓琪
李雪雷
赵雅倩
李仁刚
Original Assignee
山东海量信息技术研究院
Application filed by 山东海量信息技术研究院
Publication of WO2024098872A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present application relates to the technical field of resource scheduling, and in particular to a task processing method, system, apparatus and device, and a computer-readable storage medium.
  • Edge computing, as an emerging computing paradigm in recent years, effectively reduces network latency by providing computing power, storage, caching and other services to end users at the edge of mobile networks. Because edge computing nodes are varied, such as servers in operator base stations and personal computers in smart homes, the computing resource nodes in edge networks are heterogeneous, and the performance of each node varies greatly.
  • Heterogeneous computing refers to computing systems composed of computing units that use different types of instruction sets and architectures.
  • The computing resources in heterogeneous computing include the CPU (Central Processing Unit), GPU (Graphics Processing Unit), FPGA (Field Programmable Gate Array) and ASIC (Application Specific Integrated Circuit). These computing resources have distinct functional characteristics, differ in computing power and power consumption, and are suited to different types of computing tasks.
  • In edge computing, different edge nodes are heterogeneous, and the computing resources inside these heterogeneous edge nodes are often themselves heterogeneous: a single node may contain multiple heterogeneous computing resources at the same time.
  • Existing research and methods for resource allocation in edge networks mainly use resource utilization efficiency and user quality of experience as the main indicators for evaluating resource allocation strategies, and then determine the optimal allocation.
  • Their main shortcoming is that they do not take into account the benefit requirements of the computing node owners in edge networks.
  • Each edge node usually belongs to a different owner, whose main purpose in sharing its computing resources is to use idle computing resources to obtain corresponding benefits and achieve a win-win situation.
  • Existing work pays little attention to the resource allocation problem of edge nodes that contain multiple heterogeneous computing resources, and research on collaborative processing of computing tasks across heterogeneous resources remains insufficient.
  • The purpose of this application is to provide a task processing method that can allocate resources for computing tasks more reasonably and achieve a win-win situation for task requesters and resource providers; further purposes of this application are to provide another task processing method, a task processing system, apparatus and device, and a computer-readable storage medium, all of which have the above-mentioned beneficial effects.
  • the present application provides a task processing method, which is applied to computing nodes in an edge network.
  • the node is provided with one or more heterogeneous resources, and the method includes:
  • receiving a resource allocation strategy issued by the edge network; the resource allocation strategy includes the processing amount of the target task allocated to each heterogeneous resource;
  • the resource allocation strategy and local resource information are input into the preset strategy learning model for processing to obtain the resource sharing strategy;
  • the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target; and the target task is executed by using the resource sharing strategy;
  • the method further includes: obtaining an execution result of the target task, and reporting the execution result to the edge network.
  • heterogeneous resources include CPU resources, GPU resources, and FPGA resources.
  • the task processing method further includes:
  • monitoring local resource information in real time; when the local resource information meets the preset conditions, publishing the local resource information to the edge network.
  • publishing the local resource information to the edge network includes:
  • determining available resources according to the local resource information; when the resource ratio of the available resources reaches a preset threshold, publishing the local resource information to the edge network.
  • the local resource information includes available resources and resource provision time, and the available resources include currently available resources of various heterogeneous resources.
  • the present application provides another task processing method, which is applied to an edge network, where each computing node in the edge network is provided with one or more heterogeneous resources, and the method includes:
  • the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task
  • the resource allocation strategy is sent to each target computing node so that each target computing node uses a preset strategy learning model to process the resource allocation strategy and local resource information, obtains a resource sharing strategy, and uses the resource sharing strategy to execute the target task;
  • the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • determining the target computing node according to the task information of the target task and the local resource information of each computing node includes:
  • when both the resource provision time and the available resources indicated by the local resource information meet the task requirements indicated by the task information, the computing node is determined to be the target computing node.
  • before determining the target computing node and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node, the method further includes:
  • the local resource information published by each computing node is counted; wherein the local resource information is published to the edge network by the computing node when the local resource information meets the preset conditions.
  • the task processing method further includes: obtaining the execution result of the target task uploaded by the target computing node, and feeding back the execution result to the initiator of the target task.
  • the present application further discloses a task processing system, including an edge network and computing nodes deployed in the edge network, each computing node is provided with one or more heterogeneous resources, wherein:
  • the edge network is used to determine the target computing node and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node, and send the resource allocation strategy to each target computing node;
  • the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
  • the target computing node is used to process the resource allocation strategy and local resource information using the preset strategy learning model, obtain the resource sharing strategy, and use the resource sharing strategy to execute the target task;
  • the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • the present application further discloses a task processing device, which is applied to a computing node in an edge network, each computing node is provided with one or more heterogeneous resources, and the device includes:
  • a receiving module is used to receive a resource allocation strategy issued by an edge network; the resource allocation strategy includes the processing capacity of each heterogeneous resource allocated to a target task;
  • a processing module is used to input the resource allocation strategy and local resource information into a preset strategy learning model for processing to obtain a resource sharing strategy;
  • the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
  • the execution module is used to execute the target task using the resource sharing strategy.
  • the present application further discloses another task processing device, which is applied to an edge network, wherein each computing node in the edge network is provided with one or more heterogeneous resources, and the device comprises:
  • a determination module used to determine the target computing node and resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
  • the sending module is used to send the resource allocation strategy to each target computing node, so that each target computing node uses the preset strategy learning model to process the resource allocation strategy and local resource information, obtains the resource sharing strategy, and uses the resource sharing strategy to execute the target task;
  • the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • the present application further discloses a task processing device, including:
  • a memory for storing a computer program; and a processor for implementing the steps of any of the above task processing methods when executing the computer program.
  • the present application further discloses a non-volatile computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of any of the above task processing methods are implemented.
  • a policy learning model is pre-deployed in each computing node of the edge network, and the resource sharing strategy is determined based on the preset policy learning model, so as to realize task processing by using the resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, task processing based on the obtained resource sharing strategy can maximize the task processing value of the computing node and ensure that the task is processed, thereby achieving a win-win situation for the task requester and the resource provider.
  • the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy based on the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into consideration, thereby further realizing the reasonable allocation of heterogeneous resources within the computing node.
  • FIG. 1 is a schematic diagram of the structure of a task processing system provided in some embodiments of the present application.
  • FIG. 2 is a flowchart of a task processing method provided in some embodiments of the present application.
  • FIG. 3 is a flowchart of another task processing method provided in some embodiments of the present application.
  • FIG. 4 is a flowchart of yet another task processing method provided in some embodiments of the present application.
  • FIG. 5 is a schematic diagram of the structure of a task processing device provided in some embodiments of the present application.
  • FIG. 6 is a schematic diagram of the structure of another task processing device provided in some embodiments of the present application.
  • FIG. 7 is a schematic diagram of the structure of a task processing device provided in some embodiments of the present application.
  • FIG. 8 is a schematic diagram of the structure of a non-volatile computer-readable storage medium provided in some embodiments of the present application.
  • FIG. 1 is a schematic diagram of the structure of a task processing system provided in the present application. The task processing system includes an edge network 100 and various computing nodes 200 deployed in the edge network, wherein the edge network 100 is used to allocate resources for computing tasks to appropriate computing nodes 200, so as to realize task processing based on the computing resources in the computing nodes 200.
  • edge computing is mainly aimed at the massive terminal users in the era of the Internet of Everything. Terminal users and their devices generate highly concurrent computing task requests. Since the capabilities of terminal user devices are extremely limited, by utilizing the computing power resources and network bandwidth resources in the edge computing network, users can reasonably offload computing tasks to the edge, and the computing nodes in the edge network will allocate corresponding computing resources to process user requests and complete task processing.
  • the present application provides a task processing method in some embodiments.
  • FIG. 2 is a flowchart of a task processing method provided in some embodiments of the present application.
  • the task processing method can be applied to computing nodes in an edge network.
  • Each computing node is provided with one or more heterogeneous resources. The method includes the following S101 to S103.
  • Each computing node in the edge network is distributed across geographic space, and its resource providers come from a variety of sources.
  • The type of computing node is not unique:
  • it can be provided by an infrastructure service provider such as a telecom operator, by a cloud computing service provider, or by an individual user connected to the Internet.
  • Each computing node contains one or more heterogeneous resources, such as CPU, GPU, FPGA and ASIC.
  • These heterogeneous resources have distinct functional characteristics, differ in computing power, power consumption and other respects, and can adapt to different types of computing tasks. For example, the CPU is good at processing tasks with a sequential logic structure, the GPU has strong processing ability for parallel tasks, and the FPGA performs well on real-time computing tasks in some special scenarios. Therefore, when facing different computing tasks, the processing capabilities of heterogeneous resources also differ.
  • the heterogeneous resources in each computing node in the edge network may include CPU resources, GPU resources, and FPGA resources.
  • S101 receiving a resource allocation strategy issued by an edge network;
  • the resource allocation strategy includes a processing amount of a target task allocated to each heterogeneous resource;
  • This step acquires the resource allocation strategy, that is, receives the resource allocation strategy issued by the edge network.
  • The resource allocation strategy is a strategy for the target task. It can be constructed by the edge network by combining the task requirements of the target task with the local resource information of each computing node in the edge network, and is used to allocate reasonable computing resources to the target task so as to facilitate its processing. The target task is the computing task initiated by the user end that needs to be processed.
  • the resource allocation strategy is equivalent to the initial allocation strategy, which includes the task volume of the target task allocated to each heterogeneous resource in the edge network.
  • each computing node contains one or more heterogeneous resources
  • the processing volume of the target task allocated to each heterogeneous resource is the task volume of the subtasks of the target task allocated to each heterogeneous resource.
  • the task volume can be the proportion of the subtask to the target task, or the number of subtasks. Obviously, when the task volume is the proportion of subtasks, the sum of the proportions of subtasks of all heterogeneous resources is 1; when the task volume is the number of subtasks, the sum of the number of subtasks of all heterogeneous resources is the total task volume of the target task.
  • the computing nodes that receive the resource allocation strategy issued by the edge network are not unique. They can be computing nodes selected by the edge network based on a certain screening strategy. Therefore, in the edge network, the computing nodes that receive the resource allocation strategy may be one or more, and may be some computing nodes in the edge network or all computing nodes in the edge network. Of course, when all computing nodes in the edge network are not selected, it means that there are no computing nodes in the current edge network that meet the processing requirements of the target task, and the processing can be delayed for a period of time. On this basis, it can be imagined that the heterogeneous resources mentioned in the resource allocation strategy refer to the heterogeneous resources of each selected computing node.
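  • As an illustrative sketch (not part of the disclosed embodiments), the resource allocation strategy can be viewed as a mapping from each selected computing node's heterogeneous resources to the share of the target task assigned to them; the node names, resource keys and numbers below are assumptions used only for illustration.

```python
# Hypothetical resource allocation strategy for one target task: for each selected
# computing node, the fraction of the task assigned to each heterogeneous resource.
# Names and values are illustrative assumptions, not data from the application.
allocation_strategy = {
    "node_a": {"CPU": 0.20, "GPU": 0.35, "FPGA": 0.10},
    "node_b": {"CPU": 0.15, "GPU": 0.20},
}

def validate_proportions(strategy, tol=1e-9):
    """When task volume is expressed as proportions, the shares over all
    heterogeneous resources of all selected nodes must sum to 1."""
    total = sum(share for node in strategy.values() for share in node.values())
    return abs(total - 1.0) <= tol

assert validate_proportions(allocation_strategy)
```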
  • S102 Input the resource allocation strategy and local resource information into a preset strategy learning model for processing to obtain a resource sharing strategy;
  • the resource sharing strategy includes available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
  • This step computes the resource sharing strategy based on the preset strategy learning model.
  • In each computing node, a strategy learning model suited to the node is pre-created.
  • the input of the model is the resource allocation strategy issued by the edge network and the resource information of the computing node itself (that is, the local resource information mentioned above).
  • the local resource information mainly refers to the available resources that the computing node itself is expected to provide.
  • the output of the model is the resource sharing strategy.
  • the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • The task processing value refers to the benefit obtained by the computing node when performing task processing; that is, taking the benefit demand of the computing node into consideration, a preset strategy learning model is constructed that can maximize the task processing value of the computing node.
  • the resource sharing strategy learned based on the preset strategy learning model can maximize the task processing profit of computing nodes, so as to achieve a win-win situation for task requesters and resource providers.
  • the resource sharing strategy output by the preset strategy learning model is the final allocation strategy, which includes the available resources of various heterogeneous resources within the current computing node.
  • The available resources here refer to the resources provided for processing the current target task, which can be the ratio of the resources that each heterogeneous resource provides for processing the current target task to the available resources it actually provides.
  • each computing node that receives the resource allocation strategy will output a resource sharing strategy suitable for itself, and each resource sharing strategy clearly indicates the computing resources provided by the heterogeneous resources within its own computing node for the target task. Therefore, the processing of the target task can be achieved based on the computing resources actually provided by the heterogeneous resources in the selected computing nodes.
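  • The following sketch illustrates, under assumptions, how such a node-side strategy learning model could be invoked: the allocated task shares and the node's local resource information go in, and per-resource share ratios come out. The function and field names are hypothetical, and the model internals are not specified by the application.

```python
# Minimal sketch of node-side inference with a pre-trained strategy learning model.
# `policy_model` is assumed to be any callable returning share ratios in [0, 1]
# for each heterogeneous resource type; it stands in for the preset model.
def decide_sharing(policy_model, allocation_for_this_node, local_resources):
    """allocation_for_this_node: {"CPU": 0.2, "GPU": 0.35, ...} task shares.
    local_resources: {"CPU": 8.0, "GPU": 2.0, ...} resources the node expects to provide.
    Returns the resource sharing strategy: the fraction of each locally available
    resource actually offered for this target task."""
    features = [
        (allocation_for_this_node.get(k, 0.0), local_resources.get(k, 0.0))
        for k in ("CPU", "GPU", "FPGA")
    ]
    share_ratios = policy_model(features)   # e.g. {"CPU": 0.5, "GPU": 0.8, "FPGA": 0.0}
    return {k: min(max(v, 0.0), 1.0) for k, v in share_ratios.items()}
```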
  • S103: Execute the target task by using the resource sharing strategy. This step realizes the execution of the target task, that is, the target task is executed based on the resource sharing strategy.
  • the execution of the target task here can be to execute the target task by utilizing the available resources of the heterogeneous resources indicated in the resource sharing strategy.
  • the "sharing" in the resource sharing strategy means that the various heterogeneous resources in the various computing nodes that execute the target task can share information during the execution of the target task, thereby ensuring that the target task can be completed and the accuracy of the target task execution result is guaranteed.
  • the task processing method pre-deploys a policy learning model in each computing node of the edge network, and determines the resource sharing strategy based on the preset policy learning model, thereby utilizing the resource sharing strategy to implement task processing. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, task processing based on the resource sharing strategy can maximize the task processing value of the computing node and ensure that the task is processed, thereby achieving a win-win situation for the task requester and the resource provider.
  • the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy based on the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, further realizing the reasonable allocation of heterogeneous resources within the computing node.
  • the task processing method provided in some embodiments of the present application can further implement an execution result feedback function.
  • the execution result of the target task can also be counted so as to report the execution result to the edge network, thereby realizing the feedback of the execution result.
  • the edge network can also feed back the execution result to the initiator of the target task.
  • the task processing method may further include the following steps:
  • the local resource information meets the preset conditions, the local resource information is published to the edge network.
  • The resource allocation strategy can be constructed by the edge network by combining the task requirements of the target task with the local resource information of each computing node in the edge network. Therefore, before constructing the resource allocation strategy, the edge network needs to first obtain the local resource information of each computing node.
  • the local resource information is published to the edge network by the corresponding computing node, and each computing node publishes the local resource information to the edge network when the local resource information meets the preset conditions.
  • If the local resource information does not meet the preset conditions, the local resource information will not be published. Therefore, each computing node can monitor its own local resource information in real time and publish it to the edge network when the local resource information meets the preset conditions.
  • the preset conditions are the publishing conditions pre-set by the owner of the computing node according to its actual needs, and are not unique. This application does not limit this.
  • publishing the local resource information to the edge network includes the following steps:
  • the local resource information is published to the edge network.
  • the present application provides a type of preset condition, namely, the resource ratio of available resources in the computing node reaches a preset threshold, wherein available resources refer to idle resources in the computing node, and resource ratio refers to the proportion of idle resources to total resources.
  • available resources can be further determined based on the local resource information, and the resource ratio of available resources can be counted.
  • When the resource ratio reaches the preset threshold, the node can provide computing resources for the computing tasks of other terminals, so the local resource information can be published; conversely, when the resource ratio does not reach the preset threshold, the node cannot provide computing resources for the computing tasks of other terminals, so the local resource information is not published.
  • the value of the preset threshold does not affect the implementation of this technical solution, and it can be set by the owner of the computing node according to actual needs, and this application does not limit this.
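  • A minimal sketch of this publishing condition is given below, assuming the idle and total amounts are known per heterogeneous resource type; the threshold value and the aggregation over resource types are assumptions for illustration.

```python
# Hypothetical check of the publishing condition: the proportion of idle resources
# to total resources must reach a preset threshold before the local resource
# information is published to the edge network.
def should_publish(idle, total, threshold=0.5):
    """idle/total: {"CPU": ..., "GPU": ..., "FPGA": ...} in comparable units."""
    idle_sum = sum(idle.values())
    total_sum = sum(total.values())
    return total_sum > 0 and (idle_sum / total_sum) >= threshold

print(should_publish({"CPU": 6, "GPU": 1, "FPGA": 2}, {"CPU": 8, "GPU": 4, "FPGA": 4}))  # True
```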
  • The local resource information may include available resources and a resource provision time, and the available resources may include the currently available resources of the various heterogeneous resources.
  • the present application provides a type of local resource information in some embodiments.
  • the available resources of the computing node need to meet the actual needs of the target task;
  • the willingness of the owner of the computing node to share resources sometimes fluctuates over time. For example, when a computing node is relatively idle, the owner of the computing node is more inclined to share its idle resources, serve other users, and obtain certain rewards or benefits. Therefore, the resource provision time of the computing node should also meet the actual needs of the target task.
  • local resource information can include its own available resources and resource provision time, among which the available resources include the current available resources of various heterogeneous resources within itself.
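  • A possible shape for such local resource information is sketched below; the field names and the way the provision time is expressed are assumptions, since the application does not fix a concrete encoding.

```python
# Hypothetical structure of the local resource information a computing node
# publishes: currently available heterogeneous resources plus the time window
# during which the owner is willing to provide them.
from dataclasses import dataclass, field

@dataclass
class LocalResourceInfo:
    available: dict = field(default_factory=dict)  # e.g. {"CPU": 6.0, "GPU": 1.0, "FPGA": 2.0}
    provision_start: float = 0.0                   # start of the sharing window (epoch seconds)
    provision_end: float = 0.0                     # end of the sharing window (epoch seconds)

info = LocalResourceInfo({"CPU": 6.0, "GPU": 1.0, "FPGA": 2.0}, 1_700_000_000.0, 1_700_003_600.0)
```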
  • the present application provides another task processing method in some embodiments.
  • FIG 3 is a flow chart of another task processing method provided in the present application.
  • The task processing method can be applied to an edge network, and each computing node in the edge network is provided with one or more heterogeneous resources. The method includes the following S201 and S202.
  • the task processing method provided in some embodiments of the present application is applied to an edge network, that is, the implementation process of S201 and S202 is executed by the edge network.
  • S201 Determine a target computing node and a resource allocation strategy according to task information of the target task and local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
  • This step can determine the target computing node and resource allocation strategy.
  • the target computing node is the computing node selected from the edge network to implement the target task processing
  • the resource allocation strategy is the strategy for the target task. It is used to allocate reasonable computing resources to the target task in order to facilitate the processing of the target task.
  • the target task is the computing task initiated by the user end that needs to be processed.
  • When the edge network receives the target task to be processed, it can first collect the task information of the target task, including but not limited to the task type, task objective, computing resource demand and other information; it then collects the local resource information of each computing node in the edge network, which may include but is not limited to the available resources that the computing node expects to provide, the time limit for providing those resources and other information; finally, it combines the task information of the target task with the local resource information of each computing node to screen the target computing nodes out of a large number of computing nodes and to build the resource allocation strategy.
  • the resource allocation strategy is equivalent to the initial allocation strategy, which includes the task volume of the target task allocated to each heterogeneous resource in the edge network.
  • each computing node contains one or more heterogeneous resources, wherein the processing volume of the target task allocated to each heterogeneous resource is the task volume of the subtasks of the target task allocated to each heterogeneous resource, and the task volume can be the proportion of the subtask to the target task, or the number of subtasks.
  • the task volume is the proportion of subtasks
  • the sum of the proportions of subtasks of all heterogeneous resources is 1;
  • the task volume is the number of subtasks, the sum of the number of subtasks of all heterogeneous resources is the total task volume of the target task.
  • the number of target computing nodes is not unique. It may be one or more. It may be some computing nodes in the edge network or all computing nodes in the edge network. Of course, if no target computing node is determined, it means that there is no computing node in the current edge network that meets the processing requirements of the target task, and the processing can be delayed for a period of time. On this basis, it can be imagined that the heterogeneous resources mentioned in the resource allocation strategy refer to the heterogeneous resources of each target computing node that has been screened and determined.
  • S202 Send the resource allocation strategy to each target computing node, so that each target computing node uses a preset strategy learning model to process the resource allocation strategy and local resource information, obtains a resource sharing strategy, and uses the resource sharing strategy to execute the target task;
  • the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • This step can implement the distribution of resource allocation strategies, that is, the resource allocation strategies are distributed to each target computing node, and each target computing node implements the execution of the target task based on the resource allocation strategy.
  • the implementation process of each target computing node executing the target task is as follows:
  • In each target computing node, a policy learning model adapted to the node is pre-created.
  • the input of the model is the resource allocation strategy issued by the edge network and the resource information of the target computing node itself (i.e., the local resource information mentioned above).
  • the local resource information mainly refers to the available resources that the target computing node itself is expected to provide.
  • the output of the model is the resource sharing strategy.
  • the preset policy learning model is obtained by training with the task processing value of the computing node as the optimization goal.
  • the task processing value refers to the benefit obtained by the computing node when performing task processing, that is, taking the benefit demand of the computing node into consideration, and constructing a preset policy learning model that can maximize the task processing value of the computing node.
  • the resource sharing strategy learned based on the preset policy learning model can maximize the task processing benefit of the computing node, so as to achieve a win-win situation for the task requester and the resource provider.
  • the resource sharing strategy output by the preset strategy learning model is the final allocation strategy, which includes the available resources of various heterogeneous resources within the current target computing node.
  • the available resources here refer to the resources provided for processing the current target task, which can be the ratio of the resources provided by each heterogeneous resource for processing the current target task to the available resources actually provided by itself.
  • Each target computing node will output a resource sharing strategy suited to itself, and each resource sharing strategy clearly indicates the resources that the heterogeneous resources within that computing node provide for the target task.
  • Therefore, the processing of the target task can be realized based on the computing resources actually provided by the heterogeneous resources in each target computing node.
  • the target task can be executed based on the resource sharing strategy.
  • the target task here can be executed by using the available resources of the heterogeneous resources indicated in the resource sharing strategy. It can be understood that since the target task may be jointly executed by the heterogeneous resources in multiple target computing nodes, the "sharing" in the resource sharing strategy means that the heterogeneous resources in the target computing nodes that execute the target task can share information during the execution of the target task, thereby ensuring that the target task can be completed and the accuracy of the target task execution result is guaranteed.
  • the task processing method pre-deploys a policy learning model in each computing node of the edge network, and determines the resource sharing strategy based on the preset policy learning model, thereby utilizing the resource sharing strategy to implement task processing. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, task processing based on the resource sharing strategy can maximize the task processing value of the computing node and ensure that the task is processed, thereby achieving a win-win situation for the task requester and the resource provider.
  • the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy based on the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, further realizing the reasonable allocation of heterogeneous resources within the computing node.
  • the above-mentioned determination of the target computing node according to the task information of the target task and the local resource information of each computing node may include the following steps:
  • when both the resource provision time and the available resources indicated by the local resource information meet the task requirements indicated by the task information, the computing node is determined to be the target computing node.
  • The present application provides, in some embodiments, a method for screening and determining target computing nodes.
  • the available resources of the target computing node need to meet the actual needs of the target task;
  • the willingness of the owner of the computing node to share resources sometimes fluctuates over time. For example, when a computing node is relatively idle, the owner of the computing node is more inclined to share its idle resources, serve other users, and obtain certain rewards or benefits. Therefore, the resource provision time of the target computing node should also meet the actual needs of the target task.
  • the resource provision time and available resources of the corresponding computing node can be determined according to the local resource information, and then it can be determined whether the two meet the task requirements indicated in the task information of the target task, and the current computing node is confirmed as the target computing node when the task requirements are met. If any type of information does not meet the task requirements, it will not be confirmed as the target computing node.
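  • The screening rule can be sketched as follows, assuming the task information carries a per-type resource demand and a completion deadline; the comparison details and field names are assumptions for illustration.

```python
# Hypothetical screening of target computing nodes: a node qualifies when its
# sharing window covers the task's completion deadline and its available
# resources cover the task's per-type resource demand.
def is_target_node(local_info, task_info):
    """local_info: {"available": {...}, "start": t0, "end": t1}
    task_info:  {"demand": {...}, "deadline": t_deadline}"""
    window_ok = local_info["start"] <= task_info["deadline"] <= local_info["end"]
    resources_ok = all(
        local_info["available"].get(k, 0.0) >= need
        for k, need in task_info["demand"].items()
    )
    return window_ok and resources_ok

node = {"available": {"CPU": 6.0, "GPU": 1.0}, "start": 0.0, "end": 3600.0}
task = {"demand": {"CPU": 4.0, "GPU": 0.5}, "deadline": 1800.0}
print(is_target_node(node, task))  # True
```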
  • the following steps may also be included:
  • the local resource information published by each computing node is counted; wherein the local resource information is published to the edge network by the computing node when the local resource information meets the preset conditions.
  • the resource allocation strategy is constructed by the edge network based on the task information of the target task and the local resource information of each computing node. Therefore, before constructing the resource allocation strategy, the edge network needs to obtain the local resource information of each computing node. Among them, the local resource information is published to the edge network by the corresponding computing node. Therefore, when the edge network receives the target task, it can directly capture the local resource information published by each computing node.
  • each computing node publishes local resource information to the edge network only when the local resource information meets the preset conditions. If the local resource information does not meet its own preset conditions, the local resource information will not be published.
  • The preset conditions are publishing conditions set in advance by the owner of the computing node according to its actual needs, and are not unique. For example, the condition may be that the idle resource ratio reaches a preset threshold and that the current time falls within the preset resource sharing period.
  • the task processing method may further include the following steps:
  • the task processing method provided in some embodiments of the present application can further implement the execution result feedback function, and the execution result is the execution result of the target task.
  • After completing the target task, each target computing node can further report the execution result to the edge network; after receiving the execution results uploaded by the target computing nodes, the edge network can feed back the final execution result to the initiator of the target task.
  • the present application provides yet another task processing method in some embodiments.
  • FIG. 4 is a flowchart of another task processing method provided by the present application.
  • the task processing method may include the following steps:
  • Each distributed edge computing node actively and timely publishes resource status tags to the edge network based on its own operating status and available resources;
  • the edge network's task scheduling module then determines a heterogeneous computing resource combination allocation plan for the user's computing task based on the published resource status tags;
  • each heterogeneous edge node outputs its own computing resource sharing strategy based on the heterogeneous computing resource combination allocation plan given in the previous step.
  • In an edge computing network, each edge computing node is distributed and deployed in geographic space, and its resource providers come from a variety of sources. These edge computing nodes are usually heterogeneous, and the capabilities and types of computing resources that different edge computing nodes can provide may vary. In addition, the owner's willingness to share computing resources also fluctuates over time. Therefore, according to certain strategies, the heterogeneous computing resources in the edge network can be actively shared and discovered.
  • Label_i represents the label used by edge computing node i for active resource discovery.
  • Thd_i represents the threshold at which the edge computing node can publish the label, which can be the percentage of all currently available computing resources relative to the total amount of computing resources actually owned by the edge computing node.
  • Time_i represents the time period set by the owner for resource sharing.
  • C_i, G_i, and F_i respectively represent the CPU, GPU, and FPGA computing resources currently available to the edge computing node.
  • The resource publishing module runs continuously to monitor the utilization of the various heterogeneous resources and the current time in real time.
  • When the publishing condition is met, the module actively publishes the label Label_i to the edge network to indicate that the computing resources on the edge computing node can be shared within the time range Time_i, and that the types and sizes of the sharable resources are C_i, G_i, and F_i respectively.
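  • The following sketch ties these symbols together: a publishing routine that checks Thd_i and Time_i and, when both hold, emits Label_i carrying Time_i and the sharable C_i, G_i, F_i. The publish transport and the units are assumptions for illustration.

```python
# Hypothetical resource publishing module for edge node i: when the proportion of
# currently available resources reaches Thd_i and the current time lies inside
# Time_i, publish the label Label_i to the edge network.
import time

def maybe_publish_label(node_id, avail, total, thd_i, time_i, publish):
    """avail/total: {"CPU": ..., "GPU": ..., "FPGA": ...}; time_i: (start, end);
    publish: callable that sends the label to the edge network (assumed)."""
    ratio = sum(avail.values()) / max(sum(total.values()), 1e-9)
    now = time.time()
    if ratio >= thd_i and time_i[0] <= now <= time_i[1]:
        label = {
            "node": node_id,
            "time_range": time_i,
            "C": avail.get("CPU", 0.0),
            "G": avail.get("GPU", 0.0),
            "F": avail.get("FPGA", 0.0),
        }
        publish(label)  # e.g. hand the label to the edge network's task scheduling module
        return label
    return None
```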
  • the task scheduling module allocates resources for the user's computing task based on the available heterogeneous computing resources to improve the utilization of edge network resources, meet the various constraints of the user's computing task, and improve the user's quality of experience.
  • The task scheduling module first collects the schedulable status of the various heterogeneous computing resources based on the tags actively published by each computing node, and selects the computing nodes that are available during the time period in which the user's computing task must be completed. Under this condition, it determines whether the schedulable heterogeneous computing resources can satisfy the user's computing task. If so, it proceeds to the next step; otherwise, the computing task cannot be satisfied.
  • The combination allocation strategy of heterogeneous computing resources should reduce the total consumption Cost_t as much as possible, optimize the average cost of the edge network, and improve the user's quality of experience while satisfying the task constraint T_thd and the total computing task amount m_t.
  • the process of solving the optimal solution of the objective function can be expressed as a linear programming problem.
  • the optimal heterogeneous computing resource combination strategy can be obtained by solving the above objective function using a linear programming solver.
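  • A minimal sketch of such a linear program is shown below, assuming per-unit costs, per-resource processing speeds and capacities for the schedulable heterogeneous resources; the concrete objective and constraint coefficients are assumptions, since the application only names Cost_t, T_thd and m_t.

```python
# Hypothetical LP for the heterogeneous resource combination allocation: choose the
# task amount x_j given to each schedulable heterogeneous resource j so that total
# cost is minimized, the whole task m_t is covered, no resource exceeds its
# capacity, and each resource finishes its share within the deadline T_thd.
import numpy as np
from scipy.optimize import linprog

cost_per_unit = np.array([1.0, 0.6, 0.8])   # CPU, GPU, FPGA (assumed)
speed = np.array([2.0, 8.0, 5.0])           # task units processed per second (assumed)
capacity = np.array([40.0, 30.0, 20.0])     # max task units each resource can take (assumed)
m_t, T_thd = 60.0, 6.0                      # total task amount and deadline (assumed)

# Deadline constraints: x_j / speed_j <= T_thd
A_ub = np.diag(1.0 / speed)
b_ub = np.full(3, T_thd)
# Coverage constraint: sum(x_j) == m_t
A_eq, b_eq = np.ones((1, 3)), np.array([m_t])
bounds = [(0.0, c) for c in capacity]

res = linprog(cost_per_unit, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x)  # task amount assigned to each heterogeneous resource
```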
  • Each edge computing node learns its own resource sharing strategy based on the optimal resource combination strategy obtained in the previous step.
  • each edge computing node shares part of its computing resources for the processing of the current computing task and obtains corresponding benefits.
  • each edge computing node should satisfy the optimal resource allocation strategy determined for the computing task.
  • The strategy learning goal of each edge computing node is to generate the optimal resource sharing strategy a_i that maximizes its current benefit r_i.
  • The reward represents the benefit that resource owners can obtain when they provide their own heterogeneous computing resources for task processing, and each resource owner seeks to maximize the benefit it obtains.
  • The unit values represent the reward that can be obtained when one unit of a heterogeneous computing resource processes a task, that is, the basic benefit of sharing that type of resource, which is a positive real number.
  • Heterogeneous resources have different processing capabilities for different types of computing tasks. Therefore, the unit benefit of each heterogeneous resource for a given computing task is set so as to further improve the efficiency of resource use and the benefit level of resource owners. An illustrative form of the benefit is given below.
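  • As a hedged illustration only (the application's exact expression is not reproduced here), the per-node benefit could take a weighted-sum form, where the unit rewards ρ_k, the task-type weights ω_{k,t} and the shared amounts x_{i,k} are notational assumptions:

```latex
% Illustrative benefit of edge node i (not the application's exact formula):
%   \rho_k        : basic reward per unit of resource type k in {CPU, GPU, FPGA} (assumed)
%   \omega_{k,t}  : unit-benefit weight of resource type k for task type t (assumed)
%   x_{i,k}       : amount of resource type k that node i shares under action a_i (assumed)
r_i \;=\; \sum_{k \in \{\mathrm{CPU},\,\mathrm{GPU},\,\mathrm{FPGA}\}} \rho_k \,\omega_{k,t}\, x_{i,k}
```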
  • To learn the resource sharing strategy of each edge computing node, a centralized training and distributed execution mechanism is adopted.
  • Set the strategy learning network (preset strategy learning model) of the edge computing node to N_i,
  • the output of the policy learning network is the resource sharing decision.
  • Each edge computing node sets a separate value evaluation network Q_i, whose parameters are θ_i.
  • The parameters of N_i need to be trained.
  • the training process of each edge computing node is as follows:
  • The state information s_i ∈ {s_1, s_2} obtained by edge computing node i is used as the input of the local decision network to obtain the current output a_i; after action a_i is executed, the system transitions to the new state s'_i with the corresponding benefit r_i, and {s_i, a_i, r_i, s'_i} is saved in the set D;
  • s'_m represents the next environment information observed by all edge computing nodes;
  • γ is a real number between 0 and 1, representing the discount factor;
  • s_m represents the current environment information observed by all edge computing nodes.
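  • A minimal sketch of such centralized training with distributed execution is given below (an actor-critic pattern with a shared replay set D, simplified by omitting target networks and exploration noise); network sizes, dimensions, hyperparameters and the PyTorch framing are assumptions, not the application's exact algorithm.

```python
# Hedged sketch: each node i has an actor N_i (local decision network) and a value
# network Q_i with parameters theta_i; transitions {s, a, r, s'} for all nodes are
# stored in a shared set D and used for centralized training, while execution only
# needs the local actor. Sizes and hyperparameters below are illustrative.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM, ACTION_DIM, N_AGENTS, GAMMA = 8, 3, 4, 0.95  # assumed sizes; GAMMA is the discount factor

class Actor(nn.Module):      # N_i: maps the node's local state s_i to its sharing decision a_i
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Sigmoid())  # share ratios in [0, 1]
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):     # Q_i: scores the joint state/action during centralized training
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_AGENTS * (STATE_DIM + ACTION_DIM), 128),
                                 nn.ReLU(), nn.Linear(128, 1))
    def forward(self, s_all, a_all):
        return self.net(torch.cat([s_all, a_all], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critics = [Critic() for _ in range(N_AGENTS)]
actor_opt = [optim.Adam(a.parameters(), lr=1e-3) for a in actors]
critic_opt = [optim.Adam(c.parameters(), lr=1e-3) for c in critics]
D = deque(maxlen=10_000)     # replay set D of (states, actions, rewards, next_states) per step

def act(i, s_i):
    """Distributed execution: node i only needs its own observation and actor N_i."""
    with torch.no_grad():
        return actors[i](s_i)

def train_step(batch_size=32):
    """Centralized training: each critic sees all nodes' states and actions."""
    if len(D) < batch_size:
        return
    batch = random.sample(D, batch_size)
    s  = torch.stack([torch.cat(b[0]) for b in batch])   # (B, N*STATE_DIM)  current states s_m
    a  = torch.stack([torch.cat(b[1]) for b in batch])   # (B, N*ACTION_DIM) executed actions
    r  = torch.tensor([b[2] for b in batch])              # (B, N)            benefits r_i
    s2 = torch.stack([torch.cat(b[3]) for b in batch])    # (B, N*STATE_DIM)  next states s'_m
    with torch.no_grad():                                  # next joint action from current actors
        a2 = torch.cat([actors[j](s2[:, j*STATE_DIM:(j+1)*STATE_DIM]) for j in range(N_AGENTS)], dim=-1)
    for i in range(N_AGENTS):
        # Critic update: regress Q_i(s, a) toward the discounted target r_i + GAMMA * Q_i(s', a').
        target = (r[:, i:i+1] + GAMMA * critics[i](s2, a2)).detach()
        critic_loss = nn.functional.mse_loss(critics[i](s, a), target)
        critic_opt[i].zero_grad(); critic_loss.backward(); critic_opt[i].step()
        # Actor update: adjust N_i so that, holding other nodes' actions fixed, Q_i increases.
        a_cols = [a[:, j*ACTION_DIM:(j+1)*ACTION_DIM] for j in range(N_AGENTS)]
        a_cols[i] = actors[i](s[:, i*STATE_DIM:(i+1)*STATE_DIM])
        actor_loss = -critics[i](s, torch.cat(a_cols, dim=-1)).mean()
        actor_opt[i].zero_grad(); actor_loss.backward(); actor_opt[i].step()
```

  • In this sketch, D would be filled by calling act(i, s_i) on each node, executing the joint sharing decision, and appending the observed (states, actions, benefits, next states) tuple; the application itself does not prescribe these implementation details.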
  • the task processing method pre-deploys a policy learning model in each computing node of the edge network, and determines the resource sharing strategy based on the preset policy learning model, thereby utilizing the resource sharing strategy to implement task processing. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, task processing based on the obtained resource sharing strategy can maximize the task processing value of the computing node and ensure that the task is processed, thereby achieving a win-win situation for the task requester and the resource provider.
  • the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy based on the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing nodes into consideration, further realizing the reasonable allocation of heterogeneous resources within the computing nodes.
  • the present application provides a task processing system in some embodiments.
  • the task processing system may include an edge network 100 and computing nodes 200 deployed in the edge network 100 , each computing node 200 being provided with one or more heterogeneous resources, wherein:
  • the edge network 100 is used to determine the target computing node 200 and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node 200, and send the resource allocation strategy to each target computing node 200;
  • the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
  • the target computing node 200 is used to process the resource allocation strategy and local resource information using a preset strategy learning model, obtain a resource sharing strategy, and use the resource sharing strategy to execute the target task;
  • the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • the task processing system pre-deploys a policy learning model in each computing node of the edge network, and determines the resource sharing strategy based on the preset policy learning model, thereby utilizing the resource sharing strategy to implement task processing. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, task processing based on the resource sharing strategy can maximize the task processing value of the computing node and ensure that the task is processed, thereby achieving a win-win situation for the task requester and the resource provider.
  • the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy based on the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, further realizing the reasonable allocation of heterogeneous resources within the computing node.
  • the present application provides a task processing device in some embodiments.
  • FIG. 5 is a schematic diagram of the structure of a task processing device provided by the present application.
  • the task processing device can be applied to computing nodes in an edge network.
  • Each computing node is provided with one or more heterogeneous resources, including:
  • the receiving module 1 is used to receive the resource allocation strategy sent by the edge network; the resource allocation strategy includes the processing capacity of each heterogeneous resource allocated to the target task;
  • Processing module 2 is used to input the resource allocation strategy and local resource information into the preset strategy learning model for processing to obtain the resource sharing strategy;
  • the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
  • the execution module 3 is used to execute the target task by utilizing the resource sharing strategy.
  • the task processing device pre-deploys a policy learning model in each computing node of the edge network, and determines the resource sharing strategy based on the preset policy learning model, thereby utilizing the resource sharing strategy to implement task processing. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, task processing based on the obtained resource sharing strategy can maximize the task processing value of the computing node and ensure that the task is processed, thereby achieving a win-win situation for the task requester and the resource provider.
  • the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy based on the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, further realizing the reasonable allocation of heterogeneous resources within the computing node.
  • the task processing device may further include a reporting module, which is used to obtain the execution result of the target task after executing the target task using the resource sharing strategy described above; and report the execution result to the edge network.
  • the above heterogeneous resources may include CPU resources, GPU resources, and FPGA resources.
  • the task processing device may further include a publishing module for real-time monitoring of local resource information; when the local resource information meets a preset condition, the local resource information is published to the edge network.
  • the publishing module may be used to determine available resources based on local resource information; when the resource ratio of available resources reaches a preset threshold, the local resource information is published to the edge network.
  • the local resource information may include available resources and resource provision time, and the available resources may include currently available resources of various heterogeneous resources.
  • the present application provides another task processing device in some embodiments.
  • FIG. 6 is a schematic diagram of the structure of another task processing device provided by the present application, which can be applied to an edge network, and each computing node in the edge network is provided with one or more heterogeneous resources, including:
  • Determining module 4 used to determine the target computing node and resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
  • the sending module 5 is used to send the resource allocation strategy to each target computing node, so that each target computing node uses the preset strategy learning model to process the resource allocation strategy and local resource information, obtains the resource sharing strategy, and uses the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • the task processing device pre-deploys a policy learning model in each computing node of the edge network, and determines the resource sharing strategy based on the preset policy learning model, thereby utilizing the resource sharing strategy to implement task processing. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, task processing based on the resource sharing strategy can maximize the task processing value of the computing node and ensure that the task is processed, thereby achieving a win-win situation for the task requester and the resource provider.
  • the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy based on the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, further realizing the reasonable allocation of heterogeneous resources within the computing node.
  • the determination module 4 may be used to determine resource provision time and available resources based on local resource information; when both resource provision time and available resources meet the task requirements indicated by the task information, the computing node is determined to be the target computing node.
  • the task processing device may also include a statistical module for counting the local resource information published by each computing node before determining the target computing node and resource allocation strategy based on the task information of the target task and the local resource information of each computing node; wherein the local resource information is published to the edge network by the computing node when the local resource information meets preset conditions.
  • the task processing device may further include a feedback module for obtaining the execution result of the target task uploaded by the target computing node and feeding the execution result back to the initiator of the target task; a module-level sketch of these apparatus embodiments is given at the end of this section.
  • the present application provides a task processing device in some embodiments.
  • FIG. 7 is a schematic diagram of the structure of a task processing device provided by the present application, and the task processing device may include:
  • a memory for storing a computer program; and a processor for implementing the steps of any one of the above-mentioned task processing methods when executing the computer program.
  • FIG. 7 is a schematic diagram of the composition of the task processing device, which may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13.
  • the processor 10, the memory 11 and the communication interface 12 all communicate with each other through the communication bus 13.
  • the processor 10 may be a central processing unit (CPU), an application specific integrated circuit, a digital signal processor, a field programmable gate array, or other programmable logic devices.
  • the processor 10 may call a program stored in the memory 11 , and the processor 10 may execute operations in the embodiment of the task processing method.
  • the memory 11 is used to store one or more programs, which may include program codes, and the program codes include computer operation instructions.
  • the memory 11 stores at least a program for implementing the following functions:
  • receiving a resource allocation strategy issued by the edge network, wherein the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
  • the resource allocation strategy and local resource information are input into the preset strategy learning model for processing to obtain the resource sharing strategy;
  • the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
  • determining the target computing node and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node, wherein the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
  • the resource allocation strategy is sent to each target computing node so that each target computing node uses a preset strategy learning model to process the resource allocation strategy and local resource information, obtains a resource sharing strategy, and uses the resource sharing strategy to execute the target task;
  • the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  • the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required for at least one function, etc.; the data storage area may store data created during use.
  • the memory 11 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device or other non-volatile solid-state storage device.
  • the communication interface 12 may be an interface of a communication module, and is used to connect to other devices or systems.
  • the structure shown in FIG. 7 does not constitute a limitation on the task processing device in some embodiments of the present application.
  • the task processing device may include more or fewer components than those shown in FIG. 7 , or may combine certain components.
  • the present application provides a non-volatile computer-readable storage medium in some embodiments.
  • a non-volatile computer-readable storage medium 80 stores a computer program 810 , and when the computer program 810 is executed by a processor, the steps of any of the above-mentioned task processing methods can be implemented.
  • the non-volatile computer-readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.
  • the steps of the method or algorithm described in conjunction with the embodiments disclosed herein may be implemented directly using hardware, a software module executed by a processor, or a combination of the two.
  • the software module may be placed in a random access memory (RAM), a memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
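As a compact, illustrative sketch of how the edge-network-side apparatus described in this section (statistical, determining, sending and feedback modules) might be organized in software; all class, field and method names are assumptions introduced here for readability and are not part of the application.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TaskInfo:
    task_id: str
    demand: Dict[str, float]      # required amount per heterogeneous resource type
    deadline: float               # latest acceptable completion time (seconds)

@dataclass
class NodeInfo:
    node_id: str
    available: Dict[str, float]   # currently available amount per resource type
    provide_until: float          # time until which the resources are offered

class TaskScheduler:
    """Illustrative edge-network-side device combining the described modules."""

    def __init__(self) -> None:
        # statistical module: local resource information published by computing nodes
        self.board: Dict[str, NodeInfo] = {}

    def collect(self, info: NodeInfo) -> None:            # statistical module
        self.board[info.node_id] = info

    def determine(self, task: TaskInfo) -> List[str]:     # determining module 4 (selection part)
        return [n.node_id for n in self.board.values()
                if n.provide_until >= task.deadline
                and all(n.available.get(r, 0.0) > 0.0 for r in task.demand)]

    def send(self, targets: List[str],
             allocation: Dict[str, Dict[str, float]]) -> None:   # sending module 5
        for node_id in targets:
            print(f"send to {node_id}: {allocation.get(node_id, {})}")

    def feed_back(self, task: TaskInfo, result: object) -> None:  # feedback module
        print(f"result of {task.task_id} returned to initiator: {result}")
```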

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A task processing method, system, apparatus and device, and a computer-readable storage medium, applied to the technical field of resource scheduling. The method is applied to computing nodes in an edge network, wherein each computing node is provided with one or more heterogeneous resources. The method comprises: receiving a resource allocation policy issued by the edge network, wherein the resource allocation policy comprises the processing amounts of a target task that are allocated to the respective heterogeneous resources; inputting the resource allocation policy and local resource information into a preset policy learning model for processing, so as to obtain a resource sharing policy, wherein the resource sharing policy comprises the available resources of the heterogeneous resources, and the preset policy learning model is obtained by training with the task processing value of the computing node as an optimization target; and executing the target task by using the resource sharing policy. By means of the method, more rational resource allocation can be performed for a computing task, and a win-win situation for the task requester and the resource provider is achieved.

Description

任务处理方法、系统、装置、设备及计算机可读存储介质Task processing method, system, device, equipment and computer readable storage medium
相关申请的交叉引用CROSS-REFERENCE TO RELATED APPLICATIONS
本申请要求于2022年11月07日提交中国专利局,申请号为202211381753.2,申请名称为“任务处理方法、系统、装置、设备及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims priority to the Chinese patent application filed with the China Patent Office on November 7, 2022, with application number 202211381753.2, and application name “Task processing method, system, device, equipment and computer-readable storage medium”, all contents of which are incorporated by reference in this application.
技术领域Technical Field
本申请涉及资源调度技术领域,特别涉及一种任务处理方法、系统、装置、设备及计算机可读存储介质。The present application relates to the technical field of resource scheduling, and in particular to a task processing method, system, device, equipment and computer-readable storage medium.
背景技术Background technique
边缘计算作为一种近年来新兴的计算范式,通过在移动网络边缘处为终端用户提供算力、存储、缓存等各类服务,从而有效减少网络延迟。由于边缘计算节点的类型多样,如运营商基站中的服务器、智能家居中的个人计算机等设备,因此,边缘网络中的计算资源节点具有异构特性,各节点的性能差异较大。Edge computing, as an emerging computing paradigm in recent years, effectively reduces network latency by providing computing power, storage, caching and other services to end users at the edge of mobile networks. Due to the variety of edge computing nodes, such as servers in operator base stations and personal computers in smart homes, computing resource nodes in edge networks have heterogeneous characteristics, and the performance of each node varies greatly.
异构计算是一种由使用不同类型的指令集和体系结构的计算单元所组成的计算系统,异构计算中的计算资源包括CPU(Central Processing Unit,中央处理器)、GPU(Graphics Processing Unit,图形处理器)、FPGA(Field Programmable Gate Array,现场可编程逻辑门阵列)以及ASIC(Application Specific Integrated Circuit,专用集成电路)等多种类型,这些计算资源各自具有明显的功能特性,在计算能力、功耗等方面具有不同的特点,能够适应不同类型的计算任务。在边缘计算中,不同的边缘节点之间具有异构特性,而这些异构边缘节点内部的计算资源往往也具有异构特性,在一个节点内部,会同时包括多种异构计算资源。Heterogeneous computing is a computing system composed of computing units using different types of instruction sets and architectures. The computing resources in heterogeneous computing include CPU (Central Processing Unit), GPU (Graphics Processing Unit), FPGA (Field Programmable Gate Array) and ASIC (Application Specific Integrated Circuit). These computing resources each have distinct functional characteristics, different characteristics in computing power and power consumption, and can adapt to different types of computing tasks. In edge computing, different edge nodes have heterogeneous characteristics, and the computing resources inside these heterogeneous edge nodes often also have heterogeneous characteristics. In a node, multiple heterogeneous computing resources will be included at the same time.
现有边缘网络资源分配的研究和方法,主要以资源利用效率和用户体验质量等作为衡量资源分配策略优劣的主要指标,进而确定资源分配的最优方案。然而,其主要不足在于,并未考虑边缘网络中计算节点拥有者对收益的需求,在实际场景中,各个边缘节点通常隶属于不同的拥有者,其共享自身计算资源的目的,主要在于利用空闲计算资源来获得相应的收益,实现双赢;同时,现有工作对拥有多种异构计算资源的边缘节点资源分配问题关注较少,对基于异构资源协同处理计算任务的研究还不够充分。The existing research and methods of resource allocation in edge networks mainly use resource utilization efficiency and user experience quality as the main indicators to measure the advantages and disadvantages of resource allocation strategies, and then determine the optimal solution for resource allocation. However, its main shortcoming is that it does not take into account the demand for benefits of computing node owners in edge networks. In actual scenarios, each edge node usually belongs to different owners, and the purpose of sharing its own computing resources is mainly to use idle computing resources to obtain corresponding benefits and achieve a win-win situation. At the same time, existing work pays little attention to the resource allocation problem of edge nodes with multiple heterogeneous computing resources, and the research on the collaborative processing of computing tasks based on heterogeneous resources is not sufficient.
因此,如何为计算任务进行更为合理的资源分配,实现任务请求者和资源提供者的双赢是本领域技术人员亟待解决的问题。Therefore, how to make more reasonable resource allocation for computing tasks and achieve a win-win situation for task requesters and resource providers is an urgent problem to be solved by those skilled in the art.
发明内容Summary of the invention
本申请的目的是提供一种任务处理方法,该任务处理方法可以为计算任务进行更为合理的资源分配,实现任务请求者和资源提供者的双赢;本申请的另一目的是提供另一种任务处理方法、任务处理装置及设备、计算机可读存储介质,均具有上述有益效果。The purpose of this application is to provide a task processing method, which can make more reasonable resource allocation for computing tasks and achieve a win-win situation for task requesters and resource providers; another purpose of this application is to provide another task processing method, task processing device and equipment, and computer-readable storage medium, all of which have the above-mentioned beneficial effects.
第一方面，本申请提供了一种任务处理方法，应用于边缘网络中的计算节点，每一计算节点设置有一种或多种异构资源，方法包括：In the first aspect, the present application provides a task processing method, applied to computing nodes in an edge network, where each computing node is provided with one or more heterogeneous resources, and the method includes:
接收边缘网络下发的资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;Receive the resource allocation strategy issued by the edge network; the resource allocation strategy includes the processing capacity of each heterogeneous resource allocated to the target task;
将资源分配策略和本地资源信息输入至预设策略学习模型进行处理,获得资源共享策略;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得;The resource allocation strategy and local resource information are input into the preset strategy learning model for processing to obtain the resource sharing strategy; the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
利用资源共享策略执行目标任务。Utilize resource sharing strategies to execute target tasks.
在一些实施例中,利用资源共享策略执行目标任务之后,还包括:In some embodiments, after executing the target task using the resource sharing strategy, the method further includes:
获取目标任务的执行结果;Get the execution result of the target task;
将执行结果上报至边缘网络。Report the execution results to the edge network.
在一些实施例中,异构资源包括CPU资源、GPU资源、FPGA资源。In some embodiments, heterogeneous resources include CPU resources, GPU resources, and FPGA resources.
在一些实施例中,任务处理方法还包括:In some embodiments, the task processing method further includes:
对本地资源信息进行实时监控;Real-time monitoring of local resource information;
当本地资源信息满足预设条件时,将本地资源信息发布至边缘网络。When the local resource information meets the preset conditions, the local resource information is published to the edge network.
在一些实施例中,当本地资源信息满足预设条件时,将本地资源信息发布至边缘网络,包括:In some embodiments, when the local resource information meets a preset condition, publishing the local resource information to the edge network includes:
根据本地资源信息确定可用资源;Determine available resources based on local resource information;
当可用资源的资源占比达到预设阈值时,将本地资源信息发布至边缘网络。When the resource ratio of available resources reaches a preset threshold, the local resource information is published to the edge network.
在一些实施例中,本地资源信息包括可用资源和资源提供时间,可用资源包括各异构资源的当前可用资源。In some embodiments, the local resource information includes available resources and resource provision time, and the available resources include currently available resources of various heterogeneous resources.
第二方面,本申请提供了另一种任务处理方法,应用于边缘网络,边缘网络中的每一计算节点设置有一种或多种异构资源,方法包括:In a second aspect, the present application provides another task processing method, which is applied to an edge network, where each computing node in the edge network is provided with one or more heterogeneous resources, and the method includes:
根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;Determine the target computing node and resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
将资源分配策略发送至各目标计算节点,以使各目标计算节点利用预设策略学习模型对资源分配策略和本地资源信息进行处理,获得资源共享策略,并利用资源共享策略执行目标任务;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得。The resource allocation strategy is sent to each target computing node so that each target computing node uses a preset strategy learning model to process the resource allocation strategy and local resource information, obtains a resource sharing strategy, and uses the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
在一些实施例中,根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点,包括:In some embodiments, determining the target computing node according to the task information of the target task and the local resource information of each computing node includes:
根据本地资源信息确定资源提供时间和可用资源;Determine resource provision time and available resources based on local resource information;
当资源提供时间和可用资源均满足任务信息指示的任务需求时,确定计算节点为目标计算节点。When both the resource provision time and the available resources meet the task requirements indicated by the task information, the computing node is determined to be the target computing node.
在一些实施例中,根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略之前,还包括:In some embodiments, before determining the target computing node and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node, the method further includes:
统计各计算节点发布的本地资源信息;其中,本地资源信息由计算节点在本地资源信息满足预设条件时发布至边缘网络。The local resource information published by each computing node is counted; wherein the local resource information is published to the edge network by the computing node when the local resource information meets the preset conditions.
在一些实施例中,任务处理方法还包括:In some embodiments, the task processing method further includes:
获取目标计算节点上传的目标任务的执行结果; Get the execution result of the target task uploaded by the target computing node;
将执行结果反馈至目标任务的发起端。Feedback the execution results to the initiator of the target task.
第三方面,本申请还公开了一种任务处理系统,包括边缘网络和部署于边缘网络中的计算节点,每一计算节点设置有一种或多种异构资源,其中,In a third aspect, the present application further discloses a task processing system, including an edge network and computing nodes deployed in the edge network, each computing node is provided with one or more heterogeneous resources, wherein:
边缘网络,用于根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略,并将资源分配策略发送至各目标计算节点;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;The edge network is used to determine the target computing node and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node, and send the resource allocation strategy to each target computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
目标计算节点,用于利用预设策略学习模型对资源分配策略和本地资源信息进行处理,获得资源共享策略,并利用资源共享策略执行目标任务;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得。The target computing node is used to process the resource allocation strategy and local resource information using the preset strategy learning model, obtain the resource sharing strategy, and use the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
第四方面,本申请还公开了一种任务处理装置,应用于边缘网络中的计算节点,每一计算节点设置有一种或多种异构资源,装置包括:In a fourth aspect, the present application further discloses a task processing device, which is applied to a computing node in an edge network, each computing node is provided with one or more heterogeneous resources, and the device includes:
接收模块,用于接收边缘网络下发的资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;A receiving module is used to receive a resource allocation strategy issued by an edge network; the resource allocation strategy includes the processing capacity of each heterogeneous resource allocated to a target task;
处理模块,用于将资源分配策略和本地资源信息输入至预设策略学习模型进行处理,获得资源共享策略;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得;A processing module is used to input the resource allocation strategy and local resource information into a preset strategy learning model for processing to obtain a resource sharing strategy; the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
执行模块,用于利用资源共享策略执行目标任务。The execution module is used to execute the target task using the resource sharing strategy.
第五方面,本申请还公开了另一种任务处理装置,应用于边缘网络,边缘网络中的每一计算节点设置有一种或多种异构资源,装置包括:In a fifth aspect, the present application further discloses another task processing device, which is applied to an edge network, wherein each computing node in the edge network is provided with one or more heterogeneous resources, and the device comprises:
确定模块,用于根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;A determination module, used to determine the target computing node and resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
发送模块,用于将资源分配策略发送至各目标计算节点,以使各目标计算节点利用预设策略学习模型对资源分配策略和本地资源信息进行处理,获得资源共享策略,并利用资源共享策略执行目标任务;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得。The sending module is used to send the resource allocation strategy to each target computing node, so that each target computing node uses the preset strategy learning model to process the resource allocation strategy and local resource information, obtains the resource sharing strategy, and uses the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
第六方面,本申请还公开了一种任务处理设备,包括:In a sixth aspect, the present application further discloses a task processing device, including:
存储器,用于存储计算机程序;Memory for storing computer programs;
处理器,用于执行计算机程序时实现如上的任一种任务处理方法的步骤。A processor is used to implement the steps of any of the above task processing methods when executing a computer program.
第七方面,本申请还公开了一种非易失性计算机可读存储介质,非易失性计算机可读存储介质上存储有计算机程序,计算机程序被处理器执行时实现如上的任一种任务处理方法的步骤。In a seventh aspect, the present application further discloses a non-volatile computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of any of the above task processing methods are implemented.
应用本申请所提供的技术方案，在边缘网络的各个计算节点中预先部署策略学习模型，并基于该预设策略学习模型实现资源共享策略的确定，从而利用该资源共享策略实现任务处理，由于预设策略学习模型是以计算节点的任务处理价值为优化目标进行训练获得的，因此，基于获得的资源共享策略进行任务处理可以使得计算节点的任务处理价值得到最大化，并保证任务处理完毕，从而实现任务请求者和资源提供者的双赢；此外，由边缘网络创建资源分配策略，再由计算节点基于资源分配策略学习得到资源共享策略，且资源分配策略和资源共享策略均将计算节点中的异构资源考虑在内，进一步实现了计算节点内部异构资源的合理分配。By applying the technical solution provided in the present application, a policy learning model is pre-deployed in each computing node of the edge network, and the resource sharing strategy is determined based on this preset policy learning model, so that task processing is carried out by using the resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, processing tasks on the basis of the obtained resource sharing strategy maximizes the task processing value of the computing node and ensures that the task is completed, thereby achieving a win-win situation for the task requester and the resource provider. In addition, the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy from the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, thereby further realizing a reasonable allocation of the heterogeneous resources within the computing node.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
为了更清楚地说明现有技术和本申请在一些实施例中的技术方案,下面将对现有技术和本申请在一些实施例中描述中需要使用的附图作简要的介绍。当然,下面有关本申请在一些实施例中的附图描述的仅仅是本申请中的一部分实施例,对于本领域普通技术人员来说,在不付出创造性劳动的前提下,还可以根据提供的附图获得其他的附图,所获得的其他附图也属于本申请的保护范围。In order to more clearly illustrate the technical solutions of the prior art and the present application in some embodiments, the following briefly introduces the drawings required to be used in the description of the prior art and the present application in some embodiments. Of course, the following description of the drawings in some embodiments of the present application is only a part of the embodiments of the present application. For ordinary technicians in this field, other drawings can be obtained based on the provided drawings without creative work, and the obtained other drawings also belong to the protection scope of the present application.
图1为本申请在一些实施例中所提供的一种任务处理系统的结构示意图;FIG1 is a schematic diagram of the structure of a task processing system provided in some embodiments of the present application;
图2为本申请在一些实施例中所提供的一种任务处理方法的流程示意图;FIG2 is a flowchart of a task processing method provided in some embodiments of the present application;
图3为本申请在一些实施例中所提供的另一种任务处理方法的流程示意图;FIG3 is a flowchart of another task processing method provided in some embodiments of the present application;
图4为本申请在一些实施例中所提供的又一种任务处理方法的流程示意图;FIG4 is a flowchart of another task processing method provided in some embodiments of the present application;
图5为本申请在一些实施例中所提供的一种任务处理装置的流程示意图;FIG5 is a schematic diagram of a process flow of a task processing device provided by the present application in some embodiments;
图6为本申请在一些实施例中所提供的另一种任务处理装置的流程示意图;FIG6 is a flowchart of another task processing device provided in some embodiments of the present application;
图7为本申请在一些实施例中所提供的一种任务处理设备的结构示意图;FIG7 is a schematic diagram of the structure of a task processing device provided in some embodiments of the present application;
图8为本申请在一些实施例中所提供的一种非易失性计算机可读存储介质的结构示意图。FIG8 is a schematic diagram of the structure of a non-volatile computer-readable storage medium provided in some embodiments of the present application.
具体实施方式Detailed ways
本申请的核心是提供一种任务处理方法,该任务处理方法可以为计算任务进行更为合理的资源分配,实现任务请求者和资源提供者的双赢;本申请的另一核心是提供另一种任务处理方法、任务处理装置及设备、计算机可读存储介质,均具有上述有益效果。The core of this application is to provide a task processing method, which can make more reasonable resource allocation for computing tasks and achieve a win-win situation for task requesters and resource providers; another core of this application is to provide another task processing method, task processing device and equipment, and computer-readable storage medium, all of which have the above-mentioned beneficial effects.
为了对本申请中的技术方案进行更加清楚、完整地描述,下面将结合附图,对本申请在一些实施例中的技术方案进行介绍。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的一些实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。In order to describe the technical solutions in the present application more clearly and completely, the technical solutions in some embodiments of the present application will be introduced below in conjunction with the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present application, rather than all of the embodiments. Based on some embodiments in the present application, all other embodiments obtained by ordinary technicians in this field without making creative work are within the scope of protection of this application.
本申请所提供的任务处理方法应用于任务处理系统，该任务处理系统为边缘网络系统。参考图1，图1为本申请所提供的一种任务处理系统的结构示意图，该任务处理系统包括边缘网络100以及部署于边缘网络的各个计算节点200，其中，边缘网络100用于为计算任务进行资源分配，以将其分配至合适的计算节点200上，以便于基于计算节点200中的计算资源实现任务处理。The task processing method provided in the present application is applied to a task processing system, which is an edge network system. Referring to FIG. 1, FIG. 1 is a schematic diagram of the structure of a task processing system provided in the present application. The task processing system includes an edge network 100 and computing nodes 200 deployed in the edge network, wherein the edge network 100 is used to allocate resources for computing tasks so as to assign them to suitable computing nodes 200, so that task processing can be realized based on the computing resources in the computing nodes 200.
可以理解的是,边缘计算主要面向万物互联时代的海量终端用户,由终端用户及其设备产生高并发的计算任务请求,由于终端用户设备能力极其有限,因此,利用边缘计算网络中的计算力资源和网络带宽资源,用户可以将计算任务合理的卸载至边缘端,并由边缘网络中的计算节点分配相应的计算资源来处理用户的请求,完成任务处理。It is understandable that edge computing is mainly aimed at the massive terminal users in the era of the Internet of Everything. Terminal users and their devices generate highly concurrent computing task requests. Since the capabilities of terminal user devices are extremely limited, by utilizing the computing power resources and network bandwidth resources in the edge computing network, users can reasonably offload computing tasks to the edge, and the computing nodes in the edge network will allocate corresponding computing resources to process user requests and complete task processing.
本申请在一些实施例中提供了一种任务处理方法。The present application provides a task processing method in some embodiments.
参考图2，图2为本申请在一些实施例中所提供的一种任务处理方法的流程示意图，该任务处理方法可应用于边缘网络中的计算节点，每一计算节点设置有一种或多种异构资源，包括如下S101至S103。Referring to FIG. 2, FIG. 2 is a flowchart of a task processing method provided in some embodiments of the present application. The task processing method can be applied to computing nodes in an edge network, where each computing node is provided with one or more heterogeneous resources, and the method includes the following S101 to S103.
首先,需要说明的是,本申请在一些实施例中所提供的任务处理方法应用于边缘网络中的各个计算节点,即S101至S103的实现流程由边缘网络中的计算节点执行。在边缘网络中,各个计算节点在地理空间中分布部署,其资源提供者来源较为多样,换而言之,计算节点的类型并不唯一,例如,可以是电信运营商等基础设施服务提供商、云计算服务提供商以及接入到互联网中的个人用户等。First of all, it should be noted that the task processing method provided in some embodiments of the present application is applied to each computing node in the edge network, that is, the implementation process of S101 to S103 is executed by the computing node in the edge network. In the edge network, each computing node is distributed and deployed in the geographic space, and its resource provider sources are relatively diverse. In other words, the type of computing node is not unique. For example, it can be an infrastructure service provider such as a telecom operator, a cloud computing service provider, and an individual user connected to the Internet.
其中,每一个计算节点中均包含有一种或多种异构资源,异构资源如上CPU、GPU、FPGA以及ASIC等,这些异构资源各自具有明显的功能特性,在计算能力、功耗等方面具有不同的特点,能够适应不同类型的计算任务,例如,CPU善于处理具有顺序逻辑结构的任务,而GPU则对并行任务处理能力较强,FPGA则对一些特点场景下的实时计算任务具有较好的处理性能,因此,在面对不同计算任务时,异构资源的处理能力也各有不同。在一种可能的实现方式中,边缘网络中每一个计算节点中的异构资源可以包括CPU资源、GPU资源、FPGA资源。Each computing node contains one or more heterogeneous resources, such as CPU, GPU, FPGA and ASIC. These heterogeneous resources have distinct functional characteristics, different characteristics in computing power, power consumption, etc., and can adapt to different types of computing tasks. For example, CPU is good at processing tasks with sequential logic structure, while GPU has strong processing ability for parallel tasks, and FPGA has good processing performance for real-time computing tasks in some special scenarios. Therefore, when facing different computing tasks, the processing capabilities of heterogeneous resources are also different. In one possible implementation, the heterogeneous resources in each computing node in the edge network may include CPU resources, GPU resources, and FPGA resources.
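Purely to illustrate the point made above, that different heterogeneous resources suit different task types, the sketch below uses an invented relative-efficiency table for CPU, GPU and FPGA resources; the numbers carry no meaning beyond this example.

```python
# Assumed relative processing efficiency of each resource type for three task classes.
EFFICIENCY = {
    "cpu":  {"sequential": 1.0, "parallel": 0.3, "realtime": 0.5},
    "gpu":  {"sequential": 0.4, "parallel": 1.0, "realtime": 0.6},
    "fpga": {"sequential": 0.5, "parallel": 0.6, "realtime": 1.0},
}

def best_resource(task_class: str) -> str:
    """Pick the resource type with the highest assumed efficiency for a task class."""
    return max(EFFICIENCY, key=lambda r: EFFICIENCY[r][task_class])

print(best_resource("parallel"))  # -> "gpu" under the assumed table
```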
S101:接收边缘网络下发的资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;S101: receiving a resource allocation strategy issued by an edge network; the resource allocation strategy includes a processing amount of a target task allocated to each heterogeneous resource;
本步骤可以实现资源分配策略的获取,即接收边缘网络下发的资源分配策略。其中,资源分配策略是针对目标任务的策略,可以由边缘网络结合目标任务的任务需求和边缘网络中各计算节点的本地资源信息构建获得,用于为目标任务分配合理的计算资源,以便于实现目标任务的处理,该目标任务即为用户端发起的需要进行处理的计算任务。This step can achieve the acquisition of resource allocation strategy, that is, receiving the resource allocation strategy issued by the edge network. Among them, the resource allocation strategy is a strategy for the target task, which can be constructed by the edge network in combination with the task requirements of the target task and the local resource information of each computing node in the edge network, and is used to allocate reasonable computing resources for the target task to facilitate the processing of the target task, which is the computing task initiated by the user end and needs to be processed.
其中,资源分配策略相当于初始的分配策略,其中包括有边缘网络中每一种异构资源被分配的关于目标任务的任务量。如上,每一个计算节点中均包含有一种或多种异构资源,每一种异构资源被分配的关于目标任务的处理量即为目标任务被分配至各个异构资源的子任务的任务量,该任务量可以为子任务关于目标任务的占比,也可以为子任务的数量。显然,当任务量为子任务占比时,所有异构资源的子任务占比的加和为1;当任务量为子任务数量时,所有异构资源的自任务数量加和为目标任务的总任务量。Among them, the resource allocation strategy is equivalent to the initial allocation strategy, which includes the task volume of the target task allocated to each heterogeneous resource in the edge network. As mentioned above, each computing node contains one or more heterogeneous resources, and the processing volume of the target task allocated to each heterogeneous resource is the task volume of the subtasks of the target task allocated to each heterogeneous resource. The task volume can be the proportion of the subtask to the target task, or the number of subtasks. Obviously, when the task volume is the proportion of subtasks, the sum of the proportions of subtasks of all heterogeneous resources is 1; when the task volume is the number of subtasks, the sum of the number of subtasks of all heterogeneous resources is the total task volume of the target task.
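As an illustrative sketch of the constraint just described (the per-resource shares of the target task summing to 1, or the sub-task counts summing to the total task amount), assuming the allocation is represented as nested dictionaries of per-resource shares, a layout chosen only for this example:

```python
import math
from typing import Dict

# allocation[node_id][resource_type] = share of the target task assigned to that resource
Allocation = Dict[str, Dict[str, float]]

def is_valid_allocation(allocation: Allocation, total: float = 1.0) -> bool:
    """Check that the per-resource shares of the target task add up to the whole task."""
    assigned = sum(share for per_node in allocation.values() for share in per_node.values())
    return math.isclose(assigned, total, rel_tol=1e-9)

example: Allocation = {
    "node-a": {"cpu": 0.2, "gpu": 0.5},
    "node-b": {"fpga": 0.3},
}
assert is_valid_allocation(example)   # the shares sum to 1.0
```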
需要说明的是,接收到边缘网络下发的资源分配策略的计算节点并不唯一,可以为边缘网络基于一定的筛选策略选中的计算节点,因此,在边缘网络中,接收到资源分配策略的计算节点可能为一个,也可能为多个,可能为边缘网络中的部分计算节点,也可能为边缘网络中的全部计算节点,当然,当边缘网络中的所有计算节点均未被选中时,说明当前边缘网络中不存在满足目标任务处理需求的计算节点,可以延迟等待一段时间再进行处理。在此基础上,可以想到的是,资源分配策略中所提到的异构资源是指被选中的各个计算节点的异构资源。It should be noted that the computing nodes that receive the resource allocation strategy issued by the edge network are not unique. They can be computing nodes selected by the edge network based on a certain screening strategy. Therefore, in the edge network, the computing nodes that receive the resource allocation strategy may be one or more, and may be some computing nodes in the edge network or all computing nodes in the edge network. Of course, when all computing nodes in the edge network are not selected, it means that there are no computing nodes in the current edge network that meet the processing requirements of the target task, and the processing can be delayed for a period of time. On this basis, it can be imagined that the heterogeneous resources mentioned in the resource allocation strategy refer to the heterogeneous resources of each selected computing node.
S102:将资源分配策略和本地资源信息输入至预设策略学习模型进行处理,获得资源共享策略;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得;S102: Input the resource allocation strategy and local resource information into a preset strategy learning model for processing to obtain a resource sharing strategy; the resource sharing strategy includes available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
本步骤可以基于预设策略学习模型实现资源共享策略的计算。首先，在各个计算节点中，均预先创建有适应于自身的策略学习模型，模型的输入为边缘网络下发的资源分配策略和计算节点自身的资源信息（即上述本地资源信息），该本地资源信息主要是指计算节点本身预计提供的可用资源，模型的输出即为资源共享策略。其中，预设策略学习模型是以计算节点的任务处理价值为优化目标训练获得的，该任务处理价值是指计算节点在进行任务处理时所获得的收益，也就是将计算节点的收益需求考虑在内，构建可以使得计算节点的任务处理价值达到最大化的预设策略学习模型，换而言之，基于该预设策略学习模型学习得到的资源共享策略，可以使得计算节点的任务处理收益达到最大化，以便于实现任务请求者和资源提供者的双赢。This step realizes the calculation of the resource sharing strategy based on the preset policy learning model. First, a policy learning model adapted to itself is created in advance in each computing node. The input of the model is the resource allocation strategy issued by the edge network and the resource information of the computing node itself (that is, the local resource information mentioned above), where the local resource information mainly refers to the available resources that the computing node itself expects to provide, and the output of the model is the resource sharing strategy. The preset policy learning model is trained with the task processing value of the computing node as the optimization goal, where the task processing value refers to the benefit obtained by the computing node when processing tasks; that is, the benefit demand of the computing node is taken into account, and a preset policy learning model is constructed that maximizes the task processing value of the computing node. In other words, the resource sharing strategy learned by this preset policy learning model can maximize the task processing benefit of the computing node, so as to achieve a win-win situation for the task requester and the resource provider.
进一步,预设策略学习模型输出资源共享策略即为最终的分配策略,其中包括有当前计算节点内部各异构资源的可提供资源,这里的可提供资源是指为处理当前目标任务所提供的资源,可以是每一种异构资源中为处理当前目标任务提供的资源与自身实际提供的可用资源的占比。Furthermore, the resource sharing strategy output by the preset strategy learning model is the final allocation strategy, which includes the available resources of various heterogeneous resources within the current computing node. The available resources here refer to the resources provided for processing the current target task, which can be the ratio of the resources provided for processing the current target task in each heterogeneous resource to the available resources actually provided by itself.
当然,当接收到资源分配策略的计算节点为多个时,每一个接收到资源分配策略的计算节点均会输出一个适用于自身的资源共享策略,且每一个资源共享策略均明确指示了自身计算节点内部各异构资源为目标任务提供的计算资源,由此,基于被选中的各个计算节点中各异构资源实际提供的计算资源即可实现目标任务的处理。Of course, when there are multiple computing nodes that receive the resource allocation strategy, each computing node that receives the resource allocation strategy will output a resource sharing strategy suitable for itself, and each resource sharing strategy clearly indicates the computing resources provided by the heterogeneous resources within its own computing node for the target task. Therefore, the processing of the target task can be achieved based on the computing resources actually provided by the heterogeneous resources in the selected computing nodes.
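The application does not fix the internal form of the preset policy learning model. Purely as a sketch, the stand-in below maps the allocation received from the edge network and the node's local resource information to an offered fraction of each heterogeneous resource's available capacity; the profitability test and the parameter names (price, cost) are assumptions of this example, not the trained model of the application.

```python
from typing import Dict

def sharing_strategy(allocation: Dict[str, float],
                     available: Dict[str, float],
                     price: Dict[str, float],
                     cost: Dict[str, float]) -> Dict[str, float]:
    """Stand-in for the preset policy learning model on one computing node.

    Returns, per heterogeneous resource, the fraction of its currently available
    capacity that the node offers for the target task.
    """
    offer: Dict[str, float] = {}
    for res, share in allocation.items():
        profitable = price.get(res, 0.0) > cost.get(res, 0.0)
        has_capacity = available.get(res, 0.0) > 0.0
        # offer capacity in proportion to the demanded share, only when it pays off
        offer[res] = min(1.0, share) if (profitable and has_capacity) else 0.0
    return offer

# example: the node was assigned 30% of the task to its CPU and 70% to its GPU
print(sharing_strategy({"cpu": 0.3, "gpu": 0.7},
                       {"cpu": 8.0, "gpu": 2.0},
                       {"cpu": 1.0, "gpu": 3.0},
                       {"cpu": 0.4, "gpu": 1.0}))
```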
S103:利用资源共享策略执行目标任务。S103: Execute the target task using the resource sharing strategy.
本步骤可以实现目标任务的执行,即基于资源共享策略执行目标任务。当然,此处执行目标任务可以为利用资源共享策略中指示的各异构资源的可提供资源执行该目标任务。可以理解的是,由于目标任务可能是由多个计算节点中的各个异构资源共同执行,因此,资源共享策略中的“共享”,是指执行目标任务的各个计算节点中的各个异构资源在执行目标任务期间可以进行信息共享,进而保证目标任务可以被执行完毕以及保证目标任务执行结果的准确性。This step can realize the execution of the target task, that is, the target task is executed based on the resource sharing strategy. Of course, the execution of the target task here can be to execute the target task by utilizing the available resources of the heterogeneous resources indicated in the resource sharing strategy. It can be understood that since the target task may be jointly executed by various heterogeneous resources in multiple computing nodes, the "sharing" in the resource sharing strategy means that the various heterogeneous resources in the various computing nodes that execute the target task can share information during the execution of the target task, thereby ensuring that the target task can be completed and the accuracy of the target task execution result is guaranteed.
可见，本申请在一些实施例中所提供的任务处理方法，在边缘网络的各个计算节点中预先部署策略学习模型，并基于该预设策略学习模型实现资源共享策略的确定，从而利用该资源共享策略实现任务处理，由于预设策略学习模型是以计算节点的任务处理价值为优化目标进行训练获得的，因此，基于获得的资源共享策略进行任务处理可以使得计算节点的任务处理价值得到最大化，并保证任务处理完毕，从而实现任务请求者和资源提供者的双赢；此外，由边缘网络创建资源分配策略，再由计算节点基于资源分配策略学习得到资源共享策略，且资源分配策略和资源共享策略均将计算节点中的异构资源考虑在内，进一步实现了计算节点内部异构资源的合理分配。It can be seen that in the task processing method provided in some embodiments of the present application, a policy learning model is pre-deployed in each computing node of the edge network, and the resource sharing strategy is determined based on this preset policy learning model, so that task processing is carried out by using the resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, processing tasks on the basis of the obtained resource sharing strategy maximizes the task processing value of the computing node and ensures that the task is completed, thereby achieving a win-win situation for the task requester and the resource provider. In addition, the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy from the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, thereby further realizing a reasonable allocation of the heterogeneous resources within the computing node.
在一些实施例中,上述利用资源共享策略执行目标任务之后,还可以包括如下步骤:In some embodiments, after the above-mentioned resource sharing strategy is used to execute the target task, the following steps may also be included:
获取目标任务的执行结果;Get the execution result of the target task;
将执行结果上报至边缘网络。Report the execution results to the edge network.
本申请在一些实施例中所提供的任务处理方法还可以进一步实现执行结果反馈功能。在基于资源共享策略执行完毕目标任务之后,还可以统计目标任务的执行结果,以便于将执行结果上报至边缘网络,从而实现执行结果的反馈。更进一步的,边缘网络还可以将执行结果反馈至目标任务的发起方。The task processing method provided in some embodiments of the present application can further implement an execution result feedback function. After the target task is executed based on the resource sharing strategy, the execution result of the target task can also be counted so as to report the execution result to the edge network, thereby realizing the feedback of the execution result. Furthermore, the edge network can also feed back the execution result to the initiator of the target task.
在一些实施例中,该任务处理方法还可以包括如下步骤:In some embodiments, the task processing method may further include the following steps:
对本地资源信息进行实时监控;Real-time monitoring of local resource information;
当本地资源信息满足预设条件时,将本地资源信息发布至边缘网络。When the local resource information meets the preset conditions, the local resource information is published to the edge network.
如上，资源分配策略可以由边缘网络结合目标任务的任务需求和边缘网络中各计算节点的本地资源信息构建获得，因此，边缘网络在构建资源分配策略之前，需要先对各计算节点的本地资源信息进行获取。As mentioned above, the resource allocation strategy can be constructed by the edge network in combination with the task requirements of the target task and the local resource information of each computing node in the edge network. Therefore, before constructing the resource allocation strategy, the edge network needs to first obtain the local resource information of each computing node.
其中,本地资源信息是由相应的计算节点发布至边缘网络中的,并且,各个计算节点将本地资源信息发布至边缘网络中,是在本地资源信息满足预设条件的情况下执行的,当本地资源信息不满足自身的预设条件时,将不会进行本地资源信息的发布。因此,各个计算节点可以对自身的本地资源信息进行实时监控,并在本地资源信息满足预设条件时,将其发布至边缘网络。其中,预设条件则是由计算节点的拥有者根据自身实际需求预先设定的发布条件,并不唯一,本申请对此不做限定。Among them, the local resource information is published to the edge network by the corresponding computing node, and each computing node publishes the local resource information to the edge network when the local resource information meets the preset conditions. When the local resource information does not meet its own preset conditions, the local resource information will not be published. Therefore, each computing node can monitor its own local resource information in real time, and publish it to the edge network when the local resource information meets the preset conditions. Among them, the preset conditions are the publishing conditions pre-set by the owner of the computing node according to its actual needs, and are not unique. This application does not limit this.
在一些实施例中,上述当本地资源信息满足预设条件时,将本地资源信息发布至边缘网络,包括如下步骤:In some embodiments, when the local resource information meets the preset conditions, publishing the local resource information to the edge network includes the following steps:
根据本地资源信息确定可用资源;Determine available resources based on local resource information;
当可用资源的资源占比达到预设阈值时,将本地资源信息发布至边缘网络。When the resource ratio of available resources reaches a preset threshold, the local resource information is published to the edge network.
本申请在一些实施例中提供了一种类型的预设条件,即计算节点中的可用资源的资源占比达到预设阈值,其中,可用资源是指计算节点中的空闲资源,资源占比则是指空闲资源占总资源的比例。In some embodiments, the present application provides a type of preset condition, namely, the resource ratio of available resources in the computing node reaches a preset threshold, wherein available resources refer to idle resources in the computing node, and resource ratio refers to the proportion of idle resources to total resources.
在对本地资源信息进行实时统计的过程中,还可以进一步根据本地资源信息确定可用资源,并统计可用资源的资源占比,当该资源占比达到预设阈值时,说明自身可以为其他终端的计算任务提供计算资源,因此,可以进行本地资源信息的发布;反之,当资源占比未达到预设阈值时,说明自身无法为其他终端的计算任务提供计算资源,因此,不进行本地资源信息的发布。当然,预设阈值的取值并不影响本技术方案的实施,由计算节点的拥有者根据实际需求进行设置即可,本申请对此不做限定。In the process of real-time statistics of local resource information, available resources can be further determined based on the local resource information, and the resource ratio of available resources can be counted. When the resource ratio reaches the preset threshold, it means that it can provide computing resources for the computing tasks of other terminals, so the local resource information can be published; conversely, when the resource ratio does not reach the preset threshold, it means that it cannot provide computing resources for the computing tasks of other terminals, so the local resource information is not published. Of course, the value of the preset threshold does not affect the implementation of this technical solution, and it can be set by the owner of the computing node according to actual needs, and this application does not limit this.
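A minimal sketch of the publishing condition described above, assuming the preset threshold is expressed as a fraction of total capacity and that idle and total amounts are known per resource type (both assumptions made only for this example):

```python
from typing import Dict, Tuple

def should_publish(local_resources: Dict[str, Tuple[float, float]],
                   threshold: float = 0.5) -> bool:
    """local_resources maps a resource type to (idle_amount, total_amount).

    Publish the local resource information when the overall idle ratio
    reaches the preset threshold.
    """
    idle = sum(i for i, _ in local_resources.values())
    total = sum(t for _, t in local_resources.values())
    return total > 0 and idle / total >= threshold

snapshot = {"cpu": (6.0, 8.0), "gpu": (1.0, 4.0), "fpga": (2.0, 2.0)}
if should_publish(snapshot, threshold=0.5):
    print("publish local resource information to the edge network")
```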
在一些实施例中,本地资源信息可以包括可用资源和资源提供时间,可用资源包括可以各异构资源的当前可用资源。In some embodiments, the local resource information may include available resources and resource provision time, and the available resources may include currently available resources which may be heterogeneous resources.
本申请在一些实施例中提供了一种类型的本地资源信息。首先,要想实现任务处理,计算节点的可用资源需要满足目标任务的实际需求;其次,计算节点的所有者对资源的分享意愿有时会随时间的变化而产生波动,如当某计算节点较为空闲时,则该计算节点的拥有者更倾向于分享其空闲资源,为其它用户进行服务,并获得一定的奖励或收益,因此,计算节点的资源提供时间也应当满足目标任务的实际需求。基于此,本地资源信息可以包括自身的可用资源和资源提供时间,其中,可用资源则包括自身内部各类异构资源的当前可用资源。The present application provides a type of local resource information in some embodiments. First, in order to realize task processing, the available resources of the computing node need to meet the actual needs of the target task; secondly, the willingness of the owner of the computing node to share resources sometimes fluctuates over time. For example, when a computing node is relatively idle, the owner of the computing node is more inclined to share its idle resources, serve other users, and obtain certain rewards or benefits. Therefore, the resource provision time of the computing node should also meet the actual needs of the target task. Based on this, local resource information can include its own available resources and resource provision time, among which the available resources include the current available resources of various heterogeneous resources within itself.
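One possible representation of this local resource information is sketched below; the field names and the use of Unix timestamps for the resource provision time are assumptions of the example, not requirements of the application.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class LocalResourceInfo:
    node_id: str
    available: Dict[str, float]   # current available amount per heterogeneous resource
    provide_from: float           # start of the offered time window (Unix timestamp)
    provide_until: float          # end of the offered time window (Unix timestamp)

    def covers(self, start: float, deadline: float) -> bool:
        """True if the offered window contains the task's required processing window."""
        return self.provide_from <= start and deadline <= self.provide_until

info = LocalResourceInfo("node-a", {"cpu": 4.0, "gpu": 1.0}, 0.0, 3_600.0)
print(info.covers(start=100.0, deadline=1_800.0))   # True under the assumed window
```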
本申请在一些实施例中提供了另一种任务处理方法。The present application provides another task processing method in some embodiments.
参考图3,图3为本申请所提供的另一种任务处理方法的流程示意图,该任务处理方法可应用于边缘网络,边缘网络中的每一计算节点设置有一种或多种异构资源,包括如下S201和S202。Refer to Figure 3, which is a flow chart of another task processing method provided in the present application. The task processing method can be applied to an edge network, and each computing node in the edge network is provided with one or more heterogeneous resources, including the following S201 and S202.
首先,需要说明的是,本申请在一些实施例中所提供的任务处理方法应用于边缘网络,即S201和S202的实现流程由边缘网络执行。First of all, it should be noted that the task processing method provided in some embodiments of the present application is applied to an edge network, that is, the implementation process of S201 and S202 is executed by the edge network.
S201:根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;S201: Determine a target computing node and a resource allocation strategy according to task information of the target task and local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
本步骤可以实现目标计算节点和资源分配策略的确定。其中,目标计算节点即为从边缘网络中选中的用于实现目标任务处理的计算节点,资源分配策略则是针对目标任务的策略, 用于为目标任务分配合理的计算资源,以便于实现目标任务的处理,该目标任务即为用户端发起的需要进行处理的计算任务。This step can determine the target computing node and resource allocation strategy. The target computing node is the computing node selected from the edge network to implement the target task processing, and the resource allocation strategy is the strategy for the target task. It is used to allocate reasonable computing resources to the target task in order to facilitate the processing of the target task. The target task is the computing task initiated by the user end that needs to be processed.
在实现过程中,边缘网络在接收到待处理的目标任务时,可以先统计该目标任务的任务信息,包括但不限于任务类型、任务目标、计算资源需求量等信息;然后统计边缘网络中各个计算节点的本地资源信息,该本地资源信息可以包括但不限于计算节点本身预计提供的可用资源、提供可用资源的时间限制等信息;最后,结合目标任务的任务信息和各计算节点的本地资源信息从众多计算节点中筛选确定目标计算节点,并构建获得资源分配策略。During the implementation process, when the edge network receives the target task to be processed, it can first count the task information of the target task, including but not limited to task type, task target, computing resource demand and other information; then count the local resource information of each computing node in the edge network, and the local resource information may include but not limited to the available resources that the computing node itself is expected to provide, the time limit for providing available resources and other information; finally, combine the task information of the target task and the local resource information of each computing node to screen and determine the target computing node from a large number of computing nodes, and build a resource allocation strategy.
其中,资源分配策略相当于初始的分配策略,其中包括有边缘网络中每一种异构资源被分配的关于目标任务的任务量。可以理解的是,每一个计算节点中均包含有一种或多种异构资源,其中,每一种异构资源被分配的关于目标任务的处理量即为目标任务被分配至各个异构资源的子任务的任务量,该任务量可以为子任务关于目标任务的占比,也可以为子任务的数量。显然,当任务量为子任务占比时,所有异构资源的子任务占比的加和为1;当任务量为子任务数量时,所有异构资源的自任务数量加和为目标任务的总任务量。Among them, the resource allocation strategy is equivalent to the initial allocation strategy, which includes the task volume of the target task allocated to each heterogeneous resource in the edge network. It can be understood that each computing node contains one or more heterogeneous resources, wherein the processing volume of the target task allocated to each heterogeneous resource is the task volume of the subtasks of the target task allocated to each heterogeneous resource, and the task volume can be the proportion of the subtask to the target task, or the number of subtasks. Obviously, when the task volume is the proportion of subtasks, the sum of the proportions of subtasks of all heterogeneous resources is 1; when the task volume is the number of subtasks, the sum of the number of subtasks of all heterogeneous resources is the total task volume of the target task.
需要说明的是,目标计算节点的数量并不唯一,可能为一个,也可能为多个,可能为边缘网络中的部分计算节点,也可能为边缘网络中的全部计算节点,当然,没有确定任何一个目标计算节点时,说明当前边缘网络中不存在满足目标任务处理需求的计算节点,可以延迟等待一段时间再进行处理。在此基础上,可以想到的是,资源分配策略中所提到的异构资源是指被筛选确定的各个目标计算节点的异构资源。It should be noted that the number of target computing nodes is not unique. It may be one or more. It may be some computing nodes in the edge network or all computing nodes in the edge network. Of course, if no target computing node is determined, it means that there is no computing node in the current edge network that meets the processing requirements of the target task, and the processing can be delayed for a period of time. On this basis, it can be imagined that the heterogeneous resources mentioned in the resource allocation strategy refer to the heterogeneous resources of each target computing node that has been screened and determined.
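The application leaves open how the edge network turns the task information and the published node information into the initial allocation. As one possible illustration, the sketch below splits the target task over the eligible nodes' heterogeneous resources in proportion to their available amounts, so that the shares sum to 1 as described above; the proportional weighting rule is an assumption of this example.

```python
from typing import Dict

def build_allocation(nodes: Dict[str, Dict[str, float]]) -> Dict[str, Dict[str, float]]:
    """nodes maps node_id -> {resource_type: available_amount} for eligible target nodes.

    Returns allocation[node_id][resource_type] = share of the target task,
    with all shares summing to 1.
    """
    capacity = sum(amount for res in nodes.values() for amount in res.values())
    if capacity <= 0:
        return {}
    return {node_id: {r: amount / capacity for r, amount in res.items()}
            for node_id, res in nodes.items()}

alloc = build_allocation({"node-a": {"cpu": 4.0, "gpu": 2.0}, "node-b": {"fpga": 2.0}})
print(alloc)   # shares proportional to capacity: cpu 0.5, gpu 0.25, fpga 0.25
```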
S202:将资源分配策略发送至各目标计算节点,以使各目标计算节点利用预设策略学习模型对资源分配策略和本地资源信息进行处理,获得资源共享策略,并利用资源共享策略执行目标任务;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得。S202: Send the resource allocation strategy to each target computing node, so that each target computing node uses a preset strategy learning model to process the resource allocation strategy and local resource information, obtains a resource sharing strategy, and uses the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
本步骤可以实现资源分配策略的下发,即将资源分配策略下发至各个目标计算节点中,由各个目标计算节点基于该资源分配策略实现目标任务的执行。其中,每一个目标计算节点执行目标任务的实现流程如下:This step can implement the distribution of resource allocation strategies, that is, the resource allocation strategies are distributed to each target computing node, and each target computing node implements the execution of the target task based on the resource allocation strategy. The implementation process of each target computing node executing the target task is as follows:
首先,在边缘网络的各个计算节点中,均预先创建有适应于自身的策略学习模型,对于目标计算节点而言,模型的输入为边缘网络下发的资源分配策略和目标计算节点自身的资源信息(即上述本地资源信息),该本地资源信息主要是指目标计算节点本身预计提供的可用资源,模型的输出即为资源共享策略。其中,预设策略学习模型是以计算节点的任务处理价值为优化目标训练获得的,该任务处理价值是指计算节点在进行任务处理时所获得的收益,也就是将计算节点的收益需求考虑在内,构建可以使得计算节点的任务处理价值达到最大化的预设策略学习模型,换而言之,基于该预设策略学习模型学习得到的资源共享策略,可以使得计算节点的任务处理收益达到最大化,以便于实现任务请求者和资源提供者的双赢。First, in each computing node of the edge network, a policy learning model adapted to itself is pre-created. For the target computing node, the input of the model is the resource allocation strategy issued by the edge network and the resource information of the target computing node itself (i.e., the local resource information mentioned above). The local resource information mainly refers to the available resources that the target computing node itself is expected to provide. The output of the model is the resource sharing strategy. Among them, the preset policy learning model is obtained by training with the task processing value of the computing node as the optimization goal. The task processing value refers to the benefit obtained by the computing node when performing task processing, that is, taking the benefit demand of the computing node into consideration, and constructing a preset policy learning model that can maximize the task processing value of the computing node. In other words, the resource sharing strategy learned based on the preset policy learning model can maximize the task processing benefit of the computing node, so as to achieve a win-win situation for the task requester and the resource provider.
进一步,预设策略学习模型输出资源共享策略即为最终的分配策略,其中包括有当前目标计算节点内部各异构资源的可提供资源,这里的可提供资源是指为处理当前目标任务所提供的资源,可以是每一种异构资源中为处理当前目标任务提供的资源与自身实际提供的可用资源的占比。Furthermore, the resource sharing strategy output by the preset strategy learning model is the final allocation strategy, which includes the available resources of various heterogeneous resources within the current target computing node. The available resources here refer to the resources provided for processing the current target task, which can be the ratio of the resources provided by each heterogeneous resource for processing the current target task to the available resources actually provided by itself.
当然，当目标计算节点为多个时，每一个目标计算节点均会输出一个适用于自身的资源共享策略，且每一个资源共享策略均明确指示了自身计算节点内部各异构资源为目标任务提供的计算资源，由此，基于各个目标计算节点中各异构资源实际提供的计算资源即可实现目标任务的处理。Of course, when there are multiple target computing nodes, each target computing node outputs a resource sharing strategy suitable for itself, and each resource sharing strategy clearly indicates the computing resources that the heterogeneous resources inside its own computing node provide for the target task. Thus, the processing of the target task can be realized based on the computing resources actually provided by the heterogeneous resources in each target computing node.
最后,即可基于资源共享策略执行目标任务。当然,此处执行目标任务可以为利用资源共享策略中指示的各异构资源的可提供资源执行该目标任务。可以理解的是,由于目标任务可能是由多个目标计算节点中的各个异构资源共同执行,因此,资源共享策略中的“共享”,是指执行目标任务的各个目标计算节点中的各个异构资源在执行目标任务期间可以进行信息共享,进而保证目标任务可以被执行完毕以及保证目标任务执行结果的准确性。Finally, the target task can be executed based on the resource sharing strategy. Of course, the target task here can be executed by using the available resources of the heterogeneous resources indicated in the resource sharing strategy. It can be understood that since the target task may be jointly executed by the heterogeneous resources in multiple target computing nodes, the "sharing" in the resource sharing strategy means that the heterogeneous resources in the target computing nodes that execute the target task can share information during the execution of the target task, thereby ensuring that the target task can be completed and the accuracy of the target task execution result is guaranteed.
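The application specifies only that the preset policy learning model is trained with the computing node's task processing value as the optimization target and does not prescribe a particular learning algorithm. Purely as an illustration of what such a value signal could look like, the sketch below computes the payment for the processed share of the task minus the running cost of the resources that were occupied; all prices and costs are invented for the example.

```python
from typing import Dict

def task_processing_value(processed: Dict[str, float],
                          offered_time: Dict[str, float],
                          price_per_unit: Dict[str, float],
                          cost_per_second: Dict[str, float]) -> float:
    """Illustrative value signal: payment for processed work minus resource running cost."""
    revenue = sum(processed[r] * price_per_unit.get(r, 0.0) for r in processed)
    expense = sum(offered_time[r] * cost_per_second.get(r, 0.0) for r in offered_time)
    return revenue - expense

value = task_processing_value(
    processed={"cpu": 0.3, "gpu": 0.7},          # shares of the task each resource handled
    offered_time={"cpu": 120.0, "gpu": 45.0},    # seconds each resource was occupied
    price_per_unit={"cpu": 10.0, "gpu": 10.0},
    cost_per_second={"cpu": 0.01, "gpu": 0.05},
)
print(value)   # the quantity a training procedure would seek to maximize
```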
可见，本申请在一些实施例中所提供的任务处理方法，在边缘网络的各个计算节点中预先部署策略学习模型，并基于该预设策略学习模型实现资源共享策略的确定，从而利用该资源共享策略实现任务处理，由于预设策略学习模型是以计算节点的任务处理价值为优化目标进行训练获得的，因此，基于获得的资源共享策略进行任务处理可以使得计算节点的任务处理价值得到最大化，并保证任务处理完毕，从而实现任务请求者和资源提供者的双赢；此外，由边缘网络创建资源分配策略，再由计算节点基于资源分配策略学习得到资源共享策略，且资源分配策略和资源共享策略均将计算节点中的异构资源考虑在内，进一步实现了计算节点内部异构资源的合理分配。It can be seen that in the task processing method provided in some embodiments of the present application, a policy learning model is pre-deployed in each computing node of the edge network, and the resource sharing strategy is determined based on this preset policy learning model, so that task processing is carried out by using the resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization goal, processing tasks on the basis of the obtained resource sharing strategy maximizes the task processing value of the computing node and ensures that the task is completed, thereby achieving a win-win situation for the task requester and the resource provider. In addition, the resource allocation strategy is created by the edge network, and the computing node then learns the resource sharing strategy from the resource allocation strategy; both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources in the computing node into account, thereby further realizing a reasonable allocation of the heterogeneous resources within the computing node.
在一些实施例中,上述根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点,可以包括如下步骤:In some embodiments, the above-mentioned determination of the target computing node according to the task information of the target task and the local resource information of each computing node may include the following steps:
根据本地资源信息确定资源提供时间和可用资源;Determine resource provision time and available resources based on local resource information;
当资源提供时间和可用资源均满足任务信息指示的任务需求时,确定计算节点为目标计算节点。When both the resource provision time and the available resources meet the task requirements indicated by the task information, the computing node is determined to be the target computing node.
本申请在一些实施例中提供了一种筛选确定目标计算节点的实现方法。首先,要想实现任务处理,目标计算节点的可用资源需要满足目标任务的实际需求;其次,计算节点的所有者对资源的分享意愿有时会随时间的变化而产生波动,如当某计算节点较为空闲时,则该计算节点的拥有者更倾向于分享其空闲资源,为其它用户进行服务,并获得一定的奖励或收益,因此,目标计算节点的资源提供时间也应当满足目标任务的实际需求。The present application provides a method for implementing a screening and determination of target computing nodes in some embodiments. First, in order to realize task processing, the available resources of the target computing node need to meet the actual needs of the target task; second, the willingness of the owner of the computing node to share resources sometimes fluctuates over time. For example, when a computing node is relatively idle, the owner of the computing node is more inclined to share its idle resources, serve other users, and obtain certain rewards or benefits. Therefore, the resource provision time of the target computing node should also meet the actual needs of the target task.
在此基础上,在实现过程中,在获得各个计算节点的本地资源信息之后,可以先根据该本地资源信息确定对应计算节点的资源提供时间和可用资源,然后判断这二者是否满足目标任务的任务信息中指示的任务需求,并在满足任务需求时将当前计算节点确认为目标计算节点。当有任何一类信息不满足任务需求时,则不会被确认为目标计算节点。On this basis, in the implementation process, after obtaining the local resource information of each computing node, the resource provision time and available resources of the corresponding computing node can be determined according to the local resource information, and then it can be determined whether the two meet the task requirements indicated in the task information of the target task, and the current computing node is confirmed as the target computing node when the task requirements are met. If any type of information does not meet the task requirements, it will not be confirmed as the target computing node.
当然,上述选择标准仅为本申请在一些实施例中所提供的一种实现方式,并不唯一,可以根据资源提供者和任务请求者的实际需求进行设定,本申请对此不做限定。Of course, the above selection criteria are only one implementation method provided in some embodiments of the present application, and are not unique. They can be set according to the actual needs of the resource provider and the task requester, and the present application does not limit this.
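As a concrete illustration of the screening criterion described above, the following Python sketch keeps a computing node only when both its sharing time window and its available resources cover the task's requirements. The field names (share_window, available) and the demand/deadline parameters are assumptions made for this example, not identifiers defined in this application.

```python
# Illustrative sketch of the node-screening criterion above; names are assumptions.
from dataclasses import dataclass
from typing import Dict, Tuple, List

@dataclass
class NodeInfo:
    node_id: str
    available: Dict[str, float]        # e.g. {"cpu": 8.0, "gpu": 2.0, "fpga": 1.0}
    share_window: Tuple[float, float]  # (start, end) of the owner's sharing period

def select_target_nodes(nodes: List[NodeInfo], demand: Dict[str, float],
                        task_start: float, task_deadline: float) -> List[NodeInfo]:
    """Keep a node only if its sharing window covers the task period and every
    requested resource type is available in sufficient quantity."""
    targets = []
    for node in nodes:
        start, end = node.share_window
        time_ok = start <= task_start and end >= task_deadline
        resources_ok = all(node.available.get(k, 0.0) >= v for k, v in demand.items())
        if time_ok and resources_ok:
            targets.append(node)
    return targets
```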
在一些实施例中,上述根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略之前,还可以包括如下步骤:In some embodiments, before determining the target computing node and the resource allocation strategy based on the task information of the target task and the local resource information of each computing node, the following steps may also be included:
统计各计算节点发布的本地资源信息;其中,本地资源信息由计算节点在本地资源信息满足预设条件时发布至边缘网络。The local resource information published by each computing node is counted; wherein the local resource information is published to the edge network by the computing node when the local resource information meets the preset conditions.
可以理解的是,资源分配策略由边缘网络基于目标任务的任务信息和各计算节点的本地资源信息构建获得,因此,边缘网络在构建资源分配策略之前,需要先对各计算节点的本地资源信息进行获取。其中,本地资源信息是由相应的计算节点发布至边缘网络中的,因此,边缘网络在接收到目标任务时可以直接抓取各个计算节点发布的本地资源信息。 It is understandable that the resource allocation strategy is constructed by the edge network based on the task information of the target task and the local resource information of each computing node. Therefore, before constructing the resource allocation strategy, the edge network needs to obtain the local resource information of each computing node. Among them, the local resource information is published to the edge network by the corresponding computing node. Therefore, when the edge network receives the target task, it can directly capture the local resource information published by each computing node.
此外,各个计算节点将本地资源信息发布至边缘网络中,是在本地资源信息满足预设条件的情况下执行的,当本地资源信息不满足自身的预设条件时,将不会进行本地资源信息发布。其中,预设条件则是由计算节点的拥有者根据自身实际需求预先设定的发布条件,并不唯一,例如,可以为自身的空闲资源占比达到预设阈值以上,当前时间满足预先设定的资源共享时间等。In addition, each computing node publishes local resource information to the edge network only when the local resource information meets the preset conditions. If the local resource information does not meet its own preset conditions, the local resource information will not be published. The preset conditions are publishing conditions pre-set by the owner of the computing node according to its actual needs, and are not unique. For example, the idle resource ratio can reach above the preset threshold, and the current time can meet the preset resource sharing time.
在一些实施例中,该任务处理方法还可以包括如下步骤:In some embodiments, the task processing method may further include the following steps:
获取目标计算节点上传的目标任务的执行结果;Get the execution result of the target task uploaded by the target computing node;
将执行结果反馈至目标任务的发起端。Feedback the execution results to the initiator of the target task.
本申请在一些实施例中所提供的任务处理方法还可以进一步实现执行结果反馈功能,该执行结果即为目标任务的执行结果。对于各目标计算节点而言,其在执行完成目标任务之后,可以进一步将执行结果上报至边缘网络;对于边缘网络而言,其在接收到目标计算节点上传的执行结果之后,即可将最终的执行结果反馈至目标任务的发起端。The task processing method provided in some embodiments of the present application can further implement the execution result feedback function, and the execution result is the execution result of the target task. For each target computing node, after completing the target task, it can further report the execution result to the edge network; for the edge network, after receiving the execution result uploaded by the target computing node, it can feedback the final execution result to the initiator of the target task.
本申请在一些实施例中提供了又一种任务处理方法。The present application provides yet another task processing method in some embodiments.
参考图4,图4为本申请所提供的又一种任务处理方法的流程示意图,该任务处理方法可包括如下步骤:Referring to FIG. 4 , FIG. 4 is a flowchart of another task processing method provided by the present application. The task processing method may include the following steps:
1、分布式边缘网络异构计算资源主动发现策略。各分布式的边缘计算节点,根据自身运行状态及可用资源情况,主动适时地向边缘网络发布资源状态标签;1. Active discovery strategy for heterogeneous computing resources in distributed edge networks. Each distributed edge computing node actively and timely publishes resource status tags to the edge network based on its own operating status and available resources;
2、基于异构计算的用户任务资源组合分配策略。根据当前用户的计算任务请求信息,以及当前边缘网络中获得的计算资源状态标签,统计各类异构计算资源的可调度情况,得到关于该计算任务的异构计算资源组合分配方案。2. User task resource combination allocation strategy based on heterogeneous computing. According to the current user's computing task request information and the computing resource status label obtained in the current edge network, the schedulable status of various types of heterogeneous computing resources is counted to obtain the heterogeneous computing resource combination allocation plan for the computing task.
3、异构计算节点资源共享策略学习。利用多智能体策略学习技术,各异构边缘节点根据上一步给出的异构计算资源组合分配方案,输出自身的计算资源共享策略。3. Learning heterogeneous computing node resource sharing strategies. Using multi-agent strategy learning technology, each heterogeneous edge node outputs its own computing resource sharing strategy based on the heterogeneous computing resource combination allocation plan given in the previous step.
上述各步骤的实现流程如下:The implementation process of the above steps is as follows:
1、分布式边缘网络异构计算资源主动发现策略。1. Active discovery strategy for heterogeneous computing resources in distributed edge networks.
在边缘计算网络中,各个边缘计算节点在地理空间中分布部署,其资源提供者来源较为多样,这些边缘计算节点通常具有异构性,不同边缘计算节点能够提供的计算资源的能力和类型可能存在差异。此外,所有者对计算资源的分享意愿也随时间的变化而产生波动。因此,可以根据一定的策略,主动的将边缘网络中的异构计算资源进行共享和发现。In an edge computing network, each edge computing node is distributed and deployed in geographic space, and its resource providers come from a variety of sources. These edge computing nodes are usually heterogeneous, and the capabilities and types of computing resources that different edge computing nodes can provide may vary. In addition, the owner's willingness to share computing resources also fluctuates over time. Therefore, according to certain strategies, the heterogeneous computing resources in the edge network can be actively shared and discovered.
(1)边缘节点计算资源标签设置:(1) Edge node computing resource label setting:
以标签的形式，边缘计算节点所有者对其所拥有的异构计算资源进行描述，其标签可以表示为如下多元组：In the form of a label, the owner of an edge computing node describes the heterogeneous computing resources it owns; the label can be expressed as the following tuple:
Label_i = {Thd_i, Time_i, C_i, G_i, F_i};
其中，Label_i表示边缘计算节点i用来进行主动资源发现的标签，Thd_i表示该边缘计算节点可以发布标签的阈值，该阈值可以为当前所有可用计算资源占该边缘计算节点实际拥有的计算资源总量的百分比，Time_i表示拥有者设定的可以进行资源共享的时段，C_i、G_i、F_i分别表示该边缘计算节点当前可以利用的CPU、GPU以及FPGA计算资源的情况。Here Label_i denotes the label used by edge computing node i for active resource discovery, Thd_i denotes the threshold at which the node may publish the label, which can be the percentage of all currently available computing resources relative to the total computing resources the node actually owns, Time_i denotes the time period set by the owner during which resources may be shared, and C_i, G_i and F_i respectively denote the CPU, GPU and FPGA computing resources currently available on the node.
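For readability, the label tuple Label_i = {Thd_i, Time_i, C_i, G_i, F_i} described above can be pictured as a simple record; the following Python sketch is only illustrative, and the field names are assumptions rather than identifiers defined in this application.

```python
# A minimal record mirroring the label tuple Label_i = {Thd_i, Time_i, C_i, G_i, F_i}.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ResourceLabel:
    node_id: str
    thd: float                        # Thd_i: publishing threshold (available / total, in [0, 1])
    time_window: Tuple[float, float]  # Time_i: (start, end) of the owner-defined sharing period
    cpu: float                        # C_i: currently available CPU resource
    gpu: float                        # G_i: currently available GPU resource
    fpga: float                       # F_i: currently available FPGA resource
```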
(2)资源主动发现策略:(2) Active resource discovery strategy:
在该边缘计算节点上,常态运行资源发布模块,用来实时监控自身各类异构资源的利用情况以及当前时间,当满足发布条件时,该模块主动向边缘网络中发布标签Labeli,以表示此时该边缘计算节点上的计算资源可以在时间Timei范围内被共享,且可共享的资源类型及大小分别为Ci,Gi,FiOn this edge computing node, the resource publishing module is normally run to monitor the utilization of various heterogeneous resources and the current time in real time. When the publishing conditions are met, the module actively publishes the label Label i to the edge network to indicate that the computing resources on the edge computing node can be shared within the time range Time i , and the types and sizes of the sharable resources are Ci , Gi, and F i respectively.
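A minimal sketch of the publishing check described above, reusing the ResourceLabel record from the previous sketch: the label is published only when the idle-resource ratio reaches the threshold Thd_i and the current time lies inside Time_i. The helper publish_to_edge_network and the ratio computation are assumptions for illustration only.

```python
# Sketch of the resource publishing module's check; publish_to_edge_network is a
# hypothetical callback, not an API of this application.
import time

def maybe_publish(label: ResourceLabel, total_resources: float, idle_resources: float,
                  publish_to_edge_network) -> bool:
    idle_ratio = idle_resources / total_resources if total_resources > 0 else 0.0
    start, end = label.time_window
    now = time.time()
    if idle_ratio >= label.thd and start <= now <= end:
        publish_to_edge_network(label)  # announce the sharable CPU/GPU/FPGA resources
        return True
    return False
```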
2、基于异构计算的用户任务资源组合分配策略。2. User task resource combination allocation strategy based on heterogeneous computing.
当边缘网络中产生计算任务(目标任务)时,任务调度模块基于可用的异构计算资源,针对该用户的计算任务进行资源组合分配,以提高边缘网络资源利用率,并满足用户计算任务的各项约束要求,提高用户的体验质量。When a computing task (target task) is generated in the edge network, the task scheduling module allocates resources for the user's computing task based on the available heterogeneous computing resources to improve the utilization of edge network resources, meet the various constraints of the user's computing task, and improve the user's quality of experience.
(1)异构计算资源信息收集。任务调度模块首先根据当前各计算节点主动发布的标签,统计各类异构计算资源的可调度情况,筛选在用户计算任务完成时段内可以利用的计算节点;在此条件下,统计可以被调度的异构计算资源是否能够满足用户计算任务,若可以,则进行下一步,否则该计算任务无法被满足。(1) Heterogeneous computing resource information collection. The task scheduling module first counts the schedulable status of various heterogeneous computing resources based on the tags actively released by each computing node, and selects the computing nodes that can be used during the time period when the user's computing task is completed. Under this condition, it counts whether the heterogeneous computing resources that can be scheduled can meet the user's computing task. If so, it proceeds to the next step. Otherwise, the computing task cannot be met.
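A simplified sketch of this collection step: keep the labels whose sharing window covers the task's completion period, pool their CPU/GPU/FPGA capacity, and compare the pool with the task demand. It assumes the ResourceLabel fields from the earlier sketch and a hypothetical demand dictionary; the real feasibility check may be more involved.

```python
# Pool schedulable labels and check whether they can cover the task demand.
from typing import Dict, List, Tuple

def schedulable_pool(labels: List["ResourceLabel"], task_start: float,
                     task_deadline: float) -> Tuple[List["ResourceLabel"], Dict[str, float]]:
    usable = [l for l in labels
              if l.time_window[0] <= task_start and l.time_window[1] >= task_deadline]
    totals = {"cpu": sum(l.cpu for l in usable),
              "gpu": sum(l.gpu for l in usable),
              "fpga": sum(l.fpga for l in usable)}
    return usable, totals

def demand_satisfiable(totals: Dict[str, float], demand: Dict[str, float]) -> bool:
    return all(totals.get(k, 0.0) >= v for k, v in demand.items())
```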
(2)异构资源组合分配策略生成。根据各类计算资源处理单位计算任务时所需要的消耗，以及用户计算任务的大小，可以得到基于异构计算资源组合的多个分配策略。假设CPU、GPU以及FPGA分别处理单位计算任务量所需的资源消耗为实数c、g、f，某产生于t时刻的计算任务的总任务量为实数m_t，且该计算任务可以被划分为多个实数部分，其中分别对应CPU、GPU以及FPGA所处理的任务量为实数x、y、z，在此条件下计算任务t总的消耗为：(2) Generation of the heterogeneous resource combination allocation strategy. Based on the consumption required by each type of computing resource to process a unit of computing task and the size of the user's computing task, multiple allocation strategies based on combinations of heterogeneous computing resources can be obtained. Assume that the resource consumption required by the CPU, GPU and FPGA to process a unit of task is given by the real numbers c, g and f respectively, that the total amount of a computing task generated at time t is the real number m_t, and that the task can be divided into real-valued parts, the amounts processed by the CPU, GPU and FPGA being the real numbers x, y and z respectively. Under these conditions, the total consumption of computing task t is:
Cost_t = c·x + g·y + f·z;
相应的限制条件为：The corresponding constraints are:
T_t < T_thd;
m_t = x + y + z;
其中，T_t表示针对该计算任务约束项的真实值，CPU、GPU、FPGA等异构计算资源对该t时刻的计算任务各有相应的约束因子，则异构计算资源的组合分配策略应该在满足任务约束项T_thd及计算任务总量m_t的条件下，尽可能地降低总消耗Cost_t，优化边缘网络的平均开销，并提升用户体验质量。求解该目标函数最优解的过程可以表示为线性规划问题，利用线性规划求解器对上述目标函数求解即可得到最优的异构计算资源组合策略。Here T_t denotes the actual value of the constraint item for this computing task, and the CPU, GPU and FPGA heterogeneous computing resources each have corresponding constraint factors for the task at time t. The combination allocation strategy for heterogeneous computing resources should therefore reduce the total consumption Cost_t as much as possible while satisfying the task constraint item T_thd and the total task amount m_t, so as to optimize the average overhead of the edge network and improve the quality of the user experience. Finding the optimum of this objective function can be expressed as a linear programming problem, and the optimal heterogeneous computing resource combination strategy can be obtained by solving the above objective with a linear programming solver.
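As a hedged illustration of solving this combination-allocation problem as a linear program, the sketch below uses scipy.optimize.linprog. The cost coefficients, the task size and the linearized form of the time constraint are illustrative assumptions, not values or a constraint form fixed by this application.

```python
# Hedged linear-programming sketch of the combination-allocation step.
from scipy.optimize import linprog

c_cost, g_cost, f_cost = 1.0, 0.6, 0.8   # consumption per unit task on CPU / GPU / FPGA
tau = [0.05, 0.02, 0.03]                 # assumed per-unit processing-time factors
T_thd = 2.0                              # assumed bound on the task constraint item
m_t = 60.0                               # total task amount generated at time t

result = linprog(
    c=[c_cost, g_cost, f_cost],          # minimize Cost_t = c*x + g*y + f*z
    A_ub=[tau], b_ub=[T_thd],            # assumed linearized time constraint
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[m_t],  # x + y + z = m_t
    bounds=[(0.0, None)] * 3,
    method="highs",
)
x_opt, y_opt, z_opt = result.x           # optimal split {x, y, z}_opt over CPU / GPU / FPGA
```

The resulting split {x, y, z}_opt then serves as the allocation input to the policy learning step described below.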
3、异构计算节点资源共享策略学习。3. Learning resource sharing strategies for heterogeneous computing nodes.
利用多智能体策略学习技术,各边缘计算节点根据上一步骤中得到的最优资源组合策略,学习得到自身的资源分配策略。在当前边缘计算节点集合中,各边缘计算节点通过将部分计算资源进行共享,用于当前计算任务的处理,并获得相应的收益。Using multi-agent strategy learning technology, each edge computing node learns its own resource allocation strategy based on the optimal resource combination strategy obtained in the previous step. In the current set of edge computing nodes, each edge computing node shares part of its computing resources for the processing of the current computing task and obtains corresponding benefits.
对于集合中的边缘计算节点i，设其对当前计算任务的资源共享策略为：For edge computing node i in the set, let its resource sharing strategy for the current computing task be:
a_i = {a_i^C, a_i^G, a_i^F};
其中，a_i^C、a_i^G、a_i^F分别为其预计分配给当前计算任务的各类异构计算资源（CPU、GPU、FPGA）占其自身当前可用资源的比例，均为0至1之间的实数，且满足：where a_i^C, a_i^G and a_i^F are the proportions of the CPU, GPU and FPGA resources that node i expects to allocate to the current computing task, relative to its own currently available resources; each is a real number between 0 and 1, and they satisfy:
Σ_i a_i^C·C_i ≥ x, Σ_i a_i^G·G_i ≥ y, Σ_i a_i^F·F_i ≥ z;
即各边缘计算节点分享的计算资源能够满足计算任务的最优异构资源分配策略。各边缘计算节点的策略学习目标为生成最优的资源共享策略a_i，使其能够最大化当前获得的收益r_i：That is, the computing resources shared by the edge computing nodes are able to satisfy the optimal heterogeneous resource allocation strategy of the computing task. The strategy learning goal of each edge computing node is to generate the optimal resource sharing strategy a_i that maximizes its currently obtained benefit r_i:
r_i = r_i^C + r_i^G + r_i^F;
r_i^C = p^C·a_i^C·C_i, r_i^G = p^G·a_i^G·G_i, r_i^F = p^F·a_i^F·F_i;
其中，r_i^C、r_i^G、r_i^F分别表示其各类异构资源共享给当前计算任务所获得的奖励。奖励表示资源拥有者将自身拥有的各类异构计算资源提供给任务处理时能够获得的收益，且每个资源拥有者都追求其所获得的收益最大化。上式中的p^C、p^G、p^F分别表示单位异构计算资源处理任务时能够获得的奖励值，即衡量共享某一类资源时的基础收益，为正实数。一般来说，异构资源对不同类型的计算任务处理能力表现不同，因此，需要通过设置面对计算任务时异构资源的单位收益来进一步提高对资源的使用效率以及提高资源拥有者的收益水平。Here r_i^C, r_i^G and r_i^F denote the rewards obtained by sharing the corresponding heterogeneous resources with the current computing task. The reward represents the benefit a resource owner can obtain by providing its heterogeneous computing resources for task processing, and every resource owner seeks to maximize the benefit it obtains. In the above expressions, p^C, p^G and p^F denote the reward obtained when a unit of the corresponding heterogeneous computing resource processes the task, i.e. the basic benefit of sharing that type of resource, and they are positive real numbers. In general, heterogeneous resources differ in their ability to process different types of computing tasks, so the per-unit benefit of each heterogeneous resource for a given computing task needs to be set so as to further improve resource utilization efficiency and raise the benefit level of the resource owners.
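A small sketch of the benefit computation described above: each resource type contributes an assumed unit reward multiplied by the amount actually shared (sharing ratio times currently available amount). The dictionary keys and the numbers in the usage line are illustrative assumptions.

```python
# Sketch of the per-node benefit r_i from sharing CPU/GPU/FPGA resources.
from typing import Dict

def node_reward(share: Dict[str, float], available: Dict[str, float],
                unit_reward: Dict[str, float]) -> float:
    # share:       {"cpu": a_C, "gpu": a_G, "fpga": a_F}, ratios in [0, 1]
    # available:   {"cpu": C_i, "gpu": G_i, "fpga": F_i}
    # unit_reward: positive reals, the basic benefit per unit of each resource type
    return sum(unit_reward[k] * share[k] * available[k] for k in share)

r_i = node_reward({"cpu": 0.5, "gpu": 0.25, "fpga": 0.0},
                  {"cpu": 8.0, "gpu": 2.0, "fpga": 1.0},
                  {"cpu": 1.0, "gpu": 3.0, "fpga": 2.0})
```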
利用多智能体策略学习技术，对每个边缘计算节点的资源共享策略进行学习，采用集中训练、分布执行的机制。设置边缘计算节点的策略学习网络（预设策略学习模型）为N_i，网络参数为ω_i，其输入为当前计算任务的异构计算资源组合分配策略s_1={x,y,z}_opt以及该边缘计算节点的当前可用异构计算资源s_2={C_i,G_i,F_i}，策略学习网络的输出为资源共享决策a_i。Using multi-agent policy learning technology, the resource sharing strategy of each edge computing node is learned under a centralized-training, distributed-execution mechanism. Let the policy learning network (the preset policy learning model) of edge computing node i be N_i with network parameters ω_i; its input is the heterogeneous computing resource combination allocation strategy s_1 = {x, y, z}_opt of the current computing task together with the node's currently available heterogeneous computing resources s_2 = {C_i, G_i, F_i}, and the output of the policy learning network is the resource sharing decision a_i.
此外,每个边缘计算节点设置单独的价值评价网络Qi,其参数为θi,为了得到最优的共享决策输出,需要对Ni的参数进行训练,每个边缘计算节点的训练过程如下:In addition, each edge computing node sets a separate value evaluation network Qi , whose parameters are θi . In order to obtain the optimal shared decision output, the parameters of Ni need to be trained. The training process of each edge computing node is as follows:
(1)将边缘计算节点i获取到的状态信息si={s1,s2}作为本地决策网络的输入,得到当前输出ai,执行动作ai,系统则转换到新的状态s′i以及相应的收益ri,将{si,ai,ri,s′i}保存在集合D;(1) The state information s i = {s 1 , s 2 } obtained by edge computing node i is used as the input of the local decision network to obtain the current output a i , execute action a i , and the system is converted to the new state s ′ i and the corresponding benefit ri , and {s i , a i , ri , s ′ i } is saved in the set D;
(2)当集合D中的记录数大于M时，从集合D中采样M次，计算目标价值：(2) When the number of records in set D is greater than M, sample M records from D and compute the target value:
y_m = r_i^m + γ·Q_i(s'_m, a'_1, …, a'_N)|_{a'_j = N_j(s'_j)};
其中，s'_m表示所有边缘计算节点观测到的下一个环境信息，γ为0到1之间的实数，表示折扣因子；where s'_m denotes the next environment information observed by all edge computing nodes, and γ is a real number between 0 and 1 representing the discount factor;
(3)根据最小化损失函数来更新各边缘计算节点的价值评价网络Q_i的参数θ_i，最小化损失函数为：(3) Update the parameters θ_i of the value evaluation network Q_i of each edge computing node by minimizing the loss function, where the loss function to be minimized is:
L(θ_i) = (1/M)·Σ_{m=1}^{M} (y_m − Q_i(s_m, a_1^m, …, a_N^m))²;
其中，s_m表示所有边缘计算节点观测到的当前环境信息；where s_m denotes the current environment information observed by all edge computing nodes;
(4)策略学习网络N_i的更新梯度为：(4) The update gradient of the policy learning network N_i is:
∇_{ω_i}J ≈ (1/M)·Σ_{m=1}^{M} ∇_{ω_i}N_i(s_i^m)·∇_{a_i}Q_i(s_m, a_1^m, …, a_i, …, a_N^m)|_{a_i = N_i(s_i^m)};
(5)训练收敛或达到指定的训练次数后,完成训练,并将Ni部署在边缘计算节点中,生成当前边缘计算节点的资源共享策略。(5) After the training converges or reaches the specified number of training times, the training is completed, and Ni is deployed in the edge computing node to generate the resource sharing strategy of the current edge computing node.
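To make steps (1)-(5) more concrete, the following is a much-simplified, single-node PyTorch sketch of the training loop: a policy network N_i maps the observed state to a sharing decision in [0, 1]^3, and a value network Q_i is fitted to a discounted target before the policy is updated through Q_i. Network sizes, the replay handling and the flat state layout are illustrative assumptions; the scheme described here is multi-agent with centralized training and distributed execution.

```python
# Much-simplified, single-node sketch of the actor-critic update in steps (1)-(5).
import random
import torch
import torch.nn as nn

state_dim, action_dim = 6, 3   # s_i = {s1, s2}: 3 allocated amounts + 3 available amounts
N_i = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                    nn.Linear(64, action_dim), nn.Sigmoid())
Q_i = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_actor = torch.optim.Adam(N_i.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(Q_i.parameters(), lr=1e-3)
D, M, gamma = [], 64, 0.95     # replay set D of (s, a, r, s') tensors, batch size M, discount

def train_step() -> None:
    if len(D) < M:
        return
    batch = random.sample(D, M)
    s  = torch.stack([b[0] for b in batch])
    a  = torch.stack([b[1] for b in batch])
    r  = torch.tensor([b[2] for b in batch]).unsqueeze(1)
    s2 = torch.stack([b[3] for b in batch])

    with torch.no_grad():      # target value: y = r + gamma * Q_i(s', N_i(s'))
        y = r + gamma * Q_i(torch.cat([s2, N_i(s2)], dim=1))
    critic_loss = ((y - Q_i(torch.cat([s, a], dim=1))) ** 2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    actor_loss = -Q_i(torch.cat([s, N_i(s)], dim=1)).mean()  # ascend Q_i w.r.t. policy params
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
```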
可见，本申请在一些实施例中所提供的任务处理方法，在边缘网络的各个计算节点中预先部署策略学习模型，并基于该预设策略学习模型实现资源共享策略的确定，从而利用该资源共享策略实现任务处理，由于预设策略学习模型是以计算节点的任务处理价值为优化目标进行训练获得的，因此，基于获得的资源共享策略进行任务处理可以使得计算节点的任务处理价值得到最大化，并保证任务处理完毕，从而实现任务请求者和资源提供者的双赢；此外，由边缘网络创建资源分配策略，再由计算节点基于资源分配策略学习得到资源共享策略，且资源分配策略和资源共享策略均将计算节点中的异构资源考虑在内，进一步实现了计算节点内部异构资源的合理分配。It can be seen that the task processing method provided in some embodiments of the present application deploys a policy learning model in advance on each computing node of the edge network and determines the resource sharing strategy based on this preset policy learning model, so that task processing is carried out using that resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization objective, processing tasks according to the obtained resource sharing strategy maximizes the task processing value of the computing node while ensuring that the task is completed, thereby achieving a win-win outcome for the task requester and the resource provider. In addition, the resource allocation strategy is created by the edge network, and each computing node then learns its resource sharing strategy from that resource allocation strategy; since both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources within the computing node into account, a reasonable allocation of the heterogeneous resources inside the computing node is further achieved.
本申请在一些实施例中提供了一种任务处理系统。The present application provides a task processing system in some embodiments.
如图1所示,该任务处理系统可以包括边缘网络100和部署于边缘网络100中的计算节点200,每一计算节点200设置有一种或多种异构资源,其中,As shown in FIG1 , the task processing system may include an edge network 100 and computing nodes 200 deployed in the edge network 100 , each computing node 200 being provided with one or more heterogeneous resources, wherein:
边缘网络100,用于根据目标任务的任务信息和各计算节点200的本地资源信息,确定目标计算节点200和资源分配策略,并将资源分配策略发送至各目标计算节点200;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;The edge network 100 is used to determine the target computing node 200 and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node 200, and send the resource allocation strategy to each target computing node 200; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
目标计算节点200,用于利用预设策略学习模型对资源分配策略和本地资源信息进行处理,获得资源共享策略,并利用资源共享策略执行目标任务;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得。The target computing node 200 is used to process the resource allocation strategy and local resource information using a preset strategy learning model, obtain a resource sharing strategy, and use the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
可见，本申请在一些实施例中所提供的任务处理系统，在边缘网络的各个计算节点中预先部署策略学习模型，并基于该预设策略学习模型实现资源共享策略的确定，从而利用该资源共享策略实现任务处理，由于预设策略学习模型是以计算节点的任务处理价值为优化目标进行训练获得的，因此，基于获得的资源共享策略进行任务处理可以使得计算节点的任务处理价值得到最大化，并保证任务处理完毕，从而实现任务请求者和资源提供者的双赢；此外，由边缘网络创建资源分配策略，再由计算节点基于资源分配策略学习得到资源共享策略，且资源分配策略和资源共享策略均将计算节点中的异构资源考虑在内，进一步实现了计算节点内部异构资源的合理分配。It can be seen that the task processing system provided in some embodiments of the present application deploys a policy learning model in advance on each computing node of the edge network and determines the resource sharing strategy based on this preset policy learning model, so that task processing is carried out using that resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization objective, processing tasks according to the obtained resource sharing strategy maximizes the task processing value of the computing node while ensuring that the task is completed, thereby achieving a win-win outcome for the task requester and the resource provider. In addition, the resource allocation strategy is created by the edge network, and each computing node then learns its resource sharing strategy from that resource allocation strategy; since both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources within the computing node into account, a reasonable allocation of the heterogeneous resources inside the computing node is further achieved.
对于本申请在一些实施例中提供的系统的介绍请参照上述方法实施例,本申请在此不做赘述。For an introduction to the system provided in some embodiments of the present application, please refer to the above method embodiments, and the present application will not go into details here.
本申请在一些实施例中提供了一种任务处理装置。The present application provides a task processing device in some embodiments.
参考图5,图5为本申请所提供的一种任务处理装置的结构示意图,该任务处理装置可应用于边缘网络中的计算节点,每一计算节点设置有一种或多种异构资源,包括:Referring to FIG. 5 , FIG. 5 is a schematic diagram of the structure of a task processing device provided by the present application. The task processing device can be applied to computing nodes in an edge network. Each computing node is provided with one or more heterogeneous resources, including:
接收模块1,用于接收边缘网络下发的资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;The receiving module 1 is used to receive the resource allocation strategy sent by the edge network; the resource allocation strategy includes the processing capacity of each heterogeneous resource allocated to the target task;
处理模块2,用于将资源分配策略和本地资源信息输入至预设策略学习模型进行处理,获得资源共享策略;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得;Processing module 2 is used to input the resource allocation strategy and local resource information into the preset strategy learning model for processing to obtain the resource sharing strategy; the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
执行模块3,用于利用资源共享策略执行目标任务。The execution module 3 is used to execute the target task by utilizing the resource sharing strategy.
可见，本申请在一些实施例中所提供的任务处理装置，在边缘网络的各个计算节点中预先部署策略学习模型，并基于该预设策略学习模型实现资源共享策略的确定，从而利用该资源共享策略实现任务处理，由于预设策略学习模型是以计算节点的任务处理价值为优化目标进行训练获得的，因此，基于获得的资源共享策略进行任务处理可以使得计算节点的任务处理价值得到最大化，并保证任务处理完毕，从而实现任务请求者和资源提供者的双赢；此外，由边缘网络创建资源分配策略，再由计算节点基于资源分配策略学习得到资源共享策略，且资源分配策略和资源共享策略均将计算节点中的异构资源考虑在内，进一步实现了计算节点内部异构资源的合理分配。It can be seen that the task processing device provided in some embodiments of the present application deploys a policy learning model in advance on each computing node of the edge network and determines the resource sharing strategy based on this preset policy learning model, so that task processing is carried out using that resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization objective, processing tasks according to the obtained resource sharing strategy maximizes the task processing value of the computing node while ensuring that the task is completed, thereby achieving a win-win outcome for the task requester and the resource provider. In addition, the resource allocation strategy is created by the edge network, and each computing node then learns its resource sharing strategy from that resource allocation strategy; since both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources within the computing node into account, a reasonable allocation of the heterogeneous resources inside the computing node is further achieved.
在一些实施例中,该任务处理装置还可以包括上报模块,用于在上述利用资源共享策略执行目标任务之后,获取目标任务的执行结果;将执行结果上报至边缘网络。In some embodiments, the task processing device may further include a reporting module, which is used to obtain the execution result of the target task after executing the target task using the resource sharing strategy described above; and report the execution result to the edge network.
在一些实施例中,上述异构资源可以包括CPU资源、GPU资源、FPGA资源。In some embodiments, the above heterogeneous resources may include CPU resources, GPU resources, and FPGA resources.
在一些实施例中,该任务处理装置还可以包括发布模块,用于对本地资源信息进行实时监控;当本地资源信息满足预设条件时,将本地资源信息发布至边缘网络。In some embodiments, the task processing device may further include a publishing module for real-time monitoring of local resource information; when the local resource information meets a preset condition, the local resource information is published to the edge network.
在一些实施例中,上述发布模块可用于根据本地资源信息确定可用资源;当可用资源的资源占比达到预设阈值时,将本地资源信息发布至边缘网络。In some embodiments, the publishing module may be used to determine available resources based on local resource information; when the resource ratio of available resources reaches a preset threshold, the local resource information is published to the edge network.
在一些实施例中,上述本地资源信息可以包括可用资源和资源提供时间,可用资源可以包括各异构资源的当前可用资源。In some embodiments, the local resource information may include available resources and resource provision time, and the available resources may include currently available resources of various heterogeneous resources.
对于本申请在一些实施例中提供的装置的介绍请参照上述方法实施例,本申请在此不做赘述。For an introduction to the apparatus provided in some embodiments of the present application, please refer to the above method embodiments, and the present application will not elaborate on them here.
本申请在一些实施例中提供了另一种任务处理装置。The present application provides another task processing device in some embodiments.
参考图6,图6为本申请所提供的另一种任务处理装置的结构示意图,该任务处理装置可应用于边缘网络,边缘网络中的每一计算节点设置有一种或多种异构资源,包括:Referring to FIG. 6 , FIG. 6 is a schematic diagram of the structure of another task processing device provided by the present application, which can be applied to an edge network, and each computing node in the edge network is provided with one or more heterogeneous resources, including:
确定模块4,用于根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;Determining module 4, used to determine the target computing node and resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
发送模块5,用于将资源分配策略发送至各目标计算节点,以使各目标计算节点利用预设策略学习模型对资源分配策略和本地资源信息进行处理,获得资源共享策略,并利用资源共享策略执行目标任务;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得。The sending module 5 is used to send the resource allocation strategy to each target computing node, so that each target computing node uses the preset strategy learning model to process the resource allocation strategy and local resource information, obtains the resource sharing strategy, and uses the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
可见，本申请在一些实施例中所提供的任务处理装置，在边缘网络的各个计算节点中预先部署策略学习模型，并基于该预设策略学习模型实现资源共享策略的确定，从而利用该资源共享策略实现任务处理，由于预设策略学习模型是以计算节点的任务处理价值为优化目标进行训练获得的，因此，基于获得的资源共享策略进行任务处理可以使得计算节点的任务处理价值得到最大化，并保证任务处理完毕，从而实现任务请求者和资源提供者的双赢；此外，由边缘网络创建资源分配策略，再由计算节点基于资源分配策略学习得到资源共享策略，且资源分配策略和资源共享策略均将计算节点中的异构资源考虑在内，进一步实现了计算节点内部异构资源的合理分配。It can be seen that the task processing device provided in some embodiments of the present application deploys a policy learning model in advance on each computing node of the edge network and determines the resource sharing strategy based on this preset policy learning model, so that task processing is carried out using that resource sharing strategy. Since the preset policy learning model is trained with the task processing value of the computing node as the optimization objective, processing tasks according to the obtained resource sharing strategy maximizes the task processing value of the computing node while ensuring that the task is completed, thereby achieving a win-win outcome for the task requester and the resource provider. In addition, the resource allocation strategy is created by the edge network, and each computing node then learns its resource sharing strategy from that resource allocation strategy; since both the resource allocation strategy and the resource sharing strategy take the heterogeneous resources within the computing node into account, a reasonable allocation of the heterogeneous resources inside the computing node is further achieved.
在一些实施例中,上述确定模块4可用于根据本地资源信息确定资源提供时间和可用资源;当资源提供时间和可用资源均满足任务信息指示的任务需求时,确定计算节点为目标计算节点。In some embodiments, the determination module 4 may be used to determine resource provision time and available resources based on local resource information; when both resource provision time and available resources meet the task requirements indicated by the task information, the computing node is determined to be the target computing node.
在一些实施例中,该任务处理装置还可以包括统计模块,用于在上述根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略之前,统计各计算节点发布的本地资源信息;其中,本地资源信息由计算节点在本地资源信息满足预设条件时发布至边缘网络。In some embodiments, the task processing device may also include a statistical module for counting the local resource information published by each computing node before determining the target computing node and resource allocation strategy based on the task information of the target task and the local resource information of each computing node; wherein the local resource information is published to the edge network by the computing node when the local resource information meets preset conditions.
在一些实施例中，该任务处理装置还可以包括反馈模块，用于获取目标计算节点上传的目标任务的执行结果；将执行结果反馈至目标任务的发起端。In some embodiments, the task processing device may further include a feedback module, configured to obtain the execution result of the target task uploaded by the target computing node and to feed the execution result back to the initiator of the target task.
对于本申请在一些实施例中提供的装置的介绍请参照上述方法实施例,本申请在此不做赘述。For an introduction to the apparatus provided in some embodiments of the present application, please refer to the above method embodiments, and the present application will not elaborate on them here.
本申请在一些实施例中提供了一种任务处理设备。The present application provides a task processing device in some embodiments.
参考图7,图7为本申请所提供的一种任务处理设备的结构示意图,该任务处理设备可包括:Referring to FIG. 7 , FIG. 7 is a schematic diagram of the structure of a task processing device provided by the present application, and the task processing device may include:
存储器,用于存储计算机程序;Memory for storing computer programs;
处理器,用于执行计算机程序时可实现如上述任意一种任务处理方法的步骤。The processor can implement the steps of any one of the above-mentioned task processing methods when being used to execute a computer program.
如图7所示,为任务处理设备的组成结构示意图,任务处理设备可以包括:处理器10、存储器11、通信接口12和通信总线13。处理器10、存储器11、通信接口12均通过通信总线13完成相互间的通信。As shown in Fig. 7, it is a schematic diagram of the composition structure of the task processing device, which may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13. The processor 10, the memory 11 and the communication interface 12 all communicate with each other through the communication bus 13.
本申请在一些实施例中,处理器10可以为中央处理器(Central Processing Unit,CPU)、特定应用集成电路、数字信号处理器、现场可编程门阵列或者其他可编程逻辑器件等。In some embodiments of the present application, the processor 10 may be a central processing unit (CPU), an application specific integrated circuit, a digital signal processor, a field programmable gate array, or other programmable logic devices.
处理器10可以调用存储器11中存储的程序,处理器10可以执行任务处理方法的实施例中的操作。The processor 10 may call a program stored in the memory 11 , and the processor 10 may execute operations in the embodiment of the task processing method.
存储器11中用于存放一个或者一个以上程序,程序可以包括程序代码,程序代码包括计算机操作指令,本申请在一些实施例中,存储器11中至少存储有用于实现以下功能的程序:The memory 11 is used to store one or more programs, which may include program codes, and the program codes include computer operation instructions. In some embodiments of the present application, the memory 11 stores at least a program for implementing the following functions:
接收边缘网络下发的资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;Receive the resource allocation strategy issued by the edge network; the resource allocation strategy includes the processing capacity of each heterogeneous resource allocated to the target task;
将资源分配策略和本地资源信息输入至预设策略学习模型进行处理,获得资源共享策略;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得;The resource allocation strategy and local resource information are input into the preset strategy learning model for processing to obtain the resource sharing strategy; the resource sharing strategy includes the available resources of various heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
利用资源共享策略执行目标任务;Use resource sharing strategies to execute target tasks;
或者,or,
根据目标任务的任务信息和各计算节点的本地资源信息,确定目标计算节点和资源分配策略;资源分配策略包括每一种异构资源被分配的关于目标任务的处理量;Determine the target computing node and resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
将资源分配策略发送至各目标计算节点,以使各目标计算节点利用预设策略学习模型对资源分配策略和本地资源信息进行处理,获得资源共享策略,并利用资源共享策略执行目标任务;资源共享策略包括各异构资源的可提供资源,预设策略学习模型以计算节点的任务处理价值为优化目标训练获得。The resource allocation strategy is sent to each target computing node so that each target computing node uses a preset strategy learning model to process the resource allocation strategy and local resource information, obtains a resource sharing strategy, and uses the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each heterogeneous resource, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
在一种可能的实现方式中,存储器11可包括存储程序区和存储数据区,其中,存储程序区可存储操作系统,以及至少一个功能所需的应用程序等;存储数据区可存储使用过程中所创建的数据。In a possible implementation, the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required for at least one function, etc.; the data storage area may store data created during use.
此外,存储器11可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件或其他易失性固态存储器件。In addition, the memory 11 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device or other volatile solid-state storage device.
通信接口12可以为通信模块的接口,用于与其他设备或者系统连接。The communication interface 12 may be an interface of a communication module, and is used to connect to other devices or systems.
当然，需要说明的是，图7所示的结构并不构成对本申请在一些实施例中任务处理设备的限定，在实际应用中任务处理设备可以包括比图7所示的更多或更少的部件，或者组合某些部件。Of course, it should be noted that the structure shown in FIG. 7 does not constitute a limitation on the task processing device in some embodiments of the present application; in actual applications, the task processing device may include more or fewer components than those shown in FIG. 7, or may combine certain components.
本申请在一些实施例中提供了一种非易失性计算机可读存储介质。The present application provides a non-volatile computer-readable storage medium in some embodiments.
如图8所示,本申请在一些实施例中所提供的非易失性计算机可读存储介质80上存储有计算机程序810,计算机程序810被处理器执行时可实现如上述任意一种任务处理方法的步骤。As shown in FIG8 , in some embodiments of the present application, a non-volatile computer-readable storage medium 80 stores a computer program 810 , and when the computer program 810 is executed by a processor, the steps of any of the above-mentioned task processing methods can be implemented.
该非易失性计算机可读存储介质可以包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。The non-volatile computer-readable storage medium may include: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and other media that can store program codes.
对于本申请在一些实施例中提供的非易失性计算机可读存储介质的介绍请参照上述方法实施例,本申请在此不做赘述。For an introduction to the non-volatile computer-readable storage medium provided in some embodiments of the present application, please refer to the above method embodiments, and the present application will not elaborate on them here.
说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。The various embodiments in the specification are described in a progressive manner, and each embodiment focuses on the differences from other embodiments. The same or similar parts between the various embodiments can be referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant parts can be referred to the method part description.
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Professionals may further appreciate that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. In order to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been generally described in the above description according to function. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professionals and technicians may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM或技术领域内所公知的任意其它形式的存储介质中。The steps of the method or algorithm described in conjunction with the embodiments disclosed herein may be implemented directly using hardware, a software module executed by a processor, or a combination of the two. The software module may be placed in a random access memory (RAM), a memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
以上对本申请所提供的技术方案进行了详细介绍。本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想。应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以对本申请进行若干改进和修饰,这些改进和修饰也落入本申请的保护范围内。 The technical solution provided by the present application is described in detail above. The principle and implementation method of the present application are described in detail using specific examples herein, and the description of the above embodiments is only used to help understand the method and core idea of the present application. It should be pointed out that for ordinary technicians in this technical field, without departing from the principle of the present application, several improvements and modifications can be made to the present application, and these improvements and modifications also fall within the scope of protection of the present application.

Claims (20)

  1. 一种任务处理方法,其特征在于,应用于边缘网络中的计算节点,每一所述计算节点设置有一种或多种异构资源,所述方法包括:A task processing method, characterized in that it is applied to computing nodes in an edge network, each of the computing nodes is provided with one or more heterogeneous resources, and the method comprises:
    接收所述边缘网络下发的资源分配策略;所述资源分配策略包括每一种所述异构资源被分配的关于目标任务的处理量;Receiving a resource allocation strategy issued by the edge network; the resource allocation strategy includes a processing amount of a target task to which each of the heterogeneous resources is allocated;
    将所述资源分配策略和本地资源信息输入至预设策略学习模型进行处理,获得资源共享策略;所述资源共享策略包括各所述异构资源的可提供资源,所述预设策略学习模型以所述计算节点的任务处理价值为优化目标训练获得;The resource allocation strategy and local resource information are input into a preset strategy learning model for processing to obtain a resource sharing strategy; the resource sharing strategy includes the available resources of each of the heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
    利用所述资源共享策略执行所述目标任务。The target task is executed using the resource sharing strategy.
  2. 根据权利要求1所述的方法,其特征在于,所述利用所述资源共享策略执行所述目标任务之后,还包括:The method according to claim 1, characterized in that after executing the target task using the resource sharing strategy, it also includes:
    获取所述目标任务的执行结果;Obtaining the execution result of the target task;
    将所述执行结果上报至所述边缘网络。The execution result is reported to the edge network.
  3. 根据权利要求1所述的方法,其特征在于,所述异构资源包括中央处理器CPU资源、图形处理器GPU资源、现场可编程逻辑门阵列FPGA资源。The method according to claim 1 is characterized in that the heterogeneous resources include central processing unit (CPU) resources, graphics processing unit (GPU) resources, and field programmable gate array (FPGA) resources.
  4. 根据权利要求3所述的方法,其特征在于,所述异构资源还包括专用集成电路ASIC资源。The method according to claim 3 is characterized in that the heterogeneous resources also include application-specific integrated circuit ASIC resources.
  5. 根据权利要求1至4任一项所述的方法,其特征在于,还包括:The method according to any one of claims 1 to 4, further comprising:
    对所述本地资源信息进行实时监控;Performing real-time monitoring on the local resource information;
    当所述本地资源信息满足预设条件时,将所述本地资源信息发布至所述边缘网络。When the local resource information meets a preset condition, the local resource information is published to the edge network.
  6. 根据权利要求5所述的方法,其特征在于,所述当所述本地资源信息满足预设条件时,将所述本地资源信息发布至所述边缘网络,包括:The method according to claim 5, characterized in that when the local resource information meets a preset condition, publishing the local resource information to the edge network comprises:
    根据所述本地资源信息确定可用资源;Determining available resources according to the local resource information;
    当所述可用资源的资源占比达到预设阈值时,将所述本地资源信息发布至所述边缘网络。When the resource proportion of the available resources reaches a preset threshold, the local resource information is published to the edge network.
  7. 根据权利要求5所述的方法,其特征在于,所述本地资源信息包括可用资源和资源提供时间,所述可用资源包括各所述异构资源的当前可用资源。The method according to claim 5 is characterized in that the local resource information includes available resources and resource provision time, and the available resources include currently available resources of each of the heterogeneous resources.
  8. 根据权利要求6所述的方法,其特征在于,The method according to claim 6, characterized in that
    所述可用资源是指计算节点中的空闲资源,所述资源占比是指空闲资源占总资源的比例。The available resources refer to idle resources in the computing nodes, and the resource proportion refers to the ratio of idle resources to total resources.
  9. 根据权利要求1所述的方法，其特征在于，所述任务量为目标任务被分配至各个异构资源的子任务的任务量。The method according to claim 1, characterized in that the task amount is the task amount of the sub-tasks of the target task that are allocated to the respective heterogeneous resources.
  10. 根据权利要求9所述的方法,其特征在于,所述任务量为所述子任务关于所述目标任务的占比,或者所述子任务的数量。The method according to claim 9 is characterized in that the task amount is the proportion of the subtask to the target task, or the number of the subtasks.
  11. 一种任务处理方法,其特征在于,应用于边缘网络,所述边缘网络中的每一计算节点设置有一种或多种异构资源,所述方法包括:A task processing method, characterized in that it is applied to an edge network, each computing node in the edge network is provided with one or more heterogeneous resources, and the method comprises:
    根据目标任务的任务信息和各所述计算节点的本地资源信息,确定目标计算节点和资源分配策略;所述资源分配策略包括每一种所述异构资源被分配的关于所述目标任务的处理量;Determine the target computing node and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
    将所述资源分配策略发送至各所述目标计算节点,以使各所述目标计算节点利用预设策略学习模型对所述资源分配策略和所述本地资源信息进行处理,获得资源共享策略,并利用所述资源共享策略执行所述目标任务;所述资源共享策略包括各所述异构资源的可提供资源,所述预设策略学习模型以所述计算节点的任务处理价值为优化目标训练获得。The resource allocation strategy is sent to each of the target computing nodes, so that each of the target computing nodes uses a preset strategy learning model to process the resource allocation strategy and the local resource information, obtains a resource sharing strategy, and uses the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each of the heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  12. 根据权利要求11所述的方法,其特征在于,所述根据目标任务的任务信息和各所述计算节点的本地资源信息,确定目标计算节点,包括:The method according to claim 11, characterized in that the step of determining the target computing node according to the task information of the target task and the local resource information of each computing node comprises:
    根据所述本地资源信息确定资源提供时间和可用资源;Determining resource provision time and available resources according to the local resource information;
    当所述资源提供时间和所述可用资源均满足所述任务信息指示的任务需求时,确定所述计算节点为所述目标计算节点。When the resource provision time and the available resources both meet the task requirements indicated by the task information, the computing node is determined to be the target computing node.
  13. 根据权利要求11所述的方法,其特征在于,所述根据目标任务的任务信息和各所述计算节点的本地资源信息,确定目标计算节点和资源分配策略之前,还包括:The method according to claim 11, characterized in that before determining the target computing node and the resource allocation strategy based on the task information of the target task and the local resource information of each computing node, it also includes:
    统计各所述计算节点发布的所述本地资源信息;其中,所述本地资源信息由所述计算节点在所述本地资源信息满足预设条件时发布至所述边缘网络。The local resource information published by each of the computing nodes is counted; wherein the local resource information is published by the computing node to the edge network when the local resource information meets a preset condition.
  14. 根据权利要求11所述的方法,其特征在于,还包括:The method according to claim 11, further comprising:
    获取所述目标计算节点上传的所述目标任务的执行结果;Obtaining the execution result of the target task uploaded by the target computing node;
    将所述执行结果反馈至所述目标任务的发起端。The execution result is fed back to the initiator of the target task.
  15. 一种任务处理系统,其特征在于,包括边缘网络和部署于所述边缘网络中的计算节点,每一所述计算节点设置有一种或多种异构资源,其中,A task processing system, characterized in that it comprises an edge network and computing nodes deployed in the edge network, each computing node is provided with one or more heterogeneous resources, wherein:
    所述边缘网络,用于根据目标任务的任务信息和各所述计算节点的本地资源信息,确定目标计算节点和资源分配策略,并将所述资源分配策略发送至各所述目标计算节点;所述资源分配策略包括每一种所述异构资源被分配的关于所述目标任务的处理量;The edge network is used to determine the target computing node and the resource allocation strategy according to the task information of the target task and the local resource information of each computing node, and send the resource allocation strategy to each target computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
    所述目标计算节点，用于利用预设策略学习模型对所述资源分配策略和所述本地资源信息进行处理，获得资源共享策略，并利用所述资源共享策略执行所述目标任务；所述资源共享策略包括各所述异构资源的可提供资源，所述预设策略学习模型以所述计算节点的任务处理价值为优化目标训练获得。The target computing node is used to process the resource allocation strategy and the local resource information using a preset policy learning model to obtain a resource sharing strategy, and to execute the target task using the resource sharing strategy; the resource sharing strategy includes the available resources of each of the heterogeneous resources, and the preset policy learning model is obtained by training with the task processing value of the computing node as the optimization objective.
  16. 根据权利要求15所述的系统,其特征在于,The system according to claim 15, characterized in that
    所述边缘网络,还用于将执行结果反馈至所述目标任务的发起端。The edge network is also used to feed back the execution result to the initiator of the target task.
  17. 一种任务处理装置,其特征在于,应用于边缘网络中的计算节点,每一所述计算节点设置有一种或多种异构资源,所述装置包括:A task processing device, characterized in that it is applied to computing nodes in an edge network, each of the computing nodes is provided with one or more heterogeneous resources, and the device comprises:
    接收模块,用于接收所述边缘网络下发的资源分配策略;所述资源分配策略包括每一种所述异构资源被分配的关于目标任务的处理量;A receiving module, configured to receive a resource allocation strategy issued by the edge network; the resource allocation strategy includes a processing amount of a target task to which each of the heterogeneous resources is allocated;
    处理模块,用于将所述资源分配策略和本地资源信息输入至预设策略学习模型进行处理,获得资源共享策略;所述资源共享策略包括各所述异构资源的可提供资源,所述预设策略学习模型以所述计算节点的任务处理价值为优化目标训练获得;A processing module, used for inputting the resource allocation strategy and local resource information into a preset strategy learning model for processing to obtain a resource sharing strategy; the resource sharing strategy includes the available resources of each of the heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization target;
    执行模块,用于利用所述资源共享策略执行所述目标任务。An execution module is used to execute the target task using the resource sharing strategy.
  18. 一种任务处理装置,其特征在于,应用于边缘网络,所述边缘网络中的每一计算节点设置有一种或多种异构资源,所述装置包括:A task processing device, characterized in that it is applied to an edge network, each computing node in the edge network is provided with one or more heterogeneous resources, and the device comprises:
    确定模块,用于根据目标任务的任务信息和各所述计算节点的本地资源信息,确定目标计算节点和资源分配策略;所述资源分配策略包括每一种所述异构资源被分配的关于所述目标任务的处理量;A determination module, configured to determine a target computing node and a resource allocation strategy according to the task information of the target task and the local resource information of each computing node; the resource allocation strategy includes the processing amount of each heterogeneous resource allocated to the target task;
    发送模块,用于将所述资源分配策略发送至各所述目标计算节点,以使各所述目标计算节点利用预设策略学习模型对所述资源分配策略和所述本地资源信息进行处理,获得资源共享策略,并利用所述资源共享策略执行所述目标任务;所述资源共享策略包括各所述异构资源的可提供资源,所述预设策略学习模型以所述计算节点的任务处理价值为优化目标训练获得。A sending module is used to send the resource allocation strategy to each of the target computing nodes, so that each of the target computing nodes uses a preset strategy learning model to process the resource allocation strategy and the local resource information, obtain a resource sharing strategy, and use the resource sharing strategy to execute the target task; the resource sharing strategy includes the available resources of each of the heterogeneous resources, and the preset strategy learning model is trained with the task processing value of the computing node as the optimization goal.
  19. 一种任务处理设备,其特征在于,包括:A task processing device, comprising:
    存储器,用于存储计算机程序;Memory for storing computer programs;
    处理器,用于执行所述计算机程序时实现如权利要求1至14任一项所述的任务处理方法的步骤。A processor, configured to implement the steps of the task processing method according to any one of claims 1 to 14 when executing the computer program.
  20. 一种非易失性计算机可读存储介质,其特征在于,所述非易失性计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至14任一项所述的任务处理方法的步骤。 A non-volatile computer-readable storage medium, characterized in that a computer program is stored on the non-volatile computer-readable storage medium, and when the computer program is executed by a processor, the steps of the task processing method according to any one of claims 1 to 14 are implemented.
PCT/CN2023/113135 2022-11-07 2023-08-15 Task processing method, system, apparatus and device, and computer-readable storage medium WO2024098872A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211381753.2 2022-11-07
CN202211381753.2A CN115421930B (en) 2022-11-07 2022-11-07 Task processing method, system, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2024098872A1 true WO2024098872A1 (en) 2024-05-16

Family

ID=84207382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113135 WO2024098872A1 (en) 2022-11-07 2023-08-15 Task processing method, system, apparatus and device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN115421930B (en)
WO (1) WO2024098872A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115421930B (en) * 2022-11-07 2023-03-24 山东海量信息技术研究院 Task processing method, system, device, equipment and computer readable storage medium
CN115858131B (en) * 2023-02-22 2023-05-16 山东海量信息技术研究院 Task execution method, system, device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210397941A1 (en) * 2020-06-12 2021-12-23 International Business Machines Corporation Task-oriented machine learning and a configurable tool thereof on a computing environment
WO2022171083A1 (en) * 2021-02-10 2022-08-18 中国移动通信有限公司研究院 Information processing method based on internet-of-things device, and related device and storage medium
WO2022171066A1 (en) * 2021-02-10 2022-08-18 中国移动通信有限公司研究院 Task allocation method and apparatus based on internet-of-things device, and network training method and apparatus
CN114970834A (en) * 2022-06-23 2022-08-30 中国电信股份有限公司 Task allocation method and device and electronic equipment
CN115421930A (en) * 2022-11-07 2022-12-02 山东海量信息技术研究院 Task processing method, system, device, equipment and computer readable storage medium

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11610165B2 (en) * 2018-05-09 2023-03-21 Volvo Car Corporation Method and system for orchestrating multi-party services using semi-cooperative nash equilibrium based on artificial intelligence, neural network models,reinforcement learning and finite-state automata
US20200249998A1 (en) * 2019-02-01 2020-08-06 Alibaba Group Holding Limited Scheduling computation graph heterogeneous computer system
US11989647B2 (en) * 2019-02-08 2024-05-21 Adobe Inc. Self-learning scheduler for application orchestration on shared compute cluster
CN112925634A (en) * 2019-12-06 2021-06-08 中国电信股份有限公司 Heterogeneous resource scheduling method and system
CN114787830A (en) * 2019-12-20 2022-07-22 惠普发展公司,有限责任合伙企业 Machine learning workload orchestration in heterogeneous clusters
CN111314889B (en) * 2020-02-26 2023-03-31 华南理工大学 Task unloading and resource allocation method based on mobile edge calculation in Internet of vehicles
CN111405569A (en) * 2020-03-19 2020-07-10 三峡大学 Calculation unloading and resource allocation method and device based on deep reinforcement learning
CN113448721A (en) * 2020-03-27 2021-09-28 中国移动通信有限公司研究院 Network system for computing power processing and computing power processing method
CN112134959B (en) * 2020-09-24 2022-10-28 北京工业大学 Heterogeneous edge resource sharing method based on blockchain
CN112788605B (en) * 2020-12-25 2022-07-26 威胜信息技术股份有限公司 Edge computing resource scheduling method and system based on the twin delayed deep deterministic policy
CN114915629B (en) * 2021-02-10 2023-08-15 中国移动通信有限公司研究院 Information processing method, device, system, electronic equipment and storage medium
CN113364831B (en) * 2021-04-27 2022-07-19 国网浙江省电力有限公司电力科学研究院 Multi-domain heterogeneous computing network resource trusted cooperation method based on blockchain
CN113282368B (en) * 2021-05-25 2023-03-28 国网湖北省电力有限公司检修公司 Edge computing resource scheduling method for substation inspection
CN113377540A (en) * 2021-06-15 2021-09-10 上海商汤科技开发有限公司 Cluster resource scheduling method and device, electronic equipment and storage medium
CN113923781A (en) * 2021-06-25 2022-01-11 国网山东省电力公司青岛供电公司 Wireless network resource allocation method and device for integrated energy service stations
CN114021770A (en) * 2021-09-14 2022-02-08 北京邮电大学 Network resource optimization method and device, electronic equipment and storage medium
CN114040425B (en) * 2021-11-17 2024-03-15 中电信数智科技有限公司 Resource allocation method based on global resource utilization optimization
CN114567560A (en) * 2022-01-20 2022-05-31 国网江苏省电力有限公司信息通信分公司 Edge node dynamic resource allocation method based on generative adversarial imitation learning
CN114691363A (en) * 2022-03-28 2022-07-01 福州大学 Cloud data center adaptive and efficient resource allocation method based on deep reinforcement learning
CN114610474B (en) * 2022-05-12 2022-09-02 之江实验室 Multi-strategy job scheduling method and system under heterogeneous supercomputing environment
CN114928612B (en) * 2022-06-01 2024-04-12 南京浮点智算数字科技有限公司 Incentive mechanism and resource allocation method for collaborative offloading in mobile edge computing
CN115037749B (en) * 2022-06-08 2023-07-28 山东省计算中心(国家超级计算济南中心) Large-scale micro-service intelligent multi-resource collaborative scheduling method and system
CN115168027A (en) * 2022-06-15 2022-10-11 中国科学院沈阳自动化研究所 Computing power resource measurement method based on deep reinforcement learning
CN115175217A (en) * 2022-06-30 2022-10-11 重庆邮电大学 Resource allocation and task offloading optimization method based on multiple agents

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210397941A1 (en) * 2020-06-12 2021-12-23 International Business Machines Corporation Task-oriented machine learning and a configurable tool thereof on a computing environment
WO2022171083A1 (en) * 2021-02-10 2022-08-18 中国移动通信有限公司研究院 Information processing method based on internet-of-things device, and related device and storage medium
WO2022171066A1 (en) * 2021-02-10 2022-08-18 中国移动通信有限公司研究院 Task allocation method and apparatus based on internet-of-things device, and network training method and apparatus
CN114970834A (en) * 2022-06-23 2022-08-30 中国电信股份有限公司 Task allocation method and device and electronic equipment
CN115421930A (en) * 2022-11-07 2022-12-02 山东海量信息技术研究院 Task processing method, system, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN115421930A (en) 2022-12-02
CN115421930B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
WO2024098872A1 (en) Task processing method, system, apparatus and device, and computer-readable storage medium
Almutairi et al. A novel approach for IoT tasks offloading in edge-cloud environments
CN110380891B (en) Edge computing service resource allocation method and device and electronic equipment
CN109802998B (en) Game-based fog network cooperative scheduling incentive method and system
CN110990138B (en) Resource scheduling method, device, server and storage medium
US20080021987A1 (en) Sub-task processor distribution scheduling
CN107003887A (en) Overloaded CPU setting and cloud computing workload scheduling mechanism
CN116360972A (en) Resource management method, device and resource management platform
CN112463390A (en) Distributed task scheduling method and device, terminal equipment and storage medium
CN112306369A (en) Data processing method, device, server and storage medium
Zheng et al. Adaptive resource scheduling mechanism in P2P file sharing system
Huang et al. Computation offloading for multimedia workflows with deadline constraints in cloudlet-based mobile cloud
CN113938394A (en) Monitoring service bandwidth allocation method and device, electronic equipment and storage medium
CN116302578B (en) QoS (quality of service)-constrained streaming application delay guarantee method and system
Durga et al. Context-aware adaptive resource provisioning for mobile clients in intra-cloud environment
Chunlin et al. Multiple context based service scheduling for balancing cost and benefits of mobile users and cloud datacenter supplier in mobile cloud
Staffolani et al. RLQ: Workload allocation with reinforcement learning in distributed queues
CN112561301A (en) Work order distribution method, device, equipment and computer readable medium
Zhang et al. Crowd-Funding: a new resource cooperation mode for mobile cloud computing
Li et al. Resource scheduling approach for multimedia cloud content management
CN115562861A (en) Method and apparatus for data processing for data skew
Edward Gerald et al. A fruitfly-based optimal resource sharing and load balancing for the better cloud services
CN111694670B (en) Resource allocation method, apparatus, device and computer readable medium
CN113946274B (en) Data processing method, device, equipment and medium
Babu et al. Petri net model for resource scheduling with auto scaling in elastic cloud

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23887567

Country of ref document: EP

Kind code of ref document: A1