CN112311832B - Task processing method, system and node

Info

Publication number
CN112311832B
Authority
CN
China
Prior art keywords
task
node
candidate
nodes
computing resources
Legal status
Active
Application number
CN201910703055.1A
Other languages
Chinese (zh)
Other versions
CN112311832A
Inventor
苏辉
蒋海青
Current Assignee
Hangzhou Ezviz Software Co Ltd
Original Assignee
Hangzhou Ezviz Software Co Ltd
Application filed by Hangzhou Ezviz Software Co Ltd
Priority to CN201910703055.1A
Publication of CN112311832A
Application granted
Publication of CN112311832B

Links

Images

Classifications

    • H04L67/1012 Server selection for load balancing based on compliance of requirements or conditions with available server resources
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/5038 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/5072 Grid computing

Abstract

The embodiments of the invention provide a task processing method, a task processing system, and a node. The method is applied to a node and comprises: traversing each task in a task sequence; if the task is not the last one, processing the task and traversing the next task when the node can satisfy the computing resources required by the task, and otherwise traversing the next task; if the task is the last one, processing the task when the node can satisfy the computing resources required by the task and, for each remaining unprocessed task, selecting from a plurality of candidate nodes an auxiliary node capable of satisfying the computing resources required by that task and requesting the auxiliary node to process it, and otherwise performing the same selection and request for every unprocessed task, including the last one. Compared with the prior art, the scheme provided by the embodiments of the invention improves the utilization rate of the computing resources of the entire local area network and improves the task processing efficiency of the local area network.

Description

Task processing method, system and node
Technical Field
The present invention relates to the field of computer network technologies, and in particular, to a task processing method, system, and node.
Background
Currently, with the continuous development of internet of things technology, local area networks composed of multiple nodes play an increasingly important role in daily life, for example in intelligent monitoring systems, smart home systems, and intelligent transportation systems.
In a local area network, because the computing capabilities of node devices are different, when a certain node device acquires a task to be processed, it may happen that the available computing resources of the node cannot meet the target computing resources required for processing the task, so that the node cannot process the task to be processed.
For example, suppose a local area network contains a node A1 whose available computing resources amount to C1, and the target computing resources needed for pending task 1 amount to X_HD. When node A1 acquires pending task 1, since C1 < X_HD, node A1 cannot process pending task 1.
In the related art, a scheduling node is provided in the local area network, and each node in the local area network is designated either as a master node that processes tasks or as a backup node. When the available computing resources of a master node cannot satisfy the target computing resources required by a task to be processed, the master node sends a request to the scheduling node, and the scheduling node allocates the task to a backup node that has spare resources for processing.
However, in the above related art, as long as the available computing resources of each master node can satisfy the target computing resources required by its tasks to be processed, the computing resources of the backup nodes are never utilized. The local area network may therefore contain computing resources that are never used, which results in a low utilization rate of the computing resources of the entire local area network and, in turn, reduces the task processing capability of the local area network.
Disclosure of Invention
The embodiment of the invention aims to provide a task processing method, a task processing system and a task processing node, so that the utilization rate of computing resources of the whole local area network is improved, and the task processing efficiency of the local area network is improved.
The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a task processing method, which is applied to a node, and the method includes:
acquiring a plurality of tasks to be processed;
determining a task sequence comprising the plurality of tasks;
sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when judging that the node can meet the computing resources required by the task, otherwise, traversing the next task;
if the task is the last in the task sequence, processing the task when judging that the node can meet the computing resources required by the task, and selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes aiming at each unprocessed task when the unprocessed task exists in the task sequence, and requesting the auxiliary node to process the task; otherwise, aiming at each task which comprises the task and is not processed, selecting an auxiliary node which can meet the computing resources required by the task from the plurality of candidate nodes, and requesting the auxiliary node to process the task;
wherein the candidate nodes are: nodes, other than the node itself, in the local area networks belonging to a target cloud service center; and the target cloud service center is the cloud service center to which the local area network where the node is located belongs.
Optionally, in a specific implementation manner, the step of selecting, from the multiple candidate nodes, an auxiliary node capable of satisfying the computing resource required by the task and requesting the auxiliary node to process the task includes:
when an auxiliary node capable of meeting the computing resources required by the task exists in the first type of candidate nodes, requesting the auxiliary node to process the task;
when the first type of candidate nodes do not have auxiliary nodes capable of meeting the computing resources required by the task, selecting one auxiliary node capable of meeting the computing resources required by the task from the second type of candidate nodes, and requesting the auxiliary node to process the task;
wherein the first type candidate node is: the candidate nodes belong to the local area network where the nodes are located; the second type candidate node is: candidate nodes of the plurality of candidate nodes other than the first class of candidate nodes.
Optionally, in a specific implementation manner, the step of selecting, for each unprocessed task including the task, an auxiliary node capable of satisfying the computing resource required by the task from a plurality of candidate nodes, and requesting the auxiliary node to process the task includes:
selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes, and requesting the auxiliary node to process the task;
judging whether unprocessed tasks exist in the task sequence or not;
and if so, selecting an auxiliary node capable of meeting the computing resources required by the task from the plurality of candidate nodes for each unprocessed task, and requesting the auxiliary node to process the task.
Optionally, in a specific implementation manner, the step of selecting an auxiliary node capable of satisfying the computing resource required by the task from the plurality of candidate nodes includes:
sequentially traversing each candidate node in the plurality of candidate nodes according to a preset searching sequence; when each candidate node is traversed, judging whether the candidate node can meet the computing resources required by the task; if so, determining the candidate node as an auxiliary node, otherwise, traversing the next candidate node;
alternatively,
broadcasting a resource query instruction to each of the plurality of candidate nodes, so that each candidate node, after receiving the resource query instruction, judges whether it can satisfy the computing resources required by the task and feeds back the judgment result; and determining, as the auxiliary node, the candidate node corresponding to the first target result received; wherein the resource query instruction includes the amount of computing resources required by the task, and the target result is a judgment result of yes.
Optionally, in a specific implementation manner, the method further includes:
for each unprocessed task, if an auxiliary node capable of meeting the computing resource required by the task does not exist in the plurality of candidate nodes, sending a task processing request to the target cloud service center, and requesting the target cloud service center to process the task; wherein the task processing request comprises: the task and the amount of computing resources required for the task.
In a second aspect, an embodiment of the present invention provides a task processing system, where the task processing system includes multiple nodes in local area networks belonging to a same target cloud service center;
for any node of the plurality of nodes, the node to:
acquiring a plurality of tasks to be processed;
determining a task sequence comprising the plurality of tasks;
sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when judging that the node can meet the computing resources required by the task, otherwise, traversing the next task;
if the task is the last in the task sequence, processing the task when judging that the node can meet the computing resources required by the task, and selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes aiming at each unprocessed task when the unprocessed task exists in the task sequence, and requesting the auxiliary node to process the task; otherwise, aiming at each unprocessed task comprising the task, selecting an auxiliary node which can meet the computing resources required by the task from the plurality of candidate nodes, and requesting the auxiliary node to process the task; wherein the candidate nodes are: a node of the plurality of nodes other than the node;
the selected auxiliary node is configured to receive the task sent by the node and to process the task.
Optionally, in a specific implementation manner, the node is configured to select, from a plurality of candidate nodes, an auxiliary node that can satisfy the computing resource required by the task, and request the auxiliary node to process the task, where the method includes:
when an auxiliary node capable of meeting the computing resources required by the task exists in the first type of candidate nodes, requesting the auxiliary node to process the task;
when the first type of candidate nodes do not have auxiliary nodes capable of meeting the computing resources required by the task, selecting one auxiliary node capable of meeting the computing resources required by the task from the second type of candidate nodes, and requesting the auxiliary node to process the task;
wherein the first type candidate node is: the candidate nodes belong to the local area network where the nodes are located; the second type candidate node is: candidate nodes of the plurality of candidate nodes other than the first class of candidate nodes.
Optionally, in a specific implementation manner, the node is configured to, for each unprocessed task including the task, select, from a plurality of candidate nodes, an auxiliary node that can satisfy the computing resource required by the task, and request the auxiliary node to process the task, where the method includes:
selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes, and requesting the auxiliary node to process the task;
judging whether unprocessed tasks exist in the task sequence or not;
and if so, selecting an auxiliary node capable of meeting the computing resources required by the task from the plurality of candidate nodes for each unprocessed task, and requesting the auxiliary node to process the task.
Optionally, in a specific implementation manner, the node is configured to select, from a plurality of candidate nodes, an auxiliary node capable of satisfying the computing resource required by the task, and includes:
sequentially traversing each candidate node in the plurality of candidate nodes according to a preset searching sequence; when each candidate node is traversed, judging whether the candidate node can meet the computing resources required by the task; if so, determining the candidate node as an auxiliary node, otherwise, traversing the next candidate node;
alternatively,
broadcasting a resource query instruction to each of the plurality of candidate nodes, so that each candidate node, after receiving the resource query instruction, judges whether it can satisfy the computing resources required by the task and feeds back the judgment result; and determining, as the auxiliary node, the candidate node corresponding to the first target result received; wherein the resource query instruction includes the amount of computing resources required by the task, and the target result is a judgment result of yes.
Optionally, in a specific implementation manner, the system further includes the target cloud service center;
for any node in the plurality of nodes, the node is further configured to, for each unprocessed task, if there is no auxiliary node that can satisfy the computing resources required by the task in the plurality of candidate nodes, send a task processing request to the target cloud service center, and request the target cloud service center to process the task;
and the target cloud service center is used for receiving the task sent by the node and processing the task.
In a third aspect, an embodiment of the present invention provides a node, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and a processor configured to implement the steps of any of the task processing methods provided by the first aspect when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of any one of the task processing methods provided in the first aspect.
As can be seen from the above, with the solution provided in the embodiment of the present invention, after the node obtains the plurality of tasks to be processed, the node may determine the task sequence including the plurality of tasks, and may further traverse each task according to the order of the tasks in the task sequence. In this way, when traversing each task, if the task is not the last task in the task sequence, when the node can satisfy the computing resources required by the task, the node can locally process the task and traverse the next task, otherwise, the node can directly traverse the next task. If the task is the last task in the task sequence, the node may process the task locally when the node is able to satisfy the computing resources required by the task, otherwise, for each unprocessed task that includes the task, the node may select an auxiliary node from a plurality of candidate nodes that is able to satisfy the computing resources required by the task and request the auxiliary node to process the task. Each candidate node is a node except the node in each local area network belonging to the target cloud service center; and the target cloud service center is a cloud service center to which the local area network where the node is located belongs.
Based on this, by applying the scheme provided by the embodiments of the invention, when the node cannot satisfy the computing resources required by an acquired task to be processed, the node can select, from the candidate nodes, an auxiliary node capable of satisfying the computing resources required by the task. Therefore, backup nodes do not need to be set up in each local area network; each node in the local area network can directly receive tasks to be processed from the outside and process a task when its available resources can satisfy the target computing resources required by that task. In this way, the situation that some computing resources in the local area network are never used can be avoided, which improves the computing resource utilization of the entire local area network and the task processing efficiency of the local area network.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a task processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a specific implementation of selecting an auxiliary node by a node;
fig. 3 is a schematic structural diagram of a task processing system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a node according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the related art, as long as the available computing resources of the master nodes can satisfy the target computing resources required by the tasks to be processed, the computing resources of the backup nodes are not utilized, so that the local area network may contain computing resources that are never used; the utilization rate of the computing resources of the whole local area network is therefore low, which in turn reduces the task processing capability of the local area network. In order to solve the above problem, an embodiment of the present invention provides a task processing method.
Fig. 1 is a flowchart illustrating a task processing method according to an embodiment of the present invention. The task processing method is applied to a node. A node is an electronic device in a local area network, for example a camera in an intelligent monitoring system or a smart speaker in a smart home system. The task processing method may be applied to any node in any local area network, which is not specifically limited in the embodiments of the present invention; the device is hereinafter simply referred to as the node.
As shown in fig. 1, a task processing method provided in an embodiment of the present invention may include the following steps:
s101: acquiring a plurality of tasks to be processed;
s102: determining a task sequence comprising a plurality of tasks;
s103: sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
s104: when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when judging that the node can meet the computing resources required by the task, otherwise, traversing the next task;
s105: if the task is the last task in the task sequence, processing the task when judging that the node can meet the computing resources required by the task, and selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes aiming at each unprocessed task when the unprocessed task exists in the task sequence, and requesting the auxiliary node to process the task; otherwise, aiming at each unprocessed task comprising the task, selecting an auxiliary node which can meet the computing resources required by the task from a plurality of candidate nodes, and requesting the auxiliary node to process the task;
wherein the candidate nodes are: nodes, other than the node itself, in the local area networks belonging to a target cloud service center; and the target cloud service center is the cloud service center to which the local area network where the node is located belongs.
As can be seen from the above, with the solution provided in the embodiment of the present invention, after the node obtains the plurality of tasks to be processed, the node may determine the task sequence including the plurality of tasks, and may further traverse each task according to the order of the tasks in the task sequence. In this way, when traversing each task, if the task is not the last task in the task sequence, when the node can satisfy the computing resources required by the task, the node can locally process the task and traverse the next task, otherwise, the node can directly traverse the next task. If the task is the last task in the task sequence, the node may process the task locally when the node is able to satisfy the computing resources required by the task, otherwise, for each unprocessed task that includes the task, the node may select an auxiliary node from a plurality of candidate nodes that is able to satisfy the computing resources required by the task and request the auxiliary node to process the task. Each candidate node is a node except the node in each local area network belonging to the target cloud service center; and the target cloud service center is a cloud service center to which the local area network where the node is located belongs.
Based on this, by applying the scheme provided by the embodiments of the invention, when the node cannot satisfy the computing resources required by an acquired task to be processed, the node can select, from the candidate nodes, an auxiliary node capable of satisfying the computing resources required by the task. Therefore, backup nodes do not need to be set up in each local area network; each node in the local area network can directly receive tasks to be processed from the outside and process a task when its available resources can satisfy the target computing resources required by that task. In this way, the situation that some computing resources in the local area network are never used can be avoided, which improves the computing resource utilization of the entire local area network and the task processing efficiency of the local area network.
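As an informal illustration only, the flow of steps S101 to S105 could be sketched roughly as follows. This is a minimal sketch in Python under stated assumptions: the Task type, the helpers process_locally, select_auxiliary and request_assist, and the bookkeeping that subtracts resources after local processing are all illustrative and are not part of the patent.

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Task:
        content: str
        required_resources: int  # amount of computing resources the task needs

    def handle_tasks(tasks: List[Task],
                     available_resources: int,
                     process_locally: Callable[[Task], None],
                     select_auxiliary: Callable[[Task], Optional[str]],
                     request_assist: Callable[[str, Task], None]) -> None:
        # Traverse the task sequence in order (S103); process a task locally when
        # the node can satisfy its computing resources, otherwise keep it for later
        # (S104); after the last task, hand every remaining task to an auxiliary
        # node selected from the candidate nodes (S105).
        unprocessed: List[Task] = []
        for index, task in enumerate(tasks):
            if available_resources >= task.required_resources:
                process_locally(task)
                available_resources -= task.required_resources  # assumed bookkeeping
            else:
                unprocessed.append(task)
            if index < len(tasks) - 1:
                continue  # not the last task yet: simply traverse the next one
            for pending in unprocessed:
                aux = select_auxiliary(pending)
                if aux is not None:
                    request_assist(aux, pending)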
In the step S101, the node may obtain multiple tasks to be processed in multiple ways, for example, the node may obtain the tasks to be processed from other nodes in the local area network, may also obtain the tasks to be processed from the cloud service center to which the local area network belongs, and may also obtain the tasks to be processed from nodes in the adjacent local area network of the local area network. The embodiment of the present invention is not particularly limited.
In addition, the tasks to be processed acquired by the node may all come from the same task source; for example, the node may acquire a plurality of tasks to be processed from the cloud service center to which its local area network belongs. They may also come from different task sources; for example, the node may obtain one task to be processed from the cloud service center to which its local area network belongs, one from another node in the local area network, and one from a node in an adjacent local area network, thereby obtaining three tasks to be processed. The embodiments of the present invention are not specifically limited in this regard.
When the node acquires a plurality of tasks to be processed, the node can acquire the specific content of each task and also can acquire the quantity of computing resources required by each task.
Thus, after the step S101 is completed and the plurality of tasks to be processed are acquired, the node may continue to execute the step S102, that is, determine the task sequence including the plurality of tasks.
The node can determine the task sequence in various ways. For example, it may arrange the tasks in ascending order of the amount of computing resources each task requires, in descending order of that amount, or in a random order. All of these are reasonable.
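Purely for illustration, the three orderings mentioned above could be written as follows, using an assumed (task name, required resources) pair representation that does not come from the patent.

    import random
    from typing import List, Tuple

    TaskEntry = Tuple[str, int]  # (task name, amount of computing resources required)

    def order_ascending(tasks: List[TaskEntry]) -> List[TaskEntry]:
        return sorted(tasks, key=lambda t: t[1])

    def order_descending(tasks: List[TaskEntry]) -> List[TaskEntry]:
        return sorted(tasks, key=lambda t: t[1], reverse=True)

    def order_random(tasks: List[TaskEntry]) -> List[TaskEntry]:
        shuffled = list(tasks)
        random.shuffle(shuffled)
        return shuffled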
Furthermore, after the step S102 is completed and the task sequence including the plurality of tasks is determined, the node may continue to perform the following steps S103 to S105. Specifically, the method comprises the following steps:
the node may sequentially traverse each task in the task sequence according to the determined order of the tasks in the task sequence.
Thus, when traversing to each task, a node may first determine whether the task is the last in the sequence of tasks.
Furthermore, if the task is not the last in the task sequence, the node can further determine whether the node can satisfy the computing resources required by the task, if so, the node can process the task locally, otherwise, the node can traverse the next task.
Accordingly, if the task is the last in the sequence of tasks, the node may further determine whether it can satisfy the computing resources required by the task itself. If so, the node may process the task locally, i.e., the node may process the last task in the sequence of tasks locally.
Furthermore, for each task other than the last one in the task sequence, when the node determines that it cannot satisfy the computing resources required by the task, it simply moves on to the next task, so that the task remains unprocessed. That is, after the node has processed the last task in the task sequence locally, unprocessed tasks may still remain among the other tasks in the task sequence.
Based on this, after the last task in the task sequence is processed locally by the node, the node may further determine whether there is an unprocessed task in the task sequence. Thus, when it is determined that an unprocessed task exists in the task sequence, for each unprocessed task, the node may select an auxiliary node that can satisfy the computing resources required by the task from the candidate nodes, and request the auxiliary node to process the task.
The candidate nodes here are nodes, other than the node itself, in the local area networks belonging to the target cloud service center, and the target cloud service center is the cloud service center to which the local area network of the node belongs.
In contrast to the case where the node can satisfy the computing resources required by the last task in the task sequence, when the node judges that it cannot satisfy them, it determines that the last task in the task sequence is an unprocessed task. In addition, unprocessed tasks may also exist among the other tasks in the task sequence, so the node can determine whether, besides the last task, any other task among the plurality of tasks in the task sequence remains unprocessed.
In this way, for each unprocessed task including the last task, the node may select an auxiliary node capable of satisfying the computing resources required by the task from the candidate nodes, and request the auxiliary node to process the task.
That is, after traversing the last task in the sequence of tasks, the node may determine each unprocessed task present in the sequence of tasks. Further, for each unprocessed task, the node may select an auxiliary node from the candidate nodes that satisfies the computational resources required by the task, and request the auxiliary node to process the task.
When the node can meet the computing resource required by the last task in the task sequence, after traversing the last task, the node determines that the unprocessed task in the task sequence does not comprise the last task; and when the node cannot meet the computing resource required by the last task in the task sequence, after the node traverses the last task, the unprocessed task in the task sequence determined by the node comprises the last task.
In addition, optionally, the mode that the node requests the candidate node to process the task may be: the node sends a task processing request to the candidate node, wherein the task processing request can include specific task content of the task and the amount of computing resources required by the task. Thus, when receiving the task processing request, the candidate node can process the task based on the computing resources available to the candidate node.
Further, whether or not the unprocessed tasks in the task sequence determined by the node include the last task, for each unprocessed task the auxiliary node that satisfies the computing resources required by the task may feed back a processing message to the node after it finishes processing the task. The message may be information indicating that the task has been processed, or it may be the processing result of the task, which is not limited in the embodiments of the present invention.
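As an assumed illustration of the exchange described above (the field names are not terms from the patent), the request and the feedback message could be represented as:

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class TaskProcessingRequest:
        task_content: Any        # the specific content of the task to be processed
        required_resources: int  # amount of computing resources the task needs

    @dataclass
    class ProcessingFeedback:
        processed: bool                # indicates that the task has been processed
        result: Optional[Any] = None   # optionally, the processing result itself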
In addition, compared with the situation that the node acquires a plurality of tasks to be processed, when the node acquires only one task to be processed, the node can directly judge whether the node can meet the computing resources required by the task. If so, the node may process the task locally, otherwise, the node may select an auxiliary node from the candidate nodes that can satisfy the computing resources required by the task and request the auxiliary node to process the task.
Optionally, in a specific implementation manner, each candidate node may be divided into a first class of candidate nodes and a second class of candidate nodes, where the first class of candidate nodes is: the candidate nodes belong to the local area network where the nodes are located; the second type candidate node is: candidate nodes of the plurality of candidate nodes other than the first class of candidate nodes.
Furthermore, in this specific implementation manner, the step of the node executing step S105 to select one auxiliary node from the plurality of candidate nodes that can satisfy the computing resource required by the task and request the auxiliary node to process the task may include the following steps:
when an auxiliary node capable of meeting the computing resources required by the task exists in the first type of candidate nodes, requesting the auxiliary node to process the task;
and when the first class of candidate nodes does not have auxiliary nodes capable of meeting the computing resources required by the task, selecting one auxiliary node capable of meeting the computing resources required by the task from the second class of candidate nodes, and requesting the auxiliary node to process the task.
That is, in this specific implementation, for each unprocessed task, the node preferentially completes the processing of the task through the other nodes in its own local area network.
Based on this, for each unprocessed task, the node first selects an auxiliary node that can satisfy the computing resources required by the task from the other nodes in its own local area network, and requests that auxiliary node to process the task.
When no such auxiliary node exists among the other nodes in the node's own local area network, the processing of the task cannot be completed within that local area network, so in order to still find an auxiliary node to process the task, the node goes on to select an auxiliary node that can satisfy the computing resources required by the task from the nodes in the other local area networks belonging to the target cloud service center.
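A hedged sketch of this two-level preference, assuming a helper can_satisfy(node, amount) that stands in for the resource query, might look like this:

    from typing import Callable, Iterable, Optional

    def select_auxiliary_node(first_class_candidates: Iterable[str],
                              second_class_candidates: Iterable[str],
                              required_resources: int,
                              can_satisfy: Callable[[str, int], bool]) -> Optional[str]:
        # Prefer candidates in the node's own local area network (first-class
        # candidates); fall back to candidates in the other local area networks of
        # the same target cloud service center (second-class candidates).
        for candidate in first_class_candidates:
            if can_satisfy(candidate, required_resources):
                return candidate
        for candidate in second_class_candidates:
            if can_satisfy(candidate, required_resources):
                return candidate
        return None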
Optionally, in a specific implementation manner, as shown in fig. 2, when the node determines that it cannot satisfy the computing resources required by the last task in the task sequence, the manner in which the node, for each unprocessed task including that last task, selects from the plurality of candidate nodes an auxiliary node that can satisfy the computing resources required by the task and requests the auxiliary node to process the task may include the following steps:
s201: selecting an auxiliary node capable of meeting the computing resources required by the task from the plurality of candidate nodes, and requesting the auxiliary node to process the task;
s202: judging whether unprocessed tasks exist in the task sequence or not; if yes, go to step S203;
s203: for each unprocessed task, an auxiliary node capable of satisfying the computing resources required by the task is selected from the plurality of candidate nodes, and the auxiliary node is requested to process the task.
In this specific implementation manner, when the node determines that it cannot satisfy the computing resource required by the last task in the task sequence, the node may first select an auxiliary node that can satisfy the computing resource required by the last task from the plurality of candidate nodes, and request the auxiliary node to process the last task. Further, the node may continue to determine whether an unprocessed task exists in the task sequence, that is, the node may continue to determine whether an unprocessed task exists in the other tasks except the last task in the task sequence.
Thus, when the node judges that the task sequence has the unprocessed task, the node can select an auxiliary node capable of meeting the computing resource required by the task from a plurality of candidate nodes for each unprocessed task, and request the auxiliary node to process the task.
Optionally, in a specific implementation manner, the manner in which the node performs the above-mentioned selection of an auxiliary node capable of satisfying the computing resource required by the task from the plurality of candidate nodes may include the following steps:
sequentially traversing each candidate node in the plurality of candidate nodes according to a preset searching sequence; when each candidate node is traversed, judging whether the candidate node can meet the computing resources required by the task; if so, determining the candidate node as the auxiliary node, and otherwise, traversing the next candidate node.
In this specific implementation manner, a search order may be set in advance for each node in the plurality of candidate nodes. In the embodiment of the present invention, the search sequence may be preset and input into the node, or may be temporarily set by the node, which is reasonable. In addition, in the embodiment of the present invention, the search order may be set according to any rule, and the embodiment of the present invention is not particularly limited.
In this way, the node can sequentially traverse each candidate node in the plurality of candidate nodes according to the preset searching sequence; when traversing to each candidate node, the node can judge whether the candidate node can meet the computing resource required by the task; if so, the node may determine the candidate node as an auxiliary node, otherwise, the node may traverse the next candidate node.
The manner in which the node may determine whether the candidate node can satisfy the computing resource required by the task may specifically be: the node may send a resource query instruction to the candidate node, where the resource query instruction may include an amount of computing resources required for the task. Therefore, after receiving the resource query instruction, the candidate node can determine whether the candidate node can satisfy the computing resources required by the task, that is, whether the number of the available computing resources is not less than the number of the computing resources required by the task.
And further, the candidate node may send the determination result to the node, where when the determination result is yes, the node may determine the candidate node as an auxiliary node, otherwise, the node may not determine the candidate node as an auxiliary node and needs to traverse a next node.
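A sketch of this preset-search-order variant, assuming a helper send_resource_query(node, amount) that returns the candidate's yes/no judgment result, might be:

    from typing import Callable, List, Optional

    def find_auxiliary_in_order(candidates_in_search_order: List[str],
                                required_resources: int,
                                send_resource_query: Callable[[str, int], bool]) -> Optional[str]:
        # Traverse the candidate nodes in the preset search order and stop at the
        # first one that judges it can satisfy the computing resources required.
        for candidate in candidates_in_search_order:
            if send_resource_query(candidate, required_resources):
                return candidate
        return None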
Optionally, in another specific implementation manner, the manner in which the node performs the above-described selection of an auxiliary node that can meet the computation resource required by the task from among the multiple candidate nodes may include the following steps:
broadcasting a resource query instruction to each of the plurality of candidate nodes, so that each candidate node, after receiving the resource query instruction, judges whether it can satisfy the computing resources required by the task and feeds back the judgment result; and determining, as the auxiliary node, the candidate node corresponding to the first target result received; wherein the resource query instruction includes the amount of computing resources required by the task, and the target result is a judgment result of yes.
In this specific implementation, a node may broadcast a resource query instruction to each of a plurality of candidate nodes at the same time, where the resource query instruction may include the number of computing resources required by the task. Therefore, after each candidate node receives the resource query instruction, whether the candidate node can meet the computing resources required by the task can be judged, and the judgment result is fed back to the node.
Furthermore, when the node receives the first judgment result that is yes, it can determine the candidate node that sent that result to be the auxiliary node and request that auxiliary node to process the task.
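The broadcast variant could be sketched as follows; the thread pool merely stands in for the broadcast and the concurrently arriving replies, which is an assumption about the transport rather than something the patent specifies.

    from concurrent.futures import ThreadPoolExecutor, as_completed
    from typing import Callable, List, Optional

    def find_auxiliary_by_broadcast(candidates: List[str],
                                    required_resources: int,
                                    send_resource_query: Callable[[str, int], bool]) -> Optional[str]:
        # Query every candidate node at once and take the candidate whose
        # affirmative judgment result arrives first.
        if not candidates:
            return None
        with ThreadPoolExecutor(max_workers=len(candidates)) as pool:
            futures = {pool.submit(send_resource_query, c, required_resources): c
                       for c in candidates}
            for future in as_completed(futures):
                if future.result():
                    return futures[future]
        return None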
It will be appreciated that, whether or not the unprocessed tasks determined by the node include the last task in the task sequence, it may happen that, for some unprocessed task, none of the plurality of candidate nodes can serve as an auxiliary node satisfying the computing resources required by that task.
Based on this, in order to effectively process the task, optionally, in a specific implementation manner, the task processing method provided in the foregoing embodiment of the present invention may further include the following steps:
for each unprocessed task, if an auxiliary node capable of meeting the computing resource required by the task does not exist in the plurality of candidate nodes, sending a task processing request to a target cloud service center and requesting the target cloud service center to process the task; wherein the task processing request comprises: the task and the amount of computing resources required for the task.
In this specific implementation manner, for each unprocessed task, if there is no auxiliary node that can satisfy the computing resource required by the task in the plurality of candidate nodes, the node may send a task processing request to a target cloud service center to which a local area network where the node is located belongs, where the task processing request may include: the task and the amount of computing resources required for the task.
Thus, after receiving the task processing request, the target cloud service center may process the task locally; alternatively, after receiving the task processing request, it may search for an auxiliary node capable of satisfying the computing resources required by the task in the local area networks belonging to other cloud service centers, and forward the task processing request to that auxiliary node so that the auxiliary node processes the task.
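Putting the fall-back together, a minimal sketch (select_auxiliary, request_assist and send_to_cloud_center are assumed helpers, not names from the patent) might be:

    from typing import Any, Callable, Optional

    def dispatch_unprocessed_task(task: Any,
                                  required_resources: int,
                                  select_auxiliary: Callable[[Any], Optional[str]],
                                  request_assist: Callable[[str, Any], None],
                                  send_to_cloud_center: Callable[[dict], None]) -> None:
        # Try the candidate nodes first; if none can satisfy the task, send a task
        # processing request to the target cloud service center.
        aux = select_auxiliary(task)
        if aux is not None:
            request_assist(aux, task)
        else:
            send_to_cloud_center({"task": task, "required_resources": required_resources})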
Correspondingly to the task processing method provided in the embodiment of the present invention, the embodiment of the present invention further provides a task processing system.
Fig. 3 is a schematic structural diagram of a task processing system according to an embodiment of the present invention. The system comprises a plurality of nodes 310 in local area networks belonging to the same target cloud service center.
For any of the plurality of nodes, the node 310 is configured to:
acquiring a plurality of tasks to be processed;
determining a task sequence comprising the plurality of tasks;
sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when judging that the node can meet the computing resources required by the task, otherwise, traversing the next task;
if the task is the last in the task sequence, processing the task when judging that the node can meet the computing resources required by the task, and selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes aiming at each unprocessed task when the unprocessed task exists in the task sequence, and requesting the auxiliary node to process the task; otherwise, aiming at each unprocessed task comprising the task, selecting an auxiliary node which can meet the computing resources required by the task from the plurality of candidate nodes, and requesting the auxiliary node to process the task; wherein the candidate nodes are: a node of the plurality of nodes other than the node;
the selected auxiliary node is configured to receive the task sent by the node and to process the task.
As can be seen from the above, with the solution provided in the embodiment of the present invention, after the node obtains the plurality of tasks to be processed, the node may determine the task sequence including the plurality of tasks, and may further traverse each task according to the order of the tasks in the task sequence. In this way, when traversing each task, if the task is not the last task in the task sequence, when the node can satisfy the computing resources required by the task, the node can locally process the task and traverse the next task, otherwise, the node can directly traverse the next task. If the task is the last task in the task sequence, the node may process the task locally when the node is able to satisfy the computing resources required by the task, otherwise, for each unprocessed task that includes the task, the node may select an auxiliary node from a plurality of candidate nodes that is able to satisfy the computing resources required by the task and request the auxiliary node to process the task. Each candidate node is a node except the node in each local area network belonging to the target cloud service center; and the target cloud service center is a cloud service center to which the local area network where the node is located belongs.
Based on this, by applying the scheme provided by the embodiments of the invention, when the node cannot satisfy the computing resources required by an acquired task to be processed, the node can select, from the candidate nodes, an auxiliary node capable of satisfying the computing resources required by the task. Therefore, backup nodes do not need to be set up in each local area network; each node in the local area network can directly receive tasks to be processed from the outside and process a task when its available resources can satisfy the target computing resources required by that task. In this way, the situation that some computing resources in the local area network are never used can be avoided, which improves the computing resource utilization of the entire local area network and the task processing efficiency of the local area network.
Optionally, in a specific implementation manner, the node 310 is configured to select, from a plurality of candidate nodes, an auxiliary node that can satisfy the computing resource required by the task, and request the auxiliary node to process the task, where the method includes:
when an auxiliary node capable of meeting the computing resources required by the task exists in the first type of candidate nodes, requesting the auxiliary node to process the task;
when the first type of candidate nodes do not have auxiliary nodes capable of meeting the computing resources required by the task, selecting one auxiliary node capable of meeting the computing resources required by the task from the second type of candidate nodes, and requesting the auxiliary node to process the task;
wherein the first type candidate node is: the candidate nodes belong to the local area network where the nodes are located; the second type candidate node is: candidate nodes of the plurality of candidate nodes other than the first class of candidate nodes.
Optionally, in a specific implementation manner, the node 310 is configured to, for each unprocessed task including the task, select an auxiliary node capable of satisfying the computing resources required by the task from a plurality of candidate nodes, and request the auxiliary node to process the task, including:
selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes, and requesting the auxiliary node to process the task;
judging whether unprocessed tasks exist in the task sequence or not;
and if so, selecting an auxiliary node capable of meeting the computing resources required by the task from the plurality of candidate nodes for each unprocessed task, and requesting the auxiliary node to process the task.
Optionally, in a specific implementation manner, the node 310 is configured to select, from a plurality of candidate nodes, an auxiliary node capable of satisfying the computing resource required by the task, and includes:
sequentially traversing each candidate node in the plurality of candidate nodes according to a preset searching sequence; when each candidate node is traversed, judging whether the candidate node can meet the computing resources required by the task; if so, determining the candidate node as an auxiliary node, otherwise, traversing the next candidate node;
alternatively,
broadcasting a resource query instruction to each of the plurality of candidate nodes, so that each candidate node, after receiving the resource query instruction, judges whether it can satisfy the computing resources required by the task and feeds back the judgment result; and determining, as the auxiliary node, the candidate node corresponding to the first target result received; wherein the resource query instruction includes the amount of computing resources required by the task, and the target result is a judgment result of yes.
Optionally, in a specific implementation manner, the system further includes the target cloud service center;
for any node in the plurality of nodes, the node 310 is further configured to, for each unprocessed task, if there is no auxiliary node that can satisfy the computing resources required by the task in the plurality of candidate nodes, send a task processing request to the target cloud service center, and request the target cloud service center to process the task;
and the target cloud service center is used for receiving the task sent by the node and processing the task.
Corresponding to the task processing method provided by the embodiments of the present invention, an embodiment of the present invention further provides a node, which is a node in a local area network. As shown in fig. 4, the node includes a processor 401, a communication interface 402, a memory 403, and a communication bus 404, where the processor 401, the communication interface 402, and the memory 403 communicate with one another through the communication bus 404;
a memory 403 for storing a computer program;
the processor 401 is configured to implement the task processing method according to the embodiment of the present invention when executing the program stored in the memory 403.
Specifically, the task processing method includes:
acquiring a plurality of tasks to be processed;
determining a task sequence comprising the plurality of tasks;
sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when judging that the node can meet the computing resources required by the task, otherwise, traversing the next task;
if the task is the last in the task sequence, processing the task when judging that the node can meet the computing resources required by the task, and selecting an auxiliary node capable of meeting the computing resources required by the task from a plurality of candidate nodes aiming at each unprocessed task when the unprocessed task exists in the task sequence, and requesting the auxiliary node to process the task; otherwise, aiming at each unprocessed task comprising the task, selecting an auxiliary node which can meet the computing resources required by the task from the plurality of candidate nodes, and requesting the auxiliary node to process the task;
it should be noted that other implementation manners of the task processing method implemented by the processor 401 executing the program stored in the memory 403 are the same as the task processing method embodiment provided in the foregoing method embodiment section, and are not described again here.
As can be seen from the above, with the solution provided in the embodiment of the present invention, after the node obtains the plurality of tasks to be processed, the node may determine the task sequence including the plurality of tasks, and may further traverse each task according to the order of the tasks in the task sequence. In this way, when traversing each task, if the task is not the last task in the task sequence, when the node can satisfy the computing resources required by the task, the node can locally process the task and traverse the next task, otherwise, the node can directly traverse the next task. If the task is the last task in the task sequence, the node may process the task locally when the node is able to satisfy the computing resources required by the task, otherwise, for each unprocessed task that includes the task, the node may select an auxiliary node from a plurality of candidate nodes that is able to satisfy the computing resources required by the task and request the auxiliary node to process the task. Each candidate node is a node except the node in each local area network belonging to the target cloud service center; and the target cloud service center is a cloud service center to which the local area network where the node is located belongs.
Based on this, by applying the scheme provided by the embodiment of the invention, when the node cannot meet the computing resources required by an acquired task to be processed, the node can select, from the candidate nodes, an auxiliary node capable of meeting the computing resources required by the task. Therefore, no backup node needs to be set in each local area network, each node in the local area network can directly receive tasks to be processed from the outside, and a task to be processed is processed locally when the available resources of the node can meet the target computing resources required by that task. In this way, the situation that some computing resources in the local area network are never used can be avoided, thereby improving the computing resource utilization of the whole local area network and the task processing efficiency of the local area network.
The communication bus mentioned by the above node may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the node and other devices.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Corresponding to the task processing method provided in the embodiment of the present invention, an embodiment of the present invention further provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the task processing method provided in the embodiment of the present invention.
Specifically, the task processing method includes:
acquiring a plurality of tasks to be processed;
determining a task sequence comprising the plurality of tasks;
sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when it is judged that the node can meet the computing resources required by the task; otherwise, directly traversing the next task;
if the task is the last task in the task sequence, processing the task when it is judged that the node can meet the computing resources required by the task, and, when an unprocessed task exists in the task sequence, selecting, for each unprocessed task, an auxiliary node capable of meeting the computing resources required by that task from a plurality of candidate nodes and requesting the auxiliary node to process that task; otherwise, for each unprocessed task including the last task, selecting an auxiliary node capable of meeting the computing resources required by that task from the plurality of candidate nodes and requesting the auxiliary node to process that task.
It should be noted that other implementation manners of the task processing method implemented when the computer program is executed by the processor are the same as those in the task processing method embodiments provided in the foregoing method embodiment section, and are not described again here.
As can be seen from the above, with the solution provided in the embodiment of the present invention, after the node obtains the plurality of tasks to be processed, the node may determine the task sequence including the plurality of tasks, and may then traverse each task according to the order of the tasks in the task sequence. In this way, when traversing each task, if the task is not the last task in the task sequence, the node may locally process the task and traverse the next task when the node can meet the computing resources required by the task; otherwise, the node may directly traverse the next task. If the task is the last task in the task sequence, the node may locally process the task when the node can meet the computing resources required by the task; otherwise, for each unprocessed task including the last task, the node may select, from a plurality of candidate nodes, an auxiliary node capable of meeting the computing resources required by the task and request the auxiliary node to process the task. Each candidate node is a node, other than this node, in the local area networks belonging to the target cloud service center, and the target cloud service center is the cloud service center to which the local area network where the node is located belongs.
Based on this, by applying the scheme provided by the embodiment of the invention, when the node cannot meet the computing resources required by an acquired task to be processed, the node can select, from the candidate nodes, an auxiliary node capable of meeting the computing resources required by the task. Therefore, no backup node needs to be set in each local area network, each node in the local area network can directly receive tasks to be processed from the outside, and a task to be processed is processed locally when the available resources of the node can meet the target computing resources required by that task. In this way, the situation that some computing resources in the local area network are never used can be avoided, thereby improving the computing resource utilization of the whole local area network and the task processing efficiency of the local area network.
It is noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the node embodiment and the computer-readable storage medium embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A task processing method is applied to a node, and the method comprises the following steps:
acquiring a plurality of tasks to be processed;
determining a task sequence comprising the plurality of tasks;
sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when it is judged that the node can meet the computing resources required by the task, and traversing the next task when it is judged that the node cannot meet the computing resources required by the task;
if the task is the last task in the task sequence: processing the task when it is judged that the node can meet the computing resources required by the task, and, when an unprocessed task exists in the task sequence, selecting, for each unprocessed task, an auxiliary node capable of meeting the computing resources required by that task from a plurality of candidate nodes and requesting the auxiliary node to process that task; when it is judged that the node cannot meet the computing resources required by the task, selecting an auxiliary node capable of meeting the computing resources required by the task from the plurality of candidate nodes, requesting the auxiliary node to process the task, and judging whether an unprocessed task exists in the task sequence, and if so, selecting, for each unprocessed task, an auxiliary node capable of meeting the computing resources required by that task from the plurality of candidate nodes and requesting the auxiliary node to process that task;
wherein the plurality of candidate nodes are: nodes, other than the node, in the local area networks belonging to the target cloud service center; and the target cloud service center is the cloud service center to which the local area network where the node is located belongs.
2. The method of claim 1, wherein the step of selecting an auxiliary node from the plurality of candidate nodes that satisfies the computational resources required by the task and requesting the auxiliary node to process the task comprises:
when an auxiliary node capable of meeting the computing resources required by the task exists in the first type of candidate nodes, requesting the auxiliary node to process the task;
when the first type of candidate nodes do not have auxiliary nodes capable of meeting the computing resources required by the task, selecting one auxiliary node capable of meeting the computing resources required by the task from the second type of candidate nodes, and requesting the auxiliary node to process the task;
wherein the first type of candidate nodes are: candidate nodes belonging to the local area network where the node is located; and the second type of candidate nodes are: candidate nodes of the plurality of candidate nodes other than the first type of candidate nodes.
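As a minimal, non-authoritative sketch of this two-tier selection (in Python; the Candidate record, the lan_id field and the select_auxiliary_node name are introduced purely for illustration), the requesting node might first look for a first-type candidate in its own local area network and only then fall back to the second-type candidates:

from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Candidate:
    node_id: str
    lan_id: str                 # identifier of the local area network the candidate belongs to
    available_resources: int

def select_auxiliary_node(required_resources: int,
                          candidates: Iterable[Candidate],
                          own_lan_id: str) -> Optional[Candidate]:
    # first type: candidates in the requesting node's own LAN; second type: the rest
    first_type = [c for c in candidates if c.lan_id == own_lan_id]
    second_type = [c for c in candidates if c.lan_id != own_lan_id]
    for tier in (first_type, second_type):
        for candidate in tier:
            if candidate.available_resources >= required_resources:
                return candidate
    return None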
3. The method of claim 1, wherein the step of selecting an auxiliary node from the plurality of candidate nodes that satisfies the computational resources required by the task comprises:
sequentially traversing each candidate node in the plurality of candidate nodes according to a preset searching sequence; when each candidate node is traversed, judging whether the candidate node can meet the computing resources required by the task; if so, determining the candidate node as an auxiliary node, otherwise, traversing the next candidate node;
alternatively,
broadcasting a resource query instruction to each candidate node in the plurality of candidate nodes, so that each candidate node judges, after receiving the resource query instruction, whether the candidate node can meet the computing resources required by the task, and feeds back a judgment result; and determining a candidate node corresponding to the first received target result as the auxiliary node; wherein the resource query instruction comprises: the amount of computing resources required by the task; and the target result is: a judgment result indicating yes.
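The two alternatives in this claim could be sketched as follows; the callables query_resources and broadcast_query stand in for whatever transport is actually used and are assumptions, not part of the claim:

from typing import Callable, Iterable, List, Optional, Tuple

def find_by_polling(candidates: List[str], required: int,
                    query_resources: Callable[[str, int], bool]) -> Optional[str]:
    # Alternative 1: traverse candidates in a preset search order and ask each in turn
    for node_id in candidates:
        if query_resources(node_id, required):
            return node_id
    return None

def find_by_broadcast(candidates: List[str], required: int,
                      broadcast_query: Callable[[List[str], int], Iterable[Tuple[str, bool]]]) -> Optional[str]:
    # Alternative 2: broadcast the required amount and take the first affirmative reply
    for node_id, can_meet in broadcast_query(candidates, required):
        if can_meet:
            return node_id
    return None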
4. The method according to any one of claims 1-3, further comprising:
for each unprocessed task, if an auxiliary node capable of meeting the computing resource required by the task does not exist in the plurality of candidate nodes, sending a task processing request to the target cloud service center, and requesting the target cloud service center to process the task; wherein the task processing request comprises: the task and the amount of computing resources required for the task.
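A possible sketch of this fallback, assuming an HTTP transport and a cloud endpoint URL that are illustrative only (the claim fixes just the payload: the task and the amount of computing resources it requires):

import json
from urllib import request

def escalate_to_cloud(task_payload: dict, required_resources: int,
                      cloud_url: str = "http://cloud.example/tasks") -> None:
    # send a task processing request to the target cloud service center
    body = json.dumps({"task": task_payload,
                       "required_resources": required_resources}).encode("utf-8")
    req = request.Request(cloud_url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        resp.read()  # the cloud service center processes the task and replies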
5. A task processing system, characterized by comprising a plurality of nodes in local area networks that belong to the same target cloud service center;
for any node of the plurality of nodes, the node is configured to:
acquiring a plurality of tasks to be processed;
determining a task sequence comprising the plurality of tasks;
sequentially traversing each task in the task sequence according to the arrangement sequence of the tasks in the task sequence;
when traversing each task, if the task is not the last task in the task sequence, processing the task and traversing the next task when it is judged that the node can meet the computing resources required by the task, and traversing the next task when it is judged that the node cannot meet the computing resources required by the task;
if the task is the last task in the task sequence: processing the task when it is judged that the node can meet the computing resources required by the task, and, when an unprocessed task exists in the task sequence, selecting, for each unprocessed task, an auxiliary node capable of meeting the computing resources required by that task from a plurality of candidate nodes and requesting the auxiliary node to process that task; when it is judged that the node cannot meet the computing resources required by the task, selecting an auxiliary node capable of meeting the computing resources required by the task from the plurality of candidate nodes, requesting the auxiliary node to process the task, and judging whether an unprocessed task exists in the task sequence, and if so, selecting, for each unprocessed task, an auxiliary node capable of meeting the computing resources required by that task from the plurality of candidate nodes and requesting the auxiliary node to process that task;
wherein the plurality of candidate nodes are: nodes of the plurality of nodes other than the node; and the selected auxiliary node is configured to receive the task sent by the node and process the task.
6. The system of claim 5, wherein the node is configured to select an auxiliary node that can meet the computing resources required by the task from the plurality of candidate nodes and request the auxiliary node to process the task, comprising:
when an auxiliary node capable of meeting the computing resources required by the task exists in the first type of candidate nodes, requesting the auxiliary node to process the task;
when the first type of candidate nodes do not have auxiliary nodes capable of meeting the computing resources required by the task, selecting one auxiliary node capable of meeting the computing resources required by the task from the second type of candidate nodes, and requesting the auxiliary node to process the task;
wherein the first type of candidate nodes are: candidate nodes belonging to the local area network where the node is located; and the second type of candidate nodes are: candidate nodes of the plurality of candidate nodes other than the first type of candidate nodes.
7. The system of claim 5, wherein the node is configured to select an auxiliary node from the plurality of candidate nodes that satisfies the computational resources required by the task, comprising:
sequentially traversing each candidate node in the plurality of candidate nodes according to a preset searching sequence; when each candidate node is traversed, judging whether the candidate node can meet the computing resources required by the task; if so, determining the candidate node as an auxiliary node, otherwise, traversing the next candidate node;
alternatively,
broadcasting a resource query instruction to each candidate node in the plurality of candidate nodes, so that each candidate node judges, after receiving the resource query instruction, whether the candidate node can meet the computing resources required by the task, and feeds back a judgment result; and determining a candidate node corresponding to the first received target result as the auxiliary node; wherein the resource query instruction comprises: the amount of computing resources required by the task; and the target result is: a judgment result indicating yes.
8. The system of any one of claims 5-7, further comprising the target cloud service center;
for any node in the plurality of nodes, the node is further configured to, for each unprocessed task, if there is no auxiliary node that can satisfy the computing resources required by the task in the plurality of candidate nodes, send a task processing request to the target cloud service center, and request the target cloud service center to process the task;
and the target cloud service center is used for receiving the task sent by the node and processing the task.
9. A node, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 4 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4.
CN201910703055.1A 2019-07-31 2019-07-31 Task processing method, system and node Active CN112311832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910703055.1A CN112311832B (en) 2019-07-31 2019-07-31 Task processing method, system and node

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910703055.1A CN112311832B (en) 2019-07-31 2019-07-31 Task processing method, system and node

Publications (2)

Publication Number Publication Date
CN112311832A CN112311832A (en) 2021-02-02
CN112311832B true CN112311832B (en) 2022-08-05

Family

ID=74486201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910703055.1A Active CN112311832B (en) 2019-07-31 2019-07-31 Task processing method, system and node

Country Status (1)

Country Link
CN (1) CN112311832B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521044A (en) * 2011-12-30 2012-06-27 北京拓明科技有限公司 Distributed task scheduling method and system based on messaging middleware
CN103279385A (en) * 2013-06-01 2013-09-04 北京华胜天成科技股份有限公司 Method and system for scheduling cluster tasks in cloud computing environment
CN108737560A (en) * 2018-05-31 2018-11-02 南京邮电大学 Cloud computing task intelligent dispatching method and system, readable storage medium storing program for executing, terminal
CN109656690A (en) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 Scheduling system, method and storage medium
CN109656691A (en) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 Processing method, device and the electronic equipment of computing resource

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9477521B2 (en) * 2014-05-29 2016-10-25 Netapp, Inc. Method and system for scheduling repetitive tasks in O(1)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521044A (en) * 2011-12-30 2012-06-27 北京拓明科技有限公司 Distributed task scheduling method and system based on messaging middleware
CN103279385A (en) * 2013-06-01 2013-09-04 北京华胜天成科技股份有限公司 Method and system for scheduling cluster tasks in cloud computing environment
CN109656690A (en) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 Scheduling system, method and storage medium
CN109656691A (en) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 Processing method, device and the electronic equipment of computing resource
CN108737560A (en) * 2018-05-31 2018-11-02 南京邮电大学 Cloud computing task intelligent dispatching method and system, readable storage medium storing program for executing, terminal

Also Published As

Publication number Publication date
CN112311832A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
US8671134B2 (en) Method and system for data distribution in high performance computing cluster
CN110096336B (en) Data monitoring method, device, equipment and medium
CN108121511B (en) Data processing method, device and equipment in distributed edge storage system
US10367719B2 (en) Optimized consumption of third-party web services in a composite service
WO2021227999A1 (en) Cloud computing service system and method
CN111460504B (en) Service processing method, device, node equipment and storage medium
CN109033814B (en) Intelligent contract triggering method, device, equipment and storage medium
CN110347515B (en) Resource optimization allocation method suitable for edge computing environment
CN112416881A (en) Intelligent terminal storage sharing method, device, medium and equipment based on block chain
CN113835865A (en) Task deployment method and device, electronic equipment and storage medium
CN109508912B (en) Service scheduling method, device, equipment and storage medium
CN111694639B (en) Updating method and device of process container address and electronic equipment
CN111611076B (en) Fair distribution method for mobile edge computing shared resources under task deployment constraint
CN110336813B (en) Access control method, device, equipment and storage medium
CN113886069A (en) Resource allocation method and device, electronic equipment and storage medium
CN113934545A (en) Video data scheduling method, system, electronic equipment and readable medium
CN114036031B (en) Scheduling system and method for resource service application in enterprise digital middleboxes
CN105740249B (en) Processing method and system in parallel scheduling process of big data job
WO2024037629A1 (en) Data integration method and apparatus for blockchain, and computer device and storage medium
CN106776034B (en) Task batch processing calculation method, master station computer and system
CN112311832B (en) Task processing method, system and node
CN111835809B (en) Work order message distribution method, work order message distribution device, server and storage medium
CN111046004A (en) Data file storage method, device, equipment and storage medium
CN111506400A (en) Computing resource allocation system, method, device and computer equipment
CN110750362A (en) Method and apparatus for analyzing biological information, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant