CN113918333A - Task processing method and terminal - Google Patents
- Publication number
- CN113918333A CN113918333A CN202111190647.1A CN202111190647A CN113918333A CN 113918333 A CN113918333 A CN 113918333A CN 202111190647 A CN202111190647 A CN 202111190647A CN 113918333 A CN113918333 A CN 113918333A
- Authority
- CN
- China
- Prior art keywords: node, nodes, level, performance, current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F9/5038 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
- G06F9/5016 — Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals, the resource being the memory
- G06F2209/5021 — Indexing scheme relating to G06F9/50: Priority
- G06F2209/508 — Indexing scheme relating to G06F9/50: Monitor
Abstract
The invention discloses a task processing method and a terminal. The remaining resources of each node in a distributed cluster and the communication status between each node and its neighboring nodes are monitored, and the nodes are ranked by performance according to the monitoring data to obtain a first preset number of best-performing nodes as super nodes. A task to be processed is received, and each super node processes a corresponding subtask of the task. If the current subtask can be further split into a second preset number of next-level subtasks, a second preset number of best-performing nodes are selected from the next-level nodes corresponding to the current node processing the current subtask, and they process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further. The task to be processed is thus split level by level, and the subtasks produced at each level are processed on the best-performing nodes of that level, which improves the computing efficiency of the task and reduces its execution time.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a task processing method and a terminal.
Background
Message push systems currently support several push types, mainly personal push, broadcast push, and tag-based push.
The main flow of tag-based push is as follows: query the tag sets that satisfy the given conditions, compute intersections and differences over those sets, obtain the final result, and push messages to the users in it. As the number of users grows, the traditional approach of performing this computation on a single-node Redis becomes slow, because the tag data must first be loaded into Redis synchronously.
A newer approach uses distributed deployment: several single-node Redis instances are deployed, and messages and intermediate results are exchanged between them to produce the final computation result. However, as the number of devices keeps growing, the various original tag sets also keep growing, and to further reduce the computation time the number of Redis computing nodes must be increased. Once the number of computing nodes reaches tens or hundreds, the nodes may be deployed in different machine rooms and on heterogeneous machines, so the computing efficiency of individual nodes is affected by such external factors.
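The tag-set computation described in this background is ordinary set intersection and difference. A minimal illustration with plain Python sets — the tag names and user ids are made up for the example; a Redis deployment would instead apply commands such as SINTERSTORE to its stored sets:

```python
# Hypothetical tag sets, as returned by querying a tag store.
ios_users = {"u1", "u2", "u3", "u5"}
active_users = {"u2", "u3", "u4", "u5"}
unsubscribed = {"u5"}

# Push targets: users carrying both tags, minus those who opted out.
targets = (ios_users & active_users) - unsubscribed
print(sorted(targets))  # ['u2', 'u3']
```

The patent's point is that once these sets hold millions of members, this computation must be distributed rather than run on one Redis node.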
Disclosure of Invention
The technical problem to be solved by the invention is to provide a task processing method and a terminal that can improve task processing efficiency.
To solve the above technical problem, the invention adopts the following technical solution:
a task processing method, comprising the steps of:
monitoring the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes;
ranking the nodes by performance according to the monitored data, and taking a first preset number of best-performing nodes as super nodes;
receiving a task to be processed, each super node processing a corresponding subtask of the task to be processed;
if the current subtask can be further split into a second preset number of next-level subtasks, selecting a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further;
and aggregating the results produced by the best-performing nodes at each level to obtain the task processing result.
To solve the above technical problem, the invention also adopts the following technical solution:
a task processing terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
monitoring the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes;
ranking the nodes by performance according to the monitored data, and taking a first preset number of best-performing nodes as super nodes;
receiving a task to be processed, each super node processing a corresponding subtask of the task to be processed;
if the current subtask can be further split into a second preset number of next-level subtasks, selecting a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further;
and aggregating the results produced by the best-performing nodes at each level to obtain the task processing result.
The invention has the following beneficial effects: the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes are monitored, and the nodes are ranked by performance according to the monitoring data to obtain a first preset number of best-performing nodes as super nodes; a task to be processed is received, and each super node processes a corresponding subtask of the task; if the current subtask can be further split into a second preset number of next-level subtasks, a second preset number of best-performing nodes are selected from the next-level nodes corresponding to the current node to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further. The task to be processed is thus split level by level, and the subtasks produced at each level are processed on the best-performing nodes of that level, so that, compared with conventional random allocation, the computing efficiency of the task is improved and its execution time is reduced.
Drawings
FIG. 1 is a flow chart of a task processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a task processing terminal according to an embodiment of the present invention;
fig. 3 is a relationship diagram of a first node and a second node of a task processing method according to an embodiment of the present invention.
Detailed Description
To explain the technical content, objects, and effects of the present invention in detail, the following description refers to the accompanying drawings in combination with the embodiments.
Referring to fig. 1 and fig. 3, an embodiment of the present invention provides a task processing method, including:
monitoring the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes;
ranking the nodes by performance according to the monitored data, and taking a first preset number of best-performing nodes as super nodes;
receiving a task to be processed, each super node processing a corresponding subtask of the task to be processed;
if the current subtask can be further split into a second preset number of next-level subtasks, selecting a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further;
and aggregating the results produced by the best-performing nodes at each level to obtain the task processing result.
From the above description, the beneficial effects of the present invention are as follows: the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes are monitored, and the nodes are ranked by performance according to the monitoring data to obtain a first preset number of best-performing nodes as super nodes; a task to be processed is received, and each super node processes a corresponding subtask of the task; if the current subtask can be further split into a second preset number of next-level subtasks, a second preset number of best-performing nodes are selected from the next-level nodes corresponding to the current node to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further. The task to be processed is thus split level by level, and the subtasks produced at each level are processed on the best-performing nodes of that level, so that, compared with conventional random allocation, the computing efficiency of the task is improved and its execution time is reduced.
Further, the monitoring of the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes includes:
monitoring the CPU occupancy and remaining memory of each node in the distributed cluster at preset time intervals;
and monitoring, via the monitoring thread in each node, the resource usage and network communication status between each node and its neighboring nodes.
As can be seen from the above description, regularly monitoring the CPU occupancy and remaining memory of each node, together with the resource usage and network communication status between each node and its neighboring nodes, yields accurate performance information for each node and facilitates the subsequent performance ranking.
Further, the ranking of the nodes by performance according to the monitored data and taking a first preset number of best-performing nodes as super nodes includes:
combining, in a preset ratio, the remaining resources of each node and the communication status between each node and its neighboring nodes to obtain a performance value for each node;
and sorting the nodes by performance value, and taking the first preset number of nodes with the highest performance values as super nodes.
As can be seen from the above description, computing each node's performance value in a preset ratio from its remaining resources and its communication status with neighboring nodes, and selecting the first preset number of nodes with the highest performance values, ensures that the task is computed on well-performing nodes and improves task computing efficiency.
Further, the receiving of the task to be processed and the processing by each super node of a corresponding subtask includes:
receiving the task to be processed and splitting it into a first preset number of subtasks;
and processing the subtasks on the super nodes in one-to-one correspondence.
As described above, the subtasks of the task to be processed correspond one-to-one to the super nodes, and each subtask is dispatched to its super node for processing, which further improves the computing efficiency of the task.
Further, the selecting of a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further, includes:
ranking the next-level nodes corresponding to the current node of the current subtask by performance according to their remaining resources and their communication status with their neighboring nodes, and obtaining a second preset number of best-performing next-level nodes;
and processing the corresponding next-level subtasks one-to-one on the best-performing next-level nodes, until the current node is a bottom-level node or the current subtask cannot be split further.
As can be seen from the above description, selecting the best-performing nodes of the next level for each round of processing guarantees that the task is always processed on well-performing nodes, improving its computing efficiency and reducing its execution time.
Referring to fig. 2, another embodiment of the present invention provides a task processing terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
monitoring the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes;
ranking the nodes by performance according to the monitored data, and taking a first preset number of best-performing nodes as super nodes;
receiving a task to be processed, each super node processing a corresponding subtask of the task to be processed;
if the current subtask can be further split into a second preset number of next-level subtasks, selecting a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further;
and aggregating the results produced by the best-performing nodes at each level to obtain the task processing result.
As can be seen from the above description, the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes are monitored, and the nodes are ranked by performance according to the monitoring data to obtain a first preset number of best-performing nodes as super nodes; a task to be processed is received, and each super node processes a corresponding subtask of the task; if the current subtask can be further split into a second preset number of next-level subtasks, a second preset number of best-performing nodes are selected from the next-level nodes corresponding to the current node to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further. The task to be processed is thus split level by level, and the subtasks produced at each level are processed on the best-performing nodes of that level, so that, compared with conventional random allocation, the computing efficiency of the task is improved and its execution time is reduced.
Further, the monitoring of the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes includes:
monitoring the CPU occupancy and remaining memory of each node in the distributed cluster at preset time intervals;
and monitoring, via the monitoring thread in each node, the resource usage and network communication status between each node and its neighboring nodes.
As can be seen from the above description, regularly monitoring the CPU occupancy and remaining memory of each node, together with the resource usage and network communication status between each node and its neighboring nodes, yields accurate performance information for each node and facilitates the subsequent performance ranking.
Further, the ranking of the nodes by performance according to the monitored data and taking a first preset number of best-performing nodes as super nodes includes:
combining, in a preset ratio, the remaining resources of each node and the communication status between each node and its neighboring nodes to obtain a performance value for each node;
and sorting the nodes by performance value, and taking the first preset number of nodes with the highest performance values as super nodes.
As can be seen from the above description, computing each node's performance value in a preset ratio from its remaining resources and its communication status with neighboring nodes, and selecting the first preset number of nodes with the highest performance values, ensures that the task is computed on well-performing nodes and improves task computing efficiency.
Further, the receiving of the task to be processed and the processing by each super node of a corresponding subtask includes:
receiving the task to be processed and splitting it into a first preset number of subtasks;
and processing the subtasks on the super nodes in one-to-one correspondence.
As described above, the subtasks of the task to be processed correspond one-to-one to the super nodes, and each subtask is dispatched to its super node for processing, which further improves the computing efficiency of the task.
Further, the selecting of a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further, includes:
ranking the next-level nodes corresponding to the current node of the current subtask by performance according to their remaining resources and their communication status with their neighboring nodes, and obtaining a second preset number of best-performing next-level nodes;
and processing the corresponding next-level subtasks one-to-one on the best-performing next-level nodes, until the current node is a bottom-level node or the current subtask cannot be split further.
As can be seen from the above description, selecting the best-performing nodes of the next level for each round of processing guarantees that the task is always processed on well-performing nodes, improving its computing efficiency and reducing its execution time.
The task processing method and terminal of the present invention are suited to further improving task computing efficiency in distributed computing scenarios, as described in the following specific embodiments:
example one
Referring to fig. 1 and 3, a task processing method includes the steps of:
and S1, monitoring the resource residual condition of each node in the distributed cluster and the communication condition between each node and the adjacent nodes.
Wherein, step S1 includes:
monitoring the CPU occupancy rate and the residual amount of the memory of each node in the distributed cluster at preset time intervals;
and monitoring the resource use condition and the network communication condition between each node and the adjacent node thereof according to the monitoring thread in each node.
Specifically, in this embodiment, there is one redis distributed computing cluster, where there are 100 redis single nodes, and there is one monitoring system, and the system monitors the CPU occupancy and the remaining memory amount of each node every 1 minute; each node is internally provided with a micro monitoring thread, and the thread is mainly used for monitoring key information of the node and adjacent nodes, including network communication conditions, packet loss rates and resource use conditions of the adjacent nodes.
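A sketch of the per-node monitoring record such a system might keep; the field names and the simple resource score below are assumptions for illustration, not a format prescribed by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class NodeMetrics:
    """Monitoring record for one node; field names are illustrative."""
    node_id: str
    cpu_usage: float   # CPU occupancy, 0.0-1.0
    mem_free: float    # remaining memory as a fraction of total, 0.0-1.0
    neighbor_latency: dict = field(default_factory=dict)  # node_id -> RTT in ms
    neighbor_loss: dict = field(default_factory=dict)     # node_id -> packet loss rate

def resource_score(m: NodeMetrics) -> float:
    """Higher is better: average of free CPU and free memory."""
    return ((1.0 - m.cpu_usage) + m.mem_free) / 2

m = NodeMetrics("node-1", cpu_usage=0.3, mem_free=0.6,
                neighbor_latency={"node-2": 1.2}, neighbor_loss={"node-2": 0.0})
print(round(resource_score(m), 2))  # 0.65
```

In the embodiment this record would be refreshed every minute by the monitoring system and the per-node monitoring thread.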
S2, ranking the nodes by performance according to the monitored data, and taking a first preset number of best-performing nodes as super nodes.
Wherein step S2 includes:
combining, in a preset ratio, the remaining resources of each node and the communication status between each node and its neighboring nodes to obtain a performance value for each node;
and sorting the nodes by performance value, and taking the first preset number of nodes with the highest performance values as super nodes.
Specifically, in this embodiment the first preset number is 2, and each node's performance value is computed by weighting its remaining resources and its communication status with neighboring nodes in a ratio of 6:4. The analysis yields a list of nodes with ample resources and good communication with their neighbors, and the two nodes with the highest performance values are selected as super nodes.
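The 6:4 weighting and top-two selection can be sketched as follows; the node names and input scores are invented for illustration:

```python
import heapq

def performance_value(resource_score: float, comm_score: float) -> float:
    # 6:4 weighting between remaining resources and neighbor
    # communication quality, per the ratio given in the embodiment.
    return 0.6 * resource_score + 0.4 * comm_score

def top_k_nodes(scores: dict, k: int) -> list:
    """Return the k node ids with the highest performance values."""
    return heapq.nlargest(k, scores, key=scores.get)

scores = {
    "node-1": performance_value(0.9, 0.8),
    "node-2": performance_value(0.5, 0.9),
    "node-3": performance_value(0.8, 0.9),
    "node-4": performance_value(0.4, 0.4),
}
super_nodes = top_k_nodes(scores, 2)  # first preset number = 2
print(super_nodes)
```

The same `top_k_nodes` selection reappears in step S4, where it is applied to the next-level nodes with the second preset number.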
S3, receiving the task to be processed, each super node processing a corresponding subtask of the task.
Wherein step S3 includes:
receiving the task to be processed and splitting it into a first preset number of subtasks;
and processing the subtasks on the super nodes in one-to-one correspondence.
Specifically, when a new task arrives, the task management module splits it into two subtasks and distributes them to the two corresponding super nodes for processing.
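A minimal sketch of the split-and-dispatch step; representing a task as a list of work items is an assumption made for the example:

```python
def split_task(task: list, n: int) -> list:
    """Split a task (here: a list of work items) into n nearly equal subtasks."""
    return [task[i::n] for i in range(n)]

def dispatch(subtasks: list, super_nodes: list) -> dict:
    """Pair subtasks with super nodes in one-to-one correspondence."""
    assert len(subtasks) == len(super_nodes)
    return dict(zip(super_nodes, subtasks))

task = list(range(10))
assignment = dispatch(split_task(task, 2), ["node-1", "node-3"])
print(assignment)  # {'node-1': [0, 2, 4, 6, 8], 'node-3': [1, 3, 5, 7, 9]}
```

The one-to-one pairing mirrors the embodiment: two subtasks, two super nodes, no node left idle and no subtask unassigned.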
S4, if the current subtask can be further split into a second preset number of next-level subtasks, selecting a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further.
If the task to be processed is not yet complete, each subtask is split into a second preset number of next-level subtasks;
the next-level nodes corresponding to the current node of the current subtask are ranked by performance according to their remaining resources and their communication status with their neighboring nodes, and a second preset number of best-performing next-level nodes are obtained;
and the corresponding next-level subtasks are processed one-to-one on the best-performing next-level nodes, until the current node is a bottom-level node or the current subtask cannot be split further.
Specifically, referring to fig. 3, in this embodiment the second preset number is 2. The best-performing secondary nodes 1 and 2 are determined from the monitoring information held by super node 1, and subtask 1 of super node 1 is split into secondary subtasks 1 and 2;
secondary subtasks 1 and 2 are sent to secondary nodes 1 and 2 respectively for further computation;
similarly, subtask 2 of super node 2 is split into secondary subtasks 3 and 4, which are sent to secondary nodes 3 and 4 respectively for further computation.
Each node autonomously monitors itself and its neighboring nodes; the monitoring information includes the node's own metrics and key communication data about its neighbors. After each round of processing, the best-performing nodes of the next level are selected for computation, until the task processing is complete.
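The level-by-level descent of steps S1–S4 can be sketched as a recursion; the node hierarchy, the selection stub, and the leaf computation below are invented for illustration:

```python
def process(task, node, children, select_best, leaf_compute, k=2):
    """Recursively split `task` across the best-performing children of `node`.

    children: node id -> list of next-level node ids
    select_best: picks the k best-performing of those children (per monitoring)
    leaf_compute: runs a subtask on a bottom-level node
    """
    subtasks = [task[i::k] for i in range(k)] if len(task) >= k else None
    if not children.get(node) or subtasks is None:
        # Bottom-level node, or the subtask cannot be split further.
        return leaf_compute(node, task)
    best = select_best(children[node], k)
    # Process each next-level subtask on its node, then aggregate the results.
    return sum(process(t, n, children, select_best, leaf_compute, k)
               for t, n in zip(subtasks, best))

children = {"super-1": ["a", "b"], "a": [], "b": []}
result = process(list(range(8)), "super-1", children,
                 select_best=lambda nodes, k: nodes[:k],  # stand-in for the ranking
                 leaf_compute=lambda node, items: sum(items))
print(result)  # 28
```

Summation stands in for the real per-subtask computation; in the patent's setting the aggregation of step S5 would combine the partial tag-set results instead.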
S5, aggregating the results produced by the best-performing nodes at each level to obtain the task processing result.
In this way, by monitoring the nodes and preferentially assigning data to the nodes in the best condition, the computing efficiency of the task is improved and its execution time is reduced.
Example two
Referring to fig. 2, a task processing terminal includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of the task processing method of the first embodiment.
In summary, in the task processing method and terminal provided by the present invention, the remaining resources of each node in the distributed cluster and the communication status between each node and its neighboring nodes are monitored, and the nodes are ranked by performance according to the monitoring data to obtain a first preset number of best-performing nodes as super nodes. A task to be processed is received, and each super node processes a corresponding subtask; because the subtasks correspond one-to-one to the super nodes and each subtask is dispatched to its super node for processing, the computing efficiency of the task is further improved. If the current subtask can be further split into a second preset number of next-level subtasks, a second preset number of best-performing nodes are selected from the next-level nodes corresponding to the current node to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further. The task to be processed is thus split level by level, and the subtasks produced at each level are processed on the best-performing nodes of that level, so that, compared with conventional random allocation, the computing efficiency of the task is improved and its execution time is reduced.
The above description presents only embodiments of the present invention and does not limit its scope; all equivalent changes made using the contents of the specification and drawings, whether applied directly or indirectly in related technical fields, fall within the scope of the present invention.
Claims (10)
1. A task processing method, comprising the steps of:
monitoring the remaining resources of each node in a distributed cluster and the communication status between each node and its neighboring nodes;
ranking the nodes by performance according to the monitored data, and taking a first preset number of best-performing nodes as super nodes;
receiving a task to be processed, each super node processing a corresponding subtask of the task to be processed;
if the current subtask can be further split into a second preset number of next-level subtasks, selecting a second preset number of best-performing nodes from the next-level nodes corresponding to the current node processing the current subtask to process the next-level subtasks in one-to-one correspondence, until the current node is a bottom-level node or the current subtask cannot be split further;
and aggregating the results produced by the best-performing nodes at each level to obtain the task processing result.
2. The task processing method according to claim 1, wherein monitoring the remaining resources of each node in the distributed cluster and the communication conditions between each node and its adjacent nodes comprises:
monitoring the CPU occupancy and remaining memory of each node in the distributed cluster at preset time intervals; and
monitoring, by a monitoring thread in each node, the resource usage and network communication conditions between each node and its adjacent nodes.
3. The task processing method according to claim 1, wherein ranking the nodes by performance according to the monitoring data and taking a first preset number of nodes with optimal performance as super nodes comprises:
weighting the remaining resources of each node and the communication conditions between each node and its adjacent nodes according to a preset proportion to obtain a performance value of each node; and
sorting the nodes by performance value, and taking a first preset number of nodes with the optimal performance values as super nodes.
4. The task processing method according to claim 1, wherein receiving the task to be processed and processing, by each super node, a corresponding subtask of the task to be processed comprises:
receiving the task to be processed, and splitting it into a first preset number of subtasks; and
processing the subtasks with the super nodes in one-to-one correspondence.
5. The task processing method according to claim 1, wherein selecting a second preset number of nodes with optimal performance from the next-level nodes corresponding to the current node processing the current subtask, to process the next-level subtasks in one-to-one correspondence until the current node is a bottommost node or the current subtask cannot be further split, comprises:
ranking the next-level nodes corresponding to the current node of the current subtask by performance according to their remaining resources and the communication conditions between each next-level node and its adjacent nodes, and obtaining a second preset number of next-level nodes with optimal performance; and
processing the corresponding next-level subtasks with the next-level nodes with optimal performance in one-to-one correspondence, until the current node is a bottommost node or the current subtask cannot be further split.
6. A task processing terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
monitoring the remaining resources of each node in a distributed cluster and the communication conditions between each node and its adjacent nodes;
ranking the nodes by performance according to the monitoring data, and taking a first preset number of nodes with optimal performance as super nodes;
receiving a task to be processed, and processing, by each super node, a corresponding subtask of the task to be processed;
if a current subtask can be further split into a second preset number of next-level subtasks, selecting a second preset number of nodes with optimal performance from the next-level nodes corresponding to the current node processing the current subtask, to process the next-level subtasks in one-to-one correspondence, until the current node is a bottommost node or the current subtask cannot be further split; and
summarizing the results produced by the nodes with optimal performance at each level to obtain a task processing result.
7. The task processing terminal according to claim 6, wherein monitoring the remaining resources of each node in the distributed cluster and the communication conditions between each node and its adjacent nodes comprises:
monitoring the CPU occupancy and remaining memory of each node in the distributed cluster at preset time intervals; and
monitoring, by a monitoring thread in each node, the resource usage and network communication conditions between each node and its adjacent nodes.
8. The task processing terminal according to claim 6, wherein ranking the nodes by performance according to the monitoring data and taking a first preset number of nodes with optimal performance as super nodes comprises:
weighting the remaining resources of each node and the communication conditions between each node and its adjacent nodes according to a preset proportion to obtain a performance value of each node; and
sorting the nodes by performance value, and taking a first preset number of nodes with the optimal performance values as super nodes.
9. The task processing terminal according to claim 6, wherein receiving the task to be processed and processing, by each super node, a corresponding subtask of the task to be processed comprises:
receiving the task to be processed, and splitting it into a first preset number of subtasks; and
processing the subtasks with the super nodes in one-to-one correspondence.
10. The task processing terminal according to claim 6, wherein selecting a second preset number of nodes with optimal performance from the next-level nodes corresponding to the current node processing the current subtask, to process the next-level subtasks in one-to-one correspondence until the current node is a bottommost node or the current subtask cannot be further split, comprises:
ranking the next-level nodes corresponding to the current node of the current subtask by performance according to their remaining resources and the communication conditions between each next-level node and its adjacent nodes, and obtaining a second preset number of next-level nodes with optimal performance; and
processing the corresponding next-level subtasks with the next-level nodes with optimal performance in one-to-one correspondence, until the current node is a bottommost node or the current subtask cannot be further split.
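The recursive split-and-assign procedure of the claims can be sketched as follows. This is a simplified illustration under stated assumptions, not the patented implementation: a "task" is modeled as a list of numbers whose "computation" is their sum, the node hierarchy is a hypothetical list of levels, and the second preset number is fixed at two.

```python
# Recursive split-and-assign sketch: at each level, split the current subtask,
# pick the best-performing next-level nodes, assign subtasks one-to-one, and
# summarize results upward. Task and node models are hypothetical illustrations.

def best_nodes(candidates, count):
    """Pick the `count` nodes with the highest performance value at one level."""
    return sorted(candidates, key=lambda n: n["performance"], reverse=True)[:count]


def process(task, levels, depth=0):
    """Split `task` level by level; leaves compute, then results are summarized."""
    splittable = len(task) > 1         # toy rule: a one-element task cannot be split further
    bottommost = depth + 1 >= len(levels)
    if bottommost or not splittable:
        return sum(task)               # toy "computation" at a leaf
    second_preset_number = 2
    half = len(task) // 2
    subtasks = [task[:half], task[half:]]  # split into second_preset_number next-level subtasks
    workers = best_nodes(levels[depth + 1], second_preset_number)
    # One-to-one correspondence: subtask i is handled by workers[i].
    results = [process(st, levels, depth + 1) for st, _w in zip(subtasks, workers)]
    return sum(results)                # summarize the per-level results


levels = [
    [{"id": "super1", "performance": 0.9}],                              # super node
    [{"id": "a", "performance": 0.8}, {"id": "b", "performance": 0.6},
     {"id": "c", "performance": 0.4}],                                   # next level
    [{"id": "x", "performance": 0.7}, {"id": "y", "performance": 0.5}],  # bottommost level
]
total = process([1, 2, 3, 4], levels)
print(total)  # 10
```

The recursion stops exactly where the claims do: when the current node sits at the bottommost level or the current subtask cannot be split further; in a distributed deployment each recursive call would instead be dispatched to the selected remote node.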
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111190647.1A CN113918333A (en) | 2021-10-13 | 2021-10-13 | Task processing method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113918333A (en) | 2022-01-11 |
Family
ID=79239952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111190647.1A Pending CN113918333A (en) | 2021-10-13 | 2021-10-13 | Task processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113918333A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115955481A (en) * | 2022-12-12 | 2023-04-11 | 支付宝(杭州)信息技术有限公司 | Emergency response method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017148296A1 (en) * | 2016-03-02 | 2017-09-08 | 阿里巴巴集团控股有限公司 | Method of assigning application to assigned service cluster and device |
CN109710263A (en) * | 2018-12-18 | 2019-05-03 | 北京字节跳动网络技术有限公司 | Compilation Method, device, storage medium and the electronic equipment of code |
CN110213338A (en) * | 2019-05-09 | 2019-09-06 | 国家计算机网络与信息安全管理中心 | A kind of clustering acceleration calculating method and system based on cryptographic calculation |
CN111459659A (en) * | 2020-03-10 | 2020-07-28 | 中国平安人寿保险股份有限公司 | Data processing method, device, scheduling server and medium |
CN112860387A (en) * | 2019-11-27 | 2021-05-28 | 上海哔哩哔哩科技有限公司 | Distributed task scheduling method and device, computer equipment and storage medium |
WO2021179462A1 (en) * | 2020-03-12 | 2021-09-16 | 重庆邮电大学 | Improved quantum ant colony algorithm-based spark platform task scheduling method |
WO2021179588A1 (en) * | 2020-03-13 | 2021-09-16 | 北京旷视科技有限公司 | Computing resource scheduling method and apparatus, electronic device, and computer readable storage medium |
Non-Patent Citations (1)
Title |
---|
HU Yahong; SHENG Xia; MAO Jiafa: "Research on task scheduling optimization algorithms in a Spark environment with unbalanced resources", Computer Engineering & Science, no. 02, 15 February 2020 (2020-02-15) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107087019B (en) | Task scheduling method and device based on end cloud cooperative computing architecture | |
US10474504B2 (en) | Distributed node intra-group task scheduling method and system | |
Fan et al. | Toward optimal deployment of communication-intensive cloud applications | |
CN111459641B (en) | Method and device for task scheduling and task processing across machine room | |
US8326982B2 (en) | Method and apparatus for extracting and visualizing execution patterns from web services | |
CN110187960A (en) | A kind of distributed resource scheduling method and device | |
CN112019581A (en) | Method and device for scheduling task processing entities | |
CN114840323A (en) | Task processing method, device, system, electronic equipment and storage medium | |
CN113918333A (en) | Task processing method and terminal | |
CN111240822B (en) | Task scheduling method, device, system and storage medium | |
US8316367B2 (en) | System and method for optimizing batch resource allocation | |
CN111158800A (en) | Method and device for constructing task DAG based on mapping relation | |
Alamro et al. | Cred: Cloud right-sizing to meet execution deadlines and data locality | |
CN113467908A (en) | Task execution method and device, computer readable storage medium and terminal equipment | |
CN111049900B (en) | Internet of things flow calculation scheduling method and device and electronic equipment | |
CN110879753B (en) | GPU acceleration performance optimization method and system based on automatic cluster resource management | |
CN110750362A (en) | Method and apparatus for analyzing biological information, and storage medium | |
CN115712572A (en) | Task testing method and device, storage medium and electronic device | |
Yao et al. | Genetic scheduling on minimal processing elements in the grid | |
CN113961333B (en) | Method and device for generating and executing circular task, AI chip and storage medium | |
CN115729961A (en) | Data query method, device, equipment and computer readable storage medium | |
CN110297693B (en) | Distributed software task allocation method and system | |
CN113822485A (en) | Power distribution network scheduling task optimization method and system | |
CN114546631A (en) | Task scheduling method, control method, core, electronic device and readable medium | |
Tsoutsouras et al. | Job-arrival aware distributed run-time resource management on intel scc manycore platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||