CN116737345A - Distributed task processing system, distributed task processing method, distributed task processing device, storage medium and storage device - Google Patents

Distributed task processing system, distributed task processing method, distributed task processing device, storage medium and storage device

Info

Publication number
CN116737345A
Authority
CN
China
Prior art keywords
task
target
node
computing node
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311010091.2A
Other languages
Chinese (zh)
Inventor
曾洪海
肖恒进
王超
王永恒
巫英才
连建晓
周春来
恽爽
路游
韩珺婷
王梦丝
杨亚飞
董子铭
郑黄河
沈镇方
鲁艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202311010091.2A
Publication of CN116737345A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The specification discloses a distributed task processing system, method, device, storage medium and electronic device. In the process of processing each task, a scheduling node determines the task information of each task, determines the load condition of the system based on the task information, and, when the load is too high, determines a target task to be terminated and broadcasts the task identifier of the target task to the computing nodes. A computing node can judge, according to the received task identifier of the target task, whether it is executing the target task; if so, it updates the state of the target task to the termination state and stops executing the target task. The method and the device can monitor the load state of the distributed processing system and, when the system load is monitored to be too high, automatically determine the target task to be terminated based on the task information of each task. The task processing efficiency of the system can thus be ensured without adding computing resources.

Description

Distributed task processing system, distributed task processing method, distributed task processing device, storage medium and storage device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a distributed task processing system, a distributed task processing method, a distributed task processing device, a storage medium, and an electronic device.
Background
With the development of computer technology and the need for service integration, the use of a distributed system to perform task processing procedures has become one of the most common application scenarios of the distributed system.
Generally, the distributed system includes a scheduling node and a computing node, where the scheduling node receives tasks and sends task information to the computing node, and the computing node automatically generates tasks and executes the tasks according to the received task information.
However, when the distributed system executes tasks, the system load may become too high. In this case, downtime of the distributed system can only be avoided by adding computing resources to the system, so as to ensure the task processing efficiency of the distributed system.
Based on this, the present specification provides a distributed task processing system.
Disclosure of Invention
The present specification provides a distributed task processing system, a distributed task processing method, a distributed task processing device, a storage medium, and an electronic device, so as to partially solve the foregoing problems of the prior art.
The technical solutions adopted in the specification are as follows:
The present specification provides a distributed task processing system, the system comprising: a scheduling node and a plurality of computing nodes, wherein each computing node executes different tasks; wherein:
the scheduling node is used for determining task information corresponding to tasks executed by each computing node respectively, determining the load condition of the system according to the task information of each task, determining a target task according to the task information corresponding to each task when it is determined according to the load condition that an abnormality exists, generating a termination instruction according to the task identification of the target task, and broadcasting the termination instruction;
the computing node is used for receiving a termination instruction sent by the scheduling node, updating the execution state of the target task into a termination state when the target task and the corresponding execution state thereof are determined to be stored according to the task identification of the target task carried in the termination instruction, determining the execution state corresponding to each task executed by the computing node, and stopping executing the task if the execution state is the termination state.
Optionally, the computing node is configured to send a task generation request to the scheduling node; receiving a task identifier returned by the scheduling node, generating a task to be executed according to the task identifier and the task information, determining an execution state of the task to be executed, and transmitting the execution state to the scheduling node according to the task identifier;
The scheduling node is used for distributing task identifiers for tasks corresponding to the task generation request according to the received task generation request, and the tasks executed by the computing nodes correspond to different task identifiers; returning the task identifier to the computing node according to the task generation request; and receiving the execution state sent by the computing node, updating the state of the task corresponding to the task identifier according to the execution state, and storing the state.
Optionally, the scheduling node is configured to determine task information corresponding to tasks executed by each computing node respectively, where, for each task, the task information of the task includes at least one of task execution duration, task priority, and number of threads occupied by the task; and determining the load condition of the system according to task information corresponding to each task.
Optionally, the scheduling node is configured to determine task levels corresponding to each task according to task information corresponding to each task, and determine, as a target task, a task with a task level lower than a preset threshold according to the task levels corresponding to each task; the task information of each task comprises at least one of task execution time length, task priority and task occupation thread number, the task grade of the task is inversely related to the task execution time length, the task grade is positively related to the task priority, and the task grade is inversely related to the task occupation thread number.
The present specification provides a distributed task processing method, the method is applied to a scheduling node of a distributed task processing system, the system comprising: a scheduling node and a plurality of computing nodes, wherein each computing node executes different tasks; the method comprises the following steps:
determining task information corresponding to the tasks executed by the computing nodes respectively, and determining the load condition of the system according to the task information of the tasks;
when the system is determined to be abnormal according to the load condition, determining a target task according to the task information corresponding to each task;
generating a termination instruction according to the task identification of the target task, and broadcasting the termination instruction; and the computing node receiving the termination instruction terminates executing the target task according to the task identification of the target task.
Optionally, determining the load condition of the system according to the task information of each task specifically includes:
for each computing node, receiving task information sent by the computing node, wherein the task information is task information corresponding to a task executed by the computing node, and the task information comprises at least one of task execution time length, task priority and task occupation thread number;
And determining the load condition of the system according to task information corresponding to each task.
Optionally, determining the target task according to the task information corresponding to each task, which specifically includes:
determining task grades corresponding to each task according to task information corresponding to each task, wherein the task information of each task comprises at least one of task execution time length, task priority and task occupation thread number, the task grade of each task is inversely related to the task execution time length, the task grade is positively related to the task priority, and the task grade is inversely related to the task occupation thread number;
and determining the task with the task grade lower than a preset threshold value as a target task according to the task grade corresponding to each task.
The present specification provides a distributed task processing device for use in a scheduling node in a distributed task processing system, the system comprising: a scheduling node and a plurality of computing nodes, each computing node performing a different task, the apparatus comprising:
the load determining module is used for determining task information corresponding to the tasks executed by the computing nodes respectively and determining the load condition of the system according to the task information of the tasks;
the target determining module is used for determining a target task according to the task information corresponding to each task when the system is determined to have an abnormality according to the load condition;
and the broadcasting module is used for generating a termination instruction according to the task identifier of the target task and broadcasting so that the computing node receiving the termination instruction terminates executing the target task according to the task identifier.
The present specification provides a computer readable storage medium storing a computer program which when executed by a processor implements a distributed task processing method as described above.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a distributed task processing method as described above when executing the program.
At least one of the above technical solutions adopted in this specification can achieve the following beneficial effects:
in the process of processing each task by the distributed task processing system, task information of each task is determined through a scheduling node, the load condition of the system is determined based on the task information, when the load condition is too high, a target task needing to be terminated is determined, and a task identifier of the target task is broadcasted to a computing node. The computing node can judge whether the computing node is executing the target task according to the received task identification of the target task, if so, the computing node updates the state of the target task into a termination state and stops executing the target task.
The method and the device can monitor the load state of the distributed processing system, and automatically determine the target task to be terminated to terminate based on the task information of each task when the system load is monitored to be too high. The task processing efficiency of the system can be ensured without increasing the computing resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate exemplary embodiments of the present specification and, together with the description, serve to explain the specification, and are not intended to limit the specification unduly. In the drawings:
FIG. 1 is a schematic diagram of a distributed task processing system provided herein;
FIG. 2 is a flow chart of a distributed task processing method provided in the present specification;
FIG. 3 is a flow chart of a distributed task processing method provided in the present specification;
FIG. 4 is a flow chart of a distributed task processing method provided in the present specification;
FIG. 5 is a flow chart of a distributed task processing method provided in the present specification;
FIG. 6 is a schematic diagram of a distributed task processing device provided herein;
fig. 7 is a schematic view of the electronic device corresponding to fig. 5 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a distributed task processing system provided in the present specification. The distributed task processing system includes a scheduling node and a number of computing nodes, where each computing node provides a different service. That is, in the distributed task processing system of this specification, each computing node performs different tasks. Of course, each computing node may be a node that can only perform certain types of tasks, such as validation tasks, risk control tasks, and the like. The distributed task processing system may be pre-deployed with configuration information corresponding to each type of task, and when processing a task, a computing node may determine the configuration information corresponding to the type of the task, so as to execute the task based on the configuration information. The task types of the tasks that a computing node can execute may be set as needed, and this specification does not limit them.
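As an illustration of the pre-deployed per-type configuration described above, the following minimal sketch assumes a hypothetical in-memory registry mapping task types to configuration; the names TASK_CONFIG and execute_task, and the configuration fields, are placeholders rather than part of the disclosure.

```python
# Minimal sketch: a computing node resolving pre-deployed configuration by task type.
# TASK_CONFIG and its fields are illustrative assumptions, not taken from the patent text.
TASK_CONFIG = {
    "validation": {"timeout_s": 30, "max_threads": 2},
    "risk_control": {"timeout_s": 120, "max_threads": 8},
}

def execute_task(task_type: str, payload: dict) -> None:
    config = TASK_CONFIG.get(task_type)
    if config is None:
        raise ValueError(f"no configuration deployed for task type {task_type!r}")
    # Execute the task based on the resolved configuration (execution details omitted).
    print(f"running {task_type} task with config {config} on payload {payload}")
```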
From a hardware perspective, in one or more embodiments provided herein, for each node in the system, the node may be one or more servers, or may be one or more smart devices. Each node may run on a different server or may run on the same server for deployment.
From a software perspective, in one or more embodiments provided herein, each node in the system may be code running on a server, and the functions of each node are implemented by running the code of each node. In addition, in the present specification, the codes of different nodes are independent of each other, and each node can communicate or transmit data with other nodes through a preset data interface.
In one or more embodiments provided herein, the distributed task processing system is exemplified by a single scheduling node and multiple computing nodes. The tasks executed by the distributed task processing system can be tasks corresponding to the same task type or tasks corresponding to different task types.
Take a database query task executed by the distributed task processing system as an example. Data is stored in a database in the form of tables; to query data to be queried from the database, the data to be queried is generally compared with each piece of data stored in the database, so the process of comparing the data to be queried with each piece of data stored in the database can be a database query task. Thus, the tasks performed by the nodes in the distributed task system may be database query tasks generated from the data to be queried and the data stored in the database.
Of course, the database query task is merely illustrative of tasks performed by the distributed task processing system, and each node in the distributed task processing system may perform the same type of task, or may perform different types of tasks, e.g., node A performs one type of task and node B performs another type of task. Which task types the tasks executed by each node specifically belong to can be set according to requirements, which is not limited in the specification.
A scheduling node in the distributed task processing system may respond to a task processing request. The task processing request can be sent by a user or can be automatically generated by a server for storing task generation data according to preset task generation conditions. The preset task generating condition may be at least one of the current time reaching a preset time point and monitoring a specified operation performed by the user.
The task processing request may carry task generation data, so the scheduling node may parse the task processing request, determine the task generation data carried in it, and send each piece of task generation data to the computing nodes, and each computing node generates a task according to the received task generation data and executes the task. The task generation data may include the task type corresponding to the task, the computing resources required to execute the task, configuration information corresponding to the task, and the like.
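The following sketch illustrates, under assumed data shapes, how a scheduling node might parse a task processing request and dispatch the carried task generation data to computing nodes; the request layout, the round-robin assignment, and the send_to_node helper are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch of the dispatch step: parse the task processing request and send each
# piece of task generation data to a computing node, which then generates and runs the task.
def send_to_node(node: str, generation_data: dict) -> None:
    # Stand-in for the preset data interface between nodes.
    print(f"dispatch to {node}: {generation_data}")

def handle_task_processing_request(request: dict, compute_nodes: list[str]) -> None:
    task_generation_data = request.get("task_generation_data", [])
    for i, generation_data in enumerate(task_generation_data):
        node = compute_nodes[i % len(compute_nodes)]  # simple round-robin assignment (assumed)
        send_to_node(node, generation_data)
```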
In one or more embodiments provided herein, during execution of tasks by computing nodes in the distributed task processing system, the scheduling node may monitor a load condition of the system, and when it is monitored that the load condition of the system is too high, determine a target task from the tasks executed by the computing nodes, and terminate the target task to reduce a load pressure of the system.
Specifically, each computing node in the distributed task processing system can send task information of a task executed by itself to a scheduling node according to a preset time interval.
Thus, the scheduling node may receive task information sent by the computing node, where the task information may be task information corresponding to a task currently being performed by the node. The task information may include a task identifier corresponding to the task.
After receiving the task information sent by each computing node, the scheduling node can determine the load condition of the system according to the task information of each task. Taking the task information being a task identifier as an example, when the number of received task identifiers is greater than a preset number threshold, the scheduling node can determine that the load of the system is too high. That is, the scheduling node may determine, according to the preset number threshold and the received task information of each task, whether the number of tasks being processed by the computing nodes in the system is greater than the preset number threshold.
If so, the scheduling node may determine that the load of the system is too high. In this case, there is typically a possibility that the computing nodes in the system may go down, and the scheduling node may determine that an anomaly exists based on the load condition.
If not, the scheduling node can determine that the load of the system is not abnormal.
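A minimal sketch of the load check just described, assuming the computing nodes report task information containing a task identifier; the threshold value and field names are illustrative assumptions.

```python
# Minimal sketch: the scheduling node compares the number of reported task identifiers
# against a preset number threshold to decide whether the system load is too high.
TASK_COUNT_THRESHOLD = 1000  # assumed preset number threshold

def load_is_abnormal(reported_task_infos: list[dict]) -> bool:
    running_task_ids = {info["task_id"] for info in reported_task_infos}
    return len(running_task_ids) > TASK_COUNT_THRESHOLD
```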
Further, when an anomaly is determined according to the load condition, the scheduling node can also determine a target task from the tasks being executed by the computing nodes and terminate the target task.
Specifically, for each task, the task information of the task may include at least one of a task execution duration, a task priority, and a number of threads occupied by the task.
The scheduling node may then determine, for each task, a task level corresponding to the task based on the task information of the task. The task level is used to characterize the importance of the task. The task level is inversely related to the task execution duration of the task, positively related to the task priority of the task, and inversely related to the number of threads occupied by the task. That is, for each task, the shorter the execution duration of the task, the higher the priority of the task, and the fewer the threads occupied by the task, the higher the task level of the task. Conversely, the longer the execution duration, the lower the priority, and the more threads occupied, the lower the task level of the task.
Then, the scheduling node can determine the task with the task grade lower than the preset threshold value from the tasks according to the task grade corresponding to each task respectively as a target task. Wherein the target task is a task for termination.
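The selection of target tasks can be sketched as follows, assuming a level function computed from the task information as described above; level_fn, LEVEL_THRESHOLD, and the field names are illustrative assumptions.

```python
# Minimal sketch: tasks whose level falls below a preset threshold become target tasks
# to be terminated; level_fn stands in for whatever level formula the system uses.
LEVEL_THRESHOLD = 1.0  # assumed preset threshold

def select_target_tasks(task_infos: list[dict], level_fn) -> list[str]:
    return [info["task_id"] for info in task_infos if level_fn(info) < LEVEL_THRESHOLD]
```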
After determining the target task, the scheduling node may generate a termination instruction based on the task identification of the target task, and broadcast the termination instruction to each computing node.
For each computing node, the task identifier of the task being executed and/or to be executed by the computing node and the execution state corresponding to each task are stored in the computing node. The computing node may then receive a termination instruction broadcast by the scheduling node, and parse the termination instruction to determine a task identifier of the target task included in the termination instruction.
Then, the computing node can judge whether the task executed by the computing node contains the target task according to the task identification of each task stored by the computing node and the determined task identification of the target task.
If yes, the target task can be determined to be executed by the computing node, the computing node needs to update the execution state of the target task to a termination state according to the task identification of the target task, and the updated target task and the state thereof are stored in the computing node.
If not, the computing node can determine that the target task is not executed by the computing node, and discard the received termination instruction, the determined task identifier of the target task and other data.
Then, through the above operation, for each computing node, if the computing node executes the target task, the execution state of the target task is the termination state among the tasks stored in the computing node and the states corresponding thereto. However, since the step of changing the state may be performed during the execution of the target task, it may occur that the execution state corresponding to the target task has been changed, but the computing node is still executing the task. In order to avoid the occurrence of the above situation, the computing node may further determine, for each task executed by itself, an execution state corresponding to the task, and stop executing the task when the execution state is a termination state. Of course, if the execution state is executing or waiting to be executed, the computing node may continue to process the task.
The computing node can determine the execution state corresponding to each task at a preset time interval, and stop executing any task whose execution state is the termination state. The computing node can also monitor whether the execution state of each task changes; if so, the computing node can judge whether the changed state of the task is the termination state, and if so, stop executing the task. Specifically, how the computing node determines the execution states corresponding to the tasks, and how the tasks are processed based on the execution states, can be set as needed, which is not limited in this specification.
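The computing-node behaviour described above can be sketched as follows, assuming an in-memory table of task states guarded by a lock; the state names, the stop_task callback, and the checking interval are illustrative assumptions.

```python
# Minimal sketch of the computing-node side: a termination instruction marks the target
# task as terminated if this node holds it, and a periodic check stops any task whose
# stored execution state is the termination state.
import threading
import time

task_states = {}                     # task_id -> "running" | "waiting" | "terminated"
task_states_lock = threading.Lock()

def on_termination_instruction(target_task_id: str) -> None:
    with task_states_lock:
        if target_task_id in task_states:
            task_states[target_task_id] = "terminated"
        # Otherwise the instruction does not concern this node and is discarded.

def periodic_state_check(stop_task, interval_s: float = 1.0) -> None:
    while True:
        with task_states_lock:
            terminated = [tid for tid, state in task_states.items() if state == "terminated"]
        for tid in terminated:
            stop_task(tid)           # the node's own mechanism for halting execution
        time.sleep(interval_s)
```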
Based on the distributed task processing system shown in fig. 1, in the process of processing each task, task information of each task is determined through a scheduling node, the load condition of the system is determined based on the task information, and when the load condition is too high, a target task needing to be terminated is determined, and the task identification of the target task is broadcasted to a computing node. The computing node can judge whether the computing node is executing the target task according to the received task identification of the target task, if so, the computing node updates the state of the target task into a termination state and stops executing the target task. The method and the device can monitor the load state of the distributed processing system, and automatically determine the target task to be terminated to terminate based on the task information of each task when the system load is monitored to be too high. The task processing efficiency of the system can be ensured without increasing the computing resources.
Further, for the system in this specification, in order to avoid the situation in which tasks processed by multiple computing nodes correspond to the same task identifier while actually being different tasks, when each computing node generates a task, a distinct task identifier can be allocated to each task by the scheduling node.
Specifically, for each computing node, the computing node may receive the task generation information sent by the scheduling node, and then, the computing node may send a task generation request to the scheduling node according to the task generation information.
The scheduling node may receive a task generation request sent by the computing node, and allocate a task identifier to a task corresponding to the task generation request according to the task generation request, where each task executed in the system corresponds to a different task identifier.
Thus, after assigning the task identification, the scheduling node may return the task identification to the computing node based on the task generation request.
The computing node can receive the task identifier returned by the scheduling node, generate a task according to the task generation information and the globally unique task identifier, and determine the execution state corresponding to the task.
After determining the execution state, the computing node may send the execution state to a scheduling node according to the task identification.
The scheduling node may receive the execution state and store a correspondence between the task identifier and the execution state based on the task identifier corresponding to the execution state.
Subsequently, the scheduling node may acquire, at a preset time interval, the tasks executed by each computing node and the task information corresponding to each task from the computing nodes, and update the correspondence between each task identifier and each task state according to the task information.
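A minimal sketch of globally unique task-identifier allocation and state reporting, assuming a simple in-process counter as the identifier source; the class and method names are illustrative assumptions.

```python
# Minimal sketch: the scheduling node allocates a distinct identifier per task generation
# request and stores the correspondence between identifier and reported execution state.
import itertools

class SchedulingNode:
    def __init__(self) -> None:
        self._id_counter = itertools.count(1)
        self._task_states: dict[str, str] = {}   # task_id -> execution state

    def allocate_task_id(self) -> str:
        # Every task generation request receives a different identifier.
        return f"task-{next(self._id_counter)}"

    def report_state(self, task_id: str, state: str) -> None:
        # Store / update the correspondence between the identifier and the execution state.
        self._task_states[task_id] = state
```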
Based on the same idea, the present specification provides a flow diagram of a distributed task processing method as shown in fig. 2. The distributed task processing method is applied to the scheduling node.
The scheduling node may first determine task information corresponding to each task from each computing node. And then, according to the task information corresponding to each task, determining the load condition of the system. Then, the scheduling node can judge whether the load of the system is too high according to the load condition. If yes, the scheduling node can determine the target task to be terminated from the tasks according to the task information corresponding to the tasks. Then, after determining the target task, the scheduling node may generate a termination instruction according to the task identifier of the target task, and broadcast the termination instruction.
The number of the target tasks included in the termination instruction may be one or more.
Based on the same idea, the present specification provides a flow diagram of a distributed task processing method as shown in fig. 3. The distributed task processing method is applied to the computing nodes.
The computing node may monitor the termination instruction and determine a task identification of the target task included in the termination instruction after monitoring the termination instruction. Then, the computing node can judge whether the target task is executed by the computing node according to the task identifier of the target task and the task identifiers respectively corresponding to the tasks stored by the computing node. If so, the computing node may update the execution state of the target task to a termination state. If not, the compute node may discard the termination instruction.
Based on the same idea, the present specification provides a flow diagram of a distributed task processing method as shown in fig. 4. The distributed task processing method is applied to the computing nodes.
The computing node can determine the execution state of each task it is currently executing, and judge, according to the determined execution state, whether the execution state is the termination state. If so, the computing node may terminate the task. If not, the computing node may continue executing the task.
After the target task is terminated, the scheduling node may determine, based on the task identifier of the target task, the task generation information corresponding to the target task from the computing node, and distribute the task generation information to other computing nodes for execution, or generate prompt information for prompting the user that the target task has an error. Of course, when the scheduling node distributes the task generation information corresponding to the target task to other computing nodes, it may determine the load condition of each computing node and distribute the task generation information based on the load condition, or it may store the task generation information and redistribute it later when the pressure on the system is small. Specifically, how and when a terminated task is subsequently processed can be set as needed, and this specification does not limit it.
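One possible follow-up after termination, sketched under assumed load metrics: redistribute the task generation information to the least-loaded computing node, or hold it until system pressure drops. The threshold and data shapes are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch: redistribute a terminated task's generation information to the
# least-loaded node, or queue it when every node is still heavily loaded.
REDISTRIBUTE_LOAD_THRESHOLD = 0.8  # assumed load ceiling for immediate redistribution

def handle_terminated_task(generation_info: dict, node_loads: dict[str, float],
                           pending_queue: list[dict]) -> str | None:
    node, load = min(node_loads.items(), key=lambda item: item[1])
    if load < REDISTRIBUTE_LOAD_THRESHOLD:
        return node                        # redistribute to this node now
    pending_queue.append(generation_info)  # otherwise hold it until pressure is small
    return None
```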
Based on the same idea, the specification provides a flow diagram of a distributed task processing method, as shown in fig. 5:
s100: and determining task information corresponding to the tasks executed by the computing nodes respectively, and determining the load condition of the system according to the task information of the tasks.
S102: when the system is determined to be abnormal according to the load condition, determining a target task according to the task information corresponding to each task.
S104: and generating a termination instruction according to the task identification of the target task, and broadcasting the termination instruction so that the computing node receiving the termination instruction terminates executing the target task according to the task identification of the target task.
In one or more embodiments provided herein, the method is applied to a scheduling node in a distributed task processing system, the system comprising: a scheduling node and a number of computing nodes, where the tasks performed by the computing nodes are not exactly the same.
In one or more embodiments provided in the present specification, specific implementation steps of the distributed task processing method may refer to descriptions of scheduling nodes in the above system, which are not described herein.
Based on the distributed task processing method shown in fig. 5, in the process of processing each task, the task information of each task is determined, the load condition of the system is determined based on the task information, and, when the load is too high, a target task to be terminated is determined and the task identifier of the target task is broadcast to the computing nodes, so that a computing node can stop executing the target task according to the received task identifier. The method can monitor the load state of the distributed processing system, and automatically determines the target task to be terminated based on the task information of each task when the system load is monitored to be too high. The task processing efficiency of the system can be ensured without adding computing resources.
In addition, in the present specification, the scheduling node may determine the load condition of the system based on task information of each task.
Specifically, for each computing node, the scheduling node may receive task information sent by the computing node, where the task information is task information corresponding to a task executed by the computing node. The task information includes at least one of a task identifier, a task execution duration, a task priority, and the number of threads occupied by the task.
The scheduling node can then determine the load condition of the system according to the task information corresponding to each task.
If too many tasks are being executed in the system, that is, the scheduling node receives too many task identifiers, the scheduling node may determine that the system load is too high. Similarly, the system load may also be determined to be too high if the average execution duration of the tasks in the system is too long, or if the average number of threads occupied by each task is too large.
The scheduling node may determine at least one of a task count index, a task execution time index, and a task resource occupation index of the system according to the task information corresponding to each task, as system indices of the system, and determine the load condition of the system based on the system indices and a preset rule. The task count index is the total number of tasks being executed by the system, the task execution time index can be determined according to the mean, median, or the like of the task execution durations corresponding to the tasks, and the task resource occupation index can be determined according to the mean, median, or the like of the number of threads occupied by the tasks.
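The system indices mentioned above can be sketched as follows; the mean is used for the time and resource indices (a median would work equally well), and the thresholds in the preset rule are illustrative assumptions.

```python
# Minimal sketch: compute a task count index, a task execution time index, and a task
# resource occupation index, then apply an assumed preset rule to judge the load.
from statistics import mean

def system_load_is_too_high(task_infos: list[dict]) -> bool:
    if not task_infos:
        return False
    task_count = len(task_infos)
    time_index = mean(info["duration_s"] for info in task_infos)
    resource_index = mean(info["threads"] for info in task_infos)
    # Assumed rule: any index above its threshold marks the load condition as abnormal.
    return task_count > 1000 or time_index > 600 or resource_index > 16
```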
Furthermore, under the condition of too high load, the scheduling node can also determine the task grade of each task based on the task information of each task, and determine the target task from the tasks with low grade to terminate.
Specifically, the scheduling node may determine task levels corresponding to the tasks according to task information corresponding to the tasks. For each task, the task level of the task is inversely related to the task execution duration of the task, the task level is positively related to the task priority of the task, and the task level is inversely related to the number of threads occupied by the task of the task.
Taking the task execution duration as A, the number of threads occupied by the task as B, and the task priority as C, the task grade can be determined as a quantity that increases with C and decreases with A and B; an illustrative formula is given below. Of course, such a formula for determining the task level is merely an example; specifically, how to determine the task level based on the task information may be set as needed, which is not limited in this specification.
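One formula consistent with these relationships, given purely as an illustrative assumption rather than as the formula of the disclosure, is:

    task grade = C / (A × B)

so that a higher priority C raises the grade, while a longer execution duration A or a larger thread count B lowers it.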
And finally, the scheduling node can determine the task with the task grade lower than the preset threshold value as a target task according to the task grade corresponding to each task.
It should be noted that the distributed task processing method is applied to the scheduling node in the distributed task processing system, so the specific execution process of the distributed task processing method may refer to the description of the scheduling node in the distributed task processing system, which is not repeated here.
Based on the same thought, the present disclosure also provides a distributed task processing device, as shown in fig. 6.
Fig. 6 is a schematic structural diagram of a distributed task processing device provided in the present specification, where the device is applied to a scheduling node in a distributed task processing system, and the system includes: a scheduling node and a plurality of computing nodes, wherein each computing node executes different tasks, and the device includes:
the load determining module 200 is configured to determine task information corresponding to the tasks executed by the computing nodes, and determine a load condition of the system according to the task information of the tasks.
And the target determining module 202 is configured to determine a target task according to the task information corresponding to each task when it is determined according to the load condition that the system has an abnormality.
And the broadcasting module 204 is configured to generate a termination instruction according to the task identifier of the target task, and broadcast the termination instruction, so that the computing node that receives the termination instruction terminates executing the target task according to the task identifier.
Optionally, the load determining module 200 is configured to, for each computing node, receive task information sent by the computing node, where the task information is task information corresponding to a task executed by the computing node, where the task information includes at least one of task execution duration, task priority, and task occupation thread number, and determine a load condition of the system according to task information corresponding to each task respectively.
Optionally, the target determining module 202 is configured to determine task levels corresponding to each task according to task information corresponding to each task, where for each task, the task information of the task includes at least one of task execution duration, task priority, and number of threads occupied by the task, the task level of the task is inversely related to the task execution duration, positively related to the task priority, and inversely related to the number of threads occupied by the task, and a task with a task level lower than a preset threshold is determined as a target task according to the task level corresponding to each task.
The present specification also provides a computer readable storage medium storing a computer program operable to perform a distributed task processing method as provided in fig. 1 above.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 7. As shown in fig. 7, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it, to implement the distributed task processing method described above with respect to fig. 5. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present description; that is, the execution subject of the processing flows is not limited to each logic unit, but may also be hardware or a logic device.
In the 1990s, improvements to a technology could clearly be distinguished as improvements in hardware (e.g., improvements to circuit structures such as diodes, transistors, switches, etc.) or improvements in software (improvements to the method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the original code to be compiled is also written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, RHDL (Ruby Hardware Description Language), etc.; VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps such that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a kind of hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing the various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.

Claims (10)

1. A distributed task processing system, the system comprising: a scheduling node and a plurality of computing nodes, wherein each computing node executes different tasks; wherein:
the scheduling node is used for determining task information corresponding to tasks executed by each computing node respectively, determining the load condition of the system according to the task information of each task, determining a target task according to the task information corresponding to each task when abnormality exists according to the load condition, generating a termination instruction according to the task identification of the target task, and broadcasting;
the computing node is used for receiving a termination instruction sent by the scheduling node, updating the execution state of the target task into a termination state when the target task and the corresponding execution state thereof are determined to be stored according to the task identification of the target task carried in the termination instruction, determining the execution state corresponding to each task executed by the computing node, and stopping executing the task if the execution state is the termination state.
2. The system of claim 1, wherein the computing node is to send a task generation request to the scheduling node; receiving a task identifier returned by the scheduling node, generating a task to be executed according to the task identifier and the task information, determining an execution state of the task to be executed, and transmitting the execution state to the scheduling node according to the task identifier;
The scheduling node is used for distributing task identifiers for tasks corresponding to the task generation request according to the received task generation request, and the tasks executed by the computing nodes correspond to different task identifiers; returning the task identifier to the computing node according to the task generation request; and receiving the execution state sent by the computing node, updating the state of the task corresponding to the task identifier according to the execution state, and storing the state.
3. The system of claim 1, wherein the scheduling node is configured to determine task information corresponding to tasks executed by each computing node, and for each task, the task information of the task includes at least one of a task execution duration, a task priority, and a number of threads occupied by the task; and determining the load condition of the system according to task information corresponding to each task.
4. The system of claim 1, wherein the scheduling node is configured to determine task levels corresponding to each task according to task information corresponding to each task, and determine tasks with task levels lower than a preset threshold according to the task levels corresponding to each task, as target tasks; the task information of each task comprises at least one of task execution time length, task priority and task occupation thread number, the task grade of the task is inversely related to the task execution time length, the task grade is positively related to the task priority, and the task grade is inversely related to the task occupation thread number.
5. A distributed task processing method, wherein the method is applied to a scheduling node of a distributed task processing system, the system comprising: a scheduling node and a plurality of computing nodes, wherein each computing node executes different tasks; the method comprising:
determining task information corresponding to the tasks executed by the computing nodes respectively, and determining the load condition of the system according to the task information of the tasks;
when the system is determined to have abnormality according to the load condition, determining a target task according to task information respectively corresponding to each task;
generating a termination instruction according to the task identification of the target task, and broadcasting the termination instruction; and the computing node receiving the termination instruction terminates executing the target task according to the task identification of the target task.
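A sketch of one pass of the scheduling-node method of claim 5, reusing the system_load and select_target_tasks helpers from the sketches above and using a caller-supplied `broadcast` callable as a stand-in for the real transport layer (an assumption, not part of the claim):

```python
def scheduling_step(tasks_by_node: dict[str, list[dict]],
                    thread_capacity: int,
                    load_threshold: float,
                    grade_threshold: float,
                    broadcast) -> None:
    """Gather the task information of every computing node, evaluate the load
    condition, and, when the load is abnormal, broadcast a termination
    instruction carrying the identifier of each target task."""
    all_tasks = [task for node_tasks in tasks_by_node.values() for task in node_tasks]
    if system_load(all_tasks, thread_capacity) <= load_threshold:
        return  # load condition is normal, nothing to terminate
    for task_id in select_target_tasks(all_tasks, grade_threshold):
        broadcast({"type": "terminate", "task_id": task_id})

# Usage: print the termination instructions instead of sending them over a network.
scheduling_step(
    tasks_by_node={
        "node-a": [{"id": "task-1", "duration_s": 300, "priority": 1, "threads": 12}],
        "node-b": [{"id": "task-2", "duration_s": 10, "priority": 5, "threads": 3}],
    },
    thread_capacity=16,
    load_threshold=0.9,
    grade_threshold=0.0,
    broadcast=print,
)
```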
6. The method according to claim 5, wherein determining the load condition of the system according to the task information of each task specifically comprises:
for each computing node, receiving task information sent by the computing node, wherein the task information corresponds to the tasks executed by that computing node and includes at least one of a task execution duration, a task priority, and a number of threads occupied by the task;
and determining the load condition of the system according to the task information corresponding to each task.
7. The method of claim 5, wherein determining the target task according to the task information corresponding to each task specifically comprises:
determining a task grade corresponding to each task according to the task information corresponding to each task, wherein the task information of each task includes at least one of a task execution duration, a task priority, and a number of threads occupied by the task, and the task grade is inversely related to the task execution duration, positively related to the task priority, and inversely related to the number of occupied threads;
and determining, according to the task grades, the tasks whose grade is lower than a preset threshold as target tasks.
8. A distributed task processing device, the device being applied to a scheduling node of a distributed task processing system, the system comprising the scheduling node and a plurality of computing nodes, each computing node executing different tasks, the device comprising:
a load determining module, configured to determine task information corresponding to the tasks executed by each computing node and determine the load condition of the system according to the task information of each task;
a target determining module, configured to determine a target task according to the task information corresponding to each task when it is determined according to the load condition that an abnormality exists in the system;
and a broadcasting module, configured to generate a termination instruction according to the task identifier of the target task and broadcast the termination instruction, so that a computing node receiving the termination instruction terminates execution of the target task according to the task identifier.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 5-7.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method of any one of claims 5-7.
CN202311010091.2A 2023-08-11 2023-08-11 Distributed task processing system, distributed task processing method, distributed task processing device, storage medium and storage device Pending CN116737345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311010091.2A CN116737345A (en) 2023-08-11 2023-08-11 Distributed task processing system, distributed task processing method, distributed task processing device, storage medium and storage device

Publications (1)

Publication Number Publication Date
CN116737345A true CN116737345A (en) 2023-09-12

Family

ID=87918914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311010091.2A Pending CN116737345A (en) 2023-08-11 2023-08-11 Distributed task processing system, distributed task processing method, distributed task processing device, storage medium and storage device

Country Status (1)

Country Link
CN (1) CN116737345A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180321979A1 (en) * 2017-05-04 2018-11-08 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing a scheduler with preemptive termination of existing workloads to free resources for high priority items
CN107688496A (en) * 2017-07-24 2018-02-13 上海壹账通金融科技有限公司 Task distribution formula processing method, device, storage medium and server
US20210405915A1 (en) * 2020-06-26 2021-12-30 Western Digital Technologies, Inc. Distributed function processing with estimate-based scheduler
CN111897638A (en) * 2020-07-27 2020-11-06 广州虎牙科技有限公司 Distributed task scheduling method and system
CN114666335A (en) * 2022-03-21 2022-06-24 北京计算机技术及应用研究所 DDS-based distributed system load balancing device
CN115033375A (en) * 2022-05-20 2022-09-09 新华三技术有限公司 Distributed task scheduling method, device, equipment and storage medium in cluster mode
CN115686831A (en) * 2022-10-11 2023-02-03 北京市建筑设计研究院有限公司 Task processing method and device based on distributed system, equipment and medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117370034A (en) * 2023-12-07 2024-01-09 之江实验室 Evaluation method and device of computing power dispatching system, storage medium and electronic equipment
CN117370034B (en) * 2023-12-07 2024-02-27 之江实验室 Evaluation method and device of computing power dispatching system, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination