CN114035928A - Distributed task allocation processing method - Google Patents

Distributed task allocation processing method

Info

Publication number
CN114035928A
CN114035928A
Authority
CN
China
Prior art keywords
asynchronous
task
time
tasks
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111422839.0A
Other languages
Chinese (zh)
Inventor
王家印
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Fumin Bank Co Ltd
Original Assignee
Chongqing Fumin Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Fumin Bank Co Ltd filed Critical Chongqing Fumin Bank Co Ltd
Priority to CN202111422839.0A
Publication of CN114035928A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention relates to the field of computer technology, and in particular to a distributed task allocation processing method comprising the following steps: S100: acquiring asynchronous tasks to be processed to generate an asynchronous task queue, and setting a unique identifier for each asynchronous task; S300: at intervals, an idle thread in a node acquires a preset per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue using a queue-tail stealing mechanism and processes them one by one; S400: clearing processed asynchronous tasks from the asynchronous task queue via their unique identifiers. Compared with the prior art, the method has low implementation complexity and does not cause task blocking.

Description

Distributed task allocation processing method
Technical Field
The invention relates to the technical field of computers, in particular to a distributed task allocation processing method.
Background
In the design of high-concurrency systems with a large request volume, many asynchronous tasks are introduced to improve performance, and in a complex system the number of such tasks may exceed that of real-time services. A task processing scheme is therefore particularly important; otherwise tasks accumulate in large numbers, and when tasks are generated faster than they are processed, some tasks may never be handled at all. Two task processing schemes are currently common: one queues tasks on a single machine and processes them in batches at fixed times; the other elects a master node to shard the tasks and distribute the shards to each node for processing. However, the first scheme suits only systems with a small task volume, while the second has high implementation complexity and still causes task accumulation once tasks are generated beyond a certain rate.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a distributed task allocation processing method which has low implementation complexity and does not cause task blocking.
The basic scheme provided by the invention is as follows: the distributed task allocation processing method comprises the following steps:
s100: acquiring asynchronous tasks to be processed to generate asynchronous task queues, and setting a unique identifier for each asynchronous task;
s300: at intervals, idle threads in the nodes acquire the preset per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue, working from the tail of the queue toward the head, and process them one by one;
s400: and clearing the processed asynchronous task from the asynchronous task queue through the unique identifier.
The principle and advantages of the invention are as follows: the asynchronous tasks to be processed are stored uniformly to generate a queue. Each time, an idle thread in a node takes a fixed number of asynchronous tasks, working from the tail of the queue toward the head, and clears each task from the queue once processing finishes. Compared with the prior art, the threads use a queue-tail stealing mechanism and fetch a fixed number of asynchronous tasks each time, so the tasks are sharded without any sharding center; the sharding is fault-tolerant, the implementation complexity is low, and a subset of tasks that keeps failing cannot block the execution of the others.
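The scheme can be sketched as a small in-memory model. This is an illustrative sketch under assumptions, not the patented implementation; the names `AsyncTaskQueue`, `steal_from_tail`, and `clear` are invented for the example. It shows S100 (a unique identifier per task), S300 (tail stealing of a fixed batch), and S400 (removal by identifier):

```python
import threading
import uuid
from collections import deque

class AsyncTaskQueue:
    """Toy single-process model of the shared asynchronous task queue."""

    def __init__(self):
        self._lock = threading.Lock()
        self._tasks = deque()  # [task_id, payload, state]; head = oldest

    def add(self, payload):
        # S100: every asynchronous task gets a unique identifier
        tid = str(uuid.uuid4())
        with self._lock:
            self._tasks.append([tid, payload, "pending"])
        return tid

    def steal_from_tail(self, n):
        # S300: take up to n pending tasks, newest (tail) first,
        # marking each as in-process so other threads skip it
        batch = []
        with self._lock:
            for task in reversed(self._tasks):
                if task[2] == "pending":
                    task[2] = "in-process"
                    batch.append((task[0], task[1]))
                    if len(batch) == n:
                        break
        return batch

    def clear(self, tid):
        # S400: remove a processed task by its unique identifier
        with self._lock:
            self._tasks = deque(t for t in self._tasks if t[0] != tid)

    def pending_count(self):
        with self._lock:
            return sum(1 for t in self._tasks if t[2] == "pending")
```

A worker thread would process each stolen task and then call `clear(tid)`; because tasks stay in the queue while in-process, a crashed worker's tasks remain visible rather than being lost.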
Further, the following steps are also included between the steps S100 and S300:
s200: setting acquisition time;
the step S300 further includes the steps of:
s310: each time the acquisition time elapses, the thread acquires the per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue.
By setting the acquisition time, the threads in the nodes fetch and process asynchronous tasks from the asynchronous task queue at regular intervals.
Further, the method comprises a stand-alone scheme and a cluster scheme, wherein the cluster scheme further comprises the following steps:
s210: an external timed-task system polls and calls each node in the cluster to execute step S300.
With the external timed-task system, each node in the cluster in turn fetches asynchronous tasks from the asynchronous task queue for processing, which improves efficiency; even if one node crashes, the other nodes continue processing.
Further, the step S300 further includes the following steps:
s320: after the thread acquires an asynchronous task, marking it as in-process through its unique identifier and then processing it.
This prevents other threads from fetching the same task repeatedly.
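One way to realize the in-process marking of S320 is a compare-and-set on the task's state, keyed by the unique identifier, so that exactly one thread wins the claim. A minimal sketch (the names `TaskClaimTable` and `try_claim` are assumed for illustration):

```python
import threading

class TaskClaimTable:
    """Tracks task state by unique identifier; claiming is atomic."""

    def __init__(self):
        self._lock = threading.Lock()
        self._state = {}  # task_id -> "pending" | "in-process"

    def register(self, task_id):
        with self._lock:
            self._state[task_id] = "pending"

    def try_claim(self, task_id):
        # compare-and-set under the lock: only the first caller wins,
        # so other threads cannot fetch the same task repeatedly
        with self._lock:
            if self._state.get(task_id) == "pending":
                self._state[task_id] = "in-process"
                return True
            return False
```

Even when many threads race on the same identifier, only one `try_claim` call returns `True`.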
Further, the step S310 specifically includes the following steps:
s311: after one acquisition interval has elapsed, acquiring the number of asynchronous tasks to be processed in the asynchronous task queue;
s312: judging whether the number of to-be-processed asynchronous tasks is greater than or equal to the per-fetch count; if not, executing step S313; if so, executing step S314;
s313: acquiring all to-be-processed asynchronous tasks in the asynchronous task queue;
s314: acquiring the per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue.
Before each fetch, the method checks whether the number of pending asynchronous tasks is at least the per-fetch count. If it is smaller, all pending asynchronous tasks in the queue are fetched and processed rather than being deferred to the next fetch, which prevents task accumulation. If it is greater than or equal to the per-fetch count, exactly the per-fetch number of tasks is fetched.
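The decision in S311-S314 reduces to taking the minimum of the pending count and the per-fetch limit; a one-function sketch (the name `batch_to_fetch` is assumed):

```python
def batch_to_fetch(pending: int, per_fetch: int) -> int:
    """S312: if fewer tasks are pending than the per-fetch count,
    take them all (S313); otherwise take the per-fetch count (S314)."""
    return pending if pending < per_fetch else per_fetch  # == min(pending, per_fetch)
```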
Further, the step S300 further includes the steps of:
s330: setting a processing time;
s340: the thread processes the fetched asynchronous task within processing time.
The time for a thread to process an asynchronous task is specified.
Further, the method also comprises the following steps:
s510: setting an updating time;
s520: after one update interval has elapsed, acquiring the arrival rate of new asynchronous tasks in the asynchronous task queue during that interval;
s530: updating the per-fetch count according to the arrival rate and the processing time, so that the quotient of the per-fetch count and the processing time is greater than or equal to the arrival rate of new asynchronous tasks.
By setting an update time, the arrival rate of new asynchronous tasks is checked after each update interval and the per-fetch task count is updated accordingly, giving an adaptive rate-matching mechanism: when new asynchronous tasks arrive slowly, waste of system resources is reduced, and when they arrive quickly, task accumulation is avoided.
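Following the stated inequality n ÷ t ≥ s, the updated per-fetch count can be computed as the smallest integer n with n ≥ s × t. A sketch under that assumption (the function name is invented; `arrival_rate` is s, `processing_time` is t):

```python
import math

def updated_fetch_count(arrival_rate: float, processing_time: float) -> int:
    """S530: pick the smallest n with n / t >= s, so the per-fetch
    processing rate keeps up with the observed task arrival rate."""
    return math.ceil(arrival_rate * processing_time)
```

For example, if 2.5 tasks per second arrived over the update window and the processing time is 2 seconds, the per-fetch count becomes 5, since 5 ÷ 2 ≥ 2.5.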
Further, the step S330 specifically includes the following step:
setting the processing time according to the timeliness requirement of task processing.
Since the quotient of the per-fetch task count and the processing time must be at least the arrival rate of new asynchronous tasks, once that arrival rate is fixed, a smaller processing time means fewer tasks per fetch, more shards, higher parallelism, and higher task processing efficiency. The processing time can therefore be set according to the timeliness requirement of task processing, which in turn adjusts the task processing efficiency.
Further, the step S100 further includes the steps of:
s101: and arranging the asynchronous tasks according to the order of creation time.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a distributed task allocation processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating S300 in an embodiment of a distributed task allocation processing method according to the present invention;
FIG. 3 is a logic diagram of a cluster scheme according to an embodiment of the distributed task allocation processing method of the present invention.
Detailed Description
The following is further detailed by way of specific embodiments:
the embodiment is basically as shown in the attached figure 1:
the distributed task allocation processing method comprises the following steps:
s100: acquiring asynchronous tasks to be processed, generating an asynchronous task queue, and setting a unique identifier for each asynchronous task;
s101: arranging the asynchronous tasks in the order of their creation time;
s200: setting acquisition time;
s300: each time the acquisition time elapses, an idle thread in a node acquires the preset per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue, working from the tail of the queue toward the head, and processes them one by one;
s400: and clearing the processed asynchronous task from the asynchronous task queue through the unique identifier.
Moreover, the distributed task allocation processing method of the present invention further includes a stand-alone scheme and a cluster scheme, and in the cluster scheme, the method further includes step S210:
s210: an external timed-task system polls and calls each node in the cluster to execute step S300.
S300 is specifically shown in fig. 2, and includes the following steps:
s310: each time the acquisition time elapses, the thread acquires the per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue. The step S310 specifically includes the following steps:
s311: after one acquisition interval has elapsed, acquiring the number of asynchronous tasks to be processed in the asynchronous task queue;
s312: judging whether the number of to-be-processed asynchronous tasks is greater than or equal to the per-fetch count; if not, executing step S313; if so, executing step S314;
s313: acquiring all to-be-processed asynchronous tasks in the asynchronous task queue;
s314: acquiring the per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue;
s320: after the thread acquires the asynchronous tasks, marking each acquired task as in-process through its unique identifier before processing it;
s330: setting a processing time;
s340: the thread processes the acquired asynchronous tasks within the processing time.
The following explains the specific implementation process of the distributed task allocation processing method of the present invention in detail with reference to fig. 3:
FIG. 3 shows an idealized process flow of a simplified version of the cluster scheme of the distributed task allocation processing method of the present invention. Assume the acquisition time is 1 second, 3 tasks are fetched each time, 3 new tasks arrive per second, and the processing time is 3 seconds.
In the first second, thread 1 in node 1 acquires task 3, task 2, and task 1 from the tail of the queue, marks them as in-process, and starts processing.
In the second second, thread 1 of node 2 acquires task 6, task 5, and task 4 from the tail of the queue, marks them as in-process, and starts processing. At this time, thread 1 of node 1 has finished task 3, removes it from the asynchronous task queue, and continues processing tasks 2 and 1.
In the third second, thread 2 of node 1 acquires task 9, task 8, and task 7 from the tail of the queue, marks them as in-process, and starts processing. Thread 1 of node 2 has now finished task 6, removes it from the asynchronous task queue, and continues processing tasks 5 and 4. Thread 1 of node 1 has now finished task 2, removes it from the asynchronous task queue, and continues processing task 1.
The asynchronous tasks to be processed are stored uniformly; an idle thread in a node takes a fixed number of asynchronous tasks from the tail of the queue toward the head for processing, and after each acquisition interval the next thread continues fetching. The tasks are thus sharded without a sharding center, the sharding is fault-tolerant, the implementation complexity is low, and tasks that repeatedly fail cannot block the execution of the others. In the cluster scheme, the external timed-task system polls and calls a thread on each node in turn, so even if one node crashes, the other nodes keep executing.
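The walk-through can be replayed as a toy discrete-time simulation. All names and parameters here are assumptions matching the example (one poll per second, a batch of 3, 3 arrivals per second, each in-flight batch finishing one task per second), not the patented implementation:

```python
from collections import deque

def simulate(seconds=6, arrivals_per_sec=3, batch=3):
    """Toy replay of the FIG. 3 walk-through: each second, new tasks
    arrive, the polled node's idle thread steals a batch from the
    queue tail, and every in-flight batch finishes one task."""
    queue = deque()   # pending tasks, head = oldest
    in_flight = []    # one deque of remaining tasks per stolen batch
    done = []
    next_task = 1
    for _ in range(seconds):
        for _ in range(arrivals_per_sec):  # new tasks arrive
            queue.append(next_task)
            next_task += 1
        # tail stealing: newest tasks first
        stolen = [queue.pop() for _ in range(min(batch, len(queue)))]
        if stolen:
            in_flight.append(deque(stolen))
        for b in in_flight:                # each batch finishes one task
            done.append(b.popleft())
        in_flight = [b for b in in_flight if b]
    return done
```

Running it confirms the key properties of the example: the newest task (task 3) is processed first, and no task is ever processed twice.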
Further comprising the steps of:
s510: setting updating time which is a multiple of the acquisition time;
s520: after one update interval has elapsed, acquiring the arrival rate of new asynchronous tasks in the asynchronous task queue during that interval;
s530: updating the per-fetch count according to the arrival rate and the processing time, so that the quotient of the per-fetch count and the processing time is greater than or equal to the arrival rate of new asynchronous tasks.
Since the arrival rate of new asynchronous tasks in the queue may change, it is re-checked after every update interval; in this embodiment the update time is five times the acquisition time. Let s be the arrival rate of new asynchronous tasks (tasks per second), n the number of asynchronous tasks processed per fetch, and t the processing time. To ensure tasks do not accumulate, the processing rate must be at least the arrival rate, i.e. n ÷ t ≥ s. With the processing time t given, n is adjusted after each check of the arrival rate s. This yields an adaptive rate-matching mechanism: system resources are not wasted when tasks arrive slowly, and processing keeps up when they arrive quickly.
From n ÷ t ≥ s it follows that, for a known arrival rate s, a smaller processing time t means a smaller per-fetch count n, more shards, higher parallelism, and higher task processing efficiency. The value of t is therefore set according to the timeliness requirement of task processing, which changes the task processing efficiency.
The foregoing are merely exemplary embodiments of the present invention; no attempt is made to show structural details in more depth than is necessary for a fundamental understanding of the art, the description together with the drawings making apparent to those skilled in the art how the invention may be embodied in practice. It should be noted that those skilled in the art can make several changes and modifications without departing from the structure of the present invention; these shall also fall within the protection scope of the present invention and will not affect the effect of its implementation or the practicability of the patent. The scope of protection shall be determined by the contents of the claims, with the description of the embodiments in the specification serving to interpret them.

Claims (9)

1. The distributed task allocation processing method is characterized by comprising the following steps: the method comprises the following steps:
s100: acquiring asynchronous tasks to be processed to generate asynchronous task queues, and setting a unique identifier for each asynchronous task;
s300: at intervals, idle threads in the nodes acquire the preset per-fetch number of to-be-processed asynchronous tasks from the asynchronous task queue, working from the tail of the queue toward the head, and process them one by one;
s400: and clearing the processed asynchronous task from the asynchronous task queue through the unique identifier.
2. The distributed task allocation processing method according to claim 1, characterized in that: the following steps are also included between S100 and S300:
s200: setting acquisition time;
the step S300 further includes the steps of:
s310: and each time the acquisition time is passed, the thread acquires the asynchronous tasks to be processed, the number of which is to be processed each time, from the asynchronous task queue.
3. The distributed task allocation processing method according to claim 2, characterized in that: the method comprises a stand-alone scheme and a cluster scheme, wherein the cluster scheme further comprises the following steps:
s210: and polling each node in the calling cluster by an external timing task system to execute the step S300.
4. The distributed task allocation processing method according to any one of claims 2 and 3, characterized in that: the step S300 further includes the following steps:
s320: and after the thread acquires the asynchronous task, marking the acquired asynchronous task as in-process through the unique identifier and then processing the asynchronous task.
5. The distributed task allocation processing method according to claim 4, characterized in that: the step S310 specifically includes the following steps:
s311: after one-time acquisition time, acquiring the number of asynchronous tasks to be processed in an asynchronous task queue;
s312: judging whether the number of the asynchronous tasks to be processed is larger than or equal to the number of the asynchronous tasks to be processed each time, if not, executing a step S313, and if so, executing a step S314;
s313: acquiring all to-be-processed asynchronous tasks in an asynchronous task queue;
s314: and acquiring the asynchronous tasks to be processed, the number of which is to be processed each time, from the asynchronous task queue.
6. The distributed task allocation processing method according to claim 5, characterized in that: the step S300 further includes the steps of:
s330: setting a processing time;
s340: the thread processes the fetched asynchronous task within processing time.
7. The distributed task allocation processing method according to claim 6, characterized in that: further comprising the steps of:
s510: setting an updating time;
s520: after one-time updating time, acquiring the newly increased speed of the asynchronous tasks in the asynchronous task queue in the updating time;
s530: and updating the processing number of each time according to the new speed and the processing time of the asynchronous task, so that the quotient of the processing number and the processing time of each time is more than or equal to the new speed of the asynchronous task.
8. The distributed task allocation processing method according to claim 7, characterized in that: the step S330 specifically includes the following step:
and setting the processing time according to the requirement of the timeliness of task processing.
9. The distributed task allocation processing method according to claim 1, characterized in that: the step S100 further includes the steps of:
s101: and arranging the asynchronous tasks according to the order of creation time.
CN202111422839.0A 2021-11-26 2021-11-26 Distributed task allocation processing method Pending CN114035928A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111422839.0A CN114035928A (en) 2021-11-26 2021-11-26 Distributed task allocation processing method


Publications (1)

Publication Number Publication Date
CN114035928A (en) 2022-02-11

Family

ID=80145754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111422839.0A Pending CN114035928A (en) 2021-11-26 2021-11-26 Distributed task allocation processing method

Country Status (1)

Country Link
CN (1) CN114035928A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116151137A (en) * 2023-04-24 2023-05-23 之江实验室 Simulation system, method and device


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506808A (en) * 2021-02-08 2021-03-16 南京吉拉福网络科技有限公司 Test task execution method, computing device, computing system and storage medium


Similar Documents

Publication Publication Date Title
CN106802826B (en) Service processing method and device based on thread pool
WO2020211579A1 (en) Processing method, device and system for distributed bulk processing system
CN109445851B (en) Multithreaded processor
JPWO2014041673A1 (en) Stream data multiplex processing method
CN111949386A (en) Task scheduling method, system, computing device and readable storage medium
CN106569887B (en) Fine-grained task scheduling method in cloud environment
CN109413502B (en) Multithreading barrage message distribution method, device, equipment and storage medium
CN108536530B (en) Multithreading task scheduling method and device
CN111897637B (en) Job scheduling method, device, host and storage medium
CN110704185B (en) Cluster system fragmentation timing task scheduling method and cluster system
CN114035928A (en) Distributed task allocation processing method
US8458136B2 (en) Scheduling highly parallel jobs having global interdependencies
CN110704172B (en) Cluster system timing task scheduling method and cluster system
Cho et al. Scheduling parallel real-time tasks on the minimum number of processors
CN114816709A (en) Task scheduling method, device, server and readable storage medium
US10649934B2 (en) Image processing apparatus, notification monitoring program, and notification monitoring method
CN109964206B (en) Device and method for processing tasks
US11822960B2 (en) Cascading of graph streaming processors
CN116089033A (en) Task scheduling method based on multistage heterogeneous dynamic queue
CN115469989A (en) Distributed batch task scheduling method and system
CN115904650A (en) Timed task supervision method and device under Linux system
CN114691324A (en) Elastic adjustment method and device for task processing parallelism
CN109379605B (en) Bullet screen distribution method, device, equipment and storage medium based on bullet screen sequence
CN113742071A (en) Task processing method and electronic equipment
CN113535361A (en) Task scheduling method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination