CN110851245A - Distributed asynchronous task scheduling method and electronic equipment - Google Patents


Info

Publication number: CN110851245A
Authority: CN (China)
Prior art keywords: task, queue, tasks, target, scheduling method
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201910903256.6A
Other languages: Chinese (zh)
Inventor: 林维镇
Current assignee: Xiamen Wangsu Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Xiamen Wangsu Co Ltd
Application filed by Xiamen Wangsu Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present invention relate to the technical field of data processing, and disclose a distributed asynchronous task scheduling method and electronic equipment. Task information is obtained from a configuration center that performs task configuration for each process, where the task information includes at least a target task queue configured for the current process, and the target task queue stores asynchronous tasks to be executed; a batch of target tasks is acquired from the target task queue; and the target tasks are called and executed, which can improve task scheduling efficiency.

Description

Distributed asynchronous task scheduling method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a distributed asynchronous task scheduling method and electronic equipment.
Background
With the development of internet technology, the amount of data that a server's operating system must process while handling tasks is growing rapidly. Because the conventional approach of processing tasks on a single machine can no longer meet task-processing requirements, distributed task processing has emerged: a distributed asynchronous task processing method can process multiple tasks simultaneously on multiple machines and can, in theory, be scaled out horizontally without limit, so task scheduling and task processing can be completed well.
The inventor has found at least the following problems in the prior art: in existing distributed task scheduling systems, every task-processing device must fetch its tasks from the database, and the tasks in the database are continuously updated. Therefore, when tasks are fetched, the database must first perform an exclusive query for each task, obtain the tasks to be executed through priority ranking, and update the state of those tasks to "processing" before they are executed, so the task-processing efficiency of each task-processing device is low.
Disclosure of Invention
The embodiment of the invention aims to provide a distributed asynchronous task scheduling method and electronic equipment, which can improve task scheduling efficiency.
In order to solve the above technical problem, an embodiment of the present invention provides a distributed asynchronous task scheduling method, including the following steps: task information is obtained from a configuration center used for carrying out task configuration on each process, wherein the task information at least comprises a target task queue configured for the current process, and asynchronous tasks to be executed are stored in the target task queue; acquiring a batch of target tasks from a target task queue; and calling and executing the target task.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the distributed asynchronous task scheduling method described above.
Compared with the prior art, in the embodiments of the present invention, task information is obtained from a configuration center that performs task configuration for each process, where the task information includes at least a target task queue configured for the current process, and the target task queue stores asynchronous tasks to be executed; a batch of target tasks is acquired from the target task queue; and the target tasks are called and executed. The configuration center configures tasks for each process, and each process fetches target tasks directly, in batches, from its target task queue for execution. This avoids frequent access operations on the database, prevents a large number of simultaneous task requests from overloading the database and creating a traffic bottleneck, and thus improves task scheduling efficiency. Moreover, because every process obtains its task information uniformly from the configuration center, the number of task-processing processes can be greatly expanded, further improving task scheduling efficiency.
In addition, the configuration center is also used to configure a task concurrency rule for each task queue, and the task information also includes the task concurrency rule corresponding to the target task queue. Acquiring a batch of target tasks from the target task queue then specifically comprises: acquiring a batch of target tasks from the target task queue according to the task concurrency rule corresponding to the target task queue. Setting a task concurrency rule ensures that processes do not become disordered while executing tasks.
In addition, the configuration center configures the task concurrency rule for each task queue through a task configuration table and a task queue management table; the task configuration table also records the task types configured for each process, and the task queue management table also records the task queues corresponding to each task type. Obtaining task information from the configuration center then specifically comprises: obtaining the task information from the task configuration table and the task queue management table in the configuration center. Configuring these two tables to record the task queues configured for each process and the concurrency rules of each queue facilitates unified task management across processes, and distributing tasks through two tables makes configuration more flexible.
In addition, the task concurrency rule configured for each task queue includes at least the number of concurrent threads for processing the tasks in that queue, and this number is adjusted in real time according to the speed at which the thread pool processes tasks. Because the rule includes the concurrent thread count and that count is adjusted to match processing speed, on the one hand the consumption rate of tasks in the queue is kept from dropping too low, and on the other hand CPU resources are not wasted by an excessive number of concurrent threads.
In addition, the task concurrency rules configured for the respective task queues further include a task peak time period; for task queues storing non-core tasks, the number of concurrent threads during the task peak period is lower than during non-peak periods. Reducing the concurrent thread count of non-core task queues during the peak period lets resources be devoted as much as possible to executing core tasks.
In addition, the task information also includes the number of tasks acquired from the target task queue each time, which is obtained as follows: number of tasks acquired each time = number of processing threads of the target task queue × thread pool load factor − number of tasks waiting to be processed − number of tasks being processed. Because this calculation takes the thread pool's load factor into account, the consumption rate of tasks in the target queue stays high while the number of pending tasks does not become excessive. High-priority tasks are also guaranteed to execute first: a new task given the highest priority is executed before other tasks that were submitted hours earlier (but had not yet been executed when the high-priority task was submitted).
In addition, the configuration center is also used to change task concurrency rules and task states. Modifying the concurrency rules and task states on demand allows new requirements to take effect quickly.
In addition, calling and executing the target task includes at least: if execution of a task fails, judging whether the task's retry count has reached a preset threshold; if it has not, the task is added back into the task queue, where the preset threshold differs according to the task's service type. Failed tasks are retried according to the retry-count rule so that they are executed whenever possible.
In addition, the tasks in the target task queue are prioritized. Because the data in the task queue are already arranged in priority order, the step of sorting tasks by priority is eliminated compared with taking target tasks directly from the database, which can improve task scheduling efficiency.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings; like reference numerals refer to similar elements, and the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a flowchart of a distributed asynchronous task scheduling method provided in accordance with a first embodiment of the present invention;
FIG. 2 is a flowchart of a distributed asynchronous task scheduling method provided in accordance with a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to give the reader a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments. The embodiments are divided for convenience of description and do not limit the specific implementation of the present invention; where no contradiction arises, the embodiments may be combined with and refer to one another.
The present invention provides a distributed architecture in which a complex task can be decomposed into several simple task types, and the specific tasks under each task type can be stored in one or more task queues. In actual implementation, the task types executed by each process are configured in advance by a configuration center and can also be modified dynamically; several thread pools can be opened within a process, each thread pool processes the tasks of one queue, and all thread pools process their tasks concurrently.
The first embodiment of the invention relates to a distributed asynchronous task scheduling method. The core of the embodiment is that task information is obtained from a configuration center that performs task configuration for each process, where the task information includes at least a target task queue configured for the current process, and the target task queue stores asynchronous tasks to be executed; a batch of target tasks is acquired from the target task queue; and the target tasks are called and executed. The distributed asynchronous task scheduling method of this embodiment is shown in fig. 1. The implementation details below are provided only for ease of understanding and are not necessary for implementing this embodiment.
Step 101, task information is obtained from a configuration center for task configuration for each process.
Specifically, in the asynchronous task scheduling system provided by the application, the configuration center configures task information for each process. The task information includes at least the task queues configured for each process, and each task queue stores tasks to be executed. After a process starts, it can periodically obtain the task information from the configuration center, or the configuration center can actively push it to each process. It should be noted that the configuration center may configure multiple target task queues for a single process; when the process executes tasks, it creates one thread pool per task queue, i.e. each thread pool executes the tasks of one queue, and the thread pools execute concurrently.
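The configuration-center reply and the one-pool-per-queue setup described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the dictionary layout and field names (`threads`, `load_factor`) are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def build_thread_pools(task_info):
    """Create one thread pool per configured task queue, as described
    above: each pool concurrently executes the tasks of one queue."""
    pools = {}
    for queue_name, rule in task_info.items():
        # rule["threads"] is the configured number of concurrent threads
        pools[queue_name] = ThreadPoolExecutor(max_workers=rule["threads"])
    return pools

# What a configuration-center reply for one process might look like
# (illustrative values):
task_info = {
    "queue_1": {"threads": 20, "load_factor": 1.5},
    "queue_2": {"threads": 8, "load_factor": 1.5},
}
pools = build_thread_pools(task_info)
```

A real process would refresh `task_info` periodically or accept pushes from the configuration center, rebuilding or resizing pools when the configuration changes.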
In actual implementation, the configuration center may also be used to change task concurrency rules and task states. When a sudden situation arises and a task's state must change, for example when a task needs to be stopped, its state is changed directly to "stopped" and pushed to the corresponding process, so the process can promptly control the execution state of tasks in its thread pool accordingly. Modifying the concurrency rules and task states on demand allows new requirements to take effect quickly.
In actual implementation, the tasks in the target task queue may be prioritized. Because the data in the task queue are already arranged in priority order, the step of sorting tasks by priority is eliminated compared with taking target tasks directly from the database, which can improve task scheduling efficiency.
In addition, in the present embodiment, high-priority tasks are guaranteed to execute first: when a new task with the highest priority is acquired, it is executed before other pending tasks, even ones that were submitted hours earlier.
And 102, acquiring a batch of target tasks from the target task queue.
Specifically, when acquiring a batch of target tasks from the target task queue, the number of tasks acquired each time can be configured by the configuration center, i.e. the task information configured by the configuration center can also include this number. Fetching tasks from the task queue is done by an assistant thread (the producer thread) in the thread pool, while the threads that actually execute each task are the core threads (the consumer threads). In practical implementation, the number of tasks acquired from the target task queue each time is computed as: number of processing threads of the target task queue × thread pool load factor − number of tasks waiting to be processed − number of tasks being processed. For example, suppose the current process handles target queue A with one thread pool whose number of core threads is 20 (i.e. target queue A has 20 processing threads), the thread pool load factor is 1.5, 5 tasks taken from target queue A are waiting to be processed, and 20 tasks are being processed in the thread pool; then the number of tasks to acquire is 20 × 1.5 − 5 − 20 = 5, i.e. the producer thread needs to fetch 5 tasks from target queue A. Because the thread pool's load factor is taken into account, the consumption rate of tasks in the target queue stays high while the number of pending tasks does not become excessive.
It should be noted that if the calculated number of tasks to acquire is less than 0, the producer thread does not fetch from the task queue, so the consumer threads can catch up on the pending tasks in time and tasks do not accumulate.
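The fetch-count formula and the clamp-at-zero rule above amount to a few lines of arithmetic; here is a sketch using the worked numbers from the text (the function name is an assumption).

```python
def tasks_to_fetch(threads, load_factor, waiting, processing):
    """Number of tasks the producer thread fetches from the target queue:
    threads * load_factor - waiting - processing, clamped at zero so the
    producer skips fetching when the consumers are already saturated."""
    return max(0, int(threads * load_factor - waiting - processing))

# Worked example from the text: 20 processing threads, load factor 1.5,
# 5 tasks waiting, 20 tasks in progress -> fetch 5 more tasks.
n = tasks_to_fetch(20, 1.5, 5, 20)
```

With a saturated pool, e.g. `tasks_to_fetch(10, 1.0, 8, 5)`, the raw result is negative and the clamp makes the producer fetch nothing.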
And 103, calling and executing the target task.
Specifically, when the target tasks are executed, each core thread in the thread pool executes one target task, and the target tasks execute concurrently. In actual implementation, the task queue stores a metadata description of each task; when a target task is called and executed, the core thread handling it obtains and executes the specific task according to the corresponding metadata description.
In one example, calling and executing the target task includes at least: if execution of a task fails, judging whether the task's retry count has reached a preset threshold; if it has not, the task is added back into the task queue, where the preset threshold differs according to the task's service type. In implementation, when a retried task is added back to the task queue, it is inserted at the position appropriate to its priority. Failed tasks are retried according to the retry-count rule so that they are executed whenever possible.
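A minimal sketch of that retry rule follows. The per-service thresholds, the task dictionary layout, and the plain-list requeue are all illustrative assumptions; a real implementation would insert by priority as described above.

```python
# Per-service-type retry thresholds (illustrative values).
RETRY_THRESHOLDS = {"transcode": 3, "report": 5}

def handle_failure(task, queue):
    """On execution failure, requeue the task until its service type's
    retry threshold is reached; return whether a retry was scheduled."""
    threshold = RETRY_THRESHOLDS.get(task["service_type"], 3)
    if task["retries"] < threshold:
        task["retries"] += 1
        queue.append(task)  # real code: insert at the task's priority position
        return True
    return False  # threshold reached: give up

queue = []
task = {"service_type": "transcode", "retries": 2, "priority": 1}
retried = handle_failure(task, queue)   # 2 < 3, so one more retry is queued
```

A second failure of the same task would then return `False`, since its retry count has reached the transcode threshold.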
In one example, the task type executed by each process may also be dynamically modified, and the modification of the task type executed by each process may be completed by modifying the configuration information of the configuration center.
Compared with the prior art, in this embodiment task information is obtained from a configuration center that performs task configuration for each process, where the task information includes at least a target task queue configured for the current process, and the target task queue stores asynchronous tasks to be executed; a batch of target tasks is acquired from the target task queue; and the target tasks are called and executed. The configuration center configures tasks for each process, and each process fetches target tasks directly, in batches, from its target task queue for execution. This avoids frequent access operations on the database, prevents a large number of simultaneous task requests from overloading the database and creating a traffic bottleneck, and thus improves task scheduling efficiency. Moreover, because every process obtains its task information uniformly from the configuration center, the number of task-processing processes can be greatly expanded, further improving task scheduling efficiency.
The second embodiment of the invention relates to a distributed asynchronous task scheduling method. In the second embodiment of the present invention, the configuration center is further configured to configure a task concurrency rule for each task queue, which is described in detail below.
In this embodiment, the task information further includes a task concurrency rule corresponding to the target task queue; the configuration center configures task concurrency rules for each task queue through a task configuration table and a task queue management table, the task configuration table is further used for recording task types configured for each process, and the task queue management table is further used for recording task queues respectively corresponding to each task type. In addition, in the present embodiment, the step of obtaining the batch of target tasks from the target task queue specifically includes: acquiring a batch of target tasks from the target task queue according to task concurrency rules corresponding to the target task queue; the task information obtained from the configuration center for performing task configuration on each process specifically includes: and acquiring task information from a task configuration table and a task queue management table in the configuration center.
A flowchart of a distributed asynchronous task scheduling method provided in this embodiment is shown in fig. 2, and includes the following steps:
step 201, acquiring task information from a task configuration table and a task queue management table in a configuration center.
Step 202, obtaining a batch of target tasks from the target task queue according to the task concurrency rule corresponding to the target task queue.
And step 203, calling and executing the target task.
In practical implementation, one process, i.e. one worker executor, is used to execute specific tasks. When an executor acquires tasks from the configuration center, it can directly obtain the corresponding task information according to its worker group name, i.e. it obtains the configuration information in the corresponding task configuration table and task queue management table by group name.
In one example, the task types configured by the configuration center for each worker group (the contents of the task configuration table) are shown in Table 1: the worker group performance_test is assigned the task types named "wtc transcoding fast queue" and "wtc transcoding slow queue", and the worker group hb_performance_test is assigned the same two task types. The task queues corresponding to each task type (the task queue management table) are shown in Table 2: in the performance_test group, the task type "wtc transcoding fast queue" corresponds to Queue 1 and Queue 2, and "wtc transcoding slow queue" corresponds to Queue 1 and Queue 2; in the hb_performance_test group, "wtc transcoding fast queue" corresponds to Queue 2 and "wtc transcoding slow queue" corresponds to Queue 3.
In practical implementation, the task queue management table can also be configured with task concurrency rules, including the number of concurrent threads, the thread pool load factor, and so on; the number of tasks fetched from each queue can also be configured, and the number of concurrent threads processing each task queue can be adjusted in real time according to the speed at which the thread pool processes tasks. Because the task concurrency rule includes the concurrent thread count for each queue, and that count is adjusted to match processing speed, on the one hand the consumption rate of tasks in the queue is kept from dropping too low, and on the other hand CPU resources are not wasted by an excessive number of concurrent threads. In one example, the specific configuration data of the task queue management table is shown in Table 3.
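One possible shape of that real-time adjustment is sketched below. The patent adjusts by the thread pool's processing speed; this sketch uses queue backlog and idle-thread count as proxies for that speed, so the trigger conditions, step size, and bounds are all assumptions.

```python
def adjust_threads(current, backlog, idle_threads, min_threads=1, max_threads=64):
    """Grow the pool while tasks are backing up (queue draining too
    slowly); shrink it when threads sit idle, to avoid wasting CPU."""
    if backlog > 0:
        return min(max_threads, current + 1)   # consumption rate too low
    if idle_threads > 0 and current > min_threads:
        return current - 1                      # spare capacity, release CPU
    return current

n_grow = adjust_threads(current=8, backlog=120, idle_threads=0)   # scale up
n_shrink = adjust_threads(current=8, backlog=0, idle_threads=3)   # scale down
```

A scheduler would call this periodically per queue and resize that queue's thread pool to the returned value, keeping the count within the configured bounds.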
It should be noted that some task concurrency rules can also be configured in the task configuration table, i.e. a unified concurrency rule is configured for all task queues corresponding to each task type. The corresponding task queue management table then no longer needs a rule for every queue; a rule is reconfigured in the task queue management table only when an individual queue's concurrency rule needs to be adjusted.
TABLE 1
Worker group name   | Task name
performance_test    | wtc transcoding fast queue
performance_test    | wtc transcoding slow queue
hb_performance_test | wtc transcoding fast queue
hb_performance_test | wtc transcoding slow queue
TABLE 2
Worker group name   | Task name                  | Queue name
performance_test    | wtc transcoding fast queue | Queue 1
performance_test    | wtc transcoding slow queue | Queue 1
performance_test    | wtc transcoding fast queue | Queue 2
performance_test    | wtc transcoding slow queue | Queue 2
hb_performance_test | wtc transcoding fast queue | Queue 2
hb_performance_test | wtc transcoding slow queue | Queue 3
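Tables 1 and 2 can be expressed directly as data, with a lookup that mirrors how an executor fetches its configuration by worker group name. The function name and return shape are illustrative assumptions; the table contents are taken from the tables above.

```python
# Table 1: (worker group, task name)
TASK_CONFIG = [
    ("performance_test", "wtc transcoding fast queue"),
    ("performance_test", "wtc transcoding slow queue"),
    ("hb_performance_test", "wtc transcoding fast queue"),
    ("hb_performance_test", "wtc transcoding slow queue"),
]

# Table 2: (worker group, task name, queue name)
QUEUE_MGMT = [
    ("performance_test", "wtc transcoding fast queue", "Queue 1"),
    ("performance_test", "wtc transcoding slow queue", "Queue 1"),
    ("performance_test", "wtc transcoding fast queue", "Queue 2"),
    ("performance_test", "wtc transcoding slow queue", "Queue 2"),
    ("hb_performance_test", "wtc transcoding fast queue", "Queue 2"),
    ("hb_performance_test", "wtc transcoding slow queue", "Queue 3"),
]

def task_info_for(group):
    """Return {task name: [queue names]} for one worker group, joining
    the task configuration table with the task queue management table."""
    tasks = [t for g, t in TASK_CONFIG if g == group]
    return {t: [q for g, t2, q in QUEUE_MGMT if g == group and t2 == t]
            for t in tasks}

info = task_info_for("hb_performance_test")
```

Keeping the two tables separate, as the text notes, is what makes the configuration flexible: queues can be remapped per task type without touching the group-to-type assignments.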
TABLE 3
(Table 3 appears as images in the original publication and is not reproduced here.)
In addition, in actual implementation, the number of task instructions issued by users differs between time periods, so the task volume each process must handle also differs over time. To better meet actual requirements, the task concurrency rules configured for each task queue further include a task peak time period; for task queues storing non-core tasks, the number of concurrent threads during the task peak period is lower than during non-peak periods. Reducing the concurrent thread count of non-core task queues during the peak period lets resources be devoted as much as possible to executing core tasks.
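Selecting a queue's thread count from such a rule can be sketched as below. The rule's field names, the hour-based peak window, and the concrete counts are illustrative assumptions, not the patent's configuration format.

```python
def concurrent_threads(rule, hour, is_core):
    """During the configured task peak time period, non-core queues get
    the lower peak-period thread count; core queues keep the full count."""
    start, end = rule["peak_hours"]          # e.g. peak between 19:00 and 23:00
    in_peak = start <= hour < end
    if in_peak and not is_core:
        return rule["peak_threads"]          # lower than the off-peak count
    return rule["threads"]

rule = {"peak_hours": (19, 23), "threads": 16, "peak_threads": 4}
n_peak = concurrent_threads(rule, hour=20, is_core=False)  # throttled
n_off = concurrent_threads(rule, hour=10, is_core=False)   # full count
```

A core-task queue evaluated during the same peak window would still receive the full thread count, which is the point of the rule: freed capacity goes to core tasks.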
Compared with the prior art, configuring the task configuration table and the task queue management table to record the task queues configured for each process and the concurrency rules of each queue facilitates unified task management across processes, and distributing tasks through the two tables makes configuration more flexible. In addition, because the task concurrency rule includes the concurrent thread count for each queue and that count is adjusted according to the thread pool's processing speed, the consumption rate of tasks in a queue is kept from dropping too low on the one hand, and CPU resources are not wasted by an excessive number of concurrent threads on the other.
The steps of the above methods are divided for clarity of description; in implementation they may be combined into a single step, or a step may be split into multiple steps, and all such divisions fall within the protection scope of this patent as long as the same logical relationship is preserved. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without altering the core design of the algorithm or process, also falls within the protection scope of the patent.
A third embodiment of the present invention relates to an electronic device, as shown in fig. 3, including at least one processor 301; and a memory 302 communicatively coupled to the at least one processor 301; the memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301, so that the at least one processor 301 can execute the distributed asynchronous task scheduling method.
The memory 302 and the processor 301 are connected by a bus. The bus may comprise any number of interconnected buses and bridges linking together the various circuits of the one or more processors 301 and the memory 302. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium through an antenna, which also receives data and forwards it to the processor 301.
The processor 301 is responsible for managing the bus and for general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 302 may be used to store data used by the processor 301 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the method described in the above embodiments.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that various changes in form and detail may be made in practice without departing from the spirit and scope of the invention.

Claims (10)

1. A distributed asynchronous task scheduling method is characterized by comprising the following steps:
obtaining task information from a configuration center used for performing task configuration for each process, wherein the task information at least comprises a target task queue configured for the current process, and the target task queue stores asynchronous tasks to be executed;
obtaining a batch of target tasks from the target task queue; and
invoking and executing the target tasks.
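The flow of claim 1 can be sketched as follows. This is an illustrative sketch only: the claim prescribes no data store, so the in-memory dictionaries standing in for the configuration center and the task queues, along with all names (`CONFIG_CENTER`, `fetch_batch`, etc.), are assumptions.

```python
from collections import deque

# Hypothetical stand-in for the configuration center: maps each process
# to its task information, including the target task queue configured for it.
CONFIG_CENTER = {"process-1": {"target_queue": "video-transcode"}}

# Task queues holding the asynchronous tasks to be executed.
TASK_QUEUES = {"video-transcode": deque(["task-a", "task-b", "task-c"])}

def fetch_batch(process_id, batch_size):
    """Obtain task information from the configuration center, then pull a
    batch of target tasks from the configured target task queue."""
    task_info = CONFIG_CENTER[process_id]
    queue = TASK_QUEUES[task_info["target_queue"]]
    batch = []
    while queue and len(batch) < batch_size:
        batch.append(queue.popleft())
    return batch

print(fetch_batch("process-1", 2))  # ['task-a', 'task-b']
```

In a distributed deployment the dictionaries would be replaced by a shared store, so that many processes can be pointed at different queues purely by changing the configuration center.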
2. The distributed asynchronous task scheduling method of claim 1,
the configuration center is further configured to configure a task concurrency rule for each task queue, and the task information further includes a task concurrency rule corresponding to the target task queue;
the step of obtaining the batch of target tasks from the target task queue specifically includes:
obtaining a batch of target tasks from the target task queue according to the task concurrency rule corresponding to the target task queue.
3. The distributed asynchronous task scheduling method of claim 2,
the configuration center configures the task concurrency rules for the task queues through a task configuration table and a task queue management table; the task configuration table is further used for recording the task types configured for each process, and the task queue management table is further used for recording the task queue corresponding to each task type;
the obtaining of task information from the configuration center used for performing task configuration for each process specifically includes:
obtaining the task information from the task configuration table and the task queue management table in the configuration center.
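One plausible shape for the two tables in claim 3 is sketched below; the field names and values are assumptions, since the claim only names the tables and what each records.

```python
# Task configuration table: which task types are configured for each process.
TASK_CONFIG_TABLE = {
    "process-1": ["transcode", "notify"],
}

# Task queue management table: the queue corresponding to each task type,
# plus its task concurrency rule (here just a thread count).
TASK_QUEUE_MANAGEMENT_TABLE = {
    "transcode": {"queue": "q-transcode", "threads": 4},
    "notify":    {"queue": "q-notify",    "threads": 2},
}

def task_info_for(process_id):
    """Assemble a process's task information by joining both tables."""
    types = TASK_CONFIG_TABLE[process_id]
    return {t: TASK_QUEUE_MANAGEMENT_TABLE[t] for t in types}
```

Splitting configuration this way lets operators repoint a task type at a different queue, or change its concurrency rule, without touching any per-process configuration.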
4. The distributed asynchronous task scheduling method of claim 3, wherein the task concurrency rules configured for the task queues at least include the number of concurrent threads for processing the tasks in each task queue; and the number of concurrent threads for processing the tasks in each task queue is adjusted in real time according to the speed at which the thread pool processes tasks.
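A minimal sketch of the real-time adjustment in claim 4 might look as follows; the target rate, step size, and clamping bounds are all illustrative assumptions, as the claim specifies only that the thread count follows processing speed.

```python
def adjust_thread_count(current, tasks_done_last_interval, target_rate,
                        min_threads=1, max_threads=32):
    """Adjust a queue's concurrent thread count according to the speed
    at which the thread pool processed tasks in the last interval."""
    if tasks_done_last_interval < target_rate:
        current += 1   # pool is falling behind: add a thread
    elif tasks_done_last_interval > target_rate * 2:
        current -= 1   # pool is far ahead: release a thread
    return max(min_threads, min(current, max_threads))
```

A scheduler would call this once per measurement interval for each queue, so that a slow queue gradually gains threads while an idle one gives them back.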
5. The distributed asynchronous task scheduling method of claim 3, wherein the task concurrency rules configured for the task queues further include peak periods, and wherein, for a task queue storing non-core tasks, the number of concurrent threads during peak periods is lower than the number of concurrent threads during off-peak periods.
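The peak/off-peak rule of claim 5 reduces to a small selection function; the hours and thread counts below are assumptions for illustration, and core tasks are assumed to keep full concurrency at all times.

```python
def threads_for(now_hour, is_core_task, peak_hours=range(9, 18),
                peak_threads=2, offpeak_threads=8):
    """Pick the concurrent thread count for a queue: non-core queues get
    fewer threads during peak periods than during off-peak periods."""
    if is_core_task or now_hour not in peak_hours:
        return offpeak_threads
    return peak_threads
```

This is what lets non-core work (log syncing, report generation, and the like) yield capacity to core tasks exactly when the system is busiest.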
6. The distributed asynchronous task scheduling method according to claim 1, wherein the task information further includes the number of tasks to be acquired from the target task queue each time, and this number is obtained according to: the number of processing threads of the target task queue, the load coefficient of the thread pool, the number of tasks waiting to be processed, and the number of tasks being processed.
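One plausible reading of claim 6 is that the fetch size is the queue's thread capacity (threads times load coefficient) minus the work already waiting or in flight. The exact formula is an assumption; the claim only names these four inputs.

```python
def batch_size(num_threads, load_coefficient, waiting, processing):
    """Number of tasks to acquire from the target task queue this round:
    remaining capacity after accounting for queued and in-flight tasks,
    floored at zero so a saturated pool fetches nothing."""
    return max(0, num_threads * load_coefficient - waiting - processing)
```

Deriving the fetch size this way keeps a busy pool from pulling tasks it cannot start, which would otherwise sit invisible to other worker processes.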
7. The distributed asynchronous task scheduling method of claim 1, wherein the configuration center is further used for changing the task concurrency rules and task states.
8. The distributed asynchronous task scheduling method of claim 1, wherein said invoking and executing the target task at least comprises:
if execution of a task fails, determining whether the number of retries of the task has reached a preset threshold; if not, re-adding the task to the task queue, wherein the preset threshold differs according to the service type of the task.
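The retry rule of claim 8 can be sketched as below; the service-type names and threshold values are assumptions, since the claim only says the threshold differs by task service type.

```python
# Per-service-type retry thresholds (illustrative values).
RETRY_THRESHOLDS = {"core": 5, "non-core": 2}

def handle_failure(task, queue):
    """On execution failure, re-add the task to the queue only while its
    retry count is below the threshold for its service type."""
    threshold = RETRY_THRESHOLDS[task["type"]]
    if task["retries"] < threshold:
        task["retries"] += 1
        queue.append(task)   # re-enqueue the retry task
        return True
    return False             # threshold reached: give up on this task
```

Giving core tasks a higher threshold means transient failures (a timeout, a brief outage) are retried more persistently where the business impact of losing the task is greatest.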
9. The distributed asynchronous task scheduling method of claim 1, wherein the tasks in the target task queue are arranged in order of priority.
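A heap is one natural way to keep the target task queue arranged by priority as claim 9 requires; the task names and the lower-number-is-higher-priority convention below are assumptions for illustration.

```python
import heapq

# Priority queue of tasks: lower number = higher priority; the heap
# maintains the ordering as tasks are pushed in any order.
queue = []
heapq.heappush(queue, (2, "sync-logs"))
heapq.heappush(queue, (1, "charge-user"))
heapq.heappush(queue, (3, "send-email"))

ordered = [heapq.heappop(queue)[1] for _ in range(3)]
print(ordered)  # ['charge-user', 'sync-logs', 'send-email']
```

Because workers always pop from the front, high-priority tasks are fetched and executed first regardless of arrival order.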
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the distributed asynchronous task scheduling method of any of claims 1 to 9.
CN201910903256.6A 2019-09-24 2019-09-24 Distributed asynchronous task scheduling method and electronic equipment Pending CN110851245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910903256.6A CN110851245A (en) 2019-09-24 2019-09-24 Distributed asynchronous task scheduling method and electronic equipment


Publications (1)

Publication Number Publication Date
CN110851245A true CN110851245A (en) 2020-02-28

Family

ID=69596030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910903256.6A Pending CN110851245A (en) 2019-09-24 2019-09-24 Distributed asynchronous task scheduling method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110851245A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102591721A (en) * 2011-12-30 2012-07-18 北京新媒传信科技有限公司 Method and system for distributing thread execution task
CN102981904A (en) * 2011-09-02 2013-03-20 阿里巴巴集团控股有限公司 Task scheduling method and system
CN106557363A (en) * 2016-12-05 2017-04-05 广发证券股份有限公司 A kind of system and method for big data task scheduling
CN106802826A (en) * 2016-12-23 2017-06-06 中国银联股份有限公司 A kind of method for processing business and device based on thread pool


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488203A (en) * 2020-04-13 2020-08-04 深圳市友杰智新科技有限公司 Processing method and device for sound recording identification task, computer equipment and storage medium
CN111488203B (en) * 2020-04-13 2023-02-28 深圳市友杰智新科技有限公司 Processing method and device for recording recognition task, computer equipment and storage medium
CN111782360A (en) * 2020-06-28 2020-10-16 中国工商银行股份有限公司 Distributed task scheduling method and device
CN111782360B (en) * 2020-06-28 2023-08-11 中国工商银行股份有限公司 Distributed task scheduling method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200228