CN115292025A - Task scheduling method and device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN115292025A
Authority
CN
China
Prior art keywords
processed
task
tasks
preset queue
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211229141.1A
Other languages
Chinese (zh)
Inventor
李洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digital China Rongxin Cloud Technology Service Co ltd
Original Assignee
Digital China Rongxin Cloud Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Digital China Rongxin Cloud Technology Service Co ltd filed Critical Digital China Rongxin Cloud Technology Service Co ltd
Priority to CN202211229141.1A
Publication of CN115292025A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533: Hypervisors; Virtual machine monitors
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038: Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a task scheduling method, a task scheduling apparatus, a computer device, and a computer-readable storage medium. The method includes: obtaining to-be-processed tasks of a plurality of target objects; storing the to-be-processed tasks of the plurality of target objects in the same preset queue; and concurrently processing the to-be-processed tasks in the preset queue. Because the to-be-processed tasks of multiple target objects are stored in the same preset queue and processed concurrently, one service can process the tasks of multiple target objects simultaneously, which shortens the processing time of the to-be-processed tasks and improves processing efficiency.

Description

Task scheduling method and device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer application technologies, and in particular, to a task scheduling method, a task scheduling apparatus, a computer device, and a computer-readable storage medium.
Background
In the prior art, a separate service is deployed for each user to execute that user's batch-processing tasks. However, as the business expands and the number of users keeps growing, having each service execute the to-be-processed tasks of only a single user greatly prolongs the time needed to process all users' tasks and reduces processing efficiency.
Disclosure of Invention
In view of this, the present application provides a task scheduling method, a task scheduling apparatus, a computer device, and a computer-readable storage medium, so as to enable concurrent processing of to-be-processed tasks of multiple target objects in one service.
The task scheduling method comprises the steps of obtaining tasks to be processed of a plurality of target objects; storing the tasks to be processed of a plurality of target objects into the same preset queue; and carrying out concurrent processing on the tasks to be processed in the preset queue.
The task scheduling device comprises an acquisition module, a queue module and a processing module. The acquisition module is used for acquiring tasks to be processed of a plurality of target objects; the queue module is used for storing the tasks to be processed of the target objects into the same preset queue; the processing module is used for carrying out concurrent processing on the tasks to be processed in the preset queue.
The computer device of the embodiment of the present application comprises one or more processors, wherein the one or more processors are configured to: acquire to-be-processed tasks of a plurality of target objects; store the to-be-processed tasks of the plurality of target objects in the same preset queue; and concurrently process the to-be-processed tasks in the preset queue.
The computer-readable storage medium of embodiments of the present application contains a computer program that, when executed by one or more processors, causes the processors to perform a task scheduling method of: acquiring tasks to be processed of a plurality of target objects; storing the tasks to be processed of a plurality of target objects into the same preset queue; and carrying out concurrent processing on the tasks to be processed in the preset queue.
According to the task scheduling method, the task scheduling apparatus, the computer device, and the computer-readable storage medium, the to-be-processed tasks of multiple target objects are stored in the same preset queue and processed concurrently, so that one service can process the tasks of multiple target objects simultaneously; this shortens the processing time of the to-be-processed tasks and improves processing efficiency.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of a task scheduling method in accordance with certain embodiments of the present application;
FIG. 2 is a schematic flow chart diagram of a task scheduling method in accordance with certain embodiments of the present application;
FIG. 3 is a schematic flow chart diagram of a task scheduling method in accordance with certain embodiments of the present application;
FIG. 4 is a schematic flow chart diagram of a task scheduling method in accordance with certain embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of a task scheduling method in accordance with certain embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of a task scheduling method in accordance with certain embodiments of the present application;
FIG. 7 is a schematic flow chart diagram of a task scheduling method in accordance with certain embodiments of the present application;
FIG. 8 is a block diagram of a task scheduler of some embodiments of the present application;
FIG. 9 is a schematic plan view of a computer device according to some embodiments of the present application; and
FIG. 10 is a schematic diagram of the interaction of a computer-readable storage medium and a processor of certain embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application, and are not construed as limiting the embodiments of the present application.
referring to fig. 1, a task scheduling method according to an embodiment of the present disclosure includes:
step 01: and acquiring the tasks to be processed of a plurality of target objects.
Specifically, when each target object submits a business request, it also submits the data needed to complete that business. After receiving the business, the service generates a plurality of to-be-processed tasks from each target object's business, and the data submitted by each target object is stored in the same database. The service then executes the business submitted by each target object according to that object's to-be-processed tasks and the corresponding database. For example, if a target object needs to generate annual financial statements, submitting that business may generate several to-be-processed tasks, such as generating a balance sheet, an income statement, and a cash flow statement; the data submitted with the business populates the corresponding database.
Meanwhile, if each service only acquires and processes the tasks and data of a single target object at a time, then as the number of target objects keeps growing, the number of services must grow with it, which greatly increases the occupation of server resources and hurts processing efficiency; such an approach is suitable only when the number of target objects is small. Therefore, to reduce the occupation of server resources and improve processing efficiency, the task scheduling method of the embodiment of the present application enables one service to acquire the to-be-processed tasks of multiple target objects at a time, so that even as the number of target objects keeps growing, one service can process the to-be-processed tasks of multiple target objects at once.
Step 02: and storing the tasks to be processed of the plurality of target objects into the same preset queue.
Specifically, one service corresponds to one preset queue; when the service processes a to-be-processed task, it takes the task out of the preset queue and then calls the database corresponding to that task. Therefore, to let one service process the to-be-processed tasks of a plurality of target objects, after the service acquires those tasks, all the to-be-processed tasks of the plurality of target objects are stored in the same preset queue.
Step 03: carrying out concurrent processing on the tasks to be processed in the preset queue.
Specifically, once the to-be-processed tasks of the multiple target objects are stored in the same preset queue and the service starts processing, the tasks in the preset queue are processed concurrently: the service works on several to-be-processed tasks at the same time, calling the database corresponding to each task it is processing. Whenever the service finishes one task, it takes another task out of the preset queue, ensuring that the service is always processing multiple to-be-processed tasks until all of them are completed.
For example, suppose the service acquires to-be-processed tasks D11 to D18 of target object M1 and to-be-processed tasks D21 to D27 of target object M2, puts the tasks of both target objects into the preset queue, and can process two tasks at a time. After processing starts, the service takes two to-be-processed tasks from the preset queue in order, such as task D11 and task D22, and calls the databases corresponding to D11 and D22 respectively to complete them. Whenever a task finishes, the service takes another to-be-processed task out of the preset queue; for example, after task D11 is completed, the service takes out another task such as D12. This ensures that the service is always processing two to-be-processed tasks simultaneously until all tasks are completed.
According to the task scheduling method, the tasks to be processed of the target objects are stored in the same preset queue, and the tasks to be processed in the preset queue are processed concurrently, so that one service can process the tasks to be processed of the target objects simultaneously, the processing time of the tasks to be processed is shortened, and the processing efficiency of the tasks to be processed is improved.
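The flow above can be sketched in Java. This is an illustrative sketch, not the patent's implementation: the class and task names are invented, and "executing" a task is reduced to counting it. Tasks of several target objects share one `ConcurrentLinkedQueue`, and a fixed number of worker threads drain it concurrently.

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedQueueScheduler {
    // One shared, non-blocking queue holds the tasks of every target object.
    private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

    public void submitAll(List<String> tasks) {
        queue.addAll(tasks);
    }

    // Drain the queue with `concurrency` worker threads and return the
    // number of tasks that were taken out and "executed".
    public int processConcurrently(int concurrency) {
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(concurrency);
        for (int i = 0; i < concurrency; i++) {
            new Thread(() -> {
                // poll() is lock-free; a null result means the queue is empty.
                while (queue.poll() != null) {
                    processed.incrementAndGet(); // stands in for executing the task
                }
                done.countDown();
            }).start();
        }
        try {
            done.await(); // wait until every worker has seen an empty queue
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }

    public static void main(String[] args) {
        SharedQueueScheduler s = new SharedQueueScheduler();
        // Tasks of target objects M1 and M2 go into the same queue.
        s.submitAll(List.of("M1-D11", "M1-D12", "M2-D21", "M2-D22"));
        System.out.println(s.processConcurrently(2)); // prints 4
    }
}
```

Because the queue is shared, whichever worker becomes free next simply polls the next task, regardless of which target object submitted it.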
Referring to fig. 2, optionally, step 01: the method for acquiring the tasks to be processed of the plurality of target objects comprises the following steps:
Step 011: acquiring a preset task table of each target object;
step 012: and acquiring the tasks to be processed and the dependency relationship between the tasks to be processed from the task table.
Specifically, a task table is preset when each target object submits its business, and the task table specifies the to-be-processed tasks and the dependency relationships between them.
It can be understood that, to achieve certain processing goals, each target object must define the dependency relationships between to-be-processed tasks when setting up its preset task table, that is, specify the execution order of certain tasks according to the logic of the task flow.
For example, when a bank performs day-end processing, a plurality of to-be-processed tasks are generated, such as reconciliation, checking head-office and branch account balances, checking account balances against transaction amounts, and generating daily statements. Obviously, the daily statement can be generated only after all data processing is complete; otherwise its accuracy cannot be guaranteed. Therefore, the task table for the bank's day-end processing specifies that the task generating the daily statement is processed only after all data-processing tasks are completed, and the service processes the day-end tasks in the order specified in that task table.
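A preset task table with dependencies can be represented as below. This is a minimal sketch with hypothetical task names, mirroring the day-end example: the daily statement is ready to run only after its prerequisite tasks are done.

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TaskTable {
    // Each to-be-processed task maps to the tasks it depends on.
    private final Map<String, List<String>> dependsOn = new LinkedHashMap<>();

    public void add(String task, String... prerequisites) {
        dependsOn.put(task, List.of(prerequisites));
    }

    // A task may be executed only when every task it depends on is done.
    public boolean ready(String task, Set<String> done) {
        return done.containsAll(dependsOn.getOrDefault(task, List.of()));
    }

    public static void main(String[] args) {
        TaskTable table = new TaskTable();
        table.add("reconcile");
        table.add("checkBalances");
        // The daily statement depends on all data-processing tasks.
        table.add("dailyStatement", "reconcile", "checkBalances");

        Set<String> done = new HashSet<>();
        done.add("reconcile");
        System.out.println(table.ready("dailyStatement", done)); // false
        done.add("checkBalances");
        System.out.println(table.ready("dailyStatement", done)); // true
    }
}
```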
Referring to fig. 1, optionally, the preset queue in the task scheduling method according to the embodiment of the present application is a non-blocking queue.
Specifically, the service maintains a non-blocking concurrent queue container (ConcurrentLinkedQueue) to ensure that, when the to-be-processed tasks in the preset queue are processed concurrently, processing is not blocked because several tasks are taken out at once. The enqueue and dequeue operations of this container rely on compare-and-swap (CAS) operations rather than locks: an update succeeds only if the value is still the one previously read, and otherwise it is simply retried. The container therefore allows multiple to-be-processed tasks to be handled concurrently without any thread blocking on a lock, which yields better concurrency and lets the to-be-processed tasks be processed more stably and efficiently.
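The CAS idea that the queue's enqueue and dequeue operations rely on can be shown in isolation. This is an illustrative sketch using `AtomicInteger`, not code from the patent: a thread reads the current value, computes the next one, and commits only if the value is unchanged, retrying otherwise, so no thread ever blocks.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int get() {
        return value.get();
    }

    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            // compareAndSet succeeds only if `value` still equals `current`,
            // i.e. no other thread changed it in between; otherwise retry.
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counter.increment();
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        // No increment is ever lost, yet no thread ever blocked on a lock.
        System.out.println(counter.get()); // prints 4000
    }
}
```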
Referring to fig. 3, optionally, the task scheduling method of the present application further includes:
step 04: acquiring the current resource amount available for task processing to determine the concurrency amount;
step 05: establishing threads with the same quantity as the concurrency quantity according to the concurrency quantity;
step 03: the method for concurrently processing the to-be-processed tasks in the preset queue may include:
step 031: and performing concurrent processing on the tasks to be processed in the preset queue through the threads with the same quantity as the concurrent quantity.
Specifically, the service's concurrency amount, that is, the number of to-be-processed tasks it can process at one time, may be determined from the amount of resources currently available for task processing, such as server memory and CPU capacity. Threads equal in number to the concurrency amount are then established, and the to-be-processed tasks in the preset queue are processed concurrently through those threads. Because the thread count matches the currently available resources, the tasks in the preset queue are processed concurrently in a stable and efficient manner.
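One possible sizing rule is sketched below. The patent does not give a formula; the per-task memory budget is an assumption introduced purely for illustration. The worker count is capped by both the CPU cores and the currently free memory.

```java
public class ConcurrencySizer {
    // Assumed per-task memory budget; illustrative only.
    static final long BYTES_PER_TASK = 64L * 1024 * 1024;

    // Concurrency is capped by CPU cores and by how many tasks fit in
    // the currently free memory, and is never less than one.
    public static int concurrency(int cpuCores, long freeMemoryBytes) {
        int byMemory = (int) Math.max(1, freeMemoryBytes / BYTES_PER_TASK);
        return Math.max(1, Math.min(cpuCores, byMemory));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        long free = Runtime.getRuntime().freeMemory();
        int n = concurrency(cores, free);
        System.out.println("workers: " + n);
        // The service would then create exactly n threads, e.g.:
        // ExecutorService pool = Executors.newFixedThreadPool(n);
    }
}
```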
Referring to fig. 4, optionally, the task scheduling method of the present application further includes:
and step 06: judging whether a service for processing the task is started and whether a preset queue is empty;
step 07: under the condition that the service is in a running state and the preset queue is not empty, the step of concurrently processing the tasks to be processed in the preset queue is carried out;
and step 08: and under the condition that the service is in a non-running state and the preset queue is empty, stopping the service.
Specifically, before taking a to-be-processed task out of the preset queue, the service first checks whether it is started and whether the preset queue is empty; that is, the service's running state and the queue's condition are checked before any task is processed, so that the next operation can be chosen accordingly and any problem in the service's operation is detected in real time.
When the service is judged to be in the running state and the preset queue is not empty, the service begins concurrently processing the to-be-processed tasks in the preset queue, calling the database corresponding to each task being processed and handling the data in that database according to the task. When the service is judged to be in a non-running state and the preset queue is empty, that is, all the to-be-processed tasks in the queue have been processed, the service is stopped.
In addition, if the service is in the running state while the preset queue is empty, or in a non-running state while the preset queue is not empty, the operation of the service has a problem; the service then raises an alarm, and the cause of the problem is investigated. For example, when the preset queue is not empty but the service is in a non-running state, the service cannot continue processing the to-be-processed tasks in the queue, so it raises an alarm and the reason for the non-running state is sought.
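The decision of steps 06 to 08 can be captured directly. This is a sketch; the enum and method names are invented. The two consistent state pairs lead to processing or stopping, and either inconsistent pair raises an alarm.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ServiceStateCheck {
    public enum Action { PROCESS, STOP, ALARM }

    public static Action decide(boolean running, Queue<?> queue) {
        if (running && !queue.isEmpty()) {
            return Action.PROCESS; // keep draining the queue
        }
        if (!running && queue.isEmpty()) {
            return Action.STOP; // all work done, shut the service down
        }
        return Action.ALARM; // inconsistent state: investigate the cause
    }

    public static void main(String[] args) {
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        queue.add("task");
        System.out.println(decide(true, queue));  // PROCESS
        queue.clear();
        System.out.println(decide(false, queue)); // STOP
        queue.add("task");
        System.out.println(decide(false, queue)); // ALARM
    }
}
```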
Referring to fig. 5, optionally, step 03: the method for concurrently processing the tasks to be processed in the preset queue further comprises the following steps:
step 032: taking out the current task to be processed from a preset queue;
step 033: judging whether the current task to be processed is the last task to be processed in a preset queue or not;
step 034: under the condition that the current task to be processed is the last task to be processed in the preset queue, judging whether all the tasks to be processed in the preset queue are processed completely;
step 035: under the condition that all the tasks to be processed in the preset queue are processed, executing the current task to be processed, and stopping service after the current task to be processed is processed;
step 036: under the condition that the tasks to be processed in the preset queue are not processed and completed, storing the current tasks to be processed into the preset queue again;
step 037: under the condition that the current task to be processed is not the last task to be processed in the preset queue, judging whether the first task to be processed in the preset queue is processed;
step 038: executing the current task to be processed under the condition that the first task to be processed in the preset queue is processed;
step 039: and under the condition that the first task to be processed in the preset queue is not processed and completed, storing the current task to be processed into the preset queue again.
Specifically, after the service is determined to be in the running state and the preset queue is not empty, the service will take out the current task to be processed from the preset queue, and determine whether the current task to be processed can be processed.
Because the service must be stopped after the last to-be-processed task is completed, the service needs to judge whether the current task is the last to-be-processed task in the preset queue. If it is, the service further judges whether all the other to-be-processed tasks in the preset queue have been completed: if they have, the current task is executed and the service is stopped after it finishes; if not, the current task is stored back into the preset queue and the service takes out another unfinished task. For example, when generating financial statements, an enterprise may make merging all the statements the last to-be-processed task. The statements produced by all tasks in the preset queue can be merged only after the service has finished every other task and completed the preset processing of the enterprise's financial data; as long as some to-be-processed task is unfinished, that is, some statement has not yet been generated, the merging task is put back into the queue. As another example, the last to-be-processed task may be a preset ending task, meaning that once the ending task is completed, all the to-be-processed tasks of the preset queue are complete.
Because the service, when acquiring the to-be-processed tasks of the plurality of target objects, also obtains each target object's preset task table and the dependency relationships between the to-be-processed tasks, it must follow those dependencies when processing each target object's tasks; in particular, the other to-be-processed tasks in the preset queue may be executed only after the first to-be-processed task has been completed. Therefore, if the service determines that the current task is not the last one in the preset queue, it needs to judge whether the first to-be-processed task in the queue has been completed. If the current task is not the last one and the first task has been completed, the current task can be executed. If the first task has not yet been completed, the current task is stored back into the preset queue, and the service takes out another to-be-processed task. In particular, when the service checks whether the first to-be-processed task has been completed and finds that the current task itself is the first task in the preset queue, it can execute the current task directly.
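Steps 032 to 039 amount to the following requeue rule, shown as a simplified sketch with invented names: the closing task waits for every other task, any non-first task waits for the first task, and a task that cannot run yet is put back at the tail of the queue. Executing a task is represented by adding it to the done set.

```java
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentLinkedQueue;

public class RequeueLogic {
    // Try to execute `task`; if it may not run yet, put it back into the
    // queue and return false.
    public static boolean tryRun(String task, String firstTask, String lastTask,
                                 Set<String> done, Set<String> allTasks,
                                 Queue<String> queue) {
        if (task.equals(lastTask)) {
            // The closing task runs only when every other task is finished.
            Set<String> others = new HashSet<>(allTasks);
            others.remove(lastTask);
            if (!done.containsAll(others)) {
                queue.add(task);
                return false;
            }
        } else if (!task.equals(firstTask) && !done.contains(firstTask)) {
            // Any non-first task waits until the first task has finished.
            queue.add(task);
            return false;
        }
        done.add(task); // stands in for executing the task
        return true;
    }

    public static void main(String[] args) {
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        Set<String> done = new HashSet<>();
        Set<String> all = Set.of("first", "mid", "last");
        System.out.println(tryRun("mid", "first", "last", done, all, queue));   // false: first unfinished
        System.out.println(tryRun("first", "first", "last", done, all, queue)); // true
        System.out.println(tryRun("last", "first", "last", done, all, queue));  // false: "mid" unfinished
        System.out.println(tryRun(queue.poll(), "first", "last", done, all, queue)); // true: runs "mid"
        System.out.println(tryRun("last", "first", "last", done, all, queue));  // true
    }
}
```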
Referring to fig. 6, optionally, step 03: the method for concurrently processing the tasks to be processed in the preset queue further comprises the following steps:
step 040: judging whether the tasks to be processed which are preposed by the current task to be processed are all completed according to a preset dependency relationship;
step 041: executing the current task to be processed under the condition that the tasks to be processed which are preposed in the current task to be processed are all completed;
step 042: and under the condition that the tasks to be processed which are preposed in the current task to be processed have uncompleted tasks to be processed, storing the current task to be processed to the preset queue again.
Specifically, because the service, when acquiring the to-be-processed tasks of the plurality of target objects, also obtains each target object's preset task table and the relationships between the tasks, it must decide according to those dependency relationships whether the current task can be processed. It can be understood that the service must also judge whether every to-be-processed task that precedes the current task has been completed; only then can the current task be executed. Therefore, when the service determines that the current task is not the last one in the preset queue and the first task has been completed, it further judges, according to the preset dependency relationships, whether all the tasks preceding the current task are complete: if they are, the current task is executed; if any preceding task is unfinished, the current task is stored back into the preset queue.
Referring to fig. 7, optionally, step 03: the method for concurrently processing the tasks to be processed in the preset queue further comprises the following steps:
step 043: judging whether the current task to be processed enters a pause state or not;
step 044: under the condition that the current task to be processed enters a pause state, storing the current task to be processed to the preset queue again;
step 045: and executing the current task to be processed under the condition that the current task to be processed does not enter the suspended state.
Specifically, while the service is processing the to-be-processed tasks in the preset queue, the service itself may place a certain task in a suspended state for some purpose, or a worker may discover a problem with a task during operation and suspend it. For example, a worker who suspects that the database of the target object corresponding to a certain task is wrong, and needs to verify it, may manually set that task to the suspended state.
Therefore, when the service determines that the current task to be processed is not the last task to be processed in the preset queue and the first task to be processed in the preset queue has been completed, the service further determines whether the current task to be processed has entered the suspended state. If the current task to be processed has entered the suspended state, it is stored into the preset queue again; if it has not entered the suspended state, it is executed.
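The suspend check in steps 043-045 can be sketched as follows. This is an illustrative fragment; the `paused` set stands in for however the service records manually suspended tasks, which the patent does not specify.

```python
from collections import deque

def step_with_pause(queue, paused, executed):
    """Take out the current to-be-processed task; if it has been placed in
    the suspended state (e.g. manually, by a worker), store it into the
    preset queue again instead of executing it."""
    task = queue.popleft()
    if task in paused:
        queue.append(task)        # re-store the suspended task
        return task, "requeued"
    executed.append(task)         # stands in for actually executing the task
    return task, "executed"
```

A suspended task thus keeps cycling through the queue without blocking the others, and runs normally once the suspension is lifted.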
In order to better implement the task scheduling method according to the embodiment of the present application, the embodiment of the present application further provides a task scheduling device 10. Referring to fig. 8, the task scheduling device 10 may include:
the acquiring module 11 is configured to acquire to-be-processed tasks of a plurality of target objects;
the queue module 12 is configured to store to-be-processed tasks of multiple target objects in the same preset queue;
and the processing module 13 is configured to perform concurrent processing on the tasks to be processed in the preset queue.
The obtaining module 11 is specifically configured to obtain a task table preset by each target object, and obtain the to-be-processed task and the dependency relationship between the to-be-processed tasks from the task table.
The obtaining module 11 is specifically configured to obtain a current amount of resources available for task processing, so as to determine a concurrency amount.
The task scheduling device 10 of the present application may further include:
and the building module 14 is used for building the threads with the same number as the concurrency quantity according to the concurrency quantity.
The processing module 13 is specifically configured to concurrently process the to-be-processed tasks in the preset queue through the threads with the same number as the concurrent amount.
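The cooperation of the obtaining module 11 (determining the concurrency amount from the available resources), the building module 14 (creating that many threads) and the processing module 13 (draining the shared queue) might look like the following sketch. Using the CPU count as the "currently available resource amount" is an assumption for illustration; the patent does not specify the resource probe.

```python
import os
import queue
import threading

def determine_concurrency(pending):
    """Stand-in for probing the currently available resources: cap the
    concurrency amount by the CPU count and the number of pending tasks."""
    return max(1, min(os.cpu_count() or 1, pending))

def process_concurrently(tasks, worker):
    """Create as many threads as the concurrency amount and let them
    concurrently take tasks from the shared preset queue."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    def run():
        while True:
            try:
                t = q.get_nowait()   # non-blocking take from the shared queue
            except queue.Empty:
                return               # queue drained, thread exits
            worker(t)
    threads = [threading.Thread(target=run)
               for _ in range(determine_concurrency(len(tasks)))]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
```

Because every thread pulls from the same queue, the number of worker threads, not the number of target objects, bounds the resource usage.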
The task scheduling device 10 of the present application may further include:
and the judging module 15 is configured to judge whether a service for performing task processing is started and whether a preset queue is empty.
The processing module 13 is specifically configured to enter a step of concurrently processing the to-be-processed tasks in the preset queue when the service is in the running state and the preset queue is not empty, or stop the service when the service is in the non-running state and the preset queue is empty.
The task scheduling device 10 of the present application may further include:
and the extracting module 16 is configured to take out the current task to be processed from the preset queue.
The determining module 15 is specifically configured to determine whether the current task to be processed is the last task to be processed in the preset queue.
The determining module 15 is specifically configured to determine whether all the tasks to be processed in the preset queue are processed completely when the current task to be processed is the last task to be processed in the preset queue.
The processing module 13 is specifically configured to execute the current task to be processed when all the tasks to be processed in the preset queue are processed, and to stop the service after the current task to be processed is processed; and to store the current task to be processed into the preset queue again when there is an unprocessed task to be processed in the preset queue.
The determining module 15 is specifically configured to determine whether the first to-be-processed task in the preset queue is completed when the current to-be-processed task is not the last to-be-processed task in the preset queue.
The processing module 13 is specifically configured to execute the current task to be processed when the processing of the first task to be processed in the preset queue is completed; and under the condition that the first task to be processed in the preset queue is not processed and completed, storing the current task to be processed to the preset queue again.
The judging module 15 is specifically configured to judge whether all the tasks to be processed ahead of the current task to be processed are completed according to the preset dependency relationship.
The processing module 13 is specifically configured to execute the current task to be processed when all the tasks to be processed preceding the current task to be processed are completed; and to store the current task to be processed into the preset queue again when any of the tasks to be processed preceding the current task to be processed is not completed.
The determining module 15 is specifically configured to determine whether the current task to be processed enters a suspended state.
The processing module 13 is specifically configured to store the current task to be processed into the preset queue again when the current task to be processed enters the suspended state; and executing the current task to be processed under the condition that the current task to be processed does not enter the suspended state.
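Putting the extracting module 16, the judging module 15 and the processing module 13 together, one pass over the preset queue might look like the following sketch. The names are illustrative, and the last-task and first-task checks are simplified into a queue-emptiness test and a completion count; this is a reading of the description, not the patent's code.

```python
from collections import deque

def scheduler_step(queue, done, paused, deps, total):
    """One pass of the processing module: take out the current task, apply
    the last-task, pause and dependency checks in order, then execute the
    task or store it into the preset queue again."""
    task = queue.popleft()
    if not queue:                      # current task is the last one left
        if len(done) == total - 1:     # all other tasks are processed
            done.add(task)             # execute it, then the service stops
            return "stop"
        queue.append(task)             # others unfinished: re-store it
        return "requeued"
    if task in paused or not all(p in done for p in deps.get(task, ())):
        queue.append(task)             # suspended or blocked by a predecessor
        return "requeued"
    done.add(task)                     # stands in for executing the task
    return "executed"
```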
The various modules in the task scheduling device 10 described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, the processor 20 of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor 20 can call and execute the operations corresponding to the modules.
Referring to fig. 9, a computer device 100 according to an embodiment of the present application includes a processor 20. The processor 20 is configured to execute the task scheduling method according to any of the above embodiments, and for brevity, the description is omitted here.
Referring to fig. 10, an embodiment of the present application further provides a computer-readable storage medium 200, where a computer program 210 is stored on the computer-readable storage medium, and steps of the task scheduling method according to any one of the above embodiments are implemented when the computer program 210 is executed by the processor 20, which are not described herein again for brevity.
It will be appreciated that the computer program 210 comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may be any entity or device capable of carrying the computer program code, such as a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and so on.
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples and features of the various embodiments or examples described in this specification can be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (11)

1. A method for task scheduling, comprising:
acquiring tasks to be processed of a plurality of target objects;
storing the tasks to be processed of a plurality of target objects into the same preset queue;
and carrying out concurrent processing on the tasks to be processed in the preset queue.
2. The task scheduling method according to claim 1, wherein the obtaining the to-be-processed tasks of the plurality of target objects comprises:
acquiring a task table preset by each target object;
and acquiring the tasks to be processed and the dependency relationship between the tasks to be processed from the task table.
3. The task scheduling method according to claim 1, wherein the predetermined queue is a non-blocking queue.
4. The task scheduling method according to claim 1, wherein the concurrently processing the to-be-processed tasks in the preset queue further comprises:
acquiring the current resource quantity available for task processing to determine the concurrency quantity;
establishing threads with the same quantity as the concurrency quantity according to the concurrency quantity;
and carrying out concurrent processing on the tasks to be processed in the preset queue through threads with the same number as the concurrent amount.
5. The task scheduling method according to claim 1 or 4, wherein before concurrently processing the to-be-processed tasks in the preset queue, the task scheduling method comprises:
judging whether a service for processing the task is started and whether the preset queue is empty;
under the condition that the service is in a running state and the preset queue is not empty, the step of concurrently processing the tasks to be processed in the preset queue is carried out;
and stopping the service under the condition that the service is in a non-running state and the preset queue is empty.
6. The task scheduling method according to claim 5, wherein the concurrently processing the to-be-processed tasks in the preset queue includes:
taking out the current task to be processed from the preset queue;
judging whether the current task to be processed is the last task to be processed of the preset queue;
under the condition that the current task to be processed is the last task to be processed of the preset queue, judging whether all the tasks to be processed in the preset queue are processed completely;
under the condition that all the tasks to be processed in the preset queue are processed, executing the current task to be processed, and stopping the service after the current task to be processed is processed;
when the task to be processed is not processed and completed in the preset queue, storing the current task to be processed into the preset queue again;
under the condition that the current task to be processed is not the last task to be processed of the preset queue, judging whether the first task to be processed in the preset queue is processed;
executing the current task to be processed under the condition that the first task to be processed in the preset queue is processed;
and under the condition that the first task to be processed in the preset queue is not processed and completed, storing the current task to be processed into the preset queue again.
7. The task scheduling method according to claim 6, wherein a preset dependency relationship exists between the to-be-processed tasks of each target object, and when processing of a first to-be-processed task in the preset queue is completed, the concurrently processing of the to-be-processed tasks in the preset queue further comprises:
judging, according to the preset dependency relationship, whether all the tasks to be processed preceding the current task to be processed are completed;
executing the current task to be processed under the condition that all the tasks to be processed preceding the current task to be processed are completed;
and storing the current task to be processed into the preset queue again under the condition that any of the tasks to be processed preceding the current task to be processed is not completed.
8. The task scheduling method according to claim 6, wherein, when processing of a first to-be-processed task in the preset queue is completed, the concurrently processing the to-be-processed task in the preset queue further comprises:
judging whether the current task to be processed enters a pause state or not;
under the condition that the current task to be processed enters the pause state, storing the current task to be processed into the preset queue again;
and executing the current task to be processed under the condition that the current task to be processed does not enter the pause state.
9. A task scheduling apparatus, comprising:
the acquisition module is used for acquiring tasks to be processed of a plurality of target objects;
the queue module is used for storing the tasks to be processed of the target objects into the same preset queue;
and the processing module is used for carrying out concurrent processing on the tasks to be processed in the preset queue.
10. A computer device comprising one or more processors configured to obtain tasks to be processed for a plurality of target objects; storing the tasks to be processed of a plurality of target objects into the same preset queue; and carrying out concurrent processing on the tasks to be processed in the preset queue.
11. A computer-readable storage medium comprising a computer program which, when executed by a processor, causes the processor to carry out the task scheduling method of any one of claims 1 to 8.
CN202211229141.1A 2022-09-30 2022-09-30 Task scheduling method and device, computer equipment and computer readable storage medium Pending CN115292025A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211229141.1A CN115292025A (en) 2022-09-30 2022-09-30 Task scheduling method and device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115292025A true CN115292025A (en) 2022-11-04

Family

ID=83819197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211229141.1A Pending CN115292025A (en) 2022-09-30 2022-09-30 Task scheduling method and device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115292025A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020083063A1 (en) * 2000-12-26 2002-06-27 Bull Hn Information Systems Inc. Software and data processing system with priority queue dispatching
CN101246439A (en) * 2008-03-18 2008-08-20 中兴通讯股份有限公司 Automatized test method and system based on task scheduling
CN102752136A (en) * 2012-06-29 2012-10-24 广东东研网络科技有限公司 Method for operating and scheduling communication equipment
CN103870348A (en) * 2012-12-14 2014-06-18 中国电信股份有限公司 Test method and system for concurrent user access
CN104133724A (en) * 2014-04-03 2014-11-05 腾讯科技(深圳)有限公司 Concurrent task scheduling method and concurrent task scheduling device
CN104142858A (en) * 2013-11-29 2014-11-12 腾讯科技(深圳)有限公司 Blocked task scheduling method and device
CN104156260A (en) * 2014-08-07 2014-11-19 北京航空航天大学 Concurrent queue access control method and system based on task eavesdropping
CN105893126A (en) * 2016-03-29 2016-08-24 华为技术有限公司 Task scheduling method and device
CN106993058A (en) * 2017-05-24 2017-07-28 儒安科技有限公司 The transfer method and apparatus of network request
CN112306713A (en) * 2020-10-30 2021-02-02 深圳前海微众银行股份有限公司 Task concurrent computation method and device, equipment and storage medium
CN114756357A (en) * 2022-06-14 2022-07-15 浙江保融科技股份有限公司 Non-blocking distributed planned task scheduling method based on JVM (Java virtual machine)

Similar Documents

Publication Publication Date Title
CN108280150B (en) Distributed asynchronous service distribution method and system
CN110825535B (en) Job scheduling method and system
US8752059B2 (en) Computer data processing capacity planning using dependency relationships from a configuration management database
CN113535367B (en) Task scheduling method and related device
US20070214413A1 (en) Method and system for cascaded processing a plurality of data objects
CN110362611B (en) Database query method and device, electronic equipment and storage medium
CN111400011B (en) Real-time task scheduling method, system, equipment and readable storage medium
US9086911B2 (en) Multiprocessing transaction recovery manager
WO2019149032A1 (en) Distributed transaction processing method and device
US6917947B2 (en) Collection command applicator
US10768974B2 (en) Specifying an order of a plurality of resources in a transaction according to distance
CN109408216A (en) Task creating method, device, equipment and storage medium
CN110704170A (en) Batch task processing method and device, computer equipment and storage medium
CN111858062A (en) Evaluation rule optimization method, service evaluation method and related equipment
Zhong et al. Speeding up Paulson’s procedure for large-scale problems using parallel computing
CN113626173B (en) Scheduling method, scheduling device and storage medium
CN114519006A (en) Test method, device, equipment and storage medium
US10984011B1 (en) Distributing non-transactional workload across multiple database servers
CN113157411A (en) Reliable configurable task system and device based on Celery
CN111143041B (en) Data consistency method, distributed coordinator and central coordinator
CN115292025A (en) Task scheduling method and device, computer equipment and computer readable storage medium
CN116881003A (en) Resource allocation method, device, service equipment and storage medium
US11216352B2 (en) Method for automatically analyzing bottleneck in real time and an apparatus for performing the method
US9323509B2 (en) Method and system for automated process distribution
CN112199401B (en) Data request processing method, device, server, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221104