CN108228240B - Method and device for processing tasks in multi-task queue - Google Patents


Info

Publication number: CN108228240B
Authority: CN (China)
Prior art keywords: queue, task, tasks, same type, name
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201611155748.4A
Other languages: Chinese (zh)
Other versions: CN108228240A
Inventor: 张鹏 (Zhang Peng)
Current Assignee: Beijing Gridsum Technology Co Ltd
Original Assignee: Beijing Gridsum Technology Co Ltd
Application filed by Beijing Gridsum Technology Co Ltd
Priority application: CN201611155748.4A
Publication of application: CN108228240A
Application granted; publication of grant: CN108228240B


Classifications

    • G06F 9/3856: Reordering of instructions, e.g. using queues or age tags (under G: Physics; G06F: Electric digital data processing; G06F 9/38: Concurrent instruction execution, e.g. pipeline or look ahead)
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F 9/46: Multiprogramming arrangements; G06F 9/48: Program initiating; program switching)
    • G06F 9/546: Message passing systems or structures, e.g. queues (under G06F 9/46: Multiprogramming arrangements; G06F 9/54: Interprogram communication)


Abstract

The invention discloses a method and a device for processing tasks in a multitask queue. The method comprises the following steps: receiving request information of one or more tasks, wherein the request information comprises at least the type of the task and the priority of the task; placing the one or more tasks into corresponding queues according to the request information; and processing the queues according to the types and priorities of the tasks, placing tasks of the same type into different queues. The invention solves the technical problem of low task execution efficiency caused by tasks of the same type competing for the same resource in the same queue.

Description

Method and device for processing tasks in multi-task queue
Technical Field
The invention relates to the field of queue algorithms, in particular to a method and a device for processing tasks in a multi-task queue.
Background
At present, queue algorithms fall mainly into two types. One is first-in first-out (FIFO): the task that enters the queue first is taken out and executed first. The other is first-in last-out (FILO): the task that enters the queue first is taken out last, and the queue always executes the most recently entered task first. Neither scheme poses a problem in single-thread mode; because no resource contention occurs between tasks, each task executes correctly. In multi-thread mode, however, adjacent tasks in the queue often compete for the same resource, causing task failures. For example, thread A takes task 1 out of the queue, and task 1 needs to read the file text.txt; before task 1 has finished, thread B takes out task 2, which needs to write data into text.txt. Because task 1 is still running, text.txt is in a locked state, so task 2 fails to execute.
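For readers unfamiliar with the two orderings, here is a minimal Python sketch; the task names and data are illustrative, not from the patent:

```python
from collections import deque

# FIFO: the task enqueued first is dequeued and executed first.
fifo = deque()
for task in ["task1", "task2", "task3"]:
    fifo.append(task)            # enqueue at the tail
first_out = fifo.popleft()       # dequeue from the head

# FILO (a stack): the most recently enqueued task is executed first.
filo = []
for task in ["task1", "task2", "task3"]:
    filo.append(task)            # push
newest_out = filo.pop()          # pop the most recent entry
```

Here `first_out` is "task1" and `newest_out` is "task3", matching the two behaviors described above.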
To prevent adjacent tasks in the queue from failing by competing for the same resource in multi-thread mode, the prior art locks the currently executing task, so that tasks taken out later wait until the locked task finishes, at which point the next task is locked and executed in turn. Under this queue algorithm, a task holding the lock blocks all following tasks: the computing power of the machine cannot be fully used, the system's waiting time grows too long, and efficiency is low.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for processing tasks in a multi-task queue, which are used for at least solving the technical problem of low task execution efficiency caused by competition of the same type of tasks on the same resource in the same queue.
According to an aspect of the embodiments of the present invention, a method for processing tasks in a multitask queue is provided, including: receiving request information of one or more tasks, wherein the request information at least comprises: the type of the task and the priority of the task; placing one or more tasks into corresponding queues according to the request information; and processing the queues according to the types and the priorities of the tasks in the queues, and putting the tasks of the same type into different queues.
Further, placing one or more tasks into a corresponding queue according to the request information includes: judging whether the request information contains the name of a queue to which the task enters; if the request information contains the name of the queue to which the task enters, the task is placed in the queue corresponding to the name; and if the request information does not contain the name of the queue to be entered by the task, putting the task into a default queue.
Further, if the request information includes the name of the queue to which the task is to enter, placing the task in the queue corresponding to the name includes: judging whether a queue corresponding to the name exists or not; if the queue corresponding to the name exists, the task is placed into the queue corresponding to the name; if no queue corresponding to the name exists, a new queue is created and the task is placed in the new queue.
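The enqueue routing described in the two paragraphs above can be sketched as follows; the dictionary-of-queues representation and the name "default" are assumptions for illustration, not the patent's implementation:

```python
from collections import deque

DEFAULT = "default"                      # hypothetical name for the preset default queue
queues = {DEFAULT: deque()}

def enqueue(task: dict) -> str:
    """Place a task into the queue named in its request information.

    Falls back to the default queue when no name is given, and creates
    a new queue (named after the request) when none with that name exists.
    """
    name = task.get("queue_name") or DEFAULT
    if name not in queues:               # no queue corresponds to the name
        queues[name] = deque()           # create it, using the requested name
    queues[name].append(task)
    return name
```

A task requesting queue "a" creates queue "a" on first use and enters it; a task carrying no queue name lands in the default queue.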
Further, after creating a new queue and placing the task in the new queue, the method further comprises: the new queue is named as the name of the queue contained in the request information of the task.
Further, processing the queue according to the type and priority of the tasks in the queue, and putting the tasks of the same type into different queues, including: judging whether the tasks with the same type exist in the queue; if the tasks of the same type do not exist in the queue, the tasks in the queue are sequenced according to the priority of all the tasks in the queue; if the tasks of the same type exist in the queues, the tasks of the same type are placed in different queues.
Further, placing a plurality of tasks of the same type into a plurality of different queues includes: determining whether any queue other than the one holding the plurality of same-type tasks already contains a task of that type; if not, placing one of the plurality of same-type tasks into one of those other queues.
Further, if there are tasks of the same type as the plurality of tasks of the same type in the remaining other queues, at least one new queue is created and each task of the same type is placed in a different new queue until each task of the same type is placed in a different queue.
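The same-type separation described in the last three paragraphs can be sketched as below; the helper name, the task representation, and the generated queue names are assumptions, not the patent's implementation:

```python
from collections import deque
from itertools import count

def place_same_type(tasks, queues):
    """Spread tasks of one type across queues so no queue holds two.

    Each task goes into the first queue containing no task of its type;
    when every existing queue already holds one, a new queue is created.
    """
    new_names = (f"new_{i}" for i in count(1))   # hypothetical names for created queues
    for task in tasks:
        target = next(
            (q for q in queues.values()
             if all(t["type"] != task["type"] for t in q)),
            None)
        if target is None:                       # every queue holds this type already
            target = deque()
            queues[next(new_names)] = target
        target.append(task)
```

Three tasks of one type fed into a single-queue system end up in three distinct queues, two of them newly created.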
According to another aspect of the embodiments of the present invention, there is also provided a device for processing tasks in a multitasking queue, including: a receiving module, configured to receive request information of one or more tasks, where the request information at least includes: the type of the task and the priority of the task; the first processing module is used for placing one or more tasks into corresponding queues according to the request information; and the second processing module is used for processing the queues according to the types and the priorities of the tasks in the queues and putting the tasks of the same type into different queues.
Further, the first processing module comprises: the first judgment module is used for judging whether the request information contains the name of a queue to which the task enters; the first execution module is used for putting the task into the queue corresponding to the name if the request information contains the name of the queue to which the task enters; and the second execution module is used for putting the task into the default queue if the request information does not contain the name of the queue to be entered by the task.
Further, the first execution module includes: the second judgment module, configured to judge whether a queue corresponding to the name exists; the third execution module, configured to put the task into the queue corresponding to the name if that queue exists; and the fourth execution module, configured to create a new queue and put the task into the new queue if no queue corresponding to the name exists.
Further, the fourth execution module includes: and the creating module is used for naming the new queue as the name of the queue contained in the request information of the task.
Further, the second processing module includes: the third judging module is used for judging whether tasks with the same type exist in the queue; the fifth execution module is used for putting the tasks into the queue and sequencing the tasks in the queue according to the priority of the tasks if the tasks with the same type do not exist in the queue; and the sixth execution module is used for placing a plurality of tasks with the same type into a plurality of different queues if the tasks with the same type exist in the queues.
Further, the sixth execution module includes: the fourth judging module, configured to determine whether any queue other than the one holding the plurality of same-type tasks already contains a task of that type; and the seventh execution module, configured to place one of the plurality of same-type tasks into one of the other queues if no such task exists there.
Further, the seventh execution module includes: and the eighth execution module is used for creating at least one new queue if the tasks with the same type as the tasks with the same type exist in the rest other queues, and putting each task with the same type into a different new queue until each task with the same type is put into a different queue.
In the embodiment of the invention, request information of one or more tasks is received, the tasks are placed into corresponding queues according to the request information, and the queues are processed according to the types and priorities of the tasks, with tasks of the same type placed into different queues. It is determined whether a task carries the name of the queue it should enter and whether a queue with that name exists; if both hold, it is further determined whether tasks of the same type exist in the queue, and the tasks are ordered by priority. This achieves the purpose of placing tasks of the same type into different queues, shortens task waiting time, improves execution efficiency, and solves the technical problem of low task execution efficiency caused by tasks of the same type competing for the same resource in the same queue.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flowchart of a method for processing tasks in a multitasking queue according to an embodiment of the present invention;
FIG. 2 is a flowchart of an alternative method for processing tasks in a multitasking queue according to an embodiment of the present invention;
FIG. 3 is a flowchart of an alternative method for processing tasks in a multitasking queue according to an embodiment of the present invention;
FIG. 4 is a flowchart of an alternative method for processing tasks in a multitasking queue, according to an embodiment of the present invention;
FIG. 5 is a flowchart of an alternative method for processing tasks in a multitasking queue, according to an embodiment of the present invention;
FIG. 6 is a flowchart of an alternative method for processing tasks in a multitasking queue, according to an embodiment of the present invention; and
fig. 7 is a schematic structural diagram of a processing device for tasks in an optional multitask queue according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for processing tasks in a multitasking queue, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than presented herein.
Fig. 1 is a flowchart of a method for processing tasks in a multitask queue according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S102, receiving request information of one or more tasks, wherein the request information at least comprises: type of task, priority of task.
Specifically, in step S102, a task may be a series of operations executed in multi-thread mode to achieve a certain purpose, and may be a thread or a process; for example, different threads may perform read and write operations on the same file. The request information may be a request for resources during task execution, and includes the type of the task, the priority of the task, and the sequence number of the requested task. A task includes the name of the queue it is to be placed in and the task content, and the requested task sequence number is kept synchronized with the task sequence number on the server.
In an alternative embodiment, in multi-thread mode, multiple threads fetch tasks from the queue and execute them. For example: task 1 reads a data file, task 2 writes a data file, and task 3 modifies a data file, with thread A executing the read, thread B the write, and thread C the modification. Task 1 comprises: the name of the queue to enter, queue a, and the task content, reading the data file text.txt. Task 2 comprises: the name of the queue to enter, queue b, and the task content, writing the data file text.txt. Task 3 comprises: no queue name, and the task content, modifying the data file text.txt.
And step S104, placing one or more tasks into corresponding queues according to the request information.
Specifically, in step S104, a corresponding task is found according to a request task sequence number in the request information, and one or more tasks are placed in a queue corresponding to a name, a default queue, or a created queue.
In an optional embodiment, the requested task sequence numbers in the request information are task 1, task 2, and task 3, and these tasks are retrieved from the server. Task 1 is to be placed into the queue named queue a; whether a queue a corresponding to that name exists is judged, and if so, task 1 enters queue a. If it is judged that no queue b corresponding to the name requested by task 2 exists, queue b is created and task 2 enters it. Task 3 carries no queue name, so task 3 enters the default queue.
And step S106, processing the queues according to the types and the priorities of the tasks in the queues, and putting the tasks of the same type into different queues.
Specifically, in step S106, the tasks already in the queues are refreshed, and tasks of the same type are placed into different queues. The types of tasks include synchronizing data and updating a report; priorities are assigned in descending order of task importance.
In an optional embodiment, the tasks already in queues a and b and the default queue are refreshed. In the request information, task 1 has priority 3 and type synchronize data; task 2 has priority 2 and type synchronize data; task 3 has priority 4 and type update report. It is first judged whether queue a contains a task of type synchronize data; if not, task 1 is placed into queue a and the tasks are ordered by priority. If queue a does contain such a task, the same check is made on queue b; if queue b does not contain one, task 1 is placed into queue b and the tasks are ordered by priority. If queue b also contains one, the default queue is checked; if the default queue contains no task of type synchronize data, task 1 is placed there and the tasks are ordered by priority, and otherwise a new queue is created. Task 2 and task 3 are placed into different queues using the same queue algorithm.
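The worked example above (tasks 1 to 3 with priorities 3, 2, and 4) might be traced with the following sketch; the rule that a higher priority number is executed first is an assumption, since the text only states that tasks are ordered by priority:

```python
from collections import deque

queues = {"a": deque(), "b": deque(), "default": deque()}

tasks = [
    {"name": "task1", "type": "sync data",     "priority": 3, "queue": "a"},
    {"name": "task2", "type": "sync data",     "priority": 2, "queue": "b"},
    {"name": "task3", "type": "update report", "priority": 4, "queue": None},
]

for task in tasks:
    # Check the requested queue first, then the remaining queues, for one
    # that holds no task of the same type; create a new queue if all do.
    requested = task["queue"] or "default"
    for name in [requested] + [n for n in queues if n != requested]:
        if all(t["type"] != task["type"] for t in queues[name]):
            queues[name].append(task)
            break
    else:
        queues[f"new_{len(queues)}"] = deque([task])

# Reorder every queue so that higher-priority tasks are taken out first.
for name in list(queues):
    queues[name] = deque(sorted(queues[name], key=lambda t: -t["priority"]))
```

With these inputs, task 1 lands in queue a, task 2 in queue b, and task 3 in the default queue, so no queue holds two tasks of one type.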
In the solutions disclosed in steps S102 to S106 above, request information of one or more tasks is received, the tasks are placed into corresponding queues according to the request information, and the queues are processed according to the types and priorities of the tasks, with tasks of the same type placed into different queues. It is thus determined whether a task carries the name of the queue to enter and whether a queue with that name exists; if both hold, it is determined whether tasks of the same type exist in the queue, and the tasks are ordered by priority. This achieves the aim of placing tasks of the same type into different queues, shortening task waiting time and improving execution efficiency, and solves the technical problems of resource contention between same-type tasks in the same queue and of a low task success rate.
In an alternative embodiment, as shown in fig. 2, the step of placing one or more tasks into corresponding queues according to the request information may be implemented by:
step S202: finding a corresponding task according to a request task serial number in the request information, and judging whether the task contains a name to enter a queue;
step S204: if the task contains the name to be entered into the queue, the task is placed into the queue corresponding to the name;
step S206: if the name of the queue to be entered is not included in the task, the task is placed in the default queue.
Specifically, in the solutions disclosed in steps S202 to S206 above, the default queue is a preset queue.
It should be noted that if the queue name carried by task 1 is queue a, and a queue a corresponding to that name is determined to exist, task 1 enters queue a; task 3 carries no queue name, so task 3 enters the default queue.
Through the steps S202 to S206 in the above embodiment, the task that has the name to be put into the queue is put into the corresponding queue, and the task that has no name to be put into the queue is put into the default queue.
In an alternative embodiment, as shown in fig. 3, if the request information includes the name of the queue to which the task is to enter, the task is placed in the queue corresponding to the name, which may be implemented by the following steps:
step S302: judging whether a queue corresponding to the name exists or not;
step S304: if the queue corresponding to the name exists, the task is placed into the queue corresponding to the name;
step S306: if no queue corresponding to the name exists, a new queue is created and the task is placed in the new queue.
Here, if the name of the queue to which the task 2 is to be put is the queue b, and it is determined that there is no queue b corresponding to the name of the incoming queue, a new queue b is created, and the task 2 is put into the new queue b.
Through the steps S302 to S306 in the above embodiment, the task with the name to be put into the queue is put into the existing queue, and the task is put into the new queue after the new queue is created if the queue does not exist.
In an alternative embodiment, the new queue is named the name of the queue contained in the request message for the task.
Specifically, the name of the queue b is data synchronization.
In an alternative embodiment, as shown in fig. 4, the queues are processed according to the types of the tasks in the queues and the priorities of the tasks, and the tasks of the same type are put into different queues, which can be implemented by the following steps:
step S402: judging whether the tasks with the same type exist in the queue;
step S404: if the tasks of the same type do not exist in the queue, the tasks in the queue are sequenced according to the priority of all the tasks in the queue;
step S406: if the tasks of the same type exist in the queues, the tasks of the same type are placed in different queues.
Specifically, whether a task of which the type is synchronous data exists in the queue a is judged, if not, the task 1 is put into the queue a, and task sequencing is carried out according to the priority of the task; and if the task with the type of the synchronous data exists in the queue a, judging whether the task with the type of the synchronous data exists in the queue b.
Through the steps S402 to S406 in the above embodiment, the tasks of the same type are put into different queues, and the queues after adjustment are reordered according to the priorities of the tasks.
In an alternative embodiment, as shown in fig. 5, putting a plurality of tasks of the same type into different queues can be implemented by the following steps:
step S502: judging whether the other queues except the queue where the tasks with the same type are located have the tasks with the same type as the tasks with the same type;
step S504: if not, one of the plurality of tasks of the same type is placed in one of the other queues.
Specifically, it is judged whether queue b contains a task of type synchronize data. If not, task 1 is placed into queue b and the tasks are ordered by priority. If queue b does contain such a task, the default queue is checked; if the default queue contains no task of type synchronize data, task 1 is placed there and the tasks are ordered by priority.
through the steps S502 to S504 in the above embodiment, the tasks of the same type are put into different queues, and the queues after adjustment are reordered according to the priorities of the tasks.
In an alternative embodiment, if there are tasks of the same type as the plurality of tasks of the same type in the remaining other queues, at least one new queue is created and each task of the same type is placed in a different new queue until each task of the same type is placed in a different queue.
Specifically, if tasks of the type of synchronous data exist in other queues, a queue c is newly created, and the tasks of the type of synchronous data are put into the queue c.
As a preferred implementation, the following describes the above-mentioned embodiment of the present application with reference to fig. 6, and as shown in fig. 6, the following steps are included:
step S602: receiving one or more tasks to be placed in a queue;
step S604: judging whether the task to be put into the queue has a specified queue name; if it has a specified queue name, executing step S606;
step S604b: if the task to be put into the queue does not specify a queue name, setting the specified queue to the default queue, putting the task into the default queue, and executing step S608;
step S606: judging whether the specified queue exists;
step S606a: if the specified queue exists, putting the task into the specified queue and executing step S608;
step S606b: if the specified queue does not exist, creating a new queue with the specified queue name and executing step S608;
step S608: redistributing and reordering the tasks in the queues according to the types of the tasks and the priorities of the tasks; if tasks of the same task type exist in a queue, restoring the queue to its initial state;
step S610: putting the next task to be enqueued into a queue.
Through the steps S602 to S610, the tasks of the same task type are put into different queues, so that the competition of the tasks of the same task type in the same queue for resources is avoided, and the execution efficiency is improved.
Specifically, the queue interface is designed as follows:
[The queue interface definition appears only as an embedded image (BDA0001180654230000091) in the original and is not reproduced here.]
it should be noted here that, a Queue name is provided under an interface of the Queue, and an enqueuing method Queue () and a reference parameter of a task to be put into the Queue are the number of the tasks to be put into the Queue; the enqueuing method Queue () and the reference parameter of the tasks to be put into the Queue in batches are the number of the tasks to be put into the Queue in batches; a task dequeue method QueueItem Unqueue (), a task batch dequeue method QueueItem [ ] Unqueue (), and reference parameters are the number of any dequeue; a queue priority arrangement method Prior (), reference parameters are the type of the task and the priority of the task; the method for sorting the queue (), reference parameter is QueueItem [ ] items.
Example 2
According to the embodiment of the invention, the embodiment of the device for processing the tasks in the multitask queue is also provided. The method for processing the tasks in the multitask queue in embodiment 1 of the present invention may be executed in the processing apparatus in embodiment 2 of the present invention.
Fig. 7 is a schematic structural diagram of a processing apparatus for tasks in a multitask queue according to an embodiment of the present invention, the apparatus including: a receiving module 201, a first processing module 203 and a second processing module 205.
The receiving module 201 is configured to receive request information of one or more tasks; the first processing module 203 is used for placing one or more tasks into corresponding queues according to the request information; the second processing module 205 is configured to process the queue according to the type and priority of the task in the queue, and place the task of the same type into different queues.
In the receiving module 201, the task may be a series of operations for achieving a certain purpose during the execution in the multi-thread mode, and may be a thread or a process. For example, read and write operations may be performed on the same file for different threads; the request information may be request information for resources in a task execution process, where the request information includes a type of a task, a priority of the task, and a sequence number of the request task; the tasks include: the name and task content to be placed in the queue; and requesting the task serial number to be updated synchronously with the task serial number in the server.
It should be noted here that the receiving module 201 corresponds to step S102 in embodiment 1. In multi-threaded mode, multiple threads take tasks out of the queue and execute them. For example, task 1 reads a data file, task 2 writes a data file, and task 3 modifies a data file; thread A executes the read, thread B executes the write, and thread C executes the modification. Here task 1 comprises the queue name queue a and the task content of reading the data file text.txt; task 2 comprises the queue name queue b and the task content of writing the data file text.txt; and task 3 specifies no queue name, its task content being the modification of the data file text.txt.
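The multi-threaded consumption described above can be sketched with Python's standard `queue` and `threading` modules. This is a minimal illustration of threads A, B, and C draining a shared task queue; the sentinel-based shutdown and the result list are assumptions, not part of the patent.

```python
import queue
import threading

task_q = queue.Queue()
results = []
lock = threading.Lock()


def worker():
    """Take tasks out of the queue and execute them until a sentinel arrives."""
    while True:
        task = task_q.get()
        if task is None:          # sentinel: no more tasks for this thread
            task_q.task_done()
            break
        with lock:                # record which thread ran which task
            results.append(f"{threading.current_thread().name} ran {task}")
        task_q.task_done()


# tasks 1-3 from the example: read / write / modify text.txt
for t in ["read text.txt", "write text.txt", "modify text.txt"]:
    task_q.put(t)

threads = [threading.Thread(target=worker, name=n) for n in "ABC"]
for th in threads:
    th.start()
for _ in threads:                 # one sentinel per worker thread
    task_q.put(None)
for th in threads:
    th.join()
```

After the join, all three tasks have been executed by some combination of the three threads; which thread runs which task is not deterministic.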
In the first processing module 203, a corresponding task is found according to a request task sequence number in the request information, and one or more tasks are placed in a queue corresponding to a queue name, a default queue, or a created queue.
It should be noted here that the first processing module 203 corresponds to step S104 in embodiment 1. Suppose the sequence numbers of the requested tasks in the request information are task 1, task 2, and task 3, and the three tasks are found on the server. Task 1 is to be put into the queue with the queue name queue a; if it is determined that a queue a corresponding to that name exists, task 1 enters queue a. Task 2 is to be put into the queue with the queue name queue b; if it is determined that no queue corresponding to that name exists, queue b is created and task 2 is placed into it. Task 3 specifies no queue name, so it is placed into the default queue.
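The routing logic just described (named queue if present, create it if absent, default queue if no name is given) can be sketched as a small function. The function and dictionary names are illustrative assumptions; the branch structure follows the text.

```python
def place_task(task, queues, default_name="default"):
    """Route a task to the queue named in its request information.

    If the request names a queue that does not exist, a new queue is
    created with that name; a task with no queue name goes to the
    default queue. (Illustrative sketch of the first processing module.)
    """
    name = task.get("queue_name") or default_name
    if name not in queues:        # create a new queue named after the request
        queues[name] = []
    queues[name].append(task)
    return name
```

Running the three example tasks through it yields queue a for task 1, a newly created queue b for task 2, and the default queue for task 3.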
In the second processing module 205, the existing tasks in the queues are refreshed, and tasks of the same type are placed into different queues. The types of tasks include synchronizing data and updating a report; task priority is assigned in descending order of task importance.
It should be noted here that the second processing module 205 corresponds to step S106 in embodiment 1. The existing tasks in queue a, queue b, and the default queue are refreshed. In the request information, task 1 has priority 3 and type synchronous data; task 2 has priority 2 and type synchronous data; task 3 has priority 4 and type updated report. It is first judged whether queue a contains a task of type synchronous data; if not, task 1 is put into queue a and the tasks are sorted by priority. If queue a already contains such a task, the same judgment is made for queue b; if queue b contains none, task 1 is put into queue b and sorted by priority. If queue b also contains one, the default queue is checked; if it contains no task of type synchronous data, task 1 is put into the default queue and sorted by priority, and if it does, a new queue is created. Task 2 and task 3 follow the same queuing algorithm, so that tasks of the same type are placed into different queues.
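The dispatch rule above — scan the queues in order, place the task in the first queue that holds no task of the same type, and create a new queue only when every existing queue already has one — can be sketched as follows. Task dictionaries and the descending-priority sort are illustrative assumptions consistent with the text.

```python
def dispatch(task, queues):
    """Place `task` into the first queue holding no task of its type.

    If every existing queue already contains a task of the same type,
    a new queue is created. Each queue is kept sorted by descending
    priority. (Sketch of the second processing module's algorithm.)
    """
    for q in queues:
        if all(t["type"] != task["type"] for t in q):
            q.append(task)
            q.sort(key=lambda t: t["priority"], reverse=True)
            return q
    new_q = [task]                # all queues hold this type: open a new one
    queues.append(new_q)
    return new_q
```

With two empty queues, the three example tasks land as described in the text: task 1 (synchronous data) in the first queue, task 2 (same type) in the second, and task 3 (updated report, priority 4) ahead of task 1 in the first queue after sorting.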
Optionally, the first processing module 203 further includes: the first judgment module is used for judging whether the request information contains the name of a queue to which the task enters; the first execution module is used for putting the task into the queue corresponding to the name if the request information contains the name of the queue to which the task enters; and the second execution module is used for putting the task into the default queue if the request information does not contain the name of the queue to be entered by the task.
Optionally, the first execution module further includes: the second judgment module is used for judging whether a queue corresponding to the name exists or not; the third execution module is used for putting the task into the queue corresponding to the name if the queue corresponding to the name exists; and the fourth execution module is used for creating a new queue and putting the task into the new queue if the queue corresponding to the name does not exist.
Optionally, the fourth executing module further includes: and the creating module is used for naming the new queue as the name of the queue contained in the request information of the task.
Optionally, the second processing module 205 further includes: the third judging module is used for judging whether tasks with the same type exist in the queue; the fifth execution module is used for sequencing the tasks in the queue according to the priority of all the tasks in the queue if the tasks with the same type do not exist in the queue; and the sixth execution module is used for placing a plurality of tasks with the same type into a plurality of different queues if the tasks with the same type exist in the queues.
Optionally, the sixth execution module further includes: the fourth judging module, used for judging whether the queues other than the queue in which the plurality of tasks of the same type are located contain a task of the same type as those tasks; and the seventh execution module, used for placing one of the plurality of tasks of the same type into one of the other queues if that queue contains no task of the same type.
Optionally, the seventh executing module further includes: and the eighth execution module is used for creating at least one new queue if the tasks of the same type with the tasks of the same type exist in the rest other queues, and putting each task of the same type into a different new queue until each task of the same type is put into a different queue.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit may be a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (7)

1. A method for processing tasks in a multitask queue is characterized by comprising the following steps:
receiving request information of one or more tasks, wherein the request information at least comprises: the type of the task, the priority of the task;
placing the one or more tasks into corresponding queues according to the request information;
processing the queue according to the type and priority of the tasks in the queue, and putting the tasks of the same type into different queues;
wherein, the step of placing the one or more tasks into corresponding queues according to the request information comprises: judging whether the request information contains the name of a queue to which the task enters; if the request information contains the name of the queue to which the task enters, putting the task into the queue corresponding to the name; if the request information does not contain the name of the queue to which the task enters, the task is placed into a default queue;
processing the queue according to the type and priority of the tasks in the queue, and putting the tasks of the same type into different queues, wherein the processing comprises the following steps: judging whether the tasks with the same type exist in the queue or not; if the tasks of the same type do not exist in the queue, the tasks in the queue are sequenced according to the priority of all the tasks in the queue; and if the tasks of the same type exist in the queues, putting the tasks of the same type into different queues.
2. The method according to claim 1, wherein if the request information includes a name of a queue to which the task is to enter, placing the task in the queue corresponding to the name comprises:
judging whether a queue corresponding to the name exists or not;
if the queue corresponding to the name exists, putting the task into the queue corresponding to the name;
and if the queue corresponding to the name does not exist, creating a new queue, and putting the task into the new queue.
3. The method of claim 2, wherein after creating a new queue and placing the task in the new queue, the method further comprises: and naming the new queue as the name of the queue contained in the request information of the task.
4. The method of claim 1, wherein placing a plurality of tasks of the same type into a different plurality of queues comprises:
judging whether the other queues except the queue where the plurality of tasks with the same type are located have the tasks with the same type as the plurality of tasks with the same type;
and if not, putting one task of the plurality of tasks with the same type into one queue of the other queues.
5. The method of claim 4, wherein if there are tasks of the same type as the plurality of tasks of the same type in the remaining other queues, creating at least one new queue and placing each task of the same type in a different new queue until each task of the same type is placed in a different queue.
6. An apparatus for processing tasks in a multitask queue, comprising:
a receiving module, configured to receive request information of one or more tasks, where the request information at least includes: the type of the task, the priority of the task;
the first processing module is used for placing the one or more tasks into corresponding queues according to the request information;
the second processing module is used for processing the queue according to the type and the priority of the tasks in the queue and putting the tasks of the same type into different queues;
wherein the first processing module comprises: the first judging module is used for judging whether the request information contains the name of the queue to which the task enters; the first execution module is used for putting the task into a queue corresponding to the name if the request information contains the name of the queue to which the task enters; the second execution module is used for putting the task into a default queue if the request information does not contain the name of the queue to which the task enters;
wherein, the second processing module further comprises: the third judging module is used for judging whether tasks with the same type exist in the queue; the fifth execution module is used for sequencing the tasks in the queue according to the priority of all the tasks in the queue if the tasks with the same type do not exist in the queue; and the sixth execution module is used for placing a plurality of tasks with the same type into a plurality of different queues if the tasks with the same type exist in the queues.
7. The apparatus of claim 6, wherein the first execution module comprises:
the second judgment module is used for judging whether a queue corresponding to the name exists or not;
a third execution module, configured to, if a queue corresponding to the name exists, place the task in the queue corresponding to the name;
and the fourth execution module is used for creating a new queue and putting the task into the new queue if the queue corresponding to the name does not exist.
CN201611155748.4A 2016-12-14 2016-12-14 Method and device for processing tasks in multi-task queue Active CN108228240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611155748.4A CN108228240B (en) 2016-12-14 2016-12-14 Method and device for processing tasks in multi-task queue

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611155748.4A CN108228240B (en) 2016-12-14 2016-12-14 Method and device for processing tasks in multi-task queue

Publications (2)

Publication Number Publication Date
CN108228240A CN108228240A (en) 2018-06-29
CN108228240B true CN108228240B (en) 2021-02-26

Family

ID=62650118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611155748.4A Active CN108228240B (en) 2016-12-14 2016-12-14 Method and device for processing tasks in multi-task queue

Country Status (1)

Country Link
CN (1) CN108228240B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985629B (en) * 2018-07-17 2022-04-08 创新先进技术有限公司 Method and device for executing service node in service chain and server

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807159A (en) * 2010-03-18 2010-08-18 西北工业大学 Self-adapting task scheduling method
US8185897B2 (en) * 2008-09-30 2012-05-22 Verizon Patent And Licensing Inc. Task management system
CN104407921A (en) * 2014-12-25 2015-03-11 浪潮电子信息产业股份有限公司 Time-based method for dynamically scheduling yarn task resources
CN104731651A (en) * 2013-12-20 2015-06-24 南京南瑞继保电气有限公司 Power automation task scheduling and triggering method, system and processor
CN105893126A (en) * 2016-03-29 2016-08-24 华为技术有限公司 Task scheduling method and device
CN106020954A (en) * 2016-05-13 2016-10-12 深圳市永兴元科技有限公司 Thread management method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5844980A (en) * 1993-03-03 1998-12-01 Siemens Business Communication Systems, Inc. Queue managing system and method
US20100153957A1 (en) * 2008-12-16 2010-06-17 Sensormatic Electronics Corporation System and method for managing thread use in a thread pool
CN102567086B (en) * 2010-12-30 2014-05-07 中国移动通信集团公司 Task scheduling method, equipment and system
US9058208B2 (en) * 2012-11-12 2015-06-16 Skymedi Corporation Method of scheduling tasks for memories and memory system thereof
CN104657214A (en) * 2015-03-13 2015-05-27 华存数据信息技术有限公司 Multi-queue multi-priority big data task management system and method for achieving big data task management by utilizing system
CN105204933A (en) * 2015-09-18 2015-12-30 上海斐讯数据通信技术有限公司 Multitask switching execution method based on single process, multitask switching execution system based on single process and processor

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8185897B2 (en) * 2008-09-30 2012-05-22 Verizon Patent And Licensing Inc. Task management system
CN101807159A (en) * 2010-03-18 2010-08-18 西北工业大学 Self-adapting task scheduling method
CN104731651A (en) * 2013-12-20 2015-06-24 南京南瑞继保电气有限公司 Power automation task scheduling and triggering method, system and processor
CN104407921A (en) * 2014-12-25 2015-03-11 浪潮电子信息产业股份有限公司 Time-based method for dynamically scheduling yarn task resources
CN105893126A (en) * 2016-03-29 2016-08-24 华为技术有限公司 Task scheduling method and device
CN106020954A (en) * 2016-05-13 2016-10-12 深圳市永兴元科技有限公司 Thread management method and device

Also Published As

Publication number Publication date
CN108228240A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN106802826B (en) Service processing method and device based on thread pool
CN108108463B (en) Synchronous task processing method and device based on time slice scheduling
WO2017114199A1 (en) Data synchronisation method and apparatus
CN107391243B (en) Thread task processing equipment, device and method
CA2928865C (en) Strict queue ordering in a distributed system
CN101902487B (en) Queue scheduling method and device based on linked list
CN106598725A (en) Android-based Handler memory leakage prevention device and method
CN105119997A (en) Data processing method of cloud computing system
CN105306552A (en) Consumption equilibrium method and system based on message queues
CN107479981B (en) Processing method and device for realizing synchronous call based on asynchronous call
CN102023899B (en) Multithreaded data synchronization method and device
CN111427670A (en) Task scheduling method and system
CN104980515A (en) Method and apparatus for distributing and processing messages in cloud storage systems
CN109327321B (en) Network model service execution method and device, SDN controller and readable storage medium
CN111274021A (en) GPU cluster task scheduling and distributing method
CN108228240B (en) Method and device for processing tasks in multi-task queue
CN115658153A (en) Sleep lock optimization method and device, electronic equipment and storage medium
CN113656189A (en) Message processing method and device
CN109426554B (en) Timing implementation method and device for server
CN106406997B (en) Timer scheduling method and device
CN110888739B (en) Distributed processing method and device for delayed tasks
CN105159690B (en) A kind of method and device of automatic synchronization user interface UI thread
CN106293970B (en) Asynchronous processing method and system between a kind of process based on IPC
CN107797870A (en) A kind of cloud computing data resource dispatching method
CN108958808A (en) Method for starting terminal and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100083 No. 401, 4th Floor, Haitai Building, 229 North Fourth Ring Road, Haidian District, Beijing

Applicant after: Beijing Guoshuang Technology Co.,Ltd.

Address before: 100086 Cuigong Hotel, 76 Zhichun Road, Shuangyushu District, Haidian District, Beijing

Applicant before: Beijing Guoshuang Technology Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant