CN107450971A - Task processing method and device - Google Patents

Task processing method and device

Info

Publication number
CN107450971A
Authority
CN
China
Prior art keywords
task
queue
waiting task
thread
waiting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710517218.8A
Other languages
Chinese (zh)
Other versions
CN107450971B (en)
Inventor
吴朝彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 58 Information Technology Co Ltd
Original Assignee
Beijing 58 Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing 58 Information Technology Co Ltd filed Critical Beijing 58 Information Technology Co Ltd
Priority to CN201710517218.8A priority Critical patent/CN107450971B/en
Publication of CN107450971A publication Critical patent/CN107450971A/en
Application granted granted Critical
Publication of CN107450971B publication Critical patent/CN107450971B/en
Legal status: Active (current)
Anticipated expiration: legal status not determined by legal analysis

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5011 Pool
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5018 Thread allocation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/54 Indexing scheme relating to G06F 9/54
    • G06F 2209/548 Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

An embodiment of the present invention provides a task processing method and device. The method includes: obtaining a pending task generated by an application program and the type of the pending task; determining a target queue in a queue set according to the type of the pending task, the queue set including multiple queues; caching the pending task to the target queue; and, after determining that the previous task of the pending task in the target queue has been processed, processing the pending task by a thread in a thread pool, where the number of threads in the thread pool is smaller than the number of queues in the queue set. This avoids application exceptions caused by too many threads running in parallel.

Description

Task processing method and device
Technical field
Embodiments of the present invention relate to the field of computer technology, and in particular to a task processing method and device.
Background
In an application program (software), there are many time-consuming pieces of business logic, for example obtaining data from a server, storing configuration information, or periodically reporting log information. To ensure that the application runs normally, such time-consuming business logic can be implemented through the IntentService class (a service component).
While the application is running, when it reaches a preset node, the application instantiates the corresponding business logic in order to realize a preset function, obtains an instantiated task, and completes the corresponding function through the instantiated task. Each piece of business logic implements one kind of function; an instantiated task realizes the function provided by the business logic and may also carry customized behavior. When the application runs to different nodes, the same business logic may be instantiated differently to obtain multiple tasks; these tasks all realize the function provided by the business logic but may differ in their customized behavior. Multiple tasks instantiated from the same business logic are executed serially, while tasks instantiated from different business logics can be executed in parallel.
In the prior art, after a piece of business logic is instantiated into a task, the service component creates a thread for the task, and the task is executed by that thread. However, when many tasks are instantiated, many threads have to be created, and a large number of threads may cause the application to run abnormally.
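By way of illustration only (this code is not part of the patent, and all names are assumptions), the prior-art behavior described above can be sketched in Java as follows: every instantiated task is given its own thread, so the number of threads running in parallel grows with the number of tasks.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the prior-art approach: one dedicated thread per task.
// With many instantiated tasks the number of parallel threads is unbounded,
// which is the problem the claimed method is meant to avoid.
public class PerTaskThreadExecutor {

    private final List<Thread> threads = new ArrayList<>();

    /** Starts a dedicated thread for every submitted task. */
    public void submit(Runnable task) {
        Thread t = new Thread(task, "task-thread-" + threads.size());
        threads.add(t);
        t.start(); // thread count grows with task count
    }

    public int startedThreadCount() {
        return threads.size();
    }
}
```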
Summary of the invention
Embodiments of the present invention provide a task processing method and device to avoid application exceptions caused by too many threads running in parallel.
In a first aspect, an embodiment of the present invention provides a task processing method, including:
obtaining a pending task generated by the application program and the type of the pending task;
determining a target queue in a queue set according to the type of the pending task, the queue set including multiple queues;
caching the pending task to the target queue;
after determining that, in the target queue, the previous task of the pending task has been processed, processing the pending task by a thread in a thread pool, the number of threads in the thread pool being smaller than the number of queues in the queue set.
In a possible embodiment, processing the pending task by a thread in the thread pool after determining that, in the target queue, the previous task of the pending task has been processed includes:
obtaining a processing completion message corresponding to the previous task of the pending task in the target queue;
processing the pending task by a thread in the thread pool according to the processing completion message.
In another possible embodiment, processing the pending task by a thread in the thread pool according to the processing completion message includes:
storing the pending task to a buffer queue corresponding to the thread pool according to the processing completion message;
when it is determined that the pending task is at the head of the buffer queue and an idle thread exists in the thread pool, processing the pending task by the idle thread.
In another possible embodiment, before storing the pending task to the buffer queue corresponding to the thread pool according to the processing completion message, the method further includes:
creating the buffer queue after determining that the application program has started;
when it is determined that a first task exists in a queue of the queue set, storing the first task of each queue to the buffer queue.
In another possible embodiment, after storing the pending task to the buffer queue corresponding to the thread pool according to the processing completion message, the method further includes:
removing the pending task from the target queue.
In another possible embodiment, after processing the pending task by the idle thread, the method further includes:
removing the pending task from the buffer queue.
In a second aspect, an embodiment of the present invention provides a task processing device, including an obtaining module, a determining module, a storage module and a processing module, wherein:
the obtaining module is configured to obtain a pending task generated by the application program and the type of the pending task;
the determining module is configured to determine a target queue in a queue set according to the type of the pending task, the queue set including multiple queues;
the storage module is configured to cache the pending task to the target queue;
the processing module is configured to process the pending task by a thread in a thread pool after determining that, in the target queue, the previous task of the pending task has been processed, the number of threads in the thread pool being smaller than the number of queues in the queue set.
In a possible embodiment, the processing module includes an obtaining unit and a processing unit, wherein:
the obtaining unit is configured to obtain a processing completion message corresponding to the previous task of the pending task in the target queue;
the processing unit is configured to process the pending task by a thread in the thread pool according to the processing completion message.
In another possible embodiment, the processing unit is specifically configured to:
store the pending task to a buffer queue corresponding to the thread pool according to the processing completion message;
when it is determined that the pending task is at the head of the buffer queue and an idle thread exists in the thread pool, process the pending task by the idle thread.
In another possible embodiment, the device further includes a creation module, wherein:
the creation module is configured to create the buffer queue after determining that the application program has started, before the processing unit stores the pending task to the buffer queue corresponding to the thread pool according to the processing completion message;
the storage module is further configured to store the first task of each queue to the buffer queue when it is determined that a first task exists in a queue of the queue set.
In another possible embodiment, the storage module is further configured to:
remove the pending task from the target queue after the processing unit stores the pending task to the buffer queue corresponding to the thread pool according to the processing completion message.
In another possible embodiment, the storage module is further configured to:
remove the pending task from the buffer queue after the processing unit processes the pending task by the idle thread.
According to the task processing method and device provided by the embodiments of the present invention, after the application program instantiates a pending task, the reconstructed service component obtains the type of the pending task, determines the target queue used to cache the pending task according to that type, and caches the pending task to the target queue. The reconstructed service component uses the shared threads of a thread pool to process the tasks in a single queue serially and the tasks in different queues in parallel: only after the reconstructed service component determines that the previous task of the pending task in the target queue has been processed does it hand the pending task to a thread in the thread pool. This ensures that tasks in the same queue are processed serially while tasks in different queues are processed in parallel. Further, because the number of threads in the thread pool is smaller than the number of queues in the queue set, the number of threads running in parallel stays within a fixed range even when the application generates many kinds of tasks, thereby avoiding application exceptions caused by too many threads running in parallel.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of the task processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the task processing method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a method for processing a pending task provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a queue structure provided by an embodiment of the present invention;
Fig. 5 is a first structural diagram of the task processing device provided by an embodiment of the present invention;
Fig. 6 is a second structural diagram of the task processing device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an application scenario of the task processing method provided by an embodiment of the present invention. Referring to Fig. 1, an application program installed on a terminal device includes multiple pieces of business logic and a reconstructed service component. Each piece of business logic implements one kind of function; the business logic shown in this application is relatively time-consuming, and the tasks instantiated from it need to be executed through the reconstructed service component. While the application is running on the terminal device, the application instantiates a piece of business logic to obtain the corresponding task, where one piece of business logic can be instantiated into multiple tasks; after a task is instantiated, it is run in the background through the reconstructed service component.
Specifically, after the reconstructed service component receives a task that needs to run in the background, it places the task into the corresponding queue according to the task's type and controls the threads in a thread pool to process the tasks in each queue, where the threads in the thread pool process the tasks within a single queue serially and process the tasks in different queues in parallel. In this application, the execution flow of the service component is improved so that the reconstructed service component processes tasks of the same type serially and tasks of different types in parallel; because the tasks are processed by the threads of a thread pool, the number of threads in the pool can be set in advance, which avoids abnormal operation of the application.
The technical solution of this application is described in detail below through specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar content is not repeated in each embodiment.
Fig. 2 is a schematic flowchart of the task processing method provided by an embodiment of the present invention. Referring to Fig. 2, the method may include:
S201, obtaining a pending task generated by the application program and the type of the pending task.
The executing body of this embodiment may be a task processing device, and the task processing device may be the reconstructed service component in the application program. For example, the reconstructed service component may be BatchIntentService, an improvement on the existing service component IntentService. It should be noted that the following description takes the reconstructed service component as the task processing device by way of example. Optionally, the reconstructed service component may be implemented in software.
It should be noted that the reconstructed service component processes every task in the same way; this embodiment therefore takes the processing of an arbitrary pending task as an example.
The application program in this embodiment may be any application program on a device such as a mobile phone or a computer. The application includes multiple pieces of business logic. While the application is running, it may call a piece of business logic to realize the corresponding function; when calling the business logic, the application instantiates it, obtains the instantiated pending task, and calls the reconstructed service component to process the pending task. Optionally, the pending task may be passed to the reconstructed service component so that the reconstructed service component obtains it.
In this embodiment, the type of a pending task may represent the business logic corresponding to the pending task, where the business logic may include "obtaining data from a server", "storing configuration information", "periodically reporting log information", and so on. Multiple pending tasks instantiated from the same business logic have the same type, while pending tasks instantiated from different business logics have different types.
Optionally, the reconstructed service component may obtain the business logic corresponding to the pending task and determine the type of the pending task according to that business logic.
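By way of illustration only (the patent does not prescribe a concrete representation; the names below are assumptions), the type of a pending task could be modelled as an enum keyed by the business logic it was instantiated from:

```java
// Illustrative assumption: each pending task carries the type of business logic
// it was instantiated from; the type selects the target queue in the queue set.
enum TaskType { FETCH_SERVER_DATA, STORE_CONFIG, REPORT_LOG }

interface TypedTask extends Runnable {
    TaskType type();
}
```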
S202, determining a target queue in a queue set according to the type of the pending task, the queue set including multiple queues.
Optionally, the queues in the queue set may be created in the memory of the terminal device after the application program starts. The queue set includes multiple queues, and each queue corresponds to one task type.
After the reconstructed service component obtains the type of the pending task, it determines, according to that type, the target queue corresponding to the type of the pending task in the queue set.
S203, caching the pending task to the target queue.
In this embodiment, each queue in the queue set is a first-in-first-out (FIFO) queue; the pending task is therefore cached to the tail of the target queue.
S204, after determining that, in the target queue, the previous task of the pending task has been processed, processing the pending task by a thread in the thread pool.
The thread pool includes multiple threads, which may be created after the application program starts. These threads can process the tasks in every queue of the queue set; optionally, the number of threads in the thread pool is usually smaller than the number of queues in the queue set. The threads in the thread pool process the tasks within a single queue serially, that is, for a given queue, the next task in the queue can be processed only after the previous task in that queue has been processed. The threads in the thread pool can process tasks in different queues in parallel, that is, different threads in the thread pool can process tasks in different queues at the same time.
Because the tasks in a queue must be processed serially, for any queue, only after the reconstructed service component determines that the current task in that queue has been processed does it hand the next pending task in the queue to a thread in the thread pool.
Accordingly, after the reconstructed service component caches the pending task to the target queue, the pending task is processed by a thread in the thread pool only after the reconstructed service component determines that the previous task of the pending task in the target queue has been processed.
According to the task processing method provided by this embodiment, after the application program instantiates a pending task, the reconstructed service component obtains the type of the pending task, determines the target queue used to cache the pending task according to that type, and caches the pending task to the target queue. The reconstructed service component uses the shared threads of the thread pool to process the tasks in one queue serially and the tasks in different queues in parallel: only after it determines that the previous task of the pending task in the target queue has been processed does it hand the pending task to a thread in the thread pool. This ensures that tasks in the same queue are processed serially while tasks in different queues are processed in parallel. Further, because the number of threads in the thread pool is smaller than the number of queues in the queue set, the number of threads running in parallel stays within a fixed range even when the application generates many kinds of tasks, which avoids application exceptions caused by too many threads running in parallel.
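The following Java sketch is an editorial illustration of S201 to S204, not the patent's code; the class and method names, and the reuse of the TaskType and TypedTask types sketched above, are assumptions. It keeps one FIFO queue per task type, shares a bounded thread pool across all queues, and submits the next task of a queue only after the previous task of that queue reports completion, so tasks of the same type run serially while tasks of different types run in parallel.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the dispatch rule in S201-S204: serial within a queue (same type),
// parallel across queues, with a thread pool smaller than the number of queues.
public class BatchTaskDispatcher {

    private final Map<TaskType, Deque<TypedTask>> queues = new HashMap<>();
    private final ExecutorService pool;

    public BatchTaskDispatcher(int poolSize) {
        this.pool = Executors.newFixedThreadPool(poolSize);
        for (TaskType type : TaskType.values()) {
            queues.put(type, new ArrayDeque<>()); // one FIFO queue per task type (S202)
        }
    }

    /** S201-S203: pick the target queue by type and cache the task at its tail. */
    public synchronized void submit(TypedTask task) {
        Deque<TypedTask> queue = queues.get(task.type());
        boolean wasIdle = queue.isEmpty();
        queue.addLast(task);
        if (wasIdle) {
            dispatchHead(task.type()); // nothing running for this type, start immediately
        }
    }

    /** S204: run the head of a queue; the next task waits for its completion message. */
    private synchronized void dispatchHead(TaskType type) {
        TypedTask head = queues.get(type).peekFirst();
        if (head == null) {
            return;
        }
        pool.execute(() -> {
            head.run();
            onTaskCompleted(type); // plays the role of the "processing completion message"
        });
    }

    private synchronized void onTaskCompleted(TaskType type) {
        queues.get(type).pollFirst(); // remove the finished task from its queue
        dispatchHead(type);           // continue serially with the same queue
    }
}
```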
On the basis of any of the above embodiments, optionally, the reconstructed service component may process the pending task by a thread in the thread pool (S204 in the embodiment shown in Fig. 2) in the following feasible manner; see the embodiment shown in Fig. 3.
Fig. 3 is a schematic flowchart of a method for processing a pending task provided by an embodiment of the present invention. Referring to Fig. 3, the method may include:
S301, obtaining a processing completion message corresponding to the previous task of the pending task in the target queue.
In this embodiment, after a thread finishes processing a task, it generates a processing completion message corresponding to that task and feeds the processing completion message back to the reconstructed service component.
S302, storing the pending task to the buffer queue corresponding to the thread pool according to the processing completion message.
Optionally, the buffer queue may be created in the memory of the terminal device on which the application is installed, after it is determined that the application program has started. When it is determined that a first task exists in a queue of the queue set, the first task of each queue is stored to the buffer queue.
Optionally, the first task in a queue refers to the task cached to that queue first after the application program starts; each queue has its own first task. Optionally, while the application is running, whether a first task exists in each queue of the queue set is checked; when it is determined that a first task exists in a queue, the first task is stored to the buffer queue and deleted from that queue.
After the reconstructed service component obtains the processing completion message corresponding to the previous task of the pending task in the target queue, it stores the pending task to the buffer queue corresponding to the thread pool. The buffer queue is a FIFO queue; the reconstructed service component therefore stores the pending task to the tail of the buffer queue.
S303, removing the pending task from the target queue.
S304, when it is determined that the pending task is at the head of the buffer queue and an idle thread exists in the thread pool, processing the pending task by the idle thread.
The reconstructed service component controls the threads in the thread pool to process the tasks in the buffer queue in order. Optionally, when the reconstructed service component determines that an idle thread exists in the thread pool, it hands the task at the head of the buffer queue to the idle thread for processing and, at the same time, removes that task from the buffer queue.
When the reconstructed service component determines that the pending task is at the head of the buffer queue and an idle thread exists in the thread pool, it hands the pending task to the idle thread in the thread pool for processing.
S305, removing the pending task from the buffer queue.
After the pending task is handed to an idle thread in the thread pool for processing, the reconstructed service component removes the pending task from the buffer queue.
In the embodiment shown in Fig. 3, setting a buffer queue corresponding to the thread pool ensures that the threads in the thread pool process the tasks in every queue evenly.
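As an editorial sketch of the buffer-queue variant of Fig. 3 (again, all names are assumptions, and the startup step and the per-completion step of S302 are collapsed into a single "promote the head of a per-type queue" operation), the flow can be modelled as follows: at most one task per type sits in a shared FIFO buffer queue in front of the thread pool, an idle thread always takes the head of the buffer queue, and the completion of a task releases the next task of the same type into the buffer.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of S301-S305: a shared FIFO buffer queue in front of the thread pool
// keeps the pool threads evenly busy across all per-type queues.
public class BufferedTaskDispatcher {

    private final Map<TaskType, Deque<TypedTask>> queues = new HashMap<>();
    private final Deque<TypedTask> buffer = new ArrayDeque<>(); // created at application start
    private final ExecutorService pool;
    private final int poolSize;
    private int busyThreads = 0;

    public BufferedTaskDispatcher(int poolSize) {
        this.poolSize = poolSize;
        this.pool = Executors.newFixedThreadPool(poolSize);
        for (TaskType type : TaskType.values()) {
            queues.put(type, new ArrayDeque<>());
        }
    }

    /** Cache a new task at the tail of its target queue; promote it if it became the head. */
    public synchronized void submit(TypedTask task) {
        Deque<TypedTask> queue = queues.get(task.type());
        queue.addLast(task);
        if (queue.peekFirst() == task) {
            promoteHead(task.type()); // first task of this queue enters the buffer (S302/S303)
        }
        drainBuffer();
    }

    /** Move the head of a per-type queue into the buffer and remove it from that queue. */
    private void promoteHead(TaskType type) {
        TypedTask head = queues.get(type).pollFirst();
        if (head != null) {
            buffer.addLast(head);
        }
    }

    /** S304: while an idle thread exists, hand it the head of the buffer queue. */
    private void drainBuffer() {
        while (busyThreads < poolSize && !buffer.isEmpty()) {
            TypedTask next = buffer.pollFirst(); // S305: remove the task from the buffer
            busyThreads++;
            pool.execute(() -> {
                next.run();
                onTaskCompleted(next.type()); // the "processing completion message" of S301
            });
        }
    }

    private synchronized void onTaskCompleted(TaskType type) {
        busyThreads--;
        promoteHead(type); // S302: the next task of the same type enters the buffer
        drainBuffer();
    }
}
```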
Below, the technical solution shown in the above method embodiments is described in detail through a specific example with reference to the queue structure shown in Fig. 4. Fig. 4 is a schematic diagram of a queue structure provided by an embodiment of the present invention.
For example, assume the application program includes five pieces of business logic, denoted business logic 1 to business logic 5, and that the thread pool includes three threads, denoted thread 1 to thread 3.
After the application program starts on the terminal device, a queue corresponding to each piece of business logic is created in the memory of the terminal device, denoted queue 1 to queue 5; the buffer queue corresponding to the thread pool is also created in the memory of the terminal device and is denoted buffer queue 1.
While the application is running, it may instantiate business logic 1 to business logic 5. Assume that, after the application starts, instantiating business logic 1 to business logic 5 produces multiple tasks, each task is stored to its corresponding queue, and the tasks stored in each queue are as shown at 401 in Fig. 4.
Referring to 401, initially queue 1 includes task 11 to task 13, queue 2 includes task 21 to task 22, queue 3 includes task 31, and queue 5 includes task 51 to task 53; at this moment, the buffer queue does not include any task.
Assume the first task in queue 1 is task 11, the first task in queue 2 is task 21, the first task in queue 3 is task 31, and the first task in queue 5 is task 51. After the first task in each queue is determined, each first task is stored to buffer queue 1 and deleted from its corresponding queue; at this moment, the tasks stored in each queue are as shown at 402 in Fig. 4.
Referring to 402, the first tasks of queue 1 to queue 5 have been transferred to buffer queue 1.
The reconstructed service component hands task 11 to task 31 in the buffer queue to thread 1 to thread 3 for processing; at this moment, the tasks stored in each queue and the task currently being processed by each thread are as shown at 403 in Fig. 4.
After thread 1 finishes processing task 11, processing completion message 1 corresponding to task 11 is generated. After the reconstructed service component obtains processing completion message 1, it stores task 12 at the head of queue 1 to buffer queue 1 and deletes task 12 from queue 1. Meanwhile, because task 51 is at the head of buffer queue 1 and an idle thread (thread 1) exists in the thread pool, task 51 is handed to thread 1 for processing; at this moment, the tasks stored in each queue and the task currently being processed by each thread are as shown at 404 in Fig. 4.
The process continues in the same way until all tasks in queue 1 to queue 5 have been processed. It should be noted that new tasks may be added to queue 1 to queue 5 while their tasks are being processed; the processing of new tasks is similar to the above process and is not repeated here.
In the above process, the method of the prior art would need to create up to five threads running in parallel, whereas in this application at most three threads run in parallel, which avoids application exceptions caused by too many threads running in parallel.
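A rough usage sketch of the hypothetical BufferedTaskDispatcher above, mirroring the walkthrough (a small fixed pool shared by several task types): tasks of the same type run one after another, tasks of different types may occupy the remaining pool threads in parallel, and the pool never grows beyond its configured size.

```java
public class DispatcherDemo {
    public static void main(String[] args) {
        // Three pool threads, as in the walkthrough above; the per-type queues
        // are created inside the dispatcher.
        BufferedTaskDispatcher dispatcher = new BufferedTaskDispatcher(3);
        for (int i = 0; i < 4; i++) {
            final int n = i;
            dispatcher.submit(new TypedTask() {
                @Override public TaskType type() { return TaskType.REPORT_LOG; }
                @Override public void run() { System.out.println("log task " + n); }
            });
        }
        // The four REPORT_LOG tasks above run serially on at most one pool thread;
        // tasks of other types submitted here would run in parallel on the rest.
        // (The pool is left running in this sketch; a real component would shut it down.)
    }
}
```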
Fig. 5 is a first structural diagram of the task processing device provided by an embodiment of the present invention. Referring to Fig. 5, the device may include an obtaining module 11, a determining module 12, a storage module 13 and a processing module 14, wherein:
the obtaining module 11 is configured to obtain a pending task generated by the application program and the type of the pending task;
the determining module 12 is configured to determine a target queue in a queue set according to the type of the pending task, the queue set including multiple queues;
the storage module 13 is configured to cache the pending task to the target queue;
the processing module 14 is configured to process the pending task by a thread in a thread pool after determining that, in the target queue, the previous task of the pending task has been processed, the number of threads in the thread pool being smaller than the number of queues in the queue set.
The task processing device provided by this embodiment can execute the technical solutions shown in the above method embodiments; its implementation principle and beneficial effects are similar and are not repeated here.
Fig. 6 is a second structural diagram of the task processing device provided by an embodiment of the present invention. On the basis of the embodiment shown in Fig. 5, and referring to Fig. 6, the processing module 14 includes an obtaining unit 141 and a processing unit 142, wherein:
the obtaining unit 141 is configured to obtain a processing completion message corresponding to the previous task of the pending task in the target queue;
the processing unit 142 is configured to process the pending task by a thread in the thread pool according to the processing completion message.
In a possible embodiment, the processing unit 142 is specifically configured to:
store the pending task to a buffer queue corresponding to the thread pool according to the processing completion message;
when it is determined that the pending task is at the head of the buffer queue and an idle thread exists in the thread pool, process the pending task by the idle thread.
In another possible embodiment, the device further includes a creation module 15, wherein:
the creation module 15 is configured to create the buffer queue after determining that the application program has started, before the processing unit 142 stores the pending task to the buffer queue corresponding to the thread pool according to the processing completion message;
the storage module 13 is further configured to store the first task of each queue to the buffer queue when it is determined that a first task exists in a queue of the queue set.
In another possible embodiment, the storage module 13 is further configured to:
remove the pending task from the target queue after the processing unit 142 stores the pending task to the buffer queue corresponding to the thread pool according to the processing completion message.
In another possible embodiment, the storage module 13 is further configured to:
remove the pending task from the buffer queue after the processing unit 142 processes the pending task by the idle thread.
The task processing device provided by this embodiment can execute the technical solutions shown in the above method embodiments; its implementation principle and beneficial effects are similar and are not repeated here.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the embodiments of the present invention, not to limit them. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the embodiments of the present invention.

Claims (12)

  1. A task processing method, characterized by comprising:
    obtaining a pending task generated by the application program and a type of the pending task;
    determining a target queue in a queue set according to the type of the pending task, the queue set comprising multiple queues;
    caching the pending task to the target queue;
    after determining that, in the target queue, a previous task of the pending task has been processed, processing the pending task by a thread in a thread pool, a number of threads in the thread pool being smaller than a number of queues in the queue set.
  2. The method according to claim 1, characterized in that processing the pending task by a thread in the thread pool after determining that, in the target queue, the previous task of the pending task has been processed comprises:
    obtaining a processing completion message corresponding to the previous task of the pending task in the target queue;
    processing the pending task by a thread in the thread pool according to the processing completion message.
  3. The method according to claim 2, characterized in that processing the pending task by a thread in the thread pool according to the processing completion message comprises:
    storing the pending task to a buffer queue corresponding to the thread pool according to the processing completion message;
    when it is determined that the pending task is at a head of the buffer queue and an idle thread exists in the thread pool, processing the pending task by the idle thread.
  4. The method according to claim 3, characterized in that before storing the pending task to the buffer queue corresponding to the thread pool according to the processing completion message, the method further comprises:
    creating the buffer queue after determining that the application program has started;
    when it is determined that a first task exists in a queue of the queue set, storing the first task of each queue to the buffer queue.
  5. The method according to claim 3 or 4, characterized in that after storing the pending task to the buffer queue corresponding to the thread pool according to the processing completion message, the method further comprises:
    removing the pending task from the target queue.
  6. The method according to claim 3 or 4, characterized in that after processing the pending task by the idle thread, the method further comprises:
    removing the pending task from the buffer queue.
  7. A task processing device, characterized by comprising an obtaining module, a determining module, a storage module and a processing module, wherein:
    the obtaining module is configured to obtain a pending task generated by the application program and a type of the pending task;
    the determining module is configured to determine a target queue in a queue set according to the type of the pending task, the queue set comprising multiple queues;
    the storage module is configured to cache the pending task to the target queue;
    the processing module is configured to process the pending task by a thread in a thread pool after determining that, in the target queue, a previous task of the pending task has been processed, a number of threads in the thread pool being smaller than a number of queues in the queue set.
  8. The device according to claim 7, characterized in that the processing module comprises an obtaining unit and a processing unit, wherein:
    the obtaining unit is configured to obtain a processing completion message corresponding to the previous task of the pending task in the target queue;
    the processing unit is configured to process the pending task by a thread in the thread pool according to the processing completion message.
  9. The device according to claim 8, characterized in that the processing unit is specifically configured to:
    store the pending task to a buffer queue corresponding to the thread pool according to the processing completion message;
    when it is determined that the pending task is at a head of the buffer queue and an idle thread exists in the thread pool, process the pending task by the idle thread.
  10. The device according to claim 9, characterized in that the device further comprises a creation module, wherein:
    the creation module is configured to create the buffer queue after determining that the application program has started, before the processing unit stores the pending task to the buffer queue corresponding to the thread pool according to the processing completion message;
    the storage module is further configured to store the first task of each queue to the buffer queue when it is determined that a first task exists in a queue of the queue set.
  11. The device according to claim 8 or 9, characterized in that the storage module is further configured to:
    remove the pending task from the target queue after the processing unit stores the pending task to the buffer queue corresponding to the thread pool according to the processing completion message.
  12. The device according to claim 8 or 9, characterized in that the storage module is further configured to:
    remove the pending task from the buffer queue after the processing unit processes the pending task by the idle thread.
CN201710517218.8A 2017-06-29 2017-06-29 Task processing method and device Active CN107450971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710517218.8A CN107450971B (en) 2017-06-29 2017-06-29 Task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710517218.8A CN107450971B (en) 2017-06-29 2017-06-29 Task processing method and device

Publications (2)

Publication Number Publication Date
CN107450971A (en) 2017-12-08
CN107450971B CN107450971B (en) 2021-01-29

Family

ID=60488527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710517218.8A Active CN107450971B (en) 2017-06-29 2017-06-29 Task processing method and device

Country Status (1)

Country Link
CN (1) CN107450971B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702784A (en) * 2009-11-18 2010-05-05 上海市共进通信技术有限公司 Multitask communication system and method of optical access multiuser residential unit embedded device
CN102722417A (en) * 2012-06-07 2012-10-10 腾讯科技(深圳)有限公司 Distribution method and device for scan task
CN103166845A (en) * 2013-03-01 2013-06-19 华为技术有限公司 Data processing method and device
CN106325980A (en) * 2015-06-30 2017-01-11 中国石油化工股份有限公司 Multi-thread concurrent system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108228354A (en) * 2017-12-29 2018-06-29 杭州朗和科技有限公司 Dispatching method, system, computer equipment and medium
CN109343941A (en) * 2018-08-14 2019-02-15 阿里巴巴集团控股有限公司 Task processing method, device, electronic equipment and computer readable storage medium
CN109343941B (en) * 2018-08-14 2023-02-21 创新先进技术有限公司 Task processing method and device, electronic equipment and computer readable storage medium
WO2020134425A1 (en) * 2018-12-24 2020-07-02 深圳市中兴微电子技术有限公司 Data processing method, apparatus, and device, and storage medium
CN111381976B (en) * 2018-12-28 2023-08-04 广州市百果园信息技术有限公司 Method and device for updating message prompt data, storage medium and computer equipment
CN111381976A (en) * 2018-12-28 2020-07-07 广州市百果园信息技术有限公司 Message prompt data updating method and device, storage medium and computer equipment
CN110765167A (en) * 2019-10-23 2020-02-07 泰康保险集团股份有限公司 Policy data processing method, device and equipment
CN110928905A (en) * 2019-11-07 2020-03-27 泰康保险集团股份有限公司 Data processing method and device
CN110928905B (en) * 2019-11-07 2024-01-26 泰康保险集团股份有限公司 Data processing method and device
CN111190751B (en) * 2019-12-30 2023-12-08 广州酷狗计算机科技有限公司 Task processing method and device based on song list, computer equipment and storage medium
CN111190751A (en) * 2019-12-30 2020-05-22 广州酷狗计算机科技有限公司 Task processing method and device based on song list, computer equipment and storage medium
CN113313600B (en) * 2020-02-26 2024-05-17 京东科技控股股份有限公司 Message processing method, device and system, storage medium and electronic device
CN113313600A (en) * 2020-02-26 2021-08-27 京东数字科技控股股份有限公司 Message processing method, device and system, storage medium and electronic device
CN111736976B (en) * 2020-06-30 2023-08-15 中国工商银行股份有限公司 Task processing method, device, computing equipment and medium
CN111736976A (en) * 2020-06-30 2020-10-02 中国工商银行股份有限公司 Task processing method and device, computing equipment and medium
CN111858046A (en) * 2020-07-13 2020-10-30 海尔优家智能科技(北京)有限公司 Service request processing method and device, storage medium and electronic device
CN111858046B (en) * 2020-07-13 2024-05-24 海尔优家智能科技(北京)有限公司 Service request processing method and device, storage medium and electronic device
CN112817745A (en) * 2021-01-14 2021-05-18 内蒙古蒙商消费金融股份有限公司 Task processing method and device
CN112711490A (en) * 2021-03-26 2021-04-27 统信软件技术有限公司 Message processing method, computing device and storage medium
CN113905273A (en) * 2021-09-29 2022-01-07 上海阵量智能科技有限公司 Task execution method and device
CN113905273B (en) * 2021-09-29 2024-05-17 上海阵量智能科技有限公司 Task execution method and device

Also Published As

Publication number Publication date
CN107450971B (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN107450971A (en) Task processing method and device
KR102333341B1 (en) Exception handling in microprocessor systems
CN109409513B (en) Task processing method based on neural network and related equipment
CN110019133B (en) Data online migration method and device
US9396028B2 (en) Scheduling workloads and making provision decisions of computer resources in a computing environment
US20090300629A1 (en) Scheduling of Multiple Tasks in a System Including Multiple Computing Elements
CN104252405A (en) Log information output method and device
US20200019854A1 (en) Method of accelerating execution of machine learning based application tasks in a computing device
US10031773B2 (en) Method to communicate task context information and device therefor
CN111190741B (en) Scheduling method, equipment and storage medium based on deep learning node calculation
WO2018204032A1 (en) Conditional debugging of server-side production code
GB2479638A (en) Generating persistent sessions in a graphical interface for managing communication sessions
CN105700956A (en) Distributed job processing method and system
CN110083533A (en) Data processing method and device based on Mock service
CN111352896B (en) Artificial intelligence accelerator, equipment, chip and data processing method
US10318456B2 (en) Validation of correctness of interrupt triggers and delivery
CN105224410A (en) A kind of GPU of scheduling carries out method and the device of batch computing
CN111124685A (en) Big data processing method and device, electronic equipment and storage medium
CN109976725A (en) A kind of process program development approach and device based on lightweight flow engine
WO2021047118A1 (en) Image processing method, device and system
CN103391225B (en) Futures and the parallel automatization test system of securities industry test case
CN112241289A (en) Text data processing method and electronic equipment
CN108874556A (en) A kind of data interactive method, device, storage medium and mobile terminal
US11307974B2 (en) Horizontally scalable distributed system for automated firmware testing and method thereof
CN108062224A (en) Data read-write method, device and computing device based on file handle

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant