CN113064705A - Thread pool capacity expansion method, device, server, medium and product - Google Patents

Thread pool capacity expansion method, device, server, medium and product

Info

Publication number: CN113064705A (application CN202110291710.4A)
Authority: CN (China)
Prior art keywords: thread, tasks, target, queue, executed
Legal status: Granted, currently Active (status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113064705B
Inventor: 邵帅
Current and original assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110291710.4A; publication of CN113064705A; application granted; publication of CN113064705B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/5011: Pool

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure provides a thread pool capacity expansion method, apparatus, server, medium, and product. If a first thread pool contains a target first thread whose number of tasks to be executed is greater than or equal to a first threshold, the target queue corresponding to that target first thread may store a plurality of tasks with the same attribute identification whose execution time would be long, so the first thread pool needs to be expanded. The present disclosure does not expand the first thread pool as a whole; instead, it expands the target queue corresponding to the target first thread by obtaining at least one second thread contained in a second thread pool. Because the second thread and the target first thread take turns fetching tasks from the target queue and then process them, that is, they process tasks in parallel, the tasks stored in the target queue are processed quickly, which shortens the execution time of the plurality of tasks with the same attribute identification stored in the target queue.

Description

Thread pool capacity expansion method, device, server, medium and product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a server, a medium, and a product for thread pool expansion.
Background
An affinity thread pool comprises a plurality of threads, and different threads correspond to different queues. A plurality of tasks with the same attribute identification are stored in the same queue, and the tasks stored in different queues have different attribute identifications. Each thread takes tasks out of its corresponding queue and processes them. The numbers of tasks stored in different queues may differ greatly, and for a queue storing a large number of tasks, the execution time of the tasks with the same attribute identification stored in that queue can be long.
Therefore, how to shorten the execution time of multiple tasks with the same attribute identification is an urgent problem for those skilled in the art.
Disclosure of Invention
The present disclosure provides a thread pool capacity expansion method, apparatus, server, medium, and product, to at least solve the problem in the related art of how to reduce the execution time of a plurality of tasks having the same attribute identifier. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for thread pool expansion is provided, including:
acquiring the number of to-be-executed tasks respectively corresponding to a plurality of first threads contained in a first thread pool, wherein the number of the to-be-executed tasks corresponding to the first threads is the number of the tasks stored in a queue corresponding to the first threads;
determining a target first thread of which the corresponding number of the tasks to be executed is greater than or equal to a first threshold value from a plurality of first threads;
acquiring at least one second thread contained in a second thread pool, wherein the first thread pool is different from the second thread pool;
fetching tasks from the queue corresponding to the target first thread through the at least one second thread and the target first thread in turn;
and outputting, through the at least one second thread and the target first thread, results corresponding to the tasks each of them processed, in the order in which the tasks are fetched from the queue corresponding to the target first thread.
With reference to the first aspect, in a first possible implementation manner, the determining, from among the plurality of first threads, a target first thread whose corresponding number of tasks to be executed is greater than or equal to a first threshold includes:
determining a first minimum number of tasks to be executed from the number of tasks to be executed respectively corresponding to the plurality of first threads;
determining the first threshold based on the first minimum number of tasks to be performed;
and determining a target first thread of which the corresponding number of the tasks to be executed is greater than or equal to the first threshold and greater than or equal to a second threshold from a plurality of first threads.
With reference to the first aspect, in a second possible implementation manner, the obtaining at least one second thread included in a second thread pool includes:
controlling the second thread pool to create the at least one second thread; or
obtaining the at least one second thread, already created and in an idle state, from the second thread pool.
With reference to the first aspect, in a third possible implementation manner, after the obtaining at least one second thread included in the second thread pool, the method further includes:
removing the at least one second thread from the second thread pool; or
switching the thread state of the at least one second thread contained in the second thread pool from an idle state to a non-idle state.
With reference to the first aspect, in a fourth possible implementation manner, the method further includes:
acquiring the number of tasks to be executed corresponding to the target first thread at the current time;
and if the number of the tasks to be executed corresponding to the target first thread at the current time is less than or equal to a third threshold value, releasing the at least one second thread, so that the at least one second thread cannot obtain the tasks from the queue corresponding to the target first thread.
With reference to the first aspect, in a fifth possible implementation manner, if the number of to-be-executed tasks corresponding to the target first thread at the current time is less than or equal to a third threshold, the releasing the at least one second thread includes:
acquiring the third threshold, wherein the third threshold is a numerical value obtained based on the number of to-be-executed tasks respectively corresponding to the current time of the first threads or the third threshold is a preset fourth threshold;
and if the number of the tasks to be executed corresponding to the target first thread in the current time is less than or equal to the third threshold value, releasing the at least one second thread.
With reference to the first aspect, in a sixth possible implementation manner, after the step of releasing the at least one second thread, the method further includes:
destroying the at least one second thread; or
adding the at least one second thread to the second thread pool; or
switching the thread state of the at least one second thread contained in the second thread pool from a non-idle state to an idle state.
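For illustration only, the following Java sketch shows one way the release path described in the fourth to sixth implementation manners could look. The class, fields, and the threshold value are assumptions made for the example, not identifiers from this disclosure.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hedged sketch of the release path: once the target first thread's backlog is
// at or below the third threshold, the helper second threads are told to stop
// fetching from the target queue. Whether a released helper is then destroyed,
// returned to the second thread pool, or merely flagged idle is the choice
// described in the sixth implementation manner. All names are illustrative.
public class HelperReleaser {
    static final int THIRD_THRESHOLD = 10; // assumed preset value (the "fourth threshold" case)

    /** Each helper loops fetching tasks while its released flag is false. */
    public static class Helper {
        final AtomicBoolean released = new AtomicBoolean(false);
    }

    static void maybeRelease(BlockingQueue<Runnable> targetQueue, List<Helper> helpers) {
        if (targetQueue.size() <= THIRD_THRESHOLD) {
            for (Helper h : helpers) {
                h.released.set(true); // helper can no longer obtain tasks from the target queue
            }
        }
    }
}
```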
According to a second aspect of the embodiments of the present disclosure, there is provided a thread pool capacity expansion apparatus, including:
a first obtaining module configured to obtain the number of tasks to be executed corresponding to each of a plurality of first threads contained in a first thread pool, wherein the number of tasks to be executed corresponding to a first thread is the number of tasks stored in the queue corresponding to that first thread;
a first determining module configured to determine, from the plurality of first threads, a corresponding target first thread for which the number of tasks to be executed is greater than or equal to a first threshold;
a second obtaining module configured to obtain at least one second thread included in a second thread pool, the first thread pool being different from the second thread pool;
a first control module configured to fetch tasks from the queue corresponding to the target first thread through the at least one second thread and the target first thread in turn;
and a second control module configured to output, through the at least one second thread and the target first thread, results corresponding to the tasks each of them processed, in the order in which the tasks are fetched from the queue corresponding to the target first thread.
With reference to the second aspect, in a first possible implementation manner, the first determining module includes:
a first determining unit configured to determine a first minimum number of tasks to be executed from the number of tasks to be executed respectively corresponding to the plurality of first threads;
a second determination unit configured to determine the first threshold based on the first minimum number of tasks to be executed;
a third determining unit configured to determine, from the plurality of first threads, a target first thread for which the corresponding number of tasks to be executed is greater than or equal to the first threshold and greater than or equal to a second threshold.
With reference to the second aspect, in a second possible implementation manner, the second obtaining module includes:
a first control unit configured to control the second thread pool to create the at least one second thread; or
a first obtaining unit configured to obtain the at least one second thread, already created and in an idle state, from the second thread pool.
With reference to the second aspect, in a third possible implementation manner, the method further includes:
a removal module configured to remove the at least one second thread from the second thread pool; or
a first state switching module configured to switch the thread state of the at least one second thread included in the second thread pool from an idle state to a non-idle state.
With reference to the second aspect, in a fourth possible implementation manner, the method further includes:
the third acquisition module is configured to acquire the number of tasks to be executed corresponding to the target first thread at the current time;
a releasing module configured to release the at least one second thread if the number of to-be-executed tasks corresponding to the target first thread at the current time is less than or equal to a third threshold, so that the at least one second thread cannot obtain tasks from a queue corresponding to the target first thread.
With reference to the second aspect, in a fifth possible implementation manner, the releasing module includes:
a second obtaining unit, configured to obtain the third threshold, where the third threshold is a numerical value obtained based on the number of to-be-executed tasks respectively corresponding to the current time of the plurality of first threads, or the third threshold is a preset fourth threshold;
a releasing unit configured to release the at least one second thread if the number of tasks to be executed corresponding to the target first thread at the current time is less than or equal to the third threshold.
With reference to the second aspect, in a sixth possible implementation manner, the method further includes:
a destruction module configured to destroy the at least one second thread; or
an adding module configured to add the at least one second thread to the second thread pool; or
a second state switching module configured to switch the thread state of the at least one second thread included in the second thread pool from a non-idle state to an idle state.
According to a third aspect of the embodiments of the present disclosure, there is provided a server, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the thread pool capacity expansion method according to the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of a server, enable the server to perform the thread pool capacity expansion method according to the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product that can be loaded directly into an internal memory of a computer, for example, the memory included in the server according to the third aspect, and that contains software code; when loaded and executed by the computer, the program product implements the thread pool capacity expansion method according to the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
In the thread pool capacity expansion method provided by the embodiments of the present disclosure, the number of tasks to be executed corresponding to each of a plurality of first threads contained in a first thread pool is obtained, where the number of tasks to be executed corresponding to a first thread is the number of tasks stored in the queue corresponding to that first thread. If the plurality of first threads include a target first thread whose number of tasks to be executed is greater than or equal to a first threshold, the execution time of the plurality of tasks with the same attribute identification stored in the target queue corresponding to the target first thread may be long, so the first thread pool needs to be expanded. In the embodiments of the present disclosure, when the first thread pool needs to be expanded, the first thread pool is not expanded as a whole; instead, the target queue corresponding to the target first thread is expanded, that is, at least one second thread contained in a second thread pool is obtained. Since the at least one second thread and the target first thread take turns fetching tasks from the target queue and then process them, that is, they process tasks in parallel, the tasks stored in the target queue are processed quickly, which shortens the execution time of the plurality of tasks with the same attribute identification stored in the target queue. The at least one second thread and the target first thread fetch tasks from the target queue in turn and output the results corresponding to the tasks they processed in the order in which the tasks were fetched from the target queue. Therefore, the first thread pool can output the results corresponding to tasks with the same attribute identification in the order in which those tasks were input into the first thread pool, that is, the first thread pool still has "affinity". Further, since the second thread pool is different from the first thread pool, the total number of first threads contained in the first thread pool does not change, and therefore the tasks that have already been allocated to queues do not need to be reallocated.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a diagram illustrating one implementation of a process for allocating a queue to a task by an affinity thread pool in accordance with an illustrative embodiment;
FIG. 2 is a schematic diagram illustrating a hardware environment to which embodiments of the present disclosure relate, according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method for thread pool expansion in accordance with an illustrative embodiment;
FIG. 4 is a diagram illustrating a process by which multiple threads take turns fetching tasks from a target queue in accordance with an illustrative embodiment;
FIG. 5 is a block diagram illustrating a thread pool capacity expansion apparatus in accordance with an illustrative embodiment;
FIG. 6 is a block diagram illustrating an apparatus 600 for a server according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The embodiment of the disclosure provides a thread pool capacity expansion method, a thread pool capacity expansion device, a server and a storage medium, and before explaining the technology provided by the embodiment of the disclosure, the affinity thread pool and the related technology related to the embodiment of the disclosure are explained first.
First, the affinity thread pool according to the embodiment of the present disclosure will be described.
The affinity thread pool has affinity, that is, the results corresponding to the tasks with the same attribute identification can be sequentially output according to the sequence of inputting the tasks with the same attribute identification into the affinity thread pool.
There are many application scenarios that require affinity thread pools for processing, as will be illustrated below.
Consider an application scenario in which an ordered list of the viewers of a certain video must be obtained. Each time a user performs the operation of viewing the video through a client, a task is sent to the affinity thread pool of the server. If a plurality of users view the video through their clients, the affinity thread pool of the server receives a plurality of tasks; the order in which the affinity thread pool receives them is the order in which the users viewed the video, and the tasks carry the same attribute identifier, for example, they all include the video ID of that video. Since the ordered viewer list is needed, the affinity thread pool must output the results corresponding to the tasks with the same video ID in the order in which those tasks were received, so that the ordered viewer list is obtained.
The following describes related art related to the embodiments of the present disclosure with reference to the above description of affinity thread pools.
The affinity thread pool comprises a plurality of threads, different threads correspond to different queues, and each thread can take out tasks from the corresponding queue and process the tasks. The tasks stored in the queue are allocated by the affinity thread pool, and the process of allocating the queue to the task by the affinity thread pool is described below with reference to the drawings.
For each task, after the affinity thread pool receives the task, a queue ID is determined based on the attribute identifier included in the task and the total number of the plurality of threads included in the affinity thread pool, and the task is stored in the queue with the queue ID.
FIG. 1 is a diagram illustrating one implementation of a process for allocating a queue for a task by an affinity thread pool in accordance with an illustrative embodiment.
It is assumed that the affinity thread pool includes N threads, and each thread corresponds to a queue, that is, the affinity thread pool includes N queues, where N is any positive integer greater than or equal to 1. The queue IDs of the N queues are: queue ID1, queue ID2, …, queue IDN.
Of the N queues, each queue may store zero, one, or more tasks. Fig. 1 shows each queue storing at least 3 tasks: for example, the queue with queue ID1 stores task 11, task 12, task 13, …; the queue with queue ID2 stores task 21, task 22, task 23, …; and the queue with queue IDN stores task N1, task N2, task N3, …. Fig. 1 is only an example, and the number of tasks stored in each queue is not limited here.
Illustratively, the process of determining the queue ID based on the attribute identifier contained in the task and the total number N of threads contained in the affinity thread pool comprises: performing a hash operation on the attribute identifier contained in the task to obtain a first value, taking the first value modulo the total number N of threads contained in the affinity thread pool to obtain a second value, and obtaining the queue ID corresponding to the second value from a preset relationship table between values and queue IDs.
The modulo operation is illustrated in fig. 1, where the following formula is used to obtain the second value for task 20, the task awaiting queue assignment: second value = hash(attribute identification) mod N.
For example, in the above-mentioned relationship table, one value may correspond to one queue ID, but one queue ID may correspond to one or more values. Both value 1 and value 3 in the relationship table 10 shown in fig. 1 correspond to queue ID 1.
Illustratively, the queue ID corresponding to the second value can be obtained from the relationship table 10 shown in fig. 1; assuming the second value corresponds to queue IDN, the task 20 is stored in the queue with queue IDN.
Because the second values corresponding to the tasks with the same attribute identification are equal, the tasks with the same attribute identification can be allocated to the same queue, that is, the tasks with the same attribute identification stored in the queue can be sequentially executed by one thread, so that the purpose of outputting the results corresponding to the tasks respectively according to the sequence of receiving the tasks with the same attribute identification is achieved.
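To make the mapping concrete, here is a minimal Java sketch of the hash-mod-N assignment described above (Java is assumed because the JDK thread pool factory methods are cited later in this description). The class and method names are illustrative, not from the disclosure.

```java
import java.util.List;
import java.util.Queue;

// Minimal sketch: hash the attribute identifier, take it modulo the number of
// queues N, and append the task to the selected FIFO queue.
public class AffinityDispatcher {
    private final List<Queue<Runnable>> queues; // one FIFO queue per first thread

    public AffinityDispatcher(List<Queue<Runnable>> queues) {
        this.queues = queues;
    }

    /** Tasks with the same attribute identifier always land in the same queue,
     *  because hash(attributeId) mod N is identical for all of them. */
    public void dispatch(String attributeId, Runnable task) {
        // Math.floorMod keeps the index non-negative even for negative hash codes.
        int queueIndex = Math.floorMod(attributeId.hashCode(), queues.size());
        queues.get(queueIndex).add(task);
    }
}
```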
For example, the second values corresponding to the tasks with different attribute identifications may also be equal, that is, a plurality of tasks with a plurality of different attribute identifications may be stored in one queue. For example, the queue ID1 stores therein 10 tasks having the attribute identification a and 2 tasks having the attribute identification B.
In the related art related to the embodiments of the present disclosure, the number of tasks stored in different queues may differ greatly, and the reason why the number of tasks stored in a queue is large may be as follows.
The first reason is that the queue stores a plurality of task sets, where one task set includes one or more tasks with the same attribute identifier and different task sets have different attribute identifiers; each task set includes only a few tasks, but the number of task sets stored in the queue is large, so the number of tasks stored in the queue is large.
The second reason is that the queue stores only a few task sets, but at least one task set includes a large number of tasks with the same attribute identification, so the number of tasks stored in the queue is large.
The third reason is that the queue stores a large number of task sets and at least one task set includes a large number of tasks with the same attribute identification, so the number of tasks stored in the queue is large.
Illustratively, the queue is a first-in-first-out (FIFO) linear table: tasks are taken out from the front of the queue, and new tasks are inserted at the back of the queue. For a queue, the order of the tasks it stores (from the front end to the back end) is the order in which those tasks were input into the affinity thread pool.
For example, between the times at which the tasks of one task set are input into the affinity thread pool, one or more tasks of other task sets may be input into the affinity thread pool. As a result, the tasks of the same task set are not stored contiguously in the queue; that is, the tasks of different task sets may be interleaved in the queue. In fig. 1, for example, task 11 and task 12 stored in the queue with queue ID1 may have different attribute identifications.
For example, the execution time of a plurality of tasks having the same attribute identifier refers to the absolute value of the difference between the time when those tasks are input into the affinity thread pool and the time when the affinity thread pool outputs the results corresponding to them.
If the number of tasks stored in one queue is larger, the execution time of the corresponding thread for a plurality of tasks stored in the queue and having the same attribute identifier may be longer, and the following description will be made with reference to the above three cases that the number of tasks stored in the queue is larger.
For the case in which the queue stores many tasks due to the first reason: since each task set includes only a few tasks, a larger number of stored tasks implies a larger number of task sets stored in the queue. Tasks with the same attribute identification may be input into the affinity thread pool with time intervals between them, and the more task sets the queue stores, the more tasks with other attribute identifications may be input into the affinity thread pool during those intervals. The positions at which tasks with the same attribute identification are stored in the queue may therefore be far apart, and the execution time of those tasks may be long.
It is assumed that one task set includes the task 11 and the task 100, and the larger the interval between the positions where the task 11 and the task 100 are stored in the queue, the longer the execution time for the task set including the task 11 and the task 100.
For the case in which the queue stores many tasks due to the second reason: few task sets combined with many stored tasks means that at least one task set includes many tasks with the same attribute identifier. Since tasks with the same attribute identification may not be stored contiguously in the queue, and since there are many of them, the execution time of those tasks is long.
For the case that the number of the tasks stored in the queue is large due to the third reason, the above description of the first reason and the second reason may be combined, and will not be repeated here.
In order to reduce the execution time of a plurality of tasks with the same attribute identifier, the affinity thread pool needs to be expanded.
After the affinity thread pool is expanded, the total number of threads it contains increases, that is, the total number of queues it contains increases, and the number of task sets contained in each queue decreases. The position intervals between tasks with the same attribute identification stored in each queue therefore become small, and tasks with the same attribute identification may even be stored contiguously in the queue, so the execution time of the plurality of tasks with the same attribute identification decreases.
After the affinity thread pool is expanded, because the total number of threads it contains changes, queues must be reallocated for the tasks that were allocated before the expansion but not yet processed; otherwise, tasks with the same attribute identification may no longer be stored in one queue, and the affinity thread pool can no longer output the results corresponding to tasks with the same attribute identification in the order in which those tasks were input into it. An example follows.
For example, before expansion the total number of threads in the affinity thread pool is a first number; the pool has received 20 tasks with attribute identification A, and queue ID1 is determined for them based on attribute identification A and the first number. Suppose that after expansion the total number of threads is a second number (greater than the first number) and the 20 tasks with attribute identification A have not yet been processed. If those 20 tasks are not reallocated and the affinity thread pool then receives 5 more tasks with attribute identification A, queue ID2 is determined for the new tasks based on attribute identification A and the second number. The 25 tasks with attribute identification A are then not stored in the same queue and are not processed by a single thread. Assuming queue ID1 corresponds to thread A and queue ID2 corresponds to thread B, thread B may finish the 5 tasks with attribute identification A stored in the queue with queue ID2 before thread A has finished the 20 tasks with attribute identification A stored in the queue with queue ID1; that is, the affinity thread pool does not output the results of the 25 tasks with attribute identification A in the order in which they were input into the affinity thread pool.
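The effect can be reproduced with a few lines of Java; the thread counts 8 and 12 below are assumptions chosen only to show that the same hash usually maps to a different queue once N changes.

```java
// Why whole-pool expansion forces reallocation: the same attribute hash maps
// to a different queue index once the thread count N changes.
public class ReallocationDemo {
    public static void main(String[] args) {
        int hash = "attributeA".hashCode();
        int before = Math.floorMod(hash, 8);  // first number of threads (assumed 8)
        int after = Math.floorMod(hash, 12);  // second number after expansion (assumed 12)
        System.out.println("queue index before expansion: " + before);
        System.out.println("queue index after expansion:  " + after);
        // In general before != after, so tasks queued before the expansion and
        // tasks arriving after it end up in different queues unless the old
        // tasks are re-queued.
    }
}
```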
In summary, in the related art, when the affinity thread pool is expanded, the tasks that have already been allocated to queues must be reallocated to queues, and if a preset relationship table between values and queue IDs exists, the table must be updated, which adds extra overhead.
Expanding the affinity thread pool also takes time. During the expansion, if the affinity thread pool receives one or more tasks, it cannot determine which queues those tasks should be allocated to, so special processing is required. For example, a queue ID may be determined based on the attribute identifier contained in the task and the pre-expansion total number of threads, the task stored in the queue with that queue ID, and the queue reallocated for the task after the expansion finishes; alternatively, the tasks received during the expansion may be stored in a separate storage space and allocated to queues after the expansion finishes.
In summary, when the affinity thread pool is expanded in the related art, special processing is needed for the one or more tasks received during the expansion, which further increases the overhead.
Based on this, an embodiment of the present disclosure provides a thread pool capacity expansion method. When a first thread pool (in the embodiments of the present disclosure, the affinity thread pool is called the first thread pool) needs to be expanded, it is not the whole first thread pool that is expanded but the target queue with a large number of stored tasks; that is, at least one second thread contained in a second thread pool is obtained, where the second thread pool is different from the first thread pool. The total number of first threads contained in the first thread pool (in the embodiments of the present disclosure, a thread contained in the first thread pool is called a first thread) therefore does not change, so the tasks already allocated to queues do not need to be reallocated. There is also no expansion period during which tasks arrive, so no special processing is required for tasks received during an expansion.
In the embodiments of the present disclosure, the "at least one second thread" and the target first thread corresponding to the target queue all take turns fetching tasks from the target queue. The "at least one second thread" is "bound" to the target queue, and the tasks it processes are not allocated to it by "determining a queue ID based on the attribute identifier contained in the task and the total number N of threads contained in the affinity thread pool", so the relationship table does not need to be changed.
After the at least one second thread and the target first thread fetch tasks from the target queue, they process those tasks independently of one another, that is, multiple tasks are processed in parallel, which reduces the execution time of the plurality of tasks with the same attribute identification stored in the target queue.
In the embodiments of the present disclosure, the at least one second thread and the target first thread output the results corresponding to the tasks they processed in the order in which the tasks were fetched from the target queue. Therefore, the first thread pool can still output the results corresponding to tasks with the same attribute identification in the order in which those tasks were input into the first thread pool, and the first thread pool retains its affinity.
The following describes a hardware environment related to embodiments of the present disclosure.
Fig. 2 is a schematic diagram illustrating a hardware environment according to an exemplary embodiment of the present disclosure, the hardware environment including at least one electronic device 21 and a server 22.
Any electronic device 21 may be any electronic product capable of interacting with a user through one or more modes of a keyboard, a touch pad, a touch screen, a remote controller, a voice interaction device, a handwriting device, and the like, for example, a mobile phone, a tablet computer, a palm computer, a personal computer, a wearable device, a smart television, and the like.
The server 22 may be, for example, one server, a server cluster composed of a plurality of servers, or a cloud computing service center.
Illustratively, the electronic device 21 may establish a connection and communicate with the server 22 through a wired network or a wireless network.
It should be noted that fig. 2 is only an example, and 3 electronic devices 21 and one server 22 are shown in fig. 2. The number of the electronic devices 21 and the number of the servers 22 may be determined based on actual situations, and the number of the electronic devices 21 and the servers 22 is not limited in the embodiments of the present disclosure.
Illustratively, at least one electronic device 21 is used to generate one or more tasks.
The server 22 is configured to receive one or more tasks sent by at least one electronic device 21, and implement the thread pool expansion method mentioned in the embodiments of the present disclosure to process the one or more tasks.
It will be understood by those skilled in the art that the foregoing electronic devices and servers are merely exemplary, and that other existing or future electronic devices or servers, where applicable to the present disclosure, are also intended to fall within the scope of protection of the present disclosure and are hereby incorporated by reference.
The following describes a thread pool capacity expansion method provided by the embodiment of the present disclosure with reference to the drawings.
Fig. 3 is a flowchart illustrating a thread pool capacity expansion method, which may be applied to the server shown in fig. 2, according to an exemplary embodiment, and which includes the following steps S31 to S35 in implementation.
In step S31, the number of tasks to be executed corresponding to each of the first threads included in the first thread pool is obtained.
The number of the tasks to be executed corresponding to the first thread is the number of the tasks stored in the queue corresponding to the first thread.
Illustratively, the queues for different first threads are different.
In step S32, a target first thread, of which the corresponding number of tasks to be executed is greater than or equal to a first threshold, is determined from the plurality of first threads.
In step S33, at least one second thread included in a second thread pool is obtained, the first thread pool being different from the second thread pool.
In step S34, a task is fetched from the target queue by the at least one second thread taking turns with the target first thread.
Illustratively, the target queue is used to store tasks to be processed by the target first thread.
In step S35, the at least one second thread and the target first thread output the results corresponding to the tasks they processed, in the order in which the tasks were fetched from the target queue.
In an alternative implementation, the above steps S31 to S35 may be performed by a third thread created by the server, where the third thread does not belong to the first thread pool or the second thread pool.
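As a concrete illustration of steps S31 and S32, the following Java sketch scans the queues of the first threads and returns the indexes of the target first threads. The fixed threshold value and all names are assumptions made for the example; as described below, the disclosure also allows a threshold derived from the queue lengths themselves.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Hedged sketch of S31 (read each first thread's pending-task count) and
// S32 (pick the threads whose count reaches the first threshold).
public class TargetThreadDetector {
    static final int FIRST_THRESHOLD = 100; // assumed fixed value for illustration

    static List<Integer> findTargetThreads(List<BlockingQueue<Runnable>> queues) {
        List<Integer> targets = new ArrayList<>();
        for (int i = 0; i < queues.size(); i++) {
            if (queues.get(i).size() >= FIRST_THRESHOLD) { // S31 + S32
                targets.add(i);
            }
        }
        return targets;
    }
}
```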
In an optional implementation manner, the first threshold is the same for a plurality of first threads, that is, in the process of determining whether each first thread is a target first thread, the number of tasks to be executed corresponding to each first thread is compared with the same first threshold. For example, the first threshold may be a fixed value, or the first threshold is related to the number of tasks to be executed corresponding to each of the plurality of first threads.
In an optional implementation manner, the first threshold may be different for a plurality of first threads, that is, in the process of determining whether each first thread is a target first thread, the number of tasks to be executed corresponding to the first thread is compared with the corresponding first threshold. Illustratively, the first threshold corresponding to each first thread is a preset multiple of the maximum number of tasks that can be stored in the queue corresponding to the first thread, and the preset multiple is any value greater than 0 and less than or equal to 1, for example, 0.8. The maximum number of tasks which can be stored in the queues corresponding to the first threads is different, and the first threshold values corresponding to the maximum number of tasks are different.
The first thread pool and the second thread pool are explained below.
Illustratively, methods of creating a thread pool include, but are not limited to, the following four: newFixedThreadPool(), newSingleThreadExecutor(), newCachedThreadPool(), newScheduledThreadPool().
Illustratively, the first thread pool and the second thread pool may be created by any of the above four methods.
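By way of illustration, the four JDK Executors factory methods named above can be used as follows; the pool sizes are arbitrary assumptions, not values from the disclosure.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// The four Executors factory methods listed above, shown with assumed sizes.
public class PoolFactories {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(8);      // fixed number of threads
        ExecutorService single = Executors.newSingleThreadExecutor(); // a single worker thread
        ExecutorService cached = Executors.newCachedThreadPool();     // grows and shrinks on demand
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(4); // delayed/periodic tasks

        fixed.shutdown();
        single.shutdown();
        cached.shutdown();
        scheduled.shutdown();
    }
}
```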
Various attributes of different thread pools may differ, for example, the total number of threads contained by different thread pools may differ, the number of queues contained by different thread pools may differ, and the types of queues contained by different thread pools may differ.
In an alternative implementation, each first thread included in the first thread pool has a queue, and each second thread included in the second thread pool has no queue, that is, the second thread in the second thread pool does not actively acquire and process the task.
In an optional implementation manner, the second thread located in the second thread pool may passively acquire and process the task, for example, the embodiment of the present disclosure may control at least one second thread to passively fetch and process the task from the target queue that needs capacity expansion.
Illustratively, the queue is a first-in-first-out (FIFO) linear table. After the at least one second thread or the target first thread takes a task out of the target queue in its turn, the fetched task is deleted from the queue.
The following describes "the at least one second thread takes a task from the target queue in turn with the target first thread" mentioned in step S34.
Taking turns means that the at least one second thread and the target first thread repeatedly fetch tasks one after another in a fixed order.
The above-mentioned "order" is explained below, and the process of determining the "order" involves, but is not limited to, the following two cases.
In the first case, before the at least one second thread is controlled to fetch a task from the target queue for the first time, the target first thread has already fetched a task from the target queue and is still processing it.
In the first case, the implementation of step S34 includes the following step: given that, before the at least one second thread is controlled to fetch tasks from the target queue for the first time, the target first thread has fetched a task from the target queue and has not finished processing it, the at least one second thread and the target first thread are controlled to fetch tasks from the target queue in sequence according to a first order.
The first order consists of the target first thread first, followed by the at least one second thread in their sorted order.
For example, the at least one second thread may be randomly ordered, or ordered according to the thread identifier of the at least one second thread or the time when the second thread is created, so as to obtain the ordered result of the at least one second thread.
In the first case, the first position in the "order" is the target first thread, since the target first thread is the first to fetch tasks from the target queue relative to the at least one second thread.
When the at least one second thread and the target first thread are controlled to fetch tasks in the first order for the first time, the target first thread has already fetched its task from the target queue, so it suffices to control the at least one second thread to fetch tasks from the target queue in sequence.
In the second case, before the at least one second thread is controlled to fetch a task from the target queue for the first time, the target first thread has just finished processing the task it last fetched and currently holds no task.
In the second case, the implementation of step S34 includes the following step: given that, before the at least one second thread is controlled to fetch tasks from the target queue for the first time, the target first thread has finished processing the task it last fetched and has not currently fetched a task, the at least one second thread and the target first thread are controlled to fetch tasks from the target queue in sequence according to a second order.
For example, the at least one second thread and the target first thread may be ordered randomly to obtain the second order, or ordered according to their thread identifiers or creation times to obtain the second order.
It can be understood that a thread (whether a first thread or a second thread) earns the right to fetch a new task from the target queue only after it has finished processing the task it previously obtained. If it is a thread's turn to fetch a task from the target queue but that thread has not yet finished the task it fetched last time, the rotation waits for the thread to finish that task, controls it to fetch a new task from the target queue, and only then lets the next thread take its turn.
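A minimal Java sketch of this turn-taking rule follows. The slot numbering, lock, and class names are assumptions made for the example; the point is only that a thread may remove a task from the target queue when, and only when, the rotating turn counter points at its slot.

```java
import java.util.concurrent.BlockingQueue;

// Sketch of the rotation: the target first thread and its helper second
// threads hold fixed slots 0..n-1 and may only remove a task from the shared
// target queue when the turn counter points at their slot.
public class TurnTakingFetcher {
    private final Object lock = new Object();
    private int turn = 0;           // whose slot may fetch next
    private final int participants; // target first thread + its second threads

    public TurnTakingFetcher(int participants) {
        this.participants = participants;
    }

    /** Blocks until slot `mySlot` holds the turn, then removes the task at the
     *  front of the FIFO target queue and passes the turn to the next slot. */
    public Runnable fetch(int mySlot, BlockingQueue<Runnable> targetQueue)
            throws InterruptedException {
        synchronized (lock) {
            while (turn != mySlot) {
                lock.wait(); // even a thread that finished its task early must wait its turn
            }
            Runnable task = targetQueue.take(); // FIFO: front of the queue first
            turn = (mySlot + 1) % participants;
            lock.notifyAll();
            return task;
        }
    }
}
```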
Next, step S34 will be described by way of example.
FIG. 4 is a diagram illustrating a process by which multiple threads take turns fetching tasks from a target queue, according to an illustrative embodiment.
In fig. 4, the number of second threads is 2 as an example; the embodiments of the present disclosure do not limit the number of second threads, which may be any positive integer (1, 2, 3, 4, 5, …).
It is assumed that 2 second threads included in the at least one second thread are the second thread 41 and the second thread 42, respectively, and the target first thread is the target first thread 43. Assume that the plurality of tasks stored in the target queue 44 include task 11, task 12, task 13, task 14, task 15, task 16, … in order from the front end to the back end of the queue.
Illustratively, the order of the plurality of tasks stored from the front end to the back end of the queue is the order in which the plurality of tasks are input to the first thread pool. For example, the order in which task 11, task 12, task 13, task 14, task 15, and task 16 are input to the first thread pool is: task 11, task 12, task 13, task 14, task 15, task 16.
Assume that the at least one second thread and the target first thread fetch tasks from the target queue 44 in order of the target first thread, the second thread 41, and the second thread 42.
Fig. 4 shows a process of two rounds of the at least one second thread and the target first thread sequentially obtaining tasks from the target queue 44 according to the sequence of the target first thread, the second thread 41 and the second thread 42.
In fig. 4, i denotes the ith task fetched from the target queue in the first round, where i takes the values 1, 2, and 3, and ii denotes the ith task fetched from the target queue in the second round.
Suppose that in the second round the target first thread 43 has already fetched task 14 from the target queue 44 and it is now the second thread 41's turn to fetch task 15, but the second thread 41 has not yet finished task 12, which it obtained from the target queue 44 in the first round. The rotation must wait for the second thread 41 to finish task 12 before it fetches task 15 from the target queue 44. Only after the second thread 41 has fetched task 15 can the second thread 42 fetch task 16; even if the second thread 42 finished task 13 while the second thread 41 was still processing task 12, the second thread 41 cannot be skipped so that the second thread 42 takes task 15 out of the target queue 44 directly.
It can be understood that the order in which the tasks are stored in the target queue is the order in which the tasks are input into the first thread pool, and since at least one second thread and the target first thread are taken out from the target queue in turn, it is ensured that the tasks stored in the target queue are taken out in turn according to the order in which the tasks are input into the first thread pool.
In the embodiments of the present disclosure, a first thread whose number of tasks to be executed is greater than or equal to the first threshold is called a target first thread, and one or more second threads are allocated to each target first thread. The target first thread and the one or more second threads take turns fetching tasks from the target queue and process them, that is, they process multiple tasks in parallel. This speeds up the processing of the tasks stored in the target queue, allows them to be processed in time, and shortens the execution time of the plurality of tasks with the same attribute identification stored in the target queue. The thread pool capacity expansion method provided by the embodiments of the present disclosure can therefore be applied to scenarios with high real-time requirements; for example, in a live shopping broadcast, a ranked list of shoppers' consumption amounts must be displayed in the live room so that rewards can be given to the first 1000 shoppers.
It can be understood that if the at least one second thread and the target first thread output the results corresponding to the tasks they processed in the order in which the tasks were fetched from the target queue, then the results corresponding to tasks with the same attribute identification are output in the order in which those tasks were input into the first thread pool.
Next, step S35 will be explained.
Still taking fig. 4 as an example, the order in which the at least one second thread and the target first thread fetch tasks from the target queue 44 is: task 11, task 12, task 13, task 14, task 15, task 16, …. When outputting the results corresponding to the tasks, the at least one second thread and the target first thread likewise output, in sequence, the result of task 11, the result of task 12, the result of task 13, the result of task 14, the result of task 15, the result of task 16, ….
Illustratively, two situations arise when controlling a thread to output the result corresponding to the task it processed.
In the first case, the durations of the tasks processed by different threads may differ, so a thread A that fetched its task from the target queue earlier may not have finished it while a thread B that fetched its task later has already finished. To ensure that "the at least one second thread and the target first thread output the results corresponding to the tasks they processed in the order in which the tasks were fetched from the target queue", thread B may output the result of its task only after thread A has finished its task and output its result.
Taking fig. 4 as an example, suppose the second thread 42 has already finished task 13, but the second thread 41 has not yet finished task 12. In this case, the second thread 42 must be controlled not to output the result of task 13, and to output it only after the second thread 41 has finished task 12 and output the result of task 12.
In the second case, thread A, which fetched its task from the target queue first, has already processed it and output its result; thread B, which fetched its task later, can then output the result of its task directly after processing it, without waiting.
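Both cases can be covered by tagging each task with the sequence number under which it was fetched and buffering finished results until all earlier results have been emitted. The following Java sketch is one assumed way to do this; none of the names come from the disclosure.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the ordered-output rule: each task carries the sequence number it
// was fetched with, and a result may only be emitted once every result with a
// smaller sequence number has been emitted.
public class OrderedEmitter {
    private final Object lock = new Object();
    private long nextToEmit = 0;                                // next sequence number allowed out
    private final Map<Long, Object> pending = new HashMap<>();  // finished but not yet emittable

    /** Called by any thread when it finishes the task it fetched as `seq`. */
    public void submit(long seq, Object result) {
        synchronized (lock) {
            pending.put(seq, result);
            // Flush every result that is now in order: a result for a later
            // task waits until the earlier results are out (the first case);
            // if they are already out, it is emitted at once (the second case).
            while (pending.containsKey(nextToEmit)) {
                emit(nextToEmit, pending.remove(nextToEmit));
                nextToEmit++;
            }
        }
    }

    private void emit(long seq, Object result) {
        System.out.println("result " + seq + ": " + result);
    }
}
```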
In the thread pool capacity expansion method provided by the embodiments of the present disclosure, the number of tasks to be executed corresponding to each of a plurality of first threads contained in a first thread pool is obtained, where the number of tasks to be executed corresponding to a first thread is the number of tasks stored in the queue corresponding to that first thread. If the plurality of first threads include a target first thread whose number of tasks to be executed is greater than or equal to a first threshold, the execution time of the plurality of tasks with the same attribute identification stored in the target queue corresponding to the target first thread may be long, so the first thread pool needs to be expanded. In the embodiments of the present disclosure, when the first thread pool needs to be expanded, the first thread pool is not expanded as a whole; instead, the target queue corresponding to the target first thread is expanded, that is, at least one second thread contained in a second thread pool is obtained. Since the at least one second thread and the target first thread take turns fetching tasks from the target queue and then process them, that is, they process tasks in parallel, the tasks stored in the target queue are processed quickly, which shortens the execution time of the plurality of tasks with the same attribute identification stored in the target queue. The at least one second thread and the target first thread fetch tasks from the target queue in turn and output the results corresponding to the tasks they processed in the order in which the tasks were fetched from the target queue. Therefore, the first thread pool can output the results corresponding to tasks with the same attribute identification in the order in which those tasks were input into the first thread pool, that is, the first thread pool still has "affinity". Further, since the second thread pool is different from the first thread pool, the total number of first threads contained in the first thread pool does not change, and therefore the tasks that have already been allocated to queues do not need to be reallocated.
To explain when the first thread pool needs to be expanded: if the plurality of first threads include one or more target first threads, the first thread pool needs to be expanded; if the plurality of first threads include no target first thread, the first thread pool does not need to be expanded. The process of deciding whether the first thread pool needs to be expanded is therefore the process of determining the target first thread, that is, step S32, which is described next.
In an alternative implementation manner, step S32 can be implemented in various ways; the disclosed embodiments provide, but are not limited to, the following three.
The first implementation of step S32 includes the following steps a11 to a12.
In step a11, a first threshold is determined based on the number of tasks to be executed corresponding to the first threads respectively.
For example, the first threshold may be an average value a of the numbers of the tasks to be executed corresponding to the plurality of first threads, respectively.
Illustratively, step a11 includes the following steps a111 through a112.
In step a111, a first minimum number of tasks to be executed is determined from the numbers of tasks to be executed respectively corresponding to the plurality of first threads.
In step a112, the first threshold is determined based on the first minimum number of tasks to be performed.
The first minimum number of the tasks to be executed is the minimum number of the tasks to be executed in the number of the tasks to be executed corresponding to the plurality of first threads respectively.
In an alternative implementation manner, step a112 is specifically: the first threshold is the product of the first minimum number of tasks to be executed and a first preset multiple, where the first preset multiple is any value greater than 1. Illustratively, the first preset multiple is 2.
For example, the first preset multiple may be determined based on actual conditions, and is not limited herein.
For example, the numbers of to-be-executed tasks corresponding to the plurality of first threads may differ at different times, so the calculated first threshold may also differ over time.

Illustratively, the first threshold is related to the numbers of to-be-executed tasks corresponding to the plurality of first threads; for example, it is positively correlated with the first minimum number of tasks to be executed or with the average value a.
If the first minimum number of tasks to be executed or the average value a is small, the first threshold is also small, and a target first thread determined under such a small first threshold does not necessarily need expansion. For example, if the first minimum number of tasks to be executed is 2 and the first preset multiple is 2, the first threshold is 2 × 2 = 4; treating every first thread whose number of to-be-executed tasks is greater than or equal to 4 as a target first thread is obviously unreasonable. Based on this, an alternative method of determining the first threshold is provided, in which step a112 includes the following steps a21 to a23.
In step a21, a threshold A is determined based on the first minimum number of tasks to be executed;
in step a22, for each of the first threads, the product of the maximum number corresponding to the first thread and a second preset multiple is determined as the threshold B corresponding to the first thread.
The maximum number corresponding to the first thread is the maximum number of tasks which can be stored in the queue corresponding to the first thread.
In step a23, the larger of the threshold a and the threshold B is determined as the first threshold.
Illustratively, the second preset multiple is any value greater than 0 and less than or equal to 1; illustratively, the second preset multiple is 0.8. Under this rule, first threads whose queues have different maximum numbers have different thresholds B.

Alternatively, the threshold B corresponding to different first threads may be the same, that is, the threshold B may be a fixed value. The specific value of this fixed threshold B may be determined based on actual conditions and is not limited here; it may be 100, for example.
It can be understood that, since the first threshold is max{threshold A, threshold B}, the number of to-be-executed tasks of a determined target first thread is greater than or equal to both threshold A and threshold B. Here, "the corresponding number of to-be-executed tasks is greater than or equal to threshold A" expresses a relative condition: the queue stores more tasks than the other queues do. "The corresponding number of to-be-executed tasks is greater than or equal to threshold B" expresses an absolute condition: viewed on its own, the queue genuinely stores a large number of tasks.
In summary, this implementation excludes first threads whose corresponding number of to-be-executed tasks is greater than or equal to threshold A but less than threshold B, ruling out unreasonable outcomes such as the earlier example in which every first thread with 4 or more to-be-executed tasks would become a target first thread.
In step a12, a target first thread, from among the plurality of first threads, is determined, where the corresponding number of tasks to be executed is greater than or equal to the first threshold.
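The first implementation of step S32 can be condensed into a short Java sketch, assuming the pending counts and queue capacities have already been collected; the multiples 2 and 0.8 are the text's illustrative values, not mandated ones:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TargetSelection {
    // Steps a21 to a23 plus a12: a thread is a target only if its backlog is
    // large both relative to the emptiest queue and in absolute terms.
    static List<Integer> findTargetThreads(int[] pending, int[] capacity) {
        int minPending = Arrays.stream(pending).min().orElse(0);
        double thresholdA = minPending * 2.0;            // step a21: relative condition
        List<Integer> targets = new ArrayList<>();
        for (int i = 0; i < pending.length; i++) {
            double thresholdB = capacity[i] * 0.8;       // step a22: absolute condition
            double firstThreshold = Math.max(thresholdA, thresholdB);  // step a23
            if (pending[i] >= firstThreshold) {
                targets.add(i);                          // step a12: target first thread
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        int[] pending  = {2, 3, 95, 4};
        int[] capacity = {100, 100, 100, 100};
        // Threshold A = 4, threshold B = 80, so only thread 2 qualifies: [2]
        System.out.println(findTargetThreads(pending, capacity));
    }
}
```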
The second implementation of step S32 includes the following steps a31 to a33.
In step a31, the maximum number of tasks that can be stored in the queue corresponding to each first thread is obtained.
In step a32, for each of the first threads, a product of the maximum number corresponding to the first thread and a second preset multiple is determined as a first threshold corresponding to the first thread.
Illustratively, the second preset multiple is any value greater than 0 and less than or equal to 1; illustratively, the second preset multiple is 0.8.
In step a33, for each first thread, if the number of tasks to be executed corresponding to the first thread is greater than or equal to a first threshold corresponding to the first thread, it is determined that the first thread is a target first thread.
Illustratively, first threads whose queues differ in maximum number have different first thresholds.
In this second implementation manner, the number of to-be-executed tasks of each first thread is compared only with that thread's own first threshold.
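A minimal sketch of this per-queue comparison, again using the text's illustrative multiple 0.8:

```java
public class PerQueueThreshold {
    // Steps a32 and a33: each first thread is compared only against the
    // product of its own queue's maximum number and the second preset multiple.
    static boolean isTargetFirstThread(int pendingTasks, int queueCapacity) {
        double secondPresetMultiple = 0.8;   // illustrative value from the text
        return pendingTasks >= queueCapacity * secondPresetMultiple;
    }

    public static void main(String[] args) {
        System.out.println(isTargetFirstThread(85, 100)); // true: 85 >= 80
        System.out.println(isTargetFirstThread(50, 100)); // false
    }
}
```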
The third implementation of step S32 includes the following step a41.
In step a41, a target first thread, of which the corresponding number of tasks to be executed is greater than or equal to a first threshold, is determined from the plurality of first threads.
The first threshold is a set fixed value; that is, the first thresholds corresponding to different first threads are the same.
In an alternative implementation manner, step S33 can be implemented in various ways; the embodiments of the present disclosure provide, but are not limited to, the following two.
The first implementation manner of step S33 includes: controlling the second thread pool to create the at least one second thread.
For example, the second thread pool may be controlled to create a corresponding number of second threads each time second threads are needed, and the second threads may be destroyed when they are no longer needed.
For example, if the sum of the total number of created threads contained in the second thread pool and the number of the at least one second thread is less than the maximum number of threads corresponding to the second thread pool, the at least one second thread may be created.
It can be understood that both creating and destroying threads increase overhead: creating a thread requires, for example, allocating memory for it and scheduling it. Therefore, after one or more second threads have been created, they can be put back into the second thread pool when they are no longer needed, and a corresponding number of second threads can be obtained from the second thread pool when they are needed again, reducing the overhead of repeated creation and destruction. The second implementation is provided on this basis.
The second implementation manner of step S33 includes: obtaining, from the second thread pool, the at least one second thread that has already been created and is in an idle state.
In an alternative implementation, after the at least one created second thread in the idle state is obtained from the second thread pool, the at least one second thread may be moved out of the second thread pool in order to prevent it from being bound to multiple target queues at once. In this case, every second thread remaining in the second thread pool is in the idle state, since all second threads in a non-idle state have been moved out.
In an alternative implementation, after the at least one created second thread in the idle state is obtained from the second thread pool, the thread state of the at least one second thread may instead be switched from the idle state to a non-idle state in order to prevent it from being bound to multiple target queues at once. In this case, the second threads contained in the second thread pool may be either idle or non-idle.
For example, "moving the at least one second thread out of the second thread pool" mentioned above does not destroy the at least one second thread; it merely deletes the second thread pool's record of that thread, and the memory allocated to the thread still exists.
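The borrow-and-return behaviour of this second implementation can be sketched as follows, assuming a hypothetical Worker type that wraps a live thread; marking a worker non-idle (equivalently, removing it from the pool) is what prevents one second thread from being bound to several target queues at once:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

class SecondThreadPool {
    // Hypothetical wrapper around a live, already-created thread.
    static class Worker { volatile boolean idle = true; }

    private final ConcurrentLinkedQueue<Worker> idleWorkers = new ConcurrentLinkedQueue<>();

    // Obtain an already-created second thread in the idle state, if any.
    Worker borrow() {
        Worker w = idleWorkers.poll();   // null: no idle second thread available
        if (w != null) w.idle = false;   // switch idle -> non-idle
        return w;
    }

    // Release a second thread for reuse instead of destroying it.
    void giveBack(Worker w) {
        w.idle = true;                   // switch non-idle -> idle
        idleWorkers.add(w);
    }
}
```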
In an optional implementation manner, the steps included in the thread pool capacity expansion method may be executed once every first preset time period, or they may be executed in real time.
For example, the first preset time period may be determined based on actual situations, and the embodiment of the present disclosure does not limit this.
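A sketch of the periodic variant using a scheduled executor; checkAndExpand is a hypothetical callback standing in for the steps of the method, and the period is an illustrative value:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpansionScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        Runnable checkAndExpand = () -> { /* run steps S31..S33 of the method */ };
        long firstPresetPeriodMs = 500;  // the "first preset time period" (illustrative)
        // The check repeats for the life of the scheduler.
        scheduler.scheduleAtFixedRate(checkAndExpand, 0, firstPresetPeriodMs, TimeUnit.MILLISECONDS);
    }
}
```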
In an optional implementation manner, the thread pool capacity expansion method provided in the embodiment of the present disclosure further includes a thread pool capacity reduction method. The capacity reduction method is executed after the thread pool has been expanded: it checks whether the number of to-be-executed tasks corresponding to the target first thread has returned to normal and, if so, shrinks the thread pool by releasing the at least one second thread. The capacity reduction method includes the following steps B1 to B2.
In step B1, the number of tasks to be executed corresponding to the target first thread at the current time is obtained.
In step B2, if the number of tasks to be executed corresponding to the target first thread at the current time is less than or equal to a second threshold, the at least one second thread is released, so that the at least one second thread cannot obtain tasks from the target queue.
Illustratively, releasing the at least one second thread means releasing the binding relationship between the at least one second thread and the target queue, so that the at least one second thread can no longer obtain tasks from the target queue.
Illustratively, after releasing the at least one second thread, the at least one second thread may be destroyed.
For example, if the at least one second thread was moved out of the second thread pool after being obtained during expansion of the target queue, then after the at least one second thread is released it may be moved back into the second thread pool.

For example, if the thread state of the at least one second thread was switched from the idle state to the non-idle state after being obtained during expansion of the target queue, then after the at least one second thread is released its thread state may be switched back from the non-idle state to the idle state.
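Putting steps B1 and B2 together, a hedged sketch of the shrink path could look as follows, reusing the Worker and SecondThreadPool types from the earlier sketch; TargetQueue is a hypothetical interface introduced only for this illustration, not an API defined by the disclosure:

```java
import java.util.List;

// Hypothetical view of the target queue for this sketch only.
interface TargetQueue {
    int pendingCount();                      // step B1: tasks currently stored
    void unbind(SecondThreadPool.Worker w);  // stop w from taking tasks from this queue
}

class Shrinker {
    // Step B2: once the backlog has recovered, release the borrowed second
    // threads and return them to the second thread pool for reuse
    // (destroying them instead would also be possible).
    void maybeShrink(TargetQueue queue, List<SecondThreadPool.Worker> borrowed,
                     SecondThreadPool pool, int secondThreshold) {
        if (queue.pendingCount() <= secondThreshold) {
            for (SecondThreadPool.Worker w : borrowed) {
                queue.unbind(w);     // release the binding to the target queue
                pool.giveBack(w);    // switch non-idle -> idle
            }
            borrowed.clear();
        }
    }
}
```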
For example, the thread pool capacity reduction method may be executed a second preset time period after the steps of the capacity expansion method have been executed, or the capacity reduction method may be executed in real time.
Step B2 can be implemented in various ways; the embodiments of the present disclosure provide, but are not limited to, the following.
The first implementation of step B2 includes the following steps C11 through C12.
In step C11, the second threshold is obtained.
There are various implementations of step C11; the embodiments of the present disclosure provide, but are not limited to, the following three.
The first implementation of step C11 is: obtaining the second threshold based on the numbers of to-be-executed tasks corresponding to the plurality of first threads at the current time.

This first implementation can itself be realized in multiple ways; the embodiments of the present disclosure provide, but are not limited to, the following.
The first method: the second threshold is the average of the numbers of to-be-executed tasks corresponding to the plurality of first threads at the current time.
The second method: a second minimum number of tasks to be executed is determined from the numbers of to-be-executed tasks corresponding to the plurality of first threads at the current time, and the second threshold is determined based on that second minimum number.

The second minimum number of tasks to be executed refers to the smallest of the numbers of to-be-executed tasks corresponding to the plurality of first threads at the current time.
In an optional implementation manner, "determining the second threshold based on the second minimum number of tasks to be executed" is specifically: the second threshold is the product of the second minimum number of tasks to be executed and a third preset multiple, where the third preset multiple is any value greater than 1. Illustratively, the third preset multiple is 1.5.
For example, the third preset multiple may be determined based on actual situations, and is not limited herein.
In this implementation manner, "the number of to-be-executed tasks of the target first thread at the current time is less than or equal to the second threshold" expresses only a relative condition: the target queue no longer stores many more tasks than the other queues. Even so, the absolute backlog may still be large. Suppose the second minimum number of tasks to be executed is 1000 and the third preset multiple is 1.5, so that the second threshold is 1000 × 1.5 = 1500; if the target first thread currently has 1400 to-be-executed tasks, then 1400 < 1500 and the at least one second thread would be released, which is obviously unreasonable. Based on this, the following implementation of "determining the second threshold based on the second minimum number of tasks to be executed" is provided.
In an optional implementation manner, "determining the second threshold based on the second minimum number of tasks to be executed" is specifically: a threshold C is obtained based on the second minimum number of tasks to be executed; the maximum number corresponding to the target first thread is obtained; a threshold D is obtained based on that maximum number; and the smaller of threshold C and threshold D is determined as the second threshold.
The maximum number corresponding to the target first thread refers to the maximum number of tasks that the corresponding queue can store.
For example, the threshold C is the product of the second minimum number of tasks to be executed and the third preset multiple.

Illustratively, the threshold D is the product of the maximum number corresponding to the target first thread and a fourth preset multiple, where the fourth preset multiple is any value greater than 0 and less than or equal to 1; illustratively, the fourth preset multiple is 0.6.
In this implementation manner, "the number of to-be-executed tasks of the target first thread at the current time is less than or equal to the second threshold" means that it is less than or equal to both threshold C and threshold D. Being less than or equal to threshold C expresses a relative condition: the target queue no longer stores many more tasks than the other queues. Being less than or equal to threshold D expresses an absolute condition: viewed on its own, the number of tasks stored in the queue has genuinely dropped.
In summary, this implementation excludes the scheme in which the at least one second thread would be released while the target first thread's current number of to-be-executed tasks is less than or equal to threshold C but still greater than threshold D.
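The threshold C / threshold D rule can be sketched directly, using the text's illustrative multiples 1.5 and 0.6:

```java
import java.util.Arrays;

public class SecondThresholdCalc {
    // Threshold C captures the relative condition, threshold D the absolute
    // one; taking the smaller of the two prevents a premature release.
    static int secondThreshold(int[] pendingNow, int targetQueueCapacity) {
        int minPending = Arrays.stream(pendingNow).min().orElse(0);
        double thresholdC = minPending * 1.5;            // third preset multiple
        double thresholdD = targetQueueCapacity * 0.6;   // fourth preset multiple
        return (int) Math.min(thresholdC, thresholdD);
    }

    public static void main(String[] args) {
        // Minimum backlog 1000, capacity 2000: C = 1500, D = 1200, so the
        // second threshold is 1200; a target thread with 1400 pending tasks
        // is not shrunk, matching the text's reasoning.
        System.out.println(secondThreshold(new int[]{1000, 1200}, 2000));
    }
}
```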
The second implementation manner of step C11 includes: obtaining the maximum number corresponding to the target first thread, and obtaining the second threshold based on that maximum number.
For example, the second threshold is the maximum number corresponding to the target first thread.
The third implementation manner of step C11 includes: the second threshold values corresponding to different target first threads are the same, namely the second threshold values are preset threshold values.
The specific value of the preset threshold may be determined based on actual conditions, and is not limited herein, and may be 90, for example.
In step C12, if the number of tasks to be executed corresponding to the target first thread at the current time is less than or equal to the second threshold, the at least one second thread is released.
In summary, the thread pool capacity expansion method provided in the embodiment of the present disclosure preserves the affinity of the first thread pool: results corresponding to tasks with the same attribute identification are still output in the order in which those tasks were input into the first thread pool. Because the total number of first threads contained in the first thread pool does not change during expansion, tasks that have already been assigned queues do not need to be reassigned. The embodiment of the present disclosure always assigns a queue to a task based on the task's attribute identification and the total number of first threads contained in the first thread pool, so tasks received during expansion require no special handling. Likewise, the thread pool capacity reduction process provided in the embodiment of the present disclosure does not change the total number of first threads contained in the first thread pool, so queues need not be reassigned there either; neither expansion nor reduction of the first thread pool by the method provided in the embodiment of the present disclosure affects task allocation.
The method has been described in detail in the embodiments of the present disclosure above. Since the method of the embodiments of the present disclosure can be implemented by various types of apparatuses, the present disclosure also discloses such apparatuses, whose specific embodiments are described in detail below.
Fig. 5 is a block diagram illustrating a thread pool capacity expansion apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes: a first obtaining module 51, a first determining module 52, a second obtaining module 53, a first control module 54, and a second control module 55, wherein:
a first obtaining module 51, configured to obtain the number of to-be-executed tasks respectively corresponding to a plurality of first threads included in a first thread pool, where the number of to-be-executed tasks corresponding to the first threads is the number of tasks stored in a queue corresponding to the first threads;
a first determining module 52 configured to determine, from the plurality of first threads, a corresponding target first thread whose number of tasks to be executed is greater than or equal to a first threshold;
a second obtaining module 53 configured to obtain at least one second thread included in a second thread pool, the first thread pool being different from the second thread pool;
a first control module 54 configured to control the at least one second thread and the target first thread to take out tasks in turn from the queue corresponding to the target first thread;

and a second control module 55 configured to control the at least one second thread and the target first thread to output the results corresponding to the tasks they each process, in the order in which the tasks were taken out of the queue corresponding to the target first thread.
In an optional implementation manner, the first determining module specifically includes:
a first determining unit configured to determine a first minimum number of tasks to be executed from the number of tasks to be executed respectively corresponding to the plurality of first threads;
a second determination unit configured to determine the first threshold based on the first minimum number of tasks to be executed;
a third determining unit configured to determine, from the plurality of first threads, a target first thread for which the corresponding number of tasks to be executed is greater than or equal to the first threshold and greater than or equal to a second threshold.
In an optional implementation manner, the second obtaining module specifically includes:

a first control unit configured to control the second thread pool to create the at least one second thread; or, alternatively,

a first obtaining unit configured to obtain, from the second thread pool, the at least one second thread that has already been created and is in an idle state.
In an optional implementation manner, the capacity expansion device of the thread pool further includes:
a removal module configured to remove the at least one second thread from the second thread pool; or, alternatively,
a first state switching module configured to switch the thread state of the at least one second thread included in the second thread pool from an idle state to a non-idle state.
In an optional implementation manner, the capacity expansion device of the thread pool further includes:
the third acquisition module is configured to acquire the number of tasks to be executed corresponding to the target first thread at the current time;
a releasing module configured to release the at least one second thread if the number of to-be-executed tasks corresponding to the target first thread at the current time is less than or equal to a third threshold, so that the at least one second thread cannot obtain tasks from a queue corresponding to the target first thread.
In an optional implementation, the releasing module specifically includes:
a second obtaining unit, configured to obtain the third threshold, where the third threshold is a numerical value obtained based on the number of to-be-executed tasks respectively corresponding to the current time of the plurality of first threads, or the third threshold is a preset fourth threshold;
a releasing unit configured to release the at least one second thread if the number of tasks to be executed corresponding to the target first thread at the current time is less than or equal to the third threshold.
In an optional implementation manner, the capacity expansion device of the thread pool further includes:
a destruction module configured to destroy the at least one second thread; or, alternatively,

an add module configured to add the at least one second thread back to the second thread pool; or, alternatively,
a second state switching module configured to switch the thread state of the at least one second thread included in the second thread pool from a non-idle state to an idle state.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating an apparatus 600 for a server according to an example embodiment.
The server includes, but is not limited to: a processor 61, a memory 62, a network interface 63, an I/O controller 64, and a communication bus 65.
It should be noted that the server structure shown in fig. 6 does not constitute a limitation on the server: as those skilled in the art will understand, the server may include more or fewer components than shown in fig. 6, may combine certain components, or may adopt a different arrangement of components.
The following describes each component of the server in detail with reference to fig. 6:
The processor 61 is the control center of the server. It connects the various parts of the entire server using various interfaces and lines, and performs the server's functions and processes data by running or executing the software programs and/or modules stored in the memory 62 and calling the data stored in the memory 62, thereby monitoring the server as a whole. The processor 61 may include one or more processing units; illustratively, the processor 61 may integrate an application processor, which mainly handles the operating system, user interface, applications, and so on, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 61.
The processor 61 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
the Memory 62 may include Memory, such as a Random-Access Memory (RAM) 621 and a Read-Only Memory (ROM) 622, and may also include a mass storage device 623, such as at least 1 disk storage. Of course, the server may also include hardware needed for other services.
The memory 62 is configured to store the executable instructions of the processor 61. The processor 61 has the following functions: acquiring the number of to-be-executed tasks respectively corresponding to a plurality of first threads contained in a first thread pool, wherein the number of the to-be-executed tasks corresponding to the first threads is the number of the tasks stored in a queue corresponding to the first threads;
determining a target first thread of which the corresponding number of the tasks to be executed is greater than or equal to a first threshold value from a plurality of first threads;
acquiring at least one second thread contained in a second thread pool, wherein the first thread pool is different from the second thread pool;
taking out tasks from a queue corresponding to the target first thread through the at least one second thread and the target first thread in turn;
and outputting a result corresponding to the task processed by the second thread according to the sequence of taking out the tasks from the queue corresponding to the target first thread through the at least one second thread and the target first thread.
A wired or wireless network interface 63 is configured to connect the server to a network.
The processor 61, the memory 62, the network interface 63, and the I/O controller 64 may be connected to each other by a communication bus 65, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
In an exemplary embodiment, the server may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described thread pool capacity expansion method.
In an exemplary embodiment, the disclosed embodiments provide a storage medium comprising instructions, such as a memory 62 comprising instructions, executable by a processor 61 of a server to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer readable storage medium is provided, for example the memory 62, which contains software code that can be loaded into the internal memory of a computer and executed by the computer to implement the steps shown in any embodiment of the thread pool capacity expansion method.
In an exemplary embodiment, a computer program product is also provided, which can be directly loaded into the internal memory of a computer, for example the memory included in the server, and contains software code; the computer program can be loaded into and executed by the computer to implement the steps shown in any embodiment of the thread pool capacity expansion method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for thread pool capacity expansion, comprising:
acquiring the number of to-be-executed tasks respectively corresponding to a plurality of first threads contained in a first thread pool, wherein the number of the to-be-executed tasks corresponding to the first threads is the number of the tasks stored in a queue corresponding to the first threads;
determining a target first thread of which the corresponding number of the tasks to be executed is greater than or equal to a first threshold value from a plurality of first threads;
acquiring at least one second thread contained in a second thread pool, wherein the first thread pool is different from the second thread pool;
taking out tasks from a queue corresponding to the target first thread through the at least one second thread and the target first thread in turn;
and outputting a result corresponding to the task processed by the second thread according to the sequence of taking out the tasks from the queue corresponding to the target first thread through the at least one second thread and the target first thread.
2. The thread pool capacity expansion method according to claim 1, wherein the step of determining a corresponding target first thread from the plurality of first threads, wherein the number of tasks to be executed is greater than or equal to a first threshold value, comprises:
determining a first minimum number of tasks to be executed from the number of tasks to be executed respectively corresponding to the plurality of first threads;
determining the first threshold based on the first minimum number of tasks to be performed;
and determining a target first thread of which the corresponding number of the tasks to be executed is greater than or equal to the first threshold value from the plurality of first threads.
3. The thread pool capacity expansion method according to claim 1, wherein the step of acquiring at least one second thread contained in a second thread pool comprises:

controlling the second thread pool to create the at least one second thread; or, alternatively,

obtaining the at least one second thread which is created in an idle state from the second thread pool.
4. The thread pool capacity expansion method according to claim 3, further comprising, after said acquiring at least one second thread contained in a second thread pool:

removing the at least one second thread from the second thread pool; or, alternatively,

switching the thread state of the at least one second thread contained in the second thread pool from an idle state to a non-idle state.
5. The thread pool capacity expansion method according to any one of claims 1 to 4, further comprising:
acquiring the number of tasks to be executed corresponding to the target first thread at the current time;
and if the number of the tasks to be executed corresponding to the target first thread at the current time is less than or equal to a second threshold value, releasing the at least one second thread, so that the at least one second thread cannot obtain the tasks from the queue corresponding to the target first thread.
6. The thread pool capacity expansion method according to claim 5, wherein the step of releasing the at least one second thread if the number of tasks to be executed corresponding to the target first thread at the current time is less than or equal to a second threshold value comprises:
acquiring the second threshold, wherein the second threshold is a numerical value obtained based on the number of to-be-executed tasks respectively corresponding to the current time of the plurality of first threads or is a preset threshold;
and if the number of the tasks to be executed corresponding to the target first thread in the current time is less than or equal to the second threshold value, releasing the at least one second thread.
7. A thread pool capacity expansion device, comprising:
the system comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is configured to obtain the number of tasks to be executed corresponding to a plurality of first threads contained in a first thread pool respectively, and the number of the tasks to be executed corresponding to the first threads is the number of tasks stored in a queue corresponding to the first threads;
a first determining module configured to determine, from the plurality of first threads, a corresponding target first thread for which the number of tasks to be executed is greater than or equal to a first threshold;
a second obtaining module configured to obtain at least one second thread included in a second thread pool, the first thread pool being different from the second thread pool;
the first control module is configured to take out tasks from a queue corresponding to the target first thread through the at least one second thread and the target first thread in turn;
and the second control module is configured to output a result corresponding to the task processed by the second control module per se through the at least one second thread and the target first thread according to the sequence of taking out the tasks from the queue corresponding to the target first thread.
8. A server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the thread pool capacity expansion method of any one of claims 1 to 6.
9. A computer readable storage medium having instructions which, when executed by a processor of a server, enable the server to perform the thread pool capacity expansion method of any one of claims 1 to 6.
10. A computer program product directly loadable into the internal memory of a computer, said memory being the memory comprised by the server according to claim 8 and containing software code, said computer program being loadable and executable by the computer and being capable of implementing the thread pool capacity expansion method according to any one of claims 1 to 6.
GR01 Patent grant