CN116089094A - Thread pool allocation method and device - Google Patents

Thread pool allocation method and device Download PDF

Info

Publication number
CN116089094A
Authority
CN
China
Prior art keywords
target
task
execution
thread pool
execution information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310147054.XA
Other languages
Chinese (zh)
Inventor
张若凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Douyin Vision Co Ltd filed Critical Douyin Vision Co Ltd
Priority to CN202310147054.XA priority Critical patent/CN116089094A/en
Publication of CN116089094A publication Critical patent/CN116089094A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiments of this specification provide a thread pool allocation method and device. One embodiment of the method comprises the following steps: acquiring multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices; taking any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and splitting out, from the historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task; taking the historical execution duration included in each piece of target task execution information as a first execution duration, and determining, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for the terminal devices of the target device model to execute the target task; and determining, according to the second execution duration, a target thread pool allocated to the target task in the terminal devices of the target device model.

Description

Thread pool allocation method and device
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a thread pool allocation method and device.
Background
With the rapid development of computer technology and Internet technology, applications of all kinds keep emerging and bring great convenience to people's lives. A thread pool is a common thread management tool that can be used for operations such as creating, holding, and terminating threads. Most tasks in an application can be executed by threads in a thread pool. Typically, when creating an asynchronous task, a developer has to actively select a specific thread pool in which the task will execute, or directly create a new thread to execute it. As a result, threads are prone to overflow, CPU (central processing unit)-intensive tasks and non-intensive tasks are easily placed into the same thread pool for execution, and manually designating a thread pool is not very accurate. Ultimately, individual threads tend to monopolize CPU time while other threads starve, which leads to long waiting times for tasks and low processing efficiency.
Disclosure of Invention
The embodiment of the specification describes a thread pool allocation method and apparatus.
According to a first aspect, a thread pool allocation method is provided, comprising: acquiring multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices, wherein each piece of historical task execution information comprises a task identifier, a historical execution duration and a device model; taking any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and splitting out, from the historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task; taking the historical execution duration included in each piece of target task execution information as a first execution duration, and determining, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for the terminal devices of the target device model to execute the target task; and determining, according to the second execution duration, a target thread pool allocated to the target task in the terminal devices of the target device model.
According to a second aspect, a thread pool allocation apparatus is provided, comprising: an acquisition unit configured to acquire multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices, wherein each piece of historical task execution information comprises a task identifier, a historical execution duration and a device model; a splitting unit configured to take any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and to split out, from the pieces of historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task; a first determining unit configured to take the historical execution duration included in each piece of target task execution information as a first execution duration, and to determine, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for the terminal devices of the target device model to execute the target task; and a second determining unit configured to determine, according to the second execution duration, a target thread pool allocated to the target task in the terminal devices of the target device model.
According to a third aspect, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method of any implementation of the first aspect.
According to a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when executed in a computer, causes the computer to perform the method of any implementation of the first aspect.
According to a fifth aspect, an electronic device is provided, comprising a memory in which executable code is stored and a processor which, when executing the executable code, implements the method of any implementation of the first aspect.
According to the thread pool allocation method and apparatus provided by the embodiments of this specification, multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices are first acquired, where each piece of historical task execution information may include a task identifier, a historical execution duration and a device model. Next, any one of the device models is taken as a target device model and any one of the tasks as a target task, and multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task are split out from the acquired historical task execution information. Then, the historical execution duration included in each piece of target task execution information is taken as a first execution duration, and a second execution duration for the terminal devices of the target device model to execute the target task is determined based on the first execution durations of the multiple pieces of target task execution information. Finally, a target thread pool allocated to the target task in the terminal devices of the target device model is determined according to the second execution duration. Because terminal devices of different device models do not execute the same task under identical conditions, the thread pool allocated to a task in terminal devices of different device models can be determined according to the execution duration of that task on terminal devices of each device model. In this way, a suitable thread pool can be allocated to a task based on the device model, so that the task executes more smoothly, long waits are avoided, and processing efficiency is improved.
Drawings
FIG. 1 shows a schematic diagram of one application scenario in which embodiments of the present description may be applied;
FIG. 2 illustrates a flow diagram of a thread pool allocation method according to one embodiment;
FIG. 3 shows a schematic diagram of the points obtained by mapping each piece of target task execution information into a coordinate system;
FIG. 4 is a schematic diagram showing the clustering of the points in FIG. 3 with the OPTICS algorithm to obtain the target class cluster;
FIG. 5 illustrates a schematic block diagram of a thread pool allocation apparatus in accordance with one embodiment;
FIG. 6 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present application.
Detailed Description
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, usage scenario, etc. of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the operation the user requests to perform will require acquiring and using the user's personal information. In this way, the user can, according to the prompt information, autonomously choose whether to provide personal information to the software or hardware, such as an electronic device, an application program, a server or a storage medium, that executes the operations of the technical solutions of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, by means of a pop-up window in which the prompt information is presented as text. In addition, the pop-up window may carry a selection control that allows the user to choose "consent" or "refuse" with respect to providing personal information to the electronic device.
It will be appreciated that the above notification and user-authorization process is merely illustrative and does not limit the implementations of the present disclosure; other ways of satisfying the relevant laws and regulations may also be applied to the implementations of the present disclosure.
The technical solutions provided in this specification are further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings, and that, in the absence of conflict, the embodiments of this specification and the features in the embodiments may be combined with each other.
As described above, having a developer designate a specific thread pool for a task during development tends to result in long waiting times and low processing efficiency for the task. Therefore, the embodiments of this specification provide a thread pool allocation method that can allocate a suitable thread pool to a task based on the device model, so that the task executes more smoothly, long waits are avoided, and processing efficiency is improved.
Fig. 1 shows a schematic diagram of one application scenario in which embodiments of this specification may be applied. As shown in Fig. 1, the server 101 may receive, at intervals of a preset period (for example, one month), multiple pieces of historical task execution information corresponding to executed tasks sent by terminal device 102, terminal device 103, terminal device 104, and so on, of different device models, where each terminal device may include a plurality of thread pools and these thread pools may be of different types. The historical task execution information may include a device model, a task identifier, a historical execution duration, and the like, where the historical execution duration refers to the time taken to execute the task corresponding to the task identifier. When there are a plurality of device models and a plurality of task identifiers, the pieces of historical task execution information may be divided according to device model and task identifier, for example by grouping together the historical task execution information having the same device model and task identifier. Any one of the plurality of device models may be taken as the target device model, and any one of the plurality of tasks as the target task; the multiple pieces of target task execution information corresponding to the target device model and the target task identifier can thus be obtained. The historical execution duration included in each piece of target task execution information may be taken as a first execution duration. Based on the first execution durations of the multiple pieces of target task execution information, the server 101 may determine a second execution duration for terminal devices of the target device model to execute the target task, and determine, according to the second execution duration, a target thread pool allocated to the target task in terminal devices of the target device model. Then, the server 101 may send the target task identifier of the target task and the thread pool identifier of the target thread pool to the terminal devices of the target device model. In this example, assuming the terminal devices of the target device model include terminal device 102, the server 101 may send the target task identifier and the thread pool identifier of the target thread pool to terminal device 102, so that terminal device 102 allocates the thread pool to the target task.
With continued reference to Fig. 2, which shows a flow diagram of a thread pool allocation method according to one embodiment. The method may be performed by a server. As shown in Fig. 2, the thread pool allocation method may include the following steps:
step 201, acquiring a plurality of pieces of historical task execution information which are sent by a plurality of terminal devices and correspond to tasks executed in the terminal devices.
In this embodiment, each terminal device may record various information about each task it has executed, for example the task identifier (e.g., the task name), the execution duration, and so on, and then upload the recorded historical task execution information to the server. The historical task execution information uploaded to the server by each terminal device may include, but is not limited to, the task identifier, the historical execution duration, the device's own model, and the like.
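Purely as an illustration of what one such record might look like on the terminal side, a minimal sketch follows; the class and field names (TaskExecutionRecord, durationMillis, and so on) are hypothetical and are not taken from this disclosure.

```java
// Hypothetical sketch of one piece of historical task execution information
// recorded by a terminal device; names and types are illustrative only.
public class TaskExecutionRecord {
    private final String taskId;        // task identifier, e.g. the task name
    private final long durationMillis;  // historical execution duration of the task
    private final String deviceModel;   // device model of the reporting terminal

    public TaskExecutionRecord(String taskId, long durationMillis, String deviceModel) {
        this.taskId = taskId;
        this.durationMillis = durationMillis;
        this.deviceModel = deviceModel;
    }

    public String getTaskId() { return taskId; }
    public long getDurationMillis() { return durationMillis; }
    public String getDeviceModel() { return deviceModel; }
}
```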
Step 202, taking any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and splitting out, from the multiple pieces of historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task.
In this embodiment, the server may divide the pieces of historical task execution information according to device model and task identifier, specifically by grouping together the historical task execution information having the same device model and task identifier. The server thereby obtains the historical task execution information of each task on the terminal devices of each device model. In this example, any one of the plurality of device models may be taken as the target device model, any one of the plurality of tasks as the target task, and the task identifier of the target task as the target task identifier. The server can thus obtain the multiple pieces of target task execution information for the target task sent by the terminal devices of the target device model.
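A minimal server-side sketch of this grouping step, reusing the hypothetical TaskExecutionRecord class from the earlier sketch, might look as follows; it simply groups records by the (device model, task identifier) pair, which is one straightforward way to realize the division described above.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Group uploaded records by (device model, task identifier); each group then holds
// the target task execution information for one device model and one task.
public class RecordGrouper {
    public static Map<String, List<TaskExecutionRecord>> groupByModelAndTask(
            List<TaskExecutionRecord> records) {
        return records.stream()
                .collect(Collectors.groupingBy(
                        r -> r.getDeviceModel() + "|" + r.getTaskId()));
    }
}
```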
In general, the hardware and software configurations of terminal devices of different device models may differ, so the execution duration of the same task may also differ across terminal devices of different device models. For example, the task of reading an album may take about 1 minute and 20 seconds on a terminal device of device model 1, but only about 49 seconds on a terminal device of device model 2.
Optionally, the multiple pieces of target task execution information may be generated by a plurality of terminal devices of the target device model executing the target task within a preset period of time (for example, within one month). That is, the thread pool allocation method of the present application may be re-executed for the target device model and the target task every preset period, thereby re-determining the target thread pool allocated to the target task in terminal devices of the target device model. In practice, the performance of a terminal device may degrade as its usage time grows, so even the same task executed on the same terminal device may take different amounts of time. For example, a new mobile phone executes the same task faster than an old one, and terminal devices of the same device model leave the factory at roughly the same time, so their usage durations are roughly the same. Re-determining, at preset intervals, the target thread pool allocated to the target task in terminal devices of the target device model makes the thread pool allocation more reasonable and further improves task processing efficiency.
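As a sketch of such periodic re-execution (the reallocate() entry point and the 30-day interval are assumptions used only for illustration), the server could schedule the allocation procedure like this:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Re-run the thread pool allocation for every (device model, task) pair at a fixed
// interval; 30 days stands in for the "preset period of time" mentioned in the text.
public class PeriodicReallocation {
    public static void schedule(Runnable reallocate) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(reallocate, 0, 30, TimeUnit.DAYS);
    }
}
```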
Step 203, taking the historical execution duration included in each piece of target task execution information as a first execution duration, and determining, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for the terminal devices of the target device model to execute the target task.
In this embodiment, the second execution duration for terminal devices of the target device model to execute the target task may be determined based on the first execution durations of the multiple pieces of target task execution information. For example, the first execution durations included in the pieces of target task execution information may be statistically analyzed, e.g., a mean or a weighted average may be calculated over all or part of the first execution durations, and the result of the statistical analysis may be used as the second execution duration.
In some alternative implementations, step 203 may further include the following:
s1, clustering the multi-item target task execution information by using a clustering algorithm, and determining a class cluster from at least one class cluster obtained by clustering as a target class cluster.
In this example, any of a variety of clustering algorithms may be used to cluster the multiple pieces of target task execution information, for example partition-based, hierarchy-based or density-based clustering algorithms, where each resulting class cluster may correspond to one class. One class cluster may be determined from the at least one class cluster as the target class cluster; for example, the class cluster containing the largest number of pieces of target task execution information may be selected as the target class cluster. The target class cluster thus includes multiple pieces of target task execution information.
Optionally, the clustering algorithm may include a density-based clustering algorithm, such as DBSCAN (Density-Based Spatial Clustering of Applications with Noise) or OPTICS (Ordering Points To Identify the Clustering Structure). In that case, step S1 may be performed as follows. First, each piece of target task execution information among the multiple pieces is mapped to a point in a coordinate system according to its first execution duration. In practice, the target task execution information may be mapped to points in a coordinate system in a variety of ways. For example, each piece of target task execution information may be mapped to a point in a coordinate system with the device model on the horizontal axis and the execution duration on the vertical axis; alternatively, with the execution duration on the horizontal axis and the device model on the vertical axis; or each piece of target task execution information may be mapped to a point on a time axis representing the execution duration. Then, the points in the coordinate system are clustered with the density-based clustering algorithm to obtain at least one class cluster. Finally, one class cluster is determined from the at least one class cluster as the target class cluster; for example, the class cluster with the most points may be selected as the target class cluster.
Taking the OPTICS algorithm as an example of the clustering algorithm, Fig. 3 shows a schematic diagram of the points obtained by mapping each piece of target task execution information into a coordinate system. In this example, the horizontal axis of the coordinate system shown in Fig. 3 is the device model and the vertical axis is the execution duration (in seconds). It will be understood that the example shown in Fig. 3 is the mapping result of the target task execution information of the target task on terminal devices of the target device model. Fig. 4 shows a schematic diagram of clustering the points in Fig. 3 with the OPTICS algorithm to obtain the target class cluster, where the cluster indicated by the dotted circle corresponds to the target class cluster. Through the clustering algorithm, the pattern of execution durations of the target task on terminal devices of the target device model can be learned from a large amount of target task execution information, so that the resulting target class cluster is more accurate.
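The figures above use OPTICS; as a simpler stand-in with the same density-based idea, the sketch below clusters the first execution durations with a one-dimensional DBSCAN and returns the largest cluster. The eps and minPts parameters are arbitrary illustrations, and this code is not the algorithm configuration used in this disclosure.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified one-dimensional DBSCAN over execution durations (milliseconds).
// Returns the largest cluster, mirroring "select the class cluster with the most points".
public class DurationClusterer {
    public static List<Long> largestCluster(List<Long> durations, long epsMillis, int minPts) {
        int n = durations.size();
        int[] label = new int[n];            // 0 = unvisited, -1 = noise, >0 = cluster id
        int clusterId = 0;
        for (int i = 0; i < n; i++) {
            if (label[i] != 0) continue;
            List<Integer> neighbors = regionQuery(durations, i, epsMillis);
            if (neighbors.size() < minPts) { label[i] = -1; continue; }
            clusterId++;
            label[i] = clusterId;
            for (int k = 0; k < neighbors.size(); k++) {
                int j = neighbors.get(k);
                if (label[j] == -1) label[j] = clusterId;          // noise becomes a border point
                if (label[j] != 0) continue;
                label[j] = clusterId;
                List<Integer> more = regionQuery(durations, j, epsMillis);
                if (more.size() >= minPts) neighbors.addAll(more); // expand the cluster
            }
        }
        // Pick the cluster with the most points.
        List<Long> best = new ArrayList<>();
        for (int c = 1; c <= clusterId; c++) {
            List<Long> members = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if (label[i] == c) members.add(durations.get(i));
            }
            if (members.size() > best.size()) best = members;
        }
        return best;
    }

    private static List<Integer> regionQuery(List<Long> durations, int idx, long eps) {
        List<Integer> result = new ArrayList<>();
        for (int i = 0; i < durations.size(); i++) {
            if (Math.abs(durations.get(i) - durations.get(idx)) <= eps) result.add(i);
        }
        return result;
    }
}
```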
S2, determining, based on the first execution durations of the pieces of target task execution information included in the target class cluster, the second execution duration for the terminal devices of the target device model to execute the target task.
In this example, the second execution duration may be determined according to the first execution durations of the target task execution information included in the target class cluster. For example, the first execution duration of any piece of target task execution information contained in the target class cluster may be selected as the second execution duration.
Optionally, step S2 may be performed as follows. First, a statistical analysis is performed on the first execution durations of the pieces of target task execution information included in the target class cluster, for example by calculating a mean or a weighted average. Then, the second execution duration for the terminal devices of the target device model to execute the target task is determined according to the result of the statistical analysis; for example, the mean, weighted average or other statistic may be used directly as the second execution duration.
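For instance, using the plain arithmetic mean (one of the statistics the text mentions) over the durations in the target class cluster could be sketched as below; this is an illustrative helper, not the claimed computation.

```java
import java.util.List;

// Derive the second execution duration as the mean of the first execution durations
// in the target class cluster; a weighted average would be computed analogously.
public class SecondDurationEstimator {
    public static long meanDurationMillis(List<Long> clusterDurations) {
        if (clusterDurations.isEmpty()) {
            throw new IllegalArgumentException("target class cluster is empty");
        }
        long sum = 0;
        for (long d : clusterDurations) {
            sum += d;
        }
        return sum / clusterDurations.size();
    }
}
```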
Step 204, determining, according to the second execution duration, a target thread pool allocated to the target task in the terminal devices of the target device model.
In this embodiment, a terminal device may include a plurality of thread pools, and these thread pools may be of the same type or of different types. In practice, the number and types of thread pools in the terminal device can be set according to actual needs and are not limited here. As an example, common types of thread pools may include, but are not limited to: FixedThreadPool, CachedThreadPool, ScheduledThreadPool, SingleThreadExecutor, and the like.
FixedThreadPool is a thread pool that reuses a fixed number of threads. It contains only core threads and has a fixed size. When a fixed-size thread pool is created, one thread is created each time a task is submitted, until the number of threads reaches the maximum size of the pool. Once the pool reaches its maximum size it stays at that size, and if a thread ends because of an execution exception, the pool is replenished with a new thread. Because the threads in a FixedThreadPool are not reclaimed, a FixedThreadPool responds to external requests quickly.
CachedThreadPool may be called a cacheable thread pool. Its thread count can grow almost without limit, and idle threads can be reclaimed. If the size of the thread pool exceeds the number of threads needed to process the tasks, idle threads (for example, threads that have not executed a task for 60 seconds) are reclaimed; as the number of tasks increases, the pool can add new threads to process them. This thread pool is well suited to executing a large number of short tasks.
ScheduledThreadPool may be called a timed-task thread pool. Its number of core threads is fixed, while the number of non-core threads (which are reclaimed immediately when they are not executing tasks) is not limited. This thread pool supports timed and periodic execution of tasks.
SingleThreadExecutor is a single-thread thread pool in which only one core thread works, which is equivalent to a single thread serially executing all tasks. If this unique thread ends because of an exception, a new thread replaces it. This thread pool guarantees that all tasks are executed in the order in which they were submitted.
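The four pool types described above match the standard factory methods of java.util.concurrent.Executors; a terminal might create them roughly as follows (the pool sizes here are illustrative assumptions, not values from this disclosure).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// The four thread pool types discussed above, created via the standard
// java.util.concurrent.Executors factory methods; sizes are illustrative.
public class PoolFactory {
    ExecutorService fixedPool = Executors.newFixedThreadPool(4);                   // FixedThreadPool
    ExecutorService cachedPool = Executors.newCachedThreadPool();                  // CachedThreadPool
    ScheduledExecutorService scheduledPool = Executors.newScheduledThreadPool(2);  // ScheduledThreadPool
    ExecutorService singlePool = Executors.newSingleThreadExecutor();              // SingleThreadExecutor
}
```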
In this example, the target thread pool allocated to the target task in terminal devices of the target device model may be determined according to the second execution duration. For example, a developer may preset a correspondence between execution duration intervals and thread pools, so that the thread pool corresponding to the second execution duration can be determined according to this correspondence and the execution duration interval in which the second execution duration falls, and that thread pool is taken as the target thread pool allocated to the target task in terminal devices of the target device model.
In some alternative implementations, in which the terminal device includes three thread pools, step 204 may specifically be performed as follows (a brief sketch of this selection follows the list below):
1) In response to determining that the second execution duration is greater than or equal to a preset first duration threshold (e.g., 1 minute), the thread pool allocated to the target task in terminal devices of the target device model is determined to be a first thread pool, where the first thread pool is a thread pool that reuses a fixed number of threads, i.e., a FixedThreadPool. Here, the first duration threshold may be set by a developer according to actual needs. Because the size of a FixedThreadPool is fixed and stays unchanged once it reaches its maximum, allocating target tasks whose second execution duration is greater than or equal to the first duration threshold to a FixedThreadPool places the long-running tasks in the same thread pool and limits the number of threads executing them. This avoids executing too many long-running tasks at once, prevents the CPU from being constantly occupied and unable to execute other tasks (which would cause starvation and blocking), and makes task execution smoother.
2) In response to determining that the second execution duration lies between the first duration threshold and a preset second duration threshold (e.g., 10 seconds), the thread pool allocated to the target task in terminal devices of the target device model may be determined to be a second thread pool, where the second duration threshold is smaller than the first duration threshold and the second thread pool is a timed-task thread pool, i.e., a ScheduledThreadPool. Here, the second duration threshold may also be set by a developer according to actual needs. By directly reusing the core threads in the ScheduledThreadPool to execute tasks, the cost of repeatedly creating and destroying threads can be reduced, thereby improving processing efficiency.
3) In response to determining that the second execution duration is less than or equal to the second duration threshold, the thread pool allocated to the target task in terminal devices of the target device model may be determined to be a third thread pool, where the third thread pool may be a cacheable thread pool, i.e., a CachedThreadPool. Placing all tasks with short execution durations into the CachedThreadPool guarantees their response speed even when many tasks execute concurrently; once a thread has finished a task, if it receives no new task within a preset period (for example, 1 minute), it is destroyed and no longer occupies resources. This saves resources and improves task processing efficiency.
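Under the example thresholds above (1 minute and 10 seconds) and assuming exactly three pools, the selection logic could be sketched as follows; the class and enum names are hypothetical.

```java
import java.util.concurrent.TimeUnit;

// Map the second execution duration onto one of the three thread pools using the
// example thresholds from the text; purely an illustrative sketch.
public class PoolSelector {
    private static final long FIRST_THRESHOLD_MILLIS = TimeUnit.MINUTES.toMillis(1);
    private static final long SECOND_THRESHOLD_MILLIS = TimeUnit.SECONDS.toMillis(10);

    public enum PoolType { FIXED, SCHEDULED, CACHED }

    public static PoolType select(long secondExecutionDurationMillis) {
        if (secondExecutionDurationMillis >= FIRST_THRESHOLD_MILLIS) {
            return PoolType.FIXED;      // long tasks -> FixedThreadPool
        } else if (secondExecutionDurationMillis > SECOND_THRESHOLD_MILLIS) {
            return PoolType.SCHEDULED;  // medium tasks -> ScheduledThreadPool
        } else {
            return PoolType.CACHED;     // short tasks -> CachedThreadPool
        }
    }
}
```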
In some optional implementations, the thread pool allocation method may further include the following step: sending the target task identifier of the target task and the thread pool identifier of the target thread pool to the terminal devices of the target device model, so that those terminal devices allocate the thread pool to the target task, for example by assigning the target task to the target thread pool.
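On the terminal side, handling the received (task identifier, thread pool identifier) pair can amount to a lookup before each submission. The sketch below is a hypothetical illustration of that dispatch and is not the protocol defined by this disclosure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;

// Hypothetical terminal-side dispatch: remember the server-assigned thread pool
// identifier per task identifier and route later submissions of that task accordingly.
public class TaskDispatcher {
    private final Map<String, ExecutorService> poolsById = new ConcurrentHashMap<>();
    private final Map<String, String> poolIdByTaskId = new ConcurrentHashMap<>();

    public void registerPool(String poolId, ExecutorService pool) {
        poolsById.put(poolId, pool);
    }

    // Called when the server sends the target task identifier and the thread pool identifier.
    public void onAllocation(String taskId, String poolId) {
        poolIdByTaskId.put(taskId, poolId);
    }

    public void submit(String taskId, Runnable task) {
        String poolId = poolIdByTaskId.get(taskId);
        ExecutorService pool = poolId == null ? null : poolsById.get(poolId);
        if (pool == null) {
            // No allocation received yet; a real implementation would fall back to a default pool.
            throw new IllegalStateException("no thread pool allocated for task " + taskId);
        }
        pool.submit(task);
    }
}
```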
Reviewing the above procedure: in the above embodiments of this specification, multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices are first acquired, where each piece of historical task execution information may include a task identifier, a historical execution duration and a device model. Next, any one of the device models is taken as a target device model and any one of the tasks as a target task, and multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task are split out from the acquired historical task execution information. Then, the historical execution duration included in each piece of target task execution information is taken as a first execution duration, and a second execution duration for terminal devices of the target device model to execute the target task is determined based on the first execution durations of the multiple pieces of target task execution information. Finally, a target thread pool allocated to the target task in terminal devices of the target device model is determined according to the second execution duration. Because terminal devices of different device models do not execute the same task under identical conditions, the thread pool allocated to a task in terminal devices of different device models can be determined according to the execution duration of that task on terminal devices of each device model. In this way, a suitable thread pool can be allocated to a task based on the device model, so that the task executes more smoothly, long waits are avoided, and processing efficiency is improved.
According to an embodiment of another aspect, a thread pool allocation apparatus is provided. The thread pool allocation apparatus may be deployed in a server.
Fig. 5 shows a schematic block diagram of a thread pool allocation apparatus according to one embodiment. The apparatus shown in Fig. 5 is used to perform the method shown in Fig. 2. As shown in Fig. 5, the thread pool allocation apparatus 500 includes: an acquisition unit 501 configured to acquire multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices, where each piece of historical task execution information includes a task identifier, a historical execution duration and a device model; a splitting unit 502 configured to take any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and to split out, from the pieces of historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task; a first determining unit 503 configured to take the historical execution duration included in each piece of target task execution information as a first execution duration and to determine, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for terminal devices of the target device model to execute the target task; and a second determining unit 504 configured to determine, according to the second execution duration, a target thread pool allocated to the target task in terminal devices of the target device model.
In some optional implementations of this embodiment, the first determining unit 503 includes: a clustering module (not shown in the figure) configured to cluster the multiple pieces of target task execution information using a clustering algorithm and to determine one class cluster from the at least one class cluster obtained by clustering as the target class cluster, where the target class cluster includes multiple pieces of target task execution information; and a determining module (not shown in the figure) configured to determine, based on the first execution durations of the pieces of target task execution information included in the target class cluster, the second execution duration for terminal devices of the target device model to execute the target task.
In some optional implementations of this embodiment, the clustering algorithm includes a density-based clustering algorithm, and the clustering module is further configured to: map each piece of target task execution information among the multiple pieces to a point in a coordinate system according to its first execution duration; cluster the points in the coordinate system using the density-based clustering algorithm to obtain at least one class cluster; and determine one class cluster from the at least one class cluster as the target class cluster.
In some optional implementations of this embodiment, the determining module is further configured to: perform a statistical analysis on the first execution durations of the pieces of target task execution information included in the target class cluster; and determine, according to the result of the statistical analysis, the second execution duration for terminal devices of the target device model to execute the target task.
In some optional implementations of this embodiment, the apparatus 500 further includes: a sending unit (not shown in the figure) configured to send the target task identifier of the target task and the thread pool identifier of the target thread pool to the terminal devices of the target device model, so that those terminal devices allocate the thread pool to the target task.
In some optional implementations of this embodiment, the second determining unit 504 is further configured to: in response to determining that the second execution duration is greater than or equal to a preset first duration threshold, determine that the thread pool allocated to the target task in terminal devices of the target device model is a first thread pool, where the first thread pool is a thread pool that reuses a fixed number of threads; in response to determining that the second execution duration lies between the first duration threshold and a preset second duration threshold, determine that the thread pool allocated to the target task in terminal devices of the target device model is a second thread pool, where the second thread pool is a timed-task thread pool and the second duration threshold is smaller than the first duration threshold; and in response to determining that the second execution duration is less than or equal to the second duration threshold, determine that the thread pool allocated to the target task in terminal devices of the target device model is a third thread pool, where the third thread pool is a cacheable thread pool.
In some optional implementations of this embodiment, the multiple pieces of target task execution information are generated by a plurality of terminal devices of the target device model executing the target task within a preset period of time.
The foregoing apparatus embodiments correspond to the method embodiments and have the same technical effects as the corresponding method embodiments; for specific details, refer to the description of the method embodiments, which is not repeated here.
According to an embodiment of another aspect, a computer-readable storage medium is also provided, on which a computer program is stored which, when executed in a computer, causes the computer to perform the method described above with reference to Fig. 2.
According to an embodiment of still another aspect, an electronic device is provided, comprising a memory and a processor, where executable code is stored in the memory and the processor implements the method described above with reference to Fig. 2 when executing the executable code.
The foregoing describes specific embodiments of the present disclosure; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve desirable results. Furthermore, the processes depicted in the accompanying figures do not necessarily have to follow the particular order shown, or a sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Referring now to Fig. 6, a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present application is shown. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in Fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. While Fig. 6 shows an electronic device 600 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in Fig. 6 may represent one device or multiple devices as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present application are performed.
The present description also provides a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method provided in the present description.
The computer-readable medium according to the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this specification, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of this specification, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take a variety of forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination thereof.
The computer-readable medium may be contained in the electronic device, or it may exist separately without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices, where each piece of historical task execution information includes a task identifier, a historical execution duration and a device model; take any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and split out, from the historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task; take the historical execution duration included in each piece of target task execution information as a first execution duration, and determine, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for the terminal devices of the target device model to execute the target task; and determine, according to the second execution duration, a target thread pool allocated to the target task in the terminal devices of the target device model.
Computer program code for carrying out the operations of the embodiments of this specification may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or electronic device. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the storage medium and computing device embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The foregoing detailed description further explains the objects, technical solutions and advantageous effects of the embodiments of the present invention. It should be understood that the foregoing describes only specific embodiments of the present invention and is not intended to limit the scope of protection of the present invention; any modification, equivalent substitution, improvement, etc. made on the basis of the technical solutions of the present invention shall be included in the scope of protection of the present invention.

Claims (10)

1. A thread pool allocation method, comprising:
acquiring multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices, wherein each piece of historical task execution information comprises a task identifier, a historical execution duration and a device model;
taking any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and splitting out, from the pieces of historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task;
taking the historical execution duration included in each piece of target task execution information as a first execution duration, and determining, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for the terminal devices of the target device model to execute the target task; and
determining, according to the second execution duration, a target thread pool allocated to the target task in the terminal devices of the target device model.
2. The method of claim 1, wherein the determining, based on the first execution durations of the multiple pieces of target task execution information, of the second execution duration for the terminal devices of the target device model to execute the target task comprises:
clustering the multiple pieces of target task execution information by using a clustering algorithm, and determining one class cluster from at least one class cluster obtained by clustering as a target class cluster, wherein the target class cluster comprises multiple pieces of target task execution information; and
determining, based on the first execution durations of the pieces of target task execution information included in the target class cluster, the second execution duration for the terminal devices of the target device model to execute the target task.
3. The method of claim 2, wherein the clustering algorithm comprises a density-based clustering algorithm, and the clustering of the multiple pieces of target task execution information by using the clustering algorithm and the determining of one class cluster from the at least one class cluster obtained by clustering as the target class cluster comprise:
mapping each piece of target task execution information among the multiple pieces of target task execution information to a point in a coordinate system according to its first execution duration;
clustering the points in the coordinate system by using the density-based clustering algorithm to obtain at least one class cluster; and
determining one class cluster from the at least one class cluster as the target class cluster.
4. The method of claim 2, wherein the determining, based on the first execution durations of the pieces of target task execution information included in the target class cluster, of the second execution duration for the terminal devices of the target device model to execute the target task comprises:
performing a statistical analysis on the first execution durations of the pieces of target task execution information included in the target class cluster; and
determining, according to a result of the statistical analysis, the second execution duration for the terminal devices of the target device model to execute the target task.
5. The method of claim 1, wherein the method further comprises:
sending the target task identifier of the target task and the thread pool identifier of the target thread pool to the terminal devices of the target device model, so that the terminal devices of the target device model allocate the thread pool to the target task.
6. The method of claim 1, wherein the determining, according to the second execution duration, of a target thread pool allocated to the target task in the terminal devices of the target device model comprises:
in response to determining that the second execution duration is greater than or equal to a preset first duration threshold, determining that the thread pool allocated to the target task in the terminal devices of the target device model is a first thread pool, wherein the first thread pool is a thread pool that reuses a fixed number of threads;
in response to determining that the second execution duration lies between the first duration threshold and a preset second duration threshold, determining that the thread pool allocated to the target task in the terminal devices of the target device model is a second thread pool, wherein the second thread pool is a timed-task thread pool and the second duration threshold is smaller than the first duration threshold; and
in response to determining that the second execution duration is less than or equal to the second duration threshold, determining that the thread pool allocated to the target task in the terminal devices of the target device model is a third thread pool, wherein the third thread pool is a cacheable thread pool.
7. The method of claim 1, wherein the multiple pieces of target task execution information are generated by a plurality of terminal devices of the target device model executing the target task within a preset period of time.
8. A thread pool allocation apparatus, comprising:
an acquisition unit configured to acquire multiple pieces of historical task execution information sent by a plurality of terminal devices and corresponding to tasks executed on those devices, wherein each piece of historical task execution information comprises a task identifier, a historical execution duration and a device model;
a splitting unit configured to take any one of a plurality of device models as a target device model and any one of a plurality of tasks as a target task, and to split out, from the pieces of historical task execution information, multiple pieces of target task execution information that were sent by terminal devices of the target device model and that correspond to the target task;
a first determining unit configured to take the historical execution duration included in each piece of target task execution information as a first execution duration, and to determine, based on the first execution durations of the multiple pieces of target task execution information, a second execution duration for the terminal devices of the target device model to execute the target task; and
a second determining unit configured to determine, according to the second execution duration, a target thread pool allocated to the target task in the terminal devices of the target device model.
9. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
10. An electronic device comprising a memory having executable code stored therein and a processor, which when executing the executable code, implements the method of any of claims 1-7.
CN202310147054.XA 2023-02-20 2023-02-20 Thread pool allocation method and device Pending CN116089094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310147054.XA CN116089094A (en) 2023-02-20 2023-02-20 Thread pool allocation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310147054.XA CN116089094A (en) 2023-02-20 2023-02-20 Thread pool allocation method and device

Publications (1)

Publication Number Publication Date
CN116089094A true CN116089094A (en) 2023-05-09

Family

ID=86214038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310147054.XA Pending CN116089094A (en) 2023-02-20 2023-02-20 Thread pool allocation method and device

Country Status (1)

Country Link
CN (1) CN116089094A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination