CN114461385A - Thread pool scheduling method, device and equipment and readable storage medium - Google Patents


Info

Publication number
CN114461385A
Authority
CN
China
Prior art keywords
thread pool
thread
task
signaling
consumed time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111638547.0A
Other languages
Chinese (zh)
Inventor
胡星蓓
郭明青
员晓毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yueling Information Technology Co ltd
Shenzhen ZNV Technology Co Ltd
Original Assignee
Shanghai Yueling Information Technology Co ltd
Shenzhen ZNV Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yueling Information Technology Co ltd, Shenzhen ZNV Technology Co Ltd filed Critical Shanghai Yueling Information Technology Co ltd
Priority to CN202111638547.0A
Publication of CN114461385A
Legal status: Pending


Classifications

    • G06F9/505 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals, considering the load
    • G06F9/5022 Mechanisms to release resources
    • G06F9/5038 Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F2209/5011 Pool
    • G06F2209/5018 Thread allocation
    • G06F2209/5021 Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a thread pool scheduling method, apparatus, device and readable storage medium. After a signaling task is preprocessed, the task number ratio between the amount of pending tasks and the total amount of tasks in each thread pool, the average consumed time of processing tasks, and the thread utilization rate are obtained; a load state value of each thread pool is determined from the task number ratio, the average consumed time and the thread utilization rate; a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate is then determined, and a thread pool is allocated to the signaling task according to that strategy. The allocation strategy includes releasing a thread pool according to the load state value and the average consumed time, and creating a thread pool according to at least one of the task number ratio, the average consumed time and the thread utilization rate. The method improves the concurrent-communication capability of the service equipment and solves the problem of low information-processing efficiency of the dynamic environment monitoring system.

Description

Thread pool scheduling method, device and equipment and readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for scheduling a thread pool.
Background
The dynamic environment monitoring system is mainly used for monitoring the running state and working parameters of each system device, detecting component faults or abnormal parameters, immediately raising alarms through various channels, recording historical data and alarm events, and providing powerful functions such as intelligent expert diagnosis suggestions, remote monitoring and management, and web browsing. It makes monitoring work simpler and more convenient: devices can be regulated and controlled from a single screen, enabling unattended operation and saving human-resource investment.
The dynamic environment monitoring system usually needs to frequently issue signaling tasks to the hardware devices at the user side, and has high requirements on the real-time performance of signaling delivery. However, because the terminal devices at the user side vary in type, devices from different manufacturers are affected by factors such as their own hardware and networks, so their response speed to signaling also differs. When the volume of issued tasks is large and the hardware responds slowly, tasks are not processed in time, service responses become sluggish, the information-processing pressure on the hardware devices increases, and the information-processing efficiency of the dynamic environment monitoring system decreases.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a thread pool scheduling method, aiming at solving the problem of low information-processing efficiency of the dynamic environment monitoring system.
In order to achieve the above object, the present invention provides a thread pool scheduling method, where the thread pool scheduling method includes:
after the signaling task is preprocessed, acquiring the ratio of the number of tasks to be processed to the total number of tasks in a thread pool, the average consumed time of processing the tasks and the thread utilization rate;
determining a load state value of each thread pool according to the task number ratio, the average consumed time and the thread utilization rate;
determining a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate, and allocating a thread pool corresponding to the signaling task according to the thread pool allocation strategy, wherein the thread pool allocation strategy comprises determining the release of the thread pool according to the load state value and the average consumed time, and determining the creation of the thread pool according to at least one of the task number ratio, the average consumed time and the thread utilization rate.
Optionally, before the step of obtaining the ratio of the number of tasks to be processed to the total number of tasks in the thread pool, the average consumed time of processing the tasks, and the thread usage rate, the method further includes:
when the signaling task is received, acquiring signaling parameters in the signaling task, wherein the signaling parameters comprise at least one of task issuing time, task response time, task type, device, keyword and sequence number;
and preprocessing the signaling according to the signaling parameters, wherein the preprocessing comprises parsing, filtering and/or merging.
Optionally, the step of determining the load status value of each thread pool according to the task number ratio, the average consumed time and the thread usage rate includes:
determining a weighted threshold corresponding to the task number ratio, the average consumed time and the thread utilization rate according to server configuration, wherein the weighted threshold is a positive value;
and determining the load state value according to the weighting threshold value, the task quantity ratio, the average consumed time and the thread utilization rate.
Optionally, after the step of determining the load status value of each thread pool according to the task number ratio, the average consumed time, and the thread usage rate, the method further includes:
and when thread pools with the same load state value appear, determining which of those thread pools has the higher priority through a hash algorithm.
Optionally, the step of determining a thread pool allocation policy corresponding to the load status value, the task number ratio, the average consumed time, and the thread usage rate, and allocating a thread pool corresponding to the signaling task according to the thread pool allocation policy includes:
detecting the load state value, the task number ratio, the average consumed time and the thread utilization rate;
when the load state value and the average consumed time meet a first scheduling condition, determining that the thread pool is idle, and releasing a sub-thread pool in the thread pool to reduce signaling occupation;
when the thread utilization rate and the task number ratio meet a second scheduling condition, determining that the thread pool is busy, and creating a new sub-thread pool in the thread pool to prevent signaling loss;
when the thread utilization rate and the average consumed time meet a third scheduling condition, determining that the thread pool is executing abnormally, and creating a new sub-thread pool to isolate the signaling;
and when the load state value and the task number ratio meet a fourth scheduling condition, determining that the thread pool is saturated, and returning to the user side the signaling that the thread pool cannot process.
Optionally, the first scheduling condition is that the load state value is equal to 0 and the average consumed time is equal to 0; the second scheduling condition is that the thread utilization rate is equal to 1 and the task number ratio is greater than 0.8; the third scheduling condition is that the thread utilization rate is greater than 0.8 and the average consumed time is greater than 3 seconds; and the fourth scheduling condition is that the load state value is equal to 1 and the task number ratio is equal to 1.
Optionally, after the step of determining the load status value, the task quantity ratio, the average consumed time, and the thread pool allocation policy corresponding to the thread usage rate, and allocating the thread pool corresponding to the signaling task according to the thread pool allocation policy, the method further includes:
and processing the signaling task according to the allocated thread pool, and feeding back a task result obtained after the signaling task is processed to the user side.
In addition, to achieve the above object, the present invention further provides a thread pool scheduling apparatus, including:
the parameter acquisition module is used for acquiring the task quantity ratio between the task quantity to be processed and the total task quantity in the thread pool, the average consumed time of processing tasks and the thread utilization rate after preprocessing the signaling task;
the load state value determining module is used for determining the load state value of each thread pool according to the task number ratio, the average consumed time and the thread utilization rate;
and the thread pool scheduling module is used for determining a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate, and for allocating a thread pool corresponding to the signaling task according to the thread pool allocation strategy.
In addition, to achieve the above object, the present invention further provides a thread pool scheduling apparatus, where the thread pool scheduling apparatus includes a memory, a processor, and a thread pool scheduler stored in the memory and executable on the processor, and the thread pool scheduler, when executed by the processor, implements the steps of the thread pool scheduling method described above.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, having a thread pool scheduler stored thereon, which when executed by a processor implements the steps of the thread pool scheduling method as described in any one of the above.
The embodiment of the invention provides a thread pool scheduling method, a thread pool scheduling device and a computer-readable storage medium. After a signaling task is preprocessed, the task number ratio between the amount of pending tasks and the total amount of tasks in each thread pool, the average consumed time of processing tasks, and the thread utilization rate are obtained; a load state value of each thread pool is determined from the task number ratio, the average consumed time and the thread utilization rate; a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate is then determined, and a thread pool is allocated to the signaling task according to that strategy. The allocation strategy includes releasing a thread pool according to the load state value and the average consumed time, and creating a thread pool according to at least one of the task number ratio, the average consumed time and the thread utilization rate.
The method defines three parameters that reflect the current workload of a thread pool: the task number ratio, the average consumed time and the thread utilization rate. From these three parameters, a load state value is defined as a standard reflecting the health of each thread pool. Finally, the load state value, task number ratio, average consumed time and/or thread utilization rate are compared against the load state threshold, task number ratio threshold, average consumed time threshold and/or thread utilization rate threshold set in preset scheduling conditions, and the corresponding thread pool allocation strategy is invoked. This improves the concurrent-communication capability of the service equipment and solves the problem of low information-processing efficiency of the dynamic environment monitoring system.
Drawings
Fig. 1 is a schematic hardware structure diagram of a thread pool scheduling device according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a thread pool scheduling method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a thread pool scheduling method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a detailed process of step S10 in the third embodiment of the thread pool scheduling method according to the present invention;
FIG. 5 is a flowchart illustrating a thread pool scheduling method according to a fourth embodiment of the present invention;
FIG. 6 is a flowchart illustrating a detailed process of step S30 in the fifth embodiment of the thread pool scheduling method according to the present invention;
FIG. 7 is a flowchart illustrating a thread pool scheduling method according to a sixth embodiment of the present invention;
FIG. 8 is a block diagram of a thread pool scheduler according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It is to be understood that the appended drawings illustrate exemplary embodiments of the invention, which may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
At the initial stage of task execution, each thread pool processes the task signaling sent by the user side at random, regardless of its load condition. This may leave the thread pools with unbalanced workloads and therefore uneven response speeds of the dynamic environment monitoring system to task signaling; a slow response lowers the program's task-processing speed. The invention therefore realizes ordered invocation of thread pools by setting priority conditions on top of multithreading, improving the capability for concurrent communication.
As an implementation, the hardware architecture of the thread pool scheduling apparatus may be as shown in fig. 1.
The embodiment of the invention relates to a thread pool scheduling device, which comprises: a processor 101, e.g. a CPU; a memory 102; and a communication bus 103. The communication bus 103 is used to enable connection and communication between these components.
The memory 102 may be a high-speed RAM memory or a non-volatile memory (e.g., a disk memory). As shown in FIG. 1, the memory 102, which is a computer-readable storage medium, may include a thread pool scheduling program; and the processor 101 may be configured to invoke the thread pool scheduling program stored in the memory 102 and perform the following operations:
after the signaling task is preprocessed, acquiring the ratio of the number of tasks to be processed to the total number of tasks in a thread pool, the average consumed time of processing the tasks and the thread utilization rate;
determining a load state value of each thread pool according to the task number ratio, the average consumed time and the thread utilization rate;
determining a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate, and allocating a thread pool corresponding to the signaling task according to the thread pool allocation strategy, wherein the thread pool allocation strategy comprises determining the release of the thread pool according to the load state value and the average consumed time, and determining the creation of the thread pool according to at least one of the task number ratio, the average consumed time and the thread utilization rate.
In one embodiment, the processor 101 may be configured to invoke a thread pool scheduler stored in the memory 102 and perform the following operations:
when the signaling task is received, acquiring signaling parameters in the signaling task, wherein the signaling parameters comprise at least one of task issuing time, task response time, task type, device, keyword and sequence number;
and preprocessing the signaling according to the signaling parameters, wherein the preprocessing comprises parsing, filtering and/or merging.
In one embodiment, the processor 101 may be configured to invoke a thread pool scheduler stored in the memory 102 and perform the following operations:
determining a weighted threshold corresponding to the task number ratio, the average consumed time and the thread utilization rate according to server configuration, wherein the weighted threshold is a positive value;
and determining the load state value according to the weighting threshold value, the task quantity ratio, the average consumed time and the thread utilization rate.
In one embodiment, the processor 101 may be configured to invoke a thread pool scheduler stored in the memory 102 and perform the following operations:
and when thread pools with the same load state value appear, determining which of those thread pools has the higher priority through a hash algorithm.
In one embodiment, the processor 101 may be configured to invoke a thread pool scheduler stored in the memory 102 and perform the following operations:
detecting the load state value, the task number ratio, the average consumed time and the thread utilization rate;
when the load state value and the average consumed time meet a first scheduling condition, determining that the thread pool is idle, and releasing a sub-thread pool in the thread pool to reduce signaling occupation;
when the thread utilization rate and the task number ratio meet a second scheduling condition, determining that the thread pool is busy, and creating a new sub-thread pool in the thread pool to prevent signaling loss;
when the thread utilization rate and the average consumed time meet a third scheduling condition, determining that the thread pool is executing abnormally, and creating a new sub-thread pool to isolate the signaling;
and when the load state value and the task number ratio meet a fourth scheduling condition, determining that the thread pool is saturated, and returning to the user side the signaling that the thread pool cannot process.
In one embodiment, the processor 101 may be configured to invoke a thread pool scheduler stored in the memory 102 and perform the following operations:
and processing the signaling task according to the allocated thread pool, and feeding back the task result obtained after processing to the user side.
Based on the above hardware architecture of the thread pool scheduling device in the field of communication technology, embodiments of the thread pool scheduling method are provided.
Referring to fig. 2, in a first embodiment, the thread pool scheduling method includes the following steps:
Step S10: after the signaling task is preprocessed, acquiring the ratio of the number of pending tasks to the total number of tasks in the thread pool, the average consumed time of processing tasks, and the thread utilization rate;
in this embodiment, the current workload condition of the thread pool is reflected according to the ratio of the to-be-processed task amount to the total task amount in the thread pool, the average time of processing tasks in the last minute, the percentage of the thread pool for processing tasks in the total thread pool, and the like, and the data are quantized into three different parameters, namely the task amount ratio, the average consumed time, and the thread utilization rate.
Step S20: determining a load state value of each thread pool according to the task number ratio, the average consumed time and the thread utilization rate;
in this embodiment, a linear function is constructed from three parameters, namely, the task quantity ratio, the average time consumption and the thread utilization rate, and the linear function value is defined as a load state value, and the three parameters are integrated together by the load state value to serve as a standard for judging the health state of each thread pool.
Step S30: determining a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate, and allocating a thread pool corresponding to the signaling task according to the thread pool allocation strategy, wherein the thread pool allocation strategy comprises determining the release of the thread pool according to the load state value and the average consumed time, and determining the creation of the thread pool according to at least one of the task number ratio, the average consumed time and the thread utilization rate.
In this embodiment, the preset scheduling conditions serve as one or more preset programs and include a load state threshold, a task number ratio threshold, an average consumed time threshold, and/or a thread utilization rate threshold; the allocation strategy for the thread pool is determined by comparing these parameters against the thresholds. The allocation strategy increases or decreases the number of thread pools used to process signaling tasks according to the load state value, the task number ratio, the average consumed time, and/or the thread utilization rate, giving the thread pool an "automatic scaling" capability.
In the technical scheme provided by this embodiment, three parameters that reflect the current workload of a thread pool are defined: the task number ratio, the average consumed time and the thread utilization rate. A load state value reflecting the health of each thread pool is defined from these three parameters. Finally, the four parameters are compared against the load state threshold, task number ratio threshold, average consumed time threshold and/or thread utilization rate threshold set in the preset scheduling conditions, and the corresponding thread pool allocation strategy is invoked. This improves the concurrent-communication capability of the service equipment and solves the problem of low information-processing efficiency of the dynamic environment monitoring system.
Referring to fig. 3, in the second embodiment, based on the first embodiment, before the step S10, the method further includes:
step S40: when the signaling task is received, acquiring signaling parameters in the signaling task, wherein the signaling parameters comprise at least one of task issuing time, task response time, task type, equipment, keywords and serial numbers;
step S50: and preprocessing the signaling according to the signaling identifier, wherein the preprocessing comprises analysis, filtering and/or combination.
Optionally, before the task number ratio, the average consumed time and the thread utilization rate of each thread pool are obtained, the service device, on receiving a signaling, obtains parameter information in the signaling, including task issuing time, task response time, task type, device, keyword and/or sequence number, and preprocesses the signaling task. The preprocessing may include parsing the signaling task, that is, analyzing whether it is an executable signaling task; then filtering out task signaling that cannot be executed; and finally merging and deduplicating redundant, repeated task signaling. The specifics of preprocessing are not detailed in this embodiment, but it should be emphasized that preprocessing the signaling is also a means of reducing thread pool load and improving the parallel capability of communication.
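The parse/filter/merge pipeline can be sketched as below. This is a hedged illustration only: the dictionary shape of a signaling and the "unsupported" marker are assumptions, since the patent does not specify a wire format:

```python
def preprocess(signalings):
    """Parse, filter, and merge incoming signaling tasks.

    Hypothetical shape: each signaling is a dict with at least 'type' and
    'device' keys, and optionally 'keyword' and 'seq'.
    """
    # Parse/validate: keep only signalings that carry the required fields.
    parsed = [s for s in signalings if "type" in s and "device" in s]
    # Filter: drop task signaling that cannot be executed.
    executable = [s for s in parsed if s["type"] != "unsupported"]
    # Merge/deduplicate: collapse repeated signalings for the same
    # (device, keyword, sequence number) triple.
    merged, seen = [], set()
    for s in executable:
        key = (s["device"], s.get("keyword"), s.get("seq"))
        if key not in seen:
            seen.add(key)
            merged.append(s)
    return merged
```

Deduplicating before dispatch means repeated signalings never reach a thread pool, which is exactly the load-reduction effect the paragraph above describes.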
In the technical scheme provided by this embodiment, before the workload data of the thread pools is obtained, the signaling parameters in received signaling are acquired and the signaling is preprocessed according to its task issuing time, task response time, task type, device, keyword and/or sequence number, which reduces the load on the thread pools and improves the information-processing efficiency of the dynamic environment monitoring system.
Referring to fig. 4, in the third embodiment, based on the above embodiment, the step S10 includes:
step S11: determining a weighted threshold corresponding to the task number ratio, the average consumed time and the thread utilization rate according to server configuration, wherein the weighted threshold is a positive value;
step S12: and determining the load state value according to the weighting threshold value, the task quantity ratio, the average consumed time and the thread utilization rate.
Optionally, this embodiment provides a method for defining the load state of a thread pool. The server configuration may include the number of processor cores and the memory size of the server, and the weighting thresholds determined differ with the configuration. Exemplarily, a server with more than 16 CPU cores and more than 32 GB of memory is defined as a higher-performance server, and the weighting thresholds are determined as follows: the task number ratio weighting threshold P1 = 0.4, the average consumed time weighting threshold P2 = 0.4, and the thread utilization rate weighting threshold P3 = 0.2, where P1 + P2 + P3 = 1, P1 > 0, P2 > 0 and P3 > 0. Otherwise, when the server configuration does not meet this condition, the weighting thresholds are defined as P1 = 0.25, P2 = 0.25 and P3 = 0.5. It should be emphasized that the weighting thresholds are not fixed once set; they can be adjusted automatically according to subsequent server operating conditions. Further, after the weighting thresholds are determined, a linear function is constructed from the weighting thresholds, the task number ratio, the average consumed time and the thread utilization rate, and the load state is quantified by the value of this function. For example, let the load state of the thread pool be H; then H = C × P1 + AC × P2 + TAV × P3, where C is the task number ratio, AC the average consumed time and TAV the thread utilization rate. A larger H means a heavier current workload and worse "health" of the thread pool, and in subsequent thread pool scheduling a less healthy pool receives a lower scheduling priority.
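The weighting and the linear function H = C × P1 + AC × P2 + TAV × P3 can be sketched as follows. The function names are illustrative; note also that, as written in the text, AC is a time in seconds while C and TAV lie in [0, 1], so a real implementation would presumably normalize AC before weighting:

```python
def weighting_thresholds(cpu_cores: int, memory_gb: int):
    # Example weights from the embodiment: P1 + P2 + P3 = 1, all positive.
    # The text notes these are not fixed and may be adjusted automatically
    # based on subsequent server operating conditions.
    if cpu_cores > 16 and memory_gb > 32:
        return 0.4, 0.4, 0.2      # higher-performance server
    return 0.25, 0.25, 0.5        # all other configurations

def load_state(c: float, ac: float, tav: float,
               p1: float, p2: float, p3: float) -> float:
    # H = C*P1 + AC*P2 + TAV*P3; a larger H means a heavier load
    # (c: task number ratio, ac: average consumed time, tav: thread usage).
    return c * p1 + ac * p2 + tav * p3
```

With equal mid-range inputs (0.5 each) the weighted sum collapses to 0.5 regardless of the weights, since they sum to 1; the weights only matter when the three parameters diverge.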
In the technical scheme provided by this embodiment, the weighting thresholds are determined according to the server configuration, and the load state is determined according to the weighting thresholds, the task quantity ratio, the average consumed time and the thread usage rate. Because the thresholds are planned reasonably according to the server configuration, the resulting load state value can reasonably reflect the "health condition" of the server's thread pools.
Referring to fig. 5, in the fourth embodiment, based on the above embodiment, after step S12, the method further includes:
step S60: when thread pools with the same load state value appear, determining which of these thread pools has the higher priority through a hash algorithm.
Optionally, the present embodiment provides a selection scheme for the case where thread pool load state values are equal. When thread pools with the same load state value appear, a hash algorithm is used to prioritize among them. A hash table is a structure that stores data as key-value pairs (key-indexed); as long as the key to be looked up is provided, the value corresponding to that key can be found directly.
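The patent does not spell out the tie-breaking rule, so the following is only a sketch of one deterministic scheme consistent with the description: hash each pool's identifier and give the smallest digest the highest priority. The pool identifiers and the choice of SHA-256 are assumptions for illustration.

```python
import hashlib

def break_tie(pool_ids: list) -> str:
    """Among pools with equal load state H, pick a deterministic winner:
    the pool whose hashed identifier has the smallest hex digest."""
    return min(pool_ids, key=lambda pid: hashlib.sha256(pid.encode()).hexdigest())
```

Because the digest depends only on the identifier, the same set of tied pools always yields the same winner regardless of input order, which avoids the "cannot schedule" situation the text mentions.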
In the technical scheme provided by this embodiment, a hash algorithm is introduced to allocate thread pools with the same load state value, so that a situation that scheduling cannot be performed when thread pools with the same load state value occur is avoided.
Referring to fig. 6, in a fifth embodiment, based on the above embodiment, the step S30 includes:
step S31: detecting the load state value, the task number ratio, the average consumed time and the thread utilization rate;
step S32: when the load state value and the average consumed time meet a first scheduling condition, judging that the thread pool is idle, and releasing a sub-thread pool in the thread pool to reduce signaling occupation; when the thread usage rate and the task quantity ratio meet a second scheduling condition, judging that the thread pool is busy, and creating a new sub-thread pool in the thread pool to prevent signaling loss; when the thread usage rate and the average consumed time meet a third scheduling condition, judging that the thread pool execution is abnormal, and establishing a new sub-thread pool to isolate the signaling; and when the load state value and the task quantity ratio meet a fourth scheduling condition, judging that the thread pool is saturated, and returning to the user side the signaling that the thread pool cannot process.
Optionally, the present embodiment provides an allocation policy for the thread pool. The preset scheduling conditions serve as one or more preset programs and include parameter thresholds such as a load state threshold, a task quantity ratio threshold, an average consumed time threshold and/or a thread usage rate threshold. The thread pool allocation policy is determined according to how the parameters compare with these thresholds: the policy increases or decreases the number of thread pools used for processing signaling tasks according to the load state value, the task quantity ratio, the average consumed time and/or the thread usage rate, giving the thread pools the ability to scale automatically.
Illustratively, let the load state value be H, the average consumed time be AC, the task quantity ratio be C, and the thread usage rate be TAV. When H = 0 and AC = 0 (i.e., the number of tasks waiting in the queue is 0, the average consumed time of task processing within about the last 1 min is 0, and the thread usage rate is 0), the first preset scheduling condition is met. This means that the number of task signalings is small, and the current thread pool is judged to be idle; since the thread pool also occupies processor memory space, in order to maximize memory utilization, the sub-thread pools in the thread pool are released while the core/main thread pool is kept unchanged. When TAV = 1 and C > 0.8, the second preset scheduling condition is met. This means that there are many task signalings but the thread pool can still continue to receive and process signaling; to prevent task signaling from being lost, a new sub-thread pool is established in memory to extend the number of thread pools and forestall signaling loss, but to avoid too many thread pools occupying too much memory, usually no more than 10 sub-thread pools are set. When TAV = 1 and AC > 3, the third preset scheduling condition is met: because the response time of devices in a dynamic loop project is generally less than 2 seconds, exceeding 3 seconds indicates that the monitored device is operating abnormally. To prevent this situation from causing a large backlog of signaling tasks that occupies substantial processor resources, a new sub-thread pool is established in memory to isolate the original thread pool, so that even after the thread pool on which one type of service depends is full, the thread pools corresponding to other types of services are not affected. It should be emphasized that the third preset scheduling condition and the fourth preset scheduling condition are mutually exclusive and do not occur simultaneously. Finally, when TAV = 1 and C = 1, the fourth preset scheduling condition is met. This means that all thread pools are fully occupied and the device cannot continue to process more signaling requests; to avoid abnormal damage to or loss of signaling caused by a large backlog of signaling tasks, the signaling request is returned to the user side, accompanied by a prompt message along the lines of "the current server is busy, please resend the signaling later".
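The four conditions above can be sketched as a single dispatch function. This is an illustrative sketch using the example thresholds from the text (AC in seconds); the action names are hypothetical, and the check order (saturation before "busy") is a design choice made here so that C = 1 is not swallowed by the C > 0.8 branch.

```python
def schedule_action(h: float, c: float, ac: float, tav: float) -> str:
    """Map (H, C, AC, TAV) to one of the four scheduling actions in the text."""
    if h == 0 and ac == 0:
        return "release_sub_pools"         # condition 1: pool idle, free memory
    if tav == 1 and c == 1:
        return "reject_and_notify_client"  # condition 4: saturated, return signaling
    if tav == 1 and c > 0.8:
        return "create_sub_pool"           # condition 2: busy, pre-extend pools
    if tav == 1 and ac > 3:
        return "isolate_in_new_sub_pool"   # condition 3: abnormal, isolate signaling
    return "no_action"                     # no condition met: keep pools as-is
```

Usage: `schedule_action(0.8, 0.5, 5, 1)` falls through to the third branch and returns the isolation action.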
In the technical solution provided in this embodiment, four preset scheduling conditions are set to schedule the thread pool and/or process the signaling for four situations: light signaling load, heavy signaling load, abnormal signaling processing, and thread pool saturation. This avoids wasted memory space and the damage or loss caused by a large backlog of signaling, and improves the information processing efficiency of the dynamic loop system.
Referring to fig. 7, in the sixth embodiment, based on any of the above embodiments, after step S30, the method further includes:
step S70: and processing the signaling task according to the distributed thread pool, and feeding back a task result obtained after processing to the user side.
Optionally, the present embodiment provides the processing performed after the thread pool is allocated. After the thread pool is allocated according to the preset scheduling policy, the corresponding signaling task is processed by the allocated thread pool, and the task result is fed back to the user side.
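A minimal sketch of step S70, with hypothetical names: the signaling task runs on the allocated pool and the result is handed back to the caller, here using Python's standard-library `ThreadPoolExecutor` as a stand-in for the allocated thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def process_and_reply(pool, task, payload):
    """Submit the signaling task to the allocated pool and return its result
    wrapped in a reply for the user side."""
    future = pool.submit(task, payload)          # run on the allocated pool
    return {"status": "done", "result": future.result()}  # feed back the result

with ThreadPoolExecutor(max_workers=2) as pool:
    reply = process_and_reply(pool, lambda x: x * 2, 21)
    # reply == {"status": "done", "result": 42}
```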
In the technical scheme provided by this embodiment, a processing-completion message is fed back to the user after the dynamic loop device finishes processing the signaling. This avoids the situation where, after the user side sends a task request, the receiving end processes the signaling abnormally and the user cannot learn the processing status at the receiving end.
In addition, referring to fig. 8, this embodiment further provides a thread pool scheduling apparatus, where the thread pool scheduling apparatus includes:
a parameter obtaining module 100, configured to obtain, after preprocessing a signaling task, a task number ratio between a to-be-processed task amount and a total task amount in a thread pool, an average consumed time of processing tasks, and a thread utilization rate;
a load status value determining module 200, configured to determine a load status value of each thread pool according to the task number ratio, the average consumed time, and the thread usage rate;
a thread pool scheduling module 300, configured to determine a thread pool allocation policy corresponding to the load status value, the task number ratio, the average consumed time, and the thread usage rate, and allocate a thread pool corresponding to the signaling task according to the thread pool allocation policy.
In addition, the present invention also provides a thread pool scheduling device. The thread pool scheduling device includes a memory, a processor, and a thread pool scheduler that is stored in the memory and can run on the processor; when the thread pool scheduler is executed by the processor, it implements the steps of the thread pool scheduling method described above.
In addition, the present invention also provides a computer readable storage medium, which stores a thread pool scheduler, and the thread pool scheduler, when being executed by a processor, implements the steps of the thread pool scheduling method according to the above embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above, and includes several instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A thread pool scheduling method is characterized in that the thread pool scheduling method comprises the following steps:
after the signaling task is preprocessed, acquiring the ratio of the number of tasks to be processed to the total number of tasks in a thread pool, the average consumed time of processing the tasks and the thread utilization rate;
determining a load state value of each thread pool according to the task number ratio, the average consumed time and the thread utilization rate;
determining a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate, and allocating a thread pool corresponding to the signaling task according to the thread pool allocation strategy, wherein the thread pool allocation strategy comprises determining the release of the thread pool according to the load state value and the average consumed time, and determining the creation of the thread pool according to at least one of the task number ratio, the average consumed time and the thread utilization rate.
2. The thread pool scheduling method according to claim 1, wherein before the step of obtaining the ratio of the number of tasks to be processed to the total number of tasks in the thread pool, the average consumed time for processing the tasks, and the thread usage rate, the method further comprises:
when the signaling task is received, acquiring a signaling parameter in the signaling task, wherein the signaling parameter comprises at least one of task issuing time, task response time, task type, equipment, keywords and a serial number;
and preprocessing the signaling according to the signaling parameter, wherein the preprocessing comprises parsing, filtering and/or combining.
3. The method according to claim 1, wherein the step of determining the load status value of each thread pool according to the task number ratio, the average elapsed time and the thread usage rate comprises:
determining a weighted threshold corresponding to the task number ratio, the average consumed time and the thread utilization rate according to server configuration, wherein the weighted threshold is a positive value;
and determining the load state value according to the weighting threshold value, the task quantity ratio, the average consumed time and the thread utilization rate.
4. The method according to claim 3, wherein the step of determining the load status value of each thread pool according to the task number ratio, the average elapsed time and the thread usage rate further comprises:
and when the thread pools with the same load state values appear, determining the thread pool with higher priority by the thread pools with the same state values through a Hash algorithm.
5. The method according to claim 1, wherein the step of determining the thread pool allocation policy corresponding to the load status value, the task number ratio, the average consumed time, and the thread usage rate, and allocating the thread pool corresponding to the signaling task according to the thread pool allocation policy comprises:
detecting the load state value, the task number ratio, the average consumed time and the thread utilization rate;
when the load state value and the average consumed time meet a first scheduling condition, judging that the thread pool is idle, and releasing a sub-thread pool in the thread pool to reduce the signaling occupation;
when the thread utilization rate and the task quantity ratio meet a second scheduling condition, judging that the thread pool is busy, and creating a new sub-thread pool in the thread pool to prevent signaling loss;
when the thread utilization rate and the average consumed time meet a third scheduling condition, judging that the execution of the thread pool is abnormal, and establishing a new sub-thread pool to isolate the signaling;
and when the load state value and the task quantity ratio meet a fourth scheduling condition, judging that the thread pool is saturated, and returning to the user side the signaling that the thread pool cannot process.
6. The thread pool scheduling method of claim 5, wherein the first scheduling condition is that the load status value is equal to 0 and the average elapsed time is equal to 0; the second scheduling condition is that the thread usage rate is equal to 1 and the task number ratio is greater than 0.8; the third scheduling condition is that the thread usage rate is greater than 0.8 and the average elapsed time is greater than 3 seconds; and the fourth scheduling condition is that the load status value is equal to 1 and the task number ratio is equal to 1.
7. The method according to claim 1, wherein after the step of determining the thread pool allocation policy corresponding to the load status value, the task number ratio, the average consumed time, and the thread usage rate, and allocating the thread pool corresponding to the signaling task according to the thread pool allocation policy, the method further comprises:
and processing the signaling task according to the distributed thread pool, and feeding back a task result obtained after processing to the user side.
8. A thread pool scheduling apparatus, wherein the thread pool scheduling apparatus comprises:
the parameter acquisition module is used for acquiring the task quantity ratio between the task quantity to be processed and the total task quantity in the thread pool, the average consumed time of processing tasks and the thread utilization rate after preprocessing the signaling task;
a load state value determining module, configured to determine a load state value of each thread pool according to the task number ratio, the average consumed time, and the thread usage rate;
and the thread pool scheduling module is used for determining a thread pool allocation strategy corresponding to the load state value, the task number ratio, the average consumed time and the thread utilization rate, and allocating a thread pool corresponding to the signaling task according to the thread pool allocation strategy.
9. A thread pool scheduling apparatus, characterized in that the thread pool scheduling apparatus comprises: memory, a processor and a thread pool scheduler stored on the memory and operable on the processor, the thread pool scheduler when executed by the processor implementing the steps of the thread pool scheduling method according to any of claims 1-7.
10. A computer-readable storage medium, having stored thereon a thread pool scheduler, which when executed by a processor implements the steps of the thread pool scheduling method according to any of claims 1-7.
CN202111638547.0A 2021-12-28 2021-12-28 Thread pool scheduling method, device and equipment and readable storage medium Pending CN114461385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111638547.0A CN114461385A (en) 2021-12-28 2021-12-28 Thread pool scheduling method, device and equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111638547.0A CN114461385A (en) 2021-12-28 2021-12-28 Thread pool scheduling method, device and equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114461385A true CN114461385A (en) 2022-05-10

Family

ID=81408021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111638547.0A Pending CN114461385A (en) 2021-12-28 2021-12-28 Thread pool scheduling method, device and equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114461385A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016916A * 2022-06-30 2022-09-06 苏州浪潮智能科技有限公司 Thread pool scheduling method, system, equipment and readable storage medium
CN116126545A * 2023-04-12 2023-05-16 江苏曼荼罗软件股份有限公司 Data extraction method, system, storage medium and equipment for resource scheduling
CN116308850A * 2023-05-19 2023-06-23 深圳市四格互联信息技术有限公司 Account checking method, account checking system, account checking server and storage medium
CN116308850B * 2023-05-19 2023-09-05 深圳市四格互联信息技术有限公司 Account checking method, account checking system, account checking server and storage medium

Similar Documents

Publication Publication Date Title
CN114461385A (en) Thread pool scheduling method, device and equipment and readable storage medium
CN106557369B (en) Multithreading management method and system
CN110351384B (en) Big data platform resource management method, device, equipment and readable storage medium
US8112644B2 (en) Dynamic voltage scaling scheduling mechanism for sporadic, hard real-time tasks with resource sharing
CN112000445A (en) Distributed task scheduling method and system
CN108595282A (en) A kind of implementation method of high concurrent message queue
CN108681481B (en) Service request processing method and device
CN112506808B (en) Test task execution method, computing device, computing system and storage medium
CN110990142A (en) Concurrent task processing method and device, computer equipment and storage medium
CN111240864A (en) Asynchronous task processing method, device, equipment and computer readable storage medium
CN111258746A (en) Resource allocation method and service equipment
CN116991585A (en) Automatic AI calculation power scheduling method, device and medium
CN117149414A (en) Task processing method and device, electronic equipment and readable storage medium
CN103150503A (en) Trojan scanning method and Trojan scanning device
CN115033375A (en) Distributed task scheduling method, device, equipment and storage medium in cluster mode
CN109801425B (en) Queue polling prompting method, device, equipment and storage medium in surface tag service
CN113687931A (en) Task processing method, system and device
CN117056080A (en) Distribution method and device of computing resources, computer equipment and storage medium
CN111143063B (en) Task resource reservation method and device
CN117170842A (en) Thread pool management architecture and thread pool management method
CN114666615B (en) Resource allocation method, device, server, program, and storage medium
CN112395063B (en) Dynamic multithreading scheduling method and system
EP2413240A1 (en) Computer micro-jobs
CN114035926A (en) Application thread scheduling method and device, storage medium and electronic equipment
CN114721791A (en) Task scheduling method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination