CN115617527A - Management method, configuration method, management device and configuration device of thread pool

Info

Publication number: CN115617527A
Application number: CN202211392740.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: thread pool, configuration information, request, configuration, updated
Inventor: 裔韩诚
Assignees: Migu Cultural Technology Co Ltd; China Mobile Communications Group Co Ltd
Application filed by Migu Cultural Technology Co Ltd and China Mobile Communications Group Co Ltd, with priority to CN202211392740.5A

Classifications

    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5072 Grid computing (partitioning or combining of resources)
    • G06F2209/5011 Pool (indexing scheme relating to G06F9/50)
    • G06F2209/5018 Thread allocation (indexing scheme relating to G06F9/50)

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a management method, a configuration method, a management device and a configuration device for a thread pool. The management method comprises the following steps: querying, at preset time intervals, the configuration object of a service thread pool to be polled, and generating a long polling request task according to the configuration object; initiating a long polling request to a server according to the long polling request task, so that the server returns updated configuration information in response to the long polling request; and receiving the updated configuration information returned by the server, and updating the configuration information of the service thread pool with it. In this way, unified configuration information is obtained from the server by means of long polling and used to dynamically adjust the thread pool parameters in the computing device, which reduces the cost of modifying the thread pool configuration and avoids the prior-art drawback of having to restart the online application after the thread pool parameters are modified.

Description

Management method, configuration method, management device and configuration device of thread pool
Technical Field
The invention relates to the technical field of computers, in particular to a management method, a configuration method, a management device and a configuration device of a thread pool.
Background
Internet servers often need to handle highly concurrent application requests from clients, and frequently creating the threads needed to handle these requests is a very resource-consuming operation. The conventional approach is to create a new thread for each new request; while this appears simple to implement, it has significant drawbacks: creating a thread per request takes extra time, and creating and destroying threads consumes additional system resources. To address this, a common prior-art method is to use a thread pool, that is, a thread reuse technique in which pre-created threads execute the current tasks, mitigating the overhead of the thread life cycle and the problem of resource contention.
However, current thread pool technology often runs into the following problems. First, thread pool parameters are difficult to set: because the running mechanism of a thread pool is complex, reasonable parameter configuration depends strongly on a developer's personal technical and business knowledge. In particular, under sudden traffic bursts, parameters that were set too small lead to rejected-execution exceptions, while a thread pool queue that is set too long causes a large number of tasks to accumulate in the queue and task execution times to grow excessively. Second, the cost of modifying thread pool parameters is high: the online application must be restarted after the parameters are modified, and since an Internet distributed cluster nowadays often contains tens or even hundreds of application service instances, such restarts frequently affect online services. Moreover, conventional thread pool technology lacks effective monitoring means; developers often cannot perceive the various runtime indicators of the thread pool, so faults caused by unreasonable parameter configuration cannot be avoided in advance.
Disclosure of Invention
In view of the above, the present invention has been made to provide a management method, a configuration method, a management apparatus, and a configuration apparatus for a thread pool that overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided a method for managing a thread pool, the method including:
inquiring a configuration object of a service thread pool to be polled at preset time intervals, and generating a long polling request task according to the configuration object;
initiating a long polling request to a server according to the long polling request task so that the server returns updated configuration information according to the long polling request;
and receiving the updated configuration information returned by the server, and updating the configuration information of the service thread pool by using the updated configuration information.
Optionally, the configuration object includes configuration information, a configuration information encryption value and/or a listener instance of the service thread pool;
the configuration information comprises a business thread pool name space, the number of core threads, a thread maximum value, a queue type, a queue length, an alarm strategy and/or an alarm threshold value.
Optionally, the method further includes:
acquiring running data of the service thread pool at regular intervals through a monitoring interface, and reporting the running data to the server for storage and later viewing; the running data comprises the number of core threads, the maximum number of threads, the current number of threads, the number of active threads, the maximum number of elements allowed in the queue, the number of elements already stored in the queue and/or the number of times the rejection strategy has been executed.
Optionally, the method further includes:
when the task cache queue of the business thread pool is full and/or the number of occupied threads reaches the maximum thread pool number, or when the task cache queue of the business thread pool and/or the number of occupied threads exceeds a preset threshold value, an alarm rule is triggered, and a notification is sent out through a communication application program.
Optionally, after receiving the updated configuration information returned by the server and updating the configuration information of the service thread pool by using the updated configuration information, the method further includes:
receiving a request of an application program, and counting the number of times of requests occurring in unit time;
acquiring the service time of each request according to the identifier of the request, and counting to obtain the total service processing time of the requests in unit time;
determining the request rate of requests and the average service time occupied by the requests in unit time according to the total service processing time and the request times;
and determining the actually required service thread quantity according to the average service time and the request rate, and dynamically adjusting the size of the service thread pool according to the actually required service thread quantity.
Optionally, the determining the actually required traffic thread amount according to the average service time and the request rate includes:
judging whether the average service time is less than or equal to a preset time threshold value or not;
if yes, determining the actually required business thread quantity according to the request rate;
if not, the actually required business thread quantity is obtained according to the product value of the average service time and the request rate.
According to another aspect of the present invention, there is provided a method for configuring a thread pool, the method including:
receiving a long polling request, wherein the long polling request is sent by a client terminal by inquiring a configuration object of a service thread pool to be polled at preset time intervals and generating a long polling request task according to the configuration object;
and inquiring whether the configuration information corresponding to the configuration object maintained in the server is updated according to the long polling request, and if the configuration information is updated, returning the updated configuration information to the client so that the client can update the configuration information of the service thread pool by using the updated configuration information.
Optionally, the querying, according to the long polling request, whether the configuration information corresponding to the configuration object maintained in the server is updated, and if there is an update, returning the updated configuration information to the client further includes:
generating a configuration request task according to the long polling request, sending the configuration request task to a request task queue, and dividing the timeout period of the suspended request into a delay waiting period and a data checking period;
if the configuration information maintained in the delay waiting period is updated, traversing the request task queue, searching a configuration request task corresponding to the updated configuration information through a configuration key value of a business thread pool name space, and if the configuration request task is searched, returning the updated configuration information to the client;
if the configuration information maintained in the delay waiting period is not updated, extracting the configuration request task from the request task queue in the data checking period, judging whether the configuration information corresponding to the name space of the business thread pool requested by the client is updated or not, and if so, returning the updated configuration item to the client.
According to still another aspect of the present invention, there is provided an apparatus for managing a thread pool, the apparatus including:
the generation module is suitable for inquiring the configuration object of the service thread pool to be polled at intervals of preset time and generating a long polling request task according to the configuration object;
the request module is suitable for initiating a long polling request to the server according to the long polling request task so that the server returns updated configuration information according to the long polling request;
and the updating module is suitable for receiving the updated configuration information returned by the server and updating the configuration information of the service thread pool by using the updated configuration information.
According to still another aspect of the present invention, there is provided an apparatus for configuring a thread pool, the apparatus including:
the receiving module is suitable for receiving a long polling request, wherein the long polling request is sent by a client terminal that queries a configuration object of a service thread pool to be polled at preset time intervals and generates a long polling request task according to the configuration object;
and the updating module is suitable for inquiring whether the configuration information corresponding to the configuration object maintained in the server is updated according to the long polling request, and if the configuration information is updated, the updated configuration information is returned to the client so that the client can update the configuration information of the service thread pool by using the updated configuration information.
According to yet another aspect of the present invention, there is provided a computing device comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the management method of the thread pool or the configuration method of the thread pool.
According to another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to perform an operation corresponding to the management method of the thread pool or the configuration method of the thread pool.
According to the thread pool management scheme of the invention, the computing device serving as the client can dynamically acquire the thread pool configuration parameters from the server through long polling requests, which solves the problems of the computing device managing the thread pool on its own and achieves the following beneficial effects: the configuration parameters can be modified without changing the way the original thread pool is used, so the cost of modifying thread pool parameters is reduced and the conventional need to restart the online application after the parameters are modified is avoided; further, an optional configuration option is provided whereby the size of the thread pool is dynamically calculated and adaptively adjusted and updated based on load fluctuations of the thread pool, realizing dynamic optimization of the thread pool size according to the load of its task execution, reducing configuration management overhead and maximizing the overall performance of the service; in addition, by embedding instrumentation at the task level in the thread pool, multi-dimensional thread pool operation monitoring indicators and an abnormality alarm capability are provided, which greatly reduces the probability of failures.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a method for managing thread pools according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method for configuring thread pools according to an embodiment of the present invention;
FIG. 3 is a flow diagram illustrating thread pool configuration information request processing provided by an embodiment of the invention;
FIG. 4 is a flow chart of a method for managing thread pools according to another embodiment of the present invention;
FIG. 5 is a block diagram of an apparatus for managing thread pools according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram illustrating a configuration apparatus of a thread pool according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
FIG. 1 is a flow chart of an embodiment of the thread pool management method of the present invention. The method is applied to a computing device, which may be a computer device and/or a cloud on which a computer program using a thread pool is installed; the computer device may be a single network server or a set of multiple network servers, and the cloud is made up of a large number of computers or web servers based on cloud computing, a type of distributed computing in which a virtual supercomputer is formed from a collection of loosely coupled computers.
As shown in fig. 4, the thread pool management framework includes a console, a server and a client, where the server provides application services for the client, the console is used to uniformly store, update and manage configuration information of the thread pool, and the client obtains the latest configuration information from the server through a long polling request to dynamically adjust configuration parameters of its own service thread pool. Of course, the client and the server are relative concepts, and can be mapped to a specific application scenario according to needs.
Specifically, as shown in fig. 1, the management method applied to the client includes the following steps:
step 110: and inquiring a configuration object of a service thread pool to be polled at preset time intervals, and generating a long polling request task according to the configuration object.
The embodiment of the invention realizes management of the service thread pool in a computing device serving as the client; the computing device can be a distributed-cluster application server. To manage the thread pools, a thread pool management program is arranged in the computing device, a configuration object (configCacheMap) is created for each service thread pool, and related information such as the thread pool configuration content is stored in the configuration object. The instances of the configuration objects of the service thread pools to be polled are then queried at preset time intervals and assembled to generate long polling request tasks.
Step 120: and initiating a long polling request to the server according to the long polling request task so that the server returns updated configuration information according to the long polling request.
For example, a long polling request may be initiated to the server over HTTP to obtain updated configuration information from the server. The device serving as the server queries whether the configuration information corresponding to the configuration object it maintains has been updated and, if so, returns the updated configuration information to the client.
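By way of illustration only, such a client-side long-polling call could be written against the standard java.net.http API roughly as follows; the URL path, query parameters and timeout values are assumptions for the sketch, not the patent's exact interface.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Minimal sketch of the client-side long-polling call (illustrative names and endpoint).
public class LongPollingClient {
    private final HttpClient http = HttpClient.newHttpClient();

    /**
     * Asks the server whether the configuration for the given namespace has changed.
     * The server compares the MD5 we send with its own; if they differ it answers
     * immediately with the new configuration, otherwise it holds the request open
     * until a change happens or the long-poll timeout (e.g. 20 s) expires.
     */
    public String pollConfig(String serverUrl, String namespace, String localMd5) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serverUrl + "/config/listen?namespace=" + namespace + "&md5=" + localMd5))
                .timeout(Duration.ofSeconds(25))   // slightly longer than the assumed 20 s server-side hold
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200 ? response.body() : null; // null means no change
    }
}
```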
Step 130: and receiving the updated configuration information returned by the server, and updating the configuration information of the service thread pool by using the updated configuration information.
Specifically, the client can obtain the updated configuration information through the listener and refresh the parameter configuration of the service thread pool. The client may apply the latest parameter values to the thread pool in real time through methods such as setCorePoolSize(), setMaximumPoolSize() and setRejectedExecutionHandler() provided by ThreadPoolExecutor. In addition, although the JDK's blocking queues do not provide a method for modifying the queue size in real time, the queue size of the thread pool can be modified dynamically by inheriting the abstract queue and adding a setCapacity() method.
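A sketch of such a refresh using only the standard ThreadPoolExecutor setters is shown below; the PoolConfig holder and the ordering logic are illustrative assumptions, and resizing the work queue is left as a comment because, as noted above, it requires a custom queue.

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of applying updated configuration to a live pool with the standard JDK setters.
// PoolConfig is an assumed holder for the values pulled from the server.
public class ThreadPoolRefresher {

    /** Assumed configuration holder; field names are illustrative. */
    public record PoolConfig(int corePoolSize, int maxPoolSize, long keepAliveSeconds,
                             RejectedExecutionHandler rejectedHandler) {}

    public static void refresh(ThreadPoolExecutor pool, PoolConfig cfg) {
        // On recent JDKs setCorePoolSize rejects a core size above the current maximum,
        // so raise the maximum first when growing and lower the core size first when shrinking.
        if (cfg.maxPoolSize() >= pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(cfg.maxPoolSize());
            pool.setCorePoolSize(cfg.corePoolSize());
        } else {
            pool.setCorePoolSize(cfg.corePoolSize());
            pool.setMaximumPoolSize(cfg.maxPoolSize());
        }
        pool.setKeepAliveTime(cfg.keepAliveSeconds(), TimeUnit.SECONDS);
        pool.setRejectedExecutionHandler(cfg.rejectedHandler());
        // The JDK blocking queues expose no setCapacity(); resizing the work queue would need
        // a custom BlockingQueue with a mutable capacity field, as noted in the text above.
    }
}
```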
In summary, in this embodiment the latest configuration information is obtained through a long polling request to the server and the client's thread pool parameters are hot-updated, achieving dynamic configuration of the thread pool parameters without a restart; the configuration parameters can be modified without changing the way the original thread pool is used, and the cost of modifying the thread pool parameters is reduced.
In a preferred embodiment, the configuration object comprises configuration information of the business thread pool, a configuration information encryption value and/or a listener instance.
Specifically, the configuration object may be a configCacheMap object implemented in Java, C++, or the like, which encapsulates and stores the thread pool configuration content, the MD5 value of the configuration content, the registered listener instances, and so on.
The configuration information comprises a business thread pool name space, the number of core threads, a thread maximum value, a queue type, a queue length, an alarm strategy and/or an alarm threshold value. Referring to fig. 2, the configuration information on the server may be updated by a developer on a console of the server, or may be automatically updated by a preset program.
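As an illustration only, one entry of such a configuration cache could take roughly the following shape in Java; the field and method names are assumptions based on the fields listed above, not the patent's exact class.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative shape of one configCacheMap entry; names are assumptions for the sketch.
public class ThreadPoolConfigEntry {
    private volatile String threadPoolNamespace;   // identifies the business thread pool
    private volatile int corePoolSize;
    private volatile int maximumPoolSize;
    private volatile String queueType;             // e.g. "LinkedBlockingQueue"
    private volatile int queueCapacity;
    private volatile String alarmPolicy;
    private volatile int alarmThreshold;
    private volatile String contentMd5;            // MD5 of the raw configuration content

    // Listeners to call back when the server reports a change.
    private final List<Runnable> listeners = new CopyOnWriteArrayList<>();

    public void addListener(Runnable listener) { listeners.add(listener); }
    public void fireChanged() { listeners.forEach(Runnable::run); }

    public String getThreadPoolNamespace() { return threadPoolNamespace; }
    public String getContentMd5() { return contentMd5; }
    // remaining getters and setters omitted for brevity
}
```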
In one embodiment, the method further comprises monitoring the running status of the thread pool on the client: the running data of the service thread pool are collected at regular intervals through the monitoring interface and reported to the server for storage, so that developers can inspect them later. The running data comprise the number of core threads, the maximum number of threads, the current number of threads, the number of active threads, the maximum number of elements allowed in the queue, the number of elements already stored in the queue and/or the number of times the rejection strategy has been executed.
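A sketch of this periodic collection is given below; every getter used is standard ThreadPoolExecutor API, while the reporting interval and the report(...) upload are placeholders, and the rejection count would additionally require a counting RejectedExecutionHandler, which is omitted.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of periodic metrics collection; report(...) stands in for the upload to the server.
public class ThreadPoolMonitor {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start(String namespace, ThreadPoolExecutor pool) {
        scheduler.scheduleAtFixedRate(() -> {
            Map<String, Object> metrics = new HashMap<>();
            metrics.put("namespace", namespace);
            metrics.put("corePoolSize", pool.getCorePoolSize());
            metrics.put("maximumPoolSize", pool.getMaximumPoolSize());
            metrics.put("currentPoolSize", pool.getPoolSize());
            metrics.put("largestPoolSize", pool.getLargestPoolSize());
            metrics.put("activeThreads", pool.getActiveCount());
            metrics.put("queueCapacity", pool.getQueue().size() + pool.getQueue().remainingCapacity());
            metrics.put("queuedTasks", pool.getQueue().size());
            report(metrics); // upload to the server for persistence and later viewing
        }, 10, 10, TimeUnit.SECONDS);   // interval is an assumption
    }

    private void report(Map<String, Object> metrics) {
        // placeholder: serialize and POST to the server's monitoring endpoint
        System.out.println(metrics);
    }
}
```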
In one embodiment, an alarm method is further provided for the computing device acting as the client, which judges from the following two aspects whether the load of the business thread pool is too high and an alarm should be sent: when the task cache queue of the business thread pool is full and/or the number of occupied threads reaches the maximum thread pool size, or when the task cache queue of the business thread pool and/or the number of occupied threads exceeds a preset threshold, an alarm rule is triggered and a notification is sent through a communication application, for example pushed to the pre-configured associated service developers via DingTalk, e-mail, or the like.
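The two conditions might be checked along the following lines; the threshold value and the notification channel are assumptions for the sketch.

```java
import java.util.concurrent.ThreadPoolExecutor;

// Sketch of the two alarm conditions described above; thresholds and the notification channel are assumed.
public class ThreadPoolAlarm {
    private final double loadThreshold;   // e.g. 0.8 means alarm at 80% usage

    public ThreadPoolAlarm(double loadThreshold) { this.loadThreshold = loadThreshold; }

    public void check(String namespace, ThreadPoolExecutor pool) {
        int queueSize = pool.getQueue().size();
        int queueCapacity = queueSize + pool.getQueue().remainingCapacity();
        boolean queueFull = pool.getQueue().remainingCapacity() == 0;
        boolean threadsMaxed = pool.getActiveCount() >= pool.getMaximumPoolSize();
        boolean overThreshold =
                (queueCapacity > 0 && (double) queueSize / queueCapacity >= loadThreshold)
                || ((double) pool.getActiveCount() / pool.getMaximumPoolSize() >= loadThreshold);

        if ((queueFull && threadsMaxed) || overThreshold) {
            notifyDevelopers(namespace + " thread pool overloaded: queued=" + queueSize
                    + ", active=" + pool.getActiveCount());
        }
    }

    private void notifyDevelopers(String message) {
        // placeholder: push via the configured messaging application (IM, e-mail, ...)
        System.err.println("[ALARM] " + message);
    }
}
```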
In one embodiment, the computing device serving as the application server further implements a dynamic adaptive adjustment function for the thread pool. After receiving the updated configuration information returned by the server and updating the configuration information of the service thread pool with it, the method further includes the following steps: receiving requests of an application program and counting the number of requests occurring per unit time; acquiring the service time of each request according to the identifier of the request and accumulating the total service processing time of the requests per unit time; determining the request rate and the average service time occupied by the requests per unit time according to the total service processing time and the number of requests; and determining the actually required number of service threads according to the average service time and the request rate, and dynamically adjusting the size of the service thread pool accordingly.
The number of requests per second can be obtained through a counter in the client, and the request rate and the average service time are respectively obtained through a preset frequency calculator and a preset service time statistical calculator, wherein the processing time of each request can be obtained through calculation after a database table (such as a hash table) records the service starting processing time and the service ending processing time according to the identifier of each request.
For example, if the computing device receives 8 application requests per second and the total service time required by these 8 requests is 24 s, the average service time is 3 s, and the number of actually required service threads is 3 × 8 = 24.
In a preferred embodiment, the determining the actually required traffic thread amount according to the average service time and the request rate includes: judging whether the average service time is less than or equal to a preset time threshold (the preset time threshold is preferably 1 second); if yes, determining the actually required business thread quantity according to the request rate; if not, the actually required business thread quantity is obtained according to the product value of the average service time and the request rate.
Specifically, assume that a suitable equation for calculating the number of threads in the thread pool is NewPoolSize = δ × Avg_Service_Time, where δ is the request rate per second and Avg_Service_Time is the average service time. With a request rate of 10 requests per second and an average service time of 2 s for all requests, NewPoolSize = 10 × 2 = 20. If the service time of all requests is 0.5 s, then NewPoolSize = 10 × 0.5 = 5. This does not affect throughput, since 5 threads can process 10 requests in one second (0.5 s per request), but the response time of 5 of the 10 requests is affected: those 5 requests must wait 0.5 s to obtain a thread, and continuously arriving requests would further increase this waiting time. Thus, for workloads with short service times (Avg_Service_Time less than or equal to 1 s), the pool size may be set from δ (the request arrival rate) alone, so that 10 threads are allocated for 10 requests and the response time is improved.
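The sizing rule just described can be expressed compactly as follows; the 1-second threshold follows the text, and the rounding choice is an assumption.

```java
// Sketch of the sizing rule: at or below the 1 s service-time threshold, size the pool by
// request rate alone; otherwise multiply the rate by the average service time.
public final class PoolSizer {
    private static final double SERVICE_TIME_THRESHOLD_SECONDS = 1.0;

    /**
     * @param requestRate    requests arriving per second (δ)
     * @param avgServiceTime average service time per request, in seconds
     * @return the thread count to apply to the business thread pool
     */
    public static int newPoolSize(double requestRate, double avgServiceTime) {
        if (avgServiceTime <= SERVICE_TIME_THRESHOLD_SECONDS) {
            return (int) Math.ceil(requestRate);                 // NewPoolSize = δ
        }
        return (int) Math.ceil(requestRate * avgServiceTime);    // NewPoolSize = δ × Avg_Service_Time
    }

    public static void main(String[] args) {
        System.out.println(newPoolSize(10, 2.0));   // 20 threads
        System.out.println(newPoolSize(10, 0.5));   // 10 threads (rate alone, to protect response time)
    }
}
```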
Fig. 2 is a flowchart illustrating an embodiment of a configuration method for a thread pool of the present invention, where the configuration method is applied to a server and specifically includes the following steps:
step 210: receiving a long polling request, wherein the long polling request is sent by a client terminal by inquiring a configuration object of a service thread pool to be polled at preset time intervals and generating a long polling request task according to the configuration object.
Wherein the long polling request corresponds to a long polling request generated by a client side.
Step 220: and inquiring whether the configuration information corresponding to the configuration object maintained in the server is updated according to the long polling request, and if the configuration information is updated, returning the updated configuration information to the client so that the client can update the configuration information of the service thread pool by using the updated configuration information.
According to this step 220, the corresponding configuration information in the server is queried according to the information in the long polling request, and whether the configuration information is updated or not is determined, and if the corresponding configuration information is updated, the updated configuration information is returned to the client.
In an optional embodiment, the querying, in step 220, whether the configuration information corresponding to the configuration object maintained in the server is updated according to the long polling request, and if there is an update, returning the updated configuration information to the client further includes:
generating a configuration request task according to the long polling request, sending the configuration request task to a request task queue, and dividing the timeout period of the suspended request into a delay waiting period and a data checking period;
if the configuration information maintained in the delay waiting period is updated, traversing the request task queue, searching a configuration request task corresponding to the updated configuration information through a configuration key value of a business thread pool name space, and if the configuration request task is searched, returning the updated configuration information to the client;
if the configuration information maintained in the delay waiting period is not updated, extracting the configuration request task from the request task queue in the data inspection period, judging whether the configuration information corresponding to the name space of the business thread pool requested by the client is updated or not, and if the configuration information is updated, returning the updated configuration item to the client, wherein the configuration information is preferably updated through a control console by a worker.
According to the configuration method disclosed in this embodiment, after receiving an HTTP long polling request sent by a client, the server generates a configuration request task and checks whether the corresponding configuration information it maintains has been updated. If so, it returns the updated configuration item to the client; if not, it suspends the long polling request, divides the timeout period of the suspended request into a delay waiting period and a data checking period, and handles the different cases according to whether the console updates the configuration information.
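A server-side sketch of this hold-and-notify flow is given below; the transport layer is abstracted as a Consumer that eventually writes the response, and all class and method names are illustrative assumptions rather than the patent's exact implementation.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch of holding long-polling requests and answering them either on a configuration
// change (delay waiting period) or at the final data check shortly before the timeout.
public class ConfigLongPollingService {
    /** One held long-polling request. */
    record ClientRequestTask(String namespace, String clientMd5, Consumer<String> responder) {}

    private final Queue<ClientRequestTask> heldRequests = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
    private final Map<String, String> configStore;   // namespace -> latest configuration content
    private final Map<String, String> configMd5;     // namespace -> MD5 of that content

    public ConfigLongPollingService(Map<String, String> configStore, Map<String, String> configMd5) {
        this.configStore = configStore;
        this.configMd5 = configMd5;
    }

    /** Called when a long-polling request arrives; timeoutMs is the client's long-poll timeout (e.g. 20 000). */
    public void onLongPoll(String namespace, String clientMd5, long timeoutMs, Consumer<String> responder) {
        ClientRequestTask task = new ClientRequestTask(namespace, clientMd5, responder);
        heldRequests.add(task);
        // Data-check period: 500 ms before the timeout, compare MD5 values and answer either way.
        scheduler.schedule(() -> {
            if (heldRequests.remove(task)) {
                String latestMd5 = configMd5.get(namespace);
                responder.accept(latestMd5 != null && !latestMd5.equals(clientMd5)
                        ? configStore.get(namespace)   // changed: return the new content
                        : "");                          // unchanged: empty body
            }
        }, timeoutMs - 500, TimeUnit.MILLISECONDS);
    }

    /** Called by the console when a configuration item is updated (the data-change event). */
    public void onConfigChanged(String namespace, String newContent, String newMd5) {
        configStore.put(namespace, newContent);
        configMd5.put(namespace, newMd5);
        heldRequests.removeIf(task -> {
            if (task.namespace().equals(namespace)) {
                task.responder().accept(newContent);   // answer the held request immediately
                return true;
            }
            return false;
        });
    }
}
```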
Specifically, the configuration and execution may be performed in conjunction with the request processing flowchart of the configuration information shown in fig. 3 and step S103 in fig. 4.
In the following, with reference to fig. 3 and fig. 4, an embodiment of the method for managing a thread pool is further described by another specific embodiment, where the embodiment specifically includes the following steps:
s101: after the client of the application service is started and initialized, a configCacheMap object is created for each service thread pool, wherein thread pool configuration content, a configuration content MD5 value and a registered monitor instance are stored, in addition, two task threads are started, one task thread is a single-thread executive, a configCacheMap instance to be polled is inquired and obtained every 20ms, and a long polling task instance is assembled and submitted to a second task thread pool executive service for processing. The main processing logic of the long polling task instance is as follows: and (3) using each ThreadPoolNameSpace configuration key value to obtain the latest configuration from the server, and the server checks whether the MD5 values of the configuration content are the same or not, if not, the configuration is changed, and the configuration content is returned. And the client executes the callback processing logic to update the related parameters of the service thread pool by using the latest server configuration, simultaneously updates the thread pool configuration content in the configCacheMap, and calculates and updates the value of the configuration content MD 5. If the server side does not find the change of the key value of the configuration item, the server side suspends the HTTP request, and the default 20s is overtime, so that the polling frequency of the client side and the pressure of the server side are relieved.
S102: the developer operates and updates configuration in the console, and can configure configuration options including a thread pool name space (namespace), the number of cores, a maximum value, a queue type, a queue length, an alarm policy, an alarm threshold value and the like. In addition, the option of whether to start the thread pool dynamic adaptive algorithm can be selected, and the algorithm logic will be explained in detail later.
S103: as shown in fig. 3, after the client's request arrives at the server, the server encapsulates the client's long polling request into a clientRequestTask and hands it to a scheduler for execution; the scheduler establishes a scheduled task whose delay is the client's polling timeout minus 500 ms (if the client timeout is 20 s, the delay is 19.5 s) and adds the clientRequestTask instance to a clientRequestQueue.
During the delay waiting period, if a user updates a configuration item on the console, a data change event notifies the server that its data has changed. The clientRequestQueue, which holds the request tasks of all clients, is then traversed to find the clientRequestTask corresponding to the threadPoolNamespace of the changed configuration item; once found, the matching configuration content is written into a response object through the clientRequestTask, and the client receives the response immediately.
If no dataChangeEvent is triggered during the whole delay waiting period, the scheduled task begins the data check: it first removes the clientRequestTask instance from the clientRequestQueue, then queries whether the configuration corresponding to the threadPoolNamespace requested by the client has changed on the server, and if an update has occurred, writes the checked result into a response object and returns it to the client.
The client obtains the corresponding latest configuration information according to the threadPoolNamespace information returned in the server's response and refreshes the parameter configuration of the local thread pool. Regarding the implementation of the parameter modification, ThreadPoolExecutor provides methods such as setCorePoolSize(), setMaximumPoolSize() and setRejectedExecutionHandler(), so the thread pool can be adjusted in real time with the latest parameter values. The JDK's blocking queues do not provide a method for modifying the queue size in real time, but the queue size of the thread pool can be modified dynamically by inheriting the abstract queue and adding a setCapacity() method. In summary, by keeping the I/O cost of long polling as low as possible while still obtaining the latest configuration in a timely manner, the client's thread pool parameters are hot-updated once the latest configuration data of the server are obtained, achieving dynamic configuration of the thread pool parameters without a restart.
S104: the monitoring API (Application Programming Interface) of the client collects the running data of the thread pool at regular intervals, including indicators such as the number of core threads, the maximum number of threads, the current number of threads, the number of active threads, the largest pool size reached, the maximum number of elements allowed in the queue, the number of elements stored in the queue and the number of times the rejection strategy has been executed. The data are packaged and sent to the server, which receives the data reported by the client and persists them, so that developers can later inspect the running indicator data of the thread pool.
S105: the alarm module judges from two aspects whether the thread pool is overloaded and an alarm should be sent. On the one hand, the program may throw a task rejection exception: the rejection module is the protective part of the thread pool, the thread pool has a maximum capacity, and when the task cache queue of the thread pool is full and the number of threads in the pool has reached the configured maximum, new tasks must be rejected, the rejection strategy is applied and an exception is thrown in order to protect the thread pool. On the other hand, when the number of active threads approaches the maximum pool size and the waiting tasks in the task queue approach the maximum queue length, the overall load of the thread pool tends to be high; when the backlog of waiting tasks in the queue keeps growing and exceeds the configured alarm threshold, the alarm rule is triggered and a notification is pushed to the pre-configured associated service developers via DingTalk, e-mail, or the like.
It should be noted that, if in step S102 the configuration item selects the option of enabling the dynamic adaptive adjustment algorithm of the thread pool, the dynamic adaptive algorithm module is enabled in the thread management module. The algorithm module starts a separate Acceptor thread to receive requests from clients; each arriving request increments a Counter object used to track the per-second request rate on the thread pool, and every second the value of the Counter is saved and reset to zero by the frequency calculator. In this way, the Counter and the frequency calculator together track the per-second request rate across the thread pool. The Acceptor forwards the request to the hash access service (Service_Hash_Binder) for further processing, which uses the request's identifier ID as an index into a hash table to obtain the request's service time. If the identifier already exists in the table, meaning its statistics have been computed and its service time is available, the service time is added to a variable named Total_Service_Time, i.e. the total service processing time, a service-time tag is bound to the request, and the request is finally placed into a first-in-first-out (FIFO) request queue. If the request's ID does not exist in the hash table, the request is placed directly into the request queue, and its service processing time is then recorded and computed. Total_Service_Time accumulates the sum of the service times of all requests entering the system per second.
Specifically, the frequency calculator accesses Total_Service_Time to calculate the average service time per second. The frequency calculator is a timer thread that is activated once per second and tracks how the actual load on the system changes each second. The actual load is the combination of the request rate and the service times of all requests entering the system per second. The per-second request rate is tracked by the Counter object and stored in the frequency object. The service times of all requests entering the system per second are recorded in the Total_Service_Time variable, which the frequency calculator uses to calculate the average service time of all requests entering the system per second.
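The per-second counter and frequency calculator could be sketched as follows; the class and field names are assumptions, and the onTick callback stands in for the pool-resizing (DTT) step described later.

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.DoubleAdder;

// Sketch of the per-second counter and frequency calculator: each incoming request bumps
// the counter and, on completion, adds its measured service time; once per second the
// calculator snapshots both, derives the average service time, and resets them.
public class LoadTracker {
    private final AtomicLong requestCounter = new AtomicLong();          // Counter
    private final DoubleAdder totalServiceTimeSeconds = new DoubleAdder(); // Total_Service_Time

    private volatile double requestRatePerSecond;   // δ
    private volatile double avgServiceTimeSeconds;  // Avg_Service_Time

    public void onRequestArrived() { requestCounter.incrementAndGet(); }

    public void onRequestCompleted(double serviceTimeSeconds) {
        totalServiceTimeSeconds.add(serviceTimeSeconds);
    }

    /** Starts the once-per-second "frequency calculator" tick. */
    public void start(ScheduledExecutorService scheduler, Runnable onTick) {
        scheduler.scheduleAtFixedRate(() -> {
            long count = requestCounter.getAndSet(0);
            double total = totalServiceTimeSeconds.sumThenReset();
            requestRatePerSecond = count;
            avgServiceTimeSeconds = count > 0 ? total / count : 0.0;
            onTick.run();   // e.g. run the pool-resizing step with the fresh values
        }, 1, 1, TimeUnit.SECONDS);
    }

    public double requestRatePerSecond() { return requestRatePerSecond; }
    public double avgServiceTimeSeconds() { return avgServiceTimeSeconds; }
}
```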
For all requests entering the system within a second, Avg_Service_Time, the average service processing time, is calculated by dividing the sum of their service times by the Counter value. However, since the service time of a request entering the system for the first time is unknown, the service time of such newly arrived requests must be calculated from timestamps and recorded in the hash table. A request is timestamped in two phases: StartTime is recorded before the request is executed and EndTime after the request is completed, and only requests that enter the system for the first time and whose service time is not yet in the hash table are marked in this way. The hash table maintains the service processing time of each request; the hash function uses request_ID as the index, and if the index exists, the request's service time is maintained there. With request_ID as the index, the average lookup complexity of any element is O(1). Unmarked requests are given a StartTime mark by the thread before execution; after completion they are marked with an EndTime tag and handed over to the logging service (Service_Recorder), which calculates the request's service time, records it in the hash table, and finally places the request in a FIFO (first-in-first-out) response queue.
When the frequency calculator runs, it stores the Counter in the frequency object, calculates Avg_Service_Time, resets Total_Service_Time and the Counter to zero, and finally runs a dynamic thread-pool tuning thread (DTT) responsible for optimizing and adjusting the size of the thread pool. The thread pool is a dynamic linked list holding the threads that execute client requests; when the DTT thread adds threads to the pool, the pool can be expanded dynamically as needed. The DTT shrinks the thread pool when the request rate drops from a high rate to a low rate, or when some threads have been idle in the pool beyond a specified threshold time. Each thread in the pool is given a timer that is started only while the thread is idle in the pool; if the thread stays idle for the threshold time (800 milliseconds by default), the timer destroys the corresponding thread to reduce the pool size.
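The per-thread idle timer described here can also be approximated with the standard JDK keep-alive mechanism; the sketch below is that simpler substitution, not the patent's own DTT implementation, and the queue capacity is an assumed value.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Approximation of the idle-shrink behaviour using standard JDK keep-alive settings:
// any thread (including core threads) idle longer than 800 ms is reclaimed by the pool
// itself, so no separate per-thread timer is needed.
public final class IdleShrinkExample {
    public static ThreadPoolExecutor newSelfShrinkingPool(int coreSize, int maxSize) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                coreSize, maxSize,
                800, TimeUnit.MILLISECONDS,          // idle threshold from the text
                new LinkedBlockingQueue<>(1024));    // capacity is an assumption
        pool.allowCoreThreadTimeOut(true);           // let core threads time out too
        return pool;
    }
}
```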
The DTT thread is run by the frequency calculator every second, before the calculator completes its execution. It dynamically adjusts and optimizes the thread pool size by evaluating the request rate and the service processing time of the queued requests. When it first starts running, it keeps the pool size in step with the request rate, but in subsequent runs it also evaluates the service time of the workload. Avg_Service_Time is passed to the DTT thread by the frequency calculator, and the DTT thread reads the current request rate from the frequency object. By evaluating Avg_Service_Time and the request rate, the DTT thread keeps the pool size adjusted to the appropriate number of threads. The formula for the appropriate number of threads in the pool is NewPoolSize = δ × Avg_Service_Time, where δ is the request rate per second. With a request rate of 10 requests per second and a service time of 2 s for all requests, NewPoolSize = 10 × 2 = 20. If the service time of all requests is 0.5 s, then NewPoolSize = 10 × 0.5 = 5. This does not affect throughput, since 5 threads can process 10 requests in one second (0.5 s per request), but the response time of 5 of the 10 requests is affected: those 5 requests must wait 0.5 s to obtain a thread, and continuously arriving requests would further increase the waiting time. Thus, for workloads with short service times (Avg_Service_Time less than or equal to 1 s), the pool size may be set from δ (the request arrival rate) alone, so that 10 threads are allocated for 10 requests and the response time is improved. To summarize, NewPoolSize = δ × Avg_Service_Time when Avg_Service_Time is greater than 1, and NewPoolSize = δ otherwise.
Referring to fig. 5, an embodiment of the present invention discloses an apparatus 500 for managing a thread pool, where the apparatus 500 is used for a client, and includes:
the generating module 510 is adapted to query a configuration object of a service thread pool to be polled at preset intervals, and generate a long polling request task according to the configuration object;
a request module 520, adapted to initiate a long polling request to the server according to the long polling request task, so that the server returns updated configuration information according to the long polling request;
the updating module 530 is adapted to receive updated configuration information returned by the server, and update the configuration information of the service thread pool by using the updated configuration information.
In one embodiment, the configuration object comprises configuration information, a configuration information encryption value and/or a listener instance of a business thread pool;
the configuration information comprises a business thread pool name space, the number of core threads, a thread maximum value, a queue type, a queue length, an alarm strategy and/or an alarm threshold value.
In one embodiment, the apparatus further comprises a monitoring module adapted to:
acquiring running data of the service thread pool at regular intervals through a monitoring interface, and reporting the running data to the server for storage and later viewing; the running data comprises the number of core threads, the maximum number of threads, the current number of threads, the number of active threads, the maximum number of elements allowed in the queue, the number of elements already stored in the queue and/or the number of times the rejection strategy has been executed.
In one embodiment, the apparatus further comprises an alert module adapted to:
when the task cache queue of the business thread pool is full and/or the number of occupied threads reaches the maximum thread pool number, or when the task cache queue of the business thread pool and/or the number of occupied threads exceeds a preset threshold value, an alarm rule is triggered, and a notification is sent out through a communication application program.
In one embodiment, the apparatus further comprises an adaptive adjustment module adapted to:
receiving a request of an application program, and counting the number of times of the request occurring in unit time;
acquiring the service time of each request according to the identifier of the request, and counting to obtain the total service processing time of the requests in unit time;
determining the request rate of requests and the average service time occupied by the requests in unit time according to the total service processing time and the request times;
and determining the actually required service thread quantity according to the average service time and the request rate, and dynamically adjusting the size of the service thread pool according to the actually required service thread quantity.
In one embodiment, the adaptive adjustment module is further adapted to:
judging whether the average service time is less than or equal to a preset time threshold value or not;
if yes, determining the actually required business thread quantity according to the request rate;
if not, the actually required business thread quantity is obtained according to the product value of the average service time and the request rate.
Referring to fig. 6, an embodiment of the present invention further discloses a device 600 for configuring a thread pool, where the device 600 is used for a server, and includes:
the receiving module 610: the method comprises the steps that the method is suitable for receiving a long polling request, wherein the long polling request is sent by a client terminal by inquiring a configuration object of a service thread pool to be polled at preset time intervals and generating a long polling request task according to the configuration object;
the update module 620: and inquiring whether the configuration information corresponding to the configuration object maintained in the server is updated according to the long polling request, and if the configuration information is updated, returning the updated configuration information to the client so that the client can update the configuration information of the service thread pool by using the updated configuration information.
In an alternative embodiment, the update module 620 is further adapted to:
generating a configuration request task according to the long polling request, sending the configuration request task to a request task queue, and dividing the timeout period of the suspended request into a delay waiting period and a data checking period;
if the configuration information maintained in the delay waiting period is updated, traversing the request task queue, searching a configuration request task corresponding to the updated configuration information through a configuration key value of a business thread pool name space, and if the configuration request task is searched, returning the updated configuration information to the client;
if the configuration information maintained in the delay waiting period is not updated, extracting the configuration request task from the request task queue in the data checking period, judging whether the configuration information corresponding to the name space of the business thread pool requested by the client is updated or not, and if so, returning the updated configuration item to the client.
As can be seen, the configuration apparatus 600 disclosed in this embodiment can be used by the server to determine whether to update the configuration of the thread pool in the client, based on the request information from the client and the configuration update information of the console.
To sum up, the thread pool management and configuration scheme disclosed in the embodiments of the invention enables each business thread pool in the computing device to monitor and synchronize the update information kept in the external server and to modify its configuration parameters dynamically accordingly, supporting real-time expansion and shrinking of the thread pool, real-time changes of the rejection policy rules, and so on. The scheme avoids the problem of the traditional approach, in which expanding or shrinking the thread pool, or modifying its parameters, requires restarting the service application (and, if the service has multiple instances, every instance must be restarted). The size of the thread pool can be dynamically calculated and adaptively adjusted and updated based on the load fluctuations of the thread pool. By adding monitoring and reporting of runtime indicator data at aspect-oriented instrumentation points in the life cycle of a thread pool task, and by supporting alarm notification based on the reported data, development and operations personnel can be automatically notified at once via e-mail, DingTalk, or the like in scenarios such as the blocking queue reaching a set capacity threshold or the rejection strategy being triggered. Moreover, the configuration capability of the thread pool is placed on the console side of the server: developers can view the indicator data of the tasks executed by the thread pool through the console and dynamically modify the thread pool parameter configuration without restarting the service application, allowing development and operations personnel to experiment quickly at low cost and observe how the adjusted thread pool behaves.
The beneficial effects obtained by the embodiment of the invention include but are not limited to:
1. by acquiring the thread pool parameters dynamically configured and updated on the console, the configuration parameters of the thread pool are modified at the client, which reduces the cost of modifying thread pool parameters without changing the way the original thread pool is used and avoids the traditional need to restart the online application after the thread pool parameters are modified;
2. an optional configuration option is provided whereby the size of the thread pool is dynamically calculated and adaptively adjusted and updated based on the load fluctuations of the thread pool, reducing configuration management overhead and maximizing the overall performance of the service; in addition, by embedding instrumentation at the task level in the thread pool, multi-dimensional thread pool operation monitoring indicators and an abnormality alarm capability are provided, which greatly reduces the probability of failures.
An embodiment of the present invention provides a non-volatile computer storage medium, where the computer storage medium stores at least one executable instruction, and the computer executable instruction may execute a management method or a configuration method of a thread pool in any method embodiment described above.
Fig. 7 is a schematic structural diagram of an embodiment of a computing device according to the present invention, and a specific embodiment of the present invention does not limit a specific implementation of the computing device.
As shown in fig. 7, the computing device may include: a processor (processor) 702, a Communications Interface 704, a memory 706, and a communication bus 708.
Wherein: the processor 702, communication interface 704, and memory 706 communicate with each other via a communication bus 708. A communication interface 704 for communicating with network elements of other devices, such as clients or other servers. The processor 702, configured to execute the program 710, may specifically perform relevant steps in the above embodiment of the method for managing a thread pool of a computing device.
In particular, the program 710 may include program code that includes computer operating instructions.
The processor 702 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 706 stores a program 710. The memory 706 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk storage device.
The program 710 may be specifically configured to enable the processor 702 to execute operations corresponding to the management method or the configuration method of the thread pool.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless otherwise specified.

Claims (12)

1. A method for managing a thread pool, the method comprising:
querying, at preset time intervals, a configuration object of a business thread pool to be polled, and generating a long polling request task according to the configuration object;
initiating a long polling request to a server according to the long polling request task, so that the server returns updated configuration information according to the long polling request;
and receiving the updated configuration information returned by the server, and updating the configuration information of the business thread pool with the updated configuration information.
2. The method according to claim 1, wherein the configuration object comprises configuration information of the business thread pool, an encryption value of the configuration information, and/or a listener instance;
the configuration information comprises a business thread pool namespace, a core thread count, a maximum thread count, a queue type, a queue length, an alarm policy, and/or an alarm threshold.
3. The method of claim 1, further comprising:
acquiring running data of the business thread pool at regular intervals through a monitoring interface, and reporting the running data to a server for storage and viewing; the running data comprises a core thread count, a maximum thread count, a current thread count, an active thread count, a maximum number of elements allowed in the queue, a number of elements already stored in the queue, and/or a number of times the rejection policy has been executed.
4. The method according to any one of claims 1-3, further comprising:
triggering an alarm rule and sending a notification through a communication application when the task cache queue of the business thread pool is full and/or the number of occupied threads reaches the maximum thread count, or when the task cache queue of the business thread pool and/or the number of occupied threads exceeds a preset threshold.
5. The method according to any one of claims 1 to 3, wherein after receiving the updated configuration information returned by the server and updating the configuration information of the business thread pool with the updated configuration information, the method further comprises:
receiving requests of an application program, and counting the number of requests occurring per unit time;
acquiring the service time of each request according to an identifier of the request, and accumulating the service times to obtain a total service processing time of the requests per unit time;
determining, according to the total service processing time and the request count, a request rate and an average service time occupied by the requests per unit time;
and determining an actually required number of business threads according to the average service time and the request rate, and dynamically adjusting the size of the business thread pool according to the actually required number of business threads.
6. The method of claim 5, wherein determining the actually required number of business threads according to the average service time and the request rate comprises:
judging whether the average service time is less than or equal to a preset time threshold;
if so, determining the actually required number of business threads according to the request rate;
if not, obtaining the actually required number of business threads from the product of the average service time and the request rate.
7. A method for configuring a thread pool, the method comprising:
receiving a long polling request, wherein the long polling request is sent by a client after the client queries, at preset time intervals, a configuration object of a business thread pool to be polled and generates a long polling request task according to the configuration object;
and querying, according to the long polling request, whether the configuration information corresponding to the configuration object maintained in the server is updated, and if the configuration information is updated, returning the updated configuration information to the client, so that the client updates the configuration information of the business thread pool with the updated configuration information.
8. The method of claim 7, wherein the querying, according to the long polling request, whether the configuration information corresponding to the configuration object maintained in the server is updated, and if the configuration information is updated, returning the updated configuration information to the client further comprises:
generating a configuration request task according to the long polling request, adding the configuration request task to a request task queue, and dividing the timeout period of the suspended request into a delay waiting period and a data checking period;
if the maintained configuration information is updated during the delay waiting period, traversing the request task queue, searching for the configuration request task corresponding to the updated configuration information by the configuration key of the business thread pool namespace, and if the configuration request task is found, returning the updated configuration information to the client;
if the maintained configuration information is not updated during the delay waiting period, extracting the configuration request task from the request task queue during the data checking period, judging whether the configuration information corresponding to the business thread pool namespace requested by the client is updated, and if so, returning the updated configuration information to the client.
9. An apparatus for managing a thread pool, the apparatus comprising:
a generation module, adapted to query, at preset time intervals, a configuration object of a business thread pool to be polled, and to generate a long polling request task according to the configuration object;
a request module, adapted to initiate a long polling request to a server according to the long polling request task, so that the server returns updated configuration information according to the long polling request;
and an updating module, adapted to receive the updated configuration information returned by the server and to update the configuration information of the business thread pool with the updated configuration information.
10. An apparatus for configuring a thread pool, the apparatus comprising:
a receiving module, adapted to receive a long polling request, wherein the long polling request is sent by a client after the client queries, at preset time intervals, a configuration object of a business thread pool to be polled and generates a long polling request task according to the configuration object;
and an updating module, adapted to query, according to the long polling request, whether the configuration information corresponding to the configuration object maintained in the server is updated, and if the configuration information is updated, to return the updated configuration information to the client, so that the client updates the configuration information of the business thread pool with the updated configuration information.
11. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute operations corresponding to the method for managing a thread pool according to any one of claims 1 to 6 or the method for configuring a thread pool according to claim 7 or 8.
12. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to execute operations corresponding to the method for managing a thread pool according to any one of claims 1 to 6 or the method for configuring a thread pool according to claim 7 or 8.
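By way of a further non-limiting illustration of the server-side flow recited in claims 7 and 8, the sketch below assumes an in-memory configuration store keyed by the business thread pool namespace; the identifiers LongPollHandler, Pending, onConfigUpdated, and handle are hypothetical. The request timeout is split, as in claim 8, into a delay waiting phase, during which a console save completes matching suspended requests immediately, and a data checking phase, during which digests are compared once before replying.

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Illustrative server-side handler: delay-wait phase reacts to pushes, data-check phase compares digests. */
public final class LongPollHandler {

    /** Hypothetical suspended request keyed by the business-pool namespace it watches. */
    record Pending(String namespace, String clientDigest, CompletableFuture<Optional<String>> reply) { }

    private final ConcurrentLinkedQueue<Pending> pendingQueue = new ConcurrentLinkedQueue<>();
    private final ConcurrentHashMap<String, String> configByNamespace = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, String> digestByNamespace = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    /** Called when the console saves new configuration: complete matching suspended requests early. */
    public void onConfigUpdated(String namespace, String newConfig, String newDigest) {
        configByNamespace.put(namespace, newConfig);
        digestByNamespace.put(namespace, newDigest);
        pendingQueue.removeIf(p -> {
            if (p.namespace().equals(namespace)) {
                p.reply().complete(Optional.of(newConfig));   // delay-wait phase: push immediately
                return true;
            }
            return false;
        });
    }

    /** Called when a client long poll arrives; after delayWaitMillis the data-check phase runs once. */
    public CompletableFuture<Optional<String>> handle(String namespace, String clientDigest,
                                                      long delayWaitMillis) {
        Pending pending = new Pending(namespace, clientDigest, new CompletableFuture<>());
        pendingQueue.add(pending);
        // Data-check phase: if nothing was pushed during the delay wait, compare digests once.
        timer.schedule(() -> {
            if (pendingQueue.remove(pending)) {
                String current = digestByNamespace.get(namespace);
                if (current != null && !current.equals(clientDigest)) {
                    pending.reply().complete(Optional.of(configByNamespace.get(namespace)));
                } else {
                    pending.reply().complete(Optional.empty());   // no change: client re-polls
                }
            }
        }, delayWaitMillis, TimeUnit.MILLISECONDS);
        return pending.reply();
    }
}
```

An empty Optional models the "no change" reply, after which the client would simply issue its next long poll.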
CN202211392740.5A 2022-11-08 2022-11-08 Management method, configuration method, management device and configuration device of thread pool Pending CN115617527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211392740.5A CN115617527A (en) 2022-11-08 2022-11-08 Management method, configuration method, management device and configuration device of thread pool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211392740.5A CN115617527A (en) 2022-11-08 2022-11-08 Management method, configuration method, management device and configuration device of thread pool

Publications (1)

Publication Number Publication Date
CN115617527A true CN115617527A (en) 2023-01-17

Family

ID=84879175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211392740.5A Pending CN115617527A (en) 2022-11-08 2022-11-08 Management method, configuration method, management device and configuration device of thread pool

Country Status (1)

Country Link
CN (1) CN115617527A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115794449A (en) * 2023-02-10 2023-03-14 中科源码(成都)服务机器人研究院有限公司 Dynamic thread pool construction method, remote procedure calling method and device
CN115794449B (en) * 2023-02-10 2023-10-03 中科源码(成都)服务机器人研究院有限公司 Dynamic thread pool construction method, remote procedure call method and device
CN116366436A (en) * 2023-04-21 2023-06-30 南京弘竹泰信息技术有限公司 Method for providing various telecom value-added services based on wide area networking
CN116366436B (en) * 2023-04-21 2024-03-05 南京弘竹泰信息技术有限公司 Method for providing various telecom value-added services based on wide area networking

Similar Documents

Publication Publication Date Title
CN112162865B (en) Scheduling method and device of server and server
US9755990B2 (en) Automated reconfiguration of shared network resources
CN107729139B (en) Method and device for concurrently acquiring resources
CN115617527A (en) Management method, configuration method, management device and configuration device of thread pool
US8191068B2 (en) Resource management system, resource information providing method and program
US7912949B2 (en) Systems and methods for recording changes to a data store and propagating changes to a client application
US8954971B2 (en) Data collecting method, data collecting apparatus and network management device
US8516509B2 (en) Methods and computer program products for monitoring system calls using safely removable system function table chaining
CN111522636B (en) Application container adjusting method, application container adjusting system, computer readable medium and terminal device
CN107451147B (en) Method and device for dynamically switching kafka clusters
US8930521B2 (en) Method, apparatus, and computer program product for enabling monitoring of a resource
CN103019853A (en) Method and device for dispatching job task
CN110351366B (en) Service scheduling system and method for internet application and storage medium
CN111338773A (en) Distributed timed task scheduling method, scheduling system and server cluster
CN107515784B (en) Method and equipment for calculating resources in distributed system
US20120072575A1 (en) Methods and computer program products for aggregating network application performance metrics by process pool
US10148531B1 (en) Partitioned performance: adaptive predicted impact
US20150280981A1 (en) Apparatus and system for configuration management
US10142195B1 (en) Partitioned performance tracking core resource consumption independently
CN110351532B (en) Video big data cloud platform cloud storage service method
CN107872517A (en) A kind of data processing method and device
CN111813868B (en) Data synchronization method and device
WO2017074320A1 (en) Service scaling for batch processing
CN115587118A (en) Task data dimension table association processing method and device and electronic equipment
US10033620B1 (en) Partitioned performance adaptive policies and leases

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination