CN107341050B - Service processing method and device based on dynamic thread pool - Google Patents


Info

Publication number
CN107341050B
Authority
CN
China
Prior art keywords
service
thread
processing
thread pool
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610274785.0A
Other languages
Chinese (zh)
Other versions
CN107341050A (en)
Inventor
胡峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201610274785.0A
Publication of CN107341050A
Application granted
Publication of CN107341050B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038: Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Abstract

The application discloses a service processing method and device based on a dynamic thread pool. One embodiment of the method comprises: receiving a processing request of a user, wherein the processing request comprises a processing task; selecting, from a preset service queue set, a service queue whose processing task is the same as that of the processing request, and adding the processing request to that service queue; calculating, for each service queue in the service queue set, the thread request amount required to process its processing requests; and allocating a corresponding number of threads from a thread pool to the service thread pool corresponding to each service queue in the set, according to the relationship between the thread request amount required by the processing requests of each service queue and the total thread request amount required by the processing requests of all service queues in the set. The embodiment realizes effective utilization of thread resources and on-demand allocation of threads.

Description

Service processing method and device based on dynamic thread pool
Technical Field
The present application relates to the field of computer technologies, in particular to the field of Internet technologies, and more particularly to a service processing method and apparatus based on a dynamic thread pool.
Background
Currently, when multiple users send task processing requests to a server concurrently, a multithreaded parallel processing mode is usually adopted so that the server is fully utilized and processing efficiency is improved. In existing multi-user concurrent request processing, a static thread pool is used, and all the different service processing flows within the same service process share the same thread pool. Because different service processing flows occupy thread resources for different lengths of time, the shared thread resources cause the flows to influence and interfere with one another, and they cannot be effectively isolated; nor can a service processing flow with a high request volume be guaranteed sufficient thread resources during a traffic peak.
Disclosure of Invention
The present application is directed to an improved service processing method and apparatus based on a dynamic thread pool, so as to solve the technical problems mentioned in the background above.
In a first aspect, the present application provides a service processing method based on a dynamic thread pool, where the method includes: receiving a processing request of a user, wherein the processing request comprises a processing task; selecting, from a preset service queue set, a service queue whose processing task is the same as that of the processing request, and adding the processing request to that service queue; respectively calculating the thread request amount required to process the processing requests of each service queue in the service queue set; and, according to the relationship between the thread request amount required by the processing requests of each service queue and the total thread request amount required by the processing requests of all service queues in the service queue set, respectively allocating a corresponding number of threads from a thread pool to the service thread pool corresponding to each service queue, wherein the service queues in the service queue set correspond one-to-one to the service thread pools in the thread pool.
In some embodiments, the thread pool comprises at least two service thread pools, and threads in different service thread pools of the thread pool are isolated from each other.
In some embodiments, the number of service queues in the service queue set is the same as the number of service thread pools in the thread pool, wherein the processing requests in each service queue have the same processing task.
In some embodiments, the allocating, according to the relationship between the thread request amount required to process each service queue's requests and the total thread request amount required to process the requests of all service queues in the service queue set, a corresponding number of threads from a thread pool to the service thread pool corresponding to each service queue, includes: comparing the total thread request amount with the number of all threads in the thread pool; if the number of all threads in the thread pool is greater than the total thread request amount, allocating threads from the thread pool to the service thread pool corresponding to the service queue according to the thread request amount; and if the number of all threads in the thread pool is less than the total thread request amount, allocating threads from the thread pool to the service thread pool corresponding to the service queue according to the ratio of the thread request amount to the total thread request amount.
In some embodiments, said allocating threads from the thread pool to the service thread pool corresponding to the service queue according to the ratio of the thread request amount to the total thread request amount, if the number of all threads in the thread pool is less than the total thread request amount, includes: the service queue requesting threads from its corresponding service thread pool in the order in which the processing requests arrived; and, when a processing request is finished, returning the thread that processed it to the thread pool.
In some embodiments, said allocating threads from the thread pool to the service thread pool corresponding to said service queue according to said thread request amount, if the number of all threads in said thread pool is greater than said total thread request amount, includes: comparing the thread request amount with a preset service thread pool threshold; and, if the thread request amount is greater than or equal to the service thread pool threshold, allocating a number of threads equal to the service thread pool threshold from the thread pool to the service thread pool corresponding to the service queue.
In a second aspect, the present application provides a dynamic thread pool-based service processing apparatus, including: a receiving unit configured to receive a processing request of a user, the processing request comprising a processing task; a matching unit configured to select, from a preset service queue set, a service queue whose processing task is the same as that of the processing request, and to add the processing request to that service queue; a computing unit configured to respectively compute the thread request amount required to process the processing requests of each service queue in the service queue set; and a thread allocation unit configured to allocate a corresponding number of threads from a thread pool to the service thread pool corresponding to each service queue in the set, according to the relationship between the thread request amount required by the processing requests of each service queue and the total thread request amount required by the processing requests of all service queues in the service queue set, wherein the service queues in the service queue set correspond one-to-one to the service thread pools in the thread pool.
In some embodiments, the thread pool comprises at least two service thread pools, and threads in different service thread pools of the thread pool are isolated from each other.
In some embodiments, the number of service queues in the service queue set is the same as the number of service thread pools in the thread pool, wherein the processing requests in each service queue have the same processing task.
In some embodiments, the thread allocation unit is further configured to: compare the total thread request amount with the number of all threads in the thread pool; if the number of all threads in the thread pool is greater than the total thread request amount, allocate threads from the thread pool to the service thread pool corresponding to the service queue according to the thread request amount; and, if the number of all threads in the thread pool is less than the total thread request amount, allocate threads from the thread pool to the service thread pool corresponding to the service queue according to the ratio of the thread request amount to the total thread request amount.
In some embodiments, the thread allocation unit is further configured to: if the number of all threads in the thread pool is less than the total thread request amount, have the service queue request threads from its corresponding service thread pool in the order in which the processing requests arrived; and, when a processing request is finished, return the thread that processed it to the thread pool.
In some embodiments, the thread allocation unit is further configured to: if the number of all threads in the thread pool is less than the total thread request amount, compare the thread request amount with a preset service thread pool threshold; and, if the thread request amount is greater than or equal to the service thread pool threshold, allocate a number of threads equal to the service thread pool threshold from the thread pool to the service thread pool corresponding to the service queue.
According to the dynamic thread pool-based service processing method and device, the processing requests of users are sorted into service queues with the same processing task; the thread request amount required to process the requests in each service queue of the set is calculated; and, according to the relationship between each queue's thread request amount and the total thread request amount required by the requests of all service queues in the set, a corresponding number of threads is allocated from the thread pool to the service thread pool corresponding to each service queue. Dynamic on-demand allocation of the threads in the thread pool is thereby realized.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram for one embodiment of a dynamic thread pool based service processing method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a dynamic thread pool based service processing method according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a dynamic thread pool based service processing method according to the present application;
FIG. 5 is a schematic structural diagram of one embodiment of a dynamic thread pool based service processing apparatus according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting it. It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings.
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the dynamic thread pool based service processing method or dynamic thread pool based service processing apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a shopping application, a search application, a web browser application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting shopping-like applications, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background processing server providing support for shopping-like applications displayed on the terminal devices 101, 102, 103. For received data such as a user's processing request, the background processing server can, according to the processing task, request a service thread from the corresponding service thread pool through the service interface and execute the service processing flow.
It should be noted that the service processing method based on the dynamic thread pool provided in the embodiments of the present application is generally executed by the server 105, and accordingly, the service processing apparatus based on the dynamic thread pool is generally disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a dynamic thread pool based service processing method in accordance with the present application is shown. The service processing method based on the dynamic thread pool comprises the following steps:
step 201, receiving a processing request of a user.
In this embodiment, an electronic device (for example, the server shown in FIG. 1) on which the dynamic thread pool-based service processing method runs may receive, through a wired or wireless connection, a processing request from a terminal with which the user performs online shopping. The processing request includes a processing task, i.e., the service processing flow that the server needs to provide for the request; for example, the processing task may be querying the price of a commodity in an order, requesting to place an order, or requesting to pay.
Generally, a user uses application software installed on a terminal to send a processing request to the server, for example a data query request, or an order placement request, payment request, and other requests arising in order processing.
Step 202, selecting a service queue having the same processing task as the processing request from a preset service queue set, and adding the processing request into the selected service queue.
In this embodiment, based on the processing task of the user request obtained in step 201, the electronic device (for example, the server shown in FIG. 1) may preset a service queue set in a preset buffer area, where the set includes a plurality of service queues and each service queue is composed of processing requests having the same processing task. Each service queue is an independent service call requester: it is linked, through a preset service interface, to the service thread pool that handles its processing task, and requests that pool to allocate threads to execute the service processing flow. According to the user's processing request, the electronic device selects from the preset service queue set the service queue whose processing task is the same as that of the request, and adds the request to that queue.
In some optional implementations of the embodiment, the number of service queues in the service queue set is the same as the number of service thread pools in the thread pool, where the processing requests in each service queue have the same processing task. Here, the service queue is used to link processing requests with the same task to the same service thread pool, so that the threads in the same service thread pool process the processing requests of the same processing task, and the processing efficiency is improved.
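As an illustration of step 202 and the one-to-one queue mapping described above, the routing of requests to task-specific queues can be sketched in Python. The task names "order" and "query" here are hypothetical examples, not names from the patent:

```python
from collections import deque

# Minimal sketch of step 202, using hypothetical task names: each processing
# task has its own service queue, and an incoming request is appended to the
# queue whose task matches.
service_queues = {"order": deque(), "query": deque()}

def enqueue_request(request):
    """Add a processing request to the service queue matching its task."""
    queue = service_queues.get(request["task"])
    if queue is None:
        raise ValueError("no service queue for task %r" % request["task"])
    queue.append(request)
    return queue
```

Because each queue holds only requests for one task, every queue can later be linked to exactly one service thread pool through its service interface.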
Step 203, calculating the thread request amount required for processing the processing request of each service queue in the service queue set.
In this embodiment, based on the service queue selected in step 202, the electronic device calculates the number of threads required to process all processing requests in each service queue according to the time needed to process a single processing task and the number of processing requests in the queue; this number is the thread request amount of the service queue. The thread request amount of each service queue in the set is calculated separately, and the total thread request amount of all service queues in the set is then calculated from the individual amounts.
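The patent does not spell out the formula for step 203, so the following Python sketch is one plausible reading under an assumed target completion window: a queue needs enough threads to drain its requests within the window, i.e. ceil(n_requests × seconds_per_task / window_seconds).

```python
import math

# Illustrative only: assumes a target completion window (not stated in the
# patent) and estimates the threads each queue needs to drain within it.
def thread_request_amount(n_requests, seconds_per_task, window_seconds):
    return math.ceil(n_requests * seconds_per_task / window_seconds)

def total_thread_request_amount(queues, window_seconds):
    """queues: dict mapping queue name -> (n_requests, seconds_per_task)."""
    return sum(thread_request_amount(n, t, window_seconds)
               for n, t in queues.values())
```

Both quantities match the text: the per-queue value is the thread request amount, and their sum over the set is the total thread request amount used in step 204.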
Step 204, respectively allocating a corresponding number of threads from the thread pool to the service thread pool corresponding to each service queue in the service queue set, according to the relationship between the thread request amount required by the processing requests of each service queue and the total thread request amount required by the processing requests of all service queues in the set.
In this embodiment, based on the calculation result in step 203, the electronic device may allocate the threads in the thread pool according to the relationship between the thread request amount of each service queue and the total thread request amount of all service queues in the set. Here, threads may be allocated to the service thread pool corresponding to each service queue in proportion to the ratio of that queue's thread request amount to the total thread request amount. The service queues in the service queue set correspond one-to-one to the service thread pools in the thread pool.
In some optional implementations of this embodiment, the thread pool in the electronic device includes at least two service thread pools, and threads in different service thread pools are isolated from each other. The number of service thread pools may be set in advance according to the number of distinct processing tasks. For example, if the processing tasks are order service and query service, the number of service thread pools is set to 2, providing processing threads for the requests of the order service and the query service respectively; if the processing tasks are order service, query service, and evaluation service, the number of service thread pools is set to 3, providing processing threads for the three kinds of requests respectively. Setting separate service thread pools for different processing tasks reduces the delays in releasing threads that arise when tasks with different processing times wait on one another in a single shared thread pool.
In some optional implementations of this embodiment, the electronic device may allocate the threads of the thread pool to the service thread pools according to the relationship between the total thread request amount required by the service queue set and the number of all threads in the thread pool. Here, the total thread request amount is compared with the number of all threads in the thread pool. If the number of all threads in the thread pool is greater than the total thread request amount, the threads in the pool are sufficient: threads can be allocated according to the processing needs of each service queue, i.e., a number of threads equal to the thread request amount can be allocated from the thread pool to the corresponding service thread pool. If the number of all threads in the thread pool is less than the total thread request amount, the threads are insufficient: threads are allocated from the thread pool to the service thread pool corresponding to each service queue according to the ratio of that queue's thread request amount to the total thread request amount, and the number of threads actually allocated to each service thread pool is obtained by rounding the product of this ratio and the number of threads in the thread pool.
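The two branches above can be sketched in a few lines of Python. This is a minimal sketch of the allocation rule as described, with rounding down assumed for the proportional case (the patent says only "rounding"):

```python
# Sketch of the allocation rule: if the pool has enough threads, each service
# thread pool gets exactly what its queue requests; otherwise threads are
# split in proportion to each queue's share of the total request amount.
def allocate_threads(requests, pool_size):
    """requests: dict mapping service queue name -> thread request amount."""
    total = sum(requests.values())
    if total <= pool_size:
        # Sufficient threads: allocate on demand.
        return dict(requests)
    # Insufficient threads: proportional allocation, rounded down.
    return {name: amount * pool_size // total
            for name, amount in requests.items()}
```

With flooring, a few threads may remain unallocated in the insufficient case; how the patent intends to distribute such remainders is not specified.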
With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the service processing method based on the dynamic thread pool according to the present embodiment. In the application scenario of FIG. 3, a service queue in the service queue set requests thread allocation from its service thread pool by calling the service interface corresponding to it. Here, a service queue is a service caller and a service thread pool is a service provider; the caller requests the provider to allocate threads by calling a service interface. As shown in FIG. 3-A, as an example, the service caller has service queues for two types of processing tasks, and the two corresponding service thread pools at the service provider are distinguished by labels that imply no precedence and place no limitation on their number. The service queue set contains service queue A and service queue B; the thread pool contains service thread pool A and service thread pool B, which are independent of each other, with mutually isolated thread resources. Service queue A requests thread allocation from service thread pool A by calling service interface A, and service queue B requests thread allocation from service thread pool B by calling service interface B. In FIG. 3-A, the same number of threads may initially be allocated to service thread pool A and service thread pool B; the number of threads in a service thread pool is represented by the area of its circle, and when the thread demand is small, threads in the thread pool may be allocated to the two service thread pools as needed. As shown in FIG. 3-B, when the class-A request volume of service queue A increases (indicated by the thick arrow), the number of threads in service thread pool A, which corresponds to class A, increases, and service thread pool A becomes larger (indicated by the larger circular area). Here, a thread coordinator at the service provider allocates a corresponding number of threads from the thread pool to service thread pool A or service thread pool B according to the thread request amount required to process each service queue's requests.
According to the method provided by this embodiment of the application, the threads in the thread pool are distributed among the service thread pools according to the thread request amount of each service queue, thereby making effective use of thread resources and allocating threads on demand.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a dynamic thread pool based service processing method is illustrated. The flow 400 of the dynamic thread pool-based service processing method includes the following steps:
step 401, a processing request of a user is received.
In this embodiment, an electronic device (for example, the server shown in FIG. 1) on which the dynamic thread pool-based service processing method runs may receive, through a wired or wireless connection, a processing request from a terminal with which the user performs online shopping. The processing request includes a processing task, i.e., the service processing flow that the server needs to provide for the request.
Step 402, selecting a service queue having the same processing task as the processing request from a preset service queue set, and adding the processing request into the selected service queue.
In this embodiment, based on the processing task of the user request obtained in step 401, the electronic device (for example, the server shown in FIG. 1) may set a service queue set in a preset buffer area, where the set includes a plurality of service queues and each service queue is composed of processing requests having the same processing task. Each service queue is an independent service call requester: it is linked, through the service interface, to the service thread pool that handles its processing task, and requests that pool to allocate threads to execute the service processing flow. According to the user's processing request, the electronic device selects from the preset service queue set the service queue whose processing task is the same as that of the request, and adds the request to that queue.
Step 403, calculating the thread request amount required for processing the processing request of each service queue in the service queue set.
In this embodiment, based on the service queue selected in step 402, the electronic device calculates the number of threads required to process all processing requests in each service queue according to the time needed to process a single processing task and the number of processing requests in the queue. Here, this number is the thread request amount of the service queue. The thread request amount of each service queue in the set is calculated separately, and the total thread request amount of all service queues in the set is then calculated from the individual amounts.
Step 404, each service queue sequentially requests thread allocation from its service thread pool in the order in which the processing requests arrived.
In this embodiment, using the total thread request amount calculated in step 403, if the number of all threads in the thread pool is less than the total thread request amount, each service queue sequentially requests thread allocation from its corresponding service thread pool in the order in which the processing requests arrived. By default, threads are generally allocated from the service thread pool on a first-come-first-served (FIFO) basis. For example, if the thread pool is configured with 100 threads, then 100 processing requests can be processed at the same time; if the number of concurrent processing requests exceeds 100, the excess requests wait in the service queue, and if it is less than 100, each request is allocated a thread from the pool for processing, with each thread returned to the pool once its processing is finished.
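The first-come-first-served behavior described above can be modeled with a counting semaphore standing in for the fixed number of pool threads; this is an illustrative sketch, not the patent's implementation:

```python
import threading

# A semaphore models the fixed number of threads in a service thread pool:
# requests beyond the limit block until a thread is "returned" to the pool.
class BoundedPool:
    def __init__(self, max_threads):
        self._slots = threading.BoundedSemaphore(max_threads)

    def process(self, handler, request):
        with self._slots:            # take a thread; wait if all are busy
            return handler(request)  # thread returns to the pool on exit
```

With `BoundedPool(100)`, at most 100 requests run concurrently and any further callers wait, matching the 100-thread example in the text.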
Step 405: according to the relationship between the thread request amount required to process the requests of each service queue and the total thread request amount required to process the requests of all service queues in the service queue set, allocate a corresponding number of threads from the thread pool to the service thread pool corresponding to each service queue in the set.
In this embodiment, based on the calculation result of step 403, the electronic device may allocate the threads in the thread pool according to the relationship between each service queue's thread request amount and the total thread request amount of all service queues in the set. Specifically, threads may be allocated to the service thread pool corresponding to each service queue in proportion to the ratio of that queue's thread request amount to the total thread request amount. The service queues in the set correspond one-to-one to the service thread pools in the thread pool.
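A minimal sketch of the proportional split described here, assuming integer allocations with a largest-remainder tie-break (the tie-break is our assumption; the patent only specifies allocation by ratio):

```python
def allocate_proportionally(pool_size: int, demands: dict) -> dict:
    """Split pool_size threads among service thread pools in proportion
    to each queue's share of the total thread request amount."""
    total = sum(demands.values())
    alloc = {q: pool_size * d // total for q, d in demands.items()}
    # Hand out threads lost to integer division, largest remainder first.
    leftover = pool_size - sum(alloc.values())
    for q in sorted(demands, key=lambda q: pool_size * demands[q] % total, reverse=True):
        if leftover == 0:
            break
        alloc[q] += 1
        leftover -= 1
    return alloc

# 100 pool threads, queues demanding 60/30/10 -> allocated 60/30/10 threads.
assert allocate_proportionally(100, {"a": 60, "b": 30, "c": 10}) == {"a": 60, "b": 30, "c": 10}
```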
In some optional implementations of this embodiment, the electronic device may allocate threads according to the relationship between the number of all threads in the thread pool and the total thread request amount. If the number of all threads in the thread pool is less than the total thread request amount, the thread request amount of a queue is compared with a preset service thread pool threshold; if the thread request amount is greater than or equal to the threshold, a number of threads equal to the threshold is allocated from the thread pool to the service thread pool corresponding to that queue. Setting the service thread pool threshold prevents the service requests of one processing task from occupying a large share of the thread resources at once while the service requests of another processing task wait for long periods.
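The threshold cap might look like this in outline; the function name and the rule of applying the cap only when demand exceeds the pool are assumptions based on the description above:

```python
def allocate_with_cap(pool_size: int, demands: dict, cap: int) -> dict:
    """When total demand exceeds the pool, cap each queue's allocation at the
    service-thread-pool threshold so that no one task starves the others."""
    if sum(demands.values()) <= pool_size:
        return dict(demands)             # enough threads: give each queue its demand
    return {q: min(d, cap) for q, d in demands.items()}

# Demand 180 > 100 pool threads; a cap of 40 stops "order" taking 150 threads.
assert allocate_with_cap(100, {"order": 150, "refund": 30}, cap=40) == {"order": 40, "refund": 30}
```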
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the dynamic-thread-pool-based service processing method in this embodiment highlights the step in which the service queues request thread allocation from their corresponding service thread pools in order of the arrival times of the processing requests. The scheme described in this embodiment can therefore request thread allocation from the thread pool in the order in which service requests arrive, achieving a more complete thread allocation and service processing flow.
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of a service processing apparatus based on a dynamic thread pool, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 5, the dynamic-thread-pool-based service processing apparatus 500 according to this embodiment includes: a receiving unit 501, a matching unit 502, a calculating unit 503, and a thread allocating unit 504. The receiving unit 501 is configured to receive a processing request of a user, where the processing request includes a processing task; the matching unit 502 is configured to select, from a preset service queue set, a service queue having the same processing task as the processing request, and add the processing request to that service queue; the calculating unit 503 is configured to calculate the thread request amount required to process the processing requests of each service queue in the set; and the thread allocating unit 504 is configured to allocate, according to the relationship between the thread request amount required by each service queue and the total thread request amount required by all service queues in the set, a corresponding number of threads from the thread pool to the service thread pools corresponding to the service queues, where the service queues in the set correspond one-to-one to the service thread pools in the thread pool.
In this embodiment, the receiving unit 501 of the dynamic-thread-pool-based service processing apparatus 500 may receive a user's processing request, which includes a processing task, from the user's terminal (for example, a terminal on which the user performs online shopping) through a wired or wireless connection.
In this embodiment, based on the processing request obtained by the receiving unit 501, the matching unit 502 selects a service queue having the same processing task as the processing request from the preset service queue set, and adds the processing request to the service queue.
In this embodiment, the calculating unit 503 calculates the number of threads required to process all the processing requests in a service queue, according to the time needed to process a single processing task and the number of processing requests in the queue; this number of threads is the thread request amount of the service queue.
In this embodiment, based on the calculation result of the calculating unit 503, the thread allocation unit 504 may allocate the threads in the thread pool according to the relationship between each service queue's thread request amount and the total thread request amount of all service queues in the service queue set. The service queues in the set correspond one-to-one to the service thread pools in the thread pool.
In some optional implementations of the present embodiment, the thread pool of the dynamic thread pool-based service processing apparatus 500 includes at least two service thread pools, where threads between different service thread pools in the thread pool are isolated from each other.
In some optional implementations of the present embodiment, the number of service queues in the service queue set preset in the cache region of the dynamic thread pool-based service processing apparatus 500 is the same as the number of service thread pools in the thread pool, where the processing requests in each service queue have the same processing task.
In some optional implementations of the present embodiment, the dynamic thread pool based service processing apparatus 500 thread allocation unit 504 is further configured to: comparing the request quantity of the bus threads with the quantity of all threads in the thread pool; if the number of all threads in the thread pool is more than the bus thread request quantity, distributing the threads from the thread pool to a service thread pool according to the thread request quantity required by the service queue; and if the number of all threads in the thread pool is less than the bus thread request amount, distributing the threads from the thread pool to a service thread pool according to the ratio relation of the thread request amount and the bus thread request amount.
In some optional implementations of the present embodiment, the dynamic thread pool based service processing apparatus 500 thread allocation unit 504 is further configured to: if the number of all threads in the thread pool is less than the total thread request amount, the service queue requests the service thread pool to distribute thread execution processing flows according to the arrival time sequence of the processing requests; and when the processing request is finished, returning the thread for processing the processing request to the thread pool.
In some optional implementations of this embodiment, the thread allocating unit 504 of the dynamic-thread-pool-based service processing apparatus 500 is further configured to: if the number of all threads in the thread pool is less than the total thread request amount, compare the thread request amount with a preset service thread pool threshold; and if the thread request amount is greater than or equal to the threshold, allocate a number of threads equal to the threshold from the thread pool to the service thread pool corresponding to the service queue.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a terminal device or server of an embodiment of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor includes a receiving unit, a matching unit, a calculating unit, and a thread allocating unit. Where the names of these units do not in some cases constitute a limitation on the unit itself, for example, a receiving unit may also be described as a "unit that receives a processing request of a user".
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the above-described embodiments, or may be a non-volatile computer storage medium that exists separately and is not assembled into the terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to: receive a processing request of a user, where the processing request includes a processing task; select, from a preset service queue set, a service queue having the same processing task as the processing request, and add the processing request to the service queue; calculate the thread request amount required to process the processing requests of each service queue in the set; and, according to the relationship between the thread request amount required by the processing requests of each service queue and the total thread request amount required by the processing requests of all service queues in the set, allocate a corresponding number of threads from the thread pool to the service thread pools corresponding to the service queues, where the service queues in the set correspond one-to-one to the service thread pools in the thread pool.
The foregoing description is only exemplary of the preferred embodiments of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A service processing method based on a dynamic thread pool is characterized by comprising the following steps:
receiving a processing request of a user, wherein the processing request comprises a processing task;
selecting a service queue with the same processing task as the processing request from a preset service queue set, and adding the processing request into the service queue;
respectively calculating the thread request quantity required by processing the processing request of each service queue in the service queue set;
according to the relation between the thread request quantity required by the processing request of each service queue and the total thread request quantity required by the processing requests of all service queues in the service queue set, respectively allocating a corresponding quantity of threads from a thread pool to a service thread pool corresponding to the service queues in the service queue set, wherein the service queues in the service queue set respectively correspond to the service thread pools in the thread pool one to one;
wherein, according to the relationship between the thread request quantity required by the processing request of each service queue and the total thread request quantity required by the processing requests of all service queues in the service queue set, respectively allocating a corresponding number of threads from a thread pool to the service thread pool corresponding to the service queue in the service queue set, includes:
comparing the total thread request quantity with the quantity of all threads in the thread pool;
and if the number of all threads in the thread pool is less than the total thread request quantity, distributing the threads from the thread pool to a service thread pool corresponding to the service queue according to the ratio of the thread request quantity to the total thread request quantity.
2. The method of claim 1, wherein the thread pool comprises at least two service thread pools, and wherein threads in different service thread pools in the thread pool are isolated from each other.
3. The method of claim 1, wherein the number of service queues in the set of service queues is the same as the number of service thread pools in the thread pool, wherein the processing requests in each service queue have the same processing task.
4. The method according to claim 1, wherein said allocating a corresponding number of threads from a thread pool to a service thread pool corresponding to a service queue in the service queue set according to a relationship between a thread request amount required for processing a request of each service queue and a total thread request amount required for processing requests of all service queues in the service queue set, further comprises:
and if the number of all threads in the thread pool is greater than the total thread request amount, distributing the threads from the thread pool to a service thread pool corresponding to the service queue according to the thread request amount.
5. The method according to claim 4, wherein said allocating threads from a thread pool to a service thread pool corresponding to said service queue according to a ratio of said thread request amount to said total thread request amount if the number of all threads in said thread pool is less than said total thread request amount comprises:
the service queue requests threads from a service thread pool corresponding to the service queue according to the arrival time sequence of the processing requests;
and when the processing request is finished, returning the thread for processing the processing request to the thread pool.
6. The method according to any of claims 1-5, wherein said allocating threads from a thread pool to a service thread pool corresponding to said service queue according to said thread request amount if the number of all threads in said thread pool is less than said total thread request amount comprises:
comparing the thread request quantity with a preset service thread pool threshold value;
and if the thread request quantity is greater than or equal to the service thread pool threshold value, distributing threads with the same quantity as the service thread pool threshold value from a thread pool to a service thread pool corresponding to the service queue.
7. A dynamic thread pool based service processing apparatus, the apparatus comprising:
the device comprises a receiving unit, a processing unit and a processing unit, wherein the receiving unit is used for receiving a processing request of a user, and the processing request comprises a processing task;
the matching unit is configured to select a service queue with the same processing task as the processing request from a preset service queue set, and add the processing request into the service queue;
the computing unit is configured to respectively compute thread request quantities required by processing requests of each service queue in the service queue set;
the thread allocation unit is configured to allocate a corresponding number of threads from a thread pool to a service thread pool corresponding to the service queues in the service queue set according to a relationship between the thread request amount required by the processing request of each service queue and the total thread request amount required by the processing requests of all the service queues in the service queue set, wherein the service queues in the service queue set correspond to the service thread pools in the thread pool one by one;
wherein the thread allocation unit is further configured to:
comparing the total thread request quantity with the quantity of all threads in the thread pool;
and if the number of all threads in the thread pool is less than the total thread request quantity, distributing the threads from the thread pool to a service thread pool corresponding to the service queue according to the ratio of the thread request quantity to the total thread request quantity.
8. The apparatus of claim 7, wherein the thread pool comprises at least two service thread pools, and wherein threads in different service thread pools in the thread pool are isolated from each other.
9. The apparatus of claim 7, wherein the number of service queues in the set of service queues is the same as the number of service thread pools in the thread pool, and wherein the processing requests in each service queue have the same processing task.
10. The apparatus of claim 7, wherein the thread allocation unit is further configured to:
and if the number of all threads in the thread pool is greater than the total thread request amount, distributing the threads from the thread pool to a service thread pool corresponding to the service queue according to the thread request amount.
11. The apparatus of claim 10, wherein the thread allocation unit is further configured to:
if the number of all threads in the thread pool is less than the total thread request amount, the service queue requests the threads from the service thread pool corresponding to the service queue according to the arrival time sequence of the processing requests;
and when the processing request is finished, returning the thread for processing the processing request to the thread pool.
12. The apparatus according to one of claims 7 to 11, wherein the thread allocation unit is further configured to:
if the number of all threads in the thread pool is less than the total thread request amount, comparing the thread request amount with a preset service thread pool threshold value;
and if the thread request quantity is greater than or equal to the service thread pool threshold value, distributing the threads with the same quantity as the service thread pool threshold value from the thread pool to the service thread pool corresponding to the service queue.
13. A server, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201610274785.0A 2016-04-28 2016-04-28 Service processing method and device based on dynamic thread pool Active CN107341050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610274785.0A CN107341050B (en) 2016-04-28 2016-04-28 Service processing method and device based on dynamic thread pool


Publications (2)

Publication Number Publication Date
CN107341050A CN107341050A (en) 2017-11-10
CN107341050B true CN107341050B (en) 2022-12-27

Family

ID=60221680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610274785.0A Active CN107341050B (en) 2016-04-28 2016-04-28 Service processing method and device based on dynamic thread pool

Country Status (1)

Country Link
CN (1) CN107341050B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107872398A (en) * 2017-06-25 2018-04-03 平安科技(深圳)有限公司 High concurrent data processing method, device and computer-readable recording medium
CN110231981B (en) * 2018-03-06 2022-12-13 华为技术有限公司 Service calling method and device
CN108681481B (en) * 2018-03-13 2021-10-15 创新先进技术有限公司 Service request processing method and device
CN110413419A (en) * 2018-04-28 2019-11-05 北京京东尚科信息技术有限公司 A kind of method and apparatus that rule executes
CN108846632A (en) * 2018-05-28 2018-11-20 浙江口碑网络技术有限公司 Thread processing method and device
CN109254933A (en) * 2018-09-25 2019-01-22 郑州云海信息技术有限公司 A kind of processing method of I/O Request, system and associated component
CN109710402A (en) * 2018-12-17 2019-05-03 平安普惠企业管理有限公司 Method, apparatus, computer equipment and the storage medium of process resource acquisition request
CN109814994B (en) * 2019-01-03 2021-10-08 福建天泉教育科技有限公司 Method and terminal for dynamically scheduling thread pool
CN110018892A (en) * 2019-03-12 2019-07-16 平安普惠企业管理有限公司 Task processing method and relevant apparatus based on thread resources
CN110569123B (en) * 2019-07-31 2022-08-02 苏宁云计算有限公司 Thread allocation method and device, computer equipment and storage medium
CN111679900B (en) * 2020-06-15 2023-10-31 杭州海康威视数字技术股份有限公司 Task processing method and device
CN111897643A (en) * 2020-08-05 2020-11-06 深圳鼎盛电脑科技有限公司 Thread pool configuration system, method, device and storage medium
CN113794650A (en) * 2021-09-16 2021-12-14 平安国际智慧城市科技股份有限公司 Concurrent request processing method, computer device and computer-readable storage medium
CN114374657A (en) * 2022-01-04 2022-04-19 京东科技控股股份有限公司 Data processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821164A (en) * 2012-08-31 2012-12-12 河海大学 Efficient parallel-distribution type data processing system
CN102855293A (en) * 2012-08-10 2013-01-02 广东电网公司电力科学研究院 Mass data processing method of electric vehicle and charging/battery swap facility system
CN103810072A (en) * 2012-11-09 2014-05-21 上海飞田通信技术有限公司 Device and method for guaranteeing order execution of multithread tasks

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294531B (en) * 2012-03-05 2017-03-01 阿里巴巴集团控股有限公司 A kind of method for allocating tasks and system
CN102629220A (en) * 2012-03-08 2012-08-08 北京神州数码思特奇信息技术股份有限公司 Dynamic task allocation and management method
CN102722417B (en) * 2012-06-07 2015-04-15 腾讯科技(深圳)有限公司 Distribution method and device for scan task
CN103677997B (en) * 2012-09-21 2017-04-12 宏达国际电子股份有限公司 Multi-core device and multi-thread scheduling method thereof
CN103197968B (en) * 2013-03-18 2016-03-30 焦点科技股份有限公司 A kind of thread pool disposal route and system merging synchronous asynchronous feature
CN103268247B (en) * 2013-06-05 2017-01-18 中国电子科技集团公司第十五研究所 Method and device for executing task and adjusting number of remaining threads in thread pool
CN103455377B (en) * 2013-08-06 2019-01-22 北京京东尚科信息技术有限公司 System and method for management business thread pool
CN103473129B (en) * 2013-09-18 2017-01-18 深圳前海大数金融服务有限公司 Multi-task queue scheduling system with scalable number of threads and implementation method thereof
CN103605498B (en) * 2013-12-05 2016-07-06 用友网络科技股份有限公司 The multithreading of mono-thread tasks performs method and system
CN105389208B (en) * 2015-11-10 2018-12-14 中国建设银行股份有限公司 Job processing method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855293A (en) * 2012-08-10 2013-01-02 广东电网公司电力科学研究院 Mass data processing method of electric vehicle and charging/battery swap facility system
CN102821164A (en) * 2012-08-31 2012-12-12 河海大学 Efficient parallel-distribution type data processing system
CN103810072A (en) * 2012-11-09 2014-05-21 上海飞田通信技术有限公司 Device and method for guaranteeing order execution of multithread tasks

Also Published As

Publication number Publication date
CN107341050A (en) 2017-11-10

Similar Documents

Publication Publication Date Title
CN107341050B (en) Service processing method and device based on dynamic thread pool
CN107832143B (en) Method and device for processing physical machine resources
CN111666147B (en) Resource scheduling method, equipment, system and central server
CN112379982B (en) Task processing method, device, electronic equipment and computer readable storage medium
CN114155026A (en) Resource allocation method, device, server and storage medium
CN110113176B (en) Information synchronization method and device for configuration server
CN111062572A (en) Task allocation method and device
CN112686528B (en) Method, device, server and medium for distributing customer service resources
CN107045452B (en) Virtual machine scheduling method and device
CN113051456A (en) Request processing method and device, electronic equipment and computer readable medium
CN109842665B (en) Task processing method and device for task allocation server
CN113765966A (en) Load balancing method and device
CN107609852B (en) Method and apparatus for processing payment requests
CN107634978B (en) Resource scheduling method and device
CN108683608B (en) Method and device for distributing flow
CN113127561B (en) Method and device for generating service single number, electronic equipment and storage medium
CN111093281B (en) Method and device for allocating resources
CN116069518A (en) Dynamic allocation processing task method and device, electronic equipment and readable storage medium
CN109471574B (en) Method and device for configuring resources
CN114489978A (en) Resource scheduling method, device, equipment and storage medium
US10979359B1 (en) Polling resource management system
CN112561301A (en) Work order distribution method, device, equipment and computer readable medium
CN113448717A (en) Resource scheduling method and device
CN111694670A (en) Resource allocation method, device, equipment and computer readable medium
CN115391042B (en) Resource allocation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant