CN117311939A - Client request processing method, computing device and storage medium - Google Patents


Info

Publication number
CN117311939A
CN117311939A (application CN202311285819.2A)
Authority
CN
China
Prior art keywords
batch
thread pool
task
request
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311285819.2A
Other languages
Chinese (zh)
Inventor
王皓琪
朱德权
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Cnb Express It Co ltd
Original Assignee
Shanghai Cnb Express It Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Cnb Express It Co ltd filed Critical Shanghai Cnb Express It Co ltd
Priority to CN202311285819.2A
Publication of CN117311939A
Legal status: Pending

Classifications

    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5038 Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F2209/484 Precedence
    • G06F2209/5011 Pool
    • G06F2209/5018 Thread allocation
    • G06F2209/5021 Priority
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a method for processing a client request, comprising the following steps: receiving a client request, and determining whether it is a batch task request or a common access request; allocating a batch thread pool to the batch task request, and processing the batch tasks in parallel through threads in the batch thread pool; allocating a general thread pool to the common access request, and processing its tasks one by one, according to the task queue, through threads in the general thread pool; and returning the task execution results of the batch thread pool and the general thread pool to the client. The scheme improves the parallel performance of the system and reduces resource waste.

Description

Client request processing method, computing device and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method for processing a client request, a computing device, and a storage medium.
Background
Multithreading refers to running multiple threads within one process; each thread can run independently while accessing shared resources concurrently. Multithreaded concurrency is widely used to improve the performance and responsiveness of systems, and effective management of multithreading is critical to system stability and resource utilization.
When a client simultaneously issues common access requests and one-off batch task requests (for example, to process approval tasks), the traditional serial processing mode may lead to low processing efficiency and low customer satisfaction, even though common access requests do not need the system's parallel processing capability. If the main thread opens multithreading directly within the service, the core thread count and maximum thread count of the thread pool must be increased, which wastes resources on requests that do not need parallel processing.
Disclosure of Invention
To overcome the defects of the prior art, this scheme provides a method for processing a client request that isolates the task processing threads of common access requests from those of batch task requests. By processing batch requests with concurrent multithreading, the concurrency performance of the system is improved while threads that do not need concurrency capability are isolated, so as to maximize resource utilization.
According to a first aspect of the present invention, there is provided a method for processing a client request, comprising: receiving a client request, and determining whether it is a batch task request or a common access request; allocating a batch thread pool to the batch task request, and processing the batch tasks in parallel through threads in the batch thread pool; allocating a general thread pool to the common access request, and processing its tasks one by one, according to the task queue, through threads in the general thread pool; and returning the task execution results of the batch thread pool and the general thread pool to the client.
The scheme can flexibly perform task allocation and thread management according to the client request, and can meet different business requirements.
Optionally, in the method for processing a client request provided by the present invention, the maximum thread count, the core thread count and the task queue capacity of the general thread pool are respectively smaller than the maximum thread count, the core thread count and the task queue capacity of the batch thread pool.
Optionally, in the method for processing the client request provided by the invention, whether the client request is a batch task request or a common access request is judged according to the request parameters, the request frequency, or the interface documentation of the client request.
Optionally, in the method for processing the client request provided by the invention, it is first determined whether a batch thread pool currently exists; if not, one is created through the thread pool manager, and if so, the current batch thread pool is used. Tasks are then distributed to threads in the batch thread pool according to the task concurrency number; the threads process the distributed tasks in parallel and, after all tasks are completed, return a first future object list containing all the tasks; the first future object list is traversed in a loop, and the execution result of each task is obtained from its future object.
Optionally, in the method for processing a client request provided by the invention, a batch semaphore is defined according to the task concurrency number in the batch task request, and tasks are evenly distributed to the threads in the batch thread pool according to the batch semaphore size.
Optionally, in the method for processing the client request provided by the invention, if the batch thread pool is fully occupied, the task is transferred to the general thread pool for execution.
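The patent does not specify how this fallback is realized. As an illustrative sketch only (the language, class names, and pool sizes here are assumptions), one plausible mechanism in Java is a `RejectedExecutionHandler` that forwards tasks to the general pool once the batch pool's threads and queue are both full:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical demonstration: when the batch thread pool is saturated, rejected
// tasks are handed over to the general thread pool instead of being dropped.
public class FallbackDemo {

    static final ExecutorService GENERAL_POOL = Executors.newFixedThreadPool(2);

    // The batch pool is deliberately tiny (1 thread, queue of 1) so saturation is
    // easy to trigger; the rejection handler forwards overflow to the general pool.
    static final ThreadPoolExecutor BATCH_POOL = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1),
            (task, pool) -> GENERAL_POOL.execute(task));

    public static boolean overflowGoesToGeneralPool() {
        CountDownLatch release = new CountDownLatch(1);
        CountDownLatch overflowRan = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        try {
            BATCH_POOL.execute(blocker);                 // occupies the single batch thread
            BATCH_POOL.execute(blocker);                 // fills the batch queue
            BATCH_POOL.execute(overflowRan::countDown);  // rejected -> runs on the general pool
            return overflowRan.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            release.countDown();                         // unblock the batch tasks
        }
    }

    public static void main(String[] args) {
        System.out.println(overflowGoesToGeneralPool());
        BATCH_POOL.shutdown();
        GENERAL_POOL.shutdown();
    }
}
```

In a production variant, the handler would typically also record that the task was diverted, so results can still be collected through the thread manager.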
Optionally, in the method for processing the client request provided by the invention, a thread is allocated to each task in the common access request for execution according to the task queue; after all tasks are completed, a second future object list containing all the tasks is returned; the second future object list is traversed in a loop, and the execution result of each task is obtained from its future object.
Optionally, in the method for processing a client request provided by the invention, task execution results are obtained through the thread manager from the first future object list of the batch thread pool and the second future object list of the general thread pool, and returned to the client.
According to a second aspect of the present invention, there is provided a computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method of processing a client request according to the first aspect when executing the program.
According to a third aspect of the present invention, there is provided a computer-readable storage medium storing a computer program whose instructions can be loaded and executed by a processor to perform the method of processing a client request according to the first aspect.
With this method of processing client requests, corresponding thread pools are allocated for task processing according to the type of client request, so that the task processing threads of common access requests and batch task requests are isolated. This improves the concurrent processing efficiency of tasks, reduces resource waste and human error, improves processing accuracy, and meets different business requirements.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the content of the specification, and to make the above and other objects, features, and advantages of the present invention more readily apparent, preferred embodiments are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 illustrates a block diagram of a computing device 100 according to one embodiment of the invention;
FIG. 2 illustrates a flow diagram of a method 200 of processing a client request according to one embodiment of the invention.
Detailed Description
This scheme provides a method for processing a client request in which different thread processing modes are selected according to the type of client request, improving the concurrency performance of the system, avoiding thread waste, and increasing the resource utilization rate.
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 illustrates a block diagram of a computing device 100 according to one embodiment of the invention. As shown in FIG. 1, in a basic configuration 102, a computing device 100 typically includes a system memory 106 and one or more processors 104. The memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including, but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a first-level cache 110 and a second-level cache 112, a processor core 114, and registers 116. The example processor core 114 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory, including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The physical memory of a computing device is usually volatile memory (RAM); data on disk must be loaded into physical memory before it can be read by the processor 104. The system memory 106 may include an operating system 120, one or more applications 122, and program data 124.
In some implementations, the application 122 may be arranged to execute instructions on an operating system by the one or more processors 104 using the program data 124. The operating system 120 may be, for example, linux, windows or the like, which includes program instructions for handling basic system services and performing hardware-dependent tasks. The application 122 includes program instructions for implementing various functions desired by the user, and the application 122 may be, for example, a browser, instant messaging software, a software development tool (e.g., integrated development environment IDE, compiler, etc.), or the like, but is not limited thereto. When an application 122 is installed into computing device 100, a driver module may be added to operating system 120.
When the computing device 100 starts up running, the processor 104 reads the program instructions of the operating system 120 from the memory 106 and executes them. Applications 122 run on top of operating system 120, utilizing interfaces provided by operating system 120 and underlying hardware to implement various user-desired functions. When a user launches the application 122, the application 122 is loaded into the memory 106, and the processor 104 reads and executes the program instructions of the application 122 from the memory 106.
Computing device 100 also includes storage device 132, storage device 132 including removable storage 136 and non-removable storage 138, both removable storage 136 and non-removable storage 138 being connected to storage interface bus 134.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to basic configuration 102 via bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices such as a display or speakers via one or more A/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 via one or more communication ports 164 over a network communication link.
The network communication link may be one example of a communication medium. Communication media may typically be embodied by computer-readable instructions, data structures, or program modules in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" is a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable media as used herein may include both storage media and communication media. In the computing device 100 according to the invention, the application 122 comprises instructions for performing the method 200 of processing a client request of the invention.
For a client request, if the main thread directly opens multithreading within the service, the core thread count and maximum thread count of the thread pool must be increased, wasting resources on requests that do not need parallel processing. A thread pool that is too small causes requests to queue and prolongs response times, while one that is too large wastes resources and incurs additional overhead.
To resolve the conflict between system concurrency performance and maximum resource utilization, this scheme provides a method for processing a client request in which the processing threads of common access requests and batch task requests are isolated, so that processing can be accelerated, resource waste reduced, and different business requirements met.
FIG. 2 illustrates a flow diagram of a method 200 of processing a client request according to one embodiment of the invention. As shown in fig. 2, the method 200 begins with step S210, receiving a client request.
The client may send interface access requests using GET, POST, and other methods, such as database query requests, database insert or update requests, third-party service call requests, and so on. Client requests in a personnel system may include document application tasks and approval tasks.
Step S220 is then performed to determine whether the client request is a bulk task request or a normal access request.
A common access request typically contains only a single data item: one request performs one data processing task, and when multiple data items need to be processed, multiple requests must be sent and multiple responses received separately. A batch task request contains multiple data items and executes multiple data processing tasks within a single request, so data processing efficiency can be improved through parallel processing and synchronization optimization.
The request mode therefore needs to be weighed and selected according to the specific service requirements, system performance, and user experience requirements. After receiving a request from the client, the server parses it into the corresponding request parameters and passes them to the corresponding processing module. Whether the client request is a batch task request or a common access request can be judged from the request parameters, the request frequency, the interface documentation, and so on.
For example, if the request includes multiple data items or a batch-processing identification parameter, it can be judged to be a batch task request. If the same or similar requests are sent repeatedly within a short period with the intent of processing multiple data items at once or performing a batch operation, the access can likewise be identified as batch interface access. The interface documentation or specification can also be consulted: if the interface supports batch operations or is a dedicated batch interface, the request can be directly determined to be a batch task request.
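The patent gives no code for this decision step. As a rough sketch only (the class name, parameter names, and the `batch` flag are illustrative, not from the patent), a server-side dispatcher might classify a request like this:

```java
import java.util.List;
import java.util.Map;

// Hypothetical request classifier: the patent only says classification may use
// request parameters, request frequency, or the interface specification.
public class RequestClassifier {

    // A request is treated as a batch task request if it carries an explicit
    // batch identification parameter or more than one data item; otherwise it
    // is handled as a common access request.
    public static boolean isBatchRequest(Map<String, ?> params, List<?> items) {
        if (Boolean.TRUE.equals(params.get("batch"))) {
            return true;
        }
        return items != null && items.size() > 1;
    }

    public static void main(String[] args) {
        System.out.println(isBatchRequest(Map.of("batch", true), List.of("doc1")));  // true
        System.out.println(isBatchRequest(Map.of(), List.of("doc1", "doc2")));       // true
        System.out.println(isBatchRequest(Map.of(), List.of("doc1")));               // false
    }
}
```

A real system would additionally consult request-frequency statistics and the interface documentation, as the paragraph above notes.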
Typically, when a client process connects to a server process, the server creates a thread to handle the interaction with that client. When a client disconnects, the server caches the corresponding thread; when another client connects, the cached thread can be assigned to it, reducing the frequency of thread creation and destruction.
Subsequently in step S230, a batch thread pool is allocated to the batch task request, and the batch tasks are processed in parallel by threads in the batch thread pool.
The batch thread pool is a special type of thread pool used to execute a large number of batch processing tasks concurrently. Compared with the general thread pool, the batch thread pool can allocate a group of tasks to multiple threads for concurrent execution according to certain rules or configuration, starting multiple threads simultaneously to perform the batch tasks based on the available thread resources.
The batch thread pool can divide a large number of tasks into task blocks and distribute each block to multiple threads for parallel processing, making better use of thread resources and avoiding the degradation of system performance caused by a single thread handling too many tasks.
After the tasks are completed, the processing results of each task can be organized and summarized for subsequent analysis.
In one embodiment of the invention, the tasks in each batch task request include one or more documents, each of which must go through an application processing flow. Batch processing tasks require a batch thread pool with large maximum and core thread counts and a large task queue capacity.
Specifically, it may first be determined whether the current system has a batch thread pool; if not, one is created, and if so, the current batch thread pool is used. For example, a thread pool object may be created using a function or class provided by a library, typically specifying the pool size, core thread count, maximum thread count, thread idle time, and so on.
Then, tasks are assigned to threads in the batch thread pool according to the task concurrency number. A batch semaphore may be defined according to the task concurrency number in the batch task request to control the number of threads used, and tasks are evenly distributed to the threads in the batch thread pool according to the batch semaphore. For example, if a batch job contains 50 request documents and the batch semaphore is set to 10, the batch job request occupies 10 threads.
Finally, each thread processes the tasks it is responsible for in parallel, and the returned results are obtained through the thread manager. The threads process the distributed tasks in parallel and, after all tasks are completed, return a first future object list containing all the tasks; the first future object list is traversed in a loop, and the execution result of each task is obtained from its future object.
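The batch branch described in the preceding paragraphs can be sketched in Java with `java.util.concurrent` (the patent names no language or library; the pool parameters, the semaphore value of 10, and the `"processed:"` result format are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the batch branch: a dedicated pool, a batch semaphore
// bounding task concurrency, and a first future object list traversed for results.
public class BatchPoolDemo {

    // Larger core/maximum thread counts and queue capacity than a general pool would use.
    static final ThreadPoolExecutor BATCH_POOL = new ThreadPoolExecutor(
            10, 20, 1L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(200));
    static {
        BATCH_POOL.allowCoreThreadTimeOut(true); // let idle threads exit so the JVM can terminate
    }

    // Semaphore sized to the allowed task concurrency (e.g. 10 threads for 50 documents).
    static final Semaphore BATCH_SEMAPHORE = new Semaphore(10);

    public static List<String> processBatch(List<String> documents) {
        List<Future<String>> futures = new ArrayList<>();
        for (String doc : documents) {
            futures.add(BATCH_POOL.submit(() -> {
                BATCH_SEMAPHORE.acquire();        // limit how many tasks run at once
                try {
                    return "processed:" + doc;    // stand-in for the approval/application flow
                } finally {
                    BATCH_SEMAPHORE.release();
                }
            }));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {        // traverse the future list in a loop
            try {
                results.add(f.get());             // blocks until that task completes
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(processBatch(List.of("doc1", "doc2")));
    }
}
```

Because `Future.get` is called in submission order, the result list preserves the order of the input documents even though the tasks execute in parallel.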
In step S231, a general thread pool is allocated to the general access request, and tasks are processed one by one according to the task queue by threads in the general thread pool.
The general thread pool receives single task submissions and allocates an independent thread to each task for execution. The size of the pool can be set as needed to limit the number of threads executing simultaneously, preventing the creation of too many threads and reducing the consumption of system resources.
The general thread pool is usually equipped with a task queue for storing pending tasks; when a thread in the pool completes its current task, it takes the next task from the queue, i.e., the task submitted first is executed first. In this way, thread resources are used effectively, and the frequent creation and destruction of threads caused by excessive tasks is avoided.
Tasks in a common access request can be submitted to the general thread pool and executed in order of arrival or according to a preset rule. For example, the next task to execute may be selected according to task priority or another scheduling strategy, so that thread resources are allocated reasonably and task execution efficiency is improved.
Exceptions arising during task execution can be caught and handled accordingly, maintaining the stability of the thread pool and preventing a single task's exception from crashing the whole pool. After all tasks are completed, a second future object list containing all the tasks is returned; the second future object list is traversed in a loop, and the execution result of each task is obtained from its future object.
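The general branch above can be sketched in the same style (again an illustrative assumption; the smaller pool parameters and the `"error:"` marker are not from the patent). Note how a per-task exception is captured from the future and converted into a result, so one failing task does not disturb the pool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the general branch: a deliberately smaller pool whose
// queue serves tasks first-submitted-first-executed, with per-task exception capture.
public class GeneralPoolDemo {

    // Smaller core/maximum thread counts and queue capacity than the batch pool.
    static final ThreadPoolExecutor GENERAL_POOL = new ThreadPoolExecutor(
            2, 4, 1L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(20));
    static {
        GENERAL_POOL.allowCoreThreadTimeOut(true); // let idle threads exit so the JVM can terminate
    }

    public static List<String> run(List<Callable<String>> tasks) {
        List<Future<String>> futures = new ArrayList<>();
        for (Callable<String> task : tasks) {
            futures.add(GENERAL_POOL.submit(task)); // queued FIFO: first submitted, first executed
        }
        List<String> results = new ArrayList<>();   // second future object list, traversed in a loop
        for (Future<String> f : futures) {
            try {
                results.add(f.get());
            } catch (ExecutionException e) {
                // capture the task's exception so the pool itself stays stable
                results.add("error:" + e.getCause().getMessage());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException(e);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        List<Callable<String>> tasks = new ArrayList<>();
        tasks.add(() -> "viewed application");
        tasks.add(() -> { throw new IllegalStateException("boom"); });
        System.out.println(run(tasks));
    }
}
```

This mirrors the optional feature above: the general pool's thread counts and queue capacity are each smaller than those of the batch pool.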
Finally, step S240 is executed: the task execution results of the batch thread pool and the general thread pool are returned to the client.
Task execution results are obtained through the thread manager from the first future object list of the batch thread pool and the second future object list of the general thread pool, and returned to the client.
According to one embodiment of the invention, a client simultaneously initiates a request to view an application, a batch application for 100 personnel documents, and a batch approval of 50 personnel documents. With the client request processing method provided by the invention, the first task (viewing the application) is processed by the general thread pool, while the two batch tasks are processed by the batch thread pool. This reduces the overall processing time, reduces resource usage, and improves the concurrency performance of the system.
According to the method for processing client requests provided by the invention, corresponding thread pools are allocated for task processing according to the type of client request, which improves the concurrent processing efficiency of tasks, reduces resource waste and human error, improves processing accuracy, and meets different business requirements.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or as combinations of method elements, that may be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for implementing such a method or method element forms a means for implementing the method or method element. Furthermore, an element of an apparatus embodiment described herein is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal terms "first," "second," "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given order, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (10)

1. A method for processing a client request, comprising:
receiving a client request;
determining whether the client request is a batch task request or a general access request;
allocating a batch thread pool to the batch task request, and processing the batch tasks in parallel through the threads in the batch thread pool;
allocating a general thread pool to the general access request, and processing the tasks one by one from the task queue through the threads in the general thread pool;
and returning the task execution results of the batch thread pool and the general thread pool to the client.
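The two-pool routing in claim 1 can be sketched as follows. This is a purely illustrative sketch, since the claim names no language or API; the pool sizes, the `is_batch` flag, and the use of Python's `concurrent.futures` are assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Two pools, per claim 1: a larger one for batch task requests,
# a smaller one for general access requests (sizes are illustrative).
batch_pool = ThreadPoolExecutor(max_workers=16)
general_pool = ThreadPoolExecutor(max_workers=4)

def handle_request(tasks, is_batch):
    """Route the request's tasks to the matching pool and return their results."""
    pool = batch_pool if is_batch else general_pool
    futures = [pool.submit(task) for task in tasks]  # list of future objects
    return [f.result() for f in futures]             # wait, then return to client
```

A caller would submit a list of zero-argument callables and receive their results in submission order, which matches the claim's "returning the task execution results ... to the client."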
2. The method of claim 1, wherein the maximum thread count, the core thread count, and the task queue capacity of the general thread pool are respectively less than those of the batch thread pool.
3. The method of claim 1, wherein the step of determining whether the client request is a batch task request or a general access request comprises:
determining whether the client request is a batch task request or a general access request according to the request parameters, the request frequency, or the interface documentation of the client request.
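The classification in claim 3 might look like the following heuristic. The field names (`batch`, `tasks`) and the frequency threshold are hypothetical; the claim only says the decision is based on request parameters, request frequency, or the interface documentation.

```python
def classify_request(request, recent_request_count, frequency_threshold=10):
    """Label a request as 'batch' or 'general' (heuristic sketch of claim 3)."""
    # Request parameters: an explicit batch flag, or multiple tasks in one request.
    if request.get("batch") or len(request.get("tasks", [])) > 1:
        return "batch"
    # Request frequency: a burst of recent requests is treated as a batch workload.
    if recent_request_count > frequency_threshold:
        return "batch"
    return "general"  # ordinary single-task access
```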
4. The method of claim 1, wherein the step of allocating a batch thread pool to the batch task request and processing the batch tasks in parallel through the threads in the batch thread pool comprises:
determining whether a batch thread pool currently exists; if not, creating one through the thread pool manager, and if so, using the current batch thread pool;
distributing tasks to the threads in the batch thread pool according to the task concurrency number;
processing the distributed tasks in parallel across multiple threads, and returning a first future object list containing all the tasks after all the tasks are completed;
and iterating over the first future object list, obtaining the execution result of each task from its future object.
5. The method of claim 4, wherein the step of distributing tasks to the threads according to the task concurrency number comprises:
defining a batch semaphore according to the task concurrency number in the batch task request;
and evenly distributing the tasks to the threads in the batch thread pool according to the size of the batch semaphore.
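Claims 4 and 5 together describe capping batch parallelism with a semaphore and collecting results from a future object list. A minimal sketch, assuming a `threading.Semaphore` plays the role of the "batch semaphore" and the worker-pool size of 8 is arbitrary:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def run_batch(tasks, concurrency):
    """Run tasks in a batch pool, capped by a semaphore of size `concurrency`."""
    batch_semaphore = threading.Semaphore(concurrency)  # from the batch task request

    def guarded(task):
        with batch_semaphore:  # at most `concurrency` tasks execute at once
            return task()

    with ThreadPoolExecutor(max_workers=8) as pool:
        # "first future object list" containing all submitted tasks
        first_future_list = [pool.submit(guarded, t) for t in tasks]
        # iterate over the list, fetching each task's execution result
        return [f.result() for f in first_future_list]
```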
6. The method of claim 4, wherein the step of allocating a batch thread pool to the batch task request and processing the batch tasks in parallel through the threads in the batch thread pool further comprises:
if the batch thread pool is already occupied, diverting the tasks to the general thread pool for execution.
7. The method of claim 4, wherein the step of allocating a general thread pool to the general access request and processing tasks one by one from the task queue through the threads in the general thread pool comprises:
submitting the tasks in the general access request to the general thread pool, and executing them in sequence or according to a preset rule;
returning a second future object list containing all the tasks after all the tasks are completed;
and iterating over the second future object list, obtaining the execution result of each task from its future object.
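The one-by-one execution in claim 7 can be modeled with a single-worker pool, whose internal queue gives strict submission order. This is one possible reading of "executing the tasks in sequence"; the single-worker choice is an assumption, not something the claim specifies.

```python
from concurrent.futures import ThreadPoolExecutor

def run_general(tasks):
    """Execute tasks one by one in queue order via a single-worker pool."""
    with ThreadPoolExecutor(max_workers=1) as pool:  # one worker => strict FIFO
        # "second future object list" containing all submitted tasks
        second_future_list = [pool.submit(t) for t in tasks]
        # iterate over the list, fetching each task's execution result
        return [f.result() for f in second_future_list]
```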
8. The method of claim 7, wherein the step of returning the task execution results of the batch thread pool and the general thread pool to the client comprises:
obtaining the task execution results from the first future object list of the batch thread pool and from the second future object list of the general thread pool, respectively, through the thread manager, and returning them to the client.
9. A computing device, comprising:
at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and comprise instructions for performing the method for processing a client request according to any one of claims 1-8.
10. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method for processing a client request according to any one of claims 1-8.
CN202311285819.2A 2023-10-07 2023-10-07 Client request processing method, computing device and storage medium Pending CN117311939A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311285819.2A CN117311939A (en) 2023-10-07 2023-10-07 Client request processing method, computing device and storage medium

Publications (1)

Publication Number Publication Date
CN117311939A true CN117311939A (en) 2023-12-29

Family

ID=89273377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311285819.2A Pending CN117311939A (en) 2023-10-07 2023-10-07 Client request processing method, computing device and storage medium

Country Status (1)

Country Link
CN (1) CN117311939A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118297603A (en) * 2024-06-04 2024-07-05 中国中金财富证券有限公司 Service resource statistical method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN106371894B (en) Configuration method and device and data processing server
US8914805B2 (en) Rescheduling workload in a hybrid computing environment
US8739171B2 (en) High-throughput-computing in a hybrid computing environment
CN110489213B (en) Task processing method and processing device and computer system
US7650601B2 (en) Operating system kernel-assisted, self-balanced, access-protected library framework in a run-to-completion multi-processor environment
US9875139B2 (en) Graphics processing unit controller, host system, and methods
US8627325B2 (en) Scheduling memory usage of a workload
US9378047B1 (en) Efficient communication of interrupts from kernel space to user space using event queues
JP2004171234A (en) Task allocation method in multiprocessor system, task allocation program and multiprocessor system
US7681196B2 (en) Providing optimal number of threads to applications performing multi-tasking using threads
KR20120070303A (en) Apparatus for fair scheduling of synchronization in realtime multi-core systems and method of the same
CN109840149B (en) Task scheduling method, device, equipment and storage medium
CN114579285B (en) Task running system and method and computing device
CN117311939A (en) Client request processing method, computing device and storage medium
CN111191777A (en) Neural network processor and control method thereof
KR101770191B1 (en) Resource allocation and apparatus
US20120144039A1 (en) Computing scheduling using resource lend and borrow
CN109582445A (en) Message treatment method, device, electronic equipment and computer readable storage medium
CN118210632A (en) Memory allocation method and device, electronic equipment and storage medium
US20150212859A1 (en) Graphics processing unit controller, host system, and methods
CN115951988B (en) Job scheduling method, computing equipment and storage medium
EP3783484B1 (en) Data processing method and computer device
CN116881003A (en) Resource allocation method, device, service equipment and storage medium
CN115525226A (en) Hardware batch fingerprint calculation method, device and equipment
US20070043869A1 (en) Job management system, job management method and job management program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination