CN112000455B - Multithreading task processing method and device and electronic equipment - Google Patents

Multithreading task processing method and device and electronic equipment

Info

Publication number
CN112000455B
CN112000455B (granted publication of application CN202010946007.8A)
Authority
CN
China
Prior art keywords
threshold range
thread
core
initialization
thread pool
Prior art date
Legal status
Active
Application number
CN202010946007.8A
Other languages
Chinese (zh)
Other versions
CN112000455A (en)
Inventor
王明星
包明生
杨接康
Current Assignee
Huayun Data Holding Group Co., Ltd.
Original Assignee
Huayun Data Holding Group Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huayun Data Holding Group Co ltd filed Critical Huayun Data Holding Group Co ltd
Priority to CN202010946007.8A priority Critical patent/CN112000455B/en
Publication of CN112000455A publication Critical patent/CN112000455A/en
Application granted granted Critical
Publication of CN112000455B publication Critical patent/CN112000455B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/48Indexing scheme relating to G06F9/48
    • G06F2209/485Resource constraint
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5022Workload threshold

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a multithreading task processing method and device and electronic equipment. The method comprises: setting an initialization threshold range for the number of core threads of a thread pool; filtering IO-intensive tasks from the tasks input into the thread pool; and acquiring index data of the host machine by polling to determine whether an event is triggered that adjusts the initialization threshold range of the core-thread count to obtain a current threshold range. Whether such an adjustment event is triggered is determined from the index data of the host, and the initialization threshold range is bounded by a maximum value and a minimum value of the core-thread count. This scheme improves the responsiveness of the core threads to multiple tasks, improves the response of the thread pool to IO-intensive tasks, and effectively prevents the unnecessary computational overhead caused by frequently creating or cancelling core threads.

Description

Multithreading task processing method and device and electronic equipment
Technical Field
The invention relates to the technical field of computers, in particular to a multithreading task processing method and device and electronic equipment.
Background
A thread pool is a set of threads. A large number of idle threads are created when the system starts; when a program submits a task to the thread pool, the pool assigns a thread to execute it. After execution finishes, the thread does not die but returns to the pool in an idle state, waiting for the next task. In cloud computing technology, threads comprise user threads (User thread) and kernel threads (Kernel thread); a thread pool ensures full utilization of the kernel threads, prevents them from being called excessively, and can support highly concurrent task responses.
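The reuse described above can be illustrated with a minimal Java sketch (Java's `ExecutorService` is assumed as the thread-pool implementation; it is not named in the patent): a single-thread pool serves several tasks in turn, and the same worker thread handles all of them instead of dying after each one.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadReuseDemo {
    // Runs taskCount tasks sequentially on a one-thread pool and collects
    // the name of the worker thread that ran each task.
    static Set<String> runTasks(int taskCount) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Set<String> workerNames = new HashSet<>();
        for (int i = 0; i < taskCount; i++) {
            // Each task finishes before the next is submitted; the worker
            // thread does not die, it returns to the pool idle and is reused.
            workerNames.add(pool.submit(() -> Thread.currentThread().getName()).get());
        }
        pool.shutdown();
        return workerNames;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks(3).size()); // prints 1: one thread served all tasks
    }
}
```

Because the pool has exactly one worker, the set of worker names always has size one, regardless of how many tasks are submitted.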
The thread pool in the prior art is suited only to executing CPU-intensive tasks: if the current thread count exceeds the number of core threads allowed by the CPU of the host, no additional core threads are created, and threads wait in a queue until a core thread becomes idle and can be called. In cloud computing scenarios (e.g., a cloud platform, a Web system, or an APP), however, there are large numbers of non-CPU-intensive tasks such as IO operations, read/write operations, and migration operations. When the number of core threads reaches its upper limit, a task placed in the queue is still blocked; it must wait for a core thread to finish and cannot preempt other tasks in order to continue executing. As a result, the utilization of core threads in the thread pool is low, the requirement of timely response to highly concurrent tasks cannot be met, and the stability of a software system or cloud platform may be affected.
Data or task processing is a CPU-intensive task, while receiving or transmitting data or tasks is an IO-intensive task. Because the core thread size (corePoolSize) of a thread pool is fixed, the number of core threads at initialization is fixed. At present, to improve the concurrent processing capacity for data or tasks, the usual technique is to increase the number of core threads in the thread pool. However, if too many core threads are configured, the load on the CPU and memory of the system or cloud platform becomes too great. To ensure that resources can reasonably serve the other services of the system or cloud platform, the CPU and/or memory utilization allocated to core threads generally cannot be allowed to reach 100%. In the prior art, to ensure that core threads respond effectively to tasks, different priorities are usually set for different core threads so as to satisfy the requirements of different tasks, as disclosed in Chinese patent publication CN110502320A. Meanwhile, the applicant notes that if core threads are created or deleted excessively and without order, unnecessary computational overhead is inevitable, destabilizing the system or cloud platform and harming the user experience.
In view of the above, there is a need for an improved multi-thread task processing method and apparatus in the prior art to solve the above problems.
Disclosure of Invention
The invention aims to disclose a multithreading task processing method and device and electronic equipment that overcome the defects of the prior art: in particular, to improve the responsiveness of core threads to multiple tasks, to improve the response of the thread pool to IO-intensive tasks, and to prevent the unnecessary computational overhead caused by frequently creating or cancelling core threads.
To achieve the above object, the present invention provides a multithreading task processing method, comprising:
setting an initialization threshold range for the number of core threads of the thread pool;
filtering IO intensive tasks for the tasks input into the thread pool;
acquiring index data of a host machine by polling, so as to determine whether an event is triggered that adjusts the initialization threshold range of the core-thread count of the thread pool to obtain a current threshold range;
wherein whether the event adjusting the initialization threshold range is triggered is determined based on the index data of the host, and the initialization threshold range is bounded by a maximum value and a minimum value of the number of core threads.
As a further improvement of the present invention, the initialization threshold range is determined only by the thread pool according to the index data of the host, and the current threshold range is determined only by the thread pool according to the index data of the host in the previous state of performing polling on the host.
As a further improvement of the invention, the core threads respond to externally initiated multithreaded-task access requests received by the thread pool according to a saturation policy issued by the thread pool.
As a further improvement of the invention, the index data of the host is at least defined by CPU overhead and/or disk IOPS.
As a further improvement of the present invention, after index data of the host machine is acquired by polling, it is determined whether the thread pool contains a core thread matched to respond to the IO-intensive task;
if so, not changing the initialization threshold range or the current threshold range;
if not, determining the adjustment trend of the initialization threshold range or the current threshold range based on the index data of the host, wherein the adjustment trend is to increase the number of the core threads or decrease the number of the core threads.
As a further improvement of the present invention, the multithread task processing method further includes: setting a safety threshold range for the initialization threshold range or the current threshold range, wherein the safety threshold range is larger than the minimum value of the initialization threshold range or the minimum value of the current threshold range and is smaller than the maximum value of the initialization threshold range or the maximum value of the current threshold range.
Based on the same inventive concept, the invention also discloses a multithreading task processing device, comprising:
the initialization module is used for setting an initialization threshold range for the number of core threads of the thread pool;
the task filtering module is used for filtering IO intensive tasks for the tasks input into the thread pool;
the polling module acquires index data of the host machine in a polling mode to determine whether the number of core threads of the thread pool triggers the adjustment of the initialization threshold range to obtain an event of the current threshold range;
and the threshold adjusting module is used for determining whether an event for adjusting an initialization threshold range is triggered or not based on the index data of the host, wherein the initialization threshold range is determined by the maximum value of the number of core threads and the minimum value of the number of core threads.
As a further improvement of the present invention, the initialization threshold range is determined only by the thread pool according to the index data of the host, and the current threshold range is determined only by the thread pool according to the index data of the host in the previous state of performing polling on the host.
As a further improvement of the invention, the index data of the host is at least defined by CPU overhead and/or disk IOPS.
As a further improvement of the present invention, a safety threshold range is set for the initialization threshold range or the current threshold range based on the threshold adjustment module, and the safety threshold range is greater than the minimum value of the initialization threshold range or the minimum value of the current threshold range and less than the maximum value of the initialization threshold range or the maximum value of the current threshold range.
Finally, the present invention also discloses an electronic device, comprising:
processor, memory device comprising at least one memory unit, and
a communication bus establishing a communication connection between the processor and the storage device;
the processor is used for executing one or more programs stored in the storage device to realize the multithread task processing method disclosed by any one of the invention creations.
As a further improvement of the invention, the electronic device is configured as at least a computer, a server, a data center, a virtual cluster, a portable mobile terminal, a financial payment platform, or an ERP system.
Compared with the prior art, the invention has the beneficial effects that:
firstly, IO-intensive tasks are filtered out of the many tasks input into the thread pool, and CPU-intensive or hybrid tasks are excluded, so that when the thread pool receives many tasks of different types it avoids creating unnecessary core threads and can fully utilize the core threads in their existing state to respond efficiently to IO-intensive tasks;
secondly, by determining the core threads from polled index data of the host, the core threads are effectively prevented from exceeding the upper limit of the thread pool while a cloud platform or database system configured on the host responds to IO-intensive tasks, improving the running stability of the cloud platform or database system;
finally, by introducing a safety threshold range, the core threads determined by the most recent poll of the host's index data retain relatively durable tolerance and stability; the response to IO-intensive tasks is satisfied while the unnecessary computational overhead of frequently creating or cancelling core threads is effectively prevented.
Drawings
FIG. 1 is an overall flow diagram of a multi-threaded task processing method of the present invention;
FIG. 2 is a topology diagram of connections between hosts and clients configuring a thread pool;
FIG. 3 is a schematic diagram of a logical structure of a plurality of core threads in a thread pool for a plurality of tasks issued by a task initiator located in a client;
FIG. 4 is a schematic diagram of a thread pool formed by creating a new kernel thread for the kernel thread size shown in FIG. 3;
FIG. 5 is an example of a threshold range for initialization including at least one core thread;
FIG. 6 is an example of a current threshold range and setting a safe threshold range for the current threshold range;
FIG. 7 is a detailed flow chart of a method of processing multithreading tasks according to the present invention;
FIG. 8 is a flowchart of filtering IO intensive tasks for tasks in an input thread pool;
FIG. 9 is a flowchart of a system garbage collection mechanism's lifecycle for normal core threads in a thread pool;
FIG. 10 is a flowchart of the life cycle of the system garbage collection mechanism waiting for normal kernel threads in the thread pool to exceed Keep Alive Time;
FIG. 11 is an overall topology of a multithreaded task processing device of the present invention;
FIG. 12 is a topology diagram of an electronic device of the present invention.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Before each example of the present application is explained in detail, it is necessary to explain the meaning of technical terms referred to in the present application, and the applicant indicates that the definition of the technical terms described below does not constitute the only definition or designation of the present invention, and that those skilled in the art can reasonably extend the technical meaning contained in the following description of terms and embodiments.
The term "core thread size" (CorePoolSize): limited by hardware, memory, and performance, threads cannot be created without restriction, so the maximum number of core threads allowed on each machine/system/host is a bounded value. The number of core threads managed by a Thread Pool Executor is bounded.
The term "minimum thread size" is the smallest number of core threads the thread pool is allowed to create; the term "maximum thread size" is the largest number of core threads the thread pool is allowed to create. The terms "scale", "maximum value", and "minimum value" in this application refer to the number of core threads or non-core threads.
The term "core thread count" is the number of core threads.
The term "non-core thread": a timeout period is typically set, beyond which idle non-core threads are reclaimed. The duration is usually set by Keep Alive Time, and the timeout can apply to both core and non-core threads.
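In Java's `ThreadPoolExecutor` (assumed here as the concrete pool implementation, since the patent itself names no library), the Keep Alive Time reclaims idle non-core threads by default, and an explicit flag extends the same timeout to core threads — matching the note that the timeout can apply to both kinds:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveDemo {
    static ThreadPoolExecutor newPool() {
        // 2 core threads, up to 4 threads total, 30 s Keep Alive Time for
        // idle threads beyond the core size.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        // Opt in so the timeout also reclaims idle *core* threads.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        System.out.println(pool.allowsCoreThreadTimeOut());           // prints true
        System.out.println(pool.getKeepAliveTime(TimeUnit.SECONDS));  // prints 30
        pool.shutdown();
    }
}
```

`allowCoreThreadTimeOut(true)` requires a positive keep-alive time; with it enabled, an idle pool eventually shrinks to zero threads.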
The terms "initialization threshold range" and "safety threshold range" each refer to a range of core-thread counts.
The first embodiment is as follows:
Referring to FIG. 1, the present embodiment discloses a multithreaded task processing method. It should be noted that the textual division into steps S1 to S3 is only for convenience of describing the technical solution; the order between the steps is not strictly limited.
Because the thread pool in the prior art adapts only to CPU-intensive tasks, if the current core-thread count exceeds the maximum number of core threads the CPU of the host 300 can create (i.e., the maximum value Q of the core-thread count), the number of core threads created in the thread pool 100 is limited: no core thread is added, and threads queue in the to-be-executed task queue shown in FIG. 3, waiting to be called when a core thread becomes idle. Typically, the number of core threads in the thread pool 100 is the number of CPUs in the host 300 plus two.
In this embodiment, the multithreaded task processing method includes the following steps.
First, step S1 is executed to set an initialization threshold range for the number of core threads of the thread pool 100.
The essence of the thread pool pattern is to use a relatively small number of core threads to process a relatively unlimited number of tasks: tasks to be processed are buffered in the task queue (Task Queue) shown in FIG. 3, a certain number of reusable worker threads (Worker Thread) take one or more tasks out of the queue, the tasks are executed by one or more core threads, and after a task finishes it releases its core thread, which is then allocated to other tasks in the to-be-executed task queue of the thread pool 100.
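The buffering described above can be sketched with Java's `ThreadPoolExecutor` (assumed implementation, not named in the patent): submitting more tasks than there are worker threads parks the excess in the task queue, and every task still completes as workers become free.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TaskQueueDemo {
    // Submits more tasks than worker threads; the excess waits in the
    // task queue and is executed as workers become free.
    static int runAll(int workers, int tasks) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                workers, workers, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(done::incrementAndGet);   // queued if no worker is free
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll(2, 10)); // prints 10: all queued tasks ran
    }
}
```

With two workers and ten tasks, eight tasks wait in the `LinkedBlockingQueue` at some point, yet the completed count is always ten.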
As shown in FIG. 2 and FIG. 3, the thread pool 100 runs in the host 300, and the client 200 (for example, a browser, or a computer with a browser installed, i.e., the task initiating terminal 21 in FIG. 3) establishes a communication relationship with the host 300 in a wired (network cable or optical fiber) or wireless (WIFI, ZigBee, 2G–5G protocol) manner. Through the client 200, a user initiates a request to a virtual machine (VM) installed on the host 300, or started by the host 300 through underlying virtualization technology, to execute a specific operation. The operation may be a computing transaction (i.e., a CPU-intensive task), data reception, forwarding, or query (i.e., an IO-intensive task), or a transaction that includes both (i.e., a hybrid task). The applicant notes that the "user" in this embodiment may be a user (client) of a terminal service or application in the general sense, and may also be understood as a developer, administrator, or operations engineer of a cloud platform or of electronic equipment (e.g., a computer system) running the multithreading task processing method.
The initialization threshold range is determined only by the thread pool 100 based on the index data of the host 300, and the current threshold range is determined only by the thread pool 100 based on the index data of the host 300 in the previous state of performing polling on the host 300. Specifically, the index data of the host 300 may be defined only by the CPU overhead or only by the disk IOPS, or may be defined by both the CPU overhead and the disk IOPS, or may even be defined by the IO overhead of the host 300 and the CPU overhead and the disk IOPS. The CPU overhead and the disk IOPS are both index data of the CPU and the disk of the host 300. The initialization threshold range and the current threshold range mentioned below are relative; and the number of core threads contained in the core thread size 110 (shown in fig. 11) in the thread pool 100 is also dynamic. The CPU and the disk of the host 300 according to this embodiment may be physical CPUs and disks, or virtual CPUs or virtual disks.
By way of example, as shown in FIG. 3 and FIG. 4, the initialization threshold range set for the thread pool 100 at the initialization stage includes four core threads, core thread 101 to core thread 104 (constituting the core thread size in the current state), and several non-core threads, non-core thread 105, non-core thread 106, non-core thread 107, non-core thread 108, and the like (constituting the non-core thread size in the current state). When the index data of the host 300 is polled, the sizes of the core threads and non-core threads in the thread pool 100 may change; hence, in this embodiment, the minimum thread size and maximum thread size in FIG. 3 are also adjusted according to the concurrency and volume of the IO-intensive tasks received. When the polling data obtained from the host 300 indicates that a core thread should be added to the thread pool 100, a new core thread, i.e., core thread 110 in FIG. 4, is created by the Thread Pool Executor (thread pool manager) built into the thread pool 100. Meanwhile, some non-core threads in the thread pool 100 may be destroyed, and one or more core threads may be newly created.
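Java's `ThreadPoolExecutor` supports exactly this kind of runtime growth: `setCorePoolSize` raises the core size of a live pool, which is one plausible way (assumed here, not prescribed by the patent) to realize the "newly create a core thread" step after polling:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreResizeDemo {
    // Grow the core size of a live pool, as the polling step would after
    // deciding one more core thread is needed.
    static int grow(int from, int to) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                from, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.setCorePoolSize(to);          // adjusts the core size at runtime
        int size = pool.getCorePoolSize();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) {
        System.out.println(grow(4, 5)); // prints 5
    }
}
```

`setCorePoolSize` can also shrink the core size, in which case excess idle threads are terminated — mirroring the destruction of threads mentioned above.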
Step S2: filtering IO-intensive tasks from the tasks input into the thread pool 100. The applicant notes that in this embodiment the effect is the same whether step S1 is performed before step S2 or vice versa.
For a task already input into the thread pool 100 and still executing (i.e., not yet recycled), type parameters for different task types can be configured in the thread pool 100 to distinguish CPU-intensive, IO-intensive, and hybrid tasks. Since the thread pool 100 cannot predict whether a task input from the client 200 is CPU-intensive, IO-intensive, or a hybrid including both, the execution type of the task is set before the task joins the thread pool 100.
In this embodiment, any one of the following type parameters may be added for each task entered into the thread pool 100: 1) a CPU-intensive parameter (the default setting); 2) an IO-intensive parameter; 3) a mixed-intensive parameter. One of these type parameters is set for each task as it joins the thread pool 100. After the tasks join the thread pool 100, the IO-intensive tasks can be filtered out from the tasks input into the thread pool 100 according to the type parameter set for each task.
Referring to FIG. 7, in this embodiment, filtering the IO-intensive tasks input into the thread pool 100 comprises the specific steps S21 to S24, which can be performed by the filtering module 12.
Step S21: a task in the to-be-executed task queue is about to enter the running state.
Step S22: determine from the configured type parameter whether the task is an IO-intensive task. If not, jump to step S23: the task continues in the thread pool and is assigned a core thread when one becomes idle, preventing the limited core threads in the thread pool 100 from being occupied long-term by CPU-intensive or hybrid tasks. If yes, jump to step S24: put the task into the IO-intensive task queue, and finally jump to step S3.
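Steps S22/S24 amount to partitioning tasks by their type parameter. A minimal sketch in Java — the names `TaskType` and `TypedTask` are illustrative, not taken from the patent:

```java
import java.util.List;
import java.util.stream.Collectors;

public class TaskFilterDemo {
    // Hypothetical type parameter attached to each task on entry to the pool.
    enum TaskType { CPU_INTENSIVE, IO_INTENSIVE, MIXED }

    record TypedTask(String name, TaskType type) {}

    // Keep only the IO-intensive tasks for the dedicated IO queue;
    // everything else stays on the ordinary path (step S23).
    static List<TypedTask> ioQueue(List<TypedTask> input) {
        return input.stream()
                .filter(t -> t.type() == TaskType.IO_INTENSIVE)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<TypedTask> tasks = List.of(
                new TypedTask("matrix-multiply", TaskType.CPU_INTENSIVE),
                new TypedTask("db-query", TaskType.IO_INTENSIVE),
                new TypedTask("log-scan", TaskType.IO_INTENSIVE));
        System.out.println(ioQueue(tasks).size()); // prints 2
    }
}
```

Defaulting unknown tasks to `CPU_INTENSIVE`, as the embodiment suggests, keeps untagged work off the IO fast path.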
In a cloud platform, IO-intensive tasks include database access operations, log query operations, cache query operations, and the like. If the various tasks initiated from the task initiator 21 are not distinguished and the IO-intensive tasks filtered out, core threads in the host 300 sit idle and are wasted. Therefore, in this embodiment, the tasks delivered from the client 200 are filtered, and only the IO-intensive tasks are delivered to one or more hosts 300 of the cloud platform. This prevents IO-intensive tasks from lingering in the to-be-executed task queue of FIG. 3 and lets a larger number of IO-intensive tasks execute per unit time, improving the timeliness of the hosts' 300 response to them.
Then, step S3 is executed: index data of the host 300 is acquired by polling, so as to determine whether an event is triggered that adjusts the initialization threshold range of the core-thread count of the thread pool 100 to obtain the current threshold range. Whether the adjustment event is triggered is determined from the index data of the host 300, and the initialization threshold range is bounded by the maximum and minimum values of the core-thread count. These bounds may be set by an administrator according to the configuration of the host 300; alternatively, when the host 300 begins to receive and process IO-intensive tasks, the thread pool 100 may poll the host 300 to determine its index data in the initial state and thereby accurately initialize the maximum and minimum of the threshold range, determining the specific value of N for the initialization threshold range defined by the N core threads in FIG. 5. The polling interval for acquiring the index data of the host 300 may be the timeout duration (Keep Alive Time).
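The trigger decision in step S3 reduces to comparing polled metrics against limits. A sketch — the 80% CPU and 5000-IOPS limits are made-up illustration values, not taken from the patent:

```java
public class TriggerDemo {
    // An adjustment event fires when any polled host metric crosses its limit.
    static boolean triggersAdjustment(double cpuUsage, double diskIops,
                                      double cpuLimit, double iopsLimit) {
        return cpuUsage > cpuLimit || diskIops > iopsLimit;
    }

    public static void main(String[] args) {
        System.out.println(triggersAdjustment(0.92, 1200, 0.80, 5000)); // prints true
        System.out.println(triggersAdjustment(0.40, 1200, 0.80, 5000)); // prints false
    }
}
```

In a real deployment this predicate would run inside a scheduled poll (e.g. every Keep Alive Time interval) fed by actual CPU-overhead and disk-IOPS readings from the host.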
In this embodiment, the decision to create or delete core threads in the thread pool 100 can be informed by Linux performance-monitoring tools, such as top (for checking the CPU overhead of the host) and/or vmstat (for checking the host's memory or virtual memory). The safety threshold range, initialization threshold range, and current threshold range are relative concepts; after each polling of the index data of the host 300, it is determined whether to create or delete core threads.
Referring to FIG. 5, if the maximum core-thread count of the initialization threshold range is N, subsequent creation or deletion of core threads may yield N+P or N−M core threads. The maximum number Q of core threads that can be created in the thread pool 100 of the host 300 is a fixed value. Meanwhile, the minimum value S of the core-thread count within the initialization threshold range contains at least one core thread, and S is an integer.
Referring to FIG. 6, the number of core threads at each stage of polling the host 300 and adjusting the current threshold range is not a fixed value. Meanwhile, in this embodiment, the multithreading task processing method further comprises: setting a safety threshold range for the initialization threshold range or the current threshold range, the safety threshold range being greater than the minimum value of the initialization threshold range or current threshold range and less than their maximum value.
Specifically, the current threshold range contains N' core threads, where N' may or may not equal N, and the minimum of the current threshold range is S', where S' may or may not equal S. The safety threshold range is delimited by its minimum (T core threads) and its maximum (U core threads), with T greater than S' and U less than N'. Likewise, if the maximum of the current threshold range is N', subsequent creation or deletion of core threads may raise the count to N' + P' or lower it to N' - M'. The maximum number Q of core threads that can be created in the thread pool 100 of the host 300 remains a fixed value. Note that, since the current threshold range produced by each polling operation and its corresponding safety threshold range are relative concepts, the maximum and minimum core-thread counts of the current threshold range may be the same as, or different from, those of the previous range. This scheme effectively prevents the unnecessary computational overhead caused by frequently creating or destroying core threads. Meanwhile, because the minimum of the safety threshold range is greater than the minimum of the initialization threshold range or of the current threshold range, the thread pool 100 is prevented from blindly deleting core threads, a defect that would leave the core thread scale 111 unable to adapt to IO-intensive tasks for a period of time; the stability and tolerance of the core thread scale 111 are thereby ensured.
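The safety-threshold guard described above can be sketched as a pair of predicates; the names and exact comparisons are illustrative assumptions, with T and U passed in as the safety bounds:

```java
// Illustrative guard for the safety threshold range [T, U], which lies
// strictly inside the current threshold range [S', N'] (T > S', U < N').
public class SafetyRange {

    // True if the live core-thread count sits inside the safety range.
    public static boolean withinSafety(int coreThreads, int safetyMin, int safetyMax) {
        return coreThreads >= safetyMin && coreThreads <= safetyMax;
    }

    // Only shrink the pool when doing so keeps the count at or above the
    // safety minimum, preventing churn from blind deletion of core threads.
    public static boolean mayDelete(int coreThreads, int safetyMin) {
        return coreThreads - 1 >= safetyMin;
    }
}
```

Keeping deletion gated on the safety minimum is what gives the core thread scale 111 its tolerance: a transient dip in load cannot shrink the pool below what the next burst of IO-intensive tasks would need.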
In this embodiment, after the index data of the host 300 is obtained by polling, it is judged whether a core thread matching the IO-intensive task to be responded to exists in the thread pool 100;
if so, the initialization threshold range or the current threshold range is left unchanged;
if not, the adjustment trend of the initialization threshold range or the current threshold range (i.e., the left "decrease" arrow or the right "increase" arrow in fig. 5 or fig. 6) is determined based on the index data of the host 300, the trend being either to increase or to decrease the number of core threads.
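A minimal sketch of this three-way decision, with an assumed CPU-overhead cutoff standing in for the full host index data (the cutoff value is an illustrative assumption, not from the patent):

```java
// Illustrative decision rule for the adjustment trend of the threshold range.
public class AdjustTrend {
    public enum Trend { INCREASE, DECREASE, UNCHANGED }

    // If a matching core thread is free, the range is left alone; otherwise
    // grow when the host still has CPU headroom and shrink when it is
    // saturated. The 0.8 threshold is a placeholder, not the patent's value.
    public static Trend decide(boolean matchingThreadAvailable, double cpuOverhead) {
        if (matchingThreadAvailable) {
            return Trend.UNCHANGED;
        }
        return cpuOverhead < 0.8 ? Trend.INCREASE : Trend.DECREASE;
    }
}
```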
Referring to fig. 7, the core threads respond to access requests of externally initiated multithreaded tasks received by the thread pool 100 using a saturation policy issued by the thread pool 100. The access request is initiated by the user at the task initiator 21. The saturation policy and the multithreading task processing method are described in detail below.
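In a JDK thread pool, a saturation policy corresponds to a `RejectedExecutionHandler`; the sketch below uses the standard `CallerRunsPolicy` as one possible saturation strategy (the patent does not mandate which policy the thread pool issues):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SaturationDemo {
    // A pool whose saturation (rejection) policy runs the overflowing task
    // on the caller's thread instead of dropping it, throttling submitters.
    public static ThreadPoolExecutor newPool(int core, int max, int queueCap) {
        return new ThreadPoolExecutor(
                core, max,
                60L, TimeUnit.SECONDS,                     // Keep Alive Time
                new ArrayBlockingQueue<>(queueCap),        // task queue to be executed
                new ThreadPoolExecutor.CallerRunsPolicy()  // saturation policy
        );
    }
}
```

Other built-in policies (AbortPolicy, DiscardPolicy, DiscardOldestPolicy) trade throughput for different failure behavior; the choice is orthogonal to the threshold-range adjustment described in this embodiment.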
Start.
Step 401: acquire the index data of the host machine.
Step 402: set an initialization threshold range for the number of core threads.
Step 403: filter out the IO-intensive tasks.
Step 404: determine whether the core-thread count is full. The determination in step 404 may be performed by the thread pool 100 or by the host 300. This step establishes whether all core threads permitted by the initialization threshold range are already occupied processing IO-intensive tasks. If yes, jump to step 406; if not, a new core thread can still be created within the core thread scale 111 to process the IO-intensive task, so jump to step 405: create a core thread and execute the task.
Step 406: determine whether an event adjusting the initialization threshold range is triggered. The determination in step 406 may be performed by the thread pool 100 or by the host 300. This step establishes whether the core threads contained in the core thread scale 111 in the current state meet the requirements of the IO-intensive tasks currently being processed. If yes, jump to step 407; if not, jump to step 405 (see above).
Step 407: determine whether the core-thread count has reached the safety threshold range; if yes, jump to step 408; if not, jump to step 405 (see above). The operation in step 407 may be performed by the thread pool 100, or by the threshold adjustment module 14 of fig. 11. Note that a step of setting a safety threshold range for the initialization threshold range or the current threshold range is implied between step 406 and step 407; both setting operations are executed by the threshold adjustment module 14, so that after the index data of the host 300 is obtained by the next round of polling, it can be determined whether a safety threshold range needs to be set for the initialization threshold range and the current threshold range.
Step 408: determine whether the task queue to be executed is fully occupied; if yes, jump to step 409; if not, jump to step 405 (see above).
Step 409: release part of the core threads by using the system garbage-collection mechanism; for example, the release operation may be performed by the JVM.
Conditions under which core threads in the thread pool 100 are reclaimed: when the number of threads in the thread pool 100 is greater than the core-thread count, and no new task (i.e., IO-intensive task) is being submitted, the non-core threads beyond the core threads are not destroyed immediately; they wait until the waiting time exceeds the Keep Alive Time, at which point they are reclaimed or destroyed (i.e., they "die"). The Keep Alive Time is one of the attributes of the thread pool 100, and the reclamation policy for threads can be implemented based on it; for example, when the Keep Alive Time is 0, an idle thread is reclaimed immediately, saving resources.
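This reclamation rule matches the behavior of the JDK's `ThreadPoolExecutor`, sketched below; note that the JDK requires a positive keepAliveTime when core threads are allowed to time out, so the "Keep Alive Time of 0" case in the text would be approximated there with a very small positive value:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ReclaimDemo {
    // Threads beyond corePoolSize die after keepAliveTime of idleness;
    // allowCoreThreadTimeOut(true) extends the same rule to core threads,
    // so an idle pool can shrink all the way down instead of pinning cores.
    public static ThreadPoolExecutor newReclaimingPool(int core, int max,
                                                       long keepAliveSeconds) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, keepAliveSeconds, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // requires keepAliveTime > 0
        return pool;
    }
}
```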
Fig. 9 shows the normal execution flow of a core thread, and fig. 10 shows the execution flow of thread reclamation. "Death" in figs. 9 and 10 refers to the destruction or reclamation of a core thread.
As shown in fig. 9, a newly created core thread enters the ready state, indicating that it is waiting to be scheduled by the CPU (processor). After obtaining processor resources, the core thread enters the running state and begins to run the IO-intensive task. When the IO-intensive task is completed, the core thread dies. While in the running state, if the core thread loses its processor resources, it returns to the ready state and waits to reacquire them. A thread in the running state may also temporarily yield the CPU for some reason, stop executing, and enter the blocked state; a core thread in the blocked state must be reactivated into the ready state before the CPU can schedule it into the running state again.
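This lifecycle corresponds closely to `java.lang.Thread.State`; a minimal sketch walking one thread from creation to death (the blocked state of the text would map to BLOCKED/WAITING, not exercised here):

```java
public class LifecycleDemo {
    // Walks a thread through the fig. 9 path: NEW (created), RUNNABLE
    // (ready/running once the CPU schedules it), TERMINATED (dead).
    public static Thread.State finalStateOf(Runnable task) {
        Thread t = new Thread(task);   // created: state is NEW
        t.start();                     // enters the ready state, awaiting the CPU
        try {
            t.join();                  // task completes and the thread "dies"
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return t.getState();           // TERMINATED once the thread has died
    }
}
```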
As shown in fig. 10, during reclamation, when one or more core threads enter the ready state, if the number of threads (core and non-core) in the thread pool 100 in the current state is greater than the core-thread count, and the waiting time of a thread exceeds the Keep Alive Time, the thread in the ready state is killed directly, and the core thread ultimately dies.
End.
The multithreading task processing method disclosed by this embodiment improves the core threads' responsiveness to multiple tasks, improves the responsiveness of the thread pool 100 to IO-intensive tasks, effectively prevents the unnecessary computational overhead caused by frequently creating or destroying core threads, and improves the running stability of a cloud platform or database system. Meanwhile, since no priority needs to be set for issued tasks, the technical scheme disclosed by this embodiment avoids the problem of uneven task distribution when assigning core threads to IO-intensive tasks.
Embodiment two:
Based on the technical solution disclosed in the first embodiment, this embodiment further discloses a multithreading task processing apparatus. The apparatus responds to the client 200 and is configured to implement and execute the multithreading task processing method disclosed in the first embodiment.
Referring to figs. 2 to 4 and fig. 11 in combination, in this embodiment the multithreading task processing apparatus includes: an initialization module 11, which sets an initialization threshold range for the number of core threads of the thread pool 100; a task filtering module 12, which filters IO-intensive tasks from the tasks input into the thread pool 100; and a polling module 13, which acquires the index data of the host 300 by polling to determine whether the core-thread count of the thread pool 100 triggers adjustment of the initialization threshold range to obtain the current threshold range. The thread pool 100 may run in computer memory, in a JVM (Java virtual machine), or in the host 300. A threshold adjustment module 14 determines, based on the index data of the host 300, whether an event adjusting the initialization threshold range is triggered, the initialization threshold range being delimited by the maximum and minimum core-thread counts.
After filtering, the task filtering module 12 notifies the initialization module 11 of the specification of the IO-intensive tasks, so that the initialization module 11 configures for the thread pool 100 the number of core threads contained in the core thread scale 111 in the initial or current state. "Specification" here refers to the number of IO-intensive tasks.
The polling module 13 acquires the index data of the host 300 by polling and then judges whether a core thread matching the IO-intensive task to be responded to exists in the thread pool 100; if so, the initialization threshold range or the current threshold range is left unchanged; if not, the adjustment trend of the initialization threshold range or the current threshold range is determined based on the index data of the host 300, the trend being either to increase or to decrease the number of core threads. The polling module 13 initiates a polling operation toward the host 300; the host 300 returns the polling data to the polling module 13, which forwards it to the threshold adjustment module 14, so that the threshold adjustment module 14 adjusts the initialization threshold range to obtain the current threshold range, or sets a safety threshold range for the initialization threshold range and the current threshold range.
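A minimal sketch of this data flow between the polling module 13 and the threshold adjustment module 14; the interfaces and signatures are illustrative assumptions, not the patent's API:

```java
public class DeviceSketch {
    // Module names follow the patent, but the shapes are illustrative only.
    public interface PollingModule {              // polling module 13
        double pollCpuOverhead();                 // one polled host index datum
    }
    public interface ThresholdAdjustmentModule {  // threshold adjustment module 14
        // Takes polled host index data, returns {min, max} of the
        // (possibly unchanged) current threshold range.
        int[] adjust(double cpuOverhead);
    }

    // The polling module returns host index data, which is handed to the
    // threshold adjustment module, mirroring the data flow described above.
    public static int[] pollAndAdjust(PollingModule p, ThresholdAdjustmentModule t) {
        return t.adjust(p.pollCpuOverhead());
    }
}
```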
Preferably, in this embodiment the multithreading task processing apparatus may further be configured with the threshold adjustment module 14, which determines, based on the index data of the host 300, whether an event adjusting the initialization threshold range is triggered, the range being delimited by the maximum and minimum core-thread counts; this part of the technical solution corresponds to the description in the first embodiment.
The initialization threshold range is determined only by the thread pool 100 according to the index data of the host 300, and the current threshold range is determined only by the thread pool 100 according to the index data of the host 300 in the state preceding the polling operation. The index data of the host 300 is defined by at least the CPU overhead and/or the disk IOPS. The threshold adjustment module 14 sets a safety threshold range for the initialization threshold range or the current threshold range, the safety threshold range being greater than the minimum of the initialization threshold range or of the current threshold range and less than their maximum; this part of the technical solution corresponds to the description in the first embodiment.
The initialization module 11, task filtering module 12, polling module 13, and threshold adjustment module 14 may be deployed in the host 300 in the form of executable programs, or may be created in one or more virtual machines (not shown) in the host 300. If the core thread scale 111 in the thread pool 100 is full, the IO-intensive task is placed in the task queue to be executed; when the core threads required by IO-intensive tasks exceed the largest number of core threads the host 300 may create (i.e., the maximum Q of the core-thread count in fig. 5 or fig. 6), Reject Execution is entered.
For the parts of the multithreading task processing apparatus disclosed in this embodiment that are identical to the first embodiment, please refer to the description of the first embodiment; they are not repeated herein.
Embodiment three:
As described in conjunction with fig. 12, the present application further discloses an electronic device 500 comprising: a processor 51, a storage device 52 consisting of at least one memory unit, and a communication bus 53 establishing a communication connection between the processor 51 and the storage device 52. The processor 51 executes one or more programs stored in the storage device 52 to implement the multithreading task processing method of the first embodiment.
Specifically, the storage device 52 may be composed of memory units 521 through 52j, where the parameter j is a positive integer greater than or equal to 1. The processor 51 may be an ASIC, FPGA, CPU, MCU, or other physical hardware or virtual device with instruction-processing functions. The form of the communication bus 53 is not particularly limited: it may be an I2C bus, SPI bus, SCI bus, PCI-E bus, ISA bus, etc., and may be chosen reasonably according to the specific type and application scenario requirements of the electronic device 500. The communication bus 53 is not the point of the invention of the present application and is not elaborated herein.
The storage device 52 may be based on a distributed file system such as Ceph or GlusterFS, may be a RAID 0-7 disk array, or may be configured as one or more hard disks or removable storage devices, a database server, an SSD (solid-state disk), a NAS storage system, or a SAN storage system. Specifically, in this embodiment the electronic device 500 may be configured as a hyper-converged appliance, a computer, a server, a data center, a virtual cluster, a portable mobile terminal, a Web system, a financial payment platform or ERP system, a virtual online payment platform/system, and so on. The hyper-converged appliance is a high-performance multi-node server that mainly adopts distributed storage and server virtualization technology, highly integrating computing nodes, storage resources, and network switching into a 1U, 2U, or 4U server, providing hyper-converged infrastructure for enterprises or end users so as to comprehensively improve their IT (information technology) capability.
In particular, the electronic device 500 disclosed in this embodiment may be based on the multithreading task processing method of the first embodiment, or may include one or more multithreading task processing apparatuses of the second embodiment, and can reliably respond to one task or multiple parallel tasks corresponding to an access request or operation initiated by a user at the client 200. In scenarios with very strict requirements on real-time performance and security, such as the online payment system of a shopping website, the settlement system of a financial institution, or an electronic ticketing system, the electronic device 500 has extremely important technical application value.
The electronic device 500 disclosed in this embodiment may be understood as a physical device (e.g., a POS machine or ATM), a software system (a financial or ERP system), or an Internet online application (APP software) running the multithreading task processing method disclosed herein, or even as two or more computer systems/data centers interconnected by optical fiber or network cable in a direct-connection, tree, or star topology. For the parts of the electronic device 500 identical to the first and/or second embodiment, please refer to those descriptions; they are not repeated herein.
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (12)

1. A method for multi-threaded task processing, comprising:
setting an initialization threshold range for the number of core threads of the thread pool;
filtering IO intensive tasks for the tasks input into the thread pool;
acquiring index data of a host machine in a polling manner, and determining whether to create or delete a core thread, so as to determine whether the number of core threads of the thread pool triggers adjustment of an initialization threshold range to obtain a current threshold range;
wherein whether to trigger the event of adjusting the initialization threshold range is determined based on the index data of the host, and the initialization threshold range is determined by the maximum value of the number of core threads and the minimum value of the number of core threads.
2. The method of claim 1, wherein the initialization threshold range is determined only by the thread pool based on index data of the host, and the current threshold range is determined only by the thread pool based on index data of the host in a previous state in which polling is performed on the host.
3. A method of multithreaded task processing as in claim 1 wherein the core threads respond to externally initiated access requests for multithreaded tasks received by the thread pool using a saturation policy, the saturation policy issued by the thread pool.
4. A method of multithreaded task processing as in any of claims 1-3, wherein the index data of the host is defined by at least CPU overhead and/or disk IOPS.
5. The multithreading task processing method according to claim 4, wherein after the index data of the host is acquired in a polling manner, it is judged whether a core thread matching the IO-intensive task to be responded to exists in the thread pool;
if so, not changing the initialization threshold range or the current threshold range;
if not, determining the adjustment trend of the initialization threshold range or the current threshold range based on the index data of the host, wherein the adjustment trend is to increase the number of the core threads or decrease the number of the core threads.
6. The multithread task processing method of claim 4, further comprising: setting a safety threshold range for the initialization threshold range or the current threshold range, wherein the safety threshold range is larger than the minimum value of the initialization threshold range or the minimum value of the current threshold range and is smaller than the maximum value of the initialization threshold range or the maximum value of the current threshold range.
7. A multithread task processing apparatus comprising:
the initialization module is used for setting an initialization threshold range for the number of core threads of the thread pool;
the task filtering module is used for filtering IO intensive tasks for the tasks input into the thread pool;
the polling module acquires index data of the host machine in a polling mode, and determines whether to create or delete a core thread so as to determine whether the number of the core threads of the thread pool triggers the adjustment of an initialization threshold range to obtain an event of a current threshold range;
and the threshold adjusting module is used for determining whether an event for adjusting an initialization threshold range is triggered or not based on the index data of the host, wherein the initialization threshold range is determined by the maximum value of the number of core threads and the minimum value of the number of core threads.
8. The multithreaded task processing apparatus of claim 7, wherein the initialization threshold range is determined only by the thread pool based on the index data of the hosts, and wherein the current threshold range is determined only by the thread pool based on the index data of the hosts in a previous state in which polling was performed on the hosts.
9. A multi-threaded task processing device as claimed in claim 7, wherein the index data of the host is defined by at least CPU overhead and/or disk IOPS.
10. The multithreaded task processing device of claim 7, wherein a safe threshold range is set for the initialization threshold range or the current threshold range based on the threshold adjustment module, the safe threshold range being greater than a minimum value of the initialization threshold range or a minimum value of the current threshold range and less than a maximum value of the initialization threshold range or a maximum value of the current threshold range.
11. An electronic device, comprising:
processor, memory device comprising at least one memory unit, and
a communication bus establishing a communication connection between the processor and the storage device;
the processor is configured to execute one or more programs stored in the storage device to implement the method of multi-threaded task processing according to any one of claims 1 to 6.
12. The electronic device of claim 11, wherein the electronic device is configured as at least a computer, a server, a data center, a virtual cluster, a portable mobile terminal, a financial payment platform, or an ERP system.
CN202010946007.8A 2020-09-10 2020-09-10 Multithreading task processing method and device and electronic equipment Active CN112000455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010946007.8A CN112000455B (en) 2020-09-10 2020-09-10 Multithreading task processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112000455A CN112000455A (en) 2020-11-27
CN112000455B true CN112000455B (en) 2022-02-01

Family

ID=73468548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010946007.8A Active CN112000455B (en) 2020-09-10 2020-09-10 Multithreading task processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112000455B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112817719A (en) * 2021-01-28 2021-05-18 平安普惠企业管理有限公司 Method, device and equipment for adjusting parameters of thread pool and readable storage medium
CN113010286A (en) * 2021-03-12 2021-06-22 京东数字科技控股股份有限公司 Parallel task scheduling method and device, computer equipment and storage medium
CN113051051B (en) * 2021-03-12 2024-02-27 北京百度网讯科技有限公司 Scheduling method, device, equipment and storage medium of video equipment
CN113064620A (en) * 2021-04-02 2021-07-02 北京天空卫士网络安全技术有限公司 Method and device for processing system data
CN113553152A (en) * 2021-07-20 2021-10-26 中国工商银行股份有限公司 Job scheduling method and device
CN113590285A (en) * 2021-07-23 2021-11-02 上海万物新生环保科技集团有限公司 Method, system and equipment for dynamically setting thread pool parameters
CN114422498A (en) * 2021-12-14 2022-04-29 杭州安恒信息技术股份有限公司 Big data real-time processing method and system, computer equipment and storage medium
CN115878664B (en) * 2022-11-24 2023-07-18 灵犀科技有限公司 Real-time query matching method and system for massive input data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461845A (en) * 2014-11-17 2015-03-25 中国航天科工集团第二研究院七〇六所 Self-adaption method of thread pool of log collection system
CN111488255A (en) * 2020-03-27 2020-08-04 深圳壹账通智能科技有限公司 Multithreading concurrent monitoring method, device, equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9547521B2 (en) * 2014-09-25 2017-01-17 Oracle International Corporation System and method for supporting dynamic thread pool sizing in a distributed data grid
US10073718B2 (en) * 2016-01-15 2018-09-11 Intel Corporation Systems, methods and devices for determining work placement on processor cores
US10445141B2 (en) * 2016-08-18 2019-10-15 Honeywell International Inc. System and method supporting single software code base using actor/director model separation
CN108681481B (en) * 2018-03-13 2021-10-15 创新先进技术有限公司 Service request processing method and device
CN110837401A (en) * 2018-08-16 2020-02-25 苏宁易购集团股份有限公司 Hierarchical processing method and device for java thread pool
CN109710400A (en) * 2018-12-17 2019-05-03 平安普惠企业管理有限公司 The method and device of thread resources grouping
CN111142943A (en) * 2019-12-27 2020-05-12 中国银行股份有限公司 Automatic control concurrency method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461845A (en) * 2014-11-17 2015-03-25 中国航天科工集团第二研究院七〇六所 Self-adaption method of thread pool of log collection system
CN111488255A (en) * 2020-03-27 2020-08-04 深圳壹账通智能科技有限公司 Multithreading concurrent monitoring method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Simultaneous and Speculative Thread Migration for Improving Energy Efficiency of Heterogeneous Core Architectures";Changmin Lee;《IEEE Transaction on Computers》;20171107;第67卷(第99期);第498-512页 *
"一种面向片上众核处理器的虚拟核资源分配算法";沈阳;《华南理工大学学报》;20180115;第46卷(第01期);第112-121页 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant