CN117850995A - Coroutine scheduling method, coroutine scheduling device and storage medium - Google Patents

Coroutine scheduling method, coroutine scheduling device and storage medium

Info

Publication number
CN117850995A
CN117850995A (Application CN202311704043.3A)
Authority
CN
China
Prior art keywords
coroutine
task
waiting
cooperative
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311704043.3A
Other languages
Chinese (zh)
Inventor
孙继枫
李跃森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd
Priority to CN202311704043.3A
Publication of CN117850995A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a coroutine scheduling method, a coroutine scheduling device and a storage medium. According to the coroutine scheduling method disclosed by the application, a processing model based on coroutine scheduling encapsulates each client request as a coroutine task. When a slow SQL statement that requires a long wait is executed, the corresponding coroutine yields the CPU so that the worker thread can continue scheduling and executing other coroutine tasks. This prevents a large number of slow SQL statements from occupying the worker threads for a long time and leaving subsequent requests unanswered, and thus improves the throughput of the MySQL database server. The application also provides a coroutine scheduling device and a storage medium.

Description

Coroutine scheduling method, coroutine scheduling device and storage medium
Technical Field
The present disclosure relates to the field of computing technologies, and in particular, to a coroutine scheduling method, apparatus, and storage medium.
Background
Currently, MySQL databases default to a per-connection, per-thread request processing model: for each database connection, the MySQL server creates an independent service thread to handle it, and the thread is not destroyed until the request finishes and the connection is closed. In a high-concurrency scenario, this processing model causes kernel threads to be created frequently; at the same time, a large number of threads forces the operating system to switch between them frequently, and the switching between user mode and kernel mode together with the switching of a large amount of context information reduces event scheduling efficiency, increases resource contention, and degrades service performance to some extent.
Techniques such as thread pools can avoid frequently creating and destroying kernel threads: a fixed number of worker threads is maintained, client connection requests are queued, and the thread pool schedules the worker threads to process and respond to them. However, in a high-concurrency scenario the thread-pool processing model still has problems. If a large number of time-consuming slow SQL statements need to be scheduled, all worker threads in the pool may be exhausted processing the slow SQL; other short, fast SQL requests waiting in the queue are then blocked behind the slow SQL and cannot be answered in time, which causes response delay and reduces the throughput of the MySQL server.
Therefore, the long-tail latency caused by slow database query requests affecting normal requests in high-concurrency and slow-SQL-heavy scenarios is a technical problem that urgently needs to be solved.
Disclosure of Invention
To address the above technical problems, embodiments of the present application provide a coroutine scheduling method, a coroutine scheduling device and a storage medium, which are used to improve the throughput of a database server.
In a first aspect, a coroutine scheduling method provided in an embodiment of the present application includes:
worker threads in each scheduling group receive query request tasks from clients, and cyclically pull coroutine tasks from the coroutine task queue and execute them;
after pulling a coroutine task from the task queue, the worker thread performs a stack-switch operation, switches to the coroutine context, and executes the coroutine task;
when the worker thread executes a coroutine task to process a database query statement, if an event requiring blocking waiting is encountered, the coroutine task automatically yields the central processing unit (CPU), and the worker thread continues to try to pull the next coroutine task from the task queue;
after the coroutine task registers the awaited event for monitoring and yields the CPU, it waits to be woken up by another coroutine; when the blocking event or resource it is waiting for becomes ready, the coroutine task is woken up again.
Preferably, the step in which the worker threads in each scheduling group receive the query request task of the client, and cyclically pull coroutine tasks from the coroutine task queue and execute them, includes:
if the task queue of a single scheduling group is empty but the task queues of other scheduling groups are not empty, stealing a coroutine task from the task queue of another scheduling group through a task-stealing mechanism and executing it.
The step in which the worker threads in each scheduling group receive the query request task of the client includes:
the query request of the client is sent to the coroutine scheduler through a socket connection, and the scheduler encapsulates the query request into a coroutine and distributes it to each scheduling group;
the scheduling group includes database worker threads created in advance.
Preferably, the step in which the worker thread performs a stack-switch operation after pulling a coroutine task from the task queue, switches to the coroutine context, and executes the coroutine task includes:
saving the register information in the current CPU context to the coroutine stack to be swapped out;
loading the saved register context information related to coroutine execution into the CPU registers from the coroutine context stored in the target coroutine stack to be swapped in, so as to complete the context switch.
Further, the method may further include: when the swapped-out coroutine needs to be restored, reading the running state of the original coroutine from the information saved in its coroutine stack and restoring it.
Preferably, the step in which, when the worker thread executes a coroutine task to process a database query statement, the coroutine task automatically yields the CPU upon encountering an event requiring blocking waiting, and the worker thread continues to try to pull the next coroutine task from the task queue, includes:
when the coroutine task yields the CPU, pulling the next coroutine task to be executed, and saving the current context into the source coroutine stack;
loading the target coroutine context into the registers to complete the coroutine switch;
if no coroutine task is available in the task queue, switching to the main coroutine to run;
the event requiring blocking waiting includes one or a combination of the following: input/output (IO) operations, lock contention, and processing an execution plan.
Preferably, if the event requiring blocking waiting is an IO operation, the awaited IO event is registered with the event-polling mechanism epoll when the coroutine task yields, together with a corresponding callback function; epoll monitors all registered events, and when a ready event occurs, the corresponding callback function is triggered and executed, the coroutine task waiting for that event is resubmitted to the scheduler, and the resubmitted task is distributed to the scheduling groups.
Preferably, if the event requiring blocking waiting is lock contention, the coroutine task yields the CPU and is added to the waiting queue corresponding to the contended resource through a lock and condition-variable object maintained in user mode;
when another coroutine releases the resource, a coroutine task is taken from the waiting queue corresponding to the resource and resubmitted to the scheduler.
Preferably, if the event requiring blocking waiting is the processing of an execution plan, the corresponding coroutine task is saved when the database executor is initialized and the CPU is then yielded; when the executor completes the execution plan and returns a result, the coroutine task is resubmitted to the scheduler.
In a second aspect, an embodiment of the present application further provides a coroutine scheduling apparatus, including: a memory, a processor, and a user interface;
the memory is used for storing a computer program;
the user interface is used for realizing interaction with a user;
the processor is used for reading the computer program in the memory, and when the processor executes the computer program, the coroutine scheduling method provided by the present application is implemented.
According to the coroutine scheduling method, the processing model based on coroutine scheduling encapsulates each client request as a coroutine task. When a slow SQL statement that requires a long wait is executed, the corresponding coroutine yields the CPU so that the worker thread can continue scheduling and executing other coroutine tasks, which prevents a large number of slow SQL statements from occupying the worker threads for a long time and leaving subsequent requests unanswered, and thus improves the throughput of the MySQL database server.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of coroutine switching provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a coroutine scheduling system architecture according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a coroutine scheduling method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a coroutine scheduling device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Some words appearing hereinafter are explained:
1. in the embodiments of the present application, the term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
2. The term "plurality" in the embodiments of the present application means two or more, and other adjectives are similar thereto.
3. Coroutine: a lightweight scheduling unit implemented in user mode; it is a function whose execution can be suspended and resumed, and multiple coroutines can multiplex the same system thread.
4. MySQL: a relational database management system.
5. SQL: Structured Query Language.
6. epoll: event polling, an I/O event-notification mechanism.
7. Slow SQL: a slow query, i.e., an SQL statement whose running time exceeds a preset length of time; here it refers to an SQL statement that the client sends to the database. The preset length of time is determined in advance according to requirements.
8. Blocking statement: in the present application, a blocking statement is not an SQL statement sent by the client to the database, but an instruction executed by the coroutine itself, such as system IO or locking. It is defined with respect to the coroutine, not with respect to an SQL statement processed by the database itself.
9. Non-blocking statement: the counterpart of the "blocking statement" above, i.e., non-blocking instructions executed by the coroutine, such as user-mode code logic.
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that, the display sequence of the embodiments of the present application only represents the sequence of the embodiments, and does not represent the advantages or disadvantages of the technical solutions provided by the embodiments.
To address the problems in the prior art, the present application provides a coroutine scheduling method, namely a MySQL database connection query processing method based on coroutine scheduling. As shown in fig. 2, the coroutine scheduling system includes a coroutine scheduler with a dedicated listening thread that only monitors ready events and wakes up coroutines; the coroutine scheduler contains a plurality of scheduling groups, each scheduling group having a task queue for storing coroutine tasks that encapsulate user connections and query requests, and a worker thread for executing the coroutine tasks.
According to the coroutine scheduling method, after a client connects to the coroutine scheduler, the client's query request is encapsulated as a coroutine task and submitted to the coroutine scheduler. The coroutine scheduler distributes coroutine tasks to the coroutine task queues of the scheduling groups, for example through a load-balancing algorithm. The worker thread of each scheduling group polls its task queue and pulls coroutine tasks; when the local task queue is empty, it attempts to steal tasks from the task queues of other scheduling groups through a task-stealing mechanism and executes the corresponding coroutine if a task is stolen. If no coroutine task can be stolen, it switches back to the thread stack and continues polling for tasks. When a coroutine task is about to be executed, a stack-switch operation saves the coroutine context of the source coroutine into its coroutine stack and loads the coroutine context of the target coroutine into the CPU to complete the coroutine switch. For a worker thread, its thread context is stored in a unique main coroutine stack within the thread. If a blocking statement is executed while running a coroutine task, the coroutine automatically yields the CPU and switches to the next coroutine; if there is no coroutine to switch to, it switches to the thread stack. After a coroutine yields, it must wait to be woken up: once the event or resource the coroutine is waiting for becomes ready, the coroutine is resubmitted to the scheduler and waits to be scheduled and executed by a worker thread.
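By way of illustration only, the following C++ sketch shows one possible shape of the scheduler, scheduling groups and task queues described above. All names (CoroutineTask, SchedulingGroup, CoroutineScheduler, Submit) are hypothetical and are not taken from the patent or from MySQL source code; simple round-robin dispatch stands in for the load-balancing algorithm.

```cpp
#include <atomic>
#include <cstddef>
#include <deque>
#include <functional>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// A coroutine task encapsulates one client connection and its query request.
struct CoroutineTask {
    int client_fd = -1;             // socket of the client connection
    std::function<void()> body;     // function that processes the query
};

// One scheduling group: a task queue plus the worker thread that drains it.
struct SchedulingGroup {
    std::mutex mu;                                     // protects the queue
    std::deque<std::shared_ptr<CoroutineTask>> queue;  // coroutine task queue
    std::thread worker;                                // worker thread of the group
};

// The coroutine scheduler owns the scheduling groups and distributes tasks.
class CoroutineScheduler {
public:
    explicit CoroutineScheduler(std::size_t group_count) : groups_(group_count) {}

    // Encapsulated client requests are dispatched to a group; round-robin
    // stands in for the load-balancing algorithm here.
    void Submit(std::shared_ptr<CoroutineTask> task) {
        SchedulingGroup& g = groups_[next_++ % groups_.size()];
        std::lock_guard<std::mutex> lk(g.mu);
        g.queue.push_back(std::move(task));
    }

    std::vector<SchedulingGroup>& groups() { return groups_; }

private:
    std::vector<SchedulingGroup> groups_;
    std::atomic<std::size_t> next_{0};
};
```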
In the coroutine scheduling method, the coroutines are stackful coroutines: a block of memory is explicitly allocated in the process heap to serve as the coroutine stack, which stores the CPU context required for coroutine execution, mainly register information and coroutine-local variables. Besides the coroutine stack, a coroutine also carries task-related information, such as the function to be executed, which records the client request to be processed so that the client's query request can be correctly encapsulated.
In the coroutine scheduling method of the present application, coroutine switching works as shown in fig. 1. When a coroutine switch is performed, the contents of the currently executing coroutine's stack (coroutine stack a) are already loaded in the CPU registers; the register information is first saved to the corresponding coroutine stack, recording the coroutine's information and state, and then the context information stored in the coroutine stack to be executed next (coroutine stack b) is read and loaded into the CPU registers to complete the coroutine switch.
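As an illustration of the stackful design and the register save/load step, the sketch below uses the POSIX ucontext API (getcontext/makecontext/swapcontext), which performs exactly this kind of user-mode register save and restore; the actual implementation in the patent may instead use hand-written assembly. The Coroutine struct and the entry function are hypothetical.

```cpp
#include <ucontext.h>
#include <cstddef>
#include <cstdlib>

// A stackful coroutine: a heap-allocated stack plus the saved CPU context
// (register information, stack pointer) needed to resume it.
struct Coroutine {
    ucontext_t ctx{};           // saved register context of the coroutine
    char* stack = nullptr;      // coroutine stack carved out of the process heap
    std::size_t stack_size = 0;
};

static thread_local ucontext_t g_main_ctx;  // main coroutine (thread context)

static void coroutine_entry() {
    // ... run the encapsulated client request here ...
    // Returning falls through to uc_link, i.e. back to the main coroutine.
}

// Allocate the coroutine stack in the process heap and prepare its context.
Coroutine* create_coroutine(std::size_t stack_size = 128 * 1024) {
    Coroutine* co = new Coroutine;
    co->stack = static_cast<char*>(std::malloc(stack_size));
    co->stack_size = stack_size;
    getcontext(&co->ctx);                  // start from the current context
    co->ctx.uc_stack.ss_sp = co->stack;    // run on the heap-allocated stack
    co->ctx.uc_stack.ss_size = stack_size;
    co->ctx.uc_link = &g_main_ctx;         // return here when the body ends
    makecontext(&co->ctx, coroutine_entry, 0);
    return co;
}

// The switch of FIG. 1: save the current registers into the outgoing context
// ("coroutine stack a") and load the incoming one ("coroutine stack b").
void switch_context(ucontext_t* out, ucontext_t* in) {
    swapcontext(out, in);
}
```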
The coroutine scheduling method is described below with reference to fig. 3, which is a schematic flow chart of the coroutine scheduling method provided in the embodiment of the present application; the method includes steps S301 to S304:
S301, worker threads in each scheduling group receive query request tasks from clients, and cyclically pull coroutine tasks from the coroutine task queues and execute them;
the coroutine scheduler includes a plurality of scheduling groups, and each scheduling group is composed of a plurality of coroutines.
As an alternative example, if the task queue of a single scheduling group is empty but the task queues of other scheduling groups are not, a coroutine task is stolen from the task queue of another scheduling group through a task-stealing mechanism and executed.
As an alternative example, the step in which the worker threads in each scheduling group receive the query request task of the client includes:
the query request of the client is sent to the coroutine scheduler through a socket connection, and the scheduler encapsulates the query request into a coroutine and distributes it to each scheduling group;
the scheduling group includes database worker threads created in advance.
For example, the query request of the client is sent to the coroutine scheduler through a socket connection or a similar mechanism, and the scheduler encapsulates the query request into a coroutine and distributes it to the scheduling groups through a load-balancing algorithm. Each scheduling group contains MySQL database worker threads created in advance. When the task queue of a scheduling group is empty, its worker thread cannot pull a coroutine task; at that point it attempts to steal a task from the task queues of other scheduling groups and executes it, as in the sketch below.
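Continuing the hypothetical structures from the earlier sketch (SchedulingGroup, CoroutineTask), the worker loop below illustrates the pull-then-steal behaviour; the call to task->body() stands in for switching into the coroutine context and running it.

```cpp
// Worker loop of one scheduling group (continuation of the earlier sketch).
void WorkerLoop(std::vector<SchedulingGroup>& groups, std::size_t my_index) {
    for (;;) {
        std::shared_ptr<CoroutineTask> task;

        {   // 1. Try the local task queue first.
            SchedulingGroup& mine = groups[my_index];
            std::lock_guard<std::mutex> lk(mine.mu);
            if (!mine.queue.empty()) {
                task = mine.queue.front();
                mine.queue.pop_front();
            }
        }

        // 2. Local queue empty: try to steal from another group's queue.
        if (!task) {
            for (std::size_t i = 0; i < groups.size() && !task; ++i) {
                if (i == my_index) continue;
                SchedulingGroup& victim = groups[i];
                std::lock_guard<std::mutex> lk(victim.mu);
                if (!victim.queue.empty()) {
                    task = victim.queue.back();   // steal from the tail
                    victim.queue.pop_back();
                }
            }
        }

        // 3. Run the pulled or stolen coroutine task; otherwise keep polling
        //    on the thread stack.
        if (task) {
            task->body();   // stands in for the stack switch + execution
        }
    }
}
```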
S302, after pulling a coroutine task from the task queue, the worker thread performs a stack-switch operation, switches to the coroutine context, and executes the coroutine task;
as an alternative example, the step includes:
saving the register information in the current CPU context to the coroutine stack to be swapped out;
loading the saved register context information related to coroutine execution into the CPU registers from the coroutine context stored in the target coroutine stack to be swapped in, so as to complete the context switch.
Further, when the swapped-out coroutine needs to be restored, the running state of the original coroutine is read from the information saved in its coroutine stack and restored.
For example: the worker thread switches into the coroutine context through a stack-switch operation. The register information in the current CPU context is first saved into the coroutine stack to be swapped out, so that when that coroutine needs to be restored, its running state can be read back from the information saved in its coroutine stack. Then, the saved register context information related to coroutine execution is loaded into the CPU registers from the coroutine context stored in the target coroutine stack to be swapped in, completing the context switch, after which the CPU runs the target coroutine task in the target coroutine context.
S303, when the worker thread executes a coroutine task to process a database query statement, if an event requiring blocking waiting is encountered, the coroutine task automatically yields the central processing unit (CPU), and the worker thread continues to try to pull the next coroutine task from the task queue;
as an alternative example, the step includes:
when the coroutine task yields the CPU, pulling the next coroutine task to be executed, and saving the current context into the source coroutine stack;
loading the target coroutine context into the registers to complete the coroutine switch;
if no coroutine task is available in the task queue, switching to the main coroutine to run;
the event requiring blocking waiting includes one or a combination of the following: input/output (IO) operations, lock contention, and processing an execution plan.
For example, when a coroutine yields, the switch is likewise completed through a stack-switch operation: the next coroutine task to be executed is pulled, the current context is saved into the source coroutine stack, and the target coroutine context is then loaded into the registers to complete the coroutine switch. If the task queue is empty at this point and no coroutine task is available, execution switches to the main coroutine, i.e., runs in the thread context, as in the sketch below.
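A minimal sketch of this yield path, reusing the hypothetical ucontext-based Coroutine and g_main_ctx from the earlier sketch: the current context is saved into the source coroutine stack and either the next coroutine or the main coroutine (thread context) is loaded.

```cpp
// Yield point inside a coroutine (e.g. a blocking statement was reached).
// `next` is the next runnable coroutine pulled from the task queue, or
// nullptr when the queue is empty.
void yield_current(Coroutine* current, Coroutine* next) {
    if (next != nullptr) {
        swapcontext(&current->ctx, &next->ctx);    // coroutine-to-coroutine switch
    } else {
        swapcontext(&current->ctx, &g_main_ctx);   // fall back to the main coroutine
    }
}
```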
S304, after the coroutine task registers the awaited event for monitoring and yields the CPU, it waits to be woken up; when the blocking event or resource it is waiting for becomes ready, the coroutine task is woken up again.
As an optional example, if the event requiring blocking waiting is an IO operation, the awaited IO event is registered with the event-polling mechanism epoll when the coroutine task yields, together with a corresponding callback function; epoll monitors all registered events, and when a ready event occurs, the corresponding callback function is triggered and executed, the coroutine task waiting for that event is resubmitted to the scheduler, and the resubmitted task is distributed to the scheduling groups, as in the sketch below.
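The sketch below shows one way to wire the IO case using the Linux epoll API (epoll_ctl and epoll_wait are real calls); the IoWait payload, the scheduler type and the resubmission path continue the hypothetical sketches above and are not the patent's actual code.

```cpp
#include <sys/epoll.h>

// What the listening thread needs in order to resubmit the coroutine once the
// awaited IO event becomes ready (continuation of the earlier sketch).
struct IoWait {
    std::shared_ptr<CoroutineTask> task;   // coroutine waiting for this fd
    CoroutineScheduler* scheduler;         // where to resubmit it
};

// Called at the yield point of a coroutine about to block on `fd`.
void register_io_wait(int epfd, int fd, IoWait* wait) {
    epoll_event ev{};
    ev.events = EPOLLIN | EPOLLONESHOT;    // wake once when the fd is readable
    ev.data.ptr = wait;                    // acts as the "callback" payload
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

// Dedicated listening thread: wait for ready events and resubmit the
// corresponding coroutine tasks to the scheduler.
void listen_loop(int epfd) {
    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            auto* wait = static_cast<IoWait*>(events[i].data.ptr);
            wait->scheduler->Submit(wait->task);   // back into a scheduling group
            delete wait;
        }
    }
}
```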
As another optional example, if the event requiring blocking waiting is lock contention, the coroutine task yields the CPU and is added to the waiting queue corresponding to the contended resource through a lock and condition-variable object maintained in user mode;
when another coroutine releases the resource, a coroutine task is taken from the waiting queue corresponding to the resource and resubmitted to the scheduler, as in the sketch below.
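A sketch of the lock-contention case: a user-mode coroutine lock whose waiters park themselves on a per-resource wait queue and are resubmitted to the scheduler on unlock. Class and method names are hypothetical and continue the earlier sketch; a small std::mutex is used here only to protect the lock's own metadata, which is a simplification of the user-mode lock and condition-variable objects described above.

```cpp
// A user-mode coroutine lock: contending coroutines do not block the worker
// thread; they enqueue themselves on the lock's wait queue, yield, and are
// resubmitted to the scheduler when the holder releases the lock.
class CoroutineMutex {
public:
    // Returns true if the lock was acquired; false means the caller has been
    // placed on the wait queue and should now yield the CPU.
    bool TryLockOrWait(std::shared_ptr<CoroutineTask> self) {
        std::lock_guard<std::mutex> lk(state_mu_);
        if (!locked_) {
            locked_ = true;
            return true;
        }
        waiters_.push_back(std::move(self));   // wait queue bound to the resource
        return false;
    }

    // Release: hand the lock to one waiting coroutine and resubmit it.
    void Unlock(CoroutineScheduler& scheduler) {
        std::shared_ptr<CoroutineTask> next;
        {
            std::lock_guard<std::mutex> lk(state_mu_);
            if (waiters_.empty()) {
                locked_ = false;
                return;
            }
            next = waiters_.front();
            waiters_.pop_front();              // lock ownership passes to `next`
        }
        scheduler.Submit(next);                // wake the waiting coroutine
    }

private:
    std::mutex state_mu_;   // short critical section protecting the metadata
    bool locked_ = false;
    std::deque<std::shared_ptr<CoroutineTask>> waiters_;
};
```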
As yet another alternative example, if the event requiring blocking waiting is the processing of an execution plan, the corresponding coroutine task is saved when the database executor is initialized and the CPU is then yielded; when the executor completes the execution plan and returns a result, the coroutine task is resubmitted to the scheduler, as in the sketch below.
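For the execution-plan case, the sketch below only shows the two hook points implied by the description (save the coroutine when the executor is initialized, resubmit it when the result is ready); the executor itself and all names are hypothetical and this is not MySQL's executor API.

```cpp
// Bookkeeping for a coroutine that is waiting on the database executor.
struct PlanWait {
    std::shared_ptr<CoroutineTask> task;   // coroutine waiting for the result
    CoroutineScheduler* scheduler = nullptr;
};

// Hook called when the database executor is initialized for this query:
// remember which coroutine to wake; the coroutine then yields the CPU.
void on_executor_init(PlanWait& wait, std::shared_ptr<CoroutineTask> self,
                      CoroutineScheduler& scheduler) {
    wait.task = std::move(self);
    wait.scheduler = &scheduler;
}

// Hook called when the executor has completed the execution plan and has a
// result: resubmit the saved coroutine task to the scheduler.
void on_executor_done(PlanWait& wait) {
    wait.scheduler->Submit(std::move(wait.task));
}
```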
For example, S304 may specifically be: for an IO operation, the awaited IO event is registered with epoll when the coroutine yields, together with a corresponding callback function; epoll monitors all registered events, and after a ready event occurs it triggers and executes the corresponding callback function, resubmits the coroutine task waiting for that event to the scheduler, and distributes the task to the scheduling groups. For synchronization scenarios that use mutexes, condition variables and the like, a coroutine that encounters resource contention yields the CPU and is added to the waiting queue corresponding to the resource through a lock and condition-variable object maintained in user mode; when another coroutine releases the resource, a coroutine is taken from the waiting queue corresponding to the resource and resubmitted to the scheduler, which implements coroutine synchronization primitives. For the processing of SQL statements, the corresponding coroutine task is saved when the MySQL database executor is initialized and the CPU is then yielded; when the executor completes the execution plan and returns a result, the coroutine is resubmitted to the scheduler to wait for execution.
According to the coroutine scheduling method, when a coroutine task reaches a blocking operation such as resource contention, IO blocking or sleep, it automatically suspends and yields the CPU, so that the worker thread can keep trying to pull and execute the next coroutine task from the task queue without blocking; the resource or event that the coroutine needs to wait for is added to monitoring before yielding, for example via epoll. When the resource or event the original coroutine was waiting for becomes ready, the corresponding coroutine task is resubmitted to the coroutine scheduler and waits to be obtained and executed by a worker thread.
With this coroutine scheduling method, requests of MySQL database clients are encapsulated as coroutines, so that multiple coroutine tasks can multiplex the same MySQL processing thread, i.e., coroutine tasks are scheduled onto worker threads. Compared with the original per-connection, per-thread processing model of the MySQL database, the fixed number of scheduling groups and pre-created MySQL processing threads avoids the system performance degradation caused by excessive thread counts and resource occupation in high-concurrency scenarios. Compared with the existing MySQL connection pool technology, the coroutine scheduling approach automatically performs coroutine switching and scheduling when a slow SQL request is processed, so that other requests can be answered in time and the long-tail latency of the system is reduced.
Based on the same inventive concept, an embodiment of the present application also provides a coroutine scheduling device, as shown in fig. 4, which includes:
a memory 402, a processor 401 and a user interface 403;
the memory 402 is used for storing a computer program;
the user interface 403 is configured to interact with a user;
the processor 401 is configured to read the computer program in the memory 402, and when executing the computer program, the processor 401 implements:
worker threads in each scheduling group receive query request tasks from clients, and cyclically pull coroutine tasks from the coroutine task queue and execute them;
after pulling a coroutine task from the task queue, the worker thread performs a stack-switch operation, switches to the coroutine context, and executes the coroutine task;
when the worker thread executes a coroutine task to process a database query statement, if an event requiring blocking waiting is encountered, the coroutine task automatically yields the central processing unit (CPU), and the worker thread continues to try to pull the next coroutine task from the task queue;
after the coroutine task registers the awaited event for monitoring and yields the CPU, it waits to be woken up by another coroutine; when the blocking event or resource it is waiting for becomes ready, the coroutine task is woken up again.
In FIG. 4, the bus architecture may comprise any number of interconnected buses and bridges, linking together one or more processors, represented by processor 401, and various memory circuits, represented by memory 402. The bus architecture may also link together various other circuits, such as peripheral devices, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The processor 401 is responsible for managing the bus architecture and general processing, and the memory 402 may store data used by the processor 401 when performing operations.
The processor 401 may be a CPU, ASIC, FPGA or CPLD, and the processor 401 may also adopt a multi-core architecture.
As an alternative example, the processor 401 may further implement, when executing the computer program:
if the task queue of a single scheduling group is empty but the task queues of other scheduling groups are not empty, a coroutine task is stolen from the task queue of another scheduling group through a task-stealing mechanism and executed.
As an optional example, the query request of the client is sent to the coroutine scheduler through a socket connection, and the scheduler encapsulates the query request into a coroutine and distributes it to each scheduling group; the scheduling group includes database worker threads created in advance.
As an alternative example, the processor 401 may further implement, when executing the computer program:
saving the register information in the current CPU context to the coroutine stack to be swapped out;
loading the saved register context information related to coroutine execution into the CPU registers from the coroutine context stored in the target coroutine stack to be swapped in, so as to complete the context switch.
Optionally, when the swapped-out coroutine needs to be restored, the running state of the original coroutine is read from the information saved in its coroutine stack and restored.
As an alternative example, the processor 401 may further implement, when executing the computer program: when the worker thread executes a coroutine task to process a database query statement, if an event requiring blocking waiting is encountered, the coroutine task automatically yields the CPU, and the worker thread continues to try to pull the next coroutine task from the task queue, including:
when the coroutine task yields the CPU, pulling the next coroutine task to be executed, and saving the current context into the source coroutine stack;
loading the target coroutine context into the registers to complete the coroutine switch;
if no coroutine task is available in the task queue, switching to the main coroutine to run;
the event requiring blocking waiting includes one or a combination of the following: input/output (IO) operations, lock contention, and processing an execution plan.
As an alternative example, the processor 401 may further implement, when executing the computer program:
and if the event requiring blocking waiting is IO operation, registering the waiting IO event on an event polling epoll when the coroutine task yields, registering a corresponding callback function, monitoring all registered monitoring events by the epoll, triggering and executing the corresponding callback function when a ready event occurs, resubmitting the corresponding coroutine task waiting for the event to a scheduler, and distributing the resubmitted task to each scheduling group.
As an alternative example, the processor 401 may further implement, when executing the computer program:
if the event requiring blocking waiting is lock contention, the coroutine task yields the CPU and is added to the waiting queue corresponding to the contended resource through a lock and condition-variable object maintained in user mode; when another coroutine releases the resource, a coroutine task is taken from the waiting queue corresponding to the resource and resubmitted to the scheduler.
As an alternative example, the processor 401 may further implement, when executing the computer program:
if the event requiring blocking waiting is the processing of an execution plan, the corresponding coroutine task is saved when the database executor is initialized and the CPU is then yielded; when the executor completes the execution plan and returns a result, the coroutine task is resubmitted to the scheduler.
As an alternative example, the processor 401, when executing the computer program stored in the memory 402, implements any of the coroutine scheduling methods in the first embodiment.
It should be noted that the device provided in this embodiment and the method provided in the foregoing method embodiment belong to the same inventive concept, solve the same technical problem, and achieve the same technical effect, which is not described in detail here.
The present application also proposes a processor readable storage medium. The processor-readable storage medium stores a computer program, and when the processor executes the computer program, any one of the coroutine scheduling methods in the first embodiment is implemented.
It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice. In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A coroutine scheduling method, characterized by comprising:
worker threads in each scheduling group receive query request tasks from clients, and cyclically pull coroutine tasks from the coroutine task queue and execute them;
after pulling a coroutine task from the task queue, the worker thread performs a stack-switch operation, switches to the coroutine context, and executes the coroutine task;
when the worker thread executes a coroutine task to process a database query statement, if an event requiring blocking waiting is encountered, the coroutine task automatically yields the central processing unit (CPU), and the worker thread continues to try to pull the next coroutine task from the task queue;
after the coroutine task registers the awaited event for monitoring and yields the CPU, it waits to be woken up by another coroutine; when the blocking event or resource it is waiting for becomes ready, the coroutine task is woken up again.
2. The method of claim 1, wherein the step in which the worker threads in each scheduling group receive the query request task of the client, and cyclically pull coroutine tasks from the coroutine task queue and execute them, comprises:
if the task queue of a single scheduling group is empty but the task queues of other scheduling groups are not empty, stealing a coroutine task from the task queue of another scheduling group through a task-stealing mechanism and executing it.
3. The method of claim 2, wherein the step in which the worker threads in each scheduling group receive the query request task of the client comprises:
the query request of the client is sent to the coroutine scheduler through a socket connection, and the scheduler encapsulates the query request into a coroutine and distributes it to each scheduling group;
the scheduling group includes database worker threads created in advance.
4. The method of claim 1, wherein the step in which the worker thread performs a stack-switch operation after pulling a coroutine task from the task queue, switches to the coroutine context, and executes the coroutine task comprises:
saving the register information in the current CPU context to the coroutine stack to be swapped out;
loading the saved register context information related to coroutine execution into the CPU registers from the coroutine context stored in the target coroutine stack to be swapped in, so as to complete the context switch.
5. The method as recited in claim 4, further comprising:
and when the swapped coroutine needs to be restored, reading and restoring the running state of the original coroutine from the information stored in the coroutine Cheng Zhan.
6. The method of claim 1, wherein the step in which, when the worker thread executes a coroutine task to process a database query statement, the coroutine task automatically yields the CPU upon encountering an event requiring blocking waiting, and the worker thread continues to try to pull the next coroutine task from the task queue, comprises:
when the coroutine task yields the CPU, pulling the next coroutine task to be executed, and saving the current context into the source coroutine stack;
loading the target coroutine context into the registers to complete the coroutine switch;
if no coroutine task is available in the task queue, switching to the main coroutine to run;
the event requiring blocking waiting comprises one or a combination of the following: input/output (IO) operations, lock contention, and processing an execution plan.
7. The method of claim 6, wherein the step in which, after the coroutine task registers the awaited event for monitoring and yields the CPU, it waits to be woken up by another coroutine, and is woken up again when the blocking event or resource it is waiting for becomes ready, comprises:
if the event requiring blocking waiting is an IO operation, registering the awaited IO event with the event-polling mechanism epoll when the coroutine task yields, together with a corresponding callback function; epoll monitors all registered events, and when a ready event occurs, the corresponding callback function is triggered and executed, the coroutine task waiting for that event is resubmitted to the scheduler, and the resubmitted task is distributed to the scheduling groups.
8. The method of claim 6, wherein the step in which, after the coroutine task registers the awaited event for monitoring and yields the CPU, it waits to be woken up by another coroutine, and is woken up again when the blocking event or resource it is waiting for becomes ready, comprises:
if the event requiring blocking waiting is lock contention, the coroutine task yields the CPU and is added to the waiting queue corresponding to the contended resource through a lock and condition-variable object maintained in user mode;
when another coroutine releases the resource, a coroutine task is taken from the waiting queue corresponding to the resource and resubmitted to the scheduler.
9. The method of claim 6, wherein the step in which, after the coroutine task registers the awaited event for monitoring and yields the CPU, it waits to be woken up by another coroutine, and is woken up again when the blocking event or resource it is waiting for becomes ready, comprises:
if the event requiring blocking waiting is the processing of an execution plan, saving the corresponding coroutine task when the database executor is initialized and then yielding the CPU; when the executor completes the execution plan and returns a result, resubmitting the coroutine task to the scheduler.
10. A coroutine scheduling device, characterized by comprising a memory, a processor and a user interface;
the memory is used for storing a computer program;
the user interface is used for realizing interaction with a user;
the processor is configured to read the computer program in the memory, and when the processor executes the computer program, the coroutine scheduling method according to any one of claims 1 to 9 is implemented.
CN202311704043.3A 2023-12-12 2023-12-12 Coroutine scheduling method, coroutine scheduling device and storage medium Pending CN117850995A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311704043.3A CN117850995A (en) 2023-12-12 2023-12-12 Coroutine scheduling method, coroutine scheduling device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311704043.3A CN117850995A (en) 2023-12-12 2023-12-12 Coroutine scheduling method, coroutine scheduling device and storage medium

Publications (1)

Publication Number Publication Date
CN117850995A true CN117850995A (en) 2024-04-09

Family

ID=90530090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311704043.3A Pending CN117850995A (en) 2023-12-12 2023-12-12 Coroutine scheduling method, coroutine scheduling device and storage medium

Country Status (1)

Country Link
CN (1) CN117850995A (en)

Similar Documents

Publication Publication Date Title
US5452452A (en) System having integrated dispatcher for self scheduling processors to execute multiple types of processes
CN108595282A (en) A kind of implementation method of high concurrent message queue
US5390329A (en) Responding to service requests using minimal system-side context in a multiprocessor environment
KR101915198B1 (en) Method and Apparatus for processing the message between processors
US8756613B2 (en) Scalable, parallel processing of messages while enforcing custom sequencing criteria
JPH113232A (en) Signal generation and distribution for double level multi-thread system
US20070050771A1 (en) System and method for scheduling tasks for execution
CN105187327A (en) Distributed message queue middleware
CN102081557A (en) Resource management method and system in cloud computing operating system
CN114138434B (en) Big data task scheduling system
US8132171B2 (en) Method of controlling thread access to a synchronization object
CN108073414B (en) Implementation method for merging multithreading concurrent requests and submitting and distributing results in batches based on Jedis
CN109800067A (en) Database connection optimization method, device and relevant device based on cloud monitoring
US7765548B2 (en) System, method and medium for using and/or providing operating system information to acquire a hybrid user/operating system lock
KR20000060827A (en) method for implementation of transferring event in real-time operating system kernel
JP3546694B2 (en) Multi-thread computer system and multi-thread execution control method
CN110633145A (en) Real-time communication method and device in distributed system and distributed system
EP3084603B1 (en) System and method for supporting adaptive busy wait in a computing environment
CN116724294A (en) Task allocation method and device
CN117850995A (en) Coroutine scheduling method, coroutine scheduling device and storage medium
JP7346649B2 (en) Synchronous control system and method
CN110488714A (en) A kind of asynchronism state machine control method and device
CN116225688A (en) Multi-core collaborative rendering processing method based on GPU instruction forwarding
CN112749020A (en) Microkernel optimization method of Internet of things operating system
CN113778700A (en) Message processing method, system, medium and computer system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination