CN112346835A - Scheduling processing method and system based on coroutine - Google Patents

Scheduling processing method and system based on coroutine

Info

Publication number
CN112346835A
Authority
CN
China
Prior art keywords
coroutine
unit
virtual
running
scheduling
Prior art date
Legal status: Granted
Application number
CN202011142092.9A
Other languages
Chinese (zh)
Other versions
CN112346835B (en)
Inventor
朱烨
Current Assignee: Shanghai Jiaran Information Technology Co., Ltd.
Original Assignee
Shanghai Handpal Information Technology Service Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Handpal Information Technology Service Co., Ltd.
Priority to CN202011142092.9A
Publication of CN112346835A
Application granted
Publication of CN112346835B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5077: Logical partitioning of resources; management or configuration of virtualized resources
    • G06F 9/524: Deadlock detection or avoidance
    • G06F 2009/45562: Creating, deleting, cloning virtual machine instances
    • G06F 2009/45575: Starting, stopping, suspending or resuming virtual machine instances
    • G06F 2009/45583: Memory management, e.g. access or allocation
    • G06F 2209/5018: Thread allocation

Abstract

The invention discloses a coroutine-based scheduling processing method and system, wherein the scheduling method comprises the following steps: step S1, creating at least one coroutine in user mode and caching it; step S2, importing at least one program to be run in user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine; step S3, creating at least one virtual processing unit in user mode for running coroutines so as to execute the corresponding instructions and/or data; step S4, releasing the corresponding processing resources after the coroutine finishes running in the virtual processing unit. This technical scheme provides a user-mode coroutine scheduling scheme in which the coroutine model replaces the thread model; it can greatly improve scheduling performance in various application scenarios and realizes the design and application of the coroutine model on platforms such as Java.

Description

Scheduling processing method and system based on coroutine
Technical Field
The invention relates to the technical field of user mode scheduling, in particular to a scheduling processing method and system based on coroutine.
Background
A coroutine is a lightweight scheduling unit similar to a thread; it works purely in user mode and is simple, fast and efficient to schedule. In many application scenarios it is more attractive than the traditional thread scheduling model, for the following reasons:
First, scheduling efficiency: compared with processes, threads share data and code, which saves a great deal of memory switching work during scheduling and is therefore faster and more efficient; however, threads still cannot be implemented without kernel support (for example, the POSIX thread library adopts a 1:1 model of kernel threads), and thread scheduling switches between user mode and kernel mode. Coroutines retain all the advantages of threads, and their greatest advantage is that they belong entirely to user mode: the kernel cannot perceive a coroutine switch, so the heavy user-mode/kernel-mode switching work is avoided and task scheduling efficiency rises a further level.
Second, the locking mechanism: although different threads in the same process share code and data space, a lock mechanism must be added among multiple threads when they operate on critical-section data in order to guarantee data consistency, and this locking limits the execution efficiency of the program to a certain extent. Because multiple coroutines run on the same thread and are scheduled automatically by the library, there is no contention among coroutines on the same thread for access to critical-section data, which greatly improves the running efficiency of the program.
Third, the handling of IO operations: an IO operation is usually a blocking event that requires the thread to be suspended and wait, which wastes CPU resources; the asynchronous IO that appeared later alleviates this waste to a certain extent but cannot eliminate it completely. When a coroutine encounters an IO operation, the current coroutine is blocked and other runnable coroutines are scheduled in its place; the originally blocked coroutine is rescheduled when the IO response returns, so CPU resources are not wasted.
Fourth, system limitations: whichever model a traditional thread model uses, it ultimately needs a 1:1 correspondence with the kernel's basic scheduling unit and is limited by the operating system kernel, so the scheduling resources that one server can provide for execution are extremely limited. A coroutine is a user-level function module and is not constrained by the underlying operating system, so it naturally has no such limitation: as long as physical memory allows, more scheduling units can be created. Moreover, as a scheduling unit of finer granularity beneath the thread, a coroutine shares all the data of its thread, so its data structure only needs the few registers that support its operation; it is much smaller than a thread's, which allows far more coroutines to be created than threads.
The coroutine model greatly improves on the thread model in all of the above respects, and a coroutine-based scheduling processing method and system are therefore urgently needed.
Disclosure of Invention
In view of the above problems in the prior art, a scheduling processing method and system based on coroutine are provided, and the specific technical scheme is as follows:
a scheduling processing method based on coroutine comprises the following steps:
step S1, creating at least one coroutine in the user mode and caching;
step S2, importing at least one program to be run in a user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine;
step S3, creating at least one virtual processing unit in the user mode for running coroutines to execute corresponding instructions and/or data;
step S4, releasing the corresponding processing resource after the coroutine completes running in the virtual processing unit.
Preferably, in the scheduling processing method, step S3 further includes:
step S31, acquiring the number of runnable coroutines of each virtual processing unit;
step S32, allocating coroutines to corresponding virtual processing units according to the number of runnable coroutines and forming corresponding running queues.
Preferably, in the scheduling processing method, step S4 further includes:
step S41, in the operation process of the coroutine, judging whether a first user instruction from the outside exists and outputting a corresponding first judgment result;
step S42, according to the first determination result, when there is a first user instruction, selecting a coroutine from the running queue to switch with the currently running coroutine.
Preferably, in the scheduling processing method, step S4 further includes:
step S4a, in the operation process of the coroutine, judging whether a blocking event exists and outputting a corresponding second judgment result;
step S4b, according to the second determination result, when there is a blocking event, selecting a coroutine from the running queue to switch with the currently running coroutine and suspending the currently running coroutine.
Preferably, in the scheduling processing method, step S4 further includes:
and step S4c, according to the second judgment result, when the blocking event is ended, reinserting the suspended coroutine into the running queue.
Preferably, the scheduling processing method, wherein each virtual processing unit comprises a plurality of virtual registers;
the switching process comprises the following steps:
step A1, storing the memory data in each virtual register into a memory area;
step A2, storing the address data of the currently running coroutine into a stack data structure;
step A3, restoring the address data of the coroutine being switched in;
step A4, restoring the memory data from the memory area to the corresponding virtual registers.
A scheduling processing system, applied to any one of the scheduling processing methods, includes:
the creating unit is used for creating at least one coroutine according to an external creating instruction;
the cache unit is connected with the creating unit and used for caching the created coroutine;
and the virtual processing unit is used for running the coroutine so as to execute the instruction and/or data of the program to be run corresponding to the coroutine.
Preferably, the scheduling processing system further includes:
and the guide unit is respectively connected with each virtual processing unit and the cache unit, allocates the coroutines with corresponding quantity to the corresponding virtual processing unit according to the runnable coroutines number of each virtual processing unit and forms a corresponding running queue.
Preferably, the scheduling processing system further includes:
the first judgment unit is used for judging whether a first user instruction from the outside exists or not and outputting a corresponding first judgment result;
and the switching unit is connected with the first judging unit and each virtual processing unit and, according to the first judgment result, when the first user instruction exists, selects a coroutine from the running queue to switch with the currently running coroutine.
Preferably, the scheduling processing system further includes:
the second judgment unit is used for judging whether a blocking event exists or not and outputting a corresponding second judgment result;
the switching unit is also connected with the second judging unit and, according to the second judgment result, when a blocking event exists, selects a coroutine from the running queue to switch with the currently running coroutine and suspends the currently running coroutine.
Preferably, the scheduling processing system further includes:
and the recovery unit is respectively connected with the second judgment unit and each virtual processing unit, and reinserts the suspended coroutines into the running queue when the blocking event is ended.
Preferably, the scheduling processing system further includes:
the identification unit is used for identifying the real-time state of the coroutine;
when the coroutine is in the cache unit, marking the coroutine as a new state;
when the coroutine is in the running queue, marking the coroutine as a standby state;
when the coroutine is in the running process in the virtual processing unit, marking the coroutine as a running state;
when the coroutine has finished running in the virtual processing unit, marking the coroutine as a terminated state;
when the coroutine is suspended in the switching unit, marking the coroutine as a blocked state.
This technical scheme has the following advantages or beneficial effects:
This technical scheme provides a user-mode coroutine scheduling scheme in which the coroutine model replaces the thread model; it can greatly improve scheduling performance in various application scenarios and realizes the design and application of the coroutine model on platforms such as Java.
Drawings
Fig. 1 is a schematic flow diagram of a scheduling processing method in a coroutine-based scheduling processing method and system of the present invention.
Fig. 2-5 are schematic diagrams illustrating a scheduling method according to a preferred embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a scheduling processing system in the scheduling processing method and system based on coroutine of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In view of the above problems in the prior art, a scheduling processing method and system based on coroutine are provided, and the specific technical scheme is as follows:
a scheduling processing method based on coroutine, as shown in fig. 1, includes:
step S1, creating at least one coroutine in the user mode and caching;
step S2, importing at least one program to be run in a user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine;
step S3, creating at least one virtual processing unit in the user mode for running coroutines to execute corresponding instructions and/or data;
step S4, releasing the corresponding processing resource after the coroutine completes running in the virtual processing unit.
As a preferred implementation, in this scheduling processing method, as shown in fig. 2, step S3 further includes:
step S31, acquiring the number of runnable coroutines of each virtual processing unit;
step S32, allocating coroutines to corresponding virtual processing units according to the number of runnable coroutines and forming corresponding running queues.
In a preferred embodiment of the present invention, newly created coroutines are cached in a buffer area and must be bound to a virtual processing unit before they can run; the number of runnable coroutines of each virtual processing unit is obtained so that the corresponding coroutines can be extracted and bound, forming a corresponding running queue that waits to be scheduled and executed, as sketched below.
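The following is a minimal, illustrative Java sketch of this binding step (steps S31/S32). The class and method names (Coroutine, VirtualProcessingUnit, bindFromBuffer) are assumptions introduced for illustration and are not taken from the patent; a thread-safe queue stands in for the coroutine buffer area.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of steps S31/S32: a virtual processing unit pulls newly
// created coroutines from the shared buffer and forms its own running queue.
class Coroutine implements Runnable {
    final int id;
    Coroutine(int id) { this.id = id; }
    @Override public void run() { /* the instructions and/or data of the program to be run */ }
}

class VirtualProcessingUnit {
    private final int capacity;                                // most coroutines this unit can hold
    private final Queue<Coroutine> runQueue = new ArrayDeque<>();

    VirtualProcessingUnit(int capacity) { this.capacity = capacity; }

    /** Number of additional coroutines this unit can still run (step S31). */
    int runnableSlots() { return capacity - runQueue.size(); }

    /** Bind up to runnableSlots() coroutines from the shared buffer (step S32). */
    void bindFromBuffer(BlockingQueue<Coroutine> buffer) {
        for (int i = runnableSlots(); i > 0; i--) {
            Coroutine c = buffer.poll();                       // non-blocking: stop when the buffer is empty
            if (c == null) break;
            runQueue.add(c);                                   // coroutine now waits in the running queue
        }
    }
}
```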
As a preferred implementation, in this scheduling processing method, as shown in fig. 3, step S4 further includes:
step S41, in the operation process of the coroutine, judging whether a first user instruction from the outside exists and outputting a corresponding first judgment result;
step S42, according to the first determination result, when there is a first user instruction, selecting a coroutine from the running queue to switch with the currently running coroutine.
In another preferred embodiment of the present invention, a user program can give up control of the current virtual processing unit through the api interface and yield the resources to other coroutines that actually need to execute; in the above preferred embodiment, the coroutine in the ready state closest to the current time is selected from the running queue as the coroutine to be scheduled, the state of the current coroutine is then changed from running to ready so that it relinquishes the corresponding processing resources, and finally, by switching the coroutine stack, execution of the instructions and/or data of the program to be run is transferred to the scheduled coroutine. The specific steps of coroutine switching are described in further detail later; a sketch of the yield path is shown below.
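The hedged Java sketch below illustrates this yield path. All names (Scheduler, yieldCurrent, State) are assumptions rather than the patent's api, the ready coroutine is simply taken in queue order as a stand-in for "closest to the current time", and the actual stack switch is left as a placeholder.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Hypothetical sketch of the yield path (steps S41/S42): the running coroutine
// gives up the virtual processing unit and a ready coroutine from the running
// queue is scheduled in its place.
class Scheduler {
    enum State { NEW, READY, RUNNING, BLOCKED, TERMINATED }

    static class Co { State state = State.READY; }

    private final Deque<Co> runQueue = new ArrayDeque<>();
    private Co current;

    /** Called from the user program through the api interface to give up control. */
    void yieldCurrent() {
        if (current == null) return;
        Co next = pickReady();
        if (next == null) return;              // nothing else is ready, keep running
        current.state = State.READY;           // running -> ready: relinquish the resource
        runQueue.addLast(current);             // rejoin the running queue at the tail
        Co previous = current;
        current = next;
        next.state = State.RUNNING;
        switchStack(previous, next);           // placeholder for the stack switch (steps A1-A4)
    }

    /** Take the first ready coroutine in queue order. */
    private Co pickReady() {
        Iterator<Co> it = runQueue.iterator();
        while (it.hasNext()) {
            Co c = it.next();
            if (c.state == State.READY) { it.remove(); return c; }
        }
        return null;
    }

    private void switchStack(Co from, Co to) {
        // In the real framework this would save the registers and stack of `from`
        // and restore those of `to`; here it is only a marker.
    }
}
```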
As a preferred embodiment, as shown in fig. 4, the scheduling processing method further includes, in step S4:
step S4a, in the operation process of the coroutine, judging whether a blocking event exists and outputting a corresponding second judgment result;
step S4b, according to the second determination result, when there is a blocking event, selecting a coroutine from the running queue to switch with the currently running coroutine and suspending the currently running coroutine.
In another preferred embodiment of the present invention, when a blocking event such as an IO operation occurs, the blocked current coroutine gives up CPU resources and other coroutines are scheduled to run; unlike a coroutine that voluntarily yields the CPU, the state of the current coroutine changes to the blocked state rather than the aforementioned ready state.
As a preferred embodiment, as shown in fig. 4, the scheduling processing method further includes, in step S4:
and step S4c, according to the second judgment result, when the blocking event is ended, reinserting the suspended coroutine into the running queue.
In the above preferred embodiment, the IO operations executed in a coroutine are all executed asynchronously: after the asynchronous IO operation completes, the corresponding worker thread calls the resume interface to inform the coroutine framework that the blocking event of the corresponding coroutine has ended, after which the coroutine can be rescheduled and continue execution.
In the above preferred embodiment, in a complex multi-threaded environment it is necessary to ensure that the blocking operation and the resume operation of a coroutine are serialized: the resume operation must only be executed while the coroutine is in the blocked state, and the state of the coroutine is changed to the standby state when it is resumed. A sketch of this block/resume protocol is given below.
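The sketch below shows one way to serialize the blocking and resume operations as just described, using an atomic state transition. The names (block, resume, State) and the use of AtomicReference are assumptions for illustration, not the patent's actual interface.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of the block/resume protocol for asynchronous IO:
// block() runs on the virtual CPU's own thread, resume() may be called later
// by the IO worker thread; the compare-and-set guarantees the two are
// serialized and that resume only takes effect while the coroutine is BLOCKED.
class BlockingDemo {
    enum State { READY, RUNNING, BLOCKED }

    static class Co {
        final AtomicReference<State> state = new AtomicReference<>(State.RUNNING);
    }

    private final Queue<Co> runQueue = new ConcurrentLinkedQueue<>();

    /** Called when the running coroutine hits a blocking event such as an IO operation. */
    void block(Co current) {
        current.state.set(State.BLOCKED);   // blocked, not ready: it must not be rescheduled yet
        scheduleAnother();                  // switch to some other runnable coroutine
    }

    /** Called by the IO worker thread once the asynchronous operation completes. */
    void resume(Co co) {
        // Only a BLOCKED coroutine may be resumed; the transition to READY is atomic.
        if (co.state.compareAndSet(State.BLOCKED, State.READY)) {
            runQueue.add(co);               // reinsert into the running queue (step S4c)
        }
    }

    private void scheduleAnother() {
        // Placeholder: pick the next READY coroutine from runQueue and switch to it.
    }
}
```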
As a preferred embodiment, the scheduling processing method, wherein each virtual processing unit includes a plurality of virtual registers;
as shown in fig. 5, the handover procedure includes:
step A1, storing the memory data in each virtual register into a memory area;
step A2, storing the address data of the currently running coroutine into a stack data structure;
step A3, restoring the address data of the coroutine being switched in;
step A4, restoring the memory data from the memory area to the corresponding virtual registers.
In another preferred embodiment of the present invention, the specific switching flow of coroutines is explained as follows:
In this embodiment, coroutine switching is similar to the thread switching performed by an operating system. When the scheme is applied to a Java platform, coroutine switching is the switching of the Java code execution stack (i.e. the Java stack frames). Considering that the Java virtual machine does not give the programmer an access interface to Java stack frames, and based on the visibility of the Java native stack and on stack consistency, when a coroutine switch occurs the program to be executed must be deliberately guided into a native method so that all the stack information of the previously executing method can be found. The specific implementation steps comprise:
1) storing the memory data between the top of the native method stack (low address) and a certain preset specific address (high address) into the data structure corresponding to the coroutine, while also recording the memory data of the relevant registers as a save-the-scene operation;
2) copying the stack data saved in the data structure of the coroutine being scheduled in onto the stack of the current thread, aligned to the high address; it should be noted here that, because different coroutines execute different logic, their stack heights differ; when copying, the position of the preset specific address is fixed and must always point to the stack bottom of a certain preset Java method; after the copy is finished, the saved memory data and the scene of each register are restored, thereby completing the coroutine switch.
In the above preferred embodiment, when the scheme is applied to a Java platform, the stack-bottom position of a particular Java method must be chosen as the base address for saving stack data, because a Java thread runs a great deal of logic common to the Java virtual machine between its creation and the running of a Java method, and this identical stack data does not need to be saved for every coroutine bound to that thread; therefore, much like a fork/exec pair of system calls, only the content of part of the stack-top addresses needs to be changed in order to change the execution logic of the whole thread. A hedged sketch of the context data involved in such a switch follows.
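To make the mechanism concrete, here is a heavily hedged Java sketch of the data a switch like this touches. None of the field names or native method signatures come from the patent; the native methods stand for C/JNI code (Java code cannot read or write its own stack frames), and the sketch only mirrors steps A1 to A4 and the high-address-aligned stack copy described above.

```java
// Hypothetical sketch of the per-coroutine context used for switching.
class CoroutineContext {
    // Java part: identity, state and entry point of the coroutine.
    long id;
    int state;
    Runnable entry;

    // Native part: saved stack contents and stack-top register value (steps A1/A2).
    byte[] savedStack;       // memory between the stack top (low address) and the preset high address
    long savedStackPointer;  // value of the stack-top register when the coroutine was switched out
}

class CoroutineSwitcher {
    // Preset high address: always points at the stack bottom of one fixed Java method,
    // so the common JVM bootstrap frames below it never need to be saved.
    private final long presetHighAddress;

    CoroutineSwitcher(long presetHighAddress) { this.presetHighAddress = presetHighAddress; }

    /** Switch from `from` to `to` on the current thread (steps A1-A4). */
    void switchTo(CoroutineContext from, CoroutineContext to) {
        // A1/A2: save the outgoing coroutine's stack and registers.
        from.savedStack = saveStack(presetHighAddress);
        from.savedStackPointer = currentStackPointer();
        // A3/A4: copy the incoming coroutine's stack back, aligned to the high address,
        // and restore its registers, so execution continues where it left off.
        restoreStack(to.savedStack, presetHighAddress, to.savedStackPointer);
    }

    // Placeholders for native (JNI) code; the declarations compile but the bodies
    // would have to be supplied by a native library.
    private native byte[] saveStack(long highAddress);
    private native long currentStackPointer();
    private native void restoreStack(byte[] stack, long highAddress, long stackPointer);
}
```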
A scheduling processing system, applied to any one of the scheduling processing methods, as shown in fig. 6, includes:
a creation unit 1 for creating at least one coroutine according to an external creation instruction;
the cache unit 2 is connected with the creating unit 1 and used for caching the created coroutine;
and the virtual processing unit 3 is used for running the coroutines so as to execute the instructions and/or data of the program to be run corresponding to the coroutines.
As a preferred embodiment, the scheduling processing system further includes:
and the guide unit 4 is respectively connected with each virtual processing unit 3 and the cache unit 2, allocates a corresponding number of coroutines to the corresponding virtual processing unit 3 according to the number of runnable coroutines of each virtual processing unit 3 and forms a corresponding running queue.
As a preferred embodiment, the scheduling processing system further includes:
the first judging unit 5 is used for judging whether a first user instruction from the outside exists or not and outputting a corresponding first judging result;
and the switching unit 6 is connected with the first judging unit 5 and each virtual processing unit 3 and, according to the first judgment result, when a first user instruction exists, selects a coroutine from the running queue to switch with the currently running coroutine.
As a preferred embodiment, the scheduling processing system further includes:
a second judging unit 7, configured to judge whether a blocking event exists and output a corresponding second judgment result;
the switching unit 6 is further connected to the second judging unit 7 and, according to the second judgment result, when a blocking event exists, selects a coroutine from the running queue to switch with the currently running coroutine and suspends the currently running coroutine.
As a preferred embodiment, the scheduling processing system further includes:
and the recovery unit 8 is respectively connected with the second judging unit 7 and each virtual processing unit 3, and reinserts the suspended coroutines into the running queue when the blocking event is ended.
As a preferred embodiment, the scheduling processing system further includes:
the identification unit is used for identifying the real-time state of the coroutine;
when the coroutine is in the cache unit, marking the coroutine as a new state;
when the coroutine is in the running queue, marking the coroutine as a standby state;
when the coroutine is in the running process in the virtual processing unit, marking the coroutine as a running state;
when the coroutine has finished running in the virtual processing unit, marking the coroutine as a terminated state;
when the coroutine is suspended in the switching unit, marking the coroutine as a blocked state.
In another preferred embodiment of the present invention, it should be noted that scheduling control of coroutines causes a series of coroutine state changes, and together these changes constitute the whole coroutine life cycle; in the above preferred embodiment, the identification unit marks each state of the coroutine life cycle, so that the user can more intuitively obtain the schedulable state and running state of a coroutine. These states and their transitions are summarized in the sketch below.
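The following enum is a hypothetical illustration of the five states described above and the transitions between them; it is a summary for the reader, not code from the patent.

```java
// Hypothetical illustration of the coroutine life cycle maintained by the
// identification unit: NEW -> STANDBY -> RUNNING -> {STANDBY, BLOCKED, TERMINATED}.
enum CoroutineState {
    NEW,        // sitting in the cache (buffer) unit, not yet bound to a virtual processing unit
    STANDBY,    // waiting in a virtual processing unit's running queue
    RUNNING,    // currently executing on a virtual processing unit
    BLOCKED,    // suspended in the switching unit, waiting for a blocking event to end
    TERMINATED; // finished running; its resources can be released

    /** Legal transitions in the life cycle (an assumption based on the description above). */
    boolean canTransitionTo(CoroutineState next) {
        switch (this) {
            case NEW:      return next == STANDBY;
            case STANDBY:  return next == RUNNING;
            case RUNNING:  return next == STANDBY || next == BLOCKED || next == TERMINATED;
            case BLOCKED:  return next == STANDBY;
            default:       return false; // TERMINATED is final
        }
    }
}
```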
A specific example is now provided to further explain and illustrate the present technical solution:
in the foregoing specific embodiment, the technical solution is applied to a Java platform, and provides an coroutine library as an environment for scheduling processing, where a specific data structure of the coroutine library includes:
a coroutine buffer area: all coroutines in the new state are stored in this buffer area; whenever a virtual CPU has idle computing power, it obtains coroutines that need to run from the buffer area and binds them to itself to wait for scheduled execution; the coroutine buffer area consists of a one-dimensional array, and the capacity of the array determines the concurrent carrying capacity of the whole coroutine framework;
a plurality of virtual CPUs: each virtual CPU corresponds to a specific Java thread; its data structure contains only two parts, one pointing to the coroutine buffer area for obtaining newly created coroutines (all virtual CPUs obtain the coroutines they need to execute from the same buffer area), the other pointing to the coroutine running environment, through whose api interface the operation of the whole coroutine framework is maintained; after starting, the virtual CPU repeatedly obtains newly created coroutines from the coroutine buffer area and runs their designated code logic;
the coroutine running environment: it represents the state of each virtual CPU's operation and exposes a group of api interfaces that control the coroutine framework; the main body of its data structure is two task queues, an idle queue and a running queue; for a newly created coroutine obtained from the coroutine buffer area, a task item is first allocated from the idle queue and then placed into the running queue, and all coroutines that are running or blocked are stored in the running queue; when a coroutine finishes, its task item is returned from the running queue to the idle queue, so that resources are reused; the sum of the lengths of the idle queue and the running queue represents the maximum processing capacity of one virtual CPU, and as long as an idle task item remains, the virtual CPU can continue to obtain new coroutines from the coroutine buffer area to run;
the virtual CPU context: the context data structure representing the operation of each virtual CPU; it is defined in C and maintains the native parts of all coroutine contexts bound to the virtual CPU as well as the preset high address used when switching stacks;
the coroutine context: it comprises a Java part and a native part; the Java part defines the identifier, state and execution-method entry information of the coroutine, which provide the data needed by the coroutine's running logic; the native part records the saved stack contents and the value of the stack-top register, which provide the necessary support for coroutine switching.
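The consolidated Java sketch below mirrors the data structures just listed: the buffer array, the virtual CPU, the running environment with its idle and running queues, and the two-part coroutine context. Every name and field choice is an assumption made for illustration; the real native part would live in C.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the coroutine library's data structures as described above.
class CoroutineLibrary {

    /** Coroutine context: a Java part plus a native part (stack contents, stack-top register). */
    static class CoroutineCtx {
        long id;
        String state = "NEW";      // cf. the CoroutineState enum sketched earlier
        Runnable entry;            // execution-method entry information
        byte[] nativeStack;        // native part, filled in by the switching code
        long nativeStackPointer;
    }

    /** Coroutine buffer area: a bounded one-dimensional array of NEW coroutines. */
    static class CoroutineBuffer {
        final BlockingQueue<CoroutineCtx> slots;
        CoroutineBuffer(int capacity) { slots = new ArrayBlockingQueue<>(capacity); }
    }

    /** Coroutine running environment: an idle queue plus a running queue per virtual CPU. */
    static class RunEnvironment {
        final Deque<CoroutineCtx> idleQueue = new ArrayDeque<>();   // free task items
        final Deque<CoroutineCtx> runQueue = new ArrayDeque<>();    // running or blocked coroutines

        /** Maximum processing capacity of one virtual CPU = idle + running task items. */
        int capacity() { return idleQueue.size() + runQueue.size(); }
    }

    /** Virtual CPU: one Java thread, a reference to the shared buffer and its own environment. */
    static class VirtualCpu implements Runnable {
        final CoroutineBuffer buffer;          // shared by all virtual CPUs
        final RunEnvironment env = new RunEnvironment();
        final Thread thread;

        VirtualCpu(CoroutineBuffer buffer, String name) {
            this.buffer = buffer;
            this.thread = new Thread(this, name);
        }

        @Override public void run() {
            // Placeholder main loop: while idle task items remain, pull NEW coroutines
            // from the shared buffer into the running queue and schedule them.
        }
    }
}
```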
In the foregoing embodiment, before starting scheduling control, it is necessary to preferentially initialize the coroutine framework, which mainly includes initializing the virtual CPU context and a local portion in the coroutine context and starting the virtual CPU, including:
1) reading preset configuration parameters, wherein the configuration parameters include but are not limited to the number of virtual CPUs, the size of the coroutine buffer area and the number of coroutines each virtual CPU processes concurrently;
2) initializing the virtual CPU contexts, constructing the corresponding coroutine arrays and determining the preset high address used for stack alignment;
3) creating a plurality of virtual CPUs according to the configuration parameters, and at the same time constructing and starting the Java threads corresponding to them;
4) creating a bootstrap coroutine: the bootstrap coroutine is a special built-in coroutine whose stack data is used to initialize the stacks of the other coroutines, and which acquires new coroutines from the coroutine buffer area for execution when the virtual CPU is idle; the created bootstrap coroutine has identifier 0 and is the first coroutine created on the corresponding virtual CPU;
5) initializing the other coroutine stacks with the stack data of the bootstrap coroutine: all the stack data between the preset high address in the virtual CPU context and the stack-top register is found, taken as the initial stack data of the bootstrap coroutine, and assigned to the stacks of the other coroutines;
6) at this point the coroutine framework is initialized; the bootstrap coroutine starts running and enters a loop, during which a certain number of newly created coroutines are obtained from the coroutine buffer area and bound to the current virtual CPU.
It should be noted that, in the coroutine framework, when a virtual CPU is shut down, the current coroutine must be forcibly switched to the bootstrap coroutine, whose code logic then lets the whole Java thread exit safely. A hedged sketch of this initialization and shutdown flow follows.
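The sketch below strings the six initialization steps and the shutdown note together. The configuration fields, the bootstrap coroutine with identifier 0 and the loop structure follow the description above, while every identifier and the simplified bodies are assumptions for illustration only.

```java
// Hypothetical sketch of the framework initialization and shutdown flow described above.
class FrameworkBootstrap {

    static class Config {               // 1) preset configuration parameters
        int virtualCpuCount = 4;
        int bufferCapacity = 10_000;
        int coroutinesPerCpu = 256;
    }

    void initialize(Config cfg) {
        // 2) initialize each virtual CPU context: coroutine array + preset high address.
        long presetHighAddress = determineStackAlignmentAddress();

        for (int i = 0; i < cfg.virtualCpuCount; i++) {
            // 3) create a virtual CPU and the Java thread that backs it.
            Thread vcpuThread = new Thread(() -> {
                // 4) create the bootstrap coroutine (identifier 0), the first coroutine
                //    on this virtual CPU; its stack data is captured between the preset
                //    high address and the stack-top register.
                // 5) use that stack data to initialize the stacks of the other coroutines.
                // 6) bootstrap coroutine loop: fetch newly created coroutines from the
                //    buffer, bind up to cfg.coroutinesPerCpu of them, and schedule them.
                runBootstrapLoop(cfg, presetHighAddress);
            }, "vcpu-" + i);
            vcpuThread.start();
        }
    }

    void shutdownVirtualCpu() {
        // On shutdown the current coroutine is forcibly switched back to the bootstrap
        // coroutine, whose code logic then lets the whole Java thread exit safely.
    }

    private long determineStackAlignmentAddress() { return 0L; }    // placeholder
    private void runBootstrapLoop(Config cfg, long highAddress) { } // placeholder
}
```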
In summary, this technical scheme provides a user-mode coroutine scheduling scheme in which the coroutine model replaces the thread model, which can greatly improve scheduling performance in various application scenarios while realizing the design and application of the coroutine model on platforms such as Java.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (12)

1. A scheduling processing method based on coroutine is characterized in that the scheduling processing method comprises the following steps:
step S1, creating at least one coroutine in the user mode and caching;
step S2, importing at least one program to be run in a user mode, wherein the program to be run comprises a plurality of instructions and/or data and corresponds to at least one coroutine;
step S3, creating at least one virtual processing unit in the user mode for running the coroutine to execute the corresponding instructions and/or data;
and step S4, releasing the corresponding processing resources after the coroutine finishes running in the virtual processing unit.
2. The scheduling processing method of claim 1 wherein said step S3 further comprises:
step S31, acquiring the number of runnable coroutines of each virtual processing unit;
step S32, allocating the coroutines to the corresponding virtual processing units according to the runnable coroutines number and forming corresponding running queues.
3. The scheduling processing method of claim 2 wherein said step S4 further comprises:
step S41, in the operation process of the coroutine, judging whether a first user instruction from the outside exists and outputting a corresponding first judgment result;
step S42, according to the first determination result, when the first user instruction exists, selecting one coroutine from the running queue to switch with the coroutine currently running.
4. The scheduling processing method of claim 2 wherein said step S4 further comprises:
step S4a, in the operation process of the coroutine, judging whether a blocking event exists and outputting a corresponding second judgment result;
step S4b, according to the second determination result, when the blocking event exists, selecting one coroutine from the running queue to switch with the currently running coroutine and suspending the currently running coroutine.
5. The scheduling processing method of claim 4 wherein said step S4 further comprises:
step S4c, according to the second determination result, when the blocking event is ended, reinserting the suspended coroutine into the running queue.
6. A method of scheduling processing according to claim 3 or 4 wherein each said virtual processing unit comprises a plurality of virtual registers;
the switching process comprises the following steps:
step A1, storing the memory data in each virtual register into a memory area;
step A2, storing the address data of the currently running coroutine into a stack data structure;
step A3, restoring the address data of the coroutine being switched in;
step A4, restoring the memory data from the memory area to the corresponding virtual registers.
7. A scheduling processing system, applied in the scheduling processing method according to any one of claims 1 to 6, comprising:
the creating unit is used for creating at least one coroutine according to an external creating instruction;
the cache unit is connected with the creating unit and used for caching the created coroutine;
and the virtual processing unit is used for running the coroutine to execute the instruction and/or data of the program to be run corresponding to the coroutine.
8. The dispatch processing system of claim 7, wherein the dispatch processing system further comprises:
and the guide unit is respectively connected with each virtual processing unit and the cache unit, and distributes a corresponding number of coroutines to the corresponding virtual processing units according to the number of runnable coroutines of each virtual processing unit and forms corresponding running queues.
9. The dispatch processing system of claim 7, wherein the dispatch processing system further comprises:
the first judgment unit is used for judging whether a first user instruction from the outside exists or not and outputting a corresponding first judgment result;
and the switching unit is connected with the first judging unit and each virtual processing unit and, according to the first judgment result, when the first user instruction exists, selects one coroutine from the running queue to switch with the coroutine currently running.
10. The dispatch processing system of claim 9, wherein the dispatch processing system further comprises:
the second judgment unit is used for judging whether a blocking event exists or not and outputting a corresponding second judgment result;
the switching unit is further connected to the second judging unit and, according to the second judgment result, when the blocking event exists, selects one coroutine from the running queue to switch with the currently running coroutine and suspends the currently running coroutine.
11. The dispatch processing system of claim 10, wherein the dispatch processing system further comprises:
and the recovery unit is respectively connected with the second judging unit and each virtual processing unit, and reinserts the suspended coroutines into the running queue when the blocking event is ended.
12. The dispatch processing system of claim 10, wherein the dispatch processing system further comprises:
the identification unit is used for identifying the real-time state of the coroutine;
when the coroutine is in the cache unit, marking the coroutine as a new state;
when the coroutine is in the running queue, marking the coroutine as a standby state;
when the coroutine is in the running process in the virtual processing unit, marking the coroutine as a running state;
when the coroutine has finished running in the virtual processing unit, marking the coroutine as a terminated state;
when the coroutine is suspended in the switching unit, marking the coroutine as a blocked state.
CN202011142092.9A 2020-10-22 2020-10-22 Scheduling processing method and system based on coroutine Active CN112346835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011142092.9A CN112346835B (en) 2020-10-22 2020-10-22 Scheduling processing method and system based on coroutine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011142092.9A CN112346835B (en) 2020-10-22 2020-10-22 Scheduling processing method and system based on coroutine

Publications (2)

Publication Number Publication Date
CN112346835A true CN112346835A (en) 2021-02-09
CN112346835B CN112346835B (en) 2022-12-09

Family

ID=74359860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011142092.9A Active CN112346835B (en) 2020-10-22 2020-10-22 Scheduling processing method and system based on coroutine

Country Status (1)

Country Link
CN (1) CN112346835B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466151A (en) * 2022-04-11 2022-05-10 武汉中科通达高新技术股份有限公司 Video storage system, computer equipment and storage medium of national standard camera
CN116155686A (en) * 2023-01-30 2023-05-23 浪潮云信息技术股份公司 Method for judging node faults in cloud environment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6732138B1 (en) * 1995-07-26 2004-05-04 International Business Machines Corporation Method and system for accessing system resources of a data processing system utilizing a kernel-only thread within a user process
CN104142858A (en) * 2013-11-29 2014-11-12 腾讯科技(深圳)有限公司 Blocked task scheduling method and device
CN105760237A (en) * 2016-02-05 2016-07-13 南京贝伦思网络科技股份有限公司 Communication method based on coroutine mechanism
CN107992344A (en) * 2016-10-25 2018-05-04 腾讯科技(深圳)有限公司 One kind association's journey implementation method and device
CN108021449A (en) * 2017-12-01 2018-05-11 厦门安胜网络科技有限公司 One kind association journey implementation method, terminal device and storage medium
CN111767159A (en) * 2020-06-24 2020-10-13 浙江大学 Asynchronous system calling system based on coroutine


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Nan (吴楠): "Design of a Hardware Simulation Acceleration Unit for an FPGA-based Virtual Platform", China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Information Science and Technology *


Also Published As

Publication number Publication date
CN112346835B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
EP0533805B1 (en) Method for efficient non-virtual main memory management
US7406699B2 (en) Enhanced runtime hosting
CN112465129B (en) On-chip heterogeneous artificial intelligent processor
CN109144710B (en) Resource scheduling method, device and computer readable storage medium
CN101727351B (en) Multicore platform-orientated asymmetrical dispatcher for monitor of virtual machine and dispatching method thereof
JP4964243B2 (en) Processor method and apparatus
US20050188177A1 (en) Method and apparatus for real-time multithreading
CN110597606B (en) Cache-friendly user-level thread scheduling method
KR102334511B1 (en) Manage task dependencies
GB2348306A (en) Batch processing of tasks in data processing systems
CN112346835B (en) Scheduling processing method and system based on coroutine
EP1934737A1 (en) Cell processor methods and apparatus
JP3810735B2 (en) An efficient thread-local object allocation method for scalable memory
JP5030647B2 (en) Method for loading a program in a computer system including a plurality of processing nodes, a computer readable medium containing the program, and a parallel computer system
JPH11259318A (en) Dispatch system
CN109656868B (en) Memory data transfer method between CPU and GPU
US9619277B2 (en) Computer with plurality of processors sharing process queue, and process dispatch processing method
CN112162840A (en) Coroutine processing and managing method based on interrupt reentrant mechanism
CN111736998A (en) Memory management method and related product
WO2023097424A1 (en) Method and apparatus for fusing layers of different models
JPH1153327A (en) Multiprocessor system
CN117311990A (en) Resource adjustment method and device, electronic equipment, storage medium and training platform
CN114168344A (en) GPU resource allocation method, device, equipment and readable storage medium
CN112540840A (en) Efficient task execution method based on Java multithreading and reflection
CN116401005A (en) Information processing method, device, equipment and computer readable storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
TR01: Transfer of patent right
    Effective date of registration: 20231018
    Address after: Room A320, 3rd Floor, No. 1359 Zhonghua Road, Huangpu District, Shanghai, 200010
    Patentee after: Shanghai Jiaran Information Technology Co., Ltd.
    Address before: 200001, 4th Floor, Fengsheng Building, 763 Mengzi Road, Huangpu District, Shanghai
    Patentee before: Shanghai Handpal Information Technology Service Co., Ltd.