CN114691339A - Process scheduling method and computing device - Google Patents

Process scheduling method and computing device

Info

Publication number
CN114691339A
Authority
CN
China
Prior art keywords
scheduling
data structure
latest
scheduling policy
current process
Prior art date
Legal status
Pending
Application number
CN202210382316.6A
Other languages
Chinese (zh)
Inventor
倪振
Current Assignee
Uniontech Software Technology Co Ltd
Original Assignee
Uniontech Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Uniontech Software Technology Co Ltd filed Critical Uniontech Software Technology Co Ltd
Priority to CN202210382316.6A priority Critical patent/CN114691339A/en
Publication of CN114691339A publication Critical patent/CN114691339A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/54 Indexing scheme relating to G06F9/54
    • G06F2209/548 Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a process scheduling method and a computing device, and relates to the technical field of process scheduling. The method is executed in a computing device in which an operating system runs, the kernel of the operating system includes a packet filter and a shared data storage area, and an application program runs in a user space above the operating system. The method includes the following steps: the application program adds an additional scheduler to the kernel using the packet filter; a latest scheduling policy data structure is custom-built according to the current load of the computing device, and the additional scheduler is called to write the latest scheduling policy data structure into the shared data storage area; and the additional scheduler is called to cyclically traverse the shared data storage area to acquire the latest scheduling policy data structure and schedule process execution according to it. According to the technical scheme of the invention, processes can be scheduled according to the scheduling policy best suited to the current load.

Description

Process scheduling method and computing device
Technical Field
The present invention relates to the field of process scheduling technologies, and in particular, to a process scheduling method and a computing device.
Background
CFS (Completely Fair Scheduler) is the default scheduler for normal processes. Based on a red-black tree data structure, the CFS scheduler maintains a virtual runtime (vruntime) for each scheduling entity and schedules by selecting the scheduling entity with the minimum vruntime, thereby balancing efficiency and fairness and suiting general scenarios. However, under certain specific scenarios and specific loads, the scheduling result of CFS is not always optimal.
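As a rough illustration of this selection rule, the following C sketch picks the entity with the smallest vruntime from a run queue. The struct is a simplified stand-in for the kernel's sched_entity, and the linear scan stands in for the red-black tree's leftmost-node lookup; this is a model of the idea, not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of a CFS scheduling entity: the real kernel keeps
 * these in a red-black tree ordered by vruntime, so the leftmost node
 * is the entity with the smallest vruntime. */
struct sched_entity {
    int pid;
    unsigned long long vruntime; /* accumulated weighted runtime, ns */
};

/* Pick the entity with the minimum vruntime, as CFS conceptually does. */
static struct sched_entity *pick_next_entity(struct sched_entity *se, size_t n)
{
    struct sched_entity *min = NULL;
    for (size_t i = 0; i < n; i++)
        if (min == NULL || se[i].vruntime < min->vruntime)
            min = &se[i];
    return min;
}
```

The entity that has run the least (weighted) time is always chosen next, which is what gives CFS its fairness property.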
The existing scheduler is invoked on two main occasions: active scheduling and passive preemption. Active scheduling is performed by the schedule function; its main flow is to disable preemption, call the core scheduling routine, and then re-enable preemption. Passive preemption opportunities include four cases: returning to user mode from a system call, returning to user mode from an interrupt, the kernel re-enabling preemption, and returning to kernel mode from an interrupt.
Passive preemption checks whether the scheduling flag TIF_NEED_RESCHED is set; if it is, scheduling may be performed via the schedule function. The scheduling flag is set mainly in two scenarios: process wakeup (try_to_wake_up) and the periodic scheduler tick (scheduler_tick).
The function that checks whether preemption should occur is check_preempt_curr, which dispatches to the corresponding method of the task's scheduling class. For example, in the CFS scheduler, the method that checks whether to preempt in the process wakeup scenario is check_preempt_wakeup, and the method that checks whether to preempt in the periodic scheduling scenario is check_preempt_tick.
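A minimal C sketch of this dispatch follows. The names mirror, but greatly simplify, the kernel's struct sched_class; the always-set-the-flag body and the plain int standing in for TIF_NEED_RESCHED are assumptions made purely for illustration.

```c
#include <assert.h>

struct task; /* forward declaration so the class can reference tasks */

/* A scheduling class exposes a preemption-check hook, as in the kernel. */
struct sched_class {
    void (*check_preempt_curr)(struct task *curr, struct task *wakee);
};

struct task {
    const struct sched_class *sched_class;
    int need_resched; /* stands in for the TIF_NEED_RESCHED flag */
};

/* Illustrative CFS hook: the real check compares vruntimes; here we
 * simply mark the current task for rescheduling. */
static void cfs_check_preempt_wakeup(struct task *curr, struct task *wakee)
{
    (void)wakee;
    curr->need_resched = 1;
}

static const struct sched_class fair_sched_class = {
    .check_preempt_curr = cfs_check_preempt_wakeup,
};

/* Generic entry point: dispatch to the current task's scheduling class. */
static void check_preempt_curr(struct task *curr, struct task *wakee)
{
    curr->sched_class->check_preempt_curr(curr, wakee);
}
```

The indirection through the function pointer is what lets each scheduling class (CFS, real-time, etc.) supply its own preemption policy behind one generic call.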
According to the existing scheduling scheme, the scheduling policy can only be determined by the kernel and cannot be custom-set in user mode according to the current specific scenario and specific load. The scheduling occasions of active scheduling are generally not changed. For passive preemption, how to define and set a scheduling policy in user mode according to the actual scenario and load, so as to flexibly select the scheduling policy best suited to the current load, is a problem to be solved urgently.
For this reason, a process scheduling method is required to solve the problems in the above-described scheme.
Disclosure of Invention
To this end, the present invention provides a process scheduling method to solve or at least alleviate the above existing problems.
According to an aspect of the present invention, there is provided a process scheduling method, executed in a computing device, where an operating system runs in the computing device, a kernel of the operating system includes a packet filter and a shared data storage area, and an application program runs in a user space above the operating system, the method including the following steps: the application program adds an additional scheduler to the kernel using the packet filter; a latest scheduling policy data structure is custom-built according to the current load of the computing device, and the additional scheduler is called to write the latest scheduling policy data structure into the shared data storage area; and the additional scheduler is called to cyclically traverse the shared data storage area to acquire the latest scheduling policy data structure and schedule process execution according to it.
Optionally, in the process scheduling method according to the present invention, the step of acquiring the latest scheduling policy data structure and scheduling process execution according to it includes: searching the shared data storage area for the corresponding latest scheduling policy data structure based on the process identifiers of the current process and the wake-up process, and determining from it whether a custom scheduling policy exists; if a custom scheduling policy exists for the wake-up process, determining from the latest scheduling policy data structure whether the wake-up process can preempt the current process; and if the current process can be preempted, setting a scheduling flag for the wake-up process so as to schedule the wake-up process for execution.
Optionally, in the process scheduling method according to the present invention, the step of determining whether a custom scheduling policy exists according to the latest scheduling policy data structure further includes: if no custom scheduling policy exists, determining a new minimum wakeup scheduling time interval from the latest scheduling policy data structure; judging whether the difference between the virtual runtime of the current process and that of the wake-up process is greater than the new minimum wakeup scheduling time interval; and if it is greater, setting a scheduling flag for the wake-up process so as to schedule the wake-up process for execution.
Optionally, in the process scheduling method according to the present invention, the shared data storage area uses a Map data structure to establish and store a mapping between a process data structure and a scheduling policy data structure, wherein the process data structure includes the process identifiers of the current process and the wake-up process, and the scheduling policy data structure includes a custom-policy write state (state), a preemptible state (decide), and a new minimum wakeup scheduling time interval (granularity).
Optionally, in the process scheduling method according to the present invention, the step of acquiring the latest scheduling policy data structure and scheduling process execution according to it includes: searching the shared data storage area for the corresponding latest scheduling policy data structure based on the process identifier of the current process, and determining from it whether a custom scheduling policy exists; if a custom scheduling policy exists, determining from the latest scheduling policy data structure whether the current process can be preempted; and if the current process can be preempted, setting a scheduling flag so as to schedule the next process for execution.
Optionally, in the process scheduling method according to the present invention, the step of determining whether a custom scheduling policy exists according to the latest scheduling policy data structure further includes: if no custom scheduling policy exists, determining a new minimum scheduling time interval from the latest scheduling policy data structure; judging whether the actual runtime of the current process is greater than its theoretical runtime; and if it is greater, setting a scheduling flag so as to schedule the next process for execution.
Optionally, in the process scheduling method according to the present invention, the step of judging whether the actual runtime of the current process is greater than its theoretical runtime further includes: if the actual runtime of the current process is less than or equal to its theoretical runtime, judging whether the actual runtime of the current process is greater than the new minimum scheduling time interval; if it is greater, judging whether the difference between the virtual runtime of the current process and that of the next process is greater than the new minimum scheduling time interval; and if the difference is greater, setting a scheduling flag so as to schedule the next process for execution.
Optionally, in the process scheduling method according to the present invention, the shared data storage area uses a Map data structure to establish and store a mapping between a process data structure and a scheduling policy data structure, wherein the process data structure includes the process identifier of the current process, and the scheduling policy data structure includes a custom-policy write state (state), a preemptible state (decide), and a new minimum scheduling time interval (granularity).
According to an aspect of the invention, there is provided a computing device comprising: at least one processor; a memory storing program instructions configured to be executed by the at least one processor, the program instructions comprising instructions for performing the process scheduling method as described above.
According to an aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform a process scheduling method as described above.
According to the technical scheme of the present invention, a process scheduling method is provided in which an application program in user mode can set a personalized scheduling policy for the current load. The application program running in user space uses the packet filter to add the additional scheduler to the kernel, and can then custom-build a latest scheduling policy data structure according to the current load of the computing device and write it into the shared data storage area of the kernel, so that the additional scheduler can schedule process execution according to the latest scheduling policy data structure best suited to the current load, thereby optimizing the existing passive-preemption-based method of checking whether to preempt. Furthermore, the process scheduling method according to the present invention requires neither restarting the operating system nor shutting down the application.
The above description is only an overview of the technical solution of the present invention, provided so that the technical means of the present invention can be more clearly understood and implemented in accordance with the content of the description, and so that the above and other objects, features, and advantages of the present invention will be more clearly apparent.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a computing device 100, according to one embodiment of the invention;
FIG. 2 illustrates a flow diagram of a process scheduling method 200 according to one embodiment of the invention;
FIG. 3 is a flowchart illustrating scheduling of an execution process in a process wake-up scenario according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a process for scheduling execution in a periodic scheduling scenario, according to an embodiment of the invention;
FIG. 5 shows a hardware architecture diagram of the computing device 100, according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
According to the process scheduling scheme, aiming at passive preemption, a scheduling strategy can be defined according to an actual scene and load in a user mode, so that an execution process is scheduled by using the scheduling strategy which is most suitable for the current load.
In the specific embodiments of the present invention, the CFS scheduler is taken only as an example to describe how a scheduling policy is custom-set according to the current load so as to affect the original passive-preemption-based scheduling policy (including the method check_preempt_wakeup in the process wakeup scenario and the method check_preempt_tick in the periodic scheduling scenario). It should be noted, however, that the process scheduling scheme of the present invention is not limited to the specific passive-preemption-based scheduling policies given in the embodiments.
FIG. 1 shows a schematic diagram of a computing device 100, according to one embodiment of the invention.
As shown in fig. 1, an operating system runs in the computing device 100, the operating system of the computing device 100 includes a kernel 120, a user space 110 is disposed above the operating system, and the user space 110 can run one or more applications 111. Applications 111 of user space 110 may communicate with kernel 120 through system calls.
The kernel 120 includes a packet filter 123, based on which instrumentation can be performed in the kernel. In this way, the application 111 running in user space can use the packet filter 123 to add the additional scheduler 124 of the present invention to the kernel 120 of the operating system, and the additional scheduler can be run in the kernel 120 on the basis of the packet filter 123.
In one implementation, the packet filter 123 may be implemented as, for example, eBPF (extended Berkeley Packet Filter).
According to an embodiment of the present invention, the kernel 120 further includes a shared data storage area 125 for data sharing with the user space 110, and the shared data storage area 125 may adopt a Map data structure, and establish a mapping relationship between a process and a scheduling policy based on Key and Value.
According to an embodiment of the present invention, the application 111 may custom build the latest scheduling policy data structure according to the current load of the computing device 100, and call the additional scheduler 124 to write the built latest scheduling policy data structure to the shared data storage area 125 in the kernel 120.
The application 111 calls the additional scheduler 124 in the kernel to cyclically traverse the shared data storage area 125 and check whether a latest scheduling policy data structure has been newly added; when the additional scheduler 124 detects a newly added latest scheduling policy data structure, it schedules process execution according to it. It is understood that scheduling process execution means allocating CPU time to a process so that the CPU executes it.
In an embodiment in accordance with the invention, the computing device 100 is configured to perform a process scheduling method 200 in accordance with the invention. By executing the process scheduling method 200 of the present invention, the computing device can custom-set the scheduling policy in user mode according to the actual scenario and load.
FIG. 2 shows a flow diagram of a process scheduling method 200 according to one embodiment of the invention. The method 200 is suitable for execution in a computing device, such as the computing device 100 described above.
An operating system is run in the computing device 100 according to the present invention, and a user space 110 is arranged above the operating system, and the user space 110 may run one or more applications 111. The kernel 120 of the operating system includes a packet filter 123 and a shared data storage area 125 for data sharing with the user space.
It should be noted that, in the specific embodiments of the present invention, the CFS scheduler is taken only as an example to describe how a scheduling policy is custom-set according to the current load so as to affect the original passive-preemption-based scheduling policy (including the method that checks whether to preempt in the process wakeup scenario and the method that checks whether to preempt in the periodic scheduling scenario). However, the process scheduling method 200 of the present invention is not limited to the specific passive-preemption-based scheduling policies given in the embodiments.
As shown in fig. 2, the method 200 begins at step S210.
In step S210, the application 111 running in the user space adds the additional scheduler 124 to the kernel 120 of the operating system using the packet filter 123. Thereafter, the application 111 may call an additional scheduler 124 in the kernel to update or access data stored in the shared data storage area in the kernel.
In step S220, the application 111 may custom build the latest scheduling policy data structure according to the current load of the computing device 100, and call the additional scheduler 124 to write the built latest scheduling policy data structure to the shared data storage area 125 in the kernel.
It should be noted that, by establishing the shared data storage area 125 in the kernel, data sharing between the kernel and the user space can be achieved. In one embodiment, the shared data storage area adopts a Map data structure, and a mapping relation between the process and the scheduling policy is established based on Key and Value.
In one implementation, the Key may be implemented as a process data structure, and the Value may be implemented as a scheduling policy data structure (struct preempt) associated with the process data structure. That is, the shared data storage area may employ a Map data structure to establish and store a mapping between a process data structure and a scheduling policy data structure (struct preempt). The scheduling policy data structure (struct preempt) may include a custom-policy write state (state), a preemptible state (decide), and a new minimum wakeup scheduling time interval (granularity).
In step S230, the application 111 calls the additional scheduler 124 to cyclically traverse the shared data storage area in the kernel and check whether a latest scheduling policy data structure has been newly added; when the additional scheduler 124 determines that one has been added, it acquires the latest scheduling policy data structure and schedules process execution according to it. It is understood that scheduling process execution means allocating CPU time to a process so that the CPU executes it.
In one implementation, for the process wakeup scenario, the shared data storage area uses a Map data structure to establish and store a mapping between a process data structure (struct wakeup) and a scheduling policy data structure (struct preempt). The process data structure struct wakeup includes the process identifiers of the current process and the wake-up process. The scheduling policy data structure (struct preempt) may include a custom-policy write state (state), a preemptible state (decide), and a new minimum wakeup scheduling interval (granularity).
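A hypothetical C layout of this map key and value is sketched below. The field names (state, decide, granularity) follow the description above, while the exact types and the helper functions are assumptions made for illustration.

```c
#include <assert.h>

/* Map key in the wakeup scenario: identifies the pair of processes. */
struct wakeup {
    int curr_pid;   /* process identifier of the current (running) process */
    int wakee_pid;  /* process identifier of the wake-up process */
};

/* Map value: the custom scheduling policy for that pair. */
struct preempt {
    int state;                      /* 1 if a custom scheduling policy was written */
    int decide;                     /* 1 if the current process may be preempted */
    unsigned long long granularity; /* new minimum wakeup scheduling interval, ns */
};

/* Helpers mirroring the two checks described in the text. */
static int has_custom_policy(const struct preempt *p) { return p->state != 0; }
static int may_preempt(const struct preempt *p)       { return p->decide != 0; }
```

In an eBPF setting these would be the key and value types of a shared map, written from user space and read by the additional scheduler in the kernel.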
FIG. 3 is a flowchart illustrating scheduling of an execution process in a process wake-up scenario according to an embodiment of the present invention.
In the process wakeup scenario, after the process to be woken up has been added to the CPU's run queue and woken up, and when checking whether to perform preemption, step S230 may be executed according to the flow illustrated in fig. 3. Specifically, as shown in fig. 3, the corresponding latest scheduling policy data structure may be looked up in the shared data storage area based on the process identifiers of the current process (i.e., the running process) and the wake-up process, and whether a custom scheduling policy exists may be determined from the found latest scheduling policy data structure. Here, it may be specifically determined according to the custom-policy write state in the latest scheduling policy data structure whether a custom scheduling policy exists, that is, whether the original scheduling policy needs to be changed.
If a custom scheduling policy exists for the wake-up process (the original scheduling policy needs to be changed), whether the wake-up process can preempt the current process is further determined according to the latest scheduling policy data structure. Here, it may be specifically determined according to the preemptible state (decide) in the latest scheduling policy data structure whether the wake-up process can preempt the current process.
If it is determined that the current process can be preempted, a scheduling flag is set for the wake-up process to schedule its execution, so that the wake-up process preempts the current process and runs preferentially. In one implementation, preemption of the current process by the wake-up process may be implemented via resched_curr.
If no custom scheduling policy exists, the original scheduling policy is not changed, and a new minimum wakeup scheduling time interval is determined from the granularity field of the latest scheduling policy data structure. Whether the new interval is greater than 0 is then judged; if it is less than or equal to 0, the process is scheduled based on the original minimum wakeup scheduling time interval. If the new minimum wakeup scheduling time interval is greater than 0, it is set in place of the original minimum wakeup scheduling time interval so that the process is scheduled according to it. Here, the original minimum wakeup scheduling time interval must be saved before the new one is set.
Then, whether the virtual runtime (vruntime) of the current process is greater than that of the wake-up process is judged; if so, the difference between the two virtual runtimes is calculated, and whether that difference is greater than the new minimum wakeup scheduling time interval is judged.
If the difference between the virtual runtimes of the current process and the wake-up process is greater than the new minimum wakeup scheduling interval, a scheduling flag (TIF_NEED_RESCHED) is set for the wake-up process to schedule its execution.
Otherwise, that is, if the difference between the virtual runtimes of the current process and the wake-up process is less than or equal to 0, or less than or equal to the new minimum wakeup scheduling interval, the scheduling flag is not set and no scheduling is performed.
Finally, the original minimum wakeup scheduling time interval is restored.
It will be appreciated that by setting the scheduling flag to indicate that there is currently a process that needs to be run with priority, the corresponding process is scheduled by checking the scheduling flag when the next scheduling occasion occurs.
It should be noted that setting a new minimum wakeup scheduling time interval in the latest scheduling policy data structure affects the process scheduling frequency, that is, the frequency of process preemption. The scheduling flag is set only when the difference between the virtual runtime of the current process and that of the wake-up process is greater than the new minimum wakeup scheduling time interval, so that the system overhead caused by frequent process switching is avoided as much as possible.
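The wakeup-scenario decision described above can be sketched as a pure C function. The fallback behavior (a nonpositive custom granularity means the original minimum wakeup scheduling interval stays in effect) follows the description; the function and parameter names are illustrative, not the kernel's.

```c
#include <assert.h>

/* Decide whether the wake-up process should get TIF_NEED_RESCHED.
 * Returns 1 to set the scheduling flag, 0 otherwise.
 * new_granularity <= 0 falls back to the original interval, as in the
 * flow described above (an assumption of this sketch). */
static int wakeup_should_preempt(unsigned long long curr_vruntime,
                                 unsigned long long wakee_vruntime,
                                 long long new_granularity,
                                 unsigned long long orig_granularity)
{
    unsigned long long gran = new_granularity > 0
                                  ? (unsigned long long)new_granularity
                                  : orig_granularity;
    /* Difference <= 0: the wakee has run at least as much; no preemption. */
    if (curr_vruntime <= wakee_vruntime)
        return 0;
    /* Preempt only when the wakee trails by more than the interval,
     * avoiding over-frequent process switching. */
    return (curr_vruntime - wakee_vruntime) > gran;
}
```

A larger granularity therefore directly lowers the preemption frequency, which is the tuning knob the custom policy exposes to user space.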
Thus, according to the method 200 of the present invention, after the application adds the additional scheduler to the kernel using the packet filter, a latest scheduling policy data structure can be defined and built according to the current load of the computing device and written into the shared data storage area of the kernel, so that the additional scheduler can schedule process execution according to the latest scheduling policy data structure suited to the current load, optimizing check_preempt_wakeup, the method that checks whether to preempt in the process wakeup scenario under the original passive-preemption-based scheduling policy.
In one implementation, for the periodic scheduling scenario, the shared data storage area uses a Map data structure to establish and store a mapping between a process data structure (struct tick) and a scheduling policy data structure (struct preempt). The process data structure struct tick includes the process identifier (pid) of the current process. The scheduling policy data structure (struct preempt) may include a custom-policy write state (state), a preemptible state (decide), and a new minimum scheduling time interval (granularity).
FIG. 4 is a flowchart illustrating a process for scheduling execution in a periodic scheduling scenario, according to an embodiment of the invention.
In the periodic scheduling scenario, when checking whether to perform preemption, step S230 may be executed according to the flow illustrated in fig. 4. Specifically, as shown in fig. 4, the corresponding latest scheduling policy data structure may be looked up in the shared data storage area based on the process identifier of the current process (i.e., the running process), and whether a custom scheduling policy exists may be determined from the found latest scheduling policy data structure. Here, it may be specifically determined according to the custom-policy write state in the latest scheduling policy data structure whether a custom scheduling policy exists, that is, whether the original scheduling policy needs to be changed.
If a custom scheduling policy exists, whether the current process can be preempted is determined according to the latest scheduling policy data structure. Here, this may specifically be determined from the preemption state (cancel) in the latest scheduling policy data structure.
If it is determined that the current process can be preempted, a scheduling flag is set to schedule the next process for execution. Here, by setting the scheduling flag for the next process, the current process can be preempted by the next process so that the next process executes preferentially. In one implementation, preemption of the current process by the next process may be implemented by resched_curr.
If no custom scheduling policy exists, the original scheduling policy is left unchanged, and a new minimum scheduling time interval is determined from the granularity field in the latest scheduling policy data structure. It is then judged whether the new minimum scheduling time interval is greater than 0; if it is less than or equal to 0, the process is scheduled based on the original minimum scheduling time interval. If the new minimum scheduling time interval is greater than 0, it is set (replacing the original minimum scheduling time interval) so that the process is scheduled according to the new interval. Note that before the new minimum scheduling time interval is set, the original minimum scheduling time interval needs to be saved.
Next, the theoretical running time and the actual running time of the current process are calculated, and it is judged whether the actual running time of the current process is greater than its theoretical running time. Here, the theoretical running time of a process is the time the process should run.
If the actual running time is greater than the theoretical running time (the current process has used up its time slice), the scheduling flag is set to schedule the next process for execution.
It should be noted that the next process to be scheduled for execution refers to the process with the highest priority in the run queue, that is, the process with the smallest virtual runtime (vruntime) in the periodically scheduled run queue, i.e., the leftmost node in the run queue. It will be appreciated that the smaller a process's vruntime, the farther left its position in the scheduler's red-black tree data structure; conversely, the larger the vruntime, the farther right the position. A smaller vruntime indicates that the process has occupied less CPU time so far and therefore has a higher priority.
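The "leftmost node" selection can be illustrated with a small user-space sketch. A linear scan stands in for the red-black tree walk here; the struct and function names are illustrative, not kernel code.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative run-queue entry: smaller vruntime means the process has
 * consumed less CPU time so far and thus has higher priority. */
struct fake_task {
    int32_t  pid;
    uint64_t vruntime;
};

/* Pick the "next" process: the one with the smallest vruntime, i.e. the
 * process that would sit at the leftmost node of the scheduler's red-black
 * tree. A linear scan replaces the O(1) cached-leftmost lookup. */
static const struct fake_task *pick_next(const struct fake_task *rq, size_t n)
{
    const struct fake_task *best = NULL;
    for (size_t i = 0; i < n; i++)
        if (best == NULL || rq[i].vruntime < best->vruntime)
            best = &rq[i];
    return best;
}
```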
In addition, if the actual running time of the current process is less than or equal to its theoretical running time, it is judged whether the actual running time is greater than the new minimum scheduling time interval. If not (the actual running time is less than or equal to the new minimum scheduling time interval), the scheduling flag is not set and no scheduling is performed.
If the actual running time is greater than the new minimum scheduling time interval, the difference between the virtual running time of the current process and that of the next process is computed. If this difference is greater than or equal to 0, it is further judged whether the difference is greater than the theoretical running time of the current process.
If the difference between the virtual running times of the current process and the next process is greater than the theoretical running time of the current process, the scheduling flag is set so as to schedule the next process for execution. Here, by setting the scheduling flag for the next process, the current process can be preempted by the next process so that the next process executes preferentially. In one implementation, preemption of the current process by the next process may be implemented by resched_curr.
Otherwise, if the difference between the virtual running times of the current process and the next process is less than 0, or is less than or equal to the theoretical running time of the current process, the scheduling flag is not set and no scheduling is performed.
Finally, the original minimum scheduling time interval is restored.
Thus, according to the method 200 of the present invention, after the application adds the additional scheduler to the kernel using the packet filter, a latest scheduling policy data structure can be custom-defined and constructed according to the current load of the computing device and written into the shared data storage area of the kernel. The additional scheduler can then schedule processes for execution according to the latest scheduling policy data structure suited to the current load, thereby optimizing check_preempt_tick, the method in the original passive-preemption scheduling policy that checks whether to preempt in the periodic scheduling scenario.
FIG. 5 shows a hardware architecture diagram of the computing device 100 according to one embodiment of the invention. As shown in FIG. 5, the computing device 100 may include an input device 90, a processor 91, an output device 92, a memory 93, and at least one communication bus 94. The communication bus 94 is used to implement communication connections between these elements. The memory 93 may include high-speed RAM and may also include non-volatile memory (NVM), such as at least one disk memory; various program instructions may be stored in the memory 93 for performing various processing functions and implementing the process scheduling method according to the embodiments of the present invention.
Alternatively, the processor 91 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 91 is coupled to the input device 90 and the output device 92 through a wired or wireless connection.
Alternatively, the input device 90 may include a variety of input devices, for example at least one of a user-facing user interface, a device-facing device interface, a programmable software interface, a camera, and a sensor. Optionally, the device-facing device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices. Optionally, the user-facing user interface may be, for example, user-facing control keys, a voice input device for receiving voice input, or a touch sensing device (e.g., a touch screen or touch pad with a touch sensing function) for receiving user touch input. Optionally, the programmable software interface may be, for example, an entry for the user to edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, a transceiver may include a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. An audio input device such as a microphone may receive voice data. The output device 92 may include a display, a speaker, or other output devices.
In one embodiment of the invention, computing device 100 includes one or more processors and one or more readable storage media storing program instructions. The program instructions, when configured to be executed by one or more processors, cause a computing device to perform a process scheduling method in an embodiment of the invention.
According to the process scheduling method 200 of the present invention, a user-mode application program can set a personalized scheduling policy for the current load. The application running in user space uses the packet filter to add the additional scheduler to the kernel, and can then custom-define and construct a latest scheduling policy data structure according to the current load of the computing device and write it into the shared data storage area of the kernel, so that the additional scheduler can schedule processes for execution according to the latest scheduling policy data structure best suited to the current load, thereby optimizing the existing passive-preemption scheduling policy's check of whether to preempt. Furthermore, the process scheduling method according to the present invention requires neither restarting the operating system nor shutting down the application.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the mobile terminal generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to execute the process scheduling method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may additionally be divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor with the necessary instructions for carrying out the method or the method elements thus forms a device for carrying out the method or the method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense with respect to the scope of the invention, as defined in the appended claims.

Claims (10)

1. A process scheduling method executed in a computing device, the computing device running an operating system, a kernel of the operating system including a packet filter and a shared data storage area, and an application program running in a user space above the operating system, the method comprising the steps of:
the application program adding an additional scheduler to the kernel using the packet filter;
custom-defining and constructing a latest scheduling policy data structure according to the current load of the computing device, and calling the additional scheduler to write the latest scheduling policy data structure into the shared data storage area; and
calling the additional scheduler to cyclically traverse the shared data storage area to acquire a latest scheduling policy data structure, and scheduling a process for execution according to the latest scheduling policy data structure.
2. The method of claim 1, wherein obtaining a latest scheduling policy data structure and scheduling an execution process according to the latest scheduling policy data structure comprises:
searching a corresponding latest scheduling policy data structure from the shared data storage area based on the process identifiers of the current process and the wake-up process, and determining whether a custom scheduling policy exists according to the latest scheduling policy data structure;
if a custom scheduling policy exists, determining whether the wake-up process can preempt the current process according to the latest scheduling policy data structure; and
if the current process can be preempted, setting a scheduling flag for the wake-up process so as to schedule the wake-up process for execution.
3. The method of claim 2, wherein determining whether there is a custom scheduling policy based on the latest scheduling policy data structure further comprises:
if no custom scheduling policy exists, determining a new minimum wake-up scheduling time interval according to the latest scheduling policy data structure;
judging whether the difference between the virtual running time of the current process and the virtual running time of the wake-up process is greater than the new minimum wake-up scheduling time interval; and
if the difference is greater than the new minimum wake-up scheduling time interval, setting a scheduling flag for the wake-up process so as to schedule the wake-up process for execution.
4. The method of any one of claims 1-3, wherein the shared data storage area employs a Map data structure to establish and store a mapping of process data structures to scheduling policy data structures;
wherein the process data structure comprises process identifiers of a current process and a wake-up process;
the scheduling policy data structure includes: a custom scheduling policy write state (state), a state of whether preemption is possible (cancel), and a new minimum wake-up scheduling time interval (granularity).
5. The method of claim 1, wherein obtaining a latest scheduling policy data structure and scheduling an execution process according to the latest scheduling policy data structure comprises:
searching a corresponding latest scheduling policy data structure from the shared data storage area based on the process identifier of the current process, and determining whether a custom scheduling policy exists according to the latest scheduling policy data structure;
if a custom scheduling policy exists, determining whether the current process can be preempted according to the latest scheduling policy data structure; and
if the current process can be preempted, a schedule flag is set to schedule the next process for execution.
6. The method of claim 5, wherein determining whether there is a custom scheduling policy based on the latest scheduling policy data structure further comprises:
if no custom scheduling policy exists, determining a new minimum scheduling time interval according to the latest scheduling policy data structure;
judging whether the actual running time of the current process is greater than the theoretical running time of the current process;
if it is greater than the theoretical run time, a scheduling flag is set to schedule execution of the next process.
7. The method of claim 6, wherein the step of determining whether the actual runtime of the current process is greater than the theoretical runtime of the current process further comprises:
if the actual running time of the current process is less than or equal to the theoretical running time of the current process, judging whether the actual running time of the current process is greater than the new minimum scheduling time interval;
if the actual running time is greater than the new minimum scheduling time interval, judging whether the difference between the virtual running time of the current process and the virtual running time of the next process is greater than the theoretical running time of the current process; and
if the difference is greater than the theoretical running time of the current process, setting a scheduling flag so as to schedule the next process for execution.
8. The method of any one of claims 5-7, wherein the shared data storage area employs a Map data structure to establish and store a mapping of process data structures to scheduling policy data structures;
wherein the process data structure comprises a process identification of a current process;
the scheduling policy data structure includes a custom scheduling policy write state (state), a state of whether preemption is possible (cancel), and a new minimum scheduling time interval (granularity).
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be adapted to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-8.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-8.
CN202210382316.6A 2022-04-12 2022-04-12 Process scheduling method and computing device Pending CN114691339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210382316.6A CN114691339A (en) 2022-04-12 2022-04-12 Process scheduling method and computing device


Publications (1)

Publication Number Publication Date
CN114691339A (en) 2022-07-01

Family

ID=82143882




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination