CN115543551A - Thread scheduling method and device and electronic equipment - Google Patents

Thread scheduling method and device and electronic equipment

Info

Publication number
CN115543551A
CN115543551A
Authority
CN
China
Prior art keywords
thread
target
system load
scheduling
weighting factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110735256.7A
Other languages
Chinese (zh)
Inventor
师荣堃
李宗峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110735256.7A priority Critical patent/CN115543551A/en
Publication of CN115543551A publication Critical patent/CN115543551A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The present application provides a thread scheduling method and apparatus and an electronic device. The method includes: obtaining a system load; and when the system load exceeds a preset threshold, adjusting the virtual running time of a target thread so that the target thread obtains preferential scheduling, where the target thread is a fair-scheduling thread. The method increases the probability that the threads of important tasks are scheduled, thereby improving thread scheduling efficiency, system performance, and the user experience.

Description

Thread scheduling method and device and electronic equipment
Technical Field
The present application relates to the technical field of electronic devices, and in particular, to a thread scheduling method and apparatus, and an electronic device.
Background
With the development of data analysis and electronic device technology, applications on electronic devices, especially mobile devices, have become richer and more powerful. The number of threads an application requires and the system resources consumed by compute-intensive tasks keep growing, and concurrent access to and contention for shared resources among threads intensify accordingly, so finer-grained system resource scheduling is required. If the threads of important tasks (such as threads executing tasks related to an interaction event) are not scheduled in time, the device is prone to stuttering, which degrades the user experience.
Currently, one approach dynamically influences the scheduler's core selection and frequency scaling by scaling the task load, helping the threads of important tasks migrate from a little-core run queue to a big-core run queue. Another approach adjusts task priority, for example setting a higher priority for the threads of important tasks so that the task scheduler schedules high-priority threads first.
The former scheme only affects which core a task runs on; even after a thread migrates to another run queue, there is still a high probability that it is not scheduled in time. In the latter scheme, if many tasks share the same priority, the threads of important tasks may still not be scheduled in time because of the Completely Fair Scheduler (CFS) mechanism itself. In either case the device can still stutter in use, affecting the user experience.
Disclosure of Invention
The present application provides a thread scheduling method, a thread scheduling apparatus, and an electronic device, which can increase the probability that the threads of important tasks are scheduled, thereby improving thread scheduling efficiency, system performance, and the user experience.
In a first aspect, a thread scheduling method is provided, including: obtaining a system load; and when the system load exceeds a preset threshold, adjusting the virtual running time of a target thread so that the target thread obtains preferential scheduling, where the target thread is a fair-scheduling thread.
In the embodiments of the present application, adjusting the virtual running time of the target thread affects the scheduling of fair-scheduling threads at its root and achieves preferential scheduling of the target thread directly, rather than only probabilistically, as the foregoing conventional schemes do by scaling the task load or adjusting the task priority. The method therefore increases the probability that the threads of important tasks are scheduled, and in particular addresses the timeliness problem of scheduling same-priority threads as the system load rises, reducing whole-device stutter, improving thread scheduling efficiency, improving system performance, and improving the user experience.
In addition, because the embodiments of the present application achieve preferential scheduling at its root by adjusting the virtual running time, they can also resolve the increasingly common problem of high-priority tasks that, probabilistically, still fail to be scheduled.
With reference to the first aspect, in a possible implementation manner, the adjusting the virtual running time of the target thread includes: determining a target weight factor corresponding to the target thread, where the target weight factor is greater than or equal to 0 and less than 1; and adjusting the virtual running time of the target thread according to the target weight factor to obtain an adjusted virtual running time, where the adjusted virtual running time is used for fair scheduling.
Because the target weight factor is greater than or equal to 0 and less than 1, adjusting the virtual running time of the target thread with it reduces the adjusted virtual running time. The smaller a thread's virtual running time, the higher the probability that it is scheduled. The embodiments of the present application therefore reduce the virtual running time of the target thread so that it obtains preferential scheduling.
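The effect of a smaller virtual running time can be illustrated with the CFS pick rule, under which the runnable thread with the smallest virtual running time is selected next. The following is a minimal sketch in plain Python, not kernel code; the thread names and values are hypothetical:

```python
# Sketch: CFS picks the runnable thread with the smallest virtual running
# time. Thread records and numbers are illustrative, not kernel structures.

def pick_next(run_queue):
    """Return the thread with the minimum virtual running time."""
    return min(run_queue, key=lambda t: t["vruntime"])

run_queue = [
    {"name": "background_worker", "vruntime": 1200},
    {"name": "ui_thread",         "vruntime": 1500},
]
assert pick_next(run_queue)["name"] == "background_worker"

# Scaling the UI thread's virtual running time by a weight factor < 1
# moves it to the front of the queue, so it is scheduled preferentially.
run_queue[1]["vruntime"] = int(run_queue[1]["vruntime"] * 0.5)  # factor 0.5
assert pick_next(run_queue)["name"] == "ui_thread"
```

The sketch only shows why reducing the virtual running time raises scheduling priority; the patent's actual adjustment is applied to the time accrued per scheduling period, as described below.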
With reference to the first aspect, in a possible implementation manner, the determining a target weight factor corresponding to the target thread includes: determining, according to the system load, a system load level corresponding to the system load from a plurality of preset system load levels; and determining the target weight factor from a plurality of preset weight factors according to that system load level, where the plurality of preset weight factors correspond to the plurality of preset system load levels.
Different system load levels can correspond to different weight factors, so a weight factor can be selected flexibly according to the current system load, enabling finer-grained and more accurate scheduling.
With reference to the first aspect, in a possible implementation manner, the plurality of preset system load levels include a first level and a second level, and the plurality of preset weight factors include a first weight factor and a second weight factor, the first weight factor corresponding to the first level and the second weight factor corresponding to the second level; the system load corresponding to the first level is greater than the system load corresponding to the second level, and the first weight factor is smaller than the second weight factor.
The larger the system load corresponding to a load level, the smaller the corresponding weight factor. Thus the heavier the system load, the smaller the target weight factor, the smaller the adjusted virtual running time obtained from it, and the higher the probability that the target thread is scheduled preferentially. In particular, when the target weight factor is 0, the target thread can be regarded as a super-priority thread: it is scheduled first, and the processor schedules other threads only after it finishes executing, so the target thread holds an absolutely preferential scheduling position on the processor.
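As an illustration of the level-to-factor mapping, the following sketch assumes two hypothetical load levels; the thresholds and factor values are made up for the example, since the patent does not fix any concrete numbers:

```python
# Sketch: map the sampled system load to a preset weight factor.
# Thresholds and factor values below are assumptions, not from the patent.
LOAD_LEVELS = [
    (0.9, 0.0),   # first level: load > 90%  -> factor 0 (super-priority)
    (0.7, 0.5),   # second level: load > 70% -> factor 0.5
]

def target_weight_factor(system_load):
    """Return the weight factor for the highest matching load level."""
    for threshold, factor in LOAD_LEVELS:
        if system_load > threshold:
            return factor
    return 1.0  # below all levels: no adjustment

assert target_weight_factor(0.95) == 0.0   # heaviest load, smallest factor
assert target_weight_factor(0.80) == 0.5
assert target_weight_factor(0.50) == 1.0
```

Note the invariant the patent requires: the heavier the load level, the smaller its factor.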
With reference to the first aspect, in a possible implementation manner, the determining a target weight factor corresponding to the target thread includes: determining, according to information about the target thread, a first group to which the target thread belongs from a plurality of preset groups; and determining the target weight factor from at least one weight factor corresponding to the first group.
Different groups can correspond to different weight factors, so weight factors can be selected flexibly according to the specific situation of each thread, and one thread or a set of threads can share one weight factor or a set of weight factors, enabling finer-grained and more accurate scheduling.
With reference to the first aspect, in a possible implementation manner, the plurality of preset groups include the first group and a second group, and the at least one weight factor corresponding to the second group includes a third weight factor, where the target weight factor is smaller than the third weight factor.
Different groups correspond to different weight factors. The threads in the first group are those that need preferential scheduling, so the weight factor corresponding to the first group is smaller than that corresponding to the second group. That is, a target thread belonging to the first group is scheduled preferentially with a higher probability than one belonging to the second group.
With reference to the first aspect, in one possible implementation manner, the first group includes threads related to a foreground application.
The threads in the first group are threads that need preferential scheduling and may include threads related to a foreground application, which often determine whether the interface stutters in user-interaction scenarios. When the weight factors corresponding to the first group are small, threads related to the foreground application are scheduled preferentially with high probability, which effectively relieves stutter on the user interaction interface, improves system performance, and improves the user experience.
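A per-group lookup might look like the following sketch; the group names, the `foreground` flag, and the factor values are all assumptions introduced for illustration:

```python
# Sketch: per-group weight factors. The patent only requires that the
# factor of the preferential (first) group be smaller than the others';
# the names and numbers here are illustrative.
GROUP_FACTORS = {
    "foreground": 0.2,  # first group: threads of the foreground application
    "default":    0.8,  # second group: everything else
}

def factor_for_thread(thread_info):
    """Pick the group from thread information, then look up its factor."""
    group = "foreground" if thread_info.get("foreground") else "default"
    return GROUP_FACTORS[group]

ui = {"name": "ui_thread", "foreground": True}
bg = {"name": "sync_thread", "foreground": False}
assert factor_for_thread(ui) < factor_for_thread(bg)
```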
With reference to the first aspect, in a possible implementation manner, the method further includes: and when the event corresponding to the target thread is completed, adjusting the target weight factor to 1.
After the event corresponding to the target thread completes, the target weight factor can be adjusted to 1 (for example, 1 can be the default value) so that it does not affect the next scheduling round. It should be understood that completing an event (for example, a click event) may require at least one thread; the virtual running time of one of those threads can be adjusted with a weight factor so that it is scheduled preferentially, and when the event completes or ends, the weight factor is adjusted back to 1.
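Scoping the weight factor to a single event could be sketched as follows; the class and method names are hypothetical:

```python
# Sketch: the weight factor applies only while an event (e.g. a click)
# is being handled; resetting it to the default 1 on completion removes
# the boost from later scheduling rounds. Names are illustrative.
class BoostedThread:
    def __init__(self, name):
        self.name = name
        self.weight_factor = 1.0  # default: no adjustment

    def begin_event(self, factor):
        self.weight_factor = factor  # boost for the duration of the event

    def end_event(self):
        self.weight_factor = 1.0     # event done: back to normal accrual

t = BoostedThread("ui_thread")
t.begin_event(0.2)
assert t.weight_factor == 0.2
t.end_event()
assert t.weight_factor == 1.0
```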
With reference to the first aspect, in a possible implementation manner, the adjusting the virtual running time of the target thread according to the target weight factor to obtain an adjusted virtual running time includes: multiplying the target weight factor by the time allocated to the target thread in the current scheduling period, and adding the product to the target thread's virtual running time at the end of the previous scheduling period to obtain the adjusted virtual running time.
The target weight factor thus scales the time allocated to the target thread in the current scheduling period, thereby affecting the target thread's virtual running time.
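The update rule described above — previous virtual running time plus the weight factor times the time allocated in the current period — can be written directly; the units (arbitrary time ticks) are illustrative:

```python
# Sketch of the update rule: the adjusted virtual running time is the
# virtual running time at the end of the previous scheduling period plus
# (weight factor x time allocated in the current period).
def adjusted_vruntime(prev_vruntime, allocated_time, weight_factor):
    return prev_vruntime + weight_factor * allocated_time

# With factor 1 the thread accrues its full allocated time...
assert adjusted_vruntime(1000, 200, 1.0) == 1200
# ...with factor 0.5 it accrues half, so it stays ahead in the queue...
assert adjusted_vruntime(1000, 200, 0.5) == 1100
# ...and with factor 0 its virtual running time does not grow at all
# (the "super-priority" case).
assert adjusted_vruntime(1000, 200, 0.0) == 1000
```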
With reference to the first aspect, in a possible implementation manner, before the determining a target weight factor corresponding to the target thread, the method further includes: determining that a threshold switch is in an open (enabled) state, where whether to open the threshold switch is determined according to whether the system load exceeds the preset threshold.
The open/closed state of the threshold switch indicates whether the weight factor takes effect in adjusting the virtual running time of a thread.
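The threshold switch can be sketched as a simple gate that opens when the sampled system load exceeds the preset threshold; the threshold value here is an assumption:

```python
# Sketch: a threshold switch gating whether weight factors take effect.
# The switch opens/closes as the sampled system load crosses the preset
# threshold. The threshold value is illustrative.
class ThresholdSwitch:
    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.open = False

    def update(self, system_load):
        self.open = system_load > self.threshold
        return self.open

sw = ThresholdSwitch()
assert sw.update(0.9) is True    # heavy load: factors take effect
assert sw.update(0.4) is False   # light load: virtual running time untouched
```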
With reference to the first aspect, in a possible implementation manner, the target thread is a thread used for executing a relevant task in an interaction event.
Threads that execute tasks related to an interaction event are the threads whose delayed scheduling causes stutter in user-interaction scenarios and thus affects the user experience. Adjusting their virtual running time so that they are scheduled preferentially effectively relieves the stutter problem and improves the user experience.
With reference to the first aspect, in a possible implementation manner, the target thread is any one of the following threads: a user interface thread, a rendering thread, a distribution thread of user input events, a detection thread of user input events, an interface composition thread, a system animation thread, or a system interface thread.
Executing a user interaction event may involve some system-level threads in addition to application-level threads. Therefore, the threads whose scheduling affects stutter in a user-interaction scenario can include both application-level threads and some system-level threads.
In a second aspect, a thread scheduling apparatus is provided, including: the acquisition module is used for acquiring system load; and the adjusting module is used for adjusting the virtual running time of a target thread when the system load exceeds a preset threshold value so as to enable the target thread to obtain priority scheduling, wherein the target thread belongs to a fair scheduling thread.
With reference to the second aspect, in a possible implementation manner, the adjusting module is specifically configured to: determining a target weight factor corresponding to the target thread, wherein the target weight factor is greater than or equal to 0 and less than 1; and adjusting the virtual running time of the target thread according to the target weight factor to obtain the adjusted virtual running time, wherein the adjusted virtual running time is used for fair scheduling.
With reference to the second aspect, in a possible implementation manner, the adjusting module is specifically configured to: determine, according to the system load, a system load level corresponding to the system load from a plurality of preset system load levels; and determine the target weight factor from a plurality of preset weight factors according to that system load level, where the plurality of preset weight factors correspond to the plurality of preset system load levels.
With reference to the second aspect, in a possible implementation manner, the preset multiple system load levels include a first level and a second level, the preset multiple weighting factors include a first weighting factor and a second weighting factor, the first weighting factor corresponds to the first level, and the second weighting factor corresponds to the second level; wherein the system load corresponding to the first level is greater than the system load corresponding to the second level, and the first weighting factor is smaller than the second weighting factor.
With reference to the second aspect, in a possible implementation manner, the adjusting module is specifically configured to: determining a first group to which the target thread belongs from a plurality of preset groups according to the information of the target thread; and determining the target weight factor from at least one weight factor according to the first grouping to which the target thread belongs, wherein the at least one weight factor corresponds to the first grouping.
With reference to the second aspect, in a possible implementation manner, the plurality of preset groups include the first group and a second group, and the at least one weight factor corresponding to the second group includes a third weight factor, where the target weight factor is smaller than the third weight factor.
With reference to the second aspect, in one possible implementation manner, the first group includes threads related to a foreground application.
With reference to the second aspect, in a possible implementation manner, the adjusting module is further configured to: and when the event corresponding to the target thread is completed, adjusting the target weight factor to 1.
With reference to the second aspect, in a possible implementation manner, the adjusting module is specifically configured to: and multiplying the target weight factor by the time allocated by the target thread in the current scheduling period, and adding the multiplication result to the virtual running time of the target thread after the last scheduling period is finished to obtain the adjusted virtual running time.
With reference to the second aspect, in a possible implementation manner, before the determining a target weight factor corresponding to the target thread, the adjusting module is further configured to: determining that a threshold switch is in an open state, wherein whether to open the threshold switch is determined according to whether the system load exceeds the preset threshold value.
With reference to the second aspect, in a possible implementation manner, the target thread is a thread for executing a relevant task in an interaction event.
With reference to the second aspect, in a possible implementation manner, the target thread is any one of the following threads: a user interface thread, a rendering thread, a distribution thread of user input events, a detection thread of user input events, an interface composition thread, a system animation thread, or a system interface thread.
For the beneficial effects of the apparatus in the second aspect, refer to the beneficial effects of the method in the first aspect; details are not repeated here.
In a third aspect, a thread scheduling apparatus is provided, disposed in an electronic device and having the function of implementing the method in the first aspect and any possible implementation manner of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the function.
In a fourth aspect, a thread scheduling apparatus is provided, disposed in an electronic device and including a processing unit and a scheduling unit, configured to implement some or all of the steps performed by the thread scheduling apparatus in the method according to the first aspect and any possible implementation manner of the first aspect.
Optionally, the thread scheduling device may also be referred to as a scheduler or other name.
In a fifth aspect, an electronic device is provided, which includes the thread scheduling apparatus in the second aspect and any possible implementation manner of the second aspect, or includes the thread scheduling apparatus in the third aspect or the fourth aspect.
In a sixth aspect, an electronic device is provided that includes one or more processors; one or more memories; the one or more memories are configured to store a computer program comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the method of the first aspect and any of the possible implementations of the first aspect.
In one possible design, the one or more memories are coupled with the one or more processors.
In a seventh aspect, a chip is provided, comprising at least one processor and interface circuitry; the interface circuit is configured to provide program instructions or data to the at least one processor, and the at least one processor is configured to execute the program instructions to implement the method in any one of the above-mentioned first aspect and possible implementation manners of the first aspect.
In an eighth aspect, a chip system is provided, where the chip system includes at least one processor, and when program instructions are executed in the at least one processor, the functions of the method in the first aspect and any possible implementation manner of the first aspect are implemented on an electronic device.
In one possible design, the system-on-chip further includes a memory; the memory is used for storing program instructions and data and is located inside the processor or outside the processor. The chip system may be formed by a chip, and may also include a chip and other discrete devices.
A ninth aspect provides a computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of the first aspect and any one of the possible implementation manners of the first aspect.
A tenth aspect provides a computer program product, which includes computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of the first aspect and any one of the possible implementation manners of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 2 is a software system schematic diagram of an electronic device according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an electronic system for software and hardware interaction according to an embodiment of the present disclosure.
Fig. 4 is a schematic flowchart of a thread scheduling method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a process of adjusting a virtual runtime in a thread scheduling method according to an embodiment of the present application.
Fig. 6-8 are schematic flow charts of a thread scheduling method provided in an embodiment of the present application.
Fig. 9 is a schematic diagram of a software and hardware interaction process in a thread scheduling method according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a thread scheduling apparatus according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The thread scheduling method provided in the embodiments of the present application may be applied to a physical machine. The physical machine may include a processing circuit integrating one or more central processing units (CPUs); the processing circuit may be integrated on a chip, and the chip may run various operating systems (OS), such as Linux, Windows, or UNIX, that is, a system on chip (SoC) is formed on the chip. The physical machine includes, but is not limited to, a mobile phone, a tablet computer, a wearable device (e.g., a smart watch, a smart band, smart glasses, or smart jewelry), an in-vehicle device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a television, a display, or another electronic device. Exemplary embodiments of the electronic device include, but are not limited to, devices carrying certain operating systems (named only in images in the original) or other operating systems, as well as other types of computer systems.
For convenience of understanding, the technical terms and background that may be involved in the embodiments of the present application are described below.
An event refers to a user operation on the touch screen, used to control the execution of threads or the control that some threads exert over other threads. Events include, for example, touch events, click events, and slide events. A basic event has three states: press (ACTION_DOWN), move (ACTION_MOVE), and lift (ACTION_UP). The operating system responds to an event through a series of threads.
A program refers to an ordered collection of instructions. Programs include operating system programs and application programs. A program by itself is a static entity and has no notion of running.
A task refers to an activity performed by software; simply put, it is an operation performed to achieve a certain purpose. A task is typically one run of a program, such as reading data into memory, closing a file, or opening a dialog box. A task may be a process or a thread. It may be, for example, a launch task or any task during the running of an application, such as a slide task: a launch task starts an application, and a slide task switches the application shown on the display interface.
A process refers to the execution of a program and may be understood as a program that is running in the system. In short, a process can be regarded as a stand-alone program with complete data and code spaces in memory, and the process has data and variables that belong to itself only. The different processes are independent of each other, for example, address space independent, resource independent, and each process runs in its dedicated and protected memory.
The process is a basic unit of resource allocation and can also be a basic unit of scheduling operation. For example, when a user runs a program, the system creates a process and allocates resources (e.g., memory space, disk space, input/output (I/O) devices, tables, etc.) for the process. The process is then placed in its ready queue and, if selected by the process scheduler, the system allocates CPU and other related resources to the process so that it is actually running. A process is a dynamic entity, having its own lifecycle, which reflects the overall dynamic process of a program running on a certain set of data.
A thread (thread) refers to a program that runs independently in a process, i.e., a thread exists in a process. In short, a thread is the basic unit of execution of a process, and all tasks of a process are executed in the thread. A process may include one or more threads, each sharing the address space (e.g., including code and global data or variables) and resources (e.g., memory resources, I/O resources, CPU resources, etc.) of the process, but each having its own stack and local variables. That is, the global variables of a process are shared by all threads, and the resources allocated to the process by the system are available to all threads.
A thread is the smallest unit in a process that performs operations, i.e., the basic unit in which processor scheduling is performed. If a process is understood to logically operate a task that a system completes, then the thread represents one of many possible subtasks to complete the task. Threads may be independently scheduled for execution on the processors, so that multiple threads of a process may be executing on different processors.
In summary, a program has at least one process, and a process corresponds to exactly one program. A process may have multiple threads but has at least one, and a thread belongs to exactly one process. A process is the basic unit of resource allocation (including CPU, memory, and input/output (I/O) resources): resources are allocated to the process, and the threads of the same process share them. A thread is the basic unit of processor scheduling, that is, what actually runs on a processor is a thread. Each thread has an entry at which execution starts, a sequential execution sequence, and an exit, but a thread cannot execute on its own; it must belong to a process or depend on a program. Multiple threads of the same process can execute concurrently.
A Central Processing Unit (CPU) refers to a processor including a plurality of processor cores (cores). There are differences in computing power of processor cores (also called CPU cores), and a core with a strong computing power may be called a big core, and a core with a weak computing power may be called a small core. For a multi-core CPU, scheduling is performed according to a certain scheduling rule, and all threads running on a processor core receive overall scheduling of the system. For example, when a multi-core CPU is scheduled, a CPU core needs to be allocated to a thread, so that the thread is added to a running queue of the allocated CPU core, and the thread is made to run on the allocated CPU core. Each processor core may act as an independent processing unit, and a processor core may only execute one thread at a time.
A time slice (timeslice) is the time allocated for a thread to run on the CPU; when one thread's time slice is exhausted, the CPU switches to another thread. Considering only a single CPU, the same CPU cannot run multiple tasks simultaneously, so processes or programs take turns running through time slices. Since time slices are short (typically on the order of 10-100 ms), the user cannot perceive the switching, and from a macroscopic point of view the individual processes or programs appear to run simultaneously. A time slice is typically assigned to each thread by the operating system kernel's scheduler.
Generally, the time slices allocated to the threads in a system are not all equal; typically, to obtain a faster response speed, a highly interactive thread is allocated a longer time slice than a weakly interactive thread.
A task scheduler (scheduler) schedules or allocates tasks to processor cores. In the embodiments of the present application, the task scheduler may schedule or allocate tasks in a multi-core processor system or a single-core processor, where the multi-core processor system may be a multi-core system or a multiprocessor system. In a multi-core system, all processor cores are located in one processor. In a multiprocessor system, each processor core may be located in a separate processor. The task scheduler may be implemented in the kernel of an operating system and may be referred to simply as a scheduler in some embodiments.
Tasks may be scheduled according to different scheduling policies (or scheduling algorithms); for example, the scheduling policies may include a completely fair scheduling (CFS) policy, a first in first out (FIFO) policy, a real-time scheduling (RTS) policy, a global task scheduling (GTS) policy, an energy-aware scheduling (EAS) policy, and so on. The thread scheduling method provided in the embodiments of the present application is mainly used with the CFS policy, and a task scheduler that executes the CFS policy may be referred to as a CFS scheduler. The following embodiments mainly describe the relevant contents of the CFS policy; other scheduling policies are not described in detail.
The goal of a completely fair scheduler (i.e., a CFS scheduler) is to ensure completely fair scheduling of each thread, i.e., fairness in the CPU execution time each thread obtains. During scheduling, the system kernel schedules based on thread priority, specifically based on the virtual runtime (vruntime) indicated by a virtual clock (virtual clock): the virtual clock of a high-priority thread runs more slowly, and the virtual clock of a low-priority thread runs faster. Although the virtual runtimes of high- and low-priority threads remain fair relative to each other, the actual execution time of the high-priority thread is longer than that of the low-priority thread. That is, to keep the virtual runtime of every thread the same, the CFS scheduler selects the thread with the smallest virtual runtime when choosing the next thread to run. Because its virtual clock runs slower than the real clock and its virtual runtime is therefore smaller, a high-priority thread is scheduled by the CFS scheduler sooner, so a higher-priority thread obtains more actual running time under a premise of absolute fairness. The virtual runtime of a thread is accumulated, from the thread's creation, over the virtual runtime of at least one scheduling cycle.
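The priority-weighted accumulation of virtual runtime described above can be sketched as follows. This is a minimal illustration, not actual kernel code: the function name is hypothetical, and the weight values mirror a few entries of the Linux kernel's nice-to-weight table.

```python
# Sketch of priority-weighted virtual runtime accumulation (illustrative
# helper, not the actual kernel implementation).
NICE_0_WEIGHT = 1024  # weight of a nice-0 (default priority) thread

# A few entries from the Linux nice-to-weight table: lower nice value
# means higher priority and a larger weight.
NICE_TO_WEIGHT = {-5: 3121, 0: 1024, 5: 335}

def advance_vruntime(vruntime, actual_delta_ns, nice):
    """Accumulate virtual runtime: for the same actual runtime, a
    high-priority (large-weight) thread's virtual clock advances less."""
    weight = NICE_TO_WEIGHT[nice]
    return vruntime + actual_delta_ns * NICE_0_WEIGHT // weight

# For the same 10 ms of actual CPU time, the high-priority thread's
# virtual clock advances the least, so it is picked again sooner.
high = advance_vruntime(0, 10_000_000, -5)   # smallest advance
normal = advance_vruntime(0, 10_000_000, 0)  # exactly 10 ms
low = advance_vruntime(0, 10_000_000, 5)     # largest advance
```

This shows why, under equal virtual runtimes, the high-priority thread accumulates more actual CPU time: its virtual clock simply ticks slower.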
The fair scheduling thread (i.e. CFS scheduling thread) refers to a thread scheduled by a CFS scheduler, and may generally include some threads created during the process running for executing related tasks in an interactive event, threads running for a long time but not related to basic tasks interacted with by a user, and threads related to background tasks with the lowest priority, and the like. The threads for executing the related tasks in the interactivity event may include a User Interface (UI) thread, a rendering (render) thread, a GL thread, a distribution thread of a user input event, a detection thread of a user input event, and the like, where the GL thread is a rendering thread of an open graphics library (open graphics library). It should be understood that references herein to a user refer to an interface user, i.e., a user interacting with a user interface, or a user using an electronic device.
Ready queue (runqueue): a queue composed of threads in the ready state. Each CPU in the system has a global ready queue, described by a struct rq structure. Each scheduling class also manages its own ready queue; for example, struct cfs_rq is the ready queue of the CFS scheduler, used to manage the scheduling entities (i.e., threads) in the ready state, and the scheduling entity with the smallest virtual runtime is then selected from the ready queue through the pick_next_task interface for scheduling.
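The selection of the smallest-vruntime entity from the ready queue can be illustrated with a simple priority queue. Note this is only an analogy: the real CFS implementation keeps entities in a red-black tree keyed by vruntime, and the class and method names here are illustrative.

```python
import heapq

class MiniCfsRunqueue:
    """Toy analogue of struct cfs_rq: ready threads ordered by vruntime."""
    def __init__(self):
        self._heap = []

    def enqueue(self, vruntime, tid):
        heapq.heappush(self._heap, (vruntime, tid))

    def pick_next_task(self):
        # Mirrors the intent of pick_next_task: the scheduling entity
        # with the smallest virtual runtime runs next.
        return heapq.heappop(self._heap)[1]

rq = MiniCfsRunqueue()
rq.enqueue(2048, "render")
rq.enqueue(512, "ui")
rq.enqueue(1024, "worker")
next_tid = rq.pick_next_task()  # "ui", the smallest vruntime
```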
A control group (cgroup) manages and controls the behavior of a process using system resources in a grouped manner. Specifically, the electronic device can group all the processes through cgroup, and perform resource allocation and control on the whole group. Each cgroup may or may not include one or more processes.
In one design, the processes of an application (or task) may be partitioned into different cgroups based on the foreground/background state of the application. Specifically, the processes of foreground applications are divided into foreground process groups (foreground groups) as much as possible, and more system resources are allocated to the processes in the foreground process groups; the processes of background applications are divided into background process groups (background groups) as much as possible, and fewer system resources are allocated to the processes in the background process groups. For example, the processes in the foreground process group may use all of the processor cores and may occupy up to 95% of processor utilization, while the processes in the background process group may use only one processor core and occupy at most 5% of processor utilization. Of course, processes may also be divided into top-level application groups (top-app groups), system background groups (system-background groups), or other cgroups.
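The foreground/background grouping described above can be sketched as a simple policy table. The group names and limits follow the text; the data structure itself is illustrative and is not a real cgroup interface.

```python
# Illustrative policy table for the grouping described above: foreground
# processes may use all cores and up to 95% CPU utilization; background
# processes are limited to one core and 5%. (Not an actual cgroup API.)
CGROUP_POLICY = {
    "foreground": {"allowed_cores": "all", "cpu_share_pct": 95},
    "background": {"allowed_cores": 1, "cpu_share_pct": 5},
}

def assign_group(is_foreground):
    """Place a process into a group based on its foreground/background state."""
    return "foreground" if is_foreground else "background"

group = assign_group(is_foreground=False)
share = CGROUP_POLICY[group]["cpu_share_pct"]  # 5
```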
A single cgroup may correspond to one or more policies; accordingly, a process executes the policies corresponding to the cgroup to which it belongs. Cgroup policies may include an algorithm policy, a core-selection policy, a frequency-modulation policy, a preemption policy, an I/O execution policy, a memory allocation policy, a network request policy, and the like.
It should be noted that the system resources involved in the implementation of the present application may include CPU resources, I/O resources, memory resources, and the like.
Fig. 1 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 1, the electronic device 100 includes a memory 180 and a processor 150. Memory 180 is used to store computer programs including operating system programs 182 and application programs 181, among others. The processor 150 is configured to read the computer program stored in the memory 180 and then execute a method defined by the computer program, for example, the processor 150 reads the operating system program 182 to run an operating system on the electronic device 100 and implement various functions of the operating system, or reads the one or more application programs 181 to run an application on the electronic device 100.
The processor 150 may include one or more processors (or processing units); for example, the processor 150 may include one or more central processing units (CPUs), graphics processing units (GPUs), application processors (APs), image signal processors (ISPs), modem processors, digital signal processors (DSPs), baseband processors, neural-network processing units (NPUs), and/or video codecs, and so on. When the processor 150 includes a plurality of processors, they may be integrated on the same chip or may be independent chips. A processor may include one or more processing cores.
A memory may also be provided in processor 150 for storing instructions and data. In some embodiments, the memory in the processor 150 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 150. If the processor 150 needs to reuse the instruction or data, the instruction or data can be directly called from the memory, so that repeated access is avoided, the waiting time of the processor 150 is reduced, and the efficiency of the system can be improved.
In addition, memory 180 may store other data 183 in addition to computer programs, where other data 183 may include data generated by operating system programs 182 or application programs 181 as they are executed, including system data (e.g., operating system configuration parameters) and user data.
The memory 180 generally includes both internal and external memory. The internal memory may be random access memory (RAM), read-only memory (ROM), cache, and the like. The external memory may be a hard disk, an optical disc, a universal serial bus (USB) disk, a floppy disk, or a tape drive. Computer programs, such as application programs, may be stored on the external memory, and the processor loads them from the external memory into the internal memory before processing; the operating system may also be stored in the internal memory. As such, the memory may store computer-executable program code, which includes instructions. The processor 150 implements various functional applications and data processing of the electronic device 100 by executing the instructions stored in the memory.
The operating system program 182 includes a computer program capable of implementing the thread scheduling method provided in the embodiment of the present application, so that after the processor 150 reads the operating system program 182 and runs the operating system, the operating system may have the function of the thread scheduling method provided in the embodiment of the present application.
The electronic device 100 may further include an input device 130 for receiving input numerical information, character information, or contact/non-contact touch and gesture operations, and for generating signal input related to user settings and function control of the electronic device 100. Specifically, in the embodiment of the present application, the input device 130 may include a touch panel 131. The touch panel 131, also referred to as a touch screen, may collect touch operations of a user on or near it (e.g., operations performed on or near the touch panel 131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 150, and can also receive and execute commands sent by the processor 150.
The touch panel 131 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to touch panel 131, input device 130 may include other input devices 132, and other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The electronic device 100 may further include a display device 140 for displaying information input by the user or information provided to the user, various menu interfaces of the electronic device 100, and the like, and in this embodiment, the display device is mainly used for displaying an application process in the foreground or a desktop of the electronic device. The display device 140 may include a display panel 141. Alternatively, the display panel 141 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), a flexible light-emitting diode (FLED), a quantum dot light-emitting diode (QLED), or the like. In some other embodiments, the touch panel 131 can be overlaid on the display panel 141 to form a touch display screen.
In addition to the above, the electronic device 100 may also include a power supply 190 for powering other modules and a camera 160 for taking pictures or video. The electronic device 100 may also include one or more sensors 120, such as acceleration sensors, light sensors, fingerprint sensors, touch sensors, and the like. The electronic device 100 may further include a Radio Frequency (RF) circuit 110 for performing network communication with a wireless network device, and a wireless fidelity (WiFi) module 170 for performing WiFi communication with other devices.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The thread scheduling method provided by the embodiment of the present application can be implemented in the operating system 182 shown in fig. 1. The operating system 182 may be any of several operating systems (the specific system names appear only as trademark images in the source document). For ease of understanding, the following description, with reference to fig. 2 and fig. 3, takes one such operating system as an example to introduce the implementation position of the method provided by the embodiment of the present application.
Fig. 2 shows a software system schematic diagram of an electronic device according to an embodiment of the present application. As shown in fig. 2, a software system 200 of an electronic device may employ a layered architecture. The layered architecture is to divide software into a plurality of layers, each layer comprises a large number of sub-modules or subsystems, and the layers communicate with each other through software interfaces.
In some embodiments, the software system 200 may be divided into four layers, which are, from top to bottom, an application (APP) layer 210, an application framework (framework) layer 220, a system runtime library layer (including C/C++ libraries and the Android runtime) 230, and a kernel layer (kernel) 240.
The application layer 210 may include a series of application packages. As shown in fig. 2, the application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and messaging. Applications mainly concern the user interface and are usually written in the Java language by calling interfaces of the application framework layer 220.
An application program in the electronic device may be in a running state or a non-running state. A running application can be classified as a foreground application or a background application according to where it runs. A foreground application runs in the foreground and is displayed on the display interface of the electronic device; a background application runs in the background and is not displayed on the display interface of the electronic device.
The application framework layer 220 is a series of services and systems hidden behind each application, and is used to provide an Application Programming Interface (API) and a programming framework for the application of the application layer 210. The application framework layer 220 includes a number of predefined functions. As shown in FIG. 2, the application framework layer 220 may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and an activity manager, among others.
A window manager (window manager) is used to manage the window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
Content providers (content providers) are used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc. Content providers allow publishing and sharing of data between applications.
View systems (view systems) are used to build applications, and may include, for example, lists (lists), grids (grids), text boxes (text boxes), buttons (buttons), and embeddable world wide web browsers, among others. The display interface of the electronic device may be composed of one or more views, for example, a display interface including a short message notification icon, a view displaying text, and a view displaying pictures.
A telephone manager (telephony manager) is used to provide the communication functionality of the electronic device 100. Such as management of call status (including on, off, etc.).
A resource manager (resource manager) provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to an application.
The notification manager (notification manager) allows the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
An activity manager (activity manager) is used to manage all aspects of the application lifecycle and activity stack and provides the usual navigation fallback functionality. Information of activities (activity) running in the system, such as processes, applications, services, tasks, can be obtained through the activity manager. For example, the activity manager may obtain global memory usage information, count memory information in a process, obtain running process information (e.g., obtain an activity running at the front end, determine whether an application is running at the front end), and the like.
The system runtime library (libraries) layer 230 is a collection of libraries underlying the application framework layer 220. The system runtime library layer 230 may include two parts: C/C++ native libraries and the Android runtime.
The Android runtime, i.e., the Android runtime environment, comprises a core library and a virtual machine (such as the Dalvik virtual machine). The Android runtime is responsible for scheduling and managing the Android system. The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android. The application layer 210 and the application framework layer 220 run in the virtual machine. The virtual machine executes the Java files of the application layer 210 and the application framework layer 220 as binary files. The virtual machine performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The C/C++ native libraries (also referred to as system libraries) support the application framework and are an important link connecting the application framework layer 220 and the kernel layer 240. They may include a plurality of functional modules, for example: a surface manager (surface manager), a media framework (media framework), the standard C library (libc), a 2D engine, a 3D engine, and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media framework supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media framework can support a plurality of audio and video coding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG and the like.
The standard C library, also called the system C library or C function library, provides the macros, type definitions, string operation functions, mathematical calculation functions, input/output functions, etc. used in the C language.
The 2D engine is a graphics engine for two-dimensional drawing. The 2D engine includes, but is not limited to, a vector graphics engine (e.g., the Skia Graphics Library (SGL)), a browser engine (e.g., WebKit), a relational database engine (e.g., SQLite), and the like.
The 3D engine is a graphic engine for three-dimensional drawing and is used for realizing three-dimensional graphic drawing, image rendering, composition, layer processing and the like. The 3D engine includes, but is not limited to, an open graphics library (OpenGL ES) for embedded systems.
The kernel layer 240 is a layer between hardware and software for providing essential functions of the operating system such as file management, memory management, process management, network protocol stack, device management (e.g., camera, keyboard, display, etc.), and the like. The kernel layer 240 includes at least a display driver, a camera driver, an audio driver, a sensor driver, a bluetooth driver, etc.
The application framework layer 220, the system runtime layer 230, and the kernel layer 240 may constitute an operating system layer of the electronic device. The operating system layer illustrated in FIG. 2 may be considered a specific implementation of the operating system 182 in FIG. 1, and the application layer 210 in FIG. 2 may be considered a specific implementation of the application programs 181 in FIG. 1. The thread scheduling method provided by the embodiment of the present application may be implemented in the operating system layer in fig. 2.
Fig. 1 and fig. 2 respectively describe a hardware architecture and a software system involved in the embodiment of the present application, and the following further describes from the perspective of software and hardware interaction with reference to fig. 3, and briefly introduces a scheduler involved in the method provided by the embodiment of the present application.
The virtual memory can be divided by an operating system into a kernel space (kernel space) where kernel code runs and a user space (user space) where user program code runs. When a process executes in kernel code, the process may be said to be in a kernel run state (kernel state for short). When a process executes its own code, the process may be said to be in a user running state (user state for short). Specifically, kernel mode and user mode are two levels of operating system execution.
As shown in fig. 3, the kernel space 320 is based on a kernel 321 (also called the operating system kernel, equivalent to the kernel layer 240 shown in fig. 2), and the user space 310 is composed of the system runtime library layer 230 and the application framework layer 220 shown in fig. 2, including applications 311, library functions 312, and a shell 313. The kernel space 320 and the user space 310 of the system are connected through system calls (syscall) 322. The user space 310 can be written in C++ and Java code, with the Java layer and the native layer (i.e., the C/C++ layer) of the user space communicating through the Java Native Interface (JNI) technology, thereby connecting the whole system.
The kernel 321 may control the hardware resources of the computer and provide a standard interface for the hardware system 330 (e.g., including processors, memory, disks, printers, etc.) through a Hardware Abstraction Layer (HAL) 323. The hardware abstraction layer 323 is a hardware interface layer abstracted from a specific hardware platform, and is responsible for realizing the functions and control of the specific hardware platform and providing a uniform API interface for other software modules. Specifically, the hardware abstraction layer 323 can abstract the commonality of hardware operation and control, hide the hardware interface details of a specific platform, and provide a unified virtual hardware platform and control interface for upper-layer software, so as to realize isolation of other software modules from bottom-layer hardware, thereby facilitating the transplantation of the system on a new hardware platform.
The kernel 321 is a first layer software extension based on the hardware system 330, which is used to provide the most basic functions of the operating system and is the basis for the operation of the operating system. The kernel 321 is generally responsible for process scheduling and management, file system management, memory management, device driver management, network system management, and the like. Thus, the kernel 321 can be divided into a plurality of modules according to the implemented functions, wherein the module responsible for process scheduling can be referred to as a scheduler (i.e., the aforementioned task scheduler).
With the development of data analysis technology and electronic device technology, the functions of applications on electronic devices, particularly mobile devices, have become richer and more powerful; the number of threads required by applications and the system resources consumed by computation-intensive tasks have further increased, and concurrent access to and contention for shared resources among threads have also increased, so more fine-grained scheduling of system resources is required. If the threads of important tasks (such as threads executing related tasks in an interaction event) are not scheduled in time, the device is prone to stuttering, which affects the user experience.
Several schemes currently exist to implement preferential scheduling of tasks (or threads). One is to dynamically adjust various behaviors of the task scheduler by scaling the task load, such as flexible core selection and frequency tuning, so that tasks are added into different running queues (i.e., the aforementioned ready queues); this can help the threads of important tasks migrate from a little-core running queue to a big-core running queue. Another is to adjust the priority of the task, for example, setting the priority of an important task's thread higher so that the task scheduler schedules the higher-priority thread preferentially.
However, the former scheme only applies weighting to the task load and thus only affects core selection for the task; even after migrating to another running queue, the thread still has a high probability of not being scheduled in time. Furthermore, raising the CPU frequency is limited by power consumption: short-term frequency boosts of CPU resources should be applied to threads processing core important tasks while reducing power consumption as much as possible, rather than arbitrarily increasing the operating frequency of the CPU. In the latter scheme, after a task's priority is raised so that the important task's thread joins the running queue of a certain CPU, if that CPU is processing many tasks of the same priority, especially when the overall system load is heavy, the important task's thread may still not be scheduled in time because its virtual runtime is too large. Moreover, as the system task load grows, more and more tasks are given high priority, such problems become increasingly apparent, and the approach may ultimately fail, so truly effective priority scheduling of tasks cannot be achieved.
Therefore, it is necessary to provide a thread scheduling method to improve the probability of scheduling the threads of important tasks, thereby improving task scheduling efficiency and system performance.
Fig. 4 shows a schematic flowchart of a thread scheduling method provided in an embodiment of the present application. On a hardware level, the method 400 shown in fig. 4 may be performed by an electronic device, such as the electronic device 100 shown in fig. 1, and in particular may be performed by the processor 150 in the electronic device 100. From a software level, the method 400 may be implemented in an operating system of an electronic device, such as the software system 200 shown in fig. 2, and in particular may be implemented in operating system layers, which may include an application framework layer 220, a system runtime layer 230, and a kernel layer 240. That is, the method 400 may be implemented during execution of an operating system of the electronic device by a processor of the electronic device. The method 400 includes step S410 and step S420.
In step S410, a system load is acquired.
In the embodiment of the present application, the system load is used to reflect the busy degree of the system, or reflect the magnitude degree of the system resource pressure.
In some embodiments, the system load may be quantified or evaluated by the sum of the number of threads (or processes) currently being executed by the CPU and waiting to be executed by the CPU (i.e., waiting to be allocated CPU time).
As one example, the system load may be represented by the sum of the loads of all CPUs in the system.
By way of example and not limitation, for a single-core CPU: if the system load is 0, no thread is running or waiting for the CPU, i.e., the CPU is completely idle; at this point, if a thread needs to execute, it can be allocated CPU time directly. If the system load is 0.5, no threads are waiting, but the CPU is processing the previous thread at 50% capacity; at this point, a thread that needs to execute can be allocated CPU time immediately without having to wait. If the system load is 1, no thread is waiting in the run queue, but the CPU is processing the previous thread at 100% capacity; at this point, a thread that needs to execute must wait until the previous thread completes or the previous thread's time slice is exhausted. If the system load is 1.5, the CPU is processing at 100% capacity and threads in the run queue are waiting to be executed by the CPU; threads in the run queue must queue to be processed.
For a multi-core system or a multi-processor system, the number of processors the system has and the number of cores each processor has need to be considered when considering the system load. The system load of a multi-core system or a multi-processor system should be the sum of the loads of all cores of all CPUs. For convenience of understanding, reference to "CPU" in the following embodiments of the present application may be understood as the smallest processing unit in a computer device, i.e., a processor core, or simply a core.
As another example, the system load may be represented by the system average load.
Here, the system average load refers to the average number of threads (or processes) that are in the runnable state or the uninterruptible state per unit time, that is, the average number of active threads (or processes). A thread in the runnable state is one that is using a CPU or waiting for a CPU. A thread in the uninterruptible state is one that is in a critical kernel-mode flow and cannot be interrupted, such as a thread waiting for an I/O response from a hardware device. Specifically, the system average load may be (the sum of the loads of all CPUs / the number of CPUs).
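As a brief illustration of this formula (the function and the sample numbers below are hypothetical, not part of the embodiment):

```python
def system_average_load(per_cpu_loads):
    """Average load across all CPU cores: sum of per-core loads / core count.

    `per_cpu_loads` holds one load value per core; the values used below
    are illustrative, not measurements.
    """
    return sum(per_cpu_loads) / len(per_cpu_loads)

# A 4-core system where one core has a running thread plus a waiting
# thread (load 2.0) and the other three each run one thread (load 1.0):
avg = system_average_load([2.0, 1.0, 1.0, 1.0])  # 5.0 / 4 = 1.25
```

An average load above 1 thus indicates that, on average, at least one active thread per core cannot run immediately.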
In other embodiments, the system load may be quantified or estimated by the amount of system resource pressure.
In the embodiment of the application, the system resources include CPU resources, memory resources, and I/O resources. Here, the system resource pressure can be detected by a resource measurement tool, for example by Pressure Stall Information (PSI), which evaluates the magnitude of the system resource pressure in real time and presents it as the proportion of time that tasks are stalled waiting for a resource. Generally, the degree of contention for a system resource can be inferred from the system resource pressure; for example, when the system resource pressure is large, it can be indirectly concluded that the resource is contended.
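For illustration, on Linux PSI exposes per-resource pressure files (such as /proc/pressure/memory) whose lines take the form `some avg10=... avg60=... avg300=... total=...`, where the `avg*` fields give the percentage of time tasks were stalled over 10-, 60-, and 300-second windows. A minimal parsing sketch (the sample line and its numbers are made up):

```python
def parse_psi_line(line):
    """Parse one line of a PSI pressure file into (kind, metrics).

    `kind` is "some" or "full"; metrics maps avg10/avg60/avg300/total to
    floats. This sketches the textual format only, not a kernel API binding.
    """
    kind, *fields = line.split()
    metrics = {}
    for field in fields:
        key, value = field.split("=")
        metrics[key] = float(value)
    return kind, metrics

kind, m = parse_psi_line("some avg10=12.50 avg60=8.00 avg300=3.25 total=123456")
# m["avg10"] == 12.5 means tasks were stalled 12.5% of the last 10 seconds.
```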
Step S410 may be implemented in the bottom layer (i.e., kernel layer) of the operating system.
In step S420, when the system load exceeds the preset threshold, the virtual runtime of the target thread is adjusted, so that the target thread is scheduled preferentially.
When the system load exceeds a preset threshold, the system may be considered busy, or the system resource pressure may be considered large. In the embodiment of the present application, the preset threshold may be designed differently according to the index used to quantify the system load. For example, if the system load is represented by the sum of the loads of all CPUs in the system, the preset threshold may be designed according to the number of CPUs in the system, such as 70%, 80%, 90%, 100%, or 120% of the number of CPUs. As another example, if the system load is represented by the system average load, the preset threshold may be set to 0.7, 0.8, 0.9, 1, 1.2, or 1.5, etc. As another example, if the system load is represented by the system resource pressure, the preset threshold may be designed as a blocked-time occupancy, such as 20%, 40%, or 60%.
It should be noted that the system load represented by PSI can be considered as a comprehensive evaluation of CPU, I/O and memory resources, for example, a percentage of blocking time within a 10-second time period is selected to represent the system load.
In the embodiment of the application, the target thread belongs to the fair scheduling class and is scheduled based on its virtual running time. The target thread is a thread that affects the stutter of a user interaction scene, or a core thread that affects the user experience, for example, a thread executing tasks related to an interaction event, a thread of rendering-related tasks, and the like. It will be appreciated that whether such a thread runs smoothly determines whether user-perceptible stutter appears in the interaction interface between the user and the system. In practice, in addition to application-level threads, some system-level threads may be involved in performing tasks in the course of handling user interaction events. Thus, threads that affect the stutter of a user interaction scene may include application-level threads (such as UI threads, rendering threads, distribution threads, and detection threads) as well as some system-level threads (such as interface composition threads, system animation threads, and system UI threads).
By way of example and not limitation, the target thread may be any of the following: a user interface (UI) thread, a rendering thread, a distribution thread of user input events, a detection thread of user input events, an interface composition (SurfaceFlinger) thread, a system animation thread, or a system interface (system UI) thread, etc.
In step S420, the target thread may be scheduled preferentially by adjusting the virtual runtime of the target thread. The smaller the virtual run time, the higher the probability that it will be scheduled. Therefore, the virtual running time of the target thread can be reduced to obtain preferential scheduling in the embodiment of the application. In implementation, various methods may be utilized to reduce the virtual runtime of the target thread.
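A minimal sketch of this selection rule (thread names and numbers are illustrative): a fair scheduler picks, among the runnable threads, the one with the smallest virtual running time.

```python
def pick_next(runnable):
    """Pick the runnable thread with the smallest vruntime.

    `runnable` is a list of (name, vruntime) pairs. In the Linux CFS this
    selection uses a red-black tree; a plain min() suffices for a sketch.
    """
    return min(runnable, key=lambda thread: thread[1])

# Thread B has the smallest vruntime, so it runs next; reducing a target
# thread's vruntime therefore moves it toward the front of the queue.
chosen = pick_next([("A", 120.0), ("B", 95.5), ("C", 130.2)])
```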
In one possible implementation, the adjusted virtual runtime can be obtained by subtracting a value directly from the real virtual runtime of the target thread. It will be appreciated that the adjusted virtual run time is used for fair scheduling, which is less than the real virtual run time of the target thread.
In another possible implementation, the virtual run time of the target thread may be directly set to a value, such as 0, or the same value as the minimum virtual run time, or a value less than the minimum virtual run time, so that absolute priority scheduling may be implemented.
In yet another possible implementation, the virtual run time of the target thread may be adjusted according to a weighting factor. For example, a target weight factor corresponding to the target thread may be determined, and then the virtual running time of the target thread may be adjusted according to the target weight factor, so as to obtain the adjusted virtual running time. Here, the target weight factor is greater than or equal to 0 and less than 1. It will be appreciated that the adjusted virtual run time is used for fair scheduling.
The following illustrates how the virtual run time of the target thread is adjusted according to the target weight factor.
For example, the target weight factor may be multiplied by the time allocated to the target thread in the current scheduling cycle (also referred to as the virtual runtime increment), and the product may be added to the virtual running time of the target thread at the end of the last scheduling cycle, so as to obtain the adjusted virtual running time. In other words, the target weight factor acts on the time allocated to the target thread in the current scheduling cycle, reducing it and thereby reducing the virtual running time. This can be expressed simply by the following formula (1):
vruntime (adjusted) = vruntime (last scheduling cycle) + vruntime increment × target weight factor.
The vruntime increment is the actual running time scaled by the thread's weight relative to the weight of NICE value 0, that is, virtual running time = actual running time of one scheduling interval × (NICE_0_LOAD / weight). NICE_0_LOAD is the weight corresponding to a NICE value of 0; the actual running time and the virtual running time of a thread whose NICE value is 0 are the same.
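Formula (1) and the vruntime increment just defined can be sketched as follows (NICE_0_LOAD = 1024 matches the Linux kernel's weight for a NICE value of 0; the runtimes and the 40% factor are illustrative):

```python
NICE_0_LOAD = 1024  # weight of a NICE-0 thread in the Linux kernel

def vruntime_increment(actual_runtime, weight):
    """Virtual runtime earned in one scheduling interval:
    actual running time scaled by NICE_0_LOAD / weight."""
    return actual_runtime * NICE_0_LOAD / weight

def adjusted_vruntime(prev_vruntime, actual_runtime, weight, weight_factor):
    """Formula (1): the target weight factor (0 <= factor < 1) shrinks only
    the current cycle's increment, not the already accumulated vruntime."""
    return prev_vruntime + vruntime_increment(actual_runtime, weight) * weight_factor

# A NICE-0 target thread ran 6 ms this cycle; a 40% factor records only
# 2.4 ms of virtual runtime, so the thread is picked again sooner.
v = adjusted_vruntime(100.0, 6.0, NICE_0_LOAD, 0.40)  # 100.0 + 6.0 * 0.40
```

With a factor of 0, the thread's vruntime stops growing entirely, which corresponds to the absolute-priority case described above.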
It should be noted that the weight involved in calculating the vruntime increment corresponds one-to-one to the priority of the thread, indicating how much CPU time the thread should receive; specifically, it converts between actual running time and virtual running time. The weight factor according to the embodiment of the present application, by contrast, is used to adjust the size of the virtual running time. The two concepts are therefore different and should be distinguished here. In some embodiments, the weight factor may also be referred to as a correction factor for clarity of description.
For ease of understanding, the following is described in greater detail in connection with the example of fig. 5. As shown in fig. 5, thread A and thread B may both be threads waiting for execution by the CPU, and the virtual running time of each is the sum of the times allocated in each scheduling cycle (i.e., the virtual runtime increments) since the thread was created. As can be seen from the figure, the virtual running time of thread A after the end of the last scheduling cycle is the sum of the increments represented by the boxes numbered 1, 3, 5 and 7, and the virtual running time of thread B after the end of the last scheduling cycle is the sum of the increments represented by the boxes numbered 2, 4 and 6. Box 9 represents the time allocated to thread A during the current scheduling cycle, and box 8 represents the time allocated to thread B during the current scheduling cycle. The target weight factor may therefore be applied to the increment represented by box 9 (or box 8) to affect the virtual running time of thread A (or thread B).
It should be noted that the "last scheduling cycle" and the "current scheduling cycle" in fig. 5 each refer to the scheduling cycles of thread A or thread B themselves, not to those of other threads. In other words, the current scheduling cycle of thread A and the current scheduling cycle of thread B may not be the same scheduling cycle.
Following the idea provided by the embodiment of the application, other ways of adjusting the virtual running time can be derived, as long as the virtual running time of the target thread still increases across scheduling cycles while its increment in the current scheduling cycle is reduced, so that the target thread is scheduled preferentially.
The virtual running time of the target thread can thus be adjusted according to the target weight factor; in the embodiment of the present application, the target weight factor can be determined in various ways, which are described below by example.
As one example, a target weight factor may be determined based on system load. Specifically, according to the system load obtained in step S410, a system load level corresponding to the system load may be determined from a plurality of preset system load levels; and then determining a target weight factor from a plurality of preset weight factors according to the system load grade corresponding to the system load. The preset weighting factors correspond to preset system load levels.
That is, a plurality of system load levels may be preset, and each level may correspond to a range of values of system load. Thus, after the system load is obtained, the system load level corresponding to the system load can be determined. And, the preset plurality of system load levels correspond to a preset plurality of weighting factors, wherein each level of the plurality of system load levels may correspond to at least one weighting factor of the plurality of weighting factors. Thus, after determining the system load level corresponding to the system load, at least one weighting factor corresponding to the system load level may be determined, and then a target weighting factor may be determined from the at least one weighting factor corresponding to the system load level.
For example, 6 system load levels may be preset, namely level 1 through level 6, each corresponding to a certain range of system loads, where level 1 corresponds to the largest system load and level 6 to the smallest. The 6 system load levels correspond to weight factors of 0%, 20%, 40%, 60%, 80%, and 100%, respectively. After the system load is obtained, the system load level corresponding to it may be determined according to the size of the system load and the load range of each level; for example, the system load corresponds to level 3. Since the weight factor corresponding to level 3 is 40%, the target weight factor is 40%.
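The level-to-factor lookup in this example can be sketched as a table of lower bounds. The bounds on the system average load below are assumptions for illustration (the text specifies the six levels and their factors but not the load ranges):

```python
# (lower bound on system average load, weight factor); level 1 first.
# The load bounds are assumed; the factors follow the example above.
LEVELS = [
    (1.5, 0.0),  # level 1: heaviest load, absolute priority for the target
    (1.2, 0.2),  # level 2
    (1.0, 0.4),  # level 3
    (0.9, 0.6),  # level 4
    (0.8, 0.8),  # level 5
    (0.0, 1.0),  # level 6: lightest load, no adjustment
]

def target_weight_factor(avg_load):
    """Return the weight factor of the first (heaviest) level whose
    lower bound the load reaches."""
    for lower_bound, factor in LEVELS:
        if avg_load >= lower_bound:
            return factor
    return 1.0

factor = target_weight_factor(1.05)  # falls in level 3, so 40%
```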
In the above example, each level corresponds to a weight factor. If a certain level or each level corresponds to a plurality of weighting factors, after determining the system load level corresponding to the system load, one weighting factor can be selected from the plurality of weighting factors corresponding to the system load level as the target weighting factor. For example, one of the multiple weighting factors may be randomly selected as the target weighting factor, or one of the multiple weighting factors may be selected as the target weighting factor according to other conditions, for example, attribute information of the target thread (for example, priority of the target thread, a group to which the target thread belongs, and the like), which is not limited in this embodiment of the present application.
In the embodiment of the application, different system load grades can correspond to different weight factors, so that the weight factors can be flexibly selected according to the condition of the system load, and more refined and accurate scheduling is realized.
In some embodiments, the predetermined plurality of system load levels may include a first level and a second level, and the predetermined plurality of weighting factors includes a first weighting factor and a second weighting factor, the first weighting factor corresponding to the first level and the second weighting factor corresponding to the second level. The system load corresponding to the first level is greater than the system load corresponding to the second level, and the first weight factor is smaller than the second weight factor.
In other words, the larger the system load corresponding to a system load level, the smaller the weight factor corresponding to that level. Still taking the above-mentioned preset 6 system load levels as an example, the target weight factor determined when the system load belongs to level 1 is smaller than the target weight factor determined when the system load belongs to level 4. Thus, the larger the system load, the smaller the target weight factor, the smaller the adjusted virtual running time obtained from it, and the higher the probability that the target thread is scheduled preferentially. In particular, when the target weight factor is determined to be 0%, the target thread can be considered a super-priority thread: it is scheduled preferentially, and other threads are scheduled only after the CPU finishes executing it, so that the target thread occupies an absolutely preferential scheduling position on the CPU.
It should be noted that, in the case that the system load does not exceed the preset threshold, scheduling may be performed directly based on the real virtual running time of the target thread, which may be obtained by the following formula (2):
vruntime (current scheduling cycle) = vruntime (last scheduling cycle) + vruntime increment.
Alternatively, the steps of determining a system load level based on the system load and then determining the weight factor based on that level may still be performed. In this case, the weight factor corresponding to the system load level to which the system load belongs may be 100%. The weight factor applied in formula (1) then does not affect thread scheduling, because the adjusted virtual running time so determined is the same as the virtual running time obtained from formula (2).
That is, in the case that the system load does not exceed the preset threshold, the virtual runtime of the target thread may not be adjusted by using the weighting factor, or the target weighting factor may be set to 100% without affecting the virtual runtime of the target thread.
In some embodiments, the preset threshold and the preset plurality of weight factors may be set through detailed load analysis: offline algorithm training is performed on different system loads exceeding the preset threshold together with normalized weight factors, so as to obtain the preset system load levels and the weight factor corresponding to each level. For example, different background loads may be set manually, and after loading, different weight factors may be set. A fluency performance test is then carried out to obtain a memory PSI value. In this way, a series of test results, loads, and weight factor values can be obtained, and the best combination of preset threshold and weight factor settings can be obtained through offline algorithm training.
As another example, the target weight factor may be determined from a grouping of threads. Specifically, a first group to which the target thread belongs may be determined from a plurality of preset groups according to information of the target thread; a target weighting factor is then determined from at least one weighting factor based on the first group to which the target thread belongs, where the at least one weighting factor corresponds to the first group.
That is, a plurality of groups may be preset, each including certain threads. Thus, the group to which the target thread belongs can be determined according to the information of the target thread. Moreover, each of the preset plurality of groups corresponds to at least one weight factor. Thus, after the group to which the target thread belongs is determined, the at least one weight factor corresponding to that group can be determined, and the target weight factor is then determined from it.
Here, the information of the target thread may include foreground and background information of the target thread (e.g., whether the thread is a thread of a foreground application or a thread of a background application), priority information of the target thread, identification information of the target thread, grouping information of the target thread, name information of the target thread, and the like. The information of the target thread is used to indicate the packet to which the target thread belongs.
For example, 3 groups may be preset, i.e., group 1, group 2, and group 3, each corresponding to at least one weight factor. Illustratively, the 3 groups correspond to weight factors of 20%, 45%, and 70%, respectively. The group to which the target thread belongs may be determined based on the information of the target thread; for example, the target thread belongs to group 1. Since the weight factor corresponding to group 1 is 20%, the target weight factor is 20%.
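This group-to-factor lookup can be sketched as a simple mapping (the group names and the 100% fallback for ungrouped threads are assumptions for illustration):

```python
# Weight factor per thread group, following the example above; the
# fallback of 1.0 (no adjustment) for unknown groups is an assumption.
GROUP_FACTORS = {"group1": 0.20, "group2": 0.45, "group3": 0.70}

def factor_for_group(group_name):
    """Look up the weight factor for the group a target thread belongs to."""
    return GROUP_FACTORS.get(group_name, 1.0)

factor = factor_for_group("group1")  # target thread in group 1 -> 20%
```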
In the above example, each packet corresponds to a weight factor. If a plurality of weighting factors are associated with a group or each group, a weighting factor may be selected from the plurality of weighting factors associated with the group to which the target thread belongs as the target weighting factor. For example, one of the multiple weight factors may be randomly selected as the target weight factor, or one of the multiple weight factors may be selected as the target weight factor according to other conditions, for example, attribute information of the target thread (for example, a priority of the target thread), a system load size, or a system load level corresponding to the system load, which is not limited in this embodiment of the present application.
In the embodiment of the application, different groups can correspond to different weight factors, so that the weight factors can be flexibly selected according to the specific conditions of the threads, and one or a group of threads can share one or a group of weight factors, thereby realizing more refined and more accurate scheduling.
In some embodiments, the preset plurality of groups includes a first group and a second group, and the at least one weight factor corresponding to the second group includes a third weight factor, where the target weight factor is smaller than the third weight factor. In other words, different groups have different weight factors; the threads in the first group are those that need to be scheduled preferentially, so the weight factor corresponding to the first group is smaller than that corresponding to the second group. That is, a target thread belonging to the first group has a greater probability of being scheduled preferentially than one belonging to the second group.
In some embodiments, the grouping criteria for the preset plurality of groups may be the same as the grouping criteria of the control group (cgroup) mechanism. For example, the preset plurality of groups may include a top-layer group, a foreground group, and a background group; or a foreground group and a background group; and so on. The top-layer group may include threads used to execute event-related tasks in the top-layer window or a floating window, the foreground group includes threads related to foreground applications, and the background group includes threads related to background applications. It can be simply understood that a weight factor parameter is added to each cgroup on the basis of the control group mechanism, i.e., each cgroup corresponds to at least one weight factor.
Through the method, the group to which the target thread belongs is determined to be the first group. In some embodiments, the first packet may include a thread associated with a foreground application. When at least one weight factor corresponding to the first group is smaller, the probability that the thread related to the foreground application program is scheduled preferentially is higher, the problem of unsmooth user interaction interface can be effectively solved, the system performance is improved, and the user experience is improved.
As yet another example, the target weight factor may be determined based on both the system load and the group of the target thread. Specifically, with reference to the methods in the two examples above: on the one hand, according to the obtained system load, a system load level corresponding to the system load is determined from a plurality of preset system load levels, and then at least one weight factor corresponding to that level (for convenience of description, hereinafter referred to as at least one fourth weight factor) is determined. On the other hand, the group to which the target thread belongs is determined from a plurality of preset groups according to the information of the target thread, and then at least one weight factor corresponding to that group (hereinafter referred to as at least one fifth weight factor) is determined. The target weight factor may be any one of the at least one fourth weight factor and the at least one fifth weight factor; or the smallest of them; or the product of a fourth weight factor determined from the at least one fourth weight factor and a fifth weight factor determined from the at least one fifth weight factor, which is not limited in the embodiments of the present application.
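The combination options listed above can be sketched as follows (a hypothetical helper; the load-derived and group-derived factors are illustrative inputs):

```python
def combine_factors(fourth_factor, fifth_factor, mode="min"):
    """Combine a load-level factor (fourth) and a group factor (fifth).

    mode "min" takes the smaller (more aggressive) factor; mode "product"
    multiplies them. Picking either factor alone is also permitted by the
    text and needs no helper.
    """
    if mode == "min":
        return min(fourth_factor, fifth_factor)
    if mode == "product":
        return fourth_factor * fifth_factor
    raise ValueError("unknown combination mode")

# The load level yields 40%, the thread's group yields 20%:
f_min = combine_factors(0.40, 0.20, mode="min")       # 0.20
f_prod = combine_factors(0.40, 0.20, mode="product")  # 0.08
```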
As yet another example, the target weight factor may be a user-set value. That is, the method can provide an interface for setting or modifying the weight factor corresponding to a thread, so that the user can set the weight factor according to actual requirements. It should be understood that the user here refers to a background user, i.e., a user who develops the system, and not a user who uses an application. For ease of distinction, in this embodiment, a user who develops the system may be referred to as a background user (e.g., an operating system developer), and a user who uses an application may be referred to as an interface user (e.g., a consumer who uses the electronic device or interacts with a user interface).
In some embodiments, the weighting factor may be a factory default value, such as 100%, before the weighting factor is set by the background user.
The interaction between the system and the background user is performed through a kernel interface; specifically, an initial default value of the weight factor is provided by the underlying kernel code, and an interface is opened for the upper layer to invoke. The process of adjusting the weight factor may be implemented in the system runtime layer or the application framework layer.
Therefore, the target weighting factor determined in step S420 may be a factory default value or a value set by a background user, which is not limited in this embodiment of the application.
It can be understood that after the system load is obtained in step S410, there are two cases: the system load exceeds a preset threshold value and the system load does not exceed the preset threshold value. Whether to perform step S420 needs to be determined according to the above two cases. In this embodiment of the present application, a threshold switch may be set to indicate the two situations, for example, when the system load exceeds a preset threshold, the threshold switch is turned on, and step S420 is executed; if the system load does not exceed the preset threshold, the threshold switch is turned off, and step S420 is not executed.
Therefore, in some embodiments, before determining the target weight factor corresponding to the target thread, it may be determined whether the threshold switch is in an open state, where whether to open the threshold switch is determined according to whether the system load exceeds a preset threshold value. That is, when the threshold switch is in the on state, step S420 may be executed, specifically, a target weight factor corresponding to the target thread may be determined, and then the virtual running time of the target thread may be adjusted according to the weight factor. When the threshold switch is in the off state, it may be considered that the virtual running time of the thread is not adjusted, and therefore the weighting factor is not obtained, or the default weighting factor is 1.
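The gating described above can be sketched as follows (the function name is illustrative): the target weight factor is consulted only when the threshold switch is on; otherwise a neutral factor of 1 is used, leaving the virtual running time unadjusted.

```python
def effective_weight_factor(system_load, preset_threshold, target_factor):
    """Return the factor actually applied to the target thread's vruntime.

    The 'threshold switch' is on only when the system load exceeds the
    preset threshold; when it is off, the neutral factor 1.0 means the
    vruntime grows exactly as in unadjusted fair scheduling.
    """
    threshold_switch_on = system_load > preset_threshold
    return target_factor if threshold_switch_on else 1.0

f_busy = effective_weight_factor(1.3, 1.0, 0.4)  # switch on  -> 0.4
f_idle = effective_weight_factor(0.6, 1.0, 0.4)  # switch off -> 1.0
```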
The above describes which threads the target thread may be, but before that, the target thread needs to be determined, that is, the virtual runtime of which thread needs to be adjusted, or which thread needs to be scheduled preferentially. From the perspective of the thread spawning process, determining the target thread may include: receiving a first operation (such as a clicking operation, a sliding operation, a zooming operation and the like) of an interface user; creating at least one thread for responding to the first operation; a target thread is determined from the at least one thread.
In the embodiment of the application, it can be preset which threads related to the operations of an interface user need to be scheduled preferentially. When the system receives, detects, or identifies a preset operation generated by an interface user, it may consider that there may be threads whose virtual running time needs to be adjusted. That is, when the interface user generates a preset operation, the implementation flow of the method 400 of the present application is in effect triggered.
In other embodiments, after the interface user generates the first operation, the system may not create a new thread, and use the created thread to complete the process in response to the first operation. The target thread may also be an already created thread.
In other embodiments, the target thread may be determined from thread groupings. For example, after the interface user generates the first operation, the thread responding to the first operation may be added to the corresponding packet, so that the thread added to the corresponding packet may be determined as the target thread, i.e., the thread that needs to be scheduled preferentially.
In other words, when the system senses the start of an event, the target thread and the target weight factor corresponding to the target thread may be determined, so as to adjust the virtual running time of the target thread by using the target weight factor.
In some embodiments, after step S420, when the event corresponding to the target thread is completed, the target weight factor may be adjusted to 1 (i.e., 100%), so that subsequent scheduling is not affected. For example, the weight factor may be restored to a default value (e.g., 100%) when the interface user's click event is complete. When a new event is generated and some threads need to be scheduled preferentially, steps S410 and S420 are repeated.
In the method provided by the embodiment of the application, the scheduling of fair-scheduling-class threads can be influenced fundamentally by adjusting the virtual running time of the target thread, realizing preferential scheduling of the target thread, rather than realizing preferential scheduling merely probabilistically, as conventional schemes do by scaling the task load or adjusting the task priority. Therefore, the method can increase the probability that threads of important tasks are scheduled and reduce the occurrence of device-wide stutter, thereby improving thread scheduling efficiency, improving system performance, and improving the user experience.
The weight factor, and the mechanism for adjusting the virtual running time by using it, can meet different weight factor requirements under different upper-layer load conditions of the system. Because the influence of system load on scheduling is taken into account, the problem of scheduling timeliness caused by increased system load for threads of the same priority can be solved, reducing the occurrence of device-wide stutter and improving the user experience.
In addition, the embodiment of the application fundamentally realizes preferential scheduling of threads by adjusting the virtual running time, and can solve the problem that high-priority tasks scheduled only probabilistically may fail to be scheduled. The group-based scheduling algorithm provided by the embodiment of the application can realize more refined and more accurate scheduling. By adjusting the virtual running time with the weight factor, device-wide stutter can be reduced and the user experience improved without the interface user perceiving the adjustment.
Further, for better understanding of the present application, a specific non-limiting example is listed below in conjunction with fig. 6 to fig. 9, so as to describe the thread scheduling method provided by the embodiment of the present application.
Fig. 6 shows a schematic flowchart for setting the threshold switch to be turned on and off according to an embodiment of the present application. In this example, a threshold switch is provided to indicate both the system load exceeding a preset threshold and the system load not exceeding the preset threshold. The process shown in fig. 6 includes steps S601 to S606, which will be described below with reference to the drawings.
In step S601, a system load is acquired.
In this step, the system load may be monitored in real time (e.g., may be monitored by setting PSI) and obtained.
In step S602, it is determined whether the system load exceeds a preset threshold.
For the related contents of the system load and the preset threshold, reference may be made to the related description in the method 400 above, and for brevity, no further description is provided here.
If yes, in step S603, it is determined that the system load is a heavy load.
Accordingly, in step S604, the threshold switch is turned on. Turning on the threshold switch indicates that the weighting factor in the embodiment of the present application takes effect, that is, the weighting factor may be applied to adjust the virtual running time of the thread.
If not, in step S605, it is determined that the system load is light.
Accordingly, in step S606, the threshold switch is turned off. Turning off the threshold switch indicates that the weighting factor in the embodiment of the present application is disabled, that is, the weighting factor is not applied to adjust the virtual running time of the thread. It should be noted that "turning off the threshold switch" here may be understood as either switching off a threshold switch that is currently on, or keeping off a threshold switch that is already off.
It should be understood that steps S603 and S605 for determining the system load as a heavy load or a light load may not be provided, and step S604 or S606 is directly performed after step S602.
In the embodiment of the present application, the process of acquiring the system load and turning on or off the threshold switch according to the size of the system load, which is illustrated in fig. 6, may be implemented by a bottom layer mechanism, and specifically may be implemented by a kernel layer in an operating system.
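The load-monitoring and switch-setting flow of fig. 6 can be sketched as follows. This is a minimal illustrative sketch, not the kernel-layer implementation; the threshold value and all names (`PRESET_THRESHOLD`, `ThresholdSwitch`) are hypothetical.

```python
# Illustrative sketch of steps S601-S606: monitor the system load and
# flip a "threshold switch" that gates the weighting-factor mechanism.
PRESET_THRESHOLD = 0.75  # hypothetical load threshold (e.g. 75% utilization)

class ThresholdSwitch:
    def __init__(self):
        # Initial state at system initialization: off (value "false").
        self.on = False

    def update(self, system_load):
        """Heavy load (above the preset threshold) turns the switch on;
        light load turns it off, or keeps it off if already off."""
        self.on = system_load > PRESET_THRESHOLD
        return self.on

switch = ThresholdSwitch()
switch.update(0.9)   # heavy load: switch turned on, factors take effect
switch.update(0.3)   # light load: switch turned off, factors disabled
```

In the flow of fig. 6, this update would run in the kernel layer each time the monitored load value (for example, from PSI) changes.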
Fig. 7 is a schematic flowchart illustrating determining a weighting factor and performing thread scheduling according to the weighting factor according to an embodiment of the present application. The process shown in fig. 7 includes steps S701 to S710, which will be described below with reference to the drawings.
In step S701, a first operation of an interface user is received.
The first operation is an operation that triggers the processes of determining the target thread and the target weighting factor; in other words, the first operation of the interface user is used by the system to determine the target thread. The first operation may be a click operation, a slide operation, or any other operation by which the interface user interacts with the system, which is not limited in the embodiments of the present application.
It is understood that the triggering of the system to determine the target thread through the first operation of the interface user is merely exemplary, and in some other embodiments, the target thread may be determined in other manners, for example, the target thread is determined according to the grouping information of the threads or the attribute information of the threads, which is not limited in this application.
At step S702, at least one thread for responding to the first operation is created.
This step is optional, and in other embodiments, the first operation may be responded to with an already created thread.
In step S703, a target thread is determined from at least one thread.
For example, the target thread may be determined from at least one thread by attribute information (e.g., whether it is a UI thread, whether it is a rendering thread, etc.) or usage information (e.g., whether it is for detecting user input events, whether it is for layer composition, etc.) of the thread. For example, the target thread may be a user interface thread, a rendering thread, a distribution thread of user input events, a detection thread of user input events, an interface composition thread, a system animation thread, or a system interface thread, etc.
For example, the target thread may be determined from the at least one thread through grouping information of the threads. For example, when the interface user generates a first operation, the system may divide the threads for responding to the first operation into corresponding groups, and then determine the threads in the corresponding groups as target threads.
Optionally, steps S701 to S703 may be replaced with: determining the target thread when the start of an event is perceived, where the target thread may be the thread used to execute the event. The target thread may be a thread newly created at the start of the event, or a thread already created before the event starts, which is not limited in the embodiments of the present application.
In step S704, a target weighting factor corresponding to the target thread is determined.
In this step, there are various ways to determine the target weight factor, which may specifically refer to the related description in the method 400, and for brevity, no further description is given here.
In step S705, it is determined whether the threshold switch is turned on.
Here, the opening or closing of the threshold switch may determine whether to adjust the virtual run time of the target thread using the target weight factor.
If the judgment result is yes, that is, the threshold switch is turned on, step S706 is executed, and the target weight factor takes effect.
Accordingly, in step S707, the virtual run time of the target thread is adjusted according to the target weight factor. The adjusted virtual runtime thus obtained is used for fair scheduling.
If the determination result is negative, that is, the threshold switch is turned off, step S708 is executed, and the target weight factor is invalid.
Accordingly, in step S709, the virtual running time of the target thread is not adjusted. The virtual running time of the target thread, obtained according to the conventional calculation formula (i.e., formula (2) above), is then used for fair scheduling.
That is, even if the target weight factor is determined in step S704, the target weight factor cannot be used to adjust the virtual runtime of the target thread if the threshold switch is not turned on.
In the embodiment of the present application, step S706 and step S708 may not be provided, so that step S707 or step S709 is directly executed after step S705.
Finally, in step S710, fair scheduling is performed according to the adjusted or unadjusted virtual running time of the target thread, completing one round of scheduling of the target thread.
In the flow shown in fig. 7, the step of determining the target weighting factor corresponding to the target thread (i.e., step S704) is performed before the step of determining whether the threshold switch is turned on (i.e., step S705); in some other embodiments, the two steps may be performed in the reverse order or simultaneously.
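The flow of steps S704 to S710 can be sketched as follows, assuming the adjustment rule stated in claim 9 (the new virtual running time is the previous virtual running time plus the weighting factor multiplied by the time allocated to the thread in the current scheduling period). All names and the time-slice value are hypothetical.

```python
# Illustrative sketch of steps S704-S710: apply the weighting factor when
# the threshold switch is on, then fair-schedule by minimum vruntime.
def scheduling_round(threads, switch_on, slice_ns=4_000_000):
    """One scheduling round: update each thread's virtual running time,
    then pick the thread with the smallest virtual running time (S710)."""
    for t in threads:
        # S705-S709: the factor takes effect only when the switch is on;
        # a factor of 1.0 reproduces the conventional formula.
        factor = t["weight_factor"] if switch_on else 1.0
        t["vruntime"] += factor * slice_ns
    return min(threads, key=lambda t: t["vruntime"])

threads = [
    {"name": "target", "weight_factor": 0.4, "vruntime": 0.0},
    {"name": "other",  "weight_factor": 1.0, "vruntime": 0.0},
]
# With the threshold switch on, the target thread accumulates virtual
# running time more slowly, so fair scheduling keeps selecting it first.
picked = scheduling_round(threads, switch_on=True)
```

With the switch off, every thread's virtual running time grows by the full allocated time, which matches the unadjusted formula (2).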
Fig. 8 is a schematic flow chart illustrating another method for determining a weighting factor and scheduling threads according to the weighting factor according to an embodiment of the present disclosure. The process shown in fig. 8 includes steps S801 to S809, which will be described below with reference to the drawings.
In step S801, a first operation of an interface user is received.
At step S802, at least one thread for responding to the first operation is created.
In step S803, a target thread is determined from at least one thread.
Steps S801 to S803 are the same as steps S701 to S703 in fig. 7, and specific reference is made to the above description, which is omitted here for brevity.
In step S804, it is determined whether the threshold switch is turned on.
Here, the opening or closing of the threshold switch may determine whether to adjust the virtual run time of the target thread using the target weight factor.
If the judgment result is positive, that is, the threshold switch is turned on, step S805 is executed to determine the target weighting factor corresponding to the target thread, where the target weighting factor is greater than or equal to 0 and less than 1.
Accordingly, in step S806, the virtual runtime of the target thread is adjusted according to the target weight factor. The adjusted virtual running time thus obtained is used for fair scheduling.
If the determination result is negative, that is, the threshold switch is turned off, step S807 is executed: no weighting factor is set, or the target weighting factor corresponding to the target thread is set to 1.
Accordingly, in step S808, the virtual running time of the target thread is not adjusted. The virtual running time of the target thread, obtained according to the conventional calculation formula (i.e., formula (2) above), is then used for fair scheduling.
In step S809, fair scheduling is performed according to the adjusted or unadjusted virtual running time of the target thread, completing one round of scheduling of the target thread.
The flows shown in fig. 7 and fig. 8 are specific but non-limiting examples of the thread scheduling method provided in the embodiments of the present application; for parts of the related steps in fig. 7 or fig. 8 that are not described in detail, reference may be made to the embodiment of the method 400.
When the target thread completes its task, or when all of the at least one thread for responding to the first operation have completed their tasks, the relevant weighting factors may be reset to 1, so that they no longer affect thread scheduling.
In the embodiments of the present application, the threshold switch, and the preset threshold used to decide whether to turn it on, may be set for the overall system load at system initialization. For example, the initial state of the threshold switch may be the off state, with an initial value of false. As the system load is monitored in real time, when it exceeds the preset threshold, the state of the threshold switch may be switched to the on state, with a value of true.
When a thread of the fair scheduling class starts periodic scheduling, the corresponding virtual-running-time calculation formula, such as the formula with the weighting-factor influence (e.g., formula (1) above) or the formula without the weighting-factor influence (e.g., formula (2) above), may be selected according to whether the threshold switch is on or off; the virtual running time of the thread is then calculated, and fair scheduling is performed based on it. Within a scheduling period, the scheduler preferentially selects the thread with the smallest virtual running time for scheduling.
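The selection of a target weighting factor from preset system load grades, as described in the method 400 and in the claims (a higher load grade corresponds to a smaller factor, so the target thread is favored more strongly under heavier load), can be sketched as follows. The grade boundaries and factor values here are invented for illustration.

```python
# Hypothetical mapping from system load to a preset weighting factor.
# Each entry is (load lower bound, weighting factor); heavier grades
# come first and carry smaller factors.
LOAD_GRADES = [
    (0.90, 0.2),   # first grade: heaviest load, smallest factor
    (0.75, 0.6),   # second grade: heavy load, larger factor
]

def target_weight_factor(system_load):
    """Pick the preset factor for the grade the current load falls into.
    Below every grade, return 1.0, which leaves the virtual running
    time unchanged (equivalent to the conventional formula)."""
    for lower_bound, factor in LOAD_GRADES:
        if system_load >= lower_bound:
            return factor
    return 1.0
```

For example, under the hypothetical table above, a load of 0.95 falls into the first grade and yields the smallest factor, while a load below 0.75 yields 1.0 and the mechanism has no effect.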
In the embodiment of the present application, at the time of task initialization, for example, at the time of creating at least one thread for responding to the first operation as shown in fig. 7 or fig. 8, a weighting factor may be set for each task (i.e., thread), and the initial value thereof may be 100% (i.e., 1). After the target thread needing preferential scheduling is determined, the weight factor value of the target thread can be modified, namely the target weight factor is determined, and the target weight factor is used for adjusting the virtual running time of the target thread. For other threads that do not need to be scheduled preferentially, their weighting factors may remain at initial default values.
In other embodiments, a weighting factor may be set for each task (i.e., thread) at task initialization, with the initial value for a target thread that needs preferential scheduling set to a factory default below 100%, for example 40% or 60%, while the initial value of the weighting factor for threads that do not need preferential scheduling is set to 100%.
In other embodiments, a weighting factor may be set for threads that need or may need to be scheduled preferentially at task initialization, while a weighting factor is not set for threads that do not need to be scheduled preferentially. The initial value of the weighting factor may be 100%, or may be a factory default value smaller than 100%, which is not limited in this embodiment of the application.
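The initialization variants described in the preceding paragraphs can be sketched as follows; all names and default values are hypothetical.

```python
# Illustrative sketch of the weighting-factor initialization variants:
# every thread starts at 100% (1.0, no effect) unless it is marked as
# needing preferential scheduling, in which case a factory default
# below 100% may be used from the start.
def init_weight_factors(thread_names, priority_names=(),
                        default=1.0, priority_default=1.0):
    return {name: (priority_default if name in priority_names else default)
            for name in thread_names}

# Variant 1: everything starts at 1.0; the target thread's factor is
# modified only after it is selected for preferential scheduling.
factors_v1 = init_weight_factors(["ui", "worker"])
factors_v1["ui"] = 0.4  # target thread chosen later

# Variant 2: priority threads start at a factory default below 100%.
factors_v2 = init_weight_factors(["ui", "worker"],
                                 priority_names={"ui"},
                                 priority_default=0.6)
```

A third variant, also described above, would simply omit entries for threads that never need preferential scheduling.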
The thread scheduling method provided by the embodiments of the present application is described below with reference to fig. 9 from the perspective of hardware-software interaction. As shown in fig. 9, when fair scheduling is performed, the CFS scheduler 910 in the kernel performs thread scheduling, which may include the following process.
First, the Energy Aware Scheduling (EAS) module 920 in the CFS scheduler 910 is used for core selection and frequency scaling. Specifically, the energy model (EM) is used to estimate the energy value of the system as a whole, and this estimate is used by the per-entity load tracking algorithm (PELT) to calculate the demand that each scheduling entity places on the system, that is, the task load of each scheduling entity. The scheduling frequency-modulation subsystem (schedtune) provides a user-mode interface, so that user-mode management software can track scene requirements, scale the current task load, and dynamically adjust the scheduler's behavior by setting schedtune node values, thereby achieving a frequency-scaling effect. With these parameters, the scheduler can sense task demand and flexibly select a core or scale frequency. Android, for example, provides a set of task groups, namely the top-layer group (top-app), the foreground group (foreground), and the background group (background), which correspond respectively to Android top-layer, foreground, and background tasks. The schedtune tunables differ between groups. The CPU governor is used to select a CPU core (i.e., core selection).
Illustratively, a CPU cluster is shown that includes multiple CPU cores, each with a run queue (also called a dispatch queue or ready queue). The run queue of each CPU core contains the tasks (i.e., threads) waiting to be processed by that core. When a task wakes up from a blocked or sleeping state, it must be placed in the run queue of some CPU core. In a heterogeneous processor architecture, the energy-efficiency cost of placing a task on different CPU cores varies.
To guarantee performance while minimizing whole-system energy consumption, the scheduler uses the energy-efficiency table of the EAS energy model to decide on which CPU core a task runs. This is the EAS core-selection logic. It should be noted that the embodiments of the present application place no limitation on the number of CPU clusters or the number of CPU cores in each cluster; the core-selection and frequency-scaling process is the same as in the prior art and is described here only as an example.
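The EAS-style core selection described above can be sketched as follows. The energy table, the congestion penalty, and all names are invented for illustration and do not reproduce the actual EAS energy-efficiency model.

```python
# Hypothetical sketch of energy-aware core selection: estimate the
# energy cost of placing a task on each candidate CPU and pick the
# cheapest. Values are invented; real EAS uses a per-cluster
# energy-efficiency table and utilization tracking.
ENERGY_TABLE = {     # cpu -> energy cost per unit of task load
    "little0": 1.0,
    "little1": 1.0,
    "big0": 3.5,
}

def select_cpu(task_load, run_queues):
    """Pick the CPU with the lowest estimated energy cost for this task,
    using current run-queue length as a crude congestion penalty."""
    def cost(cpu):
        return ENERGY_TABLE[cpu] * task_load * (1 + len(run_queues[cpu]))
    return min(run_queues, key=cost)

queues = {"little0": ["t1", "t2"], "little1": [], "big0": []}
chosen = select_cpu(10, queues)  # an idle little core is cheapest here
```

After core selection, the awakened task would be placed on the run queue of the chosen CPU, where fair scheduling by virtual running time then takes over.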
After core selection is completed, the CFS scheduler 910 may set or determine the value of a thread's weighting factor according to the information of the control group (cgroup). For how the weighting factor of a thread is determined, reference may be made to the above embodiments, which are not repeated here. When the weighting factor corresponding to a thread is in effect, the virtual running time of the thread can be adjusted using that weighting factor. When the weighting factor corresponding to a thread is disabled, or the weighting factor is 100%, or there is no corresponding weighting factor, the virtual running time of the thread is calculated according to the conventional calculation method.
In this way, CFS scheduler 910 can schedule tasks based on the calculated virtual runtime, and schedule tasks onto selected CPU cores for further processing.
Method embodiments of the present application are described above in detail with reference to fig. 1 to 9, and apparatus embodiments of the present application are described below in detail with reference to fig. 10 to 11. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 10 is a schematic diagram of a thread scheduling apparatus according to an embodiment of the present application. The apparatus 1000 shown in fig. 10 may be an apparatus on the electronic device 100 shown in fig. 1, or an apparatus on an electronic device having the software system 200 shown in fig. 2. The apparatus 1000 includes an obtaining module 1010 and an adjusting module 1020.
The apparatus 1000 may be configured to execute the thread scheduling method provided in the embodiment of the present application. For example, the obtaining module 1010 may be configured to perform step S410 of the method shown in fig. 4, and the adjusting module 1020 may be configured to perform step S420 of the method shown in fig. 4.
As another example, apparatus 1000 may also be used to perform the thread scheduling methods shown in FIGS. 6-8. The obtaining module 1010 is configured to perform a step of obtaining a system load, and may further perform a step of setting a threshold switch state, for example, the obtaining module 1010 may be configured to perform step S601 in fig. 6, and may also be configured to perform steps S602 to S606 in fig. 6. The adjusting module 1020 is configured to perform a step of adjusting the virtual runtime of the target thread, and may also perform a step of thread scheduling and a step of setting a weighting factor, for example, the adjusting module 1020 may be configured to perform steps S705 to S710 in fig. 7, and may also be configured to perform step S704, and the like; also for example, the adjusting module 1020 may be configured to perform steps S804 to S809 in fig. 8.
The apparatus 1000 may correspond to the operating system layer in fig. 2 or fig. 3, and in particular, may correspond to the CFS scheduler 910 in fig. 9.
Fig. 11 shows a hardware structure diagram of an electronic device according to an embodiment of the present application. The electronic device 1100 includes a processor 1110 and a memory 1120.
A memory 1120 for storing programs;
a processor 1110 for executing a program stored in the memory 1120, the processor 1110 for obtaining a system load when the program is executed; and when the system load exceeds a preset threshold value, adjusting the virtual running time of a target thread so as to enable the target thread to obtain priority scheduling, wherein the target thread belongs to a fair scheduling thread.
Alternatively, the processor 1110 may have the functions of the processor 150 shown in fig. 1 to implement the above-described functions of executing the relevant programs.
Alternatively, processor 1110 may also be an integrated circuit chip having information processing capabilities. In the implementation process, the steps of the thread scheduling method according to the embodiment of the present application may be implemented by integrated logic circuits of hardware in a processor or instructions in the form of software.
Alternatively, the memory 1120 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 1120 may store a program, and when the program stored in the memory 1120 is executed by the processor 1110, the processor 1110 is configured to perform the steps of the thread scheduling method according to the embodiments of the present application.
Alternatively, the memory 1120 may have the functions of the memory 180 shown in fig. 1 to implement the above-described function of storing programs. Alternatively, the processor 1110 may be a general-purpose CPU, a microprocessor, an ASIC, or one or more integrated circuits, configured to execute the relevant programs to implement the functions required by the units/modules of the thread scheduling apparatus according to the embodiments of the present application, or to perform the steps of the thread scheduling method according to the embodiments of the present application.
Optionally, the processor 1110 and the memory 1120 may be coupled together.
In the embodiments of the present application, "first", "second", and various numerical references are only used for convenience of description and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated, and thus, the features defined as "first", "second" may explicitly or implicitly include one or more of the features.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. In the description of the embodiments of the present application, "a plurality of" means two or more, and "at least one" or "one or more" means one, two, or more.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between 2 or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a separate product, may be stored in a readable storage medium. Based on such understanding, the part of the technical solutions of the embodiments of the present application that contributes beyond the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (26)

1. A method for thread scheduling, comprising:
acquiring a system load;
and when the system load exceeds a preset threshold value, adjusting the virtual running time of a target thread so as to enable the target thread to obtain priority scheduling, wherein the target thread belongs to a fair scheduling thread.
2. The method of claim 1, wherein adjusting the virtual runtime of the target thread comprises:
determining a target weight factor corresponding to the target thread, wherein the target weight factor is greater than or equal to 0 and less than 1;
and adjusting the virtual running time of the target thread according to the target weight factor to obtain the adjusted virtual running time, wherein the adjusted virtual running time is used for fair scheduling.
3. The method of claim 2, wherein determining the target weighting factor for the target thread comprises:
according to the system load, determining a system load grade corresponding to the system load from a plurality of preset system load grades;
and determining the target weight factor from a plurality of preset weight factors according to the system load grade corresponding to the system load, wherein the plurality of preset weight factors correspond to the plurality of preset system load grades.
4. The method of claim 3, wherein the predetermined plurality of system load levels comprises a first level and a second level, and wherein the predetermined plurality of weighting factors comprises a first weighting factor and a second weighting factor, wherein the first weighting factor corresponds to the first level and the second weighting factor corresponds to the second level;
the system load corresponding to the first level is greater than the system load corresponding to the second level, and the first weight factor is smaller than the second weight factor.
5. The method according to any of claims 2 to 4, wherein the determining the target weighting factor corresponding to the target thread comprises:
determining a first group to which the target thread belongs from a plurality of preset groups according to the information of the target thread;
determining the target weight factor from at least one weight factor according to the first group to which the target thread belongs, wherein the at least one weight factor corresponds to the first group.
6. The method of claim 5, wherein the plurality of preset groups comprises the first group and a second group, wherein the at least one weighting factor corresponding to the second group comprises a third weighting factor, and wherein the target weighting factor is less than the third weighting factor.
7. The method of claim 5 or 6, wherein the first group comprises a thread related to a foreground application.
8. The method of any of claims 2 to 7, further comprising:
and when the event corresponding to the target thread is completed, adjusting the target weight factor to 1.
9. The method according to any one of claims 2 to 8, wherein the adjusting the virtual runtime of the target thread according to the target weight factor to obtain an adjusted virtual runtime comprises:
and multiplying the target weight factor by the time distributed by the target thread in the current scheduling period, and adding the multiplication result to the virtual running time of the target thread after the last scheduling period is finished to obtain the adjusted virtual running time.
10. The method according to any one of claims 2 to 9, further comprising, before said determining a target weighting factor corresponding to the target thread:
determining that a threshold switch is in an open state, wherein whether to open the threshold switch is determined according to whether the system load exceeds the preset threshold value.
11. The method according to any one of claims 1 to 10, wherein the target thread is a thread for executing a task related to an interactivity event.
12. The method according to any one of claims 1 to 11, wherein the target thread is any one of the following threads: a user interface thread, a rendering thread, a distribution thread of user input events, a detection thread of user input events, an interface composition thread, a system animation thread, or a system interface thread.
13. A thread scheduling apparatus, comprising:
the acquisition module is used for acquiring system load;
and the adjusting module is used for adjusting the virtual running time of a target thread when the system load exceeds a preset threshold value so as to enable the target thread to obtain priority scheduling, wherein the target thread belongs to a fair scheduling thread.
14. The apparatus of claim 13, wherein the adjustment module is specifically configured to:
determining a target weight factor corresponding to the target thread, wherein the target weight factor is greater than or equal to 0 and less than 1;
and adjusting the virtual running time of the target thread according to the target weight factor to obtain the adjusted virtual running time, wherein the adjusted virtual running time is used for fair scheduling.
15. The apparatus of claim 14, wherein the adjustment module is specifically configured to:
determine, according to the system load, a system load level corresponding to the system load from a plurality of preset system load levels; and
determine the target weight factor from a plurality of preset weight factors according to the system load level, wherein the plurality of preset weight factors correspond to the plurality of preset system load levels.
16. The apparatus according to claim 15, wherein the plurality of preset system load levels comprises a first level and a second level, the plurality of preset weight factors comprises a first weight factor and a second weight factor, the first weight factor corresponds to the first level, and the second weight factor corresponds to the second level; and
the system load corresponding to the first level is greater than the system load corresponding to the second level, and the first weight factor is less than the second weight factor.
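Claims 15 and 16 only constrain the mapping to be monotonic: a heavier load level must map to a smaller weight factor. A hypothetical lookup table with made-up thresholds and factors:

```python
# (minimum load, weight factor); heavier levels come first and carry
# smaller factors, as claim 16 requires. All values are illustrative.
PRESET_LEVELS = [
    (0.90, 0.25),  # first level: heaviest load, smallest factor
    (0.70, 0.50),  # second level
    (0.50, 0.75),  # lightest level above the preset threshold
]

def target_weight_factor(system_load: float) -> float:
    """Pick the preset weight factor for the level the load falls into."""
    for min_load, factor in PRESET_LEVELS:
        if system_load >= min_load:
            return factor
    return 1.0  # below every level: vruntime accrues at the normal rate
```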
17. The apparatus according to any one of claims 14 to 16, wherein the adjustment module is specifically configured to:
determine, according to information about the target thread, a first group to which the target thread belongs from a plurality of preset groups; and
determine the target weight factor from at least one weight factor according to the first group, wherein the at least one weight factor corresponds to the first group.
18. The apparatus of claim 17, wherein the plurality of preset groups comprises the first group and a second group, wherein at least one weight factor corresponding to the second group comprises a third weight factor, and wherein the target weight factor is less than the third weight factor.
19. The apparatus of claim 17 or 18, wherein the first group comprises threads associated with a foreground application.
20. The apparatus of any of claims 14 to 19, wherein the adjustment module is further configured to:
adjust the target weight factor to 1 when the event corresponding to the target thread is completed.
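The lifecycle in claim 20 can be sketched as a small state holder; `BoostState` and its method names are invented for illustration:

```python
class BoostState:
    """Holds the target thread's current weight factor: boosted while
    an interaction event is in flight, restored to 1 when the event
    completes so vruntime again accrues at the normal rate."""

    def __init__(self):
        self.weight_factor = 1.0  # unboosted by default

    def on_event_start(self, factor: float):
        self.weight_factor = factor  # e.g. chosen per system load level

    def on_event_complete(self):
        self.weight_factor = 1.0  # claim 20: drop the boost
```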
21. The apparatus according to any one of claims 14 to 20, wherein the adjustment module is specifically configured to:
multiply the target weight factor by the time allocated to the target thread in the current scheduling period, and add the product to the virtual runtime of the target thread at the end of the previous scheduling period to obtain the adjusted virtual runtime.
22. The apparatus according to any one of claims 14 to 21, wherein, before determining the target weight factor corresponding to the target thread, the adjustment module is further configured to:
determine that a threshold switch is in an enabled state, wherein the threshold switch is enabled or disabled according to whether the system load exceeds the preset threshold.
23. The apparatus according to any one of claims 13 to 22, wherein the target thread is a thread that executes tasks related to an interaction event.
24. The apparatus according to any one of claims 13 to 23, wherein the target thread is any one of the following: a user interface thread, a rendering thread, a user input event dispatch thread, a user input event detection thread, an interface composition thread, a system animation thread, or a system interface thread.
25. An electronic device, comprising:
one or more processors;
one or more memories;
wherein the one or more memories are configured to store a computer program comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the method according to any one of claims 1 to 12.
26. A computer-readable storage medium comprising instructions that, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1 to 12.
CN202110735256.7A 2021-06-30 2021-06-30 Thread scheduling method and device and electronic equipment Pending CN115543551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110735256.7A CN115543551A (en) 2021-06-30 2021-06-30 Thread scheduling method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115543551A true CN115543551A (en) 2022-12-30

Family

ID=84716861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110735256.7A Pending CN115543551A (en) 2021-06-30 2021-06-30 Thread scheduling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115543551A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117112154A (en) * 2023-04-21 2023-11-24 荣耀终端有限公司 Thread scheduling method and related device
CN117707720A (en) * 2023-08-07 2024-03-15 荣耀终端有限公司 Process scheduling method and device and electronic equipment


Similar Documents

Publication Publication Date Title
CN106557367B (en) Apparatus, method and device for providing granular quality of service for computing resources
CN108009006B (en) Scheduling method and device of I/O (input/output) request
US9946563B2 (en) Batch scheduler management of virtual machines
JP6320520B2 (en) Thread assignment and scheduling for many priority queues
US7617375B2 (en) Workload management in virtualized data processing environment
US9411649B2 (en) Resource allocation method
US7698531B2 (en) Workload management in virtualized data processing environment
WO2018059280A1 (en) Method and terminal for allocating system resource to application
US8782674B2 (en) Wait on address synchronization interface
US20160077571A1 (en) Heuristic Processor Power Management in Operating Systems
CN112416546A (en) Multitask scheduling method, electronic device and computer storage medium
CN111597042A (en) Service thread running method and device, storage medium and electronic equipment
US20210208935A1 (en) Method for Scheduling Multi-Core Processor, Terminal, and Storage Medium
JP2007512632A (en) Managing virtual machines using activity information
CN115543551A (en) Thread scheduling method and device and electronic equipment
TWI549052B (en) Method, computer-readable storage device and device for modifying behavior of an operating system
CN113495780A (en) Task scheduling method and device, storage medium and electronic equipment
US11954419B2 (en) Dynamic allocation of computing resources for electronic design automation operations
CN111813521A (en) Thread scheduling method and device, storage medium and electronic equipment
US7698530B2 (en) Workload management in virtualized data processing environment
CN111831434A (en) Resource allocation method, device, storage medium and electronic equipment
US10275007B2 (en) Performance management for a multiple-CPU platform
CN111831432B (en) IO request scheduling method and device, storage medium and electronic equipment
CN111831436A (en) Scheduling method and device of IO (input/output) request, storage medium and electronic equipment
CN113032154B (en) Scheduling method and device for virtual CPU, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination