WO2022252986A1 - Interrupt scheduling method, electronic device, and storage medium - Google Patents

Interrupt scheduling method, electronic device, and storage medium

Info

Publication number
WO2022252986A1
Authority
WO
WIPO (PCT)
Prior art keywords
interrupt
processor core
scheduling
processor
scheduling delay
Prior art date
Application number
PCT/CN2022/093584
Other languages
French (fr)
Chinese (zh)
Inventor
Wang Hui (王辉)
Cheng Jian (成坚)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022252986A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4812 Task transfer initiation or dispatching by interrupt, e.g. masked
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of operating systems, and in particular to an interrupt scheduling method, electronic equipment and storage media.
  • FIG. 1 shows a schematic diagram of an architecture in the related art that uses a Linux operating system to process high real-time services.
  • the architecture includes a hardware layer 120 , a kernel layer 140 and a user layer 160 .
  • the user layer 160 can run at least one thread, and each thread is used for processing tasks.
  • the task scheduling process and interrupt handling process of each thread are mainly implemented by the kernel layer 140 .
  • the current interrupt processing strategy mainly includes the following two possible implementation methods.
  • One possible implementation is interrupt balance processing, that is, sending interrupt requests (Interrupt Request, IRQ) to each processor (Central Processing Unit, CPU) core relatively evenly. This method can guarantee the throughput of interrupt processing, but it cannot solve the problem of uncontrollable scheduling delay caused by interrupt processing;
  • another possible implementation is interrupt core-binding processing, such as pre-binding the network card interrupt request to a certain processor core. In this way, the scheduling delay on processor cores other than the bound processor core can be controlled, but the interrupt load on the bound processor core may become too large, raising the frequency of the entire cluster and causing wasted power consumption.
  • a reasonable and effective interrupt scheduling method has not yet been provided in the related art, one that can ensure reasonable interrupt processing throughput while keeping the scheduling delay under control.
  • an embodiment of the present application proposes an interrupt scheduling method, electronic equipment, and a storage medium.
  • the maximum value of the scheduling delay caused by interrupt processing configured for the first processor core is the first delay threshold, so that when the scheduling delay value of the first processor core is greater than the first delay threshold, part of the current interrupt requests of the first processor core are migrated and bound to the second processor core, which can ensure reasonable interrupt processing throughput while keeping the scheduling delay under control.
  • an embodiment of the present application provides an interrupt scheduling method, the method comprising:
  • the first delay threshold is the maximum value of scheduling delay caused by interrupt processing configured for the first processor core
  • the maximum value of the scheduling delay caused by interrupt processing configured for the first processor core is the first delay threshold; when the scheduling delay value of the first processor core is greater than the first delay threshold, some of the current interrupt requests of the first processor core are migrated and bound to the second processor core. This ensures that the scheduling delay caused by interrupt processing on the first processor core is controllable, avoids the problem of excessive scheduling delay caused by interrupt processing in the related art, ensures the real-time performance of the service processing process, and improves the overall performance of the electronic device.
  • the scheduling delay value of the first processor core after the migration binding is less than or equal to the first delay threshold.
  • the scheduling delay value of the first processor core is less than or equal to the first delay threshold, which reduces the scheduling delay on the first processor core and further ensures the real-time performance of the service processing process.
  • the absolute values of the differences between the scheduling delay values of the other processor cores are all smaller than a preset first difference threshold, and the other processor cores are all processor cores except the first processor core.
  • the migrated part of the interrupt requests is reasonably and evenly distributed over the other processor cores in the cluster, so that the absolute values of the differences between the scheduling delay values of the other processor cores after migration and binding are less than the preset first difference threshold, ensuring the concurrency of interrupt processing and avoiding a single overloaded processor core raising the frequency of the entire cluster and wasting power.
  • the method is used in an electronic device including a user layer, a kernel layer, and a hardware layer, and the acquiring the preconfigured first delay threshold includes:
  • the user layer sends first configuration information to the kernel layer, where the first configuration information includes a processor core identifier of the first processor core and the first delay threshold;
  • the kernel layer receives the first configuration information sent by the user layer
  • migrating and binding the current part of the interrupt request of the first processor core to the second processor core includes:
  • the kernel layer sends second configuration information to the interrupt controller of the hardware layer, where the second configuration information includes the interrupt number to be migrated out of the first processor core and the processor core identifier of the second processor core;
  • the interrupt controller migrates and binds at least one interrupt request corresponding to the interrupt number to the second processor core according to the second configuration information.
  • the kernel layer dynamically adjusts the interrupt binding according to the current scheduling delay value of the first processor core and the first delay threshold configured by the user layer. If the current scheduling delay value of the first processor core is greater than the first delay threshold, the second configuration information is sent to the interrupt controller at the hardware layer, so that the interrupt controller migrates and binds at least one interrupt request corresponding to the interrupt number to the second processor core according to the second configuration information, further ensuring that the scheduling delay caused by interrupt processing on the first processor core is controllable and improving the overall performance of the electronic device.
  • the method further includes:
  • the interrupt processing duration corresponding to the interrupt request is obtained, and the interrupt processing duration includes the total duration of hard interrupt processing and soft interrupt processing;
  • a preset algorithm is used to determine the scheduling delay value corresponding to the interrupt request
  • the scheduling delay value corresponding to the interrupt request is summed with the current scheduling delay value of the first processor core to obtain the updated scheduling delay value.
  • the embodiment of the present application tracks and calculates the load overhead of each interrupt request, avoiding the coarse traditional approach that estimates load only from the number of interrupts, and thus supports more accurate interrupt balancing; the time overhead of soft interrupt processing triggered by hard interrupt processing is included in the corresponding interrupt processing duration, making the scheduling delay value determined from the interrupt processing duration more accurate, so that subsequent interrupt balancing can be performed more precisely.
  • the method further includes:
  • the second latency threshold is the maximum value of scheduling latency caused by interrupt processing configured for a specified thread
  • the processor core that meets the preset core selection condition is determined as the target processor core, and the preset core selection condition includes that the current scheduling delay value of the processor core is less than or equal to the specified second delay threshold;
  • the task scheduling delay requirement and the scheduling delay value of the processor core are both considered when selecting a core for a task; only when the current scheduling delay value of the processor core is less than or equal to the second delay threshold required by the task's scheduling delay can the processor core be used as the target processor core, ensuring that the key designated thread is placed on a processor core with a low scheduling delay value and scheduled in time.
  • the specified thread includes a frame drawing process and/or a process of an inter-process communication mechanism of the foreground application.
  • this implementation mode supports the configuration of scheduling delay requirements for task granularity, separates foreground/background threads, and reduces the impact of background threads on foreground specified threads.
  • an embodiment of the present application provides an electronic device, and the electronic device includes:
  • memory for storing processor-executable instructions
  • the processor is configured to implement the first aspect or the method provided in any possible implementation manner of the first aspect when executing the instruction.
  • the embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the method provided in the above-mentioned first aspect or any possible implementation manner of the first aspect is realized.
  • an interrupt scheduling device is provided, and the device includes at least one unit, and the at least one unit is configured to implement the method provided in the first aspect or any possible implementation manner of the first aspect.
  • the embodiments of the present application provide a computer program product, including computer readable code, or a non-volatile computer readable storage medium bearing computer readable code; when the computer readable code runs in an electronic device, the processor in the electronic device executes the method provided in the first aspect or any possible implementation manner of the first aspect.
  • FIG. 1 shows a schematic diagram of an architecture in the related art that uses a Linux operating system to process high real-time services.
  • Fig. 2 shows a schematic diagram of five scheduling classes in the Linux kernel in the related art.
  • FIG. 3 shows a schematic diagram of a scheduling queue of a processor core in the related art.
  • FIG. 4 shows a schematic diagram of an interrupt processing architecture of a Linux operating system in the related art.
  • Fig. 5a shows a schematic diagram of the interrupt processing time consumption of multiple processor cores when interrupt balancing processing is adopted.
  • Fig. 5b shows a schematic diagram of the interrupt processing time consumption of multiple processor cores when interrupt core-binding processing is adopted.
  • FIG. 6 shows a schematic diagram of an electronic device involved in an embodiment of the present application.
  • Fig. 7 shows a flowchart of an interrupt scheduling method provided by an exemplary embodiment of the present application.
  • Fig. 8 shows a flowchart of a process of counting scheduling delay values provided by an exemplary embodiment of the present application.
  • Fig. 9 shows a flowchart of a task selection process provided by an exemplary embodiment of the present application.
  • Fig. 10 shows a schematic diagram of an interface involved in an interrupt scheduling method provided by another exemplary embodiment of the present application.
  • Fig. 11 shows a flowchart of an interrupt scheduling method provided by another exemplary embodiment of the present application.
  • Fig. 12 shows a schematic diagram of a scheduling delay of a specified thread provided by an exemplary embodiment of the present application.
  • Fig. 13 shows a flowchart of an interrupt scheduling method provided by another exemplary embodiment of the present application.
  • Fig. 14 shows a block diagram of an interrupt scheduling device provided by an exemplary embodiment of the present application.
  • a process is an entity in the execution period of a program. In addition to executing code, it also includes information such as open files and pending signals.
  • Thread: a process can include multiple threads, and the address space is shared between threads.
  • Task: the object scheduled by the kernel (the core of the operating system), which can be a process or a thread.
  • the task_struct structure is used uniformly in the Linux kernel to describe processes and threads.
  • a processor core includes at least one scheduling queue. After a task wakes up, it joins the scheduling queue of a certain processor core and waits for scheduling. Optionally, according to its scheduling policy, the task joins one of the scheduling queues in the processor core: the scheduling queue of the deadline scheduler (Deadline Runqueue, dl_rq), the scheduling queue of the real-time scheduler (Real Time Runqueue, rt_rq), or the scheduling queue of the completely fair scheduler (Completely Fair Runqueue, cfs_rq).
  • each processor core includes a process queue (Runqueue, rq) for managing dl_rq, rt_rq and cfs_rq in the processor core.
  • Process queue (Runqueue, rq)
  • all DL tasks must join dl_rq and wait for scheduling before running.
  • all RT tasks must join rt_rq and wait for scheduling before running.
  • all CFS tasks must join cfs_rq and wait for scheduling before running.
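  • as a purely illustrative aid (not text of the application), the C sketch below shows how a per-core run queue could aggregate the three sub-queues named above; it is loosely modeled on, and far simpler than, the Linux kernel's struct rq:

        /* Minimal sketch of a per-core run queue; field names are illustrative
         * and heavily reduced compared with the real Linux struct rq. */
        struct dl_rq  { unsigned int nr_running; /* DL tasks queued  */ };
        struct rt_rq  { unsigned int nr_running; /* RT tasks queued  */ };
        struct cfs_rq { unsigned int nr_running; /* CFS tasks queued */ };

        struct rq {                /* one instance per processor core     */
            struct dl_rq  dl;      /* DL tasks wait here before running   */
            struct rt_rq  rt;      /* RT tasks wait here before running   */
            struct cfs_rq cfs;     /* CFS tasks wait here before running  */
        };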
  • Scheduling delay: the time from when a task wakes up and joins the scheduling queue until it actually starts executing.
  • in a real-time system, the scheduling delay can be as low as the microsecond (us) level.
  • Context switching: switching between running states of the kernel, and/or the kernel switching tasks on a processor core.
  • the Linux kernel includes the following running states: user mode, kernel mode (running in process context), and kernel mode (running in interrupt context). Switching between these running states, as well as switching between tasks, is called a context switch. Context switching has a certain overhead because state information such as registers and page tables must be saved and restored.
  • Preemption: after a high-priority task wakes up, if the currently executing task has a lower priority, the currently executing task is immediately switched out in favor of the high-priority task. In this way, high-priority tasks can be executed as soon as possible with a low scheduling delay.
  • Off preemption: turning off the above preemption capability, because the kernel needs to avoid concurrency races introduced by preemption when handling certain critical resources.
  • Throughput: the amount of data processed by the system per unit time, reflecting the data processing capability of the system. It is usually negatively correlated with the number of context switches: the more context switches, the lower the throughput; conversely, the fewer context switches, the higher the throughput.
  • Interrupt: when an unusual or unexpected urgent event occurs in the system, the processor temporarily suspends the currently executing program and turns to execute the corresponding event handler; afterwards it either returns to the interrupted program to continue execution or schedules a new process to execute. The event that causes an interrupt is called an interrupt source.
  • the signal sent by the interrupt source to the processor to request interrupt processing is called an interrupt request.
  • the process by which the processor processes an interrupt request is called interrupt handling.
  • the scheduling subsystem is one of the core modules in the Linux kernel.
  • the scheduling subsystem is used for task scheduling, such as determining the task to be executed, the execution start time and execution duration of the task, and so on.
  • five scheduling classes are defined, as shown in FIG. 2 .
  • the five scheduling classes are the STOP scheduling class, the Deadline (DL) scheduling class, the Real Time (RT) scheduling class, the Completely Fair Scheduler (CFS) scheduling class, and the idle (IDLE) scheduling class, where STOP and IDLE are two special scheduling classes not used for scheduling common tasks.
  • the kernel will preferentially select a task to execute from the scheduling queue of the RT scheduling class.
  • the tasks in the CFS scheduling class can only be scheduled for execution when all the tasks in the RT scheduling class have finished executing, or the processor core is voluntarily given up (for example, by sleeping), or the running time of an RT task exceeds the pre-configured threshold.
  • the RT scheduling class and the CFS scheduling class take over most of the tasks in the operating system.
  • the CFS scheduling class is the default scheduling algorithm of the Linux kernel. This algorithm focuses on ensuring the fairness of scheduling, and can guarantee that all processes will be scheduled within a certain period of time. But precisely because of its fairness, even if the task has the highest priority, it cannot be guaranteed that the task will always be scheduled and executed first (even if the nice value is adjusted to -20), that is to say, the scheduling delay is uncontrollable.
  • the RT scheduling class is an algorithm that schedules strictly according to the priority. After a task wakes up, if it has a higher priority than the currently running task, it will trigger the preemption of the current running task, so that the processor core will immediately switch to execute the scheduled task.
  • preempting in favor of the woken high-priority task guarantees the scheduling delay of the high-priority task. In order to ensure timely drawing (for example, a mobile phone with a refresh rate of 60 frames per second needs to complete one frame of drawing within 16.7 ms), the scheduling delay of designated threads such as the UI/Render thread and the Surfaceflinger thread must be strictly guaranteed. If the scheduling delay is too large, frames cannot be drawn in time and the display freezes. Therefore, the system recognizes these designated threads and configures them as RT tasks.
  • the scheduling queue of a processor core is shown in FIG. 3 .
  • the parameter prio of the task indicates the normalized priority of the task, and the value range is [0, 139].
  • the value of the parameter prio is in [100, 139]
  • the value of the parameter prio is in [0, 99]
  • the task is a task managed by the RT scheduling class.
  • the value of the parameter prio is negatively correlated with the priority of the task, that is, the lower the value of the parameter prio, the higher the priority of the task.
  • the parameter prio of task 1 is 97
  • the parameter prio of task 2 is 98.
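  • for illustration only, a small C helper (hypothetical, not part of the application) that maps the normalized prio value described above to a scheduling class according to the ranges given in the text:

        /* prio in [0, 99] -> RT scheduling class; prio in [100, 139] -> CFS
         * scheduling class; a lower prio value means a higher priority. */
        enum sched_class_kind { SCHED_KIND_RT, SCHED_KIND_CFS, SCHED_KIND_INVALID };

        static enum sched_class_kind classify_by_prio(int prio)
        {
            if (prio >= 0 && prio <= 99)
                return SCHED_KIND_RT;    /* e.g. task 1 (prio 97), task 2 (prio 98) */
            if (prio >= 100 && prio <= 139)
                return SCHED_KIND_CFS;
            return SCHED_KIND_INVALID;
        }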
  • the scheduling queue of core 0 includes task B, task A, task X, and task Y in order of execution.
  • the scheduling queue of core 0 includes task A, task B, task X, and task Y in order of execution.
  • a preemption-disabling operation is in effect. During this period, if the Surfaceflinger thread joins the scheduling queue, it cannot be scheduled immediately because preemption is disabled; it usually has to wait for a period of time (such as 4 ms) before being scheduled for execution, which eventually makes frame drawing too late and causes frame loss. In the case of heavy network traffic (such as updating applications in batches or downloading videos), the scheduling delay caused by soft interrupt processing can even reach 10 ms, which easily leads to similar frame-loss and freeze problems.
  • Interrupt is an asynchronous event processing mechanism, which is used to improve the concurrent processing capability of the system.
  • Interrupt handler: when an interrupt request occurs, it triggers the execution of the interrupt handler, and the interrupt handler is divided into two parts: the upper half of the interrupt and the lower half of the interrupt.
  • the upper half of the interrupt corresponds to the hard interrupt and is used to process the interrupt quickly: the processor core calls the registered interrupt function according to the interrupt table, and this interrupt function calls the corresponding function in the driver (Driver).
  • the lower half of the interrupt corresponds to the soft interrupt and is used to asynchronously handle the work that the upper half of the interrupt did not finish.
  • the ksoftirqd process in the Linux kernel is specially responsible for the processing of soft interrupts. When it receives a soft interrupt, it will call the processing function corresponding to the corresponding soft interrupt, such as the net_rx_action function.
  • the main reason why the Surfaceflinger thread cannot be scheduled in time is that preemption is disabled for both the upper half and the lower half of the interrupt, and the processing of the lower half of the interrupt takes a long time. After the processing of the lower half of the interrupt is completed and preemption is re-enabled, the Surfaceflinger thread is scheduled for execution, but by then it is too late to finish drawing the frame.
  • a processing method in the related art is to thread the interrupt processing, so that the interrupt processing can be preempted, so that high-priority tasks can be scheduled in time, effectively reducing the scheduling delay.
  • one example is the PREEMPT_RT real-time patch set maintained outside the Linux mainline; this patch set threads the kernel's interrupt handling and allows interrupt handling threads to be preempted by high-priority tasks. In this way, high-priority tasks are not left unscheduled because of interrupt processing, thereby reducing the scheduling delay caused by interrupt processing.
  • the interrupt binding method binds interrupt processing to a target processor core, where the target processor core is at least one pre-set processor core; according to processor core load, specified threads are scheduled to processor cores other than the target processor core, so that the number of interrupts on those other processor cores is controllable, and their scheduling delay is also controllable and does not become excessive.
  • the electronic device includes 8 processor cores (core 0 to core 7). As shown in Figure 5a, when the interrupt processing strategy adopted is interrupt balancing, interrupt processing is roughly amortized across the processor cores, that is, the interrupt processing time (in us) of each processor core is similar. Under this strategy, interrupt processing may migrate back and forth, resulting in uncontrollable scheduling delay.
  • as shown in Figure 5b, when the interrupt processing strategy adopted is interrupt core-binding, interrupt processing is bound, for example, to core 0 and core 4 (the other processor cores still carry some interrupt load because some interrupt requests, such as the clock interrupt, cannot be bound to cores). According to processor core load, the specified thread is scheduled to processor cores other than core 0 and core 4 to control its scheduling delay.
  • embodiments of the present application provide an interrupt scheduling method, electronic equipment, and a storage medium, so as to solve the problems existing in the above-mentioned related technologies.
  • the maximum value of the scheduling delay caused by interrupt processing configured for the first processor core is the first delay threshold; when the scheduling delay value of the first processor core is greater than the first delay threshold, the current part of the interrupt requests of the first processor core is migrated and bound to the second processor core, which ensures that the scheduling delay caused by interrupt processing on the first processor core is controllable, avoids the problem of excessive scheduling delay caused by interrupt processing in the related art, ensures the real-time nature of business processing, and improves the overall performance of the electronic device.
  • FIG. 6 shows a schematic diagram of an electronic device involved in an embodiment of the present application.
  • the electronic device includes a hardware layer 610 , a kernel layer 620 and a user layer 630 .
  • the user layer 630 may run at least one thread, and each thread is used for processing tasks.
  • the task scheduling process and interrupt response process of each thread are mainly implemented by the kernel layer 620 .
  • the hardware layer 610 is the hardware basis in the electronic device.
  • the electronic equipment may be base station equipment, transmission equipment, industrial robots and other electronic equipment that have certain requirements for real-time task processing.
  • the interrupt scheduling method provided by the embodiment of the present application can be applied to application scenarios that require fast response, such as autonomous driving, industrial control, and virtual reality (Virtual Reality, VR). Through reasonable configuration, task scheduling delay can be reduced and key tasks can be scheduled in time.
  • Hardware layer 610 includes peripherals 612 , interrupt controller 614 and at least one processor 616 .
  • the peripheral device 612 includes a wireless network card, a Bluetooth device, and the like.
  • the processor 616 may be a single-core processor or a multi-core processor.
  • the peripheral device 612 generates an interrupt request when processing data (for example, a wireless network card sends and receives packets), and routes the interrupt request to one of the multiple processor cores through the interrupt controller 614 .
  • the kernel layer 620 is the layer where the operating system kernel, virtual storage space, and driver applications run.
  • the operating system kernel is the Linux kernel.
  • the kernel layer 620 includes an interrupt subsystem 622 and a scheduling subsystem 624 .
  • the interrupt subsystem 622 includes an interrupt processing module 640 , an interrupt load collection module 641 , an interrupt load calculation module 642 , an interrupt load information statistics module 643 , an interrupt load policy module 644 , and an interrupt load balancing execution module 645 .
  • the scheduling subsystem 624 includes a specified thread configuration module 651 , a task core selection policy module 652 and a task scheduling execution module 653 .
  • the interrupt processing module 640 obtains the interrupt request of the processor core, and starts interrupt processing in the interrupt subsystem 622, and the interrupt processing includes hard interrupt processing and soft interrupt processing.
  • the interrupt load collection module 641 records the interrupt processing duration, and sends the interrupt processing duration to the interrupt load calculation module 642 .
  • the interrupt processing duration includes the total duration of hard interrupt processing and soft interrupt processing.
  • the interrupt load calculation module 642 uses a preset algorithm to determine the scheduling delay value of the current processor core according to the interrupt processing duration provided by the interrupt load collection module 641, and the unit of the calculation result is us or ns.
  • the interrupt load information statistics module 643 saves and summarizes the scheduling delay information on the processor core, and the scheduling delay information includes the summary information of the scheduling delay value of each interrupt request on the processor core.
  • the interrupt load policy module 644 obtains the first delay threshold configured for the first processor core by the user layer 630, and makes a decision according to the first delay threshold and the scheduling delay information stored in the interrupt load information statistics module 643.
  • a core-binding policy is determined and sent to the interrupt load balancing execution module 645, so that part of the interrupt requests are migrated out of the first processor core and bound to other (second) processor cores, ensuring that the scheduling delay value of the first processor core is less than the first delay threshold.
  • the interrupt load balancing execution module 645 receives the core binding policy from the interrupt load policy module 644, operates the interrupt controller 614 according to the core binding policy, and binds the corresponding interrupt request to the second processor core.
  • the scheduling subsystem 624 is responsible for scheduling and executing all processes/threads in the system.
  • the specified thread configuration module 651 supports configuring the second delay threshold for the specified thread.
  • the task core selection policy module 652 is used to check the interrupt load of the first processor core during the task core selection process; when the scheduling delay value is less than or equal to the second delay threshold, the corresponding processor core is considered to meet the condition and is taken as a candidate for further screening.
  • the task scheduling execution module 653 is used to add the process/thread to the scheduling queue of the corresponding processor core. Since the interrupt load of the processor core has already been checked in the previous step, the corresponding process/thread can be expected to be scheduled for execution within a certain period of time.
  • the user layer 630 is the layer where normal applications run.
  • the user layer includes an application framework layer (such as a Framework layer).
  • the user layer 630 includes an interrupt load management module 632 and a specified thread identification/control module 634 .
  • the user layer 630 is responsible for configuring the first latency threshold of the first processor core, and identifying and configuring specified threads.
  • the interrupt load management module 632 is responsible for monitoring the overall interrupt load of the system; when the interrupt load of a certain processor core is too large, it selects a suitable second processor core (for example, a processor core with a currently lighter interrupt load, which can reduce interrupt migration) and configures the first latency threshold for that processor core, ensuring that there is at least one processor core with a controllable interrupt load in each cluster.
  • the specified thread identification/control module 634 is responsible for identifying the thread responsible for drawing frames (such as UI/Render) in the user layer 630, and configuring the second delay threshold for it.
  • FIG. 7 shows a flowchart of an interrupt scheduling method provided by an exemplary embodiment of the present application.
  • the embodiment of the present application is illustrated by taking the interrupt scheduling method applied to the electronic device shown in FIG. 6 as an example.
  • the interrupt scheduling method includes:
  • step 701 the user layer configures a first delay threshold of the first processor core, and sends first configuration information to the kernel layer.
  • the user layer determines that at least one processor core in the electronic device is the first processor core, and the maximum value of the scheduling delay caused by interrupt processing configured for the first processor core is the first delay threshold.
  • the user layer sends the first configuration information to the kernel layer.
  • the first processor core is a processor core
  • the first configuration information includes a processor core identifier of the first processor core and a first latency threshold.
  • the first processor core is at least two processor cores
  • the first configuration information includes at least two processor core identifiers and their corresponding first delay thresholds
  • the first delay thresholds respectively corresponding to the at least two processor core identifiers can be the same or different; this embodiment of the present application does not limit this, and for convenience of description only one first processor core is taken as an example.
  • the processor core identifier is used to uniquely identify the first processor core among the multiple processor cores of the electronic device.
  • the first delay threshold is the maximum value of the scheduling delay caused by interrupt processing configured for the first processor core.
  • the first delay threshold may be dynamically adjusted subsequently according to the scheduling delay value in each processor core and/or the completion of drawing frames.
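  • as a non-authoritative sketch, the first configuration information can be thought of as a (processor core identifier, first delay threshold) pair; the C structure and field names below are hypothetical illustrations, not an interface defined by this application:

        #include <stdint.h>

        /* Hypothetical layout of the "first configuration information". */
        struct irq_delay_config {
            uint32_t cpu_id;              /* processor core identifier, e.g. 0..7   */
            uint64_t delay_threshold_us;  /* first delay threshold, in microseconds */
        };

        /* Example: cap interrupt-induced scheduling delay on core 2 at 500 us. */
        static const struct irq_delay_config example_cfg = {
            .cpu_id = 2,
            .delay_threshold_us = 500,
        };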
  • Step 702 the kernel layer judges whether the scheduling delay value of the first processor core is greater than the first delay threshold according to the first configuration information.
  • the kernel layer receives the first configuration information sent by the user layer, where the first configuration information includes the processor core identifier of the first processor core and the first delay threshold.
  • when the kernel layer receives the first configuration information sent by the user layer or updates the scheduling delay values of the processor cores, it determines whether the scheduling delay value of the first processor core is greater than the first delay threshold. If the current scheduling delay value of the first processor core is less than or equal to the first delay threshold, the process of this embodiment ends; if the scheduling delay value of the first processor core is greater than the first delay threshold, step 703 is executed.
  • the scheduling latency value of the first processor core is used to indicate the current interrupt load of the first processor core.
  • the scheduling delay value is positively correlated with the interrupt load, that is, the greater the interrupt load, the greater the scheduling delay value.
  • the scheduling latency value of the first processor core is an actual value or an estimated value of the current scheduling latency of the first processor core.
  • the scheduling delay value of the first processor core is an estimated value of the scheduling delay determined based on the current interrupt processing duration of the first processor core.
  • Step 703 If the scheduling latency value of the first processor core is greater than the first latency threshold, the kernel layer performs interrupt equalization processing and sends second configuration information to the interrupt controller.
  • the kernel layer determines an interrupt balancing strategy, and sends second configuration information indicating the interrupt balancing strategy to the interrupt controller.
  • the kernel layer sends the second configuration information to the interrupt controller through the scheduling latency balancing execution module.
  • the interrupt balancing strategy indicated by the second configuration information is to migrate and bind part of the current interrupt requests of the first processor core to the second processor core.
  • the second processor core is different from the first processor core, and after the migration and binding, the scheduling delay value of the first processor core is less than or equal to the first delay threshold.
  • the second configuration information includes an interrupt number to be migrated out of the first processor core and a processor core identifier of the second processor core.
  • the interrupt numbers to be moved out are m interrupt numbers to be moved out, and m is a positive integer.
  • One of the interrupt numbers corresponds to at least one interrupt request.
  • the second configuration information includes the interrupt number to be migrated out and the binding relationship information corresponding to that interrupt number, where the binding relationship information corresponding to one interrupt number is used to indicate the binding relationship between the at least one interrupt request corresponding to that interrupt number and the processor cores.
  • for example, the electronic device includes 8 processor cores, the interrupt number to be migrated out is interrupt number 10, and interrupt number 10 corresponds to multiple interrupt requests; the second configuration information then includes interrupt number 10 and 8 bits of information corresponding to interrupt number 10.
  • when a bit is the first value, it indicates that the multiple interrupt requests corresponding to the interrupt number are bound to the processor core corresponding to that bit; when the bit is the second value, it indicates that the multiple interrupt requests corresponding to the interrupt number have no binding relationship with the processor core corresponding to that bit.
  • the first value is 1, and the second value is 0. This embodiment of the present application does not limit it.
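  • to make the per-interrupt-number binding bits concrete, the C sketch below builds such an 8-bit mask (bit i set to the first value 1 means "bound to core i"); purely as an analogy, it then writes the mask the way mainline Linux exposes IRQ affinity to user space via /proc/irq/<nr>/smp_affinity, which is not the kernel-to-interrupt-controller path described in this application:

        #include <stdio.h>

        /* Build a binding mask: bit i = 1 binds the interrupt number's requests
         * to core i (first value); bit i = 0 means no binding (second value). */
        static unsigned int make_affinity_mask(const int *cores, int n)
        {
            unsigned int mask = 0;
            for (int i = 0; i < n; i++)
                mask |= 1u << cores[i];
            return mask;
        }

        int main(void)
        {
            int cores[] = { 0, 4 };                            /* bind to core 0 and core 4 */
            unsigned int mask = make_affinity_mask(cores, 2);  /* mask == 0x11              */

            /* Analogy only: steer the example interrupt number 10 via procfs. */
            FILE *f = fopen("/proc/irq/10/smp_affinity", "w");
            if (f) {
                fprintf(f, "%x\n", mask);
                fclose(f);
            }
            return 0;
        }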
  • the first processor core currently carries multiple interrupt loads, and there is a one-to-one correspondence between the multiple interrupt loads and multiple interrupt numbers. If the current scheduling delay value of the first processor core is greater than the first delay threshold, the kernel layer determines the absolute value of the difference between the current scheduling delay value of the first processor core and the first delay threshold as the first difference.
  • at least one interrupt load is selected from the multiple interrupt loads according to a preset algorithm, and the interrupt number corresponding to the selected interrupt load is determined as an interrupt number to be migrated out, such that the total scheduling delay corresponding to the interrupt numbers to be migrated out is greater than the first difference; the interrupt balancing strategy is then determined to be migrating and binding the interrupt requests corresponding to the interrupt numbers to be migrated out to the second processor core.
  • the preset algorithm is, for example, to select interrupt numbers sequentially in descending order of their corresponding interrupt loads.
  • the preset algorithm may also adopt other possible implementation manners, which are not limited in this embodiment of the present application.
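  • a minimal sketch of the selection just described, assuming the per-interrupt scheduling delay contributions are already known; the structure and function names are invented for this illustration:

        #include <stdint.h>
        #include <stdlib.h>

        struct irq_load {
            int      irq;        /* interrupt number                            */
            uint64_t delay_us;   /* scheduling delay this interrupt contributes */
        };

        static int cmp_desc(const void *a, const void *b)
        {
            const struct irq_load *x = a, *y = b;
            if (x->delay_us == y->delay_us) return 0;
            return x->delay_us < y->delay_us ? 1 : -1;
        }

        /* Pick interrupt numbers to migrate off the first core until the total
         * migrated delay exceeds first_diff (current delay minus first threshold).
         * Returns how many entries were chosen; they are loads[0..count-1]. */
        static int pick_irqs_to_migrate(struct irq_load *loads, int n, uint64_t first_diff)
        {
            uint64_t moved = 0;
            int count = 0;

            qsort(loads, n, sizeof(*loads), cmp_desc);   /* heaviest load first */
            while (count < n && moved <= first_diff)
                moved += loads[count++].delay_us;
            return count;
        }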
  • the second processor core is other processor cores in the electronic device except the first processor core.
  • the second processor core is any processor core except the first processor core.
  • the second processor core is any set of at least two processor cores other than the first processor core.
  • the second processor core is at least one processor core other than the first processor core whose scheduling delay value is less than the third delay threshold.
  • the kernel layer traverses to query the scheduling delay values of other processor cores, and determines the processor core whose scheduling delay value is less than the third delay threshold as the second processor core.
  • the third delay threshold is a user-defined setting or a default setting. This embodiment of the present application does not limit it.
  • the second processor core is at least one processor core other than the first processor core with the smallest scheduling delay value.
  • the kernel layer traverses and queries the scheduling delay values of the other processor cores, sorts them in ascending order of scheduling delay value, and determines the first n processor cores after sorting as the second processor cores, where n is a positive integer.
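  • as an illustration of one of the alternatives above (scheduling delay value below the third delay threshold), a hypothetical C helper that collects candidate second processor cores; the ascending-sort variant would instead order the remaining cores by delay value and take the first n:

        #include <stdint.h>

        #define MAX_CORES 8

        /* Collect every core other than the first core whose current scheduling
         * delay value is below the third delay threshold; returns the count. */
        static int collect_second_cores(const uint64_t delay_us[MAX_CORES],
                                        int first_core, uint64_t third_threshold_us,
                                        int out_cores[MAX_CORES])
        {
            int n = 0;
            for (int cpu = 0; cpu < MAX_CORES; cpu++) {
                if (cpu == first_core)
                    continue;
                if (delay_us[cpu] < third_threshold_us)
                    out_cores[n++] = cpu;
            }
            return n;
        }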
  • step 704 the interrupt controller migrates and binds some current interrupt requests of the first processor core to the second processor core according to the second configuration information.
  • the interrupt controller receives the second configuration information sent by the kernel layer and executes the interrupt balancing strategy indicated by it, that is, it migrates and binds part of the current interrupt requests of the first processor core to the second processor core; the second processor core is different from the first processor core, and the scheduling delay value of the first processor core after migration and binding is less than or equal to the first delay threshold.
  • the second processor core is at least one processor core other than the first processor core.
  • the second configuration information includes an interrupt number to be migrated out of the first processor core and a processor core identifier of the second processor core.
  • the interrupt controller migrates and binds at least one interrupt request corresponding to the interrupt number to the second processor core according to the second configuration information.
  • the interrupt numbers to be moved out are m interrupt numbers to be moved out, and m is a positive integer.
  • the interrupt controller migrates and binds part of the current interrupt requests of the first processor core to the second processor core in a preset amortization mode, where the preset amortization mode is used to indicate that, after migration and binding, the absolute values of the differences between the scheduling delay values of the other processor cores are smaller than the first difference threshold.
  • the other processor cores are all processor cores in the electronic device except the first processor core, and the first difference threshold is a custom setting or a default setting. This embodiment of the present application does not limit it.
  • the kernel layer dynamically adjusts the interrupt binding according to the current scheduling delay value of the first processor core and the first delay threshold configured by the user layer. If the current scheduling delay value of the first processor core is greater than the first delay threshold, the interrupt controller migrates and binds some of the current interrupt requests of the first processor core to the second processor core, so that the scheduling delay value of the first processor core after migration and binding is less than or equal to the first delay threshold, reducing the scheduling delay on the first processor core. The migrated interrupt requests can be reasonably amortized over the other processor cores to ensure the concurrency of interrupt processing and to avoid the problem of a single overloaded processor core raising the frequency of the entire cluster and wasting power.
  • the two are related.
  • the kernel enters hard interrupt processing to perform simple hardware configuration (which takes relatively little time), and then triggers soft interrupt processing through the raise_softirq_irqoff interface.
  • the data packet is actually processed in the handler function (which takes a long time). In this scenario, soft interrupt processing would not be triggered without the hard interrupt processing, so both should be included in the overhead statistics of the same interrupt number.
  • in the related art, scheduling delay values are counted at the granularity of processor cores, and load statistics at the granularity of interrupts are not supported. Therefore, some current software implementing the interrupt balancing function can only estimate the scheduling delay value of a given interrupt number from the number of interrupts, while the processing times of different interrupts are inconsistent; such statistics are inaccurate and cannot well support the interrupt balancing processing in the embodiment of the present application.
  • for example, the hard interrupt processing of the network card itself does not take much time, but its soft interrupt processing is time-consuming. If the calculation of the scheduling delay value in the related art does not link the two, it may create the illusion that the network card interrupt has little overhead.
  • the embodiment of this application proposes a per-interrupt load tracking (Per-Interrupt Load Tracking, PILT) method at interrupt granularity, that is, scheduling delay value statistics are performed per interrupt request, and the corresponding soft interrupt processing overhead is included in the scheduling delay value corresponding to that interrupt request.
  • PILT Per-Interrupt Load Tracking
  • the kernel layer needs to count the scheduling delay value of the first processor core .
  • the kernel layer performs real-time statistics on the scheduling delay value according to the received interrupt request, and the process of scheduling delay value statistics includes the following steps, as shown in Figure 8:
  • step 801 the interrupt processing duration corresponding to the interrupt request is acquired, and the interrupt processing duration includes the total duration of hard interrupt processing and soft interrupt processing.
  • after receiving an interrupt request, the start time and end time of the hard interrupt processing corresponding to the interrupt request are determined, and the absolute value of their difference is determined as the first processing duration; the start time and end time of the soft interrupt processing corresponding to the interrupt request are determined, and the absolute value of their difference is determined as the second processing duration; the sum of the first processing duration and the second processing duration of the interrupt request is determined as the interrupt processing duration of the interrupt request.
  • the interrupt number of the interrupt request and the soft interrupt type of the soft interrupt processing triggered by it are saved, where the interrupt number of an interrupt request is used to uniquely identify the interrupt request among multiple interrupt requests.
  • Soft interrupt types include network receive interrupt, network send interrupt, timing interrupt, scheduling interrupt, read-copy update (Read-Copy Update, RCU) lock and other types.
  • the interrupt number of the interrupt request and the soft interrupt type of the soft interrupt triggered by it are stored in an array.
  • for example, the interrupt number of an interrupt request is "200" and the soft interrupt type is the network receive interrupt "NET_RX"; then "softirq_type: NET_RX; hw_irq: 200" is stored in an array.
  • soft interrupt processing obtains, from the array saved in the previous step, the first array member that matches its soft interrupt type. For example, the soft interrupt for network packet receiving corresponds to the soft interrupt type "NET_RX"; the interrupt number in the first array member matching the soft interrupt type "NET_RX" is obtained, and the second processing duration of the soft interrupt processing is added to the interrupt processing duration of that interrupt number.
  • the first processing time of interrupt number 200 is delta_hardirq200
  • the second processing time of interrupt number 200 is delta_softirq200
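  • a simplified, hypothetical C sketch of the bookkeeping in step 801: the hard interrupt duration is charged to its interrupt number, the (soft interrupt type, interrupt number) pair is recorded, and a later soft interrupt of that type charges its duration back to the same interrupt number; array sizes and names are illustrative:

        #include <stdint.h>

        #define MAX_IRQS    1024
        #define MAX_PENDING   64

        enum softirq_type { SOFTIRQ_NET_RX, SOFTIRQ_NET_TX, SOFTIRQ_TIMER,
                            SOFTIRQ_SCHED, SOFTIRQ_RCU };

        static uint64_t irq_busy_ns[MAX_IRQS];   /* hard + soft time per interrupt number */

        struct pending_softirq {                 /* e.g. { SOFTIRQ_NET_RX, 200 } */
            enum softirq_type type;
            int               hw_irq;
        };
        static struct pending_softirq pending[MAX_PENDING];
        static int pending_cnt;

        /* Hard interrupt finished: charge its duration (first processing duration)
         * and remember which soft interrupt type it raised. */
        static void account_hardirq(int hw_irq, uint64_t start_ns, uint64_t end_ns,
                                    enum softirq_type raised)
        {
            irq_busy_ns[hw_irq] += end_ns - start_ns;
            if (pending_cnt < MAX_PENDING)
                pending[pending_cnt++] = (struct pending_softirq){ raised, hw_irq };
        }

        /* Soft interrupt finished: charge its duration (second processing duration)
         * to the first recorded interrupt number with a matching soft interrupt type. */
        static void account_softirq(enum softirq_type type, uint64_t start_ns, uint64_t end_ns)
        {
            for (int i = 0; i < pending_cnt; i++) {
                if (pending[i].type == type) {
                    irq_busy_ns[pending[i].hw_irq] += end_ns - start_ns;
                    pending[i] = pending[--pending_cnt];   /* consume the entry */
                    break;
                }
            }
        }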
  • Step 802 according to the interrupt processing duration of the interrupt request, a preset algorithm is used to determine the scheduling delay value corresponding to the interrupt request.
  • the preset algorithm includes a per-entity load tracking algorithm (Per-Entity Load Tracking, PELT) or a window-assisted load tracking algorithm (Window Assisted Load Tracking, WALT).
  • PELT entity load tracking algorithm
  • WALT Window Assisted Load Tracking
  • when the WALT algorithm is used, a window length is preset (for example, 10 ms), the average value or maximum value of the interrupt processing duration over the most recent preset number of windows (for example, a preset number of 5) is computed, and that average or maximum value is determined as the scheduling delay value corresponding to the interrupt request.
  • when the PELT algorithm is used, the value of an attenuation factor is preset, and the attenuation factor is used to perform a weighted summation of the interrupt processing durations in a preset number of windows; the attenuation factor is a positive number less than 1. For example, if the attenuation factor is y and the interrupt processing durations in the three windows are 10, 9, and 8 respectively, then the scheduling delay value corresponding to the interrupt request is 10·y + 9·y² + 8·y³.
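  • two hedged C sketches of the windowed statistics above: a WALT-style average over the most recent windows and a PELT-style decayed weighted sum that reproduces the example 10·y + 9·y² + 8·y³ when the windows hold {10, 9, 8}:

        /* WALT-style: average interrupt processing duration over the last n windows. */
        static double walt_delay(const double win_us[], int n)
        {
            double sum = 0.0;
            for (int i = 0; i < n; i++)
                sum += win_us[i];
            return n > 0 ? sum / n : 0.0;
        }

        /* PELT-style: decayed weighted sum; win_us[0] is the most recent window and
         * is weighted by y, the next by y^2, and so on, as in the example above. */
        static double pelt_delay(const double win_us[], int n, double y)
        {
            double sum = 0.0, w = y;
            for (int i = 0; i < n; i++) {
                sum += win_us[i] * w;
                w *= y;
            }
            return sum;
        }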
  • information about the interrupt number and the scheduling delay value corresponding to the interrupt request is saved.
  • the information of the interrupt number and the scheduling delay value corresponding to the interrupt request is kept in a designated variable of the first processor core, for example, the designated variable is a per processor core variable.
  • Step 803 Summing the scheduling delay value corresponding to the interrupt request and the current scheduling delay value of the first processor core to obtain an updated scheduling delay value.
  • a weighted sum is performed on the scheduling delay value corresponding to the interrupt request and the current scheduling delay value of the first processor core to obtain an updated scheduling delay value.
  • the embodiment of the present application tracks and calculates the load overhead of each interrupt request, avoiding the coarse approach of the past that could only estimate load from the number of interrupts, and thus supports more accurate interrupt balancing; the time overhead of soft interrupt processing triggered by hard interrupt processing is included in the corresponding interrupt processing duration, making the scheduling delay value determined from the interrupt processing duration more accurate, so that subsequent interrupt balancing can be performed more precisely.
  • in the related art, the scheduling delay caused by interrupt processing is not considered when selecting task cores, resulting in excessive scheduling delay; or a uniform interrupt load criterion is applied to all threads, so that a background thread and the specified thread are selected onto the same processor core, causing unpredictable impacts (for example, a lock making the specified thread unschedulable).
  • the embodiment of the present application supports configuring a scheduling delay requirement for the specified thread, and when selecting a core it judges whether the scheduling delay requirement is greater than or equal to the scheduling delay value of the processor core; the corresponding processor core is selected only when the requirement is met, ensuring that the specified thread can be scheduled in time and reducing the probability of frame loss.
  • the task selection process includes the following steps, as shown in Figure 9:
  • step 901 the user layer configures a second delay threshold for a specified thread of the foreground application, and sends third configuration information to the kernel layer.
  • the user layer determines the designated thread of the foreground application, and the maximum value of the scheduling delay configured for the designated thread is the second delay threshold.
  • the user layer sends third configuration information to the kernel layer, where the third configuration information includes a thread identifier of a specified thread and a second delay threshold.
  • the thread identifier of the specified thread is used to uniquely identify the specified thread among multiple threads.
  • the designated thread is a thread that requires a scheduling delay.
  • the designated thread includes a frame drawing thread and/or a process of an inter-process communication mechanism.
  • the specified thread is UI/Render thread, Surfaceflinger thread, and communication-related binder thread.
  • the embodiment of the present application does not limit the type of the specified thread.
  • the second delay threshold is an upper limit value of a scheduling delay value configured for the specified thread.
  • the second delay threshold can be dynamically adjusted later according to the completion of frame drawing. For example, if the absolute value of the difference between the actual end time of drawing the previous frame and the specified end time is greater than the second difference threshold, the second delay threshold is increased; if that absolute value is less than or equal to the second difference threshold, the second delay threshold is reduced to ensure that the thread can be scheduled faster.
  • the second difference threshold is a custom setting or a default setting. This embodiment of the present application does not limit it.
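  • a sketch of the adjustment rule just described, with an invented step size and function name; it only encodes the comparison stated in the text (actual versus specified end time of the previous frame against the second difference threshold):

        #include <stdint.h>
        #include <stdlib.h>

        /* Returns the new second delay threshold (all times in microseconds). */
        static uint64_t adjust_second_threshold(uint64_t threshold_us,
                                                int64_t actual_end_us,
                                                int64_t target_end_us,
                                                uint64_t second_diff_us,
                                                uint64_t step_us)
        {
            uint64_t diff = (uint64_t)llabs((long long)(actual_end_us - target_end_us));

            if (diff > second_diff_us)
                return threshold_us + step_us;     /* per the text: increase the threshold */
            return threshold_us > step_us ?        /* otherwise decrease it, so the thread */
                   threshold_us - step_us : 0;     /* can be scheduled faster              */
        }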
  • The user layer identifies the specified thread, obtains the thread identifier of the specified thread, configures the second delay threshold for the specified thread, and sends the third configuration information to the kernel layer in a preset manner.
  • The preset manner is, for example, an input/output control (ioctl) call.
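The following user-space sketch shows how the third configuration information could be handed to the kernel layer via ioctl. The device node path, request code, and structure layout are hypothetical; only the ioctl mechanism itself is taken from the text.

```c
#include <fcntl.h>
#include <linux/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical device node and request code. */
#define SCHED_DELAY_DEV        "/dev/sched_delay_ctl"
#define SCHED_DELAY_SET_THREAD _IOW('S', 1, struct third_config)

struct third_config {
    int32_t  tid;           /* thread identifier of the specified thread */
    uint32_t threshold_us;  /* second delay threshold                    */
};

int send_third_config(int32_t tid, uint32_t threshold_us)
{
    struct third_config cfg = { .tid = tid, .threshold_us = threshold_us };
    int fd = open(SCHED_DELAY_DEV, O_RDWR);

    if (fd < 0) {
        perror("open");
        return -1;
    }
    if (ioctl(fd, SCHED_DELAY_SET_THREAD, &cfg) < 0) {
        perror("ioctl");
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}
```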
  • Step 902 According to the third configuration information, the kernel layer determines the processor core that satisfies the preset core selection condition as the target processor core.
  • the preset core selection condition includes that the current scheduling delay value of the processor core is less than or equal to the second time delay threshold.
  • After the kernel layer receives the third configuration information sent by the user layer, it checks the current state of the specified thread corresponding to the thread identifier. If the specified thread is executing, no operation is performed. If the specified thread is not executing, the kernel layer tries to wake it up; if the specified thread is not allowed to be woken up (for example, it is waiting for a lock), the second delay threshold required by the scheduling delay is modified (for example, reduced). If the specified thread is allowed to be woken up, the core selection process is entered.
  • The kernel layer judges whether a processor core satisfies the preset core selection condition, where the preset core selection condition includes that the current scheduling delay value of the processor core is less than or equal to the second delay threshold. If the processor core meets the preset core selection condition, that processor core is determined as the target processor core and step 903 is executed; if it does not, the kernel layer continues to check the next processor core and performs the judgment again.
  • The method of counting the current scheduling delay value of a processor core can be obtained by analogy with the method of counting the current scheduling delay value of the target processor core described above, and is not repeated here.
  • The preset core selection condition may include, in addition to the current scheduling delay value of the processor core being less than or equal to the second delay threshold, other core selection conditions.
  • The other core selection conditions include that the priority of the processor is greater than a preset priority threshold, and/or that attribute information of the processor matches the specified thread. This embodiment of the present application does not limit them.
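A simplified version of this core selection loop might look as follows; the per-core delay array and the helper for the other core selection conditions are placeholders for the kernel-side state described above.

```c
#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 8

/* Per-core scheduling delay values maintained by the accounting above
 * (placeholder storage; the kernel keeps this per run queue). */
static uint64_t core_delay_us[NR_CPUS];

/* The other core selection conditions (priority, attribute match, ...)
 * collapsed into one placeholder check. */
static bool other_conditions_ok(int cpu)
{
    (void)cpu;
    return true;
}

/* Return the first core whose current scheduling delay value satisfies the
 * specified thread's requirement, or -1 if no core qualifies. */
int select_target_core(uint64_t second_threshold_us)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++) {
        if (core_delay_us[cpu] <= second_threshold_us &&
            other_conditions_ok(cpu))
            return cpu;
    }
    return -1;
}
```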
  • Step 903 the kernel layer adds the specified thread to the scheduling queue of the target processor core.
  • After the kernel layer adds the specified thread to the scheduling queue of the target processor core, the thread waits to be scheduled for execution. Since the scheduling delay value is within a controllable range, the specified thread can be scheduled in time, within the time that meets its scheduling delay requirement.
  • The embodiment of the present application supports configuring scheduling delay requirements at task granularity, distinguishes foreground threads from background threads, and reduces the impact of background threads on specified foreground threads.
  • When selecting a core for a task, both the task's scheduling delay requirement and the scheduling delay value of the processor core are considered. Only when the current scheduling delay value of a processor core is less than or equal to the second delay threshold required by the task can that processor core be used as the target processor core, ensuring that key threads are placed on processor cores with low scheduling delay values and scheduled in time.
  • the “app market” application A is running in the foreground of the mobile phone.
  • When a click operation signal acting on control 1001 in application A is received, batch updates of five recommended application programs are performed.
  • The "Magazine Collection" application B is then opened to display multiple magazine covers. Since application A needs to download upgrade packages over the network while performing batch updates, a large amount of network traffic is received/forwarded through the wireless network card at this time. The network card notifies the kernel layer through interrupt requests to read and write data from the network card memory, so a large number of network card interrupt requests are generated.
  • The task selection method includes the following steps, as shown in Figure 11:
  • Step 1101: the Framework layer identifies the foreground application as the "Magazine Collection" application B and configures a second delay threshold of 500 us as the key scheduling delay requirement for application B's UI/Render thread.
  • Step 1102: the Framework layer sets the first delay threshold of processor core 0 and processor core 4 to 500 us (a 4 small-core + 4 large-core architecture), ensuring that both the small cores and the large cores have at least one processor core with a controllable task scheduling delay.
  • Step 1103 after the Framework layer delivers the first delay threshold, processor core 0 and processor core 4 respectively decide whether to bind part of the interrupt requests sent to the processor core to other processor cores according to the current scheduling delay value.
  • Step 1104 when the mobile phone receives the sliding operation signal in the application B, the frame drawing operation is triggered, and the application B calls the UI/Render thread to draw the frame.
  • Step 1105: after the UI/Render thread is woken up, the core selection process is performed according to the second delay threshold required by its scheduling delay. Since the other processor cores are processing network card interrupts and their load is high, processor core 0 or processor core 4 is selected with high probability.
  • After the UI/Render thread is added to the scheduling queue of one of these two processor cores, whose scheduling delay value is less than 500 us, the UI/Render thread is guaranteed to be scheduled within 500 us, thereby reducing the probability of dropped frames.
  • In the related art, the specified thread may remain in the ready (runnable) state for a long time because of interrupt processing, for example 3 ms or 8 ms; once the interrupt processing time exceeds 6 ms, frame loss is very likely.
  • In the embodiment of the present application, the scheduling delay of the specified thread is bounded to 500 us, ensuring that the scheduling delay of the specified thread is within a controllable range and will not cause frame loss or stutter due to scheduling delay.
  • FIG. 13 shows a flowchart of an interrupt scheduling method provided by another exemplary embodiment of the present application.
  • the embodiment of the present application is illustrated by taking the interrupt scheduling method applied to an electronic device as an example.
  • the interrupt scheduling method includes:
  • Step 1301: acquire a preconfigured first delay threshold, where the first delay threshold is the maximum value of the scheduling delay caused by interrupt processing configured for the first processor core.
  • Step 1302: acquire a scheduling delay value of the first processor core, where the scheduling delay value is used to indicate the current interrupt load of the first processor core.
  • Step 1303: when the scheduling delay value is greater than the first delay threshold, migrate and bind some of the current interrupt requests of the first processor core to a second processor core, the second processor core being different from the first processor core.
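A compact sketch of steps 1301 to 1303, with placeholder helpers standing in for the threshold lookup, the delay accounting, and the migration described above:

```c
#include <stdint.h>

/* Placeholder helpers corresponding to the three steps of FIG. 13. */
static uint64_t get_first_threshold(int cpu)  { (void)cpu; return 1000; }
static uint64_t get_core_delay_value(int cpu) { (void)cpu; return 0; }
static int      pick_second_core(int avoid)   { return avoid == 0 ? 1 : 0; }
static void     migrate_some_irqs(int from, int to) { (void)from; (void)to; }

/* One balancing pass over a single core. */
void balance_core(int cpu)
{
    uint64_t threshold = get_first_threshold(cpu);    /* step 1301 */
    uint64_t delay     = get_core_delay_value(cpu);   /* step 1302 */

    if (delay > threshold)                            /* step 1303 */
        migrate_some_irqs(cpu, pick_second_core(cpu));
}
```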
  • FIG. 14 shows a block diagram of an interrupt scheduling device provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or part of the electronic equipment provided above through software, hardware or a combination of the two.
  • the apparatus may include: a first obtaining unit 1410 , a second obtaining unit 1420 and a binding unit 1430 .
  • the first obtaining unit 1410 is configured to obtain a preconfigured first delay threshold, where the first delay threshold is the maximum value of the scheduling delay caused by the interrupt processing configured for the first processor core;
  • the second acquiring unit 1420 is configured to acquire a scheduling delay value of the first processor core, where the scheduling delay value is used to indicate the current interrupt load of the first processor core;
  • the binding unit 1430 is configured to migrate and bind part of the current interrupt requests of the first processor core to the second processor core when the scheduling delay value is greater than the first delay threshold, and the second processor core is different from the first processor core.
  • the scheduling delay value of the first processor core after the migration binding is less than or equal to the first delay threshold.
  • After the migration binding, the absolute values of the differences between the scheduling delay values of the other processor cores are all smaller than a preset first difference threshold, where the other processor cores are the processor cores other than the first processor core.
  • the apparatus is used in an electronic device including a user layer, a kernel layer, and a hardware layer, and the first obtaining unit 1410 is further configured to send the first configuration information to the kernel layer through the user layer.
  • The first configuration information includes the processor core identifier of the first processor core and the first delay threshold; the kernel layer receives the first configuration information sent by the user layer.
  • The binding unit 1430 is further configured to send second configuration information to the interrupt controller of the hardware layer through the kernel layer when the scheduling delay value is greater than the first delay threshold, where the second configuration information includes the interrupt number to be migrated out and the processor core identifier of the second processor core; the interrupt controller migrates and binds at least one interrupt request corresponding to the interrupt number to the second processor core according to the second configuration information.
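For comparison, stock Linux already exposes a user-space knob for re-binding an interrupt: writing a CPU mask to /proc/irq/&lt;irq&gt;/smp_affinity. The sketch below uses that interface only as an analogy; the apparatus itself programs the interrupt controller from the kernel layer via the second configuration information, not through procfs.

```c
#include <stdio.h>

/* Move one interrupt line to a single CPU by writing a CPU mask to
 * /proc/irq/<irq>/smp_affinity (requires root). */
int bind_irq_to_cpu(int irq, int cpu)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%x\n", 1u << cpu);  /* e.g. 0x10 selects CPU 4 */
    fclose(f);
    return 0;
}

int main(void)
{
    return bind_irq_to_cpu(55, 4) ? 1 : 0;  /* IRQ number is illustrative */
}
```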
  • The device further includes a statistical unit, where the statistical unit is configured to:
  • obtain, after an interrupt request is received, the interrupt processing duration corresponding to the interrupt request, where the interrupt processing duration includes the total duration of hard interrupt processing and soft interrupt processing;
  • determine, according to the interrupt processing duration of the interrupt request, the scheduling delay value corresponding to the interrupt request using a preset algorithm;
  • sum the scheduling delay value corresponding to the interrupt request with the current scheduling delay value of the first processor core to obtain an updated scheduling delay value.
  • The device further includes a core selection unit, where the core selection unit is configured to:
  • acquire a preconfigured second delay threshold, where the second delay threshold is the maximum value of the scheduling delay caused by interrupt processing configured for a specified thread;
  • after the specified thread is woken up, determine a processor core that meets the preset core selection condition as the target processor core, where the preset core selection condition includes that the current scheduling delay value of the processor core is less than or equal to the second delay threshold; and add the specified thread to the scheduling queue of the target processor core.
  • The specified thread includes a frame drawing process of a foreground application and/or a process of an inter-process communication mechanism.
  • The division of the above functional modules is used only as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed,
  • that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • The device provided by the above embodiment and the method embodiment belong to the same concept; its specific implementation process is detailed in the method embodiment and is not repeated here.
  • An embodiment of the present application provides an electronic device, which includes: a processor; and a memory for storing processor-executable instructions; where the processor is configured to, when executing the instructions, implement the interrupt scheduling method performed by the electronic device in the foregoing embodiments.
  • An embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code. When the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the interrupt scheduling method performed by the electronic device in the foregoing embodiments.
  • An embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored.
  • When the computer program instructions are executed by a processor, the interrupt scheduling method performed by the electronic device in the foregoing embodiments is implemented.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random-access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital video disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punched card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the foregoing.
  • Computer readable program instructions or codes described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, local area network, wide area network, and/or wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • A network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
  • Computer program instructions for performing the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the latter scenario, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), can execute the computer-readable program instructions, thereby realizing various aspects of the present application.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium. These instructions cause a computer, a programmable data processing apparatus, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions, which includes one or more executable instructions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts can be implemented with hardware (such as circuits or ASIC (Application Specific Integrated Circuit, application-specific integrated circuit)), or can be implemented with a combination of hardware and software, such as firmware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Power Sources (AREA)
  • Multi Processors (AREA)

Abstract

The present application relates to the field of operating systems, and in particular to an interrupt scheduling method, an electronic device, and a storage medium. The method comprises: obtaining a pre-configured first delay threshold, the first delay threshold being a maximum value of a scheduling delay caused by interrupt handling configured for a first processor core; obtaining a scheduling delay value of the first processor core, the scheduling delay value being used for indicating a current interrupt load of the first processor core; and when the scheduling delay value is greater than the first delay threshold, migrating and binding some of the current interrupt requests of the first processor core to a second processor core. In embodiments of the present application, the first delay threshold is configured for the first processor core, such that when the scheduling delay value of the first processor core is greater than the first delay threshold, some of the current interrupt requests of the first processor core are migrated and bound to the second processor core. Therefore, a reasonable interrupt handling throughput can be guaranteed while the scheduling delay is kept under control.

Description

Interrupt scheduling method, electronic device and storage medium
This application claims priority to the Chinese patent application No. 202110613606.2, entitled "Interrupt scheduling method, electronic device and storage medium", filed with the China Patent Office on June 2, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of operating systems, and in particular to an interrupt scheduling method, an electronic device, and a storage medium.
Background
In the field of telecommunications, the Linux operating system is increasingly used to process services with high real-time requirements.
Please refer to FIG. 1, which shows a schematic diagram of an architecture in the related art that uses the Linux operating system to process high real-time services. The architecture includes a hardware layer 120, a kernel layer 140, and a user layer 160. The user layer 160 can run at least one thread, and each thread is used for processing tasks. The task scheduling process and interrupt handling process of each thread are mainly implemented by the kernel layer 140.
Current interrupt processing strategies mainly include the following two possible implementations. One possible implementation is interrupt balancing, that is, interrupt requests (Interrupt Request, IRQ) are distributed relatively evenly to the processor (Central Processing Unit, CPU) cores. This approach guarantees interrupt processing throughput, but cannot solve the problem that the scheduling delay caused by interrupt processing is uncontrollable. Another possible implementation is interrupt core binding, for example binding network card interrupt requests to a certain processor core in advance, so that the scheduling delay on processor cores other than the bound core is controllable. However, in this approach the interrupt load on the bound processor core may become too large, raising the frequency of the whole cluster and wasting power. A reasonable and effective interrupt scheduling method that guarantees reasonable interrupt processing throughput while ensuring the scheduling delay has not yet been provided.
Summary of the Invention
In view of this, embodiments of the present application propose an interrupt scheduling method, an electronic device, and a storage medium. In the embodiments of the present application, a first delay threshold, i.e., the maximum value of the scheduling delay caused by interrupt processing, is configured for a first processor core, so that when the scheduling delay value of the first processor core is greater than the first delay threshold, some of the current interrupt requests of the first processor core are migrated and bound to a second processor core. This guarantees reasonable interrupt processing throughput while ensuring the scheduling delay.
In a first aspect, an embodiment of the present application provides an interrupt scheduling method, the method including:
acquiring a preconfigured first delay threshold, where the first delay threshold is the maximum value of the scheduling delay caused by interrupt processing configured for a first processor core;
acquiring a scheduling delay value of the first processor core, where the scheduling delay value is used to indicate the current interrupt load of the first processor core;
when the scheduling delay value is greater than the first delay threshold, migrating and binding some of the current interrupt requests of the first processor core to a second processor core, where the second processor core is different from the first processor core.
In this implementation, by configuring the first delay threshold, i.e., the maximum value of the scheduling delay caused by interrupt processing, for the first processor core, some of the current interrupt requests of the first processor core are migrated and bound to the second processor core when the scheduling delay value of the first processor core is greater than the first delay threshold. This ensures that the scheduling delay caused by interrupt processing on the first processor core is controllable, avoids the problem of excessive scheduling delay caused by interrupt processing in the related art, guarantees the real-time performance of service processing, and improves the overall performance of the electronic device.
In a possible implementation, after the migration binding, the scheduling delay value of the first processor core is less than or equal to the first delay threshold.
In this implementation, if the current scheduling delay value of the first processor core is greater than the first delay threshold, some of the current interrupt requests of the first processor core are migrated and bound to the second processor core, so that after the migration binding the scheduling delay value of the first processor core is less than or equal to the first delay threshold. This reduces the scheduling delay on the first processor core and further guarantees the real-time performance of service processing.
In another possible implementation, after the migration binding, the absolute values of the differences between the scheduling delay values of the other processor cores are all smaller than a preset first difference threshold, where the other processor cores are the processor cores other than the first processor core.
In this implementation, the migrated interrupt requests are reasonably shared across the other processor cores, so that after the migration binding the absolute values of the differences between the scheduling delay values of the other processor cores are all smaller than the preset first difference threshold. This guarantees the concurrency of interrupt processing and avoids a single processor core being overloaded and raising the frequency of the whole cluster, which would waste power.
In another possible implementation, the method is used in an electronic device including a user layer, a kernel layer, and a hardware layer, and acquiring the preconfigured first delay threshold includes:
the user layer sending first configuration information to the kernel layer, where the first configuration information includes a processor core identifier of the first processor core and the first delay threshold;
the kernel layer receiving the first configuration information sent by the user layer.
Migrating and binding some of the current interrupt requests of the first processor core to the second processor core when the scheduling delay value is greater than the first delay threshold includes:
when the scheduling delay value is greater than the first delay threshold, the kernel layer sending second configuration information to the interrupt controller of the hardware layer, where the second configuration information includes the interrupt number to be migrated out of the first processor core and the processor core identifier of the second processor core;
the interrupt controller migrating and binding, according to the second configuration information, at least one interrupt request corresponding to the interrupt number to the second processor core.
In this implementation, the kernel layer dynamically adjusts the interrupt binding according to the current scheduling delay value of the first processor core and the first delay threshold configured by the user layer. If the current scheduling delay value of the first processor core is greater than the first delay threshold, the kernel layer sends the second configuration information to the interrupt controller of the hardware layer, so that the interrupt controller migrates and binds at least one interrupt request corresponding to the interrupt number to the second processor core according to the second configuration information. This further ensures that the scheduling delay caused by interrupt processing on the first processor core is controllable and improves the overall performance of the electronic device.
In another possible implementation, the method further includes:
obtaining, after an interrupt request is received, the interrupt processing duration corresponding to the interrupt request, where the interrupt processing duration includes the total duration of hard interrupt processing and soft interrupt processing;
determining, according to the interrupt processing duration of the interrupt request, the scheduling delay value corresponding to the interrupt request using a preset algorithm;
summing the scheduling delay value corresponding to the interrupt request with the current scheduling delay value of the first processor core to obtain an updated scheduling delay value.
In this implementation, compared with the related-art scheme of counting the scheduling delay overhead per processor core, the embodiment of the present application tracks and calculates the load overhead of each interrupt request, avoiding the coarse-grained algorithm that could only estimate load from the number of interrupts and supporting more accurate interrupt balancing. The time overhead of soft interrupt processing triggered by hard interrupt processing is counted into the corresponding interrupt processing duration, so that the scheduling delay value determined from the interrupt processing duration is more accurate and subsequent interrupt balancing can be performed more accurately.
In another possible implementation, the method further includes:
acquiring a preconfigured second delay threshold, where the second delay threshold is the maximum value of the scheduling delay caused by interrupt processing configured for a specified thread;
after the specified thread is woken up, determining a processor core that meets a preset core selection condition as the target processor core, where the preset core selection condition includes that the current scheduling delay value of the processor core is less than or equal to the second delay threshold;
adding the specified thread to the scheduling queue of the target processor core.
In this implementation, both the task's scheduling delay requirement and the scheduling delay value of the processor core are considered during core selection. Only when the current scheduling delay value of a processor core is less than or equal to the second delay threshold required by the task can that processor core be used as the target processor core, ensuring that the key specified thread is placed on a processor core with a low scheduling delay value and scheduled in time.
In another possible implementation, the specified thread includes a frame drawing process of the foreground application and/or a process of an inter-process communication mechanism.
In this implementation, scheduling delay requirements can be configured at task granularity, foreground and background threads are handled separately, and the impact of background threads on the specified foreground thread is reduced.
In a second aspect, an embodiment of the present application provides an electronic device, where the electronic device includes:
a processor;
a memory for storing processor-executable instructions;
where the processor is configured to, when executing the instructions, implement the method provided by the first aspect or any possible implementation of the first aspect.
In a third aspect, an embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the method provided by the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an interrupt scheduling apparatus is provided, where the apparatus includes at least one unit, and the at least one unit is configured to implement the method provided by the first aspect or any possible implementation of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code. When the computer-readable code runs in an electronic device, a processor in the electronic device executes the method provided by the first aspect or any possible implementation of the first aspect.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present application and, together with the specification, serve to explain the principles of the present application.
FIG. 1 shows a schematic diagram of an architecture in the related art that uses the Linux operating system to process high real-time services.
FIG. 2 shows a schematic diagram of five scheduling classes in the Linux kernel in the related art.
FIG. 3 shows a schematic diagram of a scheduling queue of a processor core in the related art.
FIG. 4 shows a schematic diagram of an architecture for interrupt processing by the Linux operating system in the related art.
FIG. 5a shows a schematic diagram of the interrupt processing time of multiple processor cores when interrupt balancing is used.
FIG. 5b shows a schematic diagram of the interrupt processing time of multiple processor cores when interrupt core binding is used.
FIG. 6 shows a schematic diagram of an electronic device involved in an embodiment of the present application.
FIG. 7 shows a flowchart of an interrupt scheduling method provided by an exemplary embodiment of the present application.
FIG. 8 shows a flowchart of a process of counting scheduling delay values provided by an exemplary embodiment of the present application.
FIG. 9 shows a flowchart of a task core selection process provided by an exemplary embodiment of the present application.
FIG. 10 shows a schematic diagram of an interface involved in an interrupt scheduling method provided by another exemplary embodiment of the present application.
FIG. 11 shows a flowchart of an interrupt scheduling method provided by another exemplary embodiment of the present application.
FIG. 12 shows a schematic diagram of the scheduling delay of a specified thread provided by an exemplary embodiment of the present application.
FIG. 13 shows a flowchart of an interrupt scheduling method provided by another exemplary embodiment of the present application.
FIG. 14 shows a block diagram of an interrupt scheduling apparatus provided by an exemplary embodiment of the present application.
Detailed Description of Embodiments
Various exemplary embodiments, features, and aspects of the present application are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings indicate elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" is used here to mean "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
In addition, in order to better illustrate the present application, numerous specific details are given in the following detailed description. Those skilled in the art should understand that the present application can also be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail in order to highlight the gist of the present application.
First, some terms involved in the embodiments of the present application are introduced.
1. Process: a process is an entity of a program in its execution period; in addition to the executing code, it also includes information such as open files and pending signals.
2. Thread: a process can include multiple threads, and the threads share the address space.
3. Task: the scheduling object of the kernel (the core of the operating system), which can be a process or a thread. For example, the Linux kernel uses the task_struct structure to describe both processes and threads.
4. Scheduling queue: a processor core includes at least one scheduling queue. After a task is woken up, it joins the scheduling queue of a certain processor core and waits for scheduling. Optionally, according to the scheduling policy, the task joins one of the deadline scheduler's queue (Deadline Runqueue, dl_rq), the real-time scheduler's queue (Real Time Runqueue, rt_rq), and the completely fair scheduler's queue (Completely Fair Runqueue, cfs_rq) of that processor core.
Optionally, each processor core includes a run queue (Runqueue, rq) used to manage the dl_rq, rt_rq, and cfs_rq of that core. All DL tasks must join dl_rq to wait for scheduling before running, all RT tasks must join rt_rq, and all CFS tasks must join cfs_rq. (A simplified sketch of these per-core queues is given after this list.)
5. Scheduling delay: the time from when a task is woken up and joins the scheduling queue until it actually starts executing. In a real-time system, the scheduling delay can be as low as the microsecond level.
6. Context switch: switching between the running states of the kernel, and/or the kernel switching tasks on a processor core. For example, the Linux kernel includes the following running states: user mode, kernel mode (running in process context), and kernel mode (running in interrupt context); switching between these running states, as well as switching between tasks, is called a context switch. Context switching has a certain overhead because state information such as registers and page tables must be saved and restored.
7. Preemption: after a high-priority task is woken up, if the currently executing task has a lower priority, the currently executing task is immediately switched to the high-priority task. In this way, the high-priority task is executed as soon as possible and has a low scheduling delay.
8. Disabling preemption: turning off the above preemption capability. This is because the kernel needs to avoid concurrent races introduced by preemption when handling certain critical resources.
9. Throughput: the amount of data processed by the system per unit time, reflecting the data processing capability of the system. It is usually negatively correlated with the number of context switches: the more context switches, the lower the throughput; conversely, the fewer context switches, the higher the throughput.
10. Interrupt: during the operation of an electronic device, an unusual or unexpected event that urgently needs handling occurs in the system, causing the processor to temporarily interrupt the currently executing program, execute the corresponding event handler, and then return to the interrupted point to continue execution or schedule a new process for execution. The event that causes an interrupt is called an interrupt source. The signal sent by an interrupt source to the processor requesting interrupt handling is called an interrupt request. The process by which the processor handles an interrupt request is called interrupt handling.
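A highly simplified view of the per-core queues mentioned in item 4, for orientation only; the real Linux definitions live in kernel/sched/sched.h and carry far more state.

```c
/* Simplified sketch; field names are illustrative. */
struct dl_rq  { int nr_running; };  /* deadline tasks wait here  */
struct rt_rq  { int nr_running; };  /* real-time tasks wait here */
struct cfs_rq { int nr_running; };  /* CFS tasks wait here       */

struct rq {
    struct dl_rq  dl;
    struct rt_rq  rt;
    struct cfs_rq cfs;
};

static struct rq runqueues[8];  /* one rq per processor core */
```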
The scheduling subsystem is one of the core modules in the Linux kernel. It is used for task scheduling, for example deciding which task to execute, when that task starts executing, and for how long. The current Linux kernel defines five scheduling classes, as shown in FIG. 2: the stop (STOP) scheduling class, the deadline (Deadline, DL) scheduling class, the real-time (Real Time, RT) scheduling class, the completely fair scheduler (Completely Fair Scheduler, CFS) scheduling class, and the idle (IDLE) scheduling class. STOP and IDLE are two special scheduling classes not used for scheduling ordinary tasks.
There is a priority order among the scheduling classes. For example, if both the RT scheduling class and the CFS scheduling class have tasks waiting to be scheduled, the kernel first selects a task from the RT scheduling class's queue to execute. Tasks in the CFS scheduling class are scheduled only after all tasks in the RT scheduling class have finished executing, or have voluntarily given up the processor core (for example, by sleeping), or an RT task has run longer than a preconfigured time threshold. In some systems (such as Android), the RT scheduling class and the CFS scheduling class handle most of the tasks in the operating system.
The CFS scheduling class is the default scheduling algorithm of the Linux kernel. This algorithm focuses on fairness of scheduling and guarantees that all processes are scheduled within a certain period of time. However, precisely because of its fairness, even a task with the highest priority cannot be guaranteed to always be scheduled first (even if its nice value is set to -20); that is to say, the scheduling delay is uncontrollable.
The RT scheduling class schedules strictly by priority. After a task is woken up, if its priority is higher than that of the currently running task, it triggers preemption of the currently running task, so that the processor core immediately switches to the woken-up high-priority task, guaranteeing the scheduling delay of high-priority tasks. To guarantee timely drawing (for example, a mobile phone with a 60-frame refresh rate must complete one frame within 16.7 ms), the scheduling delay of specified threads such as the UI/Render thread and the Surfaceflinger thread must be strictly guaranteed; if the scheduling delay is too large, there is not enough time to draw the frame and stutter occurs. Therefore, the system identifies these specified threads and configures them as RT tasks.
In an illustrative example, the scheduling queue of a processor core is shown in FIG. 3. The parameter prio of a task indicates the normalized priority of the task, with a value range of [0, 139]. When the value of prio is in [100, 139], the task is managed by the CFS scheduling class; when the value of prio is in [0, 99], the task is managed by the RT scheduling class. The value of prio is negatively correlated with the priority of the task: the lower the value of prio, the higher the priority. For example, if the prio of task 1 is 97 and the prio of task 2 is 98, task 1 has a higher priority than task 2. In FIG. 3, the current scheduling queue of processor core 0 (core 0 for short) includes, in execution order, task A (prio=98) managed by the RT scheduling class, task X (prio=120) managed by the CFS scheduling class, and task Y (prio=120) managed by the CFS scheduling class. After task B (prio=97) managed by the RT scheduling class is woken up, in one possible implementation, if high-priority tasks are allowed to preempt low-priority tasks, core 0 immediately switches the currently executing task A to the higher-priority task B; after the switch, the scheduling queue of core 0 includes, in execution order, task B, task A, task X, and task Y. In another possible implementation, if preemption is not supported (i.e., preemption is disabled), the high-priority task B cannot be scheduled immediately; after waking up, task B only joins the scheduling queue, and the currently executing task is still task A. At this time, the scheduling queue of core 0 includes, in execution order, task A, task B, task X, and task Y.
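On Linux, a user-space thread is typically promoted into the RT scheduling class with the standard sched_setscheduler call, as sketched below. The priority value is illustrative; user-space rt_priority maps inversely onto the normalized prio values quoted above.

```c
#include <sched.h>
#include <stdio.h>

/* Promote a thread into the RT scheduling class (SCHED_FIFO).
 * rt_priority runs 1..99; higher is more urgent and maps inversely onto
 * the normalized prio values above (e.g. rt_priority 2 -> prio 97). */
int make_rt(pid_t tid, int rt_priority)
{
    struct sched_param sp = { .sched_priority = rt_priority };

    if (sched_setscheduler(tid, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}

int main(void)
{
    return make_rt(0, 2) ? 1 : 0;  /* 0 = the calling thread */
}
```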
In the related art, preemption is disabled during interrupt processing to prevent concurrency. During this period, if the Surfaceflinger thread joins the scheduling queue, it cannot be scheduled immediately because preemption is disabled. It usually has to wait for a period of time (for example, 4 ms) before being scheduled for execution, which ultimately leaves too little time for drawing and causes frame loss. When network traffic is heavy (for example, when updating applications in batches or downloading videos), the scheduling delay caused by soft interrupt processing can even reach 10 ms, which easily leads to similar frame loss and stutter problems.
In an illustrative example, FIG. 4 shows a schematic diagram of the architecture of interrupt processing in the Linux operating system. An interrupt is an asynchronous event handling mechanism used to improve the concurrent processing capability of the system. When an interrupt request occurs, an interrupt handler is executed, and the interrupt handler is divided into two parts: the top half and the bottom half. The top half corresponds to the hard interrupt and is used to process the interrupt quickly; for example, the processor core calls a registered interrupt function according to the interrupt table, and this interrupt function calls the corresponding function in the driver. The bottom half corresponds to the soft interrupt and is used to asynchronously handle the work not completed by the top half. The ksoftirqd process in the Linux kernel is specifically responsible for soft interrupt processing; when it receives a soft interrupt, it calls the handler corresponding to that soft interrupt, such as the net_rx_action function. The main reason why the Surfaceflinger thread cannot be scheduled in time is that preemption is disabled in both the top-half and bottom-half processing, and the bottom-half processing takes a long time; only after the bottom-half processing is completed and preemption is re-enabled can the Surfaceflinger thread be scheduled for execution, but by then it is too late to finish drawing the frame. To solve this problem, one approach in the related art is to thread interrupt processing so that it can be preempted, allowing high-priority tasks to be scheduled in time and effectively reducing the scheduling delay. In one possible implementation, a set of real-time patches maintained outside the Linux mainline (such as the PREEMPT_RT patches) threads the kernel's interrupt processing and allows the interrupt processing threads to be preempted by high-priority tasks. In this way, high-priority tasks are not left unscheduled because of interrupt processing, reducing the scheduling delay caused by interrupt processing.
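In mainline Linux, a driver can opt into this threaded model per interrupt with request_threaded_irq; PREEMPT_RT forces it more broadly. A minimal kernel-side fragment, for illustration only:

```c
#include <linux/interrupt.h>

/* Top half: runs in hard-interrupt context, does the minimum and defers. */
static irqreturn_t demo_hardirq(int irq, void *dev)
{
    return IRQ_WAKE_THREAD;  /* hand the rest to the threaded handler */
}

/* Bottom half: runs in a kernel thread, so it can be preempted by
 * higher-priority tasks such as a render thread. */
static irqreturn_t demo_thread_fn(int irq, void *dev)
{
    /* slow part of the interrupt work */
    return IRQ_HANDLED;
}

static int demo_setup(unsigned int irq, void *dev)
{
    return request_threaded_irq(irq, demo_hardirq, demo_thread_fn,
                                IRQF_ONESHOT, "demo", dev);
}
```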
但是在上述方法中,存在如下问题:一方面会因为高优先级任务的抢占导致原中断处理产生延迟,比如本该在100ms触发的时钟中断处理,在105ms才得到执行,那么定时器就会产生5ms的偏差,进而导致对其他业务进程/线程的影响;另一方面,对中断线程的抢占会引入任务上下文的切换,切换是有性能开销的,频繁切换会导致系统吞吐量下降。However, in the above method, there are the following problems: on the one hand, the original interrupt processing will be delayed due to the preemption of high-priority tasks. The deviation of 5ms will lead to the impact on other business processes/threads; on the other hand, the preemption of interrupt threads will introduce task context switching, which has performance overhead, and frequent switching will lead to a decrease in system throughput.
为了解决上述问题,相关技术中另一种处理方式是中断绑定方式,即将中断处理绑定目标处理器核,目标处理器核为预先设置的至少一个处理器核;根据处理器核负载将指定线程调度到除目标处理器核以外的其他处理器核上,从而使得除了目标处理器核,其它处理器核上的中断数量可控,调度时延也就可控,不会造成过大的调度时延。In order to solve the above problems, another processing method in the related art is the interrupt binding method, which is to bind the interrupt processing to the target processor core, and the target processor core is at least one pre-set processor core; according to the processor core load, the specified Threads are scheduled to other processor cores except the target processor core, so that the number of interrupts on other processor cores is controllable except for the target processor core, and the scheduling delay is also controllable without causing excessive scheduling delay.
在一个示意性的例子中,电子设备包括8个处理器(核0至核7),如图5a所示,采用的中断处理策略为中断均衡处理,中断处理在每个处理器核上基本均摊,即每个处理器核对应的中断处理耗时(单位为us)是差不多的,这种策略下,中断处理可能会来回迁移,从而导致调度时延不可控。而图5b中,采用的中断处理策略为中断绑核处理,比如将中断处理绑定在核0和核4上(其他处理器核上仍然有中断负载是因为部分中断请求是无法进行绑核的,比如时钟中断)。根据处理器核负载将指定线程调度到除核0和核4以外的其他处理器核上,即可控制其调度时延。In a schematic example, the electronic device includes 8 processors (core 0 to core 7), as shown in Figure 5a, the interrupt processing strategy adopted is interrupt balanced processing, and the interrupt processing is basically amortized on each processor core , that is, the interrupt processing time (in us) corresponding to each processor core is similar. Under this strategy, interrupt processing may migrate back and forth, resulting in uncontrollable scheduling delay. In Figure 5b, the interrupt processing strategy adopted is interrupt binding core processing, such as binding interrupt processing to core 0 and core 4 (other processor cores still have interrupt loads because some interrupt requests cannot be bound to cores , such as a clock interrupt). According to the processor core load, the specified thread is scheduled to other processor cores except core 0 and core 4 to control its scheduling delay.
However, the above method has the following problems: 1. Interrupt processing becomes concentrated on the target processor cores (core 0 and core 4 in FIG. 5b), which raises the frequency of the whole cluster and wastes power (the other cores in the same cluster have a low load but run at a high frequency). 2. Likewise, because interrupt processing is concentrated on the target cores, the other cores do not handle interrupts even when idle, so system throughput is low. 3. Before binding interrupts, the interrupt load of each peripheral usually has to be evaluated and the bindings arranged according to that load. This lacks flexibility: if a new peripheral is added, the interrupt allocation has to be re-evaluated, and sometimes the existing bindings have to be re-planned as well.
To this end, embodiments of the present application provide an interrupt scheduling method, an electronic device and a storage medium, so as to solve the problems in the related art described above. In the technical solution provided by the embodiments of the present application, the maximum scheduling delay caused by interrupt processing that is configured for the first processor core is the first delay threshold. When the scheduling delay value of the first processor core is greater than the first delay threshold, part of the current interrupt requests of the first processor core are migrated and bound to the second processor core. This keeps the scheduling delay caused by interrupt processing on the first processor core controllable, avoids the excessive scheduling delay caused by interrupt processing in the related art, guarantees the real-time performance of service processing, and improves the overall performance of the electronic device.
Before the embodiments of the present application are explained, the application scenario of the embodiments is described first. Please refer to FIG. 6, which shows a schematic diagram of an electronic device involved in an embodiment of the present application. The electronic device includes a hardware layer 610, a kernel layer 620 and a user layer 630. The user layer 630 may run at least one thread, and each thread is used to process tasks. The task scheduling process and the interrupt response process of each thread are mainly implemented by the kernel layer 620.
The hardware layer 610 is the hardware basis of the electronic device. The electronic device may be a base station, transmission equipment, an industrial robot, or other electronic equipment with real-time requirements on task processing. For example, when the electronic device is a mobile phone, the interrupt scheduling method provided by the embodiments of the present application can be applied to scenarios requiring fast response, such as autonomous driving, industrial control and virtual reality (Virtual Reality, VR); combined with the configuration of the service processes, it can reduce the task scheduling delay and ensure that key tasks are scheduled in time.
The hardware layer 610 includes peripherals 612, an interrupt controller 614 and at least one processor 616. The peripherals 612 include a wireless network card, a Bluetooth device, and the like. The processor 616 may be a single-core processor or a multi-core processor.
When processing data (for example, when the wireless network card sends and receives packets), a peripheral 612 generates an interrupt request, which is routed through the interrupt controller 614 to one of the multiple processor cores.
The kernel layer 620 is the layer in which the operating system kernel, the virtual memory space and the drivers run. For example, the operating system kernel is the Linux kernel.
The kernel layer 620 includes an interrupt subsystem 622 and a scheduling subsystem 624. The interrupt subsystem 622 includes an interrupt processing module 640, an interrupt load collection module 641, an interrupt load calculation module 642, an interrupt load information statistics module 643, an interrupt load policy module 644 and an interrupt load balancing execution module 645. The scheduling subsystem 624 includes a specified thread configuration module 651, a task core selection policy module 652 and a task scheduling execution module 653.
The interrupt processing module 640 obtains the interrupt request of the processor core and starts the interrupt processing in the interrupt subsystem 622, where the interrupt processing includes hard interrupt processing and soft interrupt processing.
The interrupt load collection module 641 records the interrupt processing duration and sends it to the interrupt load calculation module 642. The interrupt processing duration is the total duration of the hard interrupt processing and the soft interrupt processing.
The interrupt load calculation module 642 uses a preset algorithm to determine the scheduling delay value of the current processor core according to the interrupt processing duration provided by the interrupt load collection module 641; the result is expressed in us or ns.
The interrupt load information statistics module 643 stores and aggregates the scheduling delay information of the processor cores, where the scheduling delay information includes the aggregated scheduling delay values of the individual interrupt requests on a processor core.
The interrupt load policy module 644 obtains the first delay threshold configured by the user layer 630 for the first processor core, and makes decisions according to the scheduling delay information stored in the interrupt load information statistics module 643 and the first delay threshold. When the scheduling delay value of the first processor core is greater than the preconfigured first delay threshold, it determines a core binding policy and sends it to the interrupt load balancing execution module 645. The core binding policy indicates that part of the interrupt requests are to be migrated out and bound to another, second processor core, so that the scheduling delay value of the first processor core stays below the first delay threshold.
The interrupt load balancing execution module 645 receives the core binding policy from the interrupt load policy module 644, operates the interrupt controller 614 according to that policy, and binds the corresponding interrupt requests to the second processor core.
The scheduling subsystem 624 is responsible for scheduling and executing all processes/threads in the system.
The specified thread configuration module 651 supports configuring a second delay threshold for a specified thread.
The task core selection policy module 652 is used to check the interrupt load of the first processor core during the task core selection process; when the scheduling delay value is less than or equal to the second delay threshold, the corresponding processor core is considered to satisfy the condition and is screened further as a candidate core.
The task scheduling execution module 653 is used to add the process/thread to the scheduling queue of the corresponding processor core. Since the interrupt load of that processor core has already been checked in the previous step, the corresponding process/thread can be expected to be scheduled for execution within a certain time.
The user layer 630 is the layer in which ordinary applications run. For example, the user layer includes an application framework layer (for example, the Framework layer). The user layer 630 includes an interrupt load management module 632 and a specified thread identification/management module 634.
The user layer 630 is responsible for configuring the first delay threshold of the first processor core, and for identifying and configuring the specified threads.
The interrupt load management module 632 is responsible for monitoring the overall interrupt load of the system. When the interrupt load of a certain processor core is too high, it selects a suitable second processor core (for example, a processor core whose current interrupt load is light, which reduces the amount of interrupt migration) and configures the first delay threshold for that processor core, ensuring that each cluster contains at least one processor core whose interrupt load is controllable.
The specified thread identification/management module 634 is responsible for identifying the threads in the user layer 630 that are responsible for frame drawing (for example, UI/Render) and configuring the second delay threshold for them.
It should be noted that for the functions implemented by the above functional modules, reference may be made to the relevant descriptions in the method embodiments below, and they are not introduced here. Moreover, when the electronic device provided by the above embodiment implements its functions, the division into the above functional modules is only used as an example. In practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above.
In the following, several exemplary embodiments are used to introduce the interrupt scheduling method provided by the present application.
Please refer to FIG. 7, which shows a flowchart of an interrupt scheduling method provided by an exemplary embodiment of the present application. The embodiment is described by taking the application of the interrupt scheduling method to the electronic device shown in FIG. 6 as an example. The interrupt scheduling method includes:
Step 701: the user layer configures the first delay threshold of the first processor core and sends first configuration information to the kernel layer.
Optionally, the user layer determines that at least one processor core in the electronic device is the first processor core, and the maximum scheduling delay caused by interrupt processing that is configured for the first processor core is the first delay threshold. The user layer sends the first configuration information to the kernel layer.
Illustratively, the first processor core is one processor core, and the first configuration information includes the processor core identifier of the first processor core and the first delay threshold. Alternatively, the first processor core is at least two processor cores, and the first configuration information includes at least two processor core identifiers and their respective first delay thresholds; the first delay thresholds corresponding to the processor core identifiers may be the same or different. This is not limited in the embodiments of the present application; for ease of description, only the case where the first processor core is one processor core is used as an example.
The processor core identifier is used to uniquely identify the first processor core among the multiple processor cores of the electronic device, and the first delay threshold is the maximum scheduling delay caused by interrupt processing that is configured for the first processor core. The first delay threshold may subsequently be adjusted dynamically according to the scheduling delay values of the individual processor cores and/or the frame drawing completion status.
Step 702: the kernel layer judges, according to the first configuration information, whether the scheduling delay value of the first processor core is greater than the first delay threshold.
Correspondingly, the kernel layer receives the first configuration information sent by the user layer, where the first configuration information includes the processor core identifier of the first processor core and the first delay threshold.
Optionally, when the kernel layer receives the first configuration information sent by the user layer, or when it updates the scheduling delay values of the processor cores, it judges whether the scheduling delay value of the first processor core is greater than the first delay threshold. If the current scheduling delay value of the first processor core is less than or equal to the first delay threshold, the procedure of this embodiment ends; if the scheduling delay value of the first processor core is greater than the first delay threshold, step 703 is executed.
The scheduling delay value of the first processor core is used to indicate the current interrupt load of the first processor core. Optionally, the scheduling delay value is positively correlated with the interrupt load, that is, the greater the interrupt load, the greater the scheduling delay value.
Optionally, the scheduling delay value of the first processor core is an actual value or an estimated value of the current scheduling delay of the first processor core. Illustratively, the scheduling delay value of the first processor core is an estimate of the scheduling delay determined based on the current interrupt processing duration of the first processor core.
It should be noted that for the process in which the kernel layer determines the scheduling delay value of the first processor core, reference may be made to the relevant description in the embodiments below, and it is not introduced here.
Step 703: if the scheduling delay value of the first processor core is greater than the first delay threshold, the kernel layer performs interrupt balancing processing and sends second configuration information to the interrupt controller.
If the current scheduling delay value of the first processor core is greater than the first delay threshold, the kernel layer determines an interrupt balancing policy and sends second configuration information indicating the interrupt balancing policy to the interrupt controller. Optionally, the kernel layer sends the second configuration information to the interrupt controller through the load balancing execution module.
The interrupt balancing policy indicated by the second configuration information is to migrate and bind part of the current interrupt requests of the first processor core to the second processor core, where the second processor core is different from the first processor core, and after the migration and binding the scheduling delay value of the first processor core is less than or equal to the first delay threshold.
Optionally, the second configuration information includes the interrupt number(s) to be migrated out of the first processor core and the processor core identifier of the second processor core. Illustratively, the interrupt numbers to be migrated out are m interrupt numbers, where m is a positive integer, and one interrupt number corresponds to at least one interrupt request.
Illustratively, the second configuration information includes the interrupt number to be migrated out and the binding relationship information corresponding to that interrupt number, where the binding relationship information corresponding to one interrupt number is used to indicate the binding relationship between the at least one interrupt request corresponding to that interrupt number and the processor cores. For example, the electronic device includes eight processor cores, the interrupt number to be migrated out is interrupt number 10, and interrupt number 10 corresponds to multiple interrupt requests. The second configuration information then includes interrupt number 10 and eight bits of information corresponding to interrupt number 10; the eight bits correspond one-to-one to the eight processor cores. When a bit has a first value, it indicates that the interrupt requests corresponding to the interrupt number are bound to the processor core corresponding to that bit; when a bit has a second value, it indicates that the interrupt requests corresponding to the interrupt number have no binding relationship with the processor core corresponding to that bit. For example, the first value is 1 and the second value is 0. This is not limited in the embodiments of the present application.
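For illustration only, the per-interrupt binding relationship information described above can be modelled as an 8-bit mask with one bit per processor core; the structure and field names below are assumptions rather than the patent's definitions:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical encoding of one entry of the second configuration
     * information: an interrupt number to be migrated plus an 8-bit mask in
     * which bit i set to 1 means "bind this interrupt number to core i". */
    struct irq_binding_cfg {
        int     irq;        /* interrupt number, e.g. 10 */
        uint8_t cpu_mask;   /* e.g. 0x04 binds the interrupt to core 2 */
    };

    static int irq_bound_to_core(const struct irq_binding_cfg *cfg, int core)
    {
        return (cfg->cpu_mask >> core) & 1u;
    }

    int main(void)
    {
        struct irq_binding_cfg cfg = { .irq = 10, .cpu_mask = 0x04 };

        printf("IRQ %d bound to core 2: %d\n", cfg.irq, irq_bound_to_core(&cfg, 2));
        return 0;
    }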
Optionally, the first processor core currently carries multiple interrupt loads, and the multiple interrupt loads correspond one-to-one to multiple interrupt numbers. If the current scheduling delay value of the first processor core is greater than the first delay threshold, the kernel layer determines the absolute value of the difference between the current scheduling delay value of the first processor core and the first delay threshold as a first difference. The first processor selects at least one interrupt load from the multiple interrupt loads according to a preset algorithm and determines the interrupt number(s) corresponding to the selected interrupt load(s) as the interrupt number(s) to be migrated out, such that the total scheduling delay corresponding to the interrupt numbers to be migrated out is greater than the first difference, and determines that the interrupt balancing policy is to migrate and bind the interrupt requests corresponding to the interrupt numbers to be migrated out to the second processor core. For example, the preset algorithm selects interrupt numbers in descending order of their corresponding interrupt loads. It should be noted that the preset algorithm may also adopt other possible implementations, which are not limited in the embodiments of the present application. The second processor core is a processor core of the electronic device other than the first processor core. Optionally, the second processor core is any one processor core other than the first processor core, or any at least two processor cores other than the first processor core.
Optionally, the second processor core is at least one processor core, other than the first processor core, whose scheduling delay value is less than a third delay threshold. Illustratively, the kernel layer traverses and queries the scheduling delay values of the other processor cores and determines the processor cores whose scheduling delay values are less than the third delay threshold as the second processor core. The third delay threshold is a custom setting or a default setting; this is not limited in the embodiments of the present application.
Optionally, the second processor core is at least one processor core, other than the first processor core, with the smallest scheduling delay value. Illustratively, the kernel layer traverses and queries the scheduling delay values of the other processor cores, sorts them in ascending order of scheduling delay value, and determines the first n processor cores after sorting as the second processor core, where n is a positive integer.
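The following sketch illustrates the selection logic of the preceding paragraphs under simplified assumptions (plain arrays and illustrative names, nothing taken from the patent's implementation): interrupt numbers are picked in descending order of their delay contribution until the migrated amount exceeds the first difference, and the core with the smallest current scheduling delay value is chosen as the second processor core.

    #include <stdint.h>

    #define NR_CORES 8

    /* Scheduling-delay contribution of one interrupt number on the first core. */
    struct irq_load {
        int      irq;
        uint64_t delay_ns;
    };

    /* Greedy policy: `loads` is assumed to be sorted in descending order of
     * delay_ns. Interrupt numbers are collected until the migrated delay
     * exceeds first_diff (current delay minus the first delay threshold).
     * Returns the number of interrupt numbers written to out_irqs. */
    static int pick_irqs_to_migrate(const struct irq_load *loads, int n,
                                    uint64_t first_diff, int *out_irqs)
    {
        uint64_t moved = 0;
        int count = 0;

        for (int i = 0; i < n && moved <= first_diff; i++) {
            out_irqs[count++] = loads[i].irq;
            moved += loads[i].delay_ns;
        }
        return count;
    }

    /* Second processor core: the core other than first_core with the smallest
     * current scheduling delay value. */
    static int pick_second_core(const uint64_t core_delay_ns[NR_CORES], int first_core)
    {
        int best = -1;

        for (int core = 0; core < NR_CORES; core++) {
            if (core == first_core)
                continue;
            if (best < 0 || core_delay_ns[core] < core_delay_ns[best])
                best = core;
        }
        return best;
    }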
Step 704: the interrupt controller migrates and binds part of the current interrupt requests of the first processor core to the second processor core according to the second configuration information.
The interrupt controller receives the second configuration information sent by the kernel layer and executes the interrupt balancing policy indicated by it, that is, it migrates and binds part of the current interrupt requests of the first processor core to the second processor core, where the second processor core is different from the first processor core, and after the migration and binding the scheduling delay value of the first processor core is less than or equal to the first delay threshold. The second processor core is at least one processor core other than the first processor core.
Optionally, the second configuration information includes the interrupt number to be migrated out of the first processor core and the processor core identifier of the second processor core. The interrupt controller migrates and binds, according to the second configuration information, the at least one interrupt request corresponding to the interrupt number to the second processor core. Illustratively, the interrupt numbers to be migrated out are m interrupt numbers, where m is a positive integer.
Optionally, the interrupt controller migrates and binds part of the current interrupt requests of the first processor core to the second processor core in a preset amortized manner, where the preset amortized manner indicates that after the migration and binding, the absolute values of the differences between the scheduling delay values of the other processor cores are less than a first difference threshold. The other processor cores are the processor cores of the electronic device other than the first processor core, and the first difference threshold is a custom setting or a default setting; this is not limited in the embodiments of the present application.
In summary, in the embodiments of the present application, the kernel layer dynamically adjusts the interrupt binding according to the current scheduling delay value of the first processor core and the first delay threshold configured by the user layer. If the current scheduling delay value of the first processor core is greater than the first delay threshold, the interrupt controller migrates and binds part of the current interrupt requests of the first processor core to the second processor core, so that after the migration and binding the scheduling delay value of the first processor core is less than or equal to the first delay threshold, which reduces the scheduling delay on the first processor core. In addition, the migrated interrupt requests are reasonably amortized over the other processor cores of the cluster, which preserves the concurrency of interrupt processing and avoids the situation in which an overloaded single processor core drives up the frequency of the whole cluster and wastes power.
Regarding scheduling delay statistics, the related art also has the following problem: hard interrupt processing and soft interrupt processing are handled separately, soft interrupt processing is not aware of which hard interrupt processing triggered its execution, and the corresponding scheduling delay values are also counted separately in the kernel.
In fact, the two are related. For example, in the process of receiving network packets, the kernel enters hard interrupt processing to perform a simple hardware configuration (which takes little time) and then triggers soft interrupt processing through the raise_softirq_irqoff interface; the actual packet processing is done in the handler function of the soft interrupt processing (which takes much longer). In this scenario, without the hard interrupt processing the soft interrupt processing would not be triggered, so both should be counted in the overhead statistics of the same interrupt number.
In the related art, scheduling delay statistics are kept at the granularity of processor cores, and load statistics at the granularity of interrupts are not supported. Therefore, current software that implements an interrupt balancing function can only estimate the scheduling delay value of a given interrupt number from the number of interrupts, while the processing times of different interrupts are not the same. Such statistics are inaccurate and cannot properly support the interrupt balancing processing in the embodiments of the present application. For example, the hard interrupt processing of a network card does not take much time by itself, but its soft interrupt processing is time-consuming; if the scheduling delay value is calculated as in the related art, that is, without associating the two, it may create the false impression that the interrupt overhead of the network card is small.
To address the above problems, the embodiments of the present application propose interrupt-granularity load statistics (Per-Interrupt Load Tracking, PILT), that is, scheduling delay values are counted per interrupt request, and the overhead of the corresponding soft interrupt processing is also counted into the scheduling delay value of the corresponding interrupt request. Based on the embodiment shown in FIG. 7, before step 702 in which the kernel layer judges whether the scheduling delay value of the first processor core is greater than the first delay threshold, the kernel layer needs to collect the scheduling delay value of the first processor core. In a possible implementation, the kernel layer performs real-time statistics of the scheduling delay value according to the received interrupt requests, and the statistics process includes the following steps, as shown in FIG. 8:
Step 801: obtain the interrupt processing duration corresponding to the interrupt request, where the interrupt processing duration includes the total duration of the hard interrupt processing and the soft interrupt processing.
Optionally, after an interrupt request is received, the start time and end time of the hard interrupt processing corresponding to that interrupt request are determined, and the absolute value of the difference between them is determined as a first processing duration; the start time and end time of the soft interrupt processing corresponding to that interrupt request are determined, and the absolute value of the difference between them is determined as a second processing duration; the sum of the first processing duration and the second processing duration of the interrupt request is determined as the interrupt processing duration of that interrupt request.
Optionally, in order to establish the association between hard interrupt processing and soft interrupt processing, when the hard interrupt processing of an interrupt request ends and before the soft interrupt processing is triggered, the interrupt number of the interrupt request and the softirq type of the soft interrupt processing it triggers are saved. The interrupt number of an interrupt request uniquely identifies that interrupt request among multiple interrupt requests. Softirq types include network receive interrupt, network transmit interrupt, timer interrupt, scheduling interrupt, read-copy update (Read-Copy Update, RCU) lock, and other types.
Optionally, since there may be many hard interrupts at the time the statistics are taken and the soft interrupt processing may not keep up, the information needs to be saved in an array, that is, the interrupt number of the interrupt request and the softirq type of the soft interrupt processing it triggers are saved in an array. For example, if the interrupt number of an interrupt request is 200 and the softirq type is the network receive interrupt NET_RX, then "softirq_type: NET_RX; hw_irq: 200" is saved in an array. The soft interrupt processing obtains, from the array saved in the previous step, the first array member that matches its softirq type (for example, the softirq for network packet reception corresponds to the softirq type NET_RX), obtains the interrupt number in that first matching array member, and adds the second processing duration of the soft interrupt processing to the interrupt processing duration of that interrupt number.
For example, if the first processing duration of interrupt number 200 is delta_hardirq200 and the second processing duration of interrupt number 200 is delta_softirq200, then the interrupt processing duration of interrupt number 200 is delta_irq200 = delta_hardirq200 + delta_softirq200.
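A minimal sketch of this per-interrupt bookkeeping follows; the pending array, the function names and the fixed-size tables are illustrative assumptions, not the patent's or the kernel's actual data structures. The hard interrupt records its interrupt number and the softirq type it raises; when a softirq of that type later finishes, its duration is charged back to that interrupt number.

    #include <stdint.h>

    #define MAX_PENDING 32
    #define MAX_IRQ_NR  1024   /* assume interrupt numbers below 1024 */

    struct pending_softirq {
        int softirq_type;      /* e.g. NET_RX */
        int hw_irq;            /* e.g. 200 */
    };

    static struct pending_softirq pending[MAX_PENDING];
    static int pending_count;

    /* Accumulated interrupt processing duration per interrupt number (ns),
     * i.e. delta_hardirq + delta_softirq. */
    static uint64_t irq_duration_ns[MAX_IRQ_NR];

    /* Called at the end of the hard interrupt, before the softirq is raised. */
    static void pilt_record_hardirq(int hw_irq, int softirq_type, uint64_t delta_hardirq)
    {
        irq_duration_ns[hw_irq] += delta_hardirq;
        if (pending_count < MAX_PENDING) {
            pending[pending_count].softirq_type = softirq_type;
            pending[pending_count].hw_irq = hw_irq;
            pending_count++;
        }
    }

    /* Called when a softirq of softirq_type finishes: charge its duration to
     * the first pending entry with a matching type, then drop that entry. */
    static void pilt_record_softirq(int softirq_type, uint64_t delta_softirq)
    {
        for (int i = 0; i < pending_count; i++) {
            if (pending[i].softirq_type != softirq_type)
                continue;
            irq_duration_ns[pending[i].hw_irq] += delta_softirq;
            for (int j = i; j < pending_count - 1; j++)
                pending[j] = pending[j + 1];
            pending_count--;
            return;
        }
    }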
Step 802: according to the interrupt processing duration of the interrupt request, determine the scheduling delay value corresponding to the interrupt request using a preset algorithm.
Optionally, the preset algorithm includes per-entity load tracking (Per-entity load tracking, PELT) or window-assisted load tracking (Window Assisted Load Tracking, WALT).
In a possible implementation, the WALT algorithm is used: the window length is preset (for example, 10 ms), the average value or the current maximum value of the interrupt processing duration within the most recent preset number of windows (for example, five windows) is calculated, and the resulting average or maximum is determined as the scheduling delay value corresponding to the interrupt request.
As another example, the PELT algorithm is used: the value of a decay factor is preset, and the decay factor is used to perform a weighted sum of the interrupt processing durations within the preset number of windows, the decay factor being a positive number less than 1. For example, if the decay factor is y and the interrupt processing durations in three windows are 10, 9 and 8, then the scheduling delay value corresponding to the interrupt request is 10×y + 9×y² + 8×y³.
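The decay-weighted sum can be written out as follows (a small illustration of the example above, with the most recent window weighted by y, the next by y², and so on):

    /* Decay-weighted sum of per-window interrupt processing durations.
     * durations[0] is the most recent window and y is the decay factor
     * (0 < y < 1). With durations = {10, 9, 8} the result is
     * 10*y + 9*y*y + 8*y*y*y, matching the example in the text. */
    static double pelt_style_delay(const double *durations, int n, double y)
    {
        double weight = y;
        double sum = 0.0;

        for (int i = 0; i < n; i++) {
            sum += durations[i] * weight;
            weight *= y;
        }
        return sum;
    }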
Optionally, the interrupt number and the scheduling delay value corresponding to the interrupt request are saved. Illustratively, this information is kept in a designated variable of the first processor core, for example a per-processor-core variable.
Step 803: sum the scheduling delay value corresponding to the interrupt request and the current scheduling delay value of the first processor core to obtain an updated scheduling delay value.
Optionally, a weighted sum of the scheduling delay value corresponding to the interrupt request and the current scheduling delay value of the first processor core is calculated to obtain the updated scheduling delay value.
In summary, compared with the related-art scheme that counts scheduling delay overhead per processor core, the embodiments of the present application track and calculate the load overhead of each interrupt request, thereby avoiding the coarse approach of estimating load only from the number of interrupts and supporting more accurate interrupt balancing. The time overhead of the soft interrupt processing triggered by hard interrupt processing is counted into the corresponding interrupt processing duration, so the scheduling delay value determined from the interrupt processing duration is more accurate, and the subsequent interrupt balancing can be performed more precisely.
In addition, in the related art, the scheduling delay caused by interrupt processing is not considered during task core selection, resulting in an excessive scheduling delay; or a unified interrupt load criterion is applied to all threads, so background threads end up selecting the same processor core as the specified thread, causing unpredictable effects (for example, a held lock preventing scheduling). The embodiments of the present application support configuring a scheduling delay requirement for a specified thread and, during core selection, judging whether the scheduling delay requirement is greater than or equal to the scheduling delay value of the processor core; the corresponding processor core is selected only when the requirement is met, which guarantees that the specified thread can be scheduled in time and reduces the probability of frame loss. Non-critical threads (for example, background threads) have no delay requirement and can therefore select processor cores with high scheduling delay values, reducing interference with the critical threads. Based on the electronic device shown in FIG. 6, the task core selection process includes the following steps, as shown in FIG. 9:
Step 901: the user layer configures the second delay threshold for a specified thread of the foreground application and sends third configuration information to the kernel layer.
Optionally, the user layer determines the specified thread of the foreground application; the maximum scheduling delay configured for that specified thread is the second delay threshold. The user layer sends the third configuration information to the kernel layer, where the third configuration information includes the thread identifier of the specified thread and the second delay threshold.
The thread identifier of the specified thread uniquely identifies the specified thread among multiple threads. The specified thread is a thread that has a requirement on the scheduling delay. Optionally, the specified thread includes a frame drawing thread and/or a process of an inter-process communication mechanism, for example the UI/Render thread, the Surfaceflinger thread, or a communication-related binder thread. The type of the specified thread is not limited in the embodiments of the present application.
The second delay threshold is the upper limit of the scheduling delay value configured for the specified thread.
The second delay threshold may subsequently be adjusted dynamically according to the frame drawing completion status. For example, if the absolute value of the difference between the actual end time of the previous frame and the specified deadline is greater than a second difference threshold, the second delay threshold is increased; if the absolute value of the difference is less than or equal to the second difference threshold, the second delay threshold is decreased to ensure that the thread can be scheduled faster. The second difference threshold is a custom setting or a default setting; this is not limited in the embodiments of the present application.
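A possible form of this adjustment rule is sketched below; the step size and all names are assumptions made for illustration:

    #include <stdint.h>

    /* Adjust the specified thread's second delay threshold according to how
     * far the previous frame's actual end time was from its specified deadline.
     * A margin larger than the second difference threshold relaxes the delay
     * threshold; otherwise it is tightened so the thread is scheduled sooner.
     * All values are in nanoseconds. */
    static uint64_t adjust_second_threshold(uint64_t threshold_ns,
                                            int64_t frame_end_ns,
                                            int64_t deadline_ns,
                                            uint64_t second_diff_ns,
                                            uint64_t step_ns)
    {
        int64_t diff = frame_end_ns - deadline_ns;
        uint64_t gap = (uint64_t)(diff < 0 ? -diff : diff);

        if (gap > second_diff_ns)
            return threshold_ns + step_ns;
        return threshold_ns > step_ns ? threshold_ns - step_ns : threshold_ns;
    }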
Optionally, the user layer identifies the specified thread, obtains its thread identifier, configures the second delay threshold for it, and sends the third configuration information to the kernel layer in a preset manner. Illustratively, the preset manner is an I/O device control (input/output control, ioctl) call. For example, the third configuration information includes "tid=1200; lat_req=200000", where the thread identifier of the specified thread is 1200 and the scheduling delay requirement is 200000 ns, that is, 200 us.
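Purely as an illustration of how such a configuration could be passed down via ioctl, the device node, request code and structure below are invented for this sketch and do not describe an actual kernel interface:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Hypothetical payload of the third configuration information. */
    struct thread_latency_req {
        int32_t  tid;          /* thread identifier, e.g. 1200 */
        uint64_t lat_req_ns;   /* scheduling delay requirement, e.g. 200000 ns */
    };

    #define SCHED_LAT_SET_REQ _IOW('S', 1, struct thread_latency_req)

    static int send_third_config(int tid, uint64_t lat_req_ns)
    {
        struct thread_latency_req req = { .tid = tid, .lat_req_ns = lat_req_ns };
        int fd = open("/dev/sched_latency", O_RDWR);   /* hypothetical node */
        int ret = 0;

        if (fd < 0)
            return -1;
        if (ioctl(fd, SCHED_LAT_SET_REQ, &req) < 0)
            ret = -1;
        close(fd);
        return ret;
    }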
Step 902: according to the third configuration information, the kernel layer determines a processor core that satisfies a preset core selection condition as the target processor core, where the preset core selection condition includes that the current scheduling delay value of the processor core is less than or equal to the second delay threshold.
After receiving the third configuration information sent by the user layer, the kernel layer checks the current state of the specified thread corresponding to the thread identifier. If the specified thread is executing, no operation is performed; if it is not executing, the kernel layer tries to wake it up. If the specified thread is not allowed to be woken up (for example, it is waiting for a lock), the second delay threshold of the scheduling delay requirement is modified (for example, reduced). If the specified thread is allowed to be woken up, the core selection flow is entered.
Optionally, in the core selection flow, for one processor core among the multiple processor cores, the kernel layer judges whether that processor core satisfies the preset core selection condition, which includes that the current scheduling delay value of the processor core is less than the second delay threshold. If the processor core satisfies the preset core selection condition, it is determined as the target processor core and step 903 is executed; if it does not, the next processor core is examined and the step of judging whether the processor core satisfies the preset core selection condition is performed again.
It should be noted that the way the current scheduling delay value of a processor core is collected can refer, by analogy, to the way the current scheduling delay value is collected as described above, and is not repeated here.
Optionally, the preset core selection condition includes that the current scheduling delay value of the processor core is less than the second delay threshold, together with other core selection conditions. For example, the other core selection conditions include that the priority of the processor core is greater than a preset priority threshold, and/or that the attribute information of the processor core matches the specified thread. This is not limited in the embodiments of the present application.
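The core selection check itself can be sketched as follows (a simplified illustration; the additional conditions mentioned above, such as priority and core attributes, are omitted, and the array name is an assumption):

    #include <stdint.h>

    #define NR_CORES 8

    /* Return the first processor core whose current scheduling delay value
     * satisfies the specified thread's second delay threshold, or -1 if no
     * core qualifies. */
    static int select_target_core(const uint64_t core_delay_ns[NR_CORES],
                                  uint64_t second_threshold_ns)
    {
        for (int core = 0; core < NR_CORES; core++) {
            if (core_delay_ns[core] <= second_threshold_ns)
                return core;   /* candidate target processor core */
        }
        return -1;
    }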
Step 903: the kernel layer adds the specified thread to the scheduling queue of the target processor core.
After the kernel layer adds the specified thread to the scheduling queue of the target processor core, the thread waits to be scheduled and executed. Since the scheduling delay value of the specified thread is within a controllable range, the specified thread can be scheduled in time within the period that satisfies its scheduling delay requirement.
In summary, the embodiments of the present application support configuring scheduling delay requirements at task granularity and handle foreground and background threads differently, reducing the influence of background threads on the specified foreground threads. During task core selection, both the task's scheduling delay requirement and the processor core's scheduling delay value are considered; a processor core can become the target processor core only when its current scheduling delay value is less than or equal to the second delay threshold required by the task, which guarantees that critical threads are placed on processor cores with low scheduling delay values and are scheduled in time.
In an illustrative application scenario, as shown in FIG. 10, an "app market" application A runs in the foreground of a mobile phone. When a tap operation signal acting on control 1001 in application A is received, the five recommended applications are updated in a batch. After the phone switches application A to the background, a "magazine" application B is opened to display multiple magazine covers. Because application A has to download upgrade packages over the network for the batch update, a large amount of network traffic is received/forwarded through the wireless network card, and the network card notifies the kernel layer to read and write data in the network card memory by means of interrupt requests, so a large number of network card interrupt requests are generated. Moreover, since packet processing is involved, the interrupt processing for network packet transmission and reception often takes a long time (up to 10 ms). If, at this moment, frame drawing threads such as the UI/Render thread and the Surfaceflinger thread select a processor core that is handling or about to handle network card interrupts, frames are very likely to be dropped, causing the user interface to stutter while browsing magazines in application B.
To solve the above problem, the task core selection method provided by the embodiments of the present application includes the following steps, as shown in FIG. 11. Step 1101: the Framework layer identifies the foreground application as the "magazine" application B and configures the second delay threshold of the critical scheduling delay requirement of application B's UI/Render thread to 500 us. Step 1102: the Framework layer sets the first delay threshold of processor core 0 and processor core 4 to 500 us (a 4 little-core + 4 big-core architecture), ensuring that among both the little cores and the big cores there is at least one processor core whose task scheduling delay is controllable. Step 1103: after the Framework layer delivers the first delay threshold, processor core 0 and processor core 4 each decide, according to their current scheduling delay values, whether part of the interrupt requests directed at them needs to be bound to other processor cores; after the interrupt requests that exceed the first delay threshold are migrated out, the scheduling delay values of processor core 0 and processor core 4 are guaranteed to stay below 500 us. Step 1104: when the phone receives a sliding operation signal in application B, a frame drawing operation is triggered and application B calls the UI/Render thread to draw the frame. Step 1105: after the UI/Render thread is woken up, the core selection flow is performed according to the second delay threshold of the scheduling delay requirement. At this time the other processor cores are all handling network card interrupts and carry a high load, so with high probability processor core 0 or processor core 4 is selected. After the UI/Render thread is added to the scheduling queues of these processor cores, since their scheduling delay values are below 500 us, the UI/Render thread is guaranteed to be scheduled within 500 us, which reduces the probability of dropped frames and stutter.
In an illustrative example, as shown in FIG. 12, in the related art (assuming that application B needs 16.7 ms to draw one frame), because the scheduling delay value is not managed, the specified thread may remain in the runnable state for a long time due to interrupt processing, for example 3 ms or 8 ms. Once the interrupt processing duration exceeds 6 ms, frame dropping becomes very likely. With the interrupt scheduling method provided by the embodiments of the present application, the scheduling delay of the specified thread is brought down to 500 us, so it stays within a controllable range and no frames are dropped and no stutter occurs because of scheduling delay.
Please refer to FIG. 13, which shows a flowchart of an interrupt scheduling method provided by another exemplary embodiment of the present application. The embodiment is described by taking the application of the interrupt scheduling method to an electronic device as an example. The interrupt scheduling method includes:
Step 1301: obtain a preconfigured first delay threshold, where the first delay threshold is the maximum scheduling delay caused by interrupt processing that is configured for the first processor core.
Step 1302: obtain a scheduling delay value of the first processor core, where the scheduling delay value is used to indicate the current interrupt load of the first processor core.
Step 1303: when the scheduling delay value is greater than the first delay threshold, migrate and bind part of the current interrupt requests of the first processor core to a second processor core, where the second processor core is different from the first processor core.
It should be noted that for the relevant details of each step in this embodiment, reference may be made to the relevant descriptions in the foregoing embodiments, which are not repeated here.
The following are apparatus embodiments of the present application, which can be used to execute the method embodiments of the present application. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present application.
Please refer to FIG. 14, which shows a block diagram of an interrupt scheduling apparatus provided by an exemplary embodiment of the present application. The apparatus can be implemented as all or part of the electronic device provided above through software, hardware or a combination of the two. The apparatus may include a first obtaining unit 1410, a second obtaining unit 1420 and a binding unit 1430.
The first obtaining unit 1410 is configured to obtain a preconfigured first delay threshold, where the first delay threshold is the maximum scheduling delay caused by interrupt processing that is configured for the first processor core.
The second obtaining unit 1420 is configured to obtain a scheduling delay value of the first processor core, where the scheduling delay value is used to indicate the current interrupt load of the first processor core.
The binding unit 1430 is configured to, when the scheduling delay value is greater than the first delay threshold, migrate and bind part of the current interrupt requests of the first processor core to a second processor core, where the second processor core is different from the first processor core.
在一种可能的实现方式中,迁移绑定后第一处理器核的调度时延值小于或等于第一时延阈值。In a possible implementation manner, the scheduling delay value of the first processor core after the migration binding is less than or equal to the first delay threshold.
在另一种可能的实现方式中,迁移绑定后其他处理器核的调度时延值之间的差值绝对值均小于预设的第一差值阈值,其他处理器核为除第一处理器核以外的各个处理器核。In another possible implementation, after migration and binding, the absolute values of the differences between the scheduling delay values of other processor cores are all smaller than the preset first difference threshold, and the other processor cores are Each processor core other than the processor core.
In another possible implementation, the apparatus is used in an electronic device including a user layer, a kernel layer, and a hardware layer. The first obtaining unit 1410 is further configured to send first configuration information from the user layer to the kernel layer, where the first configuration information includes a processor core identifier of the first processor core and the first delay threshold; the kernel layer receives the first configuration information sent by the user layer.
The binding unit 1430 is further configured to, when the scheduling delay value is greater than the first delay threshold, send second configuration information from the kernel layer to the interrupt controller at the hardware layer, where the second configuration information includes an interrupt number to be migrated out of the first processor core and a processor core identifier of the second processor core; according to the second configuration information, the interrupt controller migrates and binds at least one interrupt request corresponding to the interrupt number to the second processor core.
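As an illustrative aside, on systems that expose a Linux-style /proc/irq/<n>/smp_affinity interface, rebinding an interrupt number to a different core can be expressed roughly as follows. The embodiments do not prescribe any particular kernel interface; the interrupt number and target core used below are placeholders for this sketch.

```c
/* Minimal sketch of rebinding one interrupt number to a single target core
 * by writing a CPU bitmask to a Linux-style affinity file. Requires
 * sufficient privileges; error handling is kept minimal on purpose. */
#include <stdio.h>
#include <stdlib.h>

static int rebind_irq_to_core(int irq, int target_core)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    f = fopen(path, "w");
    if (f == NULL)
        return -1;

    /* A bitmask with only the target core's bit set, e.g. core 2 -> 0x4. */
    fprintf(f, "%x\n", 1u << target_core);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Placeholder values: move interrupt number 45 to processor core 2. */
    if (rebind_irq_to_core(45, 2) != 0) {
        perror("rebind_irq_to_core");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```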
In another possible implementation, the apparatus further includes a statistics unit, and the statistics unit is configured to:
after an interrupt request is received, obtain an interrupt processing duration corresponding to the interrupt request, where the interrupt processing duration includes the total duration of hard interrupt processing and soft interrupt processing;
determine, by using a preset algorithm and according to the interrupt processing duration of the interrupt request, a scheduling delay value corresponding to the interrupt request; and
sum the scheduling delay value corresponding to the interrupt request and the current scheduling delay value of the first processor core to obtain an updated scheduling delay value.
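The following C sketch illustrates this kind of per-core bookkeeping under the simplifying assumption that the "preset algorithm" is a plain sum of the hard-interrupt and soft-interrupt handling times; the structure name and fields are invented for this example and do not appear in the embodiments.

```c
/* Illustrative per-core accounting for interrupt-induced scheduling delay. */
#include <stdbool.h>
#include <stdint.h>

struct core_irq_stats {
    uint64_t sched_delay_ns;     /* accumulated scheduling delay value */
    uint64_t first_threshold_ns; /* preconfigured first delay threshold */
};

/* Called once an interrupt request has been fully handled on this core;
 * hard_ns and soft_ns are the measured hard- and soft-interrupt durations.
 * Returns true when the updated delay value exceeds the first threshold,
 * i.e. when some interrupt requests should be migrated to another core. */
static bool account_interrupt(struct core_irq_stats *st,
                              uint64_t hard_ns, uint64_t soft_ns)
{
    /* Placeholder for the "preset algorithm": here simply the total time. */
    uint64_t contribution = hard_ns + soft_ns;

    st->sched_delay_ns += contribution; /* sum into the current delay value */
    return st->sched_delay_ns > st->first_threshold_ns;
}
```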
In another possible implementation, the apparatus further includes a core selection unit, and the core selection unit is configured to:
obtain a preconfigured second delay threshold, where the second delay threshold is the maximum value, configured for a specified thread, of the scheduling delay caused by interrupt processing;
after the specified thread is woken up, determine a processor core that meets a preset core selection condition as a target processor core, where the preset core selection condition includes that the current scheduling delay value of the processor core is less than or equal to the second delay threshold; and
add the specified thread to the scheduling queue of the target processor core.
In another possible implementation, the specified thread includes a frame drawing process of a foreground application and/or a process of an inter-process communication mechanism.
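As another purely illustrative sketch, the core selection step above can be pictured as scanning the per-core delay values maintained by the accounting sketch and returning the first core that satisfies the second delay threshold. The array, core count, and function name are assumptions made for this example.

```c
/* Illustrative core selection for a latency-sensitive ("specified") thread
 * that has just been woken up. */
#include <stdint.h>

#define NR_CORES 8

/* Per-core scheduling delay values, maintained elsewhere (see the
 * accounting sketch above). */
extern uint64_t core_sched_delay_ns[NR_CORES];

/* Returns the index of the first core whose delay value is at or below the
 * second threshold, or -1 if no core currently qualifies and the default
 * scheduler policy should be used instead. */
static int select_core_for_specified_thread(uint64_t second_threshold_ns)
{
    for (int cpu = 0; cpu < NR_CORES; cpu++) {
        if (core_sched_delay_ns[cpu] <= second_threshold_ns)
            return cpu;
    }
    return -1;
}
```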
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the foregoing functional modules is used only as an example for description. In practical applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus provided in the foregoing embodiments and the method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and the details are not repeated here.
An embodiment of the present application provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to, when executing the instructions, implement the interrupt scheduling method performed by the electronic device in the foregoing embodiments.
An embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device performs the interrupt scheduling method performed by the electronic device in the foregoing embodiments.
An embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the interrupt scheduling method performed by the electronic device in the foregoing embodiments is implemented.
A computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer disk, a hard disk, a random access memory (Random Access Memory, RAM), a read-only memory (Read Only Memory, ROM), an erasable programmable read-only memory (Electrically Programmable Read-Only-Memory, EPROM, or flash memory), a static random-access memory (Static Random-Access Memory, SRAM), a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a digital versatile disc (Digital Video Disc, DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punched card or a raised structure in a groove on which instructions are stored, and any suitable combination of the foregoing.
The computer-readable program instructions or code described herein may be downloaded from a computer-readable storage medium to the respective computing/processing devices, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or a network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions used to perform the operations of the present application may be assembly instructions, instruction set architecture (Instruction Set Architecture, ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, the programming languages including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may be executed entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN) or a wide area network (Wide Area Network, WAN), or may be connected to an external computer (for example, through the Internet by using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (Field-Programmable Gate Array, FPGA), or a programmable logic array (Programmable Logic Array, PLA), is customized by using state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions, thereby implementing various aspects of the present application.
Aspects of the present application are described herein with reference to the flowcharts and/or block diagrams of the methods, apparatuses (systems), and computer program products according to the embodiments of the present application. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, so that when the instructions are executed by the processor of the computer or the other programmable data processing apparatus, an apparatus is produced that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and the instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operation steps are performed on the computer, the other programmable data processing apparatus, or the other device to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable data processing apparatus, or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings show the architectures, functions, and operations of possible implementations of the apparatuses, systems, methods, and computer program products according to multiple embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of instructions, and the module, program segment, or part of instructions contains one or more executable instructions for implementing a specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the accompanying drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by hardware (for example, a circuit or an ASIC (Application Specific Integrated Circuit)) that performs the corresponding function or action, or may be implemented by a combination of hardware and software, such as firmware.
Although the present application has been described herein with reference to the embodiments, in the course of implementing the claimed application, those skilled in the art can, by studying the accompanying drawings, the disclosure, and the appended claims, understand and carry out other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or another unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above. The foregoing description is illustrative rather than exhaustive, and it is not limited to the disclosed embodiments. Many modifications and variations will be apparent to a person of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies in the market, or to enable another person of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

  1. An interrupt scheduling method, wherein the method comprises:
    obtaining a preconfigured first delay threshold, wherein the first delay threshold is the maximum value, configured for a first processor core, of the scheduling delay caused by interrupt processing;
    obtaining a scheduling delay value of the first processor core, wherein the scheduling delay value is used to indicate the current interrupt load of the first processor core; and
    when the scheduling delay value is greater than the first delay threshold, migrating and binding some of the current interrupt requests of the first processor core to a second processor core, wherein the second processor core is different from the first processor core.
  2. The method according to claim 1, wherein after the migration and binding, the scheduling delay value of the first processor core is less than or equal to the first delay threshold.
  3. The method according to claim 1 or 2, wherein after the migration and binding, the absolute values of the differences between the scheduling delay values of other processor cores are all smaller than a preset first difference threshold, and the other processor cores are the processor cores other than the first processor core.
  4. The method according to any one of claims 1 to 3, wherein the method is used in an electronic device comprising a user layer, a kernel layer, and a hardware layer, and the obtaining a preconfigured first delay threshold comprises:
    receiving, by the kernel layer, first configuration information sent by the user layer, wherein the first configuration information comprises a processor core identifier of the first processor core and the first delay threshold;
    and the migrating and binding, when the scheduling delay value is greater than the first delay threshold, some of the current interrupt requests of the first processor core to the second processor core comprises:
    when the scheduling delay value is greater than the first delay threshold, sending, by the kernel layer, second configuration information to an interrupt controller of the hardware layer, wherein the second configuration information comprises an interrupt number to be migrated out of the first processor core and a processor core identifier of the second processor core, and the second configuration information is used to instruct the interrupt controller to migrate and bind at least one interrupt request corresponding to the interrupt number to the second processor core.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises:
    after an interrupt request is received, obtaining an interrupt processing duration corresponding to the interrupt request, wherein the interrupt processing duration comprises the total duration of hard interrupt processing and soft interrupt processing;
    determining, by using a preset algorithm and according to the interrupt processing duration of the interrupt request, a scheduling delay value corresponding to the interrupt request; and
    summing the scheduling delay value corresponding to the interrupt request and the current scheduling delay value of the first processor core to obtain an updated scheduling delay value.
  6. The method according to any one of claims 1 to 5, wherein the method further comprises:
    obtaining a preconfigured second delay threshold, wherein the second delay threshold is the maximum value, configured for a specified thread, of the scheduling delay caused by interrupt processing;
    after the specified thread is woken up, determining a processor core that meets a preset core selection condition as a target processor core, wherein the preset core selection condition comprises that the current scheduling delay value of the processor core is less than or equal to the second delay threshold; and
    adding the specified thread to a scheduling queue of the target processor core.
  7. The method according to claim 6, wherein the specified thread comprises a frame drawing process of a foreground application and/or a process of an inter-process communication mechanism.
  8. An electronic device, wherein the electronic device comprises:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to implement the method according to any one of claims 1 to 7 when executing the instructions.
  9. A non-volatile computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
  10. A computer program product, comprising computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device performs the method according to any one of claims 1 to 7.
PCT/CN2022/093584 2021-06-02 2022-05-18 Interrupt scheduling method, electronic device, and storage medium WO2022252986A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110613606.2 2021-06-02
CN202110613606.2A CN115437755A (en) 2021-06-02 2021-06-02 Interrupt scheduling method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2022252986A1 true WO2022252986A1 (en) 2022-12-08

Family

ID=84272307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093584 WO2022252986A1 (en) 2021-06-02 2022-05-18 Interrupt scheduling method, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN115437755A (en)
WO (1) WO2022252986A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130771A (en) * 2023-03-30 2023-11-28 荣耀终端有限公司 Resource scheduling method, electronic equipment and storage medium
CN117130771B (en) * 2023-03-30 2024-06-04 荣耀终端有限公司 Resource scheduling method, electronic equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117369986A (en) * 2023-08-07 2024-01-09 华为技术有限公司 Interrupt request equalization method and device and computing equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269391B1 (en) * 1997-02-24 2001-07-31 Novell, Inc. Multi-processor scheduling kernel
CN101354664A (en) * 2008-08-19 2009-01-28 中兴通讯股份有限公司 Method and apparatus for interrupting load equilibrium of multi-core processor
CN104838359A (en) * 2012-08-16 2015-08-12 微软技术许可有限责任公司 Latency sensitive software interrupt and thread scheduling
CN105528330A (en) * 2014-09-30 2016-04-27 杭州华为数字技术有限公司 Load balancing method and device, cluster and many-core processor
CN110839075A (en) * 2019-11-08 2020-02-25 重庆大学 Service migration method based on particle swarm in edge computing environment

Also Published As

Publication number Publication date
CN115437755A (en) 2022-12-06

Similar Documents

Publication Publication Date Title
US20190324819A1 (en) Distributed-system task assignment method and apparatus
US9727372B2 (en) Scheduling computer jobs for execution
US10169060B1 (en) Optimization of packet processing by delaying a processor from entering an idle state
US10733032B2 (en) Migrating operating system interference events to a second set of logical processors along with a set of timers that are synchronized using a global clock
US10223165B2 (en) Scheduling homogeneous and heterogeneous workloads with runtime elasticity in a parallel processing environment
US20210200587A1 (en) Resource scheduling method and apparatus
WO2021233261A1 (en) Multi-task dynamic resource scheduling method
WO2016078178A1 (en) Virtual cpu scheduling method
Pastorelli et al. HFSP: size-based scheduling for Hadoop
WO2017206749A1 (en) Adaptive resource allocation method and apparatus
CN111897637B (en) Job scheduling method, device, host and storage medium
CN111488210B (en) Task scheduling method and device based on cloud computing and computer equipment
WO2022252986A1 (en) Interrupt scheduling method, electronic device, and storage medium
WO2023174037A1 (en) Resource scheduling method, apparatus and system, device, medium, and program product
US10592107B2 (en) Virtual machine storage management queue
CN105718320A (en) Clock task processing method, device and facility
KR101890046B1 (en) Concurrent network application scheduling for reduced power consumption
WO2023165485A1 (en) Scheduling method and computer system
US20140245050A1 (en) Power management for host with devices assigned to virtual machines
JP5299869B2 (en) Computer micro job
CN116303132A (en) Data caching method, device, equipment and storage medium
CN118051313A (en) Process scheduling method and device, computer readable storage medium and terminal
Mirvakili et al. Managing Bufferbloat in Cloud Storage Systems
CN116301567A (en) Data processing system, method and equipment
CN118138668A (en) Method, device and equipment for adjusting time slices of protocol stack process

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22815033

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22815033

Country of ref document: EP

Kind code of ref document: A1