CN116028207A - Scheduling policy determination method, device, equipment and storage medium - Google Patents

Scheduling policy determination method, device, equipment and storage medium

Info

Publication number: CN116028207A
Authority: CN (China)
Prior art keywords: scene, key value, application, determining, scheduling
Legal status: Granted
Application number: CN202210735811.0A
Other languages: Chinese (zh)
Other versions: CN116028207B (en)
Inventor: 张茂飞
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Publication of CN116028207A
Application granted; publication of CN116028207B
Legal status: Active

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses a scheduling policy determining method, apparatus, device, and storage medium, and belongs to the field of computer technologies. The method comprises the following steps: determining the user scene in which the electronic device is currently located and the power state and system load of the electronic device; acquiring a first key value corresponding to the user scene, a second key value corresponding to the power state, and a third key value corresponding to the system load; and determining a target key value according to the first key value, the second key value and the third key value. A scheduling policy corresponding to the target key value is then acquired from the correspondence between key values and scheduling policies. Because the reference factors used when determining the scheduling policy are comprehensive, resource scheduling based on the scheduling policy can accurately and reasonably allocate the resources of the electronic device, which not only meets user needs but also takes the system needs of the electronic device into account, thereby ensuring stable operation of the electronic device while reducing its energy consumption and improving its battery endurance.

Description

Scheduling policy determination method, device, equipment and storage medium
The present application claims priority from Chinese patent application No. 202210529304.1, entitled "Scheduling policy determination method", filed on month 16 of 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for determining a scheduling policy.
Background
As the performance of electronic devices improves, their power consumption grows higher and higher, while battery capacity improves only slowly. As a result, the battery endurance of an electronic device cannot meet user requirements, which degrades the user experience. For this reason, accurate resource scheduling needs to be performed on the electronic device, so as to guarantee its performance while providing the long battery endurance that users expect.
At present, when resource scheduling is performed on an electronic device, a corresponding scheduling policy is determined according to the application program currently running on the device, and resources are scheduled according to that policy, so that the performance of the device meets the resource requirements of the application program while the energy consumption of the device is reduced and its battery endurance is improved.
However, this manner of resource scheduling considers only the application program running on the electronic device, which is a relatively limited basis, so reasonable allocation of the device's resources cannot be achieved accurately.
Disclosure of Invention
The present application provides a scheduling policy determining method, apparatus, device, and storage medium that can accurately and reasonably allocate the resources of an electronic device. The scheme is as follows:
In a first aspect, a scheduling policy determining method is provided, in which a user scenario in which an electronic device is currently located is determined, and a power state and a system load of the electronic device are determined. And then, acquiring a first key value corresponding to the user scene, acquiring a second key value corresponding to the power state, acquiring a third key value corresponding to the system load, and determining a target key value according to the first key value, the second key value and the third key value. And acquiring the scheduling strategy corresponding to the target key value in the corresponding relation between the key value and the scheduling strategy. The scheduling policy is used for scheduling resources of the electronic device.
In the method, the reference factors used when determining the scheduling policy are comprehensive, so resource scheduling based on the scheduling policy can accurately and reasonably allocate the resources of the electronic device. This not only meets user requirements but also takes the system requirements of the electronic device into account, thereby guaranteeing stable operation of the device while reducing its energy consumption and improving its battery endurance. In addition, the various reference factors are converted into corresponding key values, and the corresponding scheduling policy is then obtained directly from those key values, so the operation process is simple, the processing pressure on the system can be reduced, and stable operation of the electronic device is further ensured.
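As an illustrative aid only, the correspondence between key values and scheduling policies can be pictured as a lookup table. The following is a minimal Python sketch of that idea; the scene, power-state and load names, the key-value format, and the policy fields are all invented for the example and are not taken from this application.

```python
from typing import Optional

# Hypothetical correspondence between target key values and scheduling policies.
POLICY_TABLE = {
    "video_fullscreen|ac_performance|load_low":  {"epp": 64,  "pl1_w": 28, "pl2_w": 45},
    "video_fullscreen|dc_balanced|load_high":    {"epp": 160, "pl1_w": 18, "pl2_w": 30},
    "office_document|dc_energy_saving|load_low": {"epp": 200, "pl1_w": 12, "pl2_w": 20},
}

def determine_scheduling_policy(scene_key: str, power_key: str, load_key: str) -> Optional[dict]:
    """Splice the three key values into a target key value, then look up the policy."""
    target_key = f"{scene_key}|{power_key}|{load_key}"
    return POLICY_TABLE.get(target_key)

print(determine_scheduling_policy("video_fullscreen", "ac_performance", "load_low"))
```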
Optionally, the operation of determining the user scenario in which the electronic device is currently located may be: and acquiring application running information of the electronic equipment, and determining a user scene according to the application running information.
The application running information may include focus application information, and may further include other application information such as non-focus application information and background application information. In this application, the application information of a given application may include its application name, application type, application running state, and the like.
The user scenario refers to a usage scenario of the user, i.e. what the user is doing using the electronic device. The user scenario may reflect the user's needs. Because the user often uses various applications installed in the electronic device when using the electronic device, the user scene can be determined according to the application running information, and the determined user scene is more accurate.
As an example, the operation of determining the user scenario according to the application running information may be: determining a main scene according to the application type in the focus application information; determining at least one sub-scene according to the main scene, the application running state in the focus application information, the non-focus application information and the background application information; and selecting the sub-scene with the highest priority from the at least one sub-scene, where the main scene and the selected sub-scene together constitute the user scene.
The priority of each sub-scene can be preset in the electronic device, for example by a technician according to usage requirements. For instance, the technician may set the priority of each sub-scene according to how strongly the sub-scene affects the battery endurance of the electronic device: the greater the influence on battery endurance, the higher the priority; the smaller the influence on battery endurance, the lower the priority.
In this way, after the at least one sub-scene is determined according to the main scene, the application running state in the focus application information, the non-focus application information and the background application information, the sub-scene with the highest priority can be selected from them, so that the most reasonable scheduling policy can be determined subsequently and the battery endurance of the electronic device can be improved to the greatest extent.
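For illustration only, the priority-based selection might look like the following minimal Python sketch; the sub-scene names and priority numbers are assumptions invented for the example.

```python
# Hypothetical preset priorities (larger number = higher priority).
SUB_SCENE_PRIORITY = {
    "video_playing_fullscreen": 3,   # assumed: strongest impact on battery endurance
    "background_audio_playing": 2,
    "document_editing": 1,
}

def select_sub_scene(candidate_sub_scenes):
    """Return the candidate sub-scene with the highest preset priority."""
    return max(candidate_sub_scenes, key=lambda s: SUB_SCENE_PRIORITY.get(s, 0))

# The main scene plus the selected sub-scene together form the user scene.
print(select_sub_scene(["document_editing", "background_audio_playing"]))
```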
As another example, system operating conditions may also be detected. In this case, the operation of determining the user scene according to the application running information may be: if the system working state is changed to the idle state, when the application running information indicates that the application is in a state unused by the user, determining that the user scene is the idle scene.
The system working state refers to the current working state of the system and can be divided into an idle state and other states. The idle state refers to a state in which the system is not operated by the user for a long time.
The user scenario is an idle scenario, representing that the user is not currently using the electronic device (i.e. is not operating and is not using an application), from which a scheduling policy applicable in this case can be determined later.
As yet another example, the operation of determining the user scenario from the application running information may be: and acquiring IO load information, and determining a user scene according to the application running information and the IO load information.
The IO load information is used to reflect the IO load condition. For example, the IO load information may include an IO time ratio, which is the proportion of time spent on IO operations within a period, i.e., how much of one second is used for IO operations. The IO time ratio can represent the level of the IO load: the higher the IO time ratio, the higher the IO load; the lower the IO time ratio, the lower the IO load.
According to the IO load information, the user behavior can be inferred, and the user scene can then be determined. For example, if the IO time ratio persists at 30% or more, the user may be considered to be copying a file; if it persists between 10% and 30%, the user may be considered to be decompressing a file.
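Purely as an illustration of this kind of inference, the following minimal Python sketch applies the example thresholds above; the scene names and the notion of a "persisted" ratio are assumptions made for the example.

```python
from typing import Optional

def infer_scene_from_io(io_time_ratio: float, persisted: bool) -> Optional[str]:
    """io_time_ratio is the fraction of one second spent on IO operations (0.0 to 1.0)."""
    if not persisted:                     # only a sustained ratio is treated as meaningful
        return None
    if io_time_ratio >= 0.30:
        return "file_copy_scene"          # assumed scene name
    if 0.10 <= io_time_ratio < 0.30:
        return "file_decompress_scene"    # assumed scene name
    return None

print(infer_scene_from_io(0.45, persisted=True))
```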
Optionally, the power state includes a power mode and a power plan, and the operation of determining the power state of the electronic device may be: if the power mode change event and the power plan change event are detected, determining a power state according to the power mode change event and the power plan change event.
During the operation of the electronic device, if the power mode changes, a power mode change event is triggered, which indicates the latest power mode after the change. Similarly, if the power plan changes, a power plan change event is triggered, which indicates the latest power plan after the change. Thus, the power state can be determined rapidly and accurately according to the power mode change event and the power plan change event.
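As a minimal sketch of keeping the power state up to date from these two events (the mode and plan names and defaults are assumptions):

```python
class PowerStateTracker:
    """Keeps the most recent power mode and power plan reported by change events."""

    def __init__(self, mode: str = "AC", plan: str = "balanced"):   # assumed defaults
        self.mode = mode
        self.plan = plan

    def on_power_mode_changed(self, new_mode: str) -> None:   # e.g. "AC" or "DC"
        self.mode = new_mode

    def on_power_plan_changed(self, new_plan: str) -> None:   # e.g. "energy_saving"
        self.plan = new_plan

    def power_state(self):
        return (self.mode, self.plan)
```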
Optionally, determining the target key value according to the first key value, the second key value and the third key value may be: splicing the first key value, the second key value and the third key value to obtain the target key value. For example, the three key values may be spliced in a preset splicing order: the second key value is appended to the end of the first key value, and the third key value is then appended to the end of the second key value, yielding the target key value.
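For illustration only, this splicing order can be sketched as follows; the example key values are invented.

```python
def splice_target_key(first_key: str, second_key: str, third_key: str) -> str:
    """Append the second key to the end of the first, then the third to the end of the second."""
    return first_key + second_key + third_key

print(splice_target_key("S01", "P02", "L01"))   # -> "S01P02L01"
```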
Optionally, if any one or more of the user scenario, the power state, and the system load changes, the steps of obtaining the first key value corresponding to the user scenario, obtaining the second key value corresponding to the power state, and obtaining the third key value corresponding to the system load and subsequent steps are re-executed. That is, the key values corresponding to the three are redetermined, the key values corresponding to the three are spliced to obtain the target key value, and the scheduling strategy is redetermined according to the target key value. Therefore, the determined scheduling strategy can be ensured to be suitable for the latest state of the electronic equipment, and the reasonable allocation of the resources of the electronic equipment can be accurately realized according to the scheduling strategy.
In the application, the user scene where the electronic equipment is currently located, the power state and the system load of the electronic equipment can be continuously determined in the running process of the electronic equipment, namely, the user scene, the power state and the system load can be dynamically identified, the scheduling strategy is dynamically determined according to the changes of the user scene, the power state and the system load, and accordingly dynamic optimization of resources is carried out.
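A minimal sketch of this dynamic re-determination, under the assumption that some monitoring component reports the latest scene, power-state and load key values (the table entry and key values are invented):

```python
class PolicyUpdater:
    """Re-derives the target key value and scheduling policy whenever an input changes."""

    def __init__(self, policy_table: dict):
        self.policy_table = policy_table
        self.keys = (None, None, None)     # (scene key, power key, load key)
        self.policy = None

    def on_state_change(self, scene_key: str, power_key: str, load_key: str):
        new_keys = (scene_key, power_key, load_key)
        if new_keys == self.keys:          # nothing changed, keep the current policy
            return self.policy
        self.keys = new_keys
        self.policy = self.policy_table.get(scene_key + power_key + load_key)
        return self.policy

updater = PolicyUpdater({"S01P02L01": "balanced_policy"})   # made-up table entry
print(updater.on_state_change("S01", "P02", "L01"))
```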
In a second aspect, a scheduling policy determining apparatus is provided, which has a function of implementing the scheduling policy determining method behavior in the first aspect. The scheduling policy determining device comprises at least one module, and the at least one module is used for implementing the scheduling policy determining method provided in the first aspect.
In a third aspect, a scheduling policy determining apparatus is provided, where the structure of the scheduling policy determining apparatus includes a processor and a memory, where the memory is configured to store a program supporting the scheduling policy determining apparatus to execute the scheduling policy determining method provided in the first aspect, and store data related to implementing the scheduling policy determining method in the first aspect. The processor is configured to execute a program stored in the memory. The scheduling policy determining apparatus may further comprise a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, a computer readable storage medium is provided, in which instructions are stored which, when run on a computer, cause the computer to perform the scheduling policy determination method according to the first aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the scheduling policy determination method of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a software module architecture according to an embodiment of the present application;
FIG. 3 is a schematic diagram of interactions between software modules according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another software module architecture provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of interactions between another software module provided by an embodiment of the present application;
fig. 6 is a flowchart of a scheduling policy determining method provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of determining a user scenario provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of determining a power state according to an embodiment of the present application;
FIG. 9 is a schematic diagram of determining a scheduling policy provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of a scheduling policy determining apparatus provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference herein to "a plurality" means two or more. In the description of the present application, "/" means or, unless otherwise indicated, for example, a/B may represent a or B; "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, for the purpose of facilitating the clear description of the technical solutions of the present application, the words "first", "second", etc. are used to distinguish between the same item or similar items having substantially the same function and effect. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ.
The statements of "one embodiment" or "some embodiments" and the like, described in this application, mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in various places throughout this application are not necessarily all referring to the same embodiment, but mean "one or more, but not all, embodiments" unless expressly specified otherwise. Furthermore, the terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless otherwise specifically noted.
For clarity and conciseness of description of various embodiments, a brief introduction to related concepts or technologies is given below:
1. a focus window (focus window) refers to a window having focus. The focus window is the only window that can receive keyboard input. The manner in which the focus window is determined is associated with the focus mode (focus mode) of the system. The top level window of the focus window is called an active window (active window). Only one window at a time may be an active window. The focus window is a window which is needed to be used by the user at present with high probability.
2. The focus application refers to an application to which a focus window belongs, and the focus application is an application which can receive keyboard input, mouse operation and other operations in the current foreground operation.
3. Non-focus applications refer to applications that run in the foreground but are not currently able to receive keyboard input, mouse operations, and the like, i.e., applications that generally run in the foreground but are not operated by the user.
4. Background applications refer to applications that have been minimized to run in the background.
5. The focus mode may be used to determine how the mouse brings a window into focus. In general, the focus modes may include three types, respectively:
(1) Click to focus (click-to-focus): in this mode, the window that the mouse clicks on obtains focus. That is, when the mouse clicks on any position of a window that can receive focus, the window is activated, placed at the front of all windows, and receives keyboard input. When the mouse clicks on another window, this window loses focus.
(2) Focus follows mouse (focus-follows-mouse): in this mode, the window under the mouse obtains focus. That is, when the mouse moves onto a window that can receive focus, the window is activated and receives keyboard input without the user having to click somewhere on it, but the window is not necessarily placed at the front of all windows. When the mouse moves out of the range of this window, the window also loses focus.
(3) Sloppy focus (slide focus): this mode is similar to focus-follows-mouse; when the mouse moves onto a window that can receive focus, the window is activated and receives keyboard input without the user having to click somewhere on it, but the window is not necessarily placed at the front of all windows. Unlike focus-follows-mouse, the focus does not change when the mouse moves out of the range of this window; it changes only when the mouse moves into another window that can receive focus.
6. Process: a process includes multiple threads, and a thread may create a window. The focus process is the process to which the thread that created the focus window belongs.
7. Long-duration power consumption (PL1) refers to the power consumption of the central processing unit (central processing unit, CPU) under normal load, which is roughly equivalent to the thermal design power; most of the time, the running power consumption of the CPU does not exceed PL1.
8. Short-duration turbo power consumption (PL2) refers to the highest power consumption that the CPU can reach in a short time, with a limited duration. Generally, PL2 is greater than PL1.
Notably, PL1 and PL2 are names used on the Intel platform. On the Advanced Micro Devices (AMD) platform, PL1 is called SPL (sustained power limit), the first stage of PL2 is called FPPT (fast PPT limit), and the second stage of PL2 is called SPPT (slow PPT limit).
9. The CPU energy efficiency ratio (energy performance preference, EPP) is used to reflect the scheduling tendency of the CPU, and its value ranges from 0 to 255. The smaller the EPP value, the more the CPU scheduling tends toward performance; the larger the EPP value, the more it tends toward energy efficiency.
10. The energy efficiency-performance optimization gear (energy performance optimize gear, EPO Gear) is used to represent the strength with which EPP is adjusted, and its value may range from 1 to 5. The larger the value, the more the adjustment of EPP favors energy efficiency; the smaller the value, the more the adjustment of EPP favors performance.
The following describes an electronic device according to an embodiment of the present application.
The electronic device may be a tablet, notebook, ultra-mobile personal computer (UMPC), desktop, personal digital assistant (personal digital assistant, PDA), or the like.
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. As shown in fig. 1, the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, wireless communication module 150, display screen 160, etc.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
In some embodiments, a memory may also be provided in the processor 110 for storing instructions and data. The memory in the processor 110 is illustratively a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces, such as may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a USB interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display screen 160, the wireless communication module 150, and the like. In some embodiments, the power management module 141 and the charge management module 140 may also be provided in the same device.
The wireless communication module 150 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. For example, in the embodiment of the present application, the electronic device 100 may establish a bluetooth connection with a device such as a wireless headset through the wireless communication module 150. The wireless communication module 150 may be one or more devices that integrate at least one communication processing module. The wireless communication module 150 receives electromagnetic waves via an antenna, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 150 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via an antenna.
The electronic device 100 implements display functions through a GPU, a display screen 160, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 160 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 160 is used to display images, videos, and the like. The display screen 160 includes a display panel.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 via the external memory interface 120 to implement data storage functions, such as storing files of music, video, etc. in the external memory card.
The internal memory 121 may be used to store computer executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
One possible software system of the electronic device 100 is described next.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, a Windows system with a layered architecture is taken as an example, and a software system of the electronic device 100 is described as an example.
Fig. 2 is a block diagram of a software system of the electronic device 100 according to an embodiment of the present application. Referring to fig. 2, the layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Windows system is divided into a user mode and a kernel mode. The user mode comprises an application layer and a subsystem dynamic link library. The kernel mode comprises, from bottom to top, a firmware layer, a hardware abstraction layer (hardware abstraction layer, HAL), a kernel and driver layer, and an executive.
As shown in FIG. 2, the application layer includes applications for music, video, games, office, social, etc. The application layer also includes an environment subsystem, a system probe module, a first scene recognition engine, a first scheduling engine, and the like. In this embodiment, only a part of application programs are shown in the figure, and the application layer may further include other application programs, such as a shopping application, a browser, and the like.
The environment subsystem may expose certain subsets of the basic executive services to the application in a particular modality, providing an execution environment for the application.
And the system probe module is used for reporting the state to the first scene recognition engine. The first scene recognition engine is used for completing recognition of the user scene according to the state reported by the system probe module and determining a scheduling strategy according to the recognized user scene. The first scheduling engine is used for scheduling the firmware layer according to a scheduling policy.
In some embodiments, the first scenario recognition engine may recognize a user scenario in which the electronic device 100 is located and determine a base scheduling policy that matches the user scenario. The first scheduling engine may obtain the load situation of the electronic device 100, and determine an actual scheduling policy according to the actual operation situation of the electronic device 100 by combining the load situation of the electronic device 100 and the basic scheduling policy. The specific content of the first scene recognition engine and the first scheduling engine is described below, and is not described herein.
The subsystem dynamic link library includes an application programming interface (application programming interface, API) module, which includes the Windows API, the Windows native API, and the like. Both can provide system call entry and internal function support for application programs; the difference is that the Windows native API is the native API of the Windows system. For example, the Windows API may include user.dll and kernel.dll, and the Windows native API may include ntdll.dll. user.dll is the Windows user interface and can be used to perform operations such as creating windows and sending messages. kernel.dll is used to provide applications with an interface for accessing the kernel. ntdll.dll is an important Windows NT kernel-level file that describes the interface of the Windows native NT API. When Windows starts, ntdll.dll resides in a particular write-protected region of memory, which prevents other programs from occupying that memory region.
The executive includes a process manager, a virtual memory manager, a security reference monitor, an input/output (I/O) manager, a Windows management instrumentation (Windows management instrumentation, WMI) plug-in, a power manager, a system event driver (operating system event driver, OsEventDriver) node (also referred to as an event driver node), a system and chip driver (operating system to system on chip, OS2SOC) node, and the like.
The process manager is used to create and suspend processes and threads. The virtual memory manager implements "virtual memory". The virtual memory manager also provides basic support for the cache manager. The security reference monitor may execute a security policy on the local computer that protects operating system resources, and performs protection and monitoring of runtime objects. The I/O manager performs device-independent input/output and calls the appropriate device drivers for further processing. The power manager may manage power state changes for all devices that support power state changes. The OsEventDriver node can interact with the kernel and driver layer, for example with the graphics card driver; after determining that a GPU video decoding event exists, the OsEventDriver node reports the GPU video decoding event to the system probe module. The OS2SOC node may be used by the first scheduling engine to send adjustment information to the hardware device, such as information for adjusting PL1 and PL2 to the CPU.
The kernel and driver layer includes the kernel and device drivers. The kernel is an abstraction of the processor architecture; it separates the differences between the executive and the processor architecture and ensures the portability of the system. The kernel can perform thread scheduling and dispatching, trap handling and exception dispatching, interrupt handling and dispatching, and the like. A device driver runs in kernel mode as an interface between the I/O system and the associated hardware. The device drivers may include a graphics card driver, an Intel dynamic tuning technology (dynamic tuning technology, DTT) driver, a mouse driver, an audio and video driver, a camera driver, a keyboard driver, and the like. For example, the graphics card driver may drive the GPU to run, and the Intel DTT driver may drive the CPU to run.
The HAL is a kernel-mode module that can hide various hardware-related details, such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms, provide uniform service interfaces for the different hardware platforms that run Windows, and achieve portability across hardware platforms. It should be noted that, to maintain the portability of Windows, the internal Windows components and user-written device drivers do not access the hardware directly, but instead call the routines in the HAL.
The firmware layer may include a basic input output system (basic input output system, BIOS), which is a set of programs burned into a read-only memory (ROM) chip on the computer motherboard. It holds the computer's most important basic input and output programs, the power-on self-test program, and the system self-start program, and it can read and write the specific system-settings information stored in the complementary metal oxide semiconductor (complementary metal oxide semiconductor, CMOS). Its main function is to provide the lowest-level, most direct hardware setup and control for the computer. The Intel DTT driver may send instructions to the CPU through the BIOS.
It should be noted that, in the embodiment of the present application, only a Windows system is used as an example to illustrate, and in other operating systems (for example, an Android (Android) system, an IOS system, etc.), the scheme of the present application can be implemented as long as the functions implemented by the respective functional modules are similar to those implemented by the embodiment of the present application.
The workflow of the software and hardware for scheduling resources by the electronic device 100 described in the embodiment of fig. 2 above is described next.
Fig. 3 is a schematic workflow diagram of software and hardware for scheduling resources by the electronic device 100 according to an embodiment of the present application.
As shown in fig. 3, the application layer includes a system probe module and a first scene recognition engine, which includes a scene recognition module and a base policy matching manager. The scene recognition module can interact with the system probe module and the basic policy matching manager respectively. The scene recognition module may send a request to the system probe module to obtain the probe status. The system probe module may acquire the operating state of the electronic device 100. For example, the system probe modules may include a power state probe, a peripheral state probe, a process load probe, an audio video state probe, a system load probe, a system event probe, and the like.
The power state probe may subscribe to a kernel state for a power state event, determine a power state according to a callback function fed back by the kernel state, where the power state includes a battery (remaining) power, a power mode, and the like, and the power mode may include an alternating current (alternating current, AC) state and a Direct Current (DC) state. For example, the power state probe may send a request to the oseeventdriver node of the executive layer to subscribe to a power state event, which is forwarded by the oseeventdriver node to the power manager of the executive layer. The power manager may feed back a callback function to the power state probe through the oseeventdriver node.
The peripheral state probe can subscribe a peripheral event to the kernel state, and the peripheral event is determined according to a callback function fed back by the kernel state. Peripheral events include mouse wheel slide events, mouse click events, keyboard input events, microphone input events, camera input events, and the like.
The process load probe may subscribe to the process load from kernel states and determine the load of the process (e.g., the focal process) from the callback function fed back from kernel states.
The system load probe can subscribe the system load to the kernel state, and the system load is determined according to a callback function fed back by the kernel state.
The audio and video status probe may subscribe to the kernel mode for audio and video events, and determine the audio and video events currently existing in the electronic device 100 according to the callback function fed back by the kernel mode. The audio video events may include GPU decoding events, and the like. For example, the audio/video status probe may send a request to the oseeventdriver node of the executive layer to subscribe to the GPU decoding event, and the oseeventdriver node forwards the request to the graphics card driver of the kernel and driver layer. The display card driver can monitor the state of the GPU, and after the GPU is monitored to perform decoding operation, callback functions are fed back to the audio and video state probes through the OsEventDriver node.
The system event probe can subscribe to the kernel state for system events, and the system events are determined according to a callback function fed back by the kernel state. The system events may include window change events, process creation events, thread creation events, and the like. For example, the system event probe may send a request to the oseeventdriver node of the executive layer to subscribe to a process creation event, which is forwarded by the oseeventdriver node to the process manager. After the process manager creates the process, a callback function can be fed back to the system event probe through the OsEventDriver node. For another example, the system event probe may also send a request to the API module to subscribe to a focus window change event, and the API module may monitor whether the focus window of the electronic device 100 has changed, and when it monitors that the focus window has changed, feed back a callback function to the system event probe.
It can be seen that the system probe module subscribes to various events of the electronic device 100 from the kernel mode, and then determines the running state of the electronic device 100 according to the callback function fed back from the kernel mode, so as to obtain the probe state. After the system probe module obtains the probe state, the probe state can be fed back to the scene recognition module. After the scene recognition module receives the probe state, the scene recognition module can determine the user scene where the electronic device 100 is located according to the probe state. The user scene may include a video scene, a game scene, an office scene, a social scene, and the like. The user scenario may reflect the current use needs of the user. For example, when the scene recognition module recognizes that the focus window is a window of the video application, it determines that the electronic device 100 is in a video scene, which indicates that the user needs to watch and browse the video using the video application. For another example, the scene recognition module determines that the electronic device 100 is in a social scene when recognizing that the focus window is a chat window of an instant messaging application. The scene recognition module may also send the user scene to the base policy matching manager. The base policy matching manager may determine a base scheduling policy based on the user scenario. The base policy matching manager may feed back the base scheduling policy to the scene recognition module. The scene recognition module may send the base scheduling policy and the user scene to a first scheduling engine of an application layer.
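Purely as an illustration of mapping a focus application to a user scene, the following Windows-only Python sketch identifies the foreground (focus) application with the Win32 GetForegroundWindow API and the third-party psutil package; the application-to-type mapping and the scene names are invented, and this is not the subscription mechanism described in this application.

```python
import ctypes
import psutil   # third-party package: pip install psutil

APP_TYPE = {"vlc.exe": "video", "winword.exe": "office"}              # assumed mapping
SCENE_BY_TYPE = {"video": "video scene", "office": "office scene"}

def focus_application_scene() -> str:
    user32 = ctypes.windll.user32
    hwnd = user32.GetForegroundWindow()                   # handle of the focus window
    pid = ctypes.c_ulong()
    user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))
    exe_name = psutil.Process(pid.value).name().lower()   # focus application executable
    return SCENE_BY_TYPE.get(APP_TYPE.get(exe_name, ""), "other scene")

print(focus_application_scene())
```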
As shown in fig. 3, the first scheduling engine includes a load manager, a chip policy aggregator, and a scheduling executor. The load manager may receive the basic scheduling policy and the user scenario sent by the scene recognition module. The load manager can also acquire the system load from the system probe module, and adjust the basic scheduling policy according to the system load and the user scenario to obtain the actual scheduling policy. The actual scheduling policy includes an operating system (OS) scheduling policy and a first CPU power consumption scheduling policy.
The load manager may send the OS scheduling policy to the scheduling executor, and the scheduling executor may schedule based on the OS scheduling policy. The OS scheduling policy is used to adjust the process priority and I/O priority of the focal process. For example, the schedule executor may send an instruction to the process manager to adjust the process priority of the focal process, in response to which the process manager adjusts the process priority of the focal process. For another example, the scheduling executor may send an instruction to the I/O manager to adjust the I/O priority of the focal process, in response to which the I/O manager adjusts the I/O priority of the focal process.
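As an illustration only, adjusting the process priority and I/O priority of a focus process could look like the following sketch, which uses the third-party psutil package on Windows; whether the scheduling executor uses such calls is not stated in this application, and the chosen priority levels are assumptions.

```python
import psutil   # third-party package: pip install psutil

def boost_focus_process(pid: int) -> None:
    """Raise the CPU scheduling priority and I/O priority of an assumed focus process."""
    proc = psutil.Process(pid)
    proc.nice(psutil.ABOVE_NORMAL_PRIORITY_CLASS)   # Windows process priority class
    proc.ionice(psutil.IOPRIO_HIGH)                 # Windows I/O priority (psutil >= 5.6.2)
```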
The load manager may also send the first CPU power consumption scheduling policy to the chip policy aggregator, and the chip policy aggregator may obtain a second CPU power consumption scheduling policy based on the chip platform type of the CPU and the first CPU power consumption scheduling policy. The chip platform types of the CPU are mainly divided into two types, namely the Intel CPU and the AMD CPU. These two types of CPU differ in the manner in which the CPU power consumption is adjusted, and therefore need to be distinguished.
If the chip platform type of the CPU is AMD, the scheduling executor may send an instruction to the power manager to adjust the EPP of the CPU. In addition, the scheduling executor may also send an instruction to adjust PL1 and PL2 to the OS2SOC driver node, so as to adjust PL1 and PL2 of the CPU.
If the chip platform type of the CPU is Intel, the scheduling executor may send the second CPU power consumption scheduling policy to the Intel DTT driver through the WMI plug-in. The second CPU power consumption scheduling policy may include a minimum value of PL1 (pl1_mini), a maximum value of PL1 (pl1_max), PL2, a duration of PL2 (pl2_time), and EPO Gear, and the Intel DTT driver instructs the CPU to run based on the second CPU power consumption scheduling policy.
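For illustration only, the fields named above can be collected into a small structure with a platform dispatch stub; the numeric values and unit assumptions are invented, and the dispatch strings merely restate the two channels described in the text.

```python
from dataclasses import dataclass

@dataclass
class CpuPowerPolicy:
    pl1_mini: int    # minimum value of PL1 (assumed to be in watts)
    pl1_max: int     # maximum value of PL1 (assumed to be in watts)
    pl2: int         # PL2 (assumed to be in watts)
    pl2_time: int    # duration of PL2 (assumed to be in seconds)
    epo_gear: int    # EPO Gear, assumed range 1 to 5

def dispatch_policy(policy: CpuPowerPolicy, chip_platform: str) -> str:
    """Route the policy to the channel described in the text for each platform type."""
    if chip_platform == "intel":
        return f"send {policy} to the Intel DTT driver via the WMI plug-in"
    return "send EPP/PL1/PL2 adjustments via the power manager and the OS2SOC driver node"

print(dispatch_policy(CpuPowerPolicy(12, 28, 45, 28, 3), "intel"))
```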
Another possible software system of the electronic device 100 is described next.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, a Windows system with a layered architecture is taken as an example, and a software system of the electronic device 100 is described as an example.
Fig. 4 is a block diagram of a software system of the electronic device 100 according to an embodiment of the present application. Referring to fig. 4, the layered architecture divides the software into several layers, each with a clear role and division of work. The layers communicate with each other through a software interface. In some embodiments, the Windows system includes an application layer, a subsystem dynamic link library, a driver layer, and a firmware layer.
As shown in FIG. 4, the application layer includes applications for music, video, games, office, social, etc. The application layer also comprises a system probe module, a second scene recognition engine, a second scheduling engine, a strategy configuration module, a manager interface module and the like. In this embodiment, only a part of application programs are shown in the figure, and the application layer may further include other application programs, such as a shopping application, a browser, and the like.
And the system probe module is used for reporting the state to the second scene recognition engine. The second scene recognition engine is used for completing recognition of the user scene according to the state reported by the system probe module and determining a scheduling strategy according to the recognized user scene. The second scheduling engine is used for scheduling the firmware layer according to the scheduling policy.
The policy configuration module is used to send a plurality of preset scheduling policies to the second scene recognition engine; after recognizing the user scene, the second scene recognition engine searches the plurality of scheduling policies for the scheduling policy matched with the recognized user scene. The manager interface module is used to provide the current power mode to the second scene recognition engine, and the second scene recognition engine can select a scheduling policy matched with the current power mode and the current user scene.
The subsystem dynamic link library comprises an API module, which includes the Windows API, the Windows native API, and the like. Both can provide system call entry and internal function support for application programs; the difference is that the Windows native API is the native API of the Windows system. For example, the Windows API may include user.dll and kernel.dll, and the Windows native API may include ntdll.dll. user.dll is the Windows user interface and can be used to perform operations such as creating windows and sending messages. kernel.dll is used to provide applications with an interface for accessing the kernel. ntdll.dll is an important Windows NT kernel-level file that describes the interface of the Windows native NT API. When Windows starts, ntdll.dll resides in a particular write-protected region of memory, which prevents other programs from occupying that memory region.
The driver layer may include a process manager, a virtual memory manager, a secure reference monitor, an I/O manager, a power manager, a WMI plug-in, an Event driver node, an OS2SOC driver node.
The process manager is used to create and suspend processes and threads. The virtual memory manager implements "virtual memory". The virtual memory manager also provides basic support for the cache manager. The security reference monitor may execute a security policy on the local computer that protects operating system resources, performs protection and monitoring of runtime objects. The I/O manager performs device independent input/output and further processes call the appropriate device drivers. The power manager may manage power state changes for all devices that support power state changes. The WMI plug-in can be used for the second scheduling engine to send scheduling strategies to the firmware layer; the Event driven node may interact with a graphics card driver, an audio/video driver, a camera driver, a keyboard driver, etc. to enable the system probe module to detect various events (which may also be referred to as data or information), for example, interact with the graphics card driver, so that the system probe module may monitor GPU video decoding events. The OS2SOC driver node may be configured for the second scheduling engine to send scheduling policies to the firmware layer.
The firmware layer includes various hardware and hardware drivers configured for the electronic device 100; for example, the firmware layer may include a CPU and a mouse, and may also include a mouse driver. The hardware of the electronic device 100 may be configured from different hardware platforms, for example the Intel, AMD and NVIDIA hardware platforms. The scheduling policies of the three hardware platforms may differ, so the second scheduling engine may distinguish the hardware platform types when determining the scheduling policy. In this case, the firmware layer may also include Intel DTT, the AMD power management framework (power management framework, PMF), the NVIDIA Database (DB), and the like.
It should be noted that, in the embodiment of the present application, only a Windows system is used as an example to illustrate, and in other operating systems (for example, an Android system, an IOS system, etc.), the scheme of the present application can be implemented as long as the functions implemented by the respective functional modules are similar to those implemented by the embodiment of the present application.
The workflow of the software and hardware for scheduling resources by the electronic device 100 described in the embodiment of fig. 4 above is described next.
Fig. 5 is a schematic workflow diagram of software and hardware for scheduling resources by the electronic device 100 according to an embodiment of the present application.
As shown in fig. 5, the operating system of the electronic device 100 includes a system probe module, a second scene recognition engine, a second scheduling engine, and a chip scheduling engine. The system probe module, the second scene recognition engine and the second scheduling engine are located at an application layer, the second scene recognition engine can be operated as a plug-in, and the second scheduling engine can be operated as a service. The chip scheduling engine is located at the driver layer and can operate as a service.
The second scene recognition engine may interact with the system probe module to recognize a user scene according to the operation state of the electronic device 100 fed back by the system probe module. The second scene recognition engine can interact with the second scheduling engine, and after the second scene recognition engine recognizes the user scene, the scheduling strategy is determined according to the user scene, and the scheduling strategy is issued to the second scheduling engine. And after the second scheduling engine receives the scheduling policy, returning a receiving result to the second scene recognition engine so as to inform the second scene recognition engine that the second scene recognition engine has successfully received the scheduling policy. Then, the second scheduler engine transmits the scheduling policy to the chip scheduler engine, and the chip scheduler engine executes the scheduling policy.
The second scene recognition engine comprises a scene recognition module, a scene library, a strategy scheduling module and a strategy library. The second scheduling engine comprises a scene interaction module, a scheduling policy fusion module and a scheduling executor. The chip scheduling engine comprises a WMI plug-in, an Event driving node and an OS2SOC driving node. The firmware layer includes Intel DTT, AMD PMF, NVIDIA DB, etc.
As shown in fig. 5, the second scenario recognition engine may interact with the system probe module, the policy configuration module, the housekeeping interface module, and the second scheduling engine, respectively.
The manager interface module may send a power mode currently used by the electronic device to the second scene recognition engine, where the power mode may assist the second scene recognition engine in determining the scheduling policy; the policy configuration module is used for sending a plurality of pre-configured scheduling policies to the second scene recognition engine.
The system probe module may acquire the operating state of the electronic device 100. For example, the system probe modules may include power state probes, peripheral state probes, audiovisual state probes, application switching probes, system load probes, system operational state probes, and the like.
Wherein the power state probe is used to detect the power state, which includes the remaining battery level, the power mode, the power plan, etc.; the power mode may include an AC state and a DC state, and the power plan may include an energy efficiency plan, a balance plan, a performance plan, etc. The peripheral status probe is used for detecting peripheral events, including mouse wheel sliding events, mouse click events, keyboard input events, microphone input events, camera input events, and the like. The audio-visual status probe is used to detect audio events and video events currently present in the electronic device 100. The application switching probe is used for detecting the applications currently running in the electronic device 100, that is, detecting the focus application, non-focus applications, background applications, and the like, where the focus application is the application to which the focus window belongs, a non-focus application is an application to which a currently opened window that is not the focus window and is not minimized belongs, and a background application is an application running in the background. The system load probe is used to detect the current load level of the system. The system working state probe is used for detecting the current working state of the system, that is, detecting whether the system is currently in an idle state.
The system probe module detects the operation state of the electronic device 100 through various probes, and obtains the probe state. The second scenario recognition engine may subscribe to the system probe module for probe status. In this case, after the system probe module obtains the probe state, the probe state may be reported to the second scene recognition engine.
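To make the subscription and reporting relationship concrete, the following is a minimal Python sketch (not part of the original embodiment); the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ProbeState:
    """Latest detection result of one probe (hypothetical structure)."""
    probe_name: str   # e.g. "power", "peripheral", "audio_video", "app_switch", "system_load"
    payload: dict     # probe-specific state, e.g. {"power_mode": "DC", "power_plan": "balance"}

class SystemProbeModule:
    """Sketch of the probe-state subscribe/report relationship described above."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[ProbeState], None]]] = {}

    def subscribe(self, probe_name: str, callback: Callable[[ProbeState], None]) -> None:
        # The second scene recognition engine subscribes to the probe states it needs.
        self._subscribers.setdefault(probe_name, []).append(callback)

    def report(self, state: ProbeState) -> None:
        # After a probe obtains a new state, the state is reported to every subscriber.
        for callback in self._subscribers.get(state.probe_name, []):
            callback(state)
```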
The scene library in the second scene recognition engine is used for storing a plurality of user scenes, such as a plurality of main scenes including a social scene, an office scene, a browser scene and the like, and a plurality of sub-scenes may be divided under each main scene; for example, the browser scene includes a browser internet surfing scene, a browser audio playing scene, a browser video playing scene and the like. The policy library in the second scene recognition engine is used for storing the various scheduling policies sent by the policy configuration module. For example, the scheduling policies include big-little core scheduling, an office policy library in which the scheduling policies related to office applications are recorded, and the like. Big-little core scheduling is a capability provided by the architecture of the Intel 12th generation platform, and indicates a policy configuration that prioritizes the use of big cores (performance-biased) or little cores (energy-efficiency-biased).
The policy scheduling module may send a query subscription scenario request to the scenario recognition module, where the query subscription scenario request is used to trigger the scenario recognition module to perform scenario recognition, and the query subscription scenario request may be sent immediately after the electronic device 100 is powered on, or may be sent periodically, which is not limited in the embodiment of the present application.
After receiving the inquiry subscription scene request, the scene recognition module sends an inquiry subscription state request to the system probe module, wherein the inquiry subscription state request is used for indicating each probe in the system probe module to perform state detection/state determination and the like, and then the system probe module can report the state of the electronic device 100 to the scene recognition module. The scene recognition module determines the current user scene of the electronic device 100 from the scene library according to the state of the electronic device, and reports the user scene to the policy scheduling module. The policy scheduling module may determine a scheduling policy from a policy repository according to a user scenario in which the electronic device 100 is currently located.
In addition, the policy scheduling module may also receive the power mode issued by the manager interface module, where the power mode may be determined according to a user switch identifier issued by the manager interface module. The policy scheduling module may refer to the power mode when determining the scheduling policy, for example, determining a scheduling policy that matches both the currently used power mode and the current user scenario. Alternatively, the power mode is used as a condition for the policy scheduling module to determine the scheduling policy: when the power mode is a preset mode, the policy scheduling module determines the scheduling policy according to the user scene.
The policy scheduling module sends the scheduling policy to the scene interaction module, and after receiving the scheduling policy, the scene interaction module returns a receiving result to the policy scheduling module, where the receiving result is used to inform the policy scheduling module that the scheduling policy has been successfully received. The scene interaction module sends the scheduling policy to the scheduling policy fusion module, and the scheduling policy fusion module parses and translates the scheduling policy, that is, converts the policy parameters in the scheduling policy into parameters recognized by the hardware platform. The scheduling policy fusion module then transmits the parsed scheduling policy to the scheduling executor, and the scheduling executor delivers the parsed and translated scheduling policy according to the hardware platform type.
For example, if the hardware platform type is AMD, the scheduling executor may send the parsed and translated scheduling policy to the OS2SOC driving node; if the hardware platform type is Intel, the scheduling executor may send the parsed and translated scheduling policy to the Intel DTT driver through the WMI plug-in. In the embodiment of the application, the scheduling policy may be a chip scheduling policy, and the optimal balance of power consumption is achieved by adjusting the energy efficiency ratio of the chip. For example, the scheduling policy may be a power consumption scheduling policy of the CPU.
If the hardware platform type is AMD, the scheduling executor may send instructions in the scheduling policy to the power manager to adjust the EPP of the CPU. In addition, the scheduling executor may also send instructions in the scheduling policy for adjusting PL1 and PL2 of the CPU to the OS2SOC driving node. If the hardware platform type is Intel, the scheduling executor may send the scheduling policy to the Intel DTT driver through the WMI plug-in, where the scheduling policy may include the minimum value of PL1, the maximum value of PL1, the duration of PL2, and the EPO Gear, and the Intel DTT driver instructs the CPU to operate based on the scheduling policy. After receiving the scheduling policy, the WMI plug-in, the Intel DTT driver, and the OS2SOC driving node may each return a receiving result, where the receiving result is used to indicate that the scheduling policy has been successfully received.
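As an illustrative sketch only, the following Python function mirrors the per-platform delivery described above. The platform labels follow the preceding paragraphs, and the driver handles and method names (set_epp, set_power_limits, send_to_dtt) are hypothetical stand-ins for the power manager, the OS2SOC driving node, and the WMI plug-in.

```python
def dispatch_scheduling_policy(platform_type: str, policy: dict,
                               power_manager, os2soc_node, wmi_plugin) -> None:
    """Deliver the parsed and translated scheduling policy according to the platform type.

    `policy` is assumed to carry keys such as "EPP", "PL1", "PL2",
    "PL1_min", "PL1_max", "PL2_duration" and "EPO_gear" (hypothetical layout).
    """
    if platform_type == "AMD":
        # EPP adjustment goes to the power manager; PL1/PL2 go to the OS2SOC driving node.
        power_manager.set_epp(policy["EPP"])
        os2soc_node.set_power_limits(pl1=policy["PL1"], pl2=policy["PL2"])
    elif platform_type == "Intel":
        # The policy is forwarded to the Intel DTT driver through the WMI plug-in.
        wmi_plugin.send_to_dtt({
            "PL1_min": policy["PL1_min"],
            "PL1_max": policy["PL1_max"],
            "PL2_duration": policy["PL2_duration"],
            "EPO_gear": policy["EPO_gear"],
        })
    else:
        raise ValueError(f"unknown hardware platform type: {platform_type}")
```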
It should be noted that, in the embodiments of fig. 2-3 above, the first scenario recognition engine may determine the scheduling policy according to the user scenario. Similarly, in the embodiments of fig. 4-5 above, the second scenario recognition engine may determine the scheduling policy based on the user scenario. However, if the scheduling policy is determined only by considering the user scenario, the determined scheduling policy is limited, and reasonable allocation of resources of the electronic device cannot be accurately achieved.
Therefore, the embodiment of the application provides a scheduling policy determining method, which can determine a scheduling policy by combining a current user scene of an electronic device, a power state of the electronic device and a system load, so that reference factors when determining the scheduling policy are comprehensive, and resource scheduling performed on the basis of the scheduling policy can accurately realize reasonable allocation of resources of the electronic device, so that not only can user requirements be met, but also the system requirements of the electronic device can be considered, and further stable operation of the electronic device can be ensured under the conditions of reducing energy consumption of the electronic device and improving cruising ability of the electronic device.
The scheduling policy determining method provided in the embodiment of the present application is explained in detail below.
Fig. 6 is a flowchart of a scheduling policy determining method provided in an embodiment of the present application. Referring to fig. 6, the method includes the steps of:
step 601: the electronic device determines a user scenario in which the electronic device is currently located.
The user scenario refers to a usage scenario of the user, i.e. what the user is doing using the electronic device. The user scenario may reflect the user's needs.
Specifically, the operation of step 601 may be: the electronic equipment acquires application operation information of the electronic equipment, and determines a user scene where the electronic equipment is located according to the application operation information.
Because the user often uses various applications installed in the electronic device when using the electronic device, in the embodiment of the application, the user scene can be determined according to the application running information, and the determined user scene is more accurate.
The application running information may include focus application information, and further may further include application information such as non-focus application information, background application information, and the like. In the embodiment of the present application, the application information of a certain application may include an application name, an application type, an application running state, and the like of the application.
In some embodiments, application running information may be obtained by the system probe module in the embodiments of fig. 2-5 above. The system probe module can report the acquired application running information to the scene recognition module, and the scene recognition module can recognize the user scene according to the application running information.
For example, in the embodiments of FIGS. 2-3 above, a system event probe may be included in the system probe module, and a focus window probe may be included in the system event probe. The focus window probe is used to determine the latest focus window as the focus window changes. The focus window probe may report a focus window change event to the system event probe. The system event probe can determine application information of the current focus application, non-focus application, background application and the like according to the focus window change event reported by the focus window probe so as to obtain application running information. The system event probe may then report the application running information to the scene recognition module for the scene recognition module to recognize the user scene accordingly.
For another example, in the embodiments of fig. 4-5 above, an application switching probe may be included in the system probe module, and a focus window probe may be included in the application switching probe. The focus window probe is used for determining the latest focus window when the focus window changes, and the focus window probe can report a focus window change event to the application switching probe. The application switching probe can determine application information of the current focus application, non-focus application, background application and other applications according to the focus window change event reported by the focus window probe so as to obtain application running information. The application switching probe can report the application running information to the scene recognition module so that the scene recognition module can recognize the user scene according to the application running information.
The system event probe or the application switching probe may determine, according to the focus window change event reported by the focus window probe, the application information of the current focus application, non-focus applications, background applications, and the like, as follows. After receiving the focus window change event reported by the focus window probe, it first determines the current focus application, non-focus applications, background applications, and the like according to the process creation events, process exit events, window events, and the like detected by other probes in the system probe module, that is, it determines the application name and the application type of each application. It then determines the application running state of each application by combining the peripheral state, the audio state, the video state, the process load, and the like detected by other probes in the system probe module. In this way, the application information of the current focus application, non-focus applications, background applications, and the like can be obtained, that is, the application running information can be obtained.
The window events may include, among other things, a window full screen event, a window minimize event, a focus get event, a focus lose event, and the like. A window full screen event is used to indicate that a window is full-screen. The window minimize event is used to indicate that a window is minimized. The focus acquisition event is used to instruct a certain window to acquire focus. A focus loss event is used to indicate that a window is out of focus. The window that gets focus is the focus window. The window losing focus is the window that acquired focus last time, i.e. the last history focus window.
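The classification of currently opened windows into focus, non-focus, and background applications can be sketched as follows; this is an assumption-laden illustration, and the window dictionary layout is hypothetical.

```python
def classify_applications(windows: list) -> dict:
    """Split window information into focus / non-focus / background applications.

    Each element of `windows` is a hypothetical dict such as
    {"app": "word", "has_focus": True, "minimized": False, "background": False}.
    """
    result = {"focus": [], "non_focus": [], "background": []}
    for w in windows:
        if w.get("background"):
            result["background"].append(w["app"])   # running in the background
        elif w.get("has_focus"):
            result["focus"].append(w["app"])        # application the focus window belongs to
        elif not w.get("minimized"):
            result["non_focus"].append(w["app"])    # opened, not focused, not minimized
    return result
```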
Alternatively, the operation of the electronic device (such as the scene recognition module) to determine the user scene where the electronic device is located according to the application running information may be implemented in the following manner 1, manner 2, or manner 3.
In mode 1, an electronic device determines a main scene according to an application type in focus application information (i.e., an application type of a focus application), and determines a sub-scene according to the main scene and an application running state in focus application information (i.e., an application running state of a focus application), wherein the main scene and the sub-scene are user scenes.
Alternatively, the correspondence between the focus application type and the main scene may be preset in the electronic device. In this case, the electronic device may obtain the corresponding main scene from the correspondence between the focus application type and the main scene according to the application type in the focus application information.
For example, the correspondence between the focus application type and the main scene may be as shown in table 1 below. In this case, if the application type in the focus application information is a video type, the main scene may be determined to be a video scene according to table 1; if the application type in the focus application information is office type, determining that the main scene is office scene according to table 1; if the application type in the focus application information is a game type, determining that the main scene is a game scene according to table 1; if the application type in the focus application information is social, determining that the main scene is a social scene according to table 1; if the application type in the focus application information is a browser type, the main scene can be determined to be a browser scene according to table 1.
TABLE 1

Focus application type | Main scene
Video class            | Video scene
Office class           | Office scene
Game class             | Game scene
Social class           | Social scene
Browser class          | Browser scene
……                     | ……
In the embodiment of the present application, table 1 is merely used as an example to illustrate the correspondence between the focus application type and the main scene, and table 1 does not limit the embodiment of the present application.
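A minimal sketch of the Table 1 lookup, assuming the correspondence is stored as a simple mapping; the fallback value is an assumption not stated in the table.

```python
# Hypothetical encoding of Table 1: focus application type -> main scene.
FOCUS_TYPE_TO_MAIN_SCENE = {
    "video": "video scene",
    "office": "office scene",
    "game": "game scene",
    "social": "social scene",
    "browser": "browser scene",
}

def determine_main_scene(focus_app_type: str) -> str:
    # Look up the main scene corresponding to the application type of the focus application.
    return FOCUS_TYPE_TO_MAIN_SCENE.get(focus_app_type, "other scene")
```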
As one example, the electronic device may directly determine the sub-scene from the application running state in the main scene and focus application information.
Alternatively, the correspondence relationship among the main scene, the focus application running state, and the sub-scene may be preset in the electronic device. In this case, the electronic device may acquire the corresponding sub-scene from the corresponding relationship among the main scene, the focus application running state, and the sub-scene according to the application running state in the main scene and the focus application information.
For example, the correspondence between the main scene, the focus application running state, and the sub-scene may be as shown in table 2 below. In this case, if the main scene is an office scene and the application running state in the focus application information is to receive the mouse input, it may be determined that the sub-scene is a document browsing scene in the office scene according to table 2. If the main scene is an office scene and the application running state in the focus application information is that keyboard input is received, determining that the sub-scene is a document editing scene in the office scene according to table 2. If the main scene is an office scene and the application running state in the focus application information is that a camera is used, the sub-scene can be determined to be a video conference scene in the office scene according to table 2. If the main scene is a social scene and the application running state in the focus application information is that keyboard input is received, determining that the sub scene is a text chat scene in the social scene according to table 2. If the main scene is a social scene, the application running state in the focus application information is that a microphone is used and a camera is not used, and the sub-scene can be determined to be a voice chat scene in the social scene according to table 2. If the main scene is a social scene, the application running state in the focus application information is microphone and camera, and the sub scene can be determined to be a video chat scene in the social scene according to table 2.
TABLE 2

Main scene    | Focus application running state       | Sub-scene
Office scene  | Receiving mouse input                 | Document browsing scene
Office scene  | Receiving keyboard input              | Document editing scene
Office scene  | Using the camera                      | Video conference scene
Social scene  | Receiving keyboard input              | Text chat scene
Social scene  | Using the microphone, not the camera  | Voice chat scene
Social scene  | Using the microphone and the camera   | Video chat scene
In the embodiment of the present application, table 2 is merely used as an example to describe the correspondence between the main scene, the focus application running state, and the sub-scene, and table 2 does not limit the embodiment of the present application.
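The Table 2 lookup can likewise be sketched as a mapping keyed by the main scene and the focus application running state; the string values are illustrative.

```python
from typing import Optional

# Hypothetical encoding of Table 2: (main scene, focus application running state) -> sub-scene.
SUB_SCENE_TABLE = {
    ("office scene", "receiving mouse input"):        "document browsing scene",
    ("office scene", "receiving keyboard input"):     "document editing scene",
    ("office scene", "using camera"):                 "video conference scene",
    ("social scene", "receiving keyboard input"):     "text chat scene",
    ("social scene", "using microphone only"):        "voice chat scene",
    ("social scene", "using microphone and camera"):  "video chat scene",
}

def determine_sub_scene(main_scene: str, focus_running_state: str) -> Optional[str]:
    # Return None when the combination is not recorded in the table.
    return SUB_SCENE_TABLE.get((main_scene, focus_running_state))
```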
As another example, the electronic device may determine at least one sub-scene according to the application running state in the main scene, the focus application information, the non-focus application information, and the background application information, and then select one sub-scene with the highest priority from the at least one sub-scene.
The priority of each sub-scene may be preset in the electronic device, for example by a technician according to usage requirements. For example, the technician may set the priority of each sub-scene according to the degree to which the sub-scene influences the cruising ability of the electronic device: the greater the influence on the cruising ability, the higher the priority; the smaller the influence on the cruising ability, the lower the priority. Therefore, after at least one sub-scene is determined according to the main scene, the application running state in the focus application information, the non-focus application information, and the background application information, the sub-scene with the highest priority can be selected from them, so that the most reasonable scheduling policy can be determined subsequently and the cruising ability of the electronic device can be improved to the greatest extent.
Alternatively, the electronic device may preset a corresponding relationship between the main scene, the focus application running state, the non-focus application information, the background application information, and the sub-scene. In this case, the electronic device may obtain at least one corresponding sub-scene from the corresponding relationship between the main scene, the focus application running state, the non-focus application information, the background application information, and the sub-scene according to the application running state, the non-focus application information, and the background application information in the main scene, the focus application information.
For example, the correspondence relationship between the main scene, the focus application running state, the non-focus application information, the background application information, and the sub-scene may be as shown in table 3 below.
TABLE 3

(Table 3 records the correspondence between the main scene, the focus application running state, the non-focus application information, the background application information, and the sub-scene; its content is provided as an image in the original publication and is not reproduced here.)
In the embodiment of the present application, table 3 is merely used as an example to describe the correspondence between the main scene, the focus application running state, the non-focus application information, the background application information, and the sub-scene, and table 3 does not limit the embodiment of the present application.
In this case, if the main scene is a browser scene, the application running state in the focus application information is nothing, the application name in the non-focus application information is word, the application type is office, the application running state is nothing, the application name in the background application information is xx music, the application type is music, and the application running state is output audio, two sub-scenes can be determined according to table 3, one is an office data query scene in the browser scene, and the other is a background listening music scene in the browser scene. Assuming that the priority of the office material query scene in the browser scene is higher than the priority of the background music listening scene in the browser scene, the office material query scene in the browser scene can be selected.
If the main scene is an office scene, the application running state in the focus application information is to receive keyboard input, the application name in the non-focus application information is xx friend making, the application type is social type, the application running state is to use a microphone, the application name in the background application information is xx browser, the application type is browser type, and the application running state is none, two sub-scenes can be determined according to table 3, one is a voice chat scene in the office scene, and the other is a document editing scene in the office scene. Assuming that the priority of the voice chat scene in the office scene is higher than the priority of the document editing scene in the office scene, the voice chat scene in the office scene may be selected.
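The highest-priority selection among candidate sub-scenes can be sketched as below; the numeric priorities are assumptions chosen only to reproduce the two examples above.

```python
# Hypothetical priorities: a larger number means a larger impact on battery life.
SUB_SCENE_PRIORITY = {
    "office data query scene": 3,
    "background listening music scene": 2,
    "voice chat scene": 4,
    "document editing scene": 1,
}

def select_sub_scene(candidates: list) -> str:
    # Pick the candidate sub-scene with the highest pre-configured priority.
    return max(candidates, key=lambda scene: SUB_SCENE_PRIORITY.get(scene, 0))

print(select_sub_scene(["office data query scene", "background listening music scene"]))
# -> office data query scene
print(select_sub_scene(["voice chat scene", "document editing scene"]))
# -> voice chat scene
```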
And 2, the electronic equipment determines a user scene according to the application running information and the system working state.
The electronic device may detect the system operating state. The system working state refers to the current working state of the system and can be divided into an idle state and other states. The idle state refers to a state in which the system is not operated by the user for a long time.
If the system operating state is other than the idle state, the electronic device may determine the user scenario according to the above mode 1, that is, directly determine the user scenario according to the application running information.
In some embodiments, in the embodiments of fig. 2-5 above, a system operation status probe may be included in the system probe module, where the system operation status probe is configured to detect a current operation status of the system (i.e., a system operation status), that is, detect whether the system is currently in an idle state, and report the system operation status to the scene recognition module when the system operation status is in the idle state. After the scene recognition module receives the system working state, the user scene can be determined according to the system working state and the application running information.
In other words, when the system operating state is other than the idle state, the system operating state probe does not report the system operating state to the scene recognition module, and the scene recognition module determines the user scene according to the above mode 1, that is, determines the user scene directly according to the application running information. When the system working state is changed to the idle state, the system working state probe reports the system working state to the scene recognition module, and the scene recognition module can acquire that the system working state is changed to the idle state after receiving the system working state reported by the system working state probe, and at the moment, the scene recognition module can determine a user scene according to the system working state and the application running information.
Optionally, when the system working state probe acquires the system working state, the system working state may be determined according to the device cover closing state, the device screen-on state, the system lock state, the mouse and keyboard peripheral state, and the like. For example, if the device cover is not closed, the screen is on, the system is not locked, and there has been no mouse or keyboard input for a long time, it may be determined that the system working state is the idle state; otherwise, the system working state may be determined to be a state other than the idle state.
Optionally, when determining the user scenario according to the system working state and the application running information, if the system working state is changed to an idle state and the application running information indicates that the application is in a state of being used by the user, the electronic device may determine the user scenario according to the above mode 1, that is, directly determine the user scenario according to the application running information. For example, if the system operating state is an idle state and the application running state in at least one of the application information, such as the focus application information, the non-focus application information, and the background application information, indicates that the microphone is being used, the camera is being used, the video is being output, the audio is being output, or other peripherals are being used, and it is indicated that the user is using some applications, the electronic device may determine the user scene according to the above manner 1, that is, directly determine the user scene according to the application running information.
Or if the system working state is changed to the idle state and the application running information indicates that the application is in a state unused by the user, the electronic device can determine that the main scene is the idle scene, and the main scene is directly used as the user scene without determining the sub-scene. For example, if the system operating state is an idle state and the application operating states in the application information such as the focus application information, the non-focus application information, the background application information, etc. all indicate that the microphone is not used, the camera is not used, the video is not output, the audio is not output, and other peripherals are not used, which indicates that the user does not use the application, the electronic device may determine that the main scene is an idle scene, that is, determine that the user scene is an idle scene. The user scenario is an idle scenario, representing that the user is not currently using the electronic device (i.e. is not operating and is not using an application), from which a scheduling policy applicable in this case can be determined later.
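A sketch of the idle-scene decision under mode 2, assuming the application running state is represented as a dictionary of boolean flags (hypothetical layout):

```python
def application_in_use(app_infos: list) -> bool:
    """True if any focus / non-focus / background application indicates active use."""
    flags = ("using_microphone", "using_camera", "outputting_video",
             "outputting_audio", "using_other_peripheral")
    return any(info.get(flag, False) for info in app_infos for flag in flags)

def determine_user_scene_mode2(system_idle: bool, app_infos: list, determine_by_apps):
    if not system_idle:
        # The system is not idle: fall back to mode 1 (decide from application running info).
        return determine_by_apps(app_infos)
    if application_in_use(app_infos):
        # Idle system but some application is still in use: still decide from running info.
        return determine_by_apps(app_infos)
    # Idle system and no application in use: the user scene is the idle scene.
    return "idle scene"
```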
And 3, the electronic equipment determines a user scene where the electronic equipment is located according to the application running information and the IO load information.
The electronic device may obtain the IO load information. In some embodiments, in the embodiments of fig. 2-5 above, a system load probe may be included in the system probe module, and the system load probe may detect the IO load information and report the IO load information to the scene recognition module, so that the scene recognition module determines the user scene according to the IO load information.
The IO load information is used for reflecting the IO load condition. For example, the IO load information may include an IO time ratio, which refers to a time ratio for IO operations within a period, i.e., indicating how much percent of the time in one second is used for IO operations. The IO time ratio can represent the height of IO load. That is, the higher the IO time ratio, the higher the IO load is; the lower the IO time ratio, the lower the IO load. In some embodiments, the IO time ratio may be obtained through an IO throughput statistics service, and of course, the IO time ratio may also be obtained through other manners, which is not limited by the embodiments of the present application.
According to the embodiment of the application, the user behavior can be inferred from the IO load information, so that the user scene can be determined. For example, if the IO time ratio persistently stays at 30% or more, the user may be considered to be copying files; if it persistently stays between 10% and 30%, the user may be considered to be decompressing files.
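A sketch of the behavior inference from the IO time ratio, using the illustrative thresholds mentioned above and assuming the ratio has already been observed to persist:

```python
def infer_user_behavior(io_time_ratio_percent: float) -> str:
    # Thresholds follow the example above (30% and 10%-30%); they are illustrative only.
    if io_time_ratio_percent >= 30:
        return "copying files"
    if 10 <= io_time_ratio_percent < 30:
        return "decompressing files"
    return "no sustained IO-heavy behavior"
```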
Optionally, the electronic device may determine a main scene according to an application type in the focus application information in the application running information, and then determine a sub-scene according to the main scene and the IO load information, where the main scene and the sub-scene are user scenes.
For example, the correspondence relationship among the main scene, the IO load information, and the sub scene may be preset in the electronic device. In this case, the electronic device may obtain the corresponding sub-scene from the corresponding relationship according to the main scene and the IO load information.
It is noted that the electronic device may determine the sub-scene according to the main scene and the IO load information, and may also determine the sub-scene by combining other information, for example, by combining an application running state, non-focus application information, background application information and the like in focus application information in the application running information, so that the determined sub-scene is more accurate.
The process of determining the user scene in step 601 described above is exemplarily described below with reference to fig. 7.
Fig. 7 is a schematic diagram of determining a user scenario according to an embodiment of the present application.
Referring to fig. 7, when a focus window is changed, current focus application information, non-focus application information, and background application information may be acquired, and then a main scene may be determined according to an application type in the focus application information. Then, scene jitter filtering can be performed, specifically: if the latest determined main scene is the same as the last determined main scene and the application running state in the latest acquired focus application information is the same as the application running state in the last acquired focus application information, the user scene is not determined again, the operation is ended, and otherwise, the sub-scene is continuously determined.
When the system working state changes, the current system working state can be obtained. If the system working state is not the idle state, the user scene is not re-determined, and the operation is ended. If the system working state is the idle state, it is determined whether the main scene is an idle scene according to the application running states in the current focus application information, non-focus application information, and background application information. If the application running states in the focus application information, the non-focus application information, and the background application information all indicate that the microphone is not used, the camera is not used, no video is being output, no audio is being output, and no other peripherals are in use, the main scene can be determined to be an idle scene, that is, the user scene is determined to be an idle scene. If any of these application running states indicates that the microphone is being used, the camera is being used, video is being output, audio is being output, or other peripherals are in use, then: when the focus window has not changed, the user scene is not re-determined and the operation is ended; when the focus window has changed and the main scene has been determined according to the application type in the focus application information, the sub-scene continues to be determined.
When determining the sub-scene, at least one sub-scene can be determined according to the application running state in the main scene, the focus application information, the non-focus application information and the background application information, for example, the application running state in the focus application information, the non-focus application information and the background application information can be analyzed by combining with the main scene to obtain at least one sub-scene. And then comparing the priorities of the at least one sub-scene, selecting one sub-scene with the highest priority from the at least one sub-scene, and determining the main scene and the selected sub-scene as the user scene.
Step 602: the electronic device determines a power state of the electronic device.
The power states may include power modes, power plans, etc., the power modes may include AC states and DC states, the power plans may include energy efficiency plans, balance plans, performance plans, etc.
In some embodiments, in the embodiments of fig. 2-5 above, a power status probe may be included in the system probe module for detecting a power status and reporting the power status to the scene recognition module.
Alternatively, the power state probe may determine the current power state based on the power mode change event and the power plan change event when determining the power state. In the operation process of the electronic equipment, if the power mode is changed, a power mode change event is triggered, and the power mode change event is used for indicating the latest power mode after the change. During the operation of the electronic device, if the power supply schedule is changed, a power supply schedule change event is triggered, and the power supply schedule change event is used for indicating the latest power supply schedule after the change.
It should be noted that in some cases, the power mode change event and the power plan change event may be directly triggered when the electronic device is started, and the power state probe may directly detect the power mode change event and the power plan change event to determine the current power state. However, in other cases, the power mode change event and the power plan change event are not triggered when the electronic device is powered on, and the power state probe cannot directly detect the power mode change event and the power plan change event. In this case, in order to ensure a normal determination of the power state at the start-up, the electronic device may actively determine the power state at the start-up.
The process of determining the power state in step 602 described above is exemplarily described below with reference to fig. 8.
Fig. 8 is a schematic diagram of determining a power state according to an embodiment of the present application.
Referring to fig. 8, when the electronic device is started up, the electronic device actively acquires power mode information; if the power mode information is not acquired, namely acquisition fails, the power state is not determined, and the operation is ended; if the power mode information is acquired, namely the acquisition is successful, the power planning information is continuously acquired. If the power supply plan information is not acquired, namely acquisition fails, setting a power supply plan as a balance plan, and then executing power supply plan filtering operation; and if the power supply plan information is acquired, namely the acquisition is successful, executing the power supply plan filtering operation.
During normal operation of the electronic device, the power state probe may determine a power mode through the power mode probe and a power plan through the power plan probe. The power mode probe may detect a power mode change event. The power supply planning probe may detect a power supply planning change event. Thus, when the power mode is changed and/or the power plan is changed, the power state probe can acquire the changed power mode and/or the changed power plan, and then execute the power plan filtering operation.
When the power supply plan filtering operation is carried out, if the latest determined power supply plan is unchanged from the power supply plan determined last time, ending the operation and not determining the power supply state again; if the latest determined power supply plan is changed compared with the power supply plan determined last time, power supply state jitter filtering is performed, specifically: ending the operation without redefining the power supply state when the latest determined power supply plan is the same as the power supply plan in the current power supply state and the latest determined power supply mode is the same as the power supply mode in the current power supply state; and re-determining the power supply state in the case that the latest determined power supply plan is different from the power supply plan in the current power supply state and/or the latest determined power supply mode is different from the power supply mode in the current power supply state, namely determining the latest determined power supply plan and the latest determined power supply mode as the power supply state.
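The power plan fallback and the power state jitter filtering can be sketched as follows; the tuple representation of the power state is an assumption.

```python
from typing import Optional, Tuple

PowerState = Tuple[str, str]   # (power mode, power plan), e.g. ("DC", "balance plan")

def filter_power_state(current: Optional[PowerState],
                       new_mode: str,
                       new_plan: Optional[str]) -> Optional[PowerState]:
    """Return the re-determined power state, or None if the change is filtered out."""
    if new_plan is None:
        new_plan = "balance plan"          # acquisition of the power plan failed: fall back
    candidate = (new_mode, new_plan)
    if current is not None and candidate == current:
        return None                         # same mode and same plan: jitter, keep current state
    return candidate                        # mode and/or plan changed: new power state
```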
Step 603: the electronic device determines a system load of the electronic device.
In some embodiments, in the embodiments of fig. 2-5 above, a system load probe may be included in the system probe module for detecting a system load and reporting the system load to the scene recognition module.
For example, the system load of the electronic device may be represented by a load level, which is used for reflecting the load condition of the whole system. Illustratively, the load levels may be light, medium, and heavy. A light load level indicates that the system load is low; a medium load level indicates that the system load is moderate; a heavy load level indicates that the system load is high.
Optionally, the electronic device (such as a system load module) may determine a current load level of the electronic device according to a current device performance index of the electronic device. For example, the load level may be determined from the CPU load information and the IO load information.
The CPU load information is used for reflecting the CPU load condition. For example, the CPU load information may include the system frame loss rate, which can reflect the level of the CPU load. That is, the higher the system frame loss rate, the higher the CPU load; the lower the system frame loss rate, the lower the CPU load. In some embodiments, the system frame loss rate may be obtained through a related interface used to obtain frame information in the SurfaceFlinger service; of course, the system frame loss rate may also be obtained in other manners, which is not limited by the embodiments of the present application.
The IO load information is used for reflecting the IO load condition. For example, the IO load information may include an IO time ratio, which refers to a time ratio for IO operations within a period, i.e., indicating how much percent of the time in one second is used for IO operations. The IO time ratio can represent the height of IO load. That is, the higher the IO time ratio, the higher the IO load is; the lower the IO time ratio, the lower the IO load. In some embodiments, the IO time ratio may be obtained through an IO throughput statistics service, and of course, the IO time ratio may also be obtained through other manners, which is not limited by the embodiments of the present application.
Alternatively, the operation of determining the load level according to the CPU load information and the IO load information may be: if the system frame loss rate is greater than or equal to a first frame loss rate threshold or the IO time rate is greater than or equal to a first time rate threshold, determining that the load level is heavy; if the system frame loss rate is greater than the second frame loss rate threshold and less than the first frame loss rate threshold, and the IO time ratio is greater than the second time ratio threshold and less than the first time ratio threshold, determining that the load level is medium; and if the system frame loss rate is smaller than or equal to the second frame loss rate threshold value or the IO time ratio is smaller than or equal to the second time ratio threshold value, determining that the load level is light.
The first frame loss rate threshold and the second frame loss rate threshold may be preset; they are thresholds used for judging whether the system frame loss rate is high or low, and the first frame loss rate threshold is greater than the second frame loss rate threshold. For example, the first frame loss rate threshold may be 1%, 2%, etc., and the second frame loss rate threshold may be 0%, 0.1%, etc. When the system frame loss rate is greater than or equal to the first frame loss rate threshold, it indicates that the system frame loss rate is high, that is, the CPU load is high. When the system frame loss rate is less than or equal to the second frame loss rate threshold, it indicates that the system frame loss rate is low, that is, the CPU load is low.
The first time ratio threshold and the second time ratio threshold may be set in advance, and the first time ratio threshold and the second time ratio threshold are thresholds for judging the IO time ratio, and the first time ratio threshold is greater than the second time ratio threshold. For example, the first time ratio threshold may be 60%, 85%, etc., and the second time ratio threshold may be 40%, 35%, etc. When the IO time ratio is greater than or equal to the first time ratio threshold, it is indicated that the IO time ratio is higher, i.e., the IO load is higher. When the IO time ratio is less than or equal to the second time ratio threshold, it is indicated that the IO time ratio is lower, i.e., the IO load is lower.
In this case, if the system frame loss rate is greater than or equal to the first frame loss rate threshold, or the IO time rate is greater than or equal to the first time rate threshold, it indicates that the CPU load is high or the IO load is high, that is, it indicates that the system load is high, so that it can be determined that the load level is heavy.
If the system frame loss rate is greater than the second frame loss rate threshold and less than the first frame loss rate threshold, and the IO time ratio is greater than the second time ratio threshold and less than the first time ratio threshold, the CPU load is moderate and the IO load is moderate, that is, the system load is moderate, so that the load level can be determined to be the middle.
If the system frame loss rate is less than or equal to the second frame loss rate threshold, or the IO time ratio is less than or equal to the second time ratio threshold, it indicates that the CPU load is low or the IO load is low, that is, the system load is low, so that it can be determined that the load level is light.
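The load level decision can be sketched as below, using example threshold values in the ranges given above (2% / 0% for the frame loss rate and 60% / 40% for the IO time ratio); the exact values are illustrative.

```python
def determine_load_level(frame_loss_rate: float, io_time_ratio: float,
                         loss_hi: float = 2.0, loss_lo: float = 0.0,
                         io_hi: float = 60.0, io_lo: float = 40.0) -> str:
    """All inputs and thresholds are percentages."""
    if frame_loss_rate >= loss_hi or io_time_ratio >= io_hi:
        return "heavy"    # CPU load is high or IO load is high
    if frame_loss_rate <= loss_lo or io_time_ratio <= io_lo:
        return "light"    # CPU load is low or IO load is low
    return "medium"       # both indicators fall between the two thresholds
```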
Step 604: the electronic device determines a scheduling policy based on the user scenario, the power state, and the system load.
The scheduling strategy is used for scheduling resources of the electronic equipment, and is particularly used for scheduling bottom hardware resources. The scheduling policy contains tuning parameters that are reasonable for the current state of the electronic device. The scheduling policy may be issued to a scheduling engine for the scheduling engine to implement scheduling of the underlying hardware resources based on the scheduling policy.
In the embodiment of the application, the user scene where the electronic equipment is currently located, the power state and the system load of the electronic equipment can be continuously determined in the running process of the electronic equipment, namely, the user scene, the power state and the system load can be dynamically identified, and the scheduling strategy is dynamically determined according to the changes of the user scene, the power state and the system load, so that the dynamic optimization of resources is performed. Therefore, reasonable allocation of resources of the electronic equipment can be accurately realized, so that energy consumption of the electronic equipment is reduced, cruising ability of the electronic equipment is improved, and stable operation of the electronic equipment is further ensured under the condition that user requirements are met and system requirements of the electronic equipment are considered.
Specifically, the operation of step 604 may be: the electronic equipment acquires a first key value corresponding to the user scene, acquires a second key value corresponding to the power state, acquires a third key value corresponding to the system load, determines a target key value according to the first key value, the second key value and the third key value, and acquires a strategy corresponding to the target key value as a scheduling strategy.
Optionally, a correspondence between the user scene and the key value (which may also be referred to as a key) may be preset in the electronic device, where the correspondence may be preset by a technician according to usage requirements, and the correspondence includes the key values corresponding to various user scenes. The electronic device may obtain, from the correspondence, the key value corresponding to the user scenario as the first key value.
Optionally, a correspondence between the power state and the key value may be preset in the electronic device, where the correspondence may be preset by a technician according to a user requirement, and the correspondence includes key values corresponding to various power states. The electronic device may obtain, from the correspondence, a key value corresponding to the power state as a second key value.
As an example, in the case where the power state includes a power mode and a power plan, the correspondence between the power state and the key value may include two correspondences, one of which is a correspondence between the power mode and the key value, and the other of which is a correspondence between the power plan and the key value. In this case, the electronic device may obtain a key value corresponding to the power mode in the power state from a correspondence between the power mode and the key value, then obtain a key value corresponding to the power plan in the power state from a correspondence between the power plan and the key value, and then splice the key value corresponding to the power mode and the key value corresponding to the power plan to obtain the second key value.
Optionally, a correspondence between the system load (such as a load level) and the key value may be preset in the electronic device, where the correspondence may be preset by a technician according to a user requirement, and the key value corresponding to various system loads is included in the correspondence. The electronic device may obtain a key value corresponding to the system load from the corresponding relationship as a third key value.
Optionally, when the electronic device determines the target key value according to the first key value, the second key value and the third key value, the first key value, the second key value and the third key value may be spliced to obtain the target key value. For example, the first key value, the second key value and the third key value may be spliced according to a preset splicing manner, for example, the second key value may be spliced at the end of the first key value, and then the third key value may be spliced at the end of the second key value, so as to obtain the target key value.
Optionally, a correspondence between the key values and the scheduling policies may be preset in the electronic device, where the correspondence may be preset by a technician according to a user requirement, and the scheduling policies corresponding to the key values are included in the correspondence. The electronic device can obtain the scheduling policy corresponding to the target key value from the corresponding relation. Then, the electronic device can use the obtained scheduling policy to schedule the resources of the electronic device.
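Bringing step 604 together, the following sketch shows the key value splicing and the policy lookup; every key string and the policy content are hypothetical placeholders, not values defined by this application.

```python
# Hypothetical pre-configured correspondences (key strings are placeholders).
SCENE_KEYS      = {"office scene/document editing scene": "S01"}
POWER_MODE_KEYS = {"DC": "P1"}
POWER_PLAN_KEYS = {"balance plan": "B2"}
LOAD_KEYS       = {"light": "L0", "medium": "L1", "heavy": "L2"}
POLICY_TABLE    = {"S01P1B2L0": {"EPP": 200, "PL1": 12, "PL2": 25}}  # illustrative policy

def determine_scheduling_policy(user_scene: str, power_mode: str,
                                power_plan: str, load_level: str) -> dict:
    first_key = SCENE_KEYS[user_scene]
    # The second key value is itself the splice of the power mode key and the power plan key.
    second_key = POWER_MODE_KEYS[power_mode] + POWER_PLAN_KEYS[power_plan]
    third_key = LOAD_KEYS[load_level]
    target_key = first_key + second_key + third_key   # splice in the preset order
    return POLICY_TABLE[target_key]

print(determine_scheduling_policy("office scene/document editing scene",
                                  "DC", "balance plan", "light"))
# -> {'EPP': 200, 'PL1': 12, 'PL2': 25}
```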
It should be noted that when any one or more of the user scenario, the power state, and the system load changes, the electronic device needs to determine the scheduling policy again according to the three. Specifically, step 604 needs to be re-executed to determine the scheduling policy according to the latest user scenario, power status and system load, that is, the key values corresponding to the three needs to be re-determined, the key values corresponding to the three need to be spliced to obtain the target key value, and the scheduling policy is re-determined according to the target key value. Therefore, the determined scheduling strategy can be ensured to be suitable for the latest state of the electronic equipment, and the reasonable allocation of the resources of the electronic equipment can be accurately realized according to the scheduling strategy.
The process of determining the scheduling policy in step 604 described above is exemplarily described below with reference to fig. 9.
Fig. 9 is a schematic diagram of determining a scheduling policy according to an embodiment of the present application.
Referring to fig. 9, when any one or more of the user scenario, the power state, and the system load changes, the electronic device may re-determine the scheduling policy according to the three. That is, it re-determines the key values corresponding to the three, splices these key values to obtain the target key value, and re-determines the scheduling policy according to the target key value. In particular, since a change of the power state or the system load may affect the running state of background programs, if the power state or the system load changes while the user scene remains unchanged, the sub-scene may be re-determined; if the sub-scene changes, the user scene is re-determined, the key values corresponding to the user scene, the power state, and the system load are re-determined and spliced to obtain the target key value, and the scheduling policy is re-determined according to the target key value. When the power state or the system load changes while the user scene remains unchanged, the code used for determining the sub-scene in the user scene determination process of step 601 may be reused to determine the sub-scene, so that code multiplexing can be achieved and the processing pressure can be reduced.
In the embodiment of the application, the user scene where the electronic equipment is currently located is determined, and the power state and the system load of the electronic equipment are determined. And then, acquiring a first key value corresponding to the user scene, acquiring a second key value corresponding to the power state, acquiring a third key value corresponding to the system load, and determining a target key value according to the first key value, the second key value and the third key value. And acquiring the scheduling strategy corresponding to the target key value in the corresponding relation between the key value and the scheduling strategy. The reference factors when determining the scheduling policy in the embodiment of the application are relatively comprehensive, so that the resource scheduling of the electronic equipment based on the scheduling policy can accurately realize reasonable allocation of the resources of the electronic equipment, thereby not only meeting the user requirements, but also considering the system requirements of the electronic equipment, and further ensuring the stable operation of the electronic equipment under the conditions of reducing the energy consumption of the electronic equipment and improving the cruising ability of the electronic equipment. In addition, in the embodiment of the application, various reference factors are converted into corresponding key values, and then the corresponding scheduling strategies are directly obtained according to the key values, so that the operation process is simple, the system processing pressure can be reduced, and the stable operation of the electronic equipment is further ensured.
Fig. 10 is a schematic structural diagram of a scheduling policy determining apparatus provided in the embodiment of the present application, where the apparatus may be implemented by software, hardware, or a combination of both, and may be part or all of a computer device, and the computer device may be the electronic device 100 described in the embodiment of fig. 1. Referring to fig. 10, the apparatus includes: a first determination module 1001, a first acquisition module 1002, a second determination module 1003, and a second acquisition module 1004.
A first determining module 1001, configured to determine a user scenario in which the electronic device is currently located, and determine a power state and a system load of the electronic device;
a first obtaining module 1002, configured to obtain a first key value corresponding to a user scenario, obtain a second key value corresponding to a power state, and obtain a third key value corresponding to a system load;
a second determining module 1003, configured to determine a target key value according to the first key value, the second key value, and the third key value;
the second obtaining module 1004 is configured to obtain, in a correspondence between the key value and a scheduling policy, the scheduling policy corresponding to the target key value, where the scheduling policy is used to schedule resources of the electronic device.
Optionally, the first determining module 1001 is configured to:
Acquiring application running information of the electronic equipment, wherein the application running information comprises focus application information, and the focus application information comprises application types and application running states of focus applications;
and determining the user scene according to the application running information.
Optionally, the first determining module 1001 is configured to:
determining a main scene according to the application type in the focus application information;
determining at least one sub-scene according to the main scene, the application running state in the focus application information, the non-focus application information and the background application information;
and selecting one sub-scene with the highest priority from at least one sub-scene, wherein the main scene and the one sub-scene are user scenes.
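As an illustrative sketch of this selection step only, the Java code below picks the sub-scene with the highest priority from a list of candidates; the sub-scene names and priority values are assumptions and are not defined by this application.

import java.util.Comparator;
import java.util.List;

// Illustrative sketch of selecting the highest-priority sub-scene; names and priorities are assumptions.
public class SubSceneSelector {

    // A candidate sub-scene together with its priority (a larger value means a higher priority here).
    record SubScene(String name, int priority) {}

    // Select the single sub-scene with the highest priority from the candidates.
    static SubScene selectHighestPriority(List<SubScene> candidates) {
        return candidates.stream()
                .max(Comparator.comparingInt(SubScene::priority))
                .orElseThrow(() -> new IllegalArgumentException("at least one sub-scene is required"));
    }

    public static void main(String[] args) {
        // Hypothetical candidates derived from the main scene, the focus application running state,
        // the non-focus application information and the background application information.
        List<SubScene> candidates = List.of(
                new SubScene("fullscreen_playback", 3),
                new SubScene("background_download", 1),
                new SubScene("small_window_playback", 2));
        System.out.println(selectHighestPriority(candidates).name()); // fullscreen_playback
    }
}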
Optionally, the apparatus further comprises:
the detection module is used for detecting the working state of the system;
the first determining module 1001 is configured to:
if the system working state is changed to the idle state, determining that the user scene is the idle scene when the application running information indicates that the application is in a state of not being used by the user.
Optionally, the first determining module 1001 is configured to:
acquiring input/output (IO) load information;
and determining a user scene according to the application running information and the IO load information.
Optionally, the power state includes a power mode and a power plan, and the first determining module 1001 is configured to:
if a power mode change event and a power plan change event are detected, determining the power state according to the power mode change event and the power plan change event.
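For illustration only, the following Java sketch combines a power mode change event and a power plan change event into a single power state label; the enum values, field names, and method names are assumptions introduced for this example and are not defined by this application.

// Illustrative sketch only: combining a power mode change event and a power plan change event
// into a single power state label. Enum values and names are assumptions.
public class PowerStateTracker {

    enum PowerMode { PERFORMANCE, BALANCED, POWER_SAVER }
    enum PowerPlan { PLUGGED_IN, ON_BATTERY }

    private PowerMode mode = PowerMode.BALANCED;
    private PowerPlan plan = PowerPlan.ON_BATTERY;

    // Called when a power mode change event is detected.
    void onPowerModeChanged(PowerMode newMode) { mode = newMode; }

    // Called when a power plan change event is detected.
    void onPowerPlanChanged(PowerPlan newPlan) { plan = newPlan; }

    // The power state is determined from both the current power mode and the current power plan.
    String powerState() {
        return mode.name().toLowerCase() + "_" + plan.name().toLowerCase();
    }

    public static void main(String[] args) {
        PowerStateTracker t = new PowerStateTracker();
        t.onPowerModeChanged(PowerMode.POWER_SAVER);
        t.onPowerPlanChanged(PowerPlan.PLUGGED_IN);
        System.out.println(t.powerState()); // power_saver_plugged_in
    }
}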
Optionally, the second determining module 1003 is configured to:
and splicing the first key value, the second key value and the third key value to obtain a target key value.
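As a minimal illustration of this splicing, the Java sketch below concatenates the three key values in a fixed order and uses the result to look up the scheduling policy; the separator, the example key values, and the policy names are assumptions and do not come from this application.

import java.util.Map;

// Illustrative sketch of splicing the three key values and looking up the scheduling policy.
// The separator, key values and policy names are assumptions made for this example only.
public class TargetKeyExample {

    public static void main(String[] args) {
        String firstKey  = "scene:video_fullscreen"; // key value for the user scene
        String secondKey = "power:saver";            // key value for the power state
        String thirdKey  = "load:high";              // key value for the system load

        // Splice in a fixed order so that each combination yields a unique target key value.
        String targetKey = String.join("|", firstKey, secondKey, thirdKey);

        // Correspondence between key values and scheduling policies.
        Map<String, String> policies = Map.of(
                "scene:video_fullscreen|power:saver|load:high", "limit_background_cpu",
                "scene:video_fullscreen|power:performance|load:low", "boost_foreground_cpu");

        System.out.println(policies.get(targetKey)); // limit_background_cpu
    }
}

Splicing in a fixed order ensures that each combination of user scene, power state, and system load maps to a unique target key value, so the policy lookup reduces to a single table access.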
Optionally, the apparatus further comprises:
the triggering module is configured to: if any one or more of the user scene, the power state, and the system load changes, trigger the first obtaining module 1002 to obtain the first key value corresponding to the user scene, obtain the second key value corresponding to the power state, and obtain the third key value corresponding to the system load.
In the embodiment of the application, the user scene in which the electronic device is currently located is determined, and the power state and the system load of the electronic device are determined. Then, a first key value corresponding to the user scene, a second key value corresponding to the power state, and a third key value corresponding to the system load are acquired, and a target key value is determined according to the first key value, the second key value, and the third key value. The scheduling policy corresponding to the target key value is then acquired from the correspondence between key values and scheduling policies. Because the reference factors used when determining the scheduling policy are relatively comprehensive, resource scheduling of the electronic device based on this scheduling policy can accurately and reasonably allocate the resources of the electronic device, which both meets user requirements and takes the system requirements of the electronic device into account, thereby ensuring stable operation of the electronic device while reducing its energy consumption and improving its battery endurance. In addition, in the embodiment of the application, the various reference factors are converted into corresponding key values, and the corresponding scheduling policy is then obtained directly according to these key values, so the operation process is simple, the system processing pressure can be reduced, and stable operation of the electronic device is further ensured.
It should be noted that, when the scheduling policy determining apparatus provided in the foregoing embodiment determines a scheduling policy, the division into the functional modules described above is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
The functional units and modules in the foregoing embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for ease of distinguishing them from each other and are not intended to limit the protection scope of the embodiments of the present application.
The scheduling policy determining apparatus provided in the foregoing embodiments and the scheduling policy determining method embodiments belong to the same concept. For the specific working processes and technical effects of the units and modules in the foregoing embodiments, reference may be made to the method embodiment, and details are not repeated here.
All or part of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, the embodiments may be implemented entirely or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (Digital Subscriber Line, DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (Digital Versatile Disc, DVD)), or a semiconductor medium (for example, a solid state disk (Solid State Disk, SSD)).
The foregoing embodiments are not intended to limit the present application. Any modification, equivalent replacement, improvement, or the like made within the technical scope disclosed in the present application shall fall within the protection scope of the present application.

Claims (11)

1. A scheduling policy determining method, the method comprising:
determining a current user scene of the electronic equipment and determining a power state and a system load of the electronic equipment;
acquiring a first key value corresponding to the user scene, acquiring a second key value corresponding to the power state, and acquiring a third key value corresponding to the system load;
determining a target key value according to the first key value, the second key value and the third key value;
and acquiring a scheduling strategy corresponding to the target key value in the corresponding relation between the key value and the scheduling strategy, wherein the scheduling strategy is used for scheduling the resources of the electronic equipment.
2. The method of claim 1, wherein the determining the user context in which the electronic device is currently located comprises:
acquiring application running information of the electronic equipment, wherein the application running information comprises focus application information, and the focus application information comprises application types and application running states of focus applications;
and determining the user scene according to the application running information.
3. The method of claim 2, wherein the application running information further comprises non-focus application information and background application information, and the determining the user scene according to the application running information comprises:
determining a main scene according to the application type in the focus application information;
determining at least one sub-scene according to the main scene, the application running state in the focus application information, the non-focus application information and the background application information;
selecting one sub-scene with highest priority from the at least one sub-scene, wherein the main scene and the one sub-scene are the user scenes.
4. The method of claim 2, wherein the method further comprises:
detecting the working state of the system;
the determining the user scene according to the application running information comprises the following steps:
if the system working state is changed to the idle state, determining that the user scene is the idle scene when the application running information indicates that the application is in a state unused by the user.
5. The method of claim 2, wherein the determining the user context from the application running information comprises:
acquiring input/output (IO) load information;
and determining the user scene according to the application running information and the IO load information.
6. The method of any of claims 1-5, wherein the power state comprises a power mode and a power plan, and wherein the determining the power state of the electronic device comprises:
and if the power mode change event and the power supply plan change event are detected, determining the power supply state according to the power supply mode change event and the power supply plan change event.
7. The method of any of claims 1-6, wherein determining a target key value based on the first key value, the second key value, and the third key value comprises:
and splicing the first key value, the second key value and the third key value to obtain the target key value.
8. The method of any one of claims 1-7, wherein the method further comprises:
and if any one or more of the user scene, the power state, and the system load changes, re-performing the step of acquiring the first key value corresponding to the user scene, acquiring the second key value corresponding to the power state, and acquiring the third key value corresponding to the system load, and the subsequent steps.
9. A scheduling policy determining apparatus, the apparatus comprising:
the first determining module is used for determining a user scene where the electronic equipment is currently located and determining a power state and a system load of the electronic equipment;
the first acquisition module is used for acquiring a first key value corresponding to the user scene, acquiring a second key value corresponding to the power state and acquiring a third key value corresponding to the system load;
the second determining module is used for determining a target key value according to the first key value, the second key value and the third key value;
the second obtaining module is used for obtaining a scheduling strategy corresponding to the target key value in the corresponding relation between the key value and the scheduling strategy, and the scheduling strategy is used for carrying out resource scheduling on the electronic equipment.
10. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, which computer program, when executed by the processor, implements the method according to any of claims 1-8.
11. A computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of claims 1-8.
CN202210735811.0A 2022-05-16 2022-06-27 Scheduling policy determination method, device, equipment and storage medium Active CN116028207B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210529304 2022-05-16
CN2022105293041 2022-05-16

Publications (2)

Publication Number Publication Date
CN116028207A true CN116028207A (en) 2023-04-28
CN116028207B CN116028207B (en) 2024-04-12

Family

ID=86071124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210735811.0A Active CN116028207B (en) 2022-05-16 2022-06-27 Scheduling policy determination method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116028207B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117891617A (en) * 2024-03-15 2024-04-16 荣耀终端有限公司 Resource scheduling method, device, readable storage medium and chip system
CN118042263A (en) * 2024-01-10 2024-05-14 荣耀终端有限公司 Image acquisition method, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2845341A1 (en) * 2014-03-11 2015-09-11 Pierre Popovic A computer system, methods, apparatus for processing applications, dispensing workloads, monitor energy and sequence power to nonhierarchical multi-tier blade servers in data centers
CN105045367A (en) * 2015-01-16 2015-11-11 中国矿业大学 Android system equipment power consumption optimization method based on game load prediction
CN106412343A (en) * 2015-07-28 2017-02-15 中兴通讯股份有限公司 Power consumption control method and device and mobile terminal
CN109960395A (en) * 2018-10-15 2019-07-02 华为技术有限公司 Resource regulating method and computer equipment
CN113778663A (en) * 2021-07-28 2021-12-10 荣耀终端有限公司 Scheduling method of multi-core processor and electronic equipment
CN113906648A (en) * 2019-07-12 2022-01-07 华为技术有限公司 Power supply protection method and system with power supply protection function
CN114443256A (en) * 2022-04-07 2022-05-06 荣耀终端有限公司 Resource scheduling method and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101717365B1 (en) * 2015-08-07 2017-03-16 한국과학기술원 Portable electronic device for aging mitigating of power supply-connected batteries


Also Published As

Publication number Publication date
CN116028207B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN114443256B (en) Resource scheduling method and electronic equipment
CN116028207B (en) Scheduling policy determination method, device, equipment and storage medium
CN116028205B (en) Resource scheduling method and electronic equipment
JP5593404B2 (en) System and method for executing threads in a processor
CN116028210B (en) Resource scheduling method, electronic equipment and storage medium
CN117112191B (en) Information processing method and electronic device
CN117130454B (en) Power consumption adjustment method and electronic equipment
CN116027880B (en) Resource scheduling method and electronic equipment
CN116069209A (en) Focus window processing method, device, equipment and storage medium
CN116028208B (en) System load determining method, device, equipment and storage medium
CN116027879B (en) Method for determining parameters, electronic device and computer readable storage medium
CN116028211A (en) Display card scheduling method, electronic equipment and computer readable storage medium
CN116028005B (en) Audio session acquisition method, device, equipment and storage medium
CN116025580A (en) Method for adjusting rotation speed of fan and electronic equipment
CN117632460A (en) Load adjusting method and terminal equipment
CN116055443B (en) Method for identifying social scene, electronic equipment and computer readable storage medium
CN116028209B (en) Resource scheduling method, electronic equipment and storage medium
CN116089055B (en) Resource scheduling method and device
CN118819267A (en) Power consumption adjustment method and electronic equipment
CN116027878B (en) Power consumption adjustment method and electronic equipment
CN116028206A (en) Resource scheduling method, electronic equipment and storage medium
CN117130772A (en) Resource scheduling method, electronic equipment and storage medium
CN117270670B (en) Power consumption control method and electronic equipment
WO2023221720A1 (en) Resource scheduling method and apparatus
CN117950935A (en) Performance regulating and controlling method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant