WO2023227075A1 - Resource management and control method, electronic device, and medium - Google Patents

Resource management and control method, electronic device, and medium

Info

Publication number
WO2023227075A1
WO2023227075A1, PCT/CN2023/096363, CN2023096363W
Authority
WO
WIPO (PCT)
Prior art keywords
resource
foreground
information
scene
management
Prior art date
Application number
PCT/CN2023/096363
Other languages
English (en)
French (fr)
Inventor
罗翔
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2023227075A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of communication technology, and in particular to a resource management and control method, electronic equipment and media.
  • When system resources such as memory become insufficient, the resource management and control policy of the electronic device is generally triggered.
  • Memory pressure is usually relieved through methods such as memory compression and process killing; however, these methods typically increase the demand for CPU resources, waste CPU resources, or increase background complaints, degrading the user experience to a certain extent.
  • embodiments of the present application provide a resource management and control method, electronic equipment, and media.
  • embodiments of the present application provide a resource management and control method for electronic devices.
  • the method includes:
  • the resource management and control method provided in the embodiment of this application can determine the resource management and control strategy based on the current foreground scene information, the foreground resource demand information and the background resource status information.
  • On the one hand, the memory management strategy can be determined based on the current foreground scene information, which can alleviate the conflict between some memory resources and CPU resources to a certain extent.
  • On the other hand, the target objects and methods to be controlled in the background can be determined based on the foreground resource demand information and the background resource status information, which can effectively reduce redundant management and control, that is, effectively avoid killing too many processes as happens in the existing technology.
  • the resource management strategy includes a memory arrangement strategy and a process management and control strategy
  • Determining the resource management and control strategy based on the current foreground scene information, foreground resource demand information and background resource status information of the electronic device includes:
  • the determining a memory management strategy based on the current foreground scene information includes:
  • the memory sorting task to be executed is selected according to the amount of CPU resources occupied by each memory sorting strategy;
  • CPU-insensitive scenarios are scenarios that do not require high CPU resources, so excess CPU resources can be used for time-consuming operations such as background memory sorting. Therefore, if it is recognized that the current foreground scene is a CPU-insensitive scene, all background memory sorting tasks are allowed to be executed.
  • CPU-sensitive scenarios are scenarios with a high demand for CPU resources, so more CPU resources need to be reserved for the foreground operation of such scenarios. In this case, among the background memory sorting tasks, the tasks that consume less CPU resources are selected for execution.
  • When memory sorting, such as memory compression, is required, the foreground scene information can be used to determine whether to perform memory sorting tasks and which memory sorting tasks to execute, so as to balance memory resources and CPU resources. In this way, the conflict between some memory resources and CPU resources can be alleviated to a certain extent.
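  • As an illustration of the selection logic described above, the following Python sketch picks which background memory sorting tasks may run depending on whether the foreground scene is CPU-sensitive. The task names and relative CPU-cost figures are hypothetical, not taken from the patent:

```python
# Hypothetical relative CPU cost per background memory sorting task.
MEMORY_SORTING_TASKS = {
    "memory_compression": 8,  # assumed relative CPU cost
    "memory_merging": 5,
    "page_cache_trim": 2,
}

def select_memory_sorting_tasks(cpu_sensitive: bool, max_tasks: int = 1) -> list[str]:
    """Return the memory sorting tasks allowed to run in the background.

    CPU-insensitive foreground scene: all tasks may run.
    CPU-sensitive foreground scene: only the cheapest task(s) run.
    """
    if not cpu_sensitive:
        return list(MEMORY_SORTING_TASKS)
    cheapest_first = sorted(MEMORY_SORTING_TASKS, key=MEMORY_SORTING_TASKS.get)
    return cheapest_first[:max_tasks]

# Example: a CPU-sensitive scene only triggers the cheapest task.
print(select_memory_sorting_tasks(cpu_sensitive=True))   # ['page_cache_trim']
print(select_memory_sorting_tasks(cpu_sensitive=False))  # all three tasks
```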
  • the foreground scene information includes the foreground scene category and/or the CPU frequency load value of the foreground scene, and,
  • Determining that the current foreground scene is a CPU-sensitive scene based on the current foreground scene information of the electronic device includes:
  • the CPU frequency load value of the current foreground scene of the electronic device is greater than the set value, or the current foreground scene category corresponds to the category of a CPU-sensitive scene, it is determined that the current foreground scene is a CPU-sensitive scene.
  • Determining that the current foreground scene is a CPU-insensitive scene based on the current foreground scene information of the electronic device includes:
  • When the CPU frequency load value of the current foreground scene of the electronic device is less than or equal to the set value, or the current foreground scene category corresponds to the category of a CPU-insensitive scene, the current foreground scene is determined to be a CPU-insensitive scene.
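  • A minimal sketch of this classification rule, assuming a hypothetical frequency-load threshold and hypothetical scene-category lists (the patent does not fix concrete values):

```python
# Hypothetical values; the patent only states that a set value and
# predefined category lists exist.
CPU_LOAD_THRESHOLD = 0.7
CPU_SENSITIVE_CATEGORIES = {"game", "video_recording"}

def is_cpu_sensitive(cpu_freq_load: float, category: str) -> bool:
    """CPU-sensitive if the frequency load exceeds the set value
    or the scene category is predefined as CPU-sensitive."""
    if cpu_freq_load > CPU_LOAD_THRESHOLD:
        return True
    if category in CPU_SENSITIVE_CATEGORIES:
        return True
    return False  # load <= threshold and not a sensitive category

print(is_cpu_sensitive(0.9, "navigation"))  # True  (high frequency load)
print(is_cpu_sensitive(0.3, "navigation"))  # False (CPU-insensitive scene)
```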
  • determining the process management and control strategy based on the current foreground resource demand information and the current background resource status information includes:
  • the target management and control process and management and control method are determined based on the target resource and the current background resource status information.
  • determining target resources that need to be managed and reserved based on the current foreground resource demand information includes:
  • the current front-end resource demand information is used as a target resource that needs to be managed and reserved.
  • the historical foreground resource demand information of the target application can be the CPU resource and memory usage information obtained by sampling each time the target application enters the foreground.
  • Reserving resources based on the target resources (target values) enables precise management and control, that is, the reserved CPU resources and memory resources can be close to or the same as the target values.
  • system available resources include at least one of system available CPU load resources, system available memory resources and system available I/O resources;
  • the current foreground resource requirement information includes at least one of the current foreground CPU load resource requirement, foreground memory resource requirement and foreground I/O resource requirement; and,
  • Determining that the system's available resources do not meet the front-end resource requirements includes:
  • the background resource status information includes a first type of resource status information, and the first type of resource status information includes memory resource status information, CPU resource status information and soft resource status information;
  • the memory resource status information includes memory information of each process, file page and anonymous page distribution information of each process, and priority information of each process;
  • the CPU resource status information includes the number of execution instructions of each thread and the priority of each thread;
  • the soft resource status information includes the lock status of each process.
  • the background resource status information also includes a second type of resource status information, and the second type of resource status information includes: a correlation coefficient between each process running in the background and the corresponding process in the foreground. ;
  • the target management and control process is determined based on the correlation coefficient between each candidate process and the corresponding process in the foreground.
  • the correlation coefficient between each process and the corresponding process in the foreground is determined based on the load correlation coefficient and the wake-up correlation coefficient between each process and the corresponding process in the foreground.
  • the management and control method includes performing sniff freezing processing on the target thread
  • the sniff freezing process for the target thread includes:
  • the second scheduling interval is greater than the first scheduling interval.
  • the sniff freezing technology provided by this application can increase the scheduling interval of the frozen process. At the same time, the load of the frozen process will not be counted in the system CPU load during the freezing period and will not affect CPU frequency regulation.
  • sniff freezing technology can periodically wake up the target process to process business, instead of completely freezing the process, which is equivalent to extending the process execution time. Therefore, it is no longer necessary to notify the process to perform pre-freezing processing, which reduces the cost of freezing and solves the problem of freezing failure caused by the process being frequently awakened.
  • each first process that meets the set conditions is determined
  • the threads in the L-CFS group are scheduled after other threads that do not belong to the L-CFS group.
  • background threads with extremely low priority within a certain period of time can be identified based on the priority information of each thread in the background resource status information, and these threads can be placed into the L-CFS group to prevent them from seizing foreground resources, so that foreground services or foreground threads are scheduled with priority.
  • the first processes that are determined to meet the set conditions include:
  • the background process whose load is greater than the set value and is not related to the foreground process is regarded as the first process.
  • L-CFS control can be performed on processes running in the background that have higher loads and have nothing to do with the foreground, which can effectively reduce the impact on the foreground process and reduce power consumption to a certain extent.
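  • The selection rule above can be sketched as follows; the load threshold and the "unrelated to the foreground" test are placeholders for whatever concrete criteria an implementation would use:

```python
from dataclasses import dataclass

@dataclass
class BgProcess:
    pid: int
    load: float                    # recent CPU load of the background process
    foreground_correlation: float  # correlation coefficient with the foreground process

LOAD_THRESHOLD = 0.15         # hypothetical "set value" for the load
CORRELATION_THRESHOLD = 0.05  # below this, treated as unrelated to the foreground

def pick_lcfs_candidates(background: list[BgProcess]) -> list[int]:
    """Pick the 'first processes': background processes whose load exceeds the set
    value and which are unrelated to the foreground. They are placed into the
    L-CFS group, whose threads are scheduled after all threads outside the group."""
    return [p.pid for p in background
            if p.load > LOAD_THRESHOLD
            and p.foreground_correlation < CORRELATION_THRESHOLD]

procs = [BgProcess(101, 0.30, 0.01), BgProcess(102, 0.05, 0.00), BgProcess(103, 0.40, 0.60)]
print(pick_lcfs_candidates(procs))  # [101]: high load and unrelated to the foreground
```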
  • embodiments of the present application provide a readable medium. Instructions are stored on the readable medium. When the instructions are executed on an electronic device, they cause the machine to execute the above resource management and control method.
  • embodiments of the present application provide an electronic device, including: a memory for storing instructions executed by one or more processors of the electronic device; and a processor, which is one of the processors of the electronic device, used to implement the above resource management and control method.
  • embodiments of the present application provide a computer program product that includes instructions that, when executed on an electronic device, cause the machine to execute the above resource management and control method.
  • embodiments of the present application provide a resource management and control device, including:
  • An acquisition module used to acquire the current foreground scene information, foreground resource demand information and background resource status information of the electronic device
  • a determination module configured to determine a resource management and control strategy based on the current foreground scene information, foreground resource demand information and background resource status information of the electronic device;
  • An execution module is used to execute the resource management and control policy.
  • Figure 1 shows a schematic flow chart of a resource management and control method according to some embodiments of the present application
  • Figure 2 shows a schematic flow chart of a resource management and control method according to some embodiments of the present application
  • Figure 3 shows a schematic flow chart of a resource management and control method according to some embodiments of the present application
  • Figure 4 shows a schematic diagram of the classification of the first type of background resource status information according to some embodiments of the present application
  • Figure 5 shows a hardware schematic diagram of an electronic device according to some embodiments of the present application.
  • Figure 6 shows a schematic diagram of the software architecture of an electronic device according to some embodiments of the present application.
  • Figure 7 shows a schematic flow chart of a resource management and control method according to some embodiments of the present application.
  • Figure 8 shows a schematic flow chart of a resource management and control method according to some embodiments of the present application.
  • Figure 9 shows a schematic diagram of foreground scene classification according to some embodiments of the present application.
  • Figure 10 shows a schematic diagram of trigger management and control according to some embodiments of the present application.
  • Figure 11 shows a schematic diagram of a freezing method according to some embodiments of the present application.
  • Figure 12 shows a schematic diagram of a sniff freezing method according to some embodiments of the present application.
  • Figure 13 shows a schematic diagram of the freezing state of a freezing method according to some embodiments of the present application.
  • Figure 14 shows a schematic diagram of the freezing state of a sniff freezing method according to some embodiments of the present application.
  • Figure 15 shows a schematic diagram of the execution of the same process (task) using the CFS control strategy and the L-CFS control strategy according to some embodiments of the present application;
  • Figure 16 shows a schematic diagram of a method for determining processes that need to adopt L-CFS management and control strategies according to some embodiments of the present application
  • Figure 17 shows a schematic diagram comparing the running time of the foreground thread using the CFS control strategy and using the L-CFS control strategy according to some embodiments of the present application.
  • Illustrative embodiments of the present application include, but are not limited to, a resource management and control method, electronic equipment, and media.
  • Figure 1 shows a schematic diagram of a resource management and control strategy method.
  • In the resource management and control method shown in Figure 1, the application scenario information of the terminal is obtained through the scenario awareness module 210, and the scene information together with the real-time feedback information of the usage experience is input into the performance model, power consumption model and thermal control model in the resource monitoring and decision-making module, to obtain three factors: the degree of performance degradation, power consumption degradation and temperature degradation of the device.
  • the real-time feedback information of the usage experience may include information such as average frame rate, low frame rate, fluency, user interaction or operation during current use.
  • the decision-making module determines the resource management and control strategy based on three factors: the degree of performance degradation, power consumption degradation, and temperature degradation of the electronic device, and sends the resource management and control strategy to the execution module for execution to optimize user experience.
  • Such resource management and control strategies directly kill processes when there is insufficient memory. Although this also alleviates the memory pressure, it may cause too many processes to be killed, which wastes CPU resources and causes more background complaints, reducing the user experience to a certain extent.
  • embodiments of the present invention disclose a resource management and control method, which can be applied to electronic devices.
  • the electronic devices provided by the embodiments of the present application include but are not limited to smartphones, vehicle-mounted devices, personal computers, artificial intelligence devices, tablet computers, personal digital assistants, smart wearable devices (such as smart watches, bracelets or smart glasses), intelligent voice devices (such as smart speakers), network access devices (such as gateways), and the like.
  • the resource management and control method of the embodiment of the present application, as shown in Figure 2, includes: when it is detected that the system resources cannot meet the requirements of the foreground scenario, triggering the management and control strategy, that is, identifying the foreground resource demand information and the background load information, selecting an appropriate management and control strategy based on the foreground resource information and the background load information, and executing it.
  • the resource management and control method provided by the embodiment of the present application can also integrate foreground scene information.
  • As shown in Figure 3, the resource management and control method in the embodiment of the present application can obtain the current foreground scene information, the foreground resource demand information and the status information of background business resources of the electronic device; determine the resource management and control strategy based on the current foreground scene information, the foreground resource demand information and the status information of the background business resources; and execute the resource management and control strategy.
  • resource management and control strategies include memory organization strategies, process killing strategies, resource scheduling strategies, etc.
  • the following describes some ways of determining resource management and control strategies based on the current foreground scene information, foreground resource demand information, and background business resource status information of the electronic device.
  • the current foreground scene information can be used to determine a preliminary memory arrangement strategy. Specifically, it can be first determined whether the current foreground scene is a CPU-sensitive scene based on the current foreground scene information of the electronic device. When it is determined that the current foreground scene is a CPU-sensitive scene, the memory sorting task to be executed is selected based on the amount of CPU resources occupied by each memory sorting policy. For example, one or more to-be-executed memory sorting tasks that occupy the least amount of CPU resources can be selected for execution. When the current foreground scene is determined to be a CPU-insensitive scene according to the current foreground scene information of the electronic device, all memory sorting tasks are allowed to be performed. Among them, memory organization tasks can include memory compression, memory merging and other tasks.
  • CPU-insensitive scenarios are scenarios that do not require high CPU resources, so excess CPU resources can be used for time-consuming operations such as background memory sorting. Therefore, if it is recognized that the current foreground scene is a CPU-insensitive scene, all background memory sorting tasks are allowed to be executed.
  • CPU-sensitive scenarios are scenarios with a high demand for CPU resources, so more CPU resources need to be reserved for the foreground operation of such scenarios. In this case, among the background memory sorting tasks, the tasks that consume less CPU resources are selected for execution.
  • When memory sorting, such as memory compression, is required, the foreground scene information can be used to determine whether to perform memory sorting tasks and which memory sorting tasks to execute, so as to balance memory resources and CPU resources. In this way, the conflict between some memory resources and CPU resources can be alleviated to a certain extent.
  • the foreground scene information includes the foreground scene category and/or the CPU frequency load value of the foreground scene.
  • the current foreground scene may be determined to be a CPU-sensitive scene when it is determined that the CPU frequency load value of the current foreground scene of the electronic device is greater than the set value, or the foreground scene category corresponds to the category of the CPU-sensitive scene.
  • the current foreground scene is determined to be a CPU insensitive scene.
  • categories belonging to CPU-sensitive scenarios and categories belonging to CPU-insensitive scenarios may be predefined according to the requirements of each foreground scenario for CPU computing power. For example, for navigation scenarios, most users will not constantly stare at the screen during navigation, and only the basic CPU resources for foreground drawing need to be guaranteed. Navigation scenes therefore do not require very high CPU computing power and can be defined as CPU-insensitive scenes; the remaining CPU resources can be used for time-consuming operations such as background memory sorting. Game scenes generally place higher requirements on CPU computing power, so game scenes can be defined as CPU-sensitive scenes.
  • the target objects to be managed and controlled in the background and the management and control methods may be determined based on the foreground resource requirement information and the background resource status information.
  • the target resources that need to be reserved in the background through management and control can be determined based on the foreground resource demand information. Then the background resource status information can be used to determine the target objects to be controlled and the management and control methods, so as to reserve the target resources.
  • the system available resources include at least one of the system available CPU load resources, system available memory resources and system available I/O resources;
  • the current foreground resource demand information includes the current foreground CPU load resource demand, foreground memory resource demand and foreground I/O resource demand.
  • the method of determining that the system available resources do not meet the foreground resource requirements includes: when at least one available resource among the system available resources does not meet the corresponding resource requirement in the foreground resource demand information, determining that the system available resources do not meet the foreground resource requirements.
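  • One way to read this condition is as an any-dimension check: management and control is triggered as soon as a single resource dimension falls short. A hedged sketch, with illustrative type and field names:

```python
from dataclasses import dataclass

@dataclass
class Resources:
    cpu_load: float   # CPU load headroom (for "available") or demand (for "required")
    memory_mb: float  # memory in MB
    io_mb_s: float    # I/O bandwidth in MB/s

def meets_foreground_demand(available: Resources, demand: Resources) -> bool:
    """Return False (i.e. trigger management and control) when at least one
    available resource falls short of the corresponding foreground demand."""
    return (available.cpu_load >= demand.cpu_load
            and available.memory_mb >= demand.memory_mb
            and available.io_mb_s >= demand.io_mb_s)

available = Resources(cpu_load=0.20, memory_mb=300, io_mb_s=50)
demand = Resources(cpu_load=0.10, memory_mb=387, io_mb_s=10)
print(meets_foreground_demand(available, demand))  # False: memory falls short of 387 MB
```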
  • the electronic device can record the historical startup resource requirement information of each application that has entered the foreground in history, and determine the foreground resource requirement information when the target application starts to enter the foreground at the current moment based on the historical startup resource requirement information of the target application.
  • the foreground resource demand information when the target application starts and enters the foreground at the current moment is the target resource that needs to be reserved through management and control in the current backend.
  • the historical foreground resource demand information of the target application may be CPU resource and memory usage information obtained by sampling each time the target application enters the foreground.
  • the average CPU resources and memory usage can be determined based on the CPU resources and memory usage information each time the target application enters the foreground in history, as the CPU resource and memory usage target values required for the current target application to enter the foreground.
  • the CPU resource frequency distribution and memory trend of the target application can also be determined based on the CPU resources and memory occupancy of the target application each time it entered the foreground, and the target values of CPU resources and memory usage required for the target application to enter the foreground at the current moment can be determined based on the CPU frequency distribution and memory trend analysis.
  • management and control such as background scheduling and memory sorting are performed based on the target value.
  • control methods such as killing or freezing set processes can be used to make the reserved CPU resources and memory resources close to or the same as the target values. In this way, redundant management and control can be effectively reduced, that is, excessive scanning and killing of processes in the existing technology can be effectively avoided.
  • For example, if the target memory occupation value required for the "News Application" to enter the foreground is 387MB, then when the "News Application" is started and enters the foreground at the current moment, the electronic device can reserve memory as close to 387MB as possible through resource management and control methods such as killing some processes.
  • 389 MB can also be used as the target memory occupation value required for the current "News Application” to enter the foreground based on the numerical trend of the memory usage gradually increasing by 1 MB three times in the history of the "News Application” entering the foreground.
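  • The 387 MB / 389 MB example can be reproduced with a small sketch: the target is either the historical average or a value extrapolated from the recent trend, both of which are described above. The helper below is illustrative, not the patent's implementation:

```python
def target_memory_from_history(samples_mb: list[float], use_trend: bool = False) -> float:
    """Target memory for the next foreground entry of an application.

    samples_mb: memory occupied on each historical foreground entry.
    use_trend: extrapolate the last observed step instead of averaging.
    """
    if use_trend and len(samples_mb) >= 2:
        step = samples_mb[-1] - samples_mb[-2]   # e.g. +1 MB per entry
        return samples_mb[-1] + step
    return sum(samples_mb) / len(samples_mb)

history = [386.0, 387.0, 388.0]  # hypothetical "News Application" history
print(target_memory_from_history(history))                  # 387.0 MB (average)
print(target_memory_from_history(history, use_trend=True))  # 389.0 MB (trend of +1 MB)
```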
  • the target objects to be controlled or the management and control methods can be determined through the background resource status information, so as to reserve the target resources.
  • the following describes the method of determining the target objects to be controlled and the control methods through background resource status information.
  • Figure 4 shows a schematic diagram of the classification of the first type of background resource status information.
  • the first type of background resource status information includes memory resource status information, CPU resource status information and soft resource status information; memory resource status information includes memory information of each process, file page and anonymous page distribution information of each process, and Priority information of each process; CPU resource status information includes the number of instructions executed by each thread and the priority of each thread.
  • Soft resource status information includes the lock status of each thread. It can be understood that the above resource status information may be resource status information in a set time period, for example, resource status information within the last second.
  • statistics on the resource status of background services can be used to clarify the control objects and methods. For example, based on the statistical memory occupancy of each thread, it can be determined which processes to kill or freeze so that the reserved memory can be close to or equal to the current foreground resource requirements. That is, the candidate processes and management and control methods are determined based on the first type of background resource status information described above;
  • the background resource status information may also include the second type of background resource status information, that is, the correlation information between each thread running in the background and the corresponding thread in the foreground.
  • the resource management and control method may also include selecting a target process from the above candidate processes for management and control of the background business resources based on the correlation between each thread running in the background and the corresponding thread in the foreground. Specifically, the degree of correlation between each thread running in the background and the corresponding thread in the foreground may be obtained, and several target processes with the lowest or lower correlation among the candidate processes are preferentially selected for control.
  • the electronic device determines that the memory occupied by thread A in the background thread is 389MB.
  • the memory occupied by thread B is 389MB
  • the memory occupied by thread C is 389MB
  • the memory occupied by thread A, thread B and thread C are all close to the target value of 387MB
  • thread B has the lowest correlation with the corresponding thread in the foreground. In this case, thread B can be killed or frozen to reserve the corresponding memory.
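  • Putting the two criteria together, candidates whose memory is close to the target are collected first and the one least correlated with the foreground is controlled. A sketch using the numbers from the example above; the 10 MB closeness tolerance is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    memory_mb: float
    foreground_correlation: float

def pick_control_target(candidates: list[Candidate], target_mb: float,
                        tolerance_mb: float = 10.0) -> Candidate:
    """Among background candidates whose memory is close to the target value,
    pick the one with the lowest correlation to the foreground thread."""
    close = [c for c in candidates if abs(c.memory_mb - target_mb) <= tolerance_mb]
    return min(close, key=lambda c: c.foreground_correlation)

threads = [Candidate("A", 389, 0.40), Candidate("B", 389, 0.02), Candidate("C", 389, 0.25)]
print(pick_control_target(threads, target_mb=387).name)  # 'B' is killed or frozen
```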
  • When the electronic device detects memory sorting tasks triggered by any application, it can identify, according to the aforementioned method, whether the scene in which the "News Application" is running is a CPU-sensitive scene, so as to determine the memory management strategy.
  • the target objects and methods to be controlled in the background can be determined based on the foreground resource demand information and the background resource status information, which can effectively reduce redundant management and control, that is, effectively avoid killing too many processes as happens in the existing technology.
  • the mobile phone 10 may include a processor 110, a power module 140, a memory 180, a mobile communication module 130, a wireless communication module 120, a sensor module 190, an audio module 150, a camera 170, an interface module 160, a button 101, a display screen 102, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the mobile phone 10 .
  • the mobile phone 10 may include more or less components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, processing modules or processing circuits such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a micro-programmed control unit (MCU), an artificial intelligence (AI) processor or a programmable logic device (field programmable gate array, FPGA). Different processing units may be independent devices or may be integrated in one or more processors.
  • a storage unit may be provided in the processor 110 for storing instructions and data. In some embodiments, the storage unit in processor 110 is cache memory 180.
  • Power module 140 may include a power supply, power management components, and the like.
  • the power source can be a battery.
  • the power management component is used to manage the charging of the power supply and the power supply from the power supply to other modules.
  • the power management component includes a charge management module and a power management module.
  • the charging management module is used to receive charging input from the charger; the power management module is used to connect the power supply, the charging management module and the processor 110 .
  • the power management module receives input from the power supply and/or charging management module and supplies power to the processor 110, the display screen 102, the camera 170, the wireless communication module 120, etc.
  • the mobile communication module 130 may include, but is not limited to, an antenna, a power amplifier, a filter, a low noise amplifier (LNA), etc.
  • the mobile communication module 130 can provide wireless communication solutions including 2G/3G/4G/5G applied on the mobile phone 10 .
  • the mobile communication module 130 can receive electromagnetic waves through an antenna, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 130 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna for radiation.
  • at least part of the functional modules of the mobile communication module 130 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 130 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 120 may include an antenna, and implements the transmission and reception of electromagnetic waves via the antenna.
  • the wireless communication module 120 can provide wireless communication solutions applied on the mobile phone 10, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) and other wireless communication solutions.
  • the mobile phone 10 can communicate with the network and other devices through wireless communication technology.
  • the mobile communication module 130 and the wireless communication module 120 of the mobile phone 10 may also be located in the same module.
  • the display screen 102 is used to display human-computer interaction interfaces, images, videos, etc.
  • Display 102 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the sensor module 190 may include a proximity light sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the audio module 150 is used to convert digital audio information into an analog audio signal output, or convert an analog audio input into a digital audio signal. Audio module 150 may also be used to encode and decode audio signals. In some embodiments, the audio module 150 may be disposed in the processor 110 , or some functional modules of the audio module 150 may be disposed in the processor 110 . In some embodiments, audio module 150 may include speakers, earpieces, microphones, and headphone jacks.
  • Camera 170 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP (image signal processor) to be converted into a digital image signal.
  • the mobile phone 10 can implement the shooting function through the ISP, camera 170, video codec, GPU (graphic processing unit, graphics processor), display screen 102 and application processor.
  • the interface module 160 includes an external memory interface, a universal serial bus (USB) interface, a subscriber identification module (subscriber identification module, SIM) card interface, etc.
  • the external memory interface can be used to connect external memory cards, such as Micro SD cards, to expand the storage capacity of the mobile phone 10.
  • the external memory card communicates with the processor 110 through the external memory interface to implement data storage functions.
  • the universal serial bus interface is used for communication between the mobile phone 10 and other electronic devices.
  • the user identity module card interface is used to communicate with the SIM card installed in the mobile phone 10, such as reading the phone number stored in the SIM card, or writing a phone number into the SIM card.
  • the mobile phone 10 also includes buttons 101, motors, indicators, etc.
  • the keys 101 may include volume keys, on/off keys, etc.
  • the motor is used to cause the mobile phone 10 to produce a vibration effect, for example, when the user's mobile phone 10 is called, it vibrates to prompt the user to answer the incoming call.
  • Indicators may include laser pointers, radio frequency indicators, LED indicators, etc.
  • the software architecture may include a super brain 210, an operating system 220 and an application program 230.
  • the super brain 210 is a control module that controls and implements the resource management and control method of this application. It can be used to monitor the system status in real time, and to query the foreground scene in response to user operations, such as click interaction operations, gesture interaction operations, or face interaction operations.
  • the foreground scene referred to here may include the foreground scene information and the resource demand information of the foreground scene.
  • the resource management and control strategy is determined based on the foreground scene information, the foreground resource demand information and the status information of the background business resources, that is, the scene fusion management and control strategy shown in Figure 7, and the scene fusion management and control policy is delivered to the operating system 220.
  • the method of determining that the system's available resources do not meet the foreground resource requirements includes: when at least one available resource among the system's available resources does not meet the corresponding resource requirement in the foreground resource requirement information, determining that the system's available resources do not meet the foreground resource requirements.
  • scenario integration management and control strategies generally include: sniff freezing or killing strategies, CFS control strategies, memory pre-organizing strategies, IO current limiting strategies, redundant rendering control strategies, soft resource competition control strategies, etc.
  • the super brain 210 can be controlled to first determine the scene fusion management and control strategy when the system has a high load warning or insufficient resources, and the scene fusion management and control strategy is set to execute management and control before the system default policy. This is equivalent to intercepting the deterioration of the user experience through fusion management and control before the user experience actually deteriorates. The system default policy can be used to perform default load management and control of the application 230 when fusion management and control fails or in other circumstances, so as to ensure normal operation of the electronic device in the worst case.
  • the operating system 220 is used to execute the above scenario integration management and control strategy.
  • the application program 230 may include various application programs such as camera, navigation, and gallery.
  • Figure 8 shows a schematic flow chart of a resource management and control method according to an embodiment of the present application.
  • the resource management and control method provided by an embodiment of the present application can be executed by the processor 110. The method specifically includes:
  • the electronic device can obtain the current foreground scene information, the foreground resource demand information and the status information of the background business resources of the electronic device when detecting that the system resources cannot meet the foreground scene requirements.
  • the electronic device can also obtain the foreground scene information and the foreground resource demand information in real time, and when it is determined that the system's available resources do not meet the foreground resource demand, obtain the resource status information of the background business.
  • System available resources include at least one of system available CPU load resources, system available memory resources, and system available I/O resources;
  • System available resources can include system available CPU load resources, available memory resources, IO load resources, soft resources, etc.
  • the current foreground resource demand information includes at least one of the current foreground CPU load resource demand, foreground memory resource demand and foreground I/O resource demand; and the method for determining that the system's available resources do not meet the foreground resource demand includes: when at least one of the system's available resources does not meet the corresponding resource requirement in the foreground resource demand information, it is determined that the available resources of the system do not meet the foreground resource demand.
  • the foreground scene information includes the foreground scene category and/or the CPU frequency load value of the foreground scene.
  • the above categories can be assigned values of 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 and 255 in sequence.
  • the foreground resource requirement information may include memory resource information and CPU resource information required by the foreground.
  • the background resource status information includes memory resource status information, CPU resource status information and soft resource status information;
  • the memory resource status information includes the memory information of each process, the file page and anonymous page distribution information of each process, and the priority information of each process.
  • the CPU resource status information includes the number of instructions executed by each thread and the priority of each thread.
  • the soft resource status information includes the lock status of each thread.
  • the embodiment of the present application can trigger management and control when any system resource, such as CPU load resources, available memory resources, IO load resources or soft resources, is insufficient, and obtain the current foreground scene information, the foreground resource demand information and the status information of background business resources, so that the resource management and control strategy can subsequently be determined based on the current foreground scene information, the foreground resource demand information and the status information of background business resources.
  • the following describes some ways of determining resource management and control strategies based on the current foreground scene information, foreground resource demand information, and background business resource status information of the electronic device.
  • the current foreground scene information can be used to determine a preliminary memory arrangement strategy. Specifically, it can first be determined whether the current foreground scene is a CPU-sensitive scene according to the current foreground scene information of the electronic device. When it is determined that the current foreground scene is a CPU-sensitive scene, the memory sorting task to be executed is selected based on the amount of CPU resources occupied by each memory sorting strategy; for example, one or more memory sorting tasks that occupy the least amount of CPU resources can be selected for execution. When the current foreground scene is determined to be a CPU-insensitive scene according to the current foreground scene information of the electronic device, all memory sorting tasks are allowed to be performed.
  • the above-mentioned preliminary memory arrangement strategy determined based on the current foreground scene information can be executed when system resources are insufficient, or when system resources are sufficient.
  • CPU-insensitive scenarios are scenarios that do not require high CPU resources, so excess CPU resources can be used for time-consuming operations such as background memory sorting. Therefore, if it is recognized that the current foreground scene is a CPU-insensitive scene, all background memory sorting tasks are allowed to be executed.
  • CPU-sensitive scenarios are scenarios with a high demand for CPU resources, so more CPU resources need to be reserved for the foreground operation of such scenarios. In this case, among the background memory sorting tasks, the tasks that consume less CPU resources are selected for execution.
  • When memory sorting, such as memory compression, is required, the foreground scene information can be used to determine whether to perform memory sorting tasks and which memory sorting tasks to execute, so as to balance memory resources and CPU resources. In this way, the conflict between some memory resources and CPU resources can be alleviated to a certain extent.
  • the current foreground scene may be determined to be a CPU-sensitive scene when it is determined that the CPU frequency load value of the current foreground scene of the electronic device is greater than the set value, or the foreground scene category corresponds to the category of the CPU-sensitive scene.
  • the current foreground scene is determined to be a CPU insensitive scene.
  • categories belonging to CPU-sensitive scenarios and categories belonging to CPU-insensitive scenarios may be predefined according to the requirements for CPU computing power of each foreground scenario. For example, for navigation scenarios, most users will not always stare at the screen for navigation. We only need to ensure the basic CPU resources for foreground drawing. Therefore, navigation scenarios do not require very high CPU computing power, so navigation scenarios can be defined as CPU-insensitive scenarios. At this time, other CPU resources can be released for time-consuming operations such as background memory sorting. For game scenes, the requirements for CPU computing power are generally higher, so game scenes can be defined as CPU-sensitive scenes.
  • the target object to be managed and controlled in the background can be determined based on the foreground resource demand information and the background resource status information.
  • the target resources that need to be reserved by the backend through management and control can be determined based on the front-end resource demand information.
  • the target objects to be controlled and the control methods can be determined through the background resource status information, so as to reserve the target resources.
  • the electronic device can record the historical startup resource requirement information of each application that has entered the foreground in the past, and determine the foreground resource requirement information when the target application starts to enter the foreground at the current moment based on the historical startup resource requirement information of the target application.
  • the foreground resource demand information when the target application starts and enters the foreground at the current moment is the target resource that needs to be reserved by the current backend through management and control.
  • the historical foreground resource demand information of the target application may be CPU resource and memory usage information obtained by sampling each time the target application enters the foreground.
  • the average CPU resources and memory usage can be determined based on the CPU resources and memory usage information each time the target application enters the foreground in history, as the CPU resource and memory usage target values required for the current target application to enter the foreground.
  • the CPU resource frequency distribution and memory trend of the target application can also be determined based on the CPU resources and memory occupation of the target application each time it entered the foreground, and the target values of CPU resources and memory occupation required for the target application to enter the foreground at the current moment can be determined based on the CPU frequency distribution and memory trend.
  • management and control such as background scheduling and memory sorting are performed based on the target value. For example, set processes can be killed or frozen so that the reserved CPU resources and memory resources are close to or the same as the target value. In this way, redundant management and control can be effectively reduced, that is, excessive scanning and killing of processes in the existing technology can be effectively avoided.
  • the electronic device can reserve as close to 387MB of memory as possible through resource management and control methods such as killing some processes.
  • 389 MB can also be used as the target memory occupation value required for the current "News Application” to enter the foreground based on the numerical trend of the memory usage gradually increasing by 1 MB three times in the history of the "News Application” entering the foreground.
  • the resource requirement target value required by the target application to enter the foreground at the current moment may also be determined based on the resource requirements of the historical version of the target application of the electronic device entering the foreground.
  • the corresponding startup memory of "News Application” version 7.7.8, version 7.8.8, version 7.9.2, version 7.9.6, and version 8.0.0 are 386.67MB, 445.46MB, 486.79MB, 460.69MB, and 422.17MB respectively;
  • the resource demand target value required by the target application to enter the foreground at the current moment can be determined based on the currently started version. For example, if the currently launched version is version 8.0.0, it can be determined that the target resource requirement value required for the target application to enter the foreground is 422.17MB.
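  • A minimal sketch of the version-based lookup, using the startup-memory figures listed above; the dictionary form and the fallback behaviour are illustrative assumptions:

```python
# Startup memory recorded per historical version of the "News Application" (MB).
STARTUP_MEMORY_BY_VERSION = {
    "7.7.8": 386.67,
    "7.8.8": 445.46,
    "7.9.2": 486.79,
    "7.9.6": 460.69,
    "8.0.0": 422.17,
}

def target_memory_for_version(version: str, default_mb: float = 400.0) -> float:
    """Return the recorded startup memory for the launched version,
    falling back to an assumed default when the version has never been seen."""
    return STARTUP_MEMORY_BY_VERSION.get(version, default_mb)

print(target_memory_for_version("8.0.0"))  # 422.17 MB reserved for this launch
```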
  • the target objects to be controlled and the management and control methods can be determined through the background resource status information, so as to reserve the target resources.
  • the following describes how to determine the target objects to be controlled through background resource status information.
  • the background resource status information includes the first type of resource status information.
  • the first type of resource status information can include memory resource status information, CPU resource status information and soft resource status information;
  • the memory resource status information includes the memory information of each process, the file page and anonymous page distribution information of each process, and the priority information of each process;
  • the CPU resource status information includes the number of instructions executed by each thread and the priority of each thread, and the soft resource status information includes the lock status of each thread.
  • statistics on the resource status of backend services can be used to clarify control objects and control methods. For example, based on the statistical memory usage of each thread, it can be determined which processes to kill or freeze so that the reserved memory can be close to or equal to the current foreground resource requirements.
  • the background resource status information may also include correlation information between each thread running in the background and the corresponding thread in the foreground.
  • the resource management and control method may also include managing and controlling background business resources based on the correlation between each thread running in the background and the corresponding thread in the foreground.
  • the method of managing and controlling background business resources may be to determine candidate processes and management methods based on the target resources and the first type of resource status information, and to determine the target management and control process based on the correlation coefficient between each candidate process and the corresponding process in the foreground.
  • For example, the electronic device determines that, among the background threads, the memory occupied by thread A is 389MB, the memory occupied by thread B is 389MB, and the memory occupied by thread C is 389MB. The memory occupied by thread A, thread B and thread C is all close to the target value of 387MB, and thread B has the lowest correlation with the corresponding thread in the foreground. In this case, thread B can be killed or frozen to reserve the corresponding memory; that is, thread B is used as the control object.
  • the degree of correlation can be determined by the correlation coefficient between each thread in the background and the corresponding thread in the foreground. The higher the correlation coefficient, the higher the correlation; the lower the correlation coefficient, the lower the correlation.
  • the correlation coefficient between each thread in the background and the corresponding thread in the foreground can be the product of the load correlation coefficient and the wake-up correlation coefficient between each thread in the background and the corresponding thread in the foreground.
  • the load correlation coefficient r between each thread in the background and the corresponding thread in the foreground is calculated as follows:
  • the above calculation uses 1 second after the user operation as the total sampling time, with samples taken periodically at 20ms intervals.
  • Xi is the load of the background thread X at each sampling time, and X̄ is the average load of thread X within the 1-second sampling window;
  • Yi is the load of the foreground UI thread Y at each sampling time, and Ȳ is the average load of thread Y within the 1-second sampling window.
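  • The formula itself did not survive the extraction of this text. Given the definitions above (per-sample loads and their 1-second averages), a standard Pearson correlation coefficient is the most natural reading; the following is a reconstruction under that assumption, not the patent's verbatim formula:

    r = \frac{\sum_{i=1}^{N}\left(X_i-\bar{X}\right)\left(Y_i-\bar{Y}\right)}{\sqrt{\sum_{i=1}^{N}\left(X_i-\bar{X}\right)^{2}}\,\sqrt{\sum_{i=1}^{N}\left(Y_i-\bar{Y}\right)^{2}}}

    where N is the number of samples (about 50 for a 1-second window sampled every 20 ms), X̄ is the average load of background thread X, and Ȳ is the average load of foreground UI thread Y.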
  • the wake-up correlation coefficient P between each background thread and the corresponding foreground thread is computed over the same window, taking the 1 second after the user operation as the total sampling time and sampling periodically at 20 ms intervals, using the following counts:
  • CountX is the total number of times the background thread wakes up the corresponding foreground (UI) thread Y;
  • CountY is the total number of times the foreground thread, once awakened, in turn actively wakes up background thread X;
  • CountXi is the number of times per cycle that the background thread wakes up the corresponding foreground (UI) thread Y;
  • CountYi is the number of times per cycle that the foreground thread, once awakened, in turn actively wakes up background thread X.
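  • the sketch below computes a load correlation in the Pearson form given above and multiplies it by a wake-up term built from the counts just listed; since the exact wake-up formula is not reproduced in this text, the wakeup_correlation function is only an assumed placeholder.

      import math

      def load_correlation(x_samples, y_samples):
          # Pearson correlation of the 20 ms load samples taken over 1 s.
          n = len(x_samples)
          mx, my = sum(x_samples) / n, sum(y_samples) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(x_samples, y_samples))
          sx = math.sqrt(sum((x - mx) ** 2 for x in x_samples))
          sy = math.sqrt(sum((y - my) ** 2 for y in y_samples))
          return cov / (sx * sy) if sx and sy else 0.0

      def wakeup_correlation(count_x, count_y, cycles=50):
          # Placeholder only: mutual wake-ups per sampling cycle, clamped to [0, 1].
          # The application defines CountX/CountY/CountXi/CountYi, but its exact
          # formula is not shown in this text, so this expression is an assumption.
          return min(1.0, (count_x + count_y) / (2 * cycles))

      def correlation_coefficient(x_samples, y_samples, count_x, count_y):
          # Product of the load correlation and the wake-up correlation.
          return load_correlation(x_samples, y_samples) * wakeup_correlation(count_x, count_y)
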
  • the current occupation information of the background resources is determined based on the status information of the background business resources; when the occupation information meets a set condition, the background business resources are managed and controlled.
  • the current occupation information of background resources includes the ratios of the current occupation of each background resource to the total background resources.
  • the set condition includes that at least one of the ratios of the current occupation of each background resource to the corresponding total background resource is greater than the set value; that is, if background resources are only lightly occupied, or the background group members occupy no resources at all, there is no need to control the background resources.
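  • that trigger could be checked roughly as follows; the resource names and the 0.3 threshold are placeholders.

      def should_control_background(occupied, totals, threshold=0.3):
          """occupied/totals: dicts keyed by resource name, e.g. 'memory', 'cpu', 'io'."""
          # Control is triggered when at least one occupancy ratio exceeds the set value.
          for name, used in occupied.items():
              total = totals.get(name, 0)
              if total and used / total > threshold:
                  return True
          return False

      print(should_control_background({"memory": 2048, "cpu": 10},
                                      {"memory": 4096, "cpu": 100}))  # True: memory ratio is 0.5
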
  • in this way, the target objects and methods to be controlled in the background can be determined based on the foreground resource demand information and the background resource status information, which effectively reduces redundant management and control, that is, it effectively avoids killing too many processes as happens in the prior art.
  • the resource management and control strategies determined above may include strategies such as killing or freezing specified processes, memory reorganization, and the like.
  • freezing technology can be used to perform thread freezing.
  • the freezing solution includes: when the electronic device receives an instruction requesting freezing, it will run a pre-freezing process, that is, send an instruction to release system resources to the process to be frozen, so that the process to be frozen is ready to be frozen.
  • the electronic device can make the thread enter the freeze state through the signal processing mechanism (do_signal), and at the same time it sends a freeze instruction to the kernel.
  • the kernel can determine whether the thread is in the freeze state by calling the should_stop function, and when it is, it can directly execute the freeze; that is, the prior art runs a pre-freezing procedure before executing application freezing, which alleviates memory pressure but also causes excessive demand for CPU resources, and if the process is awakened just after being frozen, the freeze may fail.
  • embodiments of the present application provide a sniff freeze (Sniff Frozen) technology, which, when a freezing instruction is received, adjusts the current scheduling interval of the process to be frozen, that is, the first scheduling interval, to a second scheduling interval, where the second scheduling interval is greater than the first scheduling interval; in other words, the scheduling interval of the process to be frozen is increased. As shown in Figure 12, when the electronic device receives an instruction requesting freezing, it sends a freezing instruction to the kernel, and the kernel can directly execute the freeze by calling the should_stop function; in the embodiments of the present application, "executing the freeze" in the sniff freezing technology means adjusting the first scheduling interval of the process to be frozen to the second scheduling interval.
  • for example, when the electronic device determines that it needs to freeze thread B, which is about to be scheduled or is currently running, in order to release system resources for the foreground "news application" to start, it can send a specific freezing instruction to the kernel, and the kernel can increase thread B's scheduling interval. For example, if the current scheduling interval of thread B (that is, the interval between two times thread B is scheduled) is 2 seconds and thread B has just reached that 2-second interval and is ready to be scheduled when the freezing instruction for thread B arrives, the scheduling interval of thread B can be increased to 10 seconds, so thread B has to wait another 8 seconds before being scheduled.
  • if thread B is instead in the middle of running when the freezing instruction for thread B arrives, the scheduling interval of thread B can likewise be increased to 10 seconds; thread B then stops running and has to wait another 10 seconds before being scheduled. It can be understood that while thread B is not scheduled, the effect of releasing or reserving system resources is achieved.
  • in the solution shown in Figure 11, the scheduling status of a frozen thread is as shown in Figure 13: during running, the thread is frozen as soon as it receives the freezing instruction.
  • in the freezing solution of Figure 12 provided by the embodiments of the present application, the scheduling status of the frozen thread is as shown in Figure 14: during running, receiving the freezing instruction only increases the scheduling time interval, i.e. a pseudo freeze, and when the scheduling time interval is reached the thread runs again, i.e. it wakes up periodically.
  • the sniff freezing technology provided by this application can increase the scheduling interval of the frozen process.
  • the load of the frozen process will not be counted in the system CPU load during the freezing period, and will not affect CPU frequency regulation.
  • sniff freezing technology wakes the process up periodically to handle its business instead of freezing it completely, which is equivalent to stretching out the process's execution time, so there is no longer any need to notify the process to perform pre-freezing processing; this reduces the freezing cost and also solves the problem of the freeze being invalidated by frequent wake-ups of the process.
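  • to make the idea concrete, here is a toy user-space simulation of sniff freezing that merely stretches a task's scheduling interval instead of fully freezing it; it is not the kernel implementation, and the 2 s / 10 s values simply follow the thread B example above.

      import time

      class SniffFrozenTask:
          def __init__(self, name, interval_s=2.0):
              self.name = name
              self.interval_s = interval_s            # first (normal) scheduling interval
              self.next_run = time.monotonic() + interval_s

          def sniff_freeze(self, frozen_interval_s=10.0):
              # "Freezing" here only means scheduling far less often; the task
              # still wakes up periodically to handle its business.
              self.interval_s = frozen_interval_s     # second (enlarged) scheduling interval
              self.next_run = time.monotonic() + frozen_interval_s

          def maybe_run(self):
              now = time.monotonic()
              if now >= self.next_run:
                  print(f"{self.name} wakes up and runs briefly")
                  self.next_run = now + self.interval_s

      task_b = SniffFrozenTask("thread B")
      task_b.sniff_freeze()   # thread B now waits about 10 s between runs instead of 2 s
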
  • the resource management and control method also includes a resource scheduling solution.
  • the resource scheduling scheme is generally a completely fair scheduler (CFS) control scheme.
  • the CFS control scheme guarantees that foreground threads are scheduled first by priority-ordering the foreground threads and background threads of a process; however, when a background thread goes unscheduled for a long time its priority is boosted and it may occasionally preempt foreground resources, making the foreground run less smoothly.
  • the resource management and control method in the embodiments of the present application therefore also provides a low-priority completely fair scheduler (lower completely fair scheduler, L-CFS) control strategy, which specifically includes: for a process that needs to apply the L-CFS strategy, determining the priority information of its background threads in each time period based on the background resource status information, sorting the background threads of each time period by priority from high to low to obtain a priority sequence, and setting the background threads in the last set number of positions in the priority sequence as threads of the L-CFS group; the threads in the L-CFS group are controlled to be scheduled after all other threads that do not belong to the L-CFS group, that is, as long as any thread not belonging to the L-CFS group is waiting to be scheduled, the threads in the L-CFS group yield resources. It can be understood that a thread's priority does not change after it enters the L-CFS group.
  • for example, at the current moment both thread A, which does not belong to the L-CFS group, and thread B, which belongs to the L-CFS group, are waiting to be scheduled; in this case thread A, which does not belong to the L-CFS group, is scheduled directly.
  • based on the above scheme, background threads with extremely low priority in a given time period can be identified from the background resource status information and placed into the L-CFS group, restraining these threads from seizing foreground resources and thereby achieving priority scheduling of foreground business.
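  • a rough model of that scheduling rule is sketched below: any runnable thread outside the L-CFS group is picked before any thread inside it, and priorities within each pool are otherwise unchanged; the data layout is invented for illustration.

      def pick_next(runnable):
          """runnable: list of (name, priority, in_lcfs_group); lower value = higher priority."""
          normal = [t for t in runnable if not t[2]]
          pool = normal if normal else [t for t in runnable if t[2]]
          # Within either pool, ordinary priority ordering still applies.
          return min(pool, key=lambda t: t[1]) if pool else None

      # Thread B has the better priority value, but thread A is outside the
      # L-CFS group, so A is scheduled first.
      print(pick_next([("A", 120, False), ("B", 100, True)]))  # ('A', 120, False)
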
  • Figure 15 shows a schematic diagram of the execution of the same process (task) using the CFS control strategy and the L-CFS control strategy.
  • the threads in this process include interface (UI) threads, rendering (Render) threads, background thread group 1, background thread group 2, and communication (Binderx) threads.
  • the interface (UI) thread, rendering (Render) thread and communication (Binderx) thread are all foreground threads.
  • the method of determining the processes that need to adopt the L-CFS control policy can be as shown in Figure 16.
  • after a process switches from the foreground to the background, the electronic device can set those of the multiple background processes that meet a set condition as processes in the first background group.
  • the L-CFS control policy is implemented for the processes in the first background group, and the foreground process and other threads that do not belong to the first background group still use the CFS control policy.
  • Figure 16 shows a schematic diagram of the situation where multiple processes in the background are all processes belonging to the first background group that meet the set conditions.
  • in the situation shown in Figure 16, the foreground process adopts the CFS control strategy, and the background processes all implement the L-CFS control strategy.
  • the method of determining the processes in the first background group that meet the set condition includes: determining the background processes that are unrelated to the foreground process and whose load is greater than a set value, marking those processes as the processes corresponding to a whitelist, and marking the whitelist processes as processes in the first background group.
  • processes running in the background with higher loads and unrelated to the foreground can be managed and controlled, which can effectively reduce the impact on the foreground process and reduce power consumption to a certain extent.
  • once the processes that will apply the L-CFS control strategy are determined, the priority information of the background threads of each process in each time period can be determined from the background resource status information; the background threads of each time period are sorted by priority from high to low to obtain a priority sequence, and the background threads in the last set number of positions in the priority sequence are taken as the threads of the L-CFS group.
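  • the two selection steps above might look like the following sketch: first keep the background processes that are unrelated to the foreground and whose load exceeds a set value, then, per time window, move the lowest-priority set number of their threads into the L-CFS group; the thresholds and field names are assumptions.

      def first_background_group(processes, load_threshold=0.2):
          """processes: list of dicts with 'name', 'load', 'related_to_foreground'."""
          return [p for p in processes
                  if p["load"] > load_threshold and not p["related_to_foreground"]]

      def lcfs_threads(threads, last_n=2):
          """threads: list of (tid, priority) in one time window; lower value = higher priority."""
          ordered = sorted(threads, key=lambda t: t[1])   # highest to lowest priority
          return {tid for tid, _ in ordered[-last_n:]}    # last N positions join the L-CFS group
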
  • the L-CFS control strategy adopted in the embodiment of this application can effectively reduce the runnable time of the foreground thread, as shown in Figure 17:
  • assume that before the L-CFS control strategy of this application is used, i.e. under the CFS control strategy, foreground thread 3 has to wait for foreground thread 1, foreground thread 2, and background thread a before it can be scheduled, and that its ready (Runnable) time is five minutes. If background thread a is placed into the L-CFS group, the ready time of foreground thread 3 is reduced by the execution time of background thread a, for example to three minutes; that is, foreground thread 3 starts running (Running) two minutes earlier, so its end time is advanced by two minutes, and the time at which all foreground threads finish executing is therefore advanced by two minutes, i.e. the gain of the control strategy is two minutes. Therefore, the L-CFS control strategy of this application can effectively speed up the completion of foreground tasks.
  • Table 2 shows the comparison of different electronic devices using the resource management and control method of this application and not using the resource management and control method of this application.
  • in summary, the target objects and methods to be controlled in the background can be determined based on the foreground resource demand information and the background resource status information, which effectively reduces redundant management and control, that is, it effectively avoids killing too many processes as happens in the prior art.
  • the Sniff freezing technology provided in this application simulates the freezing process, does not require the process to enter the pre-freezing process, and achieves the freezing effect, reducing the freezing cost, and at the same time solving the problem of frequent awakening of the process causing freezing failure.
  • the L-CFS scheduling strategy provided by this application can identify background threads with extremely low priority in a given time period based on the background resource status information, place these threads into the L-CFS group, and restrain them from seizing foreground resources, thereby achieving priority scheduling of foreground business.
  • L-CFS control can be performed on processes running in the background that have higher loads and have nothing to do with the foreground, which can effectively reduce the impact on the foreground process and reduce power consumption to a certain extent.
  • An embodiment of the present application provides a resource management and control device, including:
  • an acquisition module, used to acquire the current foreground scene information, foreground resource demand information and background resource status information of the electronic device;
  • a determination module configured to determine a resource management and control strategy based on the current foreground scene information, foreground resource demand information and background resource status information of the electronic device;
  • An execution module is used to execute the resource management and control policy.
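  • mapped onto code, the three modules could be wired together roughly as below; the class and method names are invented for illustration and are not part of the application.

      class ResourceControlDevice:
          def __init__(self, acquirer, decider, executor):
              self.acquirer = acquirer    # acquisition module
              self.decider = decider      # determination module
              self.executor = executor    # execution module

          def run_once(self):
              # foreground scene info, foreground resource demand, background resource status
              scene, fg_demand, bg_status = self.acquirer()
              policy = self.decider(scene, fg_demand, bg_status)
              self.executor(policy)
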
  • Embodiments of the mechanisms disclosed in this application may be implemented in hardware, software, firmware, or a combination of these implementation methods.
  • Embodiments of the present application may be implemented as a computer program or program code executing on a programmable system including at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements) , at least one input device and at least one output device.
  • Program code may be applied to input instructions to perform the functions described herein and to generate output information.
  • Output information can be applied to one or more output devices in a known manner.
  • a processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
  • Program code may be implemented in a high-level procedural language or an object-oriented programming language to communicate with the processing system.
  • assembly language or machine language can also be used to implement program code.
  • the mechanisms described in this application are not limited to the scope of any particular programming language. In either case, the language may be a compiled or interpreted language.
  • the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments may also be implemented as instructions carried on or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
  • instructions may be distributed over a network or through other computer-readable media.
  • machine-readable media may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy disks, optical disks, compact discs, read-only memories (CD-ROMs), magneto-optical disks, read-only memory (ROM), random-access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used to transmit information over the Internet by means of electrical, optical, acoustic or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
  • thus, machine-readable media include any type of machine-readable media suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
  • each unit/module mentioned in the device embodiments of this application is a logical unit/module; physically, a logical unit/module can be one physical unit/module, part of one physical unit/module, or a combination of multiple physical units/modules. The physical implementation of these logical units/modules is not what matters most; rather, the combination of functions they implement is the key to solving the technical problem raised by this application. In addition, in order to highlight the innovative part of this application, the above device embodiments do not introduce units/modules that are not closely related to solving the technical problem raised by this application, which does not mean that other units/modules do not exist in the above device embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

本申请涉及通信技术领域,公开了一种资源管控方法、电子设备及介质。资源管控方法包括:获取所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息;基于所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略;执行所述资源管控策略。基于上述方案,能够有效解决内存资源与CPU资源的冲突问题。此外,可以有效减少冗余管控,即可以有效避免过多查杀进程的情况发生。

Description

一种资源管控方法、电子设备及介质
本申请要求于2022年05月27日提交中国专利局、申请号为202210593502.4、申请名称为“一种资源管控方法、电子设备及介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,特别涉及一种资源管控方法、电子设备及介质。
背景技术
目前,安装在手机、电脑等电子设备上的应用软件越来越多。当用户把很多的应用程序运行启动后,每个启动的应用程序,无论是在前台(用户可见的或是与用户交互的),或者是后台程序(用户不可见的)都占用了手机上的资源。如中央处理器(central processing unit,CPU)的资源,内存资源,输入或输出(input/output,I/O)资源等。
当电子设备存在资源不足时,一般会触发电子设备的资源管控策略。实施这些资源管控策略时,虽然会通过压缩内存、进程查杀等方式考虑缓解内存压力的问题,但是上述方式通常会造成CPU资源的需求增加、CPU资源浪费或引起的后台投诉变多等情况,在一定程度上降低用户体验。
发明内容
为解决上述问题,本申请实施例提供了一种资源管控方法、电子设备及介质。
第一方面,本申请实施例提供了一种资源管控方法,用于电子设备,所述方法包括:
获取所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息;
基于所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略;
执行所述资源管控策略。
本申请实施例中提供的资源管控方法可以根据当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略,具体的,可以基于所述当前前台场景信息确定内存管理策略,能够有效解决部分内存资源与CPU资源的冲突问题。此外,可以根据前台资源需求信息以及后台资源状态信息确定后台待管控的目标对象和方式,可以有效减少冗余管控,即可以有效避免现有技术中过多查杀进程的情况发生。
在上述第一方面的一种可能的实现中,所述资源管理策略包括内存整理策略和进程管控策略;并且,
所述基于所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略;包括:
基于所述当前前台场景信息确定内存管理策略;
基于所述当前前台资源需求信息和所述当前后台资源状态信息确定进程管控策略。
在上述第一方面的一种可能的实现中,所述基于所述当前前台场景信息确定内存管理策略;包括:
当根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU敏感场景,则根据各内存整理策略占用CPU资源的大小选取待执行的内存整理任务;
当根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU不敏感场景,则允许执行全部内存整理任务。
可以理解,CPU不敏感场景为对CPU资源需求不高的场景,因此多余CPU资源可以让出来做后台内存整理等耗时的操作。因此,如果识别到当前前台场景为CPU不敏感的场景,对于后台内存整理类的任务,全部允许执行。而CPU敏感场景为对CPU资源需求较高的场景,此时需要留出较多CPU资源便于该类场景的前台运行,此时,在后台内存整理任务中选择CPU资源占用少的内存整理任务执行。
基于上述方案,在需要进行内存整理,例如压缩内存等任务时,可以根据前台场景信息确定是否执行内存整理任务或者执行哪些内存整理任务以实现内存资源与CPU资源的均衡问题,如此,可以在一定程度上解决部分内存资源与CPU资源的冲突问题。
在上述第一方面的一种可能的实现中,所述前台场景信息包括前台场景类别和/或前台场景的CPU频率负载值,并且,
所述根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU敏感场景,包括:
在所述电子设备的当前前台场景的CPU频率负载值大于设定值,或者所述当前前台场景类别对应CPU敏感场景的类别的情况下,确定所述当前前台场景为CPU敏感场景。
所述根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU不敏感场景,包括:
在所述电子设备的当前前台场景的CPU频率负载值小于等于设定值，或者所述当前前台场景类别对应CPU不敏感场景的类别的情况下，确定所述当前前台场景为CPU不敏感场景。
在上述第一方面的一种可能的实现中,所述基于所述当前前台资源需求信息和所述当前后台资源状态信息确定进程管控策略;包括:
在确定系统可用资源不满足前台资源需求时,基于所述当前前台资源需求信息确定需要进行管控以预留的目标资源;
基于所述目标资源,以及所述当前后台资源状态信息确定目标管控进程和管控方式。
在上述第一方面的一种可能的实现中,所述基于所述当前前台资源需求信息确定需要进行管控以预留的目标资源,包括:
获取所述当前前台应用的历史前台资源需求信息;
基于所述当前前台应用的历史前台资源需求信息确定所述当前前台资源需求信息;以及
将所述当前前台资源需求信息作为需要进行管控以预留的目标资源。
可以理解,目标应用的历史前台资源需求信息可以为通过对目标应用每次进入前台的 过程进行采样获取的CPU资源以及内存占用信息。
可以理解,上述基于目标资源(目标值)确定需要进行管控以预留的目标资源可以实现精准管控,即使得预留出的CPU资源和内存资源与目标值逼近或相同。
在上述第一方面的一种可能的实现中,所述系统可用资源包括系统可用CPU负载资源、系统可用内存资源和系统可用I/O资源中的至少一种;
所述当前前台资源需求信息包括当前前台CPU负载资源需求、前台内存资源需求和前台I/O资源需求中的至少一种;并且,
所述确定系统可用资源不满足前台资源需求;包括:
当所述系统可用资源中的至少一项可用资源不满足前台资源需求信息中的对应资源需求时,确定所述系统可用资源不满足前台资源需求。
在上述第一方面的一种可能的实现中,所述后台资源状态信息包括第一类资源状态信息,所述第一类资源状态信息包括内存资源状态信息、CPU资源状态信息和软资源状态信息;
所述内存资源状态信息包括各进程的内存信息、所述各进程的文件页和匿名页分布信息和所述各进程的优先级信息;
所述CPU资源状态信息包括所述各线程的执行指令数量和所述各线程的优先级;
所述软资源状态信息包括所述各进程的持锁状态。
在上述第一方面的一种可能的实现中,所述后台资源状态信息还包括第二类资源状态信息,所述第二类资源状态信息包括:后台运行的各进程与前台对应进程的关联系数;
并且,
基于所述目标资源,以及所述当前后台资源状态信息确定目标管控进程和管控方式;包括:
基于所述目标资源和所述第一类资源信息确定候选进程和管控方式;
基于所述候选线程中各进程与前台对应进程的关联系数确定目标管控进程。
可以理解,优先管控后台线程中与前台对应线程关联度较低的线程可以显著降低对前台运行情况的影响,避免由于管控了与前台对应线程关联度较高的线程,造成了前台资源需求的较大变化,导致需要重新制定管控策略的情况发生。
在上述第一方面的一种可能的实现中,所述各进程与前台对应进程的关联系数基于所述各进程与前台对应进程的负载相关系数和唤醒相关系数确定。
在上述第一方面的一种可能的实现中,所述管控方式包括对目标线程进行sniff冻结处理;
所述对目标线程进行sniff冻结处理包括:
将目标冻结进程的第一调度间隔调整为第二调度间隔;
其中第二调度间隔大于第一调度间隔。
本申请提供的sniff冻结技术,可以增加被冻结进程的调度间隔,同时被冻结进程的负载在冻结期不统计到系统CPU负载中,不会影响CPU调频。
且sniff冻结技术可以周期唤醒目标进程处理业务,而不是将进程完全冻住,相当于延长了进程执行时间,因此不再需要通知进程做冻结前的处理,降低了冻结成本,同时解决了进程频繁唤醒导致冻结失效的问题。
在上述第一方面的一种可能的实现中,确定出符合设定条件的各第一进程;
确定所述第一进程中各时间段中各后台线程的调度优先级;
将对应时间段的各后台线程按照调度优先级从高到低排序,获得优先级序列,将优先级序列中处于后设定位数的后台线程设置为低优先级完全公平调度(Lower Completely Fair Scheduler,L-CFS)分组中的线程;
其中,所述L-CFS分组中的线程置于其他不属于L-CFS分组中的线程之后进行调度。
可以理解,基于上述方案,可以根据后台资源状态信息中各线程的优先级信息识别出某一时间段优先级极低的后台线程,并将这些线程放入L-CFS分组,控制这些线程抢占前台资源,从而达成前台业务或前台线程优先调度的目的。
在上述第一方面的一种可能的实现中,
所述确定出符合设定条件的各第一进程,包括:
将后台运行的各进程中负载大于设定值,且与前台进程无相互关联的后台进程作为第一进程。
本申请实施例中,可以后台运行的进程中负载较高且与前台无关的进程进行L-CFS管控,能够有效减少对前台进程的影响,并能够在一定程度上减少功耗。
第二方面,本申请实施例提供一种可读介质,所述可读介质上存储有指令,所述指令在电子设备上执行时使机器执行上述资源管控方法。
第三方面,本申请实施例提供一种电子设备,包括:存储器,用于存储由电子设备的一个或多个处理器执行的指令,以及处理器,是电子设备的处理器之一,用于执行上述资源管控方法。
第四方面,本申请实施例提供一种计算机程序产品,包括指令,所述指令在电子设备上执行时使机器执行上述资源管控方法。
第五方面,本申请实施例提供一种资源管控装置,包括:
获取模块,用于获取所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息;
确定模块,用于基于所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略;
执行模块,用于执行所述资源管控策略。
附图说明
图1根据本申请的一些实施例,示出了一种资源管控方法的流程示意图;
图2根据本申请的一些实施例,示出了一种资源管控方法的流程示意图;
图3根据本申请的一些实施例,示出了一种资源管控方法的流程示意图;
图4根据本申请的一些实施例,示出了一种第一类后台资源状态信息的分类示意图;
图5根据本申请的一些实施例,示出了一种电子设备的硬件示意图;
图6根据本申请的一些实施例,示出了一种电子设备的软件架构示意图;
图7根据本申请的一些实施例,示出了一种资源管控方法的流程示意图;
图8根据本申请的一些实施例,示出了一种资源管控方法的流程示意图;
图9根据本申请的一些实施例,示出了一种前台场景分类示意图;
图10根据本申请的一些实施例,示出了一种触发管控的示意图;
图11根据本申请的一些实施例,示出了一种冻结方法的示意图;
图12根据本申请的一些实施例,示出了一种sniff冻结方法的示意图;
图13根据本申请的一些实施例,示出了一种冻结方法的冻结状态示意图;
图14根据本申请的一些实施例,示出了一种sniff冻结方法的冻结状态示意图;
图15根据本申请的一些实施例,示出了同一进程(任务)采用CFS管控策略和采用L-CFS管控策略的执行的示意图;
图16根据本申请的一些实施例,示出了确定需要采用L-CFS管控策略的进程的方式示意图;
图17根据本申请的一些实施例,示出了采用CFS管控策略和采用L-CFS管控策略的前台线程运行时间对比示意图。
具体实施方式
本申请的说明性实施例包括但不限于一种资源管控方法、电子设备及介质。
如前所述,现有技术中实施资源管控策略时,虽然会通过压缩内存、进程查杀等方式考虑缓解内存压力的问题,但是上述方式通常会造成CPU资源的需求增加、CPU资源浪费或引起的后台投诉变多等情况,在一定程度上降低用户体验。
例如,图1中示出了一种资源管控策略的方法示意图,具体的,图1中示出的资源管控方法为通过场景感知模块210获取终端的应用场景信息。并将场景信息以及使用体验实时反馈信息输入资源监控和决策模块中的性能模型、功耗模型和热控模型,以获取设备的性能劣化程度、功耗劣化程度和温度劣化程度三种因素。其中,使用体验实时反馈信息可以包括当前使用时的平均帧率、低帧率、流畅度、用户交互或操作等信息。决策模块根据电子设备的性能劣化程度、功耗劣化程度和温度劣化程度三种因素确定资源管控策略,并将资源管控策略发送至执行模块进行执行,以优化用户体验。
但上述方案并没有考虑CPU/内存/IO等系统资源之间存在冲突的问题。例如,当根据性能劣化程度、功耗劣化程度和温度劣化程度三种因素确定出需要压缩内存时,系统会采用内存压缩算法压缩内存。此种方法虽然在一定程度上可以缓解内存压力,但是会导致CPU资源需求增加。
再例如,还有一些资源管控策略在出现内存不足的情况时,直接采用查杀进程的方式,虽然该种方案也缓解了内存压力,但是该种方案可能会出现查杀过多内存的情况,造成CPU资源的浪费,且引起的后台投诉变多,在一定程度上降低用户体验。
为解决上述问题,本发明实施例公开了一种资源管控的方法,可以应用于电子设备。其中,本申请实施例提供的电子设备包括但不限于智能手机、车载装置、个人计算机、人工智能设备、平板、电脑、个人数字助理、智能穿戴式设备(例如智能手表或手环、智能眼镜)、智能语音设备(例如智能音箱等)、以及网络接入设备(例如网关)等。
其中,本申请实施例的资源管控方法可以如图2所示,包括:在检测到系统资源不能满足前台场景需求的情况下,触发管控策略,即识别前台资源需求信息和后台负载信息,根 据前台资源信息和后台负载信息选择合适的管控策略并执行。可以理解,本申请实施例提供的资源管控方法,还可以融合前台场景信息,具体的,本申请实施例中的资源管控方法可以如图3中所示,获取电子设备的当前前台场景信息、前台资源需求信息和后台业务资源的状态信息(或称为后台资源状态信息);根据电子设备的当前前台场景信息、前台资源需求信息和后台业务资源的状态信息确定资源管控策略;执行资源管控策略。其中,资源管控策略包括内存整理策略、进程查杀策略、资源调度策略等。
下面对根据电子设备的当前前台场景信息、前台资源需求信息和后台业务资源的状态信息确定资源管控策略的一些方式进行说明。
可以理解,本申请实施例中,当前前台场景信息可以用于确定出初步的内存整理策略,具体的,可以首先根据电子设备的当前前台场景信息确定出当前前台场景是否为CPU敏感场景。当确定出当前前台场景为CPU敏感场景,则根据各内存整理策略占用CPU资源的大小选取待执行的内存整理任务。例如,可以选取占用CPU资源占用最小的一个或多个待执行的内存整理任务进行执行。当根据电子设备的当前前台场景信息确定出当前前台场景为CPU不敏感场景,则允许执行全部内存整理任务。其中,内存整理任务可以包括内存压缩、内存合并等任务。
可以理解,CPU不敏感场景为对CPU资源需求不高的场景,因此多余CPU资源可以让出来做后台内存整理等耗时的操作。因此,如果识别到当前前台场景为CPU不敏感的场景,对于后台内存整理类的任务,全部允许执行。而CPU敏感场景为对CPU资源需求较高的场景,此时需要留出较多CPU资源便于该类场景的前台运行,此时,在后台内存整理任务中选择CPU资源占用少的内存整理任务执行。
基于上述方案,在需要进行内存整理,例如压缩内存等任务时,可以根据前台场景信息确定是否执行内存整理任务或者执行哪些内存整理任务以实现内存资源与CPU资源的均衡问题,如此,可以在一定程度上解决部分内存资源与CPU资源的冲突问题。
可以理解,上述前台场景信息包括前台场景类别和/或前台场景的CPU频率负载值。前台场景类别可以有多种,例如为导航类、游戏类和购物类等。在一些实施例中,可以在确定电子设备的当前前台场景的CPU频率负载值大于设定值,或者前台场景类别对应CPU敏感场景的类别的情况下,确定当前前台场景为CPU敏感场景。在确定电子设备的当前前台场景的CPU频率负载值小于等于设定值,或者前台场景类别对应CPU不敏感场景的类别的情况下,确定当前前台场景为CPU不敏感场景。
在一些实施例中，可以根据各前台场景对CPU算力的需求预先定义属于CPU敏感场景的类别和属于CPU不敏感场景的类别。例如，对于导航类场景，大多数用户不会一直盯着屏幕进行导航，我们只要保障前台绘制的基础CPU资源。因此导航类场景对CPU的算力要求不是很高，所以可以定义导航类场景为CPU不敏感场景，此时，其他CPU资源可以让出来做后台内存整理等耗时操作。而对于游戏类场景，对CPU算力的要求一般较高，所以可以定义游戏类场景为CPU敏感场景。
在一些实施例中,在确定系统可用资源不满足前台资源需求时,可以根据前台资源需求信息以及后台资源状态信息确定后台待管控的目标对象以及管控方式等。其中,首先可以根据前台资源需求信息确定出后台通过管控所需要预留出的目标资源。然后可以通过后台资 源状态信息确定待管控的目标对象和管控方式,以实现预留出目标资源。
其中,系统可用资源包括系统可用CPU负载资源、系统可用内存资源和系统可用I/O资源中的至少一种;当前前台资源需求信息包括当前前台CPU负载资源需求、前台内存资源需求和前台I/O资源需求中的至少一种;并且,确定系统可用资源不满足前台资源需求的方式包括:当系统可用资源中的至少一项可用资源不满足前台资源需求信息中的对应资源需求时,确定系统可用资源不满足前台资源需求。
下面首先介绍根据前台资源需求信息确定出后台通过管控所需要预留出的目标资源的方式。
具体的,电子设备可以记录各应用历史进入前台的历史启动资源需求信息,根据目标应用的历史启动资源需求信息确定出目标应用当前时刻启动进入前台时的前台资源需求信息。其中目标应用在当前时刻启动进入前台时的前台资源需求信息即为当前后台需要通过管控预留出的目标资源。
其中,目标应用的历史前台资源需求信息可以为通过对目标应用每次进入前台的过程进行采样获取的CPU资源以及内存占用信息。在一些实施例中,可以根据目标应用历史每次进入前台时的CPU资源以及内存占用信息确定CPU资源以及内存占用的平均值,作为当前目标应用进入前台所需的CPU资源和内存占用目标值。在一些实施例中还可以根据目标应用历史每次进入前台时的CPU资源以及内存占用确定该目标应用的CPU资源频点分布和内存走势,根据CPU频点分布和内存走势分析确定当前目标应用进入前台所需的CPU资源和内存占用的目标值。然后在目标应用在当前时刻进入前台时,基于目标值进行后台调度和内存整理等管控。例如,可以采用查杀设定进程或冻结设定进程等管控方式以使预留出的CPU资源和内存资源与目标值逼近或相同。如此,可以有效减少冗余管控,即可以有效避免现有技术中过多查杀进程的情况发生。
例如,假设某新闻类应用(以下简称“新闻应用”)历史进入前台的三次内存占用为386MB,387MB和388MB,则根据将上述三次进入前台的三次内存占用的平均值387MB作为当前“新闻应用”进入前台所需的内存占用目标值,即在当前时刻“新闻应用”启动即进入前台时,电子设备可以通过查杀部分进程等资源管控方式预留出尽可能接近387MB的内存。在一些实施例中,也可以根据“新闻应用”历史进入前台的三次内存占用的逐渐递增1MB的数值走势,将389MB作为当前“新闻应用”进入前台所需的内存占用目标值。
在确定了前台所需目标资源的情况下,可以通过后台资源状态信息确定待管控的目标对象或管控方式,以实现预留出目标资源。下面介绍通过后台资源状态信息确定待管控的目标对象以及管控方式的方法。
其中,图4示出了第一类后台资源状态信息的分类示意图。如图4所示第一类后台资源状态信息包括内存资源状态信息、CPU资源状态信息和软资源状态信息;内存资源状态信息包括各进程的内存信息、各进程的文件页和匿名页分布信息和各进程的优先级信息;CPU资源状态信息包括各线程执行指令数量各线程的优先级,软资源状态信息包括各线程的持锁状态。可以理解,上述资源状态信息可以为设定时间段的资源状态信息,例如最近1秒钟内的资源状态信息。
其中,统计后台业务的资源状态,可以用于明确管控对象和方式,例如根据统计的各 线程的内存占用大小,可以确定查杀或冻结哪些进程能够使得预留的内存可以接近或等同当前前台资源需求。即基于上述确定出的和表内的第一类后台资源状态信息确定出候选进程和管控方式;
可以理解,在一些实施例中,后台资源状态信息还可以包括第二类后台资源状态信息,即后台运行的各线程与前台对应线程的关联度信息。资源管控方法还可以包括还可以根据后台运行的各线程与前台对应线程的关联性对后台业务资源从上述候选进程中选择出目标进程进行管控,具体可以为获取后台运行的各线程与前台对应线程的关联度;优先选择出候选线程中关联度最低或较低的若干目标进程进行管控。可以理解,优先管控后台线程中与前台对应线程关联度较低的线程可以显著降低对前台运行情况的影响,避免由于管控了与前台对应线程关联度较高的线程,造成了前台资源需求的较大变化,导致需要重新制定管控策略的情况发生。其中,获取后台各线程与前台对应线程的关联度的方式可以如后文所述,此处不再详述。
例如,当确定将目标值387MB作为当前“新闻应用”进入前台所需的内存占用目标值,在当前时刻“新闻应用”启动即进入前台时,电子设备确定出后台线程中A线程占用内存为389MB,B线程占用内存为389MB,C线程占用内存为389MB,A线程、B线程、C线程的占用内存均与目标值387MB相近,而B线程与前台对应线程的关联度最低,此时可以选择查杀或冻结B线程以预留出对应内存。
相应的,本申请实施例中,在“新闻应用”启动后,若电子设备检测到任意应用触发的一些内存整理任务,此时可以根据前述方法识别“新闻应用”运行的场景是否为CPU敏感场景以确定内存管理策略。
综上,基于上述方案,在需要进行内存整理,例如压缩内存等任务时,可以根据前台场景信息确定是否执行内存整理任务或者执行哪些内存整理任务以实现内存资源与CPU资源的均衡问题,如此,可以在一定程度上解决部分内存资源与CPU资源的冲突问题。此外,可以根据前台资源需求信息以及后台资源状态信息确定后台待管控的目标对象和方式,可以有效减少冗余管控,即可以有效避免现有技术中过多查杀进程的情况发生。
下面在介绍本申请的详细资源管控方法之前,首先以手机10为例对本申请提及的电子设备进行简要介绍。
如5所示,手机10可以包括处理器110、电源模块140、存储器180,移动通信模块130、无线通信模块120、传感器模块190、音频模块150、摄像头170、接口模块160、按键101以及显示屏102等。
可以理解的是,本发明实施例示意的结构并不构成对手机10的具体限定。在本申请另一些实施例中,手机10可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如,可以包括中央处理器(central processing unit,CPU)、图像处理器(graphics processing unit,GPU)、数字信号处理器DSP、微处理器(micro-programmed control unit,MCU)、人工智能(artificial intelligence,AI)处理器或可编程逻辑器件(field programmable gate array,FPGA)等的处理模块或处理电路。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个 处理器中。处理器110中可以设置存储单元,用于存储指令和数据。在一些实施例中,处理器110中的存储单元为高速缓冲存储器180。
可以理解,本申请上述资源管控方法可以由处理器110执行。
电源模块140可以包括电源、电源管理部件等。电源可以为电池。电源管理部件用于管理电源的充电和电源向其他模块的供电。在一些实施例中,电源管理部件包括充电管理模块和电源管理模块。充电管理模块用于从充电器接收充电输入;电源管理模块用于连接电源,充电管理模块与处理器110。电源管理模块接收电源和/或充电管理模块的输入,为处理器110,显示屏102,摄像头170,及无线通信模块120等供电。
移动通信模块130可以包括但不限于天线、功率放大器、滤波器、低噪声放大器(low noise amplify,LNA)等。移动通信模块130可以提供应用在手机10上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块130可以由天线接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块130还可以对经调制解调处理器调制后的信号放大,经天线转为电磁波辐射出去。在一些实施例中,移动通信模块130的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块130至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块120可以包括天线,并经由天线实现对电磁波的收发。无线通信模块120可以提供应用在手机10上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(blue tooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。手机10可以通过无线通信技术与网络以及其他设备进行通信。
在一些实施例中,手机10的移动通信模块130和无线通信模块120也可以位于同一模块中。
显示屏102用于显示人机交互界面、图像、视频等。显示屏102包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。
传感器模块190可以包括接近光传感器、压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等。
音频模块150用于将数字音频信息转换成模拟音频信号输出,或者将模拟音频输入转换为数字音频信号。音频模块150还可以用于对音频信号编码和解码。在一些实施例中,音频模块150可以设置于处理器110中,或将音频模块150的部分功能模块设置于处理器110中。在一些实施例中,音频模块150可以包括扬声器、听筒、麦克风以及耳机接口。
摄像头170用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件把光信号转换成电信号,之后将电信号传递给ISP(image signal processing, 图像信号处理)转换成数字图像信号。手机10可以通过ISP,摄像头170,视频编解码器,GPU(graphic processing unit,图形处理器),显示屏102以及应用处理器等实现拍摄功能。
接口模块160包括外部存储器接口、通用串行总线(universal serial bus,USB)接口及用户标识模块(subscriber identification module,SIM)卡接口等。其中外部存储器接口可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机10的存储能力。外部存储卡通过外部存储器接口与处理器110通信,实现数据存储功能。通用串行总线接口用于手机10和其他电子设备进行通信。用户标识模块卡接口用于与安装至手机1010的SIM卡进行通信,例如读取SIM卡中存储的电话号码,或将电话号码写入SIM卡中。
在一些实施例中,手机10还包括按键101、马达以及指示器等。其中,按键101可以包括音量键、开/关机键等。马达用于使手机10产生振动效果,例如在用户的手机10被呼叫的时候产生振动,以提示用户接听手机10来电。指示器可以包括激光指示器、射频指示器、LED指示器等。
下面对本申请实施例中提及的电子设备的软件架构进行简要介绍。如图6所示,软件架构中可以包括超级大脑210、操作系统220和应用程序230。
现结合图7所示的资源管控方法对本申请实施例中图6中所示的电子设备的软件架构进行说明。
超级大脑210即为控制实现本申请资源管控方法的控制模块,可用于实时监听系统状态,并用于响应于用户的操作,例如点击交互操作、手势交互操作或人脸交互等操作时,查询前台场景。其中,此处所指的前台场景可以包括前台场景信息以及前台场景的资源需求信息,在根据前台场景信息以及前台场景的资源需求信息确定资源状态信息,在确定出目前电子设备的资源状态信息为系统可用资源不满足前台资源需求时,获取后台业务资源的状态信息;根据前台场景信息、前台资源需求信息和后台业务资源的状态信息确定出资源管控策略,即图7中所示的场景融合管控策略。并将场景融合管控策略下发给操作系统220。
确定系统可用资源不满足前台资源需求的方式包括:当系统可用资源中的至少一项可用资源不满足前台资源需求信息中的对应资源需求时,确定系统可用资源不满足前台资源需求。
其中,场景融合管控策略一般可以包括:sniff冻结或查杀策略,CFS管控策略,内存预整理策略,IO限流策略,冗余绘制管控策略,软资源竞争管控策略等。
可以理解,本申请实施例中,超级大脑210可以控制在系统高负载预警或资源不足时,先确定出场景融合管控策略,并设定场景融合管控策略先于系统默认策略执行管控。相当于在用户体验变差前,通过融合管控拦截住恶化体验;其中,系统默认策略可用于在融合管控失效或其他情况下进行应用程序230的负载默认管控,保障最差情况下电子设备的正常使用。
操作系统220用于执行上述场景融合管控策略。
应用程序230可以包括相机、导航、图库等各类应用程序。
下面结合上述提及的电子设备,对本申请实施例提供的资源管控方法进行说明。图8示出了本申请实施例一种资源管控方法的流程示意图,本申请实施例提供的资源管控方法可 以由处理器110执行。方法具体包括:
801:获取电子设备的当前前台场景信息、前台资源需求信息和后台业务资源的状态信息;
可以理解,在一些实施例中,电子设备可以在检测到系统资源不能满足前台场景需求的情况下,获取电子设备的当前前台场景信息、前台资源需求信息和后台业务资源的状态信息。
在一些实施例中,电子设备还可以实时获取前台场景信息以及前台资源需求信息,在确定系统可用资源不满足前台资源需求时,获取后台业务的资源状态信息。
系统可用资源包括系统可用CPU负载资源、系统可用内存资源和系统可用I/O资源中的至少一种;
系统可用资源可以包括系统可用CPU负载资源,可用内存资源,IO负载资源、软资源等。当前前台资源需求信息包括当前前台CPU负载资源需求、前台内存资源需求和前台I/O资源需求中的至少一种;并且,确定系统可用资源不满足前台资源需求的方式包括:当系统可用资源中的至少一项可用资源不满足前台资源需求信息中的对应资源需求时,确定系统可用资源不满足前台资源需求。
其中,前台场景信息包括前台场景类别和/或前台场景的CPU频率负载值。前台场景类别可以有多种,例如如图9所示,包括即时通信类、邮件类、运动健康类、导航类、股票类、闹钟类、阅读类、音乐类、视频类、游戏类、词典类、商务类、办公类、主题类、购物类、工具类、银行类、相机类、浏览器类、输入法类、论坛类、直播类以及其他类别等。且可以依次给上述类别分别赋值以便于识别。具体的,可以依次给上述类别分别赋值为0、1、2、3、4、5、6、7、8、9、10、11、12、13、14、15、16、17、18、19、20、21、255。
前台资源需求信息可以包括前台所需内存资源信息、CPU资源信息。
如前述图4所示,后台资源状态信息包括内存资源状态信息、CPU资源状态信息和软资源状态信息;内存资源状态信息包括各进程的内存信息、各进程的文件页和匿名页分布信息和各进程的优先级信息;CPU资源状态信息包括各线程执行指令数量各线程的优先级,软资源状态信息包括各线程的持锁状态。
可以理解,如图10所示,本申请实施例可以在CPU负载资源,可用内存资源,IO负载资源、软资源等任一系统资源不足的时候触发管控,获取当前前台场景信息、前台资源需求信息和后台业务资源的状态信息,以便于后续根据当前前台场景信息、前台资源需求信息和后台业务资源的状态信息确定资源管控策略。
802:基于电子设备的当前前台场景信息、前台资源需求信息和后台业务资源的状态信息确定资源管控策略。
下面对根据电子设备的当前前台场景信息、前台资源需求信息和后台业务资源的状态信息确定资源管控策略的一些方式进行说明。
可以理解,本申请实施例中,当前前台场景信息可以用于确定出初步的内存整理策略,具体的,可以首先根据电子设备的当前前台场景信息确定出当前前台场景是否为CPU敏感场景,当确定出当前前台场景为CPU敏感场景,则根据各内存整理策略占用CPU资源的大小选取待执行的内存整理任务。例如,可以选取占用CPU资源占用最小的一个或多个待执行的内 存整理任务进行执行。当根据电子设备的当前前台场景信息确定出当前前台场景为CPU不敏感场景,则允许执行全部内存整理任务。
可以理解,上述根据当前前台场景信息确定出初步的内存整理策略可以在系统资源不足的情况下执行,也可以在系统资源充足的情况下进行。
可以理解。CPU不敏感场景为对CPU资源需求不高的场景,因此多余CPU资源可以让出来做后台内存整理等耗时的操作。因此,如果识别到当前场景前台场景为CPU不敏感的场景,对于后台内存整理类的任务,全部允许执行。而CPU敏感场景为对CPU资源需求较高的场景,此时需要留出较多CPU资源便于该类场景的前台运行,此时,在后台内存整理任务中选择CPU资源占用少的内存整理任务执行。
基于上述方案,在需要进行内存整理,例如压缩内存等任务时,可以根据前台场景信息确定是否执行内存整理任务或者执行哪些内存整理任务以实现内存资源与CPU资源的均衡问题,如此,可以在一定程度上解决部分内存资源与CPU资源的冲突问题。
在一些实施例中,可以在确定电子设备的当前前台场景的CPU频率负载值大于设定值,或者前台场景类别对应CPU敏感场景的类别的情况下,确定当前前台场景为CPU敏感场景。在确定电子设备的当前前台场景的CPU频率负载值小于等于设定值,或者前台场景类别对应CPU不敏感场景的类别的情况下,确定当前前台场景为CPU不敏感场景。
在一些实施例中,可以根据各前台场景对CPU算力的需求预先定义属于CPU敏感场景的类别和属于CPU不敏感场景的类别。例如,对于导航类场景,大多数用户不会一直盯着屏幕进行导航,我们只要保障前台绘制的基础CPU资源。因此导航类场景对CPU的算力要求不是很高,所以可以定义导航类场景为CPU不敏感场景,此时,其他CPU资源可以让出来做后台内存整理等耗时操作。而对于游戏类场景,对CPU算力的要求一般较高,所以可以定义游戏类场景为CPU敏感场景。
在一些实施例中,可以根据前台资源需求信息以及后台资源状态信息确定后台待管控的目标对象。其中,首先可以根据前台资源需求信息确定出后台通过管控所需要预留出的目标资源。然后可以通过后台资源状态信息确定待管控的目标对象和管控方式,以实现预留出目标资源。
下面首先介绍根据前台资源需求信息确定出后台通过管控所需要预留出的目标资源的方式。
具体的,电子设备可以记录各应用历史进入前台的历史启动资源需求信息,在时或之前,根据目标应用的历史启动资源需求信息确定出目标应用当前时刻启动进入前台时的前台资源需求信息。其中目标应用当前时刻启动进入前台时的前台资源需求信息即为当前后台需要通过管控预留出的目标资源。
其中,目标应用的历史前台资源需求信息可以为通过对目标应用每次进入前台的过程进行采样获取的CPU资源以及内存占用信息。在一些实施例中,可以根据目标应用历史每次进入前台时的CPU资源以及内存占用信息确定CPU资源以及内存占用的平均值作为当前目标应用进入前台所需的CPU资源和内存占用目标值。在一些实施例中还可以根据目标应用历史每次进入前台时的CPU资源以及内存占用确定该目标应用的CPU资源频点分布和内存走势,根据CPU频点分布和内存走势确定当前目标应用进入前台所需的CPU资源和内存占用的目标 值。然后在该目标应用在当前时刻进入前台时,基于目标值进行后台调度和内存整理等管控。例如,可以查杀设定进程或冻结设定进程以使预留出的CPU资源和内存资源与目标值逼近或相同。如此,可以有效减少冗余管控,即可以有效避免现有技术中过多查杀进程的情况发生。
例如,假设“新闻应用”历史进入前台的三次内存占用为386MB,387MB和388MB,则根据将上述三次进入前台的三次内存占用的平均值387MB作为当前“新闻应用”进入前台所需的内存占用目标值,即在当前时刻新闻应用启动即进入前台时,电子设备可以通过查杀部分进程等资源管控方式预留出尽可能接近387MB的内存。在一些实施例中,也可以根据“新闻应用”历史进入前台的三次内存占用的逐渐递增1MB的数值走势,将389MB作为当前“新闻应用”进入前台所需的内存占用目标值。
在一些实施例中,还可以根据电子设备的目标应用的历史版本进入前台的资源需求确定当前时刻目标应用进入前台所需的资源需求目标值。
例如,“新闻应用”历史经历了多个版本的更新,电子设备记录了“新闻应用”历史不同版本启动时的内存占用如下表1所示:
表1:“新闻应用”历史不同版本启动时的内存情况
其中“新闻应用”7.7.8版本,7.8.8版本、7.9.2版本、7.9.6版本、8.0.0版本对应的启动内存分别为386.67MB、445.46MB、486.79MB和460.69MB,422.17MB;此时可以根据当前启动的版本确定当前时刻目标应用进入前台所需的资源需求目标值。例如,若当前启动的版本为8.0.0版本,则可以确定目标应用进入前台所需的资源需求目标值为422.17MB。
在确定了前台所需目标资源的情况下,可以通过后台资源状态信息确定待管控的目标对象和管控方式,以实现预留出目标资源。下面介绍通过后台资源状态信息确定待管控的目标对象。
其中,如前述图4所示,后台资源状态信息包括第一类资源状态信息,第一类资源状态信息可以包括内存资源状态信息、CPU资源状态信息和软资源状态信息;内存资源状态信息包括各进程的内存信息、各进程的文件页和匿名页分布信息和各进程的优先级信息;CPU资源状态信息包括各线程执行指令数量各线程的优先级,软资源状态信息包括各线程的持锁状态。
其中,统计后台业务的资源状态,可以用于明确管控对象和管控方式,例如根据统计的各线程的内存占用大小,可以确定查杀或冻结哪些进程能够使得预留的内存可以接近或等同当前前台资源需求。
可以理解,在一些实施例中,后台资源状态信息还可以包括后台运行的各线程与前台对应线程的关联度信息。资源管控方法还可以包括根据后台运行的各线程与前台对应线程的关联性对后台业务资源进行管控。
在一些实施例中,对后台业务资源进行管控的方式可以为基于所述目标资源和所述第一类资源信息确定候选进程和管控方式;基于所述候选线程中各进程与前台对应进程的关联 系数确定目标管控进程。
具体的可以获取后台运行的各候选线程与前台对应线程的关联度;将各候选线程按照各线程与前台对应线程的关联度由低到高进行排序,获取第一序列;按照第一序列中的线程顺序,对线程进行管控。即优先管控候选线程中与前台对应线程关联度较低的线程。可以理解,优先管控线程中与前台对应线程关联度较低的线程可以显著降低对前台运行情况的影响,避免由于管控了与前台对应线程关联度较高的线程,造成了前台资源需求的较大变化,导致需要重新制定管控策略的情况发生。
例如,当确定将目标值387MB作为当前“新闻应用”进入前台所需的内存占用目标值,在当前时刻“新闻应用”启动即进入前台时,且系统已无内存资源的状况下,电子设备确定出后台线程中A线程占用内存为389MB,B线程占用内存为389MB,C线程占用内存为389MB,A线程、B线程、C线程的占用内存均与目标值387MB相近,而B线程与前台对应线程的关联度最低,此时可以选择查杀或冻结B线程以预留出对应内存。即将B线程作为管控对象。
下面对获取后台各线程与前台对应线程的关联度的方式进行详细描述。
在一些实施例中,可以通过后台各线程与前台对应线程的相关系数确定关联度的高低。相关系数越高,关联度越高;相关系数越低,关联度越低。
其中后台各线程与前台对应线程的相关系数可以为后台各线程与前台对应线程的负载相关系数与唤醒相关系数的乘积。
具体的,后台各线程与前台对应线程的负载相关系数r,计算公式如下:
其中,上述计算公式以用户操作后1秒作为采样总时间,以20ms间隔周期采样。Xi为每次采样时线程X的负载,为1秒内采样线程X的平均负载;Yi为每次采样时前台UI线程Y的负载,为1秒内采样线程Y的平均负载。
后台各线程与前台对应线程的唤醒相关系数P,计算公式如下:
其中,上述计算公式以用户操作后1秒作为采样总时间,以20ms间隔周期采样。CountX为后台线程唤醒对应前台(UI)线程Y的总次数,CountY为前台线程被唤醒时再主动唤醒后台线程X的总次数。CountXi为每周期后台线程唤醒对应前台(UI)线程Y的次数,CountYi为每周期前台线程被唤醒时再主动唤醒后台线程X的次数。
可以理解,根据后台业务资源的状态信息确定当前后台资源的占用信息,当占用信息满足设定条件,则对后台业务资源进行管控。当前后台资源的占用信息包括当前各后台资源的占用对应后台总资源的各比值。设定条件包括当前各后台资源的占用对应后台总资源的各比值中存在至少一个比值大于设定值。即若后台资源占用较少或者后台组员没有任何占用,则无需对后台资源进行管控。
综上,基于上述方案,在需要进行内存整理,例如压缩内存等任务时,可以根据前台场景信息确定是否执行内存整理任务或者执行哪些内存整理任务以实现内存资源与CPU资源的均衡问题,如此,可以在一定程度上解决部分内存资源与CPU资源的冲突问题。此外,可以根据前台资源需求信息以及后台资源状态信息确定后台待管控的目标对象和方式,可以有效减少冗余管控,即可以有效避免现有技术中过多查杀进程的情况发生。
803:执行资源管控策略。
可以理解,上述确定出的资源管控策略可以包括查杀或冻结设定进程、内存整理等策略。
在一些施例中,在上述确定出需要对后台一些线程成进行冻结时,可以采用冻结技术执行线程冻结。
可以理解,一些实施例中,冻结方案包括:电子设备当接收到请求冻结的指令,会运行预冻结流程,即发送释放系统资源的指令至待冻结进程,以使得待冻结进程进入准备即可冻结状态,具体的,如图11所示,电子设备可以通过信号处理机制(do_signal)使得线程进入freeze(可冻结)状态。并同时向内核发送冻结指令,内核可以通过调用should_stop函数来判断线程是否处于freeze状态,当处于freeze状态,可以直接执行冻结。即现有技术在执行应用冻结前运行一套预冻结流程,如此,在缓解了内存压力的同时会引起CPU资源需求过大的问题。且如果进程刚冻结就被唤醒,就可能出现冻结失效的问题。
为解决上述问题,本申请实施例提供一种sniff冻结(Sniff Frozen)技术,其具体为当接收到冻结指令,将待冻结进程的当前调度间隔即第一调度间隔调整为第二调度间隔;其中第二调度间隔大于第一调度间隔。即增加待冻结进程的调度间隔。即如图12所示,当电子设备收到请求冻结的指令,会向内核发送冻结指令,内核可以通过调用should_stop函数直接执行冻结,本申请实施例中sniff冻结技术中的执行冻结即将待冻结进程的第一调度间隔调整为第二调度间隔。
例如,当电子设备确定出需要冻结准备被调度或正在运行过程中的B线程以释放系统资源,便于前台的“新闻应用”启动时,可以发送具体的冻结指令给内核,内核可以增加B线程的调度间隔。例如,B线程的当前调度间隔(即B线程两次被调度时间之间的间隔)为2秒钟,若B线程当前达到2秒的调度间隔准备被调度,此时接到了B线程的冻结指令,则可以将B线程的调度间隔增加为10秒钟。即B线程需要再等待8秒才能被调度。若B线程正在运行过程中,此时接到了B线程的冻结指令,则可以将B线程的调度间隔增加为10秒钟。即B线程停止运行,需要再等待10秒才能被调度。可以理解,当B线程不被调度,即达到了释放或预留出了系统资源的效果。
可以理解,图11中所示方案中线程被冻结线程调度状态如图13所示,即在运行的过程中,收到冻结指令后即被冻结。而本申请实施例图12中提供的冻结方案,被冻结线程的调度状态如图14所示,运行的过程中,收到冻结指令后即被增加调度时间间隔,即假的冻结,在调度时间间隔达到时,会重新运行,即周期性唤醒的状态。
基于上述方案,本申请提供的sniff冻结技术,可以增加被冻结进程的调度间隔,同时被冻结进程的负载在冻结期不统计到系统CPU负载中,不会影响CPU调频。且sniff冻结技术可以周期唤醒进程处理业务,而不是将进程完全冻住,相当于延长了进程执行时间,因 此不再需要通知进程做冻结前的处理,降低了冻结成本,同时解决了进程频繁唤醒导致冻结失效的问题。
可以理解,本申请实施例中,资源管控方法中还包括资源调度方案。例如,一些实施例中,资源调度方案一般为完全公平调度(completely fair scheduler,CFS)管控方案,具体的,CFS管控方案为通过将进程中的前台各线程和后台各线程进行优先级排序来保障前台线程优先调度,但后台线程长期得不到调度的情况下,其优先级会被提拉,偶尔也会抢占前台资源,造成前台运行不流畅的问题。
为解决CFS管控策略存在的问题,本申请实施例中的资源管控方法还提供一种低优先级完全公平调度(lower completely fair scheduler,L-CFS)管控策略,具体包括:对于需要执行L-CFS策略的进程,根据后台资源状态信息确定进程中各时间段后台线程的优先级信息,将各时间段后台线程按照优先级从高到低排序,获得优先级序列,将优先级序列中处于后设定位数的后台线程设置为L-CFS分组中的线程;并控制L-CFS分组中的线程置于其他不属于L-CFS分组中的线程之后进行调度,即只要不属于L-CFS分组中的线程待调度,L-CFS分组中的线程就会让出资源。可以理解,线程在进入L-CFS分组后,线程优先级不会改变。
例如,在当前时刻,存在不属于L-CFS分组中的线程A和属于L-CFS分组中的线程B均待调度,此时将直接调度不属于L-CFS分组中的线程A。
基于上述方案,可以根据后台资源状态信息识别出某一时间段优先级极低的后台线程,并将这些线程放入L-CFS分组,控制这些线程抢占前台资源,从而达成前台业务优先调度的目的。
图15示出了同一进程(任务)采用CFS管控策略和采用L-CFS管控策略的执行的示意图。可以理解,该进程中的线程包括界面(UI)线程、渲染(Render)线程、后台线程分组1、后台线程分组2以及通信(Binderx)线程。其中,界面(UI)线程、渲染(Render)线程以及通信(Binderx)线程均属于前台线程。如图15所示,假设通过L-CFS管控策略确定出后台线程分组1和后台线程分组2中的部分线程进入了L-CFS分组,即置于其他所有线程后执行,则后台线程1和后台线程2中的位于正常次序执行的剩余线程所占的时间将减少,则前台线程全部执行完的时间将加快。
可以理解,在一些实施例中,确定需要采用L-CFS管控策略的进程的方式可以如图16所示,当进程运行从前台切到后台后,电子设备可以将后台的多个进程中符合设定条件的进程设定为第一后台分组内的进程。然后对第一后台分组内的进程实施L-CFS管控策略,前台进程及其他不属于第一后台分组内的线程仍然采用CFS管控策略。图16示出了后台的多个进程均为符合设定条件的属于第一后台分组内的进程的情况示意图,如图16所示的情况,前台进程采用CFS管控策略,后台进程均实施L-CFS管控策略。
其中,确定符合设定条件的第一后台分组内的进程的方式包括:确定后台各进程中与前台进程无关的、且负载大于设定值的进程,将后台各进程中与前台进程无关的、且负载大于设定值的进程标记为白名单对应的进程,将白名单对应得进程标记为第一后台分组内的进程。本申请实施例中,可以后台运行的进程中负载较高且与前台无关的进程进行管控,能够有效减少对前台进程的影响,并能够在一定程度上减少功耗。
当确定了实施L-CFS管控策略的进程后,可以根据后台资源状态信息确定各进程中各 时间段后台线程的优先级信息,将各时间段后台线程按照优先级从高到低排序,获得优先级序列,将优先级序列中处于后设定位数的后台线程作为L-CFS分组中的线程。
其中,本申请实施例中采用L-CFS管控策略能够有效减少前台线程的就绪(runnable)时间,如图17所示:
假设在未使用本申请中的L-CFS管控策略技术之前,即采用CFS管控策略时,前台线程3需要等待前台线程1、前台线程2、后台线程a才能被调度,假设前台线程3需要等待即就绪(Runnable)时间为五分钟。而若后台线程a被进入了L-CFS分组,前台线程3的就绪时间就会减少后台线程a的执行时间,例如减少为三分钟,即前台线程3会提前两分钟开始运行(Running),使得前台线程3运行的结束时间可以提前两分钟,进而使得前台所有线程执行完毕的时间将会提前两分钟,即管控策略的收益为两分钟。因此,基于本申请的L-CFS管控策略,可以有效加快前台任务的完成速度。
表2展示了不同电子设备在采用本申请资源管控方法和不采用本申请资源管控方法的对比情况。
表2:
从表2中看出,在相同时间内,设备1采用本申请资源融合管控方法丢帧大于6帧的次数为132822次,不采用本申请资源管控方法丢帧大于6帧的次数为146886次,改善幅度为9.57%。设备2采用本申请资源管控方法丢帧大于6帧的次数为15865次,不采用本申请资源管控方法丢帧大于6帧的次数为16636次,改善幅度为4.63%。可以理解,丢帧大于6帧的次数越少,则说明前台资源被强占,出现应用卡顿的情况越少。如此,可以有效证明本申请提供的资源管控方法能够优化电子设备的整体性能。
综上,基于上述方案,在需要进行内存整理,例如压缩内存等任务时,可以根据前台场景信息确定是否执行内存整理任务或者执行哪些内存整理任务以实现内存资源与CPU资源的均衡问题,如此,可以在一定程度上解决部分内存资源与CPU资源的冲突问题。此外,可以根据前台资源需求信息以及后台资源状态信息确定后台待管控的目标对象和方式,可以有效减少冗余管控,即可以有效避免现有技术中过多查杀进程的情况发生。
另外,本申请中提供的Sniff冻结技术,模拟冻结流程,不需要进程进入预冻结流程,且达成冻结效果,降低了冻结成本,同时解决了进程频繁唤醒导致冻结失效的问题。
此外,本申请实施提供的L-CFS调度策略可以根据后台资源状态信息识别出某一时间段优先级极低的后台线程,并将这些线程放入L-CFS分组,控制这些线程抢占前台资源,从而达成前台业务优先调度的目的。本申请实施例中,可以后台运行的进程中负载较高且与前台无关的进程进行L-CFS管控,能够有效减少对前台进程的影响,并能够在一定程度上减少功耗。
本申请实施例提供一种资源管控装置,包括:
获取模块,用于获取所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息;
确定模块,用于基于所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略;
执行模块,用于执行所述资源管控策略。
本申请公开的机制的各实施例可以被实现在硬件、软件、固件或这些实现方法的组合中。本申请的实施例可实现为在可编程系统上执行的计算机程序或程序代码,该可编程系统包括至少一个处理器、存储系统(包括易失性和非易失性存储器和/或存储元件)、至少一个输入设备以及至少一个输出设备。
可将程序代码应用于输入指令,以执行本申请描述的各功能并生成输出信息。可以按已知方式将输出信息应用于一个或多个输出设备。为了本申请的目的,处理系统包括具有诸如例如数字信号处理器(DSP)、微控制器、专用集成电路(ASIC)或微处理器之类的处理器的任何系统。
程序代码可以用高级程序化语言或面向对象的编程语言来实现,以便与处理系统通信。在需要时,也可用汇编语言或机器语言来实现程序代码。事实上,本申请中描述的机制不限于任何特定编程语言的范围。在任一情形下,该语言可以是编译语言或解释语言。
在一些情况下,所公开的实施例可以以硬件、固件、软件或其任何组合来实现。所公开的实施例还可以被实现为由一个或多个暂时或非暂时性机器可读(例如,计算机可读)存储介质承载或存储在其上的指令,其可以由一个或多个处理器读取和执行。例如,指令可以通过网络或通过其他计算机可读介质分发。因此,机器可读介质可以包括用于以机器(例如,计算机)可读的形式存储或传输信息的任何机制,包括但不限于,软盘、光盘、光碟、只读存储器(CD-ROMs)、磁光盘、只读存储器(ROM)、随机存取存储器(RAM)、可擦除可编程只读存储器(EPROM)、电可擦除可编程只读存储器(EEPROM)、磁卡或光卡、闪存、或用于利用因特网以电、光、声或其他形式的传播信号来传输信息(例如,载波、红外信号数字信号等)的有形的机器可读存储器。因此,机器可读介质包括适合于以机器(例如,计算机)可读的形式存储或传输电子指令或信息的任何类型的机器可读介质。
在附图中,可以以特定布置和/或顺序示出一些结构或方法特征。然而,应该理解,可能不需要这样的特定布置和/或排序。而是,在一些实施例中,这些特征可以以不同于说明性附图中所示的方式和/或顺序来布置。另外,在特定图中包括结构或方法特征并不意味着暗示在所有实施例中都需要这样的特征,并且在一些实施例中,可以不包括这些特征或者可以与其他特征组合。
需要说明的是,本申请各设备实施例中提到的各单元/模块都是逻辑单元/模块,在物理上,一个逻辑单元/模块可以是一个物理单元/模块,也可以是一个物理单元/模块的一部分,还可以以多个物理单元/模块的组合实现,这些逻辑单元/模块本身的物理实现方式并不是最重要的,这些逻辑单元/模块所实现的功能的组合才是解决本申请所提出的技术问题的关键。此外,为了突出本申请的创新部分,本申请上述各设备实施例并没有将与解决本申请所提出的技术问题关系不太密切的单元/模块引入,这并不表明上述设备实施例并不存在其它的单元/模块。
需要说明的是,在本专利的示例和说明书中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
虽然通过参照本申请的某些优选实施例,已经对本申请进行了图示和描述,但本领域的普通技术人员应该明白,可以在形式上和细节上对其作各种改变,而不偏离本申请的精神和范围。

Claims (16)

  1. 一种资源管控方法,其特征在于,用于电子设备,所述方法包括:
    获取所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息;
    基于所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略;
    执行所述资源管控策略。
  2. 根据权利要求1所述的方法,其特征在于,所述资源管理策略包括内存整理策略和进程管控策略;并且,
    所述基于所述电子设备的当前前台场景信息、前台资源需求信息和后台资源状态信息确定资源管控策略;包括:
    基于所述当前前台场景信息确定内存管理策略;
    基于所述当前前台资源需求信息和所述当前后台资源状态信息确定进程管控策略。
  3. 根据权利要求2所述的方法,其特征在于,所述基于所述当前前台场景信息确定内存管理策略;包括:
    当根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU敏感场景,则根据各内存整理策略占用CPU资源的大小选取待执行的内存整理任务;
    当根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU不敏感场景,则允许执行全部内存整理任务。
  4. 根据权利要求3所述的方法,其特征在于,所述前台场景信息包括前台场景类别和/或前台场景的CPU频率负载值,并且,
    所述根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU敏感场景,包括:
    在所述电子设备的当前前台场景的CPU频率负载值大于设定值,或者所述当前前台场景类别对应CPU敏感场景的类别的情况下,确定所述当前前台场景为CPU敏感场景。
    所述根据所述电子设备的当前前台场景信息确定出当前前台场景为CPU不敏感场景,包括:
    在所述电子设备的当前前台场景的CPU频率负载值小于等于设定值，或者所述当前前台场景类别对应CPU不敏感场景的类别的情况下，确定所述当前前台场景为CPU不敏感场景。
  5. 根据权利要求2所述的方法,其特征在于,所述基于所述当前前台资源需求信息和所述当前后台资源状态信息确定进程管控策略;包括:
    在确定系统可用资源不满足前台资源需求时,基于所述当前前台资源需求信息确定需要进行管控以预留的目标资源;
    基于所述目标资源,以及所述当前后台资源状态信息确定目标管控进程和管控方式。
  6. 根据权利要求5所述的方法,其特征在于,所述基于所述当前前台资源需求信息确定需要进行管控以预留的目标资源,包括:
    获取所述当前前台应用的历史前台资源需求信息;
    基于所述当前前台应用的历史前台资源需求信息确定所述当前前台资源需求信息;以及
    将所述当前前台资源需求信息作为需要进行管控以预留的目标资源。
  7. 根据权利要求5所述的方法,其特征在于,所述系统可用资源包括系统可用CPU负载资源、系统可用内存资源和系统可用I/O资源中的至少一种;
    所述当前前台资源需求信息包括当前前台CPU负载资源需求、前台内存资源需求和前台I/O资源需求中的至少一种;并且,
    所述确定系统可用资源不满足前台资源需求;包括:
    当所述系统可用资源中的至少一项可用资源不满足前台资源需求信息中的对应资源需求时,确定所述系统可用资源不满足前台资源需求。
  8. 根据权利要求5所述的方法,其特征在于,
    所述后台资源状态信息包括第一类资源状态信息,所述第一类资源状态信息包括内存资源状态信息、CPU资源状态信息和软资源状态信息;
    所述内存资源状态信息包括各进程的内存信息、所述各进程的文件页和匿名页分布信息和所述各进程的优先级信息;
    所述CPU资源状态信息包括所述各线程的执行指令数量和所述各线程的优先级;
    所述软资源状态信息包括所述各进程的持锁状态。
  9. 根据权利要求8所述的方法,其特征在于,所述后台资源状态信息还包括第二类资源状态信息,所述第二类资源状态信息包括:后台运行的各进程与前台对应进程的关联系数;
    并且,
    基于所述目标资源,以及所述当前后台资源状态信息确定目标管控进程和管控方式;包括:
    基于所述目标资源和所述第一类资源信息确定候选进程和管控方式;
    基于所述候选线程中各进程与前台对应进程的关联系数确定目标管控进程。
  10. 根据权利要求9所述的方法,其特征在于,所述各进程与前台对应进程的关联系数基于所述各进程与前台对应进程的负载相关系数和唤醒相关系数确定。
  11. 根据权利要求5-10中任一项所述的方法,其特征在于,所述管控方式包括对目标线程进行sniff冻结处理;
    所述对目标线程进行sniff冻结处理包括:
    将目标冻结进程的第一调度间隔调整为第二调度间隔;
    其中第二调度间隔大于第一调度间隔。
  12. 根据权利要求1所述的方法,其特征在于,还包括:
    确定出符合设定条件的各第一进程;
    确定所述第一进程中各时间段中各后台线程的调度优先级;
    将对应时间段的各后台线程按照调度优先级从高到低排序,获得优先级序列,将优先级序列中处于后设定位数的后台线程设置为低优先级完全公平调度分组中的线程;
    其中,所述低优先级完全公平调度分组中的线程置于其他不属于低优先级完全公平调度分组中的线程之后进行调度。
  13. 根据权利要求12所述的方法,其特征在于,
    所述确定出符合设定条件的各第一进程,包括:
    将后台运行的各进程中负载大于设定值,且与前台进程无相互关联的后台进程作为第一进程。
  14. 一种可读介质,其特征在于,所述可读介质上存储有指令,所述指令在电子设备上执行时使机器执行权利要求1至13中任一项所述的资源管控方法。
  15. 一种电子设备,包括:存储器,用于存储由电子设备的一个或多个处理器执行的指令,以及处理器,是电子设备的处理器之一,用于执行权利要求1至13中任一项所述的资源管控方法。
  16. 一种计算机程序产品,其特征在于,包括指令,所述指令在电子设备上执行时使机器执行权利要求1至13中任一项所述的资源管控方法。
PCT/CN2023/096363 2022-05-27 2023-05-25 一种资源管控方法、电子设备及介质 WO2023227075A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210593502.4A CN117170857A (zh) 2022-05-27 2022-05-27 一种资源管控方法、电子设备及介质
CN202210593502.4 2022-05-27

Publications (1)

Publication Number Publication Date
WO2023227075A1 true WO2023227075A1 (zh) 2023-11-30

Family

ID=88918582

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/096363 WO2023227075A1 (zh) 2022-05-27 2023-05-25 一种资源管控方法、电子设备及介质

Country Status (2)

Country Link
CN (1) CN117170857A (zh)
WO (1) WO2023227075A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1205847A1 (en) * 2000-10-23 2002-05-15 Sony International (Europe) GmbH Resource conflict resolution
CN107515787A (zh) * 2017-08-31 2017-12-26 广东欧珀移动通信有限公司 资源配置方法及相关产品
CN107995357A (zh) * 2017-11-15 2018-05-04 广东欧珀移动通信有限公司 资源配置方法及装置
CN108664329A (zh) * 2018-05-10 2018-10-16 努比亚技术有限公司 一种资源配置方法、终端及计算机可读存储介质
CN111666140A (zh) * 2020-05-28 2020-09-15 北京百度网讯科技有限公司 资源调度方法、装置、设备和存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1205847A1 (en) * 2000-10-23 2002-05-15 Sony International (Europe) GmbH Resource conflict resolution
CN107515787A (zh) * 2017-08-31 2017-12-26 广东欧珀移动通信有限公司 资源配置方法及相关产品
CN107995357A (zh) * 2017-11-15 2018-05-04 广东欧珀移动通信有限公司 资源配置方法及装置
CN108664329A (zh) * 2018-05-10 2018-10-16 努比亚技术有限公司 一种资源配置方法、终端及计算机可读存储介质
CN111666140A (zh) * 2020-05-28 2020-09-15 北京百度网讯科技有限公司 资源调度方法、装置、设备和存储介质

Also Published As

Publication number Publication date
CN117170857A (zh) 2023-12-05

Similar Documents

Publication Publication Date Title
KR102106744B1 (ko) 피어 이벤트 데이터에 기초한 모바일 디바이스의 동적 조정
WO2021208627A1 (zh) 一种任务调度方法、装置及电子设备
US20220350602A1 (en) Multi-Thread Synchronization Method and Electronic Device
WO2021147396A1 (zh) 图标管理方法及智能终端
WO2019128546A1 (zh) 应用程序处理方法、电子设备、计算机可读存储介质
US20220044043A1 (en) Integrated circuit and sensor data processing method
WO2021052415A1 (zh) 资源调度方法及电子设备
CN110032266B (zh) 信息处理方法、装置、计算机设备和计算机可读存储介质
WO2021213084A1 (zh) 应用通知管理方法和电子设备
WO2021238387A1 (zh) 一种执行应用的方法及装置
CN111198757A (zh) Cpu内核调度方法、cpu内核调度装置及存储介质
CN114493470A (zh) 日程管理的方法、电子设备和计算机可读存储介质
US11995317B2 (en) Method and apparatus for adjusting memory configuration parameter
WO2023227075A1 (zh) 一种资源管控方法、电子设备及介质
US20240137870A1 (en) Power Consumption Control Method and Apparatus
CN109992360A (zh) 进程处理方法和装置、电子设备、计算机可读存储介质
CN109992309A (zh) 应用程序处理方法和装置、电子设备、计算机可读存储介质
CN116700816A (zh) 一种资源管理方法及电子设备
CN110045811B (zh) 应用程序处理方法和装置、电子设备、计算机可读存储介质
CN109992369B (zh) 应用程序处理方法和装置、电子设备、计算机可读存储介质
CN109992371A (zh) 应用程序处理方法、装置、电子设备、计算机可读存储介质
CN109992379B (zh) 应用冻结方法、装置、存储介质和终端
CN109992395A (zh) 应用冻结方法、装置、终端及计算机可读存储介质
CN114116610A (zh) 获取存储信息的方法、装置、电子设备和介质
WO2019128562A1 (zh) 应用冻结方法、装置、终端及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23811145

Country of ref document: EP

Kind code of ref document: A1