WO2019072200A1 - Resource management method and terminal device - Google Patents

Resource management method and terminal device

Info

Publication number
WO2019072200A1
WO2019072200A1 (PCT/CN2018/109753)
Authority
WO
WIPO (PCT)
Prior art keywords
application
data
time
applications
computer system
Prior art date
Application number
PCT/CN2018/109753
Other languages
English (en)
French (fr)
Inventor
陈秋林
陈寒冰
亢治
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to KR1020207011136A priority Critical patent/KR102424030B1/ko
Priority to EP18865934.6A priority patent/EP3674895A4/en
Publication of WO2019072200A1 publication Critical patent/WO2019072200A1/zh
Priority to US16/845,382 priority patent/US11693693B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • G06F18/2113Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/285Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/48Indexing scheme relating to G06F9/48
    • G06F2209/482Application
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5019Workload prediction

Definitions

  • the present application relates to the field of computer operating systems, and in particular, to a method, device, and system for ordering importance of applications deployed in an operating system and performing resource management according to importance ordering.
  • Computer operating systems, including smartphone operating systems, have as their primary task the management of resource allocation for the various applications running on them.
  • From the perspective of a computer operating system, resources include multiple types, such as processor resources in the form of time slices (for example, of a central processing unit, CPU), memory resources in the form of memory pages, and input/output (I/O) resources in the form of bandwidth.
  • When the resources that support the user's current operation are not supplied in time, the operating system appears to stutter. The core factor affecting the smoothness of the operating system is therefore the resource scheduling strategy, and whether the strategy is reasonable depends above all on identifying important and non-important applications so that resources can be allocated as reasonably as possible.
  • Applications can be divided into foreground applications, background applications, and applications that have not been launched.
  • Scenarios in which the user perceives stutter usually occur while a foreground application is being used, so the foreground application is relatively more important. Therefore, from the user's point of view, the main reason the Android system appears to stutter is the disorderly execution of applications, which leaves the resources required by the foreground application or service unguaranteed.
  • Existing methods are able to identify foreground applications and background applications, and allocate more resources to the foreground application as appropriate, but this allocation is relatively fixed.
  • the present application provides a resource management method, a terminal device to which the method is applied, and the like.
  • The method includes an application importance ranking method and a resource scheduling method, which can identify the importance of applications in the current scenario and thereby perform resource scheduling according to that importance, guaranteeing the supply of resources to important applications. This avoids system stutter to a certain extent and thereby improves the user experience.
  • the present application provides a resource management method.
  • the method can be applied to a computer system, such as a terminal device.
  • The terminal device acquires data including application timing feature data related to the current foreground application and at least one of the following pieces of real-time data: the system time of the computer system, the current state data of the computer system, and the current location data of the computer system.
  • The terminal device selects, from a plurality of machine learning models and according to at least one piece of the real-time data, a target machine learning model that matches the real-time data, where the plurality of machine learning models correspond to different application usage rules.
  • The terminal device inputs all of the acquired data into the target machine learning model, and the target machine learning model outputs an importance ranking of the plurality of applications installed on the computer system.
  • The result of the importance ranking can be one of the decision factors the terminal device uses when performing resource management. Data other than the real-time data listed above may also be used to determine the target machine learning model.
  • the application ranking provided by the present application is based on collected data related to the use of the terminal device, and the diversification of the data can also improve the accuracy of the application ranking.
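  • A minimal Python sketch of this flow is given below. The model store, the predict_use_probability interface, and the working-hours cut-off are illustrative assumptions for the sketch, not details taken from the present application:

      def rank_applications(models, real_time_data, timing_features, installed_apps):
          """Rank installed apps by predicted use probability in the current scenario."""
          # 1. Select the target model according to a piece of real-time data
          #    (here: which time period the system time falls into).
          hour = real_time_data["system_time"].hour          # a datetime.datetime
          period = "working_hours" if 9 <= hour < 18 else "off_hours"
          target_model = models[period]

          # 2. Feed all acquired data into the target model, which returns a
          #    probability that each application will be used next.
          features = {
              "timing": timing_features,                 # application timing feature data
              "state": real_time_data["state"],          # e.g. network/headset/charger state
              "location": real_time_data["location"],    # current location data
          }
          scores = {app: target_model.predict_use_probability(app, features)
                    for app in installed_apps}

          # 3. The importance ranking is the list of apps sorted by that probability.
          return sorted(installed_apps, key=lambda app: scores[app], reverse=True)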
  • the application timing feature data is used to characterize the chronological data of the plurality of applications being used.
  • The application timing feature data may include the k1 most recently used applications, the k2 most likely previous applications of the foreground application, and the k3 most likely subsequent applications, where k1, k2, and k3 are positive integers.
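  • As an illustration only, such timing features could be derived from a chronological usage log roughly as follows; the counting heuristic is an assumption of this sketch, not the patent's exact definition:

      from collections import Counter

      def timing_features(usage_log, foreground_app, k1=3, k2=2, k3=2):
          """usage_log: app identifiers in the order they were used (oldest first)."""
          recent = usage_log[-k1:]                       # k1 most recently used apps

          # Count which apps tend to precede / follow the current foreground app.
          before, after = Counter(), Counter()
          for i, app in enumerate(usage_log):
              if app == foreground_app:
                  if i > 0:
                      before[usage_log[i - 1]] += 1
                  if i + 1 < len(usage_log):
                      after[usage_log[i + 1]] += 1

          prev_apps = [a for a, _ in before.most_common(k2)]   # k2 most likely predecessors
          next_apps = [a for a, _ in after.most_common(k3)]    # k3 most likely successors
          return recent, prev_apps, next_apps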
  • The terminal device determines, according to the system time of the computer system, the time period in which the computer system is currently located, and then determines from a correspondence the target machine learning model corresponding to that time period, where the correspondence includes a plurality of time periods and a plurality of machine learning models respectively corresponding to the plurality of time periods.
  • Alternatively, the terminal device determines, according to the current location data of the computer system, the semantic location in which the computer system is currently located, and then determines from a correspondence the target machine learning model corresponding to that semantic location, where the correspondence includes a plurality of semantic locations and a plurality of machine learning models respectively corresponding to the plurality of semantic locations.
  • the above are two ways to determine the target machine learning model.
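  • A sketch of the two correspondences is shown below; the concrete time segments and semantic locations are placeholders chosen for illustration:

      # Correspondence 1: time periods -> models (boundaries are illustrative).
      TIME_PERIOD_MODELS = {
          ("00:00", "09:00"): "model_morning",
          ("09:00", "18:00"): "model_working_hours",
          ("18:00", "24:00"): "model_evening",
      }
      # Correspondence 2: semantic locations -> models.
      LOCATION_MODELS = {"home": "model_home", "company": "model_company"}

      def model_for_time(system_time):
          """Pick the model whose time period contains the current system time."""
          hhmm = system_time.strftime("%H:%M")
          for (start, end), model in TIME_PERIOD_MODELS.items():
              if start <= hhmm < end:
                  return model
          return None

      def model_for_location(semantic_location):
          """Pick the model corresponding to the current semantic location."""
          return LOCATION_MODELS.get(semantic_location)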
  • The multiple machine learning models respectively correspond to the various rules by which the user uses applications.
  • The multiple usage rules can be divided according to a single dimension, such as system time or current location data, or according to multiple dimensions.
  • the terminal device determines the target machine learning model based on at least two of the real-time data.
  • In this case the usage rule corresponding to the target machine learning model is the result of division along multiple dimensions. For example, the user's usage rules may be divided by time and location into four types: working time at the company, working time on a business trip (not at the company), non-working time at home, and non-working time at an entertainment venue (not at home). These four usage rules exhibit different characteristics, so each corresponds to its own machine learning model.
  • The terminal device determines the machine learning model that best matches the current scenario based on the real-time data.
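  • Assuming exactly the four categories listed above, a composite key over the two dimensions could select the model as follows (the category names are illustrative):

      def usage_rule(is_working_time, semantic_location):
          """Map the time and location dimensions onto one of the four usage rules."""
          if is_working_time:
              return "work_at_company" if semantic_location == "company" else "work_on_trip"
          return "off_at_home" if semantic_location == "home" else "off_at_entertainment"

      # Each rule then corresponds to its own trained machine learning model, e.g.:
      # target_model = models[usage_rule(is_working_time, semantic_location)]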
  • The terminal device may further predict, according to the application usage history, the number N of applications that the user frequently uses in the current scenario, and then determine the N top-ranked applications from the importance ranking. In this way, when performing resource management, the terminal device can treat these N applications as important applications; in some cases resources may be reserved for the N applications, or other measures may be taken to protect them.
  • For example, the terminal device determines the N applications (or fewer, or more) ranked highest in importance and reserves resources for them, or temporarily freezes the remaining applications, or creates a vip queue for each CPU, where the vip queue includes the tasks (processes or threads) of the determined applications, and every task in the vip queue is executed in preference to tasks in the CPU's other execution queues.
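  • The per-CPU vip queue idea can be sketched as follows; this is a user-level illustration of the queueing policy, not the actual kernel scheduler code:

      from collections import deque

      class CpuRunQueues:
          """A physical core's pair of ready queues: vip tasks run before all others."""
          def __init__(self):
              self.vip = deque()      # tasks of the N most important applications
              self.normal = deque()   # all other ready tasks

          def enqueue(self, task, important):
              (self.vip if important else self.normal).append(task)

          def next_task(self):
              # The vip queue is drained completely before the normal queue is touched.
              if self.vip:
                  return self.vip.popleft()
              return self.normal.popleft() if self.normal else None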
  • Ranking applications by the method provided in the present application requires collecting historical usage data of the applications. For a newly installed application, however, there may be too little historical usage data, causing it to be ranked low; this does not accurately represent the real importance of the newly installed application.
  • the present application also provides an importance ranking method for a newly installed application, which is equivalent to an important compensation for the newly installed application.
  • The terminal device ranks the importance of newly installed applications according to the weight of each newly installed application, and selects the N2 top-ranked newly installed applications, where a newly installed application is one whose time since installation on the computer system is less than a preset second threshold.
  • The terminal device can also consider the ranking of newly installed applications when performing resource management. For example, when performing resource reservation with a limited number of applications, it considers both the top-ranked applications in the importance ranking results mentioned above and the top-ranked applications in the newly-installed-application ranking results. Other types of resource management are similar.
  • the terminal device calculates a score for each newly installed application based on the usage likelihood weight and the time decay weight, the importance of the newly installed application having a higher score being higher than the importance of the newly installed application having a low score.
  • The usage likelihood weight reflects whether the newly installed application has been used recently; the time decay weight reflects the time difference between the current time and the application's installation time.
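  • The exact weights and how they are combined are not fixed here; one plausible combination, multiplying a usage-likelihood weight by an exponential time-decay weight, is sketched below purely as an assumption:

      import math

      def new_app_score(used_recently, hours_since_install, half_life_hours=72.0):
          """Score a newly installed app from the two weights described above."""
          usage_weight = 1.0 if used_recently else 0.5                      # usage likelihood weight
          decay_weight = math.exp(-hours_since_install / half_life_hours)  # time decay weight
          return usage_weight * decay_weight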
  • the present application further provides a data collection method and a method for performing model training according to the collected data, which can be used to support application ordering, resource management, and the like provided in other embodiments.
  • The terminal device collects and stores application data and related data of the computer system, where the application data includes an identifier of the application and the time when the application is used, and the related data of the computer system includes at least one of the following: the time, state data, and location data of the computer system at the moment the application is used.
  • The terminal device calculates the application timing feature data of the plurality of applications according to the application data collected and stored over a past period of time; inputs the application data, or the application data together with the related data of the computer system, into a classification model (for example, an entropy-increase model) to obtain a plurality of classifications related to the rules by which applications are used, where the rules corresponding to any two classifications are different; and then trains a machine learning model for each of the plurality of classifications, where the machine learning model is configured to implement the importance ranking of applications, and the training input includes at least one of the time when an application is used, the application timing feature data, and the related data of the computer system.
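  • A compact sketch of this offline pipeline is shown below; classify stands in for the classification model (for example the entropy-increase model mentioned above) and train for any supervised training routine, both of which are assumptions of the sketch:

      def train_models(app_records, system_records, classify, train):
          """Split collected records by usage-rule classification and train one model each."""
          # Group the collected records by the classification the classifier assigns.
          by_rule = {}
          for app_rec, sys_rec in zip(app_records, system_records):
              rule = classify(app_rec, sys_rec)
              by_rule.setdefault(rule, []).append((app_rec, sys_rec))

          # Train one importance-ranking model per classification (usage rule).
          return {rule: train(records) for rule, records in by_rule.items()}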
  • the process of model training can also be performed on the server side.
  • the terminal device sends the collected data to the server, and the server performs model training.
  • The trained model can be stored locally on the server or returned to the terminal device. If the model is stored on the server side, the terminal device can request the model from the server when ranking importance. Further, the importance ranking itself can also be performed on the server side, in which case the terminal device only needs to store the ranking result or request it from the server when needed.
  • the present application further provides a resource management method, which can be applied to a computer system, such as a terminal device.
  • a computer system such as a terminal device.
  • When the terminal device detects a specific event, it temporarily freezes some of the applications until a certain time period ends, and then unfreezes all or part of the frozen applications.
  • The specific event is an event indicating that resource demand is increased, for example an application start event, a photographing event, a gallery zoom event, a slide event, or a screen on/off event.
  • The end of the particular time period is only one of the conditions for unfreezing; another condition may be detection of the end of the particular event.
  • The terminal device implements the temporary freeze by setting a timer whose duration is set to the specific time period; this requires only small code changes.
  • The temporarily frozen applications include all background applications, or all applications that are in the background and not perceptible to the user.
  • The temporarily frozen applications include applications of low importance, where the importance of an application is obtained based on the application's historical usage, machine learning algorithms, and the current scene data of the system. Specifically, the importance of applications can be obtained according to the importance ranking method provided above, and the applications ranked low in importance are temporarily frozen.
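  • A minimal sketch of the timer-based temporary freeze is given below; freeze and unfreeze stand in for whatever mechanism the system uses to suspend and resume application processes, and are assumptions of the sketch:

      import threading

      def temporary_freeze(apps_to_freeze, freeze, unfreeze, duration_s):
          """Freeze the given apps and arm a timer that unfreezes them afterwards."""
          for app in apps_to_freeze:
              freeze(app)

          def thaw():
              for app in apps_to_freeze:
                  unfreeze(app)

          timer = threading.Timer(duration_s, thaw)   # the "specific time period"
          timer.start()
          return timer    # can be cancelled early if the specific event ends sooner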
  • the present application further provides another resource management method, which can be applied to a computer system, such as a terminal device.
  • the terminal device includes a plurality of physical cores, each of which corresponds to a first queue and a second queue, and the first queue and the second queue respectively include one or more tasks to be executed by the physical core.
  • At least one physical core performs the following method: acquiring and executing the tasks in the first queue until all tasks in the first queue have been executed, and only then acquiring and executing the tasks in the second queue.
  • the important tasks are placed in an additional queue with higher execution priority, and the physical core performs these important tasks first, thus ensuring the resource supply of important tasks.
  • real-time tasks generally do not allow moving from one physical core to another.
  • the important tasks here refer to non-real-time tasks.
  • An important task whose wait exceeds a timeout is moved to the first queue of another, idle physical core, thereby avoiding the stutter that would be caused by the important task waiting too long.
  • the tasks in the first queue include important tasks (or key tasks) and tasks that the important tasks depend on.
  • the important tasks are tasks that affect the user experience, or tasks that the user can perceive, or tasks of high importance, wherein the importance of the application is based on the historical usage of the application, the machine learning algorithm, and the current scenario of the system. Data is obtained.
  • the dependencies between tasks are, for example, data dependencies, lock dependencies, or binder service dependencies.
  • Placing important tasks and the tasks they depend on in the first queue, which has higher execution priority, can further improve the execution speed of important tasks.
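  • As an illustration, the wait-timeout migration described above could look roughly like this; the data layout (a vip and a normal deque per core, vip entries as (enqueue_time, task) pairs) and the timeout policy are assumptions of the sketch, not the kernel implementation:

      import time
      from collections import deque

      def make_cpu():
          # Each physical core has a first (vip) queue and a second (normal) queue.
          return {"vip": deque(), "normal": deque()}

      def balance_vip_tasks(cpus, wait_timeout_s):
          """Move an important task that has waited too long onto an idle core's vip queue."""
          now = time.monotonic()
          idle = [c for c in cpus if not c["vip"] and not c["normal"]]
          for cpu in cpus:
              if not cpu["vip"] or not idle:
                  continue
              enqueue_time, task = cpu["vip"][0]        # head of the vip queue
              if now - enqueue_time > wait_timeout_s:   # waited past the timeout
                  cpu["vip"].popleft()
                  idle.pop()["vip"].append((enqueue_time, task))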
  • the present application also provides a corresponding apparatus for each of the methods provided by the present application, the apparatus comprising means for implementing the various steps of the method.
  • These modules can be implemented in software, in hardware, or in a combination of software and hardware.
  • The present application further provides a terminal device including a processor and a memory, where the memory is configured to store computer readable instructions and the processor is configured to read the computer readable instructions stored in the memory to perform any one or more of the methods provided herein.
  • The present application further provides a storage medium, which may be a non-volatile storage medium, for storing computer readable instructions that, when executed by one or more processors, implement any one or more of the methods provided herein.
  • the present application further provides a computer program product comprising computer readable instructions for implementing any one or more of the methods provided herein when one or more processors execute the computer readable instructions.
  • This application does not specifically define what counts as "high" or "low"; as those skilled in the art will appreciate, different situations have different requirements.
  • the "importance" of the application is the possibility that the application is used by the user, and the greater the possibility, the higher the importance.
  • the "importance" of the application may be determined according to the current situation of resource management, such as the importance of the application may be related to whether the user perceives the application, and the like.
  • the two resource management methods provided by the present application may or may not depend on the sorting method provided by the present application.
  • the key to solving the disordered scheduling of computer operating system resources is to enable the system to sense the importance of the application in real time and accurately, and implement optimal resource management strategies according to the order of application importance to ensure that the system resources are used to the best of their ability.
  • The operating system should take the user's perspective, fully identify the user's usage needs, and supply resources as needed: for applications the user is currently using, fully guarantee resources; for applications the user is about to use, prepare in advance; and for applications the user is currently least likely to use, fully reclaim resources, such as applications that invalidly self-start or are started by association.
  • The application ranking method provided by the present application collects various information of a computer device (for example, a smart terminal) in real time, separately trains multiple machine learning models under different classifications related to application usage rules, and, when ranking importance, selects the machine learning model that best matches the user's current usage rule to predict the importance of applications in real time, which improves the recognition accuracy of application importance.
  • FIG. 1 is a schematic diagram of a logical structure of a terminal device
  • FIG. 2 is a schematic diagram of a logical structure of an operating system deployed in a terminal device
  • FIG. 3 is a schematic diagram showing the logical structure of some modules of the sensing management device
  • FIG. 4 is a schematic diagram of a resource management method
  • FIG. 5 is a schematic flowchart of a data collection method in a resource management method
  • FIG. 6 is a schematic flowchart of a model training method in a resource management method
  • FIG. 7 is a schematic diagram of the principle and effect of dividing an application usage rule according to one or more dimensions
  • FIG. 8 is a schematic flowchart of applying a real-time sorting method in a resource management method
  • FIG. 9 is a schematic flowchart of a temporary freezing method in a resource management method
  • FIG. 10 is an exemplary diagram of a first state of a ready queue
  • FIG. 11 is an exemplary diagram of a second state of a ready queue
  • FIG. 12 is a diagram showing an example of the two queues corresponding to each CPU
  • FIG. 13 is a schematic flow chart of a task moving method
  • FIG. 14 is a diagram showing an example of queue changes involved in task movement.
  • Operating system: a computer program that manages computer hardware and software resources, and the core and cornerstone of a computer system.
  • the operating system needs to handle basic tasks such as managing and configuring memory, prioritizing system resource supply and demand, controlling input and output devices, operating the network, and managing file systems.
  • the operating system also provides an operator interface that allows the user to interact with the system.
  • The types of operating systems are very diverse, and the operating systems installed on different machines can range from simple to complex, from embedded systems for mobile devices to large operating systems for supercomputers. Operating system manufacturers do not fully agree on what an operating system covers; for example, some operating systems integrate a graphical user interface (GUI), while others use only a command line interface and treat the GUI as a non-essential application.
  • the terminal operating system is generally considered to be an operating system running on a terminal such as a mobile phone, a tablet computer, or a sales terminal, such as the current mainstream Android or iOS.
  • Application: also called an app, a computer program designed to implement a set of associated functions, tasks, or activities for the user.
  • Applications are deployed on the operating system. They can be shipped together with the operating system software, such as system-level applications (or system services), or installed independently, such as today's common word processing applications (for example, a Word application), web browser applications, multimedia playback applications, game applications, and so on.
  • System resources In this application, the resources within the computer system include, but are not limited to, any one or more of memory resources, processing resources, and I/O resources.
  • The management of resources can be implemented in many ways, such as reclaiming resources by shutting down, freezing, or compressing some applications, reserving resources by denying application startup, or preparing resources for an application by preloading it.
  • the foreground application is short for applications running in the foreground.
  • the background application is the abbreviation of the application running in the background.
  • For example, the Word application and the web browser application are both currently launched, but the user is currently writing with the Word application, so the Word application is the foreground application and the browser application is a background application; some versions of the operating system manage these two kinds of applications separately through two lists.
  • When an application switches between foreground and background, a foreground/background switching event is triggered, and the system can sense the application's foreground/background switch by monitoring such events.
  • the background application is divided into a background non-aware application and a background-aware application.
  • the background-aware application refers to some applications that can be perceived by the user even if they are running in the background, such as a music playing application or a navigation application. Even in the background, users can still hear the sound of music or navigation.
  • a pre-application of an application is an application that is used before the application is used (which can be understood as being switched to the foreground), and a subsequent application of an application is an application that is used after the application is used.
  • The use of an application means that the application is launched or switches from the background to the foreground; an application launch can also be understood as the application going from the closed state to the foreground.
  • Application timing feature data: data used to characterize the chronological order in which multiple applications are used, such as which applications were recently used, which application is likely to be the previous application of a given application, or which application is likely to be its subsequent application.
  • System status data or simply status data, used to indicate information about the state of a computer device or operating system itself. More specifically, it can be understood as status data of a component or an external component built in the device, such as a network connection state, or a connection state of an external device such as a headset or a charging cable.
  • Location data is a generalized concept: any information representing a location can be considered location data, such as latitude and longitude. Location data such as latitude and longitude that has been given a practical meaning, such as "home", "company", or "entertainment venue", is semantic location data. Semantic location data is also a type of location data.
  • Location data and system status data can also be uniformly understood as scene data of the device.
  • Application Importance The probability that an application is being used, or the likelihood that an application will be switched to the foreground.
  • the ordering of application importance in this application is the ordering of the probability of application being used.
  • the basis of this ordering is the prediction of the application usage probability according to the application usage history and some other information.
  • the method provided by the present application is mainly applied to a terminal device (usually a mobile terminal), which may also be called a user equipment (UE), a mobile station (MS), a mobile terminal, etc.
  • the terminal may have the capability of communicating with one or more core networks via a radio access network (ran), for example, the terminal may be a mobile phone (or “cellular” phone), Or a computer with a mobile nature, for example, the terminal can also be a portable, pocket-sized, handheld, computer-integrated or in-vehicle mobile device.
  • the terminal device can be a cellular telephone, a smart phone, a laptop computer, a digital broadcast terminal, a personal digital assistant, a portable multimedia player, a navigation system, and the like.
  • the method provided by any embodiment of the present application may also be applied to a fixed terminal, such as a personal computer, a point of sale (POS), an automatic teller machine, etc.; or may also be applied to A non-terminal type computer system such as a server.
  • the terminal device 100 includes a wireless communication module 110, a sensor 120, a user input module 130, an output module 140, a processor 150, an audio and video input module 160, an interface module 170, a memory 180, and a power supply 190.
  • the wireless communication module 110 can include at least one module that enables wireless communication between the terminal device 100 and a wireless communication system or between the terminal device 100 and a network in which the terminal device 100 is located.
  • the wireless communication module 110 may include a broadcast receiving module 115, a mobile communication module 111, a wireless internet module 112, a local area communication module 113, and a location (or positioning) information module 114.
  • the broadcast receiving module 115 may receive a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast associated information, or may be a server that receives pre-generated broadcast signals and/or broadcast associated information and transmits them to the terminal device 100.
  • the broadcast signal may include not only television broadcast signals, radio broadcast signals, and data broadcast signals, but also signals in the form of a combination of television broadcast signals and radio broadcast signals.
  • the broadcast associated information may be information about a broadcast channel, a broadcast program, or a broadcast service provider, and may even be provided through a mobile communication network.
  • the broadcast associated information may be received by the mobile communication module 111.
  • Broadcast related information can exist in many forms.
  • For example, the broadcast related information may exist in the form of an electronic program guide (EPG) of a digital multimedia broadcasting (DMB) system, or an electronic service guide (ESG) of a digital video broadcast-handheld (DVB-H) system.
  • The broadcast receiving module 115 can receive broadcast signals using various broadcast systems. More specifically, the broadcast receiving module 115 can use, for example, digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), and media forward link only (MediaFLO) systems.
  • the broadcast receiving module 115 can receive a signal from a broadcast system that provides a broadcast signal other than the above-described digital broadcast system.
  • the broadcast signal and/or broadcast associated information received through the broadcast receiving module 115 may be stored in the memory 180.
  • the mobile communication module 111 may transmit a radio signal to at least one of a base station, an external terminal, and a server on the mobile communication network, or may receive a radio signal from at least one of them.
  • the signals may include voice call signals, video telephony call signals, and data in a variety of formats.
  • the wireless internet module 112 may correspond to a module for wireless access and may be included in the terminal device 100 or externally connected to the terminal device 100.
  • Wireless LAN (WLAN or Wi-Fi), worldwide interoperability for microwave access (WiMAX), and/or high speed downlink packet access (HSDPA) may be used as the wireless internet technology.
  • the local area communication module 113 may correspond to a module for local area communication.
  • Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra wide band (UWB), and/or ZigBee may be used as the local area communication technology.
  • the location information module 114 can confirm or obtain the location of the mobile terminal 100.
  • the location information module 114 can obtain location information by using a global navigation satellite system (GNSS).
  • A GNSS is a radio navigation satellite system in which satellites rotate around the earth and transmit reference signals to a predetermined type of radio navigation receiver, so that the receiver can determine its position on or near the surface of the earth.
  • GNSS can include the US global positioning system (GPS), the European Galileo system, the Russian global orbiting navigation satellite system, the Chinese compass system, and the Japanese quasi-zenith satellite system.
  • the GPS module is a representative example of the location information module 114.
  • The GPS module 114 may calculate information about the distance between a point or object and at least three satellites, and information about the time when the distance information was obtained, and may apply triangulation to the obtained distance information to obtain three-dimensional position information about the point or object in terms of longitude, latitude, and altitude at a predetermined time. It is also possible to calculate position and time information using three satellites and to correct the calculated position and time information using another satellite. Additionally, the GPS module 114 can continuously calculate the current location in real time and use the location information to calculate speed information.
  • the sensor 120 may sense the current state of the terminal device 100, such as an open/closed state of the terminal device 100, a location of the terminal device 100, whether the user is in contact with the terminal device 100, a direction of the terminal device 100, and an acceleration/deceleration of the terminal device 100. And the sensor 120 may generate a sensing signal for controlling the operation of the terminal device 100. For example, in the case of a slide phone, sensor 120 can sense whether the slide phone is open or closed. Additionally, sensor 120 can sense whether power supply 190 is powered and/or whether interface unit 170 is connected to an external device. The sensor 120 may specifically include an attitude detecting sensor, a proximity sensor, and the like.
  • the user input module 130 is configured to receive input digital information, character information or contact touch operation/contactless gesture, and receive signal input related to user setting and function control of the terminal device 100.
  • The touch panel 131, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 131 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 131 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends the coordinates to the processor 150; it can also receive commands from the processor 150 and execute them. For example, the user clicks an application icon on the touch panel 131 with a finger; the touch detection device detects the signal brought by the click and transmits it to the touch controller; the touch controller converts the signal into coordinates and sends them to the processor 150, which performs an open operation on the application based on the coordinates and the type of the signal (click or double click).
  • the touch panel 131 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input device 130 may further include other input devices 132.
  • the other input devices 132 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, One or more of a dome switch, a jog wheel, and a jog switch.
  • the output module 140 includes a display panel 141 for displaying information input by a user, information provided to the user, various menu interfaces of the terminal device 100, and the like.
  • the display panel 141 can be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED).
  • the touch panel 131 can cover the display panel 141 to form a touch display screen.
  • the output module 140 may further include an audio output module 142, an alarm 143, a haptic module 144, and the like.
  • the audio output module 142 may output audio data received from the wireless communication unit module 110 in a call signal receiving mode, a telephone call mode or a recording mode, a voice recognition mode, and a broadcast receiving mode, or output audio data stored in the memory 180.
  • the audio output module 142 may output an audio signal related to a function performed in the terminal device 100, such as a call signal reception tone, a message reception tone.
  • the audio output module 142 can include a receiver, a speaker, a buzzer, and the like.
  • the audio output module 142 can output sound through the headphone jack. The user can listen to the sound by connecting the headset to the headphone jack.
  • the alarm 143 may output a signal for indicating the occurrence of an event of the terminal device 100. For example, an alert can be generated when a call signal is received, a message is received, a key signal is input, or a touch is input.
  • The alarm 143 can also output signals in a form other than a video or audio signal, for example a signal indicating the occurrence of an event by vibration.
  • the haptic module 144 can generate various tactile effects that the user can feel.
  • An example of a haptic effect is vibration.
  • the intensity and/or mode of the vibration generated by the haptic module 144 can also be controlled. For example, different vibrations may be output in combination or sequentially.
  • In addition to vibration, the haptic module 144 can generate one or more of various other haptic effects, such as the stimulating effect of a pin array moving vertically relative to the skin surface, an air spray or air suction effect produced through a nozzle or suction hole, an effect of rubbing the skin, a stimulating effect of electrode contact, a stimulating effect using electrostatic force, and the effect of reproducing warmth or coldness using a heat-absorbing or heat-releasing element.
  • the haptic module 144 can not only transmit the haptic effect by direct contact, but also allow the user to feel the haptic effect through the muscle sense of the user's finger or arm.
  • the terminal device 100 can include
  • Processor 150 may include one or more processors.
  • processor 150 may include one or more central processors or include a central processing unit and a graphics processor.
  • the processor 150 includes a plurality of processors, the plurality of processors may be integrated on the same chip, or may each be a separate chip.
  • a processor can include one or more physical cores, with the physical core being the smallest processing module.
  • the audio and video input module 160 is configured to input an audio signal or a video signal.
  • the audio and video input module 160 may include a camera 161 and a microphone 162.
  • the camera 161 can process image frames of still images or moving images obtained by the image sensor in the video telephony mode or the photographing mode.
  • the processed image frame can be displayed on the display panel 141.
  • the image frames processed by the camera 161 may be stored in the memory 180 or may be transmitted to the external device through the wireless communication module 110.
  • the terminal device 100 may also include a plurality of cameras 161.
  • the microphone 162 can receive an external audio signal in a call mode, a recording mode, or a voice recognition mode, and process the received audio signal into electronic audio data. The audio data can then be converted into a form that can be transmitted to the mobile communication base station by the mobile communication module 111 and output in a call mode.
  • the microphone 162 can employ various noise cancellation algorithms (or noise cancellation algorithms) to eliminate or reduce the noise generated when an external audio signal is received.
  • the interface module 170 can function as a path to an external device connected to the terminal device 100.
  • the interface module 170 may receive data or power from an external device and transmit the data or power to an internal component of the terminal device 100, or transmit data of the terminal device 100 to the external device.
  • The interface module 170 can include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device having a subscriber identity module, an audio I/O port, a video I/O port, and/or an earphone port.
  • the interface module 170 may also be connected to a subscriber identity module, which is a chip that stores information for verifying the rights to use the terminal device 100.
  • the identification device including the subscriber identity module can be manufactured in the form of a smart card. Thus, the identification device can be connected to the terminal device 100 via the interface module 170.
  • The interface module 170 may also be a path for providing power from an external cradle to the terminal device 100 when the terminal device 100 is connected to the cradle, or a path for transmitting various command signals input by the user through the cradle to the terminal device 100.
  • Various command signals or power input from the cradle can be used as signals for confirming whether the terminal device 100 is correctly mounted in the cradle.
  • the memory 180 stores a computer program including an operating system program 182, an application 181, and the like.
  • Typical operating systems include Microsoft's Windows and Apple's macOS for desktops or laptops, and systems such as the Linux-based Android system developed by Google for mobile terminals.
  • the processor 150 is configured to read a computer program in the memory 180 and then execute a computer program defined method, for example, the processor 150 reads the operating system program 182 to run an operating system on the terminal device 100 and implement various functions of the operating system. Or reading one or more applications 181 to run the application on the terminal device.
  • The operating system program 182 includes a computer program that can implement the method provided by any embodiment of the present application, so that after the processor 150 reads the operating system program 182 and runs the operating system, the operating system can be provided with the application real-time ranking and/or resource management functions provided by the present application.
  • In addition to the computer programs, the memory 180 also stores other data 183, such as the application information obtained by the present application, the trained machine learning models, and the results of real-time ranking; it also temporarily stores input/output data (for example, phone book data, messages, still images, and/or moving images), as well as data on the various vibration and sound modes output when a touch input is applied to the touch screen.
  • The memory 180 may be one or more of the following types: flash memory, hard disk type memory, micro multimedia card type memory, card memory (such as SD or XD memory), random access memory (RAM), static random access memory (SRAM), read only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, or an optical disk.
  • the memory 180 may also be a network storage device on the Internet, and the terminal device 100 may perform operations such as updating or reading on the memory 180 on the Internet.
  • the power supply 190 can receive external power and internal power under the control of the processor 150 and provide power required for operation of various components of the terminal device 100.
  • The connection relationship of the modules shown is only an example, and the method provided by any embodiment of the present application can also be applied to terminal devices with other connection modes, for example where all modules are connected through a bus.
  • the method provided by the present application can be implemented by hardware or software.
  • For a hardware implementation, at least one of an application specific integrated circuit (ASIC), a digital signal processor (DSP), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, and/or a microprocessor may be used to implement an embodiment of the present application.
  • implementations such as procedures and functions may be implemented using software modules that perform at least one function and operation.
  • the software modules can be implemented in a software program written in any suitable software language.
  • the software program can be stored in the memory 180 and read and executed by the processor 150.
  • FIG. 2 is an implementation of the method provided by the present application by taking an Android system as an example.
  • a typical Android system 200 includes an application 210, an application framework 220, a system runtime and an Android runtime environment 230, and a kernel 240.
  • Android is based on Linux, so this kernel 240 is the Linux kernel.
  • The application layer 210 includes various applications such as a browser, a media player, a game application, and a word processing application; some are built-in applications, and some are applications the user installs as needed. This application focuses on how users use these applications, so as to guarantee the supply of resources to the applications the user is most likely to use and to avoid stutter from the user's perspective.
  • the application framework 220 includes an Android service 224.
  • the Android service includes various system services provided by the Android system for use by other modules. For example, power manager 224-a, notification manager 224-b, connection manager 224-c, package manager 224-d, location manager 224-e, wired access manager 224-g, Bluetooth device 224-h, View system 224-i, etc. These managers can refer to the module implementation provided by the Android system in the prior art, which is not described in detail in this application.
  • the system runtime and Android runtime environment 230 includes a system runtime 231 and an Android runtime environment 232.
  • the Android runtime environment 232 includes a core library and a Dalvik virtual machine.
  • This core library provides most of the functionality of the Java programming language core library. Every Android application runs in its own process with its own Dalvik virtual machine instance. Dalvik is designed so that a device can run multiple virtual machine instances efficiently at the same time.
  • the Dalvik virtual machine relies on some features of the Linux kernel 240, such as the threading mechanism and the underlying memory management mechanism.
  • Linux kernel 240 acts as an abstraction layer between hardware and software.
  • Android has also made some modifications to the Linux kernel, mainly involving two parts of the modification:
  • Binder (IPC) driver: provides efficient inter-process communication. Although the Linux kernel itself already provides such functionality, many services in the Android system need to use it, and Android implements its own mechanism for various reasons.
  • Power management mainly used for power saving. Because the Android system is designed for mobile terminals, such as smart phones, low power consumption is an important goal.
  • the Android system is stored in the memory 180 in the form of software code, and the processor 150 reads and executes the software code to implement various functions provided by the system on the terminal device 100, including the functions provided by the embodiment.
  • the application framework 220 of the embodiment further includes a sensing management device, and the sensing management device includes a data collecting module 221, an application sorting module 222, and a decision executing module 223.
  • sensing management device can also be provided to other components as a system service of Android.
  • Modules 221-223 can be implemented as three separate system services, or they can be combined or further subdivided.
  • the data collection module 221 collects scene data of the terminal device 100, and the scene data includes location data (such as GPS location data) where the device is located, status data of components built in the device, or external components.
  • the status data may be, for example, a display panel layout status, a network connection status, a headset or charging line connection status, a camera/audio/video component/sensor status, and the like.
  • Various components of the terminal device 100 can be understood with reference to FIG.
  • the "components" here include both hardware and software.
  • the data collection module 221 also collects application data related to the use of the application, such as the type of application, the name of the application, the time the application was used, and the like. The collected data is stored directly or processed and stored in memory 180 or sent to other modules.
  • the application ranking module 222 determines the importance ranking results of all applications in the real-time scenario according to the data collected by the data collection module 221 and the machine learning model. Before the real-time importance ranking is implemented, the application ranking module 222 further performs training according to the historical data collected by the data collection module 221 to obtain the above-described machine learning model.
  • The decision execution module 223 formulates decisions on how to manage system resources according to the real-time importance ranking of applications output by the application ranking module 222, and directly executes, or calls other modules to execute, the corresponding resource management measures. It should be understood that, in a particular implementation, not all resource management decisions made by the decision execution module 223 require the real-time importance ranking of applications.
  • the modules 221-223 may also be implemented in the system runtime and the Android runtime environment 230 or the kernel 240, or partially in the application framework 220, and partially implemented in other layers.
  • the application ranking module 222 includes two major sub-modules: a model training module 310 and a real-time sequencing module 320.
  • the model training module 310 trains the model applied to the application real-time sorting according to the historical data collected by the data collecting module 221, and stores it in the memory 180 for the real-time sorting module 320 to perform real-time application sorting. Specifically, the model training module 310 mainly trains three models: a dimension division model, a ranking model, and an N-value prediction model.
  • the "model” in this application is a generalized concept, and parameters, formulas, algorithms or correspondences can be considered as models.
  • the dimension partitioning model is used to classify the application usage rules, which may be one-dimensional classification or multi-dimensional classification. "Dimensions" are the basis for using regular classifications, such as time, space, and so on. Taking time as an example, based on 24 hours a day, the application usage rules are divided into two categories: working time period and non-working time period. Specifically, the current system time and the dimension division model obtained by the training determine whether the terminal currently belongs to the working time period or the non-working time period. What is actually reflected here is whether the user is currently in a working state or a non-working state. In these two states, the user's usage rules of the application exhibit different characteristics. Further, the dimensions may also be multidimensional, such as a time dimension and a location dimension.
  • the working time period is divided into company office and business trip; non-working time period is divided into home and travel.
  • the examples herein use semantic descriptions, such as "office”, “outbound”, etc., it should be understood that the specific implementation is not limited thereto.
  • the N-value prediction model is used to determine the number of applications that users frequently use under a particular category.
  • the value of N is a positive integer. Taking the time dimension classification as an example, the N value reflects the number of applications that the user frequently uses in a specific time period.
  • the real-time sorting module 320 is a real-time processing module that can determine the real-time important ordering of the application according to the data collected by the data collecting module 221 in real time and the model trained by the model training module 310.
  • the real-time ranking module 320 performs three functions using the functionality provided by the model training module 310: dimension partitioning prediction 321, application ranking prediction 322, and N value acquisition 323.
  • each function within the model training module 310 and the real-time sequencing module 320 can be considered a functional module or unit.
  • the data collection module 221 may perform processing such as cleaning/processing on all or part of the collected data, and then store it in the memory 180 or provide it to the real-time sequencing module 320.
  • FIG. 4 is a schematic diagram of the overall scheme of the application ranking method and the resource management method provided by this embodiment.
  • the figure shows four main method flows, including the first phase of data collection, the second phase of model training, the third phase of application sequencing, and the fourth phase of resource management.
  • In order to give a global overview of the scheme, the description follows the above order; it should be understood, however, that in a specific implementation these four phases are not executed entirely serially.
  • the first stage is the process of data collection.
  • the collected data is mainly the data used by the user and the system status data and location data of the terminal device.
  • This information reflects the user's application usage patterns and is later used to predict the probability that the user will use each application.
  • the collected data will be stored directly, and some may need to be processed before storage (1).
  • the execution entity of the first stage is the data acquisition module 221.
  • the second phase of model training is performed.
  • This application uses a machine learning training method, which generally requires a certain amount of historical data as input (2); how long data must be accumulated before the first training is started is not limited in this application.
  • the training process of the machine learning model is the process of model building, which is essentially a process of parameter learning and tuning. Training the model is a learning update of the model parameters.
  • the trained machine learning model or model parameters are stored for use in the third stage of sorting.
  • Prior to training the ranking models, the collected historical data is divided into multiple categories from one dimension or multiple dimensions based on a machine learning algorithm or other algorithms; the application usage patterns in these categories exhibit different characteristics, and a separate ranking model is trained for each of the categories. Therefore, an important output of the second stage is multiple classifications and the multiple machine learning models (or model parameters) corresponding to the classifications (3).
  • The execution entity of the second stage is the model training module 310. Further, the model training module 310 also calculates, according to the N-value prediction algorithm, the number N of applications frequently used by the user under a specific classification and stores it for subsequent resource management. The N-value prediction process can also be placed in the third or fourth stage.
  • the terminal device can start the training process periodically, or start the training process at a specified time point, or start the training process according to the simple mismatch principle of collecting data.
  • the third stage application ordering can be performed.
  • The third stage ranks the applications based on the real-time data (4) and the machine learning model (5) output by the second stage.
  • some historical data (5) is needed.
  • the real-time data includes one or more of the current application collected in real time, the current application time, the current time of the terminal device (ie, the current system time), the current state data of the terminal device, and the current location data of the terminal device. And some data, such as the time sequence in which multiple applications are used, also requires the usage time of multiple applications recorded in the historical data to be calculated.
  • the execution entity of the third stage is the real-time sorting module 320, which outputs the sorting result of all the applications installed on the terminal device (6).
  • the start of the third stage application ordering can be periodic or event-triggered.
  • the triggering event may be application switching, application installation, application uninstallation, terminal device state change, etc., which may cause changes in application importance to the user.
  • the sorted results can be stored for use in the fourth phase of resource management.
  • the trigger event may also be a request event triggered by the fourth phase resource management, that is, the application sorting is performed under the call of the resource management.
  • In the fourth stage, resource management can access and use the latest ranking results stored in the memory (7), or it can invoke (8) the third-stage application ranking as required to obtain the ranking result.
  • the resources are then managed according to the order of importance of the application.
  • the data collection module 221 sets an event monitoring mechanism (S501) for monitoring whether a key event occurs. If a key event is detected (S502), the data collection process is triggered (S503).
  • the collected data can be stored directly (S505), or it can be stored after cleaning/processing (S504).
  • Data cleaning refers to the process of re-examining and verifying data by a computer to remove duplicate information, correct existing errors, and provide data consistency.
  • Data processing refers to the computer's processing of data types, formats, etc., or mathematical transformation of data.
  • The data collection module 221 collects the scene data of the terminal device 100 and the application data related to application use; the collected data is stored in the memory 180 after being cleaned/processed and is used as historical data for the model training module 310 to train and generate the models.
  • the data collection module 221 monitors the status of the terminal device, application usage, and the like in real time, and collects data and stores when the condition is met.
  • The data collection module 221 monitors and caches user behavior states (which can also be understood as terminal device behavior states, such as moving or stationary), application usage states, system states, and location state changes in real time. When a foreground/background application switch or another type of key event occurs, information such as the package name of the application switched to the foreground, the current time, and the latest values of the above states are recorded in a persistent storage medium.
  • the data collection module 221 also collects data in real time when the sorting is needed, and the collected data becomes the input of the real-time sorting module 320 as real-time data, so that the real-time sorting module 320 sorts the applications in real time.
  • the data collected by the data acquisition module 221 and the acquisition method are as shown in the following table. It should be understood that, in some embodiments, some data may be reduced or added as needed, and this embodiment is merely an example and is not intended to be limited to the data in this table.
  • the Android system provides a function interface to obtain some data.
  • Table 1 describes a module (such as a PackageManager) or a function (for example, getActiveNotifications) involved in a method for obtaining the data.
  • Those skilled in the art may adopt other acquisition methods as required, and this application is not limited in this respect.
  • the modules and functions called in other systems may be different, and this application is not limited to the Android system.
  • the foreground application in Table 1 refers to the application currently running in the foreground, which is considered to be an application that is currently used by the user and is relatively important to the user.
  • the above data collection process is triggered when a critical event is detected.
  • The key events herein may include one or more of the following: a foreground/background switching event, an application installation event, an application uninstallation event, a notification event caused by a change in the system state, and a notification event caused by a change in the geographic location information.
  • the change of the geographical location information may also be to first identify whether the semantic geographic location changes, for example, whether it changes from the home to the office, and if the semantic geographic location changes, the data collection is started.
  • the key events are not limited to the above examples. In summary, the key events are mainly determined by the information that needs to be collected. If the information that needs to be collected may be changed, it may be necessary to start the above data collection process.
  • If data collection is triggered by a foreground/background switching event, the collected current system time can be regarded as the time at which the current foreground application was switched to the foreground.
  • If data collection is triggered by another event, the time when the application switched to the foreground can be obtained from a timestamp recorded by the current foreground application at the moment of switching; this timestamp is the system time recorded when the application was switched to the foreground.
  • time when the application switches to the foreground in the present application is sometimes referred to as "the time when the application is used.”
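  • To make the shape of one collected sample concrete, the following is a minimal Python sketch of the kind of record that could be written to persistent storage when a key event such as a foreground switch is detected. The field names are illustrative assumptions, not the actual fields of Table 1 (which is not reproduced here).

```python
from dataclasses import dataclass, asdict
import time

@dataclass
class UsageRecord:
    """One sample captured when a key event (e.g. a foreground switch) occurs.
    Field names are illustrative; the real fields follow Table 1."""
    package_name: str          # package name of the app switched to the foreground
    switch_time: float         # current system time, used as "the time the app is used"
    network_connected: bool    # system state data
    headset_connected: bool
    charging: bool
    semantic_location: str     # e.g. "home" or "office", derived from GPS clustering

def on_key_event(package_name, state):
    # Record the current system time as the foreground-switch timestamp.
    record = UsageRecord(package_name=package_name,
                         switch_time=time.time(),
                         **state)
    return asdict(record)      # stored directly, or cleaned/processed first

sample = on_key_event("com.example.mail",
                      dict(network_connected=True, headset_connected=False,
                           charging=True, semantic_location="office"))
print(sample)
```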
  • the collected data can be further divided into two categories from the final usage: 1) directly usable data, such as the Bluetooth/network/headset/charge line connection status, notification list status, and application type in Table 1. These data can be directly used as input parameters of the training model through feature extraction. 2) There are two main types of data that need to be processed: information related to the application of timing characteristics and GPS position information, which needs further processing and conversion into other forms as parameter inputs. The following is a detailed introduction to the second category.
  • GPS location information needs to be clustered for semantic locations, such as home and office. When the conversion of these two geographic locations occurs, the system will push a broadcast of the change in semantic location information. In other embodiments, GPS location information may also be used directly.
  • the data processing process can occur before storage or before the model training uses the data.
  • the information related to the application timing characteristics mainly includes the package name of the foreground application and the time when the application switches to the foreground.
  • The following three types of information can be further obtained statistically: a) the k1 most recently used applications, b) the k2 most likely predecessor applications of the current foreground application, and c) the k3 most likely successor applications of the current foreground application.
  • the values of k1, k2, and k3 may or may not be equal.
  • the three types of information are collectively referred to as application timing feature data.
  • the matrix U[M*M] is used to indicate the association relationship of the M applications installed on the terminal device 100.
  • U[i*j] represents the number of switchings from application i directly to application j, where i and j are positive integers less than or equal to M. For example, at some point when the application i switches to the operation of the application j, then the U[i*j] record is incremented by one.
  • "Switching directly" means that the application currently in the foreground changes directly from application i to application j; that is, the user uses application i and then uses application j, with no other application in between.
  • the number of jumps between applications can also be taken into account.
  • Application i switching directly to application j indicates that the two may be strongly associated, but application i switching to application j after several jumps can also reflect some degree of association between the two.
  • a jump number parameter can be added. For example, if the application i has been switched to the application j after d jumps, the record of U[i, j][d] is incremented by one. Too many jumps indicate that the relationship between applications is weak and can be ignored. Therefore, you can set up to D jumps (for example, D is 5).
  • each element U'[i,j] in the association matrix of application i and application j can be defined as:
  • U′[·, j] (the jth column of the matrix) represents the possibility of jumping from other applications to application j;
  • U′[i, ·] (the ith row of the matrix) represents the possibility of jumping from application i to other applications. From this matrix, for any application, the k2 most likely predecessor applications and the k3 most likely successor applications can be obtained.
  • Specifically, for an application v, the k2 largest values can be selected from the column U′[·, v], and the row indices corresponding to those values are the k2 most likely predecessor applications of application v; the k3 largest values can be selected from the row U′[v, ·], and the column indices corresponding to those values are the k3 most likely successor applications of application v (a Python sketch of this computation is given below).
  • the application timing feature data is stored in memory 180 along with other acquired data for use in the next phase of model training.
  • the above method is also used to obtain the application timing feature data corresponding to the foreground application.
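  • As an illustration of how the application timing feature data could be derived from the switch records, here is a minimal Python sketch. The normalization used for U′ is an assumption, since the corresponding formula is not reproduced above; only the overall idea (count direct switches, then read the largest entries of a column or row) follows the description.

```python
import numpy as np

M = 4                      # number of installed applications, indexed 0..M-1
U = np.zeros((M, M))       # U[i, j]: number of direct switches from app i to app j

def record_switch(prev_app, next_app):
    """Called on each foreground switch; increments the direct-switch counter."""
    U[prev_app, next_app] += 1

# Replay a toy switch history: 0 -> 1 -> 2 -> 1 -> 3 -> 1
history = [0, 1, 2, 1, 3, 1]
for a, b in zip(history, history[1:]):
    record_switch(a, b)

# Assumed normalization: scale each count by the total number of switches,
# so U_norm[i, j] approximates the probability of the transition i -> j.
U_norm = U / max(U.sum(), 1)

def top_predecessors(app, k2):
    """k2 most likely predecessor apps of `app` (largest entries of column `app`)."""
    return list(np.argsort(U_norm[:, app])[::-1][:k2])

def top_successors(app, k3):
    """k3 most likely successor apps of `app` (largest entries of row `app`)."""
    return list(np.argsort(U_norm[app, :])[::-1][:k3])

recent_k1 = history[-3:]                       # k1 most recently used apps
print(top_predecessors(1, 2), top_successors(1, 2), recent_k1)
```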
  • FIG. 6 a schematic diagram of the process of the second stage model training.
  • After the first stage, data reflecting the user's application usage patterns has been stored in the memory 180; these patterns need to be turned into reusable models through machine learning.
  • That is what the model training process does.
  • The data is read from the memory 180 (S601) and subjected to feature extraction (S602) and similar processing; then the dimension division model is trained, that is, the user's application usage habits are classified according to one or more dimensions (S603).
  • the ranking model is separately trained for different classifications (S604), and the training results are stored in the memory for the third-stage application sorting. Further, at this stage, the prediction of the number N of applications frequently used by the user can also be performed (S605).
  • the three model algorithms involved are introduced below.
  • The user's application usage habits may show breaks (abrupt changes) along certain dimensions.
  • the time dimension the user's working time period and the non-working time period (also called the rest time period) may have great differences in the habit of using the application.
  • If a ranking model were constructed in a generic way over all the data to rank the applications, large deviations could result.
  • the training dimension division model is to explore the finer division of user habits in some dimensions, eliminate the errors caused by such breaks, and further improve the accuracy of sorting.
  • a partitioning result can be obtained by applying some partitioning algorithm to the original data. As shown in FIG. 7, the original data is divided into three categories from the dimension X. Further, each category can be further divided into more categories on the dimension Y, and will not be described in detail. This is also the basic principle of the entropy increase algorithm described below.
  • the time dimension is taken as an example, and the time is divided into two types: a working time period and a non-working time period.
  • The method is to find one or two time points within a day that can distinguish the user's working time from non-working time.
  • In a specific implementation, the division can be: 1) single-point division, in the form [0:00–x], [x–24:00], with only one variable x; 2) two-point division, in the form [0:00–x1], [x1–x2], [x2–24:00], which requires two variables x1 and x2.
  • the following is an example of two-line division.
  • the data is first read from the memory 180.
  • the data used here mainly includes the application package name and the time the application is used.
  • the time of day is divided into 24 time periods, each time period including 1 hour.
  • the user's use of the application during each time period is counted based on the read data.
  • the two-dimensional matrix S is used to count the usage of the application in any time period.
  • S[h][i] indicates the number of times the application i is used in total during the h time period.
  • Based on S, the total number of times application i is used in the period [h1, h2] can be computed as S_total(i, h1, h2) = Σ_{h = h1..h2} S[h][i] (formula (2)).
  • The proportion of the times application i is used in the period [h1, h2] to the number of times all applications are used in that period is p_i(h1, h2) = S_total(i, h1, h2) / Σ_j S_total(j, h1, h2) (formula (3)).
  • The information entropy of the user's application usage in the period [h1, h2] is defined as E(h1, h2) = −Σ_i p_i(h1, h2) · log p_i(h1, h2).
  • f(x1, x2) is the proportion of the number of uses of all applications in the period [x1, x2] to the number of uses of all applications over the whole day, calculated as f(x1, x2) = Σ_i S_total(i, x1, x2) / Σ_i S_total(i, 0:00, 24:00).
  • For example, the final result may be that x1 and x2 are 9:00 and 18:00 respectively; the three segments are then [0:00–9:00] non-working, [9:00–18:00] working, and [18:00–24:00] non-working, expressed in Table 2 as:
  • [0:00 to 9:00] and [18:00 to 24:00] are regarded as non-working time periods, and a sorting model is trained for the non-working time period. In other embodiments, a sorting model may also be trained for [0:00 to 9:00] and [18:00 to 24:00], respectively.
  • the semantics of “work” and “non-work” are given according to the living habits of most people. In fact, the meaning of the method reflects that the history of the application is presented in two different rules. This application does not necessarily limit the semantic division of "work” and "non-work”.
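  • The following Python sketch illustrates one way the two-point division could be computed from the usage matrix S. The criterion used to choose x1 and x2 (minimizing the usage-weighted sum of the per-segment entropies) is an assumption, since the objective formula itself is not reproduced above; the toy data is purely illustrative.

```python
import numpy as np

HOURS, M = 24, 3
rng = np.random.default_rng(0)
# S[h][i]: number of times application i is used in hour h (toy data here).
S = rng.integers(0, 5, size=(HOURS, M)).astype(float)
S[9:18, 0] += 20          # app 0 dominates "working" hours in this toy example

def entropy(h1, h2):
    """Information entropy of application usage over [h1, h2)."""
    counts = S[h1:h2].sum(axis=0)
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts[counts > 0] / total          # per-app usage proportions in the period
    return float(-(p * np.log2(p)).sum())

def weight(h1, h2):
    """f(x1, x2): share of all usage that falls in [h1, h2)."""
    return float(S[h1:h2].sum() / S.sum())

best = None
for x1 in range(1, HOURS - 1):
    for x2 in range(x1 + 1, HOURS):
        # Assumed objective: usage-weighted entropy of the three segments.
        cost = sum(weight(a, b) * entropy(a, b)
                   for a, b in [(0, x1), (x1, x2), (x2, HOURS)])
        if best is None or cost < best[0]:
            best = (cost, x1, x2)

print("division points:", best[1], best[2])   # e.g. 9 and 18 for this toy data
```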
  • the importance of the application on the terminal device is related to the probability of use of the application.
  • The usage probability of an application may be different in different system states. For example, the usage probability of each application when the network is connected differs from its usage probability when the network is disconnected, which results in different application importance orders in the connected and disconnected cases. Therefore, the current system state needs to be considered when ranking applications.
  • network connection or disconnection status is only an example, and other states related to the terminal device may also be considered, such as whether the headset is connected, whether the Bluetooth connection, the charging line is connected, or the current location of the terminal device. .
  • the training ordering model mainly includes the following steps.
  • Data is read from the memory 180.
  • the data needed for training here mainly includes: the time when the application is used, application timing characteristics information, system status information, and semantic location information.
  • the application of the timing feature information has been detailed above and the acquisition process.
  • System status information mainly includes: the connection status of the headset/charge line/network.
  • the semantic location information may include, for example, a home and an office, and the like.
  • the read data is converted into a feature vector/feature matrix that can be processed by the machine learning algorithm by means of statistics, recognition, and/or vectorization. Specifically, it includes: s1) time and application timing feature information of the application to be used: converting the time and application timing feature information of the application into a vector using vectorization, time slice, and weekday/non-working day identification methods. For example, the time the application is used is divided into two parts, date and time period, and Monday, Tuesday, ..., Sunday are mapped to 0, 1, ..., 6, respectively, and the time period is a segmentation map, such as [9: 00 ⁇ 18:00] is a paragraph, all the time of this period is mapped to the same number.
  • System status information extracts features from the read system state information and uses the discrete/enumeration method to convert system state information into vectors.
  • Semantic location information The read semantic location information is encoded to be converted into a vector.
  • Normalization normalize the value [0,1] using the maximum/minimum method, and normalize the variance using the whitening method.
  • S6) Feature matrix combination, the data of each dimension is merged into a feature matrix and provided to the algorithm training module.
  • Model training and storage The data is divided into multiple groups, and the model is trained separately to obtain multiple models.
  • the division of data can also occur before the feature extraction, and the feature extraction is performed separately after the data is divided.
  • For example, according to the time period division in Table 2, two models are trained: one on the data of the working period [9:00–18:00] and one on the data of the non-working periods [0:00–9:00] and [18:00–24:00]; the generated ranking models (machine learning model parameter information) and the corresponding time periods are stored in the database. Data for all of Saturday and Sunday can be treated as non-working time for training.
  • the machine learning algorithm softmax is taken as an example to introduce the training process in detail.
  • The reason for choosing the Softmax algorithm is that it is an incremental learning algorithm. An incremental learning algorithm means that the training process can merge the feature data of new training samples on the basis of the existing model parameters and further tune the parameters; that is, the original sample data does not need to be retained, which reduces the amount of data that must be stored and saves storage space.
  • The purpose of training is to find the ranking function (i.e., the ranking model) h_θ(s); the premise of finding the ranking function is to find the parameter θ.
  • The ranking function is defined as h_θ(s) = [P(y = 1 | s; θ), …, P(y = M | s; θ)], where P(y = c | s; θ) = exp(θ_cᵀ s) / Σ_{l = 1..M} exp(θ_lᵀ s), and where:
  • k is the number of samples of the training input;
  • M is the number of categories, i.e. the number of applications;
  • W is the length of the feature vector s, so that θ can be regarded as an M×W parameter matrix.
  • Model training uses the gradient descent method or Newton's method to minimize the cost function J(θ) and thereby obtain θ.
  • The gradient descent update is θ^(j+1) = θ^(j) − α · ∇_θ J(θ^(j)), where α is a fixed parameter (the step size) and j is the number of iterations.
  • The model is retrained every 24 hours, and the ranking model is thus continuously updated so that it comes ever closer to the user's real usage habits.
  • the sorting function h ⁇ (s) is known, that is, the sorting function that is used in the real-time sorting.
  • the obtained information is stored in the memory 180 in the form of Table 3 for use in real time sorting.
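  • As a concrete illustration of the training step, the following is a minimal softmax-regression sketch in Python/NumPy. The feature encoding is a stand-in (random vectors), and the step size and iteration count are arbitrary; only the model form h_θ(s) and the gradient-descent update follow the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
k, W, M = 200, 6, 4            # k samples, feature length W, M applications
S_feat = rng.normal(size=(k, W))               # feature vectors s (placeholder encoding)
y = rng.integers(0, M, size=k)                 # label: which application was used

theta = np.zeros((M, W))                       # parameters to be learned

def h(theta, s):
    """Softmax ranking function h_theta(s): probability of each application."""
    z = theta @ s
    z -= z.max()                               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def grad_J(theta):
    """Gradient of the cross-entropy cost J(theta) over the k training samples."""
    g = np.zeros_like(theta)
    for s, label in zip(S_feat, y):
        p = h(theta, s)
        p[label] -= 1.0                        # (p - one_hot(label))
        g += np.outer(p, s)
    return g / k

alpha = 0.5                                    # fixed step size
for _ in range(300):                           # theta <- theta - alpha * grad J(theta)
    theta -= alpha * grad_J(theta)

probs = h(theta, S_feat[0])
print("predicted ranking of applications:", np.argsort(probs)[::-1])
```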
  • the value of N is a positive integer, which reflects the number of applications that users frequently use in a particular category, such as the number of applications that users frequently use during a specific time period (such as working hours).
  • the N value prediction algorithm is used to determine the value of N.
  • the value of N can be used for resource management, because these N applications are frequently used by users, so resources of this application can be appropriately protected during resource management.
  • the present embodiment classifies the user usage rules by time as the dimension, as shown in Table 3. Then, the calculation of the value of N can also perform the same classification in terms of time.
  • the output results are shown in Table 4: an N value is calculated for each of the working time period and the non-working time period. Such a prediction of the value of N is more accurate.
  • the calculation of the N value may also be performed using a non-time dimension, such as a location dimension.
  • a non-time dimension such as a location dimension.
  • the home-related data calculates an N value
  • the office-related dimension calculates an N value.
  • the value of N may be pre-computed and stored, for example, as stored in Table 4, or the N value prediction algorithm may be called in real time to obtain an N value when resource management requires an N value.
  • The N-value prediction algorithm used is a second-order moving average. The number N_t of applications that need to be protected in a certain time period (or under some other classification dimension) on day t is the mean of the numbers of applications that needed protection in the same period on days t−1 and t−2, plus a weighted variance term, as shown in formula (12): N_t = (N_{t−1} + N_{t−2}) / 2 + λ·σ.
  • λ is the weight coefficient, and σ is the variance value of all N_i in the data.
  • The value of λ is, for example, 0.005.
  • the N values for Days 1 and 2 can be calculated using the former implementation.
  • If the variance σ of all historical N_i were recalculated every time an N value is computed, all historical N_i would have to be stored. To avoid this, an incremental calculation method is used to evaluate the variance of all N_i: σ = [P·(σ_h + μ_h²) + Q·(σ_n + μ_n²)] / (P + Q) − μ², with the combined mean μ = (P·μ_h + Q·μ_n) / (P + Q), where:
  • μ_h is the average of all the historical data and σ_h is the historical data variance;
  • P is the number of historical data values;
  • σ_n is the variance of the new data and μ_n is the average of the new data;
  • Q is the number of new data values.
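  • A minimal sketch of the N-value prediction, assuming formula (12) has the form N_t = (N_{t−1} + N_{t−2})/2 + λ·σ as reconstructed above, and combining old and new batches of N values with the standard parallel-variance formula instead of storing the full history.

```python
class NValuePredictor:
    """Second-order moving average with an incrementally maintained variance."""

    def __init__(self, lam=0.005):
        self.lam = lam                 # weight coefficient lambda
        self.count = 0                 # P: number of historical N values
        self.mean = 0.0                # running mean of all historical N values
        self.var = 0.0                 # running variance of all historical N values
        self.last_two = []             # N_{t-1}, N_{t-2}

    def add_observations(self, new_values):
        """Merge a new batch (Q values) into the running mean/variance without
        keeping the full history (parallel variance combination)."""
        q = len(new_values)
        if q == 0:
            return
        new_mean = sum(new_values) / q
        new_var = sum((v - new_mean) ** 2 for v in new_values) / q
        total = self.count + q
        combined_mean = (self.count * self.mean + q * new_mean) / total
        combined_var = (self.count * (self.var + self.mean ** 2) +
                        q * (new_var + new_mean ** 2)) / total - combined_mean ** 2
        self.mean, self.var, self.count = combined_mean, combined_var, total
        self.last_two = (list(new_values) + self.last_two)[:2]

    def predict(self):
        """N_t = mean of the last two days' N plus lambda times the variance."""
        if len(self.last_two) < 2:
            return round(self.mean) or 1       # fallback for days 1 and 2
        return round(sum(self.last_two) / 2 + self.lam * self.var)

p = NValuePredictor()
for day_n in [5, 6, 6, 7, 5]:                  # N observed on successive days
    p.add_observations([day_n])
print("predicted N for today:", p.predict())
```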
  • FIG. 8 Please refer to FIG. 8 for a schematic diagram of the process of applying the real-time sorting process in the third stage.
  • the real-time sorting module 320 sets an event listening mechanism to detect a key event that triggers real-time sorting (S801). After monitoring the critical event (S802), the trigger data collection module 221 collects real-time data (S803), which may include the system time of the terminal device, the current state data of the terminal device, and the current location data of the terminal device.
  • the key events mainly include the following: 1) front and back office switching; 2) application installation and uninstallation; 3) system status change: that is, the connection status of the headset/network changes; 4) receiving the scenario intelligent broadcast notification, ie Notify that the semantic location (company/home) has changed, and so on.
  • the classification in the time dimension in which the terminal device is currently located is determined, that is, the working or non-working time period (S804).
  • the sorting model of the related classification is selected or only the model parameters are obtained, and then the importance ordering of all applications in the real-time scene is predicted based on the historical data and the data collected in real time (S805).
  • the historical data here is the data collected and stored in history. For details, please refer to Table 1.
  • The data to be input into the ranking model may need to be pre-processed. For example, the timing feature data related to the current foreground application needs to be obtained from the real-time data and part of the historical data; if the model requires semantic location data as input, the terminal device location information collected in real time also needs to be converted into semantic location data through clustering, or the corresponding semantic location data is obtained from a preset correspondence between GPS location intervals and semantic locations. Whether pre-processing is needed, and what kind of pre-processing is performed, should match the model training and is not specifically limited in this application.
  • Tables 2 and 3 are only examples for convenience of understanding. In a specific implementation process, the two may be combined into one table, that is, a corresponding relationship, and the ordering model or model parameters may be determined at one time according to the current time of the system. .
  • The classification in S804 uses the system time acquired in real time. In other embodiments, if the classification dimensions are different, other real-time data and/or historical data may be used for classification as needed.
  • the real-time sorting module 320 stores the sorted result (S806) for use in the fourth-stage resource management query.
  • the third stage of the ordering may also occur when the fourth stage is called in real time, and then the sorting result is returned in real time.
  • the number N of applications frequently used by the user in the time period can also be obtained by an N-value prediction algorithm or by a pre-stored correspondence (such as Table 4). This step is not required and can be calculated in real time when the resource management really needs N values.
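  • A minimal sketch of the third-stage flow, assuming the per-period ranking models are stored as a table keyed by time period (a combined form of Tables 2 and 3) and that the feature construction matches the one used at training time; the model parameters and the ranking call are placeholders.

```python
import datetime

# Assumed combined form of Tables 2 and 3: each time period maps to the
# parameters of the ranking model trained for it.
MODEL_TABLE = {
    ("09:00", "18:00"): "theta_work",        # placeholder for trained parameters
    ("00:00", "09:00"): "theta_nonwork",
    ("18:00", "24:00"): "theta_nonwork",
}

def select_model(now=None):
    """Pick the ranking model whose time period contains the current system time."""
    now = now or datetime.datetime.now().time()
    for (start, end), params in MODEL_TABLE.items():
        s = datetime.time(*map(int, start.split(":")))
        e = datetime.time(23, 59, 59) if end == "24:00" else datetime.time(*map(int, end.split(":")))
        if s <= now <= e:
            return params
    return "theta_nonwork"

def rank_applications(params, realtime_features):
    """Placeholder: would evaluate h_theta(s) of the selected model for every app."""
    print("ranking with", params, "on", realtime_features)
    return ["com.example.mail", "com.example.browser", "com.example.game"]

ranking = rank_applications(select_model(), {"foreground": "com.example.mail"})
print("top-N:", ranking[:2])                  # N obtained from Table 4 or predicted
```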
  • the present application further proposes a supplementary mechanism for newly installed applications, which sorts the newly installed applications by calculating a score Score.
  • Ranking the newly installed applications mainly considers two factors: 1) a usage possibility weight, which may also be referred to as an LRU weight in this embodiment: check whether the newly installed application appears in the LRU (least recently used) list to determine whether it has been used recently; if it appears, LRU = 1, otherwise LRU = 0. 2) A time decay weight: compute a decay weight from the time difference between the current time and the installation time of the newly installed application.
  • the LRU list stores the identifiers of the most recently used applications. For example, if the list length is 8, the 8 most recently used applications are stored in chronological order. Each time a newly used application is inserted, one of the oldest used applications is deleted.
  • The score can be computed as a weighted combination of the two factors, for example Score = α1·LRU + α2·decay(t), where:
  • α1 is the weight coefficient of the LRU term;
  • α2 is the weight coefficient of the time decay term;
  • t is the discretized value of the time difference between the current time and the application installation time, and decay(t) decreases as t increases. The Scores of all newly installed applications are calculated and sorted.
  • other forms may be employed to determine if a newly installed application has been used recently. For example, record the last usage time of the newly installed application, and then determine whether the time difference between the last usage time and the current time is less than a certain threshold.
  • In a specific implementation, the top N−x applications can be recommended from the ranking result of the softmax algorithm, and x applications can be recommended from the ranking result of the newly installed applications; the two recommendation methods together recommend a total of N applications as the currently most important applications of the system, for use in the subsequent resource management.
  • x newly installed applications can select the largest x applications of Score in turn, and the value of x can be set as needed, for example, 2. Or set the condition: the Score of the recommended new installed application must be greater than a certain threshold (for example, 0.5) and/or the recommended number is at most 2. Specifically, how to set conditions is not limited in this application.
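  • A minimal sketch of the newly-installed-application score, assuming Score is the weighted sum α1·LRU + α2·decay(t) with an exponential decay; the exact decay function and the coefficient values are assumptions, not taken from the source.

```python
import math

ALPHA1, ALPHA2 = 0.6, 0.4        # assumed weight coefficients for LRU and time decay

def new_install_score(pkg, lru_list, days_since_install):
    """Score a newly installed application from its recent use and install age."""
    lru = 1 if pkg in lru_list else 0            # usage-possibility weight
    decay = math.exp(-days_since_install)        # assumed time-decay form
    return ALPHA1 * lru + ALPHA2 * decay

lru_list = ["com.example.newchat", "com.example.mail"]   # most recently used apps
candidates = {"com.example.newchat": 1, "com.example.newgame": 3}  # pkg -> days since install

scores = {p: new_install_score(p, lru_list, d) for p, d in candidates.items()}
# Recommend at most x apps whose score exceeds a threshold (e.g. 0.5).
recommended = [p for p, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0.5][:2]
print(scores, recommended)
```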
  • the main purpose of system resource management is to ensure the resource supply of foreground applications or applications of high importance.
  • This embodiment provides a temporary freeze (also referred to as transient freeze) method for application processes, which temporarily freezes some unimportant processes when the resource demand is high, so as to provide more resources for important processes and avoid degradation of the application experience caused by insufficient resources.
  • Process freeze usually refers to the process of placing a process into an uninterruptible sleep or stop state and placing it in a wait queue.
  • a "process” can be understood as an instance of a running application, and in some descriptions, the two can be equally understood.
  • Traditionally, freezing technology is mainly used, when the system hibernates or suspends, to put all processes in the process list into a sleep or stopped state and to save the context of all processes to the hard disk, so that when recovering from hibernation or suspend the system can restore itself to the previous running state by thawing all the frozen processes.
  • the use of the freezing technology by different terminal operating systems is not the same.
  • Some terminal operating systems use pseudo background technology, that is, after the application process switches to the background and runs in the background for a period of time, the process is frozen until the user switches the application to the foreground again, and then the process is thawed and continues to run.
  • When memory is insufficient, the OOM (out of memory) or LMK (low memory killer) module triggers memory reclamation by killing processes, but this causes the killed application to take longer to reopen and the user's previous state in it to be lost, which degrades the user experience; freezing technology is introduced to solve this problem.
  • Instead of killing the process, the process is first frozen, its context is swapped out to ROM, and the memory is released; the next time the application is reopened, the process is thawed and continues to run. This not only avoids the loss of the application's original state that killing would cause, but also reduces the application restart time and improves the user experience.
  • the existing freezing technology is mainly used for the processing of long-term unused processes, or the long-term release of resources, until the user has a demand for the process to thaw.
  • This embodiment proposes a transient freezing technology: when an important application or important scenario has a burst of resource demand, other application processes are temporarily frozen to ensure timely supply of the required resources, and after the resource demand drops, these frozen application processes are automatically thawed. This improves the processing and response speed of the important application while avoiding the experience degradation caused by other applications being frozen for a long time (for example, a download that is frozen for a long time cannot continue), thereby improving the user experience.
  • the "specific events" here are generally related to important applications, such as foreground applications or background-aware applications, and background-aware applications such as music playback applications running in the background.
  • The non-critical applications to be temporarily frozen may be selected as required. For example, according to the importance ranking result provided by the foregoing embodiment, all applications except the top N applications are temporarily frozen (where N is the number of applications frequently used by the user, calculated as in the foregoing embodiment); or all background applications are selected for temporary freezing; or other applications that are not important, or are predicted to be non-critical for a period of time, are selected as needed.
  • A detection function monitors the specific event (S901); the specific event is reported through the event reporting function of the Activity Manager Service (AMS) (S902), and some applications are temporarily frozen (S903).
  • When the temporary freeze function is called, a freeze duration is set, for example 1.5 s. This freeze duration may be preset and unchangeable, or it may be user-configurable.
  • the freeze time is implemented by a timer, it is detected that the timer expires (S904), and then the application is thawed (S907).
  • Steps S904 and S905 can be understood as two conditions for normal thawing, and any one or both of them can be realized.
  • If an event occurs indicating that the running environment of a frozen application has changed, for example the frozen application is switched to the foreground, the frozen application exits, or a notification message of the frozen application is clicked by the user, the frozen application can be thawed in advance (S907).
  • an event indicating that a frozen application operating environment changes is generally associated with a frozen application, and the occurrence of the event generally causes the importance of the frozen application to suddenly increase and needs to be thawed in time.
  • Monitoring of events indicating changes to a frozen application's running environment can also be implemented through the event reporting function of the aforementioned Activity Manager Service.
  • The event indicating that the running environment of a frozen application has changed may be any of the following: 1) the application is switched to the foreground; 2) the application exits; 3) the application is reinstalled or its version is updated; 4) a live wallpaper application is set as the live wallpaper; 5) a widget of the application is added to the desktop; 6) an instant messaging (IM)/short message service (SMS)/mail (Email) application has a network packet arrive; 7) an IM/SMS/Email application goes from having no network to having a network connection; 8) another application accesses the provider or service of the frozen application; 9) the system or another application synchronously calls the frozen process through the binder; 10) after unlocking, a widget application is detected on the desktop; 11) exercise starts, and the previously frozen application uses GPS and sensors; 12) the frozen application needs to process a headset button press; 13) the temporarily frozen application receives a binder asynchronous message; 14) a notification of the frozen application is clicked.
  • This embodiment also proposes an emergency or early thawing method. If, before the specific event has completed and before the specified timer duration has elapsed, an event occurs indicating that the running environment of a frozen application has changed, the frozen application can be thawed in advance, so that freezing does not affect the normal use of the application.
  • this embodiment also proposes a method of periodic freezing. Before the application is frozen, the running time after the application is thawed is detected first. For an application that has accumulated a running time of t1 seconds after thawing (t1 is a preset time length value, for example, 10 s), the temporary freezing can be continuously performed. For applications with a cumulative run time less than t1 after thawing, a temporary freeze can be implemented periodically, for which the application is frozen for t2 seconds and then thawed for t3 seconds until the continuous freeze condition is met.
  • t1 is a preset time length value, for example, 10 s
  • When such an event occurs, the accumulated post-thaw running time of the application is cleared to 0, so that each background application is guaranteed to be able to run for a period of time after an event indicating a running-environment change; in addition to satisfying the instantaneous high resource demand of the foreground application, the running and experience of background applications are thus also guaranteed.
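  • A minimal user-space sketch of the temporary-freeze logic (timer-based thaw, early thaw on an environment-change event, and the periodic freeze for applications that have not yet accumulated t1 seconds of post-thaw running time). Freezing itself is represented by a stub; on a real system it would be done by the kernel/cgroup freezer, which is outside this sketch, and the thaw slice t3 of the periodic cycle is omitted.

```python
import threading
import time

T1 = 10.0     # continuous-freeze threshold: accumulated post-thaw running time (seconds)
T2 = 1.0      # freeze slice of the periodic freeze (seconds)

class TempFreezer:
    def __init__(self):
        self.frozen = set()
        self.run_since_thaw = {}              # accumulated post-thaw running time per app

    def _freeze(self, app):
        self.frozen.add(app)                  # stub: a real system would use the kernel freezer
        print("freeze", app)

    def _thaw(self, app):
        self.frozen.discard(app)
        print("thaw", app)

    def temp_freeze(self, apps, duration=1.5):
        """Transient freeze of non-critical apps for `duration` seconds."""
        for app in apps:
            if self.run_since_thaw.get(app, 0.0) >= T1:
                self._freeze(app)             # has run long enough: freeze continuously
            else:
                self._freeze(app)             # periodic freeze: freeze for T2 s, then thaw once
                threading.Timer(T2, self._thaw, args=(app,)).start()
        threading.Timer(duration, self.thaw_all).start()   # normal thaw when the timer expires

    def on_environment_change(self, app):
        """Early thaw: the frozen app was switched to the foreground, got a packet, etc."""
        if app in self.frozen:
            self._thaw(app)
        self.run_since_thaw[app] = 0.0        # restart its post-thaw running-time accounting

    def thaw_all(self):
        for app in list(self.frozen):
            self._thaw(app)

f = TempFreezer()
f.temp_freeze(["com.example.download", "com.example.news"])
f.on_environment_change("com.example.download")    # e.g. its notification was clicked
time.sleep(2)                                       # let the timers fire
```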
  • Another embodiment of the present application provides a scheduling method for an important task, where the task may be a process or a thread.
  • Conceptually, every task gets the same time slice from the cpu and runs on the cpu at the same time, but in fact one cpu can only run one task at a time; in other words, when a task occupies a cpu, the other tasks must wait.
  • The completely fair scheduler (CFS) algorithm is a commonly used scheduling algorithm in existing Linux systems. To achieve fairness, the CFS scheduling algorithm must penalize the currently running task so that the tasks that are waiting get scheduled next time.
  • CFS uses the virtual run time (vruntime) of each task to measure which task is most worthy of being scheduled.
  • The ready queue in CFS is a red-black tree keyed by vruntime. The smaller the vruntime, the closer the task is to the leftmost end of the red-black tree; therefore, the scheduler each time selects the task at the leftmost end of the red-black tree, which is the task with the smallest vruntime.
  • cpu refers to a minimum processing unit in a computer device, and may also be referred to as a processing core, or simply a core.
  • the order in which the cpu performs tasks is T1-T3-T5-T2-T4-T6. Assume that task T1 is an important task among them.
  • Vruntime is calculated from the actual running time of the task and the weight of the task, which is an accumulated value.
  • the concept of task priority is weakened, but the weight of the task is emphasized. The greater the weight of a task, the more it needs to run, so the virtual running time is smaller, so the chances of being scheduled are greater.
  • This embodiment proposes a scheduling method for important tasks.
  • a new running queue hereinafter referred to as a vip queue
  • a vip queue is created for each CPU, and its structure is similar to the aforementioned ready queue. Put important tasks in the vip queue.
  • An important task refers to a task that has a significant influence on the user experience. Important tasks may include all threads (or processes) of the top-N applications in the importance ranking mentioned in the foregoing embodiment; or only the key threads, such as UI threads and render threads, of those N applications; or the key threads, such as the UI thread and render thread, of the current foreground application.
  • Important tasks are divided into static and dynamic.
  • the identification of static important tasks generally occurs in user mode, such as the user interface (UI) thread and the render thread of the foreground application.
  • Static important tasks generally cancel their importance when the importance of the application changes.
  • Dynamic important tasks are important tasks on which static important tasks depend. Their recognition generally occurs in the kernel state. Once the dependency is released, its importance is cancelled. Dynamic important tasks include tasks that are directly dependent on static important tasks, and can also include tasks that are indirectly dependent.
  • the dependencies here can be data dependencies, lock dependencies, binder service dependencies, or control flow dependencies.
  • the data dependency that is, the execution of the B task must depend on the output of the A task;
  • the lock dependency means that the execution of the B task requires some kind of lock released by the A task;
  • the control flow dependency is that the B task must wait for the A task to execute before executing the logic;
  • Binder service dependency refers to task A calling a binder function (similar to a remote procedure call), requiring task B to complete a certain function, and returning the running result, so that task A generates a binder service dependency on task B, which belongs to the control flow dependency.
  • static_vip 1, indicating that the task is a static important task
  • dynamic_vip is not equal to 0, indicating that the task is a dynamic important task.
  • The dynamic_vip field is divided into 3 or more types, representing mutex, rwsem and binder dependencies respectively; further types are not detailed here, and several types can be reserved for later extension. Each type uses 8 bits of storage. Each time the corresponding dependency function is called, the value of the corresponding 8-bit field is incremented by 1; after each dependency function completes, the value is decremented by 1. When all the fields are equal to 0, the important attribute of the task is cancelled and the task is put back into the ready queue to run.
  • Important task A calls the mutex_lock function to get a mutex lock.
  • the state of the lock is detected. If the acquisition fails, it indicates that the lock has been acquired by other tasks.
  • the current task A is suspended and goes to sleep.
  • Code is added here to obtain the current holder of the lock (task B), increment the corresponding value of the dynamic_vip field in task B's task structure by 1, and then move task B to the vip queue for scheduling.
  • When task B releases the lock via mutex_unlock, the corresponding value of dynamic_vip in its task_struct is decremented by 1; if it reaches 0, the task is removed from the vip queue. The handling of other lock types is similar.
  • the VIP queue includes two important tasks T1 and T7.
  • the current T7 vruntime is less than the T1 vruntime.
  • Each time the cpu obtains the next task to run, it first checks whether there are tasks in the vip queue that need to run; if so, it selects a task from the vip queue, and if the vip queue is empty, it selects a task from the ready queue. This ensures that important tasks are run before other non-important tasks, giving important tasks a scheduling effect similar to that of real-time tasks.
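  • A minimal Python simulation of the vip-queue idea: the scheduler always takes the task with the smallest vruntime from the vip queue if that queue is non-empty, and a task holding a lock that an important task needs is temporarily promoted into the vip queue (the dynamic_vip mechanism). This is only a model of the scheduling policy, not kernel code; names and structures are illustrative.

```python
import heapq

class Task:
    def __init__(self, name, vruntime=0.0):
        self.name, self.vruntime = name, vruntime
        self.static_vip = 0          # 1 for statically important tasks (e.g. UI thread)
        self.dynamic_vip = 0         # >0 while an important task depends on this task
    def __lt__(self, other):         # heap ordering by vruntime (red-black tree stand-in)
        return self.vruntime < other.vruntime

ready, vip = [], []                  # per-cpu ready queue and vip queue

def enqueue(task):
    heapq.heappush(vip if (task.static_vip or task.dynamic_vip) else ready, task)

def pick_next():
    """Pick from the vip queue first; fall back to the ordinary ready queue."""
    return heapq.heappop(vip) if vip else (heapq.heappop(ready) if ready else None)

def mutex_block(waiter, holder):
    """waiter (important) failed to get a lock held by holder: boost the holder."""
    if (waiter.static_vip or waiter.dynamic_vip) and not holder.dynamic_vip:
        holder.dynamic_vip += 1
        if holder in ready:
            ready.remove(holder)
            heapq.heapify(ready)
            heapq.heappush(vip, holder)

ui = Task("ui_thread", 5.0)
ui.static_vip = 1
worker = Task("worker", 1.0)
enqueue(ui)
enqueue(worker)
mutex_block(ui, worker)              # worker holds a mutex the UI thread needs
print([pick_next().name, pick_next().name])   # worker (boosted) runs, then ui_thread
```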
  • The kernel checks whether any task in the vip queue corresponding to the current cpu has been waiting longer than a threshold (for example, 10 ms), that is, whether there is a delayed task (S1302). If a task's waiting time exceeds the threshold, the kernel checks whether the data and/or instructions of the task still exist in the cache, that is, whether the task is movable (S1303). If the task's data and/or instructions do not exist, or only partially exist, in the cache (the cache is not hot), a cpu in the current cpu cluster that has no waiting vip task and no running real-time task is selected as the destination cpu (S1304), and the task is migrated to the destination cpu (S1305).
  • the migration of tasks can be achieved by calling the function migration provided by the kernel. Checking whether the cache is hot can be achieved by calling the function task_hot provided by the kernel.
  • all the migrateable tasks can be identified at one time, and then the migration can be performed; the tasks in the vip queue can also be processed in sequence.
  • the waiting time of a task is the time difference between the current time of the task and the enqueue time, provided that the task has never been run in the middle; or the waiting time of one task is the time difference between the current time and the last time it was run.
  • Detecting whether the waiting time of a task in a queue exceeds the threshold usually requires acquiring the lock of the queue first.
  • By implementing the above method appropriately, deadlock can be avoided to some extent.
  • The 8 cores (cpu0–cpu7) of current terminal devices are generally divided into two tiers: 4 little cores and 4 big cores. If the original cpu of the task is a little core during migration, a big core can be selected as the destination cpu to which the important task is migrated, as shown in FIG. 14.
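  • A minimal sketch of the delayed-task migration check, under the assumptions that a per-task enqueue time and a cache-hot flag are available and that an idle big core with an empty vip queue and no real-time task is preferred as the destination; the function and field names are illustrative and are not kernel APIs.

```python
import time

THRESHOLD = 0.010                    # 10 ms waiting-time threshold

class VipTask:
    def __init__(self, name, enqueue_time, cache_hot=False):
        self.name, self.enqueue_time, self.cache_hot = name, enqueue_time, cache_hot

class Cpu:
    def __init__(self, cid, big=False):
        self.cid, self.big = cid, big
        self.vip_queue = []          # vip queue of this cpu
        self.has_rt_task = False     # whether a real-time task is running here

def check_and_migrate(src, cpus, now=None):
    """Move tasks that waited too long and are not cache-hot to a better cpu."""
    now = now or time.monotonic()
    for task in list(src.vip_queue):
        if now - task.enqueue_time <= THRESHOLD or task.cache_hot:
            continue                                         # not delayed, or not movable
        candidates = [c for c in cpus
                      if c is not src and not c.vip_queue and not c.has_rt_task]
        if not candidates:
            continue
        dest = next((c for c in candidates if c.big), candidates[0])  # prefer a big core
        src.vip_queue.remove(task)
        dest.vip_queue.append(task)
        print(f"migrated {task.name} from cpu{src.cid} to cpu{dest.cid}")

little = Cpu(0)
big = Cpu(4, big=True)
little.vip_queue.append(VipTask("ui_thread", time.monotonic() - 0.02))  # waited 20 ms
check_and_migrate(little, [little, big])
```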
  • the device embodiments described above are merely illustrative, wherein the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, ie may be located A place, or it can be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the connection relationship between the modules indicates that there is a communication connection between them, and specifically may be implemented as one or more communication buses or signal lines.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Stored Programmes (AREA)
  • Telephone Function (AREA)

Abstract

This application provides a method for managing resources in a computer system, and a terminal device. The method includes: acquiring data, where the data includes application timing feature data related to the current foreground application and at least one of the following real-time data: the system time of the computer system, the current state data of the computer system, and the current location data of the computer system; selecting, according to at least one of the real-time data, a target machine learning model matching the real-time data from multiple machine learning models; inputting the acquired data into the target machine learning model to rank the importance of multiple applications installed on the computer system; and performing resource management according to the result of the importance ranking. The method improves the accuracy of application importance recognition, so that the objects of resource allocation/reservation or resource reclamation are identified more accurately, thereby improving the effectiveness of resource management.

Description

Resource management method and terminal device

Technical field

This application relates to the field of computer operating systems, and in particular to a method, apparatus and system for ranking the importance of applications deployed in an operating system and performing resource management according to the importance ranking.

Background

The primary task of a computer operating system, including a smartphone operating system, is to maintain resource allocation for the various applications running on it. From the operating system's perspective, resources come in many types, such as processor (for example, central processing unit, CPU) computing resources represented as time slices, memory resources represented as memory pages, and input/output (I/O) resources represented as bandwidth. When the resources supporting the user's current operation cannot be supplied in time, the operating system appears to stutter. Therefore, the core factor determining whether the operating system stutters is the resource scheduling policy, and the most important aspect of a reasonable resource scheduling policy is identifying important and unimportant applications, so as to allocate resources as reasonably as possible.

Taking the smartphone operating system Android as an example, applications can be divided into foreground applications, background applications and applications that have not been started. The scenarios in which the user perceives stutter usually occur while a foreground application is being used, so foreground applications are relatively more important. From the user's point of view, the main cause of stutter in the Android system is therefore the disorderly execution of applications, which means that the resources needed when a foreground application or service is used cannot be guaranteed. Existing methods can identify foreground and background applications and appropriately allocate more resources to foreground applications, but this allocation is relatively fixed. In real-time scenarios, however, and especially when resources are insufficient, temporarily providing more resources for foreground or important applications to guarantee their operation involves releasing resources occupied by currently unimportant applications; in solving this, deciding which applications' resources to release so as to temporarily make way for important applications becomes an urgent problem.

Summary

This application provides a resource management method and a terminal device to which the method is applied. The method includes an application importance ranking method and a resource scheduling method, which can identify the importance of applications in the current scenario and implement resource scheduling accordingly, guaranteeing the resource supply of more important applications, thereby avoiding system stutter to some extent and improving the user experience.
In a first aspect, this application provides a resource management method. The method can be applied to a computer system, for example a terminal device. The terminal device acquires data, where the data includes application timing feature data related to the current foreground application and at least one of the following real-time data: the system time of the computer system, the current state data of the computer system, and the current location data of the computer system. The terminal device selects, according to at least one of the real-time data, a target machine learning model matching the real-time data from multiple machine learning models, where the multiple machine learning models correspond to different application usage patterns. The terminal device inputs all the acquired data into the target machine learning model, and ranks the importance of the multiple applications installed on the computer system using the target machine learning model. The result of the importance ranking can serve as one of the decision factors when the terminal device performs resource management. Data other than the above real-time data may also be used to determine the target machine learning model.

By setting up multiple machine learning models, determining from them, based on the collected data, the machine learning model most relevant to the terminal device's current scenario, and determining the application importance ranking according to that model, the ranking result better matches the current scenario of the terminal device; in other words, the real-time accuracy of the importance ranking is higher.

In addition, compared with the prior art, which collects a single type or a small amount of data, the application ranking provided in this application is based on multiple kinds of collected data related to the use of the terminal device, and this diversity of data can also improve the accuracy of the application ranking.

In some embodiments, the application timing feature data is data characterizing the time order in which multiple applications are used. Specifically, the application timing feature data may include the k1 most recently used applications, the k2 most likely predecessor applications of the foreground application, and its k3 most likely successor applications, where k1, k2 and k3 are all positive integers.

In some embodiments, the terminal device determines the time period in which the computer system is currently located according to the system time of the computer system, and determines, from a correspondence, the target machine learning model corresponding to that time period, where the correspondence includes multiple time periods and the multiple machine learning models respectively corresponding to them.

In some embodiments, the terminal device determines the semantic location where the computer system is currently located according to the current location data of the computer system, and then determines, from a correspondence, the target machine learning model corresponding to that semantic location, where the correspondence includes multiple semantic locations and the multiple machine learning models respectively corresponding to them.

The above are two ways of determining the target machine learning model. The multiple target machine learning models respectively correspond to multiple patterns in which the user uses applications. The usage patterns may be divided according to one dimension, for example the system time or the current location data, or according to multiple dimensions.

In some embodiments, the terminal device determines the target machine learning model according to at least two of the real-time data. Different from the previous two implementations, in these embodiments the usage pattern corresponding to the target machine learning model is the result of division along multiple dimensions. For example, the user's usage patterns are divided into four types along the two dimensions of time and geographic location: working hours at the company, working hours on a business trip (not at the company), non-working hours at home, and non-working hours at an entertainment venue (not at home). These four usage patterns show their own distinct characteristics, so each corresponds to one machine learning model. The terminal device determines, according to the real-time data, the machine learning model that best matches the current situation.

This application does not limit the way a "usage pattern" is measured. Usage patterns being different means that, under the current measure, the different usage patterns show different characteristics.

In some embodiments, the terminal device may also predict, according to the application usage history, the number N of applications the user frequently uses in the current scenario, and then determine the top-N applications on the basis of the importance ranking. In this way, when performing resource management, the terminal device can treat these N applications as important applications, and in some cases reserve resources for them or take measures to protect them.

Although the importance ranking results of all applications are available, it is difficult for the terminal device to decide exactly how many applications to select when performing resource management. By predicting the number N of applications frequently used by the user, the terminal device can perform resource management in a more purposeful and definite way. Moreover, these N applications are indeed applications of high importance, which makes the resource management more reasonable.
In some embodiments, the terminal device determines the top-N (or fewer, or more) applications in the importance ranking, and reserves resources for the determined applications, or temporarily freezes the remaining applications, or creates a vip queue for each CPU, where the vip queue includes the tasks (processes or threads) of these determined applications, and the execution of the tasks in the vip queue takes precedence over the other execution queues of the CPU.

Application ranking by the method provided in this application requires collecting historical data on application usage. A newly installed application, however, may be ranked low simply because there is too little historical usage data, which does not accurately represent its true importance.

In a second aspect, this application therefore provides a method for ranking the importance of newly installed applications, which amounts to compensating the importance of newly installed applications. The terminal device ranks the newly installed applications according to their weights and selects the top N2 newly installed applications, where a newly installed application is one whose time since installation on the computer system is less than a preset second threshold. Correspondingly, when performing resource management, the terminal device may also consider the ranking of newly installed applications. For example, when reserving resources with a limited number of applications, it considers both the top applications in the aforementioned importance ranking result and the top newly installed applications in the newly-installed-application ranking result. Other types of resource management are similar.

This avoids neglecting, during resource management, those newly installed applications that are relatively important to the user, further improving the effectiveness of resource management.

In some embodiments, the terminal device calculates a score for each newly installed application according to a usage possibility weight and a time decay weight; a newly installed application with a higher score is more important than one with a lower score. The usage possibility weight reflects whether the newly installed application has been used recently; the time decay weight reflects the time difference between the current time and the installation time of the application.
In a third aspect, this application further provides a data collection method and a method for performing model training according to the collected data, which can be used to support the application ranking and resource management provided in other embodiments.

The terminal device collects and stores application data and data related to the computer system, where the application data includes the identifier of the application and the time when the application is used, and the data related to the computer system includes at least one of the following: the time, state data and location data of the computer system at the time the application is used.

Further, the terminal device calculates application timing feature data of multiple applications according to the application data collected and stored over a past period of time; inputs the application data, or the application data together with the data related to the computer system, into a classification model, for example an entropy-increase model, to obtain multiple classifications related to the patterns in which applications are used, where the patterns corresponding to any two classifications are different; and trains a machine learning model for each of the multiple classifications, where the machine learning model is used to implement the importance ranking of applications, and the training input includes at least one of the time when the application is used, the application timing feature data, and the data related to the computer system.

In some embodiments, the model training process may also be performed on the server side. The terminal device sends the collected data to a server, which performs the model training. The trained model may be stored locally or returned to the terminal device. If the model is stored on the server side, the terminal device can request the model from the server when performing importance ranking. Further, the importance ranking may also be performed on the server side, and the terminal device only needs to store the ranking result or request the ranking result from the server when using it.
In a fourth aspect, this application further provides a resource management method, which can be applied to a computer system, for example a terminal device. When the terminal device detects a specific event, it temporarily freezes some applications until a specific time period ends, and then thaws all or some of the frozen applications. The specific event is an event indicating an increase in resource demand, for example an application launch event, a photographing event, a gallery zoom event, a slide event, or a screen-on/screen-off event.

The end of the specific time period is only one of the conditions for thawing; another condition may be detecting that the specific event has ended.

Further, some emergency events may occur before the end of the specific time period and before the end of the specific event; once such an emergency event occurs, the applications related to the emergency event need to be thawed in advance.

When the instantaneous resource demand is high, temporarily freezing some less important applications and releasing part of the resources can guarantee the resource supply of the applications that are perceived by the user and have a large resource demand, thereby preventing these applications from stuttering and improving the user experience. A temporary freeze only freezes an application for a short period of time and then releases it, whereas the prior art usually freezes long-unused applications for a long time and thaws them only when they are requested. Although both are freezing, they differ at least in the usage scenarios and in the specific manner of freezing.

In some embodiments, the terminal device implements the temporary freeze by setting a timer whose duration is set to the specific time period. This approach requires relatively few code changes.

In some embodiments, the temporarily frozen applications include all background applications, or all applications that are in the background and not perceivable by the user. In other embodiments, the temporarily frozen applications include applications of low importance, where the importance of an application is obtained according to the application's historical usage, a machine learning algorithm, and the current scenario data of the system. Specifically, the importance of applications can be obtained by the importance ranking method provided above, and the applications ranked low in importance are selected for temporary freezing.
In a fifth aspect, this application further provides another resource management method, which can be applied to a computer system, for example a terminal device. The terminal device includes multiple physical cores, each corresponding to a first queue and a second queue, where the first queue and the second queue each include one or more tasks to be executed by that physical core. At least one physical core executes the following method: acquire and execute the tasks in the first queue until all tasks in the first queue have been executed, and then acquire and execute the tasks in the second queue.

Important tasks are placed in an additional queue with higher execution priority, and the physical core executes these important tasks first, thereby guaranteeing the resource supply of important tasks.

In some embodiments, the first queue is monitored for tasks whose waiting time exceeds a specific threshold; if such a task exists, it is moved to the first queue corresponding to another physical core.

Due to the restrictions of the Linux operating system, real-time tasks are generally not allowed to move from one physical core to another; the important tasks here refer to non-real-time tasks. When an important task whose wait has timed out is detected, it is moved to the first queue of another idle physical core, thereby avoiding the stutter caused by an important task waiting too long.

In some embodiments, the tasks in the first queue include important tasks (also called key tasks) and the tasks on which the important tasks depend. An important task is a task that affects the user experience, or a task perceivable by the user, or a task of an application of high importance, where the importance of an application is obtained according to the application's historical usage, a machine learning algorithm, and the current scenario data of the system. Dependencies between tasks are, for example, data dependencies, lock dependencies, or binder service dependencies.

Because the execution of an important task depends on the execution of the tasks it depends on, putting both the important task and the tasks it depends on into the first queue, which has higher execution priority, can further increase the execution speed of the important task.
In a sixth aspect, for each method provided in this application, this application further provides a corresponding apparatus, which includes modules for implementing the steps of the method. The modules may be implemented in software, in a combination of software and hardware, or in hardware.

In a seventh aspect, this application further provides a terminal device including a processor and a memory, where the memory is configured to store computer-readable instructions, and the processor is configured to read the computer-readable instructions stored in the memory to implement any one or more of the methods provided in this application.

In an eighth aspect, this application further provides a storage medium, which may specifically be a non-volatile storage medium, configured to store computer-readable instructions; when one or more processors execute the computer-readable instructions, any one or more of the methods provided in this application are implemented.

In a ninth aspect, this application further provides a computer program product including computer-readable instructions; when one or more processors execute the computer-readable instructions, any one or more of the methods provided in this application are implemented.

In some embodiments, this application does not limit how high or how low "high" or "low" importance specifically is, because those skilled in the art can understand that different situations have different requirements. For the ranking method provided in this application, the "importance" of an application is the possibility that the application will be used by the user: the greater the possibility, the higher the importance. In some other embodiments, however, for example in application resource control, the "importance" of an application may be decided according to the current resource control situation; for instance, the importance of an application may be related to whether the user can perceive the application, and so on. In addition, the two resource management methods provided in this application may or may not depend on the ranking method provided in this application.

The key to solving the disorderly scheduling of computer operating system resources is for the system to perceive the importance of applications accurately and in real time, and to implement an optimal resource management policy according to the application importance ranking, ensuring that system resources are fully utilized. In short, the operating system should take the user's perspective, fully identify the user's usage needs and supply resources on demand: for what the user is currently using, fully guarantee resources; for what the user is about to use, prepare in advance; and for what the user currently cares about least, such as applications that self-start invalidly or are launched by association, fully reclaim resources.

The application ranking method provided in this application collects multiple kinds of information from a computer device (for example a smart terminal) in real time and separately trains multiple machine learning models under different classifications related to application usage patterns; when ranking importance, the machine learning model that best matches the user's usage pattern is selected to predict the importance of applications in real time, improving the recognition accuracy of application importance.

Further, on the basis of identifying application importance, reasonable and effective resource management is performed, which guarantees the resource supply of important applications, improves the smoothness of using the computer device, and thereby improves the user experience.
附图说明
为了更清楚地说明本申请提供的技术方案,下面将对附图作简单地介绍。显而易见地,下面描述的附图仅仅是本申请的一些实施例。
图1为一种终端设备的逻辑结构示意图;
图2为一种终端设备中部署的操作系统的逻辑结构示意图;
图3为感知管理装置的部分模块的逻辑结构示意图;
图4为资源管理方法的概要示意图;
图5为资源管理方法中数据采集方法的流程示意图;
图6为资源管理方法中模型训练方法的流程示意图;
图7为根据一个或多个维度划分应用使用规律的原理和效果的示意图;
图8为资源管理方法中应用实时排序方法的流程示意图;
图9为资源管理方法中临时冻结方法的流程示意图;
图10为就绪队列第一状态的示例图;
图11为就绪队列第二状态的示例图;
图12为每个cpu对应的两个队列的示例图;
图13为任务移动方法的流程示意图;
图14为任务移动涉及到的队列变化的示例图。
具体实施方式
为了方便理解本申请的实施例,首先在此介绍本申请实施例描述中会引入的几个要素。
操作系统:是管理计算机硬件与软件资源的计算机程序,同时也是计算机系统的内核与基石。操作系统需要处理如管理与配置内存、决定系统资源供需的优先次序、控制输入与输出设备、操作网络与管理文件系统等基本事务。操作系统也提供一个让用户与系统交互的操作界面。操作系统的型态非常多样,不同机器安装的操作系统可从简单到复杂,可从手机的嵌入式系统到超级计算机的大型操作系统。许多操作系统制造者对它涵盖范畴的定义也不尽一致,例如有些操作系统集成了图形用户界面(graphic user interface,GUI),而有些仅使用命令行界面,而将GUI视为一种非必要的应用程序。终端操作系统一般认为是运行在手机、平板电脑、销售终端等终端上的操作系统,例如目前主流的Android或iOS。
应用:也叫应用软件或应用程序,是一种计算机程序,被设计来为用户实现一组相关联的功能、任务或活动。应用在操作系统上部署,具体可以与系统软件(例如操作系统)绑定部署,例如系统级别的应用(或称系统服务),也可以独立部署,例如目前常见的文字处理应用(例如Word应用)、网页浏览器应用、多媒体播放应用、游戏应用等。
系统资源:本申请中指的是计算机系统内部的资源,包括但不限于内存资源、处理资源和I/O资源中的任意一种或多种。对资源的管理包括很多种实现,例如通过对一些应用执行关闭、冻结或压缩等实现资源回收,或者通过拒绝应用启动实现资源的保留,或者通过预加载应用的方式实现资源为该应用的预留。
前台应用、后台应用以及应用的前后台切换:前台应用是运行在前台的应用的简称。后台应用是运行在后台的应用的简称。例如,当前启动了Word应用和网页浏览器应用,用户正在用Word应用写一封信,那么Word应用为前台应用,网页浏览器应用为后台应用。有些操作系统通过两个列表分别管理这两种应用。在Linux系统中,当一个应用从前台切换到后台或从后台切换到前台时,会触发一个前后台切换事件,系统可通过监控该前后台切换事件来感知应用的前后台切换。在一些实施例中,后台应用又分为后台不可感知应用和后台可感知应用,后台可感知应用是指某些应用即使运行在后台也可以被用户感知到,例如音乐播放应用或导航应用,二者即使后台运行,用户还是能听到音乐或导航的声音。
前续应用和后续应用:一个应用的前续应用是该应用被使用(可以理解为被切换到前台)之前被使用的应用,一个应用的后续应用是该应用被使用之后被使用的应用。应用被使用意味着应用启动或应用从后台切换到前台,应用启动也可以理解为应用(从关闭)切换到前台。
应用时序特征数据:用于表征多个应用被使用的时间顺序的数据,例如最近被使用的应用有哪些、一个应用的前续应用可能是哪个应用、或者一个应用的后续应用可能是哪个应用等。
系统状态数据:或简称为状态数据,用于表示计算机设备或操作系统本身状态的信息。更具体的,可理解为设备中内置的组件或外接组件的状态数据,例如网络连接状态,或耳机、充电线等外接设备的连接状态等。
位置数据和语义位置数据:位置数据是广义的概念,任意表示位置的信息都可以认为是位置数据,例如经纬度。将经纬度等较为精确的位置数据赋予实际意义,例如“家”、“公司”、或“娱乐场所”等,这就是语义位置数据。语义位置数据也属于位置数据的一种。
位置数据和系统状态数据也可以统一理解为设备的场景数据。
应用重要性:应用被使用的概率,或者理解为应用被切换到前台的可能性。本申请中对应用重要性的排序,即是对应用被使用概率的排序,这种排序的基础是根据应用使用历史以及一些其他信息对应用使用概率的预测。
本申请中的“多个”若无特殊说明,表示的是两个或两个以上。本申请中提到的“数据”或“信息”并没有限定存储方式或格式。
本申请提供的方法主要应用于终端设备,该终端设备(通常为移动终端)也可称之为用户设备(user equipment,UE)、移动台(mobile station,MS)、移动终端(mobile terminal)等,可选的,该终端可以具备经无线接入网(radio access network,ran)与一个或多个核心网进行通信的能力,例如,终端可以是移动电话(或称为“蜂窝”电话)、或具有移动性质的计算机等,例如,终端还可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置。举例来说,终端设备可以是蜂窝电话、智能电话、膝上型计算机、数字广播终端、个人数字助理、便携式多媒体播放器、导航系统等。应理解的是,除了移动终端以外,本申请任意实施例提供的方法也可以应用于固定终端,例如个人电脑、销售终端(point of sale,POS)、或自动取款机等;或者也可以应用于服务器等非终端类型的计算机系统。
下面将参照附图更详细地描述涉及本申请的终端设备。需要说明的是,后置用语“单元”和“模块”仅仅是为了描述的方便,并且这些后置用语不具有相互区分的含义或功能。
请参阅图1,为本实施例应用的一种终端设备的结构示意图。如图1所示,终端设备100包括无线通信模块110、传感器120、用户输入模块130、输出模块140、处理器150、音视频输入模块160、接口模块170、存储器180以及电源190。
无线通信模块110可以包括至少一个能使终端设备100与无线通信系统之间或终端设备100与该终端设备100所在的网络之间进行无线通信的模块。例如,无线通信模块110可以包括广播接收模块115、移动通信模块111、无线因特网模块112、局域通信模块113和位置(或定位)信息模块114。
广播接收模块115可以经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器,或者可以是接收预先生成的广播信号和/或广播相关信 息并将它们发送至终端设备100的服务器。广播信号不仅可以包括电视广播信号、无线电广播信号和数据广播信号,而且可以包括电视广播信号和无线电广播信号的组合的形式的信号。广播相关信息可以是关于广播信道、广播节目或广播服务提供商的信息,甚至可以通过移动通信网络来提供。在通过移动通信网络来提供广播相关信息的情况下,广播相关信息可以由移动通信模块111接收。广播相关信息可以以多种形式存在。例如,广播相关信息可以以数字多媒体广播(digital multimedia broadcasting,DMB)系统的电子节目指南(electronic program guide,EPG)的形式存在,或者以手持数字视频广播(digital video broadcast-handheld,DVB-H)系统的电子服务指南(electronic service guide,ESG)的形式存在。广播接收模块115可以使用各种广播系统来接收广播信号。更具体地说,广播接收模块115可以使用诸如地面数字多媒体广播(multimedia broadcasting-terrestrial,DMB-T)、卫星数字多媒体广播(digital multimedia broadcasting-satellite,DMB-S)、媒体前向链路(media forward link only,MediaFLO)、DVB-H和综合业务地面数字广播(integrated services digital broadcasting-terrestrial,ISDB-T)的数字广播系统来接收广播信号。广播接收模块115可以从上述数字广播系统以外的提供广播信号的广播系统接收信号。通过广播接收模块115接收到的广播信号和/或广播相关信息可以存储在存储器180中。
移动通信模块111可以向移动通信网络上的基站、外部终端和服务器中的至少一方发送无线电信号,或者可以从它们中的至少一方接收无线电信号。根据文本/多媒体消息的接收和发送,信号可以包括语音呼叫信号、视频电话呼叫信号和多种格式的数据。
无线因特网模块112可以对应于用于无线接入的模块,并且可以包括在终端设备100中或从外部连接到终端设备100。可以使用无线LAN(WLAN或Wi-Fi)、全球微波接入互操作性(world interoperability for microwave access,WiMAX)、高速下行链路分组接入(high speed downlink packet access,HSDPA)等作为无线因特网技术。
局域通信模块113可以对应于用于局域通信的模块。此外,可以使用蓝牙(Bluetooth)、射频识别(radio frequency identification,RFID)、红外数据协会(infrared data association,IrDA)、超宽带(ultra wide band,UWB)和/或ZigBee作为局域通信技术。
位置信息模块114可以确认或获得移动终端100的位置。通过使用全球导航卫星系统(global navigation satellite system,GNSS),位置信息模块114可以获得位置信息。GNSS是描述围绕地球旋转并向预定类型的无线导航接收器发送参考信号以使得无线电导航接收器可以确定他们在地球表面上的位置或靠近地球表面的位置的无线电导航卫星系统。GNSS可以包括美国的全球定位系统(global positioning system,GPS)、欧洲的伽利略系统、俄国的全球轨道导航卫星系统、中国的罗盘系统和日本的准天顶卫星系统等。
GPS模块是位置信息模块114的代表性示例。GPS模块114可以计算关于一个点或对象与至少三个卫星之间的距离的信息和关于测量得到距离信息时的时间的信息,并且可以对获得的距离信息应用三角测量法以在预定时间根据经度、纬度和高度获得关于一个点或对象的三维位置信息。还可以使用利用三个卫星来计算位置和时间信息并利用另一卫星来校正计算出的位置和时间信息的方法。另外,GPS模块114可以实时地连续计算当前位置并使用定位或位置信息来计算速度信息。
传感器120可以感测终端设备100的当前状态,诸如终端设备100的打开/闭合状态、 终端设备100的位置、用户是否与终端设备100接触、终端设备100的方向、和终端设备100的加速/减速,并且传感器120可以生成用于控制终端设备100的操作的感测信号。例如,在滑盖式电话的情况下,传感器120可以感测滑盖式电话是打开的还是闭合的。此外,传感器120可以感测电源190是否供电和/或接口单元170与外部装置是否连接。传感器120具体可以包括姿态检测传感器、接近传感器等。
用户输入模块130,用于接收输入的数字信息、字符信息或接触式触摸操作/非接触式手势,以及接收与终端设备100的用户设置以及功能控制有关的信号输入等。触控面板131,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板131上或在触控面板131的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板131可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给该处理器150,并能接收处理器150发来的命令并加以执行。例如,用户在触控面板131上用手指单击一个应用图标,触摸检测装置检测到此次单击带来的这个信号,然后将该信号传送给触摸控制器,触摸控制器再将这个信号转换成坐标发送给处理器150,处理器150根据该坐标和该信号的类型(单击或双击)执行对该应用的打开操作。
触控面板131可以采用电阻式、电容式、红外线以及表面声波等多种类型实现。除了触控面板131,输入设备130还可以包括其他输入设备132,其他输入设备132可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆、薄膜开关(dome switch)、滚轮(jog wheel)和拨动开关(jog switch)等中的一种或多种。
输出模块140包括显示面板141,用于显示由用户输入的信息、提供给用户的信息或终端设备100的各种菜单界面等。可选的,可以采用液晶显示器(liquid crystal display,LCD)或有机发光二极管(organic light-emitting diode,OLED)等形式来配置显示面板141。在其他一些实施例中,触控面板131可覆盖在显示面板141上,形成触摸显示屏。
另外,输出模块140还可以包括音频输出模块142、告警器143以及触觉模块144等。
音频输出模块142可以在呼叫信号接收模式、电话呼叫模式或记录模式、语音识别模式和广播接收模式中输出从无线通信模块110接收到的音频数据,或输出存储在存储器180中的音频数据。音频输出模块142可以输出与在终端设备100中执行的功能相关的音频信号,诸如,呼叫信号接收音、消息接收音。音频输出模块142可以包括接收器、扬声器、蜂鸣器等。音频输出模块142可以通过耳机插孔输出声音。用户可以通过将耳机连接到耳机插孔来收听声音。
告警器143可以输出用于指示终端设备100的事件的发生的信号。例如,当接收到呼叫信号、接收到消息、输入键信号或输入触摸时,可以生成告警。告警器143还可以输出与视频信号或音频信号不同形式的信号。例如,通过振动指示事件的发生的信号。
触觉模块144可以生成用户可以感觉到的各种触觉效果。触觉效果的一个示例是振动。还可以控制由触觉模块144所生成的振动的强度和/或模式。例如,不同的振动可以组合地或顺序地输出。触觉模块144可以生成多种触觉效果,除了振动以外,还可以为相对于皮肤表面垂直移动的针列的刺激效果、通过喷孔或吸孔形成的空气喷力效果或空气吸力效果、 摩擦皮肤的刺激效果、电极接触的刺激效果、使用静电力的刺激效果、和利用能够吸热或放热元件再现热或冷的效果等多种效果中的一种或多种。触觉模块144不仅可以通过直接接触来发送触觉效果,而且可以允许用户通过用户的手指或手臂的肌觉来感觉触觉效果。终端设备100可以包括多个触觉模块144。
处理器150可以包括一个或多个处理器,例如,处理器150可以包括一个或多个中央处理器,或者包括一个中央处理器和一个图形处理器。当处理器150包括多个处理器时,这多个处理器可以集成在同一块芯片上,也可以各自为独立的芯片。一个处理器可以包括一个或多个物理核,其中物理核为最小的处理模块。
音视频输入模块160,用于输入音频信号或视频信号。音视频输入模块160可以包括摄像头161和麦克风162。摄像头161可以处理图像传感器在视频电话模式或拍摄模式中获得的静止图像或运动图像的图像帧。处理后的图像帧可以显示在显示面板141上。
经过摄像头161处理的图像帧可以存储在存储器180中或者可以通过无线通信模块110发送到外部设备。终端设备100还可以包括多个摄像头161。
麦克风162可以在呼叫模式、记录模式或语音识别模式中接收外部音频信号,并将接收的音频信号处理为电子音频数据。该音频数据可以接着变换为可以通过移动通信模块111发送到移动通信基站的形式,并在呼叫模式中输出。麦克风162可以采用各种噪声消除算法(或噪声抵消算法),以消除或降低在接收外部音频信号时所产生的噪声。
接口模块170可以用作连接到终端设备100的外部设备的通路。接口模块170可以接收来自外部设备的数据或电力并将数据或电力发送到终端设备100的内部组件,或者向外部设备发送终端设备100的数据。例如,接口模块170可以包括有/无线头戴式耳机端口、外部充电器端口、有线/无线数据端口、存储卡端口、用于连接具有用户识别模块的设备的端口、音频I/O端口、视频I/O端口和/或耳机端口。
接口模块170还可以与用户识别模块连接,用户识别模块是存储用于验证使用终端设备100的权利的信息的芯片。包括用户识别模块的识别设备可以制造成智能卡的形式。因而,识别设备可以经由接口模块170与终端设备100连接。
接口模块170还可以是在终端设备100与外部托架连接时将来自外部托架的电力提供给终端设备100的通路,或者是将用户通过托架输入的各种命令信号发送给终端设备100的通路。从托架输入的各种命令信号或电力可以用作确认终端设备100是否被正确地安装在托架中的信号。
存储器180存储计算机程序,该计算机程序包括操作系统程序182和应用程序181等。典型的操作系统如微软公司的Windows,苹果公司的MacOS等用于台式机或笔记本的系统,又如谷歌公司开发的基于Linux的安卓(Android)系统等用于移动终端的系统。处理器150用于读取存储器180中的计算机程序,然后执行计算机程序定义的方法,例如处理器150读取操作系统程序182从而在该终端设备100上运行操作系统以及实现操作系统的各种功能,或读取一种或多种应用程序181,从而在该终端设备上运行应用。
操作系统程序182中包含了可实现本申请任意实施例提供的方法的计算机程序,从而使得处理器150读取到该操作系统程序182并运行该操作系统后,该操作系统可具备本申请提供的应用实时排序功能和/或资源管理功能等。
存储器180还存储有除计算机程序之外的其他数据183,例如本申请采集获得的应用信息、训练得到的模型以及实时排序的结果等信息,还例如临时存储输入/输出的数据(例如,电话簿数据、消息、静态图像和/或运动图像),当对触摸屏施加触摸输入时输出的各种模式的振动和声音有关的数据等。
存储器180可以是以下类型中的一种或多种:闪速(flash)存储器、硬盘类型存储器、微型多媒体卡型存储器、卡式存储器(例如SD或XD存储器)、随机存取存储器(random access memory,RAM)、静态随机存取存储器(static RAM,SRAM)、只读存储器(read only memory,ROM)、电可擦除可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、可编程只读存储器(programmable ROM,PROM)、磁存储器、磁盘或光盘。
在其他一些实施例中,存储器180也可以是因特网上的网络存储设备,终端设备100可以对在因特网上的存储器180执行更新或读取等操作。
电源190可以在处理器150的控制下接收外部电力和内部电力,并且提供终端设备100的各个组件的操作所需的电力。
各个模块的连接关系仅为一种示例,本申请任意实施例提供的方法也可以应用在其它连接方式的终端设备中,例如所有模块通过总线连接。
本申请提供的方法可用硬件或软件来实现。在硬件实现方式下,可以使用专用集成电路(application specific integrated circuit,ASIC)、数字信号处理器(digital signal processor,DSP)、可编程逻辑器件(programmable logic device,PLD)、现场可编程门阵列(field programmable gate array,FPGA)、处理器、控制器、微控制器和/或微处理器等电子单元中的至少一个来实现本申请的实施方式。在软件实现方式下,诸如过程和功能的实施方式可以使用执行至少一个功能和操作的软件模块实现。软件模块可以以任意适当的软件语言编写的软件程序来实现。软件程序可以存储在存储器180中,并由处理器150读取并执行。
图2以安卓(Android)系统为例,介绍本申请提供的方法的一种实现方式。如图所示,典型的Android系统200包括应用210、应用框架220、系统运行库和安卓运行环境230以及内核240。Android是基于Linux开发的,所以此内核240为Linux内核。
应用210包括浏览器、媒体播放器、游戏应用、文字处理应用等各种应用,有些是系统自带的应用,有些是用户根据需要安装的应用。本申请重点关注这些应用被用户使用的情况,从而为用户最可能使用的应用保障资源供给,避免用户视角的应用卡顿现象。
应用框架220包括安卓服务224,安卓服务包括安卓系统提供的多种系统服务,供其他模块调用使用。例如电源管理器224-a、通知管理器224-b、连接管理器224-c、包管理器224-d、位置管理器224-e、有线访问管理器224-g、蓝牙设备224-h、视图系统224-i等。这些管理器均可以参考现有技术中安卓系统提供的模块实现,本申请不再详述。
系统运行库和安卓运行环境230包括系统运行库231和Android运行环境232。系统运行库231,也叫程序库,包含一些C/C++库,这些库能被Android系统中不同的组件使用。
Android运行环境232包括核心库和Dalvik虚拟机。该核心库提供了Java编程语言核心库的大多数功能。每一个Android应用程序都在它自己的进程中运行,都拥有一个独立的Dalvik虚拟机实例。Dalvik被设计成一个可以同时高效运行多个虚拟系统的设备。Dalvik 虚拟机依赖于Linux内核240的一些功能,比如线程机制和底层内存管理机制等。
Android的核心系统服务依赖于Linux内核240,如安全性,内存管理,进程管理,网络协议栈和各种驱动模型。Linux内核240也同时作为硬件和软件之间的抽象层。另外,Android还对Linux内核做了部分修改,主要涉及两部分修改:
Binder(IPC)驱动器:提供有效的进程间通信,虽然Linux内核本身已经提供了这些功能,但Android系统很多服务都需要用到该功能,因此Android实现了自己的一套Binder机制。
电源管理:主要用于省电。因为Android系统是为移动终端,例如智能手机设计的,低耗电是一个重要目的。
Android系统以软件代码的形式存储在存储器180中,处理器150读取并执行该软件代码以在终端设备100上实现该系统提供的各项功能,其中包括本实施例提供的功能。
本实施例涉及的改进主要在应用框架220。如图所示,除安卓原有的系统服务之外,本实施例的应用框架220还包括感知管理装置,该感知管理装置包括数据采集模块221、应用排序模块222、以及决策执行模块223。
应理解的是,感知管理装置也可以作为Android的系统服务提供给其他组件使用。其中的模块221-223可以分别做三个系统服务,也可以适当合并或进一步细分。
数据采集模块221采集终端设备100的场景数据,该场景数据包括设备所处的位置数据(例如GPS位置数据)、设备中内置的组件或外接组件的状态数据等。该状态数据例如可以是显示面板布局状态、网络连接状态、耳机或充电线连接状态、摄像头/音视频组件/传感器的状态等。可参考图1了解终端设备100的多种组件。这里的"组件"既包括硬件也包括软件。
数据采集模块221还采集与应用的使用相关的应用数据,例如应用类型、应用名称、应用被使用的时间等。采集的数据直接或经过处理后被存储到存储器180或发送给其他模块。
应用排序模块222根据数据采集模块221实时采集的数据和机器学习模型,确定实时场景下所有应用的重要性排序结果。在实现实时重要性排序之前,应用排序模块222还根据数据采集模块221采集的历史数据进行训练获得上述机器学习模型。
决策执行模块223:根据应用排序模块222输出的应用的实时重要性排序制定系统资源如何管理的决策,并直接执行或调用其他模块执行相应的资源管理措施。应理解的是,在具体实现中,决策执行模块223并非在做所有资源管理决策时都需要应用的实时重要性排序。
需要说明的是,当以上模块需要存储中间数据或最终数据时,需要存储功能,这些存储功能可以由独立的存储模块实现,也可以融合在三个模块中分别实现。
在其他一些实施例中,上述模块221-223也可以在系统运行库和安卓运行环境230或内核240实现,也可以部分在应用框架220实现,部分在其他层次实现。
如图3所示,应用排序模块222包括两大子模块:模型训练模块310和实时排序模块320。
模型训练模块310根据数据采集模块221采集的历史数据训练出应用于应用实时排序的模型,并将其存储到存储器180中,供实时排序模块320做实时的应用排序时调用。具体的,模型训练模块310主要训练三个模型:维度划分模型、排序模型以及N值预测模型。本申请中的"模型"是广义的概念,参数、公式、算法或对应关系等都可以认为是模型。
维度划分模型用于对应用使用规律进行分类,该分类可以是一维分类也可以是多维分类。“维度”是使用规律分类的依据,例如时间、空间等。以时间为例,以一天24小时为划分依据,将应用使用规律划分为工作时间段和非工作时间段两个分类。具体的,根据当前的系统时间和训练获得的维度划分模型确定终端当前属于工作时间段还是非工作时间段。这里实际反映的即是用户当前处于工作状态还是非工作状态,在这两种状态下用户对应用的使用规律呈现出不同的特点。进一步的,维度还可以是多维,比如时间维度和地点维度。例如工作时间段又划分为公司办公、出差办公;非工作时间段又划分为家和出游等。为方便理解,这里的举例采用了带有语义的描述,例如“办公”、“出游”等,应理解的是具体实现中并不以此为限。
N值预测模型是用来确定特定分类下用户经常使用的应用的个数的。N的值为正整数。以时间维度分类为例,N值反映的是特定时间段内用户经常使用的应用的个数。
实时排序模块320是一个实时处理模块,它能够根据数据采集模块221实时采集的数据和模型训练模块310训练的模型确定应用的实时重要性排序。实时排序模块320利用模型训练模块310提供的功能完成三个功能:维度划分预测321、应用排序预测322以及N值获取323。
应理解的是,模型训练模块310和实时排序模块320内的每个功能都可以看做是一个功能模块或单元。
如图3所示,数据采集模块221可能会对采集到的全部或部分数据进行清洗/加工等处理,再存储到存储器180中或提供给实时排序模块320。
需要说明的是,以上几个模块或功能并非在所有实施例中都要应用,在本申请的一些实施例中,仅选择部分功能实现也可以。
下面介绍本实施例提供的应用排序方法以及资源管理方法,亦即前述模块或单元的具体实现。
如图4所示,为本实施例提供的应用排序方法以及资源管理方法的全局方案示意图。该图示出了四个主要方法流程,包括第一阶段数据采集、第二阶段模型训练、第三阶段应用排序以及第四阶段资源管理。在描述的过程中,为了给出方案的全局概貌,将按照以上顺序进行描述,但应理解的是,在方案的具体实现中,这四个阶段并非全然是串行执行的。
终端设备运行之后,先经过第一阶段即数据采集的过程,采集的主要是用户使用应用的数据以及终端设备的系统状态数据、位置数据等信息,这些信息可以反映用户对应用的使用规律,用于在将来预测用户对应用的使用概率。采集到的数据部分会直接存储,部分可能需要加工后再存储(①)。第一阶段的执行主体是数据采集模块221。
经过一段时间的数据采集之后,进行第二阶段的模型训练。针对排序模型本申请采用的是机器学习的训练方法,一般需要一定量的历史数据输入(②),但是具体需要采集多长时间才启动第一次训练,本申请不做限定。机器学习模型的训练过程就是模型建立过程, 实质是一个参数学习与调优的过程。对模型进行训练,便是模型参数的学习更新。训练出来的机器学习模型或者模型参数会被存储起来,用于第三阶段排序使用。
在排序模型的训练之前基于机器学习算法或其他算法将采集的历史数据从一个维度或多个维度分为多类,多个类别中的应用使用规律呈现出不同的特征,针对这多个类别分别训练各自的排序模型。因此第二阶段一个重要的输出就是多个分类以及多个分类分别对应的多个机器学习模型(或模型参数)(③)。第二阶段的执行主体是模型训练模块310。进一步的,模型训练模块310还根据N值预测算法计算用户在特定分类下经常使用的应用个数N并存储起来用于后续资源管理。N值预测的过程也可以放到第三阶段或第四阶段。
终端设备可周期性启动训练过程,或者在指定的时间点上启动训练过程,或者在采集到的数据与已有模型不匹配时启动训练过程等。
第二阶段模型训练完成之后就可以进行第三阶段应用排序。第三阶段根据实时数据(④)和第二阶段输出的机器学习模型(⑤)来进行应用的排序。除了这两项之外,还需要一些历史数据(⑤)的辅助。实时数据包括实时采集到的当前应用以及当前应用的使用时间、终端设备的当前时间(即当前的系统时间)、终端设备的当前状态数据和终端设备的当前位置数据等其中的一种或多种,而有些数据,例如多个应用被使用的时间顺序则还需要历史数据中记载的多个应用的使用时间来计算获得。第三阶段的执行主体是实时排序模块320,输出的是所有安装在终端设备上的应用的排序结果(⑥)。
第三阶段应用排序的启动可以是周期性的,也可以是事件触发性的。其中,触发事件可以是应用切换、应用安装、应用卸载、终端设备状态变化等,这些对用户来说都可能造成应用重要性的变化。这种情况下排序后的结果可以存储起来,供第四阶段资源管理需要的时候使用。另外,触发事件也可以是第四阶段资源管理触发的请求事件,即在资源管理的调用下执行应用排序。
第四阶段资源管理,可以访问并使用存储器中存储的最新的排序结果(⑦),也可以根据需求实时调用(⑧)第三阶段应用排序获得排序结果。然后,根据应用的重要性排序对资源进行管理。
下面示例性地分别介绍这四个阶段的详细实现方式。
请参考图5为第一阶段数据采集过程的示意图。首先数据采集模块221设置事件监听机制(S501),用于监听关键事件是否发生,若检测到关键事件(S502),则触发数据采集过程(S503)。采集的数据可以直接存储(S505),也可以经过清洗/加工(S504)后再存储。
数据清洗(Data cleaning)指的是计算机对数据进行重新审查和校验的过程,目的在于删除重复信息、纠正存在的错误,并提供数据一致性。数据加工指的是计算机对数据进行类型、格式等变化,或对数据进行数学变换等处理。
一方面,数据采集模块221采集终端设备100的场景数据和与应用使用相关的应用数据等历史数据,采集到的数据经过清洗/加工后存储到存储器180中,作为历史数据供模型训练模块310训练生成各种模型。具体的,数据采集模块221实时监控终端设备的状态、应用使用情况等,当条件满足时采集数据并存储。例如,数据采集模块221实时监控和缓存用户行为状态(也可以理解为终端设备行为状态,例如运动或静止)、应用使用状态、系统状态和位置状态的改变。当发生应用前后台切换或其他类型的关键事件之后,将切到前台的应用包名、当前时间以及上述状态的最新数据等信息记录到持久化存储介质中。
另一方面,数据采集模块221还在需要排序时实时采集数据,采集的数据作为实时数据成为实时排序模块320的输入,以便于实时排序模块320对应用进行实时排序。
举例来说,数据采集模块221采集的数据以及采集方式如下表所示。应理解的是,在一些实施例中,可以根据需要酌情减少或增加一些数据,本实施例只是举例,无意以此表中的数据为限。
表1
(表1以图表形式列出了采集的各项数据及其获取方式,例如前台应用的包名及其切换到前台的时间、蓝牙/网络/耳机/充电线连接状态、通知列表状态、应用类型、GPS位置信息等,以及Android系统中相应的获取模块或函数。)
Android系统提供了函数接口来获取一些数据,表1中描述了这些数据的一种获取方法所涉及到的模块(例如PackageManager)或函数(例如getActiveNotifications),可参考图2,本领域技术人员可以根据需要使用其他获取方式,本申请并不限制。除了Android系统之外,在其他系统中所调用的模块和函数可能是不一样的,本申请也不以Android系统为限。
表1中前台应用指的就是当前运行在前台的应用,这种应用被认为是当前用户正在使用的、对用户而言相对重要的应用。有些应用运行在后台用户也会使用,比如音乐播放应用,对此类型应用的采集时间可以放到此类应用启动,即第一次切换到前台的时刻。
监测到关键事件时触发以上数据采集过程,此处的关键事件可以包括以下事件中的一种或多种:前后台切换事件、应用的安装事件、应用的卸载事件、系统状态发生变化引起的通知事件或地理位置信息发生变化引起的通知事件。其中,地理位置信息发生变化也可以是先识别语义地理位置是否发生变化,例如是否从家变化到了办公地,如果语义地理位置发生变化再启动数据采集。关键事件不局限于以上示例。概括来说,关键事件主要取决于需要采集的信息,如果监控到需要采集的信息可能发生变化,就可能需要启动上述数据采集过程。
采集的数据中应用切换到前台的时间有可能就是当前系统时间,比如监测到前后台切换事件时执行数据采集,那么采集的当前系统时间就可以认为是当前前台应用切换的时间。如果在其他事件的触发下执行数据采集,那么可以通过当前前台应用在切换时记录的一个用于表示切换时间的时间戳来获取应用切换到前台的时间。该时间戳就是应用在被切换到前台时利用当时的系统时间记录的。
本申请中“应用切换到前台的时间“有时也称为“应用被使用的时间”。
采集的数据从最后的使用方式上又可以分为两类:1)直接可使用的数据,例如表1中示例的蓝牙/网络/耳机/充电线连接状态、通知列表状态和应用类型等数据。这些数据通过特征提取就可以当成训练模型的输入参数直接使用。2)需要经过加工的数据,主要有两类:与应用时序特征相关的信息和GPS位置信息,这些数据需要进一步加工转换成其它的形式作为参数输入。下面主要就第2)类作详细介绍。
GPS位置信息,需要针对语义位置进行聚类,例如家和办公地。当一旦发生这两种地理位置的转换,系统将会推送一条语义位置信息发生变更的广播。在其他一些实施例中,GPS位置信息也可以直接使用。
应理解的是,数据加工过程可以发生在存储之前,也可以在模型训练使用该数据之前再做加工。
与应用时序特征相关的信息主要包括前台应用的包名和应用切换到前台的时间。通过收集这两类信息的历史记录(可以认为是用户使用应用的轨迹记录),进一步可以统计得到以下三类信息:a)最近被使用的k1个应用,b)当前前台应用的k2个最大可能的前续应用,以及c)当前前台应用的k3个最大可能的后续应用。k1、k2和k3的取值可以相等也可以不相等。a)b)c)三类信息统称为应用时序特征数据。
a)是当前前台应用被使用之前用户最近使用的k1个应用,可以直接根据存储的历史数据中各个应用以及各个应用的使用时间获得。例如,假设k1=2,当前应用使用时间是18:00,根据各个应用的使用时间确定在当前应用使用时间之前最近使用应用的时间分别是15:45,15:30,那么k1个应用就是15:45和15:30使用的这两个应用。在其他一些实施例中,k1也可以包括当前前台应用。
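作为一个示意,最近被使用的k1个应用可以按如下Python代码从历史记录中统计得到(仅为示意,历史记录的数据结构和字段为假设):

```python
# 示意:从历史记录中统计当前前台应用之前最近被使用的k1个应用
# history为按采集顺序记录的(应用包名, 使用时间)列表,时间为同一天内HH:MM格式
def recent_apps(history, current_time, k1=2):
    earlier = [(pkg, t) for pkg, t in history if t < current_time]
    earlier.sort(key=lambda x: x[1], reverse=True)   # 按使用时间从近到远排序
    return [pkg for pkg, _ in earlier[:k1]]

history = [("com.example.mail", "15:30"), ("com.example.music", "15:45"),
           ("com.example.browser", "18:00")]
print(recent_apps(history, "18:00", k1=2))  # ['com.example.music', 'com.example.mail']
```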
b)和c)则需要从用户使用应用的历史轨迹里利用数学方法统计出应用的多阶关联关系。下面就多阶关联关系的统计模型作描述。
本实施例用矩阵U[M*M]来表示终端设备100上安装的M个应用的关联关系。U[i*j]表示从应用i直接切换到应用j,这种切换的次数,其中i和j均为小于或等于M的正整数。例如,在某一刻发生了应用i切换到应用j的操作,那么U[i*j]这条记录加1。
需要说明的是,这里针对的是前台应用,即当前位于前台的应用从应用i直接变成了应用j,换句话说,用户使用应用i接着又使用了应用j,这即是应用i直接切换到应用j。
进一步的,应用间的跳转次数也可以考虑在内。应用i直接切换到应用j说明二者可能强相关,但应用i经过几次跳转之后切换到应用j也能体现二者之间一定可能性的关联。基于这种实现,可以增加一个跳转次数参数,例如发生过应用i经过d次跳转切换到应用j,则U[i,j][d]这条记录加1。跳转次数太多,说明应用之间的关联关系很弱,可以不考虑,所以可以设置最多D次跳转(例如D取值为5)。
那么,应用i与应用j的关联矩阵中每个元素U'[i,j]就可以被定义为:
U'[i,j]=αU[i,j][0]+βU[i,j][1]+...+γU[i,j][D]        (1)
基于上述描述方法:U'[,j](矩阵的第j列)则可表示从其它应用跳转到应用j的可能性;U'[i,](矩阵的第i行)则可表示其它应用从应用i跳转过来的可能性。从这个矩阵中可以获取到一个应用最大可能的k2个前续应用和最大可能的k3个后续应用。例如,当前前台应用为应用v,则可以从U'[,v]中依次挑选出最大的k2个值,这k2个值对应的行值就是应用v最大可能的k2个前续应用;从U'[v,]中依次挑选出最大的k3个值,这k3个值对应的列值就是应用v最大可能的k3个后续应用。
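下面给出关联矩阵统计以及前续/后续应用提取的一个示意性Python示例(仅为示意,其中的使用轨迹、跳转权重α、β、γ等均为假设取值):

```python
import numpy as np

# 示意:统计应用间多阶关联关系并提取最大可能的前续/后续应用
M, D = 4, 2                      # 应用个数、最大跳转次数(假设值)
U = np.zeros((M, M, D + 1))      # U[i][j][d]: 应用i经过d次跳转切换到应用j的次数

trace = [0, 1, 2, 1, 3, 1]       # 假设的一段前台应用使用轨迹(以应用编号表示)
for idx, i in enumerate(trace):
    for d in range(D + 1):
        if idx + 1 + d < len(trace):
            j = trace[idx + 1 + d]
            U[i][j][d] += 1

weights = [1.0, 0.5, 0.25]       # 对应公式(1)中的α、β、γ(假设值)
U1 = sum(w * U[:, :, d] for d, w in enumerate(weights))   # 关联矩阵U'

v, k2, k3 = 1, 2, 2              # 当前前台应用v及k2、k3取值(假设)
pre_apps = np.argsort(U1[:, v])[::-1][:k2]    # 第v列最大的k2个值对应的行:前续应用
post_apps = np.argsort(U1[v, :])[::-1][:k3]   # 第v行最大的k3个值对应的列:后续应用
print(pre_apps, post_apps)
```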
应用时序特征数据和其他采集的数据一起存储到存储器180中,供下一阶段模型训练使用。在应用实时排序的过程中也会用到上述方法去获得前台应用对应的应用时序特征数据。
请参考图6为第二阶段模型训练的过程示意图。经过一段时间的数据采集之后,存储器180中已经存储了一些数据,这些数据反映了应用使用的规律。这种规律需要通过机器学习的方式形成可重复利用的模型。模型训练过程即是做这个。如图所示,从存储器180中读取数据(S601),对数据进行特征提取(S602)等操作,然后训练维度划分模型,即将用户对应用的使用习惯按照一个或多个维度进行分类(S603)。分类之后针对不同的分类分别训练排序模型(S604),并将训练结果存储到存储器中,供第三阶段应用排序使用。进一步的,该阶段还可以进行用户经常使用的应用个数N的预测(S605)。下面分别对涉及的三个模型算法进行介绍。
a)训练维度划分模型(S603)
用户使用应用的习惯在某些特定维度上可能存在断裂,例如在时间维度上,用户工作时间段和非工作时间段(也可称之为休息时间段)使用应用的习惯就会有很大差异,这种情况下如果笼统地构造一个排序模型去对应用进行排序,那么就可能出现极大的偏差。训练维度划分模型即是探索在某些维度上去对用户习惯作更精细的划分,消除这种断裂带来的误差,进一步提升排序的精准度。
对于一个或多个维度的不同的分类,可通过对原始数据应用一些划分算法获得一个划分结果。如图7所示,原始数据从维度X上被划分为三种分类。进一步的,每种分类还可以在维度Y上再划分出更多分类,不再详述。这也是下面将介绍的熵增算法的基本原理。
本实施例将以时间维度为例,将时间划分为工作时间段和非工作时间段两类。以一天时间为准,方法就是找到一个或者两个时间点,能够将用户的工作和非工作时间区分开来。具体实现上又可以分为:1)单线划分,以[0:00~x]、[x~24:00]形式的划分,只有一个变量x;2)双线划分,以[0:00~x1]、[x1~x2]、[x2~24:00]形式的划分,需要两个变量x1和x2。下面以双线划分为例。
首先从存储器180读取数据。这里用到的数据主要有应用包名和应用被使用的时间。
将一天时间划分成24个时间段,每个时间段包括1小时。根据读取的数据统计每个时间段内用户使用应用的情况。本实施例中用二维矩阵S来统计任意时间段内应用的使用情况,例如S[h][i]就表示了在h时间段,应用i总共被使用的次数。从而,一段时间[h1,h2]内,应用i总共被使用的次数记为S_i[h1,h2],该段时间内应用i被使用的次数占所有应用被使用的次数的占比记为P_i[h1,h2],二者可分别由下面公式(2)和(3)两式计算获得:

S_i[h1,h2] = Σ(h=h1..h2) S[h][i]        (2)

P_i[h1,h2] = S_i[h1,h2] / Σ(j=1..M) S_j[h1,h2]        (3)

其中M为应用个数。P_i[h1,h2]也可以称之为应用i在该时间[h1,h2]内被使用的频率。
从而,该段时间[h1,h2]内用户使用应用的信息熵E[h1,h2]定义如下:

E[h1,h2] = -Σ(i=1..M) P_i[h1,h2]·log(P_i[h1,h2])        (4)

这样,x1和x2的一次双线划分从而产生的熵计算如下:

E(x1,x2) = f(0,x1)·E[0,x1] + f(x1,x2)·E[x1,x2] + f(x2,24)·E[x2,24]        (5)

其中f(x1,x2)为[x1,x2]这一段时间内所有应用的使用次数在全天所有应用的使用次数中的占比,计算如下:

f(x1,x2) = Σ(i=1..M) S_i[x1,x2] / Σ(i=1..M) S_i[0,24]        (6)
从而,问题的求解最终被转化成了寻找两个划分时间x1和x2,使得这种划分下的熵值最小,即:

arg min E(x1,x2)        (7)

举例来说,最后得出的结果可能是x1和x2分别为9:00和18:00,那么三段分别是[0:00~9:00]非工作,[9:00~18:00]工作,[18:00~24:00]非工作,用表2表示为:
表2
[0:00~9:00] 非工作
[9:00~18:00] 工作
[18:00~24:00] 非工作
在本实施例中将[0:00~9:00]和[18:00~24:00]均认为是非工作时间段,针对非工作时间段训练了一个排序模型。在其他一些实施例中,也可以针对[0:00~9:00]和[18:00~24:00]分别训练一个排序模型。另外,需要说明的是“工作”和“非工作”的语义是根据当前大多数人的生活习惯给予的,实际上该方法的意义是反映了应用被使用的历史呈现出两种不同的规律,本申请并不限定一定要采用“工作”和“非工作”这种语义划分。
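下面给出上述双线划分求解过程的一个示意性Python示例(仅为示意,统计矩阵S中的数据为随机生成的假设数据):

```python
import math
import random

# 示意:穷举双线划分(x1, x2),选择使加权信息熵最小的划分
random.seed(0)
M = 5                                                   # 应用个数(假设值)
S = [[random.randint(0, 10) for _ in range(M)] for _ in range(24)]  # S[h][i]:h时段应用i使用次数

def count(i, h1, h2):
    """公式(2):应用i在[h1, h2)内被使用的次数"""
    return sum(S[h][i] for h in range(h1, h2))

def entropy(h1, h2):
    """公式(3)(4):[h1, h2)内各应用使用频率及信息熵"""
    total = sum(count(i, h1, h2) for i in range(M))
    if total == 0:
        return 0.0
    e = 0.0
    for i in range(M):
        p = count(i, h1, h2) / total
        if p > 0:
            e -= p * math.log(p)
    return e

def split_entropy(x1, x2):
    """公式(5)(6):一次双线划分产生的加权熵"""
    day_total = sum(count(i, 0, 24) for i in range(M))
    e = 0.0
    for h1, h2 in ((0, x1), (x1, x2), (x2, 24)):
        seg_total = sum(count(i, h1, h2) for i in range(M))
        e += seg_total / day_total * entropy(h1, h2)
    return e

best = min(((x1, x2) for x1 in range(1, 23) for x2 in range(x1 + 1, 24)),
           key=lambda p: split_entropy(*p))
print("最优划分点:", best)     # 对应公式(7)的arg min
```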
b)训练排序模型(S604)
终端设备上应用的重要性和应用的使用概率相关,使用概率越高,重要性越大。应用的使用概率在系统的不同状态下又可能不同。例如,网络连接的状态下,各个应用的使用概率和网络断开的状态下各个应用的使用概率是不同的,这就导致网络连接和断开情况下应用重要性排序是不同的。所以在对应用进行排序时需要考虑当前系统状态。
需要说明的是,网络连接或断开状态仅是举例,与终端设备相关的其他状态也可以在考虑范围之内,比如耳机是否连接,蓝牙是否连接、充电线是否连接或终端设备当前的位置等。
概括来说,训练排序模型主要包括以下几步。
读取数据。从存储器180中读取数据。这里训练需要用到的数据主要包含:应用被使用的时间、应用时序特征信息、系统状态信息和语义位置信息。应用时序特征信息在前面已经详述内容和获取过程。系统状态信息主要包括:耳机/充电线/网络的连接状态。语义位置信息例如可以包括家和办公室等。
特征提取。通过统计、识别和/或矢量化等手段,将读取的数据转换成机器学习算法可处理的特征向量/特征矩阵。具体包括:s1)应用被使用的时间和应用时序特征信息:使用矢量化、时间片和工作日/非工作日识别方法将应用被使用的时间和应用时序特征信息转换成向量。例如,应用被使用的时间被分成两部分,日期和时间段,将周一,周二,……周日分别映射为0,1,……,6,时间段则是分段映射,比如[9:00~18:00]是一段,所有这个时段的时间映射为同一个数字。s2)系统状态信息:从读取的系统状态信息中抽取特征,使用离散/枚举化方法将系统状态信息转换成向量。s3)语义位置信息:将读取的语义位置信息进行编码从而转换成向量。s4)归一化处理,使用最大/最小值法对数值进行[0,1]归一化,并使用白化方法对数值进行方差归一化。s6)特征矩阵组合,将各个维度的数据合并为特征矩阵,提供给算法训练模块。
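下面给出特征提取过程的一个简化Python示例(仅为示意,其中的映射关系和取值均为假设):

```python
# 示意:将应用被使用的时间、系统状态和语义位置转换成特征向量
WEEKDAY_MAP = {"Mon": 0, "Tue": 1, "Wed": 2, "Thu": 3, "Fri": 4, "Sat": 5, "Sun": 6}
SEGMENTS = [(0, 9, 0), (9, 18, 1), (18, 24, 2)]        # 时间分段映射(假设值)
LOCATION_MAP = {"home": 0, "office": 1, "other": 2}    # 语义位置编码(假设值)

def time_features(weekday, hour):
    """日期映射为0~6,时间段分段映射为同一数字"""
    seg = next(code for start, end, code in SEGMENTS if start <= hour < end)
    return [WEEKDAY_MAP[weekday], seg]

def state_features(headset, charger, network):
    """系统状态离散/枚举化:连接为1,断开为0"""
    return [int(headset), int(charger), int(network)]

def normalize(vec):
    """最大/最小值法对数值进行[0,1]归一化"""
    lo, hi = min(vec), max(vec)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in vec]

sample = time_features("Mon", 10) + state_features(True, False, True) + [LOCATION_MAP["office"]]
print(normalize(sample))   # 合并各维度数据形成一条特征向量
```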
模型训练与存储。将数据划分成多组,分别进行模型训练获得多个模型。数据的划分也可以发生在特征提取之前,对数据划分之后分别进行特征提取。举例来说,按照本申请前述实施例提供的时间维度上的划分,将周一至周五[9:00~18:00]的数据和周一至周五[0:00~9:00][18:00~24:00]的数据分别训练两个模型,并将生成的排序模型(机器学习模型参数信息)和对应的时间段存储到数据库中去。周六和周日全天的数据可以作为非工作时间段去训练。
存储结果的一个例子如下:
表3
[9:00~18:00](工作时间段) 排序模型1(机器学习模型参数信息)
[0:00~9:00][18:00~24:00](非工作时间段) 排序模型2(机器学习模型参数信息)
下面以机器学习算法softmax为例详细介绍一下训练过程。选择Softmax算法的原因是该算法是增量式的学习算法。增量式学习算法意味着训练过程可保证在原先学习的模型参数基础上,融合新训练样本的特征数据,对参数进行进一步的调优,即无需保留原有的样本数据,这样可减少数据的存储量,节省存储空间。
通过统计、识别和/或矢量化等手段,将获取的应用时序特征信息、系统状态信息等数据转换成softmax算法可处理的特征向量s=(s1,s2,.....sW),W为特征向量s的长度。应用集合使用一个向量表示a=(a1,a2,.....aM),aj表示第j个应用,M表示手机上安装的应用个数。目的是求出排序函数(即排序模型)h_θ(s),当然求出排序函数的前提是求出参数θ。
因此,根据softmax算法,排序函数定义为:

h_θ(s) = [ p(a1|s;θ), p(a2|s;θ), ..., p(aM|s;θ) ]^T,其中 p(aj|s;θ) = exp(θ_j^T·s) / Σ(l=1..M) exp(θ_l^T·s)        (8)
此模型的代价函数为:

J(θ) = -(1/k)·Σ(i=1..k) Σ(j=1..M) 1{y(i)=j}·log( exp(θ_j^T·s(i)) / Σ(l=1..M) exp(θ_l^T·s(i)) ) + (λ/2)·Σ(j=1..M) Σ(w=1..W) (θ_jw)²        (9)
其中,k表示训练输入的样本数量,M表示类别数目,即应用个数,W表示特征向量s的长度。λ表示权重(λ>0)。1{.}是示性函数,其取值规则为:
1{表达式为真}=1
1{表达式为假}=0
y(i)=j表示第i个样本中应用包名在向量a中的标识是j。因此,若第i个样本中应用包名在向量a中的标识是j为真,则1{y(i)=j}=1,否则,1{y(i)=j}=0。
模型训练采用梯度下降法或者牛顿法最小化J(θ),求出θ。其中梯度下降法如下:

∇θ_l J(θ) = -(1/k)·Σ(i=1..k) [ s(i)·( 1{y(i)=l} - p(al|s(i);θ) ) ] + λ·θ_l        (10)

θ^(j+1) = θ^(j) - α·∇θ J(θ^(j))        (11)

α是一个固定参数,j表示迭代次数。
本实施例每24小时进行一次训练,不断更新识别模型,使模型越来越接近用户的真实使用习惯。
通过以上模型训练求出参数θ之后,就得知了排序函数h_θ(s),也就是实时排序时会用到的排序函数。将求得的信息按照表3的形式存储到存储器180中,供实时排序时使用。
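下面给出softmax排序模型训练的一个简化Python示例(仅为示意,样本为随机生成,λ、α等取值为假设,对应公式(8)至(11)的计算过程):

```python
import numpy as np

# 示意:带权重衰减的softmax回归,用梯度下降最小化代价函数J(θ)
np.random.seed(0)
k, W, M = 200, 6, 4                      # 样本数、特征长度、应用个数(假设值)
s = np.random.rand(k, W)                 # 特征矩阵
y = np.random.randint(0, M, size=k)      # 每个样本对应的应用标识
theta = np.zeros((M, W))
lam, alpha = 1e-4, 0.5                   # 权重λ与学习率α(假设值)

def h(theta, s):
    """排序函数h_θ(s):输出每个应用的使用概率,对应公式(8)"""
    z = s @ theta.T
    z -= z.max(axis=1, keepdims=True)    # 数值稳定处理
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(300):                     # 迭代更新θ,对应公式(10)(11)
    p = h(theta, s)                      # k×M 概率矩阵
    ind = np.zeros_like(p)
    ind[np.arange(k), y] = 1             # 示性函数1{y(i)=j}
    grad = -(ind - p).T @ s / k + lam * theta
    theta -= alpha * grad

ranking = np.argsort(h(theta, s[:1])[0])[::-1]   # 对一个实时样本输出应用重要性排序
print(ranking)
```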
c)N值预测算法(S605)
N的值为正整数,反映的是某一个特定分类内用户经常使用的应用的个数,例如一个特定时间段内(比如工作时间段)用户经常使用的应用的个数。N值预测算法用于确定N的值。N值可以用于资源管理,因为这N个应用是用户经常使用的,所以资源管理的时候可以适当保护这N这个应用的资源。
依照前述,本实施例是以时间为维度将用户使用规律做了两个分类,如表3所示。那么,对N值的计算也可以以时间为维度进行同样的分类。输出结果如表4所示:工作时间段和非工作时间段分别计算一个N值。这样N值的预测更准确。
在其他一些实施例中,也可以所有分类只计算一个N值,或计算N值的分类方式和表3的分类方式不同。在另外一些实施例中,也可以采用非时间维度进行N值的计算,比如位置维度,举例来说,与家相关的数据计算一个N值,与办公地相关的维度计算一个N值。
N值可以是预先计算存储起来,例如如表4存储,也可以资源管理需要N值的时候实时调用N值预测算法获得N值。
表4
[9:00~18:00](工作时间段) N1
[0:00~9:00][18:00~24:00](非工作时间段) N2
N值预测算法的一种实现方式:获取前一天或多天同一时间段的应用的使用记录,统计每个应用的使用次数,然后根据使用次数从大到小排序,依次找到最大的V个应用,若该V个应用的使用次数之和能达到该时间段所有应用被使用的次数之和的90%,则N=V。
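这种实现方式可以用如下Python代码示意(仅为示意,使用次数数据为假设):

```python
# 示意:统计某时间段内各应用使用次数,取累计占比首次达到90%的前V个应用,N=V
def predict_n(usage_counts, ratio=0.9):
    counts = sorted(usage_counts.values(), reverse=True)
    total = sum(counts)
    acc = 0
    for v, c in enumerate(counts, start=1):
        acc += c
        if acc >= ratio * total:
            return v
    return len(counts)

usage = {"app_a": 50, "app_b": 30, "app_c": 10, "app_d": 6, "app_e": 4}
print(predict_n(usage))   # 前3个应用使用次数占比达到90%,故N=3
```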
N值预测算法的另一种实现方式:二阶滑动平均。第t天特定时间段(或其他维度的某种分类)上需要保护的应用个数Nt是第t-1、t-2天同样时间段上需要保护的应用个数的平均值加上方差的加权,如公式(12)所示:

Nt = ( N(t-1) + N(t-2) ) / 2 + α·σ        (12)
α为权重系数,σ是数据中所有Ni的方差值。α取值例如为0.005。第1天和第2天的N值可以使用前一种实现方式计算。
按照公式(12)每次计算一个N值都要计算所有历史Ni的方差值σ,这样需要存储下所有历史的Ni。本实施例中为了减少历史Ni的存储量,使用增量式计算法来评估所有Ni的方差值:

σ² = [ P·( σh² + (μh - μ)² ) + Q·( σn² + (μn - μ)² ) ] / ( P + Q )        (13)

其中,μ是所有数据的平均值,μ = ( P·μh + Q·μn ) / ( P + Q );σh²是历史数据方差,μh是历史数据平均值,P是历史数据的个数;σn²是新增数据的方差,μn是新增数据的平均值,Q是新增数据的个数。
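下面给出公式(12)和(13)计算过程的一个Python示例(仅为示意,历史N值等数据为假设):

```python
# 示意:用二阶滑动平均加方差加权预测第t天的N值,方差采用增量式计算
def combine_variance(p, hist_mean, hist_var, q, new_mean, new_var):
    """公式(13):由历史数据与新增数据的均值/方差合并得到全体数据的均值与方差"""
    mean = (p * hist_mean + q * new_mean) / (p + q)
    var = (p * (hist_var + (hist_mean - mean) ** 2) +
           q * (new_var + (new_mean - mean) ** 2)) / (p + q)
    return mean, var

def predict_next_n(n_t1, n_t2, sigma, alpha=0.005):
    """公式(12):Nt = (N(t-1)+N(t-2))/2 + α·σ"""
    return (n_t1 + n_t2) / 2 + alpha * sigma

# 用法示意:历史上有5个N值(均值6、方差1.2),新增2个N值(均值7、方差0.25)
_, sigma2 = combine_variance(5, 6.0, 1.2, 2, 7.0, 0.25)
print(round(predict_next_n(7, 6, sigma2), 3))
```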
请参考图8为第三阶段应用实时排序过程的过程示意图。
实时排序模块320设置事件监听机制,检测触发实时排序的关键事件(S801)。监测到关键事件之后(S802),触发数据采集模块221采集实时数据(S803),该实时数据可以包括终端设备的系统时间、终端设备的当前状态数据和终端设备的当前位置数据。
关键事件主要包含以下几种:1)前后台切换;2)应用的安装和卸载;3)系统的状态发生变化:即耳机/网络的连接状态发生变化;4)收到情景智能广播通知,即通知语义地理位置(公司/家)发生了变化,等。
根据采集到的终端设备的系统时间和前述维度划分模型的训练结果(表2),确定终端设备当前所处的时间维度上的分类,即为工作或非工作时间段(S804)。根据排序模型的训练结果(表3),选择相关分类的排序模型或仅获得模型参数,然后根据历史数据以及实时采集的数据等,预测出实时场景下所有应用的重要性排序(S805)。这里的历史数据就是历史采集并存储的数据,具体可参考表1。
应理解的是,待输入排序模型的数据可能需要经过预处理,比如与当前前台应用相关的时序特征数据需要将采集到的实时数据和部分历史数据进行统计后获得,如果模型需要输入语义位置数据,那还需要通过聚类将实时采集到的终端设备位置信息转化为语义位置数据,或根据预设的GPS位置区间与语义位置的对应关系获得相应的语义位置数据。需不需要预处理,以及做怎样的预处理,跟模型训练的时候匹配即可,本申请对此并不做具体限定。
需要说明的是,表2和表3仅是方便理解的举例,具体实现过程中,二者可以合并为一个表,即一个对应关系,而根据系统当前时间可以一次性确定出排序模型或模型参数。
需要说明的是,本实施例在S804分类使用的是实时采集的系统时间来分类,在其他实施例中,如果分类的维度不同,也可以根据需要利用实时采集的其他数据和/或历史数据来分类。
最后实时排序模块320将排序的结果存储(S806)起来,供第四阶段资源管理时查询使用。在其他一些实施例中,第三阶段的排序也可以是第四阶段实时调用时才发生,然后实时返回排序结果。
进一步的,图8中未示出,排序之后还可以通过N值预测算法或通过预先存储的对应关系(如表4)获得该时间段内的用户经常使用的应用的个数N。该步骤不是必须的,可以在资源管理真正需要N值时再实时计算。
针对用户近期才安装到手机上的新应用,在利用上述方法对应用排序的时候,可能会因为采集到的数据过少而导致对应用的重要性评估过低。因此本申请进一步还提出一种对新安装应用的补充机制,通过计算一个分值Score对新安装应用进行排序。
排序新安装应用主要考虑两方面因素:1)使用可能性权重,本实施例中也可以称之为LRU权重:查看当前新安装的应用有没有出现在LRU(least recently used)列表中,以判断新安装的应用是否在最近被使用过,若出现,则LRU为1,不出现LRU为0;2)时间衰减权重:计算当前时间距离新安装应用安装时间的时间差衰减权重。LRU列表中存储了最近使用的多个应用的标识,例如列表长度为8,则按照时间顺序存储最近使用的8个应用,每插入一个新使用的应用,就删除一个最久之前使用的应用。
基于这两种因素,定义新安装应用Score如下:
Score = α1×LRU + α2×e^(-t)        (14)

其中,α1为LRU的权重系数,α2为时间衰减的权重系数,t为当前时间距离应用安装时间的时间差的离散值。计算所有新安装应用的Score,并进行排序。
在其他一些实施例中,也可以采用其它形式来确定新安装的应用是否在最近使用过。例如,记录新安装应用上一次的使用时间,然后判断上一次使用时间距离当前时间的时间差是否小于某个阈值等。
在需要进行N个用户经常使用的应用的推荐时,通过前述softmax算法的排序结果推荐排序在前的N-x个,通过新安装应用的排序结果推荐x个,两种推荐方法一共推荐N个应用,作为当前系统最重要的应用,以用于接下来的资源管理。
其中,x个新安装应用可以是依次选出的Score最大的x个应用,而x的取值是多少可以根据需要设定,例如为2。或者设置条件:被推荐的新安装应用的Score必须大于某个阈值(例如0.5)和/或推荐个数最多为2。具体如何设置条件,本申请不做限定。
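下面给出新安装应用得分计算以及与重要性排序结果合并推荐的一个示意性Python示例(仅为示意,α1、α2、阈值及应用名均为假设):

```python
import math

# 示意:按公式(14)计算新安装应用的Score,并与重要性排序结果合并推荐N个应用
def new_app_score(in_lru, t, a1=0.6, a2=0.4):
    lru = 1 if in_lru else 0
    return a1 * lru + a2 * math.exp(-t)

def recommend(ranked_apps, new_apps, n, x=2, threshold=0.5):
    """ranked_apps: 按重要性排序的应用列表;new_apps: {应用: (是否在LRU列表中, 安装时间差离散值)}"""
    scored = sorted(((new_app_score(*v), app) for app, v in new_apps.items()), reverse=True)
    picked_new = [app for score, app in scored if score > threshold][:x]
    picked_old = [app for app in ranked_apps if app not in picked_new][:n - len(picked_new)]
    return picked_old + picked_new

ranked = ["wechat", "browser", "mail", "music", "camera"]
new = {"new_game": (True, 1), "new_tool": (False, 3)}
print(recommend(ranked, new, n=4))   # 前N-x个来自重要性排序,x个来自新安装应用排序
```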
下面介绍第四阶段系统资源管理的实施例,下面介绍的方法可以部分或全部由决策执行模块223实现。
系统资源管理的主要目的是保证前台应用或重要性高的应用的资源供给。本实施例提供一种应用进程的临时冻结(也可以称之为瞬时冻结)方法,用以在资源需求量较大时,临时冻结一些不重要的进程,以为重要进程的使用提供更充分的资源,避免因为资源不足造成应用卡顿而影响用户体验。
进程冻结通常指的是将进程置为不可中断的睡眠状态或停止状态,并放入等待队列的过程。“进程”可以理解为正在运行的应用的实例,在有些描述中,二者可以被等同理解。
在传统的台式计算机操作系统内,冻结技术主要用来在系统休眠(hibernate)或挂起(suspend)的时候,将进程列表中所有进程的状态都置于睡眠或停止状态,并保存所有进程的上下文到硬盘上,这样在从休眠或挂起恢复时,系统就可以通过解冻所有被冻结的进程,把自己恢复到之前的运行状态。
针对终端操作系统,不同的终端操作系统对冻结技术的使用不太相同。部分终端操作系统采用伪后台技术,即在应用进程切换到后台且在后台运行一段时间后,冻结该进程,直到用户重新将该应用切换到前台,才解冻该进程继续运行。Android操作系统在内存不足时,OOM(out of memory)或LMK(low memory killer)模块会触发内存回收,通过杀进程的方式来回收内存,但这样导致被杀应用重新打开的时间变长,同时用户原先的状态也无法保存,降低了用户体验,为解决该问题引入冻结技术。具体的,系统在内存不足时(例如内存剩余量低于某个阈值)先冻结进程,把该进程的上下文交换到ROM中,并释放内存,这样在下次重新打开该应用时,通过解冻该进程来继续运行,既避免了应用被杀丢失原先的状态,也降低了应用重启的时间,提高了用户体验。
总之,现有的冻结技术主要用在对长期不用的进程的处理,或是对资源的长久释放,直到有用户对该进程有需求才解冻。
本实施例提出一种瞬时冻结技术,通过对其他应用进程的临时冻结,使用临时让路的方式来保证对重要应用或重要场景所需资源的及时给足,并在资源需求降低后,重新自动恢复解冻这些被冻结的应用进程,在提高该应用的处理和响应速度的同时,又避免了另一些应用被长期冻结后,导致的体验下降(如下载被长期冻结后,就无法继续下载了),提高了用户体验。
当某些对资源瞬时需求量较大的特定事件(例如应用的启动、activity切换、拍照、滑动、亮/灭屏、图库缩放等)被触发时,为了避免非重要应用的执行对这些特定事件造成影响,强制要求临时冻结非重要应用,并在特定事件完成后,根据应用的重要性排序,恢复部分被临时冻结的应用的运行。
这里的"特定事件"一般都与重要应用相关,重要应用比如前台应用或后台可感知应用,后台可感知应用例如为在后台运行的音乐播放应用。被临时冻结的非重要应用可以根据需要选择,例如,根据前述实施例提供的应用的重要性排序结果选择除前N个应用之外的其他所有应用进行临时冻结(这里的N为前述实施例中计算出的用户经常使用的应用的个数),或选择所有后台应用进行临时冻结,或者根据需要选择其他当前判定非重要或预测一段时间内非重要的应用。
如图9所示,检测函数监测特定事件(S901),通过活动管理器服务(Activity Manager Service,AMS)的打点上报函数上报特定事件(S902),临时冻结部分应用(S903)。在调用临时冻结的函数时设置冻结时间,例如1.5s,这个冻结时间可以是预先设置的,不能改变,也可以是用户可配置的。冻结时间通过定时器实现,检测到定时器到期(S904),然后解冻应用(S907)。
另一方面,也可以监听特定事件的执行,当检测到特定事件结束(S905)之后,解冻应用。这里的特定事件结束的检测也可以通过活动管理器服务的打点上报函数上报的方式来实现。步骤S904和S905可以理解为两种正常解冻的条件,实现其中任意一种或两种都可以。
解冻的时候可以解冻全部应用,也可以仅解冻部分应用。选择需要被解冻的应用可以根据应用的重要性排序选择重要性相对较高的部分应用进行解冻,其他的因为重要性很低,可能很长一段时间还是不会被用户使用,所以可以继续冻结。
在特定事件完成之前,并且指定的定时时长未到,但发生了指示被冻结应用运行环境改变的事件(S906),例如被冻结的应用切换成了前台、被冻结的应用退出、被冻结的应用收到binder异步消息、被冻结的应用的通知消息被用户点击等,则该被冻结的应用可以提前解冻(S907)。这里“指示被冻结应用运行环境改变的事件”一般与某一被冻结应用有关,该事件的发生通常会使所述某一被冻结应用的重要性突然提高,需要及时被解冻。同样的,对指示被冻结应用运行环境改变的事件的监测也可以由前述的活动管理器服务的打点上报函数来实现。指示被冻结应用运行环境改变的事件可能为以下事件中的任意一项:1)应用置前台;2)应用退出;3)应用重安装或者更新版本;4)动态壁纸应用被设置为动态壁纸;5)应用widget添加到桌面;6)即时通讯(instance message,IM)/短消息服务(short message service,SMS)/邮件(Email)类应用有网络数据包到达;7)IM/SMS/Email类应用从无网络到网络连接上;8)其它应用访问冻结应用的提供者(provider)或者服务(service)时;9)系统或其它应用通过binder同步调用冻结进程;10)解锁后,检测到在桌面上有widget的应用;11)运动开始,之前冻结的使用GPS和Sensor的应用;12)冻结应用处理耳机按键;13)被临时冻结的应用收到binder异步消息;14)点击通知栏进入冻结应用;15)进入灭屏状态等。
需要说明的是,以上1)-15)事件有的涉及到具体应用,那就意味着需要将该应用解冻(如果该应用在冻结状态的话),有的不涉及具体应用,那就选取部分或全部被冻结的应用解冻,例如15)检测到系统进入灭屏状态,意味着可能没有用户可见的前台应用需要保护了,因此可以解冻所有被冻结的应用,以便于它们尽快继续运行。
可见,除了两种正常解冻的方式外,本实施例还提出一种紧急或提前解冻的方式。在特定事件完成之前,并且指定的定时器时长未到,但发生了被冻结应用运行环境改变的事件,则该被冻结的应用可以被提前解冻,以避免因为冻结影响该应用的正常使用。
用户连续操作(如滑动、或连续启动应用等)将会连续触发临时冻结。为了保证一些后台应用在连续瞬时冻结的操作下有机会运行,本实施例还提出了一种周期性冻结的方法。 在对应用进行冻结操作之前,会先检测所述应用被解冻后的运行时间。对于解冻后累计运行时长已达t1秒的应用(t1是个预先设置的时间长度值,例如10s),可以连续实施临时冻结。而对于解冻后累计运行时长不足t1的应用可以周期实施临时冻结,对这种应用先冻结t2秒然后再解冻t3秒,直至满足连续冻结条件。这里t2,t3都是预先设置时间长度值(t2<t1,t3<t1),例如:t2=1.5s,t3=3s。通过这样一种周期性解冻的方法,能够很好的保证一些后台应用在前台长期交互操作时能够得到一些运行的机会,不会出现后台一些重要应用长期得不到运行的情况,甚至出现某些应用被连续冻结后导致异常的发生(例如下载被长期冻结后,就无法继续下载了)。在检测到指示所述被冻结应用运行环境改变的事件后,无论所述应用有无被解冻,该应用的解冻后累计运行时长都会被清0,这样就可以保证各后台应用在接收到所述运行环境改变的事件后都能够运行一段时间,在优先保障前台应用的瞬时高资源需求外,也保证了后台应用的运行和体验。
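下面用一段示意性的Python伪代码说明上述临时冻结与周期性冻结的控制逻辑(仅为示意,其中freeze/unfreeze仅以打印代替对进程的实际冻结/解冻操作,t1、t2、t3等取值为假设):

```python
import time

T1, T2, T3 = 10.0, 1.5, 3.0   # 连续冻结门限t1、临时冻结时长t2、周期解冻运行时长t3(假设值)

class TempFreezer:
    """示意:临时冻结的决策逻辑,记录每个应用解冻后的累计运行时长"""
    def __init__(self):
        self.run_time = {}          # 应用 -> 解冻后累计运行时长
        self.unfrozen_at = {}       # 应用 -> 上次解冻(或开始运行)的时刻

    def _update_runtime(self, app, now):
        if app in self.unfrozen_at:
            self.run_time[app] = self.run_time.get(app, 0.0) + now - self.unfrozen_at[app]

    def can_freeze(self, app, now):
        """检测到特定事件时调用:累计运行时长达到t1可连续冻结,否则需先保证其解冻后运行满t3"""
        self._update_runtime(app, now)
        if self.run_time.get(app, 0.0) >= T1:
            return True                                      # 连续实施临时冻结
        return now - self.unfrozen_at.get(app, 0.0) >= T3    # 周期实施:解冻后已运行t3秒才再次冻结

    def freeze(self, app, now):
        print(f"freeze {app} for {T2}s")   # 此处以打印代替实际冻结,t2秒后由定时器解冻
        self.unfrozen_at[app] = now + T2

    def on_env_change(self, app):
        self.run_time[app] = 0.0           # 运行环境改变:解冻该应用并将累计运行时长清0
        print(f"unfreeze {app}")

f = TempFreezer()
now = time.time()
f.unfrozen_at["download_app"] = now - 4.0   # 假设该应用4秒前被解冻并持续运行
if f.can_freeze("download_app", now):
    f.freeze("download_app", now)
```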
本申请另一个实施例提供一种重要任务的调度方法,这里的任务可以是进程,也可以是线程。
理想状态下,每个任务都能从cpu那里获得相同的时间片,并且同时运行在该cpu上,但实际上一个cpu同一时刻只能运行一个任务。也就是说,当一个任务占用cpu时,其他任务就必须等待。完全公平调度器(completely fair scheduler,CFS)调度算法是现有的Linux系统中的一种较为通用的调度算法。CFS调度算法为了实现公平,必须惩罚当前正在运行的任务,以使那些正在等待的进程下次被调度。具体实现时,CFS通过每个任务的虚拟运行时间(virtual run time,vruntime)来衡量哪个任务最值得被调度。CFS中的就绪队列是一棵以vruntime为键值的红黑树,vruntime越小的进程越靠近整个红黑树的最左端。因此,调度器每次选择位于红黑树最左端的那个任务,该任务的vruntime最小。
需要说明的是,本实施例中提到的“cpu”指的是计算机设备中的最小处理单元,也可以称之为处理核,或简称为核。
如图10所示,针对该二叉树以及二叉树上的任务T1-任务T6,cpu执行任务的顺序为T1-T3-T5-T2-T4-T6。假设任务T1为其中的重要任务。
vruntime是通过任务的实际运行时间和任务的权重(weight)计算出来的,是一个累加值。在CFS调度器中,将任务优先级这个概念弱化,而是强调任务的权重。一个任务的权重越大,则说明这个任务更需要运行,因此它的虚拟运行时间就越小,这样被调度的机会就越大。
由于CFS调度算法的公平性,即便把重要任务的权重设置的很大,也会出现重要任务被执行完一次之后,等待至少一个其他任务运行的情况。任务T1执行一次之后,任务T1的vruntime值变大,CFS调度任务T3之后任务T1因为vruntime值变大所以被插入到队列末尾,如图11所示。那么任务T1则要等到任务T3-T5-T2-T4-T6全部被执行一遍后才能再次被执行。图11为一个示例,任务T1被执行一次之后,也可能重新入队到其他位置,但是无论怎样,CFS为了实现公平,总会让任务T1出现等待的情况。因为任务T1是对用户而言的重要任务,那么这样的等待将有可能造成系统卡顿。
本实施例提出一种重要任务的调度方法,除前述就绪队列之外,为每个cpu都再新建一个运行队列(下称vip队列),其结构和前述就绪队列类似。将重要任务放到该vip队列中。
重要任务指的是对用户体验影响较大的任务,可以包括前述实施例中提到的重要性排序在前的N个应用的所有线程(或进程);或者重要任务包括这N个应用的线程中的关键线程,例如UI线程和渲染线程;或者重要任务包括当前前台应用的UI线程和渲染线程等关键线程。
重要任务又分为静态和动态两种,静态重要任务的识别一般发生在用户态,例如前台应用的用户界面(user interface,UI)线程和渲染(render)线程。静态重要任务一般在应用重要性改变的时候才取消其重要性。动态重要任务为静态重要任务所依赖的重要任务,其识别一般发生在内核态,一旦依赖解除,其重要性就取消。动态重要任务包括静态重要任务直接依赖的任务,还可以包括间接依赖的任务。
这里的依赖可以为数据依赖、锁依赖、binder服务依赖或控制流依赖等。数据依赖即B任务的执行必须依赖于A任务的输出;锁依赖即B任务的执行需要A任务释放的某种锁;控制流依赖为B任务在执行逻辑上必须等待A任务执行完才能执行;binder服务依赖指的是任务A调用一个binder函数(类似远程过程调用),要求任务B完成某项功能,并返回运行结果,这样任务A对任务B产生了binder服务依赖,属于控制流依赖的一种具体实例。
以Linux系统为例,介绍一种依赖关系检测的具体实现过程。在任务控制块task_struct中添加两个字段:static_vip和dynamic_vip,分别用来表示静态和动态重要任务标志值。static_vip=1,表示该任务为静态重要任务;dynamic_vip不等于0,表示该任务为动态重要任务。对于动态重要任务,由于一个任务可能会被多个其他任务同时依赖,并且依赖的原因可以相同或不同,如mutex互斥锁、rwsem读写信号量、binder服务依赖等,所以我们对dynamic_vip这个字段进行划分,分成3个或更多类型,分别表示mutex、rwsem和binder依赖,更多类型不再详述,也可以预留几个类型以后扩展。每种类型使用8位存储。这样在每次调用对应的依赖函数时,该字段对应的区块值加1,在每次依赖函数完成后,该值相应的减1,直到所有字段都等于0,再取消该任务的重要属性,重新放入就绪队列中运行。
下面以mutex锁和binder依赖为例,详细说明其步骤:
重要任务A中调用mutex_lock函数获取一个互斥锁,在该函数中会检测锁的状态,如果获取失败,则表明该锁已被其他任务获取了,当前任务A被挂起,进入睡眠状态,我们在其中添加了代码,用来获取当前该锁的持有者(任务B),并且对该任务结构的字段dynamic_vip的值对应位加1,然后再把该任务B移到vip队列中进行调度运行。当任务B释放该锁mutex_unlock时,判断其task_struct中的dynamic_vip的相应值,并减1,如果为0,则把该任务从vip队列中移除。其他锁的运行过程类似。
重要任务A调用普通任务B的binder函数时,在内核的binder驱动中的binder_thread_write函数中,会先找到任务B的任务结构task_struct,并设置其dynamic_vip的值,然后把相应的函数id和参数传递给任务B,并唤醒任务B开始运行,然后重要任务A等待其返回运行结果;当任务B通过binder_thread_read函数完成相应的功能后,在返回运行结果到任务A前,检查dynamic_vip的值,如果为0,则把该任务从vip队列中移除。
按照上述方式识别出多个重要任务后,将这些任务插入vip队列。插入原则可以和前述就绪队列一样,即根据每个任务的vruntime的值插入。如图12示例,vip队列中包括两个重要任务T1和T7,当前T7的vruntime小于T1的vruntime。
每次获取下一个需运行的任务时,cpu先检查vip队列中是否有任务需要运行,如有则选取该队列中的任务运行,如该队列为空,才选取就绪队列中的任务运行。这样就保证了重要任务都能在其他非重要任务之前运行,为重要任务实现了类似插队的功能。
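下面给出每个cpu对应两个队列、优先从vip队列中选取任务的一个示意性Python示例(仅为示意,以最小堆模拟按vruntime选取最左节点的过程):

```python
import heapq

# 示意:每个cpu维护vip队列和就绪队列,取任务时先查vip队列
class Cpu:
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.vip_queue = []      # 以(vruntime, 任务名)为元素的最小堆
        self.ready_queue = []

    def enqueue(self, task, vruntime, vip=False):
        heapq.heappush(self.vip_queue if vip else self.ready_queue, (vruntime, task))

    def pick_next(self):
        if self.vip_queue:                      # vip队列非空:优先执行其中vruntime最小的任务
            return heapq.heappop(self.vip_queue)[1]
        if self.ready_queue:                    # vip队列为空:再从就绪队列选取
            return heapq.heappop(self.ready_queue)[1]
        return None

cpu0 = Cpu(0)
cpu0.enqueue("T3", 5); cpu0.enqueue("T5", 6)
cpu0.enqueue("T1", 8, vip=True); cpu0.enqueue("T7", 4, vip=True)
print([cpu0.pick_next() for _ in range(4)])    # ['T7', 'T1', 'T3', 'T5']
```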
进一步的,当vip队列中的任务数较多时,为了避免vip队列中的任务排队等待,可以适时检查当前各cpu的vip队列中的任务是否有延迟。若存在延迟的任务,则确定该任务是否可移动,若可移动则移动该任务到另一个空闲的cpu的vip队列中。通过这样的方式实现了重要任务的迁移(或者称搬家),保证了重要任务的及时运行,避免了卡顿,进一步提高了用户体验。
具体的,参考图13,内核在时钟中断到来(S1301)时检查当前cpu对应的vip队列中是否有任务等待时长超过阈值(例如10ms),即判断是否有延迟任务(S1302)。如果有任务的等待时长超过该阈值,再检查该任务的数据和/或指令是否还存在于缓存(cache)中,即判断该任务是否可移动(S1303)。若该任务的数据和/或指令全部或部分已不在缓存中(即cache不再hot),则在当前cpu所在的cpu簇(cluster)中选择一个vip队列中没有任务等待,也没有实时任务的目的cpu(S1304),将该任务迁移到该目的cpu(S1305)。任务的迁移可以通过调用内核提供的函数migration实现。检测cache是否hot可以通过调用内核提供的函数task_hot实现。
具体实现的时候,可以一次性识别出所有可迁移的任务,然后执行迁移;也可以依次处理vip队列中的任务。
一个任务的等待时长为当前时间与该任务入队时间的时间差,前提是该任务在这期间从未被运行过;或者一个任务的等待时长为当前时间与该任务上一次被运行的时间的时间差。
检测一个队列中的任务的等待时间是否超过阈值通常需要先获取队列的锁,在时钟中断到来时实现上述方法,可一定程度上避免死锁。
移动一个任务时可以将该任务移动到处理效率比原cpu高的cpu上。通过这样的方式可以进一步提高重要任务的处理效率。例如,当前终端设备8个核(cpu0-cpu7)一般分为两个等级,4个小核和4个大核,那么迁移的时候如果原始cpu为小核,那可以选择一个大核作为目的cpu,将重要任务迁移过去,参考图14所示。
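下面给出vip队列中延迟任务检查与迁移逻辑的一个简化Python示例(仅为示意,等待时长阈值、cache检测及cpu数据结构均为假设):

```python
import time

THRESHOLD = 0.01   # 等待时长阈值,示意取10ms

def find_delayed_tasks(vip_queue, now):
    """检查vip队列中等待时长超过阈值且未被运行过的任务;任务以字典表示,记录入队时间等信息"""
    return [t for t in vip_queue if now - t["enqueue_time"] > THRESHOLD and not t["running"]]

def migrate_if_possible(task, src_cpu, cpus):
    if task.get("cache_hot"):                 # 数据/指令仍在缓存中:不迁移
        return False
    for cpu in cpus:                          # 在同一cpu簇中选择vip队列为空且无实时任务的目的cpu
        if cpu is not src_cpu and not cpu["vip_queue"] and not cpu["has_rt_task"]:
            src_cpu["vip_queue"].remove(task)
            cpu["vip_queue"].append(task)
            return True
    return False

now = time.time()
cpu0 = {"vip_queue": [{"name": "T1", "enqueue_time": now - 0.02, "running": False, "cache_hot": False}],
        "has_rt_task": False}
cpu1 = {"vip_queue": [], "has_rt_task": False}
for t in find_delayed_tasks(cpu0["vip_queue"], now):
    migrate_if_possible(t, cpu0, [cpu0, cpu1])
print(cpu1["vip_queue"][0]["name"])   # T1 被迁移到空闲cpu的vip队列
```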
通过为每个cpu创建一个vip队列,并将重要任务放到该队列中,使得重要任务优先于就绪队列中的其他任务执行,可以保证重要任务的执行效率,由于重要任务和系统的卡顿体验相关,从而可以一定程度上避免出现用户可感知的系统卡顿,提升终端设备的用户体验。
需要说明的是,前述实施例中提出模块或模块的划分仅作为一种示例性的示出,所描述的各个模块的功能仅是举例说明,本申请并不以此为限。程序设计人员可以根据需求合并其中两个或更多模块的功能,或者将一个模块的功能拆分从而获得更多更细粒度的模块,以及其他变形方式。
以上描述的各个实施例之间相同或相似的部分可相互参考。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即 可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
以上所述,仅为本申请的一些具体实施方式,但本申请的保护范围并不局限于此。

Claims (33)

  1. 一种在计算机系统中管理资源的方法,其特征在于,包括:
    获取数据,所述数据包括与当前的前台应用相关的应用时序特征数据,所述数据还包括以下实时数据中的至少一种:所述计算机系统的系统时间、所述计算机系统的当前状态数据和所述计算机系统的当前位置数据;
    根据所述实时数据中的至少一种从多个机器学习模型中选择与所述实时数据匹配的目标机器学习模型;
    将获取的所述数据输入所述目标机器学习模型以对所述计算机系统上安装的多个应用进行重要性排序;
    根据所述重要性排序的结果执行资源管理。
  2. 如权利要求1所述的方法,其特征在于,获取的所述数据包括所述计算机系统的系统时间;
    所述选择与所述实时数据匹配的目标机器学习模型包括:
    根据所述计算机系统的系统时间确定所述计算机系统当前所处的时间段;
    根据所述计算机系统当前所处的时间段从对应关系中确定与所述计算机系统当前所处的时间段对应的目标机器学习模型,所述对应关系包括多个时间段以及该多个时间段分别对应的多个机器学习模型。
  3. 如权利要求1所述的方法,其特征在于,获取的所述数据包括所述计算机系统的当前位置数据;
    所述选择与所述实时数据匹配的目标机器学习模型包括:
    根据所述计算机系统的当前位置数据确定所述计算机系统当前所处的语义位置;
    根据所述计算机系统当前所处的语义位置从对应关系中确定与所述计算机系统当前所处的语义位置对应的目标机器学习模型,所述对应关系包括多个语义位置以及该多个语义位置分别对应的多个机器学习模型。
  4. 如权利要求1-3任意一项所述的方法,其特征在于,还包括:
    采集并存储应用数据和所述计算机系统的相关数据,其中,所述应用数据包括所述应用的标识,所述应用被使用的时间,所述计算机系统的相关数据包括以下数据中的至少一种:在所述应用被使用的时间所述计算机系统的时间、状态数据和位置数据。
  5. 如权利要求4所述的方法,其特征在于,还包括:
    根据过去一段时间内采集并存储的所述应用数据计算多个应用的应用时序特征数据;
    将所述应用数据,或所述应用数据和所述计算机系统的相关数据输入分类模型以获得与应用被使用的规律相关的多个分类,所述多个分类为一维分类或多维分类,其中任意两个分类分别对应的两种规律存在不同;
    分别针对所述多个分类中的每个分类训练机器学习模型,所述机器学习模型用于实现应用的重要性排序,所述训练的输入包括所述应用被使用的时间、所述应用时序特征数据, 所述训练的输入还包括所述计算机系统的相关数据中的至少一种。
  6. 如权利要求4或5所述的方法,其特征在于,在监测到以下事件中的一种或多种时,启动所述采集步骤:前后台切换事件、应用的安装事件、应用的卸载事件、所述计算机系统的状态数据发生变化引起的通知事件或所述计算机系统的位置数据发生变化引起的通知事件。
  7. 如权利要求5或6所述的方法,其特征在于,所述多个分类为从时间维度上划分的多个分类。
  8. 如权利要求7所述的方法,其特征在于,所述多个分类包括工作时间段和非工作时间段。
  9. 如权利要求1-8任意一项所述的方法,其特征在于,包括:
    在监测到以下事件中的一种或多种时,启动所述获取数据的步骤:前后台切换事件、应用的安装事件、应用的卸载事件、所述计算机系统的状态数据发生变化引起的通知事件或所述计算机系统的位置数据发生变化引起的通知事件。
  10. 如权利要求1-9任意一项所述的方法,其特征在于,在根据所述重要性排序的结果执行资源管理之前,还包括:
    获取需要保护的应用的个数N,所述N值满足如下条件:在过去被使用的次数最多的N个应用的使用次数占所有应用在该段时间内被使用次数之和的比例大于预设的第一阈值,其中所述N为大于0且小于或等于M的整数;
    所述根据所述重要性排序的结果执行资源管理包括:根据所述N和所述重要性排序的结果执行资源管理。
  11. 如权利要求10所述的方法,其特征在于,所述根据所述N和所述重要性排序的结果执行资源管理包括:
    从所述重要性排序的结果中识别排序在前的N1个应用,并对所述N1个应用或剩余的其他应用执行资源管理,其中N1为小于或等于N的正整数。
  12. 如权利要求10所述的方法,其特征在于,还包括:
    根据新安装应用的权重对所述新安装应用进行重要性排序,选取其中排序在前的N2个新安装应用,其中所述新安装应用安装到所述计算机系统上的时间小于预设的第二阈值;
    相应的,所述根据所述N和所述重要性排序的结果执行资源管理包括:
    从所述重要性排序的结果中识别排序在前的N-N2个应用,对所述N-N2个应用和所述N2个新安装的应用执行资源管理,或对剩余的其他应用执行资源管理。
  13. 如权利要求12所述的方法,其特征在于,所述根据新安装应用的权重对所述新安装应用进行重要性排序包括:
    根据使用可能性权重和时间衰减权重计算每个新安装应用的得分,得分高的新安装应用的重要性高于得分低的新安装应用的重要性;
    其中,所述使用可能性权重用于反映新安装应用最近有没有被使用;所述时间衰减权重用于反应当前时间距离应用安装时间的时间差。
  14. 如权利要求1-13任意一项所述的方法,其特征在于,所述根据所述重要性排序的结果执行资源管理包括以下管理措施中的任意一种或多种:
    为识别出来的重要性高的应用预留资源;
    临时冻结识别出来的重要性低的应用,直至特定时间结束;和
    为每个CPU创建一个vip队列,所述vip队列中包括重要性高的应用的任务,所述vip队列中各个任务的执行优先于所述CPU的其他执行队列。
  15. 如权利要求1-14任意一项所述的方法,其特征在于,所述应用时序特征数据包括最近被使用的k1个应用,所述前台应用的k2个最大可能的前续应用以及k3个最大可能的后续应用,其中,k1、k2和k3均为正整数。
  16. 如权利要求1-15任意一项所述的方法,其特征在于,所述当前位置数据为语义位置数据。
  17. 如权利要求1-16任意一项所述的方法,其特征在于,所述当前状态数据为以下数据中的一种或多种:表示网络连接或网络断开的数据,表示耳机连接或断开的数据,表示充电线连接或断开的数据,以及表示蓝牙连接或断开的数据。
  18. 一种终端设备,其特征在于,所述终端设备包括处理器和存储器,所述存储器用于存储计算机可读指令,所述处理器用于读取所述存储器中存储的所述计算机可读指令执行如权利要求1-17任意一项所述的方法。
  19. 一种临时冻结应用的方法,其特征在于,包括:
    在检测到特定事件时,临时冻结部分应用;
    当特定时间段结束或监测到所述特定事件结束时,解冻被冻结的全部或部分应用,或者,当检测到一个或多个被冻结的应用的运行环境发生改变时,解冻所述一个或多个被冻结的应用。
  20. 如权利要求19所述的方法,其特征在于,所述特定事件为指示资源需求量升高的事件。
  21. 如权利要求19或20所述的方法,其特征在于,所述特定事件包括以下事件中的至少一种:应用的启动事件、拍照事件、图库缩放事件、滑动事件以及亮/灭屏事件。
  22. 如权利要求19-21任意一项所述的方法,其特征在于,所述临时冻结为通过设置定时器实现临时冻结,所述定时器的时长被设置为所述特定时间段。
  23. 如权利要求19-22任意一项所述的方法,其特征在于,所述临时冻结的部分应用包括所有后台应用;或者所述临时冻结的部分应用包括重要性低的应用,其中应用的重要性根据应用的历史使用情况、机器学习算法以及系统的当前场景数据获得。
  24. 如权利要求19-23任意一项所述的方法,其特征在于,所述检测到一个或多个被冻结的应用的运行环境发生改变包括检测到如下事件中的任意一种或多种:被冻结的应用切换到前台、被冻结的应用退出、被冻结的应用收到异步消息以及被冻结的应用的通知消息被用户点击。
  25. 一种终端设备,其特征在于,所述终端设备包括处理器和存储器,所述存储器用于存储计算机可读指令,所述处理器用于读取所述存储器中存储的计算机可读指令执行如权利要求19-24任意一项所述的方法。
  26. 一种计算机系统中执行任务的方法,其特征在于,所述方法应用于计算机系统,所述计算机系统包括多个物理核,每个物理核对应第一队列和第二队列,所述第一队列和第二队列中分别包括一个或多个待所述物理核执行的任务,至少一个物理核执行如下方法:
    获取并执行所述第一队列中的任务,直至所述第一队列中所有的任务都执行完毕,再获取并执行所述第二队列中的任务。
  27. 如权利要求26所述的方法,其特征在于,还包括:
    监测所述第一队列中是否存在等待时间超过特定阈值的任务,若存在,将所述任务移动到另一个物理核对应的第一队列中。
  28. 如权利要求26或27所述的方法,其特征在于,所述将所述任务移动到另一个物理核对应的第一队列中包括:在确定所述任务可移动后再将所述任务移动到所述另一个物理核对应的第一队列中。
  29. 如权利要求26-28任意一项所述的方法,其特征在于,所述第一队列中的任务包括重要任务以及所述重要任务依赖的任务。
  30. 如权利要求29所述的方法,其特征在于,所述重要任务为影响用户体验的任务。
  31. 如权利要求29或30所述的方法,其特征在于,所述重要任务为重要性高的应用的任务,其中应用的重要性根据应用的历史使用情况、机器学习算法以及系统的当前场景数据获得。
  32. 如权利要求29-31任意一项所述的方法,其特征在于,所述依赖包括以下依赖关系中的至少一种:数据依赖、锁依赖以及binder服务依赖。
  33. 一种终端设备,其特征在于,所述终端设备包括处理器和存储器,所述存储器用于存储计算机可读指令,所述处理器用于读取所述存储器中存储的计算机可读指令执行如权利要求26-32任意一项所述的方法。
PCT/CN2018/109753 2017-10-13 2018-10-11 资源管理的方法及终端设备 WO2019072200A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020207011136A KR102424030B1 (ko) 2017-10-13 2018-10-11 리소스 관리 방법 및 단말 장치
EP18865934.6A EP3674895A4 (en) 2017-10-13 2018-10-11 RESOURCE MANAGEMENT PROCESS AND TERMINAL DEVICE
US16/845,382 US11693693B2 (en) 2017-10-13 2020-04-10 Resource management based on ranking of importance of applications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710953233.7 2017-10-13
CN201710953233.7A CN109684069A (zh) 2017-10-13 2017-10-13 资源管理的方法及终端设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/845,382 Continuation US11693693B2 (en) 2017-10-13 2020-04-10 Resource management based on ranking of importance of applications

Publications (1)

Publication Number Publication Date
WO2019072200A1 true WO2019072200A1 (zh) 2019-04-18

Family

ID=66100389

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109753 WO2019072200A1 (zh) 2017-10-13 2018-10-11 资源管理的方法及终端设备

Country Status (5)

Country Link
US (1) US11693693B2 (zh)
EP (1) EP3674895A4 (zh)
KR (1) KR102424030B1 (zh)
CN (2) CN110879750A (zh)
WO (1) WO2019072200A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459648A (zh) * 2020-06-17 2020-07-28 北京机电工程研究所 面向应用程序的异构多核平台资源优化方法和装置
CN111475083A (zh) * 2020-04-03 2020-07-31 惠州Tcl移动通信有限公司 应用跳转的方法、装置、存储介质及移动终端
CN112737798A (zh) * 2019-10-14 2021-04-30 中国移动通信集团四川有限公司 主机资源分配方法、装置及调度服务器、存储介质
CN112882878A (zh) * 2021-02-03 2021-06-01 南方电网数字电网研究院有限公司 电能表操作系统的资源占用测试方法、装置和计算机设备
WO2021158037A1 (en) * 2020-02-07 2021-08-12 Samsung Electronics Co., Ltd. Electronic device for task scheduling when application is run, method of operating the same, and storage medium

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PT3440481T (pt) * 2016-04-05 2024-01-08 Statsports Group Ltd Sistema de medição de posição uwb e gnss melhorado
US11641406B2 (en) * 2018-10-17 2023-05-02 Servicenow, Inc. Identifying applications with machine learning
CN110187958B (zh) * 2019-06-04 2020-05-05 上海燧原智能科技有限公司 一种任务处理方法、装置、系统、设备及存储介质
CN112256119A (zh) * 2019-07-02 2021-01-22 中兴通讯股份有限公司 应用程序冷冻控制方法、装置、终端及可读存储介质
CN110691401B (zh) * 2019-08-28 2021-04-09 华为技术有限公司 一种系统应用的管理方法及装置
US11379260B2 (en) * 2019-09-04 2022-07-05 Oracle International Corporation Automated semantic tagging
CN113127069B (zh) * 2019-12-31 2023-08-22 成都鼎桥通信技术有限公司 基于双系统的位置服务管理方法、装置和终端设备
CN111381952B (zh) * 2020-03-12 2023-05-12 腾讯科技(深圳)有限公司 进程冻结方法、装置、终端及存储介质
US20210255898A1 (en) * 2020-05-11 2021-08-19 Suresh Babu Revoled Konti System and method of predicting application performance for enhanced user experience
KR102345749B1 (ko) * 2020-06-09 2022-01-03 주식회사 토브데이터 데이터 컴플라이언스 제공을 위한 평가 관리 방법 및 그 시스템
KR102345748B1 (ko) * 2020-06-09 2022-01-03 주식회사 토브데이터 데이터 컴플라이언스 제공을 위한 방법 및 그 시스템
CN111831433A (zh) * 2020-07-01 2020-10-27 Oppo广东移动通信有限公司 资源分配方法、装置、存储介质及电子设备
CN111880440B (zh) * 2020-07-31 2021-08-03 仲刚 一种串行链路数据采集方法及系统
CN112052149B (zh) * 2020-09-06 2022-02-22 厦门理工学院 一种大数据信息采集系统及使用方法
CN112114947B (zh) * 2020-09-17 2024-02-02 石家庄科林电气股份有限公司 一种基于边缘计算网关的系统资源调度方法
CN113282385A (zh) * 2020-10-30 2021-08-20 常熟友乐智能科技有限公司 基于在线办公设备的业务处理方法及系统
CN112286704B (zh) * 2020-11-19 2023-08-15 每日互动股份有限公司 延时任务的处理方法、装置、计算机设备及存储介质
KR102543818B1 (ko) * 2020-11-24 2023-06-14 경희대학교 산학협력단 차량 번호판 인식 시스템, 방법, 장치 및 차량 번호판 인식 관리 장치
CN112256354B (zh) * 2020-11-25 2023-05-16 Oppo(重庆)智能科技有限公司 应用启动方法、装置、存储介质及电子设备
KR102418991B1 (ko) * 2020-11-26 2022-07-08 성균관대학교산학협력단 적응형 i/o 완료 방법 및 이를 수행하기 위한 컴퓨터 프로그램
US11782770B2 (en) * 2020-12-02 2023-10-10 International Business Machines Corporation Resource allocation based on a contextual scenario
US11861395B2 (en) * 2020-12-11 2024-01-02 Samsung Electronics Co., Ltd. Method and system for managing memory for applications in a computing system
US20220222122A1 (en) * 2021-01-08 2022-07-14 Dell Products L.P. Model-based resource allocation for an information handling system
CN112817743A (zh) * 2021-01-13 2021-05-18 浙江大华技术股份有限公司 智能设备的业务管理方法、设备及计算机可读存储介质
US11526341B2 (en) * 2021-01-25 2022-12-13 Vmware, Inc. Conflict resolution for device-driven management
US11934795B2 (en) * 2021-01-29 2024-03-19 Oracle International Corporation Augmented training set or test set for improved classification model robustness
CN113791877A (zh) * 2021-02-26 2021-12-14 北京沃东天骏信息技术有限公司 用于确定信息的方法和装置
KR102630955B1 (ko) * 2021-03-30 2024-01-29 남서울대학교 산학협력단 사용자 경험을 기반으로 big.LITTLE 멀티코어 구조의 스마트 모바일 단말의 에너지 소비 최적화 장치 및 그 방법
CN113296951B (zh) * 2021-05-31 2024-08-20 阿里巴巴创新公司 一种资源配置方案确定方法及设备
WO2022265227A1 (ko) * 2021-06-15 2022-12-22 삼성전자 주식회사 전자 장치 및 이를 이용한 생체 인증 방법
KR102706002B1 (ko) * 2021-07-28 2024-09-12 주식회사 넥스트칩 특징점에 대한 기술자를 생성하기 위한 전자 장치 및 그 동작 방법
US20230056727A1 (en) * 2021-08-23 2023-02-23 Dell Products, L.P. Managing the degradation of information handling system (ihs) performance due to software installations
CN113760193B (zh) * 2021-08-26 2024-04-02 武汉天喻信息产业股份有限公司 用于资源受限制装置的数据读写方法、装置及指令集
WO2023027521A1 (en) * 2021-08-26 2023-03-02 Samsung Electronics Co., Ltd. Method and electronic device for managing network resources among application traffic
CN113792095A (zh) * 2021-08-31 2021-12-14 通号城市轨道交通技术有限公司 信号系统接口信息转换方法、装置、电子设备和存储介质
CN113656046A (zh) * 2021-08-31 2021-11-16 北京京东乾石科技有限公司 一种应用部署方法和装置
CN113918296B (zh) * 2021-10-11 2024-09-13 平安国际智慧城市科技股份有限公司 模型训练任务调度执行方法、装置、电子设备及存储介质
CN113918287A (zh) * 2021-11-11 2022-01-11 杭州逗酷软件科技有限公司 启动应用程序的方法、装置、终端设备及存储介质
WO2023101152A1 (ko) * 2021-11-30 2023-06-08 삼성전자주식회사 대용량 메모리 사용 앱의 진입 속도를 개선하는 장치 및 방법
TWI795181B (zh) * 2022-01-20 2023-03-01 網路家庭國際資訊股份有限公司 增加網頁流暢度的方法和網頁伺服器
CN114547027B (zh) * 2022-02-11 2023-01-31 清华大学 容量和价值约束的数据压缩处理方法、装置及存储介质
CN115202902B (zh) * 2022-07-01 2023-08-22 荣耀终端有限公司 控制进程交互的方法及相关装置
WO2024155013A1 (ko) * 2023-01-17 2024-07-25 삼성전자 주식회사 어플리케이션의 실행하는 전자 장치와 이의 동작 방법
US20240303244A1 (en) * 2023-03-06 2024-09-12 Plaid Inc. Predicting data availability and scheduling data pulls
CN116346289B (zh) * 2023-05-30 2023-08-04 泰山学院 一种用于计算机网络中心的数据处理方法
CN118410023B (zh) * 2024-07-01 2024-09-13 中关村科学城城市大脑股份有限公司 基于大数据的计算机资源管理系统及方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101414270A (zh) * 2008-12-04 2009-04-22 浙江大学 硬件辅助的辅核任务动态优先级调度的实现方法
CN101923382A (zh) * 2009-06-16 2010-12-22 联想(北京)有限公司 一种计算机系统的节能方法及计算机系统
CN106055399A (zh) * 2016-05-31 2016-10-26 宇龙计算机通信科技(深圳)有限公司 一种控制应用程序的方法及终端

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7716151B2 (en) * 2006-02-13 2010-05-11 Infosys Technologies, Ltd. Apparatus, method and product for optimizing software system workload performance scenarios using multiple criteria decision making
US7558771B2 (en) * 2006-06-07 2009-07-07 Gm Global Technology Operations, Inc. System and method for selection of prediction tools
US8112755B2 (en) * 2006-06-30 2012-02-07 Microsoft Corporation Reducing latencies in computing systems using probabilistic and/or decision-theoretic reasoning under scarce memory resources
CN102891916B (zh) 2011-07-18 2016-01-20 中兴通讯股份有限公司 一种预测用户操作的方法及移动终端
US9225772B2 (en) * 2011-09-26 2015-12-29 Knoa Software, Inc. Method, system and program product for allocation and/or prioritization of electronic resources
CN103257898A (zh) * 2012-02-15 2013-08-21 北京邦天信息技术有限公司 嵌入式系统中资源分配方法和系统
US8990534B2 (en) * 2012-05-31 2015-03-24 Apple Inc. Adaptive resource management of a data processing system
JP5845351B2 (ja) 2012-07-06 2016-01-20 ▲華▼▲為▼終端有限公司Huawei Device Co., Ltd. リソース割当方法及び装置
EP2747000B1 (en) * 2012-12-20 2017-11-22 ABB Schweiz AG System and method for automatic allocation of mobile resources to tasks
US9508040B2 (en) * 2013-06-12 2016-11-29 Microsoft Technology Licensing, Llc Predictive pre-launch for applications
WO2014210050A1 (en) * 2013-06-24 2014-12-31 Cylance Inc. Automated system for generative multimodel multiclass classification and similarity analysis using machine learning
US10133659B2 (en) 2013-11-22 2018-11-20 Sap Se Proactive memory allocation
CN104699218B (zh) * 2013-12-10 2019-04-19 华为终端(东莞)有限公司 一种任务管理方法及设备
US10402733B1 (en) * 2015-06-17 2019-09-03 EMC IP Holding Company LLC Adaptive ensemble workload prediction model based on machine learning algorithms
CN107291549B (zh) * 2016-03-31 2020-11-24 阿里巴巴集团控股有限公司 一种管理应用程序的方法及装置
WO2017188419A1 (ja) * 2016-04-28 2017-11-02 日本電気株式会社 計算資源管理装置、計算資源管理方法、及びコンピュータ読み取り可能な記録媒体
CN106055406A (zh) * 2016-05-20 2016-10-26 深圳天珑无线科技有限公司 一种程序运行的方法和终端
CN106022150A (zh) * 2016-05-30 2016-10-12 宇龙计算机通信科技(深圳)有限公司 一种冻结应用方法以及装置
CN105939416A (zh) * 2016-05-30 2016-09-14 努比亚技术有限公司 移动终端及其应用预启动方法
CN106125896A (zh) * 2016-06-29 2016-11-16 宇龙计算机通信科技(深圳)有限公司 一种应用程序冻结方法及移动终端
CN106201685A (zh) 2016-06-30 2016-12-07 宇龙计算机通信科技(深圳)有限公司 一种应用冻结的方法、装置以及终端
CN106354371A (zh) 2016-09-06 2017-01-25 深圳市金立通信设备有限公司 一种应用排序的方法及终端
US10691491B2 (en) * 2016-10-19 2020-06-23 Nutanix, Inc. Adapting a pre-trained distributed resource predictive model to a target distributed computing environment
CN106941713A (zh) 2017-05-16 2017-07-11 努比亚技术有限公司 一种降低移动终端功耗的方法及其装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101414270A (zh) * 2008-12-04 2009-04-22 浙江大学 硬件辅助的辅核任务动态优先级调度的实现方法
CN101923382A (zh) * 2009-06-16 2010-12-22 联想(北京)有限公司 一种计算机系统的节能方法及计算机系统
CN106055399A (zh) * 2016-05-31 2016-10-26 宇龙计算机通信科技(深圳)有限公司 一种控制应用程序的方法及终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3674895A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112737798A (zh) * 2019-10-14 2021-04-30 中国移动通信集团四川有限公司 主机资源分配方法、装置及调度服务器、存储介质
CN112737798B (zh) * 2019-10-14 2022-09-27 中国移动通信集团四川有限公司 主机资源分配方法、装置及调度服务器、存储介质
WO2021158037A1 (en) * 2020-02-07 2021-08-12 Samsung Electronics Co., Ltd. Electronic device for task scheduling when application is run, method of operating the same, and storage medium
US11941435B2 (en) 2020-02-07 2024-03-26 Samsung Electronics Co., Ltd Electronic device for rapid entry into application being run, method of operating the same, and storage medium
CN111475083A (zh) * 2020-04-03 2020-07-31 惠州Tcl移动通信有限公司 应用跳转的方法、装置、存储介质及移动终端
CN111459648A (zh) * 2020-06-17 2020-07-28 北京机电工程研究所 面向应用程序的异构多核平台资源优化方法和装置
CN112882878A (zh) * 2021-02-03 2021-06-01 南方电网数字电网研究院有限公司 电能表操作系统的资源占用测试方法、装置和计算机设备

Also Published As

Publication number Publication date
US11693693B2 (en) 2023-07-04
KR102424030B1 (ko) 2022-07-21
CN109684069A (zh) 2019-04-26
US20200241917A1 (en) 2020-07-30
EP3674895A1 (en) 2020-07-01
KR20200060421A (ko) 2020-05-29
EP3674895A4 (en) 2020-11-18
CN110879750A (zh) 2020-03-13

Similar Documents

Publication Publication Date Title
WO2019072200A1 (zh) 资源管理的方法及终端设备
CN108681475B (zh) 应用程序预加载方法、装置、存储介质及移动终端
CN105677431B (zh) 将后台工作和前台工作解耦合
US11683396B2 (en) Efficient context monitoring
TWI540426B (zh) 行動裝置基於熱條件之動態調整
CN107835311B (zh) 应用管理方法、装置、存储介质及电子设备
US9372898B2 (en) Enabling event prediction as an on-device service for mobile interaction
CN107748697B (zh) 应用关闭方法、装置、存储介质及电子设备
CN109213539A (zh) 一种内存回收方法及装置
CN105431822A (zh) 应用的预测预启动
CN108762844B (zh) 应用程序预加载方法、装置、存储介质及终端
CN107402808B (zh) 进程管理方法、装置、存储介质及电子设备
CN107870810B (zh) 应用清理方法、装置、存储介质及电子设备
CN107943582B (zh) 特征处理方法、装置、存储介质及电子设备
WO2022161325A1 (zh) 提示方法和电子设备
CN109587328B (zh) 消息管理方法和装置、存储介质及电子设备
CN115562744A (zh) 一种应用程序加载方法及电子设备
CN107870809B (zh) 应用关闭方法、装置、存储介质及电子设备
CN113885944A (zh) 应用程序后台保活的方法、装置和电子设备
CN108234758B (zh) 应用的显示方法、装置、存储介质及电子设备
WO2021115481A1 (zh) 终端控制方法、装置、终端和存储介质
CN107870811B (zh) 应用清理方法、装置、存储介质及电子设备
CN107943535B (zh) 应用清理方法、装置、存储介质及电子设备
CN115061740B (zh) 应用程序处理方法及装置
CN115016855A (zh) 应用预加载的方法、设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18865934

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018865934

Country of ref document: EP

Effective date: 20200326

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20207011136

Country of ref document: KR

Kind code of ref document: A