CN113778662B - Memory recovery method and device - Google Patents
- Publication number
- CN113778662B (application CN202110857483.7A)
- Authority
- CN
- China
- Prior art keywords
- memory
- application
- priority
- linked list
- application program
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Stored Programmes (AREA)
Abstract
The scheme maintains the memory pages of multiple application programs at the same priority level in a single memory page linked list. During memory reclamation, the memory pages of all applications stored in that list are reclaimed in a balanced manner, in order from cold to hot, rather than application by application (i.e., reclaiming all reclaimable memory of one application before moving on to the next). This avoids the thrashing caused when relatively cold memory pages of some same-priority applications remain unreclaimed while relatively hot pages of others are reclaimed, and guarantees that the memory pages of same-priority applications are reclaimed evenly. The scheme therefore reduces thrashing and improves system efficiency.
Description
Technical Field
The present application relates to the field of memory management technologies, and in particular, to a memory recovery method and apparatus.
Background
The internal memory, also called main memory, temporarily stores the operating data of the CPU and the data exchanged with external storage such as a hard disk. It is the bridge between external storage and the CPU, and all programs run in it.
Memory space is limited. To ensure that enough memory is available while the system runs, the operating system of an electronic device provides a memory release mechanism: when memory runs low, the operating system cleans up the in-memory data of infrequently used applications, i.e., performs memory reclamation. However, the memory reclamation schemes in the related art can cause thrashing, which reduces system efficiency.
Disclosure of Invention
In view of this, the present application provides a memory reclamation method and device to solve the thrashing problem of the memory reclamation schemes in the related art. The disclosed technical scheme is as follows:
In a first aspect, the present application provides a memory reclamation method, including: in response to a memory reclamation instruction, determining a memory page linked list to be reclaimed, where each memory page linked list contains the memory pages of the applications in one application group, and each application group contains all applications at the same priority level; determining, based on the memory reclamation rate corresponding to the linked list to be reclaimed, the memory pages to be reclaimed in that list in order from cold to hot; and reclaiming those pages. In this way, the memory pages of multiple applications at the same priority level are maintained in the same linked list, and during reclamation the pages in the list are reclaimed in a balanced manner in order from cold to hot, rather than application by application. This avoids relatively cold pages of some same-priority applications going unreclaimed while relatively hot pages of others are reclaimed, guarantees balanced reclamation across same-priority applications, reduces thrashing, and improves system efficiency.
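The first-aspect flow can be sketched in a few lines. This is an illustrative model only, not the claimed kernel implementation; the class and method names (`GroupLru`, `touch`, `reclaim`) are invented for the sketch. Pages from all applications in one priority group share a single list, and reclamation always takes the globally coldest pages, whichever application owns them:

```python
from collections import deque

class GroupLru:
    """One list per priority group; pages of ALL apps in the group
    live here, ordered from hottest (left) to coldest (right)."""

    def __init__(self):
        self.pages = deque()

    def touch(self, page):
        # On access, move (or insert) the page at the hot end.
        if page in self.pages:
            self.pages.remove(page)
        self.pages.appendleft(page)

    def reclaim(self, n):
        # Take the n globally coldest pages, whichever app owns them.
        return [self.pages.pop() for _ in range(min(n, len(self.pages)))]
```

Touching pages of different applications interleaves them in one list, so `reclaim` naturally balances across same-priority applications instead of draining one application at a time.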
According to the first aspect, determining the memory page linked list to be reclaimed includes: from among the linked lists that have not yet undergone reclamation, selecting the linked list corresponding to the application group with the lowest priority as the list to be reclaimed. In this implementation, the lower the priority, the lower the probability that the application will be accessed, so reclaiming the pages of the low-priority linked list first reduces thrashing and improves system efficiency.
According to the first aspect, or any implementation manner of the first aspect, the memory reclamation method further includes: determining the priority of an application according to its running state information, where the running state information includes at least one of foreground running, background running, frozen state, and foreground-background switching frequency. According to the first aspect, or any implementation manner of the first aspect, determining the priority of an application according to its running state information includes: acquiring the application's current running state and its foreground-background switching information over a historical time period; obtaining a state score for the application from the current running state and the switching information; and determining the application's priority from its state score. In this implementation, the running state of the application is quantized into a score, and priorities are then assigned by score, so that applications with similar running states can be placed in the same application group and their memory pages managed uniformly, further reducing thrashing and improving system efficiency.
According to the first aspect, or any implementation manner of the first aspect, obtaining the state score of an application from its current running state and foreground-background switching information includes: determining a reference score corresponding to the current running state; calculating an adjustment score from the weight coefficient and the score adjustment step corresponding to the switching information; and applying the adjustment score to the reference score to obtain the application's state score. In this implementation, the state score is adjusted based on the application's foreground-background switching behavior, so that applications with more similar running state information are placed in the same group and their memory pages managed uniformly, further reducing thrashing and improving system efficiency.
According to the first aspect, or any implementation manner of the first aspect, the foreground-background switching information includes the switching frequency and the time of the most recent switch to the foreground. Calculating the adjustment score from the corresponding weight coefficient and score adjustment step includes: obtaining the time difference between the application's most recent switch to the foreground and the current time, and a first weight corresponding to that difference, where the time difference is negatively correlated with the first weight; obtaining a second weight corresponding to the switching frequency, where the frequency is positively correlated with the second weight; taking the sum of the first and second weights as the weight coefficient of the switching information; and multiplying the weight coefficient by the score adjustment step to obtain the adjustment score.
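The weighting scheme above fixes only two correlations: the first weight falls as the time since the last switch to the foreground grows, and the second weight rises with the switching frequency. A minimal sketch under assumed formulas (the 60-second decay constant and the cap at 10 switches are hypothetical, not taken from the patent):

```python
def adjustment_score(seconds_since_foreground, switch_count, step=10.0):
    # First weight: negatively correlated with the time difference
    # since the last switch to the foreground (decay constant assumed).
    w1 = 1.0 / (1.0 + seconds_since_foreground / 60.0)
    # Second weight: positively correlated with the foreground-background
    # switching frequency (cap assumed).
    w2 = min(switch_count / 10.0, 1.0)
    # Adjustment score = weight coefficient (w1 + w2) * adjustment step.
    return (w1 + w2) * step
```

Any pair of functions with the stated monotonicity would serve; the sketch only makes the described correlations concrete.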
According to the first aspect, or any implementation manner of the first aspect, determining the priority of an application from its state score includes: determining the target score interval to which the state score belongs; and determining the priority corresponding to that interval according to a mapping between score intervals and priorities.
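The interval lookup can be implemented with a sorted list of boundaries and a bisection search. The boundary values and the number of priority levels below are hypothetical; the patent specifies only that a score-interval-to-priority mapping exists:

```python
import bisect

# Hypothetical boundaries: scores [0, 25) -> priority 3 (lowest),
# [25, 50) -> 2, [50, 75) -> 1, [75, inf) -> 0 (highest).
BOUNDS = [25, 50, 75]
PRIORITIES = [3, 2, 1, 0]

def priority_for(score):
    # Locate the target score interval, then map it to a priority.
    return PRIORITIES[bisect.bisect_right(BOUNDS, score)]
```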
According to the first aspect, or any implementation manner of the first aspect, the memory reclamation method further includes: for any application, after determining from its state information that its priority has changed, moving the application to the target application group corresponding to the new priority, and migrating the application's memory pages to the linked list of that group. In this implementation, application grouping can be adjusted dynamically according to running state information, and memory pages are then managed according to the latest grouping, reducing thrashing.
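Regrouping on a priority change amounts to moving the application's pages from the old group's list to the new one. A simplified sketch, assuming pages are tagged with their owning application (the data structures and names are illustrative, not the claimed implementation):

```python
def regroup(app, new_priority, app_group, group_pages):
    """Move `app` to the group of its new priority and migrate its
    pages, stored as (app, page_id) tuples, between the groups' lists."""
    old_priority = app_group[app]
    if old_priority == new_priority:
        return
    app_group[app] = new_priority
    # Split the old group's list into this app's pages and the rest.
    moved = [p for p in group_pages[old_priority] if p[0] == app]
    group_pages[old_priority] = [p for p in group_pages[old_priority]
                                 if p[0] != app]
    group_pages[new_priority].extend(moved)
```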
According to the first aspect, or any implementation manner of the first aspect, the memory reclamation method further includes: determining the target priority of the application group corresponding to the linked list to be reclaimed, the target priority being the priority of the applications in that group; looking up the target memory reclamation rate matching the target priority according to a preset mapping between priorities and reclamation rates; and using the target rate as the reclamation rate of the linked list to be reclaimed.
According to the first aspect, or any of the above implementations of the first aspect, the memory reclamation rate is negatively correlated with the priority level. As a result, less memory is reclaimed from applications with a higher access probability and more from applications with a lower access probability, which ultimately reduces thrashing and improves system efficiency.
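The preset mapping of the two paragraphs above can be modeled as a simple table in which higher-priority groups (lower numbers here) get lower reclamation rates. The rate values are assumptions; only the inverse relation comes from the text:

```python
# Assumed table: priority 0 is highest; the reclamation rate (pages
# reclaimed per pass) grows as the priority falls.
RECLAIM_RATE = {0: 8, 1: 16, 2: 32, 3: 64}

def reclaim_rate(target_priority):
    # Look up the target rate matching the group's target priority.
    return RECLAIM_RATE[target_priority]
```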
In a second aspect, the present application further provides an electronic device, including: a memory and one or more processors, wherein the memory is to store one or more programs; the processor is configured to run the one or more programs, so that the electronic device executes the memory recovery method according to the first aspect or any implementation manner of the first aspect.
In a third aspect, the present application further provides a computer-readable storage medium, on which instructions are stored, and when the instructions are executed on an electronic device, the instructions cause the electronic device to execute the memory recovery method according to the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the present application further provides a computer program product, which when run on an electronic device, causes the electronic device to execute the memory recovery method according to the first aspect or any implementation manner of the first aspect.
It should be appreciated that the description of technical features, solutions, benefits, or similar language in this application does not imply that all of the features and advantages may be realized in any single embodiment. Rather, it should be appreciated that any discussion of a feature or advantage is meant to encompass a particular feature, aspect, or advantage in at least one embodiment. Therefore, descriptions of technical features, technical solutions or advantages in this specification do not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantages described in the present embodiments may also be combined in any suitable manner. One skilled in the relevant art will recognize that an embodiment may be practiced without one or more of the specific features, aspects, or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present application;
FIG. 3 is a diagram illustrating a mapping relationship between an LRU linked list and an application program according to the related art;
fig. 4 is a schematic flowchart of a memory recovery method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a mapping relationship between an application packet and an LRU linked list according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating arrangement of memory pages in an LRU linked list according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating hot and cold sorting of memory pages in an LRU linked list according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a memory recycling device according to an embodiment of the present disclosure.
Detailed Description
The terms "first", "second" and "third", etc. in the description and claims of this application and the description of the drawings are used for distinguishing between different objects and not for limiting a particular order.
In the embodiments of the present application, the words "exemplary" or "such as" are used herein to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
For the sake of clarity and conciseness of the following description of the various embodiments, a brief introduction is first made to the related technical terms or techniques to which this application relates:
page-based memory management is a memory space management technique, in which a memory virtual space is divided into a plurality of pages (pages) with equal length, also called memory pages (or memory pages for short), and the pages are used as the minimum unit of the memory space.
The ZRAM compression technology is a function of Linux kernel and can provide virtual memory compression.
ZRAM swap allocates an area of memory to be used as a swap partition. When memory runs low, the application is not killed; instead, the memory data it occupies is compressed and stored in the swap partition, and when the application is switched back to, its data can be decompressed directly back into memory, saving the time of restarting the application. For example, if an application occupies 50 MB in ordinary memory and the compression ratio is 0.4, its data needs only 20 MB in the swap partition, so the partition can hold more applications that are temporarily unused in the background, effectively expanding the memory.
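The arithmetic of that example is simply resident size times compression ratio; a one-line check, with the 50 MB and 0.4 figures taken from the text above (the function name is illustrative):

```python
def zram_swap_size_mb(resident_mb, compression_ratio):
    # Space the compressed pages occupy in the ZRAM swap area.
    return resident_mb * compression_ratio
```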
The memory swap mechanism (swap mechanism) writes data stored in an infrequently accessed memory space to a hard disk (i.e., a non-volatile memory of an electronic device), and frees the memory space for use by other processes that are more needed. The swap partition refers to a swap exchange area divided on the hard disk. And through the swap partition, the inactive memory pages in the memory are exchanged to the hard disk so as to achieve the effect of increasing the memory.
LRU (Least Recently Used) is a commonly used page replacement algorithm. Its design principle is that if a piece of data has not been accessed recently, the probability that it will be accessed in the near future is small; therefore, when memory must be reclaimed, the least recently used memory pages are evicted, i.e., their memory is reclaimed. In an LRU linked list, nodes nearer the tail were accessed longer ago (the pages are colder), while nodes nearer the head were accessed most recently (the pages are hotter); that is, pages are ordered from hot to cold, head to tail. Reclamation therefore proceeds from the tail of the list.
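A minimal LRU sketch matching the head-hot, tail-cold convention described above (Python's `OrderedDict` stands in for the kernel's linked list; names are illustrative):

```python
from collections import OrderedDict

class LruPages:
    """Head = most recently accessed (hottest); tail = least recently
    accessed (coldest). Eviction pops the tail, as described above."""

    def __init__(self):
        self.pages = OrderedDict()

    def access(self, page):
        self.pages.pop(page, None)                # drop any old position
        self.pages[page] = True
        self.pages.move_to_end(page, last=False)  # move to the head

    def evict(self):
        page, _ = self.pages.popitem(last=True)   # coldest, at the tail
        return page
```

Accessing pages 1, 2, 3 and then 1 again leaves page 2 as the coldest, so it is evicted first.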
Thrashing: a page that has just been swapped out of memory is immediately swapped back in on request (i.e., read from non-volatile storage into memory). This repeated swapping out and in is thrashing. The system then spends a great deal of time on frequent page swaps, so its actual efficiency drops, and in severe cases the system may even crash.
The electronic device applying the memory recovery method provided by the application can be a mobile phone, a tablet Personal computer, a handheld computer, a netbook, a Personal Digital Assistant (PDA), a wearable electronic device and the like, and the specific form of the electronic device applying the memory recovery method is not particularly limited by the application.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 1, the electronic device may include: processor 110, internal memory 120, non-volatile memory 130, and display screen 140.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic device. In other embodiments, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The internal memory 120, i.e., the internal memory of the electronic device, is usually a random-access memory (RAM), which supports random access and fast read/write, and is used for temporarily storing the operation data of the processor 110 and the data exchanged with the nonvolatile memory 130 as a bridge for communication between the processor 110 and the nonvolatile memory 130.
The nonvolatile memory 130 is a memory using a nonvolatile storage medium, for example, at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), or the like.
The nonvolatile memory 130 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, a video playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, photos, videos, etc.) created during the use of the electronic device, and the like.
The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the non-volatile memory 130.
The read-write speed of the internal memory 120 is higher than the read-write speed of the non-volatile memory 130, so that when the processor 110 runs the instruction stored in the non-volatile memory 130, the instruction in the non-volatile memory 130 is loaded into the internal memory 120, and thus, the processor 110 can directly communicate with the internal memory 120, and the system operation efficiency is improved.
The display screen 140 is used for displaying images, videos, a series of Graphical User Interfaces (GUIs), etc., and a user can interact with an application program by operating the GUIs.
In addition, an operating system runs on the above components. For example, the Android operating system developed by Google, the HarmonyOS operating system developed by Huawei, the Windows operating system developed by Microsoft, or the iOS operating system developed by Apple. Applications run on top of the operating system.
The operating system of the electronic device may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiment of the application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of an electronic device. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface.
Fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present Application, where the software structure includes an Application layer (APP), an Application Framework layer (Framework), and a Kernel layer (Kernel).
The application layer may include a series of application packages. As shown in fig. 2, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
The application framework layer is used for managing the application and recording the running state of the application, such as foreground, background, freezing and the like.
In this embodiment, as shown in fig. 2, the application framework layer further includes an application priority management module 210, where the application priority management module 210 can obtain the running state of the application from the application framework layer, determine the priority of the application according to the running state of the application, and further divide the application groups according to the priority of the application.
The kernel layer comprises a system layer security mechanism, a memory management, a file system, a process management, a network stack and a series of driving modules, is a layer between hardware and software and provides interaction with the hardware.
In this embodiment, as shown in fig. 2, the kernel layer includes a memory management module 220, where the memory management module 220 manages a memory occupied by each running application according to a memory release mechanism based on application groups divided by the application priority management module 210, and ensures that a sufficient memory space is available when the system runs.
In one embodiment of the present application, the application priority management module of the application framework layer may directly interact with the memory management module of the kernel layer, for example, the application priority management module transmits the identifier of the application program and the group identifier included in each application group to the memory management module, so that the memory management module may create an LRU linked list for each application group for maintaining the memory pages of all the application programs included in the application group.
In an embodiment of the application, the hardware layer may include an internal memory and a non-volatile memory.
The memory management module may monitor a memory state of the memory, such as a current available memory space, and trigger a memory recycle process when it is monitored that the current available memory space satisfies the memory recycle start condition.
The memory resources of an electronic device are fixed. After the user opens several application programs (APPs), the system caches the data of APPs switched to the background in memory. When overall memory use approaches its limit, the system begins to kill some of the APPs cached in memory. This killing, however, hurts the user's keep-alive experience: when the user returns to a killed APP, its data must be reloaded from storage to start it, which requires very time-consuming disk I/O, so the application starts slowly and the interface from the user's last session is not preserved.
For example, after a user reads an article in a reading APP and then opens an instant messaging APP, the reading APP is switched to the background. If its process is killed (i.e., all the memory it occupied is reclaimed), then when the user returns to it, the reading APP must reload its data from the hard disk to start, and the interface from the last reading session is not preserved; that is, the application keep-alive experience is poor.
In order to reduce the impact of process killing on the keep-alive experience as much as possible, the system needs to cache more application data. Commonly used techniques include ZRAM memory compression and memory dump. Compressing or dumping the memory occupied by an APP improves the keep-alive experience but incurs system overhead: ZRAM compression and decompression consume CPU resources, while dumping increases wear on the external storage device.
To mitigate these costs, a policy is needed for choosing which memory blocks of which APPs to compress or dump. One solution in the related art is: for each application, maintain an LRU (Least Recently Used) linked list to manage the pages it occupies, i.e., one LRU list contains all memory pages of one APP.
As shown in fig. 3, APP1 corresponds to the LRU1 linked list, APP2 to LRU2, APP3 to LRU3, and APP4 to LRU4; each list contains only the memory pages occupied by its own APP.
When memory reclamation is required, the LRU lists are traversed in turn, and reclamation proceeds from the tail of each list, i.e., the reclaimable pages are compressed or dumped.
Studying this reclamation process, the inventor found that thrashing still occurs, and further study of the above scheme revealed the following causes:
in one case, for a plurality of applications with the same future access probability, when memory recovery is performed, the system sequentially traverses the LRU linked lists of the applications to perform memory recovery, and when the memory recovery of a part of the applications is performed and a memory recovery stop condition is met, the memory recovery is stopped without performing memory recovery on other applications.
For example, in the example shown in fig. 3, suppose APP1 through APP4 have the same future access probability. After the reclaimable memory in the LRU lists of APP1 and APP2 has been reclaimed, the available memory satisfies the stop condition, so the memory of APP3 and APP4 is not reclaimed. That is, pages of APP1 and APP2 with a higher future access probability are reclaimed while colder pages of APP3 and APP4 are not, which leads to pages being repeatedly swapped out and back in, i.e., thrashing.
In another case, multiple applications with the same overall future access probability are assigned the same memory reclamation rate. However, the applications may still differ internally. For example, APP1 and APP2 both run in the background with the same overall future access probability, but the memory pages occupied by one particular process of APP1 are more likely to be accessed in the future than the pages of any process of APP2. If the pages of APP1 and APP2 are reclaimed at the same rate, pages with a high future access probability may also be reclaimed; when such a page of APP1 is accessed again it must be swapped back in, and the repeated swap-out/swap-in, i.e., thrashing, occurs again.
To solve the thrashing problem in the related-art memory reclamation schemes, the inventor provides the memory reclamation method and device of this application: all applications of the same priority are divided into one application group, one LRU linked list is maintained per application group, the memory pages of all applications in the group are kept in that linked list, and the pages are ordered by their hotness. In response to a memory reclamation instruction, the LRU linked list to be reclaimed is determined, and its pages are reclaimed in order from cold to hot based on the reclamation rate corresponding to that linked list. Because the memory pages of multiple same-priority applications are maintained in one LRU linked list, reclamation proceeds uniformly from the coldest pages across all of those applications, rather than application by application (i.e., reclaiming all of one application's reclaimable memory before moving to the next). This avoids the situation where relatively cold pages of some same-priority applications remain resident while relatively hot pages of others are reclaimed, ensures balanced reclamation across same-priority applications, reduces system thrashing, and improves system efficiency.
Fig. 4 is a schematic flowchart of a memory recycling method provided in an embodiment of the present application, where the method is applied to the electronic device shown in fig. 1, and as shown in fig. 4, the method may include the following steps:
S110, acquiring the running state information of each application program in the running state.
Here, the running state means that an application has been launched and not yet closed; such an application may be running in the foreground, running in the background, or frozen.
The running state information of an application may thus indicate the foreground running, background running, or frozen state.
Foreground operation refers to an application program operating in a foreground mode, and the application program in the foreground mode occupies a display device (such as a touch screen) and an input device of the electronic device and can interact with a user.
Background running means that the application program is in a background mode, does not occupy a display device and an input device of the electronic device, cannot interact with a user, and the application program in the background mode also occupies certain system resources, such as memory resources and CPU resources.
The frozen state means that an application has been frozen by the system. A frozen application stops running and occupies no CPU resources, but its memory resources are not released. Because a frozen application is temporarily unused, most of the memory pages it occupies can be dumped to ROM, and the application can still be restored to its previous state.
The operating system of the electronic device can monitor the running state of applications. For example, the application framework layer of the Android system manages applications and can therefore record each application's running state, such as foreground, background, frozen, and the number and timing of foreground/background switches.
S120, acquiring a state score corresponding to the application program according to the running state information corresponding to the application program.
Applications in different running states have different activity levels. The activity level represents the probability of the application being accessed by the user: the higher the activity level, the higher that probability, and vice versa. A state score characterizing the activity level can therefore be calculated from the running state information.
In this embodiment, the application priority management module shown in fig. 2 may be used to obtain the state score corresponding to the application program, where the application priority management module may obtain the running state of the application program from the application framework layer of the operating system, and further calculate the state score of the application program according to the running state.
In one example, the state score is negatively correlated with the activity level, and therefore with the priority: a lower state score means a more active application with a higher probability of being accessed by the user, and thus a higher priority; conversely, a higher state score means a less active application with a lower access probability, and thus a lower priority.
In an embodiment of the present application, a state score range (e.g., 0 to 1000) may be set, with a base score for each of the foreground, background, and frozen states; for example, 0 for foreground running, 300 for background running, and 1000 for the frozen state.
In an application scenario, a higher foreground/background switching frequency indicates a higher probability that the application will be accessed, and a lower frequency indicates a lower probability. Likewise, the smaller the difference between the time the application was last switched to the foreground and the current time, the higher the probability that it will be accessed. Both the switching frequency and the time of the last switch to the foreground therefore characterize the access probability.
Accordingly, in an exemplary embodiment, for the background and frozen states, a certain number of points may be subtracted from the base score of the corresponding state according to the foreground/background switching frequency counted in a historical time window and the difference between the time of the last switch to the foreground and the current time, yielding the application's state score.
In a possible implementation, a deduction step (e.g., 100 points) is set, a weight coefficient is determined from the foreground/background switching frequency and the time difference between the last switch to the foreground and the current time, and the deduction is the product of the weight coefficient and the deduction step.
In one example, the weight coefficient may consist of two parts: one depends on the switching frequency (the higher the frequency, the higher the weight), and the other depends on how recently the application was last in the foreground (the smaller the time difference, the higher the weight).
For example, an application last switched to the foreground 1 hour ago gets a higher weight than one last switched to the foreground 3 hours ago; the sum of the two weights is used as the weight coefficient of the foreground/background switching information.
For example, if the deduction step is 100, the weight for the switching frequency is 0.6, and the weight for the time since the last switch to the foreground is 0.8, then the weight coefficient is 0.6 + 0.8 = 1.4 and the deduction is 100 × 1.4 = 140. For an application in the frozen state with a base score of 1000, the final state score is 1000 − 140 = 860.
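The scoring rule above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the base scores, the 100-point deduction step, and the weighting rules are the example values from the text, and the function name is hypothetical.

```python
# Hypothetical sketch of the state-score calculation described above.
# Base scores and the deduction step are the illustrative values from
# the text (foreground 0, background 300, frozen 1000; step 100 points).

BASE_SCORE = {"foreground": 0, "background": 300, "frozen": 1000}
DEDUCTION_STEP = 100  # points subtracted per unit of weight

def state_score(state, switch_freq_weight=0.0, recency_weight=0.0):
    """Return the state score: base score minus a weighted deduction.

    switch_freq_weight -- weight from the foreground/background switch
                          frequency counted in the history window
    recency_weight     -- weight from how recently the application was
                          last switched to the foreground
    """
    base = BASE_SCORE[state]
    deduction = DEDUCTION_STEP * (switch_freq_weight + recency_weight)
    # The score never drops below the foreground base of 0.
    return max(0, base - deduction)

# The worked example: a frozen app with weights 0.6 and 0.8
print(state_score("frozen", 0.6, 0.8))  # → 860.0 (1000 − 100 × 1.4)
```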
In this implementation, the state score of the application is further adjusted based on the foreground/background switching information, so that applications with more similar running state information are placed in the same application group. This in turn enables unified management of the memory pages within a group, further reducing system thrashing and improving system efficiency.
Of course, in other embodiments of the present application, the weight coefficient may also be determined according to one of the foreground-background switching frequency and a time difference between a time of last switching to foreground and a current time. For example, if the weight corresponding to the foreground/background switching frequency is 0.6, the weight coefficient of the foreground/background switching information is 0.6. In another example, if the weight corresponding to the time difference between the time of the latest handover to the foreground and the current time is 0.8, the weight coefficient of the foreground-background handover information is 0.8.
In addition, other methods may also be used to calculate the state score of the application program, for example, the state score may be set to have a positive correlation with the activity level, that is, a higher state score indicates that the application program is more active, and a lower state score indicates that the application program is less active, which is not limited in this application.
S130, determining the priority of the application program according to the state score of the application program.
A higher priority of an application indicates a more active application, i.e. a higher probability of the user accessing the application. For example, in an example where the state score is negatively correlated with activity, the state score is negatively correlated with priority, i.e., the higher the state score, the lower the priority; in the example where the state score is positively correlated with the activity level, the state score is positively correlated with the priority, i.e., the lower the state score, the higher the priority.
In this embodiment, a mapping relationship between the state score and the priority may be preset, and the priority corresponding to the state score of the application program may be further determined according to the mapping relationship.
The mapping relationship between the state scores and the priorities is determined according to the distribution of the state scores corresponding to the application programs, for example, one score may be set to correspond to one priority, or the same score interval may be set to correspond to one priority.
In one example, several applications have identical state scores, that is, the same score is shared by different applications. In this case, one score corresponds to one priority, and different scores correspond to different priorities. For example, if APP1, APP2, and APP3 have the same score, the three APPs are assigned the same priority.
In another example, the applications' state scores are spread over a range. In this case, applications within the same score interval may be given the same priority. For example, if the state score of APP1 is 200, that of APP2 is 220, and that of APP3 is 250, and the score interval of the second priority is [200, 250], then APP1, APP2, and APP3 are all of the second priority.
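The interval-based mapping can be sketched as follows. The interval bounds other than the [200, 250] band from the example are assumptions chosen for illustration, as is the function name.

```python
# Illustrative mapping from state score to priority via score bands.
# Only the [200, 250] -> second-priority band comes from the text;
# the other bounds are assumptions for the sketch.

PRIORITY_BANDS = [
    (0, 199, 1),     # first (highest) priority
    (200, 250, 2),   # second priority, matching the [200, 250] example
    (251, 600, 3),   # third priority
    (601, 1000, 4),  # fourth (lowest) priority
]

def priority_of(score):
    for low, high, prio in PRIORITY_BANDS:
        if low <= score <= high:
            return prio
    raise ValueError(f"score {score} out of range")

# APP1=200, APP2=220, APP3=250 all land in the second priority
assert {priority_of(s) for s in (200, 220, 250)} == {2}
```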
S140, dividing the application programs of the same priority into one application group.
In this embodiment, applications are divided into groups by priority; for example, applications of the same priority may be placed in the same group. In other embodiments, applications of two adjacent priorities may be placed in the same group.
For example, in an Android system, based on a memory group (memcg) mechanism, processes of all applications in the same application group are in the same memcg group, so as to perform memory management by using the memcg group as a unit, that is, memory pages of all processes in the same memcg group are uniformly managed.
memcg (memory control group) is a Linux kernel feature that manages the memory behavior of a group of processes in the system.
For example, as shown in fig. 5, 12 applications APP1 to APP12 are divided into four groups by priority:
all of APP1, APP2 and APP3 are of the first priority, and all processes of the three APPs are divided into one memcg group, that is, a memcg1 group.
APP4, APP5 and APP6 are all of the second priority, and the processes of the three APPs are divided into memcg2 groups.
APP7, APP8 and APP9 are all third priority, and the processes of the three APPs are divided into memcg3 groups.
APP10, APP11 and APP12 are all of the fourth priority, and the processes of these three APPs are divided into the memcg4 group.
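The grouping in fig. 5 amounts to partitioning applications by priority, with each partition managed as one memcg group. A minimal sketch, with hypothetical names and the 12-app example above:

```python
# Sketch of step S140: group applications by priority, analogous to
# placing each group's processes into one memcg group.
from collections import defaultdict

def group_by_priority(app_priorities):
    """app_priorities: dict of app name -> priority.
    Returns dict of priority -> list of apps (one group per priority)."""
    groups = defaultdict(list)
    for app, prio in app_priorities.items():
        groups[prio].append(app)
    return dict(groups)

# APP1-APP3 -> priority 1, APP4-APP6 -> 2, APP7-APP9 -> 3, APP10-APP12 -> 4
apps = {f"APP{i}": (i - 1) // 3 + 1 for i in range(1, 13)}
groups = group_by_priority(apps)
print(groups[1])  # → ['APP1', 'APP2', 'APP3']
```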
In an application scenario, as the user interacts with the application program, the running state of the application program may change, for example, switch from background to foreground, and thus the priority of the application program is dynamically changed.
For example, after a user starts playing a song in the music APP and switches it to the background, then opens the reading APP to read an article while listening to music: the reading APP runs in the foreground and the music APP in the background, so the reading APP has the higher priority. When the user then opens the music APP's interface to view the lyrics, the music APP switches to the foreground and the reading APP to the background; in that usage scenario the music APP has the higher priority.
In a possible implementation manner, the operating system of the electronic device may monitor the running state of the application program according to a specified period, for example, the specified period may be 1min or other duration, which is not limited in this application.
S150, maintaining an LRU linked list for each application group, where the LRU linked list includes memory pages occupied by all applications in the application group.
For example, in the Android system, memory pages occupied by all applications within a memcg group are managed by an LRU linked list. In this embodiment, each memcg group corresponds to one LRU linked list, and the LRU linked list maintains memory pages of all applications in the memcg group.
Still referring to the example shown in fig. 5, application group 1 corresponds to the LRU1 linked list, application group 2 to the LRU2 linked list, application group 3 to the LRU3 linked list, and application group 4 to the LRU4 linked list.
As shown in fig. 5, the application group 1 includes three APPs, i.e., APP1, APP2, and APP3, and the LRU1 linked list corresponding to the application group includes all memory pages occupied by the three APPs, and the memory pages of the three APPs are uniformly sorted and are no longer sorted according to different applications.
In other words, this LRU linked list breaks up the original arrangement in which each application's memory pages were kept together, and orders the pages of all the applications uniformly by hotness.
In an embodiment, the memory pages of the applications in the same application group are ordered by hotness. As shown in fig. 6, for convenience of illustration a memory page is labeled pageij, where i is the page's position in the LRU linked list and j is the number of the APP occupying the page.
For example, page11 denotes the memory page in position 1 of the LRU linked list, occupied by APP1. Similarly, page22 denotes the page in position 2, occupied by APP2; page31 denotes the page in position 3, occupied by APP1. In general, pageij denotes the page in position i of the LRU linked list, occupied by APPj.
As can be seen from fig. 6, in this LRU linked list the 1st and 3rd memory pages both belong to APP1 while the 2nd belongs to APP2; that is, the LRU linked list of the present application breaks up the per-application ordering and sorts directly by the hotness of each application's memory pages.
Within the LRU linked list, pages are arranged from head to tail in order from hot to cold: the closer a page is to the head, the hotter it is, and the closer to the tail, the colder; for example, page1 is hotter than page2.
The hotness of a memory page is determined by the difference between its last access time and the current time: the larger the difference, the colder the page; the smaller the difference, the hotter. For example, if Page1 was last accessed 35 min ago and Page2 was last accessed 30 min ago, Page1 is colder than Page2.
As shown in fig. 7, at time t0 the memory pages in the LRU linked list are ordered: page1 → page2 → page3 → … → pagei → … → pagen. At time t1, pagei is accessed, so it moves to the head of the LRU linked list and the other pages shift back one position: pagei → page1 → page2 → page3 → … → pagen.
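The move-to-head behaviour of fig. 7 can be sketched with a small Python class. This is an illustrative toy, not the kernel's list implementation; the class and method names are assumptions.

```python
# Minimal sketch of the LRU behaviour in fig. 7: on access, a page
# moves to the head of the list and the other pages shift back.
from collections import deque

class LruList:
    def __init__(self, pages):
        self.pages = deque(pages)  # head = hottest, tail = coldest

    def access(self, page):
        self.pages.remove(page)      # unlink from its current position
        self.pages.appendleft(page)  # re-link at the head (hottest)

    def reclaim_from_tail(self, n):
        """Pop the n coldest pages (to be compressed or swapped out)."""
        return [self.pages.pop() for _ in range(min(n, len(self.pages)))]

lru = LruList(["page1", "page2", "page3", "page_i"])
lru.access("page_i")  # t1: page_i is accessed
print(list(lru.pages))  # → ['page_i', 'page1', 'page2', 'page3']
```

Reclamation then simply pops from the tail, which holds the pages least likely to be accessed again.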
When memory reclamation occurs, it begins from the tail of the LRU linked list, because the tail stores the memory pages least likely to be accessed again; releasing this memory has the smallest probability of causing thrashing.
S160, generating a memory recycling instruction after detecting that the available memory space is less than or equal to a preset threshold value.
In an application scenario, a kernel (kernel) of an operating system of an electronic device (e.g., a memory management module shown in fig. 2) detects a currently available memory space, and if the currently available memory space is less than or equal to a predetermined threshold (i.e., a first predetermined threshold), a memory recycling process is triggered.
The preset threshold may be determined according to the total space of the electronic device's internal memory: the larger the total memory, the larger the preset threshold may be set, so as to keep the system running efficiently.
For example, the total space of the internal memory is 6GB, and the preset threshold may be set to 500MB; as another example, the total space of the internal memory is 4GB, and the preset threshold may be set to 300MB.
In a possible implementation manner, the memory management module may detect the available memory space according to a specified period, for example, the specified period may be 1min, 5min, 10min, or other duration, which is not limited in this application.
In another application scenario, when the electronic device detects that the system starts an application (e.g., a camera) consuming a large amount of memory, a process of detecting available memory space is triggered.
An application consumes some memory while running; if the available memory does not meet what a given application needs at startup, memory must first be reclaimed. The electronic device can therefore trigger detection of the available memory space when it detects that a designated application is being started. A designated application is one that consumes a large amount of memory while running, i.e., more than a certain memory threshold; a camera application, for example, occupies a large amount of memory at runtime.
In such an application scenario, the preset threshold may be determined according to a memory space consumed by the specified application program during operation, for example, if a certain application program needs to consume a memory space of 100MB during operation, the preset threshold may be set to 100MB, 150MB, or another value close to 100 MB.
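Both trigger scenarios reduce to comparing available memory against a threshold. A minimal sketch, where the 500 MB/300 MB values come from the examples above and the 1.5× margin over the designated app's requirement is an assumption mirroring the 100 MB → 150 MB example:

```python
# Sketch of the trigger in S160. Threshold values are illustrative:
# 6 GB RAM -> 500 MB, smaller RAM -> 300 MB; or a margin above the
# memory a designated (memory-hungry) app needs at startup.

def reclaim_threshold_mb(total_ram_gb, app_need_mb=None):
    if app_need_mb is not None:
        # Scenario 2: a designated app is starting; set the threshold
        # at or slightly above its requirement (1.5x is an assumption).
        return int(app_need_mb * 1.5)
    # Scenario 1: a fixed threshold chosen from total RAM.
    return 500 if total_ram_gb >= 6 else 300

def should_reclaim(available_mb, threshold_mb):
    # Reclamation is triggered when available memory falls to or
    # below the threshold.
    return available_mb <= threshold_mb

assert reclaim_threshold_mb(6) == 500
assert reclaim_threshold_mb(4, app_need_mb=100) == 150
assert should_reclaim(450, 500) is True
```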
S170, in response to the memory recycling instruction, determining the LRU linked list to be recycled in order from the lowest to the highest priority of the application groups.
In an embodiment of the present application, the memory management module, in response to the memory recycle instruction, scans the LRU linked lists in order from the lowest-priority application group to the highest. Among the linked lists that have not yet undergone memory reclamation, the one corresponding to the lowest-priority application group is determined as the LRU linked list to be reclaimed.
Assume the priorities of the application groups in fig. 5 are, from low to high: priority4 < priority3 < priority2 < priority1, and none of the four groups' LRU linked lists has been reclaimed yet; then the LRU4 linked list, with the lowest priority, is determined as the linked list to be reclaimed.
S180, determining, based on the memory recovery rate corresponding to the LRU linked list to be recycled, the memory pages to be recycled in order from cold to hot, and recycling those pages.
The memory recovery rate refers to a ratio of a memory to be recovered to an occupied memory, for example, a memory space occupied by an application is 50MB, and the memory recovery rate is 40%, that is, the recovered memory space is 50MB × 40% =20MB.
Different priorities indicate different access probabilities. To reduce the frequency of thrashing, as much memory as possible is reclaimed from applications with a lower access probability, and little or no memory is reclaimed from applications with a higher access probability.
Therefore, different memory recovery rates can be set for different priorities, with the priority inversely related to the rate: the higher the priority of an application group, the lower its memory recovery rate, and conversely the lower the priority, the higher the rate.
For example, as shown in fig. 5, four priorities are set, and the memory recovery rate corresponding to the first priority1 is 0, that is, it is not recovered; the memory recovery rate corresponding to the second priority2 is 10%, the memory recovery rate corresponding to the third priority3 is 20%, and the memory recovery rate corresponding to the fourth priority4 is 30%.
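Combining the rate table above with the definition of the recovery rate gives a one-line calculation per group. A minimal sketch; the table values are the examples from the text and the function name is hypothetical:

```python
# Example recovery-rate table from the text: higher priority -> lower
# rate. The reclaimed amount is occupied memory times the rate.

RECLAIM_RATE = {1: 0.0, 2: 0.10, 3: 0.20, 4: 0.30}

def memory_to_reclaim_mb(priority, occupied_mb):
    return occupied_mb * RECLAIM_RATE[priority]

# 50 MB occupied at a 40% rate would free 20 MB (the text's example);
# with the table above, a fourth-priority group of 50 MB frees 15 MB.
assert memory_to_reclaim_mb(4, 50) == 15.0
assert memory_to_reclaim_mb(1, 50) == 0.0  # first priority: no reclaim
```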
In addition, the memory recovery rate corresponding to each priority can be set according to the actual application requirement, and the application is not limited to this.
In one embodiment, the memory recovery rates for application groups of different priorities are preset in the memory management logic of the memory management module. Since each application group corresponds to one LRU linked list, each linked list has a corresponding recovery rate; that is, the memory management module determines a linked list's recovery rate from the priority of its application group.
Since memory pages are stored in the LRU linked list from head to tail in order from hot to cold, reclamation of the list's pages proceeds from the tail.
For example, the reclamation process may compress the data in a memory page and store it in an in-memory swap partition, or compress the data and store it in nonvolatile memory.
S190, detecting whether the available memory space meets the memory recovery stopping condition, and if so, ending the memory recovery process; if not, returning to execute the step S170 until the available memory space meets the memory recycling stop condition.
In one embodiment, the memory reclamation stop condition may be that the available memory space is greater than a second predetermined threshold. In one example, the second preset threshold may be determined according to the total space of the internal memory, such as 800MB.
In other examples, the second preset threshold may be determined according to a memory space consumed by a specific application program during running, for example, if a memory space to be allocated during running of a certain application program is 100MB, the second preset threshold may be set to 200MB.
After the memory recovery action is executed, if the available memory space is detected to be larger than or equal to a second preset threshold value, the memory recovery process is stopped. For example, in the example shown in fig. 5, after the memory of the LRU4 linked list is recycled, the available memory space is greater than the second preset threshold, and at this time, the memory recycling of the LRU3 and LRU2 linked lists is not continued. For another example, after the memory of the LRU4 and LRU3 linked lists in fig. 5 is recycled, if the available memory space is greater than the second preset threshold, the memory recycling operation is not performed on the LRU2 linked list.
If the available memory space is detected to be smaller than the second preset threshold, determining that the available memory space does not satisfy the memory recovery stop condition, and continuing to perform memory recovery on other LRU linked lists which are not subjected to memory recovery, namely returning to execute S170 to determine a new LRU linked list to be recovered.
For example, after the recyclable memory in the LRU4 linked list is recycled, the available memory space still does not satisfy the memory recycling stop condition, the LRU3 linked list with the lowest priority is determined from the LRU3 and the LRU2 to be used as the LRU linked list for the memory recycling, and the process is repeated until the available memory space is greater than or equal to a second preset threshold.
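Steps S170 to S190 together form a loop: walk the LRU linked lists from lowest to highest priority, reclaim each at its rate, and stop once available memory satisfies the stop condition. A minimal end-to-end sketch, with illustrative numbers and hypothetical names:

```python
# End-to-end sketch of S170-S190: reclaim group by group, lowest
# priority first, stopping once available memory reaches the stop
# threshold (the second preset threshold).

def reclaim(groups, available_mb, stop_threshold_mb):
    """groups: list of (priority, occupied_mb, rate), lowest priority
    first. Returns the new available memory and a dict recording how
    much each group released."""
    released = {}
    for priority, occupied_mb, rate in groups:
        if available_mb >= stop_threshold_mb:
            break                    # stop condition met (S190)
        freed = occupied_mb * rate   # coldest pages reclaimed first
        released[priority] = freed
        available_mb += freed
    return available_mb, released

# LRU4 and LRU3 are reclaimed; the stop condition is then met, so
# LRU2 is left untouched.
groups = [(4, 400, 0.30), (3, 400, 0.20), (2, 400, 0.10)]
avail, freed = reclaim(groups, available_mb=650, stop_threshold_mb=800)
print(avail, freed)  # → 850.0 {4: 120.0, 3: 80.0}
```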
In addition, it should be noted that, in the memory recovery method provided in this embodiment, there is no specific execution order between the process of dividing the application packet and the process of recovering the memory of the application packet, and these two processes may be executed in parallel.
For example, the application priority management module monitors the running state of applications, divides them into application groups accordingly, and passes the grouping to the memory management module. If the memory management module is currently executing a reclamation flow, the new grouping takes effect the next time reclamation runs; if it is not currently reclaiming, the new grouping is used at the next reclamation.
In the memory recovery method provided in this embodiment, the priority of an application is determined from its running state, all applications of the same priority are divided into one application group, and one memory page linked list is maintained per group; the linked list holds the memory pages of all the group's applications, ordered by hotness. When the available memory space falls below a preset threshold, a memory reclamation flow is triggered: the linked list to be reclaimed is determined, and its pages are reclaimed in order from cold to hot based on the reclamation rate corresponding to that linked list.
Unlike schemes that maintain each application's memory pages in its own independent LRU linked list, the method of this application maintains the pages of all same-priority applications in one LRU linked list, ordered only by hotness; that is, the pages of every application of the same priority are sorted uniformly by hotness. When the linked list's pages are reclaimed, the reclaimed pages are the relatively cold pages across all those applications. This avoids the situation where the relatively hot pages of some same-priority applications are reclaimed while the relatively cold pages of others are not, i.e., it avoids unbalanced memory reclamation among multiple same-priority applications.
In this method, each LRU linked list has a single memory recovery rate, and the pages of the applications in the list are ordered uniformly by hotness. If the list contains many relatively cold pages of one application, more of that application's pages are reclaimed, i.e., its effective reclamation rate is higher; if it contains few relatively cold pages of another application, fewer of that application's pages are reclaimed, i.e., its effective rate is lower. To an extent, this avoids the thrashing caused by applying one identical reclamation rate to every application of the same priority, and improves system operating efficiency.
Further, the scheme can dynamically adjust an application's priority as its running state changes, and thereby dynamically adjust its memory reclamation policy. For example, if an application is switched from the foreground to the background, it is moved to a lower-priority application group, and more of its memory is reclaimed; if an application is switched from the background to the foreground, or has been running in the foreground continuously, it is moved to a higher-priority group, and less of its memory is reclaimed. The dynamically adjusted reclamation policy therefore better matches the application's running state after the change, further improving system running efficiency and the user experience.
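The regrouping step can be sketched as follows. The score bands, group names, and thresholds here are illustrative assumptions, not values from the patent: when an application's state score changes, it is moved to the group matching its new priority, and its pages would migrate into that group's linked list along with it.

```python
# Hypothetical sketch of priority regrouping; score bands are made up.
PRIORITY_BANDS = [          # (min_score, priority); higher score = safer
    (80, "foreground"),
    (40, "background"),
    (0,  "frozen"),
]

def priority_for(score):
    # Map a state score onto a priority via the (assumed) band table.
    for min_score, prio in PRIORITY_BANDS:
        if score >= min_score:
            return prio
    return "frozen"

def regroup(groups, app, old_prio, new_score):
    """Move `app` into the group matching its new state score."""
    new_prio = priority_for(new_score)
    if new_prio != old_prio:
        groups[old_prio].discard(app)
        groups[new_prio].add(app)  # its pages follow into this group's list
    return new_prio

groups = {"foreground": {"mail"}, "background": set(), "frozen": set()}
# The mail app is sent to the background: its score drops, it is demoted,
# and its pages become more likely to be reclaimed.
new_prio = regroup(groups, "mail", "foreground", 55)
print(new_prio)   # background
```

The converse move (background to foreground) raises the score, promotes the application to a safer group, and so reduces how aggressively its pages are reclaimed.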
The memory reclamation method provided by this application determines an application's priority from its running state, groups all applications at the same priority into one application group, and maintains one memory-page linked list per group, in which the pages of all applications in the group are ordered together by how cold or hot they are. After a memory reclamation instruction is detected, the linked list to be reclaimed is determined, and pages are selected from it in cold-to-hot order and reclaimed at the reclamation rate corresponding to that list. Although the embodiments above are described using the Android system as an example, this should not limit the present application. The method is equally applicable to electronic devices based on other operating systems such as HarmonyOS, iOS, or Windows. According to the requirements of different operating systems, such as their system frameworks and their existing memory-reclamation logic, those skilled in the art can adapt the reclamation logic of the corresponding operating system based on the method provided here, so as to achieve the same technical effect.
For example, in other operating systems, the grouping of applications may be performed by another module having the same function as the application priority management module, and the reclamation of memory may be performed by another module having the same function as the memory management module; these variants are not enumerated here.
In the embodiments of the present application, the electronic device may be divided into functional modules according to the memory reclamation method examples above; for instance, one functional module may be provided per function, or two or more functions may be integrated into one processing module. An integrated module may be implemented in hardware or as a software functional module. It should be noted that the module division in the embodiments of the present application is schematic and is only one kind of logical functional division; other divisions are possible in actual implementations.
In the case where functional modules are divided per function, fig. 8 shows a schematic diagram of a possible composition of the memory reclamation apparatus involved in the above embodiments. The apparatus is capable of executing the steps of any of the method embodiments shown in fig. 4 of this application, and is an electronic device, or a communication apparatus (for example, a chip system) that supports an electronic device in implementing the method provided in the embodiments.
As shown in fig. 8, the memory reclamation apparatus may include: a to-be-reclaimed linked-list determining module 310, a to-be-reclaimed page determining module 320, and a memory reclamation module 330.
The to-be-reclaimed linked-list determining module 310 is configured to determine, in response to a memory reclamation instruction, the memory-page linked list to be reclaimed;
where each memory-page linked list comprises the memory pages of the applications of one application group, and each application group comprises all applications at the same priority.
The to-be-reclaimed page determining module 320 is configured to determine, based on the memory reclamation rate corresponding to the to-be-reclaimed linked list, the memory pages to be reclaimed from that list in order from cold to hot.
The memory reclamation module 330 is configured to reclaim the memory pages to be reclaimed.
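The interaction of the three modules can be sketched as follows. This is a simplified model with assumed data structures and rates, not the apparatus itself: module 310 picks the lowest-priority list that has not yet been reclaimed, module 320 selects pages cold-to-hot according to that list's reclamation rate, and module 330 drives the flow and frees them.

```python
# Simplified sketch of modules 310/320/330; rates and structures are assumed.
RECLAIM_RATE = {"frozen": 0.5, "background": 0.2, "foreground": 0.05}
PRIORITY_ORDER = ["frozen", "background", "foreground"]  # lowest first

def pick_list(lists, done):                      # module 310
    # Choose the lowest-priority non-empty list not yet reclaimed.
    for prio in PRIORITY_ORDER:
        if prio not in done and lists.get(prio):
            return prio
    return None

def pick_pages(pages, rate):                     # module 320
    # Pages are ordered cold -> hot; take the coldest fraction.
    n = int(len(pages) * rate)
    return pages[:n]

def reclaim(lists):                              # module 330 drives the flow
    done, freed = set(), []
    prio = pick_list(lists, done)
    if prio is not None:
        victims = pick_pages(lists[prio], RECLAIM_RATE[prio])
        lists[prio] = lists[prio][len(victims):]
        freed.extend(victims)
        done.add(prio)
    return freed

lists = {"frozen": ["p1", "p2", "p3", "p4"], "background": ["q1", "q2"]}
print(reclaim(lists))   # ['p1', 'p2'], i.e. 50% of the frozen group's list
```

One call reclaims from a single list; repeated invocations would move up the priority order, matching claim 2's rule that the lowest-priority list not yet reclaimed is processed first.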
It should be noted that, for all relevant details of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; they are not repeated here.
The memory reclamation apparatus provided in the embodiments of the present application is configured to execute the memory reclamation method of any of the embodiments above, and can therefore achieve the same effects as that method.
This embodiment also provides a computer-readable storage medium comprising instructions that, when run on an electronic device, cause the electronic device to implement the memory reclamation method provided in any of the foregoing embodiments.
This embodiment also provides a computer program product comprising instructions that, when run on an electronic device, causes the electronic device to implement the memory reclamation method provided in any of the foregoing embodiments.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is only one kind of logical functional division, and other divisions are possible in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. An integrated unit may be implemented in hardware or as a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the solution, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments. The aforementioned storage medium includes any medium that can store program code, such as flash memory, a removable hard drive, read-only memory, random-access memory, or a magnetic or optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A memory reclamation method, comprising:
in response to a memory reclamation instruction, determining a memory-page linked list to be reclaimed, wherein each memory-page linked list comprises the memory pages of all applications of a same application group, all memory pages in a memory-page linked list are ordered together according to their degree of coldness or hotness, each application group comprises all applications at a same priority, and the priority level is negatively correlated with the probability of the memory pages being reclaimed;
determining, based on the memory reclamation rate corresponding to the memory-page linked list to be reclaimed, the memory pages to be reclaimed from the linked list in order from cold to hot;
reclaiming the memory pages to be reclaimed;
the process of determining the priority of the application program comprises the following steps:
determining a benchmark score corresponding to the current running state of the application program;
calculating to obtain an adjustment score according to a weight coefficient and a score adjustment step length corresponding to foreground and background switching information of the application program in a historical time period;
and adjusting the adjustment score on the basis of the reference score to obtain a state score corresponding to the application program, and determining the priority of the application program according to the state score.
2. The method of claim 1, wherein determining the memory-page linked list to be reclaimed comprises:
determining, from among the memory-page linked lists on which memory reclamation has not yet been performed, the linked list corresponding to the application group with the lowest priority as the memory-page linked list to be reclaimed.
3. The method of claim 1 or 2, wherein the running state comprises foreground running, background running, or a frozen state.
4. The method of claim 1, wherein the foreground-background switching information comprises the foreground-background switching frequency and the time of the last switch to the foreground;
wherein calculating the adjustment score according to the weight coefficient and the score adjustment step corresponding to the foreground-background switching information comprises:
acquiring the time difference between the moment the application was last switched to the foreground and the current moment, and a first weight corresponding to the time difference, wherein the time difference is negatively correlated with the first weight;
acquiring a second weight corresponding to the foreground-background switching frequency, wherein the switching frequency is positively correlated with the second weight;
determining the sum of the first weight and the second weight as the weight coefficient corresponding to the foreground-background switching information;
and calculating the product of the weight coefficient and the score adjustment step to obtain the adjustment score.
5. The method of claim 1, wherein determining the priority of the application according to the state score of the application comprises:
determining the target score interval to which the state score of the application belongs;
and determining the priority corresponding to the target score interval according to a mapping between score intervals and priorities.
6. The method of claim 1, further comprising:
for any application, after determining from the application's state information that its corresponding priority has changed, moving the application to the target application group corresponding to the changed priority;
and updating the application's memory pages into the memory-page linked list corresponding to the target application group.
7. The method of claim 1, further comprising:
determining a target priority of the application group corresponding to the memory-page linked list to be reclaimed, wherein the target priority is the priority of the applications in that application group;
querying, according to a preset mapping between priorities and memory reclamation rates, the target memory reclamation rate matching the target priority;
and determining the target memory reclamation rate as the memory reclamation rate corresponding to the memory-page linked list to be reclaimed.
8. The method of claim 1, wherein the memory reclamation rate is negatively correlated with the priority level.
9. An electronic device, comprising: a memory and one or more processors, wherein,
the memory is used for storing one or more programs;
the processor is configured to execute the one or more programs, so that the electronic device executes the memory recovery method according to any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the memory reclamation method of any of claims 1-8.
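The score adjustment described in claims 1 and 4 can be sketched numerically. All concrete numbers here (the step length, the weight formulas, the example inputs) are illustrative assumptions; the claims only fix the correlations: the first weight falls as time since last foreground use grows, the second weight rises with switching frequency, and the adjustment is their sum times the step length.

```python
# Hypothetical numbers for the scoring procedure of claims 1 and 4.
STEP = 10.0                   # assumed score adjustment step length

def first_weight(seconds_since_foreground):
    # Negatively correlated with the time since the last switch to foreground.
    return 1.0 / (1.0 + seconds_since_foreground / 60.0)

def second_weight(switch_frequency):
    # Positively correlated with the foreground-background switch frequency.
    return min(1.0, switch_frequency / 10.0)

def state_score(benchmark, seconds_since_fg, switch_freq):
    # Claim 4: weight coefficient = first weight + second weight,
    # adjustment score = coefficient * step length.
    coeff = first_weight(seconds_since_fg) + second_weight(switch_freq)
    adjustment = coeff * STEP
    # Claim 1: apply the adjustment to the benchmark score.
    return benchmark + adjustment

# An app last foregrounded 60 s ago that switched 5 times this period:
score = state_score(benchmark=50, seconds_since_fg=60, switch_freq=5)
print(score)   # 50 + (0.5 + 0.5) * 10 = 60.0
```

The resulting state score would then be mapped to a priority through a score-interval table as in claim 5.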
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110857483.7A CN113778662B (en) | 2021-07-28 | 2021-07-28 | Memory recovery method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113778662A CN113778662A (en) | 2021-12-10 |
CN113778662B true CN113778662B (en) | 2022-12-06 |
Family
ID=78836226
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110857483.7A Active CN113778662B (en) | 2021-07-28 | 2021-07-28 | Memory recovery method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113778662B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117130947B (en) * | 2023-01-31 | 2024-07-12 | 荣耀终端有限公司 | Memory management method and electronic equipment |
CN117130767B (en) * | 2023-02-08 | 2024-08-16 | 荣耀终端有限公司 | Method for recycling memory, electronic equipment and storage medium |
CN116107925B (en) * | 2023-04-10 | 2023-09-26 | 阿里云计算有限公司 | Data storage unit processing method |
CN116185890B (en) * | 2023-04-23 | 2023-09-19 | 荣耀终端有限公司 | Memory recycling method and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104008061A (en) * | 2013-02-22 | 2014-08-27 | 华为技术有限公司 | Internal memory recovery method and device |
CN111078586A (en) * | 2019-12-10 | 2020-04-28 | Oppo(重庆)智能科技有限公司 | Memory recovery method and device, storage medium and electronic equipment |
CN111831440A (en) * | 2020-07-01 | 2020-10-27 | Oppo广东移动通信有限公司 | Memory recovery method and device, storage medium and electronic equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5333141B2 (en) * | 2009-10-09 | 2013-11-06 | ソニー株式会社 | Information processing apparatus and method, and program |
US9792227B2 (en) * | 2014-08-19 | 2017-10-17 | Samsung Electronics Co., Ltd. | Heterogeneous unified memory |
US11256619B2 (en) * | 2019-05-05 | 2022-02-22 | Microsoft Technology Licensing, Llc | Memory management for serverless databases |
- 2021-07-28: application CN202110857483.7A filed in China; granted as patent CN113778662B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113778662B (en) | Memory recovery method and device | |
CN107885666B (en) | Memory management method and device | |
US10552337B2 (en) | Memory management and device | |
US10776007B2 (en) | Memory management device predicting an erase count | |
US6857047B2 (en) | Memory compression for computer systems | |
US11593186B2 (en) | Multi-level caching to deploy local volatile memory, local persistent memory, and remote persistent memory | |
CN107526546B (en) | Spark distributed computing data processing method and system | |
JP2008090657A (en) | Storage system and control method | |
US10657069B2 (en) | Fine-grained cache operations on data volumes | |
CN102999444A (en) | Method and device for replacing data in caching module | |
CN115794669A (en) | Method, device and related equipment for expanding memory | |
CN112015343B (en) | Cache space management method and device of storage volume and electronic equipment | |
CN111427804B (en) | Method for reducing missing page interruption times, storage medium and intelligent terminal | |
CN112579251A (en) | Method and equipment for managing memory of virtual machine | |
CN113204407A (en) | Memory over-allocation management method and device | |
CN116166573B (en) | Method for controlling memory reclamation, electronic device and storage medium | |
WO2011131003A1 (en) | System for realizing mobile phone buffer storage mechanism and method for loading mobile phone operating system | |
CN109739688B (en) | Snapshot resource space management method and device and electronic equipment | |
CN108334401B (en) | System and method for realizing logic roll dynamic distribution and supporting virtual machine dynamic migration | |
US10210097B2 (en) | Memory system and method for operating the same | |
Jeong et al. | DaaC: device-reserved memory as an eviction-based file cache | |
CN112948073A (en) | Optimization method and device for running memory and storage medium | |
Wu et al. | APP: Enabling soft real-time execution on densely-populated hybrid memory system | |
JP2005004282A (en) | Disk array system, and method and program for managing disk array system | |
CN115827508B (en) | Data processing method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||