CN113778662A - Memory recovery method and device - Google Patents


Info

Publication number
CN113778662A
CN113778662A (application CN202110857483.7A); granted as CN113778662B
Authority
CN
China
Prior art keywords
memory
application
priority
linked list
application program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110857483.7A
Other languages
Chinese (zh)
Other versions
CN113778662B (en)
Inventor
田孝斌
袁晓峰
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110857483.7A
Publication of CN113778662A
Application granted
Publication of CN113778662B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 — Allocation of resources to service a request
    • G06F 9/5011 — Allocation of resources, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 — Allocation of resources, the resource being the memory
    • G06F 9/5022 — Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Stored Programmes (AREA)

Abstract

The application provides a memory recovery method and device. The scheme maintains the memory pages of multiple application programs at the same priority in the same memory page linked list, and when memory recovery is performed, the pages stored in the linked list are reclaimed uniformly in order from cold to hot, rather than reclaiming memory application by application (i.e., reclaiming all reclaimable memory of one application before moving on to the next). This avoids the thrashing that arises when relatively cold memory pages of some same-priority applications remain unreclaimed while relatively hot pages of others are reclaimed, and ensures that the memory pages of same-priority applications are reclaimed in a balanced manner. The scheme therefore reduces thrashing and improves system efficiency.

Description

Memory recovery method and device
Technical Field
The present application relates to the field of memory management technologies, and in particular, to a memory recovery method and apparatus.
Background
Memory, also called internal memory or main memory, temporarily stores the working data of the CPU and the data exchanged with external storage such as a hard disk. It is the bridge between external storage and the CPU, and all programs run in memory.
The memory space of the memory is limited. To ensure that enough memory is available while the system runs, the operating system of an electronic device provides a memory release mechanism: when memory is insufficient, the operating system cleans up the data of infrequently used applications in memory, i.e., it reclaims memory. However, the memory reclamation schemes in the related art may cause thrashing, thereby reducing system efficiency.
Disclosure of Invention
In view of this, the present application provides a memory recovery method and device to solve the thrashing problem of memory reclamation schemes in the related art. The disclosed technical scheme is as follows:
In a first aspect, the present application provides a memory recovery method, including: in response to a memory recovery instruction, determining a memory page linked list to be recovered, where each memory page linked list contains the memory pages of the application programs in the same application group, and each application group contains all application programs at the same priority; determining the memory pages to be recovered in the linked list in order from cold to hot, based on the memory recovery rate corresponding to the linked list to be recovered; and recovering the memory pages to be recovered. In this way, the memory pages of multiple applications at the same priority are maintained in the same memory page linked list, and during recovery the pages in the list are reclaimed in a balanced manner from cold to hot, rather than application by application. This avoids the situation in which relatively cold pages of same-priority applications remain unreclaimed while relatively hot pages are reclaimed, ensures balanced reclamation across same-priority applications, reduces thrashing, and improves system efficiency.
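To make the grouped scheme concrete, here is a minimal Python sketch; the class and method names are invented for illustration, not taken from the patent. One LRU structure per priority group holds the pages of all applications in the group, and reclamation drains the cold end regardless of which application owns each page.

```python
from collections import OrderedDict

class GroupLru:
    """One LRU structure per priority group, holding the pages of all
    apps in the group. The cold end is the front; the hot end is the back
    (the reverse orientation of the head-hot kernel list convention)."""

    def __init__(self):
        self.pages = OrderedDict()  # page_id -> owning app

    def touch(self, page_id, app):
        # Accessing a page moves it to the hot end.
        self.pages.pop(page_id, None)
        self.pages[page_id] = app

    def reclaim(self, n):
        # Reclaim the n coldest pages, whichever apps own them.
        victims = []
        for _ in range(min(n, len(self.pages))):
            victims.append(self.pages.popitem(last=False))  # pop cold end
        return victims

lru = GroupLru()
for page_id, app in [(0, "A"), (1, "A"), (2, "B"), (3, "B")]:
    lru.touch(page_id, app)
lru.touch(0, "A")            # page 0 of app A becomes hot again
victims = lru.reclaim(2)     # coldest two pages, drawn from both apps
print(victims)               # [(1, 'A'), (2, 'B')]
```

Note that reclaiming cold-to-hot across the whole group is what keeps one application's hot pages resident while another's colder pages are evicted first.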
According to the first aspect, determining the memory page linked list to be recovered includes: from the memory page linked lists on which memory recovery has not yet been performed, determining the linked list corresponding to the application group with the lowest priority as the linked list to be recovered. In this implementation, the lower the priority, the lower the probability that the application will be accessed; preferentially reclaiming the pages in the low-priority linked list therefore reduces thrashing and improves system efficiency.
According to the first aspect, or any implementation manner of the first aspect, the memory recovery method further includes: determining the priority of an application program according to its running state information, where the running state information includes at least one of foreground running, background running, frozen state, and foreground-background switching frequency. According to the first aspect or any implementation manner of the first aspect, determining the priority of the application according to its running state information includes: acquiring the current running state of the application and its foreground-background switching information over a historical time period; obtaining a state score for the application from the current running state and the switching information; and determining the priority of the application from its state score. In this implementation, the running state of the application is quantized into a score and the priority is derived from that score, so that applications with similar running states can be placed in the same application group and their memory pages managed uniformly, further reducing thrashing and improving system efficiency.
According to the first aspect, or any implementation manner of the first aspect, obtaining the state score of the application from the current running state and the foreground-background switching information includes: determining a reference score corresponding to the current running state; calculating an adjustment score from the weight coefficient and the score adjustment step corresponding to the switching information; and applying the adjustment score to the reference score to obtain the state score of the application. In this implementation, the state score is adjusted based on the application's foreground-background switching behavior, so that applications with more similar running state information are placed in the same group and their memory pages managed uniformly, further reducing thrashing and improving system efficiency.
According to the first aspect or any implementation manner of the first aspect, the foreground-background switching information includes the switching frequency and the time the application was last switched to the foreground. Calculating the adjustment score from the weight coefficient and the score adjustment step includes: acquiring the time difference between the moment the application was last switched to the foreground and the current moment, and a first weight corresponding to that difference, where the time difference is negatively correlated with the first weight; acquiring a second weight corresponding to the switching frequency, where the switching frequency is positively correlated with the second weight; determining the sum of the first weight and the second weight as the weight coefficient corresponding to the switching information; and multiplying the weight coefficient by the score adjustment step to obtain the adjustment score.
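The weighting just described can be sketched as follows; the weight thresholds, the frequency cap, and the step size below are assumptions for illustration only, not values from the patent.

```python
def adjustment_score(seconds_since_foreground, switches_per_hour, step=10.0):
    # First weight: negatively correlated with the time since the app
    # was last in the foreground (thresholds are assumed).
    if seconds_since_foreground < 60:
        w1 = 1.0
    elif seconds_since_foreground < 600:
        w1 = 0.5
    else:
        w1 = 0.1
    # Second weight: positively correlated with the foreground/background
    # switching frequency (capped at 1.0 for the example).
    w2 = min(switches_per_hour / 10.0, 1.0)
    # Weight coefficient = w1 + w2; adjustment = coefficient * step.
    return (w1 + w2) * step

def state_score(reference_score, seconds_since_foreground, switches_per_hour):
    # Reference score from the current running state, plus the adjustment.
    return reference_score + adjustment_score(seconds_since_foreground,
                                              switches_per_hour)

score = state_score(50.0, seconds_since_foreground=30, switches_per_hour=5)
print(score)  # 65.0 -> a recently used, frequently switched app scores higher
```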
According to the first aspect, or any implementation manner of the first aspect, determining the priority of the application from its state score includes: determining the target score interval to which the state score belongs; and determining the priority corresponding to that interval according to the mapping between score intervals and priorities.
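A score-interval lookup of this kind might be sketched as below; the interval boundaries and the priority assigned to each interval are invented for the example.

```python
import bisect

# Assumed interval edges and priorities: a higher state score maps to a
# higher priority, whose pages are reclaimed later.
BOUNDARIES = [20, 40, 60, 80]         # edges of the score intervals
INTERVAL_PRIORITY = [0, 1, 2, 3, 4]   # one priority per interval

def priority_for(state_score):
    # bisect_right finds the interval the score falls into.
    return INTERVAL_PRIORITY[bisect.bisect_right(BOUNDARIES, state_score)]

print(priority_for(15))   # 0: lowest priority, reclaimed first
print(priority_for(65))   # 3: a more active application
```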
According to the first aspect, or any implementation manner of the first aspect, the memory recovery method further includes: for any application, after determining from its state information that its priority has changed, moving the application to the target application group corresponding to the changed priority; and moving its memory pages to the memory page linked list of the target group. This implementation dynamically regroups applications according to their running state information and manages their memory pages according to the latest grouping, thereby reducing thrashing.
According to the first aspect, or any implementation manner of the first aspect, the memory recovery method further includes: determining the target priority of the application group corresponding to the linked list to be recovered, where the target priority is the priority of the applications in that group; querying the target memory recovery rate matching the target priority according to a preset mapping between priorities and memory recovery rates; and taking the target memory recovery rate as the recovery rate of the linked list to be recovered.
According to the first aspect, or any implementation manner of the first aspect, the memory recovery rate is inversely related to the priority. Thus less memory is reclaimed from applications with a higher access probability and more from applications with a lower access probability, which reduces thrashing and improves system efficiency.
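The inverse relation between priority and reclaim rate can be sketched as a simple lookup; the rates here are assumed values, not taken from the patent.

```python
# Assumed mapping: priority 0 is the least-used group and loses the most
# memory; priority 4 is the most active group and loses the least.
RECLAIM_RATE = {0: 0.8, 1: 0.6, 2: 0.4, 3: 0.2, 4: 0.1}

def pages_to_reclaim(total_pages, priority):
    # Number of pages to take from this group's linked list in one pass.
    return int(total_pages * RECLAIM_RATE[priority])

print(pages_to_reclaim(1000, 0))  # 800 pages from the lowest-priority group
print(pages_to_reclaim(1000, 4))  # 100 pages from the highest-priority group
```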
In a second aspect, the present application further provides an electronic device, including: a memory and one or more processors, wherein the memory is to store one or more programs; the processor is configured to run the one or more programs, so that the electronic device executes the memory recovery method according to the first aspect or any implementation manner of the first aspect.
In a third aspect, the present application further provides a computer-readable storage medium, having stored thereon instructions, which, when executed on an electronic device, cause the electronic device to perform the memory recovery method according to the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the present application further provides a computer program product, which when run on an electronic device, causes the electronic device to execute the memory recovery method according to the first aspect or any implementation manner of the first aspect.
It should be appreciated that the description of technical features, solutions, benefits, or similar language in this application does not imply that all of the features and advantages may be realized in any single embodiment. Rather, it is to be understood that the description of a feature or advantage is intended to include the specific features, aspects or advantages in at least one embodiment. Therefore, the descriptions of technical features, technical solutions or advantages in the present specification do not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantages described in the present embodiments may also be combined in any suitable manner. One skilled in the relevant art will recognize that an embodiment may be practiced without one or more of the specific features, aspects, or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating a mapping relationship between an LRU linked list and an application program according to the related art;
fig. 4 is a schematic flowchart of a memory recovery method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a mapping relationship between an application packet and an LRU linked list according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating arrangement of memory pages in an LRU linked list according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating hot and cold page ordering in an LRU linked list according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a memory recovery device according to an embodiment of the present application.
Detailed Description
The terms "first", "second" and "third", etc. in the description and claims of this application and the description of the drawings are used for distinguishing between different objects and not for limiting a particular order.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
For the sake of clarity and conciseness of the following description of the various embodiments, a brief description of related art terms or techniques referred to in this application is first provided:
page-based memory management is a memory space management technique, in which a memory virtual space is divided into a plurality of pages (pages) with equal length, also called memory pages (or memory pages for short), and the pages are used as the minimum unit of the memory space.
ZRAM is a Linux kernel feature that provides compressed virtual memory.
ZRAM swap allocates a region of memory to serve as a swap partition. When memory is insufficient, instead of killing an application, the memory data it occupies is compressed and stored in the swap partition; when the application is switched back, the data is decompressed directly back into memory, saving the time of restarting the application. For example, if an application occupies 50 MB of ordinary memory and the compression ratio is 0.4, storing its data in the swap partition requires only 20 MB, so the swap partition can hold more applications that are temporarily unused in the background, which is equivalent to expanding the memory space.
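The worked figure in the paragraph above is just the resident size multiplied by the compression ratio:

```python
def zram_compressed_mb(resident_mb, compression_ratio):
    """Space an app's pages occupy once compressed into the ZRAM swap area."""
    return resident_mb * compression_ratio

# The example from the text: 50 MB at a 0.4 ratio needs only 20 MB.
print(zram_compressed_mb(50, 0.4))
```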
The memory swap mechanism writes data from infrequently accessed memory to the hard disk (i.e., the non-volatile memory of the electronic device) and frees that memory for processes that need it more. A swap partition is an area set aside on the hard disk for this exchange; by swapping inactive memory pages out to the hard disk, the effective memory is increased.
LRU (Least Recently Used) is a commonly used page replacement algorithm. It is based on the principle that if data has not been accessed recently, the probability of it being accessed in the future is small; therefore, when memory reclamation is required, the least recently used memory pages are evicted, i.e., their memory is reclaimed. In an LRU linked list, nodes closer to the tail were accessed earlier (their pages are colder) and nodes closer to the head were accessed more recently (their pages are hotter); that is, pages are stored from head to tail in order from hot to cold. Reclamation therefore starts from the tail of the list.
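As a minimal illustration of this ordering, using a Python deque in place of a kernel linked list:

```python
from collections import deque

# Head (index 0) holds the hottest page, tail (index -1) the coldest.
lru = deque()
for page in ["p3", "p2", "p1"]:   # p1 was accessed most recently
    lru.appendleft(page)          # most recent access goes to the head

reclaimed = lru.pop()             # eviction starts from the tail
print(reclaimed)                  # 'p3', the least recently used page
```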
Thrashing occurs when a page that has just been swapped out of memory is immediately swapped back in on request (i.e., read from non-volatile memory into memory). This repeated swapping out and in forces the system to spend a great deal of time on page exchange, lowering its actual efficiency and, in severe cases, even crashing the system.
The electronic device to which the memory recovery method provided by this application applies may be a mobile phone, a tablet computer, a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, and the like; this application does not limit the specific form of the electronic device.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 1, the electronic device may include: processor 110, internal memory 120, non-volatile memory 130, and display screen 140.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic device. In other embodiments, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The internal memory 120, i.e., the memory of the electronic device, is usually random-access memory (RAM), which supports random access with fast read/write speeds. It temporarily stores the working data of the processor 110 and the data exchanged with the nonvolatile memory 130, serving as the bridge between the processor 110 and the nonvolatile memory 130.
The nonvolatile memory 130 is a memory using a nonvolatile storage medium, for example, at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), or the like.
The nonvolatile memory 130 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, a video playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, photos, videos, etc.) created during the use of the electronic device, and the like.
The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the non-volatile memory 130.
The read/write speed of the internal memory 120 is higher than that of the non-volatile memory 130, so when the processor 110 runs instructions stored in the non-volatile memory 130, those instructions are first loaded into the internal memory 120; the processor 110 then communicates directly with the internal memory 120, improving system efficiency.
The display screen 140 is used for displaying images, videos, a series of Graphical User Interfaces (GUIs), and the like, and a user can interact with an application program by operating the GUIs.
In addition, an operating system runs on the above components, for example, the Android operating system developed by Google, the HarmonyOS operating system developed by Huawei, the Windows operating system developed by Microsoft, or the iOS operating system developed by Apple. Applications can be installed and run on the operating system.
The operating system of the electronic device may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of an electronic device. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface.
Fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present Application, where the software structure includes an Application layer (APP), an Application Framework layer (Framework), and a Kernel layer (Kernel).
The application layer may include a series of application packages. As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
The application framework layer is used for managing the application and recording the running state of the application, such as foreground, background, freezing and the like.
In this embodiment of the application, as shown in fig. 2, the application framework layer further includes an application priority management module 210, where the application priority management module 210 is capable of acquiring the running state of the application from the application framework layer, determining the priority of the application according to the running state of the application, and further dividing the application groups according to the priority of the application.
The kernel layer comprises the system-level security mechanism, memory management, the file system, process management, the network stack, and a series of driver modules; it is the layer between hardware and software and provides interaction with the hardware.
In this embodiment, as shown in fig. 2, the kernel layer includes a memory management module 220, where the memory management module 220 manages a memory occupied by each running application according to a memory release mechanism based on application groups divided by the application priority management module 210, and ensures that a sufficient memory space is available when the system runs.
In one embodiment of the present application, the application priority management module of the application framework layer may directly interact with the memory management module of the kernel layer, for example, the application priority management module transmits the identifier of the application program and the group identifier included in each application group to the memory management module, so that the memory management module may create an LRU linked list for each application group for maintaining the memory pages of all the application programs included in the application group.
In embodiments of the present application, the hardware layer may include internal memory and non-volatile memory.
The memory management module may monitor a memory state of the memory, such as a current available memory space, and trigger a memory recovery process when it is monitored that the current available memory space meets a memory recovery starting condition.
The memory resources of an electronic device are fixed. As a user opens multiple applications (Application, APP), the system caches the data of APPs switched to the background in memory. When the memory usage of the whole device approaches its upper limit, the system starts killing processes, killing some of the APPs cached in memory. Such killing harms the user's keep-alive experience: when the user accesses a killed APP again, its data must be reloaded from storage and the APP restarted, a process requiring time-consuming disk I/O, so the application starts slowly and the interface from the user's last session is not preserved.
For example, after a user reads an article in a reading APP and then opens an instant messaging APP, the reading APP is switched to the background. If the process of the reading APP is killed (i.e., all the memory it occupies is reclaimed), then when the user accesses it again, it must be restarted by loading data from the hard disk, and the interface from the last reading session cannot be restored; that is, the application keep-alive experience is poor.
To reduce the impact of process killing on the keep-alive experience as much as possible, the system needs to cache more application data. Commonly used techniques include ZRAM memory compression and memory dump (writing memory out to external storage). Compressing or dumping the memory occupied by an APP improves the keep-alive experience but incurs system overhead: ZRAM compression and decompression occupy CPU resources, while dumping increases wear on the external storage device.
To mitigate these costs, a policy is needed to choose which memory blocks of which APPs to compress or dump. One solution in the related art is to maintain an LRU (Least Recently Used) linked list for each application to manage the memory pages it occupies, i.e., one LRU linked list contains all the memory pages of one APP.
As shown in fig. 3, APP1 corresponds to the LRU1 linked list, APP2 to LRU2, APP3 to LRU3, and APP4 to LRU4; LRU1 contains only the memory pages occupied by APP1, and likewise LRU2, LRU3, and LRU4 contain only the pages of APP2, APP3, and APP4, respectively.
When memory needs to be reclaimed, the LRU linked lists are traversed in turn, and reclamation proceeds from the tail of each list; that is, the reclaimable memory pages are compressed or dumped.
Studying this memory reclamation process, the inventors found that thrashing still occurs, and further study revealed the following causes:
in one case, for a plurality of applications with the same future access probability, when memory recovery is performed, the system sequentially traverses the LRU linked lists of the applications to perform memory recovery, and when the memory recovery of a part of the applications is performed and a memory recovery stop condition is met, the memory recovery is stopped without performing memory recovery on other applications.
For example, in the example shown in fig. 3, assuming that the probabilities of future accesses of APP1 to APP4 are the same, after the reclaimable memories in the LRU linked lists of APP1 and APP2 are reclaimed, the available memory space satisfies the memory reclamation stop condition, and at this time, the memories of APP3 and APP4 are not reclaimed any more. That is, the memory pages with higher probability of being accessed in the future in APP1 and APP2 are recycled, while the memory pages with lower probability of being accessed in the future in APP3 and APP4 are not recycled, which may cause the memory pages to be swapped out and in repeatedly, i.e. thrashing occurs.
In another case, multiple applications with the same overall future-access probability are assigned the same memory recovery rate, that is, their memory is recovered at the same rate. However, different applications still differ internally. For example, APP1 and APP2 both run in the background and have the same overall future-access probability, but a memory page occupied by a certain process in APP1 may have a higher future-access probability than the memory pages occupied by all processes in APP2. If the page memory of APP1 and APP2 is then recovered at the same rate, a memory page of APP1 with a high future-access probability may be recovered; when that page of APP1 is accessed again, it must be swapped back in, so the phenomenon of memory pages being repeatedly swapped out and in, i.e. thrashing, can also occur.
In order to solve the thrashing problem in the memory recovery schemes of the related art, the inventor provides the memory recovery method and device of the present application: all application programs at the same priority are divided into one application group; one LRU linked list is maintained for each application group; all memory pages of the applications in that group are maintained in the LRU linked list; and within the linked list, the memory pages are ordered according to their hot and cold degrees. In response to a memory recovery instruction, the LRU linked list to be recovered is determined, and, based on the memory recovery rate corresponding to that linked list, its memory pages to be recovered are recovered in order from cold to hot. In this scheme, the memory pages of multiple applications at the same priority are maintained in the same LRU linked list, and during memory recovery the pages of these applications are recovered uniformly in cold-to-hot order, rather than application by application (i.e., first recovering all recoverable memory of one application and then that of the next). This avoids the situation where relatively cold memory pages of some same-priority applications are left unrecovered while relatively hot pages of others are recovered, ensures that the memory pages of same-priority applications are recovered in a balanced manner, reduces system thrashing, and improves system efficiency.
Fig. 4 is a schematic flowchart of a memory recycling method according to an embodiment of the present application, where the method is applied to the electronic device shown in fig. 1, and as shown in fig. 4, the method may include the following steps:
and S110, acquiring the running state information of each application program in the running state.
Here, the running state means that an application program has been launched and has not been closed; such an application may be running in the foreground, running in the background, or frozen.
Accordingly, the running state information of an application program may include the foreground running, background running, and frozen states.
Foreground operation refers to an application program operating in a foreground mode, and the application program in the foreground mode occupies a display device (such as a touch screen) and an input device of the electronic device and can interact with a user.
Background running means that the application program is in a background mode, does not occupy a display device and an input device of the electronic device, cannot interact with a user, and the application program in the background mode also occupies certain system resources, such as memory resources and CPU resources.
The frozen state means that an application program has been frozen by the system. An application in the frozen state stops running and occupies no CPU resources, but its memory resources are not released; because a frozen application is temporarily unused, most of the memory pages it occupies can be dumped into ROM, and the application can still be restored to its previous state.
The operating system of the electronic device can monitor the running state of the application program. For example, the application framework layer of the Android system manages application programs, so the application framework layer can record the running state of an application program, such as the foreground, background, and frozen states, and the number and times of foreground-background switches.
And S120, acquiring a state score corresponding to the application program according to the running state information corresponding to the application program.
The active degrees of the application programs in different running states are different, the active degrees represent the probability of being accessed by the user, and the higher the active degree is, the higher the probability of being accessed by the user is; the lower the activity level, the lower the probability of being accessed by the user. Thus, a state score characterizing the activity level of the application can be calculated from the running state information.
In this embodiment, the application priority management module shown in fig. 2 may be used to obtain the state score corresponding to the application program, where the application priority management module may obtain the running state of the application program from the application framework layer of the operating system, and further calculate the state score of the application program according to the running state.
In one example, the state score is negatively related to the activity level, i.e., the state score is negatively related to the priority level, e.g., the lower the state score, the more active the application is, and the higher the probability of being accessed by the user, the higher the priority level; conversely, a higher state score indicates that the application is less active and has a lower probability of being accessed by the user, and a lower priority.
In an embodiment of the present application, a state score range (e.g., 0 to 1000) may be set, and reference scores may be set for the foreground running, background running, and frozen states respectively; for example, the reference score corresponding to foreground running is 0, that corresponding to background running is 300, and that corresponding to the frozen state is 1000.
In an application scenario, a higher foreground-background switching frequency indicates a higher probability that the application will be accessed, and conversely, a lower switching frequency indicates a lower probability. Likewise, the smaller the time difference between the time the application was last switched to the foreground and the current time, the higher the probability that the application will be accessed. Thus, the time the application was last switched to the foreground can also characterize the probability that the application will be accessed.
Therefore, in an exemplary embodiment, for the background operating state and the frozen state, a certain score may be subtracted from a reference score corresponding to each operating state according to the switching frequency of the foreground and the background counted in the historical time window and the time difference between the latest switching time to the foreground and the current time, so as to finally obtain the state score of the application program.
In a possible implementation manner, a division step size (e.g., 100) is set; a weight coefficient is determined according to the foreground-background switching frequency and the time difference between the last switch to the foreground and the current time; and the adjustment score of the application is then obtained by calculating the product of the weight coefficient and the division step size.
In an example, the weight coefficient may include two parts, one part depends on the switching frequency, such as the higher the switching frequency, the higher the weight; another part depends on that the time of the last switching foreground is less different from the current time, e.g. the less the time difference is, the higher the weight is.
For example, an application whose last switch to the foreground was 1 hour ago is given a higher weight than an application whose last switch to the foreground was 3 hours ago. The sum of the two weights is used as the weight coefficient of the foreground-background switching information.
For example, if the division step size is 100, the weight corresponding to the switching frequency is 0.6, and the weight corresponding to the time difference between the last switch to the foreground and the current time is 0.8, then the weight coefficient is 0.6 + 0.8 = 1.4, and the adjustment score is 100 × 1.4 = 140; with a reference score of 1000 for an application in the frozen state, the final state score of the application is 1000 - 140 = 860.
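The scoring rule described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the reference scores and the division step size are the example values from the text, while the two weight functions (`switch_frequency_weight`, `recency_weight`) are hypothetical mappings invented here for demonstration.

```python
# Example reference scores and division step size from the text.
REFERENCE_SCORE = {"foreground": 0, "background": 300, "frozen": 1000}
DIVISION_STEP = 100

def switch_frequency_weight(switches_per_hour):
    # Assumed mapping: more foreground-background switches -> larger weight.
    return min(1.0, switches_per_hour / 10.0)

def recency_weight(hours_since_foreground):
    # Assumed mapping: more recent foreground use -> larger weight.
    return max(0.0, 1.0 - hours_since_foreground / 24.0)

def state_score(state, switches_per_hour, hours_since_foreground):
    """Lower score = more active = higher priority (negative correlation)."""
    base = REFERENCE_SCORE[state]
    if state == "foreground":
        return base  # foreground apps keep the lowest (most active) score
    weight = (switch_frequency_weight(switches_per_hour)
              + recency_weight(hours_since_foreground))
    return base - DIVISION_STEP * weight

# Worked numbers: frozen app, example weights 0.6 and 0.8.
adjustment = round(DIVISION_STEP * (0.6 + 0.8))   # 140
score = REFERENCE_SCORE["frozen"] - adjustment
print(score)  # 860
```

A foreground application simply keeps its reference score of 0, so it is never demoted by this adjustment.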
In this implementation manner, the state score of the application program is further adjusted based on the foreground-background switching information of the application program, so that application programs with more similar running state information are divided into the same application group, unified management of the memory pages of the same group is realized, system thrashing is further reduced, and system efficiency is improved.
Of course, in other embodiments of the present application, the weight coefficient may also be determined according to one of the foreground-background switching frequency and a time difference between a time of last switching to foreground and a current time. For example, if the weight corresponding to the foreground/background switching frequency is 0.6, the weight coefficient of the foreground/background switching information is 0.6. In another example, if the weight corresponding to the time difference between the time of the latest handover to the foreground and the current time is 0.8, the weight coefficient of the foreground-background handover information is 0.8.
In addition, other methods may also be used to calculate the state score of the application program, for example, the state score may be set to be positively correlated with the activity degree, that is, a higher state score indicates that the application program is more active, and a lower state score indicates that the application program is less active, which is not limited in this application.
S130, determining the priority of the application program according to the state score of the application program.
A higher priority of an application indicates a more active application, i.e. a higher probability of the user accessing the application. For example, in an example where the state score is negatively correlated with activity, the state score is negatively correlated with priority, i.e., the higher the state score, the lower the priority; in the example where the state score is positively correlated with the activity level, the state score is positively correlated with the priority, i.e., the lower the state score, the higher the priority.
In this embodiment, a mapping relationship between the state score and the priority may be preset, and the priority corresponding to the state score of the application program may be further determined according to the mapping relationship.
The mapping relationship between the state scores and the priorities is determined according to the distribution of the state scores corresponding to the application programs, for example, one score may be set to correspond to one priority, or the same score interval may be set to correspond to one priority.
In one example, the state scores of the applications are relatively concentrated, that is, several applications share the same score. In this case, one score corresponds to one priority, and different scores correspond to different priorities. For example, if the scores of APP1, APP2, and APP3 are the same, the priorities of the three APPs are determined to be the same.
In another example, the state scores of the applications are relatively dispersed. In this case, the applications in the same score segment may be given the same priority. For example, the state score of APP1 is 200, that of APP2 is 220, and that of APP3 is 250; if the score interval corresponding to the second priority is [200, 250], then APP1, APP2, and APP3 all have the second priority.
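A minimal sketch of this interval-based mapping follows; the [200, 250] interval for the second priority comes from the example above, while the other boundaries are invented here purely for illustration.

```python
# (lower bound, upper bound, priority); only [200, 250] -> priority 2 is
# from the text, the remaining boundaries are illustrative assumptions.
PRIORITY_INTERVALS = [
    (0, 199, 1),      # first (highest) priority
    (200, 250, 2),    # second priority, matching the APP1..APP3 example
    (251, 600, 3),    # third priority
    (601, 1000, 4),   # fourth (lowest) priority
]

def priority_of(score):
    """Map a state score to a priority via its score segment."""
    for low, high, prio in PRIORITY_INTERVALS:
        if low <= score <= high:
            return prio
    raise ValueError("score out of range")

# APP1 = 200, APP2 = 220, APP3 = 250 all fall in [200, 250] -> priority 2.
print([priority_of(s) for s in (200, 220, 250)])  # [2, 2, 2]
```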
And S140, dividing the application programs in the same priority into one application group.
In this embodiment, the applications are divided into different groups according to different priorities of the applications, for example, the applications in the same priority may be divided into the same group. In addition, in other embodiments, two adjacent priority applications may be divided into the same group.
For example, in the Android system, based on the memcg (memory cgroup) mechanism, the processes of all applications in the same application group are placed in the same memcg group, so that memory is managed with the memcg group as the unit, that is, the memory pages of all processes in the same memcg group are managed uniformly.
The memcg is a function of the Linux kernel that provides management of the memory behavior of a group of processes in the system.
For example, as shown in fig. 5, 12 application programs APP1 to APP12 are divided into four groups by priority:
APP1, APP2 and APP3 are all of the first priority, and all processes of these three APPs are divided into one memcg packet, namely the memcg1 packet.
APP4, APP5 and APP6 are all of the second priority, and the processes of these three APPs are divided into the memcg2 packet.
APP7, APP8 and APP9 are all of the third priority, and the processes of these three APPs are divided into the memcg3 packet.
APP10, APP11 and APP12 are all of the fourth priority, and the processes of these three APPs are divided into the memcg4 packet.
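The grouping step can be sketched as below. The APP names and priority assignments simply mirror the fig. 5 example, and the dictionary-based partition is an illustrative stand-in for the kernel's memcg mechanism, not the actual implementation.

```python
from collections import defaultdict

# Assumed priorities mirroring the fig. 5 example.
app_priority = {
    "APP1": 1, "APP2": 1, "APP3": 1,
    "APP4": 2, "APP5": 2, "APP6": 2,
    "APP7": 3, "APP8": 3, "APP9": 3,
    "APP10": 4, "APP11": 4, "APP12": 4,
}

def group_by_priority(app_priority):
    """Partition applications into groups (one per priority, like one memcg each)."""
    groups = defaultdict(list)
    for app, prio in app_priority.items():
        groups[prio].append(app)
    return dict(groups)

groups = group_by_priority(app_priority)
print(sorted(groups[1]))  # ['APP1', 'APP2', 'APP3']
```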
In an application scenario, as the user interacts with the application program, the running state of the application program may change, for example, switch from background to foreground, and thus the priority of the application program is dynamically changed.
For example, after the user opens a music APP and selects a song to play, the music APP is switched to the background while the user opens a reading APP, listening to the music while reading an article. In this example, the reading APP runs in the foreground and the music APP runs in the background, so the priority of the reading APP is higher than that of the music APP. When the user later opens the interface of the music APP to view the lyrics, the music APP switches to the foreground and the reading APP switches to the background; in this usage scenario, the priority of the music APP is higher than that of the reading APP.
In a possible implementation manner, the operating system of the electronic device may monitor the running state of the application program according to a specified period, for example, the specified period may be 1min or other duration, which is not limited in this application.
S150, maintaining an LRU linked list for each application group, where the LRU linked list includes memory pages occupied by all applications in the application group.
For example, in the Android system, memory pages occupied by all applications within a memcg group are managed by an LRU linked list. In this embodiment, each memcg group corresponds to one LRU linked list, and the LRU linked list maintains memory pages of all applications in the memcg group.
Still illustrated by way of example in fig. 5, application grouping 1 corresponds to the LRU1 linked list, application grouping 2 corresponds to the LRU2 linked list, application grouping 3 corresponds to the LRU3 linked list, and application grouping 4 corresponds to the LRU4 linked list.
As shown in fig. 5, the application group 1 includes three APPs, i.e., APP1, APP2, and APP3, the LRU1 linked list corresponding to the application group includes all the memory pages occupied by the three APPs, and the memory pages of the three APPs are uniformly sorted and are no longer sorted according to different applications.
In the LRU linked list, the memory pages of all the applications in the same group are maintained directly as one set rather than per application. In other words, the LRU linked list breaks up the original per-application ordering of memory pages, and the pages of all the applications are ordered uniformly according to their hot and cold degrees.
In an embodiment, memory pages of each application in the same application group are sequentially sorted according to a hot and cold degree, as shown in fig. 6, for convenience of illustration, the memory pages are identified by pageij, where i represents a sorting order of the memory page in the LRU linked list, and j represents a number of APPs occupying the memory page.
For example, page11 indicates that the memory page is sorted to the 1st bit in the LRU linked list and is occupied by APP1. Similarly, page22 indicates that the memory page is sorted to the 2nd bit and is occupied by APP2, and page31 indicates that the memory page is sorted to the 3rd bit and is occupied by APP1. By analogy, pageij indicates that the memory page is sorted to the ith bit in the LRU linked list and is occupied by APPj.
As can be seen from fig. 6, in the LRU linked list, the 1st and 3rd memory pages both belong to APP1 while the 2nd belongs to APP2, which means that the LRU linked list in the present application breaks the grouping by application and sorts directly according to the hot and cold degrees of the memory pages of all the applications.
The LRU linked list is arranged from head to tail in order from hot to cold: the closer a memory page is to the head of the list, the hotter it is, and the closer to the tail, the colder; for example, page1 is hotter than page2.
The hot or cold degree of a memory page is determined by the time difference between its last access time and the current time: the larger the time difference, the colder the page, and conversely, the smaller the time difference, the hotter the page. For example, if Page1 was last accessed 35 min before the current time and Page2 was last accessed 30 min before, the heat of Page1 is lower than that of Page2.
As shown in fig. 7, at time t0 the memory pages in the LRU linked list are ordered as: page1 → page2 → page3 → … → pagei → … → pagen. At time t1, pagei is accessed; pagei then moves to the head of the LRU linked list and the other pages shift backwards in sequence, so the ordering becomes: pagei → page1 → page2 → page3 → … → pagen.
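The move-to-head behaviour above can be sketched with a small simulation; the page names are illustrative, and a Python deque stands in for the kernel's linked list.

```python
from collections import deque

class GroupLRU:
    """Per-group LRU list: head = hottest page, tail = coldest page."""

    def __init__(self, pages):
        self.pages = deque(pages)

    def access(self, page):
        # Accessing a page moves it to the head; the others shift back.
        self.pages.remove(page)
        self.pages.appendleft(page)

lru = GroupLRU(["page1", "page2", "page3", "page_i", "page_n"])
lru.access("page_i")
print(list(lru.pages))  # ['page_i', 'page1', 'page2', 'page3', 'page_n']
```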
When memory reclamation occurs, reclamation begins from the end of the LRU linked list because the probability that the memory pages stored at the end are accessed again is lowest. Releasing this portion of memory causes a minimal probability of thrashing.
And S160, generating a memory recycling instruction after detecting that the available memory space is less than or equal to a preset threshold value.
In an application scenario, a kernel (kernel) of an operating system of an electronic device (e.g., a memory management module shown in fig. 2) detects a currently available memory space, and triggers a memory recycling process if the currently available memory space is less than or equal to a preset threshold (i.e., a first preset threshold).
The preset threshold value can be determined according to the total storage space of the internal memory of the electronic equipment, and when the total storage space of the internal memory is larger, the preset threshold value can be set to be a larger numerical value, so that the efficient operation of the system is ensured.
For example, the total space of the internal memory is 6GB, and the preset threshold may be set to 500 MB; as another example, the total space of the internal memory is 4GB, and the preset threshold may be set to 300 MB.
In a possible implementation manner, the memory management module may detect the available memory space according to a specified period, for example, the specified period may be 1min, 5min, 10min, or other duration, which is not limited in this application.
In another application scenario, when the electronic device detects that the system starts an application (e.g., a camera) consuming a large amount of memory, a process of detecting available memory space is triggered.
An application program consumes a certain amount of memory while running, and if the available memory space cannot satisfy the memory required to start a given application program, one round of memory recovery must be performed first; therefore, the electronic device can trigger the process of detecting the available memory space when it detects that a designated application program is started. A designated application program is one that consumes a large memory space while running, i.e., whose memory consumption during running exceeds a certain memory threshold; for example, a camera application occupies a large amount of memory space while running.
In this application scenario, the preset threshold may be determined according to a memory space consumed by the designated application program during running, for example, if a certain application program needs to consume 100MB of memory space during running, the preset threshold may be set to 100MB, 150MB, or other values close to 100MB, etc.
S170, responding to the memory recovery instruction, and determining the LRU linked list to be recovered according to the sequence from low to high of the priority corresponding to each application group.
In an embodiment of the present application, in response to the memory recovery instruction, the memory management module scans the LRU linked lists in order from the lowest to the highest priority of the corresponding application packets, and, among the LRU linked lists on which memory recovery has not yet been performed, determines the one corresponding to the lowest-priority application packet as the LRU linked list to be recovered.
Assume that the priorities corresponding to the application packets in fig. 5 satisfy, from low to high: priority4 < priority3 < priority2 < priority1. Since none of the four LRU linked lists has undergone memory recovery, the LRU4 linked list, whose priority is lowest, is determined as the LRU linked list to be recovered.
And S180, based on the memory recovery rate corresponding to the LRU linked list to be recovered, determining the memory pages to be recovered in that linked list in the order from cold to hot, and recovering them.
The memory recovery rate is the ratio of the memory to be recovered to the occupied memory. For example, if the memory space occupied by an application is 50 MB and the memory recovery rate is 40%, the recovered memory space is 50 MB × 40% = 20 MB.
Different priorities indicate different access probabilities of the application programs. In order to reduce the frequency of the thrashing phenomenon, as much memory as possible is recovered from the application programs with a lower access probability, while less memory, or even none, is recovered from the application programs with a higher access probability.
Therefore, different memory recovery rates can be set for different priorities, respectively, and the priority is inversely related to the memory recovery rate, that is, the higher the priority corresponding to the application packet is, the lower the memory recovery rate is, whereas the lower the priority corresponding to the application packet is, the higher the memory recovery rate is.
For example, as shown in fig. 5, four priorities are set, and the memory recovery rate corresponding to the first priority1 is 0, that is, it is not recovered; the memory recovery rate corresponding to the second priority2 is 10%, the memory recovery rate corresponding to the third priority3 is 20%, and the memory recovery rate corresponding to the fourth priority4 is 30%.
In addition, the memory recovery rate corresponding to each priority can be set according to the actual application requirement, and the application is not limited to this.
In one embodiment, the memory management logic of the memory management module is preset with the memory recovery rates corresponding to application packets with different priorities, and each application packet corresponds to one LRU linked list, so that each LRU linked list has a corresponding memory recovery rate. That is, the memory management module determines the memory recovery rate of the LRU linked list according to the priority of the application packet corresponding to the LRU linked list.
In the LRU linked list, memory pages are stored from head to tail in order from hot to cold, so when the memory pages in the list are recovered, recovery starts from the tail of the LRU linked list.
For example, the memory reclamation process may be to compress the data in the memory page and store the compressed data in the swap partition in the memory, or compress the data in the memory page and store the compressed data in the memory page in the nonvolatile memory.
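Step S180 can be sketched as follows under simplifying assumptions: the linked list is modeled as a Python list ordered hot to cold, the compression/dump step is abstracted to returning the reclaimed pages, and the 30% rate is the fourth-priority example value from the text.

```python
def reclaim_from_tail(lru_pages, rate):
    """lru_pages: ordered hot -> cold; rate: fraction of pages to recover.

    Returns (remaining pages, recovered pages); the recovered pages are
    the coldest ones, taken from the tail of the list.
    """
    n_reclaim = int(len(lru_pages) * rate)
    cut = len(lru_pages) - n_reclaim
    return lru_pages[:cut], lru_pages[cut:]

# 10 pages at the 30% fourth-priority rate: the 3 coldest pages are recovered.
remaining, reclaimed = reclaim_from_tail([f"p{i}" for i in range(1, 11)], 0.30)
print(reclaimed)  # ['p8', 'p9', 'p10']
```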
S190, detecting whether the available memory space meets the memory recovery stopping condition, and if so, ending the memory recovery process; if not, returning to execute the step S170 until the available memory space meets the memory recycling stop condition.
In one embodiment, the memory reclamation stop condition may be that the available memory space is greater than a second predetermined threshold. In one example, the second preset threshold may be determined according to the total space of the internal memory, such as 800 MB.
In other examples, the second preset threshold may be determined according to a memory space consumed by a specific application when running, for example, if a memory space to be allocated when an application runs is 100MB, the second preset threshold may be set to 200 MB.
After the memory recovery action is executed, if the available memory space is detected to be larger than or equal to a second preset threshold value, the memory recovery process is stopped. For example, in the example shown in fig. 5, after the memory of the LRU4 linked list is recycled, the available memory space is greater than the second preset threshold, and at this time, the memory recycling of the LRU3 and LRU2 linked list is not continued. For another example, after the memory of the LRU4 and the LRU3 linked list in fig. 5 is recycled, if the available memory space is greater than the second preset threshold, the memory recycling operation is not performed on the LRU2 linked list.
If the available memory space is detected to be smaller than the second preset threshold, determining that the available memory space does not satisfy the memory recovery stop condition, and continuing to perform memory recovery on other LRU linked lists which are not subjected to memory recovery, namely returning to execute S170 to determine a new LRU linked list to be recovered.
For example, after the reclaimable memory in the LRU4 linked list is reclaimed, the available memory space still does not satisfy the memory reclamation stop condition, and the LRU3 linked list with the lowest priority is determined from the LRU3 and the LRU2 as the LRU linked list for the current memory reclamation, and the process is repeated until the available memory space is greater than or equal to the second preset threshold.
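Putting S170 to S190 together, the loop above can be simulated minimally. All numbers here are assumptions for illustration: 100 pages of 1 MB per group, the example recovery rates from the text, and arbitrary available-memory and stop thresholds.

```python
# Example per-priority recovery rates from the text (priority 1 is highest).
RATE = {1: 0.0, 2: 0.10, 3: 0.20, 4: 0.30}
PAGE_MB = 1  # assume every page is 1 MB for simplicity

def reclaim_until(groups, available_mb, stop_threshold_mb):
    """groups: {priority: list of pages, ordered hot -> cold}.

    Scan groups from lowest to highest priority, recover each group's
    coldest pages at its rate, and stop once available memory reaches
    the stop threshold (the memory recovery stop condition).
    """
    for prio in sorted(groups, reverse=True):  # lowest priority first
        if available_mb >= stop_threshold_mb:
            break
        pages = groups[prio]
        n = int(len(pages) * RATE[prio])
        del pages[len(pages) - n:]             # drop the coldest pages
        available_mb += n * PAGE_MB
    return available_mb

groups = {p: [f"g{p}_page{i}" for i in range(100)] for p in (1, 2, 3, 4)}
# Recovering 30 pages from priority 4 and 20 from priority 3 crosses the
# 800 MB threshold, so priorities 2 and 1 are left untouched.
print(reclaim_until(groups, available_mb=760, stop_threshold_mb=800))  # 810
```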
In addition, it should be noted that, in the memory reclamation method provided in this embodiment, the process of dividing the application packet and the process of reclaiming the memory of the application packet do not have a specific execution order, and the two processes may be executed in parallel.
For example, the application priority management module monitors the running state of the application programs, divides the application groups according to the running state, and then transmits the divided application groups to the memory management module. If the memory management module is executing a memory recovery flow at that moment, the new application grouping takes effect the next time memory recovery is performed; if the memory management module is not currently executing a memory recovery flow, the new application grouping is used directly at the next memory recovery.
In the memory recovery method provided in this embodiment, the priority of the application is determined according to the running state of the application, all the applications at the same priority are divided into one application group, and for each application group, a memory page linked list is maintained, in which all the memory pages of the application group are maintained, and the memory page linked lists are sequentially sorted according to the cooling and heating degrees of the memory pages. And triggering a memory recovery process to determine a memory page linked list to be recovered when the available memory space is detected to be lower than a preset threshold, and determining and recovering the memory page to be recovered from the memory page linked list according to the sequence from cold to hot based on the memory recovery rate corresponding to the memory page linked list to be recovered.
Different from the way that the memory pages of different applications are maintained in the independent LRU linked lists separately, the memory recovery method provided by the application utilizes one LRU linked list to maintain the memory pages of all the applications with the same priority, and the memory pages in the LRU linked list are only sequentially sorted according to the cold and hot degrees, namely the memory pages of all the applications with the same priority are uniformly sorted according to the cold and hot degrees. In this way, when the memory pages of the LRU linked list are recycled, the recycled memory pages include the relatively cooler memory pages in each application, so that a phenomenon that a part of the relatively hotter memory pages applied at the same priority level are recycled, and another part of the relatively cooler memory pages are not recycled is avoided, that is, a problem of unbalanced memory recycling of a plurality of applications at the same priority level is avoided.
In this memory recovery method, one LRU linked list has one memory recovery rate, and the pages of all the applications in the list are sorted together by hotness. If the list contains a large number of relatively cold pages from one application, more of that application's pages are recovered, so its effective memory recovery rate is higher; if the list contains only a small number of relatively cold pages from another application, fewer of that application's pages are recovered, so its effective rate is lower. To a certain extent, this avoids the system thrashing that can occur when every application at the same priority is recovered at the same fixed rate, and so improves system operating efficiency.
Further, the scheme can dynamically adjust an application's priority as its running state changes, and thereby dynamically adjust its memory recovery policy. For example, when an application switches from the foreground to the background, it is moved to a lower-priority application group, and more of its memory is recovered; when an application switches from the background to the foreground, or keeps running in the foreground, it is moved to a higher-priority application group, and less of its memory is recovered. The adjusted recovery policy therefore better matches the application's running state after the change, which further improves system operating efficiency and user experience.
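A minimal sketch of this dynamic regrouping, with hypothetical application names and priority numbers (0 assumed to be the highest priority). In the actual method, the application's memory pages would also be moved to the linked list of the new group:

```python
# Hypothetical priority groups: 0 = highest priority, larger = lower.
groups = {0: {"camera"}, 1: {"music"}, 2: {"browser"}}

def move_app(app, old_prio, new_prio):
    # Move the application between groups; in the full scheme its memory
    # pages would migrate to the new group's linked list as well.
    groups[old_prio].discard(app)
    groups[new_prio].add(app)

# App switched from foreground to background: demote it, so more of its
# memory becomes eligible for recovery.
move_app("camera", 0, 2)
# App brought back to the foreground: promote it, so less is recovered.
move_app("browser", 2, 0)
print(groups)
```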
The memory recovery method provided by this application determines the priority of each application according to its running state, places all applications at the same priority into one application group, maintains one memory page linked list for each group, keeps all memory pages of the group's applications in that list, and sorts the list uniformly by the hotness of each application's pages. After a memory recovery instruction is detected, the memory page linked list to be recovered is determined, and, based on the memory recovery rate corresponding to that list, the pages to be recovered are selected and recovered in order from cold to hot. Although the embodiments above are described using the Android system as an example, this should not limit the present application. The memory recovery method provided here is also applicable to electronic devices based on other operating systems, such as HarmonyOS, iOS, or Windows. According to the requirements of a given operating system, such as its system framework and its memory recovery logic, a person skilled in the art can adapt the memory recovery logic of that operating system based on the method provided in this application, so as to achieve the same technical effect.
For example, in another operating system, the process of grouping the applications may be performed by another module having the same function as the application priority management module, and the process of recovering memory may be performed by another module having the same function as the memory management module; such variants are not enumerated here.
In the embodiments of the present application, the electronic device may be divided into functional modules according to the memory recovery method: each function may be assigned its own functional module, or two or more functions may be integrated into one processing module. An integrated module may be implemented in hardware or as a software functional module. It should be noted that the module division in the embodiments of the present application is schematic; it is only one way of dividing the logical functions, and other divisions are possible in actual implementation.
When each function is assigned its own functional module, Fig. 8 shows a possible composition of the memory recovery apparatus involved in the above embodiments. The apparatus is capable of executing the steps of any of the method embodiments shown in Fig. 4 of this application. The memory recovery apparatus is an electronic device, or a communication apparatus that supports an electronic device in implementing the methods provided in the embodiments; for example, the communication apparatus may be a chip system.
As shown in fig. 8, the memory reclamation apparatus may include: a to-be-recycled linked list determining module 310, a to-be-recycled page determining module 320 and a memory recycling module 330.
A to-be-recycled linked list determining module 310, configured to determine a to-be-recycled memory page linked list in response to a memory recycling instruction;
each memory page linked list comprises memory pages of application programs of the same application group, and each application group comprises all application programs in the same priority level.
The to-be-recycled page determining module 320 is configured to determine, based on the memory recycling rate corresponding to the to-be-recycled memory page linked list, a to-be-recycled memory page from the memory page linked list according to the sequence from cold to hot.
The memory recovery module 330 is configured to recover the memory page to be recovered.
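The cooperation of the three modules can be sketched as follows. Each helper below is annotated with the module (310, 320, 330) it corresponds to; the rate table, the group contents, and the numbering convention in which a larger number means a lower priority are all illustrative assumptions, not details from the patent:

```python
# Lower priority (larger number) -> higher recovery rate; values are assumed.
RECLAIM_RATE = {0: 0.1, 1: 0.3, 2: 0.6}

def select_list(page_lists, already_reclaimed):
    """Module 310: pick the lowest-priority group not yet recovered."""
    candidates = [p for p in page_lists if p not in already_reclaimed]
    return max(candidates) if candidates else None  # larger = lower priority

def select_pages(page_lists, prio):
    """Module 320: the coldest fraction of the list, per its recovery rate."""
    pages = page_lists[prio]  # ordered coldest -> hottest
    n = int(len(pages) * RECLAIM_RATE[prio])
    return pages[:n]

def reclaim(page_lists, prio, victims):
    """Module 330: free the selected pages from the linked list."""
    page_lists[prio] = page_lists[prio][len(victims):]
    return victims

# Two groups of ten pages each; the lowest-priority group is recovered first.
page_lists = {0: list("abcdefghij"), 2: list("qrstuvwxyz")}
prio = select_list(page_lists, already_reclaimed=set())
victims = reclaim(page_lists, prio, select_pages(page_lists, prio))
print(prio, victims)
```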
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The memory recovery device provided in the embodiments of the present application is configured to execute the memory recovery method in any one of the embodiments, so that the same effect as that of the memory recovery method in the embodiments can be achieved.
This embodiment also provides a computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to implement the memory recovery method provided in any of the foregoing embodiments.
This embodiment also provides a computer program product comprising instructions that, when run on an electronic device, cause the electronic device to implement the memory recovery method provided in any of the foregoing embodiments.
From the above description of the embodiments, a person skilled in the art will clearly understand that the division into the functional modules described above is merely an example given for convenience and brevity of description; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only one logical division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of an embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. An integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present application that in essence contributes beyond the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments. The aforementioned storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, or a magnetic or optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the present application shall fall within this protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A memory reclamation method, comprising:
in response to a memory recovery instruction, determining a memory page linked list to be recovered, wherein each memory page linked list comprises the memory pages of the application programs in a same application group, and each application group comprises all application programs at a same priority level;
determining, based on the memory recovery rate corresponding to the memory page linked list to be recovered, the memory pages to be recovered from the linked list in order from cold to hot;
and recovering the memory page to be recovered.
2. The method of claim 1, wherein the determining the linked list of memory pages to be recycled comprises:
and determining, from among the memory page linked lists that have not yet undergone memory recovery, the memory page linked list corresponding to the application group with the lowest priority as the memory page linked list to be recovered.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
determining the priority of the application program according to running state information corresponding to the application program, wherein the running state information comprises at least one of foreground running, background running, a frozen state, and foreground and background switching frequency.
4. The method according to claim 3, wherein the determining the priority of the application according to the running state information corresponding to the application comprises:
acquiring the current running state of the application program and corresponding foreground and background switching information in a historical time period;
acquiring a state score corresponding to the application program according to the current running state and the foreground and background switching information;
and determining the priority of the application program according to the state score of the application program.
5. The method according to claim 4, wherein the obtaining the state score corresponding to the application according to the current operating state and the foreground/background switching information comprises:
determining a reference score corresponding to the current running state;
calculating an adjustment score according to a weight coefficient and a score adjustment step length corresponding to the foreground and background switching information;
and adjusting the reference score by the adjustment score to obtain the state score corresponding to the application program.
6. The method of claim 5, wherein the foreground and background switching information comprises foreground and background switching frequency and time of last switching to foreground;
the calculating to obtain an adjustment score according to the weight coefficient and the score adjustment step length corresponding to the foreground and background switching information comprises:
acquiring a time difference between the moment at which the application program last switched to the foreground and the current moment, and a first weight corresponding to the time difference, wherein the time difference is negatively correlated with the first weight;
acquiring a second weight corresponding to the foreground and background switching frequency, wherein the foreground and background switching frequency is positively correlated with the second weight;
determining the sum of the first weight and the second weight as a weight coefficient corresponding to the foreground and background switching information;
and calculating the product of the weight coefficient and the score adjustment step length to obtain the adjustment score.
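Purely as an illustration of the computation recited in claims 5 and 6 (every constant below is a hypothetical assumption, not a value from this application): the weight coefficient is the sum of a time-based first weight and a frequency-based second weight, the adjustment score is that coefficient multiplied by the score adjustment step length, and the adjustment is applied to the reference score:

```python
def time_weight(seconds_since_foreground):
    # First weight: negatively correlated with the time difference (claim 6).
    return 1.0 / (1.0 + seconds_since_foreground / 60.0)

def freq_weight(switches_per_hour):
    # Second weight: positively correlated with switch frequency (claim 6).
    return min(1.0, switches_per_hour / 10.0)

def state_score(reference, seconds_since_foreground, switches_per_hour,
                step=10.0):
    # Weight coefficient = first weight + second weight (claim 6).
    coeff = time_weight(seconds_since_foreground) + freq_weight(switches_per_hour)
    adjustment = coeff * step   # coefficient x score adjustment step length
    return reference + adjustment  # adjust the reference score (claim 5)

# A recently foregrounded, frequently switched app scores higher ...
hot = state_score(reference=50.0, seconds_since_foreground=30,
                  switches_per_hour=8)
# ... than an app idle in the background for an hour.
cold = state_score(reference=50.0, seconds_since_foreground=3600,
                   switches_per_hour=1)
print(hot > cold)
```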
7. The method of any of claims 4 to 6, wherein said determining the priority of the application based on the state score of the application comprises:
determining a target score interval to which the state score of the application program belongs;
and determining the priority corresponding to the target score interval according to the mapping relation between the score interval and the priority.
8. The method according to any one of claims 4 to 7, further comprising:
for any application program, when it is determined according to the state information of the application program that the priority corresponding to the application program has changed, adjusting the application program into a target application group corresponding to the changed priority;
and updating the memory pages of the application program to the memory page linked list corresponding to the target application group.
9. The method according to any one of claims 1 to 8, further comprising:
determining a target priority of the application group corresponding to the memory page linked list to be recovered, wherein the target priority is the priority of the application programs in the application group;
querying, according to a preset mapping relationship between priority and memory recovery rate, a target memory recovery rate matching the target priority;
and determining the target memory recovery rate as the memory recovery rate corresponding to the to-be-recovered memory page linked list.
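As a purely illustrative sketch of the lookup recited in claim 9 (the mapping values below are hypothetical, and a lower priority level is assumed to map to a higher recovery rate):

```python
# Hypothetical preset mapping: larger number = lower priority = higher rate.
PRIORITY_TO_RATE = {0: 0.05, 1: 0.2, 2: 0.5, 3: 0.8}

def reclaim_rate_for(target_priority):
    # Query the preset mapping for the rate matching the target priority.
    return PRIORITY_TO_RATE[target_priority]

rates = [reclaim_rate_for(p) for p in sorted(PRIORITY_TO_RATE)]
print(rates)  # the rate rises as the priority level falls
```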
10. The method of any of claims 1 to 9, wherein the memory recovery rate is negatively correlated with the priority level.
11. An electronic device, comprising: a memory and one or more processors, wherein,
the memory is used for storing one or more programs;
the processor is configured to execute the one or more programs, so that the electronic device executes the memory recovery method according to any one of claims 1 to 10.
12. A computer-readable storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the memory reclamation method of any of claims 1-10.
CN202110857483.7A 2021-07-28 2021-07-28 Memory recovery method and device Active CN113778662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857483.7A CN113778662B (en) 2021-07-28 2021-07-28 Memory recovery method and device


Publications (2)

Publication Number Publication Date
CN113778662A true CN113778662A (en) 2021-12-10
CN113778662B CN113778662B (en) 2022-12-06

Family

ID=78836226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110857483.7A Active CN113778662B (en) 2021-07-28 2021-07-28 Memory recovery method and device

Country Status (1)

Country Link
CN (1) CN113778662B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110087777A1 (en) * 2009-10-09 2011-04-14 Sony Corporation Information-processing device, information-processing method, and program
CN104008061A (en) * 2013-02-22 2014-08-27 华为技术有限公司 Internal memory recovery method and device
US20160055097A1 (en) * 2014-08-19 2016-02-25 Yang Seok KI Heterogeneous unified memory
CN111078586A (en) * 2019-12-10 2020-04-28 Oppo(重庆)智能科技有限公司 Memory recovery method and device, storage medium and electronic equipment
CN111831440A (en) * 2020-07-01 2020-10-27 Oppo广东移动通信有限公司 Memory recovery method and device, storage medium and electronic equipment
US20200349067A1 (en) * 2019-05-05 2020-11-05 Microsoft Technology Licensing, Llc Memory management for serverless databases


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Jiawei: "Memory Management Optimization in Virtualization Systems" (虚拟化系统中的内存管理优化), China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130947A (en) * 2023-01-31 2023-11-28 荣耀终端有限公司 Memory management method and electronic equipment
CN117130767A (en) * 2023-02-08 2023-11-28 荣耀终端有限公司 Method for recycling memory, electronic equipment and storage medium
CN116107925A (en) * 2023-04-10 2023-05-12 阿里云计算有限公司 Data storage unit processing method
CN116107925B (en) * 2023-04-10 2023-09-26 阿里云计算有限公司 Data storage unit processing method
CN116185890A (en) * 2023-04-23 2023-05-30 荣耀终端有限公司 Memory recycling method and electronic equipment
CN116185890B (en) * 2023-04-23 2023-09-19 荣耀终端有限公司 Memory recycling method and electronic equipment

Also Published As

Publication number Publication date
CN113778662B (en) 2022-12-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant