CN110377527B - Memory management method and related equipment

Publication number
CN110377527B
CN110377527B
Authority
CN
China
Prior art keywords
memory
application
continuous
switching
defragmentation
Prior art date
Legal status
Active
Application number
CN201810333058.6A
Other languages
Chinese (zh)
Other versions
CN110377527A (en)
Inventor
李刚
唐城开
韦行海
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201810333058.6A
Priority to PCT/CN2019/082098 (WO2019196878A1)
Publication of CN110377527A
Application granted
Publication of CN110377527B

Classifications

    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory, in block erasable memory, e.g. flash memory
    • G06F12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G06F12/0276 Generational garbage collection

Abstract

The present application provides a memory management method and related device that actively perform memory defragmentation based on application-scene and continuous-memory demand prediction, so as to meet the demands of different application scenes for continuous memory, reduce the waiting time of memory allocation, and improve application running efficiency. The method comprises the following steps: the terminal device obtains the switching probability of switching from a currently running first application scene to each of one or more second application scenes; it then determines a target continuous memory according to the continuous memory required by the second application scenes whose switching probability satisfies a preset condition; if the continuous memory available on the terminal device is smaller than the target continuous memory, the terminal device performs memory defragmentation before switching from the first application scene to any second application scene, so that the continuous memory available on the terminal device becomes larger than the target continuous memory.

Description

Memory management method and related equipment
Technical Field
The present application relates to the field of computers, and in particular, to a method for memory management and related devices.
Background
Physical memory fragmentation, that is, discontinuity of memory pages, has long been one of the major problems faced by operating systems, and most of the memory used by typical applications at run time needs to be continuous. To address physical memory fragmentation, the prior art generally uses a memory management algorithm, such as the defragmentation algorithm of the Buddy system in Linux, to consolidate memory fragments into continuous memory so as to meet the memory requirements of applications.
Existing memory management algorithms fall mainly into two categories: synchronous memory defragmentation (memory defragmentation) algorithms and asynchronous memory defragmentation algorithms. A synchronous memory defragmentation algorithm triggers defragmentation during memory allocation for an application, when the continuous memory available in the system cannot meet the application's requirement. An asynchronous memory defragmentation algorithm triggers defragmentation when the continuous memory available in the system falls below a set threshold.
It can be seen that existing memory management algorithms rely on specific events to passively trigger memory defragmentation. For example, the synchronous memory defragmentation algorithm triggers defragmentation when the system cannot allocate continuous memory for the current application, so memory allocation can be completed only after the system has released memory and consolidated it into continuous memory; this greatly increases the waiting time of memory allocation and affects the running efficiency of the current application. The asynchronous memory defragmentation algorithm triggers defragmentation when the available continuous memory of the system falls below a preset threshold and stops once the available continuous memory is back above the threshold; when an application needs a large amount of continuous memory, asynchronous defragmentation cannot satisfy the requirement in time, so synchronous defragmentation is entered, which again causes long memory allocation times and affects the running efficiency of the application.
Disclosure of Invention
The present application provides a memory management method and related device that actively perform memory defragmentation based on application-scene and continuous-memory demand prediction, so as to meet the demands of different application scenes for continuous memory, reduce the waiting time of memory allocation, and improve application running efficiency.
In view of this, a first aspect of the present application provides a method for memory management, which may include:
The terminal device obtains the switching probability of switching from a currently running first application scene to each of one or more second application scenes, where a plurality of second application scenes means two or more. The terminal device then determines a target continuous memory according to the switching probabilities that satisfy a preset condition and the continuous memory required by each second application scene whose switching probability satisfies the preset condition. If the continuous memory available on the terminal device is smaller than the target continuous memory, the terminal device performs memory defragmentation according to the target continuous memory before switching from the first application scene to any one of the one or more second application scenes, so that the continuous memory available on the terminal device becomes larger than the target continuous memory.
It should be noted that, in the embodiments of the present application, when the continuous memory available on the terminal device is equal to the target continuous memory, memory defragmentation may or may not be performed; this is not limited here and depends on the actual design requirement.
In the embodiments of the present application, the application scene to which the terminal device will switch is predicted: the switching probability of switching from the first application scene to each second application scene is obtained, the target continuous memory is determined according to the switching probabilities that satisfy the preset condition and the continuous memory required by the corresponding second application scenes, and the terminal device then actively performs memory defragmentation so that the continuous memory available on the terminal device is larger than the target continuous memory. This ensures that enough continuous memory can be allocated when the terminal device switches application scenes, without waiting at the moment of the switch, and improves the efficiency of switching application scenes.
With reference to the first aspect of the present application, in a first implementation manner of the first aspect of the present application, the performing, by the terminal device, memory defragmentation may include:
The terminal device obtains the system load and determines a memory defragmentation algorithm according to the range in which the system load falls, and then performs memory defragmentation on the memory of the terminal device according to that algorithm so as to increase the continuous memory available on the terminal device.
In the embodiments of the present application, the memory defragmentation algorithm is determined from the range in which the system load of the terminal device falls and can be adjusted dynamically as the load changes, so that the memory resources of the terminal device are used reasonably, the influence of defragmentation on the application scene currently running on the terminal device is reduced, and the efficiency of switching application scenes is improved.
With reference to the first implementation manner of the first aspect of the present application, in a second implementation manner of the first aspect of the present application, the determining, by the terminal device, a memory defragmentation algorithm according to a range where a system load is located includes:
if the system load is in a first preset range, the terminal equipment determines that the memory defragmentation algorithm is a deep memory defragmentation algorithm; if the system load is in a second preset range, the terminal equipment determines that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or if the system load is in a third preset range, the terminal equipment determines that the memory defragmentation algorithm is a mild memory defragmentation algorithm.
In the embodiments of the present application, a suitable memory defragmentation algorithm is determined according to the system load of the terminal device and adjusted dynamically; the candidates include a deep memory defragmentation algorithm, a moderate memory defragmentation algorithm, and a mild memory defragmentation algorithm. In this way, available continuous memory is freed up while the influence on the application scene running on the terminal device is reduced, improving the efficiency of switching application scenes. For example, when the load of the terminal device is high, defragmentation can be performed with the mild algorithm to avoid affecting the running application scene; when the load is low, defragmentation can be performed with the deep algorithm to make reasonable use of the terminal device's resources without affecting other processes or application scenes.
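As an illustrative sketch only (not the actual implementation of this embodiment), the mapping from system load to defragmentation depth might look as follows in C; the load thresholds LOAD_LOW and LOAD_HIGH, the load metric, and the enum names are assumptions introduced for illustration, since the embodiment only speaks of first, second, and third preset ranges.

    #include <stdio.h>

    /* Hypothetical defragmentation depths corresponding to the deep,
     * moderate and mild algorithms described above. */
    enum defrag_level { DEFRAG_DEEP, DEFRAG_MODERATE, DEFRAG_MILD };

    /* Assumed load thresholds (fraction of CPU busy); the embodiment only
     * speaks of first/second/third preset ranges without giving values. */
    #define LOAD_LOW  0.30
    #define LOAD_HIGH 0.70

    /* Pick a defragmentation algorithm from the current system load:
     * low load  -> deep defragmentation (plenty of spare resources),
     * mid load  -> moderate defragmentation,
     * high load -> mild defragmentation (avoid disturbing running apps). */
    static enum defrag_level choose_defrag_level(double system_load)
    {
        if (system_load < LOAD_LOW)
            return DEFRAG_DEEP;
        if (system_load < LOAD_HIGH)
            return DEFRAG_MODERATE;
        return DEFRAG_MILD;
    }

    int main(void)
    {
        printf("load 0.20 -> level %d\n", choose_defrag_level(0.20)); /* deep */
        printf("load 0.85 -> level %d\n", choose_defrag_level(0.85)); /* mild */
        return 0;
    }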
With reference to the first aspect of the present application, the first implementation manner of the first aspect of the present application, or the second implementation manner of the first aspect of the present application, in a third implementation manner of the first aspect of the present application, the obtaining a switching probability of a terminal device from a first application scenario to one or more second application scenarios may include:
acquiring historical switching times of switching the terminal equipment from the first application scene to each of the one or more second application scenes; and the terminal equipment determines the switching probability of switching from the first application scene to each of the one or more second application scenes according to the historical switching times.
Specifically, in the embodiment of the application, the terminal device firstly acquires the historical switching times of switching from the first application scene to one or more second application scenes, and then calculates the switching probability of switching to each second application scene according to the historical times. The switching probability can be used for predicting the continuous memory required by switching the application scene and actively performing memory defragmentation according to the continuous memory, so that the available continuous memory on the terminal equipment meets the continuous memory required by switching the application scene, and the efficiency of switching the application scene of the terminal equipment is improved.
With reference to the first aspect of the present application or any one of the first to third implementation manners of the first aspect, in a fourth implementation manner of the first aspect, determining the target continuous memory according to the switching probabilities that satisfy a preset condition and the continuous memory required by each second application scene whose switching probability satisfies the preset condition may include:
the terminal equipment determines a second application scene with the switching probability larger than a threshold value from the one or more second application scenes; and the terminal equipment determines the target continuous memory according to the continuous memory required by the second application scene of which the switching probability is larger than the threshold value.
In the embodiments of the present application, the second application scenes whose switching probability is not greater than the threshold are filtered out, and the target continuous memory is then determined according to the continuous memory required by the second application scenes whose switching probability is greater than the threshold. The terminal device frees up available continuous memory larger than the target continuous memory, so that the continuous memory available on the terminal device can cover the continuous memory required by the second application scene to be switched to.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, if it is determined that there are a plurality of second application scenes whose switching probability is greater than the threshold among the one or more second application scenes, the determining, by the terminal device, of the target continuous memory according to the continuous memory required by the second application scenes whose switching probability is greater than the threshold may include:
The terminal device performs a weighted operation on the switching probability and the required continuous memory of each of the second application scenes whose switching probability is greater than the threshold to obtain the target continuous memory. The weight in the weighted operation may correspond to the switching probability of each of these second application scenes; for example, the higher the switching probability, the greater the weight.
In the embodiments of the present application, by performing a weighted operation on the switching probability and the required continuous memory of each second application scene whose switching probability is greater than the threshold, the target continuous memory is made closer to the continuous memory actually required by the second application scene to which the terminal device will switch, which further ensures that the continuous memory required for switching to that second application scene is available.
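A minimal sketch of one possible reading of this weighted operation, assuming that each candidate scene's weight is its switching probability normalized over the scenes whose probability exceeds the threshold; the exact weighting formula is not fixed by this embodiment, and the structure, field names, and sample numbers below are illustrative assumptions.

    #include <stdio.h>
    #include <stddef.h>

    /* One candidate second application scene: its switching probability and
     * the continuous memory (in KB) it needs; field names are illustrative. */
    struct scene_demand {
        double switch_prob;
        size_t needed_kb;
    };

    /* Weighted estimate of the target continuous memory: only scenes whose
     * switching probability exceeds `threshold` contribute, and each weight
     * is the scene's probability normalized over the selected scenes. */
    static size_t target_contiguous_kb(const struct scene_demand *s, size_t n,
                                       double threshold)
    {
        double prob_sum = 0.0, weighted = 0.0;

        for (size_t i = 0; i < n; i++)
            if (s[i].switch_prob > threshold)
                prob_sum += s[i].switch_prob;

        if (prob_sum == 0.0)
            return 0;   /* no candidate passed the filter */

        for (size_t i = 0; i < n; i++)
            if (s[i].switch_prob > threshold)
                weighted += (s[i].switch_prob / prob_sum) * (double)s[i].needed_kb;

        return (size_t)(weighted + 0.5);
    }

    int main(void)
    {
        /* Example numbers, not taken from this embodiment. */
        struct scene_demand scenes[] = {
            { 0.50, 500 }, { 0.30, 1000 }, { 0.05, 4000 },
        };
        printf("target = %zu KB\n",
               target_contiguous_kb(scenes, 3, 0.10)); /* 0.05 is filtered out -> 688 KB */
        return 0;
    }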
With reference to the fourth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the determining, by the terminal device, of the target continuous memory according to the continuous memory required by the one or more second application scenes whose switching probability is greater than the threshold includes:
The terminal device determines, from the one or more second application scenes whose switching probability is greater than the threshold, a target application scene with the largest required continuous memory, and takes the continuous memory required by that target application scene as the target continuous memory.
In the embodiments of the present application, the largest continuous memory required among the one or more second application scenes whose switching probability is greater than the threshold can be used as the target continuous memory, so that the continuous memory required by each of those second application scenes is satisfied and the efficiency of switching application scenes on the terminal device is improved.
With reference to the first aspect of the present application or any one of the first to sixth implementation manners of the first aspect, in a seventh implementation manner of the first aspect, the method may further include:
when the terminal device switches from the first application scene to one of the one or more second application scenes and the continuous memory available on the terminal device does not meet the continuous memory required by that second application scene, the terminal device performs memory defragmentation using a mild memory defragmentation algorithm.
In the embodiments of the present application, when the terminal device experiences unexpected continuous-memory consumption or the available continuous memory on the terminal device falls below the target continuous memory, the terminal device can still perform mild memory defragmentation at the moment of switching to the second application scene, quickly freeing up available continuous memory and ensuring that the application scene can be switched to normally.
A second aspect of an embodiment of the present application provides a terminal device, where the terminal device has a function of implementing the method for memory management in the first aspect. The functions can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
A third aspect of an embodiment of the present application provides a terminal device, which may include:
the device comprises a processor, a memory, a bus and an input/output interface, wherein the processor, the memory and the input/output interface are connected through the bus; the memory is used for storing program codes; the processor, when calling the program code in the memory, performs the steps performed by the terminal device provided in the first aspect or any implementation manner of the first aspect.
A fourth aspect of the embodiments of the present application provides a storage medium. It should be noted that the technical solution, in essence, or the part thereof that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium for storing the computer software instructions used by the above device, and includes a program designed for the terminal device to execute the method according to any one of the first aspect to the second aspect.
The storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or another medium capable of storing program code.
A fifth aspect of the embodiments of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform a method as described in any of the alternative embodiments of the first or second aspects of the present application.
In the embodiments of the present application, if the terminal device is currently running a first application scene, it can obtain the switching probability of switching from the first application scene to each of one or more second application scenes, determine a target continuous memory according to the switching probabilities that satisfy a preset condition and the continuous memory required to start and run the corresponding second application scenes, and then actively perform memory defragmentation before switching from the first application scene to any second application scene, so that the available continuous memory on the terminal device is larger than the target continuous memory. This guarantees the continuous memory required by the second application scene to be switched to and improves the efficiency of switching to and running that second application scene.
Drawings
FIG. 1 is a schematic view of a scenario in which an embodiment of the present application is specifically applied;
FIG. 2 is a schematic diagram of a memory page according to an embodiment of the present application;
FIG. 3 is a block diagram of a method of memory management according to an embodiment of the present application;
FIG. 4 is a flow chart of a method for memory management according to an embodiment of the application;
FIG. 5 is a flow chart of a method for memory management according to an embodiment of the application;
FIG. 6 is a schematic diagram of the buddy algorithm in an embodiment of the present application;
FIG. 7 is a flow chart of a method for memory management according to an embodiment of the application;
FIG. 8 is a schematic diagram of a mild memory defragmentation according to an embodiment of the application;
FIG. 9 is a schematic diagram of a moderate memory defragmentation in an embodiment of the application;
FIG. 10 is a schematic diagram of deep memory defragmentation according to an embodiment of the application;
FIG. 11 is a schematic diagram of a specific scenario of a method for memory management according to an embodiment of the present application;
fig. 12 is a schematic diagram of an embodiment of a terminal device in an embodiment of the present application;
fig. 13 is a schematic diagram of another embodiment of a terminal device in an embodiment of the present application.
Detailed Description
The present application provides a memory management method and related device that actively perform memory defragmentation (memory defragmentation) based on application-scene and continuous-memory demand prediction, so as to meet the demands of different application scenes for continuous memory, reduce the waiting time of memory allocation, and improve application running efficiency.
The memory management method provided by the embodiments of the present application can be applied to a terminal device, where the terminal device may be a mobile phone, a tablet computer, an in-vehicle mobile device, a personal digital assistant (PDA), a camera, or a wearable device. Of course, the following embodiments place no limitation on the specific form of the terminal device. The operating system carried on the terminal device may include Android or another operating system, which is not limited in any way by the embodiments of the present application.
Taking a terminal device 100 carrying the Android operating system as an example, as shown in FIG. 1, the terminal device 100 can be logically divided into a hardware layer 21, an operating system 161, and an application layer 31. The hardware layer 21 includes hardware resources such as an application processor 101, a microcontroller unit 103, a modem 107, a Wi-Fi module 111, a sensor 114, and a positioning module 150. The application layer 31 includes one or more application programs, such as application program 163, which may be any type of application, for example a social application, an e-commerce application, or a browser. The operating system 161, as software middleware between the hardware layer 21 and the application layer 31, is a computer program that manages and controls hardware and software resources.
In one embodiment, the operating system 161 includes a kernel 23, a hardware abstraction layer (HAL) 25, libraries and runtime (libraries and runtime) 27, and a framework 29. The kernel 23 provides underlying system components and services, such as power management, memory management, thread management, and hardware drivers; the hardware drivers include a Wi-Fi driver, a sensor driver, a positioning module driver, and the like. The hardware abstraction layer 25 is a wrapper around the kernel drivers that provides an interface to the framework 29 and hides the implementation details of the lower layers. The hardware abstraction layer 25 runs in user space, while the kernel drivers run in kernel space.
The library and runtime 27, also called a runtime library, provides the executable with the required library files and execution environment at runtime. Library and Runtime 27 includes Android Runtime (ART) 271, library 273, and the like. ART 271 is a virtual machine or virtual machine instance capable of converting the bytecode of an application into machine code. Library 273 is a library that provides support for executable programs at runtime, including browser engines (such as webkit), script execution engines (such as JavaScript engines), graphics processing engines, and the like.
The framework 29 is used to provide various underlying common components and services, such as window management and location management, for the applications in the application layer 31. The framework 29 may include a phone manager 291, a resource manager 293, a location manager 295, and the like.
The functions of the respective components of the operating system 161 described above can be realized by the application processor 101 executing programs stored in the memory 105.
Those skilled in the art will appreciate that terminal 100 may include fewer or more components than are shown in fig. 1, and that the terminal device shown in fig. 1 includes only components that are more relevant to the various implementations disclosed in embodiments of the present application.
As can be seen from FIG. 1, a plurality of application programs (referred to simply as "applications") can be installed on the terminal device, and the terminal device can switch between these applications or between multiple scenes of one application, such as different functions or interfaces of the application. When the terminal device performs an application scene switch, whether between applications or between scenes within one application, the memory of the terminal device is involved in the operation of multiple modules, including memory allocation, memory reading and writing, and so on. Starting and running each application scene requires continuous memory.
Here, a specific application scenario is taken as an example. Applications such as a browser, shopping software, and game software are installed on the terminal device. The terminal device may switch among multiple application scenes, including between applications installed on the terminal device or between individual scenes within an application, such as functions or user interfaces. For example, the browser switches to the shopping software, the shopping software switches to the photographing software, or the photographing scene within the photographing software switches to a photo preview scene. When the terminal device switches application scenes, the memory of the terminal device also changes, and starting and running each application scene requires memory.

The memory may be divided in several ways. Taking the Linux system as an example, physical memory is divided into memory pages of a fixed size, where one memory page may be 4 KB, and running a typical application scene requires continuous memory, that is, continuous memory pages. Memory pages can be divided into movable memory pages, non-movable memory pages, and reclaimable pages. A movable memory page can be moved at will: the data stored on it can be moved to another memory page. Pages occupied by applications in user space generally belong to movable pages, since applications are mapped to memory pages through page tables; only the page table entry needs to be updated and the data stored on the original memory page copied to the target memory page. One memory page may also be shared by multiple processes and thus correspond to multiple page table entries. After long-term allocation and release, some pages in physical memory become non-movable: their positions in memory are fixed and they cannot be moved elsewhere. Most pages allocated by the kernel belong to non-movable pages, and the number of non-movable pages grows the longer the system runs. A reclaimable page cannot be moved directly but can be reclaimed: the application can rebuild its data on another memory page, and the data on the original memory page can then be reclaimed. Typically, reclaimable pages are reclaimed by a memory reclamation process preset by the system. For example, memory occupied by the data of mapped files may belong to reclaimable pages, and the kswapd process in the Linux system can periodically reclaim reclaimable pages according to preset rules.
When the terminal device switches application scenes, continuous memory is used. As the system runs, physical memory becomes occupied. For example, as shown in FIG. 2, a section of memory may contain various memory pages, including free memory pages, reclaimable memory pages, non-movable memory pages, and so on. Therefore, to ensure that application scenes run normally on the terminal device, the memory needs to be defragmented to obtain continuous free memory. Memory defragmentation is the process of reducing the number of memory fragments; it mainly consists of moving movable memory pages and reclaiming or migrating reclaimable pages so as to obtain free memory with continuous physical addresses.
Therefore, to ensure that enough continuous memory is available on the terminal device, the embodiments of the present application improve part of the operating system, taking the terminal device described in the preceding figure as an example. The improvement may involve the memory part of the operating system and the parts required for application running, and also involves the application layer of the terminal device, in particular the switching process between applications. In addition, the functional modules described in FIG. 1 are only some of the modules; in practice the terminal device may include many modules related to memory, application running, and application switching, which are not limited here. The specific improvement provided by the memory management method in the embodiments of the present application includes predicting, before an application scene switch, the application scene to be switched to and the continuous memory it requires, and actively performing memory defragmentation to free up enough continuous memory, so that when the terminal device switches the application scene, enough continuous memory can be allocated for it. A framework of the memory management method in the embodiments of the present application is shown in FIG. 3.
Before an application is started or an application scene is switched, the terminal device can predict, by calculation or learning, the target continuous memory required by the application scene to be switched to. The terminal device then performs memory defragmentation according to the target continuous memory so that the continuous memory available on the terminal device is larger than the target continuous memory. An "application scene" in the embodiments of the present application may be an application on the terminal device, or a scene within a certain application on the terminal device, such as a function or a user interface of the application. That is, switching application scenes in the present application may be switching between applications on the terminal device or switching between scenes inside one application, which is not limited here. Before the terminal device switches the application scene, continuous memory is allocated for the application scene to be switched to, so that the continuous memory required by that application scene is met and the application scene can run normally. The memory management method provided by the present application guarantees the continuous-memory requirement of the application scene to be switched to: before the switch, the terminal device predicts the target continuous memory required by that application scene and then performs memory defragmentation. Therefore, in the embodiments of the present application, the continuous memory required by the application scene to be switched to can be prepared before the terminal device performs the switch, which guarantees the continuous memory required for running that application scene and improves the efficiency of application scene switching.
Further, as shown in fig. 4, a flow chart of a method for memory management in an embodiment of the present application includes:
401. Acquire the switching probability of switching from the first application scene to each of the one or more second application scenes.
The terminal device may have one or more applications installed on it, where "a plurality" means two or more, and each application may include multiple application scenes. The terminal device may switch among these application scenes at run time. If the terminal device is currently running a first application scene, it may obtain the switching probability of switching from the first application scene to each of one or more second application scenes; the switching probability may be obtained, for example, from statistics of application scene switches previously performed on the terminal device or by deep learning. For example, if the terminal device is currently running a browser, i.e., the first application scene, the terminal device may obtain the switching probability of switching from the browser to each of one or more second application scenes such as a camera, a game, or shopping software; for instance, the probability of switching from the browser to the camera is 15%, to the shopping software 23%, and to the game 3%. As another example, for switching among scenes inside an application, when a chat conversation scene of WeChat is running on the terminal device, the probability of switching to the WeChat red packet scene may be 30%, the probability of switching to the friend-circle scene 50%, and so on.
In the embodiments of the present application, the first application scene and the second application scenes may belong to different applications, or may be different scenes within the same application; this can be adjusted according to actual design requirements and is not limited here.
402. Acquire the continuous memory required by each of the one or more second application scenes.
Besides the switching probability of switching from the first application scene to each of the one or more second application scenes, the terminal device also needs to obtain the continuous memory required by each of the one or more second application scenes. For example, running a game may require 60 KB of continuous memory, running a camera may require 500 KB of continuous memory, and so on. Alternatively, only the continuous memory required by the second application scenes whose switching probability is greater than a preset value may be obtained.
It should be understood that, in the embodiments of the present application, the execution order of step 401 and step 402 is not limited: step 401 may be performed first, or step 402 may be performed first.
403. Determine the target continuous memory according to the switching probabilities and the continuous memory required by each second application scene.
After the switching probabilities of switching to the second application scenes are determined, the switching probabilities that satisfy the preset condition are identified, together with the continuous memory required by each second application scene whose switching probability satisfies the preset condition. The terminal device can then calculate the target continuous memory from these switching probabilities and the corresponding continuous-memory requirements. Before switching to any of these second application scenes, the terminal device can perform memory defragmentation so that the continuous memory available on it is larger than the target continuous memory; in this way, when the terminal device switches application scenes, enough continuous memory can be allocated, guaranteeing the continuous memory required for switching from the first application scene to one of the second application scenes.
The target continuous memory may be calculated as follows: first, the second application scenes whose switching probability is greater than the threshold are determined; if there are at least two such scenes, a weighted operation is performed on the switching probability and the required continuous memory of each of them to obtain the target continuous memory. Alternatively, the terminal device may determine the largest continuous memory required among the second application scenes whose switching probability is greater than the threshold and use that maximum as the target continuous memory. The choice can be adjusted according to actual design requirements and is not limited here.
404. Judge whether the available continuous memory on the terminal device is larger than the target continuous memory.
After the terminal device determines the target continuous memory, the terminal device determines whether the available continuous memory is larger than the target continuous memory, i.e. determines whether the size of the continuous memory available on the terminal device is larger than the size of the target continuous memory. If the available contiguous memory is greater than the target contiguous memory, step 405 is performed, and if the available contiguous memory is not greater than the target contiguous memory, step 406 is performed. The available continuous memory is a continuous memory which can be allocated to the second application scene on the terminal equipment.
405. Perform other steps.
If the available continuous memory on the terminal device is larger than the target continuous memory, the available continuous memory on the terminal device can ensure the continuous memory required by switching from the first application scene to one of the second application scenes. At this time, the terminal device may perform memory defragmentation, or may not perform memory defragmentation, and may specifically be adjusted according to actual design requirements, which is not limited herein.
406. Perform memory defragmentation.
If the continuous memory available on the terminal device is smaller than or equal to the target continuous memory, then in order to guarantee the continuous memory required for switching from the first application scene to one of the second application scenes, the terminal device needs to perform memory defragmentation before the switch, so that the available continuous memory on the terminal device is not smaller than the target continuous memory and enough continuous memory can be allocated when the terminal device switches to the second application scene. The specific defragmentation step may be to sort the memory pages on the terminal device, moving movable pages or reclaiming reclaimable pages to free up idle continuous memory, which can then be allocated to the switched-to application scene when the terminal device switches application scenes.
It should be noted that, in the embodiment of the present application, if the continuous memory available on the terminal device is equal to the target continuous memory, the memory defragmentation may be performed, or may not be performed, and specifically, the memory defragmentation may be adjusted according to the actual design requirement, which is not limited herein.
In the embodiments of the present application, the switching probability of switching from the first application scene to each of the one or more second application scenes is determined first, and the target continuous memory is then calculated from the switching probabilities and the required memory. Memory defragmentation is performed according to the target continuous memory so that the continuous memory available on the terminal device is not smaller than the target continuous memory, and enough continuous memory is available when the terminal device switches from the first application scene to one of the one or more second application scenes. This improves the efficiency of switching from the first application scene to one of the second application scenes.
The foregoing describes the flow of the memory management method in the embodiments of the present application; the flow is further described below. Referring to FIG. 5, another embodiment of the memory management method in the embodiments of the present application includes:
501. Start the first application scene.
The first application scene is the application scene currently running on the terminal device. Once the first application scene has started and is running normally, the terminal device can predict the application scene it will switch to and prepare enough continuous memory in advance, so that the switched-to application scene has enough continuous memory to use and the efficiency of switching application scenes is improved.
502. Collect application scene switching data.
The terminal device may collect data on application scene switches, for example the number of times the terminal device switched from application scene A to application scene B, or the number of times application scene A switched to application scene C in the preceding 24 hours, and so on.
A specific implementation is to insert a switch-counting variable into the scene start function of the terminal device and count each application scene switch, for example a switch from the WeChat chat scene to the friend-circle scene, and so on.
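A sketch of the switch-counting variable mentioned above, under the assumption that scenes are identified by small integer indices: a hypothetical hook called from the scene start function increments the corresponding cell of the association matrix (see Table 1 below). Function and variable names are illustrative.

    #include <stdio.h>

    #define NUM_SCENES 6   /* assumed number of tracked application scenes */

    /* switch_count[from][to] counts how often scene `from` switched to `to`;
     * this corresponds to the application scene association matrix of Table 1. */
    static unsigned switch_count[NUM_SCENES][NUM_SCENES];
    static int current_scene = -1;

    /* Hypothetical hook inserted into the scene start function: record one
     * switch from the currently running scene to the scene being started. */
    void on_scene_start(int new_scene)
    {
        if (current_scene >= 0 && current_scene != new_scene)
            switch_count[current_scene][new_scene]++;
        current_scene = new_scene;
    }

    int main(void)
    {
        on_scene_start(0);          /* scene A starts */
        on_scene_start(1);          /* A -> B         */
        on_scene_start(0);          /* B -> A         */
        on_scene_start(1);          /* A -> B again   */
        printf("A->B: %u, B->A: %u\n",
               switch_count[0][1], switch_count[1][0]);  /* 2, 1 */
        return 0;
    }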
503. Determine the association relationship between application scenes.
After the numbers of application scene switches are collected, the association relationship between application scenes can be determined from the switch counts, and an application scene association matrix can be generated; the application scene to be switched to, that is, the switching probability of switching to each second application scene, can then be determined from the application scene association matrix. As an example of how the switching probability is calculated: if there were 50 switches from application scene A to application scene B, 30 switches from application scene A to application scene C, and 20 switches from application scene A to application scene D, and application scene A (the first application scene) is currently running, then the switching probability of switching to application scene B is 50%, to application scene C 30%, and to application scene D 20%. Application scenes B, C, and D here correspond to the one or more second application scenes in FIG. 4.
For example, in the preceding 24 hours, the number of switches from application scene A to application scene B was 500, and the number of switches from application scene A to application scene C was 100. In practice, if the terminal device is currently running application scene A, the probability of switching from application scene A to application scene B is then greater than the probability of switching from application scene A to application scene C. The application scene association matrix may be as shown in Table 1 below:
From \ To             Scene A   Scene B   Scene C   Scene D   Scene E   Scene F
Application scene A      -        100        10        20         0         1
Application scene B      5         -         20        30        50         0
Application scene C     80        20         -          0         2        13
Application scene D     10         6         2          -        22         6
Application scene E      0        20         0          0         -         0
Application scene F      0         0        30          0         0         -
TABLE 1
The application scene association matrix represents the number of times the device has switched from one application scene to another. For example, in the first row of Table 1, the number of switches from application scene A to application scene B is 100, to application scene C 10, to application scene D 20, to application scene E 0, and to application scene F 1, so the probability of switching from application scene A to each of the other application scenes can be calculated; the probability of the terminal device switching from application scene A to application scene B is greater than the probability of switching to application scene C, D, E, or F. When application scene switching data are collected, the application scene association matrix can be updated on every switch, so that application switching on the terminal device is learned and recorded in real time. Specifically, when an existing record is updated, the average of the existing record and the new data may be taken, or the record may be updated by a weighted operation; this is not limited here.
In addition, when determining the association relationship between application scenes, the relationship can be updated on every switch, so that the terminal device predicts the application scene to be switched to from more historical switching data. The more historical switching data there are, the higher the accuracy of the terminal device's switching prediction; by continuously updating the application scene association relationship, the terminal device improves the prediction accuracy, so that the available continuous memory it frees up can meet the continuous memory required for switching application scenes.
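Given one row of the association matrix, the switching probabilities can be obtained by normalizing the counts in that row, as sketched below; the row values follow the example for application scene A in Table 1 (with the A-to-A entry taken as 0), and the function name is illustrative.

    #include <stdio.h>

    #define NUM_SCENES 6

    /* Row of the association matrix for the currently running scene
     * (scene A in Table 1): counts of switches to scenes A..F,
     * with the self entry (A -> A) taken as 0. */
    static const unsigned row_a[NUM_SCENES] = { 0, 100, 10, 20, 0, 1 };

    /* Turn one matrix row into switching probabilities by dividing each
     * count by the row total. */
    static void row_to_probabilities(const unsigned *row, double *prob)
    {
        unsigned total = 0;
        for (int i = 0; i < NUM_SCENES; i++)
            total += row[i];
        for (int i = 0; i < NUM_SCENES; i++)
            prob[i] = total ? (double)row[i] / total : 0.0;
    }

    int main(void)
    {
        double prob[NUM_SCENES];
        row_to_probabilities(row_a, prob);
        for (int i = 0; i < NUM_SCENES; i++)
            printf("P(A -> scene %c) = %.3f\n", 'A' + i, prob[i]);
        return 0;   /* e.g. P(A -> B) = 100 / 131, about 0.763 */
    }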
504. Collect the continuous memory of application scenes.
Besides collecting the numbers of application scene switches and determining the switching probabilities, the continuous memory required for running each application scene also needs to be collected, namely the continuous-memory requirement during the start-up of each application scene and the continuous-memory requirement while the application scene is running; the continuous memory needed by the next application scene can then be identified from the continuous-memory requirement of each application scene.
A specific implementation is to insert a counting variable into the memory allocation function of the terminal device and count the continuous memory allocated each time; the count is sampled each time an application scene prepares to enter, finishes entering, and exits. The difference between the counts sampled when entering is complete and when the scene prepares to enter is the continuous memory required to start the application scene, and the difference between the counts at exit and at preparation to enter is the overall continuous-memory requirement of the application scene.
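A sketch of the sampling just described, assuming a single running counter of allocated continuous memory that is updated inside the memory allocation function: the counter is sampled at prepare-to-enter, enter-complete, and exit, and the two differences give the start-up and overall demands. All names and the KB unit are illustrative assumptions.

    #include <stdio.h>
    #include <stddef.h>

    /* Running total of continuous memory (in KB) handed out by the
     * allocator; in a real system this would be updated inside the
     * memory allocation function itself. */
    static size_t allocated_kb;

    /* Counter samples taken at the three points mentioned above. */
    struct scene_mem_record {
        size_t at_prepare_enter;
        size_t at_enter_complete;
        size_t at_exit;
    };

    static void on_prepare_enter(struct scene_mem_record *r)  { r->at_prepare_enter  = allocated_kb; }
    static void on_enter_complete(struct scene_mem_record *r) { r->at_enter_complete = allocated_kb; }
    static void on_exit_scene(struct scene_mem_record *r)     { r->at_exit           = allocated_kb; }

    int main(void)
    {
        struct scene_mem_record rec = { 0 };

        on_prepare_enter(&rec);
        allocated_kb += 100;          /* allocations made while starting */
        on_enter_complete(&rec);
        allocated_kb += 400;          /* allocations made while running  */
        on_exit_scene(&rec);

        printf("start-up demand: %zu KB\n",
               rec.at_enter_complete - rec.at_prepare_enter);   /* 100 */
        printf("overall demand:  %zu KB\n",
               rec.at_exit - rec.at_prepare_enter);             /* 500 */
        return 0;
    }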
In addition, instead of collecting the continuous memory required by every application scene, only the continuous memory required by the application scenes whose probability of being switched to from the current application scene is greater than a threshold may be collected. For example, if there are 10 application scenes whose switching probability from the current scene is greater than 10%, only the continuous memory required by those 10 application scenes may be collected. The specific collection policy can be adjusted according to design requirements and is not limited here.
It should be understood that, if the application scene switching data acquired in step 502 is not required to be used when the application scene continuous memory acquisition is performed, the execution sequence of step 502 and step 504 is not limited in the present application, step 502 may be executed first, step 504 may be executed first, and the method may be specifically adjusted according to actual design requirements, which is not limited herein.
In the embodiment of the application, the continuous memory required by each application scene can be recorded, and the continuous memory required by each application scene is updated after each continuous memory is allocated. The weighting operation may be performed between the continuous memory allocated by the current switch and the continuous memory allocated by the history switch, or the continuous memory allocated by the current switch may be used to replace the continuous memory record allocated by the history switch, which may be specifically adjusted according to the actual scenario, and is not limited herein.
505. Determine the continuous memory requirement of the application scenes.
After the continuous memory of the application scenes is collected, the continuous-memory requirement of each second application scene can be determined from the collected data; for example, the continuous memory required from switching to a second application scene through to running it can be determined from the continuous-memory requirement during the start-up of that second application scene and the continuous-memory requirement while it is running.
Specifically, in the Linux system, memory fragments are managed through the buddy algorithm: the system kernel organizes the available memory pages into linked-list queues by powers of 2 and stores them in the free_area data structure. A specific example is described below with reference to FIG. 6, which is a schematic diagram of the buddy algorithm in an embodiment of the present application. There are 16 memory pages in the system memory, memory page 0 through memory page 15, i.e., 0-15 in the pages row in FIG. 6. The 16 memory pages are organized into linked-list queues by powers of 2. Since there are only 16 pages, only 4 levels (orders) are needed to describe the bitmap of the 16 memory pages, namely orders 0 through 3 in FIG. 6. Higher-order continuous memory can be quickly consolidated from lower-order continuous memory, and lower-order continuous memory can be quickly allocated by splitting higher-order continuous memory. Therefore, when determining the continuous-memory requirement of an application scene, continuous memory can be allocated through the buddy algorithm. The specific format is shown in Table 2:
                        Start-up           Maximum value
Application scene A     100, order 2       500, order 2
Application scene B     200, order 4       500, order 4
Application scene C     100, order 8       1000, order 8
Application scene D     0                  0
Application scene E     10, order 2        10, order 2
Application scene F     100, order 2       500, order 2
TABLE 2
When application scenario A starts, 100 memory pages of order 2 are needed, and when it runs normally, 500 memory pages of order 2 are needed; when application scenario B starts, 200 memory pages of order 4 are needed, and when it runs normally, 500 memory pages of order 4 are needed; when application scenario C starts, 100 memory pages of order 8 are needed, and when it runs normally, 1000 memory pages of order 8 are needed; when application scenario D starts, 0 memory pages are needed, and when it runs normally, 0 memory pages are needed; when application scenario E starts, 10 memory pages of order 2 are needed, and when it runs normally, 10 memory pages of order 2 are needed; when application scenario F starts, 100 memory pages of order 2 are needed, and when it runs normally, 500 memory pages of order 2 are needed; the other application scenarios are similar.
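For illustration only, the sketch below shows one way the Table 2 entries and the order notation could be held in code; the structure, its field names and the order_for_pages helper are hypothetical, and the only buddy property relied on is that an order-n block spans 2^n pages.

#include <stdio.h>

/* One Table 2 entry: a count of memory pages at a given buddy order. */
struct scene_order_req {
    const char *scene;
    unsigned startup_count;   /* count needed at start-up                 */
    unsigned max_count;       /* count needed during normal running       */
    unsigned order;           /* buddy order; a block spans 2^order pages */
};

/* Smallest buddy order whose block covers the requested number of pages. */
unsigned order_for_pages(unsigned pages)
{
    unsigned order = 0;
    while ((1u << order) < pages)
        order++;
    return order;
}

int main(void)
{
    struct scene_order_req table2[] = {
        { "A", 100,  500, 2 },
        { "B", 200,  500, 4 },
        { "C", 100, 1000, 8 },
        { "E",  10,   10, 2 },
    };
    for (unsigned i = 0; i < sizeof(table2) / sizeof(table2[0]); i++)
        printf("scene %s: start-up count %u at order %u (%u pages per block)\n",
               table2[i].scene, table2[i].startup_count,
               table2[i].order, 1u << table2[i].order);
    printf("smallest order covering 6 pages: %u\n", order_for_pages(6));
    return 0;
}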
506. Predicting the target continuous memory.
After the switching probability of each second application scene and the continuous memory required by each second application scene are determined, the second application scene to be switched to can be predicted, and the target continuous memory required by that scene can be predicted. The switching probability of switching from the current application scene to other application scenes can be determined through the application scene association matrix, and a threshold can be set so that application scenes with a switching probability below the threshold are filtered out. For example, if the switching probability of application scene A is lower than 10%, application scene A is filtered out.
Specifically, determining the target continuous memory may first filter out, among the one or more second application scenes, the second application scenes whose switching probability is not greater than the threshold. Then, a weighted operation is performed on the switching probabilities of the remaining second application scenes and the continuous memory they require, to obtain the target continuous memory. The weight in the weighted operation may correspond to the switching probability of each second application scene: the larger the switching probability of a scene, the larger its weight, so the resulting target continuous memory is biased towards the continuous memory required by the second application scenes with larger switching probabilities. Alternatively, the maximum continuous memory required among the second application scenes whose switching probability is greater than the threshold may be used as the target continuous memory, or the target continuous memory may be obtained through other algorithms, which may be adjusted according to actual device requirements and is not limited here.
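A minimal sketch of the probability-weighted variant described above, assuming the switching probabilities and required continuous memory (here counted in pages) are already known; normalizing by the total probability is only one possible choice of weights.

#include <stddef.h>
#include <stdio.h>

/* One candidate second application scene. */
struct candidate {
    double probability;      /* switching probability, 0.0 - 1.0         */
    unsigned needed_pages;   /* continuous memory the scene needs, pages */
};

/* Probability-weighted target continuous memory: candidates at or below
 * the threshold are filtered out, the rest contribute in proportion to
 * their switching probability. */
unsigned target_continuous_memory(const struct candidate *c, size_t n,
                                  double threshold)
{
    double weighted = 0.0, total_prob = 0.0;

    for (size_t i = 0; i < n; i++) {
        if (c[i].probability <= threshold)
            continue;                    /* filter low-probability scenes */
        weighted   += c[i].probability * c[i].needed_pages;
        total_prob += c[i].probability;
    }
    return total_prob > 0.0 ? (unsigned)(weighted / total_prob) : 0;
}

int main(void)
{
    struct candidate scenes[] = {
        { 0.90, 500 },   /* high-probability scene          */
        { 0.05, 200 },   /* filtered out at a 10% threshold */
        { 0.30, 100 },
    };
    printf("target pages: %u\n", target_continuous_memory(scenes, 3, 0.10));
    return 0;
}

Replacing the weighted average with the maximum needed_pages among the remaining candidates would implement the other alternative mentioned above.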
507. Starting memory defragmentation.
After determining the target continuous memory, if the continuous memory available on the terminal device is not greater than the target continuous memory, memory defragmentation can be actively performed, so that the continuous memory available on the terminal device is greater than the target continuous memory before switching to the second application scenario. The specific defragmentation method is described in detail in the following example of fig. 7.
In the embodiment of the application, the target continuous memory required by switching to the second application scene is predicted in advance, and the memory defragmentation is performed in advance, so that the continuous memory available on the terminal equipment is larger than the target continuous memory, and therefore, when the terminal equipment is switched from the first application scene to the second application scene, the terminal equipment can start and operate the second application scene by using enough continuous memory, thereby reducing the waiting time of switching the terminal equipment to the second application scene, and further improving the efficiency of switching the application scene by the terminal equipment.
The foregoing has focused on the specific steps of determining the target continuous memory in the memory management method of the embodiment of the present application. In the memory management method provided by the present application, in addition to predicting the target continuous memory in advance and performing memory defragmentation before switching application scenes, in order to further improve the efficiency of memory defragmentation and avoid affecting the running applications or processes on the terminal device, the embodiment of the present application also improves the specific memory defragmentation algorithm, and memory defragmentation may be performed with dynamic adjustment. Referring to fig. 7, another embodiment of the memory management method according to an embodiment of the present application may include:
701. Starting memory defragmentation.
After the terminal device determines the target continuous memory in a predictive manner, it may initiate memory defragmentation before switching to a certain second application scene, so as to sort out available continuous memory. Steps 702-708 are described in detail below.
702. Calculating the currently available continuous memory; if the target continuous memory is satisfied, executing step 703, and if the target continuous memory is not satisfied, executing step 704.
After determining the target continuous memory, the terminal device may calculate the currently available continuous memory, i.e. the continuous memory that the terminal device may currently allocate to the second application scenario. Specifically, in the Linux system, all available continuous memories on the current terminal device can be obtained from the buddy system, and whether the available continuous memories meet the target continuous memory is judged.
If the available continuous memory on the terminal device does not meet the target continuous memory, step 704 is executed, i.e. fast memory sorting is performed, so that the available continuous memory on the terminal device is no longer smaller than the target continuous memory. If the available continuous memory on the terminal device is not less than the target continuous memory, the terminal device may calculate the immovable-page dense area, i.e. step 703 is executed.
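The comparison against the target continuous memory could, for example, work on the per-order free block counts that the buddy system exposes (on Linux, for instance, via /proc/buddyinfo); the sketch below assumes a single zone whose counts are already available, which is a simplification.

#include <stdbool.h>
#include <stdio.h>

#define MAX_ORDER 11   /* classic Linux default: orders 0..10 */

/* free_count[o] = number of free blocks of order o in one zone. */
bool target_satisfied(const unsigned free_count[MAX_ORDER],
                      unsigned target_order, unsigned target_blocks)
{
    unsigned available = 0;

    /* A free block of order >= target_order can be split to serve
     * order-target_order requests, so count it with its split factor. */
    for (unsigned o = target_order; o < MAX_ORDER; o++)
        available += free_count[o] << (o - target_order);

    return available >= target_blocks;
}

int main(void)
{
    unsigned free_count[MAX_ORDER] = { 40, 20, 8, 2, 1, 0, 0, 0, 0, 0, 0 };

    /* Does the zone hold 500 order-2 blocks for the predicted scene? */
    printf("satisfied: %d\n", target_satisfied(free_count, 2, 500));
    return 0;
}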
703. Calculating the immovable-page dense area.
When the terminal device starts memory defragmentation, or when the available continuous memory on the terminal device meets the target continuous memory, the immovable-page dense area can be calculated. If, within a preset unit range, the number of immovable pages exceeds the dense threshold, that unit range is considered an immovable-page dense area. For example, if more than 100 of 1024 pages are immovable pages, the 1024 pages can be considered to belong to an immovable-page dense area. When the immovable-page dense area is greater than the dense threshold, step 704 may be executed; when the immovable-page dense area is not greater than the dense threshold, memory defragmentation can be stopped.
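A minimal sketch of the dense-area test for one unit range, using the 1024-page unit and the threshold of 100 immovable pages from the example; how a page's movability is actually determined (for instance from its migrate type) is outside the sketch.

#include <stdbool.h>

#define UNIT_PAGES      1024   /* preset unit range from the example         */
#define DENSE_THRESHOLD  100   /* more immovable pages than this means dense */

/* page_movable[i] states whether page i of one unit range is movable. */
bool unit_is_immovable_dense(const bool page_movable[UNIT_PAGES])
{
    unsigned immovable = 0;

    for (unsigned i = 0; i < UNIT_PAGES; i++)
        if (!page_movable[i])
            immovable++;

    return immovable > DENSE_THRESHOLD;   /* matches the 100-in-1024 example */
}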
It should be noted that, in the embodiment of the present application, the immovable-page dense area may or may not be calculated, that is, step 703 may be an optional step. In practical application, even when the terminal device does not calculate the target continuous memory, the immovable-page dense area can still be calculated directly. If the immovable-page dense area is greater than the preset value, fast defragmentation of continuous memory may also be performed, i.e. through the light memory defragmentation algorithm, which is detailed in step 704. If the immovable-page dense area is not greater than the preset value, fast memory defragmentation through the light memory defragmentation algorithm may or may not be performed, which is not limited here.
Specifically, when the terminal device performs memory defragmentation, if immovable pages are simply skipped, then after the system has run for a long time the immovable pages increase, the degree of memory fragmentation rises greatly, the success rate of sorting out large continuous memory drops, the speed of memory sorting and memory allocation decreases, and the operating efficiency of the terminal device is reduced. Therefore, in the embodiment of the present application, the immovable-page dense area is calculated and memory defragmentation is then performed, including sorting the areas that contain immovable pages, as described in detail in step 707 and step 708. In this way, the areas containing immovable pages can be sorted, the problem that the efficiency and success rate of memory defragmentation decrease because immovable pages increase after the system runs for a long time is avoided, and the efficiency and success rate of memory defragmentation by the terminal device can be improved.
704. Rapidly sorting out continuous memory.
When the available continuous memory on the terminal device does not meet the target continuous memory, or the immovable-page dense area on the terminal device is larger than the preset value, the terminal device rapidly sorts out continuous memory. This includes performing memory defragmentation through the light memory defragmentation algorithm, i.e. performing memory defragmentation on the movable page areas. Specifically, the memory pages before and after being sorted by the light memory defragmentation algorithm are shown in fig. 8, where a movable page area is a preset unit range of memory pages that does not contain any immovable page. For example, if no immovable page is included among 1024 pages, the 1024 pages can be considered to belong to a movable page area. Performing memory defragmentation through the light memory defragmentation algorithm means sorting the movable page area: the movable pages in the movable page area are moved into one section of continuous memory, so that the free pages form continuous memory. For example, if 20 discontinuous movable pages are included in the memory pages of the movable page area with addresses 0001-0100, the 20 movable pages can be moved together to addresses 0001-0020, so that the memory pages after address 0020 are all free pages, and free continuous memory is sorted out. In this way, free continuous memory can be rapidly sorted out on the terminal device through the light memory defragmentation algorithm, so as to guarantee the continuous memory the terminal device needs when switching application scenes.
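The sketch below models the light defragmentation of one movable page area as a simple in-array compaction; real page migration also has to copy page contents and update the mappings that reference them, which is deliberately omitted here.

#include <stddef.h>

/* pages[i] == 0 marks a free page, non-zero an occupied movable page;
 * by definition a movable page area contains no immovable pages. */
void light_compact(int pages[], size_t n)
{
    size_t dst = 0;

    /* Slide every movable page towards the start of the area, so the free
     * pages that remain form one continuous run at the end, as in the
     * 0001-0100 example where 20 movable pages end up in the first 20 slots. */
    for (size_t src = 0; src < n; src++) {
        if (pages[src] != 0) {
            pages[dst] = pages[src];
            if (dst != src)
                pages[src] = 0;
            dst++;
        }
    }
}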
In practical application, the available continuous memory can be rapidly sorted out through the light memory defragmentation algorithm, so that more available continuous memory can be allocated when the terminal device switches applications. For example, if the terminal device currently runs application scene A, and the currently available continuous memory on the terminal device does not meet application scene A, or the immovable-page dense area on the terminal device is greater than the preset value, the terminal device may rapidly sort out continuous memory by quickly sorting the movable page areas. In this way, the shortage of continuous memory caused by the terminal device suddenly switching application scenes is avoided, and the efficiency and reliability of switching application scenes are further improved. If the immovable-page dense area is larger than the preset value, the immovable pages on the terminal device have increased, and rapidly sorting out continuous memory allows more continuous memory on the terminal device to be allocated to application scenes.
705. Acquiring the system load.
After the continuous memory is rapidly sorted out, the available continuous memory on the terminal device increases, which prevents the available continuous memory from being insufficient when the terminal device suddenly switches application scenes. If the available continuous memory on the terminal device still does not meet the target continuous memory, or to further increase the available continuous memory on the terminal device, memory defragmentation can be processed further. By acquiring the system load of the terminal device, the memory defragmentation algorithm can be dynamically adjusted according to the range in which the system load falls, so that the resources of the terminal device are used reasonably and the impact on the application scenes running on the terminal device is reduced. The system load may represent how busy the system of the terminal device is, and may be a coefficient describing the processes running or waiting to run per unit time on the terminal device. For example, the system load may be the average number of processes in the run queue of the terminal device per unit time. Specifically, in the Linux system, a preset query command, such as the uptime command or the top command, can be used to query the system load of the terminal device.
The system load of the terminal device can generally be expressed by the occupancy rate of the central processing unit (central processing unit, CPU) of the terminal device or by the input/output (IO) throughput rate. The system load of the terminal device may specifically be determined by reading the CPU or IO nodes in the system. The terminal device can then adjust dynamically according to the system load, i.e. dynamically adjust the memory defragmentation algorithm and perform memory defragmentation in a graded manner, thereby improving the efficiency of memory defragmentation and reducing its impact on the application scenes running on the terminal device. Specifically, if the system load is in the first preset range, the terminal device determines that the memory defragmentation algorithm is the deep memory defragmentation algorithm, i.e. executes step 708; if the system load is in the second preset range, the terminal device determines that the memory defragmentation algorithm is the moderate memory defragmentation algorithm, i.e. executes step 707; or if the system load is in the third preset range, the terminal device determines that the memory defragmentation algorithm is the light memory defragmentation algorithm, i.e. executes step 706.
The specific hierarchical memory defragmentation algorithm is shown in Table 3:
System load | Memory defragmentation algorithm
<20% | Deep memory defragmentation algorithm
20%-40% | Moderate memory defragmentation algorithm
40%-60% | Light memory defragmentation algorithm
>60% | No memory defragmentation
TABLE 3
As can be seen from Table 3, when the system load is less than 20%, the system load is not high, so the deep memory defragmentation algorithm is executed, which does not affect the operation of the application scene currently running on the terminal device or of other application scenes; the deep defragmentation algorithm includes performing memory defragmentation on the immovable-page dense areas, the immovable-page normal areas and the movable page areas. When the system load is 20%-40%, the terminal device runs the moderate memory defragmentation algorithm; compared with the deep memory defragmentation algorithm, this algorithm omits the sorting of the immovable-page dense areas so as to reduce the load placed on the system during memory defragmentation and avoid affecting the running efficiency of the application scene currently running on the terminal device or of other application scenes. When the system load is 40%-60%, the system is relatively busy and can perform light memory defragmentation, sorting only the movable page areas, so as to reduce the impact of memory defragmentation on the application scene currently running on the terminal device or on other application scenes. When the system load is more than 60%, the system of the terminal device is very busy, and memory defragmentation can be skipped so as to avoid affecting the application scenes running on the terminal device.
It should be noted that, besides the first preset range being <20%, the second preset range being 20%-40% and the third preset range being 40%-60%, the first, second and third preset ranges may take other values, which may be adjusted according to actual design requirements and is not limited here.
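A minimal sketch of the load-based selection using the Table 3 boundaries; reading the one-minute load average from /proc/loadavg and turning it into a percentage with an assumed CPU count are illustrative simplifications, since the embodiment may equally rely on CPU occupancy or IO throughput.

#include <stdio.h>

enum defrag_level { DEFRAG_NONE, DEFRAG_LIGHT, DEFRAG_MODERATE, DEFRAG_DEEP };

/* Map a load percentage onto a defragmentation level following Table 3;
 * the boundaries are configurable in the embodiment. */
enum defrag_level pick_level(double load_percent)
{
    if (load_percent < 20.0)  return DEFRAG_DEEP;
    if (load_percent < 40.0)  return DEFRAG_MODERATE;
    if (load_percent < 60.0)  return DEFRAG_LIGHT;
    return DEFRAG_NONE;
}

int main(void)
{
    double one_min_load;
    FILE *f = fopen("/proc/loadavg", "r");   /* standard Linux node */

    if (f && fscanf(f, "%lf", &one_min_load) == 1) {
        /* Illustrative only: a load average becomes a percentage only
         * relative to the CPU count; 4 cores are assumed here. */
        double percent = one_min_load / 4.0 * 100.0;
        printf("load %.0f%% -> level %d\n", percent, pick_level(percent));
    }
    if (f)
        fclose(f);
    return 0;
}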
In the embodiment of the application, different memory defragmentation algorithms can be determined according to the system load, so that the impact on the application scene currently running on the terminal device or on other application scenes is reduced and the application scenes on the terminal device keep running normally, while available continuous memory is still sorted out, improving the efficiency of the terminal device in switching application scenes.
706. Executing the light memory defragmentation algorithm.
When the system load on the terminal device is in the third preset range, the terminal device executes the light memory defragmentation algorithm to sort the movable page areas. This memory defragmentation is similar to the light memory defragmentation algorithm used for rapidly sorting out continuous memory in step 704, and is not described again here.
In the embodiment of the application, when the system load is in the third preset range, the light memory defragmentation algorithm is executed; the third preset range may correspond to a relatively high system load, so executing the light algorithm avoids affecting the other application scenes running on the terminal device.
707. Executing the moderate memory defragmentation algorithm.
When the system load on the terminal device is in the second preset range, the moderate memory defragmentation algorithm is executed, and memory defragmentation is performed on the immovable-page normal areas and the movable page areas. The movable page areas are sorted in a manner similar to the light memory defragmentation algorithm used for rapidly sorting out continuous memory in step 704, and this is not described again here. If, within a preset unit range, the number of immovable pages is greater than 0 but does not exceed the dense threshold, that unit range is considered an immovable-page normal area. For example, if among 1024 pages the immovable pages number more than 0 but not more than 100, the 1024 pages can be considered to belong to an immovable-page normal area. The memory sorting of an immovable-page normal area may specifically be as shown in fig. 9: the movable pages in the immovable-page normal area are sorted so that the movable pages occupy continuous memory pages, thereby sorting out continuous pages. The movable pages may be moved into free continuous memory or into the continuous memory adjacent to the immovable pages, which may be adjusted according to actual design requirements and is not limited here.
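A small sketch of how one unit range could be classified once its immovable-page count is known, reusing the 1024-page unit and the threshold of 100 from the examples; the names and thresholds are illustrative.

enum region_type { MOVABLE_AREA, IMMOVABLE_NORMAL_AREA, IMMOVABLE_DENSE_AREA };

#define DENSE_THRESHOLD 100   /* immovable pages per 1024-page unit range */

/* 0 immovable pages            -> movable page area
 * 1..DENSE_THRESHOLD           -> immovable-page normal area
 * more than DENSE_THRESHOLD    -> immovable-page dense area              */
enum region_type classify_unit(unsigned immovable_pages)
{
    if (immovable_pages == 0)
        return MOVABLE_AREA;
    if (immovable_pages <= DENSE_THRESHOLD)
        return IMMOVABLE_NORMAL_AREA;
    return IMMOVABLE_DENSE_AREA;
}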
Therefore, in the embodiment of the application, when the system load of the terminal device is in the second preset range, i.e. the system load is moderate, the moderate memory defragmentation algorithm can be executed, sorting only the immovable-page normal areas and the movable page areas, so as to match the moderate system load of the device, improve the operating efficiency of the terminal device, and still sort out available continuous memory in advance.
In the embodiment of the application, the areas containing immovable pages are sorted, which prevents the efficiency and success rate of memory defragmentation from dropping because immovable pages increase after the system runs for a long time, so the efficiency and success rate of memory defragmentation by the terminal device can be improved.
708. Executing the deep memory defragmentation algorithm.
When the system load on the terminal device is in the first preset range, the terminal device can execute the deep memory defragmentation algorithm, which includes memory defragmentation of the immovable-page dense areas, the immovable-page normal areas and the movable page areas. The memory defragmentation of the movable page areas is similar to the light memory defragmentation algorithm used for rapidly sorting out continuous memory in step 704, and the memory defragmentation of the immovable-page normal areas is similar to the moderate memory defragmentation algorithm in step 707, so neither is described again here. The memory defragmentation of an immovable-page dense area may specifically be as shown in fig. 10: the movable pages of the immovable-page dense area are moved into the free memory of the immovable-page dense area, so as to increase the available continuous memory on the terminal device. In particular, the movable pages of the immovable-page dense area may be moved into the free memory pages in the gaps between immovable pages. When the movable page areas, the immovable-page normal areas and the immovable-page dense areas are sorted at the same time, the movable pages can be moved to the free memory pages in the gaps between immovable pages so as to sort out more free memory pages. Sorting the immovable-page normal areas and dense areas prevents the efficiency and success rate of memory defragmentation from dropping because immovable pages increase after the system runs for a long time, reduces the severity of memory fragmentation after the terminal device has run for a long time, and improves the efficiency and success rate of memory defragmentation by the terminal device.
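To summarize the three levels, the sketch below maps each defragmentation level to the region types it is allowed to sort, following Table 3; the enumerations repeat the illustrative names used in the earlier sketches.

#include <stdbool.h>

enum defrag_level { DEFRAG_NONE, DEFRAG_LIGHT, DEFRAG_MODERATE, DEFRAG_DEEP };
enum region_type  { MOVABLE_AREA, IMMOVABLE_NORMAL_AREA, IMMOVABLE_DENSE_AREA };

/* light    -> movable page areas only
 * moderate -> movable page areas and immovable-page normal areas
 * deep     -> all three, including immovable-page dense areas    */
bool level_covers(enum defrag_level level, enum region_type type)
{
    switch (level) {
    case DEFRAG_DEEP:     return true;
    case DEFRAG_MODERATE: return type != IMMOVABLE_DENSE_AREA;
    case DEFRAG_LIGHT:    return type == MOVABLE_AREA;
    default:              return false;
    }
}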
In addition, in the embodiment of the application, when the terminal device switches from the first application scene to the second application scene, if the available continuous memory on the terminal device is insufficient to start or run the second application scene, the terminal device can perform fast memory sorting, for example execute the light memory defragmentation algorithm, so that the available continuous memory on the terminal device meets the continuous memory required by the second application scene. For example, the terminal device currently runs the first application scene, predicts the second application scene to be switched to and obtains the target continuous memory, and then needs to perform memory defragmentation because the continuous memory on the terminal device is insufficient. If the terminal device switches to the second application scene while that memory defragmentation is being performed, or before it is performed, the terminal device can also start the light memory defragmentation algorithm at that moment to quickly sort out available memory, so as to ensure that the terminal device can normally start and run the second application scene.
In the embodiment of the application, after the target continuous memory is determined in a predictive manner, memory defragmentation is actively performed to sort out available continuous memory no smaller than the target continuous memory, which ensures that the terminal device can switch application scenes normally. When memory defragmentation is performed, fast memory defragmentation is carried out first so as to quickly obtain available continuous memory and guarantee the continuous memory requirement when the terminal device switches. Furthermore, after the fast memory defragmentation, if the terminal device has not yet switched to the second application scene, the terminal device can continue memory defragmentation according to the system load, dynamically adjusting the memory defragmentation algorithm based on the system load of the terminal device, so as to further increase the available continuous memory on the terminal device. This avoids the problem that, after the terminal device runs for a long time, the immovable pages increase, the severity of memory fragmentation rises, the success rate of sorting out continuous memory falls and the speed of memory allocation drops. It also allows the resources of the terminal device to be used reasonably, reduces the impact on the application scenes running on the terminal device, and improves the efficiency and reliability of the terminal device in switching application scenes.
The foregoing has described in detail the memory management method provided by the present application. In particular, the terminal device in the embodiment of the present application may be a smart phone, a tablet computer, a vehicle-mounted mobile device, a PDA (Personal Digital Assistant), a camera, various wearable devices, or the like, which is not limited here. The following further describes a specific application scenario on the terminal device as an example.
Referring to fig. 11, a specific switching scenario of the memory management method according to an embodiment of the present application is shown. Taking a smart phone as the terminal device, a plurality of applications including WeChat and a camera are installed on the smart phone, and when using the smart phone, the user may switch from WeChat to the camera to take a picture. When the terminal device switches from WeChat to the camera, the camera is started first, then camera preview is entered, and then a photograph is taken. If the memory is lower than a certain threshold, then when continuous memory is allocated for the camera preview scene and the camera photographing scene, insufficient continuous memory triggers memory defragmentation, which causes a long wait for the memory to be allocated, so the terminal device stutters and the user experience is affected.
Therefore, in order to improve the operation efficiency of the smart phone, the specific steps of the memory management method provided by the application may include:
When the smart phone is currently running WeChat, the terminal device acquires the number of times it has switched from WeChat to other application scenes, including the number of times it has switched from WeChat to the camera. A specific way of acquiring this is to record each switch from WeChat to another application scene. For example, the number of switches from WeChat to the camera is 100, the number of switches from WeChat to the application market is 2, and so on. Then, the continuous memory required at start-up and at run time by each application or application scene on the smart phone is collected, including the continuous memory sizes required by the camera preview scene and the camera photographing scene.
A specific way of collecting this is to insert a switch-count variable into a memory allocation function of the smart phone and count the continuous memory allocated each time. A count is taken when the camera is about to enter, when entry is complete and when the camera exits: the difference between the count at entry completion and the count when the camera is about to enter is the continuous memory requirement for starting the camera, and the difference between the count at exit and the count at entry completion is the overall continuous memory requirement of the camera.
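A minimal user-space sketch of the counter-based measurement described above; the on_contiguous_alloc hook, the global counter and the sample page counts are all hypothetical, since in a real system the counter would be maintained inside the memory allocation path itself.

#include <stdio.h>

/* Running total of continuous memory handed out, in pages. */
static unsigned long alloc_counter_pages;

/* Hypothetical hook called by the allocation path on each allocation. */
void on_contiguous_alloc(unsigned pages)
{
    alloc_counter_pages += pages;
}

int main(void)
{
    unsigned long at_prepare, at_entered, at_exit;

    at_prepare = alloc_counter_pages;   /* camera about to enter       */
    on_contiguous_alloc(120);           /* allocations during start-up */
    at_entered = alloc_counter_pages;   /* camera entry completed      */
    on_contiguous_alloc(380);           /* allocations while running   */
    at_exit = alloc_counter_pages;      /* camera exits                */

    printf("start-up requirement: %lu pages\n", at_entered - at_prepare);
    printf("overall running requirement: %lu pages\n", at_exit - at_entered);
    return 0;
}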
When the number of switches from WeChat to other applications and application scenes is acquired, the recorded number of switches from WeChat to each application or application scene can be updated at the same time, so that the camera switching information can be obtained later. After the continuous memory of the camera is collected, the continuous memory required by the camera can be updated, so that the smart phone can determine the continuous memory requirement of the camera from the historical records and the collected data.
After the switching information of the camera and the association information of the camera are determined, the probability of switching from WeChat to other applications or application scenes can be determined; for example, the probability of switching from WeChat to the camera may be determined to be 90%, so the smart phone can predict that it is about to switch to the camera scene. For predicting the probability of starting the camera, the more samples of camera start-ups the smart phone has, the more accurate and the more efficient the prediction. For example, with more than 100,000 samples, the probability of starting the camera can be predicted as soon as WeChat is entered, whereas with only 1 sample the prediction can only be made when WeChat is entered. For the continuous memory requirement of the camera, a single sample is enough for prediction.
After the smart phone predicts that it is about to switch to the camera scene, the continuous memory required to start and run the camera is identified, including the continuous memory required by camera preview and camera photographing. Memory defragmentation is then initiated. First, the currently available continuous memory on the smart phone is calculated. If the currently available continuous memory on the smart phone is not larger than the continuous memory required to start and run the camera, the smart phone can perform fast memory defragmentation. If the currently available continuous memory on the smart phone is larger than the continuous memory required to start and run the camera, or when the smart phone does not compare the available continuous memory with the continuous memory required to start and run the camera, the smart phone can calculate the current immovable-page dense area; if the current immovable-page dense area is larger than the preset value, the smart phone can also carry out the subsequent memory defragmentation steps, beginning with fast memory defragmentation. The fast memory defragmentation may consist of executing the light memory defragmentation algorithm to sort the memory on the smart phone, first quickly sorting out the continuous memory required by the camera preview scene and then sorting out the continuous memory required for photographing. After the fast memory defragmentation is completed, the system load of the smart phone can be obtained, and the memory defragmentation algorithm is then adjusted dynamically according to that load: when the system load is less than 20%, the deep memory defragmentation algorithm is executed to sort the immovable-page dense areas, the immovable-page normal areas and the movable page areas, the specific sorting algorithm being similar to steps 706-708 in fig. 7 and not repeated here; if the system load is 20%-40%, the moderate memory defragmentation algorithm can be executed, performing memory defragmentation on the immovable-page normal areas and the movable page areas; if the system load is 40%-60%, the light memory defragmentation algorithm can be executed, performing memory defragmentation on the movable page areas; and if the system load is greater than 60%, memory defragmentation may be skipped to avoid affecting the applications running on the smart phone.
The foregoing has described in detail the memory management method provided by the embodiment of the present application. In addition, the embodiment of the present application further provides a terminal device for implementing the foregoing memory management method. Referring to fig. 12, a schematic diagram of an embodiment of the terminal device in the embodiment of the present application is shown, which may include:
the data acquisition module 1201 is configured to obtain a switching probability of switching from a first application scenario to each of one or more second application scenarios, where the first application scenario is an application scenario currently operated by the terminal device, and may be specifically configured to implement a specific step of step 401 in the foregoing embodiment of fig. 4;
the continuous memory requirement identifying module 1202 is configured to determine a target continuous memory according to the switching probability that satisfies the preset condition in the switching probability and the continuous memory required by each second application scenario in which the switching probability satisfies the preset condition in the one or more second application scenarios, and may be specifically configured to implement the specific step of step 403 in the foregoing embodiment of fig. 4;
the active memory defragmentation module 1203 is configured to perform memory defragmentation according to the target continuous memory before the terminal device switches from the first application scenario to any one of the one or more second application scenarios if the continuous memory available on the terminal device is not greater than the target continuous memory, so that the continuous memory available on the terminal device is greater than the target continuous memory, and may be specifically configured to implement the specific step of step 406 in the embodiment of fig. 4.
In some possible embodiments, the active memory defragmentation module 1203 is specifically configured to:
determining a memory defragmentation algorithm according to the system load;
performing memory defragmentation according to the memory defragmentation algorithm and the target continuous memory;
and in particular may be used to implement step 705 and specific ones of the relevant steps in the previously described fig. 7 embodiment.
In some possible embodiments, the active memory defragmentation module 1203 is further specifically configured to:
if the system load is in a first preset range, determining that the memory defragmentation algorithm is a deep memory defragmentation algorithm;
if the system load is in a second preset range, determining that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or
If the system load is in a third preset range, determining that the memory defragmentation algorithm is a mild memory defragmentation algorithm;
and may be used in particular to implement the specific steps of steps 705-708 in the previously described fig. 7 embodiment.
In some possible embodiments, the data acquisition module 1201 is specifically configured to:
acquiring historical switching times of switching from the first application scene to each of the one or more second application scenes;
Determining a switching probability of switching from the first application scene to each of the one or more second application scenes according to the historical switching times;
and may be used in particular to implement the particular steps of step 502 in the previously described fig. 5 embodiment.
In some possible embodiments, the continuous memory requirement identification module 1202 is specifically configured to:
determining a second application scenario in which the switching probability is greater than a threshold value from the one or more second application scenarios;
determining the target continuous memory according to the continuous memory required by the second application scenario where the switching probability is greater than the threshold value may be specifically used to implement the specific step in step 506 in the foregoing embodiment of fig. 5.
In some possible embodiments, the continuous memory requirement identification module 1202 is further specifically configured to:
if there are multiple second application scenarios with a switching probability greater than the threshold, weighting operation is performed on the switching probability of each second application scenario and the required continuous memory in the multiple second application scenarios with a switching probability greater than the threshold, so as to obtain the target continuous memory, which may be specifically used to implement the specific step in step 506 in the embodiment of fig. 5.
In some possible embodiments, the continuous memory requirement identification module 1202 is specifically configured to:
the terminal device determines a target application scene with the largest required continuous memory from the one or more second application scenes with the switching probability larger than a threshold value;
the terminal device uses the continuous memory required by the target application scenario as the target continuous memory, which can be specifically used to implement the specific step in step 506 in the foregoing embodiment of fig. 5.
In some possible embodiments, the active memory defragmentation module 1203 is further configured to:
when the terminal device switches from the first application scenario to one of the one or more second application scenarios and the available continuous memory on the terminal device does not meet the continuous memory required by that second application scenario, the terminal device sorts the memory through the light memory defragmentation algorithm, which may be specifically used to implement the specific step in step 704 in the foregoing embodiment of fig. 7.
The embodiment of the present application further provides a terminal device, as shown in fig. 13, for convenience of explanation, only the portion relevant to the embodiment of the present application is shown, and specific technical details are not disclosed, please refer to the method portion of the embodiment of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant ), a POS (Point of Sales), a vehicle-mounted computer, and the like, taking the terminal device as an example of the mobile phone:
Fig. 13 is a block diagram showing a part of the structure of a mobile phone related to a terminal provided by an embodiment of the present invention. Referring to fig. 13, the mobile phone includes: radio Frequency (RF) circuitry 1310, memory 1320, input unit 1330, display unit 1340, sensors 1350, audio circuitry 1360, wireless fidelity (wireless fidelity, wiFi) modules 1370, processor 1380, power supply 1390, and the like. It will be appreciated by those skilled in the art that the handset construction shown in fig. 13 is not limiting of the handset and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the components of the mobile phone in detail with reference to fig. 13:
the RF circuit 1310 may be used for receiving and transmitting signals during a message or a call; in particular, after receiving downlink information of a base station, the RF circuit may pass the downlink information to the processor 1380 for processing, and in addition, uplink data is sent to the base station. In general, RF circuitry 1310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuitry 1310 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol including, but not limited to, global system for mobile communications (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE), email, short message service (Short Messaging Service, SMS), and the like.
The memory 1320 may be used to store software programs and modules, and the processor 1380 performs various functional applications and data processing of the handset by executing the software programs and modules stored in the memory 1320. The memory 1320 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 1320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 1330 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 1330 may include a touch panel 1331 and other input devices 1332. Touch panel 1331, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on touch panel 1331 or thereabout using any suitable object or accessory such as a finger, stylus, etc.) and actuate the corresponding connection device according to a predetermined program. Alternatively, the touch panel 1331 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch position of a user, detects the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 1380, and can receive commands from the processor 1380 and execute them. In addition, the touch panel 1331 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1330 may include other input devices 1332 in addition to the touch panel 1331. In particular, other input devices 1332 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1340 may be used to display information input by a user or information provided to the user as well as various menus of the mobile phone. The display unit 1340 may include a display panel 1341, and alternatively, the display panel 1341 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1331 may overlay the display panel 1341, and when the touch panel 1331 detects a touch operation thereon or thereabout, the touch panel is transferred to the processor 1380 to determine the type of touch event, and the processor 1380 then provides a corresponding visual output on the display panel 1341 according to the type of touch event. Although in fig. 13, the touch panel 1331 and the display panel 1341 are two independent components for implementing the input and output functions of the mobile phone, in some embodiments, the touch panel 1331 may be integrated with the display panel 1341 to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 1341 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1341 and/or the backlight when the phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the handset are not described in detail herein.
Audio circuitry 1360, speaker 1361 and microphone 1362 may provide an audio interface between the user and the handset. The audio circuit 1360 may transmit the electrical signal obtained by converting the received audio data to the speaker 1361, where it is converted into a sound signal by the speaker 1361 and output; on the other hand, the microphone 1362 converts the collected sound signals into electrical signals, which are received by the audio circuit 1360 and converted into audio data, and the audio data is processed by the processor 1380 and then sent via the RF circuit 1310 to, for example, another mobile phone, or output to the memory 1320 for further processing.
WiFi belongs to a short-distance wireless transmission technology, and a mobile phone can help a user to send and receive emails, browse webpages, access streaming media and the like through a WiFi module 1370, so that wireless broadband Internet access is provided for the user. Although fig. 13 shows a WiFi module 1370, it is understood that it does not belong to the necessary constitution of the mobile phone, and can be omitted entirely as required within a range that does not change the essence of the invention.
The processor 1380 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions and processes data of the mobile phone by running or executing software programs and/or modules stored in the memory 1320, and calling data stored in the memory 1320, thereby performing overall monitoring of the mobile phone. Optionally, processor 1380 may include one or more processing units; preferably, processor 1380 may integrate an application processor primarily handling operating systems, user interfaces, applications, etc., with a modem processor primarily handling wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1380. The processor 1380 may perform the specific steps described above as being performed by the terminal device in fig. 3-13.
The handset further includes a power supply 1390 (e.g., a battery) for powering the various components, which may be logically connected to the processor 1380 through a power management system, such as to provide for managing charging, discharging, and power consumption by the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of fig. 3 to 11 of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (14)

1. A method of memory management, comprising:
acquiring the switching probability of switching a terminal device from a first application scene to each of one or more second application scenes, wherein the first application scene is the application scene currently operated by the terminal device;
determining a target continuous memory according to the continuous memory required by one or more second application scenes, of which the switching probability meets a preset condition, in the one or more second application scenes;
if the available continuous memory on the terminal equipment is smaller than the target continuous memory, performing memory defragmentation before the terminal equipment is switched from a first application scene to any one of the one or more second application scenes, so that the available continuous memory on the terminal equipment is larger than the target continuous memory, and the available continuous memory is an idle memory with continuous physical addresses;
The determining the target continuous memory according to the continuous memory required by the one or more second application scenes with the switching probability meeting the preset condition in the one or more second application scenes includes:
determining one or more second application scenes with the switching probability larger than a threshold value from the one or more second application scenes;
and if a plurality of second application scenes with the switching probability larger than the threshold value are determined from the one or more second scenes, carrying out weighting operation on the continuous memory required by each second application scene in the second application scenes with the switching probability larger than the threshold value so as to obtain the target continuous memory.
2. The method of claim 1, wherein performing memory defragmentation comprises:
determining a memory defragmentation algorithm according to the range of the system load of the terminal equipment;
and performing memory defragmentation on the memory of the terminal equipment by using the determined memory defragmentation algorithm.
3. The method of claim 2, wherein the determining the memory defragmentation algorithm based on the range of system loads comprises:
if the system load is in a first preset range, determining that the memory defragmentation algorithm is a deep memory defragmentation algorithm;
If the system load is in a second preset range, determining that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or
And if the system load is in a third preset range, determining that the memory defragmentation algorithm is a mild memory defragmentation algorithm.
4. A method according to any of claims 1-3, wherein the obtaining the switching probability of switching the terminal device from a first application scene to each of one or more second application scenes comprises:
acquiring historical switching times of switching the terminal equipment from the first application scene to each of the one or more second application scenes;
and determining the switching probability of switching from the first application scene to each of the one or more second application scenes according to the historical switching times.
5. A method according to any of claims 1-3, wherein the determining the target continuous memory according to the continuous memory required by the one or more second application scenes whose switching probability is greater than the threshold value comprises:
determining a target application scene with the largest required continuous memory from the one or more second application scenes with the switching probability larger than the threshold value;
And taking the continuous memory required by the target application scene as the target continuous memory.
6. A method according to any one of claims 1-3, characterized in that the method further comprises:
when the terminal equipment is switched from the first application scene to one of the one or more second application scenes, and the available continuous memory on the terminal equipment does not meet the continuous memory required by the one of the second application scenes, the terminal equipment sorts the memory of the terminal equipment through a mild memory defragmentation algorithm.
7. A terminal device, comprising:
the data acquisition module is used for acquiring the switching probability of the terminal equipment from a first application scene to each of one or more second application scenes, wherein the first application scene is the application scene currently operated by the terminal equipment;
the continuous memory demand identification module is used for determining a target continuous memory according to the continuous memory required by one or more second application scenes, wherein the switching probability of the one or more second application scenes meets the preset condition;
an active memory defragmentation module, configured to perform memory defragmentation according to the target continuous memory before the terminal device switches from a first application scenario to any one of the one or more second application scenarios if the continuous memory available on the terminal device is smaller than the target continuous memory, so that the continuous memory available on the terminal device is larger than the target continuous memory, and the available continuous memory is an idle memory with continuous physical addresses;
wherein the continuous memory requirement identification module is specifically configured to: determine, from the one or more second application scenes, one or more second application scenes whose switching probability is greater than a threshold;
and the active memory defragmentation module is further configured to: if it is determined that a plurality of second application scenes among the one or more second application scenes have a switching probability greater than the threshold, perform a weighted operation on the continuous memory required by each of the plurality of second application scenes whose switching probability is greater than the threshold, to obtain the target continuous memory.
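Illustration (not part of the claims): a sketch of the weighted operation in claim 7, applied when several second application scenes exceed the threshold. The claims do not specify the weighting; a probability-weighted average, renormalised over the candidate scenes, is assumed here, and a maximum or a capped sum would be equally consistent with the claim wording.

    def weighted_target_continuous_memory(probabilities, required, threshold):
        """Combine the continuous-memory requirements of all scenes above the
        threshold, weighted by their renormalised switching probabilities."""
        candidates = {s: p for s, p in probabilities.items() if p > threshold}
        if not candidates:
            return 0
        total_p = sum(candidates.values())
        return int(round(sum(p * required[s] for s, p in candidates.items()) / total_p))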
8. The terminal device according to claim 7, wherein the active memory defragmentation module is specifically configured to:
determine a memory defragmentation algorithm according to the range of the system load of the terminal device;
and perform memory defragmentation on the memory of the terminal device by using the determined memory defragmentation algorithm.
9. The terminal device according to claim 8, wherein the active memory defragmentation module is specifically configured to:
if the system load is in a first preset range, determine that the memory defragmentation algorithm is a deep memory defragmentation algorithm;
if the system load is in a second preset range, determine that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or
if the system load is in a third preset range, determine that the memory defragmentation algorithm is a mild memory defragmentation algorithm.
10. The terminal device according to any one of claims 7-9, wherein the data acquisition module is specifically configured to:
acquire the historical number of times that the terminal device has switched from the first application scene to each of the one or more second application scenes;
and determine, according to the historical number of switches, the switching probability of switching from the first application scene to each of the one or more second application scenes.
11. The terminal device according to any one of claims 7-9, wherein the continuous memory requirement identification module is specifically configured to:
determine a target application scene with the largest required continuous memory from the one or more second application scenes whose switching probability is greater than a threshold;
and take the continuous memory required by the target application scene as the target continuous memory.
12. The terminal device according to any one of claims 7-9, wherein the active memory defragmentation module is further configured to:
when the terminal device switches from the first application scene to one of the one or more second application scenes and the available continuous memory on the terminal device does not satisfy the continuous memory required by that second application scene, perform memory defragmentation on the memory of the terminal device by using a mild memory defragmentation algorithm.
13. A terminal device, comprising:
a processor and a memory;
the memory stores a computer program;
the processor, when executing the program, implements the steps of the method of any one of claims 1-6.
14. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the steps of the method of any of claims 1-6.
CN201810333058.6A 2018-04-13 2018-04-13 Memory management method and related equipment Active CN110377527B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810333058.6A CN110377527B (en) 2018-04-13 2018-04-13 Memory management method and related equipment
PCT/CN2019/082098 WO2019196878A1 (en) 2018-04-13 2019-04-10 Method for memory management and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810333058.6A CN110377527B (en) 2018-04-13 2018-04-13 Memory management method and related equipment

Publications (2)

Publication Number Publication Date
CN110377527A (en) 2019-10-25
CN110377527B (en) 2023-09-22

Family

ID=68163011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810333058.6A Active CN110377527B (en) 2018-04-13 2018-04-13 Memory management method and related equipment

Country Status (2)

Country Link
CN (1) CN110377527B (en)
WO (1) WO2019196878A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111078405B (en) * 2019-12-10 2022-07-15 Oppo(重庆)智能科技有限公司 Memory allocation method and device, storage medium and electronic equipment
CN111444116B (en) * 2020-03-23 2022-11-25 海信电子科技(深圳)有限公司 Storage space fragment processing method and device
CN112925478B (en) * 2021-01-29 2022-10-25 惠州Tcl移动通信有限公司 Camera storage space control method, intelligent terminal and computer readable storage medium
US11520695B2 (en) * 2021-03-02 2022-12-06 Western Digital Technologies, Inc. Storage system and method for automatic defragmentation of memory
CN113082705B (en) * 2021-05-08 2023-09-15 腾讯科技(上海)有限公司 Game scene switching method, game scene switching device, computer equipment and storage medium
CN116661988A (en) * 2022-12-29 2023-08-29 荣耀终端有限公司 Memory normalization method, electronic device and readable storage medium
CN116400871B (en) * 2023-06-09 2023-09-19 Tcl通讯科技(成都)有限公司 Defragmentation method, defragmentation device, storage medium and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701025A (en) * 2015-12-31 2016-06-22 华为技术有限公司 Memory recovery method and device
CN107133094A (en) * 2017-06-05 2017-09-05 努比亚技术有限公司 Application management method, mobile terminal and computer-readable recording medium
CN107273011A (en) * 2017-06-26 2017-10-20 努比亚技术有限公司 Application program fast switch over method and mobile terminal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6571326B2 (en) * 2001-03-08 2003-05-27 Intel Corporation Space allocation for data in a nonvolatile memory
CN1889737A (en) * 2006-07-21 2007-01-03 华为技术有限公司 Resource management method and system
FR2907625B1 (en) * 2006-10-18 2012-12-21 Streamezzo METHOD FOR MEMORY MANAGEMENT IN CLIENT TERMINAL, COMPUTER PROGRAM SIGNAL AND CORRESPONDING TERMINAL
CN100462940C (en) * 2007-01-30 2009-02-18 金蝶软件(中国)有限公司 Method and apparatus for cache data in memory
CN103150257A (en) * 2013-02-28 2013-06-12 天脉聚源(北京)传媒科技有限公司 Memory management method and memory management device
CN105718027B (en) * 2016-01-20 2019-05-31 努比亚技术有限公司 The management method and mobile terminal of background application
CN105939416A (en) * 2016-05-30 2016-09-14 努比亚技术有限公司 Mobile terminal and application prestart method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701025A (en) * 2015-12-31 2016-06-22 华为技术有限公司 Memory recovery method and device
CN107133094A (en) * 2017-06-05 2017-09-05 努比亚技术有限公司 Application management method, mobile terminal and computer-readable recording medium
CN107273011A (en) * 2017-06-26 2017-10-20 努比亚技术有限公司 Application program fast switch over method and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Optimization of Intelligent Terminal Application Management Based on User Behavior Analysis; 黄文茜 et al.; Computer Systems & Applications; 2016-10-15 (Issue 10); pp. 1-8 *

Also Published As

Publication number Publication date
CN110377527A (en) 2019-10-25
WO2019196878A1 (en) 2019-10-17

Similar Documents

Publication Publication Date Title
CN110377527B (en) Memory management method and related equipment
US11099900B2 (en) Memory reclamation method and apparatus
CN107526640B (en) Resource management method, resource management device, mobile terminal and computer-readable storage medium
CN111061516B (en) Method and device for accelerating cold start of application and terminal
CN109992398B (en) Resource management method, resource management device, mobile terminal and computer-readable storage medium
US10956316B2 (en) Method and device for processing reclaimable memory pages, and storage medium
EP3502878B1 (en) Method for preloading application and terminal device
CN110018902B (en) Memory processing method and device, electronic equipment and computer readable storage medium
US10698837B2 (en) Memory processing method and device and storage medium
WO2019128598A1 (en) Application processing method, electronic device, and computer readable storage medium
CN110018900B (en) Memory processing method and device, electronic equipment and computer readable storage medium
CN104850507A (en) Data caching method and data caching device
CN112422711B (en) Resource allocation method and device, electronic equipment and storage medium
CN112559390B (en) Data writing control method and storage device
CN109992399B (en) Resource management method, resource management device, mobile terminal and computer-readable storage medium
CN110032430B (en) Application program processing method and device, electronic equipment and computer readable storage medium
CN110018886B (en) Application state switching method and device, electronic equipment and readable storage medium
CN111880928B (en) Method for releasing selection process and terminal equipment
CN109508300B (en) Disk fragment sorting method and device and computer readable storage medium
CN109375995B (en) Application freezing method and device, storage medium and electronic equipment
CN108513005B (en) Contact person information processing method and device, electronic equipment and storage medium
CN109992395B (en) Application freezing method and device, terminal and computer readable storage medium
CN116596202A (en) Work order processing method, related device and storage medium
CN115840736A (en) File sorting method, intelligent terminal and computer readable storage medium
CN114356159A (en) Display method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant