WO2019196878A1 - Method for memory management and related device - Google Patents

Method for memory management and related device

Info

Publication number
WO2019196878A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
terminal device
application scenario
application
contiguous
Prior art date
Application number
PCT/CN2019/082098
Other languages
French (fr)
Chinese (zh)
Inventor
李刚
唐城开
韦行海
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Priority to CN201810333058.6 priority Critical
Priority to CN201810333058.6A priority patent/CN110377527A/en
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2019196878A1 publication Critical patent/WO2019196878A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F 12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G06F 12/0269 Incremental or concurrent garbage collection, e.g. in real-time systems
    • G06F 12/0276 Generational garbage collection

Abstract

The present application provides a method for memory management and a related device. Memory defragmentation is performed proactively on the basis of application scenarios and predicted contiguous-memory requirements, so as to satisfy the contiguous-memory requirements of different application scenarios, reduce the waiting time of memory allocation, and improve application running efficiency. The method comprises: a terminal device obtains a switching probability of switching from a currently running first application scenario to each of one or more second application scenarios; it then determines target contiguous memory according to the contiguous memory required by the second application scenarios whose switching probability satisfies a preset condition; if the contiguous memory available on the terminal device is less than the target contiguous memory, the terminal device performs memory defragmentation before switching from the first application scenario to any one of the second application scenarios, so that the contiguous memory available on the terminal device is greater than the target contiguous memory.

Description

Memory management method and related equipment

This application claims priority to Chinese Patent Application No. 201810333058.6, filed on Apr. 13, 2018, the entire disclosure of which is hereby incorporated by reference in this application.

Technical field

The present application relates to the field of computers, and in particular, to a method for memory management and related devices.

Background art

Fragmentation of physical memory, that is, non-contiguous memory pages, has always been one of the important problems facing operating systems, and most of the memory used by a typical application at runtime needs to be contiguous. To solve the problem of physical memory fragmentation, the prior art usually uses memory management algorithms, such as the buddy-system defragmentation algorithm in Linux, to organize the fragmented memory into contiguous memory so as to meet the memory requirements of applications.

Existing memory management algorithms fall mainly into two categories: synchronous memory defragmentation algorithms and asynchronous memory defragmentation algorithms. A synchronous memory defragmentation algorithm triggers memory defragmentation during memory allocation for an application if the contiguous memory available to the system cannot meet the application's requirement. An asynchronous memory defragmentation algorithm triggers memory defragmentation when the system's available contiguous memory falls below a set threshold.

It can be seen that existing memory management algorithms trigger memory defragmentation passively, based on specific events. For example, the synchronous memory defragmentation algorithm triggers defragmentation when the system cannot allocate contiguous memory for the current application, so the system must first release memory and organize contiguous memory before the allocation can complete, which greatly increases the waiting time of memory allocation and affects the running efficiency of the current application. The asynchronous memory defragmentation algorithm triggers memory defragmentation when the system's available contiguous memory is lower than the preset threshold and stops when it rises above the threshold; when an application requires a large amount of contiguous memory, asynchronous defragmentation cannot meet the application's memory requirement in time, so the system falls back to synchronous memory defragmentation, which again causes long memory allocation times and affects the running efficiency of the application.

Summary of the invention

The present invention provides a memory management method and related device that actively perform memory defragmentation based on application scenarios and predicted contiguous-memory requirements, so as to meet the contiguous-memory requirements of different application scenarios, reduce the waiting time of memory allocation, and improve application running efficiency.

In view of this, the first aspect of the present application provides a method for memory management, which may include:

The terminal device acquires a switching probability of switching from a currently running first application scenario to each of one or more second application scenarios, where "multiple" means two or more. The terminal device then determines target contiguous memory according to the switching probabilities that satisfy a preset condition and the contiguous memory required by each second application scenario whose switching probability satisfies the preset condition. If the contiguous memory available on the terminal device is smaller than the target contiguous memory, the terminal device performs memory defragmentation according to the target contiguous memory before switching from the first application scenario to any one of the one or more second application scenarios, so that the contiguous memory available on the terminal device is greater than the target contiguous memory.

It should be noted that, in the embodiment of the present application, when the contiguous memory available on the terminal device is equal to the target contiguous memory, memory defragmentation may or may not be performed; this is adjusted according to actual design requirements and is not limited herein.

In the embodiment of the present application, the terminal device predicts the application scenario to which it is about to switch, that is, the switching probability of switching from the first application scenario to each second application scenario. The target contiguous memory is determined according to the switching probabilities that satisfy the preset condition and the contiguous memory required by the corresponding second application scenarios, and the terminal device then actively performs memory defragmentation so that the contiguous memory available on the terminal device is greater than the target contiguous memory. This ensures that the terminal device has sufficient contiguous memory to allocate when switching application scenarios and does not need to defragment memory at the moment of switching, thereby improving the efficiency of the terminal device when switching application scenarios.
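As a rough illustration of this first aspect only, the following minimal C sketch shows the control flow: derive the target contiguous memory from the candidate scenarios, compare it with the currently available contiguous memory, and trigger defragmentation proactively while the first scenario is still running. All names (scenario_t, compute_target_contiguous, defragment_memory, and so on) are hypothetical placeholders, not interfaces defined by this application; possible implementations of the target computation are sketched under the fourth and sixth implementation manners below.

#include <stddef.h>

/* Hypothetical description of a candidate second application scenario. */
typedef struct {
    double probability;      /* predicted probability of switching to this scenario   */
    size_t required_bytes;   /* contiguous memory the scenario needs to start and run */
} scenario_t;

/* Assumed helpers supplied elsewhere in the system. */
extern size_t compute_target_contiguous(const scenario_t *s, size_t n, double threshold);
extern size_t available_contiguous_bytes(void);
extern void   defragment_memory(size_t target_bytes);

/* Called while the first application scenario is still running, i.e. before any
 * switch happens, so that defragmentation is proactive rather than on demand. */
void prepare_for_switch(const scenario_t *candidates, size_t n, double threshold)
{
    size_t target = compute_target_contiguous(candidates, n, threshold);
    if (available_contiguous_bytes() < target)
        defragment_memory(target);
}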

With reference to the first aspect of the present application, in a first implementation manner of the first aspect of the present application, the terminal device performs memory defragmentation, which may include:

The terminal device acquires the system load and determines a memory defragmentation algorithm according to the range in which the system load falls; it then performs memory defragmentation on the memory of the terminal device according to that algorithm to increase the contiguous memory available on the terminal device.

In the embodiment of the present application, the memory defragmentation algorithm is determined by the range in which the system load of the terminal device falls, and can be dynamically adjusted according to that load, so that memory resources of the terminal device are used reasonably and the impact of memory defragmentation on the application scenarios running on the terminal device is reduced, improving the efficiency with which the terminal device switches application scenarios.

With reference to the first embodiment of the first aspect of the present application, in a second implementation manner of the first aspect of the present application, the terminal device determines a memory defragmentation algorithm according to a range in which the system load is located, including:

If the system load is in the first preset range, the terminal device determines that the memory defragmentation algorithm is a deep memory defragmentation algorithm; if the system load is in the second preset range, the terminal device determines that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or, if the system load is in the third preset range, the terminal device determines that the memory defragmentation algorithm is a light memory defragmentation algorithm.

In the embodiment of the present application, an appropriate memory defragmentation algorithm is selected according to the system load of the terminal device and adjusted dynamically among a deep memory defragmentation algorithm, a moderate memory defragmentation algorithm, and a light memory defragmentation algorithm. The memory defragmentation process thus reduces the impact on the application scenarios running on the terminal device while still freeing available contiguous memory, improving the efficiency with which the terminal device switches application scenarios. For example, if the load of the terminal device is high, memory defragmentation can be performed with the light memory defragmentation algorithm to avoid affecting the application scenario running on the terminal device; or, when the load of the terminal device is low, the deep memory defragmentation algorithm can be used, making reasonable use of the terminal device's resources without affecting the operation of other processes or application scenarios on the terminal device.
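A minimal sketch of this dynamic selection, assuming the system load is expressed as a percentage; the concrete thresholds are illustrative choices, since the application does not fix the preset ranges.

typedef enum {
    DEFRAG_LIGHT,     /* light defragmentation: cheapest, least impact on running apps */
    DEFRAG_MODERATE,  /* moderate defragmentation */
    DEFRAG_DEEP       /* deep defragmentation: most thorough, used when load is low    */
} defrag_level_t;

/* Example thresholds only; the preset ranges are design choices, not values
 * mandated by the application. Load is assumed to be a 0-100 percentage. */
static defrag_level_t choose_defrag_level(int load_percent)
{
    if (load_percent >= 70)        /* third preset range: high load */
        return DEFRAG_LIGHT;
    else if (load_percent >= 30)   /* second preset range: medium load */
        return DEFRAG_MODERATE;
    else                           /* first preset range: low load */
        return DEFRAG_DEEP;
}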

With reference to the first aspect of the present application, the first implementation manner of the first aspect, or the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the acquiring, by the terminal device, the switching probability of switching from the first application scenario to each of the one or more second application scenarios may include:

The terminal device obtains the number of historical switches from the first application scenario to each of the one or more second application scenarios, and determines, according to the number of historical switches, the switching probability of switching from the first application scenario to each of the one or more second application scenarios.

Specifically, in the embodiment of the present application, the terminal device first acquires the number of historical switches from the first application scenario to each of the one or more second application scenarios, and then calculates the probability of switching to each second application scenario from those historical counts. The switching probabilities can be used to predict the contiguous memory required for switching application scenarios, and memory defragmentation is actively performed accordingly, so that the contiguous memory available on the terminal device satisfies the contiguous memory required for the switch, improving the efficiency with which the terminal device switches application scenarios.
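A minimal sketch of deriving switching probabilities from historical switch counts, assuming a simple frequency-based estimate; the counts and the number of scenarios are illustrative, and this is only one straightforward way to realize the prediction described here (the text also mentions deep learning as an alternative).

#include <stdio.h>

#define NUM_SCENARIOS 6

/* Historical switch counts from the current (first) scenario to each candidate
 * second scenario; the values are illustrative only. */
static const unsigned hist[NUM_SCENARIOS] = { 100, 10, 20, 0, 1, 0 };

int main(void)
{
    unsigned total = 0;
    for (int i = 0; i < NUM_SCENARIOS; i++)
        total += hist[i];

    /* probability = historical switches to scenario i / total historical switches */
    for (int i = 0; i < NUM_SCENARIOS; i++) {
        double p = total ? (double)hist[i] / (double)total : 0.0;
        printf("scenario %d: switching probability %.2f\n", i, p);
    }
    return 0;
}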

With reference to the first aspect of the present application, or any one of the first to third implementation manners of the first aspect, in a fourth implementation manner of the first aspect, the determining the target contiguous memory according to the switching probabilities that satisfy the preset condition and the contiguous memory required by each second application scenario whose switching probability satisfies the preset condition may include:

The terminal device determines, from the one or more second application scenarios, the second application scenarios whose switching probability is greater than a threshold, and determines the target contiguous memory according to the contiguous memory required by those second application scenarios.

In the embodiment of the present application, second application scenarios whose switching probability is not greater than the threshold are filtered out, and the target contiguous memory is determined according to the contiguous memory required by the second application scenarios whose switching probability is greater than the threshold. The terminal device then organizes its memory until the available contiguous memory exceeds the target contiguous memory, so that the contiguous memory available on the terminal device better guarantees the contiguous memory required by the second application scenario to be switched to.

With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, if there are multiple second application scenarios whose switching probability is greater than the threshold, the determining, by the terminal device, the target contiguous memory according to the contiguous memory required by the second application scenarios whose switching probability is greater than the threshold may include:

The terminal device performs a weighted operation on the switching probability of each of the second application scenarios whose switching probability is greater than the threshold and the contiguous memory required by that scenario, to obtain the target contiguous memory. The weights of the weighted operation may correspond to the switching probabilities of those second application scenarios; for example, the higher the switching probability, the greater the weight.

In the embodiment of the present application, the switching probability of each second application scenario whose switching probability is greater than the threshold may be weighted together with the contiguous memory it requires, so that the target contiguous memory is closer to the contiguous memory required by the second application scenario the terminal device is most likely to switch to, thereby helping to guarantee the contiguous memory required when the terminal device switches to a second application scenario.
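A minimal sketch of one possible weighted operation, in which each weight is the candidate's switching probability normalized over all candidates above the threshold; the application does not prescribe a specific formula, so this is only an assumption used for illustration.

#include <stddef.h>

typedef struct {
    double probability;    /* switching probability (already greater than the threshold) */
    size_t required_bytes; /* contiguous memory required by the scenario                  */
} candidate_t;

/* Weighted combination: a more likely scenario pulls the target closer to its own
 * requirement, matching the "higher probability, greater weight" idea above. */
static size_t weighted_target(const candidate_t *c, size_t n)
{
    double weight_sum = 0.0, acc = 0.0;

    for (size_t i = 0; i < n; i++)
        weight_sum += c[i].probability;
    if (weight_sum <= 0.0)
        return 0;

    for (size_t i = 0; i < n; i++)
        acc += (c[i].probability / weight_sum) * (double)c[i].required_bytes;

    return (size_t)(acc + 0.5);  /* round to the nearest byte count */
}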

With reference to the fourth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the determining, by the terminal device, the target contiguous memory according to the contiguous memory required by the one or more second application scenarios whose switching probability is greater than the threshold may include:

The terminal device determines, from the second application scenarios whose switching probability is greater than the threshold, the target application scenario that requires the largest contiguous memory, and uses the contiguous memory required by that target application scenario as the target contiguous memory.

In this embodiment, the largest contiguous memory required among the second application scenarios whose switching probability is greater than the threshold may be used as the target contiguous memory, so that the contiguous memory required by every such second application scenario is satisfied, improving the efficiency with which the terminal device switches application scenarios.
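A minimal sketch of this maximum-based alternative; the requirement values are assumed to come from the per-scenario collection described later.

#include <stddef.h>

/* Take the largest contiguous-memory requirement among the candidates whose
 * switching probability exceeds the threshold and use it as the target. */
static size_t max_target(const size_t *required_bytes, size_t n)
{
    size_t target = 0;
    for (size_t i = 0; i < n; i++)
        if (required_bytes[i] > target)
            target = required_bytes[i];
    return target;
}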

With reference to the first aspect of the present application, or any one of the first to sixth implementation manners of the first aspect, in a seventh implementation manner of the first aspect, the method may further include:

When the terminal device switches from the first application scenario to one of the one or more second application scenarios and the contiguous memory available on the terminal device does not meet the contiguous memory required by that second application scenario, the terminal device defragments its memory using a light memory defragmentation algorithm.

In the embodiment of the present application, if the terminal device experiences unexpected contiguous-memory consumption, or the contiguous memory available on the terminal device falls below the target contiguous memory, the terminal device can still perform light memory defragmentation when switching to the second application scenario, quickly freeing available contiguous memory to ensure that the application scenario switch proceeds normally.

A second aspect of the embodiments of the present application provides a terminal device having a function of implementing the method for memory management of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.

A third aspect of the embodiments of the present application provides a terminal device, which may include:

a processor, a memory, a bus, and an input/output interface, where the processor, the memory, and the input/output interface are connected through the bus; the memory is configured to store program code; and, when executing the program code in the memory, the processor performs the steps performed by the terminal device in the first aspect or any implementation manner of the first aspect.

A fourth aspect of the present application provides a storage medium. It should be noted that the part of the technical solution of the present invention that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and holds computer software instructions for use by the above device, including a program for performing the steps designed for the terminal device in any of the foregoing aspects.

The storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.

A fifth aspect of the embodiments of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described in any of the alternative embodiments of the first or second aspect of the present application.

In the embodiment of the present application, if the terminal device is currently running the first application scenario, the terminal device may acquire the switching probability of switching from the first application scenario to each of one or more second application scenarios, and determine the target contiguous memory according to the switching probability of each second application scenario that satisfies the preset condition and the contiguous memory required to start and run that second application scenario. Then, before the terminal device switches from the first application scenario to a second application scenario, the terminal device actively performs memory defragmentation so that the contiguous memory available on the terminal device is greater than the target contiguous memory, guaranteeing the contiguous memory required by the second application scenario to be switched to and improving the efficiency with which the terminal device switches to and runs the second application scenario.

DRAWINGS

FIG. 1 is a schematic diagram of a specific application of an embodiment of the present application;

FIG. 2 is a schematic diagram of memory pages in an embodiment of the present application;

FIG. 3 is a framework diagram of a method for memory management in an embodiment of the present application;

FIG. 4 is a schematic flowchart of a method for memory management in an embodiment of the present application;

FIG. 5 is another schematic flowchart of a method for memory management in an embodiment of the present application;

FIG. 6 is a schematic diagram of the buddy algorithm in an embodiment of the present application;

FIG. 7 is another schematic flowchart of a method for memory management in an embodiment of the present application;

FIG. 8 is a schematic diagram of light memory defragmentation in an embodiment of the present application;

FIG. 9 is a schematic diagram of moderate memory defragmentation in an embodiment of the present application;

FIG. 10 is a schematic diagram of deep memory defragmentation in an embodiment of the present application;

FIG. 11 is a schematic diagram of a specific scenario of a method for memory management in an embodiment of the present application;

FIG. 12 is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present application;

FIG. 13 is a schematic diagram of another embodiment of a terminal device according to an embodiment of the present application.

Detailed description

The present invention provides a memory management method and related device that actively perform memory defragmentation based on application scenarios and predicted contiguous-memory requirements, so as to meet the contiguous-memory requirements of different application scenarios, reduce the waiting time of memory allocation, and improve application running efficiency.

The method for memory management provided by the embodiments of the present application may be applied to a terminal device, which may be a mobile phone, a tablet computer, an in-vehicle mobile device, a PDA (personal digital assistant), a camera, a wearable device, or the like. Of course, the following embodiments impose no limitation on the specific form of the terminal device. The operating system carried by the terminal device may include any of the operating systems shown in the original figures (image PCTCN2019082098-appb-000001), which is not limited in the present application.

Taking a terminal device 100 equipped with such an operating system (image PCTCN2019082098-appb-000002) as an example, as shown in FIG. 1, the terminal device 100 is logically divided into a hardware layer 21, an operating system 161, and an application layer 31. The hardware layer 21 includes hardware resources such as an application processor 101, a microcontroller unit 103, a modem 107, a Wi-Fi module 111, a sensor 114, and a positioning module 150. The application layer 31 includes one or more applications, such as an application 163, which may be any type of application, such as a social application, an e-commerce application, or a browser. The operating system 161, as software middleware between the hardware layer 21 and the application layer 31, is a computer program that manages and controls hardware and software resources.

In one embodiment, operating system 161 includes a kernel 23, a hardware abstraction layer (HAL) 25, a library and runtime 27, and a framework 29. The kernel 23 is used to provide the underlying system components and services, such as: power management, memory management, thread management, hardware drivers, etc.; hardware drivers include Wi-Fi drivers, sensor drivers, positioning module drivers, and the like. The hardware abstraction layer 25 is a wrapper around the kernel driver, providing an interface to the framework 29 to mask the implementation details of the lower layers. The hardware abstraction layer 25 runs in user space, while the kernel driver runs in kernel space.

The library and runtime 27 are also called runtime libraries, which provide the required library files and execution environment for the executable at runtime. The library and runtime 27 includes an Android Runtime (ART) 271 and a library 273. ART 271 is a virtual machine or virtual machine instance that can convert the application's bytecode to machine code. Library 273 is a library that provides support for executable programs at runtime, including browser engines (such as webkits), script execution engines (such as JavaScript engines), graphics processing engines, and the like.

The framework 29 is used to provide various basic common components and services for applications in the application layer 31, such as window management and location management. The framework 29 can include a phone manager 291, a resource manager 293, a location manager 295, and the like.

The functions of the various components of the operating system 161 described above may be implemented by the application processor 101 executing a program stored in the memory 105.

It will be understood by those skilled in the art that the terminal device 100 may include fewer or more components than those shown in FIG. 1; the terminal device shown in FIG. 1 includes only the components most relevant to the implementations disclosed in the embodiments of the present application.

As shown in FIG. 1, a plurality of applications ("apps") can be installed on the terminal device, and the terminal device can switch between multiple applications, or between multiple scenarios of one application, such as different functions or interfaces of that application. When the terminal device switches application scenarios, including switching between multiple applications or between multiple scenarios within one application, memory operations of multiple modules are involved, including memory allocation and memory reading and writing. Contiguous memory is required for each application scenario to start and run.

For example, consider a specific application scenario switch. Applications such as a browser, shopping software, and game software are installed on the terminal device. The terminal device can switch between multiple application scenarios, including switching between the applications installed on the terminal device, or between scenarios within an application, such as functions or user interfaces; for example, switching from a browser to shopping software, from shopping software to camera software, or from the photographing scene of camera software to the photo preview scene. When the terminal device switches application scenarios, its memory also changes, and each application scenario needs memory for its startup and operation. Memory can be divided in several ways. Taking the Linux system as an example, physical memory is divided into fixed-size pages, and the size of one memory page can be 4 KB. A typical application scenario requires contiguous memory, that is, contiguous memory pages. Memory pages can be divided into movable memory pages, non-movable memory pages, and reclaimable pages. A movable memory page can be moved at will, and the data stored in it can be moved to another memory page. The pages occupied by ordinary user-space applications are generally movable pages; because applications are mapped to memory pages through page tables, it is only necessary to copy the data from the original memory page to the target memory page and update the page table entries. A memory page may also be shared by multiple processes and correspond to multiple page table entries. After long-term memory allocation and release, part of physical memory becomes non-movable pages, that is, pages fixed in memory that cannot be moved elsewhere; most pages allocated by the kernel core are non-movable, and the longer the system runs, the more non-movable pages there are. Reclaimable pages cannot be moved directly but can be reclaimed: for example, when an application can rebuild the data on another memory page, the data on the original memory page can be reclaimed. Reclaimable pages are generally reclaimed by the system's preset memory reclamation process; for example, memory occupied by the data of a mapped file may be a reclaimable page, and the kswapd process in the Linux system can periodically reclaim such pages according to preset rules.

When the terminal device performs application scenario switching, contiguous memory is used, and physical memory becomes increasingly occupied as the system runs. For example, as shown in FIG. 2, a piece of memory can contain several kinds of memory pages, including free memory pages, reclaimable memory pages, non-movable memory pages, and movable memory pages. Therefore, to ensure that application scenarios on the terminal device run normally, memory defragmentation is required to obtain contiguous free memory. Memory defragmentation is the process of reducing memory fragmentation; it mainly consists of moving movable memory pages, or reclaiming or removing reclaimable pages, to obtain physically contiguous free memory.

Therefore, to ensure that sufficient contiguous memory is available on the terminal device, the embodiment of the present application improves the operating-system part of the terminal device described above. Specifically, the improvement concerns the memory-related parts of the operating system and the parts required for applications to run, and also involves the application layer inside the terminal device, specifically the application switching process. In addition, the functional modules shown in FIG. 1 are only some of the modules; in actual applications, the terminal device may include multiple modules related to memory, application running, and application switching, which is not limited herein. The specific improvement provided by the method for memory management in the embodiment of the present application includes predicting, before an application scenario switch is performed, the application scenario to be switched to and the contiguous memory it requires, and actively performing memory defragmentation to free sufficient contiguous memory, so that the terminal device can allocate enough contiguous memory to the application scenario when switching. The framework of the method for memory management in the embodiment of the present application is shown in FIG. 3.

The terminal device can predict, by calculation or learning, the target contiguous memory required by the application scenario to be switched to, before the application starts or the application scenario switch is performed. The terminal device then performs memory defragmentation according to the target contiguous memory and frees usable contiguous memory, so that the contiguous memory available on the terminal device is greater than the target contiguous memory. An application scenario in the embodiments of the present application may be an application on the terminal device, or a scenario within an application on the terminal device, such as an application function or a user interface; that is, application scenario switching may be switching between applications on the terminal device or scenario switching within one application, which is not limited herein. Before the terminal device switches application scenarios, contiguous memory is allocated to the application scenario to be switched to, so that the contiguous memory required by that application scenario is satisfied and it can run normally. The memory management method provided by the present application guarantees the contiguous-memory requirement of the application scenario to be switched to: the terminal device predicts the target contiguous memory required by that scenario before performing the switch and then performs memory defragmentation. Therefore, in the embodiment of the present application, the contiguous memory required by the application scenario to be switched to can be prepared before the terminal device performs the switch, guaranteeing that memory and improving the efficiency of application scenario switching.

The flow of the method for memory management in the embodiment of the present application is shown in FIG. 4 and includes the following steps:

401. Acquire a handover probability of switching from the first application scenario to each of the one or more second application scenarios.

One or more applications may be installed on the terminal device, where "multiple" means two or more, and each application can include multiple application scenarios. When the terminal device is running, it can switch among these application scenarios. If the terminal device is currently running the first application scenario, the terminal device may obtain the switching probability of switching from the first application scenario to each second application scenario of the one or more second application scenarios; the switching probability may be obtained from the historical number of application scenario switches on the terminal device or through deep learning. For example, if the terminal device is currently running a browser, that is, the first application scenario, the terminal device may acquire the probability of switching from the browser to each second application scenario, such as a camera, a game, or shopping software; for instance, the probability of switching from the browser to the camera is 15%, to the shopping software 23%, to the game 3%, and so on. As an example of switching between scenarios within an application, if the chat session scene of WeChat is running on the terminal device, the probability of switching to the WeChat red envelope scene may be 30% and the probability of switching to the Moments (circle of friends) scene may be 50%.

In the embodiment of the present application, the first application scenario and the second application scenario may be different applications, or may be different application scenarios within the same or different applications, and this may be adjusted according to actual design requirements, which is not limited herein.

402. Acquire continuous memory required for each second application scenario in one or more second application scenarios.

In addition to acquiring the switching probability of each of the one or more second application scenarios, the terminal device needs to obtain the contiguous memory required by each of those second application scenarios. For example, running a game requires 60 KB of contiguous memory, running a camera requires 500 KB of contiguous memory, and so on. Alternatively, only the contiguous memory required by the second application scenarios whose switching probability is greater than a preset value may be obtained.

It should be understood that, if the contiguous memory required by every second application scenario is obtained, the order of step 401 and step 402 is not limited in the embodiment of the present application: step 401 may be performed first, or step 402 may be performed first.

403. Determine target continuous memory according to a switching probability and a contiguous memory required for each second application scenario.

After determining the probability of switching to each second application scenario, the terminal device determines which switching probabilities satisfy the preset condition, and determines the contiguous memory required by each second application scenario whose switching probability satisfies the preset condition. The terminal device may then calculate the target contiguous memory from those switching probabilities and the corresponding contiguous-memory requirements. The terminal device can perform memory defragmentation before switching to any one of the second application scenarios, so that the contiguous memory available on the terminal device is greater than the target contiguous memory; in this way, the terminal device can allocate sufficient contiguous memory to the application scenario when switching, ensuring the contiguous memory required to switch from the first application scenario to one of the second application scenarios.

The target contiguous memory may be calculated as follows: the terminal device first determines the second application scenarios whose switching probability is greater than the threshold; if there are at least two such second application scenarios, the switching probability of each of them and the contiguous memory it requires are combined by a weighted operation to obtain the target contiguous memory. Alternatively, the terminal device determines the largest contiguous-memory requirement among the second application scenarios whose switching probability is greater than the threshold and uses it as the target contiguous memory. This may be adjusted according to actual design requirements and is not limited herein.

404. Determine whether the contiguous memory available on the terminal device is greater than the target contiguous memory.

After the terminal device determines the target contiguous memory, the terminal device determines whether the available contiguous memory is greater than the target contiguous memory, that is, determines whether the size of the contiguous memory available on the terminal device is greater than the size of the target contiguous memory. If the available contiguous memory is greater than the target contiguous memory, step 405 is performed. If the available contiguous memory is not greater than the target contiguous memory, step 406 is performed. The available contiguous memory is contiguous memory that can be allocated to the second application scenario on the terminal device.

405. Perform other steps.

If the contiguous memory available on the terminal device is larger than the target contiguous memory, the contiguous memory available on the terminal device can guarantee the contiguous memory required to switch from the first application scenario to one of the second application scenarios. At this time, the terminal device may perform memory defragmentation, or may not perform memory defragmentation, and may be adjusted according to actual design requirements, which is not limited herein.

406. Perform memory defragmentation.

If the contiguous memory available on the terminal device is less than or equal to the target contiguous memory, then to ensure the contiguous memory required for switching from the first application scenario to a second application scenario, the terminal device needs to perform memory defragmentation before the switch, so that the contiguous memory available on the terminal device is not less than the target contiguous memory and sufficient contiguous memory can be allocated when switching to the second application scenario. The specific steps of memory defragmentation may be: organizing the memory pages on the terminal device, moving movable pages, or reclaiming reclaimable pages, so as to free contiguous memory; the freed contiguous memory can then be allocated to the switched-to application scenario when the terminal device performs the application scenario switch.
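As one concrete reference point: on Linux kernels built with CONFIG_COMPACTION, user space can explicitly ask the kernel to migrate movable pages into larger contiguous blocks by writing to /proc/sys/vm/compact_memory. The sketch below only illustrates the "moving movable pages" part of defragmentation using that standard interface; it is not the defragmentation implementation claimed by the application, and reclaiming reclaimable pages would be handled separately.

#include <stdio.h>

/* Ask the kernel to compact all zones by migrating movable pages so that
 * larger physically contiguous blocks become available. Requires root and
 * a kernel built with CONFIG_COMPACTION. */
static int request_compaction(void)
{
    FILE *f = fopen("/proc/sys/vm/compact_memory", "w");
    if (!f) {
        perror("compact_memory");
        return -1;
    }
    fputs("1\n", f);   /* any written value triggers full compaction */
    fclose(f);
    return 0;
}

int main(void)
{
    return request_compaction() == 0 ? 0 : 1;
}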

It should be noted that, in the embodiment of the present application, if the contiguous memory available on the terminal device is equal to the target contiguous memory, memory defragmentation may or may not be performed, which may be adjusted according to actual design requirements and is not limited herein.

In the embodiment of the present application, the switching probability of switching from the first application scenario to each second application scenario is first determined, and the target contiguous memory is then calculated from the switching probabilities and the required memory. Memory is then defragmented according to the target contiguous memory so that the contiguous memory available on the terminal device is not less than the target contiguous memory, ensuring that when the terminal device switches from the first application scenario to one of the one or more second application scenarios there is enough contiguous memory available, which improves the efficiency of that switch.

The foregoing describes the flow of the memory management method in the embodiment of the present application. The flow is further described below, with a detailed description of the specific steps of determining the target contiguous memory. Referring to FIG. 5, another embodiment of the method for memory management in the embodiment of the present application includes:

501. The first application scenario is started.

The first application scenario is the application scenario currently running on the terminal device. After the first application scenario has started and is running normally, the terminal device can predict the application scenario to be switched to next and organize enough contiguous memory in advance, so that sufficient contiguous memory is available when the switch occurs, improving the efficiency with which the terminal device switches application scenarios.

502. Switch application scenario data collection.

The terminal device can collect data on application scenario switches, for example, the number of times the terminal device has switched from application scenario A to application scenario B, or from application scenario A to application scenario C, within the past 24 hours.

A specific implementation may be to insert a switch-count variable into the scenario start function of the terminal device and increment the count each time an application scenario switch occurs, for example, switching from the WeChat chat scene to the Moments (circle of friends) scene.

503. Determine an application scenario association relationship.

After the numbers of application scenario switches are collected, the association relationship between application scenarios can be determined from the number of switches between them, and an application scenario association matrix can be generated. The application scenario to be switched to, that is, the probability of switching to each second application scenario, can then be determined from the association matrix. For example, if switches from application scenario A to application scenario B occurred 50 times, from A to C 30 times, and from A to D 20 times, then, when the current application scenario is A (the first application scenario), the probability of switching to application scenario B is 50%, to application scenario C 30%, and to application scenario D 20%. Application scenario B, application scenario C, and application scenario D are the one or more second application scenarios in FIG. 4 above.

For example, in the past 24 hours, the number of switches from application scenario A to application scenario B was 500, and the number from application scenario A to application scenario C was 100. In practice, if the terminal device is currently running application scenario A, the probability of switching from application scenario A to application scenario B is then greater than the probability of switching from application scenario A to application scenario C. The application scenario association matrix can be as shown in Table 1 below:

| From \ To | Application scenario A | Application scenario B | Application scenario C | Application scenario D | Application scenario E | Application scenario F |
| Application scenario A | - | 100 | 10 | 20 | 0 | 1 |
| Application scenario B | 5 | - | 20 | 30 | 50 | 0 |
| Application scenario C | 80 | 20 | - | 0 | 2 | 13 |
| Application scenario D | 10 | 6 | 2 | - | 22 | 6 |
| Application scenario E | 0 | 20 | 0 | 0 | - | 0 |
| Application scenario F | 0 | 0 | 30 | 0 | 0 | - |

Table 1

The application scenario association matrix indicates the number of times the terminal device has switched from one application scenario to another. For example, in the first row of Table 1, the number of switches from application scenario A to application scenario B is 100, to application scenario C is 10, to application scenario D is 20, to application scenario E is 0, and to application scenario F is 1; these counts can be used to calculate the probability of switching from application scenario A to each of the other application scenarios. In this example, the probability of the terminal device switching from application scenario A to application scenario B is greater than the probabilities of switching to application scenario C, D, E, or F. When the application scenario data is collected, the association matrix may be updated each time an application scenario switch occurs, so that application switching on the terminal device is recorded in real time. Specifically, when the original record is updated, it may be replaced by the average of the original record and the new data, or the original record and the new data may be combined by a weighted operation, which is not limited herein.

In addition, when determining the association relationship between application scenarios, the relationship may be updated after every switch, so that the terminal device can predict the application scenario to be switched to from more historical switching data. The more historical switching data there is, the more accurate the terminal device's prediction of the switch becomes. Therefore, by continuously updating the application scenario association relationship, the terminal device improves its prediction accuracy, thereby helping to ensure that the available contiguous memory it frees satisfies the contiguous memory required for the application scenario switch.
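A minimal sketch of maintaining the association matrix described above: each observed switch increments the corresponding counter, and older records can optionally be blended with newly collected data by a weighting factor, matching the average or weighted update mentioned in step 503. The matrix size, the hook names, and the blending factor are illustrative assumptions.

#define NUM_SCENARIOS 6

/* switch_count[from][to] = number of observed switches from one scenario to another. */
static double switch_count[NUM_SCENARIOS][NUM_SCENARIOS];

/* Record one observed switch; assumed to be called from the scenario start hook. */
void record_switch(int from, int to)
{
    if (from >= 0 && from < NUM_SCENARIOS && to >= 0 && to < NUM_SCENARIOS && from != to)
        switch_count[from][to] += 1.0;
}

/* Optionally blend the stored record with freshly collected counts, e.g. from the
 * last 24 hours, using a weighting factor between 0.0 and 1.0. A factor of 0.5 is
 * simply the average of old and new data. */
void merge_recent(const double recent[NUM_SCENARIOS][NUM_SCENARIOS], double weight_new)
{
    for (int i = 0; i < NUM_SCENARIOS; i++)
        for (int j = 0; j < NUM_SCENARIOS; j++)
            switch_count[i][j] = (1.0 - weight_new) * switch_count[i][j]
                               + weight_new * recent[i][j];
}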

504. Application scene continuous memory acquisition.

In addition to collecting the number of application scenario switches and determining the switching probabilities, the contiguous memory required by each application scenario needs to be collected, that is, the contiguous-memory requirement during startup of each application scenario and the contiguous-memory requirement while it is running. Based on these per-scenario requirements, the contiguous memory required by the next application scenario can be identified.

A specific implementation may be to insert a counter into the memory allocation function of the terminal device to accumulate the contiguous memory allocated each time, and to sample the counter each time an application scenario is about to be entered, has finished entering, and exits. The difference between the counts at entry completion and at the point of being about to enter is the contiguous-memory requirement for starting the application scenario, and the difference between the counts at exit and at entry completion is the overall contiguous-memory requirement of the application scenario.

Furthermore, instead of collecting the contiguous memory required by every application scenario, only the contiguous memory of the application scenarios whose probability of being switched to from the current application scenario is greater than the threshold may be collected. For example, if there are 10 application scenarios whose switching probability from the current application scenario is greater than the threshold, only the contiguous memory required by those 10 application scenarios may be collected. The specific collection scope can be adjusted according to design requirements.

It should be understood that, if the application scenario switching data collected in step 502 is not used in the contiguous-memory collection of step 504, the execution order of steps 502 and 504 is not limited in this application: step 502 may be performed first, or step 504 may be performed first, adjusted according to actual design requirements, which is not limited herein.

In the embodiment of the present application, the contiguous memory required by each application scenario may also be recorded and updated after each allocation of contiguous memory. The contiguous memory allocated by the current switch may be combined with the contiguous memory allocated by historical switches through a weighted operation, or the current allocation may simply replace the historical record, adjusted according to the actual scenario, which is not limited herein.
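A minimal sketch of the per-scenario collection described in step 504: the allocation counter is sampled at the "about to enter", "entry complete", and "exit" points, and the differences give the startup and overall requirements. All hooks and structure names are hypothetical placeholders for illustration.

#include <stddef.h>

/* Running total of contiguous memory handed out by the (hypothetical) allocator hook. */
static size_t contig_alloc_total;

/* Assumed to be called from the memory allocation path each time contiguous memory
 * is allocated. */
void on_contig_alloc(size_t bytes)
{
    contig_alloc_total += bytes;
}

/* Per-scenario record of the three sample points described in step 504. */
typedef struct {
    size_t at_ready;     /* counter when the scenario is about to be entered */
    size_t at_entered;   /* counter when entry has completed                 */
    size_t at_exit;      /* counter when the scenario exits                  */
} scenario_samples_t;

/* Startup requirement: memory allocated between "about to enter" and "entry complete". */
size_t startup_requirement(const scenario_samples_t *s)
{
    return s->at_entered - s->at_ready;
}

/* Overall requirement: memory allocated between "entry complete" and "exit". */
size_t overall_requirement(const scenario_samples_t *s)
{
    return s->at_exit - s->at_entered;
}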

505. Determine a continuous memory requirement of the application scenario.

After the contiguous-memory data of the application scenarios is collected, the contiguous-memory requirement of each second application scenario may be determined from the collected data; for example, the contiguous-memory requirement during startup of each second application scenario and its contiguous-memory requirement at runtime together determine the contiguous memory required to switch to and run that second application scenario.

Specifically, in the Linux system, memory is managed by the buddy algorithm: the system kernel arranges the memory pages available for allocation in each zone into linked-list queues by powers of 2, stored in the free_area data structure. A specific example follows; refer to FIG. 6, which is a schematic diagram of the buddy algorithm in the embodiment of the present application. In this example the system memory contains 16 memory pages, memory page 0 to memory page 15, shown as 0-15 in the pages row of FIG. 6. The 16 memory pages are arranged into linked-list queues by powers of 2; since there are only 16 pages, only 4 levels are needed to describe the bitmap of these 16 memory pages, that is, order0 to order3 in FIG. 6. Higher-order contiguous memory can be quickly organized from lower-order contiguous memory, and lower-order contiguous memory can be quickly allocated from higher-order contiguous memory. Therefore, when determining the contiguous-memory requirements of application scenarios, contiguous memory can be expressed and allocated in terms of the buddy algorithm. The specific format is shown in Table 2:

|  | Startup | Maximum |
| Application scenario A | 100, order:2 | 500, order:2 |
| Application scenario B | 200, order:4 | 500, order:4 |
| Application scenario C | 100, order:8 | 1000, order:8 |
| Application scenario D | 0 | 0 |
| Application scenario E | 10, order:2 | 10, order:2 |
| Application scenario F | 100, order:2 | 500, order:2 |

Table 2

That is, application scenario A requires 100 order-2 memory pages at startup and 500 order-2 memory pages when running normally; application scenario B requires 200 order-4 pages at startup and 500 order-4 pages when running normally; application scenario C requires 100 order-8 pages at startup and 1000 order-8 pages when running normally; application scenario D requires 0 memory pages at startup and 0 when running normally; application scenario E requires 10 order-2 pages at startup and 10 order-2 pages when running normally; application scenario F requires 100 order-2 pages at startup and 500 order-2 pages when running normally; and so on for the other application scenarios.
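For reference, the buddy allocator's per-order free-block counts are exposed on Linux through /proc/buddyinfo (one line per zone, one column per order). The sketch below sums the free memory held in blocks of at least a given order, which is one way a user-space component could estimate whether requirements expressed as "N pages of order k", as in Table 2, can currently be met. The page size and maximum order are assumed typical values, and this parsing helper is an illustration rather than part of the claimed method.

#include <stdio.h>
#include <string.h>

#define MAX_ORDERS 11          /* typical MAX_ORDER on Linux; may differ per kernel */
#define PAGE_BYTES 4096UL      /* assumed 4 KB pages */

/* Sum free memory (bytes) held in buddy blocks of order >= min_order across all zones. */
static unsigned long free_bytes_at_least(int min_order)
{
    FILE *f = fopen("/proc/buddyinfo", "r");
    char line[512];
    unsigned long total = 0;

    if (!f)
        return 0;
    while (fgets(line, sizeof line, f)) {
        /* Line format: "Node 0, zone   Normal   c0 c1 c2 ..." (one count per order) */
        char *p = strstr(line, "zone");
        if (!p)
            continue;
        p += 4;
        while (*p == ' ')        p++;   /* skip spaces before the zone name */
        while (*p && *p != ' ')  p++;   /* skip the zone name itself        */

        unsigned long count;
        int order = 0, consumed;
        while (order < MAX_ORDERS && sscanf(p, "%lu%n", &count, &consumed) == 1) {
            if (order >= min_order)
                total += count * (PAGE_BYTES << order);
            p += consumed;
            order++;
        }
    }
    fclose(f);
    return total;
}

int main(void)
{
    /* Example: how much memory is free in blocks of at least order 2 (>= 16 KB each). */
    printf("free in >= order-2 blocks: %lu bytes\n", free_bytes_at_least(2));
    return 0;
}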

506. Predict the target contiguous memory.

After the switching probability of each second application scenario and the contiguous memory required by each second application scenario are determined, the second application scenario to be switched to may be predicted, and at the same time the required target contiguous memory may be predicted. The application scenario correlation matrix can be used to determine the switching probability of switching from the current application scenario to each of the other application scenarios, and a threshold can be set to filter out application scenarios whose switching probability is below the threshold. For example, if the switching probability of application scenario A is less than 10%, application scenario A is filtered out.

The specific step of determining the target contiguous memory may be: first filtering out, from the one or more second application scenarios, the second application scenarios whose switching probability is not greater than the threshold; then performing a weighting operation on the switching probabilities of the remaining second application scenarios (those whose switching probability is greater than the threshold) and the contiguous memory they require, to obtain the target contiguous memory. Specifically, the weight used in the weighting operation may correspond to the switching probability of each second application scenario. For example, an application scenario with a larger switching probability may be given a larger weight, so that the obtained target contiguous memory is closer to the contiguous memory required by the second application scenario with the higher switching probability. Alternatively, the largest contiguous memory required among the second application scenarios whose switching probability is greater than the threshold may be used as the target contiguous memory, or the target contiguous memory may be obtained by other algorithms, which may be adjusted according to actual device requirements and is not limited herein.
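A minimal sketch of the two options named above (probability-weighted average and maximum requirement) follows, assuming probabilities are expressed in percent and requirements in pages; all structure and function names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Candidate second application scenario whose switching probability already
 * exceeded the threshold; names are illustrative. */
struct candidate {
    uint32_t prob_pct;        /* switching probability, in percent         */
    uint32_t needed_pages;    /* contiguous pages the scenario requires    */
};

/* One possible weighting: use each candidate's switching probability as its
 * weight, so the more likely scenarios dominate the target. */
static uint32_t weighted_target_pages(const struct candidate *c, size_t n)
{
    uint64_t weighted = 0, total_weight = 0;
    for (size_t i = 0; i < n; i++) {
        weighted += (uint64_t)c[i].prob_pct * c[i].needed_pages;
        total_weight += c[i].prob_pct;
    }
    return total_weight ? (uint32_t)(weighted / total_weight) : 0;
}

/* Alternative mentioned in the text: take the maximum requirement. */
static uint32_t max_target_pages(const struct candidate *c, size_t n)
{
    uint32_t max = 0;
    for (size_t i = 0; i < n; i++)
        if (c[i].needed_pages > max)
            max = c[i].needed_pages;
    return max;
}
```

The weighted variant under-provisions slightly for the less likely scenarios, while the maximum variant is conservative; either satisfies the role of "target contiguous memory" described here.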

507. Start memory defragmentation.

After the target contiguous memory is determined, if the contiguous memory available on the terminal device is not greater than the target contiguous memory, memory defragmentation may be actively performed, so that before the terminal device switches to a second application scenario, the contiguous memory available on the terminal device is greater than the target contiguous memory. The specific defragmentation method is explained in detail in the embodiment of FIG. 7 below.

In the embodiment of the present application, the target contiguous memory required to switch to a second application scenario is predicted in advance, and memory defragmentation is performed in advance so that the contiguous memory available on the terminal device is larger than the target contiguous memory. Therefore, when the terminal device switches to the second application scenario, the contiguous memory can be used directly to start and run the second application scenario, reducing the waiting time for the terminal device to switch to the second application scenario and improving the efficiency with which the terminal device switches application scenarios.

The foregoing describes the specific steps of determining the target contiguous memory in the memory management method in the embodiment of the present application. In the memory management method provided by the present application, in addition to predicting the target contiguous memory and performing memory defragmentation before the application scenario is switched, the embodiment of the present application also improves the specific memory defragmentation algorithm, so as to further improve defragmentation efficiency without affecting the applications or processes running on the terminal device: memory defragmentation can be performed with dynamic adjustment. The following describes in detail the steps of performing memory defragmentation in the memory management method in the embodiment of the present application. Referring to FIG. 7, a schematic diagram of another embodiment of the method for memory management in the embodiment of the present application may include:

701. Start memory defragmentation.

After the terminal device determines the target contiguous memory by prediction, and before it switches to a certain second application scenario, the terminal device may start memory defragmentation to sort out available contiguous memory. This is described in detail in the following steps 702 to 708.

702. Calculate the currently available contiguous memory. If the target contiguous memory is satisfied, go to step 703. If the target contiguous memory is not met, go to step 704.

After determining the target contiguous memory, the terminal device can calculate the currently available contiguous memory, that is, the contiguous memory that the terminal device can currently allocate to the second application scenario. Specifically, in the Linux system, all available contiguous memory on the current terminal device can be obtained from the buddy system, and it is determined whether the available contiguous memory satisfies the target contiguous memory.
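In user space, one rough way to approximate this query is to read /proc/buddyinfo, which lists the number of free blocks per order for each zone; the sketch below sums the pages held in blocks of at least a given order. A kernel-side implementation would consult the zone free_area lists directly, so this is a simplified illustration rather than the patent's implementation.

```c
#include <stdio.h>
#include <string.h>

/* Rough user-space estimate of free contiguous memory at or above a given
 * order, read from /proc/buddyinfo (per-zone free-block counts per order). */
static long free_pages_at_or_above(int min_order)
{
    FILE *fp = fopen("/proc/buddyinfo", "r");
    if (!fp)
        return -1;

    char line[512];
    long pages = 0;
    while (fgets(line, sizeof(line), fp)) {
        char *p = strstr(line, "zone");
        if (!p)
            continue;
        p += 4;                        /* skip the word "zone"          */
        while (*p == ' ')
            p++;
        while (*p && *p != ' ')
            p++;                       /* skip the zone name            */
        long count;
        int order = 0, consumed;
        while (sscanf(p, "%ld%n", &count, &consumed) == 1) {
            if (order >= min_order)
                pages += count << order;   /* each block is 2^order pages */
            p += consumed;
            order++;
        }
    }
    fclose(fp);
    return pages;
}
```

Comparing the returned page count against the target contiguous memory gives the branch decision between step 703 and step 704.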

If the available contiguous memory on the terminal device does not satisfy the target contiguous memory, step 704 is performed to quickly defragment the memory so that the available contiguous memory on the terminal device is no longer smaller than the target contiguous memory. If the available contiguous memory on the terminal device is not less than the target contiguous memory, the terminal device may calculate the non-movable page dense areas, that is, perform step 703.

703. Calculate a non-movable page dense area.

When the terminal device starts memory defragmentation, or when the available contiguous memory on the terminal device satisfies the target contiguous memory, the non-movable page dense areas can be calculated. Within a preset unit range, if the number of non-movable pages exceeds a density threshold, that unit range is considered a non-movable page dense area. For example, if there are more than 100 non-movable pages within 1024 pages, those 1024 pages may be considered a non-movable page dense area. Step 704 may be performed when the non-movable page dense areas exceed a preset value; when they do not exceed the preset value, memory defragmentation can be stopped.
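A toy sketch of this density check follows, assuming a 1024-page unit and a threshold of 100 unmovable pages as in the example above; the per-page flag array stands in for the kernel's per-page migrate-type information and is purely illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

#define UNIT_PAGES      1024  /* size of the preset unit range, as in the text */
#define DENSE_THRESHOLD  100  /* more than this many unmovable pages => dense  */

/* Abstract per-page flag; in the kernel this information would come from the
 * page's migrate type. Names here are purely illustrative. */
static bool page_is_unmovable(const unsigned char *page_flags, size_t idx)
{
    return page_flags[idx] != 0;
}

/* Decide whether one 1024-page unit counts as a non-movable page dense area. */
static bool unit_is_dense(const unsigned char *page_flags, size_t unit_start)
{
    size_t unmovable = 0;
    for (size_t i = 0; i < UNIT_PAGES; i++)
        if (page_is_unmovable(page_flags, unit_start + i))
            unmovable++;
    return unmovable > DENSE_THRESHOLD;
}
```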

It should be noted that, in the embodiment of the present application, the non-movable page dense areas may or may not be calculated; that is, step 703 may be an optional step. In practical applications, even when the terminal device has not calculated the target contiguous memory, the non-movable page dense areas can still be calculated directly. If the non-movable page dense areas exceed the preset value, contiguous memory can be quickly defragmented, that is, memory defragmentation is performed with the light memory defragmentation algorithm, which is described in detail in step 704. If the non-movable page dense areas do not exceed the preset value, memory defragmentation may still be performed with the light memory defragmentation algorithm, or memory defragmentation may be skipped, which may be adjusted according to actual design requirements and is not limited here.

Specifically, when the terminal device performs memory defragmentation, if non-movable pages are simply skipped, then after the system has been running for a long time the number of non-movable pages grows and the degree of memory fragmentation increases greatly, so the success rate of assembling large contiguous memory drops, memory defragmentation and memory allocation slow down, and the operating efficiency of the terminal device decreases. Therefore, in the embodiment of the present application, the non-movable page dense areas are calculated, and the subsequent memory defragmentation also covers areas that contain non-movable pages, as specifically described in steps 707 and 708 below. By defragmenting areas that contain non-movable pages, the drop in defragmentation efficiency and success rate caused by the growth of non-movable pages after long system operation can be avoided, thereby improving the efficiency and success rate of memory defragmentation on the terminal device.

704. Quickly defragment contiguous memory.

When the available contiguous memory on the terminal device does not satisfy the target contiguous memory, or the non-movable page dense areas on the terminal device exceed a preset value, the terminal device quickly defragments contiguous memory. This includes performing memory defragmentation with the light memory defragmentation algorithm, which defragments the movable page areas. Specifically, the memory pages before and after defragmentation by the light memory defragmentation algorithm are as shown in FIG. 8, where a movable page area is a preset unit range of memory pages that contains no non-movable pages. For example, if 1024 pages contain no non-movable page, those 1024 pages may be considered to belong to a movable page area. Performing memory defragmentation with the light memory defragmentation algorithm means defragmenting the movable page areas: the movable pages in a movable page area are moved together into one stretch of contiguous memory, so that the free pages form contiguous memory. For example, if the memory pages at addresses 0001-0100 in a movable page area include 20 non-contiguous movable pages, those 20 movable pages can be moved together to 0001-0020, so that the memory pages after address 0020 are free pages, yielding free contiguous memory. In this way, free contiguous memory can be quickly sorted out on the terminal device by the light memory defragmentation algorithm, ensuring the contiguous memory required for the terminal device to switch application scenarios.
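The following toy compaction routine in C mirrors the 0001-0100 example: movable pages are packed to the front of a movable page area so that the tail becomes one free contiguous run. It is a user-space model for illustration only, not kernel page migration.

```c
#include <stddef.h>

/* Toy model of a movable page area: slot[i] is nonzero when page i holds
 * movable data, zero when it is free.  Light defragmentation packs all
 * movable pages to the front of the area so the tail becomes one free
 * contiguous run. Returns the length of that free run. */
static size_t compact_movable_area(int *slot, size_t npages)
{
    size_t dst = 0;
    for (size_t src = 0; src < npages; src++) {
        if (slot[src]) {
            if (src != dst) {
                slot[dst] = slot[src];  /* "migrate" the movable page */
                slot[src] = 0;
            }
            dst++;
        }
    }
    return npages - dst;   /* free contiguous pages now at the tail */
}
```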

In practical applications, available contiguous memory can be quickly sorted out by the light memory defragmentation algorithm to ensure that more contiguous memory can be allocated when the terminal device switches applications. For example, if the terminal device is currently running application scenario A, and the currently available contiguous memory on the terminal device does not satisfy application scenario A, or the non-movable page dense areas on the terminal device exceed the preset value, the terminal device may quickly defragment contiguous memory, that is, quickly defragment the movable page areas and quickly sort out available contiguous memory. This avoids the situation where the terminal device suddenly switches application scenarios and contiguous memory is insufficient, thereby improving the efficiency and reliability of switching application scenarios. If the non-movable page dense areas exceed the preset value, the number of non-movable pages on the terminal device has grown, and contiguous memory can be quickly defragmented so that more contiguous memory on the terminal device can be allocated to application scenarios.

705. Obtain a system load.

After the fast defragmentation of contiguous memory, the contiguous memory available on the terminal device is increased, which prevents contiguous memory from being insufficient when the terminal device suddenly switches application scenarios. If the contiguous memory available on the terminal device still does not meet the target contiguous memory, or in order to further increase the available contiguous memory on the terminal device, memory fragments can be defragmented further. Specifically, the memory defragmentation algorithm can be dynamically adjusted according to the range in which the system load of the terminal device falls, so as to use the resources of the terminal device reasonably and reduce the impact on the application scenarios running on the terminal device. The system load indicates how busy the system of the terminal device is, and may be a measure of the processes that are running or waiting to run per unit time on the terminal device; for example, the system load can be the average number of processes in the run queue of the terminal device per unit time. Specifically, in the Linux system, the system load of the terminal device can be queried with preset query instructions, such as the uptime or top commands.

The system load of the terminal device can usually be expressed by the occupancy rate of the central processing unit (CPU) of the terminal device or by the input/output (I/O) throughput. Specifically, the system load of the terminal device may be determined by reading the CPU or I/O nodes in the system. The terminal device can then adjust dynamically according to the system load, that is, dynamically adjust the memory defragmentation algorithm and perform memory defragmentation hierarchically, improving the efficiency of memory defragmentation on the terminal device and reducing its impact on the application scenarios running on the terminal device. Specifically, if the system load is in a first preset range, the terminal device determines that the memory defragmentation algorithm is a deep memory defragmentation algorithm, that is, performs step 708; if the system load is in a second preset range, the terminal device determines that the memory defragmentation algorithm is a medium memory defragmentation algorithm, that is, performs step 707; or, if the system load is in a third preset range, the terminal device determines that the memory defragmentation algorithm is a light memory defragmentation algorithm, that is, performs step 706.

The specific hierarchical memory defragmentation policy is shown in Table 3:

System load | Defragmentation algorithm | Areas defragmented
< 20% | Deep memory defragmentation | Non-movable page dense areas, non-movable page normal areas, movable page areas
20%-40% | Medium memory defragmentation | Non-movable page normal areas, movable page areas
40%-60% | Light memory defragmentation | Movable page areas
> 60% | No memory defragmentation | None

Table 3

According to Table 3, specifically: when the system load is below 20%, the system load is low, and the deep memory defragmentation algorithm does not affect the application scenario currently running on the terminal device or other running application scenarios; the algorithm performs memory defragmentation on the non-movable page dense areas, the non-movable page normal areas, and the movable page areas. When the system load is between 20% and 40%, the terminal device performs the medium memory defragmentation algorithm, which, compared with the deep memory defragmentation algorithm, omits defragmentation of the non-movable page dense areas, so as to reduce the load placed on the system during defragmentation and avoid affecting the running efficiency of the currently running application scenario or other application scenarios on the terminal device. When the system load is between 40% and 60%, the system is relatively busy, so only light memory defragmentation is performed and only the movable page areas are defragmented, to reduce the impact of memory defragmentation on the application scenario currently running on the terminal device or on other application scenarios. When the system load exceeds 60%, the system of the terminal device is busy, and memory defragmentation is not performed, to avoid affecting the application scenarios running on the terminal device.
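As a sketch of this load-based dispatch, the snippet below reads the 1-minute load average from /proc/loadavg (the same figure reported by uptime or top) and maps a CPU busy percentage onto the example ranges of Table 3. The percentage source, the enum, and the function names are assumptions made for illustration, and the ranges are only the example values used here.

```c
#include <stdio.h>

enum defrag_level { DEFRAG_NONE, DEFRAG_LIGHT, DEFRAG_MEDIUM, DEFRAG_DEEP };

/* Read the 1-minute load average from /proc/loadavg; one way of sampling the
 * "system load" mentioned in the text. */
static double read_loadavg_1min(void)
{
    double load = -1.0;
    FILE *fp = fopen("/proc/loadavg", "r");
    if (fp) {
        if (fscanf(fp, "%lf", &load) != 1)
            load = -1.0;
        fclose(fp);
    }
    return load;
}

/* Map a CPU busy percentage to a defragmentation tier, using the example
 * ranges of Table 3 (<20%, 20-40%, 40-60%, >60%). */
static enum defrag_level pick_defrag_level(double cpu_busy_pct)
{
    if (cpu_busy_pct < 20.0) return DEFRAG_DEEP;
    if (cpu_busy_pct < 40.0) return DEFRAG_MEDIUM;
    if (cpu_busy_pct < 60.0) return DEFRAG_LIGHT;
    return DEFRAG_NONE;
}

int main(void)
{
    printf("1-minute load average: %.2f\n", read_loadavg_1min());
    printf("tier for 35%% CPU busy: %d\n", pick_defrag_level(35.0));
    return 0;
}
```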

It should be noted that, besides the first preset range being <20%, the second preset range being 20%-40%, and the third preset range being 40%-60%, the first, second, and third preset ranges may also take other values, which may be adjusted according to actual design requirements and are not limited herein.

In the embodiment of the present application, different memory defragmentation algorithms are chosen according to the system load, which reduces the impact on the application scenario currently running on the terminal device or on other application scenarios, so that the application scenarios on the terminal device can run normally while available contiguous memory is sorted out, improving the efficiency of the terminal device when it switches application scenarios.

706. Perform a light memory defragmentation algorithm.

When the system load on the terminal device is in the third preset range, the terminal device performs the light memory defragmentation algorithm to defragment the movable page areas. The defragmentation step is similar to the fast defragmentation of contiguous memory in the foregoing step 704, and details are not described herein.

In the embodiment of the present application, when the system load is in the third preset range, the light memory defragmentation algorithm is executed. The third preset range corresponds to a relatively high system load, so executing only the light algorithm avoids affecting the other application scenarios running on the terminal device.

707. Perform a medium memory defragmentation algorithm.

When the system load on the terminal device is in the second preset range, the medium memory defragmentation algorithm is executed to perform memory defragmentation on the non-movable page normal areas and the movable page areas. The method of defragmenting the movable page areas is similar to the light memory defragmentation algorithm used in the fast defragmentation of contiguous memory in the foregoing step 704, and details are not described herein. Within a preset unit range, if the number of non-movable pages is greater than 0 but does not exceed the density threshold, that unit range is considered a non-movable page normal area. For example, if 1024 pages contain more than 0 but no more than 100 non-movable pages, those 1024 pages may be considered to belong to a non-movable page normal area. For the memory of a non-movable page normal area, as shown in FIG. 9, the movable pages in the area are rearranged so that they occupy contiguous memory pages, thereby freeing up contiguous pages. The movable pages can be moved into free contiguous memory, or moved into contiguous memory adjacent to the non-movable pages, which may be adjusted according to actual design requirements and is not limited herein.

Therefore, in the embodiment of the present application, when the system load of the terminal device is in the second preset range, that is, when the terminal device is under a moderate system load, the medium memory defragmentation algorithm may be performed, defragmenting only the non-movable page normal areas and the movable page areas. This matches the moderate system load of the terminal device, improves the operating efficiency of the terminal device, and allows available contiguous memory to be sorted out in advance.

In the embodiment of the present application, areas containing non-movable pages are also defragmented, to prevent the efficiency and success rate of memory defragmentation from being reduced by the growth of non-movable pages after the system has run for a long time, thereby improving the efficiency and success rate of memory defragmentation on the terminal device.

708. Perform a deep memory defragmentation algorithm.

When the system load on the terminal device is in the first preset range, the terminal device may perform the deep memory defragmentation algorithm, which performs memory defragmentation on the non-movable page dense areas, the non-movable page normal areas, and the movable page areas. The memory defragmentation of the movable page areas is similar to the light memory defragmentation algorithm used in the fast defragmentation of contiguous memory in the foregoing step 704, and the memory defragmentation of the non-movable page normal areas is similar to the medium memory defragmentation algorithm in the foregoing step 707; details are not described herein. The memory defragmentation of a non-movable page dense area may be as shown in FIG. 10: the movable pages of the non-movable page dense area may be moved into the free memory of that dense area to increase the available contiguous memory on the terminal device. Specifically, the movable pages of the non-movable page dense area can be moved into the free memory pages that lie between the non-movable pages. When the movable page areas, the non-movable page normal areas, and the non-movable page dense areas are defragmented together, the movable pages can be moved into the free memory pages between the non-movable pages so that more free memory pages are sorted out. Defragmenting the non-movable page normal areas and the non-movable page dense areas avoids the drop in defragmentation efficiency and success rate caused by the growth of non-movable pages after the system has run for a long time, reduces the severity of memory fragmentation on the terminal device after long operation, and improves the efficiency and success rate of memory defragmentation on the terminal device.
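A toy model of the gap-filling step inside a dense area follows: movable pages taken from the tail of the area are migrated into the free slots between unmovable pages. The page-kind array and the two-scanner approach are illustrative assumptions, not the kernel's compaction code.

```c
#include <stddef.h>

enum page_kind { PAGE_FREE = 0, PAGE_MOVABLE, PAGE_UNMOVABLE };

/* Migrate movable pages into the free slots that sit between unmovable pages
 * of a dense area, so that pages freed elsewhere can form larger contiguous
 * runs. Returns the number of pages moved. */
static size_t fill_gaps_in_dense_area(enum page_kind *area, size_t npages)
{
    size_t moved = 0;
    size_t dst = 0;          /* scans forward for free gaps               */
    size_t src = npages;     /* scans backward for movable pages to move  */

    while (1) {
        while (dst < npages && area[dst] != PAGE_FREE)
            dst++;
        while (src > 0 && area[src - 1] != PAGE_MOVABLE)
            src--;
        if (dst >= npages || src == 0 || src - 1 <= dst)
            break;                      /* scanners met: nothing left */
        area[dst] = PAGE_MOVABLE;       /* "migrate" the page into the gap */
        area[src - 1] = PAGE_FREE;
        moved++;
    }
    return moved;
}
```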

In addition, in the embodiment of the present application, when the terminal device switches from the first application scenario to the second application scenario, if the available contiguous memory on the terminal device is insufficient to start or run the second application scenario, the terminal device may perform fast memory defragmentation, for example by executing the light memory defragmentation algorithm, so that the available contiguous memory on the terminal device satisfies the contiguous memory required by the second application scenario. For example, the terminal device is currently running the first application scenario; after predicting the second application scenario to be switched to and obtaining the target contiguous memory, the terminal device needs to perform memory defragmentation because contiguous memory is insufficient. The terminal device can then also enable the light memory defragmentation algorithm to quickly sort out available memory, ensuring that the second application scenario can be started normally.

In the embodiment of the present application, after the target contiguous memory is determined by prediction, memory defragmentation is performed to sort out available contiguous memory that is not less than the target contiguous memory, so as to ensure that the terminal device can switch application scenarios normally. When performing memory defragmentation, fast memory defragmentation is performed first, so that available contiguous memory is obtained quickly and the contiguous memory requirement at switching time is guaranteed. Further, after the fast memory defragmentation, if the second application scenario has not yet been switched to, the terminal device may continue memory defragmentation according to the system load, dynamically adjusting the memory defragmentation algorithm based on the system load of the terminal device, to further increase the available contiguous memory on the terminal device. This avoids the situation where, after the terminal device has run for a long time, non-movable pages accumulate, memory fragmentation becomes severe, the success rate of assembling contiguous memory drops, and memory allocation slows down. The resources of the terminal device are used reasonably, the impact on the application scenarios running on the terminal device is reduced, and the efficiency and reliability of switching application scenarios on the terminal device are improved.

The foregoing describes the method for memory management provided by the present application. Specifically, the terminal device in the embodiment of the present application may be a smart phone, a tablet computer, a mobile device, a PDA (Personal Digital Assistant), a camera, various wearable devices, or the like, which is not limited herein. The following takes a specific application scenario in the terminal device as an example for further explanation.

Please refer to FIG. 11, which is a schematic diagram of a specific switching scenario of the method for memory management in the embodiment of the present application. For example, the terminal device is a smart phone in which multiple applications are installed, including WeChat and a camera. When using the smart phone, the user may switch from WeChat to the camera to take a photo. When the terminal device switches from WeChat to the camera, the camera is first opened, the camera preview is then entered, and the photo is then taken. A large amount of contiguous memory is used in the camera preview scene and the camera photographing scene. If memory cleaning is only triggered when memory falls below a certain threshold, then contiguous memory may be insufficient at the moment it is allocated to the camera preview scene and the camera photographing scene; performing memory defragmentation at that point means waiting a long time for memory to be allocated, causing the terminal device to stall and degrading the user experience.

Therefore, in order to improve the operating efficiency of the smart phone, the specific steps of the method for memory management provided by the present application may include:

When the smart phone is currently running WeChat, the terminal device collects the number of times it has switched from WeChat to other application scenarios, and thereby obtains the number of times it has switched from WeChat to the camera. The specific collection method may be to record each switch from WeChat to another application scenario; for example, the number of switches from WeChat to the camera is 100, and the number of switches from WeChat to the application market is 2. Then the contiguous memory required for starting and running each application or application scenario on the smartphone is collected separately, including the contiguous memory required for the camera preview scene and the camera photographing scene to run.

The specific collection method may be: insert a counting variable into the memory allocation function of the smartphone to count the contiguous memory allocated by each allocation, and read the count each time the camera is about to be entered, has finished being entered, and is exited. The difference between the counts collected when entry completes and when entry begins is the contiguous memory required to start the camera; the difference between the counts at exit and at entry completion is the overall contiguous memory requirement of the camera.
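A minimal sketch of such instrumentation is shown below. The text only says that a counting variable is inserted into the allocation function, so the hook, the snapshot structure, and all names are assumptions made for illustration.

```c
#include <stdint.h>

/* Illustrative global counter incremented inside a (hypothetical) memory
 * allocation path. */
static uint64_t contig_pages_allocated;

static void account_allocation(unsigned int order)
{
    contig_pages_allocated += 1ULL << order;   /* 2^order pages per allocation */
}

/* Snapshots taken at the instrumentation points named in the text. */
struct camera_samples {
    uint64_t at_enter_begin;     /* just before the camera starts to open  */
    uint64_t at_enter_complete;  /* once the camera has finished starting  */
    uint64_t at_exit;            /* when the camera is exited              */
};

/* Startup requirement = allocations made while the camera was starting. */
static uint64_t camera_startup_pages(const struct camera_samples *s)
{
    return s->at_enter_complete - s->at_enter_begin;
}

/* Overall requirement = allocations made between startup completion and exit. */
static uint64_t camera_overall_pages(const struct camera_samples *s)
{
    return s->at_exit - s->at_enter_complete;
}
```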

When the number of switches from WeChat to other applications and application scenarios is collected, the switch counts from WeChat to those applications or application scenarios can be updated at the same time, so that camera switching information can continue to be collected later. After the contiguous memory of the camera is acquired, the contiguous memory required by the camera can also be updated, so that the smartphone determines the contiguous memory requirement of the camera from both the historical data and the newly collected data.

After the switching information of the camera and the contiguous memory information of the camera are determined, the probability of switching from WeChat to each of the other applications or application scenarios may be determined; for example, the probability of switching from WeChat to the camera may be determined to be 90%, so the smartphone can predict that it is about to switch to the camera scene. For predicting the probability of camera activation, the more samples of camera launches the smartphone has, the more accurate the predicted probability and the higher the prediction efficiency. For example, with more than 100,000 samples, the probability of starting the camera can already be predicted when WeChat is entered; with only one sample, the prediction can only be made just before the camera is entered. For the camera's contiguous memory requirement, only one sample is needed to make a prediction.

After the smartphone predicts that it will switch to the camera scene, it identifies the contiguous memory required for the camera to start and run, including the contiguous memory required for the camera preview and for photographing, and then starts memory defragmentation. First, the currently available contiguous memory on the smartphone is calculated. If the currently available contiguous memory on the smartphone is not greater than the contiguous memory required for the camera to start and run, the smartphone can quickly defragment memory. If the currently available contiguous memory on the smartphone is greater than the contiguous memory required for the camera to start and run, or if the smartphone does not compare the available contiguous memory with the contiguous memory required by the camera, the smartphone can calculate the current non-movable page dense areas. If the current non-movable page dense areas indicate that memory fragmentation on the smartphone is serious, the smartphone can also perform the subsequent memory defragmentation steps, first performing fast memory defragmentation. Fast memory defragmentation can be carried out by executing the light memory defragmentation algorithm on the smartphone: first quickly sorting out the contiguous memory required for the camera preview scene, and then sorting out the contiguous memory required for the camera photographing scene. After the fast memory defragmentation is completed, the system load of the smartphone can be obtained continuously, and the memory defragmentation algorithm is then adjusted dynamically according to the system load of the smartphone. For example, when the system load is less than 20%, the deep memory defragmentation algorithm is executed, defragmenting the non-movable page dense areas, the non-movable page normal areas, and the movable page areas; the specific defragmentation algorithm is similar to the foregoing steps 706 to 708 in FIG. 7 and is not described again here. If the system load is between 20% and 40%, the medium memory defragmentation algorithm can be executed to defragment the non-movable page normal areas and the movable page areas. If the system load is between 40% and 60%, the light memory defragmentation algorithm can be executed to defragment the movable page areas. If the system load is greater than 60%, memory defragmentation may be skipped to avoid affecting the applications running on the smartphone.

The foregoing describes the method for memory management provided in the embodiment of the present application. In addition, the embodiment of the present application further provides a terminal device for performing the foregoing method for memory management. Referring to FIG. 12, a schematic diagram of an embodiment of the terminal device in the embodiment of the present application may include:

The data collection module 1201 is configured to acquire the switching probability of switching from the first application scenario to each second application scenario in one or more second application scenarios, where the first application scenario is the application scenario currently run by the terminal device. Specifically, it can be used to implement the specific steps of step 401 in the foregoing embodiment of FIG. 4;

The contiguous memory requirement identification module 1202 is configured to determine the target contiguous memory according to the switching probabilities that meet the preset condition and the contiguous memory required by each second application scenario, among the one or more second application scenarios, whose switching probability meets the preset condition. It may be used to implement the specific steps of step 403 in the foregoing embodiment of FIG. 4;

The active memory defragmentation module 1203 is configured to, if the contiguous memory available on the terminal device is not greater than the target contiguous memory, perform memory defragmentation according to the target contiguous memory before the terminal device switches from the first application scenario to any second application scenario in the one or more second application scenarios, so that the contiguous memory available on the terminal device is greater than the target contiguous memory. It may be used to implement the specific steps of step 406 in the foregoing embodiment of FIG. 4.

In some possible implementations, the active memory defragmentation module 1203 is specifically configured to:

Determine the memory defragmentation algorithm based on the system load;

Performing memory defragmentation according to the memory defragmentation algorithm and the target contiguous memory;

Specifically, it can be used to implement the specific steps in step 705 and related steps in the foregoing FIG. 7 embodiment.

In some possible implementations, the active memory defragmentation module 1203 is specifically configured to:

If the system load is in the first preset range, determining that the memory defragmentation algorithm is a deep memory defragmentation algorithm;

If the system load is in the second preset range, determining that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or

If the system load is in the third preset range, determining that the memory defragmentation algorithm is a mild memory defragmentation algorithm;

Specifically, it can be used to implement the specific steps in steps 705 to 708 in the foregoing embodiment of FIG. 7.

In some possible implementations, the data collection module 1201 is specifically configured to:

Obtaining, from the first application scenario, the number of historical switches of each of the one or more second application scenarios;

Determining, according to the number of historical handovers, a handover probability of switching from the first application scenario to each of the one or more second application scenarios;

Specifically, it can be used to implement the specific steps in step 502 in the foregoing embodiment of FIG. 5.

In some possible implementations, the contiguous memory requirement identification module 1202 is specifically configured to:

Determining, from the one or more second application scenarios, the second application scenario that the switching probability is greater than a threshold;

The target contiguous memory is determined according to the contiguous memory required by the second application scenario whose switching probability is greater than the threshold, and may be used to implement the specific steps in step 506 in the foregoing FIG. 5 embodiment.

In some possible implementations, the contiguous memory requirement identification module 1202 is specifically configured to:

If there are multiple application scenarios in which the switching probability is greater than the threshold, the switching probability of each second application scenario and the required contiguous memory in the second application scenario with the multiple switching probabilities being greater than the threshold are weighted to obtain the The target continuous memory may be specifically used to implement the specific steps in step 506 in the foregoing embodiment of FIG. 5.

In some possible implementations, the contiguous memory requirement identification module 1202 is specifically configured to:

Determining, by the terminal device, the target application scenario that requires the largest continuous memory from the second application scenario in which the one or more handover probabilities are greater than a threshold;

The terminal device uses the contiguous memory required by the target application scenario as the target contiguous memory, which can be used to implement the specific steps in step 506 in the foregoing embodiment of FIG. 5.

In some possible implementations, the active memory defragmentation module 1203 is further configured to:

When the terminal device switches from the first application scenario to one second application scenario in the one or more second application scenarios, and the available contiguous memory on the terminal device does not satisfy the contiguous memory required by that second application scenario, the terminal device defragments the memory fragments by using a fast memory defragmentation algorithm. This can be used to implement the specific steps in step 704 in the foregoing embodiment of FIG. 7.

The embodiment of the present application further provides a terminal device. As shown in FIG. 13, for convenience of description, only the parts related to the embodiment of the present invention are shown; for details that are not disclosed, refer to the method part of the embodiment of the present invention. The terminal device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like; the following takes a mobile phone as an example:

FIG. 13 is a block diagram showing a partial structure of a mobile phone related to a terminal provided by an embodiment of the present invention. Referring to FIG. 13 , the mobile phone includes: a radio frequency (RF) circuit 1310 , a memory 1320 , an input unit 1330 , a display unit 1340 , a sensor 1350 , an audio circuit 1360 , a wireless fidelity (WiFi) module 1370 , and a processor 1380 . And power supply 1390 and other components. It will be understood by those skilled in the art that the structure of the handset shown in FIG. 13 does not constitute a limitation to the handset, and may include more or less components than those illustrated, or some components may be combined, or different components may be arranged.

The specific components of the mobile phone will be specifically described below with reference to FIG. 13:

The RF circuit 1310 can be used for receiving and transmitting signals during the sending or receiving of information or during a call; in particular, after receiving downlink information from a base station, it delivers the information to the processor 1380 for processing, and it also sends designed uplink data to the base station. Generally, the RF circuit 1310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1310 can also communicate with the network and other devices via wireless communication. The above wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), and the like.

The memory 1320 can be used to store software programs and modules, and the processor 1380 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 1320. The memory 1320 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.). Moreover, memory 1320 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.

The input unit 1330 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset. Specifically, the input unit 1330 may include a touch panel 1331 and other input devices 1332. The touch panel 1331, also referred to as a touch screen, can collect touch operations on or near the user (such as a user using a finger, a stylus, or the like on the touch panel 1331 or near the touch panel 1331. Operation), and drive the corresponding connecting device according to a preset program. Optionally, the touch panel 1331 may include two parts: a touch detection device and a touch controller. Wherein, the touch detection device detects the touch orientation of the user, and detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into contact coordinates, and sends the touch information. The processor 1380 is provided and can receive commands from the processor 1380 and execute them. In addition, the touch panel 1331 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch panel 1331, the input unit 1330 may further include other input devices 1313. Specifically, other input devices 1313 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.

The display unit 1340 can be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 1340 can include a display panel 1341. Alternatively, the display panel 1341 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1331 may cover the display panel 1341. After the touch panel 1331 detects a touch operation on or near it, the touch panel 1331 transmits the operation to the processor 1380 to determine the type of the touch event, and the processor 1380 then provides a corresponding visual output on the display panel 1341 according to the type of the touch event. Although in FIG. 13 the touch panel 1331 and the display panel 1341 are used as two independent components to implement the input and output functions of the mobile phone, in some embodiments the touch panel 1331 and the display panel 1341 may be integrated to realize the input and output functions of the phone.

The handset can also include at least one type of sensor 1350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 1341 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1341 and/or the backlight when the mobile phone moves to the ear. As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), and when stationary it can detect the magnitude and direction of gravity; it can be used to identify the posture of the mobile phone (such as horizontal/vertical screen switching, related games, magnetometer attitude calibration) and vibration-recognition related functions (such as pedometer, tapping). As for the other sensors such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors that may also be configured on the mobile phone, details are not described herein.

An audio circuit 1360, a speaker 1361, and a microphone 1362 can provide an audio interface between the user and the handset. The audio circuit 1360 can transmit the electrical signal converted from received audio data to the speaker 1361, which converts it into a sound signal for output; on the other hand, the microphone 1362 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1360 and converted into audio data. The audio data is then processed by the processor 1380 and sent, for example, to another mobile phone via the RF circuit 1310, or output to the memory 1320 for further processing.

WiFi is a short-range wireless transmission technology. The mobile phone can help users to send and receive emails, browse web pages and access streaming media through the WiFi module 1370. It provides users with wireless broadband Internet access. Although FIG. 13 shows the WiFi module 1370, it can be understood that it does not belong to the essential configuration of the mobile phone, and may be omitted as needed within the scope of not changing the essence of the invention.

The processor 1380 is the control center of the handset; it connects all parts of the entire handset using various interfaces and lines, and performs the various functions of the handset and processes data by running or executing software programs and/or modules stored in the memory 1320 and invoking data stored in the memory 1320, thereby monitoring the handset as a whole. Optionally, the processor 1380 may include one or more processing units; preferably, the processor 1380 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1380. The processor 1380 can perform the specific steps performed by the terminal device in the foregoing FIGS. 3 through 13.

The handset also includes a power source 1390 (such as a battery) that supplies power to the various components. Preferably, the power source can be logically coupled to the processor 1380 through a power management system to manage functions such as charging, discharging, and power management through the power management system.

Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.

A person skilled in the art can clearly understand that for the convenience and brevity of the description, the specific working process of the system, the device and the unit described above can refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.

In the several embodiments provided by the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division; in actual implementation, there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.

The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of FIG. 3 through FIG. 11 of the present application. The foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.

The above embodiments are only intended to explain the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (18)

  1. A method of memory management, comprising:
    Acquiring a handover probability of the terminal device switching from the first application scenario to each of the one or more second application scenarios, where the first application scenario is an application scenario currently run by the terminal device;
    Determining the target continuous memory according to the continuous memory required by the one or more second application scenarios in which the switching probability meets the preset condition in the one or more second application scenarios;
    If the contiguous memory available on the terminal device is smaller than the target contiguous memory, before the terminal device switches from the first application scenario to any one of the one or more second application scenarios, Performing memory defragmentation to make the contiguous memory available on the terminal device larger than the target contiguous memory.
  2. The method according to claim 1, wherein said performing memory defragmentation comprises:
    Determining a memory defragmentation algorithm according to a range in which the system load of the terminal device is located;
    The memory of the terminal device is defragmented using the determined memory defragmentation algorithm.
  3. The method according to claim 2, wherein the determining a memory defragmentation algorithm according to a range in which the system load is located comprises:
    If the system load is in the first preset range, determining that the memory defragmentation algorithm is a deep memory defragmentation algorithm;
    If the system load is in a second preset range, determining that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or
    If the system load is in a third preset range, determining that the memory defragmentation algorithm is a light memory defragmentation algorithm.
  4. The method according to any one of claims 1-3, wherein the obtaining a handover probability of the terminal device switching from the first application scenario to the one or more second application scenarios comprises:
    Obtaining, by the terminal device, the number of historical handovers of the second application scenario in the one or more second application scenarios;
    And determining, according to the number of historical handovers, a handover probability of switching from the first application scenario to each of the one or more second application scenarios.
  5. The method according to any one of claims 1-4, wherein the determining the target contiguous memory according to the contiguous memory required by the one or more second application scenarios, among the one or more second application scenarios, whose switching probability meets the preset condition comprises:
    Determining, from the one or more second application scenarios, one or more second application scenarios with a handover probability greater than a threshold;
    The target contiguous memory is determined according to contiguous memory required by the one or more second application scenarios with a switching probability greater than a threshold.
  6. The method according to claim 5, wherein, if there are multiple second application scenarios whose switching probability is greater than the threshold among the one or more second application scenarios, the determining the target contiguous memory according to the contiguous memory required by the one or more second application scenarios whose switching probability is greater than the threshold comprises:
    Performing a weighting operation on the contiguous memory required for each second application scenario in the second application scenario in which the plurality of handover probabilities are greater than the threshold to obtain the target contiguous memory.
  7. The method according to claim 5, wherein the determining the target continuous memory according to the contiguous memory required by the one or more second application scenarios whose switching probability is greater than the threshold comprises:
    Determining, from the second application scenario that the one or more handover probabilities are greater than the threshold, a target application scenario that requires the largest continuous memory;
    The contiguous memory required for the target application scenario is the target contiguous memory.
  8. The method according to any one of claims 1 to 6, wherein the method further comprises:
    When the terminal device switches from the first application scenario to one second application scenario in the one or more second application scenarios, and the available contiguous memory on the terminal device does not satisfy the contiguous memory required by the one second application scenario, the terminal device sorts the memory of the terminal device by using a light memory defragmentation algorithm.
  9. A terminal device, comprising:
    a data acquisition module, configured to acquire a handover probability of the terminal device switching from the first application scenario to each second application scenario in one or more second application scenarios, where the first application scenario is an application scenario currently run by the terminal device;
    a contiguous memory requirement identification module, configured to determine target continuous memory according to continuous memory required by one or more second application scenarios in which the switching probability meets a preset condition in the one or more second application scenarios;
    an active memory defragmentation module, configured to, if the contiguous memory available on the terminal device is smaller than the target contiguous memory, perform memory defragmentation according to the target contiguous memory before the terminal device switches from the first application scenario to any second application scenario in the one or more second application scenarios, so that the contiguous memory available on the terminal device is greater than the target contiguous memory.
  10. The terminal device according to claim 9, wherein the active memory defragmentation module is specifically configured to:
    Determining a memory defragmentation algorithm according to a range in which the system load of the terminal device is located;
    The memory of the terminal device is defragmented using the determined memory defragmentation algorithm.
  11. The terminal device according to claim 10, wherein the active memory defragmentation module is specifically configured to:
    If the system load is in the first preset range, determining that the memory defragmentation algorithm is a deep memory defragmentation algorithm;
    If the system load is in a second preset range, determining that the memory defragmentation algorithm is a moderate memory defragmentation algorithm; or
    If the system load is in a third preset range, determining that the memory defragmentation algorithm is a light memory defragmentation algorithm.
  12. The terminal device according to any one of claims 9 to 11, wherein the data collection module is specifically configured to:
    Obtain the number of historical switches to each of the one or more second application scenarios; and
    Determine, according to the number of historical switches, the switching probability of switching from the first application scenario to each of the one or more second application scenarios.
  13. The terminal device according to any one of claims 9 to 12, wherein the contiguous memory requirement identification module is specifically configured to:
    Determine, from the one or more second application scenarios, one or more second application scenarios whose switching probabilities are greater than a threshold; and
    Determine the target contiguous memory according to the contiguous memory required by the one or more second application scenarios whose switching probabilities are greater than the threshold.
  14. The terminal device according to claim 13, wherein the contiguous memory requirement identification module is specifically configured to:
    Perform a weighting operation on the switching probability of, and the contiguous memory required by, each of the second application scenarios whose switching probabilities are greater than the threshold, to obtain the target contiguous memory.
  15. The terminal device according to claim 13, wherein the contiguous memory requirement identification module is specifically configured to:
    Determine, from the one or more second application scenarios whose switching probabilities are greater than the threshold, a target application scenario that requires the largest contiguous memory; and
    Use the contiguous memory required by the target application scenario as the target contiguous memory.
  16. The terminal device according to any one of claims 9 to 15, wherein the active memory defragmentation module is further configured to:
    When the terminal device switches from the first application scenario to one of the one or more second application scenarios and the contiguous memory available on the terminal device does not satisfy the contiguous memory required by that second application scenario, defragment the memory of the terminal device by using a light memory defragmentation algorithm.
  17. A terminal device, comprising:
    a processor and a memory;
    wherein the memory stores a computer program; and
    the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
  18. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, perform the steps of the method according to any one of claims 1 to 8.
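A minimal Python sketch, assuming hypothetical scenario names and per-scenario switch counts, of how the switching probabilities referred to in claim 12 could be estimated from historical switch counts and then filtered against the threshold of claim 13. The normalisation by the total switch count is an assumption; the claims only require that the probabilities be derived from the historical switch numbers.

```python
from collections import Counter

def switching_probabilities(history_counts: Counter) -> dict:
    """Turn per-scenario historical switch counts into switching probabilities."""
    total = sum(history_counts.values())
    if total == 0:
        return {scenario: 0.0 for scenario in history_counts}
    return {scenario: count / total for scenario, count in history_counts.items()}

def candidate_scenarios(probabilities: dict, threshold: float) -> dict:
    """Keep only the second application scenarios whose probability exceeds the threshold."""
    return {s: p for s, p in probabilities.items() if p > threshold}

# Hypothetical example: switches recorded after leaving a "camera" scenario.
history = Counter({"gallery": 60, "video_edit": 25, "browser": 15})
probs = switching_probabilities(history)       # {'gallery': 0.6, 'video_edit': 0.25, 'browser': 0.15}
candidates = candidate_scenarios(probs, 0.2)   # {'gallery': 0.6, 'video_edit': 0.25}
```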
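A sketch of the two alternative ways of turning the candidate scenarios into a target contiguous memory: the weighting operation of claims 6 and 14 and the maximum-requirement rule of claims 7 and 15. The probability-weighted average (normalised over the candidates) is only one possible weighting operation, and the memory figures are invented for illustration.

```python
def target_by_weighting(candidates: dict, required_mb: dict) -> float:
    """Weight each candidate scenario's contiguous-memory requirement by its switching probability."""
    total_prob = sum(candidates.values())
    if total_prob == 0:
        return 0.0
    return sum(p * required_mb[s] for s, p in candidates.items()) / total_prob

def target_by_maximum(candidates: dict, required_mb: dict) -> float:
    """Take the largest contiguous-memory requirement among the candidate scenarios."""
    return max(required_mb[s] for s in candidates) if candidates else 0.0

required_mb = {"gallery": 96, "video_edit": 256, "browser": 64}
candidates = {"gallery": 0.6, "video_edit": 0.25}

print(target_by_weighting(candidates, required_mb))  # ~143.1 MB
print(target_by_maximum(candidates, required_mb))    # 256 MB
```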
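A sketch of the load-range selection of claims 10 and 11. The load metric (a CPU-utilisation fraction between 0 and 1) and the boundaries of the three preset ranges are assumptions, not values taken from the patent; the claims only require that deep, moderate and light defragmentation map to a first, second and third preset range respectively.

```python
from enum import Enum

class DefragAlgorithm(Enum):
    DEEP = "deep"          # most thorough, most expensive
    MODERATE = "moderate"
    LIGHT = "light"        # cheapest, least disruptive

def select_defrag_algorithm(system_load: float) -> DefragAlgorithm:
    """Map the range the current system load falls into onto a defragmentation algorithm."""
    if system_load < 0.3:    # first preset range (assumed): device is mostly idle
        return DefragAlgorithm.DEEP
    if system_load < 0.7:    # second preset range (assumed): moderate load
        return DefragAlgorithm.MODERATE
    return DefragAlgorithm.LIGHT  # third preset range (assumed): heavy load
```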
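A sketch of how the active memory defragmentation module of claim 9 and the light-defragmentation fallback of claims 8 and 16 could fit together. The callables available_contiguous_mb, select_defrag_algorithm and defragment are hypothetical stand-ins for platform-specific primitives and are not APIs described in the patent.

```python
from typing import Callable

def maybe_defragment_proactively(target_mb: float,
                                 available_contiguous_mb: Callable[[], float],
                                 select_defrag_algorithm: Callable[[], str],
                                 defragment: Callable[[str, float], None]) -> None:
    """Before a predicted scenario switch: if available contiguous memory is below
    the target, defragment using the algorithm chosen for the current load."""
    if available_contiguous_mb() < target_mb:
        defragment(select_defrag_algorithm(), target_mb)

def on_scenario_switch(required_mb: float,
                       available_contiguous_mb: Callable[[], float],
                       defragment: Callable[[str, float], None]) -> None:
    """Fallback when the switch happens and contiguous memory is still insufficient:
    run only a light defragmentation to keep the allocation wait short."""
    if available_contiguous_mb() < required_mb:
        defragment("light", required_mb)
```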
PCT/CN2019/082098 2018-04-13 2019-04-10 Method for memory management and related device WO2019196878A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810333058.6 2018-04-13
CN201810333058.6A CN110377527A (en) 2018-04-13 2018-04-13 A kind of method and relevant device of memory management

Publications (1)

Publication Number Publication Date
WO2019196878A1 true WO2019196878A1 (en) 2019-10-17

Family

ID=68163011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082098 WO2019196878A1 (en) 2018-04-13 2019-04-10 Method for memory management and related device

Country Status (2)

Country Link
CN (1) CN110377527A (en)
WO (1) WO2019196878A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020129192A1 (en) * 2001-03-08 2002-09-12 Spiegel Christopher J. Method, apparatus, system and machine readable medium to pre-allocate a space for data
CN1889737A (en) * 2006-07-21 2007-01-03 华为技术有限公司 Resource management method and system
CN101013400A (en) * 2007-01-30 2007-08-08 金蝶软件(中国)有限公司 Method and apparatus for cache data in memory
CN103150257A (en) * 2013-02-28 2013-06-12 天脉聚源(北京)传媒科技有限公司 Memory management method and memory management device
CN105718027A (en) * 2016-01-20 2016-06-29 努比亚技术有限公司 Management method of background application programs and mobile terminal
CN105939416A (en) * 2016-05-30 2016-09-14 努比亚技术有限公司 Mobile terminal and application prestart method thereof

Also Published As

Publication number Publication date
CN110377527A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
US9261995B2 (en) Apparatus, method, and computer readable recording medium for selecting object by using multi-touch with related reference point
JP6509895B2 (en) Resource management based on device specific or user specific resource usage profile
CN103327102B (en) A kind of method and apparatus recommending application program
EP3113035B1 (en) Method and apparatus for grouping contacts
US10347246B2 (en) Method and apparatus for executing a user function using voice recognition
EP3388946A1 (en) Memory collection method and device
WO2017206916A1 (en) Method for determining kernel running configuration in processor and related product
US9626126B2 (en) Power saving mode hybrid drive access management
US20160077620A1 (en) Method and apparatus for controlling electronic device using touch input
US9647964B2 (en) Method and apparatus for managing message, and method and apparatus for transmitting message in electronic device
JP2017507394A (en) Side menu display method, apparatus and terminal
US20130050143A1 (en) Method of providing of user interface in portable terminal and apparatus thereof
JP2016530819A (en) Method for performing power saving mode in electronic device and electronic device therefor
US8289287B2 (en) Method, apparatus and computer program product for providing a personalizable user interface
US10037143B2 (en) Memory compression method of electronic device and apparatus thereof
US9904906B2 (en) Mobile terminal and data provision method thereof
US10411945B2 (en) Time-distributed and real-time processing in information recommendation system, method and apparatus
US20160315999A1 (en) Device and method for associating applications
US9507451B2 (en) File selection method and terminal
US20160234379A1 (en) Efficient retrieval of 4g lte capabilities
US20180114047A1 (en) Electronic device and method for acquiring fingerprint information
CN104636047B (en) The method, apparatus and touch screen terminal operated to the object in list
CN102968338A (en) Method and device for classifying application program of electronic equipment and electronic equipment
US10186244B2 (en) Sound effect processing method and device, plug-in unit manager and sound effect plug-in unit
US9904409B2 (en) Touch input processing method that adjusts touch sensitivity based on the state of a touch object and electronic device for supporting the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19785336

Country of ref document: EP

Kind code of ref document: A1