CN117170872A - Memory management method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117170872A
Authority
CN
China
Prior art keywords
memory
page
input
policy
reserved
Prior art date
Legal status
Pending
Application number
CN202311171247.5A
Other languages
Chinese (zh)
Inventor
林芝驰
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311171247.5A
Publication of CN117170872A


Abstract

The application discloses a memory management method, a memory management device, electronic equipment and a storage medium, which belong to the technical field of computers. The memory management method may include the following steps: receiving a first input of a user; in response to the first input, acquiring a memory management policy corresponding to the first input; and according to the memory management policy, performing at least one of the following actions: allocating a first memory page from a reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy. The page sizes of the first memory page and the second memory page are both larger than a first threshold, the reserved memory is managed by a target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.

Description

Memory management method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to a memory management method, a memory management device, memory management equipment and a storage medium.
Background
Memory page sizes in computer systems are typically 4 kilobytes (KB), which may lead to frequent page faults and Translation Lookaside Buffer (TLB) misses for memory-intensive applications, thereby degrading computer system performance. For this reason, large-page memory technology has evolved.
In the related art, large-page memory technology can extend the conventional 4KB page size to huge pages of 2 megabytes (MB) or more by changing the basic unit of the virtual memory management mechanism. However, when large-page memory is applied to an electronic device, it introduces memory overhead that affects the performance of the electronic device.
Disclosure of Invention
The embodiment of the application aims to provide a memory management method, a memory management device, electronic equipment and a storage medium, which can solve the problem that the performance of electronic equipment cannot be improved through large-page memory technology.
In a first aspect, an embodiment of the present application provides a memory management method, including:
receiving a first input of a user;
in response to the first input, acquiring a memory management policy corresponding to the first input;
according to the memory management policy, performing at least one of the following actions: allocating a first memory page from a reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy;
wherein the page sizes of the first memory page and the second memory page are both larger than a first threshold, the reserved memory is managed by a target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.
In a second aspect, an embodiment of the present application provides a memory management device, including:
a receiving module, configured to receive a first input of a user;
an acquisition module, configured to acquire, in response to the first input, a memory management policy corresponding to the first input;
an execution module, configured to perform at least one of the following actions according to the memory management policy: allocating a first memory page from a reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy;
wherein the page sizes of the first memory page and the second memory page are both larger than a first threshold, the reserved memory is managed by a target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and where the program or instructions implement the steps of the memory management method as shown in the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored which, when executed by a processor, implement the steps of the memory management method as shown in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a display interface, where the display interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the memory management method as shown in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to perform the steps of the memory management method as shown in the first aspect.
In the embodiment of the application, in response to a first input of a user, a memory management policy corresponding to the first input is obtained, and at least one of the following actions is executed according to the memory management policy: allocating a first memory page from a reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy, wherein the page sizes of the first memory page and the second memory page are both larger than a first threshold, the reserved memory is managed by a target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold. Here, the memory management policy corresponding to the first input may be obtained according to the input of the user, so that the action corresponding to the reserved memory is executed according to the memory management policy. For example, the first memory page is allocated from the reserved memory, so that the electronic device allocates the first memory page only from the reserved memory and not from other memory, thereby reducing the influence of large-page memory on the performance of other memory in the electronic device. For another example, the second memory page can be reclaimed from the reserved memory according to the first input, so that resource waste is avoided. Moreover, because the reserved memory is managed by the target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than the second threshold, memory fragmentation is reduced; and because the memory allocation policy can be adjusted according to the first input, memory space is saved effectively, the memory utilization rate and the performance of applications with large memory demand and frequent access are improved, and the electronic equipment runs smoothly, so that the electronic equipment can improve device performance through large-page memory technology.
Drawings
FIG. 1 is a flowchart of a memory management method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a memory management device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of one type, the number of objects being unlimited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
In the embodiment of the application, large-page memory technology refers to the following: in an operating system (such as Linux), pages are generally mapped into a process address space in units of 4KB, and large-page memory technology enables pages to be mapped into the process address space at a size of 2MB or more. Currently, there are two implementations of large-page memory technology in the Linux kernel: the traditional large-page technology (HugeTLB Page) and the transparent large-page technology (Transparent Huge Page).
Both large-page memory implementations can extend the page size from 4KB to 2MB, with the following differences: 1) The memory allocation mechanism is different: the traditional large-page technology needs to reserve memory when the operating system starts, and large-page memory can only be allocated from the reserved memory, while the transparent large-page technology does not reserve memory and allocates from the free memory of the system. 2) The page fault handling mechanism is different: during page fault handling, if the traditional large-page technology is used, the reserved memory is limited and can be exhausted; the page fault then fails because no large-page memory can be allocated, and the kernel may terminate the faulting process. The transparent large-page technology does not let the page fault fail after large-page allocation fails; it automatically falls back to allocating a 4KB page to complete the page fault, thereby avoiding processes being terminated by page faults. 3) The memory reclamation mechanism is different: the traditional large-page technology does not support large-page memory reclamation; the transparent large-page technology supports it, and can reclaim a 2MB large page at one time, or split a 2MB large page into 512 small pages and then reclaim the small pages one by one. 4) The memory release mechanism is different: a 2MB large page consists of 512 consecutive 4KB small pages. Suppose a process allocates a large page and, after a period of use, wants to release 511 of its small pages. If the large page was allocated by the traditional large-page technology, the 511 small pages cannot be released; if it was allocated by the transparent large-page technology, the large page is first split into 512 small pages and the 511 small pages are then released. 5) The way a process uses large-page memory is different: the traditional large-page technology requires a process to designate a virtual memory region for large-page mapping; the kernel allocates large-page memory for that region, and if the reserved memory is insufficient, the kernel returns a failure to the process and may even terminate it, as described in 2) above. The transparent large-page technology is more flexible than the traditional large-page technology and provides three allocation policies that a system administrator can adjust dynamically while the system is running: when the transparent large-page technology is set to [always], the process does not need to set any parameter, and the kernel automatically judges whether large-page memory can be allocated for the process's virtual memory; when it is set to [madvise], the process needs to designate the virtual memory regions that may use large-page memory, and the kernel allocates large-page memory only for those regions; when it is set to [never], the kernel does not allocate large-page memory.
However, since neither the traditional large-page technology nor the transparent large-page technology was designed for electronic devices (e.g., mobile phones, wearable devices, tablets, etc.), large-page memory technology has not been used on electronic devices. If these two large-page memory technologies were used on electronic equipment, problems such as application stuttering and unsmooth operation of the operating system would arise. Therefore, the traditional large-page technology and the transparent large-page technology are insufficient to support the use of large-page memory technology on electronic devices, so electronic devices cannot improve device performance through large-page memory technology.
In order to solve the problems in the related art, the memory management method provided by the embodiment of the present application is described in detail below with reference to fig. 1 to 3 through specific embodiments and application scenarios thereof.
First, a memory management method provided by an embodiment of the present application is described in detail with reference to fig. 1.
Fig. 1 is a flowchart of a memory management method according to an embodiment of the present application.
As shown in fig. 1, the memory management method provided by the embodiment of the present application may be applied to an electronic device, and based on this, the memory management method may include the following steps:
Step 110, receiving a first input of a user;
step 120, in response to the first input, obtaining a memory management policy corresponding to the first input;
step 130, executing at least one of the following actions according to the memory management policy: allocating a first memory page from the reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy; wherein the page sizes of the first memory page and the second memory page are both larger than a first threshold, the reserved memory is managed by the target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.
In this way, the memory management policy corresponding to the first input can be obtained according to the input of the user, so that the action corresponding to the reserved memory is executed according to the memory management policy. For example, the first memory page is allocated from the reserved memory, so that the electronic device allocates the first memory page only from the reserved memory and not from other memory, reducing the influence of large-page memory on the performance of other memory in the electronic device. For another example, the second memory page can be reclaimed from the reserved memory according to the first input, avoiding resource waste. Moreover, because the reserved memory is managed by the target kernel thread and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than the second threshold, memory fragmentation is reduced; and because the memory allocation policy can be adjusted according to the first input, the memory utilization rate and the performance of applications with large memory demand and frequent access are improved, and the electronic equipment runs smoothly, so that the electronic equipment can improve device performance through large-page memory technology.
The above steps are described in detail below.
Referring first to step 110, in one or more possible embodiments, the first input may be an input for starting a first application in the electronic device. Because the importance levels of the application programs in the electronic device differ, their memory performance requirements differ as well; for example, an instant messaging application that the user uses more frequently than a preset frequency is more important, and has a higher memory performance requirement, than an application the user rarely uses. Therefore, the first application may be an application that satisfies at least one of the following conditions: an application used more frequently than a preset frequency, an application running in the foreground, and an application whose memory performance requirement is higher than a preset requirement.
In another possible embodiment, the first input may be an input for triggering the display of a desktop page.
In yet another possible embodiment, the first input may be an input for exiting a second application in the electronic device, where the second application may be any application in the electronic device. Based on this, in one example, the second application may be the same as the aforementioned first application; that is, the second application may satisfy at least one of the following conditions: an application used more frequently than a preset frequency, an application running in the foreground, and an application whose memory performance requirement is higher than a preset requirement.
Based on this, referring to step 120, the embodiments of the application provide different memory management policies for different inputs, which are described below in connection with different embodiments.
In one or more possible embodiments, where the first input is an input to trigger the display of a desktop page, the memory management policy corresponding to the first input may include a defragmentation policy.
In another one or more possible embodiments, in the case where the first input is an input for launching the first application, in one example, the memory management policy corresponding to the first input may include a large-page memory allocation policy and a first adjustment memory policy, the first adjustment memory policy being to set the large-page memory allocation policy to [always]; in another example, the memory management policy corresponding to the first input may further include at least one of a defragmentation policy and a large-page memory reclamation policy.
In yet another one or more possible embodiments, in the case where the first input is an input for exiting the second application, the memory management policy corresponding to the first input includes a second adjustment memory policy, the second adjustment memory policy being to set the large-page memory allocation policy to [madvise].
Then, referring to step 130, in this embodiment of the application, a target kernel thread may be created when the operating system of the electronic device starts, which is used to manage the reserved memory, for example, to determine the size of the reserved memory, allocate the first memory page from the reserved memory, defragment the reserved memory, and reclaim the second memory page from the reserved memory.
The reserved memory in the embodiment of the application may be set based on an empirical value, or may be determined by simulating the number of first memory pages required while the user uses the electronic device, as described in the following examples.
In one example, the memory management method provided by the embodiment of the application can provide an application interface; the user can input, via the application interface, the size of the reserved memory set based on an empirical value, and the target kernel thread then divides memory space from the storage space of the electronic device as the reserved memory according to the memory size input by the user.
In another example, before step 130, the memory management method in the embodiment of the application may further include:
Step 1401, acquiring the number of first memory pages used while the user uses the electronic device within a preset time period, according to the behavior data of the user using the electronic device;
Step 1402, counting, according to the number of first memory pages, the total memory size of the memory pages corresponding to that number;
Step 1403, dividing memory space from the storage space of the electronic device as the reserved memory according to the memory size of the memory pages.
For example, according to the behavior data of the user using each application in the electronic device in the last week, the number of first memory pages used by the user across the applications in that week is counted to be 10; the memory size of the first memory pages may then be counted, for example as 800MB, based on that number, and a reserved memory of 800MB may be divided from the storage space of the electronic device.
It should be noted that, although the memory size of the reserved memory may be determined by simulating the number of first memory pages required while the user uses the electronic device, the reserved memory must not bring unacceptable negative performance effects to the electronic device. For example, if the average first-memory-page usage obtained by simulating the user's use of the electronic device is 800MB, and reserving 800MB of memory does not cause the operating system to stutter or the application keep-alive count to decrease, then memory smaller than or equal to 800MB may be reserved when the operating system starts.
Therefore, compared with the traditional large-page technology, the reserved memory involved in the memory management method provided by the embodiment of the application is more flexible when reserving the first memory pages; for example, the reserved memory can be calculated according to the behavior data of a user, and the divided reserved memory does not affect the normal use of the electronic equipment, thereby improving device performance.
In addition, in the embodiment of the application, the page sizes of the first memory page and the second memory page are both larger than the first threshold. The first memory page and the second memory page refer to large-page memory, that is, continuous memory allocated to the virtual machine, which increases the speed at which the virtual machine accesses memory. Because the page size of large-page memory is larger than that of standard memory, the number of page-table entries can be reduced and the memory access rate improved; and because the memory addresses allocated to the virtual machine are continuous, memory performance is improved.
The first threshold is greater than or equal to the page size of small-page memory.
Based on the foregoing, the following describes the executed operations in detail for different memory management policies, respectively.
In one or more possible embodiments, in the case where the first input is an input for triggering display of a desktop page, the memory management policy corresponding to the first input may include a defragmentation policy, based on which step 130 specifically includes:
performing defragmentation on the reserved memory according to the defragmentation policy.
For example, since the transparent large-page technology may split a large page into 512 small pages when large-page memory is reclaimed or released, the reserved memory may become fragmented, and fragmentation lowers the success rate of large-page allocation, so defragmentation is needed to reduce the fragmentation rate. For example, when the user triggers the electronic device to display a desktop page, the operating system wakes the target kernel thread to defragment the reserved memory, so that the fragmentation rate of the reserved memory after being managed by the kernel thread is lower than 5%.
Therefore, the fragmentation rate of the reserved memory can be reduced, the storage space in the reserved memory is saved, and the performance of the electronic equipment can be improved.
In another one or more possible embodiments, in the case where the first input is an input for launching the first application, in one example, the memory management policy corresponding to the first input may include a large-page memory allocation policy and a first adjustment memory policy, where the first adjustment memory policy is to set the large-page memory allocation policy to [always]. Based on this, step 130 may specifically include:
Step 1301, according to the first adjustment memory policy, adjusting the large-page memory allocation policy in the memory allocation policy to [always];
Step 1302, allocating the first memory page from the reserved memory according to the large-page memory allocation policy.
Illustratively, the operating system defaults the large-page memory allocation policy to [madvise], but upon an input to launch the first application, the large-page memory allocation policy is adjusted from [madvise] to [always]. Here, when the large-page memory allocation policy is [always], the operating system determines whether large-page memory can be allocated without any parameters being set; when the large-page memory allocation policy is [madvise], it is necessary to specify in advance which virtual memory may use large-page memory, and the operating system determines whether large-page memory can be allocated only for that memory; otherwise, the operating system does not allocate large-page memory for it.
Further, step 1302 may specifically include:
determining, by the target kernel thread and according to the large-page memory allocation policy, whether the virtual memory satisfies a preset condition;
allocating the first memory page from the reserved memory when the virtual memory satisfies the preset condition, wherein the virtual memory satisfying the preset condition includes at least one of the following: the virtual memory satisfies address alignment, and the page size of the first memory page in the virtual memory is greater than or equal to 2MB.
For example, when the large-page memory allocation policy is [always], it is first checked whether the virtual memory satisfies the preset condition; when the virtual memory satisfies address alignment and the page size of the first memory page in the virtual memory is greater than or equal to 2MB, it is determined that the virtual memory satisfies the preset condition, and the target kernel thread may then allocate large-page memory directly from the reserved memory.
Therefore, when the user triggers a first application with a high memory performance requirement, the electronic device allocates the first memory page only from the reserved memory and not from other memory, reducing the influence of large-page memory on the performance of other memory in the electronic device and improving the performance of the electronic device while improving memory performance.
Here, it should be noted that if the memory size of the reserved memory is small and large-page memory cannot be allocated, the reserved memory may be split into small pages, and small pages may be allocated instead.
In another example, the memory management policy corresponding to the first input in this embodiment may further include at least one of a defragmentation policy and a large-page memory reclamation policy. Where the memory management policy includes a defragmentation policy, step 130 may specifically include:
performing defragmentation on the reserved memory according to the defragmentation policy.
For example, when the first application is started, the target kernel thread may defragment the reserved memory, so that the fragmentation rate of the reserved memory after being managed by the kernel thread is lower than 5%.
Therefore, the fragmentation rate of the reserved memory can be reduced, the storage space in the reserved memory is saved, and the performance of the electronic equipment can be improved.
And, when the memory management policy includes a large-page memory reclamation policy, step 130 may specifically include:
reclaiming the second memory page from the reserved memory according to the large-page memory reclamation policy.
For example, since the current transparent large-page technology may cause application stuttering and unsmooth system operation when large-page memory is reclaimed, the second memory page may be reclaimed from the reserved memory through the target kernel thread when the first application is started.
Therefore, available memory capacity can be provided for the first application, improving device performance; when the user switches among a plurality of first applications, a large page of memory can be allocated to each of the plurality of first applications, improving the memory utilization rate and the performance of applications with large memory demand and frequent access, so that the electronic equipment runs smoothly.
In yet another one or more possible embodiments, in a case where the first input is an input for exiting the second application, in one example the memory management policy corresponding to the first input includes a second memory adjustment policy, the second memory adjustment policy being to set the large page memory allocation policy to madvise. Based on this, the step 130 may specifically include:
adjusting, according to the second memory adjustment policy, the large page memory allocation policy in the memory allocation policy to madvise.
Illustratively, when the first application is started, the large page memory allocation policy is [always], and when the second application is exited (where the second application may be the same application as the first application), the large page memory allocation policy is adjusted to [madvise], so as to ensure the amount of large page memory allocated to applications with large memory demand and frequent access.
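The bracket notation [always] and [madvise] matches the format of the Linux sysfs file /sys/kernel/mm/transparent_hugepage/enabled, which lists every available policy on one line and marks the active one with brackets, e.g. "always [madvise] never". A small helper (an illustrative assumption, not part of the patent) that extracts the active token from that format:

```c
#include <stddef.h>
#include <string.h>

/* Copy the bracketed (active) policy token out of a line such as
 * "always [madvise] never"; returns 0 on success, -1 if the line has
 * no complete [token] or out is too small. */
static int thp_active_policy(const char *line, char *out, size_t outsz)
{
    const char *open = strchr(line, '[');
    const char *close = open ? strchr(open + 1, ']') : NULL;
    if (close == NULL)
        return -1;
    size_t len = (size_t)(close - open - 1);
    if (len + 1 > outsz)
        return -1;
    memcpy(out, open + 1, len);
    out[len] = '\0';
    return 0;
}
```

Reading that sysfs file and parsing it this way is how user space can confirm that an adjustment from [always] back to [madvise] actually took effect.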
In another example, the memory management policy corresponding to the first input further includes a large page memory allocation policy, based on which the step 130 may specifically include:
step 1303, determining, according to the large page memory allocation policy, whether there is a first memory page set by an objective function, where the objective function corresponds to the second memory adjustment policy;
step 1304, in a case where it is determined that there is a first memory page set by the objective function, allocating the first memory page from the reserved memory.
Illustratively, the operating system defaults the large page memory allocation policy to [madvise]. Since the large page memory allocation policy was adjusted from [madvise] to [always] when the first application was started, it needs to be readjusted to [madvise] when the second application is exited (where the second application may be the same application as the first application). At this time, the target kernel thread can query, through the setting of the objective function, whether the virtual memory has usable large page memory; if the virtual memory has large page memory set through the objective function, the large page memory can be allocated from the reserved memory. Otherwise, if there is no large page memory set by the objective function, large page memory is not allocated from the reserved memory, and small page memory can be allocated instead.
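The patent leaves the objective function unnamed. Under Linux's madvise transparent large page policy (an assumption, since the policy names here match that interface), the user-space marker that plays this role is madvise(MADV_HUGEPAGE), which flags a virtual range as a huge-page candidate; only ranges flagged this way are considered for large-page backing. A hedged sketch under that assumption:

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <stddef.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

/* Map an anonymous region and mark it as a huge-page candidate.  Under
 * the madvise policy the kernel considers only regions flagged with
 * MADV_HUGEPAGE for large-page backing.  Returns the mapping or NULL. */
static void *map_hugepage_candidate(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    /* Advisory only: fails with EINVAL on kernels built without THP. */
    madvise(p, len, MADV_HUGEPAGE);
    return p;
}
```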
Further, the step 1304 may specifically include:
in a case where it is determined that there is a first memory page set by the objective function, determining, through the target kernel thread, whether the virtual memory meets a preset condition;
in a case where the virtual memory meets the preset condition, allocating the first memory page from the reserved memory, where the preset condition includes at least one of the following: the virtual memory satisfies address alignment, and the page size of the first memory page in the virtual memory is greater than or equal to 2 MB.
For example, when the large page memory allocation policy is [always], it is first checked whether the virtual memory satisfies the preset condition; when the virtual memory satisfies address alignment and the page size of the first memory page in the virtual memory is greater than or equal to 2 MB, it is determined that the virtual memory satisfies the preset condition, and at this time the target kernel thread may directly allocate the large page memory from the reserved memory.
In this way, when the user triggers a first application with high memory-performance requirements, the electronic device allocates the first memory page only from the reserved memory and not from other memory, thereby reducing the influence of large page memory on the performance of other memory in the electronic device, and improving the performance of the electronic device while improving memory performance.
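The preset condition above — 2 MB address alignment and a first-memory-page size of at least 2 MB — reduces to simple arithmetic. A minimal sketch (2 MB is the typical huge-page size assumed by the text; other architectures use other sizes):

```c
#include <stdint.h>
#include <stddef.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

/* Check the preset condition: the candidate virtual range must start on
 * a 2 MB boundary and span at least one full large page. */
static int preset_condition_met(uintptr_t addr, size_t len)
{
    return (addr & (HPAGE_SIZE - 1)) == 0 && len >= HPAGE_SIZE;
}
```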
It should be noted here that if the amount of memory in the reserved memory is small and a large memory page cannot be allocated, a large page in the reserved memory may be split into small pages, and the small pages may be allocated instead.
In yet another example, the memory management policy corresponding to the first input further includes a defragmentation policy, based on which the step 130 may specifically include:
performing defragmentation on the reserved memory according to the defragmentation policy.
Illustratively, when the second application is exited, the target kernel thread may defragment the reserved memory, such that the fragmentation rate of the reserved memory after being managed by the kernel thread is less than 5%.
In this way, the fragmentation rate of the reserved memory can be reduced, the storage space in the reserved memory is saved, and the performance of the electronic device can be improved.
For the memory management method provided by the embodiment of the present application, the execution subject may be a memory management device. In the embodiment of the present application, the memory management device executing the memory management method is taken as an example to describe the memory management device provided by the embodiment of the present application.
Based on the same inventive concept, the application also provides a memory management device. This is described in detail with reference to fig. 2.
Fig. 2 is a schematic structural diagram of a memory management device according to an embodiment of the present application.
As shown in fig. 2, the memory management device 20 may be applied to an electronic apparatus, and the memory management device 20 may specifically include:
a receiving module 201, configured to receive a first input of a user;
an obtaining module 202, configured to obtain, in response to the first input, a memory management policy corresponding to the first input;
the execution module 203 is configured to execute at least one of the following actions according to the memory management policy: allocating a first memory page from the reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy;
The page sizes of the first memory page and the second memory page are larger than a first threshold, the reserved memory is managed by the target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.
The memory management device 20 according to the embodiment of the present application is described in detail below, and is specifically as follows.
In one or more possible embodiments, the first input is an input for triggering display of a desktop page, and the memory management policy corresponding to the first input includes a defragmentation policy.
In one or more other possible embodiments, the first input is an input for launching the first application, and the memory management policy corresponding to the first input includes a large page memory allocation policy and a first memory adjustment policy, the first memory adjustment policy being to set the large page memory allocation policy to always.
In yet one or more other possible embodiments, the memory management policy corresponding to the first input further includes at least one of a defragmentation policy and a large page memory reclamation policy.
In yet one or more other possible embodiments, the first input is an input for exiting the second application, and the memory management policy corresponding to the first input includes a second memory adjustment policy, the second memory adjustment policy being to set the large page memory allocation policy to madvise.
The memory management device in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a netbook, or a personal digital assistant (Personal Digital Assistant, PDA), etc., and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (Personal Computer, PC), a television (Television, TV), a teller machine, or a self-service machine, etc.; the embodiment of the present application is not specifically limited thereto.
The memory management device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The memory management device provided by the embodiment of the application can realize each process realized by the embodiment of the memory management method shown in fig. 1, and achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
Based on this, the memory management device provided in the embodiment of the present application responds to the first input of the user, acquires the memory management policy corresponding to the first input, and executes at least one of the following actions according to the memory management policy: allocating a first memory page from the reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy, where the page sizes of the first memory page and the second memory page are larger than a first threshold, the reserved memory is managed by a target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold. Here, the memory management policy corresponding to the first input may be obtained according to the input of the user, so that the action corresponding to the reserved memory is executed according to the memory management policy. For example, the first memory page is allocated from the reserved memory, so that the electronic device allocates the first memory page only from the reserved memory and not from other memory, thereby reducing the influence of large page memory on the performance of other memory in the electronic device. For another example, the second memory page can be reclaimed from the reserved memory according to the first input, so that resource waste is avoided. Moreover, since the reserved memory is managed by the target kernel thread and the fragmentation rate of the reserved memory managed by the target kernel thread is lower than the second threshold, memory fragmentation is reduced; the memory allocation policy can be adjusted according to the first input, the memory utilization rate and the performance of applications with large memory demand and frequent access are improved, and the electronic device runs smoothly, so that the electronic device can improve device performance through the large page memory technology.
Optionally, as shown in fig. 3, the embodiment of the present application further provides an electronic device 30, including a processor 301 and a memory 302, where the memory 302 stores a program or instruction that can be executed on the processor 301, and the program or instruction, when executed by the processor 301, implements each step of the above memory management method embodiment and achieves the same technical effects; to avoid repetition, details are not described here again.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 4 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, etc.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system so as to perform functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail here.
In the embodiment of the present application, the user input unit 407 is configured to receive a first input of a user. The processor 410 is configured to obtain, in response to the first input, a memory management policy corresponding to the first input. The processor 410 is further configured to execute at least one of the following actions according to the memory management policy: allocating a first memory page from the reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy; the page sizes of the first memory page and the second memory page are larger than a first threshold, the reserved memory is managed by the target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.
The electronic device 400 is described in detail below, and is specifically as follows:
in one or more possible embodiments, the first input is an input for triggering display of a desktop page, and the memory management policy corresponding to the first input includes a defragmentation policy.
In one or more other possible embodiments, the first input is an input for launching the first application, and the memory management policy corresponding to the first input includes a large page memory allocation policy and a first memory adjustment policy, the first memory adjustment policy being to set the large page memory allocation policy to always.
In yet one or more other possible embodiments, the memory management policy corresponding to the first input further includes at least one of a defragmentation policy and a large page memory reclamation policy.
In yet one or more other possible embodiments, the first input is an input for exiting the second application, and the memory management policy corresponding to the first input includes a second memory adjustment policy, the second memory adjustment policy being to set the large page memory allocation policy to madvise.
It should be appreciated that the input unit 404 may include a graphics processing unit (Graphics Processing Unit, GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still images or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts: a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 409 may be used to store software programs and various data. The memory 409 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions (such as a sound playing function and an image playing function) required for at least one function, and the like. Further, the memory 409 may include a volatile memory or a nonvolatile memory, or the memory 409 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synch-link DRAM (Synch-Link DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 409 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 410 may include one or more processing units; optionally, the processor 410 integrates an application processor and a modem processor, where the application processor primarily handles operations involving the operating system, user interface, application programs, etc., and the modem processor, such as a baseband processor, primarily handles wireless communication signals. It will be appreciated that the modem processor described above may alternatively not be integrated into the processor 410.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned memory management method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In addition, the embodiment of the present application further provides a chip. The chip includes a processor and a display interface, the display interface is coupled with the processor, and the processor is configured to run a program or instruction to implement each process of the above memory management method embodiment and achieve the same technical effects; to avoid repetition, details are not described here again.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the memory management method described above, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment methods may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application may be embodied essentially, or in part, in the form of a computer software product stored on a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (12)

1. A memory management method, comprising:
receiving a first input of a user;
responding to the first input, and acquiring a memory management strategy corresponding to the first input;
according to the memory management policy, executing at least one of the following actions: allocating a first memory page from a reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy;
the page sizes of the first memory page and the second memory page are larger than a first threshold, the reserved memory is managed by a target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.
2. The method of claim 1, wherein the first input is an input for triggering display of a desktop page, and wherein the memory management policy corresponding to the first input comprises a defragmentation policy.
3. The method of claim 1, wherein the first input is an input for launching a first application, and wherein the memory management policy corresponding to the first input includes a large page memory allocation policy and a first memory adjustment policy, the first memory adjustment policy being to set the large page memory allocation policy to always.
4. The method of claim 3, wherein the memory management policy corresponding to the first input further comprises at least one of a defragmentation policy and a large page memory reclamation policy.
5. The method according to claim 1 or 3, wherein the first input is an input for exiting a second application, and the memory management policy corresponding to the first input includes a second memory adjustment policy, the second memory adjustment policy being to set the large page memory allocation policy to madvise.
6. A memory management device, comprising:
the receiving module is used for receiving a first input of a user;
the acquisition module is used for responding to the first input and acquiring a memory management strategy corresponding to the first input;
the execution module is configured to execute at least one of the following actions according to the memory management policy: allocating a first memory page from a reserved memory, defragmenting the reserved memory, reclaiming a second memory page from the reserved memory, and adjusting a memory allocation policy;
the page sizes of the first memory page and the second memory page are larger than a first threshold, the reserved memory is managed by a target kernel thread, and the fragmentation rate of the reserved memory after being managed by the target kernel thread is lower than a second threshold.
7. The apparatus of claim 6, wherein the first input is an input to trigger display of a desktop page, and wherein the memory management policy corresponding to the first input comprises a defragmentation policy.
8. The apparatus of claim 6, wherein the first input is an input for launching a first application, and wherein the memory management policy corresponding to the first input includes a large page memory allocation policy and a first memory adjustment policy, the first memory adjustment policy being to set the large page memory allocation policy to always.
9. The apparatus of claim 8, wherein the memory management policy corresponding to the first input further comprises at least one of a defragmentation policy and a large page memory reclamation policy.
10. The apparatus of claim 6 or 8, wherein the first input is an input for exiting a second application, and wherein the memory management policy corresponding to the first input includes a second memory adjustment policy, the second memory adjustment policy being to set the large page memory allocation policy to madvise.
11. An electronic device, comprising: a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction when executed by the processor implementing the steps of the memory management method as claimed in any one of claims 1 to 5.
12. A readable storage medium, wherein a program or instructions is stored on the readable storage medium, which when executed by a processor, implements the steps of the memory management method according to any of claims 1-5.
CN202311171247.5A 2023-09-11 2023-09-11 Memory management method, device, equipment and storage medium Pending CN117170872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311171247.5A CN117170872A (en) 2023-09-11 2023-09-11 Memory management method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117170872A 2023-12-05

Family

ID=88935180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311171247.5A Pending CN117170872A (en) 2023-09-11 2023-09-11 Memory management method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117170872A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination