CN111352861B - Memory compression method and device and electronic equipment - Google Patents

Memory compression method and device and electronic equipment

Info

Publication number: CN111352861B
Authority: CN (China)
Prior art keywords: memory, compressed, page, data, memory page
Legal status: Active (granted)
Application number: CN202010102545.9A
Other languages: Chinese (zh)
Other versions: CN111352861A
Inventor: 彭冬炜
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; priority to CN202010102545.9A; published as CN111352861A, granted as CN111352861B

Classifications

    • G06F12/0223: User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F12/023: Free address space management
    • G06F9/5016: Allocation of resources, the resource being the memory
    • G06F2212/1016: Performance improvement (indexing scheme for memory systems)
    • G06F2212/401: Compressed data (specific encoding of data in memory or cache)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a memory compression method, a memory compression device, and an electronic device, relating to the field of computer technology. The method includes the following steps: when memory is insufficient, determining a memory page to be compressed; determining a target thread that requires real-time processing; judging whether the memory page to be compressed is associated with the target thread; and if not, compressing the memory page to be compressed. In this way, application stuttering caused by decompressing the data in a memory page when the target thread uses the memory can be avoided.

Description

Memory compression method and device and electronic equipment
Technical Field
The present application relates to the field of computer technology, and in particular to a memory compression method, a memory compression device, and an electronic device.
Background
When the memory of an electronic device is insufficient, the memory typically needs to be compressed. However, existing memory compression methods can cause problems such as stuttering in some applications.
Disclosure of Invention
In view of the above, the present application provides a memory compression method, a memory compression device, and an electronic device to mitigate the above problem.
In a first aspect, an embodiment of the present application provides a memory compression method, including: when the memory is insufficient, determining a memory page to be compressed; determining a target thread needing real-time processing; judging whether the memory page to be compressed is associated with the target thread or not; and if the memory page to be compressed is not associated with the target thread, compressing the memory page to be compressed.
In a second aspect, an embodiment of the present application provides a memory compression device, including: the device comprises a determining module, a judging module and a compressing module. The determining module is used for determining a memory page to be compressed and determining a target thread needing real-time processing when the memory is insufficient. The judging module is used for judging whether the memory page to be compressed is associated with the target thread. The compression module is used for compressing the memory page to be compressed when the memory page to be compressed is not associated with the target thread.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored therein program code which is callable by a processor to perform a method as described above.
Compared with the prior art, the scheme provided by the application judges, after the memory page to be compressed is determined, whether that page is associated with a target thread that requires real-time processing, and compresses it only if it is not. In this way, application stuttering caused by decompressing the data in a memory page when the target thread uses the memory can be avoided.
These and other aspects of the application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 shows a block schematic diagram of an electronic device according to an embodiment of the present application.
Fig. 2 is a flow chart of a memory compression method according to an embodiment of the application.
Fig. 3 is a flow chart of a memory compression method according to another embodiment of the application.
Fig. 4 shows a schematic diagram of the substeps of step S210 shown in fig. 3.
Fig. 5 shows a sub-step schematic diagram of step S260 shown in fig. 3.
Fig. 6A shows a schematic diagram of the first data in the embodiment shown in fig. 3.
FIG. 6B shows a schematic diagram of the second data in the embodiment of FIG. 3.
Fig. 7 shows another sub-step schematic of step S260 shown in fig. 3.
FIG. 8 is a flow chart illustrating another method of memory compression in the embodiment of FIG. 3.
Fig. 9 is a block diagram of a memory compression device according to an embodiment of the present application.
Fig. 10 shows a storage unit for storing or carrying program code that implements a memory compression method according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the following description will make clear and complete descriptions of the technical solutions according to the embodiments of the present application with reference to the accompanying drawings.
The memory of the electronic device may be divided into a plurality of memory pages (pages), and the memory may be managed in units of memory pages. A memory page is typically 512 bytes to 8 kilobytes. Each memory page has a physical memory address, and the physical memory address corresponds to a virtual memory address; a processor of the electronic device can indirectly access the corresponding physical memory address through the virtual memory address and thereby access the data in the memory page at that physical memory address. Each memory page may be associated with one or more threads, and an associated thread accesses the data in the memory page when it accesses the virtual memory address corresponding to that page.
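As an illustrative sketch (not part of the patent), the virtual-to-physical mapping described above can be modeled as a per-thread page table that maps virtual page numbers to physical page numbers; the names `PageTable`, `map`, and `translate` are hypothetical:

```python
PAGE_SIZE = 4096  # a common page size within the 512 B to 8 KB range mentioned above

class PageTable:
    """Maps virtual page numbers to physical page numbers for one thread."""
    def __init__(self):
        self.entries = {}  # virtual page number -> physical page number

    def map(self, vpn, ppn):
        self.entries[vpn] = ppn

    def translate(self, vaddr):
        """Translate a virtual address to a physical address, or None if unmapped."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        ppn = self.entries.get(vpn)
        return None if ppn is None else ppn * PAGE_SIZE + offset

pt = PageTable()
pt.map(vpn=5, ppn=42)
print(pt.translate(5 * PAGE_SIZE + 100))  # 42 * 4096 + 100 = 172132
```

A thread that holds such a mapping for a page is, in the patent's terms, "associated" with that page.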
In practical applications, when memory is insufficient, some memory pages may be compressed; for example, the least recently used (LRU) memory pages may be selected for compression to free a portion of the memory space, and the data in these pages is decompressed again when it needs to be used. However, decompression consumes performance and takes time; for threads with high real-time requirements, this may block the thread and, in turn, cause the application the thread belongs to to stutter.
Through long-term research, the inventor provides a memory compression method, a memory compression device, and an electronic device that can avoid the stuttering caused by memory decompression when a thread uses the memory. This is described in detail below.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device according to an embodiment of the present application. The electronic device 100 may be any device having a data processing function, such as a smart phone, a tablet computer, an electronic book, a notebook computer, a personal computer (Personal Computer, PC), or the like. The electronic device 100 of the present application may include one or more of the following components: the processor 110, the memory 120, and one or more programs, wherein the one or more programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform the memory compression method described below.
Processor 110 may include one or more processing cores. The processor 110 connects various parts of the electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). Memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described below, and the like. The data storage area may store data created by the electronic device 100 in use (such as the first data and second data described below), and the like.
It is to be understood that the configuration shown in fig. 1 is merely exemplary, and that electronic device 100 may also include more or fewer components than shown in fig. 1, or have a completely different configuration than shown in fig. 1. The embodiment of the present application is not limited thereto.
Referring to fig. 2, fig. 2 is a flowchart illustrating a memory compression method according to an embodiment of the application, which can be applied to the electronic device 100 shown in fig. 1. The steps of the method are described below.
In step S110, when the memory is insufficient, the memory page to be compressed is determined.
In this embodiment, each application in the electronic device 100 needs to occupy a certain amount of memory for preloading data that the application will use, so as to speed up read and write operations. However, when memory is heavily occupied, the application may not be able to allocate enough memory for subsequent operations, resulting in an operating system exception. The application may be a stand-alone application, or may be an application running on a third-party platform in the electronic device 100, such as an applet running on a social platform. The application running on the third-party platform may be controlled through a user interface (User Interface, UI) framework, for example a LinUI, which is not limited in this embodiment.
It should be noted that the LinUI is a framework developed on the basis of the Linux system and runs in kernel mode, where threads and processes have similar structures. Thus, the threads described in this embodiment may also be processes.
In the implementation process, the electronic device 100 may monitor the size of the free memory currently remaining in the device and determine, from the monitored data, whether the memory of the electronic device 100 is insufficient. If it is sufficient, no processing is required. If it is insufficient, memory compression is needed, so a memory page to be used for compression, namely the memory page to be compressed, can be selected. For example, memory pages whose data is used less frequently, such as the least recently used memory pages, may be selected.
In step S120, a target thread requiring real-time processing is determined.
Some threads have high real-time requirements, that is, they must process data in real time, otherwise the application may stutter. Therefore, in this embodiment, a thread with high real-time requirements is determined from among the threads of the electronic device 100 and used as the target thread, so that whether the memory page to be compressed should be compressed can be further determined according to the determined target thread.
Step S130, determining whether the memory page to be compressed is associated with the target thread. If yes, go to step S140; if not, step S150 may be performed.
Step S140, not compressing the memory page to be compressed.
Step S150, compressing the memory page to be compressed.
In this embodiment, each memory page is associated with one or more threads, and the threads associated with the memory page use the data in the memory page during operation.
Thus, if the memory page to be compressed is associated with the target thread, the target thread will use the data in that page at run time. If the page is compressed, the target thread must first perform decompression when it later needs the data in the page, and decompression takes a certain amount of time, so the application corresponding to the target thread, which has high real-time requirements, may stutter. For this reason, when the memory page to be compressed is associated with the target thread, the page may be left uncompressed. This prevents the target thread from having to decompress the page's data later and thus avoids the stuttering caused by decompression.
Conversely, if the memory page to be compressed is not associated with the target thread, the target thread does not use that page when running, so no subsequent decompression can block it. Therefore, by judging whether each memory page to be compressed is associated with a target thread with high real-time requirements, and compressing the page only when it is not, the pages associated with such target threads can be at least partially excluded from compression. This prevents the target thread's application from stuttering due to subsequent decompression, which would degrade the user experience and reduce user retention.
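The decision logic of steps S110-S150 can be sketched as follows. This is a hedged illustration, not the patent's implementation: the association check is simplified to set membership, and the names are hypothetical.

```python
def select_pages_to_compress(candidate_pages, target_thread_pages):
    """Compress a candidate page only if no real-time target thread maps it.

    candidate_pages: pages chosen for compression (e.g. least recently used)
    target_thread_pages: set of pages associated with real-time target threads
    """
    to_compress, skipped = [], []
    for page in candidate_pages:
        if page in target_thread_pages:
            skipped.append(page)       # step S140: leave the page uncompressed
        else:
            to_compress.append(page)   # step S150: safe to compress
    return to_compress, skipped

compress, keep = select_pages_to_compress(["p1", "p2", "p3"], {"p2"})
print(compress, keep)  # ['p1', 'p3'] ['p2']
```

Pages skipped here remain instantly accessible to the target thread, which is exactly the stutter-avoidance property the method claims.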
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a memory compression method according to another embodiment of the present application, which can be applied to the electronic device 100 shown in fig. 1, and the following description describes steps included in the method.
Step S210, when the memory is insufficient, determining the anonymous pages in the inactive anonymous page linked list as the memory pages to be compressed.
In an alternative manner of this embodiment, the electronic device 100 may monitor whether the memory is insufficient through the flow shown in fig. 4. The detailed description is as follows.
In step S211, a memory allocation request is received, where the memory allocation request includes a first memory size requested to be allocated.
In practical applications, when a thread needs to be run, a corresponding memory space is generally allocated for the thread. Taking the LinUI as an example, when a third party platform in the electronic device 100 needs to run a certain thread, a memory allocation request is sent to the LinUI, where the memory allocation request includes a size of a memory requested to be allocated to the thread, and the size is a first memory size.
Step S212, judging whether the difference value between the current remaining second memory size and the first memory size reaches a memory threshold. If yes, go to step S213; if not, step S214 is performed.
In step S213, it is determined that the memory is not insufficient.
In step S214, the memory shortage is determined.
The memory threshold may be flexibly set, for example, may be set according to statistical data. After receiving the memory allocation request, the current remaining memory size of the electronic device 100 may be obtained, where the memory size is the second memory size. Then, a difference between the second memory size and the first memory size may be calculated and compared to the memory threshold.
In one example, assume that the memory threshold is 10M, the currently remaining memory size (i.e., the second memory size) is 30M, and the first memory size requested is 20M. The difference between the second memory size and the first memory size is then 10M, which reaches the memory threshold of 10M, so it may be determined that the current memory is not insufficient.
In another example, assume that the memory threshold is 10M, the currently remaining second memory size is 25M, and the first memory size requested is 20M. The difference is then 5M, which does not reach the memory threshold of 10M, so it may be determined that the current memory is insufficient.
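The threshold check of steps S212-S214, including both numeric examples above, can be written as a small sketch (illustrative only; the function name and megabyte units are assumptions):

```python
def memory_sufficient(free_memory_mb, requested_mb, threshold_mb=10):
    """Step S212: memory is sufficient if the difference between the remaining
    (second) memory size and the requested (first) memory size reaches the
    memory threshold."""
    return free_memory_mb - requested_mb >= threshold_mb

# First example from the text: 30M free, 20M requested, difference 10M -> sufficient
print(memory_sufficient(30, 20))  # True
# Second example: 25M free, 20M requested, difference 5M -> insufficient
print(memory_sufficient(25, 20))  # False
```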
It will be appreciated that the flow shown in fig. 4 is merely an example. For example, a plurality of memory size ranges may be set, and whether the memory is insufficient is determined according to the memory size range in which the difference between the second memory size and the first memory size is located. The present embodiment is not limited thereto.
In this embodiment, memory pages may be classified into anonymous pages and file pages according to page type, and into active pages and inactive pages according to activity. File pages generally refer to memory pages that can be directly reclaimed by the system of the electronic device 100, while anonymous pages generally refer to dynamically allocated heap memory that cannot be directly reclaimed and may be accessed again later.
The electronic device 100 may maintain LRU linked lists, which are doubly linked lists that record memory pages. In this embodiment, the electronic device 100 may divide the LRU linked lists into the following types according to the page type and activity of the memory pages: an inactive anonymous page linked list, an active anonymous page linked list, an inactive file page linked list, and an active file page linked list. The active anonymous page linked list records active anonymous pages, the inactive anonymous page linked list records inactive anonymous pages, the active file page linked list records active file pages, and the inactive file page linked list records inactive file pages.
In this embodiment, the memory page to be compressed is selected from the anonymous pages because anonymous pages cannot be reclaimed directly. Specifically, the anonymous pages recorded in the inactive anonymous page linked list are all least recently used anonymous pages that are unlikely to be used again soon, so the anonymous pages in the inactive anonymous page linked list can be used as memory pages to be compressed.
Further, anonymous pages in the inactive anonymous page linked list are typically moved there from the active anonymous page linked list. Illustratively, when an anonymous page has just been allocated, it is added to the active anonymous page linked list at its head; as the active anonymous page linked list is updated over time, the page moves toward the tail, and upon reaching the tail it is added to the inactive anonymous page linked list by the electronic device 100. The anonymous pages at the tail of the active anonymous page linked list are therefore the pages with the lowest recent activity, so a target number of anonymous pages at the tail of the active anonymous page linked list can also be determined as memory pages to be compressed. The target number may be set flexibly, for example to 1-5, such as 3.
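The list mechanics just described can be sketched with two deques standing in for the doubly linked lists. This is an illustration under stated assumptions, not the kernel's LRU code; the class and method names are hypothetical.

```python
from collections import deque

class AnonymousPageLists:
    """Illustrative active/inactive anonymous page lists."""
    def __init__(self):
        self.active = deque()    # head (index 0) = most recently allocated
        self.inactive = deque()

    def allocate(self, page):
        self.active.appendleft(page)  # new anonymous pages enter at the head

    def age(self, n=1):
        """Move n pages from the tail of the active list to the inactive list."""
        for _ in range(min(n, len(self.active))):
            self.inactive.appendleft(self.active.pop())

    def compression_candidates(self, target_number=3):
        """Pages eligible for compression: all inactive pages, plus up to
        target_number pages from the tail (least active end) of the active list."""
        return list(self.inactive) + list(self.active)[-target_number:]

lists = AnonymousPageLists()
for p in ["p1", "p2", "p3", "p4"]:
    lists.allocate(p)           # active (head -> tail): p4 p3 p2 p1
lists.age(2)                    # p1, then p2, move to the inactive list
print(lists.compression_candidates(3))
```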
In a possible implementation manner, after determining all the memory pages to be compressed, the following steps S220-S260 may be performed for each memory page to be compressed, respectively.
In another possible implementation, the anonymous pages in the inactive anonymous page linked list may be traversed; each anonymous page currently being visited is determined as a memory page to be compressed, and the subsequent steps S220-S260 are then performed for it. Correspondingly, a target number of anonymous pages in the active anonymous page linked list can be visited in turn from the tail toward the head; each anonymous page currently being visited is determined as a memory page to be compressed, and the subsequent steps S220-S260 are then performed for it.
In step S220, a target thread requiring real-time processing is determined.
In this embodiment, the target thread may be a UI thread or a real-time thread. The UI thread has high real-time requirements, and if the processing is not timely enough, frame loss may be caused. Based on this, step S220 may be implemented by at least one of the following steps:
determining a UI thread as the target thread;
and determining the real-time thread as the target thread.
In step S230, a Page Table (PT) of the target thread is determined, where the Page Table includes a plurality of Page Table Entries (PTEs) corresponding to different memory pages.
In this embodiment, each thread has a page table, where the page table includes a plurality of page table entries, and each page table entry may include a mapping relationship between a virtual memory address and a physical memory address of a memory page. When a thread accesses a certain virtual memory address, a page table entry containing the virtual memory address can be searched from a page table of the thread, and then a physical memory address in the page table entry is accessed, so that data in a memory page indicated by the physical memory address is accessed.
If a page table entry includes a virtual memory address or a physical memory address of a memory page, the page table entry and the memory page can be regarded as corresponding to each other.
Step S240, finding whether there is a page table entry corresponding to the memory page to be compressed from the page table. If yes, go to step S250; if not, step S270 is performed.
In practice, if a page table entry PTE-1 in the page table A1 of a thread T1 contains the virtual memory address or the physical memory address of a memory page P1, then PTE-1 corresponds to P1, and the thread T1 may access the data in P1 through PTE-1; that is, T1 is associated with P1. Thus, when a page table entry corresponding to the memory page to be compressed exists in the page table of the target thread, it may be determined that the memory page to be compressed is associated with the target thread.
When there is no page table entry in page table A2 of one thread T2 that contains a virtual memory address or a physical memory address of memory page P2, it means that thread T2 does not access data in memory page P2, i.e., thread T2 is not associated with memory page P2. When a page table entry corresponding to the memory page to be compressed does not exist in the page table of the target thread, it may be determined that the memory page to be compressed is not associated with the target thread.
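The lookup of step S240, covering both the T1 and T2 cases above, can be sketched as follows; the page table is modeled as a plain dict of hypothetical PTEs (virtual page to physical address), which is a simplification of real hardware page tables:

```python
def is_associated(page_table, page_physical_addr):
    """Step S240: a thread is associated with a page if any page table entry
    of the thread points at that page's physical address."""
    return page_physical_addr in page_table.values()

page_table_t1 = {0x1000: 0x8000, 0x2000: 0x9000}  # hypothetical PTEs of thread T1
print(is_associated(page_table_t1, 0x9000))  # True: T1 maps the page (case of P1)
print(is_associated(page_table_t1, 0xA000))  # False: T1 never accesses it (case of P2)
```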
Step S250, determining that the memory page to be compressed is associated with the target thread, and not compressing the memory page to be compressed.
Step S260, determining that the memory page to be compressed is not associated with the target thread, and compressing the memory page to be compressed.
The detailed implementation logic of step S250 and step S260 may refer to the detailed description of step S130-step S150 in the previous embodiment, and will not be repeated here.
Through the flow shown in fig. 3, application stuttering caused by decompression when a target thread with high real-time requirements uses the memory can be avoided. In particular, when the target thread is a UI thread, frames will be dropped if the UI thread cannot render in time; the flow shown in fig. 3 can reduce the UI thread's frame loss rate. In testing, the total number of dropped frames was reduced by 40%.
Alternatively, in this embodiment, the step of compressing the memory page to be compressed in step S260 may be implemented by a flow shown in fig. 5, which is described in detail below.
In step S261, first data in the memory page to be compressed is obtained.
As described above, a memory page is a memory space of 512 bytes to 8 kilobytes in which data is stored. After the memory page to be compressed is determined, the data stored in it is the first data. Correspondingly, in the implementation process, data can be read from the determined memory page to be compressed, and the data read out is the first data.
Step S262, processing the first data with a compression algorithm to obtain second data.
In this embodiment, the compression algorithm may be any algorithm capable of compressing data, for example Huffman coding, run-length coding, or arithmetic coding, which is not limited in this embodiment.
For example, the first data stored in the memory page to be compressed is typically not stored contiguously but in scattered locations; that is, there may be many memory fragments in the memory page to be compressed. This is shown in fig. 6A, which depicts the distribution of data in one memory page, with the empty boxes representing memory fragments. In practical applications, allocation requests are usually for contiguous memory space, so although the memory fragments store no data, they cannot be allocated, which results in waste. Therefore, the data in the memory page can be moved together according to a certain rule so that the memory fragments form a contiguous section of memory space. This rule can be regarded as a compression algorithm that obtains the second data by changing the distribution positions of the first data. Fig. 6B shows the distribution of data in a memory page after processing by this compression algorithm, where the blank blocks form a contiguous free memory space. The data stored in the memory page shown in fig. 6A is the first data, and the data stored in the memory page shown in fig. 6B is the second data.
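The Fig. 6A to Fig. 6B transformation can be illustrated with a toy model in which a page is a list of slots and `None` marks a memory fragment (an assumption made for illustration; the real layout is raw bytes):

```python
def compact(page):
    """Move the occupied slots of a page together so that the free slots
    (None) form one contiguous region, as in Figs. 6A -> 6B."""
    data = [slot for slot in page if slot is not None]
    return data + [None] * (len(page) - len(data))

fragmented = ["a", None, "b", None, None, "c"]   # Fig. 6A: scattered fragments
print(compact(fragmented))  # ['a', 'b', 'c', None, None, None]
```

After compaction, the trailing `None` region corresponds to the contiguous free space of Fig. 6B, which can now satisfy a contiguous allocation request.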
Step S263, storing the second data.
Step S264, releasing the memory page to be compressed.
For example, if the space actually occupied by the first data in the memory page shown in fig. 6A is 1M and, after compression, the space actually occupied by the second data is 0.7M, then the second data can be stored in a 0.7M space, and the memory page to be compressed can be released to store other data.
The electronic device 100 may store the second data in a different storage area than the memory page to be compressed. In an alternative, the storage area may be part of the memory space, in which case 0.3M of memory space may be freed up in the above example. Alternatively, the storage area may be another storage medium, for example, a space in the hard disk is typically pre-partitioned in the electronic device 100 as a swap (swap) partition, and the second data may be stored in the swap partition. In this manner, the entire memory page to be compressed may be freed up for storing other data.
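Steps S261-S264 can be sketched end to end as follows. This is an illustrative sketch only: `zlib` is used as a stand-in for whichever compression algorithm the embodiment chooses (Huffman coding, run-length coding, etc.), and the dicts standing in for memory and the swap partition are assumptions.

```python
import zlib

memory_pages = {"page42": b"aaaa" * 256}          # page holding the first data
swap = {}                                          # separate storage area (e.g. swap)

first_data = memory_pages["page42"]                # step S261: read the first data
second_data = zlib.compress(first_data)            # step S262: compress to second data
swap["page42"] = second_data                       # step S263: store the second data
del memory_pages["page42"]                         # step S264: release the memory page

print(len(b"aaaa" * 256), "->", len(swap["page42"]))  # compressed size is smaller
```

On a later access, the second data would be decompressed (here, `zlib.decompress`) to recover the first data, which is exactly the cost the method avoids imposing on real-time target threads.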
In practical application, in order to enable the target thread to continue to use the first data originally stored in the memory page to be compressed, referring to fig. 7, before executing step S264, the step of compressing the memory page to be compressed in step S260 may further include the following steps:
step S265, determining a page table entry corresponding to the memory page to be compressed, where the page table entry includes a physical memory address of the memory page to be compressed.
With reference to the above description, each page table entry includes a mapping relationship between a virtual memory address and a physical memory address of a memory page, and correspondingly, the page table entry corresponding to the memory page to be compressed also includes the physical memory address and the virtual memory address of the memory page to be compressed.
Step S266, in the page table entry corresponding to the memory page to be compressed, updating the physical memory address of the memory page to be compressed to the current storage address of the second data.
In practical applications, when the target thread needs to access the first data in the memory page to be compressed, it actually requests access to the virtual memory address of that page. The electronic device 100 then determines the physical memory address of the memory page to be compressed from the page table entry containing that virtual memory address (i.e., the page table entry corresponding to the memory page to be compressed), so as to obtain the required first data from the determined physical memory address. However, at this point the first data has been compressed into the second data and stored in a location different from the memory page to be compressed. Therefore, before the memory page to be compressed is released, the physical memory address in its corresponding page table entry can be updated to the current storage address of the second data. In other words, the mapping between the virtual memory address and the physical memory address of the memory page to be compressed is updated to a mapping between that virtual memory address and the current storage address of the second data.
Thus, when the electronic device 100 accesses the virtual memory address of the memory page to be compressed, the second data can be found according to the relationship between the virtual memory address and the current memory address of the second data, so as to perform subsequent decompression processing.
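Steps S265 and S266 amount to retargeting one page-table entry. The sketch below illustrates this under stated assumptions: the `pte` structure, its field names, and the `compressed` flag are simplifications invented for the example, not a real kernel page-table layout.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified page-table entry: one virtual-to-physical mapping, as described
 * for the page table entry corresponding to the memory page to be compressed. */
struct pte {
    uintptr_t virt;   /* virtual memory address of the memory page */
    uintptr_t phys;   /* physical address, or second data's storage address */
    bool compressed;  /* marks that phys now points at the second data */
};

/* Step S266: before the memory page is released, update the entry so the
 * virtual address resolves to the current storage address of the second data. */
void retarget_pte(struct pte *e, uintptr_t second_data_addr)
{
    e->phys = second_data_addr;
    e->compressed = true;  /* remember that decompression is needed on access */
}
```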
Optionally, referring to fig. 8, after step S260 is performed, the memory compression method provided in the present embodiment may further include the steps shown in fig. 8. The detailed description is as follows.
Step S270, responding to the access instruction of the target thread to the virtual memory address, searching a page table item comprising the virtual memory address from the page table of the target thread according to the access instruction, and determining the current storage address of the second data from the searched page table item.
In practical applications, after the memory page to be compressed is compressed, the target thread may request to access the data in the memory page to be compressed, and at this time, the target thread typically issues an access instruction including the virtual memory address of the memory page to be compressed.
When the processor 110 of the electronic device 100 detects an access instruction issued by the target thread, if a virtual memory address is identified in the access instruction, the page table of the target thread can be accessed, and a page table entry including the identified virtual memory address can be searched for among the page table entries of that page table. Besides the identified virtual memory address, the found page table entry includes another address, which is the current storage address of the second data.
Step S280, obtaining the second data from the current storage address.
In the implementation process, after determining the current storage address of the second data, the data in the current storage address can be read, and the read data is the second data.
Step S290, determining a decompression algorithm corresponding to the compression algorithm, and obtaining a decompression result of the second data processed by the decompression algorithm, thereby obtaining first data.
In this embodiment, the decompression algorithm adopted in step S290 corresponds to the compression algorithm adopted in step S262, that is, the decompression algorithm adopted in step S290 is a reverse processing procedure of the compression algorithm adopted in step S262. The second data may be restored to the first data by the decompression algorithm.
Step S2100, storing the first data in a memory page, so that the target thread accesses the data in the memory page.
After the first data is decompressed, the first data may be stored in the memory page, and the current storage address in the searched page table entry may be updated to a physical memory address of the memory page storing the decompressed first data. Thus, when the target thread accesses the identified virtual memory address, the first data may be accessed.
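Steps S270 through S2100 can be sketched as one access path. Everything below is an illustrative assumption: the `pte` layout, the function names, and in particular `decompress`, which is only a placeholder for the inverse of whatever compression algorithm step S262 used.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified entry: virtual address plus the current storage location. */
struct pte {
    uintptr_t virt;     /* virtual memory address */
    void *store;        /* memory page, or second data's storage address */
    size_t stored_len;  /* size of the stored (second) data */
    int compressed;     /* nonzero while store holds second data */
};

/* Placeholder for the inverse of the compression algorithm of step S262;
 * a real codec would expand src into dst here. */
static size_t decompress(const void *src, size_t len, void *dst)
{
    memcpy(dst, src, len);
    return len;
}

/* On access to vaddr: find the entry (S270), fetch the second data (S280),
 * decompress it into a fresh page (S290), store the first data there and
 * repoint the entry (S2100). Returns the page, or NULL if no entry matches. */
void *access_page(struct pte *table, size_t n, uintptr_t vaddr,
                  size_t page_size)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].virt != vaddr)
            continue;                    /* S270: match on virtual address */
        if (!table[i].compressed)
            return table[i].store;       /* first data already resident */
        void *page = malloc(page_size);  /* S2100: memory page for first data */
        decompress(table[i].store, table[i].stored_len, page); /* S280/S290 */
        free(table[i].store);            /* second data no longer needed */
        table[i].store = page;           /* update entry to the new page */
        table[i].compressed = 0;
        return page;
    }
    return NULL;
}
```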
Referring to fig. 9, a block diagram of a memory compression device according to an embodiment of the application is shown. The memory compression device 900 may include: a determining module 910, a judging module 920, and a compression module 930.
The determining module 910 is configured to determine a memory page to be compressed when the memory is insufficient, and determine a target thread that needs real-time processing.
Optionally, the determining module 910 may be further configured to: before determining a memory page to be compressed, receiving a memory allocation request, wherein the memory allocation request comprises a first memory size requested to be allocated; judging whether the difference value between the current remaining second memory size and the first memory size reaches a memory threshold value or not; if not, determining that the memory is insufficient; if yes, it can be determined that the memory is not insufficient.
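The threshold judgment above can be sketched as a single predicate. The function name and parameter names are illustrative; the threshold value itself would be device-specific.

```c
#include <stdbool.h>
#include <stddef.h>

/* Memory is deemed insufficient when the second memory size (remaining)
 * minus the requested first memory size would fall below the memory
 * threshold, i.e. the difference does not reach the threshold. */
bool memory_insufficient(size_t remaining, size_t requested, size_t threshold)
{
    if (remaining < requested)
        return true;  /* cannot satisfy the allocation request at all */
    return (remaining - requested) < threshold; /* difference below reserve */
}
```

For instance, with 100 units remaining, a 90-unit request, and a threshold of 20, the difference (10) does not reach the threshold, so memory is judged insufficient and compression is triggered.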
Optionally, the determining module 910 may determine the memory page to be compressed by: and determining the anonymous page in the inactive anonymous page linked list as the memory page to be compressed.
Optionally, the determining module 910 may further determine the memory page to be compressed by: and determining the target number of anonymous pages at the tail of the active anonymous page linked list as the memory pages to be compressed.
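The two selection rules above can be combined into one sketch. Plain arrays stand in for the kernel's anonymous-page LRU linked lists, and the page identifiers are purely illustrative.

```c
#include <stddef.h>

/* Select candidates for compression: every page on the inactive anonymous
 * list, plus the last `target` pages of the active anonymous list (its tail
 * holds the least-recently used of the active pages). Writes candidate ids
 * into out and returns how many were selected. */
size_t pick_candidates(const int *inactive, size_t n_inactive,
                       const int *active, size_t n_active,
                       size_t target, int *out)
{
    size_t k = 0;
    for (size_t i = 0; i < n_inactive; i++)
        out[k++] = inactive[i];                 /* all inactive anonymous pages */
    size_t take = target < n_active ? target : n_active;
    for (size_t i = n_active - take; i < n_active; i++)
        out[k++] = active[i];                   /* tail of the active list */
    return k;
}
```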
Optionally, the manner in which the determining module 910 determines the target thread that requires real-time processing includes at least one of: determining a user interface thread as the target thread; and determining the real-time thread as the target thread.
The judging module 920 is configured to judge whether the memory page to be compressed is associated with the target thread.
Optionally, the judging module 920 may determine whether the memory page to be compressed is associated with the target thread by: determining a page table of the target thread, wherein the page table comprises a plurality of page table entries corresponding to different memory pages respectively; searching the page table for a page table entry corresponding to the memory page to be compressed; and if no page table entry corresponding to the memory page to be compressed exists in the page table, determining that the memory page to be compressed is not associated with the target thread. Correspondingly, if a page table entry corresponding to the memory page to be compressed exists in the page table, it may be determined that the memory page to be compressed is associated with the target thread.
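This association check reduces to a lookup over the thread's page table entries. The entry layout below is a deliberate simplification invented for the sketch, not a real page-table format.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified page-table entry: one virtual-to-physical mapping. */
struct pte {
    uintptr_t virt;
    uintptr_t phys;
};

/* A memory page is associated with the target thread iff the thread's page
 * table contains an entry mapping that page's physical address. */
bool page_associated(const struct pte *table, size_t n, uintptr_t page_phys)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].phys == page_phys)
            return true;  /* entry found: compressing would affect the thread */
    return false;         /* no entry: the page is safe to compress */
}
```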
Optionally, the judging module 920 may be further configured to: when the memory page to be compressed is associated with the target thread, not compress the memory page to be compressed.
The compression module 930 is configured to compress the memory page to be compressed when the memory page to be compressed is not associated with the target thread.
Optionally, the compressing module 930 compresses the memory page to be compressed by: acquiring first data in the memory page to be compressed; obtaining a compression result of the first data processed by a compression algorithm to obtain second data; storing the second data; and releasing the memory page to be compressed.
Optionally, before releasing the memory page to be compressed, the manner in which the compression module 930 compresses the memory page to be compressed may further include: determining a page table entry corresponding to the memory page to be compressed, wherein the page table entry comprises a physical memory address of the memory page to be compressed; and in the page table entry corresponding to the memory page to be compressed, updating the physical memory address of the memory page to be compressed to the current storage address of the second data.
Optionally, the page table entry corresponding to the memory page to be compressed may further include a virtual memory address of the memory page to be compressed. Based thereon, the apparatus 900 may further comprise a decompression module.
The decompression module is used for: after the compression module 930 compresses the memory page to be compressed, responding to an access instruction of the target thread to a virtual memory address, searching a page table item including the virtual memory address from a page table of the target thread according to the access instruction, and determining the current storage address of the second data from the searched page table item; acquiring the second data from the current storage address; determining a decompression algorithm corresponding to the compression algorithm, and obtaining a decompression result of processing the second data through the decompression algorithm to obtain the first data; and storing the first data in a memory page, so that the target thread accesses the data in the memory page.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In the several embodiments provided by the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in other forms.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
Referring to fig. 10, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable medium 1000 has stored therein a program code 1010, said program code 1010 being callable by a processor for performing the method described in the above method embodiments.
The computer readable storage medium 1000 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 1000 includes a non-transitory computer-readable storage medium. The computer readable storage medium 1000 has storage space for program code 1010 that performs any of the method steps described above. These program codes can be read from, or written into, one or more computer program products. The program code 1010 may, for example, be compressed in a suitable form.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present application, not for limiting them; although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (9)

1. A memory compression method, comprising:
receiving a memory allocation request, wherein the memory allocation request comprises a first memory size requested to be allocated;
judging whether the difference value between the current remaining second memory size and the first memory size reaches a memory threshold value or not;
if not, determining that the memory is insufficient, and determining the anonymous pages in the inactive anonymous page linked list and the target number of anonymous pages at the tail of the active anonymous page linked list as memory pages to be compressed, wherein the anonymous pages are not frequently used;
determining a user interface thread and a real-time thread as target threads;
judging whether the memory page to be compressed is associated with the target thread or not;
if the memory page to be compressed is not associated with the target thread, compressing the memory page to be compressed;
and storing the data in the compressed memory page to be compressed in other storage media, and releasing the memory page to be compressed.
2. The method according to claim 1, wherein the method further comprises:
and if the memory page to be compressed is associated with the target thread, not compressing the memory page to be compressed.
3. The method according to claim 1 or 2, wherein the determining whether the memory page to be compressed is associated with the target thread comprises:
determining a page table of the target thread, wherein the page table comprises a plurality of page table entries corresponding to different memory pages respectively;
searching whether a page table item corresponding to the memory page to be compressed exists or not from the page table;
and if the page table does not have a page table item corresponding to the memory page to be compressed, determining that the memory page to be compressed is not associated with the target thread.
4. The method of claim 3, wherein compressing the memory page to be compressed comprises:
acquiring first data in the memory page to be compressed;
obtaining a compression result of the first data processed by a compression algorithm to obtain second data;
storing the second data;
and releasing the memory page to be compressed.
5. The method of claim 4, wherein compressing the memory page to be compressed prior to releasing the memory page to be compressed, further comprises:
determining a page table entry corresponding to the memory page to be compressed, wherein the page table entry comprises a physical memory address of the memory page to be compressed;
and in a page table item corresponding to the memory page to be compressed, updating the physical memory address of the memory page to be compressed into the current storage address of the second data.
6. The method of claim 5, wherein the page table entry corresponding to the memory page to be compressed further comprises a virtual memory address of the memory page to be compressed; after the memory page to be compressed is compressed, the method further includes:
responding to an access instruction of the target thread to a virtual memory address, searching a page table item comprising the virtual memory address from a page table of the target thread according to the access instruction, and determining the current storage address of the second data from the searched page table item;
acquiring the second data from the current storage address;
determining a decompression algorithm corresponding to the compression algorithm, and obtaining a decompression result of processing the second data through the decompression algorithm to obtain the first data;
and storing the first data in a memory page, so that the target thread accesses the data in the memory page.
7. A memory compression device, comprising:
the memory allocation module is used for receiving a memory allocation request, wherein the memory allocation request comprises a first memory size requested to be allocated; judging whether the difference value between the current remaining second memory size and the first memory size reaches a memory threshold value or not; if not, determining that the memory is insufficient, determining anonymous pages in the inactive anonymous page linked list and target number of anonymous pages at the tail of the active anonymous page linked list as memory pages to be compressed, wherein the anonymous pages are not frequently used, and determining that a user interface thread and a real-time thread are target threads;
the judging module is used for judging whether the memory page to be compressed is associated with the target thread or not;
the compression module is used for compressing the memory page to be compressed when the memory page to be compressed is not associated with the target thread; and storing the data in the compressed memory page to be compressed in other storage media, and releasing the memory page to be compressed.
8. An electronic device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-6.
9. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, which is callable by a processor for performing the method according to any one of claims 1-6.
CN202010102545.9A 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment Active CN111352861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010102545.9A CN111352861B (en) 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010102545.9A CN111352861B (en) 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111352861A CN111352861A (en) 2020-06-30
CN111352861B true CN111352861B (en) 2023-09-29

Family

ID=71197988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010102545.9A Active CN111352861B (en) 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111352861B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880928B (en) * 2020-07-06 2024-04-19 Oppo广东移动通信有限公司 Method for releasing selection process and terminal equipment
CN112052089B (en) * 2020-09-01 2023-03-28 Oppo(重庆)智能科技有限公司 Memory recovery method and device and electronic equipment
CN112069433A (en) * 2020-09-10 2020-12-11 Oppo(重庆)智能科技有限公司 File page processing method and device, terminal equipment and storage medium
CN112685333A (en) * 2020-12-28 2021-04-20 上海创功通讯技术有限公司 Heap memory management method and device
CN113296940B (en) * 2021-03-31 2023-12-08 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113885787B (en) * 2021-06-08 2022-12-13 荣耀终端有限公司 Memory management method and electronic equipment
CN114116191B (en) * 2021-06-24 2023-09-01 荣耀终端有限公司 Memory cold page processing method and electronic equipment
CN113610348A (en) * 2021-07-06 2021-11-05 安徽海博智能科技有限责任公司 Strip mine card scheduling method, system, device and storage medium
CN114461375B (en) * 2021-07-30 2023-01-20 荣耀终端有限公司 Memory resource management method and electronic equipment
CN117957527A (en) * 2022-08-30 2024-04-30 晶晨半导体(上海)股份有限公司 Memory management method and module, chip, electronic device and storage medium

Citations (13)

Publication number Priority date Publication date Assignee Title
CN103970256A (en) * 2014-04-22 2014-08-06 中国科学院计算技术研究所 Energy saving method and system based on memory compaction and CPU dynamic frequency modulation
CN104216696A (en) * 2013-06-05 2014-12-17 北京齐尔布莱特科技有限公司 Thumbnail component method
CN104750854A (en) * 2015-04-16 2015-07-01 武汉海达数云技术有限公司 Mass three-dimensional laser point cloud compression storage and rapid loading and displaying method
CN105631035A (en) * 2016-01-04 2016-06-01 北京百度网讯科技有限公司 Data storage method and device
CN106503032A (en) * 2016-09-09 2017-03-15 深圳大学 A kind of method and device of data compression
CN106557436A (en) * 2016-11-17 2017-04-05 乐视控股(北京)有限公司 The memory compression function enabled method of terminal and device
CN106970881A (en) * 2017-03-10 2017-07-21 浙江大学 The one cold and hot page based on big page is followed the trail of and pressure recovery method
CN107704321A (en) * 2017-09-30 2018-02-16 北京元心科技有限公司 Memory allocation method and device and terminal equipment
CN107885672A (en) * 2017-11-07 2018-04-06 杭州顺网科技股份有限公司 Internal storage management system and method
CN108062336A (en) * 2016-11-09 2018-05-22 腾讯科技(北京)有限公司 Media information processing method and device
CN110008016A (en) * 2019-04-15 2019-07-12 深圳市万普拉斯科技有限公司 Anonymous page management method, device, terminal device and readable storage medium storing program for executing
CN110457235A (en) * 2019-08-20 2019-11-15 Oppo广东移动通信有限公司 Memory compression methods, device, terminal and storage medium
CN110704199A (en) * 2019-09-06 2020-01-17 深圳平安通信科技有限公司 Data compression method and device, computer equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8751830B2 (en) * 2012-01-23 2014-06-10 International Business Machines Corporation Memory address translation-based data encryption/compression

Patent Citations (13)

Publication number Priority date Publication date Assignee Title
CN104216696A (en) * 2013-06-05 2014-12-17 北京齐尔布莱特科技有限公司 Thumbnail component method
CN103970256A (en) * 2014-04-22 2014-08-06 中国科学院计算技术研究所 Energy saving method and system based on memory compaction and CPU dynamic frequency modulation
CN104750854A (en) * 2015-04-16 2015-07-01 武汉海达数云技术有限公司 Mass three-dimensional laser point cloud compression storage and rapid loading and displaying method
CN105631035A (en) * 2016-01-04 2016-06-01 北京百度网讯科技有限公司 Data storage method and device
CN106503032A (en) * 2016-09-09 2017-03-15 深圳大学 A kind of method and device of data compression
CN108062336A (en) * 2016-11-09 2018-05-22 腾讯科技(北京)有限公司 Media information processing method and device
CN106557436A (en) * 2016-11-17 2017-04-05 乐视控股(北京)有限公司 The memory compression function enabled method of terminal and device
CN106970881A (en) * 2017-03-10 2017-07-21 浙江大学 The one cold and hot page based on big page is followed the trail of and pressure recovery method
CN107704321A (en) * 2017-09-30 2018-02-16 北京元心科技有限公司 Memory allocation method and device and terminal equipment
CN107885672A (en) * 2017-11-07 2018-04-06 杭州顺网科技股份有限公司 Internal storage management system and method
CN110008016A (en) * 2019-04-15 2019-07-12 深圳市万普拉斯科技有限公司 Anonymous page management method, device, terminal device and readable storage medium storing program for executing
CN110457235A (en) * 2019-08-20 2019-11-15 Oppo广东移动通信有限公司 Memory compression methods, device, terminal and storage medium
CN110704199A (en) * 2019-09-06 2020-01-17 深圳平安通信科技有限公司 Data compression method and device, computer equipment and storage medium

Non-Patent Citations (1)

Title
A Linux Memory Management Mechanism; Xie Wendi et al.; Journal of Xinxiang University; Vol. 33, No. 12; pp. 31-36 *

Also Published As

Publication number Publication date
CN111352861A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
CN111352861B (en) Memory compression method and device and electronic equipment
US9274839B2 (en) Techniques for dynamic physical memory partitioning
JP5211751B2 (en) Calculator, dump program and dump method
CN110765031B (en) Data storage method and device, mobile terminal and storage medium
CN110764906B (en) Memory recovery processing method and device, electronic equipment and storage medium
US20220035655A1 (en) Method and Device for Anonymous Page Management, Terminal Device, and Readable Storage Medium
CN110457235B (en) Memory compression method, device, terminal and storage medium
CN108205474B (en) Memory management method, terminal device, computer apparatus, and readable storage medium
CN111309267B (en) Storage space allocation method and device, storage equipment and storage medium
CN111090521A (en) Memory allocation method and device, storage medium and electronic equipment
CN110727607B (en) Memory recovery method and device and electronic equipment
CN111723057A (en) File pre-reading method, device, equipment and storage medium
US11467734B2 (en) Managing swap area in memory using multiple compression algorithms
WO2022151985A1 (en) Virtual memory-based data storage method and apparatus, device, and storage medium
CN110968529A (en) Method and device for realizing non-cache solid state disk, computer equipment and storage medium
CN115543532A (en) Processing method and device for missing page exception, electronic equipment and storage medium
KR20190117294A (en) Electronic apparatus and controlling method thereof
CN112654965A (en) External paging and swapping of dynamic modules
CN115421907A (en) Memory recovery method and device, electronic equipment and storage medium
CN115328405A (en) Data processing method and device and electronic equipment
CN111562983B (en) Memory optimization method and device, electronic equipment and storage medium
CN115587049A (en) Memory recovery method and device, electronic equipment and storage medium
CN111881065B (en) Physical address processing method, device, equipment and medium for data deduplication operation
US10838727B2 (en) Device and method for cache utilization aware data compression
CN113010454A (en) Data reading and writing method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant