CN111352861A - Memory compression method and device and electronic equipment - Google Patents

Memory compression method and device and electronic equipment

Info

Publication number
CN111352861A
CN111352861A
Authority
CN
China
Prior art keywords
memory
compressed
page
memory page
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010102545.9A
Other languages
Chinese (zh)
Other versions
CN111352861B (en)
Inventor
彭冬炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010102545.9A
Publication of CN111352861A
Application granted
Publication of CN111352861B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/40Specific encoding of data in memory or cache
    • G06F2212/401Compressed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a memory compression method and device, and an electronic device, relating to the field of computer technology. The method comprises the following steps: when memory is insufficient, determining a memory page to be compressed; determining a target thread that requires real-time processing; judging whether the memory page to be compressed is associated with the target thread; and if not, compressing the memory page to be compressed. This avoids application stutter caused by having to decompress data in the memory page when the target thread uses the memory.

Description

Memory compression method and device and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a memory compression method and apparatus, and an electronic device.
Background
When the memory of an electronic device is insufficient, the memory usually needs to be compressed. However, existing memory compression methods can easily cause problems such as stutter in some applications.
Disclosure of Invention
In view of the foregoing, the present application provides a memory compression method and device and an electronic device to mitigate the foregoing problems.
In a first aspect, an embodiment of the present application provides a memory compression method, including: when the memory is insufficient, determining a memory page to be compressed; determining a target thread needing real-time processing; judging whether the memory page to be compressed is associated with the target thread; and if the memory page to be compressed is not associated with the target thread, compressing the memory page to be compressed.
In a second aspect, an embodiment of the present application provides a memory compression apparatus, including: the device comprises a determining module, a judging module and a compressing module. The determining module is used for determining a memory page to be compressed and determining a target thread needing real-time processing when the memory is insufficient. The judging module is used for judging whether the memory page to be compressed is associated with the target thread. The compression module is used for compressing the memory page to be compressed when the memory page to be compressed is not associated with the target thread.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which program code is stored, and the program code can be called by a processor to execute the method described above.
Compared with the prior art, the scheme provided by the application judges, after the memory page to be compressed is determined, whether that page is associated with a target thread requiring real-time processing, and compresses it only if it is not. This avoids application stutter caused by having to decompress data in the memory page when the target thread uses the memory.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a block schematic diagram of an electronic device provided in an embodiment of the present application.
Fig. 2 is a flowchart illustrating a memory compression method according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a memory compression method according to another embodiment of the present application.
Fig. 4 shows a schematic diagram of the substeps of step S210 shown in fig. 3.
Fig. 5 shows a sub-step diagram of step S260 shown in fig. 3.
Fig. 6A shows a schematic diagram of the first data in the embodiment shown in fig. 3.
Fig. 6B shows a schematic diagram of the second data in the embodiment shown in fig. 3.
Fig. 7 shows another sub-step diagram of step S260 shown in fig. 3.
Fig. 8 is a schematic flow chart illustrating a memory compression method in the embodiment shown in fig. 3.
Fig. 9 is a block diagram illustrating a memory compression apparatus according to an embodiment of the present application.
Fig. 10 is a storage unit according to an embodiment of the present application, configured to store or carry program code for implementing a memory compression method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The memory of the electronic device may be divided into a plurality of memory pages (pages), and the memory may be managed in units of memory pages. A memory page is typically 512 bytes to 8 KB in size. Each memory page has a physical memory address, which corresponds to a virtual memory address; a processor of the electronic device can indirectly access the corresponding physical memory address through the virtual memory address, thereby accessing the data in the memory page corresponding to that physical memory address. Each memory page may be associated with one or more threads, and when a thread accesses the virtual memory address corresponding to the memory page, it can access the data in the memory page.
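The virtual-to-physical mapping described above can be sketched with a simple dictionary-based model (an illustration only, not the patent's or any kernel's actual page-table structure; `PAGE_SIZE` and the mapping values are hypothetical):

```python
PAGE_SIZE = 4096  # a common page size within the 512 B - 8 KB range mentioned above

# Hypothetical per-thread page table: virtual page number -> physical page number
page_table = {0x10: 0x2A, 0x11: 0x2B}

def translate(vaddr):
    """Translate a virtual address into a physical address via the page table."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    ppn = page_table[vpn]  # an unmapped page would raise KeyError (a "page fault")
    return ppn * PAGE_SIZE + offset

# A thread accessing byte 8 of virtual page 0x10 reaches byte 8 of physical page 0x2A
assert translate(0x10 * PAGE_SIZE + 8) == 0x2A * PAGE_SIZE + 8
```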
In practical applications, when the memory is insufficient, some memory pages may be compressed; for example, the Least Recently Used (LRU) memory pages are selected for compression to release part of the memory space, and decompression is performed when the data in those pages is needed again. However, decompression consumes a certain amount of performance and time, so for threads with high real-time requirements it may cause the thread, and in turn the application it belongs to, to stutter.
The inventor provides a memory compression method, a memory compression device and an electronic device through long-term research, and can avoid the problem of stutter caused by memory decompression processing when a thread uses a memory. This is described in detail below.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 100 may be any device having a data processing function, such as a smart phone, a tablet Computer, an electronic book, a notebook Computer, and a Personal Computer (PC). The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more programs, wherein the one or more programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform the memory compression methods described below.
Processor 110 may include one or more processing cores. The processor 110 connects various components throughout the electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communications. It is understood that the modem may also not be integrated into the processor 110 but instead be implemented by a separate communication chip.
The memory 120 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, and the like. The data storage area may also store data created by the electronic device 100 in use (such as the first data and second data described below), and the like.
It will be appreciated that the configuration shown in FIG. 1 is merely exemplary, and that electronic device 100 may include more or fewer components than shown in FIG. 1, or may have a completely different configuration than shown in FIG. 1. The embodiments of the present application do not limit this.
Referring to fig. 2, fig. 2 is a flowchart illustrating a memory compression method according to an embodiment of the present disclosure, where the method can be applied to the electronic device 100 shown in fig. 1. The steps of the method are explained below.
In step S110, when the memory is insufficient, the memory page to be compressed is determined.
In this embodiment, each application in the electronic device 100 needs to occupy a certain amount of memory to run, so that the data it needs can be preloaded and read/write operations sped up. However, when the memory is heavily occupied, applications run subsequently may not be allocated enough memory, resulting in operating-system anomalies. The application may be a standalone application, or an application running on a third-party platform in the electronic device 100, such as an applet running on a social platform. An application running on a third-party platform may be controlled through a User Interface (UI) framework, which may be, for example, a LinUI; this embodiment is not limited thereto.
It should be noted that LinUI is a framework developed on the basis of the Linux system and runs in kernel mode, where threads and processes have similar structures. Therefore, the threads described in this embodiment may also be processes.
In an implementation process, the electronic device 100 may monitor the size of the currently remaining free memory in the device and determine, from the monitored data, whether the memory of the electronic device 100 is insufficient. If it is sufficient, no action is needed. If it is insufficient, memory compression is required, so a memory page for compression, i.e., the memory page to be compressed, can be selected. Illustratively, memory pages whose data is used less frequently, e.g., the least recently used memory pages, may be selected.
Step S120, determining a target thread requiring real-time processing.
Some threads have a high real-time requirement, that is, their data must be processed in real time, otherwise the application may stutter. In this embodiment, a thread with a high real-time requirement is therefore determined from among the threads of the electronic device 100 and used as the target thread, so that whether the memory page to be compressed should actually be compressed can be further determined according to the target thread.
Step S130, determining whether the memory page to be compressed is associated with the target thread. If yes, go to step S140; if not, step S150 may be performed.
In step S140, the memory page to be compressed is not compressed.
Step S150, compress the memory page to be compressed.
In this embodiment, each memory page is associated with one or more threads, and the threads associated with the memory page use data in the memory page during operation.
Therefore, if the memory page to be compressed is associated with the target thread, the target thread will use the data in that memory page at runtime. If the page is compressed, the data must first be decompressed whenever the target thread needs it, and the decompression takes a certain amount of time, causing the application corresponding to the target thread, which has a high real-time requirement, to stutter. Based on this, when the memory page to be compressed is associated with the target thread, the page may be left uncompressed. The target thread is thereby spared any decompression when it subsequently uses the data in that page, avoiding the stutter that the decompression would cause.
In addition, if the memory page to be compressed is not associated with the target thread, the target thread does not use that page at runtime, so no subsequent stutter can arise from decompression. Therefore, by judging whether the memory page to be compressed is associated with a target thread that has a high real-time requirement, and compressing it only when it is not, memory pages associated with such target threads can be at least partially excluded from compression. This avoids application stutter caused by subsequent decompression, which would degrade the user experience and reduce user stickiness.
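The decision logic of steps S110-S150 can be sketched as follows (a schematic model only; the `Page` class, its `associated_threads` field, and the thread names are hypothetical stand-ins for the kernel structures the patent describes):

```python
class Page:
    """A memory page candidate for compression, with its associated threads."""
    def __init__(self, pid, associated_threads):
        self.pid = pid
        self.associated_threads = set(associated_threads)
        self.compressed = False

def compress_candidates(pages_to_compress, target_threads):
    """Compress only pages not associated with any real-time target thread."""
    for page in pages_to_compress:
        if page.associated_threads & target_threads:
            continue            # step S140: skip, or a target thread would pay decompression cost
        page.compressed = True  # step S150: stand-in for the actual compression
    return pages_to_compress

pages = [Page(1, {"ui"}), Page(2, {"worker"})]
compress_candidates(pages, target_threads={"ui", "rt"})
assert [p.compressed for p in pages] == [False, True]  # the UI-associated page is spared
```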
Referring to fig. 3, fig. 3 is a flowchart illustrating a memory compression method according to another embodiment of the present application, which can be applied to the electronic device 100 shown in fig. 1, and the steps included in the method are described below.
Step S210, when the memory is insufficient, determining an anonymous page in the inactive anonymous page linked list as a memory page to be compressed.
In an optional manner of this embodiment, the electronic device 100 may monitor whether the memory is insufficient through the process shown in fig. 4. The detailed description is as follows.
Step S211, a memory allocation request is received, where the memory allocation request includes a first memory size requested to be allocated.
In practical applications, when a thread needs to be run, a corresponding memory space needs to be allocated to the thread. Taking the above-mentioned LinUI as an example, when a third-party platform in the electronic device 100 needs to run a certain thread, a memory allocation request is sent to the LinUI, where the memory allocation request includes a size of a memory requested to be allocated to the thread, and the size is the first memory size.
In step S212, it is determined whether the difference between the size of the currently remaining second memory and the size of the first memory reaches a memory threshold. If yes, go to step S213; if not, go to step S214.
In step S213, it is determined that the memory is not insufficient.
In step S214, the memory is determined to be insufficient.
The memory threshold may be flexibly set, for example, may be set according to statistical data. After receiving the memory allocation request, the current remaining memory size of the electronic device 100 may be obtained, where the memory size is the second memory size. A difference between the second memory size and the first memory size may then be calculated and compared to the memory threshold.
In an example, assuming that the memory threshold is 10M, the current remaining memory size (i.e., the second memory size) is 30M, and the first memory size requested to be allocated is 20M, the difference between the second memory size and the first memory size may be calculated to be 10M, which reaches 10M, and it may be determined that the current memory is not insufficient.
In another example, assuming that the memory threshold is 10M, the current remaining second memory size is 25M, and the first memory size requested to be allocated is 20M, the difference may be calculated to be 5M, and the memory threshold is not reached to 10M, so that it may be determined that the current memory is insufficient.
It is to be understood that the flow shown in fig. 4 is an example only. For example, a plurality of memory size ranges may be set, and whether the memory is insufficient may be determined according to the memory size range in which the difference between the second memory size and the first memory size is located. The present embodiment does not limit this.
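The threshold check of steps S212-S214, including the two worked examples above, can be sketched as a one-line predicate (an illustration of the described rule; the function name and units are hypothetical):

```python
def memory_insufficient(free_mem, requested, threshold):
    """Steps S212-S214: memory is insufficient when (free - requested) falls below the threshold."""
    return (free_mem - requested) < threshold

# The two worked examples above (sizes in MB, memory threshold 10 MB):
assert memory_insufficient(30, 20, 10) is False  # difference 10 reaches the threshold
assert memory_insufficient(25, 20, 10) is True   # difference 5 falls short of it
```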
In this embodiment, memory pages may be divided into anonymous pages (Anonymous Page) and file pages according to page type, and into active pages and inactive pages according to activity. A file page generally refers to a memory page that can be directly reclaimed by the system of the electronic device 100, while an anonymous page generally refers to dynamically allocated heap memory that cannot be directly reclaimed and may be accessed again later.
The electronic device 100 may maintain an LRU linked list, which is a doubly linked list recording memory pages. In this embodiment, the electronic device 100 may divide the LRU linked list into the following types according to the page type and activity of the memory pages: an inactive anonymous page linked list, an active anonymous page linked list, an inactive file page linked list, and an active file page linked list. The active anonymous page linked list records active anonymous pages, the inactive anonymous page linked list records inactive anonymous pages, the active file page linked list records active file pages, and the inactive file page linked list records inactive file pages.
In this embodiment, the memory page to be compressed is selected from among the anonymous pages because anonymous pages cannot be directly reclaimed. In detail, the anonymous pages recorded in the inactive anonymous page linked list are all anonymous pages used less recently, and such pages are usually not used again within a short time, so the anonymous pages in the inactive anonymous page linked list can serve as the memory pages to be compressed.
Further, anonymous pages in the inactive anonymous page linked list are typically moved there from the active anonymous page linked list. Illustratively, an anonymous page joins the active anonymous page linked list at its head just after being allocated, and reaches the tail after the list has been updated for a period of time, at which point the electronic device 100 moves the anonymous page to the inactive anonymous page linked list. The anonymous pages at the tail of the active anonymous page linked list are therefore the anonymous pages with the lowest recent activity, so a target number of anonymous pages at the tail of the active anonymous page linked list can also be determined as memory pages to be compressed. The target number may be flexibly set, for example, to a value from 1 to 5, such as 3.
In a possible embodiment, after determining all the memory pages to be compressed, the subsequent steps S220 to S260 may be performed for each memory page to be compressed, respectively.
In another possible implementation, the anonymous pages in the inactive anonymous page linked list may be traversed: the currently accessed anonymous page is determined as the memory page to be compressed, and the subsequent steps S220-S260 are then performed for it. Correspondingly, the target number of anonymous pages in the active anonymous page linked list may be accessed sequentially from tail to head, with the currently accessed anonymous page determined as the memory page to be compressed and the subsequent steps S220-S260 performed for it.
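The LRU bookkeeping described above can be modeled with deques standing in for the doubly linked lists (a sketch of the described aging and candidate selection; the list names and helper functions are hypothetical, and the kernel's real LRU machinery is considerably more involved):

```python
from collections import deque

# Four LRU lists keyed by page type and activity. Deques model the doubly linked
# lists: new pages enter at the head (left), aged pages leave from the tail (right).
lru = {"active_anon": deque(), "inactive_anon": deque(),
       "active_file": deque(), "inactive_file": deque()}

def add_anon_page(pid):
    lru["active_anon"].appendleft(pid)  # a newly allocated anonymous page joins the head

def age_anon_pages(n):
    """Move the n least recently used active anonymous pages to the inactive list."""
    for _ in range(min(n, len(lru["active_anon"]))):
        lru["inactive_anon"].appendleft(lru["active_anon"].pop())

def pick_pages_to_compress(target_count=3):
    """Candidates come from the inactive anonymous list, oldest (tail) first."""
    n = min(target_count, len(lru["inactive_anon"]))
    return [lru["inactive_anon"][-(i + 1)] for i in range(n)]

for pid in range(5):
    add_anon_page(pid)
age_anon_pages(2)                        # pages 0 and 1 were allocated earliest
assert pick_pages_to_compress(3) == [0, 1]
```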
In step S220, a target thread requiring real-time processing is determined.
In this embodiment, the target thread may be a UI thread or a real-time thread. The UI thread has a high requirement on real-time performance, and if the processing is not timely enough, frame loss may be caused. Based on this, step S220 may be implemented by at least one of the following steps:
determining a UI thread as the target thread;
and determining the real-time thread as the target thread.
Step S230, determining a page table (Page Table, PT) of the target thread, where the page table includes a plurality of page table entries (Page Table Entry, PTE) respectively corresponding to different memory pages.
In this embodiment, each thread has a page table, and the page table includes a plurality of page table entries, and each page table entry may include a mapping relationship between a virtual memory address and a physical memory address of a memory page. When a thread accesses a virtual memory address, a page table entry containing the virtual memory address can be found from a page table of the thread, and a physical memory address in the page table entry is accessed, so that data in a memory page indicated by the physical memory address is accessed.
If a page table entry includes a virtual memory address or a physical memory address of a memory page, the page table entry and the memory page may be regarded as corresponding to each other.
Step S240, searching whether a page table entry corresponding to the memory page to be compressed exists in the page table. If yes, go to step S250; if not, go to step S270.
In implementation, when a page table entry PTE-1 exists in the page table a1 of a thread T1, and the page table entry PTE-1 contains a virtual memory address or a physical memory address of a memory page P1, indicating that the page table entry PTE-1 corresponds to the memory page P1, the thread T1 may access data in the memory page P1 through the page table entry PTE-1, that is, the thread T1 is associated with the memory page P1. Therefore, when a page table entry corresponding to the memory page to be compressed exists in the page table of the target thread, it can be determined that the memory page to be compressed is associated with the target thread.
When no page table entry containing either the virtual memory address or the physical memory address of memory page P2 exists in page table a2 of one thread T2, it indicates that thread T2 does not access data in memory page P2, i.e., thread T2 is not associated with memory page P2. When a page table entry corresponding to the memory page to be compressed does not exist in the page table of the target thread, it may be determined that the memory page to be compressed is not associated with the target thread.
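The association check of steps S240-S250 then amounts to scanning the target thread's page table for an entry that references the candidate page (a dictionary-based illustration; the table contents and the helper name are hypothetical):

```python
# Hypothetical page tables: each entry maps a virtual page to a physical page.
page_table_t1 = {0xA0: 0x10, 0xA1: 0x11}   # thread T1
page_table_t2 = {0xB0: 0x20}               # thread T2

def is_associated(page_table, physical_page):
    """Steps S240-S250: a thread is associated with a memory page if any PTE
    in its page table points at that page's physical address."""
    return physical_page in page_table.values()

assert is_associated(page_table_t1, 0x11) is True    # T1 maps page P1 -> do not compress
assert is_associated(page_table_t2, 0x11) is False   # T2 has no PTE for it -> may compress
```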
Step S250, determining that the memory page to be compressed is associated with the target thread, and not compressing the memory page to be compressed.
Step S260, determining that the memory page to be compressed is not associated with the target thread, and compressing the memory page to be compressed.
The detailed implementation logic of step S250 and step S260 may refer to the detailed description of step S130 to step S150 in the previous embodiment, and is not described herein again.
Through the flow shown in fig. 3, application stutter caused by decompression when a target thread with a high real-time requirement uses the memory can be avoided. In particular, when the target thread is a UI thread, frames will be dropped if the UI thread cannot render in time, and the flow shown in fig. 3 can reduce the UI thread's frame loss rate. Tests show that the total number of dropped frames can be reduced by 40%.
Optionally, in this embodiment, the step of compressing the memory page to be compressed in step S260 may be implemented by the flow shown in fig. 5, which is described in detail as follows.
In step S261, first data in the memory page to be compressed is obtained.
As mentioned above, a memory page is a memory space of 512 bytes to 8 KB in which data is stored. After the memory page to be compressed is determined, the data stored in it is the first data. Correspondingly, in the implementation process, data can be read from the determined memory page to be compressed, and the read data is the first data.
Step S262, obtaining a compression result of the first data processed by the compression algorithm to obtain second data.
In this embodiment, the compression algorithm may be any algorithm capable of implementing data compression, such as huffman coding algorithm, run-length coding, arithmetic coding, and the like, which is not limited in this embodiment.
For example, the first data stored in the memory page to be compressed is usually not stored contiguously but scattered across discrete locations; that is, the memory page to be compressed may contain many memory fragments. Fig. 6A shows the distribution of data in one memory page, where the blank boxes represent memory fragments. In practical applications, what is requested is usually a contiguous memory space, so the memory fragments, although they store no data, cannot be allocated, causing waste. Therefore, the data in the memory page can be moved together according to a certain rule so that the memory fragments form a contiguous memory space. This rule may be regarded as a compression algorithm that obtains the second data by changing the distribution positions of the first data. Fig. 6B shows the distribution of data in the memory page after processing by this compression algorithm, where the blank boxes form a contiguous free memory space. The data stored in the memory page shown in fig. 6A is the first data, and the data stored in the memory page shown in fig. 6B is the second data.
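The fragment-merging "compression" illustrated by Figs. 6A and 6B can be sketched as a compaction pass over a page's slots (a toy model in which `None` marks a fragment; the representation is hypothetical):

```python
def compact(page_slots):
    """Move stored bytes together so the free slots form one contiguous region,
    as in the transition from Fig. 6A to Fig. 6B."""
    data = [b for b in page_slots if b is not None]  # the first data, order preserved
    free = len(page_slots) - len(data)
    return data + [None] * free                       # the second data + contiguous free space

fragmented = ["a", None, "b", None, None, "c"]        # Fig. 6A: scattered fragments
assert compact(fragmented) == ["a", "b", "c", None, None, None]  # Fig. 6B
```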
Step S263, storing the second data.
Step S264, releasing the memory page to be compressed.
For example, the space actually occupied by the first data in the memory page shown in fig. 6A is, for example, 1M, and after the memory page is compressed, the space actually occupied by the second data in the memory page is, for example, 0.7M, and the second data can be stored in the space of 0.7M, so that the memory page to be compressed can be released for storing other data.
The electronic device 100 may store the second data in a memory area different from the memory page to be compressed. In an alternative, the storage area may be a part of the memory space, in which case, in the above example, 0.3M of memory space may be freed. In another alternative, the storage area may be other storage media, for example, a certain space in the hard disk is usually pre-divided in the electronic device 100 as a swap partition, and the second data may be stored in the swap partition. In this manner, the entire memory page to be compressed may be freed up for use in storing other data.
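Steps S261-S264 can be sketched end to end with a byte-level compressor standing in for whatever algorithm the implementation chooses (`zlib` here is an illustrative choice, not the patent's; `swap_store` is a hypothetical stand-in for the swap partition or reserved memory area):

```python
import zlib

swap_store = {}  # hypothetical stand-in for the swap partition / reserved memory area

def compress_page(page_id, first_data):
    """Steps S261-S264: read the page's data, compress it, store the result
    elsewhere, and report how much space freeing the page reclaims."""
    second_data = zlib.compress(first_data)    # step S262
    swap_store[page_id] = second_data          # step S263: store at a new location
    return len(first_data) - len(second_data)  # step S264: net memory reclaimed

saved = compress_page(7, b"A" * 4096)          # a highly compressible 4 KB page
assert saved > 0
assert zlib.decompress(swap_store[7]) == b"A" * 4096  # the first data is recoverable
```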
In practical applications, in order to enable the target thread to continue to use the first data originally stored in the memory page to be compressed, referring to fig. 7, before executing step S264, the step of compressing the memory page to be compressed in step S260 may further include the following steps:
step S265, determining a page table entry corresponding to the memory page to be compressed, where the page table entry includes a physical memory address of the memory page to be compressed.
Referring to the above description, each page table entry includes a mapping relationship between a virtual memory address and a physical memory address of a memory page, and correspondingly, a page table entry corresponding to a memory page to be compressed also includes the physical memory address and the virtual memory address of the memory page to be compressed.
Step S266, in the page table entry corresponding to the memory page to be compressed, update the physical memory address of the memory page to be compressed to the current storage address of the second data.
In practical applications, when the target thread needs to access the first data in the memory page to be compressed, it actually requests access to the virtual memory address of that page. The electronic device 100 then determines the physical memory address of the memory page to be compressed from the page table entry containing that virtual memory address (i.e., the page table entry corresponding to the memory page to be compressed), so as to obtain the required first data from the determined physical memory address. However, at this point the first data has been compressed into the second data and stored in a location different from the memory page to be compressed. Therefore, before the memory page to be compressed is released, the physical memory address in its corresponding page table entry may be updated to the current storage address of the second data. In other words, the mapping between the virtual memory address and the physical memory address of the memory page to be compressed is updated to a mapping between that virtual memory address and the current storage address of the second data.
Thus, when the electronic device 100 accesses the virtual memory address of the memory page to be compressed, the second data can be found according to the relationship between the virtual memory address and the current storage address of the second data, and then the subsequent decompression processing is performed.
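Steps S265 and S266 amount to rewriting one mapping in the thread's page table. A minimal sketch follows; the dict-based page table, the address values, and the "swap:42" slot identifier are illustrative assumptions, not the patent's data structures:

```python
def remap_after_compression(page_table, virt_addr, second_data_addr):
    """Point the PTE for the to-be-compressed page's virtual address at the
    second data's current storage address, before the page is released."""
    if virt_addr not in page_table:
        raise KeyError("no page table entry for the memory page to be compressed")
    page_table[virt_addr] = second_data_addr

pt = {0x1000: 0x7F000}                           # virtual -> physical for the page
remap_after_compression(pt, 0x1000, "swap:42")   # hypothetical storage slot id
# pt is now {0x1000: "swap:42"}: accesses via 0x1000 can find the second data
```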
Optionally, referring to fig. 8, after step S260 is executed, the memory compression method provided in this embodiment may further include the step shown in fig. 8. The detailed description is as follows.
Step S270, responding to an access instruction of the target thread to a virtual memory address, searching a page table entry including the virtual memory address from a page table of the target thread according to the access instruction, and determining a current storage address of the second data from the found page table entry.

In practical applications, after the memory page to be compressed has been compressed, the target thread may still request access to the data in that page; in that case, the target thread usually issues an access instruction that includes the virtual memory address of the memory page to be compressed.

When the processor 110 of the electronic device 100 detects an access instruction issued by the target thread and identifies a virtual memory address in it, the processor may access the page table of the target thread and look up, among the page table entries the page table contains, the entry that includes the identified virtual memory address. Besides the identified virtual memory address, the found page table entry includes another address, which is the current storage address of the second data.
Step S280, obtaining the second data from the current storage address.
In an implementation process, after the current storage address of the second data is determined, the data in the current storage address may be read, and the read data is the second data.
Step S290, determining a decompression algorithm corresponding to the compression algorithm, and obtaining a decompression result of the second data processed by the decompression algorithm to obtain the first data.
In this embodiment, the decompression algorithm adopted in step S290 corresponds to the compression algorithm adopted in step S262; that is, the decompression algorithm is the inverse process of that compression algorithm. The second data can be restored to the first data by the decompression algorithm.
Step S2100, storing the first data in a memory page, so that the target thread accesses the data in the memory page.
After the second data is decompressed into the first data, the first data may be stored in a memory page, and the current storage address in the found page table entry may be updated to the physical memory address of the memory page that stores the decompressed first data. In this way, when the target thread accesses the identified virtual memory address, the first data can be accessed.
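Steps S270 through S2100 can be put together as one access-handling flow. The sketch below is illustrative: zlib stands in for the unspecified compression/decompression algorithm pair, and the dictionaries modeling the page table and the second-data store are assumptions, not the patent's structures.

```python
import zlib

page_table = {}     # virtual address -> ("resident", phys) or ("compressed", slot)
second_store = {}   # slot -> second data (compressed bytes)

def compress_page(virt, first_data, slot):
    second_store[slot] = zlib.compress(first_data)    # second data (steps S262/S263)
    page_table[virt] = ("compressed", slot)           # PTE update (step S266)

def access(virt, free_phys):
    state, loc = page_table[virt]                     # PTE lookup (step S270)
    if state == "compressed":
        second = second_store.pop(loc)                # fetch second data (step S280)
        first = zlib.decompress(second)               # inverse algorithm (step S290)
        page_table[virt] = ("resident", free_phys)    # restore to a page (step S2100)
        return first
    return loc  # already resident: loc is the physical page

compress_page(0x1000, b"first data " * 100, slot=7)
access(0x1000, free_phys=0x8F000)   # -> b"first data " * 100, and the PTE is resident again
```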
Referring to fig. 9, a block diagram of a memory compression apparatus according to an embodiment of the present disclosure is shown. The memory compression apparatus 900 may include: a determining module 910, a judging module 920, and a compression module 930.
The determining module 910 is configured to, when the memory is insufficient, determine a memory page to be compressed and determine a target thread requiring real-time processing.
Optionally, the determining module 910 may further be configured to: before determining a memory page to be compressed, receiving a memory allocation request, wherein the memory allocation request comprises a first memory size required to be allocated; judging whether the difference value of the size of the currently remaining second memory and the size of the first memory reaches a memory threshold value; if not, determining that the memory is insufficient; if so, it may be determined that the memory is not insufficient.
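The shortage test above (also recited in claim 10) reduces to one comparison; a sketch with illustrative megabyte values:

```python
def memory_insufficient(remaining_mb, requested_mb, threshold_mb):
    """True when the currently remaining second memory minus the requested
    first memory size fails to reach the memory threshold."""
    return (remaining_mb - requested_mb) < threshold_mb

memory_insufficient(512, 100, 300)   # False: 412 MB would remain, above the watermark
memory_insufficient(350, 100, 300)   # True: only 250 MB would remain -> compress
```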
Optionally, the determining module 910 may determine the memory page to be compressed by: and determining the anonymous page in the inactive anonymous page linked list as the memory page to be compressed.
Optionally, the determining module 910 may further determine the memory page to be compressed by: and determining the target number of anonymous pages at the tail part of the active anonymous page linked list as the memory page to be compressed.
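The two candidate-selection strategies above can be combined in one helper; the deque model of the anonymous-page linked lists and the names used here are illustrative assumptions:

```python
from collections import deque

def pick_pages_to_compress(inactive_anon, active_anon, target_from_active=0):
    """All pages on the inactive anonymous list, plus a target number of
    pages taken from the tail of the active anonymous list."""
    candidates = list(inactive_anon)
    if target_from_active:
        candidates += list(active_anon)[-target_from_active:]
    return candidates

inactive = deque(["page1", "page2"])
active = deque(["page3", "page4", "page5"])   # "page5" sits at the tail
pick_pages_to_compress(inactive, active, target_from_active=1)
# -> ["page1", "page2", "page5"]
```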
Optionally, the determining module 910 determines the target thread needing real-time processing in at least one of the following ways: determining a user interface thread as the target thread; and determining the real-time thread as the target thread.
The judging module 920 is configured to judge whether the memory page to be compressed is associated with the target thread.
Optionally, the judging module 920 may judge whether the memory page to be compressed is associated with the target thread in the following manner: determining a page table of the target thread, where the page table includes a plurality of page table entries respectively corresponding to different memory pages; searching the page table for a page table entry corresponding to the memory page to be compressed; and if no page table entry corresponding to the memory page to be compressed exists in the page table, determining that the memory page to be compressed is not associated with the target thread. Correspondingly, if a page table entry corresponding to the memory page to be compressed exists in the page table, it may be determined that the memory page to be compressed is associated with the target thread.
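This association check is a lookup over the target thread's page table; a minimal sketch, where the dict page-table model and the page identifiers are illustrative, not the patent's structures:

```python
def is_associated(thread_page_table, page_id):
    """True if any PTE of the target thread maps to this memory page."""
    return any(phys == page_id for phys in thread_page_table.values())

def may_compress(thread_page_table, page_id):
    # Compress only pages the real-time target thread does not use.
    return not is_associated(thread_page_table, page_id)

ui_thread_pt = {0x1000: "pageA", 0x2000: "pageB"}   # virtual -> physical page
may_compress(ui_thread_pt, "pageC")   # True: no PTE for it, safe to compress
may_compress(ui_thread_pt, "pageA")   # False: skip, the UI thread uses it
```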
Optionally, the judging module 920 may be further configured to: when the memory page to be compressed is associated with the target thread, refrain from compressing the memory page to be compressed.
The compressing module 930 is configured to compress the memory page to be compressed when the memory page to be compressed is not associated with the target thread.
Optionally, the manner of compressing the memory page to be compressed by the compression module 930 includes: acquiring first data in the memory page to be compressed; obtaining a compression result of the first data processed by a compression algorithm to obtain second data; storing the second data; and releasing the memory page to be compressed.
Optionally, before releasing the memory page to be compressed, the manner of compressing the memory page to be compressed by the compression module 930 may further include: determining a page table entry corresponding to the memory page to be compressed, where the page table entry includes a physical memory address of the memory page to be compressed; and, in the page table entry corresponding to the memory page to be compressed, updating the physical memory address of the memory page to be compressed to the current storage address of the second data.
Optionally, the page table entry corresponding to the memory page to be compressed may further include a virtual memory address of the memory page to be compressed. Based on this, the apparatus 900 may further include a decompression module.
The decompression module is configured to: after the compression module 930 compresses the memory page to be compressed, in response to an access instruction of the target thread to a virtual memory address, searching a page table entry including the virtual memory address from a page table of the target thread according to the access instruction, and determining the current storage address of the second data from the searched page table entry; acquiring the second data from the current storage address; determining a decompression algorithm corresponding to the compression algorithm, and acquiring a decompression result of the second data processed by the decompression algorithm to obtain the first data; and storing the first data in a memory page, so that the target thread accesses the data in the memory page.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling, direct coupling, or communication connection between the modules shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between the devices or modules may be electrical, mechanical, or in another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module.
Referring to fig. 10, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer readable medium 1000 has stored therein a program code 1010, said program code 1010 being invokable by a processor for performing the method described in the above method embodiments.
The computer-readable storage medium 1000 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1000 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1000 has storage space for program code 1010 that performs any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 1010 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method for memory compression, comprising:
when the memory is insufficient, determining a memory page to be compressed;
determining a target thread needing real-time processing;
judging whether the memory page to be compressed is associated with the target thread;
and if the memory page to be compressed is not associated with the target thread, compressing the memory page to be compressed.
2. The method of claim 1, further comprising:
and if the memory page to be compressed is associated with the target thread, the memory page to be compressed is not compressed.
3. The method according to claim 1 or 2, wherein the determining the memory page to be compressed comprises:
and determining the anonymous page in the inactive anonymous page linked list as the memory page to be compressed.
4. The method of claim 3, wherein determining the memory page to be compressed further comprises:
and determining the target number of anonymous pages at the tail part of the active anonymous page linked list as the memory page to be compressed.
5. The method according to claim 1 or 2, wherein the determining the target thread requiring real-time processing comprises at least one of the following steps:
determining a user interface thread as the target thread;
and determining the real-time thread as the target thread.
6. The method according to claim 1 or 2, wherein the determining whether the memory page to be compressed is associated with the target thread comprises:
determining a page table of the target thread, wherein the page table comprises a plurality of page table entries corresponding to different memory pages respectively;
searching whether a page table entry corresponding to the memory page to be compressed exists in the page table;
if the page table does not have a page table entry corresponding to the memory page to be compressed, determining that the memory page to be compressed is not associated with the target thread.
7. The method according to claim 6, wherein the compressing the memory page to be compressed comprises:
acquiring first data in the memory page to be compressed;
obtaining a compression result of the first data processed by a compression algorithm to obtain second data;
storing the second data;
and releasing the memory page to be compressed.
8. The method according to claim 7, wherein before releasing the memory page to be compressed, the compressing the memory page to be compressed further comprises:
determining a page table entry corresponding to the memory page to be compressed, wherein the page table entry comprises a physical memory address of the memory page to be compressed;
and updating the physical memory address of the memory page to be compressed into the current storage address of the second data in the page table entry corresponding to the memory page to be compressed.
9. The method according to claim 8, wherein the page table entry corresponding to the memory page to be compressed further includes a virtual memory address of the memory page to be compressed; after the memory page to be compressed is compressed, the method further includes:
responding to an access instruction of the target thread to a virtual memory address, searching a page table item comprising the virtual memory address from a page table of the target thread according to the access instruction, and determining the current storage address of the second data from the searched page table item;
acquiring the second data from the current storage address;
determining a decompression algorithm corresponding to the compression algorithm, and acquiring a decompression result of the second data processed by the decompression algorithm to obtain the first data;
and storing the first data in a memory page, so that the target thread accesses the data in the memory page.
10. The method according to claim 1 or 2, wherein before determining the memory page to be compressed, the method further comprises:
receiving a memory allocation request, wherein the memory allocation request comprises a first memory size required to be allocated;
judging whether the difference value of the size of the currently remaining second memory and the size of the first memory reaches a memory threshold value;
if not, determining that the memory is insufficient.
11. A memory compression device, comprising:
the determining module is used for determining a memory page to be compressed and determining a target thread needing real-time processing when the memory is insufficient;
the judging module is used for judging whether the memory page to be compressed is associated with the target thread;
and the compression module is used for compressing the memory page to be compressed when the memory page to be compressed is not associated with the target thread.
12. An electronic device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-10.
13. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to perform the method according to any one of claims 1-10.
CN202010102545.9A 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment Active CN111352861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010102545.9A CN111352861B (en) 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010102545.9A CN111352861B (en) 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111352861A true CN111352861A (en) 2020-06-30
CN111352861B CN111352861B (en) 2023-09-29

Family

ID=71197988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010102545.9A Active CN111352861B (en) 2020-02-19 2020-02-19 Memory compression method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111352861B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880928A (en) * 2020-07-06 2020-11-03 Oppo广东移动通信有限公司 Method for selecting process to release and terminal equipment
CN112052089A (en) * 2020-09-01 2020-12-08 Oppo(重庆)智能科技有限公司 Memory recovery method and device and electronic equipment
CN112069433A (en) * 2020-09-10 2020-12-11 Oppo(重庆)智能科技有限公司 File page processing method and device, terminal equipment and storage medium
CN112685333A (en) * 2020-12-28 2021-04-20 上海创功通讯技术有限公司 Heap memory management method and device
CN113296940A (en) * 2021-03-31 2021-08-24 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113610348A (en) * 2021-07-06 2021-11-05 安徽海博智能科技有限责任公司 Strip mine card scheduling method, system, device and storage medium
CN113885787A (en) * 2021-06-08 2022-01-04 荣耀终端有限公司 Memory management method and electronic equipment
CN114461375A (en) * 2021-07-30 2022-05-10 荣耀终端有限公司 Memory resource management method and electronic equipment
WO2022267664A1 (en) * 2021-06-24 2022-12-29 荣耀终端有限公司 Memory cold page processing method and electronic device
WO2024044986A1 (en) * 2022-08-30 2024-03-07 晶晨半导体(上海)股份有限公司 Memory management method and module, chip, electronic device, and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130191649A1 (en) * 2012-01-23 2013-07-25 International Business Machines Corporation Memory address translation-based data encryption/compression
CN103970256A (en) * 2014-04-22 2014-08-06 中国科学院计算技术研究所 Energy saving method and system based on memory compaction and CPU dynamic frequency modulation
CN104216696A (en) * 2013-06-05 2014-12-17 北京齐尔布莱特科技有限公司 Thumbnail component method
CN104750854A (en) * 2015-04-16 2015-07-01 武汉海达数云技术有限公司 Mass three-dimensional laser point cloud compression storage and rapid loading and displaying method
CN105631035A (en) * 2016-01-04 2016-06-01 北京百度网讯科技有限公司 Data storage method and device
CN106503032A (en) * 2016-09-09 2017-03-15 深圳大学 A kind of method and device of data compression
CN106557436A (en) * 2016-11-17 2017-04-05 乐视控股(北京)有限公司 The memory compression function enabled method of terminal and device
CN106970881A (en) * 2017-03-10 2017-07-21 浙江大学 The one cold and hot page based on big page is followed the trail of and pressure recovery method
CN107704321A (en) * 2017-09-30 2018-02-16 北京元心科技有限公司 Memory allocation method and device and terminal equipment
CN107885672A (en) * 2017-11-07 2018-04-06 杭州顺网科技股份有限公司 Internal storage management system and method
CN108062336A (en) * 2016-11-09 2018-05-22 腾讯科技(北京)有限公司 Media information processing method and device
CN110008016A (en) * 2019-04-15 2019-07-12 深圳市万普拉斯科技有限公司 Anonymous page management method, device, terminal device and readable storage medium storing program for executing
CN110457235A (en) * 2019-08-20 2019-11-15 Oppo广东移动通信有限公司 Memory compression methods, device, terminal and storage medium
CN110704199A (en) * 2019-09-06 2020-01-17 深圳平安通信科技有限公司 Data compression method and device, computer equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xie Wendi et al.: "A Linux Memory Management Mechanism", Journal of Xinxiang University (《新乡学院学报》), vol. 33, no. 12, pages 31-36 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880928B (en) * 2020-07-06 2024-04-19 Oppo广东移动通信有限公司 Method for releasing selection process and terminal equipment
CN111880928A (en) * 2020-07-06 2020-11-03 Oppo广东移动通信有限公司 Method for selecting process to release and terminal equipment
CN112052089A (en) * 2020-09-01 2020-12-08 Oppo(重庆)智能科技有限公司 Memory recovery method and device and electronic equipment
CN112069433A (en) * 2020-09-10 2020-12-11 Oppo(重庆)智能科技有限公司 File page processing method and device, terminal equipment and storage medium
CN112685333A (en) * 2020-12-28 2021-04-20 上海创功通讯技术有限公司 Heap memory management method and device
CN113296940B (en) * 2021-03-31 2023-12-08 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113296940A (en) * 2021-03-31 2021-08-24 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113885787A (en) * 2021-06-08 2022-01-04 荣耀终端有限公司 Memory management method and electronic equipment
WO2022267664A1 (en) * 2021-06-24 2022-12-29 荣耀终端有限公司 Memory cold page processing method and electronic device
CN113610348A (en) * 2021-07-06 2021-11-05 安徽海博智能科技有限责任公司 Strip mine card scheduling method, system, device and storage medium
CN114461375B (en) * 2021-07-30 2023-01-20 荣耀终端有限公司 Memory resource management method and electronic equipment
CN114461375A (en) * 2021-07-30 2022-05-10 荣耀终端有限公司 Memory resource management method and electronic equipment
WO2024044986A1 (en) * 2022-08-30 2024-03-07 晶晨半导体(上海)股份有限公司 Memory management method and module, chip, electronic device, and storage medium

Also Published As

Publication number Publication date
CN111352861B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
CN111352861B (en) Memory compression method and device and electronic equipment
US9274839B2 (en) Techniques for dynamic physical memory partitioning
JP5211751B2 (en) Calculator, dump program and dump method
CN111625191A (en) Data reading and writing method and device, electronic equipment and storage medium
CN110765031B (en) Data storage method and device, mobile terminal and storage medium
US20220035655A1 (en) Method and Device for Anonymous Page Management, Terminal Device, and Readable Storage Medium
US20240061789A1 (en) Methods, apparatuses, and electronic devices for evicting memory block in cache
CN111309267B (en) Storage space allocation method and device, storage equipment and storage medium
US11467734B2 (en) Managing swap area in memory using multiple compression algorithms
CN111723057A (en) File pre-reading method, device, equipment and storage medium
CN112905111A (en) Data caching method and data caching device
CN112764925A (en) Data storage method, device, equipment and storage medium based on virtual memory
CN115543532A (en) Processing method and device for missing page exception, electronic equipment and storage medium
CN108681469B (en) Page caching method, device, equipment and storage medium based on Android system
KR20190117294A (en) Electronic apparatus and controlling method thereof
CN108875036B (en) Page data caching method and device and electronic equipment
CN112654965A (en) External paging and swapping of dynamic modules
CN113138941A (en) Memory exchange method and device
US9405470B2 (en) Data processing system and data processing method
CN115328405A (en) Data processing method and device and electronic equipment
CN115421907A (en) Memory recovery method and device, electronic equipment and storage medium
CN115576863A (en) Data reading and writing method, storage device and storage medium
CN111562983B (en) Memory optimization method and device, electronic equipment and storage medium
CN115543859A (en) Wear leveling optimization method, device, equipment and medium for multi-partition SSD
CN113849311A (en) Memory space management method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant