CN114518962A - Memory management method and device - Google Patents

Memory management method and device

Info

Publication number: CN114518962A
Authority: CN (China)
Prior art keywords: memory, segment, space, application program, memory segment
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed): Pending
Application number: CN202210392252.8A
Other languages: Chinese (zh)
Inventors: 赵刚 (Zhao Gang), 倪佳 (Ni Jia)
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list): Beijing Oceanbase Technology Co Ltd
Original Assignee: Beijing Oceanbase Technology Co Ltd
Application filed by Beijing Oceanbase Technology Co Ltd
Priority to CN202311453727.0A (publication CN117435343A)
Priority to CN202210392252.8A (publication CN114518962A)
Publication of CN114518962A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Abstract

The present disclosure provides a memory management method and device. The method includes: in response to a first memory requirement sent by an application program, allocating a first memory segment for the application program, where the first memory segment is located in a heap memory segment of the user space of an operating system; in response to a second memory requirement of the application program, searching for free memory space in the first memory segment; and when the free memory space in the first memory segment meets the second memory requirement, allocating the memory space required by the application program from the first memory segment. According to the method and the device, a first memory segment with a larger memory space is applied for first; when the available memory space in the first memory segment meets the current memory requirement of the application program, the needed memory space can be allocated directly from the first memory segment without sending a memory application to the operating system, which reduces the interaction frequency with the operating system and improves the running efficiency of the application program.

Description

Memory management method and device
Technical Field
The present disclosure relates to the field of memory management technologies, and in particular, to a method and an apparatus for managing a memory.
Background
In the prior art, memory application and release requests need to be sent to the operating system continuously while an application program is running; this excessive interaction with the operating system leads to low program running efficiency.
Disclosure of Invention
The present disclosure provides a memory management method and device to solve the problem of low program running efficiency caused by excessive interaction with the operating system. In addition, by designing a three-level memory management method, memory isolation among multiple tenants of an application program and among modules within a tenant is realized, which facilitates fine-grained control and tuning.
In a first aspect, a memory management method is provided. The method includes: in response to a first memory requirement sent by an application program, allocating a first memory segment for the application program, where the first memory segment is located in a heap memory segment of the user space of an operating system; in response to a second memory requirement of the application program, searching for free memory space in the first memory segment; and when the free memory space in the first memory segment meets the second memory requirement, allocating the memory space required by the application program from the first memory segment.
As a possible implementation, the method further includes: when the first memory segment has no available memory space or the free memory space in the first memory segment does not meet the second memory requirement, allocating a second memory segment to the application program in response to the second memory requirement being sent to the operating system, where the second memory segment is located in a heap memory segment of the user space.
As a possible implementation manner, the allocating a second memory segment for the application program includes: if the total memory space of the first memory segment and the second memory segment meets a preset condition, releasing all or part of the memory of the first memory segment; and if the total memory space of the first memory segment and the second memory segment does not meet the preset condition, allocating a second memory segment for the application program.
As a possible implementation manner, the preset condition is that a total memory space of the first memory segment and the second memory segment exceeds an upper memory limit applicable by the application program.
As a possible implementation, the method further includes: and responding to a memory release request sent by an application program, and releasing the memory of the first memory segment.
As a possible implementation manner, performing memory release on the first memory segment in response to a memory release request sent by the application program includes: obtaining the remaining memory space that the application program can apply for, where the remaining memory space is the difference between the applicable upper memory limit and the memory space of the first memory segment; and in response to a memory release request sent by the application program, performing memory release on the first memory segment if the remaining memory space does not exceed the memory space of the first memory segment.
As a possible implementation, the method further includes: in response to a memory release request sent by the application program, if the remaining memory space exceeds the memory space of the first memory segment, terminating the memory release of the first memory segment and caching the first memory segment.
As a possible implementation manner, when the free memory space in the first memory segment meets the second memory requirement, allocating the memory space required by the application program from the first memory segment includes: and when the free memory space in the first memory segment meets the second memory requirement, allocating memory space of a tenant level from the first memory segment according to the memory requirement of each tenant.
In a second aspect, a memory management apparatus is provided. The apparatus includes: a first allocation module configured to allocate a first memory segment for an application program in response to a first memory requirement sent by the application program, where the first memory segment is located in a heap memory segment of the user space of an operating system; a lookup module configured to search for free memory space in the first memory segment in response to a second memory requirement of the application program; and a second allocation module configured to allocate the required memory space for the application program from the first memory segment when the free memory space in the first memory segment meets the second memory requirement.
As a possible implementation manner, the second allocating module is further configured to: and when the first memory segment has no available memory space or the free memory space in the first memory segment does not meet the second memory requirement, allocating a second memory segment to the application program, wherein the second memory segment is located in the heap memory segment of the user space.
As a possible implementation manner, the second allocating module is further configured to: if the total memory space of the first memory segment and the second memory segment meets a preset condition, releasing all or part of the memory of the first memory segment; and if the total memory space of the first memory segment and the second memory segment does not meet the preset condition, allocating a second memory segment for the application program.
As a possible implementation manner, the preset condition is that a total memory space of the first memory segment and the second memory segment exceeds an upper memory limit applicable by the application program.
As a possible implementation manner, the apparatus further includes: and the releasing module is configured to respond to a memory releasing request sent by an application program and release the memory of the first memory segment.
As a possible implementation, the releasing module is configured to: obtain the remaining memory space that the application program can apply for, where the remaining memory space is the difference between the applicable upper memory limit and the memory space of the first memory segment; and in response to a memory release request sent by the application program, perform memory release on the first memory segment if the remaining memory space does not exceed the memory space of the first memory segment.
As a possible implementation manner, the apparatus further includes: and the cache module is configured to respond to a memory release request sent by an application program, terminate memory release of the first memory segment and cache the first memory segment if the remaining memory space exceeds the memory space of the first memory segment.
As a possible implementation manner, the second allocating module is configured to: and when the free memory space in the first memory segment meets the second memory requirement, allocating memory space of a tenant level from the first memory segment according to the memory requirement of each tenant.
In a third aspect, an apparatus for managing a memory is provided, where the apparatus includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method according to the first aspect or any implementation manner of the first aspect when executing the computer program.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon executable code that, when executed, is capable of implementing a method as described in the first aspect or any one of the implementations of the first aspect.
In a fifth aspect, there is provided a computer program product comprising executable code that, when executed, is capable of implementing a method as described in the first aspect or any implementation manner of the first aspect.
The embodiments of the present disclosure provide a memory management method in which a first memory segment with a larger memory space is applied for first and used to manage subsequent memory applications. When the available memory space in the first memory segment meets the current memory requirement of the application program, the required memory space can be allocated directly from the first memory segment without sending a memory application to the operating system, which reduces the interaction frequency with the operating system and improves the running efficiency of the application program.
Drawings
Fig. 1 is a schematic diagram of a layout structure of a virtual address space according to an embodiment of the disclosure.
Fig. 2 is a flowchart illustrating a memory management method according to an embodiment of the disclosure.
Fig. 3 is a schematic structural diagram of a memory management architecture according to an embodiment of the disclosure.
Fig. 4 is a schematic structural diagram of a memory management architecture according to another embodiment of the disclosure.
Fig. 5 is a flowchart illustrating a memory application method according to an embodiment of the disclosure.
Fig. 6 is a flowchart illustrating a memory releasing method according to an embodiment of the disclosure.
Fig. 7 is a schematic structural diagram of a memory management device according to an embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of a memory management device according to another embodiment of the present disclosure.
Detailed Description
Technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings of the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments.
The memory is one of the most important components of a computer. The memory, which may also be called the main memory, is a bridge between the CPU and external storage devices such as disks and tapes. When a program runs, the operating system first needs to load the data and the program to be run into the memory and then fetch them from the memory into the CPU. If data that the CPU needs to access is not in the memory, a page fault is automatically triggered, and the operating system, together with dedicated hardware, responds to the interrupt and loads the data into the memory.
To simplify the management of applications, operating systems typically manage memory using virtual memory technology to provide applications with a consistent view of memory addresses. This technology can also use part of a disk as memory to relieve the memory pressure of application programs. The operating system may be a computer program for managing hardware resources and software resources, for example a Windows operating system, a Linux operating system, or an iOS operating system, which is not limited in this disclosure. The physical memory may be the memory space provided by physical memory modules, and its size may be, for example, the capacity of the real memory modules inserted into the memory slots on the motherboard. The memory modules may be, for example, random access memory (RAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), or double data rate SDRAM (DDR SDRAM).
The operating system may allocate a virtual address (VA) to the program. When the processor reads data, it converts the virtual address in the virtual memory into a physical address (PA) in the physical memory through a memory management unit, and the actual reading and writing of data is performed in the physical address space. The memory management unit may be, for example, a memory management unit (MMU) or a system memory management unit (SMMU).
When a process is created, the operating system allocates a fixed-size virtual address space (i.e., virtual memory) to the process, which is described in detail below with reference to fig. 1, taking the Linux operating system as an example. As shown in fig. 1, the virtual address space 100 may include a user space where applications run and a kernel space where the operating system runs. For example, each process may be allocated a virtual address space of 4 GB, where 0-3 GB is the user space of the process and 3-4 GB is the kernel space. Further, the kernel space may include, for example, the kernel image (Kernel Image), the physical page frame table (mem_map), a dynamic memory mapping region (e.g., the vmalloc and ioremap regions), a persistent kernel mapping region (Persistent Kernel Mapping), a temporary kernel mapping region (Temporary Kernel Mapping), and so on. The user space may be divided into segments and may include, for example, a reserved area (Reserved), a program segment (Text Segment), an initialized data segment (Data Segment), an uninitialized data segment (BSS Segment), a heap address segment (which may also be referred to as the heap memory segment), a stack address segment, command-line arguments, and global environment variables. It should be noted that the heap address segment is an address segment managed and used by the application program itself. The application program can apply for heap memory through API interfaces such as malloc/alloc/mmap, and can release heap memory through API interfaces such as free/munmap.
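As a point of reference, the following minimal C++ sketch shows an application program applying for heap memory through the malloc and mmap interfaces mentioned above and releasing it through free and munmap; the sizes and mapping flags chosen here are illustrative assumptions only.

```cpp
#include <cstdlib>     // std::malloc, std::free
#include <sys/mman.h>  // mmap, munmap

int main() {
  // Small allocation: typically served from the heap segment via malloc/free.
  void* small = std::malloc(4096);
  std::free(small);

  // Larger allocation: applied for from the operating system via mmap and
  // returned with munmap.
  const std::size_t kSize = 10 * 1024 * 1024;  // 10 MB, an illustrative size
  void* big = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (big != MAP_FAILED) {
    munmap(big, kSize);
  }
  return 0;
}
```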
The heap address segment is managed and used by the application program. In order to improve the utilization of the heap memory segment, an on-demand allocation policy is usually adopted during the running of the application program, that is, memory is continuously applied for or released to the operating system according to the memory requirement. This results in excessive interaction with the operating system and makes the program less efficient to run.
In order to solve the above problem, an embodiment of the present disclosure provides a memory management method in which a first memory segment with a large memory space is applied for first and used to manage subsequent memory applications. When the available memory space in the first memory segment meets the current memory requirement of the application program, the memory space required by the application program can be allocated directly from the first memory segment without issuing a memory application to the operating system, which reduces the interaction frequency with the operating system and improves the running efficiency of the application program.
The following describes a memory management method according to an embodiment of the disclosure with reference to fig. 2. As shown in FIG. 2, the memory management method 200 includes steps S220-S260.
In step S220, a first memory segment is allocated to the application program in response to the first memory requirement sent by the application program.
An application may refer to a computer program that is developed to run in user space in order to accomplish a particular task or tasks. It should be understood that an application may include one or more processes, each of which requires a certain memory space, and generally, a process sends a memory request to an operating system, and the operating system performs memory allocation.
The operating system may be the above-mentioned operating system, and the operating system may be, for example, a Windows operating system, a Linux operating system, an iOS operating system, or the like.
The first memory requirement may be a memory request sent by the application program (or a process in the application program) to the operating system. As an example, when the application program runs for the first time (for example, when a process is created), it sends a memory application request to the operating system, and in response the operating system allocates a segment of memory for the application program. As another example, for each logged-in user, the application program of an existing system (e.g., the MySQL database system) also sends a memory request to the operating system, and the operating system performs the memory allocation; for example, a separate thread structure THD may be allocated using the new operator. It can be understood that in the subsequent processing of user SQL, new memory structures are also allocated as needed.
The first memory segment is the allocated memory; it may refer to a segment of memory address space allocated by the operating system for the application program, and it is located in a heap memory segment of the user space of the operating system.
It should be noted that the heap address segment is located in the user space and is a segment of the address space that is managed and used by the application program itself.
In step S240, in response to the second memory requirement of the application program, a free memory space in the first memory segment is searched.
After the first memory segment is allocated, when the application program has a memory requirement again, the free memory space in the first memory segment can be searched first, instead of directly sending a memory application to the operating system, so that the interaction with the operating system can be reduced.
The second memory requirement is the application memory requirement after the operating system allocates the first memory segment for the application.
The manner of searching the free memory space of the first memory segment in the embodiment of the present disclosure is not particularly limited, and for example, the free memory space of the first memory segment may be searched by an SQL command.
In step S260, when the free memory space in the first memory segment meets the second memory requirement, the memory space required by the application program is allocated from the first memory segment.
In some embodiments, in response to the second memory requirement, the free memory space in the first memory segment may be queried first; when there is no available memory space in the first memory segment or the free memory space in the first memory segment does not satisfy the second memory requirement, a memory application request for the second memory requirement may be sent to the operating system. Otherwise, in response to the second memory requirement, the memory space required by the application program can be allocated directly from the first memory segment without sending a memory application to the operating system, which reduces the interaction frequency with the operating system and improves the running efficiency of the application program. It should be noted that the second memory segment is located in the heap memory segment of the user space.
In some embodiments, in order to centrally manage the otherwise scattered memory applications of the application program, the memory allocation of the operating system may be encapsulated, and a single dynamic memory application interface is provided externally for applying for memory blocks of a fixed size. For example, encapsulation of an operating system memory allocation API (such as mmap) may be implemented by the ChunkMgr class, and alloc_chunk is provided externally as the unique dynamic memory application interface. That is, the present disclosure may provide a single entry point for memory application and memory release. In response to each memory requirement, the operating system may allocate a memory address space of a fixed size (e.g., 10 MB) for the application program. As one example, in response to the first memory requirement, a first memory segment with a memory space of 10 MB is allocated for the application program. As another example, in response to the second memory requirement being sent to the operating system, the operating system allocates a second memory segment, also with a memory space of 10 MB, for the application program. That is, the operating system allocates a memory address space of the same size each time.
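A minimal C++ sketch of such an encapsulation is given below. The class name ChunkMgr follows the example above, while the member names (hold_, mmaps_, munmaps_), the 10 MB chunk size, and the use of mmap/munmap are illustrative assumptions rather than a definitive implementation.

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <cstdint>

// Illustrative sketch: a single entry point that wraps the operating-system
// memory allocation API (mmap here) and hands out fixed-size chunks.
class ChunkMgr {
 public:
  static constexpr std::size_t kChunkSize = 10 * 1024 * 1024;  // assumed 10 MB

  // Unique dynamic memory application interface.
  void* alloc_chunk() {
    void* chunk = mmap(nullptr, kChunkSize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED) return nullptr;
    hold_ += kChunkSize;  // total memory applied for from the OS
    ++mmaps_;             // number of applications sent to the OS
    return chunk;
  }

  // Unique memory release interface.
  void free_chunk(void* chunk) {
    if (chunk == nullptr) return;
    munmap(chunk, kChunkSize);
    hold_ -= kChunkSize;
    ++munmaps_;
  }

 private:
  std::size_t hold_ = 0;      // memory currently held
  std::uint64_t mmaps_ = 0;   // applications sent to the OS
  std::uint64_t munmaps_ = 0; // releases sent to the OS
};
```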
In some embodiments, on the basis of centralized management, in order to manage the allocable space of the application program, an upper limit of allocable memory (i.e., an upper limit of memory that can be applied for) may be set, an upper limit of memory reserved for emergency use may be set, and an upper limit on the maximum amount of cached memory may also be set. For finer-grained management, an allocated memory segment with a larger memory space may be split into a plurality of ordinary memory blocks, for example according to the allocable memory of each tenant. As an example, a first memory segment of 10 MB may be split into five ordinary memory blocks of 2 MB. The ordinary memory blocks are then managed as a chain: after an ordinary memory block has been used, it can be added to a cache chain for caching instead of sending a release request to the operating system, which reduces the interaction frequency with the operating system and improves the running efficiency of the application program.
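The splitting and chain management described above may be sketched as follows. The BlockHeader/BlockCache names, the intrusive free list, and the 2 MB block size are assumptions used only to illustrate how used ordinary memory blocks can be returned to a cache chain instead of being released to the operating system.

```cpp
#include <cstddef>
#include <new>

// Illustrative sketch: split one large memory segment into ordinary memory
// blocks and keep free blocks on a cache chain for later reuse.
struct BlockHeader {
  BlockHeader* next = nullptr;  // link in the cache chain
};

class BlockCache {
 public:
  static constexpr std::size_t kBlockSize = 2 * 1024 * 1024;  // assumed 2 MB

  // Split a chunk (e.g. the first memory segment) into ordinary blocks and
  // put them all on the cache chain.
  void add_chunk(void* chunk, std::size_t chunk_size) {
    char* p = static_cast<char*>(chunk);
    for (std::size_t off = 0; off + kBlockSize <= chunk_size; off += kBlockSize) {
      // Place a header at the start of each ordinary block and chain it.
      push(::new (static_cast<void*>(p + off)) BlockHeader());
    }
  }

  // Allocation first consults the cache chain; no OS interaction is needed
  // while a cached block is available.
  void* alloc_block() {
    BlockHeader* b = head_;
    if (b != nullptr) head_ = b->next;
    return b;
  }

  // Release simply returns the block to the cache chain.
  void free_block(void* block) { push(static_cast<BlockHeader*>(block)); }

 private:
  void push(BlockHeader* b) { b->next = head_; head_ = b; }
  BlockHeader* head_ = nullptr;
};
```

In this sketch, allocation and release both stay inside the cache chain, so the operating system is only contacted when a new chunk has to be applied for.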
In some embodiments, the interaction frequency with the operating system may be controlled by controlling the application and release of memory. For example, when the application program sends a memory application request for the second memory requirement to the operating system, the total amount of memory applied for may first be computed, and it is then determined whether this total meets a preset condition. If the preset condition is met, part or all of the already allocated memory (i.e., the first memory segment at this point) needs to be released. If the preset condition is not met, the operating system may allocate a second memory segment to the application program in response to the second memory requirement.
It should be noted that the preset condition may be that the total amount of memory applied for exceeds the upper limit of allocable memory, where the total amount of memory applied for is the total memory space of the applied first memory segment and second memory segment, i.e., the sum of the currently requested memory amount and the already allocated memory amount. As an example, when the application program sends a memory application request for the second memory requirement to the operating system, the allocated memory amount is the first memory segment and the currently applied memory amount is the second memory segment; at this point, the total amount of memory applied for is the sum of the first memory segment and the second memory segment.
It should be noted that each time a memory application request is issued to the operating system, it is necessary to determine whether the total amount of memory applied for satisfies the preset condition. It should be understood that if the total amount of memory applied for satisfies the preset condition, then after the allocated memory is released it is still necessary to determine again whether the total amount of memory applied for satisfies the preset condition, where the allocated memory amount now refers to the allocated memory remaining after the release.
In some embodiments, in response to a memory release request from an application to a first memory segment, a memory release may be performed for the first memory segment. For example, when the applied total memory amount satisfies the preset condition, a memory release request may be sent to the operating system, and part or all of the memory in the first memory segment is released.
In some embodiments, in response to a memory release request of the application program for an applied memory segment, the remaining memory space that the application program can apply for may be calculated. If the remaining memory space does not exceed the allocated memory space (for example, before the second memory segment is allocated, the allocated memory space is the first memory segment), a memory release may be performed on the allocated memory space. If the remaining memory space exceeds the allocated memory space, the memory release of the allocated memory space may be terminated, and the memory segment may be cached for subsequent centralized release, which reduces the interaction frequency with the operating system. Taking the first memory segment as the currently allocated memory space as an example: if the remaining memory space does not exceed the memory space of the first memory segment, the first memory segment is released in response to the memory release request for it; if the remaining memory space exceeds the memory space of the first memory segment, the memory release of the first memory segment is terminated in response to the memory release request, and the first memory segment is cached.
It should be noted that the remaining memory space that can be applied for may refer to the difference between the upper memory application limit of the application program (i.e., the upper limit of allocable memory) and the allocated memory space (e.g., the first memory segment). The upper limit of allocable memory can be customized according to user requirements, which is not particularly limited in the embodiments of the present disclosure.
It should be noted that when there is free memory space in the allocated memory that meets the memory requirement of the application program, the block resource manager or the software program may allocate the required memory from the free space without sending an application request to the operating system. For example, chunk resource management may be provided by ChunkMgr to enable allocation of the free memory space within a chunk.
As can be seen from the foregoing, the present disclosure provides a memory management method that is mainly used for managing the memory space that an application program can apply for. For example, a first memory segment with a larger memory space may be applied for first; when the application program issues a memory requirement again, the available memory space in the first memory segment is searched first, and if it meets the memory requirement, the needed memory space is allocated directly from the first memory segment without applying to the operating system. When allocated memory space needs to be released, the remaining memory space that the application program can apply for may first be calculated; if the remaining memory space exceeds the allocated memory space, the memory space to be released can be cached to await subsequent centralized release instead of sending a memory release request to the operating system. That is to say, the scheme of the present disclosure reduces the interaction frequency with the operating system by controlling the application and release of memory, thereby improving the running efficiency of the application program.
A database system can have multiple tenants, and these tenants can be completely isolated from one another. In terms of data security, the data assets of a user can be protected from leakage by forbidding data access across tenants. A tenant may be a container of resources in the database system, where the resources may be hardware resources such as CPU, disk, and memory, and these resources may be used by multiple users. That is, a tenant may be a collection of database functions with certain software and hardware resources, and a user uses the database through a tenant. One tenant may have one or more users, and one user may belong to different tenants.
In some embodiments, when the free memory space in the first memory segment meets the second memory requirement, the tenant-level memory space required by each tenant may be allocated from the first memory segment according to the actual memory requirement of each tenant. That is, after the application program applies for a large block of memory (e.g., the first memory segment) from the operating system, the memory required by each tenant can be obtained from this large block according to the different tenants in the application program. Furthermore, tenant-level memory application and release can be performed on the first memory segment according to the applicable upper memory limit of each tenant. Setting the memory management granularity to the tenant level, with each tenant independent, realizes memory resource isolation among tenants. Therefore, the memory of different tenants can be finely tuned during the running of the application program according to specific service scenarios.
In some embodiments, each tenant may include, for example, an execution plan module, a memtable module, an SQL module, a transaction module, and the like. Memory application and release at the module level can be performed according to the different modules within a single tenant. That is, after a tenant acquires its tenant-level memory, the memory required by each module may be obtained from the tenant according to the different modules in the tenant. Further, module-level memory application and release can be performed within the tenant according to the applicable upper memory limit of each module; for example, memory can be applied for or released per module, such as the execution plan module, the memtable module, the SQL module, and the transaction module. Refining the memory management granularity to the module level, with each module independent, is more favorable for realizing memory resource isolation among modules, so that the different memory modules can be finely tuned during tenant operation according to specific service scenarios.
According to the above scheme, through three-level management at the module level, tenant level, and block level, the applications and releases of numerous tiny memory pieces (such as module-level or tenant-level memory) can be aggregated, and memory is ultimately applied for or released to the operating system at the block level, forming a self-defined large-block memory management method. This effectively avoids the problem of memory fragmentation and reduces interaction with the operating system. In addition, by designing a three-level memory management method, memory isolation among multiple tenants of an application program and among modules within a tenant can be realized, which facilitates fine-grained control and tuning.
In order to further describe the memory management method in the embodiment of the present disclosure, the following describes the memory management architecture in detail with reference to fig. 3 and fig. 4. It should be appreciated that the memory management architecture of the present disclosure may be utilized to manage the application and release of heap memory segments.
As shown in fig. 3, the memory management architecture 300 may include a block management layer 310, a tenant management layer 320, and a module management layer 330.
The block management layer 310 may include a block resource manager, which encapsulates the memory allocation of the operating system and provides a single dynamic memory application interface externally. Through this interface, the application can apply for encapsulated memory blocks of a fixed size. As an example, encapsulation of an operating system memory allocation API (such as mmap) may be implemented by the ChunkMgr class, and alloc_chunk is provided externally as the unique dynamic memory application interface for applying for large blocks of memory, for example a memory block with a larger memory address space allocated by the operating system, such as the first memory segment. As described above, in response to each memory requirement the operating system allocates a memory address space of the same fixed size (e.g., 10 MB), so the first memory segment and a later second memory segment each occupy, for example, 10 MB.
In some embodiments, to facilitate fine-grained management of the memory space, a first memory block with a larger memory space may be split into a plurality of ordinary memory blocks (for example, according to the allocable memory of each tenant), which are then managed as a chain. As an example, a first memory segment of 10 MB may be split into five ordinary memory blocks of 2 MB. In this way, as long as the upper limit of allocable memory of the application program has not been reached, an ordinary memory block that was used earlier can be cached, and the cached blocks can be used for subsequent memory allocation. Thus, the goal of only applying and not releasing can be achieved, which reduces interaction with the operating system and improves program running efficiency.
By designing a unique interaction entry with the operating system, a single allocation and release interface is presented externally so as to bring the scattered memory applications and releases under control. Meanwhile, memory blocks used earlier can be cached through the chain structure, and the cached memory blocks can be used for subsequent allocation, which reduces interaction with the operating system and improves the running efficiency of the system to the greatest extent.
Tenant management layer 320 may include a tenant-level resource manager for tenant-level management of allocated memory segments. Taking the first memory segment as the currently allocated memory as an example, in response to the memory requirement of each tenant, the tenant-level memory space required by each tenant may be obtained from the first memory segment. Each tenant can be managed in the tenant management layer 320, and the memory application and release of a tenant can be controlled according to the allocable upper memory limit of that tenant. That is, as long as the allocable upper memory limit of the tenant has not been reached, used memory blocks within the tenant may be cached first, and the cached blocks can be used for subsequent memory allocation within the tenant. Thus, the goal of only applying and not releasing can also be achieved at the tenant level, allowing the memory components to be refined and tuned. In some embodiments, the tenant management layer 320 may further encapsulate the tenant layer and provide a single external interface for the module management layer 330 to obtain the memory resources of a given tenant.
Module management layer 330 may include a module memory allocator to fine-tune the memory usage of the different modules within each tenant. For example, each tenant may include an SQL module, a transaction module, and the like. In response to the memory requirements of the modules in a tenant, the module-level memory space required by each module may be obtained from the tenant. Each module of each tenant can be managed in the module management layer 330, and the memory application and release of a module can be controlled according to the allocable upper memory limit of that module within the tenant. That is, as long as the allocable upper memory limit of the module has not been reached, used memory blocks within the module may be cached first, and the cached blocks can be used for subsequent memory allocation. Thus, the goal of only applying and not releasing can also be achieved at the module level.
As shown in fig. 4, an application program uses the module memory allocator in module management layer 430 to apply for memory. The module management layer 430 may send the memory application requests of different modules to the corresponding tenants in tenant management layer 420. In response to a memory application request from the module management layer 430, a tenant in the tenant management layer 420 may allocate memory to the module, or send a tenant memory application request to the block management layer 410, according to its applicable upper memory limit and its allocated free memory space. The block management layer 410 may in turn allocate memory to the tenant, or send a block memory application request to the operating system, according to the allocable upper memory limit of the application program and the free memory space in the currently allocated memory blocks. Similarly, the application program uses the module memory allocator in module management layer 430 to release memory. The module management layer 430 may send the memory release request of a module to the corresponding tenant in the tenant management layer 420; in response, the tenant may cache the module memory to be released, or send a tenant memory release request to the block management layer 410, according to its applicable upper memory limit and the free memory space within the tenant; and the block management layer 410 may cache the tenant memory to be released, or send a block memory release request to the operating system, according to the allocable upper memory limit of the application program and the free space within the block. Through this three-level management at the module, tenant, and block levels, the applications and releases of numerous tiny memory pieces (such as module-level or tenant-level memory) can be aggregated, and memory is ultimately applied for or released to the operating system at the block level, forming a self-defined large-block memory management method that effectively avoids memory fragmentation and reduces interaction with the operating system.
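The three-level request path described above can be summarized in the following sketch. All class and member names are hypothetical, the cache-chain lookups described in the text are elided for brevity, and ::operator new is only a stand-in for the actual request to the operating system; each level simply checks its own upper limit before forwarding the request to the level above it.

```cpp
#include <cstddef>
#include <new>

// Hypothetical sketch of the path in Fig. 4: module -> tenant -> block layer.
struct BlockLayer {
  std::size_t limit = 0, hold = 0;   // allocable upper limit / applied amount
  void* alloc(std::size_t size) {
    if (hold + size > limit) return nullptr;  // over the application's limit
    hold += size;
    return ::operator new(size);              // stand-in for an OS request
  }
};

struct TenantLayer {
  BlockLayer* blocks = nullptr;
  std::size_t limit = 0, hold = 0;   // tenant memory limit / occupied amount
  void* alloc(std::size_t size) {
    if (hold + size > limit) return nullptr;  // over the tenant's limit
    void* p = blocks->alloc(size);
    if (p != nullptr) hold += size;
    return p;
  }
};

struct ModuleAllocator {
  TenantLayer* tenant = nullptr;
  std::size_t limit = 0, hold = 0;   // module memory limit / occupied amount
  void* alloc(std::size_t size) {
    if (hold + size > limit) return nullptr;  // over the module's limit
    void* p = tenant->alloc(size);
    if (p != nullptr) hold += size;
    return p;
  }
};
```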
It will be appreciated that when a process is created, the operating system allocates a memory address space for the process. To facilitate fine-grained management of this memory space, an allocated block with a larger memory space may be split into a plurality of ordinary memory blocks (for example, according to the allocable memory of each tenant), which are then managed as a chain. As an example, a first memory segment of 10 MB may be split into five ordinary memory blocks of 2 MB. On this basis, taking the first memory segment as the currently allocated memory space, the embodiments of the present disclosure provide methods for managing memory application and memory release.
As shown in fig. 5, the method for managing a memory application includes the following steps.
Step 1: In response to a memory application request from the application program, judge whether the current application is for an ordinary memory block; if so, go to step 2, otherwise go to step 4.
Step 2: Check whether the current cache chain has an available cache block; if so, go to step 3, otherwise go to step 10.
It should be understood that an available cache block refers to an ordinary memory block that was used earlier and added to the cache chain for caching instead of being released.
Step 3: Allocate an available cache block, that is, select an available cache block from the cache chain in response to the memory application request, allocate it, and go to step 13.
Step 4: Judge whether the total amount of memory applied for exceeds the allocable memory upper limit; if not, send the memory application request to the operating system and go to step 5, otherwise go to step 6.
It should be noted that the total amount of memory applied for refers to the sum of the currently requested memory amount and the already allocated memory amount.
Step 5: The operating system allocates a memory segment for the application program; go to step 13.
Step 6: Check whether the current cache chain holds any cached blocks; if so, go to step 7, otherwise go to step 9.
Step 7: Release part or all of the cached blocks in the cache chain and send the memory application request to the operating system.
Step 8: The operating system allocates a memory segment for the application program; go to step 13.
Step 9: The total amount of memory applied for exceeds the limit; send a memory over-limit prompt to the user.
Step 10: Judge whether the total amount of memory applied for exceeds the allocable memory upper limit; if not, send the memory application request to the operating system and go to step 11, otherwise go to step 12.
Step 11: The operating system allocates a memory segment for the application program; go to step 13.
Step 12: The total amount of memory applied for exceeds the limit; send a memory over-limit prompt to the user.
Step 13: End the current memory application process.
It can be seen from the above method that when there are available cache blocks in the cache chain, the required memory can be allocated directly from the cache chain, thereby reducing interaction with the operating system.
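The application flow above can be condensed into the following illustrative sketch; the member names, the 2 MB block size, and the use of mmap are assumptions, and the memory over-limit prompt is reduced to returning a null pointer.

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <deque>

// Illustrative sketch of the memory application flow of Fig. 5.
struct ApplyFlow {
  std::size_t allocable_limit = 0;            // allocable upper memory limit
  std::size_t hold = 0;                       // total applied, incl. cached blocks
  static constexpr std::size_t kBlockSize = 2 * 1024 * 1024;
  std::deque<void*> cache_chain;              // cached ordinary memory blocks

  // Steps 1-3 and 10-13: application for an ordinary memory block.
  void* apply_ordinary_block() {
    if (!cache_chain.empty()) {               // steps 2/3: reuse a cache block
      void* block = cache_chain.front();
      cache_chain.pop_front();
      return block;
    }
    return apply_from_os(kBlockSize);         // steps 10/11: ask the OS
  }

  // Steps 4-9: application that must go to the operating system.
  void* apply_from_os(std::size_t size) {
    if (hold + size > allocable_limit) {
      // Steps 6/7: release cached blocks first, then re-check the limit.
      while (!cache_chain.empty() && hold + size > allocable_limit) {
        munmap(cache_chain.back(), kBlockSize);
        cache_chain.pop_back();
        hold -= kBlockSize;
      }
      if (hold + size > allocable_limit) return nullptr;  // steps 9/12: over limit
    }
    void* seg = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);  // steps 5/8/11
    if (seg == MAP_FAILED) return nullptr;
    hold += size;
    return seg;                               // step 13: done
  }
};
```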
As shown in fig. 6, the method for managing memory release includes the following steps.
Step 1: In response to a memory release request from the application program, judge whether the current release is the release of an ordinary memory block; if so, go to step 2, otherwise go to step 4.
Step 2: Check whether the amount cached in the current cache chain exceeds the maximum cache amount; if not, terminate the current release request for the ordinary memory block and go to step 3, otherwise go to step 7.
It should be understood that the maximum cache amount may refer to the maximum number of cacheable memory blocks or the maximum cacheable memory space.
Step 3: Add the ordinary memory block to be released to the cache chain for caching, and go to step 8.
Step 4: Judge whether the remaining memory space that the application program can apply for could still accommodate a large memory block (such as the first memory segment); if so, go to step 5, otherwise go to step 6.
It should be noted that the remaining memory space that can be applied for refers to the difference between the allocable upper memory limit of the application program and the allocated memory space.
Step 5: Terminate the release of the non-ordinary memory block, split it into ordinary memory blocks, and go to step 2.
Step 6: Directly release the current non-ordinary memory block.
Step 7: Directly release the current ordinary memory block.
Step 8: End the current memory release process.
According to the method, when the cache condition is met, the memory blocks can be returned to the cache chain for caching so as to be used for next allocation, so that interaction with an operating system is reduced, and the success rate of memory allocation is improved.
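A corresponding sketch of the release flow is given below; again the member names, the 2 MB block size, and the max_cache_cnt threshold are illustrative assumptions rather than the actual implementation.

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <deque>

// Illustrative sketch of the memory release flow of Fig. 6.
struct ReleaseFlow {
  std::size_t allocable_limit = 0;            // allocable upper memory limit
  std::size_t hold = 0;                       // total memory currently applied for
  static constexpr std::size_t kBlockSize = 2 * 1024 * 1024;
  std::size_t max_cache_cnt = 16;             // maximum cacheable block count
  std::deque<void*> cache_chain;

  // Steps 1-3 and 7: an ordinary memory block is cached unless the cache is full.
  void release_ordinary_block(void* block) {
    if (cache_chain.size() < max_cache_cnt) {
      cache_chain.push_back(block);           // step 3: terminate release, cache it
    } else {
      munmap(block, kBlockSize);              // step 7: cache full, release to the OS
      hold -= kBlockSize;
    }
  }

  // Steps 4-6: a non-ordinary (large) block is kept and split when the remaining
  // applicable memory space could still accommodate such a block.
  void release_segment(void* seg, std::size_t seg_size) {
    std::size_t remaining =
        (hold < allocable_limit) ? allocable_limit - hold : 0;
    if (remaining > seg_size) {               // step 5: terminate release, split, cache
      char* p = static_cast<char*>(seg);
      for (std::size_t off = 0; off + kBlockSize <= seg_size; off += kBlockSize)
        release_ordinary_block(p + off);
    } else {                                  // step 6: release directly
      munmap(seg, seg_size);
      hold -= seg_size;
    }
  }
};
```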
By the memory management method described above, the scattered memory application and release work of the application program is centralized, which reduces the interaction frequency with the operating system and thereby improves the running efficiency of the application program. In addition, the possibility of missed releases of heap memory is reduced, improving the reliability of program operation.
In some embodiments, taking the Linux system as an example, the present disclosure provides the main design methods and members of the block management layer. The main design methods include public methods (which may also be referred to as external methods) and private methods. The members of the block management layer may include, for example, internal members and other internal members.
(1) Design methods of the block management layer
1) Public methods
alloc_chunk: provides the unique memory block application method.
free_chunk: provides the unique memory block release method.
set_max_chunk_cache_cnt: sets the maximum number of cached memory blocks.
set_limit: sets the allocable upper memory limit.
set_urgent: sets the upper limit of memory reserved for emergency use.
get_limit: gets the maximum allocable upper memory limit.
get_urgent: gets the upper limit of memory reserved for emergency use.
get_hold: gets the total amount of memory applied for, including the memory blocks in the cache.
get_freelist_hold: gets the amount of memory currently in the cache.
2) Private methods
direct_alloc: applies for a memory block directly from the operating system.
direct_free: notifies the operating system to release the memory.
update_hold: updates the total amount of memory that has been applied for from the operating system.
(2) Members of the block management layer
1) Internal members
limit_: records the upper limit of memory that the system can use.
urgent_: records the upper memory limit reserved for emergency use.
hold_: records all the memory the system has applied for, including the memory blocks in the cache.
2) Other internal members
mmaps_: records the number of times memory has been applied for from the operating system.
munmaps_: records the number of times memory has been released to the operating system.
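Putting the listed methods and members together, a hypothetical C++ declaration of the block management layer might look as follows; the parameter and return types are assumptions inferred from the descriptions above and are not the actual signatures.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical declaration of the block management layer (ChunkMgr),
// assembled from the method and member list above; all types are assumed.
class ChunkMgr {
 public:
  // Public (external) methods.
  void* alloc_chunk(std::size_t size);             // unique chunk application method
  void free_chunk(void* chunk);                    // unique chunk release method
  void set_max_chunk_cache_cnt(std::int64_t cnt);  // max number of cached chunks
  void set_limit(std::int64_t limit);              // allocable upper memory limit
  void set_urgent(std::int64_t urgent);            // memory reserved for emergencies
  std::int64_t get_limit() const;
  std::int64_t get_urgent() const;
  std::int64_t get_hold() const;                   // total applied, incl. cached chunks
  std::int64_t get_freelist_hold() const;          // amount currently cached

 private:
  // Private methods.
  void* direct_alloc(std::size_t size);            // apply directly from the OS
  void direct_free(void* chunk, std::size_t size); // return a chunk to the OS
  void update_hold(std::int64_t delta);            // track the total applied from the OS

  // Internal members.
  std::int64_t limit_ = 0;    // upper limit of memory the system can use
  std::int64_t urgent_ = 0;   // upper limit reserved for emergencies
  std::int64_t hold_ = 0;     // all applied memory, incl. cached chunks
  // Other internal members.
  std::int64_t mmaps_ = 0;    // number of applications sent to the OS
  std::int64_t munmaps_ = 0;  // number of releases sent to the OS
};
```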
In some embodiments, taking the Linux system as an example, the present disclosure presents the main design methods and members of the tenant management layer. The main design methods include public methods (which may also be referred to as external methods) and private methods. The members of the tenant management layer may include, for example, internal members.
(1) Design methods of the tenant management layer
1) Public methods
alloc_chunk: allocates a tenant memory block by calling the alloc_chunk interface of ChunkMgr, and specifies the module to which the memory block belongs.
free_chunk: releases a tenant memory block by calling the free_chunk interface of the instance memory manager ChunkMgr, and at the same time specifies the module to which the memory block belongs.
set_limit: sets the total memory limit of the tenant.
set_ctx_limit: sets the memory limit of a specified module.
get_limit: gets the total upper memory limit of the tenant.
get_sum_hold: gets the total amount of memory currently allocated by the tenant.
get_ctx_limit: gets the upper memory limit of a specified module of the tenant.
get_ctx_hold: gets the total amount of memory allocated by a specified module of the tenant.
2) Private methods
update_hold: updates the total memory occupied by the tenant.
update_ctx_hold: updates the memory occupied by a specified module of the tenant.
(2) Members of the tenant management layer
1) Internal members
tenant_id_: the ID of the tenant to which this tenant memory manager belongs.
limit_: the memory limit of this tenant memory manager.
sum_hold_: the total amount of memory allocated by this tenant memory manager.
ctx_hold_bytes_[N]: array members that record the amount of memory allocated by the different modules of the tenant.
ctx_limit_bytes_[N]: array members that record the upper memory limits of the different modules of the tenant.
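Similarly, a hypothetical declaration of the tenant management layer, assembled from the list above, might look as follows; the value of N, the class name, and all parameter types are assumptions.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical declaration of the tenant management layer; types are assumed.
class TenantMemoryMgr {
 public:
  static constexpr int N = 16;  // assumed number of module (ctx) slots

  // Public (external) methods.
  void* alloc_chunk(std::size_t size, int ctx_id);  // forwards to ChunkMgr::alloc_chunk
  void free_chunk(void* chunk, int ctx_id);         // forwards to ChunkMgr::free_chunk
  void set_limit(std::int64_t limit);               // total tenant memory limit
  void set_ctx_limit(int ctx_id, std::int64_t limit);
  std::int64_t get_limit() const;
  std::int64_t get_sum_hold() const;                // total allocated by this tenant
  std::int64_t get_ctx_limit(int ctx_id) const;
  std::int64_t get_ctx_hold(int ctx_id) const;

 private:
  // Private methods.
  void update_hold(std::int64_t delta);
  void update_ctx_hold(int ctx_id, std::int64_t delta);

  // Internal members.
  std::uint64_t tenant_id_ = 0;          // tenant this manager belongs to
  std::int64_t limit_ = 0;               // tenant memory limit
  std::int64_t sum_hold_ = 0;            // total memory allocated for the tenant
  std::int64_t ctx_hold_bytes_[N] = {};  // per-module allocated amounts
  std::int64_t ctx_limit_bytes_[N] = {}; // per-module upper limits
};
```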
In some embodiments, taking the Linux system as an example, the present disclosure presents the main design methods and members of the module management layer. The main design methods include public methods. The members of the module management layer may include private members.
(1) Public methods
alloc: corresponds to the Linux malloc API and is used to allocate tiny memory objects.
free: corresponds to the Linux free API and is used to release tiny memory objects.
realloc: corresponds to the Linux realloc API and is used to resize tiny memory objects.
(2) Private members
obj_mgr_: the object manager used for the real memory object allocation. When an external module uses the allocator's alloc method to allocate memory, the API of obj_mgr_ is called to allocate the required memory objects.
tenant_id_: the ID of the tenant to which the allocator belongs.
ctx_id_: the ID of the module to which the allocator belongs.
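A hypothetical declaration of the module-level allocator, assembled from the list above, is sketched below; the ObjMgr type and all signatures are assumptions rather than the actual interface.

```cpp
#include <cstddef>
#include <cstdint>

class ObjMgr;  // object manager used for the real memory object allocation

// Hypothetical declaration of the module-level memory allocator.
class ModuleAllocator {
 public:
  // Public methods, mirroring the Linux malloc/free/realloc APIs.
  void* alloc(std::size_t size);               // allocate a tiny memory object
  void free(void* ptr);                        // release a tiny memory object
  void* realloc(void* ptr, std::size_t size);  // resize a tiny memory object

 private:
  // Private members.
  ObjMgr* obj_mgr_ = nullptr;    // called by alloc() to allocate the real objects
  std::uint64_t tenant_id_ = 0;  // tenant this allocator belongs to
  std::uint64_t ctx_id_ = 0;     // module this allocator belongs to
};
```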
Method embodiments of the present disclosure are described in detail above in conjunction with fig. 1-6, and apparatus embodiments of the present disclosure are described in detail below in conjunction with fig. 7 and 8. It is to be understood that the description of the apparatus embodiments corresponds to the description of the method embodiments, and therefore reference may be made to the preceding method embodiments for parts which are not described in detail.
Fig. 7 is a schematic structural diagram of a memory management device according to an embodiment of the present disclosure. The apparatus 700 includes a first assignment module 710, a lookup module 720, and a second assignment module 730.
The first allocation module 710 may be configured to allocate a first memory segment for an application in response to a first memory requirement sent by the application, wherein the first memory segment is located in a heap memory segment of a user space of an operating system.
The lookup module 720 may be configured to lookup a free memory space in the first memory segment in response to a second memory requirement of the application.
The second allocating module 730 may be configured to allocate the memory space required by the application program from the first memory segment when the free memory space in the first memory segment meets the second memory requirement.
Optionally, the second allocating module 730 is further configured to: and when the first memory segment has no available memory space or the free memory space in the first memory segment does not meet the second memory requirement, allocating a second memory segment to the application program, wherein the second memory segment is located in the heap memory segment of the user space.
Optionally, the second allocating module 730 is further configured to: if the total memory space of the first memory segment and the second memory segment meets a preset condition, releasing all or part of the memory of the first memory segment; and if the total memory space of the first memory segment and the second memory segment does not meet the preset condition, allocating a second memory segment for the application program.
Optionally, the preset condition is that the total memory space of the first memory segment and the second memory segment exceeds the upper limit of memory that the application program may request.
Optionally, the apparatus 700 further comprises a releasing module 740, which may be configured to release the memory of the first memory segment in response to a memory release request sent by the application program.
Optionally, the releasing module 740 is configured to: obtain the remaining memory space that the application program may still request, where the remaining memory space is the difference between the upper memory limit that the application program may request and the memory space of the first memory segment; and, in response to a memory release request sent by the application program, release the memory of the first memory segment if the remaining memory space does not exceed the memory space of the first memory segment.
Optionally, the apparatus 700 further comprises a caching module 750, which may be configured to, in response to a memory release request sent by the application program, terminate the memory release of the first memory segment and cache the first memory segment if the remaining memory space exceeds the memory space of the first memory segment.
Optionally, the second allocation module 730 is configured to: when the free memory space in the first memory segment meets the second memory requirement, allocate tenant-level memory space from the first memory segment according to the memory requirement of each tenant.
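A simplified C++ sketch of the overall segment-based flow is given below. It assumes that a memory segment is a large block obtained from the operating system's heap (simulated here with std::malloc), that the application-level allocator carves requests out of that block with a simple bump pointer, and that the decision to release or cache a segment compares the remaining quota with the segment size, as in the embodiments above. Class and member names (SegmentAllocator, MemorySegment, segment_size_, memory_limit_) are illustrative assumptions, not the actual implementation.

#include <cstddef>
#include <cstdlib>
#include <vector>

// Illustrative sketch of segment-based allocation: apply to the operating
// system for a large segment once, then serve later requirements from the
// segment's free space to reduce interactions with the operating system.
struct MemorySegment {
  char*       base = nullptr;
  std::size_t size = 0;
  std::size_t used = 0;
  std::size_t free_space() const { return size - used; }
};

class SegmentAllocator {
 public:
  SegmentAllocator(std::size_t segment_size, std::size_t memory_limit)
      : segment_size_(segment_size), memory_limit_(memory_limit) {}

  ~SegmentAllocator() {
    for (MemorySegment& seg : segments_) std::free(seg.base);
  }

  // Serve a memory requirement from an existing segment when its free space
  // suffices; otherwise apply for a new segment, unless the total segment
  // space would exceed the upper limit the application may request.
  void* alloc(std::size_t bytes) {
    for (MemorySegment& seg : segments_) {
      if (seg.free_space() >= bytes) {
        void* p = seg.base + seg.used;  // bump-pointer allocation inside the segment
        seg.used += bytes;
        return p;
      }
    }
    if (total_segment_space() + segment_size_ > memory_limit_ || bytes > segment_size_)
      return nullptr;  // preset condition met: do not apply for another segment
    MemorySegment seg;
    seg.base = static_cast<char*>(std::malloc(segment_size_));  // one OS interaction
    if (seg.base == nullptr) return nullptr;
    seg.size = segment_size_;
    seg.used = bytes;
    segments_.push_back(seg);
    return segments_.back().base;
  }

  // Handle a release request for segment `index`: if the remaining quota
  // (upper limit minus held segment space) does not exceed the segment size,
  // give the segment back to the OS; otherwise keep (cache) it for reuse.
  void release_segment(std::size_t index) {
    MemorySegment& seg = segments_[index];
    std::size_t remaining = memory_limit_ - total_segment_space();
    if (remaining <= seg.size) {
      std::free(seg.base);
      segments_.erase(segments_.begin() + index);
    } else {
      seg.used = 0;  // cached: later requirements reuse it without an OS call
    }
  }

 private:
  std::size_t total_segment_space() const {
    std::size_t sum = 0;
    for (const MemorySegment& seg : segments_) sum += seg.size;
    return sum;
  }

  std::size_t segment_size_;
  std::size_t memory_limit_;
  std::vector<MemorySegment> segments_;
};

In this sketch, repeated small requirements hit the fast path inside an existing segment, so only the first requirement (and overflow cases) interacts with the operating system, which is the efficiency gain described in the embodiments above.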
Fig. 8 is a schematic structural diagram of another memory management apparatus according to an embodiment of the present disclosure. The memory management apparatus 800 depicted in Fig. 8 may include a memory 810 and a processor 820, where the memory 810 may be used to store executable code. The processor 820 may be used to execute the executable code stored in the memory 810 to implement the steps of the methods described above. In some embodiments, the apparatus 800 may further include a network interface 830, through which data exchange between the processor 820 and external devices may be implemented.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present disclosure are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., a Solid State Drive (SSD)), among others.
The above description covers only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope of the present disclosure shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (17)

1. A method of managing memory, the method comprising:
in response to a first memory requirement sent by an application program, allocating a first memory segment for the application program, wherein the first memory segment is located in a heap memory segment of a user space of an operating system;
searching for free memory space in the first memory segment in response to a second memory requirement of the application program; and
when the free memory space in the first memory segment meets the second memory requirement, allocating the memory space required by the application program from the first memory segment.
2. The method of claim 1, further comprising:
when the first memory segment has no available memory space, or the free memory space in the first memory segment does not meet the second memory requirement, allocating a second memory segment to the application program, wherein the second memory segment is located in the heap memory segment of the user space.
3. The method of claim 2, wherein said allocating a second memory segment for the application comprises:
if the total memory space of the first memory segment and the second memory segment meets a preset condition, releasing all or part of the memory of the first memory segment;
and if the total memory space of the first memory segment and the second memory segment does not meet the preset condition, allocating a second memory segment for the application program.
4. The method according to claim 3, wherein the preset condition is that the total memory space of the first memory segment and the second memory segment exceeds an upper limit of memory that the application program may request.
5. The method of claim 1, further comprising:
in response to a memory release request sent by the application program, releasing the memory of the first memory segment.
6. The method of claim 5, wherein the releasing the memory of the first memory segment in response to the memory release request sent by the application program comprises:
obtaining the remaining memory space that the application program may still request, wherein the remaining memory space is the difference between the upper memory limit that the application program may request and the memory space of the first memory segment; and
in response to the memory release request sent by the application program, releasing the memory of the first memory segment if the remaining memory space does not exceed the memory space of the first memory segment.
7. The method of claim 6, further comprising:
in response to a memory release request sent by the application program, if the remaining memory space exceeds the memory space of the first memory segment, terminating the memory release of the first memory segment and caching the first memory segment.
8. The method of claim 1, wherein allocating the memory space required by the application program from the first memory segment when the free memory space in the first memory segment meets the second memory requirement comprises:
when the free memory space in the first memory segment meets the second memory requirement, allocating tenant-level memory space from the first memory segment according to the memory requirement of each tenant.
9. An apparatus for managing memory, the apparatus comprising:
a first allocation module configured to allocate a first memory segment for an application program in response to a first memory requirement sent by the application program, wherein the first memory segment is located in a heap memory segment of a user space of an operating system;
a search module configured to search for a free memory space in the first memory segment in response to a second memory requirement of the application program;
a second allocating module, configured to allocate a memory space required by the application program from the first memory segment when a free memory space in the first memory segment meets the second memory requirement.
10. The apparatus of claim 9, wherein the second allocating module is further configured to:
when the first memory segment has no available memory space, or the free memory space in the first memory segment does not meet the second memory requirement, allocating a second memory segment to the application program, wherein the second memory segment is located in the heap memory segment of the user space.
11. The apparatus of claim 10, wherein the second allocating module is further configured to:
if the total memory space of the first memory segment and the second memory segment meets a preset condition, releasing all or part of the memory of the first memory segment;
and if the total memory space of the first memory segment and the second memory segment does not meet the preset condition, allocating a second memory segment for the application program.
12. The apparatus of claim 11, wherein the preset condition is that the total memory space of the first memory segment and the second memory segment exceeds an upper limit of memory that the application program may request.
13. The apparatus of claim 9, the apparatus further comprising:
a releasing module configured to release the memory of the first memory segment in response to a memory release request sent by the application program.
14. The apparatus of claim 13, wherein the releasing module is configured to:
obtaining the remaining memory space that the application program may still request, wherein the remaining memory space is the difference between the upper memory limit that the application program may request and the memory space of the first memory segment; and
in response to the memory release request sent by the application program, releasing the memory of the first memory segment if the remaining memory space does not exceed the memory space of the first memory segment.
15. The apparatus of claim 14, the apparatus further comprising:
a caching module configured to, in response to a memory release request sent by the application program, terminate the memory release of the first memory segment and cache the first memory segment if the remaining memory space exceeds the memory space of the first memory segment.
16. The apparatus of claim 9, wherein the second allocating module is configured to:
when the free memory space in the first memory segment meets the second memory requirement, allocating tenant-level memory space from the first memory segment according to the memory requirement of each tenant.
17. An apparatus for managing memory, comprising a memory and a processor, wherein the memory has stored therein executable code, and the processor is configured to execute the executable code to implement the method of any one of claims 1-8.
CN202210392252.8A 2022-04-15 2022-04-15 Memory management method and device Pending CN114518962A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311453727.0A CN117435343A (en) 2022-04-15 2022-04-15 Memory management method and device
CN202210392252.8A CN114518962A (en) 2022-04-15 2022-04-15 Memory management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210392252.8A CN114518962A (en) 2022-04-15 2022-04-15 Memory management method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311453727.0A Division CN117435343A (en) 2022-04-15 2022-04-15 Memory management method and device

Publications (1)

Publication Number Publication Date
CN114518962A true CN114518962A (en) 2022-05-20

Family

ID=81600411

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210392252.8A Pending CN114518962A (en) 2022-04-15 2022-04-15 Memory management method and device
CN202311453727.0A Pending CN117435343A (en) 2022-04-15 2022-04-15 Memory management method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311453727.0A Pending CN117435343A (en) 2022-04-15 2022-04-15 Memory management method and device

Country Status (1)

Country Link
CN (2) CN114518962A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023236930A1 (en) * 2022-06-10 2023-12-14 维沃移动通信有限公司 Memory allocation method and apparatus based on ion allocator, electronic device, and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853215A (en) * 2010-06-01 2010-10-06 恒生电子股份有限公司 Memory allocation method and device
CN103020077A (en) * 2011-09-24 2013-04-03 国家电网公司 Method for managing memory of real-time database of power system
US20140282589A1 (en) * 2013-03-13 2014-09-18 Samsung Electronics Company, Ltd. Quota-based adaptive resource balancing in a scalable heap allocator for multithreaded applications
CN104182350A (en) * 2013-05-28 2014-12-03 中国银联股份有限公司 Memory management method and device aiming at application containing multiple processes
US20160371021A1 (en) * 2015-06-17 2016-12-22 International Business Machines Corporation Secured Multi-Tenancy Data in Cloud-Based Storage Environments
CN107153618A (en) * 2016-03-02 2017-09-12 阿里巴巴集团控股有限公司 A kind of processing method and processing device of Memory Allocation
CN108459898A (en) * 2017-02-20 2018-08-28 阿里巴巴集团控股有限公司 A kind of recovery method as resource and device
CN109815162A (en) * 2019-01-28 2019-05-28 Oppo广东移动通信有限公司 EMS memory management process, device, mobile terminal and storage medium
CN113296703A (en) * 2021-05-27 2021-08-24 山东云海国创云计算装备产业创新中心有限公司 Heap memory management method, device, equipment and medium


Also Published As

Publication number Publication date
CN117435343A (en) 2024-01-23

Similar Documents

Publication Publication Date Title
KR101137172B1 (en) System, method and program to manage memory of a virtual machine
US10241550B2 (en) Affinity aware parallel zeroing of memory in non-uniform memory access (NUMA) servers
US7231504B2 (en) Dynamic memory management of unallocated memory in a logical partitioned data processing system
US8478931B1 (en) Using non-volatile memory resources to enable a virtual buffer pool for a database application
US20110246742A1 (en) Memory pooling in segmented memory architecture
US9058212B2 (en) Combining memory pages having identical content
CN103577345A (en) Methods and structure for improved flexibility in shared storage caching by multiple systems
US11593186B2 (en) Multi-level caching to deploy local volatile memory, local persistent memory, and remote persistent memory
KR20150141282A (en) Method for sharing reference data among application programs executed by a plurality of virtual machines and Reference data management apparatus and system therefor
US9348819B1 (en) Method and system for file data management in virtual environment
CN113031857B (en) Data writing method, device, server and storage medium
CN114518962A (en) Memory management method and device
CN110162395B (en) Memory allocation method and device
US7840772B2 (en) Physical memory control using memory classes
US20050144389A1 (en) Method, system, and apparatus for explicit control over a disk cache memory
US20220382672A1 (en) Paging in thin-provisioned disaggregated memory
CN116225693A (en) Metadata management method, device, computer equipment and storage medium
CN110447019B (en) Memory allocation manager and method for managing memory allocation performed thereby
US20220318042A1 (en) Distributed memory block device storage
KR101692055B1 (en) Method, apparatus, and computer program stored in computer readable storage medium for managing shared memory in database server
US10168911B1 (en) Defragmentation of persistent main memory
US11144445B1 (en) Use of compression domains that are more granular than storage allocation units
US10579515B1 (en) Recycling segment pages while preserving integrity of memory addressing
KR100825724B1 (en) Object-based storage system using PMEM useful for high speed transmission with DMA and method thereof
CN117234409A (en) Data access method, device, storage system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220520