CN115145735B - Memory allocation method and device and readable storage medium - Google Patents


Info

Publication number
CN115145735B
Authority
CN
China
Prior art keywords
memory
allocation
page
size
memory block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211063392.7A
Other languages
Chinese (zh)
Other versions
CN115145735A (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nfs China Software Co ltd
Original Assignee
Nfs China Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nfs China Software Co ltd filed Critical Nfs China Software Co ltd
Priority to CN202211063392.7A
Publication of CN115145735A
Application granted
Publication of CN115145735B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool

Abstract

Embodiments of the present application provide a memory allocation method, a memory allocation device, and a readable storage medium. The method comprises the following steps: receiving a first memory allocation request triggered when a target application program calls a preset function, the first memory allocation request requesting allocation of a memory block of a first size; converting the first memory allocation request into a second memory allocation request requesting allocation of a memory block of a second size, where the second size is determined according to the first size and the granularity levels of the memory pages in a memory pool, the memory pool comprises memory pages of different granularity levels, and the memory blocks within each memory page share the same granularity level; searching the cached memory pages for an allocable memory block, where the granularity level of the memory page to which the allocable memory block belongs matches the second size; and returning the address of the found allocable memory block. The embodiments of the present application reduce memory fragmentation and improve memory allocation efficiency.

Description

Memory allocation method and device and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a memory allocation method and apparatus, and a readable storage medium.
Background
The memory allocation method refers to a method for allocating or recycling a storage space in the process of executing an application program. Memory allocation is an important function in modern computer operating systems.
In the C language, the malloc/free functions are used to allocate and release memory. malloc is a system function that requests a contiguous memory region of a specified size. Because the memory block found by the allocator may be larger than the requested size, the block must be cut, and the remaining portion is inserted into the free linked list. When an application program frequently uses malloc/free to allocate and release memory, and especially when it frequently requests and releases a large number of fixed-size memory blocks, large memory blocks may be split into small ones, producing a large number of memory fragments. This not only wastes memory resources but also increases the time spent searching for an allocable memory block, reducing memory allocation efficiency.
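The splitting behavior described above can be illustrated with a minimal first-fit free-list sketch. This is a deliberate simplification to show how fragmentation arises, not the actual glibc allocator; all names here are illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* One node of a toy free list: a run of `size` usable bytes. */
typedef struct block {
    size_t size;          /* usable bytes in this free block */
    struct block *next;   /* next block in the free list     */
} block_t;

/* Carve `want` bytes out of the first block that fits. The
 * remainder stays on the free list as a smaller block; repeated
 * small requests leave many such remainders (fragments).
 * Returns the size carved out, or 0 if no block fits. */
static size_t take_first_fit(block_t **list, size_t want) {
    for (block_t **p = list; *p; p = &(*p)->next) {
        if ((*p)->size >= want) {
            (*p)->size -= want;      /* split: shrink the block */
            if ((*p)->size == 0)
                *p = (*p)->next;     /* fully consumed: unlink  */
            return want;
        }
    }
    return 0;
}
```

Repeatedly carving small fixed-size requests out of large blocks leaves progressively smaller remainders on the list; the memory-pool approach of this patent avoids this by never splitting blocks.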
Disclosure of Invention
Embodiments of the present application provide a memory allocation method, a memory allocation device, and a readable storage medium, which can reduce memory fragments and improve memory allocation efficiency.
In order to solve the above problem, an embodiment of the present application discloses a memory allocation method, where the method includes:
receiving a first memory allocation request triggered by calling a preset function for a target application program, wherein the first memory allocation request is used for requesting allocation of a memory block with a first size;
converting the first memory allocation request into a second memory allocation request, where the second memory allocation request is used to request allocation of a memory block of a second size, the second size is determined according to the first size and a granularity level of a memory page in a memory pool, the memory pool includes memory pages of different granularity levels, and the memory blocks included in each memory page have the same granularity level;
searching an allocable memory block in a cached memory page, wherein the granularity level of the memory page to which the allocable memory block belongs is matched with the second size;
if no allocable memory block exists in the cached memory pages, requesting an allocable memory page of the memory pool from the kernel, caching the allocable memory page, and searching the allocable memory page for an allocable memory block;
and returning the address of the searched allocable memory block.
On the other hand, an embodiment of the present application discloses a memory allocation device, where the device includes:
the allocation request receiving module is configured to receive a first memory allocation request triggered by a target application calling a preset function, where the first memory allocation request is used to request allocation of a memory block of a first size;
a request conversion module, configured to convert the first memory allocation request into a second memory allocation request, where the second memory allocation request is used to request allocation of a memory block of a second size, the second size is determined according to the first size and granularity levels of memory pages in a memory pool, the memory pool includes memory pages of different granularity levels, and the memory blocks included in each memory page have the same granularity level;
a first searching module, configured to search an allocable memory block in a cached memory page, where a granularity level of a memory page to which the allocable memory block belongs is matched with the second size;
a second searching module, configured to, if no allocable memory block exists in the cached memory pages, request an allocable memory page of the memory pool from the kernel, cache the allocable memory page, and search the allocable memory page for an allocable memory block;
and the result returning module is used for returning the searched address of the allocable memory block.
In yet another aspect, an embodiment of the present application discloses a memory allocation apparatus, which includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs include instructions for performing the memory allocation method according to any one of the foregoing methods.
In yet another aspect, an embodiment of the present application discloses a readable storage medium having stored thereon instructions that, when executed by one or more processors of an apparatus, cause the apparatus to perform a memory allocation method as described in any one of the preceding claims.
The embodiment of the application has the following advantages:
In the embodiments of the present application, the memory allocation/release functions in a target application program are replaced with the preset functions of the present application. When the target application program executes a preset function, a first memory allocation request is triggered, requesting allocation of a memory block of a first size. Using memory-pool-based allocation, the preset function converts the first memory allocation request into a second memory allocation request, which requests allocation of a memory block of a second size determined according to the first size and the granularity levels of the memory pages in the memory pool. The second size is one of the fixed memory sizes provided by the memory pool; that is, the embodiments of the present application convert the variable memory allocation requests of the target application program into fixed, memory-pool-backed allocation requests. Because the memory pool divides memory into blocks of different sizes, from small to large, according to granularity levels, and configures a certain number of memory pages for each granularity level, memory blocks can be allocated at a fixed size each time without cutting. This avoids memory fragments, reduces the time spent searching for allocable memory blocks, and improves memory allocation efficiency. In addition, the memory allocation method provided by the present application only requires a simple function replacement in the source code of the target application program, with no other code changes; it therefore has low operation cost and wide applicability, and the code is easy to maintain.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating steps of an embodiment of a memory allocation method according to the present application;
FIG. 2 is a schematic diagram of a system architecture to which the memory allocation method of the present application is applied;
FIG. 3 is a schematic diagram of memory pages with different granularity levels in a memory pool according to an example of the present application;
fig. 4 is a schematic flowchart of a memory allocation method according to an example of the present application;
FIG. 5 is a diagram illustrating a data structure of a memory block into which header data is inserted according to an example of the present application;
FIG. 6 is a block diagram of an embodiment of a memory allocation apparatus according to the present application;
FIG. 7 is a block diagram of a memory allocation apparatus 800 according to the present application;
fig. 8 is a schematic diagram of a server in some embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that data so termed may be interchanged where appropriate, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first" and "second" are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one. Furthermore, the term "and/or" in the specification and claims describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. In the embodiments of the present application, the term "plurality" means two or more, and other such terms are construed similarly.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a memory allocation method according to the present application is shown, where the method is applicable to a computer device, and the method may include the following steps:
step 101, receiving a first memory allocation request, wherein the first memory allocation request is triggered by a target application program calling a preset function, and the first memory allocation request is used for requesting to allocate a memory block with a first size;
step 102, converting the first memory allocation request into a second memory allocation request, where the second memory allocation request is used to request allocation of a memory block of a second size, the second size is determined according to the first size and a granularity level of a memory page in a memory pool, the memory pool includes memory pages of different granularity levels, and the memory blocks included in each memory page have the same granularity level;
step 103, searching allocable memory blocks in the cached memory pages, wherein the granularity level of the memory page to which the allocable memory blocks belong is matched with the second size;
step 104, if the assignable memory blocks do not exist in the cached memory pages, requesting the assignable memory pages from the memory pool to a kernel, caching the assignable memory pages, and searching the assignable memory blocks in the assignable memory pages;
and 105, returning the searched address of the allocable memory block.
The computer device may include, but is not limited to, any of the following: a server, a smartphone, a recording pen, a tablet computer, an electronic book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a vehicle-mounted computer, a desktop computer, a set-top box, a smart television, a wearable device, and the like.
The computer device may be installed with a Linux operating system; the embodiments of the present application do not limit the distribution, which may include but is not limited to any one of Debian, Ubuntu, CentOS (Community Enterprise Operating System), UOS, the Kylin operating system, and the sode operating system.
It should be noted that, the kernel described in the embodiment of the present application refers to a Linux kernel.
A memory pool (Memory Pool) is a memory allocation technique that allows dynamic allocation of fixed-size memory blocks. With a memory pool, a certain number of (usually equally sized) memory blocks are requested and reserved in advance, before the application program actually uses the memory; subsequent memory allocation and release by the application program are then managed within the pool. The Slab object caching mechanism (hereinafter, the Slab mechanism) is a memory allocation mechanism of the Linux kernel whose basic idea is similar to that of a memory pool. The present application uses the Slab mechanism of the Linux kernel to implement memory pool management, converting the variable memory allocation mode of the target application program's malloc/free usage into a fixed memory allocation mode.
The memory allocation method provided by the application can be used for allocating the memory for the target application program running in the computer equipment. The target application program can be any application program running in the computer equipment, and if the target application program frequently calls the malloc/free function, a large amount of memory fragments can be generated, and the memory allocation efficiency is influenced. In order to solve the problem, in the embodiment of the application, the malloc/free function in the target application program is replaced by the preset function of the application, and the preset function can convert a variable memory allocation mode originally using the malloc/free function by the target application program into a fixed memory allocation mode based on a memory pool by using a Slab mechanism of a Linux kernel, so that memory fragments are avoided, and the memory allocation efficiency can be improved.
Referring to fig. 2, a system architecture diagram applying the memory allocation method of the present application is shown. As shown in fig. 2, the system architecture of the present application may include an application layer, a middle layer, a driver layer, and a kernel layer.
The application layer comprises a target application program, and the target application program calls a memory allocation/release function (malloc/free function) to request memory allocation/release.
The middle layer provides an interface for calling the preset functions, which may include a preset memory allocation function (e.g., denoted ccalloc) and a preset memory release function (e.g., denoted cchfree). The malloc/free functions in the target application program are replaced with the preset functions ccalloc/cchfree; specifically, the malloc function is replaced with the ccalloc function, and the free function is replaced with the cchfree function. The preset functions provided by the middle layer convert the variable memory allocation mode of the target application program, which originally used malloc/free, into a fixed memory allocation mode based on the memory pool.
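The function replacement described here can be done at the source level with simple macros. The sketch below is hypothetical: the document does not specify the replacement mechanism, and ccalloc/cchfree merely forward to the system allocator here so the example is self-contained, whereas the patent's versions would enter the memory-pool path.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical middle-layer entry points. In the patent these
 * route into the memory-pool allocator; here they forward to
 * the system allocator so the sketch is runnable. */
void *ccalloc(size_t size) { return malloc(size); }
void  cchfree(void *ptr)   { free(ptr); }

/* Source-level replacement: every malloc/free call compiled
 * after these macros now reaches the preset functions. */
#define malloc(size) ccalloc(size)
#define free(ptr)    cchfree(ptr)
```

After these definitions, existing calls such as `malloc(22)` in the target application program compile into `ccalloc(22)` with no other code changes, matching the low-cost replacement the document describes.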
The driver layer provides an interface for calling the Slab mechanism of the Linux kernel; in the embodiments of the present application, this driver-layer interface is denoted cchdev. The driver-layer interface may consist of an input/output control (ioctl) API (Application Programming Interface) or a new system call. The ioctl API is the user interface of Linux kernel character drivers; a new system call means adding a new user interface function to the operating system. The driver layer implements the interaction between the middle layer and the kernel layer and is mainly responsible for address space translation and memory pool management, where address space translation refers to translation between application-layer addresses and kernel-layer addresses.
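A character-driver interface like cchdev is conventionally exposed through ioctl commands. The command numbers, magic character, and request layout below are hypothetical, shown only to illustrate the shape such an interface could take; the patent does not disclose them.

```c
#include <assert.h>
#include <stddef.h>
#include <sys/ioctl.h>   /* _IOW/_IOWR and _IOC_* macros (Linux) */

/* Hypothetical request exchanged between the middle layer and
 * the cchdev driver: the middle layer asks for a memory page of
 * a given granularity level; the driver performs the kernel-side
 * Slab allocation and address space translation, then fills in
 * the user-space address. */
struct cch_page_req {
    size_t level;      /* granularity level, e.g. 32          */
    void  *user_addr;  /* filled in by the driver on success  */
};

#define CCH_IOC_MAGIC  'c'
/* Request an allocable page from the kernel memory pool. */
#define CCH_GET_PAGE   _IOWR(CCH_IOC_MAGIC, 1, struct cch_page_req)
/* Return a page to the kernel memory pool. */
#define CCH_PUT_PAGE   _IOW(CCH_IOC_MAGIC, 2, struct cch_page_req)
```

Under this sketch, the middle layer would open a device node such as a hypothetical /dev/cchdev and issue `ioctl(fd, CCH_GET_PAGE, &req)` whenever its cache runs out of allocable pages.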
The kernel layer refers to a Linux kernel and provides specific implementation of a Slab mechanism.
As shown in fig. 2, in the system architecture, by adding the middle layer and the driver layer between the application layer and the kernel layer, a kernel interface is provided for the target application program, so that the target application program can use a Slab mechanism of the kernel layer through the middle layer and the driver layer to implement a fixed memory allocation manner, thereby avoiding memory fragmentation and improving memory allocation efficiency.
In an optional embodiment of the present application, the preset function in the target application may be obtained by replacing a memory allocation/release function in the target application when the target application meets a preset condition; the preset conditions may include: the target application program includes a preset number of memory allocation/release functions, and the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a fixed size, or the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a non-fixed size smaller than the minimum granularity level. The memory allocation/release function refers to a memory allocation function or a memory release function, and the allocation/release refers to allocation or release.
The preset number may be any number greater than a preset threshold. In practical applications, if a target application program includes at least the preset number of memory allocation/release functions, and those functions request allocation/release of fixed-size memory blocks, the target application program issues a large number of allocation/release requests for fixed-size memory blocks; in this case, a variable memory allocation mode would generate a large number of memory fragments and reduce memory allocation efficiency. Alternatively, the preset number of memory allocation/release functions may request allocation/release of non-fixed-size memory blocks smaller than the minimum granularity level: for example, the target application program may issue a large number of requests to allocate/release memory blocks smaller than 16 bytes, and in this case a variable memory allocation mode would likewise generate a large number of memory fragments and reduce memory allocation efficiency.
Therefore, when the preset conditions are met, the memory allocation/release functions in the target application program can be replaced with the preset functions of the present application, so that the variable memory allocation mode originally using malloc/free is converted into a fixed memory allocation mode based on the memory pool, avoiding memory fragments and improving memory allocation efficiency. In addition, the memory allocation method provided by the present application only requires a simple function replacement in the source code of the target application program: for example, the malloc function is replaced with the ccalloc function and the free function with the cchfree function. No other code needs to be modified, so the operation cost is low, the applicability is wide, and the code is easy to maintain.
Further, a dynamic selection switch may be provided where the memory allocation/release function malloc/free needs to be replaced by the preset function ccalloc/cchfree in the target application program, so that whether the memory allocation/release function malloc/free or the preset function ccalloc/cchfree is called currently may be dynamically selected.
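One way to realize the dynamic selection switch mentioned here is a function pointer chosen at runtime. The patent does not specify the mechanism, so the following is a hypothetical sketch; ccalloc again forwards to malloc only to keep the example self-contained.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical pool-backed allocator entry; forwards to malloc
 * here so the sketch compiles and runs on its own. */
static void *ccalloc(size_t size) { return malloc(size); }

/* The dynamic switch: one global function pointer selecting
 * between the original allocator and the preset function. */
static void *(*alloc_fn)(size_t) = malloc;

/* Flip the switch at runtime, e.g. from a config flag. */
static void use_memory_pool(int enable) {
    alloc_fn = enable ? ccalloc : malloc;
}
```

Call sites then allocate through `alloc_fn(size)`, so whether malloc or the preset function is invoked can be decided dynamically, as the paragraph above suggests.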
By adding the middle layer and the driving layer, the target application program of the application layer can use the Slab mechanism of the kernel layer. The middle layer can be used for receiving a first memory allocation request of a target application program and converting the first memory allocation request into a second memory allocation request. Specifically, the intermediate layer may provide an interface for calling a preset function ccalloc/cchfree, where the preset memory allocation function ccalloc is used to replace a memory allocation function malloc in the target application program, and the preset memory allocation function ccalloc is used to convert variable memory allocation into fixed memory allocation. The preset memory release function cchfree is used for replacing a memory release function free in the target application program, and the preset memory release function cchfree is used for releasing the memory by using a Slab mechanism.
In an optional embodiment of the present application, the second memory allocation request may be implemented by a Slab mechanism that calls a kernel through a preset interface of the target application program; the preset interface may include a driver layer located between the application layer and the kernel layer, or the preset interface may include a system call interface of a Slab mechanism built in the application layer.
The embodiment of the application can provide two ways of using the Slab mechanism of the Linux kernel.
The first way is to add a driver layer (as shown in fig. 2) between the middle layer and the kernel layer, where the middle layer may interact with the kernel through the driver layer, for example, the middle layer applies for a memory page from the kernel through the driver layer.
The second way is to transplant the implementation of the Slab mechanism of the Linux kernel to the application layer, so that the middle layer can directly use the Slab mechanism through a system call interface of the built-in Slab mechanism of the application layer, and a driver layer is not needed. However, the Slab mechanism for transplanting the Linux kernel is complex, and increases the complexity and the occupied space of the application layer, and therefore, the first mode is preferably adopted in the embodiment of the present application.
In the embodiments of the present application, before the target application program actually uses the memory, a plurality of memory pages of the same size may be pre-allocated through the Slab mechanism of the Linux kernel; this is referred to as the memory pool. The memory pages in the memory pool may be stored in an array; if the array is denoted cache, the memory subsequently allocated or released by the target application program can be managed through this array. The memory pool comprises memory pages of different granularity levels, and the memory blocks within each memory page have the same granularity level, where the granularity level refers to the size of the memory blocks. In a specific implementation, the granularity levels of the memory pages provided by the Slab mechanism may include 16, 32, 48, 64, …, 1024 bytes. That is, the memory pool allocated by the kernel may include memory pages of each of these granularity levels.
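The cache array and per-level pages described above can be sketched with the following data layout. Field names and the bitmap representation are hypothetical, and the raw capacity computed here ignores per-page metadata (the real Slab mechanism reserves bookkeeping space, which is one way the document's example of 128 usable 16-byte blocks per 4-kbyte page could arise).

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE   4096
#define NUM_LEVELS  10   /* number of granularity levels managed */

/* One memory page in the pool: every block inside it has the
 * same granularity level. */
struct mem_page {
    size_t level;                  /* block size, e.g. 16 or 32  */
    unsigned char *base;           /* start address of the page  */
    unsigned long free_bitmap[4];  /* 1 bit per block (<= 256)   */
};

/* The `cache` array through which allocated/released memory is
 * managed, one slot per granularity level. */
struct mem_page *cache[NUM_LEVELS];

/* Raw block capacity of one page at a given level, ignoring
 * metadata overhead. */
static size_t blocks_per_page(size_t level) {
    return PAGE_SIZE / level;
}
```

Keeping pages homogeneous in block size is what allows allocation at a fixed size without ever cutting a block.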
Referring to fig. 3, a schematic diagram of memory pages of different granularity levels in a memory pool according to an example of the present application is shown. Each memory page is 4 kbytes in size; a memory page with a granularity level of 16 bytes contains 128 memory blocks, each 16 bytes in size.
In this embodiment, the first memory allocation request refers to an original variable memory allocation request in the target application program, and the second memory allocation request refers to the converted fixed memory allocation request of the present application. In one example, assume that the target application program includes the following memory allocation function: malloc(22), which requests allocation of a 22-byte memory block, and that this function is replaced with the preset memory allocation function ccalloc(22). When the target application program executes the preset memory allocation function ccalloc(22), a first memory allocation request is triggered, requesting allocation of a memory block of a first size (22 bytes). The middle layer receives the first memory allocation request and converts it into a second memory allocation request, which requests allocation of a memory block of a second size determined according to the first size and the granularity levels of the memory pages in the memory pool. Specifically, the second size may be the smallest granularity level that is greater than or equal to the first size.
In the above example, according to the granularity levels provided by the Slab mechanism, 32 bytes is the smallest granularity level greater than the first size of 22 bytes; therefore, for the preset memory allocation function ccalloc(22), the second size may be determined to be 32 bytes. That is, in the embodiments of the present application, replacing the memory allocation function malloc(22) with the preset memory allocation function ccalloc(22) converts the first memory allocation request, which originally applied for 22 bytes, into a second memory allocation request applying for 32 bytes.
For another example, assume that the target application program further includes the following memory allocation functions: malloc(28) and malloc(19), which are replaced with the preset memory allocation functions ccalloc(28) and ccalloc(19), respectively. For ccalloc(28), the first size is 28 bytes and the second size may be determined to be 32 bytes; for ccalloc(19), the first size is 19 bytes and the second size may also be determined to be 32 bytes. That is, both the first memory allocation request originally applying for 28 bytes and the one originally applying for 19 bytes are converted into second memory allocation requests applying for 32 bytes.
For another example, assume that the target application further includes the following memory allocation function: malloc(38), which is replaced with the preset memory allocation function ccalloc(38). For the preset memory allocation function ccalloc(38), the first size is 38 bytes and the second size may be determined to be 48 bytes. That is, the embodiment of the present application converts the first memory allocation request originally applying for 38 bytes into a second memory allocation request applying for 48 bytes.
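The size conversion in the examples above can be sketched in C. The granularity-level table below is an assumption for illustration: the examples only imply that 32 and 48 bytes are among the levels, and the function name second_size is hypothetical.

```c
#include <stddef.h>

/* Hypothetical granularity levels implied by the examples above
 * (22, 28, and 19 bytes map to 32; 38 bytes maps to 48). The exact
 * table used by the patent is not given, so this one is illustrative. */
static const size_t levels[] = { 16, 32, 48, 64, 96, 128 };

/* Return the smallest granularity level >= first_size (the "second size"),
 * or 0 if the request exceeds the largest level. */
size_t second_size(size_t first_size) {
    for (size_t i = 0; i < sizeof(levels) / sizeof(levels[0]); i++) {
        if (levels[i] >= first_size)
            return levels[i];
    }
    return 0; /* too large for the fixed-size path */
}
```

In this sketch the rounding is a linear scan over a small sorted table; a real implementation could use a lookup table indexed by size for constant-time conversion.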
The embodiments of the present application convert a first memory allocation request originally applying for a first size (e.g., 22, 28, or 19 bytes as described above) in a target application into a second memory allocation request applying for a second size (e.g., 32 or 48 bytes as described above). The second size is a fixed memory size provided by the memory pool; that is, the variable-size memory allocation of the target application program, which originally used the malloc/free functions, is converted by the preset functions of the embodiment of the present application into fixed-size allocation based on the memory pool. Because the memory pool is divided into memory blocks of different sizes, from small to large, according to a set of granularity levels, and a certain number of memory pages are configured for each granularity level, memory blocks can be allocated at a fixed size each time without being cut up. This avoids memory fragmentation, reduces the time spent searching for an allocable memory block, and improves memory allocation efficiency.
In the embodiment of the present application, after the first memory allocation request is converted into the second memory allocation request, an allocable memory block is first searched for in the cached memory pages, where the granularity level of the memory page to which the allocable memory block belongs matches the second size. Illustratively, after the target application calls the preset memory allocation function ccalloc(22), the intermediate layer receives a first memory allocation request applying for a 22-byte memory block and converts it into a 32-byte second memory allocation request. The intermediate layer then searches for an allocable memory block in the cached memory pages whose granularity level is 32 bytes; if no allocable memory block exists in the cached memory pages, it requests an allocable memory page from the kernel through the driver layer. The kernel finds an allocable memory page in the memory pool and returns its address to the intermediate layer through the driver layer; the intermediate layer searches for an allocable memory block in that memory page and caches the page. Finally, the intermediate layer returns the address of the found allocable memory block to the target application program.
In specific implementation, the intermediate layer may cache the memory pages allocated by the kernel; the number of cached memory pages is not limited in this application. Illustratively, the intermediate layer may cache several memory pages of each granularity level.
In an optional embodiment of the present application, the cached memory pages include memory pages with different granularity levels, and memory management data corresponding to each cached memory page is recorded in the cache; for a cached memory page, the memory management data corresponding to the memory page is used to record whether each memory block in the memory page is allocated.
It should be noted that the cached memory pages are memory pages in the memory pool allocated by the kernel, and all memory pages in the memory pool are managed uniformly by the kernel. For the memory pages cached in the intermediate layer, the intermediate layer may store the memory management data corresponding to each cached memory page in the cache. In the embodiment of the present application, the memory management data may be represented by an array structure, recorded as an array bitmap, in which each bit of an integer corresponds to the allocation state of one memory block. For example, a bit value of 0 indicates that the corresponding memory block is not allocated, and a bit value of 1 indicates that the corresponding memory block is allocated.
In this embodiment of the present application, for the cached memory pages, the memory management data (hereinafter referred to as bitmap) corresponding to each cached memory page may be stored. For example, for a memory page with a granularity level of 32 bytes, assuming the memory page is 4 Kbytes, the page holds 128 memory blocks of 32 bytes: bit 0 corresponds to the first memory block, bit 1 to the second, and so on up to bit 127 for the 128th memory block. A total of 128 bits, that is, two 8-byte integers, is therefore needed to represent the allocation status of all the memory blocks in the memory page.
In one example, it is assumed that memory management data corresponding to the ith memory page of the cache is: bitmap [ i ] =0x00000000000000FF, which indicates that the 0 th to 7 th memory blocks in the ith memory page are allocated, and the 8 th to 63 th memory blocks are not allocated.
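The bitmap encoding in this example can be illustrated with a small helper; the function name is hypothetical, and one 64-bit word covers 64 of the page's blocks as described above.

```c
#include <stdint.h>

/* One 64-bit word of the bitmap covers 64 memory blocks: bit k is 1 when
 * block k is allocated and 0 when it is free, matching the example
 * bitmap[i] = 0x00000000000000FF (blocks 0-7 allocated, 8-63 free). */
int block_allocated(uint64_t word, unsigned k) {
    return (int)((word >> k) & 1u);
}
```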
In an optional embodiment of the present application, each bit of the integer-typed memory management data corresponds to one memory block in a memory page, and the value of each bit indicates whether the corresponding memory block is allocated. After searching for an allocable memory block in the cached memory pages, the method may further include: if an allocable memory block is found in a cached memory page, after the address of the allocable memory block is returned, updating the value of the bit corresponding to that memory block in the memory management data.
For the cached memory pages, the intermediate layer can manage the memory blocks in them through the bitmap.
Taking the call of the preset memory allocation function ccalloc(22) as an example, the intermediate layer converts it into a second memory allocation request applying for 32 bytes. The intermediate layer searches whether an allocable memory block exists in the cached memory pages with a granularity level of 32 bytes, that is, it searches the bitmap corresponding to each such cached memory page for a bit with the value 0. If a bit with the value 0 is found in the bitmap of some memory page, an allocable memory block has been found: the address of that memory block can be returned to the target application program, and the value of the corresponding bit in the bitmap is updated to 1, indicating that the memory block has been allocated.
In an optional embodiment of the present application, searching for an allocable memory block in the cached memory pages may include:
step S11, determining the size-matched memory pages among the cached memory pages, where size matching means that the granularity level of a page matches the second size;
step S12, acquiring the memory management data corresponding to the size-matched memory pages;
step S13, searching, according to the memory management data, whether an unallocated memory block exists in the size-matched memory pages;
step S14, if an unallocated memory block exists in a size-matched memory page, determining that an allocable memory block has been found;
step S15, if no unallocated memory block is found in any of the cached size-matched memory pages, determining that no allocable memory block exists in the cached memory pages.
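Steps S11–S15 can be sketched for a single size-matched page as follows; the function name and the two-word bitmap layout (128 blocks of 32 bytes in a 4 KB page) are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Scan the bitmap words of one cached page of the matching granularity
 * level (steps S13-S14) and return the index of the first unallocated
 * block, or -1 if every block is allocated (contributing to step S15). */
int find_free_block(const uint64_t bitmap[], size_t nwords) {
    for (size_t w = 0; w < nwords; w++) {
        if (bitmap[w] != UINT64_MAX) {          /* some bit is still 0 */
            for (unsigned b = 0; b < 64; b++)
                if (!((bitmap[w] >> b) & 1u))
                    return (int)(w * 64 + b);   /* block index in the page */
        }
    }
    return -1; /* no allocable block in this page */
}
```

The caller would repeat this over every cached page of the matching level; only when all of them return -1 does step S15 apply and the intermediate layer falls back to the kernel.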
Taking the above call of the preset memory allocation function ccalloc(22) as an example, the intermediate layer converts it into a second memory allocation request applying for 32 bytes, so the second size is 32 bytes. The intermediate layer searches the bitmap corresponding to each cached memory page with a granularity level of 32 bytes for a bit with the value 0. If all bits in the bitmaps of all cached 32-byte memory pages are 1, all memory blocks in those pages have been allocated, and no allocable 32-byte memory block exists in the cached memory pages. At this point, the intermediate layer needs to apply to the kernel layer for an allocable memory page.
Referring to fig. 4, a schematic flow chart of a memory allocation method in an example of the present application is shown. As shown in fig. 4, the memory allocation method may include the following steps:
step 401, when the target application program meets the preset condition, replacing the memory allocation/release function in the target application program with a preset function.
Specifically, the memory allocation function malloc in the source code of the target application program may be replaced by the preset memory allocation function ccalloc, and the memory release function free in the source code may be replaced by the preset memory release function ccfree.
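One possible way to perform this replacement without editing every call site is a macro redirection, sketched below. The stub bodies of ccalloc/ccfree are illustrative stand-ins that simply forward to the C allocator; in the patent they would route through the memory-pool middle layer instead.

```c
#include <stdlib.h>
#include <stddef.h>

/* Illustrative stand-ins for the preset functions. Because they are
 * compiled before the macros below take effect, their bodies call the
 * real C allocator. */
void *ccalloc(size_t n) { return malloc(n); }
void  ccfree(void *p)   { free(p); }

/* After these definitions, every malloc/free in the application source
 * is redirected to the preset functions, leaving the rest of the code
 * untouched, as step 401 describes. */
#define malloc(n) ccalloc(n)
#define free(p)   ccfree(p)
```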
Wherein the preset condition may include: the target application program comprises a preset number of memory allocation functions, and the preset number of memory allocation functions are used for requesting allocation of memory blocks with fixed size, or the preset number of memory allocation functions are used for requesting allocation of memory blocks with non-fixed size smaller than the minimum granularity level.
In step 402, the middle layer receives a first memory allocation request triggered by a target application program calling a preset function.
The target application program triggers a first memory allocation request by calling the preset function, requesting allocation of a memory block of a first size; if the preset memory allocation function called by the target application is ccalloc(n), the first size is n bytes.
In step 403, the middle layer converts the first memory allocation request into a second memory allocation request.
The middle layer converts a first memory allocation request applying for a first-size memory block into a second memory allocation request applying for a second-size memory block. The second size is determined according to the first size and granularity levels of memory pages in the memory pool, and the second size may be a minimum granularity level greater than or equal to the first size in the granularity levels.
Step 404, the middle layer searches the memory pages in the cache for allocable memory blocks.
Specifically, the intermediate layer searches whether an allocable memory block exists in the cached memory pages whose granularity level matches the second size. That is, it searches the bitmap corresponding to each such cached memory page for a bit with the value 0: if such a bit exists, an allocable memory block has been found in the cached memory pages; if not, no allocable memory block has been found in the cached memory pages.
Step 405, if the allocable memory blocks are found in the cached memory page, executing step 408; if the allocable memory block is not found in the cached memory page, step 406 is executed.
Step 406, the intermediate layer applies to the kernel for an allocable memory page through the driver layer, and searches for an allocable memory block in that memory page.
The intermediate layer applies to the kernel for an allocable memory page through the driver layer; the kernel determines an allocable memory page in the memory pool and returns the head address of that page to the driver layer. The allocable memory page may be a new memory page or a partition page, where a partition page is a memory page containing some allocated memory blocks and some unallocated memory blocks.
Further, when the allocable memory page is a new memory page, the kernel may return the head address of the new page to the driver layer, which passes it on to the intermediate layer. When the allocable memory page is a partition page, the kernel also generates the memory management data bitmap corresponding to the partition page according to the allocation state of each memory block in it, and returns the head address of the partition page together with its bitmap to the driver layer, which passes both on to the intermediate layer.
It should be noted that, in practical applications, memory pages are allocated using the Slab mechanism of the Linux kernel. The Slab mechanism uses kernel-space addresses, which the application layer cannot use directly, so the driver layer also needs to call a system function of the kernel to perform address-space conversion on the head address of the allocable memory page returned by the kernel, converting it into a virtual address usable by the application layer, and return that virtual address to the intermediate layer.
Step 407, the middle layer caches the allocable memory pages.
Step 408, the intermediate layer returns the address of the allocable memory block to the target application program.
When the target application program has a large number of fixed-size memory allocation/release requirements, or a large number of small-size (smaller than the minimum granularity level) allocation/release requirements, the preset functions provided by the present application can replace the memory allocation/release functions in the target application program. The preset functions convert the original variable-size memory allocation requests in the target application into fixed-size allocation requests based on the memory pool and allocate fixed-size memory blocks to the target application, which avoids memory fragmentation and improves memory allocation efficiency.
In an optional embodiment of the present application, the method may further comprise: after an allocable memory block is found, inserting header data at the head address of the allocable memory block; the header data includes attribute information of the allocable memory block, where the attribute information may include, but is not limited to, the second size and the page identifier of the memory page to which the allocable memory block belongs.
The embodiment of the present application implements management of the memory pool using the Slab mechanism of the Linux kernel and allocates fixed-size memory blocks to the target application program. To make it easier for the intermediate layer to manage the memory blocks allocated by the Linux kernel, and to trace and release a memory block when it later needs to be released, in the embodiment of the present application, after an allocable memory block is found, header data is inserted at the head address of the allocable memory block. The header data is metadata of the memory block: it has a fixed size and can be used to record attribute information of the memory block. Illustratively, the header data may be 4 or 8 bytes. The attribute information may include, but is not limited to, the second size and the page identifier of the memory page to which the allocable memory block belongs. For each cached memory page, the intermediate layer may record its corresponding page identifier, so that each cached memory page can be managed and identified by its page identifier.
Referring to fig. 5, a data structure diagram of a memory block into which header data is inserted in an example of the present application is shown. As shown in fig. 5, the header data of the memory block includes the second size (size) and a page identifier (page_id). The header data may also include other attribute information of the memory block (not shown in the figure).
In one example, when the target application calls the preset memory allocation function ccalloc(17), the corresponding first memory allocation request is converted into a 32-byte second memory allocation request, and after a 32-byte allocable memory block is found, the intermediate layer inserts header data at the head address of the allocable memory block. For example, the inserted header data has size 32 and page_id 2, indicating that the size of the memory block is 32 bytes and that the page identifier of the memory page to which it belongs is 2.
It should be noted that the "size of application" shown in fig. 5 refers to the first size. In the embodiment of the present application, although the target application's first memory allocation request for a first-size memory block is converted into a second memory allocation request for a second-size memory block, and a second-size memory block is allocated to the target application, the target application actually needs only a first-size memory block. Therefore, within the allocated second-size memory block, the memory actually used by the target application is of the first size; the target application perceives neither the extra allocated memory space nor the header data in the allocated memory block. The size (second size) recorded in the header data is used to identify the size of the memory block when it is released.
In this embodiment of the present application, returning the address of the found allocable memory block may include: returning a pointer to the offset address of the allocable memory block so that the target application program can use the memory space the pointer points to, where the offset address of a memory block is the sum of the head address of the memory block and the offset of its header data.
In one example, assume the target application calls the preset memory allocation function ccalloc(22) to request memory, the intermediate layer obtains an allocable memory block whose head address is 0x7000, and inserts 8 bytes of header data at that head address. The address of the allocable memory block returned by the intermediate layer to the application layer is 0x7000 plus the offset of the header data, that is, 0x7008 (0x7000 + 8). The target application may then use the 32-byte memory space from address 0x7008 to 0x7028 in the allocable memory block.
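The header insertion and offset-address computation in this example can be sketched as follows. The exact struct layout is an assumption for illustration (fig. 5 only names the size and page_id fields), as is the function name.

```c
#include <stdint.h>
#include <string.h>

/* Assumed 8-byte header layout: the second size and the page identifier
 * of the page the block belongs to, as described for fig. 5. */
typedef struct {
    uint32_t size;     /* second size of the block, e.g. 32 */
    uint32_t page_id;  /* page identifier, e.g. 2 */
} block_header;

/* Write the header at the block's head address and return the offset
 * address (head address + header size) handed to the application. */
void *attach_header(void *block_head, uint32_t size, uint32_t page_id) {
    block_header h = { size, page_id };
    memcpy(block_head, &h, sizeof h);
    return (char *)block_head + sizeof(block_header);
}
```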
In an optional embodiment of the present application, the method may further comprise:
step S21, receiving a memory release request, wherein the memory release request carries a pointer pointing to an offset address of a target memory block;
step S22, obtaining the header data of the target memory block according to the pointer pointing to the offset address of the target memory block;
step S23, analyzing the header data of the target memory block to obtain a page identifier of a memory page to which the target memory block belongs;
step S24, according to the page identifier of the memory page to which the target memory block belongs, querying whether the memory page to which the target memory block belongs is in the cache;
step S25, if it is determined that the memory page to which the target memory block belongs is in the cache, releasing the target memory block;
step S26, if it is determined that the memory page to which the target memory block belongs is not in the cache, caching the memory release request, and submitting the cached memory release request to the kernel for release when a release space corresponding to the cached memory release request reaches a preset threshold.
In this embodiment, a memory block in a cached memory page can be released directly by the intermediate layer. A memory block in a memory page that is not in the cache can be released by the kernel in a delayed-release manner.
The memory release request is triggered by the target application program calling the preset memory release function ccfree(ptr), where ptr is a pointer to the target memory block, and the target memory block is the memory block that needs to be released.
According to the pointer to the offset address of the target memory block, the header data of the target memory block may be obtained (as shown in fig. 5). For example, in the above example, the memory release request of the target application carries a pointer to the offset address of the target memory block, such as address 0x7008. The intermediate layer subtracts the offset of the header data from the offset address to obtain the head address of the target memory block (e.g., 0x7000), reads 8 bytes from the head address, that is, the header data of the target memory block, and parses the header data to obtain the page identifier of the memory page to which the target memory block belongs.
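The release-side steps S21–S23 can be sketched in the same way; the 8-byte two-field header layout and the function name are again illustrative assumptions.

```c
#include <stdint.h>
#include <string.h>

/* Same assumed header layout as on the allocation side. */
typedef struct {
    uint32_t size;     /* second size recorded at allocation time */
    uint32_t page_id;  /* page identifier of the owning page */
} block_header;

/* Step back from the offset address carried by the release request
 * (e.g. 0x7008 -> 0x7000) and read the size and page identifier out
 * of the header (steps S22-S23). */
void parse_release_ptr(void *user_ptr, uint32_t *size, uint32_t *page_id) {
    block_header h;
    void *head = (char *)user_ptr - sizeof(block_header);
    memcpy(&h, head, sizeof h);
    *size = h.size;
    *page_id = h.page_id;
}
```

The page_id recovered here is what step S24 uses to decide whether the owning page is in the cache.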
For each cached memory page, the intermediate layer may record its page identifier. Therefore, the page identifiers of the cached memory pages may be queried according to the page identifier of the memory page to which the target memory block belongs, to determine whether that memory page is in the cache.
Further, if it is determined that the memory page to which the target memory block belongs is in the cache, releasing the target memory block may include: acquiring the memory management data corresponding to that memory page and updating the value of the bit corresponding to the target memory block. Once the bit corresponding to the target memory block is set to 0, the target memory block becomes a free memory block and may be reallocated.
It should be noted that the size of the target memory block is the second size. In the embodiment of the present application, the first size is the memory size actually requested and used by the target application, and the second size is the memory size actually allocated to it. When the target memory block is released, the second size may be obtained from its header data, and a memory block of the second size is released.
If the memory page to which the target memory block belongs is determined not to be in the cache, it can be released by the kernel in a delayed-release manner. Delayed release means that the memory release request is cached first, and when the release space corresponding to the cached memory release requests reaches a preset threshold, the cached requests are submitted to the kernel through the driver layer for release. That is, when memory is released, the memory block is not immediately returned to the operating system; memory is returned only when a preset threshold's worth of memory blocks are all free. This reduces the number of kernel operations and improves system performance. The embodiment of the present application does not limit the preset threshold. For example, the preset threshold may be n free memory pages, where n can be set as needed, e.g., n = 128. A free memory page is one in which all memory blocks are free (their corresponding bits are 0).
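A minimal sketch of the delayed-release idea follows. Counting pending release requests rather than free memory pages is a simplification (the patent's threshold is expressed in free pages), and all names are illustrative.

```c
#include <stddef.h>

#define RELEASE_THRESHOLD 128  /* illustrative stand-in for the preset threshold */

static void *pending[RELEASE_THRESHOLD];
static size_t npending = 0;

/* Cache a release request; once the threshold is reached, the whole
 * batch would be submitted to the kernel through the driver layer in
 * one go. Returns the number of requests flushed (0 if still caching). */
int delayed_release(void *ptr) {
    pending[npending++] = ptr;
    if (npending < RELEASE_THRESHOLD)
        return 0;                   /* keep caching release requests */
    int flushed = (int)npending;
    /* here the batch would be handed to the kernel for actual release */
    npending = 0;
    return flushed;
}
```

Batching releases this way trades a small amount of retained memory for far fewer transitions into the kernel, which is the performance benefit the paragraph above describes.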
In summary, in the embodiments of the present application, a memory allocation/release function in a target application is replaced with a preset function of the present application. When the target application executes the preset function, a first memory allocation request is triggered, requesting allocation of a memory block of a first size. The preset function converts the first memory allocation request into a second memory allocation request using a memory-pool-based allocation scheme; the second memory allocation request requests allocation of a memory block of a second size, determined according to the first size and the granularity levels of the memory pages in the memory pool. The second size is a fixed memory size provided by the memory pool; that is, the embodiment of the present application converts variable-size memory allocation requests in the target application into fixed-size allocation requests based on the memory pool. Because the memory pool is divided into memory blocks of different sizes, from small to large, according to a set of granularity levels, and a certain number of memory pages are configured for each granularity level, memory blocks can be allocated at a fixed size each time without being cut up, which avoids memory fragmentation, reduces the time spent searching for allocable memory blocks, and improves memory allocation efficiency. In addition, the memory allocation method provided by the present application requires only simple function replacement in the source code of the target application, without modifying other code; it has low operating cost, wide applicability, and is convenient for code maintenance.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art will understand that the embodiments are not limited by the described order of acts, as some steps can be performed in other orders or simultaneously. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily required by the embodiments of the application.
Referring to fig. 6, a block diagram of a memory allocation apparatus according to an embodiment of the present application is shown, where the memory allocation apparatus is applied to a computer device, and the apparatus may include:
an allocation request receiving module 601, configured to receive a first memory allocation request triggered by a target application calling a preset function, where the first memory allocation request is used to request allocation of a memory block with a first size;
a request conversion module 602, configured to convert the first memory allocation request into a second memory allocation request, where the second memory allocation request is used to request to allocate a memory block of a second size, the second size is determined according to the first size and a granularity level of a memory page in a memory pool, the memory pool includes memory pages of different granularity levels, and the memory blocks included in each memory page have the same granularity level;
a first searching module 603, configured to search the memory pages cached in a cache for an allocable memory block, where the granularity level of the memory page to which the allocable memory block belongs matches the second size;
a second searching module 604, configured to, if no allocable memory block exists in the cached memory pages, request an allocable memory page in the memory pool from the kernel, cache the allocable memory page, and search for an allocable memory block in it;
a result returning module 605, configured to return the address of the found allocable memory block.
Optionally, the preset function in the target application program is obtained by replacing a memory allocation/release function in the target application program when the target application program meets a preset condition; the preset conditions may include: the target application program includes a preset number of memory allocation/release functions, and the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a fixed size, or the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a non-fixed size smaller than a minimum granularity level.
Optionally, the cached memory pages include memory pages with different granularity levels, and memory management data corresponding to each cached memory page is recorded in the cache; for a cached memory page, the memory management data corresponding to the memory page is used to record whether each memory block in the memory page is allocated.
Optionally, each bit of the integer-typed memory management data corresponds to one memory block in a memory page, and the value of each bit indicates whether the corresponding memory block is allocated; the apparatus further comprises:
a data updating module, configured to, if an allocable memory block is found in a cached memory page, update the value of the bit corresponding to the allocable memory block in the memory management data after returning the address of the allocable memory block.
Optionally, the apparatus further comprises:
a data inserting module, configured to insert header data at the head address of the allocable memory block after the allocable memory block is found; the header data includes attribute information of the allocable memory block, where the attribute information includes the second size and the page identifier of the memory page to which the allocable memory block belongs;
the result returning module is specifically configured to return a pointer pointing to an offset address of the allocable memory block, where the offset address of one memory block is a sum of a head address of the memory block and an offset of header data of the memory block.
Optionally, the apparatus further comprises:
a release request receiving module, configured to receive a memory release request, where the memory release request carries a pointer pointing to an offset address of a target memory block;
a structure obtaining module, configured to obtain the header data of the target memory block according to the pointer pointing to the offset address of the target memory block;
the identifier obtaining module is configured to analyze the header data of the target memory block to obtain a page identifier of a memory page to which the target memory block belongs;
a cache searching module, configured to query whether a memory page to which the target memory block belongs is in a cache according to a page identifier of the memory page to which the target memory block belongs;
a first releasing module, configured to release the target memory block if it is determined that the memory page to which the target memory block belongs is in the cache;
and a second releasing module, configured to cache the memory release request if it is determined that the memory page to which the target memory block belongs is not in the cache, and submit the cached memory release request to the kernel for releasing when a release space corresponding to the cached memory release request reaches a preset threshold.
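The release path described by these modules can be sketched in C as follows, under assumed names, an assumed header layout, and a mock page cache; the threshold value and helper behavior are illustrative only, not the patent's implementation.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed header layout at each block's head address. */
typedef struct {
    size_t   block_size;
    uint32_t page_id;
} blk_hdr_t;

#define RELEASE_THRESHOLD 4096   /* assumed preset threshold (bytes) */
#define CACHED_PAGES      4

static uint32_t cached_page_ids[CACHED_PAGES] = { 1, 2, 3, 4 };
static size_t   pending_release_bytes;  /* cached release requests   */
static int      direct_frees, kernel_submits; /* counters for the demo */

/* Query whether the block's owning page is currently cached. */
static bool page_in_cache(uint32_t page_id) {
    for (int i = 0; i < CACHED_PAGES; i++)
        if (cached_page_ids[i] == page_id)
            return true;
    return false;
}

/* Release: header -> page id -> cache lookup -> direct or deferred. */
static void release_block(void *user_ptr) {
    blk_hdr_t *hdr = (blk_hdr_t *)((char *)user_ptr - sizeof *hdr);
    if (page_in_cache(hdr->page_id)) {
        direct_frees++;              /* page cached: release immediately */
    } else {
        pending_release_bytes += hdr->block_size; /* cache the request */
        if (pending_release_bytes >= RELEASE_THRESHOLD) {
            kernel_submits++;        /* batch-submit releases to kernel */
            pending_release_bytes = 0;
        }
    }
}
```

Batching the kernel submissions in this way is what lets frequent small releases avoid a system-call per release, which matches the motivation of the second releasing module.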
Optionally, the second memory allocation request is implemented by invoking the Slab object cache mechanism of the kernel through a preset interface of the target application program; the preset interface comprises a driver layer located between the application layer and the kernel layer, or the preset interface comprises a system call interface of the Slab object cache mechanism provided in the application layer.
In the embodiment of the present application, a memory allocation/release function in a target application program is replaced with the preset function of the present application, and when the target application program executes the preset function, a first memory allocation request is triggered, where the first memory allocation request is used to request allocation of a memory block of a first size. The preset function converts the first memory allocation request into a second memory allocation request using a memory-pool-based allocation mode, where the second memory allocation request is used to request allocation of a memory block of a second size, and the second size is determined according to the first size and the granularity levels of the memory pages in the memory pool. The second size is one of the fixed memory sizes provided by the memory pool; that is, the embodiment of the present application converts variable-size memory allocation requests in the target application program into fixed-size allocation requests served by the memory pool. Because the memory pool is divided into memory blocks of different sizes, from small to large, according to a series of granularity levels, and a certain number of memory pages is configured for each granularity level, memory blocks can be allocated at a fixed size each time without being cut. This avoids memory fragmentation, reduces the time consumed in searching for an allocable memory block, and improves memory allocation efficiency. In addition, the memory allocation method provided by the present application only requires a simple function replacement in the source code of the target application program and no modification of other code, so its adoption cost is low, its applicability is wide, and the code remains easy to maintain.
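As a sketch of the size conversion described above, a minimal C routine might round the requested first size up to the smallest granularity level the pool provides. The level table, the power-of-two progression, and the function name are assumptions made for illustration, not the patent's actual values.

```c
#include <stddef.h>

/* Assumed example of the pool's granularity levels, smallest to
 * largest; the patent does not fix these concrete values. */
static const size_t granularity_levels[] =
    { 8, 16, 32, 64, 128, 256, 512, 1024 };
#define NUM_LEVELS (sizeof granularity_levels / sizeof granularity_levels[0])

/* Return the second (fixed) size for a requested first size, or 0
 * when the request exceeds the largest level and would need to
 * bypass the memory pool. */
static size_t second_size_for(size_t first_size) {
    for (size_t i = 0; i < NUM_LEVELS; i++)
        if (first_size <= granularity_levels[i])
            return granularity_levels[i];
    return 0;
}
```

For example, a request for 33 bytes would be served from the 64-byte level, so every allocation at that level has an identical size and no block ever needs to be cut.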
For the device embodiment, since it is substantially similar to the method embodiment, the description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiment.
The embodiments in the present specification are all described in a progressive manner, and each embodiment focuses on differences from other embodiments, and portions that are the same and similar between the embodiments may be referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The embodiment of the present application provides a memory allocation device, which includes a memory, and one or more programs, where the one or more programs are stored in the memory, and configured to be executed by one or more processors, where the one or more programs include instructions for performing the memory allocation method according to one or more embodiments.
Fig. 7 is a block diagram illustrating an apparatus 800 for memory allocation in accordance with an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, audio component 810 includes a Microphone (MIC) configured to receive external audio signals when apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice information processing mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; the sensor assembly 814 may also detect a change in the position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in the temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus 800 and other devices in a wired or wireless manner. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 8 is a schematic diagram of a server in some embodiments of the present application. The server 1900 may vary widely by configuration or performance and may include one or more Central Processing Units (CPUs) 1922 (e.g., one or more processors) and memory 1932, one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. Memory 1932 and storage medium 1930 can be, among other things, transient or persistent storage. The program stored in the storage medium 1930 may include one or more modules (not shown), each of which may include a series of instructions operating on a server. Still further, a central processor 1922 may be provided in communication with the storage medium 1930 to execute a series of instruction operations in the storage medium 1930 on the server 1900.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input-output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, in which instructions, when executed by a processor of an apparatus (server or terminal), enable the apparatus to perform the memory allocation method shown in fig. 1.
A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a device (server or terminal), enable the device to perform the description of the memory allocation method in the embodiment corresponding to fig. 1, and therefore, the description thereof will not be repeated herein. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer program product or the computer program referred to in the present application, reference is made to the description of the embodiments of the method of the present application.
Further, it should be noted that: embodiments of the present application also provide a computer program product or computer program, which may include computer instructions, which may be stored in a computer-readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor can execute the computer instruction, so that the computer device executes the description of the memory allocation method in the embodiment corresponding to fig. 1, which is described above, and therefore, the description thereof will not be repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the computer program product or computer program embodiments referred to in the present application, reference is made to the description of the method embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
The foregoing describes in detail a memory allocation method, a memory allocation apparatus, and a readable storage medium provided by the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing examples are only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (9)

1. A memory allocation method applied to a computer device is characterized by comprising the following steps:
receiving a first memory allocation request triggered by calling a preset function for a target application program, wherein the first memory allocation request is used for requesting to allocate a memory block with a first size;
converting the first memory allocation request into a second memory allocation request, where the second memory allocation request is used to request allocation of a memory block of a second size, the second size is determined according to the first size and a granularity level of a memory page in a memory pool, the memory pool includes memory pages of different granularity levels, and the memory blocks included in each memory page have the same granularity level;
searching the cached memory pages for an allocable memory block, wherein the granularity level of the memory page to which the allocable memory block belongs matches the second size;
if no allocable memory block exists in the cached memory pages, requesting an allocable memory page for the memory pool from a kernel, caching the allocable memory page, and searching for an allocable memory block in the allocable memory page;
returning the searched address of the allocable memory block;
the preset function in the target application program is obtained by replacing a memory allocation/release function in the target application program when the target application program meets a preset condition; the preset conditions include: the target application program includes a preset number of memory allocation/release functions, and the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a fixed size, or the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a non-fixed size smaller than a minimum granularity level.
2. The method according to claim 1, wherein the cached memory pages include memory pages with different granularity levels, and memory management data corresponding to each cached memory page is recorded in the cache; for one cached memory page, the memory management data corresponding to the memory page is used to record whether each memory block in the memory page is allocated.
3. The method according to claim 2, wherein each bit of the integer-typed memory management data corresponds to one memory block in a memory page, and a value of each bit indicates whether the corresponding memory block is allocated; after searching the cached memory page for an allocable memory block, the method further comprises:
if the allocable memory block is found in the cached memory page, after the address of the allocable memory block is returned, the value of the corresponding bit of the allocable memory block in the memory management data is updated.
4. The method of claim 1, further comprising:
after the allocable memory blocks are found, inserting head data at the head addresses of the allocable memory blocks; the header data includes attribute information of the allocable memory block, where the attribute information includes the second size and a page identifier of a memory page to which the allocable memory block belongs;
the returning of the searched address of the allocatable memory block includes:
and returning a pointer pointing to the offset address of the allocable memory block, wherein the offset address of a memory block is the sum of the head address of the memory block and the offset of the header data of the memory block.
5. The method of claim 4, further comprising:
receiving a memory release request, wherein the memory release request carries a pointer pointing to an offset address of a target memory block;
acquiring the header data of the target memory block according to the pointer pointing to the offset address of the target memory block;
parsing the header data of the target memory block to obtain a page identifier of the memory page to which the target memory block belongs;
inquiring whether the memory page to which the target memory block belongs is in a cache or not according to the page identifier of the memory page to which the target memory block belongs;
if the memory page to which the target memory block belongs is determined to be in the cache, releasing the target memory block;
if the memory page to which the target memory block belongs is determined not to be in the cache, caching the memory release request, and submitting the cached memory release request to the kernel for release when a release space corresponding to the cached memory release request reaches a preset threshold.
6. The method according to any one of claims 1 to 5, wherein the second memory allocation request is realized by calling a Slab object cache mechanism of a kernel through a preset interface of the target application program; the preset interface comprises a driving layer positioned between an application layer and a kernel layer, or the preset interface comprises a system calling interface of a Slab object cache mechanism arranged in the application layer.
7. A memory allocation apparatus applied to a computer device, the apparatus comprising:
the allocation request receiving module is configured to receive a first memory allocation request triggered by a target application calling a preset function, where the first memory allocation request is used to request allocation of a memory block of a first size;
a request conversion module, configured to convert the first memory allocation request into a second memory allocation request, where the second memory allocation request is used to request allocation of a memory block of a second size, the second size is determined according to the first size and a granularity level of a memory page in a memory pool, the memory pool includes memory pages of different granularity levels, and the memory blocks included in each memory page have the same granularity level;
a first searching module, configured to search the cached memory pages for an allocable memory block, where the granularity level of the memory page to which the allocable memory block belongs is matched with the second size;
a second searching module, configured to request an allocable memory page from the memory pool to a kernel if an allocable memory block does not exist in the cached memory page, cache the allocable memory page, and search for an allocable memory block in the allocable memory page;
the result returning module is used for returning the searched addresses of the allocable memory blocks;
the preset function in the target application program is obtained by replacing a memory allocation/release function in the target application program when the target application program meets a preset condition; the preset conditions include: the target application program includes a preset number of memory allocation/release functions, and the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a fixed size, or the preset number of memory allocation/release functions are used for requesting allocation/release of memory blocks of a non-fixed size smaller than a minimum granularity level.
8. A memory allocation apparatus comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for performing the memory allocation method of any one of claims 1 to 6.
9. A readable storage medium having stored thereon instructions which, when executed by one or more processors of an apparatus, cause the apparatus to perform the memory allocation method of any one of claims 1 to 6.
CN202211063392.7A 2022-09-01 2022-09-01 Memory allocation method and device and readable storage medium Active CN115145735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211063392.7A CN115145735B (en) 2022-09-01 2022-09-01 Memory allocation method and device and readable storage medium

Publications (2)

Publication Number Publication Date
CN115145735A CN115145735A (en) 2022-10-04
CN115145735B true CN115145735B (en) 2022-11-15

Family

ID=83415267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211063392.7A Active CN115145735B (en) 2022-09-01 2022-09-01 Memory allocation method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115145735B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116501511B (en) * 2023-06-29 2023-09-15 恒生电子股份有限公司 Memory size processing method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109815005A (en) * 2017-11-22 2019-05-28 华为技术有限公司 A kind of method, apparatus and storage system of managing internal memory
CN110134514A (en) * 2019-04-18 2019-08-16 华中科技大学 Expansible memory object storage system based on isomery memory

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11886332B2 (en) * 2020-10-30 2024-01-30 Universitat Politecnica De Valencia Dynamic memory allocation methods and systems

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN109815005A (en) * 2017-11-22 2019-05-28 华为技术有限公司 A kind of method, apparatus and storage system of managing internal memory
CN110134514A (en) * 2019-04-18 2019-08-16 华中科技大学 Expansible memory object storage system based on isomery memory

Non-Patent Citations (1)

Title
Research on the Nginx_Slab Algorithm; Song Yaqin et al.; Network New Media Technology (《网络新媒体技术》); 2018-03-31; Vol. 7, No. 2; pp. 54-61 *


Similar Documents

Publication Publication Date Title
CN110751275B (en) Graph training system, data access method and device, electronic device and storage medium
EP3514689A1 (en) Memory management method and apparatus
KR102077149B1 (en) Method for managing memory and apparatus thereof
CN107291626B (en) Data storage method and device
CN115145735B (en) Memory allocation method and device and readable storage medium
JP2021506016A (en) Methods and devices for processing memory and storage media
KR102402780B1 (en) Apparatus and method for managing memory
CN114546897A (en) Memory access method and device, electronic equipment and storage medium
CN110554837A (en) Intelligent switching of fatigue-prone storage media
CN113419670A (en) Data writing processing method and device and electronic equipment
CN114428797A (en) Method, device and equipment for caching embedded parameters and storage medium
CN111638938B (en) Migration method and device of virtual machine, electronic equipment and storage medium
CN112948440A (en) Page data processing method and device, terminal and storage medium
CN110287000B (en) Data processing method and device, electronic equipment and storage medium
CN109992790B (en) Data processing method and device for data processing
CN114416178A (en) Data access method, device and non-transitory computer readable storage medium
CN115495020A (en) File processing method and device, electronic equipment and readable storage medium
CN114428589A (en) Data processing method and device, electronic equipment and storage medium
CN111400563B (en) Pattern matching method and device for pattern matching
CN114610324A (en) Binary translation method, processor and electronic equipment
CN116360671A (en) Storage method, storage device, terminal and storage medium
CN113064724A (en) Memory allocation management method and device and memory allocation management device
CN111708715A (en) Memory allocation method, memory allocation device and terminal equipment
CN117827709B (en) Method, device, equipment and storage medium for realizing direct memory access
CN117909258B (en) Optimization method and device for processor cache, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant