WO2022062833A1 - 内存分配方法及相关设备 - Google Patents

内存分配方法及相关设备 Download PDF

Info

Publication number
WO2022062833A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
target
thread
node
capacity
Prior art date
Application number
PCT/CN2021/114967
Other languages
English (en)
French (fr)
Inventor
马登云
顾鹏
Original Assignee
深圳云天励飞技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术股份有限公司 filed Critical 深圳云天励飞技术股份有限公司
Publication of WO2022062833A1 publication Critical patent/WO2022062833A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Definitions

  • the present application relates to the technical field of computer storage, and in particular, to a memory allocation method and related equipment.
  • Embedded systems often use multiple threads to complete different tasks, and multiple threads can use the same system memory resources, but the system memory resources are limited.
  • When a thread cannot apply for memory resources because system memory resources are insufficient, the thread cannot perform task processing, which has a huge impact on the business of the system. Therefore, how to improve the utilization of memory resources is an urgent problem to be solved.
  • Embodiments of the present application provide a memory allocation method and related equipment, which are used to improve the utilization rate of memory resources.
  • an embodiment of the present application provides a memory allocation method, which is applied to an electronic device, and the method includes:
  • receiving a memory allocation request, where the memory allocation request is used to request memory allocation for a first thread, and the memory allocation request carries an expected memory capacity; allocating target memory for the first thread based on the expected memory capacity, and recording attribute information of the target memory; receiving a memory release request, where the memory release request is used to release the target memory; and releasing the target memory based on the attribute information.
  • an embodiment of the present application provides a memory allocation device, which is applied to an electronic device, and the device includes:
  • a first receiving unit configured to receive a memory allocation request, where the memory allocation request is used for requesting memory allocation for the first thread, and the memory allocation request carries an expected memory capacity;
  • an allocation unit configured to allocate target memory for the first thread based on the expected memory capacity;
  • a recording unit configured to record the attribute information of the target memory;
  • a second receiving unit configured to receive a memory release request, where the memory release request is used to release the target memory
  • a release unit configured to release the target memory based on the attribute information.
  • an embodiment of the present application provides an electronic device, including a processor, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor,
  • the one or more programs described above include instructions for executing steps in the method described in the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
  • the computer program product may be a software installation package.
  • It can be seen that, in the embodiments of the present application, a memory allocation request is first received, where the memory allocation request is used to request memory allocation for a first thread and carries an expected memory capacity; target memory is then allocated for the first thread based on the expected memory capacity, and attribute information of the target memory is recorded; a memory release request is then received, where the memory release request is used to release the target memory; and finally the target memory is released based on the attribute information. Since the electronic device allocates the target memory to the first thread according to the expected memory capacity, instead of allocating memory to the first thread arbitrarily, this is beneficial to improving the utilization rate of memory resources.
  • FIG. 1 is a schematic flowchart of a memory allocation method provided by an embodiment of the present application.
  • FIG. 2 is a diagram of a memory allocation architecture provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of attribute information storage provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a memory allocation apparatus provided by an embodiment of the present application.
  • a memory allocation method provided by an embodiment of the present application is applied to the above-mentioned electronic device, and specifically includes the following steps:
  • Step 101 Receive a memory allocation request, where the memory allocation request is used for requesting memory allocation for the first thread, and the memory allocation request carries an expected memory capacity.
  • before receiving the memory allocation request, the electronic device may or may not have previously allocated memory to the first thread.
  • Step 102 Allocate a target memory for the first thread based on the expected memory capacity, and record attribute information of the target memory.
  • in the memory allocation architecture shown in FIG. 2, the memory capacity of memory pool 2 is smaller than that of memory pool 3, and memory pool 2, memory pool 3, and memory block 4 are all allocated from memory pool 1.
  • n in FIG. 2 may be 16, and m is a positive integer. In other embodiments, the value of n may be adjusted accordingly according to the actual situation, that is, it may be larger than 16 or smaller than 16.
  • the memory pool 2 may include 16 memory blocks, the memory capacity of each memory block is 512KB, each memory block includes at least one memory node, and the memory capacity of the memory node is less than or equal to 512KB. In other embodiments, the number of memory blocks included in the memory pool 2 and the memory capacity of each memory block can be adjusted accordingly according to actual conditions.
  • the memory pool 3 may include 16 memory blocks, the memory capacity of each memory block is 3M, each memory block includes at least one memory node, and the memory capacity of a memory node is greater than 512KB and less than or equal to 3M. In other embodiments, the number of memory blocks included in the memory pool 3 and the memory capacity of each memory block can be adjusted accordingly according to actual conditions.
  • the memory capacity of the memory block 4 is greater than 3M. That is, the memory capacity of memory block 4 is greater than the memory capacity of memory blocks in memory pool 3, and the memory capacity of memory blocks in memory pool 3 is greater than the memory capacity of memory blocks in memory pool 2.
  • in other embodiments, the memory capacity of the memory block 4 can be adjusted according to the actual situation, and the size relationship between the memory capacity of the memory block 4, the memory capacity of the memory blocks in the memory pool 3, and the memory capacity of the memory blocks in the memory pool 2 can also be adjusted according to the actual situation.
  • if the expected memory capacity is less than or equal to 512KB, a memory node is allocated from a memory block in memory pool 2; if the expected memory capacity is greater than 512KB and less than or equal to 3M, a memory node is allocated from a memory block in memory pool 3; and if the expected memory capacity is greater than 3M, a memory block is fetched from memory pool 1.
  • the target memory may be memory block 4 in FIG. 2, a memory node in memory pool 2, or a memory node in memory pool 3.
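As an illustration of the capacity-based routing just described, the following minimal C sketch maps an expected capacity to the pool it would be served from. The function and enum names are assumptions introduced for this example, not identifiers from the patent, and the thresholds follow the 512KB and 3M example values.

```c
#include <stdio.h>
#include <stddef.h>

#define KB 1024u
#define MB (1024u * 1024u)

typedef enum { FROM_POOL2_NODE, FROM_POOL3_NODE, FROM_POOL1_BLOCK } alloc_source_t;

/* Decide where the target memory comes from, given the expected capacity. */
static alloc_source_t route_request(size_t expected_bytes)
{
    if (expected_bytes <= 512 * KB)
        return FROM_POOL2_NODE;   /* memory node from a 512KB block in memory pool 2  */
    if (expected_bytes <= 3 * MB)
        return FROM_POOL3_NODE;   /* memory node from a 3M block in memory pool 3     */
    return FROM_POOL1_BLOCK;      /* fetch a whole memory block (block 4) from pool 1 */
}

int main(void)
{
    printf("%d %d %d\n",
           route_request(100 * KB),   /* -> FROM_POOL2_NODE  */
           route_request(1 * MB),     /* -> FROM_POOL3_NODE  */
           route_request(5 * MB));    /* -> FROM_POOL1_BLOCK */
    return 0;
}
```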
  • the attribute information includes the address of the target memory, the address of the memory block where the target memory is located, and the actual memory capacity applied for by the first thread.
  • the capacity of the target memory is greater than or equal to the actual memory capacity applied for by the first thread, and the first thread is allowed to use the applied actual memory capacity in the target memory.
  • for example, if the capacity of the target memory is 3.5M and the actual memory capacity applied for by the first thread is 2M, the first thread can use 2M of memory in the target memory.
  • the memory allocation request also carries an address alignment condition, the address alignment condition is used to determine an attribute record address, the attribute record address is used to store the attribute information, and the attribute record address is determined based on the address alignment condition and the starting address of the target memory.
  • the attribute record address is determined based on the address alignment condition and the starting address of the target memory, including:
  • determining a second starting address based on a first starting address of the target memory and the address alignment condition; and determining the attribute record address based on the second starting address.
  • the second starting address is the address returned to the user.
  • the address alignment condition is that the second starting address is an integer multiple of a preset value, and the preset value may be 256bytes or other values.
  • as shown in the schematic diagram of attribute information storage in FIG. 3, the attribute record addresses for storing the attribute information include address 1, address 2, address 3 and address 4; address 1 is the address three positions before address 4, address 2 is the address two positions before address 4, and address 3 is the address one position before address 4.
  • the address of the memory block where the target memory is located is stored in address 1, the applied actual memory capacity is stored in address 2, the target memory address is stored in address 3, and address 4 is the second starting address.
  • the character type of the address is uint8_t. It can be understood that in other embodiments, the character type of the address may be other types.
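The FIG. 3 layout can be sketched as follows: the block address, the applied actual memory capacity, and the target memory address are written into the three slots immediately before the aligned address returned to the caller (address 4, the second starting address). The one-machine-word slot width and the function name are assumptions; the patent fixes only the ordering of address 1 to address 4 and the uint8_t address type.

```c
#include <stdint.h>
#include <stddef.h>

/* Write the attribute information in front of the second starting address and
 * return that aligned address to the caller. */
uint8_t *record_attributes(uint8_t *target_mem,      /* first starting address    */
                           uint8_t *block_addr,      /* block holding the target  */
                           size_t   actual_capacity, /* applied actual capacity   */
                           size_t   alignment)       /* e.g. 256 bytes            */
{
    /* Second starting address: the first integer multiple of `alignment` that
       leaves room for the three attribute slots in front of it. */
    uintptr_t raw     = (uintptr_t)target_mem + 3 * sizeof(uintptr_t);
    uintptr_t aligned = (raw + alignment - 1) / alignment * alignment;
    uintptr_t *slot   = (uintptr_t *)aligned;

    slot[-3] = (uintptr_t)block_addr;      /* address 1: memory block address    */
    slot[-2] = (uintptr_t)actual_capacity; /* address 2: applied actual capacity */
    slot[-1] = (uintptr_t)target_mem;      /* address 3: target memory address   */
    return (uint8_t *)aligned;             /* address 4: second starting address */
}
```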
  • the applied actual memory capacity is determined based on the address alignment condition, the expected memory capacity, and the memory capacity required for additional information.
  • the additional information includes control information and/or debugging information, such as the size of the memory of the current segment, the start address of the memory of the lower segment, the memory boundary mark, and the like.
  • the memory capacity required for additional information in a 32-bit operating system is 12 bytes
  • the memory capacity required for additional information in a 64-bit operating system is 24 bytes.
  • for example, if the expected memory capacity is 1M, the address alignment condition is that the second starting address is an integer multiple of 256byte, and the memory capacity required for the additional information is 24byte, then the applied actual memory capacity is 1M+256byte+24byte.
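The example above reduces to a one-line calculation. The formula actual = expected + alignment + additional-info overhead is inferred from the stated result and is an assumption about how the three quantities combine.

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t expected  = 1024u * 1024u; /* expected memory capacity: 1M                  */
    size_t alignment = 256u;          /* second starting address is a multiple of 256B */
    size_t extra     = 24u;           /* additional information on a 64-bit OS         */
    size_t actual    = expected + alignment + extra; /* 1M + 256byte + 24byte          */
    printf("applied actual capacity: %zu bytes\n", actual);
    return 0;
}
```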
  • Step 103 Receive a memory release request, where the memory release request is used to release the target memory.
  • Step 104 Release the target memory based on the attribute information.
  • It can be seen that, in the embodiments of the present application, a memory allocation request is first received, where the memory allocation request is used to request memory allocation for a first thread and carries an expected memory capacity; target memory is then allocated for the first thread based on the expected memory capacity, and attribute information of the target memory is recorded; a memory release request is then received, where the memory release request is used to release the target memory; and finally the target memory is released based on the attribute information. Since the electronic device allocates the target memory to the first thread according to the expected memory capacity, instead of allocating memory to the first thread arbitrarily, this is beneficial to improving the utilization rate of memory resources.
  • the allocating target memory for the first thread based on the expected memory capacity includes:
  • if the expected memory capacity is greater than a first memory capacity, the target memory is allocated from a first memory pool, and the capacity of the target memory is greater than the expected memory capacity;
  • if the expected memory capacity is less than or equal to the first memory capacity, the target memory is allocated based on occupancy information of a second memory pool and the expected memory capacity, and the capacity of the first memory pool is greater than the capacity of the second memory pool.
  • the first memory capacity may be 3M, or may be other capacities.
  • in the case that the expected memory capacity is greater than the first memory capacity, the target memory is memory block 4 in FIG. 2.
  • the second memory pool is allocated from the first memory pool, and the second memory pool may be memory pool 2 in FIG. 2 or memory pool 3 in FIG. 2.
  • the second memory pool includes N first memory blocks, each of the first memory blocks includes at least one memory node, and N is a positive integer;
  • the allocation of the target memory based on the occupancy information of the second memory pool and the expected memory capacity includes:
  • in the case that the first thread does not occupy any of the N first memory blocks and the N first memory blocks are all occupied, or in the case that the N first memory blocks are all occupied, the first thread occupies M first memory blocks, and there are no free memory nodes in the M first memory blocks, allocating first memory from the first memory pool, determining the target memory in the first memory, and allocating the target memory, where M is a positive integer and M is less than or equal to N;
  • in the case that the first thread occupies the M first memory blocks and there are free memory nodes in the M first memory blocks, determining the target memory based on the M first memory blocks, and allocating the target memory;
  • in the case that the first thread occupies the M first memory blocks, there are no free memory nodes in the M first memory blocks, and the N first memory blocks include a first free memory block, determining the target memory based on the first free memory block, and allocating the target memory, where the first free memory block is an unoccupied first memory block among the N first memory blocks other than the M first memory blocks.
  • the capacities of the N first memory blocks may be the same.
  • the capacity of each memory node can be the same.
  • a memory node in the same memory block can only be occupied by one thread, and different threads need to occupy memory nodes in different memory blocks.
  • if the first thread occupies M first memory blocks, the M first memory blocks form a doubly circular linked list; the head of the linked list may be the first memory block occupied earliest by the first thread, and the tail of the linked list may be the first memory block occupied last by the first thread.
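A minimal sketch of such a doubly circular linked list of the blocks occupied by one thread is given below; the struct layout and function name are illustrative assumptions.

```c
#include <stddef.h>

typedef struct mem_block {
    struct mem_block *prev;
    struct mem_block *next;
    /* ... free-node lists, counters, owning thread id ... */
} mem_block_t;

/* Append a newly occupied block at the tail (the head stays the block occupied first). */
void thread_blocks_append(mem_block_t **head, mem_block_t *blk)
{
    if (*head == NULL) {                 /* first block the thread occupies */
        blk->prev = blk->next = blk;
        *head = blk;
        return;
    }
    mem_block_t *tail = (*head)->prev;   /* tail = block occupied last */
    blk->prev  = tail;
    blk->next  = *head;
    tail->next = blk;
    (*head)->prev = blk;
}
```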
  • the first memory may be divided into 16 memory blocks, the 16 memory blocks include the target memory, and the memory capacity of each memory block in the 16 memory blocks may be 3M or 512 KB.
  • the first free memory block is stored in the free pool in the form of a singly linked list.
  • determining the target memory based on the first free block is to determine the first free block serving as the header of the singly linked list as the target memory.
  • the first thread does not occupy memory nodes in memory blocks occupied by other threads, which helps the electronic device avoid locking memory nodes and reduces the computational complexity of the electronic device.
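The head-pop operation used by the free pool here, and by the per-block node lists later in this description, can be sketched as follows; the type and field names are illustrative assumptions.

```c
#include <stddef.h>

typedef struct free_item {
    struct free_item *next;   /* downstream neighbour in the singly linked list */
    /* ... payload: a free memory block or a free memory node ... */
} free_item_t;

/* Pop the head of the free list; the popped item becomes the target, and the
 * head of the list becomes the popped item's downstream neighbour. */
free_item_t *free_list_pop(free_item_t **head)
{
    free_item_t *target = *head;
    if (target != NULL) {
        *head = target->next;    /* the list no longer includes the target */
        target->next = NULL;
    }
    return target;
}
```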
  • the header information of the first memory is stored in a system heap.
  • the header information of the first memory includes control information and debugging information, such as the size of the memory of the current segment, the start address of the memory of the lower segment, the memory boundary mark, and the like.
  • the determining the target memory in the first memory includes:
  • dividing the first memory into S second memory blocks, and putting the S second memory blocks into the second memory pool; determining a first target memory block among the S second memory blocks; and determining a first target memory node in the first target memory block as the target memory.
  • the S second memory blocks form a singly linked list, and the header of the singly linked list is the first target memory block.
  • the memory capacities of the S second memory blocks are the same.
  • S can be 16 or other values.
  • the memory nodes in the first target memory block form a singly linked list, and the header of the singly linked list is the first target memory node.
  • the header information of the first memory is stored in the system heap, so that the header information of the first memory does not occupy the storage space of the first memory, thereby helping to increase the number of second memory blocks.
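One possible reading of this arrangement keeps the block descriptors (the header information) on the system heap so that the first memory itself can be split entirely into S second memory blocks; all names and the simplified error handling below are assumptions.

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct block_hdr {
    unsigned char    *base;   /* start of this second memory block              */
    struct block_hdr *next;   /* singly linked list of the S second blocks      */
} block_hdr_t;

/* Split `first_mem` (of `size` bytes) into `s` equal second memory blocks. */
block_hdr_t *split_first_memory(unsigned char *first_mem, size_t size, size_t s)
{
    block_hdr_t *head = NULL;
    size_t block_size = size / s;
    for (size_t i = s; i-- > 0; ) {
        /* descriptor lives on the system heap, not inside first_mem */
        block_hdr_t *h = malloc(sizeof *h);
        if (h == NULL)
            return head;                  /* simplified error handling */
        h->base = first_mem + i * block_size;
        h->next = head;                   /* prepending makes block 0 the head */
        head = h;
    }
    return head;
}
```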
  • the free memory node includes a first free memory node and/or a second free memory node, where the first free memory node is a memory node that has never been occupied, or a memory node applied for by the first thread and released by the first thread, and the second free memory node is a memory node applied for by the first thread and not released by the first thread;
  • the determining the target memory based on the M first memory blocks includes:
  • in the case that the first free memory node exists in the M first memory blocks, determining a second target memory node in the first free memory node as the target memory; in the case that the first free memory node does not exist in the M first memory blocks and the second free memory node exists in the M first memory blocks, determining a third target memory node in the second free memory node as the target memory.
  • if the first free node is a memory node applied for by the first thread and released by the first thread, the first free node is a private memory node of the first thread; the first free nodes form a singly linked list, and the head of the singly linked list is the second target memory node.
  • after the head of the singly linked list is determined to be the second target memory node, the singly linked list no longer includes the second target memory node, and the head of the singly linked list becomes the downstream neighbor node of the second target memory node.
  • the second free node is a common memory node of the first thread; the second free nodes form a singly linked list, and the head of the singly linked list is the third target memory node.
  • after the head of the singly linked list is determined to be the third target memory node, the singly linked list no longer includes the third target memory node, and the head of the singly linked list becomes the downstream neighbor node of the third target memory node.
  • the second free node is a memory node to be released.
  • since releasing a memory node takes much longer than applying for one, using the second free node avoids the problem of memory exhaustion after a thread has run for a long time.
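A sketch of this selection order, private free nodes first and common free nodes second, is given below; the struct and function names are assumptions, and a real implementation would also fall through to a fresh free memory block when both lists are empty.

```c
#include <stddef.h>

typedef struct mem_node { struct mem_node *next; } mem_node_t;

typedef struct {
    mem_node_t *private_free;  /* first free memory nodes (private to the thread)  */
    mem_node_t *common_free;   /* second free memory nodes (common, pending release) */
} thread_block_lists_t;

static mem_node_t *pop(mem_node_t **head)
{
    mem_node_t *n = *head;
    if (n != NULL) { *head = n->next; n->next = NULL; }
    return n;
}

/* Pick a node from the blocks the thread already occupies. */
mem_node_t *pick_free_node(thread_block_lists_t *lists)
{
    mem_node_t *n = pop(&lists->private_free);   /* second target memory node */
    if (n == NULL)
        n = pop(&lists->common_free);            /* third target memory node  */
    return n;
}
```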
  • the determining the target memory based on the first free memory block includes:
  • determining a second target memory block in the first free memory block; and determining a fourth target memory node in the second target memory block as the target memory.
  • the first free memory block is stored in the free pool in the form of a singly linked list, and the header of the singly linked list is the second target memory block.
  • after the head of the singly linked list is determined to be the second target memory block, the singly linked list no longer includes the second target memory block, and the head of the singly linked list becomes the downstream neighbor node of the second target memory block.
  • the memory nodes in the second target memory block form a singly linked list, and the head of the singly linked list is the fourth target memory node.
  • after the head of the singly linked list is determined to be the fourth target memory node, the singly linked list no longer includes the fourth target memory node, and the head of the singly linked list becomes the downstream neighbor node of the fourth target memory node.
  • determining the second target memory block in the first free memory block is beneficial to improve memory utilization.
  • the attribute information includes a first target address of the target memory, and a second target address of a third memory block where the target memory is located;
  • the releasing the target memory based on the attribute information includes:
  • if the second target address is a first address, using a second thread to release the target memory based on the first target address;
  • if the second target address is not the first address, using a third thread to release the target memory based on the first target address and the second target address.
  • the value corresponding to the first address may be zero or other values.
  • the second thread and the third thread may or may not be the same thread.
  • if the second target address is the first address, it means that the target memory was allocated from the first memory pool, and the target memory is released back into the first memory pool.
  • if the second target address is not the first address, the memory block where the target memory is located may be determined based on the second target address, and the target memory may be released into the third memory block based on the first target address.
  • determining, through the second target address, the memory where the target memory is located is beneficial to improving the efficiency of releasing the target memory.
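The release branch can be sketched as follows, where the first address is assumed to be zero and the two release routines are stand-in stubs rather than the patent's actual pool and block release logic.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct mem_block { int placeholder; } mem_block_t;

/* Stubs standing in for the real pool/block release routines (assumptions). */
static void pool1_release_block(void *addr)              { printf("to pool 1: %p\n", addr); }
static void block_release_node(mem_block_t *blk, void *a){ printf("to block %p: %p\n", (void *)blk, a); }

void release_target(void *first_target_addr, uintptr_t second_target_addr)
{
    if (second_target_addr == 0) {
        /* second target address is the first address: came from memory pool 1 */
        pool1_release_block(first_target_addr);
    } else {
        /* otherwise it names the third memory block that owns the target memory */
        block_release_node((mem_block_t *)second_target_addr, first_target_addr);
    }
}
```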
  • after the target memory is released by using the third thread, the method further includes:
  • in the case that the first thread and the third thread are not the same thread and the first thread has been closed, defining the common memory nodes included in the third memory block as private memory nodes of the third memory block;
  • in the case that the number of memory nodes already allocated from the third memory block is a first number, using the third thread to release the third memory block.
  • the first number may be zero.
  • optionally, if the first thread and the third thread are the same thread, the target memory node is defined as a private node of the first thread, and the number of memory nodes that can be allocated from the third memory block is reduced by one.
  • in the case that the number of memory nodes that can be allocated from the third memory block is zero, the third memory block is released.
  • optionally, if the first thread and the third thread are not the same thread and the first thread has not ended, the target memory node is defined as a common node of the third memory block.
  • it can be seen that, in the embodiments of the present application, in the case that the first thread has ended, the target memory node is defined as a private node of the first thread without changing the memory block to which the target memory node belongs, which avoids the memory exhaustion that would result if the third thread could only release memory nodes and were not responsible for memory node applications.
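One way to read these rules is the per-block bookkeeping sketched below: a counter of allocated nodes is decremented on each release, common nodes become private to the block once the owning first thread has closed, and the block is handed back when the counter reaches the first number (assumed to be zero). This is an interpretation for illustration, not the patent's implementation.

```c
#include <stdbool.h>

typedef struct {
    int allocated_nodes;      /* memory nodes currently allocated from this block */
    int block_private_nodes;  /* nodes now private to the third memory block      */
    int common_nodes;         /* common nodes pending release                     */
} block_book_t;

/* Called after a third thread releases one node of a block whose owner (the
 * first thread) may already have been closed. Returns true when the whole
 * third memory block can itself be released by the third thread. */
bool on_release_by_third_thread(block_book_t *b, bool first_thread_closed)
{
    if (first_thread_closed) {
        /* the block's common memory nodes become its private memory nodes */
        b->block_private_nodes += b->common_nodes;
        b->common_nodes = 0;
    }
    b->allocated_nodes--;                /* one fewer allocated node                 */
    return b->allocated_nodes == 0;      /* first number (assumed zero) reached      */
}
```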
  • if the expected memory capacity is greater than a second memory capacity, the second memory pool is determined to be a third memory pool, and the second memory capacity is smaller than the first memory capacity;
  • if the expected memory capacity is less than or equal to the second memory capacity, the second memory pool is determined to be a fourth memory pool, and the capacity of the memory nodes included in the third memory pool is greater than the capacity of the memory nodes included in the fourth memory pool.
  • the third memory pool and the fourth memory pool are both allocated from the first memory pool.
  • the third memory pool is memory pool 3 in FIG. 2,
  • the fourth memory pool is memory pool 2 in FIG. 2.
  • the unoccupied memory blocks in the third memory pool can form a singly linked list.
  • the fourth memory pool includes at least one memory block, and each memory block includes at least one memory node.
  • the unoccupied memory blocks in the fourth memory pool may form a singly linked list.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device includes a processor, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the following steps:
  • receive a memory allocation request, where the memory allocation request is used to request memory allocation for a first thread, and the memory allocation request carries an expected memory capacity; allocate target memory for the first thread based on the expected memory capacity, and record attribute information of the target memory; receive a memory release request, where the memory release request is used to release the target memory; and release the target memory based on the attribute information.
  • the above program includes instructions for performing the following steps:
  • if the expected memory capacity is greater than a first memory capacity, the target memory is allocated from a first memory pool, and the capacity of the target memory is greater than the expected memory capacity;
  • if the expected memory capacity is less than or equal to the first memory capacity, the target memory is allocated based on occupancy information of a second memory pool and the expected memory capacity, and the capacity of the first memory pool is greater than the capacity of the second memory pool.
  • the second memory pool includes N first memory blocks, each of the first memory blocks includes at least one memory node, and N is a positive integer; in terms of allocating the target memory based on the occupancy information of the second memory pool and the expected memory capacity, the above program is specifically used to execute the instructions of the following steps:
  • in the case that the first thread does not occupy any of the N first memory blocks and the N first memory blocks are all occupied, or in the case that the N first memory blocks are all occupied, the first thread occupies M first memory blocks, and there are no free memory nodes in the M first memory blocks, allocating first memory from the first memory pool, determining the target memory in the first memory, and allocating the target memory, where M is a positive integer and M is less than or equal to N;
  • in the case that the first thread occupies the M first memory blocks and there are free memory nodes in the M first memory blocks, determining the target memory based on the M first memory blocks, and allocating the target memory;
  • in the case that the first thread occupies the M first memory blocks, there are no free memory nodes in the M first memory blocks, and the N first memory blocks include a first free memory block, determining the target memory based on the first free memory block, and allocating the target memory, where the first free memory block is an unoccupied first memory block among the N first memory blocks other than the M first memory blocks.
  • the header information of the first memory is stored in a system heap.
  • the above program is specifically used to execute the instructions of the following steps:
  • dividing the first memory into S second memory blocks, and putting the S second memory blocks into the second memory pool; determining a first target memory block among the S second memory blocks; and determining a first target memory node in the first target memory block as the target memory.
  • the free memory node includes a first free memory node and/or a second free memory node, where the first free memory node is a memory node that has never been occupied, or a memory node applied for by the first thread and released by the first thread, and the second free memory node is a memory node applied for by the first thread and not released by the first thread;
  • the above program is specifically used to execute the instructions of the following steps:
  • in the case that the first free memory node exists in the M first memory blocks, determining a second target memory node in the first free memory node as the target memory; in the case that the first free memory node does not exist in the M first memory blocks and the second free memory node exists in the M first memory blocks, determining a third target memory node in the second free memory node as the target memory.
  • the above program is specifically used to execute the instructions of the following steps:
  • determining a second target memory block in the first free memory block; and determining a fourth target memory node in the second target memory block as the target memory.
  • the attribute information includes a first target address of the target memory, and a second target address of a third memory block where the target memory is located;
  • the above program is specifically used to execute the instructions of the following steps:
  • if the second target address is a first address, using a second thread to release the target memory based on the first target address;
  • if the second target address is not the first address, using a third thread to release the target memory based on the first target address and the second target address.
  • the above program is further used to execute the instructions of the following steps:
  • in the case that the first thread and the third thread are not the same thread and the first thread has been closed, defining the common memory nodes included in the third memory block as private memory nodes of the third memory block;
  • in the case that the number of memory nodes already allocated from the third memory block is a first number, using the third thread to release the third memory block.
  • if the expected memory capacity is greater than a second memory capacity, the second memory pool is determined to be a third memory pool, and the second memory capacity is smaller than the first memory capacity;
  • if the expected memory capacity is less than or equal to the second memory capacity, the second memory pool is determined to be a fourth memory pool, and the capacity of the memory nodes included in the third memory pool is greater than the capacity of the memory nodes included in the fourth memory pool.
  • FIG. 5 is a memory allocation device provided by an embodiment of the present application, applied to the above-mentioned electronic equipment, and the device includes:
  • a first receiving unit 501 configured to receive a memory allocation request, where the memory allocation request is used to request memory allocation for the first thread, and the memory allocation request carries an expected memory capacity;
  • an allocation unit 502 configured to allocate target memory for the first thread based on the expected memory capacity
  • a recording unit 503 configured to record the attribute information of the target memory
  • a second receiving unit 504 configured to receive a memory release request, where the memory release request is used to release the target memory
  • the releasing unit 505 is configured to release the target memory based on the attribute information.
  • the above-mentioned allocation unit 502 is configured to execute the instructions of the following steps:
  • if the expected memory capacity is greater than a first memory capacity, the target memory is allocated from a first memory pool, and the capacity of the target memory is greater than the expected memory capacity;
  • if the expected memory capacity is less than or equal to the first memory capacity, the target memory is allocated based on occupancy information of a second memory pool and the expected memory capacity, and the capacity of the first memory pool is greater than the capacity of the second memory pool.
  • the second memory pool includes N first memory blocks, each of the first memory blocks includes at least one memory node, and N is a positive integer; in terms of allocating the target memory based on the occupancy information of the second memory pool and the expected memory capacity,
  • the above-mentioned allocation unit 502 is specifically used to execute the instructions of the following steps:
  • in the case that the first thread does not occupy any of the N first memory blocks and the N first memory blocks are all occupied, or in the case that the N first memory blocks are all occupied, the first thread occupies M first memory blocks, and there are no free memory nodes in the M first memory blocks, allocating first memory from the first memory pool, determining the target memory in the first memory, and allocating the target memory, where M is a positive integer and M is less than or equal to N;
  • in the case that the first thread occupies the M first memory blocks and there are free memory nodes in the M first memory blocks, determining the target memory based on the M first memory blocks, and allocating the target memory;
  • in the case that the first thread occupies the M first memory blocks, there are no free memory nodes in the M first memory blocks, and the N first memory blocks include a first free memory block, determining the target memory based on the first free memory block, and allocating the target memory, where the first free memory block is an unoccupied first memory block among the N first memory blocks other than the M first memory blocks.
  • the header information of the first memory is stored in a system heap.
  • the above allocation unit 502 is specifically configured to execute the instructions of the following steps:
  • dividing the first memory into S second memory blocks, and putting the S second memory blocks into the second memory pool; determining a first target memory block among the S second memory blocks; and determining a first target memory node in the first target memory block as the target memory.
  • the free memory node includes a first free memory node and/or a second free memory node, where the first free memory node is a memory node that has never been occupied, or a memory node applied for by the first thread and released by the first thread, and the second free memory node is a memory node applied for by the first thread and not released by the first thread;
  • the above allocation unit 502 is specifically configured to execute the instructions of the following steps:
  • in the case that the first free memory node exists in the M first memory blocks, determining a second target memory node in the first free memory node as the target memory; in the case that the first free memory node does not exist in the M first memory blocks and the second free memory node exists in the M first memory blocks, determining a third target memory node in the second free memory node as the target memory.
  • the allocation unit 502 is specifically configured to execute the instructions of the following steps:
  • determining a second target memory block in the first free memory block; and determining a fourth target memory node in the second target memory block as the target memory.
  • the attribute information includes a first target address of the target memory, and a second target address of a third memory block where the target memory is located;
  • the above-mentioned releasing unit 505 is specifically configured to execute the instructions of the following steps:
  • if the second target address is a first address, using a second thread to release the target memory based on the first target address;
  • if the second target address is not the first address, using a third thread to release the target memory based on the first target address and the second target address.
  • the memory allocation apparatus further includes a defining unit 506 and a determining unit 507 .
  • the above-mentioned definition unit 506 is specifically configured to execute the instructions of the following steps:
  • in the case that the first thread and the third thread are not the same thread and the first thread has been closed, defining the common memory nodes included in the third memory block as private memory nodes of the third memory block;
  • the above-mentioned releasing unit 505 is also specifically used to execute the instructions of the following steps:
  • in the case that the number of memory nodes already allocated from the third memory block is a first number, using the third thread to release the third memory block.
  • the above determining unit 507 is specifically configured to execute the instructions of the following steps:
  • if the expected memory capacity is greater than a second memory capacity, determining that the second memory pool is a third memory pool, where the second memory capacity is smaller than the first memory capacity;
  • if the expected memory capacity is less than or equal to the second memory capacity, determining that the second memory pool is a fourth memory pool, where the capacity of the memory nodes included in the third memory pool is greater than the capacity of the memory nodes included in the fourth memory pool.
  • The first receiving unit 501, the allocation unit 502, the recording unit 503, the second receiving unit 504, the releasing unit 505, the defining unit 506, and the determining unit 507 of the electronic device may be implemented by a processor.
  • Embodiments of the present application further provide a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps described for the electronic device in the above method embodiments.
  • Embodiments of the present application also provide a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described for the electronic device in the above method embodiments.
  • the computer program product may be a software installation package.
  • the steps of the method or algorithm described in the embodiments of the present application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules, and the software modules can be stored in random access memory (Random Access Memory, RAM), flash memory, read-only memory (Read Only Memory, ROM), erasable programmable read-only memory (Erasable Programmable ROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage medium may reside in an ASIC. Additionally, the ASIC may reside in access network equipment, target network equipment or core network equipment.
  • the processor and the storage medium may also exist in the access network device, the target network device or the core network device as discrete components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)

Abstract

A memory allocation method, applied to an electronic device, the method comprising: receiving a memory allocation request (101), the memory allocation request being used to request memory allocation for a first thread and carrying an expected memory capacity; allocating target memory for the first thread based on the expected memory capacity, and recording attribute information of the target memory (102); receiving a memory release request (103), the memory release request being used to release the target memory; and releasing the target memory based on the attribute information (104). The method can improve memory utilization.

Description

内存分配方法及相关设备 技术领域
本申请涉及计算机存储技术领域,尤其涉及一种内存分配方法及相关设备。
本申请要求于2020年9月22日提交中国专利局,申请号为202011004107.5、发明名称为“内存分配方法及相关设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
背景技术
嵌入式系统往往使用多个线程来完成不同的任务,多个线程可以使用的系统内存资源相同,而系统内存资源有限。当线程因系统内存资源不够而申请不到内存资源时,会导致线程不能进行任务处理,对系统的业务造成巨大的影响,因此如何提升内存资源的利用率是亟待解决的问题。
技术解决方案
本申请实施例提供一种内存分配方法及相关设备,用于提高内存资源的利用率。
第一方面,本申请实施例提供一种内存分配方法,应用于电子设备,方法包括:
接收内存分配请求,所述内存分配请求用于请求为第一线程分配内存,所述内存分配请求携带期望内存容量;
基于所述期望内存容量为所述第一线程分配目标内存,以及记录所述目标内存的属性信息;
接收内存释放请求,所述内存释放请求用于释放所述目标内存;
基于所述属性信息释放所述目标内存。
第二方面,本申请实施例提供一种内存分配装置,应用于电子设备,装置包括:
第一接收单元,用于接收内存分配请求,所述内存分配请求用于请求为第一线程分配内存,所述内存分配请求携带期望内存容量;
分配单元,用于基于所述期望内存容量为所述第一线程分配目标内存;
记录单元,用于记录所述目标内存的属性信息;
第二接收单元,用于接收内存释放请求,所述内存释放请求用于释放所述目标内存;
释放单元,用于基于所述属性信息释放所述目标内存。
第三方面,本申请实施例提供一种电子设备,包括处理器、存储器以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述一个或多个程序包括用于执行本申请实施例第一方面所述的方法中的步骤的指令。
第四方面,本申请实施例提供了一种计算机可读存储介质,其中,上述计算机可读存储介质存储用于电子数据交换的计算机程序,其中,上述计算机程序使得计算机执行如本申请实施例第一方面所述的方法中所描述的部分或全部步骤。
第五方面,本申请实施例提供了一种计算机程序产品,其中,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如本申请实施例第一方面所述的方法中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
可以看出,在本申请实施例中,首先接收内存分配请求,内存分配请求用于请求为第一线程分配内存,内存分配请求携带期望内存容量;然后基于期望内存容量为第一线程分配目标内存,以及记录目标内存的属性信息;再然后接收内存释放请求,内存释放请求用于释放目标内存;最后基于属性信息释放目标内存。由于电子设备根据期望内存容量为第一线程分配目标内存,而不是随意的为第一线程分配内存,因此有利于提升内存资源的利用率。
附图说明
图1是本申请实施例提供的一种内存分配方法的流程示意图;
图2是本申请实施例提供的一种内存分配架构图;
图3是本申请实施例提供的一种属性信息存储示意图;
图4是本申请实施例提供的一种电子设备的结构示意图；
图5是本申请实施例提供的一种内存分配装置的结构示意图。
本发明的实施方式
如图1所示,本申请实施例提供的一种内存分配方法,应用于上述电子设备,具体包括以下步骤:
步骤101:接收内存分配请求,所述内存分配请求用于请求为第一线程分配内存,所述内存分配请求携带期望内存容量。
其中,在接收到内存分配请求之前,电子设备可能为第一线程分配过内存,也可能没有为第一线程分配过内存。
步骤102:基于所述期望内存容量为所述第一线程分配目标内存,以及记录所述目标内存的属性信息。
其中,如图2所示的内存分配架构,内存池2的内存容量小于内存池3的内存容量,内存池2、内存池3以及内存块4都是从内存池1中分配的。
其中,图2中的n可以为16,m为正整数。在其它实施例中,n值可根据实际情况进行相应的调整,即,可以大于16,也可以小于16。
其中,内存池2可以包括16个内存块,每个内存块的内存容量均为512KB,每个内存块包括至少一个内存节点,内存节点的内存容量小于等于512KB。在其它实施例中,内存池2所包括的内存块的数量,以及每个内存块的内存容量大小,均可根据实际情况进行相应的调整。
其中，内存池3可以包括16个内存块，每个内存块的内存容量均为3M，每个内存块均包括至少一个内存节点，内存节点的内存容量大于512KB且小于等于3M。在其它实施例中，内存池3所包括的内存块的数量，以及每个内存块的内存容量大小，均可根据实际情况进行相应的调整。
其中,内存块4的内存容量大于3M。即,内存块4的内存容量大于内存池3中内存块的内存容量,内存池3中内存块的内存容量大于内存池2中内存块的内存容量。
在其它实施例中,内存块4的内存容量的大小可根据实际情况进行相应的调整,并且,内存块4的内存容量与内存池3中内存块的内存容量以及内存池2中内存块的内存容量的大小关系,也可以根据实际情况进行相应的调整。
其中,若期望内存容量小于或等于512KB,则从内存池2中的内存块中分配内存节点,若期望内存容量大于512KB,且小于或等于3M,则从内存池3中的内存块中分配内存节点,若期望内存容量大于3M,则从内存池1中取内存块。
其中,目标内存可以是图1中的内存块4,也可以是内存池2中的内存节点,也可以是内存池3中的内存节点。
其中,属性信息包括目标内存的地址,目标内存所在的内存块的地址以及第一线程申请到的实际内存容量。
其中,目标内存的容量大于或等于第一线程申请到的实际内存容量,在目标内存中第一线程允许使用申请到的实际内存容量。
举例来说,若目标内存的容量为3.5M,而第一线程申请到的实际内存容量为2M,因此第一线程可使用目标内存中的2M内存。
可选地,所述内存分配请求还携带地址对齐条件,所述地址对齐条件用于确定属性记录地址,所述属性记录地址用于存储所述属性信息,所述属性记录地址是基于所述地址对齐条件和所述目标内存的起始地址确定的。
其中,所述属性记录地址是基于所述地址对齐条件和所述目标内存的起始地址确定的,包括:
基于所述目标内存的第一起始地址和所述地址对齐条件确定第二起始地址;
基于所述第二起始地址确定所述属性记录地址。
其中,第二起始地址为返回给用户的地址。
其中,地址对齐条件为第二起始地址为预设数值的整数倍,该预设数值可以是256byte,也可以是其他数值。
其中,如图3所示为属性信息存储示意图,存储属性信息的属性记录地址包括地址1、地址2、地址3以及地址4,地址1为地址4的第前3个地址,地址2为地址4的第前2个地址,地址3为地址4的第前1个地址。
其中,目标内存所在的内存块的地址存储地址1中,申请到的实际内存容量存储在地址2中,目标内存地址存储地址3中,地址4为第二起始地址。
其中,地址的字符类型为uint8_t。可以理解,在其它实施例中,地址的字符类型可以为其它类型。
可选地,所述申请到的实际内存容量是基于所述地址对齐条件、所述期望内存容量以及额外信息所需的内存容量确定的。
其中,所述额外信息包括控制信息和/或调试信息,例如本段内存大小、下段内存起始地址、内存边界标记,等等。
其中,在32位操作系统中额外信息所需的内存容量为12byte,在64位操作系统中额外信息所需的内存容量为24byte。
举例来说,若期望内存容量为1M,地址对齐条件为第二起始地址为256byte的整数倍,额外信息所需的内存容量为24 byte,则申请到的实际内存容量为1M+256byte+24byte。
步骤103:接收内存释放请求,所述内存释放请求用于释放所述目标内存。
步骤104:基于所述属性信息释放所述目标内存。
可以看出,在本申请实施例中,首先接收内存分配请求,内存分配请求用于请求为第一线程分配内存,内存分配请求携带期望内存容量;然后基于期望内存容量为第一线程分配目标内存,以及记录目标内存的属性信息;再然后接收内存释放请求,内存释放请求用于释放目标内存;最后基于属性信息释放目标内存。由于电子设备根据期望内存容量为第一线程分配目标内存,而不是随意的为第一线程分配内存,因此有利于提升内存资源的利用率。
在本申请的一实现方式中,所述基于所述期望内存容量为所述第一线程分配目标内存,包括:
若所述期望内存容量大于第一内存容量,则从第一内存池中分配所述目标内存,所述目标内存的容量大于所述期望内存容量;
若所述期望内存容量小于或等于所述第一内存容量,则基于第二内存池的占用信息和所述期望内存容量分配所述目标内存,所述第一内存池的容量大于所述第二内存池的容量。
其中,第一内存容量可以是3M,也可以是其他容量。
其中,在期望内存容量大于第一内存容量的情况下,目标内存为图1中的内存块4。
其中,第二内存池是从第一内存池中分配的,第二内存池可能为图1中的内存池2,也可能为图1中的内存池3。
可以看出,根据期望内存容量分配目标内存,有利于提升内存利用率。
在本申请的一实现方式中,所述第二内存池包括N个第一内存块,每个所述第一内存块包括至少一个内存节点,所述N为正整数;
所述基于第二内存池的占用信息和所述期望内存容量分配所述目标内存,包括:
在所述第一线程未占用所述N个第一内存块且所述N个第一内存块均被占用,或者所述N个第一内存块均被占用、所述第一线程占用M个第一内存块以及所述M个第一内存块不存在空闲内存节点的情况下,则从所述第一内存池中分配第一内存,以及在所述第一内存中确定所述目标内存,对所述目标内存进行分配,所述M为正整数,所述M小于或等于所述N;
在所述第一线程占用所述M个第一内存块且所述M个第一内存块存在空闲内存节点的情况下,基于所述M个第一内存块确定所述目标内存,以及分配所述目标内存;
在所述第一线程占用所述M个第一内存块、M个所述第一内存块不存在空闲内存节点以及所述N个第一内存块包括第一空闲内存块的情况下,基于所述第一空闲内存块确定所述目标内存,以及分配所述目标内存,所述第一空闲内存块为所述N个第一内存块中除所述M个第一内存块之外的未被占用的第一内存块。
其中,N个第一内存块的容量可以相同。
其中,每个内存节点的容量可以相同。
其中,同一个内存块中的内存节点仅能被一个线程占用,不同的线程需占用不同内存块中的内存节点。
其中,若第一线程占用M个第一内存块,则M个第一内存块构成一个双向环形链表,链表的表头可以为第一线程最先占用的第一内存块,链表的表尾可以为第一线程最后占用的第一内存块。
其中,第一内存可以划分为16个内存块,该16个内存块包括目标内存,该16个内存块中每个内存块的内存容量可以为3M,也可以为512 KB。
其中,第一空闲内存块以单向链表的形式存储在空闲池中。
其中,基于第一空闲块确定目标内存,是将作为单向链表表头的第一空闲块确定为目标内存。
可以看出,在本申请实施例中,第一线程不占用其他线程占用的内存块中的内存节点,有利于避免电子设备对内存节点加锁,降低了电子设备的计算复杂度。
在本申请的一实现方式中,所述第一内存的头部信息存储在系统堆中。
第一内存的头部信息包括控制信息、调试信息,例如本段内存大小、下段内存起始地址、内存边界标记,等等。
在本申请的一实现方式中,所述在所述第一内存中确定所述目标内存,包括:
将所述第一内存划分为S个第二内存块,以及将所述S个第二内存块放入所述第二内存池;
在所述S个第二内存块中确定第一目标内存块;
将所述第一目标内存块中的第一目标内存节点确定为所述目标内存。
其中,S个第二内存块构成一个单向链表,该单向链表的表头为第一目标内存块。
其中,S个第二内存块的内存容量相同。
其中,S可以是16,也可以是其他值。
其中,第一目标内存块中的内存节点构成单向链表,该单向链表的表头为第一目标内存节点。
可以看出,在本申请实施例中,将第一内存的头部信息存储在系统堆中,以使第一内存的头部信息不占用第一内存的存储空间,从而有利于提升第二内存块的数量。
在本申请的一实现方式中,所述空闲内存节点包括第一空闲内存节点和/或第二空闲内存节点,所述第一空闲节点为未被占用过的内存节点,或由所述第一线程申请且由所述第一线程释放的内存节点,所述第二空闲内存节点为由所述第一线程申请且不由所述第一线程释放的内存节点;
所述基于所述M个第一内存块确定所述目标内存,包括:
在所述M个第一内存块存在所述第一空闲内存节点的情况下,将所述第一空闲内存节点中的第二目标内存节点确定为所述目标内存;
在所述M个第一内存块不存在所述第一空闲内存节点且所述M个第一内存块存在所述第二空闲内存节点的情况下,将所述第二空闲内存节点中的第三目标内存节点确定为所述目标内存。
其中,若第一空闲节点为由所述第一线程申请且由所述第一线程释放的内存节点,则第一空闲节点为第一线程的私有内存节点,且第一空闲节点构成一个单向链表,且该单向链表的表头为第二目标内存节点。
其中,确定单向链表的表头为第二目标内存节点后,该单向链表不包括第二目标内存节点,且该单向链表的表头为第二目标内存节点的下游邻节点。
其中,第二空闲节点为第一线程的公共内存节点,且第二空闲节点构成一个单向链表,且该单向链表的表头为第三目标内存节点。
其中,确定单向链表的表头为第三目标内存节点后,该单向链表不包括第三目标内存节点,且该单向链表的表头为第三目标内存节点的下游邻节点。
其中,第二空闲节点为待释放的内存节点。
可以看出,在本申请实施例中,由于释放内存节点所需的时间远大于申请内存节点的时间,因此使用第二空闲节点,避免了出现线程长时间运行后内存耗尽的问题。
在本申请的一实现方式中,所述基于所述第一空闲内存块确定所述目标内存,包括:
在所述第一空闲内存块中确定第二目标内存块;
将所述第二目标内存块中的第四目标内存节点确定为所述目标内存。
其中,第一空闲内存块以单向链表的形式存储在空闲池中,该单向链表的表头为第二目标内存块。
其中,确定单向链表的表头为第二目标内存块后,该单向链表不包括第二目标内存块,且该单向链表的表头为第二目标内存块的下游邻节点。
其中，第二目标内存块中的内存节点构成一个单向链表，该单向链表的表头为第四目标内存节点。
其中,确定单向链表的表头为第四目标内存节点后,该单向链表不包括第四目标内存节点,且该单向链表的表头为第四目标内存节点的下游邻节点。
可以看出,在本申请实施例中,在第一空闲内存块中确定第二目标内存块,有利于提升内存利用率。
在本申请的一实现方式中,所述属性信息包括所述目标内存的第一目标地址,所述目标内存所在的第三内存块的第二目标地址;
所述基于所述属性信息释放所述目标内存,包括:
若所述第二目标地址为第一地址,则基于所述第一目标地址,采用第二线程对所述目标内存进行释放;
若所述第二目标地址不为所述第一地址,则基于所述第一目标地址和所述第二目标地址,采用第三线程对所述目标内存进行释放。
其中,第一地址对应的值可以是零,也可以是其他值。
其中,第二线程和第三线程可能是同一个线程,也可能不是同一个线程。
其中,若第二目标地址为第一地址,表示目标内存是第一内存池分配的,以及将目标内存释放到第一内存池中。
其中,若第二目标地址不为第一地址,则可基于第二目标地址确定目标内存所在的内存块,以及基于第一目标地址将目标内存释放到第三内存块中。
可以看出,在本申请实施例中,通过第二目标地址,确定目标内存所在的内存,有利于提升释放目标内存的效率。
在本申请的一实现方式中,所述采用第三线程对所述目标内存进行释放之后,所述方法还包括:
在所述第一线程和所述第三线程不是同一个线程,且所述第一线程已关闭的情况下,定义所述第三内存块包括的公共内存节点为所述第三内存块的私有内存节点;
在所述第三内存块已分配的内存节点的数量为第一数量的情况下,采用所述第三线程对所述第三内存块进行释放。
其中,第一数量可以是零。
可选地,若第一线程和第三线程是同一个线程,则将目标内存节点定义为所述第一线程的私有节点,以及将所述第三内存块可分配的内存节点的数量减一;
在所述第三内存块可分配的内存节点的数量为零的情况下,将所述第三内存块进行释放。
可选地,若第一线程和第三线程不是同一个线程,且所述第一线程未结束,则将所述目标内存节点定义为所述第三内存块的公共节点。
可以看出,在本申请实施例中,在第一线程已结束的情况下,将目标内存节点定义为第一线程的私有节点,不改变目标内存节点所属的内存块,避免了第三线程只能释放内存节点,不负责内存节点申请而导致的内存耗尽的问题。
在本申请的一实现方式中,若所述期望内存容量大于第二内存容量,则确定所述第二内存池为第三内存池,所述第二内存容量小于所述第一内存容量;
若所述期望内存容量小于或等于第二内存容量，则确定所述第二内存池为第四内存池，所述第三内存池包括的内存节点的容量大于所述第四内存池包括的内存节点的容量。
其中,第三内存池和第四内存池均是从第一内存池中分配的。
其中,第三内存池为图1中的内存池2,第四内存池为图1中的内存池3。
其中,第三内存池中未被占用的内存块可以组成一个单向链表。
其中,第四内存池包括至少一个内存块,每个内存块包括至少一个内存节点。
其中,第四内存池中未被占用的内存块可以组成一个单向链表。
可以看出,在本申请实施例中,不同的目标内存容量申请到的内存节点的容量不同,有利提升内存利用率。
 
与上述图1所示的实施例一致的,请参阅图4,图4是本申请实施例提供的一种电子设备的结构示意图,如图所示,该电子设备包括处理器、存储器以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述程序包括用于执行以下步骤的指令:
接收内存分配请求,所述内存分配请求用于请求为第一线程分配内存,所述内存分配请求携带期望内存容量;
基于所述期望内存容量为所述第一线程分配目标内存,以及记录所述目标内存的属性信息;
接收内存释放请求,所述内存释放请求用于释放所述目标内存;
基于所述属性信息释放所述目标内存。
在本申请的一实现方式中,在基于所述期望内存容量为所述第一线程分配目标内存方面,上述程序包括用于执行以下步骤的指令:
若所述期望内存容量大于第一内存容量,则从第一内存池中分配所述目标内存,所述目标内存的容量大于所述期望内存容量;
若所述期望内存容量小于或等于所述第一内存容量,则基于第二内存池的占用信息和所述期望内存容量分配所述目标内存,所述第一内存池的容量大于所述第二内存池的容量。
在本申请的一实现方式中,所述第二内存池包括N个第一内存块,每个所述第一内存块包括至少一个内存节点,所述N为正整数;在基于第二内存池的占用信息和所述期望内存容量分配所述目标内存方面,上述程序具体用于执行以下步骤的指令:
在所述第一线程未占用所述N个第一内存块且所述N个第一内存块均被占用,或者所述N个第一内存块均被占用、所述第一线程占用M个第一内存块以及所述M个第一内存块不存在空闲内存节点的情况下,则从所述第一内存池中分配第一内存,以及在所述第一内存中确定所述目标内存,对所述目标内存进行分配,所述M为正整数,所述M小于或等于所述N;
在所述第一线程占用所述M个第一内存块且所述M个第一内存块存在空闲内存节点的情况下,基于所述M个第一内存块确定所述目标内存,以及分配所述目标内存;
在所述第一线程占用所述M个第一内存块、M个所述第一内存块不存在空闲内存节点以及所述N个第一内存块包括第一空闲内存块的情况下,基于所述第一空闲内存块确定所述目标内存,以及分配所述目标内存,所述第一空闲内存块为所述N个第一内存块中除所述M个第一内存块之外的未被占用的第一内存块。
在本申请的一实现方式中,所述第一内存的头部信息存储在系统堆中。
在本申请的一实现方式中,在在所述第一内存中确定所述目标内存方面,上述程序具体用于执行以下步骤的指令:
将所述第一内存划分为S个第二内存块,以及将所述S个第二内存块放入所述第二内存池;
在所述S个第二内存块中确定第一目标内存块;
将所述第一目标内存块中的第一目标内存节点确定为所述目标内存。
在本申请的一实现方式中,所述空闲内存节点包括第一空闲内存节点和/或第二空闲内存节点,所述第一空闲节点为未被占用过的内存节点,或由所述第一线程申请且由所述第一线程释放的内存节点,所述第二空闲内存节点为由所述第一线程申请且不由所述第一线程释放的内存节点;
在基于所述M个第一内存块确定所述目标内存方面,上述程序具体用于执行以下步骤的指令:
在所述M个第一内存块存在所述第一空闲内存节点的情况下,将所述第一空闲内存节点中的第二目标内存节点确定为所述目标内存;
在所述M个第一内存块不存在所述第一空闲内存节点且所述M个第一内存块存在所述第二空闲内存节点的情况下,将所述第二空闲内存节点中的第三目标内存节点确定为所述目标内存。
在本申请的一实现方式中,在基于所述第一空闲内存块确定所述目标内存方面,上述程序具体用于执行以下步骤的指令:
在所述第一空闲内存块中确定第二目标内存块;
将所述第二目标内存块中的第四目标内存节点确定为所述目标内存。
在本申请的一实现方式中,所述属性信息包括所述目标内存的第一目标地址,所述目标内存所在的第三内存块的第二目标地址;
在基于所述属性信息释放所述目标内存方面,上述程序具体用于执行以下步骤的指令:
若所述第二目标地址为第一地址,则基于所述第一目标地址,采用第二线程对所述目标内存进行释放;
若所述第二目标地址不为所述第一地址,则基于所述第一目标地址和所述第二目标地址,采用第三线程对所述目标内存进行释放。
在本申请的一实现方式中,在采用第三线程对所述目标内存进行释放之后,上述程序具体还用于执行以下步骤的指令:
在所述第一线程和所述第三线程不是同一个线程,且所述第一线程已关闭的情况下,定义所述第三内存块包括的公共内存节点为所述第三内存块的私有内存节点;
在所述第三内存块已分配的内存节点的数量为第一数量的情况下,采用所述第三线程对所述第三内存块进行释放。
在本申请的一实现方式中,若所述期望内存容量大于第二内存容量,则确定所述第二内存池为第三内存池,所述第二内存容量小于所述第一内存容量;
若所述期望内存容量小于或等于第二内存容量，则确定所述第二内存池为第四内存池，所述第三内存池包括的内存节点的容量大于所述第四内存池包括的内存节点的容量。
需要说明的是,本实施例的具体实现过程可参见上述方法实施例所述的具体实现过程,在此不再叙述。
请参阅图5,图5是本申请实施例提供的一种内存分配装置,应用于上述电子设备,该装置包括:
第一接收单元501,用于接收内存分配请求,所述内存分配请求用于请求为第一线程分配内存,所述内存分配请求携带期望内存容量;
分配单元502,用于基于所述期望内存容量为所述第一线程分配目标内存;
记录单元503,用于记录所述目标内存的属性信息;
第二接收单元504,用于接收内存释放请求,所述内存释放请求用于释放所述目标内存;
释放单元505,用于基于所述属性信息释放所述目标内存。
在本申请的一实现方式中,在基于所述期望内存容量为所述第一线程分配目标内存方面,上述分配单元502用于执行以下步骤的指令:
若所述期望内存容量大于第一内存容量,则从第一内存池中分配所述目标内存,所述目标内存的容量大于所述期望内存容量;
若所述期望内存容量小于或等于所述第一内存容量,则基于第二内存池的占用信息和所述期望内存容量分配所述目标内存,所述第一内存池的容量大于所述第二内存池的容量。
在本申请的一实现方式中,所述第二内存池包括N个第一内存块,每个所述第一内存块包括至少一个内存节点,所述N为正整数;在基于第二内存池的占用信息和所述期望内存容量分配所述目标内存方面,上述分配单元502具体用于执行以下步骤的指令:
在所述第一线程未占用所述N个第一内存块且所述N个第一内存块均被占用,或者所述N个第一内存块均被占用、所述第一线程占用M个第一内存块以及所述M个第一内存块不存在空闲内存节点的情况下,则从所述第一内存池中分配第一内存,以及在所述第一内存中确定所述目标内存,对所述目标内存进行分配,所述M为正整数,所述M小于或等于所述N;
在所述第一线程占用所述M个第一内存块且所述M个第一内存块存在空闲内存节点的情况下,基于所述M个第一内存块确定所述目标内存,以及分配所述目标内存;
在所述第一线程占用所述M个第一内存块、M个所述第一内存块不存在空闲内存节点以及所述N个第一内存块包括第一空闲内存块的情况下,基于所述第一空闲内存块确定所述目标内存,以及分配所述目标内存,所述第一空闲内存块为所述N个第一内存块中除所述M个第一内存块之外的未被占用的第一内存块。
在本申请的一实现方式中,所述第一内存的头部信息存储在系统堆中。
在本申请的一实现方式中,在所述第一内存中确定所述目标内存方面,上述分配单元502具体用于执行以下步骤的指令:
将所述第一内存划分为S个第二内存块,以及将所述S个第二内存块放入所述第二内存池;
在所述S个第二内存块中确定第一目标内存块;
将所述第一目标内存块中的第一目标内存节点确定为所述目标内存。
在本申请的一实现方式中,所述空闲内存节点包括第一空闲内存节点和/或第二空闲内存节点,所述第一空闲节点为未被占用过的内存节点,或由所述第一线程申请且由所述第一线程释放的内存节点,所述第二空闲内存节点为由所述第一线程申请且不由所述第一线程释放的内存节点;
在基于所述M个第一内存块确定所述目标内存方面,上述分配单元502具体用于执行以下步骤的指令:
在所述M个第一内存块存在所述第一空闲内存节点的情况下,将所述第一空闲内存节点中的第二目标内存节点确定为所述目标内存;
在所述M个第一内存块不存在所述第一空闲内存节点且所述M个第一内存块存在所述第二空闲内存节点的情况下,将所述第二空闲内存节点中的第三目标内存节点确定为所述目标内存。
在本申请的一实现方式中,在基于所述第一空闲内存块确定所述目标内存方面,上述分配单元502具体用于执行以下步骤的指令:
在所述第一空闲内存块中确定第二目标内存块;
将所述第二目标内存块中的第四目标内存节点确定为所述目标内存。
在本申请的一实现方式中,所述属性信息包括所述目标内存的第一目标地址,所述目标内存所在的第三内存块的第二目标地址;
在基于所述属性信息释放所述目标内存方面，上述释放单元505具体用于执行以下步骤的指令：
若所述第二目标地址为第一地址,则基于所述第一目标地址,采用第二线程对所述目标内存进行释放;
若所述第二目标地址不为所述第一地址,则基于所述第一目标地址和所述第二目标地址,采用第三线程对所述目标内存进行释放。
在本申请的一实现方式中,所述内存分配装置还包括定义单元506和确定单元507。
在本申请的一实现方式中,在采用第三线程对所述目标内存进行释放之后,上述定义单元506具体用于执行以下步骤的指令:
在所述第一线程和所述第三线程不是同一个线程,且所述第一线程已关闭的情况下,定义所述第三内存块包括的公共内存节点为所述第三内存块的私有内存节点;
上述释放单元505具体还用于执行以下步骤的指令:
在所述第三内存块已分配的内存节点的数量为第一数量的情况下,采用所述第三线程对所述第三内存块进行释放。
在本申请的一实现方式中,上述确定单元507具体用于执行以下步骤的指令:
若所述期望内存容量大于第二内存容量，则确定所述第二内存池为第三内存池，所述第二内存容量小于所述第一内存容量；
若所述期望内存容量小于或等于第二内存容量，则确定所述第二内存池为第四内存池，所述第三内存池包括的内存节点的容量大于所述第四内存池包括的内存节点的容量。
需要说明的是,电子设备的第一接收单元501、分配单元502,记录单元503、第二接收单元504、释放单元505、定义单元506及确定单元507可通过处理器实现。
本申请实施例还提供了一种计算机可读存储介质,其中,所述计算机可读存储介质存储用于电子数据交换的计算机程序,其中,所述计算机程序使得计算机执行如上述方法实施例中电子设备所描述的部分或全部步骤。
本申请实施例还提供了一种计算机程序产品,其中,所述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,所述计算机程序可操作来使计算机执行如上述方法中电子设备所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
本申请实施例所描述的方法或者算法的步骤可以以硬件的方式来实现,也可以是由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(Random Access Memory,RAM)、闪存、只读存储器(Read Only Memory,ROM)、可擦除可编程只读存储器(Erasable Programmable ROM,EPROM)、电可擦可编程只读存储器(Electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、只读光盘(CD-ROM)或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于接入网设备、目标网络设备或核心网设备中。当然,处理器和存储介质也可以作为分立组件存在于接入网设备、目标网络设备或核心网设备中。 

Claims (13)

  1. 一种内存分配方法,其特征在于,应用于电子设备,所述方法包括:
    接收内存分配请求,所述内存分配请求用于请求为第一线程分配内存,所述内存分配请求携带期望内存容量;
    基于所述期望内存容量为所述第一线程分配目标内存,以及记录所述目标内存的属性信息;
    接收内存释放请求,所述内存释放请求用于释放所述目标内存;
    基于所述属性信息释放所述目标内存。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述期望内存容量为所述第一线程分配目标内存,包括:
    若所述期望内存容量大于第一内存容量,则从第一内存池中分配所述目标内存,所述目标内存的容量大于所述期望内存容量;
    若所述期望内存容量小于或等于所述第一内存容量,则基于第二内存池的占用信息和所述期望内存容量分配所述目标内存,所述第一内存池的容量大于所述第二内存池的容量。
  3. 根据权利要求2所述的方法,其特征在于,所述第二内存池包括N个第一内存块,每个所述第一内存块包括至少一个内存节点,所述N为正整数;
    所述基于第二内存池的占用信息和所述期望内存容量分配所述目标内存,包括:
    在所述第一线程未占用所述N个第一内存块且所述N个第一内存块均被占用,或者所述N个第一内存块均被占用、所述第一线程占用M个第一内存块以及所述M个第一内存块不存在空闲内存节点的情况下,则从所述第一内存池中分配第一内存,以及在所述第一内存中确定所述目标内存,对所述目标内存进行分配,所述M为正整数,所述M小于或等于所述N;
    在所述第一线程占用所述M个第一内存块且所述M个第一内存块存在空闲内存节点的情况下,基于所述M个第一内存块确定所述目标内存,以及分配所述目标内存;
    在所述第一线程占用所述M个第一内存块、M个所述第一内存块不存在空闲内存节点以及所述N个第一内存块包括第一空闲内存块的情况下,基于所述第一空闲内存块确定所述目标内存,以及分配所述目标内存,所述第一空闲内存块为所述N个第一内存块中除所述M个第一内存块之外的未被占用的第一内存块。
  4. 根据权利要求3所述的方法,其特征在于,所述第一内存的头部信息存储在系统堆中。
  5. 根据权利要求4所述的方法,其特征在于,所述在所述第一内存中确定所述目标内存,包括:
    将所述第一内存划分为S个第二内存块,以及将所述S个第二内存块放入所述第二内存池;
    在所述S个第二内存块中确定第一目标内存块;
    将所述第一目标内存块中的第一目标内存节点确定为所述目标内存。
  6. 根据权利要求3所述的方法,其特征在于,所述空闲内存节点包括第一空闲内存节点和/或第二空闲内存节点,所述第一空闲节点为未被占用过的内存节点,或由所述第一线程申请且由所述第一线程释放的内存节点,所述第二空闲内存节点为由所述第一线程申请且不由所述第一线程释放的内存节点;
    所述基于所述M个第一内存块确定所述目标内存,包括:
    在所述M个第一内存块存在所述第一空闲内存节点的情况下,将所述第一空闲内存节点中的第二目标内存节点确定为所述目标内存;
    在所述M个第一内存块不存在所述第一空闲内存节点且所述M个第一内存块存在所述第二空闲内存节点的情况下,将所述第二空闲内存节点中的第三目标内存节点确定为所述目标内存。
  7. 根据权利要求3所述的方法,其特征在于,所述基于所述第一空闲内存块确定所述目标内存,包括:
    在所述第一空闲内存块中确定第二目标内存块;
    将所述第二目标内存块中的第四目标内存节点确定为所述目标内存。
  8. 根据权利要求2-7任一项所述的方法,其特征在于,所述属性信息包括所述目标内存的第一目标地址,所述目标内存所在的第三内存块的第二目标地址;
    所述基于所述属性信息释放所述目标内存,包括:
    若所述第二目标地址为第一地址,则基于所述第一目标地址,采用第二线程对所述目标内存进行释放;
    若所述第二目标地址不为所述第一地址,则基于所述第一目标地址和所述第二目标地址,采用第三线程对所述目标内存进行释放。
  9. 根据权利要求8所述的方法,其特征在于,所述采用第三线程对所述目标内存进行释放之后,所述方法还包括:
    在所述第一线程和所述第三线程不是同一个线程,且所述第一线程已关闭的情况下,定义所述第三内存块包括的公共内存节点为所述第三内存块的私有内存节点;
    在所述第三内存块已分配的内存节点的数量为第一数量的情况下,采用所述第三线程对所述第三内存块进行释放。
  10. 根据权利要求9所述的方法,其特征在于,若所述期望内存容量大于第二内存容量,则确定所述第二内存池为第三内存池,所述第二内存容量小于所述第一内存容量;
    若所述期望内存容量小于或等于第二内存容量,则确定所述第二内存池为第四内存池,所述第三内存池包括的内存节点的容量大于所述第四内存池包括的内存节点的容量。
  11. 一种内存分配装置,其特征在于,应用于电子设备,所述装置包括:
    第一接收单元,用于接收内存分配请求,所述内存分配请求用于请求为第一线程分配内存,所述内存分配请求携带期望内存容量;
    分配单元,用于基于所述期望内存容量为所述第一线程分配目标内存;
    记录单元,用于记录所述目标内存的属性信息;
    第二接收单元,用于接收内存释放请求,所述内存释放请求用于释放所述目标内存;
    释放单元,用于基于所述属性信息释放所述目标内存。
  12. 一种电子设备,其特征在于,所述电子设备包括处理器、存储器以及一个或多个程序,所述程序被存储在所述存储器中,并且被配置由所述处理器执行,所述程序包括用于执行如权利要求1-10任一项所述的方法中的步骤的指令。
  13. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,其中,所述计算机程序被处理执行如权利要求1-10任一项所述的方法。
PCT/CN2021/114967 2020-09-22 2021-08-27 内存分配方法及相关设备 WO2022062833A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011004107.5 2020-09-22
CN202011004107.5A CN112214313A (zh) 2020-09-22 2020-09-22 内存分配方法及相关设备

Publications (1)

Publication Number Publication Date
WO2022062833A1 true WO2022062833A1 (zh) 2022-03-31

Family

ID=74050091

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114967 WO2022062833A1 (zh) 2020-09-22 2021-08-27 内存分配方法及相关设备

Country Status (2)

Country Link
CN (1) CN112214313A (zh)
WO (1) WO2022062833A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116627359A (zh) * 2023-07-24 2023-08-22 成都佰维存储科技有限公司 内存管理方法、装置、可读存储介质及电子设备
CN117130949A (zh) * 2023-08-28 2023-11-28 零束科技有限公司 内存管理方法、装置、电子设备及存储介质
CN117519988A (zh) * 2023-12-28 2024-02-06 苏州元脑智能科技有限公司 一种基于raid的内存池动态调配方法、装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214313A (zh) * 2020-09-22 2021-01-12 深圳云天励飞技术股份有限公司 内存分配方法及相关设备
CN113032156B (zh) * 2021-05-25 2021-10-15 北京金山云网络技术有限公司 内存分配方法和装置、电子设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104375899A (zh) * 2014-11-21 2015-02-25 北京应用物理与计算数学研究所 高性能计算机numa感知的线程和内存资源优化方法与系统
CN107153618A (zh) * 2016-03-02 2017-09-12 阿里巴巴集团控股有限公司 一种内存分配的处理方法及装置
CN111090521A (zh) * 2019-12-10 2020-05-01 Oppo(重庆)智能科技有限公司 内存分配方法、装置、存储介质及电子设备
CN111324461A (zh) * 2020-02-20 2020-06-23 西安芯瞳半导体技术有限公司 内存分配方法、装置、计算机设备和存储介质
CN112214313A (zh) * 2020-09-22 2021-01-12 深圳云天励飞技术股份有限公司 内存分配方法及相关设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1276361C (zh) * 2003-12-29 2006-09-20 北京中视联数字系统有限公司 一种嵌入式系统内存管理的方法
WO2007109920A1 (fr) * 2006-03-27 2007-10-04 Zte Corporation Procédé de construction et d'utilisation d'un pool de mémoire
CN102915276B (zh) * 2012-09-25 2015-06-03 武汉邮电科学研究院 一种用于嵌入式系统的内存控制方法
CN102968378B (zh) * 2012-10-23 2016-06-15 融创天下(上海)科技发展有限公司 一种内存分配和释放的方法、装置及系统
CN103077126B (zh) * 2012-12-24 2016-08-03 中兴通讯股份有限公司 一种内存管理方法和装置
CN107133182A (zh) * 2016-02-29 2017-09-05 北大方正集团有限公司 一种内存管理方法及装置
CN108595259B (zh) * 2017-03-16 2021-08-20 哈尔滨英赛克信息技术有限公司 一种基于全局管理的内存池管理方法
CN107766153A (zh) * 2017-10-17 2018-03-06 华为技术有限公司 一种内存管理方法及装置
CN110245091B (zh) * 2018-10-29 2022-08-26 浙江大华技术股份有限公司 一种内存管理的方法、装置及计算机存储介质
CN111078408B (zh) * 2019-12-10 2022-10-21 Oppo(重庆)智能科技有限公司 内存分配方法、装置、存储介质及电子设备
CN111367671B (zh) * 2020-03-03 2023-12-29 深信服科技股份有限公司 一种内存分配方法、装置、设备及可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104375899A (zh) * 2014-11-21 2015-02-25 北京应用物理与计算数学研究所 高性能计算机numa感知的线程和内存资源优化方法与系统
CN107153618A (zh) * 2016-03-02 2017-09-12 阿里巴巴集团控股有限公司 一种内存分配的处理方法及装置
CN111090521A (zh) * 2019-12-10 2020-05-01 Oppo(重庆)智能科技有限公司 内存分配方法、装置、存储介质及电子设备
CN111324461A (zh) * 2020-02-20 2020-06-23 西安芯瞳半导体技术有限公司 内存分配方法、装置、计算机设备和存储介质
CN112214313A (zh) * 2020-09-22 2021-01-12 深圳云天励飞技术股份有限公司 内存分配方法及相关设备

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116627359A (zh) * 2023-07-24 2023-08-22 成都佰维存储科技有限公司 内存管理方法、装置、可读存储介质及电子设备
CN116627359B (zh) * 2023-07-24 2023-11-14 成都佰维存储科技有限公司 内存管理方法、装置、可读存储介质及电子设备
CN117130949A (zh) * 2023-08-28 2023-11-28 零束科技有限公司 内存管理方法、装置、电子设备及存储介质
CN117130949B (zh) * 2023-08-28 2024-05-10 零束科技有限公司 内存管理方法、装置、电子设备及存储介质
CN117519988A (zh) * 2023-12-28 2024-02-06 苏州元脑智能科技有限公司 一种基于raid的内存池动态调配方法、装置
CN117519988B (zh) * 2023-12-28 2024-03-19 苏州元脑智能科技有限公司 一种基于raid的内存池动态调配方法、装置

Also Published As

Publication number Publication date
CN112214313A (zh) 2021-01-12

Similar Documents

Publication Publication Date Title
WO2022062833A1 (zh) 内存分配方法及相关设备
US10686756B2 (en) Method and apparatus for managing MAC address generation for virtualized environments
US20080028154A1 (en) Method and Apparatus for Memory Utilization
WO2021008104A1 (zh) Tee系统中的数据传输方法和装置
US9104501B2 (en) Preparing parallel tasks to use a synchronization register
WO2009098547A1 (en) Memory management
CN110750336B (zh) 一种OpenStack虚拟机内存热扩容方法
CN106385377B (zh) 一种信息处理方法和系统
US7849272B2 (en) Dynamic memory management in an RDMA context
CN111857992B (zh) 一种Radosgw模块中线程资源分配方法和装置
CN112612623A (zh) 一种共享内存管理的方法和设备
CN109564502A (zh) 应用于存储设备中的访问请求的处理方法和装置
US11385900B2 (en) Accessing queue data
CN110162395B (zh) 一种内存分配的方法及装置
JP2005209206A (ja) マルチプロセッサシステムにおけるデータ転送方法、マルチプロセッサシステム、及び、この方法を実施するプロセッサ
CN113535087A (zh) 数据迁移过程中的数据处理方法、服务器及存储系统
CN114157717B (zh) 一种微服务动态限流的系统及方法
CN114140115B (zh) 区块链交易池的分片方法、系统、存储介质和计算机系统
JP7217341B2 (ja) プロセッサおよびレジスタの継承方法
CN110673797A (zh) 一种分布式块存储服务中的逻辑卷拷贝方法
CN115509763B (zh) 指纹计算方法及装置
WO2017220020A1 (zh) 存储资源分配方法和装置
CN117742977B (zh) 芯片内存数据拷贝方法、电子设备和介质
CN112817766B (zh) 一种内存管理方法、电子设备及介质
CN117806776B (zh) 一种数据迁移方法、装置及电子设备和存储介质

Legal Events

Date  Code  Title / Description
      121   Ep: the epo has been informed by wipo that ep was designated in this application
            Ref document number: 21871206; Country of ref document: EP; Kind code of ref document: A1
      NENP  Non-entry into the national phase
            Ref country code: DE
      122   Ep: pct application non-entry in european phase
            Ref document number: 21871206; Country of ref document: EP; Kind code of ref document: A1