CN112214313A - Memory allocation method and related equipment - Google Patents


Info

Publication number
CN112214313A
CN112214313A (application CN202011004107.5A)
Authority
CN
China
Prior art keywords
memory
target
thread
capacity
blocks
Prior art date
Legal status
Granted
Application number
CN202011004107.5A
Other languages
Chinese (zh)
Other versions
CN112214313B (en)
Inventor
马登云
顾鹏
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202011004107.5A priority Critical patent/CN112214313B/en
Publication of CN112214313A publication Critical patent/CN112214313A/en
Priority to PCT/CN2021/114967 priority patent/WO2022062833A1/en
Application granted granted Critical
Publication of CN112214313B publication Critical patent/CN112214313B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022 Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)

Abstract

The application discloses a memory allocation method applied to an electronic device, comprising the following steps: receiving a memory allocation request, where the memory allocation request is used to request that memory be allocated for a first thread and carries an expected memory capacity; allocating a target memory for the first thread based on the expected memory capacity, and recording attribute information of the target memory; receiving a memory release request, where the memory release request is used to release the target memory; and releasing the target memory based on the attribute information. By adopting the embodiments of the present application, memory utilization can be improved.

Description

Memory allocation method and related equipment
Technical Field
The present application relates to the field of computer storage technologies, and in particular, to a memory allocation method and related devices.
Background
An embedded system often uses multiple threads to complete different tasks, and these threads may share the same system memory resources, which are limited. When a thread cannot obtain memory because the system's memory resources are insufficient, it cannot perform its task, which severely affects system services. How to improve the utilization of memory resources is therefore an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides a memory allocation method and related equipment, which are used for improving the utilization rate of memory resources.
In a first aspect, an embodiment of the present application provides a memory allocation method applied to an electronic device, where the method includes:
receiving a memory allocation request, wherein the memory allocation request is used for requesting to allocate a memory for a first thread, and the memory allocation request carries an expected memory capacity;
allocating a target memory for the first thread based on the expected memory capacity, and recording attribute information of the target memory;
receiving a memory release request, wherein the memory release request is used for releasing the target memory;
and releasing the target memory based on the attribute information.
In a second aspect, an embodiment of the present application provides a memory allocation apparatus, applied to an electronic device, the apparatus including:
a first receiving unit, configured to receive a memory allocation request, where the memory allocation request is used to request that a memory is allocated to a first thread, and the memory allocation request carries an expected memory capacity;
an allocation unit, configured to allocate a target memory for the first thread based on the expected memory capacity;
the recording unit is used for recording the attribute information of the target memory;
a second receiving unit, configured to receive a memory release request, where the memory release request is used to release the target memory;
and the releasing unit is used for releasing the target memory based on the attribute information.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the one or more programs include instructions for executing steps in the method according to the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the method according to the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps described in the method according to the first aspect of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, a memory allocation request is first received, where the memory allocation request is used to request that memory be allocated for a first thread and carries an expected memory capacity; then a target memory is allocated for the first thread based on the expected memory capacity, and the attribute information of the target memory is recorded; then a memory release request is received, where the memory release request is used to release the target memory; and finally the target memory is released based on the attribute information. Because the electronic device allocates the target memory to the first thread according to the expected memory capacity, rather than allocating memory arbitrarily, the utilization rate of memory resources is improved.
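As a purely illustrative sketch (the patent specifies no code; the class and method names below are invented, and the address bookkeeping is a toy stand-in for a real allocator), the four-step flow above can be modelled as:

```python
class Allocator:
    """Toy model of the request/allocate/record/release flow."""

    def __init__(self):
        self._attrs = {}        # target address -> recorded attribute information
        self._next_addr = 0x1000

    def handle_alloc_request(self, thread_id, expected_capacity):
        """Allocate a target memory for the thread and record its attributes."""
        addr = self._next_addr
        self._next_addr += expected_capacity
        self._attrs[addr] = {"thread": thread_id, "capacity": expected_capacity}
        return addr

    def handle_release_request(self, addr):
        """Release the target memory based on the recorded attribute information."""
        return self._attrs.pop(addr)
```

The point of the sketch is only the contract: attributes are recorded at allocation time so that release needs nothing but the recorded information.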
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a memory allocation method according to an embodiment of the present disclosure;
fig. 2 is a diagram of a memory allocation architecture according to an embodiment of the present application;
FIG. 3 is a schematic diagram of attribute information storage provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a memory allocation apparatus according to an embodiment of the present application.
Detailed Description
To make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Electronic devices may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), Terminal devices (Terminal Device), and so forth, having wireless communication capabilities.
The following describes embodiments of the present application in detail.
As shown in fig. 1, a memory allocation method provided in an embodiment of the present application is applied to the electronic device, and specifically includes the following steps:
step 101: receiving a memory allocation request, wherein the memory allocation request is used for requesting to allocate memory for a first thread, and the memory allocation request carries expected memory capacity.
Before receiving the memory allocation request, the electronic device may or may not allocate memory for the first thread.
Step 102: allocating a target memory for the first thread based on the expected memory capacity, and recording attribute information of the target memory.
As shown in fig. 2, in the memory allocation architecture, the memory capacity of the memory pool 2 is smaller than that of the memory pool 3, and the memory pool 2, the memory pool 3, and the memory block 4 are allocated from the memory pool 1.
Wherein n in fig. 2 may be 16, and m is a positive integer. In other embodiments, the value of n may be adjusted accordingly according to actual conditions, that is, may be greater than 16, or may be smaller than 16.
The memory pool 2 may include 16 memory blocks, each memory block has a memory capacity of 512KB, each memory block includes at least one memory node, and the memory capacity of the memory node is less than or equal to 512 KB. In other embodiments, the number of the memory blocks included in the memory pool 2 and the size of the memory capacity of each memory block may be adjusted accordingly according to actual situations.
The memory pool 3 includes 16 memory blocks, the memory capacity of each memory block is 3M, each memory block includes at least one memory node, and the memory capacity of the memory node is greater than 512KB and less than or equal to 3M. In other embodiments, the number of the memory blocks included in the memory pool 3 and the size of the memory capacity of each memory block may be adjusted accordingly according to actual situations.
Wherein, the memory capacity of the memory block 4 is larger than 3M. That is, the memory capacity of the memory block 4 is greater than that of the memory block in the memory pool 3, and the memory capacity of the memory block in the memory pool 3 is greater than that of the memory block in the memory pool 2.
In other embodiments, the size of the memory capacity of the memory block 4 may be adjusted according to an actual situation, and the relationship between the memory capacity of the memory block 4 and the memory capacity of the memory block in the memory pool 3 and the size of the memory capacity of the memory block in the memory pool 2 may also be adjusted according to the actual situation.
If the expected memory capacity is less than or equal to 512KB, a memory node is allocated from the memory block in the memory pool 2, if the expected memory capacity is greater than 512KB and less than or equal to 3M, a memory node is allocated from the memory block in the memory pool 3, and if the expected memory capacity is greater than 3M, a memory block is fetched from the memory pool 1.
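Using the thresholds stated in this embodiment (512KB and 3M), the routing rule in the paragraph above can be sketched as a small function; the return labels are invented for illustration:

```python
KB = 1024
MB = 1024 * KB

def select_source(expected_capacity):
    """Route a request: memory pool 2 for nodes up to 512KB, memory pool 3
    for nodes over 512KB up to 3M, and a block from memory pool 1 beyond 3M."""
    if expected_capacity <= 512 * KB:
        return "pool2-node"
    elif expected_capacity <= 3 * MB:
        return "pool3-node"
    else:
        return "pool1-block"
```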
The target memory may be the memory block 4 in fig. 2, or a memory node in the memory pool 2, or a memory node in the memory pool 3.
The attribute information includes an address of the target memory, an address of a memory block where the target memory is located, and an actual memory capacity applied by the first thread.
The capacity of the target memory is greater than or equal to the actual memory capacity applied by the first thread, and the first thread is allowed to use the applied actual memory capacity in the target memory.
For example, if the target memory size is 3.5M and the actual memory size requested by the first thread is 2M, the first thread may use 2M of the target memory.
Optionally, the memory allocation request further carries an address alignment condition, where the address alignment condition is used to determine an attribute record address, the attribute record address is used to store the attribute information, and the attribute record address is determined based on the address alignment condition and a start address of the target memory.
Wherein the attribute record address is determined based on the address alignment condition and the start address of the target memory, and includes:
determining a second starting address based on the first starting address of the target memory and the address alignment condition;
determining the attribute recording address based on the second starting address.
Wherein the second starting address is the address returned to the user.
The address alignment condition is that the second start address is an integer multiple of a preset value, and the preset value may be 256 bytes or other values.
As shown in fig. 3, which is a schematic diagram of attribute information storage, the attribute record address for storing the attribute information includes address 1, address 2, address 3, and address 4, where address 1 is three addresses before address 4, address 2 is two addresses before address 4, and address 3 is one address before address 4.
Address 1 stores the address of the memory block where the target memory is located, address 2 stores the applied actual memory capacity, address 3 stores the address of the target memory, and address 4 is the second starting address.
Wherein the character type of the address is uint8_t. It is understood that in other embodiments, the character type of the address may be another type.
Optionally, the applied actual memory capacity is determined based on the address alignment condition, the expected memory capacity, and a memory capacity required for additional information.
The additional information includes control information and/or debug information, such as the size of the current segment of memory, the starting address of the next segment of memory, the boundary flag of the memory, and so on.
The memory capacity required for the additional information in a 32-bit operating system is 12 bytes, and in a 64-bit operating system it is 24 bytes.
For example, if the expected memory capacity is 1M, the address alignment condition is that the second start address is an integer multiple of 256 bytes, and the memory capacity required by the additional information is 24 bytes, the applied actual memory capacity is 1M +256 bytes +24 bytes.
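The alignment and capacity arithmetic from the preceding paragraphs can be sketched as follows (function names are invented; the 256-byte alignment and 24-byte additional-information size are the values used in the example above):

```python
ALIGN = 256        # preset value: second start address must be a multiple of 256 bytes
EXTRA_64BIT = 24   # additional-information capacity on a 64-bit system (12 on 32-bit)

def aligned_second_start(first_start, align=ALIGN):
    """Round the first start address of the target memory up to the next
    multiple of the alignment, giving the second start address returned
    to the user."""
    return -(-first_start // align) * align

def actual_capacity(expected, align=ALIGN, extra=EXTRA_64BIT):
    """Applied actual capacity: expected capacity plus alignment slack plus
    the capacity needed for the additional information."""
    return expected + align + extra
```

For the worked example in the text, `actual_capacity(1024 * 1024)` gives 1M + 256 bytes + 24 bytes.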
Step 103: and receiving a memory release request, wherein the memory release request is used for releasing the target memory.
Step 104: and releasing the target memory based on the attribute information.
It can be seen that, in the embodiment of the present application, a memory allocation request is first received, where the memory allocation request is used to request that memory be allocated for a first thread and carries an expected memory capacity; then a target memory is allocated for the first thread based on the expected memory capacity, and the attribute information of the target memory is recorded; then a memory release request is received, where the memory release request is used to release the target memory; and finally the target memory is released based on the attribute information. Because the electronic device allocates the target memory to the first thread according to the expected memory capacity, rather than allocating memory arbitrarily, the utilization rate of memory resources is improved.
In an implementation manner of the present application, the allocating a target memory for the first thread based on the expected memory capacity includes:
if the expected memory capacity is larger than a first memory capacity, allocating the target memory from a first memory pool, wherein the capacity of the target memory is larger than the expected memory capacity;
and if the expected memory capacity is smaller than or equal to the first memory capacity, allocating the target memory based on the occupation information of a second memory pool and the expected memory capacity, wherein the capacity of the first memory pool is larger than that of the second memory pool.
The first memory capacity may be 3M, or may be other capacities.
In the case that the expected memory capacity is larger than the first memory capacity, the target memory is the memory block 4 in fig. 2.
The second memory pool is allocated from the first memory pool, and the second memory pool may be the memory pool 2 in fig. 2 or the memory pool 3 in fig. 2.
It can be seen that the target memory is allocated according to the expected memory capacity, which is beneficial to improving the memory utilization rate.
In an implementation manner of the present application, the second memory pool includes N first memory blocks, each first memory block includes at least one memory node, and N is a positive integer;
the allocating the target memory based on the occupancy information of the second memory pool and the expected memory capacity includes:
when all of the N first memory blocks are occupied and either the first thread occupies none of them, or the first thread occupies M of the first memory blocks and the M first memory blocks have no idle memory nodes, allocating a first memory from the first memory pool, determining the target memory in the first memory, and allocating the target memory, where M is a positive integer less than or equal to N;
determining the target memory based on the M first memory blocks and allocating the target memory under the condition that the first thread occupies the M first memory blocks and the M first memory blocks have idle memory nodes;
determining the target memory based on the first idle memory block and allocating the target memory under the condition that the first thread occupies the M first memory blocks, the M first memory blocks have no idle memory nodes, and the N first memory blocks include the first idle memory blocks, where the first idle memory blocks are unoccupied first memory blocks of the N first memory blocks except the M first memory blocks.
The capacities of the N first memory blocks may be the same.
Wherein, the capacity of each memory node may be the same.
The memory nodes in the same memory block can only be occupied by one thread, and different threads need to occupy the memory nodes in different memory blocks.
If the first thread occupies M first memory blocks, the M first memory blocks form a bidirectional circular linked list, a head of the linked list may be the first memory block occupied by the first thread first, and a tail of the linked list may be the first memory block occupied by the first thread last.
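The bidirectional circular linked list of a thread's occupied blocks can be sketched as below; the `Block` class and field names are invented for illustration, with the head being the block occupied first and the tail the block occupied last:

```python
class Block:
    """A memory block node in a circular doubly linked list."""
    def __init__(self, name):
        self.name = name
        self.prev = self.next = self   # a single block links to itself

def append_block(head, block):
    """Insert `block` at the tail of the circular doubly linked list rooted
    at `head`, i.e. just before the head, keeping occupation order."""
    tail = head.prev
    tail.next = block
    block.prev = tail
    block.next = head
    head.prev = block
    return head
```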
The first memory may be divided into 16 memory blocks, where the 16 memory blocks include a target memory, and a memory capacity of each of the 16 memory blocks may be 3M or 512 KB.
The first idle memory block is stored in the idle pool in a form of a single linked list.
And determining the target memory based on the first idle block, wherein the first idle block serving as the head of the singly linked list is determined as the target memory.
It can be seen that, in the embodiment of the present application, the first thread does not occupy the memory node in the memory block occupied by other threads, which is beneficial to avoiding locking the memory node by the electronic device, and reduces the computational complexity of the electronic device.
In one implementation of the present application, the header information of the first memory is stored in a system heap.
The header information of the first memory includes control information and/or debugging information, such as the size of the current segment of memory, the starting address of the next segment of memory, the boundary flag of the memory, and so on.
In an implementation manner of the present application, the determining the target memory in the first memory includes:
dividing the first memory into S second memory blocks, and putting the S second memory blocks into the second memory pool;
determining a first target memory block in the S second memory blocks;
and determining a first target memory node in the first target memory block as the target memory.
The S second memory blocks form a single-direction linked list, and the head of the single-direction linked list is a first target memory block.
The S second memory blocks have the same memory capacity.
S may be 16 or another value.
The memory nodes in the first target memory block form a unidirectional linked list, and the head of the unidirectional linked list is the first target memory node.
It can be seen that, in the embodiment of the present application, the header information of the first memory is stored in the system heap, so that the header information of the first memory does not occupy the storage space of the first memory, thereby facilitating to increase the number of the second memory blocks.
In an implementation manner of the present application, the idle memory nodes include a first idle memory node and/or a second idle memory node, where the first idle node is an unoccupied memory node or a memory node that is applied by the first thread and released by the first thread, and the second idle memory node is a memory node that is applied by the first thread and not released by the first thread;
the determining the target memory based on the M first memory blocks includes:
determining a second target memory node in the first idle memory nodes as the target memory under the condition that the first idle memory nodes exist in the M first memory blocks;
determining a third target memory node in the second idle memory node as the target memory under the condition that the first idle memory nodes do not exist in the M first memory blocks and the second idle memory nodes exist in the M first memory blocks.
If the first idle node is a memory node applied by the first thread and released by the first thread, the first idle node is a private memory node of the first thread, the first idle node forms a one-way linked list, and the header of the one-way linked list is a second target memory node.
After the header of the one-way linked list is determined to be the second target memory node, the one-way linked list does not include the second target memory node, and the header of the one-way linked list is a downstream neighbor node of the second target memory node.
The second idle node is a common memory node of the first thread, the second idle node forms a single-direction linked list, and the head of the single-direction linked list is a third target memory node.
After the head of the single-direction chain table is determined to be the third target memory node, the single-direction chain table does not comprise the third target memory node, and the head of the single-direction chain table is the downstream adjacent node of the third target memory node.
And the second idle node is a memory node to be released.
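The allocation preference described above (take from the thread's private free list first, then from the common list of to-be-released nodes) can be sketched as follows; the lists stand in for the singly linked lists in the text, with the list head at index 0, and the function name is invented:

```python
def take_node(private_free, common_free):
    """Pop the head of the private free list if it is non-empty; otherwise
    pop the head of the common (to-be-released) free list; None if both
    lists are empty."""
    if private_free:
        return private_free.pop(0)
    if common_free:
        return common_free.pop(0)
    return None
```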
It can be seen that in the embodiment of the present application, since releasing a memory node takes much longer than applying for one, using the second idle memory nodes avoids the problem of memory exhaustion after a thread runs for a long time.
In an implementation manner of the present application, the determining the target memory based on the first idle memory block includes:
determining a second target memory block in the first idle memory block;
and determining a fourth target memory node in the second target memory block as the target memory.
The first idle memory block is stored in the idle pool in a form of a single-direction linked list, and a header of the single-direction linked list is a second target memory block.
After the head of the single-direction linked list is determined to be the second target memory block, the single-direction linked list does not include the second target memory block, and the head of the single-direction linked list is a downstream adjacent node of the second target memory block.
And the memory nodes in the second target memory block form a single linked list, and the head of the single linked list is a fourth target memory node.
After the header of the one-way linked list is determined to be the fourth target memory node, the one-way linked list does not include the fourth target memory node, and the header of the one-way linked list is a downstream neighbor node of the fourth target memory node.
It can be seen that, in the embodiment of the present application, a second target memory block is determined in a first idle memory block, which is beneficial to improving the memory utilization rate.
In an implementation manner of the present application, the attribute information includes a first target address of the target memory and a second target address of a third memory block in which the target memory is located;
the releasing the target memory based on the attribute information includes:
if the second target address is the first address, releasing the target memory by adopting a second thread based on the first target address;
and if the second target address is not the first address, releasing the target memory by adopting a third thread based on the first target address and the second target address.
The value corresponding to the first address may be zero, or may be another value.
The second thread and the third thread may or may not be the same thread.
If the second target address is the first address, the target memory is allocated from the first memory pool, and the target memory is released to the first memory pool.
If the second target address is not the first address, the memory block where the target memory is located may be determined based on the second target address, and the target memory is released to the third memory block based on the first target address.
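The release routing in the two paragraphs above can be sketched as a small function; the zero sentinel for "allocated directly from the first memory pool" follows the text, while the function and parameter names are invented:

```python
FIRST_ADDRESS = 0   # sentinel: target memory came straight from the first memory pool

def release(target_addr, block_addr, pool1_free, blocks):
    """Release using the two recorded target addresses: return the memory to
    the first memory pool when the block address is the sentinel, otherwise
    return the node to the memory block it was allocated from."""
    if block_addr == FIRST_ADDRESS:
        pool1_free.append(target_addr)
        return "pool1"
    blocks[block_addr].append(target_addr)
    return "block"
```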
It can be seen that, in the embodiment of the present application, the memory where the target memory is located is determined by using the second target address, which is beneficial to improving the efficiency of releasing the target memory.
In an implementation manner of the present application, after the releasing the target memory by using the third thread, the method further includes:
when the first thread and the third thread are not the same thread and the first thread is closed, defining a public memory node included in the third memory block as a private memory node of the third memory block;
and releasing the third memory block by using the third thread when the number of the allocated memory nodes of the third memory block is the first number.
Wherein the first number may be zero.
Optionally, if the first thread and the third thread are the same thread, defining a target memory node as a private node of the first thread, and reducing the number of memory nodes that can be allocated to the third memory block by one;
and releasing the third memory block when the number of memory nodes that can be allocated to the third memory block is zero.
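The block-release condition above amounts to a per-block counter: each returned node decrements the count of allocatable memory nodes, and the block itself is released when the count reaches zero. A minimal sketch, with all names invented:

```python
class TrackedBlock:
    """A memory block released back to its pool only once its count of
    allocatable memory nodes drops to zero."""

    def __init__(self, allocatable):
        self.allocatable = allocatable
        self.released = False

    def return_node(self):
        """Return one node; release the whole block when none remain."""
        self.allocatable -= 1
        if self.allocatable == 0:
            self.released = True
        return self.released
```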
Optionally, if the first thread and the third thread are not the same thread and the first thread is not finished, defining the target memory node as a common node of the third memory block.
It can be seen that, in the embodiment of the present application, when the first thread is finished, the target memory node is defined as the private node of the first thread, and the memory block to which the target memory node belongs is not changed, so that a problem of memory exhaustion caused by that the third thread can only release the memory node and is not responsible for the memory node application is avoided.
In an implementation manner of the present application, if the expected memory capacity is greater than a second memory capacity, it is determined that the second memory pool is a third memory pool, and the second memory capacity is smaller than the first memory capacity;
if the expected memory capacity is smaller than or equal to a second memory capacity, determining that the second memory pool is a fourth memory pool, and the capacity of the memory nodes included in the third memory pool is larger than the capacity of the memory nodes included in the fourth memory pool.
And the third memory pool and the fourth memory pool are distributed from the first memory pool.
The third memory pool is memory pool 2 in fig. 1, and the fourth memory pool is memory pool 3 in fig. 1.
The unoccupied memory blocks in the third memory pool may form a singly linked list.
The fourth memory pool includes at least one memory block, and each memory block includes at least one memory node.
The unoccupied memory blocks in the fourth memory pool may form a singly linked list.
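The singly linked list of unoccupied blocks can be sketched as a LIFO push/pop over the blocks themselves; the patent does not specify the list discipline, so the LIFO choice and all names below are illustrative assumptions.

```c
#include <stddef.h>

/* Sketch: unoccupied memory blocks chained into a singly linked list.
 * The next pointer is stored in the (otherwise unused) block itself. */
typedef struct free_block {
    struct free_block *next;
} free_block;

/* Push an unoccupied block onto the pool's free list. */
void fb_push(free_block **head, free_block *blk) {
    blk->next = *head;
    *head = blk;
}

/* Pop one unoccupied block, or NULL if the list is empty. */
free_block *fb_pop(free_block **head) {
    free_block *blk = *head;
    if (blk)
        *head = blk->next;
    return blk;
}
```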
It can be seen that, in the embodiment of the present application, the capacities of the memory nodes applied for different target memory capacities are different, which is beneficial to improving the memory utilization rate.
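Concretely, the two capacity thresholds induce a three-way routing of a request to a pool. The sketch below takes the capacities as parameters, since the patent fixes no concrete values; the pool identifiers follow the fig. 1 mapping stated above, and all names are illustrative.

```c
#include <stddef.h>

/* Sketch of the two-threshold pool selection described above.
 * Capacities are parameters; the patent does not fix concrete values. */
typedef enum { POOL_FIRST = 1, POOL_THIRD = 3, POOL_FOURTH = 4 } pool_id;

pool_id select_pool(size_t expected, size_t first_cap, size_t second_cap) {
    if (expected > first_cap)
        return POOL_FIRST;   /* large request: allocate from the first pool */
    if (expected > second_cap)
        return POOL_THIRD;   /* medium request: third pool (larger nodes)  */
    return POOL_FOURTH;      /* small request: fourth pool (smaller nodes) */
}
```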
Referring to fig. 4, in accordance with the embodiment shown in fig. 1, fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and as shown in the figure, the electronic device includes a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the following steps:
receiving a memory allocation request, wherein the memory allocation request is used for requesting to allocate a memory for a first thread, and the memory allocation request carries an expected memory capacity;
allocating a target memory for the first thread based on the expected memory capacity, and recording attribute information of the target memory;
receiving a memory release request, wherein the memory release request is used for releasing the target memory;
and releasing the target memory based on the attribute information.
In one implementation of the present application, in allocating a target memory for the first thread based on the expected memory capacity, the program includes instructions for:
if the expected memory capacity is larger than a first memory capacity, allocating the target memory from a first memory pool, wherein the capacity of the target memory is larger than the expected memory capacity;
and if the expected memory capacity is smaller than or equal to the first memory capacity, allocating the target memory based on the occupation information of a second memory pool and the expected memory capacity, wherein the capacity of the first memory pool is larger than that of the second memory pool.
In an implementation manner of the present application, the second memory pool includes N first memory blocks, each first memory block includes at least one memory node, and N is a positive integer; in allocating the target memory based on the occupancy information of the second memory pool and the expected memory capacity, the program is specifically configured to execute instructions of:
when the first thread does not occupy the N first memory blocks and the N first memory blocks are all occupied, or the N first memory blocks are all occupied, the first thread occupies M first memory blocks, and the M first memory blocks do not have idle memory nodes, allocating a first memory from the first memory pool, determining a target memory in the first memory, and allocating the target memory, where M is a positive integer and is less than or equal to N;
determining the target memory based on the M first memory blocks and allocating the target memory under the condition that the first thread occupies the M first memory blocks and the M first memory blocks have idle memory nodes;
determining the target memory based on the first idle memory block and allocating the target memory under the condition that the first thread occupies the M first memory blocks, the M first memory blocks have no idle memory nodes, and the N first memory blocks include the first idle memory blocks, where the first idle memory blocks are unoccupied first memory blocks of the N first memory blocks except the M first memory blocks.
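The three allocation branches above reduce to a small decision function. The sketch below is an illustrative reading of that logic, with assumed enum and parameter names; the branch ordering (own blocks first, then an idle block, then fresh first memory) is inferred from the three conditions and is not stated verbatim in the patent.

```c
#include <stdbool.h>

/* Sketch of the three-way allocation decision described above.
 * Names and ordering are illustrative assumptions. */
typedef enum {
    SRC_NEW_FIRST_MEMORY, /* carve a fresh first memory from the first pool */
    SRC_OWN_BLOCKS,       /* reuse a free node in a block the thread owns   */
    SRC_IDLE_BLOCK        /* take over an unoccupied first memory block     */
} alloc_source;

alloc_source choose_source(bool thread_owns_blocks,
                           bool owned_blocks_have_free_node,
                           bool pool_has_idle_block) {
    if (thread_owns_blocks && owned_blocks_have_free_node)
        return SRC_OWN_BLOCKS;      /* second branch in the text */
    if (pool_has_idle_block)
        return SRC_IDLE_BLOCK;      /* third branch in the text */
    return SRC_NEW_FIRST_MEMORY;    /* first branch: everything occupied */
}
```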
In one implementation of the present application, the header information of the first memory is stored in a system heap.
In an implementation manner of the present application, in determining the target memory in the first memory, the program is specifically configured to execute instructions of the following steps:
dividing the first memory into S second memory blocks, and putting the S second memory blocks into the second memory pool;
determining a first target memory block in the S second memory blocks;
and determining a first target memory node in the first target memory block as the target memory.
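Splitting a first memory into S second blocks of equal-sized nodes and locating a node can be sketched with plain pointer arithmetic. The contiguous, header-free layout below is an illustrative assumption; the patent does not fix the sizes or the layout (and states elsewhere that header information is kept in the system heap).

```c
#include <stddef.h>

/* Sketch: treat a raw first memory as S second blocks, each holding
 * nodes_per_block nodes of node_size bytes, and return a pointer to
 * node node_idx of block block_idx. Layout is an assumption. */
char *node_at(char *first_memory, size_t node_size,
              size_t nodes_per_block, size_t block_idx, size_t node_idx) {
    size_t block_bytes = node_size * nodes_per_block;
    return first_memory + block_idx * block_bytes + node_idx * node_size;
}
```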
In an implementation manner of the present application, the idle memory nodes include a first idle memory node and/or a second idle memory node, where the first idle memory node is an unoccupied memory node or a memory node that is applied by the first thread and released by the first thread, and the second idle memory node is a memory node that is applied by the first thread and not released by the first thread;
in determining the target memory based on the M first memory blocks, the program is specifically configured to execute the following instructions:
determining a second target memory node in the first idle memory nodes as the target memory under the condition that the first idle memory nodes exist in the M first memory blocks;
determining a third target memory node in the second idle memory nodes as the target memory under the condition that the first idle memory nodes do not exist in the M first memory blocks and the second idle memory nodes exist in the M first memory blocks.
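The preference order above, first-idle nodes before second-idle nodes, can be sketched as two free lists consulted in turn. The two-list representation and all names are illustrative assumptions; only the preference order comes from the text.

```c
#include <stddef.h>

/* Sketch: each set of owned blocks keeps two idle-node lists.
 * Allocation prefers first-idle nodes (unoccupied, or applied and
 * released by the first thread) and falls back to second-idle nodes
 * (applied by the first thread and not released by it). */
typedef struct idle_node {
    struct idle_node *next;
} idle_node;

idle_node *take_idle(idle_node **first_idle, idle_node **second_idle) {
    idle_node **list = *first_idle ? first_idle : second_idle;
    idle_node *n = *list;
    if (n)
        *list = n->next;
    return n; /* NULL if both lists are empty */
}
```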
In an implementation manner of the present application, in determining the target memory based on the first free memory block, the program is specifically configured to execute instructions of the following steps:
determining a second target memory block in the first idle memory block;
and determining a fourth target memory node in the second target memory block as the target memory.
In an implementation manner of the present application, the attribute information includes a first target address of the target memory and a second target address of a third memory block in which the target memory is located;
in terms of releasing the target memory based on the attribute information, the program is specifically configured to execute instructions of:
if the second target address is the first address, releasing the target memory by adopting a second thread based on the first target address;
and if the second target address is not the first address, releasing the target memory by adopting a third thread based on the first target address and the second target address.
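The release dispatch on the recorded attribute information can be sketched as below. Modeling the "first address" as a sentinel value (NULL here) that marks memory allocated directly from the first pool is an illustrative assumption; the patent does not specify the encoding, only that the second target address determines which thread and path release the target memory.

```c
#include <stddef.h>

/* Sketch of the release dispatch: the recorded second target address
 * (the address of the block holding the target memory) selects the
 * release path. Encoding the "first address" as NULL is an assumption. */
typedef enum { RELEASE_DIRECT, RELEASE_TO_BLOCK } release_path;

release_path pick_release_path(const void *second_target_addr,
                               const void *first_address_sentinel) {
    if (second_target_addr == first_address_sentinel)
        return RELEASE_DIRECT;  /* second thread frees by the first target address */
    return RELEASE_TO_BLOCK;    /* third thread returns the node to its block */
}
```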
In an implementation manner of the present application, after the target memory is released by using the third thread, the program is further specifically configured to execute instructions of the following steps:
when the first thread and the third thread are not the same thread and the first thread is closed, defining a public memory node included in the third memory block as a private memory node of the third memory block;
and releasing the third memory block by using the third thread when the number of the allocated memory nodes of the third memory block is the first number.
In an implementation manner of the present application, if the expected memory capacity is greater than a second memory capacity, it is determined that the second memory pool is a third memory pool, and the second memory capacity is smaller than the first memory capacity;
if the expected memory capacity is smaller than or equal to a second memory capacity, determining that the second memory pool is a fourth memory pool, and the capacity of the memory nodes included in the third memory pool is larger than the capacity of the memory nodes included in the fourth memory pool.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a memory allocation apparatus applied to the electronic device according to an embodiment of the present application, the apparatus including:
a first receiving unit 501, configured to receive a memory allocation request, where the memory allocation request is used to request to allocate a memory for a first thread, and the memory allocation request carries an expected memory capacity;
an allocating unit 502, configured to allocate a target memory for the first thread based on the expected memory capacity;
a recording unit 503, configured to record attribute information of the target memory;
a second receiving unit 504, configured to receive a memory release request, where the memory release request is used to release the target memory;
a releasing unit 505, configured to release the target memory based on the attribute information.
In an implementation manner of the present application, in allocating a target memory for the first thread based on the expected memory capacity, the allocating unit 502 is configured to execute the following instructions:
if the expected memory capacity is larger than a first memory capacity, allocating the target memory from a first memory pool, wherein the capacity of the target memory is larger than the expected memory capacity;
and if the expected memory capacity is smaller than or equal to the first memory capacity, allocating the target memory based on the occupation information of a second memory pool and the expected memory capacity, wherein the capacity of the first memory pool is larger than that of the second memory pool.
In an implementation manner of the present application, the second memory pool includes N first memory blocks, each first memory block includes at least one memory node, and N is a positive integer; in allocating the target memory based on the occupation information of the second memory pool and the expected memory capacity, the allocating unit 502 is specifically configured to execute the following steps:
when the first thread does not occupy the N first memory blocks and the N first memory blocks are all occupied, or the N first memory blocks are all occupied, the first thread occupies M first memory blocks, and the M first memory blocks do not have idle memory nodes, allocating a first memory from the first memory pool, determining a target memory in the first memory, and allocating the target memory, where M is a positive integer and is less than or equal to N;
determining the target memory based on the M first memory blocks and allocating the target memory under the condition that the first thread occupies the M first memory blocks and the M first memory blocks have idle memory nodes;
determining the target memory based on the first idle memory block and allocating the target memory under the condition that the first thread occupies the M first memory blocks, the M first memory blocks have no idle memory nodes, and the N first memory blocks include the first idle memory blocks, where the first idle memory blocks are unoccupied first memory blocks of the N first memory blocks except the M first memory blocks.
In one implementation of the present application, the header information of the first memory is stored in a system heap.
In an implementation manner of the present application, in determining the target memory in the first memory, the allocating unit 502 is specifically configured to execute the following instructions:
dividing the first memory into S second memory blocks, and putting the S second memory blocks into the second memory pool;
determining a first target memory block in the S second memory blocks;
and determining a first target memory node in the first target memory block as the target memory.
In an implementation manner of the present application, the idle memory nodes include a first idle memory node and/or a second idle memory node, where the first idle memory node is an unoccupied memory node or a memory node that is applied by the first thread and released by the first thread, and the second idle memory node is a memory node that is applied by the first thread and not released by the first thread;
in determining the target memory based on the M first memory blocks, the allocating unit 502 is specifically configured to execute the following instructions:
determining a second target memory node in the first idle memory nodes as the target memory under the condition that the first idle memory nodes exist in the M first memory blocks;
determining a third target memory node in the second idle memory nodes as the target memory under the condition that the first idle memory nodes do not exist in the M first memory blocks and the second idle memory nodes exist in the M first memory blocks.
In an implementation manner of the present application, in determining the target memory based on the first free memory block, the allocating unit 502 is specifically configured to execute instructions of the following steps:
determining a second target memory block in the first idle memory block;
and determining a fourth target memory node in the second target memory block as the target memory.
In an implementation manner of the present application, the attribute information includes a first target address of the target memory and a second target address of a third memory block in which the target memory is located;
in releasing the target memory based on the attribute information, the releasing unit 505 is specifically configured to execute the following steps:
if the second target address is the first address, releasing the target memory by adopting a second thread based on the first target address;
and if the second target address is not the first address, releasing the target memory by adopting a third thread based on the first target address and the second target address.
In an implementation manner of the present application, the memory allocation apparatus further includes a defining unit 506 and a determining unit 507.
In an implementation manner of the present application, after the target memory is released by using the third thread, the defining unit 506 is specifically configured to execute the following instructions:
when the first thread and the third thread are not the same thread and the first thread is closed, defining a public memory node included in the third memory block as a private memory node of the third memory block;
the releasing unit 505 is further specifically configured to execute the following steps:
and releasing the third memory block by using the third thread when the number of the allocated memory nodes of the third memory block is the first number.
In an implementation manner of the present application, the determining unit 507 is specifically configured to execute the following steps:
if the expected memory capacity is larger than a second memory capacity, determining that the second memory pool is a third memory pool, wherein the second memory capacity is smaller than the first memory capacity;
if the expected memory capacity is smaller than or equal to a second memory capacity, determining that the second memory pool is a fourth memory pool, and the capacity of the memory nodes included in the third memory pool is larger than the capacity of the memory nodes included in the fourth memory pool.
The first receiving unit 501, the allocating unit 502, the recording unit 503, the second receiving unit 504, the releasing unit 505, the defining unit 506, and the determining unit 507 of the electronic device may be implemented by a processor.
The present application also provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the electronic device in the above method embodiments.
Embodiments of the present application also provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the electronic device in the above method. The computer program product may be a software installation package.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc Read Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may also reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functionality described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, the functionality may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Video Disc (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (13)

1. A memory allocation method, applied to an electronic device, the method comprising the following steps:
receiving a memory allocation request, wherein the memory allocation request is used for requesting to allocate a memory for a first thread, and the memory allocation request carries an expected memory capacity;
allocating a target memory for the first thread based on the expected memory capacity, and recording attribute information of the target memory;
receiving a memory release request, wherein the memory release request is used for releasing the target memory;
and releasing the target memory based on the attribute information.
2. The method of claim 1, wherein said allocating target memory for the first thread based on the expected memory capacity comprises:
if the expected memory capacity is larger than a first memory capacity, allocating the target memory from a first memory pool, wherein the capacity of the target memory is larger than the expected memory capacity;
and if the expected memory capacity is smaller than or equal to the first memory capacity, allocating the target memory based on the occupation information of a second memory pool and the expected memory capacity, wherein the capacity of the first memory pool is larger than that of the second memory pool.
3. The method according to claim 2, wherein the second memory pool includes N first memory blocks, each of the first memory blocks includes at least one memory node, and N is a positive integer;
the allocating the target memory based on the occupancy information of the second memory pool and the expected memory capacity includes:
when the first thread does not occupy the N first memory blocks and the N first memory blocks are all occupied, or the N first memory blocks are all occupied, the first thread occupies M first memory blocks, and the M first memory blocks do not have idle memory nodes, allocating a first memory from the first memory pool, determining a target memory in the first memory, and allocating the target memory, where M is a positive integer and is less than or equal to N;
determining the target memory based on the M first memory blocks and allocating the target memory under the condition that the first thread occupies the M first memory blocks and the M first memory blocks have idle memory nodes;
determining the target memory based on the first idle memory block and allocating the target memory under the condition that the first thread occupies the M first memory blocks, the M first memory blocks have no idle memory nodes, and the N first memory blocks include the first idle memory blocks, where the first idle memory blocks are unoccupied first memory blocks of the N first memory blocks except the M first memory blocks.
4. The method of claim 3, wherein the header information of the first memory is stored in a system heap.
5. The method of claim 4, wherein determining the target memory in the first memory comprises:
dividing the first memory into S second memory blocks, and putting the S second memory blocks into the second memory pool;
determining a first target memory block in the S second memory blocks;
and determining a first target memory node in the first target memory block as the target memory.
6. The method according to claim 3, wherein the free memory nodes comprise a first free memory node and/or a second free memory node, the first free memory node being an unoccupied memory node or a memory node that is applied by the first thread and released by the first thread, the second free memory node being a memory node that is applied by the first thread and not released by the first thread;
the determining the target memory based on the M first memory blocks includes:
determining a second target memory node in the first idle memory nodes as the target memory under the condition that the first idle memory nodes exist in the M first memory blocks;
determining a third target memory node in the second idle memory nodes as the target memory under the condition that the first idle memory nodes do not exist in the M first memory blocks and the second idle memory nodes exist in the M first memory blocks.
7. The method according to claim 3, wherein the determining the target memory based on the first free memory block includes:
determining a second target memory block in the first idle memory block;
and determining a fourth target memory node in the second target memory block as the target memory.
8. The method according to any of claims 2 to 7, wherein the attribute information includes a first target address of the target memory, and a second target address of a third memory block in which the target memory is located;
the releasing the target memory based on the attribute information includes:
if the second target address is the first address, releasing the target memory by adopting a second thread based on the first target address;
and if the second target address is not the first address, releasing the target memory by adopting a third thread based on the first target address and the second target address.
9. The method of claim 8, wherein after the releasing the target memory using the third thread, the method further comprises:
when the first thread and the third thread are not the same thread and the first thread is closed, defining a public memory node included in the third memory block as a private memory node of the third memory block;
and releasing the third memory block by using the third thread when the number of the allocated memory nodes of the third memory block is the first number.
10. The method of claim 9, wherein if the expected memory capacity is greater than a second memory capacity, determining the second memory pool to be a third memory pool, the second memory capacity being less than the first memory capacity;
if the expected memory capacity is smaller than or equal to a second memory capacity, determining that the second memory pool is a fourth memory pool, wherein the capacity of the memory nodes included in the third memory pool is larger than the capacity of the memory nodes included in the fourth memory pool.
11. A memory allocation apparatus, applied to an electronic device, the apparatus comprising:
a first receiving unit, configured to receive a memory allocation request, where the memory allocation request is used to request that a memory is allocated to a first thread, and the memory allocation request carries an expected memory capacity;
an allocating unit, configured to allocate a target memory for the first thread based on the expected memory capacity;
the recording unit is used for recording the attribute information of the target memory;
a second receiving unit, configured to receive a memory release request, where the memory release request is used to release the target memory;
and the releasing unit is used for releasing the target memory based on the attribute information.
12. An electronic device, characterized in that the electronic device comprises a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method according to any of claims 1-10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, wherein the computer program is executed by a processor to perform the method according to any of the claims 1-10.
CN202011004107.5A 2020-09-22 2020-09-22 Memory allocation method and related equipment Active CN112214313B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011004107.5A CN112214313B (en) 2020-09-22 2020-09-22 Memory allocation method and related equipment
PCT/CN2021/114967 WO2022062833A1 (en) 2020-09-22 2021-08-27 Memory allocation method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011004107.5A CN112214313B (en) 2020-09-22 2020-09-22 Memory allocation method and related equipment

Publications (2)

Publication Number Publication Date
CN112214313A true CN112214313A (en) 2021-01-12
CN112214313B CN112214313B (en) 2024-09-27

Family

ID=74050091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011004107.5A Active CN112214313B (en) 2020-09-22 2020-09-22 Memory allocation method and related equipment

Country Status (2)

Country Link
CN (1) CN112214313B (en)
WO (1) WO2022062833A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113032156A (en) * 2021-05-25 2021-06-25 北京金山云网络技术有限公司 Memory allocation method and device, electronic equipment and storage medium
WO2022062833A1 (en) * 2020-09-22 2022-03-31 深圳云天励飞技术股份有限公司 Memory allocation method and related device
CN118069071A (en) * 2024-04-19 2024-05-24 苏州元脑智能科技有限公司 Resource access control method, device, computer equipment and storage medium

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN116627359B (en) * 2023-07-24 2023-11-14 成都佰维存储科技有限公司 Memory management method and device, readable storage medium and electronic equipment
CN117130949B (en) * 2023-08-28 2024-05-10 零束科技有限公司 Memory management method, device, electronic equipment and storage medium
CN117573364A (en) * 2023-11-30 2024-02-20 赛力斯汽车有限公司 Memory pool management method, device and storage medium
CN117591293B (en) * 2023-12-01 2024-07-19 深圳计算科学研究院 Memory management method, memory management device, computer equipment and computer readable storage medium
CN117519988B (en) * 2023-12-28 2024-03-19 苏州元脑智能科技有限公司 RAID-based memory pool dynamic allocation method and device

Citations (16)

Publication number Priority date Publication date Assignee Title
CN1635482A (en) * 2003-12-29 2005-07-06 北京中视联数字系统有限公司 A memory management method for embedded system
WO2007109920A1 (en) * 2006-03-27 2007-10-04 Zte Corporation A method for constructing and using a memory pool
CN102915276A (en) * 2012-09-25 2013-02-06 武汉邮电科学研究院 Memory control method for embedded systems
CN102968378A (en) * 2012-10-23 2013-03-13 深圳市融创天下科技股份有限公司 Method, device and system for allocating and releasing memory
CN103077126A (en) * 2012-12-24 2013-05-01 中兴通讯股份有限公司 Memory management method and device
CN103399825A (en) * 2013-08-05 2013-11-20 武汉邮电科学研究院 Unlocked memory application releasing method
CN103425592A (en) * 2013-08-05 2013-12-04 大唐移动通信设备有限公司 Memory management method and device for multiprocess system
CN107133182A (en) * 2016-02-29 2017-09-05 北大方正集团有限公司 A kind of EMS memory management process and device
CN107153618A (en) * 2016-03-02 2017-09-12 阿里巴巴集团控股有限公司 A kind of processing method and processing device of Memory Allocation
CN107766153A (en) * 2017-10-17 2018-03-06 华为技术有限公司 A kind of EMS memory management process and device
CN108595259A (en) * 2017-03-16 2018-09-28 哈尔滨英赛克信息技术有限公司 A kind of internal memory pool managing method based on global administration
CN110245091A (en) * 2018-10-29 2019-09-17 浙江大华技术股份有限公司 A kind of method, apparatus and computer storage medium of memory management
CN111078408A (en) * 2019-12-10 2020-04-28 Oppo(重庆)智能科技有限公司 Memory allocation method and device, storage medium and electronic equipment
CN111090521A (en) * 2019-12-10 2020-05-01 Oppo(重庆)智能科技有限公司 Memory allocation method and device, storage medium and electronic equipment
CN111324461A (en) * 2020-02-20 2020-06-23 西安芯瞳半导体技术有限公司 Memory allocation method and device, computer equipment and storage medium
CN111367671A (en) * 2020-03-03 2020-07-03 深信服科技股份有限公司 Memory allocation method, device, equipment and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104375899B (en) * 2014-11-21 2016-03-30 北京应用物理与计算数学研究所 NUMA-aware thread and memory resource optimization method and system for high-performance computers
CN112214313B (en) * 2020-09-22 2024-09-27 深圳云天励飞技术股份有限公司 Memory allocation method and related equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Love (USA): "Linux System Programming", vol. 2009, 31 July 2009, Southeast University Press, pages 273-278 *
Song Yaqin; Guo Zhichuan: "Research on the Nginx Slab Algorithm", Network New Media Technology, no. 02, 15 March 2018 (2018-03-15) *
Xu Yandong; Hua Bei: "GPU-Oriented Memory Management and Applications", Electronic Technology, no. 07, 25 July 2017 (2017-07-25) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022062833A1 (en) * 2020-09-22 2022-03-31 深圳云天励飞技术股份有限公司 Memory allocation method and related device
CN113032156A (en) * 2021-05-25 2021-06-25 北京金山云网络技术有限公司 Memory allocation method and device, electronic equipment and storage medium
CN118069071A (en) * 2024-04-19 2024-05-24 苏州元脑智能科技有限公司 Resource access control method, device, computer equipment and storage medium
CN118069071B (en) * 2024-04-19 2024-08-13 苏州元脑智能科技有限公司 Resource access control method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2022062833A1 (en) 2022-03-31
CN112214313B (en) 2024-09-27

Similar Documents

Publication Publication Date Title
CN112214313B (en) Memory allocation method and related equipment
CN107872489B (en) File slice uploading method and device and cloud storage system
CN103607428B (en) Method and apparatus for accessing shared memory
CN111309644B (en) Memory allocation method and device and computer readable storage medium
CN105446813A (en) Resource distribution method and device
CN111597040B (en) Resource allocation method, device, storage medium and electronic equipment
CN111338779B (en) Resource allocation method, device, computer equipment and storage medium
CN111857992B (en) Method and device for allocating linear resources in Radosgw module
CN114385370B (en) Memory allocation method, system, device and medium
CN113032156B (en) Memory allocation method and device, electronic equipment and storage medium
CN115964319A (en) Data processing method for remote direct memory access and related product
CN110750336A (en) Memory hot-expansion method for OpenStack virtual machines
CN114168490A (en) Method for determining memory recovery threshold and related equipment
CN112612623A (en) Method and equipment for managing shared memory
CN113849260A (en) Instance processing core allocation method and device
CN115002046A (en) Message processing method, NUMA node, electronic device and storage medium
CN111722908B (en) Virtual machine creating method, system, equipment and medium
CN108139969B (en) Memory configuration method, device and system
CN115914236B (en) Storage space allocation adjustment method and device, electronic equipment and storage medium
CN111625358A (en) Resource allocation method and device, electronic equipment and storage medium
CN112346848A (en) Method, device and terminal for managing memory pool
CN114327862B (en) Memory allocation method and device, electronic equipment and storage medium
CN115658295A (en) Resource scheduling method and device, electronic equipment and storage medium
CN108073453B (en) Method and device for scheduling CPU (central processing unit) resources in a distributed cluster
CN115794396A (en) Resource allocation method, system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant