CN110858162A - Memory management method and device and server

Info

Publication number
CN110858162A
Authority
CN
China
Prior art keywords
memory
memory space
processor
data
space
Prior art date
Legal status
Granted
Application number
CN201810975221.9A
Other languages
Chinese (zh)
Other versions
CN110858162B (en)
Inventor
张占忠
王霖
王蘅
汪泽成
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201810975221.9A priority Critical patent/CN110858162B/en
Publication of CN110858162A publication Critical patent/CN110858162A/en
Application granted granted Critical
Publication of CN110858162B publication Critical patent/CN110858162B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F 12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G06F 9/5022 Mechanisms to release resources
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a memory management method, a memory management device and a server, and belongs to the technical field of memories. The method includes the following steps: counting the total shared memory capacity required by a plurality of processes running on a processor; allocating a shared memory from the memory according to the required total capacity, where the capacity of the shared memory is greater than or equal to the total shared memory capacity required by the plurality of processes; acquiring a memory allocation request triggered by a first process of the plurality of processes; and allocating, in response to the memory allocation request, a memory space for the first process from the shared memory. This addresses the problem that the processor otherwise spends a long time allocating shared memory for each process from the memory, which makes communication between processes inefficient: the time spent allocating memory to processes is reduced, and the communication efficiency between processes is improved.

Description

Memory management method and device and server
Technical Field
The present application relates to the field of memory technologies, and in particular, to a memory management method and apparatus, and a server.
Background
With the continuous development of memory technology, the number of processes running on the processor of the same server is increasing. Multiple processes running on the same processor may communicate through a shared memory.
For example, the server includes a processor and a memory. The processor may obtain a memory allocation request triggered by each process and allocate a shared memory in the memory to the process according to the request, so that the process can read from and write to the shared memory. After one process writes data into the shared memory and another process reads that data, the two processes are said to communicate through the shared memory.
However, before each process can read from or write to the shared memory, the processor has to allocate the shared memory to that process from the memory, which takes a long time. As a result, the read/write efficiency of each process on the shared memory is low, and so is the communication efficiency between processes.
Disclosure of Invention
The application provides a memory management method, a memory management device and a server, which can solve the problem that a processor spends a long time allocating shared memory for each process from the memory, making communication between processes inefficient. The technical scheme is as follows:
In a first aspect, a memory management method is provided. The method is applied to a processor in a server that further includes a memory, and the method includes: counting the total shared memory capacity required by a plurality of processes running on the processor; allocating a shared memory from the memory according to the required total shared memory capacity, where the capacity of the shared memory is greater than or equal to the total shared memory capacity required by the processes; acquiring a memory allocation request triggered by a first process of the plurality of processes; and allocating, in response to the memory allocation request, a memory space for the first process from the shared memory. Because the processor allocates one large block of shared memory from the memory in advance, it can hand out memory spaces within that shared memory to the processes, and the processes can then communicate with each other through those memory spaces. Since the memory space each process uses to communicate with other processes is carved out of the pre-allocated shared memory, the processor does not need to allocate shared memory from the memory separately for every process; the time needed to allocate memory to processes is reduced, the read/write efficiency of the processes on the shared memory is improved, and the communication efficiency between processes is improved.
Optionally, the shared memory includes a first set and a second set. Allocating a memory space for the first process from the shared memory includes: allocating the memory space for the first process from the first set. The method further includes: writing the data corresponding to the first process into the memory space; storing, in the second set, the address at which the data is written in the memory space; and when a second process of the plurality of processes triggers a read data request, reading the data according to the address in the second set and feeding the data back to the second process. That is, when the first process communicates with the second process, the processor only needs to write the data corresponding to the first process into the memory space and write the address at which the data is written into the second set; information is thus passed between the first process and the second process through the second set, and the data corresponding to the first process never needs to be copied, so inter-process communication is efficient.
Optionally, at least two idle linked lists are stored in the memory, where the at least two idle linked lists are used to manage allocable space in the first set at different granularities. Allocating memory space for the first process from the first set includes: selecting a target linked list from the at least two idle linked lists according to the memory capacity required by the first process; and allocating memory space for the first process from the allocable space managed by the target linked list. Because the idle linked lists in the memory manage the allocable space at different granularities, the idle linked list matching the memory capacity required by a process can be selected for allocation, so memory spaces of various granularities can be allocated to processes.
Optionally, the second set includes a plurality of subsets, where each subset corresponds to one process of the plurality of processes. Storing, in the second set, the address at which the data is written in the memory space includes: storing that address in the subset corresponding to the second process. That is, when the first process communicates with the second process, the processor writes the address at which the data corresponding to the first process is written into the subset corresponding to the second process, so that information is passed between the first process and the second process through that subset.
Optionally, the method further includes: receiving a first release instruction triggered by the first process, where the first release instruction is used to instruct release of the memory space occupied by the data; receiving a second release instruction triggered by the second process, where the second release instruction is used to instruct release of the memory space occupied by the data; and after both the first release instruction and the second release instruction are received, deleting the data and releasing the memory space. That is, after receiving each release instruction, the processor checks whether release instructions have been received from all processes using the memory space, and only releases the memory space once they all have. This prevents the data loss that would occur if the memory space were released while some processes are still using it and others have already asked for its release.
In a second aspect, a memory management device is provided, where the memory management device includes modules for executing the memory management method.
In a third aspect, a server is provided, which includes: the processor comprises the memory management device.
In a fourth aspect, a computer-readable storage medium is provided, in which instructions are stored. When the instructions run on a computer, the computer is caused to execute the above memory management method.
In a fifth aspect, a computer program product comprising instructions is provided, which when run on a computer, causes the computer to perform the memory management method described above.
Drawings
Fig. 1 is a schematic structural diagram of a server according to an embodiment of the present invention;
fig. 2 is a flowchart of a memory management method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a unit memory space according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an idle linked list according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for allocating a memory space for a first process according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a partition unit memory space according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating another partitioned unit memory space according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a first message and a second message according to an embodiment of the present invention;
fig. 9 is a flowchart of another memory management method according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a memory management device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another memory management device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a server according to an embodiment of the present invention, and as shown in fig. 1, the server 0 may include: a processor 01 and a memory 02, and a plurality of processes may run on the processor 01.
Optionally, a virtual machine and/or a container (container) may be deployed on the server, where the virtual machine includes a part of software resources of the server and a part of virtualized hardware resources, and the container includes a part of software resources of the server. The plurality of processes may include: processes local to the server (not belonging to the virtual machine nor to the container); when a container is deployed on the server, the plurality of processes further include: a process in a container; when a virtual machine is deployed on the server, the plurality of processes further include: a process in a virtual machine.
Optionally, with continued reference to fig. 1, the server 0 may further include: at least one network interface 03 and at least one bus 04, where the bus 04 is used to connect the processor, the network interface and the memory and carry communication among them, and the memory 02 and the network interface 03 are each connected to the processor 01 through the bus 04. The processor 01 is adapted to execute executable modules, such as computer programs, stored in the memory 02. The memory 02 may include a random access memory (RAM) and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the server and at least one further device is realized via the at least one network interface 03 (wired or wireless). In some embodiments, the memory 02 stores a management program A, which can be executed by the processor 01 to implement the memory management method provided by the embodiment of the present invention. The memory 02 also stores a plurality of service programs B; when a service program B runs on the processor 01, it forms one of the processes running on the processor 01.
The embodiment of the invention provides a memory management method for a processor. For example, the method may be applied to the processor 01 in the server shown in fig. 1, and as shown in fig. 2, the memory management method may include:
in step 201, the processor 01 counts a total shared memory capacity required by a plurality of processes running on the processor 01.
In the embodiment of the present invention, the processor 01 needs to count in advance the total shared memory capacity required for communication by a plurality of processes running on the processor 01 (that is, processes formed when the service programs B stored in the memory in fig. 1 run on the processor 01, which may be referred to as service processes). For example, the memory capacity required for any two processes to communicate may be equal to the input/output (I/O) depth of the two processes when communicating, and the total memory capacity required for the plurality of processes to communicate is then the sum of the I/O depths of the communicating processes.
In step 202, the processor 01 allocates a shared memory from the memory according to the required total shared memory capacity, where the shared memory capacity is greater than or equal to the total shared memory capacity required by the multiple processes.
After determining the total shared memory capacity required by the processes running on the processor 01, the processor 01 may allocate a shared memory from the memory of the server according to the total shared memory capacity, through which the processes in step 201 need to communicate, so that the capacity of the shared memory needs to be greater than or equal to the total shared memory capacity required by the processes.
For example, assuming that the total shared memory capacity required by the processes is 60 kilobytes (k), processor 01 may allocate a shared memory capacity of 70k from memory for the processes to communicate.
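The following is a minimal sketch of steps 201 and 202 under two assumptions that are not stated in the patent: the per-pair capacity is taken directly as an I/O depth expressed in bytes, and the pre-allocation is backed by POSIX shared memory (shm_open/mmap). The names pair_need_t, total_shared_capacity, preallocate_shared and "/demo_shared_pool" are all illustrative.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical record of how much shared memory one communicating pair of
 * processes needs; the patent uses the pair's I/O depth as this quantity. */
typedef struct {
    size_t io_depth_bytes;
} pair_need_t;

/* Step 201: sum the capacity required by all communicating pairs. */
static size_t total_shared_capacity(const pair_need_t *needs, size_t n)
{
    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += needs[i].io_depth_bytes;
    return total;
}

/* Step 202: reserve one shared region whose capacity is greater than or
 * equal to the total (here rounded up to a page multiple). */
static void *preallocate_shared(size_t total, size_t *out_len)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t len = ((total + (size_t)page - 1) / (size_t)page) * (size_t)page;

    int fd = shm_open("/demo_shared_pool", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)len) != 0) {
        close(fd);
        return NULL;
    }
    void *base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    *out_len = len;
    return base == MAP_FAILED ? NULL : base;
}
```

With the 60k example above, preallocate_shared would round the request up and hand back a region of at least 60k (for example 64k on a 4k-page system), which matches the requirement that the shared memory be no smaller than the counted total.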
In step 203, the processor 01 obtains a memory allocation request triggered by a first process of the multiple processes.
Optionally, the first process may be any process in the multiple processes, and the first process may also be another process other than the multiple processes, which is not limited in this embodiment of the present invention.
For example, when a first process receives a write request triggered by a user, the first process may trigger the memory allocation request, so that the processor allocates a memory space for the first process according to the memory allocation request, and then the first process may store data carried in the write request in the memory space. When the first process receives a read request triggered by a user, the first process may trigger the memory allocation request, so that the processor allocates a memory space for the first process according to the memory allocation request, and then the first process may temporarily store the read data in the memory space. When the first process needs to communicate with another process (e.g., a second process), the first process may also trigger the memory allocation request to trigger the processor 01 to allocate a memory space for the process according to the memory allocation request, and then the first process may communicate with the second process through the memory space. It should be noted that, in the embodiment of the present invention, only the above three scenarios in which the first process triggers the memory allocation request are taken as examples, in practical applications, the first process may also trigger the memory allocation request in other scenarios, which is not limited in the embodiment of the present invention.
In step 204, the processor 01 allocates a memory space for the first process from the shared memory in response to the memory allocation request.
Optionally, the shared memory allocated by the processor 01 from the memory may include a first set and a second set, where the first set is used for reading and writing data of the processes, and the second set is used for relaying communication between the processes. After capturing the memory allocation request triggered by the first process, the processor 01 may allocate a memory space for the first process from the first set.
Further, the memory may store at least two idle linked lists, where the at least two idle linked lists are used to manage allocable space in the first set at different granularities, and the processor may allocate memory space for the first process based on the idle linked lists. Optionally, the smallest of these particle sizes may be: 8 bytes, or other granularities (e.g., 16 bytes, 32 bytes, or 64 bytes, etc.), which are not limited in this embodiment of the present invention.
Optionally, the at least two idle linked lists include n levels of idle linked lists, which manage allocable spaces in the first set at different granularities; among the n levels, the granularity at which the i-th level idle linked list manages allocable space is twice the granularity at which the (i-1)-th level idle linked list does, where i is greater than 1 and less than or equal to n. Each idle linked list manages allocable space in the first set at one granularity, that is, it manages unit memory spaces of that granularity within the allocable space. Each idle linked list may include at least one node, and the idle linked list records the unit memory spaces it manages through its nodes (for example, by recording the addresses of the unit memory spaces). When the processor 01 has not yet allocated memory space to any process, the nth-level idle linked list includes a head node and a non-head node, while every other level includes only a head node. A head node contains only pointers and no data recording a unit memory space; a non-head node contains both pointers and the data recording a unit memory space. The unit memory spaces recorded by the non-head nodes of all the idle linked lists together form the allocable space in the first set.
For example, as shown in fig. 3 and fig. 4, assuming that the capacity of the first set is 64k, the idle linked lists in the memory may include four levels. The fourth-level idle linked list manages allocable space in the first set at a granularity of 64k, at which the first set may be divided into: unit memory space 1 (capacity 64k). The third-level idle linked list manages allocable space at a granularity of 32k, at which the first set may be divided into: unit memory space 2 and unit memory space 3 (each 32k). The second-level idle linked list manages allocable space at a granularity of 16k, at which the first set may be divided into: unit memory space 4, unit memory space 5, unit memory space 6 and unit memory space 7 (each 16k). The first-level idle linked list manages allocable space at a granularity of 8k, at which the first set may be divided into: unit memory space 8, unit memory space 9, unit memory space 10, unit memory space 11, unit memory space 12, unit memory space 13, unit memory space 14, unit memory space 15 and unit memory space 16 (each 8k). When the processor 01 has not yet allocated memory space to any process, the fourth-level idle linked list includes a head node and one non-head node recording the 64k unit memory space, while the idle linked lists of the other levels each include only a head node.
Optionally, each level of the idle linked list may be a bi-directional circular linked list, that is, each node in the idle linked list includes two pointers, where one pointer is used to indicate a node next to the node, and the other pointer is used to indicate a node previous to the node. The n-level idle linked lists may form a group of idle linked lists, that is, in the embodiment of the present invention, a group of idle linked lists is stored in the memory as an example, and optionally, a plurality of groups of such idle linked lists may also be stored in the memory, which is not limited in the embodiment of the present invention.
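A sketch of one plausible data layout for the multi-level circular doubly linked idle linked lists described above, with n = 4 as in fig. 4. The type and function names (free_node, free_list_group, list_push) and the choice of recording a unit memory space by its offset inside the first set are illustrative assumptions, not the patent's definitions.

```c
#include <stddef.h>

/* One node of a circular doubly linked idle linked list. The head node
 * records no unit memory space; every non-head node records exactly one
 * unit memory space by its offset inside the first set. */
typedef struct free_node {
    struct free_node *prev;
    struct free_node *next;
    size_t offset;     /* start of the recorded unit memory space */
    int    is_head;    /* 1 for the head node, 0 otherwise */
} free_node;

/* A group of n idle linked lists: level i manages granularity
 * min_granularity << i, i.e. each level doubles the previous one. */
typedef struct {
    free_node heads[4];        /* e.g. 8k, 16k, 32k, 64k as in fig. 4 */
    size_t    min_granularity; /* e.g. 8 * 1024 */
} free_list_group;

static void init_group(free_list_group *g, size_t min_gran)
{
    g->min_granularity = min_gran;
    for (int i = 0; i < 4; i++) {
        g->heads[i].prev = g->heads[i].next = &g->heads[i]; /* empty ring */
        g->heads[i].is_head = 1;
        g->heads[i].offset = 0;
    }
}

/* Insert a non-head node right after the head node (used whenever a unit
 * memory space becomes allocable again). */
static void list_push(free_node *head, free_node *node)
{
    node->is_head = 0;
    node->next = head->next;
    node->prev = head;
    head->next->prev = node;
    head->next = node;
}
```

After init_group, a single non-head node recording the whole 64k space would be pushed onto the top-level list, matching the initial state described above.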
When allocating a memory space for the first process from the first set based on the idle linked lists, the processor 01 may select a target linked list from at least two idle linked lists according to a memory capacity required by the first process. Processor 01 also needs to query the target linked list to determine whether the target linked list manages a unit memory space. When the target linked list has a unit memory space, the processor 01 needs to modify the target linked list and allocate a memory space for the first process from the unit memory space managed by the target linked list. When the target linked list does not manage a unit memory space, the processor 01 needs to divide or merge the unit memory spaces managed by the other levels of idle linked lists to obtain the unit memory space managed by the target linked list, and further allocates a memory space for the first process from the unit memory space. As shown in fig. 5, step 204 may include:
step 2041, the processor 01 selects the jth level idle linked list as the target linked list according to the memory capacity required by the first process, wherein the granularity of the allocable space managed by the jth level idle linked list is as follows: in the granularity of the allocable space managed by the first-level to nth-level idle linked lists, the granularity of the memory capacity required by the first process and the minimum difference value of the memory capacity is achieved, and j is more than or equal to 1 and less than or equal to n. Step 2042 is performed.
Optionally, the memory allocation request triggered by the first process may carry the memory capacity required by the first process, from which the processor 01 can determine that capacity. The processor 01 may then compute, for each level of idle linked list, the difference between the granularity of allocable space it manages and the memory capacity required by the first process, determine the granularity corresponding to the smallest non-negative difference as the target granularity, and determine the jth-level idle linked list that manages allocable space at the target granularity as the target linked list.
For example, with continued reference to fig. 3 and fig. 4, assuming that the memory capacity required by the first process is 16k, the processor 01 may compute the difference between each granularity of allocable space managed by the four idle linked lists (8k, 16k, 32k and 64k) and 16k: the difference between 8k and 16k is -8k, between 16k and 16k is 0k, between 32k and 16k is 16k, and between 64k and 16k is 48k. The processor 01 may therefore determine that the second-level idle linked list, which manages allocable space at a granularity of 16k, is the target linked list.
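A small helper corresponding to step 2041. It assumes the levels are 0-indexed (level 0 is the minimum granularity, so the patent's second-level 16k list is level 1 here) and that picking the smallest granularity greater than or equal to the request is equivalent to picking the smallest non-negative difference; the function name is illustrative.

```c
#include <stddef.h>

/* Among granularities g, 2g, 4g, ..., pick the level whose granularity has
 * the smallest non-negative difference from the requested capacity, i.e.
 * the smallest granularity that still fits the request. Returns -1 when
 * even the largest granularity is too small. */
static int pick_target_level(size_t min_gran, int n_levels, size_t requested)
{
    size_t gran = min_gran;
    for (int level = 0; level < n_levels; level++, gran *= 2) {
        if (gran >= requested)
            return level;   /* first non-negative difference is the minimum */
    }
    return -1;
}
```

For the example above, pick_target_level(8 * 1024, 4, 16 * 1024) returns 1, i.e. the 16k list, which is the patent's second-level idle linked list.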
Step 2042, the processor 01 queries the jth-level through nth-level idle linked lists until a non-empty linked list, that is, one that manages a unit memory space, is found. Step 2043 is performed.
After determining that the jth-level idle linked list is the target linked list, the processor 01 may query, starting from the jth-level idle linked list, whether each level of idle linked list from the jth level to the nth level is a non-empty linked list (a linked list that manages at least one unit memory space). If no non-empty linked list is found from the jth level to the nth level, the processor 01 may query each of these levels again, and it stops querying as soon as a non-empty linked list is found.
For example, continuing to refer to fig. 4, assuming that the target linked list is the second-level idle linked list, the processor 01 may query, starting from the second-level idle linked list, whether each of the second-level to fourth-level idle linked lists is a non-empty linked list. When the processor queries the fourth-level idle linked list, it can determine that neither the second-level nor the third-level idle linked list is a non-empty linked list, and that the fourth-level idle linked list manages a unit memory space with a capacity of 64k; the fourth-level idle linked list is therefore a non-empty linked list, and the query stops.
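A sketch of the query in step 2042, using a trimmed re-declaration of the node type from the earlier sketch (the is_head flag is not needed here): a level is non-empty exactly when its head node's next pointer does not point back at the head node itself. The function name is illustrative.

```c
#include <stddef.h>

/* Trimmed node type: circular doubly linked list of unit memory spaces. */
typedef struct free_node {
    struct free_node *prev, *next;
    size_t offset;
} free_node;

/* Step 2042: starting from the target level j, walk up towards the top
 * level and return the first non-empty list. Returns -1 when every list
 * from j upwards is empty, in which case the caller falls back to merging
 * partner spaces managed by the levels below j. */
static int find_nonempty_level(free_node *heads, int j, int n_levels)
{
    for (int level = j; level < n_levels; level++) {
        if (heads[level].next != &heads[level])   /* has a non-head node */
            return level;
    }
    return -1;
}
```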
Optionally, after querying the nth level idle linked list each time, if the non-empty linked list is not detected, the processor 01 may sequentially query whether each level of idle linked lists in the first to j-1 th level idle linked lists manages at least one group of partner spaces. When it is determined that the p-th idle linked list manages at least one group of partner spaces, the processor 01 removes the nodes used for recording the at least one group of partner spaces in the p-th idle linked list, and adds the nodes recording the unit memory space formed by each group of partner spaces in the p + 1-th idle linked list, wherein p is more than or equal to 1 and is less than j-1. Removing nodes in the idle linked list and adding nodes in the idle linked list by the processor 01 can be realized by modifying pointers in the idle linked list. When determining that at least one group of partner spaces are not managed in the first-level to (j-1) th-level idle linked lists, the processor 01 starts to query whether non-empty linked lists exist in the j-level to (n) th-level idle linked lists again.
For example, a first set in the shared memory may be divided into a plurality of unit memory spaces with continuous physical addresses and the same capacity according to the granularity of allocable spaces managed by each idle linked list, and when an x +1 th unit memory space and an x +2 th unit memory space in the unit memory spaces are both allocable spaces belonging to the first set, the x +1 th unit memory space and the x +2 th unit memory space are a set of partner spaces, where x is a non-negative even number. For example, assuming that the first-level idle linked list manages eight 8k unit memory spaces with consecutive physical addresses, and the eight unit memory spaces all belong to allocable spaces, the first and second unit memory spaces in the eight unit memory spaces are a set of partner spaces, the third and fourth unit memory spaces are a set of partner spaces, the fifth and sixth unit memory spaces are a set of partner spaces, and the seventh and eighth unit memory spaces are a set of partner spaces.
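A sketch of the partner-space ("buddy") test described in the previous paragraph, expressed with offsets relative to the start of the first set. Two free spaces of the same granularity are partners when they are adjacent and the lower one starts at an even multiple of the granularity; the merged space is recorded once at the next granularity up. The function names are illustrative.

```c
#include <stddef.h>

/* Two unit memory spaces of size `gran` form a group of partner spaces when
 * they are the (x+1)-th and (x+2)-th spaces of that granularity for some
 * non-negative even x, i.e. they are adjacent and the lower one starts at an
 * even multiple of `gran` within the first set. */
static int are_partners(size_t off_a, size_t off_b, size_t gran)
{
    size_t lo = off_a < off_b ? off_a : off_b;
    size_t hi = off_a < off_b ? off_b : off_a;
    return (hi == lo + gran) && ((lo / gran) % 2 == 0);
}

/* When a group of partner spaces is merged, the combined unit memory space
 * starts at the lower offset and has granularity 2 * gran; it is then
 * recorded by a single non-head node in the next-level idle linked list. */
static size_t merged_offset(size_t off_a, size_t off_b)
{
    return off_a < off_b ? off_a : off_b;
}
```

In the eight-space example above, are_partners would report the first and second 8k spaces as a group, but not the second and third, because the second space does not start at an even multiple of 8k.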
Optionally, in the embodiment of the present invention, when querying each idle linked list, the processor 01 may start querying from a head node in the idle linked list, that is, the processor 01 starts querying from the same position of the idle linked list each time, so that the idle linked list may be stored in a cache (cache) in the memory. And the efficiency of processing the files stored in the cache by the processor 01 is higher, so that the speed of querying the linked list by the processor 01 in the embodiment of the invention is higher. It should be noted that, when querying each idle linked list, the processor 01 may also always start querying from another node (e.g., the last node) of the idle linked list, which is not limited in this embodiment of the present invention.
Step 2043, processor 01 determines whether the queried non-empty linked list is the jth level idle linked list. If the non-empty linked list is the jth level idle linked list, executing step 2045; if the non-empty linked list is not the jth level idle linked list, then step 2044 is performed.
Step 2044, processor 01 divides any unit memory space managed by the non-empty linked list into unit memory spaces managed by the jth level idle linked list. Step 2045 is performed.
The processor 01 may remove, from the non-empty linked list, the node recording that unit memory space, divide the unit memory space, and then add, in the jth-level idle linked list and the idle linked lists of the other affected levels, nodes recording the unit memory spaces obtained by the division.
For example, as shown in fig. 4, assuming that the queried non-empty linked list is a fourth-level idle linked list and the jth-level idle linked list is a second-level idle linked list, as shown in fig. 6, the processor 01 may first divide a unit memory space 1 with a capacity of 64k managed by the fourth-level idle linked list to obtain unit memory spaces 2 and 3 with capacities of 32k, remove a non-head node recorded with the unit memory space 1 in the fourth-level idle linked list, and add two non-head nodes recorded with the unit memory spaces 2 and 3 in the third-level idle linked list. Then, as shown in fig. 7, the processor 01 may subdivide the unit memory space 2 into unit memory spaces 4 and 5 with capacities of 16k, remove the non-head node recorded in the unit memory space 2 in the third-level idle linked list, and add two non-head nodes recorded in the unit memory spaces 4 and 5 in the second-level idle linked list. Thus, the second-level idle linked list is managed with a unit memory space.
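A compact sketch of the splitting in step 2044, working on offsets only and using 0-indexed levels (level 0 is the minimum granularity, so the example's fourth-level 64k space is level 3 and the target 16k level is level 1). `record_free` is a hypothetical callback that adds a non-head node to the given level's idle linked list. The patent's description first records both halves and then removes one; keeping the lower half implicitly, as done here, is an equivalent shortcut and is noted as such.

```c
#include <stddef.h>

/* Split a unit memory space of granularity (min_gran << from) down to a
 * space of granularity (min_gran << to), with from > to. At each halving
 * the upper half is handed back to the free list one level below via
 * `record_free`; the lower half keeps being split. Returns the offset of
 * the resulting unit memory space at level `to`. */
static size_t split_down(size_t offset, int from, int to, size_t min_gran,
                         void (*record_free)(int level, size_t offset))
{
    size_t gran = min_gran << from;             /* size at level `from` */
    for (int level = from; level > to; level--) {
        gran /= 2;                              /* size of each half */
        record_free(level - 1, offset + gran);  /* upper half stays free */
        /* the lower half, still starting at `offset`, is split further */
    }
    return offset;   /* start of the unit memory space at level `to` */
}
```

Running split_down(0, 3, 1, 8 * 1024, record_free) reproduces the example: a 32k half is recorded for the third-level list, a 16k half for the second-level list, and the remaining 16k space starting at offset 0 is the one handed to the first process.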
Step 2045, the processor 01 allocates any unit memory space managed by the jth level idle linked list to the first process.
When allocating a unit memory space managed by the jth-level idle linked list to the first process, the processor also removes, from the jth-level idle linked list, the node recording that unit memory space.
In addition, after allocating the shared memory, the processor 01 may further maintain state information for each of the plurality of unit memory spaces (for example, the unit memory spaces shown in fig. 3) obtained by dividing the first set according to the granularities of allocable space managed by the levels of idle linked lists. The state information includes: first information indicating whether the unit memory space has been divided, second information indicating whether the unit memory space has been allocated to a process, and third information identifying the idle linked list that manages the unit memory space when it belongs to the allocable space; when the unit memory space is allocated to a process, the state information further includes fourth information indicating the process to which it is allocated. Whenever the state of a unit memory space changes, the processor 01 adjusts its state information. After step 2045, the processor 01 may accordingly modify the second information and the fourth information of the allocated unit memory space.
Alternatively, the first information and the second information of the unit memory space may be recorded in a bitmap manner, as shown in fig. 8, in the 16 unit memory spaces shown in fig. 3, each of the first information and the second information of each unit memory space may occupy one binary bit, so that 3 bytes are required in total to record the first information and the second information of the 16 unit memory spaces.
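A sketch of bitmap bookkeeping for the first and second information, one bit per unit memory space per flag. The exact packing shown in fig. 8 (3 bytes for the 16 spaces) is not reproduced here; this simple two-bitmap layout and the helper names are illustrative only.

```c
#include <stdint.h>

#define NUM_UNIT_SPACES 16   /* as in fig. 3; illustrative */

/* first information: has the unit memory space been divided? */
static uint8_t divided_bits[(NUM_UNIT_SPACES + 7) / 8];
/* second information: has the unit memory space been allocated? */
static uint8_t allocated_bits[(NUM_UNIT_SPACES + 7) / 8];

static void bitmap_set(uint8_t *bm, unsigned idx, int value)
{
    if (value)
        bm[idx / 8] |= (uint8_t)(1u << (idx % 8));
    else
        bm[idx / 8] &= (uint8_t)~(1u << (idx % 8));
}

static int bitmap_get(const uint8_t *bm, unsigned idx)
{
    return (bm[idx / 8] >> (idx % 8)) & 1;
}
```

For example, after step 2045 allocates unit memory space 4, bitmap_set(allocated_bits, 4, 1) records the second information, and the spaces it was split from get their bits set in divided_bits.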
In summary, in step 204, the processor 01 may perform the queries and modifications of steps 2041 to 2045 on at least one idle linked list according to the memory allocation request triggered by the first process, thereby allocating a memory space in the shared memory to the first process.
Further, to ensure mutual exclusion among multiple processes, when the processor 01 queries or modifies a given idle linked list according to the memory allocation request triggered by the first process, it may lock that idle linked list, so that the idle linked list can only be operated on by the processor 01 on behalf of the memory allocation request triggered by the first process and not on behalf of memory allocation requests triggered by other processes. After the processor 01 completes these operations on the idle linked list for the memory allocation request triggered by the first process, it may unlock the idle linked list, so that the idle linked list again allows the processor 01 to operate on it according to a memory allocation request triggered by any process (for example, the first process or a process other than the first process).
For example, the memory may store lock information for each idle linked list. When the processor 01 is not operating on an idle linked list, the lock information of that idle linked list contains an unlocked state value (for example, 0). When the processor 01 performs at least one operation on the idle linked list according to a memory allocation request triggered by a certain process, if the lock information of the idle linked list contains the unlocked state value, the processor 01 changes the unlocked state value to a locked state value (for example, 1) and records the identifier of the process in the lock information. After the operation on the linked list is finished, the processor 01 restores the locked state value in the lock information to the unlocked state value and deletes the identifier of the process from the lock information.
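A sketch of the lock information described above, assuming the state value is a C11 atomic so the unlocked-to-locked transition is a single compare-and-swap; the type and field names (list_lock, owner_pid) are illustrative.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Lock information for one idle linked list: a state value (0 = unlocked,
 * 1 = locked) plus the identifier of the process whose request is being
 * served while the list is locked. */
typedef struct {
    atomic_int state;      /* 0: unlocked state value, 1: locked state value */
    int        owner_pid;  /* valid only while state == 1 */
} list_lock;

static bool lock_list(list_lock *l, int pid)
{
    int expected = 0;
    /* change the unlocked state value to the locked state value ... */
    if (atomic_compare_exchange_strong(&l->state, &expected, 1)) {
        l->owner_pid = pid;   /* ... and record the triggering process */
        return true;
    }
    return false;             /* another request currently owns the list */
}

static void unlock_list(list_lock *l)
{
    l->owner_pid = -1;
    atomic_store(&l->state, 0);   /* restore the unlocked state value */
}
```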
In step 205, the processor 01 writes the data corresponding to the first process into the memory space allocated to the first process.
It should be noted that, after the processor 01 allocates the shared memory from the memory, it needs to send the mapping relationship between the physical address and the logical address in the shared memory to each process. When allocating the memory space for the first process, the processor 01 may feed back the physical address of the memory space allocated to the first process.
After the processor 01 allocates the memory space to the first process, the first process may convert the physical address of the allocated memory space into the logical address according to the mapping relationship between the physical address and the logical address in the shared memory. Thereafter, the first process may send both the data to be written and the logical address to processor 01. The processor 01 may convert the logical address sent by the first process into a physical address, and write data that needs to be written by the first process (i.e., data corresponding to the first process) into a memory space corresponding to the physical address (i.e., a memory space allocated to the first process).
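The patent only states that a mapping between physical and logical addresses of the shared memory is sent to each process; modeling that mapping as a fixed base address in each address space, so every location is identified by its offset from the start of the shared memory, is an assumption of this sketch, and the names are illustrative.

```c
#include <stdint.h>

/* Per-process view of the shared memory: the same region starts at
 * physical_base for the processor and at logical_base inside the process. */
typedef struct {
    uintptr_t physical_base;
    uintptr_t logical_base;
} addr_mapping;

static uintptr_t to_logical(const addr_mapping *m, uintptr_t physical)
{
    return m->logical_base + (physical - m->physical_base);
}

static uintptr_t to_physical(const addr_mapping *m, uintptr_t logical)
{
    return m->physical_base + (logical - m->logical_base);
}
```

Under this model, the first process applies to_logical to the fed-back address before using it, and the processor applies to_physical to the address the process sends along with the data to be written.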
In step 206, the processor 01 stores, in the second set, the address at which the data corresponding to the first process is written in the memory space.
For example, the second set may include a plurality of subsets, where each subset corresponds to each process in the plurality of processes, that is, the plurality of subsets correspond to the plurality of processes one to one.
In step 206, the processor 01 may store the address (physical address or logical address) at which the data is written in the memory space allocated to the first process, in the subset corresponding to the second process with which the first process needs to communicate, so that the second process can later read the data according to that address. Optionally, each subset may store successively written addresses as a queue, ensuring that the addresses are later read in the order in which they were written and that the data at those addresses is retrieved in that order.
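A sketch of one subset of the second set as a fixed-size queue of addresses (offsets into the shared memory), preserving write order. The depth, the names, and the single-writer/single-reader simplification (no locking) are assumptions; a real implementation serving steps 206 and 209 concurrently would need synchronization.

```c
#include <stdbool.h>
#include <stddef.h>

#define SUBSET_DEPTH 64   /* illustrative queue depth */

/* One subset of the second set: addresses written for the owning process,
 * read back in write order. Assumed zero-initialized. */
typedef struct {
    size_t addr[SUBSET_DEPTH];
    size_t head;   /* next slot to read  */
    size_t tail;   /* next slot to write */
} addr_queue;

/* Step 206: store the address at which the first process's data was
 * written, into the subset of the target (second) process. */
static bool queue_push(addr_queue *q, size_t address)
{
    if (q->tail - q->head == SUBSET_DEPTH)
        return false;                       /* subset full */
    q->addr[q->tail % SUBSET_DEPTH] = address;
    q->tail++;
    return true;
}

/* Step 209: take addresses back out in the order they were written. */
static bool queue_pop(addr_queue *q, size_t *address)
{
    if (q->head == q->tail)
        return false;                       /* nothing pending */
    *address = q->addr[q->head % SUBSET_DEPTH];
    q->head++;
    return true;
}
```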
In step 207, the processor 01 feeds back a write completion indication to the first process.
The processor 01 may feed back a write completion indication to the first process after writing the data corresponding to the first process into the memory space and writing the address into which the data is written into the subset corresponding to the second process.
In step 208, the processor 01 receives a first release instruction triggered by the first process, where the first release instruction is used to instruct to release the memory space occupied by the data.
It should be noted that, after receiving the write completion indication fed back by the processor 01, if the first process does not need to communicate with the second process, the first process may trigger the first release indication to indicate that the first process does not need to use the memory space at present, and instruct the processor 01 to release the memory space occupied by the data corresponding to the first process.
Step 209, when a second process of the multiple processes triggers a read data request, the processor 01 reads data according to the address in the second set and feeds back the data to the second process.
Optionally, each process (for example, the second process) may trigger a read data request every preset time period, or after the processor 01 stores an address in the subset corresponding to each process, the processor 01 may send an instruction to the process, so that the process triggers a read data request, which is not limited in the embodiment of the present invention.
When the processor 01 captures a read data request triggered by the second process, the processor 01 needs to search a subset corresponding to the second process in the second set, read data stored in an address in the subset, and feed back the read data to the second process. Because the processor 01 writes the data corresponding to the first process into the memory space allocated to the first process, and then writes the address of the data written into the memory space into the subset corresponding to the second process, in step 209, the processor 01 may read and feed back the data corresponding to the first process to the second process according to the address of the subset corresponding to the second process, thereby implementing that the first process sends the data to the second process through the shared memory, and implementing communication between the first process and the second process.
In step 210, the processor 01 receives a second release instruction triggered by the second process, where the second release instruction is used to instruct to release the memory space occupied by the data corresponding to the first process.
After the processor 01 feeds back the data corresponding to the first process to the second process, if the second process does not need to communicate with the first process, the second release indication may be triggered to indicate that the processor 01 does not need to use the memory space allocated to the first process at present, and indicate to release the memory space occupied by the data.
It should be noted that the embodiment of the present invention is described by taking as an example the case where the first process no longer needs to communicate with the second process after the processor 01 feeds back the write completion indication, so the first process triggers the first release indication at that point, and where the second process triggers the second release indication after the processor 01 feeds the data corresponding to the first process back to it.
Optionally, after the processor 01 feeds back the data corresponding to the first process to the second process, the second process may further trigger a data write request to the processor 01 according to the data, so that the processor 01 writes the data corresponding to the second process in the memory space allocated to the first process according to the data write request, and writes the address where the data corresponding to the second process is written in the subset corresponding to the first process. Then, when receiving a data reading request triggered by the first process, the processor 01 may read data according to an address in the subset corresponding to the first process and feed back the data to the first process, thereby implementing the second communication between the first process and the second process through the shared memory. The first process and the second process may also perform more communications through the shared memory, which is not limited in the embodiment of the present invention. The first release indication may be triggered once the first process determines that communication with the second process is not required, and the second release indication may be triggered once the second process determines that communication with the first process is not required.
In step 211, after receiving the first release instruction and the second release instruction, the processor 01 deletes the data corresponding to the first process and releases the memory space allocated to the first process.
After receiving each release instruction, the processor 01 needs to detect whether release instructions sent by all processes using the memory space are received, and can release the memory space after receiving the release instructions, so as to prevent data loss caused by releasing the memory space when some processes need to release a certain memory space and some processes are using the memory space.
For example, after receiving a first release instruction triggered by a first process and a second release instruction triggered by a second process, the processor 01 may determine that none of the processes currently using the memory space allocated to the first process need to use the memory space, may delete data corresponding to the first process at this time, and release the memory space to change the memory space into an allocable space in the first set, so that the memory space may be allocated to other processes. When releasing the memory space allocated to the first process, the processor 01 may determine, according to the third information of the memory space, an idle linked list used for managing the memory space when the memory space belongs to the allocable space, and add a non-head node in which the memory space is recorded in the idle linked list.
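A sketch of the bookkeeping behind step 211: each allocated memory space carries a count of outstanding release indications (two in this example, one from the writer and one from the reader), and the space is only returned to its idle linked list, identified by the third information, when the count reaches zero. The structure name, field names and the `return_to_free_list` callback are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    size_t offset;          /* start of the memory space in the first set */
    int    pending_release; /* release indications not yet received       */
    int    free_list_level; /* from the third information: where to return */
} space_ref;

/* Called once per release indication (first from the first process, then
 * from the second process). Returns true when the memory space was
 * actually released. */
static bool on_release_indication(space_ref *s,
                                  void (*return_to_free_list)(size_t, int))
{
    if (s->pending_release > 0)
        s->pending_release--;
    if (s->pending_release == 0) {
        /* all users are done: delete the data and release the space by
         * adding a non-head node back to the recorded idle linked list */
        return_to_free_list(s->offset, s->free_list_level);
        return true;
    }
    return false;   /* some process is still using the memory space */
}
```

Initializing pending_release to the number of processes sharing the space is what prevents the early-release data loss described above.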
In summary, in the embodiment of the present invention, when the first process communicates with the second process, the processor 01 only needs to write the data corresponding to the first process into the memory space, and write the address into which the data is written into the subset corresponding to the second process, so that information is transferred between the first process and the second process through the subset, and the data corresponding to the first process does not need to be copied in the whole process, so that the efficiency of inter-process communication is high.
Optionally, in the embodiment of the present invention, the processor 01 may further monitor whether the process crashes in real time, perform rollback on an operation executed by the crashed process, and recycle a memory space allocated to the crashed process, so as to prevent memory leakage. As shown in fig. 9, the memory management method may further include:
step 301, processor 01 detects whether the first process crashes. If the first process is detected to be crashed, go to step 302; if the first process crash is not detected, step 301 is executed.
Processor 01 may detect in real time (or at intervals) whether the first process crashes.
Step 302, processor 01 determines whether processor 01 is modifying any idle linked list according to the trigger of the first process. If determining that the processor 01 is modifying any idle linked list according to the trigger of the first process, executing step 303; if it is determined that processor 01 is not modifying any of the free linked lists as triggered by the first process, step 304 is performed.
For example, the modifying of the idle linked list by the processor 01 may include: adding non-head nodes in the idle linked list or removing the non-head nodes from the idle linked list.
Step 303, the processor 01 rolls back the idle linked list in question to its state before the modification. Step 304 is performed.
If it is determined that a certain idle linked list is being modified according to the trigger of the first process, the processor 01 may roll back the idle linked list to the state before the modification occurred, by undoing the modifications already performed on it.
For example, suppose that when the first process crashes, the processor 01 is adding a node to a certain idle linked list according to the trigger of the first process. When the idle linked list only includes the head node, processor 01 adds a non-head node in the idle linked list and needs to modify four pointers, where the four pointers are two pointers in the head node and two pointers in the added non-head node respectively. When the idle linked list includes not only the head node but also the non-head node, the processor 01 adds a non-head node in the idle linked list and needs to modify six pointers, wherein the six pointers are two pointers in the head node, two pointers in the last non-head node, and two pointers in an added non-head node. In step 303, the processor 01 needs to roll back all the pointers modified according to the trigger of the first process (the modified pointers in the four pointers described above or the modified pointers in the six pointers described above) to the state before modification, so that the free linked list is rolled back to the state before modification.
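The patent does not prescribe how the processor remembers the pre-modification pointer values; one way to make the rollback of step 303 possible is to journal each pointer before it is changed, as sketched below. All names are illustrative, and the limit of six entries simply reflects the worst case described above.

```c
#define MAX_JOURNAL 6   /* at most six pointers change when inserting a node */

typedef struct free_node free_node;   /* as sketched earlier; pointers only */

typedef struct {
    free_node **slot;       /* which pointer field was modified */
    free_node  *old_value;  /* its value before the modification */
} pointer_entry;

typedef struct {
    int           owner_pid;   /* process on whose behalf we are modifying */
    int           count;
    pointer_entry entries[MAX_JOURNAL];
} undo_log;

/* Record the old value of a pointer, then overwrite it. */
static void journal_and_set(undo_log *log, free_node **slot, free_node *new_value)
{
    log->entries[log->count].slot = slot;
    log->entries[log->count].old_value = *slot;
    log->count++;
    *slot = new_value;
}

/* Step 303: if the owning process crashed mid-modification, restore every
 * journaled pointer, newest first, returning the list to its prior state. */
static void rollback(undo_log *log)
{
    for (int i = log->count - 1; i >= 0; i--)
        *log->entries[i].slot = log->entries[i].old_value;
    log->count = 0;
}
```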
In step 304, the processor 01 determines whether a target memory space allocated to the first process exists in the first set. If the target memory space exists in the first set, step 305 is executed. If the target memory space does not exist in the first set, step 301 is executed.
The processor 01 records state information of a plurality of unit memory spaces in the memory, wherein the state information includes: first information indicating whether the unit memory space is divided, second information indicating whether the unit memory space is allocated to the process, and third information for managing an idle linked list of the unit memory space, and when the unit memory space is allocated to the process, the state information further includes: fourth information indicating a process to which the unit memory space is allocated. And when the state of each unit memory space changes, the processor 01 needs to adjust its state information. In step 304, the processor 01 may further determine whether at least one unit memory space not partitioned exists in the first set according to the recorded first information. When the at least one unit memory space exists, the processor 01 determines whether a target memory space allocated to the first process exists in the unit memory spaces according to the fourth information of the at least one unit memory space.
In step 305, the processor 01 deletes the data in the target memory space and releases the target memory space.
When releasing the target memory space, the processor 01 may determine, according to the recorded third information of the target memory space, an idle linked list for managing the target memory space when the target memory space belongs to the allocable space, and add a non-head node in which the target memory space is recorded in the idle linked list.
In summary, in the memory management method provided in the embodiments of the present invention, the processor allocates a large block of shared memory from the memory in advance, and then the processor can allocate the memory space in the shared memory to the processes, so that the processes can communicate with each other through the memory space. Because the memory space used by each process in communication with other processes is obtained by allocating the memory space from the pre-allocated shared memory, the processor does not need to allocate the shared memory to each process from the memory, the time required by allocating the memory to the processes is reduced, the read-write efficiency of the processes to the shared memory is improved, and the communication efficiency between the processes is improved.
It should be noted that the plurality of processes communicating in the embodiment of the present invention may include processes local to the server and, optionally, at least one of processes in containers and processes in virtual machines. Generally, the memory regions that these three kinds of processes can access are independent of one another, so in the related art no two of the three kinds of processes can communicate with each other. To enable any two of the three kinds of processes to communicate, the related art partitions from the memory a region (in kernel mode) that any two of them can access, but once that region has a problem, the stability of the entire memory is affected. In the embodiment of the present invention, the processor allocates the shared memory from the memory in advance, so the shared memory has already been carved out of the memory (and is in user mode) and can be accessed by all three kinds of processes, enabling communication between any two of them. Moreover, because the shared memory has already been allocated out of the memory, a problem in the shared memory does not affect the stability of the remaining portion of the memory.
In addition, in the embodiment of the present invention the granularities at which the idle linked lists manage allocable space can be set freely. For example, the memory capacity required by most processes can be counted and used as the minimum granularity of the idle linked lists, so that a unit memory space suitable for allocation to a process can be found quickly in the shared memory, further improving the efficiency of communication among the processes.
Fig. 10 is a schematic structural diagram of a memory management device according to an embodiment of the present invention, where the memory management device may be applied to the processor 01 in the server in fig. 1, and as shown in fig. 10, the memory management device 10 may include:
a counting module 101, configured to count a total shared memory capacity required by multiple processes running on a processor;
a first allocation module 102, configured to allocate a shared memory from the memory according to the required total shared memory capacity, where the capacity of the shared memory is greater than or equal to the total shared memory capacity required by the multiple processes;
an obtaining module 103, configured to obtain a memory allocation request triggered by a first process in a plurality of processes;
the second allocating module 104 is configured to allocate a memory space for the first process from the shared memory in response to the memory allocation request.
In summary, in the memory management device provided in the embodiments of the present invention, the first allocation module allocates a large block of shared memory from the memory in advance, and the second allocation module can then allocate memory space in that shared memory to the processes, so that the processes can communicate with each other through the memory space. Because the memory space used by each process to communicate with other processes is obtained from the pre-allocated shared memory, the processor does not need to allocate a separate piece of shared memory from the memory for every process. This reduces the time required to allocate memory to the processes, improves the efficiency with which the processes read and write the shared memory, and thus improves the efficiency of communication between the processes.
Optionally, the shared memory includes a first set and a second set, and the second allocation module 104 may include an allocation submodule configured to allocate memory space for the first process from the first set. Fig. 11 is a schematic structural diagram of another memory management device according to an embodiment of the present invention. As shown in Fig. 11, on the basis of Fig. 10, the memory management device may further include:
a write-in module 105, configured to write the data corresponding to the first process into the memory space;
a saving module 106, configured to store, in the second set, the address at which the data is written in the memory space;
and a reading module 107, configured to, when a second process of the multiple processes triggers a read-data request, read the data according to the address in the second set and feed the data back to the second process.
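The cooperation of the write-in, saving, and reading modules can be illustrated with the following C sketch. It is not the patent's implementation: the names first_set, second_set, publish, and consume are assumptions, and in practice the per-process rings would themselves reside in the shared memory with proper cross-process synchronization, which is omitted here for brevity.

    #include <stddef.h>
    #include <string.h>

    #define RING_SLOTS 16
    #define MAX_PROCS  8

    /* One subset of the "second set" per process: a small ring of offsets that
     * point into the "first set" (the data area of the shared memory). */
    struct addr_ring {
        size_t slot[RING_SLOTS];
        unsigned head, tail;
    };

    static struct addr_ring second_set[MAX_PROCS];

    /* Writer (the first process): copy the data into its allocated memory space
     * in the first set, then record that space's offset in the reader's subset. */
    static void publish(char *first_set, size_t off,
                        const void *data, size_t len, int dst_proc) {
        memcpy(first_set + off, data, len);
        struct addr_ring *r = &second_set[dst_proc];
        r->slot[r->head % RING_SLOTS] = off;
        r->head++;
    }

    /* Reader (the second process): on a read-data request, take the next offset
     * from its own subset and read the data directly from the first set. */
    static const char *consume(const char *first_set, int my_proc) {
        struct addr_ring *r = &second_set[my_proc];
        if (r->tail == r->head) return NULL;     /* nothing published yet */
        size_t off = r->slot[r->tail % RING_SLOTS];
        r->tail++;
        return first_set + off;
    }

Because the data itself never leaves the shared first set, the only information that travels between the processes is an address, which is what keeps the communication path short.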
Optionally, at least two idle linked lists are stored in the memory, and the at least two idle linked lists are used for managing the allocable space in the first set at different granularities. The allocation submodule is configured to select a target linked list from the at least two idle linked lists according to the memory capacity required by the first process, and to allocate memory space for the first process from the allocable space managed by the target linked list.
Optionally, the second set includes a plurality of subsets, and each subset corresponds to one process of the plurality of processes. The saving module 106 may be configured to store, in the subset corresponding to the second process, the address at which the data is written in the memory space.
Optionally, the memory management device 10 may further include:
a first receiving module 108, configured to receive a first release instruction triggered by the first process, where the first release instruction is used to instruct release of the memory space occupied by the data;
a second receiving module 109, configured to receive a second release instruction triggered by the second process, where the second release instruction is used to instruct release of the memory space occupied by the data;
and a releasing module 110, configured to delete the data and release the memory space after both the first release instruction and the second release instruction have been received.
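For illustration only, the following minimal C sketch shows the two-sided release performed by the receiving and releasing modules; shared_buf, release_votes, and release_request are assumed names, and the actual return of the memory space to an idle linked list is omitted.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* The memory space is reclaimed only after both sides have released it,
     * mirroring the first and second release instructions described above. */
    struct shared_buf {
        atomic_int release_votes;   /* 0, 1 or 2: how many sides have released */
        /* ... offset of the data in the shared memory, length, etc. ... */
    };

    /* Called once by the writing process and once by the reading process.
     * Returns true only for the call that completes the second release, i.e.
     * the moment at which the data may be deleted and the space freed. */
    static bool release_request(struct shared_buf *b) {
        if (atomic_fetch_add(&b->release_votes, 1) + 1 < 2)
            return false;           /* the other process still needs the data */
        atomic_store(&b->release_votes, 0);
        return true;                /* caller deletes the data, frees the space */
    }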
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product comprising one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line) or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium, or a semiconductor medium (e.g., a solid-state drive), among others.
It should be noted that the method embodiments and the corresponding apparatus embodiments provided in the embodiments of the present invention may refer to each other, which is not limited by the embodiments of the present invention. The sequence of the steps of the method embodiments may be adjusted as appropriate, and steps may be added or omitted according to the situation; any method that can be readily conceived by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention, and is therefore not described in detail again.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A memory management method is applied to a processor in a server, the server further comprises a memory, and the method comprises the following steps:
counting the total shared memory capacity required by a plurality of processes running on the processor;
allocating a shared memory from the memory according to the required total memory capacity, wherein the capacity of the shared memory is greater than or equal to the total shared memory capacity required by the processes;
acquiring a memory allocation request triggered by a first process in the plurality of processes;
and allocating, in response to the memory allocation request, memory space for the first process from the shared memory.
2. The method of claim 1, wherein the shared memory comprises a first set and a second set;
the allocating a memory space for the first process from the shared memory includes: allocating memory space for the first process from the first set;
the method further comprises the following steps:
writing the data corresponding to the first process into the memory space;
storing, in the second set, the address at which the data is written in the memory space;
and when a second process in the plurality of processes triggers a read data request, reading the data according to the address in the second set, and feeding back the data to the second process.
3. The method of claim 2, wherein at least two idle linked lists are stored in the memory, and wherein the at least two idle linked lists are used for managing allocable space in the first set at different granularities;
the allocating memory space for the first process from the first set comprises:
selecting a target linked list from the at least two idle linked lists according to the memory capacity required by the first process;
and allocating memory space for the first process from the allocable space managed by the target linked list.
4. The method of claim 2, wherein the second set comprises a plurality of subsets, and each subset corresponds to one process of the plurality of processes;
the storing, in the second set, the address at which the data is written in the memory space comprises: storing, in the subset corresponding to the second process, the address at which the data is written in the memory space.
5. The method of any of claims 2 to 4, further comprising:
receiving a first release instruction triggered by the first process, wherein the first release instruction is used for instructing release of the memory space occupied by the data;
receiving a second release instruction triggered by the second process, wherein the second release instruction is used for instructing release of the memory space occupied by the data;
and after receiving the first release instruction and the second release instruction, deleting the data and releasing the memory space.
6. A memory management device, wherein the memory management device is applied to a processor in a server, the server further includes a memory, and the memory management device includes:
a statistical module, configured to count the total shared memory capacity required by a plurality of processes running on the processor;
a first allocation module, configured to allocate a shared memory from the memory according to the required total memory capacity, where the capacity of the shared memory is greater than or equal to the total shared memory capacity required by the multiple processes;
an obtaining module, configured to obtain a memory allocation request triggered by a first process in the multiple processes;
and the second allocating module is used for responding to the memory allocation request and allocating memory space for the first process from the shared memory.
7. The memory management device according to claim 6, wherein the shared memory comprises a first set and a second set;
the second allocating module comprises an allocation submodule, configured to allocate memory space for the first process from the first set;
the memory management device further includes:
a write-in module, configured to write data corresponding to the first process into the memory space;
a saving module, configured to store, in the second set, the address at which the data is written in the memory space;
and a reading module, configured to, when a second process of the multiple processes triggers a read-data request, read the data according to the address in the second set and feed the data back to the second process.
8. The memory management device according to claim 7, wherein at least two idle linked lists are stored in the memory, and the at least two idle linked lists are used for managing allocable space in the first set at different granularities;
the allocation submodule is configured to:
selecting a target linked list from the at least two idle linked lists according to the memory capacity required by the first process;
and allocating memory space for the first process from the allocable space managed by the target linked list.
9. The memory management device according to claim 7, wherein the second set comprises a plurality of subsets, and each subset corresponds to one process of the plurality of processes;
the saving module is configured to store, in the subset corresponding to the second process, the address at which the data is written in the memory space.
10. The memory management device according to any one of claims 7 to 9, wherein the memory management device further comprises:
a first receiving module, configured to receive a first release instruction triggered by the first process, where the first release instruction is used to instruct release of the memory space occupied by the data;
a second receiving module, configured to receive a second release instruction triggered by the second process, where the second release instruction is used to instruct release of the memory space occupied by the data;
and the releasing module is used for deleting the data and releasing the memory space after receiving the first releasing instruction and the second releasing instruction.
11. A server, characterized in that the server comprises: a processor and a memory, the processor comprising the memory management device of any of claims 6 to 10.
12. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the memory management method of any of claims 1 to 5.
CN201810975221.9A 2018-08-24 2018-08-24 Memory management method and device and server Active CN110858162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810975221.9A CN110858162B (en) 2018-08-24 2018-08-24 Memory management method and device and server


Publications (2)

Publication Number Publication Date
CN110858162A true CN110858162A (en) 2020-03-03
CN110858162B CN110858162B (en) 2022-09-23

Family

ID=69636300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810975221.9A Active CN110858162B (en) 2018-08-24 2018-08-24 Memory management method and device and server

Country Status (1)

Country Link
CN (1) CN110858162B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003010626A2 (en) * 2001-07-25 2003-02-06 Times N Systems Inc. Distributed shared memory management
CN103425592A (en) * 2013-08-05 2013-12-04 大唐移动通信设备有限公司 Memory management method and device for multiprocess system
CN104980454A (en) * 2014-04-02 2015-10-14 腾讯科技(深圳)有限公司 Method, server and system for sharing resource data
CN106681842A (en) * 2017-01-18 2017-05-17 迈普通信技术股份有限公司 Management method and device for sharing memory in multi-process system
CN107209716A (en) * 2015-02-09 2017-09-26 华为技术有限公司 Memory management apparatus and method


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488123A (en) * 2020-04-07 2020-08-04 Tcl移动通信科技(宁波)有限公司 Storage space management method and device, storage medium and mobile terminal
CN111488123B (en) * 2020-04-07 2022-11-04 Tcl移动通信科技(宁波)有限公司 Storage space management method and device, storage medium and mobile terminal
CN112669852A (en) * 2020-12-15 2021-04-16 北京百度网讯科技有限公司 Memory allocation method and device and electronic equipment
CN112669852B (en) * 2020-12-15 2023-01-31 北京百度网讯科技有限公司 Memory allocation method and device and electronic equipment
CN112995610A (en) * 2021-04-21 2021-06-18 浙江所托瑞安科技集团有限公司 Method for application in shared in-existence multi-channel video monitoring
WO2023051591A1 (en) * 2021-09-29 2023-04-06 华为技术有限公司 Interprocess communication method and related apparatus
CN114327868A (en) * 2021-12-08 2022-04-12 中汽创智科技有限公司 Dynamic memory regulation and control method, device, equipment and medium
CN114327868B (en) * 2021-12-08 2023-12-26 中汽创智科技有限公司 Memory dynamic regulation and control method, device, equipment and medium
CN114385370A (en) * 2022-01-18 2022-04-22 重庆紫光华山智安科技有限公司 Memory allocation method, system, device and medium
US12020069B2 (en) * 2022-08-11 2024-06-25 Next Silicon Ltd Memory management in a multi-processor environment

Also Published As

Publication number Publication date
CN110858162B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN110858162B (en) Memory management method and device and server
CN106776967B (en) Method and device for storing massive small files in real time based on time sequence aggregation algorithm
US11262916B2 (en) Distributed storage system, data processing method, and storage node
US10120588B2 (en) Sliding-window multi-class striping
CN110555001B (en) Data processing method, device, terminal and medium
CN113396566B (en) Resource allocation based on comprehensive I/O monitoring in distributed storage system
US11314454B2 (en) Method and apparatus for managing storage device in storage system
US20190026317A1 (en) Memory use in a distributed index and query system
CN103324533A (en) distributed data processing method, device and system
US11231964B2 (en) Computing device shared resource lock allocation
WO2022120522A1 (en) Memory space allocation method and device, and storage medium
WO2024099448A1 (en) Memory release method and apparatus, memory recovery method and apparatus, and computer device and storage medium
CN111190537B (en) Method and system for managing sequential storage disk in additional writing scene
CN109960662A (en) A kind of method for recovering internal storage and equipment
CN109213423A (en) Concurrent I/O command is handled without lock based on address barrier
CN113672171A (en) Distributed object storage method, device and system and metadata server
CN116955219A (en) Data mirroring method, device, host and storage medium
CN114785662B (en) Storage management method, device, equipment and machine-readable storage medium
CN107102898B (en) Memory management and data structure construction method and device based on NUMA (non Uniform memory Access) architecture
CN115904211A (en) Storage system, data processing method and related equipment
CN115509437A (en) Storage system, network card, processor, data access method, device and system
CN113741787B (en) Data storage method, device, equipment and medium
CN117632953B (en) Data cycle storage method, device, server and storage medium
CN113312522B (en) Management method and device of kernel object, storage medium and electronic equipment
CN117667987A (en) Storage system, data updating method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant