CN106844041B - Memory management method and memory management system - Google Patents


Info

Publication number
CN106844041B
CN106844041B (application CN201611241110.2A)
Authority
CN
China
Prior art keywords
memory
module
thread
thread module
management
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611241110.2A
Other languages
Chinese (zh)
Other versions
CN106844041A (en)
Inventor
黄福堂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201611241110.2A
Publication of CN106844041A
Application granted
Publication of CN106844041B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Abstract

The application discloses a memory management method and a memory management system. The method is applied to a memory management system comprising a memory management module and a plurality of first thread modules. The memory management module manages a plurality of memory spaces; a first memory space among them comprises a plurality of first memories, each of fixed capacity. Each first thread module manages a second memory space comprising at least one first memory. The method comprises the following steps: the first thread module receives service data; the first thread module determines whether its second memory space meets the memory requirement of the service data; and when it does, the first thread module processes the service data using memory from the second memory space. The embodiment of the application can reduce lock conflicts and improve the concurrency capability of the system.

Description

Memory management method and memory management system
Technical Field
The present invention relates to the field of computer storage, and more particularly, to a memory management method and a memory management system.
Background
Memory management is an important basic system service in a storage system. It manages memory application and release for the system's Input/Output (IO) modules and service modules, provides a foundation for a high-performance system, and prevents insufficient memory resources from causing service exceptions on the IO path that would ultimately affect the services provided to upper-layer services.
Memory management governs system memory through a series of algorithms; for example, Linux manages memory with the buddy algorithm, the slab allocator, and the like. In addition, a storage system may form part of the core of a large-scale system, and the mainstream trend in the industry is cloud storage, so various distributed storage products based on x86 servers have appeared, such as the open-source distributed storage Ceph. Such storage is characterized by deploying its core processes in Linux user mode and pre-allocating the system's resources, so that the system avoids dynamically allocating and releasing resources to the Linux system at run time. This ensures normal operation and avoids the large CPU cost of frequently applying for and releasing memory to the system during IO. More importantly, if the underlying storage cannot obtain memory because system memory resources are insufficient, the services of the upper-layer system are severely affected. A storage system therefore typically manages by itself all of the memory it needs.
In the prior art, a simple queue (i.e., a linked list) of fixed-length memories is used to manage memory. When more than one thread applies for and releases memory, a mutual exclusion lock must be configured; with too many threads, mutex contention becomes severe and the concurrency capability of the system is low.
Disclosure of Invention
The embodiment of the application provides a memory management method and a memory management system, which can reduce lock conflicts and improve system concurrency.
In a first aspect, a method for memory management is provided, where the method is applied to a memory management system including a memory management module and a plurality of first thread modules, the memory management module is configured to manage a plurality of memory spaces, a first memory space in the plurality of memory spaces includes a plurality of first memories, capacity of the first memory is a fixed value, the first thread module is configured to manage a second memory space, and the second memory space includes at least one first memory, and the method includes: the first thread module receives service data; the first thread module determines whether the second memory space meets the memory requirement of the business data; when the second memory space meets the memory requirement of the service data, the first thread module processes the service data by using the memory in the second memory space.
The first thread module receives service data, determines whether its second memory space can meet the memory size required by the service data, and, when it can, processes the service data using memory from the second memory space. Therefore, in a scenario with multiple service threads, different thread modules process service data through their respective second memory spaces. This avoids the lock conflicts that arise when multiple thread modules processing service data simultaneously each need the management lock on the memory linked list, which locks all memory in the first memory space. Lock overhead is thereby reduced and the concurrency capability of the system is improved.
In some possible implementations, the method further includes: when the second memory space does not meet the memory requirement of the service data, the first thread module sends a memory request to the memory management module, wherein the memory request is used for requesting to apply for a memory from the first memory space; the memory management module allocates a memory according to the memory request, and sets a first management lock for the first memory space, wherein the first management lock is used for controlling the access of the memory in the first memory space; the memory management module configures a memory for the first thread module and releases the first management lock; and the first thread module stores the configured memory into the second memory space so that the second memory space can meet the memory requirement of the business data.
When its second memory space cannot meet the requirement, each first thread module applies for memory from the memory management module; that is, each first thread can still process service data through its own second memory space. This avoids the lock conflicts that arise when multiple thread modules processing service data simultaneously each need the management lock on the memory linked list, which locks all memory in the first memory space; lock overhead is reduced and the concurrency capability of the system is improved.
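The two-level allocation just described — serve from the thread's own second memory space when possible, and take the first management lock only to refill in batch — can be sketched as follows. This is a minimal single-size sketch; the names (global_pool_t, thread_cache_t) and the refill batch size are illustrative, not taken from the patent:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* one fixed-length memory block, chained through its first word */
typedef struct block { struct block *next; } block_t;

typedef struct {                 /* "first memory space": shared, lock-protected */
    pthread_mutex_t lock;        /* the first management lock */
    block_t *head;
} global_pool_t;

typedef struct {                 /* "second memory space": private, no lock */
    block_t *head;
    global_pool_t *global;
} thread_cache_t;

enum { REFILL_BATCH = 8 };

/* fast path: pop from the private cache; slow path: refill in batch under lock */
void *cache_alloc(thread_cache_t *tc)
{
    if (!tc->head) {                               /* cache empty: refill */
        pthread_mutex_lock(&tc->global->lock);     /* lock held only here */
        for (int i = 0; i < REFILL_BATCH && tc->global->head; i++) {
            block_t *b = tc->global->head;
            tc->global->head = b->next;
            b->next = tc->head;
            tc->head = b;
        }
        pthread_mutex_unlock(&tc->global->lock);
    }
    block_t *b = tc->head;
    if (b) tc->head = b->next;
    return b;
}

/* release goes back to the private cache, again without the global lock */
void cache_free(thread_cache_t *tc, void *p)
{
    block_t *b = (block_t *)p;
    b->next = tc->head;
    tc->head = b;
}
```

Because the global lock is taken at most once per REFILL_BATCH allocations, contention on the first memory space drops roughly in proportion to the batch size.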
In some possible implementations, the allocating, by the memory management module, the memory according to the memory request includes: the memory management module allocates at least one first memory set in the first memory space according to the memory request, wherein the first memory space comprises a plurality of first memory sets, and each first memory set comprises at least two first memories; wherein the sending of the memory to the first thread module by the memory management module comprises: the memory management module sends the at least one memory set to the first thread module.
When the first thread module needs a large amount of memory, this avoids traversing the memories one by one and therefore saves time.
In some possible implementations, the method further includes: the memory management module configures a memory index value for each first memory set in the plurality of first memory sets; wherein the allocating, by the memory management module, at least one first memory set according to the memory request includes: the memory management module allocates the at least one first memory set according to the memory request and the memory index value.
When the first thread module needs to obtain memories in batches, they can be configured rapidly in batch: only a few pointers need to be modified, which saves overhead.
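One way to read the memory-set mechanism is that handing out a whole pre-built set costs only an index update rather than a per-block traversal. A hypothetical sketch (the set and pool layouts are this sketch's assumptions, not the patent's structures):

```c
#include <assert.h>
#include <stddef.h>

enum { SET_SIZE = 4, NUM_SETS = 8 };

/* a first memory set: at least two fixed-length memories handed out together */
typedef struct {
    void *mem[SET_SIZE];
} mem_set_t;

typedef struct {
    mem_set_t sets[NUM_SETS];
    int next_free;               /* memory index value of the next free set */
} set_pool_t;

/* batch allocation: O(1) per set — only the index advances, no per-block walk */
mem_set_t *alloc_set(set_pool_t *p)
{
    if (p->next_free >= NUM_SETS) return NULL;
    return &p->sets[p->next_free++];
}
```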
In some possible implementations, before the first thread module determines whether the second memory space satisfies the memory requirement of the business data, the method further includes: the first thread module sends a first registration request to the memory management module, wherein the first registration request is used for requesting a memory from the memory management module; the memory management module sends at least one first memory to the first thread module according to the first registration request; the first thread module generates the second memory space according to the at least one first memory.
The first thread module can acquire memory in advance for processing service data, so it does not need to fetch memory from the first memory space at the moment the memory is needed; this reduces the latency of processing service data.
In some possible implementations, the memory management system further includes a second thread module; wherein the processing, by the first thread module, the service data according to the memory in the second memory space includes: the first thread module encapsulates the service data in the memory in the second memory space to generate a service message; the first thread module sends the service message to the second thread module.
The first thread module may process the service data according to the memory in the second memory space, and specifically, may encapsulate the service data in the memory in the second memory space to generate a service message, and send the service message to the second thread module in the memory management system.
In some possible implementations, the method further includes: the second thread module sends a second registration request to the memory management module, wherein the second registration request is used for requesting a memory from the memory management module; the memory management module sends at least one first memory to the second thread module according to the second registration request; the second thread module generates a third memory space according to the at least one first memory.
The second thread module can acquire memory in advance for processing service data, so it does not need to fetch memory from the first memory space at the moment the memory is needed; this reduces the latency of processing service data.
In some possible implementations, the first thread module is a service thread module and the second thread module is a network thread module, and the method further includes: the second thread module sends the service message to a storage node; after sending the service message to the storage node, the second thread module sets a second management lock for the third memory space and modifies the memory usage count and the service path of the third memory space, where the second management lock is used for controlling modification of the memory usage count and the service path and is different from the first management lock.
The memory usage count detects whether the memory occupied by a service message is still in use, and the service path enables rapid localization when the system suffers a memory leak, memory corruption, or similar problems. This satisfies the requirements of auxiliary designs such as memory localization while avoiding the lock conflicts that would result from configuring a single shared management lock.
In some possible implementations, the first thread module is a network thread module and the second thread module is a service thread module, and the method further includes: the second thread module sends the service message to the user; after sending the service message to the user, the second thread module sets a second management lock for the third memory space and modifies the memory usage count and the service path of the third memory space, where the second management lock is used for controlling modification of the memory usage count and the service path and is different from the first management lock.
This satisfies the requirements of auxiliary designs such as memory localization while avoiding the lock conflicts that would result from configuring a single shared management lock.
In some possible implementations, the method further includes: the second thread module determines, according to the memory usage count, whether to release the memory occupied by the service message.
The second thread module releases the memory once the memory occupied by the service message is no longer in use, so that the memory can be used by other service messages, improving memory utilization.
In some possible implementation manners, the third memory space includes a first linked list and a second linked list, an initial preset value of the memory usage count is 1, and the memory usage count is increased by 1 when the corresponding memory is in an occupied state and decreased by 1 when the corresponding memory is in an idle state; wherein, the second thread module determines whether to release the memory occupied by the service message according to the memory usage count, and the determining step includes: when the memory usage count is 1, the second thread module releases the memory occupied by the service message to the second linked list; and when the number of the memories in the second linked list exceeds a first number threshold, the second thread module releases the residual memories occupied by the service messages to the first linked list.
The third memory space includes a first linked list and a second linked list, which store the at least one first memory in the third memory space. That is, the first memories in the third memory space can be managed in linked-list form. Taking the free list (the first linked list) as an example: when the number of memories in the free list exceeds a low watermark (the first number threshold), a released first memory is no longer placed on the free list but on the unit list (the second linked list). The memory on the unit list can then be reclaimed conveniently in batches later, improving reclamation efficiency.
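One plausible reading of the two-list release flow, with the reference count deciding when a block is actually free and the low watermark deciding which list receives it, can be sketched as below. The names and the direction of the overflow are this sketch's assumptions, not a definitive reading of the claims:

```c
#include <assert.h>
#include <stddef.h>

typedef struct node { struct node *next; int ref_cnt; } node_t;

typedef struct {
    node_t *free_list;      /* first linked list */
    int     free_count;
    node_t *unit_list;      /* second linked list, reclaimed in batch later */
    int     unit_count;
    int     low_watermark;  /* first number threshold */
} third_space_t;

/* drop one reference; if it was the last, park the block on a list */
void msg_mem_release(third_space_t *s, node_t *n)
{
    if (--n->ref_cnt > 0)
        return;                       /* still in use elsewhere */
    if (s->free_count < s->low_watermark) {
        n->next = s->free_list; s->free_list = n; s->free_count++;
    } else {                          /* overflow goes to the batch-reclaim list */
        n->next = s->unit_list; s->unit_list = n; s->unit_count++;
    }
}
```

Batch reclamation of the unit list back to the first memory space (the second number threshold described below) would then walk only that one list instead of the whole third memory space.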
In some possible implementations, the method further includes: when the number of memories in the third memory space exceeds a second number threshold, the second thread module releases the memories in the second linked list to the first memory space.
The memory management module can reclaim part of the memory in the third memory space back to the first memory space, which prevents individual service threads from monopolizing resources when IO is unevenly distributed.
In a second aspect, a memory management system is provided, which includes modules for performing the method of the first aspect or any possible implementation manner of the first aspect.
In a third aspect, a memory management system is provided, including: a processor, a memory, and a communication interface. The processor is coupled to the memory and the communication interface. The memory is for storing instructions, the processor is for executing the instructions, and the communication interface is for communicating with other network elements under control of the processor. When the processor executes the instructions stored in the memory, the execution causes the processor to perform the method of memory management of the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer storage medium having program code stored therein, the program code being indicative of instructions for executing the method of memory management in the first aspect or any one of the possible implementations of the first aspect.
Based on the above technical solution, the first thread module receives service data, determines whether the second memory space can meet the memory required by the service data, and, when it can, processes the service data using memory from the second memory space. The thread modules thus process service data through their respective second memory spaces, avoiding a management lock on the memory linked list that would lock all memory in the first memory space; lock conflicts are reduced and the concurrency capability of the system is improved.
Drawings
FIG. 1 is an application scenario according to an embodiment of the present application;
FIG. 2 is a block diagram of a memory management system according to the present application;
FIG. 3 is a schematic illustration of memory management according to a single producer single consumer scenario;
FIG. 4 is a schematic illustration of memory management according to a multi-producer multi-consumer scenario;
FIG. 5 is a schematic diagram of an embodiment of memory management according to a multi-producer, multi-consumer scenario;
FIG. 6 is a schematic diagram of another memory management according to a multi-producer, multi-consumer scenario;
FIG. 7 is a block diagram illustrating an architecture of memory management according to an embodiment of the present application;
FIG. 8 is a schematic flow chart diagram of a method of memory management according to an embodiment of the present application;
FIG. 9 is a schematic flow chart diagram of a method of memory management according to another embodiment of the present application;
FIG. 10 is a schematic flow chart diagram of a method of memory management according to yet another embodiment of the present application;
FIG. 11 is a diagram illustrating an apparatus of a memory management system according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a memory management system according to an embodiment of the present application.
Detailed Description
The memories in a storage system fall into three types: (1) fixed-length memory; (2) variable-length memory; (3) variable-length memory that additionally requires IO alignment, such as 512-byte or 4 KB alignment. The three types differ in their management methods and acquisition algorithms. For example, fixed-length memory can be managed as a linked list. For variable-length memory, when IO services apply for differing sizes, memory in the middle of the linked list may not be released while memory at the tail cannot meet the requirement, producing many small memory fragments; such memory is generally managed with a buddy algorithm similar to the one in a Linux system. The embodiment of the present application mainly concerns fixed-length memory.
Fig. 1 shows an application scenario of an embodiment of the present application. As shown in fig. 1, a service module in the distributed storage module of a converged computing and storage appliance acts as the storage client and provides storage capability to external services. The memory management system according to the embodiment of the present application may be applied in any of the IO request module 110, the service processing module 120, and the cache pool (CACHE POOL) 130 of the appliance shown in fig. 1. The cache pool 130 may be built from a Central Processing Unit (CPU), Flash memory (Flash), disk (DISK), and the like.
It should be understood that the embodiment of the present application may be applied to a key-value storage system, and may also be applied to other storage systems, which is not limited in this application. For convenience of description, the embodiment of the present application is described by taking a key-value storage system as an example.
Fig. 2 shows a block diagram of a memory management system. As shown in fig. 2, the memory management system mainly includes a plurality of business threads 210, a plurality of network threads 220, and a frame memory management module 230.
The service thread 210 mainly processes the IO of the upper-layer service module: it obtains SCSI interface messages from kernel mode, converts them into the Key Value messages recognized by the storage system, sends each Key Value message to a network thread according to its IO route, and finally the message is sent to a storage node.
The network thread 220 mainly maintains the network links between client nodes and storage nodes and handles message sending and receiving. For example, when an upper-layer service performs a write, the IO is processed by the service thread and then dispatched to the corresponding network thread according to the IO route of the Key Value message; the network thread finally sends the IO to the back-end storage node. After the storage node has written the IO, the reply returning from the storage node to the client is likewise received by the network thread and passed back to the service processing thread, which feeds the write or read result back to the upper-layer service.
The frame memory management module 230 mainly manages fixed-length, variable-length, IO, and similar memories. When IO is issued to a service processing thread, the thread must apply to the frame memory for memory to satisfy the service; the message is then sent to the network thread and forwarded to the storage node, and after the network module has sent the message out, the memory is released back to the frame memory. Similarly, when a reply returns from a storage node, the network thread also applies to the frame memory module for memory to hold the message information and passes the reply to the service processing thread; after the service processing thread has returned the result to the upper-layer application, the memory is released back to the frame memory.
In a storage system, the IO path involves frequent memory application and release, so efficient memory management greatly affects system performance: inefficient management increases CPU overhead and limits concurrency, ultimately degrading performance. In a multi-producer multi-consumer scenario (i.e., multiple threads apply for a certain type of critical resource, such as memory, and each thread both applies for and releases it), the prior art uses simple queue management: every application by a producer or consumer must be allocated from and released to the queue, which requires a large amount of reclamation work, is unsuited to batch acquisition and reclamation, and is inefficient. When producers and consumers are numerous, contention at the head and tail of the queue is heavy, synchronization is difficult, and the system's resource overhead is large. Specifically, fixed-length memory is generally managed in the prior art as follows:
fig. 3 shows the common memory management for a single producer and single consumer scenario. In the scene, locking is not needed, a fixed-length memory can control queue operation through a read pointer and a write pointer, and the realization of a first-in first-out (FIFO) lock-free algorithm (kfifo) of a queue provided by a linux kernel can be referred to.
Fig. 4 shows common memory management for a multi-producer multi-consumer scenario. In a multithreading scenario involving multiple producers and multiple consumers, the simplest management method is a linked list with mutual exclusion locks at its head and tail to control access between threads to critical resources (such as memory No. 1 and memory No. 4 in fig. 4). The disadvantage of this approach becomes more apparent as more threads are involved.
Specifically, consider the mutex configuration in the multi-producer multi-consumer scenario shown in fig. 5. As shown in fig. 5, memory blocks (i.e., fixed-length memories) of the same size are chained together: one chain for 32 B data blocks, one for 64 B data blocks, and so on, each chain with its own lock. When a consumer needs to apply for, say, a 32 B memory resource, the 32 B chain must be locked during the application; other consumers cannot apply from that chain until the previous application completes and the chain is unlocked. Thus, when multiple consumers apply for memory resources from one chain simultaneously, lock conflicts occur during lock acquisition and no concurrency is possible.
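The per-chain mutex scheme of fig. 5 can be sketched as follows; every allocation in the same size class serializes on that class's lock, which is exactly the contention problem described above (the names and class layout are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

typedef struct blk { struct blk *next; } blk_t;

enum { NUM_CLASSES = 4 };   /* 32 B, 64 B, 128 B, 256 B */

typedef struct {
    pthread_mutex_t lock;   /* one mutex per size chain — the contention point */
    blk_t *head;
} size_chain_t;

size_chain_t chains[NUM_CLASSES];

/* map a request size to its chain: 0 -> 32 B, 1 -> 64 B, ... */
int class_of(size_t size)
{
    size_t cap = 32;
    for (int c = 0; c < NUM_CLASSES; c++, cap <<= 1)
        if (size <= cap) return c;
    return -1;              /* larger than the largest fixed length */
}

/* every allocation in the same class serializes on the same mutex */
void *chain_alloc(size_t size)
{
    int c = class_of(size);
    if (c < 0) return NULL;
    pthread_mutex_lock(&chains[c].lock);
    blk_t *b = chains[c].head;
    if (b) chains[c].head = b->next;
    pthread_mutex_unlock(&chains[c].lock);
    return b;
}
```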
It should be understood that the fixed-length memory may be 32 B, 64 B, 128 B, …, 2^n B, etc., which is not limited in this application.
Fig. 6 shows another common memory management scheme for a multi-producer multi-consumer scenario. The Disruptor is a widely recognized, highly efficient scheme for handling multiple producers and multiple consumers. It adopts a data structure built on a special ring buffer queue, which alleviates the severe lock conflicts of multiple producers and consumers, makes it convenient for the system to track the trace of each memory, and allows rapid localization in abnormal situations such as memory leaks and memory corruption. However, when the system has many producers and consumers and multiple service threads apply for memory resources simultaneously, the two cooperating ring buffers still suffer serious lock conflicts, the management and implementation are complex, and later code comprehension and maintenance costs are high.
In the memory management of current large-scale storage systems, beyond simply managing memory application and release, additional auxiliary designs are added to help the system track each memory's trace, so that problems such as memory leaks and memory corruption can be localized quickly. However, the thread module then needs to apply for memory resources and also manage the critical resources used for such tracking, which may involve nesting one lock inside another.
Therefore, as shown in fig. 7, an architectural diagram of memory management is provided in the embodiment of the present application, which can reduce lock conflicts and improve concurrency of the system. In this embodiment of the application, the frame memory management module (corresponding to the frame memory management module 230 in fig. 2) further includes:
(1) Module Id (MID): since the frame memory management module is a shared module, it can manage memories for different modules. As shown in fig. 7, in the embodiment of the present application, a service module (e.g., a client service module), a network module, and the like are represented by different MIDs.
(2) Metadata: a global metadata management structure is assigned to each of the different modules, with the following metadata structure:
struct dsw_mem_meta
{
    dsw_u8 mid;          // module ID number
    ...
    dsw_u16 *fmt_map;    // manages metadata for memories of different fixed lengths
};
Here fmt_map is an array, and each element of metadata manages fixed-length memory units of a different size. The fixed-length sizes are typically designed as powers of 2, e.g., memories 32, 64, 128 and 256 bytes long. As shown in fig. 7, each global metadata structure manages N fixed-length linked lists.
(3) fmt_level: the linked-list index value of a fixed-length memory; for example, the index of 32 is 5 and the index of 64 is 6, which lets a service thread quickly index the memory of the required length when applying.
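Since the fixed-length sizes are powers of two, the index is simply the base-2 logarithm of the size (32 → 5, 64 → 6). A minimal sketch of this mapping in C (the function name is an assumption, not the patent's code):

```c
#include <stdint.h>

/* Compute the fmt_level linked-list index for a requested size by rounding
 * up to the next power of two: 32 -> 5, 64 -> 6, 128 -> 7, and so on. */
static uint32_t fmt_level(uint32_t size)
{
    uint32_t level = 0;
    uint32_t unit = 1;
    while (unit < size) {   /* round up to the next power of two */
        unit <<= 1;
        level++;
    }
    return level;
}
```

A thread applying for, say, 33 bytes would thus index the 64-byte list directly instead of scanning all fixed-length classes.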
(4) At system start-up, a service thread first estimates how much memory of each length it is likely to need during IO processing, and then registers that amount with the frame memory. When the process starts, the fixed-length memories are organized into linked lists according to the registered numbers, forming the large linked-list pools; as shown in fig. 7, there may be several such pools. Each fixed-length memory has a memory management unit, which mainly records the usage status of the memory, its trajectory during use, and other information. The data structure of the management unit of a fixed-length memory is as follows:
struct dsw_mem_fmt_unit
{
    dsw_u32 mid : 6;                       // module ID number
    dsw_mem_fmt_unit_op_trace_t op_trace;  // records the memory trajectory
    dsw_u32 index;                         // index of the length
    dsw_u32 alloc_time;
    dsw_s64 used_flag;                     /* use flag 0: free, 1: alloc */
    mem_bs_info;                           // information on the actual location of the memory in the pool
    ...
    char *data_addr;                       // actually allocated memory address
};
In addition, data_addr in each management unit holds the address of the management header that actually points to the memory: 13 bytes are placed before each actually allocated address. For example, ref_cnt is used to detect whether the current memory is still in use; ref_cnt is set to 1 when the memory is initially allocated by the system and incremented by 1 each time it is used by a subsequent application or handed to another thread, so when ref_cnt is back at 1 the memory can be released. If the memory may be used across threads, a lock is required: a spin lock can be used when multiple threads modify ref_cnt; the detailed structure is shown in fig. 7. The unit index indicates which length class the current memory unit belongs to, such as 32 bytes or 64 bytes; the memid indicates whether the memory belongs to a dynamically managed memory type or a statically allocated managed type. The spin lock protects the ref_cnt and op_trace attributes in the memory management header; by subdividing the resources and using two different locks, the probability of lock conflict is further reduced.
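The reference-count lifecycle described above can be sketched as follows. This is an illustrative sketch only: all names are assumptions, and C11 atomics stand in for the spin lock so the example is self-contained.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Sketch of the management header placed before each allocated address:
 * ref_cnt starts at 1 on allocation and gains 1 per additional user. */
typedef struct mem_hdr {
    atomic_uint ref_cnt;  /* 1 at allocation; +1 per extra user */
    uint8_t index;        /* fixed-length class: 5 -> 32B, 6 -> 64B, ... */
    uint8_t memid;        /* dynamic vs. statically allocated management type */
} mem_hdr_t;

static void hdr_init(mem_hdr_t *h, uint8_t index)
{
    atomic_init(&h->ref_cnt, 1);  /* set to 1 when first allocated */
    h->index = index;
}

/* Called when the block is handed to another thread or user. */
static void hdr_get(mem_hdr_t *h)
{
    atomic_fetch_add(&h->ref_cnt, 1);
}

/* Returns 1 when the count drops back to 1, i.e. the block may be released. */
static int hdr_put(mem_hdr_t *h)
{
    return atomic_fetch_sub(&h->ref_cnt, 1) == 2;
}
```

In the patent's design the count and the op_trace record are each guarded by their own spin lock rather than atomics, which is what splits contention across two locks.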
(5) Head lock/tail lock: when service threads register the number of memories, they also indicate whether the memory will be shared by several users. If only one service thread uses it, the linked list needs no lock; if multiple services share the resource, locking is needed. When a service thread applies for memory, the memory is allocated from the head of the queue, with the head lock used for mutual exclusion; when service threads need to release part of their resources back to the large pool, the tail lock is used for mutual exclusion. This ensures the safety of the linked list and prevents the chain from being broken by several service threads modifying the list at the same time.
(6) Big pool (big pool): the framework module uniformly places the memory linked lists initially registered by the service threads in the linked-list pool; these linked lists are collectively called the large pool in the description below.
(7) Large pool index (big pool index): in the large-pool linked list, some services register many resources, so if a subsequent batch allocation to service threads had to traverse the list while holding the lock, the lock overhead would be large. Therefore, to reduce the overhead of lock conflicts, the large-pool linked list is indexed, so that batch allocation can proceed quickly and the only cost is modifying a few pointers.
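The pointer-only batch allocation the index enables can be sketched as follows: because an index entry already records the node K positions ahead, detaching K blocks is a few pointer updates instead of a K-step traversal under the lock. Names and the fixed batch boundary are illustrative assumptions.

```c
#include <stddef.h>

typedef struct node { struct node *next; } node_t;

typedef struct big_pool {
    node_t *head;       /* first free block */
    node_t *batch_end;  /* index entry: last node of the next K-block batch */
} big_pool_t;

/* Detach the first K nodes [head .. batch_end] as one chain, O(1). */
static node_t *alloc_batch(big_pool_t *p)
{
    node_t *first = p->head;
    if (!first || !p->batch_end)
        return NULL;
    p->head = p->batch_end->next;  /* pool now starts after the batch */
    p->batch_end->next = NULL;     /* terminate the handed-out chain */
    return first;
}
```

After the splice, the index table must be updated to point at the new batch boundary, which is the pointer modification mentioned in the text.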
(8) Service module/network module: in fig. 7, the service module includes a plurality of service module threads 710 and the network module includes a plurality of network module threads 720. Each thread corresponds to a private pointer, which points to the metadata management of the small pool (small pool) of that service thread module or network thread module; the metadata management structure of the small pool is as follows:
typedef struct dsw_mem_small_list_s
{
    dsw_int water_level;   // water level (count); normally set to 5% of the total (maximum 20 threads) - TODO: available
    dsw_int water_high;    // water_level * 2
    dsw_list_t unit_list;  // single fixed-length chain
} dsw_mem_small_list_t;
When a service module or network module thread needs to apply for memory, it first checks whether any memory remains in the small memory pool in the thread's private space; if so, the memory space is allocated to the service thread directly. If the service thread's queue has no memory space left, the service thread must apply to the large pool in the frame memory; when applying to the large pool, a certain number of memory blocks are requested in one batch, the specific number being given by the algorithm in the allocation/registration part below.
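The two-level fast path just described can be sketched as follows: the thread-private list is checked and popped without any lock, and only a refill goes to the (locked) big pool. All names, the stub big pool, and its batch size are assumptions for illustration.

```c
#include <stddef.h>

typedef struct blk { struct blk *next; } blk_t;

static blk_t pool_storage[4];  /* stand-in backing store for the big pool */

/* Stand-in for the locked big-pool batch allocation: returns a small chain. */
static blk_t *big_pool_alloc_batch(void)
{
    for (int i = 0; i < 3; i++)
        pool_storage[i].next = &pool_storage[i + 1];
    pool_storage[3].next = NULL;
    return &pool_storage[0];
}

typedef struct small_pool {
    blk_t *free;  /* thread-private list: no lock on this fast path */
} small_pool_t;

static blk_t *small_alloc(small_pool_t *sp)
{
    if (!sp->free)                   /* private pool empty: batch refill */
        sp->free = big_pool_alloc_batch();
    blk_t *b = sp->free;
    if (b)
        sp->free = b->next;          /* pop from the private list */
    return b;
}
```

Only one allocation in each batch pays the big-pool lock cost; the rest are served privately.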
Fig. 8 is a schematic diagram of a memory management method according to an embodiment of the present application.
801, the first thread module receives business data.
The memory management system includes a memory management module, which may be the frame memory management module 230 in fig. 2, and a plurality of first thread modules, which may be the service thread 210 or the network thread 220.
The memory management module includes a plurality of memory spaces, where the memory spaces are respectively used to store any one of the fixed-length memories 32B, 64B, or 128B, and each memory space may include a plurality of fixed-length memories (denoted as a first memory), for example, any one memory space (denoted as a first memory space) of the memory spaces may include a plurality of 32B fixed-length memories. In addition, the memory management system includes a plurality of first thread modules, each of the first thread modules includes a private memory space (denoted as a second memory space), and the second memory space includes at least one fixed-length memory (e.g., 32B fixed-length memory). That is, any first thread module can know the memory capacity of its second memory space.
It should be understood that the fixed-length memory in the second memory space may be obtained from the memory space of the memory management module, or may be an original memory of the first thread module, which is not limited in this application.
It is also understood that the plurality of first memories in the first memory space, and the at least one first memory in the second memory space may exist in a linked list.
802, the first thread module determines whether the second memory space satisfies the memory requirement of the service data.
After the first thread module receives the service data, it determines whether its second memory space can satisfy the memory size required by the service data. That is, each service thread module has a fixed-size memory of its own (the second memory space), and the second memory spaces of the plurality of service thread modules can be used to process service data independently without affecting each other.
Alternatively, after a service thread module is started, it may send a registration request (a first registration request) to the memory management module. The memory management module sends at least one first memory from the first memory space to the first thread module according to the first registration request, and the first thread module stores the at least one first memory to form the second memory space. In this way the service thread acquires memory in advance for processing service data, does not need to fetch memory from the first memory space at the moment it is needed, and the processing delay of the service data is reduced.
The registration request may specifically cover registering fixed-length memory: the type of memory registered, the number of memories in the memory linked list, whether the linked list needs to be configured with a management lock (also referred to simply as a "lock"), and so on. The memory type may be a fixed-length memory of any one of the sizes 32B, 64B, or 128B; the memory linked list contains a plurality of fixed-length memories; and whether a lock is needed depends on whether one or more thread modules use the list: if a single thread module uses it, the memory linked list needs no management lock, while if multiple thread modules share it, the linked list must be configured with one.
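The fields listed above might be grouped as in the following hypothetical sketch; the patent does not give this structure, so every name here is illustrative:

```c
/* Hypothetical shape of a registration request: the thread declares, per
 * fixed-length class, how many blocks it expects and whether the resulting
 * linked list is shared (and therefore needs a management lock). */
typedef struct mem_reg_req {
    unsigned mid;       /* module ID of the registering thread module */
    unsigned fmt_size;  /* fixed-length class: 32, 64 or 128 bytes */
    unsigned count;     /* number of blocks to pre-allocate */
    int      shared;    /* 0: single user, no lock; 1: lock required */
} mem_reg_req_t;
```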
It should be understood that, in the embodiment of the present application, the first memory space may be expressed as a "big pool", and the second memory space may also be expressed as a "small pool corresponding to the first thread module".
803, when the second memory space meets the memory requirement of the service data, the first thread module processes the service data according to the second memory space.
When the second memory space can satisfy the memory requirement of the service data, for example when the first thread module needs a 64B memory to process the service data and the second memory space contains a 64B memory, the first thread module can process the service data using that 64B memory. Thus, in a scenario where the service data requires multiple service threads, the thread modules process the service data through their respective second memory spaces. This avoids the lock conflicts that would arise if, while multiple thread modules processed service data simultaneously, the memory management module locked all the memories in the first memory space through the management lock on the memory linked list; the lock overhead is therefore reduced and the concurrency of the system is improved.
Optionally, as an embodiment, when the second memory space does not satisfy the memory requirement of the service data, the first thread module sends a memory request to the memory management module to apply for memory resources; the memory management module allocates memory for the first thread module and sends it to the first thread module, and the first thread module stores the received memory in the second memory space. The first thread module can then process the service data using the current second memory space (i.e., including the newly applied-for memory).
It should be noted that, after receiving the memory request of the first thread module, the memory management module needs to determine whether the first memory space has remaining memory that can be allocated to the first thread module. If memory remains, the memory management module sets a management lock (denoted as a first management lock) on the first memory space, and releases the first management lock after the memory has been sent to the first thread module. The first management lock here may be the mutually exclusive lock described above.
It should be understood that the first thread module may continuously apply for the memory from the memory management module until the memory in the second memory space can meet the memory requirement of the service data.
Optionally, as another embodiment, the memory management module may divide the memory in the first memory space into a plurality of memory sets (denoted as first memory sets), each first memory set containing the same number of memories. In this way, each time the first thread module applies to the first memory space for memory, it can acquire one or more first memory sets; that is, when the first thread module needs a large amount of memory, traversing the memories one by one is avoided and time is saved.
Optionally, as another embodiment, the memory management module may set a memory index value for each of the plurality of first memory sets, and the plurality of memory index values may be kept in list form (for example, in an index table). For example, if the memory management module determines that there is remaining memory that can be allocated to the first thread module, it sets the first management lock on the first memory space and then obtains the head-node and tail-node pointers of the batch from the index table of the first memory space. When the first thread module needs to obtain memories in batches, they can be configured rapidly, with only a few pointers to modify, which saves overhead.
It should be understood that after the memory management module sets the first management lock, the first memory space can serve only the first thread module, and the memory management module releases the lock only after it has allocated the memory for the first thread module.
It should also be understood that after the memory management module configures the memory for the first thread module, the pointer of the index table in the first memory space needs to be modified to avoid chain breakage of the linked list.
Optionally, the processing, by the first thread module, the service data according to the memory in the second memory space may specifically be that the service data is encapsulated in the current memory in the second memory space to generate a service message, and then the service message is sent to the second thread module.
It should be understood that the current second memory space here may be an original second memory space corresponding to the first thread module, or may be a second memory space obtained after the original second memory space does not meet the memory requirement of the service data and the memory is obtained from the memory management module.
Optionally, the second thread module may also send a registration request (denoted as a second registration request) to the memory management module, and the memory management module sends at least one first memory from the first memory space to the second thread module according to the second registration request. The second thread module stores the at least one first memory to form a third memory space, so that the thread acquires memory in advance for processing service data, does not need to fetch memory from the first memory space at the moment it is needed, and the processing delay of the service data is reduced.
It should be understood that the memory management system may include a plurality of second thread modules, and each of the second thread modules may obtain the third memory space in the above manner. In addition, the number of the first memories included in the third memory space may be the same as or different from the number of the memories included in the second memory space, which is not limited in this application.
Optionally, in this embodiment of the present application, the first thread module may be a service thread module or a network thread module. When the first thread module is a service thread module and the second thread module is a network thread module, after the service thread module sends the service message to the network thread module, the network thread module sends the service message to the storage node. The network thread module then modifies the memory usage count and service path of its memory space (i.e., the third memory space). The memory usage count is used to detect whether the memory occupied by the service message is still in use, and the service path is used for rapid localization when the system suffers a memory leak, memory stomping, or the like.
For example, assuming that the memory usage count (ref_cnt) is 1 when the memory is initially allocated, it may be incremented by 1 when a subsequent application uses it or when the memory is handed to another thread. That is, when the ref_cnt value is greater than 1, a thread is still using the memory; when the ref_cnt value equals 1, the thread modules have finished using it and the memory can be released.
After the service thread module sends the service message to the network thread module, the network thread module sets a management lock (denoted as a second management lock) for the third memory space, and the second management lock is used for controlling the modification of the memory usage count and the service path, so that only one thread can modify the memory usage count and the service path. In addition, the second management lock is different from the first management lock, so that the lock conflict caused by setting the same management lock is avoided while the auxiliary design requirements such as memory positioning and the like are met.
Specifically, the configuration structure of the second management lock (i.e., the spin lock) may be as shown in fig. 7: the spin lock protects the ref_cnt and service-path (op_trace) attributes in the memory management header, and by subdividing the resources into two different locks the probability of lock conflict is further reduced.
It should be understood that the second management lock may be referred to as a "mini lock" for short, the third memory space may also be referred to as a "mini pool corresponding to the second thread module," and the "memory usage count" may also be referred to as a "memory reference count," which is not limited in this application.
Optionally, the embodiment of the present application may also be applied to the scenario in which the storage node feeds result information back to the user after processing the service message; that is, the processing flow is the inverse of the service-thread-to-network-thread process. In this case, the first thread module is a network thread module and the second thread module is a service thread module. The network thread module returns the result information to the service thread module, and the service thread module sends the result information to the user and modifies the memory usage count and usage trajectory of the network thread module's memory space. The memory usage count is used to detect whether the memory occupied by the service message is still in use, and the service path is used for rapid localization when the system suffers a memory leak, memory stomping, or the like.
It should be understood that the process from the network thread module to the service thread module is the same as the process from the service thread module to the network thread module, and details are not repeated in this application in order to avoid repetition.
Optionally, the second thread module may determine whether to release the memory resource occupied by the service message according to the memory usage count.
Optionally, if the initial preset value of the memory usage count is 1, incremented by 1 while the corresponding memory is occupied and decremented by 1 when it becomes idle, the second thread module may release the memory occupied by the service message into the third memory space when the memory usage count drops back to 1. The third memory space includes a first linked list and a second linked list, which are used to store the at least one first memory of the third memory space. That is to say, the at least one first memory in the third memory space may be managed in linked-list form. Taking the free list (denoted as the first linked list) as an example: during recycling, the second thread module first puts released memory into the free list of the third memory space, and once the number of memories in the free list exceeds the low water level (denoted as a first number threshold), released memory is no longer put into the free list but into the unit list instead. In this way the memory in the free list can conveniently be recycled in batches later, improving recycling efficiency.
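The two-list recycling just described can be sketched as follows (names are assumptions): freed blocks go to the free list until it reaches the low water level, after which they go to the unit list, keeping the free list small enough to hand back to the big pool in one batch later.

```c
#include <stddef.h>

typedef struct mblk { struct mblk *next; } mblk_t;

typedef struct recycle_pool {
    mblk_t *free_list;   /* first linked list: batch-recycle candidates */
    mblk_t *unit_list;   /* second linked list: overflow once at water level */
    int     free_count;
    int     water_level; /* low water level (first number threshold) */
} recycle_pool_t;

static void small_release(recycle_pool_t *sp, mblk_t *b)
{
    if (sp->free_count < sp->water_level) {
        b->next = sp->free_list;     /* below water level: into free_list */
        sp->free_list = b;
        sp->free_count++;
    } else {
        b->next = sp->unit_list;     /* free_list full: into unit_list */
        sp->unit_list = b;
    }
}
```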
It should be noted that the memory corresponding to the memory usage count may be any memory; that is, each memory may have its own memory usage count used to decide whether that resource should be returned to the third memory space, or a single memory usage count may cover the at least one memory occupied by one service message, which is not limited in the present application.
It should be understood that the low water level may be configured when the memory management module leaves the factory, or may be flexibly set according to the size or the number of the memories in the second memory space, which is not limited in this application.
Optionally, the memory management module may configure a second number threshold (which may be called the "high water level") for the third memory space. When the number of memories in the third memory space exceeds the high water level, i.e., when the memory management module determines that the total number of memories in the free list and the unit list exceeds the high water level, the memory management module can recycle part of the memory of the third memory space back into the first memory space, thereby preventing uneven IO from leaving resources fully occupied by individual service threads.
Optionally, in the process of recovering memory from the third memory space into the first memory space, the memory management module may recover the memory in batches; that is, all the memories in the free list may be returned to the first memory space in one batch, improving recovery efficiency.
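The high-water-level batch reclaim can be sketched as a single splice of the whole free-list chain back into the big pool, so the cost of the locked section does not grow with the number of blocks. Names are assumptions; the big pool is a stand-in global list here.

```c
#include <stddef.h>

typedef struct rblk { struct rblk *next; } rblk_t;

static rblk_t *big_head;  /* stand-in for the locked big-pool list */

/* If the thread's pool exceeds water_high, splice its entire free_list
 * back into the big pool in one step; returns 1 if a reclaim happened. */
static int reclaim_if_over(rblk_t **free_list, int total, int water_high)
{
    if (total <= water_high || !*free_list)
        return 0;
    /* lock(big pool); */
    rblk_t *end = *free_list;
    while (end->next)          /* find chain tail (could be cached instead) */
        end = end->next;
    end->next = big_head;
    big_head = *free_list;     /* splice the whole chain in */
    *free_list = NULL;
    /* unlock(big pool); */
    return 1;
}
```

Caching the chain tail in the small-pool metadata would make the locked section pure pointer updates, matching the batch-allocation path.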
Therefore, in the memory management method according to the embodiment of the present application, the first thread module receives the service data, determines whether the second memory space can satisfy the memory size required by the service data, and, when it can, processes the service data using the memory in the second memory space. Thus, in a multi-service-thread scenario, different thread modules process service data through their respective second memory spaces, avoiding the lock conflicts that would arise if the memory management module locked all the memories in the first memory space through the management lock on the memory linked list while multiple thread modules processed service data simultaneously; the lock overhead is reduced and the concurrency of the system is improved.
Fig. 9 is an interaction flow diagram of a memory management method according to an embodiment of the present application. The meaning of various terms in this embodiment is the same as that of the foregoing embodiments.
901. The service thread module starts and sends a first registration request to the memory management module.
902. The memory management module configures at least one first memory for the service thread module according to the first registration request. The first memory space of the memory management module comprises a plurality of first memories.
903. The service thread module stores the configured at least one first memory into its private memory space.
904. The network thread module starts and sends a second registration request to the memory management module.
905. The memory management module configures at least one first memory for the network thread module according to the second registration request.
It should be understood that the first registration request and the second registration request may be the same or different, and the application is not limited thereto.
906. The network thread module stores the configured at least one first memory into a private memory space of the network thread.
It should be understood that in the present application, the service thread and the network thread may apply for the memory at the same time, or may apply for the memory according to the sequence, that is, the sequence between steps 901 to 903 and steps 904 to 906 is not limited in the present application.
907. The service thread module receives the service data.
Step 907 may occur before or after any of steps 901 to 906; the present application does not limit this.
908. The service thread module determines, according to the service data, whether its private memory space satisfies the memory requirement of the service data.
909. When the private memory space of the service thread does not satisfy the memory requirement of the service data, the service thread module sends a memory request to the memory management module.
If the private memory space of the service thread satisfies the memory requirement of the service data, step 913 is performed directly.
910. The memory management module divides the first memory space into a plurality of memory sets and configures memory index values for the plurality of first memory sets according to the memory request, so that the memory management module can configure memory for the service thread module according to the memory index values, configuring a plurality of memories at one time without traversing them one by one.
It should be understood that, during the process of configuring the memory for the service thread module, the memory management module also needs to set the first management lock to prevent other thread modules from using the first memory space.
911. The memory management module configures at least one first memory set for the service thread module. After the memory is sent, the first management lock is unlocked.
912. The business thread module stores the at least one first memory set to a second memory space.
913. The service thread module encapsulates the service data in the memory of the second memory space to generate a service message.
914. The service thread module sends the service message to the network thread module.
915. The network thread module forwards the service message to the storage node.
916. After sending the service message, the network thread module modifies the memory usage count and usage trajectory in its private memory space.
917. When the memory usage count drops to 1, the network thread module releases the memory occupied by the service message into the second linked list in its private memory space; once the number of memories exceeds the first number threshold of the second linked list, memory is released into the first linked list instead. After the memory in the network thread module's private memory space exceeds the second memory number threshold, the memory of the second linked list is released into the first memory space of the memory management module.
918. The memory management module receives the memory released by the network thread module.
919. The memory management module modifies the index table information of the first memory space.
Therefore, in the memory management method according to the embodiment of the present application, the service thread module receives the service data, determines whether its private memory space can satisfy the memory size required by the service data, and, when it can, processes the service data using that private memory space. When the private memory space cannot satisfy the memory requirement, the service thread applies to the memory management module for memory until its private memory space can satisfy the requirement; the service thread module then encapsulates the service data in the memory of its private memory space to generate a service message and sends the service message to the network thread module. Thus, in a multi-service-thread scenario, different thread modules process service data through their respective private memory spaces, avoiding the lock conflicts that would arise if the memory management module locked all the memories in the first memory space through the management lock on the memory linked list while multiple thread modules processed service data simultaneously; the lock overhead is reduced and the concurrency of the system is improved.
Fig. 10 is an interaction flow diagram of a memory management method according to another embodiment of the present application. The meaning of various terms in this embodiment is the same as that of the foregoing embodiments.
1001. The network thread module starts and sends a second registration request to the memory management module.
1002. The memory management module configures at least one first memory for the network thread module according to the second registration request and sends it to the network thread module. The first memory space of the memory management module comprises a plurality of first memories.
1003. The network thread module stores the received at least one first memory into its private space.
1004. The service thread module starts and sends a first registration request to the memory management module.
1005. The memory management module configures at least one first memory for the service thread module according to the first registration request.
It should be understood that the first registration request and the second registration request may be the same or different, and the application is not limited thereto.
1006. The service thread module stores the configured at least one first memory into its private space.
It should be understood that in the present application, the service thread and the network thread may apply for the memory at the same time, or may apply for the memory according to the sequence, that is, the sequence between steps 1001 to 1003 and steps 1004 to 1006 is not limited in the present application.
1007. The network thread module receives response service data, for example, if the service data is used for a write operation, the response service data is result information of the write.
The step 1004 is before or after any step between the steps 1001 to 1006, and the application does not limit the steps. Alternatively, if the step is performed after the embodiment corresponding to fig. 9, the step 1007 may be performed after 918, and the above-mentioned steps 1001 to 1006 are not required.
1008. The network thread module determines, according to the service data, whether its private memory space meets the memory requirement of the service data.
1009. When the private memory space of the network thread module does not meet the memory requirement of the service data, the network thread module sends a memory request to the memory management module.
If the private memory space of the network thread module meets the memory requirement of the service data, step 1013 is executed directly.
1010. The memory management module divides the first memory space into a plurality of memory sets and configures a memory index value for each first memory set, so that the memory management module can configure memory for the network thread module according to the memory index values and can allocate a plurality of memories in sequence without traversing them one by one.
1011. The memory management module configures at least one first memory set for the network thread module.
1012. The network thread module stores the at least one first memory set to a private memory space of the network thread module.
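Steps 1010 to 1012 can be illustrated with a sketch. All names and sizes here (`SetPool`, `set_size`, the 256-byte blocks) are illustrative assumptions, not taken from the patent: the point is that partitioning the first memory space into indexed memory sets lets the manager hand out a whole set by popping a free index, with no per-block traversal.

```python
class SetPool:
    """First memory space partitioned into indexed memory sets."""

    def __init__(self, total_blocks=32, set_size=4):
        blocks = [bytearray(256) for _ in range(total_blocks)]
        # Partition the blocks into memory sets, keyed by a memory index value.
        self.sets = {i: blocks[i * set_size:(i + 1) * set_size]
                     for i in range(total_blocks // set_size)}
        self.free_indices = list(self.sets)   # indices of unallocated sets

    def allocate_set(self):
        # Allocation by index value: pop a free index in O(1),
        # no traversal of individual blocks.
        if not self.free_indices:
            return None, None
        idx = self.free_indices.pop()
        return idx, self.sets[idx]


pool = SetPool()
idx, mem_set = pool.allocate_set()   # step 1011: one whole set at once
```

Handing out a set of several blocks per request is what lets the manager "configure a plurality of memories in sequence" with a single index lookup.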
1013. The network thread module encapsulates the service data in memory in its private memory space to generate a service message.
1014. The network thread module sends the service message to the service thread module.
1015. The network thread module forwards the service data to the user.
1016. After sending out the service data, the network thread module modifies the usage count and usage track of the memory.
1017. When the memory usage count is 1, the network thread module releases the memory occupied by the service message to a first linked list in its private memory space, and releases memory to a second linked list after the number of memories in the first linked list exceeds a first number threshold. After the memory in the private memory space of the network thread module exceeds a second number threshold, the memory in the first linked list is released into the first memory space of the memory management module.
1018. The memory management module receives the memory released by the network thread module.
1019. The memory management module modifies the index table information of the first memory space.
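The release cascade of steps 1016 to 1019 can be sketched as below. Note that the description and the claims differ on which linked list is the hot one; this sketch follows the description's ordering (first linked list, then second linked list, then the global pool), and the thresholds and names are illustrative assumptions.

```python
from collections import deque

FIRST_THRESHOLD = 2    # capacity of the hot (first) linked list
SECOND_THRESHOLD = 4   # cap on blocks held privately in total


class PrivateSpace:
    """One thread's private memory space with a two-tier release cascade."""

    def __init__(self):
        self.first_list = deque()
        self.second_list = deque()
        self.returned_to_manager = []   # stands in for the first memory space

    def release(self, block):
        # Called when the memory usage count has dropped to 1, i.e. no other
        # holder remains and the block may be recycled.
        self.first_list.append(block)
        if len(self.first_list) > FIRST_THRESHOLD:
            # Overflow from the first list spills into the second list.
            self.second_list.append(self.first_list.popleft())
        if len(self.first_list) + len(self.second_list) > SECOND_THRESHOLD:
            # Private space is over its cap: return a block to the manager.
            self.returned_to_manager.append(self.second_list.popleft())


space = PrivateSpace()
for i in range(6):
    space.release(f"block{i}")
```

The two thresholds keep a small working set of blocks thread-local (no lock needed to reuse them) while bounding how much memory any one thread can hoard.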
Therefore, in the memory management method according to the embodiment of the present application, the network thread module receives the response service data, determines whether its private memory space can meet the memory size required by the service data, and, when it does, processes the service data using the memory in its private memory space. When the private memory space of the network thread module cannot meet the memory requirement of the service data, the network thread module applies to the memory management module for memory until its private memory space can meet that requirement; the network thread module then encapsulates the service data in the memory in its private memory space to generate a service message and sends the service message to the service thread module. Therefore, in a scenario with a plurality of service threads, different thread modules process service data through their respective private memory spaces, which avoids configuring a management lock on the memory linked list and locking all memories in the first memory space, thereby reducing lock conflicts and improving the concurrency capability of the system.
Having described the method for memory management according to the embodiment of the present application in detail, a memory management system according to the embodiment of the present application will be described below.
Fig. 11 illustrates a schematic block diagram of a memory management system 1100 according to an embodiment of the present application. The memory management system includes a memory management module 1110 and a plurality of first thread modules 1120. The memory management module 1110 is configured to manage a plurality of memory spaces, a first memory space of the plurality of memory spaces includes a plurality of first memories, and the capacity of each first memory is a fixed value; the first thread module 1120 is configured to manage a second memory space, and the second memory space includes at least one first memory. As shown in fig. 11:
the first thread module 1120 is configured to receive service data;
the first thread module 1120 is further configured to determine whether the second memory space meets the memory requirement of the service data;
the first thread module 1120 is further configured to process the service data using the memory in the second memory space.
Optionally, the first thread module 1120 is further configured to send a memory request to the memory management module 1110, where the memory request is used to request memory from the first memory space; the memory management module 1110 is configured to allocate memory according to the memory request and to set a first management lock for the first memory space, where the first management lock is used to control access to the memory in the first memory space; the memory management module 1110 configures memory for the first thread module and then releases the first management lock; and the first thread module 1120 is further configured to store the configured memory in the second memory space, so that the second memory space can meet the memory requirement of the service data.
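The locking discipline described above can be illustrated as follows; the names (`LockedPool`) and sizes are assumptions. The point is that the first management lock protects only the shared first memory space and is held just for the duration of one allocation, so threads operating purely on their private second memory spaces never contend for it.

```python
import threading


class LockedPool:
    """Shared first memory space guarded by the first management lock."""

    def __init__(self, n):
        self.free = [bytearray(64) for _ in range(n)]
        self.lock = threading.Lock()   # the "first management lock"

    def allocate(self, count):
        # Set the lock, configure memory for the requester, release the lock.
        with self.lock:
            taken = self.free[:count]
            del self.free[:count]
        return taken


pool = LockedPool(8)
results = []
threads = [threading.Thread(target=lambda: results.append(pool.allocate(2)))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Since the critical section covers only the pop from the shared free list, the lock is contended only on the rare path where a private space runs dry.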
Optionally, the memory management module 1110 is specifically configured to: allocate at least one first memory set in the first memory space according to the memory request, where the first memory space comprises a plurality of first memory sets and each first memory set comprises at least two first memories; the memory management module 1110 is further configured to send the at least one first memory set to the first thread module.
Optionally, the memory management module 1110 is configured to configure a memory index value for each of the plurality of first memory sets; the memory management module 1110 is specifically configured to: and allocating the at least one first memory set according to the memory request and the memory index value.
Optionally, the first thread module 1120 is further configured to send a first registration request to the memory management module, where the first registration request is used to request a memory from the memory management module; the memory management module 1110 is further configured to send at least one first memory to the first thread module according to the first registration request; the first thread module 1120 is further configured to generate the second memory space according to the at least one first memory.
Optionally, the memory management system further includes a second thread module, and the first thread module 1120 is specifically configured to: packaging the service data in the memory in the second memory space to generate a service message; the first thread module 1120 is further configured to send the service message to the second thread module.
Optionally, the second thread module is further configured to send a second registration request to the memory management module, where the second registration request is used to request a memory from the memory management module; the memory management module 1110 is further configured to send at least one first memory to the second thread module according to the second registration request; the second thread module is configured to generate a third memory space according to the at least one first memory.
Optionally, the first thread module 1120 is a service thread module, the second thread module is a network thread module, and the second thread module is further configured to send the service message to the storage node; the second thread module is further configured to set a second management lock for the third memory space after sending the service message to the storage node, and modify the memory usage count and the service path of the third memory space, where the second management lock is used to control the modification of the memory usage count and the service path, and the second management lock is different from the first management lock.
Optionally, the first thread module 1120 is a network thread module, the second thread module is a service thread module, and the second thread module is further configured to send the service message to the user; the second thread module is further configured to set a second management lock for the third memory space, and modify the memory usage count and the service path of the third memory space, where the second management lock is used to control the modification of the memory usage count and the service path, and the second management lock is different from the first management lock.
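A minimal sketch of the second management lock, under assumed names (`TrackedMemory`, `record_send`): it is a separate lock object from the pool-wide first management lock and guards only the usage count and usage track of one private memory space, so these bookkeeping updates never contend with shared-pool allocations.

```python
import threading


class TrackedMemory:
    """One memory block with a usage count and track guarded per-block."""

    def __init__(self):
        self.use_count = 1          # initial preset value per the description
        self.track = []
        self._lock = threading.Lock()   # the "second management lock"

    def record_send(self, event):
        # Modify the usage count and track under the second lock only;
        # the first (pool-wide) management lock is never taken here.
        with self._lock:
            self.use_count -= 1     # the sender drops its reference
            self.track.append(event)


mem = TrackedMemory()
mem.use_count += 1                  # block handed to the sending thread
mem.record_send("sent-to-user")     # after sending, count returns to 1
```

Keeping the two locks distinct means the release-side bookkeeping in one thread cannot stall allocations that other threads make from the first memory space.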
Optionally, the second thread module is further configured to determine whether to release the memory occupied by the service message according to the memory usage count.
Optionally, the third memory space includes a first linked list and a second linked list, an initial preset value of the memory usage count is 1, and the memory usage count is increased by 1 when the corresponding memory is in an occupied state and decreased by 1 when the corresponding memory is in an idle state; the second thread module is further configured to release the memory occupied by the service message into the second linked list when the memory usage count is 1; the second thread module is further configured to release the remaining memory occupied by the service message to the first linked list when the number of memories in the second linked list exceeds a first number threshold.
Optionally, the second thread module is further configured to release the memory in the second linked list to the first memory space when the number of memories in the third memory space exceeds a second number threshold.
Therefore, the first thread module receives the service data, determines whether the second memory space can meet the memory size required by the service data, and, when it does, processes the service data using the memory in the second memory space. The thread modules thus process service data through their respective second memory spaces, which avoids configuring a management lock on the memory linked list and locking all memories in the first memory space, thereby reducing lock conflicts and improving the concurrency capability of the system.
Fig. 12 is a schematic structural diagram of a memory management system according to an embodiment of the present application. As shown in fig. 12, the memory management system includes at least one processor 1202 (e.g., a general-purpose processor (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA), with computing and processing capabilities), and the processor 1202 is configured to manage and schedule the modules and devices within the memory management system. The memory management module 1110, the first thread module 1120, and the second thread module in the embodiment shown in fig. 11 may be implemented by the processor 1202. The memory management system also includes at least one transceiver 1205 (receiver/transmitter) and a memory 1206. The components of the memory management system communicate with each other through internal connection paths, passing control and/or data signals.
The methods disclosed in the embodiments of the present application may be applied to the processor 1202 or used to execute executable modules, such as computer programs, stored in the memory 1206. The memory 1206 may comprise a high-speed random access memory (RAM) and may also include a non-volatile memory, which may comprise both ROM and RAM and provide the required signaling or data, programs, and the like to the processor. A portion of the memory may also include non-volatile random access memory (NVRAM). A communication connection with at least one other network element is established through the at least one transceiver 1205 (which may be wired or wireless).
In some embodiments, the memory 1206 stores the program 12061, and the processor 1202 executes the program 12061 to:
receive service data via the transceiver 1205;
determine whether the second memory space meets the memory requirement of the service data; and
process the service data using the memory in the second memory space.
It should be noted that the memory management system may be embodied as the memory management system in the embodiment shown in fig. 11, and may be used to execute each step and/or flow corresponding to the memory management module 1110, the first thread module 1120, and the second thread module in the method embodiment shown in fig. 10.
It can be seen from the above technical solutions provided in the embodiments of the present application that the first thread module receives service data, determines whether the second memory space can meet the memory size required by the service data, and, when it does, processes the service data using the memory in the second memory space. The thread modules thus process service data through their respective second memory spaces, which avoids configuring a management lock on the memory linked list and locking all memories in the first memory space, thereby reducing lock conflicts and improving the concurrency capability of the system.
Embodiments of the present application also provide a computer storage medium that can store program instructions for performing any one of the methods described above.
Alternatively, the storage medium may be specifically the memory 1206.
In addition, the term "and/or" herein describes only an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functions. Whether these functions are implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solution of the present application that essentially contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
While the application has been described with reference to specific embodiments, its protection scope is not limited thereto; any person skilled in the art can readily conceive of equivalent modifications or substitutions within the technical scope disclosed herein, and such modifications and substitutions shall fall within the protection scope of the application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A method for memory management, where the method is applied to a memory management system including a memory management module and a plurality of first thread modules, the memory management module is configured to manage a plurality of memory spaces, a first memory space in the plurality of memory spaces includes a plurality of first memories, capacity of the first memory is a fixed value, the first thread module is configured to manage a second memory space, and the second memory space includes at least one first memory, and the method includes:
the first thread module receives service data;
the first thread module determines whether the second memory space meets the memory requirement of the business data;
when the second memory space meets the memory requirement of the service data, the first thread module processes the service data by using the memory in the second memory space;
the memory management system also comprises a second thread module;
wherein the processing, by the first thread module, the service data according to the memory in the second memory space includes:
the first thread module encapsulates the service data in the memory in the second memory space to generate a service message;
the first thread module sends the service message to the second thread module;
the method further comprises the following steps:
the second thread module sends a second registration request to the memory management module, wherein the second registration request is used for requesting a memory from the memory management module;
the memory management module sends at least one first memory to the second thread module according to the second registration request;
the second thread module generates a third memory space according to the at least one first memory;
the first thread module is a business thread module, the second thread module is a network thread module, and the method further comprises the following steps:
the second thread module sends the service message to a storage node;
after the second thread module sends the service message to the storage node, setting a second management lock for the third memory space, and modifying the memory usage count and the service path of the third memory space, where the second management lock is used to control the modification of the memory usage count and the service path, and the second management lock is different from the first management lock; or
The first thread module is a network thread module, the second thread module is a business thread module, and the method further comprises the following steps:
the second thread module sends the service message to a user;
after the second thread module sends the service message to the user, the second thread module sets a second management lock for the third memory space, and modifies the memory usage count and the service path of the third memory space, the second management lock is used for controlling the modification of the memory usage count and the service path, and the second management lock is different from the first management lock.
2. The method of claim 1, further comprising:
when the second memory space does not meet the memory requirement of the service data, the first thread module sends a memory request to the memory management module, wherein the memory request is used for requesting to apply for a memory from the first memory space;
the memory management module allocates a memory according to the memory request, and sets the first management lock for the first memory space, wherein the first management lock is used for controlling the access of the memory in the first memory space;
the memory management module configures a memory for the first thread module and releases the first management lock;
and the first thread module stores the configured memory into the second memory space so that the second memory space can meet the memory requirement of the business data.
3. The method of claim 2, wherein the memory management module allocating memory according to the memory request comprises:
the memory management module allocates at least one first memory set in the first memory space according to the memory request, wherein the first memory space comprises a plurality of first memory sets, and each first memory set comprises at least two first memories;
wherein the sending, by the memory management module, the memory to the first thread module includes:
the memory management module sends the at least one memory set to the first thread module.
4. The method of claim 3, further comprising:
the memory management module configures a memory index value for each first memory set in the plurality of first memory sets;
wherein the allocating, by the memory management module, at least one first memory set according to the memory request includes:
and the memory management module allocates the at least one first memory set according to the memory request and the memory index value.
5. The method of any of claims 1 to 4, wherein before the first thread module determines whether the second memory space satisfies the memory requirement of the business data, the method further comprises:
the first thread module sends a first registration request to the memory management module, wherein the first registration request is used for requesting a memory from the memory management module;
the memory management module sends at least one first memory to the first thread module according to the first registration request;
and the first thread module generates the second memory space according to the at least one first memory.
6. The method according to any one of claims 1 to 4, further comprising:
and the second thread module determines whether to release the memory occupied by the service message according to the memory usage count.
7. The method according to claim 6, wherein the third memory space includes a first linked list and a second linked list, an initial preset value of the memory usage count is 1, and the memory usage count is increased by 1 when the corresponding memory is in an occupied state and decreased by 1 when the corresponding memory is in an idle state;
wherein, the second thread module determines whether to release the memory occupied by the service message according to the memory usage count includes:
when the memory usage count is 1, the second thread module releases the memory occupied by the service message to the second linked list;
and when the number of the memories in the second linked list exceeds a first number threshold, the second thread module releases the residual memories occupied by the service messages to the first linked list.
8. The method of claim 7, further comprising:
and when the number of the memories in the third memory space exceeds a second number threshold, the second thread module releases the memories in the second linked list to the first memory space.
9. A memory management system is characterized by comprising a memory management module and a plurality of first thread modules, wherein the memory management module is used for managing a plurality of memory spaces, the first memory spaces in the memory spaces comprise a plurality of first memories, the capacity of the first memories is a fixed value, the first thread modules are used for managing a second memory space, and the second memory space comprises at least one first memory;
the first thread module is used for receiving service data;
the first thread module is further configured to determine whether the second memory space meets the memory requirement of the service data;
the first thread module is further configured to process the service data by using the memory in the second memory space;
the memory management system also comprises a second thread module;
the first thread module is specifically configured to:
the service data is packaged in the memory in the second memory space to generate a service message;
the first thread module is further configured to send the service message to the second thread module;
the second thread module is further configured to send a second registration request to the memory management module, where the second registration request is used to request a memory from the memory management module;
the memory management module is further configured to send at least one first memory to the second thread module according to the second registration request;
the second thread module is configured to generate a third memory space according to the at least one first memory;
the first thread module is a service thread module, the second thread module is a network thread module, and the second thread module is also used for sending the service message to a storage node;
the second thread module is further configured to set a second management lock for the third memory space after sending the service message to the storage node, and modify the memory usage count and the service path of the third memory space, where the second management lock is used to control the modification of the memory usage count and the service path, and the second management lock is different from the first management lock; or
The first thread module is a network thread module, the second thread module is a service thread module, and the second thread module is also used for sending the service message to a user;
the second thread module is further configured to set a second management lock for the third memory space, and modify the memory usage count and the service path of the third memory space, where the second management lock is used to control modification of the memory usage count and the service path, and the second management lock is different from the first management lock.
10. The memory management system according to claim 9, wherein the first thread module is further configured to send a memory request to the memory management module, the memory request requesting for a memory from the first memory space;
the memory management module is configured to allocate a memory according to the memory request, and set the first management lock to the first memory space, where the first management lock is used to control access to the memory in the first memory space;
the memory management module is further configured to configure a memory for the first thread module and release the first management lock;
the first thread module is further configured to store the configured memory in the second memory space, so that the second memory space can meet the memory requirement of the service data.
11. The memory management system according to claim 10, wherein the memory management module is specifically configured to:
allocating at least one first memory set in the first memory space according to the memory request, wherein the first memory space comprises a plurality of first memory sets, and each first memory set comprises at least two first memories;
the memory management module is further configured to send the at least one memory set to the first thread module.
12. The memory management system according to claim 11, wherein the memory management module is configured to configure a memory index value for each of the plurality of first memory sets;
the memory management module is specifically configured to:
and allocating the at least one first memory set according to the memory request and the memory index value.
13. The memory management system according to any one of claims 9 to 12, wherein the first thread module is further configured to send a first registration request to the memory management module, where the first registration request is used to request a memory from the memory management module;
the memory management module is further configured to send at least one first memory to the first thread module according to the first registration request;
the first thread module is further configured to generate the second memory space according to the at least one first memory.
14. The memory management system according to any one of claims 9 to 12, wherein the second thread module is further configured to determine whether to release the memory occupied by the service message according to the memory usage count.
15. The memory management system according to claim 14, wherein the third memory space includes a first linked list and a second linked list, an initial preset value of the memory usage count is 1, and the memory usage count is incremented by 1 when the corresponding memory is in the occupied state and decremented by 1 when the corresponding memory is in the idle state;
the second thread module is further configured to release the memory occupied by the service message into the second linked list when the memory usage count is 1;
the second thread module is further configured to release the remaining memory occupied by the service message to the first linked list when the number of memories in the second linked list exceeds a first number threshold.
16. The memory management system of claim 15, wherein the second thread module is further configured to release memory in the second linked list to the first memory space when the number of memories in the third memory space exceeds a second number threshold.
CN201611241110.2A 2016-12-29 2016-12-29 Memory management method and memory management system Active CN106844041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611241110.2A CN106844041B (en) 2016-12-29 2016-12-29 Memory management method and memory management system

Publications (2)

Publication Number Publication Date
CN106844041A CN106844041A (en) 2017-06-13
CN106844041B true CN106844041B (en) 2020-06-16

Family

ID=59113130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611241110.2A Active CN106844041B (en) 2016-12-29 2016-12-29 Memory management method and memory management system

Country Status (1)

Country Link
CN (1) CN106844041B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107147590A (en) * 2017-07-12 2017-09-08 郑州云海信息技术有限公司 A kind of method and system based on rdma protocol message communicating
CN107391281A (en) * 2017-08-09 2017-11-24 腾讯科技(深圳)有限公司 A kind of data processing method of server, device and storage medium
CN108037988A (en) * 2017-12-11 2018-05-15 郑州云海信息技术有限公司 A kind of samba multi-threading performances get method and device ready
CN108319505A (en) * 2017-12-18 2018-07-24 湖北鸿云科技股份有限公司 Network data communication system and method based on IOCP mechanism combinations pond fragment
CN108762940B (en) * 2018-04-12 2020-09-04 武汉斗鱼网络科技有限公司 Multithreading access method and device
CN108920276A (en) * 2018-06-27 2018-11-30 郑州云海信息技术有限公司 Linux system memory allocation method, system and equipment and storage medium
CN109522113B (en) * 2018-09-28 2020-12-18 迈普通信技术股份有限公司 Memory management method and device
CN109874027A (en) * 2019-03-11 2019-06-11 宸瑞普惠(广州)科技有限公司 A kind of low delay educational surgery demonstration live broadcasting method and its system
CN110275978A (en) * 2019-07-01 2019-09-24 成都启英泰伦科技有限公司 Quick storage of the voice big data on redundant arrays of inexpensive disks and access amending method
CN112711546A (en) * 2019-10-24 2021-04-27 华为技术有限公司 Memory configuration method and device and storage medium
CN113296962B (en) * 2021-07-26 2022-01-11 阿里云计算有限公司 Memory management method, device, equipment and storage medium
CN115878335A (en) * 2021-09-27 2023-03-31 华为技术有限公司 Lock transmission method and related device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101493787A (en) * 2009-02-18 2009-07-29 中兴通讯股份有限公司 Internal memory operation management method and system
CN102508872A (en) * 2011-10-12 2012-06-20 恒生电子股份有限公司 Data processing method and system of online processing system based on memory
CN102567107A (en) * 2011-10-31 2012-07-11 广东电网公司电力科学研究院 Highly-concurrent real-time memory resource management and scheduling method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8205062B2 (en) * 2009-10-14 2012-06-19 Inetco Systems Limited Tiered data management method and system for high performance data monitoring
WO2013020001A1 (en) * 2011-08-02 2013-02-07 Cavium, Inc. Lookup front end output processor

Also Published As

Publication number Publication date
CN106844041A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN106844041B (en) Memory management method and memory management system
US11010681B2 (en) Distributed computing system, and data transmission method and apparatus in distributed computing system
CN112422615B (en) Communication method and device
US8381230B2 (en) Message passing with queues and channels
CN105511954A (en) Method and device for message processing
CN110704214B (en) Inter-process communication method and device
US8666958B2 (en) Approaches to reducing lock communications in a shared disk database
US11579874B2 (en) Handling an input/output store instruction
WO2015085826A1 (en) Method and apparatus for accessing shared resource
CN110119304B (en) Interrupt processing method and device and server
US20170031798A1 (en) Activity tracing diagnostic systems and methods
CN110800328A (en) Buffer status reporting method, terminal and computer storage medium
CN112698959A (en) Multi-core communication method and device
US20130061009A1 (en) High Performance Free Buffer Allocation and Deallocation
CN112256460A (en) Inter-process communication method and device, electronic equipment and computer readable storage medium
US8543722B2 (en) Message passing with queues and channels
CN115733832A (en) Computing device, message receiving method, programmable network card and storage medium
US9021492B2 (en) Dual mode reader writer lock
CN110830385A (en) Packet capturing processing method, network equipment, server and storage medium
CN113826081A (en) Method for transmitting message in computing system and computing system
CN107911317B (en) Message scheduling method and device
US20170344488A1 (en) Sharing data structures between processes by semi-invasive hybrid approach
US9438539B1 (en) Apparatus and method for optimizing the number of accesses to page-reference count storage in page link list based switches
CN113672400A (en) Data processing method, device and equipment and readable storage medium
WO2022151950A1 (en) Tensor processing method, apparatus and device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant