WO2021254330A1 - Memory management method and system, client, server and storage medium - Google Patents

Memory management method and system, client, server and storage medium Download PDF

Info

Publication number
WO2021254330A1
WO2021254330A1 · Application PCT/CN2021/100120
Authority
WO
WIPO (PCT)
Prior art keywords
memory
client
server
queue
data
Prior art date
Application number
PCT/CN2021/100120
Other languages
French (fr)
Chinese (zh)
Inventor
金浩
屠要峰
韩银俊
郭斌
高洪
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2021254330A1 publication Critical patent/WO2021254330A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues

Definitions

  • This application relates to the field of communication technology, and specifically relates to a memory management method, system, client, server, and storage medium.
  • Remote Direct Memory Access (RDMA) is a network transmission technology that directly accesses the storage space of a remote node. It transmits data quickly from one end to the storage space of the other end, bypassing the operating system kernel protocol stack; because it does not occupy the resources of the node's central processing unit (CPU), it can significantly improve data transmission performance.
  • RDMA technology has been applied to various business scenarios, especially distributed storage systems with very high bandwidth and latency requirements. Using RDMA networks to transmit big data can give full play to the high performance of new hardware.
  • In the existing RDMA data transmission process, the client first initiates a memory allocation request to the server according to business needs, the server allocates free memory blocks from the memory pool to the client, and then the client writes data to the server memory; after the server detects that the data writing is complete, it processes the data in the memory according to the business logic.
  • Before the client writes data to the server memory the next time, it re-initiates a memory allocation request, and the server again allocates free memory blocks from the memory pool to the client. Therefore, the client needs to apply for memory space from the server before every data transfer.
  • The server allocates free memory according to memory usage, so the CPUs on both the server side and the client side need to participate in multiple interactions, which reduces the efficiency of data transmission.
  • In a distributed storage scenario where the client requests memory from multiple servers, the client must initiate memory allocation requests to each of them and do so frequently, which makes the implementation of distributed synchronization protocols more complicated.
  • the embodiments of the present application provide a memory management method, system, client, server, and storage medium, which can improve the efficiency of data transmission and the convenience of address management.
  • an embodiment of the present application provides a memory management method, the memory management method is applied to a client, and the memory management method includes:
  • an embodiment of the present application also provides a memory management method, the memory management method is applied to a server, and the memory management method includes:
  • Allocate memory for the client based on the memory allocation request, and send memory information of the allocated memory to the client, where the memory information is used to instruct the client to create a queue for recording the memory state;
  • a preset message is returned to the client, and the preset message is used to instruct the client to update the idle state of the memory in the queue.
  • an embodiment of the present application also provides a client, including a processor and a memory, where the memory stores a computer program, and when the processor invokes the computer program in the memory, it executes any of the memory management methods applied to the client provided in the embodiments of the present application.
  • an embodiment of the present application also provides a server, including a processor and a memory.
  • the memory stores a computer program.
  • when the processor invokes the computer program in the memory, it executes any of the memory management methods applied to the server provided in the embodiments of the present application.
  • an embodiment of the present application also provides a memory management system, including a client and a server.
  • the client is any of the clients provided in the embodiments of the present application
  • the server is any of the servers provided in the embodiments of the present application.
  • an embodiment of the present application also provides a storage medium for computer-readable storage, the storage medium is used to store a computer program, and the computer program is loaded by a processor to execute any of the memory management methods provided in the embodiments of the present application.
  • In the embodiments of the present application, the client can obtain the memory information corresponding to the memory allocated to it by the server, create a queue for recording the memory state based on the memory information, and update the memory state in the queue when the data interaction between the client and the server changes. This allows the client to monitor the memory state of the allocated memory and update the queue in time when the data interaction changes, so that the memory state is monitored effectively without frequently requesting memory from the server, which improves the efficiency of data transmission and the convenience of address management.
  • FIG. 1 is a schematic diagram of a scene of a memory management method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a memory management method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a memory management method provided by another embodiment of the present application.
  • FIG. 4 is a schematic diagram of updating the memory state in a circular queue according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a memory management method provided by another embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a memory management method provided by another embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a client provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • the embodiments of the present application provide a memory management method, system, client, server, and storage medium.
  • the memory management method can be applied to network devices, which can include devices such as hubs, switches, bridges, routers, gateways, and repeaters.
  • Figure 1 is a schematic diagram of a scenario for implementing the memory management method provided by an embodiment of the present application.
  • the memory management method can be applied to the memory management scenario of RDMA data transmission, and the server can establish a connection with the client.
  • the server can exchange data with the client.
  • the client can be a client integrated on a terminal such as a desktop computer, a notebook computer, a mobile phone, and a smart TV.
  • the server can be an RDMA server.
  • the server can configure a memory pool, the client can send a memory allocation request to the server, and then the server can allocate memory for the client and send the memory information of the allocated memory to the client.
  • Based on the memory information, the client can create a circular queue used to record the state of the memory.
  • the client can send the data to be processed to the server and update the memory status in the circular queue, for example, update the occupancy status of the memory in the circular queue occupied by the data to be processed.
  • the server can process the data to be processed.
  • the server sends a preset message to the client.
  • the client can update the idle state of the memory in the circular queue based on the preset message. In this way the client can monitor the memory state of the allocated memory and update the queue in time when the data interaction changes, so that the memory state is monitored effectively without frequently requesting memory from the server, which improves the efficiency of data transmission and the convenience of address management.
  • FIG. 2 is a schematic flowchart of a memory management method provided by an embodiment of the present application.
  • the memory management method is applied to the client, and the memory management method may include, but is not limited to, step S101 to step S103, etc., which may be specifically as follows:
  • the memory information may include a memory address, a memory size, primary key information (which may also be referred to as key information), and so on.
  • the client can passively obtain the memory information corresponding to the memory allocated by the server to the client.
  • For example, the server can allocate memory to the client regularly or automatically, and send the memory information corresponding to the allocated memory to the client after the allocation.
  • the client can actively acquire the memory information corresponding to the memory allocated by the server to the client.
  • acquiring the memory information corresponding to the memory allocated by the server to the client may include: sending a memory allocation request to the server; and receiving the memory information returned by the server based on the memory allocation request.
  • the client can send a memory allocation request to the server.
  • the memory allocation request can carry information such as the required memory size.
  • For example, the client can apply to the server to allocate memory with a size of R bytes.
  • After the server receives the memory allocation request sent by the client, it can allocate memory to the client based on the request and then send the memory information of the allocated memory to the client, and the client receives the memory information returned by the server based on the memory allocation request.
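  • As an illustration only (the structures and field names below are assumptions for the sake of example, not part of this disclosure), the memory allocation request and the returned memory information could be represented as follows:

```c
#include <stdint.h>

/* Illustrative only: a memory allocation request carrying the required size. */
struct mem_alloc_request {
    uint64_t required_bytes;   /* e.g. the R bytes requested by the client */
};

/* Illustrative only: memory information returned by the server. */
struct mem_info {
    uint64_t base_addr;    /* starting address of the allocated memory        */
    uint64_t total_bytes;  /* total size of the allocated memory              */
    uint32_t block_bytes;  /* size m of each memory block, in bytes           */
    uint32_t key;          /* primary key (key) information for remote access */
};
```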
  • sending a memory allocation request to the server may include: sending a memory allocation request to the server when the client is started; or sending a memory allocation request to the server when the client detects that the memory allocated by the server is insufficient.
  • the client can send a memory allocation request to the server when it finishes starting.
  • The client can determine whether the memory allocated by the server is sufficient based on the data to be sent; for example, the client can determine, according to the memory state it maintains, whether the free memory is sufficient to store the data to be sent.
  • When the client detects that the memory allocated by the server is insufficient, it sends a memory allocation request to the server.
  • When the client detects that the memory allocated by the server is sufficient, it does not need to send a memory allocation request to the server at this time, even if it currently needs to send data to the server.
  • the server can pre-configure the memory pool and generate configuration information.
  • the configuration information can include the number of memory blocks in the memory pool (that is, the capacity of the memory pool), the size of each memory block (that is, the unit in bytes), the free memory waiting time threshold (Max Available Time, MAT), and the free memory accumulation threshold (Max Available Segments, MAS), etc.
  • the configuration information can be written into the configuration file.
  • The configuration file can specify the RDMA memory pool capacity, memory block size, MAT, MAS, and the synchronization method, etc.; the synchronization method is the way in which the server synchronizes the memory state to the client.
  • For example, when the free memory waiting time of a memory block is greater than MAT, a preset message (such as a syn message) carrying the memory state is sent to the client, so that the client can update the memory state it maintains based on the received preset message.
  • the server can be configured with a memory pool with a capacity of N.
  • the memory pool includes multiple memory blocks, each memory block is m bytes, and all memory blocks can be registered to the RNIC network card separately.
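  • As a minimal sketch, assuming illustrative field names, the configuration information described above could be held in a structure such as the following:

```c
#include <stdint.h>

/* Illustrative configuration of the RDMA memory pool (all names are assumptions). */
struct mem_pool_config {
    uint32_t capacity;       /* number of memory blocks N in the pool                        */
    uint32_t block_bytes;    /* size m of each memory block, in bytes                        */
    uint32_t mat_ms;         /* Max Available Time: free memory waiting time threshold       */
    uint32_t mas_blocks;     /* Max Available Segments: free memory accumulation threshold   */
    int      sync_immediate; /* synchronization method: nonzero = sync on every state change */
};
```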
  • After receiving the memory information, the client can create a queue for recording the memory state based on memory information such as the address, size, and key information of the memory blocks.
  • The type and form of the queue can be set flexibly according to actual needs.
  • For example, the queue can take the form of a list, or it can be a circular queue. With RDMA memory synchronization based on a circular queue, the queue used to record the memory state can be updated in time as the memory changes, mapping the server's memory state locally so that the client can perceive the occupied or idle state of the memory in time and keep the memory synchronized for the RDMA transmission process.
  • When the server allocates n memory blocks to the client, the client can establish a circular queue composed of memory blocks 0 to n-1.
  • the table length of the circular queue is n, and the memory status of the server is maintained based on the circular queue.
  • The memory between [tail]--->[head] is in the occupied state, and the memory between [head]--->[tail] is in the idle state.
  • For example, when memory block 0 and memory block 1 are in the occupied state and memory block 2 to memory block n-1 are in the idle state, the tail pointer points to memory block 0 and the head pointer points to memory block 2.
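  • As an illustrative sketch (the structure and names are assumptions, not the disclosed implementation), the client-side circular queue that mirrors the server's memory state could be represented as follows, with the blocks between the tail and head pointers treated as occupied and the remaining blocks as idle:

```c
#include <stdint.h>

/* Illustrative circular queue mirroring the server-side memory state on the client. */
struct mem_ring {
    uint32_t n;        /* number of memory blocks (table length of the queue) */
    uint32_t m;        /* size of each memory block, in bytes                 */
    uint32_t head;     /* first idle block                                    */
    uint32_t tail;     /* first occupied block                                */
    uint32_t occupied; /* number of occupied blocks                           */
};

static struct mem_ring mem_ring_create(uint32_t n, uint32_t m) {
    struct mem_ring q = { .n = n, .m = m, .head = 0, .tail = 0, .occupied = 0 };
    return q;
}

static uint32_t mem_ring_free_blocks(const struct mem_ring *q) {
    return q->n - q->occupied;  /* lets the client check capacity locally before writing */
}
```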
  • changes in the data interaction between the client and the server may include the client sending the data to be processed to the server, and the server completing the processing of the data (the data is the data to be processed sent by the client to the server), etc.
  • The memory state can include an occupied state and an idle state.
  • When the client needs to send data to the server and detects that there is enough free memory in the circular queue, it can execute the RDMA command immediately; since free memory is consumed, the client updates the occupied state of the memory in the circular queue. When the client receives the syn message returned by the server after it completes the data processing, the client updates the idle state of the memory in the circular queue, that is, the free memory increases. This solves the problem of frequently applying for memory during RDMA data transmission and optimizes the speed of high-speed network data transmission.
  • step S103 may include but is not limited to step S1031 to step S1033, etc., which may be specifically as follows:
  • Step S1031: Send the to-be-processed data to the server, and update the occupied state of the memory in the queue that the data occupies.
  • Step S1032: Receive a preset message returned by the server after processing the data.
  • Step S1033: Update the idle state of the memory in the queue based on the preset message.
  • The client can send the data to be processed to the server. After the server receives the data, it needs to consume memory to cache it, so part or all of the memory that the server allocated to the client will be occupied; at this time, the client can update the occupied state of the memory in the queue that the data occupies.
  • The client can determine the size of the memory in the circular queue that the data to be processed needs to occupy. For example, the data to be processed needs to occupy 2*m bytes and the size of each memory block in the circular queue is m bytes, so the memory block interval in the circular queue occupied by the data can be determined from the block size of m bytes and the 2*m bytes of data to be processed. The first pointer, for example the tail pointer, is then set to indicate the occupied interval; for instance, the head pointer points to memory block 3, and the memory state of memory block 0 and memory block 1 between [tail]--->[head] is the occupied state.
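  • Continuing the illustrative sketch above (still an assumption rather than the disclosed implementation), marking the memory consumed by outgoing data as occupied could look like this, with the pointer sliding forward by the number of blocks the data occupies:

```c
/* Illustrative: mark the blocks consumed by outgoing data as occupied.
 * Returns 0 on success, or -1 if there is not enough free memory, in which
 * case the client would first send a memory allocation request to the server. */
static int mem_ring_occupy(struct mem_ring *q, uint64_t data_bytes) {
    uint32_t need = (uint32_t)((data_bytes + q->m - 1) / q->m); /* e.g. 2 blocks for 2*m bytes */
    if (need > mem_ring_free_blocks(q))
        return -1;
    q->head = (q->head + need) % q->n;  /* slide the pointer over the newly occupied interval */
    q->occupied += need;
    return 0;
}
```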
  • After the server receives the to-be-processed data sent by the client, it can process the data one by one; the processing mode can be set flexibly according to the actual application scenario, and the specific content is not limited here.
  • After the server finishes processing the data, the memory block used to store the data is updated to an idle state, and the server can return a preset message to the client.
  • The preset message can be set flexibly according to actual needs; for example, it may be a syn message, and it may carry information such as a notification that the data processing has been completed and the size of the released memory (that is, the size of the memory now in an idle state).
  • the client can receive the preset message returned by the server after the data processing is completed, determine the idle state of the memory based on the information carried in the preset message, and update the idle state of the memory in the queue.
  • When the server returns the preset message to the client, it may return the message immediately after the data processing is completed, that is, synchronize as soon as the memory state changes. Alternatively, the server can compare the free memory waiting time with the free memory waiting time threshold: when the free memory waiting time corresponding to the client is greater than the threshold, it returns a preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return a preset message at this time.
  • Alternatively, the server can compare the free memory with the free memory accumulation threshold: when the free memory corresponding to the client is greater than the threshold, it returns a preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return a preset message at this time.
  • When the queue is a circular queue composed of multiple memory blocks, updating the idle state of the memory in the circular queue based on the preset message may include: determining the data processed by the server based on the preset message; determining the memory block interval released in the circular queue according to the size of each memory block and the memory size occupied by the processed data; and setting the second pointer to point to the first memory block of the released interval to indicate the idle state of the memory in the circular queue.
  • The client can determine the data processed by the server based on the received preset message. For example, the processed data occupies a memory size of 6*m bytes and the size of each memory block in the circular queue is m bytes, so the released memory block interval in the circular queue can be determined from the block size of m bytes and the 6*m bytes occupied by the processed data, for example from memory block 2 to memory block 7. The second pointer (for example, the head pointer) can then be set to point to the first memory block of the released interval, that is, the head pointer points to memory block 3, to indicate the idle state of the memory in the circular queue.
  • For example, when the head pointer points to memory block 0, the memory state of memory block 0 and memory block 1 between [tail]--->[head] is the occupied state, and the memory state of memory block 3 to memory block n-1 between [head]--->[tail] is the idle state. Subsequently, after the client receives a syn message indicating that t*m bytes of memory blocks have been released, it can control the tail pointer to slide forward by t memory blocks to update the memory state to the idle state; the pointer recycles from 0 after reaching the boundary, so that the memory state is updated based on the memory synchronization mechanism of the circular queue.
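  • A matching illustrative sketch for handling a syn message that reports t*m bytes of released memory (again an assumption, following the circular-queue convention of the sketches above) might be:

```c
/* Illustrative: handle a preset (syn) message reporting that the server has
 * finished processing data and released freed_bytes of memory; the pointer
 * slides forward by t blocks and recycles from 0 after reaching the boundary. */
static void mem_ring_release(struct mem_ring *q, uint64_t freed_bytes) {
    uint32_t t = (uint32_t)(freed_bytes / q->m);  /* t released memory blocks      */
    if (t > q->occupied)
        t = q->occupied;                          /* defensive clamp for the sketch */
    q->tail = (q->tail + t) % q->n;
    q->occupied -= t;
}
```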
  • the embodiment of this application simplifies the RDMA unilateral data transmission operation, and solves the problem that the existing RDMA unilateral operation requires multiple applications for memory.
  • the client can directly initiate RDMA operations without applying for RDMA memory.
  • When a server memory block becomes free, the memory state can be updated in time by sending syn messages in batches or with a delay according to the configuration, which greatly reduces the number of interactions required for unilateral operations, significantly improves data transmission efficiency, and realizes a truly unilateral operation.
  • the client can apply for memory from one or more servers.
  • the client can send memory allocation requests to each server, and each server can allocate memory for the client and send the memory information of the allocated memory to the client.
  • the client can create a circular queue based on the memory information corresponding to each server, and each circular queue records the memory state corresponding to each server.
  • the client can send the data to be processed to each server, and update the memory status in the circular queue of the corresponding server, for example, update the occupancy status of the memory in the circular queue occupied by the data to be processed.
  • Each server can process the data to be processed. After processing the data, each server sends a preset message to the client.
  • The client can update the idle state of the memory in the corresponding circular queue based on the server identification, the amount of free memory, and other information carried in the preset message.
  • For example, client A can apply to server B, server C, and server D for an RDMA memory pool of length N respectively, and create circular queue 1 corresponding to server B, circular queue 2 corresponding to server C, and circular queue 3 corresponding to server D, that is, three circular queues in total.
  • Client A needs to synchronize data of length L to server B, server C, and server D.
  • Client A writes x bytes of data to server B, server C, and server D in turn.
  • the head pointer in the circular queue corresponding to each server slides forward by x/m memory blocks.
  • After each server receives the written data and processes it, it returns a syn message to client A according to the configuration; when client A receives the syn message, the tail pointer in the circular queue corresponding to that server moves forward to update the memory state.
  • the tail pointers of different servers B, C, and D may not be synchronized.
  • Client A calculates the accumulated value ACKi of syn packets for each server, and judges that the transaction is completed according to the majority principle of the raft protocol.
  • When the memory management method of the embodiment of the present application is applied to the raft synchronization scenario, it not only reduces the number of memory synchronizations between client A and each server, but the syn message can also serve as a raft response message, improving the synchronization efficiency of the raft protocol.
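  • As an illustrative sketch of the majority check described above (the function and the per-server ACK accounting are assumptions for the purpose of example), client A might decide that the transaction is complete as follows:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative: client A tracks, per server i, the accumulated bytes ACKi
 * acknowledged by syn messages, and treats the transaction as committed once
 * a majority of servers have acknowledged the full length L. */
static bool raft_majority_committed(const uint64_t ack[], int num_servers, uint64_t length_l) {
    int acked = 0;
    for (int i = 0; i < num_servers; i++)
        if (ack[i] >= length_l)
            acked++;
    return acked > num_servers / 2;   /* majority principle of the raft protocol */
}
```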
  • In summary, the client can obtain the memory information corresponding to the memory allocated to it by the server, create a queue for recording the memory state based on the memory information, and update the memory state in the queue when the data interaction between the client and the server changes. This allows the client to monitor the memory state of the allocated memory and update the queue in time when the data interaction changes, so that the memory state is monitored effectively without frequently requesting memory from the server, which improves the efficiency of data transmission and the convenience of address management.
  • FIG. 5 is a schematic flowchart of a memory management method provided by an embodiment of the present application.
  • the memory management method is applied to a server, and the memory management method may include but is not limited to steps S201 to S203, etc., and may be specifically as follows:
  • the memory management method before receiving the memory allocation request sent by the client, may further include: configuring a memory pool and generating configuration information.
  • the configuration information may include the number of memory blocks in the memory pool, the size of the memory blocks, the free memory waiting time threshold, and the free memory accumulation threshold.
  • allocating memory for the client based on the memory allocation request and sending memory information of the allocated memory to the client may include: allocating memory in the memory pool for the client based on the memory allocation request; and extracting the memory information of the allocated memory from the configuration information and sending it to the client.
  • the memory information may include memory address, memory size, and primary key information (may also be referred to as key information).
  • the server can pre-configure the memory pool and generate configuration information.
  • the configuration information can include the number of memory blocks in the memory pool (that is, the capacity of the memory pool), the size of each memory block (that is, the unit in bytes), the free memory waiting time threshold MAT, and the free memory accumulation threshold MAS, etc.
  • the configuration information can be written into a configuration file.
  • the configuration file can specify RDMA memory pool capacity, memory block size, MAT, MAS, and synchronization mode, etc.
  • The synchronization mode is the way in which the server synchronizes the memory state to the client; for example, when the free memory waiting time of a memory block is greater than MAT, a preset message (for example, a syn message) carrying the memory state is sent to the client, so that the client can update the memory state it maintains based on the received preset message.
  • the server can be configured with a memory pool with a capacity of N.
  • the memory pool includes multiple memory blocks, each memory block is m bytes, and all memory blocks can be registered to the RNIC network card separately.
  • the client can send a memory allocation request to the server when it finishes starting up.
  • The client can determine whether the memory allocated by the server is sufficient based on the data to be sent; for example, the client can determine, according to the memory state it maintains, whether the free memory is sufficient to store the data to be sent.
  • When the client detects that the memory allocated by the server is insufficient, it sends a memory allocation request to the server.
  • When the client detects that the memory allocated by the server is sufficient, it does not need to send a memory allocation request to the server at this time, even if it currently needs to send data to the server.
  • S202 Allocate memory for the client based on the memory allocation request, and send memory information of the allocated memory to the client, where the memory information is used to instruct the client to create a queue for recording the memory state.
  • the client can send a memory allocation request to the server.
  • the memory allocation request can carry information such as the required memory size.
  • For example, the client can apply to the server to allocate memory with a size of R bytes.
  • After the server receives the memory allocation request sent by the client, it can allocate memory to the client based on the request and then send the memory information of the allocated memory to the client, and the client receives the memory information returned by the server based on the memory allocation request.
  • After receiving the memory information, the client can create a queue for recording the memory state based on memory information such as the address, size, and key information of the memory blocks.
  • The type and form of the queue can be set flexibly according to actual needs.
  • For example, the queue can take the form of a list, or it can be a circular queue. With RDMA memory synchronization based on a circular queue, the queue used to record the memory state can be updated in time as the memory changes, mapping the server's memory state locally so that the client can perceive the occupied or idle state of the memory in time and keep the memory synchronized for the RDMA transmission process.
  • When the server allocates n memory blocks to the client, the client can establish a circular queue composed of memory blocks 0 to n-1.
  • the table length of the circular queue is n, and the memory status of the server is maintained based on the circular queue.
  • The memory between [tail]--->[head] is in the occupied state, and the memory between [head]--->[tail] is in the idle state.
  • For example, when memory block 0 and memory block 1 are in the occupied state and memory block 2 to memory block n-1 are in the idle state, the tail pointer points to memory block 0 and the head pointer points to memory block 2.
  • S203 Receive the to-be-processed data sent by the client, and process the to-be-processed data.
  • After the server receives the data to be processed, it needs to consume memory to cache it, so some or all of the memory allocated by the server to the client will be occupied; at this time, the client can update the occupied state of the memory in the queue that the data occupies.
  • After the server receives the to-be-processed data sent by the client, it can process the data one by one; the processing mode can be set flexibly according to the actual application scenario, and the specific content is not limited here.
  • After the server finishes processing the data, the memory block used to store the data is updated to an idle state, and the server can return a preset message to the client.
  • The preset message can be set flexibly according to actual needs; for example, it may be a syn message, and it may carry information such as a notification that the data processing has been completed and the size of the released memory (that is, the size of the memory now in an idle state).
  • the client can receive the preset message returned by the server after the data processing is completed, determine the idle state of the memory based on the information carried in the preset message, and update the idle state of the memory in the queue.
  • Returning the preset message to the client may include: when the waiting time of the free memory corresponding to the client is greater than the free memory waiting time threshold, returning the preset message to the client; or, when the free memory corresponding to the client is greater than the free memory accumulation threshold, returning the preset message to the client.
  • When the server returns the preset message to the client, it may return the message immediately after the data processing is completed, that is, synchronize as soon as the memory state changes. Alternatively, the server can compare the free memory waiting time with the free memory waiting time threshold: when the free memory waiting time corresponding to the client is greater than the threshold, it returns a preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return a preset message at this time.
  • Alternatively, the server can compare the free memory with the free memory accumulation threshold: when the free memory corresponding to the client is greater than the threshold, it returns a preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return a preset message at this time.
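  • As an illustrative sketch only (the function and parameter names are assumptions), the server's decision on when to return the preset (syn) message, based on the synchronization mode and the MAT/MAS thresholds described above, could look like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative: decide whether the server should return a preset (syn) message now. */
static bool should_send_syn(bool sync_immediate,
                            uint32_t free_wait_ms, uint32_t mat_ms,
                            uint32_t free_blocks,  uint32_t mas_blocks) {
    if (sync_immediate)
        return true;               /* synchronize as soon as the memory state changes */
    if (free_wait_ms > mat_ms)
        return true;               /* free memory has waited longer than MAT          */
    if (free_blocks > mas_blocks)
        return true;               /* accumulated free memory exceeds MAS             */
    return false;                  /* otherwise keep batching and continue to wait    */
}
```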
  • the server may receive the memory allocation request sent by the client, allocate memory for the client based on the memory allocation request, and send the memory information of the allocated memory to the client.
  • The memory information is used to instruct the client to create a queue for recording the memory state.
  • The server receives the to-be-processed data sent by the client, processes it, and returns a preset message to the client after the data processing is completed.
  • the preset message is used to instruct the client to update the idle state of the memory in the queue.
  • This solution sends the memory information to the client so that the client can create a queue for recording the memory state based on the memory information and effectively monitor the memory state of the allocated memory, and returns a preset message to the client after the data processing is completed so that the client can update the idle state of the memory in the queue in time.
  • FIG. 6 is a schematic diagram of the interaction between the client and the server in the memory management system provided by an embodiment of the present application.
  • the details can be as follows:
  • the server configures a memory pool.
  • the server can configure the memory pool to generate configuration information.
  • the client sends a memory allocation request to the server.
  • When the client starts, it can send a memory allocation request to the server; or, when the client detects that the memory allocated by the server is insufficient, it can send a memory allocation request to the server.
  • the server allocates memory to the client based on the received memory allocation request, and sends memory information to the client.
  • the server sends the memory information corresponding to the memory allocated to the client to the client.
  • the memory information may include memory address, memory size, and primary key information.
  • the memory information is used to instruct the client to create a queue for recording the memory status.
  • the client creates a circular queue based on the received memory information.
  • the client receives the memory information returned by the server based on the memory allocation request, and creates a queue for recording the memory status based on the memory information.
  • the memory status in the queue can be updated.
  • the client sends the data to be processed to the server.
  • the client updates the memory status in the circular queue.
  • the client can update the occupied state of the memory in the circular queue that the data occupies. It should be noted that step S14 and step S15 can be executed in either order: step S14 first and then step S15, or step S15 first and then step S14, or step S14 and step S15 at the same time.
  • the server processes the data.
  • After the server receives the to-be-processed data sent by the client, it can process the data.
  • After processing the data, the server sends a preset message to the client.
  • After processing the data, the server returns a preset message to the client, and the preset message is used to instruct the client to update the idle state of the memory in the queue.
  • When the waiting time of the free memory corresponding to the client is greater than the free memory waiting time threshold, the preset message is returned to the client; or, when the free memory corresponding to the client is greater than the free memory accumulation threshold, the preset message is returned to the client.
  • the client updates the memory status in the circular queue.
  • the client receives the preset message returned by the server after completing the data processing, and updates the idle state of the memory in the queue based on the preset message.
  • FIG. 7 is a schematic block diagram of the structure of a client according to an embodiment of the present application.
  • the client 300 may include a processor 302, a memory 303, and a communication interface 304 connected through a system bus 301, where the memory 303 may include a non-volatile computer-readable storage medium and internal memory.
  • the non-volatile computer-readable storage medium can store the computer program.
  • the computer program includes program instructions, and when the program instructions are executed, the processor can execute any memory management method.
  • the processor 302 is used to provide computing and control capabilities to support the operation of the entire client.
  • the memory 303 provides an environment for running a computer program in a non-volatile computer-readable storage medium.
  • the processor 302 can execute any memory management method.
  • the communication interface 304 is used for communication.
  • the structure shown in FIG. 7 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the client 300 to which the solution of the present application is applied.
  • the specific client 300 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
  • the bus 301 is, for example, an I2C (Inter-integrated Circuit) bus
  • the memory 303 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk or a mobile hard disk, etc.
  • the processor 302 may be a central processing unit (Central Processing Unit, CPU), the processor 302 may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the processor 302 is configured to run a computer program stored in the memory 303 to perform the following steps: acquire memory information corresponding to the memory allocated by the server to the client; create a queue for recording the memory state based on the memory information; and when the data interaction between the client and the server changes, update the memory state in the queue.
  • When updating the memory state in the queue, the processor 302 further executes: sending the data to be processed to the server and updating the occupied state of the memory in the queue that the data occupies; receiving the preset message returned by the server after the data processing is completed; and updating the idle state of the memory in the queue based on the preset message.
  • the queue is a circular queue composed of multiple memory blocks.
  • When updating the occupied state of the memory in the queue that the data occupies, the processor 302 further executes: determining the size of the memory in the circular queue occupied by the data; determining the memory block interval in the circular queue occupied by the data according to the size of each memory block and the size of the memory occupied by the data; and setting the first pointer to point to the first memory block of the interval to indicate the occupied state of the memory in the circular queue occupied by the data.
  • the queue is a circular queue composed of multiple memory blocks.
  • When updating the idle state of the memory in the circular queue based on the preset message, the processor 302 further executes: determining the data processed by the server based on the preset message; determining the memory block interval released in the circular queue according to the size of each memory block and the memory size occupied by the processed data; and setting the second pointer to point to the first memory block of the released interval to indicate the idle state of the memory in the circular queue.
  • the memory information includes memory address, memory size, and primary key information.
  • When acquiring the memory information corresponding to the memory allocated by the server to the client, the processor 302 further executes: sending a memory allocation request to the server; and receiving the memory information returned by the server based on the memory allocation request.
  • When sending a memory allocation request to the server, the processor 302 further executes: sending a memory allocation request to the server when the client starts; or sending a memory allocation request to the server when the client detects that the memory allocated by the server is insufficient.
  • FIG. 8 is a schematic block diagram of the structure of a server according to an embodiment of the present application.
  • the server 400 may include a processor 402, a memory 403, and a communication interface 404 connected through a system bus 401, where the memory 403 may include a non-volatile computer-readable storage medium and internal memory.
  • the non-volatile computer-readable storage medium can store the computer program.
  • the computer program includes program instructions, and when the program instructions are executed, the processor can execute any memory management method.
  • the processor 402 is used to provide computing and control capabilities to support the operation of the entire server.
  • the memory 403 provides an environment for running a computer program in a non-volatile computer-readable storage medium.
  • the processor 402 can execute any memory management method.
  • the communication interface 404 is used for communication.
  • FIG. 8 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the server 400 to which the solution of the present application is applied.
  • the specific server 400 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
  • the bus 401 is, for example, an I2C (Inter-integrated Circuit) bus
  • the memory 403 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk or a mobile hard disk, etc.
  • the processor 402 may be a central processing unit (Central Processing Unit, CPU), the processor 402 may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the processor 402 is configured to run a computer program stored in the memory 403 to perform the following steps:
  • Receive the memory allocation request sent by the client; allocate memory for the client based on the memory allocation request, and send the memory information of the allocated memory to the client.
  • The memory information is used to instruct the client to create a queue for recording the memory state.
  • Receive the to-be-processed data sent by the client and process it, and after the data processing is completed, return a preset message to the client, where the preset message is used to instruct the client to update the idle state of the memory in the queue.
  • Before receiving the memory allocation request sent by the client, the processor 402 further executes: configuring a memory pool and generating configuration information.
  • When allocating memory for the client based on the memory allocation request and sending the memory information of the allocated memory to the client, the processor 402 further executes: allocating memory in the memory pool for the client based on the memory allocation request; and extracting the memory information of the allocated memory from the configuration information and sending it to the client.
  • the configuration information includes the number of memory blocks in the memory pool, the size of the memory blocks, the free memory waiting time threshold, and the free memory accumulation threshold.
  • When returning a preset message to the client, the processor 402 further executes: when the waiting time of the free memory corresponding to the client is greater than the free memory waiting time threshold, returning the preset message to the client; or, when the free memory corresponding to the client is greater than the free memory accumulation threshold, returning the preset message to the client.
  • the embodiment of the present application also provides a storage medium; the storage medium is a computer-readable storage medium that stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement any of the memory management methods provided in the embodiments of the present application.
  • the computer-readable storage medium may be the internal storage unit of the mobile terminal of the foregoing embodiment, such as the hard disk or memory of the mobile terminal.
  • the computer-readable storage medium may also be an external storage device of the mobile terminal, such as a plug-in hard disk equipped on the mobile terminal, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), etc.
  • Since the computer program stored in the computer-readable storage medium can execute any of the memory management methods provided in the embodiments of the present application, it can achieve the beneficial effects that can be achieved by any of those methods.
  • For details of the beneficial effects, refer to the previous embodiments, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application discloses a memory management method and system, a client, a server and a storage medium, belonging to the field of communication technologies. Said method comprises: acquiring memory information corresponding to a memory allocated to a client by a server; creating, on the basis of the memory information, a queue for recording memory states; and when data exchange between the client and the server is changed, updating the memory state in the queue. Thus, a client is enabled to monitor the memory state of an allocated memory, and update a memory state in a queue in a timely manner when data exchange is changed, so as to effectively monitor the memory state without the need to request for a memory from the server frequently, thereby improving the efficiency of data transmission and the convenience of address management.

Description

Memory management method, system, client, server and storage medium
Cross-reference to related applications
This application is based on Chinese patent application CN202010568853.0, entitled "Memory Management Method, System, Client, Server, and Storage Medium", filed on June 19, 2020, and claims priority to that application, the entire disclosure of which is incorporated into this application by reference.
Technical field
This application relates to the field of communication technology, and specifically relates to a memory management method, system, client, server, and storage medium.
Background
Remote Direct Memory Access (RDMA) is a network transmission technology that directly accesses the storage space of a remote node. It transmits data quickly from one end to the storage space of the other end, bypassing the operating system kernel protocol stack; because it does not occupy the resources of the node's central processing unit (CPU), it can significantly improve data transmission performance. RDMA technology has been applied to various business scenarios, especially distributed storage systems with very high bandwidth and latency requirements. Using RDMA networks to transmit big data can give full play to the high performance of new hardware.
In the prior art, in the process of RDMA data transmission, the client first initiates a memory allocation request to the server according to business needs, the server allocates free memory blocks from the memory pool to the client, and then the client writes data to the server memory; after the server detects that the data writing is complete, it processes the data in the memory according to the business logic. Before the client writes data to the server memory the next time, it re-initiates a memory allocation request, and the server again allocates free memory blocks from the memory pool to the client. Therefore, the client needs to apply for memory space from the server before every data transfer, and the server allocates free memory according to memory usage, so the CPUs on both the server side and the client side need to participate in multiple interactions, which reduces the efficiency of data transmission. In addition, in the existing distributed storage scenario where the client requests memory from multiple servers, the client must initiate memory allocation requests to each of them and do so frequently, which makes the implementation of distributed synchronization protocols more complicated.
Summary of the invention
The embodiments of the present application provide a memory management method, system, client, server, and storage medium, which can improve the efficiency of data transmission and the convenience of address management.
In a first aspect, an embodiment of the present application provides a memory management method, the memory management method is applied to a client, and the memory management method includes:
Acquiring memory information corresponding to the memory allocated by the server to the client;
Creating a queue for recording the memory state based on the memory information;
When the data interaction between the client and the server changes, updating the memory state in the queue.
In a second aspect, an embodiment of the present application also provides a memory management method, the memory management method is applied to a server, and the memory management method includes:
Receiving a memory allocation request sent by the client;
Allocating memory for the client based on the memory allocation request, and sending memory information of the allocated memory to the client, where the memory information is used to instruct the client to create a queue for recording the memory state;
Receiving the to-be-processed data sent by the client, and processing the to-be-processed data;
After the data processing is completed, returning a preset message to the client, where the preset message is used to instruct the client to update the idle state of the memory in the queue.
In a third aspect, an embodiment of the present application also provides a client, including a processor and a memory, where the memory stores a computer program, and when the processor invokes the computer program in the memory, it executes any of the memory management methods applied to the client provided in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application also provides a server, including a processor and a memory, where the memory stores a computer program, and when the processor invokes the computer program in the memory, it executes any of the memory management methods applied to the server provided in the embodiments of the present application.
In a fifth aspect, an embodiment of the present application also provides a memory management system, including a client and a server, where the client is any of the clients provided in the embodiments of the present application, and the server is any of the servers provided in the embodiments of the present application.
In a sixth aspect, an embodiment of the present application also provides a storage medium for computer-readable storage, where the storage medium is used to store a computer program, and the computer program is loaded by a processor to execute any of the memory management methods provided in the embodiments of the present application.
In the embodiments of the present application, the client can obtain the memory information corresponding to the memory allocated to it by the server, create a queue for recording the memory state based on the memory information, and update the memory state in the queue when the data interaction between the client and the server changes. This allows the client to monitor the memory state of the allocated memory and update the queue in time when the data interaction changes, so that the memory state is monitored effectively without frequently requesting memory from the server, which improves the efficiency of data transmission and the convenience of address management.
Description of the drawings
FIG. 1 is a schematic diagram of a scenario of a memory management method provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a memory management method provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a memory management method provided by another embodiment of the present application;
FIG. 4 is a schematic diagram of updating the memory state in a circular queue provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of a memory management method provided by another embodiment of the present application;
FIG. 6 is a schematic flowchart of a memory management method provided by another embodiment of the present application;
FIG. 7 is a schematic structural diagram of a client provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a server provided by an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by those skilled in the art without creative work fall within the protection scope of the present application.
The flowcharts shown in the drawings are only examples; they do not have to include all of the content and operations/steps, nor do the operations/steps have to be executed in the described order. For example, some operations/steps may be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation.
Some implementations of the present application are described in detail below with reference to the drawings. In the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other.
The embodiments of the present application provide a memory management method, system, client, server, and storage medium. The memory management method may be applied to a network device, and the network device may include devices such as hubs, switches, bridges, routers, gateways, and repeaters.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a scenario in which the memory management method provided by an embodiment of the present application is implemented. As shown in FIG. 1, the memory management method may be applied to a memory management scenario of RDMA data transmission. The server may establish a connection with the client, and data interaction may take place between the server and the client. The client may be a client integrated on a terminal such as a desktop computer, a laptop computer, a mobile phone, or a smart TV, and the server may be an RDMA server. Specifically, the server may configure a memory pool, the client may send a memory allocation request to the server, and the server may then allocate memory to the client and send memory information of the allocated memory to the client; at this point the client may create, based on the memory information, a circular queue for recording the memory state. The client may send to-be-processed data to the server and update the memory state in the circular queue, for example, update the occupied state of the memory in the circular queue that is occupied by the to-be-processed data. The server may process the to-be-processed data, and after the processing is completed, the server sends a preset message to the client; the client may then update the idle state of the memory in the circular queue based on the preset message. The client can thus monitor the memory state of the allocated memory and update the memory state in the queue in time when the data interaction changes, so that the memory state is effectively monitored without frequently requesting memory from the server, which improves the efficiency of data transmission and the convenience of address management.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a memory management method provided by an embodiment of the present application. The memory management method is applied to a client, and may include, but is not limited to, steps S101 to S103, which may specifically be as follows.
S101: Acquire memory information corresponding to the memory allocated by the server to the client.
In an embodiment, the memory information may include a memory address, a memory size, primary key information (which may also be referred to as key information), and the like. The client may passively acquire the memory information corresponding to the memory allocated by the server to the client; for example, after the server allocates memory to the client periodically or automatically, it may send the memory information corresponding to the allocated memory to the client.
In order to improve the flexibility of acquiring memory information, the client may actively acquire the memory information corresponding to the memory allocated to it by the server. In an embodiment, acquiring the memory information corresponding to the memory allocated by the server to the client may include: sending a memory allocation request to the server; and receiving the memory information returned by the server based on the memory allocation request.
For example, when the client needs the server to allocate memory, it may send a memory allocation request to the server, and the memory allocation request may carry information such as the required memory size; for example, the client may apply to the server for memory of size R bytes. If the data to be sent by the client is L bytes, the size of the memory actually requested may be R = min(ceil(L/m), r), where m denotes the size of each memory block in bytes, r denotes the number of currently free memory blocks in the memory allocated to the client, and ceil denotes the round-up function. After receiving the memory allocation request sent by the client, the server may allocate memory to the client based on the memory allocation request and then send the memory information of the allocated memory to the client, and the client may receive the memory information returned by the server based on the memory allocation request.
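For illustration only, the request-size rule above can be expressed as a short C sketch; the function name and the interpretation of the result as a number of blocks (the text states the formula in terms of ceil(L/m) and the free-block count r) are assumptions, not part of the claimed method.
    #include <stddef.h>

    /* Hypothetical helper: how much memory to request from the server.
     * L - length in bytes of the data the client intends to send
     * m - size of one memory block in bytes
     * r - number of currently free blocks already allocated to the client
     * Returns min(ceil(L / m), r), i.e. the number of blocks to request,
     * capped by the free blocks the client already tracks locally. */
    static size_t request_blocks(size_t L, size_t m, size_t r)
    {
        size_t needed = (L + m - 1) / m;   /* ceil(L / m) without floating point */
        return needed < r ? needed : r;
    }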
In an embodiment, sending the memory allocation request to the server may include: sending the memory allocation request to the server when the client starts; or sending the memory allocation request to the server when the client detects that the memory allocated by the server is insufficient.
In order to improve the timeliness of sending the memory allocation request, the client may send the memory allocation request to the server when it finishes starting up. Alternatively, in order to improve the flexibility of sending memory allocation requests, the client may determine, based on the data to be sent, whether the memory already allocated by the server is sufficient; for example, the client may determine, according to the memory state it maintains, whether the free memory is sufficient to store the data to be sent. When the client detects that the memory allocated by the server is insufficient, it sends a memory allocation request to the server; when the client detects that the memory allocated by the server is sufficient, it does not need to send a memory allocation request to the server even if it currently needs to send data to the server.
It should be noted that the server may configure the memory pool in advance and generate configuration information. The configuration information may include the number of memory blocks in the memory pool (i.e., the memory pool capacity), the memory block size (in bytes), a free-memory waiting-time threshold (Max Available Time, MAT), a free-memory accumulation threshold (Max Available Segments, MAS), and the like. The configuration information may be written into a configuration file, which may specify the RDMA memory pool capacity, the memory block size, the MAT, the MAS, a synchronization mode, and so on. The synchronization mode may be the manner in which the server synchronizes the memory state to the client; for example, when the free-memory waiting time of a memory block is greater than the MAT, a preset message carrying the memory state (for example, a syn message) is sent to the client, so that the client updates the memory state it maintains based on the received preset message. For example, the server may configure a memory pool with a capacity of N; the memory pool includes multiple memory blocks, each of m bytes, and all memory blocks may be registered with the RNIC network card. After receiving a memory allocation request sent by the client carrying a required memory size of R bytes, the server may allocate memory in the memory pool to the client based on the memory allocation request; for example, the server may allocate n contiguous memory blocks to the client such that the condition m*n >= R is satisfied, and return memory information such as the addresses, memory sizes, and key information of the n memory blocks to the client.
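As a non-limiting sketch of the configuration and allocation step described above, the configuration file contents and the per-block memory information might be modelled as follows; all structure and field names are assumptions, and the actual registration of the blocks with the RNIC is deliberately left out because the application does not specify it.
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical view of the configuration information generated by the server. */
    struct pool_config {
        size_t   pool_blocks;  /* N: number of memory blocks in the memory pool      */
        size_t   block_size;   /* m: size of each memory block in bytes              */
        uint64_t mat_ms;       /* MAT: free-memory waiting-time threshold            */
        size_t   mas_blocks;   /* MAS: free-memory accumulation threshold            */
        int      sync_mode;    /* how the memory state is synchronized to the client */
    };

    /* Memory information returned to the client for one allocated block,
     * mirroring the address / size / key fields mentioned in the text. */
    struct block_info {
        uint64_t addr;  /* address of the registered memory block */
        size_t   size;  /* block size in bytes                    */
        uint32_t key;   /* key information used for RDMA access   */
    };

    /* Smallest n such that m * n >= R, i.e. how many contiguous blocks
     * the server hands out for a request of R bytes. */
    static size_t blocks_for_request(const struct pool_config *cfg, size_t R)
    {
        return (R + cfg->block_size - 1) / cfg->block_size;
    }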
S102: Create, based on the memory information, a queue for recording the memory state.
After receiving the memory information, the client may create, based on memory information such as the addresses of the memory blocks, the memory size, and the key information, a queue for recording the memory state. The type and form of the queue may be flexibly set according to actual needs; for example, the queue may take the form of a list, and the queue may be a circular queue, so as to implement circular-queue-based RDMA memory synchronization. The queue used to record the memory state can update the memory state in time as the memory changes, so that the memory state of the server is mapped locally and the client can perceive the occupied or idle state of the memory in time, achieving memory synchronization for the RDMA transmission process.
For example, when the server allocates n memory blocks to the client, the client may build a circular queue consisting of memory blocks 0 to n-1, with a queue length of n, and maintain the server's memory state based on the circular queue. When all memory blocks in the circular queue are idle, the head pointer and the tail pointer may be placed at their initial positions, that is, the head pointer points to memory block 0 and the tail pointer points to memory block n-1; the memory between [tail] ---> [head] is in the occupied state, and the memory between [head] ---> [tail] is in the idle state. For example, as shown in FIG. 4, when memory block 0 and memory block 1 are occupied and memory block 2 to memory block n-1 are idle, the tail pointer points to memory block 0 and the head pointer points to memory block 2.
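The client-side bookkeeping described above can be pictured with the following sketch. The structure, the field names, and the extra used counter (which the text does not mention but which makes a full queue distinguishable from an empty one) are assumptions for illustration only.
    #include <stddef.h>

    /* Hypothetical client-side view of the n blocks allocated by one server.
     * Blocks between tail and head are occupied; blocks between head and
     * tail are free, as described for FIG. 4. */
    struct mem_ring {
        size_t n;     /* number of memory blocks in the circular queue      */
        size_t head;  /* next free block to be consumed by an RDMA write    */
        size_t tail;  /* oldest occupied block, released when a syn arrives */
        size_t used;  /* number of currently occupied blocks                */
    };

    static void ring_init(struct mem_ring *q, size_t n)
    {
        q->n = n;      /* initially every block is free */
        q->head = 0;
        q->tail = 0;
        q->used = 0;
    }

    static size_t ring_free_blocks(const struct mem_ring *q)
    {
        return q->n - q->used;
    }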
S103: When the data interaction between the client and the server changes, update the memory state in the queue.
Changes in the data interaction between the client and the server may include the client sending to-be-processed data to the server, the server completing processing of the data (i.e., the to-be-processed data sent by the client to the server), and so on, and the memory state may include an occupied state, an idle state, and so on. For example, when the client needs to send data to the server, if the client detects that the circular queue has enough free memory, it may execute the RDMA instruction immediately; free memory is consumed at this point, so the client updates the occupied state of the memory in the circular queue. If the client receives a syn message returned by the server after the server completes the data processing, the client updates the idle state of the memory in the circular queue, that is, increases the free memory. This solves the problem of frequent memory requests in the RDMA data transmission process and optimizes the speed of data transmission over a high-speed network.
Referring to FIG. 3, in an embodiment, step S103 may include, but is not limited to, steps S1031 to S1033, which may specifically be as follows.
Step S1031: Send the to-be-processed data to the server, and update the occupied state of the memory in the queue that is occupied by the data.
Step S1032: Receive the preset message returned by the server after the data processing is completed.
Step S1033: Update the idle state of the memory in the queue based on the preset message.
In order to improve the accuracy and timeliness of the memory state update, the client may send the to-be-processed data to the server. Since the server needs to consume memory to buffer the to-be-processed data after receiving it, part or all of the memory allocated by the server to the client will be occupied, and at this point the client may update the occupied state of the memory in the queue that is occupied by the data.
In an embodiment, the queue is a circular queue consisting of multiple memory blocks, and updating the occupied state of the memory in the queue that is occupied by the data may include: determining the size of the memory in the circular queue occupied by the data; determining, according to the size of each memory block in the circular queue and the size of the memory in the circular queue occupied by the data, the interval of memory blocks in the circular queue occupied by the data; and setting a first pointer to point to the first memory block of the memory block interval to indicate the occupied state of the memory in the circular queue occupied by the data.
Taking a circular queue consisting of multiple memory blocks as an example, the client may determine the size of the memory in the circular queue that the to-be-processed data needs to occupy. For example, as shown in FIG. 4, the to-be-processed data needs to occupy 2*m bytes of memory, and the size of each memory block in the circular queue is m bytes. In this case, according to the memory block size of m bytes and the 2*m bytes of memory in the circular queue occupied by the to-be-processed data, it can be determined that the to-be-processed data occupies the memory block interval from memory block 0 to memory block 1. A first pointer (for example, the tail pointer) is set to point to the first memory block of the memory block interval, that is, the tail pointer points to memory block 0, to indicate the occupied state of the memory in the circular queue occupied by the data; the head pointer points to memory block 3, the memory state of memory block 0 and memory block 1 between [tail] ---> [head] is the occupied state, and the memory state of memory block 3 to memory block n-1 between [head] ---> [tail] is the idle state. Subsequently, when the client continues to write x*m bytes of data to the server, it may control the head pointer to slide forward by x memory blocks.
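Continuing the illustrative mem_ring sketch introduced above (and therefore relying on the same assumed structure), advancing the write position after an RDMA write of x*m bytes might look like this:
    #include <stdbool.h>

    /* Mark x blocks as occupied after the client issues an RDMA write of
     * x * m bytes. Returns false if fewer than x blocks are free, in which
     * case the client would first request more memory from the server. */
    static bool ring_occupy(struct mem_ring *q, size_t x)
    {
        if (x > ring_free_blocks(q))
            return false;
        q->head = (q->head + x) % q->n;  /* head slides forward, wrapping at n */
        q->used += x;
        return true;
    }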
After receiving the to-be-processed data sent by the client, the server may process the to-be-processed data one by one; the processing manner may be flexibly set according to the actual application scenario and is not limited here. After the server completes the processing of the data, the memory block used to store the data is updated to the idle state, and the server may return a preset message to the client. The preset message may be flexibly set according to actual needs; for example, the preset message may be a syn message, and it may carry information such as a notification that the data processing has been completed and the size of the released memory (that is, the size of the memory in the idle state). At this point, the client may receive the preset message returned by the server after the data processing is completed, determine the idle state of the memory based on the information carried in the preset message, and update the idle state of the memory in the queue.
It should be noted that, when the server returns the preset message to the client, it may return the preset message immediately after the data processing is completed, that is, synchronize immediately when the memory state changes. Alternatively, the server may determine whether the free-memory waiting time is greater than the free-memory waiting-time threshold: when the free-memory waiting time corresponding to the client is greater than the threshold, the server returns the preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return the preset message to the client at this point. Alternatively, the server may determine whether the free memory is greater than the free-memory accumulation threshold: when the free memory corresponding to the client is greater than the threshold, the server returns the preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return the preset message to the client at this point.
In an embodiment, the queue is a circular queue consisting of multiple memory blocks, and updating the idle state of the memory in the circular queue based on the preset message may include: determining, based on the preset message, the data that has been processed by the server; determining, according to the size of each memory block in the circular queue and the size of the memory occupied by the processed data, the interval of memory blocks released in the circular queue; and setting a second pointer to point to the first memory block of the released memory block interval to indicate the idle state of the memory in the circular queue.
Taking a circular queue consisting of multiple memory blocks as an example, the client may determine, based on the received preset message, the data that has been processed by the server. For example, the processed data occupies 6*m bytes of memory and the size of each memory block in the circular queue is m bytes; according to the memory block size of m bytes and the 6*m bytes occupied by the processed data, it can be determined that the released memory block interval in the circular queue is memory block 2 to memory block 7. A second pointer (for example, the head pointer) may be set to point to the first memory block of the released memory block interval, that is, the head pointer points to memory block 3, to indicate the idle state of the memory in the circular queue. If the head pointer points to memory block 0, the memory state of memory block 0 and memory block 1 between [tail] ---> [head] is the occupied state, and the memory state of memory block 3 to memory block n-1 between [head] ---> [tail] is the idle state. Subsequently, after the client receives a syn message indicating that t*m bytes of memory blocks have been released, it may control the tail pointer to slide forward by t memory blocks to update the memory state to the idle state, and the pointer wraps around to 0 after reaching the boundary, so that the memory state is updated based on the circular-queue memory synchronization mechanism. The embodiments of the present application simplify the RDMA one-sided data transmission operation and solve the problem that existing RDMA one-sided operations require multiple memory requests: as long as there is free memory in the circular queue, the client can directly initiate an RDMA operation without applying for RDMA memory; when memory blocks on the server side become free, syn messages can be sent in batches or with a delay according to the configuration to update the memory state in time, so that the number of interactions in one-sided operations is greatly reduced, the data transmission efficiency is significantly improved, and a true one-sided operation is achieved.
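The matching release step, again continuing the same assumed sketch, advances the tail pointer when a syn message reports that t*m bytes have been released and wraps it at the queue boundary:
    /* Mark t blocks as free again after a syn message reports that the
     * server has finished processing t * m bytes of data. */
    static void ring_release(struct mem_ring *q, size_t t)
    {
        if (t > q->used)
            t = q->used;                 /* defensive clamp; not stated in the text */
        q->tail = (q->tail + t) % q->n;  /* tail slides forward, wrapping at n      */
        q->used -= t;
    }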
It should be noted that the client may apply for memory from one or more servers. When the client applies for memory from multiple servers, it may send a memory allocation request to each server; each server may allocate memory to the client and send the memory information of the allocated memory to the client, and the client may then create a circular queue for each server based on the corresponding memory information, with each circular queue recording the memory state corresponding to one server. The client may send to-be-processed data to each server and update the memory state in the circular queue of the corresponding server, for example, update the occupied state of the memory in the circular queue occupied by the to-be-processed data. Each server may process the to-be-processed data and, after the processing is completed, send a preset message to the client; the client may then update the idle state of the memory in the corresponding circular queue based on information carried in the preset message, such as the server identifier and the free memory size. For example, client A may apply to server B, server C, and server D for RDMA memory pools of length N respectively, and create three circular queues: circular queue 1 corresponding to server B, circular queue 2 corresponding to server C, and circular queue 3 corresponding to server D. When client A needs to synchronize data of length L to server B, server C, and server D, the memory size x occupied by the data is calculated as x = min(ceil(L/m), min()), where min() is the minimum free window among the circular queues, ceil denotes the round-up function, and m denotes the size of each memory block. Client A writes x bytes of data to server B, server C, and server D in turn, and the head pointer in the circular queue corresponding to each server slides forward by x/m memory blocks. After each server receives the written data and processes it, it returns a syn message to client A according to the configuration; when client A receives a syn message, the tail pointer in the circular queue corresponding to that server moves forward to update the memory state, and the tail pointers of server B, server C, and server D may not be synchronized with each other. Client A calculates the accumulated syn message count ACKi for each server, so as to determine that the transaction is completed according to the majority principle of the raft protocol. Applying the memory management method of the embodiments of the present application to the raft synchronization scenario not only reduces the number of memory synchronizations between client A and each server, but also allows the syn message to serve as a raft response message, improving the synchronization efficiency of the raft protocol.
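Purely as an illustration of the multi-server calculation above, and still using the assumed mem_ring sketch, the free-window computation and the raft-style majority check might be written as follows; these helpers are assumptions and are not defined by the application itself.
    #include <stdbool.h>
    #include <stddef.h>

    /* x = min(ceil(L / m), smallest free window among the k circular queues). */
    static size_t replication_window(const struct mem_ring *rings, size_t k,
                                     size_t L, size_t m)
    {
        size_t x = (L + m - 1) / m;          /* ceil(L / m) in blocks */
        for (size_t i = 0; i < k; i++) {
            size_t free_blocks = rings[i].n - rings[i].used;
            if (free_blocks < x)
                x = free_blocks;
        }
        return x;
    }

    /* Majority principle: the transaction is considered complete once more
     * than half of the k servers have acknowledged at least `expected`
     * syn messages (ACKi in the text). */
    static bool majority_acked(const size_t *acks, size_t k, size_t expected)
    {
        size_t ok = 0;
        for (size_t i = 0; i < k; i++)
            if (acks[i] >= expected)
                ok++;
        return ok * 2 > k;
    }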
In the embodiments of the present application, the client can acquire the memory information corresponding to the memory allocated to it by the server, create a queue for recording the memory state based on the memory information, and update the memory state in the queue when the data interaction between the client and the server changes. The client can thus monitor the memory state of the allocated memory and update the memory state in the queue in time when the data interaction changes, so that the memory state is effectively monitored without frequently requesting memory from the server, which improves the efficiency of data transmission and the convenience of address management.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of a memory management method provided by an embodiment of the present application. The memory management method is applied to a server, and may include, but is not limited to, steps S201 to S204, which may specifically be as follows.
S201: Receive a memory allocation request sent by a client.
In an embodiment, before receiving the memory allocation request sent by the client, the memory management method may further include: configuring a memory pool and generating configuration information.
In an embodiment, the configuration information may include the number of memory blocks in the memory pool, the memory block size, a free-memory waiting-time threshold, and a free-memory accumulation threshold.
In an embodiment, allocating memory to the client based on the memory allocation request and sending the memory information of the allocated memory to the client may include: allocating memory in the memory pool to the client based on the memory allocation request; and extracting the memory information of the allocated memory from the configuration information and sending the memory information to the client.
The memory information may include a memory address, a memory size, primary key information (which may also be referred to as key information), and the like.
The server may configure the memory pool in advance and generate configuration information. The configuration information may include the number of memory blocks in the memory pool (i.e., the memory pool capacity), the memory block size (in bytes), the free-memory waiting-time threshold MAT, the free-memory accumulation threshold MAS, and the like. The configuration information may be written into a configuration file, which may specify the RDMA memory pool capacity, the memory block size, the MAT, the MAS, a synchronization mode, and so on. The synchronization mode may be the manner in which the server synchronizes the memory state to the client; for example, when the free-memory waiting time of a memory block is greater than the MAT, a preset message carrying the memory state (for example, a syn message) is sent to the client, so that the client updates the memory state it maintains based on the received preset message. For example, the server may configure a memory pool with a capacity of N; the memory pool includes multiple memory blocks, each of m bytes, and all memory blocks may be registered with the RNIC network card. After receiving a memory allocation request sent by the client carrying a required memory size of R bytes, the server may allocate memory in the memory pool to the client based on the memory allocation request; for example, the server may allocate n contiguous memory blocks to the client such that the condition m*n >= R is satisfied, and return memory information such as the addresses, memory sizes, and key information of the n memory blocks to the client.
It should be noted that the client may send the memory allocation request to the server when it finishes starting up. Alternatively, in order to improve the flexibility of sending memory allocation requests, the client may determine, based on the data to be sent, whether the memory already allocated by the server is sufficient; for example, the client may determine, according to the memory state it maintains, whether the free memory is sufficient to store the data to be sent. When the client detects that the memory allocated by the server is insufficient, it sends a memory allocation request to the server; when the client detects that the memory allocated by the server is sufficient, it does not need to send a memory allocation request to the server even if it currently needs to send data to the server.
S202: Allocate memory to the client based on the memory allocation request, and send the memory information of the allocated memory to the client, the memory information being used to instruct the client to create a queue for recording the memory state.
For example, when the client needs the server to allocate memory, it may send a memory allocation request to the server, and the memory allocation request may carry information such as the required memory size; for example, the client may apply to the server for memory of size R bytes. If the data to be sent by the client is L bytes, the size of the memory actually requested may be R = min(ceil(L/m), r), where m denotes the size of each memory block in bytes, r denotes the number of currently free memory blocks in the memory allocated to the client, and ceil denotes the round-up function. After receiving the memory allocation request sent by the client, the server may allocate memory to the client based on the memory allocation request and then send the memory information of the allocated memory to the client, and the client may receive the memory information returned by the server based on the memory allocation request.
After receiving the memory information, the client may create, based on memory information such as the addresses of the memory blocks, the memory size, and the key information, a queue for recording the memory state. The type and form of the queue may be flexibly set according to actual needs; for example, the queue may take the form of a list, and the queue may be a circular queue, so as to implement circular-queue-based RDMA memory synchronization. The queue used to record the memory state can update the memory state in time as the memory changes, so that the memory state of the server is mapped locally and the client can perceive the occupied or idle state of the memory in time, achieving memory synchronization for the RDMA transmission process.
For example, when the server allocates n memory blocks to the client, the client may build a circular queue consisting of memory blocks 0 to n-1, with a queue length of n, and maintain the server's memory state based on the circular queue. When all memory blocks in the circular queue are idle, the head pointer and the tail pointer may be placed at their initial positions, that is, the head pointer points to memory block 0 and the tail pointer points to memory block n-1; the memory between [tail] ---> [head] is in the occupied state, and the memory between [head] ---> [tail] is in the idle state. For example, as shown in FIG. 4, when memory block 0 and memory block 1 are occupied and memory block 2 to memory block n-1 are idle, the tail pointer points to memory block 0 and the head pointer points to memory block 2.
S203: Receive the to-be-processed data sent by the client, and process the to-be-processed data.
S204: After the data processing is completed, return a preset message to the client, the preset message being used to instruct the client to update the idle state of the memory in the queue.
Since the server needs to consume memory to buffer the to-be-processed data after receiving it, part or all of the memory allocated by the server to the client will be occupied, and at this point the client may update the occupied state of the memory in the queue that is occupied by the data.
After receiving the to-be-processed data sent by the client, the server may process the to-be-processed data one by one; the processing manner may be flexibly set according to the actual application scenario and is not limited here. After the server completes the processing of the data, the memory block used to store the data is updated to the idle state, and the server may return a preset message to the client. The preset message may be flexibly set according to actual needs; for example, the preset message may be a syn message, and it may carry information such as a notification that the data processing has been completed and the size of the released memory (that is, the size of the memory in the idle state). At this point, the client may receive the preset message returned by the server after the data processing is completed, determine the idle state of the memory based on the information carried in the preset message, and update the idle state of the memory in the queue.
In an embodiment, returning the preset message to the client may include: returning the preset message to the client when the free-memory waiting time corresponding to the client is greater than the free-memory waiting-time threshold; or returning the preset message to the client when the free memory corresponding to the client is greater than the free-memory accumulation threshold.
When the server returns the preset message to the client, it may return the preset message immediately after the data processing is completed, that is, synchronize immediately when the memory state changes. Alternatively, the server may determine whether the free-memory waiting time is greater than the free-memory waiting-time threshold: when the free-memory waiting time corresponding to the client is greater than the threshold, the server returns the preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return the preset message to the client at this point. Alternatively, the server may determine whether the free memory is greater than the free-memory accumulation threshold: when the free memory corresponding to the client is greater than the threshold, the server returns the preset message to the client; when it is less than or equal to the threshold, the server continues to wait and does not need to return the preset message to the client at this point.
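One possible reading of the server-side decision described above is the following sketch; the structure, the field names, and the millisecond time source are assumptions, and a real server may organize this bookkeeping differently.
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical per-client bookkeeping of memory freed on the server side. */
    struct client_sync_state {
        size_t   free_blocks;     /* blocks freed but not yet reported to the client */
        uint64_t oldest_free_ms;  /* time at which the oldest unreported block freed  */
    };

    /* Decide whether a syn message should be sent now. mat_ms and mas_blocks
     * correspond to the MAT and MAS thresholds in the configuration file;
     * now_ms is the current time in milliseconds. */
    static bool should_send_syn(const struct client_sync_state *s, uint64_t now_ms,
                                uint64_t mat_ms, size_t mas_blocks)
    {
        if (s->free_blocks == 0)
            return false;                       /* nothing to report        */
        if (now_ms - s->oldest_free_ms > mat_ms)
            return true;                        /* waited longer than MAT   */
        return s->free_blocks > mas_blocks;     /* accumulated more than MAS */
    }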
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a given embodiment, reference may be made to the detailed description of the memory management method above, which will not be repeated here.
In the embodiments of the present application, the server may receive the memory allocation request sent by the client, allocate memory to the client based on the memory allocation request, and send the memory information of the allocated memory to the client, the memory information being used to instruct the client to create a queue for recording the memory state. The server receives the to-be-processed data sent by the client, processes the to-be-processed data, and, after the data processing is completed, returns a preset message to the client, the preset message being used to instruct the client to update the idle state of the memory in the queue. With this solution, the memory information can be sent to the client so that the client creates a queue for recording the memory state based on the memory information and the memory state of the allocated memory is effectively monitored, and a preset message is returned to the client after the data processing is completed so that the client updates the idle state of the memory in the queue based on the preset message without frequently requesting memory. This reduces the number of interactions between the server and the client, improves the efficiency of data transmission, and improves the convenience of address management.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of the interaction between the client and the server in a memory management system provided by an embodiment of the present application, which may specifically be as follows.
S10: The server configures a memory pool.
The server may configure the memory pool and generate configuration information.
S11: The client sends a memory allocation request to the server.
When the client starts, it may send a memory allocation request to the server; or, when the client detects that the memory allocated by the server is insufficient, it may send a memory allocation request to the server.
S12: The server allocates memory to the client based on the received memory allocation request, and sends the memory information to the client.
The server sends to the client the memory information corresponding to the memory allocated to the client; the memory information may include a memory address, a memory size, primary key information, and the like. The memory information is used to instruct the client to create a queue for recording the memory state.
S13: The client creates a circular queue based on the received memory information.
The client receives the memory information returned by the server based on the memory allocation request and creates, based on the memory information, a queue for recording the memory state; when the data interaction between the client and the server changes, the memory state in the queue can be updated.
S14: The client sends the to-be-processed data to the server.
S15: The client updates the memory state in the circular queue.
The client may update the occupied state of the memory in the circular queue occupied by the data. It should be noted that, as for the execution order of step S14 and step S15, step S14 may be executed before step S15, step S15 may be executed before step S14, or step S14 and step S15 may be executed at the same time.
S16: The server processes the data.
After receiving the to-be-processed data sent by the client, the server may process the to-be-processed data.
S17: After the data processing is completed, the server sends a preset message to the client.
After the data processing is completed, the server returns a preset message to the client, and the preset message is used to instruct the client to update the idle state of the memory in the queue.
For example, when the free-memory waiting time corresponding to the client is greater than the free-memory waiting-time threshold, the preset message is returned to the client; or, when the free memory corresponding to the client is greater than the free-memory accumulation threshold, the preset message is returned to the client.
S18: The client updates the memory state in the circular queue.
The client receives the preset message returned by the server after the data processing is completed, and updates the idle state of the memory in the queue based on the preset message.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a given embodiment, reference may be made to the detailed description of the memory management method above, which will not be repeated here.
Referring to FIG. 7, FIG. 7 is a schematic structural block diagram of a client provided by an embodiment of the present application.
As shown in FIG. 7, the client 300 may include a processor 302, a memory 303, and a communication interface 304 connected through a system bus 301, where the memory 303 may include a non-volatile computer-readable storage medium and an internal memory.
The non-volatile computer-readable storage medium may store a computer program. The computer program includes program instructions which, when executed, cause the processor to execute any of the memory management methods.
The processor 302 is used to provide computing and control capabilities and support the operation of the entire client.
The memory 303 provides an environment for running the computer program in the non-volatile computer-readable storage medium; when the computer program is executed by the processor 302, it causes the processor 302 to execute any of the memory management methods.
The communication interface 304 is used for communication. Those skilled in the art will understand that the structure shown in FIG. 7 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the client 300 to which the solution of the present application is applied; a specific client 300 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
It should be understood that the bus 301 is, for example, an I2C (Inter-Integrated Circuit) bus, and the memory 303 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a USB flash drive, a removable hard disk, or the like. The processor 302 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In an embodiment, the processor 302 is configured to run the computer program stored in the memory 303 to perform the following steps:
acquire memory information corresponding to the memory allocated by the server to the client, and create, based on the memory information, a queue for recording the memory state; and when the data interaction between the client and the server changes, update the memory state in the queue.
In an embodiment, when updating the memory state in the queue, the processor 302 further performs: sending the to-be-processed data to the server and updating the occupied state of the memory in the queue occupied by the data; receiving the preset message returned by the server after the data processing is completed; and updating the idle state of the memory in the queue based on the preset message.
In an embodiment, the queue is a circular queue consisting of multiple memory blocks, and when updating the occupied state of the memory in the queue occupied by the data, the processor 302 further performs: determining the size of the memory in the circular queue occupied by the data; determining, according to the size of each memory block in the circular queue and the size of the memory in the circular queue occupied by the data, the interval of memory blocks in the circular queue occupied by the data; and setting a first pointer to point to the first memory block of the memory block interval to indicate the occupied state of the memory in the circular queue occupied by the data.
In an embodiment, the queue is a circular queue consisting of multiple memory blocks, and when updating the idle state of the memory in the circular queue based on the preset message, the processor 302 further performs: determining, based on the preset message, the data that has been processed by the server; determining, according to the size of each memory block in the circular queue and the size of the memory occupied by the processed data, the interval of memory blocks released in the circular queue; and setting a second pointer to point to the first memory block of the released memory block interval to indicate the idle state of the memory in the circular queue.
In an embodiment, the memory information includes a memory address, a memory size, and primary key information.
In an embodiment, when acquiring the memory information corresponding to the memory allocated by the server to the client, the processor 302 further performs: sending a memory allocation request to the server; and receiving the memory information returned by the server based on the memory allocation request.
In an embodiment, when sending the memory allocation request to the server, the processor 302 further performs: sending the memory allocation request to the server when the client starts; or sending the memory allocation request to the server when the client detects that the memory allocated by the server is insufficient.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a given embodiment, reference may be made to the detailed description of the memory management method above, which will not be repeated here.
请参阅图8,图8是本申请实施例提供的一种服务器的结构示意性框图。Please refer to FIG. 8. FIG. 8 is a schematic block diagram of the structure of a server according to an embodiment of the present application.
如图8所示,该服务器400可以包括通过系统总线401连接的处理器402、存储器403和通信接口404,其中,存储器403可以包括非易失性计算机可读存储介质和内存储器。As shown in FIG. 8, the server 400 may include a processor 402, a memory 403, and a communication interface 404 connected through a system bus 401, where the memory 403 may include a non-volatile computer-readable storage medium and internal memory.
非易失性计算机可读存储介质可存储计算机程序。该计算机程序包括程序指令,该程序指令被执行时,可使得处理器执行任意一种内存管理方法。The non-volatile computer-readable storage medium can store a computer program. The computer program includes program instructions which, when executed, cause the processor to perform any one of the memory management methods.
处理器402用于提供计算和控制能力,支撑整个服务器的运行。The processor 402 is used to provide computing and control capabilities to support the operation of the entire server.
存储器403为非易失性计算机可读存储介质中的计算机程序的运行提供环境,该计算机程序被处理器402执行时,可使得处理器402执行任意一种内存管理方法。The memory 403 provides an environment for running a computer program in a non-volatile computer-readable storage medium. When the computer program is executed by the processor 402, the processor 402 can execute any memory management method.
该通信接口404用于通信。本领域技术人员可以理解,图8中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的服务器400的限定,具体的服务器400可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。The communication interface 404 is used for communication. Those skilled in the art can understand that the structure shown in FIG. 8 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the server 400 to which the solution of the present application is applied; a specific server 400 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
应当理解的是,该总线401比如为I2C(Inter-integrated Circuit)总线,存储器403可以是Flash芯片、只读存储器(ROM,Read-Only Memory)磁盘、光盘、U盘或移动硬盘等,处理器402可以是中央处理单元(Central Processing Unit,CPU),该处理器402还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。其中,通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。It should be understood that the bus 401 is, for example, an I2C (Inter-integrated Circuit) bus, and the memory 403 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk or a mobile hard disk, etc. The processor 402 may be a central processing unit (Central Processing Unit, CPU), the processor 402 may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc. Among them, the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
其中,在一实施例中,处理器402用于运行存储在存储器403中的计算机程序,以执行如下步骤:Wherein, in an embodiment, the processor 402 is configured to run a computer program stored in the memory 403 to perform the following steps:
接收客户端发送的内存分配请求,基于内存分配请求为客户端分配内存,并将分配的内存的内存信息发送给客户端,内存信息用于指示客户端创建用于记录内存状态的队列;接收客户端发送的待处理的数据,对待处理的数据进行处理,在对数据处理完成后,向客户端返回预设报文,预设报文用于指示客户端更新队列中内存的空闲状态。Receive a memory allocation request sent by the client, allocate memory to the client based on the memory allocation request, and send memory information of the allocated memory to the client, the memory information being used to instruct the client to create a queue for recording the memory state; receive the data to be processed sent by the client and process the data to be processed; after the data processing is completed, return a preset message to the client, the preset message being used to instruct the client to update the idle state of the memory in the queue.
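The server-side steps above can be pictured with the short C sketch below: answer an allocation request with memory information (address, size, primary key), then acknowledge processed data with the preset message. The pool array, the fixed key value, and the function names are assumptions for illustration; the application does not prescribe them.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical memory information returned to the client; the field names
 * follow the description (address, size, primary key) but are not prescribed by it. */
typedef struct {
    void  *addr;
    size_t size;
    long   key;
} mem_info;

static char pool[1 << 20];   /* stand-in for the configured memory pool */

/* Allocate for the client out of the pool and report the memory information. */
static mem_info handle_alloc_request(size_t requested) {
    mem_info info = { .addr = pool, .size = requested, .key = 42 };
    return info;  /* in the real system this is sent back over the connection */
}

/* Process one batch of client data, then return the "preset message"
 * (modelled here as printing an acknowledgement with the processed byte count). */
static void handle_data(const void *data, size_t len) {
    (void)data;                     /* application-specific processing omitted */
    printf("preset message: %zu bytes processed, memory may be reused\n", len);
}

int main(void) {
    mem_info info = handle_alloc_request(4096);
    printf("allocated %zu bytes at %p (key %ld)\n", info.size, info.addr, info.key);
    handle_data("payload", 7);
    return 0;
}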
在一实施例中,在接收客户端发送的内存分配请求之前,处理器402还执行:配置内存池,生成配置信息。In an embodiment, before receiving the memory allocation request sent by the client, the processor 402 further executes: configuring a memory pool, and generating configuration information.
在一实施例中,在基于内存分配请求为客户端分配内存,并将分配的内存的内存信息发送给客户端时,处理器402还执行:基于内存分配请求为客户端分配内存池中的内存;从配置信息中提取分配的内存的内存信息,将内存信息发送给客户端。In an embodiment, when allocating memory to the client based on the memory allocation request and sending the memory information of the allocated memory to the client, the processor 402 further executes: allocating memory in the memory pool to the client based on the memory allocation request; and extracting the memory information of the allocated memory from the configuration information, and sending the memory information to the client.
在一实施例中,配置信息包括内存池中内存块数量、内存块大小、空闲内存等待时长阈值、以及空闲内存累积阈值。In an embodiment, the configuration information includes the number of memory blocks in the memory pool, the size of the memory blocks, the free memory waiting time threshold, and the free memory accumulation threshold.
在一实施例中,在向客户端返回预设报文时,处理器402还执行:当客户端对应的空闲内存等待时长大于空闲内存等待时长阈值时,向客户端返回预设报文;或者,当客户端对应的空闲内存大于空闲内存累积阈值时,向客户端返回预设报文。In an embodiment, when returning the preset message to the client, the processor 402 further executes: when the free memory waiting time corresponding to the client is greater than the free memory waiting time threshold, returning the preset message to the client; or, when the free memory corresponding to the client is greater than the free memory accumulation threshold, returning the preset message to the client.
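The two notification conditions can be combined into a single predicate over the configuration generated for the memory pool, as in the C sketch below; the field names and the example threshold values are assumptions, not values from the application.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical configuration generated when the memory pool is set up. */
typedef struct {
    size_t block_count;          /* number of memory blocks in the pool */
    size_t block_size;           /* size of each memory block */
    double wait_threshold_s;     /* free memory waiting time threshold */
    size_t accumulate_threshold; /* free memory accumulation threshold (bytes) */
} pool_config;

/* Decide whether the server should send the preset message now: either the
 * freed memory has waited too long, or enough freed memory has accumulated. */
static bool should_notify(const pool_config *cfg, double waited_s, size_t freed_bytes) {
    return waited_s > cfg->wait_threshold_s || freed_bytes > cfg->accumulate_threshold;
}

int main(void) {
    pool_config cfg = { .block_count = 1024, .block_size = 4096,
                        .wait_threshold_s = 0.5, .accumulate_threshold = 64 * 1024 };
    printf("%d\n", should_notify(&cfg, 0.1, 8 * 1024));   /* 0: keep batching */
    printf("%d\n", should_notify(&cfg, 0.6, 8 * 1024));   /* 1: waited too long */
    printf("%d\n", should_notify(&cfg, 0.1, 128 * 1024)); /* 1: enough freed memory */
    return 0;
}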
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见上文针对内存管理方法的详细描述,此处不再赘述。In the foregoing embodiments, the description of each embodiment has its own focus. For parts that are not described in detail in an embodiment, please refer to the detailed description of the memory management method above, which will not be repeated here.
本申请的实施例中还提供一种存储介质,该存储介质为计算机可读存储介质,计算机可读存储介质存储有计算机程序,该计算机程序中包括程序指令,处理器执行程序指令,实现本申请实施例提供的任一项内存管理方法。以上各个操作的具体实施可参见前面的实施例,在此不再赘述。An embodiment of the present application further provides a storage medium, which is a computer-readable storage medium. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement any of the memory management methods provided in the embodiments of the present application. For the specific implementation of the above operations, refer to the foregoing embodiments, which will not be repeated here.
其中,计算机可读存储介质可以是前述实施例的移动终端的内部存储单元,例如移动终端的硬盘或内存。计算机可读存储介质也可以是移动终端的外部存储设备,例如移动终端上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。The computer-readable storage medium may be the internal storage unit of the mobile terminal of the foregoing embodiment, such as the hard disk or memory of the mobile terminal. The computer-readable storage medium may also be an external storage device of the mobile terminal, such as a plug-in hard disk equipped on the mobile terminal, a smart media card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash card (Flash Card), etc.
由于该计算机可读存储介质中所存储的计算机程序,可以执行本申请实施例所提供的任一种内存管理方法,因此,可以实现本申请实施例所提供的任一种内存管理方法所能实现的有益效果,详见前面的实施例,在此不再赘述。Since the computer program stored in the computer-readable storage medium can execute any of the memory management methods provided in the embodiments of the present application, it can achieve the beneficial effects that can be achieved by any of the memory management methods provided in the embodiments of the present application; for details, refer to the foregoing embodiments, which will not be repeated here.
应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。It should be understood that the terms used in the specification of this application are only for the purpose of describing specific embodiments and are not intended to limit the application. As used in the specification of this application and the appended claims, unless the context clearly indicates other circumstances, the singular forms "a", "an" and "the" are intended to include plural forms.
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者系统不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者系统所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者系统中还存在另外的相同要素。It should also be understood that the term "and/or" used in the specification and appended claims of this application refers to and includes any and all possible combinations of one or more of the associated listed items. It should be noted that, in this document, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or system. Without further restrictions, an element defined by the statement "including a..." does not exclude the existence of other identical elements in the process, method, article or system that includes the element.
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。以上所述,仅是本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments. The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and such modifications or replacements shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

  1. 一种内存管理方法,所述内存管理方法应用于客户端,所述内存管理方法包括:A memory management method, the memory management method is applied to a client, and the memory management method includes:
    获取服务器分配给所述客户端的内存对应的内存信息;Acquiring memory information corresponding to the memory allocated by the server to the client;
    基于所述内存信息创建用于记录内存状态的队列;Creating a queue for recording the memory state based on the memory information;
    当所述客户端与所述服务器之间的数据交互发生变化时,更新所述队列中的内存状态。When the data interaction between the client and the server changes, the memory status in the queue is updated.
  2. 根据权利要求1所述的内存管理方法,其中,所述当所述客户端与所述服务器之间的数据交互发生变化时,更新所述队列中的内存状态包括:The memory management method according to claim 1, wherein, when the data interaction between the client and the server changes, updating the memory status in the queue comprises:
    向服务器发送待处理的数据,并更新所述数据占用所述队列中内存的占用状态;Sending the data to be processed to the server, and updating the occupancy state of the memory in the queue occupied by the data;
    接收所述服务器对所述数据处理完成后返回的预设报文;Receiving a preset message returned by the server after processing the data;
    基于所述预设报文更新所述队列中内存的空闲状态。The idle state of the memory in the queue is updated based on the preset message.
  3. 根据权利要求2所述的内存管理方法,其中,所述队列为多个内存块构成的循环队列,所述更新所述数据占用所述队列中内存的占用状态包括:The memory management method according to claim 2, wherein the queue is a circular queue formed by a plurality of memory blocks, and the updating the occupancy state of the memory in the queue by the data comprises:
    确定所述数据占用所述循环队列中内存的大小;Determining the size of the memory in the circular queue occupied by the data;
    根据所述循环队列中每个内存块大小和所述数据占用所述循环队列中内存的大小,确定所述数据占用所述循环队列中内存块区间;Determine the memory block interval occupied by the data in the circular queue according to the size of each memory block in the circular queue and the size of the memory in the circular queue occupied by the data;
    设置第一指针指向所述内存块区间的首部内存块,以指示所述数据占用所述循环队列中内存的占用状态。The first pointer is set to point to the first memory block of the memory block interval to indicate the occupancy state of the memory in the circular queue occupied by the data.
  4. 根据权利要求2所述的内存管理方法,其中,所述队列为多个内存块构成的循环队列,所述基于所述预设报文更新所述循环队列中内存的空闲状态包括:The memory management method according to claim 2, wherein the queue is a circular queue formed by a plurality of memory blocks, and the updating the idle state of the memory in the circular queue based on the preset message comprises:
    基于所述预设报文确定所述服务器已处理的数据;Determining the data processed by the server based on the preset message;
    根据所述循环队列中每个内存块大小和所述已处理的数据占用的内存大小,确定所述循环队列中释放的内存块区间;Determine the interval of the memory block released in the circular queue according to the size of each memory block in the circular queue and the memory size occupied by the processed data;
    设置第二指针指向所述释放的内存块区间的首部内存块,以指示所述循环队列中内存的空闲状态。A second pointer is set to point to the first memory block of the released memory block interval to indicate the idle state of the memory in the circular queue.
  5. 根据权利要求1所述的内存管理方法,其中,所述内存信息包括内存地址、内存大小和主键信息。The memory management method according to claim 1, wherein the memory information includes memory address, memory size, and primary key information.
  6. 根据权利要求1至5任一项所述的内存管理方法,其中,所述获取服务器分配给所述客户端的内存对应的内存信息包括:The memory management method according to any one of claims 1 to 5, wherein said acquiring memory information corresponding to the memory allocated by the server to the client comprises:
    向所述服务器发送内存分配请求;Sending a memory allocation request to the server;
    接收所述服务器基于所述内存分配请求返回的所述内存信息。Receiving the memory information returned by the server based on the memory allocation request.
  7. 根据权利要求6所述的内存管理方法,其中,所述向所述服务器发送内存分配请求包括:The memory management method according to claim 6, wherein the sending a memory allocation request to the server comprises:
    当所述客户端启动时,向所述服务器发送内存分配请求;或者,When the client is started, a memory allocation request is sent to the server; or,
    当所述客户端检测到所述服务器分配的内存不足时,向所述服务器发送内存分配请求。When the client detects that the memory allocated by the server is insufficient, it sends a memory allocation request to the server.
  8. 一种内存管理方法,所述内存管理方法应用于服务器,所述内存管理方法包括:A memory management method, the memory management method is applied to a server, and the memory management method includes:
    接收客户端发送的内存分配请求;Receive the memory allocation request sent by the client;
    基于所述内存分配请求为所述客户端分配内存,并将分配的所述内存的内存信息发送给所述客户端,所述内存信息用于指示所述客户端创建用于记录内存状态的队列;Allocate memory for the client based on the memory allocation request, and send memory information of the allocated memory to the client, where the memory information is used to instruct the client to create a queue for recording memory status ;
    接收所述客户端发送的待处理的数据,对所述待处理的数据进行处理;Receive the to-be-processed data sent by the client, and process the to-be-processed data;
    在对所述数据处理完成后,向所述客户端返回预设报文,所述预设报文用于指示所述客户端更新所述队列中内存的空闲状态。After the data processing is completed, a preset message is returned to the client, and the preset message is used to instruct the client to update the idle state of the memory in the queue.
  9. 根据权利要求8所述的内存管理方法,其中,所述接收客户端发送的内存分配请求之前,所述内存管理方法还包括:The memory management method according to claim 8, wherein before receiving the memory allocation request sent by the client, the memory management method further comprises:
    配置内存池,生成配置信息;Configure the memory pool and generate configuration information;
    所述基于所述内存分配请求为所述客户端分配内存,并将分配的所述内存的内存信息发送给所述客户端包括:The allocating memory for the client based on the memory allocation request, and sending memory information of the allocated memory to the client includes:
    基于所述内存分配请求为所述客户端分配所述内存池中的内存;Allocating memory in the memory pool for the client based on the memory allocation request;
    从所述配置信息中提取分配的所述内存的内存信息,将所述内存信息发送给所述客户端。The memory information of the allocated memory is extracted from the configuration information, and the memory information is sent to the client.
  10. 根据权利要求9所述的内存管理方法,其中,所述配置信息包括内存池中内存块数量、内存块大小、空闲内存等待时长阈值、以及空闲内存累积阈值。The memory management method according to claim 9, wherein the configuration information includes the number of memory blocks in the memory pool, the size of the memory blocks, a free memory waiting time threshold, and a free memory accumulation threshold.
  11. 根据权利要求10所述的内存管理方法,其中,所述向所述客户端返回预设报文包括:The memory management method according to claim 10, wherein the returning a preset message to the client comprises:
    当所述客户端对应的空闲内存等待时长大于所述空闲内存等待时长阈值时,向所述客户端返回预设报文;或者,When the idle memory waiting time corresponding to the client is greater than the idle memory waiting time threshold, return a preset message to the client; or,
    当所述客户端对应的空闲内存大于所述空闲内存累积阈值时,向所述客户端返回预设报文。When the free memory corresponding to the client is greater than the free memory accumulation threshold, return a preset message to the client.
  12. 一种客户端,包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器调用所述存储器中的计算机程序时执行如权利要求1至7任一项所述的内存管理方法。A client includes a processor and a memory, and a computer program is stored in the memory. When the processor calls the computer program in the memory, the memory management method according to any one of claims 1 to 7 is executed.
  13. 一种服务器,包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器调用所述存储器中的计算机程序时执行如权利要求8至11任一项所述的内存管理方法。A server includes a processor and a memory, and a computer program is stored in the memory. When the processor calls the computer program in the memory, the memory management method according to any one of claims 8 to 11 is executed.
  14. 一种内存管理系统,包括客户端和服务器,所述客户端为权利要求12所述的客户端,所述服务器为权利要求13所述的服务器。A memory management system includes a client and a server, the client is the client according to claim 12, and the server is the server according to claim 13.
  15. 一种存储介质,用于计算机可读存储,所述存储介质用于存储计算机程序,所述计算机程序被处理器加载以执行权利要求1至7任一项所述的内存管理方法,或者所述计算机程序被处理器加载以执行权利要求8至11任一项所述的内存管理方法。A storage medium for computer-readable storage, the storage medium for storing a computer program, the computer program being loaded by a processor to execute the memory management method according to any one of claims 1 to 7, or the The computer program is loaded by the processor to execute the memory management method according to any one of claims 8 to 11.
PCT/CN2021/100120 2020-06-19 2021-06-15 Memory management method and system, client, server and storage medium WO2021254330A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010568853.0A CN113485822A (en) 2020-06-19 2020-06-19 Memory management method, system, client, server and storage medium
CN202010568853.0 2020-06-19

Publications (1)

Publication Number Publication Date
WO2021254330A1 true WO2021254330A1 (en) 2021-12-23

Family

ID=77932643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100120 WO2021254330A1 (en) 2020-06-19 2021-06-15 Memory management method and system, client, server and storage medium

Country Status (2)

Country Link
CN (1) CN113485822A (en)
WO (1) WO2021254330A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986141A (en) * 2021-11-08 2022-01-28 北京奇艺世纪科技有限公司 Server model updating method, system, electronic device and readable storage medium
CN114253733B (en) * 2021-12-24 2024-01-12 苏州浪潮智能科技有限公司 Memory management method, device, computer equipment and storage medium
CN115174484A (en) * 2022-06-16 2022-10-11 阿里巴巴(中国)有限公司 RDMA (remote direct memory Access) -based data transmission method, device, equipment and storage medium
CN117093508B (en) * 2023-10-17 2024-01-23 苏州元脑智能科技有限公司 Memory resource management method and device, electronic equipment and storage medium
CN117215995B (en) * 2023-11-08 2024-02-06 苏州元脑智能科技有限公司 Remote direct memory access method, distributed storage system and electronic equipment
CN117251292B (en) * 2023-11-13 2024-03-29 山东泽赢信息科技服务有限公司 Memory management method, system, terminal and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440202A (en) * 2013-08-07 2013-12-11 华为技术有限公司 RDMA-based (Remote Direct Memory Access-based) communication method, RDMA-based communication system and communication device
CN104915302A (en) * 2014-03-10 2015-09-16 华为技术有限公司 Data transmission processing method and data transmission unit
US20160026604A1 (en) * 2014-07-28 2016-01-28 Emulex Corporation Dynamic rdma queue on-loading
CN105978985A (en) * 2016-06-07 2016-09-28 华中科技大学 Memory management method of user-state RPC over RDMA
CN108268208A (en) * 2016-12-30 2018-07-10 清华大学 A kind of distributed memory file system based on RDMA
CN106953797A (en) * 2017-04-05 2017-07-14 广东浪潮大数据研究有限公司 A kind of method and apparatus of the RDMA data transfers based on Dynamic link library

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114297107A (en) * 2021-12-29 2022-04-08 成都智明达电子股份有限公司 Management method, equipment and medium for label Tag
CN114297107B (en) * 2021-12-29 2024-05-24 成都智明达电子股份有限公司 Label Tag management method, device and medium
CN114363428A (en) * 2022-01-06 2022-04-15 齐鲁空天信息研究院 Socket-based data transmission method
CN114363428B (en) * 2022-01-06 2023-10-17 齐鲁空天信息研究院 Socket-based data transmission method
CN114968890A (en) * 2022-05-27 2022-08-30 中国第一汽车股份有限公司 Synchronous communication control method, device, system and storage medium
CN116777009A (en) * 2023-08-24 2023-09-19 之江实验室 Intelligent computing system architecture based on memory pool and parallel training method
CN116777009B (en) * 2023-08-24 2023-10-20 之江实验室 Intelligent computing system architecture based on memory pool and parallel training method

Also Published As

Publication number Publication date
CN113485822A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
WO2021254330A1 (en) Memory management method and system, client, server and storage medium
WO2017133623A1 (en) Data stream processing method, apparatus, and system
WO2020186909A1 (en) Virtual network service processing method, apparatus and system, and controller and storage medium
US10331613B2 (en) Methods for enabling direct memory access (DMA) capable devices for remote DMA (RDMA) usage and devices therof
US10248615B2 (en) Distributed processing in a network
US11500666B2 (en) Container isolation method and apparatus for netlink resource
WO2024037296A1 (en) Protocol family-based quic data transmission method and device
US20240069977A1 (en) Data transmission method and data transmission server
CN113179327B (en) High concurrency protocol stack unloading method, equipment and medium based on large-capacity memory
GB2479653A (en) A Method of FIFO Tag Switching in a Multi-core Packet Processing Apparatus
CN106936931B (en) Method, related equipment and system for realizing distributed lock
US10397096B2 (en) Path resolution in InfiniBand and ROCE networks
WO2014180397A1 (en) Network data packet sending method and device
WO2023098050A1 (en) Remote data access method and apparatus
CN115576654A (en) Request processing method, device, equipment and storage medium
US20050091390A1 (en) Speculative method and system for rapid data communications
WO2022067830A1 (en) Application context migration method and device
US8473579B2 (en) Data reception management apparatus, systems, and methods
CN112052104A (en) Message queue management method based on multi-computer-room realization and electronic equipment
CN115189977B (en) Broadcast transmission method, system and medium based on AXI protocol
CN110895517B (en) Method, equipment and system for transmitting data based on FPGA
CN114186163A (en) Application layer network data caching method
WO2017177400A1 (en) Data processing method and system
CN116032498A (en) Memory area registration method, device and equipment
US20190050274A1 (en) Technologies for synchronizing triggered operations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21824742

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 260423)

122 Ep: pct application non-entry in european phase

Ref document number: 21824742

Country of ref document: EP

Kind code of ref document: A1