CN115756962A - Memory backup acceleration method, device, equipment and computer readable storage medium - Google Patents

Memory backup acceleration method, device, equipment and computer readable storage medium

Info

Publication number
CN115756962A
CN115756962A CN202211458494.9A CN202211458494A
Authority
CN
China
Prior art keywords
memory
local
fpga
host
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211458494.9A
Other languages
Chinese (zh)
Inventor
刘伟
宿栋栋
沈艳梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN202211458494.9A priority Critical patent/CN115756962A/en
Publication of CN115756962A publication Critical patent/CN115756962A/en
Priority to PCT/CN2023/081742 priority patent/WO2024108825A1/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

The application discloses a memory backup acceleration method, which relates to the technical field of servers and comprises the following steps: when the host memory space of a local host is insufficient, judging whether the on-chip memory of a local FPGA has allocable space; if the on-chip memory of the local FPGA has allocable space, backing up target data to the on-chip memory of the local FPGA; and if the on-chip memory of the local FPGA does not have allocable space, backing up the target data to a remote memory array device. The method uses the on-chip memory of the local FPGA on the local host and the memory of the remote memory array device as physical carriers for backing up data from the host memory of the local host, and can effectively improve the running speed of the system. The application also discloses a memory backup acceleration apparatus, a device and a computer readable storage medium, all having the above technical effects.

Description

Memory backup acceleration method, device, equipment and computer readable storage medium
Technical Field
The application relates to the technical field of servers, and in particular to a memory backup acceleration method; it also relates to a memory backup acceleration apparatus, a device and a computer readable storage medium.
Background
Early computers, and embedded devices that still use an 8-bit/16-bit MCU (Microcontroller Unit) today, run programs directly on physical memory. Running directly on physical memory means that the addresses accessed by the program at run time are all physical addresses. This approach is simple to implement, but it suffers from drawbacks such as insufficient physical memory, non-deterministic program run addresses, and low memory utilization. For this reason, virtual memory management techniques have been introduced. The basic idea of virtual memory management is that the total amount of memory used by a program can exceed the size of physical memory: the operating system keeps the data currently in use in memory while leaving the other, unused portions on the hard disk.
Although the virtual memory management technique overcomes many of the drawbacks of running a program directly on physical memory, some problems remain. For example, during the running of a program, the operating system needs to spend time maintaining and updating the page table (the memory management unit in the Central Processing Unit (CPU) converts a virtual address into a physical address by looking up the page table), adding and deleting table entries, which is also called creating and deleting memory mappings. When the memory space required by a program is larger than the physical memory available in the system, the operating system has to move data frequently between physical memory and the hard disk, and the data transfer rate of a traditional hard disk (on the order of 0.1 GB/s) is far lower than the access rate of memory (on the order of 10 GB/s), which severely reduces the overall running speed of the system.
In view of this, how to increase the operation speed of the system has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a memory backup acceleration method which can improve the running speed of a system. Another object of the present application is to provide a memory backup acceleration apparatus, a device and a computer readable storage medium, all having the above technical effects.
In order to solve the above technical problem, the present application provides a memory backup acceleration method, including:
when the host memory space of the local host is insufficient, judging whether the on-chip memory of the local FPGA has an allocable space;
if the on-chip memory of the local FPGA has the allocable space, backing up target data to the on-chip memory of the local FPGA;
and if the on-chip memory of the local FPGA does not have the allocable space, backing up the target data to a remote memory array device.
Optionally, the backing up the target data to the remote memory array device includes:
if the on-chip memory of the far-end FPGA in the far-end memory array equipment has the allocable space, transmitting the target data to the far-end FPGA, and storing the target data into the on-chip memory of the far-end FPGA;
and if the on-chip memory of the remote FPGA does not have the allocable space, transmitting the target data to the remote FPGA, and storing the target data into the remote memory of the remote memory array equipment through the remote FPGA.
Optionally, transmitting the target data to the remote FPGA includes:
transmitting the target data to the remote FPGA through an RDMA transmit module of the local FPGA.
Optionally, the method further includes:
when accessing data, if the data to be accessed is positioned in the host memory, accessing the host memory;
if the data to be accessed is located in the on-chip memory of the local FPGA, copying the data to be accessed from the on-chip memory of the local FPGA to the host memory, and accessing the host memory;
and if the data to be accessed is located in the remote memory array equipment, copying the data to be accessed from the remote memory array equipment to the host memory, and accessing the host memory.
Optionally, the copying the data to be accessed from the remote memory array device to the host memory includes:
if the data to be accessed is located in an on-chip memory of a far-end FPGA of the far-end memory array equipment, copying the data to be accessed from the on-chip memory of the far-end FPGA to the local memory;
and if the data to be accessed is located in a remote memory of the remote memory array equipment, copying the data to be accessed from the remote memory to the local memory.
Optionally, the method further includes:
and after the data to be accessed are copied from the on-chip memory of the local FPGA to the host memory, releasing the space occupied by the data to be accessed in the on-chip memory of the local FPGA.
Optionally, the method further includes:
and after the data to be accessed is copied from the remote memory array equipment to the host memory, releasing the space occupied by the data to be accessed in the remote memory array equipment.
Optionally, the releasing the space occupied by the data to be accessed in the remote memory array device includes:
if the data to be accessed is copied from the on-chip memory of the far-end FPGA to the local memory, releasing the space occupied by the data to be accessed in the on-chip memory of the far-end FPGA;
and if the data to be accessed is copied from the remote memory to the local memory, releasing the space occupied by the data to be accessed in the remote memory.
Optionally, the method further includes:
and after backing up the target data to the on-chip memory of the local FPGA or the remote memory array equipment, releasing the space of the local memory occupied by the target data.
Optionally, the method further includes:
and when the host memory space of the local host is enough, allocating the host memory and establishing memory mapping.
Optionally, when the host memory space of the local host is sufficient, allocating the host memory, and before establishing the memory mapping, the method further includes:
judging whether the local host establishes the memory mapping;
if the local host does not establish the memory mapping, judging whether the host memory space of the local host is enough;
and if the host memory space of the local host is enough, allocating the host memory and establishing memory mapping.
Optionally, the method further includes:
and recording the information of the equipment where the backed-up target data is located and the address of a memory in the equipment for storing the target data.
In order to solve the above technical problem, the present application further provides a memory backup acceleration apparatus, including:
the judging module is used for judging whether the on-chip memory of the local FPGA has the allocable space when the host memory space of the local host is insufficient;
the first backup module is used for backing up target data to the on-chip memory of the local FPGA if the on-chip memory of the local FPGA has an allocable space;
and the second backup module is used for backing up the target data to a remote memory array device if the on-chip memory of the local FPGA does not have the allocable space.
In order to solve the above technical problem, the present application further provides a memory backup acceleration device, including:
a memory for storing a computer program;
a processor, configured to implement the steps of the memory backup acceleration method according to any one of the above items when the computer program is executed.
In order to solve the above technical problem, the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the memory backup acceleration method are implemented as described in any one of the above.
The memory backup acceleration method provided by the application comprises the following steps: when the host memory space of the local host is insufficient, judging whether the on-chip memory of the local FPGA has an allocable space; if the on-chip memory of the local FPGA has the allocable space, backing up target data to the on-chip memory of the local FPGA; and if the on-chip memory of the local FPGA does not have the allocable space, backing up the target data to a remote memory array device.
Therefore, the memory backup acceleration method provided by the application uses the memory of the local FPGA on the local host and the memory of the remote memory array device as physical carriers for backing up data in the host memory of the local host, replaces the traditional scheme of using a hard disk as a data backup carrier, and can effectively improve the running speed of the system.
The memory backup acceleration apparatus, the device and the computer readable storage medium provided by the application all have the above technical effects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the prior art and the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of a conventional hardware architecture;
FIG. 2 is a schematic diagram of existing software logic;
fig. 3 is a schematic flowchart of a memory backup acceleration method according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a hardware architecture according to an embodiment of the present application;
FIG. 5 is a schematic diagram of software logic provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a memory backup acceleration device according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a memory backup acceleration device according to an embodiment of the present application.
Detailed Description
The core of the application is to provide a memory backup acceleration method which can improve the running speed of a system. Another core of the present application is to provide a memory backup acceleration apparatus, a device and a computer readable storage medium, all having the above technical effects.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1 and fig. 2, in the prior art, when an application needs to access memory, the operating system executes the processing logic shown in fig. 2, denoted here as the swap processing logic. When the local memory of the host is insufficient, data that has not been used for a long time is backed up from the local memory to the swap partition on the hard disk to free memory space for new data; if the data to be accessed this time has been backed up to the swap partition, it is first copied from the swap partition back to the local memory, and the local memory is then accessed. However, this scheme severely reduces the overall running speed of the system, so the present application provides a memory backup acceleration method aimed at improving the running speed of the system.
Referring to fig. 3, which is a schematic flow chart of a memory backup acceleration method according to an embodiment of the present application, the method includes:
S101: when the host memory space of the local host is insufficient, judging whether the on-chip memory of a local FPGA (Field Programmable Gate Array) has allocable space;
S102: if the on-chip memory of the local FPGA has allocable space, backing up target data to the on-chip memory of the local FPGA;
S103: if the on-chip memory of the local FPGA does not have allocable space, backing up the target data to a remote memory array device.
Referring to fig. 4, the local host includes a local FPGA in addition to the CPU chip and the local memory. The local FPGA and the processor core are connected through a PCIe (Peripheral Component Interconnect Express) bridge: the processor core can access the local FPGA through the address bus -> PCIe bridge path, and the local FPGA can access the local memory through the PCIe bridge -> address bus path. The local FPGA may expose a portion of its on-chip memory for access by the processor core.
The remote memory array device may be a physical carrier specially configured to back up data from the host memory of the local host, or it may be another server.
When the remaining capacity of the local memory falls to a certain preset threshold, the local memory is considered insufficient; for example, when the remaining capacity of the local memory drops to 10%, the local memory is considered insufficient. The target data may be data that has not been used for a period of time. In order to obtain a faster access rate, this embodiment preferentially uses the on-chip memory of the local FPGA: when the local memory is insufficient, the on-chip memory of the local FPGA is preferentially used as the backup carrier of the target data, and the target data is backed up to the on-chip memory of the local FPGA. If the local FPGA does not have allocable storage space, that is, its on-chip memory space is insufficient, the memory of the remote memory array device is used as the backup carrier of the target data, and the target data is backed up to the memory of the remote memory array device. The on-chip memory of the local FPGA may be considered insufficient when its remaining capacity falls to a certain preset threshold, or when its remaining capacity is smaller than the size of the target data.
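Purely as an illustration of these checks (not code disclosed by the application), the sufficiency judgments might be expressed as follows in C; the 10% figure is the example threshold mentioned above, and the function names are hypothetical:
#include <stdbool.h>
#include <stddef.h>

// Host memory is considered insufficient once its remaining share falls below a
// preset threshold; the 10% figure is the example mentioned above.
static bool host_memory_insufficient(size_t remaining_bytes, size_t total_bytes)
{
    return remaining_bytes * 10 < total_bytes;
}

// FPGA on-chip memory is considered insufficient when its remaining capacity falls
// below a preset threshold or cannot hold the target data to be backed up.
static bool onchip_memory_insufficient(size_t remaining_bytes, size_t threshold_bytes,
                                       size_t target_len)
{
    return remaining_bytes < threshold_bytes || remaining_bytes < target_len;
}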
The information of the device where the backed-up target data is located (for example, the number of the local FPGA, the number of the remote memory array device, and the like) and the memory address thereof need to be recorded, so that when the target data is accessed by a program in the future, the backed-up target data is copied to the local memory again.
The management of the on-chip memory of the local FPGA, the on-chip memory of the remote FPGA and the remote memory is the responsibility of a memory management module, whose work mainly comprises memory allocation, memory reclamation, judging whether enough memory is available, and the like.
In some embodiments, the backing up the target data to a remote memory array device includes:
if the on-chip memory of the far-end FPGA in the far-end memory array equipment has the allocable space, transmitting the target data to the far-end FPGA, and storing the target data into the on-chip memory of the far-end FPGA;
and if the on-chip memory of the remote FPGA does not have the allocable space, transmitting the target data to the remote FPGA, and storing the target data into the remote memory of the remote memory array equipment through the remote FPGA.
Referring to fig. 4, in the present embodiment, the remote memory array device includes a remote memory and a remote FPGA. The remote memory and the remote FPGA can be connected through a PCIe bridge. Alternatively, for a remote FPGA that can directly access the remote memory, the remote FPGA and the remote memory may be directly connected without a PCIe bridge.
When the target data is to be backed up to the remote memory array device, it is preferentially backed up to the on-chip memory of the remote FPGA in the remote memory array device. If the on-chip memory space of the remote FPGA is insufficient, the target data is stored into the remote memory of the remote memory array device.
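As an illustration only, the preference order described above (local FPGA on-chip memory, then the remote FPGA's on-chip memory, then the remote memory) might be sketched as follows in C; all helper names are hypothetical stand-ins for the memory management module and the PCIe/RDMA data paths, not the disclosed implementation:
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

// Hypothetical capacity queries and copy routines standing in for the memory
// management module and the PCIe/RDMA data paths described in the text.
static bool local_fpga_has_space(size_t len)   { (void)len; return false; }
static bool remote_fpga_has_space(size_t len)  { (void)len; return true;  }
static void copy_to_local_fpga(const void *d, size_t n)    { (void)d; printf("backup %zu bytes to local FPGA on-chip memory\n", n); }
static void copy_to_remote_fpga(const void *d, size_t n)   { (void)d; printf("backup %zu bytes to remote FPGA on-chip memory\n", n); }
static void copy_to_remote_memory(const void *d, size_t n) { (void)d; printf("backup %zu bytes to remote memory via the remote FPGA\n", n); }

// Steps S101 to S103 plus the remote-side preference: local FPGA on-chip memory
// first, then the remote FPGA's on-chip memory, then the remote memory.
static void rdma_swap_backup(const void *target_data, size_t len)
{
    if (local_fpga_has_space(len))
        copy_to_local_fpga(target_data, len);
    else if (remote_fpga_has_space(len))
        copy_to_remote_fpga(target_data, len);
    else
        copy_to_remote_memory(target_data, len);
}

int main(void)
{
    char cold_data[4096] = { 0 };                 // data not used for a long time
    rdma_swap_backup(cold_data, sizeof cold_data);
    return 0;
}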
To further increase the operating speed of the system, in some embodiments, transmitting the target data to the remote FPGA comprises:
transmitting the target data to the remote FPGA through an RDMA transmit module of the local FPGA.
Referring to fig. 4, in this embodiment, both the local FPGA and the remote FPGA are provided with an RDMA transmission module, and the local host and the remote memory array device are connected to an RDMA network through the RDMA transmission module.
RDMA is an abbreviation for Remote Direct Memory Access. Through RDMA, the local node can directly access the memory of a remote node: the memory of the peer node can be read and written much like local memory, bypassing the traditional, complex TCP/IP network protocol stack of Ethernet; the CPU of the peer node does not participate in the read/write process, and most of the work is completed by hardware rather than software.
And when the space of the local memory and the on-chip memory of the local FPGA is insufficient, the target data is preferentially transmitted to the remote FPGA through the RDMA module and stored in the on-chip memory of the remote FPGA. If the on-chip memory space of the remote FPGA is insufficient, the target data is transmitted to the remote FPGA through the RDMA module, and the remote FPGA stores the target data into the remote memory through the PCIe bridge or directly.
Further, in some embodiments, the method further comprises:
and after backing up the target data to the on-chip memory of the local FPGA or the remote memory array equipment, releasing the space of the local memory occupied by the target data.
Referring to fig. 5, after the target data is backed up in the on-chip memory of the local FPGA or in the remote memory array device, the memory map (page table entry) of the local memory occupied by the target data is deleted to release the space of the local memory occupied by the target data.
Further, in some embodiments, the method further comprises:
when accessing data, if the data to be accessed is located in the host memory, accessing the host memory;
if the data to be accessed is located in the on-chip memory of the local FPGA, copying the data to be accessed from the on-chip memory of the local FPGA to the host memory, and accessing the host memory;
and if the data to be accessed is located in the remote memory array equipment, copying the data to be accessed from the remote memory array equipment to the host memory, and accessing the host memory.
Specifically, if the data to be accessed is located in the host memory, the data to be accessed in the host memory is directly accessed. If the data to be accessed is located in the on-chip memory of the local FPGA, the data to be accessed is firstly copied from the on-chip memory of the local FPGA to the host memory through the PCIe bridge, and then the copied data to be accessed in the host memory is accessed. If the data to be accessed is located in the remote memory array device, under the condition that the local host and the remote memory array device are connected with the RDMA network through the RDMA transmission module, the data to be accessed is firstly copied into the memory of the host through the RDMA transmission module, and then the copied data to be accessed in the memory of the host is accessed.
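For illustration, the access path just described might be sketched as follows in C, assuming a hypothetical backup record of the kind mentioned earlier (device identifier plus memory address); none of these names are part of the disclosed implementation:
#include <stdio.h>

// Where a backed-up data block currently resides, as recorded when it was backed up.
enum backup_location { IN_HOST_MEMORY, IN_LOCAL_FPGA, IN_REMOTE_ARRAY };

struct backup_record {
    enum backup_location where;   // device holding the data
    unsigned long        addr;    // memory address inside that device
};

static void copy_from_local_fpga(unsigned long addr)   { printf("PCIe copy from local FPGA on-chip memory at %#lx into host memory\n", addr); }
static void copy_from_remote_array(unsigned long addr) { printf("RDMA copy from remote memory array at %#lx into host memory\n", addr); }

// Make the data to be accessed resident in host memory before the access proceeds;
// the space occupied by the backup copy is then released, as described below.
static void rdma_swap_access(const struct backup_record *rec)
{
    switch (rec->where) {
    case IN_HOST_MEMORY:  break;                              // access host memory directly
    case IN_LOCAL_FPGA:   copy_from_local_fpga(rec->addr);    break;
    case IN_REMOTE_ARRAY: copy_from_remote_array(rec->addr);  break;
    }
}

int main(void)
{
    struct backup_record rec = { IN_REMOTE_ARRAY, 0x1000 };   // hypothetical record
    rdma_swap_access(&rec);
    return 0;
}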
In an embodiment where the remote memory array device includes a remote FPGA and a remote memory, the copying the data to be accessed from the remote memory array device to the host memory includes:
if the data to be accessed is located in the on-chip memory of the far-end FPGA of the far-end memory array equipment, copying the data to be accessed from the on-chip memory of the far-end FPGA to the local memory;
and if the data to be accessed is located in a remote memory of the remote memory array equipment, copying the data to be accessed from the remote memory to the local memory.
Further, in some embodiments, the method further comprises:
and after the data to be accessed are copied from the on-chip memory of the local FPGA to the host memory, releasing the space occupied by the data to be accessed in the on-chip memory of the local FPGA.
And after the data to be accessed is copied from the remote memory array equipment to the host memory, releasing the space occupied by the data to be accessed in the remote memory array equipment.
Referring to fig. 5, after the data to be accessed is copied from the on-chip memory of the local FPGA to the host memory through the PCIe bridge, the space occupied by the data to be accessed in the on-chip memory of the local FPGA is released. After the data to be accessed is copied from the remote memory array device to the host memory through the RDMA module, the space occupied by the data to be accessed in the remote memory array is released.
Wherein the releasing of the space occupied by the data to be accessed in the remote memory array device comprises:
if the data to be accessed are copied from the on-chip memory of the far-end FPGA to the local memory, releasing the space occupied by the data to be accessed in the on-chip memory of the far-end FPGA;
and if the data to be accessed is copied from the remote memory to the local memory, releasing the space occupied by the data to be accessed in the remote memory.
That is, if the data to be accessed is located in the on-chip memory of the remote FPGA, after the data to be accessed is copied from the on-chip memory of the remote FPGA to the host memory, the space occupied by the data to be accessed in the on-chip memory of the remote FPGA is released. And if the data to be accessed is located in the remote memory, releasing the space occupied by the data to be accessed in the remote memory after the data to be accessed is copied from the remote memory to the host memory.
Further, in some embodiments, the method further comprises:
and when the host memory space of the local host is enough, allocating the host memory and establishing memory mapping.
When the host memory space of the local host is sufficient, allocating the host memory and before establishing the memory mapping, the method further comprises:
judging whether the local host establishes the memory mapping;
if the local host does not establish the memory mapping, judging whether the host memory space of the local host is enough;
and if the host memory space of the local host is enough, allocating the host memory and establishing memory mapping.
Referring to fig. 5, when the application program starts to access the host memory, it is first determined whether a memory mapping has been established; if so, the memory access instruction is executed to access the host memory. If no memory mapping has been established, it is judged whether the host memory space of the local host is sufficient. If the host memory space of the local host is sufficient, host memory is allocated and a memory mapping is established. If the host memory space of the local host is not sufficient, steps S101 to S103 are executed. Establishing a memory mapping refers to establishing a mapping relationship between a virtual address and a physical address, which is done by creating a page-table entry for the MMU (Memory Management Unit) in the CPU to query.
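A minimal sketch of this flow in C is given below; the helper names are hypothetical stand-ins for the operating-system facilities described above, not the actual kernel interfaces:
#include <stdbool.h>
#include <stddef.h>

// Hypothetical helpers standing in for the page fault handling described above.
static bool mapping_established(unsigned long vaddr)        { (void)vaddr; return false; }
static bool host_memory_sufficient(size_t len)              { (void)len;   return false; }
static void allocate_and_map(unsigned long vaddr, size_t n) { (void)vaddr; (void)n; }  // create a page-table entry for the MMU
static void rdma_swap_backup_cold_data(void)                { }                        // steps S101 to S103: back up long-unused data
static void access_host_memory(unsigned long vaddr)         { (void)vaddr; }

static void on_memory_access(unsigned long vaddr, size_t len)
{
    if (!mapping_established(vaddr)) {
        if (!host_memory_sufficient(len))
            rdma_swap_backup_cold_data();      // frees host memory so the allocation below can proceed
        allocate_and_map(vaddr, len);
    }
    access_host_memory(vaddr);
}

int main(void)
{
    on_memory_access(0x1000, 4096);
    return 0;
}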
In an application program, data writing (for example, an operation of assigning a value to an address pointed to by a pointer in C language code) is an indispensable operation. In the following, a specific implementation is described by taking the assignment operation as an example:
the application assigns a value directly to an address. The address is a virtual address, and no physical address is allocated, so that the CPU will generate a page fault exception.
*p=A;
When the above statement is executed at the processor level, a data store instruction such as "STRB W0, [<address>]" is executed.
After the CPU generates the page fault exception, the page fault handling part of the operating system finds that there is not enough local physical memory to establish the address mapping, and then executes the processing logic of steps S101 to S103, which is denoted as the rdma_swap processing logic.
In the rdma_swap processing logic, when the on-chip memory of the local FPGA is also found to be insufficient, data that has not been used for a long time in the local memory is backed up to the memory of the remote memory array device.
The specific process is as follows:
1. An RDMA_WRITE command is issued to the RDMA transmission module in the local FPGA.
rdma_sq_wr.opcode = IBV_WR_RDMA_WRITE; // RDMA operation code
rdma_sgl.addr = local_addr; // local memory address of the data that has not been used for a long time
rdma_sq_wr.sg_list = &rdma_sgl;
rdma_sq_wr.wr.rdma.remote_addr = remote_addr; // remote destination address
ibv_post_send(qp, &rdma_sq_wr, &bad_wr); // issue the command to the RDMA transmission module
2. Wait for the RDMA transmission module to finish processing.
ibv_poll_cq(cq, 1, &wc); // poll the completion queue until the work request completes
After the RDMA transmission module finishes processing, the memory mapping of the data that has not been used for a long time is deleted, releasing space for the address currently being accessed.
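By symmetry, copying backed-up data from the remote memory array device back into the host memory (the access path described earlier) could reuse the same RDMA transmission module with an RDMA read work request. The following fragment is only an illustrative sketch that relies on the same libibverbs objects (qp, cq, work request and scatter/gather entry) as the example above; it is not code disclosed by the application:
rdma_sq_wr.opcode = IBV_WR_RDMA_READ; // RDMA read operation code
rdma_sgl.addr = local_addr; // host memory address to copy the backed-up data into
rdma_sq_wr.sg_list = &rdma_sgl;
rdma_sq_wr.wr.rdma.remote_addr = remote_addr; // recorded backup address on the remote side
ibv_post_send(qp, &rdma_sq_wr, &bad_wr); // issue the command to the RDMA transmission module
ibv_poll_cq(cq, 1, &wc); // wait for completion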
In practical applications, the existing swap processing logic in the operating system, as shown in fig. 2, can be replaced entirely by the rdma_swap processing logic provided by the present application. Alternatively, new logic can be added on the basis of the swap processing logic, that is, remaining compatible with the original swap processing logic: the swap processing logic is modified so that, before data is stored to the swap partition on the hard disk, it is judged whether available on-chip memory of the local FPGA or available memory of the remote memory array device exists. If so, the processing logic shown in fig. 5 is run, as sketched below; if not, the original swap processing logic is run.
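A minimal sketch of this compatible variant is given below in C; the capacity checks, the rdma_swap_backup() routine (standing in for the processing logic of fig. 5) and swap_to_disk_partition() are all hypothetical illustrations:
#include <stdbool.h>
#include <stddef.h>

static bool local_fpga_has_space(size_t len)    { (void)len; return false; }  // on-chip memory of the local FPGA
static bool remote_array_has_space(size_t len)  { (void)len; return true;  }  // memory of the remote memory array device
static void rdma_swap_backup(const void *d, size_t n)       { (void)d; (void)n; }  // processing logic of fig. 5, as sketched earlier
static void swap_to_disk_partition(const void *d, size_t n) { (void)d; (void)n; }  // original swap processing logic

// Modified swap-out: try the FPGA / remote-memory carriers first and fall back to the
// swap partition on the hard disk only when neither has allocable space.
static void swap_out(const void *cold_data, size_t len)
{
    if (local_fpga_has_space(len) || remote_array_has_space(len))
        rdma_swap_backup(cold_data, len);
    else
        swap_to_disk_partition(cold_data, len);
}

int main(void)
{
    char cold_data[4096] = { 0 };
    swap_out(cold_data, sizeof cold_data);
    return 0;
}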
In summary, the memory backup acceleration method provided by the application uses the memory of the local FPGA on the local host and the memory of the remote memory array device as physical carriers for backing up data in the host memory of the local host, replaces the traditional scheme of using a hard disk as a data backup carrier, and can effectively improve the system operation speed.
The present application further provides a memory backup acceleration apparatus, which can be referred to in correspondence with the above-described method. Referring to fig. 6, fig. 6 is a schematic diagram of a memory backup acceleration device according to an embodiment of the present application, and referring to fig. 6, the device includes:
the judging module 10 is configured to judge whether an on-chip memory of the local FPGA has an allocable space when a host memory space of the local host is insufficient;
the first backup module 20 is configured to backup target data to an on-chip memory of the local FPGA if the on-chip memory of the local FPGA has an allocable space;
the second backup module 30 is configured to backup the target data to a remote memory array device if the on-chip memory of the local FPGA does not have an allocable space.
On the basis of the foregoing embodiment, as a specific implementation manner, the second backup module 30 includes:
the first backup unit is used for transmitting the target data to the far-end FPGA and storing the target data into the on-chip memory of the far-end FPGA if the on-chip memory of the far-end FPGA in the far-end memory array equipment has an allocable space;
and the second backup unit is used for transmitting the target data to the remote FPGA if the on-chip memory of the remote FPGA does not have the allocable space, and storing the target data into the remote memory of the remote memory array device through the remote FPGA.
On the basis of the foregoing embodiment, as a specific implementation manner, the first backup unit and the second backup unit are specifically configured to:
transmitting the target data to the remote FPGA through an RDMA transmit module of the local FPGA.
On the basis of the above embodiment, as a specific implementation manner, the method further includes:
the first access module is used for accessing the host memory if the data to be accessed is located in the host memory when the data is accessed;
the second access module is used for copying the data to be accessed from the on-chip memory of the local FPGA to the host memory and accessing the host memory if the data to be accessed is located in the on-chip memory of the local FPGA;
and the third access module is used for copying the data to be accessed from the remote memory array equipment to the host memory and accessing the host memory if the data to be accessed is located in the remote memory array equipment.
On the basis of the foregoing embodiment, as a specific implementation manner, the third access module includes:
the first copying unit is used for copying the data to be accessed from the on-chip memory of the far-end FPGA to the local memory if the data to be accessed is located in the on-chip memory of the far-end FPGA of the far-end memory array equipment;
a second copying unit, configured to copy the data to be accessed from the remote memory to the local memory if the data to be accessed is located in a remote memory of the remote memory array device.
On the basis of the above embodiment, as a specific implementation manner, the method further includes:
and the first space releasing module is used for releasing the space occupied by the data to be accessed in the on-chip memory of the local FPGA after copying the data to be accessed from the on-chip memory of the local FPGA to the host memory.
On the basis of the above embodiment, as a specific implementation manner, the method further includes:
and the second space releasing module is used for releasing the space occupied by the data to be accessed in the remote memory array equipment after copying the data to be accessed from the remote memory array equipment to the host memory.
On the basis of the foregoing embodiment, as a specific implementation manner, the second space release module includes:
the first releasing unit is used for releasing the space occupied by the data to be accessed in the on-chip memory of the far-end FPGA if the data to be accessed is copied from the on-chip memory of the far-end FPGA to the local memory;
a second releasing unit, configured to release a space occupied by the data to be accessed in the remote memory if the data to be accessed is copied from the remote memory to the local memory.
On the basis of the above embodiment, as a specific implementation manner, the method further includes:
and the third space releasing unit is used for releasing the space of the local memory occupied by the target data after the target data is backed up in the on-chip memory of the local FPGA or the remote memory array equipment.
On the basis of the above embodiment, as a specific implementation manner, the method further includes:
and the memory mapping establishing module is used for allocating the host memory and establishing the memory mapping when the host memory space of the local host is enough.
On the basis of the above embodiment, as a specific implementation manner, the method further includes:
the first judgment module is used for judging whether the local host establishes the memory mapping;
a second determining module, configured to determine whether a host memory space of the local host is sufficient if the local host does not establish a memory mapping;
if the host memory space of the local host is enough, the memory mapping establishing module allocates the host memory and establishes the memory mapping.
On the basis of the above embodiment, as a specific implementation manner, the method further includes:
and the recording module is used for recording the information of the equipment where the backed-up target data is located and the address of a memory for storing the target data in the equipment.
According to the memory backup accelerating device, the memory of the local FPGA on the local host and the memory of the remote memory array device are used as physical carriers for backing up data in the host memory of the local host, the traditional scheme that a hard disk is used as a data backup carrier is replaced, and the system operation speed can be effectively improved.
The present application also provides a memory backup acceleration device, which is shown in fig. 7 and includes a memory 1 and a processor 2.
A memory 1 for storing a computer program;
a processor 2 for executing a computer program to implement the steps of:
when the host memory space of the local host is insufficient, judging whether the on-chip memory of the local FPGA has an allocable space; if the on-chip memory of the local FPGA has the allocable space, backing up target data to the on-chip memory of the local FPGA; and if the on-chip memory of the local FPGA does not have the allocable space, backing up the target data to a remote memory array device.
According to the memory backup accelerating device, the memory of the local FPGA on the local host and the memory of the remote memory array device are used as physical carriers for backing up data in the host memory of the local host, the traditional scheme that a hard disk is used as a data backup carrier is replaced, and the system operation speed can be effectively improved.
For the introduction of the device provided in the present application, please refer to the method embodiments described above, which are not described herein again.
The present application further provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of:
when the host memory space of the local host is insufficient, judging whether the on-chip memory of the local FPGA has an allocable space; if the on-chip memory of the local FPGA has the allocable space, backing up target data to the on-chip memory of the local FPGA; and if the on-chip memory of the local FPGA does not have the allocable space, backing up the target data to a remote memory array device.
The computer-readable storage medium provided by the application uses the memory of the local FPGA on the local host and the memory of the remote memory array device as physical carriers for backing up data in the host memory of the local host, replaces the traditional scheme of taking a hard disk as a data backup carrier, and can effectively improve the running speed of a system.
The computer-readable storage medium may include: various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For the introduction of the computer-readable storage medium provided in the present application, please refer to the above method embodiments, which are not described herein again.
The embodiments are described in a progressive mode in the specification, the emphasis of each embodiment is on the difference from the other embodiments, and the same and similar parts among the embodiments can be referred to each other. The device, the apparatus and the computer-readable storage medium disclosed by the embodiments correspond to the method disclosed by the embodiments, so that the description is simple, and the relevant points can be referred to the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The memory backup acceleration method, apparatus, device, and computer-readable storage medium provided by the present application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.

Claims (15)

1. A memory backup acceleration method is characterized by comprising the following steps:
when the host memory space of the local host is insufficient, judging whether the on-chip memory of the local FPGA has an allocable space;
if the on-chip memory of the local FPGA has the allocable space, backing up target data to the on-chip memory of the local FPGA;
and if the on-chip memory of the local FPGA does not have the allocable space, backing up the target data to a remote memory array device.
2. The method of claim 1, wherein backing up the target data to a remote memory array device comprises:
if the on-chip memory of the far-end FPGA in the far-end memory array equipment has the allocable space, transmitting the target data to the far-end FPGA, and storing the target data into the on-chip memory of the far-end FPGA;
and if the on-chip memory of the remote FPGA does not have the allocable space, transmitting the target data to the remote FPGA, and storing the target data into the remote memory of the remote memory array equipment through the remote FPGA.
3. The memory backup acceleration method of claim 1, wherein transmitting the target data to the remote FPGA comprises:
transmitting the target data to the remote FPGA through an RDMA transmit module of the local FPGA.
4. The memory backup acceleration method according to claim 1, further comprising:
when accessing data, if the data to be accessed is located in the host memory, accessing the host memory;
if the data to be accessed is located in the on-chip memory of the local FPGA, copying the data to be accessed from the on-chip memory of the local FPGA to the host memory, and accessing the host memory;
and if the data to be accessed is located in the remote memory array device, copying the data to be accessed from the remote memory array device to the host memory, and accessing the host memory.
5. The memory backup acceleration method of claim 4, wherein the copying the data to be accessed from the remote memory array device to the host memory comprises:
if the data to be accessed is located in an on-chip memory of a far-end FPGA of the far-end memory array equipment, copying the data to be accessed from the on-chip memory of the far-end FPGA to the local memory;
and if the data to be accessed is located in a remote memory of the remote memory array equipment, copying the data to be accessed from the remote memory to the local memory.
6. The memory backup acceleration method according to claim 5, further comprising:
and after the data to be accessed are copied from the on-chip memory of the local FPGA to the host memory, releasing the space occupied by the data to be accessed in the on-chip memory of the local FPGA.
7. The memory backup acceleration method according to claim 5, further comprising:
and after the data to be accessed is copied from the remote memory array equipment to the host memory, releasing the space occupied by the data to be accessed in the remote memory array equipment.
8. The memory backup acceleration method according to claim 7, wherein the releasing the space occupied by the data to be accessed in the remote memory array device comprises:
if the data to be accessed is copied from the on-chip memory of the far-end FPGA to the local memory, releasing the space occupied by the data to be accessed in the on-chip memory of the far-end FPGA;
and if the data to be accessed is copied from the remote memory to the local memory, releasing the space occupied by the data to be accessed in the remote memory.
9. The memory backup acceleration method according to claim 1, further comprising:
and after backing up the target data to the on-chip memory of the local FPGA or the remote memory array equipment, releasing the space of the local memory occupied by the target data.
10. The memory backup acceleration method according to claim 1, further comprising:
and when the host memory space of the local host is enough, allocating the host memory and establishing memory mapping.
11. The method of claim 10, wherein before allocating the host memory and establishing the memory map when the host memory space of the local host is sufficient, the method further comprises:
judging whether the local host establishes the memory mapping;
if the local host does not establish the memory mapping, judging whether the host memory space of the local host is enough;
and if the host memory space of the local host is enough, allocating the host memory and establishing memory mapping.
12. The memory backup acceleration method according to claim 1, further comprising:
and recording the information of the equipment where the backed-up target data is located and the address of a memory in the equipment for storing the target data.
13. A memory backup acceleration device, comprising:
the judging module is used for judging whether the on-chip memory of the local FPGA has the allocable space when the host memory space of the local host is insufficient;
the first backup module is used for backing up target data to the on-chip memory of the local FPGA if the on-chip memory of the local FPGA has the allocable space;
and the second backup module is used for backing up the target data to a remote memory array device if the on-chip memory of the local FPGA does not have the allocable space.
14. A memory backup acceleration device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the memory backup acceleration method according to any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the memory backup acceleration method according to any one of claims 1 to 12.
CN202211458494.9A 2022-11-21 2022-11-21 Memory backup acceleration method, device, equipment and computer readable storage medium Pending CN115756962A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211458494.9A CN115756962A (en) 2022-11-21 2022-11-21 Memory backup acceleration method, device, equipment and computer readable storage medium
PCT/CN2023/081742 WO2024108825A1 (en) 2022-11-21 2023-03-15 Memory backup acceleration method, apparatus and device, and non-volatile readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211458494.9A CN115756962A (en) 2022-11-21 2022-11-21 Memory backup acceleration method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115756962A true CN115756962A (en) 2023-03-07

Family

ID=85333939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211458494.9A Pending CN115756962A (en) 2022-11-21 2022-11-21 Memory backup acceleration method, device, equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN115756962A (en)
WO (1) WO2024108825A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117493259A (en) * 2023-12-28 2024-02-02 苏州元脑智能科技有限公司 Data storage system, method and server
CN117493259B (en) * 2023-12-28 2024-04-05 苏州元脑智能科技有限公司 Data storage system, method and server

Also Published As

Publication number Publication date
WO2024108825A1 (en) 2024-05-30

Similar Documents

Publication Publication Date Title
US7707337B2 (en) Object-based storage device with low process load and control method thereof
JP4831759B2 (en) Method, system, and computer program for allocating DMA address space
US8356149B2 (en) Memory migration
US7526578B2 (en) Option ROM characterization
US9021148B2 (en) Fast path userspace RDMA resource error detection
US7809918B1 (en) Method, apparatus, and computer-readable medium for providing physical memory management functions
US20190243757A1 (en) Systems and methods for input/output computing resource control
CN112612623B (en) Method and equipment for managing shared memory
CN106557427B (en) Memory management method and device for shared memory database
CN109324991A (en) A kind of hot plug device of PCIE device, method, medium and system
US20230168953A1 (en) Inter-process communication method and apparatus
EP1934762A2 (en) Apparatus and method for handling dma requests in a virtual memory environment
CN114327777B (en) Method and device for determining global page directory, electronic equipment and storage medium
CN112306415A (en) GC flow control method and device, computer readable storage medium and electronic equipment
KR102326280B1 (en) Method, apparatus, device and medium for processing data
CN114153779A (en) I2C communication method, system, equipment and storage medium
CN115756962A (en) Memory backup acceleration method, device, equipment and computer readable storage medium
US8156510B2 (en) Process retext for dynamically loaded modules
US7783849B2 (en) Using trusted user space pages as kernel data pages
US20200272520A1 (en) Stack management
US6463515B1 (en) System and method for recovering physical memory locations in a computer system
CN115185874B (en) PCIE resource allocation method and related device
CN112486410B (en) Method, system, device and storage medium for reading and writing persistent memory file
KR20150096177A (en) Method for performing garbage collection and flash memory apparatus using the method
US8813075B2 (en) Virtual computer system and method of installing virtual computer system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination