CN118012338A - High-speed storage method and device based on field programmable gate array - Google Patents

High-speed storage method and device based on field programmable gate array

Info

Publication number
CN118012338A
CN118012338A
Authority
CN
China
Prior art keywords
data
target
request
programmable gate
field programmable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410058142.7A
Other languages
Chinese (zh)
Inventor
郑杰
刘春�
黄雅峥
王伟伟
郑芳只
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 52 Research Institute
Original Assignee
CETC 52 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 52 Research Institute filed Critical CETC 52 Research Institute
Priority to CN202410058142.7A priority Critical patent/CN118012338A/en
Publication of CN118012338A publication Critical patent/CN118012338A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a high-speed storage method and device based on a field programmable gate array. In the method, a central processing unit initiates a data write request, and the central processing unit and a back-end memory bank array split the data write request to obtain at least two target requests; the back-end memory bank array sends each target request to the field programmable gate array, which parses each target request to determine the corresponding target address range; the field programmable gate array acquires the corresponding target data based on each target address range and determines a logical identifier for each target data; and the field programmable gate array sends each target data to the back-end memory bank array based on its logical identifier. By completing the reordering response to data requests in the FPGA, the application solves the problem of out-of-order data requests, improves system performance, and realizes low-latency, high-bandwidth data storage.

Description

High-speed storage method and device based on field programmable gate array
Technical Field
The application relates to the technical fields of data storage and computer applications, and in particular to a high-speed storage method and device based on a field programmable gate array.
Background
At present there are many domestic manufacturers of electronic disks supporting the Non-Volatile Memory Express (NVMe) protocol, and the master controllers used in these hard disks are also varied. To fully exploit the read/write performance of the electronic disk, some disk-controller manufacturers issue data request instructions in parallel from multiple cores and multiple controllers when writing data to the disk, which causes the data request instructions received at the user end to arrive out of order. It should be noted that the out-of-order data requests discussed here occur when data is written to memory, unlike the data disorder caused by out-of-order Non-Posted Request messages during a normal memory read operation.
For out-of-order write-data request packets, existing solutions fall into two types:
The first: a central processing unit (CPU) serves as the receiving end of user data and completes the disk-write operation together with the NVMe controller. The CPU is responsible for command interaction with the electronic disk (such as parsing data commands) and for the ordering response of user data.
The second: a field programmable gate array (FPGA) serves as the receiving end of user data and writes the data into the CPU's memory, after which the CPU writes the data from memory to the electronic disk. The CPU is responsible for command interaction with the electronic disk and for the ordering response of user data.
However, in both schemes the ordering response of the data requests is actually completed by the CPU, which excessively occupies CPU resources, reduces system performance, and delays the system's response to requests. As a result, the maximum performance of the electronic disk cannot be reached during high-speed storage, and low-latency, high-bandwidth data storage cannot be realized.
Disclosure of Invention
To solve the problem of out-of-order data request packets, embodiments of the present application provide a high-speed storage method and device based on a field programmable gate array. The technical scheme is as follows:
In a first aspect, an embodiment of the present application provides a high-speed storage method based on a field programmable gate array, where the method is applied to a data acquisition architecture, and the data acquisition architecture includes a field programmable gate array, a central processor, and a back-end memory bank array, and the method includes:
Initiating a data write request by a central processing unit, and splitting the data write request by the central processing unit and a back-end memory bank array to obtain at least two target requests; wherein the data amount of each target request is the same;
The method comprises the steps that a back-end memory bank array sends each target request to a field programmable gate array, and the field programmable gate array analyzes and processes each target request to determine a target address range corresponding to each target request;
Acquiring corresponding target data based on each target address range by a field programmable gate array, and determining a logic identifier corresponding to each target data;
each target data is sent to the back-end bank array by the field programmable gate array based on each logical identification.
In an alternative of the first aspect, the data acquisition architecture further comprises a cache unit;
before a data write request is initiated by a central processor, comprising:
Receiving user data by the field programmable gate array and transmitting the user data to the cache unit; wherein the user data is data corresponding to a data writing request;
Calculating the data quantity of the cache unit in real time by a field programmable gate array, and judging whether the data quantity reaches a preset threshold value or not;
when the data quantity is identified to reach a preset threshold value, the field programmable gate array performs writing operation on the user data in the cache unit according to a preset writing mode, and generates an interrupt signal.
Initiating, by the central processor, a data write request, comprising:
Transmitting an interrupt signal to the central processing unit by the field programmable gate array;
and the central processing unit initiates a data writing request corresponding to the interrupt signal according to the interrupt signal.
In yet another alternative of the first aspect, the back-end bank array includes a first split unit and a second split unit;
Splitting the data writing request by the central processing unit and the back-end storage array to obtain at least two target requests, wherein the splitting comprises the following steps:
Splitting the data writing request by the central processing unit to obtain at least two first requests, and sending each first request to a first splitting unit; wherein the data volume of each first request is the same, and the data volume of the first request is smaller than the data volume of the data writing request;
splitting each first request by a first splitting unit to obtain at least two second requests, and sending each second request to a second splitting unit; wherein the data volume of each second request is the same, the data volume of the second request being smaller than the data volume of the first request;
Splitting each second request by a second splitting unit to obtain at least two target requests; wherein the amount of data of the target request is less than the amount of data of the second request.
In yet another alternative of the first aspect, the parsing, by the field programmable gate array, of each target request determines a target address range corresponding to each target request, including:
analyzing each target request by the field programmable gate array to obtain a target head address and a data effective length corresponding to each target request;
And calculating a target address range corresponding to each target request according to each target head address and the effective length of the data.
In yet another alternative of the first aspect, the obtaining, by the field programmable gate array, the respective target data based on each target address range, and determining the logical identification corresponding to each target data, includes:
Performing shift processing on each target address range by using a field programmable gate array according to preset shift information and each target address range to obtain a storage address range corresponding to each target address range; wherein the storage address range comprises a storage head address and a storage tail address;
Acquiring target data corresponding to each storage address range according to each storage head address and the corresponding storage tail address;
and determining a logic identifier corresponding to each target data according to each storage head address.
In a further alternative of the first aspect, the data acquisition architecture further comprises a data exchange unit;
Transmitting, by the field programmable gate array, each target data to the back-end bank array based on each logical identification, comprising:
Determining the exchange address of each target data by the field programmable gate array according to each logic identifier and a preset mapping relation;
transmitting, by the field programmable gate array, each target data to the data switching unit based on the switching address;
all target data are sent to the back-end bank array by the data exchange unit.
In a further alternative of the first aspect, the sending, by the data exchange unit, all the target data to the back-end bank array comprises:
when the data exchange unit receives all the target data, integrating all the target data to obtain request data;
the request data is sent to the back-end bank array by the data exchange unit.
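The data exchange unit's behavior of collecting all target data and integrating it into a single request payload can be sketched as follows. The class name, the use of a plain dictionary keyed by logical identifier, and the integration into one byte string are assumptions for illustration; the patent only specifies that all target data is integrated once received in full.

```python
# Sketch of the data exchange unit: collect target data chunks that may
# arrive out of order, then integrate them into one contiguous request
# payload in ascending logical-identifier order.
class DataExchangeUnit:
    def __init__(self, expected_count):
        self.expected_count = expected_count
        self.chunks = {}  # logical identifier -> data bytes

    def submit(self, logical_id, data):
        self.chunks[logical_id] = data
        return len(self.chunks) == self.expected_count  # all received?

    def integrate(self):
        # Concatenating in ascending identifier order restores the
        # original order of the user data before sending it onward.
        return b"".join(self.chunks[i] for i in sorted(self.chunks))
```

Because integration is keyed on the logical identifiers, the order in which target data arrives at the exchange unit does not affect the final request data.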
In a second aspect, an embodiment of the present application provides a high-speed storage device based on a field programmable gate array, where the device is applied to a data acquisition architecture, and the data acquisition architecture includes a field programmable gate array, a central processing unit, and a back-end bank array, and the device includes:
The first processing module is used for initiating a data write request by the central processing unit, and splitting the data write request by the central processing unit and the back-end memory bank array to obtain at least two target requests; wherein the data amount of each target request is the same;
The second processing module is used for sending each target request to the field programmable gate array by the back-end memory bank array, analyzing and processing each target request by the field programmable gate array, and determining a target address range corresponding to each target request;
the third processing module is used for acquiring corresponding target data based on each target address range by the field programmable gate array and determining a logic identifier corresponding to each target data;
And the fourth processing module is used for transmitting each target data to the back-end memory bank array by the field programmable gate array based on each logic identifier.
In a third aspect, an embodiment of the present application further provides a high-speed storage device based on a field programmable gate array, including a processor and a memory;
The processor is connected with the memory;
A memory for storing executable program code;
The processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for implementing the high-speed storage method based on the field programmable gate array provided in the first aspect of the embodiment of the present application or any implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium, where a computer program is stored, where the computer program includes program instructions, where the program instructions, when executed by a processor, implement a high-speed storage method based on a field programmable gate array provided in the first aspect or any implementation manner of the first aspect of the embodiment of the present application.
The technical scheme provided by some embodiments of the present specification has the following beneficial effects:
In a high-speed storage process based on a field programmable gate array, a central processing unit initiates a data write request, and the central processing unit and a back-end memory bank array split the data write request to obtain at least two target requests; the back-end memory bank array sends each target request to the field programmable gate array, which parses each target request to determine the corresponding target address range; the field programmable gate array acquires the corresponding target data based on each target address range and determines a logical identifier for each target data; and the field programmable gate array sends each target data to the back-end memory bank array based on its logical identifier. The field programmable gate array parses the target requests obtained after splitting, acquires the target data according to the parsed target address ranges, determines the logical identifiers, and then sends the target data to the back-end memory bank array according to the logical identifiers, thereby completing the in-order landing of data on disk. This solves the problem of out-of-order data request packets, reduces the occupation of CPU resources, improves system performance, and realizes low-latency, high-bandwidth data storage.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a general flow chart of a high-speed storage method based on a field programmable gate array according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a splitting process of a data write request according to an embodiment of the present application;
FIG. 3 is a block diagram of a high-speed memory method based on a field programmable gate array according to an embodiment of the present application;
FIG. 4 is a block diagram of a logic module of a field programmable gate array according to an embodiment of the application;
FIG. 5 is a schematic diagram of a high-speed memory device based on a field programmable gate array according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of still another high-speed memory device based on a field programmable gate array according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The following description provides various embodiments of the application that may be substituted or combined across different embodiments, and the application is therefore to be considered as embracing all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes features A, B, and C and another embodiment includes features B and D, the application should also be seen as embracing one or more of all other possible combinations of A, B, C, and D, although such an embodiment may not be explicitly recited in the following.
The following description provides examples and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements described without departing from the scope of the application. Various examples may omit, replace, or add various procedures or components as appropriate. For example, the described methods may be performed in a different order than described, and various steps may be added, omitted, or combined. Furthermore, features described with respect to some examples may be combined into other examples.
Referring to fig. 1, fig. 1 is an overall flowchart of a high-speed storage method based on a field programmable gate array according to an embodiment of the present application.
As shown in fig. 1, the high-speed storage method based on the field programmable gate array at least comprises the following steps:
And step 101, a central processing unit initiates a data write request, and the central processing unit and the back-end memory bank array split the data write request to obtain at least two target requests.
In the embodiment of the application, the high-speed storage method based on the field programmable gate array is applied to a data acquisition architecture, and the data acquisition architecture at least comprises a field programmable gate array, a central processing unit, and a back-end memory bank array. The field programmable gate array (Field-Programmable Gate Array, FPGA) is a programmable logic device whose internal logic units, interconnects, and memory units can be configured and reprogrammed to realize specific digital circuit functions; the back-end memory bank array may be, but is not limited to being, a disk array. In the high-speed storage process based on the field programmable gate array, the central processing unit initiates a data write request, the field programmable gate array parses the target requests obtained by splitting at the central processing unit and the back-end memory bank array, and the field programmable gate array then acquires the target data according to the parsed target address ranges and determines the logical identifiers, so that the target data is sent to the back-end memory bank array according to the logical identifiers to complete the in-order landing of data on disk. This solves the problem of out-of-order data request packets, reduces the resource occupation of the central processing unit (Central Processing Unit, CPU), improves system performance, and realizes low-latency, high-bandwidth data storage.
In one or more embodiments described below, a CPU is used in place of a central processing unit, an FPGA is used in place of a field programmable gate array.
In particular, in a field programmable gate array based high speed storage process, when a CPU initiates a data write request, the request may be split into multiple smaller sub-requests in order to more efficiently process the request. These sub-requests may be split according to memory address ranges or data block sizes and then sent in parallel to different storage devices or storage units, thereby improving overall data writing speed and system response performance.
Further, the CPU sends the sub-request subjected to one split processing to the back-end memory bank array, and when the back-end memory bank array receives the sub-request, the sub-request can be subjected to at least one split processing to obtain at least two target requests.
It should be noted that the splitting here must ensure that every sub-request (or target request) produced at the same level carries the same amount of data.
As an option of the embodiment of the present application, the data acquisition architecture further includes a cache unit;
before a data write request is initiated by a central processor, comprising:
Receiving user data by the field programmable gate array and transmitting the user data to the cache unit; wherein the user data is data corresponding to a data writing request;
Calculating the data quantity of the cache unit in real time by a field programmable gate array, and judging whether the data quantity reaches a preset threshold value or not;
when the data quantity is identified to reach a preset threshold value, the field programmable gate array performs writing operation on the user data in the cache unit according to a preset writing mode, and generates an interrupt signal.
Initiating, by the central processor, a data write request, comprising:
Transmitting an interrupt signal to the central processing unit by the field programmable gate array;
and the central processing unit initiates a data writing request corresponding to the interrupt signal according to the interrupt signal.
In particular, the above-mentioned data acquisition architecture may further comprise a cache unit, such as a double data rate synchronous dynamic random access memory (DDR SDRAM). Before the CPU initiates a data write request, the FPGA can collect user data and send the user data to the cache unit.
It should be noted that, the user data is data corresponding to the data writing request initiated by the CPU, and the collection manner of the user data may be, but is not limited to,: external devices (e.g., external sensors, etc.) transmit data to the FPGA through a parallel interface (e.g., GPIO) or a serial interface (e.g., SPI, I2C, UART).
It can be understood that, because the storage space of the buffer unit is larger and the transmission rate is faster, the data access speed can be increased while more user data is stored, so that the response delay of the system is reduced, and the low-delay and high-bandwidth data storage is realized.
Then, the FPGA calculates in real time the amount of data written into the cache unit. When this amount reaches the preset threshold, it indicates that the data accumulated in the cache unit has reached the amount involved in one transfer operation, so the data in the cache unit can be written into the FPGA's internal BRAM (Block RAM, a memory type commonly used in digital integrated circuits such as FPGAs and ASICs) according to a preset writing mode. The preset writing mode may be, but is not limited to, a ping-pong writing mode, in which data is written alternately into two or more storage areas: for example, with storage blocks BRAM_A and BRAM_B, after 1 KB of data is written into BRAM_A, the next 1 KB is written into BRAM_B, and the ping-pong writing operation cycles back and forth.
It is worth noting that the ping-pong writing mode can solve the delay problem in the parallel data transmission and processing process, and further achieve low-delay and high-bandwidth data storage. At the same time, the FPGA generates an interrupt signal indicating that it is in a state where it can receive a data write request.
After the interrupt signal is generated, the FPGA sends it to the CPU to prompt the CPU to initiate a data write request. The CPU then initiates a data write request corresponding to the interrupt signal according to the threshold value carried in it. For example, with a threshold of 1 MB, the CPU may request that 1 MB of data be written to the back-end memory bank array.
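The ping-pong writing mode described above can be sketched in software as follows. This is a minimal illustrative model, not the patent's hardware implementation; the class and field names, and the choice of exactly two buffers of a fixed capacity, are assumptions for illustration.

```python
# Minimal sketch of ping-pong writing: incoming user data alternates
# between two buffers so one buffer can be drained (e.g. transferred
# onward) while the other fills. Names and capacity are illustrative.
class PingPongBuffer:
    def __init__(self, capacity):
        self.buffers = [bytearray(), bytearray()]  # stand-ins for BRAM_A, BRAM_B
        self.capacity = capacity
        self.active = 0  # index of the buffer currently being filled

    def write(self, data):
        buf = self.buffers[self.active]
        buf.extend(data)
        if len(buf) >= self.capacity:
            self.active ^= 1   # swap roles: the full buffer is now drained
            return True        # analogous to raising the interrupt signal
        return False
```

In this model, the `True` return plays the role of the interrupt signal: it tells the consumer that one buffer has reached the threshold and is ready to be moved, while new writes continue into the other buffer.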
As yet another alternative of the embodiment of the present application, the back-end bank array includes a first splitting unit and a second splitting unit;
Splitting the data writing request by the central processing unit and the back-end storage array to obtain at least two target requests, wherein the splitting comprises the following steps:
Splitting the data writing request by the central processing unit to obtain at least two first requests, and sending each first request to a first splitting unit; wherein the data volume of each first request is the same, and the data volume of the first request is smaller than the data volume of the data writing request;
splitting each first request by a first splitting unit to obtain at least two second requests, and sending each second request to a second splitting unit; wherein the data volume of each second request is the same, the data volume of the second request being smaller than the data volume of the first request;
Splitting each second request by a second splitting unit to obtain at least two target requests; wherein the amount of data of the target request is less than the amount of data of the second request.
In particular, the back-end memory bank array may further include a first splitting unit and a second splitting unit, wherein the first splitting unit may be, but is not limited to being, a core of a memory bank in the back-end memory bank array (generally a processing core or control core in a memory controller), such as a core of an NVMe disk; the second splitting unit may be, but is not limited to being, a DMA (Direct Memory Access) controller in a memory bank, such as a DMA controller in an NVMe disk. Notably, one memory bank may contain multiple cores and DMA controllers, e.g., 4 cores and 3 DMA controllers in an NVMe disk.
After the CPU initiates the data writing request, the CPU can split the request to obtain at least two first requests with the same data volume, and all the first requests are sent to the first splitting unit. After that, the first splitting unit splits each first request into at least two second requests with the same data volume, and sends all the second requests to the second splitting unit, and then the second splitting unit splits each second request into at least two target requests with the same data volume. It is noted that the amount of data of the target request is smaller than the amount of data of the second request, which is smaller than the amount of data of the first request, which is smaller than the amount of data of the data write request.
Referring now to fig. 2, fig. 2 is a flowchart illustrating a splitting process of a data write request according to an embodiment of the present application.
As shown in fig. 2, core 1, core 2, core 3, and core 4 are the 4 cores of the NVMe disk mentioned above; the data amount labeled in the figure is the data amount of each request; requests labeled with the same data amount are of the same type; and the number of lines in the figure does not represent the specific number of requests.
It can be understood that the data writing request is 1MB, split by the CPU into a plurality of first requests with data size of 32KB, and all the first requests are sent to the cores of the NVMe disk by the CPU. After receiving the first requests with the data volume of 32KB, the core of the NVMe disk splits each first request to obtain a plurality of second requests with the data volume of 4 KB. Then, the core of the NVMe disk sends each split second request with the data size of 4KB to the controller of the NVMe disk, and the controller splits each second request to obtain a plurality of target requests with the data size of 512B, and then the controller can send each target request with the data size of 512B to the FPGA. The controller may refer to the DMA controller mentioned above.
It is noted that the first requests with a data amount of 32 KB reach the FPGA only after being split by the NVMe disk, but each 32 KB first request is contiguous: for example, the first such request obtained by splitting covers 0 to 32 KB of the 1 MB, the second covers 32 KB to 64 KB, and so on. Therefore the out-of-order range is reduced from 1 MB to 32 KB, and the FPGA only needs to respond to the corresponding target requests by address within each 32 KB of data to achieve in-order transmission. The reason the request is split multiple times is that the BRAM resources inside the FPGA are limited, and a reordering response cannot be performed directly on 1 MB of data.
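The three-level split described above (1 MB write request into 32 KB first requests by the CPU, into 4 KB second requests by the disk cores, into 512 B target requests by the DMA controllers) can be sketched as follows. The representation of a request as a `(start_offset, length)` pair is an assumption for illustration; the patent does not specify a request encoding.

```python
# Sketch of the three-level request split: each level cuts every request
# from the previous level into equal, contiguous chunks of a fixed size.
def split(requests, chunk):
    out = []
    for start, length in requests:
        for off in range(0, length, chunk):
            out.append((start + off, min(chunk, length - off)))
    return out

write_request = [(0, 1 << 20)]           # one 1 MB data write request
first = split(write_request, 32 << 10)   # CPU: 32 KB first requests
second = split(first, 4 << 10)           # disk cores: 4 KB second requests
target = split(second, 512)              # DMA controllers: 512 B target requests
```

Running the sketch yields 32 first requests, 256 second requests, and 2048 target requests, matching the sizes in the worked example above, with each level's requests contiguous within their parent request.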
Step 102, the back-end memory bank array sends each target request to the field programmable gate array, and the field programmable gate array analyzes each target request to determine a target address range corresponding to each target request.
Specifically, after the data writing request is split by the CPU and the back-end memory bank array, the obtained target requests are sent to the FPGA by the back-end memory bank array, and each target request is analyzed by the FPGA. After the FPGA analyzes the target requests, address information in the target requests, that is, a target address range corresponding to each target request, can be obtained.
For example, data requested by a certain target request is data 1 to data 8, and addresses of data 1 to data 8 are 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, and 0x07, so that a target address range corresponding to the target request is 0x00 to 0x07.
As still another alternative of the embodiment of the present application, the analyzing, by the field programmable gate array, each target request to determine a target address range corresponding to each target request includes:
analyzing each target request by the field programmable gate array to obtain a target head address and a data effective length corresponding to each target request;
And calculating a target address range corresponding to each target request according to each target head address and the effective length of the data.
Specifically, in the process of resolving target requests by the FPGA to obtain corresponding target address ranges, the FPGA firstly resolves each target request to obtain a target head address and a data effective length corresponding to each target request. Then, the FPGA can calculate a target end address according to each target head address, the effective length of data and a corresponding calculation formula, and further can obtain a target address range corresponding to each target request according to the target head address and the target end address. The calculation formula of the target end address may be, but is not limited to,: destination end address = destination head address + data effective length-1.
For example, the FPGA may analyze a certain target request to obtain a target head address of 0x1000 and a data effective length of 16 bytes. According to the above calculation formula, the target end address is: 0x1000 + 16 - 1 = 0x100F, where F represents 15 and the 0x prefix indicates a hexadecimal address. Therefore, from the target head address 0x1000 and the target end address 0x100F, the target address range is obtained as 0x1000 to 0x100F.
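For illustration only, the end-address formula above may be sketched in Python as a software analogy of the FPGA computation (the function name is hypothetical and not part of the disclosure):

```python
def target_address_range(target_head: int, data_len: int) -> tuple[int, int]:
    """Compute the target address range from a target head address and a
    data effective length: target end = target head + length - 1."""
    if data_len <= 0:
        raise ValueError("data effective length must be positive")
    return target_head, target_head + data_len - 1

# Example from the text: head address 0x1000, effective length 16 bytes
start, end = target_address_range(0x1000, 16)   # → (0x1000, 0x100F)
```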
Step 103, acquiring corresponding target data based on each target address range by the field programmable gate array, and determining a logic identifier corresponding to each target data.
Specifically, after determining the target address range corresponding to each target request, the FPGA may read the data stored in the target address range according to the target address range, that is, the target data.
For example, if a certain target address range is 0x00 to 0x07, the FPGA may acquire, from the internal BRAM, the data in the storage spaces corresponding to these 8 addresses as the target data.
Meanwhile, the logic identifier corresponding to each target data may be determined according to a numerical characteristic of the target address range. The numerical characteristic may include, but is not limited to, the fact that the head address and the tail address of each target address range are unique, so the logic identifier is also uniquely corresponding. For example, since the head address of each target address range is unique and the head addresses can be ordered by value, the target data may be sorted according to the head addresses of the target address ranges, and the sorting result (such as a sequence number) or the head address itself may be used as the logic identifier of the target data; the logic identifier is not limited to the sorting result.
As still another alternative of the embodiment of the present application, the acquiring, by the field programmable gate array, the corresponding target data based on each target address range, and determining the logic identifier corresponding to each target data includes:
Performing shift processing on each target address range by using a field programmable gate array according to preset shift information and each target address range to obtain a storage address range corresponding to each target address range; wherein the storage address range comprises a storage head address and a storage tail address;
Acquiring target data corresponding to each storage address range according to each storage head address and the corresponding storage tail address;
and determining a logic identifier corresponding to each target data according to each storage head address.
Specifically, after determining the target address range corresponding to each target request, the FPGA may perform a shift operation on each target address range according to preset shift information to obtain the corresponding storage address range. The preset shift information may include, but is not limited to, a shift manner and a shift bit count; the shift manner may include, but is not limited to, a logical left shift and a logical right shift. The target address range and the storage address range have different values but the same range size.
The logical shift operation refers to shifting a binary representation of a number to the left or right by a specified number of bits and filling the vacated bits with 0s during the shift.
For example, if a certain target address range is 0x05 to 0x07 (binary 0101 to 0111) and the shift information specifies a logical left shift by one bit, the corresponding storage address range in binary is 1010 to 1110, that is, 0x0A to 0x0E, where A represents the number 10 in hexadecimal and E represents the number 14 in hexadecimal.
It is understood that the range of memory addresses obtained after the shift process includes the memory head address and the memory tail address, and according to the above example, the memory head address and the memory tail address are 0x0A and 0x0E, which are also addresses in BRAM.
Then, data can be read from the BRAM starting at the storage head address and continuing sequentially through the storage tail address; all the data read in this way constitute the target data.
Meanwhile, since each storage head address is unique and the storage head addresses can be ordered by value, all the storage head addresses can be sorted, and the sorting result (such as a sequence number) can be used as the logic identifier of the target data corresponding to each storage head address.
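For illustration only, the shift and sorting steps above may be sketched in Python (the shift width and addresses are taken from the example in the text; the patent itself performs these steps in FPGA logic, and the function names are hypothetical):

```python
def to_storage_range(target_head: int, target_tail: int,
                     shift_bits: int = 1, direction: str = "left") -> tuple[int, int]:
    """Logically shift both ends of a target address range to obtain the
    storage address range (storage head address, storage tail address)."""
    if direction == "left":
        return target_head << shift_bits, target_tail << shift_bits
    return target_head >> shift_bits, target_tail >> shift_bits

def assign_logic_ids(storage_heads: list[int]) -> dict[int, int]:
    """Sort the unique storage head addresses and use each address's rank
    (a 1-based sequence number) as the logic identifier of its target data."""
    return {head: rank for rank, head in enumerate(sorted(storage_heads), start=1)}

# Example from the text: target range 0x05..0x07 shifted left by one bit
head, tail = to_storage_range(0x05, 0x07)   # → (0x0A, 0x0E)
```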
Step 104, each target data is sent to the back-end memory bank array by the field programmable gate array based on each logic identification.
Specifically, after determining the logic identifier of each target data, the FPGA may determine, according to each logic identifier, a write address in the back-end memory bank array corresponding to the logic identifier, and write, according to each write address, the corresponding data into the back-end memory bank array, so as to complete data storage.
For example, if the logic identifiers of 5 target data are 1, 2, 3, 4 and 5 respectively, each sequence number represents a block of storage space, and consecutive sequence numbers represent contiguous storage spaces. If sequence number 1 represents the first storage space of the first disk, the 5 target data can be written into the first disk of the back-end storage array in ascending order of their sequence numbers.
As a further alternative of the embodiment of the present application, the data acquisition architecture further comprises a data exchange unit;
Transmitting, by the field programmable gate array, each target data to the back-end bank array based on each logical identification, comprising:
Determining the exchange address of each target data by the field programmable gate array according to each logic identifier and a preset mapping relation;
transmitting, by the field programmable gate array, each target data to the data switching unit based on the switching address;
all target data are sent to the back-end bank array by the data exchange unit.
Specifically, the data acquisition architecture may further include a data exchange unit, where in the process of sending the target data to the back-end memory array by the FPGA according to the logic identifier, the FPGA may send the target data to the data exchange unit first, and then the data exchange unit sends the target data to the back-end memory array.
It can be understood that the FPGA determines, according to the logic identifier and the preset mapping relationship, an address of a storage space in the data exchange unit to which the target data should be written, that is, an exchange address, and then writes each received target data into a storage space corresponding to the corresponding exchange address. The preset mapping relation is the corresponding relation between the logic identifier and the exchange address.
For example, the logic identifiers of target data 1, 2 and 3 are A, B and C respectively, and according to the preset mapping relationship, the corresponding exchange addresses are X, Y and Z. If the target data are received in the order 2, 3, 1, then data 2 is first written into the storage space with exchange address Y, data 3 is then written into the storage space with exchange address Z, and finally data 1 is written into the storage space with exchange address X.
It should be noted that the preset mapping relationship may be obtained by querying a database; for example, a database table may store the mapping relationship, and the logic identifier may be used to query the exchange address uniquely corresponding to it.
After receiving all the target data, the data exchange unit may then send the target data to the back-end bank array.
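For illustration only, the routing of out-of-order target data to exchange addresses may be sketched in Python; the mapping is modelled here as a plain dictionary (the patent notes it may instead come from a database table), and the identifiers and addresses are the hypothetical ones from the example above:

```python
# Hypothetical preset mapping from the example: logic identifier -> exchange address
PRESET_MAPPING = {"A": "X", "B": "Y", "C": "Z"}

def route_to_switch(arrivals: list[tuple[str, bytes]],
                    mapping: dict[str, str]) -> dict[str, bytes]:
    """Write each (logic identifier, data) pair into the storage space of
    its exchange address, regardless of arrival order."""
    switch_spaces: dict[str, bytes] = {}
    for logic_id, data in arrivals:
        switch_spaces[mapping[logic_id]] = data
    return switch_spaces

# Data arriving in the order 2, 3, 1 still land at their mapped addresses
spaces = route_to_switch([("B", b"2"), ("C", b"3"), ("A", b"1")], PRESET_MAPPING)
```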
As yet another alternative of the embodiment of the present application, the sending, by the data exchange unit, all the target data to the back-end bank array includes:
when the data exchange unit receives all the target data, integrating all the target data to obtain request data;
the request data is sent to the back-end bank array by the data exchange unit.
Specifically, when the data exchange unit has received all the target data, that is, all the user data corresponding to the data writing request has been received, the data exchange unit integrates all the target data into one complete request data and sends the request data to the back-end storage array. The integration method may be, but is not limited to, integrating a plurality of data into one data packet using a network protocol, or integrating a plurality of data into one data packet according to the addresses of the transmitted data. Here, the target data in the request data are in order rather than out of order, which means that the sorting of all target data has been completed before the request data is sent to the back-end memory bank array.
It should be noted that the entity that realizes the sorting of the target data is the FPGA: the FPGA obtains out-of-order target data according to the out-of-order target requests and responds by sending the target data to the data exchange unit according to their addresses, thereby realizing a reordered response.
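For illustration only, the integration step may be sketched in Python: reading the exchange storage spaces in address order yields sequential request data even when the target data arrived out of order (names and addresses are hypothetical):

```python
def integrate(switch_spaces: dict[str, bytes], address_order: list[str]) -> bytes:
    """Assemble one complete request data by reading the exchange storage
    spaces in address order, so the result is sequential, not out of order."""
    return b"".join(switch_spaces[addr] for addr in address_order)

# Even though the data arrived in the order 2, 3, 1, reading the exchange
# addresses X, Y, Z in order reproduces the sequential request data.
request_data = integrate({"Y": b"2", "Z": b"3", "X": b"1"}, ["X", "Y", "Z"])   # → b"123"
```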
Referring to fig. 3, fig. 3 shows an overall architecture diagram of a high-speed storage method based on a field programmable gate array according to an embodiment of the present application.
As shown in fig. 3, the architecture includes a front-end acquisition module, a recording control module, a data exchange module, and a back-end storage array. The front-end acquisition module includes an FPGA and a cache (the cache may be, but is not limited to, a DDR cache); the recording control module includes a CPU and a system main memory; the data exchange module (i.e., the above-mentioned data exchange unit) includes a PCIe switch; and the back-end storage array includes a plurality of storage banks and, optionally, a disk array (RAID), where the RAID mainly refers to a hardware RAID.
It can be understood that the user data is collected by the FPGA in the front-end acquisition module, which sends the collected user data to the cache and then prefetches data from the cache (i.e., reads the user data from the cache before receiving the data writing request) so as to reduce the response delay of the front-end acquisition module. The FPGA then sends the prefetched data to the data exchange module, and the data exchange module sends the data to the back-end storage array, thereby realizing low-delay, high-bandwidth data storage.
It should be noted that, the CPU in the record control module is used as an upstream port of PCIe (i.e., a port of a device at a higher level than the PCIe switch in the network), and is mainly responsible for management of a command queue, splitting of a data writing request, and control of a data interaction flow, for example, receiving an interrupt signal from the FPGA and initiating operations such as a data writing request according to the interrupt signal.
Referring to fig. 4, fig. 4 is a logic block diagram of a field programmable gate array according to an embodiment of the application.
As shown in fig. 4, the logic module block diagram of the FPGA at least includes an off-chip cache control module, a core module, and a command parsing and data transmission module, where the core module includes a memory control module, a memory module, and a cache queue. The memory module may refer to the BRAM mentioned above, and the cache queue may be, but is not limited to, a FIFO (First-In-First-Out) buffer or queue.
It should be noted that the off-chip cache control module is mainly used for implementing read-write control of the off-chip cache (i.e., the cache outside the FPGA chip, which here refers to the above-mentioned DDR) by calling a Xilinx official IP core. The core module is mainly used for pre-reading the user data from the DDR, writing the user data into the BRAM in a ping-pong writing mode, and reading the data in the BRAM and sending it to the PCIe bus (a high-speed serial computer bus standard used to connect external devices to a computer system, which can be understood here as the transmission channel of the PCIe switch). The command parsing and data transmission module is mainly used for parsing and transmitting the TLP command packets (i.e., the target requests) sent from the NVMe disk. A Xilinx official IP core refers to a reusable IP core provided by Xilinx Corporation that has been authenticated and verified by Xilinx and can be used directly on Xilinx FPGA and SoC devices (System-on-Chip: a chip integrating a processor core, memory, input/output interfaces, controllers and other key components to provide complete computing and communication functions, commonly used in embedded systems and mobile devices) to accelerate design development.
It can be understood that when the FPGA receives a target request, it immediately reads the data corresponding to the address information parsed by the command parsing and data transmission module from the BRAM and sends the data to the FIFO at the back end, so that the data is forwarded through the FIFO to the command parsing and data transmission module, thereby achieving accurate transmission of the target data corresponding to each target request.
It should be noted that, in the command parsing and data transmission module, the TLP command packets received from the NVMe disk may be buffered in two stages through the FIFO, that is, an extra buffer layer (a second-level buffer) is added between the first-level buffer and the main memory. Here, the TLP command packets are stored in a first-in-first-out (FIFO) buffer serving as the second-level buffer: after passing through the first-level buffer, each target request is split into two parts, where the address information of the target request is parsed and sent to the core module, and the remainder is sent to the second-level buffer for subsequent command parsing. In this arrangement, the processor of the FPGA first accesses the first-level cache; if the needed data is not there, it next checks the second-level cache; and if the data is found in the second-level cache, it can be obtained without reading from the main memory, thereby improving data access speed, reducing access frequency to the main memory, and improving the overall performance and response speed of the system.
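For illustration only, the two-level lookup described above may be sketched in Python as a software analogy, with dictionaries standing in for the first-level cache, the FIFO-backed second-level buffer, and the main memory (the structure and promotion policy are a simplified sketch, not the hardware implementation):

```python
def lookup(addr: int, l1: dict, l2: dict, main_mem: dict) -> tuple[bytes, str]:
    """Two-level cache lookup: check the first-level cache, then the
    second-level buffer, and only read main memory on a double miss."""
    if addr in l1:
        return l1[addr], "l1"
    if addr in l2:
        l1[addr] = l2[addr]          # promote the hit into the first level
        return l1[addr], "l2"
    data = main_mem[addr]
    l2[addr] = data                  # fill the second-level buffer
    return data, "main"

# Repeated accesses to the same address avoid re-reading main memory.
l1, l2, main_mem = {}, {}, {0x10: b"pkt"}
first = lookup(0x10, l1, l2, main_mem)    # miss both levels -> main memory
second = lookup(0x10, l1, l2, main_mem)   # hit in the second level
third = lookup(0x10, l1, l2, main_mem)    # hit in the first level
```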
Referring to fig. 5, fig. 5 shows a schematic structural diagram of a high-speed memory device based on a field programmable gate array according to an embodiment of the application.
As shown in fig. 5, the high-speed storage device based on the field programmable gate array may include at least a first processing module 501, a second processing module 502, a third processing module 503, and a fourth processing module 504, wherein:
The first processing module 501 is configured to initiate a data writing request by a central processing unit, and split the data writing request by the central processing unit and a back-end storage array to obtain at least two target requests; wherein the data amount of each target request is the same;
the second processing module 502 is configured to send each target request to the field programmable gate array by using the back-end bank array, and perform parsing processing on each target request by using the field programmable gate array, so as to determine a target address range corresponding to each target request;
A third processing module 503, configured to obtain, by the field programmable gate array, corresponding target data based on each target address range, and determine a logic identifier corresponding to each target data;
A fourth processing module 504 is configured to send, by the field programmable gate array, each target data to the back-end bank array based on each logical identification.
In some possible embodiments, the data acquisition architecture further comprises a cache unit;
before a data write request is initiated by a central processor, comprising:
the first processing module 501 is specifically configured to:
Receiving user data by the field programmable gate array and transmitting the user data to the cache unit; wherein the user data is data corresponding to a data writing request;
Calculating the data quantity of the cache unit in real time by a field programmable gate array, and judging whether the data quantity reaches a preset threshold value or not;
when the data quantity is identified to reach a preset threshold value, the field programmable gate array performs writing operation on the user data in the cache unit according to a preset writing mode, and generates an interrupt signal.
Initiating, by the central processor, a data write request, comprising:
the first processing module 501 is specifically configured to:
Transmitting an interrupt signal to the central processing unit by the field programmable gate array;
and the central processing unit initiates a data writing request corresponding to the interrupt signal according to the interrupt signal.
In some possible embodiments, the back-end bank array includes a first split unit and a second split unit;
Splitting the data writing request by the central processing unit and the back-end storage array to obtain at least two target requests, wherein the splitting comprises the following steps:
the first processing module 501 is specifically configured to:
Splitting the data writing request by the central processing unit to obtain at least two first requests, and sending each first request to a first splitting unit; wherein the data volume of each first request is the same, and the data volume of the first request is smaller than the data volume of the data writing request;
splitting each first request by a first splitting unit to obtain at least two second requests, and sending each second request to a second splitting unit; wherein the data volume of each second request is the same, the data volume of the second request being smaller than the data volume of the first request;
Splitting each second request by a second splitting unit to obtain at least two target requests; wherein the amount of data of the target request is less than the amount of data of the second request.
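For illustration only, the staged splitting above may be sketched in Python; the chunk sizes (4 KiB, 1 KiB, 256 B, 64 B) are hypothetical and chosen only to show that each stage yields equal-sized requests smaller than its input:

```python
def split_request(total_len: int, chunk_len: int) -> list[int]:
    """Split a request of total_len bytes into equal-sized sub-requests
    of chunk_len bytes each (total_len is assumed to divide evenly)."""
    if total_len % chunk_len != 0:
        raise ValueError("request length must be a multiple of the chunk length")
    return [chunk_len] * (total_len // chunk_len)

# Three-stage split with hypothetical sizes: the CPU splits a 4 KiB data
# writing request into 1 KiB first requests; the first splitting unit
# splits each into 256 B second requests; the second splitting unit
# splits each of those into 64 B target requests.
first = split_request(4096, 1024)
second = [split_request(c, 256) for c in first]
target = [[split_request(c, 64) for c in group] for group in second]
```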
In some possible embodiments, the analyzing, by the field programmable gate array, each target request to determine a target address range corresponding to each target request includes:
the second processing module 502 is specifically configured to:
analyzing each target request by the field programmable gate array to obtain a target head address and a data effective length corresponding to each target request;
And calculating a target address range corresponding to each target request according to each target head address and the effective length of the data.
In some possible embodiments, acquiring, by the field programmable gate array, respective target data based on each target address range, and determining a logical identification corresponding to each target data, includes:
the third processing module 503 is specifically configured to:
Performing shift processing on each target address range by using a field programmable gate array according to preset shift information and each target address range to obtain a storage address range corresponding to each target address range; wherein the storage address range comprises a storage head address and a storage tail address;
Acquiring target data corresponding to each storage address range according to each storage head address and the corresponding storage tail address;
and determining a logic identifier corresponding to each target data according to each storage head address.
In some possible embodiments, the data acquisition architecture further comprises a data exchange unit;
Transmitting, by the field programmable gate array, each target data to the back-end bank array based on each logical identification, comprising:
the fourth processing module 504 is specifically configured to:
Determining the exchange address of each target data by the field programmable gate array according to each logic identifier and a preset mapping relation;
transmitting, by the field programmable gate array, each target data to the data switching unit based on the switching address;
all target data are sent to the back-end bank array by the data exchange unit.
In some possible embodiments, sending all target data to the back-end bank array by the data exchange unit includes:
the fourth processing module 504 is specifically configured to:
when the data exchange unit receives all the target data, integrating all the target data to obtain request data;
the request data is sent to the back-end bank array by the data exchange unit.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a structure of a high-speed memory device based on a field programmable gate array according to an embodiment of the application.
As shown in fig. 6, the field programmable gate array based high speed storage device 600 may include at least one processor 601, at least one network interface 604, a user interface 603, a memory 605, and at least one communication bus 602.
Wherein the communication bus 602 may be used to enable connectivity communication for the various components described above.
The user interface 603 may include keys; optionally, the user interface may also include a standard wired interface, a wireless interface, and the like.
The network interface 604 may include, but is not limited to, a bluetooth module, an NFC module, a Wi-Fi module, etc.
The processor 601 may include one or more processing cores. Using various interfaces and lines to connect the various parts of the overall field programmable gate array based high-speed storage device 600, the processor 601 performs its various functions and processes by running or executing instructions, programs, code sets, or instruction sets stored in the memory 605 and invoking data stored in the memory 605. Alternatively, the processor 601 may be implemented in at least one hardware form of DSP, FPGA, or PLA. The processor 601 may integrate one or a combination of a CPU, a GPU, a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; the modem is used to handle wireless communications. It will be appreciated that the modem may instead not be integrated into the processor 601 and may be implemented by a separate chip.
The memory 605 may include RAM or ROM. Optionally, the memory 605 includes a non-transitory computer readable medium. Memory 605 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 605 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, etc.; the storage data area may store data or the like referred to in the above respective method embodiments. The memory 605 may also optionally be at least one storage device located remotely from the processor 601. As shown in fig. 6, an operating system, a network communication module, a user interface module, and a field programmable gate array-based high-speed storage application can be included in the memory 605, which is a type of computer storage medium.
In particular, the processor 601 may be configured to invoke a field programmable gate array based high speed storage application stored in the memory 605 and to specifically perform the following operations:
Initiating a data writing request by a central processing unit, splitting the data writing request by the central processing unit and a rear-end storage array to obtain at least two target requests; wherein the data amount of each target request is the same;
The method comprises the steps that a back-end memory bank array sends each target request to a field programmable gate array, and the field programmable gate array analyzes and processes each target request to determine a target address range corresponding to each target request;
Acquiring corresponding target data based on each target address range by a field programmable gate array, and determining a logic identifier corresponding to each target data;
each target data is sent to the back-end bank array by the field programmable gate array based on each logical identification.
The present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method. The computer-readable storage medium may include, among other things, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, micro-drives, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, such as a division of units, merely a division of logic functions, and there may be additional divisions in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some service interface, device or unit indirect coupling or communication connection, electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be performed by hardware associated with a program that is stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
The above are merely exemplary embodiments of the present disclosure and are not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a scope and spirit of the disclosure being indicated by the claims.

Claims (10)

1. A high-speed storage method based on a field programmable gate array, wherein the method is applied to a data acquisition architecture, the data acquisition architecture comprising a field programmable gate array, a central processing unit and a back-end memory bank array, the method comprising:
Initiating a data writing request by the central processing unit, and splitting the data writing request by the central processing unit and the back-end storage array to obtain at least two target requests; wherein the data amount of each target request is the same;
the back-end memory bank array sends each target request to the field programmable gate array, and the field programmable gate array analyzes each target request to determine a target address range corresponding to each target request;
acquiring corresponding target data based on each target address range by the field programmable gate array, and determining a logic identifier corresponding to each target data;
And transmitting each target data to the back-end memory bank array by the field programmable gate array based on each logic identifier.
2. The method of claim 1, wherein the data acquisition architecture further comprises a cache unit;
before the initiating of the data write request by the central processing unit, the method comprises:
receiving user data by the field programmable gate array and sending the user data to the cache unit, wherein the user data is the data corresponding to the data write request;
calculating the data amount in the cache unit in real time by the field programmable gate array, and judging whether the data amount reaches a preset threshold; and
when the data amount is identified to have reached the preset threshold, performing, by the field programmable gate array, a write operation on the user data in the cache unit according to a preset write mode and generating an interrupt signal;
the initiating of the data write request by the central processing unit comprises:
sending the interrupt signal to the central processing unit by the field programmable gate array; and
initiating, by the central processing unit according to the interrupt signal, the data write request corresponding to the interrupt signal.
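The cache-and-threshold flow in claim 2 can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the threshold value and class name are assumptions, and the preset write mode is abstracted away as a comment.

```python
class CacheUnit:
    """Hypothetical model of claim 2: the FPGA accumulates user data in
    a cache and raises an interrupt once the cached amount reaches a
    preset threshold (the threshold value here is illustrative)."""

    def __init__(self, threshold=4096):
        self.threshold = threshold
        self.buffer = bytearray()
        self.interrupt = False

    def receive(self, user_data):
        """Append incoming user data and check the threshold in real time."""
        self.buffer.extend(user_data)
        if len(self.buffer) >= self.threshold:
            # At this point the write operation according to the preset
            # write mode would be performed on the cached user data.
            self.interrupt = True
```

In this sketch the interrupt flag stands in for the interrupt signal that the field programmable gate array sends to the central processing unit to trigger the data write request.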
3. The method of claim 1, wherein the back-end storage array comprises a first splitting unit and a second splitting unit;
the splitting of the data write request by the central processing unit and the back-end storage array to obtain at least two target requests comprises:
splitting the data write request by the central processing unit to obtain at least two first requests, and sending each first request to the first splitting unit, wherein each first request has the same data amount, and the data amount of each first request is smaller than that of the data write request;
splitting each first request by the first splitting unit to obtain at least two second requests, and sending each second request to the second splitting unit, wherein each second request has the same data amount, and the data amount of each second request is smaller than that of the first request; and
splitting each second request by the second splitting unit to obtain the at least two target requests, wherein the data amount of each target request is smaller than that of the second request.
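The three-stage splitting of claims 1 and 3 can be sketched as repeated subdivision into equal-sized sub-requests. The chunk sizes below are illustrative assumptions; the patent does not specify the sizes used by the central processing unit or the two splitting units.

```python
def split_request(offset, length, chunk_size):
    """Split a request covering [offset, offset + length) into
    equal-sized sub-requests of chunk_size bytes each, as the
    claims require every sub-request to carry the same data amount."""
    assert length % chunk_size == 0, "claims require equal-sized sub-requests"
    return [(offset + i, chunk_size) for i in range(0, length, chunk_size)]

def three_stage_split(offset, length, sizes=(4096, 1024, 256)):
    """Apply the CPU split, first-splitting-unit split, and
    second-splitting-unit split in sequence; each stage yields
    strictly smaller requests (sizes are hypothetical)."""
    requests = [(offset, length)]
    for size in sizes:
        requests = [sub for off, ln in requests
                    for sub in split_request(off, ln, size)]
    return requests
```

For example, `three_stage_split(0, 8192)` yields 32 target requests of 256 bytes each, all with the same data amount.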
4. The method of claim 1, wherein the parsing of each target request by the field programmable gate array to determine the target address range corresponding to each target request comprises:
parsing each target request by the field programmable gate array to obtain a target head address and a data effective length corresponding to each target request; and
calculating the target address range corresponding to each target request according to each target head address and the corresponding data effective length.
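A straightforward reading of claim 4 computes the range from the two parsed fields. The inclusive-tail convention below is an assumption; the claim only states that the range is derived from the head address and the data effective length.

```python
def target_address_range(head_address, effective_length):
    """Compute the address range covered by a target request from its
    parsed target head address and data effective length. The tail is
    taken as inclusive here (an illustrative convention)."""
    tail_address = head_address + effective_length - 1
    return head_address, tail_address
```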
5. The method of claim 1, wherein the acquiring of the corresponding target data based on each target address range by the field programmable gate array and the determining of the logical identifier corresponding to each target data comprise:
performing, by the field programmable gate array according to preset shift information, a shift operation on each target address range to obtain a storage address range corresponding to each target address range, wherein the storage address range comprises a storage head address and a storage tail address;
acquiring the target data corresponding to each storage address range according to each storage head address and the corresponding storage tail address; and
determining the logical identifier corresponding to each target data according to each storage head address.
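The shift step and identifier derivation of claim 5 can be sketched as below. The patent does not say how the logical identifier is computed from the storage head address, so the stride-based scheme here (one identifier per fixed-size bank region) is purely an assumption, as are the shift and bank-size values.

```python
def to_storage_range(target_head, target_tail, shift):
    """Apply the preset shift information to map a target address
    range onto a storage address range (shift value is hypothetical)."""
    return target_head + shift, target_tail + shift

def logical_id(storage_head, bank_size):
    """Derive a logical identifier from the storage head address;
    here we assume the identifier selects a region by address stride,
    which is one possible reading of claim 5, not the claimed scheme."""
    return storage_head // bank_size
```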
6. The method of claim 1, wherein the data acquisition architecture further comprises a data exchange unit;
the sending of each target data to the back-end storage array by the field programmable gate array based on each logical identifier comprises:
determining an exchange address of each target data by the field programmable gate array according to each logical identifier and a preset mapping relation;
sending each target data to the data exchange unit by the field programmable gate array based on the exchange address; and
sending all the target data to the back-end storage array by the data exchange unit.
7. The method of claim 6, wherein the sending of all the target data to the back-end storage array by the data exchange unit comprises:
when the data exchange unit has received all the target data, integrating all the target data to obtain request data; and
sending the request data to the back-end storage array by the data exchange unit.
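The routing and integration steps of claims 6 and 7 can be sketched together. The mapping table, exchange addresses, and byte-concatenation form of the integration are all illustrative assumptions; the claims specify only that a preset mapping relation yields exchange addresses and that all target data are integrated into request data.

```python
# Hypothetical preset mapping relation from logical identifiers to
# exchange addresses of the data exchange unit (values are illustrative).
EXCHANGE_MAP = {0: 0xA000, 1: 0xB000, 2: 0xC000}

def route_to_exchange(target_data):
    """Group each (logical_id, payload) pair under the exchange address
    given by the preset mapping relation, per claim 6."""
    routed = {}
    for lid, payload in target_data:
        addr = EXCHANGE_MAP[lid]
        routed.setdefault(addr, []).append(payload)
    return routed

def integrate(payloads):
    """Once all target data have arrived, integrate them into the
    request data forwarded to the back-end storage array (claim 7);
    concatenation is one possible form of this integration."""
    return b"".join(payloads)
```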
8. A high-speed storage device based on a field programmable gate array, wherein the device is applied to a data acquisition architecture comprising a field programmable gate array, a central processing unit and a back-end storage array, the device comprising:
a first processing module, configured to initiate a data write request by the central processing unit, and to split the data write request by the central processing unit and the back-end storage array to obtain at least two target requests, wherein each target request has the same data amount;
a second processing module, configured to send each target request to the field programmable gate array by the back-end storage array, and to parse each target request by the field programmable gate array to determine a target address range corresponding to each target request;
a third processing module, configured to acquire corresponding target data based on each target address range by the field programmable gate array, and to determine a logical identifier corresponding to each target data; and
a fourth processing module, configured to send each target data to the back-end storage array by the field programmable gate array based on each logical identifier.
9. A high-speed storage device based on a field programmable gate array, comprising a processor and a memory;
wherein the processor is connected with the memory;
the memory is configured to store executable program code; and
the processor is configured to read the executable program code stored in the memory and run a program corresponding to the executable program code, so as to perform the steps of the method according to any of claims 1-7.
10. A computer readable storage medium having stored therein instructions which, when run on a computer or a processor, cause the computer or the processor to perform the steps of the method according to any of claims 1-7.
Application CN202410058142.7A, filed 2024-01-16 (priority date 2024-01-16): High-speed storage method and device based on field programmable gate array. Status: Pending. Publication: CN118012338A.

Priority Applications (1)

Application Number: CN202410058142.7A
Priority Date: 2024-01-16
Filing Date: 2024-01-16
Title: High-speed storage method and device based on field programmable gate array

Applications Claiming Priority (1)

Application Number: CN202410058142.7A
Priority Date: 2024-01-16
Filing Date: 2024-01-16
Title: High-speed storage method and device based on field programmable gate array

Publications (1)

Publication Number: CN118012338A
Publication Date: 2024-05-10

Family

Family ID: 90949639

Family Applications (1)

Application Number: CN202410058142.7A
Title: High-speed storage method and device based on field programmable gate array
Priority Date: 2024-01-16
Filing Date: 2024-01-16
Status: Pending

Country Status (1)

Country Link
CN (1) CN118012338A (en)

Similar Documents

Publication Publication Date Title
CN112328185B (en) Intelligent pre-reading method based on distributed storage
US9734085B2 (en) DMA transmission method and system thereof
WO2017084400A1 (en) Nvme networked storage implementation method, terminal, server, and system
TW200406680A (en) Method, system, and program for handling input/output commands
KR20120061710A (en) Data prefetch in sas expanders
CN111177025B (en) Data storage method and device and terminal equipment
US8806071B2 (en) Continuous read burst support at high clock rates
CN113468097B (en) Data exchange method based on system on chip
US11372782B2 (en) Computing system for reducing latency between serially connected electronic devices
US20150347017A1 (en) Command trapping in an input/output virtualization (iov) host controller (hc) (iov-hc) of a flash-memory-based storage device
CN104461943A (en) Data reading method, device and system
CN112749113A (en) Data interaction method, system, device and medium
CN109426623A (en) A kind of method and device reading data
KR102180975B1 (en) Memory subsystem with wrapped-to-continuous read
TW201303870A (en) Effective utilization of flash interface
CN109426434A (en) A kind of data of optical disk reading/writing method
WO2022068760A1 (en) Method for memory management, and apparatus for same
WO2014056329A1 (en) Memory data pushing method and device
US9541921B2 (en) Measuring performance of an appliance
CN106681948A (en) Logic control method and device of programmable logic device
WO2019174206A1 (en) Data reading method and apparatus of storage device, terminal device, and storage medium
TWI275943B (en) Method, system, and computer readable recording medium for returning data to read requests
CN118012338A (en) High-speed storage method and device based on field programmable gate array
CN112825024A (en) Command fusion and split method and NVMe controller
WO2020029619A1 (en) Data processing method and device, and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination