CN116521607B - Heterogeneous platform and file system standardization method based on CPU and FPGA - Google Patents

Heterogeneous platform and file system standardization method based on CPU and FPGA

Info

Publication number
CN116521607B
Authority
CN
China
Prior art keywords
file
fpga
cpu
data
node
Legal status
Active
Application number
CN202310783345.8A
Other languages
Chinese (zh)
Other versions
CN116521607A (en)
Inventor
赵丹
蒋湘涛
马瑞欢
吴清华
扈世伟
Current Assignee
Hunan Runcore Innovation Technology Co ltd
Original Assignee
Hunan Runcore Innovation Technology Co ltd
Application filed by Hunan Runcore Innovation Technology Co ltd filed Critical Hunan Runcore Innovation Technology Co ltd
Priority to CN202310783345.8A
Publication of CN116521607A
Application granted
Publication of CN116521607B

Classifications

    • G06F 15/163 Interprocessor communication
    • G06F 16/13 File access structures, e.g. distributed indices
    • G06F 16/17 Details of further file system functions
    • G06F 16/172 Caching, prefetching or hoarding of files
    • G06F 16/18 File system types
    • G06F 16/188 Virtual file systems
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a heterogeneous platform based on a CPU (Central Processing Unit) and an FPGA (Field Programmable Gate Array), and a file system standardization method. The CPU end is used for receiving an operation request of a user layer, identifying the operation request, and transmitting the operation request to the FPGA end at high speed. The FPGA end is used for receiving the operation request sent by the CPU end, generating a non-standard instruction that can be identified by the FPGA end according to the operation request, generating return data according to the non-standard instruction, and transmitting the return data to the CPU end at high speed, wherein the return data is a linked list returned by the file system of the FPGA end. The CPU end is further used for receiving the return data sent by the FPGA end, generating standard file data that can be identified by the CPU end according to the return data, and feeding the standard file data back to the user layer. The invention facilitates data sharing and cooperative work between the FPGA and the CPU.

Description

Heterogeneous platform and file system standardization method based on CPU and FPGA
Technical Field
The invention relates to the technical field of heterogeneous platform file systems, and in particular to a heterogeneous platform based on a CPU and an FPGA, and to a file system standardization method for such a heterogeneous platform.
Background
In certain application scenarios, an FPGA (Field Programmable Gate Array) needs to work in conjunction with a CPU (Central Processing Unit) to achieve more efficient computation and data processing. Because the FPGA and the CPU have different architectures and working modes, the embedded heterogeneous platform file systems based on an FPGA and a CPU that are commonly used at present have the following defects:
(1) Current mainstream file systems are designed mainly for the CPU processor architecture and cannot be directly compatible with the FPGA platform. As a result, the CPU has to interact with the FPGA over a network or a serial port through vendor-customized host computer software, and that software has to be modified for each different file storage mode or interaction protocol, which makes communication between the CPU and the FPGA inconvenient.
(2) Under a heterogeneous FPGA and CPU platform, the FPGA and the CPU have different architectures and working modes, so the CPU cannot read the files on the storage medium managed by the FPGA in a simple and intuitive way and must read them through vendor-customized host computer software; this also makes data reading and data operation on the CPU side difficult.
(3) Because FPGA platform resources are limited, implementing a Linux standard file system interface on the FPGA is difficult, and files can only be operated through a custom protocol, which results in poor flexibility and compatibility.
Therefore, existing heterogeneous platforms in which an FPGA and a CPU work cooperatively make it inconvenient to realize data sharing and cooperative work between the FPGA and the CPU.
Disclosure of Invention
The main purpose of the invention is to provide a heterogeneous platform based on a CPU and an FPGA, and a file system standardization method for such a heterogeneous platform, so as to solve the problem that existing heterogeneous platforms in which an FPGA and a CPU work cooperatively are inconvenient for realizing data sharing and cooperative work between the FPGA and the CPU.
In order to achieve the above purpose, the invention provides a heterogeneous platform based on a CPU and an FPGA, wherein the CPU end is communicatively connected with the FPGA end, and the FPGA end is communicatively connected with a storage medium; the CPU end comprises a node mapping layer and a first transmission layer; the FPGA end comprises a command analysis layer and a second transmission layer;
the first transmission layer is used for receiving an operation request of the user layer identified at the CPU end, transmitting the operation request at high speed to the second transmission layer of the FPGA end, receiving return data sent by the second transmission layer of the FPGA end, and sending the return data to the node mapping layer; the node mapping layer is used for converting the return data in the form of linked list nodes sent by the FPGA end into standard file data in the form of index nodes which can be identified by the CPU end, so that the standard file data is fed back to the user layer; the return data is a linked list returned in a file system of the FPGA end;
The second transmission layer is used for receiving an operation request sent by the first transmission layer of the CPU end and sending the operation request to the command analysis layer; the command analysis layer is used for converting an operation request in the form of an index node sent by the CPU end into a non-standard instruction in the form of a linked list node which can be identified by the FPGA end, so that the FPGA end generates return data according to the non-standard instruction; the second transmission layer is also used for transmitting the return data to the first transmission layer of the CPU end at high speed.
Preferably, the CPU side further comprises a virtual file system layer and ShareFS;
the virtual file system layer is used for receiving an operation request of the user layer;
the ShareFS is used for starting a corresponding general function interface from a preset general function interface according to the type of the operation request, analyzing setting information from the operation request and sending the setting information to the node mapping layer, wherein the setting information corresponds to the type of the operation request.
Preferably, the FPGA side further includes a linear linked list file system:
and the linear linked list file system is used for storing the linear file linked list and generating linear file data as return data according to the non-standard instruction.
Preferably, the node mapping layer is specifically configured to:
According to a preset data protocol format, packaging the setting information analyzed from the operation request to generate a message request packet, and submitting the message request packet to a waiting queue so as to transmit the message request packet from the waiting queue to a second transmission layer of the FPGA end at a high speed through a first transmission layer;
and acquiring each linked list node from the return data returned by the second transmission layer of the FPGA end according to the operation request, acquiring first data information needed by the index node from each linked list node, generating the index node according to the first data information, generating standard file data according to the index node, and transmitting the standard file data to the ShareFS.
Preferably, the command parsing layer is specifically configured to:
receiving an operation request forwarded by a second transmission layer;
and acquiring an index node from the setting information parsed from the operation request, acquiring second data information needed by the linked list node from the index node, generating the linked list node according to the second data information, generating a linked list which can be identified by the linear linked list file system according to the linked list node as a non-standard instruction, and sending the non-standard instruction to the linear linked list file system.
Preferably, the linear linked list file system is specifically configured to:
storing a linear file linked list, and searching a file meeting the condition in a local linear file linked list according to the non-standard instruction;
and acquiring the files meeting the conditions and generating return data, wherein the return data comprises a device port number, an information ID, a file name, a file size, a file offset address and a linear file linked list.
In order to achieve the above purpose, the invention also provides a file system standardization method of the heterogeneous platform based on the CPU and the FPGA, which is applied to the heterogeneous platform based on the CPU and the FPGA; the method comprises the following steps:
the CPU end receives an operation request of the user layer and identifies the operation request, and the operation request is transmitted at high speed through the first transmission layer to the second transmission layer of the FPGA end;
the method comprises the steps that a second transmission layer of an FPGA (field programmable gate array) end receives an operation request sent by a first transmission layer of a CPU end and sends the operation request to a command analysis layer, the operation request in the form of an index node sent by the CPU end is converted into a non-standard instruction in the form of a linked list node which can be identified by the FPGA end through the command analysis layer, return data is generated according to the non-standard instruction, the return data is transmitted to the first transmission layer of the CPU end at a high speed through the second transmission layer, and the return data is a linked list returned in a file system of the FPGA end;
The first transmission layer of the CPU end receives the return data sent by the second transmission layer of the FPGA end, the return data is sent to the node mapping layer, the return data in the form of linked list nodes sent by the FPGA end is converted into standard file data in the form of index nodes which can be identified by the CPU end through the node mapping layer, and the standard file data is fed back to the user layer.
Preferably, the step of converting, by the command parsing layer, an operation request in the form of an index node sent by a CPU end into a non-standard instruction in the form of a linked list node that can be identified by an FPGA end includes:
the command analysis layer obtains the index node from the setting information parsed from the operation request of the CPU end;
acquiring second data information needed by the linked list node from the index node;
generating a linked list node according to the second data information;
generating a linked list which can be identified by a linear linked list file system as a non-standard instruction according to the linked list nodes;
the step of converting the return data in the form of linked list nodes sent by the FPGA end into standard file data in the form of index nodes which can be identified by the CPU end through the node mapping layer comprises the following steps:
the node mapping layer acquires each linked list node from the return data returned by the FPGA end according to the operation request;
Acquiring first data information needed by forming an index node from each linked list node;
generating an index node according to the first data information;
and generating standard file data according to the index node.
Preferably, the step of generating a linked list node according to the second data information includes:
creating an empty linked list node;
mapping second data information obtained from the index node to the established linked list node, wherein the second data information comprises: file name, number of file bytes, timestamp and read/write offset address;
the step of generating the index node according to the first data information comprises the following steps:
creating an empty virtual index node, wherein the virtual index node does not point to a certain file local to the CPU, but points to the obtained linked list node, so that the operation on the virtual index node is mapped to the operation on the linked list node;
mapping first data information obtained from the linked list node to the established virtual index node, wherein the first data information comprises: file name, file size, start recording time, and start physical offset address.
Preferably, the step of generating return data according to the non-standard instruction includes:
The FPGA end judges the instruction type of the nonstandard instruction corresponding to the operation request;
when the non-standard instruction corresponding to the operation request is a read request, the FPGA end searches non-standard file data meeting the conditions in a locally stored linear file linked list according to setting information analyzed from the non-standard instruction, wherein the setting information comprises a file name, a read address and a read length of a file to be read;
generating return data according to the non-standard file data meeting the conditions;
acquiring an offset address of the nonstandard file data in a storage medium, and positioning the offset address to a physical storage address in the storage medium according to the offset address;
reading a file to be read with a specified length in a storage medium according to a physical storage address, and sending the file to be read to a data buffer area in an FPGA (field programmable gate array) end so as to wait for sending the file to be read to a first transmission layer of a CPU (central processing unit) end;
when the nonstandard instruction corresponding to the operation request is a file list query, the FPGA end searches a file list corresponding to the appointed storage directory in a linear file linked list stored locally according to the appointed storage directory of the storage medium analyzed from the nonstandard instruction;
and taking the file list as return data to wait for the file list to be sent to a first transmission layer of the CPU side.
In the technical scheme of the invention, the CPU end and the FPGA end are directly connected through high-speed communication, the CPU end interacts directly with the user layer, and the FPGA end directly manages the storage medium, so communication between the CPU end and the FPGA end no longer requires vendor-customized host computer software. Specifically, the CPU end directly receives the operation request of the user layer, identifies it, and transmits it at high speed to the FPGA end, where the operation request sent by the CPU end is defined in terms of the standard file system of the CPU end. After receiving the operation request sent by the CPU end, the FPGA end generates, from this standard-file-system-based request, a non-standard instruction for the FPGA-based linear linked list file system, so that the FPGA end can directly parse the non-standard instruction and understand the meaning of the operation request sent by the CPU end; having understood the request, the FPGA end can then generate the required return data according to the non-standard instruction and transmit the return data to the CPU end at high speed. Because the return data is the linked list file returned by the file system of the FPGA end, the CPU end, after receiving the return data in the form of a linked list file, generates standard file data that the CPU end can identify and feeds the standard file data back to the user layer. Therefore, in the technical scheme of the invention: the CPU end and the FPGA end directly exchange and share data with each other; each side can generate data identifiable by its own file system from the data sent by the other side; and the CPU end can generate standardized data of its own file system from the non-standard file sent by the FPGA end, i.e. the CPU end has a file system standardization function. In summary, the technical scheme of the invention achieves data sharing and cooperative work between the FPGA and the CPU, no longer depends on host computer software for communication, and thereby improves the universality of the embedded heterogeneous platform.
Drawings
FIG. 1 is a schematic diagram of a hardware framework of a heterogeneous platform according to the present invention;
FIG. 2 is a schematic diagram of a functional implementation framework of a heterogeneous platform according to the present invention;
FIG. 3 is a schematic diagram of node mapping in the present invention;
FIG. 4 is a functional block diagram of the software of the heterogeneous platform of the present invention;
fig. 5 is a flow chart of a first embodiment of the method of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "unit", "part" or "component" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "unit", "part" and "component" may be used interchangeably.
Referring to fig. 1 to 4, in order to achieve the above objects, the present invention provides a heterogeneous platform based on a CPU and an FPGA, where the CPU end is communicatively connected to the FPGA end, and the FPGA end is communicatively connected to a storage medium; the CPU end comprises a node mapping layer and a first transmission layer; the FPGA end comprises a command analysis layer and a second transmission layer;
The first transmission layer is used for receiving an operation request of the user layer identified at the CPU end, transmitting the operation request at high speed to the second transmission layer of the FPGA end, receiving return data sent by the second transmission layer of the FPGA end, and sending the return data to the node mapping layer; the node mapping layer is used for converting the return data in the form of linked list nodes sent by the FPGA end into standard file data in the form of index nodes which can be identified by the CPU end, so that the standard file data is fed back to the user layer; the return data is a linked list returned in a file system of the FPGA end;
the second transmission layer is used for receiving an operation request sent by the first transmission layer of the CPU end and sending the operation request to the command analysis layer; the command analysis layer is used for converting an operation request in the form of an index node sent by the CPU end into a non-standard instruction in the form of a linked list node which can be identified by the FPGA end, so that the FPGA end generates return data according to the non-standard instruction; the second transmission layer is also used for transmitting the return data to the first transmission layer of the CPU end at high speed.
In the technical scheme of the invention, the CPU end and the FPGA end are directly connected through high-speed communication, the CPU end interacts directly with the user layer, and the FPGA end directly manages the storage medium, so communication between the CPU end and the FPGA end no longer requires vendor-customized host computer software. Specifically, the CPU end directly receives the operation request of the user layer, identifies it, and transmits it at high speed to the FPGA end, where the operation request sent by the CPU end is defined in terms of the standard file system of the CPU end. After receiving the operation request sent by the CPU end, the FPGA end generates, from this standard-file-system-based request, a non-standard instruction for the FPGA-based linear linked list file system, so that the FPGA end can directly parse the non-standard instruction and understand the meaning of the operation request sent by the CPU end; having understood the request, the FPGA end can then generate the required return data according to the non-standard instruction and transmit the return data to the CPU end at high speed. Because the return data is the linked list file returned by the file system of the FPGA end, the CPU end, after receiving the return data in the form of a linked list file, generates standard file data that the CPU end can identify and feeds the standard file data back to the user layer. Therefore, in the technical scheme of the invention: the CPU end and the FPGA end directly exchange and share data with each other; each side can generate data identifiable by its own file system from the data sent by the other side; and the CPU end can generate standardized data of its own file system from the non-standard file sent by the FPGA end, i.e. the CPU end has a file system standardization function. In summary, the technical scheme of the invention achieves data sharing and cooperative work between the FPGA and the CPU, no longer depends on host computer software for communication, and thereby improves the universality of the embedded heterogeneous platform.
According to the invention, on the premise that the linear linked list file system at the FPGA end is not modified, a file system standardization method is designed: through data exchange, data sharing and file system standardization applied to the non-standard file system at the FPGA end, the data at the FPGA end is presented at the CPU end as a standard file system. This solves the problems of difficult data reading and difficult data operation at the CPU end and provides a more flexible and universal data access mode.
The FPGA end and the CPU end are each also connected with a memory module (DDR). Specifically, an operating system runs on the CPU end, and the CPU end is communicatively connected with the FPGA end through a PCIe interface.
The FPGA end and the CPU end communicate over PCIe using a customized RIFFA (Reusable Integration Framework for FPGA Accelerators) protocol; RIFFA is an open-source communication framework that allows data to be exchanged in real time between a user's FPGA IP core and the main memory of the CPU. This realizes high-speed data transmission between the FPGA end and the CPU end, enables real-time data exchange and sharing, and improves the performance and efficiency of the system. Information such as the linear file system linked list and the data stored at the FPGA end is transmitted to the CPU end, thereby realizing data sharing between the FPGA end and the CPU end. The information transmitted by the FPGA end to the CPU end mainly includes the device port number, information ID, file name, file size, file offset address, file linked list, and the like.
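As an illustration of the information listed above, the following is a minimal C sketch of a possible transfer record sent from the FPGA end to the CPU end. The struct name, field widths and packing are assumptions made for illustration only and are not specified by the patent.

```c
#include <stdint.h>

/* Hypothetical layout of one FPGA-to-CPU return record.
 * The field names follow the information listed above; the exact
 * widths, ordering and packing are illustrative assumptions. */
#pragma pack(push, 1)
typedef struct {
    uint16_t dev_port;        /* device port number                      */
    uint32_t info_id;         /* information ID of this message          */
    char     file_name[64];   /* file name (NUL-terminated)              */
    uint64_t file_size;       /* file size in bytes                      */
    uint64_t file_offset;     /* file offset address on the medium       */
    uint32_t node_count;      /* number of linked-list nodes that follow */
    /* followed by node_count serialized linked-list nodes */
} fpga_return_header_t;
#pragma pack(pop)
```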
Referring to fig. 1, the FPGA end includes a PL (Programmable Logic) side and a PS (Processing System) side. The PS side is responsible for linked list management, and the PL side is responsible for data reading and writing of the storage medium; the PL side is communicatively connected with the storage medium through a SATA interface. The linear linked list file system of the FPGA end can be arranged on the PS side.
The storage medium may be a magnetic Disk or a Solid State Disk (Solid State Drive, abbreviated as SSD).
Referring to FIG. 2, the CPU side preferably further includes a virtual file system layer and ShareFS;
the virtual file system layer is used for receiving an operation request of the user layer;
the ShareFS is used for starting a corresponding general function interface from a preset general function interface according to the type of the operation request, analyzing setting information from the operation request and sending the setting information to the node mapping layer, wherein the setting information corresponds to the type of the operation request.
Specifically, when the operating system of the CPU terminal is a Linux operating system, the CPU terminal comprises a virtual file system layer, a ShareFS, a node mapping layer and a first transmission layer. The first transmission layer and the second transmission layer can be respectively RIFFA transmission layers, and the CPU end and the FPGA end are in communication connection through RIFFA protocol.
The minimum file unit stored at the FPGA end is a linked list Node, so the data transmitted from the FPGA end to the CPU end cannot be used directly by the CPU end and needs to be translated into a data format that the CPU end can read and understand. In the Linux file system, a file is mainly represented by an index node (inode), and the attribute information of the file is stored in the inode; therefore, an inode needs to be generated from the linked list Node information of the file. Similarly, an operation request sent by the CPU end cannot be directly identified by the FPGA end; a custom operation command has to be generated and transmitted to the FPGA end, and the FPGA end responds after parsing the instruction.
In the present invention, a file system is the method and data structure used to organize the data stored on a storage device (magnetic disk, solid state disk, etc.); it manages the data in the form of files. The file system is responsible for organizing and allocating the space of the storage device, storing files, and retrieving stored files. In particular, it is responsible for creating, storing, reading, modifying, deleting and retrieving files for the user.
Referring to fig. 4, in this embodiment the ShareFS in the CPU end is developed based on the VFS interface to realize standard file system operations, such as opening, reading, writing and closing files, and mainly comprises general function interfaces such as open, read, write, release, readdir and llseek. When the user layer issues an operation request, the corresponding general function interface of the ShareFS file system responds; inside that interface, file system standard data is generated from the non-standard FPGA linear file data, thereby realizing the standardization of the file system.
The general function interface includes: an open function interface, a read function interface, a write function interface, a release function interface, a readdir function interface and a llseek function interface; the operation request includes: file name, read address and read length of the file to be read.
The ShareFS realizes the interface standardization function: when the user layer issues an operation request, the standard interface of the ShareFS responds, and the general function interface converts the non-standard FPGA linear file data into file system standard data.
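As an illustration only, the following C sketch shows how such general function interfaces could be registered with the Linux VFS through a struct file_operations. The callback names, the stub bodies and the exact kernel hooks (for example, readdir has been replaced by iterate_shared in recent kernels) depend on the kernel version and are assumptions, not the patent's concrete implementation.

```c
#include <linux/fs.h>
#include <linux/module.h>

/* Hypothetical ShareFS callbacks. In a real implementation each of
 * these would parse the request, hand it to the node mapping layer
 * and wait for the FPGA end's return data; here they are only stubs. */

static int sharefs_open(struct inode *inode, struct file *filp)
{
    return 0;                       /* map the file, create a virtual inode */
}

static ssize_t sharefs_read(struct file *filp, char __user *buf,
                            size_t len, loff_t *ppos)
{
    return 0;                       /* forward a read request, copy return data */
}

static ssize_t sharefs_write(struct file *filp, const char __user *buf,
                             size_t len, loff_t *ppos)
{
    return len;                     /* forward a write request */
}

static int sharefs_release(struct inode *inode, struct file *filp)
{
    return 0;                       /* release the virtual inode */
}

static loff_t sharefs_llseek(struct file *filp, loff_t off, int whence)
{
    return generic_file_llseek(filp, off, whence);
}

static int sharefs_readdir(struct file *filp, struct dir_context *ctx)
{
    return 0;                       /* list files from the returned linked list */
}

/* Registration of the general function interfaces with the VFS. */
static const struct file_operations sharefs_file_ops = {
    .owner          = THIS_MODULE,
    .open           = sharefs_open,
    .read           = sharefs_read,
    .write          = sharefs_write,
    .release        = sharefs_release,
    .llseek         = sharefs_llseek,
    .iterate_shared = sharefs_readdir,  /* directory listing (readdir) */
};
```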
Specifically, in this embodiment, the virtual file system layer, i.e. the VFS (Virtual File System) layer, is configured to receive the operation request of the user layer.
The setting information corresponds to the type of the operation request. For example, when the operation request is a file read request, the read interface of the standard file system (ShareFS) is triggered, and the setting information is first parsed inside that interface; the setting information comprises key information such as the file name, read address and read length of the file to be read.
Specifically, the virtual file system layer is communicatively coupled to a shareFS, the shareFS is communicatively coupled to a node mapping layer, and the node mapping layer is communicatively coupled to the first transport layer. And a first data receiving queue and a first data sending queue are arranged between the node mapping layer and the first transmission layer.
The first data receiving queue holds return data from the FPGA end waiting to be sent from the first transmission layer to the node mapping layer. The first data sending queue holds operation requests from the ShareFS waiting to be sent from the node mapping layer to the first transmission layer.
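Purely as an illustration of such sending and receiving queues, the following C sketch shows one way a pending-packet queue between the node mapping layer and the first transmission layer could be organized. The type names and the use of a mutex and condition variable are assumptions, not details given by the patent.

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical pending packet queued between the node mapping layer
 * and the first transmission layer. */
struct pending_packet {
    void                  *payload;   /* serialized message request packet */
    size_t                 length;
    struct pending_packet *next;
};

struct packet_queue {
    struct pending_packet *head, *tail;
    pthread_mutex_t        lock;
    pthread_cond_t         nonempty;
};

static void queue_push(struct packet_queue *q, struct pending_packet *p)
{
    p->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

static struct pending_packet *queue_pop(struct packet_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (!q->head)
        pthread_cond_wait(&q->nonempty, &q->lock);   /* wait for data */
    struct pending_packet *p = q->head;
    q->head = p->next;
    if (!q->head) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return p;
}
```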
Preferably, the FPGA side further includes a linear linked list file system:
and the linear linked list file system is used for storing the linear file linked list and generating linear file data as return data according to the non-standard instruction.
The files in the linear linked list file system form a linked list consisting of a number of linked list nodes. Each linked list node is denoted as Node, so the linked list includes a plurality of nodes such as Node1, Node2 and Node3.
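The following C sketch shows a possible in-memory layout of such a linked list node, using the fields enumerated later in this description (file name, file size, data source ID, start recording time, end recording time, start physical offset address). The type name and field widths are assumptions for illustration.

```c
#include <stdint.h>

/* Hypothetical linked-list node of the linear linked list file system.
 * One node describes one file stored on the medium managed by the FPGA. */
typedef struct file_node {
    char              file_name[64];     /* file name                     */
    uint64_t          file_size;         /* file size in bytes            */
    uint32_t          data_source_id;    /* ID of the data source         */
    uint64_t          start_record_time; /* start recording time          */
    uint64_t          end_record_time;   /* end recording time            */
    uint64_t          start_phys_offset; /* start physical offset address */
    struct file_node *next;              /* next file in the linear list  */
} file_node_t;
```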
Specifically, the linear linked list file system is communicatively connected with the command analysis layer, the command analysis layer is communicatively connected with the second transmission layer, and the node mapping layer is communicatively connected with the command analysis layer. A task queue layer may further be disposed between the command analysis layer and the linear linked list file system, where the task queue layer includes a second data receiving queue and a second data sending queue. The second data receiving queue holds operation requests from the CPU end waiting to be sent from the command analysis layer to the linear linked list file system. The second data sending queue holds return data from the linear linked list file system waiting to be sent to the command analysis layer.
Referring to fig. 3, the node mapping layer is preferably specifically configured to:
according to a preset data protocol format, packaging the setting information analyzed from the operation request to generate a message request packet, and submitting the message request packet to a waiting queue so as to transmit the message request packet from the waiting queue to a second transmission layer of the FPGA end at a high speed through a first transmission layer;
and acquiring each linked list node from the return data returned by the second transmission layer of the FPGA end according to the operation request, acquiring first data information needed by the index node from each linked list node, generating the index node according to the first data information, generating standard file data according to the index node, and transmitting the standard file data to the ShareFS.
The message request packet is submitted to a waiting queue, namely, submitted to a first data sending queue, and after being submitted to the first data sending queue, the waiting message request packet is transmitted to the FPGA end at a high speed from the first data sending queue through a first transmission layer, and waits for a data response.
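For illustration, a minimal C sketch of how the parsed setting information could be packaged into a message request packet according to a preset data protocol format is given below. The header fields, the magic value and the packing are assumptions and not the actual protocol defined by the patent.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical message request packet sent from the CPU end to the
 * FPGA end. The exact protocol format is defined by the platform;
 * this layout is illustrative only. */
#pragma pack(push, 1)
typedef struct {
    uint32_t magic;          /* fixed tag identifying the protocol      */
    uint16_t dev_port;       /* device port number                      */
    uint32_t info_id;        /* information ID / sequence number        */
    uint8_t  op_type;        /* e.g. 1 = read, 2 = query file list      */
    char     file_name[64];  /* file to operate on (if applicable)      */
    uint64_t read_addr;      /* read address within the file            */
    uint64_t read_len;       /* read length in bytes                    */
} msg_request_packet_t;
#pragma pack(pop)

/* Package the parsed setting information into a request packet before
 * submitting it to the waiting (sending) queue. */
static void pack_read_request(msg_request_packet_t *pkt, uint32_t info_id,
                              const char *name, uint64_t addr, uint64_t len)
{
    memset(pkt, 0, sizeof(*pkt));
    pkt->magic   = 0x53484653;           /* "SHFS", an assumed tag */
    pkt->info_id = info_id;
    pkt->op_type = 1;                    /* read */
    strncpy(pkt->file_name, name, sizeof(pkt->file_name) - 1);
    pkt->read_addr = addr;
    pkt->read_len  = len;
}
```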
When the operation request is a read request, the read interface of the standard file system (ShareFS) is triggered. After receiving the message request packet, the FPGA end parses the packet, searches its local linked list for file data meeting the conditions, and, after finding it, obtains the offset address of the file data on the storage medium, locates the physical storage address to be read from that offset address, reads the file data of the specified length from the storage medium into the data buffer of the FPGA end, and transmits the file data as return data to the CPU end through the second transmission layer.
After receiving the data response, the CPU end creates a local virtual node through the node mapping layer, generates standard data that the standard file system can identify, and passes the standard data to the upper-layer data buffer, so that the application layer receives the file data read from the storage medium and the data reading process is completed.
When the operation request is a file list query, the readdir interface of the standard file system (ShareFS) is triggered in the same way, and the operation request is sent to the FPGA end through the first transmission layer. The FPGA end returns the file list in its linear linked list to the CPU end, and the CPU end generates, through the node mapping layer, standard data that the standard file system can identify, which is returned to the upper layer of the ShareFS and displayed.
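The following C sketch illustrates, reusing the msg_request_packet_t and file_node_t layouts assumed in the earlier sketches, how the command analysis layer on the FPGA (PS) side might dispatch an incoming request to the read path or the file-list-query path. The helper functions are stubs standing in for the linear linked list lookup, the PL-side SATA read and the RIFFA transmission; this is a simplified illustration, not the patent's concrete implementation.

```c
#include <stdint.h>
#include <stddef.h>

enum { OP_READ = 1, OP_LIST = 2 };

/* Hypothetical helper stubs. */
static file_node_t *find_node(const char *name)                 { (void)name; return NULL; }
static void read_from_medium(uint64_t phys, uint64_t len)        { (void)phys; (void)len; }
static void send_return_data(const file_node_t *n, uint32_t id)  { (void)n; (void)id; }
static void send_file_list(uint32_t id)                          { (void)id; }

static void handle_request(const msg_request_packet_t *req)
{
    switch (req->op_type) {
    case OP_READ: {
        file_node_t *node = find_node(req->file_name);   /* search local linked list */
        if (!node)
            break;
        uint64_t phys = node->start_phys_offset + req->read_addr; /* locate physical address */
        read_from_medium(phys, req->read_len);    /* read into the FPGA data buffer */
        send_return_data(node, req->info_id);     /* transmit to the CPU end */
        break;
    }
    case OP_LIST:
        send_file_list(req->info_id);             /* return the linear file linked list */
        break;
    default:
        break;
    }
}
```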
The information included in the linked list node is as follows: file name, file size, data source ID, start recording time, end recording time, start physical offset address. Wherein the data source ID is used to represent the data source.
Referring to fig. 3, preferably, the command parsing layer is specifically configured to:
receiving an operation request forwarded by a second transmission layer;
and acquiring an index node from the setting information which is solved and separated by the operation request, acquiring second data information needed by the link list node from the index node, generating the link list node according to the second data information, generating a link list which can be identified by the linear link list file system according to the link list node as a non-standard instruction, and sending the non-standard instruction to the linear link list file system.
The reverse of the process in fig. 3 (with the arrows reversed) is the command parsing process. The information recorded in an index node includes: file name, number of file bytes, file permissions, timestamp, read/write offset address, user ID, and group ID. The file permissions, user ID and group ID are configured according to the actual situation.
When an index node and a linked list node are mapped to each other, the file name in the linked list node maps to the file name of the index node, the file size in the linked list node maps to the number of file bytes of the index node, the start recording time in the linked list node maps to the timestamp of the index node, and the start physical offset address in the linked list node maps to the read/write offset address of the index node.
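As a sketch of this field-for-field mapping, the following C function fills an illustrative CPU-side index node structure from a linked list node, reusing the file_node_t layout assumed earlier. The cpu_inode_t type is an assumed, simplified stand-in for the real Linux inode and is not part of the patent.

```c
#include <stdint.h>
#include <string.h>

/* Assumed, simplified stand-in for the CPU-side index node (inode). */
typedef struct {
    char     file_name[64];
    uint64_t file_bytes;       /* number of file bytes                    */
    uint64_t timestamp;        /* file timestamp                          */
    uint64_t rw_offset;        /* read/write offset address               */
    uint32_t mode;             /* file permissions (configured locally)   */
    uint32_t uid, gid;         /* user ID / group ID (configured locally) */
} cpu_inode_t;

/* Map a linked list node received from the FPGA end onto a (virtual)
 * index node, following the correspondence described above. */
static void fill_inode_from_node(cpu_inode_t *ino, const file_node_t *node)
{
    memset(ino, 0, sizeof(*ino));
    strncpy(ino->file_name, node->file_name, sizeof(ino->file_name) - 1);
    ino->file_bytes = node->file_size;          /* file size -> byte count   */
    ino->timestamp  = node->start_record_time;  /* start time -> timestamp   */
    ino->rw_offset  = node->start_phys_offset;  /* start offset -> rw offset */
    ino->mode = 0444;                           /* permissions set as needed */
    ino->uid  = 0;
    ino->gid  = 0;
}
```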
Preferably, the linear linked list file system is specifically configured to:
storing a linear file linked list, and searching a file meeting the condition in a local linear file linked list according to the non-standard instruction;
acquiring the file meeting the condition and generating return data, wherein the return data comprises the linked list file; when the operation request issued by the CPU end is a read request, the FPGA end also needs to read the data file corresponding to the read request from the storage medium and send the data file to the CPU end.
Referring to fig. 5, in addition, in order to achieve the above object, a first embodiment of the present invention provides a method for normalizing a file system of a heterogeneous platform based on a CPU and an FPGA, which is applied to the heterogeneous platform based on a CPU and an FPGA as described in any one of the above; the method comprises the following steps:
step S10, the CPU end receives an operation request of the user layer and identifies the operation request, and the operation request is transmitted at high speed through the first transmission layer to the second transmission layer of the FPGA end;
step S20, a second transmission layer of the FPGA end receives an operation request sent by a first transmission layer of the CPU end and sends the operation request to a command analysis layer, the operation request in the form of an index node sent by the CPU end is converted into a non-standard instruction in the form of a linked list node which can be identified by the FPGA end through the command analysis layer, return data is generated according to the non-standard instruction, the return data is transmitted to the first transmission layer of the CPU end at a high speed through the second transmission layer, and the return data is a linked list returned in a file system of the FPGA end;
step S30, the first transmission layer of the CPU end receives the return data sent by the second transmission layer of the FPGA end, the return data are sent to the node mapping layer, the return data in the form of linked list nodes sent by the FPGA end are converted into standard file data in the form of index nodes which can be identified by the CPU end through the node mapping layer, and the standard file data are fed back to the user layer.
Specifically, before step S10, in order to establish communication between the CPU end and the FPGA end through the first and second transmission layers, the RIFFA drivers corresponding to the first and second transmission layers are first configured and initialized, and the respective buffers of the FPGA end and the CPU end are managed to avoid data loss or corruption, so as to implement high-speed data transmission between the FPGA end and the CPU end.
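For reference, the stock RIFFA 2.x user-space C API looks roughly like the sketch below. The patent uses a customized RIFFA protocol, so the actual calls, channel numbers, buffer sizes and timeouts here are assumptions for illustration only.

```c
#include <stdio.h>
#include <stdlib.h>
#include "riffa.h"   /* RIFFA 2.x user-space API (assumed available) */

int main(void)
{
    /* Open FPGA 0 and use channel 0; ids and channels are assumptions. */
    fpga_t *fpga = fpga_open(0);
    if (!fpga) {
        fprintf(stderr, "no RIFFA-capable FPGA found\n");
        return 1;
    }

    unsigned int request[32]  = {0};   /* serialized message request packet   */
    unsigned int response[1024];       /* buffer for the returned linked list */

    /* fpga_send/fpga_recv lengths are in 32-bit words, timeout in ms. */
    int sent = fpga_send(fpga, 0, request, 32, 0, 1, 2000);
    int got  = fpga_recv(fpga, 0, response, 1024, 2000);
    printf("sent %d words, received %d words\n", sent, got);

    fpga_close(fpga);
    return 0;
}
```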
Based on the first embodiment of the method for normalizing a file system of a heterogeneous platform based on a CPU and an FPGA of the present invention, in a second embodiment of the method for normalizing a file system of a heterogeneous platform based on a CPU and an FPGA of the present invention, the step of converting, by a command parsing layer, an operation request in the form of an index node sent by a CPU end into a non-standard instruction in the form of a linked list node identifiable by an FPGA end in step S20 includes:
step S21, the command analysis layer obtains the index node from the setting information parsed from the operation request of the CPU end;
step S22, obtaining second data information needed by the linked list nodes from the index nodes;
step S23, generating a linked list node according to the second data information;
step S24, a linked list which can be identified by a linear linked list file system is generated according to the linked list nodes to serve as a non-standard instruction;
The step of converting, in step S30, the return data in the form of linked list nodes sent by the FPGA end into standard file data in the form of index nodes that can be identified by the CPU end through the node mapping layer includes:
step S31, the node mapping layer acquires each linked list node from return data returned by the FPGA end according to the operation request;
step S32, acquiring first data information needed by forming an index node from each linked list node;
step S33, generating index nodes according to the first data information;
step S34, generating standard file data according to the index node. The standard file data is then sent to the ShareFS.
The process of step S21 to step S24 is a linked list node mapping process, and the process of step S31 to step S34 is an index node mapping process.
Based on the second embodiment of the method for normalizing a file system of a heterogeneous platform based on a CPU and an FPGA according to the present invention, in a third embodiment of the method, the step S23 includes:
step S231, creating an empty linked list node;
step S232, mapping second data information obtained from the index node to the established linked list node, where the second data information includes: file name, number of file bytes, timestamp and read/write offset address;
The step S33 includes:
step S331, creating an empty virtual index node, wherein the virtual index node does not point to a certain file in the CPU but points to the obtained linked list node, so that the operation on the virtual index node is mapped to the operation on the linked list node;
step S332, mapping the first data information obtained from the linked list node to the established virtual index node, where the first data information includes: file name, file size, start recording time, and start physical offset address.
Taking the inode mapping process as an example:
each file in the Linux file system has a corresponding index node enode, which contains meta information of the file, such as the byte number of the file, the authority of reading and writing execution, a time stamp, the location of data, and the like. The CPU side needs to create a virtual inode,
the virtual index node does not point to a file local to the CPU end, but points to the obtained linked list node, and the operation on the virtual node is converted into the operation on the linked list node. Specifically, firstly, an empty virtual index node is established, then effective information in the linked list nodes is mapped one by one, such as key information of file names, byte numbers, time stamps, offset addresses and the like, and other information such as file authority, user IDs and GrouP IDs are configured according to actual conditions.
When a file is read or written, the file offset in the virtual index node has to be mapped to the physical offset address of the storage medium (such as a disk) recorded in the linked list node, and the storage medium is accessed through that physical offset address. The timestamp information of the file can be obtained from the start recording time in the linked list node. The virtual index node is created only when the file is opened, and the virtual node is released after the reading and writing of the file are completed.
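A minimal sketch of that offset translation is shown below, reusing the file_node_t layout assumed earlier; the helper name and the bounds check are illustrative assumptions.

```c
#include <stdint.h>

/* Translate a read/write position inside the file (as seen through the
 * virtual index node) into a physical offset on the storage medium,
 * using the start physical offset recorded in the linked list node. */
static int map_file_offset(const file_node_t *node, uint64_t file_pos,
                           uint64_t *phys_offset)
{
    if (file_pos >= node->file_size)
        return -1;                                  /* beyond end of file */
    *phys_offset = node->start_phys_offset + file_pos;
    return 0;
}
```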
In a fourth embodiment of the method for normalizing a file system of a heterogeneous platform based on a CPU and an FPGA according to the first to third embodiments of the method for normalizing a file system of a heterogeneous platform based on a CPU and an FPGA according to the present invention, the step of generating the return data according to the non-standard instruction in step S20 includes:
step S25, the FPGA end judges the instruction type of the non-standard instruction corresponding to the operation request;
when the nonstandard instruction corresponding to the operation request is a read request, step S26 is executed: the FPGA end searches non-standard file data meeting the conditions in a locally stored linear file linked list according to setting information analyzed from a non-standard instruction, wherein the setting information comprises a file name, a reading address and a reading length of a file to be read;
Step S27, generating return data according to non-standard file data meeting the conditions;
step S28, obtaining an offset address of the nonstandard file data in a storage medium, and positioning the offset address to a physical storage address in the storage medium according to the offset address;
step S29, reading a file to be read with a specified length in a storage medium according to a physical storage address, and sending the file to be read to a data buffer area in an FPGA end so as to wait for sending the file to be read to a CPU end;
when the non-standard instruction corresponding to the operation request is a file list query, step S210 is executed: according to the appointed storage directory of the storage medium analyzed from the non-standard instruction, the FPGA end searches a file list corresponding to the appointed storage directory in a locally stored linear file linked list;
step S211, taking the file list as return data to wait for the file list to be sent to the CPU side.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a computer-readable storage medium (such as a ROM/RAM, magnetic disk or optical disk), comprising instructions for causing a terminal device to execute the method according to the embodiments of the present invention.
In the description of the present specification, descriptions of terms "one embodiment," "another embodiment," "other embodiments," or "first embodiment through X-th embodiment," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, method steps or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. The heterogeneous platform based on the CPU and the FPGA is characterized in that the CPU end is in communication connection with the FPGA end, and the FPGA end is used for being in communication connection with a storage medium; the CPU end comprises a node mapping layer and a first transmission layer; the FPGA end comprises a command analysis layer and a second transmission layer;
the first transmission layer is used for receiving an operation request of the user layer identified at the CPU end, transmitting the operation request at high speed to the second transmission layer of the FPGA end, receiving return data sent by the second transmission layer of the FPGA end, and sending the return data to the node mapping layer; the node mapping layer is used for converting the return data in the form of linked list nodes sent by the FPGA end into standard file data in the form of index nodes which can be identified by the CPU end, so that the standard file data is fed back to the user layer; the return data is a linked list returned in a file system of the FPGA end;
The second transmission layer is used for receiving an operation request sent by the first transmission layer of the CPU end and sending the operation request to the command analysis layer; the command analysis layer is used for converting an operation request in the form of an index node sent by the CPU end into a non-standard instruction in the form of a linked list node which can be identified by the FPGA end, so that the FPGA end generates return data according to the non-standard instruction; the second transmission layer is also used for transmitting the return data to the first transmission layer of the CPU end at high speed.
2. The heterogeneous platform based on the CPU and the FPGA according to claim 1, wherein the CPU end further comprises a virtual file system layer and ShareFS;
the virtual file system layer is used for receiving an operation request of the user layer;
the ShareFS is used for starting a corresponding general function interface from a preset general function interface according to the type of the operation request, analyzing setting information from the operation request and sending the setting information to the node mapping layer, wherein the setting information corresponds to the type of the operation request.
3. The heterogeneous platform based on the CPU and the FPGA according to claim 1, wherein the FPGA side further comprises a linear linked list file system:
and the linear linked list file system is used for storing the linear file linked list and generating linear file data as return data according to the non-standard instruction.
4. The heterogeneous platform based on CPU and FPGA of claim 2, wherein the node mapping layer is specifically configured to:
according to a preset data protocol format, packaging the setting information analyzed from the operation request to generate a message request packet, and submitting the message request packet to a waiting queue so as to transmit the message request packet from the waiting queue to a second transmission layer of the FPGA end at a high speed through a first transmission layer;
and acquiring each linked list node from the return data returned by the second transmission layer of the FPGA end according to the operation request, acquiring first data information needed by the index node from each linked list node, generating the index node according to the first data information, generating standard file data according to the index node, and transmitting the standard file data to the ShareFS.
5. A heterogeneous platform based on CPU and FPGA according to claim 3, wherein the command parsing layer is specifically configured to:
receiving an operation request forwarded by a second transmission layer;
and acquiring an index node from the setting information parsed from the operation request, acquiring second data information needed by the linked list node from the index node, generating the linked list node according to the second data information, generating a linked list which can be identified by the linear linked list file system according to the linked list node as a non-standard instruction, and sending the non-standard instruction to the linear linked list file system.
6. A heterogeneous CPU and FPGA based platform according to claim 3 wherein the linear linked list file system is specifically configured to:
storing a linear file linked list, and searching a file meeting the condition in a local linear file linked list according to the non-standard instruction;
and acquiring the files meeting the conditions and generating return data, wherein the return data comprises a device port number, an information ID, a file name, a file size, a file offset address and a linear file linked list.
7. A method for standardizing a file system of a heterogeneous platform based on a CPU and an FPGA, which is applied to the heterogeneous platform based on a CPU and an FPGA as claimed in any one of claims 1 to 6; the method comprises the following steps:
the CPU end receives an operation request of the user layer and identifies the operation request, and the operation request is transmitted at high speed through the first transmission layer to the second transmission layer of the FPGA end;
the method comprises the steps that a second transmission layer of an FPGA (field programmable gate array) end receives an operation request sent by a first transmission layer of a CPU end and sends the operation request to a command analysis layer, the operation request in the form of an index node sent by the CPU end is converted into a non-standard instruction in the form of a linked list node which can be identified by the FPGA end through the command analysis layer, return data is generated according to the non-standard instruction, the return data is transmitted to the first transmission layer of the CPU end at a high speed through the second transmission layer, and the return data is a linked list returned in a file system of the FPGA end;
The first transmission layer of the CPU end receives the return data sent by the second transmission layer of the FPGA end, the return data is sent to the node mapping layer, the return data in the form of linked list nodes sent by the FPGA end is converted into standard file data in the form of index nodes which can be identified by the CPU end through the node mapping layer, and the standard file data is fed back to the user layer.
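A minimal sketch of the CPU-end round trip described in this method, assuming placeholder transport hooks (tx_layer_send, tx_layer_recv) for the first and second transmission layers and a map_nodes_to_inode helper standing in for the node mapping layer; none of these names come from the patent.

```c
#include <stdint.h>

struct msg_request_packet;      /* request packet, as sketched after claim 4 */
struct ll_node;                 /* linked list node, as sketched after claim 5 */
struct std_file_data;           /* standard file data handed to the user layer */

/* Assumed transport hooks wrapping the first/second transmission layers. */
int             tx_layer_send(const struct msg_request_packet *req);
struct ll_node *tx_layer_recv(uint32_t info_id);
int             map_nodes_to_inode(struct ll_node *nodes,
                                   struct std_file_data *out);

/* One CPU-end round trip for an operation request: send the request,
 * receive the linked list returned by the FPGA end, and convert it
 * into standard file data for the user layer. */
int handle_user_request(const struct msg_request_packet *req, uint32_t info_id,
                        struct std_file_data *out)
{
    if (tx_layer_send(req) != 0)
        return -1;
    struct ll_node *nodes = tx_layer_recv(info_id);
    if (!nodes)
        return -1;
    return map_nodes_to_inode(nodes, out);
}
```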
8. The method for standardizing a file system of a heterogeneous platform based on a CPU and an FPGA according to claim 7, wherein the step of converting, through the command parsing layer, the operation request sent by the CPU end in the form of an index node into a non-standard instruction in the form of a linked list node identifiable by the FPGA end comprises:
the command parsing layer obtains an index node from the setting information parsed from the operation request of the CPU end;
acquiring from the index node the second data information needed to form a linked list node;
generating a linked list node according to the second data information;
generating, according to the linked list node, a linked list identifiable by the linear linked list file system as a non-standard instruction;
the step of converting, through the node mapping layer, the return data sent by the FPGA end in the form of linked list nodes into standard file data in the form of index nodes identifiable by the CPU end comprises the following steps:
the node mapping layer acquires each linked list node from the return data returned by the FPGA end for the operation request;
acquiring from each linked list node the first data information needed to form an index node;
generating an index node according to the first data information;
and generating standard file data according to the index node.
9. The method for standardizing a file system of a heterogeneous platform based on a CPU and an FPGA according to claim 8, wherein the step of generating a linked list node according to the second data information comprises:
creating an empty linked list node;
mapping the second data information obtained from the index node to the created linked list node, wherein the second data information comprises: a file name, a number of file bytes, a timestamp and a read/write offset address;
the step of generating the index node according to the first data information comprises the following steps:
creating an empty virtual index node, wherein the virtual index node does not point to a file local to the CPU but to the obtained linked list node, so that operations on the virtual index node are mapped to operations on the linked list node;
mapping the first data information obtained from the linked list node to the created virtual index node, wherein the first data information comprises: a file name, a file size, a start recording time and a start physical offset address.
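The reverse mapping in claims 8 and 9 could be sketched as follows: an empty virtual index node is created, filled with the first data information, and pointed at the linked list node instead of a local CPU file. The struct layouts and the names fpga_ll_node, virtual_inode and ll_node_to_vinode are assumptions; the forward mapping mirrors the sketch given after claim 5.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Linked list node as returned by the FPGA end (assumed layout). */
struct fpga_ll_node {
    char     file_name[64];
    uint64_t file_size;
    uint64_t start_record_time;
    uint64_t start_phys_offset;
    struct fpga_ll_node *next;
};

/* Virtual index node: it does not point to a file local to the CPU;
 * it points to the linked list node, so operations on the virtual
 * index node are forwarded to the linked list node. */
struct virtual_inode {
    char                 name[64];
    uint64_t             size;
    uint64_t             start_record_time;
    uint64_t             start_phys_offset;
    struct fpga_ll_node *backing_node;   /* target of all mapped operations */
};

struct virtual_inode *ll_node_to_vinode(struct fpga_ll_node *node)
{
    struct virtual_inode *vi = calloc(1, sizeof(*vi));   /* empty virtual inode */
    if (!vi)
        return NULL;
    /* Map the first data information from the linked list node. */
    strncpy(vi->name, node->file_name, sizeof(vi->name) - 1);
    vi->size              = node->file_size;
    vi->start_record_time = node->start_record_time;
    vi->start_phys_offset = node->start_phys_offset;
    vi->backing_node      = node;
    return vi;
}
```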
10. The method for standardizing a file system of a heterogeneous platform based on a CPU and an FPGA according to any one of claims 7 to 9, wherein the step of generating return data according to the non-standard instruction comprises:
the FPGA end determines the instruction type of the non-standard instruction corresponding to the operation request;
when the non-standard instruction corresponding to the operation request is a read request, the FPGA end searches the locally stored linear file linked list for non-standard file data meeting the conditions according to the setting information parsed from the non-standard instruction, wherein the setting information comprises the file name, the read address and the read length of the file to be read;
generating return data according to the non-standard file data meeting the conditions;
acquiring an offset address of the non-standard file data in a storage medium, and locating the physical storage address in the storage medium according to the offset address;
reading the file to be read of the specified length from the storage medium according to the physical storage address, and sending the file to be read to a data buffer in the FPGA end, where it waits to be sent to the first transmission layer of the CPU end;
when the non-standard instruction corresponding to the operation request is a file list query, the FPGA end searches the locally stored linear file linked list for the file list corresponding to the specified storage directory of the storage medium parsed from the non-standard instruction;
and taking the file list as return data, which waits to be sent to the first transmission layer of the CPU end.
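On the FPGA end, the branching described in claim 10 might be organized as in the following C sketch; the instruction codes, the linked-list and storage-medium helpers (ll_find_file, ll_list_dir, ll_phys_addr, medium_read, data_buffer_reserve) and the buffer handling are all assumptions made for illustration.

```c
#include <stdint.h>
#include <stddef.h>

enum ns_op { NS_READ = 1, NS_LIST = 2 };     /* assumed instruction codes */

struct ns_instr {                            /* parsed non-standard instruction */
    enum ns_op op;
    char       file_name[64];
    uint64_t   read_addr;
    uint64_t   read_len;
    char       dir[64];                      /* specified storage directory */
};

struct ll_node;                              /* linear file linked list node */

/* Assumed helpers over the locally stored linear file linked list,
 * the storage medium and the FPGA data buffer. */
struct ll_node *ll_find_file(const char *name);
struct ll_node *ll_list_dir(const char *dir);
uint64_t        ll_phys_addr(const struct ll_node *n, uint64_t offset);
int             medium_read(uint64_t phys_addr, uint64_t len, void *buf);
void           *data_buffer_reserve(uint64_t len);

/* Generate return data according to the non-standard instruction. */
struct ll_node *generate_return_data(const struct ns_instr *in)
{
    switch (in->op) {
    case NS_READ: {
        struct ll_node *n = ll_find_file(in->file_name);
        if (!n)
            return NULL;
        uint64_t phys = ll_phys_addr(n, in->read_addr);   /* offset -> physical address */
        void *buf = data_buffer_reserve(in->read_len);    /* buffer awaiting the CPU end */
        if (!buf || medium_read(phys, in->read_len, buf) != 0)
            return NULL;
        return n;                 /* linked list entry describing the data */
    }
    case NS_LIST:
        return ll_list_dir(in->dir);   /* file list for the specified directory */
    }
    return NULL;
}
```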
CN202310783345.8A 2023-06-29 2023-06-29 Heterogeneous platform and file system standardization method based on CPU and FPGA Active CN116521607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310783345.8A CN116521607B (en) 2023-06-29 2023-06-29 Heterogeneous platform and file system standardization method based on CPU and FPGA

Publications (2)

Publication Number Publication Date
CN116521607A (en) 2023-08-01
CN116521607B (en) 2023-08-29

Family

ID=87390573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310783345.8A Active CN116521607B (en) 2023-06-29 2023-06-29 Heterogeneous platform and file system standardization method based on CPU and FPGA

Country Status (1)

Country Link
CN (1) CN116521607B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559152A (en) * 2013-10-31 2014-02-05 烽火通信科技股份有限公司 Device and method for CPU (central processing unit) to access local bus on basis of PCIE (peripheral component interface express) protocol
CN108958800A (en) * 2018-06-15 2018-12-07 中国电子科技集团公司第五十二研究所 A kind of DDR management control system accelerated based on FPGA hardware
CN111797058A (en) * 2020-07-02 2020-10-20 长沙景嘉微电子股份有限公司 Universal file system and file management method
CN114125077A (en) * 2022-01-26 2022-03-01 之江实验室 Method and device for realizing multi-executive TCP session normalization

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9080894B2 (en) * 2004-10-20 2015-07-14 Electro Industries/Gauge Tech Intelligent electronic device for receiving and sending data at high speeds over a network
US20190265976A1 (en) * 2018-02-23 2019-08-29 Yuly Goryavskiy Additional Channel for Exchanging Useful Information
US20200242263A1 (en) * 2019-01-28 2020-07-30 Red Hat, Inc. Secure and efficient access to host memory for guests

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant