CN114553411A - Encryption device for distributed memory and decryption device for distributed memory - Google Patents

Encryption device for distributed memory and decryption device for distributed memory

Info

Publication number
CN114553411A
CN114553411A (application CN202210180559.1A)
Authority
CN
China
Prior art keywords
interface
encryption
data
storage node
decryption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210180559.1A
Other languages
Chinese (zh)
Other versions
CN114553411B (en)
Inventor
张静东
阚宏伟
王江为
王媛丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202210180559.1A priority Critical patent/CN114553411B/en
Publication of CN114553411A publication Critical patent/CN114553411A/en
Application granted granted Critical
Publication of CN114553411B publication Critical patent/CN114553411B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L 9/0894 Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
    • H04L 9/0897 Escrow, recovery or storing of secret information involving additional devices, e.g. trusted platform module [TPM], smartcard or USB
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L 9/0861 Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L 9/0866 Generation of secret information involving user or device identifiers, e.g. serial number, physical or biometrical information, DNA, hand-signature or measurable physical characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 2209/00 Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L 9/00
    • H04L 2209/12 Details relating to cryptographic hardware or logic circuitry
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an encryption device for distributed memory and a decryption device for distributed memory. The encryption device for distributed memory comprises an FPGA chip disposed on each storage node, and the FPGA chip of each storage node comprises: a first interface and a second interface, used respectively for communication between the FPGA chip and the local storage node and between the FPGA chip and other storage nodes; a DMA engine for obtaining application data from the host of the local storage node; a PUF key generation module for obtaining a PUF circuit input stimulus from the host of the local storage node and generating a key; an encryption and decryption algorithm engine for encrypting the application data based on the key; and a transmit-receive packet processing engine for obtaining packet header control information from the host of the local storage node, adding an information header to the encrypted data based on the packet header control information, and sending the encrypted data to the other storage nodes for storage. The scheme of the invention addresses the problems of data security, key storage and CPU load in distributed shared-memory schemes.

Description

Encryption device for distributed memory and decryption device for distributed memory
Technical Field
The present invention relates to the field of distributed storage, and in particular, to an encryption apparatus for distributed memory and a decryption apparatus for distributed memory.
Background
With the continuous development of cloud computing and big data, the CPU (Central Processing Unit), network and memory resources of data centers are increasingly consumed, more and more network applications are being moved to the cloud, and growing national attention is being paid to network security, data privacy and related issues. As public clouds and hybrid clouds continue to develop, ensuring the security of customers' sensitive data becomes critically important. Memory encryption technology generally uses the Trusted Execution Environment (TEE) technology of the CPU to encrypt local memory data: a BIOS option is set before the server starts to enable the TEE of the CPU, and after startup all operations on the DIMM memory data of the node are processed by an encryption and decryption module controlled by the TEE, so that data is stored in the DIMM memory in encrypted form and is decrypted by the TEE-controlled encryption and decryption module when read by the CPU before being used. Intel SGX (Software Guard Extensions) is one implementation of a TEE: the TEE environment in which SGX executes code can guarantee the confidentiality and integrity of data, only specific users and applications can access the code and data in SGX, and the TEE environment contains dedicated hardware logic in the CPU, dedicated operating system software and so on, and depends on a complete trust chain of a specific CPU model and a dedicated hardware platform.
At present, there are two FPGA-based (field-programmable gate array) data encryption card schemes. In the first scheme, shown in fig. 1A, a dedicated cryptographic chip is required to encrypt the data; the FPGA is mainly responsible for moving plaintext and ciphertext between the host and the cryptographic chip and does not itself encrypt or decrypt data. In the second scheme, shown in fig. 1B, the encryption and decryption functions are implemented inside the FPGA, which can both move data and process plaintext and ciphertext, but a third party must provide the key pair used for the encryption and decryption services.
The prior art has the following defects. First, a single-node memory encryption scheme using TEE technology must be supported by a specific CPU model; its implementation depends on the underlying hardware architecture, the coupling between software and hardware is high, updates and upgrades are troublesome, the TEE interface forms adopted by different CPU manufacturers are not uniform, their technical frameworks differ, and the overhead of software application calls and platform porting is high. Second, in distributed memory encryption, data must be cached after encryption while the server's network card waits to transmit it to other nodes, and the memory data must be copied frequently, which not only increases data transmission latency but also increases system energy consumption. Third, whether for single-node memory encryption and decryption using TEE technology or for multi-node distributed memory encryption and decryption, the encryption and decryption work in conventional schemes is executed by the CPU, consuming a large amount of CPU resources for computation and memory resources for caching. Fourth, conventional encryption and decryption schemes must store the key pair securely, usually requiring dedicated storage resources, and some keys are cached in memory, which undoubtedly increases the risk of leaking key-sensitive information, so the reliability of the system cannot be guaranteed.
Disclosure of Invention
In view of the above, it is desirable to provide a distributed memory encryption apparatus and a distributed memory decryption apparatus.
According to a first aspect of the present invention, there is provided an encryption device for distributed memory, wherein an FPGA chip is disposed on each storage node, and the FPGA chip of each storage node comprises:
a first interface and a second interface, wherein the first interface is used for communication between the FPGA chip and the local storage node, and the second interface is used for communication between the FPGA chip and other storage nodes;
a DMA engine to obtain application data from the host of the local storage node through the first interface;
a PUF key generation module to obtain a PUF circuit input stimulus from the host of the local storage node through the first interface and to generate a key based on the PUF circuit input stimulus;
an encryption and decryption algorithm engine to encrypt the application data based on the key to generate encrypted data;
and a transmit-receive packet processing engine to obtain packet header control information from the host of the local storage node through the first interface, add an information header to the encrypted data based on the packet header control information, and then send the encrypted data to other storage nodes through the second interface for storage.
In some embodiments, there are a plurality of encryption algorithm engines, and different encryption algorithm engines correspond to different encryption algorithms;
the host of the local storage node sends algorithm selection information to the plurality of encryption algorithm engines through the first interface to enable the selected encryption algorithm engine.
In some embodiments, the device further comprises an FPGA memory subsystem;
the FPGA memory subsystem is used for caching the encrypted data to be processed by the transmit-receive packet processing engine.
In some embodiments, the PUF key generation module comprises a PUF circuit module and an ECC encoding module;
the PUF circuit module is used to generate response data based on the PUF circuit input stimulus;
the ECC encoding module is used to correct errors in the response data to generate the key.
In some embodiments, the first interface is a CXL interface or a PCIe interface, and the second interface is a MAC network port.
According to a second aspect of the present invention, there is provided a decryption device for distributed memory, wherein an FPGA chip is disposed on each storage node, and the FPGA chip of each storage node comprises:
a first interface and a second interface, wherein the first interface is used for communication between the FPGA chip and the local storage node, and the second interface is used for communication between the FPGA chip and other storage nodes;
a transmit-receive packet processing engine to receive data packets from other storage nodes and parse the information headers of the data packets to obtain the data storage address and the data packet type;
an encryption and decryption algorithm engine to determine whether to decrypt a data packet based on the data packet type;
a PUF key generation module to obtain, when the encryption and decryption algorithm engine needs to decrypt, a PUF circuit input stimulus from the host of the local storage node through the first interface, generate a key based on the PUF circuit input stimulus, and send the key to the encryption and decryption algorithm engine, so that the encryption and decryption algorithm engine decrypts the data packet based on the key to generate decrypted data;
and a DMA engine to obtain the decrypted data from the encryption and decryption algorithm engine and send it to the host of the local storage node through the first interface, so that the host of the local storage node stores the decrypted data based on the data storage address.
In some embodiments, there are a plurality of encryption algorithm engines, and different encryption algorithm engines correspond to different encryption algorithms;
the host of the local storage node sends algorithm selection information to the plurality of encryption algorithm engines through the first interface to enable the selected encryption algorithm engine.
In some embodiments, the device further comprises an FPGA memory subsystem;
the FPGA memory subsystem is used for caching the data packets to be processed by the transmit-receive packet processing engine.
In some embodiments, the PUF key generation module comprises a PUF circuit module and an ECC encoding module;
the PUF circuit module is used to generate response data based on the PUF circuit input stimulus;
the ECC encoding module is used to correct errors in the response data to generate the key.
In some embodiments, the first interface is a CXL interface or a PCIe interface, and the second interface is a MAC network port.
According to the encryption device for distributed memory provided by the invention, shared memory data is encrypted and decrypted by adding an FPGA-based data processing board to a conventional server, and a PUF circuit implemented with the internal hardware resources of the FPGA generates a random number that serves as the key, so the key does not need to be stored or managed. This provides a new approach to encrypting and decrypting data for distributed applications in cloud computing.
In addition, the decryption device for distributed memory provided by the invention achieves the same technical effects, which are not repeated here.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1A is a schematic diagram of a conventional FPGA-based data encryption scheme using a dedicated cryptographic chip;
Fig. 1B is a schematic diagram of another conventional FPGA-based data encryption scheme;
fig. 2 is a schematic structural diagram of a distributed memory encryption apparatus according to an embodiment of the present invention.
Fig. 3A is a functional block diagram of a PUF key generation module according to another embodiment of the present invention;
fig. 3B is a schematic diagram of a circuit structure of an RO-based PUF according to another embodiment of the present invention;
fig. 4 is a schematic diagram of a FPGA-based cross-Rack server (Rack) shared memory network topology according to another embodiment of the present invention.
[ description of reference ]
10: a first interface;
20: a second interface;
30: a DMA engine;
40: a PUF key generation module; 41: a PUF circuit module; 42: an ECC encoding module;
50: an encryption and decryption algorithm engine;
60: a transmit-receive packet processing engine;
70: an FPGA memory subsystem.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish between two entities or parameters that share the same name but are not identical. "First" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention, and this is not repeated in the following embodiments.
In an embodiment, referring to fig. 2, the present invention provides an encryption device for distributed memory, wherein an FPGA chip is disposed on each storage node, and the FPGA chip of each storage node includes:
the FPGA chip comprises a first interface 10 and a second interface 20, wherein the first interface 10 is used for the FPGA chip to communicate with a local storage node, and the second interface 20 is used for the FPGA chip to communicate with other storage nodes;
a DMA engine 30, the DMA engine 30 is used for obtaining application data from a host of a local storage node through the first interface 10;
a PUF key generation module 40, the PUF key generation module 40 being configured to obtain a PUF circuit input stimulus from the host of the local storage node through the first interface 10 and generate a key based on the PUF circuit input stimulus; here, PUF is short for Physical Unclonable Function.
An encryption/decryption algorithm engine 50, the encryption/decryption algorithm engine 50 being configured to encrypt the application data based on the key to generate encrypted data;
and a transmit-receive packet processing engine 60, the transmit-receive packet processing engine 60 being configured to obtain packet header control information from the host of the local storage node through the first interface 10, add an information header to the encrypted data based on the packet header control information, and send the encrypted data to other storage nodes through the second interface 20 for storage.
According to the encryption device for distributed memory provided by the invention, shared memory data is encrypted and decrypted by adding an FPGA-based data processing board to a conventional server, and a PUF circuit implemented with the internal hardware resources of the FPGA generates a random number that serves as the key, so the key does not need to be stored or managed. This provides a new approach to encrypting and decrypting data for distributed applications in cloud computing.
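As a non-limiting illustration of the data path just described (application data fetched by the DMA engine, key from the PUF key generation module, encryption, information header added by the transmit-receive packet processing engine), a minimal software sketch follows. All names, the 8-byte-address plus 1-byte-type header layout and the XOR placeholder cipher are assumptions made for illustration only and do not describe the claimed FPGA hardware.

    # Behavioral model (software only) of the encryption path: DMA in, PUF key,
    # encrypt, add information header, send. Names, header layout and the XOR
    # placeholder cipher are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class PacketHeaderInfo:
        remote_addr: int   # address where the encrypted data will be stored on the remote node
        packet_type: int   # e.g. 0 = write-memory-data request packet

    def xor_encrypt(data: bytes, key: bytes) -> bytes:
        """Placeholder for the encryption and decryption algorithm engine (e.g. AES)."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def encrypt_and_packetize(app_data: bytes, stimulus: bytes,
                              header_info: PacketHeaderInfo, puf_key_gen) -> bytes:
        key = puf_key_gen(stimulus)                # key generated on-chip, never stored off-chip
        ciphertext = xor_encrypt(app_data, key)    # encryption and decryption algorithm engine
        header = header_info.remote_addr.to_bytes(8, "big") + bytes([header_info.packet_type])
        return header + ciphertext                 # output of the transmit-receive packet processing engine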
In some embodiments, there are a plurality of encryption algorithm engines, and different encryption algorithm engines correspond to different encryption algorithms;
the host of the local storage node sends algorithm selection information to the plurality of encryption algorithm engines through the first interface 10 to enable the selected encryption algorithm engine.
In some embodiments, the device further comprises an FPGA memory subsystem 70;
the FPGA memory subsystem 70 is configured to cache encrypted data to be processed by the transmit-receive packet processing engine 60.
In some embodiments, referring to fig. 3A, the PUF key generation module 40 includes a PUF circuit module 41 and an ECC encoding module 42;
the PUF circuit module 41 is configured to generate response data based on the PUF circuit input stimulus;
the ECC encoding module 42 is configured to correct errors in the response data to generate the key.
In this embodiment, a PUF circuit is generally a circuit designed to exploit the differences in the electrical characteristics of transistors caused by uncontrollable process variation during chip manufacturing. The RO PUF circuit is a type of silicon PUF circuit; as shown in fig. 3B, its basic unit is an oscillation ring formed by an odd number of inverters and an AND gate acting as a switch, and a one-bit output is obtained by comparing the frequency difference between oscillation rings constrained to different positions on the chip. By comparing the frequency differences of multiple groups of oscillation rings, a multi-bit output can be obtained as a true random sequence, which can be applied to chip ID generation, encryption and decryption, IP core protection and the like.
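A minimal software model of such an RO-PUF-based key generator is sketched below for illustration. The simulated frequencies, the SHA-256 conditioning step, the device_seed parameter and all function names are assumptions; on real silicon the frequencies come from uncontrollable process variation, and the ECC encoding module corrects the small number of noisy response bits.

    # Simplified RO-PUF behavioral model: one bit per pair of ring oscillators,
    # obtained by comparing their (simulated) frequencies. Illustrative only.
    import hashlib
    import random

    def ro_puf_response(stimulus: bytes, n_bits: int = 128, device_seed: int = 0) -> list:
        rng = random.Random(hashlib.sha256(stimulus + device_seed.to_bytes(4, "big")).digest())
        bits = []
        for _ in range(n_bits):
            f_a = rng.gauss(100.0, 1.0)   # oscillation frequency of ring A (simulated)
            f_b = rng.gauss(100.0, 1.0)   # oscillation frequency of ring B (simulated)
            bits.append(1 if f_a > f_b else 0)
        return bits

    def derive_key(stimulus: bytes, device_seed: int = 0) -> bytes:
        raw = ro_puf_response(stimulus, device_seed=device_seed)
        # On real hardware the raw response is slightly noisy and the ECC encoding
        # module corrects it so the same stimulus always yields the same key;
        # this simulation is already deterministic, so the bits are packed and hashed.
        packed = bytes(sum(b << i for i, b in enumerate(raw[j:j + 8])) for j in range(0, len(raw), 8))
        return hashlib.sha256(packed).digest()[:16]   # 128-bit key

Different device_seed values stand in for different FPGA devices: even with the same stimulus, different devices yield different keys, mirroring the property noted in the beneficial effects below.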
In some embodiments, the first interface 10 is a CXL interface or a PCIe interface, and the second interface 20 is a MAC network port.
In another embodiment, the present invention further provides a decryption device for distributed memory. It differs from the distributed memory encryption device in that it is configured to decrypt received data packets; the internal structure of the FPGA is the same as in the encryption device, and the difference lies in the cooperation among the modules inside the FPGA. Referring again to fig. 2, an FPGA chip is disposed on each storage node, and the FPGA chip of each storage node includes:
the FPGA chip comprises a first interface 10 and a second interface 20, wherein the first interface 10 is used for the FPGA chip to communicate with a local storage node, and the second interface 20 is used for the FPGA chip to communicate with other storage nodes;
a transmit-receive packet processing engine 60, the transmit-receive packet processing engine 60 being configured to receive data packets from other storage nodes and parse the information headers of the data packets to obtain the data storage address and the data packet type;
an encryption/decryption algorithm engine 50, the encryption/decryption algorithm engine 50 for determining whether to decrypt the data packet based on the data packet type;
a PUF key generation module 40, where the PUF key generation module 40 is configured to obtain a PUF circuit input stimulus from a host of a local storage node through the first interface 10 when the encryption and decryption algorithm engine 50 needs to decrypt, and generate a key based on the PUF circuit input stimulus, and send the key to the encryption and decryption algorithm engine 50, so that the encryption and decryption algorithm engine 50 decrypts the data packet based on the key to generate decryption data;
a DMA engine 30, the DMA engine 30 is configured to obtain the decrypted data from the encryption and decryption algorithm engine 50, and send the decrypted data to the host of the local storage node through the first interface 10, so that the host of the local storage node stores the decrypted data based on the data storage address.
According to the decryption device for distributed memory provided by the invention, shared memory data is encrypted and decrypted by adding an FPGA-based data processing board to a conventional server, and a PUF circuit implemented with the internal hardware resources of the FPGA generates a random number that serves as the key, so the key does not need to be stored or managed. This provides a new approach to encrypting and decrypting data for distributed applications in cloud computing.
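For illustration, the receive path on a node can be sketched in software as follows. The packet layout (8-byte data storage address, 1-byte packet type), the type codes and the XOR placeholder cipher are the same assumptions used in the encryption sketch above; puf_key_gen stands for the PUF key generation module.

    # Behavioral model of the receive path: parse the information header, decide
    # whether to decrypt based on the packet type, and hand the result to the DMA
    # engine (modeled here as a write into host_memory). Illustrative only.
    PKT_WRITE_REQ, PKT_READ_REQ, PKT_READ_RESP = 0, 1, 2

    def xor_decrypt(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def handle_received_packet(packet: bytes, stimulus: bytes, puf_key_gen, host_memory: dict) -> None:
        addr = int.from_bytes(packet[:8], "big")      # data storage address from the header
        pkt_type, payload = packet[8], packet[9:]     # data packet type and payload
        if pkt_type == PKT_WRITE_REQ:
            host_memory[addr] = payload               # remote write request: keep the ciphertext (bypass)
        elif pkt_type == PKT_READ_RESP:
            key = puf_key_gen(stimulus)               # key regenerated on-chip from the same stimulus
            host_memory[addr] = xor_decrypt(payload, key)   # DMA engine writes plaintext to the host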
In some embodiments, there are a plurality of encryption algorithm engines, and different encryption algorithm engines correspond to different encryption algorithms;
the host of the local storage node sends algorithm selection information to the plurality of encryption algorithm engines through the first interface 10 to enable the selected encryption algorithm engine.
In some embodiments, the device further comprises an FPGA memory subsystem 70;
the FPGA memory subsystem 70 is configured to cache the data packets to be processed by the transmit-receive packet processing engine 60.
In some embodiments, the PUF key generation module 40 includes a PUF circuit module 41 and an ECC encoding module 42;
the PUF circuit module 41 is configured to generate response data based on the PUF circuit input stimulus;
the ECC encoding module 42 is configured to correct errors in the response data to generate the key.
In some embodiments, the first interface 10 is a CXL interface or a PCIe interface, and the second interface 20 is a MAC network port.
In another embodiment, to facilitate understanding of the technical solution of the present invention, please refer to fig. 4, which shows an FPGA-based cross-rack shared-memory network topology, taking two storage nodes as an example: a local storage node and a remote storage node, whose FPGAs adopt the same structure. The working processes of the encryption device for distributed memory and the decryption device for distributed memory are described in detail below, taking the interaction between the local storage node and the remote storage node as an example:
1. PUF key generation modules, each comprising a PUF circuit module and an ECC encoding module, are designed in the FPGAs of the local storage node and the remote storage node;
2. On the local storage node, when local application data needs to be encrypted, the host first configures the PUF circuit input stimulus of the PUF key generation module through a PCIe channel to obtain the key required for encrypting the application data, and at the same time configures the algorithm selection of the encryption and decryption algorithm engine as well as the key and packet header information in the transmit-receive packet processing module;
3. The data is then sent from the host to the FPGA processing board through the PCIe channel, and the DMA engine in the FPGA reads the data to be processed from the host;
4. After the encryption and decryption algorithm engine obtains the key configuration information and the non-empty cache indication from the DMA engine, the encryption engine is started and encrypts the cached data sent by the DMA engine using the key generated by the PUF key generation module. The PUF key generation module opens its key output interface only to the encryption and decryption algorithm engine; other logic in the FPGA cannot access this port, which guarantees the security of the key.
5. Multiple encryption and decryption algorithm engines can be deployed so that multiple applications can encrypt and decrypt with different algorithms and keys; the figure illustrates a single encryption and decryption process.
6. After the data is encrypted by the encryption and decryption engine, the transmit-receive packet processing engine adds an information header containing the address at which the data will be stored on the remote node and a data packet type flag (write-memory-data request packet, read-memory-data request packet, or read-memory-data response packet) and sends the packet to the 400G MAC module. Although fig. 4 shows only one transmit-receive packet processing engine queue, in a specific implementation a user or client may set the number of transmit-receive packet processing engines as required, for example one per encryption algorithm engine.
7. To ensure that the data is stored securely in the memory of the remote node, the application data must remain encrypted during transmission and storage. After the remote storage node receives the encrypted application memory data, the transmit-receive packet processing engine parses the packet header to obtain the data storage address and the packet type (for example, a write-memory-data request packet);
8. On the remote storage node, when the encryption and decryption engine in the FPGA determines that the received packet is a write-memory-data request packet, the bypass setting inside the encryption and decryption algorithm engine is enabled, the received data is passed directly to the DMA engine without decryption, and the data is stored at the corresponding memory address of the node through the PCIe channel (a sketch of this bypass behavior follows this list).
9. A read request to the memory of the remote storage node works in the opposite direction: after the host of the remote storage node receives a read-memory-data request packet, the transmit-receive packet processing engine configures the bypass setting of the encryption and decryption algorithm engine so that the memory data to be read is passed through without decryption;
10. The DMA engine is then started to read the data, which the transmit-receive packet processing engine packetizes and sends to the requesting node;
11. After receiving the read-memory-data response packet, the requesting node decrypts the encrypted data using the corresponding algorithm and key, and sends the decrypted data to the DMA engine to be written into the memory of the node's host.
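The remote-node behavior in steps 7 to 10 can be summarized by the following software sketch (the one referred to in step 8). It models only the bypass logic: write-memory-data requests are stored still encrypted, and read-memory-data requests are answered with the stored ciphertext, so plaintext never appears on the remote node. The packet layout and type codes are the same illustrative assumptions used in the sketches above and are not part of the claimed device.

    # Illustrative model of the remote storage node's bypass handling.
    PKT_WRITE_REQ, PKT_READ_REQ, PKT_READ_RESP = 0, 1, 2

    def make_packet(addr: int, pkt_type: int, payload: bytes = b"") -> bytes:
        return addr.to_bytes(8, "big") + bytes([pkt_type]) + payload

    class RemoteNode:
        def __init__(self):
            self.memory = {}                           # remote host memory holds ciphertext only

        def on_packet(self, packet: bytes):
            addr = int.from_bytes(packet[:8], "big")
            pkt_type, payload = packet[8], packet[9:]
            if pkt_type == PKT_WRITE_REQ:
                self.memory[addr] = payload            # bypass: store the still-encrypted data as-is
                return None
            if pkt_type == PKT_READ_REQ:
                # bypass: return the ciphertext; the requesting node decrypts it locally
                return make_packet(addr, PKT_READ_RESP, self.memory[addr])
            return None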
Taken together, the above embodiments show that the scheme of the invention has at least the following beneficial technical effects: (1) the PUF key generation module generates a true random sequence as the key pair for encrypting data to be sent and decrypting received data, and the key pair does not need to be stored or cached on an external storage device, which improves the security and reliability of the key and of the whole system; (2) due to the characteristics of the PUF circuit, PUF key generation modules implemented on different FPGAs produce completely random, different keys even for the same input stimulus; (3) encryption and decryption tasks are offloaded from the CPU to the FPGA-based data processing board, which increases encryption and decryption throughput and reduces the CPU load; (4) the encryption and decryption algorithm engine supports a data bypass function, is compatible with the conventional mode of encrypting and decrypting memory data with the CPU, and improves the flexibility of data processing; (5) the encryption and decryption algorithm engine and the transmit-receive packet processing engine support multi-queue operation to support scenarios in which multiple upper-layer applications use different encryption and decryption algorithms and keys; (6) the PUF key generation module supports highly reliable self-error-correcting coding, ensuring the integrity and reliability of the key; (7) the PCIe Gen5 standard is adopted between the FPGA and the host, the CXL bus protocol is supported, and cache coherence is achieved between the FPGA and the CPU.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features that contains no contradiction should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An encryption device for distributed memory, characterized in that an FPGA chip is disposed on each storage node, and the FPGA chip of each storage node comprises:
a first interface and a second interface, wherein the first interface is used for communication between the FPGA chip and the local storage node, and the second interface is used for communication between the FPGA chip and other storage nodes;
a DMA engine to obtain application data from the host of the local storage node through the first interface;
a PUF key generation module to obtain a PUF circuit input stimulus from the host of the local storage node through the first interface and generate a key based on the PUF circuit input stimulus;
an encryption and decryption algorithm engine to encrypt the application data based on the key to generate encrypted data;
and a transmit-receive packet processing engine to obtain packet header control information from the host of the local storage node through the first interface, add an information header to the encrypted data based on the packet header control information, and then send the encrypted data to other storage nodes through the second interface for storage.
2. The encryption device for distributed memory according to claim 1, wherein there are a plurality of encryption algorithm engines, and different encryption algorithm engines correspond to different encryption algorithms;
the host of the local storage node sends algorithm selection information to the plurality of encryption algorithm engines through the first interface to enable the selected encryption algorithm engine.
3. The encryption device for distributed memory according to claim 1 or 2, further comprising an FPGA memory subsystem;
the FPGA memory subsystem is used for caching the encrypted data to be processed by the transmit-receive packet processing engine.
4. The encryption device for distributed memory according to claim 1 or 2, wherein the PUF key generation module comprises a PUF circuit module and an ECC encoding module;
the PUF circuit module is configured to generate response data based on the PUF circuit input stimulus;
the ECC encoding module is configured to correct errors in the response data to generate the key.
5. The encryption device for distributed memory according to claim 1 or 2, wherein the first interface is a CXL interface or a PCIe interface, and the second interface is a MAC network port.
6. A decryption device for distributed memory, characterized in that an FPGA chip is disposed on each storage node, and the FPGA chip of each storage node comprises:
a first interface and a second interface, wherein the first interface is used for communication between the FPGA chip and the local storage node, and the second interface is used for communication between the FPGA chip and other storage nodes;
a transmit-receive packet processing engine to receive data packets from other storage nodes and parse the information headers of the data packets to obtain the data storage address and the data packet type;
an encryption and decryption algorithm engine to determine whether to decrypt a data packet based on the data packet type;
a PUF key generation module to obtain, when the encryption and decryption algorithm engine needs to decrypt, a PUF circuit input stimulus from the host of the local storage node through the first interface, generate a key based on the PUF circuit input stimulus, and send the key to the encryption and decryption algorithm engine, so that the encryption and decryption algorithm engine decrypts the data packet based on the key to generate decrypted data;
and a DMA engine to obtain the decrypted data from the encryption and decryption algorithm engine and send it to the host of the local storage node through the first interface, so that the host of the local storage node stores the decrypted data based on the data storage address.
7. The decryption device for distributed memory according to claim 6, wherein there are a plurality of encryption algorithm engines, and different encryption algorithm engines correspond to different encryption algorithms;
the host of the local storage node sends algorithm selection information to the plurality of encryption algorithm engines through the first interface to enable the selected encryption algorithm engine.
8. The decryption device for distributed memory according to claim 6 or 7, further comprising an FPGA memory subsystem;
the FPGA memory subsystem is used for caching the data packets to be processed by the transmit-receive packet processing engine.
9. The decryption device for distributed memory according to claim 6 or 7, wherein the PUF key generation module comprises a PUF circuit module and an ECC encoding module;
the PUF circuit module is configured to generate response data based on the PUF circuit input stimulus;
the ECC encoding module is configured to correct errors in the response data to generate the key.
10. The decryption device for distributed memory according to claim 6 or 7, wherein the first interface is a CXL interface or a PCIe interface, and the second interface is a MAC network port.
CN202210180559.1A 2022-02-25 2022-02-25 Distributed memory encryption device and distributed memory decryption device Active CN114553411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210180559.1A CN114553411B (en) 2022-02-25 2022-02-25 Distributed memory encryption device and distributed memory decryption device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210180559.1A CN114553411B (en) 2022-02-25 2022-02-25 Distributed memory encryption device and distributed memory decryption device

Publications (2)

Publication Number Publication Date
CN114553411A true CN114553411A (en) 2022-05-27
CN114553411B CN114553411B (en) 2023-07-14

Family

ID=81678970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210180559.1A Active CN114553411B (en) 2022-02-25 2022-02-25 Distributed memory encryption device and distributed memory decryption device

Country Status (1)

Country Link
CN (1) CN114553411B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115987513A (en) * 2023-03-17 2023-04-18 山东浪潮科学研究院有限公司 Distributed database fragment encryption and decryption methods, devices, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388953A (en) * 2017-08-02 2019-02-26 三星电子株式会社 Safety equipment, electronic equipment and the method for operating electronic equipment
CN109426727A (en) * 2017-08-24 2019-03-05 上海复旦微电子集团股份有限公司 Data ciphering method, decryption method, encryption system and decryption system


Also Published As

Publication number Publication date
CN114553411B (en) 2023-07-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant