CN116015762A - Method for constructing non-deterministic symmetric encryption system based on deep learning - Google Patents

Method for constructing non-deterministic symmetric encryption system based on deep learning

Info

Publication number
CN116015762A
Authority
CN
China
Prior art keywords
nonce
bob
ciphertext
parameters
eve2
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211577688.0A
Other languages
Chinese (zh)
Inventor
吴旭光
韩益亮
朱率率
李鱼
吕龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Engineering University of Chinese Peoples Armed Police Force
Original Assignee
Engineering University of Chinese Peoples Armed Police Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Engineering University of Chinese Peoples Armed Police Force filed Critical Engineering University of Chinese Peoples Armed Police Force
Priority to CN202211577688.0A priority Critical patent/CN116015762A/en
Publication of CN116015762A publication Critical patent/CN116015762A/en
Pending legal-status Critical Current

Landscapes

  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

The invention relates to a method for constructing a non-deterministic symmetric encryption system based on deep learning, which comprises: a step of constructing the system architecture, a step of determining the generative adversarial optimization target, a step of setting the network structure and parameters, and a step of setting the system hyperparameters. The architecture-construction step builds the non-deterministic symmetric encryption system and determines each participant of the system and its input and output data; the step of determining the generative adversarial optimization target sets the optimization objective of the whole system according to the symmetric encryption system; the step of setting the network structure and parameters combines fully-connected layers and 1-dimensional convolutional layers into the network structure of the invention and sets suitable activation functions and parameters; and the step of setting the system hyperparameters rationally sets the parameters required for training the non-deterministic symmetric encryption system, so that training is fast and accurate.

Description

Method for constructing non-deterministic symmetric encryption system based on deep learning
Technical Field
The invention belongs to the technical field of computer data security protection, relates to data encryption algorithms, and in particular relates to a method for constructing a non-deterministic symmetric encryption system based on deep learning.
Background
Symmetric ciphers feature fast encryption and decryption, easy software and hardware implementation, and high security, and are therefore widely used. When designing symmetric ciphers, designers have traditionally found nonlinear substitutions manually and checked cryptographic indices with the aid of computers. This design process involves substantial manual work, suffers from low efficiency, and makes security difficult to guarantee. Automated design has therefore been a long-standing pursuit.
In 1991, the well-known RSA designer Rivest summarized the relationship between cryptography and machine learning and, for the first time, explored the automated design of cryptographic algorithms based on search and optimization from the perspective of machine learning modeling; this crossover field has been studied extensively since. In 2004, Meng Qingshu and Zhang Huanguo of Wuhan University used evolutionary computation to successfully generate Bent functions with maximal nonlinearity, and S-boxes designed with these Bent functions show stronger resistance to differential and linear attacks. However, owing to the limited computing power of the time, many artificial-intelligence algorithms were impractical and difficult to apply to real problems.
With the rapid development of hardware such as GPUs, computing power has grown, and artificial intelligence represented by deep learning networks can now solve previously intractable problems. In 2016, the Google Brain team used Generative Adversarial Networks (GAN) to propose the first end-to-end automated design of a block cipher. This opened a new direction that quickly attracted attention and became a research hotspot. Recent work falls into two categories: first, optimizing the original model to obtain better performance and security; second, proposing new network models with cryptographic security to improve security.
However, existing deep-learning-based methods for constructing symmetric cryptosystems can only learn deterministic algorithms: the same key and the same plaintext always produce the same ciphertext. This means a key in a deterministic encryption algorithm can be used only once. A non-deterministic encryption scheme, in which encrypting the same plaintext with the same key yields different ciphertexts, is therefore clearly of great practical value.
The invention provides a method for constructing a non-deterministic symmetric encryption system based on deep learning, in which Alice encrypts the same plaintext with the same key multiple times and generates different ciphertexts, while Bob can still correctly decrypt the ciphertext to obtain the correct plaintext. Meanwhile, an attacker who eavesdrops on the communication between Alice and Bob cannot crack it to obtain the plaintext.
Disclosure of Invention
Technical problem to be solved
In order to overcome the defects of the prior art, the invention provides a method for constructing a non-deterministic symmetric encryption system based on deep learning. The method is based on a Nonce-driven generative adversarial neural network model, where a Nonce is a number used only once; it is transmitted to the communication participants and requires no confidentiality. Meanwhile, the invention optimizes the network structure by adding BN layers to the convolutional neural network (CNN), selecting appropriate activation functions and setting appropriate CNN parameters. The invention realizes non-deterministic symmetric encryption with fast convergence and a high correct-decryption rate.
Technical solution
A method for constructing a non-deterministic symmetric encryption system based on deep learning, characterized by comprising the following steps:
step 1, constructing the system architecture: the system architecture comprises four neural networks, namely the encrypting party Alice, the decrypting party Bob, the attacker Eve1 and the attacker Eve2;
wherein:
Alice and Bob share the same key K, while Eve1 and Eve2 do not know it;
the Nonce represents a number that is used only once and is known to the receiver Bob and the attacker Eve2;
Alice outputs the ciphertext C given the plaintext P, the key K and the Nonce as inputs;
Bob decrypts the ciphertext to obtain $P_{Bob}$ given the key K, the Nonce and the ciphertext C as inputs;
Eve1 can only obtain the ciphertext C, while Eve2 can obtain the ciphertext C and the Nonce;
step 2, determining the generative adversarial optimization target:
the distance function

$d(P, P') = \frac{1}{N}\sum_{i=1}^{N}\left|P_i - P'_i\right|$

is used to measure the difference between plaintexts,
wherein: N represents the length of the plaintext;
the single-sample loss function of Bob is defined as:

$L_B(\theta_A, \theta_B, P, K, Nonce) = d\big(P,\, B(\theta_B,\, A(\theta_A, P, K, Nonce),\, K, Nonce)\big),$

wherein: $\theta_A, \theta_B$ are the neural network parameters, $A(\theta_A, P, K, Nonce)$ represents the output of the neural network Alice, and $B(\theta_B,\, A(\theta_A, P, K, Nonce),\, K, Nonce)$ represents the output of the neural network Bob;
Bob's cost function is the expectation of the single-sample loss:

$L_B(\theta_A, \theta_B) = \mathbb{E}_{P,K,Nonce}\big[L_B(\theta_A, \theta_B, P, K, Nonce)\big];$

the single-sample loss functions and cost functions of Eve1 and Eve2 are:

$L_{E1}(\theta_A, \theta_{E1}, P, K, Nonce) = d\big(P,\, E1(\theta_{E1},\, A(\theta_A, P, K, Nonce))\big)$

$L_{E1}(\theta_A, \theta_{E1}) = \mathbb{E}_{P,K,Nonce}\big[L_{E1}(\theta_A, \theta_{E1}, P, K, Nonce)\big]$

$L_{E2}(\theta_A, \theta_{E2}, P, K, Nonce) = d\big(P,\, E2(\theta_{E2},\, A(\theta_A, P, K, Nonce),\, Nonce)\big)$

$L_{E2}(\theta_A, \theta_{E2}) = \mathbb{E}_{P,K,Nonce}\big[L_{E2}(\theta_A, \theta_{E2}, P, K, Nonce)\big]$

wherein: $\theta_{E1}, \theta_{E2}$ are the neural network parameters, and $E1(\cdot)$ and $E2(\cdot)$ represent the outputs of the neural networks Eve1 and Eve2, respectively;
the loss function of the entire generative adversarial system can be defined as:

$L_{AB}(\theta_A, \theta_B) = L_B(\theta_A, \theta_B) - L_{E1}\big(\theta_A, O_{E1}(\theta_A)\big) - L_{E2}\big(\theta_A, O_{E2}(\theta_A)\big),$

where $O_{Ei}(\theta_A) = \operatorname{argmin}_{\theta_{Ei}} L_{Ei}(\theta_A, \theta_{Ei})$ denotes the attacker at its minimum loss value. These optimization targets are sketched in code below.
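The following is a minimal PyTorch sketch of these targets, assuming alice, bob, eve1 and eve2 are callables implementing the four networks of step 1; the names and interfaces are illustrative rather than prescribed by the patent:

```python
import torch

def d(p, p_prime):
    # L1 distance between plaintexts, averaged over the N bits
    return torch.mean(torch.abs(p - p_prime), dim=-1)

def system_losses(alice, bob, eve1, eve2, P, K, Nonce):
    """Estimates (L_B, L_E1, L_E2, L_AB) on one batch; the batch mean
    stands in for the expectation over plaintexts, keys and Nonces."""
    C = alice(torch.cat([P, K, Nonce], dim=1))                # (P,K,Nonce) -> C
    l_b  = d(P, bob(torch.cat([C, K, Nonce], dim=1))).mean()  # Bob's cost
    l_e1 = d(P, eve1(C)).mean()                               # Eve1 sees only C
    l_e2 = d(P, eve2(torch.cat([C, Nonce], dim=1))).mean()    # Eve2 sees C and Nonce
    # L_AB = L_B - L_E1 - L_E2: Bob must decrypt while both attackers fail
    return l_b, l_e1, l_e2, l_b - l_e1 - l_e2
```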
step 3, setting parameters of the neural network:
the four participants Alice, Bob, Eve1 and Eve2 in the system adopt similar neural network structures, each comprising 2 fully-connected layers (FC) and 6 one-dimensional convolutional layers (1-D Convolutional Layer); the first fully-connected layer serves as the input layer, where the inputs of the encrypting party Alice and the decrypting party Bob are the key K, the plaintext P and the Nonce, the input of the attacker Eve1 is the ciphertext C, and the input of the attacker Eve2 is the ciphertext C and the Nonce; the input/output parameters of the second fully-connected layer are (3N, 3N), and both fully-connected layers use the Sigmoid activation function; the fully-connected layers are followed by 6 one-dimensional convolutional layers with BN (Batch Normalization) layers, and ReLU and Tanh are selected as activation functions;
step 4, setting the system hyperparameters:
the four neural networks Alice, Bob, Eve1 and Eve2 receive a random plaintext, key and Nonce of N bits each and generate an N-bit floating-point ciphertext; the bit length N = 32, 64, 128 or 256 for the plaintext, key and ciphertext; the batch size is set to 512, 1024 or 2048; Xavier initialization is adopted, and PyTorch's Adam optimizer is selected.
Whether or not the key K is reused, the symmetric encryption system is non-deterministic: even when the same plaintext is encrypted with the same key, different ciphertexts are obtained.
Advantageous effects
The invention provides a method for constructing a non-deterministic symmetric encryption system based on deep learning, comprising: a step of constructing the system architecture, a step of determining the generative adversarial optimization target, a step of setting the network structure and parameters, and a step of setting the system hyperparameters. The architecture-construction step builds the non-deterministic symmetric encryption system and determines each participant of the system and its input and output data; the step of determining the generative adversarial optimization target sets the optimization objective of the whole system according to the symmetric encryption system; the step of setting the network structure and parameters combines fully-connected layers and 1-dimensional convolutional layers into the network structure of the invention and sets suitable activation functions and parameters; and the step of setting the system hyperparameters rationally sets the parameters required for training the non-deterministic symmetric encryption system, so that training is fast and accurate.
To the inventors' knowledge, the method of the present invention is the first example of a deep-learning-based end-to-end non-deterministic symmetric encryption communication system. It is based on a Nonce-driven generative adversarial neural network model, where "Nonce" means a number used only once, transmitted to the communication participants, with no confidentiality required. Meanwhile, the invention optimizes the network structure by adding BN layers to the convolutional neural network (CNN), selecting appropriate activation functions and setting appropriate CNN parameters. The invention realizes non-deterministic symmetric encryption with fast convergence and a high correct-decryption rate.
The invention has at least the following advantages:
1. The invention can learn a non-deterministic symmetric encryption algorithm. To our knowledge, this is the first example of a deep-learning-based end-to-end non-deterministic symmetric encryption communication system.
2. The invention designs the network structure and sets appropriate network parameters so that encryption and decryption still work well for longer plaintexts: BN layers are added to the CNN (convolutional neural network), appropriate activation functions are selected, and appropriate CNN parameters are set, including input channels, output channels, kernel size and stride. These designs ensure fast convergence and a high correct-decryption rate when the plaintext length is 256 bits or longer.
Drawings
FIG. 1 is a schematic diagram of the system architecture of the present invention;
FIG. 2 is a schematic diagram of the neural network architecture of the present invention;
FIG. 3 is a diagram illustrating decryption error rates of the present invention at different lengths.
Detailed Description
The invention will now be further described with reference to the examples and figures:
In order to make the purposes, technical effects and technical solutions of the embodiments of the present invention clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. It will be apparent that the described embodiments are only some of the embodiments of the present invention; other embodiments obtained by those of ordinary skill in the art from the disclosed embodiments without undue burden fall within the scope of the present invention.
Step 1, constructing a system architecture:
Referring to FIG. 1, the system architecture of the present invention is composed of an encrypting party Alice, a decrypting party Bob, an attacker Eve1 and an attacker Eve2, where each party is a neural network. Given the plaintext P, key K and Nonce as inputs, the encrypting party Alice outputs the ciphertext C; the ciphertext C is transmitted to the receiver Bob over a public channel, and given the key K and Nonce, Bob decrypts it to $P_{Bob}$; the attacker Eve1 obtains only the ciphertext C and also tries to decrypt it, obtaining $P_{Eve1}$; the attacker Eve2 obtains the ciphertext C and the Nonce, and decrypts them to obtain $P_{Eve2}$.
Alice and Bob share the same key K, while Eve1 and Eve2 do not know it; nonce represents a number that is used only once, does not need to be randomly generated, and does not need to be kept secret, and can be known to the recipient Bob and the attacker Eve 2.
Compared with the prior art, the key K in the invention can be reused, that is, the key obtained after the same plaintext is encrypted is different under the condition that the key K is the same.
Step 2, determining the generative adversarial optimization target:
In the generative adversarial network training, the parameters of Alice, Bob, Eve1 and Eve2 are denoted by $\theta_A$, $\theta_B$, $\theta_{E1}$ and $\theta_{E2}$. Alice's encryption function is defined as $A(\theta_A, P, K, Nonce)$, where $\theta_A$, $P$, $K$ and $Nonce$ denote Alice's neural network parameters, the plaintext, the key and the Nonce number, respectively. Similarly, Bob's decryption function is defined as $B(\theta_B, C, K, Nonce)$, where $\theta_B$ denotes Bob's neural network parameters, $C$ denotes the ciphertext, and $K$ and $Nonce$ are as defined for Alice's network. Eve1's output is defined as $E1(\theta_{E1}, C)$, and Eve2's output, with inputs $C$ and $Nonce$, as $E2(\theta_{E2}, C, Nonce)$, where $\theta_{E1}$ and $\theta_{E2}$ denote the neural network parameters of Eve1 and Eve2, respectively.
The present invention introduces a distance function d to measure the difference between the plaintext and uses the L1 distance to calculate the distance. d (P, P ') represents the distance between P and P', written as
Figure BDA0003989568630000061
Where N represents the length of the plaintext. Since the original plaintext only contains-1 and 1, the decrypted P' contains floating point numbers between-1 and 1. d (P, P ') has a maximum value of 2, meaning that each bit of P' is different from P. But this is not a bad thing for an attacker, who can invert each bit of P' to obtain P; that is, an attacker Eve may flip P Eve To recover the exact plaintext P. Thus, in an ideal secure communication, eve decrypts the resulting P Eve Only half as much as P. At this time d (P, P) Eve ) =1, meaning that this is not different from random guesses when Eve tries to decrypt ciphertext. When P Bob When the distance d is the same as P (P Bob P) =0, indicating that Bob can successfully decrypt ciphertext C.
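A tiny numeric illustration of this point (the values are chosen for the example, not taken from the patent):

```python
import torch

P     = torch.tensor([1., -1., 1., 1., -1.])
P_eve = -P                                 # every bit wrong: d(P, P_eve) = 2
d_max = torch.mean(torch.abs(P - P_eve))   # tensor(2.)
assert torch.equal(-P_eve, P)              # flipping every bit recovers P exactly
# d(P, P_eve) = 1 (half the bits wrong) is what random guessing achieves,
# so an ideal system drives the attackers' distance toward 1, not 2.
```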
The single sample loss function of decryptor Bob is defined as
L BAB ,P,K,Nonce)=d(P,B(θ B ,A(θ A ,P,K,Nonce),K,Nonce))。
This means that Bob decrypts plaintext P when Bob has plaintext P, key K and digital Nonce Bob How much was wrong. Next, define Bob's loss function as:
Figure BDA0003989568630000071
Similarly, single-sample loss functions are defined for Eve1 and Eve2 and extended to loss functions over the distribution:

$L_{E1}(\theta_A, \theta_{E1}, P, K, Nonce) = d\big(P,\, E1(\theta_{E1},\, A(\theta_A, P, K, Nonce))\big)$

$L_{E1}(\theta_A, \theta_{E1}) = \mathbb{E}_{P,K,Nonce}\big[L_{E1}(\theta_A, \theta_{E1}, P, K, Nonce)\big]$

$L_{E2}(\theta_A, \theta_{E2}, P, K, Nonce) = d\big(P,\, E2(\theta_{E2},\, A(\theta_A, P, K, Nonce),\, Nonce)\big)$

$L_{E2}(\theta_A, \theta_{E2}) = \mathbb{E}_{P,K,Nonce}\big[L_{E2}(\theta_A, \theta_{E2}, P, K, Nonce)\big]$
The loss function is central to GAN optimization. Here the goal of the system is to minimize the distance between $P_{Bob}$ and the plaintext $P$ while driving $d(P, P_{Eve1})$ and $d(P, P_{Eve2})$ toward 1. Thus, the loss function of the whole system is defined as:

$L_{AB}(\theta_A, \theta_B) = L_B(\theta_A, \theta_B) - L_{E1}\big(\theta_A, O_{E1}(\theta_A)\big) - L_{E2}\big(\theta_A, O_{E2}(\theta_A)\big)$

Here,

$O_{E1}(\theta_A) = \operatorname{argmin}_{\theta_{E1}} L_{E1}(\theta_A, \theta_{E1})$

denotes the attacker Eve1 at its minimum loss value, and

$O_{E2}(\theta_A) = \operatorname{argmin}_{\theta_{E2}} L_{E2}(\theta_A, \theta_{E2})$

denotes the attacker Eve2 at its minimum loss value. The training objective can thus be expressed as

$(\theta_A^{*}, \theta_B^{*}) = \operatorname{argmin}_{\theta_A, \theta_B} L_{AB}(\theta_A, \theta_B).$
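This min/argmin structure is typically realized by alternating gradient updates. The following is a minimal sketch of one training round, assuming the four networks and three Adam optimizers (opt_ab over Alice's and Bob's parameters, opt_e1 and opt_e2 over each attacker's) already exist; all names are illustrative:

```python
import torch

def d(p, q):
    # L1 distance of step 2, averaged over bits and over the batch
    return torch.mean(torch.abs(p - q))

def train_round(alice, bob, eve1, eve2, opt_ab, opt_e1, opt_e2, P, K, Nonce):
    # Attackers step toward O_E1(θA), O_E2(θA): each minimizes its own
    # decryption loss with Alice frozen (detach blocks her gradients).
    C = alice(torch.cat([P, K, Nonce], dim=1)).detach()
    opt_e1.zero_grad()
    l_e1 = d(P, eve1(C))
    l_e1.backward()
    opt_e1.step()

    opt_e2.zero_grad()
    l_e2 = d(P, eve2(torch.cat([C, Nonce], dim=1)))
    l_e2.backward()
    opt_e2.step()

    # Alice and Bob then minimize L_AB = L_B - L_E1 - L_E2 against the
    # freshly updated attackers.
    opt_ab.zero_grad()
    c = alice(torch.cat([P, K, Nonce], dim=1))
    l_b = d(P, bob(torch.cat([c, K, Nonce], dim=1)))
    l_ab = l_b - d(P, eve1(c)) - d(P, eve2(torch.cat([c, Nonce], dim=1)))
    l_ab.backward()
    opt_ab.step()
    return l_b.item(), l_e1.item(), l_e2.item()
```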
Step 3, setting network structure and parameters:
Referring to FIG. 2, the neural network structure of the present invention includes 2 fully-connected layers (FC) and 6 one-dimensional convolutional layers (1-D convolutional layers). The first fully-connected layer receives the plaintext, key and other information as input: the inputs of Alice and Bob are the key K, plaintext P and Nonce; the input of attacker Eve1 is the ciphertext C, while the input of attacker Eve2 is the ciphertext C and the Nonce. The input/output parameters of the first fully-connected layer are shown in Table 1, those of the second fully-connected layer are (3N, 3N), and both fully-connected layers use the Sigmoid activation function.
Table 1. First fully-connected layer parameters of Alice, Bob, Eve1 and Eve2
[Table rendered as an image in the original publication.]
Table 2. Convolutional neural network parameters of Alice, Bob, Eve1 and Eve2
[Table rendered as an image in the original publication.]
The fully-connected layers are followed by 6 one-dimensional convolutional layers. In Table 2, the input channel counts of the six modules are 1, 2, 4, 8, 4, respectively. The kernel size is important for a convolutional layer, and the invention sets the convolution kernel size to 1. To increase the complexity of the ciphertext, the invention sets the strides to 1, 3, 1, and 1. Inspired by DCGAN, the invention adds Batch Normalization (BN) to the network layers and selects new activation functions: BN helps shape the gradient flow, improving training speed and stability; ReLU is selected as the activation function inside the network, and Tanh as the activation function of the last convolutional layer, as sketched below.
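The following is a minimal PyTorch sketch of one participant network under this description. Because Tables 1 and 2 survive only as images, the exact per-layer channel and stride values are assumptions chosen to be consistent with the partial lists in the text (kernel size 1, a single stride of 3, BN after each convolution in the DCGAN style); the class name Party is hypothetical:

```python
import torch.nn as nn

class Party(nn.Module):
    """One participant: 2 FC layers + 6 Conv1D layers (Step 3).

    in_bits is 3N for Alice/Bob (P,K,Nonce or C,K,Nonce concatenated),
    N for Eve1 (C) and 2N for Eve2 (C,Nonce concatenated)."""
    def __init__(self, in_bits, n):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_bits, 3 * n), nn.Sigmoid(),  # first FC layer
            nn.Linear(3 * n, 3 * n), nn.Sigmoid(),    # second FC layer is (3N, 3N)
        )
        chans   = [1, 2, 4, 8, 4, 4, 1]  # assumed channel boundaries for 6 layers
        strides = [1, 3, 1, 1, 1, 1]     # assumed strides; stride 3 shrinks 3N -> N
        layers = []
        for i in range(6):
            layers.append(nn.Conv1d(chans[i], chans[i + 1],
                                    kernel_size=1, stride=strides[i]))
            layers.append(nn.BatchNorm1d(chans[i + 1]))
            layers.append(nn.Tanh() if i == 5 else nn.ReLU())  # Tanh on last layer
        self.conv = nn.Sequential(*layers)

    def forward(self, x):
        h = self.fc(x).unsqueeze(1)     # (batch, 1, 3N): one input channel
        return self.conv(h).squeeze(1)  # (batch, N) ciphertext/plaintext estimate
```

With these assumed strides, the single stride-3 layer reduces the 3N-long feature map to the N-bit output required in step 4.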
Step 4, setting the system hyperparameters:
the invention selects a notebook Dell G7 which is provided with an Intel Core i9 CPU, a 32GB memory and an Injeida GeForce RTX2700 display card. The invention implements four networks Alice, bob, eve, eve2 that receive an N-bit random plaintext, a key K, and a digital Nonce, and generate an N-bit floating-point ciphertext. The super parameters of the experiment are as follows:
Plaintext: the plaintext in cryptography is typically a string of "0"s and "1"s. To better exploit the characteristics of the neural network, the invention converts it to floating-point numbers, i.e. "0" becomes "-1.0" and "1" becomes "1.0". The plaintext bit length N = 32, 64, 128 or 256.
Key K and Nonce: they are identical to the parameters of the plain text.
Batch Size (Batch Size): it is typically set to 512, 1024 or 2048. In this experiment, this batch size setting will have a slight impact on the results, as the network structure is set more precisely. As a demonstration of the experimental result, this value was set to 2048.
Ciphertext (cipert): it is identical to the plaintext in length and consists of floating point numbers with values between-1.0 and 1.0.
Initialization (Initialization): by adopting the Xavier initialization, the variance of the input and the output can be kept consistent, all output values are prevented from tending to 0, and the information flow in the network is enabled to be better.
Iteration number: it represents the number of training steps, typically set to 25000.
Optimizer: PyTorch's Adam is adopted, which is simple to implement and computationally efficient. The learning rate is set to 0.0008; if the loss value plateaus, the learning rate automatically decays to 10% of itself (see the setup sketch below).
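A minimal setup sketch for these hyperparameters; the ReduceLROnPlateau patience value is an assumption, since the patent only states that the learning rate decays to 10% when the loss plateaus:

```python
import torch
from torch import nn, optim

N, BATCH = 256, 2048               # bit length and demonstration batch size

def sample_batch(batch=BATCH, n=N):
    # Random plaintext, key and Nonce in the ±1.0 encoding ("0" -> -1.0, "1" -> 1.0)
    return tuple(torch.randint(0, 2, (batch, n)).float() * 2 - 1
                 for _ in range(3))

def init_weights(m):
    # Xavier initialization keeps input/output variance consistent
    if isinstance(m, (nn.Linear, nn.Conv1d)):
        nn.init.xavier_uniform_(m.weight)

net = nn.Linear(3 * N, N)          # stand-in for one of the four networks
net.apply(init_weights)
opt = optim.Adam(net.parameters(), lr=0.0008)
sched = optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.1, patience=50)
```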
Because the calculation platform does not influence the experimental result, the invention uses PyTorch to carry out experimental verification, and the experimental result is as follows:
To verify the correctness of decryption, the invention mainly measures the decryption error rate on the ciphertext. The ideal result is that Bob's decryption error rate is close to 0 while the decryption error rates of both attackers are close to 1. As illustrated in FIG. 3, the decryption error rates of Bob, Eve1 and Eve2 are compared for lengths N = 32, 64, 128 and 256 bits. Bob's error rate drops to near 0 in fewer than 100 epochs (1 epoch = 10 steps) and thereafter remains smoothly near 0. The two attackers Eve1 and Eve2, by contrast, never achieve effective decryption: their error rates remain around 1.0 almost throughout, meaning their decryption is essentially equivalent to random guessing.
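A sketch of the error metric, under the assumption (consistent with the discussion of d in Step 2) that the plotted decryption error rate is the per-bit L1 distance between the true and recovered plaintexts:

```python
import torch

def decryption_error(P, P_hat):
    """Per-bit L1 distance d(P, P_hat) used as the decryption error rate.

    With the ±1.0 encoding and saturated outputs: ~0 means every bit is
    correct (Bob's target), ~1 matches random guessing (the attackers'
    plateau in FIG. 3), and 2 would mean every bit inverted."""
    return torch.mean(torch.abs(P - P_hat)).item()
```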
Table 3 shows the final decryption error rates for Bob, eve1 and Eve2 for different lengths. It can be seen that as the communication length increases, bob's error rate is always close to 0, and both Eve1 and Eve 2's error rates are close to 0.99. Experimental data shows that the present invention can ensure the correctness of communication even if the communication length is increased.
Table 3. Final decryption error rates of Bob, Eve1 and Eve2 at different lengths
[Table rendered as an image in the original publication.]

Claims (2)

1. A method for constructing a non-deterministic symmetric encryption system based on deep learning, characterized by comprising the following steps:
step 1, constructing the system architecture: the system architecture comprises four neural networks, namely the encrypting party Alice, the decrypting party Bob, the attacker Eve1 and the attacker Eve2;
wherein:
Alice and Bob share the same key K, while Eve1 and Eve2 do not know it;
the Nonce represents a number that is used only once and is known to the receiver Bob and the attacker Eve2;
Alice outputs the ciphertext C given the plaintext P, the key K and the Nonce as inputs;
Bob decrypts the ciphertext to obtain $P_{Bob}$ given the key K, the Nonce and the ciphertext C as inputs;
Eve1 can only obtain the ciphertext C, while Eve2 can obtain the ciphertext C and the Nonce;
step 2, determining the generative adversarial optimization target:
the distance function

$d(P, P') = \frac{1}{N}\sum_{i=1}^{N}\left|P_i - P'_i\right|$

is used to measure the difference between plaintexts,
wherein: N represents the length of the plaintext;
the single-sample loss function of Bob is defined as:

$L_B(\theta_A, \theta_B, P, K, Nonce) = d\big(P,\, B(\theta_B,\, A(\theta_A, P, K, Nonce),\, K, Nonce)\big),$

wherein: $\theta_A, \theta_B$ are the neural network parameters, $A(\theta_A, P, K, Nonce)$ represents the output of the neural network Alice, and $B(\theta_B,\, A(\theta_A, P, K, Nonce),\, K, Nonce)$ represents the output of the neural network Bob;
Bob's cost function is the expectation of the single-sample loss:

$L_B(\theta_A, \theta_B) = \mathbb{E}_{P,K,Nonce}\big[L_B(\theta_A, \theta_B, P, K, Nonce)\big];$

the single-sample loss functions and cost functions of Eve1 and Eve2 are:

$L_{E1}(\theta_A, \theta_{E1}, P, K, Nonce) = d\big(P,\, E1(\theta_{E1},\, A(\theta_A, P, K, Nonce))\big)$

$L_{E1}(\theta_A, \theta_{E1}) = \mathbb{E}_{P,K,Nonce}\big[L_{E1}(\theta_A, \theta_{E1}, P, K, Nonce)\big]$

$L_{E2}(\theta_A, \theta_{E2}, P, K, Nonce) = d\big(P,\, E2(\theta_{E2},\, A(\theta_A, P, K, Nonce),\, Nonce)\big)$

$L_{E2}(\theta_A, \theta_{E2}) = \mathbb{E}_{P,K,Nonce}\big[L_{E2}(\theta_A, \theta_{E2}, P, K, Nonce)\big]$

wherein: $\theta_{E1}, \theta_{E2}$ are the neural network parameters, and $E1(\cdot)$ and $E2(\cdot)$ represent the outputs of the neural networks Eve1 and Eve2, respectively;
the loss function of the entire generative adversarial system can be defined as:

$L_{AB}(\theta_A, \theta_B) = L_B(\theta_A, \theta_B) - L_{E1}\big(\theta_A, O_{E1}(\theta_A)\big) - L_{E2}\big(\theta_A, O_{E2}(\theta_A)\big),$

where $O_{Ei}(\theta_A) = \operatorname{argmin}_{\theta_{Ei}} L_{Ei}(\theta_A, \theta_{Ei})$ denotes the attacker at its minimum loss value;
step 3, setting parameters of the neural network:
the four participants Alice, Bob, Eve1 and Eve2 in the system adopt similar neural network structures, each comprising 2 fully-connected layers (FC) and 6 one-dimensional convolutional layers (1-D Convolutional Layer); the first fully-connected layer serves as the input layer, where the inputs of the encrypting party Alice and the decrypting party Bob are the key K, the plaintext P and the Nonce, the input of the attacker Eve1 is the ciphertext C, and the input of the attacker Eve2 is the ciphertext C and the Nonce; the input/output parameters of the second fully-connected layer are (3N, 3N), and both fully-connected layers use the Sigmoid activation function; the fully-connected layers are followed by 6 one-dimensional convolutional layers with BN (Batch Normalization) layers, and ReLU and Tanh are selected as activation functions;
step 4, setting the system hyperparameters:
the four neural networks Alice, Bob, Eve1 and Eve2 receive a random plaintext, key and Nonce of N bits each and generate an N-bit floating-point ciphertext; the bit length N = 32, 64, 128 or 256 for the plaintext, key and ciphertext; the batch size is set to 512, 1024 or 2048; Xavier initialization is adopted, and PyTorch's Adam optimizer is selected.
2. The method for constructing a non-deterministic symmetric encryption system based on deep learning according to claim 1, wherein: whether or not the key K is reused, the symmetric encryption system is non-deterministic, and even when the same plaintext is encrypted with the same key, different ciphertexts are obtained.
CN202211577688.0A 2022-12-09 2022-12-09 Method for constructing non-deterministic symmetric encryption system based on deep learning Pending CN116015762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211577688.0A CN116015762A (en) 2022-12-09 2022-12-09 Method for constructing non-deterministic symmetric encryption system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211577688.0A CN116015762A (en) 2022-12-09 2022-12-09 Method for constructing non-deterministic symmetric encryption system based on deep learning

Publications (1)

Publication Number Publication Date
CN116015762A true CN116015762A (en) 2023-04-25

Family

ID=86031724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211577688.0A Pending CN116015762A (en) 2022-12-09 2022-12-09 Method for constructing non-deterministic symmetric encryption system based on deep learning

Country Status (1)

Country Link
CN (1) CN116015762A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118214540A (en) * 2024-04-09 2024-06-18 重庆大学 Safety communication method and system based on antagonistic neural network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination