CN118214540A - Secure communication method and system based on adversarial neural network - Google Patents

Secure communication method and system based on adversarial neural network

Info

Publication number
CN118214540A
CN118214540A
Authority
CN
China
Prior art keywords
plaintext
ciphertext
neural network
length
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410430617.0A
Other languages
Chinese (zh)
Inventor
胡宇扬
胡春强
秦郅涵
杨皓波
江佳艺
蔡斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202410430617.0A priority Critical patent/CN118214540A/en
Publication of CN118214540A publication Critical patent/CN118214540A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/06 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for block-wise or stream coding, e.g. DES systems or RC4; Hash functions; Pseudorandom sequence generators
    • H04L9/0618 Block ciphers, i.e. encrypting groups of characters of a plain text message using fixed encryption transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/001 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using chaotic signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a secure communication method and system based on an adversarial neural network. The method comprises the following steps: the transmitting end converts the plaintext to be transmitted into a plurality of plaintext blocks; adopting a block cipher working mode, the plurality of plaintext blocks are sequentially encrypted using an encoder of an adversarial neural network model, wherein the encoder adopts a multi-layer convolutional neural network; the ciphertext blocks and the plaintext length representation are combined to obtain the ciphertext, which is transmitted. The receiving end adopts a block cipher working mode and uses a decoder of the adversarial neural network model to sequentially decrypt the plurality of ciphertext blocks to obtain the plaintext block corresponding to each ciphertext block, wherein the decoder adopts a multi-layer convolutional neural network; the decrypted plaintext blocks are combined to generate a combined plaintext; and the portion corresponding to the length of the plaintext to be transmitted is extracted from the combined plaintext as the plaintext. Because the encoder, decoder and classifier of the application all adopt convolutional neural networks, the model training difficulty is reduced, the capability of processing text sequences is stronger, the unpredictability of neural networks is fully utilized, and communication security is strengthened.

Description

Secure communication method and system based on adversarial neural network
Technical Field
The invention relates to the technical field of communication security, and in particular to a secure communication method and system based on an adversarial neural network.
Background
In 2016, a Google team proposed adversarial neural cryptography (ANC) in the paper "Learning to protect communications with adversarial neural cryptography". Adversarial neural cryptography is a cross-application of neural networks and cryptography that enables secure communication between different parties in a manner resilient to almost any attack. In that paper, the adversarial neural network of adversarial neural cryptography is shown in fig. 1; encryption is performed by the adversarial neural network without relying on other human knowledge, the aim being to let the machine learn encryption on its own.
In fig. 1, Alice is the encoder, Bob is the decoder, and Eve is the classifier; all three have similar network structures. Eve is regarded as a ciphertext-only attacker that intercepts the intermediate content of Alice and Bob's communication and attempts to crack it, while Alice and Bob jointly learn to resist Eve's attacks during normal communication. Since Alice and Bob are trained together and share a secret key, the two networks are more closely coupled, and countermeasures against Eve are built into their loss functions, so they can communicate normally while Eve cannot break the cipher. During communication, a sender encrypts a message with the trained encoder (Alice) to obtain a ciphertext, and a receiver decrypts the ciphertext with the trained decoder (Bob) to recover the message. However, when the adversarial neural network shown in fig. 1 is trained by deep learning methods, there is considerable bit loss: the bit accuracy of plaintext recovery is about 98%, and garbled output occurs with a certain probability, which can cause great trouble for normal communication.
Building on the Google team's work, in 2018 Coutinho et al. designed, in the paper "Learning perfectly secure cryptography to protect communications with adversarial neural cryptography", a secure communication model CPA-ANC under chosen-plaintext attack; its adversarial neural network is shown in fig. 2. Chosen Plaintext Attack is abbreviated CPA. In fig. 2, the encoder (Alice), decoder (Bob) and classifier (Eve) are all fully connected feed-forward neural networks; the tasks of the encoder (Alice) and decoder (Bob) are unchanged, whereas Eve acts as a classifier: it randomly sends one of two plaintexts to the encoder (Alice), intercepts the output, and determines from which plaintext that result was encrypted. This network implements a One-Time Pad (OTP) method for the first time, and the classifier (Eve) is regarded as a chosen-plaintext attacker with higher attack power, which greatly improves the security of the adversarial neural network and alleviates the bit-loss problem of the Google team's network. But after the network structure is simplified, it is more easily affected by gradient attacks; meanwhile, the network's demands on data and computing power increase further, and there is a risk of vanishing gradients, so the learning cost rises sharply. In addition, with the development of the internet, electronic document records on computers have replaced conventional paper records, a large number of text sequences exist in daily communications, and the encoder, decoder and classifier of the adversarial neural network shown in fig. 2 have weak capability for processing text sequences because they adopt feed-forward neural networks.
Disclosure of Invention
The invention aims to solve the technical problems that the adversarial neural network structure proposed by Coutinho in the prior art is easily affected by gradient attacks, its learning cost is high, and its capability of processing text sequences is weak because the encoder, decoder and classifier adopt feed-forward neural networks, and provides a secure communication method and system based on an adversarial neural network.
In order to achieve the above object of the present invention, according to a first aspect of the present invention, there is provided a secure communication method based on an adversarial neural network, comprising: the transmitting end performs: generating a plaintext length representation according to the length of the plaintext to be transmitted, and converting the plaintext to be transmitted into a plurality of plaintext blocks; adopting a block cipher working mode, sequentially encrypting the plurality of plaintext blocks using an encoder of a trained adversarial neural network model to obtain a ciphertext block corresponding to each plaintext block, wherein the encoder adopts a multi-layer convolutional neural network; combining the ciphertext blocks and the plaintext length representation to obtain a ciphertext, and transmitting the ciphertext; the receiving end performs: decomposing the plurality of ciphertext blocks and the plaintext length representation from the received ciphertext; obtaining the length of the plaintext to be transmitted based on the plaintext length representation; adopting a block cipher working mode, sequentially decrypting the plurality of ciphertext blocks using a decoder of the trained adversarial neural network model to obtain the plaintext block corresponding to each ciphertext block, wherein the decoder adopts a multi-layer convolutional neural network; combining the decrypted plaintext blocks to generate a combined plaintext; and extracting the part corresponding to the length of the plaintext to be transmitted from the combined plaintext as the plaintext.
In order to achieve the above object of the present invention, according to a second aspect of the present invention, there is provided a secure communication system for the secure communication method based on an adversarial neural network according to the first aspect of the present invention, comprising a transmitting end, a receiving end and a central server that are connected to each other for communication.
The present invention improves the adversarial neural network proposed by Coutinho et al.: the encoder, decoder and classifier no longer use feed-forward neural networks but convolutional neural networks, which raises the network complexity, allows the adversarial neural network model to converge better during training, makes vanishing gradients less likely, reduces the model training difficulty and improves model accuracy; the convolutional neural networks also give the model a stronger capability for processing text sequences. The network parameters of the adversarial neural network model are obtained by training, the unpredictability of the neural network is fully exploited, and communication security is improved.
In addition, the invention generates an original digest of the plaintext to be transmitted at the transmitting end, generates a verification digest at the receiving end, and verifies the consistency of the original digest and the verification digest at the receiving end, which improves communication security and strengthens the digest's ability to defend against forward attacks.
In addition, in order to ensure that data is neither leaked nor rendered undecryptable by dynamic adjustment of the adversarial neural network model, the invention combines ideas from the PGP, KDC and TCP protocols in traditional cryptography, specifically addresses the problems faced by an adversarial neural cryptographic system, and designs a network update protocol.
Drawings
FIG. 1 is a block diagram of a first prior art adversarial neural network model;
FIG. 2 is a block diagram of a second prior art adversarial neural network model;
FIG. 3 is a flow chart of a secure communication method based on an adversarial neural network according to a preferred embodiment of the present invention;
FIG. 4 is a flow chart of a secure communication method based on an adversarial neural network according to a further preferred embodiment of the present invention;
FIG. 5 is an overall framework diagram of a secure communication system in accordance with a preferred embodiment of the present invention;
FIG. 6 is a schematic diagram of a digest generation flow in accordance with a preferred embodiment of the present invention;
FIG. 7 is a network update protocol diagram of a secure communication system in accordance with a preferred embodiment of the present invention;
Fig. 8 is a training effect diagram of an adversarial neural network model provided in a preferred embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and defined, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be, for example, mechanical or electrical, and two elements may communicate with each other directly or indirectly through intermediaries, as would be understood by those skilled in the art from the specific meaning of the terms in context.
The invention discloses a secure communication method based on an adversarial neural network, which is implemented on a secure communication system comprising a transmitting end, a receiving end and a central server that are connected to one another for communication, as shown in fig. 5: the transmitting end communicates with the receiving end and the central server, and the central server communicates with both the transmitting end and the receiving end. The transmitting end, i.e., the sender, may specifically be electronic equipment operated by the sender; the receiving end, i.e., the receiver, may specifically be electronic equipment operated by the receiver; the electronic equipment may be an intelligent terminal, a tablet computer, a personal desktop computer, a server, or the like. The central server is preferably, but not limited to, a cloud server or a computer cluster.
The central server trains the adversarial neural network model and the self-encoder separately. The process by which the central server trains the adversarial neural network model includes:
Step 1: construct the adversarial neural network. The network structures of the encoder, decoder and classifier are constructed. Preferably, the encoder and the decoder both use a multi-layer convolutional neural network, specifically a four-layer convolutional neural network. The classifier adopts a single-layer convolutional neural network. The encoder, decoder and classifier are connected according to the adversarial neural network framework designed by Coutinho et al., i.e., the connection relationship shown in fig. 2, to form the adversarial neural network. Convolutional neural networks are adopted innovatively, improving accuracy and security while retaining the characteristics of convolutional networks and providing a stronger text sequence processing capability.
As shown in fig. 2, the input terminal of the encoder (Alice) is connected to a first output terminal of the classifier (Eve), the output terminal of the encoder (Alice) is connected to the input terminal of the decoder (Bob), and the output terminal of the decoder (Bob) outputs the plaintext decrypted by the decoder. The classifier (Eve) intercepts the ciphertext output by the encoder (Alice) from the communication link between the encoder (Alice) and the decoder (Bob) and decrypts the ciphertext to obtain the plaintext decrypted by the classifier.
Step 2: construct a sample pool comprising a plurality of binary character strings of one block length each. The block length needs to match the computing power, hardware resources, etc., of the hardware device executing the secure communication method provided by the invention, and is preferably, but not limited to, 32 bits or 64 bits.
A key generator is provided for generating the encryption key and transmitting it synchronously to the encoder and decoder.
Step 3: train the adversarial neural network to obtain the adversarial neural network model. During adversarial network training, taking a block length of 64 bits as an example, the classifier selects two 64-bit samples from the sample pool, randomly selects one of them as the plaintext, and sends the plaintext to the encoder. The key generator generates an encryption key, also of the block length. The plaintext and the key are spliced into a 128-bit tensor, recorded as the first tensor, and the first tensor is input into the encoder for encryption to obtain a 64-bit training ciphertext; the classifier decrypts the training ciphertext to obtain the classifier plaintext plain_Eve. The training ciphertext and the key are spliced into a tensor, recorded as the second tensor. The second tensor is input into the decoder for decoding to obtain the decoder plaintext plain_Bob. The classifier loss L_E and the encoding-decoding loss L_AB are calculated respectively; the network parameters of the classifier are adjusted according to the classifier loss, and the network parameters of the encoder and decoder are adjusted according to the encoding-decoding loss.
Wherein:

L_E = d(plain_Eve, plaintext)

L_AB = d(plain_Bob, plaintext) − d(plain_Eve, plaintext)

d(plain_Eve, plaintext) represents the Manhattan distance between the classifier plaintext plain_Eve and the plaintext plaintext. d(plain_Bob, plaintext) represents the Manhattan distance between the decoder plaintext plain_Bob and the plaintext plaintext.
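These losses can be sketched in plain Python. The Manhattan distances follow the definitions above; the exact way the encoding-decoding loss combines them is an assumption (Bob's reconstruction distance minus Eve's distance, so that Alice and Bob are rewarded when Eve fails):

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two equal-length bit sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def classifier_loss(plain_Eve, plaintext):
    """L_E = d(plain_Eve, plaintext), as defined above."""
    return manhattan(plain_Eve, plaintext)

def codec_loss(plain_Bob, plain_Eve, plaintext):
    """Assumed form of L_AB: Bob should reconstruct the plaintext while
    Eve should fail, so Eve's distance enters with a negative sign."""
    return manhattan(plain_Bob, plaintext) - manhattan(plain_Eve, plaintext)
```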
During training, a mini-batch training method is adopted: the optimizer is Adam, mini_batch is set to 128, the number of training rounds is 30000, the learning rate is 0.0008, and Eve is trained twice each time Alice-Bob is trained once. Meanwhile, the batch is replaced every two hundred training rounds. Through continuous training, the goal is finally reached that the classifier cannot recover the original plaintext from the ciphertext it intercepts from the encoder, while the encoder and decoder communicate securely. The trained adversarial neural network model is issued to the transmitting end and the receiving end; the transmitting end encrypts plaintext with the encoder and the receiving end decrypts it with the decoder, guaranteeing communication security. The whole cryptographic system provides secure communication by means of the dynamic change of the adversarial neural network and the unpredictability of that change, and encryption and decryption of plaintext are realized using the unpredictability of the convolutional neural network.
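The alternating schedule described above (Eve trained twice per Alice-Bob step, one batch swap every two hundred rounds) can be sketched as a generator; the action names are illustrative, not part of the patent:

```python
def training_schedule(rounds=30000, eve_steps=2, swap_every=200):
    """Yield (round, action) pairs for the schedule described above:
    one Alice-Bob update per round, eve_steps Eve updates per round,
    and a sample-batch swap every swap_every rounds."""
    for r in range(1, rounds + 1):
        if r % swap_every == 0:
            yield r, "swap_batch"
        yield r, "train_alice_bob"
        for _ in range(eve_steps):
            yield r, "train_eve"
```

Iterating the generator drives the actual parameter updates; the ratio of "train_eve" to "train_alice_bob" events is 2:1, as the text specifies.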
The process of the central server training the self-encoder is as follows:
a plurality of sets of third tensors of block length, in one example 64bits, are randomly generated.
A two-layer self-encoder is constructed, the first layer being the encoding layer and the second layer being the reconstruction layer. The encoding layer generates a fourth tensor of 2 times of the block length based on the third tensor, and the decoding layer reduces the fourth tensor to the block length to obtain a reconstructed tensor of the third tensor.
And training the self-encoder by utilizing a plurality of groups of third tensors, taking the mean square error as a loss function, and continuously adjusting network parameters of the self-encoder according to the loss function value until the training is finished to obtain the trained self-encoder. And the central server transmits the trained self-encoder to the transmitting end and the receiving end.
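A minimal NumPy sketch of the two-layer self-encoder's shapes follows. The random weights and the tanh activation are placeholders; the patent only fixes the block length, the 2x expansion in the encoding layer, and the mean-squared-error loss:

```python
import numpy as np

BLOCK = 64                                   # block length from the text
rng = np.random.default_rng(0)

# Placeholder weights; a real implementation would train these by
# minimizing the mean squared error below.
W_enc = rng.standard_normal((BLOCK, 2 * BLOCK)) * 0.1   # encoding layer
W_rec = rng.standard_normal((2 * BLOCK, BLOCK)) * 0.1   # reconstruction layer

def encode(third_tensor):
    """Encoding layer: expand the block-length tensor to twice its length."""
    return np.tanh(third_tensor @ W_enc)     # the 'fourth tensor'

def reconstruct(fourth_tensor):
    """Reconstruction layer: reduce back to the block length."""
    return fourth_tensor @ W_rec

x = rng.standard_normal(BLOCK)               # one 'third tensor'
x_hat = reconstruct(encode(x))
mse = np.mean((x - x_hat) ** 2)              # loss minimized during training
```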
In a preferred embodiment, a flow diagram of the secure communication method based on the adversarial neural network disclosed by the invention is shown in fig. 3, and the secure communication method comprises the following steps:
the transmitting end performs:
Step S11: generate a plaintext length representation according to the length of the plaintext to be transmitted, and convert the plaintext to be transmitted into a plurality of plaintext blocks.
The plaintext to be sent may be text. First, the text to be sent is converted into ASCII format; the ASCII codes are then converted into a corresponding binary string, denoted the first string.
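The conversion above can be sketched as follows (8 bits per character is an assumption consistent with standard ASCII):

```python
def text_to_first_string(text: str) -> str:
    """Convert text to its ASCII codes, then to the corresponding
    binary 'first string' (8 bits per character)."""
    return "".join(format(ord(ch), "08b") for ch in text)
```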
Generating a plaintext length representation according to the length of the plaintext to be transmitted comprises: recording the first string length plain_len corresponding to the text to be transmitted, and converting plain_len into a binary number with a set number of bits; this binary number is the plaintext length representation. The number of bits is preferably, but not limited to, 8, 10 or 12 bits. The set number of bits is determined by the sender and the receiver through prior negotiation.
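Generating and reading the plaintext length representation can be sketched as follows; the 8-bit default width is one of the negotiable options named above:

```python
def length_representation(plain_len: int, width: int = 8) -> str:
    """Encode the first-string length plain_len as a binary number
    with the pre-negotiated number of bits."""
    if plain_len >= 2 ** width:
        raise ValueError("length does not fit the negotiated width")
    return format(plain_len, f"0{width}b")

def read_length(representation: str) -> int:
    """Recover the first-string length at the receiving end by
    converting the representation back to a decimal value."""
    return int(representation, 2)
```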
Converting the plaintext to be transmitted into a plurality of plaintext blocks comprises:

Padding the first string corresponding to the text to be sent with 0s at a preset position. The preset position must be negotiated in advance by the transmitting end and the receiving end, and may be the front end, the tail end or the middle of the first string; for example, 0s may be prepended to the first string, inserted in its middle, or appended at its end. Preferably, 0s are appended at the end of the first string. After padding, the length of the first string is an integer multiple of the block length (an integer multiple of 64 in one example). The padded first string is then divided sequentially into a plurality of plaintext blocks, each of the block length.
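Padding at the end (the preferred position above) and splitting into blocks can be sketched as:

```python
def to_plaintext_blocks(first_string: str, block_len: int = 64) -> list:
    """Append 0s so the length is an integer multiple of block_len
    (end-padding, the preferred position), then split the padded
    string into blocks of block_len bits each."""
    pad = (-len(first_string)) % block_len
    padded = first_string + "0" * pad
    return [padded[i:i + block_len] for i in range(0, len(padded), block_len)]
```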
Step S12: a block cipher working mode (CBC) is adopted, and the plurality of plaintext blocks are sequentially encrypted using the encoder of the trained adversarial neural network model to obtain the ciphertext block corresponding to each plaintext block, wherein the encoder adopts a multi-layer convolutional neural network, specifically a four-layer convolutional neural network.
The specific implementation process of step S12 is as follows:
For the first plaintext block, the ciphertext block C_1 corresponding to the first plaintext block M_1 is obtained using the following formula:

C_1 = Alice(M_1 ⊕ IV, Key)

wherein IV represents the initialization vector, negotiated in advance by the transmitting end and the receiving end; ⊕ denotes bitwise XOR; Key represents the encryption key; and Alice(·,·) represents the encoder of the trained adversarial neural network model.
For the second plaintext block and the plaintext blocks after it, let the index be i', i' ≥ 2; the corresponding ciphertext block is obtained according to the following formula:

C_i' = Alice(M_i' ⊕ C_{i'−1}, Key)

wherein C_{i'−1} represents the ciphertext block corresponding to the (i'−1)-th plaintext block and M_i' represents the i'-th plaintext block.
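The CBC-style chaining of step S12 can be sketched with the encoder abstracted as a callable; the real Alice is the trained convolutional network, and the toy bit-string types are assumptions:

```python
def xor_bits(a: str, b: str) -> str:
    """Bitwise XOR of two equal-length binary strings."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

def cbc_encrypt(plain_blocks, iv, key, encoder):
    """Each plaintext block is XORed with the previous ciphertext
    block (the IV for the first block) and then passed, with the key,
    to the encoder, following standard CBC chaining."""
    cipher_blocks, prev = [], iv
    for m in plain_blocks:
        c = encoder(xor_bits(m, prev), key)
        cipher_blocks.append(c)
        prev = c
    return cipher_blocks
```

With a toy encoder such as `lambda x, k: xor_bits(x, k)` the chaining is easy to check; in practice the trained network replaces it.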
Step S13: combine the ciphertext blocks and the plaintext length representation to obtain the ciphertext, and transmit the ciphertext. The ciphertext C is represented as:

C = (C_1, C_2, ..., C_n, S_plain_len)

wherein S_plain_len represents the plaintext length representation corresponding to the first string length plain_len.
In the secure communication method of the present embodiment, the receiving end performs:
Step S21: decompose the plurality of ciphertext blocks and the plaintext length representation from the received ciphertext. In one example, the set number of bits corresponding to the plaintext length representation is taken from the end of the ciphertext as the plaintext length representation.
Step S22, obtaining the plaintext length to be transmitted based on the plaintext length representation. And converting the plaintext length representation into a decimal value to serve as the length of the plaintext to be transmitted.
Step S23: adopting the block cipher working mode, sequentially decrypt the plurality of ciphertext blocks using the decoder of the trained adversarial neural network model to obtain the plaintext block corresponding to each ciphertext block, wherein the decoder adopts a multi-layer convolutional neural network, preferably a four-layer convolutional neural network.
In this embodiment, block cipher working mode (CBC mode) decryption is adopted. For the first ciphertext block, the corresponding plaintext block M_1 is obtained by the following formula:

M_1 = Bob(C_1, Key) ⊕ IV

wherein IV is the initialization vector used by the encoder at the transmitting end. For the i'-th ciphertext block, i' ≥ 2, the decryption formula for obtaining the corresponding plaintext block is:

M_i' = Bob(C_i', Key) ⊕ C_{i'−1}

Bob(·,·) represents the decoder of the trained adversarial neural network model, and Key represents the encryption key used by the encoder. By analogy, the plaintext blocks corresponding to all ciphertext blocks are decrypted respectively.
Step S24: combine the plurality of plaintext blocks obtained by decryption to generate the combined plaintext M; the combined plaintext M is the zero-padded plaintext, M = (M_1, M_2, ..., M_n).
Step S25: extract from the combined plaintext the part corresponding to the length of the plaintext to be transmitted as the plaintext, i.e., delete the padded-0 part from the combined plaintext and take the remaining part as the plaintext. Specifically, according to the negotiation with the transmitting end, the receiving end deletes the bits at the preset padding position in the combined plaintext, and the part remaining after deletion serves as the plaintext. In one example, when the sender and receiver agree to pad with 0s at the end of the plaintext, the first plain_len bits are extracted from the combined plaintext as the plaintext.
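The receiving-end steps S23 through S25 mirror the sender's chaining; the decoder is abstracted as a callable standing in for the trained Bob network, and end-padding is assumed:

```python
def xor_bits(a: str, b: str) -> str:
    """Bitwise XOR of two equal-length binary strings."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

def cbc_decrypt(cipher_blocks, iv, key, decoder):
    """The decoder output for the first ciphertext block is XORed with
    the IV; each later block is XORed with the preceding ciphertext
    block instead, following standard CBC chaining."""
    plain_blocks, prev = [], iv
    for c in cipher_blocks:
        plain_blocks.append(xor_bits(decoder(c, key), prev))
        prev = c
    return plain_blocks

def extract_plaintext(plain_blocks, plain_len):
    """Combine the decrypted blocks and, assuming end-padding, keep
    only the first plain_len bits as the plaintext."""
    return "".join(plain_blocks)[:plain_len]
```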
Encryption can be regarded as a translation in cryptography, and certain pattern characteristics still exist in the encryption result, which can be extracted and attacked by machine learning. Therefore, in this embodiment, in order to improve the resistance of the ciphertext to forward attacks, it is further preferred that the ciphertext obtained by the transmitting end is additionally confused using a first chaotic algorithm, which enhances the randomness of the ciphertext, weakens its pattern characteristics and improves its confusion, so as to better cope with forward attacks. Specifically, in this embodiment, step S13 executed by the transmitting end is modified to:
Step S131: combine all ciphertext blocks and the plaintext length representation to obtain an intermediate encryption vector, represented by the letter C: C = (C_1, C_2, ..., C_n, S_plain_len);

Step S132: generate a chaotic matrix CTM_matrix using the first chaotic algorithm. The first chaotic algorithm is preferably, but not limited to, an existing Tent map, Logistic map, Cubic map, Chebyshev map, Piecewise map, Sinusoidal map, Sine map, ICMIC map, Circle map or Bernoulli map. The chaotic matrix has shape C_len × 1, where C_len represents the length of the intermediate encryption vector.
In step S133, the chaos matrix is multiplied by the intermediate encryption vector to obtain ciphertext C'.
Accordingly, step S21 executed by the receiving end is modified as follows:
Step S211: generate the chaotic matrix again using the first chaotic algorithm, and compute the inverse matrix of the chaotic matrix. Since the chaotic matrix is multiplied by the intermediate encryption vector to obtain the ciphertext C', the ciphertext C' is a square matrix of size C_len × C_len; C_len can therefore be obtained from the size of the ciphertext C', and the chaotic matrix generated by the receiving end is likewise of size C_len × 1.

Step S212: multiply the inverse matrix by the ciphertext to obtain the intermediate encryption vector.
Step S213, decomposing all ciphertext blocks and plaintext length representations from the intermediate encryption vector. In one example, the plaintext length representation is resolved from the end of the ciphertext.
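The matrix shapes in steps S131 through S213 are ambiguous, so the following is only one plausible reading: the C_len chaotic values are placed on the diagonal of a C_len × C_len matrix, which makes "the inverse matrix of the chaotic matrix" well defined whenever every value is nonzero:

```python
import numpy as np

def obfuscate(intermediate_vec, chaos_values):
    """Multiply the intermediate encryption vector by the chaotic
    matrix (diagonal reading, an assumption) to get the ciphertext."""
    return np.diag(chaos_values) @ np.asarray(intermediate_vec, dtype=float)

def deobfuscate(ciphertext_vec, chaos_values):
    """Multiply by the inverse of the regenerated chaotic matrix to
    recover the intermediate encryption vector."""
    inv = np.linalg.inv(np.diag(chaos_values))
    return inv @ np.asarray(ciphertext_vec, dtype=float)
```

Both ends regenerate the same chaotic values from the negotiated initial value and coefficient, so the receiving end can invert the transformation exactly.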
In this embodiment, in order to improve randomness of the chaotic matrix and improve forward attack resistance of the ciphertext, further preferably, the process of generating the chaotic matrix by the transmitting end and the receiving end by using the first chaotic algorithm includes:
Acquiring the length C_len of the intermediate encryption vector;
Randomly selecting a value as an initial value of a first variable X in a first variable threshold interval X epsilon [0,1], and iteratively executing the following formula C_len for a plurality of times, wherein each iteration obtains a first variable value:
where k denotes the iteration number of the first chaotic algorithm, Q denotes the first coefficient, and X(k) denotes the value of the first variable x obtained in the k-th iteration of the first chaotic algorithm;
The first variable values obtained from the C_len iterations are combined to form a chaotic matrix of size C_len x 1.
In this embodiment, the first variable initial value and the first coefficient need to be negotiated in advance by the transmitting end and the receiving end.
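The iteration above can be sketched as follows. The patent's exact formula is given only as an image; this sketch substitutes the Logistic map, one of the maps the embodiment lists, with Q playing the role of the pre-negotiated first coefficient, so treat the map itself as an assumption.

```python
# Sketch of step S132 / the negotiated chaotic-matrix generation.
# The logistic map here stands in for the patent's (unshown) iteration formula.

def chaotic_vector(x0, q, c_len):
    """Iterate the map C_len times; each iterate is one entry of the
    C_len x 1 chaotic matrix."""
    values = []
    x = x0
    for _ in range(c_len):
        x = q * x * (1.0 - x)   # logistic-map stand-in, x stays in [0, 1]
        values.append(x)
    return values

# Both ends negotiate x0 and q in advance, so each side regenerates
# the identical matrix independently.
m1 = chaotic_vector(x0=0.3141, q=3.99, c_len=8)
m2 = chaotic_vector(x0=0.3141, q=3.99, c_len=8)
```

The point of the negotiation is visible in the example: the same seed and coefficient reproduce the same matrix at both ends without ever transmitting it.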
In a preferred embodiment, to further improve communication security by authenticating the information exchanged between the transmitting end and the receiving end, as shown in fig. 4, the transmitting end further performs:
Step S14, generating an original digest H of the plaintext to be transmitted, and encrypting the original digest with the private key of the transmitting end to obtain an encrypted digest RSA(H, private_key).
Step S15, splicing the encrypted digest RSA(H, private_key) and the ciphertext C' to obtain splicing information, and transmitting the splicing information.
Correspondingly, after receiving the splicing information, the receiving end executes:
Step S26, splitting the ciphertext and the encrypted digest RSA(H, private_key) from the splicing information, and decrypting the encrypted digest at the receiving end with the corresponding public key to obtain the original digest H; decrypting the ciphertext to obtain the plaintext, and generating a verification digest of the decrypted plaintext.
If the chaotic matrix processing has been applied to the ciphertext, the receiving end needs to execute steps S211-S213 to obtain all ciphertext blocks and the plaintext length representation; if it has not, the receiving end decomposes all ciphertext blocks and the plaintext length representation directly from the ciphertext. After obtaining them, the receiving end performs the decryption process of steps S23-S25 described above to obtain the plaintext.
Step S27, verifying the consistency of the original digest and the verification digest:
If the original digest and the verification digest are consistent, the decrypted plaintext is accepted; if they are inconsistent, the decrypted plaintext is rejected.
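The splice, split and verify flow of steps S15 and S26 can be sketched as follows. The patent signs the digest with the sender's RSA private key; to keep the sketch self-contained and runnable, an HMAC over a pre-shared demo key stands in for RSA, and the key name, digest length and layout (digest prepended) are illustrative assumptions.

```python
import hashlib
import hmac

KEY = b"negotiated-demo-key"   # stand-in for the RSA key pair
DIGEST_LEN = 32                # SHA-256 output length in bytes

def splice(ciphertext: bytes, plaintext: bytes) -> bytes:
    """Step S15 analogue: prepend the authenticated digest to the ciphertext."""
    tag = hmac.new(KEY, plaintext, hashlib.sha256).digest()
    return tag + ciphertext

def split_and_verify(message: bytes, decrypted_plaintext: bytes) -> bool:
    """Step S26 analogue: split off the digest and check consistency."""
    tag, _ciphertext = message[:DIGEST_LEN], message[DIGEST_LEN:]
    expected = hmac.new(KEY, decrypted_plaintext, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg = splice(b"\x01\x02cipher", b"hello")
ok = split_and_verify(msg, b"hello")     # consistent: plaintext accepted
bad = split_and_verify(msg, b"hellp")    # inconsistent: plaintext rejected
```

`hmac.compare_digest` is used for the consistency check because it compares in constant time, which matters whenever a digest comparison is exposed to an attacker.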
In this embodiment, to improve the digest's resistance to forward attacks, the chaotic algorithm is used to pad the original plaintext during digest generation, and the LLE algorithm (Locally Linear Embedding) from manifold learning is used to generate the digest so as to resist first preimage attacks. Further preferably, the process by which the transmitting end generates the original digest of the plaintext to be transmitted, as shown in fig. 6, includes:
Step S141, converting the plaintext to be transmitted into a binary second character string, and obtaining the length of the second character string. The second character string is identical to the first character string; if the first character string has already been obtained, it is used directly as the second character string.
In step S142, a target length is obtained that is greater than the second string length and closest to it, the target length being an integer multiple of twice the block length; the difference diff1 between the second string length and the target length is calculated and recorded as the first difference. In one example, the block length is 64 bits, so the target length is the nearest multiple of 128 bits above the second string length, and the difference between the second string length and that multiple of 128 is recorded as diff1.
Step S143, obtaining a first chaotic sequence by using a second chaotic algorithm, the first chaotic sequence being a binary sequence, and taking diff1 bits (the first difference) from the first chaotic sequence to pad the second character string, obtaining a first bit-filling plaintext. The transmitting end and the receiving end can negotiate the padding position of these diff1 bits in advance; it can be the tail or the head of the second character string.
In step S144, the first bit-filling plaintext is divided into a plurality of first bit-filling blocks, each of length twice the block length. In one example, the first bit-filling block length is 128 bits.
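Steps S142-S144 reduce to simple arithmetic, sketched below. The chaotic padding bits are a fixed toy sequence here, and whether a string that is already an exact multiple should be padded up a further block (the text says "greater than") is left as stated - this sketch pads only when needed.

```python
# Sketch of steps S142-S144: pad the binary string to the nearest multiple of
# 2 x block_length, drawing the diff1 missing bits from a chaotic sequence,
# then split into 2L-bit bit-filling blocks. Tail padding is assumed.

def pad_to_target(bits: str, block_len: int, chaotic_bits: str):
    unit = 2 * block_len                        # bit-filling blocks are 2L bits
    target = ((len(bits) + unit - 1) // unit) * unit
    diff1 = target - len(bits)                  # the first difference
    padded = bits + chaotic_bits[:diff1]        # padding position: tail
    blocks = [padded[i:i + unit] for i in range(0, len(padded), unit)]
    return padded, diff1, blocks

padded, diff1, blocks = pad_to_target("1011" * 20, block_len=64,
                                      chaotic_bits="01" * 64)
```

For the 80-bit toy input and L = 64, the target is 128 bits, so diff1 = 48 and the result is a single bit-filling block.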
Step S145, iteratively computing the digest of each first bit-filling block according to the following formula, and taking the digest computed from the last first bit-filling block as the original digest H of the plaintext to be transmitted:
where H_{i-1} denotes the digest of the (i-1)-th first bit-filling block, the initial digest H_0 = {0}^L, the first bit-filling block length is 2L, M_i^{(1)} denotes the first L bits of the i-th first bit-filling block M_i, and M_i^{(2)} denotes the last L bits of M_i; Alice(,) denotes the encoder of the trained antagonistic neural network model; AE() denotes the trained self-encoder; LLE() denotes the local linear embedding algorithm, L being a positive integer.
Accordingly, the process of generating the verification digest of the plaintext obtained by decryption at the receiving end in step S26 includes:
step S261, converting the plaintext obtained by decryption into a binary third string, and obtaining the third string length.
Step S262, obtaining a target length which is larger than the third character string length and is closest to the third character string length, wherein the target length is an integer multiple of 2 times of the block length, calculating the difference between the third character string length and the target length, and recording the difference as a second difference diff2;
Step S263, obtaining a second chaotic sequence by using the second chaotic algorithm, the second chaotic sequence being a binary sequence, and taking diff2 bits (the second difference) from the second chaotic sequence to pad the third character string, obtaining a second bit-filling plaintext;
Step S264, dividing the second bit-filling plaintext into a plurality of second bit-filling blocks, wherein the length of each second bit-filling block is 2 times that of each block;
Step S265, iteratively computing the digest of each second bit-filling block according to the following formula, and taking the digest computed from the last second bit-filling block as the verification digest of the decrypted plaintext:
where H'_{j-1} denotes the digest of the (j-1)-th second bit-filling block, H'_0 = {0}^L, the second bit-filling block length is 2L, M'_j^{(1)} denotes the first L bits of the j-th second bit-filling block M'_j, and M'_j^{(2)} denotes the last L bits of M'_j; Alice(,) denotes the encoder of the trained antagonistic neural network model; AE() denotes the trained self-encoder; LLE() denotes the local linear embedding algorithm.
In this embodiment, the second chaotic algorithm may be an existing Tent map, Logistic map, Cubic map, Chebyshev map, Piecewise map, Sinusoidal map, Sine map, CMIC map, Circle map, or Bernoulli map.
In this embodiment, to improve the randomness of the first chaotic sequence and the original digest's resistance to forward attacks, obtaining the first chaotic sequence with the second chaotic algorithm in step S143 preferably includes:
Acquiring the length C_len of the intermediate encryption vector;
Randomly selecting a value within the second variable threshold interval x_{k'} ∈ (0, 1) as the initial value x_0 of the second variable x, and iteratively executing the following formula a number of times corresponding to the target length, obtaining one second variable value per iteration:
where k' denotes the iteration number of the second chaotic algorithm; t denotes a randomly selected first control parameter; the first bracketed term denotes the Singer map value at the (k'+1)-th iteration of the second chaotic algorithm; μ_3 denotes a third control parameter, μ_3 ∈ (0, 4]; x_{k'} denotes the second variable value at the k'-th iteration of the second chaotic algorithm, and x_{k'+1} the second variable value at the (k'+1)-th iteration; μ_2 denotes a second control parameter, μ_2 ∈ (0, 4); the second bracketed term denotes the Logistic map value at the (k'+1)-th iteration of the second chaotic algorithm;
The second variable values from all iterations form a sequence, which is converted into binary format to obtain the first chaotic sequence.
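The coupled generator can be sketched as follows. The patent's exact coupling formula is shown only as an image, so this sketch assumes a convex combination, weighted by the negotiated control parameter t, of one Singer-map iterate and one Logistic-map iterate, followed by thresholding into bits; treat the combination, the mod-1 wrap and all parameter values as illustrative.

```python
# Hedged sketch of steps S143/S212: a Singer/Logistic coupled sequence,
# thresholded at 0.5 to produce the binary first chaotic sequence.

def singer(x, mu3):
    # Classical Singer map polynomial
    return mu3 * (7.86 * x - 23.31 * x**2 + 28.75 * x**3 - 13.302875 * x**4)

def logistic(x, mu2):
    return mu2 * x * (1.0 - x)

def chaotic_bits(x0, t, mu2, mu3, n_bits):
    x, bits = x0, []
    for _ in range(n_bits):
        x = t * singer(x, mu3) + (1.0 - t) * logistic(x, mu2)  # assumed coupling
        x = x - int(x) if x >= 0 else x - int(x) + 1.0         # wrap into [0, 1)
        bits.append(1 if x >= 0.5 else 0)
    return bits

seq = chaotic_bits(x0=0.42, t=0.5, mu2=3.9, mu3=1.07, n_bits=128)
```

As with the first chaotic algorithm, the pre-negotiated x_0, t, μ_2 and μ_3 let both ends regenerate the same bit sequence without transmitting it.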
In this embodiment, the second variable initial value, the first control parameter, the second control parameter, and the third control parameter need to be negotiated in advance by the transmitting end and the receiving end.
In a preferred embodiment, as shown in fig. 5, a first vulnerability detection model is provided at the transmitting end; it calculates the distance between a ciphertext generated by the transmitting end and the plaintext corresponding to that ciphertext, the distance preferably being the Manhattan distance, and if the distance is less than or equal to a preset distance threshold, a warning is sent to the receiving end;
and/or,
The receiving end is provided with a second vulnerability detection model, which calculates the distance between the received ciphertext and the plaintext obtained by decrypting it, the distance preferably being the Manhattan distance; if the distance is less than or equal to a preset distance threshold, an alarm is sent to the transmitting end.
In the present embodiment, the distance threshold is preferably, but not limited to, 0.1. The network structure of the first vulnerability detection model is similar to the encoder structure of the antagonistic neural network, and the network structure of the second vulnerability detection model is similar to the decoder structure; however, the inputs of both vulnerability detection models are only 64 bits. At the transmitting end, ciphertexts corresponding to plaintexts stored in the plaintext library are read continuously as input to the first vulnerability detection model, which is trained with the Manhattan distance between its output and the plaintext corresponding to the ciphertext as the loss function; when the loss of the first vulnerability detection model drops below 0.1, an alarm warning is sent to the other party. At the receiving end, received ciphertexts are read continuously as input to the second vulnerability detection model, which is trained with the Manhattan distance between its output and the plaintext obtained by decrypting the ciphertext as the loss function; when the loss of the second vulnerability detection model drops below 0.1, an alarm warning is sent to the other party.
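The alarm condition itself is a one-line check, sketched below. The "model outputs" here are fixed stand-ins rather than a trained network; only the Manhattan distance and the 0.1 threshold come from the embodiment.

```python
# Vulnerability-detection check: an alarm fires when the detector's
# reconstruction of the plaintext (from ciphertext alone) gets within the
# Manhattan-distance threshold, i.e. the ciphertext is leaking structure.

THRESHOLD = 0.1

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def should_alarm(model_output, plaintext, threshold=THRESHOLD):
    """True when the reconstruction is suspiciously close to the plaintext."""
    return manhattan(model_output, plaintext) <= threshold

plain = [0.0, 1.0, 1.0, 0.0]
far   = [0.6, 0.4, 0.5, 0.5]    # healthy: reconstruction far from plaintext
close = [0.0, 1.0, 1.0, 0.05]   # leak: reconstruction almost equals plaintext
```

Note the inversion relative to a normal training objective: here a *low* loss is the bad outcome, because it means an encoder-shaped or decoder-shaped attacker could learn the mapping.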
The security and reliability of the antagonistic neural cryptographic system derive from the dynamically co-trained pair of encryption and decryption modules. At the same time, the encryption and decryption modules are inverse functions of each other rather than identical network structures. This raises several pressing problems: how to transmit data for cooperative training so as to reach a shared dynamic network structure; how to ensure that the network structures of the encryption and decryption modules change synchronously, so that old ciphertext does not become undecryptable under a new module; and how to ensure that data cannot leak during transmission. Conventional network protocols and conventional cryptographic protocols are based on fixed public algorithms and flows whose security does not derive from dynamic change, so they do not solve these problems well. To make the antagonistic neural cryptographic system usable, the present application therefore designs a cryptographic protocol tailored to it.
In a preferred embodiment, to keep a single group of networks sound, to protect a single direction of a given communication channel, and to ensure that data is neither leaked nor rendered undecryptable by dynamic adjustment of the antagonistic neural network model, the present application combines ideas from the traditional PGP, KDC and TCP protocols and designs a network update protocol, shown in fig. 7, specifically for the problems faced by the antagonistic neural cryptographic system. The transmitting end and the receiving end execute the network update protocol for the duration of the communication, comprising:
when the transmitting end raises an alarm, a timer is started; when the timer expires, or a record of an alarm raised by the receiving end is received, the transmitting end sends a ready-to-replace signal (i.e., the text message "ready to replace") to the receiving end and immediately stops encrypting new ciphertext;
after receiving the ciphertext of the ready-to-replace signal, the receiving end begins processing the ciphertext not yet decrypted on its side, sends a ready signal (i.e., the text message "ready") once that ciphertext has been processed, and then the transmitting end and the receiving end exchange authorization codes and use their private keys to apply to the central server for an instruction to replace the antagonistic neural network model;
the central server transmits the newly trained antagonistic neural network model to the transmitting end and the receiving end; after replacing its model, the transmitting end encrypts a replaced signal (i.e., the text message "replaced") with the newly trained model and sends it to the receiving end; the receiving end verifies the encrypted signal, replies with an authorization code, and communication then restarts.
The following example gives the detailed steps of the network update protocol:
Step 601: the sender sets up and trains the first vulnerability detection model, and the receiver sets up and trains the second vulnerability detection model.
Step 602: during the communication, the sender continuously submits ciphertext to the central server, where it is placed into the sample pool as training samples.
Step 603: when the sender raises an alarm warning, a two-minute timer is started; when the timer expires, or after a record of an alarm is received, the sender sends a "ready to replace" signal to the receiver and immediately stops encrypting new ciphertext.
Step 604: after receiving the "ready to replace" ciphertext, the receiver begins processing the not-yet-decrypted ciphertext in its system; once finished, it sends "ready", and the two parties then exchange authorization codes and use their private keys to apply to the central server for an instruction to replace the antagonistic neural network model.
Step 605: the central server transmits the newly trained antagonistic neural network model to the sender and the receiver; after replacing its model, the sender encrypts the "replaced" message with the new model and sends it to the receiver; the receiver verifies it, replies with an authorization code, and communication restarts.
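The handshake of steps 603-605 can be summarized as an ordered message trace. Transport, the two-minute timer, and the central server's model training are abstracted away here; the message names mirror the quoted signals in the text.

```python
# Steps 603-605 as an ordered message trace (who sends what, in what order).

def run_handshake():
    trace = []
    trace.append(("sender", "ready to replace"))              # step 603
    trace.append(("receiver", "ready"))                       # step 604: backlog drained
    trace.append(("both", "exchange authorization codes"))    # step 604
    trace.append(("server", "deliver newly trained model"))   # step 605
    trace.append(("sender", "replaced"))                      # encrypted with new model
    trace.append(("receiver", "authorization code reply"))    # verification done
    return trace

trace = run_handshake()
```

The ordering is the substance of the protocol: no new ciphertext is produced between "ready to replace" and the verified "replaced" message, which is what prevents old ciphertext from being stranded under a new model.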
The invention also discloses a secure communication system implementing the above secure communication method based on the antagonistic neural network which, as shown in fig. 5, comprises a transmitting end, a receiving end and a central server in communication with one another.
The performance of the secure communication method and system based on the antagonistic neural network provided by the application was verified. Experimental results show that, while maintaining adversarial training, the antagonistic neural network model can be trained to nearly 100% bit accuracy on a personal computer with ordinary computing power, which makes the digital signature and CBC mode built on top of the communication method feasible; the training curves are shown in fig. 8. In fig. 8, the flatter curve is the classifier's reconstruction error over training iterations, and the curve with the abrupt drop is the decoder's reconstruction error over training iterations.
The antagonistic neural network model used in this application is trained adversarially: the network parameters are not preset by other means but are adjusted dynamically by the neural network during training. A complete protocol for the antagonistic neural cryptographic system is constructed to ensure that data is neither leaked nor rendered undecryptable by dynamic adjustment of the model. A chaotic system is used to further process the ciphertext, better removing its statistical characteristics. The vulnerability detection models monitor for leaked pattern features, preventing an attacker from learning the network's features at any given stage and compromising the data. The secure communication system provided by the application is a dynamically adjusting system whose adjustment process relies on the unpredictability and randomness of the neural network; to some extent it is a probabilistic cipher and does not depend on a fixed mathematical hard problem.
The LLE algorithm proceeds in three stages. The first stage finds the K nearest neighbours of each sample. The second stage solves for the linear relationship among the K neighbours in each sample's neighbourhood, yielding the linear weight coefficients W. The third stage uses these weight coefficients to reconstruct the sample data in the low-dimensional space.
Step 301: input the sample set D = {x_1, x_2, ..., x_m}, the number of nearest neighbours k, and the target dimension d;
Step 302: output the low-dimensional sample set matrix D';
Step 303: for i = 1 to m, compute the k nearest neighbours (x_{i1}, x_{i2}, ..., x_{ik}) of x_i, measured by Euclidean distance;
Step 304: for i = 1 to m, compute the local covariance matrix Z_i = (x_i - x_j)(x_i - x_j)^T over the neighbours x_j, and solve for the corresponding weight coefficient vector W_i = Z_i^{-1} 1_k / (1_k^T Z_i^{-1} 1_k), where 1_k is the all-ones vector of length k;
Step 305: compose the weight coefficient matrix W from the weight coefficient vectors W_i and compute the matrix M = (I - W)^T (I - W), where I denotes the identity matrix;
Step 306: compute the smallest d+1 eigenvalues of the matrix M and the eigenvectors {y_1, y_2, ..., y_{d+1}} corresponding to these d+1 eigenvalues;
Step 307: the matrix formed by the second through the (d+1)-th eigenvectors is the output low-dimensional sample set matrix D' = (y_2, y_3, ..., y_{d+1}).
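Steps 301-307 can be sketched with NumPy as follows. This follows the classical LLE recipe; the small regularization added to Z_i (standard practice when k exceeds the input dimension, so the solve stays well-conditioned) is an implementation detail the text does not mention.

```python
import numpy as np

# Classical LLE, mirroring steps 301-307 above.
def lle(X, k, d):
    m = X.shape[0]
    W = np.zeros((m, m))
    for i in range(m):
        # step 303: k nearest neighbours of x_i by Euclidean distance
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]          # skip x_i itself
        # step 304: local covariance Z_i and normalized weight vector W_i
        G = X[i] - X[nbrs]                         # (k, dim) neighbour offsets
        Zi = G @ G.T
        Zi += np.eye(k) * 1e-3 * np.trace(Zi)      # regularize (not in the text)
        w = np.linalg.solve(Zi, np.ones(k))
        W[i, nbrs] = w / w.sum()                   # weights sum to 1
    # step 305: M = (I - W)^T (I - W)
    I = np.eye(m)
    M = (I - W).T @ (I - W)
    # steps 306-307: bottom d+1 eigenvectors, drop the constant first one
    _vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]

X = np.random.default_rng(0).normal(size=(40, 5))
Y = lle(X, k=6, d=2)   # 40 samples embedded into 2 dimensions
```

In the digest construction above, LLE() plays the role of the dimensionality-reduction step; this sketch shows the batch algorithm the three stages describe, not the trained component itself.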
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A method of secure communication based on an antagonistic neural network, comprising:
the transmitting end performs:
generating a plaintext length representation according to the plaintext length to be transmitted, and converting the plaintext to be transmitted into a plurality of plaintext blocks;
sequentially encrypting a plurality of plaintext blocks by using an encoder of a trained antagonistic neural network model by adopting a block cipher working mode to obtain ciphertext blocks corresponding to each plaintext block, wherein the encoder adopts a multi-layer convolutional neural network;
combining the ciphertext block and the plaintext length representation to obtain a ciphertext, and transmitting the ciphertext;
The receiving end executes:
Decomposing a plurality of ciphertext blocks and plaintext length representations from the received ciphertext;
obtaining a plaintext length to be transmitted based on the plaintext length representation;
Sequentially decrypting a plurality of ciphertext blocks by using a decoder of a trained antagonistic neural network model by adopting a block cipher working mode to obtain plaintext blocks corresponding to each ciphertext block, wherein the decoder adopts a multi-layer convolutional neural network;
combining the decrypted plurality of plaintext blocks to generate a combined plaintext;
And extracting a plaintext length corresponding portion to be transmitted from the combined plaintext as plaintext.
2. The secure communication method based on an antagonistic neural network as claimed in claim 1, wherein the transmitting end converting the plaintext to be transmitted into a plurality of plaintext blocks comprises:
And converting the plaintext to be transmitted into a binary first character string, supplementing 0 at the tail of the first character string, enabling the length of the first character string after supplementing 0 to be an integer multiple of the block length, and sequentially dividing the first character string after supplementing 0 into a plurality of plaintext blocks according to the block length.
3. A method of secure communication based on an antagonistic neural network according to claim 1 or 2, wherein the step of obtaining ciphertext by combining ciphertext blocks and plaintext length representations at the transmitting end comprises:
combining all ciphertext blocks and plaintext length representations to obtain an intermediate encryption vector;
generating a chaos matrix by using a first chaos algorithm;
multiplying the chaos matrix with the intermediate encryption vector to obtain ciphertext;
Accordingly, the receiving end decomposes a plurality of ciphertext blocks and plaintext length representations from the received ciphertext, including:
Generating a chaotic matrix again by using a first chaotic algorithm, and solving an inverse matrix of the chaotic matrix;
multiplying the inverse matrix with the ciphertext to obtain an intermediate encryption vector;
all ciphertext blocks and plaintext length representations are decomposed from the intermediate encryption vector.
4. A secure communication method based on an antagonistic neural network according to claim 3, characterised in that the transmitting end further performs:
generating an original abstract of a plaintext to be transmitted, and encrypting the original abstract by using a private key of a transmitting end to obtain an encrypted abstract;
splicing the encrypted abstract and the ciphertext to obtain splicing information, and transmitting the splicing information;
Correspondingly, after receiving the splicing information, the receiving end executes:
splitting ciphertext and an encrypted abstract from the spliced information, and decrypting the encrypted abstract by using a public key of a receiving end to obtain an original abstract;
Decrypting the ciphertext to obtain a plaintext, and generating a verification abstract of the plaintext obtained by decryption;
And if the original abstract and the verification abstract are consistent, receiving the decrypted plaintext, and if the original abstract and the verification abstract are inconsistent, refusing to receive the decrypted plaintext.
5. The secure communication method based on an antagonistic neural network as claimed in claim 4, wherein the process of generating the original digest of the plaintext to be transmitted by the transmitting end comprises:
Converting a plaintext to be transmitted into a binary second character string, and acquiring the length of the second character string;
acquiring a target length which is larger than the second character string length and is nearest to the second character string length, wherein the target length is an integer multiple of 2 times of the block length, calculating a difference value between the second character string length and the target length, and marking the difference value as a first difference value;
obtaining a first chaotic sequence by using a second chaotic algorithm, and taking a number of bits equal to the first difference from the first chaotic sequence to pad the second character string, obtaining a first bit-filling plaintext;
dividing a first bit-filling plaintext into a plurality of first bit-filling blocks, wherein the length of each first bit-filling block is 2 times that of each block;
iteratively computing the digest of each first bit-filling block according to the following formula, and taking the digest computed from the last first bit-filling block as the original digest of the plaintext to be transmitted:
where H_{i-1} denotes the digest of the (i-1)-th first bit-filling block, the initial digest H_0 = {0}^L, the first bit-filling block length is 2L, M_i^{(1)} denotes the first L bits of the i-th first bit-filling block M_i, and M_i^{(2)} denotes the last L bits of M_i; Alice(,) denotes the encoder of the trained antagonistic neural network model; AE() denotes the trained self-encoder; LLE() denotes the local linear embedding algorithm, L being a positive integer.
6. The method for secure communication based on an antagonistic neural network according to claim 5, wherein the obtaining the first chaotic sequence using the second chaotic algorithm comprises:
Acquiring the length C_len of the intermediate encryption vector;
Randomly selecting a value in the second variable threshold interval as the initial value of the second variable x, and iteratively executing the following formula a number of times corresponding to the target length, obtaining one second variable value per iteration:
where k' denotes the iteration number of the second chaotic algorithm; t denotes a randomly selected first control parameter; the first bracketed term denotes the Singer map value at the (k'+1)-th iteration of the second chaotic algorithm; μ_3 denotes a third control parameter, μ_3 ∈ (0, 4]; x_{k'} denotes the second variable value at the k'-th iteration of the second chaotic algorithm, and x_{k'+1} the second variable value at the (k'+1)-th iteration; μ_2 denotes a second control parameter, μ_2 ∈ (0, 4); the second bracketed term denotes the Logistic map value at the (k'+1)-th iteration of the second chaotic algorithm;
The second variable values from all iterations form a sequence, which is converted into binary format to obtain the first chaotic sequence.
7. The secure communication method based on an antagonistic neural network according to claim 1, 2, 4, 5 or 6, wherein the transmitting end is provided with a first vulnerability detection model that calculates the distance between a ciphertext generated by the transmitting end and the plaintext corresponding to that ciphertext, and if the distance is less than or equal to a preset distance threshold, sends a warning to the receiving end;
and/or,
The receiving end is provided with a second vulnerability detection model that calculates the distance between the received ciphertext and the plaintext obtained by decrypting it, and if the distance is less than or equal to a preset distance threshold, sends an alarm to the transmitting end.
8. The method of claim 7, wherein the transmitting end and the receiving end perform a network update protocol during a communication duration, the network update protocol comprising:
When the transmitting end raises an alarm, a timer is started; when the timer expires or a record of an alarm raised by the receiving end is received, the transmitting end sends a ready-to-replace signal to the receiving end and immediately stops encrypting new ciphertext;
After receiving the ciphertext of the ready-to-replace signal, the receiving end begins processing the ciphertext not yet decrypted on its side, sends a ready signal once that ciphertext has been processed, and then the transmitting end and the receiving end exchange authorization codes and use their private keys to apply to the central server for an instruction to replace the antagonistic neural network model;
the central server transmits the newly trained antagonistic neural network model to the transmitting end and the receiving end; after replacing its model, the transmitting end encrypts a replaced signal with the newly trained model and sends it to the receiving end; the receiving end verifies it, replies with an authorization code, and communication restarts.
9. The secure communication method based on an antagonistic neural network according to claim 8, wherein the central server trains the antagonistic neural network model and the self-encoder respectively, and issues the trained antagonistic neural network model and self-encoder to the transmitting end and the receiving end respectively;
The antagonistic neural network model comprises an encoder, a decoder and a classifier; the encoder and the decoder are multi-layer convolutional neural networks, and the classifier is a single-layer convolutional neural network.
10. A secure communication system for implementing the secure communication method based on an antagonistic neural network according to any one of claims 1 to 9, characterized by comprising a transmitting end, a receiving end and a central server which are in communication with each other.
CN202410430617.0A 2024-04-09 2024-04-09 Safety communication method and system based on antagonistic neural network Pending CN118214540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410430617.0A CN118214540A (en) 2024-04-09 2024-04-09 Safety communication method and system based on antagonistic neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410430617.0A CN118214540A (en) 2024-04-09 2024-04-09 Safety communication method and system based on antagonistic neural network

Publications (1)

Publication Number Publication Date
CN118214540A true CN118214540A (en) 2024-06-18

Family

ID=91450342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410430617.0A Pending CN118214540A (en) 2024-04-09 2024-04-09 Safety communication method and system based on antagonistic neural network

Country Status (1)

Country Link
CN (1) CN118214540A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789053A (en) * 2016-12-13 2017-05-31 四川长虹电器股份有限公司 Random ciphertext generation method and system, decryption method and system
CN112417467A (en) * 2020-10-26 2021-02-26 南昌大学 Image encryption method based on anti-neurocryptography and SHA control chaos
EP4064095A1 (en) * 2021-03-23 2022-09-28 INCM - Imprensa Nacional-Casa da Moeda, S.A. Encoding, decoding and integrity validation systems for a security document with a steganography-encoded image and methods, security document, computing devices, computer programs and associated computer-readable data carrier
CN115225320A (en) * 2022-06-10 2022-10-21 北卡科技有限公司 Data transmission encryption and decryption method
CN116015762A (en) * 2022-12-09 2023-04-25 中国人民武装警察部队工程大学 Method for constructing non-deterministic symmetric encryption system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination