CN112243005A - Secure non-embedded steganography method based on generation of countermeasure network - Google Patents
- Publication number
- CN112243005A (application number CN202011094188.2A; granted as CN112243005B)
- Authority
- CN
- China
- Prior art keywords
- model
- key
- training
- extraction
- secret
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/04—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
- H04L63/0428—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
- H04L63/0442—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply asymmetric encryption, i.e. different keys for encryption and decryption
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
- H04L9/0816—Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
- H04L9/0819—Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s)
- H04L9/0825—Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s) using asymmetric-key encryption or public key infrastructure [PKI], e.g. key signature or public key certificates
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
- H04L9/0861—Generation of secret information including derivation or calculation of cryptographic keys or passwords
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
Abstract
The invention discloses a secure non-embedded steganography method based on a generative adversarial network (GAN), comprising the following steps: 1, training a GAN-based steganography model; 2, encrypting and steganographically hiding the secret information; and 3, encrypted extraction of the secret information. The method resists common security threats such as known-generative-model attacks, known-extraction-model attacks and trained-extraction-model attacks, thereby meeting the security requirements for information steganography and extraction during communication transmission.
Description
Technical Field
The invention belongs to the technical field of information security, and particularly relates to a secure non-embedded steganography method based on a generative adversarial network.
Background
Information hiding is a technique for covert communication by which secret information can be transmitted without being perceived by a third party. Steganography is an important branch of information hiding; it works mainly by embedding information into and modifying a carrier medium while keeping the resulting distortion of the carrier quality within a range that does not alert third parties. Early spatial-domain digital image steganography algorithms exploited the insensitivity of human vision to small variations in an image, replacing the least significant bits (LSBs) of image pixels with secret information. Because the carrier is modified to varying degrees, powerful deep-learning-based steganalysis methods pose challenges to the security of even adaptive steganography.
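The LSB replacement mentioned above can be sketched in a few lines. This is an illustrative example of the classic technique only, not part of the claimed method; the function names and the NumPy pixel representation are our own:

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Replace the least significant bit of the first len(bits) pixel
    values with message bits (classic spatial-domain LSB steganography)."""
    flat = pixels.flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b   # clear LSB, then set it to the bit
    return flat.reshape(pixels.shape)

def lsb_extract(pixels, n_bits):
    """Read the message back from the least significant bits."""
    return [int(v & 1) for v in pixels.flatten()[:n_bits]]
```

Each pixel changes by at most 1, which is why LSB embedding is visually imperceptible yet detectable by statistical steganalysis.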
To address these problems, a classical GAN-based encryption model treats the system as symmetric encryption, defining the plaintext and the key to be of equal length and giving the generative and extraction models identical network structures. Another approach constructs a fingerprint image from the secret information and a key, yielding a construction-based non-embedded steganography algorithm. A data-driven non-embedded steganography scheme, GSS, has also been proposed, in which the sender obtains a carrier image by sampling a generator with the message and a key; embedding and extraction each use a shared key, without the participation of any cover object. Such methods need no embedding and no carrier: a stego object is generated or selected directly under the drive of the secret information, making this a novel form of steganography with great development potential. However, existing non-embedded steganography algorithms, particularly GAN-based ones, still suffer from low embedding capacity and low recovery accuracy, have difficulty resisting common security threats such as known-generative-model attacks, known-extraction-model attacks and trained-extraction-model attacks, and do not satisfy Kerckhoffs's principle of modern cryptography.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a secure non-embedded steganography method based on a generative adversarial network, so as to resist common security threats such as known-generative-model attacks, known-extraction-model attacks and trained-extraction-model attacks, thereby meeting the security requirements for information steganography and extraction during communication transmission.
In order to achieve the purpose, the invention adopts the following technical scheme:
The invention relates to a secure non-embedded steganography method based on a generative adversarial network, characterized in that the method is applied in a network environment consisting of a sender, a receiver, a discriminator and a third party, and proceeds according to the following steps:
step 1, generating a key pair using the openssl tool, comprising a generation key en_key and an extraction key de_key;
step 2, the third party distributes the generation key en_key to the generator and the extraction key de_key to the extractor;
step 3, training a GAN-based safe non-embedded steganography model:
step 3.1, acquiring a real image set and using the real image set as a pre-training data set w;
step 3.2, the secure non-embedded steganography model consists of a generative model G, a discrimination model D and an extraction model E, wherein the generative model G consists of a fully-connected layer followed by multiple deconvolution layers, and the discrimination model D and the extraction model E each consist of multiple convolution layers followed by a fully-connected layer;
3.3, pre-training the generated model G and the discrimination model D;
step 3.3.1, concatenating the generation key en_key and the noise vector k as the input z of the generative model G, thereby obtaining the secret-carrying image w′ = G(z);
inputting the pre-training dataset w and the secret-carrying image w′ into the discrimination model D, which outputs the discrimination scores D(w) and D(w′);
step 3.3.2, establishing the loss function L_G of the generative model G as shown in formula (1):
L_G = log(1 − D(w′))   (1)
step 3.3.3, establishing the loss function L_D of the discrimination model D as shown in formula (2):
L_D = log(1 − D(w′)) + log D(w)   (2)
step 3.3.4, in the pre-training process, establishing the objective function L_(G,D) shown in formula (3):
L_(G,D) = L_G + L_D   (3)
step 3.3.5, using the Adam optimization algorithm to solve the objective function L_(G,D) until the values of the loss functions L_G and L_D converge to the optimum in mutual confrontation, thereby obtaining the pre-trained generative model G′ and discrimination model D′;
step 3.4, mid-term training of the pre-trained generative model G′ and discrimination model D′ together with the extraction model E;
step 3.4.1, continuing to train the generative model G′ and the discrimination model D′ in the pre-training manner, thereby obtaining an updated generative model G′ and discrimination model D′;
step 3.4.2, inputting the secret-carrying image w′ and the extraction key de_key into the extraction model E, which outputs the recovered information k′;
step 3.4.3, establishing the loss function L_E of the extraction model E shown in formula (4):
In formula (4), λ_1 and λ_2 are two information-recovery weights, and N denotes the dimension of the input vector formed by the generated image c′ and the extraction key de_key;
step 3.4.4, establishing the objective function L_(G,E) shown in formula (5):
L_(G,E) = λ_G L_G + λ_E L_E   (5)
In formula (5), λ_G and λ_E denote the training weights of the pre-trained generative model G′ and of the extraction model E, respectively;
step 3.4.5, using the Adam optimization algorithm to solve the objective function L_(G,E) until the values of the loss functions L_G and L_E converge to the optimum during training, thereby obtaining the mid-trained extraction model E′ as well as the trained generative model G* and discrimination model D*;
step 3.5, late-stage training of the mid-trained extraction model E′;
step 3.5.1, inputting the generated image w′ and the extraction key de_key into the mid-trained extraction model E′, which outputs the recovered information k′;
step 3.5.2, establishing the loss function L_E′ of the extraction model E′ in the late-stage training as shown in formula (6):
In formula (6), λ_3 and λ_4 are the other two information-recovery weights;
step 3.5.3, using the Adam optimization algorithm to solve the loss function L_E′ until its value converges to the optimum, thereby obtaining the trained extraction model E*;
Step 4, steganography process:
step 4.1, the sender holds the trained generative model G*, concatenates the secret information m with the generation key en_key, and inputs them into G* to generate the secret-carrying image c′;
step 4.2, the sender determines the steganographic capacity L of each image;
step 4.3, the sender determines, from the steganographic capacity L, the number of secret-carrying images needed to send the secret information m as n = ⌈length(m)/L⌉, where length(·) denotes the length of the secret information;
step 4.4, the sender divides the secret information m into n blocks, each carrying secret information of length L;
step 4.5, mapping the secret information of each block, m = {m_1, m_2, …, m_n}, into noise vectors z = {z_1, z_2, …, z_n} according to formula (7), where m_i denotes the secret information contained in the i-th block, z_i denotes the noise vector of the i-th block, and i = 1 … n;
In formula (8), random(x, y) denotes generating a random noise value in the interval from x to y, σ denotes the number of information bits mapped by one dimension of the noise vector, and δ is the interval between adjacent partition cells;
step 4.6, the sender, block by block, concatenates each z_i in z = {z_1, z_2, …, z_n} with the generation key en_key and inputs it into the generative model G*, sequentially generating the secret-carrying images c′ = {c′_1, c′_2, …, c′_n} and sending them to the receiver, where c′_i denotes the secret-carrying image of the i-th block;
step 5, the extraction process:
step 5.1, the receiver holds the trained extraction model E*, adds each sequentially received secret-carrying image in c′ = {c′_1, c′_2, …, c′_n} element-wise to the extraction key de_key, and inputs the result into E*, sequentially recovering the noise vectors z′ = {z′_1, z′_2, …, z′_n} of the n blocks, from which the secret information m′ = {m′_1, m′_2, …, m′_n} is recovered via formula (7).
Compared with the prior art, the invention has the beneficial effects that:
1. The invention adopts an asymmetric key system, applying a generation key and an extraction key to the inputs of the generative model and the extraction model respectively, thereby ensuring the security requirements of information steganography and extraction during communication transmission.
2. The invention concatenates the secret information with the generation key and inputs both into the generative model, which learns during training to generate sample images similar in distribution to the real dataset. Because of the generation key, even if an attacker forges a secret-carrying image from the generative model and fake information, the receiver will not extract misleading information from the forged image.
3. The invention designs an extraction model with a neural-network structure. The extraction key and the secret-carrying image together serve as the extraction model's input, so they jointly determine its behavior; an attacker who obtains the secret-carrying image and all network parameters still cannot successfully recover the secret information, ensuring the security of information transmission.
4. The invention uses a training mode combining end-to-end training and staged training, so that while the model is trained to generate steganographic images, the convergence direction of the generative model is matched with that of the corresponding extraction model; an attacker therefore cannot achieve good convergence when training an extraction model from paired information.
Drawings
FIG. 1 is a diagram of the system model framework of the GAN-based secure non-embedded image steganography of the present invention;
FIG. 2 is a diagram of the network structure of the generative model using the generation key according to the present invention;
FIG. 3 is a diagram of the network structure of the extraction model using the extraction key according to the present invention.
Detailed Description
In this embodiment, referring to FIG. 1, a secure non-embedded steganography method based on a generative adversarial network is applied in a network environment formed by a sender, a receiver, a discriminator and a third party, and proceeds according to the following steps:
step 1, generating a key pair using the openssl tool, comprising a generation key en_key and an extraction key de_key;
step 2, the third party distributes the generation key en_key to the generating party and the extraction key de_key to the extracting party;
step 3, training a GAN-based safe non-embedded steganography model:
step 3.1, acquiring a real image set and using the real image set as a pre-training data set w;
In a specific embodiment, the image dataset FFHQ is used as the pre-training dataset. The dataset contains 70,000 high-definition face images of size 128 × 128 × 3, with strong diversity in age, accessories (glasses, hats) and so on.
step 3.2, the secure non-embedded steganography model consists of a generative model G, a discrimination model D and an extraction model E. The generative model G consists of a fully-connected layer followed by multiple deconvolution layers; the discrimination model D and the extraction model E each consist of multiple convolution layers followed by a fully-connected layer. In a specific embodiment, the generative model G and the discrimination model D adopt the DCGAN structure: G expands the data through a fully-connected layer followed by four deconvolution layers, while D and E each append a fully-connected layer after four convolution layers;
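As a rough sketch of this DCGAN-style layout, assuming PyTorch; all channel widths and the latent dimension are illustrative assumptions, not values fixed by the patent (D would use the same convolutional trunk as E with a 1-unit output head):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """G: one fully-connected layer followed by four deconvolution layers,
    expanding the key||noise vector into a 64x64x3 image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(z_dim, 512 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),   # 4x4 -> 64x64
        )
    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 512, 4, 4))

class Extractor(nn.Module):
    """E: four convolution layers followed by a fully-connected layer,
    mapping a 64x64x3 image to the recovered vector."""
    def __init__(self, out_dim=100):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 64x64 -> 4x4
        )
        self.fc = nn.Linear(512 * 4 * 4, out_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))
```

Each stride-2 (de)convolution halves or doubles the spatial resolution, so four layers bridge 4 × 4 and 64 × 64.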
step 3.3, pre-training the generative model G and the discrimination model D until realistic images can be generated stably;
step 3.3.1, concatenating the generation key en_key and the noise vector k as the input z of the generative model G, thereby obtaining the secret-carrying image w′ = G(z);
This process is shown in FIG. 2. In a specific embodiment, the noise vector k and the generation key en_key both take random values in (−1, 1).
step 3.3.2, inputting the pre-training dataset w and the secret-carrying image w′ into the discrimination model D, which outputs the discrimination scores D(w) and D(w′); in the optimal case the generative model generates images very close to the real image set and the discrimination network can hardly judge a generated image to be fake, i.e., D(w′) = 0.5;
step 3.3.3, establishing the loss function L_G of the generative model G as shown in formula (1):
L_G = log(1 − D(w′))   (1)
step 3.3.4, establishing the loss function L_D of the discrimination model D as shown in formula (2):
L_D = log(1 − D(w′)) + log D(w)   (2)
step 3.3.5, in the pre-training process, establishing the objective function L_(G,D) shown in formula (3):
L_(G,D) = L_G + L_D   (3)
step 3.3.6, using the Adam optimization algorithm to solve the objective function L_(G,D) until the values of the loss functions L_G and L_D converge to the optimum in mutual confrontation, thereby obtaining the pre-trained generative model G′ and discrimination model D′;
In a specific embodiment, the models are updated by mini-batch stochastic gradient descent with a batch size of 64. The generated image size is 64 × 64 × 3, the initial learning rate is set to 0.0002, one step is recorded every 1000 batches, and noise vectors of 100, 200 and 300 dimensions are used respectively.
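One pre-training iteration of steps 3.3.1 to 3.3.6 might look as follows in PyTorch. This is a hedged sketch that optimises the literal forms of formulas (1) to (3) with Adam; `pretrain_step` and its arguments are our own names, and D is assumed to output a probability in (0, 1):

```python
import torch

def pretrain_step(G, D, opt_G, opt_D, real, en_key, noise):
    """One adversarial pre-training iteration on formulas (1)-(3)."""
    z = torch.cat([en_key, noise], dim=1)        # z = en_key || k
    fake = G(z)                                  # w' = G(z)
    eps = 1e-8
    # L_D = log(1 - D(w')) + log D(w): D maximises it, so minimise -L_D
    loss_D = -(torch.log(1 - D(fake.detach()) + eps)
               + torch.log(D(real) + eps)).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    # L_G = log(1 - D(w')): G minimises it directly
    loss_G = torch.log(1 - D(fake) + eps).mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_G.item(), loss_D.item()
```

In practice this loop would run with `torch.optim.Adam(..., lr=0.0002)` and batches of 64, per the embodiment above.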
step 3.4, mid-term training of the pre-trained generative model G′ and discrimination model D′ together with the extraction model E;
step 3.4.1, continuing to train the generative model G′ and the discrimination model D′ in the pre-training manner, thereby obtaining an updated generative model G′ and discrimination model D′;
step 3.4.2, inputting the secret-carrying image w′ and the extraction key de_key into the extraction model E, which outputs the recovered information k′; the extraction key de_key first passes through a fully-connected layer and is reshaped into a matrix of the same size as the secret-carrying image, and the secret-carrying image and this matrix are then added element-wise before being fed into the extraction model E;
This process is shown in FIG. 3. In a specific embodiment, the extraction key de_key is fully connected and reshaped into a 64 × 64 × 3 matrix matching the input secret-carrying image, and the input image and this matrix are added element-wise. The resulting 64 × 64 × 3 matrix serves as the input of the extraction model, which is then trained to extract the information.
step 3.4.3, establishing the loss function L_E of the extraction model E shown in formula (4):
In formula (4), λ_1 and λ_2 are two information-recovery weights, and N denotes the dimension of the input vector formed by the generated image c′ and the extraction key de_key;
step 3.4.4, establishing the objective function L_(G,E) shown in formula (5):
L_(G,E) = λ_G L_G + λ_E L_E   (5)
In formula (5), λ_G and λ_E denote the training weights of the pre-trained generative model G′ and of the extraction model E, respectively;
step 3.4.5, using the Adam optimization algorithm to solve the objective function L_(G,E) until the values of the loss functions L_G and L_E converge to the optimum during training, thereby obtaining the mid-trained extraction model E′ as well as the trained generative model G* and discrimination model D*;
Different weights are set according to the dimension of the input noise: the larger the input noise dimension, the larger the weight of the corresponding generative model. In a specific embodiment, the update schedule is unified as follows: for every iteration of the generative and discrimination models, the extraction model is iterated 3 times, matching the convergence direction of the generative model with the recovery direction of the extraction model while ensuring that the generative model keeps learning to generate images.
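The 1:3 update schedule can be expressed as a small driver loop; `gan_step` and `extractor_step` are placeholders for the actual parameter updates:

```python
def midterm_schedule(n_iters, gan_step, extractor_step):
    """Mid-term update schedule from the embodiment: every iteration of the
    generative/discrimination models is followed by three extractor updates."""
    for _ in range(n_iters):
        gan_step()            # one joint update of G' and D'
        for _ in range(3):
            extractor_step()  # three updates of E on its loss L_E
```

Biasing updates toward E lets the recovery network keep pace while the generator continues to improve image quality.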
step 3.5, late-stage training of the mid-trained extraction model E′;
step 3.5.1, inputting the generated image w′ and the extraction key de_key into the mid-trained extraction model E′, which outputs the recovered information k′;
step 3.5.2, establishing the loss function L_E′ of the extraction model E′ in the late-stage training as shown in formula (6):
In formula (6), λ_3 and λ_4 are the other two information-recovery weights;
step 3.5.3, using the Adam optimization algorithm to solve the loss function L_E′ until its value converges to the optimum, thereby obtaining the trained extraction model E*;
Step 4, steganography process:
step 4.1, the sender holds the trained generative model G*, concatenates the secret information m with the generation key en_key, and inputs them into G* to generate the secret-carrying image c′;
step 4.2, the sender determines the steganographic capacity L of each image;
step 4.3, the sender determines, from the steganographic capacity L, the number of secret-carrying images needed to send the secret information m as n = ⌈length(m)/L⌉, where length(·) denotes the length of the secret information;
step 4.4, the sender divides the secret information m into n blocks, each carrying secret information of length L;
step 4.5, mapping the secret information of each block, m = {m_1, m_2, …, m_n}, into noise vectors z = {z_1, z_2, …, z_n} according to formula (7), where m_i denotes the secret information contained in the i-th block, z_i denotes the noise vector of the i-th block, and i = 1 … n;
In formula (8), random(x, y) denotes generating a random noise value in the interval from x to y, σ denotes the number of information bits mapped by one dimension of the noise vector, and δ is the interval between adjacent partition cells;
step 4.6, the sender, block by block, concatenates each z_i in z = {z_1, z_2, …, z_n} with the generation key en_key and inputs it into the generative model G*, sequentially generating the secret-carrying images c′ = {c′_1, c′_2, …, c′_n} and sending them to the receiver, where c′_i denotes the secret-carrying image of the i-th block;
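One plausible reading of the bit-to-noise mapping of formulas (7)/(8), in which every σ bits select one of 2^σ cells partitioning (−1, 1) and δ is left as a guard gap inside each cell; the exact partition in the patent may differ, since the formula bodies are not reproduced in this text:

```python
import random

SIGMA = 2                    # bits mapped per noise dimension (σ)
DELTA = 0.1                  # guard gap between adjacent cells (δ)
CELL = 2.0 / (1 << SIGMA)    # width of each cell partitioning (-1, 1)

def bits_to_noise(bits):
    """Map each group of SIGMA bits to one noise value: the bits pick a
    cell index, then random(x, y) samples inside that cell's safe zone."""
    assert len(bits) % SIGMA == 0
    noise = []
    for i in range(0, len(bits), SIGMA):
        v = int("".join(str(b) for b in bits[i:i + SIGMA]), 2)  # cell index
        lo = -1.0 + v * CELL + DELTA / 2
        hi = -1.0 + (v + 1) * CELL - DELTA / 2
        noise.append(random.uniform(lo, hi))                    # random(x, y)
    return noise
```

The guard gap gives the extractor a tolerance margin: a recovered noise value can drift by up to δ/2 and still land in the correct cell.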
step 5, the extraction process:
step 5.1, the receiver holds the trained extraction model E*, adds each sequentially received secret-carrying image in c′ = {c′_1, c′_2, …, c′_n} element-wise to the extraction key de_key, and inputs the result into E*, sequentially recovering the noise vectors z′ = {z′_1, z′_2, …, z′_n} of the n blocks, from which the secret information m′ = {m′_1, m′_2, …, m′_n} is recovered via formula (7).
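The receiver side of step 5.1 can be sketched as follows; `extractor` stands in for the trained model E*, `key_map` for the reshaped extraction key, and the decoding inverts an assumed 2^σ-cell partition of (−1, 1), so these names and the partition are illustrative assumptions:

```python
import numpy as np

SIGMA = 2                    # bits per noise dimension (σ)
CELL = 2.0 / (1 << SIGMA)    # width of each cell partitioning (-1, 1)

def noise_to_bits(noise):
    """Invert the assumed formula-(7) mapping: the cell index of each
    recovered noise value yields SIGMA message bits."""
    bits = []
    for z in noise:
        v = int(np.clip((z + 1.0) // CELL, 0, (1 << SIGMA) - 1))
        bits.extend(int(b) for b in format(v, "0{}b".format(SIGMA)))
    return bits

def receiver_extract(stego_images, key_map, extractor):
    """Step 5.1 sketch: add the reshaped extraction key to each received
    secret-carrying image, run the trained extractor, decode the noise."""
    message = []
    for c in stego_images:
        z_rec = extractor(c + key_map)   # E*(c' + de_key)
        message.extend(noise_to_bits(z_rec))
    return message
```

Because decoding only needs the cell index, small extraction errors within the δ guard gap do not corrupt the recovered bits.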
Claims (1)
1. A secure non-embedded steganography method based on a generative adversarial network, characterized in that the method is applied in a network environment formed by a sender, a receiver, a discriminator and a third party, and proceeds according to the following steps:
step 1, generating a key pair using the openssl tool, comprising a generation key en_key and an extraction key de_key;
step 2, the third party distributes the generation key en_key to the generator and the extraction key de_key to the extractor;
step 3, training a GAN-based safe non-embedded steganography model:
step 3.1, acquiring a real image set and using the real image set as a pre-training data set w;
step 3.2, the secure non-embedded steganography model consists of a generative model G, a discrimination model D and an extraction model E, wherein the generative model G consists of a fully-connected layer followed by multiple deconvolution layers, and the discrimination model D and the extraction model E each consist of multiple convolution layers followed by a fully-connected layer;
step 3.3, pre-training the generative model G and the discrimination model D;
step 3.3.1, concatenating the generation key en_key and the noise vector k as the input z of the generative model G, thereby obtaining the secret-carrying image w′ = G(z);
inputting the pre-training dataset w and the secret-carrying image w′ into the discrimination model D, which outputs the discrimination scores D(w) and D(w′);
step 3.3.2, establishing the loss function L_G of the generative model G as shown in formula (1):
L_G = log(1 − D(w′))   (1)
step 3.3.3, establishing the loss function L_D of the discrimination model D as shown in formula (2):
L_D = log(1 − D(w′)) + log D(w)   (2)
step 3.3.4, in the pre-training process, establishing the objective function L_(G,D) shown in formula (3):
L_(G,D) = L_G + L_D   (3)
step 3.3.5, using the Adam optimization algorithm to solve the objective function L_(G,D) until the values of the loss functions L_G and L_D converge to the optimum in mutual confrontation, thereby obtaining the pre-trained generative model G′ and discrimination model D′;
step 3.4, mid-term training of the pre-trained generative model G′ and discrimination model D′ together with the extraction model E;
step 3.4.1, continuing to train the generative model G′ and the discrimination model D′ in the pre-training manner, thereby obtaining an updated generative model G′ and discrimination model D′;
step 3.4.2, inputting the secret-carrying image w′ and the extraction key de_key into the extraction model E, which outputs the recovered information k′;
step 3.4.3, establishing the loss function L_E of the extraction model E shown in formula (4):
In formula (4), λ_1 and λ_2 are two information-recovery weights, and N denotes the dimension of the input vector formed by the generated image c′ and the extraction key de_key;
step 3.4.4, establishing the objective function L_(G,E) shown in formula (5):
L_(G,E) = λ_G L_G + λ_E L_E   (5)
In formula (5), λ_G and λ_E denote the training weights of the pre-trained generative model G′ and of the extraction model E, respectively;
step 3.4.5, using the Adam optimization algorithm to solve the objective function L_(G,E) until the values of the loss functions L_G and L_E converge to the optimum during training, thereby obtaining the mid-trained extraction model E′ as well as the trained generative model G* and discrimination model D*;
step 3.5, late-stage training of the mid-trained extraction model E′;
step 3.5.1, inputting the generated image w′ and the extraction key de_key into the mid-trained extraction model E′, which outputs the recovered information k′;
step 3.5.2, establishing the loss function L_E′ of the extraction model E′ in the late-stage training as shown in formula (6):
In formula (6), λ_3 and λ_4 are the other two information-recovery weights;
step 3.5.3, using the Adam optimization algorithm to solve the loss function L_E′ until its value converges to the optimum, thereby obtaining the trained extraction model E*;
Step 4, steganography process:
step 4.1, the sender holds the trained generative model G*, concatenates the secret information m with the generation key en_key, and inputs them into G* to generate the secret-carrying image c′;
step 4.2, the sender determines the steganographic capacity L of each image;
step 4.3, the sender determines, from the steganographic capacity L, the number of secret-carrying images needed to send the secret information m as n = ⌈length(m)/L⌉, where length(·) denotes the length of the secret information;
step 4.4, the sender divides the secret information m into n blocks, each carrying secret information of length L;
step 4.5, mapping the secret information of each block, m = {m_1, m_2, …, m_n}, into noise vectors z = {z_1, z_2, …, z_n} according to formula (7), where m_i denotes the secret information contained in the i-th block, z_i denotes the noise vector of the i-th block, and i = 1 … n;
In formula (8), random(x, y) denotes generating a random noise value in the interval from x to y, σ denotes the number of information bits mapped by one dimension of the noise vector, and δ is the interval between adjacent partition cells;
step 4.6, the sender, block by block, concatenates each z_i in z = {z_1, z_2, …, z_n} with the generation key en_key and inputs it into the generative model G*, sequentially generating the secret-carrying images c′ = {c′_1, c′_2, …, c′_n} and sending them to the receiver, where c′_i denotes the secret-carrying image of the i-th block;
step 5, the extraction process:
step 5.1, the receiver holds the trained extraction model E*, adds each sequentially received secret-carrying image in c′ = {c′_1, c′_2, …, c′_n} element-wise to the extraction key de_key, and inputs the result into E*, sequentially recovering the noise vectors z′ = {z′_1, z′_2, …, z′_n} of the n blocks, from which the secret information m′ = {m′_1, m′_2, …, m′_n} is recovered via formula (7).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011094188.2A CN112243005B (en) | 2020-10-14 | 2020-10-14 | Secure non-embedded steganography method based on generation of countermeasure network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112243005A true CN112243005A (en) | 2021-01-19 |
CN112243005B CN112243005B (en) | 2022-03-15 |
Family
ID=74168877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011094188.2A Active CN112243005B (en) | 2020-10-14 | 2020-10-14 | Secure non-embedded steganography method based on generation of countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112243005B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108921764A (en) * | 2018-03-15 | 2018-11-30 | 中山大学 | A kind of image latent writing method and system based on generation confrontation network |
CN109587372A (en) * | 2018-12-11 | 2019-04-05 | 北京邮电大学 | A kind of invisible image latent writing art based on generation confrontation network |
US20190363876A1 (en) * | 2014-06-18 | 2019-11-28 | James C. Collier | Methods and Apparatus for Cryptography |
US10496809B1 (en) * | 2019-07-09 | 2019-12-03 | Capital One Services, Llc | Generating a challenge-response for authentication using relations among objects |
CN111598762A (en) * | 2020-04-21 | 2020-08-28 | 中山大学 | Generating type robust image steganography method |
Non-Patent Citations (4)
Title |
---|
HIROSHI NAITO et al.: "A New Steganography Method Based on Generative Adversarial Networks", IEEE *
JIA LIU et al.: "Recent Advances of Image Steganography With Generative Adversarial Networks", IEEE *
WANG Yaojie et al.: "Information hiding scheme based on generative adversarial networks" (in Chinese), Journal of Computer Applications *
ZHENG Shuli et al.: "Reversible data hiding in encrypted images based on lossless compression" (in Chinese), Journal of Hefei University of Technology (Natural Science) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114338093A (en) * | 2021-12-09 | 2022-04-12 | 上海大学 | Method for transmitting multi-channel secret information through capsule network |
CN114338093B (en) * | 2021-12-09 | 2023-10-20 | 上海大学 | Method for transmitting multi-channel secret information through capsule network |
Also Published As
Publication number | Publication date |
---|---|
CN112243005B (en) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109818739B (en) | Generation type image steganography method based on countermeasure network | |
Chen et al. | Impulsive synchronization of reaction–diffusion neural networks with mixed delays and its application to image encryption | |
CN108566500B (en) | Reversible hiding method of self-adaptive image encryption domain based on hybrid encryption mechanism | |
Hadke et al. | Use of neural networks in cryptography: a review | |
CN111951149B (en) | Image information steganography method based on neural network | |
CN108346125A (en) | A kind of spatial domain picture steganography method and system based on generation confrontation network | |
CN107240061B (en) | Watermark embedding and extracting method and device based on dynamic BP neural network | |
CN112862001A (en) | Decentralized data modeling method under privacy protection | |
CN105389770A (en) | Method and apparatus for embedding and extracting image watermarking based on BP and RBF neural networks | |
CN113284033A (en) | Large-capacity image information hiding technology based on confrontation training | |
CN112243005B (en) | Secure non-embedded steganography method based on generation of countermeasure network | |
CN115695675B (en) | Video encryption method for network data secure exchange | |
CN110136045B (en) | Method for hiding and recovering based on mutual scrambling of two images | |
CN105260981A (en) | Optimal coupling image steganography method based on packet replacement | |
CN113746619A (en) | Image encryption method, image decryption method and image encryption system based on predefined time synchronization control | |
CN113992810B (en) | Agile image encryption method based on deep learning | |
Vijayakumar et al. | Increased level of security using DNA steganography | |
Jaiswal et al. | En-VStegNET: Video Steganography using spatio-temporal feature enhancement with 3D-CNN and Hourglass | |
CN114172630A (en) | Reversible information hiding method based on addition homomorphic encryption and multi-high-order embedding | |
CN105827632B (en) | Cloud computing CCS fine-grained data control method | |
EP4141747A1 (en) | Steganography method | |
CN109543425A (en) | A kind of Image Data Hiding Methods based on tensor resolution | |
CN109558701B (en) | Medical CT image secret sharing method | |
Hassan et al. | Data hiding by unsupervised machine learning using clustering K-mean technique | |
Xu et al. | Deniable steganography |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||