CN112634117B - End-to-end JPEG-domain image steganography method based on a generative adversarial network - Google Patents

End-to-end JPEG-domain image steganography method based on a generative adversarial network

Info

Publication number
CN112634117B
CN112634117B (application number CN202011534708.7A)
Authority
CN
China
Prior art keywords
image
network
carrier
layer
secret
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011534708.7A
Other languages
Chinese (zh)
Other versions
CN112634117A (en)
Inventor
康显桂
廖异
阳建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Chenzhen Zhishan Information Technology Co ltd
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202011534708.7A
Publication of CN112634117A
Application granted
Publication of CN112634117B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides an end-to-end JPEG-domain image steganography method based on a generative adversarial network. The method is carried out by three components: an encoder, a decoder and a discriminator. The encoder and decoder together form the generative part of the adversarial network, while the adversarial part is provided by the discriminator. The encoder embeds the secret information, the decoder extracts it, and the discriminator distinguishes carrier images from stego images; adversarial training is performed with this classification error as the loss function, continuously improving the performance of the generative adversarial network. An interference layer is additionally added to simulate the interference commonly encountered in a real transmission channel. The JPEG-domain image steganography method provided by the invention embeds and extracts secret information by modifying the DCT coefficients of the JPEG image, and is therefore highly practical; the interference layer added during training improves the robustness of the algorithm in real application scenarios; and adversarial training against the discriminator enhances the security of the steganographic algorithm.

Description

End-to-end JPEG-domain image steganography method based on a generative adversarial network
Technical Field
The invention relates to the fields of multimedia security, deep learning, steganography and steganalysis, and in particular to an end-to-end JPEG-domain image steganography method based on a generative adversarial network.
Background
Steganography is a technique for embedding secret information into a digital media carrier in such a way that bystanders perceive no change in the carrier. With the continuous development of multimedia technology, carriers take increasingly diverse forms, for example text, images, audio and video. Image steganography mainly exploits the visual redundancy of the human eye: through image steganography, the sender embeds the secret information to be transferred into a carrier image to obtain a stego image, and the slight differences between the carrier image and the stego image are imperceptible to the naked eye. The sender transmits the stego image to the receiver, and no one other than the sender and the receiver can tell that secret information is present in it, thereby achieving covert communication. Steganography differs from encryption in that it hides not only the secret information itself but also the very act of hiding information.
Disclosure of Invention
The invention aims to provide an end-to-end JPEG-domain image steganography method based on a generative adversarial network, in which a deep-learning generative adversarial network performs image steganography in the JPEG domain.
In order to achieve this technical effect, the technical scheme of the invention is as follows:
In an end-to-end JPEG-domain image steganography method based on a generative adversarial network, the embedding and extraction of secret information are performed by the generative adversarial network. The generative adversarial network comprises three parts: an encoder, a decoder and a discriminator. An interference layer is additionally added to simulate the interference commonly encountered in a real transmission channel. The training process comprises the following steps; an illustrative training-loop sketch in code is given after step S8:
S1: the secret information and the DCT coefficient matrix of the carrier image in the JPEG domain are input into the encoder, and the encoder outputs the DCT coefficient matrix of the corresponding stego image.
S2: the DCT coefficient matrix of the stego image is input into an IDCT (inverse discrete cosine transform) module to obtain the spatial-domain stego image.
S3: the stego image is input into the interference layer to obtain the DCT coefficients of the noised stego image after interference is added.
S4: the DCT coefficients of the noised stego image are input into the decoder to obtain the extracted secret information.
S5: the DCT coefficient matrix of the carrier image from S1 is input into the IDCT module to obtain the spatial-domain carrier image.
S6: the spatial-domain carrier image obtained in S5 and the spatial-domain stego image obtained in S2 are input into the discriminator network, which performs binary classification of carrier images versus stego images; the classification error is used as the loss function and back-propagated to update the networks.
S7: steps S1-S6 are repeated until a trained generative adversarial network is obtained.
S8: the model with the best performance is selected according to the accuracy of the extracted information and the security of the stego image; the carrier image and the secret information are fed into the trained encoder to generate the stego image, and the stego image is fed into the decoder to obtain the extracted secret information.
Preferably, the DCT coefficient matrix of the carrier image in S1 is obtained by MATLAB processing, and the secret information is a randomly generated binary image of the same size as the carrier image. The encoder comprises a preprocessing layer group and three convolution groups, each convolution group comprising a convolution layer, a batch normalization layer and an activation layer.
Preferably, the IDCT module in S2 performs inverse quantization on the DCT coefficient matrix of the stego image to obtain the DCT coefficients of the stego image in YCbCr space, applies the IDCT to these coefficients, and then converts the result into the RGB color space to obtain the RGB stego image.
Preferably, the interference layer in S3 contains four kinds of interference commonly found in real life, namely cropping, Gaussian noise, salt-and-pepper noise and JPEG compression; for each training iteration one or more kinds of interference may be selected, or no interference added. The robustness obtained by adding several superimposed kinds of interference may be better than adding no interference or a single kind of interference.
JPEG compression mainly comprises the following three steps: a DCT transform is applied to the spatial-domain image, the coefficients are quantized and rounded, and entropy coding finally yields the compressed image; the flow is shown in FIG. 5. Because the rounding operation cuts off gradient propagation, a simulated rounding function is introduced that approximates rounding while preserving gradient flow:
simu_round(x) = [x] + (x - [x])^3
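One possible PyTorch form of this simulated rounding is sketched below. Since torch.round has zero gradient everywhere, the gradient of the expression is 3*(x - [x])^2, which keeps a (small) gradient flowing instead of cutting it off at the rounding step.

import torch

def simu_round(x):
    r = torch.round(x)          # [x]
    return r + (x - r) ** 3     # [x] + (x - [x])^3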
Preferably, the decoder in S4 comprises four convolution groups, each comprising a convolution layer, a batch normalization layer and an activation layer.
Preferably, the discriminator network in S6 is broadly similar in structure to the decoder: it comprises four convolution groups and a fully-connected layer, each convolution group comprising a convolution layer, a batch normalization layer and an activation layer, with a pooling layer additionally included in the last convolution group.
Preferably, the loss function of the discriminator network is the cross-entropy loss commonly used for discriminator networks. The generative network (encoder-decoder) uses as its loss function a weighted sum of the difference between the carrier image and the stego image, the difference between the embedded secret information and the extracted secret information, and the discriminator loss; the losses are computed and back-propagated to update the networks, so that adversarial training is carried out effectively. The discriminator loss is the usual cross-entropy function, expressed as:
L_d = -Σ_i [ y_i*log(y'_i) + (1 - y_i)*log(1 - y'_i) ]
where y'_i denotes the output label of the discriminator and y_i denotes the label of the original image.
The generative network is formed by the combination of the encoder and the decoder, and its loss function is expressed as:
L_g = λ_a*L_c + λ_b*L_m - β*L_d
where L_g denotes the loss of the generative adversarial network and is made up of three terms: L_c is the difference between the carrier image and the stego image, L_m is the difference between the embedded secret information and the extracted secret information, and L_d is the discriminator loss, used to counter the discriminator network; λ_a, λ_b and β are the weights of the three terms. The carrier/stego difference L_c and the embedded/extracted secret difference L_m can be expressed as:
L_c = α*MSE(c,s) + (1 - SSIM(c,s))
L_m = α*MSE(m,m') + (1 - SSIM(m,m'))
where c denotes the carrier image, s the stego image, m the embedded secret information and m' the extracted secret information, and MSE is the mean squared error between two objects. MSE is expressed as:
MSE(x,y) = ||x - y||^2
SSIM measures the structural similarity of two objects and ranges over [0, 1]: the closer SSIM is to 1, the more similar the two objects, and SSIM = 1 means the two objects are identical. SSIM is expressed as:
SSIM(x,y) = [L(x,y)]^l * [C(x,y)]^m * [S(x,y)]^n
where L(x,y) compares luminance, C(x,y) compares contrast and S(x,y) compares structure. L(x,y), C(x,y) and S(x,y) are respectively expressed as:
L(x,y) = (2*μ_x*μ_y + C_1) / (μ_x^2 + μ_y^2 + C_1)
C(x,y) = (2*θ_x*θ_y + C_2) / (θ_x^2 + θ_y^2 + C_2)
S(x,y) = (θ_xy + C_3) / (θ_x*θ_y + C_3)
where μ_x and μ_y denote the means of x and y, θ_x and θ_y denote the standard deviations of x and y, θ_xy denotes the covariance of x and y, and C_1, C_2, C_3 are constants that avoid systematic errors caused by a zero denominator. The two loss functions are computed and back-propagated to obtain the update gradients, and the parameters are continuously adjusted to update the networks.
Preferably, in S8 the optimal model is selected by jointly considering the convergence of the extraction accuracy, the visual quality of the stego image and its resistance to detection.
Compared with the prior art, the scheme has the following advantages:
1) The secret information is embedded in the JPEG domain, which is better suited to JPEG-compressed images than existing methods.
2) Embedding and extraction are performed by networks, so the method is simple to implement and easy to use.
3) The image passes through an interference layer before entering the decoder, and the interference layer contains several different forms of interference, such as JPEG compression and Gaussian noise, which improves the robustness of the method.
4) The method is based on a generative adversarial network and, compared with prior work, improves steganographic security.
5) Being based on a generative adversarial network, the method requires little prior knowledge, since the networks learn it themselves; the design is simple and easy to realize.
Drawings
FIG. 1 is a flow chart of the adversarial training in the end-to-end JPEG-domain image steganography method based on a generative adversarial network;
FIG. 2 is a schematic diagram of an encoder network;
FIG. 3 is a flow chart of a DCT transform module for color images;
FIG. 4 is a flow chart of an IDCT transform module of a color image;
FIG. 5 is a flow chart of the actual JPEG compression;
FIG. 6 is a schematic diagram of a decoder network;
FIG. 7 is a schematic diagram of the discriminator network.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
This embodiment provides an end-to-end JPEG-domain image steganography method based on a generative adversarial network, in which the embedding and extraction of secret information are performed by a trained generative adversarial network comprising an encoder, a decoder and a discriminator. The specific training flow of the generative adversarial network is shown in FIG. 1 and mainly comprises the following steps:
S1: the secret information and the DCT coefficient matrix of the carrier image in the JPEG domain are input into the encoder, and the encoder outputs the DCT coefficient matrix of the corresponding stego image.
For the carrier image set, this embodiment selects 100,000 color images from the MSCOCO dataset. The MSCOCO dataset is processed to unify the image size to 256x256: images smaller than 256x256 are upsampled, and images larger than 256x256 are center-cropped. In this embodiment the images are compressed with quality factor QF = 75 (image compression quality factor), and their DCT coefficients are used as the input.
For the secret information, this embodiment adopts a randomly generated matrix of the same size as the carrier image, i.e. 256x256, whose entries can only take the values 0 or 1.
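A minimal preprocessing sketch for this embodiment is given below, using Pillow and NumPy as assumed tooling (the patent itself extracts the DCT coefficients with MATLAB): images are unified to 256x256 and re-saved as JPEG with quality factor 75, and the secret is a random 0/1 matrix of the same size.

import numpy as np
from PIL import Image

def prepare_cover(path, out_path, size=256, quality=75):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    if w < size or h < size:
        img = img.resize((size, size))                        # upsample small images
    else:
        left, top = (w - size) // 2, (h - size) // 2
        img = img.crop((left, top, left + size, top + size))  # center-crop large images
    img.save(out_path, "JPEG", quality=quality)               # QF = 75 compression

def random_secret(size=256, seed=None):
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(size, size)).astype(np.float32)  # values in {0, 1}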
The encoder comprises a preprocessing layer group and three convolution groups; the preprocessing group and each convolution group consist of a convolution layer, a batch normalization layer and an activation layer. The convolution layers use 3x3 kernels with stride 1 and padding 1, and the activation layers use the rectified linear unit (ReLU). The network structure of the encoder is shown in FIG. 2.
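A minimal PyTorch sketch of the encoder just described follows. The number of filters per group is not recoverable from this text, so the channel width `hidden` is an assumption, as is the plain 3x3 output convolution that maps back to the DCT planes.

import torch
import torch.nn as nn

def conv_group(in_ch, out_ch):
    # one convolution group: 3x3 conv (stride 1, padding 1) + batch normalization + ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    def __init__(self, dct_channels=3, hidden=64):
        super().__init__()
        # the preprocessing group takes the concatenated carrier DCT planes and the secret plane
        self.pre = conv_group(dct_channels + 1, hidden)
        self.body = nn.Sequential(
            conv_group(hidden, hidden),
            conv_group(hidden, hidden),
            conv_group(hidden, hidden),
        )
        self.out = nn.Conv2d(hidden, dct_channels, kernel_size=3, stride=1, padding=1)

    def forward(self, carrier_dct, secret):
        # carrier_dct: (N, C, 256, 256) DCT coefficients; secret: (N, 1, 256, 256) in {0, 1}
        x = torch.cat([carrier_dct, secret], dim=1)
        return self.out(self.body(self.pre(x)))   # stego DCT coefficient planes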
S2: the DCT coefficient matrix of the stego image is input into the IDCT module to obtain the spatial-domain stego image.
The IDCT module first inverse-quantizes the DCT coefficients of the stego image using the quantization table for compression quality factor QF = 75, i.e. multiplies them element-wise by the QF = 75 quantization matrix, to obtain the DCT coefficients of the stego image in YCbCr space; it then applies the IDCT and converts the result into RGB space to obtain the stego image in RGB space. The IDCT is essentially the inverse of the DCT. The detailed flow of the DCT module for color images is shown in FIG. 3, and the flow of the IDCT module is shown in FIG. 4.
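An illustrative NumPy/SciPy version of this step for a single luma or chroma plane is sketched below: each 8x8 block is inverse-quantized with the QF = 75 quantization table, a 2-D inverse DCT is applied, and the JPEG level shift is undone. The table `qtable_qf75` and the final YCbCr-to-RGB conversion are assumed/omitted; in training the same operations would be written with differentiable tensor ops.

import numpy as np
from scipy.fft import idctn

def blockwise_idct(dct_plane, qtable_qf75):
    """dct_plane: (H, W) quantized DCT coefficients, H and W multiples of 8.
    qtable_qf75: (8, 8) quantization table for quality factor 75."""
    h, w = dct_plane.shape
    out = np.empty((h, w), dtype=np.float32)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = dct_plane[i:i + 8, j:j + 8] * qtable_qf75   # inverse quantization
            out[i:i + 8, j:j + 8] = idctn(block, norm="ortho")  # 2-D inverse DCT
    return out + 128.0   # undo the JPEG level shift to get pixel values in roughly [0, 255]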
S3: the stego image is input into the interference layer to obtain the DCT coefficients of the noised stego image after interference is added.
The interference layer proposed by this method contains four kinds of interference commonly found in real life, namely cropping, Gaussian noise, salt-and-pepper noise and JPEG compression; for each training iteration one kind of interference or a superposition of several kinds may be selected, or no interference added. The robustness obtained by adding several superimposed kinds of interference may be better than adding no interference or a single kind of interference.
In this embodiment JPEG interference is added, with quantization performed using the quantization table for compression quality factor 75. Since the rounding operation during compression cuts off gradient propagation, the method employs the simulated rounding operation to preserve gradient flow. The actual JPEG compression flow is shown in FIG. 5.
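One possible sketch of the interference layer is given below: each call randomly applies none, one, or two of the perturbations named above to the spatial-domain stego image. The strengths (noise level, cropped region, occurrence probabilities) and the `jpeg_sim` module (a simulated JPEG compression built on simu_round) are assumptions; the DCT coefficients of the returned image are what feed the decoder.

import random
import torch

def interference_layer(img, jpeg_sim=None):
    """img: (N, C, H, W) stego image with pixel values in [0, 255]."""
    chosen = random.sample(["gaussian", "salt_pepper", "crop", "jpeg"],
                           k=random.randint(0, 2))
    out = img
    if "gaussian" in chosen:
        out = out + 2.0 * torch.randn_like(out)                          # additive Gaussian noise
    if "salt_pepper" in chosen:
        u = torch.rand_like(out)
        out = torch.where(u < 0.01, torch.zeros_like(out), out)          # pepper
        out = torch.where(u > 0.99, torch.full_like(out, 255.0), out)    # salt
    if "crop" in chosen:
        n, c, h, w = out.shape
        mask = torch.ones_like(out)
        mask[:, :, :h // 8, :w // 8] = 0.0                               # blank out a corner region
        out = out * mask
    if "jpeg" in chosen and jpeg_sim is not None:
        out = jpeg_sim(out)                                              # simulated JPEG compression
    return out.clamp(0.0, 255.0)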
S4: the DCT coefficients of the noised stego image are input into the decoder to obtain the extracted secret information.
The decoder comprises four convolution groups, each consisting of a convolution layer, a batch normalization layer and an activation layer. The convolution layers use 3x3 kernels with stride 1 and padding 1, and the activation layers use the leaky rectified linear unit (LeakyReLU). The network structure of the decoder is shown in FIG. 6.
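A decoder sketch in the same style as the encoder example above follows; the channel width and the sigmoid output head are assumptions (the text specifies only the four conv/batch-norm/LeakyReLU groups with 3x3 kernels, stride 1 and padding 1).

import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, dct_channels=3, hidden=64):
        super().__init__()
        def group(i, o):
            return nn.Sequential(
                nn.Conv2d(i, o, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(o),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            group(dct_channels, hidden),
            group(hidden, hidden),
            group(hidden, hidden),
            group(hidden, 1),
        )

    def forward(self, noised_dct):
        # (N, 1, 256, 256) map; thresholding at 0.5 recovers the binary secret
        return torch.sigmoid(self.net(noised_dct))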
S5: the DCT coefficient matrix of the carrier image from S1 is input into the IDCT module to obtain the spatial-domain carrier image.
S6: the spatial-domain carrier image from step S5 and the spatial-domain stego image obtained in step S2 are input into the discriminator network, which performs binary classification of carrier versus stego images; training uses the classification error as the loss function with back-propagation, the networks are updated according to the gradients, and the performance of the generative adversarial network is continuously improved.
The discriminator network is broadly similar in structure to the decoder and comprises four convolution groups and a fully-connected layer; each convolution group consists of a convolution layer, a batch normalization layer and an activation layer, and a pooling layer is added to the last convolution group. The convolution layers all use 3x3 kernels with stride 2 and padding 1. The activation layers of the first three convolution groups use LeakyReLU, while the activation layer of the last convolution group uses ReLU and is followed by an adaptive average pooling layer. The structure of the discriminator network is shown in FIG. 7.
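A discriminator sketch matching this description is given below; the channel widths and the sigmoid output are assumptions.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        def group(i, o, act):
            return nn.Sequential(
                nn.Conv2d(i, o, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(o),
                act,
            )
        self.features = nn.Sequential(
            group(in_channels, hidden, nn.LeakyReLU(0.2, inplace=True)),
            group(hidden, hidden, nn.LeakyReLU(0.2, inplace=True)),
            group(hidden, hidden, nn.LeakyReLU(0.2, inplace=True)),
            nn.Sequential(
                nn.Conv2d(hidden, hidden, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(hidden),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),      # adaptive average pooling in the last group
            ),
        )
        self.fc = nn.Linear(hidden, 1)        # fully-connected layer

    def forward(self, img):
        x = self.features(img).flatten(1)     # (N, hidden)
        return torch.sigmoid(self.fc(x))      # probability that the input is a carrier image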
The loss function of the discriminator network is the cross-entropy loss commonly used for discriminator networks. The generative network is formed by the combination of the encoder and the decoder; a weighted sum of the difference between the carrier image and the stego image, the difference between the embedded secret information and the extracted secret information, and the discriminator loss is selected as its loss function, and the losses are computed and back-propagated to update the networks, so that adversarial training is carried out effectively. The discriminator loss is the usual cross-entropy function:
L_d = -Σ_i [ y_i*log(y'_i) + (1 - y_i)*log(1 - y'_i) ]
where y'_i denotes the output label of the discriminator and y_i denotes the label of the original image.
The generative network is formed by the combination of the encoder and the decoder, and its loss function is expressed as:
L_g = λ_a*L_c + λ_b*L_m - β*L_d
where L_g denotes the loss of the generative adversarial network and is made up of three terms: L_c is the difference between the carrier image and the stego image, L_m is the difference between the embedded secret information and the extracted secret information, and L_d is the discriminator loss, used to counter the discriminator network; in this embodiment λ_a = 1, λ_b = 1 and β = 1 are chosen as the weights of the three terms. The carrier/stego difference L_c and the embedded/extracted secret difference L_m are expressed as:
L_c = α*MSE(c,s) + (1 - SSIM(c,s))
L_m = α*MSE(m,m') + (1 - SSIM(m,m'))
where c denotes the carrier image, s the stego image, m the embedded secret information and m' the extracted secret information, and MSE is the mean squared error between two objects; this embodiment chooses α = 1. MSE may be expressed as:
MSE(x,y) = ||x - y||^2
SSIM measures the structural similarity of two objects and ranges over [0, 1]: the closer SSIM is to 1, the more similar the two objects, and SSIM = 1 means the two objects are identical. SSIM is expressed as:
SSIM(x,y) = [L(x,y)]^l * [C(x,y)]^m * [S(x,y)]^n
where L(x,y) compares luminance, C(x,y) compares contrast and S(x,y) compares structure; in the actual computation this embodiment chooses l = m = n = 1. L(x,y), C(x,y) and S(x,y) are respectively expressed as:
L(x,y) = (2*μ_x*μ_y + C_1) / (μ_x^2 + μ_y^2 + C_1)
C(x,y) = (2*θ_x*θ_y + C_2) / (θ_x^2 + θ_y^2 + C_2)
S(x,y) = (θ_xy + C_3) / (θ_x*θ_y + C_3)
where μ_x and μ_y denote the means of x and y, θ_x and θ_y denote the standard deviations of x and y, θ_xy denotes the covariance of x and y, and C_1, C_2, C_3 are constants that avoid systematic errors caused by a zero denominator; in the actual computation C_3 = C_2/2 is generally taken. The two loss functions are computed and back-propagated to obtain the update gradients, and the parameters are continuously adjusted to update the networks.
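A sketch of this loss computation using the global-statistics form of SSIM written above follows. The K1 = 0.01 and K2 = 0.03 defaults behind C_1 and C_2 are assumptions about values the text leaves open; C_3 = C_2/2 follows the text above.

import torch

def ssim_global(x, y, data_range=255.0):
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    std_x, std_y = x.std(unbiased=False), y.std(unbiased=False)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    l = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)          # luminance term
    c = (2 * std_x * std_y + c2) / (std_x ** 2 + std_y ** 2 + c2)      # contrast term
    s = (cov_xy + c2 / 2) / (std_x * std_y + c2 / 2)                   # structure term, C3 = C2/2
    return l * c * s

def image_loss(a, b, alpha=1.0, data_range=255.0):
    # alpha * MSE + (1 - SSIM), the form of L_c and L_m above
    return alpha * torch.mean((a - b) ** 2) + (1.0 - ssim_global(a, b, data_range))

def generator_loss(carrier, stego, secret, secret_hat, loss_d,
                   lambda_a=1.0, lambda_b=1.0, beta=1.0):
    l_c = image_loss(carrier, stego, data_range=255.0)      # carrier vs. stego image
    l_m = image_loss(secret, secret_hat, data_range=1.0)    # embedded vs. extracted secret
    return lambda_a * l_c + lambda_b * l_m - beta * loss_d  # L_g = la*L_c + lb*L_m - beta*L_d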
S7: steps S1-S6 are repeated until a trained generative adversarial network is obtained.
Example 2
This embodiment performs JPEG-domain image steganography with the generative adversarial network trained in Example 1, and specifically comprises the following step:
S8: after training of the generative adversarial networks is finished, the network with the best performance is selected from the several trained networks according to the accuracy of the extracted information and the security of the stego image; the carrier image and the secret information are fed into the trained encoder to generate the stego image, and the stego image is fed into the decoder to obtain the extracted secret information.
In step S8, the best generative adversarial network is selected by jointly considering the convergence of the extraction accuracy, the visual quality of the stego image and its resistance to detection. After repeated evaluation and verification, this embodiment selects the model from Example 1 at 125,000 iterations, which gives the best performance.
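For illustration, the selected networks might be used for embedding and extraction as sketched below; the module and helper names follow the earlier sketches and are assumptions.

import torch

@torch.no_grad()
def embed(encoder, carrier_dct, secret):
    encoder.eval()
    return encoder(carrier_dct, secret)      # stego DCT coefficients, entropy-coded into a JPEG file

@torch.no_grad()
def extract(decoder, stego_dct):
    decoder.eval()
    probs = decoder(stego_dct)
    return (probs > 0.5).float()             # recovered binary secret message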
It should be understood that the above embodiments of the present invention are provided by way of illustration only and do not limit the scope of the invention. Other variations or modifications based on the above description will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention is intended to be covered by the following claims.

Claims (8)

1. An end-to-end JPEG-domain image steganography method based on a generative adversarial network, characterized in that the embedding and extraction of secret information are performed by the generative adversarial network, wherein the generative adversarial network comprises an encoder, a decoder and a discriminator, and an interference layer is additionally added to simulate the interference commonly encountered in a real transmission channel; the training of the generative adversarial network comprises the following steps:
s1: inputting the secret information and the DCT coefficient matrix of the carrier image in the JPEG domain into the encoder, the encoder outputting the DCT coefficient matrix of the corresponding stego image;
s2: inputting the DCT coefficient matrix of the stego image into an IDCT module to obtain a spatial-domain stego image;
s3: inputting the spatial-domain stego image into the interference layer to obtain the DCT coefficients of the noised stego image after interference is added;
s4: inputting the DCT coefficients of the noised stego image generated in S3 into the decoder to obtain the extracted secret information;
s5: inputting the DCT coefficient matrix of the carrier image from S1 into the IDCT module to obtain a spatial-domain carrier image;
s6: inputting the spatial-domain carrier image from S5 and the spatial-domain stego image obtained in S2 into the discriminator, the discriminator performing binary classification of carrier versus stego images, taking the resulting classification error as a loss function and back-propagating the loss function to update the generative adversarial network;
s7: repeating steps S1-S6 until a trained generative adversarial network is obtained;
s8: selecting the generative adversarial network with the best performance according to the accuracy of the extracted information and the security of the stego image, feeding the carrier image and the secret information into the trained encoder to generate a stego image, and feeding the stego image into the decoder to obtain the extracted secret information.
2. The end-to-end JPEG-domain image steganography method based on a generative adversarial network according to claim 1, wherein the DCT coefficient matrix of the carrier image in S1 is obtained by MATLAB processing, the secret information is a randomly generated binary image of the same size as the carrier image, and the encoder comprises a preprocessing layer and three convolution groups, each convolution group comprising a convolution layer, a batch normalization layer and an activation layer.
3. The end-to-end JPEG-domain image steganography method based on a generative adversarial network according to claim 1, wherein the IDCT module in S2 performs inverse quantization on the DCT coefficient matrix of the stego image to obtain the DCT coefficients of the stego image in YCbCr space, applies the IDCT to these coefficients, and converts the result into the RGB color space to obtain the spatial-domain stego image.
4. The end-to-end JPEG-domain image steganography method based on a generative adversarial network according to claim 1, wherein the interference layer in S3 comprises four kinds of interference commonly found in real life, namely cropping, Gaussian noise, salt-and-pepper noise and JPEG compression, and one or more kinds of interference are selected for each training iteration; the robustness obtained by adding several superimposed kinds of interference may be better than adding no interference or a single kind of interference.
5. The end-to-end JPEG-domain image steganography method based on a generative adversarial network according to claim 4, wherein JPEG compression comprises the following three steps:
a DCT transform is applied to the spatial-domain image, the coefficients are quantized and rounded, and entropy coding finally yields the compressed image; however, since the rounding operation would make the gradients vanish during adversarial training, a simulated rounding function is introduced that approximates rounding while preserving gradient flow:
simu_round(x) = [x] + (x - [x])^3
where x denotes a DCT coefficient and simu_round(x) denotes the DCT coefficient after simulated rounding.
6. The end-to-end JPEG-domain image steganography method based on a generative adversarial network according to claim 1, wherein the decoder in S4 comprises four convolution groups, each comprising a convolution layer, a batch normalization layer and an activation layer.
7. The end-to-end JPEG-domain image steganography method based on a generative adversarial network according to claim 1, wherein the discriminator in S6 comprises four convolution groups and a fully-connected layer, each convolution group comprising a convolution layer, a batch normalization layer and an activation layer, with a pooling layer added to the last convolution group;
the loss function of the discriminator is the cross-entropy loss commonly used for discriminator networks; the generative network is formed by the combination of the encoder and the decoder, and a weighted sum of the difference between the carrier image and the stego image, the difference between the embedded secret information and the extracted secret information, and the discriminator loss is selected as its loss function; the losses are computed and back-propagated to update the networks, so that adversarial training is carried out effectively; the discriminator loss is the usual cross-entropy function:
L_d = -Σ_i [ y_i*log(y'_i) + (1 - y_i)*log(1 - y'_i) ]
wherein y'_i denotes the output label of the discriminator and y_i denotes the label of the original image;
the generative network is formed by the combination of the encoder and the decoder, and its loss function is expressed as:
L_g = λ_a*L_c + λ_b*L_m - β*L_d
wherein L_g denotes the loss of the generative adversarial network and is made up of three terms: L_c is the difference between the carrier image and the stego image, L_m is the difference between the embedded secret information and the extracted secret information, and L_d is the discriminator loss, used to counter the discriminator network; λ_a, λ_b and β are the weights of the three terms; the carrier/stego difference L_c and the embedded/extracted secret difference L_m are expressed as:
L_c = α*MSE(c,s) + (1 - SSIM(c,s))
L_m = α*MSE(m,m') + (1 - SSIM(m,m'))
wherein c denotes the carrier image, s the stego image, m the embedded secret information and m' the extracted secret information, and MSE is the mean squared error between two objects, expressed as:
MSE(x,y) = ||x - y||^2
SSIM measures the structural similarity of two objects and ranges over [0, 1]: the closer SSIM is to 1, the more similar the two objects, and SSIM = 1 means the two objects are identical; SSIM is expressed as:
SSIM(x,y) = [L(x,y)]^l * [C(x,y)]^q * [S(x,y)]^n
wherein L(x,y) compares luminance, C(x,y) compares contrast and S(x,y) compares structure, and L(x,y), C(x,y), S(x,y) are respectively expressed as:
L(x,y) = (2*μ_x*μ_y + C_1) / (μ_x^2 + μ_y^2 + C_1)
C(x,y) = (2*θ_x*θ_y + C_2) / (θ_x^2 + θ_y^2 + C_2)
S(x,y) = (θ_xy + C_3) / (θ_x*θ_y + C_3)
wherein μ_x and μ_y denote the means of x and y, θ_x and θ_y denote the standard deviations of x and y, θ_xy denotes the covariance of x and y, and C_1, C_2, C_3 are constants that avoid systematic errors caused by a zero denominator; the networks are updated by computing the two loss functions, back-propagating to obtain the update gradients, and continuously adjusting the parameters.
8. The end-to-end JPEG-domain image steganography method based on a generative adversarial network according to claim 1, wherein in S8 the optimal model is selected by jointly considering the convergence of the extraction accuracy, the visual quality of the stego image and its resistance to detection.
CN202011534708.7A 2020-12-22 2020-12-22 End-to-end JPEG-domain image steganography method based on a generative adversarial network Active CN112634117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011534708.7A CN112634117B (en) 2020-12-22 2020-12-22 End-to-end JPEG-domain image steganography method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011534708.7A CN112634117B (en) End-to-end JPEG-domain image steganography method based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN112634117A CN112634117A (en) 2021-04-09
CN112634117B (en) 2023-05-05

Family

ID=75321443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011534708.7A Active CN112634117B (en) 2020-12-22 2020-12-22 End-to-end JPEG-domain image steganography method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN112634117B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926607B (en) * 2021-04-28 2023-02-17 河南大学 Two-branch network image steganography framework and method based on convolutional neural network
CN113612898B (en) * 2021-05-08 2022-11-08 上海大学 Robust covert communication device for resisting JPEG image downsampling
CN113284033A (en) * 2021-05-21 2021-08-20 湖南大学 Large-capacity image information hiding technology based on confrontation training
CN113326531B (en) * 2021-06-29 2022-07-26 湖南汇视威智能科技有限公司 Robust efficient distributed face image steganography method
CN114782697B (en) * 2022-04-29 2023-05-23 四川大学 Self-adaptive steganography detection method for anti-domain
CN115086674B (en) * 2022-06-16 2024-04-02 西安电子科技大学 Image steganography method based on generation of countermeasure network
CN117876273B (en) * 2024-03-11 2024-06-07 南京信息工程大学 Robust image processing method based on reversible generation countermeasure network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008079269A2 (en) * 2006-12-19 2008-07-03 Genego, Inc. Novel methods for functional analysis of high-throughput experimental data and gene groups identified therfrom
AU2018101528A4 (en) * 2018-10-14 2018-11-15 Li, Junjie Mr Camouflage image encryption based on variational auto-encoder(VAE) and discriminator
CN108921764A (en) * 2018-03-15 2018-11-30 中山大学 A kind of image latent writing method and system based on generation confrontation network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008079269A2 (en) * 2006-12-19 2008-07-03 Genego, Inc. Novel methods for functional analysis of high-throughput experimental data and gene groups identified therfrom
CN108921764A (en) * 2018-03-15 2018-11-30 中山大学 A kind of image latent writing method and system based on generation confrontation network
AU2018101528A4 (en) * 2018-10-14 2018-11-15 Li, Junjie Mr Camouflage image encryption based on variational auto-encoder(VAE) and discriminator

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Spatial Image Steganography Based on Generative Adversarial Network; Jianhua Yang et al.; arXiv:1804.07939v1; 2018-04-21; 1-7 *
A DCT-based digital image watermarking algorithm; 吴琦; 《电子工程师》; 2008-12-15 (No. 12); 45-47 *

Also Published As

Publication number Publication date
CN112634117A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN112634117B (en) End-to-end JPEG-domain image steganography method based on a generative adversarial network
CN110334805B (en) JPEG domain image steganography method and system based on generation countermeasure network
CN113222800B (en) Robust image watermark embedding and extracting method and system based on deep learning
Huan et al. Exploring stable coefficients on joint sub-bands for robust video watermarking in DT CWT domain
CN105469353B (en) The embedding grammar and device and extracting method and device of watermarking images
CN110232650B (en) Color image watermark embedding method, detection method and system
CN107240061A (en) A kind of watermark insertion, extracting method and device based on Dynamic BP neural
CN109658322B (en) A kind of large capacity image latent writing method and secret information extraction method
CN114339258B (en) Information steganography method and device based on video carrier
Ishtiaq et al. Adaptive watermark strength selection using particle swarm optimization
CN112132737B (en) Image robust steganography method without reference generation
CN111681154A (en) Color image steganography distortion function design method based on generation countermeasure network
CN104766269A (en) Spread transform dither modulation watermarking method based on JND brightness model
Hamamoto et al. Image watermarking technique using embedder and extractor neural networks
Ernawan et al. A blind multiple watermarks based on human visual characteristics
CN115908095A (en) Hierarchical attention feature fusion-based robust image watermarking method and system
Yousfi et al. JPEG steganalysis detectors scalable with respect to compression quality
Dai et al. A novel steganography algorithm based on quantization table modification and image scrambling in DCT domain
CN113628090B (en) Anti-interference message steganography and extraction method, system, computer equipment and terminal
Zhang et al. A blind watermarking system based on deep learning model
CN116342362B (en) Deep learning enhanced digital watermark imperceptibility method
CN108492275B (en) No-reference stereo image quality evaluation method based on deep neural network
Yu et al. A channel selection rule for YASS
CN116883222A (en) JPEG-compression-resistant robust image watermarking method based on multi-scale automatic encoder
CN109255748B (en) Digital watermark processing method and system based on double-tree complex wavelet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240801

Address after: 710000 B2-216, 7th Floor, Xixian Talent Building, Century Building, Fengdong New City, Xi'an City, Shaanxi Province

Patentee after: Xi'an Chenzhen Zhishan Information Technology Co.,Ltd.

Country or region after: China

Address before: 510275 No. 135 West Xingang Road, Guangzhou, Guangdong, Haizhuqu District

Patentee before: SUN YAT-SEN University

Country or region before: China
