CN117391920A - High-capacity steganography method and system based on RGB channel differential plane


Publication number
CN117391920A
Authority
CN
China
Prior art keywords: image, steganography, loss function, network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311416882.5A
Other languages
Chinese (zh)
Inventor
马宾
王浩城
马睿和
李琦
王晓雨
王春鹏
咸永锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology
Priority to CN202311416882.5A
Publication of CN117391920A
Legal status: Pending


Classifications

    • G06T1/0021 Image watermarking
    • G06N3/045 Combinations of networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/0475 Generative networks
    • G06N3/094 Adversarial learning
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V10/776 Validation; Performance evaluation
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a high-capacity steganography method and system based on RGB channel difference planes, belongs to the technical field of image processing, and aims to solve the technical problem of how to carry out image steganography based on RGB channel differences while improving the visual quality of steganographic images and their resistance to steganalysis. The method comprises the following steps: constructing a steganographic network model, wherein the steganographic network model comprises a channel separation part and a generator; constructing a steganalysis network model based on a spatial-domain steganalysis model; constructing an extraction network model based on a CNN model; constructing a discrimination loss function based on the output of the steganalysis network model, constructing a pixel-level loss function based on the carrier image, the steganographic image and the extracted image as the steganographic loss function, and jointly weighting the discrimination loss function and the steganographic loss function to obtain the total loss function of the generator; iteratively optimizing the generator based on the total loss function; and constructing the trained steganographic network model from the channel separation part and the trained generator.

Description

High-capacity steganography method and system based on RGB channel differential plane
Technical Field
The invention relates to the technical field of image processing, in particular to a high-capacity steganography method and system based on an RGB channel differential plane.
Background
Steganography is an important means in the field of information hiding: it embeds secret information in fine details of a carrier so that its presence is imperceptible to humans. Unlike encryption, steganography must ensure not only the confidentiality of information but also its concealment; even if an attacker obtains the carrier, the secret information within it is difficult to perceive. With the continuous development of steganographic algorithms and models, steganography has evolved from the earliest text steganography to today's multimedia steganography in images, audio, video and other media. In the current era of digital media technology, the dominant steganographic medium is still the image.
For steganography in an RGB image, embedding the secret image often distorts the carrier, leading to problems such as low embedding capacity and poor visual quality after embedding. In RGB three-channel image steganography, however, the differences between the RGB channels of the carrier image can be used to hide secret information effectively. The RGB channels differ from one another only slightly, and these small differences can store information with little impact on the visual appearance of the image. In particular, embedding information into the differences between RGB channels enables image steganography without greatly affecting the overall color and brightness of the image. This makes the steganography harder to perceive while also increasing the information storage capacity. Therefore, differencing the RGB channels of the carrier image and then performing steganography on the resulting plane yields better results.
For high-capacity image steganography, how to perform image steganography based on RGB channel differences while improving the visual quality of steganographic images and their resistance to steganalysis is a technical problem to be solved.
Disclosure of Invention
The technical task of the invention is to provide a high-capacity steganography method and system based on an RGB channel difference plane, solving the technical problem of how to perform image steganography based on RGB channel differences while improving the visual quality of steganographic images and their resistance to steganalysis.
In a first aspect, the present invention provides a high-capacity steganography method based on an RGB channel differential plane, including the steps of:
steganographic network construction: constructing a steganographic network model comprising a channel separation part and a generator, wherein the channel separation part takes a carrier image as input and performs channel separation and channel differencing on it to obtain a difference plane, and the generator, a network model built on a U-Net backbone, takes the difference plane and a secret image as input and embeds the secret image into the difference plane to obtain a steganographic image;
steganalysis network construction: constructing a steganalysis network model based on a spatial-domain steganalysis model, which takes a steganographic image and the corresponding carrier image as input and, by judging whether each input is a carrier image containing a secret image, predicts and outputs the label class and probability value of the input image, the label classes being steganographic image and carrier image;
extraction network construction: constructing an extraction network model based on a CNN model, which takes a steganographic image as input and extracts the secret image from it, outputting the result as the extracted image;
loss function construction: constructing a discrimination loss function based on the output of the steganalysis network model, constructing a pixel-level loss function based on the carrier image, the steganographic image and the extracted image as the steganographic loss function, and jointly weighting the discrimination loss function and the steganographic loss function to obtain the total loss function of the generator;
model training: iteratively optimizing the generator based on the total loss function to obtain the trained generator, while iteratively optimizing the extraction network model based on the steganographic loss function and iteratively optimizing the steganalysis network model based on the discrimination loss function;
image steganography: constructing the trained steganographic network model from the channel separation part and the trained generator, and inputting the carrier image and the secret image into it to obtain the steganographic image.
Preferably, for the steganographic network model, the channel separation section is configured to perform the following:
Taking an original carrier image as input, carrying out channel separation on the carrier image to obtain three single-channel gray scale images, namely an R-channel gray scale image, a G-channel gray scale image and a B-channel gray scale image;
carrying out channel difference on the R channel gray level image and the G channel gray level image to obtain a difference plane;
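The channel separation and differencing above can be sketched minimally as follows; the function name is illustrative, and an (H, W, 3) uint8 RGB carrier layout is assumed:

```python
import numpy as np

def difference_plane(cover_rgb):
    """Separate an RGB carrier image into its R, G, B gray-scale planes
    and return the R-G difference plane. Arithmetic is done in int16 so
    that negative differences are not wrapped by uint8 overflow."""
    planes = cover_rgb.astype(np.int16)
    r, g, b = planes[..., 0], planes[..., 1], planes[..., 2]  # channel separation
    return r - g  # difference plane, values in [-255, 255]
```

With an R pixel of 10 and a G pixel of 3 the plane holds 7; with R = 3 and G = 10 it holds -7, which is why the signed dtype matters.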
the generator is a U-shaped network model comprising an encoding network structure and a decoding network structure; the encoding network structure comprises N/2 downsampling encoding units, each consisting of a convolution layer for downsampling, a batch normalization layer and a ReLU activation function; the decoding network structure comprises N/2 upsampling decoding units, each consisting of a deconvolution layer for upsampling, a batch normalization layer and a ReLU activation function, the last decoding unit further comprising a Sigmoid activation function; for the decoding network structure, the feature maps output by the ith layer and the (N-i)th layer are concatenated by skip connection to serve as input of the (N-i+1)th layer, where N is an even number and 0 < i < N/2.
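The skip-connection indexing described above (layer i paired with layer N-i, feeding layer N-i+1) can be sketched as follows; this enumerates only the pairing, not the actual feature tensors:

```python
def skip_pairs(n):
    """For a U-shaped generator with N layers (N even), list the skip
    connections: the feature map of encoder layer i is concatenated
    with the output of layer N-i and fed into layer N-i+1,
    for 0 < i < N/2."""
    assert n % 2 == 0, "N must be even"
    return [(i, n - i, n - i + 1) for i in range(1, n // 2)]
```

For an 8-layer generator this gives (1, 7, 8), (2, 6, 7) and (3, 5, 6): encoder layers 1-3 feed their detail into the last three decoding layers.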
Preferably, the steganalysis network model comprises a first spatial-domain steganalyzer built on the XU-Net network model and a second spatial-domain steganalyzer built on the SR-Net network model;
the discrimination loss function of the steganalysis network model is a weighted sum of the discrimination loss function of the first spatial-domain steganalyzer and that of the second spatial-domain steganalyzer;
the discrimination loss function L_SDX of the first spatial-domain steganalyzer and the discrimination loss function L_SDS of the second spatial-domain steganalyzer are calculated as:

L_SDX = -∑_{i=1}^{2} x'_i · log(x_i)

L_SDS = -∑_{i=1}^{2} y'_i · log(y_i)

where x_i denotes the softmax-layer output of the first spatial-domain steganalyzer, x_1 the probability of the original carrier image, x_2 the probability of the steganographic image, x'_1 the label corresponding to the original carrier image and x'_2 the label corresponding to the steganographic image; y_i denotes the softmax-layer output of the second spatial-domain steganalyzer, y_1 the probability of the original carrier image, y_2 the probability of the steganographic image, y'_1 the label corresponding to the original carrier image and y'_2 the label corresponding to the steganographic image.
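A cross-entropy form consistent with the softmax outputs and one-hot labels described above can be sketched numerically; this is an assumed reconstruction, since the patent's formula image is not reproduced here:

```python
import numpy as np

def discrimination_loss(probs, labels):
    """L = -sum_i label_i * log(prob_i) over the two classes
    (carrier, steganographic). `probs` is the 2-class softmax output,
    `labels` the one-hot ground truth; probabilities are clipped to
    avoid log(0)."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(np.asarray(labels, dtype=float) * np.log(p)))
```

An uncertain discriminator (0.5/0.5) pays log 2 per sample, while a confident, correct one pays nearly zero.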
Preferably, the extraction network model comprises a multi-layer decoding structure, each decoding layer consisting of a convolution layer, a batch normalization layer and a ReLU activation function; in the last decoding layer, a Sigmoid activation function is inserted between the convolution layer and the batch normalization layer.
Preferably, the steganographic loss function includes a mean square error loss function and a structural similarity loss function,
The mean square error loss function MSE_loss is a weighted sum of MSE_Sloss and MSE_Eloss, where MSE_Sloss denotes the mean square error loss between the carrier image and the steganographic image and MSE_Eloss denotes the mean square error loss between the steganographic image and the extracted image, calculated as:

MSE_Sloss = (1/n) ∑_{i=1}^{n} (x_i - x'_i)²

MSE_Eloss = (1/n) ∑_{i=1}^{n} (y_i - y'_i)²

where x_i denotes the pixel values of the carrier image, x'_i the pixel values of the steganographic image, y_i the pixel values of the steganographic image, y'_i the pixel values of the extracted image, and n the number of pixels in the image;
the structural similarity loss function SSIM_loss is a weighted sum of SSIM_Sloss and SSIM_Eloss, where SSIM_Sloss denotes the structural similarity loss between the carrier image and the steganographic image and SSIM_Eloss denotes the structural similarity loss between the steganographic image and the extracted image, based on the structural similarity indices:

SSIM(x, x') = (2μ_x μ_x' + C_1)(2σ_xx' + C_2) / ((μ_x² + μ_x'² + C_1)(σ_x² + σ_x'² + C_2))

SSIM(y, y') = (2μ_y μ_y' + C_3)(2σ_yy' + C_4) / ((μ_y² + μ_y'² + C_3)(σ_y² + σ_y'² + C_4))

where μ_x and σ_x² denote the mean and variance of the carrier image, μ_x' and σ_x'² the mean and variance of the steganographic image, μ_y and σ_y² the mean and variance of the steganographic image, μ_y' and σ_y'² the mean and variance of the extracted image, σ_xx' the covariance of the carrier image and the steganographic image, and σ_yy' the covariance of the steganographic image and the extracted image; each constant C_i (i = 1, 2, 3, 4) is the square of the product of a coefficient much smaller than 1 and the dynamic range of the image gray values;
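The structural similarity index underlying SSIM_Sloss and SSIM_Eloss can be sketched globally over one gray-scale image pair. The constants follow the text's description of C_i (square of a coefficient much smaller than 1 times the dynamic range); the coefficient values 0.01 and 0.03 are illustrative assumptions, and this is a whole-image sketch rather than a windowed SSIM:

```python
import numpy as np

def ssim_index(x, y, k1=0.01, k2=0.03, dyn_range=255.0):
    """Global structural similarity of two gray-scale images from their
    means, variances and covariance, stabilized by C1 = (k1*L)^2 and
    C2 = (k2*L)^2 where L is the gray-value dynamic range."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    c1 = (k1 * dyn_range) ** 2
    c2 = (k2 * dyn_range) ** 2
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

Identical images score exactly 1; the index drops as structure diverges, which is what makes it usable as a training loss term.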
correspondingly, the total loss function of the generator is:
loss = MSE_loss + SSIM_loss + α·L_D

where L_D is the steganalysis adversarial loss and α is its weight.
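The joint weighting of the generator's total loss can be sketched as follows; the component weights w_s and w_e are illustrative placeholders, since the text does not give the weighting values:

```python
def generator_total_loss(mse_sloss, mse_eloss, ssim_sloss, ssim_eloss,
                         l_d, alpha, w_s=0.5, w_e=0.5):
    """loss = MSE_loss + SSIM_loss + alpha * L_D, where MSE_loss and
    SSIM_loss are weighted sums of their carrier/stego (S) and
    stego/extracted (E) terms."""
    mse_loss = w_s * mse_sloss + w_e * mse_eloss
    ssim_loss = w_s * ssim_sloss + w_e * ssim_eloss
    return mse_loss + ssim_loss + alpha * l_d
```

Raising α pushes the generator toward fooling the steganalyzers at the cost of pixel fidelity; lowering it does the opposite.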
In a second aspect, the present invention provides a high-capacity steganography system based on an RGB channel differential plane for implementing image steganography by the high-capacity steganography method based on an RGB channel differential plane according to any one of the first aspect. The system comprises a steganographic network construction module, a steganalysis network construction module, an extraction network construction module, a loss function construction module, a model training module and an image steganography module.
the steganography network construction module is used for executing the following steps: the method comprises the steps of constructing a steganography network model, wherein the steganography network model comprises a channel separation part and a generator, the channel separation part is used for taking a carrier image as input, carrying out channel separation and channel differencing on the carrier image to obtain a difference value plane, and the generator is a network model constructed based on a U-Net network and is used for taking the difference value plane and a secret image as input, embedding the secret image into the difference value plane to obtain a secret image;
the steganalysis network construction module is used for executing the following steps: constructing a steganalysis network model based on a spatial-domain steganalysis model, which takes a steganographic image and the corresponding carrier image as input and, by judging whether each input is a carrier image containing a secret image, predicts and outputs the label class and probability value of the input image, the label classes being steganographic image and carrier image;
the extraction network construction module is used for executing the following steps: constructing an extraction network model based on a CNN model, which takes a steganographic image as input and extracts the secret image from it, outputting the result as the extracted image;
the loss function construction module is used for executing the following steps: constructing a discrimination loss function based on the output of the steganalysis network model, constructing a pixel-level loss function based on the carrier image, the steganographic image and the extracted image as the steganographic loss function, and jointly weighting the discrimination loss function and the steganographic loss function to obtain the total loss function of the generator;
the model training module is used for executing the following steps: iteratively optimizing the generator based on the total loss function to obtain the trained generator, while iteratively optimizing the extraction network model based on the steganographic loss function and iteratively optimizing the steganalysis network model based on the discrimination loss function;
the image steganography module is used for executing the following steps: constructing the trained steganographic network model from the channel separation part and the trained generator, and inputting the carrier image and the secret image into it to obtain the steganographic image.
Preferably, for the steganographic network model, the channel separation section is configured to perform the following:
taking an original carrier image as input, carrying out channel separation on the carrier image to obtain three single-channel gray scale images, namely an R-channel gray scale image, a G-channel gray scale image and a B-channel gray scale image;
carrying out channel difference on the R channel gray level image and the G channel gray level image to obtain a difference plane;
the generator is a U-shaped network model comprising an encoding network structure and a decoding network structure; the encoding network structure comprises N/2 downsampling encoding units, each consisting of a convolution layer for downsampling, a batch normalization layer and a ReLU activation function; the decoding network structure comprises N/2 upsampling decoding units, each consisting of a deconvolution layer for upsampling, a batch normalization layer and a ReLU activation function, the last decoding unit further comprising a Sigmoid activation function; for the decoding network structure, the feature maps output by the ith layer and the (N-i)th layer are concatenated by skip connection to serve as input of the (N-i+1)th layer, where N is an even number and 0 < i < N/2.
Preferably, the steganalysis network model comprises a first spatial-domain steganalyzer built on the XU-Net network model and a second spatial-domain steganalyzer built on the SR-Net network model;
the discrimination loss function of the steganalysis network model is a weighted sum of the discrimination loss function of the first spatial-domain steganalyzer and that of the second spatial-domain steganalyzer;
the discrimination loss function L_SDX of the first spatial-domain steganalyzer and the discrimination loss function L_SDS of the second spatial-domain steganalyzer are calculated as:

L_SDX = -∑_{i=1}^{2} x'_i · log(x_i)

L_SDS = -∑_{i=1}^{2} y'_i · log(y_i)

where x_i denotes the softmax-layer output of the first spatial-domain steganalyzer, x_1 the probability of the original carrier image, x_2 the probability of the steganographic image, x'_1 the label corresponding to the original carrier image and x'_2 the label corresponding to the steganographic image; y_i denotes the softmax-layer output of the second spatial-domain steganalyzer, y_1 the probability of the original carrier image, y_2 the probability of the steganographic image, y'_1 the label corresponding to the original carrier image and y'_2 the label corresponding to the steganographic image.
Preferably, the extraction network model comprises a multi-layer decoding structure, each decoding layer consisting of a convolution layer, a batch normalization layer and a ReLU activation function; in the last decoding layer, a Sigmoid activation function is inserted between the convolution layer and the batch normalization layer.
Preferably, the steganographic loss function includes a mean square error loss function and a structural similarity loss function,
the mean square error loss function MSE_loss is a weighted sum of MSE_Sloss and MSE_Eloss, where MSE_Sloss denotes the mean square error loss between the carrier image and the steganographic image and MSE_Eloss denotes the mean square error loss between the steganographic image and the extracted image, calculated as:

MSE_Sloss = (1/n) ∑_{i=1}^{n} (x_i - x'_i)²

MSE_Eloss = (1/n) ∑_{i=1}^{n} (y_i - y'_i)²

where x_i denotes the pixel values of the carrier image, x'_i the pixel values of the steganographic image, y_i the pixel values of the steganographic image, y'_i the pixel values of the extracted image, and n the number of pixels in the image;
the structural similarity loss function SSIM_loss is a weighted sum of SSIM_Sloss and SSIM_Eloss, where SSIM_Sloss denotes the structural similarity loss between the carrier image and the steganographic image and SSIM_Eloss denotes the structural similarity loss between the steganographic image and the extracted image, based on the structural similarity indices:

SSIM(x, x') = (2μ_x μ_x' + C_1)(2σ_xx' + C_2) / ((μ_x² + μ_x'² + C_1)(σ_x² + σ_x'² + C_2))

SSIM(y, y') = (2μ_y μ_y' + C_3)(2σ_yy' + C_4) / ((μ_y² + μ_y'² + C_3)(σ_y² + σ_y'² + C_4))

where μ_x and σ_x² denote the mean and variance of the carrier image, μ_x' and σ_x'² the mean and variance of the steganographic image, μ_y and σ_y² the mean and variance of the steganographic image, μ_y' and σ_y'² the mean and variance of the extracted image, σ_xx' the covariance of the carrier image and the steganographic image, and σ_yy' the covariance of the steganographic image and the extracted image; each constant C_i (i = 1, 2, 3, 4) is the square of the product of a coefficient much smaller than 1 and the dynamic range of the image gray values;
correspondingly, the total loss function of the generator is:
loss = MSE_loss + SSIM_loss + α·L_D

where L_D is the steganalysis adversarial loss and α is its weight.
The high-capacity steganography method and system based on the RGB channel differential plane have the following advantages:
1. The difference relationships among the RGB channels are fully exploited: a difference plane is obtained from the characteristics of the different channels of the RGB color space, and a secret image is embedded into the difference plane by a generator based on the U-Net structure to produce a steganographic image, effectively alleviating the image distortion caused by secret-image embedding in high-capacity steganography;
2. A steganalysis network model and an extraction network model are constructed; the steganalysis adversarial loss is computed by the steganalysis network model, and the secret image is extracted from the steganographic image by the extraction network as the extracted image. A mean square error loss function is built from the mean square errors between the carrier image and the steganographic image and between the steganographic image and the extracted image, a structural similarity loss function is built from the structural similarity losses between the same pairs of images, and the total loss function combines the mean square error loss function, the structural similarity loss function and the steganalysis adversarial loss. The generator is trained by iterative optimization based on this total loss function, which alleviates the vanishing-gradient problem during iterative training and improves the training effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a block flow diagram of a high-capacity steganography method based on RGB channel differential planes of embodiment 1;
FIG. 2 is a block diagram of a generator in a high-capacity steganography method based on RGB channel differential planes according to embodiment 1;
fig. 3 is a block diagram of a network model extracted in a high-capacity steganography method based on an RGB channel differential plane in embodiment 1.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific examples, so that those skilled in the art can better understand the invention and implement it, but the examples are not meant to limit the invention, and the technical features of the embodiments of the invention and the examples can be combined with each other without conflict.
The embodiment of the invention provides a high-capacity steganography method and system based on an RGB channel difference plane, solving the technical problem of how to perform image steganography based on RGB channel differences while improving the visual quality of steganographic images and their resistance to steganalysis.
Example 1:
The invention discloses a high-capacity steganography method based on an RGB channel differential plane, comprising six steps: steganographic network construction, steganalysis network construction, extraction network construction, loss function construction, model training and image steganography.
Step S100, steganographic network construction: constructing a steganographic network model comprising a channel separation part and a generator, wherein the channel separation part takes a carrier image as input and performs channel separation and channel differencing on it to obtain a difference plane, and the generator, a network model built on a U-Net backbone, takes the difference plane and a secret image as input and embeds the secret image into the difference plane to obtain a steganographic image.
In step S100 of this embodiment, for the steganographic network model, the channel separation section performs the following operations:
(1) Taking an original carrier image as input, carrying out channel separation on the carrier image to obtain three single-channel gray scale images, namely an R-channel gray scale image, a G-channel gray scale image and a B-channel gray scale image;
(2) And carrying out channel difference on the R channel gray level image and the G channel gray level image to obtain a difference plane.
In this embodiment, the real carrier image is channel-separated in the RGB color space into three single-channel gray-scale images. The R and G channels are the closest visually: as single-channel gray-scale images they look similar and have similar mean brightness, so the pixel-overflow range after the embedded R-G difference plane is added back to the original carrier image is smaller. Consequently the pixel loss incurred by overflow processing after embedding the secret image is smaller, and the R and G channels, whose single-channel pixel values are similar, are finally selected to generate the difference plane used for steganography. In accordance with the principle of minimizing pixel-overflow processing loss, differencing the R and G channels therefore yields a difference plane better suited to embedding secret information and achieves better information-hiding capability.
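The overflow-processing loss referenced above can be illustrated with a toy perturbation standing in for the changes the generator makes when embedding; the perturbation and function name are illustrative assumptions, not the patent's embedding procedure:

```python
import numpy as np

def clipping_loss(chan_a, chan_b, perturbation):
    """Total absolute pixel loss from clipping to [0, 255] when a
    perturbed A-B difference plane is added back onto channel B.
    Larger perturbations, and pixel values nearer the gray-level
    bounds, lose more to clipping."""
    a = chan_a.astype(np.int16)
    b = chan_b.astype(np.int16)
    embedded_diff = a - b + perturbation  # difference plane after embedding
    reconstructed = b + embedded_diff     # re-add to the carrier channel
    clipped = np.clip(reconstructed, 0, 255)
    return int(np.abs(reconstructed - clipped).sum())
```

With R = 250, G = 240 and a perturbation of +10, the reconstruction reaches 260 and clipping discards 5 gray levels; with no perturbation nothing is lost.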
In this embodiment, the generator is a U-shaped network model comprising an encoding network structure and a decoding network structure. The encoding network structure contains N/2 downsampling encoding units, each consisting of a convolution layer for downsampling, a batch normalization layer and a ReLU activation function; the decoding network structure contains N/2 upsampling decoding units, each consisting of a deconvolution layer for upsampling, a batch normalization layer and a ReLU activation function, with the last decoding unit additionally containing a Sigmoid activation function. In the decoding network structure, the feature maps output by the i-th layer and the (N-i)-th layer are spliced together by skip connection as the input of the (N-i+1)-th layer, where N is an even number and 0 < i < N/2.
U-Net is a convolutional neural network based on the encoder-decoder architecture that uses a U-shaped downsampling/upsampling structure to reconstruct the original image. On the left contracting path of the U-shaped network, stacked convolutions are applied continuously, the receptive field keeps growing, the number of output feature maps gradually increases, and the low-level features of the original image are extracted; on the right expansive path, the original input image is gradually approximated by convolution operations after upsampling. Meanwhile, skip connections between feature maps of the same size fuse shallow and deep feature information, so the whole network can make full use of the features to fuse and reconstruct images. This embodiment uses the U-Net structure to pass detail information from different receptive fields in the encoding stage to the decoding stage, stacking and fusing the secret image with the carrier image to finally generate a high-quality steganographic image; the generator structure is shown in figure 2. By splicing shallow and deep information, the U-Net structure makes full use of the local and global feature information of the input carrier image and secret image, gradually constructing a high-precision steganographic image and ensuring excellent visual quality of the generated steganographic image.
In contrast to a conventional generative adversarial network, which can only generate random carrier images, this embodiment purposefully generates images with specific content by designing a generative adversarial network based on the U-Net structure. As a specific implementation, the generator contains 16 data processing units in total: units 1-8 form the downsampling encoding path, each consisting of a stride-2 convolution layer (3×3 kernel) for downsampling, a batch normalization (BN) layer and a ReLU activation function; units 9-15 form the expansive path, each consisting of a stride-2 deconvolution layer (5×5 kernel), a BN layer and a ReLU activation function; the 16th unit consists of a stride-2 deconvolution layer (5×5 kernel), a ReLU activation function and a Sigmoid activation function. To achieve pixel-level learning and facilitate back-propagation, in the decoding stage the feature maps of the i-th and (16-i)-th layers are spliced together by skip connection as the input of the (16-i+1)-th layer. The specific parameters of the network model are shown in table 1.
TABLE 1 network Structure parameters of Generator
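As a sanity check of the 16-unit layer bookkeeping described above, the following pure-Python sketch traces the spatial sizes through the contracting and expansive paths and the skip-connection pairing. The 256×256 input size is an assumption not fixed by this excerpt:

```python
def generator_layout(input_size=256, n=16):
    """Spatial-size bookkeeping for the 16-unit U-shaped generator:
    units 1..n/2 halve the feature map (stride-2 convolutions),
    units n/2+1..n double it (stride-2 deconvolutions)."""
    sizes = [input_size]
    for _ in range(n // 2):            # contracting path
        sizes.append(sizes[-1] // 2)
    for _ in range(n // 2):            # expansive path
        sizes.append(sizes[-1] * 2)
    return sizes

def skip_partner(i, n=16):
    """Feature maps of layer i and layer n-i are concatenated as the
    input of layer n-i+1 (0 < i < n/2), so their sizes must match."""
    assert 0 < i < n // 2
    return n - i

sizes = generator_layout()
assert sizes[0] == sizes[-1] == 256   # expansive path mirrors the contracting path
assert sizes[8] == 1                  # bottleneck after 8 halvings of 256
assert skip_partner(1) == 15          # layer 1 pairs with layer 15, feeding layer 16
```

The mirror symmetry is exactly what makes the skip concatenation well-defined: layer i and layer 16-i produce feature maps of equal spatial size.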
Step S200, steganalysis network construction: a steganalysis network model is constructed based on spatial-domain steganalysis models. The steganalysis network model takes a steganographic image and the corresponding carrier image as input and, by judging whether the steganographic image is a carrier image containing a secret image, predicts and outputs the label class and probability value of the input image as the output result; the label classes of the input image comprise steganographic image and carrier image.
In this embodiment, the steganalysis network model includes a first spatial-domain steganalyzer constructed based on the XU-Net network model and a second spatial-domain steganalyzer constructed based on the SR-Net network model.
Correspondingly, the discrimination loss function of the steganalysis network model is a weighted sum of the discrimination loss function of the first spatial-domain steganalyzer and the discrimination loss function of the second spatial-domain steganalyzer.
The discrimination loss function L_SDX of the first spatial-domain steganalyzer and the discrimination loss function L_SDS of the second spatial-domain steganalyzer are calculated as:

L_SDX = -(x'_1·log(x_1) + x'_2·log(x_2))

L_SDS = -(y'_1·log(y_1) + y'_2·log(y_2))

wherein x_i represents the output of the softmax layer in the first spatial-domain steganalyzer, x_1 represents the probability of the original carrier image, x_2 represents the probability of the steganographic image, x'_1 represents the label corresponding to the original carrier image, and x'_2 represents the label corresponding to the steganographic image; y_i represents the output of the softmax layer in the second spatial-domain steganalyzer, y_1 represents the probability of the original carrier image, y_2 represents the probability of the steganographic image, y'_1 represents the label corresponding to the original carrier image, and y'_2 represents the label corresponding to the steganographic image.
The steganalysis network model judges whether the steganographic image is a secret-carrying image. When the steganalysis network model takes over the function of a discriminator, the training target is as follows: when the input is a steganographic image, the expected output probability approaches 0; when the input is the original carrier image, the expected output probability approaches 1. This embodiment selects two classical spatial-domain steganalyzers, XU-Net and SR-Net, for jointly weighted adversarial training, further improving the resistance of the generated steganographic images to steganalysis detection.
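A minimal sketch of the jointly weighted discrimination loss, assuming the per-analyzer losses are the cross-entropies defined above; the equal weights are an assumption, as this excerpt says only that the two losses are combined by weighting:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Cross-entropy between a softmax output and a one-hot label,
    matching the per-analyzer discrimination losses L_SDX / L_SDS."""
    probs = np.clip(probs, 1e-12, 1.0)   # guard against log(0)
    return -float(np.sum(labels * np.log(probs)))

def joint_discrimination_loss(p_xu, p_sr, labels, w_xu=0.5, w_sr=0.5):
    """Weighted sum of the XU-Net-style and SR-Net-style discrimination
    losses; w_xu and w_sr are assumed placeholder weights."""
    return w_xu * cross_entropy(p_xu, labels) + w_sr * cross_entropy(p_sr, labels)

# a cover image classified confidently as cover by both analyzers
labels = np.array([1.0, 0.0])            # one-hot: [cover, stego]
loss = joint_discrimination_loss(np.array([0.9, 0.1]),
                                 np.array([0.8, 0.2]), labels)
assert loss > 0.0
```

During training the generator is pushed in the opposite direction: it is rewarded when both analyzers assign its stego output a high "cover" probability.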
In this embodiment, the steganalysis network model is used to judge whether the input image hides secret information; with the development of deep learning, the combination of steganalysis and deep learning poses a serious threat to steganography algorithms. This embodiment adopts XU-Net and SR-Net, two steganalyzers commonly used in spatial-domain image steganalysis, to judge the steganographic images, and uses the corresponding discrimination loss as an optimization objective of the model, thereby improving the anti-steganalysis capability of the steganographic images. The steganalysis network model takes the original carrier image Cover and its steganographic image Stego as the input of the steganalysis countermeasure network; after discrimination by the steganalyzers, the corresponding discrimination loss SD_loss is generated and fed back to train the generator during subsequent model training.
Step S300, extraction network construction: an extraction network model is constructed based on a CNN model; the extraction network model takes the steganographic image as input and extracts the secret image from it, outputting the result as the extracted image.
In this embodiment, the extraction network model includes a multi-layer decoding structure; each decoding layer comprises a convolution layer, a batch normalization layer and a ReLU activation function, and the last decoding layer is further followed by a Sigmoid activation function after its convolution layer.
The extraction network extracts the embedded secret image Extracted from the steganographic image Stego. An extraction network model with an encoding-decoding structure is designed with a convolutional neural network (CNN), so that the secret image can be recovered accurately and efficiently from the Stego image while keeping the number of network parameters small; the extraction network model structure is shown in figure 3. As a specific implementation, each decoding structure in the model uses a 3×3 convolution layer with stride 1 and padding 1; to enhance the nonlinear learning capability of the network, batch normalization (BN) and a ReLU activation function follow each convolution layer; after the last convolution layer a Sigmoid activation function completes the extraction of the secret image. Learning these nonlinear characteristics fits the parameters better, realizes the mapping between input and output, and yields extracted images of high visual quality.
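The shape-preserving property of the decoding layers (3×3 convolution, stride 1, padding 1) can be illustrated with a single-channel NumPy sketch; the kernel values here are arbitrary placeholders, not learned weights:

```python
import numpy as np

def conv3x3_same(x, kernel):
    """A single 3x3, stride-1, padding-1 convolution as used by each
    decoding layer of the extraction network: the spatial size of the
    input is preserved."""
    h, w = x.shape
    padded = np.pad(x, 1)                      # padding 1 on every side
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out

def sigmoid(x):
    """Final activation mapping the last feature map into (0, 1),
    completing the extraction of the secret image."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.random.rand(16, 16)
y = sigmoid(conv3x3_same(x, np.full((3, 3), 1.0 / 9.0)))
assert y.shape == x.shape          # 3x3 / stride 1 / padding 1 keeps H x W
```

Because every layer keeps the H×W size, the extracted image naturally comes out at the same resolution as the secret image that was embedded.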
Step S400, loss function construction: a discrimination loss function is constructed based on the output result of the steganalysis network model, pixel-level loss functions based on the carrier image, the steganographic image and the extracted image are constructed as the steganography loss function, and the discrimination loss function and the steganography loss function are jointly weighted as the total loss function of the generator.
In this embodiment, the steganographic loss function includes a mean square error loss function and a structural similarity loss function.
In order to effectively reduce the distance between the original carrier image and the steganographic image, this embodiment uses a pixel-level mean square error loss (MSE_loss) computed pixel by pixel. The mean square error loss function of the carrier image and the steganographic image is calculated as:

MSE_loss = (1/n) · Σ_{i=1..n} (x_i - x'_i)²

Meanwhile, in order to improve the visual similarity between the extracted image and the secret image, this embodiment likewise designs a mean square error loss MSE_Ext_loss. The mean square error loss function of the secret image and the extracted image is calculated as:

MSE_Ext_loss = (1/n) · Σ_{i=1..n} (y_i - y'_i)²

wherein x_i represents the pixel values of the carrier image, x'_i represents the pixel values of the steganographic image, y_i represents the pixel values of the secret image, y'_i represents the pixel values of the extracted image, and n represents the number of pixels in the image.
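Both mean-square-error losses reduce to the same per-pixel computation; a small worked example, assuming ordinary per-pixel averaging:

```python
import numpy as np

def mse_loss(a, b):
    """Pixel-wise mean square error (1/n) * sum((a_i - b_i)^2), used both
    for cover vs. stego (MSE_loss) and secret vs. extracted (MSE_Ext_loss)."""
    a = a.astype(np.float64).ravel()   # cast first: uint8 subtraction would wrap
    b = b.astype(np.float64).ravel()
    return float(np.mean((a - b) ** 2))

cover = np.array([[10, 20], [30, 40]], dtype=np.uint8)
stego = np.array([[11, 19], [30, 42]], dtype=np.uint8)
assert mse_loss(cover, stego) == 1.5   # (1 + 1 + 0 + 4) / 4
```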
In order to further improve the structural similarity of the generated images, this embodiment additionally designs a structural similarity loss (SSIM_loss) alongside the MSE, applied both to the carrier image and steganographic image pair and to the secret image and extracted image pair, so that the improvement of image structural similarity is realized more accurately. The structural similarity is calculated as:

SSIM(x, x') = [(2·μ_x·μ_x' + C_1)·(2·σ_xx' + C_2)] / [(μ_x² + μ_x'² + C_1)·(σ_x² + σ_x'² + C_2)]

SSIM(y, y') = [(2·μ_y·μ_y' + C_3)·(2·σ_yy' + C_4)] / [(μ_y² + μ_y'² + C_3)·(σ_y² + σ_y'² + C_4)]

wherein μ_x represents the mean of the carrier image, μ_x' represents the mean of the steganographic image, σ_x² represents the variance of the carrier image, σ_x'² represents the variance of the steganographic image, μ_y represents the mean of the secret image, μ_y' represents the mean of the extracted image, σ_y² represents the variance of the secret image, σ_y'² represents the variance of the extracted image, σ_xx' represents the covariance of the carrier image and the steganographic image, and σ_yy' represents the covariance of the secret image and the extracted image; each C_i (i = 1, 2, 3, 4) is the square of the product of a coefficient much smaller than 1 and the dynamic range of the image gray values.
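A global (single-window) SSIM sketch consistent with the formula above; the k1/k2 stabiliser coefficients are the conventional small values and are assumptions, as the text only requires them to be much smaller than 1:

```python
import numpy as np

def ssim(x, y, dynamic_range=255.0, k1=0.01, k2=0.03):
    """Global structural similarity between two images.  C1 and C2 are
    the (k * L)^2 stabilisers with k << 1 and L the gray-value dynamic
    range, matching the C_i description in the text."""
    c1 = (k1 * dynamic_range) ** 2
    c2 = (k2 * dynamic_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def ssim_loss(x, y):
    """SSIM_loss as 1 - SSIM (an assumed but common form): zero for
    identical images, growing as structure diverges."""
    return 1.0 - ssim(x, y)

img = np.random.randint(0, 256, (16, 16))
assert abs(ssim_loss(img, img)) < 1e-12
```

Production implementations usually compute SSIM over sliding windows and average; the single-window version keeps the algebra visible.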
Correspondingly, the total loss function of the generator is:

Loss = MSE_loss + SSIM_loss + α·L_D

wherein L_D is the steganalysis adversarial loss and α is the weight of the steganalysis adversarial loss.
Step S500, model training: the generator is iteratively optimized based on the total loss function to obtain a trained generator; while the generator is optimized, the extraction network model is iteratively optimized based on the steganography loss function, and the steganalysis network model is iteratively optimized based on the discrimination loss function.
In the model training stage, the generator is iteratively optimized based on its total loss function. For the extraction network model, the two pixel-level loss functions, mean square error (MSE) and structural similarity (SSIM), are combined by weighting as the total loss function of the extraction network, and its parameters are gradually updated through each round of iterative optimization, producing extracted images of higher visual quality. The steganalysis network model is iteratively optimized based on its discrimination loss.
In this embodiment, the model training flow is as follows:
input: original carrier image Cover, secret image Secret
And (3) outputting: steganographic image after embedding secret image
Step 1, performing channel separation on the original carrier image Cover input to the model, separating the three RGB channels, denoted C_R, C_G and C_B respectively.
Step 2, according to the principle of minimum pixel-overflow processing loss, performing a difference operation on C_R and C_G, the single-channel gray-scale images with the closest pixel values, to obtain the difference plane Dif_Plane.
Step 3, inputting the difference plane Dif_Plane into the steganography network SN, and embedding Secret into Cover through the generator constructed based on the U-Net structure to generate the steganographic image Stego.
Step 4, performing pixel-overflow processing on the generated steganographic image, clipping pixel values that overflow back into the range of the uint8 data type.
Step 5, inputting the generated steganographic image Stego and the original carrier image Cover into the steganalysis countermeasure network SAN to generate the steganalysis adversarial loss SD_loss, which is jointly weighted with MSE_loss and SSIM_loss as the total loss of the model for iterative optimization.
Step 6, the steganalysis optimization network SON synchronously updates the network parameters of the steganalyzers and optimizes them, further improving the detection capability of the steganalyzers and, in turn, the anti-steganalysis capability of the steganographic images.
Step 7, synchronously optimizing and updating the network parameters of the steganography network SN, the steganalysis countermeasure network SAN and the steganalysis optimization network SON, thereby generating steganographic images with high visual quality and strong resistance to steganalysis.
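The pixel-overflow processing of Step 4 amounts to rounding and clipping into the uint8 range, e.g.:

```python
import numpy as np

def clip_to_uint8(stego_float):
    """Pixel-overflow processing: values that left the uint8 range after
    embedding are rounded and clipped back into [0, 255]."""
    return np.clip(np.rint(stego_float), 0, 255).astype(np.uint8)

raw = np.array([-3.2, 0.0, 127.6, 255.0, 260.9])
assert clip_to_uint8(raw).tolist() == [0, 0, 128, 255, 255]
```

This is exactly why the R-G difference plane is chosen in Step 2: the smaller the overflow range after embedding, the less information this clipping destroys.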
Step S600, image steganography: a trained steganography network model is constructed from the channel separation part and the trained generator, and the carrier image and the secret image are input into the trained steganography network model to obtain the steganographic image.
In the image steganography process of this embodiment, the channel separation part first performs channel separation and channel differencing on the input carrier image to obtain the difference plane; the difference plane and the secret image are then taken as input, and the trained generator embeds the secret image into the difference plane to obtain the steganographic image.
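The inference data flow of Steps 1-4 can be sketched end to end. The stub generator below and the way the embedded plane is added back onto the G channel to rebuild the stego image are hypothetical stand-ins for the trained U-Net, shown only to make the interfaces concrete:

```python
import numpy as np

def channel_separate(cover):
    """Channel separation part: RGB cover -> three gray-scale planes."""
    return cover[:, :, 0], cover[:, :, 1], cover[:, :, 2]

def stub_generator(dif_plane, secret):
    """Hypothetical stand-in for the trained U-Net generator: it only
    illustrates the interface (difference plane + secret in, modified
    plane out), not the learned embedding."""
    return dif_plane.astype(np.int16) + (secret.astype(np.int16) % 2)

def steganography(cover, secret):
    """Sketch of the inference flow: separation, R-G differencing,
    generator embedding, re-assembly and overflow clipping to uint8."""
    c_r, c_g, c_b = channel_separate(cover)
    dif_plane = c_r.astype(np.int16) - c_g.astype(np.int16)
    embedded = stub_generator(dif_plane, secret)
    # add the embedded plane back onto G to rebuild an R channel (assumed scheme)
    new_r = np.clip(c_g.astype(np.int16) + embedded, 0, 255).astype(np.uint8)
    return np.stack([new_r, c_g, c_b], axis=-1)

cover = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
secret = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
stego = steganography(cover, secret)
assert stego.shape == cover.shape and stego.dtype == np.uint8
```

With a secret of all even values the stub embeds nothing, so the rebuilt R channel equals the original one, which is a convenient check of the round trip.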
Example 2:
the embodiment discloses a high-capacity steganography system based on the RGB channel difference plane, comprising a steganography network construction module, a steganalysis network construction module, an extraction network construction module, a loss function construction module, a model training module and an image steganography module; the system can execute the method disclosed in Embodiment 1 to perform image steganography.
The steganography network construction module is used for executing the following steps: constructing a steganography network model comprising a channel separation part and a generator, wherein the channel separation part takes a carrier image as input and performs channel separation and channel differencing on it to obtain a difference plane, and the generator is a network model built on the U-Net network that takes the difference plane and a secret image as input and embeds the secret image into the difference plane to obtain the steganographic image.
In this embodiment, for the steganographic network model, the channel separation section performs the following operations:
(1) Taking an original carrier image as input, carrying out channel separation on the carrier image to obtain three single-channel gray scale images, namely an R-channel gray scale image, a G-channel gray scale image and a B-channel gray scale image;
(2) And carrying out channel difference on the R channel gray level image and the G channel gray level image to obtain a difference plane.
In this embodiment, the real carrier image is separated in the RGB color space into three single-channel gray-scale images. Among the three channels, R and G are visually the closest: their gray-scale images look similar and their mean brightness values are close, so the pixel-overflow range after the R-G difference plane is embedded and added back to the original carrier image is smaller, and less pixel information is lost during overflow processing after the secret image is embedded. The R and G channels, whose single-channel pixel values are the closest, are therefore finally selected to generate the difference plane for steganography. Following the principle of minimum pixel-overflow processing loss, differencing the R and G channels yields a difference plane better suited to embedding secret information and thus a stronger information-hiding capability.
In this embodiment, the generator is a U-shaped network model comprising an encoding network structure and a decoding network structure. The encoding network structure contains N/2 downsampling encoding units, each consisting of a convolution layer for downsampling, a batch normalization layer and a ReLU activation function; the decoding network structure contains N/2 upsampling decoding units, each consisting of a deconvolution layer for upsampling, a batch normalization layer and a ReLU activation function, with the last decoding unit additionally containing a Sigmoid activation function. In the decoding network structure, the feature maps output by the i-th layer and the (N-i)-th layer are spliced together by skip connection as the input of the (N-i+1)-th layer, where N is an even number and 0 < i < N/2.
As a specific implementation, the generator contains 16 data processing units in total: units 1-8 form the downsampling encoding path, each consisting of a stride-2 convolution layer (3×3 kernel) for downsampling, a batch normalization (BN) layer and a ReLU activation function; units 9-15 form the expansive path, each consisting of a stride-2 deconvolution layer (5×5 kernel), a BN layer and a ReLU activation function; the 16th unit consists of a stride-2 deconvolution layer (5×5 kernel), a ReLU activation function and a Sigmoid activation function. To achieve pixel-level learning and facilitate back-propagation, in the decoding stage the feature maps of the i-th and (16-i)-th layers are spliced together by skip connection as the input of the (16-i+1)-th layer. The specific parameters of the network model are shown in table 1.
The steganalysis network construction module is used for executing the following steps: constructing a steganalysis network model based on spatial-domain steganalysis models, wherein the steganalysis network model takes a steganographic image and the corresponding carrier image as input and, by judging whether the steganographic image is a carrier image containing a secret image, predicts and outputs the label class and probability value of the input image as the output result; the label classes of the input image comprise steganographic image and carrier image.
In this embodiment, the steganalysis network model includes a first spatial-domain steganalyzer constructed based on the XU-Net network model and a second spatial-domain steganalyzer constructed based on the SR-Net network model.
Correspondingly, the discrimination loss function of the steganalysis network model is a weighted sum of the discrimination loss function of the first spatial-domain steganalyzer and the discrimination loss function of the second spatial-domain steganalyzer.
The discrimination loss function L_SDX of the first spatial-domain steganalyzer and the discrimination loss function L_SDS of the second spatial-domain steganalyzer are calculated as:

L_SDX = -(x'_1·log(x_1) + x'_2·log(x_2))

L_SDS = -(y'_1·log(y_1) + y'_2·log(y_2))

wherein x_i represents the output of the softmax layer in the first spatial-domain steganalyzer, x_1 represents the probability of the original carrier image, x_2 represents the probability of the steganographic image, x'_1 represents the label corresponding to the original carrier image, and x'_2 represents the label corresponding to the steganographic image; y_i represents the output of the softmax layer in the second spatial-domain steganalyzer, y_1 represents the probability of the original carrier image, y_2 represents the probability of the steganographic image, y'_1 represents the label corresponding to the original carrier image, and y'_2 represents the label corresponding to the steganographic image.
The steganalysis network model judges whether the steganographic image is a secret-carrying image. When the steganalysis network model takes over the function of a discriminator, the training target is as follows: when the input is a steganographic image, the expected output probability approaches 0; when the input is the original carrier image, the expected output probability approaches 1. This embodiment selects two classical spatial-domain steganalyzers, XU-Net and SR-Net, for jointly weighted adversarial training, further improving the resistance of the generated steganographic images to steganalysis detection.
The extraction network construction module is used for executing the following steps: constructing an extraction network model based on a CNN model, wherein the extraction network model takes the steganographic image as input and extracts the secret image from it, outputting the result as the extracted image.
In this embodiment, the extraction network model includes a multi-layer decoding structure; each decoding layer comprises a convolution layer, a batch normalization layer and a ReLU activation function, and the last decoding layer is further followed by a Sigmoid activation function after its convolution layer.
The extraction network extracts the embedded secret image Extracted from the steganographic image Stego. An extraction network model with an encoding-decoding structure is designed with a convolutional neural network (CNN), so that the secret image can be recovered accurately and efficiently from the Stego image while keeping the number of network parameters small. As a specific implementation, each decoding structure in the model uses a 3×3 convolution layer with stride 1 and padding 1; to enhance the nonlinear learning capability of the network, batch normalization (BN) and a ReLU activation function follow each convolution layer; after the last convolution layer a Sigmoid activation function completes the extraction of the secret image. Learning these nonlinear characteristics fits the parameters better, realizes the mapping between input and output, and yields extracted images of high visual quality.
The loss function construction module is used for executing the following steps: constructing a discrimination loss function based on the output result of the steganalysis network model, constructing pixel-level loss functions based on the carrier image, the steganographic image and the extracted image as the steganography loss function, and jointly weighting the discrimination loss function and the steganography loss function as the total loss function of the generator.
In this embodiment, the steganographic loss function includes a mean square error loss function and a structural similarity loss function.
In order to effectively reduce the distance between the original carrier image and the steganographic image, this embodiment uses a pixel-level mean square error loss (MSE_loss) computed pixel by pixel. The mean square error loss function of the carrier image and the steganographic image is calculated as:

MSE_loss = (1/n) · Σ_{i=1..n} (x_i - x'_i)²

Meanwhile, in order to improve the visual similarity between the extracted image and the secret image, this embodiment likewise designs a mean square error loss MSE_Ext_loss. The mean square error loss function of the secret image and the extracted image is calculated as:

MSE_Ext_loss = (1/n) · Σ_{i=1..n} (y_i - y'_i)²

wherein x_i represents the pixel values of the carrier image, x'_i represents the pixel values of the steganographic image, y_i represents the pixel values of the secret image, y'_i represents the pixel values of the extracted image, and n represents the number of pixels in the image.
In order to further improve the structural similarity of the generated images, this embodiment additionally designs a structural similarity loss (SSIM_loss) alongside the MSE, applied both to the carrier image and steganographic image pair and to the secret image and extracted image pair, so that the improvement of image structural similarity is realized more accurately. The structural similarity is calculated as:

SSIM(x, x') = [(2·μ_x·μ_x' + C_1)·(2·σ_xx' + C_2)] / [(μ_x² + μ_x'² + C_1)·(σ_x² + σ_x'² + C_2)]

SSIM(y, y') = [(2·μ_y·μ_y' + C_3)·(2·σ_yy' + C_4)] / [(μ_y² + μ_y'² + C_3)·(σ_y² + σ_y'² + C_4)]

wherein μ_x represents the mean of the carrier image, μ_x' represents the mean of the steganographic image, σ_x² represents the variance of the carrier image, σ_x'² represents the variance of the steganographic image, μ_y represents the mean of the secret image, μ_y' represents the mean of the extracted image, σ_y² represents the variance of the secret image, σ_y'² represents the variance of the extracted image, σ_xx' represents the covariance of the carrier image and the steganographic image, and σ_yy' represents the covariance of the secret image and the extracted image; each C_i (i = 1, 2, 3, 4) is the square of the product of a coefficient much smaller than 1 and the dynamic range of the image gray values.
Correspondingly, the total loss function of the generator is:

Loss = MSE_loss + SSIM_loss + α·L_D

wherein L_D is the steganalysis adversarial loss and α is the weight of the steganalysis adversarial loss.
The model training module is used for executing the following steps: iteratively optimizing the generator based on the total loss function to obtain a trained generator; while the generator is optimized, iteratively optimizing the extraction network model based on the steganography loss function and iteratively optimizing the steganalysis network model based on the discrimination loss function.
In the model training stage, the generator is iteratively optimized based on its total loss function. For the extraction network model, the two pixel-level loss functions, mean square error (MSE) and structural similarity (SSIM), are combined by weighting as the total loss function of the extraction network, and its parameters are gradually updated through each round of iterative optimization, producing extracted images of higher visual quality. The steganalysis network model is iteratively optimized based on its discrimination loss.
The image steganography module is used for executing the following steps: constructing a trained steganography network model based on the channel separation part and the trained generator, and inputting the carrier image and the secret image into the trained steganography network model to obtain the steganographic image.
In the image steganography process of this embodiment, the channel separation part first performs channel separation and channel differencing on the input carrier image to obtain the difference plane; the difference plane and the secret image are then taken as input, and the trained generator embeds the secret image into the difference plane to obtain the steganographic image.
While the invention has been illustrated and described in detail in the drawings and the preferred embodiments, the invention is not limited to the disclosed embodiments; it will be apparent to those skilled in the art that many further embodiments can be made by combining the means of the various embodiments described above, and these still fall within the scope of the invention.

Claims (10)

1. The high-capacity steganography method based on the RGB channel differential plane is characterized by comprising the following steps of:
steganography network construction: constructing a steganography network model, wherein the steganography network model comprises a channel separation part and a generator, the channel separation part is used for taking a carrier image as input and carrying out channel separation and channel differencing on the carrier image to obtain a difference plane, and the generator is a network model constructed based on a U-Net network and is used for taking the difference plane and a secret image as input and embedding the secret image into the difference plane to obtain a steganographic image;
steganalysis network construction: constructing a steganalysis network model based on a spatial-domain steganalysis model, wherein the steganalysis network model is used for taking a steganographic image and a corresponding carrier image as input and, by judging whether the steganographic image is a carrier image containing a secret image, predicting and outputting the label class and probability value of the input image as an output result, the label classes of the input image comprising steganographic image and carrier image;
extraction network construction: constructing an extraction network model based on a CNN model, wherein the extraction network model is used for taking the steganographic image as input and extracting the secret image from the steganographic image for output as an extracted image;
loss function construction: constructing a discrimination loss function based on the output result of the steganalysis network model, constructing pixel-level loss functions based on the carrier image, the steganographic image and the extracted image as a steganography loss function, and jointly weighting the discrimination loss function and the steganography loss function to obtain the total loss function of the generator;
model training: performing iterative optimization on the generator based on the total loss function to obtain a trained generator, performing iterative optimization on the extracted network model based on the steganography loss function while performing iterative optimization on the generator, and performing iterative optimization on the steganography analysis network model based on the discrimination loss function;
Image steganography: and constructing a post-training steganography network model based on the channel separation part and the post-training generator, and inputting the carrier image and the secret image into the post-training steganography network model to obtain a steganography image.
2. The high-capacity steganography method based on the RGB channel differential plane according to claim 1, characterized in that, for the steganography network model, the channel separation part is configured to perform the following:
taking the original carrier image as input, performing channel separation on the carrier image to obtain three single-channel gray-scale images, namely an R-channel gray-scale image, a G-channel gray-scale image and a B-channel gray-scale image;
performing channel differencing on the R-channel gray-scale image and the G-channel gray-scale image to obtain the difference plane;
the generator is a U-shaped network model comprising an encoding network structure and a decoding network structure; the encoding network structure comprises N/2 layers of downsampling encoding units, each comprising a convolution layer for downsampling, a batch normalization layer and a ReLU activation function; the decoding network structure comprises N/2 layers of upsampling decoding units, each comprising a deconvolution layer for upsampling, a batch normalization layer and a ReLU activation function, the last decoding unit further comprising a Sigmoid activation function; for the decoding network structure, the feature maps output by the i-th layer and the (N-i)-th layer are concatenated via a skip connection as the input of the (N-i+1)-th layer, wherein N is an even number and 0 < i < N/2.
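The channel separation and differencing step above can be sketched in a few lines; this is a minimal illustration assuming an 8-bit H x W x 3 RGB input and numpy only, not the patented implementation:

```python
import numpy as np

def difference_plane(cover):
    """Channel-separate an H x W x 3 RGB cover image and return the
    R-G difference plane. The int16 cast keeps negative differences
    from wrapping around under uint8 arithmetic."""
    r = cover[:, :, 0].astype(np.int16)  # R-channel gray-scale image
    g = cover[:, :, 1].astype(np.int16)  # G-channel gray-scale image
    # The B channel is also separated per the claim, but does not
    # enter the R-G difference plane.
    return r - g

# Example: a 2 x 2 cover image
cover = np.array([[[120, 100, 50], [10, 200, 0]],
                  [[255, 0, 0], [7, 7, 7]]], dtype=np.uint8)
plane = difference_plane(cover)
```

The signed result is the point of the cast: a uint8 subtraction would map 10 - 200 to 66 instead of -190.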
3. The high-capacity steganography method based on the RGB channel differential plane according to claim 1, characterized in that the steganalysis network model comprises a first spatial-domain steganalyzer built on the XU-Net network model and a second spatial-domain steganalyzer built on the SR-Net network model;
the discriminant loss function corresponding to the steganalysis network model is the weighted sum of the discriminant loss function corresponding to the first spatial-domain steganalyzer and the discriminant loss function corresponding to the second spatial-domain steganalyzer;
the discriminant loss function $L_{SDX}$ of the first spatial-domain steganalyzer and the discriminant loss function $L_{SDS}$ of the second spatial-domain steganalyzer are calculated as follows:

$$L_{SDX} = -\sum_{i=1}^{2} x'_i \log x_i, \qquad L_{SDS} = -\sum_{i=1}^{2} y'_i \log y_i$$

wherein $x_i$ represents the output of the softmax layer in the first spatial-domain steganalyzer, $x_1$ represents the probability of the original carrier image, $x_2$ represents the probability of the steganographic image, $x'_1$ represents the label corresponding to the original carrier image, and $x'_2$ represents the label corresponding to the steganographic image;
$y_i$ represents the output of the softmax layer in the second spatial-domain steganalyzer, $y_1$ represents the probability of the original carrier image, $y_2$ represents the probability of the steganographic image, $y'_1$ represents the label corresponding to the original carrier image, and $y'_2$ represents the label corresponding to the steganographic image.
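A two-class cross-entropy of this form, and its weighted combination over the two steganalyzers, can be sketched in plain Python; the probabilities, one-hot labels, and the 0.5/0.5 weights below are illustrative assumptions (the claim only requires a weighted sum):

```python
import math

def discriminant_loss(probs, labels):
    """Cross-entropy of a softmax output [p_cover, p_stego] against
    one-hot labels, the usual form of such discriminant losses."""
    return -sum(l * math.log(p) for p, l in zip(probs, labels))

def total_discriminant_loss(l_sdx, l_sds, w1=0.5, w2=0.5):
    """Weighted sum over the two steganalyzer branches."""
    return w1 * l_sdx + w2 * l_sds

# Hypothetical branch outputs for a cover-image input (label [1, 0]):
l_sdx = discriminant_loss([0.9, 0.1], [1, 0])  # XU-Net-based branch
l_sds = discriminant_loss([0.8, 0.2], [1, 0])  # SR-Net-based branch
total = total_discriminant_loss(l_sdx, l_sds)
```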
4. The high-capacity steganography method based on the RGB channel differential plane according to claim 1, characterized in that the extraction network model comprises a multi-layer decoding structure, each decoding layer comprising a convolution layer, a batch normalization layer and a ReLU activation function, wherein, in the last decoding layer, a Sigmoid activation function is connected between the convolution layer and the batch normalization layer.
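As a rough, self-contained illustration of one such decoding layer, the sketch below applies a 'valid' 2-D convolution followed by the Sigmoid that the claim places in the final layer; batch normalization is omitted, and the single-channel kernel is a simplification of the real multi-channel layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_layer(feature, kernel):
    """One simplified decoding layer: 'valid' 2-D convolution of a
    single-channel feature map, then Sigmoid to map responses into
    the (0, 1) pixel range of the extracted image."""
    kh, kw = kernel.shape
    h, w = feature.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feature[i:i + kh, j:j + kw] * kernel)
    return sigmoid(out)

# A zero feature map: every response is 0, and sigmoid(0) = 0.5.
out = decode_layer(np.zeros((3, 3)), np.ones((2, 2)))
```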
5. The high-capacity steganography method based on the RGB channel differential plane according to claim 1, characterized in that the steganography loss function comprises a mean square error loss function and a structural similarity loss function;
the mean square error loss function MSE_loss is a weighted sum of MSE_Sloss and MSE_Eloss, wherein MSE_Sloss represents the mean square error loss between the carrier image and the steganographic image, and MSE_Eloss represents the mean square error loss between the steganographic image and the extracted image, calculated as follows:

$$MSE\_Sloss = \frac{1}{n}\sum_{i=1}^{n}(x_i - x'_i)^2, \qquad MSE\_Eloss = \frac{1}{n}\sum_{i=1}^{n}(y_i - y'_i)^2$$

wherein $x_i$ represents the pixel values of the carrier image, $x'_i$ represents the pixel values of the steganographic image, $y_i$ represents the pixel values of the steganographic image, $y'_i$ represents the pixel values of the extracted image, and $n$ represents the number of pixels in the image;
the structural similarity loss function SSIM_loss is a weighted sum of SSIM_Sloss and SSIM_Eloss, wherein SSIM_Sloss represents the structural similarity loss between the carrier image and the steganographic image, and SSIM_Eloss represents the structural similarity loss between the steganographic image and the extracted image, calculated as follows:

$$SSIM\_Sloss = 1 - \frac{(2\mu_x\mu_{x'} + C_1)(2\sigma_{xx'} + C_2)}{(\mu_x^2 + \mu_{x'}^2 + C_1)(\sigma_x^2 + \sigma_{x'}^2 + C_2)}, \qquad SSIM\_Eloss = 1 - \frac{(2\mu_y\mu_{y'} + C_3)(2\sigma_{yy'} + C_4)}{(\mu_y^2 + \mu_{y'}^2 + C_3)(\sigma_y^2 + \sigma_{y'}^2 + C_4)}$$

wherein $\mu_x$ represents the mean of the carrier image, $\mu_{x'}$ represents the mean of the steganographic image, $\sigma_x^2$ represents the variance of the carrier image, $\sigma_{x'}^2$ represents the variance of the steganographic image, $\mu_y$ represents the mean of the steganographic image, $\mu_{y'}$ represents the mean of the extracted image, $\sigma_y^2$ represents the variance of the steganographic image, $\sigma_{y'}^2$ represents the variance of the extracted image, $\sigma_{xx'}$ represents the covariance of the carrier image and the steganographic image, and $\sigma_{yy'}$ represents the covariance of the steganographic image and the extracted image; $C_i$ (i = 1, 2, 3, 4) takes the square of the product of a coefficient much smaller than 1 and the dynamic range of the image gray scale;
correspondingly, the total loss function of the generator is:

$$Loss = MSE\_loss + SSIM\_loss + \alpha L_D$$

wherein $L_D$ is the steganalysis adversarial loss and $\alpha$ is the weight of the steganalysis adversarial loss.
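Putting claim 5 together, the sketch below computes MSE_loss, a single-window SSIM, and the generator total loss with numpy. The equal weighting of the S/E sub-losses and the alpha value are assumptions for illustration; the claim only requires weighted sums:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images of the same shape."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    return np.mean((a - b) ** 2)

def ssim(a, b, dynamic_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM; C1 and C2 are squares of the products of
    small coefficients (k1, k2 << 1) and the gray-scale dynamic
    range, per the constants convention in the claim."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    c1, c2 = (k1 * dynamic_range) ** 2, (k2 * dynamic_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def generator_total_loss(cover, stego, extracted, adv_loss, alpha=0.1):
    """Loss = MSE_loss + SSIM_loss + alpha * L_D, with MSE_loss and
    SSIM_loss taken as unweighted sums of their S and E components."""
    mse_loss = mse(cover, stego) + mse(stego, extracted)
    ssim_loss = (1 - ssim(cover, stego)) + (1 - ssim(stego, extracted))
    return mse_loss + ssim_loss + alpha * adv_loss

# Sanity check: identical images and zero adversarial loss give 0.
img = np.full((4, 4), 128.0)
zero_loss = generator_total_loss(img, img, img, 0.0)
```

Note that a practical SSIM is computed over sliding windows and averaged; the single-window version here only demonstrates the formula.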
6. A high-capacity steganography system based on an RGB channel differential plane, configured to realize image steganography through the high-capacity steganography method based on the RGB channel differential plane according to any one of claims 1-5, the system comprising a steganography network construction module, a steganalysis network construction module, an extraction network construction module, a loss function construction module, a model training module and an image steganography module, wherein
the steganography network construction module is configured to: construct a steganography network model comprising a channel separation part and a generator, wherein the channel separation part takes a carrier image as input and performs channel separation and channel differencing on the carrier image to obtain a difference plane, and the generator is a network model built on a U-Net architecture that takes the difference plane and a secret image as input and embeds the secret image into the difference plane to obtain a steganographic image;
the steganalysis network construction module is configured to: construct a steganalysis network model based on a spatial-domain steganalysis model, wherein the steganalysis network model takes a steganographic image and the corresponding carrier image as input and, by judging whether the input image is a carrier image containing a secret image, predicts and outputs the label type and probability value of the input image as its output result, the label types of the input image being steganographic image and carrier image;
the extraction network construction module is configured to: construct an extraction network model based on a CNN model, wherein the extraction network model takes the steganographic image as input and extracts the secret image from the steganographic image, outputting it as the extracted image;
the loss function construction module is configured to: construct a discriminant loss function based on the output of the steganalysis network model, construct pixel-level loss functions based on the carrier image, the steganographic image and the extracted image as the steganography loss function, and jointly weight the discriminant loss function and the steganography loss function to obtain the total loss function of the generator;
the model training module is configured to: iteratively optimize the generator based on the total loss function to obtain a trained generator, and, while iteratively optimizing the generator, iteratively optimize the extraction network model based on the steganography loss function and iteratively optimize the steganalysis network model based on the discriminant loss function;
the image steganography module is configured to: construct a trained steganography network model from the channel separation part and the trained generator, and input the carrier image and the secret image into the trained steganography network model to obtain the steganographic image.
7. The high-capacity steganography system based on the RGB channel differential plane according to claim 6, characterized in that, for the steganography network model, the channel separation part is configured to perform the following:
taking the original carrier image as input, performing channel separation on the carrier image to obtain three single-channel gray-scale images, namely an R-channel gray-scale image, a G-channel gray-scale image and a B-channel gray-scale image;
performing channel differencing on the R-channel gray-scale image and the G-channel gray-scale image to obtain the difference plane;
the generator is a U-shaped network model comprising an encoding network structure and a decoding network structure; the encoding network structure comprises N/2 layers of downsampling encoding units, each comprising a convolution layer for downsampling, a batch normalization layer and a ReLU activation function; the decoding network structure comprises N/2 layers of upsampling decoding units, each comprising a deconvolution layer for upsampling, a batch normalization layer and a ReLU activation function, the last decoding unit further comprising a Sigmoid activation function; for the decoding network structure, the feature maps output by the i-th layer and the (N-i)-th layer are concatenated via a skip connection as the input of the (N-i+1)-th layer, wherein N is an even number and 0 < i < N/2.
8. The high-capacity steganography system based on the RGB channel differential plane according to claim 6, characterized in that the steganalysis network model comprises a first spatial-domain steganalyzer built on the XU-Net network model and a second spatial-domain steganalyzer built on the SR-Net network model;
the discriminant loss function corresponding to the steganalysis network model is the weighted sum of the discriminant loss function corresponding to the first spatial-domain steganalyzer and the discriminant loss function corresponding to the second spatial-domain steganalyzer;
the discriminant loss function $L_{SDX}$ of the first spatial-domain steganalyzer and the discriminant loss function $L_{SDS}$ of the second spatial-domain steganalyzer are calculated as follows:

$$L_{SDX} = -\sum_{i=1}^{2} x'_i \log x_i, \qquad L_{SDS} = -\sum_{i=1}^{2} y'_i \log y_i$$

wherein $x_i$ represents the output of the softmax layer in the first spatial-domain steganalyzer, $x_1$ represents the probability of the original carrier image, $x_2$ represents the probability of the steganographic image, $x'_1$ represents the label corresponding to the original carrier image, and $x'_2$ represents the label corresponding to the steganographic image;
$y_i$ represents the output of the softmax layer in the second spatial-domain steganalyzer, $y_1$ represents the probability of the original carrier image, $y_2$ represents the probability of the steganographic image, $y'_1$ represents the label corresponding to the original carrier image, and $y'_2$ represents the label corresponding to the steganographic image.
9. The high-capacity steganography system based on the RGB channel differential plane according to claim 6, characterized in that the extraction network model comprises a multi-layer decoding structure, each decoding layer comprising a convolution layer, a batch normalization layer and a ReLU activation function, wherein, in the last decoding layer, a Sigmoid activation function is connected between the convolution layer and the batch normalization layer.
10. The high-capacity steganography system based on the RGB channel differential plane according to claim 6, characterized in that the steganography loss function comprises a mean square error loss function and a structural similarity loss function;
the mean square error loss function MSE_loss is a weighted sum of MSE_Sloss and MSE_Eloss, wherein MSE_Sloss represents the mean square error loss between the carrier image and the steganographic image, and MSE_Eloss represents the mean square error loss between the steganographic image and the extracted image, calculated as follows:

$$MSE\_Sloss = \frac{1}{n}\sum_{i=1}^{n}(x_i - x'_i)^2, \qquad MSE\_Eloss = \frac{1}{n}\sum_{i=1}^{n}(y_i - y'_i)^2$$

wherein $x_i$ represents the pixel values of the carrier image, $x'_i$ represents the pixel values of the steganographic image, $y_i$ represents the pixel values of the steganographic image, $y'_i$ represents the pixel values of the extracted image, and $n$ represents the number of pixels in the image;
the structural similarity loss function SSIM_loss is a weighted sum of SSIM_Sloss and SSIM_Eloss, wherein SSIM_Sloss represents the structural similarity loss between the carrier image and the steganographic image, and SSIM_Eloss represents the structural similarity loss between the steganographic image and the extracted image, calculated as follows:

$$SSIM\_Sloss = 1 - \frac{(2\mu_x\mu_{x'} + C_1)(2\sigma_{xx'} + C_2)}{(\mu_x^2 + \mu_{x'}^2 + C_1)(\sigma_x^2 + \sigma_{x'}^2 + C_2)}, \qquad SSIM\_Eloss = 1 - \frac{(2\mu_y\mu_{y'} + C_3)(2\sigma_{yy'} + C_4)}{(\mu_y^2 + \mu_{y'}^2 + C_3)(\sigma_y^2 + \sigma_{y'}^2 + C_4)}$$

wherein $\mu_x$ represents the mean of the carrier image, $\mu_{x'}$ represents the mean of the steganographic image, $\sigma_x^2$ represents the variance of the carrier image, $\sigma_{x'}^2$ represents the variance of the steganographic image, $\mu_y$ represents the mean of the steganographic image, $\mu_{y'}$ represents the mean of the extracted image, $\sigma_y^2$ represents the variance of the steganographic image, $\sigma_{y'}^2$ represents the variance of the extracted image, $\sigma_{xx'}$ represents the covariance of the carrier image and the steganographic image, and $\sigma_{yy'}$ represents the covariance of the steganographic image and the extracted image; $C_i$ (i = 1, 2, 3, 4) takes the square of the product of a coefficient much smaller than 1 and the dynamic range of the image gray scale;
correspondingly, the total loss function of the generator is:

$$Loss = MSE\_loss + SSIM\_loss + \alpha L_D$$

wherein $L_D$ is the steganalysis adversarial loss and $\alpha$ is the weight of the steganalysis adversarial loss.
CN202311416882.5A 2023-10-30 2023-10-30 High-capacity steganography method and system based on RGB channel differential plane Pending CN117391920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311416882.5A CN117391920A (en) 2023-10-30 2023-10-30 High-capacity steganography method and system based on RGB channel differential plane


Publications (1)

Publication Number Publication Date
CN117391920A true CN117391920A (en) 2024-01-12

Family

ID=89469934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311416882.5A Pending CN117391920A (en) 2023-10-30 2023-10-30 High-capacity steganography method and system based on RGB channel differential plane

Country Status (1)

Country Link
CN (1) CN117391920A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579837A (en) * 2024-01-15 2024-02-20 Qilu University of Technology (Shandong Academy of Sciences) JPEG image steganography method based on countermeasure compression image
CN117579837B (en) * 2024-01-15 2024-04-16 Qilu University of Technology (Shandong Academy of Sciences) JPEG image steganography method based on countermeasure compression image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination