CN114157773B - Image steganography method based on convolutional neural network and frequency domain attention - Google Patents

Image steganography method based on convolutional neural network and frequency domain attention

Info

Publication number
CN114157773B
Authority
CN
China
Prior art keywords
layer
network
frequency domain
image
output characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111454888.2A
Other languages
Chinese (zh)
Other versions
CN114157773A (en)
Inventor
张善卿
李辉
李黎
陆剑锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202111454888.2A priority Critical patent/CN114157773B/en
Publication of CN114157773A publication Critical patent/CN114157773A/en
Application granted granted Critical
Publication of CN114157773B publication Critical patent/CN114157773B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32154 Transform domain methods
    • H04N1/32165 Transform domain methods using cosine transforms
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image steganography method based on a convolutional neural network and frequency domain attention, and relates to the field of information hiding, in particular to the application field of image steganography. Existing deep learning methods hide information in digital images only in the spatial domain. The invention builds on a convolutional neural network, adds a frequency domain attention mechanism that focuses on the middle-low frequency part of the frequency domain space, and adjusts the depth of the convolutional neural network model, so that features in the frequency domain can be better extracted. The invention can effectively embed secret information into an image, the secret-containing image is highly similar to the original image, and the secret information can be accurately extracted from the secret-containing image.

Description

Image steganography method based on convolutional neural network and frequency domain attention
Technical Field
The invention relates to the fields of image steganography and information hiding, and provides a method that applies a convolutional neural network and frequency domain attention to image steganography.
Background
Image steganography is a core part of information hiding. The sender conceals secret information in an original image to generate a secret-containing image carrying that information, and transmits the secret information over a public channel by means of the secret-containing image; the intended receiver can decode the secret information carried in the secret-containing image, while other listeners on the public channel cannot detect that the secret-containing image carries secret information.
Classical data hiding methods typically use heuristic algorithms to decide how much each pixel needs to be modified. For example, some algorithms modify the least significant bits of certain selected pixels, while others modify the mid- and low-frequency components in the frequency domain. Recently, some works have introduced convolutional neural networks (CNNs) into image steganography and have surpassed traditional image steganography algorithms. However, these studies are limited to training their network models in the spatial domain of the image, and performance bottlenecks remain in terms of capacity, invisibility and security.
With the development of deep learning technology, deep learning has great potential for improvement in the field of image steganography, but it needs to be combined with the advantages of traditional heuristic algorithms to move toward newer and more comprehensive research directions. How to make the secret-containing image look more natural while carrying more secret information is therefore a problem to be solved.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provide an image steganography method based on a convolutional neural network and frequency domain attention.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
an image steganography method based on a convolutional neural network and frequency domain attention comprises the following specific steps:
s1, adding a plurality of Fca frequency domain modules into an encoder and reconstructing a decoder of a network on the basis of a SteganoGAN network, and constructing a convolutional neural network comprising the encoder network, the decoder network and an evaluator network; each Fca frequency domain module is a frequency domain attention network Fca Net, and the frequency component combination of the frequency domain attention network Fca Net is selected as a middle-low frequency part;
in the encoder network, an Fca frequency domain module is inserted after each of the first, second and third convolution layers of the fully-connected (Dense) encoder of the SteganoGAN network;
in the decoder network, the first layer is a convolution layer whose input is the secret-containing image; the second layer is an Fca frequency domain module whose input is the output feature tensor of the first layer of the decoder network; the third layer is a convolution layer whose input is the output feature tensor of the second layer of the decoder network; the fourth layer is an Fca frequency domain module whose input is the output feature tensor of the third layer of the decoder network; the fifth layer is a convolution layer whose input is the result of splicing the output feature tensor of the fourth layer and the output feature tensor of the second layer of the decoder network along the channel dimension; the sixth layer is an Fca frequency domain module whose input is the output feature tensor of the fifth layer of the decoder network; the seventh layer is a convolution layer whose input is the output feature tensor of the sixth layer of the decoder network; the eighth layer is an Fca frequency domain module whose input is the output feature tensor of the seventh layer of the decoder network; the ninth layer is a convolution layer whose input is the output feature tensor of the eighth layer of the decoder network; the tenth layer is an Fca frequency domain module whose input is the output feature tensor of the ninth layer of the decoder network; the eleventh layer is a convolution layer whose input is the result of splicing the output feature tensor of the tenth layer and the output feature tensor of the eighth layer of the decoder network along the channel dimension; the twelfth layer is an Fca frequency domain module whose input is the output feature tensor of the ninth layer of the decoder network; the thirteenth layer is a convolution layer whose input is the result of splicing the output feature tensors of the twelfth, tenth and eighth layers of the decoder network along the channel dimension, and whose output is the secret information;
s2, training the convolutional neural network model based on an image data set to obtain an image steganography network model for carrying out secret information steganography transmission between an information sender and an information receiver; the information sending direction inputs the original image and the secret information to be written into the encoder module of the image steganography network model, outputs the secret-containing image and sends the secret-containing image to the information receiving party, and the information receiving party inputs the received secret-containing image into the decoder module of the image steganography network model and outputs the secret information.
Preferably, in step S1, the middle-low frequency part is a low-frequency part and a middle-frequency part of a 2-dimensional DCT frequency space.
Further, each dimension of the 2-dimensional DCT frequency space is divided into 7 equal parts, so that the 2-dimensional DCT frequency space is divided into 7×7 parts in total; with the coordinates of the part in the upper left corner defined as (0, 0), the middle-low frequency part consists of 16 parts in total, whose coordinates are (0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (3, 0), (3, 1), (4, 0).
Preferably, in step S1, the convolution kernel size of the first, third, fifth, seventh, ninth and eleventh layers in the decoder network is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
Preferably, in step S1, the convolution kernel size of the thirteenth layer of the decoder network is 3×3, and the number of convolution kernels is 3.
Preferably, in the convolutional neural network, the secret information is converted into a tensor of 360×360×3 through byte coding, and then is input into the image steganography network model.
Preferably, in step S2, the convolutional neural network model is trained by randomly generating a tensor with a size of 360×360×3 as secret information for the image dataset, and the training batch size is set to 8, and the training round number is set to 300.
Preferably, in step S2, the information sender and the information receiver perform transmission of the encrypted image on the common channel.
Compared with the prior art, the invention has the following beneficial effects:
according to the heuristic that the traditional information hiding is applied to the frequency domain space, the frequency domain attention module is added in the convolutional neural network, and the method focuses on the middle-low frequency part in the frequency domain space. The mutual conversion between the pixel domain and the DCT domain can not generate information loss, the energy arrangement of the DCT domain space is more compact, each frequency component clearly represents the information of different frequency bands, and convolution is used for further fusion, so that the interpretability is better. The invention can effectively embed the secret information into the image, has high similarity between the secret-containing image and the original image, is difficult to be perceived by a human visual system, and can accurately extract the secret information according to the secret-containing image.
Drawings
FIG. 1 is a training flow chart of an image steganography method in an embodiment of the present invention;
FIG. 2 is a diagram of a convolutional neural network model in an embodiment of the present invention;
FIG. 3 is a diagram of the frequency domain attention network FcaNet model in an embodiment of the present invention;
FIG. 4 is a schematic diagram of selection of frequency component combinations in an embodiment of the invention;
FIG. 5 is a partial image steganographic embedding result diagram in accordance with an embodiment of the present invention;
fig. 6 is a diagram showing a result of extracting a part of secret information in the embodiment of the present invention.
Detailed Description
The invention is further illustrated and described below with reference to the drawings and detailed description. The technical features of the embodiments of the invention can be combined correspondingly on the premise of no mutual conflict.
Most deep learning methods perform information hiding in digital images only in the pixel domain (spatial domain). Conversion between the pixel domain and the frequency domain (the DCT domain in the present invention) produces no information loss, the energy distribution of the frequency domain space is more compact, each frequency component clearly represents information of a different frequency band, and convolution is used for further fusion, giving better interpretability. The invention is based on a convolutional neural network, adds a frequency domain attention mechanism, focuses on the middle-low frequency part of the frequency domain space, and adjusts the depth of the convolutional neural network model to suit the characteristics of applying a convolutional neural network in the frequency domain space, so that features in the frequency domain can be better extracted.
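The losslessness of the pixel-domain/DCT-domain conversion can be checked with a few lines of SciPy; this snippet is purely illustrative and not part of the patented method.

import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8).astype(np.float64)   # stand-in for an image block
coeffs = dctn(block, norm="ortho")                # pixel domain -> 2-D DCT domain
recovered = idctn(coeffs, norm="ortho")           # DCT domain -> pixel domain
print(np.max(np.abs(block - recovered)))          # on the order of 1e-16: the round trip is lossless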
Specific embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
In a preferred embodiment of the present invention, an image steganography method based on a convolutional neural network and frequency domain attention is provided, the specific steps of which are shown in fig. 1 and described as follows:
step 1: image dataset preprocessing
The method comprises the steps of obtaining an image data set for convolutional neural network training, preprocessing the image data set to form sample images with consistent sizes, and dividing the sample images into a training set and a testing set according to a proportion.
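A minimal sketch of this preprocessing step is shown below; the 360×360 sample size, the 8:2 split ratio and the dataset path are illustrative assumptions, not values fixed by the patent.

import torch
from torchvision import datasets, transforms
from torch.utils.data import random_split

transform = transforms.Compose([
    transforms.RandomCrop(360, pad_if_needed=True),  # assumed crop to a consistent 360x360 size
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("path/to/images", transform=transform)  # hypothetical dataset path
n_train = int(0.8 * len(dataset))                                      # assumed 8:2 train/test split
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])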
Step 2: definition of convolutional neural networks
On the basis of the SteganoGAN network, a plurality of Fca frequency domain modules are added to the encoder and the decoder of the network, so as to construct a convolutional neural network comprising an encoder network, a decoder network and an evaluator network, as shown in fig. 2.
It should be noted in advance that each Fca frequency domain module added to the SteganoGAN network in the invention is a frequency domain attention network FcaNet. The frequency domain attention network FcaNet belongs to the prior art; its model structure is shown in fig. 3, and its specific structural form can be found in the paper "FcaNet: Frequency Channel Attention Networks" on arXiv. The frequency domain attention network FcaNet requires a frequency component combination to be set; in the present invention, the frequency component combination of the frequency domain attention network FcaNet is selected as the middle-low frequency part, i.e. the low-frequency and mid-frequency parts of the 2-dimensional DCT frequency space.
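For readers without access to the paper, a simplified PyTorch sketch of multi-spectral channel attention in the FcaNet style is given below: the input feature map is pooled with fixed 2-D DCT basis functions (one selected frequency per channel group) and the resulting channel descriptor drives an SE-style gating. It is an illustrative reimplementation under our own naming (dct_basis, FcaModule), not the patent's or the FcaNet authors' released code.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def dct_basis(u, v, h, w):
    # fixed 2-D DCT basis function for frequency (u, v) on an h x w grid
    b = torch.zeros(h, w)
    for i in range(h):
        for j in range(w):
            b[i, j] = math.cos(math.pi * u * (i + 0.5) / h) * math.cos(math.pi * v * (j + 0.5) / w)
    return b

class FcaModule(nn.Module):
    # simplified multi-spectral channel attention (FcaNet-style)
    def __init__(self, channels, freqs, dct_h=7, dct_w=7, reduction=16):
        super().__init__()
        assert channels % len(freqs) == 0, "channels must split evenly over the selected frequencies"
        weight = torch.stack([dct_basis(u, v, dct_h, dct_w) for (u, v) in freqs])  # (groups, h, w)
        self.register_buffer("dct_weight", weight)
        self.groups, self.dct_h, self.dct_w = len(freqs), dct_h, dct_w
        hidden = max(channels // reduction, 4)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        y = F.adaptive_avg_pool2d(x, (self.dct_h, self.dct_w))   # bring spatial size down to the DCT grid
        y = y.view(b, self.groups, c // self.groups, self.dct_h, self.dct_w)
        # frequency-selective pooling: weight each channel group by its DCT basis and sum spatially
        y = (y * self.dct_weight.view(1, self.groups, 1, self.dct_h, self.dct_w)).sum(dim=(3, 4))
        scale = self.fc(y.view(b, c)).view(b, c, 1, 1)           # per-channel attention weights
        return x * scale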
In this embodiment, the low- and mid-frequency portions of the 2-dimensional DCT frequency space can be selected by spatial division. As shown in FIG. 4, the frequency component combination is selected by dividing each dimension of the whole 2-dimensional DCT frequency space into 7 equal parts, so that the whole 2-dimensional DCT frequency space is divided into 7×7 parts in total. Letting x in (x, y) denote the abscissa and y the ordinate, and defining the coordinates of the part in the upper left corner as (0, 0), the middle-low frequency part selected in the invention consists of 16 parts in total, whose coordinates are (0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (3, 0), (3, 1), (4, 0).
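Using the coordinate convention above, the 16 selected components can be written down directly and passed to the FcaModule sketched earlier; the constant name LOW_MID_FREQS is our own.

# 16 low/mid-frequency coordinates on the 7x7 division of the 2-D DCT frequency space (fig. 4)
LOW_MID_FREQS = [
    (0, 0), (0, 1), (0, 2), (0, 3), (0, 4),
    (1, 0), (1, 1), (1, 2), (1, 3), (1, 4),
    (2, 0), (2, 1), (2, 2),
    (3, 0), (3, 1),
    (4, 0),
]

fca32 = FcaModule(channels=32, freqs=LOW_MID_FREQS)  # an Fca module for a 32-channel feature map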
The SteganoGAN network belongs to the prior art and is described in the paper "SteganoGAN: High Capacity Image Steganography with GANs" (https://arxiv.org/abs/1901.03892). The SteganoGAN network is divided into three parts:
the first part is an encoder (encoder), which inputs an original image cover image, and steganographic information data, and outputs as steganographic images Steganography image. Three different architectures of the encoder (encoder) are the Basic encoder, the Residual encoder and the full-concatenated encoder.
The second part is the decoder, whose input is the steganographic image (Steganography image) and whose output is the steganographic information (data).
The third part is the evaluator (critic), which is equivalent to the discriminator of a GAN; its inputs are the cover image and the Steganography image, and it is trained as a classifier.
In the present invention, the encoder network of the convolutional neural network is modified based on the fully-connected (Dense) encoder of the original SteganoGAN network, the decoder network is reconstructed to adapt the model to the modified encoder network, and the evaluator network uses the evaluator (critic) of the original SteganoGAN network. Fig. 2 shows the network structure diagram of the convolutional neural network, which comprises three parts: the encoder network, the decoder network and the evaluator network. Since the evaluator network is unchanged, the encoder network and decoder network of the convolutional neural network are described in detail below.
A) The encoder network is improved based on the fully-connected Dense encoder of the original SteganoGAN network: an Fca frequency domain module is inserted after each of the first, second and third convolution layers of the Dense encoder (a code sketch of this modified encoder is given after the decoder description below).
B) The decoder network is directly reconfigured, comprising 13 network layers, wherein:
1) The first layer of the decoder network is a convolution layer; the input is the secret-containing image, the convolution kernel size is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
2) The second layer of the decoder network is an Fca frequency domain module; the input is the output feature tensor of the first layer of the decoder network.
3) The third layer of the decoder network is a convolution layer; the input is the output feature tensor of the second layer of the decoder network, the convolution kernel size is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
4) The fourth layer of the decoder network is an Fca frequency domain module; the input is the output feature tensor of the third layer of the decoder network.
5) The fifth layer of the decoder network is a convolution layer; the input is the result of splicing (concatenating) the output feature tensor of the fourth layer and the output feature tensor of the second layer of the decoder network along the channel dimension, the convolution kernel size is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
6) The sixth layer of the decoder network is an Fca frequency domain module; the input is the output feature tensor of the fifth layer of the decoder network.
7) The seventh layer of the decoder network is a convolution layer; the input is the output feature tensor of the sixth layer of the decoder network, the convolution kernel size is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
8) The eighth layer of the decoder network is an Fca frequency domain module; the input is the output feature tensor of the seventh layer of the decoder network.
9) The ninth layer of the decoder network is a convolution layer; the input is the output feature tensor of the eighth layer of the decoder network, the convolution kernel size is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
10) The tenth layer of the decoder network is an Fca frequency domain module; the input is the output feature tensor of the ninth layer of the decoder network.
11) The eleventh layer of the decoder network is a convolution layer; the input is the result of splicing the output feature tensor of the tenth layer and the output feature tensor of the eighth layer of the decoder network along the channel dimension, the convolution kernel size is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
12) The twelfth layer of the decoder network is an Fca frequency domain module; the input is the output feature tensor of the ninth layer of the decoder network.
13) The thirteenth layer of the decoder network is a convolution layer; the input is the result of splicing the output feature tensors of the twelfth, tenth and eighth layers of the decoder network along the channel dimension, the convolution kernel size is 3×3, the number of convolution kernels is 3, and the output is the secret information. A code sketch of this decoder structure is given below.
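The thirteen layers listed above can be transcribed almost literally into PyTorch. The sketch below reuses the FcaModule and LOW_MID_FREQS defined earlier; the layer widths (32 kernels, 3×3, LeakyReLU) follow the text, while the padding mode and class names are illustrative assumptions.

import torch
import torch.nn as nn

class FcaDecoder(nn.Module):
    # decoder reconstructed as in the description: conv layers alternating with Fca modules
    def __init__(self, hidden=32, data_depth=3):
        super().__init__()
        def conv(cin, cout=hidden):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.LeakyReLU(inplace=True))
        self.l1 = conv(3)                       # layer 1: secret-containing image -> 32 feature maps
        self.l3 = conv(hidden)                  # layer 3
        self.l5 = conv(2 * hidden)              # layer 5: concat of layer-4 and layer-2 outputs
        self.l7 = conv(hidden)                  # layer 7
        self.l9 = conv(hidden)                  # layer 9
        self.l11 = conv(2 * hidden)             # layer 11: concat of layer-10 and layer-8 outputs
        self.l13 = nn.Conv2d(3 * hidden, data_depth, 3, padding=1)  # layer 13: 3 kernels, secret output
        self.f2 = FcaModule(hidden, LOW_MID_FREQS)    # layer 2
        self.f4 = FcaModule(hidden, LOW_MID_FREQS)    # layer 4
        self.f6 = FcaModule(hidden, LOW_MID_FREQS)    # layer 6
        self.f8 = FcaModule(hidden, LOW_MID_FREQS)    # layer 8
        self.f10 = FcaModule(hidden, LOW_MID_FREQS)   # layer 10
        self.f12 = FcaModule(hidden, LOW_MID_FREQS)   # layer 12

    def forward(self, stego):
        x2 = self.f2(self.l1(stego))
        x4 = self.f4(self.l3(x2))
        x6 = self.f6(self.l5(torch.cat([x4, x2], dim=1)))
        x8 = self.f8(self.l7(x6))
        x9 = self.l9(x8)
        x10 = self.f10(x9)
        x11 = self.l11(torch.cat([x10, x8], dim=1))
        # note: on a literal reading of the description, the layer-11 output is not referenced again
        x12 = self.f12(x9)                      # per the description, layer 12 takes the layer-9 output
        return self.l13(torch.cat([x12, x10, x8], dim=1))   # layer 13 splices layers 12, 10 and 8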
In addition, in order to unify the input format of the secret information, in the convolutional neural network the secret information is converted into a 360×360×3 tensor through byte coding and then input into the image steganography network model. Of course, the size of the secret information tensor can also be adjusted according to the actual situation.
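The byte coding step is not specified further in the patent; one straightforward reading, shown below, unpacks the secret byte string into bits and zero-pads it into a 0/1 tensor of the stated size (stored channel-first as 3×360×360 for PyTorch). The helper names are illustrative.

import numpy as np
import torch

def bytes_to_secret_tensor(message: bytes, h=360, w=360, d=3) -> torch.Tensor:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > h * w * d:
        raise ValueError("message too long for the secret tensor")
    bits = np.pad(bits, (0, h * w * d - bits.size))                     # zero-pad to full capacity
    return torch.from_numpy(bits.reshape(d, h, w).astype(np.float32))   # channel-first bit tensor

def secret_tensor_to_bytes(tensor: torch.Tensor) -> bytes:
    bits = (tensor.detach().cpu().numpy().reshape(-1) > 0.5).astype(np.uint8)
    return np.packbits(bits).tobytes()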
Thus, by inserting 9 Fca frequency domain modules into the encoder network and the decoder network, a convolutional neural network as shown in fig. 2 can be formed, and secret information steganography transmission between an information sender and an information receiver can be realized through training of the convolutional neural network.
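For completeness, the modified encoder of item A) above can be sketched as follows, again reusing FcaModule and LOW_MID_FREQS. The dense skip connections, the residual addition to the cover image and the channel counts follow SteganoGAN's Dense encoder; the use of BatchNorm and the exact widths are assumptions for illustration.

import torch
import torch.nn as nn

class FcaDenseEncoder(nn.Module):
    # SteganoGAN-style Dense encoder with an Fca module after each of the first three conv layers
    def __init__(self, data_depth=3, hidden=32):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.LeakyReLU(inplace=True),
                                 nn.BatchNorm2d(cout))
        self.conv1 = block(3, hidden)
        self.conv2 = block(hidden + data_depth, hidden)
        self.conv3 = block(2 * hidden + data_depth, hidden)
        self.conv4 = nn.Conv2d(3 * hidden + data_depth, 3, 3, padding=1)
        self.fca1 = FcaModule(hidden, LOW_MID_FREQS)
        self.fca2 = FcaModule(hidden, LOW_MID_FREQS)
        self.fca3 = FcaModule(hidden, LOW_MID_FREQS)

    def forward(self, image, data):
        x1 = self.fca1(self.conv1(image))
        x2 = self.fca2(self.conv2(torch.cat([x1, data], dim=1)))
        x3 = self.fca3(self.conv3(torch.cat([x1, x2, data], dim=1)))
        # residual-style output: the learned perturbation is added to the cover image
        return image + self.conv4(torch.cat([x1, x2, x3, data], dim=1))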
Step 3: joint training of convolutional neural network model based on data set
The training set and the testing set obtained in step 1 are fed into the convolutional neural network model constructed in step 2, and the convolutional neural network model is trained. The training of such models belongs to the prior art and is not described in detail. In this embodiment, a tensor with a size of 360×360×3 may be randomly generated for each image in the training set and the testing set as the secret information to be embedded, so as to train the model; the training batch size is set to 8 and the number of training rounds (epochs) is set to 300. The trained model can then be used as the image steganography network model for steganographic transmission of secret information between the information sender and the information receiver.
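A minimal sketch of this training step is given below, reusing the classes and datasets sketched earlier. The batch size of 8, the 300 epochs and the random 360×360×3 secret tensors come from the text; the optimizer, learning rate and the simplified loss (the SteganoGAN critic/adversarial term is omitted) are assumptions.

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

encoder, decoder = FcaDenseEncoder(), FcaDecoder()
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-4)                       # learning rate is an assumption

loader = DataLoader(train_set, batch_size=8, shuffle=True)    # batch size 8 per the text

for epoch in range(300):                                      # 300 training rounds per the text
    for cover, _ in loader:
        secret = torch.randint(0, 2, (cover.size(0), 3, 360, 360)).float()  # random secret tensor
        stego = encoder(cover, secret)
        recovered = decoder(stego)
        # image fidelity + message recovery; the critic loss of SteganoGAN is omitted in this sketch
        loss = F.mse_loss(stego, cover) + F.binary_cross_entropy_with_logits(recovered, secret)
        opt.zero_grad()
        loss.backward()
        opt.step()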
Step 4: the information sending direction image steganography network model encoder module inputs the original image and the secret information to be written in, outputs the secret-contained image and sends the secret-contained image to the information receiving party. In this embodiment, it is also necessary to output the encrypted image by converting the secret information byte code into 360×360×3 tensors as inputs to the image steganographic network model. Since the encrypted image is substantially visually indistinguishable from the original image, in turn, the information sender may send the encrypted image to the information receiver over a common channel.
Step 5: the information receiver inputs the received image containing secret into a decoder module of the image steganography network model, and outputs secret information, so that encryption transmission of the secret information is completed.
In this embodiment, the trained image steganography network model is used to test the embedding of secret information into images. Some of the secret-containing images are shown in fig. 5; the results show that the secret-containing images produced by the invention have natural colors, are not easily perceived by the human eye, and have high invisibility. The results of extracting part of the secret information are shown in fig. 6; they show that the invention can accurately extract the embedded secret information from the secret-containing image.
The invention further evaluates the image steganography network model with three indexes: hidden data capacity, similarity between the secret-containing image and the original image, and accuracy of the secret information extracted from the secret-containing image. In order to evaluate the information hiding effect of the invention more intuitively, a control experiment group is set up; the control group does not introduce the frequency domain attention mechanism or the corresponding model depth adjustment of the invention. Both groups of experiments fix the hidden data capacity at 360×360×3 bits and compare the two indexes of transparency of the secret-containing image and accuracy of secret information extraction. The similarity between the secret-containing image and the original image is measured with PSNR and SSIM, and the secret information extraction success rate is measured with Accuracy and RS-BPP. The experimental results are as follows:
table 1 partial index comparison table
PSNR SSIM Accuracy RS-BPP
Spatial domain method 36.52 0.85 0.94 2.63
The invention is that 40.78 0.96 0.98 2.86
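The indexes in Table 1 can be computed with standard tooling as sketched below. The RS-BPP value is our reading of the SteganoGAN convention, discounting the raw payload of 3 bits per pixel by the bit error rate; it reproduces the order of magnitude in the table but is not spelled out in the patent.

import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity  # recent scikit-image

def evaluate(cover, stego, secret, recovered_bits, data_depth=3):
    c = cover.detach().permute(1, 2, 0).numpy()
    s = stego.detach().clamp(0, 1).permute(1, 2, 0).numpy()
    psnr = peak_signal_noise_ratio(c, s, data_range=1.0)
    ssim = structural_similarity(c, s, channel_axis=2, data_range=1.0)
    acc = ((recovered_bits > 0.5).float() == secret).float().mean().item()
    rs_bpp = data_depth * (2 * acc - 1)          # Reed-Solomon adjusted bits per pixel (assumed formula)
    return psnr, ssim, acc, rs_bpp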
The experimental results show that the image steganography method of the invention, which combines a convolutional neural network with frequency domain attention and adjusts the model accordingly, can effectively embed secret information into images, effectively improves the transparency of the secret-containing image, and also improves the success rate of secret information extraction.
The above embodiment is only a preferred embodiment of the present invention, but it is not intended to limit the present invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, all the technical schemes obtained by adopting the equivalent substitution or equivalent transformation are within the protection scope of the invention.

Claims (8)

1. An image steganography method based on a convolutional neural network and frequency domain attention is characterized by comprising the following specific steps:
s1, adding a plurality of Fca frequency domain modules into an encoder and reconstructing a decoder of a network on the basis of a SteganoGAN network, and constructing a convolutional neural network comprising the encoder network, the decoder network and an evaluator network; each Fca frequency domain module is a frequency domain attention network Fca Net, and the frequency component combination of the frequency domain attention network Fca Net is selected as a middle-low frequency part;
in the encoder network, an Fca frequency domain module is inserted after each of the first, second and third convolution layers of the fully-connected (Dense) encoder of the SteganoGAN network;
in the decoder network, the first layer is a convolution layer whose input is the secret-containing image; the second layer is an Fca frequency domain module whose input is the output feature tensor of the first layer of the decoder network; the third layer is a convolution layer whose input is the output feature tensor of the second layer of the decoder network; the fourth layer is an Fca frequency domain module whose input is the output feature tensor of the third layer of the decoder network; the fifth layer is a convolution layer whose input is the result of splicing the output feature tensor of the fourth layer and the output feature tensor of the second layer of the decoder network along the channel dimension; the sixth layer is an Fca frequency domain module whose input is the output feature tensor of the fifth layer of the decoder network; the seventh layer is a convolution layer whose input is the output feature tensor of the sixth layer of the decoder network; the eighth layer is an Fca frequency domain module whose input is the output feature tensor of the seventh layer of the decoder network; the ninth layer is a convolution layer whose input is the output feature tensor of the eighth layer of the decoder network; the tenth layer is an Fca frequency domain module whose input is the output feature tensor of the ninth layer of the decoder network; the eleventh layer is a convolution layer whose input is the result of splicing the output feature tensor of the tenth layer and the output feature tensor of the eighth layer of the decoder network along the channel dimension; the twelfth layer is an Fca frequency domain module whose input is the output feature tensor of the ninth layer of the decoder network; the thirteenth layer is a convolution layer whose input is the result of splicing the output feature tensors of the twelfth, tenth and eighth layers of the decoder network along the channel dimension, and whose output is the secret information;
s2, training the convolutional neural network model based on an image data set to obtain an image steganography network model for carrying out secret information steganography transmission between an information sender and an information receiver; the information sending direction inputs the original image and the secret information to be written into the encoder module of the image steganography network model, outputs the secret-containing image and sends the secret-containing image to the information receiving party, and the information receiving party inputs the received secret-containing image into the decoder module of the image steganography network model and outputs the secret information.
2. The method of image steganography based on convolutional neural network and frequency domain attention according to claim 1, wherein in step S1, the mid-low frequency portions are low frequency and mid-frequency portions of a 2-dimensional DCT frequency space.
3. The method of claim 2, wherein each dimension of the 2-dimensional DCT frequency space is divided into 7 equal parts, and a total of 7 x 7 parts are obtained by dividing the 2-dimensional DCT frequency space into 7 equal parts, and the coordinates of the part located at the upper left corner are (0, 0), and the middle-low frequency part is composed of 16 parts in total, wherein the coordinates are (0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (3, 0), (3, 1), (4, 0).
4. The method of image steganography based on convolutional neural network and frequency domain attention according to claim 1, wherein in step S1, the convolution kernel size of the first, third, fifth, seventh, ninth and eleventh layers in the decoder network is 3×3, the number of convolution kernels is 32, and the LeakyReLU function is used as the activation function.
5. The method of image steganography based on convolutional neural networks and frequency domain attention according to claim 1, wherein in step S1, the convolution kernel size of the thirteenth layer of the decoder network is 3×3, and the number of convolution kernels is 3.
6. The method for image steganography based on convolutional neural network and frequency domain attention according to claim 1, wherein in the convolutional neural network, the secret information is converted into a 360×360×3 tensor through byte coding and then input into the image steganography network model.
7. The image steganography method based on the convolutional neural network and the frequency domain attention according to claim 1, wherein in step S2, the convolutional neural network model is trained by randomly generating a tensor with a size of 360×360×3 as secret information for each image in the image dataset, with the training batch size set to 8 and the number of training rounds set to 300.
8. The image steganography method based on the convolutional neural network and the frequency domain attention according to claim 1, wherein in step S2, the information sender and the information receiver perform transmission of the secret-containing image on a common channel.
CN202111454888.2A 2021-12-01 2021-12-01 Image steganography method based on convolutional neural network and frequency domain attention Active CN114157773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111454888.2A CN114157773B (en) 2021-12-01 2021-12-01 Image steganography method based on convolutional neural network and frequency domain attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111454888.2A CN114157773B (en) 2021-12-01 2021-12-01 Image steganography method based on convolutional neural network and frequency domain attention

Publications (2)

Publication Number Publication Date
CN114157773A CN114157773A (en) 2022-03-08
CN114157773B true CN114157773B (en) 2024-02-09

Family

ID=80455631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111454888.2A Active CN114157773B (en) 2021-12-01 2021-12-01 Image steganography method based on convolutional neural network and frequency domain attention

Country Status (1)

Country Link
CN (1) CN114157773B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627154B (en) * 2022-03-18 2023-08-01 中国电子科技集团公司第十研究所 Target tracking method deployed in frequency domain, electronic equipment and storage medium
CN117132671B (en) * 2023-10-27 2024-02-23 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-task steganography method, system and medium based on depth self-adaptive steganography network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132738A (en) * 2020-10-12 2020-12-25 中国人民武装警察部队工程大学 Image robust steganography method with reference generation
WO2021047471A1 (en) * 2019-09-10 2021-03-18 阿里巴巴集团控股有限公司 Image steganography method and apparatus, and image extraction method and apparatus, and electronic device
CN113393359A (en) * 2021-05-18 2021-09-14 杭州电子科技大学 Information hiding method and device based on convolutional neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021047471A1 (en) * 2019-09-10 2021-03-18 阿里巴巴集团控股有限公司 Image steganography method and apparatus, and image extraction method and apparatus, and electronic device
CN112132738A (en) * 2020-10-12 2020-12-25 中国人民武装警察部队工程大学 Image robust steganography method with reference generation
CN113393359A (en) * 2021-05-18 2021-09-14 杭州电子科技大学 Information hiding method and device based on convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on image steganography methods based on deep learning; Fu Zhangjie; Wang Fan; Sun Xingming; Wang Yan; Chinese Journal of Computers (Issue 09); full text *

Also Published As

Publication number Publication date
CN114157773A (en) 2022-03-08

Similar Documents

Publication Publication Date Title
CN114157773B (en) Image steganography method based on convolutional neural network and frequency domain attention
Lin et al. A novel reversible data hiding scheme based on AMBTC compression technique
Wei et al. Generative steganography network
CN111640444A (en) CNN-based self-adaptive audio steganography method and secret information extraction method
CN111681155B (en) GIF dynamic image watermarking method based on deep learning
Prabakaran et al. Dual transform based steganography using wavelet families and statistical methods
CN115131188A (en) Robust image watermarking method based on generation countermeasure network
CN112132737B (en) Image robust steganography method without reference generation
Liao et al. GIFMarking: The robust watermarking for animated GIF based deep learning
CN116091288A (en) Diffusion model-based image steganography method
Liu et al. JPEG robust invertible grayscale
CN113628090A (en) Anti-interference message steganography and extraction method and system, computer equipment and terminal
CN105279728B (en) Pretreated intelligent mobile terminal image latent writing method is encrypted based on secret information
Prabakaran et al. Dual Wavelet Transform Used in Color Image Steganography Method
CN114630130B (en) Face-changing video tracing method and system based on deep learning
CN115880125A (en) Soft fusion robust image watermarking method based on Transformer
CN114119330A (en) Robust digital watermark embedding and extracting method based on neural network
CN117611422B (en) Image steganography method based on Moire pattern generation
Venugopala et al. Evaluation of video watermarking algorithms on mobile device
Zhong et al. Enhanced Attention Mechanism-Based Image Watermarking With Simulated JPEG Compression
CN117057969B (en) Cross-modal image-watermark joint generation and detection device and method
Mei et al. A robust blind watermarking scheme based on attention mechanism and neural joint source-channel coding
Gangarde et al. Application of crypto-video watermarking technique to improve robustness and imperceptibiltiy of secret data
CN112184841B (en) Block replacement generation type information hiding and recovering method, equipment and medium
CN114979402B (en) Shared image storage method based on matrix coding embedding

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant