CN113726976A - High-capacity image hiding method and system based on encoding-decoding network

High-capacity image hiding method and system based on encoding-decoding network

Info

Publication number
CN113726976A
CN113726976A (application CN202111021882.6A)
Authority
CN
China
Prior art keywords
secret
operation group
branch
image
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111021882.6A
Other languages
Chinese (zh)
Other versions
CN113726976B (en)
Inventor
胡欣珏
付章杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202111021882.6A
Publication of CN113726976A
Application granted
Publication of CN113726976B
Active legal status: Current
Anticipated expiration legal status

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • H04N1/32272Encryption or ciphering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44Secrecy systems
    • H04N1/4446Hiding of documents or document information
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a large-capacity image hiding method and system based on an encoding-decoding network, belonging to the technical field of image processing. A carrier image and two secret images are input into a dual-branch encoding network based on a Res2Net-Inception-SE module to obtain a stego image; the stego image is input into a W-Net decoding network to obtain two reconstructed secret images. A mixed loss function is designed according to the quality of the stego image and the quality of the reconstructed secret images and is used as the total loss function of the steganographic network; the steganographic network is optimized with the goal of minimizing this loss function, and training is considered finished when the loss has decreased and remains stable. Compared with existing algorithms, the method offers high hiding capacity and high hiding performance.

Description

High-capacity image hiding method and system based on encoding-decoding network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a large-capacity image hiding method and system based on an encoding-decoding network.
Background
The development of internet technology has brought great convenience to people's lives, but it has also brought many information security problems, such as the disclosure of personal privacy information and the illegal theft of commercial confidential data. Security during data communication is therefore receiving increasing attention.
Steganography is one of the main methods for ensuring communication security: secret information is embedded into a carrier through a specific algorithm, and the receiver later extracts it with a matching extraction algorithm. Common steganographic carriers include text, images, audio and video. Images have high redundancy, and with the development of internet technology a vast number of images circulate at every moment, making them a good carrier for hiding secret information. Image steganography has thus become one of the main research directions of information hiding, owing to its simplicity, effectiveness and large embedding capacity. In 2017, Baluja published the first deep-learning image-in-image steganography algorithm at NIPS [Baluja S. Hiding images in plain sight: Deep steganography[C]//Advances in Neural Information Processing Systems. Cambridge: MIT Press, 2017: 2069-2079], and since then various deep-learning networks have been applied to large-capacity image-in-image steganography models. The quality of a large-capacity steganographic model is generally measured by how effectively the secret information is hidden and how accurately it can be extracted. However, existing steganographic models suffer from poor stego image quality and poor reconstructed secret image quality. Therefore, how to build an efficient image-hiding generator and extractor, or design a new loss function, so as to train the steganographic network more effectively and improve the quality of the stego image and the reconstructed secret images, is a direction in which large-capacity steganographic models will continue to be studied.
Disclosure of Invention
In view of the defects of the prior art, the present invention aims to provide a large-capacity image hiding method and system based on an encoding-decoding network, to solve the problems of poor stego image quality and poor reconstructed secret image quality in existing steganographic models.
The purpose of the invention can be realized by the following technical scheme:
a high-capacity image hiding method based on an encoding-decoding network, the method comprising the following steps:
S1: inputting a carrier image and two secret images into a dual-branch encoding network based on a Res2Net-Inception-SE module to generate a stego image;
S2: inputting the generated stego image into a W-Net decoding network to obtain two reconstructed secret images;
S3: designing a mixed loss function according to the quality of the stego image and the quality of the reconstructed secret images, taking it as the total loss function of the steganographic network, optimizing the steganographic network with the goal of minimizing the total loss function, and finishing training when the loss has decreased and remains stable.
Further, in step S1, inputting the one carrier image and the two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module comprises inputting the carrier image separately into one branch of the encoding network, and inputting the two secret images, after a channel stacking operation, into the other branch of the encoding network.
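As a quick illustration of this input arrangement, the following sketch (PyTorch assumed; image size and batch size are illustrative, not taken from the patent) prepares the two branch inputs:

```python
import torch

# Illustrative shapes: RGB images of 256x256 (an assumption; the patent does not fix the size).
cover = torch.rand(1, 3, 256, 256)    # carrier image -> carrier branch, input on its own
secret1 = torch.rand(1, 3, 256, 256)  # first secret image
secret2 = torch.rand(1, 3, 256, 256)  # second secret image

# Channel stacking: the two secrets become one 6-channel tensor for the secret branch.
secrets = torch.cat([secret1, secret2], dim=1)  # shape (1, 6, 256, 256)
```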
Further, the dual-branch encoding network based on the Res2Net-Inception-SE module in step S1 comprises a carrier-branch first convolution operation group, a carrier-branch Res2Net-Inception-SE module, a carrier-branch second convolution operation group, a secret-branch first convolution operation group, a secret-branch Res2Net-Inception-SE module, a secret-branch second convolution operation group, a deconvolution operation group, and a third convolution operation group;
the outputs of the carrier-branch second convolution operation group and the secret-branch second convolution operation group are channel-spliced and then input into the deconvolution operation group, the output end of the deconvolution operation group is connected to the third convolution operation group, and the output of the third convolution operation group is the stego image;
a convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence, and a deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
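A convolution operation group and a deconvolution operation group, as just described, map naturally onto small PyTorch modules; the kernel sizes, strides, and LeakyReLU slope below are assumptions, since this section does not fix them:

```python
import torch.nn as nn

def conv_group(in_ch, out_ch, stride=1):
    # Convolution layer -> activation layer -> batch normalization layer, in that order.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.LeakyReLU(0.2),
        nn.BatchNorm2d(out_ch),
    )

def deconv_group(in_ch, out_ch):
    # Deconvolution layer -> activation layer -> batch normalization layer, in that order.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.BatchNorm2d(out_ch),
    )
```

These helpers are reused by the sketches that follow.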
Further, the Res2Net-Inception-SE module in step S1 comprises convolution block one, an improved residual block, convolution block two, and an attention module;
convolution block one is represented as:
x = F3(f)
where f is the input of convolution block one, x is the output of convolution block one, and F3(·) is a 3×3 convolution transform function;
the improved residual block is represented as:
x_i = S(x), i ∈ {1, 2, 3, 4}
y_i = x_i, i = 1
y_i = IC(x_i), i = 2
y_i = IC(x_i + y_{i-1}), i = 3, 4
y = [y_1, y_2, y_3, y_4]
where S(·) is the feature-channel splitting operation, which splits the output x of convolution block one into 4 blocks along the channel dimension, x_i is the i-th block after channel splitting, y_i is the output of x_i after the corresponding operation, [·] denotes the channel stacking operation, y is the output of the improved residual block, and IC(·) is the Inception operation, as follows:
[F1(·), F3(F1(·)), F5(F1(·)), F1(M(·))]
where F1(·) is a 1×1 convolution transform function, F5(·) is a 5×5 convolution transform function, and M(·) is a 3×3 max-pooling function;
convolution block two is represented as:
z = F3(y)
where y is the input of convolution block two and z is the feature map output by convolution block two;
the attention module is represented as:
s_c = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} z_c(i, j)
G = δ1(W2(δ2(W1(s_c)))) · z_c
where H is the height of the feature map output by convolution block two, W is the width of that feature map, z_c(i, j) is the feature of the c-th channel of the output feature map z, s_c is the global feature obtained by encoding the spatial features of the c-th channel, δ1(·) is the Sigmoid activation function, W1(·) is a fully connected operation that reduces the channel dimension of the feature map by a factor of 16, δ2(·) is the ReLU activation function, W2(·) is a fully connected operation that restores the channel dimension of the feature map by a factor of 16, and G is the output of the attention module.
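The module as a whole can be sketched as follows, assuming the standard Res2Net hierarchy for the split outputs (the y_i equations above) and an SE-style squeeze-and-excitation with reduction ratio 16; the channel count of 64 is an illustrative assumption:

```python
import torch
import torch.nn as nn

class Res2NetInceptionSE(nn.Module):
    """Sketch of the Res2Net-Inception-SE module; channel count is an assumption."""

    def __init__(self, channels=64, r=16):
        super().__init__()
        assert channels % 16 == 0
        w = channels // 4  # width of each of the 4 Res2Net splits
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)  # convolution block one
        # One Inception operation IC(.) per split that needs it (splits 2, 3, 4).
        self.inception = nn.ModuleList([self._make_ic(w) for _ in range(3)])
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)  # convolution block two
        # SE attention: FC down by r -> ReLU -> FC up by r -> Sigmoid.
        self.fc1 = nn.Linear(channels, channels // r)
        self.fc2 = nn.Linear(channels // r, channels)

    def _make_ic(self, w):
        # IC(.) = channel stack of [F1, F3(F1), F5(F1), F1(MaxPool3)], w//4 channels each.
        b = w // 4
        return nn.ModuleList([
            nn.Conv2d(w, b, 1),
            nn.Sequential(nn.Conv2d(w, b, 1), nn.Conv2d(b, b, 3, padding=1)),
            nn.Sequential(nn.Conv2d(w, b, 1), nn.Conv2d(b, b, 5, padding=2)),
            nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1), nn.Conv2d(w, b, 1)),
        ])

    def _ic(self, idx, x):
        return torch.cat([branch(x) for branch in self.inception[idx]], dim=1)

    def forward(self, f):
        x = self.conv1(f)
        x1, x2, x3, x4 = torch.chunk(x, 4, dim=1)  # S(.): split into 4 along channels
        y1 = x1                                    # y_1 = x_1
        y2 = self._ic(0, x2)                       # y_2 = IC(x_2)
        y3 = self._ic(1, x3 + y2)                  # y_3 = IC(x_3 + y_2)
        y4 = self._ic(2, x4 + y3)                  # y_4 = IC(x_4 + y_3)
        z = self.conv2(torch.cat([y1, y2, y3, y4], dim=1))
        s = z.mean(dim=(2, 3))                     # squeeze: global average pooling
        g = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))
        return z * g[:, :, None, None]             # channel-wise re-weighting
```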
Further, in step S2, the W-Net decoding network has a skip-connection structure, the W-Net decoding network comprising: convolution operation groups one to four, a channel splitting operation, deconvolution operation groups one to four and convolution operation group one of the first secret branch, and deconvolution operation groups one to four and convolution operation group one of the second secret branch;
the channel splitting operation splits the output of convolution operation group four into secret feature map one and secret feature map two;
the output of convolution operation group three and the output of deconvolution operation group one of the first secret branch are fused with secret feature map one to form a skip connection;
the outputs of convolution operation group two and of deconvolution operation groups two and one of the first secret branch are fused with secret feature map one to form a skip connection;
the outputs of convolution operation group one and of deconvolution operation groups three and two of the first secret branch are fused with secret feature map one to form a skip connection;
the output of convolution operation group three and the output of deconvolution operation group one of the second secret branch are fused with secret feature map two to form a skip connection;
the outputs of convolution operation group two and of deconvolution operation groups two and one of the second secret branch are fused with secret feature map two to form a skip connection;
the outputs of convolution operation group one and of deconvolution operation groups three and two of the second secret branch are fused with secret feature map two to form a skip connection;
the input of convolution operation group one of the first secret branch is the output of deconvolution operation group four of the first secret branch, and its output is the first reconstructed secret image;
the input of convolution operation group one of the second secret branch is the output of deconvolution operation group four of the second secret branch, and its output is the second reconstructed secret image;
a convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence; a deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
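Reusing the conv_group/deconv_group helpers sketched earlier, the decoder's split-then-two-branch topology could look roughly as below; the channel counts are assumptions, and the fusion is simplified to concatenating each shared feature map into the corresponding up-sampling step:

```python
import torch
import torch.nn as nn

class WNetDecoder(nn.Module):
    """Sketch: four shared convolution operation groups, a channel split,
    then two symmetric branches with skip connections from the shared path."""

    def __init__(self):
        super().__init__()
        self.c1 = conv_group(3, 64, stride=2)
        self.c2 = conv_group(64, 128, stride=2)
        self.c3 = conv_group(128, 256, stride=2)
        self.c4 = conv_group(256, 512, stride=2)
        self.branch1 = self._branch()
        self.branch2 = self._branch()

    def _branch(self):
        return nn.ModuleDict({
            "d1": deconv_group(256, 256),        # fed by one half of the split
            "d2": deconv_group(256 + 256, 128),  # fused with the c3 output
            "d3": deconv_group(128 + 128, 64),   # fused with the c2 output
            "d4": deconv_group(64 + 64, 32),     # fused with the c1 output
            "out": nn.Conv2d(32, 3, 3, padding=1),
        })

    def _run(self, b, feat, skips):
        x = b["d1"](feat)
        x = b["d2"](torch.cat([x, skips[2]], dim=1))
        x = b["d3"](torch.cat([x, skips[1]], dim=1))
        x = b["d4"](torch.cat([x, skips[0]], dim=1))
        return torch.sigmoid(b["out"](x))

    def forward(self, stego):
        f1 = self.c1(stego)
        f2 = self.c2(f1)
        f3 = self.c3(f2)
        f4 = self.c4(f3)
        s1_feat, s2_feat = torch.chunk(f4, 2, dim=1)  # channel splitting operation
        skips = (f1, f2, f3)
        return self._run(self.branch1, s1_feat, skips), self._run(self.branch2, s2_feat, skips)
```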
Further, the steganographic network in step S3 comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network.
Further, the total loss function in step S3 is:
L(c, c', s, s') = βL(c, c') + γL(s, s')
where L(c, c') is the loss of the dual-branch encoding network based on the Res2Net-Inception-SE module, L(s, s') is the loss of the W-Net decoding network, and β and γ are weights controlling the encoding network loss and the decoding network loss.
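Read directly, this combination is a weighted sum; the β and γ defaults below are placeholders, not values fixed by the patent:

```python
def total_loss(loss_encoder, loss_decoder, beta=1.0, gamma=0.75):
    # L(c, c', s, s') = beta * L(c, c') + gamma * L(s, s')
    return beta * loss_encoder + gamma * loss_decoder
```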
Further, the loss L(c, c') of the dual-branch encoding network based on the Res2Net-Inception-SE module is calculated as:
L(c, c') = α(1 − MS-SSIM(c, c')) + (1 − α)·||G·(c − c')||_1
MS-SSIM(c, c') = l_M(c, c') · Π_{j=1}^{M} cs_j(c, c')
l(c, c') = (2μ_c·μ_c' + K1) / (μ_c² + μ_c'² + K1)
cs(c, c') = (2σ_cc' + K2) / (σ_c² + σ_c'² + K2)
where c denotes the carrier image pixels, c = {c_i | i = 1, 2, ..., L}, L is the total number of image pixels, c' denotes the stego image pixels, c' = {c_i' | i = 1, 2, ..., L}, μ_c and μ_c' are the means of c and c' and also represent the luminance of the carrier image and the stego image, K1 is a constant less than or equal to 1, M is a custom number of scales, here set to 5, σ_c and σ_c' are the standard deviations of c and c' and also represent the contrast of the carrier image and the stego image, σ_cc' is the covariance of c and c' and also represents the structural similarity of the carrier image and the stego image, K2 is a constant less than or equal to 1, G is a Gaussian filter parameter, and α is a hyperparameter controlling the weight;
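Assuming the standard MS-SSIM plus Gaussian-weighted L1 mix that these luminance, contrast, and structure terms describe, a single-scale sketch of the loss is as follows (NumPy/SciPy; the window sigma and constants are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim_l1_mix(x, y, alpha=0.84, K1=0.01, K2=0.03, sigma=1.5):
    """Single-scale sketch of the mixed loss; the patent uses M = 5 scales."""
    mu_x, mu_y = gaussian_filter(x, sigma), gaussian_filter(y, sigma)
    var_x = gaussian_filter(x * x, sigma) - mu_x**2        # local contrast terms
    var_y = gaussian_filter(y * y, sigma) - mu_y**2
    cov = gaussian_filter(x * y, sigma) - mu_x * mu_y      # local structure term
    l = (2 * mu_x * mu_y + K1) / (mu_x**2 + mu_y**2 + K1)  # luminance comparison
    cs = (2 * cov + K2) / (var_x + var_y + K2)             # contrast/structure comparison
    ssim = np.mean(l * cs)
    g_l1 = np.mean(gaussian_filter(np.abs(x - y), sigma))  # Gaussian-weighted L1
    return alpha * (1.0 - ssim) + (1.0 - alpha) * g_l1

# Usage with random stand-ins for the carrier image and the stego image:
c = np.random.rand(256, 256)
c_prime = c + 0.01 * np.random.randn(256, 256)
print(ssim_l1_mix(c, c_prime))
```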
the loss L(s, s') of the W-Net decoding network is calculated as:
L(s, s') = L(s1, s1') + L(s2, s2')
L(s1, s1') = α(1 − MS-SSIM(s1, s1')) + (1 − α)·||G·(s1 − s1')||_1
L(s2, s2') = α(1 − MS-SSIM(s2, s2')) + (1 − α)·||G·(s2 − s2')||_1
where s1 denotes the pixels of the first secret image, s1 = {s1_i | i = 1, 2, ..., L}; s2 denotes the pixels of the second secret image, s2 = {s2_i | i = 1, 2, ..., L}; L is the total number of image pixels; s1' denotes the pixels of the first reconstructed secret image, s1' = {s1_i' | i = 1, 2, ..., L}; s2' denotes the pixels of the second reconstructed secret image, s2' = {s2_i' | i = 1, 2, ..., L}; μ_s1 and μ_s1' are the means of s1 and s1' and also represent the luminance of the first secret image and the first reconstructed secret image; μ_s2 and μ_s2' are the means of s2 and s2' and also represent the luminance of the second secret image and the second reconstructed secret image; K1 is a constant less than or equal to 1; M is a custom number of scales, here set to 5; σ_s1 and σ_s1' are the standard deviations of s1 and s1' and also represent the contrast of the first secret image and the first reconstructed secret image; σ_s2 and σ_s2' are the standard deviations of s2 and s2' and also represent the contrast of the second secret image and the second reconstructed secret image; σ_s1s1' is the covariance of s1 and s1' and also represents the structural similarity of the first secret image and the first reconstructed secret image; σ_s2s2' is the covariance of s2 and s2' and also represents the structural similarity of the second secret image and the second reconstructed secret image; K2 is a constant less than or equal to 1; G is a Gaussian filter parameter; and α is a hyperparameter controlling the weight.
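In code, and reusing the single-scale sketch above, the decoding-network loss is the sum over the two secret/reconstruction pairs:

```python
def decoder_loss(s1, s1_prime, s2, s2_prime):
    # L(s, s') = L(s1, s1') + L(s2, s2')
    return ssim_l1_mix(s1, s1_prime) + ssim_l1_mix(s2, s2_prime)
```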
A high-capacity image hiding system based on an encoding-decoding network, the system comprising:
a generation unit, used for inputting a carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module to obtain a stego image embedded with the two secret images;
a reconstruction unit, used for inputting the stego image into the W-Net decoding network to obtain two reconstructed secret images;
a training unit, used for designing a mixed loss function according to the quality of the stego image and the quality of the reconstructed secret images, taking it as the total loss function of the steganographic network, optimizing the steganographic network with the goal of minimizing the loss function, and considering training finished when the loss has decreased and remains stable; the steganographic network comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network;
a steganography unit, used for generating a stego image from a carrier image and secret images with the trained steganographic network, and for reconstructing the two secret images from the stego image.
The invention has the following beneficial effects:
The invention designs a dual-branch encoding network based on the Res2Net-Inception-SE module, which improves the visual quality of the stego image, and a W-Net decoding network, which improves the visual quality of the reconstructed secret images. The mixed loss function designed according to the quality of the stego image and the quality of the reconstructed secret images takes human visual perception into account, is better suited to steganography, and effectively improves image quality.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of the high-capacity image hiding method based on an encoding-decoding network;
FIG. 2 is a schematic diagram of the structure of the dual-branch encoding network based on the Res2Net-Inception-SE module;
FIG. 3 is a schematic diagram of the structure of the W-Net decoding network.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
The embodiment of the invention provides a large-capacity image hiding method based on an encoding-decoding network which, as shown in FIG. 1, specifically comprises the following steps:
S1, inputting a carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module to generate a stego image;
in the specific implementation process, the carrier image is input separately into one branch of the encoding network, and the two secret images are channel-stacked and then input into the other branch of the encoding network.
In a specific implementation manner of the embodiment of the present invention, as shown in FIG. 2, the dual-branch encoding network based on the Res2Net-Inception-SE module comprises: a carrier-branch first convolution operation group Conv1_c, a carrier-branch Res2Net-Inception-SE module R_c, a carrier-branch second convolution operation group Conv2_c, a secret-branch first convolution operation group Conv1_s, a secret-branch Res2Net-Inception-SE module R_s, a secret-branch second convolution operation group Conv2_s, a deconvolution operation group ConvT, and a third convolution operation group Conv3;
the outputs of the carrier-branch second convolution operation group and the secret-branch second convolution operation group are channel-spliced and then input into the deconvolution operation group, and the output end of the deconvolution operation group is connected to the third convolution operation group;
the output of the third convolution operation group is the stego image;
a convolution operation group comprises a convolution layer Conv, an activation layer LeakyReLU and a batch normalization layer BN arranged in sequence; a deconvolution operation group comprises a deconvolution layer ConvT, an activation layer LeakyReLU and a batch normalization layer BN arranged in sequence.
The Res2Net-Inception-SE module comprises convolution block one, an improved residual block, convolution block two and an attention module.
Convolution block one is represented as:
x = F3(f)
where f is the input of convolution block one, x is the output of convolution block one, and F3(·) is a 3×3 convolution transform function.
The improved residual block is represented as:
x_i = S(x), i ∈ {1, 2, 3, 4}
y_i = x_i, i = 1
y_i = IC(x_i), i = 2
y_i = IC(x_i + y_{i-1}), i = 3, 4
y = [y_1, y_2, y_3, y_4]
where S(·) is the feature-channel splitting operation, which splits the output x of convolution block one into 4 blocks along the channel dimension, x_i is the i-th block after channel splitting, y_i is the output of x_i after the corresponding operation, [·] denotes the channel stacking operation, y is the output of the improved residual block, and IC(·) is the Inception operation, as follows:
[F1(·), F3(F1(·)), F5(F1(·)), F1(M(·))]
where F1(·) is a 1×1 convolution transform function, F5(·) is a 5×5 convolution transform function, and M(·) is a 3×3 max-pooling function.
Convolution block two is represented as:
z = F3(y)
where y is the input of convolution block two and z is the feature map output by convolution block two.
The attention module is represented as:
s_c = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} z_c(i, j)
G = δ1(W2(δ2(W1(s_c)))) · z_c
where H is the height of the feature map output by convolution block two, W is the width of that feature map, z_c(i, j) is the feature of the c-th channel of the output feature map z, s_c is the global feature obtained by encoding the spatial features of the c-th channel, δ1(·) is the Sigmoid activation function, W1(·) is a fully connected operation that reduces the channel dimension of the feature map by a factor of 16, δ2(·) is the ReLU activation function, W2(·) is a fully connected operation that restores the channel dimension of the feature map by a factor of 16, and G is the output of the attention module. It can be seen that the dual-branch encoding network based on the Res2Net-Inception-SE module comprises a feature extraction stage and a feature fusion stage. In the feature extraction stage, the encoder uses the dual-branch structure to extract features from the carrier image and the secret images independently; the Res2Net-Inception-SE module on each branch performs multi-scale feature extraction and channel-importance learning on the convolved features at the same resolution. In the feature fusion stage, the encoder fuses the extracted features using splicing and deconvolution operations to finally obtain the stego image.
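Putting the pieces together, the encoder's two-stage flow might be sketched as follows; the channel counts and the reuse of the helper functions and module class from the earlier sketches are assumptions:

```python
import torch
import torch.nn as nn

class DualBranchEncoder(nn.Module):
    """Sketch: two branches extract features independently; the features are
    then channel-spliced, deconvolved, and convolved into a stego image."""

    def __init__(self):
        super().__init__()
        self.conv1_c = conv_group(3, 64, stride=2)   # Conv1_c
        self.r_c = Res2NetInceptionSE(64)            # R_c
        self.conv2_c = conv_group(64, 128)           # Conv2_c
        self.conv1_s = conv_group(6, 64, stride=2)   # Conv1_s (two stacked secrets)
        self.r_s = Res2NetInceptionSE(64)            # R_s
        self.conv2_s = conv_group(64, 128)           # Conv2_s
        self.convt = deconv_group(256, 64)           # ConvT
        self.conv3 = nn.Conv2d(64, 3, 3, padding=1)  # Conv3 -> stego image

    def forward(self, cover, secrets):
        fc = self.conv2_c(self.r_c(self.conv1_c(cover)))    # carrier branch
        fs = self.conv2_s(self.r_s(self.conv1_s(secrets)))  # secret branch
        fused = torch.cat([fc, fs], dim=1)                  # channel splicing
        return torch.sigmoid(self.conv3(self.convt(fused)))
```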
S2, inputting the secret image into a W-Net decoding network to obtain two reconstructed secret images;
in a specific implementation manner of the embodiment of the present invention, the W-Net decoding network has a skip-connection structure and, as shown in FIG. 3, comprises: convolution operation group one Conv_1, convolution operation group two Conv_2, convolution operation group three Conv_3, convolution operation group four Conv_4, a channel splitting operation Sp, deconvolution operation groups one to four of the first secret branch ConvT1_s1, ConvT2_s1, ConvT3_s1 and ConvT4_s1, convolution operation group one of the first secret branch Conv_s1, deconvolution operation groups one to four of the second secret branch ConvT1_s2, ConvT2_s2, ConvT3_s2 and ConvT4_s2, and convolution operation group one of the second secret branch Conv_s2;
the channel splitting operation splits the output of convolution operation group four into secret feature map one and secret feature map two;
the output of convolution operation group three and the output of deconvolution operation group one of the first secret branch are fused with secret feature map one to form a skip connection;
the outputs of convolution operation group two and of deconvolution operation groups two and one of the first secret branch are fused with secret feature map one to form a skip connection;
the outputs of convolution operation group one and of deconvolution operation groups three and two of the first secret branch are fused with secret feature map one to form a skip connection;
the output of convolution operation group three and the output of deconvolution operation group one of the second secret branch are fused with secret feature map two to form a skip connection;
the outputs of convolution operation group two and of deconvolution operation groups two and one of the second secret branch are fused with secret feature map two to form a skip connection;
the outputs of convolution operation group one and of deconvolution operation groups three and two of the second secret branch are fused with secret feature map two to form a skip connection;
the input of convolution operation group one of the first secret branch is the output of deconvolution operation group four of the first secret branch, and its output is the first reconstructed secret image;
the input of convolution operation group one of the second secret branch is the output of deconvolution operation group four of the second secret branch, and its output is the second reconstructed secret image;
a convolution operation group comprises a convolution layer Conv, an activation layer LeakyReLU and a batch normalization layer BN arranged in sequence; a deconvolution operation group comprises a deconvolution layer ConvT, an activation layer LeakyReLU and a batch normalization layer BN arranged in sequence.
S3, designing a mixed loss function according to the quality of the stego image and the quality of the reconstructed secret images, taking it as the total loss function of the steganographic network, optimizing the steganographic network with the goal of minimizing the total loss function, and considering training finished when the loss has decreased and remains stable; the steganographic network comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network.
In a specific implementation manner of the embodiment of the present invention, the total loss function of the steganographic network is:
L(c, c', s, s') = βL(c, c') + γL(s, s')
where L(c, c') is the loss of the dual-branch encoding network based on the Res2Net-Inception-SE module, L(s, s') is the loss of the W-Net decoding network, and β and γ are weights controlling the encoding network loss and the decoding network loss.
The loss L(c, c') of the dual-branch encoding network based on the Res2Net-Inception-SE module is calculated as:
L(c, c') = α(1 − MS-SSIM(c, c')) + (1 − α)·||G·(c − c')||_1
MS-SSIM(c, c') = l_M(c, c') · Π_{j=1}^{M} cs_j(c, c')
l(c, c') = (2μ_c·μ_c' + K1) / (μ_c² + μ_c'² + K1)
cs(c, c') = (2σ_cc' + K2) / (σ_c² + σ_c'² + K2)
where c represents the carrier image, c = {c_i | i = 1, 2, ..., L}, L is the total number of image pixels, c' represents the stego image, c' = {c_i' | i = 1, 2, ..., L}, μ_c and μ_c' are the means of c and c' and also represent the luminance of the carrier image and the stego image, K1 is a constant less than or equal to 1, M is a custom number of scales, here set to 5, σ_c and σ_c' are the standard deviations of c and c' and also represent the contrast of the carrier image and the stego image, σ_cc' is the covariance of c and c' and also represents the structural similarity of the carrier image and the stego image, K2 is a constant less than or equal to 1, G is a Gaussian filter parameter, and α is a hyperparameter controlling the weight.
The loss L(s, s') of the W-Net decoding network is calculated as:
L(s, s') = L(s1, s1') + L(s2, s2')
L(s1, s1') = α(1 − MS-SSIM(s1, s1')) + (1 − α)·||G·(s1 − s1')||_1
L(s2, s2') = α(1 − MS-SSIM(s2, s2')) + (1 − α)·||G·(s2 − s2')||_1
where s1 represents the first secret image, s1 = {s1_i | i = 1, 2, ..., L}; s2 represents the second secret image, s2 = {s2_i | i = 1, 2, ..., L}; L is the total number of image pixels; s1' represents the first reconstructed secret image, s1' = {s1_i' | i = 1, 2, ..., L}; s2' represents the second reconstructed secret image, s2' = {s2_i' | i = 1, 2, ..., L}; μ_s1 and μ_s1' are the means of s1 and s1' and also represent the luminance of the first secret image and the first reconstructed secret image; μ_s2 and μ_s2' are the means of s2 and s2' and also represent the luminance of the second secret image and the second reconstructed secret image; K1 is a constant less than or equal to 1; M is a custom number of scales, here set to 5; σ_s1 and σ_s1' are the standard deviations of s1 and s1' and also represent the contrast of the first secret image and the first reconstructed secret image; σ_s2 and σ_s2' are the standard deviations of s2 and s2' and also represent the contrast of the second secret image and the second reconstructed secret image; σ_s1s1' is the covariance of s1 and s1' and also represents the structural similarity of the first secret image and the first reconstructed secret image; σ_s2s2' is the covariance of s2 and s2' and also represents the structural similarity of the second secret image and the second reconstructed secret image; K2 is a constant less than or equal to 1; G is a Gaussian filter parameter; and α is a hyperparameter controlling the weight.
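Putting the loss design into a minimal training loop (the optimizer, learning rate, epoch count, and the plain-L1 stand-ins for the mixed losses are assumptions made only to keep the sketch self-contained):

```python
import torch

def train(encoder, decoder, loader, epochs=100, beta=1.0, gamma=0.75, lr=1e-4):
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for cover, s1, s2 in loader:
            secrets = torch.cat([s1, s2], dim=1)
            stego = encoder(cover, secrets)
            r1, r2 = decoder(stego)
            # The patent's mixed MS-SSIM + L1 terms belong here; plain L1 stands in.
            loss_enc = torch.mean(torch.abs(stego - cover))
            loss_dec = torch.mean(torch.abs(r1 - s1)) + torch.mean(torch.abs(r2 - s2))
            loss = beta * loss_enc + gamma * loss_dec  # total loss function
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Training is considered finished once the loss has dropped and stays stable.
```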
In practical application, a carrier image and two secret images are input into the trained dual-branch encoding network based on the Res2Net-Inception-SE module to obtain a stego image; the stego image is then input into the trained W-Net decoding network, and the two secret images hidden in the stego image are extracted.
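Reusing the tensors and model sketches from earlier, the deployed hide-and-reveal round trip is then just two forward passes:

```python
import torch

with torch.no_grad():
    stego = encoder(cover, torch.cat([secret1, secret2], dim=1))  # hiding
    recovered1, recovered2 = decoder(stego)                       # extraction
```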
In summary, in the large-capacity image hiding method based on the encoding-decoding network of the embodiment of the present invention, one carrier image and two secret images are input into the dual-branch encoding network based on the Res2Net-Inception-SE module to generate a stego image, and the stego image is input into the W-Net decoding network to obtain two reconstructed secret images; the method offers high hiding capacity and high hiding performance.
To verify the effect of the invention, the proposed steganographic model was trained on the public dataset PASCAL-VOC2012 and tested on the public dataset ImageNet. The experimental results on image quality are shown in Table 1, in comparison with Baluja's model [Baluja S. Hiding images within images[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(7): 1685-1697].
TABLE 1 (image-quality comparison results)
Example 2
Based on the same inventive concept as Embodiment 1, an embodiment of the present invention provides a high-capacity image hiding system based on an encoding-decoding network, comprising:
a generation unit, used for inputting one carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module to obtain a stego image embedded with the two secret images;
a reconstruction unit, used for inputting the stego image into the W-Net decoding network to obtain two reconstructed secret images;
a training unit, used for designing a mixed loss function according to the quality of the stego image and the quality of the reconstructed secret images, taking it as the total loss function of the steganographic network, optimizing the steganographic network with the goal of minimizing the loss function, and considering training finished when the loss has decreased and remains stable; the steganographic network comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network;
a steganography unit, used for generating a stego image from a carrier image and secret images with the trained steganographic network, and then reconstructing the two secret images from the stego image.
For specific limitations of the high-capacity image hiding system based on the encoding-decoding network, reference may be made to the limitations of the high-capacity image hiding method above, which are not repeated here. The various modules of the above system may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or be independent of, a processor in a computer device, or be stored in software form in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to each module.
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing shows and describes the general principles, essential features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed.

Claims (9)

1. A high-capacity image hiding method based on an encoding-decoding network, characterized in that the method comprises the following steps:
S1: inputting a carrier image and two secret images into a dual-branch encoding network based on a Res2Net-Inception-SE module to generate a stego image;
S2: inputting the generated stego image into a W-Net decoding network to obtain two reconstructed secret images;
S3: designing a mixed loss function according to the quality of the stego image and the quality of the reconstructed secret images, taking it as the total loss function of the steganographic network, optimizing the steganographic network with the goal of minimizing the total loss function, and finishing training when the loss has decreased and remains stable.
2. The high-capacity image hiding method based on an encoding-decoding network according to claim 1, wherein inputting the one carrier image and the two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module in step S1 comprises inputting the carrier image separately into one branch of the encoding network, and inputting the two secret images, after a channel stacking operation, into the other branch of the encoding network.
3. The high-capacity image hiding method based on an encoding-decoding network according to claim 1, wherein the dual-branch encoding network based on the Res2Net-Inception-SE module in step S1 comprises a carrier-branch first convolution operation group, a carrier-branch Res2Net-Inception-SE module, a carrier-branch second convolution operation group, a secret-branch first convolution operation group, a secret-branch Res2Net-Inception-SE module, a secret-branch second convolution operation group, a deconvolution operation group, and a third convolution operation group;
the outputs of the carrier-branch second convolution operation group and the secret-branch second convolution operation group are channel-spliced and then input into the deconvolution operation group, the output end of the deconvolution operation group is connected to the third convolution operation group, and the output of the third convolution operation group is the stego image;
a convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence, and a deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
4. The high-capacity image hiding method based on an encoding-decoding network according to claim 1, wherein the Res2Net-Inception-SE module in step S1 comprises convolution block one, an improved residual block, convolution block two and an attention module;
convolution block one is represented as:
x = F3(f)
where f is the input of convolution block one, x is the output of convolution block one, and F3(·) is a 3×3 convolution transform function;
the improved residual block is represented as:
x_i = S(x), i ∈ {1, 2, 3, 4}
y_i = x_i, i = 1
y_i = IC(x_i), i = 2
y_i = IC(x_i + y_{i-1}), i = 3, 4
y = [y_1, y_2, y_3, y_4]
where S(·) is the feature-channel splitting operation, which splits the output x of convolution block one into 4 blocks along the channel dimension, x_i is the i-th block after channel splitting, y_i is the output of x_i after the corresponding operation, [·] denotes the channel stacking operation, y is the output of the improved residual block, and IC(·) is the Inception operation, as follows:
[F1(·), F3(F1(·)), F5(F1(·)), F1(M(·))]
where F1(·) is a 1×1 convolution transform function, F5(·) is a 5×5 convolution transform function, and M(·) is a 3×3 max-pooling function;
convolution block two is represented as:
z = F3(y)
where y is the input of convolution block two and z is the feature map output by convolution block two;
the attention module is represented as:
s_c = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} z_c(i, j)
G = δ1(W2(δ2(W1(s_c)))) · z_c
where H is the height of the feature map output by convolution block two, W is the width of that feature map, z_c(i, j) is the feature of the c-th channel of the output feature map z, s_c is the global feature obtained by encoding the spatial features of the c-th channel, δ1(·) is the Sigmoid activation function, W1(·) is a fully connected operation that reduces the channel dimension of the feature map by a factor of 16, δ2(·) is the ReLU activation function, W2(·) is a fully connected operation that restores the channel dimension of the feature map by a factor of 16, and G is the output of the attention module.
5. The high-capacity image hiding method based on an encoding-decoding network according to claim 1, wherein the W-Net decoding network in step S2 has a skip-connection structure, the W-Net decoding network comprising: convolution operation groups one to four, a channel splitting operation, deconvolution operation groups one to four and convolution operation group one of the first secret branch, and deconvolution operation groups one to four and convolution operation group one of the second secret branch;
the channel splitting operation splits the output of convolution operation group four into secret feature map one and secret feature map two;
the output of convolution operation group three and the output of deconvolution operation group one of the first secret branch are fused with secret feature map one to form a skip connection;
the outputs of convolution operation group two and of deconvolution operation groups two and one of the first secret branch are fused with secret feature map one to form a skip connection;
the outputs of convolution operation group one and of deconvolution operation groups three and two of the first secret branch are fused with secret feature map one to form a skip connection;
the output of convolution operation group three and the output of deconvolution operation group one of the second secret branch are fused with secret feature map two to form a skip connection;
the outputs of convolution operation group two and of deconvolution operation groups two and one of the second secret branch are fused with secret feature map two to form a skip connection;
the outputs of convolution operation group one and of deconvolution operation groups three and two of the second secret branch are fused with secret feature map two to form a skip connection;
the input of convolution operation group one of the first secret branch is the output of deconvolution operation group four of the first secret branch, and its output is the first reconstructed secret image;
the input of convolution operation group one of the second secret branch is the output of deconvolution operation group four of the second secret branch, and its output is the second reconstructed secret image;
a convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence; a deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
6. The high-capacity image hiding method based on an encoding-decoding network according to claim 1, wherein the steganographic network in step S3 comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network.
7. The high-capacity image hiding method based on an encoding-decoding network according to claim 1, wherein the total loss function in step S3 is:
L(c, c', s, s') = βL(c, c') + γL(s, s')
where L(c, c') is the loss of the dual-branch encoding network based on the Res2Net-Inception-SE module, L(s, s') is the loss of the W-Net decoding network, and β and γ are weights controlling the encoding network loss and the decoding network loss.
8. The high-capacity image hiding method based on an encoding-decoding network according to claim 7, wherein the loss L(c, c') of the dual-branch encoding network based on the Res2Net-Inception-SE module is calculated as:
L(c, c') = α(1 − MS-SSIM(c, c')) + (1 − α)·||G·(c − c')||_1
MS-SSIM(c, c') = l_M(c, c') · Π_{j=1}^{M} cs_j(c, c')
l(c, c') = (2μ_c·μ_c' + K1) / (μ_c² + μ_c'² + K1)
cs(c, c') = (2σ_cc' + K2) / (σ_c² + σ_c'² + K2)
where c denotes the carrier image pixels, c = {c_i | i = 1, 2, ..., L}, L is the total number of image pixels, c' denotes the stego image pixels, c' = {c_i' | i = 1, 2, ..., L}, μ_c and μ_c' are the means of c and c' and also represent the luminance of the carrier image and the stego image, K1 is a constant less than or equal to 1, M is a custom number of scales, here set to 5, σ_c and σ_c' are the standard deviations of c and c' and also represent the contrast of the carrier image and the stego image, σ_cc' is the covariance of c and c' and also represents the structural similarity of the carrier image and the stego image, K2 is a constant less than or equal to 1, G is a Gaussian filter parameter, and α is a hyperparameter controlling the weight;
the loss L(s, s') of the W-Net decoding network is calculated as:
L(s, s') = L(s1, s1') + L(s2, s2')
L(s1, s1') = α(1 − MS-SSIM(s1, s1')) + (1 − α)·||G·(s1 − s1')||_1
L(s2, s2') = α(1 − MS-SSIM(s2, s2')) + (1 − α)·||G·(s2 − s2')||_1
where s1 denotes the pixels of the first secret image, s1 = {s1_i | i = 1, 2, ..., L}; s2 denotes the pixels of the second secret image, s2 = {s2_i | i = 1, 2, ..., L}; L is the total number of image pixels; s1' denotes the pixels of the first reconstructed secret image, s1' = {s1_i' | i = 1, 2, ..., L}; s2' denotes the pixels of the second reconstructed secret image, s2' = {s2_i' | i = 1, 2, ..., L}; μ_s1 and μ_s1' are the means of s1 and s1' and also represent the luminance of the first secret image and the first reconstructed secret image; μ_s2 and μ_s2' are the means of s2 and s2' and also represent the luminance of the second secret image and the second reconstructed secret image; K1 is a constant less than or equal to 1; M is a custom number of scales, here set to 5; σ_s1 and σ_s1' are the standard deviations of s1 and s1' and also represent the contrast of the first secret image and the first reconstructed secret image; σ_s2 and σ_s2' are the standard deviations of s2 and s2' and also represent the contrast of the second secret image and the second reconstructed secret image; σ_s1s1' is the covariance of s1 and s1' and also represents the structural similarity of the first secret image and the first reconstructed secret image; σ_s2s2' is the covariance of s2 and s2' and also represents the structural similarity of the second secret image and the second reconstructed secret image; K2 is a constant less than or equal to 1; G is a Gaussian filter parameter; and α is a hyperparameter controlling the weight.
9. A high-capacity image hiding system based on an encoding-decoding network, characterized in that the system comprises:
a generation unit, used for inputting a carrier image and two secret images into a dual-branch encoding network based on a Res2Net-Inception-SE module to obtain a stego image embedded with the two secret images;
a reconstruction unit, used for inputting the stego image into a W-Net decoding network to obtain two reconstructed secret images;
a training unit, used for designing a mixed loss function according to the quality of the stego image and the quality of the reconstructed secret images, taking it as the total loss function of the steganographic network, optimizing the steganographic network with the goal of minimizing the loss function, and considering training finished when the loss has decreased and remains stable, the steganographic network comprising the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network;
a steganography unit, used for generating a stego image from a carrier image and secret images with the trained steganographic network, and for reconstructing the two secret images from the stego image.
CN202111021882.6A 2021-09-01 2021-09-01 Large-capacity image hiding method and system based on encoding-decoding network Active CN113726976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111021882.6A CN113726976B (en) Large-capacity image hiding method and system based on encoding-decoding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111021882.6A CN113726976B (en) Large-capacity image hiding method and system based on encoding-decoding network

Publications (2)

Publication Number Publication Date
CN113726976A (en) 2021-11-30
CN113726976B CN113726976B (en) 2023-07-11

Family

ID=78680626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111021882.6A Active CN113726976B (en) Large-capacity image hiding method and system based on encoding-decoding network

Country Status (1)

Country Link
CN (1) CN113726976B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257697A (en) * 2021-12-21 2022-03-29 四川大学 High-capacity universal image information hiding method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028308A (en) * 2019-11-19 2020-04-17 珠海涵辰科技有限公司 Steganography and reading method for information in image
US20200184592A1 (en) * 2017-05-19 2020-06-11 Google Llc Hiding Information and Images via Deep Learning
CN111292221A (en) * 2020-02-25 2020-06-16 南京信息工程大学 Safe and robust high-capacity image steganography method
CN111640444A (en) * 2020-04-17 2020-09-08 宁波大学 CNN-based self-adaptive audio steganography method and secret information extraction method
CN112132738A (en) * 2020-10-12 2020-12-25 中国人民武装警察部队工程大学 Image robust steganography method with reference generation
CN112132158A (en) * 2020-09-04 2020-12-25 华东师范大学 Visual picture information embedding method based on self-coding network
CN112926607A (en) * 2021-04-28 2021-06-08 河南大学 Two-branch network image steganography framework and method based on convolutional neural network
CN113284033A (en) * 2021-05-21 2021-08-20 湖南大学 Large-capacity image information hiding technology based on confrontation training

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200184592A1 (en) * 2017-05-19 2020-06-11 Google Llc Hiding Information and Images via Deep Learning
CN111028308A (en) * 2019-11-19 2020-04-17 珠海涵辰科技有限公司 Steganography and reading method for information in image
CN111292221A (en) * 2020-02-25 2020-06-16 南京信息工程大学 Safe and robust high-capacity image steganography method
CN111640444A (en) * 2020-04-17 2020-09-08 宁波大学 CNN-based self-adaptive audio steganography method and secret information extraction method
CN112132158A (en) * 2020-09-04 2020-12-25 华东师范大学 Visual picture information embedding method based on self-coding network
CN112132738A (en) * 2020-10-12 2020-12-25 中国人民武装警察部队工程大学 Image robust steganography method with reference generation
CN112926607A (en) * 2021-04-28 2021-06-08 河南大学 Two-branch network image steganography framework and method based on convolutional neural network
CN113284033A (en) * 2021-05-21 2021-08-20 湖南大学 Large-capacity image information hiding technology based on confrontation training

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. B. Ould Medeni; El Mamoun Souidi: "Steganographic algorithm based on error-correcting codes for gray scale images", 2010 5th International Symposium on I/V Communications and Mobile Network, pages 1-4
Yang Xiaoyuan, Bi Xinliang, et al.: "High-capacity image steganography algorithm combining image encryption and deep learning", Journal on Communications, pages 96-105

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257697A (en) * 2021-12-21 2022-03-29 四川大学 High-capacity universal image information hiding method

Also Published As

Publication number Publication date
CN113726976B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
Li et al. MDCN: Multi-scale dense cross network for image super-resolution
CN112001847A (en) Method for generating high-quality image by relatively generating antagonistic super-resolution reconstruction model
CN110163801B (en) Image super-resolution and coloring method, system and electronic equipment
CN111105352A (en) Super-resolution image reconstruction method, system, computer device and storage medium
CN110070091B (en) Semantic segmentation method and system based on dynamic interpolation reconstruction and used for street view understanding
CN101950365A (en) Multi-task super-resolution image reconstruction method based on KSVD dictionary learning
CN112823379A (en) Method and device for training machine learning model and device for video style transfer
CN112381716B (en) Image enhancement method based on generation type countermeasure network
Wei et al. Improving resolution of medical images with deep dense convolutional neural network
CN115100720A (en) Low-resolution face recognition method
DE102021109050A1 (en) VIDEO COMPRESSION AND TRANSMISSION SUPPORTED BY A NEURONAL GENERATIVE ADVERSARIAL NETWORK
Liao et al. GIFMarking: The robust watermarking for animated GIF based deep learning
CN113096015B (en) Image super-resolution reconstruction method based on progressive perception and ultra-lightweight network
CN114694176A (en) Lightweight human body posture estimation method based on deep learning
CN114841859A (en) Single-image super-resolution reconstruction method based on lightweight neural network and Transformer
CN113726976A (en) High-capacity graph hiding method and system based on coding-decoding network
Fu et al. Detecting GAN-generated face images via hybrid texture and sensor noise based features
CN114022356A (en) River course flow water level remote sensing image super-resolution method and system based on wavelet domain
CN113379606A (en) Face super-resolution method based on pre-training generation model
Xie et al. GAGCN: Generative adversarial graph convolutional network for non‐homogeneous texture extension synthesis
CN116091319A (en) Image super-resolution reconstruction method and system based on long-distance context dependence
CN115797176A (en) Image super-resolution reconstruction method
CN113191367B (en) Semantic segmentation method based on dense scale dynamic network
CN114493971A (en) Media data conversion model training and digital watermark embedding method and device
Zeng et al. Swin-CasUNet: cascaded U-Net with Swin Transformer for masked face restoration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant