CN113726976B - Large-capacity graph hiding method and system based on coding-decoding network - Google Patents


Info

Publication number
CN113726976B
Authority
CN
China
Prior art keywords
secret
operation group
branch
image
convolution
Prior art date
Legal status
Active
Application number
CN202111021882.6A
Other languages
Chinese (zh)
Other versions
CN113726976A (en)
Inventor
胡欣珏 (Hu Xinjue)
付章杰 (Fu Zhangjie)
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202111021882.6A
Publication of CN113726976A
Application granted
Publication of CN113726976B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • H04N1/32272Encryption or ciphering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44Secrecy systems
    • H04N1/4446Hiding of documents or document information
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses a large-capacity image hiding method and system based on an encoding-decoding network, belonging to the technical field of image processing. One carrier image and two secret images are input into a dual-branch encoding network based on a Res2Net-Inception-SE module to obtain a secret-containing image; the secret-containing image is then input into a W-Net decoding network to obtain two reconstructed secret images. A mixed loss function is designed according to the quality of the secret-containing image and the quality of the reconstructed secret images and used as the total loss function of the steganography network; the network is optimized to minimize this loss, and training is considered finished when the loss decreases and remains stable. Compared with prior algorithms, the method offers high hiding capacity and high imperceptibility.

Description

Large-capacity image hiding method and system based on an encoding-decoding network
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a large-capacity image hiding method and system based on an encoding-decoding network.
Background
The development of internet technology has brought great convenience to people's lives, but has also created many information security problems, such as disclosure of personal privacy and illegal theft of confidential business data. Security problems in data communication therefore receive increasing attention.
Steganography is one of the main methods for guaranteeing communication security: secret information is embedded into a carrier through a specific algorithm, and the receiver later extracts it through an extraction algorithm. Common steganographic carriers include text, images, audio and video. Images have high inherent redundancy, and with the development of internet technology a large number of images are transmitted at every moment, which makes them a good carrier for hiding secret information. Hiding images within images is one of the main research directions of information hiding because of its simplicity, effectiveness and large hiding capacity. In 2017, Baluja published at NIPS the first deep learning algorithm for hiding images in images [Baluja S. Hiding images in plain sight: deep steganography [C]. In Proceedings of the Neural Information Processing Systems. Cambridge: MIT Press, 2017: 2069-2079], and since then a large number of deep-learning-based image-in-image steganography models have emerged. The quality of a large-capacity image hiding model is generally measured by how effectively the secret information is hidden and how accurately it is extracted. However, existing steganography models suffer from poor quality of both the secret-containing image and the reconstructed secret image. How to construct an efficient generator and extractor for image-in-image hiding, or to design a new loss function so that the hiding network can be trained more effectively and the quality of the secret-containing and reconstructed secret images improved, is therefore a direction in which large-capacity hiding models will continue to be researched.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a large-capacity image hiding method and system based on an encoding-decoding network, which solve the problems of poor quality of the secret-containing image and of the reconstructed secret image in the prior art.
The aim of the invention can be achieved by the following technical scheme:
a high capacity diagrammatical method based on an encoding-decoding network, the method comprising the steps of:
s1: inputting a carrier image and two secret images into a dual-branch coding network based on a Res2 Net-acceptance-SE module to generate a secret-containing image;
s2: inputting the generated secret-containing image into a W-Net decoding network to obtain two reconstructed secret images;
s3: and designing a mixed loss function according to the quality of the secret-containing image and the quality of the reconstructed secret image, taking the mixed loss function as a total loss function of the steganography network, optimizing the steganography network by taking the minimum total loss function as a target, and finishing training when the loss is reduced and the stability is kept.
Further, in the step S1, the carrier image is input alone into one branch of the encoding network, while the two secret images are channel-stacked and then input into the other branch of the encoding network.
Further, the dual-branch encoding network based on the Res2Net-Inception-SE module in the step S1 comprises a carrier-branch first convolution operation group, a carrier-branch Res2Net-Inception-SE module, a carrier-branch second convolution operation group, a secret-branch first convolution operation group, a secret-branch Res2Net-Inception-SE module, a secret-branch second convolution operation group, a deconvolution operation group, and a third convolution operation group;
the outputs of the carrier-branch second convolution operation group and the secret-branch second convolution operation group are channel-spliced and input into the deconvolution operation group; the output of the deconvolution operation group is connected to the third convolution operation group, and the output of the third convolution operation group is the secret-containing image;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence, and one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
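As a minimal sketch of the "operation groups" just described (convolution layer, activation layer, batch normalization layer in sequence), the following PyTorch helpers can serve as building blocks; kernel sizes, strides and the LeakyReLU slope are assumptions, since the text does not specify them:

```python
import torch
import torch.nn as nn

def conv_group(in_ch, out_ch, stride=1):
    """One 'convolution operation group': Conv -> LeakyReLU -> BatchNorm,
    in the order stated in the patent text. Kernel size 3 is an assumption."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.LeakyReLU(0.2),
        nn.BatchNorm2d(out_ch),
    )

def deconv_group(in_ch, out_ch, stride=2):
    """One 'deconvolution operation group': ConvT -> LeakyReLU -> BatchNorm.
    Kernel 4 / stride 2 / padding 1 doubles the spatial size."""
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1),
        nn.LeakyReLU(0.2),
        nn.BatchNorm2d(out_ch),
    )

x = torch.rand(1, 3, 32, 32)
y = conv_group(3, 16)(x)      # stride 1 keeps the spatial size
z = deconv_group(16, 8)(y)    # stride 2 doubles the spatial size
```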
Further, the Res2Net-Inception-SE module in the step S1 comprises a first convolution block, an improved residual block, a second convolution block and an attention module;
the first convolution block is expressed as:
x = F_3(f)
where f is the input of the first convolution block, x is its output, and F_3(·) is a 3×3 convolution transform function;
the improved residual block is expressed as:
x_i = S(x), i ∈ {1, 2, 3, 4}
y_1 = x_1;  y_2 = IC(x_2);  y_i = IC(x_i + y_{i-1}), i = 3, 4
y = [y_1, y_2, y_3, y_4]
where S(·) is a channel splitting operation that splits the output x of the first convolution block into 4 blocks along the channel dimension, x_i is the i-th block after splitting, y_i is the output of x_i after the corresponding operation, [·] denotes the channel stacking operation in the spatial dimension, y is the output of the improved residual block, and IC(·) is the Inception operation, defined as:
IC(·) = [F_1(·), F_3(F_1(·)), F_5(F_1(·)), F_1(M(·))]
where F_1(·) is a 1×1 convolution transform function, F_5(·) is a 5×5 convolution transform function, and M(·) is a 3×3 max pooling function;
the second convolution block is expressed as:
z = F_3(y)
where y is the input of the second convolution block and z is the feature map it outputs;
the attention module is expressed as:
s_c = (1 / (H × W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} z_c(i, j)
G = δ_1(W_2(δ_2(W_1(s_c)))) · z_c
where H and W are the height and width of the feature map output by the second convolution block, z_c(i, j) is the feature of the c-th channel of z at spatial position (i, j), s_c is the global feature obtained by encoding the spatial features of the c-th channel, δ_1(·) is the Sigmoid activation function, δ_2(·) is the ReLU activation function, W_1(·) is a fully connected operation that reduces the number of feature-map channels by a factor of 16, W_2(·) is a fully connected operation that restores the number of channels by a factor of 16, and G is the output of the attention module.
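A hedged PyTorch sketch of the improved residual block: the input is split into four channel groups, the first passes through unchanged, and each later group goes through the Inception operation IC(·) (1×1, 1×1→3×3, 1×1→5×5 and 3×3-maxpool→1×1 branches) with the previous group's output added, assuming the standard Res2Net hierarchical scheme; branch widths and channel counts are illustrative:

```python
import torch
import torch.nn as nn

class InceptionOp(nn.Module):
    """The IC(.) operation: four parallel branches concatenated along
    channels. Each branch contributing ch//4 channels is an assumption."""
    def __init__(self, ch):
        super().__init__()
        b = ch // 4
        self.p1 = nn.Conv2d(ch, b, 1)
        self.p3 = nn.Sequential(nn.Conv2d(ch, b, 1), nn.Conv2d(b, b, 3, padding=1))
        self.p5 = nn.Sequential(nn.Conv2d(ch, b, 1), nn.Conv2d(b, b, 5, padding=2))
        self.pm = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1), nn.Conv2d(ch, b, 1))

    def forward(self, x):
        return torch.cat([self.p1(x), self.p3(x), self.p5(x), self.pm(x)], dim=1)

class ImprovedResidualBlock(nn.Module):
    """Res2Net-style split: x is chunked into 4 channel groups; group 1
    passes through, later groups go through IC(.) with the previous output
    added, then everything is re-concatenated (the y = [y1..y4] step)."""
    def __init__(self, ch):
        super().__init__()
        w = ch // 4
        self.ic = nn.ModuleList([InceptionOp(w) for _ in range(3)])

    def forward(self, x):
        xs = torch.chunk(x, 4, dim=1)
        ys = [xs[0]]                       # y1 = x1
        for i in range(1, 4):
            inp = xs[i] if i == 1 else xs[i] + ys[-1]
            ys.append(self.ic[i - 1](inp)) # y2 = IC(x2), yi = IC(xi + y(i-1))
        return torch.cat(ys, dim=1)

out = ImprovedResidualBlock(64)(torch.rand(2, 64, 16, 16))
```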
Further, in the step S2 the W-Net decoding network has a skip connection structure and comprises: a first convolution operation group, a second convolution operation group, a third convolution operation group, a fourth convolution operation group, a channel splitting operation, first to fourth deconvolution operation groups of the first secret branch, a convolution operation group of the first secret branch, first to fourth deconvolution operation groups of the second secret branch, and a convolution operation group of the second secret branch;
the channel splitting operation splits the output of the fourth convolution operation group into a first secret feature map and a second secret feature map, which are input to the first secret branch and the second secret branch respectively;
on the first secret branch, the output of the third convolution operation group is fused with the output of the first deconvolution operation group, the output of the second convolution operation group is fused with the output of the second deconvolution operation group, and the output of the first convolution operation group is fused with the output of the third deconvolution operation group, forming the skip connections; the skip connections of the second secret branch are formed in the same way from its first, second and third deconvolution operation groups;
the input of the convolution operation group of the first secret branch is the output of the fourth deconvolution operation group of the first secret branch, and its output is the first reconstructed secret image; the input of the convolution operation group of the second secret branch is the output of the fourth deconvolution operation group of the second secret branch, and its output is the second reconstructed secret image;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
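To make the wiring concrete, here is a simplified PyTorch sketch of the W-Net decoder: a shared downsampling path of four convolution operation groups, a channel split into two secret feature maps, and two mirrored deconvolution branches whose stages are fused (by concatenation) with the matching encoder features to form the skip connections. All channel counts and the exact fusion points are the editor's simplifying assumptions:

```python
import torch
import torch.nn as nn

def cg(i, o):  # convolution operation group (kernel/stride are assumptions)
    return nn.Sequential(nn.Conv2d(i, o, 3, stride=2, padding=1),
                         nn.LeakyReLU(0.2), nn.BatchNorm2d(o))

def dg(i, o):  # deconvolution operation group
    return nn.Sequential(nn.ConvTranspose2d(i, o, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.2), nn.BatchNorm2d(o))

class WNetDecoder(nn.Module):
    """Shared downsampling path, channel split (Sp), then two mirrored
    upsampling branches; each deconv stage also sees the matching encoder
    feature, giving the skip connections. Channel counts are illustrative."""
    def __init__(self):
        super().__init__()
        self.c1, self.c2, self.c3, self.c4 = cg(3, 32), cg(32, 64), cg(64, 128), cg(128, 256)

        def branch():  # one upsampling branch per secret image
            return nn.ModuleDict({
                "d1": dg(128, 128), "d2": dg(128 + 128, 64),
                "d3": dg(64 + 64, 32), "d4": dg(32 + 32, 16),
                "out": nn.Sequential(nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid()),
            })
        self.b1, self.b2 = branch(), branch()

    def run_branch(self, b, f, skips):
        x = b["d1"](f)
        x = b["d2"](torch.cat([x, skips[0]], dim=1))  # fuse with 3rd conv group output
        x = b["d3"](torch.cat([x, skips[1]], dim=1))  # fuse with 2nd conv group output
        x = b["d4"](torch.cat([x, skips[2]], dim=1))  # fuse with 1st conv group output
        return b["out"](x)

    def forward(self, stego):
        e1 = self.c1(stego); e2 = self.c2(e1); e3 = self.c3(e2); e4 = self.c4(e3)
        f1, f2 = torch.chunk(e4, 2, dim=1)  # Sp: split into the two secret feature maps
        skips = (e3, e2, e1)
        return self.run_branch(self.b1, f1, skips), self.run_branch(self.b2, f2, skips)

s1, s2 = WNetDecoder()(torch.rand(1, 3, 64, 64))
```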
Further, the steganography network in the step S3 comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network.
Further, the total loss function in the step S3 is:
L(c, c', s, s') = β·L(c, c') + γ·L(s, s')
where L(c, c') is the loss of the dual-branch encoding network based on the Res2Net-Inception-SE module, L(s, s') is the loss of the W-Net decoding network, and β and γ are weights controlling the two losses.
Further, the loss L(c, c') of the dual-branch encoding network based on the Res2Net-Inception-SE module is calculated as:
L(c, c') = α · (1 − MS-SSIM(c, c')) + (1 − α) · G · (1/L) Σ_{i=1}^{L} |c_i − c'_i|
where MS-SSIM(c, c') is the multi-scale structural similarity between c and c', computed over M scales from the luminance term (2 μ_c μ_c' + K_1) / (μ_c² + μ_c'² + K_1) and the contrast-structure term (2 σ_cc' + K_2) / (σ_c² + σ_c'² + K_2);
c denotes the carrier image pixels, c = {c_i | i = 1, 2, ..., L}, L is the total number of pixels in the image; c' denotes the secret-containing image pixels, c' = {c'_i | i = 1, 2, ..., L}; μ_c and μ_c' are the means of c and c', which also represent the brightness of the carrier image and the secret-containing image; K_1 is a constant not greater than 1; M is a custom number of scales, taken as 5; σ_c and σ_c' are the standard deviations of c and c', which also represent the contrast of the carrier image and the secret-containing image; σ_cc' is the covariance of c and c', which also represents the structural similarity of the carrier image and the secret-containing image; K_2 is a constant not greater than 1; G is a Gaussian filter parameter; α is a hyperparameter controlling the weight.
the calculation formula of the W-Net decoding network loss L (s, s') is as follows:
L(s,s')=L(s 1 ,s' 1 )+L(s 2 ,s' 2 )
Figure GDA0004267213400000062
Figure GDA0004267213400000063
wherein s is 1 For the first secret image pixel,
Figure GDA0004267213400000064
s 2 for the second secret image pixel,
Figure GDA00042672134000000614
l is the total pixel value of the image, s' 1 For the first reconstructed secret image pixel,
Figure GDA00042672134000000615
s' 2 reconstructing the secret image pixels for the second sheet, for example>
Figure GDA0004267213400000067
Figure GDA0004267213400000068
S are respectively 1 And s' 1 Also representing the brightness of the first secret image and the first reconstructed secret image, +.>
Figure GDA0004267213400000069
Figure GDA00042672134000000616
S are respectively 2 And s' 2 Also representing the brightness, K, of the second secret image and the second reconstructed secret image 1 Is a constant less than or equal to 1, M is a custom scale, and the value is 5 #>
Figure GDA00042672134000000611
S are respectively 1 、s' 1 Also representing the first secret image and the first standard deviation of (2)Zhang Chong contrast of secret image, +.>
Figure GDA00042672134000000617
S are respectively 2 、s' 2 Is also representative of the contrast of the second secret image and the second reconstructed secret image, +.>
Figure GDA00042672134000000618
Is s 1 And s' 1 Is also representative of the structural similarity of the first secret image and the first reconstructed secret image, +.>
Figure GDA0004267213400000071
Is s 2 And s' 2 Is also representative of the structural similarity of the second secret image and the second reconstructed secret image, K 2 G is a gaussian filter parameter, and α is a super parameter for controlling the weight, which is a constant equal to or less than 1.
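The statistics in the loss above can be illustrated with a single-scale NumPy sketch: the luminance term is built from the means μ, the contrast/structure term from the standard deviations and covariance, and the final loss mixes (1 − SSIM) with a mean absolute error under weight α. The patent's loss uses M = 5 scales and a Gaussian filter G; this global-statistics, single-scale version (with an illustrative α = 0.84) only shows how the terms fit together:

```python
import numpy as np

def ssim_global(a, b, k1=0.01, k2=0.03, data_range=1.0):
    """Single-scale SSIM from global image statistics (the patent uses
    M = 5 scales and Gaussian windows; this version only shows how the
    mu/sigma terms combine)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    lum = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)  # brightness term
    cs = (2 * cov + c2) / (var_a + var_b + c2)                   # contrast/structure term
    return lum * cs

def mixed_loss(a, b, alpha=0.84):
    """alpha-weighted mix of (1 - SSIM) and mean absolute error;
    alpha = 0.84 is an illustrative value, not taken from the patent."""
    return alpha * (1.0 - ssim_global(a, b)) + (1.0 - alpha) * np.abs(a - b).mean()

img = np.random.rand(64, 64)
loss_same = mixed_loss(img, img)        # identical images: near-zero loss
loss_diff = mixed_loss(img, 1.0 - img)  # inverted image: large loss
```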
A large-capacity image hiding system based on an encoding-decoding network, the system comprising:
a generation unit: for inputting one carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module to obtain a secret-containing image with the two secret images embedded;
a reconstruction unit: for inputting the secret-containing image into the W-Net decoding network to obtain two reconstructed secret images;
a training unit: for designing a mixed loss function according to the quality of the secret-containing image and the quality of the reconstructed secret images, taking it as the total loss function of the steganography network, optimizing the steganography network with the goal of minimizing the loss function, and regarding training as finished when the loss decreases and remains stable; the steganography network comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network;
a steganography unit: for using the trained steganography network to generate a secret-containing image from a carrier image and secret images, and to reconstruct the two secret images from the secret-containing image.
The invention has the following beneficial effects:
The invention designs a dual-branch encoding network based on a Res2Net-Inception-SE module, which improves the visual quality of the secret-containing image, and a W-Net decoding network, which improves the visual quality of the reconstructed secret images. The mixed loss function designed from the quality of the secret-containing image and of the reconstructed secret images takes human visual perception into account, is better suited to steganography, and effectively improves image quality.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to those skilled in the art that other drawings can be obtained according to these drawings without inventive effort.
FIG. 1 is a flow chart of the large-capacity image hiding method based on an encoding-decoding network;
FIG. 2 is a schematic diagram of the dual-branch encoding network based on the Res2Net-Inception-SE module;
FIG. 3 is a schematic diagram of a W-Net decoding network architecture.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1
The embodiment of the invention provides a large-capacity image hiding method based on an encoding-decoding network, which, as shown in fig. 1, specifically comprises the following steps:
S1, inputting one carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module to generate a secret-containing image;
In implementation, the carrier image is input alone into one branch of the encoding network, and the two secret images are channel-stacked and then input into the other branch of the encoding network.
In a specific implementation of the embodiment of the present invention, as shown in fig. 2, the dual-branch encoding network based on the Res2Net-Inception-SE module comprises: a carrier-branch first convolution operation group Conv1_c, a carrier-branch Res2Net-Inception-SE module R_c, a carrier-branch second convolution operation group Conv2_c, a secret-branch first convolution operation group Conv1_s, a secret-branch Res2Net-Inception-SE module R_s, a secret-branch second convolution operation group Conv2_s, a deconvolution operation group ConvT and a third convolution operation group Conv3;
the outputs of the carrier-branch second convolution operation group and the secret-branch second convolution operation group are channel-spliced and input into the deconvolution operation group, whose output end is connected to the third convolution operation group;
the output of the third convolution operation group is the secret-containing image;
one convolution operation group comprises a convolution layer Conv, an activation layer LeakyReLU and a batch normalization layer BN arranged in sequence; one deconvolution operation group comprises a deconvolution layer ConvT, an activation layer LeakyReLU and a batch normalization layer BN arranged in sequence.
The Res2Net-Inception-SE module comprises a first convolution block, an improved residual block, a second convolution block and an attention module.
The first convolution block is expressed as:
x = F_3(f)
where f is the input of the first convolution block, x is its output, and F_3(·) is a 3×3 convolution transform function.
The improved residual block is expressed as:
x_i = S(x), i ∈ {1, 2, 3, 4}
y_1 = x_1;  y_2 = IC(x_2);  y_i = IC(x_i + y_{i-1}), i = 3, 4
y = [y_1, y_2, y_3, y_4]
where S(·) is a channel splitting operation that splits the output x of the first convolution block into 4 blocks along the channel dimension, x_i is the i-th block after splitting, y_i is the output of x_i after the corresponding operation, [·] denotes the channel stacking operation in the spatial dimension, y is the output of the improved residual block, and IC(·) is the Inception operation, defined as:
IC(·) = [F_1(·), F_3(F_1(·)), F_5(F_1(·)), F_1(M(·))]
where F_1(·) is a 1×1 convolution transform function, F_5(·) is a 5×5 convolution transform function, and M(·) is a 3×3 max pooling function.
The second convolution block is expressed as:
z = F_3(y)
where y is the input of the second convolution block and z is the feature map it outputs.
The attention module is expressed as:
s_c = (1 / (H × W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} z_c(i, j)
G = δ_1(W_2(δ_2(W_1(s_c)))) · z_c
where H and W are the height and width of the feature map output by the second convolution block, z_c(i, j) is the feature of the c-th channel of z at spatial position (i, j), s_c is the global feature obtained by encoding the spatial features of the c-th channel, δ_1(·) is the Sigmoid activation function, δ_2(·) is the ReLU activation function, W_1(·) is a fully connected operation that reduces the number of feature-map channels by a factor of 16, W_2(·) is a fully connected operation that restores the number of channels by a factor of 16, and G is the output of the attention module. It can be seen that the dual-branch encoding network based on the Res2Net-Inception-SE module comprises a feature extraction stage and a feature fusion stage. In the feature extraction stage, the encoder uses its dual-branch structure to extract the features of the carrier image and the secret images separately and independently; the Res2Net-Inception-SE module on each branch is responsible for extracting features at the same resolution after the convolution operations and for learning channel importance. In the feature fusion stage, the encoder fuses the extracted features using splicing and deconvolution operations, finally obtaining the secret-containing image.
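The attention module's squeeze-and-excitation step can be sketched on its own in PyTorch: global average pooling produces s_c, two fully connected layers with a 16-fold reduction ratio and ReLU/Sigmoid activations produce per-channel gates, and the gates rescale z. This assumes the standard SE ordering (reduce, then restore), and the channel count is illustrative:

```python
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Squeeze-and-excitation: global average pooling gives s_c, two fully
    connected layers with reduction ratio 16 produce per-channel weights,
    and G = sigmoid(...) * z rescales the feature map."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction),  # W1: reduce channels 16-fold
            nn.ReLU(),                       # delta2
            nn.Linear(ch // reduction, ch),  # W2: restore channels
            nn.Sigmoid(),                    # delta1
        )

    def forward(self, z):
        n, c, h, w = z.shape
        s = z.mean(dim=(2, 3))               # squeeze: s_c = mean over H x W
        g = self.fc(s).view(n, c, 1, 1)      # excitation: per-channel gate in (0, 1)
        return g * z

z = torch.rand(2, 32, 8, 8)
g = SEAttention(32)(z)
```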
S2, inputting the secret-containing image into a W-Net decoding network to obtain two reconstructed secret images;
in a specific implementation manner of the embodiment of the present invention, the W-Net decoding network is a jump connection structure, as shown in fig. 3, including: a first convolution operation group, conv_1, a second convolution operation group, conv_2, a third convolution operation group, conv_3, a fourth convolution operation group, conv_4, a channel splitting operation Sp, a first secret branch deconvolution operation group, convt1_s1, a second secret branch deconvolution operation group, convt2_s1, a first secret branch deconvolution operation group three, convt3_s1, a first secret branch deconvolution operation group four, convt4_s1, a first secret branch convolution operation group, conv_s1, a second secret branch deconvolution operation group one, convt1_s2, a second secret branch deconvolution operation group three, convt3_s2, a second secret branch deconvolution operation group four, convt4_s2, a second secret branch deconvolution operation group one, conv_s2;
the output of the convolution operation group IV is split into a first secret feature diagram and a second secret feature diagram by the channel splitting operation;
the output of the convolution operation group III and the output of the secret first branch deconvolution operation group I are fused with the secret characteristic diagram I to form a jump connection structure;
the output of the second convolution operation group, the second secret first branch deconvolution operation group and the first secret first branch deconvolution operation group is fused with the first secret characteristic diagram to form a jump connection structure;
the first convolution operation group, the third secret first branch deconvolution operation group, the second secret first branch deconvolution operation group and the first secret characteristic diagram are fused with the output of the first secret first branch deconvolution operation group to form a jump connection structure;
the output of the third convolution operation group and the first secret first branch convolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the output of the second convolution operation group, the second secret second branch deconvolution operation group and the first secret first branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the first convolution operation group, the third secret second branch deconvolution operation group, the second secret second branch deconvolution operation group and the output of the first secret second branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the input of the first secret first branch convolution operation group is the fourth secret first branch deconvolution operation group, and the output is a first secret image;
the input of the first secret second branch convolution operation group is the fourth secret second branch deconvolution operation group, and the output is a second secret image;
one convolution operation group comprises a convolution layer Conv, an activation layer LeakyReLU and a batch standardization layer BN which are sequentially arranged; one deconvolution operation group includes a deconvolution layer ConvT, an activation layer LeakyReLU, and a batch normalization layer BN, which are sequentially arranged.
S3, designing a mixed loss function according to the quality of the secret-containing image and the quality of the reconstructed secret images, taking it as the total loss function of the steganography network, optimizing the steganography network with the goal of minimizing the total loss function, and regarding training as finished when the loss decreases and remains stable; the steganography network comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network.
In a specific implementation manner of the embodiment of the present invention, the total loss function of the steganography network is:
L(c,c',s,s')=βL(c,c')+γL(s,s')
wherein L (c, c ') is a double-branch coding network loss based on the Res2 Net-acceptance-SE module, L (s, s') is a W-Net decoding network loss, and beta, gamma is a weight for controlling the double-branch coding network loss based on the Res2 Net-acceptance-SE module and the W-Net decoding network loss.
The loss L(c,c') of the dual-branch encoding network based on the Res2Net-Inception-SE module is calculated as:
L(c,c') = α(1 - MS-SSIM(c,c')) + (1 - α)·G·||c - c'||_1
where MS-SSIM(c,c') is the multi-scale structural similarity between c and c', computed over M scales from the luminance term (2μ_c·μ_c' + K_1)/(μ_c^2 + μ_c'^2 + K_1) and the contrast-structure term (2σ_cc' + K_2)/(σ_c^2 + σ_c'^2 + K_2);
wherein c represents the carrier image, c = {c_i | i = 1, 2, ..., L}, L is the total number of pixels in the image, c' represents the secret-containing image, c' = {c_i' | i = 1, 2, ..., L}; μ_c and μ_c' are the means of c and c', also representing the brightness of the carrier image and the secret-containing image; K_1 is a constant not greater than 1; M is a custom number of scales, set to 5; σ_c and σ_c' are the standard deviations of c and c', also representing the contrast of the carrier image and the secret-containing image; σ_cc' is the covariance of c and c', also representing the structural similarity of the carrier image and the secret-containing image; K_2 is a constant not greater than 1; G is a Gaussian filter parameter; and α is a hyperparameter controlling the weight.
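Assuming the encoder loss mixes a structural-similarity term with an L1 term under the weight α (consistent with the statistics μ, σ and the constants K_1, K_2 defined above), it can be sketched in pure Python at a single scale with global image statistics, a simplification of the M = 5 multi-scale, Gaussian-windowed computation. The K_1, K_2 and α values below are illustrative assumptions.

```python
# Single-scale SSIM-style loss sketch: alpha*(1 - SSIM) + (1 - alpha)*L1.
# Global (whole-image) statistics stand in for the Gaussian-filtered,
# multi-scale computation; k1, k2 and alpha values are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def ssim_global(c, c_prime, k1=0.01, k2=0.03):
    """SSIM from global luminance / contrast-structure statistics."""
    mu_c, mu_p = mean(c), mean(c_prime)
    var_c = mean([(x - mu_c) ** 2 for x in c])
    var_p = mean([(x - mu_p) ** 2 for x in c_prime])
    cov = mean([(x - mu_c) * (y - mu_p) for x, y in zip(c, c_prime)])
    lum = (2 * mu_c * mu_p + k1) / (mu_c ** 2 + mu_p ** 2 + k1)
    cs = (2 * cov + k2) / (var_c + var_p + k2)
    return lum * cs

def mixed_loss(c, c_prime, alpha=0.84):
    """alpha weights similarity against a plain L1 pixel difference."""
    l1 = mean([abs(x - y) for x, y in zip(c, c_prime)])
    return alpha * (1.0 - ssim_global(c, c_prime)) + (1.0 - alpha) * l1

carrier = [0.2, 0.4, 0.6, 0.8]
stego_identical = list(carrier)            # perfect stego: zero loss
stego_noisy = [0.25, 0.35, 0.65, 0.75]     # distorted stego: positive loss
```

An identical secret-containing image gives a loss of zero; any pixel distortion raises both the similarity and L1 terms.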
The loss L(s,s') of the W-Net decoding network is calculated as:
L(s,s') = L(s_1,s'_1) + L(s_2,s'_2)
L(s_1,s'_1) = α(1 - MS-SSIM(s_1,s'_1)) + (1 - α)·G·||s_1 - s'_1||_1
L(s_2,s'_2) = α(1 - MS-SSIM(s_2,s'_2)) + (1 - α)·G·||s_2 - s'_2||_1
wherein s_1 represents the first secret image, s_1 = {s_1,i | i = 1, 2, ..., L}; s_2 represents the second secret image, s_2 = {s_2,i | i = 1, 2, ..., L}; L is the total number of pixels in the image; s'_1 represents the first reconstructed secret image and s'_2 the second reconstructed secret image; μ_s1 and μ_s'1 are the means of s_1 and s'_1, also representing the brightness of the first secret image and the first reconstructed secret image; μ_s2 and μ_s'2 are the means of s_2 and s'_2, also representing the brightness of the second secret image and the second reconstructed secret image; K_1 is a constant not greater than 1; M is a custom number of scales, set to 5; σ_s1 and σ_s'1 are the standard deviations of s_1 and s'_1, also representing the contrast of the first secret image and the first reconstructed secret image; σ_s2 and σ_s'2 are the standard deviations of s_2 and s'_2, also representing the contrast of the second secret image and the second reconstructed secret image; σ_s1s'1 is the covariance of s_1 and s'_1, also representing the structural similarity of the first secret image and the first reconstructed secret image; σ_s2s'2 is the covariance of s_2 and s'_2, also representing the structural similarity of the second secret image and the second reconstructed secret image; K_2 is a constant not greater than 1; G is a Gaussian filter parameter; and α is a hyperparameter controlling the weight.
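The component losses combine into the network's total training objective, the decoder loss being the sum of the two per-secret-image losses and the total loss their β/γ-weighted mix with the encoder loss. A minimal sketch, with illustrative loss values and weights:

```python
# Assembling the total steganography loss from its components. The
# numeric loss values and the beta/gamma weights are illustrative.

def total_loss(enc_loss, dec_loss_s1, dec_loss_s2, beta=1.0, gamma=0.75):
    decoder_loss = dec_loss_s1 + dec_loss_s2        # L(s,s') = L(s1,s1') + L(s2,s2')
    return beta * enc_loss + gamma * decoder_loss   # L = beta*L(c,c') + gamma*L(s,s')

loss = total_loss(enc_loss=0.02, dec_loss_s1=0.05, dec_loss_s2=0.03)
```

Raising γ relative to β trades stego-image imperceptibility for more faithful secret-image reconstruction.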
In practical application, one carrier image and two secret images are input into the trained dual-branch encoding network based on the Res2Net-Inception-SE module to obtain a secret-containing image; the secret-containing image is then input into the trained W-Net decoding network, and the two secret images hidden in it are extracted.
In summary, in the embodiment of the present invention, the large-capacity graph hiding method based on an encoding-decoding network inputs one carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module to generate a secret-containing image, and inputs the secret-containing image into the W-Net decoding network to obtain two reconstructed secret images, achieving both high hiding capacity and high concealment.
To verify the effect of the present invention, the proposed steganographic model was trained on the public dataset PASCAL-VOC2012 and tested on the public dataset ImageNet. The experimental results on image quality are shown in Table 1. Baluja's model [Baluja S. Hiding images within images [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(7): 1685-1697] is the first model to hide two secret images in one carrier image.
TABLE 1
Embodiment 2
Based on the same inventive concept as Embodiment 1, an embodiment of the present invention provides a large-capacity graph hiding system based on an encoding-decoding network, comprising:
a generating unit, used for inputting one carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module to obtain a secret-containing image with the two secret images embedded;
a reconstruction unit, used for inputting the secret-containing image into the W-Net decoding network to obtain two reconstructed secret images;
a training unit, used for designing a mixed loss function according to the quality of the secret-containing image and the quality of the reconstructed secret images, taking the mixed loss function as the total loss function of the steganography network, optimizing the steganography network with the objective of minimizing the total loss function, and considering training complete when the loss decreases and then remains stable; the steganography network comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network;
and a steganography unit, used for generating a secret-containing image from a carrier image and secret images with the trained steganography network, and reconstructing the two secret images from the secret-containing image.
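The units above can be sketched structurally as plain Python classes. Everything below is illustrative: the class and method names are assumptions, the encoder/decoder are toy callables standing in for the trained deep networks, and the training unit is omitted.

```python
# Structural sketch of the hiding system's units; all names are
# illustrative, not taken from the patent.

class GenerationUnit:
    def __init__(self, encoder):
        self.encoder = encoder                  # dual-branch encoding network

    def hide(self, carrier, secret1, secret2):
        return self.encoder(carrier, secret1, secret2)   # -> stego image

class ReconstructionUnit:
    def __init__(self, decoder):
        self.decoder = decoder                  # W-Net decoding network

    def reveal(self, stego):
        return self.decoder(stego)              # -> (secret1', secret2')

class SteganographyUnit:
    """End-to-end use of the trained networks: hide, then reveal."""
    def __init__(self, encoder, decoder):
        self.gen = GenerationUnit(encoder)
        self.rec = ReconstructionUnit(decoder)

    def run(self, carrier, secret1, secret2):
        stego = self.gen.hide(carrier, secret1, secret2)
        return stego, self.rec.reveal(stego)

# Toy stand-ins: "hiding" averages the three images pixel-wise and
# "revealing" returns two copies of the stego image.
toy_encoder = lambda c, s1, s2: [(a + b + d) / 3 for a, b, d in zip(c, s1, s2)]
toy_decoder = lambda stego: (list(stego), list(stego))

stego, (r1, r2) = SteganographyUnit(toy_encoder, toy_decoder).run(
    [0.9, 0.9], [0.3, 0.3], [0.6, 0.6])
```

In the real system the two callables would be the trained encoding and decoding networks described above.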
For specific limitations of the large-capacity graph hiding system based on an encoding-decoding network, reference may be made to the above limitations of the large-capacity graph hiding method based on an encoding-decoding network, which are not repeated here. The above system may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims.

Claims (7)

1. A high capacity graph hiding method based on an encoding-decoding network, the method comprising the steps of:
S1: inputting a carrier image and two secret images into a dual-branch encoding network based on a Res2Net-Inception-SE module to generate a secret-containing image;
S2: inputting the generated secret-containing image into a W-Net decoding network to obtain two reconstructed secret images;
S3: designing a mixed loss function according to the quality of the secret-containing image and the quality of the reconstructed secret images, taking the mixed loss function as the total loss function of the steganography network, optimizing the steganography network with the objective of minimizing the total loss function, and considering training complete when the loss decreases and then remains stable;
the dual-branch encoding network based on the Res2Net-Inception-SE module in step S1 comprises a carrier branch first convolution operation group, a carrier branch Res2Net-Inception-SE module, a carrier branch second convolution operation group, a secret branch first convolution operation group, a secret branch Res2Net-Inception-SE module, a secret branch second convolution operation group, a deconvolution operation group and a third convolution operation group;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are channel-spliced and input into the deconvolution operation group, the output end of the deconvolution operation group is connected with the third convolution operation group, and the output of the third convolution operation group is the secret-containing image;
one convolution operation group comprises a convolution layer, an activation layer and a batch standardization layer which are sequentially arranged, and one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch standardization layer which are sequentially arranged;
the W-Net decoding network in step S2 has a jump connection structure, and comprises: a first convolution operation group, a second convolution operation group, a third convolution operation group, a fourth convolution operation group, a channel splitting operation, a first secret first branch deconvolution operation group, a second secret first branch deconvolution operation group, a third secret first branch deconvolution operation group, a fourth secret first branch deconvolution operation group, a first secret first branch convolution operation group, a first secret second branch deconvolution operation group, a second secret second branch deconvolution operation group, a third secret second branch deconvolution operation group, a fourth secret second branch deconvolution operation group and a first secret second branch convolution operation group;
the channel splitting operation splits the output of the fourth convolution operation group into a first secret characteristic diagram and a second secret characteristic diagram;
the output of the third convolution operation group and the first secret first branch deconvolution operation group are fused with the first secret characteristic diagram to form a jump connection structure;
the output of the second convolution operation group, the second secret first branch deconvolution operation group and the first secret first branch deconvolution operation group is fused with the first secret characteristic diagram to form a jump connection structure;
the first convolution operation group, the third secret first branch deconvolution operation group, the second secret first branch deconvolution operation group and the first secret characteristic diagram are fused with the output of the first secret first branch deconvolution operation group to form a jump connection structure;
the outputs of the third convolution operation group and the first secret second branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the outputs of the second convolution operation group, the second secret second branch deconvolution operation group and the first secret second branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the outputs of the first convolution operation group, the third secret second branch deconvolution operation group, the second secret second branch deconvolution operation group and the first secret second branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the input of the first secret first branch convolution operation group is the output of the fourth secret first branch deconvolution operation group, and its output is the first reconstructed secret image;
the input of the first secret second branch convolution operation group is the output of the fourth secret second branch deconvolution operation group, and its output is the second reconstructed secret image;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
2. The large-capacity graph hiding method based on an encoding-decoding network according to claim 1, wherein inputting one carrier image and two secret images into the dual-branch encoding network based on the Res2Net-Inception-SE module in step S1 comprises: inputting the carrier image alone into one branch of the encoding network, and inputting the two secret images, after a channel stacking operation, into the other branch of the encoding network.
3. The large-capacity graph hiding method based on an encoding-decoding network according to claim 1, wherein the Res2Net-Inception-SE module in step S1 comprises a first convolution block, an improved residual block, a second convolution block and an attention module;
the first convolution block is expressed as:
x = F_3(f)
wherein f is the input of the first convolution block, x is the output of the first convolution block, and F_3(·) is a 3×3 convolution transform function;
the improved residual block is expressed as:
x_i = S(x), i ∈ {1, 2, 3, 4}
y_1 = x_1; y_2 = IC(x_2); y_i = IC(x_i + y_{i-1}), i ∈ {3, 4}
y = [y_1, y_2, y_3, y_4]
wherein S(·) is the characteristic channel splitting operation, which splits the output x of the first convolution block into 4 blocks by channel; x_i is the i-th block after channel splitting; y_i is the output of x_i after the corresponding operation; [·,·] denotes the channel stacking operation; y is the output of the improved residual block; and IC(·) is the Inception operation, as follows:
[F_1(·), F_3(F_1(·)), F_5(F_1(·)), F_1(M(·))]
wherein F_1(·) is a 1×1 convolution transform function, F_5(·) is a 5×5 convolution transform function, and M(·) is a 3×3 max pooling function;
the second convolution block is expressed as:
z = F_3(y)
wherein y is the input of the second convolution block and z is the feature map output by the second convolution block;
the attention module is expressed as:
s_c = (1/(H·W))·Σ_{i=1..H} Σ_{j=1..W} z_c(i,j)
G = δ_1(W_2(δ_2(W_1(s_c))))·z_c
wherein H is the height of the feature map output by the second convolution block, W is its width, z_c(i,j) is the feature of the c-th channel of the feature map z at position (i,j), s_c is the global feature obtained by encoding the spatial features on the c-th channel, δ_1(·) is the Sigmoid activation function, W_2(·) is a fully connected operation that reduces the feature map channels by a factor of 16, δ_2(·) is the ReLU activation function, W_1(·) is a fully connected operation that increases the feature map channels by a factor of 16, and G is the output of the attention module.
4. The large-capacity graph hiding method based on an encoding-decoding network according to claim 1, wherein the steganography network in step S3 comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network.
5. The large-capacity graph hiding method based on an encoding-decoding network according to claim 1, wherein the total loss function in step S3 is:
L(c,c',s,s')=βL(c,c')+γL(s,s')
wherein L(c,c') is the loss of the dual-branch encoding network based on the Res2Net-Inception-SE module, L(s,s') is the loss of the W-Net decoding network, and β, γ are weights controlling the dual-branch encoding network loss and the W-Net decoding network loss.
6. The large-capacity graph hiding method based on an encoding-decoding network according to claim 5, wherein the loss L(c,c') of the dual-branch encoding network based on the Res2Net-Inception-SE module is calculated as:
L(c,c') = α(1 - MS-SSIM(c,c')) + (1 - α)·G·||c - c'||_1
where MS-SSIM(c,c') is the multi-scale structural similarity between c and c', computed over M scales from the luminance term (2μ_c·μ_c' + K_1)/(μ_c^2 + μ_c'^2 + K_1) and the contrast-structure term (2σ_cc' + K_2)/(σ_c^2 + σ_c'^2 + K_2);
wherein c is the carrier image, c = {c_i | i = 1, 2, ..., L}, L is the total number of pixels in the image, c' is the secret-containing image, c' = {c_i' | i = 1, 2, ..., L}; μ_c and μ_c' are the means of c and c', also representing the brightness of the carrier image and the secret-containing image; K_1 is a constant not greater than 1; M is a custom number of scales, set to 5; σ_c and σ_c' are the standard deviations of c and c', also representing the contrast of the carrier image and the secret-containing image; σ_cc' is the covariance of c and c', also representing the structural similarity of the carrier image and the secret-containing image; K_2 is a constant not greater than 1; G is a Gaussian filter parameter; and α is a hyperparameter controlling the weight;
the loss L(s,s') of the W-Net decoding network is calculated as:
L(s,s') = L(s_1,s'_1) + L(s_2,s'_2)
L(s_1,s'_1) = α(1 - MS-SSIM(s_1,s'_1)) + (1 - α)·G·||s_1 - s'_1||_1
L(s_2,s'_2) = α(1 - MS-SSIM(s_2,s'_2)) + (1 - α)·G·||s_2 - s'_2||_1
wherein s_1 is the first secret image, s_1 = {s_1,i | i = 1, 2, ..., L}; s_2 is the second secret image, s_2 = {s_2,i | i = 1, 2, ..., L}; L is the total number of pixels in the image; s'_1 is the first reconstructed secret image and s'_2 is the second reconstructed secret image; μ_s1 and μ_s'1 are the means of s_1 and s'_1, also representing the brightness of the first secret image and the first reconstructed secret image; μ_s2 and μ_s'2 are the means of s_2 and s'_2, also representing the brightness of the second secret image and the second reconstructed secret image; K_1 is a constant not greater than 1; M is a custom number of scales, set to 5; σ_s1 and σ_s'1 are the standard deviations of s_1 and s'_1, also representing the contrast of the first secret image and the first reconstructed secret image; σ_s2 and σ_s'2 are the standard deviations of s_2 and s'_2, also representing the contrast of the second secret image and the second reconstructed secret image; σ_s1s'1 is the covariance of s_1 and s'_1, also representing the structural similarity of the first secret image and the first reconstructed secret image; σ_s2s'2 is the covariance of s_2 and s'_2, also representing the structural similarity of the second secret image and the second reconstructed secret image; K_2 is a constant not greater than 1; G is a Gaussian filter parameter; and α is a hyperparameter controlling the weight.
7. A large-capacity graph hiding system based on an encoding-decoding network, the system comprising:
a generating unit, used for inputting one carrier image and two secret images into a dual-branch encoding network based on a Res2Net-Inception-SE module to obtain a secret-containing image with the two secret images embedded;
a reconstruction unit, used for inputting the secret-containing image into a W-Net decoding network to obtain two reconstructed secret images;
a training unit, used for designing a mixed loss function according to the quality of the secret-containing image and the quality of the reconstructed secret images, taking the mixed loss function as the total loss function of the steganography network, optimizing the steganography network with the objective of minimizing the total loss function, and considering training complete when the loss decreases and then remains stable; the steganography network comprises the dual-branch encoding network based on the Res2Net-Inception-SE module and the W-Net decoding network;
and a steganography unit, used for generating a secret-containing image from a carrier image and secret images with the trained steganography network, and reconstructing the two secret images from the secret-containing image;
the dual-branch encoding network based on the Res2Net-Inception-SE module comprises a carrier branch first convolution operation group, a carrier branch Res2Net-Inception-SE module, a carrier branch second convolution operation group, a secret branch first convolution operation group, a secret branch Res2Net-Inception-SE module, a secret branch second convolution operation group, a deconvolution operation group and a third convolution operation group;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are channel-spliced and input into the deconvolution operation group, the output end of the deconvolution operation group is connected with the third convolution operation group, and the output of the third convolution operation group is the secret-containing image;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence, and one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence;
the W-Net decoding network has a jump connection structure, and comprises: a first convolution operation group, a second convolution operation group, a third convolution operation group, a fourth convolution operation group, a channel splitting operation, a first secret first branch deconvolution operation group, a second secret first branch deconvolution operation group, a third secret first branch deconvolution operation group, a fourth secret first branch deconvolution operation group, a first secret first branch convolution operation group, a first secret second branch deconvolution operation group, a second secret second branch deconvolution operation group, a third secret second branch deconvolution operation group, a fourth secret second branch deconvolution operation group and a first secret second branch convolution operation group;
the channel splitting operation splits the output of the fourth convolution operation group into a first secret characteristic diagram and a second secret characteristic diagram;
the output of the third convolution operation group and the first secret first branch deconvolution operation group are fused with the first secret characteristic diagram to form a jump connection structure;
the output of the second convolution operation group, the second secret first branch deconvolution operation group and the first secret first branch deconvolution operation group is fused with the first secret characteristic diagram to form a jump connection structure;
the first convolution operation group, the third secret first branch deconvolution operation group, the second secret first branch deconvolution operation group and the first secret characteristic diagram are fused with the output of the first secret first branch deconvolution operation group to form a jump connection structure;
the outputs of the third convolution operation group and the first secret second branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the outputs of the second convolution operation group, the second secret second branch deconvolution operation group and the first secret second branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the outputs of the first convolution operation group, the third secret second branch deconvolution operation group, the second secret second branch deconvolution operation group and the first secret second branch deconvolution operation group are fused with the second secret characteristic diagram to form a jump connection structure;
the input of the first secret first branch convolution operation group is the output of the fourth secret first branch deconvolution operation group, and its output is the first reconstructed secret image;
the input of the first secret second branch convolution operation group is the output of the fourth secret second branch deconvolution operation group, and its output is the second reconstructed secret image;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer arranged in sequence; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer arranged in sequence.
CN202111021882.6A 2021-09-01 2021-09-01 Large-capacity graph hiding method and system based on coding-decoding network Active CN113726976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111021882.6A CN113726976B (en) 2021-09-01 2021-09-01 Large-capacity graph hiding method and system based on coding-decoding network


Publications (2)

Publication Number Publication Date
CN113726976A CN113726976A (en) 2021-11-30
CN113726976B true CN113726976B (en) 2023-07-11

Family

ID=78680626


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257697B (en) * 2021-12-21 2022-09-23 四川大学 High-capacity universal image information hiding method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028308A (en) * 2019-11-19 2020-04-17 珠海涵辰科技有限公司 Steganography and reading method for information in image
CN112132158A (en) * 2020-09-04 2020-12-25 华东师范大学 Visual picture information embedding method based on self-coding network
CN112132738A (en) * 2020-10-12 2020-12-25 中国人民武装警察部队工程大学 Image robust steganography method with reference generation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11080809B2 (en) * 2017-05-19 2021-08-03 Google Llc Hiding information and images via deep learning
CN111292221A (en) * 2020-02-25 2020-06-16 南京信息工程大学 Safe and robust high-capacity image steganography method
CN111640444B (en) * 2020-04-17 2023-04-28 宁波大学 CNN-based adaptive audio steganography method and secret information extraction method
CN112926607B (en) * 2021-04-28 2023-02-17 河南大学 Two-branch network image steganography framework and method based on convolutional neural network
CN113284033A (en) * 2021-05-21 2021-08-20 湖南大学 Large-capacity image information hiding technology based on confrontation training


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
M. B. Ould Medeni, El Mamoun Souidi. Steganographic algorithm based on error-correcting codes for gray scale images. 2010 5th International Symposium on I/V Communications and Mobile Networks, 2010, pp. 1-4. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant