CN113592693B - Digital watermarking method, device and system based on Y-Net - Google Patents


Info

Publication number
CN113592693B
CN113592693B (application CN202110786024.4A)
Authority
CN
China
Prior art keywords
watermark
network
image
operation group
convolution operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110786024.4A
Other languages
Chinese (zh)
Other versions
CN113592693A (en)
Inventor
胡欣珏 (Hu Xinjue)
梁秀健 (Liang Xiujian)
付章杰 (Fu Zhangjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202110786024.4A priority Critical patent/CN113592693B/en
Priority to PCT/CN2021/106681 priority patent/WO2023283914A1/en
Publication of CN113592693A publication Critical patent/CN113592693A/en
Application granted granted Critical
Publication of CN113592693B publication Critical patent/CN113592693B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/005Robust watermarking, e.g. average attack or collusion attack resistant
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses a digital watermarking method, device and system based on Y-Net. A carrier image and a watermark image are input into a Y-Net generating network to produce a watermark-containing image; the watermark-containing image is fed into a pre-trained compression loss simulation network to obtain a simulated compressed watermark-containing image; a pre-trained discrimination network judges the simulated compressed watermark-containing image to obtain a discrimination result; the simulated compressed watermark-containing image is also input into an extraction network to obtain a reconstructed watermark image. The total loss function of the whole watermark network is calculated according to the watermark image, the reconstructed watermark image and the discrimination result, the watermark network is optimized with the goal of minimizing this loss, and training is considered finished when the loss has decreased and remains stable. During training, the weights of the discrimination network and the compression loss simulation network are fixed. Compared with existing algorithms, the method offers high watermark capacity, high concealment, high security and strong robustness.

Description

Digital watermarking method, device and system based on Y-Net
Technical Field
The invention belongs to the technical field of digital watermarking, and particularly relates to a digital watermarking method, device and system based on Y-Net.
Background
The rapid development of internet technology has enabled more digital works to be disseminated over networks. While networks make dissemination convenient, they also put the originality of works at risk, so copyright protection has become a very important issue.
Digital watermarking technology can protect the originality of a work based on the characteristics of the work itself. In recent years, digital watermarking models based on deep learning have emerged in large numbers; they not only enlarge the watermark capacity but also effectively improve transmission security. However, these methods still have problems such as insufficient visual quality of the watermark-containing image, limited security against steganalysis, and weak robustness to attacks such as JPEG compression.
Disclosure of Invention
To address these problems, the invention provides a digital watermarking method, device and system based on Y-Net with high watermark capacity, high concealment, high security and strong robustness.
In order to achieve the technical purpose and achieve the technical effect, the invention is realized by the following technical scheme:
in a first aspect, the present invention provides a Y-Net based digital watermarking method, comprising:
inputting the carrier image and the watermark image into a Y-Net generating network to obtain a watermark-containing image embedded with the watermark image;
inputting the watermark-containing image into a compression loss simulation network trained in advance to obtain a simulated compressed watermark-containing image;
the simulated compressed watermark-containing image is judged by a pre-trained discrimination network to obtain a discrimination result, wherein the discrimination result indicates whether the input image is identified as a carrier image or a watermark-containing image;
inputting the simulated compressed watermark-containing image into an extraction network to obtain a reconstructed watermark image;
according to the watermark image, the reconstructed watermark image and the discrimination network result, calculating the total loss function of the whole watermark network, optimizing the watermark network by taking the minimum total loss function as a target, and considering that training is finished when the loss is reduced and kept stable; the watermark network comprises a Y-Net generating network, a compression loss simulation network, a judging network and an extracting network, wherein the weights of the judging network and the compression loss simulation network are fixed in the optimization process;
and generating a watermark-containing image according to the carrier image and the watermark image by using a trained watermark network, compressing the watermark-containing image, and reconstructing the watermark-containing image from the compressed watermark-containing image.
Optionally, the Y-Net generating network is a jump connection structure, comprising: carrier first to sixth convolution operation groups, watermark first to sixth convolution operation groups, a seventh convolution operation group, first to seventh deconvolution operation groups, and an eighth convolution operation group;
the output of the carrier sixth convolution operation group and the watermark sixth convolution operation group are fused with the output of the first deconvolution operation group;
the output of the carrier fifth convolution operation group and the watermark fifth convolution operation group are fused with the output of the second deconvolution operation group;
the output of the carrier fourth convolution operation group and the watermark fourth convolution operation group are fused with the output of the third deconvolution operation group;
the output of the carrier third convolution operation group and the watermark third convolution operation group are fused with the output of the fourth deconvolution operation group;
the output of the carrier second convolution operation group and the output of the watermark second convolution operation group are fused with the output of the fifth deconvolution operation group;
the output of the carrier first convolution operation group and the watermark first convolution operation group are fused with the output of the sixth deconvolution operation group;
the carrier sixth convolution operation group and the watermark sixth convolution operation group are input into a seventh convolution operation group after being subjected to channel splicing, and the output end of the seventh convolution operation group is connected with the first deconvolution operation group;
the result of the jump connection of the sixth deconvolution operation group, the first carrier convolution operation group and the first watermark convolution operation group is input into a seventh deconvolution operation group;
the input end of the eighth convolution operation group is connected with the seventh deconvolution operation group, and the output of the eighth convolution operation group is the watermark-containing image;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer which are sequentially arranged.
Optionally, the discrimination network is obtained by the following training steps:
collecting a plurality of image samples and determining a label for each sample image; the label indicates the probability that the corresponding sample image contains a watermark image;
and training Zhu-Net with each image sample as input and its label as output, to obtain the discrimination network.
Optionally, the compression loss simulation network is a U-Net network.
Optionally, the extraction network is a U-Net++ network.
Optionally, the discrimination network is a discriminator, and the discriminator includes a softmax function therein.
Optionally, the total loss function of the watermark network is:
L = α·L_c + β·L_s + δ·L_d
where L_c is the Y-Net generation network loss, L_s is the extraction network loss, L_d is the discrimination network loss, and α, β and δ are weights controlling the Y-Net generation network loss, the extraction network loss and the discrimination network loss respectively.
Optionally, the Y-Net generation network loss is calculated as follows (the formulas in this passage appear only as images in the original; they are reconstructed here in standard mean-squared-error and cross-entropy forms consistent with the variable definitions):
L_c = (1/n) · Σ_{i=1}^{n} (c_i − ĉ_i)²
where n denotes the total number of pixels of the image, c_i denotes the original carrier image, and ĉ_i denotes the watermark-containing image.
The extraction network loss L_s is calculated as follows:
L_s = (1/n) · Σ_{i=1}^{n} (s_i − ŝ_i)²
where s_i denotes the original watermark image and ŝ_i denotes the reconstructed watermark image.
The discrimination network loss L_d is calculated as follows:
L_d = −Σ_i [ y'_i · log(y_i) + (1 − y'_i) · log(1 − y_i) ]
where y'_i denotes the true label, i.e. whether the image input to the discrimination network is actually a carrier image or a watermark-containing image, and y_i denotes the output of the discrimination network.
In a second aspect, the present invention provides a Y-Net based robust digital watermarking apparatus, comprising:
the first processing unit is used for inputting the carrier image and the watermark image into the Y-Net generating network to obtain a watermark-containing image embedded with the watermark image;
the second processing unit is used for inputting the watermark-containing image into a pre-trained compression loss simulation network to obtain a simulated compressed watermark-containing image;
the judging unit is used for discriminating the simulated compressed watermark-containing image with a pre-trained discrimination network to obtain a discrimination result, the discrimination result indicating whether the input image is identified as a carrier image or a watermark-containing image;
the reconstruction unit is used for inputting the simulated compressed watermark-containing image into an extraction network to obtain a reconstructed watermark image;
the training unit is used for calculating the total loss function of the whole watermark network according to the watermark image, the reconstructed watermark image and the discrimination network result, optimizing the watermark network by taking the minimum total loss function as a target, and considering that training is finished when the loss is reduced and kept stable; the watermark network comprises a Y-Net generating network, a compression loss simulation network, a judging network and an extracting network, wherein the weights of the judging network and the compression loss simulation network are fixed in the optimization process;
and the watermarking unit is used for generating a watermark-containing image according to the carrier image and the watermark image by using the trained watermarking network, compressing the watermark-containing image and reconstructing the watermark-containing image from the compressed watermark-containing image.
In a third aspect, the present invention provides a strongly robust digital watermarking system based on Y-Net, comprising a storage medium and a processor;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the method according to any one of the first aspects.
Compared with the prior art, the invention has the beneficial effects that:
the invention designs a rich-feature branch generation network-Y-Net, which improves the visual quality of the watermark image;
the U-Net++ structure is cited for the first time in the field of image digital watermarking, and is used as an extraction network, so that the visual quality of a reconstructed watermark image is improved;
the image quality loss caused by JPEG compression during network transmission is considered, and the JPEG compression loss is simulated by using a U-Net structure network, so that the JPEG compression resistance of the watermark network is improved;
using a Zhu-Net network structure as a discrimination network to monitor the security of the watermark-containing image in real time;
the weights of the discrimination network and the compression loss simulation network are fixed when the watermark network is trained, and the unidirectional countermeasure training method is adopted, so that the convergence speed of the watermark network is accelerated, the image quality is improved, and the security of the watermark-containing image is ensured.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments that are illustrated in the appended drawings, in which:
FIG. 1 is a flow diagram of a Y-Net based digital watermarking method of one embodiment;
FIG. 2 is a schematic diagram of a Y-Net generating network architecture of one embodiment;
fig. 3 is a schematic diagram of a compression loss simulation network structure of an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The principle of application of the invention is described in detail below with reference to the accompanying drawings.
Example 1
The embodiment of the invention provides a digital watermarking method based on Y-Net, which is shown in figure 1 and specifically comprises the following steps:
S10, inputting a carrier image and a watermark image into a Y-Net generating network to obtain a watermark-containing image embedded with the watermark image, that is, the watermark image is embedded into the carrier image to generate the watermark-containing image;
in a specific implementation manner of the embodiment of the present invention, as shown in fig. 2, the Y-Net generating network is a jump connection structure, including: the first carrier convolution operation group Conv1_c, the second carrier convolution operation group Conv2_c, the third carrier convolution operation group Conv3_c, the fourth carrier convolution operation group Conv4_c, the fifth carrier convolution operation group Conv5_c, the sixth carrier convolution operation group Conv6_c, the first watermark convolution operation group Conv1_s, the second watermark convolution operation group Conv2_s, the third watermark convolution operation group Conv3_s, the fourth watermark convolution operation group Conv4_s, the fifth watermark convolution operation group Conv5_s, the sixth watermark convolution operation group Conv6_s, the seventh convolution operation group Conv7, the first deconvolution operation group ConvT1, the second deconvolution operation group ConvT2, the third deconvolution operation group ConvT3, the fourth deconvolution operation group ConvT4, the fifth deconvolution operation group ConvT5, the sixth deconvolution operation group ConvT6, the seventh deconvolution operation group ConvT7, and the eighth convolution operation group Conv8;
the output of the carrier sixth convolution operation group and the watermark sixth convolution operation group are fused with the output of the first deconvolution operation group;
the output of the carrier fifth convolution operation group and the watermark fifth convolution operation group are fused with the output of the second deconvolution operation group;
the output of the carrier fourth convolution operation group and the watermark fourth convolution operation group are fused with the output of the third deconvolution operation group;
the output of the carrier third convolution operation group and the watermark third convolution operation group are fused with the output of the fourth deconvolution operation group;
the output of the carrier second convolution operation group and the output of the watermark second convolution operation group are fused with the output of the fifth deconvolution operation group;
the output of the carrier first convolution operation group and the watermark first convolution operation group are fused with the output of the sixth deconvolution operation group;
the carrier sixth convolution operation group and the watermark sixth convolution operation group are input into a seventh convolution operation group after being subjected to channel splicing, and the output end of the seventh convolution operation group is connected with the first deconvolution operation group;
the result of the jump connection of the sixth deconvolution operation group, the first carrier convolution operation group and the first watermark convolution operation group is input into a seventh deconvolution operation group;
the input end of the eighth convolution operation group is connected with the seventh deconvolution operation group, and the output of the eighth convolution operation group is the watermark-containing image;
one convolution operation group comprises a convolution layer Conv, an activation layer LeakyReLU and a batch normalization layer BN which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer ConvT, an activation layer LeakyReLU and a batch normalization layer BN which are sequentially arranged.
Therefore, the Y-Net generating network extracts the features of the carrier image with the carrier convolution operation groups and the features of the watermark image with the watermark convolution operation groups; the two branches extract features independently, which effectively prevents the features of the two types of images from being confused. In addition, the jump connection structure in the subsequent deconvolution process fuses the multi-dimensional features extracted earlier, so that the original image information of the watermark-containing image is preserved as much as possible, which greatly facilitates the subsequent watermark image reconstruction.
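For illustration only, the operation groups and branch fusion described above can be sketched as follows. This is a minimal PyTorch sketch, not the patented implementation: the kernel sizes, strides and channel widths are assumptions, and conv_group/deconv_group are hypothetical helper names.

import torch
import torch.nn as nn

def conv_group(in_ch, out_ch, stride=2):
    # Convolution operation group: Conv -> LeakyReLU -> BN
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.BatchNorm2d(out_ch),
    )

def deconv_group(in_ch, out_ch, stride=2):
    # Deconvolution operation group: ConvT -> LeakyReLU -> BN
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.BatchNorm2d(out_ch),
    )

# Jump-connection fusion: the carrier-branch feature, the watermark-branch
# feature and the corresponding deconvolution output are concatenated along
# the channel dimension before the next deconvolution operation group.
carrier_feat   = torch.randn(1, 64, 128, 128)   # e.g. output of Conv1_c
watermark_feat = torch.randn(1, 64, 128, 128)   # e.g. output of Conv1_s
decoder_feat   = torch.randn(1, 64, 128, 128)   # e.g. output of ConvT6
fused = torch.cat([carrier_feat, watermark_feat, decoder_feat], dim=1)  # 192 channels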
S20, inputting the watermark-containing image into a pre-trained compression loss simulation network to obtain a simulated compressed watermark-containing image;
In a specific implementation process, in order to improve transmission efficiency during network transmission, a JPEG compression operation is generally performed on the image. JPEG compression reduces the information in the high-frequency part of the image, so a watermark embedded in the high-frequency region is seriously damaged and cannot be reconstructed. For this reason, the JPEG compression loss needs to be introduced into the watermark network, but the quantization operation of JPEG compression is discontinuous and cannot participate in the back-propagation process of the network. Therefore, the compression loss simulation network in the embodiment of the invention may be a U-Net network that simulates the JPEG compression loss, thereby effectively improving the quality of the reconstructed watermark image.
In a specific implementation manner of the embodiment of the present invention, the U-Net compression loss simulation network is a jump connection structure, and includes a first convolution operation group, a second convolution operation group, a third convolution operation group, a fourth convolution operation group, a first deconvolution operation group, a second deconvolution operation group, a third deconvolution operation group, and a fifth convolution operation group;
the output of the third convolution operation set is fused with the output of the first deconvolution operation set;
the output of the second convolution operation set is fused with the output of the second deconvolution operation set;
the output of the first convolution operation group is fused with the output of the third deconvolution operation group;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer which are sequentially arranged.
The U-Net compression loss simulation network is shown in fig. 3, where the first convolution operation group, the second convolution operation group, the third convolution operation group, the fourth convolution operation group, the first deconvolution operation group, the second deconvolution operation group, the third deconvolution operation group, and the fifth convolution operation group correspond to Conv1, Conv2, Conv3, Conv4, ConvT1, ConvT2, ConvT3, and Conv5 shown in fig. 3. The convolution layer, the activation layer, the batch normalization layer, and the deconvolution layer correspond in sequence to Conv, LeakyReLU, BN, and ConvT of fig. 3.
Inputting the watermark-containing image into the compression loss simulation network to simulate the image quality loss in the JPEG compression process, and outputting the watermark-containing image after simulated compression, namely the simulated compressed watermark-containing image;
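As an illustrative sketch of how such a compression loss simulation network could be pre-trained (not the patent's procedure): real, non-differentiable JPEG compression supplies the regression target, and the U-Net-style network learns a differentiable approximation. The helper names jpeg_round_trip and pretrain_compression_net, the quality factor and the optimizer settings are all assumptions.

import io
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

def jpeg_round_trip(img_tensor, quality=75):
    # img_tensor: (3, H, W) float in [0, 1]; returns its JPEG-compressed version.
    arr = (img_tensor.permute(1, 2, 0).numpy() * 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    out = np.asarray(Image.open(buf), dtype=np.float32) / 255.0
    return torch.from_numpy(out).permute(2, 0, 1)

def pretrain_compression_net(sim_net, loader, epochs=10, device="cpu"):
    # Train the simulation network to map an image to its JPEG-compressed version.
    opt = torch.optim.Adam(sim_net.parameters(), lr=1e-4)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for imgs in loader:                       # imgs: (B, 3, H, W) in [0, 1]
            target = torch.stack([jpeg_round_trip(x) for x in imgs]).to(device)
            pred = sim_net(imgs.to(device))       # simulated compressed image
            loss = mse(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return sim_net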
S30, discriminating the simulated compressed watermark-containing image by adopting a pre-trained discrimination network to obtain a discrimination result, wherein the discrimination result indicates whether the input image is identified as a carrier image or a watermark-containing image;
After the simulated compressed watermark-containing image is obtained, in order to improve its security, it must be judged whether the embedding positions of the watermark image are reasonable; the steganalyzer Zhu-Net, which currently performs best, is therefore selected as the discriminator, so that simulated compressed watermark-containing images can be effectively distinguished from carrier images. In the implementation process, the discrimination network may be a Zhu-Net network trained on watermark-containing images generated by different spatial-domain watermarking algorithms, for example S-UNIWARD and HUGO; the simulated compressed watermark-containing image is input into the discrimination network for discrimination, and a classification result is output, i.e. the image is identified as either a carrier image or a watermark-containing image.
In a specific implementation manner of the embodiment of the present invention, the discrimination network is obtained through the following training steps:
collecting a plurality of image samples and determining a label for each sample image; the label indicates the probability that the corresponding sample image contains a watermark image;
and training Zhu-Net with each image sample as input and its label as output, to obtain the discrimination network.
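A hedged sketch of such a discriminator training step follows. Zhu-Net itself is not reproduced here; any two-class CNN that outputs raw class scores (logits, before the final softmax) can stand in for it, and the loader names, labels and optimizer settings are assumptions consistent with the text above.

import torch
import torch.nn as nn

def train_discriminator(disc, carrier_loader, stego_loader, epochs=5, device="cpu"):
    opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()                     # applies softmax internally
    for _ in range(epochs):
        for carriers, stegos in zip(carrier_loader, stego_loader):
            x = torch.cat([carriers, stegos]).to(device)
            # label 0 = carrier image, label 1 = watermark-containing image
            y = torch.cat([torch.zeros(len(carriers)),
                           torch.ones(len(stegos))]).long().to(device)
            logits = disc(x)                       # shape (B, 2)
            loss = ce(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return disc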
S40, inputting the simulated compressed watermark-containing image into an extraction network to obtain a reconstructed watermark image;
In the specific implementation process, the extraction network may be a U-Net++ network: the simulated compressed watermark-containing image is input into the extraction network, and the reconstructed watermark image is output. Through a series of nested jump connection operations, an extraction network built on the U-Net++ structure can capture the characteristics of the feature maps at different levels, ensuring high accuracy of watermark image extraction;
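To illustrate the nested jump connections that characterise U-Net++, a deliberately tiny sketch at depth 2 is given below. The block design, channel widths and upsampling choice are assumptions for illustration only and do not reproduce the extraction network of the patent.

import torch
import torch.nn as nn
import torch.nn.functional as F

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class TinyUNetPP(nn.Module):
    def __init__(self):
        super().__init__()
        self.x00 = block(3, 32)
        self.x10 = block(32, 64)
        self.x20 = block(64, 128)
        self.x01 = block(32 + 64, 32)          # fuses X00 and upsampled X10
        self.x11 = block(64 + 128, 64)         # fuses X10 and upsampled X20
        self.x02 = block(32 + 32 + 64, 32)     # fuses X00, X01 and upsampled X11
        self.out = nn.Conv2d(32, 3, 1)
        self.pool = nn.MaxPool2d(2)

    def up(self, x):
        return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        x00 = self.x00(x)
        x10 = self.x10(self.pool(x00))
        x20 = self.x20(self.pool(x10))
        x01 = self.x01(torch.cat([x00, self.up(x10)], dim=1))
        x11 = self.x11(torch.cat([x10, self.up(x20)], dim=1))
        x02 = self.x02(torch.cat([x00, x01, self.up(x11)], dim=1))
        return torch.sigmoid(self.out(x02))    # reconstructed watermark image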
S50, calculating the total loss function of the watermark network according to the watermark image, the reconstructed watermark image and the discrimination result, optimizing the watermark network with the goal of minimizing the total loss function, and regarding training as finished when the loss has decreased and remains stable; the watermark network comprises the Y-Net generating network, the compression loss simulation network, the discrimination network and the extraction network, and the weights of the discrimination network and the compression loss simulation network are fixed during optimization. With this unidirectional adversarial training method, the pre-trained compression loss simulation network and discrimination network are added to the watermark network with their weights fixed, and only the weights of the Y-Net generating network and the extraction network are continuously updated during training, which greatly improves the stability of the network, accelerates convergence, and improves image quality while ensuring the security of the watermark-containing image.
In a specific implementation manner of the embodiment of the present invention, the total loss function of the watermark network is:
L = α·L_c + β·L_s + δ·L_d
where L_c is the Y-Net generation network loss, L_s is the extraction network loss, L_d is the discrimination network loss, and α, β and δ are weights controlling the Y-Net generation network loss, the extraction network loss and the discrimination network loss respectively.
The Y-Net generation network loss (i.e. the watermark-containing image quality loss) is calculated as follows (as above, the formulas appear only as images in the original and are reconstructed here from the variable definitions):
L_c = (1/n) · Σ_{i=1}^{n} (c_i − ĉ_i)²
where n denotes the total number of pixels of the image, c_i denotes the original carrier image, and ĉ_i denotes the watermark-containing image.
The extraction network loss (i.e. the reconstructed watermark image quality loss) L_s is calculated as follows:
L_s = (1/n) · Σ_{i=1}^{n} (s_i − ŝ_i)²
where s_i denotes the original watermark image and ŝ_i denotes the reconstructed watermark image.
The discrimination network loss L_d is calculated as follows:
L_d = −Σ_i [ y'_i · log(y_i) + (1 − y'_i) · log(1 − y_i) ]
where y'_i denotes the true label, i.e. whether the image input to the discrimination network is actually a carrier image or a watermark-containing image, and y_i denotes the output of the discrimination network (i.e. the output of the softmax function in the discriminator).
In the training process of the watermark network, the weights of the pre-trained compression loss simulation network and discrimination network are fixed, and only the weights of the generation network and the extraction network are continuously updated, which effectively improves the stability of network training.
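For illustration only, one training step under this scheme might look like the following sketch (assuming PyTorch; gen, ext, sim_net and disc stand for the Y-Net generating network, the extraction network, the pre-trained compression loss simulation network and the pre-trained discrimination network; the optimizer is assumed to hold only the parameters of gen and ext; the loss weights and the exact discriminator loss form are assumptions, not values from the patent).

import torch
import torch.nn as nn

def train_step(gen, ext, sim_net, disc, carrier, watermark, optimizer,
               alpha=1.0, beta=1.0, delta=1e-3):
    mse = nn.MSELoss()
    ce = nn.CrossEntropyLoss()                 # assumes disc outputs raw logits

    # Unidirectional adversarial training: the pre-trained networks are frozen.
    sim_net.eval()
    disc.eval()
    for p in list(sim_net.parameters()) + list(disc.parameters()):
        p.requires_grad_(False)

    stego = gen(carrier, watermark)            # watermark-containing image
    compressed = sim_net(stego)                # simulated JPEG-compressed image
    recon = ext(compressed)                    # reconstructed watermark image

    l_c = mse(stego, carrier)                  # generation loss (image quality)
    l_s = mse(recon, watermark)                # extraction loss (reconstruction)
    # Discrimination loss: push the frozen discriminator to classify the
    # simulated compressed watermark-containing image as a carrier (label 0).
    target = torch.zeros(carrier.size(0), dtype=torch.long, device=carrier.device)
    l_d = ce(disc(compressed), target)

    loss = alpha * l_c + beta * l_s + delta * l_d
    optimizer.zero_grad()
    loss.backward()                            # gradients update gen and ext only
    optimizer.step()
    return loss.item()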
S60, generating a watermark-containing image according to the carrier image and the watermark image by using the trained watermark network, performing JPEG compression on the watermark-containing image, and reconstructing the watermark-containing image from the compressed watermark-containing image.
In practical application, inputting a carrier image and a watermark image into a trained Y-Net generating network to obtain a high-quality watermark image; inputting the watermark-containing image into a trained compression loss simulation network to obtain a JPEG-compressed watermark-containing image; inputting the compressed watermark-containing image into a trained extraction network, and extracting the watermark image hidden in the watermark-containing image.
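An illustrative end-to-end usage sketch of this practical application follows (assumed helper names; jpeg_round_trip is the hypothetical helper from the earlier compression sketch, standing in here for real JPEG compression during transmission).

import torch

@torch.no_grad()
def embed_and_extract(gen, ext, carrier, watermark, quality=75):
    stego = gen(carrier, watermark)                      # watermark-containing image
    sent = torch.stack([jpeg_round_trip(x, quality)      # real JPEG during transmission
                        for x in stego.cpu()]).to(carrier.device)
    recovered = ext(sent)                                # reconstructed watermark image
    return stego, recovered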
In summary, in the digital watermarking method based on Y-Net according to the embodiment of the invention, the carrier image and the watermark image are input into a Y-Net generating network to generate a watermark-containing image; the watermark-containing image is input into a pre-trained compression loss simulation network to obtain a simulated compressed watermark-containing image; a pre-trained discrimination network judges whether a watermark image is hidden in the simulated compressed watermark-containing image, while the simulated compressed watermark-containing image is also input into an extraction network to obtain a reconstructed watermark image. The method therefore offers high concealment, high security and strong robustness.
To verify the effect of the present invention, the proposed watermark model is first trained on the public dataset PASCAL-VOC2007 and tested on the public dataset PASCAL-VOC2012. The experimental results in terms of image quality are shown in Table 1. The U-Net model [Duan X T, Jia K, Li B X, et al. Reversible image steganography scheme based on a U-Net structure [J]. IEEE Access, 2019(7): 9314-9323] is currently the best model for generating watermark-containing images and reconstructing watermark images. ISGAN [Zhang R, Dong S Q, Liu J Y. Invisible steganography via generative adversarial networks [J]. Multimedia Tools and Applications, 2019, 78(7): 8559-8575] is the best model for embedding a grayscale watermark, generating a watermark-containing image and reconstructing the watermark image. The experimental results in terms of security are shown in Table 2, where SRNet [Boroumand M, Chen M, Fridrich J. Deep residual network for steganalysis of digital images [J]. IEEE Transactions on Information Forensics and Security, 2018, 14(5): 1181-1193] and Zhu-Net [Zhang R, Zhu F, Liu J, et al. Depth-wise separable convolutions and multi-level pooling for an efficient spatial CNN-based steganalysis [J]. IEEE Transactions on Information Forensics and Security, 2019(15): 1138-1150] are currently the two best steganalysis models.
TABLE 1 (image-quality comparison results; the table is presented as an image in the original document)
TABLE 2 (security comparison results against the SRNet and Zhu-Net steganalyzers; the table is presented as an image in the original document)
Example 2
Based on the same inventive concept as embodiment 1, an embodiment of the present invention provides a robust digital watermarking apparatus based on Y-Net, including:
the first processing unit is used for inputting the carrier image and the watermark image into the Y-Net generating network to obtain a watermark-containing image embedded with the watermark image;
the second processing unit is used for inputting the watermark-containing image into a pre-trained compression loss simulation network to obtain a simulated compressed watermark-containing image;
the judging unit is used for discriminating the simulated compressed watermark-containing image with a pre-trained discrimination network to obtain a discrimination result, the discrimination result indicating whether the input image is identified as a carrier image or a watermark-containing image;
the reconstruction unit is used for inputting the simulated compressed watermark-containing image into an extraction network to obtain a reconstructed watermark image;
the training unit is used for calculating the total loss function of the whole watermark network according to the watermark image, the reconstructed watermark image and the discrimination network result, optimizing the watermark network by taking the minimum total loss function as a target, and considering that training is finished when the loss is reduced and kept stable; the watermark network comprises a Y-Net generating network, a compression loss simulation network, a judging network and an extracting network, wherein the weights of the judging network and the compression loss simulation network are fixed in the optimization process;
and the watermarking unit is used for generating a watermark-containing image according to the carrier image and the watermark image by using the trained watermarking network, compressing the watermark-containing image and reconstructing the watermark-containing image from the compressed watermark-containing image.
The remainder was the same as in example 1.
Example 3
Based on the same inventive concept as embodiment 1, the embodiment of the invention provides a strongly robust digital watermarking system based on Y-Net, which comprises a storage medium and a processor;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the method according to any one of embodiment 1.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (10)

1. A digital watermarking method based on Y-Net, comprising:
inputting the carrier image and the watermark image into a Y-Net generating network to obtain a watermark-containing image embedded with the watermark image;
inputting the watermark-containing image into a compression loss simulation network trained in advance to obtain a simulated compressed watermark-containing image;
the simulated compressed watermark-containing image is judged by a pre-trained discrimination network to obtain a discrimination result, wherein the discrimination result indicates whether the input image is identified as a carrier image or a watermark-containing image;
inputting the simulated compressed watermark-containing image into an extraction network to obtain a reconstructed watermark image;
according to the watermark image, the reconstructed watermark image and the discrimination network result, calculating the total loss function of the watermark network, optimizing the watermark network by taking the minimum total loss function as a target, and considering that training is finished when the loss is reduced and kept stable; the watermark network comprises a Y-Net generating network, a compression loss simulation network, a judging network and an extracting network, wherein the weights of the judging network and the compression loss simulation network are fixed in the optimization process;
and generating a watermark-containing image according to the carrier image and the watermark image by using a trained watermark network, compressing the watermark-containing image, and reconstructing the watermark-containing image from the compressed watermark-containing image.
2. The Y-Net based digital watermarking method according to claim 1, wherein the Y-Net generating network is a jump connection structure, comprising: carrier first to sixth convolution operation groups, watermark first to sixth convolution operation groups, a seventh convolution operation group, first to seventh deconvolution operation groups, and an eighth convolution operation group;
the output of the carrier sixth convolution operation group and the watermark sixth convolution operation group are fused with the output of the first deconvolution operation group;
the output of the carrier fifth convolution operation group and the watermark fifth convolution operation group are fused with the output of the second deconvolution operation group;
the output of the carrier fourth convolution operation group and the watermark fourth convolution operation group are fused with the output of the third deconvolution operation group;
the output of the carrier third convolution operation group and the watermark third convolution operation group are fused with the output of the fourth deconvolution operation group;
the output of the carrier second convolution operation group and the output of the watermark second convolution operation group are fused with the output of the fifth deconvolution operation group;
the output of the carrier first convolution operation group and the output of the watermark first convolution operation group are fused with the output of the sixth deconvolution operation group;
the carrier sixth convolution operation group and the watermark sixth convolution operation group are input into a seventh convolution operation group after being subjected to channel splicing, and the output end of the seventh convolution operation group is connected with the first deconvolution operation group;
the result of the jump connection of the sixth deconvolution operation group, the first carrier convolution operation group and the first watermark convolution operation group is input into a seventh deconvolution operation group;
the input end of the eighth convolution operation group is connected with the seventh deconvolution operation group, and the output of the eighth convolution operation group is the watermark-containing image;
one convolution operation group comprises a convolution layer, an activation layer and a batch normalization layer which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer which are sequentially arranged.
3. The Y-Net based digital watermarking method according to claim 1, wherein the discriminating network is obtained by the training steps of:
collecting a plurality of image samples and determining a label for each sample image; the label indicates the probability that the corresponding sample image contains a watermark image;
and training Zhu-Net with each image sample as input and its label as output, to obtain the discrimination network.
4. The Y-Net based digital watermarking method according to claim 1, wherein: the compression loss simulation network is a U-Net network.
5. The Y-Net based digital watermarking method according to claim 1, wherein: the extraction network is a U-Net++ network.
6. The Y-Net based digital watermarking method according to claim 1, wherein: the discrimination network is a discriminator, and the discriminator comprises a softmax function.
7. The Y-Net based digital watermarking method according to claim 1, wherein the total loss function of the whole watermarking network is:
L = α·L_c + β·L_s + δ·L_d
where L_c is the Y-Net generation network loss, L_s is the extraction network loss, L_d is the discrimination network loss, and α, β and δ are weights controlling the Y-Net generation network loss, the extraction network loss and the discrimination network loss respectively.
8. The Y-Net based digital watermarking method according to claim 7, wherein: the Y-Net generation network loss is calculated as follows (the formulas of this claim appear only as images in the original and are reconstructed here in the same form as in the description):
L_c = (1/n) · Σ_{i=1}^{n} (c_i − ĉ_i)²
where n denotes the total number of pixels of the image, c_i denotes the original carrier image, and ĉ_i denotes the watermark-containing image;
the extraction network loss L_s is calculated as follows:
L_s = (1/n) · Σ_{i=1}^{n} (s_i − ŝ_i)²
where s_i denotes the original watermark image and ŝ_i denotes the reconstructed watermark image;
the discrimination network loss L_d is calculated as follows:
L_d = −Σ_i [ y'_i · log(y_i) + (1 − y'_i) · log(1 − y_i) ]
where y'_i denotes the true label, i.e. whether the image input to the discrimination network is actually a carrier image or a watermark-containing image; y_i denotes the output of the discrimination network.
9. A Y-Net based robust digital watermarking apparatus comprising:
the first processing unit is used for inputting the carrier image and the watermark image into the Y-Net generating network to obtain a watermark-containing image embedded with the watermark image;
the second processing unit is used for inputting the watermark-containing image into a pre-trained compression loss simulation network to obtain a simulated compressed watermark-containing image;
the judging unit is used for discriminating the simulated compressed watermark-containing image with a pre-trained discrimination network to obtain a discrimination result, the discrimination result indicating whether the input image is identified as a carrier image or a watermark-containing image;
the reconstruction unit is used for inputting the simulated compressed watermark-containing image into an extraction network to obtain a reconstructed watermark image;
the training unit is used for calculating the total loss function of the whole watermark network according to the watermark image, the reconstructed watermark image and the discrimination network result, optimizing the watermark network by taking the minimum total loss function as a target, and considering that training is finished when the loss is reduced and kept stable; the watermark network comprises a Y-Net generating network, a compression loss simulation network, a judging network and an extracting network, wherein the weights of the judging network and the compression loss simulation network are fixed in the optimization process;
and the watermarking unit is used for generating a watermark-containing image according to the carrier image and the watermark image by using the trained watermarking network, compressing the watermark-containing image and reconstructing the watermark-containing image from the compressed watermark-containing image.
10. A strongly robust digital watermarking system based on Y-Net, characterized by comprising a storage medium and a processor;
the storage medium is used for storing instructions;
the processor is operative to perform the method according to any one of claims 1-8.
CN202110786024.4A 2021-07-12 2021-07-12 Digital watermarking method, device and system based on Y-Net Active CN113592693B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110786024.4A CN113592693B (en) 2021-07-12 2021-07-12 Digital watermarking method, device and system based on Y-Net
PCT/CN2021/106681 WO2023283914A1 (en) 2021-07-12 2021-07-16 Y-net-based digital watermarking method, apparatus, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110786024.4A CN113592693B (en) 2021-07-12 2021-07-12 Digital watermarking method, device and system based on Y-Net

Publications (2)

Publication Number Publication Date
CN113592693A CN113592693A (en) 2021-11-02
CN113592693B true CN113592693B (en) 2023-05-12

Family

ID=78247132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110786024.4A Active CN113592693B (en) 2021-07-12 2021-07-12 Digital watermarking method, device and system based on Y-Net

Country Status (2)

Country Link
CN (1) CN113592693B (en)
WO (1) WO2023283914A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011043937A (en) * 2009-08-20 2011-03-03 Mitsubishi Electric Corp Information processing system and program
CN111292221A (en) * 2020-02-25 2020-06-16 南京信息工程大学 Safe and robust high-capacity image steganography method
CN111768327A (en) * 2020-06-30 2020-10-13 苏州科达科技股份有限公司 Watermark adding and extracting method and device based on deep learning and storage medium
CN112529758A (en) * 2020-12-18 2021-03-19 海南大学 Color image steganography method based on convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11080809B2 (en) * 2017-05-19 2021-08-03 Google Llc Hiding information and images via deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011043937A (en) * 2009-08-20 2011-03-03 Mitsubishi Electric Corp Information processing system and program
CN111292221A (en) * 2020-02-25 2020-06-16 南京信息工程大学 Safe and robust high-capacity image steganography method
CN111768327A (en) * 2020-06-30 2020-10-13 苏州科达科技股份有限公司 Watermark adding and extracting method and device based on deep learning and storage medium
CN112529758A (en) * 2020-12-18 2021-03-19 海南大学 Color image steganography method based on convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zhangjie Fu et al. A Privacy-Preserving Outsourcing Data Storage Scheme with Fragile Digital Watermarking-Based Data Auditing. Journal of Electrical and Computer Engineering, 2016: 1-8. *
Zhang Pulin. Research on image digital watermarking technology based on wavelet analysis and neural networks. China Master's Theses Full-text Database, 2012, (01): full text. *
Xu Rili; Zhang Xin. Typical analysis of foreign publishing platform technology providers (I): taking Atypon, Publishing Technology Plc and PubFactory as examples. Science-Technology & Publication, 2014, (12): 26-29. *
Hu Xinjue. A method for hiding two images in one image with high image quality. 2022, 59(4). *

Also Published As

Publication number Publication date
CN113592693A (en) 2021-11-02
WO2023283914A1 (en) 2023-01-19

Similar Documents

Publication Publication Date Title
CN112580782B (en) Channel-enhanced dual-attention generation countermeasure network and image generation method
CN111507386B (en) Method and system for detecting encryption communication of storage file and network data stream
CN110210492B (en) Stereo image visual saliency detection method based on deep learning
CN115131188A (en) Robust image watermarking method based on generation countermeasure network
CN113435269A (en) Improved water surface floating object detection and identification method and system based on YOLOv3
CN114821204B (en) Meta-learning-based embedded semi-supervised learning image classification method and system
CN112836602B (en) Behavior recognition method, device, equipment and medium based on space-time feature fusion
CN115809953A (en) Attention mechanism-based multi-size image robust watermarking method and system
CN114581646A (en) Text recognition method and device, electronic equipment and storage medium
Lu et al. Steganalysis of content-adaptive steganography based on massive datasets pre-classification and feature selection
CN115393698A (en) Digital image tampering detection method based on improved DPN network
CN114898171A (en) Real-time target detection method suitable for embedded platform
CN114299305A (en) Salient object detection algorithm for aggregating dense and attention multi-scale features
CN113592693B (en) Digital watermarking method, device and system based on Y-Net
CN113298689A (en) Large-capacity image steganography method
CN102129700A (en) Infrared simulated image platform under ocean background and image generation method thereof
CN112818774A (en) Living body detection method and device
Lai et al. Generative focused feedback residual networks for image steganalysis and hidden information reconstruction
CN112785478B (en) Hidden information detection method and system based on generation of embedded probability map
CN114943083A (en) Intelligent terminal vulnerability code sample mining method and device and electronic equipment
CN114202592A (en) Recoloring image evidence obtaining method based on spatial correlation
CN114743148A (en) Multi-scale feature fusion tampering video detection method, system, medium, and device
Zhang et al. Image Steganalysis Network Based on Dual-Attention Mechanism
CN111461253A (en) Automatic feature extraction system and method
CN117456286B (en) Ginseng grading method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant