CN113989096B - Robust image watermarking method and system based on deep learning and attention network - Google Patents
- Publication number: CN113989096B
- Application number: CN202111607588.3A
- Authority: CN (China)
- Prior art keywords: image, attention, tensor, watermark, model
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T1/0021—General purpose image data processing: image watermarking
- G06F21/16—Security arrangements: program or content traceability, e.g. by watermarking (digital rights management)
- G06N3/045—Neural networks: combinations of networks
- G06T2201/0052—Image watermarking: embedding of the watermark in the frequency domain
- G06T2201/0065—Image watermarking: extraction of an embedded watermark; reliable detection
Abstract
The invention belongs to the technical field of digital image watermarking and provides a robust image watermarking method and system based on deep learning and an attention network, comprising the following steps: acquiring an original image tensor; obtaining an attention map image tensor from the original image tensor and an attention model; generating a watermark-containing image based on the attention map image tensor and a watermark embedding model; generating an attacked image from the watermark-containing image and a constructed attack network model; and extracting the image watermark from the attacked image using the attention model and a deep learning model. The invention exploits the human visual system's different sensitivity to different regions and the different attack resistance of different pixels: attention weights are inferred along the channel and spatial dimensions to find regions that have little influence on human visual perception and good robustness, and the watermark is embedded in those regions.
Description
Technical Field
The invention belongs to the technical field of digital image watermarking, and particularly relates to a robust image watermarking method and system based on deep learning and an attention network.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the spread of network communication and the rapid development of digital multimedia, producing and distributing digital information has become increasingly convenient. At the same time, however, copyright disputes over digital products are endless, and copyright protection cannot be delayed. Image watermarking is an effective means of protecting image copyright: a watermark is embedded invisibly into the image for copyright identification. However, current image watermarking methods still fall short in robustness and imperceptibility, and their performance needs further improvement.
Deep learning, with its powerful representation and learning capabilities, has developed rapidly in computer vision and image processing and has also been widely applied to image watermarking. At present, most convolutional-network-based image watermarking methods use self-encoding (autoencoder) neural networks trained on large amounts of data and thereby achieve good performance. However, because the human visual system perceives different regions of an image differently, and different pixels resist attacks differently, the imperceptibility and robustness of existing methods remain unsatisfactory.
Disclosure of Invention
To solve the above problems, the invention provides an image watermarking method and system based on deep learning and an attention network. It exploits the human visual system's different sensitivity to different regions and the different attack resistance of different pixels: attention weights are inferred along the channel and spatial dimensions, and regions with little influence on human visual perception and good robustness are sought out for watermark embedding.
According to some embodiments, a first aspect of the present invention provides a robust image watermarking method based on deep learning and attention network, which adopts the following technical solutions:
a robust image watermarking method based on deep learning and attention network comprises the following steps:
acquiring an original image tensor;
obtaining an attention map image tensor according to the obtained original image tensor and an attention model;
generating a watermark-containing image based on the obtained attention map image tensor and the watermark embedding model;
generating an attacked image according to the generated watermark-containing image and the constructed attack network model;
and extracting an image watermark according to the attacked image, the attention model and the deep learning model.
As a further technical limitation, in the process of acquiring the original image tensor, data reorganization is performed on the acquired original image to obtain the original image tensor.
As a further technical limitation, in the process of obtaining the attention map image tensor, the original image tensor is input to the attention model, attention weights are sequentially estimated along a channel dimension and a space dimension, the obtained attention weights are multiplied by the original image tensor, the weights are adaptively adjusted, an attention feature tensor is generated, and the attention map image tensor is further obtained.
As a further technical limitation, the watermark embedding model employs a residual neural network.
Further, the specific process of generating the watermark-containing image is as follows:
performing a discrete cosine transform on the attention map image tensor, combining the transformed tensor with the watermark image along the feature dimension, and inputting the combined tensor into the residual neural network to generate a watermark-containing attention map tensor;
performing an inverse discrete cosine transform on the watermark-containing attention map tensor to obtain a watermark-containing feature tensor;
and performing data reconstruction on the watermark-containing feature tensor to generate the watermark-containing image.
Further, the watermark-containing attention map tensor is subjected to an inverse discrete cosine transform to obtain a spatial-domain image tensor; this tensor is multiplied by the embedding strength factor and then added to the attention map image tensor to obtain the watermark-containing feature tensor.
As a further technical limitation, the attacks are a JPEG compression attack, a salt-and-pepper noise attack, a Gaussian noise attack and a sharpening attack.
As a further technical limitation, the watermark-containing image is input into the attack network model; during the model training iterations, the attack network selects an attack mode with random probability to generate the attacked image.
As a further technical limitation, the specific process of extracting the image watermark is as follows:
carrying out data reconstruction on the obtained attacked image to obtain an attacked image tensor;
inputting the attacked image tensor into the attention model to obtain an attention image of the attacked image;
and performing discrete cosine transform on the attention image of the attacked image, inputting the transformed attention image of the attacked image into a deep learning model, and extracting an image watermark by combining a voting strategy.
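The overall method (embedding, simulated attack, extraction) can be sketched end to end. In the snippet below, the four models are hypothetical one-line stand-ins for the trained attention, embedding, attack and extraction networks described in this disclosure, chosen only to show how the tensors flow between the stages; none of them reflects the actual learned layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the trained networks (the real ones are deep models).
attention_model = lambda t: t * 0.9                         # would return an attention-map tensor
embed_model = lambda t, wm: t + 0.01 * wm[..., None]        # would return a watermark-containing tensor
attack_model = lambda img: img + rng.normal(0, 0.001, img.shape)
extract_model = lambda t: (t.mean(axis=2) > t.mean()).astype(int)

def watermark_pipeline(image_tensor, watermark):
    att = attention_model(image_tensor)      # 1. attention map from the original image tensor
    marked = embed_model(att, watermark)     # 2. watermark-containing image
    attacked = attack_model(marked)          # 3. simulated attack
    att2 = attention_model(attacked)         # 4. attention on the attacked image
    return extract_model(att2)               # 5. extracted watermark bits

x = rng.random((8, 8, 64))                   # original image tensor
wm = rng.integers(0, 2, (8, 8))              # random binary watermark
extracted = watermark_pipeline(x, wm)
assert extracted.shape == (8, 8)
```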
According to some embodiments, a second aspect of the present invention provides a robust image watermarking system based on deep learning and attention network, which adopts the following technical solutions:
a robust image watermarking system based on deep learning and attention networks, comprising:
the watermark embedding module is configured to acquire an original image tensor, obtain an attention image tensor according to the acquired original image tensor and an attention model, and generate a watermark-containing image based on the obtained attention image tensor and a watermark embedding model;
the simulated attack module is configured to generate an attacked image according to the generated watermark-containing image and the constructed attack network model;
the watermark extraction module is configured to extract the image watermark from the attacked image using the attention model and a deep learning model.
Compared with the prior art, the invention has the beneficial effects that:
compared with existing deep-learning-based methods, the robust image watermarking method based on deep learning and an attention network has better imperceptibility and robustness, is suitable for copyright protection of digital image products, embeds and extracts watermarks without damaging image quality, and can resist various attacks.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a flowchart of the robust image watermarking method based on deep learning and attention networks according to the first embodiment of the present invention;
FIG. 2 is a block diagram of an attention model in accordance with a first embodiment of the invention;
fig. 3 is a block diagram of a watermark embedding model in accordance with a first embodiment of the present invention;
FIG. 4 is a block diagram of an attack network model according to a first embodiment of the present invention;
fig. 5 is a structural diagram of a watermark extraction network in the first embodiment of the present invention;
fig. 6 is a block diagram of a robust image watermarking system based on deep learning and attention network in the second embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
The embodiment of the invention introduces a robust image watermarking method based on deep learning and an attention network.
As shown in fig. 1, the robust image watermarking method based on deep learning and attention network includes three stages, namely a watermark embedding stage, a simulated attack stage and a watermark extraction stage, with the following specific steps:
step S01: for the number of original imagesAccording to the data reorganization, the input size isIs recombined intoThe tensor of (a) is considered in this embodiment for the followingDCTThe transformation is carried out by changing the parameters of the image,h=M/8,w=N/8,c=64。
step S02: constructing the attention model as shown in FIG. 2, with the size generated in step S01 asThe original image tensor is input into the network of fig. 2, the attention weight is sequentially inferred along two dimensions (a channel and a space), multiplied by the original tensor, the weight is adaptively adjusted, the attention characteristic tensor is generated, and the attention characteristic tensor is output to obtainThe attention of the magnitude is looking at the tensor.
Step S03: construct the watermark embedding model shown in FIG. 3. The attention map tensor and the watermark image are passed through the deep convolutional network of FIG. 3 to obtain the watermark-containing image. To enhance the generality of the method, the watermark images used in the training stage are all randomly generated binary images.
In this embodiment, steps S01, S02 and S03 form the watermark embedding stage. Its aim is to use the attention network to find regions of the image that have little influence on human visual perception and are more resistant to attack, and to embed the watermark in those regions through a multi-layer residual neural network, thereby improving the imperceptibility and robustness of the method.
Step S04: build the attack network model shown in FIG. 4, which simulates attacks likely to be encountered in real communication. This embodiment simulates 4 common attacks, namely a JPEG compression attack, a salt-and-pepper noise attack, a Gaussian noise attack and a sharpening attack, forming a differentiable attack layer. Among these, the JPEG compression attack is inherently non-differentiable; the method uses a series of differentiable operations to simulate each step of JPEG compression, so that the generated compressed image is almost identical to a real JPEG-compressed image. To improve the generalization capability of the algorithm, the probability of each attack is randomly assigned at every iteration, with the probabilities required to sum to 1.
Step S05: input the watermark-containing image generated in step S03 into the attack network model shown in FIG. 4. At each training iteration, an attack mode is selected from the attack network with random probability, generating the attacked image.
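Steps S04 and S05 (random selection among simulated attacks, with probabilities that sum to 1) can be sketched as follows. The four attack implementations are simplified stand-ins; in particular, `jpeg_like` is only a crude quantisation placeholder for the patent's differentiable JPEG simulation, which is not specified in detail here.

```python
import numpy as np

rng = np.random.default_rng(42)

def salt_pepper(img, amount=0.05):
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0         # pepper
    out[mask > 1 - amount / 2] = 1.0     # salt
    return out

def gaussian_noise(img, sigma=0.02):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def sharpen(img):
    # unsharp masking with a simple wrap-around 4-neighbour Laplacian
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.clip(img - 0.5 * lap, 0.0, 1.0)

def jpeg_like(img):
    # crude placeholder for the differentiable JPEG simulation
    return np.round(img * 32) / 32

attacks = [jpeg_like, salt_pepper, gaussian_noise, sharpen]

def attack_layer(img):
    """Pick one attack per iteration with randomly drawn probabilities summing to 1."""
    p = rng.dirichlet(np.ones(len(attacks)))   # random probabilities, sum to 1
    chosen = rng.choice(len(attacks), p=p)
    return attacks[chosen](img)

img = rng.random((64, 64))
attacked = attack_layer(img)
assert attacked.shape == img.shape
```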
In this embodiment, steps S04 and S05 form the simulated attack stage; by handling distortion attacks inside the network, the network becomes more robust in practical applications.
Step S06: perform data reorganization on the attacked image generated in step S05 to obtain the attacked image tensor.
Step S07: input the tensor generated in step S06 into the attention network built in step S02 to generate an attention image.
Step S08: construct the watermark extraction network shown in FIG. 5; the attention image is passed through the deep convolutional network of FIG. 5 to extract the watermark.
In this embodiment, steps S06, S07 and S08 form the watermark extraction stage, whose aim is to extract the complete watermark image from the attacked image for copyright authentication.
Step S09: package the network structures corresponding to the three stages built above into an integral network for training and testing.
Step S10: extract image blocks from the images in the Pascal VOC2012 dataset and combine them with the CIFAR10 dataset to generate a training sample set of roughly 330,000 samples in total, and train the whole network packaged in step S09. The optimization method adopted in this embodiment is stochastic gradient descent. The training loss function L is:

L = λ·L_I + (1 − λ)·L_M
where λ is the loss ratio, and L_I and L_M are the loss functions of the embedding and extraction networks, respectively. Since the embedding-network loss essentially evaluates the image quality after embedding the watermark, a structural similarity (SSIM) function is adopted for L_I:

L_I = 1 − SSIM(I, I_W), SSIM(I, I_W) = ((2·μ_I·μ_IW + c1)·(2·σ_IIW + c2)) / ((μ_I² + μ_IW² + c1)·(σ_I² + σ_IW² + c2))
where I is the original image, I_W is the watermark-containing image, μ_I and σ_I² are the mean and variance of I, μ_IW and σ_IW² are the mean and variance of I_W, σ_IIW is the covariance of (I, I_W), and c1 and c2 are two stabilizing constants, set in this embodiment to 10⁻⁴ and 9×10⁻⁴.
Since the extraction loss essentially evaluates the correctness of the extracted watermark, a binary cross-entropy loss function is adopted:

L_M = −(1/|Ω|) · Σ_{p∈Ω} [ m_p·log(m̂_p) + (1 − m_p)·log(1 − m̂_p) ]
where Ω is the image domain, m_p is the true watermark pixel value at point p, and m̂_p is the watermark pixel value predicted by the network at that point.
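A plain-NumPy sketch of the losses described above. The global (un-windowed) SSIM and the λ-weighted combination are simplifying assumptions of this sketch; the constants c1 = 10⁻⁴ and c2 = 9×10⁻⁴ follow this embodiment.

```python
import numpy as np

def ssim_loss(I, Iw, c1=1e-4, c2=9e-4):
    """L_I = 1 - SSIM(I, I_W), computed globally over the image (un-windowed)."""
    mu_i, mu_w = I.mean(), Iw.mean()
    var_i, var_w = I.var(), Iw.var()
    cov = ((I - mu_i) * (Iw - mu_w)).mean()
    ssim = ((2 * mu_i * mu_w + c1) * (2 * cov + c2)) / \
           ((mu_i ** 2 + mu_w ** 2 + c1) * (var_i + var_w + c2))
    return 1.0 - ssim

def bce_loss(m, m_hat, eps=1e-12):
    """Binary cross-entropy between true watermark bits m and predictions m_hat."""
    m_hat = np.clip(m_hat, eps, 1 - eps)
    return -np.mean(m * np.log(m_hat) + (1 - m) * np.log(1 - m_hat))

def total_loss(I, Iw, m, m_hat, lam=0.5):
    """Assumed combination L = lam * L_I + (1 - lam) * L_M."""
    return lam * ssim_loss(I, Iw) + (1 - lam) * bce_loss(m, m_hat)

rng = np.random.default_rng(0)
I = rng.random((16, 16))
m = rng.integers(0, 2, (8, 8)).astype(float)
assert abs(ssim_loss(I, I)) < 1e-9      # identical images: SSIM = 1, loss = 0
assert bce_loss(m, m) < 1e-6            # perfect prediction: near-zero loss
```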
Other training parameters are shown in table 1. And storing the parameters of the network training.
TABLE 1 watermark embedding and extraction Module training parameter settings
At this point, the construction and training of all network models of the method is complete, and the weight file is saved in step S10.
Step S11: decompose the end-to-end trained model: remove the attack layer and split the networks built from FIGS. 2, 3 and 5 into independent networks for actual testing.
Step S12: for actual watermark embedding and extraction on an image, the method supports images of various resolutions; the original image is data-reorganized into a tensor whose input size is consistent with the network model settings.
Step S13: to better verify the robustness of the method, the attacks used by the attack network in actual testing are all real attacks. To verify the generalization capability of the method against attacks, new attack modes such as cropping and scaling are added on top of the training-stage attack modes.
Step S14: feed the tensor generated in step S12 into the network models built from FIGS. 2, 3 and 5 and, using the weight parameters trained in step S10, obtain the watermark-containing image, the attacked image and the extracted watermark.
As one or more embodiments, the specific process of step S03 is:
step S301: discrete cosine transform of the attention image generated in step S02 (S) ((S))Discrete Cosine TransformFor shortDCT) By usingDCTThe transformation is because research shows that the spatial domain watermarking method has poor robustness,DCTthe transformation is an image approximate optimal transformation and has low computational complexity.
Step S302: combine the DCT-transformed tensor generated in step S301 with the watermark image along the feature dimension to form a combined tensor.
Step S303: pass the tensor generated in step S302 through the multilayer convolutional network of FIG. 3, embedding the watermark into the transformed image tensor. A residual neural network is chosen here to avoid the gradient explosion and vanishing-gradient problems of deep networks. The convolution kernel sizes are set to 1×1 and 2×2: the 1×1 convolutional layers change the tensor depth, reducing the combined tensor back to the original depth, while the 2×2 convolutional layers share watermark data among adjacent blocks, so that a voting strategy can be used in the watermark extraction stage to obtain the final watermark image and improve the robustness of the method.
Step S304: inverting the output tensor of step S303DCTTransform, converting it from the frequency domain back to the spatial domain.
Step S305: multiply the tensor of step S304 by the embedding strength factor (denoted here α) and add the original image tensor generated in step S01 to generate the watermark-containing feature tensor. Experimental data show that increasing the embedding strength α degrades the quality of the generated watermark-containing image but greatly improves the robustness of the method. Therefore, to enhance robustness while ensuring the imperceptibility of the watermark-containing image, α is set to 1.0 in the training stage of this embodiment. In the actual application stage, α can be adjusted appropriately according to application requirements.
Step S306: reconstruct the feature tensor generated in step S305 as the inverse of step S01, shifting the values in the depth dimension back to the height and width dimensions to generate the M×N watermark-containing image.
As one or more embodiments, the specific process of step S08 is:
step S801: the attention image generated in step S07 is subjected toDCTAnd (6) transforming.
Step S802: pass the DCT-transformed tensor generated in step S801 through a multilayer convolutional network to output the watermark tensor, then apply a voting strategy to the watermark tensor to obtain the final watermark image. The convolutional network chosen here has the same structure as the embedding network of FIG. 3, except that, since the final output is a watermark image with a single channel, the number of convolution kernels in the last layer is set to 1.
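How the voting is organised is not fully specified in the text; one plausible reading, sketched below, is that each spatial position carries several redundant soft estimates of the same watermark bit (shared among adjacent blocks by the 2×2 convolutions) and the final bit is the majority of the thresholded votes:

```python
import numpy as np

def vote_watermark(soft_estimates):
    """Majority vote: soft_estimates has shape (h, w, k), holding k redundant
    estimates in [0, 1] of each watermark bit; the final bit at each position
    is the majority of the thresholded votes. The (h, w, k) layout is an
    assumption of this sketch."""
    votes = (soft_estimates > 0.5).astype(int)
    k = soft_estimates.shape[2]
    return (votes.sum(axis=2) > k / 2).astype(int)

soft = np.array([[[0.9, 0.8, 0.1],
                  [0.2, 0.1, 0.3]]])   # one row, two bits, three votes each
bits = vote_watermark(soft)
assert np.array_equal(bits, np.array([[1, 0]]))
```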
To demonstrate the effectiveness of the method in this embodiment, the Granada dataset and the BossBase 1.01 dataset were used to verify the robustness of the embodiment against various attacks.
The method of this embodiment is compared with two classical algorithms (Lee J E, Seo Y H, Kim D W. Convolutional neural network-based digital image watermarking adaptive to the resolution of image and watermark [J]. Applied Sciences, 2020, 10(19): 1-20, article No. 6854; and Ahmadi M, Norouzi A, Karimi N, et al. ReDMark: Framework for residual diffusion watermarking based on deep networks [J]. Expert Systems with Applications, 2020, 146: 1-15, article No. 113157). Table 2 shows the peak signal-to-noise ratio (PSNR) of the watermark-containing images generated by the method of Lee et al., the ReDMark method and the method of this embodiment. Table 3 shows the bit error rate of the 3 methods when extracting watermarks under various attacks. From the data in Table 2, the method of this embodiment improves the PSNR of the generated watermark-containing image and achieves better imperceptibility. As can be seen from the data in Table 3, under a variety of attacks, such as JPEG compression, sharpening, cropping, Gaussian noise and salt-and-pepper noise, this embodiment extracts the watermark with a low bit error rate and shows better robustness. However, the method remains less robust against geometric attacks such as scaling, which is a limitation of the method of this embodiment.
TABLE 2 PSNR comparison of the 3 methods
TABLE 3 bit error rate comparison of 3 methods under multiple attacks
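The two evaluation metrics behind Tables 2 and 3 are standard and easy to reproduce:

```python
import numpy as np

def psnr(orig, test, peak=255.0):
    """Peak signal-to-noise ratio between an original and a watermarked image."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bit_error_rate(wm_true, wm_extracted):
    """Fraction of watermark bits extracted incorrectly."""
    return float(np.mean(np.asarray(wm_true) != np.asarray(wm_extracted)))

a = np.zeros((8, 8))
b = np.ones((8, 8))            # every pixel off by one grey level
assert abs(psnr(a, b) - 48.13) < 0.01
assert bit_error_rate([0, 1, 1, 0], [0, 1, 0, 0]) == 0.25
```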
Based on an attention network and a deep convolutional network, this embodiment provides a robust image watermarking algorithm that, compared with existing deep-learning-based methods, has better imperceptibility and robustness: it embeds the watermark without damaging the quality of the carrier image and extracts the embedded watermark more completely.
Example two
The second embodiment of the invention introduces a robust image watermarking system based on deep learning and attention network.
A robust image watermarking system based on deep learning and attention network as shown in fig. 6, comprising:
the watermark embedding module is configured to acquire an original image tensor, obtain an attention image tensor according to the acquired original image tensor and an attention model, and generate a watermark-containing image based on the obtained attention image tensor and a watermark embedding model;
the simulated attack module is configured to generate an attacked image according to the generated watermark-containing image and the constructed attack network model;
the watermark extraction module is configured to extract the image watermark from the attacked image using the attention model and a deep learning model.
The detailed steps are the same as those of the robustness image watermarking method based on deep learning and attention network provided in the first embodiment, and are not described herein again.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, this is not intended to limit the scope of the present invention. It should be understood by those skilled in the art that various modifications and variations can be made on the basis of the technical solution of the present invention without inventive effort, and these remain within the scope of protection of the present invention.
Claims (7)
1. A robust image watermarking method based on deep learning and attention network is characterized by comprising the following steps:
acquiring an original image tensor;
obtaining an attention map image tensor according to the obtained original image tensor and an attention model;
generating a watermark-containing image based on the obtained attention map image tensor and the watermark embedding model;
generating an attacked image according to the generated watermark-containing image and the constructed attack network model;
extracting an image watermark according to the attacked image, the attention model and the deep learning model;
in the process of obtaining the attention map image tensor, the original image tensor is input into the attention model, attention weights are inferred sequentially along the channel dimension and the spatial dimension, the obtained attention weights are multiplied with the original image tensor, the weights are adaptively adjusted, an attention feature tensor is generated, and the attention map image tensor is further obtained;
the watermark embedding model adopts a residual neural network; in the process of generating the watermark-containing image, a discrete cosine transform is performed on the attention map image tensor, the transformed tensor is combined with the watermark image along the feature dimension, and the combined tensor is input into the residual neural network to generate a watermark-containing attention map tensor; an inverse discrete cosine transform is performed on the watermark-containing attention map tensor, the inverse-transformed tensor is multiplied by the embedding strength factor, and the result is superposed with the attention map image tensor to obtain a watermark-containing feature tensor; data reconstruction is performed on the watermark-containing feature tensor, moving the values in the depth dimension to the height and width dimensions, to generate the watermark-containing image.
2. The robust image watermarking method based on deep learning and attention network as claimed in claim 1, wherein in the process of acquiring the original image tensor, the acquired original image is subjected to data reorganization to obtain the original image tensor.
3. The robust image watermarking method based on deep learning and attention network as claimed in claim 1, wherein after the inverse discrete cosine transform is performed on the obtained watermark-containing attention map image tensor, a spatial-domain image tensor is obtained; the obtained spatial-domain image tensor is multiplied by the embedding strength factor and then added to the attention map image tensor to obtain the watermark-feature tensor.
4. The robust image watermarking method based on deep learning and attention network as claimed in claim 1, wherein the attacks are a JPEG compression attack, a salt-and-pepper noise attack, a Gaussian noise attack and a sharpening attack.
5. The robust image watermarking method based on deep learning and attention network as claimed in claim 1, wherein the watermark-containing image is input into the attack network model, and during each model-training iteration the attack network selects an attack mode with random probability to generate the attacked image.
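A minimal sketch of such an attack layer, using crude NumPy stand-ins for the four claimed distortions on a grayscale image in [0, 1] (a real JPEG attack would use an actual codec or a differentiable approximation; all parameter values here are illustrative assumptions):

```python
import random
import numpy as np

def jpeg_like(img):
    """Coarse value quantization as a rough stand-in for JPEG compression loss."""
    return np.round(img * 16) / 16

def salt_pepper(img, p=0.02):
    """Flip a random fraction p of pixels to pure black or white."""
    noisy = img.copy()
    mask = np.random.rand(*img.shape)
    noisy[mask < p / 2] = 0.0
    noisy[mask > 1 - p / 2] = 1.0
    return noisy

def gaussian(img, sigma=0.02):
    """Additive Gaussian noise, clipped back to the valid range."""
    return np.clip(img + np.random.normal(0, sigma, img.shape), 0, 1)

def sharpen(img):
    """Unsharp-style boost via a 4-neighbor Laplacian (edge-padded)."""
    pad = np.pad(img, 1, mode="edge")
    lap = 4 * img - pad[:-2, 1:-1] - pad[2:, 1:-1] - pad[1:-1, :-2] - pad[1:-1, 2:]
    return np.clip(img + 0.5 * lap, 0, 1)

ATTACKS = [jpeg_like, salt_pepper, gaussian, sharpen]

def attack_layer(watermarked):
    """Per training iteration, pick one attack mode with uniform random probability."""
    return random.choice(ATTACKS)(watermarked)
```

Randomizing the attack each iteration exposes the extractor to every distortion during training, which is what makes the learned watermark robust to all of them.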
6. The robust image watermarking method based on deep learning and attention network as claimed in claim 1, wherein the specific process of extracting the image watermark is as follows:
carrying out data reconstruction on the obtained attacked image to obtain an attacked image tensor;
inputting the attacked image tensor into the attention model to obtain an attention image of the attacked image;
and performing discrete cosine transform on the attention image of the attacked image, inputting the transformed attention image into the deep learning model, and extracting the image watermark in combination with a voting strategy.
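The voting strategy is not detailed in the claims; a plausible reading, assuming the extractor network emits several redundant soft copies of each watermark bit, is a per-bit majority vote:

```python
import numpy as np

def vote_bits(soft_bits):
    """Majority vote across redundant copies of each watermark bit.

    soft_bits: array of shape (copies, n_bits) with values in [0, 1] --
    assumed to be the extractor network's per-copy sigmoid outputs.
    """
    hard = (soft_bits > 0.5).astype(int)                  # threshold each copy
    votes = hard.sum(axis=0)                              # count 1-votes per bit
    return (votes * 2 > soft_bits.shape[0]).astype(int)   # strict majority wins
```

For example, three copies voting (1, 1, 1), (0, 0, 0), (1, 0, 1) on three bits recover the watermark 1, 0, 1 even though the middle copy of the last bit was corrupted by the attack.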
7. A robust image watermarking system based on deep learning and attention network, comprising:
the watermark embedding module is configured to acquire an original image tensor, obtain an attention map image tensor from the acquired original image tensor and an attention model, and generate a watermark-containing image based on the obtained attention map image tensor and a watermark embedding model;
the simulated attack module is configured to generate an attacked image according to the generated watermark-containing image and the constructed attack network model;
a watermark extraction module configured to extract the image watermark from the attacked image using the attention model and a deep learning model;
in the process of obtaining the attention map image tensor, the original image tensor is input into the attention model; attention weights are inferred sequentially along the channel dimension and then along the spatial dimension; the obtained attention weights are multiplied by the original image tensor so that feature weights are adaptively adjusted, generating an attention feature tensor from which the attention map image tensor is obtained;
the watermark embedding model adopts a residual neural network; in the process of generating the watermark-containing image, discrete cosine transform is performed on the attention map image tensor, the transformed attention map image tensor and the watermark image are feature-merged, and the merged features are input into the residual neural network to generate a watermark-containing attention map image tensor; inverse discrete cosine transform is performed on the obtained watermark-containing attention map image tensor, the inverse-transformed tensor is multiplied by the embedding strength factor, and the product is superposed with the attention map image tensor to obtain a watermark-feature tensor; data reconstruction is then performed on the obtained watermark-feature tensor, moving values from the depth dimension to the height and width dimensions, to generate the watermark-containing image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111607588.3A CN113989096B (en) | 2021-12-27 | 2021-12-27 | Robust image watermarking method and system based on deep learning and attention network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111607588.3A CN113989096B (en) | 2021-12-27 | 2021-12-27 | Robust image watermarking method and system based on deep learning and attention network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113989096A CN113989096A (en) | 2022-01-28 |
CN113989096B true CN113989096B (en) | 2022-04-12 |
Family
ID=79734375
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111607588.3A Active CN113989096B (en) | 2021-12-27 | 2021-12-27 | Robust image watermarking method and system based on deep learning and attention network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113989096B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114663946B (en) * | 2022-03-21 | 2023-04-07 | 中国电信股份有限公司 | Countermeasure sample generation method, apparatus, device and medium |
CN115293949B (en) * | 2022-07-14 | 2024-01-02 | 中技安全科技有限公司 | Image encryption method |
CN116308986B (en) * | 2023-05-24 | 2023-08-04 | 齐鲁工业大学(山东省科学院) | Hidden watermark attack algorithm based on wavelet transformation and attention mechanism |
CN116342362B (en) * | 2023-05-31 | 2023-07-28 | 齐鲁工业大学(山东省科学院) | Deep learning enhanced digital watermark imperceptibility method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090018586A (en) * | 2007-08-17 | 2009-02-20 | 가부시끼가이샤 도시바 | Image processing method and image processing apparatus |
CN102722857A (en) * | 2012-05-24 | 2012-10-10 | 河海大学 | Digital image watermark method based on visual attention mechanism |
CN102750660A (en) * | 2012-06-08 | 2012-10-24 | 北京京北方信息技术有限公司 | Method and device for embedding and extracting digital watermarking |
CN111199233A (en) * | 2019-12-30 | 2020-05-26 | 四川大学 | Improved deep learning pornographic image identification method |
CN113095988A (en) * | 2021-03-29 | 2021-07-09 | 贵州大学 | Dispersion tensor image robust zero watermarking method based on ORC sampling and QGPCE conversion |
CN113763224A (en) * | 2020-06-03 | 2021-12-07 | 阿里巴巴集团控股有限公司 | Image processing method and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8369565B2 (en) * | 2009-04-24 | 2013-02-05 | Academia Sinica | Information hiding with similar structures |
CN107292806A (en) * | 2017-06-28 | 2017-10-24 | 南京师范大学 | A kind of remote sensing image digital watermark embedding and extracting method based on quaternion wavelet |
CN111681155B (en) * | 2020-06-09 | 2022-05-27 | 湖南大学 | GIF dynamic image watermarking method based on deep learning |
CN113379584B (en) * | 2021-06-10 | 2023-10-31 | 大连海事大学 | Imperceptible watermark attack method based on residual error learning, storage medium and electronic device |
CN113393382B (en) * | 2021-08-16 | 2021-11-09 | 四川省人工智能研究院(宜宾) | Binocular picture super-resolution reconstruction method based on multi-dimensional parallax prior |
- 2021-12-27 CN CN202111607588.3A patent/CN113989096B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090018586A (en) * | 2007-08-17 | 2009-02-20 | 가부시끼가이샤 도시바 | Image processing method and image processing apparatus |
CN102722857A (en) * | 2012-05-24 | 2012-10-10 | 河海大学 | Digital image watermark method based on visual attention mechanism |
CN102750660A (en) * | 2012-06-08 | 2012-10-24 | 北京京北方信息技术有限公司 | Method and device for embedding and extracting digital watermarking |
CN111199233A (en) * | 2019-12-30 | 2020-05-26 | 四川大学 | Improved deep learning pornographic image identification method |
CN113763224A (en) * | 2020-06-03 | 2021-12-07 | 阿里巴巴集团控股有限公司 | Image processing method and device |
CN113095988A (en) * | 2021-03-29 | 2021-07-09 | 贵州大学 | Dispersion tensor image robust zero watermarking method based on ORC sampling and QGPCE conversion |
Non-Patent Citations (2)
Title |
---|
Research on Voiceprint Recognition Algorithms Based on Deep Learning; Guo Minghan; China Master's Theses Full-text Database, Information Science and Technology; 2020-08-15; Vol. 2020, No. 08; Section 3.2.2 *
Digital Image Watermarking Algorithm Based on Visual Attention Model; Yang Ying et al.; Journal of Computer Applications; 2009-12-31; Vol. 29, No. S2; Sections 2-3 *
Also Published As
Publication number | Publication date |
---|---|
CN113989096A (en) | 2022-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113989096B (en) | Robust image watermarking method and system based on deep learning and attention network | |
Amirgholipour et al. | Robust digital image watermarking based on joint DWT-DCT | |
Singh et al. | A security enhanced robust steganography algorithm for data hiding | |
Kasmani et al. | A new robust digital image watermarking technique based on joint DWT-DCT transformation | |
Susanto et al. | Hybrid method using HWT-DCT for image watermarking | |
Divecha et al. | Implementation and performance analysis of DCT-DWT-SVD based watermarking algorithms for color images | |
Ho et al. | Robust digital image-in-image watermarking algorithm using the fast Hadamard transform | |
Sari et al. | Robust and imperceptible image watermarking by DC coefficients using singular value decomposition | |
Ramakrishnan et al. | Svd based robust digital watermarking for still images using wavelet transform | |
Perwej et al. | Copyright protection of digital images using robust watermarking based on joint DLT and DWT | |
Khalifa et al. | A robust non-blind algorithm for watermarking color images using multi-resolution wavelet decomposition | |
Rao et al. | An efficient genetic algorithm based gray scale digital image watermarking for improving the robustness and imperceptibility | |
Halima et al. | A novel approach of digital image watermarking using HDWT-DCT | |
Xuan et al. | Image steganalysis based on statistical moments of wavelet subband histograms in DFT domain | |
Meng et al. | Copyright protection for digital image based on joint DWT-DCT transformation | |
Wang et al. | New gray-scale watermarking algorithm of color images based on quaternion Fourier transform | |
Sharma et al. | Robust image watermarking technique using contourlet transform and optimized edge detection algorithm | |
Lee et al. | Genetic algorithm-based watermarking in discrete wavelet transform domain | |
Zhong et al. | An optimal wavelet-based image watermarking via genetic algorithm | |
Pai et al. | A high quality robust digital watermarking by smart distribution technique and effective embedded scheme | |
Hu et al. | A blind watermarking algorithm for color image based on wavelet transform and Fourier transform | |
Tomar et al. | A statistical comparison of digital image watermarking techniques | |
Duman et al. | A new method of wavelet domain watermark embedding and extraction using fractional Fourier transform | |
Silja et al. | A watermarking algorithm based on contourlet transform and nonnegative matrix factorization | |
Kaur et al. | Digital watermarking in neural networks models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||