CN113554545A - Model watermarking method for image processing model copyright protection - Google Patents

Model watermarking method for image processing model copyright protection

Info

Publication number
CN113554545A
CN113554545A
Authority
CN
China
Prior art keywords
model
watermark
image
training
expressed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110858190.0A
Other languages
Chinese (zh)
Inventor
唐琳琳
陈佳伟
刘洋
漆舒汉
张加佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202110858190.0A
Publication of CN113554545A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/0021 - Image watermarking
    • G06T1/005 - Robust watermarking, e.g. average attack or collusion attack resistant
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a model watermarking method for protecting the copyright of an image processing model, comprising the following steps: acquiring a trained image processing model M(θ; ·); embedding an invisible watermark w in the ground-truth (GT) image set Y of the original training data (X, Y) to obtain the watermarked GT image set Y_w; then training the model on the watermarked training data (X, Y_w) so that its parameters change to watermark-embedded parameters, yielding the watermarked model M(θ_w; ·). The method further comprises ownership verification: copyright verification of a modified model or a suspicious model SM is achieved by measuring the success rate of watermark extraction from trigger images. The method realizes copyright protection for image processing models, exhibits excellent fidelity, uniqueness and robustness, has strong generality, and can be used for copyright protection of any model whose output is an image.

Description

Model watermarking method for image processing model copyright protection
Technical Field
The invention relates to the technical field of copyright protection for image processing models, and in particular to a model watermarking method for image processing model copyright protection.
Background
Deep learning has enjoyed great success in recent years on many computer vision tasks, such as image classification, image processing and image segmentation. However, it is not easy to fully develop the learning ability of the deep neural network. For example, the design of a model architecture and the use of hyper-parameters in training often require extensive expertise and repeated experimentation. In addition, a large amount of computational resources and high quality annotation data are also essential to training a good model.
Academia began studying deep learning model watermarking for protecting the copyrights of deep learning models in 2017. Model watermarking methods typically consist of two parts: watermark embedding and ownership verification. Watermark embedding causes the model to exhibit a distinct, specific pattern on particular inputs, making the model traceable after being stolen; during ownership verification, the model owner can then determine ownership by observing whether the model exhibits that pattern. When verifying ownership of a suspicious model, the model owner faces two scenarios: white-box and black-box. In the white-box scenario, the model owner can access the structure and parameters of the suspicious model; in the black-box scenario, the model is typically deployed in a product or service that exposes only an API, so the model owner can only observe the model's outputs. Clearly, the black-box setting is more common in real-world scenarios and more challenging for ownership verification.
At present, most work on deep learning model watermarking focuses on copyright protection for image classification models, while research on watermarking image processing models (whose inputs and outputs are both images) is scarce, and existing methods lack generality and are difficult to transplant to image processing models for different tasks. Therefore, an image processing model watermarking method with strong generality and support for black-box verification is needed.
The above background disclosure is provided only to assist understanding of the inventive concept and technical solutions of the present invention; it does not necessarily constitute prior art of the present application, and absent clear evidence that the above content was disclosed before the filing date of the present application, it should not be used to assess the novelty and inventiveness of the present application.
Disclosure of Invention
In order to solve at least one of the technical problems mentioned in the background art, an object of the present invention is to provide a model watermarking method for image processing model copyright protection, which can realize copyright protection of an image processing model, has excellent model fidelity, uniqueness and robustness, has strong universality, and can be used for copyright protection of any model whose output is an image.
In order to achieve the above object, the present invention provides the following technical solutions.
A model watermarking method for image processing model copyright protection utilizes an invisible image watermarking mechanism to carry out watermark embedding and ownership verification, and comprises the following steps:
acquiring a trained image processing model M(θ; ·);
embedding an invisible watermark w in the GT image set Y of the original training data (X, Y) to obtain the watermarked GT image set Y_w;
then training the model on the watermarked training data (X, Y_w) so that the model parameters become watermark-embedded parameters, obtaining the watermarked model M(θ_w; ·).
Further, the model watermarking method for image processing model copyright protection also comprises ownership verification: trigger images T are selected from the input images X and input into the model M(θ_w; ·) or into a suspicious model SM; watermark information is extracted from each resulting output image SM(t), and the success rate of watermark extraction over the trigger images is calculated, thereby performing copyright verification on the model M(θ_w; ·) or the suspicious model SM.
Further, the watermark embedding process adopts three-stage training, including:
an image watermarking stage: training to obtain an encoder E, then embedding the watermark w in the GT images Y using E;
a model watermarking stage: training the model on the watermarked training data (X, Y_w) to obtain the watermarked model M_w;
an extraction enhancement stage: enhancing the extraction capability of the decoder D so that it can extract the watermark w from the output images of M_w and extract a blank image b from the outputs of other models.
The model watermarking method is applied to the copyright protection of the image processing model.
In the method, the encoder E and decoder D are obtained by training to minimize a loss function, and the watermark embedding process adopts three-stage training, so that the watermarked image processing model learns to embed invisible watermark information in its output images. The watermark extraction success rate is judged by the normalized correlation coefficient between the watermark extracted from the model and the original watermark, thereby verifying ownership of the model. Extensive experiments demonstrate the feasibility of the proposed method for protecting image processing models: it has excellent fidelity, uniqueness and robustness, and because every step of the method is independent of the specific task, it has strong generality, overcoming the defects of traditional image processing model watermarking methods that lack generality and are difficult to transplant to models for different tasks; it can be used for copyright protection of any model that outputs images.
The invention adopts the above technical scheme to achieve its purpose, makes up for the defects of the prior art, and is reasonably designed and convenient to operate.
Drawings
The foregoing and/or other objects, features, advantages and embodiments of the invention will be more readily understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic illustration of an invisible image watermark;
FIG. 2 is a general schematic diagram of a model watermarking method;
FIG. 3 is a schematic diagram of a three-stage training strategy in the watermark embedding process;
FIG. 4 shows the watermark image extracted from the output images of the model M_w;
FIG. 5 shows the watermark image extracted from the output images of the model M_w after compression;
FIG. 6 shows the watermark image extracted from the output images of the model M_w after fine-tuning;
FIG. 7 shows the watermark image extracted from the output images of another model M';
FIG. 8 is a statistical diagram of the watermark extraction success rates for the suspicious models SM_1 and SM_2.
Detailed Description
The present invention is described in detail below with reference to specific embodiments and drawings, and it is to be understood that those skilled in the art can implement the invention by referring to the content and appropriate replacement and/or modification of process parameters, and it is specifically noted that all similar replacement and/or modification are obvious to those skilled in the art and are considered to be included in the present invention. While the methods of the present invention have been described in terms of preferred embodiments, it will be apparent to those of ordinary skill in the art that variations and modifications in the methods described herein, as well as other suitable variations and combinations, may be made to implement and use the techniques of the present invention without departing from the spirit and scope of the invention.
The embodiment of the invention provides a model watermarking method for image processing model copyright protection. Using an invisible image watermarking mechanism, it can make a trained image processing model embed a watermark into its output images without affecting performance on the original task, i.e., the model's outputs remain visually consistent with the original outputs, even when a competitor steals the model and, after simple modification, deploys it in his own product or service.
The following is a schematic analysis to facilitate understanding of the method of the present invention.
Let M(θ; ·) represent a trained image processing model, where θ denotes the model parameters, trained on the original training data (X, Y). Here X represents the set of all input images in the original training data, and Y represents the corresponding set of GT (ground-truth) images. Therefore, the output M(θ; X) of the model M(θ; ·) on X should approach Y, expressed as
M(θ; X) ≈ Y (1).
Assume a competitor (Attacker) steals the model and deploys it in his own products or services, exposing only an API (Application Programming Interface) for use. For a suspicious model SM in such a black-box scenario, the only information the model owner (Owner) can use to prove ownership is the output images of SM. Fortunately, images are the most common watermark carrier: a watermark can be embedded in an image without causing noticeable distortion, and can later be extracted from the watermarked image in a specific way, as shown in FIG. 1. Therefore, we consider designing a model watermarking method so that the watermarked model can embed an invisible watermark in its output images without any change to the model architecture.
To this end, we first embed an invisible watermark w in the GT image set Y of the original training data using an invisible image watermarking mechanism, and use Y_w to denote the watermarked GT image set. Y_w and Y should be as visually consistent as possible, expressed as
Y_w ≈ Y and Y_w contains the watermark w (2).
Then we train the model on the watermarked training data (X, Y_w) to change its parameters to watermark-embedded ones. Let M(θ_w; ·) denote the watermarked model, where θ_w are the model parameters after embedding the watermark. The output M(θ_w; X) of the watermarked model on X should approach Y_w, i.e.,
M(θ_w; X) ≈ Y_w (3).
Comparing equations (1), (2) and (3), we can deduce equation (4):
M(θ_w; X) ≈ M(θ; X) and M(θ_w; X) contains w (4)
That is, because the watermarked GT images Y_w contain the embedded watermark w while remaining visually consistent with the original GT images Y, the new model parameters θ_w obtained by training on the watermarked data (X, Y_w) allow the model to embed the watermark w in its output images without significant degradation on the original task. Thus our objective is achieved: the watermarked model embeds an invisible watermark in its output images.
Thus, if the watermarked model is stolen by the Attacker, the Owner can extract the watermark from the model's output images in a specific way to prove ownership of the model. For convenience, we use M to denote the original model M(θ; ·) and M_w to denote the watermarked model M(θ_w; ·).
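The embedding setup above can be sketched in a few lines. The following toy illustration is an assumption-laden stand-in for the patent's learned encoder E: it embeds w as a faint additive pattern (the real method uses a trained DNN), but it shows the invariant that matters, namely that Y_w stays visually close to Y (Eq. (2)):

```python
import numpy as np

def embed_watermark(Y, w, alpha=0.02):
    """Toy stand-in for the trained encoder E: add a faint copy of the
    pattern w to every GT image so that Y_w stays visually close to Y
    (Eq. (2)). The patent's E is a learned DNN; this additive scheme is
    an assumption used purely for illustration."""
    return np.clip(Y + alpha * w, 0.0, 1.0)

rng = np.random.default_rng(0)
Y = rng.random((4, 8, 8))    # four toy GT "images" with pixels in [0, 1)
w = rng.random((8, 8))       # the invisible watermark pattern
Y_w = embed_watermark(Y, w)  # watermarked GT set (w broadcast over all images)

# Y_w ≈ Y: the per-pixel deviation is bounded by alpha (Eq. (2))
print(float(np.abs(Y_w - Y).max()) <= 0.02)  # True
```

Training on (X, Y_w) instead of (X, Y) is then all that distinguishes M_w from M, which is exactly the point of equations (3) and (4).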
Example 1:
As shown in FIG. 2, a DNN-based image watermarking mechanism is employed: a trained encoder E (E for short) is used to embed the watermark in the GT images Y, and a trained decoder D (D for short) is used to extract the watermark from the output images of M_w. The entire framework comprises two parts: watermark embedding and ownership verification.
A three-stage training strategy is applied in the watermark embedding process, as shown in fig. 3, specifically including: the image watermarking stage, the model watermarking stage, and the extraction enhancement stage are respectively described below.
Firstly, image watermarking stage:
at this stage, our goal is to train to obtain the encoder E and then use E to embed the watermark w in the GT image Y. For this purpose, we use a DNN-based image watermarking method, as shown in fig. 3 (a).
E takes Y and w as input and outputs the watermarked image Y_w.
To minimize the impact on the original task, Y_w should be visually consistent with Y.
D takes Y_w and Y as inputs, and should extract the watermark w from Y_w and a blank picture b from Y. The blank picture b is introduced to avoid the situation where D always extracts the watermark w regardless of whether its input contains it; in this way, D can genuinely distinguish images with the embedded watermark w from images without one.
Meanwhile, to further improve the visual quality of Y_w, we add an adversary A after E. We next present the loss functions used at this stage.
We express the visual consistency between Y_w and Y with the following three loss functions:
an image distortion loss, which measures the L2 distance between Y_w and Y, expressed as
L_img = ||Y_w - Y||_2 (5);
a perceptual loss, which measures the L2 distance between the VGG features of Y_w and Y, expressed as
L_vgg = ||VGG_k(Y_w) - VGG_k(Y)||_2 (6);
an adversarial loss, which measures the ability of the encoder E to fool the adversary A, expressed as
L_adv = log(1 - A(Y_w)) (7).
Meanwhile, we express the extraction capability of the decoder D as a message transformation loss, which measures the L2 distance between the watermarks extracted from Y_w and Y and the targets w and b, expressed as
L_msg = ||D(Y_w) - w||_2 + ||D(Y) - b||_2 (8).
We train E and D simultaneously to minimize the loss function L_ED:
L_ED = λ_img·L_img + λ_vgg·L_vgg + λ_adv·L_adv + λ_msg·L_msg (9),
where λ_img, λ_vgg, λ_adv and λ_msg are all hyper-parameters whose optimal values can be determined experimentally.
Meanwhile, we train A to minimize the loss function L_A, to strengthen A's ability to distinguish Y_w from Y:
L_A = log(1 - A(Y)) + log(A(Y_w)) (10);
throughout the training process, E and D (updated simultaneously) and A are updated alternately.
After training at this stage is complete, we can embed the watermark w in the GT images Y using the trained encoder E, generating the watermarked GT image set Y_w.
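The stage-one losses (5)-(9) can be illustrated with toy numpy arrays. Everything here is a sketch: `D_Yw`, `D_Y` and `A_Yw` stand in for decoder and adversary outputs, `feat` is a placeholder for the VGG_k feature extractor, and the λ values are assumptions, not values from the patent:

```python
import numpy as np

def l2(a, b):
    """L2 distance between two arrays."""
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(1)
Y = rng.random((8, 8))
Y_w = Y + 0.01                     # toy watermarked GT, visually close to Y
w = rng.random((4, 4))             # watermark target for D on Y_w
b = np.zeros((4, 4))               # blank-image target for D on Y
D_Yw, D_Y = w + 0.001, b + 0.001   # pretend decoder outputs (near-perfect)
A_Yw = 0.3                         # pretend adversary score on Y_w
feat = lambda img: img[::2, ::2]   # placeholder for the VGG_k feature map

L_img = l2(Y_w, Y)                 # Eq. (5): image distortion loss
L_vgg = l2(feat(Y_w), feat(Y))    # Eq. (6): perceptual loss
L_adv = float(np.log(1 - A_Yw))    # Eq. (7): adversarial loss
L_msg = l2(D_Yw, w) + l2(D_Y, b)   # Eq. (8): message transformation loss

lam = dict(img=1.0, vgg=1.0, adv=0.1, msg=10.0)  # assumed hyper-parameters
L_ED = (lam['img'] * L_img + lam['vgg'] * L_vgg
        + lam['adv'] * L_adv + lam['msg'] * L_msg)  # Eq. (9)
```

In the real three-stage training these quantities would be differentiable tensors backpropagated through E and D; the arithmetic structure of (9) is the same.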
Secondly, model watermarking stage:
At this stage, our goal is to train the model on the watermarked training data (X, Y_w) to obtain the watermarked model M_w. The training procedure is exactly the same as for the original model M, with no other changes. As shown in FIG. 3(b), if the loss function used to train the original model M is L_M = Loss(M(X), Y), then the loss function used to train M_w can be expressed as:
L_Mw = Loss(M_w(X), Y_w) (11).
As mentioned before, because the watermarked GT images Y_w contain the watermark w while remaining visually consistent with the original GT images Y, the model M_w obtained by training on the watermarked data (X, Y_w) can embed the watermark w in its output images while suffering no significant performance degradation on the original task. Therefore, even if the Owner can only access the output images of a suspicious model SM in a black-box scenario, those outputs are still a powerful indicator of ownership, better realizing copyright protection for image processing models.
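Equation (11) says stage two reuses the original training routine unchanged, substituting Y_w for Y as the target. A minimal sketch with a one-parameter linear "model" (purely illustrative; the patent trains an image-processing DNN) shows that the same `train` function serves both M and M_w, and that θ_w lands close to θ:

```python
import numpy as np

def train(X, targets, lr=0.1, steps=200):
    """One training routine for both M and M_w; only `targets` differs (Eq. 11)."""
    theta = 0.0
    for _ in range(steps):
        pred = theta * X                              # toy model: M(x) = θ·x
        grad = 2 * np.mean((pred - targets) * X)      # d/dθ of the squared loss
        theta -= lr * grad
    return theta

rng = np.random.default_rng(2)
X = rng.random(32)
Y = 2.0 * X                  # toy "GT images"
Y_w = Y + 0.05               # toy watermarked GT, visually close to Y

theta = train(X, Y)          # original model M:    minimizes Loss(M(X), Y)
theta_w = train(X, Y_w)      # watermarked model:   minimizes Loss(M_w(X), Y_w)
# θ_w stays close to θ, so task performance barely degrades (cf. Eq. (4))
```

The design point is that no architecture change and no new training machinery are needed: only the supervision target moves from Y to Y_w.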
Thirdly, extraction enhancement stage:
At this stage, the goal is to enhance the extraction capability of the decoder D, so that it can better distinguish the watermarked model M_w from other models for the same task. A qualified D should extract the watermark w from the output images of M_w and extract the blank image b from the outputs of other models, yet until now D has not been trained on the output images of M_w. Thus, as shown in FIG. 3(c), we further fine-tune D on the mixed data set {Y_w, Y, M_w(X), M(X)} to enhance its extraction capability.
We express the extraction capability of D with the following three loss functions:
a watermark distortion loss, which measures the L2 distance between the watermark w and the information extracted from Y_w and M_w(X), expressed as:
L_w = ||D(Y_w) - w||_2 + ||D(M_w(X)) - w||_2 (12);
a blank-image distortion loss, which measures the L2 distance between the blank image b and the information extracted from Y and M(X), expressed as:
L_b = ||D(Y) - b||_2 + ||D(M(X)) - b||_2 (13);
a consistency loss, which makes the watermarks extracted from different watermarked images consistent, expressed as:
L_cst = ||D(Y_w) - D(M_w(X))||_2 (14);
We fine-tune D to minimize the loss function L_D:
L_D = λ_w·L_w + λ_b·L_b + λ_cst·L_cst (15);
λ_w, λ_b and λ_cst are all hyper-parameters whose optimal values can be determined experimentally.
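The stage-three fine-tuning losses (12)-(15) follow the same pattern as stage one. A toy numpy sketch with pretend decoder outputs (all names and λ values are assumptions for illustration only):

```python
import numpy as np

def l2(a, b):
    """L2 distance between two arrays."""
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(3)
w = rng.random((4, 4))                # the watermark
b = np.zeros((4, 4))                  # the blank image
# Pretend decoder outputs on the four parts of the mixed data set:
D_Yw, D_MwX = w + 0.01, w - 0.01      # watermarked GT / M_w outputs -> near w
D_Y, D_MX = b + 0.01, b + 0.02        # clean GT / other-model outputs -> near b

L_w = l2(D_Yw, w) + l2(D_MwX, w)      # Eq. (12): watermark distortion loss
L_b = l2(D_Y, b) + l2(D_MX, b)        # Eq. (13): blank-image distortion loss
L_cst = l2(D_Yw, D_MwX)               # Eq. (14): consistency loss

lam_w, lam_b, lam_cst = 1.0, 1.0, 0.5          # assumed hyper-parameters
L_D = lam_w * L_w + lam_b * L_b + lam_cst * L_cst  # Eq. (15)
```

Minimizing L_D pushes D toward emitting w only for images produced by (or derived from) M_w and b for everything else, which is exactly the discrimination needed in verification.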
Example 2:
On the basis of the foregoing embodiment, for a suspicious model SM, the process of ownership verification is shown in FIG. 2. First, we randomly select k trigger images T = {t_1, t_2, t_3, …, t_k} from the input images X, and then input each trigger image t_i in T into SM through the API provided by the Attacker. The trained decoder D is then used to extract watermark information w'_i = D(SM(t_i)) from each output image SM(t_i). Finally, the watermark extraction success rate SR over the trigger images is calculated:
SR = (1/k) · Σ_{i=1}^{k} s_i (16),
where
s_i = 1 if NCC(w'_i, w) > 0.95, and s_i = 0 otherwise (17);
that is, extraction of the watermark w'_i is considered successful if the NCC (normalized correlation coefficient) value between w'_i and the original watermark w is greater than 0.95.
Obviously, if the suspicious model SM is the M_w stolen from the Owner by the Attacker, the watermark extraction success rate should be high; otherwise it should be low. Therefore, in ownership verification, if the watermark extraction success rate is found to be high, the model is very likely stolen from the Owner. This realizes copyright protection of the image processing model.
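The verification in equations (16)-(17) reduces to computing NCC against the reference watermark and thresholding at 0.95. A minimal sketch follows; the zero-mean NCC formula is an assumption, since the patent does not spell out its exact definition:

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two images (zero-mean
    form; an assumption, as the patent does not give the exact formula)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def success_rate(extracted, w, thresh=0.95):
    """SR over k trigger images, Eqs. (16)-(17)."""
    s = [1 if ncc(w_i, w) > thresh else 0 for w_i in extracted]
    return sum(s) / len(s)

rng = np.random.default_rng(4)
w = rng.random((8, 8))
# Decoder outputs for a stolen model: noisy copies of w -> SR should be high
stolen = [w + 0.01 * rng.standard_normal(w.shape) for _ in range(10)]
# Decoder outputs for an unrelated model: random images -> SR should be low
other = [rng.random((8, 8)) for _ in range(10)]

print(success_rate(stolen, w))  # high (near 1.0): model very likely stolen
print(success_rate(other, w))   # low (near 0.0): model not the Owner's
```

The 0.95 threshold and the black-box access pattern (only SM's outputs are observed) are exactly as described above.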
Example 3:
On the basis of the foregoing embodiment, the watermark extraction for the image processing model specifically includes:
for the watermarked image processing model M_w, extracting the watermark from its output images yields the watermark image of FIG. 4, with a watermark extraction rate of 100%;
after the model M_w is compressed, extracting the watermark from its output images yields the watermark image of FIG. 5, with a watermark extraction rate of no less than 50%, indicating that the model was probably stolen from the Owner;
after the model M_w is fine-tuned, extracting the watermark from its output images yields the watermark image of FIG. 6, with a watermark extraction success rate exceeding 98%, indicating that the model is very likely stolen from the Owner;
taking another model M', extracting the watermark from its output images yields the watermark image of FIG. 7, with a watermark extraction success rate close to 0%, indicating that the model does not belong to the Owner.
Example 4:
On the basis of the foregoing embodiments, pictures of different scenes, colors, contrast, brightness, etc. are used to perform copyright verification on two suspicious models SM_1 and SM_2: 30 pictures are verified for each model, each picture is verified 100 times, and the watermark extraction success rate is counted each time; the statistical results are shown in FIG. 8. As can be seen from FIG. 8, SM_1 clearly does not belong to the Owner, while SM_2 is highly likely to be stolen from the Owner, and the verification of the model watermarking method is highly stable across different images.
The above description covers only some embodiments of the present invention, and the data and drawings are likewise part of the invention; it should be understood, however, that the scope of the invention is not limited thereby, and any structural changes made according to the invention without departing from its gist shall be construed as falling within the protection scope of the invention.
For convenience and brevity of description, those skilled in the art will clearly understand that the specific working process and related description of the system described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
It will be appreciated and will be appreciated by those of skill in the art that the various exemplary method steps and equations described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations thereof, and that such method steps and equations may correspond to programs located in random access memory, read-only memory, programmable ROM, erasable programmable ROM, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
The invention is not the best known technology.

Claims (9)

1. A model watermarking method for image processing model copyright protection utilizes an invisible image watermarking mechanism to carry out watermark embedding and ownership verification, and comprises the following steps:
acquiring a trained image processing model M(θ; ·);
embedding an invisible watermark w in the GT image set Y of the original training data (X, Y) to obtain the watermarked GT image set Y_w;
then training the model on the watermarked training data (X, Y_w) so that the model parameters become watermark-embedded parameters, obtaining the watermarked model M(θ_w; ·).
2. The method of claim 1, wherein: the method further comprises ownership verification: trigger images T are selected from the input images X and input into the model M(θ_w; ·) or into a suspicious model SM; watermark information is extracted from each resulting output image SM(t), and the success rate of watermark extraction over the trigger images is calculated, thereby performing copyright verification on the model M(θ_w; ·) or the suspicious model SM.
3. The method according to claim 1 or 2, characterized in that: the watermark embedding process adopts three-stage training, including:
an image watermarking stage: training to obtain an encoder E, then embedding the watermark w in the GT images Y using E;
a model watermarking stage: training the model on the watermarked training data (X, Y_w) to obtain the watermarked model M_w;
an extraction enhancement stage: enhancing the extraction capability of the decoder D so that it can extract the watermark w from the output images of M_w and extract a blank image b from the outputs of other models.
4. The method of claim 3, wherein: in the image watermarking stage, the image is watermarked,
E takes Y and w as input and outputs the watermarked image Y_w;
the training target is that Y_w and Y maintain visual consistency;
D takes Y_w and Y as inputs, and should be able to extract the watermark w from Y_w and a blank picture b from Y.
5. The method of claim 4, wherein:
handle YwAnd Y are expressed by the following three loss functions:
image distortion loss, which measures YwAnd L between Y2Distance, expressed as
Limg=||Yw-Y||2 (5);
Perceptual loss, which measures YwAnd L between VGG characteristics of Y2Distance, expressed as
Lvgg=||VGGk(Yw)-VGGk(Y)||2 (6);
addversalloss, which measures the ability of encoderE to spoof addversaryA, is denoted as
Ladv=log(1-A(Yw)) (7);
Meanwhile, the extraction capability of decoderD is expressed as message transformation loss, which is measured from YwAnd L between the extracted watermark in Y and the set w, b2Distance, expressed as
Lmsg=||D(Yw)-w||2+||D(Y)-b||2 (8);
Training E and D to minimize the loss function LED
LED=λimgLimguggLvggadvLadvmsgLmsg (9),
λimg、λvgg、λadv、λmsgAll the parameters are hyper-parameters, and the optimal values can be determined in the experimental process;
train A to minimize the loss function LATo strengthen the A division YwAnd the ability in Y:
LA=log(1-A(Y))+log(A(Yw)) (10);
throughout the training process, E, D (E and D are updated simultaneously) and A are updated alternately.
6. The method of claim 3, wherein: in the stage of the model watermarking,
the loss function used to train M_w is expressed as:
L_Mw = Loss(M_w(X), Y_w) (11);
the training goal is that M_w suffers no significant performance degradation on the original task.
7. The method of claim 3, wherein: in the extraction enhancement stage,
D is further fine-tuned on the mixed data set {Y_w, Y, M_w(X), M(X)} to enhance its extraction capability;
the extraction capability of D is expressed by the following three loss functions:
a watermark distortion loss, which measures the L2 distance between the watermark w and the information extracted from Y_w and M_w(X), expressed as:
L_w = ||D(Y_w) - w||_2 + ||D(M_w(X)) - w||_2 (12);
a blank-image distortion loss, which measures the L2 distance between the blank image b and the information extracted from Y and M(X), expressed as:
L_b = ||D(Y) - b||_2 + ||D(M(X)) - b||_2 (13);
a consistency loss, which makes the watermarks extracted from different watermarked images consistent, expressed as:
L_cst = ||D(Y_w) - D(M_w(X))||_2 (14);
D is fine-tuned to minimize the loss function L_D:
L_D = λ_w·L_w + λ_b·L_b + λ_cst·L_cst (15);
λ_w, λ_b and λ_cst are all hyper-parameters whose optimal values can be determined experimentally.
8. The method of claim 2, wherein: in the ownership verification stage,
for a suspicious model SM, k trigger images T are randomly selected from the input images X; each trigger image t_i in T is then fed into SM through the API provided by the attacker, and the decoder D obtained by training extracts the watermark information w'_i = D(SM(t_i)) from each output image SM(t_i); finally, the success rate SR of watermark extraction over the trigger images is calculated:
SR = (1/k) * Σ_{i=1}^{k} s_i (16);
wherein
s_i = 1 if NCC(w'_i, w) > 0.95, and s_i = 0 otherwise;
that is, s_i = 1 denotes that the NCC (normalized correlation coefficient) value between the extracted watermark w'_i and the original watermark w exceeds 0.95, in which case the watermark extraction is considered successful.
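The verification step can be sketched as follows (our own illustration; the patent does not spell out the exact NCC formula, so the zero-mean normalized correlation used here is an assumption):

```python
import numpy as np

def ncc(a, b):
    # zero-mean normalized correlation coefficient of two watermark arrays
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def success_rate(extracted, w, thresh=0.95):
    # SR = (1/k) * sum_i s_i, with s_i = 1 iff NCC(w'_i, w) > thresh
    s = [1 if ncc(w_i, w) > thresh else 0 for w_i in extracted]
    return sum(s) / len(s)
```

Here `extracted` would hold the decoder outputs D(SM(t_i)) for the k trigger images; a high SR is then taken as evidence that the suspicious model SM was derived from the watermarked model.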
9. Use of the method of any one of claims 1 to 8 for image processing model copyright protection.
CN202110858190.0A 2021-07-28 2021-07-28 Model watermarking method for image processing model copyright protection Pending CN113554545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110858190.0A CN113554545A (en) 2021-07-28 2021-07-28 Model watermarking method for image processing model copyright protection


Publications (1)

Publication Number Publication Date
CN113554545A true CN113554545A (en) 2021-10-26

Family

ID=78133109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110858190.0A Pending CN113554545A (en) 2021-07-28 2021-07-28 Model watermarking method for image processing model copyright protection

Country Status (1)

Country Link
CN (1) CN113554545A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114329365A (en) * 2022-03-07 2022-04-12 南京理工大学 Deep learning model protection method based on robust watermark
CN118172225A (en) * 2024-05-16 2024-06-11 蓝象智联(杭州)科技有限公司 Watermark embedding method, training method and verification method of logistic regression model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872267A (en) * 2019-02-19 2019-06-11 哈尔滨工业大学(深圳) A robust packet-based digital watermarking method
CN110727928A (en) * 2019-10-12 2020-01-24 湘潭大学 3D video copyright comprehensive protection method based on deep reinforcement learning optimization
CN111311472A (en) * 2020-01-15 2020-06-19 中国科学技术大学 Property right protection method for image processing model and image processing algorithm
CN112837202A (en) * 2021-01-26 2021-05-25 支付宝(杭州)信息技术有限公司 Watermark image generation and attack tracing method and device based on privacy protection
CN113158583A (en) * 2021-05-24 2021-07-23 南京信息工程大学 End-to-end text image watermark model establishing method based on deep learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YANG LIU ET AL: "A novel two-stage separable deep learning framework for practical blind watermarking", 《SESSION 3C: SMART APPLICATIONS》 *
Li Zheng: "A blind-watermark-based intellectual property protection framework for deep neural network models", 《China Masters' Theses Full-text Database (Electronic Journal), Information Science and Technology Series》 *
Yan Feng et al.: "A digital image watermarking algorithm based on DWT and PNN", 《Natural Science Journal of Xiangtan University》 *


Similar Documents

Publication Publication Date Title
Wan et al. A comprehensive survey on robust image watermarking
Baluja Hiding images in plain sight: Deep steganography
Zhang et al. Robust invisible video watermarking with attention
Cogranne et al. Steganography by minimizing statistical detectability: The cases of JPEG and color images
CN111311472B (en) Property right protection method for image processing model and image processing algorithm
Yang et al. Source camera identification based on content-adaptive fusion residual networks
Kundur et al. Digital watermarking for telltale tamper proofing and authentication
Marra et al. On the vulnerability of deep learning to adversarial attacks for camera model identification
Roy et al. A hybrid domain color image watermarking based on DWT–SVD
CN113554545A (en) Model watermarking method for image processing model copyright protection
CN110232650B (en) Color image watermark embedding method, detection method and system
CN115131188A (en) Robust image watermarking method based on generation countermeasure network
Gul et al. SVD based image manipulation detection
Shehzad et al. LSB image steganography based on blocks matrix determinant method
Zhu et al. Destroying robust steganography in online social networks
CN115841413A (en) Image processing method and device
Ahn et al. Local-source enhanced residual network for steganalysis of digital images
Shih et al. A Comparison Study on Copy-Cover Image Forgery Detection
Al-Gindy et al. A novel blind Image watermarking technique for colour RGB images in the DCT domain using green channel
Chang et al. The application of a full counterpropagation neural network to image watermarking
CN114119330B (en) Robust digital watermark embedding and extracting method based on neural network
JP4167372B2 (en) Digital watermark embedding method, extraction method, invisibility method, visualization method, and embedding device
CN115330583A (en) Watermark model training method and device based on CMYK image
US20080307227A1 (en) Digital-Invisible-Ink Data Hiding Schemes
Kadian et al. A Highly Secure and Robust Copyright Protection Method for Grayscale Images using DWT-SVD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination