CN112215276A - Training method and device for adversarial network, electronic device and storage medium - Google Patents

Training method and device for adversarial network, electronic device and storage medium

Info

Publication number
CN112215276A
CN112215276A (application CN202011070284.3A)
Authority
CN
China
Prior art keywords
picture
pixels
fused
pictures
column
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011070284.3A
Other languages
Chinese (zh)
Inventor
李国安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202011070284.3A priority Critical patent/CN112215276A/en
Publication of CN112215276A publication Critical patent/CN112215276A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The application relates to model construction and discloses a training method and apparatus for an adversarial network, an electronic device, and a storage medium. The method includes: acquiring a training set, where the training set includes N fused pictures, each of the N fused pictures being determined from one of N first pictures and one of N second pictures, each of the N first pictures being a picture that contains only a watermark, and each of the N second pictures being a picture that contains no watermark; inputting the N fused pictures into a generator to be trained to obtain N generated pictures, none of which contains a watermark; and alternately updating the generator to be trained and the discriminator to be trained using a loss function, where the loss function is determined from the N first pictures, the N second pictures, and the N generated pictures. Implementing the embodiments of the application achieves accurate watermark removal.

Description

Training method and device for adversarial network, electronic device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a training method and apparatus for an adversarial network, an electronic device, and a storage medium.
Background
At present, a model is often used to remove watermarks from pictures in order to reduce the recognition error rate of watermarked pictures. For example, to reduce the number of watermarked advertisement pictures, watermarked bill screenshots, and the like, a model is generally used to remove the watermark so that users can better use or view those pictures. Removing watermarks with a model therefore involves training the model, and model training typically relies on a manually labeled training set.
Generally, each picture in the manually labeled training set is a picture whose watermark position has been marked. Labeling errors may occur during annotation, and a model trained on such a training set may fail to remove watermarks accurately.
Disclosure of Invention
The embodiments of the present application provide a training method and apparatus for an adversarial network, an electronic device, and a storage medium.
A first aspect of the present application provides a training method for an adversarial network, the adversarial network including a generator to be trained and a discriminator to be trained, the method including:
acquiring a training set, wherein the training set comprises N fused pictures, each fused picture in the N fused pictures is determined according to each first picture in the N first pictures and each second picture in the N second pictures, each first picture in the N first pictures is a picture only containing a watermark, each second picture in the N second pictures is a picture not containing the watermark, and N is an integer greater than 0;
inputting the N fused pictures into the generator to be trained to obtain N generated pictures, wherein each generated picture in the N generated pictures does not contain a watermark;
and alternately updating the generator to be trained and the discriminator to be trained by adopting a loss function, wherein the loss function is determined according to the N first pictures, the N second pictures and the N generated pictures.
A second aspect of the present application provides a training apparatus for an adversarial network, the adversarial network including a generator to be trained and a discriminator to be trained, the apparatus including an acquisition module, an input module, and an update module, wherein,
the acquisition module is configured to acquire a training set, where the training set includes N fused pictures, each of the N fused pictures is determined according to each of N first pictures and each of N second pictures, each of the N first pictures is a picture that only includes a watermark, each of the N second pictures is a picture that does not include a watermark, and N is an integer greater than 0;
the input module is used for inputting the N fused pictures into the generator to be trained to obtain N generated pictures, and each generated picture in the N generated pictures does not contain a watermark;
the updating module is configured to alternately update the generator to be trained and the discriminator to be trained by using a loss function, where the loss function is determined according to the N first pictures, the N second pictures, and the N generated pictures.
A third aspect of the present application provides an electronic device for training an adversarial network, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in any one of the above training methods for an adversarial network.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements any one of the above training methods for an adversarial network.
It can be seen that, in the above technical solution, a fused picture is generated from a picture containing only a watermark and a picture containing no watermark, and the fused pictures are used as the training set, which avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions. Meanwhile, using the fused pictures as the input of the generator to be trained enables the generator to be trained without mislabeled data, so the trained discriminator is not left unable to distinguish the pictures produced by the trained generator, and inaccurate watermark removal is avoided. Finally, the generator to be trained and the discriminator to be trained are alternately updated with a loss function determined from the pictures containing only a watermark, the pictures containing no watermark, and the pictures output by the generator to be trained, which trains the adversarial network well and prepares it to remove watermarks accurately afterwards.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Wherein:
FIG. 1 is a schematic diagram of a training system for an adversarial network according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a training method for an adversarial network according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a first picture, a second picture, and a fused picture according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another training method for an adversarial network according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a first pixel matrix according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a training apparatus for an adversarial network according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of a training system for an adversarial network according to an embodiment of the present application. The training system 100 includes a training device 110 of the adversarial network, which is used for processing and storing the training set. The training system 100 may be a single integrated device or multiple devices; for convenience of description, it is referred to herein as an electronic device. The electronic device may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with wireless communication capability, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
In addition, removing watermarks with a model may involve training the model, and model training typically relies on a manually labeled training set.
Generally, each picture in the manually labeled training set is a picture whose watermark position has been marked. Labeling errors may occur during annotation, and a model trained on such a training set may fail to remove watermarks accurately.
Based on this, the present application provides a training method for an adversarial network to solve the above problem, which is described in detail below.
Referring to fig. 2, fig. 2 is a schematic flowchart of a training method for an adversarial network according to an embodiment of the present application. The training method can be applied to the electronic device, and the adversarial network includes a generator to be trained and a discriminator to be trained. As shown in fig. 2, the method includes the following steps:
201. Obtain a training set, where the training set includes N fused pictures, each of the N fused pictures being determined according to one of N first pictures and one of N second pictures, each of the N first pictures being a picture that contains only a watermark, and each of the N second pictures being a picture that contains no watermark.
Here, N is an integer greater than 0.
For example, referring to fig. 3, fig. 3 is a schematic diagram of a first picture, a second picture and a fused picture provided in the embodiment of the present application. Specifically, as shown in fig. 3, it can be seen that the first picture is a picture only including a watermark, and the watermark is "123". The second picture is a picture that does not contain a watermark, and in conjunction with fig. 3, the second picture contains an irregular shape. The fused picture is determined from the first picture and the second picture, and it can be seen that the fused picture includes both the watermark and an irregular shape.
202. Input the N fused pictures into the generator to be trained to obtain N generated pictures, where none of the N generated pictures contains a watermark.
The N generated pictures are the pictures output by the generator to be trained. It can be understood that the N generated pictures and the N second pictures may be completely the same, partially the same, or completely different, which is not limited here. In this application, partially the same means that at least one of the N generated pictures is identical to at least one of the N second pictures.
The generator to be trained adopts a U-net structure and may include convolutional layers, max-pooling layers, deconvolution (transposed convolution) layers, and the Mish activation function.
The Mish function is: Mish(x) = x · tanh(softplus(x))
where softplus(x) = log(1 + e^x), and x is the output of the convolutional layer after the N fused pictures are input.
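As an illustration, the following is a minimal NumPy sketch of the Mish activation defined above; the function names and the toy input are illustrative only, not part of this application:

```python
import numpy as np

def softplus(x):
    # softplus(x) = log(1 + e^x)
    return np.log1p(np.exp(x))

def mish(x):
    # Mish(x) = x * tanh(softplus(x)), applied elementwise
    return x * np.tanh(softplus(x))

# Toy feature map standing in for a convolutional-layer output
features = np.array([[-2.0, -0.5], [0.5, 2.0]])
print(mish(features))
```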
203. Alternately update the generator to be trained and the discriminator to be trained using a loss function, where the loss function is determined according to the N first pictures, the N second pictures, and the N generated pictures.
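For step 203, a hedged PyTorch-style sketch of one alternating update is given below. The `loss_fn` object with `discriminator_loss` and `generator_loss` methods is a placeholder, since the exact composition of the loss function is only detailed later in this description:

```python
import torch

def train_step(generator, discriminator, opt_g, opt_d,
               fused, first, second, loss_fn):
    # 1) Update the discriminator with the generator held fixed.
    generated = generator(fused).detach()
    opt_d.zero_grad()
    loss_d = loss_fn.discriminator_loss(discriminator, second, generated)
    loss_d.backward()
    opt_d.step()

    # 2) Update the generator with the discriminator held fixed.
    opt_g.zero_grad()
    generated = generator(fused)
    loss_g = loss_fn.generator_loss(discriminator, first, second, generated)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```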
It can be seen that, in the above technical solution, a fused picture is generated from a picture containing only a watermark and a picture containing no watermark, and the fused pictures are used as the training set, which avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions. Meanwhile, using the fused pictures as the input of the generator to be trained enables the generator to be trained without mislabeled data, so the trained discriminator is not left unable to distinguish the pictures produced by the trained generator, and inaccurate watermark removal is avoided. Finally, the generator to be trained and the discriminator to be trained are alternately updated with a loss function determined from the pictures containing only a watermark, the pictures containing no watermark, and the pictures output by the generator to be trained, which trains the adversarial network well and prepares it to remove watermarks accurately afterwards.
In a possible implementation, the fused picture A is any one of the N fused pictures, and the acquiring of the training set includes:
acquiring a first picture corresponding to the fusion picture A and a second picture corresponding to the fusion picture A, wherein the first picture corresponding to the fusion picture A is one of the N first pictures, and the second picture corresponding to the fusion picture A is one of the N second pictures;
processing a first picture corresponding to the fused picture A and a second picture corresponding to the fused picture A by adopting a first formula to obtain the fused picture A;
determining the fusion picture A as a picture in the training set;
the first formula is determined according to the gray value of the pixel in the first picture corresponding to the fused picture A and the gray value of the pixel in the second picture corresponding to the fused picture A.
Further, the first formula is:
T = α·F + (1 − α)·S
where T is the gray value of a pixel in the fused picture A, α is the transparency coefficient with α ∈ [0, 1], F is the gray value of the corresponding pixel in the first picture corresponding to the fused picture A, and S is the gray value of the corresponding pixel in the second picture corresponding to the fused picture A.
It should be noted that any pixel B in the fused picture A is determined from a pixel C and a pixel D, where the pixel C is a pixel in the first picture corresponding to the fused picture A, and the pixel D is the pixel corresponding to the pixel C in the second picture corresponding to the fused picture A. Further, the gray value b of the pixel B satisfies b = α·c + (1 − α)·d, where c is the gray value of the pixel C and d is the gray value of the pixel D.
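The per-pixel fusion above amounts to standard alpha blending of gray values. A minimal NumPy sketch follows; the choice α = 0.3 in the example is an arbitrary illustration, as the application only requires α ∈ [0, 1]:

```python
import numpy as np

def fuse(first_gray, second_gray, alpha):
    # Implements T = alpha*F + (1 - alpha)*S per pixel, where F is the
    # watermark-only picture and S is the watermark-free picture.
    assert first_gray.shape == second_gray.shape
    assert 0.0 <= alpha <= 1.0
    return alpha * first_gray + (1.0 - alpha) * second_gray

# Example: 2x2 gray-value matrices
F = np.array([[255, 0], [0, 255]], dtype=np.float64)
S = np.array([[10, 20], [30, 40]], dtype=np.float64)
print(fuse(F, S, alpha=0.3))
```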
Therefore, in this technical scheme, the determination of the fused picture is realized, which avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions. It also prepares the fused pictures to be used subsequently as the input of the generator to be trained, so that the trained discriminator is not left unable to distinguish the pictures produced by the trained generator because of mislabeling.
Referring to fig. 4, fig. 4 is a schematic flowchart of another training method for an adversarial network according to an embodiment of the present application. The training method can be applied to an electronic device, and the adversarial network includes a generator to be trained and a discriminator to be trained. As shown in fig. 4, the method includes the following steps:
401. and acquiring a first picture corresponding to the fusion picture A and a second picture corresponding to the fusion picture A, wherein the first picture corresponding to the fusion picture A is one of the N first pictures, and the second picture corresponding to the fusion picture A is one of the N second pictures.
And the fusion picture A is any one picture in the N fusion pictures.
The first picture corresponding to the fusion picture A is a picture only containing a watermark, and the second picture corresponding to the fusion picture A is a picture not containing the watermark.
402. Determine a first pixel matrix according to the first picture corresponding to the fused picture A, where the first pixel matrix includes the gray value of each pixel in the first picture corresponding to the fused picture A.
403. Detect whether the first pixel matrix contains pixels whose gray value is a preset gray value.
The preset gray value can be set by an administrator or configured in a configuration file of the electronic device. Further, the preset gray value may be 0 or 255, which is not limited here.
If yes, go to step 404; if not, go to step 405.
404. Determine the number of pixels whose gray value is the preset gray value in each column of pixels of the first pixel matrix, and process, in a first order, the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A using the first formula to obtain the fused picture A, where the first order is determined according to the number of pixels whose gray value is the preset gray value in each column of pixels of the first pixel matrix.
The first pixel matrix includes M1 columns of pixels, the M1 columns of pixels correspond to M1 values, and each of the M1 values is the number of pixels whose gray value is the preset gray value in the corresponding column of the M1 columns. Further, the first order is the ascending order of the M1 values. M1 is an integer greater than 0.
For example, referring to fig. 5, fig. 5 is a schematic diagram of a first pixel matrix provided in the present application. Specifically, as shown in fig. 5, the first pixel matrix is a 3 × 3 matrix. Suppose the preset gray value is 0. From left to right, the gray values of the pixels in the first column are 9, 88 and 19; the gray values of the pixels in the second column are 5, 0 and 21; the gray values of the pixels in the third column are 0, 3 and 0. Accordingly, the first column has no pixel with a gray value of 0, the second column has 1 pixel with a gray value of 0, and the third column has 2 pixels with a gray value of 0.
405. Determine a second pixel matrix according to the second picture corresponding to the fused picture A, where the second pixel matrix includes the gray value of each pixel in the second picture corresponding to the fused picture A. If the second pixel matrix contains pixels whose gray value is the preset gray value, determine the number of pixels whose gray value is the preset gray value in each column of pixels of the second pixel matrix, and process, in a second order, the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A using the first formula to obtain the fused picture A, where the second order is determined according to the number of pixels whose gray value is the preset gray value in each column of pixels of the second pixel matrix.
The second pixel matrix includes M2 columns of pixels, the M2 columns of pixels correspond to M2 values, and each of the M2 values is the number of pixels whose gray value is the preset gray value in the corresponding column of the M2 columns. Further, the second order is the descending order of the M2 values. M2 is an integer greater than 0, and M1 is equal to M2.
406. Determine the fused picture A as a picture in the training set.
407. Input the N fused pictures into the generator to be trained to obtain N generated pictures, where none of the N generated pictures contains a watermark.
408. Alternately update the generator to be trained and the discriminator to be trained using a loss function, where the loss function is determined according to the N first pictures, the N second pictures, and the N generated pictures.
It can be seen that, in the above technical solution, a fused picture is generated from a picture containing only a watermark and a picture containing no watermark, and the fused pictures are used as the training set, which avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions. Meanwhile, using the fused pictures as the input of the generator to be trained enables the generator to be trained without mislabeled data, so the trained discriminator is not left unable to distinguish the pictures produced by the trained generator, and inaccurate watermark removal is avoided. Finally, the generator to be trained and the discriminator to be trained are alternately updated with a loss function determined from the pictures containing only a watermark, the pictures containing no watermark, and the pictures output by the generator to be trained, which trains the adversarial network well and prepares it to remove watermarks accurately afterwards.
In a possible implementation, processing the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A using the first formula in the first order to obtain the fused picture A includes the following (a code sketch follows this list):
determining at least one column of pixels P1 in the first pixel matrix according to the number of pixels whose gray value is the preset gray value in each column of pixels of the first pixel matrix, where the first column of pixels is any column of the at least one column of pixels P1, and the number of pixels whose gray value is the preset gray value in the first column of pixels is less than a threshold;
processing the at least one column of pixels P1 and at least one column of pixels Q1 using the first formula, in ascending order of the number of pixels whose gray value is the preset gray value in each column of the at least one column of pixels P1, to obtain at least one column of fused pixels K1, where the at least one column of pixels Q1 is the pixels corresponding to the at least one column of pixels P1 in the second pixel matrix;
after detecting that the last column of the at least one column of pixels P1 has been processed, processing in parallel, using the first formula, the pixels in the first pixel matrix other than the at least one column of pixels P1 and the pixels in the second pixel matrix other than the at least one column of pixels Q1, to obtain at least one column of fused pixels K2;
determining the fused picture A according to the at least one column of fused pixels K1 and the at least one column of fused pixels K2.
The threshold value may be set by an administrator or may be configured in a configuration file of the electronic device.
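A hedged NumPy sketch of this serial-then-parallel procedure is given below. The preset gray value 0 and the threshold 2 in the defaults are illustrative assumptions, since both are configurable per the description:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fuse_by_column_order(first_mat, second_mat, alpha,
                         preset_gray=0, threshold=2):
    # Count, per column, the pixels whose gray value equals the preset value.
    counts = (first_mat == preset_gray).sum(axis=0)
    fused = np.empty_like(first_mat, dtype=np.float64)

    def fuse_col(j):
        # The first formula, applied to one column.
        fused[:, j] = alpha * first_mat[:, j] + (1 - alpha) * second_mat[:, j]

    # Columns below the threshold (the "P1" columns) are fused serially,
    # in ascending order of their counts (the "first order").
    p1 = [j for j in np.argsort(counts, kind="stable") if counts[j] < threshold]
    for j in p1:
        fuse_col(j)

    # The remaining columns are fused in parallel once P1 is done.
    rest = [j for j in range(first_mat.shape[1]) if counts[j] >= threshold]
    with ThreadPoolExecutor() as pool:
        list(pool.map(fuse_col, rest))
    return fused
```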
It can be seen that, in the above technical scheme, the efficiency of determining the fused picture is improved through the combination of serial and parallel processing. Determining the fused picture in this way also avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions, and it prepares the fused pictures to be used subsequently as the input of the generator to be trained, so that the trained discriminator is not left unable to distinguish the pictures produced by the trained generator because of mislabeling.
In a possible implementation, processing the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A using the first formula in the second order to obtain the fused picture A includes:
determining the differences between the gray values of the first pixel matrix and those of the second pixel matrix in accordance with the second order;
determining at least one column of pixels P2 in the first pixel matrix according to the differences between the gray values of the first pixel matrix and those of the second pixel matrix, where the second column of pixels is any column of the at least one column of pixels P2, the second column of pixels corresponds to a third column of pixels in the second pixel matrix, and the sum of the differences between the gray values of the second column of pixels and those of the third column of pixels is greater than a preset difference;
processing in parallel, using the first formula, the at least one column of pixels P2 and at least one column of pixels Q2 to obtain at least one column of fused pixels K2, where the at least one column of pixels Q2 is the pixels corresponding to the at least one column of pixels P2 in the second pixel matrix;
after detecting that the last column of the at least one column of pixels P2 has been processed, determining the number of pixels whose gray value is the preset gray value in each column of at least one column of pixels Q3, where the at least one column of pixels Q3 is the pixels in the second pixel matrix other than the at least one column of pixels Q2;
processing, using the first formula, the pixels in the first pixel matrix other than the at least one column of pixels P2 and the at least one column of pixels Q3, in descending order of the number of pixels whose gray value is the preset gray value in each column of the at least one column of pixels Q3, to obtain at least one column of fused pixels K3;
determining the fused picture A according to the at least one column of fused pixels K2 and the at least one column of fused pixels K3.
The preset difference value may be set by an administrator or may be configured in a configuration file of the electronic device.
The pixel E is any pixel in the second column of pixels, and the pixel F is the pixel corresponding to the pixel E in the third column of pixels. The electronic device can determine the difference between the gray value of the pixel E and the gray value of the pixel F.
It can be seen that, in the above technical scheme, the efficiency of determining the fused picture is improved through the combination of serial and parallel processing. Determining the fused picture in this way also avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions, and it prepares the fused pictures to be used subsequently as the input of the generator to be trained, so that the trained discriminator is not left unable to distinguish the pictures produced by the trained generator because of mislabeling.
In a possible implementation, the adversarial network further includes the discriminator to be trained, and before the generator to be trained and the discriminator to be trained are alternately updated using a loss function, the method further includes the following (a code sketch follows this list):
inputting the N second pictures into the discriminator to be trained to obtain the probability corresponding to each second picture in the N second pictures;
determining a first cross entropy function value according to the probability corresponding to each second picture in the N second pictures;
inputting the N fused pictures into the discriminator to be trained to obtain the probability corresponding to each fused picture in the N fused pictures;
determining a second cross entropy function value of the N second pictures and the N generated pictures according to the probability corresponding to each fused picture in the N fused pictures;
determining a third cross entropy function value according to the N first pictures, the N second pictures and the N generated pictures;
and determining the loss function according to the first cross entropy function value, the second cross entropy function value and the third cross entropy function value.
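A hedged sketch of how such a three-term loss could be assembled is shown below. The unweighted sum, the use of label 0 for the fused pictures, and the form of the third term are assumptions; the description does not spell them out:

```python
import torch
import torch.nn.functional as F

def adversarial_loss(d_real_prob, d_fused_prob, first, second, generated):
    # Term 1: cross entropy of the discriminator's output on the real
    # (second) pictures, against the "real" label 1.
    ce1 = F.binary_cross_entropy(d_real_prob, torch.ones_like(d_real_prob))

    # Term 2: cross entropy of the discriminator's output on the fused
    # pictures, against the "fake" label 0 (an assumption).
    ce2 = F.binary_cross_entropy(d_fused_prob, torch.zeros_like(d_fused_prob))

    # Term 3: a reconstruction-style term over the generated and
    # watermark-free pictures (pixelwise cross entropy is assumed here;
    # the exact role of the first, watermark-only pictures is not
    # detailed in the text).
    ce3 = F.binary_cross_entropy(generated.clamp(0, 1), second.clamp(0, 1))

    # An unweighted sum of the three terms is assumed.
    return ce1 + ce2 + ce3
```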
Therefore, the technical scheme realizes the determination of the loss function and prepares for the subsequent alternate updating of the generator to be trained and the discriminator to be trained.
Further, after the generator to be trained and the discriminator to be trained are alternately updated using the loss function, the method further includes: acquiring a third picture, where the third picture is a picture that contains a watermark and for which a watermark-free original exists; inputting the third picture into the updated generator to obtain a generated picture corresponding to the third picture; and inputting the generated picture corresponding to the third picture into the updated discriminator to determine whether the generated picture corresponding to the third picture is a watermark-removed picture.
It can be understood that if the label output by the updated discriminator is 1, the generated picture corresponding to the third picture is a watermark-removed picture; if the label output by the updated discriminator is 0, the generated picture corresponding to the third picture is not a watermark-removed picture.
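A minimal sketch of this post-training check, assuming the discriminator outputs a single probability and using a 0.5 decision threshold (the threshold is an assumption):

```python
import torch

@torch.no_grad()
def check_dewatermark(generator, discriminator, third_picture):
    # Run a watermarked picture through the updated generator, then ask
    # the updated discriminator whether the result looks watermark-free.
    candidate = generator(third_picture)
    prob = discriminator(candidate)        # assumed scalar probability
    label = int(prob.item() > 0.5)
    return candidate, label  # label 1: watermark removed; 0: not removed
```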
Therefore, in this technical scheme, the training state of the adversarial network is checked after the generator to be trained and the discriminator to be trained are updated, which prepares for better detecting the convergence of the adversarial network subsequently.
Referring to fig. 6, fig. 6 is a schematic diagram of a training apparatus for an adversarial network according to an embodiment of the present application. As shown in fig. 6, a training apparatus 600 for an adversarial network according to an embodiment of the present application includes an obtaining module 601, an input module 602, an updating module 603, a processing module 604, and a determining module 605, wherein:
the obtaining module 601 is configured to obtain a training set, where the training set includes N fused pictures, each of the N fused pictures is determined according to each of N first pictures and each of N second pictures, each of the N first pictures is a picture that only includes a watermark, and each of the N second pictures is a picture that does not include a watermark.
Here, N is an integer greater than 0.
The input module 602 is configured to input the N fused pictures into the generator to be trained to obtain N generated pictures, where each generated picture in the N generated pictures does not include a watermark.
The updating module 603 is configured to update the generator to be trained and the discriminator to be trained alternately by using a loss function, where the loss function is determined according to the N first pictures, the N second pictures, and the N generated pictures.
It can be seen that, in the above technical solution, a fused picture is generated from a picture containing only a watermark and a picture containing no watermark, and the fused pictures are used as the training set, which avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions. Meanwhile, using the fused pictures as the input of the generator to be trained enables the generator to be trained without mislabeled data, so the trained discriminator is not left unable to distinguish the pictures produced by the trained generator, and inaccurate watermark removal is avoided. Finally, the generator to be trained and the discriminator to be trained are alternately updated with a loss function determined from the pictures containing only a watermark, the pictures containing no watermark, and the pictures output by the generator to be trained, which trains the adversarial network well and prepares it to remove watermarks accurately afterwards.
In a possible implementation, the fused picture A is any one of the N fused pictures, and when the training set is obtained,
the obtaining module 601 is configured to obtain a first picture corresponding to the fused picture a and a second picture corresponding to the fused picture a, where the first picture corresponding to the fused picture a is one of the N first pictures, and the second picture corresponding to the fused picture a is one of the N second pictures;
the processing module 604 is configured to process a first picture corresponding to the fused picture a and a second picture corresponding to the fused picture a by using a first formula, so as to obtain the fused picture a;
the determining module 605 is configured to determine the fusion picture a as a picture in the training set;
the first formula is determined according to the gray value of the pixel in the first picture corresponding to the fused picture A and the gray value of the pixel in the second picture corresponding to the fused picture A.
Therefore, in this technical scheme, the determination of the fused picture is realized, which avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions. It also prepares the fused pictures to be used subsequently as the input of the generator to be trained, so that the trained discriminator is not left unable to distinguish the pictures produced by the trained generator because of mislabeling.
In a possible implementation, when a first picture corresponding to the fused picture A and a second picture corresponding to the fused picture A are processed using a first formula to obtain the fused picture A, the processing module 604 is configured to determine a first pixel matrix according to the first picture corresponding to the fused picture A, where the first pixel matrix includes the gray value of each pixel in the first picture corresponding to the fused picture A; detect whether the first pixel matrix contains pixels whose gray value is a preset gray value; if so, determine the number of pixels whose gray value is the preset gray value in each column of pixels of the first pixel matrix, and process, in a first order, the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A using the first formula to obtain the fused picture A, where the first order is determined according to the number of pixels whose gray value is the preset gray value in each column of pixels of the first pixel matrix; if not, determine a second pixel matrix according to the second picture corresponding to the fused picture A, where the second pixel matrix includes the gray value of each pixel in the second picture corresponding to the fused picture A; and if the second pixel matrix contains pixels whose gray value is the preset gray value, determine the number of pixels whose gray value is the preset gray value in each column of pixels of the second pixel matrix, and process, in a second order, the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A using the first formula to obtain the fused picture A, where the second order is determined according to the number of pixels whose gray value is the preset gray value in each column of pixels of the second pixel matrix.
Therefore, in this technical scheme, the determination of the fused picture is realized, which avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions. It also prepares the fused pictures to be used subsequently as the input of the generator to be trained, so that the trained discriminator is not left unable to distinguish the pictures produced by the trained generator because of mislabeling.
In a possible implementation, when the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A are processed using the first formula in the first order to obtain the fused picture A, the processing module 604 is configured to determine at least one column of pixels P1 in the first pixel matrix according to the number of pixels whose gray value is the preset gray value in each column of pixels of the first pixel matrix, where the first column of pixels is any column of the at least one column of pixels P1, and the number of pixels whose gray value is the preset gray value in the first column of pixels is less than a threshold; process the at least one column of pixels P1 and at least one column of pixels Q1 using the first formula, in ascending order of the number of pixels whose gray value is the preset gray value in each column of the at least one column of pixels P1, to obtain at least one column of fused pixels K1, where the at least one column of pixels Q1 is the pixels corresponding to the at least one column of pixels P1 in the second pixel matrix; after detecting that the last column of the at least one column of pixels P1 has been processed, process in parallel, using the first formula, the pixels in the first pixel matrix other than the at least one column of pixels P1 and the pixels in the second pixel matrix other than the at least one column of pixels Q1, to obtain at least one column of fused pixels K2; and determine the fused picture A according to the at least one column of fused pixels K1 and the at least one column of fused pixels K2.
It can be seen that, in the above technical scheme, the efficiency of determining the fused picture is improved through the combination of serial and parallel processing. Determining the fused picture in this way also avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions, and it prepares the fused pictures to be used subsequently as the input of the generator to be trained, so that the trained discriminator is not left unable to distinguish the pictures produced by the trained generator because of mislabeling.
In a possible implementation, when the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A are processed using the first formula in the second order to obtain the fused picture A, the processing module 604 is configured to determine the differences between the gray values of the first pixel matrix and those of the second pixel matrix in accordance with the second order; determine at least one column of pixels P2 in the first pixel matrix according to the differences between the gray values of the first pixel matrix and those of the second pixel matrix, where the second column of pixels is any column of the at least one column of pixels P2, the second column of pixels corresponds to a third column of pixels in the second pixel matrix, and the sum of the differences between the gray values of the second column of pixels and those of the third column of pixels is greater than a preset difference; process in parallel, using the first formula, the at least one column of pixels P2 and at least one column of pixels Q2 to obtain at least one column of fused pixels K2, where the at least one column of pixels Q2 is the pixels corresponding to the at least one column of pixels P2 in the second pixel matrix; after detecting that the last column of the at least one column of pixels P2 has been processed, determine the number of pixels whose gray value is the preset gray value in each column of at least one column of pixels Q3, where the at least one column of pixels Q3 is the pixels in the second pixel matrix other than the at least one column of pixels Q2; process, using the first formula, the pixels in the first pixel matrix other than the at least one column of pixels P2 and the at least one column of pixels Q3, in descending order of the number of pixels whose gray value is the preset gray value in each column of the at least one column of pixels Q3, to obtain at least one column of fused pixels K3; and determine the fused picture A according to the at least one column of fused pixels K2 and the at least one column of fused pixels K3.
It can be seen that, in the above technical scheme, the efficiency of determining the fused picture is improved through the combination of serial and parallel processing. Determining the fused picture in this way also avoids the labeling errors that arise when watermark positions are labeled manually in existing solutions, and it prepares the fused pictures to be used subsequently as the input of the generator to be trained, so that the trained discriminator is not left unable to distinguish the pictures produced by the trained generator because of mislabeling.
In a possible embodiment, before the generator to be trained and the discriminator to be trained are alternately updated with a loss function,
the input module 602 is further configured to input the N second pictures into the to-be-trained discriminator to obtain a probability corresponding to each of the N second pictures;
the determining module 605 is further configured to determine a first cross entropy function value according to a probability corresponding to each of the N second pictures;
the input module 602 is further configured to input the N fused pictures into the to-be-trained discriminator to obtain a probability corresponding to each of the N fused pictures;
the determining module 605 is further configured to determine, according to a probability corresponding to each of the N fused pictures, a second cross entropy function value of the N second pictures and the N generated pictures; determining a third cross entropy function value according to the N first pictures, the N second pictures and the N generated pictures; and determining the loss function according to the first cross entropy function value, the second cross entropy function value and the third cross entropy function value.
Therefore, the technical scheme realizes the determination of the loss function and prepares for the subsequent alternate updating of the generator to be trained and the discriminator to be trained.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present application.
An embodiment of the application provides an electronic device for training an adversarial network, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in any of the above training methods for an adversarial network. As shown in fig. 7, an electronic device of a hardware operating environment according to an embodiment of the present application may include:
a processor 701, such as a CPU.
The memory 702, which may optionally be a high-speed RAM memory, or a non-volatile memory such as disk storage.
A communication interface 703 for implementing connection communication between the processor 701 and the memory 702.
Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 7 is not intended to be limiting and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 7, the memory 702 may include an operating system, a network communication module, and one or more programs. An operating system is a program that manages and controls the server hardware and software resources, supporting the execution of one or more programs. The network communication module is used to implement communication between the components within the memory 702 and with other hardware and software within the electronic device.
In the electronic device shown in fig. 7, the processor 701 is configured to execute one or more programs in the memory 702, and implement the following steps:
acquiring a training set, wherein the training set comprises N fused pictures, each fused picture in the N fused pictures is determined according to each first picture in the N first pictures and each second picture in the N second pictures, each first picture in the N first pictures is a picture only containing a watermark, each second picture in the N second pictures is a picture not containing the watermark, and N is an integer greater than 0;
inputting the N fused pictures into the generator to be trained to obtain N generated pictures, wherein each generated picture in the N generated pictures does not contain a watermark;
and alternately updating the generator to be trained and the discriminator to be trained by adopting a loss function, wherein the loss function is determined according to the N first pictures, the N second pictures and the N generated pictures.
For the specific implementation of the electronic device related to the present application, reference may be made to the above embodiments of the training method for an adversarial network, which are not described here again.
The present application further provides a computer-readable storage medium storing a computer program, the stored computer program being executed by a processor to implement the following steps:
acquiring a training set, wherein the training set comprises N fused pictures, each fused picture in the N fused pictures is determined according to each first picture in the N first pictures and each second picture in the N second pictures, each first picture in the N first pictures is a picture only containing a watermark, each second picture in the N second pictures is a picture not containing the watermark, and N is an integer greater than 0;
inputting the N fused pictures into the generator to be trained to obtain N generated pictures, wherein each generated picture in the N generated pictures does not contain a watermark;
and alternately updating the generator to be trained and the discriminator to be trained by adopting a loss function, wherein the loss function is determined according to the N first pictures, the N second pictures and the N generated pictures.
For the specific implementation of the computer-readable storage medium related to the present application, reference may be made to the above embodiments of the training method for an adversarial network, which are not described here again.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that the acts and modules involved are not necessarily required for this application.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A training method of an adversarial network, the adversarial network including a generator to be trained and a discriminator to be trained, the method comprising:
acquiring a training set, wherein the training set comprises N fused pictures, each fused picture in the N fused pictures is determined according to each first picture in the N first pictures and each second picture in the N second pictures, each first picture in the N first pictures is a picture only containing a watermark, each second picture in the N second pictures is a picture not containing the watermark, and N is an integer greater than 0;
inputting the N fused pictures into the generator to be trained to obtain N generated pictures, wherein each generated picture in the N generated pictures does not contain a watermark;
and alternately updating the generator to be trained and the discriminator to be trained by adopting a loss function, wherein the loss function is determined according to the N first pictures, the N second pictures and the N generated pictures.
2. The method according to claim 1, wherein the fused picture A is any one of the N fused pictures, and the acquiring the training set comprises:
acquiring a first picture corresponding to the fusion picture A and a second picture corresponding to the fusion picture A, wherein the first picture corresponding to the fusion picture A is one of the N first pictures, and the second picture corresponding to the fusion picture A is one of the N second pictures;
processing a first picture corresponding to the fused picture A and a second picture corresponding to the fused picture A by using a first formula to obtain the fused picture A;
determining the fusion picture A as a picture in the training set;
the first formula is determined according to the gray value of the pixel in the first picture corresponding to the fused picture A and the gray value of the pixel in the second picture corresponding to the fused picture A.
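Since the claims do not spell out the first formula beyond its dependence on the gray values of both pictures, the following Python sketch uses a simple per-pixel alpha blend as a stand-in; the function name fuse_pixels and the weight alpha are illustrative assumptions, not the disclosed formula.

import numpy as np

def fuse_pixels(first, second, alpha=0.3):
    # first: gray values of the watermark-only picture (H x W, uint8 array).
    # second: gray values of the watermark-free picture (H x W, uint8 array).
    # Per-pixel weighted blend: one plausible form of the "first formula".
    fused = alpha * first.astype(np.float64) + (1.0 - alpha) * second.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)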
3. The method according to claim 2, wherein the processing the first picture corresponding to the fused picture a and the second picture corresponding to the fused picture a by using the first formula to obtain the fused picture a comprises:
determining a first pixel matrix according to a first picture corresponding to the fused picture A, wherein the first pixel matrix comprises a gray value corresponding to each pixel in the first picture corresponding to the fused picture A;
detecting whether the first pixel matrix contains pixels whose gray value is a preset gray value;
if so, determining the number of pixels in each column of the first pixel matrix whose gray value is the preset gray value, and processing the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A by using the first formula according to a first order to obtain the fused picture A, wherein the first order is determined from the number of pixels in each column of the first pixel matrix whose gray value is the preset gray value;
if not, determining a second pixel matrix from the second picture corresponding to the fused picture A, wherein the second pixel matrix comprises the gray value of each pixel in the second picture corresponding to the fused picture A; and, if the second pixel matrix contains pixels whose gray value is the preset gray value, determining the number of pixels in each column of the second pixel matrix whose gray value is the preset gray value, and processing the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A by using the first formula according to a second order to obtain the fused picture A, wherein the second order is determined from the number of pixels in each column of the second pixel matrix whose gray value is the preset gray value.
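The detection and per-column counting recited in this claim can be expressed compactly. A minimal sketch, assuming the gray values are held in a NumPy matrix and assuming 255 (a common white watermark background) as the preset gray value:

from typing import Optional
import numpy as np

PRESET_GRAY = 255  # assumed preset gray value

def preset_counts_per_column(matrix: np.ndarray) -> Optional[np.ndarray]:
    # Detect whether any pixel equals the preset gray value; if none does,
    # return None so the caller can fall back to the second pixel matrix,
    # mirroring the "if not" branch of the claim.
    mask = matrix == PRESET_GRAY
    if not mask.any():
        return None
    return mask.sum(axis=0)  # one count per column; these counts drive the first order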
4. The method according to claim 3, wherein the processing a first picture corresponding to the fused picture A and a second picture corresponding to the fused picture A by using the first formula according to the first order to obtain the fused picture A comprises:
determining at least one column of pixels P1 in the first pixel matrix according to the number of pixels in each column of the first pixel matrix whose gray value is the preset gray value, wherein a first column of pixels is any column of the at least one column of pixels P1, and the number of pixels in the first column of pixels whose gray value is the preset gray value is less than a threshold;
processing the at least one column of pixels P1 and at least one column of pixels Q1 by using the first formula, in ascending order of the number of pixels in each column of the at least one column of pixels P1 whose gray value is the preset gray value, to obtain at least one column of fused pixels K1, wherein the at least one column of pixels Q1 comprises the pixels in the second pixel matrix corresponding to the at least one column of pixels P1;
after detecting that the last column of pixels in the at least one column of pixels P1 has been processed, processing the pixels in the first pixel matrix other than the at least one column of pixels P1 and the pixels in the second pixel matrix other than the at least one column of pixels Q1 in parallel by using the first formula to obtain at least one column of fused pixels K2;
determining the fused picture A according to the at least one column of fused pixels K1 and the at least one column of fused pixels K2.
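Under this reading, the first-order processing fuses the below-threshold columns P1 one at a time in ascending order of their preset-gray counts, then fuses all remaining columns in a single vectorized step, which is one way to realize "in parallel". The threshold value and the alpha blend standing in for the first formula are assumptions:

import numpy as np

def fuse_first_order(first_mat, second_mat, counts, threshold=10, alpha=0.3):
    # counts: per-column preset-gray counts of first_mat (see the claim 3 sketch).
    fused = np.empty_like(first_mat, dtype=np.float64)
    p1_cols = np.where(counts < threshold)[0]  # the at least one column of pixels P1
    # Fuse P1 columns one by one, in ascending order of their counts (yields K1).
    for col in p1_cols[np.argsort(counts[p1_cols])]:
        fused[:, col] = alpha * first_mat[:, col] + (1 - alpha) * second_mat[:, col]
    # Then fuse every remaining column in one vectorized step (yields K2).
    rest = np.setdiff1d(np.arange(first_mat.shape[1]), p1_cols)
    fused[:, rest] = alpha * first_mat[:, rest] + (1 - alpha) * second_mat[:, rest]
    return np.clip(fused, 0, 255).astype(np.uint8)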
5. The method according to claim 3, wherein the processing the first picture corresponding to the fused picture A and the second picture corresponding to the fused picture A by using the first formula according to the second order to obtain the fused picture A comprises:
determining differences between the gray values of the first pixel matrix and the second pixel matrix according to the second order;
determining at least one column of pixels P2 in the first pixel matrix according to the differences between the gray values of the first pixel matrix and the second pixel matrix, wherein a second column of pixels is any column of the at least one column of pixels P2, the second column of pixels corresponds to a third column of pixels in the second pixel matrix, and the sum of the differences between the gray values of the second column of pixels and the third column of pixels is greater than a preset difference;
processing the at least one column of pixels P2 and at least one column of pixels Q2 in parallel by using the first formula to obtain at least one column of fused pixels K2, wherein the at least one column of pixels Q2 comprises the pixels in the second pixel matrix corresponding to the at least one column of pixels P2;
after detecting that the last column of pixels in the at least one column of pixels P2 has been processed, determining the number of pixels in each column of at least one column of pixels Q3 whose gray value is the preset gray value, wherein the at least one column of pixels Q3 comprises the pixels in the second pixel matrix other than the at least one column of pixels Q2;
processing the pixels in the first pixel matrix other than the at least one column of pixels P2, together with the at least one column of pixels Q3, by using the first formula, in descending order of the number of pixels in each column of the at least one column of pixels Q3 whose gray value is the preset gray value, to obtain at least one column of fused pixels K3;
determining the fused picture A according to the at least one column of fused pixels K2 and the at least one column of fused pixels K3.
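The second-order processing reverses the emphasis: columns P2 with a large gray-value difference between the two matrices are fused in one vectorized step first, and the remaining columns are then fused one at a time in descending order of their preset-gray counts in the second pixel matrix. The diff_limit cutoff and the blend are again illustrative assumptions:

import numpy as np

def fuse_second_order(first_mat, second_mat, preset=255, diff_limit=1000, alpha=0.3):
    # Per-column sum of absolute gray-value differences between the matrices.
    diffs = np.abs(first_mat.astype(np.int64) - second_mat.astype(np.int64)).sum(axis=0)
    p2_cols = np.where(diffs > diff_limit)[0]  # the at least one column of pixels P2
    fused = np.empty_like(first_mat, dtype=np.float64)
    # Fuse P2 columns in one vectorized step (yields K2).
    fused[:, p2_cols] = alpha * first_mat[:, p2_cols] + (1 - alpha) * second_mat[:, p2_cols]
    # Remaining columns Q3: fuse one by one, in descending order of their
    # preset-gray counts in the second pixel matrix (yields K3).
    q3_cols = np.setdiff1d(np.arange(first_mat.shape[1]), p2_cols)
    q3_counts = (second_mat[:, q3_cols] == preset).sum(axis=0)
    for col in q3_cols[np.argsort(-q3_counts)]:
        fused[:, col] = alpha * first_mat[:, col] + (1 - alpha) * second_mat[:, col]
    return np.clip(fused, 0, 255).astype(np.uint8)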
6. The method according to any one of claims 1-5, wherein before alternately updating the generator to be trained and the discriminator to be trained with a loss function, the method further comprises:
inputting the N second pictures into the discriminator to be trained to obtain the probability corresponding to each second picture in the N second pictures;
determining a first cross entropy function value according to the probability corresponding to each second picture in the N second pictures;
inputting the N fused pictures into the discriminator to be trained to obtain the probability corresponding to each fused picture in the N fused pictures;
determining a second cross entropy function value for the N second pictures and the N generated pictures according to the probability corresponding to each fused picture in the N fused pictures;
determining a third cross entropy function value according to the N first pictures, the N second pictures and the N generated pictures;
and determining the loss function according to the first cross entropy function value, the second cross entropy function value and the third cross entropy function value.
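One way the three cross entropy function values of this claim might be assembled is sketched below. The claim does not disclose how the third value depends on the first pictures or how the three values are weighted, so the residual-based third term and the plain sum are assumptions:

import torch
import torch.nn.functional as F

def make_loss(p_second, p_fused, first, second, generated):
    # p_second: discriminator probabilities for the N second (clean) pictures.
    # p_fused: discriminator probabilities for the N fused pictures.
    # first/second/generated: the picture batches themselves, values in [0, 1].
    ce1 = F.binary_cross_entropy(p_second, torch.ones_like(p_second))  # first value
    ce2 = F.binary_cross_entropy(p_fused, torch.zeros_like(p_fused))  # second value
    # Third value: compare the generated-vs-clean residual against the
    # watermark-only picture (one plausible reading of the claim).
    residual = (generated - second).abs().clamp(0.0, 1.0)
    ce3 = F.binary_cross_entropy(residual, first.clamp(0.0, 1.0))
    return ce1 + ce2 + ce3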
7. A training apparatus for an adversarial network, characterized in that the adversarial network comprises a generator to be trained and a discriminator to be trained, and the apparatus comprises an acquisition module, an input module and an updating module, wherein:
the acquisition module is configured to acquire a training set, wherein the training set comprises N fused pictures, each of the N fused pictures is determined from one of N first pictures and one of N second pictures, each of the N first pictures is a picture containing only a watermark, each of the N second pictures is a picture containing no watermark, and N is an integer greater than 0;
the input module is configured to input the N fused pictures into the generator to be trained to obtain N generated pictures, wherein none of the N generated pictures contains a watermark;
the updating module is configured to alternately update the generator to be trained and the discriminator to be trained by using a loss function, where the loss function is determined according to the N first pictures, the N second pictures, and the N generated pictures.
8. The apparatus according to claim 7, wherein the fused picture A is any one of the N fused pictures, and the apparatus further comprises a processing module and a determining module; when acquiring the training set:
the acquisition module is configured to acquire a first picture corresponding to the fused picture a and a second picture corresponding to the fused picture a, where the first picture corresponding to the fused picture a is one of the N first pictures, and the second picture corresponding to the fused picture a is one of the N second pictures;
the processing module is configured to process a first picture corresponding to the fused picture A and a second picture corresponding to the fused picture A by using a first formula to obtain the fused picture A;
the determining module is configured to determine the fusion picture a as a picture in the training set;
the first formula is determined according to the gray value of the pixel in the first picture corresponding to the fused picture A and the gray value of the pixel in the second picture corresponding to the fused picture A.
9. An electronic device for training an adversarial network, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and comprise instructions which, when executed by the processor, cause the electronic device to perform the steps in the method of any one of claims 1-6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1-6.
CN202011070284.3A 2020-09-30 2020-09-30 Training method and device for confrontation network, electronic equipment and storage medium Pending CN112215276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011070284.3A CN112215276A (en) 2020-09-30 2020-09-30 Training method and device for confrontation network, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112215276A (en) 2021-01-12

Family

ID=74053487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011070284.3A Pending CN112215276A (en) 2020-09-30 2020-09-30 Training method and device for confrontation network, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112215276A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005094058A1 (en) * 2004-03-29 2005-10-06 Oki Electric Industry Co., Ltd. Printing medium quality adjusting system, examining watermark medium output device, watermark quality examining device, adjusted watermark medium output device, printing medium quality adjusting method, and examining watermark medium
US20120300975A1 (en) * 2011-05-24 2012-11-29 Tata Consultancy Services Limited System and method for detecting the watermark using decision fusion
CN108109124A (en) * 2017-12-27 2018-06-01 北京诸葛找房信息技术有限公司 Indefinite position picture watermark restorative procedure based on deep learning
WO2020143309A1 (en) * 2019-01-09 2020-07-16 平安科技(深圳)有限公司 Segmentation model training method, oct image segmentation method and apparatus, device and medium
CN110599387A (en) * 2019-08-08 2019-12-20 北京邮电大学 Method and device for automatically removing image watermark
CN111105336A (en) * 2019-12-04 2020-05-05 山东浪潮人工智能研究院有限公司 Image watermarking removing method based on countermeasure network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SUN Quan; ZENG Xiaoqin: "Image Inpainting Based on Generative Adversarial Network", Computer Science, no. 12, 15 December 2018 (2018-12-15) *
KANG Xiangui, HUANG Jiwu, LIN Yan, YANG Qunsheng: "A Spread-Spectrum Image Watermarking Algorithm Resistant to Affine Transformations", Acta Electronica Sinica, no. 01, 25 January 2004 (2004-01-25) *
HAN Ning; YAN Deqin: "Robust Blind Watermarking Algorithm Based on Support Vector Machine", Computer Engineering and Design, no. 22, 28 November 2009 (2009-11-28) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950458A (en) * 2021-03-19 2021-06-11 润联软件系统(深圳)有限公司 Image seal removing method and device based on countermeasure generation network and related equipment
CN112950458B (en) * 2021-03-19 2022-06-21 润联软件系统(深圳)有限公司 Image seal removing method and device based on countermeasure generation network and related equipment
CN113591856A (en) * 2021-08-23 2021-11-02 中国银行股份有限公司 Bill picture processing method and device

Similar Documents

Publication Publication Date Title
CN109784181B (en) Picture watermark identification method, device, equipment and computer readable storage medium
CN108986169B (en) Method and apparatus for processing image
CN112861648B (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN108491866B (en) Pornographic picture identification method, electronic device and readable storage medium
CN113705461B (en) Face definition detection method, device, equipment and storage medium
CN112215276A (en) Training method and device for confrontation network, electronic equipment and storage medium
CN114049568B (en) Target object deformation detection method, device, equipment and medium based on image comparison
CN112381092B (en) Tracking method, tracking device and computer readable storage medium
CN111553241A (en) Method, device and equipment for rejecting mismatching points of palm print and storage medium
CN112016560A (en) Overlay text recognition method and device, electronic equipment and storage medium
CN111523340B (en) Two-dimensional code identification method and device, electronic equipment and storage medium
CN109977745B (en) Face image processing method and related device
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN116137061B (en) Training method and device for quantity statistical model, electronic equipment and storage medium
CN108810319B (en) Image processing apparatus, image processing method, and program
CN112862703A (en) Image correction method and device based on mobile photographing, electronic equipment and medium
CN115439850B (en) Method, device, equipment and storage medium for identifying image-text characters based on examination sheets
CN113705459B (en) Face snapshot method and device, electronic equipment and storage medium
CN113705686B (en) Image classification method, device, electronic equipment and readable storage medium
CN113808134B (en) Oil tank layout information generation method, oil tank layout information generation device, electronic apparatus, and medium
CN113888086B (en) Article signing method, device, equipment and storage medium based on image recognition
CN115658525A (en) User interface checking method and device, storage medium and computer equipment
CN112541436B (en) Concentration analysis method and device, electronic equipment and computer storage medium
CN112288748B (en) Semantic segmentation network training and image semantic segmentation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination