CN112581396A - Reflection elimination method based on generative adversarial network - Google Patents

Reflection elimination method based on generative adversarial network

Info

Publication number
CN112581396A
CN112581396A (application CN202011504507.2A)
Authority
CN
China
Prior art keywords
image
network
reflection
loss function
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011504507.2A
Other languages
Chinese (zh)
Inventor
Wang Hongwei
Cheng Xiaogang
Song Limin
Shao Wenjie
Cai Yong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202011504507.2A priority Critical patent/CN112581396A/en
Publication of CN112581396A publication Critical patent/CN112581396A/en
Pending legal-status Critical Current

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a reflection elimination method based on a generative adversarial network, which comprises the following steps: step S1, establishing a data set; step S2, constructing a reflection elimination network model; step S3, defining the required loss functions; step S4, iteratively training the reflection elimination network model; step S5, testing the trained reflection elimination network model and outputting the final reflection elimination network model; and step S6, eliminating the reflections in an input image using the reflection elimination network model. Compared with prior methods, eliminating reflections with the trained generative adversarial network offers better reliability and generalization.

Description

Reflection elimination method based on generative adversarial network
Technical Field
The invention relates to the field of deep learning, and in particular to a reflection elimination method based on a generative adversarial network.
Background
Images shot through glass often contain reflections, and these reflection-contaminated images cause great trouble, in particular seriously affecting current computer vision systems. Eliminating unwanted reflections is therefore of great significance for large-scale applications of computer vision. Reflective scenes greatly reduce the visibility of the content behind the glass, and the accompanying distortions and blurs often cause computer vision applications to fail. The purpose of reflection elimination is to remove the reflections and increase the visibility of the background scene. The recognition rate of current computer vision systems has reached a good level; to further improve recognition accuracy and widen the application scenarios, picture quality must be improved to meet the systems' requirements.
Most existing reflection elimination algorithms are non-learning methods, most commonly based on different priors and on differing properties of the background and reflection layers. For example, Levin et al. decompose the image using pixel-sparsity information; Li et al. separate the images based on the difference in blur level between the background and the reflection layer; Shih et al. remove reflections and visible ghosting effects with a GMM method. These methods rely on hand-crafted priors to artificially distinguish and segment certain properties of the image, which makes them difficult to use in more complex scenes. In particular, the method of Levin et al. assumes that the edge histogram of the image follows a certain sparse distribution; it depends on user annotation of the background and reflection content and often fails on complex textures, since a fixed sparse prior distribution cannot suit natural images of differing complexity. The methods of Li et al. and Shih et al. both assume asymmetry between the reflection image and the target image, i.e. that the two have different statistics; in many cases such asymmetry information is absent, which limits the scope of these algorithms. These algorithms are similar in that they all depend on low-level pixel information: the methods of Li et al. and Shih et al. are strongly affected by the scene and texture of the image, and complex scenes and textures make it difficult to judge the sparsity of the reflection image's edge histogram and to calculate the offset vector and attenuation coefficient of the ghosting. Such problems arise with methods that rely on low-level pixel details because low-level pixel information is heavily influenced by the environment; a simple illumination change, for example, can cause the pixel statistics of an image to fluctuate widely.
In contrast, the high-level semantic information of an image is relatively stable: illumination changes, position offsets and slight deformations do not greatly affect it. Therefore, adding high-level semantic information as a prior to the reflection removal problem helps to circumvent the limitations of methods that use only low-level pixel information as prior knowledge.
Disclosure of Invention
In view of the above, the present invention provides a reflection elimination method based on a generative adversarial network, which addresses the universality and accuracy problems in the application of image reflection elimination technology.
To achieve this purpose, the technical solution of the invention is as follows: a reflection elimination method based on a generative adversarial network comprises the following steps:
step S1, establishing a data set and dividing it into a training set and a testing set according to a certain proportion, wherein the data set comprises background images, reflection images and mixed images, each mixed image being formed by a weighted combination of a background image and a reflection image;
step S2, constructing a reflection elimination network model, wherein the reflection elimination network model comprises a generator network and a discriminator network;
step S3, defining a required loss function according to a training target and the architecture of the reflection elimination network model;
step S4, utilizing the training set to carry out iterative training on the reflection elimination network model until the loss function is converged;
step S5, testing the trained reflection elimination network model through the test set, judging whether the reflection elimination network model meets the requirements, if not, continuing to perform the step S4 until the test result meets the requirements; if so, outputting a final reflection elimination network model;
step S6, eliminating the reflection in the input image using the reflection elimination network model obtained in step S5.
Further, in step S1, the data set has a plurality of sets of images satisfying the training scale, each set comprising a background image, a reflection image R and a mixed image, and the background images in the training set are defined as real images.
Further, in the step S2, the generator network includes an encoder and a decoder;
the encoder consists of 13 3×3 convolutional layers and 5 max-pooling layers and is used to extract a detailed feature map of the image, the detailed feature map comprising background features and reflection features;
the decoder receives the detailed feature map and reconstructs an image, and the reconstructed image output by the decoder is defined as a false image;
the decoder up-samples in one-to-one correspondence with the encoder, and each pooling layer in the decoder is connected to the encoder layer of the same spatial resolution;
the discriminator network consists of 6 3×3 convolutional layers, 2 max-pooling layers and a final sigmoid layer; it receives the false image output by the decoder and the real images in the data set, and judges whether a received image is false or real.
Further, the step S3 is specifically:
s301, defining and generating a countermeasure loss function;
the expression for generating the countervailing loss function is as follows:
Figure BDA0002844566870000031
in formula (1), x represents a mixed image in the dataset; g (x) represents a reconstructed background image, i.e. a ghost image, of the generator network output; y represents a background image in the dataset, i.e. a real image; where G denotes the generator network and D denotes the decider network. Training the first stage, we want to train decision D pairsIn the discrimination ability of the real image, i.e. maximizing D (y), the expected E of the category is obtainedy(ii) a In the second stage we want the generator G to generate an image G (x) that is close to the real image y, i.e. minimize D (y, G (x)), resulting in a desired E for that classx,y
The function of generating the counterdamage function is that when the generator network and the judger network are in the process of continuous iteration, the difference between the false image output by the generator network and the real image in the data set is continuously minimized, and the judgment capability of the judger network is maximized;
step S302, defining an L1 loss function, wherein the expression of the L1 loss function is as follows:
L_L1(G) = E_{x,y}[ ||y - G(x)||_1 ]    (2)

In formula (2), y represents the background image in the data set, i.e. the true background image, G(x) represents the reflection-removed background image generated by the generator network, i.e. the reconstructed background image, and E_{x,y} denotes the expectation of the absolute-value error between the two images;
the L1 loss function acts to constrain the reconstruction output of the generator network so that the image output by the generator network can confuse the discriminator network, making the generator network produce images similar to the real image;
step S303, defining a loss function for judging the similarity of the image structures, wherein the loss function expression is as follows:
L_SSIM(G) = 1 - SSIM_{x,y}(G)    (3)

In formula (3), SSIM is an index for measuring the similarity between two images; SSIM_{x,y}(G) is a function measuring the structural similarity between the real image y and the image generated by the generator network G, and this loss acts to constrain the difference between the image reconstructed by the generator network and the background image in the data set, suppressing structural deviation between the reconstructed image and the real image;
the expression of SSIM_{x,y}(G) is:

SSIM_{x,y}(G) = [(2μ_x μ_y + c_1)(2σ_{xy} + c_2)] / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)]    (4)

In formula (4), c_1 and c_2 are regularization constants, μ denotes a mean and σ a variance; x and y denote the generated image and the real image respectively, μ_x and μ_y are their means, and σ_x, σ_y and σ_{xy} are their variances and the covariance between the two;
step S304, defining a final loss function, wherein the expression is as follows:
L = L_GAN + λ·L_L1 + L_SSIM    (5)

In formula (5), L_GAN denotes the generative adversarial loss function, L_L1 denotes the L1 loss function with λ as its weight, and L_SSIM denotes the structural similarity loss function.
Further, step S4 specifically includes:
step S401, initializing parameters of the reflection elimination network model, specifically, respectively initializing parameters of a regularization layer, a convolution layer and a deconvolution layer by adopting Gaussian distribution;
s402, inputting the background images in the training set into a decision device network for training to obtain parameters;
step S403, inputting the mixed images in the training set into a generator network for training to obtain parameters and obtain reconstructed background images, inputting the reconstructed background images into a decision device network for discrimination, and performing back propagation on the generator network and the decision device network according to the process during training to alternately train parameters of each layer of the optimization model;
step S404, performing multiple rounds of iterative training until the loss function obtained in the step S304 is converged, so that the determiner network cannot distinguish which image is a real image and which image is a false image, and the generator network can generate an image which is nearly consistent with the real image;
and S405, storing the trained reflection elimination network model.
Further, in step S5, the trained reflection elimination network model is tested by a root mean square error (RMSE) criterion, expressed as:

RMSE = sqrt( (1/(m×n)) Σ_{i=1}^{m×n} (y_i - g_i)^2 )    (6)

In formula (6), m×n is the number of pixels given by the horizontal and vertical dimensions of the image, i indexes each pixel, and y_i and g_i are corresponding pixels of the real image and the generated image respectively.
Further, after step S1 is completed and before step S2 is performed, the data set undergoes a preprocessing operation; specifically, the images in the data set are uniformly cropped and scaled to 224 × 224.
The invention has the following beneficial effects:
the invention trains a generative adversarial network to eliminate reflections and, compared with prior methods, achieves better reliability and generalization.
Drawings
Fig. 1 is a structural diagram of a reflection cancellation network model according to the present invention.
FIG. 2 is a schematic flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1 and fig. 2, the present invention provides a reflection elimination method based on a generative adversarial network, comprising the following steps:
step S1, establishing a data set and dividing it into a training set and a testing set according to a certain proportion, wherein the data set comprises background images, reflection images and mixed images, each mixed image being formed by a weighted combination of a background image and a reflection image;
specifically, the data set has a plurality of groups of images satisfying the training scale, each group of images comprises a background image, a reflection image R and a mixed image, and the background image in the training set is defined as a real image.
The expression of the mixed image is:

X_m = αB + βR

where X_m represents the mixed image, α and β are the weight coefficients of the background image and the reflection image respectively, B is the background image, and R is the reflection image.
The established data set then undergoes a preprocessing operation; specifically, the images in the data set are uniformly cropped and scaled to 224 × 224.
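As an illustration only (the patent contains no code), a minimal Python sketch of this data synthesis and preprocessing might look as follows; the file names and the weights α = 0.8, β = 0.2 are hypothetical choices:

```python
# Sketch: build one training triple (B, R, X_m = alpha*B + beta*R) and apply
# the 224x224 crop-and-scale preprocessing. File names and the alpha/beta
# values below are hypothetical, not taken from the patent.
import numpy as np
from PIL import Image

def preprocess(img: Image.Image, size: int = 224) -> np.ndarray:
    """Center-crop to a square, scale to size x size, normalize to [0, 1]."""
    w, h = img.size
    s = min(w, h)
    img = img.crop(((w - s) // 2, (h - s) // 2, (w + s) // 2, (h + s) // 2))
    img = img.resize((size, size), Image.BILINEAR)
    return np.asarray(img, dtype=np.float32) / 255.0

def make_mixed(background: np.ndarray, reflection: np.ndarray,
               alpha: float = 0.8, beta: float = 0.2) -> np.ndarray:
    """X_m = alpha * B + beta * R, clipped to the valid intensity range."""
    return np.clip(alpha * background + beta * reflection, 0.0, 1.0)

B = preprocess(Image.open("background.jpg").convert("RGB"))   # ground truth
R = preprocess(Image.open("reflection.jpg").convert("RGB"))
X_m = make_mixed(B, R)   # mixed image: the input to the generator network
```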
As shown in fig. 1, the MixA input to the network is a mixed image and realB is a pure background image; the former is used to train the generator network, and the latter, together with the false image, is used to train the discriminator network.
Step S2, constructing a reflection elimination network model, wherein the reflection elimination network model comprises a generator network and a discriminator network; the generator network comprises an encoder and a decoder;
the encoder consists of 13 3×3 convolutional layers and 5 max-pooling layers and is used to extract a detailed feature map of the image, the detailed feature map comprising background features and reflection features;
the decoder receives the detailed feature map and reconstructs an image, and the reconstructed image output by the decoder is defined as a false image;
the decoder up-samples in one-to-one correspondence with the encoder; to avoid the vanishing-gradient problem, each pooling layer in the decoder is connected to the encoder layer of the same spatial resolution, forming a residual structure similar to that of a ResNet, so that the encoder can effectively guide the decoder to retain details while regenerating the image.
The discriminator network consists of 6 3×3 convolutional layers, 2 max-pooling layers and a final sigmoid layer. In this embodiment a PatchGAN replaces the image-level GAN most used at present; the key difference is that the network outputs a two-dimensional map produced along multiple paths rather than a single scalar obtained by global averaging, and the receptive field of each element of this map corresponds to a small region of the input, i.e. the discriminator scores each small patch of the input image, so training makes the model focus more on image details.
The discriminator network receives the false image output by the decoder and the real images in the data set, and judges whether a received image is false or real.
Specifically, in this embodiment, the generator network may use a pre-trained VGG network.
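For concreteness, the following PyTorch sketch shows one plausible realization of this architecture. The patent gives no code, so the channel widths, bilinear upsampling and LeakyReLU activations are assumptions; the encoder mirrors the VGG16 convolutional backbone (13 3×3 convolutions, 5 max-poolings), into which pre-trained VGG weights could be copied.

```python
# Sketch of the generator (VGG16-style encoder + mirrored decoder with skip
# connections) and the PatchGAN discriminator described above. All layer
# widths below are assumptions, not values taken from the patent.
import torch
import torch.nn as nn

def vgg_stage(cin, cout, n):
    """n 3x3 conv+ReLU layers (one VGG stage); pooling is applied separately."""
    layers = []
    for i in range(n):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        cfg = [(3, 64, 2), (64, 128, 2), (128, 256, 3),
               (256, 512, 3), (512, 512, 3)]          # 2+2+3+3+3 = 13 convs
        self.stages = nn.ModuleList(vgg_stage(*c) for c in cfg)
        self.pool = nn.MaxPool2d(2)                   # applied after each stage
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        dec_cfg = [(1024, 512), (1024, 256), (512, 128), (256, 64), (128, 64)]
        self.dec = nn.ModuleList(
            nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                          nn.ReLU(inplace=True))
            for cin, cout in dec_cfg)
        self.head = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        skips = []
        for stage in self.stages:
            x = stage(x)
            skips.append(x)            # pre-pool feature kept as skip link
            x = self.pool(x)
        for dec, skip in zip(self.dec, reversed(skips)):
            # Fuse the encoder feature of equal spatial resolution so the
            # encoder guides the decoder to retain detail.
            x = dec(torch.cat([self.up(x), skip], dim=1))
        return torch.sigmoid(self.head(x))             # false image in [0, 1]

class Discriminator(nn.Module):
    """6 3x3 convs + 2 max-pools + sigmoid; the output is a 2-D patch-score
    map (PatchGAN) rather than a single scalar."""
    def __init__(self):
        super().__init__()
        chans = [3, 64, 64, 128, 128, 256, 1]
        layers = []
        for i in range(6):
            layers.append(nn.Conv2d(chans[i], chans[i + 1], 3, padding=1))
            if i < 5:
                layers.append(nn.LeakyReLU(0.2, inplace=True))
            if i in (1, 3):            # the 2 max-pooling layers
                layers.append(nn.MaxPool2d(2))
        layers.append(nn.Sigmoid())
        self.net = nn.Sequential(*layers)

    def forward(self, img):
        return self.net(img)           # each element scores one input patch
```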
Step S3, defining a required loss function according to the training target and the architecture of the reflection elimination network model;
step S3 specifically includes:
s301, defining and generating a countermeasure loss function;
the expression for generating the penalty function is:
Figure BDA0002844566870000061
in the formula, x represents a mixed image in the dataset; g (x) represents a reconstructed background image, i.e. a ghost image, of the generator network output; y represents a background image in the dataset, i.e. a real image; where G denotes the generator network and D denotes the decider network. In the first stage of training, we want to train the discrimination ability of the decision device D on the real image, i.e. maximize D (y), and obtain the expectation E of the classy(ii) a In the second stage we want the generator G to generate an image G (x) that is close to the real image y, i.e. minimize D (y, G (x)), resulting in a desired E for that classx,y
The effect of generating the counterloss function is to continuously minimize the difference between the false image output by the generator network and the real image in the data set and maximize the judgment capability of the decider network, while the generator network and the decider network are in the process of continuous iteration.
In particular, define
Figure BDA0002844566870000071
The significance of the function is: inputting the mixed picture x into a generator network G to obtain a false image G (x), and simultaneously delivering the false image G (x) and a background image y to a decision device network D; the training aims to minimize the difference between an image generated by the generator network G and a pure background image (namely minimize D (y, G (x)) in the continuous iteration process, and simultaneously maximize the decision device network D, so that the capability of the decision device network for judging true and false images is improved. Meanwhile, in order to make G confuse D, an image much like the Ground truth is generated. And then an L1 loss with the group channel, so that the L1 loss function needs to be defined.
Step S302, an L1 loss function is defined, and the expression of the L1 loss function is as follows:
L_L1(G) = E_{x,y}[ ||y - G(x)||_1 ]    (2)

In the formula, y represents the background image in the data set, i.e. the original background image, G(x) represents the reflection-removed background image output by the generator network, i.e. the reconstructed background image, and E_{x,y} denotes the expectation of the absolute-value error between the two images;
the L1 loss function acts to constrain the reconstruction output of the generator network so that the image output by the generator network can confuse the discriminator network, making the generator network produce images similar to the real image.
Step S303, defining a loss function for judging the similarity of the image structures, wherein the loss function expression is as follows:
L_SSIM(G) = 1 - SSIM_{x,y}(G)    (3)

In formula (3), SSIM is an index for measuring the similarity between two images; SSIM_{x,y}(G) is a function measuring the structural similarity between the real image y and the image generated by the generator network G. This loss acts to constrain the difference between the image reconstructed by the generator network and the background image in the data set, suppressing structural deviation between the reconstructed image and the real image.
The expression of SSIM_{x,y}(G) is:

SSIM_{x,y}(G) = [(2μ_x μ_y + c_1)(2σ_{xy} + c_2)] / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)]    (4)

In the formula, c_1 and c_2 are regularization constants, μ denotes a mean and σ a variance; x and y denote the generated image and the real image respectively, μ_x and μ_y are their means, and σ_x, σ_y and σ_{xy} are their variances and the covariance between the two.
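A compact sketch of this loss is given below, computing the local statistics of formula (4) with average pooling; the 11×11 window and the conventional constants c_1 = 0.01², c_2 = 0.03² are assumptions the patent does not fix.

```python
# Sketch of the SSIM loss of formulas (3)-(4); window size and c1/c2 are
# conventional assumptions. Inputs are image batches with values in [0, 1].
import torch
import torch.nn.functional as F

def ssim(gen: torch.Tensor, real: torch.Tensor, window: int = 11) -> torch.Tensor:
    """Mean SSIM between two batches, using average-pooled local statistics."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2                    # regularization constants
    pad = window // 2
    mu_x = F.avg_pool2d(gen, window, stride=1, padding=pad)    # local means
    mu_y = F.avg_pool2d(real, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(gen * gen, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(real * real, window, stride=1, padding=pad) - mu_y ** 2
    cov = F.avg_pool2d(gen * real, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)              # formula (4)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def ssim_loss(gen: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """Formula (3): L_SSIM = 1 - SSIM_{x,y}(G)."""
    return 1.0 - ssim(gen, real)
```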
Step S304, defining a final loss function, wherein the expression is as follows:
L = L_GAN + λ·L_L1 + L_SSIM    (5)

In the formula, L_GAN denotes the generative adversarial loss function, L_L1 denotes the L1 loss function with λ as its weight, and L_SSIM denotes the structural similarity loss function.
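The total objective of formula (5) could then be assembled as in the sketch below; the binary-cross-entropy form of the adversarial term and the weight λ = 100 are assumptions, and ssim_loss is the function sketched above.

```python
# Sketch of the total objective of formula (5) for one batch; lambda = 100
# and the BCE form of the adversarial term are assumptions, and ssim_loss
# is the function from the previous sketch.
import torch
import torch.nn.functional as F

def generator_loss(D, fake, real, lam: float = 100.0):
    """L = L_GAN + lambda * L_L1 + L_SSIM, from the generator's side."""
    pred = D(fake)                                     # PatchGAN score map
    adv = F.binary_cross_entropy(pred, torch.ones_like(pred))
    l1 = F.l1_loss(fake, real)                         # formula (2)
    return adv + lam * l1 + ssim_loss(fake, real)      # formula (5)

def discriminator_loss(D, fake, real):
    """Maximize judgment ability: real patches -> 1, false patches -> 0."""
    pred_real, pred_fake = D(real), D(fake.detach())
    return (F.binary_cross_entropy(pred_real, torch.ones_like(pred_real)) +
            F.binary_cross_entropy(pred_fake, torch.zeros_like(pred_fake)))
```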
Step S4, performing iterative training on the reflection elimination network model using the training set until the loss function converges;
in this embodiment, step S4 specifically includes:
step S401, initializing parameters of a reflection elimination network model, specifically, respectively initializing parameters of a regularization layer, a convolution layer and a deconvolution layer by adopting Gaussian distribution;
s402, inputting the background images in the training set into a decision device network for training to obtain parameters;
step S403, inputting the mixed images in the training set into a generator network for training to obtain parameters and obtain reconstructed background images, inputting the reconstructed background images into a decision device network for discrimination, and performing back propagation on the generator network and the decision device network according to the process during training to alternately train parameters of each layer of the optimization model;
step S404, performing multiple rounds of iterative training until the loss function obtained in the step S304 is converged, so that the determiner network cannot distinguish which image is a real image and which image is a false image, and the generator network can generate an image which is nearly consistent with the real image;
and S405, storing the trained reflection elimination network model.
Step S5, testing the trained reflection elimination network model through the test set, judging whether the reflection elimination network model meets the requirements, if not, continuing to perform the step S4 until the test result meets the requirements; if so, outputting a final reflection elimination network model;
specifically, step S5 is a testing stage, in which the invention tests the trained reflection elimination network model by the root mean square error (RMSE) criterion, that is, RMSE measures the difference between the image generated by the model and the real image, in place of judging the reflection elimination effect by eye;
the expression for the minimum root mean square error criterion is:
Figure BDA0002844566870000085
in the formula, m × n is the horizontal and vertical coordinates of the image, i represents each pixel, y represents each pixeliAnd giCorresponding pixel points of the real image and the generated image, respectively.
The same criterion is also used to test other algorithms, enabling a side-by-side comparison and corresponding analysis of reflection elimination methods.
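A sketch of this test stage follows; rmse implements formula (6) over all pixels, and the acceptance threshold is a hypothetical value.

```python
# Sketch of the RMSE test of step S5 (threshold value is hypothetical).
import torch

def rmse(real: torch.Tensor, gen: torch.Tensor) -> float:
    """Formula (6): sqrt of the mean squared pixel error over all m*n pixels."""
    return torch.sqrt(torch.mean((real - gen) ** 2)).item()

@torch.no_grad()
def evaluate(G, test_loader) -> float:
    """Mean RMSE of the generator over the test set."""
    G.eval()
    scores = [rmse(background, G(mixed)) for mixed, background in test_loader]
    return sum(scores) / len(scores)

# Accept and export the model once the test error is low enough (step S5):
# if evaluate(G, test_loader) < 0.05:    # hypothetical acceptance threshold
#     torch.save(G.state_dict(), "final_model.pth")
```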
Step S6, eliminating the reflection in the input image using the reflection elimination network model obtained in step S5.
The foregoing has described preferred embodiments of the invention in detail. It should be understood that those skilled in the art can make numerous modifications and variations according to the concept of the invention without creative effort. Therefore, technical solutions that those skilled in the art can obtain through logical analysis, reasoning or limited experiments based on the prior art and the concept of the present invention shall fall within the protection scope defined by the claims.

Claims (7)

1. A reflection elimination method based on a generative adversarial network, characterized by comprising the following steps:
step S1, establishing a data set and dividing it into a training set and a testing set according to a certain proportion, wherein the data set comprises background images, reflection images and mixed images, each mixed image being formed by a weighted combination of a background image and a reflection image;
step S2, constructing a reflection elimination network model, wherein the reflection elimination network model comprises a generator network and a discriminator network;
step S3, defining a required loss function according to a training target and the architecture of the reflection elimination network model;
step S4, utilizing the training set to carry out iterative training on the reflection elimination network model until the loss function is converged;
step S5, testing the trained reflection elimination network model through the test set, judging whether the reflection elimination network model meets the requirements, if not, continuing to perform the step S4 until the test result meets the requirements; if so, outputting a final reflection elimination network model;
step S6, eliminating the reflection in the input image using the reflection elimination network model obtained in step S5.
2. The reflection elimination method based on a generative adversarial network of claim 1, wherein in step S1 the data set has a plurality of sets of images satisfying the training scale, each set comprising a background image, a reflection image R and a mixed image, and the background images in the training set are defined as real images.
3. The method of claim 2, wherein in step S2, the generator network comprises an encoder and a decoder;
the encoder consists of 13 3×3 convolutional layers and 5 max-pooling layers and is used to extract a detailed feature map of the image, the detailed feature map comprising background features and reflection features;
the decoder receives the detailed feature map and reconstructs an image, and the reconstructed image output by the decoder is defined as a false image;
the decoder up-samples in one-to-one correspondence with the encoder, and each pooling layer in the decoder is connected to the encoder layer of the same spatial resolution;
the discriminator network consists of 6 3×3 convolutional layers, 2 max-pooling layers and a final sigmoid layer; it receives the false image output by the decoder and the real images in the data set, and judges whether a received image is false or real.
4. The reflection elimination method based on a generative adversarial network as claimed in claim 3, wherein step S3 specifically comprises:
s301, defining and generating a countermeasure loss function;
the expression for generating the countervailing loss function is as follows:
Figure FDA0002844566860000021
in formula (1), x represents a mixed image in the dataset; g (x) represents a reconstructed background image, i.e. a ghost image, of the generator network output; y represents a background image in the dataset, i.e. a real image; wherein G denotes a generator network, D denotes a decider network, Ex,yThe expectation is obtained after the absolute value error is obtained for the two images;
the function of generating the counterdamage function is that when the generator network and the judger network are in the process of continuous iteration, the difference between the false image output by the generator network and the real image in the data set is continuously minimized, and the judgment capability of the judger network is maximized;
step S302, defining an L1 loss function, wherein the expression of the L1 loss function is as follows:
L_L1(G) = E_{x,y}[ ||y - G(x)||_1 ]    (2)

In formula (2), y represents the background image in the data set, i.e. the true background image, G(x) represents the reflection-removed background image generated by the generator network, i.e. the reconstructed background image, and E_{x,y} denotes the expectation of the absolute-value error between the two images;
the L1 loss function acts to constrain the reconstruction output of the generator network so that the image output by the generator network can confuse the discriminator network, making the generator network produce images similar to the real image;
step S303, defining a loss function for judging the similarity of the image structures, wherein the loss function expression is as follows:
L_SSIM(G) = 1 - SSIM_{x,y}(G)    (3)

In formula (3), SSIM is an index for measuring the similarity between two images; SSIM_{x,y}(G) is a function measuring the structural similarity between the real image y and the image generated by the generator network G, and this loss acts to constrain the difference between the image reconstructed by the generator network and the background image in the data set, suppressing structural deviation between the reconstructed image and the real image;
the expression of SSIM_{x,y}(G) is:

SSIM_{x,y}(G) = [(2μ_x μ_y + c_1)(2σ_{xy} + c_2)] / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)]    (4)

In formula (4), c_1 and c_2 are regularization constants, μ denotes a mean and σ a variance; x and y denote the generated image and the real image respectively, μ_x and μ_y are their means, and σ_x, σ_y and σ_{xy} are their variances and the covariance between the two;
step S304, defining a final loss function, wherein the expression is as follows:
L = L_GAN + λ·L_L1 + L_SSIM    (5)

In formula (5), L_GAN denotes the generative adversarial loss function, L_L1 denotes the L1 loss function with λ as its weight, and L_SSIM denotes the structural similarity loss function.
5. The reflection elimination method based on a generative adversarial network as claimed in claim 4, wherein step S4 specifically comprises:
step S401, initializing parameters of the reflection elimination network model, specifically, respectively initializing parameters of a regularization layer, a convolution layer and a deconvolution layer by adopting Gaussian distribution;
s402, inputting the background images in the training set into a decision device network for training to obtain parameters;
step S403, inputting the mixed images in the training set into a generator network for training to obtain parameters and obtain reconstructed background images, inputting the reconstructed background images into a decision device network for discrimination, and performing back propagation on the generator network and the decision device network according to the process during training to alternately train parameters of each layer of the optimization model;
step S404, performing multiple rounds of iterative training until the loss function obtained in the step S304 is converged, so that the determiner network cannot distinguish which image is a real image and which image is a false image, and the generator network can generate an image which is nearly consistent with the real image;
and S405, storing the trained reflection elimination network model.
6. The method of claim 5, wherein in step S5 the trained reflection elimination network model is tested by a root mean square error (RMSE) criterion, expressed as:

RMSE = sqrt( (1/(m×n)) Σ_{i=1}^{m×n} (y_i - g_i)^2 )    (6)

In formula (6), m×n is the number of pixels given by the horizontal and vertical dimensions of the image, i indexes each pixel, and y_i and g_i are corresponding pixels of the real image and the generated image respectively.
7. The method of claim 6, wherein after step S1 is completed and before step S2 is performed, the data set is pre-processed, and the pre-processing is specifically performed by uniformly cropping and scaling the images in the data set to 224 x 224 size.
CN202011504507.2A 2020-12-18 2020-12-18 Reflection elimination method based on generative adversarial network Pending CN112581396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011504507.2A CN112581396A (en) 2020-12-18 2020-12-18 Reflection elimination method based on generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011504507.2A CN112581396A (en) 2020-12-18 2020-12-18 Reflection elimination method based on generative adversarial network

Publications (1)

Publication Number Publication Date
CN112581396A true CN112581396A (en) 2021-03-30

Family

ID=75136036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011504507.2A Pending CN112581396A (en) 2020-12-18 2020-12-18 Reflection elimination method based on generative adversarial network

Country Status (1)

Country Link
CN (1) CN112581396A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103590A (en) * 2017-03-22 2017-08-29 华南理工大学 A kind of image for resisting generation network based on depth convolution reflects minimizing technology
CN110473154A (en) * 2019-07-31 2019-11-19 西安理工大学 A kind of image de-noising method based on generation confrontation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Yunfei: "Research on Blurred Image Restoration Based on Generative Adversarial Networks", Masters' Electronic Journal *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115731125A (en) * 2022-11-11 2023-03-03 贵州大学 Big data technology-based method for eliminating main beam effect of radio interference array

Similar Documents

Publication Publication Date Title
CN111652321B (en) Marine ship detection method based on improved YOLOV3 algorithm
CN111738942A (en) Generation countermeasure network image defogging method fusing feature pyramid
CN109685072B (en) Composite degraded image high-quality reconstruction method based on generation countermeasure network
CN109859120B (en) Image defogging method based on multi-scale residual error network
CN107103285B (en) Face depth prediction method based on convolutional neural network
CN111079739B (en) Multi-scale attention feature detection method
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN111968123B (en) Semi-supervised video target segmentation method
CN111489301A (en) Image defogging method based on image depth information guide for migration learning
CN111160229B (en) SSD network-based video target detection method and device
CN110717863B (en) Single image snow removing method based on generation countermeasure network
CN109949200B (en) Filter subset selection and CNN-based steganalysis framework construction method
CN115393410A (en) Monocular view depth estimation method based on nerve radiation field and semantic segmentation
CN115393231B (en) Defect image generation method and device, electronic equipment and storage medium
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
CN116468995A (en) Sonar image classification method combining SLIC super-pixel and graph annotation meaning network
CN111861926A (en) Image rain removing method based on airspace group enhancement mechanism and long-time and short-time memory network
CN113160085B (en) Water bloom shielding image data collection method based on generation countermeasure network
CN114283058A (en) Image super-resolution reconstruction method based on countermeasure network and maximum mutual information optimization
CN112581396A (en) Reflection elimination method based on generative adversarial network
CN110738645B (en) 3D image quality detection method based on convolutional neural network
Zeng et al. Progressive feature fusion attention dense network for speckle noise removal in OCT images
CN115860113B (en) Training method and related device for self-countermeasure neural network model
CN115526891B (en) Training method and related device for defect data set generation model
Krishnan et al. A novel underwater image enhancement technique using ResNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210330