CN111539896B - Domain-adaptive-based image defogging method and system - Google Patents
- Publication number
- CN111539896B CN111539896B CN202010367514.6A CN202010367514A CN111539896B CN 111539896 B CN111539896 B CN 111539896B CN 202010367514 A CN202010367514 A CN 202010367514A CN 111539896 B CN111539896 B CN 111539896B
- Authority
- CN
- China
- Prior art keywords
- image
- defogging
- real
- domain
- synthetic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/73—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a domain-adaptive image defogging method and system, belonging to the field of pattern recognition. An image conversion module effectively reduces the domain deviation between the synthetic domain and the real domain and improves the generalization of the image defogging model on the real domain; real foggy day images are fused into the training of both the synthetic-domain and real-domain defogging models to constrain training and further increase generalization, so that the defogged images are clearer. The total loss function of the invention comprises a dark channel prior loss, a total variation loss and a consistency loss: the dark channel prior loss encourages most pixel values of the dark channel of the recovered image to be 0 or close to 0, the total variation loss encourages the pixels of the defogged image to be smoothly distributed, and the consistency loss keeps the outputs of the two defogging models consistent. Minimizing the total loss function therefore yields defogged images that are clearer with more distinct details.
Description
Technical Field
The invention belongs to the field of pattern recognition, and particularly relates to a domain-adaptive image defogging method and system.
Background
The goal of image defogging is to recover a sharp image from a foggy input image, which is a very important step for subsequent high-level computer vision tasks such as object recognition and scene understanding.
Given a foggy image, early image defogging methods attempted to estimate the transmission map and the global atmospheric light, and thereby recover a sharp image. Some prior-based methods estimate the transmission map by mining statistical properties of sharp images, e.g., the dark channel prior and the color-line prior. Unfortunately, these priors often fail to hold in practice, which can lead to inaccurate transmission estimates and thus to poor-quality recovered images. Benefiting from the powerful feature representations of convolutional neural networks, many deep-learning-based methods achieve good image defogging performance. However, these methods require a large number of foggy/clear image pairs to train the network, and in practical applications it is infeasible to obtain a large number of clear images corresponding to real foggy day images. Therefore, most methods train the defogging model on synthetic data. However, due to domain bias, a model trained on synthetic data generalizes poorly to real data.
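For concreteness, the prior-based estimation mentioned above can be sketched in Python. This is an illustrative sketch of the dark channel prior of He et al., not part of the patent: the function names, the patch size and the weight ω are assumptions, and the transmission map is estimated as t(x) = 1 - ω · DarkChannel(I/A).

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel min over color channels, then a local min filter.
    img: H x W x 3 float array in [0, 1]."""
    min_c = img.min(axis=2)                      # min over the three color channels
    pad = patch // 2
    padded = np.pad(min_c, pad, mode='edge')
    h, w = min_c.shape
    out = np.empty_like(min_c)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(hazy, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * DarkChannel(I / A): dark-channel-prior transmission estimate."""
    return 1.0 - omega * dark_channel(hazy / A, patch)
```

A haze-free region has a near-zero dark channel, so its estimated transmission is close to 1; dense fog pushes the dark channel up and the transmission down.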
Disclosure of Invention
Aiming at the poor generalization of existing image defogging methods on real data sets and the corresponding need for improvement, the invention provides a domain-adaptive image defogging method and system, with the aim of using domain adaptation to effectively improve the generalization of an image defogging model on real data.
To achieve the above object, according to a first aspect of the present invention, there is provided a domain-adaptive-based image defogging method including the steps of:
s1, taking the synthetic foggy day image data set and the real foggy day image data set as training sets, and performing iterative training on an image defogging network to obtain a trained image defogging network;
s2, inputting the real foggy day image to be detected into a trained image defogging network to obtain a defogging result;
the image defogging network comprises: the system comprises an image conversion module, a synthetic domain image defogging module and a real domain image defogging module;
the image conversion module is used for converting the original synthetic foggy day image into a real foggy day image to obtain a synthetic-to-real foggy day image, and simultaneously converting the original real foggy day image into a synthetic foggy day image to obtain a real-to-synthetic foggy day image;
the synthetic domain image defogging module is used for defogging the original synthetic foggy day image and the real-to-synthetic foggy day image respectively, to obtain the corresponding defogging results in the synthetic domain;
and the real domain image defogging module is used for defogging the original real foggy day image and the synthetic-to-real foggy day image respectively, to obtain the corresponding defogging results in the real domain.
Preferably, the image conversion module includes a synthetic-to-real network and a real-to-synthetic network, both of which adopt a CycleGAN architecture;
the CycleGAN network comprises a generator network and a discriminator network;
the generator network comprises a convolution layer, residual blocks, an up-sampling layer, a convolution layer and a nonlinear activation layer which are sequentially connected, and is used for performing style conversion on an input image to obtain a converted image;
the discriminator network comprises convolution layers, batch normalization layers and nonlinear activation layers which are sequentially connected, and is used for discriminating whether the distribution of the converted image is consistent with that of the target-domain images, thereby assisting the generator network in producing converted images that better match the real foggy day distribution.
Preferably, the network structures of the synthetic domain image defogging module and the real domain image defogging module are the same, and the inputs are different;
the image defogging module comprises a coding block and a decoding block which are connected in sequence;
the coding block comprises a convolution layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected, and is used for extracting features from the original synthetic foggy day image and the real-to-synthetic foggy day image to obtain first clear-image features, and from the original real foggy day image and the synthetic-to-real foggy day image to obtain second clear-image features;
the decoding block comprises a convolution layer, a batch normalization layer, a nonlinear activation layer, a deconvolution (transposed convolution) layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected, and is used for decoding the first clear-image features into the defogged image of the synthetic foggy day image, and the second clear-image features into the defogged image of the real foggy day image.
Preferably, the total loss function of the domain-adaptive defogging network is calculated as follows:
L = L_tran + λ_m(L_rm + L_sm) + λ_d(L_rd + L_sd) + λ_t(L_rt + L_st) + λ_c·L_c
wherein L_tran denotes the loss function of the image conversion module; L_rm and L_sm respectively denote the mean square error losses of the real-domain and synthetic-domain defogging networks; L_rd and L_sd respectively denote the dark channel losses of the real-domain and synthetic-domain defogging networks; L_rt and L_st respectively denote the total variation losses of the real-domain and synthetic-domain defogging networks; L_c denotes the consistency loss; and the parameters λ_m, λ_d, λ_t, λ_c balance the mean square error, dark channel, total variation and consistency loss terms respectively.
Preferably, the dark channel loss of the real-domain and synthetic-domain defogging networks is calculated as follows:
L_rd = || min_{y∈N(x)} min_c J_R^c(y) ||_1,  L_sd = || min_{y∈N(x)} min_c J_{R→S}^c(y) ||_1
wherein J_R denotes the defogged image of a real foggy day image, J_{R→S} denotes the defogged image of a real-to-synthetic foggy day image, x and y are pixel coordinates in J_R and J_{R→S}, J_R^c and J_{R→S}^c respectively denote the c-th color channel of J_R and J_{R→S}, N(x) denotes a local neighborhood centered at x, and ||·||_1 denotes the L1 norm.
Preferably, the total variation loss of the real-domain and synthetic-domain defogging networks is calculated as follows:
L_rt = ||∂_h J_R||_1 + ||∂_v J_R||_1,  L_st = ||∂_h J_{R→S}||_1 + ||∂_v J_{R→S}||_1
wherein ∂_h denotes the horizontal gradient operator, ∂_v denotes the vertical gradient operator, J_R denotes the defogged image of a real foggy day image, J_{R→S} denotes the defogged image of a real-to-synthetic foggy day image, and ||·||_1 denotes the L1 norm.
Preferably, the consistency loss is calculated as follows:
Lc=||JR-JR→S||1
wherein J_R denotes the defogged image of a real foggy day image, J_{R→S} denotes the defogged image of a real-to-synthetic foggy day image, and ||·||_1 denotes the L1 norm.
To achieve the above object, according to a second aspect of the present invention, there is provided a domain-adaptive-based image defogging system including:
the training module is used for taking the synthetic foggy day image data set and the real foggy day image data set as a training set, and performing iterative training on the image defogging network to obtain a trained image defogging network;
the defogging module is used for inputting the real foggy day image to be detected into the trained image defogging network to obtain a defogging result;
the image defogging network comprises: the system comprises an image conversion module, a synthetic domain image defogging module and a real domain image defogging module;
the image conversion module is used for converting the original synthetic foggy day image into a real foggy day image to obtain a synthetic-to-real foggy day image, and simultaneously converting the original real foggy day image into a synthetic foggy day image to obtain a real-to-synthetic foggy day image;
the synthetic domain image defogging module is used for defogging the original synthetic foggy day image and the real-to-synthetic foggy day image respectively, to obtain the corresponding defogging results in the synthetic domain;
and the real domain image defogging module is used for defogging the original real foggy day image and the synthetic-to-real foggy day image respectively, to obtain the corresponding defogging results in the real domain.
In general, the above technical solution conceived by the present invention provides the following beneficial effects:
(1) An image defogging network consisting of an image conversion module, a synthetic-domain image defogging module and a real-domain image defogging module is constructed. The image conversion module effectively reduces the domain deviation between the synthetic domain and the real domain and improves the generalization of the image defogging model on the real domain; real foggy day images are fused into the training of both the synthetic-domain and real-domain defogging models to constrain training and further increase generalization, so that the defogged images are clearer and domain adaptation of image defogging is realized.
(2) The total loss function comprises a dark channel prior loss, a total variation loss and a consistency loss: the dark channel prior loss encourages most pixel values of the dark channel of the recovered image to be 0 or close to 0, the total variation loss encourages the pixels of the defogged image to be smoothly distributed, and the consistency loss keeps the outputs of the two defogging models consistent, so that minimizing the total loss function yields defogged images that are clearer with more distinct details.
Drawings
FIG. 1 is a schematic diagram of the image defogging network according to an embodiment of the present invention;
Fig. 2(a) is the synthetic foggy day image 1 to be detected according to an embodiment of the present invention;
Fig. 2(b) is the defogging effect diagram output by the conventional image defogging method NLD for the synthetic foggy day image 1 to be detected;
Fig. 2(c) is the defogging effect diagram output by the conventional image defogging method DehazeNet for the synthetic foggy day image 1 to be detected;
Fig. 2(d) is the defogging effect diagram output by the conventional image defogging method AOD-Net for the synthetic foggy day image 1 to be detected;
Fig. 2(e) is the defogging effect diagram output by the conventional image defogging method DCPDN for the synthetic foggy day image 1 to be detected;
Fig. 2(f) is the defogging effect diagram output by the conventional image defogging method GFN for the synthetic foggy day image 1 to be detected;
Fig. 2(g) is the defogging effect diagram output by the conventional image defogging method EPDN for the synthetic foggy day image 1 to be detected;
Fig. 2(h) is the defogging effect diagram output by the defogging method of the present invention for the synthetic foggy day image 1 to be detected;
Fig. 2(i) is the clear image corresponding to the synthetic foggy day image 1 to be detected;
Fig. 3(a) is the synthetic foggy day image 2 to be detected according to an embodiment of the present invention;
Fig. 3(b) is the defogging effect diagram output by the conventional image defogging method NLD for the synthetic foggy day image 2 to be detected;
Fig. 3(c) is the defogging effect diagram output by the conventional image defogging method DehazeNet for the synthetic foggy day image 2 to be detected;
Fig. 3(d) is the defogging effect diagram output by the conventional image defogging method AOD-Net for the synthetic foggy day image 2 to be detected;
Fig. 3(e) is the defogging effect diagram output by the conventional image defogging method DCPDN for the synthetic foggy day image 2 to be detected;
Fig. 3(f) is the defogging effect diagram output by the conventional image defogging method GFN for the synthetic foggy day image 2 to be detected;
Fig. 3(g) is the defogging effect diagram output by the conventional image defogging method EPDN for the synthetic foggy day image 2 to be detected;
Fig. 3(h) is the defogging effect diagram output by the defogging method of the present invention for the synthetic foggy day image 2 to be detected;
Fig. 3(i) is the clear image corresponding to the synthetic foggy day image 2 to be detected;
Fig. 4(a) is the real foggy day image 3 to be detected according to an embodiment of the present invention;
Fig. 4(b) is the defogging effect diagram output by the conventional image defogging method NLD for the real foggy day image 3 to be detected;
Fig. 4(c) is the defogging effect diagram output by the conventional image defogging method DehazeNet for the real foggy day image 3 to be detected;
Fig. 4(d) is the defogging effect diagram output by the conventional image defogging method AOD-Net for the real foggy day image 3 to be detected;
Fig. 4(e) is the defogging effect diagram output by the conventional image defogging method DCPDN for the real foggy day image 3 to be detected;
Fig. 4(f) is the defogging effect diagram output by the conventional image defogging method GFN for the real foggy day image 3 to be detected;
Fig. 4(g) is the defogging effect diagram output by the conventional image defogging method EPDN for the real foggy day image 3 to be detected;
Fig. 4(h) is the defogging effect diagram output by the defogging method of the present invention for the real foggy day image 3 to be detected;
Fig. 5(a) is the real foggy day image 4 to be detected according to an embodiment of the present invention;
Fig. 5(b) is the defogging effect diagram output by the conventional image defogging method NLD for the real foggy day image 4 to be detected;
Fig. 5(c) is the defogging effect diagram output by the conventional image defogging method DehazeNet for the real foggy day image 4 to be detected;
Fig. 5(d) is the defogging effect diagram output by the conventional image defogging method AOD-Net for the real foggy day image 4 to be detected;
Fig. 5(e) is the defogging effect diagram output by the conventional image defogging method DCPDN for the real foggy day image 4 to be detected;
Fig. 5(f) is the defogging effect diagram output by the conventional image defogging method GFN for the real foggy day image 4 to be detected;
Fig. 5(g) is the defogging effect diagram output by the conventional image defogging method EPDN for the real foggy day image 4 to be detected;
Fig. 5(h) is the defogging effect diagram output by the defogging method of the present invention for the real foggy day image 4 to be detected.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, the present invention provides a domain-adaptive based image defogging method, which includes the following steps:
and S1, taking the synthetic foggy day image data set and the real foggy day image data set as training sets, and performing iterative training on the image defogging network to obtain the trained image defogging network.
The image defogging network comprises: the device comprises an image conversion module, a synthesized domain image defogging module and a real domain image defogging module.
And the image conversion module is used for converting the original synthetic foggy day image into a real foggy day image to obtain a synthetic-to-real foggy day image, and simultaneously converting the original real foggy day image into a synthetic foggy day image to obtain a real-to-synthetic foggy day image.
And the synthetic domain image defogging module is used for defogging the original synthetic foggy day image and the real-to-synthetic foggy day image respectively, to obtain the corresponding defogging results in the synthetic domain.
And the real domain image defogging module is used for defogging the original real foggy day image and the synthetic-to-real foggy day image respectively, to obtain the corresponding defogging results in the real domain.
The "image" in the present invention refers to an image captured by a 2D camera, for example of natural scenes such as lakes and mountains, or man-made scenes such as streets, bridges and roads. A synthetic foggy day image is a foggy image synthesized from a camera image and its depth information. A real foggy day image is an image captured by a 2D camera in a real foggy environment. The "domain" in the present invention refers to an image data set satisfying a certain probability distribution, e.g., the synthetic domain or the real domain.
The image output by the image conversion module has the same size as the original input image. Preferably, the image conversion module includes a synthetic-to-real network G_{S→R} and a real-to-synthetic network G_{R→S}, both implemented as CycleGAN networks.
Preferably, each CycleGAN comprises a generator network and a discriminator network. The generator network comprises a convolution layer, residual blocks, an up-sampling layer, a convolution layer and a nonlinear activation layer which are sequentially connected, and performs style conversion on an input image (a synthetic/real foggy day image) to obtain a converted image (a synthetic-to-real/real-to-synthetic foggy day image). The discriminator network comprises convolution layers, batch normalization layers and nonlinear activation layers which are sequentially connected; it discriminates whether the distribution of the converted image is consistent with that of the target domain (the real foggy day distribution for G_{S→R}, the synthetic distribution for G_{R→S}), thereby pushing the generator network to produce converted images that better match the target distribution.
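As an illustration of the layer ordering described above, the following PyTorch sketch builds a minimal generator and discriminator. The channel widths, kernel sizes and block counts are assumptions for illustration; only the sequence of layer types follows the text.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two convolutions with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Conv -> residual blocks -> upsample -> conv -> nonlinear activation."""
    def __init__(self, ch=32, n_res=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            *[ResBlock(ch) for _ in range(n_res)],
            nn.Upsample(scale_factor=2, mode='nearest'),
            nn.Conv2d(ch, 3, 7, padding=3), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Stacked conv / batch-norm / LeakyReLU layers with a patch-wise output."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.BatchNorm2d(ch * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, padding=1))
    def forward(self, x):
        return self.net(x)
```

The stride-2 input convolution and the ×2 upsampling cancel out, so the converted image keeps the input size, matching the size-preservation property stated above.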
The synthetic domain and real domain image defogging modules share the same network structure and differ only in their inputs. Each image defogging module comprises a coding block and a decoding block connected in sequence.
The coding block comprises a convolution layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected; it extracts features from the original synthetic foggy day image and the real-to-synthetic foggy day image to obtain first clear-image features, and from the original real foggy day image and the synthetic-to-real foggy day image to obtain second clear-image features.
The decoding block comprises a convolution layer, a batch normalization layer, a nonlinear activation layer, a deconvolution (transposed convolution) layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected; it decodes the first clear-image features into the defogged image of the synthetic foggy day image, and the second clear-image features into the defogged image of the real foggy day image.
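The encoder-decoder structure of the defogging module can be sketched in the same spirit (channel counts and depth are illustrative assumptions; only the conv/BN/activation and deconv/BN/activation ordering follows the text):

```python
import torch
import torch.nn as nn

class DehazeNet(nn.Module):
    """Coding block (conv/BN/ReLU) followed by a decoding block
    (conv/BN/ReLU, then transposed conv/BN/ReLU), as described."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
            nn.BatchNorm2d(3), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.decoder(self.encoder(x))
```

The stride-2 convolution in the encoder is undone by the stride-2 transposed convolution in the decoder, so the defogged output has the same spatial size as the foggy input.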
During training, the synthetic domain image defogging module outputs the defogged image of the real-to-synthetic foggy day image, the real domain image defogging module outputs the defogged image of the original real foggy day image, and the consistency loss between these two results is calculated.
The total loss function of the domain-adaptive defogging network is calculated as follows:
L = L_tran + λ_m(L_rm + L_sm) + λ_d(L_rd + L_sd) + λ_t(L_rt + L_st) + λ_c·L_c
wherein L_tran denotes the loss function of the image conversion module, namely the generator and discriminator losses of the two CycleGANs; L_rm and L_sm respectively denote the mean square error losses of the real-domain and synthetic-domain defogging networks; L_rd and L_sd respectively denote the dark channel losses of the real-domain and synthetic-domain defogging networks; L_rt and L_st respectively denote the total variation losses of the real-domain and synthetic-domain defogging networks; L_c denotes the consistency loss. The parameters λ_m, λ_d, λ_t, λ_c balance the mean square error, dark channel, total variation and consistency loss terms respectively.
The dark channel loss is calculated as follows:
L_rd = || min_{y∈N(x)} min_c J_R^c(y) ||_1,  L_sd = || min_{y∈N(x)} min_c J_{R→S}^c(y) ||_1
wherein J_R denotes the defogged image of a real foggy day image, J_{R→S} denotes the defogged image of a real-to-synthetic foggy day image, x and y are pixel coordinates in J_R and J_{R→S}, J_R^c and J_{R→S}^c respectively denote the c-th color channel of J_R and J_{R→S}, N(x) denotes a local neighborhood centered at x, and ||·||_1 denotes the L1 norm.
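A differentiable version of this dark channel loss is commonly implemented as a channel-wise minimum followed by a local min-pool (a min-pool is a negated max-pool). The sketch below follows that pattern; the tensor layout and patch size are assumptions.

```python
import torch
import torch.nn.functional as F

def dark_channel_loss(J, patch=7):
    """L1 penalty on the dark channel of a dehazed image J, shape (N, 3, H, W).
    Minimizing it pushes most dark-channel pixels toward 0, per the prior."""
    min_c, _ = J.min(dim=1, keepdim=True)                 # min over color channels
    # local min over N(x) via -maxpool(-x), keeping the spatial size
    dc = -F.max_pool2d(-min_c, patch, stride=1, padding=patch // 2)
    return dc.abs().mean()
```

For a perfectly haze-free dark image the loss is 0, while a uniformly bright (hazy-looking) image is penalized.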
The total variation loss is calculated as follows:
L_rt = ||∂_h J_R||_1 + ||∂_v J_R||_1,  L_st = ||∂_h J_{R→S}||_1 + ||∂_v J_{R→S}||_1
wherein ∂_h denotes the horizontal gradient operator and ∂_v denotes the vertical gradient operator.
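A minimal sketch of the total variation term, taking L1 norms of horizontal and vertical finite differences (the mean reduction is an implementation assumption):

```python
import torch

def total_variation_loss(J):
    """L1 total variation of a batch of images J with shape (N, C, H, W):
    mean absolute horizontal gradient plus mean absolute vertical gradient."""
    dh = (J[:, :, :, 1:] - J[:, :, :, :-1]).abs().mean()  # horizontal differences
    dv = (J[:, :, 1:, :] - J[:, :, :-1, :]).abs().mean()  # vertical differences
    return dh + dv
```

A constant image has zero total variation, so this term rewards smoothly distributed pixels without forbidding sharp content entirely.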
The consistency loss is calculated as follows:
Lc=||JR-JR→S||1
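The consistency loss and the weighted combination of all terms can be sketched directly from the formulas above. The default λ values below are the ones given in this embodiment; the helper names are illustrative.

```python
import torch

def consistency_loss(J_R, J_RtoS):
    """L_c = ||J_R - J_{R→S}||_1 between the two modules' defogged outputs."""
    return (J_R - J_RtoS).abs().mean()

def total_loss(L_tran, L_rm, L_sm, L_rd, L_sd, L_rt, L_st, L_c,
               lam_m=10.0, lam_d=0.01, lam_t=0.001, lam_c=0.1):
    """L = L_tran + λ_m(L_rm+L_sm) + λ_d(L_rd+L_sd) + λ_t(L_rt+L_st) + λ_c·L_c."""
    return (L_tran + lam_m * (L_rm + L_sm) + lam_d * (L_rd + L_sd)
            + lam_t * (L_rt + L_st) + lam_c * L_c)
```

Because all terms are differentiable, the single scalar L can be backpropagated through both defogging networks and the conversion module at once.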
An embodiment of the present invention employs RESIDE, a benchmark containing synthetic and real-world foggy image data in 5 subsets: the Indoor Training Set (ITS), the Outdoor Training Set (OTS), the Synthetic Objective Testing Set (SOTS), Unannotated Real Hazy Images (URHI), and the Real-world Task-driven Testing Set (RTTS). For synthetic foggy day image data, this embodiment selects 6000 images as the training set, 3000 from ITS and 3000 from OTS. For real foggy day image data, this embodiment selects 1000 images from the URHI set as the training set. In the training phase, all images are randomly cropped to 256 × 256 and the pixel values are normalized to [-1, 1]. The training set is randomly divided into training subsets of equal size; in this embodiment each subset contains 1 image.
In this embodiment, the training subsets are trained one at a time, and one pass through all training subsets constitutes one iteration. This is repeated until the number of iterations reaches the upper limit, yielding the final weights and completing the training of the deep convolutional neural network. In this embodiment, the upper limit on the number of iterations is preferably 100.
The training process within one iteration is as follows: the network parameters of the domain-adaptive defogging network are trained with forward and backward propagation; forward propagation computes the loss function for each training subset, and backward propagation computes the corresponding gradients. In this embodiment λ_m, λ_d, λ_t, λ_c take the values 10, 0.01, 0.001 and 0.1 respectively. The loss function is minimized with mini-batch stochastic gradient descent, the weights are then updated, and the updated weights serve as the initial values for training the next training subset.
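The forward/backward update in this paragraph can be illustrated with a toy stand-in: a single convolution and an L1 objective replace the full network and total loss, and all names are hypothetical.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Conv2d(3, 3, 3, padding=1)          # stand-in for the defogging network
opt = torch.optim.SGD(model.parameters(), lr=1e-2)   # mini-batch SGD, as in the embodiment
x = torch.randn(4, 3, 16, 16)                        # a toy "training subset"
target = torch.zeros_like(x)

losses = []
for _ in range(10):
    opt.zero_grad()
    loss = (model(x) - target).abs().mean()          # forward pass: stand-in for the total loss L
    loss.backward()                                  # backward pass: gradients of L
    opt.step()                                       # weight update carried to the next subset
    losses.append(loss.item())
```

Each subset's updated weights initialize the next subset's step, exactly the hand-off described in the text.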
And S2, inputting the real foggy day image to be detected into the trained image defogging network to obtain a defogging result.
Figs. 2(a) and 3(a) are the synthetic foggy day images to be detected, Figs. 4(a) and 5(a) are the real foggy day images to be detected, Figs. 2-5(b)-(g) are the defogging effect diagrams output by the conventional image defogging methods NLD, DehazeNet, AOD-Net, DCPDN, GFN and EPDN, Figs. 2-5(h) are the defogging effect diagrams output by the image defogging model of the invention, and Figs. 2(i) and 3(i) are the clear images corresponding to the synthetic foggy day images to be detected. Compared with Figs. 2-5(b)-(g), the images in Figs. 2-5(h) clearly retain more details and are closer to real clear images. Therefore, defogging foggy day images with the proposed image defogging model effectively improves image quality and provides high-quality input images for subsequent object recognition and scene understanding.
The image defogging method provided by the invention can be applied to the fields of automatic driving, video monitoring, robots and the like.
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, and all changes, equivalents and modifications that fall within the spirit and scope of the invention are therefore intended to be embraced therein.
Claims (4)
1. A domain-adaptive based image defogging method is characterized by comprising the following steps:
S1, taking a synthetic foggy day image data set and a real foggy day image data set as a training set, and performing iterative training on a domain adaptive image defogging network to obtain a trained domain adaptive image defogging network;
S2, inputting the real foggy day image to be detected into the real domain image defogging module in the trained domain adaptive image defogging network to obtain a defogging result;
the domain-adaptive image defogging network comprises: the system comprises an image conversion module, a synthetic domain image defogging module and a real domain image defogging module;
the image conversion module is used for converting the original synthetic foggy day image into a real foggy day image to obtain a synthetic-to-real foggy day image, and simultaneously converting the original real foggy day image into a synthetic foggy day image to obtain a real-to-synthetic foggy day image;
the synthetic domain image defogging module is used for respectively defogging the original synthetic foggy day image and the real-to-synthetic foggy day image to obtain the defogging results of the corresponding synthetic foggy day images;
the real domain image defogging module is used for respectively defogging the original real foggy day image and the synthetic-to-real foggy day image to obtain the defogging results of the corresponding real foggy day images;
the image conversion module adopts a CycleGAN network, and comprises two generator networks and two discriminator networks;
the generator network comprises a convolution layer, a residual block, an up-sampling layer, a convolution layer and a nonlinear activation layer which are sequentially connected, and is used for carrying out style conversion on an input image to obtain a converted image;
the discriminator network comprises a convolution layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected, and is used for discriminating whether the distribution of the converted image is consistent with that of the original image, thereby assisting the generator network in generating the converted image;
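As an illustration of the cycle-consistency idea behind the conversion module, the sketch below uses two toy invertible maps as hypothetical stand-ins for the two CycleGAN generators (the actual generators are the convolutional networks described above; these linear maps exist only to make the loss computation concrete):

```python
import numpy as np

def g_syn2real(x):   # hypothetical stand-in for the synthetic-to-real generator
    return 0.8 * x + 0.1

def g_real2syn(x):   # hypothetical stand-in for the real-to-synthetic generator
    return (x - 0.1) / 0.8

def cycle_consistency_loss(x):
    """L_cyc = mean |G_real2syn(G_syn2real(x)) - x|: an image translated
    to the other domain and back should reproduce the original."""
    reconstructed = g_real2syn(g_syn2real(x))
    return np.abs(reconstructed - x).mean()

x_syn = np.random.rand(16, 16, 3)      # a small synthetic "image"
print(cycle_consistency_loss(x_syn))   # near zero: the toy maps are exact inverses
```

With real generators the loss is not exactly zero; minimizing it during training is what constrains the two style-transfer directions to be mutual inverses.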
the calculation formula of the total loss function of the domain adaptive image defogging network is as follows:
L = L_tran + λ_m(L_rm + L_sm) + λ_d(L_rd + L_sd) + λ_t(L_rt + L_st) + λ_c·L_c
wherein L_tran denotes the loss function of the image conversion module; L_rm and L_sm denote the mean square error losses between the defogged images and the corresponding clear images for the real-domain and synthetic-domain image defogging modules, respectively; L_rd and L_sd denote the dark channel prior losses of the real-domain and synthetic-domain image defogging modules, respectively; L_rt and L_st denote the total variation losses of the real-domain and synthetic-domain image defogging modules, respectively; L_c denotes the consistency loss; and the parameters λ_m, λ_d, λ_t and λ_c are weights balancing the mean square error loss, the dark channel loss, the total variation loss and the consistency loss, respectively;
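A minimal sketch of how the component losses combine into the total loss, using the balancing weights λ_m=10, λ_d=0.01, λ_t=0.001, λ_c=0.1 reported in the description. The individual loss values passed in here are placeholders; in training they come from the modules' forward passes.

```python
def total_loss(L_tran, L_rm, L_sm, L_rd, L_sd, L_rt, L_st, L_c,
               lam_m=10.0, lam_d=0.01, lam_t=0.001, lam_c=0.1):
    """Weighted sum of the conversion, MSE, dark-channel, total-variation
    and consistency losses, as in the total loss formula."""
    return (L_tran
            + lam_m * (L_rm + L_sm)
            + lam_d * (L_rd + L_sd)
            + lam_t * (L_rt + L_st)
            + lam_c * L_c)

# Example with placeholder component losses (all 1.0):
print(total_loss(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0))
```

The weights keep the cheap-to-dominate terms (e.g. total variation, which sums over every pixel gradient) from overwhelming the conversion loss.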
the dark channel prior loss of the real-domain image defogging module is calculated as:
L_rd = || min_{y∈N(x)} min_c J_R^c(y) ||_1 + || min_{y∈N(x)} min_c J_{R→S}^c(y) ||_1
and the synthetic-domain loss L_sd is defined analogously;
the total variation loss of the real-domain image defogging module is calculated as:
L_rt = ||∂_h J_R||_1 + ||∂_v J_R||_1 + ||∂_h J_{R→S}||_1 + ||∂_v J_{R→S}||_1
and the synthetic-domain loss L_st is defined analogously;
wherein J_R denotes the defogged image of the real foggy day image, J_{R→S} denotes the defogged image of the real-to-synthetic foggy day image, x and y are pixel coordinates in J_R and J_{R→S}, J_R^c and J_{R→S}^c denote the c-th color channel of J_R and J_{R→S} respectively, N(x) denotes a local neighborhood centered at x, ||·||_1 denotes the L1 norm, ∂_h denotes the horizontal gradient operator, and ∂_v denotes the vertical gradient operator.
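The dark channel prior and total variation terms described above can be sketched in NumPy as follows. This is an illustrative version, not the patent's implementation: the 3×3 neighborhood size is an assumption (the patent does not specify it), and each function is shown for a single dehazed image.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: min over color channels c, then min over the local
    neighborhood N(x) centered at each pixel (edges handled by clipping)."""
    h, w, _ = img.shape
    mins = img.min(axis=2)                      # min over channels
    r = patch // 2
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = mins[max(0, i - r):i + r + 1,
                             max(0, j - r):j + r + 1].min()
    return out

def dcp_loss(dehazed):
    """L1 norm of the dark channel of a dehazed image."""
    return np.abs(dark_channel(dehazed)).sum()

def tv_loss(dehazed):
    """Total variation: L1 norms of horizontal and vertical gradients."""
    dh = np.abs(np.diff(dehazed, axis=1)).sum()
    dv = np.abs(np.diff(dehazed, axis=0)).sum()
    return dh + dv

img = np.random.rand(8, 8, 3)
print(dcp_loss(img), tv_loss(img))
```

Both losses act as priors on the dehazed output: a haze-free image tends to have a near-zero dark channel, and total variation penalizes noisy, high-frequency artifacts.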
2. The method of claim 1, wherein the synthetic domain image defogging module and the real domain image defogging module have the same network structure but different inputs;
the synthetic domain image defogging module comprises a coding block and a decoding block which are connected in sequence;
the coding block comprises a convolution layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected, and is used for respectively extracting features from the original synthetic foggy day image and the real-to-synthetic foggy day image to obtain corresponding first clear image features;
the decoding block comprises a convolution layer, a batch normalization layer, a nonlinear activation layer, a deconvolution layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected, and is used for deconvolving the corresponding first clear image features to obtain the defogged image of the corresponding synthetic foggy day image;
the real domain image defogging module comprises a coding block and a decoding block which are connected in sequence;
the coding block comprises a convolution layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected, and is used for respectively extracting features from the original real foggy day image and the synthetic-to-real foggy day image to obtain corresponding second clear image features;
the decoding block comprises a convolution layer, a batch normalization layer, a nonlinear activation layer, a deconvolution layer, a batch normalization layer and a nonlinear activation layer which are sequentially connected, and is used for deconvolving the corresponding second clear image features to obtain the defogged image of the corresponding real foggy day image.
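A shape-level sketch of the encode/decode flow in this claim. The 2×2 average below stands in for a stride-2 convolution and the nearest-neighbour repetition stands in for the deconvolution; both stand-ins, and all shapes, are illustrative rather than the claimed layers.

```python
import numpy as np

def encode(img):
    """Halve spatial resolution, as a stride-2 encoder layer would
    (2x2 average pooling used as a stand-in for strided convolution)."""
    h, w, c = img.shape
    cropped = img[:h - h % 2, :w - w % 2]
    return cropped.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def decode(feat):
    """Restore spatial resolution, as a deconvolution layer would
    (nearest-neighbour upsampling used as a stand-in)."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

foggy = np.random.rand(32, 32, 3)
features = encode(foggy)        # downsampled feature map
dehazed = decode(features)      # back at the input resolution
print(features.shape, dehazed.shape)
```

The point of the round trip is that the decoding block must return the feature map to the input resolution so the output is a full-size defogged image.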
3. The method of claim 1, wherein the consistency loss is calculated as follows:
L_c = ||J_R − J_{R→S}||_1.
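A minimal sketch of this consistency loss, with random placeholder arrays standing in for the two dehazed outputs J_R and J_{R→S}:

```python
import numpy as np

def consistency_loss(j_r, j_r2s):
    """L_c = ||J_R - J_{R->S}||_1: the two defogging paths for the same
    real image should agree."""
    return np.abs(j_r - j_r2s).sum()

j_r = np.random.rand(8, 8, 3)
assert consistency_loss(j_r, j_r) == 0.0     # identical outputs give zero loss
print(consistency_loss(j_r, j_r + 0.1))      # grows with the disagreement
```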
4. a domain-adaptive based image defogging system comprising:
a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is configured to read executable instructions stored in the computer-readable storage medium and execute the domain adaptive image defogging method according to claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010367514.6A CN111539896B (en) | 2020-04-30 | 2020-04-30 | Domain-adaptive-based image defogging method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010367514.6A CN111539896B (en) | 2020-04-30 | 2020-04-30 | Domain-adaptive-based image defogging method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111539896A (en) | 2020-08-14 |
CN111539896B (en) | 2022-05-27 |
Family
ID=71975344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010367514.6A Active CN111539896B (en) | 2020-04-30 | 2020-04-30 | Domain-adaptive-based image defogging method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111539896B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111815509B (en) * | 2020-09-02 | 2021-01-01 | 北京邮电大学 | Image style conversion and model training method and device |
CN112365428B (en) * | 2020-12-03 | 2022-04-01 | 华中科技大学 | DQN-based highway monitoring video defogging method and system |
CN115032817B (en) * | 2022-05-27 | 2023-06-27 | 北京理工大学 | Real-time video defogging intelligent glasses for severe weather use and control method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108269244A (en) * | 2018-01-24 | 2018-07-10 | 东北大学 | An image defogging system based on deep learning and prior constraints |
CN109472818A (en) * | 2018-10-17 | 2019-03-15 | 天津大学 | An image defogging method based on a deep neural network |
CN109801232A (en) * | 2018-12-27 | 2019-05-24 | 北京交通大学 | A single-image defogging method based on deep learning |
CN110148088A (en) * | 2018-03-14 | 2019-08-20 | 北京邮电大学 | Image processing method, image rain removing method, device, terminal and medium |
CN110517203A (en) * | 2019-08-30 | 2019-11-29 | 山东工商学院 | A defogging method based on reference image reconstruction |
CN110992275A (en) * | 2019-11-18 | 2020-04-10 | 天津大学 | A refined single-image rain removal method based on a generative adversarial network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10885611B2 (en) * | 2016-04-07 | 2021-01-05 | Carmel Haifa University Economic Corporation Ltd. | Image dehazing and restoration |
- 2020-04-30 CN CN202010367514.6A patent/CN111539896B/en active Active
Non-Patent Citations (4)
Title |
---|
"Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing";Engin D等;《arXiv:1805.05308v1》;20180514;全文 * |
"Real-Time Monocular Depth Estimation Using Synthetic Data With Domain Adaptation via Image Style Transfer";Amir Atapour-Abarghouei等;《Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)》;20181231;第2018年卷;全文 * |
"Semi-Supervised Image Dehazing";Li L等;《IEEE Transactions on Image Processing》;20191115;全文 * |
"含有大片天空区域图像的去雾算法";宋瑞霞等;《计算机辅助设计与图形学学报》;20191130;第31卷(第11期);全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN111539896A (en) | 2020-08-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111539896B (en) | Domain-adaptive-based image defogging method and system | |
CN108876735B (en) | Real image blind denoising method based on depth residual error network | |
CN112184577B (en) | Single image defogging method based on multiscale self-attention generation countermeasure network | |
CN109584170B (en) | Underwater image restoration method based on convolutional neural network | |
Hu et al. | Underwater image restoration based on convolutional neural network | |
CN110288550B (en) | Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition | |
CN109993804A (en) | A road scene defogging method based on a conditional generative adversarial network | |
CN107958465A (en) | A single-image defogging method based on deep convolutional neural networks | |
CN112150379A (en) | Image defogging method and device for enhancing generation of countermeasure network based on perception discrimination | |
CN114463218B (en) | Video deblurring method based on event data driving | |
CN111986108A (en) | Complex sea-air scene image defogging method based on generation countermeasure network | |
CN110838095B (en) | Single image rain removing method and system based on cyclic dense neural network | |
CN113887349A (en) | Road area image identification method based on image and point cloud fusion network | |
CN113034361B (en) | Remote sensing image super-resolution reconstruction method based on improved ESRGAN | |
CN115376024A (en) | Semantic segmentation method for power accessory of power transmission line | |
CN114494821A (en) | Remote sensing image cloud detection method based on feature multi-scale perception and self-adaptive aggregation | |
Wei et al. | Sidgan: Single image dehazing without paired supervision | |
Babu et al. | An efficient image dahazing using Googlenet based convolution neural networks | |
Wang et al. | Afdn: Attention-based feedback dehazing network for UAV remote sensing image haze removal | |
CN115293992B (en) | Polarization image defogging method and device based on unsupervised weight depth model | |
CN114820395B (en) | Underwater image enhancement method based on multi-field information fusion | |
CN116468625A (en) | Single image defogging method and system based on pyramid efficient channel attention mechanism | |
Wang et al. | Gridformer: Residual dense transformer with grid structure for image restoration in adverse weather conditions | |
CN115439738A (en) | Underwater target detection method based on self-supervision cooperative reconstruction | |
CN114140361A (en) | Generation type anti-network image defogging method fusing multi-stage features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |