CN110517203B - Defogging method based on reference image reconstruction - Google Patents
- Publication number: CN110517203B
- Application number: CN201910815133.7A
- Authority
- CN
- China
- Prior art keywords
- image
- foggy
- defogging
- network
- layer
- Prior art date: 2019-08-30
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06N3/04 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06T5/73 — Image enhancement or restoration; deblurring; sharpening
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a defogging method based on reference image reconstruction. The method comprises the following steps: first, haze-free color images and depth-image datasets of different scenes under different illumination are acquired with a depth camera and preliminarily preprocessed; an atmospheric light value and a transmissivity are set, foggy images are synthesized from the haze-free images, and noise-reduction preprocessing is applied to the foggy dataset; then, for each foggy image, an image with similar content is selected as its reference image; finally, an end-to-end convolutional neural network model is constructed in which the network successively extracts the haze features, removes the haze features, adaptively migrates the textures of the reference image, and reconstructs a high-resolution defogged image. Unlike most end-to-end deep learning methods, the method does not incorporate an explicit atmospheric physical model; instead, it removes haze implicitly by constructing a defogging network and introducing a reference image, enhancing the details of the image while defogging it.
Description
Technical Field
The invention relates to an image processing method, in particular to an image defogging method based on deep learning.
Background
Fog is an atmospheric phenomenon formed by water droplets or large numbers of minute particles suspended in the air. Images photographed in such an environment generally suffer from color distortion, severe degradation of contrast, loss of scene details, and similar problems. These problems can severely impair the performance of systems that rely on optical imaging instruments, such as urban traffic systems, outdoor monitoring systems and target recognition systems. Taking effective measures to remove haze from images and restore their sharpness has therefore become increasingly necessary.
The image defogging process takes a hazy image as input, eliminates the degradation effects and finally restores a haze-free image. Current methods for processing foggy images fall mainly into three categories. The first is algorithms based on image enhancement. Such algorithms essentially use image processing operations to change contrast or brightness, thereby improving the visual effect of the image. However, they pay no attention to the cause of the degradation, which leads to problems such as incomplete defogging and easy distortion of the image. The second category is algorithms based on image restoration. These build on the atmospheric scattering principle and restore the foggy image by inverting the fog degradation process. Images restored in this way look realistic, are closer to the original scene before degradation, handle complex scenes well and preserve scene information more completely. The third category is deep learning based methods. Research on deep-learning image defogging is currently very active; some methods build on an atmospheric physical model, while others directly learn the mapping. Such methods can automatically learn complex input-output relationships from observed data, capturing regularities that are imperceptible to humans. Although they have achieved satisfactory results, they rest on strong assumptions and require a variety of parameters related to image formation that are not always available. Owing to the unpredictability of scene conditions, they fail when these prior assumptions do not hold, for example in underwater environments, high-light or low-light environments, or scenes where the haze is not entirely white.
Disclosure of Invention
(I) Object of the invention
The invention provides a defogging method based on reference image reconstruction, and aims to solve the problems of the low resolution of results produced by traditional defogging methods and the low operating efficiency caused by introducing too many parameters.
(II) Technical scheme
In order to achieve the above purpose, the present invention adopts the following technical scheme:
First, a dataset of foggy images is collected, and high-resolution images whose content is similar to the images in the dataset are taken as reference images; then an end-to-end convolutional neural network model is constructed, which adaptively migrates the textures of the reference image according to the texture similarity between the foggy image and the reference image, enhancing the details of the image while defogging it.
The method comprises the following specific steps:
and step 1, manufacturing and synthesizing a foggy image data set and a real foggy image data set.
The step 1 specifically comprises the following steps:
1.1) Using a depth camera, acquire ground-truth clean images with different brightness in different scenes together with the corresponding depth maps, 5000 pairs of data in total; the acquired scenes are mainly divided into indoor and outdoor scenes, and the brightness of special scenes is divided into high brightness and low brightness;
1.2) Preprocess the acquired paired images, including aligning the depth maps with the color images and fixing their size;
1.3) Given a haze-free image $J(x)$, a scene depth map $d(x)$, an atmospheric light value $A$ and an atmospheric scattering coefficient $\beta$, compute the transmission map $t(x) = e^{-\beta d(x)}$ as the ground-truth transmission map. Then synthesize the foggy image according to the atmospheric physical model; the obtained foggy image is expressed as $I(x) = J(x)\,t(x) + A\,(1 - t(x))$.
1.4) Collect 5000 real hazy images online with Google Images as the real hazy image dataset.
Step 2, selecting a training set and a test set and performing fast noise-reduction preprocessing on each.
The step 2 specifically includes the following steps:
2.1) Randomly select 3000 pairs from the synthetic foggy dataset as the training set and, to avoid dependence of the trained network model on a single dataset, randomly select a further 1000 pairs from the NYU2 Depth dataset for training; the test set comprises 2000 images from the synthetic foggy dataset, 950 images from the NYU2 Depth dataset, and the 5000 real foggy images collected online;
2.2) Denoise the foggy images with the FFDNet network: each foggy image is input into the FFDNet denoising network, and the denoised foggy image is output.
Step 3, producing a reference image dataset for the foggy dataset.
The step 3 specifically includes the following steps:
3.1) For each picture in the foggy dataset, use the Baidu reverse image search function to manually and quickly find 5 corresponding similar fog-free high-resolution reference images;
3.2) Batch-resize the reference images and the corresponding haze-free images to 256×256 pictures.
Step 4, constructing a defogging network model. The whole defogging network consists of two parts: the first part realizes the basic image defogging function, and the second part realizes the texture-detail enhancement and recovery function.
The step 4 specifically includes the following steps:
4.1) The first part adopts an encoder-functional layer-decoder network structure. The encoder is responsible for feature extraction. It consists of down-sampling layers and convolution layers, where each down-sampling layer comprises a convolution layer, a normalization layer and an activation layer. A convolution layer with stride 2 is used for down-sampling in place of the fixed pooling layer of a conventional convolutional network. The output feature of each convolution layer is computed as: $F = W * X + b$
where $X$ represents the image matrix, $W$ the convolution kernel, $*$ the convolution operation and $b$ the bias value. A batch normalization layer is added after each convolution layer in the network; it normalizes the values of each feature over all samples, stabilizing model training and accelerating convergence. Batch normalization is defined as: $\mathrm{BN}(x) = \gamma\,\frac{x - \mu}{\sigma} + \beta$
where $x$ is the input feature map, $\mu$ the mean of $x$ and $\sigma$ its standard deviation; 2 learnable parameters $\gamma$ and $\beta$ per layer realize scaling and shifting, changing the value interval. To speed up training of the convolutional neural network, the ReLU function is used for activation, defined as: $f(x) = \max(0, x)$
the intermediate functional layer is responsible for implementing the function of fog removal. It consists of 3 residual blocks and a skip connection. Each residual block is a filterAnd different bypasses use different convolution kernels. Operations may be defined as:
where $W$ and $b$ denote weight and bias respectively, the superscript denotes the index of the layer they belong to, the subscript denotes the size of the convolution kernel used in that layer, and $[\,\cdot\,,\cdot\,]$ denotes the concatenation (cascade) operation; $x^{n-1}$ and $x^{n}$ denote the input and output of the residual block. Finally, the background layer is removed by element-wise subtraction to realize haze removal;
the reconstruction network is thus a decoder corresponding to the encoder, which is responsible for recovering the defogged image, consisting of 4 upsampled layers. Wherein the up-sampling layer consists of a deconvolution layer with a step size of 2, a batch normalization layer and a nonlinear activation layer. The size of the feature map of each up-sampling unit input up-sampling unit is doubled after the feature map is subjected to the deconvolution process;
4.2) The second part consists of an identical encoder. The layers of the two encoders are connected by feature matching blocks, and the matched features are cascaded into the decoder, so that an image with richer details is obtained on top of the defogging result. Similar-feature exchange is defined as: $D_{i,j} = \left\langle \frac{B_i}{\lVert B_i \rVert},\ \frac{B_j}{\lVert B_j \rVert} \right\rangle$
where $B_i$ and $B_j$ denote the $i$-th and $j$-th patches sampled from the neural feature maps, $\mathcal{B}$ denotes the neural feature space, and $D_{i,j}$ denotes the similarity between the $i$-th and $j$-th patches. $T$ denotes the exchanged feature map; the feature exchange may be expressed as: $T_i = B_{j^{*}},\ \ j^{*} = \arg\max_{j} D_{i,j}$
and step 5, training a network model, and testing by using a testing set.
The step 5 specifically includes the following steps:
5.1) Train the network with the synthetic foggy training set. The training objective function represents the average error between the defogged image estimated by the network and the ground-truth haze-free image. Let $\hat{J}$ denote the defogged image estimated by the network and $J$ the ground-truth haze-free image; the objective is: $L = \frac{1}{N}\sum_{i=1}^{N}\bigl\lVert \hat{J}_i - J_i \bigr\rVert^{2}$
where $\hat{J}_i$ is the defogged image estimated by the network for the $i$-th foggy image of the training set, and $J_i$ is the ground-truth haze-free image of the $i$-th foggy image of the training set;
5.2) Test the performance of the network with the test dataset and evaluate it using the subjective and objective evaluation indexes PSNR and SSIM.
The invention has the beneficial effects that:
(1) The invention realizes end-to-end image defogging and avoids the low operating efficiency caused by introducing too many hyper-parameters;
(2) The invention uses similar reference images to recover the details of the defogged image, thereby enhancing the visibility of targets in the image;
the method can process images of different scenes, namely, indoor and outdoor synthesis of foggy images and foggy images of brighter areas and darker areas in the real world, and the network model has better defogging effect than the traditional neural network method.
Description of the drawings:
FIG. 1 is a schematic overall flow diagram of the image defogging method based on a residual network of the present invention;
FIG. 2 is a schematic diagram of an image defogging network constructed in accordance with the present invention;
FIG. 3 is a defogging result image obtained in a brighter scene according to the present invention;
FIG. 4 is a defogging result image obtained in a darker scene according to the present invention;
FIG. 5 is a defogging result image obtained under an indoor scene according to the present invention;
fig. 6 is a defogging result image obtained in an outdoor scene according to the present invention.
The specific embodiment is as follows:
The invention is further described below with reference to the drawings and examples.
As shown in FIG. 1, the method comprises the following steps.
1) Creating a synthetic foggy image dataset and a real foggy image dataset:
1.1) Using a depth camera, acquire ground-truth clean images with different brightness in different scenes together with the corresponding depth maps, 5000 pairs of data in total; the acquired scenes are mainly divided into indoor and outdoor scenes, and the brightness of special scenes is divided into high brightness and low brightness;
1.2) Preprocess the acquired paired images, including aligning the depth maps with the color images and fixing their size;
1.3) Given a haze-free image $J(x)$, a scene depth map $d(x)$, an atmospheric light value $A$ and an atmospheric scattering coefficient $\beta$, compute the transmission map $t(x) = e^{-\beta d(x)}$ as the ground-truth transmission map. Then synthesize the foggy image according to the atmospheric physical model; the obtained foggy image is expressed as $I(x) = J(x)\,t(x) + A\,(1 - t(x))$.
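For illustration, this synthesis step can be sketched in a few lines of NumPy; the default atmospheric light value and scattering coefficient below are example values of the sketch, not values fixed by the method.

```python
import numpy as np

def synthesize_hazy(J, d, A=0.8, beta=1.0):
    """Synthesize a foggy image from a clean image and its depth map
    using t(x) = exp(-beta * d(x)) and I(x) = J(x)t(x) + A(1 - t(x)).

    J    : clean image, float array in [0, 1], shape (H, W, 3)
    d    : scene depth map, float array, shape (H, W)
    A    : atmospheric light value (example scalar)
    beta : atmospheric scattering coefficient (example value)
    """
    t = np.exp(-beta * d)            # ground-truth transmission map
    t3 = t[..., None]                # broadcast over the RGB channels
    I = J * t3 + A * (1.0 - t3)      # atmospheric scattering model
    return np.clip(I, 0.0, 1.0), t
```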
1.4) Collect 5000 real hazy images online with Google Images as the real hazy image dataset.
2) Selecting a training set and a test set and performing fast noise-reduction preprocessing on each:
2.1) Randomly select 3000 pairs from the synthetic foggy dataset as the training set and, to avoid dependence of the trained network model on a single dataset, randomly select a further 1000 pairs from the NYU2 Depth dataset for training; the test set comprises 2000 images from the synthetic foggy dataset, 950 images from the NYU2 Depth dataset, and the 5000 real foggy images collected online;
2.2) Denoise the foggy images with the FFDNet network: each foggy image is input into the FFDNet denoising network, and the denoised foggy image is output.
3) Producing a reference image dataset for the foggy dataset:
3.1) For each picture in the foggy dataset, use the Baidu reverse image search function to manually and quickly find 5 corresponding similar fog-free high-resolution reference images;
3.2) Batch-resize the reference images and the corresponding haze-free images to 256×256 pictures.
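Step 3.2) is a plain batch resize; a minimal Pillow sketch follows, in which the directory names and the PNG extension are placeholders.

```python
from pathlib import Path
from PIL import Image

def batch_resize(src_dir, dst_dir, size=(256, 256)):
    """Resize every image in src_dir to 256x256 and write it to dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(path).convert("RGB")
        img.resize(size, Image.BICUBIC).save(out / path.name)

# e.g. batch_resize("reference_raw/", "reference_256/")  # placeholder paths
```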
4) Constructing a defogging network model, wherein the whole defogging network consists of two parts: the first part realizes the basic image defogging function, and the second part realizes the texture-detail enhancement and recovery function;
4.1) The first part adopts an encoder-functional layer-decoder network structure. The encoder is responsible for feature extraction. It consists of down-sampling layers and convolution layers, where each down-sampling layer comprises a convolution layer, a normalization layer and an activation layer. A convolution layer with stride 2 is used for down-sampling in place of the fixed pooling layer of a conventional convolutional network. The output feature of each convolution layer is computed as: $F = W * X + b$
where $X$ represents the image matrix, $W$ the convolution kernel, $*$ the convolution operation and $b$ the bias value. A batch normalization layer is added after each convolution layer in the network; it normalizes the values of each feature over all samples, stabilizing model training and accelerating convergence. Batch normalization is defined as: $\mathrm{BN}(x) = \gamma\,\frac{x - \mu}{\sigma} + \beta$
where $x$ is the input feature map, $\mu$ the mean of $x$ and $\sigma$ its standard deviation; 2 learnable parameters $\gamma$ and $\beta$ per layer realize scaling and shifting, changing the value interval. To speed up training of the convolutional neural network, the ReLU function is used for activation, defined as: $f(x) = \max(0, x)$
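For illustration, a down-sampling unit of the kind described above (stride-2 convolution in place of pooling, batch normalization, ReLU) might look as follows in PyTorch; the 3×3 kernel size and the channel widths are assumptions of the sketch.

```python
import torch.nn as nn

class DownBlock(nn.Module):
    """Stride-2 convolution + batch normalization + ReLU activation;
    the strided convolution replaces a fixed pooling layer and
    halves the spatial resolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),  # F = W * X + b
            nn.BatchNorm2d(out_ch),  # BN(x) = gamma * (x - mu) / sigma + beta
            nn.ReLU(inplace=True),   # f(x) = max(0, x)
        )

    def forward(self, x):
        return self.body(x)
```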
the intermediate functional layer is responsible for implementing the function of fog removal. It consists of 3 residual blocks and a skip connection. Each residual block is a filterAnd different bypasses use different convolution kernels. Operations may be defined as:
where $W$ and $b$ denote weight and bias respectively, the superscript denotes the index of the layer they belong to, the subscript denotes the size of the convolution kernel used in that layer, and $[\,\cdot\,,\cdot\,]$ denotes the concatenation (cascade) operation; $x^{n-1}$ and $x^{n}$ denote the input and output of the residual block. Finally, the background layer is removed by element-wise subtraction to realize haze removal;
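A sketch of the two-bypass block follows. The 1×1 convolution that fuses the concatenated bypasses back to the input width and the placement of the activation are assumptions of the sketch; the element-wise subtraction that removes the estimated background layer would follow a stack of such blocks.

```python
import torch
import torch.nn as nn

class TwoBypassResBlock(nn.Module):
    """Two parallel bypasses with 3x3 and 5x5 kernels; their outputs
    are concatenated (the cascade operation) and fused back to the
    input channel width."""

    def __init__(self, ch):
        super().__init__()
        self.bypass3 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)  # W_3 * x + b_3
        self.bypass5 = nn.Conv2d(ch, ch, kernel_size=5, padding=2)  # W_5 * x + b_5
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)            # assumed 1x1 fusion
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        cat = torch.cat([self.bypass3(x), self.bypass5(x)], dim=1)  # concatenation
        return self.act(self.fuse(cat))
```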
the reconstruction network is thus a decoder corresponding to the encoder, which is responsible for recovering the defogged image, consisting of 4 upsampled layers. Wherein the up-sampling layer consists of a deconvolution layer with a step size of 2, a batch normalization layer and a nonlinear activation layer. The size of the feature map of each up-sampling unit input up-sampling unit is doubled after the feature map is subjected to the deconvolution process;
4.2) The second part consists of an identical encoder. The layers of the two encoders are connected by feature matching blocks, and the matched features are cascaded into the decoder, so that an image with richer details is obtained on top of the defogging result. Similar-feature exchange is defined as: $D_{i,j} = \left\langle \frac{B_i}{\lVert B_i \rVert},\ \frac{B_j}{\lVert B_j \rVert} \right\rangle$
where $B_i$ and $B_j$ denote the $i$-th and $j$-th patches sampled from the neural feature maps, $\mathcal{B}$ denotes the neural feature space, and $D_{i,j}$ denotes the similarity between the $i$-th and $j$-th patches. $T$ denotes the exchanged feature map; the feature exchange may be expressed as: $T_i = B_{j^{*}},\ \ j^{*} = \arg\max_{j} D_{i,j}$
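A dense-matching sketch of this feature exchange is given below: patches are compared through the inner product of their normalized versions, and each patch of the hazy feature map is replaced by its best-matching reference patch. The 3×3 patch size, the assumption that both feature maps share the same spatial size, and the averaging of overlapping patches are choices of this sketch.

```python
import torch
import torch.nn.functional as F

def feature_exchange(feat_hazy, feat_ref, patch=3):
    """D_ij = <B_i/||B_i||, B_j/||B_j||> between patches of the hazy and
    reference feature maps; T_i = B_{j*} with j* = argmax_j D_ij.

    feat_hazy, feat_ref : feature maps of shape (1, C, H, W)
    """
    pad = patch // 2
    # Sample all patches from both maps: one flattened patch per row.
    ref = F.unfold(feat_ref, patch, padding=pad).squeeze(0).t()    # (H*W, C*p*p)
    hazy = F.unfold(feat_hazy, patch, padding=pad).squeeze(0).t()

    sim = F.normalize(hazy, dim=1) @ F.normalize(ref, dim=1).t()   # D_{i,j}
    best = sim.argmax(dim=1)                                       # j* per hazy patch

    swapped = ref[best].t().unsqueeze(0)                           # exchanged map T (as patches)
    size = feat_hazy.shape[2:]
    # Fold overlapping patches back into a feature map, averaging the overlaps.
    out = F.fold(swapped, size, patch, padding=pad)
    weight = F.fold(torch.ones_like(swapped), size, patch, padding=pad)
    return out / weight
```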
5) Training the network model and testing with the test set:
5.1) Train the network with the synthetic foggy training set. The training objective function represents the average error between the defogged image estimated by the network and the ground-truth haze-free image. Let $\hat{J}$ denote the defogged image estimated by the network and $J$ the ground-truth haze-free image; the objective is: $L = \frac{1}{N}\sum_{i=1}^{N}\bigl\lVert \hat{J}_i - J_i \bigr\rVert^{2}$
where $\hat{J}_i$ is the defogged image estimated by the network for the $i$-th foggy image of the training set, and $J_i$ is the ground-truth haze-free image of the $i$-th foggy image of the training set;
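A single optimization step under this objective can be sketched as follows; the two-input model signature (hazy image plus reference image), the Adam optimizer and the learning rate are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, hazy, ref, clean):
    """One step minimizing L = (1/N) * sum_i ||J_hat_i - J_i||^2."""
    model.train()
    optimizer.zero_grad()
    dehazed = model(hazy, ref)         # network estimate J_hat
    loss = F.mse_loss(dehazed, clean)  # mean squared error vs. ground truth J
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
```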
5.2) Test the performance of the network with the test dataset and evaluate it using the subjective and objective evaluation indexes PSNR and SSIM; partial test results are shown in FIGS. 3, 4, 5 and 6.
FIG. 3 shows the processing result for a foggy image in a bright scene: FIG. 3(a) is the foggy image and FIG. 3(b) is the result of the present invention. FIG. 4 shows the processing result for a foggy image in a dark scene: FIG. 4(a) is the foggy image and FIG. 4(b) is the result of the present invention. FIG. 5 shows the processing result for a foggy image in an indoor scene: FIG. 5(a) is the foggy image and FIG. 5(b) is the result of the present invention. FIG. 6 shows the processing result for a foggy image in an outdoor scene: FIG. 6(a) is the foggy image and FIG. 6(b) is the result of the present invention. As the results in FIGS. 3-6 show, the invention has universality and performs well on images of special scenes.
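For the objective part of the evaluation in step 5.2), PSNR and SSIM can be computed, for example, with scikit-image as in the sketch below; inputs are assumed to be float images scaled to [0, 1].

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed, ground_truth):
    """Return (PSNR, SSIM) for one dehazed image against its ground truth.

    dehazed, ground_truth : float arrays in [0, 1], shape (H, W, 3)
    """
    psnr = peak_signal_noise_ratio(ground_truth, dehazed, data_range=1.0)
    ssim = structural_similarity(ground_truth, dehazed,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim
```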
In summary, the invention discloses a defogging method based on reference image reconstruction. Unlike most end-to-end deep learning methods, the method is not combined with an explicit atmospheric physical model; instead, it removes haze implicitly by constructing a defogging network and introducing a reference image, which avoids the low operating efficiency caused by estimating many hyper-parameters and enhances the details of the image while defogging it. While the foregoing detailed description of the embodiments of the present invention has been presented in conjunction with the drawings, it is not intended to limit the scope of the invention; those skilled in the art can make various modifications or variations without inventive effort that remain within the scope of the invention described herein.
Claims (4)
1. A defogging method based on reference image reconstruction, the method comprising:
step 1) producing a synthetic foggy image dataset and a real foggy image dataset;
step 2) selecting a training set and a test set, and performing fast noise-reduction preprocessing on each;
step 3) producing a reference image dataset for the foggy training set;
step 4) constructing a defogging network model, wherein the whole defogging network structure consists of two parts, the first part realizing an image defogging function and the second part realizing a texture-detail enhancement and recovery function;
step 5) training a network model, and testing by using a testing set;
wherein step 3), in which the reference image dataset is produced for the foggy dataset, comprises the following steps:
3.1) for each picture in the foggy dataset, using the Baidu reverse image search function to manually and quickly find 5 corresponding similar fog-free high-resolution reference images;
3.2) batch-resizing the reference images and the corresponding haze-free images to 256×256 pictures;
and wherein step 4), constructing the image defogging network model whose whole structure consists of two parts, the first part realizing the image defogging function and the second part realizing the texture-detail enhancement and recovery function, comprises the following steps:
4.1) the first part adopts an encoder-functional layer-decoder network structure; the encoder is responsible for feature extraction; it consists of 4 down-sampling layers, each comprising a convolution layer, a normalization layer and an activation layer; a convolution layer with stride 2 is adopted for down-sampling in place of the fixed pooling layer of a conventional convolutional network; the intermediate functional layer is responsible for fog elimination; it consists of residual blocks and a skip connection; each residual block is a two-bypass residual block with 3×3 and 5×5 filters respectively, and different bypasses use different convolution kernels; the reconstruction network is the decoder corresponding to the encoder, is responsible for recovering the defogged image and consists of 4 up-sampling layers; each up-sampling layer consists of a deconvolution layer with stride 2, a batch normalization layer and a nonlinear activation layer; the feature map input to each up-sampling unit is doubled in size by the deconvolution process;
wherein each residual block is a two-bypass residual block with 3×3 and 5×5 filters respectively, different bypasses use different convolution kernels, and the operation is defined as: $x^{n} = \left[\,W_{3}^{n} * x^{n-1} + b_{3}^{n},\; W_{5}^{n} * x^{n-1} + b_{5}^{n}\,\right]$
where $W$ and $b$ represent weight and bias respectively, the superscript represents the index of the layer they belong to, the subscript represents the size of the convolution kernel used in that layer, and $[\,\cdot\,,\cdot\,]$ represents the cascade (concatenation) operation; $x^{n-1}$ and $x^{n}$ represent the input and output of the residual block; finally the background layer is removed by element-wise subtraction to realize the haze removal function;
4.2) the second part consists of an identical encoder; the layers of the two encoders are connected by feature matching blocks, and the matched features are respectively cascaded into the decoder, so that an image with richer details is obtained on the basis of defogging;
similar-feature exchange is defined as: $D_{i,j} = \left\langle \frac{B_i}{\lVert B_i \rVert},\ \frac{B_j}{\lVert B_j \rVert} \right\rangle$
wherein $B_i$ and $B_j$ represent the $i$-th and $j$-th patches sampled from the neural feature map, $\mathcal{B}$ represents the neural feature space and $D_{i,j}$ represents the similarity between the $i$-th and $j$-th patches; $T$ represents the exchanged feature map, and the feature exchange can be represented by: $T_i = B_{j^{*}},\ \ j^{*} = \arg\max_{j} D_{i,j}$
2. The defogging method based on reference image reconstruction of claim 1, wherein step 1) collects ground-truth clean images with different brightness in different scenes together with the corresponding depth maps and preprocesses the acquired paired images, comprising the following steps:
1.1 Data acquisition is carried out by using a depth camera, 5000 pairs of data are acquired altogether, the acquired scenes are divided into indoor scenes and outdoor scenes, and the brightness of the scenes is divided into high brightness and low brightness;
1.2 Preprocessing the acquired paired images, and carrying out alignment and fixed-size processing on the depth map and the color image;
1.3) given a haze-free image, a scene depth map, an atmospheric light value and an atmospheric scattering coefficient, synthesizing the foggy image according to the atmospheric physical model formula;
1.4) collecting 5000 real hazy images online with Google Images as the real hazy image dataset.
3. The defogging method based on reference image reconstruction of claim 1, wherein the step 2) is to select a training set and a test set, and perform rapid noise reduction pretreatment on the training set and the test set respectively, and the method comprises the following steps:
2.1) randomly selecting 3000 pairs from the synthetic foggy dataset as the training set and, to avoid dependence of the trained network model on a single dataset, randomly selecting a further 1000 pairs from the NYU2 Depth dataset for training; the test set comprises 2000 images from the synthetic foggy dataset, 950 images from the NYU2 Depth dataset, and the 5000 real foggy images collected online;
2.2) denoising the foggy images with the FFDNet network: each foggy image is input into the FFDNet denoising network, and the denoised foggy image is output.
4. The defogging method based on reference image reconstruction of claim 1, wherein the step 5) trains a network model, and tests by using a test set, and comprises the following steps:
5.1) training the network with the synthetic foggy training set, wherein the training objective function represents the average error between the defogged image estimated by the network and the ground-truth haze-free image;
5.2) testing the performance of the network with the test dataset and evaluating it using the subjective and objective evaluation indexes PSNR and SSIM.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910815133.7A CN110517203B (en) | 2019-08-30 | 2019-08-30 | Defogging method based on reference image reconstruction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910815133.7A CN110517203B (en) | 2019-08-30 | 2019-08-30 | Defogging method based on reference image reconstruction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110517203A CN110517203A (en) | 2019-11-29 |
CN110517203B true CN110517203B (en) | 2023-06-23 |
Family
ID=68629510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910815133.7A Active CN110517203B (en) | 2019-08-30 | 2019-08-30 | Defogging method based on reference image reconstruction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110517203B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111210394A (en) * | 2020-01-03 | 2020-05-29 | 北京智云视图科技有限公司 | Image enhancement technology based on deep decomposition synthesis network |
CN113139909B (en) * | 2020-01-19 | 2022-08-02 | 杭州喔影网络科技有限公司 | Image enhancement method based on deep learning |
CN111539885B (en) * | 2020-04-21 | 2023-09-19 | 西安交通大学 | Image enhancement defogging method based on multi-scale network |
CN113570613A (en) * | 2020-04-29 | 2021-10-29 | 阿里巴巴集团控股有限公司 | Image processing method and device |
CN111539896B (en) * | 2020-04-30 | 2022-05-27 | 华中科技大学 | Domain-adaptive-based image defogging method and system |
CN112150395A (en) * | 2020-10-15 | 2020-12-29 | 山东工商学院 | Encoder-decoder network image defogging method combining residual block and dense block |
CN113689343B (en) * | 2021-03-31 | 2024-06-18 | 西安理工大学 | Single image defogging method for Resnet to calculate Veil |
CN113808039B (en) * | 2021-09-09 | 2023-06-27 | 中山大学 | Migration learning defogging method and system based on Gaussian process mapping |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101568971B1 (en) * | 2011-08-03 | 2015-11-13 | 인디안 인스티튜트 오브 테크놀로지, 카라그푸르 | Method and system for removal of fog, mist or haze from images and videos |
WO2018053340A1 (en) * | 2016-09-15 | 2018-03-22 | Twitter, Inc. | Super resolution using a generative adversarial network |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9288458B1 * | 2015-01-31 | 2016-03-15 | Hrl Laboratories, Llc | Fast digital image de-hazing methods for real-time video processing |
CN109598695A * | 2017-09-29 | 2019-04-09 | 南京大学 | No-reference image fog level estimation method based on a deep learning network |
CN107977932A * | 2017-12-28 | 2018-05-01 | 北京工业大学 | Face image super-resolution reconstruction method based on a discriminable-attribute-constrained generative adversarial network |
CN108564549A * | 2018-04-20 | 2018-09-21 | 福建帝视信息科技有限公司 | Image defogging method based on a multi-scale densely connected network |
CN108665432A * | 2018-05-18 | 2018-10-16 | 百年金海科技有限公司 | Single-image defogging method based on a generative adversarial network |
CN108961350A * | 2018-07-17 | 2018-12-07 | 北京工业大学 | Painting style transfer method based on saliency matching |
CN109146810A * | 2018-08-08 | 2019-01-04 | 国网浙江省电力有限公司信息通信分公司 | Image defogging method based on end-to-end deep learning |
CN109300090A * | 2018-08-28 | 2019-02-01 | 哈尔滨工业大学(威海) | Single-image defogging method based on sub-pixel and conditional generative adversarial networks |
CN109783655A * | 2018-12-07 | 2019-05-21 | 西安电子科技大学 | Cross-modal retrieval method and apparatus, computer device and storage medium |
CN109801232A * | 2018-12-27 | 2019-05-24 | 北京交通大学 | Single-image defogging method based on deep learning |
CN109949242A * | 2019-03-19 | 2019-06-28 | 内蒙古工业大学 | Image defogging model generation method and apparatus, and image defogging method and apparatus |
CN110097609A * | 2019-04-04 | 2019-08-06 | 上海凌笛数码科技有限公司 | Refined embroidery texture transfer method based on a sample domain |
Non-Patent Citations (4)
Title |
---|
Taeyong Song et al.; "Deep Network for Simultaneous Stereo Matching and Dehazing"; 29th British Machine Vision Conference (BMVC 2018); 2019-01-01; pp. 1-12 *
Wenqi Ren et al.; "Deep Video Dehazing With Semantic Segmentation"; IEEE Transactions on Image Processing; April 2019; vol. 28, no. 4; pp. 1895-1908 *
Ling Mei; "Underwater image quality improvement method based on convolutional neural networks"; China Master's Theses Full-text Database, Information Science and Technology; 2019-07-15; no. 7; pp. I138-988 *
Li Yufei et al.; "Research on objective quality evaluation of dehazed images based on synthetic foggy images"; China Master's Theses Full-text Database, Information Science and Technology; 2018-04-15; no. 4; pp. I138-2778 *
Also Published As
Publication number | Publication date |
---|---|
CN110517203A (en) | 2019-11-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |