CN112102179B - Retinex-based deep-network single-image defogging method

Info

Publication number
CN112102179B
Authority
CN
China
Prior art keywords: image, defogging, illumination, network, residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010769566.6A
Other languages: Chinese (zh)
Other versions: CN112102179A
Inventors: 李鹏越, 田建东, 唐延东, 王国霖, 张箴
Current and original assignee: Shenyang Institute of Automation of CAS
Application filed 2020-08-04 by Shenyang Institute of Automation of CAS (priority application CN202010769566.6A)
Published as CN112102179A on 2020-12-18; granted as CN112102179B on 2023-08-29

Classifications

    • G06T 5/73 — Image data processing or generation: image enhancement or restoration; deblurring, sharpening
    • G06N 3/045 — Computing arrangements based on biological models: neural networks; architecture; combinations of networks
    • Y02T 10/40 — Climate change mitigation technologies related to transportation: engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a Retinex-based deep-network single-image defogging method. A Retinex-based defogging decomposition model is proposed and solved by an end-to-end deep network, producing a clear, haze-free image. The decomposition model describes how a foggy image is formed more accurately than existing models and naturally guides the deep network in separating the foggy input into a residual illumination image and a haze-free image under natural illumination. Solving the model with the strong nonlinear fitting capacity of a deep network improves the generalization of the defogging algorithm and yields more accurate parameter estimates. The adopted Retinex-based deep network decomposes the foggy image well and recovers the haze-free image under natural illumination. Experiments on synthetic and real images show that, compared with classical image defogging algorithms, the method is more robust and effective.

Description

Retinex-based deep-network single-image defogging method
Technical Field
The invention relates to image defogging algorithms, and in particular to a Retinex-based deep-network single-image defogging method.
Background
Images are the basis of vision and an important way for computers to acquire and use external visual information. External imaging conditions, however, seriously degrade image quality and hold back research across the field of computer vision. Complex lighting conditions and bad weather are the main causes of image degradation, which manifests as low signal-to-noise ratio, color shift, fogging, and turbidity. Among severe weather, fog draws particular scientific attention because of how often it occurs and how much damage it causes: in fog, pedestrians cannot make out oncoming vehicles, drivers cannot see the road ahead, and heavy fog at sea leads directly to ship collisions.
Image defogging is an image processing technique that removes the interference of fog from an image through algorithms or multi-sensor fusion, improving its visual quality and recovering more usable visual information. Such techniques fall into two classes: restoration methods based on a physical model, and enhancement methods based on image processing. Restoration methods focus on building an imaging model and solving for its parameters, recovering the clear image by inverting the model. Enhancement methods ignore the imaging process and directly adjust pixel values and their distribution to obtain an enhanced image with better visual quality.
Existing image restoration methods can produce natural-looking results, but because they rely on simplified models with many parameters to estimate and strong introduced priors, their universality and robustness are poor. Image enhancement methods, by ignoring the imaging process, tend to over-enhance salient regions while enhancing globally, causing image distortion and information loss.
Disclosure of Invention
Current image defogging algorithms are narrowly targeted, cannot adapt to changeable foggy imaging environments, and offer universality and robustness that fall short of practical requirements. To address these technical defects, the invention provides a Retinex-based deep-network single-image defogging method. The adopted decomposition model better describes the components of a foggy image, and the method offers better robustness, stronger generalization, and faster processing, adapting well to a wide range of foggy scenes.
The specific technical scheme adopted to achieve the purpose of the invention is as follows: a Retinex-based deep-network single-image defogging method comprising the following steps:
step one: establishing a residual illumination image extraction sub-network from convolution layers, pooling layers, residual dense blocks RDB, and transposed convolution layers; defining an illumination loss function; expanding the training set images and inputting them into the sub-network for iterative training; and optimizing the sub-network model that outputs the corresponding residual illumination image;
step two: calculating the haze-free image according to the Retinex-based defogging decomposition model
$$I_f(x,y)=I_{nf}(x,y)\cdot L_{rf}(x,y),$$
where $I_f(x,y)$ denotes the foggy image, $I_{nf}(x,y)$ the haze-free image, and $L_{rf}(x,y)$ the residual illumination image;
step three: establishing a spatial- and channel-domain attention defogging U-Net sub-network from attention network blocks CSA, pooling layers, residual dense blocks RDB, and transposed convolution layers; defining a defogging loss function; inputting the haze-free image into the sub-network for iterative training; and optimizing the sub-network model that finally outputs the corresponding haze-free image under natural illumination;
step four: acquiring an actual foggy image and inputting it sequentially into the optimized residual illumination image extraction sub-network model and the optimized spatial- and channel-domain attention defogging U-Net sub-network model, obtaining the processed haze-free image under natural illumination.
The training set images comprise pairs of synthetic foggy images and haze-free images.
The method for expanding the training set images comprises cropping, transforming, and rotating the data set, which avoids the overfitting caused by too small a data set.
The structure of the residual illumination image extraction sub-network comprises:
inputting the foggy image into pooling layers of multiple scales for downsampling, obtaining multi-scale image data;
inputting the multi-scale image data into separate residual dense blocks RDB, extracting multi-scale features;
inputting the features of different scales into transposed convolution layers for upsampling to a uniform size, obtaining same-size multi-scale features;
concatenating the foggy image with the same-size multi-scale features, inputting the result into the last convolution layer, and outputting the residual illumination map.
The residual dense block RDB comprises several densely connected blocks DB, a hierarchical feature fusion HFF module, and a residual learning RL module, and extracts multi-scale features of the foggy image.
Whether the iterative training of the residual illumination image extraction sub-network model terminates is judged by calculating the illumination loss function and comparing it with a preset threshold:
$$L_L=\omega_{al}L_{al}+\omega_{sml}L_{sml}+\omega_{sl}L_{sl},$$
where $L_L$ denotes the illumination loss of the output residual illumination image; $L_{al}$, $L_{sml}$, and $L_{sl}$ denote its absolute loss, SSIM loss, and smoothness loss, respectively; and $\omega_{al}$, $\omega_{sml}$, and $\omega_{sl}$ denote the weights of the corresponding losses.
The defogging decomposition model is derived as follows:
according to Retinex theory, a foggy image can be described as
$$I_f(x,y)=R(x,y)\cdot L_f(x,y),$$
where $I_f$ denotes the foggy image, $R(x,y)$ the reflectance image, and $L_f$ the illumination image affected by fog scattering and absorption;
the haze-free image is defined as
$$I_{nf}(x,y)=R(x,y)\cdot L_n(x,y),$$
where $I_{nf}$ denotes the haze-free image and $L_n$ the natural illumination image;
the foggy image can then be further decomposed as
$$I_f(x,y)=I_{nf}(x,y)\cdot L_{rf}(x,y),$$
where $L_{rf}(x,y)=L_f(x,y)/L_n(x,y)$ denotes the residual illumination image.
The structure of the spatial- and channel-domain attention defogging U-Net comprises:
a contracting path and an expanding path: the contracting path acquires semantic information, and the symmetric expanding path recovers position information; each path comprises four steps, wherein each step of the contracting path consists of a residual dense block RDB and a max-pooling layer, each step of the expanding path consists of a residual dense block RDB and a transposed convolution layer, and an attention network block CSA is arranged on the skip connection between the contracting path and the expanding path.
Whether the iterative training of the spatial- and channel-domain attention defogging U-Net sub-network model terminates is judged by calculating the defogging loss function and comparing it with a preset threshold:
$$L_D=\omega_{ad}L_{ad}+\omega_{ssd}L_{ssd}+\omega_{egd}L_{egd},$$
where $L_D$ denotes the defogging loss of the output haze-free image under natural illumination; $L_{ad}$, $L_{ssd}$, and $L_{egd}$ denote its absolute loss, SSIM loss, and edge loss, respectively; and $\omega_{ad}$, $\omega_{ssd}$, and $\omega_{egd}$ denote the weights of the corresponding losses.
The beneficial effects of the invention are as follows:
1. Based on Retinex theory, the method proposes a novel decomposition of the foggy image into a haze-free image under natural illumination and a residual illumination image, describing the relation and difference between the components of a foggy image, so that Retinex theory effectively guides the image defogging process.
2. The method establishes a residual illumination image extraction sub-network that can process multi-scale image data and produce processed multi-scale image features.
3. The method establishes a spatial- and channel-domain attention defogging U-Net that acquires context through its contracting path and localizes accurately with its symmetric expanding path, yielding a finer defogging effect.
4. The proposed illumination loss function and defogging loss function guide the training of the network well, producing the final defogging model.
5. The final image defogging results are good; compared with other methods, the method offers good universality and robustness.
Drawings
FIG. 1 is the overall deep-network framework of the present invention;
FIG. 2 is the residual dense block used in the present invention;
FIG. 3 is the attention network block used in the present invention;
FIG. 4 shows defogging results on synthetic fog images;
FIG. 5 shows defogging results on real fog images.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings, to make its objects, technical solutions, and advantages clearer. The specific examples described here serve only to illustrate the invention, not to limit it.
The specific framework of the invention is shown in FIG. 1. The method mainly comprises five steps: establishing the Retinex-based defogging decomposition model, establishing the residual illumination image extraction sub-network, establishing the spatial- and channel-domain attention defogging U-Net, setting the illumination loss function and the defogging loss function, and completing network optimization and training.
Step one: establish the Retinex-based defogging decomposition model, which decomposes a foggy image into a haze-free image under natural illumination and a residual illumination image.
The original image defogging model is the simplified atmospheric-scattering model; it depends heavily on priors and leaves many unknown parameters to estimate, so it generalizes poorly and reduces the robustness and adaptability of defogging algorithms built on it. Retinex theory, a widely used image enhancement method grounded in scientific experiments and analysis, holds that the color of an object is determined by its reflectance of light at different wavelengths rather than by the light source, so perceived color is consistent and unaffected by non-uniform illumination. Research applying Retinex theory to image defogging has focused on extracting a more accurate illumination map in order to obtain a better reflectance image unaffected by illumination. In the defogging task, however, obtaining a haze-free image under natural illumination matters more than recovering the true reflectance image, so the defogging decomposition model must be re-derived.
According to Retinex theory, a foggy image can be described as
$$I_f(x,y)=R(x,y)\cdot L_f(x,y),\qquad(1)$$
where $I_f$ denotes the foggy image, $R(x,y)$ the reflectance image, and $L_f$ the illumination image affected by fog scattering and absorption. The haze-free image is defined as
$$I_{nf}(x,y)=R(x,y)\cdot L_n(x,y),\qquad(2)$$
where $I_{nf}$ denotes the haze-free image under natural illumination and $L_n$ the natural illumination image. The foggy image can therefore be further decomposed as
$$I_f(x,y)=I_{nf}(x,y)\cdot L_{rf}(x,y),\qquad(3)$$
where $L_{rf}(x,y)=L_f(x,y)/L_n(x,y)$ denotes the residual illumination image. Under this foggy image decomposition model, the main step of defogging is to obtain the residual illumination image.
Step two: establish the residual illumination image extraction sub-network from convolution layers, pooling layers, residual dense blocks (residual dense block, RDB), and transposed convolution layers; process the input image; calculate the illumination loss function to judge whether the sub-network is fully optimized; obtain the optimized residual illumination image extraction sub-network; and take the residual illumination image it then outputs.
A convolution layer first raises the dimensionality of the input image so that later network layers can extract richer features, and the extracted features are then sent to a multi-scale module. In general, small-scale images highlight the global information of an image better, while large-scale images preserve its local information better. The feature map extracted by the first convolution layer is therefore downsampled to three scales, 1/2, 1/4, and 1/8, forming multi-scale image data. Each scale is sent to its own residual dense block (RDB) to extract multi-scale features, which are then upsampled back to a uniform size, giving same-size features of different scales. Finally, the foggy image is concatenated with the same-size multi-scale features and passed through the last convolution layer, which outputs the residual illumination map (FIG. 1). The haze-free image $I_{nf}(x,y)$ is then obtained from $I_f(x,y)=I_{nf}(x,y)\cdot L_{rf}(x,y)$, and the illumination loss function is calculated to judge whether the residual illumination image extraction sub-network is fully optimized; the optimized sub-network outputs the final residual illumination image.
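A minimal PyTorch sketch of this sub-network layout, assuming the `RDB` module sketched after the next paragraph; the channel width, pooling type, and transposed-convolution kernel sizes are assumptions, since the text fixes only the overall structure:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualIlluminationNet(nn.Module):
    """Residual illumination image extraction sub-network (FIG. 1 layout).

    Only the overall structure comes from the text: a first convolution,
    pooling to 1/2, 1/4, and 1/8 scale, one RDB per scale, transposed-conv
    upsampling back to full size, concatenation with the foggy input, and a
    final convolution. Input height and width must be divisible by 8.
    """

    def __init__(self, ch: int = 64):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)              # lift to feature space
        self.rdbs = nn.ModuleList([RDB(ch) for _ in range(3)])  # one RDB per scale
        self.ups = nn.ModuleList([                              # restore each scale
            nn.ConvTranspose2d(ch, ch, kernel_size=2 * s, stride=s, padding=s // 2)
            for s in (2, 4, 8)
        ])
        self.tail = nn.Conv2d(3 + 3 * ch, 3, 3, padding=1)      # foggy image + 3 scales

    def forward(self, i_f: torch.Tensor) -> torch.Tensor:
        feat = self.head(i_f)
        outs = []
        for k, (rdb, up) in enumerate(zip(self.rdbs, self.ups)):
            x = F.avg_pool2d(feat, kernel_size=2 ** (k + 1))    # 1/2, 1/4, 1/8 data
            outs.append(up(rdb(x)))                             # back to full size
        return self.tail(torch.cat([i_f] + outs, dim=1))        # residual illumination map
```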
The residual dense block (RDB), shown in FIG. 2, has three parts: densely connected blocks (DB), hierarchical feature fusion (HFF), and residual learning (RL). The dense blocks extract image features while fully reusing the features of every layer; HFF fuses the features extracted by the dense blocks through a 1×1 convolution; and RL further fuses the image features from before the dense blocks, so that gradients do not vanish as the network deepens. The block thus exploits the features extracted by all convolution layers, improving its overall efficiency.
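A sketch of the RDB under assumed hyper-parameters (growth rate, number of dense layers); only the DB → HFF (1×1 convolution) → RL layout comes from the text:

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: densely connected convs (DB), 1x1 hierarchical
    feature fusion (HFF), and residual learning (RL), per FIG. 2."""

    def __init__(self, ch: int, growth: int = 32, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        c = ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.ReLU(inplace=True)))
            c += growth                                   # dense connectivity
        self.hff = nn.Conv2d(c, ch, kernel_size=1)        # 1x1 fusion (HFF)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # each layer sees all before it
        return x + self.hff(torch.cat(feats, dim=1))      # residual learning (RL)
```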
Step three: establish the spatial- and channel-domain attention defogging U-Net from attention network blocks (channel and spatial attention block, CSA), pooling layers, residual dense blocks (RDB), and transposed convolution layers; process the input haze-free image $I_{nf}(x,y)$; calculate the defogging loss function to judge whether the spatial- and channel-domain attention defogging U-Net is fully optimized; obtain the optimized network; and output the final haze-free image under natural illumination.
The network consists of a contracting path and an expanding path: the contracting path captures contextual semantic information (e.g., fog density, background semantics), and the symmetric expanding path recovers location information. Each path comprises four steps: every step of the contracting path combines a residual dense block (RDB) with a max-pooling layer, every step of the expanding path combines an RDB with a transposed convolution layer, and an attention network block (CSA) is introduced on each skip connection between the contracting path and the expanding path.
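A sketch of this U-Net, reusing the `RDB` sketch above and the `CSA` sketch that follows the next paragraph; the constant channel width and the 1×1 fusion convolutions are simplifying assumptions:

```python
import torch
import torch.nn as nn

class DehazeUNet(nn.Module):
    """Spatial- and channel-domain attention defogging U-Net: four steps per
    path, RDB + max-pool down, RDB + transposed conv up, CSA on each skip.
    Input height and width must be divisible by 16."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.inc = nn.Conv2d(3, ch, 3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.enc = nn.ModuleList([RDB(ch) for _ in range(4)])
        self.csa = nn.ModuleList([CSA(ch) for _ in range(4)])
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(ch, ch, 2, stride=2) for _ in range(4)])
        self.fuse = nn.ModuleList([nn.Conv2d(2 * ch, ch, 1) for _ in range(4)])
        self.dec = nn.ModuleList([RDB(ch) for _ in range(4)])
        self.outc = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, i_nf: torch.Tensor) -> torch.Tensor:
        x = self.inc(i_nf)
        skips = []
        for rdb in self.enc:                  # contracting path: semantics
            x = rdb(x)
            skips.append(x)
            x = self.pool(x)
        for i in range(4):                    # expanding path: localization
            x = self.up[i](x)
            skip = self.csa[i](skips[3 - i])  # attention-gated skip connection
            x = self.dec[i](self.fuse[i](torch.cat([skip, x], dim=1)))
        return self.outc(x)
```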
The attention network block (CSA, FIG. 3) adjusts the restoration coefficients (weights) according to the fog density at each pixel, paying different amounts of attention to different pixels in the image. It combines channel attention and spatial attention. Channel attention extracts global features of commonality and of distinctiveness through an average pooling layer (AvgPool) and a max pooling layer (MaxPool) respectively, then obtains a one-dimensional channel attention vector through a multi-layer perceptron (MLP) with one hidden layer. Spatial attention is obtained from scale-preserving max pooling and average pooling layers followed by a convolution layer. The mechanism assigns large weights to heavily fogged pixels so that defogging is more thorough, and small weights to lightly fogged pixels to avoid over-defogging.
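A CBAM-style sketch of the CSA block; the reduction ratio and the 7×7 spatial kernel are assumptions not fixed by the text:

```python
import torch
import torch.nn as nn

class CSA(nn.Module):
    """Channel-and-spatial attention block (FIG. 3), CBAM-style."""

    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(                     # shared MLP, one hidden layer
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # channel attention: commonality (avg) and distinctiveness (max) statistics
        avg = x.mean(dim=(2, 3), keepdim=True)
        mx = x.amax(dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        # spatial attention: per-pixel weights tracking local fog density
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(stats))
```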
Step four: setting an illumination loss function and a defogging loss function.
The illumination loss function of the residual illumination image extraction sub-network is
$$L_L=\sum_i\left(\omega_{al}L_{al}^i+\omega_{sml}L_{sml}^i+\omega_{sl}L_{sl}^i\right),$$
where $i$ is the image index; $L_L$ denotes the illumination loss; $L_{al}$, $L_{sml}$, and $L_{sl}$ denote the absolute loss, SSIM loss, and smoothness loss, respectively; and $\omega_{al}$, $\omega_{sml}$, and $\omega_{sl}$ denote the weights of the corresponding losses.
The smoothness loss is a total-variation penalty on the residual illumination map:
$$L_{sl}=\sum_i\sum_p\left(\left|\partial_x L_{rf,i}^p\right|+\left|\partial_y L_{rf,i}^p\right|\right),$$
where $i$ is the image index, $L_{rf,i}^p$ denotes the residual illumination value at pixel point $p$, and $\partial_x$ and $\partial_y$ denote the partial derivatives in the horizontal and vertical directions of image space, respectively.
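A sketch of the illumination loss, with the SSIM term taken from the third-party `pytorch_msssim` package (an assumed dependency) and placeholder weights, since the patent does not publish their values:

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed third-party SSIM implementation

def illumination_loss(l_pred: torch.Tensor, l_gt: torch.Tensor,
                      w_al: float = 1.0, w_sml: float = 1.0,
                      w_sl: float = 0.1) -> torch.Tensor:
    """L_L = w_al*L_al + w_sml*L_sml + w_sl*L_sl over a batch of (N,3,H,W)
    residual illumination maps; the weight values are placeholders."""
    l_abs = F.l1_loss(l_pred, l_gt)                       # absolute (L1) term
    l_ssim = 1.0 - ssim(l_pred, l_gt, data_range=1.0)     # SSIM term
    l_smooth = ((l_pred[..., :, 1:] - l_pred[..., :, :-1]).abs().mean()
                + (l_pred[..., 1:, :] - l_pred[..., :-1, :]).abs().mean())
    return w_al * l_abs + w_sml * l_ssim + w_sl * l_smooth
```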
The defogging loss function of the spatial- and channel-domain attention defogging U-Net is
$$L_D=\sum_i\left(\omega_{ad}L_{ad}^i+\omega_{ssd}L_{ssd}^i+\omega_{egd}L_{egd}^i\right),$$
where $L_D$ denotes the defogging loss; $L_{ad}$, $L_{ssd}$, and $L_{egd}$ denote the absolute loss, SSIM loss, and edge loss, respectively; and $\omega_{ad}$, $\omega_{ssd}$, and $\omega_{egd}$ denote the weights of the corresponding losses.
The edge loss compares the edge maps of the defogged result and the ground truth:
$$L_{egd}=\sum_{w,h}\left|E_{canny}(I_{dh})_{w,h}-E_{canny}(I_{gt})_{w,h}\right|,$$
where $E_{canny}(\cdot)_{w,h}$ is the Canny edge detection operator evaluated at pixel $(w,h)$, $I_{dh}$ is the defogging result, and $I_{gt}$ is the true haze-free image.
Step five: the training set in the dataset is extended.
The data set is expanded by cropping, transforming, and rotating its images, avoiding the overfitting caused by too small a data set.
The residual illumination image of each image pair is obtained from the foggy image and the corresponding haze-free image in the dataset using equation (3).
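A sketch of training-pair preparation, combining the augmentation of step five with the ground-truth residual illumination of equation (3); the crop size and augmentation choices are assumptions:

```python
import random
import torch
import torchvision.transforms.functional as TF

def augment_pair(i_f: torch.Tensor, i_nf: torch.Tensor, crop: int = 256):
    """Paired crop / flip / 90-degree-rotation augmentation. Assumes both
    images are larger than `crop` pixels on each side."""
    top = random.randint(0, i_f.shape[-2] - crop)
    left = random.randint(0, i_f.shape[-1] - crop)
    i_f, i_nf = (TF.crop(t, top, left, crop, crop) for t in (i_f, i_nf))
    if random.random() < 0.5:
        i_f, i_nf = TF.hflip(i_f), TF.hflip(i_nf)
    k = random.randrange(4)                       # same rotation for both
    return torch.rot90(i_f, k, (-2, -1)), torch.rot90(i_nf, k, (-2, -1))

def residual_illumination_gt(i_f: torch.Tensor, i_nf: torch.Tensor,
                             eps: float = 1e-6) -> torch.Tensor:
    """Ground-truth residual illumination from Eq. (3): L_rf = I_f / I_nf."""
    return i_f / i_nf.clamp(min=eps)
```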
Step six: and performing network training by using the training set.
First, the residual illumination image extraction sub-network is trained independently on the foggy and haze-free images; the trained sub-network is then substituted into the overall network framework and the spatial- and channel-domain attention defogging U-Net is optimized; finally, the whole network is optimized to obtain the final result.
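A minimal two-stage training sketch reusing `illumination_loss`, `defog_loss`, `recover_haze_free`, and `residual_illumination_gt` from the sketches above; the optimizer, epoch counts, and learning rate are assumptions:

```python
import torch

def train(illum_net, dehaze_net, loader, epochs: int = 100,
          lr: float = 1e-4, device: str = "cuda"):
    """Stage 1 trains the illumination sub-network alone; stage 2 optimizes
    the whole pipeline jointly, as the text describes."""
    illum_net, dehaze_net = illum_net.to(device), dehaze_net.to(device)
    opt1 = torch.optim.Adam(illum_net.parameters(), lr=lr)
    for _ in range(epochs):                                  # stage 1
        for i_f, i_nf in loader:
            i_f, i_nf = i_f.to(device), i_nf.to(device)
            loss = illumination_loss(illum_net(i_f),
                                     residual_illumination_gt(i_f, i_nf))
            opt1.zero_grad()
            loss.backward()
            opt1.step()
    params = list(illum_net.parameters()) + list(dehaze_net.parameters())
    opt2 = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):                                  # stage 2: joint
        for i_f, i_nf in loader:
            i_f, i_nf = i_f.to(device), i_nf.to(device)
            coarse = recover_haze_free(i_f, illum_net(i_f))  # invert Eq. (3)
            loss = defog_loss(dehaze_net(coarse), i_nf)
            opt2.zero_grad()
            loss.backward()
            opt2.step()
```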
FIGS. 4 and 5 show the defogging results of the method on synthetic and real fog images. The method produces stable and natural results on foggy images of different depths and fog densities, preserves sky regions faithfully, and does not oversaturate high-brightness images. It extracts the residual illumination image and removes fog from a single image, and the model it uses describes the composition of a foggy image well.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (8)

1. A Retinex-based deep-network single-image defogging method, characterized by comprising the following steps:
step one: establishing a residual illumination image extraction sub-network from convolution layers, pooling layers, residual dense blocks RDB, and transposed convolution layers; defining an illumination loss function; expanding the training set images and inputting them into the sub-network for iterative training; and optimizing the sub-network model that outputs the corresponding residual illumination image;
step two: calculating the haze-free image according to the Retinex-based defogging decomposition model
$$I_f(x,y)=I_{nf}(x,y)\cdot L_{rf}(x,y),$$
where $I_f(x,y)$ denotes the foggy image, $I_{nf}(x,y)$ the haze-free image, and $L_{rf}(x,y)$ the residual illumination image;
step three: establishing a spatial- and channel-domain attention defogging U-Net sub-network from attention network blocks CSA, pooling layers, residual dense blocks RDB, and transposed convolution layers; defining a defogging loss function; inputting the haze-free image into the sub-network for iterative training; and optimizing the sub-network model that finally outputs the corresponding haze-free image under natural illumination;
step four: acquiring an actual foggy image and inputting it sequentially into the optimized residual illumination image extraction sub-network model and the optimized spatial- and channel-domain attention defogging U-Net sub-network model, obtaining the processed haze-free image under natural illumination.
2. The Retinex-based deep-network single-image defogging method according to claim 1, wherein the training set images comprise pairs of synthetic foggy images and haze-free images.
3. The Retinex-based deep-network single-image defogging method according to claim 1 or 2, wherein the method for expanding the training set images comprises cropping, transforming, and rotating the data set, avoiding the overfitting caused by too small a data set.
4. The Retinex-based deep-network single-image defogging method according to claim 1, wherein the structure of the residual illumination image extraction sub-network comprises:
inputting the foggy image into pooling layers of multiple scales for downsampling, obtaining multi-scale image data;
inputting the multi-scale image data into separate residual dense blocks RDB, extracting multi-scale features;
inputting the features of different scales into transposed convolution layers for upsampling to a uniform size, obtaining same-size multi-scale features;
concatenating the foggy image with the same-size multi-scale features, inputting the result into the last convolution layer, and outputting the residual illumination map.
5. The Retinex-based deep-network single-image defogging method according to claim 1, wherein whether the iterative training of the residual illumination image extraction sub-network model terminates is judged by calculating the illumination loss function and comparing it with a preset threshold:
$$L_L=\omega_{al}L_{al}+\omega_{sml}L_{sml}+\omega_{sl}L_{sl},$$
where $L_L$ denotes the illumination loss of the output residual illumination image; $L_{al}$, $L_{sml}$, and $L_{sl}$ denote its absolute loss, SSIM loss, and smoothness loss, respectively; and $\omega_{al}$, $\omega_{sml}$, and $\omega_{sl}$ denote the weights of the corresponding losses.
6. The Retinex-based deep-network single-image defogging method according to claim 1, wherein the defogging decomposition model is derived as follows:
according to Retinex theory, a foggy image can be described as
$$I_f(x,y)=R(x,y)\cdot L_f(x,y),$$
where $I_f$ denotes the foggy image, $R(x,y)$ the reflectance image, and $L_f$ the illumination image affected by fog scattering and absorption;
the haze-free image is defined as
$$I_{nf}(x,y)=R(x,y)\cdot L_n(x,y),$$
where $I_{nf}$ denotes the haze-free image and $L_n$ the natural illumination image;
the foggy image can then be further decomposed as
$$I_f(x,y)=I_{nf}(x,y)\cdot L_{rf}(x,y),$$
where $L_{rf}(x,y)=L_f(x,y)/L_n(x,y)$ denotes the residual illumination image.
7. The Retinex-based deep-network single-image defogging method according to claim 1, wherein the structure of the spatial- and channel-domain attention defogging U-Net comprises:
a contracting path and an expanding path: the contracting path acquires semantic information, and the symmetric expanding path recovers position information; each path comprises four steps, wherein each step of the contracting path consists of a residual dense block RDB and a max-pooling layer, each step of the expanding path consists of a residual dense block RDB and a transposed convolution layer, and an attention network block CSA is arranged on the skip connection between the contracting path and the expanding path.
8. The Retinex-based deep-network single-image defogging method according to claim 1, wherein whether the iterative training of the spatial- and channel-domain attention defogging U-Net sub-network model terminates is judged by calculating the defogging loss function and comparing it with a preset threshold:
$$L_D=\omega_{ad}L_{ad}+\omega_{ssd}L_{ssd}+\omega_{egd}L_{egd},$$
where $L_D$ denotes the defogging loss of the output haze-free image under natural illumination; $L_{ad}$, $L_{ssd}$, and $L_{egd}$ denote its absolute loss, SSIM loss, and edge loss, respectively; and $\omega_{ad}$, $\omega_{ssd}$, and $\omega_{egd}$ denote the weights of the corresponding losses.

Priority Application

CN202010769566.6A (China), priority and filing date 2020-08-04.

Publications

CN112102179A, published 2020-12-18; CN112102179B, granted 2023-08-29. Family ID: 73749522.

Legal Events

    • PB01 — Publication
    • SE01 — Entry into force of request for substantive examination
    • GR01 — Patent grant