CN111340718B - Image defogging method based on progressive guiding strong supervision neural network - Google Patents
- Publication number
- CN111340718B, CN202010075090.6A
- Authority
- CN
- China
- Prior art keywords
- neural network
- image
- defogging
- output
- rgb
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention discloses an image defogging method based on a progressive guiding strong supervision neural network, which comprises the following steps: constructing an end-to-end convolutional neural network whose input is an original foggy RGB image and whose output is a clear RGB image; the convolutional neural network is formed by connecting 3 defogging modules of identical structure end to end, the output of the previous defogging module serving as the input of the next; each defogging module consists of 1 neural network block and 1 guiding filter layer. In each defogging module, the neural network block performs defogging processing on the image to obtain a 3-channel RGB output, and the guiding filter layer, taking the original foggy RGB image as a guide, performs image edge sharpening on the 3-channel RGB output of the neural network block in the current defogging module. After learning and reconstruction based on training data, the convolutional neural network constructed by the invention allows a foggy RGB image to be input directly into the network in practical application, yielding a fog-free image of better definition and higher quality.
Description
Technical Field
The invention relates to the field of deep learning and computer vision, in particular to an image defogging method based on a progressive guiding strong supervision neural network.
Background
Fog is a common atmospheric phenomenon: water droplets, dust, fine sand, and other particles suspended in the air degrade the imaging quality of a photograph. In foggy conditions, reflected light from distant objects cannot pass through the dense atmosphere to reach the camera, and atmospheric scattering causes loss of image contrast and saturation. Foggy images severely hinder high-level computer vision tasks such as automatic driving and semantic segmentation of satellite images, so image defogging has become a research focus and hot spot in the fields of deep learning and computer vision.
In recent decades, computer vision technology has developed rapidly, and many image defogging methods have appeared along the way. They fall mainly into two categories: traditional methods based on handcrafted priors and methods based on deep learning. Traditional methods summarize prior knowledge from sets of foggy and fog-free images and build handcrafted features, chiefly estimating a transmission map from the atmospheric scattering model or relying on image histograms, contrast, saturation, and the like. With the development of deep learning, many learning-based defogging methods have emerged; they use a convolutional neural network in place of handcrafted feature extraction and, aided by the strong computational power of modern machines, achieve defogging results that greatly surpass the earlier traditional methods.
At present, most image defogging methods have obvious limitations, and defogging quality still needs to be improved. Because the fog concentration differs from area to area within an image, separate defogging modules should handle different areas of the image. Research on a defogging method that overcomes these defects therefore has important research significance and practical value.
Disclosure of Invention
In order to overcome the defects and shortcomings of the existing image defogging method, the invention provides an image defogging method based on a progressive guiding strong supervision neural network.
The aim of the invention is achieved by the following technical scheme: an image defogging method based on a progressive guiding strong supervision neural network comprises the following steps:
constructing an end-to-end convolutional neural network whose input is an original foggy RGB image and whose output is a clear RGB image; the convolutional neural network is formed by connecting 3 defogging modules of identical structure end to end, the output of the previous defogging module serving as the input of the next; each defogging module consists of 1 neural network block and 1 guiding filter layer;
in each defogging module, the neural network block performs defogging processing on the image input to the current defogging module to obtain a 3-channel RGB output; the inputs of the guiding filter layer are the original foggy RGB image and the RGB output of the neural network block in the current defogging module, and the guiding filter layer, taking the original foggy RGB image as a guide, performs image edge sharpening on the 3-channel RGB output of the neural network block in the current defogging module.
After learning and reconstruction based on training data, the convolutional neural network constructed by the invention allows a foggy RGB image to be input directly into the network in practical application, yielding a fog-free image of better definition and higher quality.
Preferably, the neural network block performs defogging processing on the image input into the current defogging module, and the method comprises the following steps:
firstly, the input is preprocessed with 2 convolution layers of 3×3 to obtain 64 feature maps; then 1/2 downsampling is performed with a 3×3 convolution layer, obtaining 64 feature maps of half the original input size; the core is then built on a dense network (DenseNet) architecture: each dense block uses 1×1 and 3×3 convolution layers, where the 1×1 convolution layer acts as a bottleneck that reduces the input feature maps to 64, each dense block takes as input all previous feature maps, and the feature map counts of the dense blocks are 128, 192, 256, and 320 respectively; next a 3×3 convolution layer produces 64 feature maps; then 2× upsampling with a 3×3 deconvolution layer yields 64 feature maps of the same size as the original input; finally, a 3×3 and a 1×1 convolution layer produce the 3-channel RGB output.
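The block described above can be sketched in PyTorch. This is a hedged reconstruction, not the patentee's code: the growth rate of 64, the ReLU activations, and the deconvolution padding are assumptions chosen so that the feature map counts (64 then 128, 192, 256, 320) and the 2× down/upsampling match the description.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One dense layer: 1x1 bottleneck down to 64 maps, then a 3x3 conv;
    its 64 new maps are concatenated onto all previous feature maps."""
    def __init__(self, in_ch, growth=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, growth, 1), nn.ReLU(inplace=True),           # bottleneck
            nn.Conv2d(growth, growth, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)                        # dense connectivity

class DehazeBlock(nn.Module):
    """One neural network block: preprocess, 1/2 downsample, 4 dense layers
    (64 -> 128 -> 192 -> 256 -> 320 maps), fuse, 2x upsample, RGB head."""
    def __init__(self):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.down = nn.Conv2d(64, 64, 3, stride=2, padding=1)             # 1/2 downsampling
        self.dense = nn.Sequential(*[DenseLayer(64 + 64 * i) for i in range(4)])
        self.fuse = nn.Conv2d(320, 64, 3, padding=1)                      # back to 64 maps
        self.up = nn.ConvTranspose2d(64, 64, 3, stride=2,
                                     padding=1, output_padding=1)         # 2x upsampling
        self.head = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 1),                                          # 3-channel RGB
        )

    def forward(self, x):
        f = self.down(self.pre(x))
        f = self.fuse(self.dense(f))
        return self.head(self.up(f))
```

Because the downsampling and upsampling are both exact factors of 2 here, the block maps an H×W input to an H×W output for any even H and W.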
Preferably, the guiding filter layer uses the original foggy RGB image as a guide, and performs image edge sharpening processing on the 3-channel RGB output of the neural network block in the current defogging module, so that the edge of the output image of the neural network block is consistent with the edge of the original foggy RGB image, and the method comprises the following steps:
the original foggy RGB image is taken as a guiding image I, the output of the neural network block is taken as an image P to be filtered, the filtered output image is Q, and the guiding filter layer is simply defined as follows:
Q=∑W(I)*P
wherein W is a weight value determined according to the guide image I, and the weight value is a parameter learned by the guide filter layer in the model training process.
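For intuition, the classic fixed-weight guided filter can be written in a few lines of NumPy/SciPy. In the patent the weights W are learned during training; the non-learned version below only illustrates the Q = ΣW(I)·P structure, with the window radius r and regularizer eps as assumed hyperparameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, P, r=8, eps=1e-3):
    """Classic fixed-weight guided filter: filters P so that its edges follow
    those of the guide I, via a local linear model Q = a*I + b per window."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mP = mean(I), mean(P)
    cov_IP = mean(I * P) - mI * mP          # covariance of guide and input
    var_I = mean(I * I) - mI * mI           # variance of the guide
    a = cov_IP / (var_I + eps)              # per-pixel linear coefficient
    b = mP - a * mI
    return mean(a) * I + mean(b)            # edge structure inherited from I
```

Where the guide I is flat, a collapses to 0 and the filter smooths P; where I has a strong edge, a approaches 1 and the edge is transferred to the output, which is exactly the sharpening role the guiding filter layer plays here.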
Preferably, the feature maps before and after the guiding filter layer each compute a mean square error loss against the ground truth, with the loss weights before and after the guiding filter layer in the ratio 1:10; the supervision applied before the guiding filter provides strong supervision, enabling the neural network block to learn more features.
Specifically, the convolutional neural network includes 3 defogging modules connected end to end, and the loss function of the end-to-end convolutional neural network is as follows:
L=αL1+βL2+αL3+βL4+αL5+βL6
wherein the ratio of α to β is 1:10; L1 denotes the mean square error (MSE) loss between the output of the first neural network block and the ground truth, L2 the MSE loss between the output of the first guiding filter layer and the ground truth, L3 the MSE loss between the output of the second neural network block and the ground truth, L4 the MSE loss between the output of the second guiding filter layer and the ground truth, L5 the MSE loss between the output of the third neural network block and the ground truth, and L6 the MSE loss between the output of the third guiding filter layer and the ground truth.
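The six-term loss can be sketched in PyTorch, assuming the three before-filter and three after-filter outputs are collected during the forward pass (the function and argument names are illustrative, not from the patent):

```python
import torch
import torch.nn.functional as F

def progressive_loss(block_outs, filter_outs, gt, alpha=1.0, beta=10.0):
    """L = a*L1 + b*L2 + a*L3 + b*L4 + a*L5 + b*L6: an MSE loss before each
    guided filter layer (weight alpha) and after it (weight beta), with
    alpha:beta = 1:10 as stated in the text."""
    loss = torch.zeros(())
    for nb_out, gf_out in zip(block_outs, filter_outs):   # 3 defogging modules
        loss = loss + alpha * F.mse_loss(nb_out, gt)      # strong supervision (before filter)
        loss = loss + beta * F.mse_loss(gf_out, gt)       # output supervision (after filter)
    return loss
```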
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The method uses a progressive strategy to remove fog in the image in a multi-level manner, so as to obtain a clearer and better quality result;
(2) The invention uses a guided filter for edge-preserving post-processing, so that the final output image has sharper edges and is more visually pleasing;
(3) The invention adds strong supervision before the guide filter, so that the neural network block is focused on the function of extracting the characteristic reconstruction image, and simultaneously, the guide filter is used as a post-processing operation, thereby being more in line with the logic of image reconstruction.
(4) The invention is an end-to-end image processing technology that avoids the intervention of artificial features, obtaining a clear fog-free image directly from the foggy input image and avoiding the noise interference such features can introduce.
Drawings
FIG. 1 is a general flow chart of an image defogging method based on a progressive guided strongly supervised neural network;
FIG. 2 is a diagram of an implementation of a defogging module technique;
fig. 3 is a diagram of a strongly supervised approach technique implementation.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
As shown in fig. 1-3, the embodiment discloses an image defogging method based on a progressive guiding strong supervision neural network, which comprises the following steps:
S1, constructing an end-to-end convolutional neural network: a foggy RGB image is input directly into the network model, which outputs a clear RGB image. Balancing model effect against model complexity, the convolutional neural network of this embodiment comprises 3 defogging modules of identical structure connected end to end. Each defogging module consists of 1 neural network block and 1 guiding filter layer.
S2, in each defogging module, the neural network block performs defogging on the input image to obtain a 3-channel RGB output. The guiding filter layer has two inputs, the original foggy RGB image and the RGB output of the neural network block in the current defogging module; taking the original foggy RGB image as a guide, it performs image edge sharpening on that 3-channel RGB output.
S3, adopting a progressive training strategy to pass the image through the 3 defogging modules in turn, the output of the previous defogging module serving as the input of the next defogging module;
S4, computing losses between the ground truth and the feature maps both before and after the guiding filter layer; the supervision before the guiding filter provides strong supervision, enabling the neural network block to learn more features.
In this embodiment, as shown in fig. 1, an end-to-end convolutional neural network is constructed and a foggy RGB image is input directly into the network model, which is composed of three defogging modules. The foggy RGB image first passes through defogging module 1, yielding a three-channel output with a rough defogging effect; the output of defogging module 1 is then input to defogging module 2, yielding a three-channel output with a better defogging effect; finally, the output of defogging module 2 is input to defogging module 3 to obtain the RGB image with the best defogging effect. The three defogging modules have the same structure, each comprising 1 neural network block and 1 guiding filter layer (3 of each in total). The foggy RGB image is input directly into the network, the network learns and reconstructs based on training data, and a clear RGB image is obtained as the output.
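The end-to-end wiring just described can be sketched as follows. The inner module here is a deliberately toy stand-in (a small CNN plus a learnable 1×1 mix standing in for the guided-filter stage, which is NOT the patent's actual filter); what the sketch shows faithfully is the composition: three modules chained end to end, with every filter stage re-reading the original hazy image as its guide.

```python
import torch
import torch.nn as nn

class DehazeModule(nn.Module):
    """Toy stand-in for one defogging module (neural network block followed
    by a guided-filter stage). The 'filter' is just a learnable 1x1 mix of
    the block output and the guide, for illustration only."""
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        self.guided = nn.Conv2d(6, 3, 1)   # consumes [block output, guide]

    def forward(self, x, guide):
        nb = self.block(x)                                  # before-filter output (L1/L3/L5)
        gf = self.guided(torch.cat([nb, guide], dim=1))     # after-filter output (L2/L4/L6)
        return nb, gf

class ProgressiveNet(nn.Module):
    """Three modules end to end: the image stream is progressive, while every
    guided-filter stage re-reads the ORIGINAL hazy image as its guide."""
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList(DehazeModule() for _ in range(3))

    def forward(self, hazy):
        x, supervised = hazy, []
        for stage in self.stages:
            nb, gf = stage(x, hazy)       # guide is always the original input
            supervised.append((nb, gf))   # six tensors for the six loss terms
            x = gf                        # module i's output feeds module i+1
        return x, supervised
```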
Referring to fig. 2, each neural network block performs defogging processing on an input image, specifically:
firstly, the input is preprocessed with 2 convolution layers of 3×3 to obtain 64 feature maps, which mainly learn simple information of the image; then 1/2 downsampling is performed with a 3×3 convolution layer (stride set to 2), obtaining 64 feature maps of half the original input size; the core is then built on a dense network (DenseNet) architecture (inspired by dense networks, which maximally propagate texture information along features of different scales): each dense block uses 1×1 and 3×3 convolution layers, where the 1×1 convolution layer acts as a bottleneck that reduces the input feature maps to 64, each dense block takes as input all previous feature maps, and the feature map counts of the dense blocks are 128, 192, 256, and 320 respectively; next a 3×3 convolution layer produces 64 feature maps; then 2× upsampling with a 3×3 deconvolution layer yields 64 feature maps of the same size as the original input, so that the final output image of the neural network block matches the original input size; finally, a 3×3 and a 1×1 convolution layer produce the 3-channel RGB output.
Each guiding filter layer takes the original foggy RGB image as a guide and performs image edge sharpening on the 3-channel RGB output of the neural network block.
In this embodiment, the inputs to the guiding filter layer are the original foggy RGB image and the output of the neural network block. Because the output image of the neural network block exhibits edge blurring while the original foggy RGB image carries sharper edge information, the guiding filter layer uses the original foggy RGB image as a guide to sharpen the edges of the neural network block's output, making them consistent with the edges of the original foggy RGB image. Specifically:
the original foggy RGB image is taken as a guiding image I, the output of the neural network block is taken as an image P to be filtered, the filtered output image is Q, and the guiding filter layer is simply defined as follows:
Q=∑W(I)*P
wherein W is a weight value determined according to the guide image I, and the weight value is a parameter learned by the guide filter layer in the model training process.
In this embodiment, a progressive training strategy is adopted to pass the image through the 3 defogging modules, the output of the previous defogging module serving as the input of the next; that is, the foggy image passes through defogging module 1, defogging module 2, and defogging module 3 in sequence, finally yielding a clear image. When a foggy photograph is taken, the fog concentration is lower in areas close to the camera (shallow depth of field) and higher in areas far from the camera (deep depth of field). Accordingly, as the network processes the image, defogging module 1 handles the areas of shallow depth of field, defogging module 2 the deeper areas, and defogging module 3 the deepest areas; the 3 defogging modules work progressively, defogging the image from its shallower to its deeper areas in turn to obtain the final clear RGB image.
Referring to fig. 3, the feature maps before and after the guiding filter layer each compute a mean square error loss against the ground truth, with loss weights before and after the guiding filter in the ratio 1:10; the supervision before the guiding filter provides strong supervision, enabling the neural network block to learn more features.
The loss function of the entire end-to-end convolutional neural network is as follows:
L=αL1+βL2+αL3+βL4+αL5+βL6
wherein the ratio of α to β is 1:10; L1 denotes the mean square error (MSE) loss between the output of neural network block 1 and the ground truth, L2 the MSE loss between the output of guiding filter layer 1 and the ground truth, L3 the MSE loss between the output of neural network block 2 and the ground truth, L4 the MSE loss between the output of guiding filter layer 2 and the ground truth, L5 the MSE loss between the output of neural network block 3 and the ground truth, and L6 the MSE loss between the output of guiding filter layer 3 and the ground truth.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is defined by the claims.
Claims (3)
1. An image defogging method based on a progressive guiding strong supervision neural network is characterized by comprising the following steps:
constructing an end-to-end convolutional neural network whose input is an original foggy RGB image and whose output is a clear RGB image; the convolutional neural network is formed by connecting 3 defogging modules of identical structure end to end, the output of the previous defogging module serving as the input of the next; each defogging module consists of 1 neural network block and 1 guiding filter layer;
in each defogging module, the neural network block performs defogging processing on the image input to the current defogging module to obtain a 3-channel RGB output; the inputs of the guiding filter layer are the original foggy RGB image and the RGB output of the neural network block in the current defogging module, and the guiding filter layer, taking the original foggy RGB image as a guide, performs image edge sharpening on the 3-channel RGB output of the neural network block in the current defogging module;
the neural network block carries out defogging processing on the image input into the current defogging module, and the method comprises the following steps:
firstly, the input is preprocessed with 2 convolution layers of 3×3 to obtain 64 feature maps; then 1/2 downsampling is performed with a 3×3 convolution layer, obtaining 64 feature maps of half the original input size; the core is then built on a dense network architecture: each dense block uses 1×1 and 3×3 convolution layers, where the 1×1 convolution layer acts as a bottleneck that reduces the input feature maps to 64, each dense block takes as input all previous feature maps, and the feature map counts of the dense blocks are 128, 192, 256, and 320 respectively; next a 3×3 convolution layer produces 64 feature maps; then 2× upsampling with a 3×3 deconvolution layer yields 64 feature maps of the same size as the original input; finally, a 3×3 convolution layer and a 1×1 convolution layer produce the 3-channel RGB output;
the convolutional neural network comprises 3 defogging modules which are connected end to end, and the loss function of the convolutional neural network from end to end is as follows:
L=αL1+βL2+αL3+βL4+αL5+βL6
wherein the ratio of α to β is 1:10; L1 denotes the MSE loss between the output of the first neural network block and the ground truth, L2 the MSE loss between the output of the first guiding filter layer and the ground truth, L3 the MSE loss between the output of the second neural network block and the ground truth, L4 the MSE loss between the output of the second guiding filter layer and the ground truth, L5 the MSE loss between the output of the third neural network block and the ground truth, and L6 the MSE loss between the output of the third guiding filter layer and the ground truth.
2. The image defogging method based on the progressive guiding strong supervision neural network according to claim 1, wherein the guiding filter layer uses an original foggy RGB image as a guide, performs image edge sharpening processing on the 3-channel RGB output of the neural network block in the current defogging module, so that the edge of the output image of the neural network block is consistent with the edge of the original foggy RGB image, and the method comprises the following steps:
the original foggy RGB image is taken as the guiding image I, the output of the neural network block as the image P to be filtered, and the filtered output image as Q; the guiding filter layer is simply defined as follows:
Q=∑W(I)*P
wherein W is a weight value determined from the guiding image I, and this weight is a parameter learned by the guiding filter layer during model training.
3. The image defogging method based on the progressive guiding strong supervision neural network according to claim 1, wherein the feature maps before and after the guiding filter layer each compute a mean square error loss against the ground truth, with loss weights before and after the guiding filter layer in the ratio 1:10; the supervision before the guiding filter provides strong supervision, enabling the neural network block to learn more features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010075090.6A CN111340718B (en) | 2020-01-22 | 2020-01-22 | Image defogging method based on progressive guiding strong supervision neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111340718A CN111340718A (en) | 2020-06-26 |
CN111340718B true CN111340718B (en) | 2023-06-20 |
Family
ID=71183363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010075090.6A Active CN111340718B (en) | 2020-01-22 | 2020-01-22 | Image defogging method based on progressive guiding strong supervision neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111340718B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111861936B (en) * | 2020-07-29 | 2023-03-24 | 抖音视界有限公司 | Image defogging method and device, electronic equipment and computer readable storage medium |
CN111932365B (en) * | 2020-08-11 | 2021-09-10 | 上海华瑞银行股份有限公司 | Financial credit investigation system and method based on block chain |
CN112070701A (en) * | 2020-09-08 | 2020-12-11 | 北京字节跳动网络技术有限公司 | Image generation method, device, equipment and computer readable medium |
CN114049274A (en) * | 2021-11-13 | 2022-02-15 | 哈尔滨理工大学 | Defogging method for single image |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109903232A (en) * | 2018-12-20 | 2019-06-18 | 江南大学 | A kind of image defogging method based on convolutional neural networks |
CN109934779A (en) * | 2019-01-30 | 2019-06-25 | 南京邮电大学 | A kind of defogging method based on Steerable filter optimization |
CN110097519A (en) * | 2019-04-28 | 2019-08-06 | 暨南大学 | Double supervision image defogging methods, system, medium and equipment based on deep learning |
CN110599534A (en) * | 2019-09-12 | 2019-12-20 | 清华大学深圳国际研究生院 | Learnable guided filtering module and method suitable for 2D convolutional neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111340718B (en) | Image defogging method based on progressive guiding strong supervision neural network | |
CN111062880B (en) | Underwater image real-time enhancement method based on condition generation countermeasure network | |
CN112233038B (en) | True image denoising method based on multi-scale fusion and edge enhancement | |
CN108376392B (en) | Image motion blur removing method based on convolutional neural network | |
Santra et al. | Learning a patch quality comparator for single image dehazing | |
CN111915530B (en) | End-to-end-based haze concentration self-adaptive neural network image defogging method | |
CN108564549A (en) | A kind of image defogging method based on multiple dimensioned dense connection network | |
CN113052814B (en) | Dim light image enhancement method based on Retinex and attention mechanism | |
CN112419191B (en) | Image motion blur removing method based on convolution neural network | |
CN110533614B (en) | Underwater image enhancement method combining frequency domain and airspace | |
CN111626960A (en) | Image defogging method, terminal and computer storage medium | |
CN109345609B (en) | Method for denoising mural image and generating line drawing based on convolutional neural network | |
CN116188325A (en) | Image denoising method based on deep learning and image color space characteristics | |
CN112419163B (en) | Single image weak supervision defogging method based on priori knowledge and deep learning | |
CN113160286A (en) | Near-infrared and visible light image fusion method based on convolutional neural network | |
Wang et al. | An efficient method for image dehazing | |
CN113066025A (en) | Image defogging method based on incremental learning and feature and attention transfer | |
CN111612803B (en) | Vehicle image semantic segmentation method based on image definition | |
CN116385293A (en) | Foggy-day self-adaptive target detection method based on convolutional neural network | |
CN113012067B (en) | Retinex theory and end-to-end depth network-based underwater image restoration method | |
CN112184566B (en) | Image processing method and system for removing adhered water mist and water drops | |
Niu et al. | Underwater Waste Recognition and Localization Based on Improved YOLOv5. | |
CN113888632A (en) | Method and system for positioning stains in pool by combining RGBD image | |
CN110717873A (en) | Traffic sign deblurring detection recognition algorithm based on multi-scale residual error | |
CN112381725A (en) | Image restoration method and device based on deep convolution countermeasure generation network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||