CN110675330A - Image rain removing method of encoding-decoding network based on channel level attention mechanism - Google Patents
- Publication number
- CN110675330A (application CN201910741764.9A)
- Authority
- CN
- China
- Prior art keywords
- network
- net
- image
- channel
- encoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T5/00—Image enhancement or restoration
      - G06T9/00—Image coding
        - G06T9/002—Image coding using neural networks
Abstract
The invention discloses an image rain removing method of an encoding-decoding network based on a channel-level attention mechanism, belonging to the field of image processing. The method comprises two networks: the first, called A-Net, is an attention dense network (Attention Dense Net); the second, called D-Net, is an encoding-decoding rain removing network (De-raining Encoder-Decoder Net). A-Net and D-Net are jointly optimized to obtain the image rain removing method. Corresponding A-Net and D-Net branches are established for the image of each color channel c ∈ {r, g, b} and processed separately, and the encoding-decoding network then performs the rain removal. Meanwhile, an attention map constructed with DenseNet accounts for the pixel-wise distribution of rain, which is very helpful for improving system performance, so that a better rain-removed image is finally produced with a smaller amount of calculation.
Description
Technical Field
The invention relates to the field of image processing, in particular to an image rain removing method of an encoding-decoding network based on a channel level attention mechanism.
Background
Rain removal means removing the raindrops from a picture taken in rain so as to obtain a restored picture; like image defogging and super-resolution, it belongs to the category of image processing in the CV field. Rain removal is low-level image processing: in essence, it separates the picture content from the superimposed raindrop pattern and removes the latter. Most existing image rain removing methods perform the rain removing processing with a deep network architecture such as DerainNet or JORDER on the basis of a convolutional neural network. The specific technical schemes are as follows:
1、DerainNet
A deep detail network is constructed on the basis of a residual network (ResNet) with the aim of eliminating the influence of rainwater. Prior knowledge is used to separate the high-frequency and low-frequency components of an image; the high-frequency component serves as the input of the residual network, and the output of the residual network is summed with the original image to obtain the final result with the rainwater influence removed.
2、JORDER
Rain removing processing is carried out on the basis of a convolutional neural network. First, the input image is transferred to a feature space through a convolutional layer; then three networks with different dilation factors are summed to obtain a rain feature F, and R (the rain-streak residual) is obtained through a convolutional network. F and R are concatenated into [F, R], from which a convolutional layer yields S (the rain-streak image); F, R and S are then concatenated into [F, R, S], and B (the clear image) is obtained by a final convolution.
The above methods ignore the influence of the {r, g, b} channels on the rain removing effect. Brighter pixels in an image no longer keep the same brightness after conventional rain removal, because the density distribution pattern of rain changes with the color channel, a point neglected in previous rain removal research.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the problems in the prior art, the invention provides an image rain removing method of an encoding-decoding network based on a channel-level attention mechanism. It establishes corresponding A-net and D-net branches for the images of the color channels c ∈ {r, g, b}, processes them separately, and then performs the rain removal with the encoding-decoding network. Meanwhile, an attention map constructed with DenseNet captures the pixel-wise distribution of rain, which greatly helps improve system performance, so that a better rain-removed image is finally produced with a smaller amount of calculation.
2. Technical scheme
In order to solve the above problems, the present invention adopts the following technical solutions.
An image rain removing method based on an encoding-decoding network with a channel-level attention mechanism comprises two networks: the first is called A-Net (Attention Dense Net), and the second is called D-Net (De-raining Encoder-Decoder Net). A-Net and D-Net are jointly optimized to obtain the image rain removing method. The optimization steps are as follows:
S1. Compute the attention map A_c of the A-net network for each channel as follows:

R_c = O_c − B_c

where R_c(x) denotes the residual at pixel x; the attention map A_c is obtained according to whether the corresponding pixel of the residual image contains rain, as shown in the following equation:

c indexes the three channels r, g, b, and x indexes the pixels;
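The equation itself is not reproduced in this text; a plausible reconstruction, treating A_c as a binary map over the residual image (the exact decision rule is an assumption), is:

```latex
A_c(x) =
\begin{cases}
1, & R_c(x) \neq 0 \quad \text{(rain at pixel } x\text{)} \\
0, & R_c(x) = 0
\end{cases}
```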
The input of A-net is O and the output is A_c;

S2. Compute the input of the encoding-decoding network in D-net: [A_c O_c] denotes concatenation; the input of D-net is [A_c O_c] and the output is B_c;

S3. Initialize the filter parameters in parameter sets W_1 and W_2 with initial values W_1^(0) and W_2^(0) drawn at random from a [0, 1] Gaussian distribution;

S4. Establish the objective loss function L: L = L_A + L_D;

S5. Minimize the loss function L by optimizing the parameters with stochastic gradient descent;

S6. Iterate and update until convergence.
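As a minimal sketch of step S1, the per-channel attention maps can be computed from a rainy image O and its clean counterpart B as follows. The binary thresholding with tolerance `eps` is an assumption, since the text only states that A_c depends on whether the residual pixel contains rain:

```python
import numpy as np

def attention_maps(O, B, eps=1e-3):
    """Per-channel attention maps A_c from a rainy image O and its
    clean counterpart B (both H x W x 3, float in [0, 1]).

    A_c(x) = 1 where the residual R_c = O_c - B_c indicates rain at
    pixel x, else 0. The eps tolerance is an assumption."""
    A = np.zeros_like(O)
    for c in range(3):                   # c indexes the r, g, b channels
        R_c = O[..., c] - B[..., c]      # residual R_c = O_c - B_c
        A[..., c] = (np.abs(R_c) > eps).astype(O.dtype)
    return A

# Toy example: a 2x2 image where one pixel of the red channel is "rainy".
B = np.zeros((2, 2, 3))
O = B.copy()
O[0, 0, 0] = 0.8                         # rain brightens this red pixel
A = attention_maps(O, B)
```

With this input, A is zero everywhere except at the single rainy pixel of the red channel.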
The method establishes corresponding A-net and D-net branches for the images of the color channels c ∈ {r, g, b}, processes them separately, and then performs the rain removal with the encoding-decoding network. Meanwhile, an attention map constructed with DenseNet accounts for the pixel-wise distribution of rain, which greatly helps improve system performance, so that a better rain-removed image is finally produced with a smaller amount of calculation.
Further, in step S1, the A-Net network is designed with a DenseNet (densely connected convolutional network) structure whose Block contains a five-layer convolutional network.
Further, in step S1, the Block of the A-Net network comprises five convolutional layers, each connected to every other convolutional layer in a feed-forward manner.
Further, in step S1, the A-Net network comprises five convolutional layers, and the last convolutional layer uses the sigmoid function as its activation function.
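The dense, feed-forward Block wiring described above (each layer receiving the concatenated outputs of all preceding layers, with a sigmoid on the last layer) can be sketched as follows. The 1×1 per-pixel linear maps, the layer widths, and the single-channel output are illustrative assumptions, not the actual A-Net design:

```python
import numpy as np

def conv1x1(x, w):
    """Toy 1x1 'convolution': a per-pixel linear map over channels.
    Real A-Net layers would be spatial convolutions; 1x1 keeps the
    sketch short while preserving the dense wiring pattern."""
    return x @ w                                   # (H, W, Cin) @ (Cin, Cout)

def dense_block(x, rng, growth=4, layers=5):
    """Five-layer dense block: layer i sees [x, y_1, ..., y_{i-1}]."""
    feats = [x]
    for _ in range(layers - 1):
        inp = np.concatenate(feats, axis=-1)       # dense concatenation
        w = rng.standard_normal((inp.shape[-1], growth)) * 0.1
        feats.append(np.maximum(conv1x1(inp, w), 0))   # ReLU layer
    inp = np.concatenate(feats, axis=-1)
    w = rng.standard_normal((inp.shape[-1], 1)) * 0.1
    return 1.0 / (1.0 + np.exp(-conv1x1(inp, w)))  # sigmoid on the last layer

rng = np.random.default_rng(0)
A = dense_block(rng.standard_normal((8, 8, 3)), rng)
```

The sigmoid on the final layer keeps the attention values strictly inside (0, 1), as an attention map requires.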
Further, in step S2, the D-net network structure comprises convolutional layers and deconvolutional layers: a multi-layer convolutional network forms the encoder, the corresponding multi-layer deconvolutional network forms the decoder, and each convolutional layer corresponds to one deconvolutional layer.
Further, in step S2, in the D-net network structure, the convolution filters of the encoder are 3 × 3 in size, with 128 filters per layer and 15 layers in total; the deconvolution filters of the decoder are likewise 3 × 3, with 128 filters per layer and 15 layers in total.
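Taking the stated sizes at face value, a rough weight count per side can be derived (assuming 128 input channels for every layer and ignoring the first layer's 3-channel input and bias terms):

```python
# Encoder/decoder weight estimate for the D-net sizes given above.
k = 3          # 3 x 3 filter
filters = 128  # filters per layer
layers = 15    # layers per side

# Each intermediate layer maps 128 channels to 128 channels.
per_layer = k * k * filters * filters   # 147,456 weights per layer
per_side = per_layer * layers           # roughly 2.2M weights per side

print(per_layer, per_side)
```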
Further, in step S2, the D-net network structure uses skip connections between corresponding convolutional and deconvolutional layers, so that image features can be fed directly to the decoder side, which helps recover image details better.
Further, in step S4, the term L_A of the objective loss function is computed as follows:

where the predicted map is the output of the A-net network under parameter set W_1^(j), i.e., the function mapping represented by A-net for channel c; O_i denotes the i-th image, and N is the number of training data.
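The formula itself is not reproduced in this text. Writing f_A^c for the A-net mapping of channel c and A_{c,i} for the ground-truth attention map of image i (symbols introduced here), a mean-squared-error form consistent with the surrounding description would be (the squared L2 norm is an assumption):

```latex
L_A = \frac{1}{N} \sum_{i=1}^{N} \sum_{c \in \{r,g,b\}}
      \left\| f_A^{c}\!\left(O_i; W_1^{(j)}\right) - A_{c,i} \right\|_2^2
```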
Further, in step S4, the term L_D of the objective loss function is computed as follows:

where the predicted quantity is the negative-residual output of the D-net network for channel c under parameter set W_2^(j).
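This formula is likewise not reproduced here. Writing f_D^c for the D-net mapping of channel c (a symbol introduced here), and noting that D-net outputs the negative residual B_c − O_c, a mean-squared-error form consistent with the surrounding description would be (the squared L2 norm is an assumption):

```latex
L_D = \frac{1}{N} \sum_{i=1}^{N} \sum_{c \in \{r,g,b\}}
      \left\| f_D^{c}\!\left([A_{c,i}\, O_{c,i}]; W_2^{(j)}\right)
              - \left(B_{c,i} - O_{c,i}\right) \right\|_2^2
```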
Further, in step S5, the optimization formula of the parameters is as follows:
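The optimization formula is not reproduced in this text; a standard stochastic-gradient-descent update consistent with step S5 (the learning rate η is an assumed symbol) would be:

```latex
W_k^{(j+1)} = W_k^{(j)} - \eta \,\frac{\partial L}{\partial W_k^{(j)}},
\qquad k \in \{1, 2\}
```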
3. advantageous effects
Compared with the prior art, the invention has the advantages that:
the scheme can respectively establish corresponding A-net and D-net for the images of the channel c belonging to { r, g and b } in different colors, respectively process the images, realize rain removal processing by utilizing an encoding-decoding network, and simultaneously realize the distribution of rain based on pixel points by constructing an attention map by utilizing DenseNet, thereby greatly helping to improve the system performance and finally processing better rain removal images with smaller calculated amount.
Drawings
FIG. 1 is a schematic diagram of the framework of the present invention;
FIG. 2 is a schematic diagram of an A-Net network structure according to the present invention;
FIG. 3 is a schematic diagram of the D-net structure of the present invention;
FIG. 4 is a schematic diagram of the actual effect of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention; it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by those skilled in the art without any inventive work are within the scope of the present invention.
In the description of the present invention, it should be noted that the terms "upper", "lower", "inner", "outer", "top/bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "disposed," "sleeved/connected," "connected," and the like are to be construed broadly, e.g., "connected," which may be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1:
Referring to figs. 1-3, the image rain removing method based on the channel-level attention mechanism encoding-decoding network comprises two networks: the first is called A-Net (Attention Dense Net), and the second is called D-Net (De-raining Encoder-Decoder Net). A-Net and D-Net are jointly optimized to obtain the image rain removing method. The optimization steps are as follows:
S1. Compute the attention map A_c of the A-net network for each channel as follows:

R_c = O_c − B_c

where R_c(x) denotes the residual at pixel x; the attention map A_c is obtained according to whether the corresponding pixel of the residual image contains rain, as shown in the following equation:

c indexes the three channels r, g, b, and x indexes the pixels;

The input of A-net is O and the output is A_c;

S2. Compute the input of the encoding-decoding network in D-net: [A_c O_c] denotes concatenation. The input of D-net is [A_c O_c] and the output is B_c;
S3. Initialize the filter parameters in parameter sets W_1 and W_2 with initial values W_1^(0) and W_2^(0) drawn at random from a [0, 1] Gaussian distribution;
S4. Establish the objective loss function L as follows:

L = L_A + L_D

where:

L_A is computed from the output of the A-net network under parameter set W_1^(j), i.e., the function mapping represented by A-net for channel c, with O_i denoting the i-th image and N the number of training data;

L_D is computed from the negative-residual output of the D-net network for channel c under parameter set W_2^(j);
S5. On the basis of minimizing the loss function L, optimize the parameters with stochastic gradient descent as follows:

S6. Iterate and update until convergence.
The jointly optimized rain removing method is then tested. Input: the rainy image O and the corresponding channel images O_c, the filter parameter sets W_1 of the A-net network and W_2 of the D-net network. Output: the clear image B_c. The specific test steps are as follows:

S1. From the A-net parameter set W_1 and the input O, compute A_c according to the designed network structure;

S2. Use the concatenation [A_c O_c] as the input of D-net and, from the D-net parameter set W_2, compute B_c according to the designed network structure;

S3. Combine the three channel outputs B_c (c ∈ {r, g, b}) and splice them into a color image B, i.e., the rain-removed image.
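The three test steps can be sketched as follows; `a_net` and `d_net` are stand-in callables (identity-style placeholders introduced here), not the trained networks described in this document:

```python
import numpy as np

def derain(O, a_net, d_net):
    """Test-phase pipeline: per channel c, compute A_c = a_net(O, c),
    feed the concatenation [A_c, O_c] to d_net to get the negative
    residual, add it to O_c, then splice the channels into B."""
    B = np.zeros_like(O)
    for c in range(3):                          # c indexes r, g, b
        O_c = O[..., c]
        A_c = a_net(O, c)
        neg_residual = d_net(np.stack([A_c, O_c], axis=-1), c)
        B[..., c] = O_c + neg_residual          # B_c = O_c + (B_c - O_c)
    return B

# Placeholder networks: "no rain detected", so the output equals the input.
a_net = lambda O, c: np.zeros(O.shape[:2])
d_net = lambda x, c: np.zeros(x.shape[:2])

O = np.random.default_rng(1).random((4, 4, 3))
B = derain(O, a_net, d_net)
```

With the zero-residual placeholders, the derained image B equals the input O, confirming the channel-wise wiring.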
In use, A-net and D-net each have three separately designed channel branches, which process the mappings of the r, g and b channels of the image respectively. Each branch output is concatenated with the image of the corresponding channel in the original image and used as the input of the encoding-decoding network, whose output corresponds to a negative residual; adding this negative residual to the rainy image of the corresponding channel yields the rain-free image of that channel. Splicing the images of the three channels gives a clean image with the rainwater removed. The method thus establishes corresponding A-net and D-net branches for the images of the color channels c ∈ {r, g, b}, processes them separately, performs the rain removal with the encoding-decoding network, and uses an attention map constructed with DenseNet to account for the pixel-wise distribution of rain, which greatly improves system performance, so that a better rain-removed image is finally produced with a smaller amount of calculation.
The foregoing is only a preferred embodiment of the present invention, and the scope of the invention is not limited thereto. Any equivalent replacement or modification that a person skilled in the art can conceive within the technical scope disclosed by the present invention falls within the scope of the present invention.
Claims (10)
1. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism is characterized in that: the rain removing method comprises two networks, the first called A-net, namely an attention dense network, and the second called D-net, namely an encoding-decoding rain removing network; A-net and D-net are jointly optimized to obtain the rain-removed image, and the optimization steps are as follows:
S1. Compute the attention map A_c of the A-net network for each channel as follows:

R_c = O_c − B_c

where R_c(x) denotes the residual at pixel x; the attention map A_c is obtained according to whether the corresponding pixel of the residual image contains rain, as shown in the following equation:

c indexes the three channels r, g, b, and x indexes the pixels;

The input of A-net is O and the output is A_c;

S2. Compute the input of the encoding-decoding network in D-net: [A_c O_c] denotes concatenation; the input of D-net is [A_c O_c] and the output is B_c;

S3. Initialize the filter parameters in parameter sets W_1 and W_2 with initial values W_1^(0) and W_2^(0) drawn at random from a [0, 1] Gaussian distribution;

S4. Establish the objective loss function L: L = L_A + L_D;

S5. Minimize the loss function L by optimizing the parameters with stochastic gradient descent;

S6. Iterate and update until convergence.
2. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1, characterized in that: in step S1, the A-Net network is designed with a DenseNet network structure whose Block contains a five-layer convolutional network.
3. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1 or 2, characterized in that: in step S1, the Block of the A-Net network comprises five convolutional layers, each connected to every other convolutional layer in a feed-forward manner.
4. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1 or 2, characterized in that: in step S1, the A-Net network comprises five convolutional layers, and the last convolutional layer uses the sigmoid function as its activation function.
5. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1, characterized in that: in step S2, the D-net network structure comprises convolutional layers and deconvolutional layers: a multi-layer convolutional network forms the encoder, the corresponding multi-layer deconvolutional network forms the decoder, and each convolutional layer corresponds to one deconvolutional layer.
6. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1 or 5, characterized in that: in step S2, in the D-net network structure, the convolution filters of the encoder are 3 × 3 in size, with 128 filters per layer and 15 layers in total; the deconvolution filters of the decoder are likewise 3 × 3, with 128 filters per layer and 15 layers in total.
7. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1 or 5, characterized in that: in step S2, the D-net network structure uses skip connections between corresponding convolutional and deconvolutional layers, so that image features can be fed directly to the decoder side, which helps recover image details better.
8. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1, characterized in that: in step S4, the term L_A of the objective loss function is computed as follows:
9. The image rain removing method of the encoding-decoding network based on the channel-level attention mechanism according to claim 1, characterized in that: in step S4, the term L_D of the objective loss function is computed as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910741764.9A CN110675330A (en) | 2019-08-12 | 2019-08-12 | Image rain removing method of encoding-decoding network based on channel level attention mechanism |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110675330A true CN110675330A (en) | 2020-01-10 |
Family
ID=69068801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910741764.9A Pending CN110675330A (en) | 2019-08-12 | 2019-08-12 | Image rain removing method of encoding-decoding network based on channel level attention mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110675330A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111445484A (en) * | 2020-04-01 | 2020-07-24 | 华中科技大学 | Image-level labeling-based industrial image abnormal area pixel level segmentation method |
CN111860517A (en) * | 2020-06-28 | 2020-10-30 | 广东石油化工学院 | Semantic segmentation method under small sample based on decentralized attention network |
CN113240589A (en) * | 2021-04-01 | 2021-08-10 | 重庆兆光科技股份有限公司 | Image defogging method and system based on multi-scale feature fusion |
CN114240797A (en) * | 2021-12-22 | 2022-03-25 | 海南大学 | OCT image denoising method, device, equipment and medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9288458B1 (en) * | 2015-01-31 | 2016-03-15 | Hrl Laboratories, Llc | Fast digital image de-hazing methods for real-time video processing |
CN108765344A (en) * | 2018-05-30 | 2018-11-06 | 南京信息工程大学 | A method of the single image rain line removal based on depth convolutional neural networks |
CN108900841A (en) * | 2018-07-10 | 2018-11-27 | 中国科学技术大学 | Method for video coding based on image rain removing algorithm |
CN109087258A (en) * | 2018-07-27 | 2018-12-25 | 中山大学 | A kind of image rain removing method and device based on deep learning |
CN109360155A (en) * | 2018-08-17 | 2019-02-19 | 上海交通大学 | Single-frame images rain removing method based on multi-scale feature fusion |
CN109447918A (en) * | 2018-11-02 | 2019-03-08 | 北京交通大学 | Removing rain based on single image method based on attention mechanism |
CN109919838A (en) * | 2019-01-17 | 2019-06-21 | 华南理工大学 | The ultrasound image super resolution ratio reconstruction method of contour sharpness is promoted based on attention mechanism |
CN110009580A (en) * | 2019-03-18 | 2019-07-12 | 华东师范大学 | The two-way rain removing method of single picture based on picture block raindrop closeness |
Non-Patent Citations (3)
Title |
---|
RISHENG LIU 等: "Learning Aggregated Transmission Propagation Networks for Haze Removal and Beyond", 《IEEE》 * |
RUI QIAN 等: "Attentive Generative Adversarial Network for Raindrop Removal from a Single Image", 《IEEE》 * |
林向伟等: "基于多细节卷积神经网络的单幅图像去雨方法", 《信号处理》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675330A (en) | Image rain removing method of encoding-decoding network based on channel level attention mechanism | |
CN110210608B (en) | Low-illumination image enhancement method based on attention mechanism and multi-level feature fusion | |
CN109447907B (en) | Single image enhancement method based on full convolution neural network | |
CN109712083B (en) | Single image defogging method based on convolutional neural network | |
CN108269244B (en) | Image defogging system based on deep learning and prior constraint | |
CN109472830A (en) | A kind of monocular visual positioning method based on unsupervised learning | |
CN106780356A (en) | Image defogging method based on convolutional neural networks and prior information | |
CN114742719B (en) | End-to-end image defogging method based on multi-feature fusion | |
CN111161360B (en) | Image defogging method of end-to-end network based on Retinex theory | |
CN109785252B (en) | Night image enhancement method based on multi-scale residual error dense network | |
CN109087255A (en) | Lightweight depth image denoising method based on mixed loss | |
CN108805839A (en) | Combined estimator image defogging method based on convolutional neural networks | |
CN110070489A (en) | Binocular image super-resolution method based on parallax attention mechanism | |
CN111402145B (en) | Self-supervision low-illumination image enhancement method based on deep learning | |
CN110288535B (en) | Image rain removing method and device | |
CN112116601A (en) | Compressive sensing sampling reconstruction method and system based on linear sampling network and generation countermeasure residual error network | |
CN111882485B (en) | Hierarchical feature feedback fusion depth image super-resolution reconstruction method | |
CN115330631A (en) | Multi-scale fusion defogging method based on stacked hourglass network | |
CN112767283A (en) | Non-uniform image defogging method based on multi-image block division | |
CN116757988B (en) | Infrared and visible light image fusion method based on semantic enrichment and segmentation tasks | |
CN111598789A (en) | Sparse color sensor image reconstruction method based on deep learning | |
CN113139914B (en) | Image denoising method and device, electronic equipment and storage medium | |
CN116468625A (en) | Single image defogging method and system based on pyramid efficient channel attention mechanism | |
CN110246085A (en) | A kind of single-image super-resolution method | |
CN115439363A (en) | Video defogging device and method based on comparison learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |