CN114283078A - Self-adaptive fusion image defogging method based on double-path convolution neural network - Google Patents

Self-adaptive fusion image defogging method based on double-path convolution neural network

Info

Publication number
CN114283078A
CN114283078A CN202111494688.XA CN202111494688A
Authority
CN
China
Prior art keywords
image
neural network
network
adaptive fusion
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111494688.XA
Other languages
Chinese (zh)
Inventor
董立泉
易伟超
刘明
李世添
惠梅
赵跃进
孔令琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202111494688.XA priority Critical patent/CN114283078A/en
Publication of CN114283078A publication Critical patent/CN114283078A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a self-adaptive fusion image defogging method based on a two-way convolutional neural network. First, features are extracted from the input foggy image by the two paths of a two-way convolutional neural network: the attention residual module in the image restoration sub-network is responsible for the preliminary restoration of the image, while the detail enhancement module in the detail enhancement sub-network is responsible for constructing the detail information of the image. Second, the features extracted by the two sub-networks are adaptively fused by an adaptive fusion module to obtain a more refined feature result. Finally, a multi-content loss function is introduced to optimize the model, in which a pixel-level loss reduces the content difference between images and a feature-level loss improves the visual quality of the restored image. The invention can be used for tasks such as target recognition and automatic driving in severe haze weather.

Description

Self-adaptive fusion image defogging method based on double-path convolution neural network
Technical Field
The invention relates to the fields of computer vision and image defogging, and in particular to a self-adaptive fusion image defogging method based on a two-way convolutional neural network, in which a designed model is trained by supervised learning to achieve image defogging.
Background
Fog is a common atmospheric phenomenon. Owing to the scattering caused by water droplets, dust and other particulate matter suspended in the air, images captured in foggy weather often suffer from degradation such as reduced contrast, color distortion and loss of detail, which in turn degrades the performance of subsequent tasks such as target recognition and detection.
Existing mainstream image defogging algorithms fall into two categories: defogging algorithms based on model priors and defogging algorithms based on convolutional neural networks. The former use various kinds of prior knowledge to estimate the transmission map and the atmospheric light, and then perform image defogging based on an atmospheric scattering model. The latter estimate the parameters of the atmospheric scattering model through learning, or directly restore a clear, fog-free image end to end. Although these defogging algorithms achieve good results, the following problems remain: (1) inaccurate parameter estimation leads to a poor defogging effect; (2) the recovered defogged images suffer from loss of detail information.
Therefore, how to effectively defog images and improve the detail information of the recovered images has become a key focus of research in the field.
Disclosure of Invention
Aiming at these problems, the invention provides a self-adaptive fusion image defogging method based on a two-way convolutional neural network, whose overall defogging model comprises two sub-networks: an image restoration sub-network and a detail enhancement sub-network. The former extracts image features through an attention residual module to perform a preliminary restoration of the image, while the latter constructs the detail information of the image through a detail enhancement module; the final defogged image is then obtained through an adaptive fusion module. The method adopts an end-to-end training mode and iteratively updates the model parameters with an optimized loss function, thereby effectively alleviating the problems of poor defogging effect and serious loss of detail that are common in current algorithms, and providing clear, fog-free images as input for subsequent high-level computer vision tasks such as target recognition and automatic driving.
The technical scheme adopted by the invention is a self-adaptive fusion image defogging method based on a two-way convolutional neural network, implemented according to the following steps:
Step 1: construct a two-way convolutional neural network defogging model comprising two sub-networks, an image restoration sub-network and a detail enhancement sub-network, which together contain a feature attention residual module, a detail enhancement module and an adaptive fusion module;
Step 2: form a paired data set of foggy and fog-free images using the atmospheric scattering model, and extract features from the input foggy image through the two paths of the convolutional neural network to obtain feature maps carrying different information;
Step 3: adaptively fuse the feature maps from the two sub-networks with the adaptive fusion module, and obtain the fused feature result through conversion;
Step 4: perform spatial dimension reduction on the fused feature map and output the defogged image in the original RGB space; finally, compute the loss difference between the defogged image and the fog-free image with the optimized loss function, update the model parameters by back-propagation, and iterate repeatedly to obtain the final trained model.
The invention is also characterized in that:
the specific implementation process of the step 1 is as follows: the feature attention residual module introduces a feature attention mechanism when extracting image features, and the operation can obtain a weight map with channel and pixel responses to guide a network to focus on more important features rather than invalid features. In the detail enhancement module, expansion convolution with different expansion coefficients is used for extracting features of an input image, so that the network receptive field is enlarged, and details of the image are extracted more finely.
The specific implementation of step 2 is as follows: a data set for model training is constructed; based on the atmospheric scattering model, paired foggy and fog-free images are generated by simulation, and the training data include both indoor and outdoor sets. The simulated foggy image is taken as the input of the model, and the two sub-networks extract features at different levels to obtain feature maps containing different information. The restoration sub-network and the detail enhancement sub-network contain, respectively, 5 consecutive feature attention residual modules and 5 consecutive detail enhancement modules for feature extraction, as well as 2 convolutional layers of size 3×3 for extracting feature mappings.
The specific implementation of step 3 is as follows: first, the feature maps from the two sub-networks are added pixel by pixel to obtain a preliminary fusion result; second, the fusion result is passed through 2 convolutional layers with 3×3 kernels to obtain different feature weight maps; finally, the feature maps from the two sub-networks are linearly combined with the corresponding weight maps to obtain the final adaptive fusion result.
The specific implementation of step 4 is as follows: the adaptively fused feature map undergoes spatial dimension reduction, where a convolutional layer with a 3×3 kernel maps the high-dimensional feature map back to the original RGB space, and the defogged image is then obtained through a Tanh nonlinear activation layer; meanwhile, to accelerate the convergence of the model, a global cross-layer connection is added between input and output, which effectively avoids problems such as vanishing or exploding gradients. Finally, the loss difference between the defogged image and the fog-free image is computed with the optimized multi-content loss function, the model parameters are updated by back-propagation, and the process is iterated repeatedly to obtain the final trained model.
Compared with existing defogging algorithms, the invention has the following beneficial effects:
(1) The invention provides a self-adaptive fusion image defogging method based on a two-way convolutional neural network, in which the image restoration sub-network extracts features with a feature attention residual module, effectively improving the defogging effect on the image.
(2) The invention provides a self-adaptive fusion image defogging method based on a two-way convolutional neural network, in which the detail enhancement sub-network uses a detail enhancement module, enriching the detail information of the defogged image.
(3) The invention provides a self-adaptive fusion image defogging method based on a two-way convolutional neural network, in which the adaptive fusion module adaptively fuses the features extracted by the two sub-networks, enriching the information of the feature maps used to produce the final defogged image.
(4) The invention provides a self-adaptive fusion image defogging method based on a two-way convolutional neural network, which optimizes the loss function of the network and introduces a multi-content loss function comprising a pixel-level loss and a feature-level loss. The pixel-level loss is used to reduce the pixel difference between the defogged image and the clear image, while the feature-level loss further improves the visual quality of the defogged result.
Drawings
FIG. 1 is a schematic diagram of a two-way convolutional neural network adaptive fusion network structure according to the present invention;
FIG. 2 is a schematic view of a feature attention structure proposed in the present invention;
FIG. 3 is a schematic structural diagram of the feature attention residual module according to the present invention;
FIG. 4 is a schematic diagram of a detail enhancement module according to the present invention;
FIG. 5 is a schematic structural diagram of an adaptive fusion module according to the present invention;
FIG. 6 is a schematic diagram illustrating the defogging effect according to the embodiment of the present invention;
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It should be noted that the described embodiments are only intended to facilitate the understanding of the present invention, and do not have any limiting effect thereon. The accompanying drawings, which are in a simplified form and are not to scale, are included for purposes of illustrating embodiments of the invention in a clear and concise manner and are incorporated in and constitute a part of the specification. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a self-adaptive fusion image defogging method based on a two-way convolutional neural network, whose overall defogging model comprises two sub-networks: an image restoration sub-network and a detail enhancement sub-network. The former extracts image features through a feature attention residual module to perform a preliminary restoration of the image, while the latter constructs the detail information of the image through a detail enhancement module; the final defogged image is then obtained through an adaptive fusion module. Compared with conventional methods, the method adopts an end-to-end training mode and iteratively updates the model parameters with an optimized loss function, thereby effectively alleviating the problems of poor defogging effect and serious loss of detail that are common in current algorithms, and providing clear, fog-free images as input for subsequent high-level computer vision tasks such as target recognition and automatic driving.
The self-adaptive fusion image defogging method based on a two-way convolutional neural network is suitable for high-level computer vision tasks that require clear, fog-free images as input, such as target recognition and automatic driving. The proposed defogging model is shown in FIG. 1, and the method specifically comprises the following steps:
Step 1: construct the overall defogging model, which comprises two sub-networks: the image restoration sub-network and the detail enhancement sub-network, each containing its own feature extraction module, namely the feature attention residual module and the detail enhancement module, respectively. The feature attention residual module introduces a feature attention structure that guides the network to focus on more important features rather than uninformative ones; the detail enhancement module extracts features from the input image with dilated convolutions of different dilation rates, which enlarges the receptive field of the network while extracting image details more finely.
In step 1, the feature attention structure is shown in FIG. 2. By modeling and reconstructing the interdependent feature maps, the network can adaptively recalibrate channel and pixel responses. It relies mainly on two operations: depthwise convolution and pointwise convolution. The specific implementation is given by W = σ(Conv_pw(δ(Conv_dw(X)))), where σ denotes the Sigmoid function; Conv_pw denotes the pointwise convolution; δ denotes the ReLU activation function; Conv_dw denotes the depthwise convolution; X denotes the feature input; and W denotes the adaptively generated feature response. The reconstructed features are then obtained by element-wise multiplication, X' = W ⊙ X, where X' denotes the reconstructed feature output.
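The following is a minimal PyTorch sketch of this feature attention operation, assuming 64-channel feature maps; the class and parameter names are illustrative and not taken from the patent:

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Generates channel/pixel response weights W and reweights the input: X' = W * X."""
    def __init__(self, channels=64):
        super().__init__()
        # Depthwise convolution: one 3x3 filter per channel (groups=channels)
        self.conv_dw = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels)
        self.relu = nn.ReLU(inplace=True)
        # Pointwise convolution: 1x1 mixing across channels
        self.conv_pw = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # W = sigma(Conv_pw(delta(Conv_dw(X))))
        w = self.sigmoid(self.conv_pw(self.relu(self.conv_dw(x))))
        return x * w  # element-wise reweighting, X' = W (*) X


if __name__ == "__main__":
    fam = FeatureAttention(64)
    x = torch.randn(1, 64, 128, 128)
    print(fam(x).shape)  # torch.Size([1, 64, 128, 128])
```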
In step 1, the structure of the feature attention residual module is shown in FIG. 3. First, the input feature map passes through a convolutional layer with instance normalization (IN) and a ReLU activation; second, the resulting feature map is passed to the next convolutional layer, which, unlike the previous one, has no activation layer; finally, the features extracted by the two convolutional layers serve as the input of the feature attention module, which enhances and reconstructs them. To accelerate convergence of the network, a local residual connection is introduced between input and output, which lets the network bypass less important information such as thin-haze regions during forward propagation and speeds up the overall inference of the model. The process is given by O = FAM(IN(Conv(δ(IN(Conv(I)))))) + I, where FAM denotes the feature attention module; Conv denotes a 3×3 convolutional layer; IN denotes instance normalization; δ denotes the ReLU activation function; and I and O denote the input and the output, respectively.
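A hedged PyTorch sketch of the feature attention residual module described above, reusing the FeatureAttention class from the previous sketch; the layer ordering follows the textual description (Conv–IN–ReLU, then Conv–IN, then feature attention, plus the local residual connection):

```python
import torch.nn as nn
# FeatureAttention is the class sketched above

class FeatureAttentionResidualBlock(nn.Module):
    """Conv-IN-ReLU -> Conv-IN -> feature attention, with a local residual connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.in1 = nn.InstanceNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        # second convolutional layer has no activation after it
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.in2 = nn.InstanceNorm2d(channels)
        self.fam = FeatureAttention(channels)

    def forward(self, x):
        out = self.relu(self.in1(self.conv1(x)))
        out = self.in2(self.conv2(out))
        out = self.fam(out)
        return out + x  # local residual connection
```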
In step 1, the detail enhancement module is shown in FIG. 4. The module consists mainly of dilated convolutions with different dilation rates, which enlarge the receptive field of the network and improve its ability to capture image details. First, the input features pass through dilated convolutions with dilation rates of 1, 3 and 5 to obtain feature maps under different receptive fields; second, the resulting feature maps are spliced along the channel dimension, which enlarges the feature dimensionality and enriches the feature information; finally, a convolutional layer with a 1×1 kernel reduces the feature maps back to the original number of channels, and a following 3×3 convolutional layer increases the depth of the module. To speed up inference, the module also introduces a local residual connection. The process is given by O = Conv_3×3(Conv_1×1(Cat(Conv^d=1(I), Conv^d=3(I), Conv^d=5(I)))) + I, where Conv_1×1 denotes a 1×1 convolutional layer; Conv^d denotes a dilated convolution with dilation rate d; and Cat denotes the channel-wise splicing (concatenation) operation.
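A possible PyTorch rendering of the detail enhancement module, assuming 3×3 kernels for the dilated branches and a ReLU after the 1×1 reduction (the patent does not state the activation placement):

```python
import torch
import torch.nn as nn

class DetailEnhancementBlock(nn.Module):
    """Parallel dilated convs (rates 1, 3, 5) -> channel concat -> 1x1 reduce -> 3x3 conv, plus a local residual."""
    def __init__(self, channels=64):
        super().__init__()
        # padding equals dilation keeps the spatial size unchanged for 3x3 kernels
        self.branch1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=3, dilation=3)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=5, dilation=5)
        self.reduce = nn.Conv2d(channels * 3, channels, kernel_size=1)  # back to the original channel count
        self.relu = nn.ReLU(inplace=True)
        self.conv3x3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        # splice the three receptive-field variants along the channel dimension
        feats = torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        out = self.conv3x3(self.relu(self.reduce(feats)))
        return out + x  # local residual connection
```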
Step 2: a data set for model training is constructed; based on the atmospheric scattering model, paired foggy and fog-free images are generated by simulation, and the training data include both indoor and outdoor sets. The simulated foggy image is taken as the input of the model, and the two sub-networks extract features at different levels to obtain feature maps containing different information. The restoration sub-network and the detail enhancement sub-network contain 5 consecutive feature attention residual modules and 5 consecutive detail enhancement modules, respectively, for feature extraction.
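As an illustration of how paired training data can be simulated with the standard atmospheric scattering model I = J·t + A·(1 − t), the following NumPy sketch synthesizes a hazy image from a clear image and a depth map; the scattering coefficient beta and atmospheric light A are illustrative values, not values specified by the patent:

```python
import numpy as np

def synthesize_haze(clear, depth, beta=1.0, atmospheric_light=0.9):
    """Simulate a hazy image I from a clear image J via I = J*t + A*(1-t), t = exp(-beta*depth).

    clear: HxWx3 float array in [0, 1]; depth: HxW depth map (same spatial size).
    """
    t = np.exp(-beta * depth)[..., None]            # per-pixel transmission, broadcast over RGB
    return clear * t + atmospheric_light * (1.0 - t)
```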
In step 2, the detailed structure of the restoration sub-network is shown in FIG. 1. The specific implementation steps are as follows: first, a convolutional layer of size 3×3 maps the original input RGB foggy image to a high-dimensional feature space with 64 channels; second, 5 consecutive feature attention residual modules perform deep extraction on the high-dimensional features to obtain the basic content information of the image; finally, the features are processed by a 3×3 convolutional layer, which also extends the depth of the network. The process is given by O_r = Conv(FARB_5(Conv(I))), where O_r denotes the feature output of the restoration sub-network; Conv denotes a 3×3 convolutional layer; FARB_5 denotes 5 consecutive feature attention residual modules; and I denotes the original RGB foggy image input.
In step 2, the structure of the detail enhancement sub-network is shown in FIG. 1. The specific implementation steps are as follows: first, as in the restoration sub-network, a convolutional layer of size 3×3 maps the original input RGB foggy image to a high-dimensional feature space with 64 channels; second, 5 consecutive detail enhancement modules perform deep extraction on the high-dimensional features to recover the detail information lost by the image; finally, the features are processed by a 3×3 convolutional layer, which also extends the depth of the network. The process is given by O_e = Conv(DEB_5(Conv(I))), where O_e denotes the feature output of the detail enhancement sub-network; Conv denotes a 3×3 convolutional layer; DEB_5 denotes 5 consecutive detail enhancement modules; and I denotes the original RGB foggy image input.
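Under the assumptions above, the two feature-extraction branches can be sketched as simple sequential stacks, reusing the FeatureAttentionResidualBlock and DetailEnhancementBlock classes from the earlier sketches:

```python
import torch.nn as nn
# FeatureAttentionResidualBlock and DetailEnhancementBlock are the classes sketched above

def make_restoration_branch(blocks=5, channels=64):
    """Hazy RGB -> 64-ch features -> 5 feature attention residual blocks -> 3x3 conv (O_r)."""
    return nn.Sequential(
        nn.Conv2d(3, channels, kernel_size=3, padding=1),
        *[FeatureAttentionResidualBlock(channels) for _ in range(blocks)],
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    )

def make_detail_branch(blocks=5, channels=64):
    """Hazy RGB -> 64-ch features -> 5 detail enhancement blocks -> 3x3 conv (O_e)."""
    return nn.Sequential(
        nn.Conv2d(3, channels, kernel_size=3, padding=1),
        *[DetailEnhancementBlock(channels) for _ in range(blocks)],
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    )
```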
Step 3: the feature maps from the two sub-networks are adaptively fused with the adaptive fusion module, and the fused feature result is obtained through conversion; the adaptive fusion module is shown in FIG. 5. The specific implementation steps are as follows: first, the feature maps from the image restoration sub-network and the detail enhancement sub-network are added pixel by pixel to obtain a preliminary fusion result; second, the fusion result passes through 2 convolutional layers of size 3×3 to obtain different feature weight maps; finally, the feature maps from the two sub-networks are linearly combined with the corresponding weight maps to obtain the final adaptive fusion result. The mathematical process is given by α = Conv(O_e + O_r), β = Conv(O_e + O_r) and O_f = α ⊙ O_e + β ⊙ O_r, where α and β denote the weight maps of different magnitudes produced by the two convolutional layers; Conv denotes a 3×3 convolutional layer; and O_f denotes the result of the adaptive fusion.
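A sketch of the adaptive fusion module under these assumptions; the sigmoid that bounds the weight maps is an added assumption, since the patent does not specify an activation for the weight-generating convolutions:

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Pixel-wise add O_e + O_r, predict weight maps alpha/beta with two 3x3 convs,
    then recombine: O_f = alpha * O_e + beta * O_r."""
    def __init__(self, channels=64):
        super().__init__()
        self.weight_e = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.weight_r = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.sigmoid = nn.Sigmoid()  # assumption: keeps the weight maps in [0, 1]

    def forward(self, o_e, o_r):
        fused = o_e + o_r                         # preliminary pixel-wise fusion
        alpha = self.sigmoid(self.weight_e(fused))
        beta = self.sigmoid(self.weight_r(fused))
        return alpha * o_e + beta * o_r           # linear combination with learned weights
```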
Step 4: the adaptively fused feature map undergoes spatial dimension reduction, where a convolutional layer with a 3×3 kernel maps the high-dimensional feature map back to the original RGB space, and the defogged image is then obtained through a Tanh nonlinear activation layer; meanwhile, to accelerate the convergence of the model, a global cross-layer connection is added between input and output, which effectively avoids problems such as vanishing or exploding gradients. Finally, the loss difference between the defogged image and the fog-free image is computed with the optimized multi-content loss function, the model parameters are updated by back-propagation, and the process is iterated repeatedly to obtain the final trained model.
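The dimension-reduction head with the global cross-layer connection might look like the following sketch (again illustrative, not the patent's exact implementation; whether the skip is added before or after the Tanh is an assumption):

```python
import torch.nn as nn

class OutputHead(nn.Module):
    """Maps fused 64-channel features back to RGB and adds a global skip from the hazy input."""
    def __init__(self, channels=64):
        super().__init__()
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        self.act = nn.Tanh()

    def forward(self, fused_feat, hazy_input):
        out = self.act(self.to_rgb(fused_feat))
        return out + hazy_input  # global cross-layer connection between input and output
```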
In step 4, the optimized multi-content loss function comprises a pixel-level loss function and a feature-level loss function. To make the restored defogged image closer to the real clear image, a pixel-level loss function L_pix is introduced; it measures the per-pixel difference between the ground-truth clear image I_g and the restored fog-free image I_r, normalized by the image height H, width W and number of channels C. In addition, to improve the visual quality of the defogged image, a feature-level loss function L_feature is introduced; it measures the difference between the feature maps f_v(I_g) and f_v(I_r) extracted by a pre-trained VGG19 model f_v, again normalized by H, W and C. The model therefore adopts a multi-content loss function defined as L_total = L_pix + ε·L_feature, where ε denotes the weight coefficient of the feature-level loss function.
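A hedged sketch of the multi-content loss: the patent's formula images are not reproduced here, so an L1 pixel term and an MSE term over VGG19 features are assumed, and the weight eps and the VGG layer cut-off are illustrative choices:

```python
import torch.nn as nn
from torchvision.models import vgg19

class MultiContentLoss(nn.Module):
    """L_total = L_pix + eps * L_feature: a pixel-space term plus a VGG19 feature-space term."""
    def __init__(self, eps=0.04, feature_layer=26):
        super().__init__()
        # frozen, pre-trained VGG19 feature extractor (f_v in the text)
        vgg = vgg19(weights="IMAGENET1K_V1").features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.eps = eps
        self.pixel_loss = nn.L1Loss()     # assumed pixel-level norm
        self.feature_loss = nn.MSELoss()  # assumed feature-level norm

    def forward(self, restored, clear):
        l_pix = self.pixel_loss(restored, clear)
        l_feat = self.feature_loss(self.vgg(restored), self.vgg(clear))
        return l_pix + self.eps * l_feat
```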
According to an example provided by the invention, the server used for model training is configured as follows: the computer is equipped with an NVIDIA RTX 2080Ti graphics card with 12G of video memory. During training, the whole network uses the Adam optimizer, with β1 and β2 set to 0.9 and 0.999, respectively. Training runs for 100 epochs; the default initial learning rate is set to 0.001, with a decay of 0.0005 applied over the first 50 epochs, after which the learning rate remains unchanged for the last 50 epochs. The batch size is set to 1, limited by the video memory. All code is implemented in the PyTorch framework, with Python as the programming language. As shown in FIG. 6, it can be observed that the defogging effect of AODNet is not obvious, and the restored image is dark overall, which hinders visual observation. DehazeNet suffers from the same problem, with visual disturbances such as incomplete defogging and residual haze. GFN alleviates the haze residue to some extent, but the overall brightness of the image is still low and the color recovery is not obvious enough. Compared with these methods, the proposed method achieves an obvious defogging effect with higher fidelity of color and detail, yielding a better visual result.
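A training-loop skeleton consistent with the reported configuration (Adam with β1 = 0.9, β2 = 0.999, initial learning rate 0.001, 100 epochs, batch size 1); the step-decay schedule below is only a placeholder for the attenuation scheme, which the example does not fully specify:

```python
import torch

def train(model, criterion, train_loader, epochs=100, device="cuda"):
    """Train the defogging model on paired (hazy, clear) batches; model, criterion and
    train_loader are supplied by the caller (e.g. the sketches above and a paired data set)."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
    # placeholder schedule: change the learning rate once after the first 50 epochs
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)
    for epoch in range(epochs):
        for hazy, clear in train_loader:          # batch size 1, per the reported setup
            hazy, clear = hazy.to(device), clear.to(device)
            optimizer.zero_grad()
            loss = criterion(model(hazy), clear)
            loss.backward()                       # back-propagate and update parameters
            optimizer.step()
        scheduler.step()
    return model
```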
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A self-adaptive fusion image defogging method based on a two-way convolutional neural network, characterized in that the method comprises the following steps:
Step 1: construct a two-way convolutional neural network defogging model comprising two sub-networks, an image restoration sub-network and a detail enhancement sub-network, which together contain a feature attention residual module, a detail enhancement module and an adaptive fusion module;
Step 2: form a paired data set of foggy and fog-free images using the atmospheric scattering model, and extract features from the input foggy image through the two paths of the convolutional neural network to obtain feature maps carrying different information;
Step 3: adaptively fuse the feature maps from the two sub-networks with the adaptive fusion module, and obtain the fused feature result through conversion;
Step 4: perform spatial dimension reduction on the fused feature map and output the defogged image in the original RGB space; finally, compute the loss difference between the defogged image and the fog-free image with the optimized loss function, update the model parameters by back-propagation, and iterate repeatedly to obtain the final trained model.
2. The self-adaptive fusion image defogging method based on the two-way convolutional neural network as claimed in claim 1, wherein the two-way convolutional neural network defogs the foggy image, the image restoration sub-network being responsible for preliminary defogging and the detail enhancement sub-network being responsible for detail enhancement, thereby alleviating problems of traditional methods such as loss of restored detail and blurring.
3. The self-adaptive fusion image defogging method based on the two-way convolutional neural network as claimed in claim 1, wherein the feature attention residual module introduces the feature attention module to reconstruct features, enhancing the network's ability to extract important features and overcoming the shortcoming of traditional methods that treat different features equally.
4. The self-adaptive fusion image defogging method based on the two-way convolutional neural network as claimed in claim 1, wherein the detail enhancement module is used for feature extraction from the image, enlarging the receptive field of the network, capturing more image detail information and improving the detail quality of the restored image.
5. The self-adaptive fusion image defogging method based on the two-way convolutional neural network as claimed in claim 1, wherein the features from the two sub-networks are fused by the adaptive fusion module, thereby enriching the information of the feature maps.
6. The self-adaptive fusion image defogging method based on the two-way convolutional neural network as claimed in claim 1, wherein the model is trained and optimized with a multi-content loss function, which maximally improves the visual quality of the image while retaining the content information of the restored fog-free image.
CN202111494688.XA 2021-12-09 2021-12-09 Self-adaptive fusion image defogging method based on double-path convolution neural network Pending CN114283078A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111494688.XA CN114283078A (en) 2021-12-09 2021-12-09 Self-adaptive fusion image defogging method based on double-path convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111494688.XA CN114283078A (en) 2021-12-09 2021-12-09 Self-adaptive fusion image defogging method based on double-path convolution neural network

Publications (1)

Publication Number Publication Date
CN114283078A true CN114283078A (en) 2022-04-05

Family

ID=80871432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111494688.XA Pending CN114283078A (en) 2021-12-09 2021-12-09 Self-adaptive fusion image defogging method based on double-path convolution neural network

Country Status (1)

Country Link
CN (1) CN114283078A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171079A (en) * 2022-09-08 2022-10-11 松立控股集团股份有限公司 Vehicle detection method based on night scene
CN115171079B (en) * 2022-09-08 2023-04-07 松立控股集团股份有限公司 Vehicle detection method based on night scene
CN117853371A (en) * 2024-03-06 2024-04-09 华东交通大学 Multi-branch frequency domain enhanced real image defogging method, system and terminal
CN117853371B (en) * 2024-03-06 2024-05-31 华东交通大学 Multi-branch frequency domain enhanced real image defogging method, system and terminal

Similar Documents

Publication Publication Date Title
Yang et al. Proximal dehaze-net: A prior learning-based deep network for single image dehazing
CN109087273B (en) Image restoration method, storage medium and system based on enhanced neural network
CN111161360B (en) Image defogging method of end-to-end network based on Retinex theory
CN111028163A (en) Convolution neural network-based combined image denoising and weak light enhancement method
CN106920220A (en) Based on the turbulent flow method for blindly restoring image that dark primary and alternating direction multiplier method optimize
KR102119687B1 (en) Learning Apparatus and Method of Image
CN113379661B (en) Double-branch convolution neural network device for fusing infrared and visible light images
CN110097522B (en) Single outdoor image defogging method based on multi-scale convolution neural network
CN110349093B (en) Single image defogging model construction and defogging method based on multi-stage hourglass structure
CN111192219A (en) Image defogging method based on improved inverse atmospheric scattering model convolution network
CN116309232B (en) Underwater image enhancement method combining physical priori with deep learning
CN113420794B (en) Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
Jiao et al. Guided-Pix2Pix: End-to-end inference and refinement network for image dehazing
CN115063318A (en) Adaptive frequency-resolved low-illumination image enhancement method and related equipment
Qian et al. CIASM-Net: a novel convolutional neural network for dehazing image
CN114004766A (en) Underwater image enhancement method, system and equipment
CN113160286A (en) Near-infrared and visible light image fusion method based on convolutional neural network
CN115222614A (en) Priori-guided multi-degradation-characteristic night light remote sensing image quality improving method
CN114283078A (en) Self-adaptive fusion image defogging method based on double-path convolution neural network
CN113436101B (en) Method for removing rain by Dragon lattice tower module based on efficient channel attention mechanism
Shi et al. Integrating deep learning and traditional image enhancement techniques for underwater image enhancement
CN117422653A (en) Low-light image enhancement method based on weight sharing and iterative data optimization
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
Goncalves et al. Guidednet: Single image dehazing using an end-to-end convolutional neural network
Huang et al. Attention-based for multiscale fusion underwater image enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination