CN113191964A - Unsupervised night image defogging method using high-frequency and low-frequency decomposition - Google Patents

Unsupervised night image defogging method using high-frequency and low-frequency decomposition

Info

Publication number
CN113191964A
CN113191964A (application number CN202110384208.8A)
Authority
CN
China
Prior art keywords
image
input image
frequency
input
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110384208.8A
Other languages
Chinese (zh)
Other versions
CN113191964B (en)
Inventor
李朝锋
龚轩
杨勇生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN202110384208.8A priority Critical patent/CN113191964B/en
Publication of CN113191964A publication Critical patent/CN113191964A/en
Application granted granted Critical
Publication of CN113191964B publication Critical patent/CN113191964B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an unsupervised night image defogging method using high- and low-frequency decomposition, comprising the following steps: decomposing an input image into a high-frequency image and a low-frequency image using a guided filter; merging the input image with the high-frequency image as the input of a fog-free image estimation network, and estimating a fog-free image; merging the input image with the low-frequency image as the input of a transmissivity estimation network, and estimating a transmission map; estimating the atmospheric illumination map corresponding to the input image using a maximum filter; reconstructing the input image with an atmospheric scattering model based on the fog-free image, the transmission map and the atmospheric illumination map; and training the networks end to end with the reconstruction loss function as the loss function. The invention can learn and infer using only the observed nighttime foggy image, without requiring paired nighttime foggy and clear images, so it effectively removes nighttime haze and improves the visual quality of nighttime foggy images.

Description

Unsupervised night image defogging method using high-frequency and low-frequency decomposition
Technical Field
The invention relates to the technical field of computer image processing, in particular to an unsupervised night image defogging method using high-frequency and low-frequency decomposition.
Background
At present, most haze image restoration algorithms target daytime images, for example methods based on prior information. Most defogging methods are based on the atmospheric scattering model, which generally assumes that daytime atmospheric light is uniform. For nighttime images, however, the ambient light is weak and artificial light sources interfere, so the atmospheric light varies greatly, its composition is more complex, and it is harder to estimate; as a result, these algorithms perform poorly on nighttime image defogging.
Current nighttime image defogging and restoration algorithms fall roughly into four categories. The first derives restoration methods by analyzing the characteristics of nighttime images on the basis of experience with daytime haze images; these algorithms achieve a certain defogging effect, but their robustness is poor and their defogging performance leaves much room for improvement. The second category optimizes and improves on the atmospheric scattering model, estimating the transmissivity and atmospheric illumination of the image by drawing on related daytime defogging algorithms and research experience; however, the distribution of nighttime ambient illumination is complex and difficult to compute, so these methods estimate the ambient light and transmissivity inaccurately, lack robustness, and restore dark regions of the image poorly. The third category combines the atmospheric scattering model with a hierarchical optimization algorithm; because the computation is complex, the time overhead is large and the real-time requirements of video processing are not met. The fourth category comprises deep-learning-based nighttime image defogging algorithms, such as convolutional neural networks applied to image defogging.
However, existing deep-learning nighttime image defogging methods are trained in a supervised manner and require large datasets of paired clear and foggy nighttime images, so the performance of the resulting defogging model is directly tied to the quality of the dataset; in addition, because deep-learning defogging methods need large amounts of data, training the defogging model consumes a great deal of time.
Disclosure of Invention
The invention aims to provide an unsupervised night image defogging method using high- and low-frequency decomposition, which can learn and infer using only the observed nighttime foggy image, without requiring paired nighttime foggy and clear images.
In order to achieve the above purpose, the invention is realized by the following technical scheme:
an unsupervised nighttime image defogging method using high and low frequency decomposition, comprising:
decomposing an input image into a high-frequency image and a low-frequency image by using a guide filter;
merging the input image and the high-frequency image to obtain a first merged image, using the first merged image as the input of a fog-free image estimation network J-net, and estimating a fog-free image corresponding to the input image;
merging the input image and the low-frequency image to obtain a second merged image which is used as the input of a transmissivity estimation network T-net to estimate a transmission image corresponding to the input image;
estimating an atmospheric illumination map corresponding to the input image by using a maximum filter;
reconstructing the input image using an atmospheric scattering model based on the fog-free image, the transmission image and the atmospheric illumination map;
and taking the reconstruction loss function as a loss function, and performing end-to-end training on the fog-free image estimation network J-net and the transmissivity estimation network T-net.
Further, the decomposing the input image into the high frequency image and the low frequency image by using the guiding filter includes:
taking a channel difference map of the input image as a guide map in a guide filter, and obtaining a low-frequency image corresponding to the input image through low-pass filtering, wherein the channel difference map is obtained by subtracting a minimum color channel from a maximum color channel of the input image;
and subtracting the low-frequency image from the input image to obtain a high-frequency image corresponding to the input image.
Further, a plurality of high-frequency images and low-frequency images are obtained by setting different decomposition parameters of a guide filter;
the merging the input image and the high-frequency image to obtain a first merged image includes:
combining the input image with the high-frequency images to obtain a first combined image;
the merging the input image and the low-frequency image to obtain a second merged image, including:
and combining the input image and the low-frequency images to obtain a second combined image.
Furthermore, the fog-free image estimation network J-net adopts a U-net network as its backbone and comprises a feature extraction module, a multi-scale dilated convolution module and an image restoration module;
the feature extraction module comprises convolution layers and pooling layers arranged alternately;
the image restoration module comprises an up-sampling layer, skip connections and convolution layers, wherein each convolution layer except the last is followed by a BatchNorm layer and a ReLU activation function;
the multi-scale dilated convolution module is formed by multiple parallel dilated convolutions with different dilation rates.
Further, the transmissivity estimation network T-net is an autoencoder with skip connections;
the encoder adopts N modules based on a convolutional neural network, each consisting of a convolution layer, a down-sampling layer, a BatchNorm layer and a LeakyReLU activation function;
the decoder adopts N modules based on a convolutional neural network, each consisting of a skip connection, an up-sampling layer, a BatchNorm layer, a convolution layer and a LeakyReLU activation function.
Further, the estimating the atmospheric illumination map corresponding to the input image by using a maximum filter includes:
carrying out maximum value filtering on the input image to obtain a maximum value filtered input image;
and performing guiding filtering on the input image after the maximum value filtering to obtain an atmospheric illumination map in the input image, wherein a guiding map of a guiding filter is the input image.
Further, the atmospheric scattering model is as follows:
I(x) = J * T + (1 - T) * A,
wherein I(x) represents the data matrix of the input image reconstructed by the atmospheric scattering model, J represents the data matrix of the fog-free image, T represents the data matrix of the transmission image, and A represents the data matrix of the atmospheric illumination map.
Further, the reconstruction loss function is:
L_Rec = ‖I(x) - x‖_p,
wherein x represents the data matrix of the input image, I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, and ‖·‖_p represents the p-norm of a given data matrix.
Compared with the prior art, the invention has the following advantages:
the invention provides an unsupervised night image defogging method using high-frequency and low-frequency decomposition, which is characterized in that high-frequency information is used as the input of estimation of a fog-free image, so that the fog-free image estimated by a model contains clearer textures, low-frequency information is added into the estimation of transmissivity, and a network can be ensured to accurately estimate a transmission map, so that the estimation of the fog-free image is more accurate, under the condition that a pair of night clear image-fog image data sets is not needed, only a single observed real night fog image is used, and the defogging effect of the image can be achieved after minute-level training time, so that the night haze can be removed, the image visibility and readability are improved, and the method plays a key role in the fields of video monitoring, target identification and the like in a night scene.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings used in the description are briefly introduced below. The drawings described here show one embodiment of the present invention; those skilled in the art can obtain other drawings from them without creative effort:
FIG. 1 is a schematic flow chart of an unsupervised nighttime image defogging method using high and low frequency decomposition according to an embodiment of the present invention;
FIG. 2 is a block diagram of a method for defogging a night image according to an embodiment of the present invention;
FIG. 3 is a block diagram of a J-net network according to an embodiment of the present invention;
FIG. 4a is a block diagram of a T-net network according to an embodiment of the present invention;
FIG. 4b is a detailed block diagram of the convolutional neural network module of FIG. 4a;
FIGS. 5a to 5g compare the results of the present invention and other methods on a real nighttime foggy image in a first example, wherein FIG. 5a is the input image, FIG. 5b is the defogged image of the NDIM method, FIG. 5c of the GS method, FIG. 5d of the MRP method, FIG. 5e of the MRP-Faster method, FIG. 5f of the PWAB method, and FIG. 5g of the method of the present invention;
FIGS. 6a to 6g compare the results of the present invention and other methods on a real nighttime foggy image in a second example, wherein FIG. 6a is the input image, FIG. 6b is the defogged image of the NDIM method, FIG. 6c of the GS method, FIG. 6d of the MRP method, FIG. 6e of the MRP-Faster method, FIG. 6f of the PWAB method, and FIG. 6g of the method of the present invention;
FIGS. 7a to 7g compare the results of the present invention and other methods on a real nighttime foggy image in a third example, wherein FIG. 7a is the input image, FIG. 7b is the defogged image of the NDIM method, FIG. 7c of the GS method, FIG. 7d of the MRP method, FIG. 7e of the MRP-Faster method, FIG. 7f of the PWAB method, and FIG. 7g of the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments; its advantages and features will become clearer from the following description. It should be noted that the drawings are in a highly simplified form and use imprecise scale, serving only to aid in conveniently and clearly explaining the embodiments of the invention. The structures, ratios and sizes shown in the drawings and described in the specification are provided only so that those skilled in the art can understand and read the disclosure; they are not intended to limit the conditions under which the invention can be implemented and thus carry no technical significance in themselves, and any structural modification, change of ratio or adjustment of size that does not affect the efficacy or the achievable purpose of the invention still falls within the scope of the invention.
The core idea of the invention is to decompose the input image into a high-frequency part and a low-frequency part using a guided filter. The low-frequency part together with the input image serves as the input of the transmissivity estimation network T-net, and the high-frequency part together with the input image serves as the input of the fog-free image estimation network J-net: the high-frequency part contains the scene texture information of the image, which helps estimate the fog-free image, while the low-frequency part contains the structure information of the image, which benefits the estimation of the transmissivity. Next, the atmospheric illumination map A of the input image is estimated using a maximum filter and combined with the transmission map T and the fog-free map J produced by the transmissivity estimation network T-net and the fog-free image estimation network J-net. Without any large dataset of paired clear/foggy nighttime images, using only a single observed nighttime foggy image and minutes of training time, the invention achieves nighttime image defogging, effectively removing nighttime haze and improving the visual quality of nighttime foggy images.
Referring to fig. 1 and fig. 2, an unsupervised night image defogging method using high and low frequency decomposition according to the present invention includes the following steps:
and S100, decomposing the input image into a high-frequency image and a low-frequency image by using a guide filter. The input image is a night foggy day image.
And S200, merging the input image and the high-frequency image to obtain a first merged image, using the first merged image as the input of the fog-free image estimation network J-net, and estimating a fog-free image corresponding to the input image.
And S300, merging the input image and the low-frequency image to obtain a second merged image, using the second merged image as the input of the transmissivity estimation network T-net, and estimating a transmission image corresponding to the input image.
In step S100, a channel difference map of the input image x is used as the guide map in a guided filter, and low-pass filtering yields the low-frequency image corresponding to the input image, where the channel difference map is obtained by subtracting the minimum color channel of the input image from its maximum color channel; subtracting the low-frequency image from the input image then yields the high-frequency image corresponding to the input image.
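By way of illustration, the following Python sketch shows one possible realization of step S100; it assumes the guidedFilter routine from the opencv-contrib-python package, and the function name decompose together with its default radius and regularization values are placeholders rather than part of the original disclosure.

    import cv2
    import numpy as np

    def decompose(x, radius=8, eps=1e-4):
        # x: float32 H x W x 3 nighttime foggy image with values in [0, 1].
        # Channel difference map: maximum color channel minus minimum color channel.
        guide = x.max(axis=2) - x.min(axis=2)
        # Guided low-pass filtering with the difference map as the guide map
        # yields the low-frequency image.
        low = cv2.ximgproc.guidedFilter(guide, x, radius, eps)
        # The residual is the high-frequency image (textures and edges).
        high = x - low
        return low, high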
Preferably, a plurality of high-frequency images and a plurality of low-frequency images are obtained by setting different decomposition parameters of a guiding filter, and correspondingly, in steps S200 and S300, a first merged image is obtained by merging the input image with the plurality of high-frequency images, and a second merged image is obtained by merging the input image with the plurality of low-frequency images. The quality of the output image of the J-net network can be improved by combining the input image with a plurality of high-frequency images, and the quality of the output image of the T-net network can be improved by combining the input image with a plurality of low-frequency images, so that the defogging effect of the night image is improved.
For example, with the guided-filter radius taken from the set (2, 4, 8, 16, 32) and the regularization parameter from the set (0.0001, 0.00001), 10 parameter combinations are obtained, and guided filtering finally yields 10 high-frequency images and 10 low-frequency images. The 10 low-frequency images are merged with the input image x as the input of the transmissivity estimation network T-net, and the 10 corresponding high-frequency images are merged with the input image x as the input of the fog-free image estimation network J-net.
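Continuing the sketch above, the 10 parameter combinations and the two merged network inputs could be assembled as follows; the helper name build_inputs and the channel-axis concatenation are illustrative assumptions.

    # 5 radii x 2 regularization values = 10 guided-filter settings.
    radii = (2, 4, 8, 16, 32)
    eps_values = (0.0001, 0.00001)

    def build_inputs(x):
        lows, highs = [], []
        for r in radii:
            for eps in eps_values:
                low, high = decompose(x, r, eps)
                lows.append(low)
                highs.append(high)
        # First merged image: input + 10 high-frequency images -> J-net input.
        j_in = np.concatenate([x] + highs, axis=2)
        # Second merged image: input + 10 low-frequency images -> T-net input.
        t_in = np.concatenate([x] + lows, axis=2)
        return j_in, t_in  # each of shape H x W x 33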
Specifically, as shown in fig. 3, the fog-free image estimation network J-net adopts a U-net network as its backbone, improved by adding a multi-scale dilated convolution module to further raise the defogging performance of J-net. The J-net structure consists of a feature extraction module, an image restoration module and a multi-scale dilated convolution module. The feature extraction module alternates convolution layers and pooling layers to extract features from the input image; the image restoration module uses up-sampling layers, skip connections and convolution layers to restore the image, and except for the final 1 × 1 convolution, each convolution layer is followed by a BatchNorm layer and a ReLU activation function. The multi-scale dilated convolution module consists of multiple parallel dilated convolutions with different dilation rates; fig. 3, for example, shows three paths, where P1, P2 and P3 denote dilation rates of 1, 2 and 3 respectively. This allows the network to capture multi-scale context information and thereby achieve a better defogging effect.
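A PyTorch sketch of such a multi-scale dilated convolution module is given below; the three dilation rates follow paths P1 to P3 of fig. 3, while the uniform channel width and the 1 × 1 fusion convolution are assumptions not specified in the disclosure.

    import torch
    import torch.nn as nn

    class MultiScaleDilatedConv(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # Three parallel 3x3 convolutions with dilation rates 1, 2 and 3
            # (paths P1, P2, P3); padding == dilation keeps the spatial size.
            self.paths = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                )
                for d in (1, 2, 3)
            ])
            # Fuse the three branches back to `channels` feature maps.
            self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

        def forward(self, x):
            return self.fuse(torch.cat([p(x) for p in self.paths], dim=1))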
As shown in figs. 4a and 4b, the transmissivity estimation network T-net is an autoencoder with skip connections. The encoder employs N (e.g. 5) modules based on a convolutional neural network, each composed of a convolution layer, a down-sampling layer, a BatchNorm layer and a LeakyReLU activation function; the decoder likewise employs N (e.g. 5) modules, each composed of a skip connection, an up-sampling layer, a BatchNorm layer, a convolution layer and a LeakyReLU activation function. Finally, the estimated transmission map T is obtained through a 1 × 1 convolution layer and a sigmoid activation function.
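The following sketch illustrates one encoder module and one decoder module of T-net under the stated composition; the channel widths, the use of max-pooling for down-sampling and of bilinear up-sampling are illustrative assumptions.

    import torch
    import torch.nn as nn

    class EncoderModule(nn.Module):
        # Convolution layer, down-sampling layer, BatchNorm layer, LeakyReLU.
        def __init__(self, c_in, c_out):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.MaxPool2d(2),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )

        def forward(self, x):
            return self.body(x)

    class DecoderModule(nn.Module):
        # Skip connection, up-sampling layer, BatchNorm layer, convolution
        # layer, LeakyReLU activation function.
        def __init__(self, c_in, c_skip, c_out):
            super().__init__()
            self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                                  align_corners=False)
            self.bn = nn.BatchNorm2d(c_in + c_skip)
            self.conv = nn.Conv2d(c_in + c_skip, c_out, 3, padding=1)
            self.act = nn.LeakyReLU(0.2, inplace=True)

        def forward(self, x, skip):
            x = torch.cat([self.up(x), skip], dim=1)  # skip connection
            return self.act(self.conv(self.bn(x)))

    # The final transmission map would then come from a 1x1 convolution
    # followed by a sigmoid, e.g. torch.sigmoid(nn.Conv2d(c, 1, 1)(features)).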
S400: the atmospheric illumination map corresponding to the input image is estimated using a maximum filter.
Specifically, the input image is maximum-filtered to obtain a maximum-filtered input image, which is then guided-filtered to obtain the atmospheric illumination map of the input image, where the guide map of the guided filter is the input image itself.
For example, a small local block of size 5 × 5 is selected and the input image is maximum-filtered within this block to obtain the maximum-filtered input image; this image is then guided-filtered to obtain the atmospheric illumination map A of the input image, where the input image itself serves as the guide map of the guided filter. The guided-filter parameters are set, for example, to a radius of 10 and a regularization parameter of 0.01.
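A possible realization of this step is sketched below, again assuming opencv-contrib-python; the maximum filtering within a 5 × 5 block is implemented as a grayscale dilation, and the function name estimate_atmospheric_light is a placeholder.

    import cv2
    import numpy as np

    def estimate_atmospheric_light(x):
        # x: float32 H x W x 3 nighttime foggy image with values in [0, 1].
        # Per-pixel local maximum over a 5 x 5 block (grayscale dilation).
        max_filtered = cv2.dilate(x, np.ones((5, 5), np.uint8))
        # Guided filtering with the input image itself as the guide map;
        # radius 10 and regularization 0.01, as stated above.
        return cv2.ximgproc.guidedFilter(x, max_filtered, 10, 0.01)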
S500: the input image is reconstructed using the atmospheric scattering model, based on the fog-free image, the transmission image and the atmospheric illumination map.
The reconstructed input image is the image without the haze at night.
Specifically, the atmospheric scattering model is as follows:
I(x) = J * T + (1 - T) * A,
wherein I(x) represents the data matrix of the input image reconstructed by the atmospheric scattering model, J represents the data matrix of the fog-free image, T represents the data matrix of the transmission image, and A represents the data matrix of the atmospheric illumination map.
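In a sketch, the reconstruction of step S500 is a direct element-wise application of this model, with T broadcast over the three color channels; the function name reconstruct is a placeholder.

    def reconstruct(J, T, A):
        # I(x) = J * T + (1 - T) * A, applied element-wise.
        return J * T + (1.0 - T) * A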
S600: taking the reconstruction loss function as the loss function, the fog-free image estimation network J-net and the transmissivity estimation network T-net are trained end to end.
Specifically, a reconstruction loss function of the network model is calculated, that is:
L_Rec = ‖I(x) - x‖_p,
wherein x represents the data matrix of the input image, I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, and ‖·‖_p represents the p-norm of a given data matrix, preferably the F-norm.
After image layer separation, L_Rec constrains the entire network by reconstructing the input image, and this loss function is used to train the fog-free image estimation network J-net and the transmissivity estimation network T-net end to end.
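An end-to-end optimization loop consistent with this description might look as follows; the names jnet and tnet, the merged inputs j_in and t_in from the earlier sketches, the learning rate and the step count are all assumptions, with every quantity held as a torch tensor of shape (1, C, H, W) on the same device and the p-norm taken as the preferred F-norm.

    import torch

    optimizer = torch.optim.Adam(
        list(jnet.parameters()) + list(tnet.parameters()), lr=1e-3)

    # A comes from the maximum-filter step and is held fixed during training.
    for step in range(500):                    # minute-scale, single-image training
        J = jnet(j_in)                         # estimated fog-free image
        T = tnet(t_in)                         # estimated transmission map
        I_hat = J * T + (1.0 - T) * A          # atmospheric scattering model
        loss = torch.norm(I_hat - x, p='fro')  # reconstruction loss L_Rec
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()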
For objective evaluation of the algorithm, the PSNR (peak signal-to-noise ratio; a larger value means less distortion), SSIM (an index measuring the structural similarity of two images; larger is better, with a maximum of 1) and CIEDE2000 (a color-difference formula) of the defogged images were computed. These objective indices were tested on the synthetic nighttime foggy dataset NHM, comparing the existing NDIM, GS, MRP, MRP-Faster and PWAB algorithms with the present algorithm, as shown in Table 1. As can be seen from Table 1, the present algorithm achieves the best objective indices compared with the other five nighttime defogging algorithms.
TABLE 1
[Table 1: objective comparison (PSNR, SSIM, CIEDE2000) on the NHM dataset; presented as an image in the original publication.]
Figs. 5a to 5g, figs. 6a to 6g and figs. 7a to 7g show three sets of comparisons. As the comparisons show, relative to the other five algorithms, the invention removes the haze from nighttime foggy images and can greatly improve their visual quality. In particular, in regions of large scene depth the defogging effect is better than that of the existing nighttime image defogging techniques; image details and textures are better preserved and brought out, and the readability and usability of the image information are greatly improved.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (8)

1. An unsupervised nighttime image defogging method using high and low frequency decomposition, comprising:
decomposing an input image into a high-frequency image and a low-frequency image by using a guide filter;
merging the input image and the high-frequency image to obtain a first merged image, using the first merged image as the input of a fog-free image estimation network J-net, and estimating a fog-free image corresponding to the input image;
merging the input image and the low-frequency image to obtain a second merged image which is used as the input of a transmissivity estimation network T-net to estimate a transmission image corresponding to the input image;
estimating an atmospheric illumination map corresponding to the input image by using a maximum filter;
reconstructing the input image using an atmospheric scattering model based on the fog-free image, the transmission image and the atmospheric illumination map;
and taking the reconstruction loss function as a loss function, and performing end-to-end training on the fog-free image estimation network J-net and the transmissivity estimation network T-net.
2. The unsupervised nighttime image defogging method using high and low frequency decomposition according to claim 1, wherein said decomposing the input image into a high frequency image and a low frequency image using a guided filter comprises:
taking a channel difference map of the input image as a guide map in a guide filter, and obtaining a low-frequency image corresponding to the input image through low-pass filtering, wherein the channel difference map is obtained by subtracting a minimum color channel from a maximum color channel of the input image;
and subtracting the low-frequency image from the input image to obtain a high-frequency image corresponding to the input image.
3. The unsupervised nighttime image defogging method according to claim 2, wherein a plurality of said high frequency images and said low frequency images are obtained by setting different decomposition parameters of a guiding filter;
the merging the input image and the high-frequency image to obtain a first merged image includes:
combining the input image with the high-frequency images to obtain a first combined image;
the merging the input image and the low-frequency image to obtain a second merged image, including:
and combining the input image and the low-frequency images to obtain a second combined image.
4. The unsupervised nighttime image defogging method using high and low frequency decomposition according to claim 1, wherein said fog-free image estimation network J-net adopts a U-net network as its backbone and comprises a feature extraction module, a multi-scale dilated convolution module and an image restoration module;
the feature extraction module comprises convolution layers and pooling layers arranged alternately;
the image restoration module comprises an up-sampling layer, skip connections and convolution layers, wherein each convolution layer except the last is followed by a BatchNorm layer and a ReLU activation function;
the multi-scale dilated convolution module is formed by multiple parallel dilated convolutions with different dilation rates.
5. The unsupervised nighttime image defogging method using high and low frequency decomposition according to claim 1, wherein said transmissivity estimation network T-net is an autoencoder with skip connections;
the encoder adopts N modules based on a convolutional neural network, each consisting of a convolution layer, a down-sampling layer, a BatchNorm layer and a LeakyReLU activation function;
the decoder adopts N modules based on a convolutional neural network, each consisting of a skip connection, an up-sampling layer, a BatchNorm layer, a convolution layer and a LeakyReLU activation function.
6. The unsupervised nighttime image defogging method according to claim 1, wherein said estimating an atmospheric illumination map corresponding to said input image with a maximum filter comprises:
carrying out maximum value filtering on the input image to obtain a maximum value filtered input image;
and performing guiding filtering on the input image after the maximum value filtering to obtain an atmospheric illumination map in the input image, wherein a guiding map of a guiding filter is the input image.
7. The unsupervised nighttime image defogging method using high and low frequency decomposition according to claim 1, wherein said atmospheric scattering model is:
I(x) = J * T + (1 - T) * A,
wherein I(x) represents the data matrix of the input image reconstructed by the atmospheric scattering model, J represents the data matrix of the fog-free image, T represents the data matrix of the transmission image, and A represents the data matrix of the atmospheric illumination map.
8. The unsupervised nighttime image defogging method using high and low frequency decomposition according to claim 1, wherein said reconstruction loss function is:
L_Rec = ‖I(x) - x‖_p,
wherein x represents the data matrix of the input image, I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, and ‖·‖_p represents the p-norm of a given data matrix.
CN202110384208.8A 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition Active CN113191964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110384208.8A CN113191964B (en) 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110384208.8A CN113191964B (en) 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition

Publications (2)

Publication Number Publication Date
CN113191964A true CN113191964A (en) 2021-07-30
CN113191964B CN113191964B (en) 2024-04-05

Family

ID=76975335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110384208.8A Active CN113191964B (en) 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition

Country Status (1)

Country Link
CN (1) CN113191964B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472818A (en) * 2018-10-17 2019-03-15 天津大学 A kind of image defogging method based on deep neural network
AU2020100274A4 (en) * 2020-02-25 2020-03-26 Huang, Shuying DR A Multi-Scale Feature Fusion Network based on GANs for Haze Removal
CN111882496A (en) * 2020-07-06 2020-11-03 苏州加乘科技有限公司 Method for defogging night image based on recurrent neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472818A (en) * 2018-10-17 2019-03-15 天津大学 A kind of image defogging method based on deep neural network
AU2020100274A4 (en) * 2020-02-25 2020-03-26 Huang, Shuying DR A Multi-Scale Feature Fusion Network based on GANs for Haze Removal
CN111882496A (en) * 2020-07-06 2020-11-03 苏州加乘科技有限公司 Method for defogging night image based on recurrent neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YU Chunyan; LIN Huixiang; XU Xiaodan; YE Xinyan: "Parameter estimation of the foggy-weather degradation model and CUDA design", Journal of Computer-Aided Design & Computer Graphics, no. 02 *
YANG Aiping; ZHAO Meiqi; WANG Haixin; LU Liyu: "Nighttime image dehazing based on low-pass filtering and joint optimization of multiple features", Acta Optica Sinica, no. 10 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487519A (en) * 2021-09-03 2021-10-08 南通欧泰机电工具有限公司 Image rain removing method based on artificial intelligence

Also Published As

Publication number Publication date
CN113191964B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN109389556B (en) Multi-scale cavity convolutional neural network super-resolution reconstruction method and device
CN111539887B (en) Channel attention mechanism and layered learning neural network image defogging method based on mixed convolution
CN112184577B (en) Single image defogging method based on multiscale self-attention generation countermeasure network
CN107749048B (en) Image correction system and method, and color blindness image correction system and method
CN109871875A (en) A kind of building change detecting method based on deep learning
CN115293992B (en) Polarization image defogging method and device based on unsupervised weight depth model
CN114429555A (en) Image density matching method, system, equipment and storage medium from coarse to fine
CN116958131B (en) Image processing method, device, equipment and storage medium
CN111681180A (en) Priori-driven deep learning image defogging method
CN114972378A (en) Brain tumor MRI image segmentation method based on mask attention mechanism
CN111598793A (en) Method and system for defogging image of power transmission line and storage medium
Yuan et al. A confidence prior for image dehazing
CN113191964A (en) Unsupervised night image defogging method using high-frequency and low-frequency decomposition
Qi et al. A novel haze image steganography method via cover-source switching
CN112767267B (en) Image defogging method based on simulation polarization fog-carrying scene data set
CN115439363A (en) Video defogging device and method based on comparison learning
CN116385293A (en) Foggy-day self-adaptive target detection method based on convolutional neural network
CN115205624A (en) Cross-dimension attention-convergence cloud and snow identification method and equipment and storage medium
Wenjuan et al. Retracted article: underwater image segmentation based on computer vision and research on recognition algorithm
CN113096176B (en) Semantic segmentation-assisted binocular vision unsupervised depth estimation method
CN111986109A (en) Remote sensing image defogging method based on full convolution network
CN114463346A (en) Complex environment rapid tongue segmentation device based on mobile terminal
Gao et al. Single image dehazing based on single pixel energy minimization
CN111091601A (en) PM2.5 index estimation method for outdoor mobile phone image in real time in daytime
CN115294570B (en) Cell image recognition method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant