CN113191964B - Unsupervised night image defogging method using high-low frequency decomposition - Google Patents

Unsupervised night image defogging method using high-low frequency decomposition

Info

Publication number
CN113191964B
Authority
CN
China
Prior art keywords
image
input image
frequency
input
night
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110384208.8A
Other languages
Chinese (zh)
Other versions
CN113191964A (en)
Inventor
李朝锋
龚轩
杨勇生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN202110384208.8A
Publication of CN113191964A
Application granted
Publication of CN113191964B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an unsupervised night image defogging method using high-low frequency decomposition, which comprises the following steps: decomposing the input image into a high-frequency image and a low-frequency image using a guided filter; combining the input image with the high-frequency image and feeding the combined image into a haze-free image estimation network to estimate a haze-free image; combining the input image with the low-frequency image and feeding the combined image into a transmissivity estimation network to estimate a transmission image; estimating the atmospheric illumination map corresponding to the input image using a maximum filter; reconstructing the input image with an atmospheric scattering model based on the haze-free image, the transmission image, and the atmospheric illumination map; and training the networks end to end with the reconstruction loss as the loss function. Without paired night foggy and night clear images, the method learns and infers from the observed night foggy image alone, so night haze can be effectively removed and the visibility of night foggy images improved.

Description

Unsupervised night image defogging method using high-low frequency decomposition
Technical Field
The invention relates to the technical field of computer image processing, in particular to an unsupervised night image defogging method using high-low frequency decomposition.
Background
At present, most haze image restoration algorithms target daytime images, for example methods based on prior information. Most defogging methods rest on an atmospheric scattering model that assumes uniform atmospheric light, an assumption that generally holds in daytime. For night images, however, weak ambient light and interference from artificial light sources make the atmospheric light vary greatly, so its composition is more complex and harder to estimate, and these algorithms consequently perform poorly on night image defogging.
Existing night image defogging and restoration algorithms can be divided into four categories. The first draws on experience from daytime haze research and analyzes the characteristics of night images to derive restoration methods; these algorithms achieve some defogging effect but have poor robustness, leaving much room for improvement. The second optimizes and improves the atmospheric scattering model, estimating the transmissivity and atmospheric illumination of an image by combining related daytime defogging algorithms and research experience; because the ambient illumination at night is complexly distributed and hard to compute, this approach estimates the ambient light value and transmissivity inaccurately, lacks robustness, and recovers dark regions of the image poorly. The third combines the atmospheric scattering model with a hierarchical optimization algorithm; its complex computation incurs a large time overhead and fails to meet the real-time requirements of video processing. The fourth comprises night image defogging algorithms based on deep learning, such as convolutional neural networks applied to image defogging.
However, existing deep-learning night image defogging methods are trained in a supervised manner and require large datasets of clear night images paired with foggy ones; the performance of the resulting defogging model is directly tied to the quality of those datasets. In addition, because deep-learning defogging methods need large amounts of data, training the defogging model takes considerable time.
Disclosure of Invention
The invention aims to provide an unsupervised night image defogging method using high-low frequency decomposition that can learn and infer from the observed night foggy image alone, without requiring paired night foggy and night clear images.
In order to achieve the above object, the present invention is realized by the following technical scheme:
an unsupervised night image defogging method using high-low frequency decomposition, comprising:
decomposing the input image into a high-frequency image and a low-frequency image using a guided filter;
combining the input image with the high-frequency image to obtain a first combined image, which is used as the input of a haze-free image estimation network J-net to estimate the haze-free image corresponding to the input image;
combining the input image with the low-frequency image to obtain a second combined image, which is used as the input of a transmissivity estimation network T-net to estimate the transmission image corresponding to the input image;
estimating an atmospheric illumination map corresponding to the input image by using a maximum filter;
reconstructing the input image using an atmospheric scattering model based on the haze-free image, the transmission image, and the atmospheric illumination map;
and using the reconstruction loss function as the loss function, performing end-to-end training on the haze-free image estimation network J-net and the transmissivity estimation network T-net.
Further, decomposing the input image into a high-frequency image and a low-frequency image using the guided filter comprises:
using the channel difference map of the input image as the guide map in the guided filter, and obtaining the low-frequency image corresponding to the input image through low-pass filtering, wherein the channel difference map is obtained by subtracting the minimum color channel from the maximum color channel of the input image;
and subtracting the low-frequency image from the input image to obtain the high-frequency image corresponding to the input image.
Further, a plurality of high-frequency images and low-frequency images are obtained by setting different decomposition parameters of the guided filter;
the step of combining the input image with the high-frequency image to obtain the first combined image comprises:
combining the input image with the plurality of high-frequency images to obtain the first combined image;
the step of combining the input image with the low-frequency image to obtain the second combined image comprises:
combining the input image with the plurality of low-frequency images to obtain the second combined image.
Furthermore, the haze-free image estimation network J-net adopts a U-net network as its framework and comprises a feature extraction module, a multi-scale dilated (hole) convolution module, and an image recovery module;
the feature extraction module comprises alternately arranged convolution layers and pooling layers;
the image recovery module comprises up-sampling layers, skip connections, and convolution layers, with a BatchNorm layer and a ReLU activation function after each convolution layer except the last;
the multi-scale dilated convolution module is composed of dilated convolutions with different dilation rates.
Further, the transmissivity estimation network T-net is an auto-encoder with skip connections;
the encoder adopts N modules based on a convolutional neural network, each consisting of a convolution layer, a downsampling layer, a BatchNorm layer, and a LeakyReLU activation function;
the decoder adopts N modules based on a convolutional neural network, each consisting of a skip connection, an up-sampling layer, a BatchNorm layer, a convolution layer, and a LeakyReLU activation function.
Further, estimating the atmospheric illumination map corresponding to the input image using a maximum filter comprises:
performing maximum value filtering on the input image to obtain a maximum-filtered input image;
and applying guided filtering to the maximum-filtered input image to obtain the atmospheric illumination map of the input image, wherein the guide map of the guided filter is the input image.
Further, the atmospheric scattering model adopted is:
I(x) = J*T + (1 - T)*A,
wherein I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, J represents the data matrix of the haze-free image, T represents the data matrix of the transmission image, and A represents the data matrix of the atmospheric illumination map.
Further, the reconstruction loss function is:
L_Rec = ‖I(x) - x‖_p,
wherein x represents the data matrix of the input image, I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, and ‖·‖_p represents the p-norm of a given data matrix.
Compared with the prior art, the invention has the following advantages:
the invention provides an unsupervised night image defogging method using high-low frequency decomposition, which takes high-frequency information as input of defogging image estimation, so that the defogging image estimated by a model contains clearer textures, low-frequency information is added into the estimation of transmissivity, a network is ensured to accurately estimate a transmission image, the estimation of the defogging image is more accurate, only a single real night foggy image observed is needed under the condition that a night clear image-foggy image data set is not needed, defogging effect of the image can be achieved after minute training time, night foggy can be removed, image vision and readability are improved, and a key effect is played for the fields of video monitoring, target identification and the like in a night scene.
Drawings
For a clearer description of the technical solutions of the present invention, the drawings needed in the description are briefly introduced below. The drawings described here show one embodiment of the present invention; those skilled in the art can obtain other drawings from them without inventive effort:
FIG. 1 is a schematic flow chart of an unsupervised night image defogging method using high-low frequency decomposition according to an embodiment of the present invention;
FIG. 2 is a block diagram of a night image defogging method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a J-net network according to an embodiment of the present invention;
FIG. 4a is a block diagram of a T-net network according to an embodiment of the present invention;
FIG. 4b is a specific block diagram of the convolutional neural network module of FIG. 4 a;
FIGS. 5a to 5g compare the results of the present invention and other methods on a first real night foggy image, wherein FIG. 5a is the input image, FIG. 5b is the defogged image of the NDIM method, FIG. 5c of the GS method, FIG. 5d of the MRP method, FIG. 5e of the MRP-Faster method, FIG. 5f of the PWAB method, and FIG. 5g of the present invention;
FIGS. 6a to 6g compare the results of the present invention and other methods on a second real night foggy image, wherein FIG. 6a is the input image, FIG. 6b is the defogged image of the NDIM method, FIG. 6c of the GS method, FIG. 6d of the MRP method, FIG. 6e of the MRP-Faster method, FIG. 6f of the PWAB method, and FIG. 6g of the present invention;
FIGS. 7a to 7g compare the results of the present invention and other methods on a third real night foggy image, wherein FIG. 7a is the input image, FIG. 7b is the defogged image of the NDIM method, FIG. 7c of the GS method, FIG. 7d of the MRP method, FIG. 7e of the MRP-Faster method, FIG. 7f of the PWAB method, and FIG. 7g of the present invention.
Detailed Description
The proposed solution of the invention is further described in detail below with reference to the accompanying drawings and specific embodiments; its advantages and features will become clearer from this description. It should be noted that the drawings are in greatly simplified form and not to precise scale, serving only to conveniently and clearly illustrate the embodiments of the invention. The structures, proportions, and sizes shown in the drawings are for illustration only and do not limit the invention; modifications, changes of proportion, or adjustments of size that do not affect the effects and objects achievable by the invention still fall within the spirit and scope of the invention.
The key idea of the invention is to decompose the input image into a high-frequency part and a low-frequency part with a guided filter, then take the low-frequency part together with the input image as the input of the transmissivity estimation network T-net, and the high-frequency part together with the input image as the input of the haze-free image estimation network J-net. The high-frequency part contains the scene texture information of the image, which helps estimate the haze-free image, while the low-frequency part contains the structural information of the image, which helps estimate the transmissivity. Next, the atmospheric illumination map A of the input image is estimated with a maximum value filter and combined with the transmission map T from T-net and the haze-free map J from J-net. With this night image defogging method, no large set of paired night clear-image/foggy-image data is needed; only a single observed night foggy image is required, and a night image defogging effect is achieved after minutes of training time. The method can effectively remove night haze and improve the visibility of night foggy images.
Referring to fig. 1 and 2, the method for defogging an unsupervised night image by using high-low frequency decomposition provided by the invention comprises the following steps:
s100, decomposing the input image into a high-frequency image and a low-frequency image by using a guide filter. The input image is a night foggy day image.
S200, combining the input image and the high-frequency image to obtain a first combined image, and using the first combined image as input of a defogging image estimation network J-net to estimate a defogging image corresponding to the input image.
S300, combining the input image with the low-frequency image to obtain a second combined image, and estimating a transmission image corresponding to the input image by taking the second combined image as the input of a transmission estimating network T-net.
In step S100, the channel difference map of the input image x is used as the guide map in the guided filter, and the low-frequency image corresponding to the input image is obtained through low-pass filtering, where the channel difference map is obtained by subtracting the minimum color channel from the maximum color channel of the input image; the high-frequency image corresponding to the input image is then obtained by subtracting the low-frequency image from the input image.
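As an illustration of this decomposition step, the sketch below assumes the guided filter implementation from opencv-contrib-python (cv2.ximgproc); the function name decompose and the float-image conventions are illustrative assumptions, not taken from the patent.

import cv2
import numpy as np

def decompose(img, radius, eps):
    """Split a night hazy image into low/high-frequency parts (step S100).

    img: float32 RGB array in [0, 1] with shape (H, W, 3).
    radius, eps: guided-filter window radius and regularization parameter.
    """
    # Channel difference map: maximum color channel minus minimum color
    # channel, used as the guide map of the guided filter.
    guide = (img.max(axis=2) - img.min(axis=2)).astype(np.float32)
    # Low-pass filtering with the guided filter yields the low-frequency image.
    low = cv2.ximgproc.guidedFilter(guide, img, radius, eps)
    # Subtracting the low-frequency image from the input gives the
    # high-frequency (texture) residual.
    high = img - low
    return low, high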
Preferably, a plurality of high-frequency images and low-frequency images are obtained by setting different decomposition parameters of the guided filter; correspondingly, in steps S200 and S300, the first combined image is obtained by combining the input image with the plurality of high-frequency images, and the second combined image by combining the input image with the plurality of low-frequency images. Combining the input image with several high-frequency images improves the quality of the J-net output image, and combining it with several low-frequency images improves the quality of the T-net output image, thereby improving the night image defogging effect.
For example, with a set of guided filter radii (2, 4, 8, 16, 32) and a set of regularization parameters (0.0001, 0.00001), combination yields 10 parameter sets, and guided filtering finally produces 10 high-frequency images and 10 low-frequency images. The 10 low-frequency images are combined with the input image x as the input of the transmissivity estimation network T-net, and the corresponding 10 high-frequency images are combined with the input image x as the input of the haze-free image estimation network J-net.
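Continuing the sketch above (reusing its decompose function), the parameter sweep that produces the 10 low/high-frequency pairs and the two stacked network inputs could look as follows; the variable names img, t_net_input, and j_net_input are illustrative.

import numpy as np

radii = (2, 4, 8, 16, 32)          # guided-filter radii from the example
epsilons = (0.0001, 0.00001)       # regularization parameters

lows, highs = [], []
for r in radii:
    for e in epsilons:             # 5 x 2 = 10 parameter sets
        low, high = decompose(img, r, e)   # decompose() from the sketch above
        lows.append(low)
        highs.append(high)

# Channel-wise concatenation: the input image plus 10 low-frequency images
# feeds T-net, and the input plus 10 high-frequency images feeds J-net.
t_net_input = np.concatenate([img] + lows, axis=2)    # (H, W, 33)
j_net_input = np.concatenate([img] + highs, axis=2)   # (H, W, 33)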
Specifically, as shown in fig. 3, the haze-free image estimation network J-net adopts a U-net network as its framework, improved by adding a multi-scale dilated convolution module to further boost its haze removal performance. The J-net structure consists of a feature extraction module, an image recovery module, and a multi-scale dilated convolution module. The feature extraction module alternately applies convolution layers and pooling layers to extract features of the input image; the image recovery module completes image recovery with up-sampling layers, skip connections, and convolution layers, and except for the final 1×1 convolution, each convolution layer is followed by a BatchNorm layer and a ReLU activation function. The multi-scale dilated convolution module is formed by multiple parallel paths of dilated convolutions with different dilation rates; for example, fig. 3 shows three paths, where P1, P2, and P3 denote dilation rates of 1, 2, and 3, respectively. This design lets the network capture multi-scale context information and thereby achieve a better defogging effect.
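A minimal PyTorch sketch of the multi-scale dilated convolution module follows; only the three dilation rates (1, 2, 3) come from the description, while the channel counts and the 1×1 fusion convolution are illustrative assumptions.

import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    """Three parallel dilated-convolution paths (P1, P2, P3) fused by a 1x1 conv."""

    def __init__(self, channels):
        super().__init__()
        # One path per dilation rate; padding equals the dilation rate so
        # all paths preserve the spatial size of the feature map.
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=rate, dilation=rate),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for rate in (1, 2, 3)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        # Concatenate the multi-scale features and fuse them back to the
        # original channel count.
        return self.fuse(torch.cat([path(x) for path in self.paths], dim=1))

Such a module would sit between the feature extraction and image recovery stages of the U-net backbone.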
As shown in figs. 4a and 4b, the transmissivity estimation network T-net is an auto-encoder with skip connections. The encoder uses N (e.g., 5) modules based on convolutional neural networks, each composed of a convolution layer, a downsampling layer, a BatchNorm layer, and a LeakyReLU activation function; the decoder likewise contains N (e.g., 5) such modules, each composed of a skip connection, an up-sampling layer, a BatchNorm layer, a convolution layer, and a LeakyReLU activation function. Finally, the estimated transmission image T is obtained through a 1×1 convolution layer and a sigmoid activation function.
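The encoder and decoder modules of T-net might be sketched as follows, again in PyTorch; the channel arguments, stride-2 max pooling, and bilinear upsampling are illustrative choices, while the layer order follows the description (the skip connection is concatenated after upsampling so the resolutions match).

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Convolution -> downsampling -> BatchNorm -> LeakyReLU."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.MaxPool2d(2),                    # downsampling layer
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class DecoderBlock(nn.Module):
    """Skip connection -> upsampling -> BatchNorm -> convolution -> LeakyReLU."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                              align_corners=False)
        self.bn = nn.BatchNorm2d(in_ch + skip_ch)
        self.conv = nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x, skip):
        # Upsample, then concatenate the matching encoder feature (skip connection).
        x = torch.cat([self.up(x), skip], dim=1)
        return self.act(self.conv(self.bn(x)))

A full T-net would stack N (e.g., 5) of each block and finish with a 1×1 convolution and a sigmoid to output the transmission image T.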
And S400, estimating an atmospheric illumination map corresponding to the input image by using a maximum filter.
Specifically, maximum value filtering is performed on the input image to obtain a maximum-filtered input image, and guided filtering is then applied to the maximum-filtered input image to obtain the atmospheric illumination map of the input image, where the guide map of the guided filter is the input image itself.
For example, a small local block of size 5×5 is selected and the input image is maximum-filtered within this block to obtain the maximum-filtered input image; that image is then guided-filtered to obtain the atmospheric illumination map A of the input image, with the input image itself serving as the guide map of the guided filter. The guided filter parameters are set, for example, to a radius of 10 and a regularization parameter of 0.01.
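A sketch of this step, assuming SciPy's maximum_filter for the 5×5 local maximum and the same opencv-contrib guided filter used above; the function name is illustrative.

import cv2
import numpy as np
from scipy.ndimage import maximum_filter

def estimate_atmospheric_light(img):
    """Estimate the atmospheric illumination map A (step S400).

    img: float32 RGB array in [0, 1] with shape (H, W, 3).
    """
    # Maximum value filtering within a 5x5 local block, per color channel.
    max_filtered = maximum_filter(img, size=(5, 5, 1)).astype(np.float32)
    # Guided filtering of the max-filtered image; the guide map is the
    # input image itself, with radius 10 and regularization 0.01.
    return cv2.ximgproc.guidedFilter(img, max_filtered, 10, 0.01)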
S500, reconstructing the input image by adopting an atmospheric scattering model based on the haze-free image, the transmission image and the atmospheric illumination map.
Reconstructing the input image constrains the networks; the haze-free image J estimated in the process is the output with night haze removed.
Specifically, the atmospheric scattering model is:
I(x) = J*T + (1 - T)*A,
wherein I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, J represents the data matrix of the haze-free image, T represents the data matrix of the transmission image, and A represents the data matrix of the atmospheric illumination map.
S600, using the reconstruction loss function as the loss function, perform end-to-end training on the haze-free image estimation network J-net and the transmissivity estimation network T-net.
Specifically, a reconstruction loss function of the network model is calculated, namely:
L_Rec = ‖I(x) - x‖_p,
wherein x represents the data matrix of the input image, I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, and ‖·‖_p represents the p-norm of a given data matrix; the Frobenius norm (F-norm) is preferred.
After the image layers are separated, L_Rec constrains the entire network by reconstructing the input image, and this loss function is used for end-to-end training of the haze-free image estimation network J-net and the transmissivity estimation network T-net.
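Putting steps S500 and S600 together, an unsupervised single-image training loop might look like the sketch below, assuming PyTorch and the j_net / t_net modules and stacked inputs from the earlier sketches; the optimizer, learning rate, and iteration count are illustrative assumptions (x and A are the input image and illumination map as NCHW tensors).

import torch

# j_net, t_net: networks built from the modules sketched above.
# j_net_input, t_net_input: the stacked (1, 33, H, W) input tensors.
# x: the night hazy input image as a (1, 3, H, W) tensor.
# A: the atmospheric illumination map as a (1, 3, H, W) tensor.
optimizer = torch.optim.Adam(
    list(j_net.parameters()) + list(t_net.parameters()), lr=1e-3)

for step in range(500):            # minutes-scale, single-image training
    J = j_net(j_net_input)         # haze-free image estimate
    T = t_net(t_net_input)         # transmission map estimate in (0, 1)
    # Atmospheric scattering model: I(x) = J*T + (1 - T)*A.
    I_rec = J * T + (1.0 - T) * A
    # Reconstruction loss L_Rec = ||I(x) - x||_F (Frobenius norm).
    loss = torch.norm(I_rec - x, p='fro')
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, J = j_net(j_net_input) is the defogged night image.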
For objective evaluation of the algorithm of the present invention, the PSNR (peak signal-to-noise ratio; the larger the value, the less the distortion), SSIM (an index measuring the structural similarity of two images; larger is better, with a maximum of 1), and CIEDE2000 (a color-difference formula) of the defogged images are computed. These objective indexes are tested on the synthetic night foggy dataset NHM, comparing the existing NDIM, GS, MRP, MRP-Faster, and PWAB algorithms with the present invention, as shown in Table 1. As can be seen from Table 1, the algorithm of the present invention achieves the best objective indexes compared with the other five night defogging algorithms.
TABLE 1
FIGS. 5a to 5g, FIGS. 6a to 6g, and FIGS. 7a to 7g show three sets of comparison diagrams. As the comparisons show, relative to the other five algorithms, the invention defogs night foggy images while improving their visibility to a greater extent. In particular, at large scene depths in the image, its defogging effect surpasses existing night image defogging techniques; image details and textures are better preserved and recovered, greatly improving the readability and usability of the image information.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the present invention has been described in detail through the foregoing description of the preferred embodiment, it should be understood that the foregoing description is not to be considered as limiting the invention. Many modifications and substitutions of the present invention will become apparent to those of ordinary skill in the art upon reading the foregoing. Accordingly, the scope of the invention should be limited only by the attached claims.

Claims (5)

1. An unsupervised night image defogging method using high-low frequency decomposition, comprising:
decomposing the input image into a high-frequency image and a low-frequency image using a guided filter;
combining the input image with the high-frequency image to obtain a first combined image, which is used as the input of a haze-free image estimation network J-net to estimate the haze-free image corresponding to the input image;
combining the input image with the low-frequency image to obtain a second combined image, which is used as the input of a transmissivity estimation network T-net to estimate the transmission image corresponding to the input image;
estimating an atmospheric illumination map corresponding to the input image by using a maximum filter;
reconstructing the input image using an atmospheric scattering model based on the haze-free image, the transmission image, and the atmospheric illumination map;
using the reconstruction loss function as the loss function, performing end-to-end training on the haze-free image estimation network J-net and the transmissivity estimation network T-net;
wherein decomposing the input image into a high-frequency image and a low-frequency image using the guided filter comprises:
using the channel difference map of the input image as the guide map in the guided filter, and obtaining the low-frequency image corresponding to the input image through low-pass filtering, wherein the channel difference map is obtained by subtracting the minimum color channel from the maximum color channel of the input image;
subtracting the low-frequency image from the input image to obtain the high-frequency image corresponding to the input image;
the haze-free image estimation network J-net adopts a U-net network as its framework and comprises a feature extraction module, a multi-scale dilated convolution module, and an image recovery module;
the feature extraction module comprises alternately arranged convolution layers and pooling layers;
the image recovery module comprises up-sampling layers, skip connections, and convolution layers, with a BatchNorm layer and a ReLU activation function after each convolution layer except the last;
the multi-scale dilated convolution module is composed of multiple parallel paths of dilated convolutions with different dilation rates;
the transmissivity estimation network T-net is an auto-encoder with skip connections;
the encoder adopts N modules based on a convolutional neural network, each consisting of a convolution layer, a downsampling layer, a BatchNorm layer, and a LeakyReLU activation function;
and the decoder adopts N modules based on a convolutional neural network, each consisting of a skip connection, an up-sampling layer, a BatchNorm layer, a convolution layer, and a LeakyReLU activation function.
2. The unsupervised night image defogging method using high-low frequency decomposition according to claim 1, wherein a plurality of the high-frequency images and low-frequency images are obtained by setting different decomposition parameters of the guided filter;
the step of combining the input image with the high-frequency image to obtain the first combined image comprises:
combining the input image with the plurality of high-frequency images to obtain the first combined image;
the step of combining the input image with the low-frequency image to obtain the second combined image comprises:
combining the input image with the plurality of low-frequency images to obtain the second combined image.
3. The unsupervised night image defogging method using high-low frequency decomposition according to claim 1, wherein estimating the atmospheric illumination map corresponding to the input image using a maximum filter comprises:
performing maximum value filtering on the input image to obtain a maximum-filtered input image;
and applying guided filtering to the maximum-filtered input image to obtain the atmospheric illumination map of the input image, wherein the guide map of the guided filter is the input image.
4. The unsupervised night image defogging method using high-low frequency decomposition according to claim 1, wherein the atmospheric scattering model is as follows:
I(x) = J*T + (1 - T)*A,
wherein I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, J represents the data matrix of the haze-free image, T represents the data matrix of the transmission image, and A represents the data matrix of the atmospheric illumination map.
5. The unsupervised night image defogging method using high-low frequency decomposition according to claim 1, wherein the reconstruction loss function is:
L_Rec = ‖I(x) - x‖_p,
wherein x represents the data matrix of the input image, I(x) represents the data matrix of the input image reconstructed using the atmospheric scattering model, and ‖·‖_p represents the p-norm of a given data matrix.
CN202110384208.8A 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition Active CN113191964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110384208.8A CN113191964B (en) 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110384208.8A CN113191964B (en) 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition

Publications (2)

Publication Number Publication Date
CN113191964A CN113191964A (en) 2021-07-30
CN113191964B (en) 2024-04-05

Family

ID=76975335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110384208.8A Active CN113191964B (en) 2021-04-09 2021-04-09 Unsupervised night image defogging method using high-low frequency decomposition

Country Status (1)

Country Link
CN (1) CN113191964B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487519B (en) * 2021-09-03 2022-02-25 南通欧泰机电工具有限公司 Image rain removing method based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472818A (en) * 2018-10-17 2019-03-15 天津大学 A kind of image defogging method based on deep neural network
AU2020100274A4 (en) * 2020-02-25 2020-03-26 Huang, Shuying DR A Multi-Scale Feature Fusion Network based on GANs for Haze Removal
CN111882496A (en) * 2020-07-06 2020-11-03 苏州加乘科技有限公司 Method for defogging night image based on recurrent neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472818A (en) * 2018-10-17 2019-03-15 天津大学 A kind of image defogging method based on deep neural network
AU2020100274A4 (en) * 2020-02-25 2020-03-26 Huang, Shuying DR A Multi-Scale Feature Fusion Network based on GANs for Haze Removal
CN111882496A (en) * 2020-07-06 2020-11-03 苏州加乘科技有限公司 Method for defogging night image based on recurrent neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
余春艳; 林晖翔; 徐小丹; 叶鑫焱. Parameter estimation of the foggy-day degradation model and CUDA design. Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), 2018(02), full text. *
杨爱萍; 赵美琪; 王海新; 鲁立宇. Night image dehazing based on low-pass filtering and joint multi-feature optimization. Acta Optica Sinica (光学学报), 2018(10), full text. *

Also Published As

Publication number Publication date
CN113191964A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
CN111553929B (en) Mobile phone screen defect segmentation method, device and equipment based on converged network
Liu et al. Learning aggregated transmission propagation networks for haze removal and beyond
CN112184577B (en) Single image defogging method based on multiscale self-attention generation countermeasure network
CN113673590B (en) Rain removing method, system and medium based on multi-scale hourglass dense connection network
CN104050637B (en) Quick image defogging method based on two times of guide filtration
CN110517203B (en) Defogging method based on reference image reconstruction
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN112767279B (en) Underwater image enhancement method for generating countermeasure network based on discrete wavelet integration
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN110097522B (en) Single outdoor image defogging method based on multi-scale convolution neural network
CN107749048B (en) Image correction system and method, and color blindness image correction system and method
Zhu et al. Object detection in complex road scenarios: improved YOLOv4-tiny algorithm
CN113191964B (en) Unsupervised night image defogging method using high-low frequency decomposition
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
CN115689932A (en) Image defogging method based on deep neural network
CN110807743B (en) Image defogging method based on convolutional neural network
CN112767267B (en) Image defogging method based on simulation polarization fog-carrying scene data set
CN111598793A (en) Method and system for defogging image of power transmission line and storage medium
Gui et al. Adaptive single image dehazing method based on support vector machine
Guo et al. Haze removal for single image: A comprehensive review
CN116977215A (en) Image defogging method, device, equipment and storage medium
CN116385293A (en) Foggy-day self-adaptive target detection method based on convolutional neural network
CN115439363A (en) Video defogging device and method based on comparison learning
Li et al. Underwater image enhancement utilizing adaptive color correction and model conversion for dehazing
CN111986109A (en) Remote sensing image defogging method based on full convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant