CN110807744B - Image defogging method based on convolutional neural network - Google Patents


Info

Publication number
CN110807744B
CN110807744B
Authority
CN
China
Prior art keywords
image
defogging
network
dehazer
intermediate transmission
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911020206.XA
Other languages
Chinese (zh)
Other versions
CN110807744A (en)
Inventor
华臻
丁元娟
李晋江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Technology and Business University
Original Assignee
Shandong Technology and Business University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Technology and Business University filed Critical Shandong Technology and Business University
Priority to CN201911020206.XA priority Critical patent/CN110807744B/en
Publication of CN110807744A publication Critical patent/CN110807744A/en
Application granted granted Critical
Publication of CN110807744B publication Critical patent/CN110807744B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/73 - Deblurring; Sharpening
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30181 - Earth observation
    • G06T 2207/30192 - Weather; Meteorology
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image defogging method based on a convolutional neural network, comprising the following steps: transform the atmospheric scattering model; build an Encoder-decoder network and estimate the intermediate transmission map; treat the transformed model as an image restoration problem; and construct a Dehazer network to realize the Dehazer function and output the defogged image. The Encoder-decoder network alleviates the influence of noise and jitter without changing the network structure or related parameters, captures the features most relevant to the target image, and outputs a relatively accurate intermediate transmission map. The Dehazer network has a simple structure, is convenient to train, shares parameters, has a moderate computational cost and stable performance, effectively prevents gradient vanishing and explosion, and outputs the defogged image quickly. The method can therefore remove the influence of fog or haze efficiently and rapidly, effectively improving the defogging quality of images, and the defogging effect is close to ideal.

Description

Image defogging method based on convolutional neural network
Technical Field
The invention relates to the technical field of image processing, in particular to an image defogging method based on a convolutional neural network.
Background
Images are the visual basis on which humans acquire information from the real world, and an important medium for transmitting and expressing that information. Image processing technology has developed rapidly and plays an important role in many fields, such as medicine, transportation, archaeology, agriculture, industry, and construction. Image processing uses a computer to analyze and manipulate an image so as to turn it into a desired result; it generally includes image compression, image enhancement, image restoration, and image recognition. The image defogging method proposed by the invention belongs to image restoration.
Many professional and everyday applications rely on optical imaging systems, such as real-time monitoring, object tracking, autonomous driving, satellite remote sensing, and ordinary photography. In foggy weather, light is scattered, reflected, and absorbed by particles suspended in the air, so the acquired image loses contrast and recognizability, its colors are severely attenuated, and detail information is lost. The resulting distortion strongly affects vision systems: the relevant image features can no longer be observed directly, which hinders analysis and application of the image. An efficient and rapid image defogging method is therefore needed to restore the sharpness of images taken in fog and thus improve their quality and visibility.
Since the beginning of the twenty-first century, with the proposal of deep learning theory and the improvement of computing technology and equipment, convolutional neural networks have developed rapidly and are now widely used in computer vision, natural language processing, image segmentation, object detection, and related fields. The basic idea is to learn features of the raw data with the convolutional and pooling layers of the network; sharing convolution kernels and weights reduces learning complexity, greatly reduces computation, makes the model convenient to train, and allows some or all physical parameters to be regressed for further use in recovering a clean image. Although the single-image defogging method based on a multi-scale convolutional neural network (Single Image Dehazing via Multi-Scale Convolutional Neural Networks, MSCNN) performs better than traditional methods, its performance degrades in bright white areas such as the sky: because it estimates the haze level lower than the true amount, defogging is incomplete, the output clean image still contains haze that was not removed, the defogged image appears dark, and a relatively ideal defogging result cannot be obtained.
Disclosure of Invention
The invention aims to provide an image defogging method based on a convolutional neural network, and the invention adopts the following technical scheme.
Building on existing methods, the invention provides an image defogging technique based on a convolutional neural network, which restores a distorted hazy image to a clearer one. First the atmospheric scattering model is transformed; then an Encoder-decoder network is built to estimate the intermediate transmission map; the transformed model is then treated as an image restoration problem; finally a Dehazer network is built to output the defogged image.
The specific steps of the invention are:
1) Analyze the atmospheric scattering model and transform it.
2) Build an Encoder-decoder network and estimate a relatively accurate intermediate transmission map.
3) On the basis of step 1), treat the transformed atmospheric scattering model as an image restoration problem and obtain the Dehazer function.
4) Construct a Dehazer network to realize the Dehazer function and output the defogged image.
In step 1), problem analysis is carried out on the hazy image according to the atmospheric scattering model; the problem to be solved in image defogging is that the intermediate transmission map and the atmospheric light value are unknown. The atmospheric scattering model is

I(x) = J(x)t(x) + A(1 - t(x))    (1)

where x indexes the pixels of the input image; I(x) is the hazy image captured by the camera in foggy weather, i.e. the input image; J(x) is the clear image obtained after defogging, i.e. the haze-free image; t(x) is the transmittance of the scene light contained in the target scene, i.e. the intermediate transmission map; and A is a constant representing the global atmospheric light value. Assuming the fog concentration is homogeneous in the atmosphere, the intermediate transmission map is computed as

t(x) = e^{-β d(x)}    (2)

where β is the atmospheric scattering coefficient, which represents the scattering ability of a unit volume of atmosphere and generally takes a relatively small constant value, and d(x) is the distance between the object and the camera, i.e. the depth of field. The relation between the intermediate transmission map and the depth of field is that t(x) decays exponentially as d(x) increases. According to the atmospheric scattering model, the difficulty of defogging is that the intermediate transmission map t(x) and the atmospheric light value A are unknown and must be estimated relatively accurately by some method; many papers build and improve defogging methods around this difficulty.
Dividing both sides of (1) by the transmission map transforms the formula into

I(x)/t(x) = J(x) + A(1 - t(x))/t(x)    (3)

Let v(x) = A(1 - t(x))/t(x); then (1) can be rewritten as

J(x) = I(x)/t(x) - v(x)    (4)

where, in formula (4), J(x) is the clear image, I(x)/t(x) is the ratio of the hazy image to the intermediate transmission map, and v(x) is a residual image: noise such as fog and haze, i.e. the aggregate image degradation caused by various factors.
Step 2) builds an Encoder-decoder network and estimates a relatively accurate intermediate transmission map.
Step 3) uses maximum a posteriori probability estimation (MAP) to obtain a solution to the image restoration problem. Writing h(x) = I(x)/t(x) for the ratio of the hazy image to the intermediate transmission map, equation (4) can be expressed as

Ĵ = arg max_J  log p(h | J) + log p(J)    (5)

where log p(h | J) is the log-likelihood term and log p(J) is the prior term, which is independent of h. Equation (5) can therefore be restated as

Ĵ = arg min_J  (1/2)‖h - J‖² + λΦ(J)    (6)

where (1/2)‖h - J‖², the residual between the observed hazy image and the reconstructed image in the energy function, is the fidelity term, λ is a trade-off parameter, and Φ(J) is the regularization term.
Instead of MAP inference, the invention learns a predefined nonlinear function F(·; Θ) by optimizing a loss function on a training set of clear/hazy image pairs, so that (6) is transformed into learning of the prior parameters; (6) is restated as the objective function

min_Θ  loss(F(h; Θ), J)    (7)

The fidelity term and the regularization term can be decoupled by the half-quadratic splitting method (Half Quadratic Splitting, HQS), which simplifies the computation. Introducing an auxiliary variable z, i.e. replacing the variable inside the regularization term of (6), restates (6) as the constrained optimization problem

min_{J,z}  (1/2)‖h - J‖² + λΦ(z),  subject to  z = J    (8)

The loss function can then be minimized by the HQS method:

L_μ(J, z) = (1/2)‖h - J‖² + λΦ(z) + (μ/2)‖z - J‖²    (9)

where μ is a penalty parameter that varies in a non-decreasing fashion across iterations. Equation (9) can be solved further by alternating between the two sub-problems

J_{k+1} = arg min_J  ‖h - J‖² + μ‖J - z_k‖²    (10a)
z_{k+1} = arg min_z  (μ/2)‖z - J_{k+1}‖² + λΦ(z)    (10b)
From the above derivation it can be seen that the variable shared by the fidelity term and the regularization term is split into two independent sub-problems. The fidelity sub-problem (10a) has the closed-form solution

J_{k+1} = (h + μ z_k) / (1 + μ)

The regularization sub-problem (10b) may be rewritten as

z_{k+1} = arg min_z  1/(2(√(λ/μ))²) ‖J_{k+1} - z‖² + Φ(z)    (11)

Based on Bayesian probability, (11) amounts to defogging the image J_{k+1} with a Dehazer function of noise level √(λ/μ), so (11) can be restated as

z_{k+1} = Dehazer(J_{k+1}, √(λ/μ))    (12)
and 4) constructing a Dehazer network to realize a Dehazer function, and outputting defogging images.
The beneficial effects of the invention are that
(1) The constructed Encoder-decoder network can estimate the intermediate transmission map. The Encoder-decoder structure plays an important role in this stage: it reduces noise and jitter without changing the specific structure of the network or its parameters, captures the most important information in a very short time, retains key features while discarding unimportant ones, and thus facilitates a more accurate estimate of the intermediate transmission map.
(2) The Dehazer network built by the invention realizes the Dehazer function and outputs the defogged image. The network model has a simple structure and stable, reliable performance; parameters are shared within the Dehazer network, which avoids setting a large number of parameters, greatly reduces the related computation, facilitates training, and allows the defogged image to be obtained quickly.
(3) Through the convolutional neural network model and the splitting technique, the invention trains a series of fast and effective Dehazer networks, which can serve as prior knowledge in model-based methods; that is, a Dehazer network can be inserted as a module into a model-based optimization method and used to solve related problems in other, higher-level fields.
(4) In content such as bright white areas and white objects, the transmission map estimated by the method is more accurate; halation (a bright ring of light surrounding a light source) can be removed while high robustness is maintained; the brightness of the image is adjusted, its details are highlighted, and even very small detail features are handled well, so the defogged visual effect is natural and closer to the real scene (ground truth).
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a diagram of an Encoder-decoder network constructed in accordance with the present invention.
Figure 3 is a diagram of a Dehazer network constructed in accordance with the present invention.
Fig. 4 is a defogging result of the present invention applied to an indoor image.
Fig. 5 is a defogging result of the present invention applied to an outdoor image.
Detailed Description
The invention will be further described with reference to the drawings and examples.
As shown in fig. 1, firstly, an atmospheric scattering model is transformed, then an Encoder-decoder network is built to estimate an intermediate transmission diagram, then an image restoration problem is processed, and finally a Dehazer network is built to realize a Dehazer function, and a defogging image is output, wherein the specific steps are as follows:
1) According to the atmospheric scattering model, problem analysis is carried out on the hazy image; the problem to be solved in image defogging is that the intermediate transmission map and the atmospheric light value are unknown. The atmospheric scattering model is

I(x) = J(x)t(x) + A(1 - t(x))    (1)

where x indexes the pixels of the input image; I(x) is the hazy image captured by the camera in foggy weather, i.e. the input image; J(x) is the clear image obtained after defogging, i.e. the haze-free image; t(x) is the transmittance of the scene light contained in the target scene, i.e. the intermediate transmission map; and A is a constant representing the global atmospheric light value. Assuming the fog concentration is homogeneous in the atmosphere, the intermediate transmission map is computed as

t(x) = e^{-β d(x)}    (2)

where β is the atmospheric scattering coefficient, which represents the scattering ability of a unit volume of atmosphere and generally takes a relatively small constant value, and d(x) is the distance between the object and the camera, i.e. the depth of field. The relation between the intermediate transmission map and the depth of field is that t(x) decays exponentially as d(x) increases. According to the atmospheric scattering model, the difficulty of defogging is that the intermediate transmission map t(x) and the atmospheric light value A are unknown and must be estimated relatively accurately by some method; many papers build and improve defogging methods around this difficulty.
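The forward model in equations (1) and (2) can be sketched directly. The following minimal numpy example synthesizes a hazy image from a clean one; the values of A and β are illustrative choices, not taken from the patent:

```python
import numpy as np

def transmission(depth, beta=0.1):
    """Intermediate transmission map t(x) = exp(-beta * d(x)), eq. (2)."""
    return np.exp(-beta * depth)

def synthesize_haze(J, depth, A=0.8, beta=0.1):
    """Atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), eq. (1)."""
    t = transmission(depth, beta)
    return J * t + A * (1.0 - t), t

# Toy example: a constant-intensity scene whose depth grows left to right.
J = np.full((4, 4), 0.5)
depth = np.tile(np.linspace(0.0, 30.0, 4), (4, 1))
I, t = synthesize_haze(J, depth)
```

At depth 0 the pixel is unchanged (t = 1), while distant pixels drift toward the airlight A, which is exactly the contrast loss the description attributes to fog.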
Dividing both sides of (1) by the transmission map transforms the formula into

I(x)/t(x) = J(x) + A(1 - t(x))/t(x)    (3)

Let v(x) = A(1 - t(x))/t(x); then (1) can be rewritten as

J(x) = I(x)/t(x) - v(x)    (4)

where, in formula (4), J(x) is the clear image, I(x)/t(x) is the ratio of the hazy image to the intermediate transmission map, and v(x) is a residual image: noise such as fog and haze, i.e. the aggregate image degradation caused by various factors.
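Equations (3) and (4) are an exact algebraic rearrangement of (1), which can be verified numerically; in this sketch the constants and random scene are arbitrary illustrative choices:

```python
import numpy as np

A, beta = 0.8, 0.1                                   # illustrative values
J = np.random.default_rng(0).uniform(0.2, 0.9, size=(8, 8))      # clean image
depth = np.random.default_rng(1).uniform(1.0, 40.0, size=(8, 8))  # scene depth
t = np.exp(-beta * depth)                            # transmission map, eq. (2)

I = J * t + A * (1.0 - t)   # forward model, eq. (1)
h = I / t                   # hazy image / transmission map, eq. (3)
v = A * (1.0 - t) / t       # residual image (the haze component)
J_rec = h - v               # eq. (4): J(x) = I(x)/t(x) - v(x)
```

Subtracting the residual from the ratio image recovers the clean image exactly, which is why the Dehazer network can target the residual.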
2) As shown in fig. 2, the Encoder-decoder network is built and a relatively accurate intermediate transmission map is estimated. Each constituent network layer is described in detail as follows:
(1) The first layer uses one ConV+Leaky ReLU stage to pre-process the hazy image. ConV performs feature extraction through its convolution kernel and weight sharing, which reduces the number of parameters, simplifies the model, and improves the speed and accuracy of network training. This layer extracts shallow features of the image, such as shape, color, and edges.
(2) The middle layers use an Encoder-decoder structure consisting of four ConV+BN+Leaky ReLU layers and four DConV+BN+Leaky ReLU layers. Convolution layers with different kernel sizes further extract deep features of the image, such as texture. The encoder part uses ConV, i.e. convolution layers, to map high-dimensional image features into a low-dimensional space while retaining the important features of the image, producing a set of feature maps. The decoder part uses DConV, i.e. deconvolution layers; deconvolution is the inverse of convolution and maps the low-dimensional image features back to high dimensions, so that the processed feature maps become more visualizable through its restoration function. To avoid losing important image features, skip connections (Skip connection) are used inside the Encoder-decoder structure.
(3) The final layer uses a ConV+Leaky ReLU structure; it combines the feature maps obtained for the image so far and finally produces the estimated intermediate transmission map by nonlinear regression.
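As an aside on how a ConV+Leaky ReLU stage extracts features through kernel and weight sharing, here is a minimal numpy sketch; the 3x3 Laplacian-style kernel is purely illustrative, since the patent does not disclose the actual kernels or sizes:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution with a single shared kernel (weight sharing)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

# One ConV+Leaky ReLU stage: a 3x3 edge-detecting kernel applied to any image
# size with the same 9 weights, so parameters do not grow with the image.
img = np.arange(36, dtype=float).reshape(6, 6)   # a linear intensity ramp
kernel = np.array([[0., -1., 0.],
                   [-1., 4., -1.],
                   [0., -1., 0.]])
feat = leaky_relu(conv2d(img, kernel))
```

On a linear ramp the edge kernel responds with zero everywhere, which illustrates how such a kernel isolates edges rather than smooth gradients.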
3) Maximum a posteriori probability estimation (MAP) is used to solve the image restoration problem. Writing h(x) = I(x)/t(x) for the ratio of the hazy image to the intermediate transmission map, equation (4) can be expressed as

Ĵ = arg max_J  log p(h | J) + log p(J)    (5)

where log p(h | J) is the log-likelihood term and log p(J) is the prior term, which is independent of h. Equation (5) can therefore be restated as

Ĵ = arg min_J  (1/2)‖h - J‖² + λΦ(J)    (6)

where (1/2)‖h - J‖², the residual between the observed hazy image and the reconstructed image in the energy function, is the fidelity term, λ is a trade-off parameter, and Φ(J) is the regularization term.
Instead of MAP inference, the invention learns a predefined nonlinear function F(·; Θ) by optimizing a loss function on a training set of clear/hazy image pairs, so that (6) is transformed into learning of the prior parameters; (6) is restated as the objective function

min_Θ  loss(F(h; Θ), J)    (7)

The fidelity term and the regularization term are decoupled by the half-quadratic splitting method, which simplifies the computation. Introducing an auxiliary variable z, i.e. replacing the variable inside the regularization term of (6), restates (6) as the constrained optimization problem

min_{J,z}  (1/2)‖h - J‖² + λΦ(z),  subject to  z = J    (8)
The loss function can then be minimized by the HQS method:

L_μ(J, z) = (1/2)‖h - J‖² + λΦ(z) + (μ/2)‖z - J‖²    (9)

where μ is a penalty parameter that varies in a non-decreasing fashion across iterations. Equation (9) can be solved further by alternating between the two sub-problems

J_{k+1} = arg min_J  ‖h - J‖² + μ‖J - z_k‖²    (10a)
z_{k+1} = arg min_z  (μ/2)‖z - J_{k+1}‖² + λΦ(z)    (10b)

From the above derivation it can be seen that the variable shared by the fidelity term and the regularization term is split into two independent sub-problems; the fidelity sub-problem (10a) has the closed-form solution

J_{k+1} = (h + μ z_k) / (1 + μ)
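The alternating scheme (9)-(10b), together with the closed-form update just given, can be sketched end to end. In the sketch below a simple box filter stands in for the learned Dehazer prior of step (10b); that substitution is an assumption for illustration only, since the patent plugs in the trained network here, and λ, the initial μ, and the penalty schedule are likewise illustrative:

```python
import numpy as np

def box_denoise(z, k=3):
    """Stand-in prior step: box smoothing (the patent uses a trained CNN)."""
    pad = k // 2
    zp = np.pad(z, pad, mode='edge')
    out = np.empty_like(z)
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            out[i, j] = zp[i:i + k, j:j + k].mean()
    return out

def hqs_restore(h, mu0=0.5, iters=8):
    """Half-quadratic splitting on 0.5*||h - J||^2 + lam*Phi(z), s.t. z = J."""
    z = h.copy()
    mu = mu0
    for _ in range(iters):
        J = (h + mu * z) / (1.0 + mu)   # closed-form J-update, eq. (10a)
        z = box_denoise(J)              # prior / denoiser z-update, eq. (10b)
        mu *= 2.0                       # non-decreasing penalty parameter
    return J

h = np.random.default_rng(2).normal(0.5, 0.1, size=(16, 16))  # noisy ratio image
restored = hqs_restore(h)
```

As μ grows, the J-update is pulled ever closer to the denoised variable z, which is how the split recombines into a single restored image.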
the regularization term equation (10 b) may be rewritten as:
based on Bayesian probability, equation (11) can be determined by having a noise levelDehazer function to hazy imageProceeding with defogging, equation (12) can be restated as:
4) As shown in fig. 3, the Dehazer network is built to realize the Dehazer function; the image is defogged and the defogged image is output. Each network layer of this stage is described in detail as follows:
(1) The first layer uses ConV+Leaky ReLU and is a feature extraction layer. It takes the ratio of the hazy image to the intermediate transmission map obtained in the previous stage as its input and initially extracts the features of the target image within the network.
(2) The second layer uses ConV+BN+Leaky ReLU and is the first kind of feature conversion layer. A BN layer is added, the feature maps of the previous layer serve as its input, and the relevant features of the previous result are converted.
(3) The third layer uses the Feature 1 conversion layer, which applies a further feature transformation to the feature-mapping result produced by the layer above.
(4) The fourth layer uses ConV+Leaky ReLU and is a feature extraction layer. It is mainly used for multi-level extraction of relevant features; by extracting features of the image in several aspects, it captures the features of the target image more finely.
(5) The fifth layer uses ConV+BN+Leaky ReLU and is the second kind of feature conversion layer, preparing the data for the next processing operation.
(6) The sixth layer uses the Feature 2 conversion layer, which performs a deeper feature conversion on the feature-mapping result of the layer above.
(7) The seventh layer uses ConV+Leaky ReLU, a feature extraction layer; multi-level feature extraction allows fine and comprehensive learning of the important features of the target image.
(8) The eighth layer uses ConV+BN+Leaky ReLU, the first kind of feature conversion layer; it improves the accuracy of feature conversion and predicts other relevant information features of the target image well.
(9) The ninth layer returns to the Feature 1 conversion layer used before, so that the feature-mapping result of the layer above is converted more finely and the errors produced during network training are reduced.
(10) The tenth layer uses ConV+Leaky ReLU and produces the output defogged image through nonlinear regression.
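The ConV+BN+Leaky ReLU "feature conversion" stages can be illustrated in isolation. This numpy sketch abbreviates the convolution away and omits the learned scale and shift of real batch normalization (both simplifying assumptions); it shows the normalization that the description credits with keeping gradients from vanishing or exploding:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Inference-style batch normalization over one feature map
    (illustrative; the learned scale/shift parameters are omitted)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

# A BN+Leaky ReLU stage applied to an activation map with a large offset and
# spread: BN recenters and rescales it before the nonlinearity, which is what
# stabilizes the deeper Dehazer layers during training.
feat = np.random.default_rng(3).normal(5.0, 3.0, size=(8, 8))
converted = leaky_relu(batch_norm(feat))
```

Whatever the incoming mean and variance, the normalized map always has zero mean and unit variance, so activations stay in a range where Leaky ReLU passes useful gradients.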
The present invention can be further illustrated by the following experimental results.
1. Experimental content: the REalistic Single Image Dehazing (RESIDE) dataset is an integrated large-scale real single-image defogging dataset with which the network proposed by the invention is trained. The images collected in RESIDE comprise pairs of clear and hazy images; the amount of data is fully sufficient to serve as training and test sets for the invention, and a network trained on it is substantially reliable and performs well.
2. Experimental results
Fig. 4 shows the defogging results of the method of the invention applied to indoor images. Fig. 4(a) is the first indoor hazy image; fig. 4(b), 4(c) and 4(d) are, respectively, the MSCNN defogging result for fig. 4(a), the defogging result of the method of the invention, and the Ground Truth image. Fig. 4(e) is the second indoor hazy image; fig. 4(f), 4(g) and 4(h) are, respectively, the MSCNN defogging result for fig. 4(e), the defogging result of the method of the invention, and the Ground Truth (real scene) image.
As can be seen from fig. 4, the MSCNN method does not defog completely because it estimates the haze level lower than the true level, and its output still contains residual haze. The defogging of the invention is thorough, and the overall visual effect is natural and relatively close to the Ground Truth.
Fig. 5 shows the defogging results of the method of the invention applied to outdoor images. Fig. 5(a) is the first outdoor hazy image; fig. 5(b), 5(c) and 5(d) are, respectively, the MSCNN defogging result for fig. 5(a), the defogging result of the method of the invention, and the Ground Truth. Fig. 5(e) is the second outdoor hazy image; fig. 5(f), 5(g) and 5(h) are, respectively, the MSCNN defogging result for fig. 5(e), the defogging result of the method of the invention, and the Ground Truth.
As fig. 5 shows, in bright white areas such as the sky the MSCNN defogged image appears dark and still retains some fog, so its defogging is incomplete. The method of the invention defogs thoroughly, leaves essentially no fog residue, handles sky regions effectively, highlights the detail features of the image, and is essentially similar to the Ground Truth in color saturation, contrast, and related aspects.
In summary, the invention realizes image defogging by constructing a two-stage network model: the Encoder-decoder network estimates the intermediate transmission map, and the Dehazer network, after the related training, realizes the Dehazer function and outputs the defogged image. The network model built by the invention is simple, convenient to implement, fast to run, and efficient, and defogs hazy images well.
The specific embodiments of the present invention are described in detail above and in conjunction with the accompanying drawings, but the scope of the present invention is not limited thereto, and those skilled in the art can make various modifications or variations on the present invention based on the technical solutions of the present invention, which remain within the scope of the present invention.

Claims (3)

1. An image defogging method based on a convolutional neural network, characterized in that the atmospheric scattering model is first transformed; an Encoder-decoder network is then built to estimate the intermediate transmission map; the image restoration problem is then processed according to the transformed atmospheric scattering model; and finally a Dehazer network is built to realize the Dehazer function, output the defogged image, and complete the defogging treatment;
the method comprises the following steps:
1) Analyzing the atmospheric scattering model, and transforming the atmospheric scattering model to obtain a foggy image;
2) Building an Encoder-decoder network, and estimating a relatively accurate intermediate transmission diagram;
3) On the basis of the step 1), processing the transformed atmospheric scattering model as image restoration based on the foggy image obtained in the step 1) and the intermediate transmission diagram in the step 2), and obtaining a Dehazer function;
4) Constructing a Dehazer network to realize a Dehazer function, calculating based on the foggy image and the intermediate transmission diagram, and outputting a defogging image;
and in step 3), the image restoration is processed according to the atmospheric scattering model transformed in step 1) and the maximum a posteriori probability estimation technique, after which decoupling is performed by the half-quadratic splitting method, finally obtaining the Dehazer function.
2. The defogging method based on convolutional neural network as recited in claim 1, wherein the step 2) is to build an Encoder-decoder network, and to estimate a more accurate intermediate transmission map based on the features of the input capable of achieving the maximum image correlation.
3. The defogging method based on a convolutional neural network according to claim 1, wherein step 4) builds a Dehazer network to realize the Dehazer function and trains the network model to obtain a trained network model; the Dehazer network outputs the residual image through an identity mapping, outputs the ratio of the foggy image to the intermediate transmission map through a skip connection, and combines the two branches in parallel to output the defogged image.
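Claim 3's two-branch structure can be read off the transformed scattering model: since J(x) = I(x)/t(x) − A(1 − t(x))/t(x), a skip connection can supply the ratio I/t directly, and the learned branch only needs to produce the residual term. The network weights are not disclosed, so in this numpy sketch the learned branch is replaced by the closed-form residual A(1 − t)/t purely to show how the two branch outputs combine; the function and variable names are illustrative, not the patent's:

```python
import numpy as np

def dehazer_combine(I, t, residual):
    """Combine the Dehazer network's two branches:
    skip connection -> I / t, learned branch -> residual image;
    the defogged output is their difference."""
    ratio = I / t[..., None]          # skip-connection branch (foggy image / transmission)
    return np.clip(ratio - residual, 0.0, 1.0)

rng = np.random.default_rng(2)
J_true = rng.uniform(0.2, 0.8, size=(4, 4, 3))
t = rng.uniform(0.4, 0.9, size=(4, 4))
A = 0.9
I = J_true * t[..., None] + A * (1 - t[..., None])

# Stand-in for the learned residual branch: the exact residual A*(1-t)/t.
residual = A * (1 - t[..., None]) / t[..., None]
J_rec = dehazer_combine(I, t, residual)
```

In the trained network the residual branch would be a CNN prediction rather than this closed form; the combination step is unchanged.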
CN201911020206.XA 2019-10-25 2019-10-25 Image defogging method based on convolutional neural network Active CN110807744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911020206.XA CN110807744B (en) 2019-10-25 2019-10-25 Image defogging method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN110807744A CN110807744A (en) 2020-02-18
CN110807744B true CN110807744B (en) 2023-09-08

Family

ID=69489188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911020206.XA Active CN110807744B (en) 2019-10-25 2019-10-25 Image defogging method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110807744B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022011661A1 (en) * 2020-07-17 2022-01-20 南京理工大学 Progressive feature fusion-based image smog removal method and system
CN114331853B (en) * 2020-09-30 2023-05-12 四川大学 Single image restoration iteration framework based on target vector updating module
CN112150395A (en) * 2020-10-15 2020-12-29 山东工商学院 Encoder-decoder network image defogging method combining residual block and dense block
CN112634171B (en) * 2020-12-31 2023-09-29 上海海事大学 Image defogging method and storage medium based on Bayesian convolutional neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719247A (en) * 2016-01-13 2016-06-29 华南农业大学 Characteristic learning-based single image defogging method
CN107295261A (en) * 2017-07-27 2017-10-24 广东欧珀移动通信有限公司 Image defogging processing method, device, storage medium and mobile terminal
CN107749052A (en) * 2017-10-24 2018-03-02 中国科学院长春光学精密机械与物理研究所 Image defogging method and system based on deep learning neutral net
CN109389569A (en) * 2018-10-26 2019-02-26 大象智能科技(南京)有限公司 Based on the real-time defogging method of monitor video for improving DehazeNet
CN109410135A (en) * 2018-10-02 2019-03-01 复旦大学 It is a kind of to fight learning-oriented image defogging plus mist method
CN109544470A (en) * 2018-11-08 2019-03-29 西安邮电大学 A kind of convolutional neural networks single image to the fog method of boundary constraint
CN109903232A (en) * 2018-12-20 2019-06-18 江南大学 A kind of image defogging method based on convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9305339B2 (en) * 2014-07-01 2016-04-05 Adobe Systems Incorporated Multi-feature image haze removal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JINJIANG LI, et al. Image Dehazing Using Residual-Based Deep CNN. IEEE Access. 2018, pp. 23831-26841. *

Similar Documents

Publication Publication Date Title
CN110807744B (en) Image defogging method based on convolutional neural network
Golts et al. Unsupervised single image dehazing using dark channel prior loss
CN110458844B (en) Semantic segmentation method for low-illumination scene
CN110310241B (en) Method for defogging traffic image with large air-light value by fusing depth region segmentation
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN112184577A (en) Single image defogging method based on multi-scale self-attention generation countermeasure network
CN111127354B (en) Single-image rain removing method based on multi-scale dictionary learning
CN113298815A (en) Semi-supervised remote sensing image semantic segmentation method and device and computer equipment
CN113066025B (en) Image defogging method based on incremental learning and feature and attention transfer
CN116311254B (en) Image target detection method, system and equipment under severe weather condition
US11935213B2 (en) Laparoscopic image smoke removal method based on generative adversarial network
Ma et al. Image-based air pollution estimation using hybrid convolutional neural network
CN115861715B (en) Knowledge representation enhancement-based image target relationship recognition algorithm
CN116205962A (en) Monocular depth estimation method and system based on complete context information
CN116740362A (en) Attention-based lightweight asymmetric scene semantic segmentation method and system
CN117727046A (en) Novel mountain torrent front-end instrument and meter reading automatic identification method and system
CN116977844A (en) Lightweight underwater target real-time detection method
CN116664421A (en) Spacecraft image shadow removing method based on multi-illumination angle image fusion
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN109800793A (en) A kind of object detection method and system based on deep learning
Li et al. Multi-scale fusion framework via retinex and transmittance optimization for underwater image enhancement
CN112907660B (en) Underwater laser target detector for small sample
CN115619677A (en) Image defogging method based on improved cycleGAN
Ma PANet: parallel attention network for remote sensing image semantic segmentation
CN113962332A (en) Salient target identification method based on self-optimization fusion feedback

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant