CN111640079A - Defogging method based on image gradient distribution prior - Google Patents

Defogging method based on image gradient distribution prior Download PDF

Info

Publication number
CN111640079A
CN111640079A CN202010494221.4A
Authority
CN
China
Prior art keywords
image
gradient
distribution
model
prior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010494221.4A
Other languages
Chinese (zh)
Other versions
CN111640079B (en
Inventor
田传耕
陈磊
秦伟
田浩澄
徐�明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuzhou University of Technology
Original Assignee
Xuzhou University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuzhou University of Technology filed Critical Xuzhou University of Technology
Priority to CN202010494221.4A priority Critical patent/CN111640079B/en
Publication of CN111640079A publication Critical patent/CN111640079A/en
Application granted granted Critical
Publication of CN111640079B publication Critical patent/CN111640079B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration by non-spatial domain filtering
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing

Abstract

The invention discloses a defogging method based on an image gradient distribution prior, which comprises the following steps: first, a gradient distribution prior model is learned by training on a large set of high-quality natural images; second, the gradient distribution of the foggy image is modified so that it approaches the learned prior model; finally, the reconstructed image is obtained by solving a Poisson equation, yielding the defogged image. Compared with the prior art, the defogged image processed by the method is improved in both contrast and average gradient, so that information such as edge details in the original image is highlighted and a good visual effect is achieved; the values of MSSIM and PSNR are also greatly improved, showing that the method removes haze effectively while maintaining good structural similarity, making the image clearer.

Description

Defogging method based on image gradient distribution prior
Technical Field
The invention relates to a defogging method, in particular to a defogging method based on image gradient distribution prior.
Background
In haze weather, acquired images are degraded by scattering from particles such as dust and mist in the air: their contrast and definition are poor, which impairs the visual effect. Existing defogging methods include wavelet-transform defogging, defogging by enhancing the contrast of local-region colors, defogging by preprocessing the foggy image followed by median filtering, defogging based on the dark channel prior, and so on.
The wavelet-transform defogging method yields a good visual effect, but it is inefficient because several foggy images must be acquired for fusion; enhancing the contrast of local-region colors achieves defogging but amplifies image noise along with the contrast; preprocessing the foggy image followed by median filtering achieves a certain defogging effect and meets real-time requirements, but it blurs boundaries with large gray-level changes and loses much detail information; the dark-channel-prior method estimates the air transmissivity from prior knowledge and then refines the estimate using the soft-matting principle to achieve image defogging.
Therefore, research into a fast and effective defogging method that makes haze images clear is of great significance.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a defogging method based on image gradient distribution prior, which has good defogging effect and high image definition.
In order to achieve the purpose, the invention adopts the technical scheme that: a defogging method based on image gradient distribution prior comprises the following steps,
a prior model:
the image I(x, y) is converted to grayscale, and the gradient G is defined as:
G(x, y) = (∇_x I(x, y), ∇_y I(x, y)),
wherein the first-order finite-difference approximations are:
∇_x I = I(x+1, y) - I(x, y) and ∇_y I = I(x, y+1) - I(x, y);
on the image boundary, the homogeneous Dirichlet boundary condition is used; since grayscale images are processed, the gradient value range is [-255, 255] × [-255, 255]. G_x and G_y denote the components of G along the x and y axes, respectively;
the Laplacian L is defined as: L(x, y) = ΔI(x, y),
where Δ denotes the Laplace operation, discretized with a second-order 5-point finite-difference template, so its value range is [-1020, 1020]. To convert a histogram into a probability distribution, the bin counts are divided by the total number of pixels m × n of the image, where m and n are the numbers of pixels along the x and y axes, respectively. A histogram is computed for each image in the data set, and the average distributions of the gradient and of the Laplace response are obtained; these averages constitute the learned prior;
for color images, the learned prior is applied to each color channel separately;
gradient distribution model:
given that G_x and G_y are heavy-tailed on a logarithmic scale, they can be modeled by a hyper-Laplacian distribution; the traditional one-dimensional model matches p(G_x) on the logarithmic scale but fails to satisfy
Figure BDA0002522168470000026
Unlike previous work, the CDF rather than the PDF is used for modeling;
the CDF of the gradient is defined as:
Figure BDA0002522168470000027
observing the characteristics of C(G), a parametric model approximating the CDF is proposed:
Figure BDA0002522168470000028
wherein the atan function is selected based on the t-distribution or the Cauchy distribution;
the marginal model consistent with the distribution of G_x (and similarly G_y) is:
Figure BDA0002522168470000029
laplace distribution model:
the distribution of the Laplace response is likewise modeled by its CDF, defined as:
Figure BDA0002522168470000031
for laplacian CDF, the parametric model is:
Figure BDA0002522168470000032
where T_2 is the only free parameter;
naturalness factors and image regression:
for any given image I, the naturalness factor N_f is defined as:
Figure BDA0002522168470000033
where θ ∈ [0, 1] is a weighting parameter; the naturalness factor N_f^c of a color image is defined separately for each color channel c;
specification of gradient distribution:
the mapping function converts the gradient field G into a new gradient field G_n; this function satisfies the normalization prior:
Figure BDA0002522168470000036
wherein the mapping is non-parametric, non-linear;
image reconstruction:
the normalized image I_n is reconstructed by solving the decomposition model; the new gradient field derived from the mapping is:
Figure BDA0002522168470000034
The Poisson equation is: ΔI_n = ∇·G_n, and it can be solved by an FFT-based algorithm or a wavelet-based method;
linear approximation of the mapping: the single-parameter model that linearly approximates the mapping function is G_n = N_f · G, which is equivalent to scaling the original image: I_n = N_f · I.
Further, the linear-approximation step scales the image directly by N_f, which markedly accelerates the image normalization process.
Compared with the prior art, the method first learns a gradient distribution prior model by training on a large set of high-quality natural images; second, it modifies the gradient distribution of the foggy image so that it approaches the learned prior model; finally, it obtains the defogged image by solving a Poisson equation for the reconstructed image. The method effectively defogs haze images, and the processed images retain more detail information, so the image definition is greatly improved.
Drawings
FIG. 1 shows histograms of different distances between the prior and the gradient and Laplace distributions of the individual training images;
FIG. 2 shows a Laplace CDF corresponding to different images;
Detailed Description
The present invention is further described below. The technical solutions in the embodiments of the present invention are clearly and completely described, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a defogging method based on image gradient distribution prior, which comprises the following steps:
A prior model: 7 high-quality natural-image data sets, summarized in Table 1, are used. The image I(x, y) is converted to grayscale, and the gradient G is defined as:
G(x, y) = (∇_x I(x, y), ∇_y I(x, y)),
wherein the first-order finite-difference approximations are:
∇_x I = I(x+1, y) - I(x, y) and ∇_y I = I(x, y+1) - I(x, y);
on the image boundary, the homogeneous Dirichlet boundary condition is used; since grayscale images are processed, the gradient value range is [-255, 255] × [-255, 255]. G_x and G_y denote the components of G along the x and y axes, respectively;
TABLE 1
Image set:         1     2     3     4     5     6     7     Total
Number of images:  1005  1000  5063  832   1491  6033  8189  23613
The Laplacian L is defined as: L(x, y) = ΔI(x, y),
where Δ denotes the Laplace operation, discretized with a second-order 5-point finite-difference template, so its value range is [-1020, 1020]. To convert a histogram into a probability distribution, the bin counts are divided by the total number of pixels m × n of the image, where m and n are the numbers of pixels along the x and y axes, respectively. A histogram is computed for each image in the data set, and the average distributions of the gradient and of the Laplace response are obtained; these averages constitute the learned prior.
for color images, learned priors are applied to each color channel separately.
To represent more intuitively the difference between natural images and the average gradient and Laplacian prior distributions learned from them, the first row of FIG. 1 shows, for each image, histograms of the root-mean-square (RMS) distance (left), the Hellinger distance (middle), and the KL divergence (right) between that image's distribution and the prior. The second row of FIG. 1 shows the RMS distances for the Laplace response of the grayscale and color images, respectively.
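To make the prior-learning step above concrete, the following minimal Python/NumPy sketch computes the gradient and Laplacian histograms and averages them over a data set. It is an illustration only, not the patented implementation; the function names, bin layout, and the assumption of 8-bit grayscale inputs are choices made here for the example.

```python
import numpy as np

def gradient(img):
    """Forward differences; homogeneous Dirichlet boundary (last row/column set to 0)."""
    gx = np.zeros_like(img, dtype=np.float64)
    gy = np.zeros_like(img, dtype=np.float64)
    gx[:-1, :] = img[1:, :] - img[:-1, :]   # I(x+1, y) - I(x, y)
    gy[:, :-1] = img[:, 1:] - img[:, :-1]   # I(x, y+1) - I(x, y)
    return gx, gy

def laplacian(img):
    """Second-order 5-point finite-difference Laplacian, Dirichlet boundary."""
    p = np.pad(img.astype(np.float64), 1, mode="constant")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * img.astype(np.float64))

def learn_prior(images):
    """Average the per-image gradient and Laplacian histograms (as probabilities)."""
    g_bins = np.arange(-255.5, 256.5)        # gradient range [-255, 255]
    l_bins = np.arange(-1020.5, 1021.5)      # Laplacian range [-1020, 1020]
    g_prior = np.zeros(g_bins.size - 1)
    l_prior = np.zeros(l_bins.size - 1)
    for img in images:                        # grayscale arrays with values in [0, 255]
        gx, gy = gradient(img)
        n_pix = img.size                      # m * n, used to normalize the histograms
        g_prior += (np.histogram(gx, g_bins)[0] +
                    np.histogram(gy, g_bins)[0]) / (2.0 * n_pix)
        l_prior += np.histogram(laplacian(img), l_bins)[0] / n_pix
    return g_prior / len(images), l_prior / len(images)
```

In this sketch the learned prior is simply the average of the per-image normalized histograms, matching the averaging step described above.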
Gradient distribution model:
Given that G_x and G_y are heavy-tailed on a logarithmic scale, they can be modeled by a hyper-Laplacian distribution; the traditional one-dimensional model matches p(G_x) on the logarithmic scale but fails to satisfy
Figure BDA0002522168470000051
Unlike previous work, the CDF rather than the PDF is used for modeling;
the CDF of the gradient is defined as:
Figure BDA0002522168470000052
observing the characteristics of C(G), a parametric model approximating the CDF is proposed:
Figure BDA0002522168470000053
where the atan function is chosen based on the t-distribution or the Cauchy distribution; the model has only one parameter, T_1, and the fitting results are shown in Table 2.
TABLE 2 Parameterization of the two-dimensional CDF model for the image data sets
Image set  1       2       3       4       5       6       7       Total
T_1        0.37    0.26    0.38    0.35    0.56    0.37    0.7     0.46
SSE        20.71   23.11   19.08   23.7    22.94   19.64   22.97   18.75
R-square   0.9995  0.9995  0.9996  0.9996  0.9995  0.9996  0.9995  0.9996
For G_x (and similarly for G_y), the marginal model is:
Figure BDA0002522168470000054
Laplace distribution model: the distribution of the Laplace response is likewise modeled by its CDF, defined as:
Figure BDA0002522168470000055
for laplacian CDF, the parametric model is:
Figure BDA0002522168470000056
where T_2 is the only free parameter. FIG. 2 shows Laplacian CDFs for different images: the first row of FIG. 2 shows the images, the second row the corresponding gradient CDFs, and the third row the corresponding Laplacian CDFs; the figure also shows that steps in the CDF indicate edges preserved in the spatial domain.
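The parametric CDF models can be fitted to the empirical CDFs with a short sketch. Since the exact expressions appear above only as formula images, the atan-based forms below (a Cauchy-style CDF with a single scale parameter T_1 or T_2) and the helper names are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed atan-based forms (the patent shows the exact formulas only as images):
def grad_cdf_model(g, t1):
    return np.arctan(g / t1) / np.pi + 0.5     # marginal CDF model for G_x (or G_y)

def lap_cdf_model(l, t2):
    return np.arctan(l / t2) / np.pi + 0.5     # CDF model for the Laplace response

def fit_scale(samples, model, lo, hi):
    """Fit the single free parameter (T_1 or T_2) against the empirical CDF."""
    xs = np.linspace(lo, hi, 1024)
    emp_cdf = np.searchsorted(np.sort(samples.ravel()), xs) / samples.size
    (t_hat,), _ = curve_fit(model, xs, emp_cdf, p0=[1.0])
    return t_hat

# Usage (gx and lap computed as in the prior-learning sketch above):
# T1 = fit_scale(gx, grad_cdf_model, -255, 255)
# T2 = fit_scale(lap, lap_cdf_model, -1020, 1020)
```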
Naturalness factors and image regression: for any given image I, the naturalness factor N_f is defined as:
Figure BDA0002522168470000061
where θ ∈ [0, 1] is a weighting parameter, and the naturalness factor of a color image is defined separately for each color channel c. The normalized image I_n is derived from I such that T_i ≈ T_i^pr (i ∈ {1, 2}); this process is called image normalization.
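For illustration only (the exact definition of N_f appears above only as a formula image), one plausible single-line sketch combines the ratios of the learned prior parameters to the image's own fitted parameters with the weight θ; the formula below is an assumption, not the patented definition.

```python
def naturalness_factor(t1_img, t2_img, t1_prior, t2_prior, theta=0.5):
    # Assumed combination (illustration only): ratios of the learned prior scales
    # to the image's own fitted scales, weighted by theta in [0, 1].
    return theta * (t1_prior / t1_img) + (1.0 - theta) * (t2_prior / t2_img)
```

For a color image the same computation would be repeated for each color channel c.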
Specification of the gradient distribution: the mapping function converts the gradient field G into a new gradient field G_n; this function satisfies the normalization prior:
Figure BDA0002522168470000063
wherein the mapping is non-parametric, non-linear;
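One way to realize such a non-parametric, non-linear mapping is histogram (CDF) specification applied to the gradient field: each gradient value is sent to the value with the same quantile under the learned prior CDF. The sketch below is a hedged illustration of that idea; the names specify_gradient, prior_cdf, and prior_bins are assumptions and not part of the original text.

```python
import numpy as np

def specify_gradient(g, prior_cdf, prior_bins):
    """Remap gradient values so their distribution follows the learned prior.

    g          : one gradient component (G_x or G_y) of the hazy image
    prior_cdf  : learned prior CDF sampled at prior_bins (monotone, in [0, 1])
    prior_bins : gradient values at which prior_cdf is sampled
    """
    flat = g.ravel()
    order = np.argsort(flat)
    quantile = np.empty(flat.size, dtype=np.float64)
    quantile[order] = (np.arange(flat.size) + 0.5) / flat.size   # empirical CDF of g
    g_new = np.interp(quantile, prior_cdf, prior_bins)           # invert the prior CDF
    return g_new.reshape(g.shape)
```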
Image reconstruction: the normalized image I_n is reconstructed by solving the decomposition model; the new gradient field derived from the mapping is:
Figure BDA0002522168470000064
The Poisson equation is: ΔI_n = ∇·G_n, and it can be solved by an FFT-based algorithm or a wavelet-based method;
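An FFT-type solution of the Poisson equation with homogeneous Dirichlet boundary conditions can be sketched with the discrete sine transform (scipy.fft.dstn/idstn). This is a minimal illustration under stated assumptions: the divergence is taken as the backward-difference adjoint of the forward-difference gradient, and DST-I diagonalizes the 5-point Laplacian for this boundary condition.

```python
import numpy as np
from scipy.fft import dstn, idstn

def divergence(gx, gy):
    """Backward-difference divergence (adjoint of the forward-difference gradient)."""
    div = np.zeros_like(gx)
    div[0, :] += gx[0, :]
    div[1:, :] += gx[1:, :] - gx[:-1, :]
    div[:, 0] += gy[:, 0]
    div[:, 1:] += gy[:, 1:] - gy[:, :-1]
    return div

def poisson_reconstruct(gx_new, gy_new):
    """Solve Laplacian(I_n) = div(G_n) with homogeneous Dirichlet boundaries
    using the discrete sine transform (an FFT-family solver)."""
    rhs = divergence(gx_new, gy_new)
    m, n = rhs.shape
    j = np.arange(1, m + 1)[:, None]
    k = np.arange(1, n + 1)[None, :]
    # Eigenvalues of the 5-point Laplacian in the DST-I basis (always negative).
    eig = (2.0 * np.cos(np.pi * j / (m + 1)) - 2.0) + \
          (2.0 * np.cos(np.pi * k / (n + 1)) - 2.0)
    return idstn(dstn(rhs, type=1) / eig, type=1)
```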
Linear approximation of the mapping: the single-parameter model that linearly approximates the mapping function is G_n = N_f · G, which is equivalent to scaling the original image: I_n = N_f · I.
Experimental results and analysis:
the invention verifies the effectiveness and superiority of the invention from an objective angle by carrying out defogging treatment on haze images under different scenes, and compared with the existing defogging method, the invention has stronger universality.
To evaluate the effectiveness and superiority of the invention more objectively, four common objective indices are selected for evaluation.
(1) Contrast refers to the sum of the squared differences between the gray value of the center pixel and the gray values of its four neighbors, divided by the number of squared terms. Image contrast strongly affects visual perception: in general, the higher the contrast, the clearer and more striking the image and the more vivid the colors; the lower the contrast, the more blurred the image.
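A direct implementation of this contrast measure might look as follows (a sketch; because each center-neighbor pair is counted once from each side, the measure reduces to the mean squared difference over adjacent pixel pairs):

```python
import numpy as np

def contrast(img):
    """Mean squared gray-level difference over 4-connected neighbour pairs.
    Counting each centre-neighbour pair from both sides cancels out, so the
    total divided by the number of squared terms equals this pairwise mean."""
    img = img.astype(np.float64)
    dv = (img[1:, :] - img[:-1, :]) ** 2    # vertical neighbour differences
    dh = (img[:, 1:] - img[:, :-1]) ** 2    # horizontal neighbour differences
    return (dv.sum() + dh.sum()) / (dv.size + dh.size)
```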
(2) The average gradient reflects the gray-level differences near boundaries or on the two sides of edges in the image, i.e., the gray-level change rate; the magnitude of this change rate characterizes the definition of the image.
Calculation formula of average gradient:
Figure BDA0002522168470000066
The larger the average gradient, i.e., the larger the gray-level change rate of the image in a given direction, the clearer the image and the better detail such as edges is preserved; conversely, the image is relatively blurred, detail information may be lost, and image distortion easily occurs.
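The average gradient can be computed, for instance, with the commonly used formulation below; this is an assumption for illustration, since the exact formula above appears only as an image.

```python
import numpy as np

def average_gradient(img):
    """Average gradient: mean RMS of the horizontal and vertical first differences."""
    img = img.astype(np.float64)
    dx = img[1:, 1:] - img[:-1, 1:]         # difference along the x direction
    dy = img[1:, 1:] - img[1:, :-1]         # difference along the y direction
    return np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean()
```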
(3) MSSIM is the mean structural similarity, a classical measure of image quality that reflects the subjective perception of the human eye well. The larger the MSSIM value, the higher the structural similarity between the processed image and the original, and the higher the image quality.
(4) PSNR is the peak signal-to-noise ratio, which is commonly used as an objective criterion for evaluating images. The larger the PSNR value, the smaller the distortion of the image.
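MSSIM and PSNR can be computed, for example, with scikit-image; this sketch assumes 8-bit grayscale inputs, since the patent does not specify the implementation used for the evaluation.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, defogged):
    """PSNR and mean SSIM (MSSIM) between a reference image and the defogged result."""
    psnr = peak_signal_noise_ratio(reference, defogged, data_range=255)
    mssim = structural_similarity(reference, defogged, data_range=255)
    return psnr, mssim
```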
TABLE 6 comparison of the indices of the methods under different backgrounds
Figure BDA0002522168470000071
The test results show that, compared with the other two methods, the defogged images produced by this method are improved in both contrast and average gradient, indicating that the defogging results not only highlight information such as edge details in the original image but also achieve a good visual effect. The values of MSSIM and PSNR are greatly improved, indicating that the haze is removed more effectively and the image is clearer while good structural similarity is maintained.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and any minor modifications, equivalent replacements and improvements made to the above embodiment according to the technical spirit of the present invention should be included in the protection scope of the technical solution of the present invention.

Claims (2)

1. A defogging method based on image gradient distribution prior is characterized by comprising the following steps,
a prior model:
the image I(x, y) is converted to grayscale, and the gradient G is defined as:
G(x, y) = (∇_x I(x, y), ∇_y I(x, y)),
wherein the first-order finite-difference approximations are ∇_x I = I(x+1, y) - I(x, y) and ∇_y I = I(x, y+1) - I(x, y),
on the image boundary, the homogeneous Dirichlet boundary condition is used; since grayscale images are processed, the gradient value range is [-255, 255] × [-255, 255]; G_x and G_y denote the components of G along the x and y axes, respectively;
the Laplacian L is defined as: L(x, y) = ΔI(x, y),
where Δ denotes the Laplace operation, discretized with a second-order 5-point finite-difference template, so its value range is [-1020, 1020]; to convert a histogram into a probability distribution, the bin counts are divided by the total number of pixels m × n of the image, where m and n are the numbers of pixels along the x and y axes, respectively; a histogram is computed for each image in the data set, and the average distributions of the gradient and of the Laplace response are obtained as the learned prior;
for color images, the learned prior is applied to each color channel separately;
gradient distribution model:
given that G_x and G_y are heavy-tailed on a logarithmic scale, they can be modeled by a hyper-Laplacian distribution; the traditional one-dimensional model matches p(G_x) on the logarithmic scale but fails to satisfy
Figure FDA0002522168460000013
Unlike previous work, the CDF rather than the PDF is used for modeling;
the CDF of the gradient is defined as:
Figure FDA0002522168460000014
observing the characteristics of C(G), a parametric model approximating the CDF is proposed:
Figure FDA0002522168460000015
wherein the atan function is selected based on the t-distribution or the Cauchy distribution;
the marginal model consistent with the distribution of G_x (and similarly G_y) is:
Figure FDA0002522168460000021
laplace distribution model:
the distribution of the Laplace response is likewise modeled by its CDF, defined as:
Figure FDA0002522168460000022
for laplacian CDF, the parametric model is:
Figure FDA0002522168460000023
where T_2 is the only free parameter;
naturalness factors and image regression:
for any given image I, the naturalness factor N_f is defined as:
Figure FDA0002522168460000024
where θ ∈ [0, 1] is a weighting parameter, and the naturalness factor of a color image is defined separately for each color channel c;
specification of gradient distribution:
the mapping function converts the gradient field G into a new gradient field G_n; this function satisfies the normalization prior:
Figure FDA0002522168460000026
wherein the mapping is non-parametric, non-linear;
image reconstruction:
the normalized image I_n is reconstructed by solving the decomposition model; the new gradient field derived from the mapping is:
Figure FDA0002522168460000027
the Poisson equation is: ΔI_n = ∇·G_n, and it can be solved by an FFT-based algorithm or a wavelet-based method;
linear approximation of the mapping: the single-parameter model that linearly approximates the mapping function is G_n = N_f · G, which is equivalent to scaling the original image: I_n = N_f · I.
2. The method of claim 1, wherein the linear approximation of the mapping scales the image directly by N_f.
CN202010494221.4A 2020-06-03 2020-06-03 Defogging method based on image gradient distribution priori Active CN111640079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010494221.4A CN111640079B (en) 2020-06-03 2020-06-03 Defogging method based on image gradient distribution priori

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010494221.4A CN111640079B (en) 2020-06-03 2020-06-03 Defogging method based on image gradient distribution priori

Publications (2)

Publication Number Publication Date
CN111640079A true CN111640079A (en) 2020-09-08
CN111640079B CN111640079B (en) 2023-05-26

Family

ID=72328625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010494221.4A Active CN111640079B (en) 2020-06-03 2020-06-03 Defogging method based on image gradient distribution priori

Country Status (1)

Country Link
CN (1) CN111640079B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140198992A1 (en) * 2013-01-15 2014-07-17 Apple Inc. Linear Transform-Based Image Processing Techniques
CN107146209A (en) * 2017-05-02 2017-09-08 四川大学 A kind of single image to the fog method based on gradient field
CN109685735A (en) * 2018-12-21 2019-04-26 温州大学 Single picture defogging method based on mist layer smoothing prior

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140198992A1 (en) * 2013-01-15 2014-07-17 Apple Inc. Linear Transform-Based Image Processing Techniques
CN107146209A (en) * 2017-05-02 2017-09-08 四川大学 A kind of single image to the fog method based on gradient field
CN109685735A (en) * 2018-12-21 2019-04-26 温州大学 Single picture defogging method based on mist layer smoothing prior

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
许欣 (Xu Xin): "Gradient-domain image enhancement method combining visual perception characteristics", Journal of Computer-Aided Design & Computer Graphics *

Also Published As

Publication number Publication date
CN111640079B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN108921800B (en) Non-local mean denoising method based on shape self-adaptive search window
Wang et al. An experimental-based review of image enhancement and image restoration methods for underwater imaging
Pan et al. Underwater image de-scattering and enhancing using dehazenet and HWD
Fan et al. Two-layer Gaussian process regression with example selection for image dehazing
CN109636766B (en) Edge information enhancement-based polarization difference and light intensity image multi-scale fusion method
CN105913396A (en) Noise estimation-based image edge preservation mixed de-noising method
CN113837974B (en) NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm
CN110992292B (en) Enhanced low-rank sparse decomposition model medical CT image denoising method
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN106530244B (en) A kind of image enchancing method
Smith et al. Effect of pre-processing on binarization
Liu et al. True wide convolutional neural network for image denoising
CN115375574A (en) Multi-scale non-local low-dose CT image denoising method based on region self-adaption
Yin et al. Multiscale fusion algorithm for underwater image enhancement based on color preservation
Liang et al. Underwater image quality improvement via color, detail, and contrast restoration
CN111652810A (en) Image denoising method based on wavelet domain singular value differential model
CN111640079B (en) Defogging method based on image gradient distribution priori
Srivaramangai et al. Preprocessing MRI images of colorectal cancer
Liu A method of CT image denoising based on residual encoder-decoder network
CN113222879A (en) Generation countermeasure network for fusion of infrared and visible light images
Tang et al. Structure–texture decomposition-based dehazing of a single image with large sky area
CN114066786A (en) Infrared and visible light image fusion method based on sparsity and filter
Mathew et al. Edge-aware spatial denoising filtering based on a psychological model of stimulus similarity
Chen et al. An adaptive non-local means image denoising model
Chen An adaptive image sharpening scheme

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant