CN111598791A - Image defogging method based on improved dynamic atmospheric scattering coefficient function - Google Patents

Image defogging method based on improved dynamic atmospheric scattering coefficient function

Info

Publication number
CN111598791A
Authority
CN
China
Prior art keywords
image
filtering
scattering coefficient
atmospheric scattering
improved dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010284296.XA
Other languages
Chinese (zh)
Other versions
CN111598791B (en)
Inventor
胡辽林
郑毅
赵锴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Zhiguo Cloud Intellectual Property Operation Co ltd
Xi'an Huaqi Zhongxin Technology Development Co ltd
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202010284296.XA priority Critical patent/CN111598791B/en
Publication of CN111598791A publication Critical patent/CN111598791A/en
Application granted granted Critical
Publication of CN111598791B publication Critical patent/CN111598791B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image defogging method based on an improved dynamic atmospheric scattering coefficient function. A minimum channel image is first computed from the R, G, B color channels of the original image, and the atmospheric light value A of the foggy image is calculated by a quadtree segmentation method; the depth of field d(x) of the image is then calculated with a nonlinear color attenuation prior model, and the noise information in it is filtered out by minimum filtering, smoothing filtering and guided filtering to obtain the final image depth of field dr(x); finally, the improved dynamic atmospheric scattering coefficient function β(x) is combined with the final depth of field dr(x) to calculate the atmospheric transmittance t(x), and the fog-free image is recovered through the atmospheric scattering model. The method not only solves the inaccurate transmittance estimation caused by a constant atmospheric scattering coefficient, but also effectively removes the color distortion of the image sky region, so that the defogging result is clearer and more thorough and the scenery colors are more natural and real.

Description

Image defogging method based on improved dynamic atmospheric scattering coefficient function
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image defogging method based on an improved dynamic atmospheric scattering coefficient function.
Background
Haze is a common natural phenomenon on land and at sea. In severe weather such as fog and haze, aerosols and solid particles suspended in the atmosphere absorb and scatter light to some extent, so that images captured by imaging equipment are degraded, typically showing low contrast and low visibility, which seriously affects vision systems, especially visible-light vision systems. Because the clarity of degraded images is greatly reduced, their subsequent automatic processing becomes difficult and challenging; research on fast and effective adaptive image defogging methods therefore has great practical significance for automatic image and video processing.
The model mainly adopted by image defogging techniques is a physical imaging model of foggy conditions, namely the atmospheric scattering model, which recovers the fog-free image in reverse by estimating the required physical parameters such as the atmospheric light and the transmittance (depth). The atmospheric scattering physical model was proposed by McCartney in 1976 and was later derived further by Narasimhan and Nayar et al. into a mathematical model, laying a solid foundation for image defogging research. Over the past few years, many scholars and researchers have achieved significant results in defogging research. For example, He et al. proposed replacing soft matting with guided filtering, which reduces the complexity of the algorithm, but obvious residual fog remains in distant-view areas; Meng et al. proposed a defogging algorithm based on boundary constraints, which restores the fog-free image by adding constraints on the parameters of the physical model and improves the restoration effect by sacrificing a small amount of detail, yielding a clearer image, but the computational complexity of its post-processing operations is high; Zhu et al. found by observing the HSV color channels that there is a linear relationship between the depth of field and the difference between brightness and saturation, proposed the color attenuation prior on this basis, established a mathematical model relating the depth of field to the saturation and brightness of the image, and solved for the depth of field with a supervised learning method, thereby achieving image defogging.
Disclosure of Invention
The invention aims to provide an image defogging method based on an improved dynamic atmospheric scattering coefficient function, which can effectively overcome the shortcomings of the existing color attenuation prior algorithm.
The first technical scheme adopted by the invention is that an image defogging method based on an improved dynamic atmospheric scattering coefficient function is implemented according to the following steps:
step 1, obtaining the minimum channel image Idark(x) from the red, green and blue channel values of the input foggy image I(x), and calculating the atmospheric light value A of the foggy image I(x) by a quadtree segmentation method;
step 2, carrying out color space domain transformation on the original foggy day image I (x), namely transforming the original foggy day image I (x) to an HSV (hue, saturation and value) color space from an RGB (red, green and blue) color space, and extracting a brightness component v (x) and a saturation component s (x) of the foggy day image I (x);
step 3, calculating the depth of field d(x) of the foggy image I(x) using a nonlinear color attenuation prior model, and filtering out the noise information in it by minimum filtering, smoothing filtering and guided filtering to obtain the final image depth of field dr(x);
step 4, combining the improved dynamic atmospheric scattering coefficient function β(x) and the image depth of field dr(x) to calculate the atmospheric transmittance t(x);
step 5, substituting the atmospheric light value A and the atmospheric transmittance t(x) into the atmospheric scattering model formula of the foggy image and calculating the fog-free image J(x) through the fog-free image recovery formula.
The invention is also characterized in that:
The minimum channel image Idark(x) in step 1 is expressed as:
Idark(x) = min{Iy(x) : y ∈ {R, G, B}}    (1)
where y denotes one of the three color channels R, G and B.
The specific process of step 1 is as follows:
step 1.1, setting the initial threshold T0 = 30 × 30 and obtaining the gray-scale image Igray of the input foggy image I(x);
step 1.2, applying median filtering to the gray-scale image Igray to obtain the filtered image Imedian;
step 1.3, dividing the image Imedian evenly into four rectangular regions by the quadtree segmentation method;
step 1.4, computing for each rectangular region a score equal to its average pixel value minus its standard deviation, and selecting the maximum score and the region corresponding to it;
step 1.5, comparing the area of the region corresponding to the maximum score with the initial threshold T0; if the area is larger than T0, returning to step 1.2; otherwise, taking the region corresponding to the maximum score as the target region;
step 1.6, calculating the average of the gray values of the target region; this average is the atmospheric light value A.
Step 3, the expression of the nonlinear color attenuation prior model is as follows:
[Equation (2): nonlinear color attenuation prior model for the depth of field d(x) in terms of v(x) and s(x); rendered as an image in the original]
In formula (2), v(x) represents the brightness component of the foggy image I(x), s(x) represents the saturation component of the foggy image I(x), and the parameters are α = 4.99, θ0 = -0.29, θ1 = 0.83, θ2 = -0.16.
The specific process of filtering out the noise information in step 3 through minimum filtering, smoothing filtering and guided filtering is as follows:
step a, removing by minimum filtering the white objects that would otherwise be mistaken for distant scenery;
dmin(x) = min{d(y) : y ∈ Ω(x)}    (3)
where dmin(x) represents the image depth of field after minimum filtering, d(y) represents the depth of field to be filtered, Ω(x) represents the filtering region centered on pixel x, and the filtering structuring element is a 15 × 15 square matrix;
step b, applying smoothing and guided filtering to the minimum-filtered depth of field dmin(x) to obtain the final image depth of field dr(x):
[Equation (4): smoothing filter applied to dmin(x) to obtain derode(x); rendered as an image in the original]
dr(x) = guidedfilter(Igray, derode(x), r, esp)    (5)
where Igray is the gray-scale image of the original foggy image I(x), derode(x) is the image depth of field after smoothing filtering, r is the radius of the filtering window and takes the value 30, and esp is a regularization parameter and takes the value 0.01.
Step 4, the expression of the improved dynamic atmospheric scattering coefficient function beta (x) is as follows:
Figure BDA0002447921330000044
In step 4, the atmospheric transmittance t(x) is calculated by the expression:
t(x) = exp[-βdr(x)]    (15)
The atmospheric scattering model formula is:
I(x) = J(x)t(x) + A[1 - t(x)]    (16)
The fog-free image J(x) is calculated as follows:
J(x) = (I(x) - A) / max(t(x), t0) + A    (9)
In formula (9), t0 is the lower threshold set for the transmittance t(x) and takes the value 0.1.
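To make the overall flow of steps 1-5 concrete, the following is a minimal Python sketch of how the quantities above fit together. It is a sketch only, assuming an 8-bit BGR input and an atmospheric light value A scaled to [0, 1]; the three callables estimate_atmospheric_light, estimate_depth and dynamic_beta are hypothetical placeholders standing in for steps 1, 3 and 4, and only the data flow and the final inversion of the scattering model follow the formulas above.

```python
import cv2
import numpy as np

def dehaze(image_bgr, estimate_atmospheric_light, estimate_depth, dynamic_beta, t0=0.1):
    """Sketch of steps 1-5; the three callables are placeholders for the quadtree
    light estimate (step 1), the refined depth of field (step 3) and the dynamic
    scattering coefficient (step 4) described in the text."""
    I = image_bgr.astype(np.float64) / 255.0

    # Step 1: atmospheric light A (assumed here to be returned on a [0, 1] scale).
    A = estimate_atmospheric_light(image_bgr)

    # Step 2: brightness v(x) and saturation s(x) from the HSV colour space.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float64) / 255.0
    v = hsv[:, :, 2].astype(np.float64) / 255.0

    # Step 3: final image depth of field dr(x).
    d_r = estimate_depth(image_bgr, v, s)

    # Step 4: dynamic scattering coefficient and transmittance t(x) = exp[-beta(x)*dr(x)].
    beta = dynamic_beta(d_r, v, s, A)
    t = np.exp(-beta * d_r)

    # Step 5: invert I(x) = J(x)t(x) + A[1 - t(x)], clamping t(x) at the lower bound t0.
    t = np.maximum(t, t0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```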
The invention has the beneficial effects that:
the invention discloses an image defogging method based on an improved dynamic atmospheric scattering coefficient function, which utilizes an improved dynamic atmospheric scattering coefficient function model to solve the problem of insufficient defogging degree of a local region caused by constant atmospheric scattering coefficient in the existing color attenuation prior image defogging algorithm and also solve the problem of color distortion of an image sky region.
Drawings
FIG. 1 is a flow chart of an image defogging method based on an improved dynamic atmospheric scattering coefficient function according to the present invention;
FIG. 2 is a graph of experimental results of different values of a and b;
FIG. 3 is a foggy day image;
FIG. 4 is an image before correction;
FIG. 5 is a corrected image;
FIG. 6 is an original foggy day image;
FIG. 7 shows the results of He algorithm processing;
FIG. 8 shows the processing results of Meng algorithm;
FIG. 9 is the results of the Zhu algorithm processing;
FIG. 10 shows the results of the process of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Origin of the color attenuation prior theory: Zhu et al. proposed this new approach to defogging in 2015. After a large number of experiments on outdoor foggy images, they found that the image depth of field has a linear relationship with the difference between brightness and saturation, and on this basis proposed the Color Attenuation Prior (CAP) model, namely
d(x) = θ0 + θ1v(x) + θ2s(x) + ε(x)    (10)
where d(x) is the depth of field at pixel x, v(x) and s(x) are respectively the brightness component and the saturation component at x, θ0, θ1 and θ2 are unknown linear coefficients, and ε(x) is the random error of the model, assumed to follow a normal distribution with mean 0 and variance σ², i.e. ε(x) ~ N(0, σ²). From the properties of the normal distribution,
d(x) ~ N(θ0 + θ1v(x) + θ2s(x), σ²)    (11)
Finally, a linear model was trained on 500 training samples containing 120 million pixels, and after 517 iterations the optimal coefficients θ0 = 0.121779, θ1 = 0.959710, θ2 = -0.780245 and σ = 0.041337 were obtained. As can be seen from equation (10), the depth of field of the image can be calculated once the brightness and saturation information of the foggy image is obtained.
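As a concrete illustration of equation (10), the following is a minimal Python sketch of the linear CAP depth estimate using the fitted coefficients quoted above; it assumes an 8-bit BGR input and OpenCV's HSV conversion with S and V rescaled to [0, 1], and drops the random error term ε(x).

```python
import cv2
import numpy as np

# Fitted coefficients of the linear colour attenuation prior, equation (10).
THETA0, THETA1, THETA2 = 0.121779, 0.959710, -0.780245

def cap_depth(image_bgr):
    """Expected depth of field d(x) = theta0 + theta1*v(x) + theta2*s(x)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float64) / 255.0   # saturation s(x) in [0, 1]
    v = hsv[:, :, 2].astype(np.float64) / 255.0   # brightness  v(x) in [0, 1]
    return THETA0 + THETA1 * v + THETA2 * s
```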
The invention relates to an image defogging method based on an improved dynamic atmospheric scattering coefficient function, which is implemented according to the following steps as shown in figure 1:
step 1, obtaining the minimum channel image Idark(x) from the red, green and blue channel values of the input foggy image I(x), and calculating the atmospheric light value A of the foggy image I(x) by a quadtree segmentation method;
The minimum channel image Idark(x) is expressed as:
Idark(x) = min{Iy(x) : y ∈ {R, G, B}}    (1)
where y denotes one of the three color channels R, G and B.
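A minimal sketch of equation (1), the per-pixel minimum over the three colour channels, assuming a three-channel image array:

```python
import numpy as np

def min_channel(image):
    """I_dark(x): per-pixel minimum over the R, G, B channels, equation (1)."""
    return np.asarray(image, dtype=np.float64).min(axis=2)
```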
The specific process of step 1 is as follows (a sketch of this procedure is given after the steps):
step 1.1, setting the initial threshold T0 = 30 × 30 and obtaining the gray-scale image Igray of the input foggy image I(x);
step 1.2, applying median filtering to the gray-scale image Igray to obtain the filtered image Imedian;
step 1.3, dividing the image Imedian evenly into four rectangular regions by the quadtree segmentation method;
step 1.4, computing for each rectangular region a score equal to its average pixel value minus its standard deviation, and selecting the maximum score and the region corresponding to it;
step 1.5, comparing the area of the region corresponding to the maximum score with the initial threshold T0;
if the area is larger than T0, returning to step 1.2;
otherwise, taking the region corresponding to the maximum score as the target region;
step 1.6, calculating the average of the gray values of the target region; this average is the atmospheric light value A.
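A minimal Python sketch of steps 1.1-1.6, under a few stated assumptions: the median filter kernel size (3 here) is not specified in the text, the median filtering is applied once rather than on every recursion, and the returned value is the mean grey level of the final block on the original 0-255 scale.

```python
import cv2
import numpy as np

T0 = 30 * 30  # initial threshold: stop splitting once a block has <= 900 pixels

def atmospheric_light(image_bgr):
    """Quadtree search for the atmospheric light A (steps 1.1-1.6)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 3).astype(np.float64)   # kernel size assumed

    region = gray
    while region.size > T0:
        h, w = region.shape
        blocks = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                  region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        # Score each quadrant by mean pixel value minus standard deviation,
        # then keep the highest-scoring quadrant and split it again.
        scores = [b.mean() - b.std() for b in blocks]
        region = blocks[int(np.argmax(scores))]

    # Atmospheric light A: average grey value of the selected target region.
    return region.mean()
```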
Step 2, carrying out color space domain transformation on the original foggy day image I (x), namely transforming the original foggy day image I (x) to an HSV (hue, saturation and value) color space from an RGB (red, green and blue) color space, and extracting a brightness component v (x) and a saturation component s (x) of the foggy day image I (x);
step 3, calculating the depth of field d(x) of the foggy image I(x) using a nonlinear color attenuation prior model, and filtering out the noise information in it by minimum filtering, smoothing filtering and guided filtering to obtain the final image depth of field dr(x);
The nonlinear color attenuation prior model expression is:
[Equation (2): nonlinear color attenuation prior model for the depth of field d(x) in terms of v(x) and s(x); rendered as an image in the original]
In formula (2), v(x) represents the brightness component of the foggy image I(x), s(x) represents the saturation component of the foggy image I(x), and the parameters are α = 4.99, θ0 = -0.29, θ1 = 0.83, θ2 = -0.16.
The specific process of filtering out the noise information through minimum filtering, smoothing filtering and guided filtering is as follows (a sketch of this filtering chain is given after step b):
step a, removing by minimum filtering the white objects that would otherwise be mistaken for distant scenery;
dmin(x) = min{d(y) : y ∈ Ω(x)}    (3)
where dmin(x) represents the image depth of field after minimum filtering, d(y) represents the depth of field to be filtered, Ω(x) represents the filtering region centered on pixel x, and the filtering structuring element is a 15 × 15 square matrix;
step b, applying smoothing and guided filtering to the minimum-filtered depth of field dmin(x) to eliminate blocking artifacts and obtain the final image depth of field dr(x):
[Equation (4): smoothing filter applied to dmin(x) to obtain derode(x); rendered as an image in the original]
dr(x) = guidedfilter(Igray, derode(x), r, esp)    (5)
where Igray is the gray-scale image of the original foggy image I(x), derode(x) is the image depth of field after smoothing filtering, r is the radius of the filtering window and takes the value 30, and esp is a regularization parameter and takes the value 0.01.
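A minimal Python sketch of steps a and b. Two assumptions are made: the smoothing step of equation (4), whose exact form is shown only as an image, is taken here to be a simple mean filter, and the guided filtering uses cv2.ximgproc.guidedFilter, which requires the opencv-contrib-python package.

```python
import cv2
import numpy as np

def refine_depth(d, gray):
    """Minimum filtering, smoothing and guided filtering of the depth map d(x)."""
    # Step a: 15 x 15 minimum filter (grey-scale erosion) to suppress white
    # objects that would otherwise be mistaken for distant scenery.
    d_min = cv2.erode(d.astype(np.float32), np.ones((15, 15), np.uint8))

    # Step b: smoothing (assumed here to be a 15 x 15 mean filter) to remove
    # blocking artefacts, then guided filtering with the grey image as guide,
    # window radius r = 30 and regularisation esp = 0.01.
    d_erode = cv2.blur(d_min, (15, 15))
    d_r = cv2.ximgproc.guidedFilter(gray.astype(np.float32), d_erode, 30, 0.01)
    return d_r
```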
Step 4, derivation of the improved dynamic atmospheric scattering coefficient function:
Based on the characteristic that the image depth of field is positively correlated with the fog concentration, the invention improves the dynamic atmospheric scattering coefficient function that depends only on the fog concentration, and defines the improved function form as
[Equation (6): improved dynamic atmospheric scattering coefficient function β(x) in terms of the image depth of field and the unknown coefficients a and b; rendered as an image in the original]
where a and b are unknown coefficients; combining the value range d ∈ (0.1, 1) of the image depth of field with the value range β ∈ (0.1, 2.5) of the atmospheric scattering coefficient gives a ∈ (0, 1.5) and b ∈ (0, 0.5).
To further determine the unknown parameters a and b in the improved dynamic atmospheric scattering coefficient function, 100 non-uniform fog images were randomly collected from the Internet, and two objective evaluation indexes, the average gradient Igrad and the information entropy Ientropy, were used to determine the optimal values of a and b by evaluating image sharpness and detail richness. The specific steps are as follows:
1) sampling a and b with a step of 0.1 over the intervals (0, 1.5) and (0, 0.5) respectively to obtain 75 groups of atmospheric scattering coefficients;
2) processing the 100 non-uniform fog images with each of the 75 groups of atmospheric scattering coefficients and obtaining 75 groups of defogged images using formula (9);
3) calculating the average gradient Igrad and the information entropy Ientropy of each group of defogged images, then normalizing and averaging these evaluation parameters to obtain the comprehensive evaluation parameter (a sketch of these indexes is given after this list)
cep = 0.5*normal(Igrad) + 0.5*normal(Ientropy)    (7)
4) accumulating the comprehensive evaluation parameters of each group to obtain the final 75 groups of comprehensive evaluation parameters cep;
5) using the 75 groups of comprehensive evaluation parameters cep, plotting two-dimensional curves of cep against a and b; the experimental results are shown in Fig. 2.
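A minimal Python sketch of the comprehensive evaluation parameter cep of equation (7), assuming the usual definitions of the average gradient and the Shannon entropy of an 8-bit grey image, and normalising each index over the 75 candidate (a, b) groups:

```python
import numpy as np

def average_gradient(gray):
    """Average gradient of a grey image (sharpness measure)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def information_entropy(gray):
    """Shannon entropy of the grey-level histogram (detail-richness measure)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cep(grad_scores, entropy_scores):
    """Comprehensive evaluation parameter, equation (7).
    grad_scores, entropy_scores: one accumulated value per (a, b) group (75 each)."""
    def normal(x):
        x = np.asarray(x, dtype=np.float64)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    return 0.5 * normal(grad_scores) + 0.5 * normal(entropy_scores)
```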
The above experiment determined that the expression of the function of the improved dynamic atmospheric scattering coefficient is
[Equation (8): improved dynamic atmospheric scattering coefficient function β(x) with the optimal values of a and b determined by the above experiment; rendered as an image in the original]
Although a specific expression of the improved dynamic atmospheric scattering coefficient function has thus been obtained, practical verification shows that it still suffers from color distortion in the sky region. Therefore, in order to obtain a better improved dynamic atmospheric scattering coefficient function, an adaptive atmospheric scattering coefficient function suited to the image sky region is derived in reverse from the improved function according to a sky-region transmittance correction algorithm based on the color attenuation prior. The specific derivation is as follows:
the sky area of the image meets the following two characteristics:
1) the difference |v(x) - s(x)| between the image pixel brightness and saturation is maximal;
2) the transmittance t (x) is minimal.
It follows that the transmittance of the image sky region is defined as follows (T is 0.5):
t'(x) = max{1 - [v(x) - s(x)]*t(x), t(x)},  |t(x)/[v(x) - s(x)]| < T    (9)
Letting t'(x) = exp[-β'd(x)] and t(x) = exp[-βd(x)] according to formula (15) and substituting gives
exp[-β'd(x)] = max{1 - [v(x) - s(x)]*exp[-βd(x)], exp[-βd(x)]}    (10)
Taking the logarithm of both sides of formula (10) and rearranging gives
β'(x) = -ln(max{1 - [v(x) - s(x)]*exp[-βd(x)], exp[-βd(x)]}) / d(x)    (11)
Because a segmentation threshold would otherwise have to be set manually and automatic segmentation could not be achieved, the sky region is segmented automatically by a sky-region feature identification method, i.e. the two main judgment conditions for the sky region are:
max|Ic - Ic'| < 10,  c, c' ∈ {R, G, B}    (12)
[Equation (13): second sky-region judgment condition, comparing each color channel Ic with the atmospheric light Ac against the threshold 30; rendered as an image in the original]
where Ic and Ic' are two arbitrary R, G, B channels of the same pixel; for the sky region the R, G, B values are similar. On this basis, pixels satisfying (|Ic - Ic'| < 10) & (|Ac - Ic| < 30) are defined as the sky color-distortion region, and pixels satisfying (|Ic - Ic'| ≥ 10) & (|Ac - Ic| ≥ 30) are defined as the non-sky region, whose transmittance does not need to be adjusted. Thus, the final improved dynamic atmospheric scattering coefficient function is:
[Equation (14): final improved dynamic atmospheric scattering coefficient function, using β(x) in the non-sky region and the corrected coefficient β'(x) in the sky color-distortion region; rendered as an image in the original]
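A minimal Python sketch of the sky-region correction, following equation (9), condition (12) and the textual thresholds above. Since the exact forms of equations (13) and (14) are shown only as images, the correction is expressed here as an adjustment of the transmittance map rather than of β(x), and the 8-bit thresholds 10 and 30 are rescaled for an image in [0, 1]; this is an interpretation, not the patented formula itself.

```python
import numpy as np

def correct_transmittance(t, I, A, v, s, T=0.5):
    """Adjust t(x) in the detected sky colour-distortion region.
    t: transmittance map; I: colour image in [0, 1]; A: scalar atmospheric light;
    v, s: brightness and saturation maps in [0, 1]."""
    # Sky colour-distortion region: channels close to each other and close to
    # the atmospheric light, i.e. |Ic - Ic'| < 10 and |Ac - Ic| < 30 on the
    # 0-255 scale (hence the /255 rescaling here).
    channel_spread = I.max(axis=2) - I.min(axis=2)
    sky = (channel_spread < 10 / 255.0) & (np.abs(A - I).max(axis=2) < 30 / 255.0)

    # Apply the correction of equation (9) only where |t(x)/[v(x)-s(x)]| < T.
    diff = v - s
    weak = np.abs(t) < T * np.abs(diff)
    t_sky = np.maximum(1.0 - diff * t, t)

    t_corrected = t.copy()
    mask = sky & weak
    t_corrected[mask] = t_sky[mask]
    return t_corrected
```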
through a large number of experimental verifications, it is found that the corrected improved dynamic atmospheric scattering coefficient function can well solve the problem of color distortion of the sky area, and fig. 4 and 5 are comparison graphs of the effect before and after correction.
The atmospheric transmittance t(x) is then calculated by combining the improved dynamic atmospheric scattering coefficient function β(x) with the image depth of field dr(x), with the expression:
t(x) = exp[-βdr(x)]    (15)
Step 5, substituting the atmospheric light value A and the atmospheric transmittance t(x) into the atmospheric scattering model formula of the foggy image and calculating the fog-free image J(x) through the fog-free image recovery formula.
The formula of the atmospheric scattering model is as follows:
I(x)=J(x)t(x)+A[1-t(x)](16)
The fog-free image J(x) is calculated as follows:
J(x) = (I(x) - A) / max(t(x), t0) + A    (17)
In formula (17), t0 is the lower threshold set for the transmittance t(x) and takes the value 0.1 to avoid introducing noise.
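A minimal Python sketch of step 5, inverting the scattering model of formula (16) with the recovery formula (17), assuming a colour image and a scalar atmospheric light A both scaled to [0, 1]:

```python
import numpy as np

def recover(I, A, t, t0=0.1):
    """Fog-free image J(x) = (I(x) - A) / max(t(x), t0) + A."""
    t_clamped = np.maximum(t, t0)[..., None]   # lower bound t0 avoids noise amplification
    J = (I - A) / t_clamped + A
    return np.clip(J, 0.0, 1.0)
```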
The following are specific examples of the method of the present invention:
the invention obtains depth of field information d (x) of an image by a color attenuation prior algorithm proposed by Zhu et al, reversely deduces the transmittance t (x) of the image by using an improved dynamic atmospheric scattering coefficient function beta (x) and a formula t (x) exp [ -beta d (x) ], and finally restores a fog-free image by an atmospheric scattering model I (x) J (x) t (x) + A [1-t (x) ]. The following validation of the method of the invention was carried out:
1) color distortion phenomenon of sky area
Selecting a foggy day image as shown in figure 3; FIG. 4 is a result of processing before function correction; fig. 5 shows the result of the function correction.
As can be seen from the comparison between fig. 4 and fig. 5, the distortion of the sky area of the corrected image is significantly reduced, so that the defogging result is more natural and real. The experimental result of fig. 5 shows that the method based on the improved dynamic atmospheric scattering coefficient function can effectively eliminate the sky color distortion phenomenon.
2) Comparison of results
The original foggy image is shown in Fig. 6; FIG. 7 shows the He algorithm processing result, in which significant residual fog remains in the distant-view area; FIG. 8 shows the Meng algorithm processing result; FIG. 9 shows the Zhu algorithm processing result; FIG. 10 shows the processing result of the method of the invention. Compared with the other algorithms, the image restored by the method of the invention is real and natural, and its brightness is more consistent with human visual observation.
To further verify the actual restoration effect of the method, the defogging results for the image of Fig. 6 are evaluated objectively using indexes such as average gradient, contrast, information entropy, mean square error, peak signal-to-noise ratio and processing time; the results are shown in Table 1.
TABLE 1
[Table 1: objective quality evaluation of the defogging results (average gradient, contrast, information entropy, mean square error, peak signal-to-noise ratio, processing time); rendered as an image in the original]
The experimental data show that the method has certain superiority in image recovery reality, detail preservation and definition compared with other classical defogging algorithms.
In this way, the image defogging method based on the improved dynamic atmospheric scattering coefficient function of the invention can solve the inaccurate transmittance estimation caused by a constant atmospheric scattering coefficient and can effectively eliminate the color distortion of the image sky region, so that the defogging result is clearer and more thorough and the scenery colors are more natural and real. The specific implementation steps are: first, the minimum channel image is solved from the R, G, B color channels of the original image, and the atmospheric light value A of the foggy image is calculated by the quadtree segmentation method; then the image depth of field d(x) is calculated with the nonlinear color attenuation prior model, and the noise information in it is filtered out by minimum filtering, smoothing filtering and guided filtering to obtain the final image depth of field dr(x); finally, the improved dynamic atmospheric scattering coefficient function β(x) is combined with the final image depth of field dr(x) to calculate the atmospheric transmittance t(x), and the fog-free image is recovered through the atmospheric scattering model. The experimental results and the subjective and objective evaluations demonstrate the feasibility and effectiveness of the method.

Claims (9)

1. An image defogging method based on an improved dynamic atmospheric scattering coefficient function is characterized by comprising the following steps:
step 1, obtaining the minimum channel image Idark(x) from the red, green and blue channel values of the input foggy image I(x), and calculating the atmospheric light value A of the foggy image I(x) by a quadtree segmentation method;
step 2, carrying out color space domain transformation on the original foggy day image I (x), namely transforming the original foggy day image I (x) to an HSV (hue, saturation and value) color space from an RGB (red, green and blue) color space, and extracting a brightness component v (x) and a saturation component s (x) of the foggy day image I (x);
step 3, calculating the depth of field d(x) of the foggy image I(x) using a nonlinear color attenuation prior model, and filtering out the noise information in it by minimum filtering, smoothing filtering and guided filtering to obtain the final image depth of field dr(x);
step 4, combining the improved dynamic atmospheric scattering coefficient function β(x) and the image depth of field dr(x) to calculate the atmospheric transmittance t(x);
step 5, substituting the atmospheric light value A and the atmospheric transmittance t(x) into the atmospheric scattering model formula of the foggy image and calculating the fog-free image J(x) through the fog-free image recovery formula.
2. The image defogging method based on the improved dynamic atmospheric scattering coefficient function according to claim 1, wherein the minimum channel image Idark(x) in step 1 is expressed as:
Idark(x) = min{Iy(x) : y ∈ {R, G, B}}    (1)
where y denotes one of the three color channels R, G and B.
3. The image defogging method based on the improved dynamic atmospheric scattering coefficient function according to claim 1, wherein the specific process of step 1 is as follows:
step 1.1, setting the initial threshold T0 = 30 × 30 and obtaining the gray-scale image Igray of the input foggy image I(x);
step 1.2, applying median filtering to the gray-scale image Igray to obtain the filtered image Imedian;
step 1.3, dividing the image Imedian evenly into four rectangular regions by the quadtree segmentation method;
step 1.4, computing for each rectangular region a score equal to its average pixel value minus its standard deviation, and selecting the maximum score and the region corresponding to it;
step 1.5, comparing the area of the region corresponding to the maximum score with the initial threshold T0; if the area is larger than T0, returning to step 1.2; otherwise, taking the region corresponding to the maximum score as the target region;
step 1.6, calculating the average of the gray values of the target region; this average is the atmospheric light value A.
4. The method for defogging an image based on the improved dynamic atmospheric scattering coefficient function according to claim 1, wherein the expression of the nonlinear color attenuation prior model in the step 3 is as follows:
[Equation (2): nonlinear color attenuation prior model for the depth of field d(x) in terms of v(x) and s(x); rendered as an image in the original]
In formula (2), v(x) represents the brightness component of the foggy image I(x), s(x) represents the saturation component of the foggy image I(x), and the parameters are α = 4.99, θ0 = -0.29, θ1 = 0.83, θ2 = -0.16.
5. The image defogging method based on the improved dynamic atmospheric scattering coefficient function according to the claim 1, wherein the specific process of filtering the noise information in the step 3 through the minimum value filtering, the smoothing filtering and the guiding filtering is as follows:
step a, removing by minimum filtering the white objects that would otherwise be mistaken for distant scenery;
dmin(x) = min{d(y) : y ∈ Ω(x)}    (3)
where dmin(x) represents the image depth of field after minimum filtering, d(y) represents the depth of field to be filtered, Ω(x) represents the filtering region centered on pixel x, and the filtering structuring element is a 15 × 15 square matrix;
step b, applying smoothing and guided filtering to the minimum-filtered depth of field dmin(x) to obtain the final image depth of field dr(x):
[Equation (4): smoothing filter applied to dmin(x) to obtain derode(x); rendered as an image in the original]
dr(x) = guidedfilter(Igray, derode(x), r, esp)    (5)
where Igray is the gray-scale image of the original foggy image I(x), derode(x) is the image depth of field after smoothing filtering, r is the radius of the filtering window and takes the value 30, and esp is a regularization parameter and takes the value 0.01.
6. The image defogging method based on the improved dynamic atmospheric scattering coefficient function according to claim 1, wherein the expression of the improved dynamic atmospheric scattering coefficient function β(x) in step 4 is:
[Equation (6): improved dynamic atmospheric scattering coefficient function β(x); rendered as an image in the original]
7. the method for defogging an image based on the function of the improved dynamic atmospheric scattering coefficient according to claim 1, wherein the atmospheric transmittance t (x) is calculated in step 4 by the expression:
t(x)=exp[-βdr(x)](15)
8. the image defogging method based on the improved dynamic atmospheric scattering coefficient function as claimed in claim 1, wherein the atmospheric scattering model formula is as follows:
I(x) = J(x)t(x) + A[1 - t(x)]    (16)
9. the image defogging method based on the improved dynamic atmospheric scattering coefficient function according to claim 1, wherein the computation of the fog-free image J (x) is carried out by the following steps:
Figure FDA0002447921320000041
in the formula (9), t0The lower threshold value set for the transmittance t (x) is 0.1.
CN202010284296.XA 2020-04-13 2020-04-13 Image defogging method based on improved dynamic atmospheric scattering coefficient function Active CN111598791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010284296.XA CN111598791B (en) 2020-04-13 2020-04-13 Image defogging method based on improved dynamic atmospheric scattering coefficient function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010284296.XA CN111598791B (en) 2020-04-13 2020-04-13 Image defogging method based on improved dynamic atmospheric scattering coefficient function

Publications (2)

Publication Number Publication Date
CN111598791A true CN111598791A (en) 2020-08-28
CN111598791B CN111598791B (en) 2024-01-23

Family

ID=72182041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010284296.XA Active CN111598791B (en) 2020-04-13 2020-04-13 Image defogging method based on improved dynamic atmospheric scattering coefficient function

Country Status (1)

Country Link
CN (1) CN111598791B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365467A (en) * 2020-11-11 2021-02-12 武汉长江通信智联技术有限公司 Foggy image visibility estimation method based on single image depth estimation
CN112465715A (en) * 2020-11-25 2021-03-09 清华大学深圳国际研究生院 Image de-scattering method based on iterative optimization of atmospheric transmission matrix
CN113298729A (en) * 2021-05-24 2021-08-24 中国科学院长春光学精密机械与物理研究所 Rapid single image defogging method based on minimum value channel
CN113643323A (en) * 2021-08-20 2021-11-12 中国矿业大学 Target detection system under dust and fog environment of urban underground comprehensive pipe gallery
WO2022213372A1 (en) * 2021-04-09 2022-10-13 深圳市大疆创新科技有限公司 Image dehazing method and apparatus, and electronic device and computer-readable medium
CN117893440A (en) * 2024-03-15 2024-04-16 昆明理工大学 Image defogging method based on diffusion model and depth-of-field guidance generation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170316551A1 (en) * 2016-04-29 2017-11-02 Industry Foundation Of Chonnam National University System for image dehazing by modifying lower bound of transmission rate and method therefor
CN110211067A (en) * 2019-05-27 2019-09-06 哈尔滨工程大学 One kind being used for UUV Layer Near The Sea Surface visible images defogging method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170316551A1 (en) * 2016-04-29 2017-11-02 Industry Foundation Of Chonnam National University System for image dehazing by modifying lower bound of transmission rate and method therefor
CN110211067A (en) * 2019-05-27 2019-09-06 哈尔滨工程大学 One kind being used for UUV Layer Near The Sea Surface visible images defogging method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘策; 杨燕: "Single image dehazing algorithm based on adaptive wavelet fusion" *
胡雪薇; 李其申: "Color attenuation prior image dehazing with a dynamic atmospheric scattering coefficient" *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365467A (en) * 2020-11-11 2021-02-12 武汉长江通信智联技术有限公司 Foggy image visibility estimation method based on single image depth estimation
CN112365467B (en) * 2020-11-11 2022-07-19 武汉长江通信智联技术有限公司 Foggy image visibility estimation method based on single image depth estimation
CN112465715A (en) * 2020-11-25 2021-03-09 清华大学深圳国际研究生院 Image de-scattering method based on iterative optimization of atmospheric transmission matrix
WO2022111090A1 (en) * 2020-11-25 2022-06-02 清华大学深圳国际研究生院 Image de-scattering method based on atmospheric transmission matrix iterative optimization
CN112465715B (en) * 2020-11-25 2023-08-08 清华大学深圳国际研究生院 Image scattering removal method based on iterative optimization of atmospheric transmission matrix
WO2022213372A1 (en) * 2021-04-09 2022-10-13 深圳市大疆创新科技有限公司 Image dehazing method and apparatus, and electronic device and computer-readable medium
CN113298729A (en) * 2021-05-24 2021-08-24 中国科学院长春光学精密机械与物理研究所 Rapid single image defogging method based on minimum value channel
CN113298729B (en) * 2021-05-24 2022-04-26 中国科学院长春光学精密机械与物理研究所 Rapid single image defogging method based on minimum value channel
CN113643323A (en) * 2021-08-20 2021-11-12 中国矿业大学 Target detection system under dust and fog environment of urban underground comprehensive pipe gallery
CN113643323B (en) * 2021-08-20 2023-10-03 中国矿业大学 Target detection system under urban underground comprehensive pipe rack dust fog environment
CN117893440A (en) * 2024-03-15 2024-04-16 昆明理工大学 Image defogging method based on diffusion model and depth-of-field guidance generation
CN117893440B (en) * 2024-03-15 2024-05-14 昆明理工大学 Image defogging method based on diffusion model and depth-of-field guidance generation

Also Published As

Publication number Publication date
CN111598791B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN111598791B (en) Image defogging method based on improved dynamic atmospheric scattering coefficient function
Singh et al. Image dehazing using Moore neighborhood-based gradient profile prior
CN107103591B (en) Single image defogging method based on image haze concentration estimation
CN107301624B (en) Convolutional neural network defogging method based on region division and dense fog pretreatment
CN111292257B (en) Retinex-based image enhancement method in scotopic vision environment
CN108288258B (en) Low-quality image enhancement method under severe weather condition
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN110782407B (en) Single image defogging method based on sky region probability segmentation
CN108133462B (en) Single image restoration method based on gradient field region segmentation
CN111861896A (en) UUV-oriented underwater image color compensation and recovery method
CN109118450B (en) Low-quality image enhancement method under sand weather condition
Yu et al. Image and video dehazing using view-based cluster segmentation
CN111145105A (en) Image rapid defogging method and device, terminal and storage medium
CN114219732A (en) Image defogging method and system based on sky region segmentation and transmissivity refinement
CN108765337B (en) Single color image defogging processing method based on dark channel prior and non-local MTV model
CN112419163A (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN111489333B (en) No-reference night natural image quality evaluation method
CN112825189B (en) Image defogging method and related equipment
CN109949239B (en) Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image
CN110349113A (en) One kind being based on the improved adapting to image defogging method of dark primary priori
CN115619662A (en) Image defogging method based on dark channel prior
CN110889805B (en) Image defogging method based on dark channel compensation and atmospheric light value improvement
CN113012067B (en) Retinex theory and end-to-end depth network-based underwater image restoration method
CN111260589B (en) Retinex-based power transmission line monitoring image defogging method
CN110647843A (en) Face image processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240129

Address after: 4055C, 4th Floor, No. 82 Taoyu Road, Tianhe District, Guangzhou City, Guangdong Province, 510000

Patentee after: Guangzhou Zhiguo cloud Intellectual Property Operation Co.,Ltd.

Country or region after: China

Address before: 710000 No. B49, Xinda Zhongchuang space, 26th Street, block C, No. 2 Trading Plaza, South China City, international port district, Xi'an, Shaanxi Province

Patentee before: Xi'an Huaqi Zhongxin Technology Development Co.,Ltd.

Country or region before: China

Effective date of registration: 20240129

Address after: 710000 No. B49, Xinda Zhongchuang space, 26th Street, block C, No. 2 Trading Plaza, South China City, international port district, Xi'an, Shaanxi Province

Patentee after: Xi'an Huaqi Zhongxin Technology Development Co.,Ltd.

Country or region after: China

Address before: 710048 Shaanxi province Xi'an Beilin District Jinhua Road No. 5

Patentee before: XI'AN University OF TECHNOLOGY

Country or region before: China