CN110889805B - Image defogging method based on dark channel compensation and atmospheric light value improvement - Google Patents

Image defogging method based on dark channel compensation and atmospheric light value improvement

Info

Publication number
CN110889805B
CN110889805B (application CN201910949482.8A)
Authority
CN
China
Prior art keywords
image
dark channel
value
formula
dark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910949482.8A
Other languages
Chinese (zh)
Other versions
CN110889805A (en)
Inventor
胡辽林
高强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Caishunbao Technology Co ltd
Shenzhen Wanzhida Technology Co ltd
Original Assignee
Hainan Caishunbao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Caishunbao Technology Co ltd filed Critical Hainan Caishunbao Technology Co ltd
Priority to CN201910949482.8A priority Critical patent/CN110889805B/en
Publication of CN110889805A publication Critical patent/CN110889805A/en
Application granted granted Critical
Publication of CN110889805B publication Critical patent/CN110889805B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30192Weather; Meteorology
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image defogging method based on dark channel compensation and atmospheric light value improvement, which can effectively correct underestimated dark channel values and weaken the halo effect at the edges of the image scene, and can also accurately obtain the atmospheric light value of the image, so that the restored image is clearer and more natural and more detail is preserved. The specific implementation steps are as follows: first, a minimum value channel image is obtained from the R, G, B color channels of the original image, and a compensated dark channel image is obtained by means of a dark channel compensation model; then, the atmospheric light value of the image is calculated by applying graying, quadtree segmentation, and related steps to the original image; finally, the image transmittance is estimated by combining the dark channel image and the atmospheric light value, and the haze-free image is recovered through the atmospheric scattering model. Experimental results and subjective and objective evaluation demonstrate the feasibility and effectiveness of the method.

Description

Image defogging method based on dark channel compensation and atmospheric light value improvement
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image defogging method based on dark channel compensation and atmospheric light value improvement.
Background
In haze scenes, suspended atmospheric particles absorb and scatter light, which greatly reduces the visibility of outdoor images and causes degradation such as reduced contrast and color attenuation. This affects both human visual observation and the normal operation of machine vision equipment, so research on image defogging methods has significant practical value.
Image defogging methods fall mainly into two categories. The first is based on image enhancement and sharpens the image mainly by improving contrast and highlighting detail features; however, it ignores the physical mechanism of real fog scenes and easily causes color distortion. The second is image restoration based on an atmospheric scattering physical model, in which a restoration model is derived through a rigorous theoretical formulation; this approach removes fog thoroughly and produces realistic, natural results, but its higher time complexity makes it difficult to meet real-time requirements.
Defogging algorithms based on assumptions or prior knowledge have become the most widely used approach. For example, by analyzing statistics of outdoor haze-free images, He et al. discovered the dark channel prior (DCP): the prior is used to estimate the atmospheric light and the transmittance, and the transmittance map is refined with soft matting (SM) to restore a clear, fog-free image. However, soft matting greatly increases the time and space complexity of the dark channel prior algorithm. He et al. later proposed guided filtering to replace soft matting, which reduces the complexity of the algorithm, but noticeable residual fog remains in distant regions. Meng et al. proposed a defogging algorithm based on boundary constraints, which recovers the fog-free image by adding constraints on the parameters of the physical model and improves the restoration effect by sacrificing a small amount of detail, obtaining a clearer image at the cost of more expensive post-processing. Zhu et al. observed, in the HSV (Hue, Saturation, Value) color space, a linear relationship between the difference of brightness and saturation and the scene depth; based on this color attenuation prior, they established a mathematical model relating depth to image saturation and brightness and solved for the depth information by supervised learning, thereby achieving image defogging.
Among these algorithms, the dark channel prior method proposed by He et al. is the best known for its simplicity and effectiveness. In practical applications, however, problems such as an underestimated dark channel and inaccurate selection of the atmospheric light value directly affect the overall restoration quality.
Disclosure of Invention
The invention aims to provide an image defogging method based on dark channel compensation and atmospheric light value improvement, which can effectively overcome the shortcomings of the dark channel prior algorithm in the prior art.
The technical scheme adopted by the invention is that the image defogging method based on dark channel compensation and atmospheric light value improvement is implemented according to the following steps:
Step 1, obtain the minimum value channel image I_dark1 from the red, green, and blue color channel values of the input foggy image I(x), and then obtain the initial dark channel image I_dark2(x) by minimum value filtering;
Step 2, compute the dark channel compensation model from the minimum value channel image I_dark1 and the initial dark channel image I_dark2(x) to obtain the compensated dark channel image I_dark(x);
Step 3, calculate the atmospheric light value A of the foggy image I(x) using the improved quadtree segmentation method, and calculate the atmospheric transmittance t(x) from the compensated dark channel image I_dark(x);
Step 4, substitute the atmospheric light value A and the atmospheric transmittance t(x) into the atmospheric scattering model of the foggy image and compute the fog-free image J(x) through the fog-free image restoration formula.
The expression of the initial dark channel image I_dark2(x) in step 1 is:
I_dark2(x) = min_{y∈Ω(x)}( min_{c∈{r,g,b}} I^c(y) )   (1)
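As a concrete illustration of step 1, the following Python sketch computes the minimum value channel I_dark1 and the initial dark channel I_dark2(x) with a 15×15 minimum filter. It is a minimal sketch using NumPy and SciPy; the function names and the use of scipy.ndimage.minimum_filter are illustrative assumptions, not part of the claimed method.

import numpy as np
from scipy.ndimage import minimum_filter

def min_channel(img_rgb: np.ndarray) -> np.ndarray:
    # I_dark1: per-pixel minimum over the R, G, B channels (float image in [0, 1]).
    return img_rgb.min(axis=2)

def initial_dark_channel(img_rgb: np.ndarray, window: int = 15) -> np.ndarray:
    # I_dark2(x): minimum filtering of the minimum value channel over the window Omega(x).
    return minimum_filter(min_channel(img_rgb), size=window)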
The specific process of step 2 is as follows:
Step 2.1, take the weighted difference of the minimum value channel image I_dark1 and the initial dark channel image I_dark2(x) to identify and extract the halo region in the initial dark channel image; the specific expression is:
I_edge1 = α·I_dark1 - β·I_dark2   (2);
In formula (2), I_edge1 denotes the halo region map before correction, and α and β are weighting parameters;
Step 2.2, correct the extracted halo image by morphological erosion and a weighted fusion operation, as given by formula (3); substituting formula (2) into formula (3) yields formula (4).
In formula (4), I_edge2(x) denotes the corrected halo region, ξ_1 and ξ_2 are weighting coefficients, Ω(x) is a filter window centered at x, and a 15×15 square matrix is used as the filtering structural element;
Step 2.3, fuse the corrected halo image with the original dark channel image by linear fusion to obtain the dark channel compensation model:
I_dark = I_dark2 + I_edge2   (5);
Substituting formula (4) into formula (5) yields the dark channel compensation model, formula (6).
In formula (6), I_dark denotes the fused dark channel image, C_1, C_2, C_3, and C_4 are linear weighting coefficients, and ε(x) is a random variable representing the random error of the model.
The linear weighting coefficients C_1, C_2, C_3, and C_4 in step 2.3 are calculated as follows:
Let I_0(x), I_1(x), I_2(x), I_3(x), and I_4(x) denote, respectively, I_dark(x), I_dark1(x), I_dark2(x), and the two remaining terms of formula (6). Combining ε(x) ~ N(0, σ²) with the properties of the normal distribution gives:
I_0(x) ~ N(C_1·I_1(x) + C_2·I_2(x) + C_3·I_3(x) + C_4·I_4(x), σ²)   (7)
Assuming that the errors of all pixels are independent, the joint probability density function is constructed as formula (8), where i denotes a pixel point. Taking the logarithm of both sides of formula (8) gives formula (9).
The σ that maximizes formula (9) is given by formula (10). Assuming σ is constant, maximizing this expression is equivalent to minimizing formula (11).
A gradient descent algorithm is used to find the minimum of formula (11); taking the partial derivatives of formula (11) with respect to each parameter gives formulas (12) to (15).
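Since formulas (3), (4), and (6) appear only in the patent drawings, the following Python sketch of step 2 rests on explicit assumptions: the corrected halo map is taken as a weighted fusion of the extracted halo map and its morphological erosion, and the compensated dark channel as a linear combination of four terms with coefficients C_1 to C_4. The α, β, ξ_1, ξ_2 values and the exact choice of the four regressor terms are illustrative; the 15×15 structuring element and the trained coefficients follow the text.

import numpy as np
from scipy.ndimage import grey_erosion

def compensated_dark_channel(i_dark1, i_dark2,
                             alpha=1.0, beta=1.0,   # weighting parameters of formula (2); values assumed
                             xi1=0.5, xi2=0.5,      # weighted-fusion coefficients; values assumed
                             coeffs=(0.91098, -0.12076, 0.12893, 0.03144)):
    # Formula (2): halo region map before correction.
    i_edge1 = alpha * i_dark1 - beta * i_dark2
    # Assumed form of formulas (3)-(4): morphological erosion plus weighted fusion,
    # with a 15x15 square structuring element.
    eroded = grey_erosion(i_edge1, size=(15, 15))
    i_edge2 = xi1 * i_edge1 + xi2 * eroded
    # Assumed linear form of formula (6): compensated dark channel as a weighted sum
    # of four terms with the trained coefficients C1..C4 reported in the examples.
    c1, c2, c3, c4 = coeffs
    i_dark = c1 * i_dark1 + c2 * i_dark2 + c3 * eroded + c4 * i_edge2
    return np.clip(i_dark, 0.0, 1.0)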
The specific process of calculating the atmospheric light value A of the foggy image I(x) with the improved quadtree segmentation method is as follows:
Step 3.1, set the initial threshold T_0 and convert the input foggy image I(x) to the grayscale image I_gray;
Step 3.2, apply median filtering to the grayscale image I_gray to obtain the filtered image I_median;
Step 3.3, divide the image I_median into four equal rectangular regions and four adjacent regions using the quadtree segmentation method;
Step 3.4, calculate the average pixel value of each rectangular region, subtract the standard deviation of the region from its average pixel value to obtain a score, and select the maximum score and its corresponding region;
Step 3.5, re-mark the four adjacent regions;
Step 3.6, rotate the marked four adjacent regions counterclockwise and recombine them into an image;
Step 3.7, for each region of step 3.6, subtract the standard deviation of the region from its average pixel value to obtain a score, and select the maximum score and its corresponding region;
Step 3.8, compare the maximum score of step 3.4 with the maximum score of step 3.7 and select the region with the larger score;
Step 3.9, repeat steps 3.2 and 3.3 until the region size is smaller than the initial threshold T_0; this region is the target region;
Step 3.10, take the average of the gray values in the target region; this average is the atmospheric light value A.
The atmospheric scattering model formula in step 4 is:
I(x)=J(x)t(x)+A[1-t(x)] (16)。
The process of calculating the fog-free image J(x) in step 4 is:
J(x) = (I(x) - A) / max(t(x), t_0) + A   (17);
In formula (17), t_0 is the lower threshold set for the transmittance t(x) and takes the value 0.1.
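To make step 4 concrete, here is a minimal Python sketch of restoration formula (17) with the lower transmittance threshold t_0 = 0.1; the atmospheric light A and the transmittance map t(x) are assumed to come from steps 2 and 3, and the function name is illustrative.

import numpy as np

def recover_scene(i_hazy: np.ndarray, t: np.ndarray, a: float, t0: float = 0.1) -> np.ndarray:
    # Invert the scattering model I = J*t + A*(1 - t), clamping t at t0 as in formula (17).
    t_clamped = np.maximum(t, t0)[..., np.newaxis]  # broadcast over the color channels
    j = (i_hazy - a) / t_clamped + a
    return np.clip(j, 0.0, 1.0)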
The beneficial effects of the invention are as follows:
1) The dark channel compensation model eliminates the image halo phenomenon caused by underestimation of the dark channel value in the conventional dark channel prior defogging algorithm;
2) The quadtree-based atmospheric light selection adds a neighboring-region comparison strategy, so that the atmospheric light value is selected more accurately.
Drawings
FIG. 1 is a flow chart of an image defogging method based on dark channel compensation and atmospheric light value improvement according to the present invention;
FIG. 2 (a) is a block diagram of a prior art quad-tree segmentation method;
FIG. 2 (b) shows the adjacent regions missed by the quadtree segmentation method;
FIG. 2 (c) is a combined image of areas segmented by the quadtree segmentation method;
FIG. 3 (a) is a foggy day image;
FIG. 3 (b) is an original dark channel image;
FIG. 3 (c) is a compensated dark channel image;
FIG. 3 (d) shows the processing result with the original dark channel;
FIG. 3 (e) shows the processing result with the compensated dark channel;
FIG. 4 (a) is a foggy day image;
FIG. 4 (b) shows the atmospheric light value region selected by the quad-tree segmentation method;
FIG. 4 (c) is an atmospheric light value region selected for improving the quad-tree segmentation method;
FIG. 4 (d) is a processing result of the quad-tree segmentation method;
FIG. 4 (e) is a processing result of the improved quadtree splitting method;
FIG. 5 (a) is an original foggy day image;
FIG. 5 (b) is the He algorithm processing result;
FIG. 5 (c) is the Meng algorithm processing result;
FIG. 5 (d) shows the result of the Zhu algorithm;
FIG. 5 (e) shows the result of the treatment of the method of the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
The dark channel prior theory originates from a defogging approach proposed by He et al. in 2009. By observing a large number of outdoor haze-free images, they found that in most non-sky regions of a haze-free image, at least one of the red, green, and blue color channels has a very low pixel value that approaches zero, expressed as
J_dark(x) = min_{y∈Ω(x)}( min_{c∈{r,g,b}} J^c(y) ) → 0
where J^c(y) denotes one of the three color channels of the image and Ω(x) denotes a square filter window centered at x, typically of size 15×15. This formula is the dark channel prior condition.
The dark channel prior algorithm is based on the atmospheric scattering model, whose mathematical expression is
I(x) = J(x)t(x) + A[1 - t(x)]
where I(x) denotes the hazy image, J(x) the haze-free image, t(x) the transmittance, and A the atmospheric light value. J(x)t(x) is the direct attenuation term, i.e., the amount of light emitted by the target object that reaches the imaging device after atmospheric scattering attenuation, and A[1 - t(x)] is the airlight term, i.e., the amount of atmospheric light entering the device after atmospheric scattering.
To estimate the atmospheric transmittance, the image is processed window by window. Assuming the atmospheric light value A is known and the transmittance t(x) is locally constant, applying two minimum value filtering operations to both sides of the scattering model gives
min_{y∈Ω(x)}( min_{c} I^c(y)/A ) = t(x)·min_{y∈Ω(x)}( min_{c} J^c(y)/A ) + 1 - t(x)
Using the prior condition that the dark channel value of a haze-free image tends to zero, the transmittance is obtained as
t(x) = 1 - μ·min_{y∈Ω(x)}( min_{c} I^c(y)/A )
where the parameter μ is introduced so that the restored image remains closer to the real scene; the invention takes μ = 0.95.
When the transmittance t(x) is very small, J(x) becomes very large, which makes the whole result appear over-bright. To avoid this, the invention sets a lower threshold t_0 = 0.1 for the transmittance t(x); the fog-free image restoration formula is then expressed as
J(x) = (I(x) - A) / max(t(x), t_0) + A
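The transmittance estimate above reduces to a few lines of code. The sketch below assumes a scalar atmospheric light value A and, following the invention, evaluates the estimate on the compensated dark channel rather than re-filtering I(x)/A; μ = 0.95 as stated, and the function name is illustrative.

import numpy as np

def estimate_transmittance(dark_channel: np.ndarray, a: float, mu: float = 0.95) -> np.ndarray:
    # t(x) = 1 - mu * dark(x) / A, where dark(x) is the (compensated) dark channel.
    t = 1.0 - mu * dark_channel / a
    return np.clip(t, 0.0, 1.0)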
The image defogging method based on dark channel compensation and atmospheric light value improvement of the invention is implemented as shown in FIG. 1 and specifically comprises the following steps:
Step 1, obtain the minimum value channel image I_dark1 from the red, green, and blue color channel values of the input foggy image I(x), and then obtain the initial dark channel image I_dark2(x) by minimum value filtering;
The expression of the initial dark channel image I_dark2(x) is:
I_dark2(x) = min_{y∈Ω(x)}( min_{c∈{r,g,b}} I^c(y) )   (1)
Step 2, compute the dark channel compensation model from the minimum value channel image I_dark1 and the initial dark channel image I_dark2(x) to obtain the compensated dark channel image I_dark(x);
The specific process is as follows:
Step 2.1, take the weighted difference of the minimum value channel image I_dark1 and the initial dark channel image I_dark2(x) to identify and extract the halo region in the initial dark channel image; the specific expression is:
I_edge1 = α·I_dark1 - β·I_dark2   (2);
In formula (2), I_edge1 denotes the halo region map before correction, and α and β are weighting parameters;
Step 2.2, correct the extracted halo image by morphological erosion and a weighted fusion operation, as given by formula (3); substituting formula (2) into formula (3) yields formula (4).
In formula (4), I_edge2(x) denotes the corrected halo region, ξ_1 and ξ_2 are weighting coefficients, Ω(x) is a filter window centered at x, and a 15×15 square matrix is used as the filtering structural element;
Step 2.3, fuse the corrected halo image with the original dark channel image by linear fusion to obtain the dark channel compensation model:
I_dark = I_dark2 + I_edge2   (5);
Substituting formula (4) into formula (5) yields the dark channel compensation model, formula (6).
In formula (6), I_dark denotes the fused dark channel image, C_1, C_2, C_3, and C_4 are linear weighting coefficients, and ε(x) is a random variable representing the random error of the model;
The linear weighting coefficients C_1, C_2, C_3, and C_4 in step 2.3 are calculated as follows:
Let I_0(x), I_1(x), I_2(x), I_3(x), and I_4(x) denote, respectively, I_dark(x), I_dark1(x), I_dark2(x), and the two remaining terms of formula (6). Combining ε(x) ~ N(0, σ²) with the properties of the normal distribution gives:
I_0(x) ~ N(C_1·I_1(x) + C_2·I_2(x) + C_3·I_3(x) + C_4·I_4(x), σ²)   (7)
Assuming that the errors of all pixels are independent, the joint probability density function is constructed as formula (8), where i denotes a pixel point. Taking the logarithm of both sides of formula (8) gives formula (9).
The σ that maximizes formula (9) is given by formula (10). Assuming σ is constant, maximizing this expression is equivalent to minimizing formula (11).
A gradient descent algorithm is used to find the minimum of formula (11); taking the partial derivatives of formula (11) with respect to each parameter gives formulas (12) to (15).
Step 3, calculate the atmospheric light value A of the foggy image I(x) using the improved quadtree segmentation method, and calculate the atmospheric transmittance t(x) from the compensated dark channel image I_dark(x);
The specific process of calculating the atmospheric light value A of the foggy image I(x) with the improved quadtree segmentation method is as follows (a code sketch is given after step 3.10):
Step 3.1, set the initial threshold T_0 to 30×30 and convert the input foggy image I(x) to the grayscale image I_gray;
Step 3.2, apply median filtering to the grayscale image I_gray to obtain the filtered image I_median;
Step 3.3, divide the image I_median into four equal rectangular regions and four adjacent regions using the quadtree segmentation method, as shown in FIG. 2 (a);
Step 3.4, calculate the average pixel value of each rectangular region, subtract the standard deviation of the region from its average pixel value to obtain a score, and select the maximum score and its corresponding region;
Step 3.5, re-mark the four adjacent regions, as shown in FIG. 2 (b);
Step 3.6, rotate the marked four adjacent regions counterclockwise and recombine them into an image, as shown in FIG. 2 (c);
Step 3.7, for each region of step 3.6, subtract the standard deviation of the region from its average pixel value to obtain a score, and select the maximum score and its corresponding region;
Step 3.8, compare the maximum score of step 3.4 with the maximum score of step 3.7 and select the region with the larger score;
Step 3.9, repeat steps 3.2 and 3.3 until the region size is smaller than the initial threshold T_0; this region is the target region;
Step 3.10, take the average of the gray values in the target region; this average is the atmospheric light value A.
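The code sketch referred to above is given here. It keeps the core idea of the improved quadtree search, scoring each sub-region by its mean minus its standard deviation, also scoring the re-marked adjacent regions, and recursing into the better candidate until the region is smaller than T_0, but the construction of the adjacent regions only approximates FIG. 2, and the slicing, helper names, and median filter size are assumptions.

import numpy as np
from scipy.ndimage import median_filter

def _score(region: np.ndarray) -> float:
    # Score of a region: average pixel value minus its standard deviation (steps 3.4 and 3.7).
    return float(region.mean() - region.std())

def estimate_atmospheric_light(img_rgb: np.ndarray, t0: int = 30) -> float:
    # Improved quadtree search for the atmospheric light value A (steps 3.1-3.10).
    gray = img_rgb.mean(axis=2)              # step 3.1: grayscale image I_gray
    region = median_filter(gray, size=3)     # step 3.2: median-filtered image I_median
    while min(region.shape) > t0:            # step 3.9: stop once the region is smaller than T_0
        h2, w2 = region.shape[0] // 2, region.shape[1] // 2
        # Step 3.3: four equal rectangular quadrants.
        quads = [region[:h2, :w2], region[:h2, w2:], region[h2:, :w2], region[h2:, w2:]]
        # Assumed construction of the four adjacent regions: blocks centred on the
        # quadrant boundaries (re-marked and recombined as in FIG. 2 (b)-(c)).
        h4, w4 = h2 // 2, w2 // 2
        adjacents = [region[:h2, w4:w4 + w2],       # top middle
                     region[h4:h4 + h2, :w2],       # left middle
                     region[h4:h4 + h2, w2:],       # right middle
                     region[h2:, w4:w4 + w2]]       # bottom middle
        # Steps 3.4-3.8: best quadrant, best adjacent block, then the better of the two.
        best_quad = max(quads, key=_score)
        best_adj = max(adjacents, key=_score)
        region = best_quad if _score(best_quad) >= _score(best_adj) else best_adj
    return float(region.mean())              # step 3.10: A is the mean gray value of the target region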
Step 4, substitute the atmospheric light value A and the atmospheric transmittance t(x) into the atmospheric scattering model of the foggy image and compute the fog-free image J(x) through the fog-free image restoration formula;
The atmospheric scattering model formula in step 4 is:
I(x)=J(x)t(x)+A[1-t(x)] (16)。
The process of calculating the fog-free image J(x) is:
J(x) = (I(x) - A) / max(t(x), t_0) + A   (17);
In formula (17), t_0 is the lower threshold set for the transmittance t(x) and takes the value 0.1.
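Putting the pieces together, a possible end-to-end pipeline built from the sketches given earlier (it assumes those helper functions are in scope; the name defog and the [0, 1] float image convention are illustrative):

import numpy as np

def defog(img_rgb: np.ndarray) -> np.ndarray:
    # End-to-end sketch: compensated dark channel -> atmospheric light A -> t(x) -> restoration.
    i_dark1 = min_channel(img_rgb)                         # step 1
    i_dark2 = initial_dark_channel(img_rgb)                # step 1
    i_dark = compensated_dark_channel(i_dark1, i_dark2)    # step 2
    a = estimate_atmospheric_light(img_rgb, t0=30)         # step 3: improved quadtree, T_0 = 30
    t = estimate_transmittance(i_dark, a)                  # step 3: formula with mu = 0.95
    return recover_scene(img_rgb, t, a)                    # step 4: formula (17), t_0 = 0.1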
Examples
In the invention, the image transmittance is first obtained with the color attenuation prior algorithm proposed by Zhu et al., and the dark channel that is not underestimated is then derived from it in reverse. 100 training samples containing 24 million pixels are selected to train the linear model, and the best training result is C_1 = 0.91098, C_2 = -0.12076, C_3 = 0.12893, C_4 = 0.03144.
1) Halo phenomenon of the dark channel prior
An actual image is processed, and the results before and after dark channel compensation are compared: the selected foggy image is shown in FIG. 3 (a); FIG. 3 (b) is the original dark channel image; FIG. 3 (c) is the compensated dark channel image; FIG. 3 (d) shows the processing result with the original dark channel; FIG. 3 (e) shows the processing result with the compensated dark channel.
Comparing FIG. 3 (c) with FIG. 3 (b) shows that the compensated dark channel significantly increases the pixel values at the edges of the image and preserves the detail features of the image scene. The experimental result in FIG. 3 (e) shows that the dark channel compensation model proposed by the invention can effectively remove the halo effect.
2) Improvement of atmospheric light value
An actual image is processed, and the results before and after improving the quadtree method are compared: FIG. 4 (a) is the foggy image; FIG. 4 (b) shows the atmospheric light region selected by the original quadtree segmentation method; FIG. 4 (c) shows the atmospheric light region selected by the improved quadtree segmentation method; FIG. 4 (d) is the processing result of the original quadtree segmentation method; FIG. 4 (e) is the processing result of the improved quadtree segmentation method. As FIG. 4 (a) to FIG. 4 (e) show, the atmospheric light value selected by the original quadtree segmentation method is lower than the real atmospheric light value, so defogging of the distant region in the restored image is incomplete.
3) Comparison of results
The image in FIG. 5 (a) is selected as the original foggy image; FIG. 5 (b) is the result of the He algorithm, in which obvious residual fog remains in the distant region; FIG. 5 (c) is the result of the Meng algorithm; FIG. 5 (d) is the result of the Zhu algorithm; FIG. 5 (e) is the result of the method of the invention. Compared with the other algorithms, the image restored by the proposed method is realistic and natural, and its brightness better matches human visual perception.
To further verify the actual restoration effect of the method, the above 4 groups of images are objectively evaluated with several indexes, including average gradient, contrast, information entropy, and processing time; the specific results are shown in Tables 1 to 4.
Table 1
Table 2
Table 3
Table 4
The above experimental data show that the method has certain advantages in image detail, gray-level contrast, and algorithm processing time, although its effect on image sharpness is slightly inferior to the Meng and Zhu algorithms.
In summary, the image defogging method based on dark channel compensation and atmospheric light value improvement of the invention can effectively correct underestimated dark channel values and weaken the halo effect at the edges of the image scene, and can also accurately obtain the atmospheric light value of the image, so that the restored image is clearer and more natural and more detail is preserved. The specific implementation steps are: first, a minimum value channel image is obtained from the R, G, B color channels of the original image, and a compensated dark channel image is obtained by means of the dark channel compensation model; then, the atmospheric light value of the image is calculated by applying graying, quadtree segmentation, and related steps to the original image; finally, the image transmittance is estimated by combining the dark channel image and the atmospheric light value, and the haze-free image is recovered through the atmospheric scattering model. Experimental results and subjective and objective evaluation demonstrate the feasibility and effectiveness of the method.

Claims (7)

1. An image defogging method based on dark channel compensation and atmospheric light value improvement is characterized by comprising the following steps:
Step 1, obtain the minimum value channel image I_dark1 from the red, green, and blue color channel values of the input foggy image I(x), and then obtain the initial dark channel image I_dark2(x) by minimum value filtering;
Step 2, compute the dark channel compensation model from the minimum value channel image I_dark1 and the initial dark channel image I_dark2(x) to obtain the compensated dark channel image I_dark(x);
Step 3, calculate the atmospheric light value A of the foggy image I(x) using the improved quadtree segmentation method, and calculate the atmospheric transmittance t(x) from the compensated dark channel image I_dark(x);
Step 4, substitute the atmospheric light value A and the atmospheric transmittance t(x) into the atmospheric scattering model of the foggy image and compute the fog-free image J(x) through the fog-free image restoration formula.
2. The image defogging method based on dark channel compensation and atmospheric light value improvement according to claim 1, wherein the expression of the initial dark channel image I_dark2(x) in step 1 is:
I_dark2(x) = min_{y∈Ω(x)}( min_{c∈{r,g,b}} I^c(y) )   (1)
3. The image defogging method based on dark channel compensation and atmospheric light value improvement according to claim 1, wherein the specific process of step 2 is as follows:
Step 2.1, take the weighted difference of the minimum value channel image I_dark1 and the initial dark channel image I_dark2(x) to identify and extract the halo region in the initial dark channel image; the specific expression is:
I_edge1 = α·I_dark1 - β·I_dark2   (2);
In formula (2), I_edge1 denotes the halo region map before correction, and α and β are weighting parameters;
Step 2.2, correct the extracted halo image by morphological erosion and a weighted fusion operation, as given by formula (3); substituting formula (2) into formula (3) yields formula (4).
In formula (4), I_edge2(x) denotes the corrected halo region, ξ_1 and ξ_2 are weighting coefficients, Ω(x) is a filter window centered at x, and a 15×15 square matrix is used as the filtering structural element;
Step 2.3, fuse the corrected halo image with the original dark channel image by linear fusion to obtain the dark channel compensation model:
I_dark = I_dark2 + I_edge2   (5);
Substituting formula (4) into formula (5) yields the dark channel compensation model, formula (6).
In formula (6), I_dark denotes the fused dark channel image, C_1, C_2, C_3, and C_4 are linear weighting coefficients, and ε(x) is a random variable representing the random error of the model.
4. The image defogging method based on dark channel compensation and atmospheric light value improvement according to claim 3, wherein the linear weighting coefficients C_1, C_2, C_3, and C_4 in step 2.3 are calculated as follows:
Let I_0(x), I_1(x), I_2(x), I_3(x), and I_4(x) denote, respectively, I_dark(x), I_dark1(x), I_dark2(x), and the two remaining terms of formula (6). Combining ε(x) ~ N(0, σ²) with the properties of the normal distribution gives:
I_0(x) ~ N(C_1·I_1(x) + C_2·I_2(x) + C_3·I_3(x) + C_4·I_4(x), σ²)   (7)
Assuming that the errors of all pixels are independent, the joint probability density function is constructed as formula (8), where i denotes a pixel point. Taking the logarithm of both sides of formula (8) gives formula (9).
The σ that maximizes formula (9) is given by formula (10). Assuming σ is constant, maximizing this expression is equivalent to minimizing formula (11).
A gradient descent algorithm is used to find the minimum of formula (11); taking the partial derivatives of formula (11) with respect to each parameter gives formulas (12) to (15).
5. the image defogging method based on dark channel compensation and atmosphere light value improvement according to claim 1, wherein the specific process of calculating the atmosphere light value a of the foggy day image I (x) by combining the improved quadtree segmentation method in step 3 is as follows:
step 3.1, according to the initial threshold T 0 Obtaining a gray level image I from the input foggy day image I (x) gray
Step 3.2, for the gray scale image I gray Obtaining a filtered image I using median filtering median
Step 3.3, the image I is processed median The method comprises the steps of equally dividing the four rectangular areas and four adjacent areas by a quadtree segmentation method;
step 3.4, calculating an average pixel value of each rectangular region, subtracting the standard deviation of the region from the average pixel value to obtain a score, and selecting the maximum score and the region corresponding to the maximum score;
3.5, re-marking the four adjacent areas;
step 3.6, rotating the marked four adjacent areas anticlockwise to recombine into an image;
step 3.7, the average pixel value of each area in the step 3.6 is obtained to subtract the standard deviation of the corresponding area, and the maximum score and the area corresponding to the maximum score are selected;
step 3.8, comparing the maximum score in the step 3.4 with the maximum score in the step 3.7, and selecting the area with the maximum score;
step 3.9, repeating steps 3.2 and 3.3 until the region size is less than the initial threshold T 0 The region is a target region;
and 3.10, obtaining an average value of the gray value of the target area, wherein the average value is the atmospheric light value A.
6. The image defogging method based on dark channel compensation and atmospheric light value improvement according to claim 1, wherein the atmospheric scattering model formula in step 4 is:
I(x)=J(x)t(x)+A[1-t(x)] (16)。
7. The image defogging method based on dark channel compensation and atmospheric light value improvement according to claim 1, wherein the process of calculating the fog-free image J(x) in step 4 is:
J(x) = (I(x) - A) / max(t(x), t_0) + A   (17);
In formula (17), t_0 is the lower threshold set for the transmittance t(x) and takes the value 0.1.
CN201910949482.8A 2019-10-08 2019-10-08 Image defogging method based on dark channel compensation and atmospheric light value improvement Active CN110889805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910949482.8A CN110889805B (en) 2019-10-08 2019-10-08 Image defogging method based on dark channel compensation and atmospheric light value improvement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910949482.8A CN110889805B (en) 2019-10-08 2019-10-08 Image defogging method based on dark channel compensation and atmospheric light value improvement

Publications (2)

Publication Number Publication Date
CN110889805A CN110889805A (en) 2020-03-17
CN110889805B (en) 2023-08-18

Family

ID=69746050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910949482.8A Active CN110889805B (en) 2019-10-08 2019-10-08 Image defogging method based on dark channel compensation and atmospheric light value improvement

Country Status (1)

Country Link
CN (1) CN110889805B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962872B (en) * 2020-07-21 2023-08-18 四川大学 Dual-channel joint optimization night image defogging method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9355439B1 (en) * 2014-07-02 2016-05-31 The United States Of America As Represented By The Secretary Of The Navy Joint contrast enhancement and turbulence mitigation method
CN104299198A (en) * 2014-10-14 2015-01-21 嘉应学院 Fast image defogging method based on dark channels of pixels
CN109785262A (en) * 2019-01-11 2019-05-21 闽江学院 Image defogging method based on dark channel prior and adaptive histogram equalization
CN109919879B (en) * 2019-03-13 2022-11-25 重庆邮电大学 Image defogging method based on dark channel prior and bright channel prior

Also Published As

Publication number Publication date
CN110889805A (en) 2020-03-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230724

Address after: Room 003, Room 423, Floor 4, Building 5, Yabulun Industrial Park, Yazhou District, Sanya, Hainan 572000

Applicant after: Hainan Caishunbao Technology Co.,Ltd.

Address before: 518000 1002, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Applicant before: Shenzhen Wanzhida Technology Co.,Ltd.

Effective date of registration: 20230724

Address after: 518000 1002, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Wanzhida Technology Co.,Ltd.

Address before: 710048 Shaanxi province Xi'an Beilin District Jinhua Road No. 5

Applicant before: XI'AN University OF TECHNOLOGY

GR01 Patent grant