CN109087254B - Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method - Google Patents

Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method

Info

Publication number
CN109087254B
CN109087254B · Application CN201810385966.XA
Authority
CN
China
Prior art keywords
image
dissipation function
gray
atmospheric
unmanned aerial
Prior art date
Legal status
Active
Application number
CN201810385966.XA
Other languages
Chinese (zh)
Other versions
CN109087254A (en)
Inventor
黄鹤
郭璐
王会峰
杜晶晶
宋京
胡凯益
许哲
惠晓滨
黄莺
任思奇
周卓彧
Current Assignee
Dragon Totem Technology Hefei Co ltd
Shenzhen Dragon Totem Technology Achievement Transformation Co ltd
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University
Priority to CN201810385966.XA
Publication of CN109087254A
Application granted
Publication of CN109087254B

Classifications

    • G06T 5/73 — Image enhancement or restoration: deblurring; sharpening
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06T 5/92 — Dynamic range modification of images based on global image properties
    • G06T 7/11 — Region-based segmentation
    • G06T 7/136 — Segmentation or edge detection involving thresholding
    • G06T 2207/20028 — Bilateral filtering
    • G06T 2207/30181 — Earth observation


Abstract

The invention discloses an adaptive processing method for the haze sky and white areas of unmanned aerial vehicle aerial images. The method first obtains the aerial fog-containing image to be processed; then obtains its dark channel image and grayscale image; calculates the atmospheric light value A of the image from the dark channel image; takes the dark channel image as a rough estimate of the atmospheric dissipation function and, from the grayscale image, solves an adaptive threshold ThrB capable of dividing the close view from the sky or white area; then processes the image region by region, calculates a correction coefficient, and substitutes it into an improved atmospheric dissipation function formula to obtain the improved atmospheric dissipation function; refines the improved atmospheric dissipation function by bilateral filtering; then obtains the image transmittance t(x) from the transmittance estimation formula; and finally restores the fog-free image according to the image degradation model.

Description

Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a self-adaptive processing method for haze sky and white areas of aerial images of unmanned aerial vehicles.
Background
At present, a research hotspot in the unmanned aerial vehicle field is applying UAVs to mapping, target recognition, geological disaster prevention and control, and similar fields, and the tasks a UAV can execute depend to a great extent on aerial images of high imaging quality. As weather conditions deteriorate, aerial images obtained in haze are blurred and of low definition; the basic information and features of the captured images are severely distorted and damaged, and useful information cannot be extracted. The demand for defogging UAV aerial images is therefore growing.
In recent years, single-image defogging methods have improved greatly. They can be divided into two types: foggy-image enhancement based on image processing, and foggy-image restoration based on a physical model. Enhancement methods ignore the cause of image degradation; they highlight image detail by enhancing the definition of the foggy image and restore the original image to some extent, but because no degradation model is considered, the effect is limited. Restoration methods model the physical degradation process of the foggy image and invert it to obtain a fog-free image; compared with image-enhancement defogging algorithms, defogging based on a physical model is more natural and loses less information. He et al. constructed constraints from fog-free outdoor images and estimated the parameters of the atmospheric scattering model from a single image to achieve defogging, proposing the concept of the "dark channel": in any local window of a non-sky area of a fog-free outdoor image, the minimum value over the RGB channels of all pixels is close to 0. The defogging effect of this method is ideal, but sky or bright white areas do not satisfy the dark primary color prior, so color distortion generally appears in these areas after defogging; solving the color distortion of sky or bright white areas has therefore become an important task in image restoration.
Aiming at the defects of existing algorithms, researchers have studied the key parameters of the atmospheric scattering model: a bilateral filter or guided filter with good edge-preserving behavior can estimate the atmospheric dissipation function accurately, retain edge detail, obtain a natural transition between edges and smooth areas, and prevent halo effects or residual fog after restoration, but the distortion of bright areas such as sky or white remains unsolved. Some works repair the transmittance with a fixed threshold, which protects the restoration of sky fog images to some extent but easily causes insufficient defogging of images without sky. Others segment the sky area to avoid color distortion of the defogged image, but taking the maximum connected region as the identified sky easily misses some sky blocks, causing color distortion in the missed parts. These algorithms do not fundamentally solve defogging color distortion, so algorithms for improving the defogging of haze sky or white regions still need improvement.
Disclosure of Invention
The invention aims to provide a self-adaptive processing method for haze sky and white areas of aerial images of unmanned aerial vehicles, which overcomes the defects in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
an unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method comprises the following steps:
step 1: acquiring an aerial image I (x) of the unmanned aerial vehicle;
Step 2: obtaining a dark channel image Idark(x) and a grayscale image Igray(x) of the unmanned aerial vehicle aerial image I(x);
Step 3: according to the dark channel image Idark(x) obtained in step 2, combining the bright spots therein with the unmanned aerial vehicle aerial image I(x) to obtain the atmospheric light value A of the aerial image;
Step 4: defining an atmospheric dissipation function V(x) = A(1 − t(x)), where V(x) represents the atmospheric dissipation function, A represents the atmospheric light value, and t(x) represents the image transmittance; taking the dark channel image obtained in step 2 as the rough estimate V(x) of the atmospheric dissipation function;
Step 5: from the grayscale image Igray(x) obtained in step 2, obtaining its gray histogram and the adaptive threshold ThrB to segment the close view from the sky or white area;
Step 6: using the adaptive threshold ThrB obtained in step 5 to process the roughly estimated atmospheric dissipation function by region, defining a new correction formula, calculating the correction coefficient, defining an improved atmospheric dissipation function formula, and substituting the correction coefficient into it to obtain the improved atmospheric dissipation function V′(x);
Step 7: refining the improved atmospheric dissipation function obtained in step 6 with a bilateral filter to obtain the refined atmospheric dissipation function V″(x);
Step 8: using the transmittance estimation formula t(x) = 1 − V″(x)/A with the refined atmospheric dissipation function V″(x) obtained in step 7, obtaining the transmittance t(x) of the whole image;
Step 9: establishing an image degradation process model, and recovering the defogged image J(x) using the original image I(x) obtained in step 1 and the parameters A and t(x) obtained in steps 3 and 8.
Further, the dark channel image Idark(x) in step 2 is expressed as follows:
Idark(x) = min_{y∈Ω(x)} ( min_{c∈{R,G,B}} I^c(y) )
where Ω(x) is a local window centered at pixel x and c is one of the three color channels R, G, B.
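This dark-channel formula translates directly into NumPy. A minimal sketch, assuming float RGB images in [0, 1]; the window size `patch` is an assumption, since the patent does not fix the size of the local window Ω(x):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over R, G, B, then a minimum filter over a
    patch x patch local window, as in the dark channel formula."""
    min_rgb = img.min(axis=2)                  # min over the three channels
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            # window centered at (y, x) in the padded image
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out
```

The explicit double loop keeps the sketch dependency-free; in practice an equivalent minimum filter (e.g. from an image-processing library) would be used.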
Further, when obtaining the atmospheric light value A in step 3, the first 0.1% of pixels with the largest brightness values in the dark channel image are mapped to their corresponding positions in the original foggy image, the corresponding R, G, B channel values are each averaged, and the final atmospheric light value A is taken as the mean of the three per-channel averages.
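A minimal sketch of this atmospheric-light estimate; float images in [0, 1] are assumed and the helper name is illustrative — only the 0.1% fraction and the per-channel averaging come from the text:

```python
import numpy as np

def atmospheric_light(img, dark, top_frac=0.001):
    """Average the R, G, B values of the haze image at the brightest
    0.1% of dark-channel pixels, then average the three per-channel
    means into a single scalar A."""
    n = max(1, int(dark.size * top_frac))
    flat_idx = np.argsort(dark.ravel())[-n:]   # brightest dark-channel pixels
    pixels = img.reshape(-1, 3)[flat_idx]      # their RGB values in the haze image
    per_channel = pixels.mean(axis=0)          # mean per channel
    return float(per_channel.mean())           # scalar A = mean of the three
```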
Further, the atmospheric dissipation function in step 4 satisfies two constraints: (1) at each pixel point, V(x) > 0, namely the atmospheric dissipation function takes a positive value; (2) V(x) ≤ min_{c∈{R,G,B}} I^c(x), i.e., V(x) is not greater than the minimum color component of the fog-containing image I(x); the dark channel image is therefore used as a rough estimate of the atmospheric dissipation function.
Further, the step 5 is implemented as follows:
Step 5.1: calculate the gray histogram of the grayscale image Igray(x) and its cumulative distribution function L(x); extract the gray levels whose cumulative distribution lies in [0.05, 0.95] by solving L(x1) = 0.05 and L(x2) = 0.95 for the gray values x1, x2; finally calculate the central point of the gray histogram Mid = (x1 + x2)/2;
Step 5.2: carrying out self-adaptive segmentation on the gray level histogram by using a maximum inter-class variance method to obtain a threshold value sh for distinguishing a target from a background;
Step 5.3: accurately find the pixel ThrB corresponding to the starting point of the peak area of the gray histogram by narrowing the gray histogram interval, specifically: find the minimum extreme point of the histogram in the interval [max(Mid, sh), A] and take the corresponding gray value as the pixel ThrB; narrow the interval by bisection, finally setting it to [b, ThrM], find the minimum extreme point of the histogram in this interval, and set the corresponding pixel value as ThrB;
wherein b is obtained by bisecting the interval (its defining formula appears only as an image in the source), and ThrM is the pixel value corresponding to the maximum point of the histogram in the interval [max(Mid, sh), A].
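Steps 5.1–5.3 can be sketched as follows. This is a simplified illustration rather than the patented procedure: Otsu's method stands in for the maximum between-class variance segmentation of step 5.2, the bisection refinement of step 5.3 is collapsed into a direct minimum search over [max(Mid, sh), A], and 8-bit input plus the function name are assumptions:

```python
import numpy as np

def adaptive_thrb(gray, A):
    """Sketch of step 5: gray is a uint8 array, A the atmospheric light
    scaled to [0, 255]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    cdf = hist.cumsum() / hist.sum()
    # step 5.1: gray levels where the CDF crosses 0.05 and 0.95, and their midpoint
    x1 = int(np.searchsorted(cdf, 0.05))
    x2 = int(np.searchsorted(cdf, 0.95))
    mid = (x1 + x2) // 2
    # step 5.2: Otsu's threshold (maximum between-class variance)
    best_var, sh = -1.0, 0
    total = hist.sum()
    total_sum = (np.arange(256) * hist).sum()
    w0 = cum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        cum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum0 / w0, (total_sum - cum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, sh = var, t
    # step 5.3 (simplified): histogram minimum in [max(mid, sh), A]
    lo, hi = max(mid, sh), int(A)
    if hi <= lo:
        return lo
    return lo + int(np.argmin(hist[lo:hi + 1]))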
Further, the new correction formula in step 6 is defined as:
[correction formula for the coefficient M, given only as an image in the source]
wherein M is the correction coefficient, Igray(x) is the grayscale image, a is a parameter influencing the trend of the function, and Idark is the dark channel image.
Further, defining the modified atmospheric dissipation function as:
V'(x)=M*V(x)
where V (x) is a rough estimate of the atmospheric dissipation function.
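Because the correction formula for M appears only as an image in the source, the sketch below substitutes a purely hypothetical piecewise coefficient that merely follows the stated intent — M ≈ 1 for close-range pixels below ThrB, M shrinking for sky/white pixels above it — and is not the patent's formula:

```python
import numpy as np

def improved_dissipation(V, gray, thrb, a=10.0):
    """V'(x) = M * V(x) with a HYPOTHETICAL stand-in for M: unity below
    ThrB, decaying above it; `a` plays the role of the trend parameter
    mentioned in the text."""
    M = np.where(gray < thrb, 1.0, 1.0 / (1.0 + (gray - thrb) / a))
    return M * V
```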
Further, the improved atmospheric dissipation function is refined by using a bilateral filter in step 7, i.e. V″(x) = Bil(V′(x));
wherein Bil(·) is a bilateral filter, whose mathematical expression is defined as
V″(x, y) = Σ_{(i,j)∈S} w(i, j) V′(i, j) / Σ_{(i,j)∈S} w(i, j)
w(i, j) = ws(i, j) · wr(i, j)
ws(i, j) = exp( −((x − i)² + (y − j)²) / (2σs²) )
wr(i, j) = exp( −(g(x, y) − g(i, j))² / (2σr²) )
wherein S is a neighborhood centered at (x, y), (x, y) is the coordinate of the central pixel in the filtering window, (i, j) is the coordinate of a neighboring pixel, w(i, j) is the weighting coefficient, ws(i, j) is the spatial similarity kernel, wr(i, j) is the brightness similarity kernel, g(x, y) is the brightness value of the central pixel in the filtering window, g(i, j) is the brightness value of a neighboring pixel, and σs, σr are the standard deviations of the spatial and brightness similarity kernels, respectively.
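The bilateral-filter weights above translate almost line by line into NumPy. A minimal sketch; border handling by clipping the window and the default radius are assumptions:

```python
import numpy as np

def bilateral(g, sigma_s=3.0, sigma_r=0.1, radius=2):
    """Direct implementation of w = ws * wr: Gaussian spatial kernel ws
    and Gaussian brightness (range) kernel wr; radius=2 gives the 5x5
    window S used in the detailed description."""
    h, w = g.shape
    out = np.empty_like(g)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win = g[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((y - yy) ** 2 + (x - xx) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((g[y, x] - win) ** 2) / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```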
Further, in step 9, the original image I(x), the atmospheric light A, and the obtained transmittance t(x) are substituted into the image degradation process model I(x) = J(x)t(x) + A(1 − t(x)), which is rearranged into the restoration formula
J(x) = (I(x) − A) / t(x) + A
namely, the defogged image J(x) is recovered.
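The restoration step can be sketched as follows; the lower bound t0 on the transmittance is a common safeguard in dark-channel dehazing and an assumption here, not part of the claim:

```python
import numpy as np

def recover(I, A, t, t0=0.1):
    """Invert I = J*t + A*(1 - t):  J = (I - A) / t + A.
    t is clamped to t0 (an assumed safeguard) to avoid amplifying
    noise where the transmittance is tiny."""
    t = np.maximum(t, t0)[..., None]   # broadcast scalar map over RGB
    return (I - A) / t + A
```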
Compared with the prior art, the invention has the following beneficial technical effects:
the invention provides a self-adaptive processing method for a bright area such as a haze sky or white, which can self-adaptively solve a threshold value when defogging processing is carried out on a fog-containing image containing the bright area such as the sky or the white, self-adaptively partition the bright area such as the sky or the white and a close-range area through threshold value judgment, and provide a new correction formula for different areas so as to improve an atmospheric dissipation function. The novel self-adaptive processing method can accurately detect bright areas such as haze sky or white and the like, improve and process the atmospheric dissipation function in the areas, recover the bright areas such as sky or white and the like which are more in line with human eyes, and can keep the defogging effect of the close-range areas. Compared with the traditional bilateral filtering algorithm, the novel algorithm effectively segments bright areas such as sky or white, the problem of color distortion of the bright areas is solved after self-adaptive processing, and the signal-to-noise ratio, the contrast and the color saturation of the defogged image are improved to a greater extent.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
fig. 2 is a comparison of the defogging effect of the present invention on the unmanned aerial vehicle aerial image containing bright areas such as haze sky or white, etc., with other filtering methods, where (a) (b) (c) (d) is the original unmanned aerial vehicle aerial image, (e) (f) (g) (h) is the image after bilateral filtering defogging of (a) (b) (c) (d), and (i) (j) (k) (l) is the image after defogging of (a) (b) (c) (d) by using the algorithm of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1 and fig. 2, the invention provides a method for adaptively processing haze sky and white areas of aerial images of unmanned aerial vehicles.
Step 1, acquiring an aerial image of an unmanned aerial vehicle: acquiring an aerial image I (x) to be subjected to defogging treatment by using unmanned aerial vehicle image acquisition equipment.
Step 2, solving the dark channel image Idark(x) and the grayscale image Igray(x) from the unmanned aerial vehicle aerial image obtained in step 1, based on the dark channel prior theory;
The dark channel image is based on the dark channel prior proposed by He et al.: in most non-sky regions, at least one color channel of some pixels has a very low value, almost approaching zero; in other words, the minimum light intensity in such a region is a small number close to zero. Taking an RGB image as an example, for an arbitrary input image I the dark channel can be expressed by the following formula:
Idark(x) = min_{y∈Ω(x)} ( min_{c∈{R,G,B}} I^c(y) )
where Ω(x) is a local window centered at x and c is one of the three color channels R, G, B;
Step 3, in the dark channel image Idark(x) obtained in step 2, selecting the first 0.1% of pixels with the largest brightness values, finding the brightness values at the corresponding positions in the original unmanned aerial vehicle aerial color image I(x), summing and averaging them per channel to obtain the atmospheric light values of the three channels, and then taking their mean as the final atmospheric light value A of the invention.
Step 4, defining an atmospheric dissipation function V(x) = A(1 − t(x)), where V(x) represents the atmospheric dissipation function, A represents the atmospheric light value, and t(x) represents the image transmittance; the atmospheric dissipation function satisfies two constraints: (1) at each pixel point, V(x) > 0, namely the atmospheric dissipation function takes a positive value; (2) V(x) ≤ min_{c∈{R,G,B}} I^c(x), namely V(x) is not larger than the minimum color component of the fog-containing image I(x); the dark channel image is therefore taken as the rough estimate of the atmospheric dissipation function in the invention;
Step 5, obtaining the gray histogram of the grayscale image Igray(x) obtained in step 2 and solving the adaptive threshold ThrB to segment the close view from the sky or white bright area;
The sky or white area in an image is usually brighter and exhibits a high-peak characteristic in the gray histogram, so an adaptive threshold ThrB can be obtained to segment the close scene from the sky or white area; the specific implementation steps are as follows:
(1) To judge the distribution of the histogram, first calculate the gray histogram of the grayscale image Igray(x) and its cumulative distribution function L(x); extract the gray levels whose cumulative distribution lies in [0.05, 0.95] by solving L(x1) = 0.05 and L(x2) = 0.95 for the gray values x1, x2; finally calculate the central point of the main histogram Mid = (x1 + x2)/2.
(2) Segment the histogram by the maximum between-class variance method; the threshold sh distinguishing the target from the background is obtained adaptively.
(3) Accurately find the pixel ThrB corresponding to the starting point of the histogram peak by narrowing the histogram interval: find the minimum extreme point of the histogram in the interval [max(Mid, sh), A] and take the corresponding gray value as the sought pixel ThrB; narrow the interval by bisection, finally setting it to [b, ThrM], find the minimum point of the histogram in this interval, and set the corresponding pixel value as ThrB,
wherein b is obtained by bisecting the interval (its defining formula appears only as an image in the source), and ThrM is the pixel value corresponding to the maximum point of the histogram in the interval [max(Mid, sh), A].
Step 6, defining a new correction formula according to the air dissipation function roughly estimated by the regional processing of the adaptive threshold ThrB obtained in the step 5, calculating a correction coefficient M, defining an improved air dissipation function V ' (x) M V (x), and substituting the improved air dissipation function V ' (x) into the improvement formula to obtain an improved air dissipation function V ' (x);
the new correction formula is defined as:
Figure GDA0001871121920000082
Igrayis a gray image, a is a parameter influencing the trend change of the function, the value of a is 10, IdarkIs a dark channel image.
Step 7, refining the improved atmospheric dissipation function obtained in step 6 with a bilateral filter, namely V″(x) = Bil(V′(x)), to obtain the refined atmospheric dissipation function V″(x);
The mathematical expression of the bilateral filter is defined as
V″(x, y) = Σ_{(i,j)∈S} w(i, j) V′(i, j) / Σ_{(i,j)∈S} w(i, j)
w(i, j) = ws(i, j) · wr(i, j)
ws(i, j) = exp( −((x − i)² + (y − j)²) / (2σs²) )
wr(i, j) = exp( −(g(x, y) − g(i, j))² / (2σr²) )
wherein S is a 5 × 5 neighborhood centered at (x, y), (x, y) is the coordinate of the central pixel in the filtering window, (i, j) is the coordinate of a neighboring pixel, w(i, j) is the weighting coefficient, ws(i, j) is the spatial similarity kernel, wr(i, j) is the brightness similarity kernel, g(x, y) is the brightness value of the central pixel in the filtering window, and g(i, j) is the brightness value of a neighboring pixel. The spatial similarity kernel standard deviation is σs = 3 and the brightness similarity kernel standard deviation is σr = 0.1.
Step 8, rearranging the atmospheric dissipation function formula, with the refined atmospheric dissipation function V″(x) obtained in step 7, into the transmittance estimation formula
t(x) = 1 − V″(x)/A
and substituting A and V″(x), obtained in steps 3 and 7, into it to obtain the transmittance t(x) of the whole image;
Step 9, establishing the image degradation process model I(x) = J(x)t(x) + A(1 − t(x)) and rearranging it into the image restoration formula
J(x) = (I(x) − A) / t(x) + A
then recovering the defogged image J(x) using the original image I(x) obtained in step 1 and the parameters A and t(x) obtained in steps 3 and 8.
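For orientation, steps 1–9 can be condensed into one short sketch. Note that it deliberately skips the adaptive correction (step 6) and bilateral refinement (step 7), using the dark channel directly as the dissipation function, so it reproduces only the baseline behavior the invention improves on; the window size and the t0 floor are assumptions:

```python
import numpy as np

def dehaze(I, patch=7, t0=0.1):
    """Minimal end-to-end baseline: dark channel -> atmospheric light A
    -> t(x) = 1 - V(x)/A with V = dark channel -> J = (I - A)/t + A."""
    # step 2: dark channel
    min_rgb = I.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    # step 3: atmospheric light from the brightest 0.1% dark-channel pixels
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n:]
    A = float(I.reshape(-1, 3)[idx].mean())
    # steps 4 and 8: V(x) = dark channel, t(x) = 1 - V(x)/A, floored at t0
    t = np.maximum(1.0 - dark / A, t0)
    # step 9: restoration
    return (I - A) / t[..., None] + A
```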
Fig. 1 is a flow chart of the algorithm. Fig. 2 is a diagram of processing effects of different algorithms. In fig. 2, four different sets of unmanned aerial vehicle aerial images are used, and a bilateral filter is used as a comparison group to compare with the effect of the haze sky or white area adaptive processing algorithm for improving the atmospheric dissipation function provided by the invention. The analysis of the experimental result in fig. 2 shows that compared with the original image and the image after bilateral filtering, the new algorithm can solve the problem of color distortion after defogging of bright areas such as a haze sky or white and the like, better maintains the defogging effect of the image in a close-range area, and improves the color saturation and contrast of the restored image.
Table 1 is a comparison table of objective evaluation parameters after defogging of an image aerial by an unmanned aerial vehicle by applying different defogging algorithms.
TABLE 1 evaluation table of defogging effect parameters of different algorithms
[Table 1 appears only as images in the source; per the surrounding text it compares peak signal-to-noise ratio, contrast, and color saturation for the original images, the bilateral filtering algorithm, and the proposed algorithm.]
As can be seen from Table 1, after defogging with the proposed adaptive algorithm for haze sky or white regions, the image has a relatively high peak signal-to-noise ratio and color saturation; the larger these parameters, the better the restoration effect. Every parameter of the defogged image exceeds that of the bilateral filtering defogging algorithm, and the contrast after defogging is to some extent superior to both the original image and the bilateral filtering algorithm. The adaptive processing algorithm for sky or white bright areas can thus eliminate the color distortion of bright areas after defogging.
Therefore, aiming at the aerial fog-containing image of the unmanned aerial vehicle containing the sky or the white area, the self-adaptive processing algorithm of the aerial image of the unmanned aerial vehicle based on the haze sky or the white area of the improved atmosphere dissipation function is superior to the existing algorithm, and has obvious technical advantages. The method has extremely high application value for further processing and accurately extracting the information in the aerial image.
The invention is an algorithm for adaptively processing the sky or white bright areas of a fog-containing image: it accurately segments those areas, better eliminates the color distortion that follows their defogging, and preserves the defogging effect elsewhere. Compared with a bilateral filtering defogging algorithm, the color saturation and contrast after defogging are clearly improved and accord better with human vision; the algorithm is more practical and has high academic and application value for improving the quality of fog-containing UAV aerial images and extracting useful information.

Claims (8)

1. An unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method is characterized by comprising the following steps:
step 1: acquiring an aerial image I (x) of the unmanned aerial vehicle;
Step 2: obtaining a dark channel image Idark(x) and a grayscale image Igray(x) of the unmanned aerial vehicle aerial image I(x);
Step 3: according to the dark channel image Idark(x) obtained in step 2, combining the bright spots therein with the unmanned aerial vehicle aerial image I(x) to obtain the atmospheric light value A of the aerial image;
Step 4: defining an atmospheric dissipation function V(x) = A(1 − t(x)), where V(x) represents the atmospheric dissipation function, A represents the atmospheric light value, and t(x) represents the image transmittance; taking the dark channel image obtained in step 2 as the rough estimate V(x) of the atmospheric dissipation function;
Step 5: from the grayscale image Igray(x) obtained in step 2, obtaining its gray histogram and the adaptive threshold ThrB to segment the close view from the sky or white area;
the specific implementation steps are as follows:
Step 5.1: calculating the gray histogram of the grayscale image Igray(x) and its cumulative distribution function L(x); extracting the gray levels whose cumulative distribution lies in [0.05, 0.95] by solving L(x1) = 0.05 and L(x2) = 0.95 for the gray values x1, x2; finally calculating the central point of the gray histogram Mid = (x1 + x2)/2;
Step 5.2: carrying out self-adaptive segmentation on the gray histogram by the maximum between-class variance method to obtain the threshold sh distinguishing the target from the background;
Step 5.3: accurately finding the pixel ThrB corresponding to the starting point of the peak area of the gray histogram by narrowing the gray histogram interval, specifically: finding the minimum extreme point of the histogram in the interval [max(Mid, sh), A] and taking the corresponding gray value as the pixel ThrB; narrowing the interval by bisection, finally setting it to [b, ThrM], finding the minimum extreme point of the histogram in this interval, and setting the corresponding pixel value as ThrB;
wherein b is obtained by bisecting the interval (its defining formula appears only as an image in the source), and ThrM is the pixel value corresponding to the maximum point of the histogram in the interval [max(Mid, sh), A];
Step 6: using the adaptive threshold ThrB obtained in step 5 to process the roughly estimated atmospheric dissipation function by region, defining a new correction formula, calculating the correction coefficient, defining an improved atmospheric dissipation function formula, and substituting the correction coefficient into it to obtain the improved atmospheric dissipation function V′(x);
Step 7: refining the improved atmospheric dissipation function obtained in step 6 with a bilateral filter to obtain the refined atmospheric dissipation function V″(x);
Step 8: using the transmittance estimation formula t(x) = 1 − V″(x)/A with the refined atmospheric dissipation function V″(x) obtained in step 7, obtaining the transmittance t(x) of the whole image;
Step 9: establishing an image degradation process model, and recovering the defogged image J(x) using the original image I(x) obtained in step 1 and the parameters A and t(x) obtained in steps 3 and 8.
2. The adaptive processing method for the haze sky and white area of the aerial image of the unmanned aerial vehicle as claimed in claim 1, wherein the dark channel image Idark(x) in step 2 is expressed as follows:
Idark(x) = min_{y∈Ω(x)} ( min_{c∈{R,G,B}} I^c(y) )
where Ω(x) is a local window centered at pixel x and c is one of the three color channels R, G, B.
3. The adaptive processing method for the foggy sky and white area of the aerial image of the unmanned aerial vehicle of claim 1, wherein, when obtaining the atmospheric light value A in step 3, the first 0.1% of pixels with the largest brightness values in the dark channel image are mapped to their corresponding positions in the original foggy image, the corresponding R, G, B channel values are each averaged, and the final atmospheric light value A is taken as the mean of the three per-channel averages.
4. The method for adaptively processing haze sky and white areas in unmanned aerial vehicle aerial images according to claim 1, wherein the atmospheric dissipation function in step 4 satisfies two constraints: (1) V(x) > 0 at every pixel, i.e., the atmospheric dissipation function takes positive values; (2)

V(x) ≤ min_{c∈{R,G,B}} I^c(x)

i.e., V(x) is not greater than the minimum color component of the hazy image I(x); the dark channel image is used as the rough estimate of the atmospheric dissipation function.
5. The method of claim 1, wherein the new correction formula in step 6 is defined as:

[equation image FDA0003272391270000032 in the original filing]

where M is the correction coefficient, I_gray(x) is the gray-scale image, a is a parameter controlling the trend of the function, and I_dark is the dark channel image.
6. The adaptive processing method for haze sky and white areas in unmanned aerial vehicle aerial images of claim 5, wherein the improved atmospheric dissipation function is defined as:
V'(x)=M*V(x)
where V (x) is a rough estimate of the atmospheric dissipation function.
7. The adaptive processing method for haze sky and white areas in unmanned aerial vehicle aerial images of claim 1, wherein in step 7 the improved atmospheric dissipation function is refined with a bilateral filter, i.e. V''(x) = Bil(V'(x));
wherein Bil(·) is a bilateral filter, defined mathematically as

V''(x, y) = Σ_{(i,j)∈S} w(i,j) g(i,j) / Σ_{(i,j)∈S} w(i,j)

w(i,j) = w_s(i,j) · w_r(i,j)

w_s(i,j) = exp( -((i - x)² + (j - y)²) / (2σ_s²) )

w_r(i,j) = exp( -(g(i,j) - g(x,y))² / (2σ_r²) )

where S is a neighborhood centered at (x, y), (x, y) are the coordinates of the central pixel of the filtering window, (i, j) are the coordinates of a neighboring pixel, w(i,j) is the weighting coefficient, w_s(i,j) is the spatial similarity kernel, w_r(i,j) is the brightness similarity kernel, g(x, y) is the brightness of the central pixel of the filtering window, g(i, j) is the brightness of a neighboring pixel, and σ_s, σ_r are the standard deviations of the spatial and brightness similarity kernels, respectively.
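The claim-7 kernels translate directly into a slow but literal NumPy loop. The σ values, window radius, and function name below are illustrative defaults of ours, not values from the patent.

```python
import numpy as np

def bilateral_filter(g, sigma_s=2.0, sigma_r=0.1, radius=2):
    """Literal implementation of w = w_s * w_r and
    output = sum(w * g) / sum(w) over a square window S."""
    h, w_ = g.shape
    out = np.empty_like(g, dtype=float)
    # spatial kernel w_s depends only on the offset, so precompute it once
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(g, radius, mode='edge')
    for y in range(h):
        for x in range(w_):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # brightness (range) kernel w_r relative to the center pixel
            w_r = np.exp(-(win - g[y, x])**2 / (2 * sigma_r**2))
            wgt = w_s * w_r
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```

Production code would use an optimized routine such as OpenCV's `cv2.bilateralFilter`; the loop form is only meant to mirror the formulas above term by term.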
8. The adaptive processing method for haze sky and white areas in unmanned aerial vehicle aerial images of claim 1, wherein in step 9 the original image I(x), the atmospheric light A and the obtained transmittance t(x) are substituted into the image degradation model I(x) = J(x)·t(x) + A·(1 - t(x)), which is rearranged to give the restored image

J(x) = (I(x) - A) / t(x) + A

i.e., the defogged image J(x) is recovered.
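Steps 8 and 9 combine into a short restoration sketch. We assume the step-8 equation image encodes t(x) = 1 - V''(x)/A, the usual relation when the dissipation function is V(x) = A·(1 - t(x)); the lower bound t0 on t(x) is a common numerical safeguard that we add here, not part of the claim.

```python
import numpy as np

def transmittance(V_refined, A):
    """t(x) = 1 - V''(x)/A, assuming V(x) = A*(1 - t(x)); this form is
    our reading of the equation image in step 8."""
    return 1.0 - V_refined / A

def recover(I, A, t, t0=0.1):
    """Invert the degradation model I = J*t + A*(1 - t).  Flooring t at
    t0 avoids division by near-zero transmittance (our addition)."""
    t = np.maximum(t, t0)[..., None]  # broadcast over the color channels
    return (I - A) / t + A
```

Running `recover` on an image synthesized from the degradation model returns the original scene radiance exactly, which is a quick way to sanity-check an implementation.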
CN201810385966.XA 2018-04-26 2018-04-26 Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method Active CN109087254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810385966.XA CN109087254B (en) 2018-04-26 2018-04-26 Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method


Publications (2)

Publication Number Publication Date
CN109087254A CN109087254A (en) 2018-12-25
CN109087254B true CN109087254B (en) 2021-12-31

Family

ID=64839641


Country Status (1)

Country Link
CN (1) CN109087254B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919859B (en) * 2019-01-25 2021-09-07 暨南大学 Outdoor scene image defogging enhancement method, computing device and storage medium thereof
CN109949239B (en) * 2019-03-11 2023-06-16 中国人民解放军陆军工程大学 Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image
CN109978799B (en) * 2019-04-15 2021-03-23 武汉理工大学 Maritime unmanned aerial vehicle video image defogging method based on deep learning
CN110060221B (en) * 2019-04-26 2023-01-17 长安大学 Bridge vehicle detection method based on unmanned aerial vehicle aerial image
CN110136079A (en) * 2019-05-05 2019-08-16 长安大学 Image defogging method based on scene depth segmentation
CN110097522B (en) * 2019-05-14 2021-03-19 燕山大学 Single outdoor image defogging method based on multi-scale convolution neural network
CN110676753B (en) * 2019-10-14 2020-06-23 宁夏百川电力股份有限公司 Intelligent inspection robot for power transmission line
CN113538284A (en) * 2021-07-22 2021-10-22 哈尔滨理工大学 Transplantation method of image defogging algorithm based on dark channel prior

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103489166A (en) * 2013-10-12 2014-01-01 大连理工大学 Bilateral filter-based single image defogging method
CN105184758A (en) * 2015-09-16 2015-12-23 宁夏大学 Defogging and enhancing method for image
CN106251301A (en) * 2016-07-26 2016-12-21 北京工业大学 A kind of single image defogging method based on dark primary priori

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP4003428B2 (en) * 2001-10-10 2007-11-07 セイコーエプソン株式会社 Check processing apparatus and check processing method


Non-Patent Citations (1)

Title
"Single-image dehazing with a corrected atmospheric dissipation function" (修正大气耗散函数的单幅图像去雾); Chen Dandan et al.; Journal of Image and Graphics (《中国图象图形学报》); June 2017; pp. 876-885 *

Also Published As

Publication number Publication date
CN109087254A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN109087254B (en) Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN106251300B (en) A kind of quick night Misty Image restored method based on Retinex
CN111292258B (en) Image defogging method based on dark channel prior and bright channel prior
Xu et al. Fast image dehazing using improved dark channel prior
CN106157267B (en) Image defogging transmissivity optimization method based on dark channel prior
CN105574830B (en) Low-quality image enhancement method under extreme weather condition
CN108389175B (en) Image defogging method integrating variation function and color attenuation prior
WO2019205707A1 (en) Dark channel based image defogging method for linear self-adaptive improvement of global atmospheric light
CN109064426B (en) Method and device for suppressing glare in low-illumination image and enhancing image
CN107360344B (en) Rapid defogging method for monitoring video
CN111161167B (en) Single image defogging method based on middle channel compensation and self-adaptive atmospheric light estimation
CN110782407A (en) Single image defogging method based on sky region probability segmentation
CN111861896A (en) UUV-oriented underwater image color compensation and recovery method
CN107977941B (en) Image defogging method for color fidelity and contrast enhancement of bright area
CN112435184B (en) Image recognition method for haze days based on Retinex and quaternion
CN114219732A (en) Image defogging method and system based on sky region segmentation and transmissivity refinement
CN114693548B (en) Dark channel defogging method based on bright area detection
CN111325688B (en) Unmanned aerial vehicle image defogging method for optimizing atmosphere light by fusion morphology clustering
CN107437241B (en) Dark channel image defogging method combined with edge detection
CN113298730B (en) Defogging restoration method based on image decomposition
CN107203979B (en) Low-illumination image enhancement method
CN115170437A (en) Fire scene low-quality image recovery method for rescue robot
Asadi et al. Improving dark channel prior for single image dehazing
CN106780381B (en) Low-illumination image self-adaptive enhancement method based on dark primary color and bilateral filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231016

Address after: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen dragon totem technology achievement transformation Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20231016

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 710064 No. 33, South Second Ring Road, Shaanxi, Xi'an

Patentee before: CHANG'AN University