CN109087254A - Unmanned aerial vehicle aerial image haze sky and white area adaptive processing method - Google Patents


Info

Publication number
CN109087254A
Authority
CN
China
Legal status: Granted
Application number
CN201810385966.XA
Other languages
Chinese (zh)
Other versions
CN109087254B (en)
Inventor
黄鹤
郭璐
王会峰
杜晶晶
宋京
胡凯益
许哲
惠晓滨
黄莺
任思奇
周卓彧
Current Assignee
Dragon Totem Technology Hefei Co ltd
Shenzhen Dragon Totem Technology Achievement Transformation Co ltd
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University
Priority to CN201810385966.XA
Publication of CN109087254A
Application granted
Publication of CN109087254B
Status: Active

Classifications

    • G06T5/73 Image enhancement or restoration: Deblurring; Sharpening
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T2207/20028 Bilateral filtering
    • G06T2207/30181 Earth observation


Abstract

The invention discloses an adaptive processing method for haze sky and white areas in unmanned aerial vehicle (UAV) aerial images. First, the foggy UAV image to be processed is acquired. The dark channel image and the gray-scale image of the foggy image are then computed, and the atmospheric light value A is calculated from the dark channel image. The dark channel image is taken as a rough estimate of the atmospheric dissipation function, and an adaptive threshold ThrB that separates the close-range scene from the sky or white areas is derived from the gray-scale image. The roughly estimated atmospheric dissipation function is then processed by region: a correction coefficient is calculated and substituted into an improved atmospheric dissipation function formula to obtain the improved atmospheric dissipation function value. The improved atmospheric dissipation function is refined by bilateral filtering, the image transmittance t(x) is found from the transmittance estimation formula, and finally the fog-free image is recovered through the image degradation model.

Description

Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a self-adaptive processing method for haze sky and white areas of aerial images of unmanned aerial vehicles.
Background
At present, a research hotspot in the unmanned aerial vehicle field is applying UAVs to mapping, target recognition, geological disaster prevention and control, and similar tasks, and the execution of these tasks depends to a great extent on UAV images of high imaging quality. As weather conditions deteriorate, aerial images captured in haze become blurred and lack definition; the basic information and features of the captured image are severely distorted and damaged, and useful information cannot be extracted. The demand for defogging UAV aerial images is therefore growing.
In recent years, single-image defogging methods have improved greatly. They can be divided into two classes: fog image enhancement based on image processing, and fog image restoration based on a physical model. Enhancement methods ignore the cause of image degradation: they highlight image detail by increasing the definition of the foggy image and restore the original scene to some extent, but because no degradation model is considered, the effect is limited. Restoration methods model the physical degradation process of the foggy image and invert it to obtain a fog-free image; compared with enhancement-based defogging, the restored result is more natural and less information is lost. He et al. proposed the concept of the "dark channel": in any local window of a non-sky region of a fog-free outdoor image, the minimum value over the RGB channels of all pixels is close to 0. Constraints built from fog-free outdoor images then allow the parameters of the atmospheric scattering model to be estimated from a single image to achieve defogging. The defogging effect of this method is good, but sky regions and bright white regions do not satisfy the dark channel prior, so color distortion commonly appears in those regions after defogging. Eliminating the color distortion of sky or bright white regions has therefore become an important task in image restoration.
Existing algorithms address these defects in several ways. Some study the key parameters of the atmospheric scattering model and use an edge-preserving bilateral or guided filter to estimate the atmospheric dissipation function accurately; this retains edge detail, produces a natural transition between edges and smooth regions, and prevents halo effects or residual fog after restoration, but still does not solve the distortion of bright regions such as sky or white. Some repair the transmittance with a fixed threshold, which protects the restoration of sky fog images to some extent but easily leads to under-defogging of images without sky. Others segment the sky region to avoid color distortion after defogging, but take the largest connected region as the identified sky, which easily misses some sky blocks and leaves the missed parts color-distorted. None of these algorithms fundamentally solves the defogging color-distortion problem, so defogging of haze sky or white regions still needs improvement.
Disclosure of Invention
The invention aims to provide a self-adaptive processing method for haze sky and white areas of aerial images of unmanned aerial vehicles, which overcomes the defects in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
an unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method comprises the following steps:
step 1: acquiring an aerial image I(x) of the unmanned aerial vehicle;
step 2: acquiring the dark channel image I_dark(x) and the gray-scale image I_gray(x) of the unmanned aerial vehicle aerial image I(x);
step 3: according to the dark channel image I_dark(x) obtained in step 2, combining its brightest points with the unmanned aerial vehicle aerial image I(x) to obtain the atmospheric light value A of the aerial image;
step 4: defining the atmospheric dissipation function V(x) = A(1 - t(x)), where V(x) represents the atmospheric dissipation function, A represents the atmospheric light value, and t(x) represents the image transmittance; and taking the dark channel image obtained in step 2 as a rough estimate V(x) of the atmospheric dissipation function;
step 5: according to the gray-scale image I_gray(x) obtained in step 2, obtaining its gray histogram and deriving an adaptive threshold ThrB to segment the close-range scene from the sky or white areas;
step 6: utilizing the adaptive threshold ThrB obtained in step 5, processing the roughly estimated atmospheric dissipation function by region: defining a new correction formula, calculating the correction coefficient, defining an improved atmospheric dissipation function formula, and substituting the correction coefficient into it to obtain the improved atmospheric dissipation function V'(x);
step 7: refining the improved atmospheric dissipation function obtained in step 6 with a bilateral filter to obtain the refined atmospheric dissipation function V''(x);
step 8: using the transmittance estimation formula, obtaining the transmittance t(x) of the whole image from the refined atmospheric dissipation function V''(x) obtained in step 7;
step 9: establishing the image degradation model, and recovering the defogged image J(x) using the original image I(x) obtained in step 1 and the parameters A and t(x) obtained in steps 3 and 8.
Further, the dark channel image I_dark(x) in step 2 is expressed as follows:
I_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {R,G,B}} I^c(y) )
where c is one of the three color channels R, G, B, I^c is the corresponding color channel of the image I, and Ω(x) is a local window centered on pixel x.
Further, when the atmospheric light value A is obtained in step 3, the top 0.1% of pixels by dark channel brightness are selected, the pixels at the corresponding positions in the original foggy image are located, the R, G, B channel values of these pixels are averaged separately, and the final atmospheric light value A is taken as the mean of the three per-channel averages.
Further, the atmospheric dissipation function in step 4 satisfies two constraints: (1) at each pixel point, V(x) > 0, i.e., the atmospheric dissipation function takes a positive value; (2) V(x) ≤ min_{c ∈ {R,G,B}} I^c(x), i.e., V(x) is not greater than the minimum color component of the foggy image I(x); the dark channel image is therefore taken as a rough estimate of the atmospheric dissipation function.
Further, the step 5 is implemented as follows:
step 5.1: computing the gray histogram of the gray-scale image I_gray(x) and its cumulative distribution function L(x); extracting the portion of the distribution lying in [0.05, 0.95]; using the distribution function to find the gray values x_1 and x_2 satisfying L(x_1) = 0.05 and L(x_2) = 0.95; and finally computing the central point of the gray histogram Mid = (x_1 + x_2)/2;
Step 5.2: carrying out self-adaptive segmentation on the gray level histogram by using a maximum inter-class variance method to obtain a threshold value sh for distinguishing a target from a background;
step 5.3: the method comprises the following steps of accurately finding a pixel point ThrB corresponding to an initial point of a peak area of a gray histogram by reducing a gray histogram interval, wherein the method specifically comprises the following steps: finding out the minimum extreme point of the histogram in the [ max (Mid, sh), A ] interval, calculating the corresponding gray-scale value, namely the corresponding pixel point ThrB, reducing the interval by a dichotomy, finally setting the interval as [ b, ThrM ], finding out the minimum extreme point of the histogram in the interval, and setting the corresponding pixel value as ThrB;
wherein,ThrM is [ max (Mid, sh), A]And the pixel value corresponding to the maximum value point of the histogram in the interval.
Further, the new correction formula in step 6 is defined as:
where M is the correction coefficient, I_gray(x) is the gray-scale image, a is a parameter that controls the trend of the function, and I_dark is the dark channel image.
Further, defining the modified atmospheric dissipation function as:
V'(x)=M*V(x)
where V (x) is a rough estimate of the atmospheric dissipation function.
Further, the improved atmospheric dissipation function is refined with a bilateral filter in step 7, i.e., V''(x) = Bil(V'(x));
where Bil(·) is a bilateral filter whose mathematical expression is defined as
V''(x, y) = Σ_{(i,j) ∈ S} w(i, j) V'(i, j) / Σ_{(i,j) ∈ S} w(i, j)
w(i, j) = w_s(i, j) w_r(i, j)
w_s(i, j) = exp(-((x - i)^2 + (y - j)^2) / (2σ_s^2))
w_r(i, j) = exp(-(g(x, y) - g(i, j))^2 / (2σ_r^2))
where S is a neighborhood centered on (x, y), (x, y) is the coordinate of the central pixel in the filtering window, (i, j) is the coordinate of a neighboring pixel, w(i, j) is the weighting coefficient, w_s(i, j) is the spatial similarity kernel, w_r(i, j) is the brightness similarity kernel, g(x, y) is the brightness value of the central pixel in the filtering window, g(i, j) is the brightness value of a neighboring pixel, and σ_s and σ_r are the standard deviations of the spatial similarity kernel and the brightness similarity kernel, respectively.
Further, in step 9, the original image I(x), the atmospheric light A, and the obtained transmittance t(x) are substituted into the image degradation process model I(x) = J(x)t(x) + A(1 - t(x)); rearranging gives the restoration formula J(x) = (I(x) - A)/t(x) + A, from which the defogged image J(x) is recovered.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention provides a self-adaptive processing method for a bright area such as a haze sky or white, which can self-adaptively solve a threshold value when defogging processing is carried out on a fog-containing image containing the bright area such as the sky or the white, self-adaptively partition the bright area such as the sky or the white and a close-range area through threshold value judgment, and provide a new correction formula for different areas so as to improve an atmospheric dissipation function. The novel self-adaptive processing method can accurately detect bright areas such as haze sky or white and the like, improve and process the atmospheric dissipation function in the areas, recover the bright areas such as sky or white and the like which are more in line with human eyes, and can keep the defogging effect of the close-range areas. Compared with the traditional bilateral filtering algorithm, the new algorithm effectively segments bright areas such as sky or white, the problem of color distortion of the bright areas is solved after self-adaptive processing, and the signal-to-noise ratio, the contrast and the color saturation of the defogged image are improved to a great extent.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
fig. 2 is a comparison of the defogging effect of the present invention on the unmanned aerial vehicle aerial image containing bright areas such as haze sky or white, etc., with other filtering methods, where (a) (b) (c) (d) is the original unmanned aerial vehicle aerial image, (e) (f) (g) (h) is the image after bilateral filtering defogging of (a) (b) (c) (d), and (i) (j) (k) (l) is the image after defogging of (a) (b) (c) (d) by using the algorithm of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1 and fig. 2, the invention provides a method for adaptively processing haze sky and white areas of aerial images of unmanned aerial vehicles.
Step 1, acquiring an aerial image of an unmanned aerial vehicle: and acquiring an aerial image I (x) to be subjected to defogging treatment by using unmanned aerial vehicle image acquisition equipment.
Step 2, based on the dark channel prior theory, solving the dark channel image I_dark(x) and the gray-scale image I_gray(x) from the unmanned aerial vehicle aerial image obtained in step 1;
The dark channel image is based on the dark channel prior theory proposed by He et al.: in most non-sky regions, at least one color channel of some pixels has a very low value, almost approaching zero; in other words, the minimum light intensity in such a region is a small number close to zero. Taking an RGB image as an example, for an arbitrary input image I the dark channel can be expressed by the following formula:
I_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {R,G,B}} I^c(y) )
where c is one of the 3 color channels R, G, B and Ω(x) is a local window centered on pixel x;
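As a concrete illustration of the dark channel computation above, the following sketch takes the per-pixel minimum over the RGB channels and then a local minimum over a small window. It is an illustration only, not the patent's code: the 7-pixel patch size and the NumPy helpers are choices of this sketch (the patent does not state a window size).

```python
import numpy as np

def dark_channel(img, patch=7):
    """Dark channel: per-pixel minimum over the RGB channels, then a local
    minimum over a patch x patch window. The patch size is an assumption;
    the patent does not state one. img: (H, W, 3) float array in [0, 1]."""
    per_pixel_min = img.min(axis=2)            # min over the three color channels
    half = patch // 2
    padded = np.pad(per_pixel_min, half, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    return windows.min(axis=(2, 3))            # local minimum = dark channel

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(32, 32, 3))
dc = dark_channel(img)
```

By construction the dark channel can never exceed the per-pixel channel minimum, which is a quick sanity check on any implementation.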
step 3, according to the dark channel image I_dark(x) obtained in step 2, selecting the top 0.1% of pixels by brightness, finding the brightness values at the corresponding positions in the original unmanned aerial vehicle aerial color image I(x), summing and averaging them per channel to obtain the atmospheric light value of each of the three channels, and averaging the three channel values to obtain the final atmospheric light value A.
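Step 3 can be sketched as below. The function name and the exact tie-breaking among equally bright pixels are assumptions of this sketch, but the top-0.1% selection and the per-channel averaging follow the step as described.

```python
import numpy as np

def atmospheric_light(img, dc, top_frac=0.001):
    """Step 3 sketch: pick the brightest top_frac of dark-channel pixels,
    average the R, G, B values of the corresponding original pixels
    separately, then average the three channel means into a scalar A."""
    n = max(1, int(dc.size * top_frac))        # at least one pixel
    idx = np.argsort(dc.ravel())[-n:]          # brightest dark-channel positions
    channel_means = img.reshape(-1, 3)[idx].mean(axis=0)
    return float(channel_means.mean())

rng = np.random.default_rng(1)
img = rng.uniform(0.2, 1.0, size=(16, 16, 3))
dc = img.min(axis=2)                           # simplified dark channel for the demo
A = atmospheric_light(img, dc)
```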
Step 4, defining an atmospheric dissipation function: v (x) ═ a (1-t (x)), where V (x) represents an atmospheric dissipation function, a represents an atmospheric light value, t (x) represents an image transmittance, and the atmospheric dissipation function satisfies two constraints: (1) at each pixel point, V (x)>0, namely the value of the atmospheric dissipation function is a positive value; (2)that is, V (x) is not greater than the minimum color component of the fog-containing image I (x), so the dark channel image is used as a rough estimate of the atmospheric dissipation function in the present invention;
step 5, according to the gray-scale image I_gray(x) obtained in step 2, obtaining the gray histogram of the image and deriving an adaptive threshold ThrB to segment the close-range scene from bright areas such as sky or white;
The sky or white area in an image is usually brighter and exhibits a high-peak characteristic in the gray histogram, so an adaptive threshold ThrB can be obtained to separate the close-range scene from the sky or white area. The specific implementation steps are as follows:
(1) To judge the distribution of the histogram, first compute the gray histogram of the gray-scale image I_gray(x) and its cumulative distribution function L(x); extract the portion of the distribution lying in [0.05, 0.95]; use the distribution function to find the gray values x_1 and x_2 satisfying L(x_1) = 0.05 and L(x_2) = 0.95; finally compute the central point of the main histogram Mid = (x_1 + x_2)/2.
(2) Segment the histogram by the maximum between-class variance (Otsu) method: a threshold sh distinguishing the target from the background is obtained adaptively.
(3) Accurately locate the pixel value ThrB corresponding to the starting point of the histogram peak by shrinking the histogram interval: find the minimum extreme point of the histogram in the interval [max(Mid, sh), A] and take the corresponding gray value as the sought ThrB; shrink the interval by bisection until it becomes [b, ThrM]; find the minimum point of the histogram in this interval and take the corresponding pixel value as ThrB.
where ThrM is the pixel value corresponding to the maximum point of the histogram in the interval [max(Mid, sh), A].
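The threshold search of steps (1) to (3) can be sketched as follows. This is a simplified reading of the procedure: the midpoint formula Mid = (x1 + x2)/2 and the omission of the bisection refinement are assumptions of the sketch, while the Otsu loop is the textbook maximum between-class variance method.

```python
import numpy as np

def adaptive_threshold(gray, A):
    """Sketch of the ThrB search: histogram midpoint Mid from the 5% / 95%
    quantiles (midpoint formula assumed), Otsu's threshold sh, then the
    histogram minimum in [max(Mid, sh), A]. Bisection refinement omitted."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    x1 = int(np.searchsorted(cdf, 0.05))       # gray value with L(x1) = 0.05
    x2 = int(np.searchsorted(cdf, 0.95))       # gray value with L(x2) = 0.95
    mid = (x1 + x2) // 2

    # Otsu: maximize the between-class variance over candidate thresholds.
    total, levels = gray.size, np.arange(256)
    best_sh, best_var = 0, -1.0
    for t in range(1, 256):
        n0, n1 = hist[:t].sum(), hist[t:].sum()
        if n0 == 0 or n1 == 0:
            continue
        w0, w1 = n0 / total, n1 / total
        mu0 = (levels[:t] * hist[:t]).sum() / n0
        mu1 = (levels[t:] * hist[t:]).sum() / n1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_sh = var, t

    lo, hi = max(mid, best_sh), int(A)
    if lo >= hi:
        return best_sh                          # degenerate interval: fall back to sh
    return lo + int(hist[lo:hi].argmin())       # ThrB: histogram minimum in the interval

rng = np.random.default_rng(2)
dark = rng.normal(60, 10, 3000)                 # close-range scene mode
bright = rng.normal(200, 10, 3000)              # bright "sky" mode
gray = np.clip(np.concatenate([dark, bright]), 0, 255)
thrb = adaptive_threshold(gray, A=250)
```

On a bimodal gray distribution like this one, ThrB lands in the valley between the close-range mode and the sky mode, which is exactly the split the patent relies on.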
Step 6, according to the adaptive threshold value ThrB obtained in step 5, defining a new correction formula, calculating a correction coefficient M, defining an improved atmospheric dissipation function V ' (x) ═ M × V (x), and substituting the improved atmospheric dissipation function V ' (x) into the improvement formula to obtain an improved atmospheric dissipation function V ' (x);
the new correction formula is defined as:
where I_gray is the gray-scale image, a is a parameter that controls the trend of the function (here a takes the value 10), and I_dark is the dark channel image.
Step 7, refining the improved atmospheric dissipation function obtained in the step 6 by using a bilateral filter, namely V "(x) is Bil (V' (x)), so as to obtain a refined atmospheric dissipation function V" (x);
the mathematical expression of the bilateral filter is defined as
w(i,j)=ws(i,j)wr(i,j)
S is a neighborhood which takes (x, y) as a center and has the size of 5 multiplied by 5, (x, y) is the coordinate of a central pixel point in a filtering window, and (i, j) is the coordinate of an adjacent pixel point. W (i, j) is a weighting coefficient, Ws(i, j) is a spatial similarity kernel, wrAnd (i, j) is a brightness similarity kernel function, g (x, y) is the brightness value of the central pixel point in the filtering window, and g (i, j) is the brightness value of the adjacent pixel points. Standard deviation sigma of spatial similarity kernel functionsLuminance similarity kernel standard deviation σ ═ 3r=0.1。
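A direct implementation of this bilateral refinement with the stated parameters (5 × 5 window, σ_s = 3, σ_r = 0.1) might look like the following sketch; the brute-force double loop is chosen for clarity, not speed.

```python
import numpy as np

def bilateral(v, sigma_s=3.0, sigma_r=0.1, half=2):
    """Brute-force bilateral filter over a (2*half+1)^2 window, i.e. the
    5 x 5, sigma_s = 3, sigma_r = 0.1 setting given in the text.
    v: 2-D float array such as the improved dissipation function V'(x)."""
    h, w = v.shape
    pad = np.pad(v, half, mode='edge')
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    ws = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))   # spatial kernel, fixed
    out = np.empty_like(v)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * half + 1, x:x + 2 * half + 1]
            wr = np.exp(-(patch - v[y, x]) ** 2 / (2 * sigma_r ** 2))  # range kernel
            wgt = ws * wr
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(3)
v = np.clip(rng.normal(0.5, 0.05, size=(20, 20)), 0.0, 1.0)
v_refined = bilateral(v)
```

Because each output value is a weighted average of window values, the result stays within the input's range and the noise variance drops, while the range kernel keeps strong edges (such as the sky boundary) from being smeared.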
And 8: according to the fine atmospheric dissipation function V' (x) obtained in the step 7, the atmospheric dissipation function formula is deformed to obtain a transmissivity estimation formulaSubstituting the A and V' (x) obtained in the step 3 and the step 6 into a transmissivity estimation formula to obtain the transmissivity t (x) of the whole image;
Step 9: establishing the image degradation process model I(x) = J(x)t(x) + A(1 - t(x)); rearranging gives the image restoration formula J(x) = (I(x) - A)/t(x) + A; the defogged image J(x) is recovered using the original image I(x) obtained in step 1 and the parameters A and t(x) obtained in steps 3 and 8.
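Steps 8 and 9 can be sketched together: the transmittance follows from t(x) = 1 - V''(x)/A and the restoration from the rearranged degradation model. The t_min floor is a common safeguard against division by near-zero transmittance; it is an assumption of this sketch, not part of the patent text.

```python
import numpy as np

def recover(img, v_refined, A, t_min=0.1):
    """Steps 8-9: t(x) = 1 - V''(x)/A, then J(x) = (I(x) - A)/t(x) + A.
    t_min guards against division by near-zero transmittance (an
    assumption of this sketch, not stated in the patent)."""
    t = np.maximum(1.0 - v_refined / A, t_min)
    J = (img - A) / t[..., None] + A
    return J, t

# Synthetic round trip: haze a known scene with the degradation model,
# then invert it; the recovery should reproduce the scene exactly.
A = 0.9
rng = np.random.default_rng(4)
J_true = rng.uniform(0.0, 1.0, size=(8, 8, 3))
t_true = np.full((8, 8), 0.6)
I = J_true * t_true[..., None] + A * (1.0 - t_true[..., None])
V = A * (1.0 - t_true)                         # exact atmospheric dissipation function
J_hat, t_hat = recover(I, V, A)
```

When the dissipation function is exact, the round trip is lossless; in practice V''(x) is an estimate, so the recovery is approximate.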
Fig. 1 is the flow chart of the algorithm, and fig. 2 shows the processing effects of different algorithms. In fig. 2, four different sets of unmanned aerial vehicle aerial images are used, with bilateral filtering as the comparison group for the haze sky or white area adaptive processing algorithm with improved atmospheric dissipation function provided by the invention. Analysis of the experimental results in fig. 2 shows that, compared with the original images and the bilaterally filtered images, the new algorithm solves the problem of color distortion after defogging of bright areas such as haze sky or white, better maintains the defogging effect in close-range areas, and improves the color saturation and contrast of the restored images.
Table 1 is a comparison table of objective evaluation parameters after defogging of an image aerial by an unmanned aerial vehicle by applying different defogging algorithms.
TABLE 1 evaluation table of defogging effect parameters of different algorithms
As can be seen from table 1, the haze sky or white region adaptive processing algorithm provided by the invention yields a relatively high peak signal-to-noise ratio and color saturation after defogging, and the larger parameters such as the peak signal-to-noise ratio are, the better the image restoration effect is. Table 1 also shows that every parameter of the image defogged by the proposed algorithm exceeds that of the bilateral filtering defogging algorithm, and that the contrast after defogging is to some extent superior to both the original image and the bilateral filtering result. The adaptive processing algorithm for sky or white bright areas can therefore eliminate the color distortion of bright areas after defogging.
Therefore, for unmanned aerial vehicle aerial foggy images containing sky or white areas, the adaptive UAV aerial image processing algorithm based on the improved atmospheric dissipation function for haze sky or white areas outperforms existing algorithms and has clear technical advantages. It is of high application value for further processing and accurate information extraction from aerial images.
The invention is an algorithm that adaptively processes sky or white bright areas of a foggy image: it accurately segments those areas, largely eliminates their color distortion after defogging, and preserves the defogging effect of the remaining regions. Compared with a bilateral filtering defogging algorithm, the color saturation and contrast of images defogged by this algorithm are clearly improved and better match human vision; the algorithm is more practical and has high academic and application value for improving the quality of UAV aerial foggy images and extracting useful information.

Claims (9)

1. An unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method is characterized by comprising the following steps:
step 1: acquiring an aerial image I(x) of the unmanned aerial vehicle;
step 2: acquiring the dark channel image I_dark(x) and the gray-scale image I_gray(x) of the unmanned aerial vehicle aerial image I(x);
step 3: according to the dark channel image I_dark(x) obtained in step 2, combining its brightest points with the unmanned aerial vehicle aerial image I(x) to obtain the atmospheric light value A of the aerial image;
step 4: defining the atmospheric dissipation function V(x) = A(1 - t(x)), where V(x) represents the atmospheric dissipation function, A represents the atmospheric light value, and t(x) represents the image transmittance; and taking the dark channel image obtained in step 2 as a rough estimate V(x) of the atmospheric dissipation function;
step 5: according to the gray-scale image I_gray(x) obtained in step 2, obtaining its gray histogram and deriving an adaptive threshold ThrB to segment the close-range scene from the sky or white areas;
step 6: utilizing the adaptive threshold ThrB obtained in step 5, processing the roughly estimated atmospheric dissipation function by region: defining a new correction formula, calculating the correction coefficient, defining an improved atmospheric dissipation function formula, and substituting the correction coefficient into it to obtain the improved atmospheric dissipation function V'(x);
step 7: refining the improved atmospheric dissipation function obtained in step 6 with a bilateral filter to obtain the refined atmospheric dissipation function V''(x);
step 8: using the transmittance estimation formula, obtaining the transmittance t(x) of the whole image from the refined atmospheric dissipation function V''(x) obtained in step 7;
step 9: establishing the image degradation model, and recovering the defogged image J(x) using the original image I(x) obtained in step 1 and the parameters A and t(x) obtained in steps 3 and 8.
2. The unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method according to claim 1, wherein the dark channel image I_dark(x) in step 2 is expressed as follows:
I_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {R,G,B}} I^c(y) )
where c is one of the three color channels R, G, B, I^c is the corresponding color channel of the image I, and Ω(x) is a local window centered on pixel x.
3. The unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method according to claim 1, wherein, when the atmospheric light value A is obtained in step 3, the top 0.1% of pixels by dark channel brightness are selected, the pixels at the corresponding positions in the original foggy image are located, the R, G, B channel values of these pixels are averaged separately, and the final atmospheric light value A is taken as the mean of the three per-channel averages.
4. The unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method according to claim 1, wherein the atmospheric dissipation function in step 4 satisfies two constraints: (1) at each pixel point, V(x) > 0, i.e., the atmospheric dissipation function takes a positive value; (2) V(x) ≤ min_{c ∈ {R,G,B}} I^c(x), i.e., V(x) is not greater than the minimum color component of the foggy image I(x); the dark channel image is taken as a rough estimate of the atmospheric dissipation function.
5. The unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method according to claim 1, wherein step 5 is implemented as follows:
step 5.1: computing the gray histogram of the gray-scale image I_gray(x) and its cumulative distribution function L(x); extracting the portion of the distribution lying in [0.05, 0.95]; using the distribution function to find the gray values x_1 and x_2 satisfying L(x_1) = 0.05 and L(x_2) = 0.95; and finally computing the central point of the gray histogram Mid = (x_1 + x_2)/2;
step 5.2: adaptively segmenting the gray histogram by the maximum between-class variance method to obtain a threshold sh distinguishing the target from the background;
step 5.3: accurately locating the pixel value ThrB corresponding to the starting point of the peak region of the gray histogram by shrinking the histogram interval, specifically: finding the minimum extreme point of the histogram in the interval [max(Mid, sh), A] and taking the corresponding gray value as ThrB; shrinking the interval by bisection until it becomes [b, ThrM]; and finding the minimum extreme point of the histogram in this interval and taking the corresponding pixel value as ThrB;
where ThrM is the pixel value corresponding to the maximum point of the histogram in the interval [max(Mid, sh), A].
6. The unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method according to claim 1, wherein the new correction formula in step 6 is defined as:
where M is the correction coefficient, I_gray(x) is the gray-scale image, a is a parameter that controls the trend of the function, and I_dark is the dark channel image.
7. The adaptive processing method for the haze sky and white areas of the aerial image of the unmanned aerial vehicle of claim 6, wherein the improved atmospheric dissipation function is defined as:
V'(x)=M*V(x)
where V (x) is a rough estimate of the atmospheric dissipation function.
8. The adaptive processing method for the foggy sky and white region in the aerial image of the unmanned aerial vehicle of claim 1, wherein the improved atmospheric dissipation function is refined by using a bilateral filter in step 7, i.e., V''(x) = Bil(V'(x));
wherein Bil(·) is a bilateral filter, whose mathematical expression is defined as
V''(x, y) = Σ_{(i,j)∈S} w(i,j) V'(i,j) / Σ_{(i,j)∈S} w(i,j)
w(i,j) = w_s(i,j) w_r(i,j)
w_s(i,j) = exp(−((x−i)² + (y−j)²) / (2σ_s²))
w_r(i,j) = exp(−(g(x,y) − g(i,j))² / (2σ_r²))
where S is a neighborhood centered at (x, y), (x, y) are the coordinates of the central pixel in the filtering window, (i, j) are the coordinates of a neighboring pixel, w(i,j) is the weighting coefficient, w_s(i,j) is the spatial similarity kernel function, w_r(i,j) is the brightness similarity kernel function, g(x,y) is the brightness value of the central pixel in the filtering window, g(i,j) is the brightness value of a neighboring pixel, and σ_s, σ_r are the standard deviations of the spatial similarity kernel function and the brightness similarity kernel function, respectively.
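The weighting of claim 8 can be applied directly. A minimal sketch, assuming Gaussian spatial and brightness kernels with illustrative values for σ_s, σ_r, and the window radius (the claim fixes none of these):

```python
import numpy as np

def bilateral_filter(g, sigma_s=3.0, sigma_r=0.1, radius=3):
    """Bilateral filter with combined weight w(i,j) = w_s(i,j) * w_r(i,j):
    a spatial similarity kernel times a brightness similarity kernel,
    normalized over the window. Parameter defaults are illustrative."""
    h, w = g.shape
    out = np.empty_like(g, dtype=float)
    for x in range(h):
        for y in range(w):
            # Clip the window S to the image border.
            i0, i1 = max(0, x - radius), min(h, x + radius + 1)
            j0, j1 = max(0, y - radius), min(w, y + radius + 1)
            patch = g[i0:i1, j0:j1]
            ii, jj = np.mgrid[i0:i1, j0:j1]
            # Spatial similarity kernel w_s and brightness similarity kernel w_r.
            ws = np.exp(-((x - ii) ** 2 + (y - jj) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((g[x, y] - patch) ** 2) / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[x, y] = (wgt * patch).sum() / wgt.sum()
    return out
```

Because the weights are normalized, a constant input passes through unchanged; edges are preserved because w_r suppresses neighbors whose brightness differs from the center.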
9. The method for adaptively processing the foggy sky and white region in the aerial image of the unmanned aerial vehicle according to claim 1, wherein in step 9 the original image I(x), the atmospheric light A, and the obtained transmittance t(x) are substituted into the image degradation process model I(x) = J(x)t(x) + A(1 − t(x)), and the restored image obtained by rearrangement is formulated as J(x) = (I(x) − A)/t(x) + A, namely the defogged image J(x) is recovered.
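Rearranging the degradation model of claim 9 gives the recovery step below. The lower bound t0 on the transmittance is a common numerical safeguard assumed here, not part of the claim:

```python
import numpy as np

def recover_scene(I, A, t, t0=0.1):
    """Invert I(x) = J(x)t(x) + A(1 - t(x)) to get
    J(x) = (I(x) - A) / max(t(x), t0) + A.
    I: HxWx3 hazy image, A: atmospheric light, t: HxW transmittance."""
    t = np.maximum(t, t0)           # avoid division blow-up where t -> 0
    return (I - A) / t[..., None] + A
```

Composing the model and then inverting it with t above the safeguard recovers the original scene radiance exactly.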
CN201810385966.XA 2018-04-26 2018-04-26 Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method Active CN109087254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810385966.XA CN109087254B (en) 2018-04-26 2018-04-26 Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method

Publications (2)

Publication Number Publication Date
CN109087254A true CN109087254A (en) 2018-12-25
CN109087254B CN109087254B (en) 2021-12-31

Family

ID=64839641

Country Status (1)

Country Link
CN (1) CN109087254B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030068077A1 (en) * 2001-10-10 2003-04-10 Naohiko Koakutsu Negotiable instrument processing apparatus and negotiable instrument processing method
CN103489166A (en) * 2013-10-12 2014-01-01 大连理工大学 Bilateral filter-based single image defogging method
CN105184758A (en) * 2015-09-16 2015-12-23 宁夏大学 Defogging and enhancing method for image
CN106251301A (en) * 2016-07-26 2016-12-21 北京工业大学 A kind of single image defogging method based on dark primary priori

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈丹丹 et al.: "Single Image Dehazing by Correcting the Atmospheric Dissipation Function", Journal of Image and Graphics (《中国图象图形学报》) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919859A (en) * 2019-01-25 2019-06-21 暨南大学 A kind of Outdoor Scene image defogging Enhancement Method calculates equipment and its storage medium
CN109949239A (en) * 2019-03-11 2019-06-28 中国人民解放军陆军工程大学 Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image
CN109949239B (en) * 2019-03-11 2023-06-16 中国人民解放军陆军工程大学 Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image
CN109978799A (en) * 2019-04-15 2019-07-05 武汉理工大学 A kind of maritime affairs UAV Video image defogging method based on deep learning
CN109978799B (en) * 2019-04-15 2021-03-23 武汉理工大学 Maritime unmanned aerial vehicle video image defogging method based on deep learning
CN110060221A (en) * 2019-04-26 2019-07-26 长安大学 A kind of bridge vehicle checking method based on unmanned plane image
CN110060221B (en) * 2019-04-26 2023-01-17 长安大学 Bridge vehicle detection method based on unmanned aerial vehicle aerial image
CN110136079A (en) * 2019-05-05 2019-08-16 长安大学 Image defogging method based on scene depth segmentation
CN110097522A (en) * 2019-05-14 2019-08-06 燕山大学 A kind of single width Method of defogging image of outdoor scenes based on multiple dimensioned convolutional neural networks
CN110676753A (en) * 2019-10-14 2020-01-10 宁夏百川电力股份有限公司 Intelligent inspection robot for power transmission line
CN110676753B (en) * 2019-10-14 2020-06-23 宁夏百川电力股份有限公司 Intelligent inspection robot for power transmission line
CN113538284A (en) * 2021-07-22 2021-10-22 哈尔滨理工大学 Transplantation method of image defogging algorithm based on dark channel prior

Also Published As

Publication number Publication date
CN109087254B (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN109087254B (en) Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN106157267B (en) Image defogging transmissivity optimization method based on dark channel prior
CN108389175B (en) Image defogging method integrating variation function and color attenuation prior
CN111292258B (en) Image defogging method based on dark channel prior and bright channel prior
CN109064426B (en) Method and device for suppressing glare in low-illumination image and enhancing image
CN109255759B (en) Image defogging method based on sky segmentation and transmissivity self-adaptive correction
CN106875351A (en) A kind of defogging method towards large area sky areas image
CN107360344B (en) Rapid defogging method for monitoring video
CN108154492B (en) A kind of image based on non-local mean filtering goes haze method
CN107977941B (en) Image defogging method for color fidelity and contrast enhancement of bright area
CN107103591A (en) A kind of single image to the fog method based on image haze concentration sealing
CN110782407B (en) Single image defogging method based on sky region probability segmentation
CN111161167B (en) Single image defogging method based on middle channel compensation and self-adaptive atmospheric light estimation
CN111861896A (en) UUV-oriented underwater image color compensation and recovery method
CN109493291A (en) A kind of method for enhancing color image contrast ratio of adaptive gamma correction
CN111325688B (en) Unmanned aerial vehicle image defogging method for optimizing atmosphere light by fusion morphology clustering
CN110827221A (en) Single image defogging method based on double-channel prior and side window guide filtering
CN114693548B (en) Dark channel defogging method based on bright area detection
CN108765310B (en) Adaptive transmissivity restoration image defogging method based on multi-scale window
CN117876259A (en) Self-adaptive coal mine underground image defogging method
CN106611419B (en) The extracting method in image road surface region
CN117495719A (en) Defogging method based on atmospheric light curtain and fog concentration distribution estimation
CN106960425A (en) Single frames defogging method based on multiple dimensioned filtering of deconvoluting
CN110852957A (en) Rapid image defogging method based on retina characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231016

Address after: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen dragon totem technology achievement transformation Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20231016

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 710064 No. 33, South Second Ring Road, Shaanxi, Xi'an

Patentee before: CHANG'AN University
