CN113379619A - Integrated processing method for defogging imaging, visibility extraction and depth of field estimation - Google Patents

Integrated processing method for defogging imaging, visibility extraction and depth of field estimation Download PDF

Info

Publication number
CN113379619A
CN113379619A
Authority
CN
China
Prior art keywords
visibility
color
scene
image
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110518861.9A
Other languages
Chinese (zh)
Other versions
CN113379619B (en)
Inventor
蒋大钢
孔令钊
张宇
刘昕
钟港
王毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202110518861.9A
Publication of CN113379619A
Application granted
Publication of CN113379619B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to an integrated processing method for defogging imaging, visibility extraction and depth of field estimation, comprising the following steps: firstly, selecting a classical fogging model; secondly, obtaining the value of the atmospheric background light; thirdly, obtaining an estimate of the transmittance from the dark channel prior; fourthly, completing the recovery of the fog-free image; fifthly, expressing the fogging model in vector form in a color space and constructing an auxiliary line to solve the transmittance ratio of the same scene in two frames, thereby solving the true color of the scene and the corresponding transmittance; sixthly, solving the atmospheric extinction coefficient from the change in distance between the scene and the unmanned aerial vehicle, and calculating the visibility with the empirical formula relating the extinction coefficient to visibility; and seventhly, solving the scene depth. The invention helps improve aerial photography in haze and similar weather, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection.

Description

Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
Technical Field
The invention relates to the technical field of image processing, in particular to an integrated processing method for defogging imaging, visibility extraction and depth of field estimation.
Background
With the development of unmanned aerial technology, unmanned aerial vehicles (UAVs) are increasingly replacing traditional manned aircraft by virtue of their safety, maneuverability and other advantages, and are applied in various scenes such as patrol, investigation and rescue, and surveying. The quality of UAV aerial images is easily affected by haze weather, which degrades the captured images, impairs UAV aerial photography tasks and limits the working scenes and times of UAVs; this creates an important practical need for research on image defogging algorithms. Scholars have successively proposed methods such as image contrast enhancement, mean defogging and median defogging; these methods have some defogging effect, but their results often suffer severe color distortion. In 2009, Dr. Kaiming He proposed the dark channel prior and designed a defogging algorithm based on it; by virtue of its excellent effect and stability, this algorithm remains a mainstream and classical defogging method.
Based on a physical model of atmospheric fogging, the dark channel defogging algorithm explains the principle that the imaged color is synthesized from the scene's true color and the atmospheric background light through the atmospheric transmittance; recovering the fog-free true colors therefore requires solving only two unknown parameters, the transmittance of the image and the atmospheric background light, and the transmittance can be obtained once the atmospheric background light is known. The algorithm assumes the sky area of the image to be the bright area of the dark channel image, so it takes the brightest 1% of pixels of the dark channel image as the atmospheric background light, and estimates the transmittance from the dark channel prior statistical rule together with the value of the atmospheric background light. The estimated atmospheric background light and transmittance are then used to defog the image. Because the dark channel defogging algorithm depends by design on the sky area of the image when estimating the atmospheric background light, while aerial images contain no sky area, the atmospheric background light in aerial images is estimated inaccurately; the defogging result is dim, the colors are distorted, the detail recognizability is low, and good results cannot be obtained. There is therefore an urgent need to design a defogging algorithm suitable for aerial images.
Nowadays, UAV applications show a trend of functional diversification, scene diversification and environmental complexity, and UAV technology needs to develop toward all-weather, multi-functional, wide-field use. Further research finds that, beyond trying to improve the defogging imaging effect of aerial images, whether more functions can be mined from the physical process of atmospheric fog-penetrating imaging has rarely been studied. According to the attenuation law of light in atmospheric transmission, the atmospheric extinction coefficient and the distance travelled by the light directly determine the transmittance of a scene, so whether this process can be used to calculate the atmospheric extinction coefficient and the scene depth is worth researching. Among environment-detection parameters, visibility (affected by haze, dust and smog, and related to the atmospheric extinction coefficient by a known correspondence) is an important atmospheric parameter. Carrying a visibility detection device consumes the limited payload of a UAV and reduces flight efficiency. Therefore, if visibility can be estimated from aerial images, environmental monitoring can be performed without carrying a visibility instrument, saving UAV payload and improving flight efficiency. UAVs also need scene depth measurement in applications such as surveying and mapping and target tracking; if depth of field can be estimated from aerial images, a passive ranging means is directly available without carrying binocular/multi-view cameras, radar or similar equipment. As stated above, the visibility and distance information carried by the physical process of fog-penetrating imaging is worth mining; this will make UAVs more valuable in environmental monitoring and also assist the development of intelligent UAV piloting technology.
In summary, typical defogging algorithms are problematic in aerial image defogging applications and need improvement; meanwhile, calculating visibility and scene depth from the atmospheric transmission process of light has research and application value.
Disclosure of Invention
It is an object of the present invention to provide an integrated processing method for defogging imaging, visibility extraction and depth of field estimation that overcomes some or all of the deficiencies of the prior art.
The integrated processing method for defogging imaging, visibility extraction and depth of field estimation comprises the following steps:
firstly, selecting the classical fogging model I(x) = t(x)·D(x) + (1 − t(x))·A, wherein x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the true color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with t(x) = e^(−β·d(x)), where β is the atmospheric extinction coefficient and d(x) is the distance travelled by the light;
secondly, searching for scenes that appear in both of two changed frames by using an image matching algorithm, and solving the value A of the atmospheric background light from the geometric relationship formed by the differences in imaging color of the same scenes in the two frames;
thirdly, obtaining the estimated transmittance t̃(x) of t(x) according to the dark channel prior:
t̃(x) = 1 − min_{y∈Ω(x)} min_c ( I^c(y) / A^c )
wherein Ω is a square window centered at x, and c is a color channel of D(x);
fourthly, completing the restoration of the fog-free image with the rearranged form of the fogging model:
D(x) = ( I(x) − A ) / t(x) + A;
fifthly, expressing the fogging model in vector form in the color space, constructing an auxiliary line to solve the transmittance ratio t_1(x)/t_2(x) of the same scene in the two frames, and further solving the true color D(x) of the scene and the corresponding transmittance t(x);
sixthly, writing the transmittances as a ratio:
t_{x1} / t_{x2} = e^(−β·d_1) / e^(−β·d_2) = e^(−β·(d_1 − d_2))
where x denotes the same scene in both frames and t_{x1} denotes the transmittance of scene x in the first of the two frames; combined with the change d_1 − d_2 in the distance between the scene and the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved, and the visibility V can be calculated from the empirical formula relating the extinction coefficient to visibility:
V = 3.912 / β;
seventhly, with the transmittance t(x) and the extinction coefficient β known, obtaining the scene depth d from t(x) = e^(−β·d(x)).
Preferably, in step three, in the dark channel prior, the dark channel map is defined as the grayscale image composed of the lowest channel value of each pixel:
D(x)^dark = min_{y∈Ω(x)} min_c D^c(y)
wherein D(x)^dark is the dark channel image; by the dark channel prior, D(x)^dark → 0, from which the estimated transmittance t̃(x) of the preceding formula is obtained.
Preferably, in the fifth step, two frames with an overlapping area are selected from the aerial images used for estimating the atmospheric background light A; SIFT operators are used for feature matching to find the same points in the two images, whose transmittances are denoted t_1(x) and t_2(x) respectively; the color of the same object in the two images is then expressed as:
I_i(x) = t_i(x)·D(x) + (1 − t_i(x))·A = p_i·D(x) + q_i·A, i = 1, 2
wherein I_i(x), D(x) and A are expressed as space vectors I_i, D and A in the RGB color space; p_i = t_i(x), q_i = 1 − t_i(x), and p_i + q_i = 1; the vectors I_i, D and A lie in one spatial plane; i is an index; D is the unattenuated color vector of the scene, and A is the atmospheric light vector; all actual imaging color vectors of a scene synthesized under different transmittances lie in the plane spanned by D and A;
in two adjacent images, an image matching algorithm yields several point pairs satisfying the above expression; each pair generates vectors I_1 and I_2 lying in one spatial plane, and each such plane has a normal perpendicular to A; then:
n = I_1 × I_2, n · A = 0
for several pairs, the normal vectors n_i of the respective planes always satisfy n_i ⊥ A, so the system of equations is obtained:
n_i · A = 0, i = 1, 2, …
using the rule of the above formula, an optimization can be written to fit the direction of A:
ê_A = argmin_{|e|=1} Σ_i ( n_i · e )²
wherein ê_A is the unit vector of the atmospheric background light A;
when the scene's own color is combined with the atmospheric background light, the endpoint of vector I lies on the line connecting the endpoints of D and A, so the equation can be written from the geometric relationship:
I_1 + b·( I_2 − I_1 ) = a·ê_A
wherein a and b are constants obtained by solving the above formula; the atmospheric background light A = a·ê_A is thereby recovered; at this point all right-hand variables of the estimate t̃(x) are known, and the estimated transmittance of every image point is obtained;
from p_i + q_i = 1, the above expression rearranges to the following:
I_1 − A = p_1·( D − A ), I_2 − A = p_2·( D − A )
taking the ratio of the two equations gives:
| I_1 − A | / | I_2 − A | = p_1 / p_2
draw through I_2 the line A′I_2 parallel to vector OI_1, crossing OA at A′; it is known that ΔOI_1A_1 ∽ ΔA′I_2A_2; then:
|OI_1| / |A′I_2| = |OA_1| / |A′A_2| = |I_1A_1| / |I_2A_2|
solving |OA′| and |A′I_2| by the law of sines, p_1/p_2 is obtained from the relationship in the above formula; the scene's own color D can then be solved; at this point D and A in the plane of each matched pair are known; the corresponding transmittance t(x) at each point pair can be solved.
The invention provides an algorithm that simultaneously realizes the three functions of UAV defogging imaging, visibility extraction and depth of field estimation without adding extra hardware, so that UAV imaging can meet more diverse requirements such as traffic monitoring, resource surveying and environmental monitoring. Addressing the unsuitability of traditional defogging algorithms for aerial images, the invention proposes an aerial-video defogging imaging algorithm and, on this basis, realizes the estimation of atmospheric visibility and depth of field by further combining UAV flight data with the data produced during the defogging computation. The invention helps improve aerial photography in haze and similar weather, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection.
Drawings
Fig. 1 is a flowchart of an integrated processing method of defogging imaging, visibility extraction and depth of field estimation in embodiment 1;
FIG. 2 is a schematic plan view of the derived vector and the auxiliary lines in embodiment 1;
FIG. 3 is a schematic view of the perpendicularity between the normal vector of the plane formed by each pair of vectors I_1, I_2 and the atmospheric light vector A in embodiment 1;
FIG. 4 is a diagram illustrating the depth estimation and visibility estimation results in embodiment 1;
FIG. 5 is a graph showing the defogging results in example 1.
Detailed Description
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples. It should be understood that the examples are illustrative of the invention only and not limiting.
Example 1
As shown in fig. 1, the present embodiment provides an integrated processing method of defogging imaging, visibility extraction and depth of field estimation, which includes:
In computer vision, the classic fogging model is selected:
I(x) = t(x)·D(x) + (1 − t(x))·A (1)
where x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the true color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with:
t(x) = e^(−β·d(x)), t(x) ∈ (0, 1) (2)
where β is the atmospheric extinction coefficient and d(x) is the distance travelled by the light.
As formula (1) shows, the color I(x) of a scene attenuated by atmospheric transmission is a linear combination, weighted by t(x), of the scene's natural color D(x) and the atmospheric background light A.
Scenes appearing in both of two changed frames are then found with an image matching algorithm, and the value A of the atmospheric background light is solved from the geometric relationship formed by the differences in imaging color of the same scenes in the two frames.
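One possible realization of this matching step (an implementation assumption, since the patent specifies only SIFT) uses OpenCV's SIFT detector with Lowe's ratio test to collect corresponding pixel pairs from two overlapping frames; the file paths and ratio threshold are illustrative.

    import cv2

    def matched_point_pairs(path1, path2, ratio=0.75):
        """Return corresponding pixel coordinates in two overlapping frames."""
        img1, img2 = cv2.imread(path1), cv2.imread(path2)
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        # Lowe's ratio test keeps only distinctive matches
        pairs = []
        for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2):
            if m.distance < ratio * n.distance:
                pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
        return pairs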
The dark channel prior assumes that, for most pixels in a statistically fog-free image, at least one of the three RGB color channels has a low, even near-zero, intensity value. Based on this statistical rule, the dark channel map is defined as the grayscale image composed of the lowest channel value of each pixel:
D(x)^dark = min_{y∈Ω(x)} min_c D^c(y) (3)
wherein Ω is a square window centered at x, c indexes the color channels of D(x), and D(x)^dark is the dark channel image. By the dark channel prior, D(x)^dark → 0, from which the estimated transmittance t̃(x) is obtained:
t̃(x) = 1 − min_{y∈Ω(x)} min_c ( I^c(y) / A^c ) (4)
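A minimal sketch of formulas (3) and (4); the 15-pixel window and the use of scipy's minimum filter for the local minimum over Ω are implementation assumptions.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, window=15):
        """Formula (3): minimum over color channels, then over the window Omega(x)."""
        return minimum_filter(img.min(axis=2), size=window)

    def estimate_transmission(I, A, window=15):
        """Formula (4): t~(x) = 1 - dark channel of I normalized by A."""
        return 1.0 - dark_channel(I / A, window)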
In this embodiment, two frames with an overlapping area are selected from the aerial images used for estimating the atmospheric background light A, and SIFT operators perform feature matching to find the same points in the two images. Because the unmanned aerial vehicle moves, the optical path from the same object to the onboard camera differs between the frames, so by formula (2) the transmittances differ; denote them t_1(x) and t_2(x). The color of the same object in the two images is then expressed as:
I_i(x) = t_i(x)·D(x) + (1 − t_i(x))·A = p_i·D(x) + q_i·A, i = 1, 2 (5)
wherein I_i(x), D(x) and A are expressed as space vectors I_i, D and A in the RGB color space; p_i = t_i(x), q_i = 1 − t_i(x), and p_i + q_i = 1. The vectors I_i, D and A lie in one spatial plane; i is an index; D is the unattenuated color vector of the scene, and A is the atmospheric light vector. All actual imaging color vectors of a scene synthesized under different transmittances lie in the plane spanned by D and A. Two possible scene imaging color vectors, the scene's own color vector and the atmospheric background light vector are drawn in Fig. 2 as I_1, I_2, D and A respectively; the vector-triangle synthesis process is shown as dashed line 1 in Fig. 2.
In two adjacent images, an image matching algorithm yields several point pairs satisfying formula (5); each pair generates vectors I_1 and I_2 lying in one spatial plane, and each such plane has a normal perpendicular to A, as shown in Fig. 3:
n = I_1 × I_2, n · A = 0 (6)
For several pairs, the normal vectors n_i of the respective planes always satisfy n_i ⊥ A, so the system of equations is obtained:
n_i · A = 0, i = 1, 2, … (7)
Using the rule of formula (7), an optimization can be written to fit the direction of A:
ê_A = argmin_{|e|=1} Σ_i ( n_i · e )² (8)
wherein ê_A is the unit vector of the atmospheric background light A.
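One way to carry out the fit of formulas (6) to (8), sketched under the assumption that the matched colors are given as float RGB triples: the unit vector minimizing Σ(n_i · e)² is the eigenvector of Σ n_i n_iᵀ belonging to the smallest eigenvalue.

    import numpy as np

    def fit_airlight_direction(colors1, colors2):
        """colors1, colors2: (N, 3) RGB vectors I_1, I_2 of matched point pairs."""
        normals = np.cross(colors1, colors2)    # n_i = I_1 x I_2, formula (6)
        M = normals.T @ normals                 # sum_i n_i n_i^T, a 3x3 matrix
        _, eigvecs = np.linalg.eigh(M)          # eigenvalues in ascending order
        e_A = eigvecs[:, 0]                     # minimizer of formula (8)
        return e_A if e_A.sum() > 0 else -e_A   # fix the sign: light is positive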
Because p_i + q_i = 1 by definition, when the scene's own color is combined with the atmospheric background light according to formula (5), the endpoint of vector I lies on the line connecting the endpoints of D and A, shown as dashed line 2 in Fig. 2.
The equation can be written from the geometric relationship:
I_1 + b·( I_2 − I_1 ) = a·ê_A (9)
wherein a and b are constants obtained by solving formula (9); the atmospheric background light A = a·ê_A is thereby recovered. At this point all right-hand variables of the estimate t̃(x) in formula (4) are known, and the estimated transmittance t̃(x) of every image point is obtained.
Rearranging formula (1) gives the fog-free image recovery formula:
D(x) = ( I(x) − A ) / t̃(x) + A (10)
which completes the defogging of the image.
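A minimal sketch of the recovery formula (10); the lower clamp on the transmittance is a common numerical safeguard and an assumption, not part of the patent.

    import numpy as np

    def recover_scene(I, A, t, t_min=0.1):
        """Formula (10): D(x) = (I(x) - A) / t(x) + A, with t clamped for stability."""
        t = np.clip(t, t_min, 1.0)[..., np.newaxis]
        return np.clip((I - A) / t + A, 0.0, 1.0)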
From p_i + q_i = 1, formula (5) rearranges to the following:
I_1 − A = p_1·( D − A ), I_2 − A = p_2·( D − A ) (11)
Taking the ratio of the two equations in formula (11) gives:
| I_1 − A | / | I_2 − A | = p_1 / p_2 (12)
Draw through I_2 the line A′I_2 parallel to vector OI_1, crossing OA at A′, as shown by dashed line 3 in Fig. 2; then ΔOI_1A_1 ∽ ΔA′I_2A_2, so that:
|OI_1| / |A′I_2| = |OA_1| / |A′A_2| = |I_1A_1| / |I_2A_2| (13)
Solving |OA′| and |A′I_2| by the law of sines, p_1/p_2 is obtained from the relationship in formula (13); the scene's own color D can then be solved from formula (12). At this point D and A in the plane of each matched pair are known, and the corresponding transmittance t(x) at each point pair can be solved from formula (5).
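A small numeric check of formulas (11) and (12) under assumed values: synthesizing I_1 and I_2 from chosen D, A, p_1 and p_2, the ratio of distances to the endpoint of A indeed returns p_1/p_2.

    import numpy as np

    D = np.array([0.2, 0.5, 0.3])   # assumed scene color vector
    A = np.array([0.9, 0.9, 0.9])   # assumed atmospheric light vector
    p1, p2 = 0.8, 0.6               # assumed transmittances t_1(x), t_2(x)

    I1 = p1 * D + (1 - p1) * A      # formula (5), frame 1
    I2 = p2 * D + (1 - p2) * A      # formula (5), frame 2

    ratio = np.linalg.norm(I1 - A) / np.linalg.norm(I2 - A)
    print(ratio, p1 / p2)           # both print 1.3333..., confirming formula (12)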
Taking the ratio of formula (2) at two different distances gives:
t_1(x) / t_2(x) = e^(−β·d_1) / e^(−β·d_2) = e^(−β·(d_1 − d_2)) (14)
As formula (14) shows, using p_1/p_2 together with the change d_1 − d_2 in the distance between the scene and the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved; the empirical formula relating the atmospheric extinction coefficient to visibility,
V = 3.912 / β (15)
then gives the visibility V in the image at that time.
With the transmittance t(x) and the extinction coefficient β known, the scene depth d is obtained from formula (2) as d = −ln t(x) / β.
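The following sketch chains formulas (14), (15) and (2): a transmittance ratio plus a known change in distance yields β, then the visibility and a per-pixel depth map; all numeric inputs are illustrative assumptions.

    import numpy as np

    def extinction_coefficient(t_ratio, d1_minus_d2):
        """Formula (14): t1/t2 = e^(-beta*(d1-d2))  =>  beta = -ln(t1/t2)/(d1-d2)."""
        return -np.log(t_ratio) / d1_minus_d2

    def visibility(beta):
        """Formula (15): V = 3.912 / beta."""
        return 3.912 / beta

    def depth_map(t, beta):
        """Formula (2): t(x) = e^(-beta*d(x))  =>  d(x) = -ln t(x) / beta."""
        return -np.log(t) / beta

    # Illustrative example: the UAV moved 50 m closer (d1 - d2 = -50 m)
    beta = extinction_coefficient(1.05, -50.0)
    print(visibility(beta))   # about 4009 m of visibility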
Fig. 4 shows the depth estimation result (left) and the visibility estimation result (right). Fig. 5 shows the original image (left), the defogging result of the classical algorithm (middle), and the defogging result of this method (right). As Figs. 4 and 5 show, the method helps improve aerial photography in haze and similar weather, realizes the estimation of atmospheric visibility and depth of field, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection.
The embodiment provides an image processing method for an unmanned aerial vehicle, but is not limited to unmanned aerial vehicle scenes.
The invention and its embodiments have been described above schematically, and the description is not limiting; the drawings show only one embodiment of the invention, and the actual structure is not limited to it. Therefore, similar structures and embodiments designed by those skilled in the art in light of this teaching, without inventive effort and without departing from the spirit of the invention, shall fall within the scope of protection of the invention.

Claims (3)

1. The integrated processing method of defogging imaging, visibility extraction and depth of field estimation, characterized by comprising the following steps:
firstly, selecting the classical fogging model I(x) = t(x)·D(x) + (1 − t(x))·A, wherein x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the true color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with t(x) = e^(−β·d(x)), where β is the atmospheric extinction coefficient and d(x) is the distance travelled by the light;
secondly, searching for scenes that appear in both of two changed frames by using an image matching algorithm, and solving the value A of the atmospheric background light from the geometric relationship formed by the differences in imaging color of the same scenes in the two frames;
thirdly, obtaining the estimated transmittance t̃(x) of t(x) according to the dark channel prior:
t̃(x) = 1 − min_{y∈Ω(x)} min_c ( I^c(y) / A^c )
wherein Ω is a square window centered at x, and c is a color channel of D(x);
fourthly, completing the restoration of the fog-free image with the rearranged form of the fogging model:
D(x) = ( I(x) − A ) / t(x) + A;
fifthly, expressing the fogging model in vector form in the color space, constructing an auxiliary line to solve the transmittance ratio t_1(x)/t_2(x) of the same scene in the two frames, and further solving the true color D(x) of the scene and the corresponding transmittance t(x);
sixthly, writing the transmittances as a ratio:
t_{x1} / t_{x2} = e^(−β·d_1) / e^(−β·d_2) = e^(−β·(d_1 − d_2))
where x denotes the same scene in both frames and t_{x1} denotes the transmittance of scene x in the first of the two frames; combined with the change d_1 − d_2 in the distance between the scene and the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved, and the visibility V can be calculated from the empirical formula relating the extinction coefficient to visibility:
V = 3.912 / β;
seventhly, with the transmittance t(x) and the extinction coefficient β known, obtaining the scene depth d from t(x) = e^(−β·d(x)).
2. The integrated processing method of defogging imaging, visibility extraction and depth of field estimation according to claim 1, characterized in that: in step three, in the dark channel prior, the dark channel map is defined as the grayscale image composed of the lowest channel value of each pixel:
D(x)^dark = min_{y∈Ω(x)} min_c D^c(y)
wherein D(x)^dark is the dark channel image; by the dark channel prior, D(x)^dark → 0, from which the estimated transmittance t̃(x) of the formula in claim 1 is obtained.
3. The integrated processing method of defogging imaging, visibility extraction and depth of field estimation according to claim 1, characterized in that: in the fifth step, two frames with an overlapping area are selected from the aerial images used for estimating the atmospheric background light A; SIFT operators are used for feature matching to find the same points in the two images, whose transmittances are denoted t_1(x) and t_2(x) respectively; the color of the same object in the two images is then expressed as:
I_i(x) = t_i(x)·D(x) + (1 − t_i(x))·A = p_i·D(x) + q_i·A, i = 1, 2
wherein I_i(x), D(x) and A are expressed as space vectors I_i, D and A in the RGB color space; p_i = t_i(x), q_i = 1 − t_i(x), and p_i + q_i = 1; the vectors I_i, D and A lie in one spatial plane; i is an index; D is the unattenuated color vector of the scene, and A is the atmospheric light vector; all actual imaging color vectors of a scene synthesized under different transmittances lie in the plane spanned by D and A;
in two adjacent images, an image matching algorithm yields several point pairs satisfying the above expression; each pair generates vectors I_1 and I_2 lying in one spatial plane, and each such plane has a normal perpendicular to A; then:
n = I_1 × I_2, n · A = 0
for several pairs, the normal vectors n_i of the respective planes always satisfy n_i ⊥ A, so the system of equations is obtained:
n_i · A = 0, i = 1, 2, …
using the rule of the above formula, an optimization can be written to fit the direction of A:
ê_A = argmin_{|e|=1} Σ_i ( n_i · e )²
wherein ê_A is the unit vector of the atmospheric background light A;
when the scene's own color is combined with the atmospheric background light, the endpoint of vector I lies on the line connecting the endpoints of D and A, so the equation can be written from the geometric relationship:
I_1 + b·( I_2 − I_1 ) = a·ê_A
wherein a and b are constants obtained by solving the above formula; the atmospheric background light A = a·ê_A is thereby recovered; at this point all right-hand variables of the estimate t̃(x) are known, and the estimated transmittance of every image point is obtained;
from p_i + q_i = 1, the above expression rearranges to the following:
I_1 − A = p_1·( D − A ), I_2 − A = p_2·( D − A )
taking the ratio of the two equations gives:
| I_1 − A | / | I_2 − A | = p_1 / p_2
draw through I_2 the line A′I_2 parallel to vector OI_1, crossing OA at A′; it is known that ΔOI_1A_1 ∽ ΔA′I_2A_2; then:
|OI_1| / |A′I_2| = |OA_1| / |A′A_2| = |I_1A_1| / |I_2A_2|
solving |OA′| and |A′I_2| by the law of sines, p_1/p_2 is obtained from the relationship in the above formula; the scene's own color D can then be solved; at this point D and A in the plane of each matched pair are known; the corresponding transmittance t(x) at each point pair can be solved.
CN202110518861.9A 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation Active CN113379619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110518861.9A CN113379619B (en) 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110518861.9A CN113379619B (en) 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation

Publications (2)

Publication Number Publication Date
CN113379619A true CN113379619A (en) 2021-09-10
CN113379619B CN113379619B (en) 2022-02-01

Family

ID=77572622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110518861.9A Active CN113379619B (en) 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation

Country Status (1)

Country Link
CN (1) CN113379619B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931220A (en) * 2016-04-13 2016-09-07 南京邮电大学 Dark channel experience and minimal image entropy based traffic smog visibility detection method
CN107301623A (en) * 2017-05-11 2017-10-27 北京理工大学珠海学院 A kind of traffic image defogging method split based on dark and image and system
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN107680054A (en) * 2017-09-26 2018-02-09 长春理工大学 Multisource image anastomosing method under haze environment
CN111161167A (en) * 2019-12-16 2020-05-15 天津大学 Single image defogging method based on middle channel compensation and self-adaptive atmospheric light estimation
CN111292258A (en) * 2020-01-15 2020-06-16 长安大学 Image defogging method based on dark channel prior and bright channel prior
CN111553862A (en) * 2020-04-29 2020-08-18 大连海事大学 Sea-sky background image defogging and binocular stereo vision positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU SHUBO: "Research on Road Traffic Sign Recognition and Lane Line Detection in Haze Weather", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114280056A (en) * 2021-12-20 2022-04-05 北京普测时空科技有限公司 Visibility measurement system
CN116664448A (en) * 2023-07-24 2023-08-29 南京邮电大学 Medium-high visibility calculation method and system based on image defogging
CN116664448B (en) * 2023-07-24 2023-10-03 南京邮电大学 Medium-high visibility calculation method and system based on image defogging

Also Published As

Publication number Publication date
CN113379619B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN112435325B (en) VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method
Mehra et al. ReViewNet: A fast and resource optimized network for enabling safe autonomous driving in hazy weather conditions
US11087151B2 (en) Automobile head-up display system and obstacle prompting method thereof
CN106651938B (en) A kind of depth map Enhancement Method merging high-resolution colour picture
US7148861B2 (en) Systems and methods for providing enhanced vision imaging with decreased latency
Kuanar et al. Night time haze and glow removal using deep dilated convolutional network
CN113379619B (en) Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
CN112419472B (en) Augmented reality real-time shadow generation method based on virtual shadow map
CN103186887B (en) Image demister and image haze removal method
CN105225230A (en) A kind of method and device identifying foreground target object
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
Choi et al. Safenet: Self-supervised monocular depth estimation with semantic-aware feature extraction
CN112508814B (en) Image tone restoration type defogging enhancement method based on unmanned aerial vehicle at low altitude visual angle
CN111553862B (en) Defogging and binocular stereoscopic vision positioning method for sea and sky background image
CN111753739B (en) Object detection method, device, equipment and storage medium
CN107808140B (en) Monocular vision road recognition algorithm based on image fusion
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN106023108A (en) Image defogging algorithm based on boundary constraint and context regularization
CN112561996A (en) Target detection method in autonomous underwater robot recovery docking
CN110503609B (en) Image rain removing method based on hybrid perception model
CN107085830B (en) Single image defogging method based on propagation filtering
CN112419411B (en) Realization method of vision odometer based on convolutional neural network and optical flow characteristics
CN110910457B (en) Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics
Le Besnerais et al. Dense height map estimation from oblique aerial image sequences
CN106846260B (en) Video defogging method in a kind of computer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant