CN116188316A - Water area defogging method based on fog concentration sensing - Google Patents

Water area defogging method based on fog concentration sensing Download PDF

Info

Publication number
CN116188316A
Authority
CN
China
Prior art keywords
image
fog
foggy
value
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310205615.7A
Other languages
Chinese (zh)
Inventor
徐超捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202310205615.7A
Publication of CN116188316A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G06T5/92 - Dynamic range modification of images or parts thereof based on global image properties
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a water area defogging method based on fog concentration perception. By analyzing the mathematical relationship between image features and fog concentration, a fog concentration model is established, so that image defogging can be converted into fog concentration minimization: the model establishes a mathematical relationship between an image block and the transmittance, the transmittance is obtained by minimizing the fog concentration, and the atmospheric light value is taken from the image block with the highest fog concentration level in the image. Because the transmittance and the atmospheric light value are estimated more accurately from the fog concentration perception model, the invention solves the problem of measuring and monitoring fog concentration while a pleasure boat navigates a foggy water area; it can be applied to the shipborne video monitoring system of inland tourist ships and addresses their navigation safety.

Description

Water area defogging method based on fog concentration sensing
Technical Field
The invention relates to the technical field of fog concentration detection and image defogging, in particular to a water area defogging method based on fog concentration sensing.
Background
A video monitoring system is installed on inland tourist ships, and the driver learns the current state of the waterway in real time from the video monitoring images. Because inland waterways lie in a water environment, fog forms easily, blurring the video monitoring images and creating a safety hazard for navigation. To eliminate this hazard, foggy video monitoring images must be defogged; however, applying defogging when the scene is not actually foggy over-defogs and distorts the image. Likewise, because fog is unevenly distributed, defogging the whole image with the same parameters causes color distortion in the fog-free regions of the image. The fog concentration level and the defogging algorithm are therefore closely related: perceiving the fog concentration yields a better defogging result, closer to the true fog-free image. Consequently, to obtain a clear, undistorted video monitoring image, the current fog concentration level must be monitored before defogging is applied, which shows the importance of effective and accurate fog concentration detection.
Currently, fog concentration detection methods can be divided into two main categories, visibility-based and image-feature-based, described below:
(1) Visibility-based fog concentration detection
Various visibility measurement methods have been proposed for road traffic systems. They can generally be classified into three types: manual observation, instrument measurement, and video image detection.
The manual method is the most traditional: visibility is estimated directly by human observation. However, observers tend to overestimate the distance of the vehicle ahead, and the method is prone to personal bias, time consuming, and requires specially trained personnel.
The instrument measurement method uses dedicated instruments to measure visibility; the instruments in common use are optical visibility meters and lidar visibility meters. Optical visibility meters divide into two types, transmissometers and scatterometers. A transmissometer calculates visibility by measuring the transparency of the atmosphere: a light source emits a beam to a receiver located at some distance, the receiver measures the intensity of light transmitted through the atmosphere, and the extinction coefficient is calculated from the transmittance between the two points. A scatterometer directly measures the scattered light intensity from a small sample volume, from which the extinction coefficient is computed. Optical visibility meters are highly complex, highly sensitive to fog inhomogeneity, and demanding to operate and maintain, which greatly limits their scope of use. A lidar visibility meter emits a laser beam of a given wavelength; a single-photon detector converts the echo into an electrical signal for data acquisition, yielding a curve of echo power versus distance, from which the atmospheric extinction coefficient is inverted and the visibility computed. Lidar visibility meters are expensive and unsuitable for large-scale deployment.
Visibility detection methods based on video images include camera model calibration, template matching, dark channel prior, dual brightness difference, deep learning, and others. The camera model calibration method takes the road itself as the target object, determines the atmospheric extinction coefficient through a real-time image processing program, and calculates atmospheric visibility using Koschmieder's law; it requires locating and geometrically calibrating specific targets on the road, such as lane lines, road signs and road boundaries, and its disadvantages are that accurate geometric calibration of the camera and a high-contrast reference in the scene are needed. The template matching method compares the image of the detection scene with weather images of known visibility to obtain a relative visibility; it needs neither camera calibration nor a reference object, but requires a large library of weather images annotated with visibility information. The dark channel prior method obtains the transmittance from the target object to the shooting point according to dark channel prior theory and derives the atmospheric extinction coefficient from the transmittance to estimate visibility; current research shows that the transmittance obtained this way is not accurate enough and the optimization algorithm performs poorly in real time. The dual brightness difference method calculates visibility from the ratio of the background brightness differences of two objects at different distances near the horizon to the corresponding horizontal sky; its advantage is that visibility can be detected even at night, its disadvantage that artificial targets must be erected. Current visibility prediction models operate on foggy images and need corresponding fog-free images of the same scene captured under different weather conditions for comparison, or salient objects such as lane markings or traffic signs in the foggy image to provide distance cues, or several foggy images of the same scene, or different degrees of polarization obtained through a rotating polarizing filter attached to the camera. However, acquiring enough images is time consuming, and it is difficult to find the maximum and minimum degrees of polarization under rapid scene changes.
(2) Fog concentration detection based on image features
Choi et al. use twelve NSS features to construct a fog concentration estimation model. Their method must compute 12 NSS features of the input image, so the computation time is too long for the video monitoring field and real-time monitoring cannot be achieved.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention aims to provide a water area defogging method based on fog concentration sensing, which estimates the transmittance and the atmospheric light value more accurately from a fog concentration perception model, solves the problem of measuring and monitoring fog concentration in foggy conditions while a pleasure boat navigates a water area, can be applied to the shipborne video monitoring system of inland tourist ships, and addresses their navigation safety.
In order to achieve the above object, the present invention adopts the following technical scheme:
a water area defogging method based on fog concentration perception comprises the following steps:
step 1, feature selection
The features associated with fog are shown in Table 1.
TABLE 1 Image features
Sequence number   Feature name
f1                MSCN coefficient variance
f2                MSCN coefficient vertical product variance (positive mode)
f3                MSCN coefficient vertical product variance (negative mode)
f4                Sharpening degree
f5                Sharpening degree variance
f6                Contrast (gray)
f7                Contrast (yellow-blue)
f8                Contrast (red-green)
f9                Image entropy
f10               Pixel-level dark channel prior
f11               Color saturation in HSV color space
f12               Chroma
The correlation between the 12 features and scene depth was calculated using the Pearson correlation coefficient (PCC) and the Spearman rank-order correlation coefficient (SROCC), as shown in Table 2 below:
TABLE 2 correlation of features
Index   f1    f2    f3    f4    f5    f6    f7    f8    f9    f10   f11   f12
PCC     0.26  0.32  0.27  0.33  0.36  0.45  0.34  0.16  0.15  0.25  0.38  0.35
SROCC   0.23  0.32  0.24  0.32  0.34  0.43  0.31  0.14  0.14  0.24  0.34  0.33
According to the correlation analysis of Table 2, f8 and f9 have a low degree of correlation; they are replaced with two features of waterway foggy-day images, the color value in HSV color space and the gray standard deviation, calculated by the following formulas:
V = max(R, G, B)
σ = sqrt( (1/(W×H)) ∑_(i=1..W) ∑_(j=1..H) (I_gray(i,j) - Ī_gray)² )
where V is the color value in HSV color space; R, G and B are the three channels of the color space; I_gray(i,j) is the grayscale image converted from the color image; W and H are the width and height of the image; and Ī_gray is the average gray value of the image;
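As an illustrative sketch of these two features (assuming OpenCV and NumPy; the function name waterway_features is not from the original text), each image block can be processed as follows:

```python
import cv2
import numpy as np

def waterway_features(patch_bgr):
    """Compute the two replacement water-area features of an image block:
    the HSV color value V (mean of the per-pixel max over B, G, R) and
    the gray-level standard deviation around the mean gray value."""
    v = patch_bgr.max(axis=2).astype(np.float32) / 255.0  # HSV value channel
    color_value = float(v.mean())

    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray_std = float(gray.std())  # sqrt of mean squared deviation from the mean

    return color_value, gray_std
```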
The new feature set after replacement is shown in Table 3, and its correlation with scene depth is shown in Table 4 below:
TABLE 3 New feature set
Sequence number   Feature name
f1                MSCN coefficient variance
f2, f3            MSCN coefficient vertical product variance
f4                Sharpening degree
f5                Sharpening degree variance
f6                Contrast (gray)
f7                Contrast (yellow-blue)
f8'               Color value in HSV color space
f9'               Gray standard deviation
f10               Pixel-level dark channel prior
f11               Saturation
f12               Chroma
TABLE 4 New feature set correlation analysis
Index   f1    f2    f3    f4    f5    f6    f7    f8'   f9'   f10   f11   f12
PCC     0.26  0.32  0.27  0.33  0.36  0.45  0.34  0.34  0.64  0.25  0.38  0.35
SROCC   0.23  0.32  0.24  0.32  0.34  0.43  0.31  0.32  0.58  0.24  0.34  0.33
According to the correlation analysis results of Table 4, contrast f6, gray standard deviation f9' and saturation f11 have large correlation coefficient values; all three are therefore taken as the features for fog concentration estimation;
the relationship among contrast, gray standard deviation, saturation and transmittance is:
σ_I = t(x)·σ_J
w_I = (1/N) ∑_(x∈Ω) Δv_I(x) = t(x)·w_J
s_I = t(x)·v_J·s_J / (t(x)·v_J + (1 - t(x))·v_A)
where v_I is the brightness value of the foggy image and v_J that of the fog-free image; v_A is the atmospheric light value in HSV space; s_I is the saturation value of the foggy image and s_J that of the fog-free image; v̄_I is the mean value of v_I; Δv_I(x) is the contrast at a position x in the image block; w_I and w_J are the contrast values of the foggy and fog-free images; σ_I and σ_J are the gray standard deviations of the foggy and fog-free images; N is the number of pixels in the image block; t(x) is the transmittance; x is a position in the image; and Ω is the image block;
step 2, calculating the fog concentration
After the new set of fog-related features is determined, part of the RESIDE data set is selected as training data for the network. The RESIDE data set contains clear fog-free outdoor scene images and the corresponding synthesized foggy images, most of them water scenes; a number of clear fog-free inland waterway images are selected from it to create a fog-free image corpus, and the corresponding foggy inland waterway images to create a foggy image corpus;
(1) A frame of the shipboard monitoring video is taken as the test image and divided into a number of test image blocks of p pixels; for each block, the 3 features of contrast, gray standard deviation and saturation are computed, so that each test image block yields a 3-dimensional feature vector δ; at the same time, the empirical covariance matrix Σ of the feature set of all test image blocks in the test image is computed;
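A minimal sketch of step (1), assuming NumPy, square p×p blocks, and a caller-supplied feature_fn that returns the 3-dimensional (contrast, gray standard deviation, saturation) vector of one block; all helper names are illustrative:

```python
import numpy as np

def split_into_blocks(img, p):
    """Tile a video frame into non-overlapping p x p test image blocks
    (a partial border, if any, is discarded)."""
    h, w = img.shape[:2]
    return [img[i:i + p, j:j + p]
            for i in range(0, h - p + 1, p)
            for j in range(0, w - p + 1, p)]

def block_feature_set(blocks, feature_fn):
    """Stack the 3-D feature vector delta of every block and compute the
    empirical covariance matrix Sigma of the whole feature set."""
    deltas = np.stack([np.asarray(feature_fn(b), dtype=np.float64)
                       for b in blocks])          # shape (n_blocks, 3)
    sigma = np.cov(deltas, rowvar=False)          # empirical covariance
    return deltas, sigma
```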
(2) Two multivariate Gaussian models are adopted, a fog-free MVG model and a foggy MVG model, which are the two standard MVG models of fog-related features extracted from the fog-free image corpus and the foggy image corpus, respectively;
the haze free MVG model is defined as follows:
MVG(f₁) = exp( -(1/2)·(f₁ - μ₁)·Σ₁⁻¹·(f₁ - μ₁)ᵀ ) / ( (2π)^(r/2)·|Σ₁|^(1/2) )
where f₁ is a feature vector describing the fog-free image feature set [σ_J, w_J, s_J], μ₁ is the mean of f₁, Σ₁ is the covariance matrix of f₁, and r is the dimension of the vector;
likewise, the foggy MVG model is defined as:
MVG(f₂) = exp( -(1/2)·(f₂ - μ₂)·Σ₂⁻¹·(f₂ - μ₂)ᵀ ) / ( (2π)^(r/2)·|Σ₂|^(1/2) )
where f₂ is a feature vector describing the foggy image feature set [σ_J, w_J, s_J], μ₂ is the mean of f₂, and Σ₂ is the covariance matrix of f₂;
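Fitting the two standard MVG models reduces to estimating a mean vector and a covariance matrix per corpus; a sketch assuming NumPy (fit_mvg is an illustrative name):

```python
import numpy as np

def fit_mvg(corpus_features):
    """Fit a standard MVG model to the fog-related feature vectors
    extracted from an image corpus (one 3-D vector per image or block)."""
    x = np.asarray(corpus_features, dtype=np.float64)  # shape (n, 3)
    return x.mean(axis=0), np.cov(x, rowvar=False)     # (mu, Sigma)

# mu1, sigma1 = fit_mvg(fog_free_corpus_features)   # fog-free MVG model
# mu2, sigma2 = fit_mvg(foggy_corpus_features)      # foggy MVG model
```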
(3) The fog-free level c_f of a single test image block is computed by measuring the Mahalanobis distance between the multivariate Gaussian MVG model fitted to the fog-aware statistical features extracted from the test image and the standard multivariate Gaussian MVG model of fog-aware features extracted from the fog-free image corpus, thereby predicting the fog-free level of the test image;
c_f = sqrt( (δ - μ₁)·((Σ₁ + Σ)/2)⁻¹·(δ - μ₁)ᵀ )
(4) The foggy level c_ff of a single test image block is computed analogously: the Mahalanobis distance between the multivariate Gaussian MVG model fitted to the fog-aware statistics extracted from the test image and the standard multivariate Gaussian MVG model of fog-aware features extracted from the foggy image corpus predicts the foggy level of the test image block;
c_ff = sqrt( (δ - μ₂)·((Σ₂ + Σ)/2)⁻¹·(δ - μ₂)ᵀ )
the fog concentration c of a single test image patch is defined as follows:
c = c_f - c_ff
Σ is a symmetric positive definite covariance matrix, i.e. there exists a non-singular matrix L such that Σ = LᵀL holds, and the vector Y = (δ - μ)·L⁻¹ follows a multivariate standard normal distribution; moreover, the mean feature vector of a foggy image is smaller than the standard fog-free feature vector and larger than the standard foggy feature vector, so the fog-free level is simplified as:
c_f = ||(δ - μ₁)·L₁⁻¹||₁
where L₁ is a non-singular matrix such that Σ₁ = L₁ᵀL₁ holds, and L₁⁻¹ is the inverse of L₁;
similarly, the foggy level is reduced to:
c_ff = ||(δ - μ₂)·L₂⁻¹||₁
where L₂ is a non-singular matrix such that Σ₂ = L₂ᵀL₂ holds, and L₂⁻¹ is the inverse of L₂;
for any image block of a foggy image, the Mahalanobis distance to the fog-free MVG model is relatively large and the Mahalanobis distance to the foggy MVG model is relatively small; conversely, for any image block of a fog-free image, the distance to the fog-free MVG model is small and the distance to the foggy MVG model is large; thus, the larger the value of c_f - c_ff, the higher the fog concentration, and the fog concentration calculation is simplified to:
c = c_f - c_ff + c₀ = ||(δ - μ₁)·L₁⁻¹||₁ - ||(δ - μ₂)·L₂⁻¹||₁ + c₀
where c₀, the difference between the standard fog-free level and the standard foggy level, c₀ = ||(f₁ - μ₁)L₁⁻¹||₁ + ||(f₂ - μ₂)L₂⁻¹||₁, is a fixed value;
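A sketch of the per-block fog level computation, assuming NumPy; the pooled-covariance Mahalanobis form follows the c_f and c_ff definitions above, and the fixed offset c₀ is omitted for simplicity:

```python
import numpy as np

def mvg_distance(delta, mu, model_cov, test_cov):
    """Mahalanobis-type distance between a block feature vector delta and
    a standard MVG model, pooling the model and test covariances."""
    pooled = (model_cov + test_cov) / 2.0
    d = delta - mu
    return float(np.sqrt(d @ np.linalg.inv(pooled) @ d))

def fog_concentration(delta, mu1, sigma1, mu2, sigma2, test_cov):
    c_f = mvg_distance(delta, mu1, sigma1, test_cov)   # fog-free level
    c_ff = mvg_distance(delta, mu2, sigma2, test_cov)  # foggy level
    return c_f - c_ff                                  # larger -> denser fog
```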
the feature values of an image are related to scene depth, and so is the transmittance; to obtain a linear relation between the feature values and the transmittance, the fog concentration model is simplified to:
c = ||k||₁
where k = (μ₁ - f₁)·L₁⁻¹ - (f₂ - μ₂)·L₂⁻¹ = μ₁L₁⁻¹ + μ₂L₂⁻¹ - f₁L₁⁻¹ - f₂L₂⁻¹;
When the image is a clear image, the characteristic value of the image block is higher, and then the fog concentration of the image block is smaller; in contrast, when the image is a foggy image, the characteristic value of the image is low, the foggy concentration of the image is high, and the foggy concentration model can be approximately expressed as a linear relationship of a linear combination of the contrast, the gray standard deviation and the saturation of the image, and the foggy concentration model is approximately expressed as follows:
c=b-(c 1 ×σ J +c 2 ×w J +c 3 ×s J )
wherein b is the offset of the mist concentration,
Figure BDA0004110827080000083
B=μ 1 L 1 -12 L 2 -1 ,/>
Figure BDA0004110827080000084
A=L 1 -1 +L 2 -1 the method comprises the steps of carrying out a first treatment on the surface of the Substituting the relation between the 3 kinds of fog related features and the transmittance into the approximate fog concentration model expression to obtain a fog concentration model related to the transmittance, wherein the fog concentration model related to the transmittance is represented by the following formula:
c(t) = b - ( c₁·σ_I/t(x) + c₂·w_I/t(x) + c₃·s_I·(t(x)·v_J + (1 - t(x))·v_A)/(t(x)·v_J) )
c(t) is the fog concentration model expressed in terms of the transmittance;
step 3, calculating the transmittance
The transmittance t(x) is obtained by minimizing the fog concentration:
t(x) = argmin_t c(t)
substituting the fog concentration model into the above yields the transmittance model:
[formula image in the original: the closed-form expression of t(x) obtained by minimizing c(t)]
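Since the closed-form expression is not reproduced here, a numerical stand-in is a bounded one-dimensional minimization of c(t); a sketch assuming NumPy, with the admissible range [0.1, 1] chosen purely for illustration:

```python
import numpy as np

def estimate_transmittance(c_of_t, t_min=0.1, t_max=1.0, steps=90):
    """t(x) = argmin_t c(t) by coarse grid search; c_of_t is the
    fog-concentration model of one image block as a function of t."""
    ts = np.linspace(t_min, t_max, steps)
    values = [c_of_t(t) for t in ts]
    return float(ts[int(np.argmin(values))])
```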
step 4, calculating the atmospheric light value
The atmospheric light value is selected from the most opaque region of the image: the larger an image block's fog level, the larger its fog concentration, so the atmospheric light value is taken at the maximum of the image block fog levels; calculating the atmospheric light value A from the image's fog concentration model gives a more accurate A and avoids interference from highlight objects or regions;
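A sketch of this selection rule, assuming NumPy and the block list from step 2 (names illustrative):

```python
import numpy as np

def atmospheric_light(blocks, fog_levels):
    """Take the atmospheric light value A as the mean color of the block
    with the largest fog concentration level (the most opaque region),
    which avoids being misled by highlight objects."""
    i = int(np.argmax(fog_levels))                   # foggiest test block
    p = blocks[i].astype(np.float64) / 255.0
    return p.reshape(-1, p.shape[-1]).mean(axis=0)   # per-channel A
```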
step 5, defogging the image
The atmospheric scattering model is as follows:
I(x)=J(x)t(x)+A(1-t(x))
where I(x) is the foggy image, J(x) is the corresponding fog-free image, t(x) is the transmittance, and A is the atmospheric light value;
A and t(x) are estimated according to step 3 and step 4 and substituted into the atmospheric scattering model, yielding the fog-free image J(x) as shown in the following formula;
J(x) = (I(x) - A)/t(x) + A
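Inverting the atmospheric scattering model is then a per-pixel operation; a sketch assuming NumPy, where the lower bound on t(x) is a common numerical safeguard rather than a value stated in the text:

```python
import numpy as np

def recover_fog_free(img_bgr, t_map, A, t_floor=0.1):
    """J(x) = (I(x) - A) / t(x) + A, with t clipped away from zero so the
    division does not amplify noise in dense-fog regions."""
    I = img_bgr.astype(np.float64) / 255.0
    t = np.clip(t_map, t_floor, 1.0)[..., np.newaxis]
    J = (I - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```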
compared with the prior art, the invention has the following advantages:
1) Simple and effective: no manual observation or optical instruments are needed, reducing instrument cost; no corresponding fog-free images of the same scene need to be captured under different weather conditions; and no lane markings or traffic signs are needed to provide distance cues on inland waterways.
2) More accurate fog concentration detection: visibility is a single feature, and in a foggy environment fog is unevenly distributed and varies with scene depth, so visibility alone is a poor measure of fog concentration. Instead of selecting visibility, the correlations of 12 image features are analyzed with PCC and SROCC.
3) Selecting highly correlated features reduces time complexity: instead of computing 12 image features, the 3 most relevant features are selected to build the fog concentration detection model, shortening the computation time.
4) Better defogging effect: the transmittance and atmospheric light value estimated from the water-fog concentration perception model are more accurate, and the transmittance is refined with fast guided filtering, so the restored image does not suffer color distortion, blocking artifacts or similar problems caused by transmittance estimation errors.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2a and 2b are transmittance maps estimated by the improved method for a fog-free image and a foggy image, respectively.
Fig. 3 is a graph comparing the defogging effect of the dark channel defogging algorithm and the method of the present invention.
Fig. 4 is a graph showing defogging effect of the method of the present invention on a foggy image.
Fig. 5 is a schematic view of image acquisition.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and detailed description:
As shown in fig. 5, the camera is connected to a tablet through a switch, and the image captured by the camera is displayed on the tablet, completing image acquisition.
Dark channel theory assumes that, in the image blocks around each pixel of a haze-free image, there are dark pixels whose gray value is very low or equal to 0; dark channel algorithms therefore tend to estimate too low a transmittance for sky regions, causing color distortion and over-defogging in the restored result. Hence, the fog concentration distribution of the image is obtained from the fog concentration model, the transmittance and the atmospheric light value are estimated more accurately from that model, the transmittance is refined with fast guided filtering (see the sketch below), and the fog-free image is restored better; the flow chart is shown in Fig. 1.
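For the refinement step, one possible realization is the guided filter from opencv-contrib (cv2.ximgproc.guidedFilter); the availability of the contrib module and the radius and eps values are assumptions:

```python
import cv2
import numpy as np

def refine_transmittance(frame_bgr, t_coarse, radius=30, eps=1e-3):
    """Refine the block-wise transmittance map with guided filtering,
    using the hazy frame itself as the guidance image so that scene
    edges and textures are preserved in the refined map."""
    guide = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    guide = guide.astype(np.float32) / 255.0
    return cv2.ximgproc.guidedFilter(guide, t_coarse.astype(np.float32),
                                     radius, eps)
```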
As shown in fig. 1, the water area defogging method based on fog concentration sensing of the invention comprises the following steps:
step 1, feature selection
Choi et al. found, by collecting statistics over a large number of foggy and fog-free images, that foggy images exhibit characteristic feature attenuation, and gave a set of fog-related features: MSCN coefficient variance (f1), MSCN coefficient vertical product variance (positive mode) (f2), MSCN coefficient vertical product variance (negative mode) (f3), sharpening degree (f4), sharpening degree variance (f5), contrast (gray) (f6), contrast (yellow-blue) (f7), contrast (red-green) (f8), image entropy (f9), pixel-level dark channel prior (f10), color saturation in HSV color space (f11), and chroma (f12), as shown in Table 1.
TABLE 1 Image features
Sequence number   Feature name
f1                MSCN coefficient variance
f2                MSCN coefficient vertical product variance (positive mode)
f3                MSCN coefficient vertical product variance (negative mode)
f4                Sharpening degree
f5                Sharpening degree variance
f6                Contrast (gray)
f7                Contrast (yellow-blue)
f8                Contrast (red-green)
f9                Image entropy
f10               Pixel-level dark channel prior
f11               Color saturation in HSV color space
f12               Chroma
The correlation between the 12 features and scene depth was calculated using the Pearson correlation coefficient (PCC) and the Spearman rank-order correlation coefficient (SROCC), as shown in Table 2 below:
TABLE 2 correlation of features
Index   f1    f2    f3    f4    f5    f6    f7    f8    f9    f10   f11   f12
PCC     0.26  0.32  0.27  0.33  0.36  0.45  0.34  0.16  0.15  0.25  0.38  0.35
SROCC   0.23  0.32  0.24  0.32  0.34  0.43  0.31  0.14  0.14  0.24  0.34  0.33
According to the correlation analysis of Table 2, f8 and f9 have a low degree of correlation; they are replaced with two features of waterway foggy-day images, the color value in HSV color space and the gray standard deviation, calculated by the following formulas:
V = max(R, G, B)
σ = sqrt( (1/(W×H)) ∑_(i=1..W) ∑_(j=1..H) (I_gray(i,j) - Ī_gray)² )
where V is the color value in HSV color space; R, G and B are the three channels of the color space; I_gray(i,j) is the grayscale image converted from the color image; W and H are the width and height of the image; and Ī_gray is the average gray value of the image.
The new feature set after replacement is shown in Table 3, and its correlation with scene depth is shown in Table 4 below:
TABLE 3 New feature set
Sequence number   Feature name
f1                MSCN coefficient variance
f2, f3            MSCN coefficient vertical product variance
f4                Sharpening degree
f5                Sharpening degree variance
f6                Contrast (gray)
f7                Contrast (yellow-blue)
f8'               Color value in HSV color space
f9'               Gray standard deviation
f10               Pixel-level dark channel prior
f11               Saturation
f12               Chroma
TABLE 4 New feature set correlation analysis
Index   f1    f2    f3    f4    f5    f6    f7    f8'   f9'   f10   f11   f12
PCC     0.26  0.32  0.27  0.33  0.36  0.45  0.34  0.34  0.64  0.25  0.38  0.35
SROCC   0.23  0.32  0.24  0.32  0.34  0.43  0.31  0.32  0.58  0.24  0.34  0.33
According to the correlation analysis results of Table 4, contrast (f6), gray standard deviation (f9') and saturation (f11) have large correlation coefficient values, so all three are taken as the features for fog concentration estimation. On the basis of this experimental analysis, the invention selects contrast (f6), gray standard deviation (f9') and saturation (f11) to perform fog concentration estimation.
The relationships among contrast, gray standard deviation, saturation and transmittance are:
σ_I = t(x)·σ_J
w_I = (1/N) ∑_(x∈Ω) Δv_I(x) = t(x)·w_J
s_I = t(x)·v_J·s_J / (t(x)·v_J + (1 - t(x))·v_A)
where v_I is the brightness value of the foggy image and v_J that of the fog-free image; v_A is the atmospheric light value in HSV space; s_I is the saturation value of the foggy image and s_J that of the fog-free image; v̄_I is the mean value of v_I; Δv_I(x) is the contrast at a position x in the image block; w_I and w_J are the contrast values of the foggy and fog-free images; σ_I and σ_J are the gray standard deviations of the foggy and fog-free images; N is the number of pixels in the image block; t(x) is the transmittance; x is a position in the image; and Ω is the image block.
Step 2, calculating the fog concentration
After the new set of fog-related features is determined, part of the RESIDE data set is selected as training data for the network. The RESIDE data set contains clear fog-free outdoor scene images and the corresponding synthesized foggy images, most of them water scenes; 150 clear fog-free inland waterway images are selected to create the fog-free image corpus, and the corresponding foggy inland waterway images to create the foggy image corpus.
(1) A frame of the shipboard monitoring video is taken as the test image and divided into a number of test image blocks of p pixels; for each block, the 3 features of contrast, gray standard deviation and saturation are computed. Each test image block thus yields a 3-dimensional feature vector δ. At the same time, the empirical covariance matrix Σ of the feature set of all test image blocks in the test image is computed.
(2) Two multivariate Gaussian (MVG) models are used, a fog-free MVG model and a foggy MVG model, which are the two standard MVG models of fog-related features extracted from the fog-free image corpus and the foggy image corpus, respectively.
The haze free MVG model is defined as follows:
MVG(f₁) = exp( -(1/2)·(f₁ - μ₁)·Σ₁⁻¹·(f₁ - μ₁)ᵀ ) / ( (2π)^(r/2)·|Σ₁|^(1/2) )
where f₁ is a feature vector describing the fog-free image feature set [σ_J, w_J, s_J], μ₁ is the mean of f₁, Σ₁ is the covariance matrix of f₁, and r is the dimension of the vector, which is 3 in this formula.
Likewise, the foggy MVG model is defined as:
MVG(f₂) = exp( -(1/2)·(f₂ - μ₂)·Σ₂⁻¹·(f₂ - μ₂)ᵀ ) / ( (2π)^(r/2)·|Σ₂|^(1/2) )
where f₂ is a feature vector describing the foggy image feature set [σ_J, w_J, s_J], μ₂ is the mean of f₂, and Σ₂ is the covariance matrix of f₂.
(3) The fog-free level c_f of a single test image block is computed by measuring the Mahalanobis distance between the multivariate Gaussian MVG model fitted to the fog-aware statistical features extracted from the test image and the standard multivariate Gaussian MVG model of fog-aware features extracted from the corpus of 150 fog-free images, thereby predicting the fog-free level of the test image.
c_f = sqrt( (δ - μ₁)·((Σ₁ + Σ)/2)⁻¹·(δ - μ₁)ᵀ )
(4) The foggy level c_ff of a single test image block is computed analogously: the Mahalanobis distance between the multivariate Gaussian MVG model fitted to the fog-aware statistics extracted from the test image and the standard multivariate Gaussian MVG model of fog-aware features extracted from the corpus of 150 foggy images predicts the foggy level of the test image block.
c_ff = sqrt( (δ - μ₂)·((Σ₂ + Σ)/2)⁻¹·(δ - μ₂)ᵀ )
The fog concentration c of a single test image patch is defined as follows:
c = c_f - c_ff
Σ is a symmetric positive definite covariance matrix, i.e. there exists a non-singular matrix L such that Σ = LᵀL holds, and the vector Y = (δ - μ)·L⁻¹ follows a multivariate standard normal distribution. Moreover, the mean feature vector of a foggy image is generally smaller than the standard fog-free feature vector and larger than the standard foggy feature vector, so the fog-free level can be simplified as:
c_f = ||(δ - μ₁)·L₁⁻¹||₁
where L₁ is a non-singular matrix such that Σ₁ = L₁ᵀL₁ holds, and L₁⁻¹ is the inverse of L₁. The foggy level can be reduced to:
c_ff = ||(δ - μ₂)·L₂⁻¹||₁
where L₂ is a non-singular matrix such that Σ₂ = L₂ᵀL₂ holds, and L₂⁻¹ is the inverse of L₂.
For any image block of a foggy image, the Mahalanobis distance to the fog-free MVG model is relatively large and the Mahalanobis distance to the foggy MVG model is relatively small. Conversely, for any image block of a fog-free image, the distance to the fog-free MVG model is small and the distance to the foggy MVG model is large. Thus, the larger the value of (c_f - c_ff), the higher the fog concentration, and the fog concentration calculation can be simplified to the formula:
c = c_f - c_ff + c₀ = ||(δ - μ₁)·L₁⁻¹||₁ - ||(δ - μ₂)·L₂⁻¹||₁ + c₀
where c₀, the difference between the standard fog-free level and the standard foggy level, c₀ = ||(f₁ - μ₁)L₁⁻¹||₁ + ||(f₂ - μ₂)L₂⁻¹||₁, is a fixed value.
The feature values of an image are related to scene depth, and so is the transmittance; to obtain a linear relation between the feature values and the transmittance, the fog concentration model can be simplified to:
c = ||k||₁
where k = (μ₁ - f₁)·L₁⁻¹ - (f₂ - μ₂)·L₂⁻¹ = μ₁L₁⁻¹ + μ₂L₂⁻¹ - f₁L₁⁻¹ - f₂L₂⁻¹.
When the image is a clear image, the characteristic value of the image block is higher, and then the fog density of the image block is smaller. In contrast, when the image is a foggy image, the characteristic value of the image is lower, the foggy concentration of the image is larger, the foggy concentration model can be approximately expressed as a linear relationship of a linear combination of the contrast, the gray standard deviation and the saturation of the image, and the foggy concentration model can be approximately expressed as follows:
c=b-(c 1 ×σ J +c 2 ×w J +c 3 ×s J )
wherein b is the offset of the mist concentration,
Figure BDA0004110827080000163
B=μ 1 L 1 -12 L 2 -1
Figure BDA0004110827080000164
A=L 1 -1 +L 2 -1 . Substituting the relation between the 3 kinds of fog-related features and the transmittance into the approximate fog concentration model expression can obtain a fog concentration model related to the transmittance as follows:
c(t) = b - ( c₁·σ_I/t(x) + c₂·w_I/t(x) + c₃·s_I·(t(x)·v_J + (1 - t(x))·v_A)/(t(x)·v_J) )
c(t) is the fog concentration model expressed in terms of the transmittance.
Step 3, calculating the transmittance
The fog concentration level of the image is first obtained from the fog concentration model. When the fog concentration is greater than 1, defogging is performed: the transmittance is calculated from the transmittance-related fog concentration model and the image is defogged to obtain a clear image. When the fog concentration is less than 1, the transmittance is not calculated, no defogging is performed, and the monitoring video is played and displayed directly, as sketched below.
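The gating logic reduces to a per-frame threshold test; a minimal sketch with illustrative function names:

```python
def process_frame(frame, fog_concentration_of, defog):
    """Defog only when the frame's fog concentration exceeds 1;
    otherwise display the monitoring frame unchanged."""
    return defog(frame) if fog_concentration_of(frame) > 1 else frame
```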
The core principle of defogging an image is to reduce the fog concentration in the image, so the transmittance t(x) can be obtained by minimizing the fog concentration:
t(x) = argmin_t c(t)
The transmittance model obtained by substituting the fog concentration model into the above is:
[formula image in the original: the closed-form expression of t(x) obtained by minimizing c(t)]
Transmittance maps estimated with the fog concentration model for a fog-free image and a foggy image are shown in Figs. 2a and 2b. As the figures show, the transmittance of the image is estimated more accurately, and the boundary and texture parts of the image are well preserved.
Step 4, calculating the atmospheric light value
The atmospheric light value is selected from the most opaque region of the image: the larger an image block's fog level, the larger its fog concentration, so the atmospheric light value is taken at the maximum of the image block fog levels. Calculating the atmospheric light value A from the image's fog concentration model gives a more accurate A and avoids interference from highlight objects or regions. The denser the fog at a point of the foggy image, the greater the fog concentration coefficient at that point.
Step 5, defogging the image
The atmospheric scattering model is as follows:
I(x)=J(x)t(x)+A(1-t(x))
where I(x) is the foggy image, J(x) is the corresponding fog-free image, t(x) is the transmittance, and A is the atmospheric light value.
A and t(x) are estimated according to steps 3 and 4 and substituted into the atmospheric scattering model, yielding the fog-free image J(x) as shown in the following formula.
J(x) = (I(x) - A)/t(x) + A
As shown in fig. 3, the image produced by the dark channel defogging algorithm shows sky color distortion: the sky region takes on spurious colors, blocking artifacts and halos appear in the tree region, and the overall tone is dark. With the improved algorithm, which defogs according to the fog concentration distribution, the colors are more realistic and the defogging effect is better.
Defogging with the transmittance and atmospheric light value estimated by the fog concentration model gives the defogging result shown in fig. 4. As the figure shows, the method achieves a better effect after defogging: the restored image has higher contrast and brightness, overcoming the dark results of traditional defogging algorithms; the saturation of the processed result is moderate, the colors are vivid, the edge parts of the image are effectively preserved, and no obvious distortion appears.
The method predicts the fog concentration of a foggy scene from a single image without referring to a corresponding fog-free image, without depending on salient objects in the scene, without auxiliary geographic or camera information, without estimating a depth-dependent transmission map, and without human-labeled training. Compared with manual, instrument and video-image methods, accuracy is improved while errors and the cost of optical instruments are reduced, and no extra markers are needed to provide distance cues. The fog concentration model can not only predict the fog concentration of the whole image but also provide a local fog concentration level for each image block. By analyzing the water area environment, two water-area features are added; by analyzing the relation between features and transmittance, the three most correlated features are selected, which reduces the computation time of the subsequent fog concentration calculation and judges the fog concentration more accurately. With a fog concentration model tailored to the water environment, the transmittance and the atmospheric light value can be estimated more accurately, providing a better basis for the subsequent defogging algorithm and a better defogging effect.

Claims (1)

1. A water area defogging method based on fog concentration perception is characterized in that: the method comprises the following steps:
step 1, feature selection
The features associated with fog are shown in Table 1.
TABLE 1 Image features
Sequence number   Feature name
f1                MSCN coefficient variance
f2                MSCN coefficient vertical product variance (positive mode)
f3                MSCN coefficient vertical product variance (negative mode)
f4                Sharpening degree
f5                Sharpening degree variance
f6                Contrast (gray)
f7                Contrast (yellow-blue)
f8                Contrast (red-green)
f9                Image entropy
f10               Pixel-level dark channel prior
f11               Color saturation in HSV color space
f12               Chroma
The correlation between the 12 features and scene depth was calculated using the Pearson correlation coefficient (PCC) and the Spearman rank-order correlation coefficient (SROCC), as shown in Table 2 below:
TABLE 2 correlation of features
Index   f1    f2    f3    f4    f5    f6    f7    f8    f9    f10   f11   f12
PCC     0.26  0.32  0.27  0.33  0.36  0.45  0.34  0.16  0.15  0.25  0.38  0.35
SROCC   0.23  0.32  0.24  0.32  0.34  0.43  0.31  0.14  0.14  0.24  0.34  0.33
According to the correlation analysis of Table 2, f8 and f9 have a low degree of correlation; they are replaced with two features of waterway foggy-day images, the color value in HSV color space and the gray standard deviation, calculated by the following formulas:
V = max(R, G, B)
σ = sqrt( (1/(W×H)) ∑_(i=1..W) ∑_(j=1..H) (I_gray(i,j) - Ī_gray)² )
where V is the color value in HSV color space; R, G and B are the three channels of the color space; I_gray(i,j) is the grayscale image converted from the color image; W and H are the width and height of the image; and Ī_gray is the average gray value of the image;
the new feature set after replacement is shown in Table 3, and its correlation with scene depth is shown in Table 4 below:
TABLE 3 New feature set
Sequence number   Feature name
f1                MSCN coefficient variance
f2, f3            MSCN coefficient vertical product variance
f4                Sharpening degree
f5                Sharpening degree variance
f6                Contrast (gray)
f7                Contrast (yellow-blue)
f8'               Color value in HSV color space
f9'               Gray standard deviation
f10               Pixel-level dark channel prior
f11               Saturation
f12               Chroma
TABLE 4 New feature set correlation analysis
Index   f1    f2    f3    f4    f5    f6    f7    f8'   f9'   f10   f11   f12
PCC     0.26  0.32  0.27  0.33  0.36  0.45  0.34  0.34  0.64  0.25  0.38  0.35
SROCC   0.23  0.32  0.24  0.32  0.34  0.43  0.31  0.32  0.58  0.24  0.34  0.33
According to the correlation analysis results of Table 4, contrast f6, gray standard deviation f9' and saturation f11 have large correlation coefficient values; all three are therefore taken as the features for fog concentration estimation;
the relationships among contrast, gray standard deviation, saturation and transmittance are:
σ_I = t(x)·σ_J
w_I = (1/N) ∑_(x∈Ω) Δv_I(x) = t(x)·w_J
s_I = t(x)·v_J·s_J / (t(x)·v_J + (1 - t(x))·v_A)
where v_I is the brightness value of the foggy image and v_J that of the fog-free image; v_A is the atmospheric light value in HSV space; s_I is the saturation value of the foggy image and s_J that of the fog-free image; v̄_I is the mean value of v_I; Δv_I(x) is the contrast at a position x in the image block; w_I and w_J are the contrast values of the foggy and fog-free images; σ_I and σ_J are the gray standard deviations of the foggy and fog-free images; N is the number of pixels in the image block; t(x) is the transmittance; x is a position in the image; and Ω is the image block;
step 2, calculating the fog concentration
After the new set of fog-related features is determined, part of the RESIDE data set is selected as training data for the network. The RESIDE data set contains clear fog-free outdoor scene images and the corresponding synthesized foggy images, most of them water scenes; a number of clear fog-free inland waterway images are selected from it to create a fog-free image corpus, and the corresponding foggy inland waterway images to create a foggy image corpus;
(1) A frame of the shipboard monitoring video is taken as the test image and divided into a number of test image blocks of p pixels; for each block, the 3 features of contrast, gray standard deviation and saturation are computed, so that each test image block yields a 3-dimensional feature vector δ; at the same time, the empirical covariance matrix Σ of the feature set of all test image blocks in the test image is computed;
(2) Two multivariate Gaussian models are adopted, a fog-free MVG model and a foggy MVG model, which are the two standard MVG models of fog-related features extracted from the fog-free image corpus and the foggy image corpus, respectively;
the haze free MVG model is defined as follows:
MVG(f₁) = exp( -(1/2)·(f₁ - μ₁)·Σ₁⁻¹·(f₁ - μ₁)ᵀ ) / ( (2π)^(r/2)·|Σ₁|^(1/2) )
where f₁ is a feature vector describing the fog-free image feature set [σ_J, w_J, s_J], μ₁ is the mean of f₁, Σ₁ is the covariance matrix of f₁, and r is the dimension of the vector;
likewise, the foggy MVG model is defined as:
MVG(f₂) = exp( -(1/2)·(f₂ - μ₂)·Σ₂⁻¹·(f₂ - μ₂)ᵀ ) / ( (2π)^(r/2)·|Σ₂|^(1/2) )
where f₂ is a feature vector describing the foggy image feature set [σ_J, w_J, s_J], μ₂ is the mean of f₂, and Σ₂ is the covariance matrix of f₂;
(3) The fog-free level c_f of a single test image block is computed by measuring the Mahalanobis distance between the multivariate Gaussian MVG model fitted to the fog-aware statistical features extracted from the test image and the standard multivariate Gaussian MVG model of fog-aware features extracted from the fog-free image corpus, thereby predicting the fog-free level of the test image;
c_f = sqrt( (δ - μ₁)·((Σ₁ + Σ)/2)⁻¹·(δ - μ₁)ᵀ )
(4) The foggy level c_ff of a single test image block is computed analogously: the Mahalanobis distance between the multivariate Gaussian MVG model fitted to the fog-aware statistics extracted from the test image and the standard multivariate Gaussian MVG model of fog-aware features extracted from the foggy image corpus predicts the foggy level of the test image block;
c_ff = sqrt( (δ - μ₂)·((Σ₂ + Σ)/2)⁻¹·(δ - μ₂)ᵀ )
the fog concentration c of a single test image patch is defined as follows:
c = c_f - c_ff
Σ is a symmetric positive definite covariance matrix, i.e. there exists a non-singular matrix L such that Σ = LᵀL holds, and the vector Y = (δ - μ)·L⁻¹ follows a multivariate standard normal distribution; moreover, the mean feature vector of a foggy image is smaller than the standard fog-free feature vector and larger than the standard foggy feature vector, so the fog-free level is simplified as:
c_f = ||(δ - μ₁)·L₁⁻¹||₁
where L₁ is a non-singular matrix such that Σ₁ = L₁ᵀL₁ holds, and L₁⁻¹ is the inverse of L₁;
similarly, the foggy level is reduced to:
c_ff = ||(δ - μ₂)·L₂⁻¹||₁
where L₂ is a non-singular matrix such that Σ₂ = L₂ᵀL₂ holds, and L₂⁻¹ is the inverse of L₂;
for any image block of a foggy image, the Mahalanobis distance to the fog-free MVG model is relatively large and the Mahalanobis distance to the foggy MVG model is relatively small; conversely, for any image block of a fog-free image, the distance to the fog-free MVG model is small and the distance to the foggy MVG model is large; thus, the larger the value of c_f - c_ff, the higher the fog concentration, and the fog concentration calculation is simplified to:
c = c_f - c_ff + c₀ = ||(δ - μ₁)·L₁⁻¹||₁ - ||(δ - μ₂)·L₂⁻¹||₁ + c₀
where c₀, the difference between the standard fog-free level and the standard foggy level, c₀ = ||(f₁ - μ₁)L₁⁻¹||₁ + ||(f₂ - μ₂)L₂⁻¹||₁, is a fixed value;
the feature values of an image are related to scene depth, and so is the transmittance; to obtain a linear relation between the feature values and the transmittance, the fog concentration model is simplified to:
c = ||k||₁
where k = (μ₁ - f₁)·L₁⁻¹ - (f₂ - μ₂)·L₂⁻¹ = μ₁L₁⁻¹ + μ₂L₂⁻¹ - f₁L₁⁻¹ - f₂L₂⁻¹;
when the image is clear, the feature values of its image blocks are high and the fog concentration of the blocks is small; conversely, when the image is foggy, its feature values are low and its fog concentration is large, so the fog concentration model can be approximated by a linear combination of the contrast, gray standard deviation and saturation of the image:
c = b - (c₁×σ_J + c₂×w_J + c₃×s_J)
where b = ||B||₁ is the offset of the fog concentration, B = μ₁L₁⁻¹ + μ₂L₂⁻¹, [c₁, c₂, c₃] = A·𝟙 with 𝟙 the all-ones vector, and A = L₁⁻¹ + L₂⁻¹; substituting the relations between the 3 fog-related features and the transmittance into this approximate expression yields the fog concentration model in terms of the transmittance:
c(t) = b - ( c₁·σ_I/t(x) + c₂·w_I/t(x) + c₃·s_I·(t(x)·v_J + (1 - t(x))·v_A)/(t(x)·v_J) )
c(t) is the fog concentration model expressed in terms of the transmittance;
step 3, calculating the transmittance
The transmittance t(x) is obtained by minimizing the fog concentration:
t(x) = argmin_t c(t)
substituting the fog concentration model into the above yields the transmittance model:
[formula image in the original: the closed-form expression of t(x) obtained by minimizing c(t)]
step 4, calculating the atmospheric light value
The atmospheric light value is selected from the most opaque region of the image: the larger an image block's fog level, the larger its fog concentration, so the atmospheric light value is taken at the maximum of the image block fog levels; calculating the atmospheric light value A from the image's fog concentration model gives a more accurate A and avoids interference from highlight objects or regions;
step 5, defogging the image
The atmospheric scattering model is as follows:
I(x)=J(x)t(x)+A(1-t(x))
where I(x) is the foggy image, J(x) is the corresponding fog-free image, t(x) is the transmittance, and A is the atmospheric light value;
firstly, the fog concentration of the image is calculated according to the fog concentration model; when the fog concentration is greater than 1, defogging is performed: A and t(x) are estimated according to step 3 and step 4 and substituted into the atmospheric scattering model to obtain the fog-free image J(x) as shown in the following formula;
J(x) = (I(x) - A)/t(x) + A
CN202310205615.7A 2023-03-06 2023-03-06 Water area defogging method based on fog concentration sensing Pending CN116188316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310205615.7A CN116188316A (en) 2023-03-06 2023-03-06 Water area defogging method based on fog concentration sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310205615.7A CN116188316A (en) 2023-03-06 2023-03-06 Water area defogging method based on fog concentration sensing

Publications (1)

Publication Number Publication Date
CN116188316A true CN116188316A (en) 2023-05-30

Family

ID=86450517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310205615.7A Pending CN116188316A (en) 2023-03-06 2023-03-06 Water area defogging method based on fog concentration sensing

Country Status (1)

Country Link
CN (1) CN116188316A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117196971A (en) * 2023-08-14 2023-12-08 上海为旌科技有限公司 Image defogging method and device based on atmospheric scattering model and readable storage medium
CN117196971B (en) * 2023-08-14 2024-05-31 上海为旌科技有限公司 Image defogging method and device based on atmospheric scattering model and readable storage medium

Similar Documents

Publication Publication Date Title
CN104809707B (en) A kind of single width Misty Image visibility method of estimation
CN104011737B (en) Method for detecting mist
CN110765912B (en) SAR image ship target detection method based on statistical constraint and Mask R-CNN
CN101561932A (en) Method and device for detecting real-time movement target under dynamic and complicated background
CN109741285B (en) Method and system for constructing underwater image data set
CN110288539A (en) A kind of mobile clear method of underwater picture with dark channel prior in color combining space
CN113850747B (en) Underwater image sharpening processing method based on light attenuation and depth estimation
CN110849807A (en) Monitoring method and system suitable for road visibility based on deep learning
CN116188316A (en) Water area defogging method based on fog concentration sensing
CN105447825A (en) Image defogging method and system
CN102855485A (en) Automatic wheat earing detection method
CN108189757A (en) A kind of driving safety prompt system
CN113313047A (en) Lane line detection method and system based on lane structure prior
CN104050678A (en) Underwater monitoring color image quality measurement method
CN112884795A (en) Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
CN116664448B (en) Medium-high visibility calculation method and system based on image defogging
CN112767267B (en) Image defogging method based on simulation polarization fog-carrying scene data set
CN112488997B (en) Method for detecting and evaluating color reproduction of ancient painting printed matter based on characteristic interpolation
CN106204523A (en) A kind of image quality evaluation method and device
CN109285187A (en) A kind of farthest visible point detecting method based on traffic surveillance videos image
US9349056B2 (en) Method of measuring road markings
CN109783973A (en) A kind of atmospheric visibility calculation method based on image degradation model
Meng et al. Highway visibility detection method based on surveillance video
CN115100577A (en) Visibility recognition method and system based on neural network, electronic device and storage medium
CN114720425A (en) Visibility monitoring system and method based on image recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination