CN117830722A - Foggy image visibility level classification method based on passive fog density segmentation - Google Patents


Info

Publication number
CN117830722A
CN117830722A (application CN202311864518.5A)
Authority
CN
China
Prior art keywords
image
visibility
data
foggy
fog
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311864518.5A
Other languages
Chinese (zh)
Inventor
陈赞
伍星
梁卓然
冯远静
胡德云
王志磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202311864518.5A priority Critical patent/CN117830722A/en
Publication of CN117830722A publication Critical patent/CN117830722A/en
Pending legal-status Critical Current

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02A: Technologies for adaptation to climate change
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Processing (AREA)

Abstract

A foggy image visibility level classification method based on passive fog density segmentation comprises the following steps: step 1, collecting image data from the video surveillance of meteorological regional stations and social stations, labelling the images with visibility data, removing invalid images, and constructing a data set; step 2, dividing the visibility range into level intervals and sorting the image data set to obtain an image sequence ordered from low to high visibility; step 3, selecting suitable reference images; step 4, segmenting each image to remove the sky region and retain the middle-to-far-field region; step 5, training a model until convergence; and step 6, testing the visibility level of an image with the trained model. The invention achieves higher classification accuracy and stronger adaptability, better meets the requirements of practical applications, can be widely applied in fields such as traffic management, autonomous driving and surveillance systems, and provides an effective means of improving visual perception and decision making under foggy weather conditions.

Description

Foggy image visibility level classification method based on passive fog density segmentation
Technical Field
The invention relates to the field of artificial-intelligence image processing, and in particular to a foggy image visibility level classification method based on passive fog density segmentation. The method can be widely applied in fields such as traffic management, autonomous driving and surveillance systems, and provides an effective means of improving visual perception and decision making under foggy weather conditions.
Background
In many practical application scenarios, foggy weather severely reduces image clarity and visibility, posing great challenges to traffic and surveillance systems. Existing foggy-image processing methods focus mainly on defogging and lack a fine-grained classification of the visibility level of foggy images. In practice, the processing strategies and countermeasures differ across visibility levels, so an accurate and fast classification method is needed to better serve image processing under different foggy conditions.
In current technology, visibility level classification of foggy images is based mainly on empirical rules or on meteorological data measured by sensors. However, these methods are often limited by complex environmental changes and sensor errors, resulting in poor classification accuracy. Empirical rules are not universally applicable: they are typically derived from a particular scene or region and cannot adapt to complex weather conditions or multi-source data. Visibility detection that relies on sensor measurements is costly to maintain and calibrate, since regular maintenance and calibration are required to preserve performance and accuracy. In addition, sensor-based visibility detection is not suitable for mobile platforms such as automobiles and aircraft, which limits the possibility of obtaining accurate visibility information in a mobile environment.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a foggy image visibility level classification method based on passive fog density segmentation that performs visibility level classification without professional visibility detection equipment. Specifically, a middle-to-far-field region of interest is extracted from each image by a passive fog density segmentation model, key fog features are learned by a neural network, and accurate classification of the different visibility levels is finally achieved. Compared with traditional methods, this method offers higher classification accuracy and stronger adaptability, better meets the requirements of practical applications, and provides a feasible solution for the development of the field of foggy-image processing.
The technical scheme adopted for solving the technical problems is as follows:
a foggy image visibility level classification method based on passive fog density segmentation comprises the following steps:
step 1, data acquisition: image data are collected from the video surveillance of meteorological regional stations and social stations; the scenes involved include weather stations, urban highways, and village and town buildings; a regional-station image is an image taken by a surveillance camera inside a professional weather observation station, and a social video image is an image taken by a public surveillance camera; the visibility labels of regional-station images come from the professional visibility detection instrument in the station, while the visibility labels of social video images use the observation data of the nearest weather observation station; the images are grouped by scene, each scene has a corresponding table recording visibility, damaged and erroneous images are removed, and a daytime visibility image data set is constructed;
step 2, data preprocessing: first, the visibility range is divided into level intervals according to actual requirements; second, the data set obtained in step 1, initially ordered by sampling time, is re-sorted per scene by the visibility reference value to obtain an image sequence ordered from low to high visibility;
step 3, reference image selection: reference images are selected according to the visibility intervals of step 2, taking the actual conditions of each scene into account;
step 4, image segmentation: the middle-to-far-field, non-sky region of an image is the region whose features are most relevant to fog; therefore, before each image is fed into the model for training, an image segmentation network based on passive fog density is used to obtain a mask image containing only the non-sky middle-to-far field, with which the segmentation operation is performed;
step 5, model training: 2 images are randomly drawn from the image sequence of step 2, each is segmented with the mask obtained in step 4, and the segmented images are input into the visibility model for training until convergence;
step 6, visibility level classification test: using the model trained in step 5, the test image is paired with each of the 3 reference images, each pair is input into the model to compare visibility, and the visibility level of the test image is finally obtained.
The technical concept of the invention is as follows: the foggy image visibility level classification method based on passive fog density segmentation consists of six parts: data acquisition, data preprocessing, reference image selection, image segmentation, model training, and the visibility classification test. Preparing the data set is the first step before algorithm design and testing, and its quality directly determines the accuracy of the final classification result. Data preprocessing removes regions such as the sky and roads, preventing these regions from degrading the accuracy of the algorithm. A neural network then learns the key fog features of the images, and the model is trained with the defined loss function until convergence. Finally, the test image and the reference images are input into the trained model for visibility comparison, yielding the visibility level of the test image.
The beneficial effects of the invention are as follows: the method offers higher classification accuracy and stronger adaptability, better meets the requirements of practical applications, and provides a feasible solution for the development of the field of foggy-image processing.
Drawings
Fig. 1 is a flow chart of the foggy image visibility level classification of the present invention.
Fig. 2 is a flow chart of image segmentation based on passive fog density in accordance with the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to specific embodiments and the accompanying drawings.
Referring to figs. 1 and 2, the method for classifying the visibility level of foggy images based on passive fog density segmentation makes full use of the differences in fog-related features between historical foggy images and clear images to classify, quickly and accurately, the visibility of images acquired by a camera. It comprises the following steps:
step 1, data set preparation, wherein the process is as follows:
1.1, image data of the video surveillance of regional stations and social stations are acquired at fixed time intervals; the data are automatically cleaned by scripts using the time and position information of the surveillance cameras and the weather stations, and a preliminary screening retains images captured between 8 a.m. and 5 p.m. at scenes less than 3 km from the nearest weather station;
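The screening rule of 1.1 can be sketched as follows; the record layout, file names and the `keep` helper are illustrative assumptions, not part of the patent:

```python
from datetime import datetime

# Hypothetical records: (image_path, capture_time, distance_km_to_nearest_station)
records = [
    ("cam01/0810.jpg", datetime(2023, 6, 1, 8, 10), 1.2),
    ("cam01/1930.jpg", datetime(2023, 6, 1, 19, 30), 1.2),  # night -> rejected
    ("cam02/1200.jpg", datetime(2023, 6, 1, 12, 0), 4.5),   # too far -> rejected
]

def keep(path, t, dist_km, start_hour=8, end_hour=17, max_km=3.0):
    """Keep daytime images (8:00-17:00) captured within 3 km of a weather station."""
    return start_hour <= t.hour < end_hour and dist_km < max_km

kept = [r[0] for r in records if keep(*r)]
print(kept)  # ['cam01/0810.jpg']
```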
1.2, the preliminarily screened data are manually reviewed; images with missing visibility reference values, abnormal visibility, acquisition failures, or large-area occlusion by rainwater and other foreign matter are deleted; repeated high-visibility images of the same scene are randomly deleted so that the data are distributed evenly across visibility levels; finally, the visibility table of each scene is updated, completing the preparation of the data set;
step 2, data preprocessing: the preprocessing consists of dividing the visibility range into level intervals and sorting the images by visibility, as follows:
2.1, dividing the visibility level intervals: the screened images and weather data are divided into 4 intervals according to visibility, as shown in Table 1. Because data for dense fog and moderate fog are scarce, the data in these intervals are expanded by mirroring, flipping, cropping and size scaling;
Visibility	Fog level
< 0.2 km	Dense fog
0.2-1 km	Moderate fog
1-10 km	Light fog
> 10 km	No fog
TABLE 1
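The expansion of the scarce dense-fog and moderate-fog intervals described in 2.1 might look like the following sketch; the NumPy operations stand in for mirroring, flipping, cropping and size scaling, and the crop/scale parameters are illustrative assumptions (the patent does not specify them):

```python
import numpy as np

def augment(img):
    """Expand a scarce-interval sample by mirroring, flipping,
    cropping and size scaling (index striding stands in for resizing)."""
    h, w = img.shape[:2]
    out = [
        np.fliplr(img),                               # horizontal mirror
        np.flipud(img),                               # vertical flip
        img[h // 8: h - h // 8, w // 8: w - w // 8],  # central crop
        img[::2, ::2],                                # naive 2x downscale
    ]
    return out

img = np.zeros((64, 64, 3), dtype=np.uint8)
variants = augment(img)
print([v.shape for v in variants])
```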
2.2, visibility sorting: the images of each scene are sorted from low to high according to the visibility reference value, giving a coarsely sorted image sequence; the coarse sequence is then manually re-checked and the order of individual images adjusted to obtain a more accurate image sequence;
step 3, selecting the reference images: according to the visibility intervals divided in step 2, 3 reference images with visibilities of 0.2 km, 1 km and 10 km are selected for use in the visibility test; in practice, however, it may be difficult to find reference images at exactly these visibilities, so a tolerance band is set around each target when selecting a reference image, for example 200 ± 10 m instead of exactly 200 m;
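The tolerance-band selection of reference images could be sketched as below; `pick_reference` and the scalar visibility records are illustrative assumptions:

```python
def pick_reference(images, target_m, tol_m=10):
    """images: list of (path, visibility_m). Return the image whose labelled
    visibility is closest to target_m, provided it lies within target +/- tol_m."""
    in_band = [(abs(v - target_m), p) for p, v in images if abs(v - target_m) <= tol_m]
    return min(in_band)[1] if in_band else None

scene = [("a.jpg", 195), ("b.jpg", 240), ("c.jpg", 1004), ("d.jpg", 9900)]
print(pick_reference(scene, 200))    # a.jpg (195 m is within 200 +/- 10 m)
print(pick_reference(scene, 1000))   # c.jpg
print(pick_reference(scene, 10000))  # None: no image within 10000 +/- 10 m
```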
step 4, image segmentation: the sky region of an image is easily misjudged as fog because its pixel values are high and similar to those of fog regions; buildings and road surfaces in the near field are easily affected by rainwater and specular reflection, so their apparent visibility does not match the actual conditions; therefore, the passive fog density model is used to segment the sky out of the image, and a middle-to-far-field region whose background contains elements such as mountains and buildings is selected as the region of interest.
The passive fog density model is a physical model that computes a visibility score for a single image without any reference image. The model selects 12 fog-aware statistical features, including chromaticity, image entropy and sharpness, and computes the perceptual fog density from the deviation between a multivariate Gaussian model fitted to the 12 fog-feature distributions of the test image and two reference multivariate Gaussian models. The d-dimensional multivariate Gaussian model is:

MVG(f) = 1 / ((2π)^(d/2) |Σ|^(1/2)) · exp(-(1/2) (f - ν)^T Σ^(-1) (f - ν))

where f is the set of fog-aware statistical features of the image, and ν and Σ denote the mean vector and the covariance matrix, respectively;
the process of segmenting an image using a passive fog density model is as follows:
4.1, calculating the fog density D: referring to fig. 2, the test image is first cut into patches of 2 × 2 pixels, the fog-aware statistical features of each patch are computed, and a 12-dimensional multivariate Gaussian model M_t(ν_t, Σ_t) is fitted. For each patch, the Mahalanobis-like distance between M_t and the foggy reference model M_f(ν_f, Σ_f), and between M_t and the fog-free reference model M_ff(ν_ff, Σ_ff), is computed as:

D_f(ν_t, ν_f) = sqrt( (ν_t - ν_f)^T ((Σ_t + Σ_f) / 2)^(-1) (ν_t - ν_f) )

and analogously for D_ff with (ν_ff, Σ_ff), where M_f(ν_f, Σ_f) and M_ff(ν_ff, Σ_ff) are known, and the superscripts T and -1 denote the matrix transpose and inverse, respectively;
the fog density map D of the test image is then obtained from the ratio of D_f to D_ff:

D = D_f / (D_ff + 1)
4.2, calculating the segmentation mask: because the fog density of the sky region is much higher than that of other regions, a threshold d_thres is set per scene; values of the fog density map above d_thres are set to 0, the near-field region is also set to 0, and all remaining values are set to 1; the mask is then expanded so that each value covers a 2 × 2 pixel block, restoring it to the width × height of the original image and yielding the final image segmentation mask;
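The mask construction of 4.2 can be sketched as follows; the toy density map, the `near_rows` parameter and the threshold value are illustrative assumptions:

```python
import numpy as np

def fog_density_mask(D_map, d_thres, near_rows=0):
    """D_map: per-patch fog density map. Zero out sky (density > d_thres) and the
    bottom near_rows patch rows (near-field road/buildings), then expand every
    patch value into a 2x2 block so the mask matches the original image size."""
    mask = np.ones_like(D_map, dtype=np.uint8)
    mask[D_map > d_thres] = 0          # sky: anomalously high fog density
    if near_rows:
        mask[-near_rows:, :] = 0       # near field at the image bottom
    return np.repeat(np.repeat(mask, 2, axis=0), 2, axis=1)

D_map = np.array([[5.0, 5.0], [0.5, 0.4]])  # toy 2x2 patch map: top row = sky
mask = fog_density_mask(D_map, d_thres=2.0)
print(mask.shape)  # (4, 4)
print(mask)
```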
step 5, model training: referring to fig. 1, 2 RGB images are randomly sampled from the visibility-ordered image sequence; after image segmentation they are fused along the channel dimension, high-level features are extracted by a neural network model, a comparison module outputs a prediction label, and a loss function between the prediction label and the ground-truth label is established to train the visibility model; the loss used to train the comparison model is a binary cross-entropy:

L = -[ y·log(ŷ) + (1 - y)·log(1 - ŷ) ]

where y denotes the label value and ŷ the predicted value;
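Assuming the training loss is the binary cross-entropy on the comparison module's output (the patent's formula did not render, so this form is an assumption), a minimal sketch:

```python
import numpy as np

def bce_loss(y, y_hat, eps=1e-7):
    """Binary cross-entropy between the true comparison label y (e.g. 1 if the
    first image of the pair has higher visibility) and the predicted value."""
    y_hat = np.clip(y_hat, eps, 1 - eps)  # guard against log(0)
    return float(-(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))

# a confident correct prediction gives a small loss, a wrong one a large loss
print(round(bce_loss(1.0, 0.9), 4))  # 0.1054
print(round(bce_loss(1.0, 0.1), 4))  # 2.3026
```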
step 6, visibility level classification test: referring to fig. 1, the test image and the reference images are segmented and then input into the model trained in step 5 for testing, finally yielding the visibility level of the test image.
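The pairwise test against the 3 reference images can be sketched as follows; the scalar stand-in for images and the `compare` callable are illustrative assumptions in place of the trained model:

```python
def classify_level(compare, test_img, refs):
    """refs: reference images at 0.2 km, 1 km and 10 km visibility, low to high.
    compare(a, b) plays the trained comparison model: True if a has higher
    visibility than b. The number of references beaten gives the level."""
    wins = sum(compare(test_img, r) for r in refs)
    return ["dense fog", "moderate fog", "light fog", "no fog"][wins]

# toy stand-in: each image is represented by its visibility in km
compare = lambda a, b: a > b
refs = [0.2, 1.0, 10.0]
print(classify_level(compare, 0.05, refs))  # dense fog
print(classify_level(compare, 3.0, refs))   # light fog
print(classify_level(compare, 25.0, refs))  # no fog
```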
By adopting the above technical scheme, the invention has the following advantage: the method can predict the visibility of the current image directly from images taken by traffic cameras, without any other visibility observation equipment, achieving low-cost, high-accuracy visibility monitoring of high theoretical and engineering value.
Finally, it should be noted that the above examples are only specific embodiments of the present invention and are not intended to limit its scope of protection. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, without departing from the spirit and scope of the technical solutions of the embodiments of the present invention; such modifications, changes or substitutions are intended to be covered by the present invention. The scope of protection of the present invention shall therefore be determined by the claims.

Claims (6)

1. A foggy image visibility level classification method based on passive fog density segmentation is characterized by comprising the following steps:
step 1, data acquisition: image data are collected from the video surveillance of meteorological regional stations and social stations; the scenes involved include weather stations, urban highways, and village and town buildings; a regional-station image is an image taken by a surveillance camera inside a professional weather observation station, and a social video image is an image taken by a public surveillance camera; the visibility labels of regional-station images come from the professional visibility detection instrument in the station, while the visibility labels of social video images use the observation data of the nearest weather observation station; the images are grouped by scene, each scene has a corresponding table recording visibility, damaged and erroneous images are removed, and a daytime visibility image data set is constructed;
step 2, data preprocessing: first, the visibility range is divided into level intervals according to actual requirements; second, the data set obtained in step 1, initially ordered by sampling time, is re-sorted per scene by the visibility reference value to obtain an image sequence ordered from low to high visibility;
step 3, reference image selection: reference images are selected according to the visibility intervals of step 2, taking the actual conditions of each scene into account;
step 4, image segmentation: the middle-to-far-field, non-sky region of an image is the region whose features are most relevant to fog; therefore, before each image is fed into the model for training, an image segmentation network based on passive fog density is used to obtain a mask image containing only the non-sky middle-to-far field, with which the segmentation operation is performed;
step 5, model training: 2 images are randomly drawn from the image sequence of step 2, each is segmented with the mask obtained in step 4, and the segmented images are input into the visibility model for training until convergence;
step 6, visibility level classification test: using the model trained in step 5, the test image is paired with each of the 3 reference images, each pair is input into the model to compare visibility, and the visibility level of the test image is finally obtained.
2. The method for classifying the visibility level of foggy images based on passive fog density segmentation according to claim 1, wherein the process of step 1 is as follows:
1.1, image data of the video surveillance of regional stations and social stations are acquired at fixed time intervals; the data are automatically cleaned by scripts using the time and position information of the surveillance cameras and the weather observation stations, and images of scenes less than 3 km from the nearest weather observation station are retained;
1.2, the preliminarily screened data are manually reviewed; images with missing visibility reference values, abnormal visibility, acquisition failures, or occlusion by foreign matter such as large-area rainwater are deleted; repeated high-visibility images of the same scene are randomly deleted to balance the data distribution across visibility levels; finally, the visibility table of each scene is updated, completing the preparation of the data set.
3. The method for classifying the visibility level of a foggy image based on passive foggy density segmentation according to claim 1 or 2, wherein the procedure of the step 2 is as follows:
2.1, dividing the visibility level intervals: the screened images and meteorological data are divided into 4 intervals according to visibility: less than 0.2 km, 0.2-1 km, 1-10 km, and more than 10 km; because data for dense fog and moderate fog are scarce, the data in these intervals are expanded by mirroring, flipping, cropping and size scaling;
2.2, visibility sorting: the images of each scene are sorted from low to high according to the visibility reference value to obtain a coarsely sorted image sequence; the coarse sequence is then manually re-checked and the order of individual images adjusted to obtain a more accurate image sequence.
4. The method for classifying the visibility level of foggy images based on passive fog density segmentation according to claim 1 or 2, wherein in step 3, 3 reference images with visibilities of 0.2 km, 1 km and 10 km are selected according to the visibility intervals divided in step 2, for use in the visibility test; a tolerance band is set around each target visibility when selecting each reference image.
5. The method for classifying the visibility level of foggy images based on passive fog density segmentation according to claim 1 or 2, wherein in step 4, the passive fog density model is a physical model that computes a visibility score for a single image without any reference image. The model selects 12 fog-aware statistical features, including chromaticity, image entropy and sharpness, and computes the perceptual fog density from the deviation between a multivariate Gaussian model fitted to the 12 fog-feature distributions of the test image and two reference multivariate Gaussian models. The d-dimensional multivariate Gaussian model is:

MVG(f) = 1 / ((2π)^(d/2) |Σ|^(1/2)) · exp(-(1/2) (f - ν)^T Σ^(-1) (f - ν))

where f is the set of fog-aware statistical features of the image, and ν and Σ denote the mean vector and the covariance matrix, respectively;
the process of segmenting an image using the passive fog density model is as follows:
4.1, calculating the fog density D: the test image is first cut into patches of 2 × 2 pixels, the fog-aware statistical features of each patch are computed, and a 12-dimensional multivariate Gaussian model M_t(ν_t, Σ_t) is fitted; for each patch, the Mahalanobis-like distance between M_t and the foggy reference model M_f(ν_f, Σ_f), and between M_t and the fog-free reference model M_ff(ν_ff, Σ_ff), is computed as:

D_f(ν_t, ν_f) = sqrt( (ν_t - ν_f)^T ((Σ_t + Σ_f) / 2)^(-1) (ν_t - ν_f) )

and analogously for D_ff with (ν_ff, Σ_ff), where M_f(ν_f, Σ_f) and M_ff(ν_ff, Σ_ff) are known, and the superscripts T and -1 denote the matrix transpose and inverse, respectively;
the fog density map D of the test image is then obtained from the ratio of D_f to D_ff:

D = D_f / (D_ff + 1)
4.2, calculating the segmentation mask: the fog density of the sky region is much higher than that of other regions; a threshold d_thres is set per scene, values of the fog density map above d_thres are set to 0, the near-field region is also set to 0, and the rest is set to 1; the mask is then expanded so that each value covers a 2 × 2 pixel block, restoring it to the width × height of the original image and yielding the final image segmentation mask.
6. The method for classifying the visibility level of foggy images based on passive fog density segmentation according to claim 1 or 2, wherein in step 5, 2 RGB images are randomly sampled from the visibility-ordered image sequence; after image segmentation they are first fused along the channel dimension, high-level features are extracted by a neural network model, a comparison module outputs a prediction label, and finally a loss function between the prediction label and the ground-truth label is established to train the visibility model; the loss used to train the comparison model is a binary cross-entropy:

L = -[ y·log(ŷ) + (1 - y)·log(1 - ŷ) ]

where y denotes the label value and ŷ the predicted value.
CN202311864518.5A 2023-12-29 2023-12-29 Foggy image visibility level classification method based on passive fog density segmentation Pending CN117830722A (en)


Publications (1)

Publication Number Publication Date
CN117830722A true CN117830722A (en) 2024-04-05

Family

ID=90518620



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination