CN113128581A - Visibility detection method, device and system based on machine learning and storage medium - Google Patents

Visibility detection method, device and system based on machine learning and storage medium

Info

Publication number
CN113128581A
CN113128581A (application number CN202110392515.0A)
Authority
CN
China
Prior art keywords
image
visibility
detection
machine learning
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110392515.0A
Other languages
Chinese (zh)
Inventor
沈岳峰
卜清军
侯敏
常春辉
王紫滨
吴桐
呼莉莉
Current Assignee
Tianjin Binhai New Area Meteorological Bureau Tianjin Binhai New Area Meteorological Early Warning Center
Original Assignee
Tianjin Binhai New Area Meteorological Bureau Tianjin Binhai New Area Meteorological Early Warning Center
Priority date
Filing date
Publication date
Application filed by Tianjin Binhai New Area Meteorological Bureau Tianjin Binhai New Area Meteorological Early Warning Center filed Critical Tianjin Binhai New Area Meteorological Bureau Tianjin Binhai New Area Meteorological Early Warning Center
Priority to CN202110392515.0A
Publication of CN113128581A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a visibility detection method based on machine learning, which includes: collecting historical images and visibility grade values; extracting dark channel image features, image gradient amplitude features and image contrast amplitude features from the historical images; labeling and classifying the extracted features according to the visibility grade values to obtain multiple groups of training samples; constructing a support vector machine model whose kernel is a Gaussian kernel function; training and optimizing the support vector machine model with the training samples to obtain a visibility grade detection model; acquiring a real-time image; and inputting the dark channel image features, image gradient amplitude features and image contrast amplitude features extracted from the real-time image into the visibility grade detection model to obtain a visibility grade detection value. The method greatly reduces the model's demand for training samples, and a relatively accurate visibility grade detection result can be obtained with a small sample set.

Description

Visibility detection method, device and system based on machine learning and storage medium
Technical Field
The application relates to the technical field of meteorological detection, and in particular to a visibility detection method, device, system and storage medium based on machine learning.
Background
Visibility detection plays an important role in road transportation, marine navigation, and operational safety during transport. Under low visibility, the probability of accidents in traffic, navigation and transport operations increases greatly.
Currently, there are three main approaches to visibility detection. (1) Physical hardware detection, mainly projection-type and scattering-type instruments. The drawbacks are that the hardware equipment is expensive, multi-point deployment is required, and maintenance is difficult; detection is therefore local and cannot cover visibility comprehensively in real time. (2) Visual inspection: trained professionals judge the visibility condition by eye. The judgment depends on the characteristics of the observer's eyes, so the method is strongly subjective as well as time- and labour-consuming. (3) The camera (image-based) method: visibility detection is performed by a camera simulating the human eye. This method is flexible, can be deployed comprehensively, and yields accurate and objective detection results.
Researchers in China and abroad have proposed various camera-based methods for detecting visibility. In 1949, Steffens proposed measuring visibility from visual features of images, deducing the visibility value by manually taking photographs, developing them, and manually calculating the contrast between a target object and the background; the accuracy was low and the process cumbersome. In recent years, end-to-end learning with convolutional neural networks has been applied to visibility detection. Li et al. first applied convolutional neural networks to visibility detection, training an AlexNet-based visibility grade detection model that extracts features to classify the visibility grade. Palvanov et al. combined traditional image preprocessing with a convolutional neural network, obtaining a comprehensive classification of visibility levels in fog images by learning from the original image and from images processed by fast Fourier transform or spectral filtering.
Visibility detection algorithms in the prior art all combine a CNN and a logistic regression function, however, the performance of such algorithms depends on the size of a data set and the quality of training sample labels to a great extent, and at present, a fog image with accurate visibility labels is difficult to obtain.
Disclosure of Invention
In order to solve, or at least partially solve, the technical problems that existing camera-based visibility detection methods are cumbersome and depend on large data sets, the application provides a visibility detection method, device, system and storage medium based on machine learning.
In a first aspect, the present application provides a visibility detection method based on machine learning, including:
collecting historical images and visibility grade values;
extracting dark channel image features, image gradient amplitude features and image contrast amplitude features of the historical image;
labeling and classifying the extracted dark channel image characteristics, the image gradient amplitude characteristics and the image contrast amplitude characteristics according to the visibility grade value to obtain a plurality of groups of training samples;
constructing a support vector machine algorithm model, and selecting a Gaussian kernel function as a kernel function of the support vector machine algorithm;
training and optimizing the support vector machine algorithm model by using the training sample to obtain a visibility grade detection model;
acquiring a real-time image;
and inputting the extracted dark channel image characteristics, image gradient amplitude characteristics and image contrast amplitude characteristics of the real-time image into the visibility grade detection model to obtain a visibility grade detection value.
Preferably, the visibility detection method based on machine learning further includes: and outputting the visibility level detection value.
Preferably, before extracting the dark channel image feature, the image gradient magnitude feature and the image contrast magnitude feature of the historical image, the method further includes: and preprocessing the historical image.
Preferably, the constructing of the support vector machine algorithm model specifically includes:
according to the kernel function classification principle of the support vector machine, the following optimization models and constraint conditions are established:
$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\,k(x_i,x_j)\;-\;\sum_{i=1}^{n}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{n}\alpha_i y_i = 0,\qquad 0\le\alpha_i\le C,\ \ i=1,\dots,n$$

where $k(x_i,x_j)$ is the kernel function and $C$ is a regularization parameter; the Gaussian kernel function formula is as follows:

$$k(x_i,x_j)=\exp\!\left(-\frac{\lVert x_i-x_j\rVert^{2}}{2\sigma^{2}}\right)$$

where $\sigma > 0$ is the bandwidth of the Gaussian kernel.
Preferably, the extracting dark channel image features of the historical image specifically includes:
dividing the historical image into a plurality of panes w x h, wherein each pane is marked as w (x);
calculating the dark channel map characteristics of the historical image according to the following formula for each pane:
$$J^{\text{dark}}(x)=\min_{y\in w(x)}\Bigl(\min_{c\in\{r,g,b\}} J^{c}(y)\Bigr)$$

where $J^{c}(y)$ is a colour channel of the image and $J^{\text{dark}}(x)$ is the dark channel of the image;
and refining the dark channel image by adopting guide filtering, wherein a calculation formula is as follows:
$$\hat t(i)=a_k I(i)+b_k,\qquad \forall\, i\in S_k$$

where $\hat t$ is the transformed image, $S_k$ is a window of the original image, $(a_k,b_k)$ are linear coefficients held constant within the window, and $I(i)$ is the guide image.
Preferably, the extracting of the image gradient amplitude feature of the historical image specifically includes:
acquiring image gradient features through the Sobel operator: using an n × n window region, calculating the edge feature value according to the following formula:

$$G(x,y)=\sqrt{G_x^{2}+G_y^{2}}$$

where $G(x,y)$ is the edge feature value at pixel $(x,y)$, $G_x$ is the row-direction feature component of the pixel, and $G_y$ is the column-direction feature component;
the gradient amplitude feature is calculated according to the following formula:

$$V=\frac{1}{H\cdot W}\sum_{x=1}^{H}\sum_{y=1}^{W} G(x,y)$$

where $V$ is the gradient amplitude of the image, $H$ and $W$ are the image dimensions, and $G(x,y)$ is the gradient value at the pixel.
Preferably, the extracting the image contrast amplitude feature of the historical image specifically includes:
acquiring the Weber's law contrast characteristics of the image according to the following formula:
$$C(x,y)=\frac{\lvert f(x_1,y_1)-f(x_2,y_2)\rvert}{M-\min\bigl(f(x_1,y_1),\,f(x_2,y_2)\bigr)}$$

where $C(x,y)$ is the row-direction contrast at the pixel, $f(x_1,y_1)$ is the grey value of the pixel under consideration, $f(x_2,y_2)$ is the grey value of its left neighbour, $M$ is the maximum grey value of the picture, and $\min$ takes the smaller of the two neighbouring grey values;
calculating the image contrast amplitude feature according to the formula:

$$C_{\text{mean}}=\frac{1}{H\cdot W}\sum_{x=1}^{H}\sum_{y=1}^{W} C(x,y)$$

where $C_{\text{mean}}$ is the contrast amplitude of the image, $H$ and $W$ are the image dimensions, and $C(x,y)$ is the lateral Weber's-law contrast value at the pixel.
In a second aspect, the present application further provides a visibility detection apparatus based on machine learning, including:
a memory for storing program instructions;
a processor for invoking the program instructions stored in the memory to implement the machine learning based visibility detection method according to any one of the first aspect.
In a third aspect, the present application further provides a computer-readable storage medium storing program code for implementing the visibility detection method based on machine learning according to any one of the first aspect.
In a fourth aspect, the present application further provides a visibility detection system based on machine learning, including: the system comprises a display server, an image acquisition terminal, a visibility detection server, a database and a third-party platform, wherein the image acquisition terminal, the visibility detection server, the database and the third-party platform are connected to the display server;
the display server is used for receiving the acquisition result of the image acquisition terminal and the visibility grade value detected by the visibility detection server, wherein the acquisition result comprises a historical image and/or a real-time image;
the display server is also used for calling the visibility detection server to carry out visibility detection;
the visibility detection server is configured to perform visibility detection by using the visibility detection method according to any one of the first aspects, and feed a visibility level detection value back to the display server;
the display server is also used for storing the visibility level detection value to the database and providing a data access interface;
the third party platform sends a visibility detection request to the display server by calling the data access interface, and the display server responds to the visibility detection request and feeds back the latest visibility level detection value searched from the database to the third party platform for display.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages: the visibility detection method based on machine learning preferentially extracts the significant features related to fog concentration: dark channel image characteristics, image gradient amplitude characteristics and image contrast amplitude characteristics are combined with a support vector machine algorithm in machine learning to carry out model training, so that the requirement of the model on the number of samples is greatly reduced, and a small sample set can be adopted to obtain a relatively accurate visibility grade detection result.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a visibility detection method based on machine learning according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a visibility detection system based on machine learning according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For convenience of understanding, the visibility detection method based on machine learning provided by the embodiments of the present application is described in detail below, and the visibility detection method based on machine learning includes the following steps:
step S1, collecting historical images and visibility grade values;
step S2, extracting dark channel image features, image gradient amplitude features and image contrast amplitude features of the historical image;
step S3, labeling and classifying the extracted dark channel image features, the image gradient amplitude features and the image contrast amplitude features according to the visibility grade values to obtain a plurality of groups of training samples;
step S4, constructing a support vector machine algorithm model, and selecting a Gaussian kernel function as a kernel function of the support vector machine algorithm;
step S5, training and optimizing the support vector machine algorithm model by using the training sample to obtain a visibility grade detection model;
step S6, acquiring a real-time image;
and step S7, inputting the extracted dark channel image characteristics, image gradient amplitude characteristics and image contrast amplitude characteristics of the real-time image into the visibility grade detection model to obtain a visibility grade detection value.
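The steps above can be sketched end to end. The following is a minimal, hypothetical Python illustration (not the patent's implementation) of how the three descriptors used in steps S2 and S7 might be assembled into one feature vector per image; the per-window dark-channel, Sobel and Weber computations are replaced here by simple whole-image proxies:

```python
import numpy as np

def extract_feature_vector(img):
    """Combine the three descriptors of steps S2/S7 into one feature
    vector: mean dark-channel value, mean gradient amplitude, and a
    mean left-neighbour contrast. All helpers are simplified proxies
    for the per-window computations in the description."""
    gray = img.mean(axis=2)                          # crude luminance
    dark = img.min(axis=2).mean()                    # dark-channel proxy
    gy, gx = np.gradient(gray)                       # gradient components
    grad_amp = np.sqrt(gx**2 + gy**2).mean()         # gradient amplitude V
    shifted = np.roll(gray, 1, axis=1)               # left neighbour
    contrast = np.abs(gray - shifted)[:, 1:].mean()  # Weber-style proxy
    return np.array([dark, grad_amp, contrast])

rng = np.random.default_rng(0)
clear = rng.uniform(100, 255, (32, 32, 3))   # textured "clear" image
foggy = np.full((32, 32, 3), 200.0)          # flat grey "foggy" image
fv_clear = extract_feature_vector(clear)
fv_foggy = extract_feature_vector(foggy)
```

On the flat "foggy" image both the gradient and contrast components collapse to zero, which is the intuition behind using these features for fog-grade classification.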
In some embodiments of the present application, step S1 is a data collection step, in which historical images (or historical video data) and natural weather data, including visibility grade values, can be collected using Internet-of-Things technology. Specifically, meteorological instruments perform meteorological observation and provide natural meteorological data for measurement points, which can include meteorological parameters and visibility grade values. The historical images can be images that reflect visibility, taken by image acquisition equipment arranged at a meteorological measurement point. Since the meteorological instruments and image acquisition equipment at a measurement point usually collect data periodically rather than at every moment, the historical images in the embodiments of the present application specifically refer to data collected by the station's instruments and image acquisition equipment within a preset time period before the current moment, where the current moment is the time at which the current image data is collected, and the real-time image is the image data collected at the current moment.
In some embodiments of the present application, before step S2, the machine-learning-based visibility detection method can further include a data cleaning step: preprocessing the historical images. Historical images often contain erroneous data caused by dirt on the monitoring camera, or large numbers of near-identical pictures, and these need to be screened out. That is, images whose quality does not meet the standard (for example, whose resolution does not meet a preset pixel requirement) are removed in advance; and if several identical or similar images exist, the duplicates can be deleted, keeping only one or a few of them.
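A data-cleaning step of this kind might be sketched as follows; the duplicate test (a byte hash) and the quality test (grey-level standard deviation against a hypothetical `min_std` threshold) are illustrative stand-ins for the screening described above:

```python
import hashlib
import numpy as np

def clean_images(images, min_std=5.0):
    """Drop exact duplicate frames (hash of the raw bytes) and
    near-uniform frames whose grey-level standard deviation falls
    below `min_std` (a hypothetical quality threshold)."""
    seen, kept = set(), []
    for img in images:
        digest = hashlib.sha1(np.ascontiguousarray(img).tobytes()).hexdigest()
        if digest in seen:
            continue                       # duplicate frame
        if float(img.std()) < min_std:
            continue                       # likely lens dirt / uniform frame
        seen.add(digest)
        kept.append(img)
    return kept

rng = np.random.default_rng(1)
good = rng.integers(0, 256, (16, 16), dtype=np.uint8)
flat = np.full((16, 16), 128, dtype=np.uint8)   # near-uniform, rejected
batch = [good, good.copy(), flat]               # second entry is a duplicate
cleaned = clean_images(batch)
```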
After data collection and data cleaning, the salient features related to fog concentration are extracted: the dark channel image features, image gradient amplitude features and image contrast amplitude features. The extracted features are then labeled in step S3 to obtain multiple groups of training samples, denoted $(x_i, y_i),\ i = 1, 2, \dots, n$. The specific labeling process is to classify the historical images according to the corresponding historical visibility grade values measured by a visibility meter.
In some embodiments of the present application, the step S4 constructs a support vector machine algorithm model, which specifically includes:
according to the kernel function classification principle of the support vector machine, the following optimization models and constraint conditions are established:
$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j\,k(x_i,x_j)\;-\;\sum_{i=1}^{n}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{n}\alpha_i y_i = 0,\qquad 0\le\alpha_i\le C,\ \ i=1,\dots,n$$

where $k(x_i,x_j)$ is the kernel function and $C$ is a regularization parameter; the larger $C$ is, the more easily overfitting occurs. The embodiments of the present application select the Gaussian kernel as the kernel of the support vector machine:

$$k(x_i,x_j)=\exp\!\left(-\frac{\lVert x_i-x_j\rVert^{2}}{2\sigma^{2}}\right)$$

where $\sigma > 0$ is the bandwidth of the Gaussian kernel.
The parameter optimization of the constructed support vector machine algorithm model is introduced as follows: and carrying out support vector machine parameter optimization by adopting grid search and five-fold cross validation.
As can be seen from step S4, the parameters to be optimized in the support vector machine algorithm adopted in the embodiments of the present application are the regularization parameter C and the bandwidth σ of the Gaussian kernel. The interval of C is selected as $[C_{\min}, C_{\max}]$ and the range of σ as $[\sigma_{\min}, \sigma_{\max}]$; each pairwise combination of parameter values in these intervals is evaluated, the accuracy is calculated by the cross-validation algorithm, and the (C, σ) combination with the highest accuracy is obtained. If several combinations tie, the one with the smallest C is selected to ensure the generalization of the model.
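With scikit-learn, the grid search over (C, σ) with five-fold cross-validation described above can be sketched as follows. Note that sklearn parametrises the Gaussian (RBF) kernel by `gamma`, which plays the role of 1/(2σ²), and that the grid bounds below are placeholders for $[C_{\min}, C_{\max}]$ and $[\sigma_{\min}, \sigma_{\max}]$; the toy data stands in for the labeled feature vectors:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Two easily separated clusters stand in for two visibility grades.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(4, 1, (40, 3))])
y = np.array([0] * 40 + [1] * 40)

# Every (C, gamma) pair in the grid is scored by 5-fold cross-validation
# and the highest-accuracy combination is retained, as in the text.
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
best = search.best_params_
```

`search.best_estimator_` is then the trained visibility-grade model used in the detection phase.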
And training and optimizing the vector machine algorithm model by using a plurality of groups of training samples, and finally outputting to obtain the visibility grade detection model. The visibility grade detection model can be used for detecting the visibility grade of the input real-time image.
In some embodiments of the present application, the visibility detection method based on machine learning further includes: and outputting the visibility level detection value.
For convenience of understanding, the extraction of the dark channel image features of the historical image in step S2 is described in detail below; it specifically includes:
first, the historical image is divided into several w × h panes (as an example, 15 × 15 panes can be employed), each pane being denoted w(x);
then, calculating the dark channel map characteristic of the historical image according to the following formula for each pane:
$$J^{\text{dark}}(x)=\min_{y\in w(x)}\Bigl(\min_{c\in\{r,g,b\}} J^{c}(y)\Bigr)$$

where $J^{c}(y)$ is a colour channel of the image and $J^{\text{dark}}(x)$ is the dark channel of the image;
because the extracted dark channel image features are blurred, the dark channel map is refined using guided filtering, with the calculation formula:

$$\hat t(i)=a_k I(i)+b_k,\qquad \forall\, i\in S_k$$

where $\hat t$ is the transformed image, $S_k$ is a window of the original image, $(a_k,b_k)$ are linear coefficients held constant within the window, and $I(i)$ is the guide image; the grey-scale version of the original image is used as the guide image.
To keep the difference between the original image and the transformed image minimal, the parameters $a_k$ and $b_k$ are calculated with the following formulas:

$$a_k=\frac{\operatorname{cov}_k(I,t)}{\sigma_k^{2}+\varepsilon},\qquad b_k=\bar t_k-a_k\mu_k$$

where $\sigma_k^{2}$ is the variance of the window area, $\mu_k$ is the mean of the window area, $\operatorname{cov}_k(I,t)$ is the correlation of the guide image $I$ and the dark channel map $t$ within the window $S_k$, $\bar t_k$ is the mean of the dark-channel pixel values in the window, and $\varepsilon$ is a regularization term.
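A minimal numpy sketch of the dark-channel computation above: the channel-wise minimum followed by a window minimum over w(x). The guided-filter refinement is omitted for brevity, and the patch size is illustrative:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the colour channels, then a minimum
    filter over a patch x patch window w(x). Plain-loop version;
    the guided-filter refinement step is not included."""
    min_rgb = img.min(axis=2)                 # min over channels c
    h, w = min_rgb.shape
    r = patch // 2
    padded = np.pad(min_rgb, r, mode="edge")  # replicate the borders
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A uniformly bright "foggy" patch keeps a high dark-channel value,
# whereas a haze-free scene tends toward zero (the dark-channel prior).
foggy = np.full((20, 20, 3), 0.8)
dc = dark_channel(foggy, patch=5)
```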
In some specific embodiments of the present application, the extraction of the image gradient amplitude features of the historical image in step S2 specifically includes:
first, image gradient features are obtained with the Sobel operator. Using an n × n window area, the edge feature value is calculated according to the following formula:

$$G(x,y)=\sqrt{G_x^{2}+G_y^{2}}$$

where $G(x,y)$ is the edge feature value at pixel $(x,y)$, $G_x$ is the row-direction feature component of the pixel, and $G_y$ is the column-direction feature component;
then, the gradient amplitude feature is calculated according to the following formula:

$$V=\frac{1}{H\cdot W}\sum_{x=1}^{H}\sum_{y=1}^{W} G(x,y)$$

where $V$ is the gradient amplitude of the image, $H$ and $W$ are the image dimensions, and $G(x,y)$ is the gradient value at the pixel.
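A numpy sketch of the gradient-amplitude feature above: 3 × 3 Sobel responses combined as √(G_x² + G_y²) and averaged over the H × W image. On a fog-like uniform image the amplitude collapses to zero, which is what makes the feature discriminative for visibility:

```python
import numpy as np

def sobel_gradient_amplitude(gray):
    """Mean Sobel edge magnitude over the image (the value V above)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                 # column-direction kernel
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    g = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx = (win * kx).sum()             # row-direction response G_x
            gy = (win * ky).sum()             # column-direction response G_y
            g[i, j] = np.hypot(gx, gy)        # sqrt(G_x^2 + G_y^2)
    return g.mean()

flat = np.full((16, 16), 0.5)                 # fog-like image, no edges
edges = np.zeros((16, 16)); edges[:, 8:] = 1.0  # sharp vertical edge
```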
In some specific embodiments of the present application, the extraction of the image contrast amplitude features of the historical image in step S2 specifically includes:
acquiring the Weber's-law contrast feature of the image according to the following formula:

$$C(x,y)=\frac{\lvert f(x_1,y_1)-f(x_2,y_2)\rvert}{M-\min\bigl(f(x_1,y_1),\,f(x_2,y_2)\bigr)}$$

where $C(x,y)$ is the row-direction contrast at the pixel, $f(x_1,y_1)$ is the grey value of the pixel under consideration, $f(x_2,y_2)$ is the grey value of its left neighbour, $M$ is the maximum grey value of the picture, and $\min$ takes the smaller of the two neighbouring grey values;
calculating the image contrast amplitude feature according to the formula:

$$C_{\text{mean}}=\frac{1}{H\cdot W}\sum_{x=1}^{H}\sum_{y=1}^{W} C(x,y)$$

where $C_{\text{mean}}$ is the contrast amplitude of the image, $H$ and $W$ are the image dimensions, and $C(x,y)$ is the lateral Weber's-law contrast value at the pixel.
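A numpy sketch of the contrast feature above. The exact normaliser (M minus the smaller of the two neighbouring grey values) is our reading of the description, so it is marked as an assumption in the code:

```python
import numpy as np

def weber_contrast_amplitude(gray, max_gray=255.0):
    """Mean left-neighbour Weber contrast (C_mean above). The
    normaliser max_gray - min(f1, f2) is an assumed reading of the
    patent's description, not a verbatim formula."""
    f1 = gray[:, 1:].astype(float)            # pixel under consideration
    f2 = gray[:, :-1].astype(float)           # its left neighbour
    denom = max_gray - np.minimum(f1, f2)
    denom = np.where(denom == 0, 1.0, denom)  # guard division by zero
    c = np.abs(f1 - f2) / denom
    return c.mean()

flat = np.full((8, 8), 120.0)                 # uniform "foggy" image
stripes = np.tile([0.0, 200.0], (8, 4))       # alternating columns, high contrast
```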
For convenience of understanding, and combining the embodiments above, referring to Fig. 1, the machine-learning-based visibility detection method can comprise two main parts: a model training process and a visibility detection process. The model training process can include historical data acquisition (corresponding to data collection via network technology in the embodiments above: historical image data from monitoring equipment, meteorological instruments and the like, and visibility grade values from visibility meters), data cleaning (corresponding to the preprocessing of historical image data), data classification and feature extraction (corresponding to steps S2 and S3), and model training and optimization, finally outputting the visibility grade detection model (corresponding to steps S4 and S5).
The visibility detection process can include real-time data collection (equivalent to step S6), feature extraction and input of features to the visibility model, and output of visibility levels (equivalent to step S7).
In still other embodiments of the present application, there is also provided a visibility detection apparatus based on machine learning, including:
a memory for storing program instructions;
a processor for invoking the program instructions stored in the memory to implement the visibility detection method based on machine learning as described in any of the above embodiments.
In further specific embodiments of the present application, a computer-readable storage medium is further provided, which stores program codes for implementing the visibility detection method based on machine learning described in any one of the above embodiments.
In further specific embodiments of the present application, there is also provided a machine learning based visibility detection system, including: the system comprises a display server, an image acquisition terminal, a visibility detection server, a database and a third-party platform, wherein the image acquisition terminal, the visibility detection server, the database and the third-party platform are connected to the display server;
the display server is used for receiving the acquisition result of the image acquisition terminal and the visibility grade value detected by the visibility detection server, wherein the acquisition result comprises a historical image and/or a real-time image;
the display server is also used for calling the visibility detection server to carry out visibility detection;
the visibility detection server is configured to perform visibility detection by using the visibility detection method according to any one of the first aspects, and feed a visibility level detection value back to the display server;
the display server is also used for storing the visibility level detection value to the database and providing a data access interface;
the third party platform sends a visibility detection request to the display server by calling the data access interface, and the display server responds to the visibility detection request and feeds back the latest visibility level detection value searched from the database to the third party platform for display.
For convenience of understanding, Fig. 2 shows the composition of a machine-learning-based visibility detection system. The image acquisition terminal collects a frame in real time and transmits it to the display server, which forwards the picture to the visibility detection server for detection. The visibility detection server stores the trained and optimized visibility grade detection model; the picture is input into the detection model, detection is performed by the visibility detection method described above, and the resulting visibility value (visibility grade) is returned to the display server, which stores it in the database. The third-party platform can, on the one hand, read and display the video stream captured by the image acquisition equipment and, on the other hand, call the display server interface to obtain the visibility grade; the display server responds to the request and connects to the database to fetch the latest visibility grade, the database returns it to the display server, and the third-party platform obtains and displays the visibility grade successfully.
The system adopts double servers, the visibility grade detection server is used for identifying the visibility grade, and the display server is used for storing data and providing an interface for the outside. The visibility grade detection process is divided into three modules: monitoring video display, visibility detection and real-time visibility data acquisition and display. The display of the monitoring video is mainly performed through a Real Time Streaming Protocol (RTSP) display system.
The visibility detection process comprises the following steps: first, a frame of real-time picture is collected; then the visibility grade detection model in the visibility grade detection server is called through the display server for detection and identification; after identification, the identified data are stored in the database.
The process of acquiring and displaying the real-time visibility data comprises the following steps: the browser of the third-party platform calls the visibility data access interface of the display server through the Hypertext Transfer Protocol (HTTP), the display server searches the latest data from the database and returns them to the browser, and the browser displays the latest visibility data.
The three modules (monitoring video display, visibility detection and real-time visibility data acquisition and display) are coordinated with each other to form a closed-loop visibility detection process.
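The store-and-serve loop described above (a detection result written to the database, the latest value served back through the data access interface) can be sketched with Python's standard-library sqlite3 module standing in for the patent's database. The table layout and function names here are illustrative assumptions, not taken from the patent.

```python
import sqlite3
import time

def init_db(conn):
    # One row per detection: timestamp plus visibility grade value.
    conn.execute("CREATE TABLE IF NOT EXISTS visibility (ts REAL, level INTEGER)")

def store_level(conn, level, ts=None):
    # Called by the display server after the visibility detection server
    # returns a visibility grade detection value.
    conn.execute("INSERT INTO visibility (ts, level) VALUES (?, ?)",
                 (time.time() if ts is None else ts, level))

def latest_level(conn):
    # What the data access interface serves to the third-party platform:
    # the most recent visibility grade, or None if nothing is stored yet.
    row = conn.execute(
        "SELECT level FROM visibility ORDER BY ts DESC LIMIT 1").fetchone()
    return None if row is None else row[0]
```

In this sketch an HTTP endpoint on the display server would simply wrap `latest_level` and serialize its result for the browser.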
In some embodiments of the present application, 110 pictures of different visibility levels are used for training, and the final test accuracy reaches 89.1%. For a collected color image (1920 × 1080), extracting the features according to the above steps and inputting them into the visibility grade detection model for prediction takes 605 ms per prediction.
The visibility detection method based on machine learning preferentially extracts the salient features related to fog concentration, namely the dark channel image features, the image gradient amplitude features and the image contrast amplitude features, and combines them with a support vector machine algorithm in machine learning for model training. This improves the visibility grade detection precision and greatly reduces the model's requirement on the number of samples, so that a relatively accurate detection result can be obtained from a small sample set.
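The feature-plus-SVM pipeline just summarized can be sketched as follows. The three scalar features below are deliberately crude stand-ins for the patent's dark-channel, gradient-amplitude and contrast-amplitude features (the claimed formulas are richer), and the function names are illustrative assumptions; the point is the shape of the pipeline: extract a small feature vector per image, then train a Gaussian-kernel support vector machine on labeled samples.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(img):
    """img: 2-D grayscale array in [0, 255] (simplified stand-in features)."""
    img = np.asarray(img, dtype=float)
    dark = float(img.min())                                # dark-channel proxy
    gy, gx = np.gradient(img)
    grad_amp = float(np.sqrt(gx**2 + gy**2).mean())        # gradient amplitude
    contrast = float(np.abs(np.diff(img, axis=1)).mean())  # contrast amplitude
    return [dark, grad_amp, contrast]

def train_visibility_model(images, levels, C=1.0, sigma=None):
    X = np.array([extract_features(i) for i in images])
    # kernel="rbf" is the Gaussian kernel; scikit-learn's gamma relates to the
    # bandwidth via gamma = 1 / (2 * sigma**2).
    gamma = "scale" if sigma is None else 1.0 / (2.0 * sigma**2)
    model = SVC(kernel="rbf", C=C, gamma=gamma)
    model.fit(X, levels)
    return model
```

Foggy frames have low gradient and contrast amplitudes, clear frames high ones, which is what makes even this small feature vector separable by the SVM.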
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A visibility detection method based on machine learning is characterized by comprising the following steps:
collecting historical images and visibility grade values;
extracting dark channel image features, image gradient amplitude features and image contrast amplitude features of the historical image;
labeling and classifying the extracted dark channel image characteristics, the image gradient amplitude characteristics and the image contrast amplitude characteristics according to the visibility grade value to obtain a plurality of groups of training samples;
constructing a support vector machine algorithm model, and selecting a Gaussian kernel function as a kernel function of the support vector machine algorithm;
training and optimizing the support vector machine algorithm model by using the training sample to obtain a visibility grade detection model;
acquiring a real-time image;
extracting dark channel image features, image gradient amplitude features and image contrast amplitude features of the real-time image, and inputting the extracted features into the visibility grade detection model to obtain a visibility grade detection value.
2. The machine learning-based visibility detection method according to claim 1, further comprising: and outputting the visibility level detection value.
3. The machine-learning-based visibility detection method according to claim 1, further comprising, before extracting dark-channel image features, image gradient magnitude features, and image contrast magnitude features of the history image: and preprocessing the historical image.
4. The visibility detection method based on machine learning as claimed in claim 1, wherein constructing a support vector machine algorithm model specifically comprises:
according to the kernel function classification principle of the support vector machine, the following optimization models and constraint conditions are established:
max_α Σ_{i=1..N} α_i − (1/2) Σ_{i=1..N} Σ_{j=1..N} α_i α_j y_i y_j k(x_i, x_j)
s.t. Σ_{i=1..N} α_i y_i = 0,
0 ≤ α_i ≤ C, i = 1, 2, ..., N;
wherein k(x_i, x_j) is the kernel function and C is the regularization parameter; the Gaussian kernel function formula is as follows:
k(x_i, x_j) = exp(−||x_i − x_j||^2 / (2σ^2))
where σ is the bandwidth of the Gaussian kernel, and σ > 0.
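As an illustrative aside (not part of the claim), the Gaussian kernel selected above, assuming its standard form k(x_i, x_j) = exp(−||x_i − x_j||^2 / (2σ^2)) with bandwidth σ > 0, transcribes directly to:

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    # k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * sigma^2)), sigma > 0
    if sigma <= 0:
        raise ValueError("sigma must be positive")
    d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(np.exp(-np.dot(d, d) / (2.0 * sigma**2)))
```

The kernel equals 1 when the two feature vectors coincide and decays toward 0 as they move apart, at a rate set by σ.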
5. The visibility detection method based on machine learning according to claim 1, wherein extracting dark channel image features of the history image specifically includes:
dividing the historical image into a plurality of panes of size w × h, each pane being denoted Ω(x);
calculating the dark channel map characteristics of the historical image according to the following formula for each pane:
J^dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J^c(y) )
wherein Ω(x) is the pane centered at pixel x, J^c(y) is a color channel of the image, and J^dark(x) is the dark channel of the image;
and refining the dark channel image by adopting guide filtering, wherein a calculation formula is as follows:
q_i = a_k I_i + b_k, ∀ i ∈ S_k
wherein q_i is the transformed (filtered) image, S_k is a window of the original image, I_i is the guide image, and (a_k, b_k) are linear coefficients that are constant within the window.
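The dark-channel step of the claim, namely a per-pixel minimum over the color channels followed by a minimum over a local pane, can be sketched as below. The pure-Python window loop is for clarity only, the edge padding at borders is an implementation assumption not specified in the claim, and the guided-filter refinement step is omitted.

```python
import numpy as np

def dark_channel(img, patch=3):
    """img: H x W x 3 array; returns the H x W dark channel."""
    channel_min = img.min(axis=2)              # min over c in {r, g, b}
    pad = patch // 2
    padded = np.pad(channel_min, pad, mode="edge")
    h, w = channel_min.shape
    out = np.empty((h, w), dtype=img.dtype)
    for i in range(h):                         # min over the local pane
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

Haze-free regions contain some pixel with a near-zero color channel, so a low dark-channel value signals a clear scene and a uniformly high one signals fog.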
6. The visibility detection method based on machine learning according to claim 1, wherein extracting image gradient magnitude features of the historical image specifically includes:
acquiring image gradient features through the Sobel operator: using an n × n window region, calculating an edge feature value according to the following formula:
G(x, y) = sqrt(G_x^2 + G_y^2)
wherein G(x, y) represents the edge feature value at pixel (x, y), G_x represents the row-direction feature component of the pixel, and G_y represents the column-direction feature component of the pixel;
the gradient amplitude feature is calculated according to the following formula:
V = (1 / (H × W)) Σ_{x=1..H} Σ_{y=1..W} G(x, y)
wherein V represents the gradient amplitude of the image, H and W represent the image height and width, and G(x, y) represents the gradient value corresponding to the pixel.
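The gradient-amplitude feature of this claim, Sobel components per pixel, the edge value G = sqrt(G_x^2 + G_y^2), and the mean over all H × W pixels as V, can be sketched as follows. Edge padding at the borders is an implementation assumption, and the explicit loop stands in for an optimized convolution.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def _filter3(img, kernel):
    # 3 x 3 sliding-window filter with edge padding (naive, for clarity).
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def gradient_amplitude(img):
    img = np.asarray(img, dtype=float)
    gx = _filter3(img, SOBEL_X)      # row-direction component G_x
    gy = _filter3(img, SOBEL_Y)      # column-direction component G_y
    g = np.sqrt(gx**2 + gy**2)       # G(x, y)
    return float(g.mean())           # V = (1/(H*W)) * sum of G(x, y)
```

A flat (heavily fogged) frame yields V near zero, while sharp scene edges drive V up.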
7. The visibility detection method based on machine learning according to claim 1, wherein extracting image contrast amplitude features of the historical images specifically includes:
acquiring the Weber's law contrast characteristics of the image according to the following formula:
C(x, y) = (f(x_1, y_1) − f(x_2, y_2)) / (M − min(f(x_1, y_1), f(x_2, y_2)))
where C(x, y) is the row-direction contrast of the pixel, f(x_1, y_1) is the gray value of the pixel being calculated, f(x_2, y_2) is the gray value of the adjacent pixel to its left, M is the maximum gray value of the picture, and min is the minimum of the gray values of the pixel and its adjacent pixel;
calculating the image contrast amplitude feature according to the formula:
C_mean = (1 / (H × W)) Σ_{x=1..H} Σ_{y=1..W} C(x, y)
wherein C_mean represents the contrast amplitude of the image, H and W represent the image height and width, and C(x, y) represents the lateral Weber's law contrast value corresponding to the pixel.
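A sketch of the contrast-amplitude feature follows. The exact normalization of the claim's Weber-law contrast is not fully recoverable from the text, so the per-pixel definition below is an assumption: each pixel's contrast is its absolute difference from the adjacent left pixel divided by the smaller of the two gray values (a Weber-style ratio), and C_mean is the average over the image.

```python
import numpy as np

def contrast_amplitude(img, eps=1e-6):
    """img: 2-D grayscale array; returns a Weber-style mean contrast.

    NOTE: the per-pixel normalization is an assumed stand-in for the
    patent's formula, which is only partially specified in the text.
    """
    img = np.asarray(img, dtype=float)
    left = img[:, :-1]        # f(x_2, y_2): adjacent left pixel
    cur = img[:, 1:]          # f(x_1, y_1): pixel being evaluated
    c = np.abs(cur - left) / (np.minimum(cur, left) + eps)
    return float(c.mean())    # C_mean over the image
```

As with the gradient amplitude, fog flattens gray-level differences between neighboring pixels, so C_mean decreases as visibility drops.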
8. A visibility detection device based on machine learning, comprising:
a memory for storing program instructions;
a processor for invoking the program instructions stored in the memory to implement the machine learning based visibility detection method of any one of claims 1 to 7.
9. A computer-readable storage medium characterized in that the computer-readable storage medium stores a program code for implementing the machine learning-based visibility detection method according to any one of claims 1 to 7.
10. A visibility detection system based on machine learning, comprising: the system comprises a display server, an image acquisition terminal, a visibility detection server, a database and a third-party platform, wherein the image acquisition terminal, the visibility detection server, the database and the third-party platform are connected to the display server;
the display server is used for receiving an acquisition result of the image acquisition terminal and a visibility grade value detected by the visibility detection server, wherein the acquisition result comprises a historical image and/or a real-time image;
the display server is also used for calling the visibility detection server to carry out visibility detection;
the visibility detection server is used for executing the visibility detection method as claimed in any one of claims 1 to 7 to perform visibility detection, and feeding back a visibility level detection value to the display server;
the display server is also used for storing the visibility level detection value to the database and providing a data access interface;
the third-party platform sends a visibility detection request to the display server by calling the data access interface, and the display server responds to the visibility detection request and feeds back the latest visibility level detection value found in the database to the third-party platform for display.
CN202110392515.0A 2021-04-13 2021-04-13 Visibility detection method, device and system based on machine learning and storage medium Pending CN113128581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110392515.0A CN113128581A (en) 2021-04-13 2021-04-13 Visibility detection method, device and system based on machine learning and storage medium


Publications (1)

Publication Number Publication Date
CN113128581A true CN113128581A (en) 2021-07-16

Family

ID=76775905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110392515.0A Pending CN113128581A (en) 2021-04-13 2021-04-13 Visibility detection method, device and system based on machine learning and storage medium

Country Status (1)

Country Link
CN (1) CN113128581A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824491A * 2023-06-16 2023-09-29 Beijing Baidu Netcom Science and Technology Co., Ltd. Visibility detection method, training method and device of detection model and storage medium
CN117152361A * 2023-10-26 2023-12-01 Tianjin Binhai New Area Meteorological Bureau (Tianjin Binhai New Area Meteorological Early Warning Center) Remote sensing image visibility estimation method based on attention mechanism
CN117152361B * 2023-10-26 2024-01-30 Tianjin Binhai New Area Meteorological Bureau (Tianjin Binhai New Area Meteorological Early Warning Center) Remote sensing image visibility estimation method based on attention mechanism

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194924A * 2017-05-23 2017-09-22 Chongqing University Expressway fog visibility detection method based on dark channel prior and deep learning
CN109741322A * 2019-01-08 2019-05-10 Nanjing Lanlv IoT Technology Co., Ltd. A visibility measurement method based on machine learning
CN109858494A * 2018-12-28 2019-06-07 Wuhan University of Science and Technology Salient object detection method and device in low-contrast images
CN110849807A * 2019-11-22 2020-02-28 Shandong Jiaotong University Monitoring method and system for road visibility based on deep learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIU Nanhui: "Road speed-limit monitoring and capture system based on visibility detection", China Master's Theses Full-text Database, Engineering Science and Technology II *
TIAN Jinwen, TIAN Tian; ZHANG Tianxu (chief ed.): "Image Matching Navigation and Positioning Technology" (Aerospace Navigation and Guidance Image Information Technology and Systems research series), 31 January 2021, Wuhan: Huazhong University of Science and Technology Press *
XU Qian et al.: "Visibility measurement method based on image understanding", Pattern Recognition and Artificial Intelligence *
TAN Kun: "Research on Semi-supervised Classification of Hyperspectral Remote Sensing Images", 31 January 2014, China University of Mining and Technology Press *


Similar Documents

Publication Publication Date Title
CN111696139B (en) White feather breeding hen group weight estimation system and method based on RGB image
CN113065578B (en) Image visual semantic segmentation method based on double-path region attention coding and decoding
CN108898085A (en) Intelligent road disease detection method based on mobile phone video
CN113128581A (en) Visibility detection method, device and system based on machine learning and storage medium
CN104268505A (en) Automatic cloth defect point detection and recognition device and method based on machine vision
CN109948476B (en) Human face skin detection system based on computer vision and implementation method thereof
CN111784017B (en) Road traffic accident number prediction method based on road condition factor regression analysis
CN111898581A (en) Animal detection method, device, electronic equipment and readable storage medium
CN111950812B (en) Method and device for automatically identifying and predicting rainfall
CN108711148A (en) A kind of wheel tyre defect intelligent detecting method based on deep learning
CN116863274A (en) Semi-supervised learning-based steel plate surface defect detection method and system
CN110736709A (en) blueberry maturity nondestructive testing method based on deep convolutional neural network
CN110956615A (en) Image quality evaluation model training method and device, electronic equipment and storage medium
CN110728269B (en) High-speed rail contact net support pole number plate identification method based on C2 detection data
CN106570440A (en) People counting method and people counting device based on image analysis
CN114022761A (en) Detection and positioning method and device for power transmission line tower based on satellite remote sensing image
CN114067438A (en) Thermal infrared vision-based parking apron human body action recognition method and system
CN112084851A (en) Hand hygiene effect detection method, device, equipment and medium
CN116012701A (en) Water treatment dosing control method and device based on alum blossom detection
CN116129135A (en) Tower crane safety early warning method based on small target visual identification and virtual entity mapping
CN114494845A (en) Artificial intelligence hidden danger troubleshooting system and method for construction project site
CN111553500B (en) Railway traffic contact net inspection method based on attention mechanism full convolution network
CN115830514B (en) Whole river reach surface flow velocity calculation method and system suitable for curved river channel
CN116580026A (en) Automatic optical detection method, equipment and storage medium for appearance defects of precision parts
CN115100577A (en) Visibility recognition method and system based on neural network, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210716)