CN114973110B - On-line monitoring method and system for highway weather - Google Patents

Publication number: CN114973110B (application CN202210890668.2A)
Authority: CN (China)
Prior art keywords: fog, image, sample, vector, monitoring
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN114973110A
Inventor
李文
徐莉
郑小燕
刘强
杨苗
陈龙
Current Assignee: Sichuan Jiutong Zhilu Technology Co ltd
Original Assignee: Sichuan Jiutong Zhilu Technology Co ltd
Application filed by Sichuan Jiutong Zhilu Technology Co ltd; priority to CN202210890668.2A; publication of CN114973110A, then grant and publication of CN114973110B; legal status: Active.

Classifications

    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a method and a system for online monitoring of highway weather. The method collects monitoring images from monitoring equipment installed on a highway; the monitoring images include a first monitoring image and a second monitoring image. The first monitoring image and the second monitoring image are input into a fog detection model to obtain a fog detection condition, which comprises a fog feature vector and an image change value. The image change value represents the degree to which the fog in the second monitoring image has changed relative to the first monitoring image; the elements of the fog feature vector represent the fog category and the fog degree, respectively. A fog judgment value is then obtained from the fog feature vector and the image change value. The system comprises an acquisition module, a detection module and a judgment module. The method and system can more accurately recognize images that are blurred for reasons other than fog, greatly improving the accuracy of fog recognition.

Description

On-line monitoring method and system for highway weather
Technical Field
The invention relates to the technical field of computers, and in particular to a method and a system for online monitoring of highway weather.
Background
At present, fog is generally detected and recognized either by hand-crafted feature extraction or by convolution. With feature extraction, fog diffused across the whole picture cannot be separated from the background, and because fog varies widely in shape and color, stable features are difficult to extract in isolation, making detection and recognition difficult. With convolutional detection, a large amount of data is required, detection is inefficient when dense fog is just forming, and interference such as blurred pictures and gray objects prevents fog from being recognized accurately.
Disclosure of Invention
The invention aims to provide an online monitoring method and system for highway weather, which are used for solving the problems in the prior art.
In a first aspect, an embodiment of the present invention provides an online highway weather monitoring method, including: collecting monitoring images from monitoring equipment installed on a highway; the monitoring images comprise a first monitoring image and a second monitoring image;
inputting the first monitoring image and the second monitoring image into a fog detection model to obtain a fog detection condition; the fog detection condition comprises a fog feature vector and an image change value; the image change value represents the degree of change of the fog in the second monitoring image compared with the first monitoring image; the fog feature vector comprises a first element and a plurality of second elements, the first element represents the fog category, and the second elements represent the fog degree;
obtaining a fog judgment value based on the fog characteristic vector and the image change value; if the fog judgment value is 1, the fog is detected, and if the fog judgment value is 0, the fog is not detected;
the fog detection model comprises: a feature extraction convolution network, an image change calculation layer and two full connection layers; inputting the characteristic extraction convolution network into a first monitoring image and a second monitoring image; the output of the feature extraction convolution network is a first feature map and a second feature map; the input of the first full connection layer is the first characteristic diagram and the second characteristic diagram; the output of the first fully connected layer is a first eigenvector and a second eigenvector; the input of the image change value calculation layer is the first feature vector and the second feature vector; the output of the image change value calculation layer is an image change value; the input of the second full connection layer is the second characteristic diagram; the output of the first fully-connected layer is a mist characteristic vector.
Optionally, the training method of the fog detection model includes:
obtaining a training set, wherein the training set comprises a plurality of training pictures and labeling data; the plurality of training pictures comprise basic sample images, positive sample images and negative sample images; each positive sample image is an image captured within 2 s after the corresponding basic sample image; the basic sample images are images containing fog; the negative sample images are pictures without fog whose background is blurred by other causes;
preprocessing the basic sample images, the positive sample images and the negative sample images in the fog detection training set to obtain preprocessed fog images; the preprocessed fog images comprise a basic sample preprocessed fog image, a positive sample preprocessed fog image and a negative sample preprocessed fog image;
inputting the preprocessed fog images into the feature extraction convolution network to obtain fog feature maps; the fog feature maps comprise a basic sample fog feature map, a positive sample fog feature map and a negative sample fog feature map;
obtaining an image change value based on the basic sample fog feature map, the positive sample fog feature map and the first full connection layer;
obtaining fog detection feature vectors based on the basic sample fog feature map, the negative sample fog feature map and the second full connection layer; the fog detection feature vectors comprise a basic sample fog feature detection vector and a negative sample fog feature detection vector; each fog detection feature vector comprises a category and a fog degree, wherein the category is fog or fog-free, and the fog degree is one of fog-free, light fog, dense fog, strong dense fog, extra strong dense fog, extremely strong dense fog and blind fog;
obtaining a fog detection loss value based on the basic sample fog characteristic detection vector, the negative sample fog characteristic detection vector, the image change value and the labeling data;
obtaining the maximum iteration times of the fog detection model training;
and stopping training when the fog detection loss value is less than or equal to a fog threshold value or the training iteration number reaches the maximum iteration number, so as to obtain a trained fog detection model.
Optionally, the inputting of the preprocessed fog images into the feature extraction convolution network to obtain the fog feature maps includes:
inputting the basic sample preprocessed fog image into the feature extraction convolution network to obtain the basic sample fog feature map;
inputting the positive sample preprocessed fog image into the feature extraction convolution network to obtain the positive sample fog feature map;
inputting the negative sample preprocessed fog image into the feature extraction convolution network to obtain the negative sample fog feature map.
Optionally, the obtaining of the image change value based on the basic sample fog feature map, the positive sample fog feature map and the first full connection layer includes:
inputting the basic sample fog feature map into the first full connection layer to obtain a basic sample fog change vector;
inputting the positive sample fog feature map into the first full connection layer to obtain a positive sample fog change vector;
obtaining a fog change similarity value based on the basic sample fog change vector and the positive sample fog change vector;
the fog change similarity value is obtained by the calculation mode of the following formula:
Figure GDA0003854017020000031
wherein dist is the mist variation similarity value; x is the number ofiRepresenting elements in the basic sample fog variation vector; y isiRepresenting elements in the positive sample mist variation vector; m represents the length of the basic sample fog change direction and the positive sample fog change vector; i represents the ith element in the basic sample fog change direction and the positive sample fog change vector; the value of i is an integer between 1 and m.
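A minimal sketch of this step, assuming the similarity value is the Euclidean distance between the basic sample fog change vector and the positive sample fog change vector (the formula itself is only available as an image in the source):

```python
import math

def fog_change_similarity(x, y):
    # Euclidean distance between the basic sample fog change vector x
    # and the positive sample fog change vector y (both of length m).
    assert len(x) == len(y), "vectors must have equal length m"
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
```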
Optionally, obtaining a fog detection feature vector based on the basic sample fog feature map, the negative sample fog feature map, and the second full connection layer includes:
inputting the basic sample fog characteristic diagram into the second full-connection layer to obtain a basic sample fog detection vector;
and inputting the negative sample fog characteristic diagram into the second full-connection layer to obtain a negative sample fog detection vector.
Optionally, the obtaining of the fog detection loss value based on the basic sample fog feature detection vector, the negative sample fog feature detection vector, the image change value and the labeling data includes: the fog detection loss value is calculated by the following formula:

Loss = C \cdot \sum_{i} \left[ (a_i - p_i)^2 - (a_i - n_i)^2 \right]

wherein Loss is the fog detection loss value; C is the image change value; a_i are the elements of the basic sample fog detection vector; p_i are the elements of the positive sample fog detection vector; n_i are the elements of the negative sample fog detection vector.
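The loss formula appears only as an image in the source. The sketch below encodes one plausible triplet-style reading of the surrounding description (pull the positive sample's detection vector toward the basic sample's, push the negative sample's away, weighted by the image change value C); it is an assumption, not the patented formula:

```python
def fog_detection_loss(a, p, n, c):
    # a: basic sample fog detection vector, p: positive sample fog detection
    # vector, n: negative sample fog detection vector, c: image change value.
    # Assumed triplet-style form: c * sum[(a_i - p_i)^2 - (a_i - n_i)^2].
    pos = sum((ai - pi) ** 2 for ai, pi in zip(a, p))
    neg = sum((ai - ni) ** 2 for ai, ni in zip(a, n))
    return c * (pos - neg)
```

Under this reading, a positive sample that matches the basic sample while the negative sample differs drives the loss negative, and a small image change value scales the whole term down.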
Optionally, obtaining a fog determination value based on the fog feature vector and the image variation value includes:
obtaining a fog category based on the fog feature vector;
obtaining a fog presence value, which is the product of the fog category value and the image change value;
if the fog presence value is greater than the fog presence threshold, setting the fog judgment value to 1;
if the fog presence value is less than or equal to the fog presence threshold, setting the fog judgment value to 0.
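The judgment rule above is straightforward to encode; the presence threshold of 0.5 below is an illustrative assumption, as the patent does not fix its value:

```python
def fog_judgment(fog_vector, change_value, presence_threshold=0.5):
    # fog_vector[0] is the fog-category score (the first element of the
    # fog feature vector); the threshold 0.5 is an assumed example value.
    fog_presence = fog_vector[0] * change_value  # fog presence value
    return 1 if fog_presence > presence_threshold else 0
```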
In a second aspect, an embodiment of the present invention provides an online highway weather monitoring system, including: an acquisition module: collecting monitoring images from monitoring equipment installed on a highway; the monitoring images comprise a first monitoring image and a second monitoring image;
a detection module: inputting the first monitoring image and the second monitoring image into a fog detection model to obtain a fog detection condition; the fog detection condition comprises a fog feature vector and an image change value; the image change value represents the degree of change of the fog in the second monitoring image compared with the first monitoring image; the fog feature vector comprises a first element and a plurality of second elements, the first element represents the fog category, and the second elements represent the fog degree;
a judging module: obtaining a fog judgment value based on the fog characteristic vector and the image change value; if the fog judgment value is 1, the fog is detected, and if the fog judgment value is 0, the fog is not detected;
the fog detection model comprises: a feature extraction convolution network, an image change value calculation layer and two full connection layers. The inputs of the feature extraction convolution network are the first monitoring image and the second monitoring image, and its outputs are a first feature map and a second feature map. The inputs of the first full connection layer are the first feature map and the second feature map, and its outputs are a first feature vector and a second feature vector. The inputs of the image change value calculation layer are the first feature vector and the second feature vector, and its output is the image change value. The input of the second full connection layer is the second feature map, and the output of the second full connection layer is the fog feature vector.
Optionally, the training method of the fog detection model includes:
obtaining a training set, wherein the training set comprises a plurality of training pictures and labeling data; the plurality of training pictures comprise basic sample images, positive sample images and negative sample images; each positive sample image is an image captured within 2 s after the corresponding basic sample image; the basic sample images are images containing fog; the negative sample images are pictures without fog whose background is blurred by other causes;
preprocessing the basic sample images, the positive sample images and the negative sample images in the fog detection training set to obtain preprocessed fog images; the preprocessed fog images comprise a basic sample preprocessed fog image, a positive sample preprocessed fog image and a negative sample preprocessed fog image;
inputting the preprocessed fog images into the feature extraction convolution network to obtain fog feature maps; the fog feature maps comprise a basic sample fog feature map, a positive sample fog feature map and a negative sample fog feature map;
obtaining an image change value based on the basic sample fog feature map, the positive sample fog feature map and the first full connection layer;
obtaining fog detection feature vectors based on the basic sample fog feature map, the negative sample fog feature map and the second full connection layer; the fog detection feature vectors comprise a basic sample fog feature detection vector and a negative sample fog feature detection vector; each fog detection feature vector comprises a category and a fog degree, wherein the category is fog or fog-free, and the fog degree is one of fog-free, light fog, dense fog, strong dense fog, extra strong dense fog, extremely strong dense fog and blind fog;
obtaining a fog detection loss value based on the basic sample fog characteristic detection vector, the negative sample fog characteristic detection vector, the image change value and the labeled data;
obtaining the maximum iteration times of the fog detection model training;
and stopping training when the fog detection loss value is less than or equal to the fog threshold value or the training iteration number reaches the maximum iteration number, so as to obtain a trained fog detection model.
Optionally, the inputting of the preprocessed fog images into the feature extraction convolution network to obtain the fog feature maps includes:
inputting the basic sample preprocessed fog image into the feature extraction convolution network to obtain the basic sample fog feature map;
inputting the positive sample preprocessed fog image into the feature extraction convolution network to obtain the positive sample fog feature map;
inputting the negative sample preprocessed fog image into the feature extraction convolution network to obtain the negative sample fog feature map.
Compared with the prior art, the embodiments of the invention achieve the following beneficial effects:
When fog starts to gather, it is recognized by convolution through a video detection method. Training the neural network with positive and negative samples simultaneously improves its fog recognition accuracy. Moreover, because the positive sample is an image captured within 2 s after the basic sample image, the image change value between the positive sample and the basic sample allows images that are merely blurred, rather than foggy, to be recognized more accurately, greatly improving fog recognition accuracy.
Drawings
Fig. 1 is a flowchart of an online highway weather monitoring method according to an embodiment of the present invention.
FIG. 2 is a diagram of a using process of a fog detection model for on-line monitoring of highway weather provided by the embodiment of the invention.
Fig. 3 is a schematic block structure diagram of an electronic device according to an embodiment of the present invention.
The labels in the figure are: a bus 500; a receiver 501; a processor 502; a transmitter 503; a memory 504; a bus interface 505.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present invention provides an online highway weather monitoring method, including:
s101: collecting a monitoring image in monitoring equipment; the monitoring equipment is monitoring equipment on a highway; the monitoring image includes a first monitoring image and a second monitoring image.
S102: inputting the first monitoring image and the second monitoring image into a fog detection model to obtain a fog detection condition; the fog detection condition comprises a fog feature vector and an image change value; the image change value represents the degree of change of the fog in the second monitoring image compared with the first monitoring image; the fog feature vector comprises a first element and a plurality of second elements, the first element represents the fog category, and the second elements represent the fog degree.
S103: obtaining a fog judgment value based on the fog characteristic vector and the image change value; if the fog judgment value is 1, the fog is detected, and if the fog judgment value is 0, the fog is not detected.
The fog detection model comprises: a feature extraction convolution network, an image change value calculation layer and two full connection layers. The inputs of the feature extraction convolution network are the first monitoring image and the second monitoring image, and its outputs are a first feature map and a second feature map. The inputs of the first full connection layer are the first feature map and the second feature map, and its outputs are a first feature vector and a second feature vector. The inputs of the image change value calculation layer are the first feature vector and the second feature vector, and its output is the image change value. The input of the second full connection layer is the second feature map, and the output of the second full connection layer is the fog feature vector.
The online highway weather monitoring process is shown in fig. 2: two pictures are input; a first feature map and a second feature map are obtained through the same feature extraction convolution network; the first feature map and the second feature map each pass through the first full connection layer to give a first feature vector and a second feature vector, from which the image change value is calculated. The second feature map passes through the second full connection layer to give a third feature vector. Whether fog is present is judged from the third feature vector and the image change value.
The fog feature vector comprises a first element and a plurality of second elements; the first element represents the fog category, and the second elements represent the fog degree. In this example the number of second elements is 7, and the value of each second element indicates the likelihood of the corresponding fog degree. For example, the fog feature vector [0.95, 0.1, 0.2, 0.1, 0.97, 0.3, 0.2, 0.1] indicates fog (first element 0.95), and its largest degree element (0.97, the fourth degree element) corresponds in Table 1 to level 4, strong dense fog, with a visibility of 0.1-0.5 kilometers.
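The interpretation of such a vector can be sketched as follows; the level names and visibility ranges follow Table 1 in the description, while the helper itself is illustrative:

```python
FOG_LEVELS = [  # (level, name, visibility in kilometers) per Table 1
    (1, "fog-free", ">=10"),
    (2, "light fog", "1-10"),
    (3, "dense fog", "0.5-1"),
    (4, "strong dense fog", "0.1-0.5"),
    (5, "extra strong dense fog", "0.02-0.1"),
    (6, "extremely strong dense fog", "0.001-0.02"),
    (7, "blind fog", "0-0.001"),
]

def interpret_fog_vector(vec):
    # vec[0] is the fog-category score; vec[1:] are the 7 fog-degree scores.
    category_score = vec[0]
    degrees = vec[1:]
    idx = max(range(len(degrees)), key=lambda i: degrees[i])
    level, name, visibility = FOG_LEVELS[idx]
    return category_score, level, name, visibility
```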
The basic sample image, the positive sample image and the negative sample image are images unaffected by factors other than fog and blur.
By the above method, training the neural network with positive and negative samples simultaneously improves its fog recognition accuracy. Moreover, because the positive sample is an image captured within 2 s after the basic sample image, the image change value between the positive sample and the basic sample allows blurred but fog-free images to be recognized more accurately, greatly improving fog recognition accuracy.
Optionally, the training method of the fog detection model includes:
obtaining a training set, wherein the training set comprises a plurality of training pictures and labeling data; the plurality of training pictures comprise basic sample images, positive sample images and negative sample images. Each positive sample image is an image captured within 2 s after the corresponding basic sample image; the basic sample images are images containing fog; the negative sample images are pictures without fog whose background is blurred by other causes.
The basic sample images, the positive sample images and the negative sample images in the fog detection training set are preprocessed to obtain preprocessed fog images. The preprocessed fog images comprise a basic sample preprocessed fog image, a positive sample preprocessed fog image and a negative sample preprocessed fog image.
The preprocessed fog images are input into the feature extraction convolution network to obtain fog feature maps; the fog feature maps comprise a basic sample fog feature map, a positive sample fog feature map and a negative sample fog feature map.
Obtaining an image change value based on the basic sample fog characteristic diagram, the positive sample fog characteristic diagram and the first full connection layer; an image change value of 1 indicates an image change, and an image change value of 0 indicates no image change.
Obtaining a fog detection characteristic vector based on the basic sample fog characteristic diagram, the negative sample fog characteristic diagram and the second full-connection layer; the fog detection characteristic vector comprises a basic sample fog characteristic detection vector and a negative sample fog characteristic detection vector; the fog detection characteristic vector comprises a category and a fog degree, wherein the category comprises fog and fog-free; the fog degree includes no fog, light fog, dense fog, strong dense fog, extra strong dense fog, extremely strong dense fog and blind fog.
And obtaining a fog detection loss value based on the basic sample fog characteristic detection vector, the negative sample fog characteristic detection vector, the image change value and the labeling data.
And obtaining the maximum iteration times of the fog detection model training.
And stopping training when the fog detection loss value is less than or equal to a fog threshold value or the training iteration number reaches the maximum iteration number, so as to obtain a trained fog detection model.
The fog threshold in this example is 1. The negative samples represent pictures without fog whose background is blurred by other causes, such as blurred pictures and pictures obscured by gray objects. A second positive sample fog picture in the positive sample set is captured within 2 s after the first positive sample fog picture. The labeling data of the training set comprise the fog category and the fog degree corresponding to the output feature vector. The pictures in the training set can be preprocessed by a mixcut method.
The level, name and visibility of each fog degree are shown in Table 1:
TABLE 1
Level  Name                        Visibility (kilometers)
1      Fog-free                    ≥10
2      Light fog                   1-10
3      Dense fog                   0.5-1
4      Strong dense fog            0.1-0.5
5      Extra strong dense fog      0.02-0.1
6      Extremely strong dense fog  0.001-0.02
7      Blind fog                   0-0.001
By the above method, the positive sample image is captured within 2 s after the basic sample image because fog gathers quickly. This factor is added to the convolutional judgment of whether the basic sample and the positive sample contain fog, so that whether blur in a monitored image is caused by fog can be judged more accurately. The model is trained by computing the loss function.
Optionally, the inputting of the preprocessed fog images into the feature extraction convolution network to obtain the fog feature maps includes:
inputting the basic sample preprocessed fog image into the feature extraction convolution network to obtain the basic sample fog feature map;
inputting the positive sample preprocessed fog image into the feature extraction convolution network to obtain the positive sample fog feature map;
inputting the negative sample preprocessed fog image into the feature extraction convolution network to obtain the negative sample fog feature map.
The feature extraction convolution network used in this embodiment is a modified YOLOv3 network, which is used to extract the numerous features of fog.
With this method, because fog features are difficult to extract in isolation, a convolutional method is used to extract and recognize them, so that the required features can be extracted simply.
Optionally, obtaining the image change value based on the basic sample fog feature map, the positive sample fog feature map and the first full connection layer includes:
inputting the basic sample fog feature map into the first full connection layer to obtain the basic sample fog change vector;
inputting the positive sample fog feature map into the first full connection layer to obtain the positive sample fog change vector;
obtaining the fog change similarity value based on the basic sample fog change vector and the positive sample fog change vector.
The fog change similarity value is calculated by the following formula:

dist = \sqrt{ \sum_{i=1}^{m} (x_i - y_i)^2 }

wherein dist is the fog change similarity value; x_i represents the elements of the basic sample fog change vector; y_i represents the elements of the positive sample fog change vector; m represents the common length of the basic sample fog change vector and the positive sample fog change vector; i indexes the i-th element of both vectors and takes integer values from 1 to m.
Wherein the fog change threshold is 1. In this embodiment, the length m of the basic sample fog change vector is 3, and its elements represent features of the fog picture, namely fog texture, fog shape, and fog color.
By the above method, the softmax function in the first fully connected layer constrains the values of the basic sample fog change vector and the positive sample fog change vector to the range 0 to 1, which facilitates the subsequent calculation.
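Assuming the first fully connected layer ends in a softmax as described, the similarity computation above can be sketched as follows (the raw vector values are made up for illustration):

```python
import math

def softmax(v):
    """Softmax keeps every element in (0, 1) and the elements summing to 1,
    as the first fully connected layer does for the fog change vectors."""
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def fog_change_similarity(x, y):
    """Euclidean distance between the basic sample and positive sample
    fog change vectors; m = len(x) = len(y) = 3 in the embodiment."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

base = softmax([2.0, 1.0, 0.5])   # e.g. texture, shape, colour responses
pos  = softmax([1.8, 1.1, 0.4])
dist = fog_change_similarity(base, pos)
print(f"dist = {dist:.4f}")
```

Because both vectors lie in (0, 1)^3 after the softmax, the distance is bounded, which is what makes a fixed fog change threshold (1 in this embodiment) meaningful.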
Optionally, obtaining a fog detection feature vector based on the basic sample fog feature map, the negative sample fog feature map, and the second full connection layer includes:
and inputting the basic sample fog characteristic diagram into the second full-connection layer to obtain a basic sample fog detection vector.
And inputting the negative sample fog characteristic diagram into the second full-connection layer to obtain a negative sample fog detection vector.
Optionally, the obtaining of the fog detection loss value based on the basic sample fog feature detection vector, the negative sample fog feature detection vector, the image variation value, and the annotation data includes:
The fog detection loss value is calculated according to the following formula:

Loss = max( c · Σ_{i} (a_i − p_i)^2 − Σ_{i} (a_i − n_i)^2 + margin, 0 )

wherein Loss is the fog detection loss value; c is the image change value; a_i represents an element of the basic sample fog detection vector; p_i represents an element of the positive sample fog detection vector; n_i represents an element of the negative sample fog detection vector; margin is a constant.
By the above method, the features of the basic sample and the positive sample are pulled closer together, while the negative sample is pushed farther from the basic sample. Meanwhile, because the positive sample is a fog-containing monitoring image acquired within 2 s after the basic sample image, the degree of change between the basic sample and the positive sample is incorporated: the smaller the degree of change, the more likely the image is a negative sample; the larger the degree of change, the less likely it is a negative sample.
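A triplet-style loss with exactly this behaviour can be sketched as below. The patent's own formula is an image that is not reproduced here; the squared-distance form, the weighting of the positive term by the image change value c, and the margin value are assumptions consistent with the surrounding description and with claim 6:

```python
def sq_dist(u, v):
    """Squared Euclidean distance between two detection vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def fog_detection_loss(a, p, n, c, margin=0.5):
    """Triplet-style loss: pull the positive sample p toward the basic
    sample a, push the negative sample n away, with the positive term
    weighted by the image change value c. Margin value is an assumption."""
    return max(c * sq_dist(a, p) - sq_dist(a, n) + margin, 0.0)

a = [1.0, 0.0, 0.0]   # basic sample detection vector
p = [1.0, 0.0, 0.0]   # positive sample (identical -> zero positive distance)
n = [0.0, 1.0, 0.0]   # negative sample (far from the basic sample)
print(fog_detection_loss(a, p, n, c=1.0))  # 0.0: negative already far enough
```

When the negative sample drifts toward the basic sample, the hinge activates and the loss grows, which is the gradient signal that separates fog from merely blurred backgrounds during training.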
Optionally, a fog judgment value is obtained based on the fog feature vector and the image change value; a fog judgment value of 1 indicates that fog is detected, and a fog judgment value of 0 indicates that no fog is detected. This includes:
obtaining the fog category based on the fog feature vector;
obtaining a fog existence value, the fog existence value being the product of the fog category and the image change value;
setting the fog judgment value to 1 if the fog existence value is greater than the fog existence threshold;
setting the fog judgment value to 0 if the fog existence value is less than or equal to the fog existence threshold.
Here, the fog existence threshold of the present embodiment is 0.2.
By the above method, only two consecutively acquired monitoring images, taken only 2 s apart, are used in actual detection. A fog category of "fog" only means that the detected picture may contain fog; by further examining the image change value between the two monitoring images, it can be judged more accurately whether the image is truly a fog image or merely a blurred one. Therefore, the judgment compares the fog existence value, i.e. the product of the fog category and the image change value, with the fog existence threshold.
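The decision rule just described reduces to a few lines. The category encoding (1 = fog, 0 = no fog) and the 0.2 threshold follow the embodiment; the function name is illustrative:

```python
def fog_judgment(fog_category, image_change, threshold=0.2):
    """Fog is declared only when the classifier says 'fog' (category 1) AND
    the two frames taken 2 s apart changed enough. Threshold 0.2 follows
    the embodiment; the existence value must strictly exceed it."""
    fog_existence = fog_category * image_change
    return 1 if fog_existence > threshold else 0

print(fog_judgment(1, 0.5))  # 1: fog class and clear inter-frame change
print(fog_judgment(1, 0.1))  # 0: fog-like frame but almost no change (e.g. static blur)
print(fog_judgment(0, 0.9))  # 0: large change, but not classified as fog
```

The product form means either signal alone is insufficient: a static blurry background (category 1, change near 0) and a fast-changing fog-free scene (category 0) are both rejected.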
Example 2
Based on the above on-line monitoring method for highway weather, the embodiment of the invention also provides an on-line monitoring system for highway weather, which comprises an acquisition module, a detection module and a judgment module.
The acquisition module is used for acquiring monitoring images in the monitoring equipment. The monitoring equipment is monitoring equipment on a highway. The monitoring image includes a first monitoring image and a second monitoring image.
Wherein, the detection module is used for the fog condition of detection. And inputting the first monitoring image and the second monitoring image into a fog detection model to obtain a fog detection condition. The fog detection condition comprises a fog characteristic vector and an image change value. The image change value represents a degree of change of the fog in the second monitored image compared to the first monitored image. The fog feature vector comprises a first element and a plurality of second elements, the first elements respectively represent fog categories, and the second elements represent fog degrees.
Wherein, the judging module is used for detecting whether fog exists. Obtaining a fog judgment value based on the fog characteristic vector and the image change value; if the fog judgment value is 1, the fog is detected, and if the fog judgment value is 0, the fog is not detected.
The fog detection model comprises: a feature extraction convolution network, an image change calculation layer and two full connection layers; the input of the feature extraction convolution network is the first monitoring image and the second monitoring image; the output of the feature extraction convolution network is a first feature map and a second feature map; the input of the first full connection layer is the first feature map and the second feature map; the output of the first full connection layer is a first feature vector and a second feature vector; the input of the image change value calculation layer is the first feature vector and the second feature vector; the output of the image change value calculation layer is an image change value; the input of the second full connection layer is the second feature map; the output of the second full connection layer is a fog feature vector.
The specific manner in which the respective modules perform operations has been described in detail in the embodiments related to the method, and will not be elaborated upon here.
The fog detection model comprises: a feature extraction convolution network, an image change calculation layer, two full connection layers and a fog judgment layer; the input of the feature extraction convolution network is the first monitoring image and the second monitoring image; the output of the feature extraction convolution network is a first feature map and a second feature map; the input of the first full connection layer is the first feature map and the second feature map; the output of the first full connection layer is a first feature vector and a second feature vector; the input of the image change value calculation layer is the first feature vector and the second feature vector; the output of the image change value calculation layer is an image change value; the input of the second full connection layer is the second feature map; the output of the second full connection layer is a fog feature vector; the input of the fog judgment layer is the image change value and the fog feature vector, and the output of the fog judgment layer is the fog detection condition.
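The data flow of the model just described can be sketched end to end. The backbone, the layer sizes, and the random weights below are placeholders; only the wiring (shared feature extractor, two fully connected heads, Euclidean image change layer, product-threshold judgment) follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_extract(image):
    """Stand-in for the patent's modified YOLOv3 backbone: average pooling
    over 4x4 patches of a 16x16 image, flattened to a 16-dim feature."""
    h, w = image.shape
    return image.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3)).ravel()

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Randomly initialised fully connected layers (weights are placeholders).
W1 = rng.normal(size=(3, 16))   # first FC head -> 3-element change vector
W2 = rng.normal(size=(8, 16))   # second FC head -> 8-element fog feature vector

def fog_detection_model(img1, img2, threshold=0.2):
    f1, f2 = feature_extract(img1), feature_extract(img2)   # shared backbone
    v1, v2 = softmax(W1 @ f1), softmax(W1 @ f2)             # first FC layer
    change = float(np.sqrt(((v1 - v2) ** 2).sum()))         # image change layer
    fog_vec = softmax(W2 @ f2)                              # second FC layer
    fog_category = int(np.argmax(fog_vec) != 0)             # slot 0 = "no fog" (assumed)
    judgment = 1 if fog_category * change > threshold else 0  # fog judgment layer
    return fog_vec, change, judgment

img1, img2 = rng.random((16, 16)), rng.random((16, 16))
fog_vec, change, judgment = fog_detection_model(img1, img2)
print(fog_vec.shape, judgment in (0, 1))
```

With trained weights, fog_vec would carry the category element plus the fog degree elements (no fog through blind fog), and change would reflect genuine fog drift between the two frames taken 2 s apart.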
The method comprises the steps of collecting monitoring images in monitoring equipment; the monitoring equipment is monitoring equipment on a highway; the monitoring image includes a first monitoring image and a second monitoring image.
And inputting the first monitoring image and the second monitoring image into a fog detection model to obtain a fog detection condition.
An embodiment of the present invention further provides an electronic device, as shown in fig. 3, including a memory 504, a processor 502 and a computer program stored on the memory 504 and executable on the processor 502, wherein the processor 502 implements the steps of any one of the methods of the on-line highway weather monitoring method described above when executing the program.
Wherein in fig. 3 a bus architecture (represented by bus 500) is shown, the bus 500 can include any number of interconnected buses and bridges, the bus 500 linking together various circuits including one or more processors, represented by processor 502, and memory, represented by memory 504. The bus 500 may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface 505 provides an interface between the bus 500 and the receiver 501 and transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e., a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing the bus 500 and general processing, and the memory 504 may be used for storing data used by the processor 502 in performing operations.
Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any one of the above-mentioned online highway weather monitoring methods.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, or provided on a carrier signal, or provided in any other form.
The above description is only for the purpose of illustrating preferred embodiments of the present invention and is not to be construed as limiting the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. An online monitoring method for highway weather is characterized by comprising the following steps:
collecting a monitoring image in monitoring equipment; the monitoring equipment is monitoring equipment on a highway; the monitoring image comprises a first monitoring image and a second monitoring image; two continuously obtained monitoring images are adopted, and the time for obtaining the monitoring images is only 2s apart;
inputting the first monitoring image and the second monitoring image into a fog detection model to obtain a fog detection condition; the fog detection condition comprises a fog characteristic vector and an image change value; the image change value represents a degree of change of the fog in the second monitoring image compared with the first monitoring image; the fog characteristic vector comprises a first element and a plurality of second elements, the first elements respectively represent fog categories, and the second elements represent fog degrees;
obtaining a fog judgment value based on the fog characteristic vector and the image change value; if the fog judgment value is 1, the fog is detected, and if the fog judgment value is 0, the fog is not detected;
the obtaining of the fog judgment value based on the fog characteristic vector and the image change value includes:
obtaining a fog category based on the fog feature vector;
obtaining a mist presence value; the fog existence value is the product of the fog category and the image change value;
if the fog existing value is larger than the fog existing threshold value, the fog judgment value is set as 1;
if the fog existing value is less than or equal to the fog existing threshold value, the fog judgment value is set as 0;
the fog detection model comprises: a feature extraction convolution network, an image change calculation layer and two full connection layers; inputting the characteristic extraction convolution network into a first monitoring image and a second monitoring image; the output of the feature extraction convolution network is a first feature map and a second feature map; the input of the first full connection layer is the first characteristic diagram and the second characteristic diagram; the output of the first full-connection layer is a first eigenvector and a second eigenvector; the input of the image change value calculation layer is the first feature vector and the second feature vector; the output of the image change value calculation layer is an image change value; the input of the second full connection layer is the second characteristic diagram; and the output of the second full-connection layer is a fog characteristic vector.
2. The online highway meteorological monitoring method according to claim 1, wherein the training method of the fog detection model comprises the following steps:
obtaining a training set, wherein the training set comprises a plurality of training pictures and labeled data; the plurality of training pictures comprise basic sample images, positive sample images and negative sample images; the positive sample image is an image obtained within 2s after the basic sample image is obtained; the base sample image represents a fogged image; the negative sample image represents a picture without fog but with the background obscured for other reasons;
preprocessing the basic sample image, the positive sample image and the negative sample image in the training set to obtain a preprocessed smoke image; the pretreatment smoke map comprises a basic sample pretreatment smoke map, a positive sample pretreatment smoke map and a negative sample pretreatment smoke map;
inputting the preprocessed smoke diagram into the feature extraction convolution network to obtain a smoke feature diagram; the fog characteristic diagram comprises a basic sample fog characteristic diagram, a positive sample fog characteristic diagram and a negative sample fog characteristic diagram;
obtaining an image change value based on the basic sample fog characteristic diagram, the positive sample fog characteristic diagram and the first full connection layer;
obtaining a fog detection characteristic vector based on the basic sample fog characteristic diagram, the negative sample fog characteristic diagram and the second full-connection layer; the fog detection characteristic vector comprises a basic sample fog characteristic detection vector and a negative sample fog characteristic detection vector; the fog detection feature vector comprises a category and a fog degree, wherein the category comprises fog and fog-free; the fog degree comprises no fog, light fog, dense fog, strong dense fog, extra strong dense fog, extremely strong dense fog and blind fog;
obtaining a fog detection loss value based on the basic sample fog characteristic detection vector, the negative sample fog characteristic detection vector, the image change value and the labeling data;
obtaining the maximum iteration times of the fog detection model training;
and stopping training when the fog detection loss value is less than or equal to a fog threshold value or the training iteration number reaches the maximum iteration number, so as to obtain a trained fog detection model.
3. The on-line highway meteorological monitoring method according to claim 2, wherein the inputting the preprocessed smoke map into the feature extraction convolutional network to obtain a smoke feature map comprises: inputting the basic sample pretreatment smoke image in the pretreatment smoke image into a feature extraction convolution network to obtain a basic sample fog feature image;
inputting the positive sample pretreatment smoke image in the pretreatment smoke image into a feature extraction convolution network to obtain a positive sample fog feature image;
inputting the negative sample pretreatment smoke image in the pretreatment smoke image into a feature extraction convolution network to obtain a negative sample fog feature image.
4. The on-line highway meteorological monitoring method according to claim 2, wherein the obtaining an image change value based on the basic sample fog characteristic map, the positive sample fog characteristic map and the first full connection layer comprises:
inputting the basic sample fog characteristic diagram into a first full-connection layer to obtain a basic sample fog change vector;
inputting the positive sample fog characteristic diagram into a first full-connection layer to obtain a positive sample fog change vector;
obtaining a fog change similarity value based on the basic sample fog change vector and the positive sample fog change vector;
the fog change similarity value is calculated according to the following formula:
dist = sqrt( Σ_{i=1}^{m} (x_i − y_i)^2 )
wherein dist is the fog change similarity value; x_i represents an element of the basic sample fog change vector; y_i represents an element of the positive sample fog change vector; m represents the length of the basic sample fog change vector and the positive sample fog change vector; i represents the i-th element of the two vectors and is an integer between 1 and m.
5. The on-line highway meteorological monitoring method according to claim 2, wherein the obtaining of the fog detection feature vector based on the basic sample fog feature map, the negative sample fog feature map and the second full connection layer comprises:
inputting the basic sample fog characteristic diagram into the second full-connection layer to obtain a basic sample fog detection vector;
and inputting the negative sample fog characteristic diagram into the second full-connection layer to obtain a negative sample fog detection vector.
6. The on-line highway weather monitoring method according to claim 2, wherein the obtaining of the fog detection loss value based on the basic sample fog characteristic detection vector, the negative sample fog characteristic detection vector, the image change value and the labeled data comprises:
the fog detection loss value is calculated according to the following formula:
Loss = max( c · Σ_{i} (a_i − p_i)^2 − Σ_{i} (a_i − n_i)^2 + margin, 0 )
wherein Loss is the fog detection loss value; c is the image change value; a_i represents an element of the basic sample fog feature detection vector; p_i represents the annotation data; n_i represents an element of the negative sample fog feature detection vector; margin is a constant.
7. An online monitoring system for highway weather, characterized by comprising:
an acquisition module: collecting a monitoring image in monitoring equipment; the monitoring equipment is monitoring equipment on a highway; the monitoring image comprises a first monitoring image and a second monitoring image; two continuously obtained monitoring images are adopted, and the time for obtaining the monitoring images is only 2s apart;
a detection module: inputting the first monitoring image and the second monitoring image into a fog detection model to obtain a fog detection condition; the fog detection condition comprises a fog characteristic vector and an image change value; the image change value represents a degree of change of the fog in the second monitoring image compared with the first monitoring image; the fog characteristic vector comprises a first element and a plurality of second elements, the first elements respectively represent fog types, and the second elements represent fog degrees;
a judging module: obtaining a fog judgment value based on the fog characteristic vector and the image change value; if the fog judgment value is 1, the fog is detected, and if the fog judgment value is 0, the fog is not detected; the obtaining of the fog judgment value based on the fog characteristic vector and the image change value includes:
obtaining a fog category based on the fog feature vector;
obtaining a mist existence value; the fog existence value is the product of the fog category and the image change value;
if the fog existing value is larger than the fog existing threshold value, the fog judgment value is set as 1;
if the fog existing value is less than or equal to the fog existing threshold value, the fog judgment value is set as 0;
the fog detection model comprises: a feature extraction convolution network, an image change calculation layer and two full connection layers; the input of the feature extraction convolution network is the first monitoring image and the second monitoring image; the output of the feature extraction convolution network is a first feature map and a second feature map; the input of the first full connection layer is the first feature map and the second feature map; the output of the first full connection layer is a first feature vector and a second feature vector; the input of the image change value calculation layer is the first feature vector and the second feature vector; the output of the image change value calculation layer is an image change value; the input of the second full connection layer is the second feature map; the output of the second full connection layer is a fog feature vector.
8. The on-line highway weather monitoring system according to claim 7, wherein the training method of the fog detection model comprises the following steps:
obtaining a training set, wherein the training set comprises a plurality of training pictures and labeled data; the plurality of training pictures comprise basic sample images, positive sample images and negative sample images; the positive sample image is an image obtained within 2s after the basic sample image is obtained; the base sample image represents a fogged image; the negative sample image represents a picture without fog but with the background obscured for other reasons;
preprocessing the basic sample image, the positive sample image and the negative sample image in the fog detection training set to obtain a preprocessed smoke image; the pre-treatment smoke map comprises a basic sample pre-treatment smoke map, a positive sample pre-treatment smoke map and a negative sample pre-treatment smoke map;
inputting the preprocessed smoke diagram into the feature extraction convolution network to obtain a smoke feature diagram; the fog characteristic diagram comprises a basic sample fog characteristic diagram, a positive sample fog characteristic diagram and a negative sample fog characteristic diagram;
obtaining an image change value based on the basic sample fog characteristic diagram, the positive sample fog characteristic diagram and the first full connection layer;
obtaining a fog detection characteristic vector based on the basic sample fog characteristic diagram, the negative sample fog characteristic diagram and the second full-connection layer; the fog detection characteristic vector comprises a basic sample fog characteristic detection vector and a negative sample fog characteristic detection vector; the fog detection characteristic vector comprises a category and a fog degree, wherein the category comprises fog and fog-free; the fog degree comprises no fog, light fog, dense fog, strong dense fog, extra strong dense fog, extremely strong dense fog and blind fog;
obtaining a fog detection loss value based on the basic sample fog characteristic detection vector, the negative sample fog characteristic detection vector, the image change value and the labeled data;
obtaining the maximum iteration times of the fog detection model training;
and stopping training when the fog detection loss value is less than or equal to the fog threshold value or the training iteration number reaches the maximum iteration number, so as to obtain a trained fog detection model.
9. The on-line highway meteorological monitoring system according to claim 8, wherein the inputting the preprocessed smoke map into the feature extraction convolutional network to obtain a smoke feature map comprises:
inputting the basic sample pretreatment smoke image in the pretreatment smoke image into a feature extraction convolution network to obtain a basic sample fog feature image;
inputting a positive sample pretreatment smoke image in the pretreatment smoke image into a feature extraction convolution network to obtain a positive sample fog feature image;
inputting the negative sample pretreatment smoke image in the pretreatment smoke image into a feature extraction convolution network to obtain a negative sample fog feature image.
CN202210890668.2A 2022-07-27 2022-07-27 On-line monitoring method and system for highway weather Active CN114973110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210890668.2A CN114973110B (en) 2022-07-27 2022-07-27 On-line monitoring method and system for highway weather

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210890668.2A CN114973110B (en) 2022-07-27 2022-07-27 On-line monitoring method and system for highway weather

Publications (2)

Publication Number Publication Date
CN114973110A CN114973110A (en) 2022-08-30
CN114973110B true CN114973110B (en) 2022-11-01

Family

ID=82969655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210890668.2A Active CN114973110B (en) 2022-07-27 2022-07-27 On-line monitoring method and system for highway weather

Country Status (1)

Country Link
CN (1) CN114973110B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751744A (en) * 2008-12-10 2010-06-23 中国科学院自动化研究所 Detection and early warning method of smoke
CN109961070A (en) * 2019-03-22 2019-07-02 国网河北省电力有限公司电力科学研究院 The method of mist body concentration is distinguished in a kind of power transmission line intelligent image monitoring
KR102223991B1 (en) * 2020-06-29 2021-03-08 세종대학교산학협력단 Apparatus for detecting sea fog based on satellite observation in visible and near-infrared bands and method thereof
CN112686105A (en) * 2020-12-18 2021-04-20 云南省交通规划设计研究院有限公司 Fog concentration grade identification method based on video image multi-feature fusion
CN112906463A (en) * 2021-01-15 2021-06-04 上海东普信息科技有限公司 Image-based fire detection method, device, equipment and storage medium
CN113537099A (en) * 2021-07-21 2021-10-22 招商局重庆交通科研设计院有限公司 Dynamic detection method for fire smoke in highway tunnel
CN113642447A (en) * 2021-08-09 2021-11-12 杭州弈胜科技有限公司 Monitoring image vehicle detection method and system based on convolutional neural network cascade
CN114359196A (en) * 2021-12-27 2022-04-15 以萨技术股份有限公司 Fog detection method and system
CN114580541A (en) * 2022-03-07 2022-06-03 郑州轻工业大学 Fire disaster video smoke identification method based on time-space domain double channels

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020100664A1 (en) * 2018-11-13 2020-05-22 ソニー株式会社 Image processing device, image processing method, and program
CN111401246B (en) * 2020-03-17 2024-06-04 广东智媒云图科技股份有限公司 Smoke concentration detection method, device, equipment and storage medium
CN114359733A (en) * 2022-01-06 2022-04-15 盛视科技股份有限公司 Vision-based smoke fire detection method and system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Sea fog detection based on unsupervised domain adaptation"; Mengqiu Xu et al.; Chinese Journal of Aeronautics; 2022-04-30; Vol. 35, No. 4; pp. 415-425 *
"FPGA-based real-time image dehazing system" (in Chinese); Chen Long et al.; Journal of Chengdu University of Information Technology; 2021-04-15; Vol. 36, No. 2; pp. 138-142 *
"Research on key technologies for identifying suspected illegal buildings under fixed-point surveillance" (in Chinese); Chen Xiaofeng; China Master's Theses Full-text Database, Engineering Science and Technology II; 2022-01-15; No. 01; p. C038-299 *
"Research on short-term prediction methods for rain and fog weather on expressways" (in Chinese); Jia Jun; China Master's Theses Full-text Database, Engineering Science and Technology I; 2022-03-15; No. 03; p. B026-205 *

Also Published As

Publication number Publication date
CN114973110A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN111161315B (en) Multi-target tracking method and system based on graph neural network
CN110705405A (en) Target labeling method and device
CN112668164A (en) Transformer fault diagnosis method and system for inducing ordered weighted evidence reasoning
CN110807924A (en) Multi-parameter fusion method and system based on full-scale full-sample real-time traffic data
CN110084289B (en) Image annotation method and device, electronic equipment and storage medium
CN112329623A (en) Early warning method for visibility detection and visibility safety grade division in foggy days
CN110910440B (en) Power transmission line length determination method and system based on power image data
CN112287896A (en) Unmanned aerial vehicle aerial image target detection method and system based on deep learning
CN113989257A (en) Electric power comprehensive pipe gallery settlement crack identification method based on artificial intelligence technology
CN116385958A (en) Edge intelligent detection method for power grid inspection and monitoring
CN114973110B (en) On-line monitoring method and system for highway weather
CN112802011A (en) Fan blade defect detection method based on VGG-BLS
CN112116561B (en) Power grid transmission line detection method and device based on image processing fusion network weight
CN114140751A (en) Examination room monitoring method and system
CN112133100B (en) Vehicle detection method based on R-CNN
CN114743048A (en) Method and device for detecting abnormal straw picture
CN114882469A (en) Traffic sign detection method and system based on DL-SSD model
CN113673589A (en) Label selection self-adaptive increment detection method and system based on frame distance measurement
CN113537463A (en) Countermeasure sample defense method and device based on data disturbance
CN111611872A (en) Novel binocular vision vehicle detection method and system
US20230260257A1 (en) Iterative refinement of annotated datasets
CN116468205B (en) Method and system for monitoring environment-friendly detection quality of motor vehicle
CN113449674B (en) Pig face identification method and system
CN115359293A (en) Auxiliary wind-blown sand treatment method and system and electronic equipment
CN112163573A (en) Insulator identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant